From wbaxter at gmail.com Mon May 1 00:20:03 2006
From: wbaxter at gmail.com (Bill Baxter)
Date: Mon May 1 00:20:03 2006
Subject: [Numpy-discussion] Some questions about dot()
Message-ID:

Some questions about numpy.dot()

1) Is there a reason that dot() doesn't transpose the first argument? As far as I know, that flies in the face of the mathematical definition of what a dot product is.

2) From the docstring:

    dot(...)
        matrixproduct(a,b)
        Returns the dot product of a and b for arrays of floating point types.
        Like the generic numpy equivalent the product sum is over
        the last dimension of a and the second-to-last dimension of b.
        NB: The first argument is not conjugated.

2a) What is matrixproduct(a,b)? I don't see such a function in numpy.

2b) What is this "generic numpy equivalent" vaguely referred to?

Seems like it would make more sense to have dot() follow the mathematical convention of a.T * b, and have a separate function, like mult() or matrixmult(), do what dot() does currently. Is there historical baggage of some kind here preventing that? Or maybe there's a different definition of dot product from another branch of mathematics that I'm not familiar with?

Thanks,
--Bill

From gruben at bigpond.net.au Mon May 1 02:16:11 2006
From: gruben at bigpond.net.au (Gary Ruben)
Date: Mon May 1 02:16:11 2006
Subject: [Numpy-discussion] Some questions about dot()
In-Reply-To:
References:
Message-ID: <4455D188.7030505@bigpond.net.au>

Hi Bill,

It looks to me like dot() is doing the right thing. Can you post an example of why you think it's wrong?

2a) re. the docstring - this looks like a 'bug'; presumably an old docstring not correctly updated.

2b) "generic numpy equivalent" - agree that this isn't very enlightening.

Gary R.

Bill Baxter wrote:
> Some questions about numpy.dot()
>
> 1) Is there a reason that dot() doesn't transpose the first argument?
> As far as I know, that flies in the face of the mathematical definition > of what a dot product is. > > 2) From the docstring: > dot(...) > matrixproduct(a,b) > Returns the dot product of a and b for arrays of floating point types. > Like the generic numpy equivalent the product sum is over > the last dimension of a and the second-to-last dimension of b. > NB: The first argument is not conjugated. > > 2a) What is matrixproduct(a,b)? I don't see such a function in numpy > > 2b) What is this "generic numpy equivalent" vaguely referred to? > > > Seems like it would make more sense to have dot() follow the > mathematical convention of a.T * b, and have a separate function, like > mult() or matrixmult(), do what dot() does currently. Is there > historical baggage of some kind here preventing that? Or some maybe > there's a different definition of dot product from another branch of > mathematics that I'm not familiar with? > > Thanks, > --Bill From wbaxter at gmail.com Mon May 1 04:49:15 2006 From: wbaxter at gmail.com (Bill Baxter) Date: Mon May 1 04:49:15 2006 Subject: [Numpy-discussion] Some questions about dot() In-Reply-To: References: <4455D188.7030505@bigpond.net.au> Message-ID: Hi Gary, On 5/1/06, Gary Ruben wrote: > > Hi Bill, > > It looks to me like dot() is doing the right thing. Can you post an > example of why you think it's wrong? It /is/ behaving as documented, if that's what you mean. But the question is why it acts that way. Simple example: >>> numpy.__version__, os.name ('0.9.5', 'nt') >>> a = numpy.asmatrix([1.,2.,3.]).T >>> a matrix([[ 1.], [ 2.], [ 3.]]) >>> numpy.dot(a,a) Traceback (most recent call last): File "", line 1, in ? ValueError: matrices are not aligned >>> numpy.dot(a.T,a) matrix([[ 14.]]) Everywhere I've ever encountered a dot product before it's been equivalent to the transpose of A times B. So a 'dot()' function that acts exactly like a matrix multiply is a bit surprising to me. 
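(An editorial aside: the shape behaviour under discussion condenses into a few lines. This is a sketch using plain arrays rather than the matrix class, run against a current NumPy, so the exact error message may differ from the 0.9.5 session quoted above.)

```python
import numpy as np

a = np.array([[1.0], [2.0], [3.0]])  # a 3x1 "column vector"

# dot() follows matrix-multiplication shape rules, so (3,1) x (3,1) fails:
try:
    np.dot(a, a)
except ValueError:
    pass  # reported as "matrices are not aligned" in NumPy 0.9.x

# Transposing the first argument by hand gives the familiar scalar-valued form:
print(np.dot(a.T, a))  # a 1x1 array holding 14.0

# For genuine 1-D arrays, dot() already is the conventional dot product:
v = np.array([1.0, 2.0, 3.0])
print(np.dot(v, v))  # 14.0
```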
After poking around some more I found numpy.vdot() which is apparently supposed to do the standard "vector" dot product. However, all I get from that is: >>> a matrix([[ 1.], [ 2.], [ 3.]]) >>> numpy.vdot(a,a) Traceback (most recent call last): File "", line 1, in ? ValueError: vectors have different lengths >>> Also in the same numpy.core._dotblas module as dot and vdot, there's an 'inner', which claims to be an inner product, but seems to only work when called with both arguments transposed as follows: >>> numpy.inner(a.T, a.T) array([[ 14.]]) 2a) re. the docstring - this looks like a 'bug'; presumably an old > docstring not correctly updated. I think maybe 'matrixproduct' is supposed to be 'matrixmultiply' which /is/ a synonym for dot. 2b) "generic numpy equivalent" - agree that this isn't very enlightening. -- William V. Baxter III OLM Digital Kono Dens Building Rm 302 1-8-8 Wakabayashi Setagaya-ku Tokyo, Japan 154-0023 +81 (3) 3422-3380 -------------- next part -------------- An HTML attachment was scrubbed... URL: From aisaac at american.edu Mon May 1 05:26:01 2006 From: aisaac at american.edu (Alan G Isaac) Date: Mon May 1 05:26:01 2006 Subject: [Numpy-discussion] Some questions about dot() In-Reply-To: References: Message-ID: On Mon, 1 May 2006, Bill Baxter apparently wrote: > Seems like it would make more sense to have dot() follow > the mathematical convention of a.T * b, and have > a separate function, like mult() or matrixmult(), do what > dot() does currently. Is there historical baggage of some > kind here preventing that? Or some maybe there's > a different definition of dot product from another branch > of mathematics that I'm not familiar with? Historically, 'dot' was essentially an alias for 'multiarray.matrixproduct' in Numeric. This is a long standing use that I would not expect to change. (But I am just a user.) I believe you have found a documentation bug, as matrixproduct either no longer exists or is well hidden. 
On the more general point ... Can you point to a definition that matches your proposed use? The most common definition I know for 'dot' is between vectors, which do not "transpose". In numpy this is 'vdot', which returns a scalar product. The production of a dot product between two column vectors in a linear algebra context by transposing and then matrix multiplying is, I believe, a convenience rather than a definition of any sort. If we care about the details, the result is also not a scalar.

Cheers,
Alan Isaac

From p.barbier-de-reuille at uea.ac.uk Mon May 1 07:34:06 2006
From: p.barbier-de-reuille at uea.ac.uk (Pierre Barbier de Reuille)
Date: Mon May 1 07:34:06 2006
Subject: [Numpy-discussion] Bug in ndarray.argmax
Message-ID: <44561C11.3000601@cmp.uea.ac.uk>

Hello,

I noticed a bug in ndarray.argmax which prevents getting the argmax along any axis but the last one. I attach a patch to correct this. Also, here is a small Python script to test the argmax behaviour I implemented:

==8<====8<====8<====8<====8<====8<====8<====8<===8<===
from numpy import array, random, all

a = random.normal( 0, 1, ( 4,5,6,7,8 ) )
for i in xrange( a.ndim ):
    amax = a.max( i )
    aargmax = a.argmax( i )
    axes = range( a.ndim )
    axes.remove( i )
    assert all( amax == aargmax.choose( *a.transpose( i, *axes ) ) )
==8<====8<====8<====8<====8<====8<====8<====8<===8<===

Pierre

-------------- next part --------------
An embedded and charset-unspecified text was scrubbed...
Name: swapback.patch
URL:

From gruben at bigpond.net.au Mon May 1 07:39:10 2006
From: gruben at bigpond.net.au (Gary Ruben)
Date: Mon May 1 07:39:10 2006
Subject: [Numpy-discussion] Some questions about dot()
In-Reply-To:
References: <4455D188.7030505@bigpond.net.au>
Message-ID: <44561D5E.7020905@bigpond.net.au>

Hi Bill,

I see what you mean. It looks to me like the functionality hasn't been extended to the matrix type in a sensible way, but I can see issues with this.
I would expect dot() to be defined for a rank-1 array or vector object, but not necessarily for a rank-2 array or matrix. Linear algebra texts tend to hide the issue of the rank of the object being transposed when they equate the dot product with a.T*b (or a*b.T). It could be argued that if dot takes arguments which are matrix objects of shape (n,1), it could sensibly consider them to be 'column vectors', and should return a scalar. However, these are rank-2 arrays and I think Travis explicitly wrote the dot functionality for rank-2 arrays to support taking multiple rank-1 dot products simultaneously. I don't know if there are other issues around changing the dot() behaviour to act differently for matrix objects and array objects, but I suspect that's what is required. Gary R. Bill Baxter wrote: > Hi Gary, > > On 5/1/06, *Gary Ruben* < gruben at bigpond.net.au > > wrote: > > Hi Bill, > > It looks to me like dot() is doing the right thing. Can you post an > example of why you think it's wrong? > > > It /is/ behaving as documented, if that's what you mean. But the > question is why it acts that way. > Simple example: > >>> numpy.__version__, os.name > ('0.9.5', 'nt') > >>> a = numpy.asmatrix([1.,2.,3.]).T > >>> a > matrix([[ 1.], > [ 2.], > [ 3.]]) > >>> numpy.dot(a,a) > Traceback (most recent call last): > File "", line 1, in ? > ValueError: matrices are not aligned > >>> numpy.dot(a.T,a) > matrix([[ 14.]]) > > Everywhere I've ever encountered a dot product before it's been > equivalent to the transpose of A times B. So a 'dot()' function that > acts exactly like a matrix multiply is a bit surprising to me. > > After poking around some more I found numpy.vdot() which is apparently > supposed to do the standard "vector" dot product. However, all I get > from that is: > >>> a > matrix([[ 1.], > [ 2.], > [ 3.]]) > >>> numpy.vdot(a,a) > Traceback (most recent call last): > File "", line 1, in ? 
> ValueError: vectors have different lengths > >>> > > Also in the same numpy.core._dotblas module as dot and vdot, there's an > 'inner', which claims to be an inner product, but seems to only work > when called with both arguments transposed as follows: > >>> numpy.inner(a.T, a.T) > array([[ 14.]]) From fullung at gmail.com Mon May 1 15:12:06 2006 From: fullung at gmail.com (Albert Strasheim) Date: Mon May 1 15:12:06 2006 Subject: [Numpy-discussion] ctypes and NumPy Message-ID: <00d901c66d6c$2d2569c0$0a84a8c0@dsp.sun.ac.za> Hello all I've been working on wrapping a C library for use with NumPy for the past week. After struggling to get it "just right" with SWIG and hand-written C API, I tried ctypes and I was able to do with ctypes in 4 hours what I was unable to do with SWIG or the C API in 5 days (probably mostly due to incompetence on my part ;-)). So there's my ctypes testimonial. I have a few questions regarding using of ctypes with NumPy. I'd appreciate any feedback on better ways of accomplishing what I've done so far. 1. Passing pointers to NumPy data to C functions I would like to pass the data pointer of a NumPy array to a C function via ctypes. Currently I'm doing the following in C: #ifdef _DEBUG #define DEBUG__ #undef _DEBUG #endif #include "Python.h" #ifdef DEBUG__ #define _DEBUG #undef DEBUG__ #endif typedef struct PyArrayObject { PyObject_HEAD char* data; int nd; void* dimensions; void* strides; void* descr; int flags; void* weakreflist; } PyArrayObject; extern void* PyArray_DATA(PyObject* obj) { return (void*) (((PyArrayObject*)(obj))->data); } First some notes regarding the code above. The preprocessor goop is there to turn off the _DEBUG define, if any, to prevent Python from trying to link against its debug library (python24_d) on Windows, even when you do a debug build of your own code. Including arrayobject.h seems to introduce some Python library symbols, so that's why I also had to extract the definition of PyArrayObject from arrayobject.h. 
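(An editorial aside: the data pointer that the C shim above digs out is also visible from pure Python through the array interface, with no C shim at all. This sketch assumes only `ctypes` and NumPy's `__array_interface__` attribute; the name `c_arraydata` is reused here as a stand-in for the helper defined later in the message.)

```python
import ctypes
import numpy as np

def c_arraydata(a, t):
    """Return a's data pointer as a ctypes POINTER(t), via the array interface."""
    addr = a.__array_interface__['data'][0]  # integer address of the data buffer
    return ctypes.cast(ctypes.c_void_p(addr), ctypes.POINTER(t))

a = np.array([1.0, 2.0, 3.0])
p = c_arraydata(a, ctypes.c_double)
print(p[0], p[2])  # 1.0 3.0
```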
Now I can build my code in debug or release mode without having to worry about Python. As a companion to this C function that allows me to get the data pointer of a NumPy array I have on the Python side:

def c_arraydata(a, t):
    return cast(foo.PyArray_DATA(a), POINTER(t))

def arraydata_intp(a):
    return N.intp(foo.PyArray_DATA(a))

I use c_arraydata to cast a NumPy array for wrapped functions expecting something like a double*. I use arraydata_intp when I need to deal with something like a double**. I make the double** buffer as an array of intp and then assign each element to point to an array of 'f8', the address of which I get from arraydata_intp.

The reason I'm jumping through all these hoops is so that I can get at the data pointer of a NumPy array. ndarray.data is a Python buffer object and I didn't manage to find any other way to obtain this pointer. If it doesn't exist already, it would be very useful if NumPy arrays exposed a way to get this information, by calling something like ndarray.dataptr or ndarray.dataintp. Once this is possible, there could be more integration with ctypes. See item 3.

2. Checking struct alignment

With the following ctypes struct:

class svm_node(Structure):
    _fields_ = [
        ('index', c_int),
        ('value', c_double)
    ]

I can do:

print svm_node.index.offset
print svm_node.index.size
print svm_node.value.offset
print svm_node.value.size

which prints out: 0, 4, 8, 8 on my system. The corresponding array description is:

In [58]: dtype({'names' : ['index', 'value'], 'formats' : [intc, 'f8']}, align=1)
Out[58]: dtype([('index', '<i4'), ('value', '<f8')])

In [48]: descr['index'].alignment
Out[48]: 4

In [49]: descr['index'].itemsize
Out[49]: 4

However, there doesn't seem to be an equivalent in the array description to the offset parameter that ctypes structs have. Is there a way to get this information? It would be useful to have it, since then one could make sure that the NumPy array and the ctypes struct line up in memory.

3.
Further integration with ctypes

From the ctypes tutorial:

"""You can also customize ctypes argument conversion to allow instances of your own classes be used as function arguments. ctypes looks for an _as_parameter_ attribute and uses this as the function argument. Of course, it must be one of integer, string, or unicode: ... If you don't want to store the instance's data in the _as_parameter_ instance variable, you could define a property which makes the data available."""

If I understand correctly, you could also accomplish the same thing by implementing the from_param class method. I don't think it's well defined what _as_parameter_ (or from_param) should do for an arbitrary NumPy array, so there are a few options.

1. Allow the user to add _as_parameter_ or from_param to an ndarray instance. I don't know if this is possible at all (it doesn't seem to work at the moment because ndarray is a "built-in" type).

2. Allow _as_parameter_ to be a property with the user being able to specify the get method at construction time (or allow the user to specify the from_param method). For svm_node I might do something like:

def svm_node_as_parameter(self):
    return cast(self.dataptr, POINTER(svm_node))

svm_node_descr = \
    dtype({'names' : ['index', 'value'], 'formats' : [N.intc, 'f8']}, align=1)

node = array([...], dtype=svm_node_descr,
             ctypes_as_parameter=svm_node_as_parameter)

3. As a next step, provide defaults for _as_parameter_ where possible. The scalar types could set it to the corresponding ctype (or None if ctypes can't be imported). Arrays with "basic" data, such as 'f8' and friends could set up a property that calls ctypes.cast(self.dataptr, POINTER(corresponding_ctype)).

Thanks for reading. :-) Comments would be appreciated. If some of my suggestions seem implementation-worthy, I'd be willing to try to implement them.
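(An editorial note: later NumPy releases answer the offset question from item 2 directly, since `dtype.fields` maps each field name to a `(dtype, byte offset)` pair. A sketch of the layout check, assuming a NumPy new enough to expose `dtype.fields`:)

```python
import ctypes
import numpy as np

class svm_node(ctypes.Structure):
    _fields_ = [('index', ctypes.c_int),
                ('value', ctypes.c_double)]

node_descr = np.dtype({'names': ['index', 'value'],
                       'formats': [np.intc, 'f8']}, align=True)

# Verify the ctypes struct and the aligned dtype agree byte-for-byte:
for name, _ in svm_node._fields_:
    cfield = getattr(svm_node, name)          # has .offset and .size
    ftype, foffset = node_descr.fields[name]  # (field dtype, byte offset)
    assert cfield.offset == foffset
    assert cfield.size == ftype.itemsize
assert ctypes.sizeof(svm_node) == node_descr.itemsize
```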
Regards,
Albert

From Chris.Barker at noaa.gov Mon May 1 17:07:22 2006
From: Chris.Barker at noaa.gov (Christopher Barker)
Date: Mon May 1 17:07:22 2006
Subject: [Numpy-discussion] numpy putmask() method and a scalar.
Message-ID: <4456A284.90805@noaa.gov>

Hi all,

I was just trying to use the putmask() method to replace a bunch of values with the same value and got an error, where the putmask() function works fine:

>>> import numpy as N
>>> a = N.arange(5)
>>> a
array([0, 1, 2, 3, 4])
>>> a.putmask(a > 3, 3)
Traceback (most recent call last):
  File "<stdin>", line 1, in ?
ValueError: putmask: mask and data must be the same size
>>> N.putmask(a, a > 3, 3)
>>> a
array([0, 1, 2, 3, 3])

and:

>>> help (N.ndarray.putmask)
putmask(...)
    a.putmask(values, mask) sets a.flat[n] = v[n] for each n where
    mask.flat[n] is TRUE. v can be scalar.

indicates that it should work.

-Chris

--
Christopher Barker, Ph.D.
Oceanographer

NOAA/OR&R/HAZMAT (206) 526-6959 voice
7600 Sand Point Way NE (206) 526-6329 fax
Seattle, WA 98115 (206) 526-6317 main reception

Chris.Barker at noaa.gov

From ndarray at mac.com Mon May 1 19:07:02 2006
From: ndarray at mac.com (Sasha)
Date: Mon May 1 19:07:02 2006
Subject: [Numpy-discussion] numpy putmask() method and a scalar.
In-Reply-To: <4456A284.90805@noaa.gov>
References: <4456A284.90805@noaa.gov>
Message-ID:

Do we really need the putmask method? With fancy indexing the obvious way to do it is

>>> a[a>3] = 3

and it works fine. Maybe we can drop the putmask method before 1.0 instead of fixing the bug.

On 5/1/06, Christopher Barker wrote:
> Hi all,
>
> I was just trying to use the putmask() method to replace a bunch of
> values with the same value and got an error, where the putmask()
> function works fine:
>
> >>> import numpy as N
> >>> a = N.arange(5)
> >>> a
> array([0, 1, 2, 3, 4])
> >>> a.putmask(a > 3, 3)
> Traceback (most recent call last):
>   File "<stdin>", line 1, in ?
> ValueError: putmask: mask and data must be the same size
>
> >>> N.putmask(a, a > 3, 3)
> >>> a
> array([0, 1, 2, 3, 3])
>
> and:
>
> >>> help (N.ndarray.putmask)
> putmask(...)
>     a.putmask(values, mask) sets a.flat[n] = v[n] for each n where
>     mask.flat[n] is TRUE. v can be scalar.
>
> indicates that it should work.
>
> -Chris

From jdhunter at ace.bsd.uchicago.edu Mon May 1 20:43:04 2006
From: jdhunter at ace.bsd.uchicago.edu (John Hunter)
Date: Mon May 1 20:43:04 2006
Subject: [Numpy-discussion] numpy putmask() method and a scalar.
In-Reply-To: (Sasha's message of "Mon, 1 May 2006 22:06:15 -0400")
References: <4456A284.90805@noaa.gov>
Message-ID: <87wtd5gkq9.fsf@peds-pc311.bsd.uchicago.edu>

>>>>> "Sasha" == Sasha writes:

    Sasha> Do we really need the putmask method? With fancy indexing
    Sasha> the obvious way to do it is
    >>>> a[a>3] = 3
    Sasha> and it works fine. Maybe we can drop putmask method before
    Sasha> 1.0 instead of fixing the bug.

I'm +1 for backwards compatibility with the Numeric API where the cost of a fix isn't too painful. The way to speed numpy adoption is to make old Numeric code "just work" where possible with the new API.
Bear in mind that many people may not even begin the port attempt until well after numpy 1.0.

JDH

From andorxor at gmx.de Tue May 2 01:30:08 2006
From: andorxor at gmx.de (Stephan Tolksdorf)
Date: Tue May 2 01:30:08 2006
Subject: [Numpy-discussion] Why cblas not blas?
Message-ID: <4457165D.8050508@gmx.de>

Hi

While I'm just trying to get Atlas compiled on my Win system (so far without success) I'm wondering why NumPy is using the cblas interface to BLAS instead of the normal Fortran one. If it was using the normal BLAS interface, one could just use the binaries from the optimized ACML, MKL, or GOTO libraries. Is this for technical or purely historical reasons?

Stephan

From N.Gorsic at vipnet.hr Tue May 2 05:27:22 2006
From: N.Gorsic at vipnet.hr (Neven Gorsic)
Date: Tue May 2 05:27:22 2006
Subject: [Numpy-discussion] ImportError: No module named Numeric
Message-ID: <89684A5E33D0BC4CA1CA32E6E6499E7C0101184E@MAIL02.win.vipnet.hr>

I have WinXP SP1 and I installed:

python-2.4.3.msi

and 3 packages:

numpy-0.9.6r1.win32-py2.4.exe
scipy-0.4.8.win32-py2.4-pentium4sse2.exe
py2exe-0.6.5.win32-py2.4.exe

All 3 directories are placed in C:\Python24\Lib\site-packages. After I type import Numeric I get "ImportError: No module named Numeric". But SciPy, which needs NumPy, works fine (no ImportError). Can you tell me what to do?

Neven

-------------- next part --------------
An HTML attachment was scrubbed...
URL: From a.u.r.e.l.i.a.n at gmx.net Tue May 2 05:36:17 2006 From: a.u.r.e.l.i.a.n at gmx.net (Johannes Loehnert) Date: Tue May 2 05:36:17 2006 Subject: [Numpy-discussion] ImportError: No module named Numeric In-Reply-To: <89684A5E33D0BC4CA1CA32E6E6499E7C0101184E@MAIL02.win.vipnet.hr> References: <89684A5E33D0BC4CA1CA32E6E6499E7C0101184E@MAIL02.win.vipnet.hr> Message-ID: <200605021434.59560.a.u.r.e.l.i.a.n@gmx.net> On Tuesday 02 May 2006 14:25, Neven Gorsic wrote: > I have WinXp SP1 and I installed: > > python-2.4.3.msi > > and 3 packeges: > > numpy-0.9.6r1.win32-py2.4.exe > scipy-0.4.8.win32-py2.4-pentium4sse2.exe > py2exe-0.6.5.win32-py2.4.exe > > All 3 directories are placed in the C:\Python24\Lib\site-packages. > After I type import Numeric I got "ImportError: No module named > Numeric". > But SciPy which needs NumPy works fine (no Impert Error). NumPy is the 'new' Numeric, i.e. the successor of Numeric. Instead of ``import Numeric``, use ``import numpy``. Johannes From luszczek at cs.utk.edu Tue May 2 06:05:00 2006 From: luszczek at cs.utk.edu (Piotr Luszczek) Date: Tue May 2 06:05:00 2006 Subject: [Numpy-discussion] Why cblas not blas? In-Reply-To: <4457165D.8050508@gmx.de> References: <4457165D.8050508@gmx.de> Message-ID: <200605020900.24814.luszczek@cs.utk.edu> On Tuesday 02 May 2006 03:20, Stephan Tolksdorf wrote: > Hi > > While I'm just trying to get Atlas compiled on my Win system (so far > without success) I'm wondering why NumPy is using the cblas interface > to BLAS instead of the normal Fortran one. If it was using the normal > BLAS interface, one could just use the binaries from the optimized > ACML, MKL, or GOTO libraries. Is this for technical or purely > historical reasons? Stephan, I cannot speak about history but I can mention a few technical reasons. 1. CBLAS handles both C's row-major order as well as Fortran's column-major order of storing matrices. Since numpy supports both, it's important to be efficient in both cases. 2. 
There is a set of wrappers that provides the CBLAS interface on top of a vendor BLAS. It's here:

http://netlib.org/blas/blast-forum/cblas.tgz

It uses a few tricks to make sure that nearly all operations are as efficient as vendor BLAS regardless of element order (C or Fortran).

3. CBLAS is supposed to be adopted by vendors, just like BLAS, which once was only the underlying library for LINPACK and then LAPACK and is now considered a standard. So there is nothing "normal" about Fortran BLAS. It's just something that grew standard over time. And in fact Intel has CBLAS in MKL (at least version 8.0):

http://www.ualberta.ca/AICT/RESEARCH/LinuxClusters/doc/mkl/mklqref/cblas.htm

Piotr

PS. Be careful with GOTO BLAS on Intel Dual Core and Itanium. I've seen problems in performance and (!) correctness. Mr. Goto has been notified and hopefully is working on it.

From schofield at ftw.at Tue May 2 06:17:01 2006
From: schofield at ftw.at (Ed Schofield)
Date: Tue May 2 06:17:01 2006
Subject: [Numpy-discussion] ImportError: No module named Numeric
In-Reply-To: <89684A5E33D0BC4CA1CA32E6E6499E7C0101186D@MAIL02.win.vipnet.hr>
References: <89684A5E33D0BC4CA1CA32E6E6499E7C0101186D@MAIL02.win.vipnet.hr>
Message-ID: <44575C83.1000003@ftw.at>

Neven Gorsic wrote:
> I try to learn to use NumPy but all examples from numpy.pdf are unusable:
> vector1 = array((1,2,3,4,5)), ...
> Can you tell me where I can find an up-to-date manual/tutorial about
> numpy, or any syntax examples?

Yes, for now you can use "from numpy import *" at the top of your file or session; then the examples should work. I also suggest you read the Python tutorial at http://www.python.org/doc/current/tut/ (e.g. Section 6.4 on Packages).
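(To spell out the rename from the two answers above as a minimal sketch:)

```python
# Numeric-era code:
#     import Numeric
#     vector1 = Numeric.array((1, 2, 3, 4, 5))
# is spelled as follows under NumPy, Numeric's successor:
import numpy

vector1 = numpy.array((1, 2, 3, 4, 5))
print(vector1 * 2)  # elementwise arithmetic works the same way
```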
-- Ed From cwmoad at gmail.com Tue May 2 06:47:04 2006 From: cwmoad at gmail.com (Charlie Moad) Date: Tue May 2 06:47:04 2006 Subject: [Numpy-discussion] Guide to Numpy book In-Reply-To: References: <3FA6601C-819F-4F15-A670-829FC428F47B@cortechs.net> <4452C145.8050803@geodynamics.org> Message-ID: <6382066a0605020646u6d752d84v1c2711101e108883@mail.gmail.com> On 4/30/06, Vidar Gundersen wrote: > ===== Original message from Luis Armendariz | 29 Apr 2006: > >> What is the newest version of Guide to numpy? The recent one I got is > >> dated at Jan 9 2005 on the cover. > > The one I got yesterday is dated March 15, 2006. > > aren't the updates supposed to be sent out > to customers when available? I was waiting to hear a reply on this, because I am curious about getting updates as well. Our labs copy reads Jan 20. How often should we expect updates? I am guessing the date variations on the front page are from latex each time the doc is regenerated. Thanks, Charlie From andorxor at gmx.de Tue May 2 06:59:21 2006 From: andorxor at gmx.de (Stephan Tolksdorf) Date: Tue May 2 06:59:21 2006 Subject: [Numpy-discussion] Why cblas not blas? In-Reply-To: <200605020900.24814.luszczek@cs.utk.edu> References: <4457165D.8050508@gmx.de> <200605020900.24814.luszczek@cs.utk.edu> Message-ID: <4457656C.5030605@gmx.de> Hi Piotr, thanks for the informative answer! My worry was that the additional level of indirection of the Netlib cblas wrapper might notably slow down calculations (at least for small matrices). On the other side, this effect is probably negligible when taking into account all the Python overhead and the fact that NumPy supports both column- and row-major matrices. Although ACML supports a C interface it is not compatible to CBLAS, as far as I know. MKL seems to do better in this regard. Goto is now available as a source distribution, I wonder where this might lead. 
By the way, I got my Atlas 3.7.11 compilation working on Cygwin by selecting the P4 branch, although it's running on an Opteron processor. Stephan From dr.popovici at gmail.com Tue May 2 07:27:04 2006 From: dr.popovici at gmail.com (Vlad Popovici) Date: Tue May 2 07:27:04 2006 Subject: [Numpy-discussion] bug: matrix.std() and matrix.var() Message-ID: <5a8cce280605020726o24ae05dbwee1df927cccd2b3f@mail.gmail.com> Hi, It seems to me that there is a bug in the matrix code: both methods std() and var() fail with "ValueError: matrices are not aligned" exception. Note that the same methods for array objects work correctly. I am using NumPy version 0.9.7. Example: >>> m = matrix(arange(1,10)).reshape(3,3) >>> m.std() Traceback (most recent call last): File "", line 1, in ? File "/usr/lib/python2.4/site-packages/numpy/core/defmatrix.py", line 149, in __mul__ return N.dot(self, other) ValueError: matrices are not aligned >>> asarray(m).std() 2.7386127875258306 Temporarily, I can use asarray() to get the right values, but this should be corrected. Best wishes, Vlad -- Vlad POPOVICI, PhD. Swiss Institute for Bioinformatics -------------- next part -------------- An HTML attachment was scrubbed... URL: From charlesr.harris at gmail.com Tue May 2 07:53:05 2006 From: charlesr.harris at gmail.com (Charles R Harris) Date: Tue May 2 07:53:05 2006 Subject: [Numpy-discussion] Why cblas not blas? In-Reply-To: <4457656C.5030605@gmx.de> References: <4457165D.8050508@gmx.de> <200605020900.24814.luszczek@cs.utk.edu> <4457656C.5030605@gmx.de> Message-ID: Hi, On 5/2/06, Stephan Tolksdorf wrote: > > Hi Piotr, > > thanks for the informative answer! > > My worry was that the additional level of indirection of the Netlib > cblas wrapper might notably slow down calculations (at least for small > matrices). Atlas *is* a bit slow for small, dense matrices due to its generality. It also produces very large binaries when statically linked. 
For numpy I think this is fine, but if you want high performance with small matrices and need small binaries you will do better rolling your own routines. I think the Goto code for some architectures has been moved into Atlas, but I don't recall which ones offhand. Pearu is probably the expert for those sort of questions. Chuck -------------- next part -------------- An HTML attachment was scrubbed... URL: From kwgoodman at gmail.com Tue May 2 08:35:01 2006 From: kwgoodman at gmail.com (Keith Goodman) Date: Tue May 2 08:35:01 2006 Subject: [Numpy-discussion] numpy putmask() method and a scalar. In-Reply-To: References: <4456A284.90805@noaa.gov> Message-ID: On 5/1/06, Sasha wrote: > Do we really need the putmask method? With fancy indexing the obvious > way to do it is > > >>> a[a>3] = 3 That would be a great example for the NumPy for Matlab Users page. http://www.scipy.org/NumPy_for_Matlab_Users From schofield at ftw.at Tue May 2 09:04:03 2006 From: schofield at ftw.at (Ed Schofield) Date: Tue May 2 09:04:03 2006 Subject: [Numpy-discussion] bug: matrix.std() and matrix.var() In-Reply-To: <5a8cce280605020726o24ae05dbwee1df927cccd2b3f@mail.gmail.com> References: <5a8cce280605020726o24ae05dbwee1df927cccd2b3f@mail.gmail.com> Message-ID: <445783F1.7030102@ftw.at> Vlad Popovici wrote: > Hi, > > It seems to me that there is a bug in the matrix code: both methods > std() and var() fail with > "ValueError: matrices are not aligned" exception. Note that the same > methods for array objects work correctly. > I am using NumPy version 0.9.7. Travis fixed this last week in SVN. But thanks for reporting it! 
--
Ed

From faltet at carabos.com Tue May 2 09:27:10 2006
From: faltet at carabos.com (Francesc Altet)
Date: Tue May 2 09:27:10 2006
Subject: [Numpy-discussion] bug: wrong itemsizes between NumPy strings and numarray
Message-ID: <200605021826.21114.faltet@carabos.com>

Hi,

The PyTables test suite has just discovered a subtle bug when doing conversions between NumPy strings and numarray. In principle I'd say that this is a problem with numarray, but as this problem arose when updating numpy to the latest SVN version, I don't know what to think, frankly. The problem can be seen in:

In [1]: import numpy
In [2]: numpy.__version__
Out[2]: '0.9.7.2466'
In [3]: from numarray import strings
In [4]: strings.array(numpy.array(['aa'],'S1')).itemsize()
Out[4]: 1
In [5]: strings.array(numpy.array(['aa'],'S2')).itemsize()
Out[5]: 2
In [6]: strings.array(numpy.array(['aa'],'S3')).itemsize()
Out[6]: 2
In [7]: strings.array(numpy.array(['aa'],'S30')).itemsize()
Out[7]: 2

i.e. the numarray element size out of the conversion is always 2 (i.e. the actual size of the element in the list), even though the original NumPy object can have a bigger size. However, with another NumPy version (dated from 3 weeks ago or so):

In [1]: import numpy
In [2]: numpy.__version__
Out[2]: '0.9.7.2278'
In [3]: from numarray import strings
In [4]: strings.array(numpy.array(['aa'],'S1')).itemsize()
Out[4]: 1
In [5]: strings.array(numpy.array(['aa'],'S2')).itemsize()
Out[5]: 2
In [6]: strings.array(numpy.array(['aa'],'S3')).itemsize()
Out[6]: 3
In [7]: strings.array(numpy.array(['aa'],'S30')).itemsize()
Out[7]: 30

i.e. it works as intended. I report the bug here because I don't know who is the actual culprit.

Cheers,

--
>0,0<   Francesc Altet     http://www.carabos.com/
V V   Cárabos Coop. V.
Enjoy Data
"-"

From tgrav at mac.com Tue May 2 09:34:03 2006
From: tgrav at mac.com (Tommy Grav)
Date: Tue May 2 09:34:03 2006
Subject: [Numpy-discussion] where function
Message-ID:

Hi,

I have a fits file that I read in with pyfits (which uses numpy). Some of the elements in the data have a 'nan' value. How can I most easily find these elements and replace them with 0.?

Cheers
Tommy

tgrav at mac.com
http://homepage.mac.com/tgrav/

"Any intelligent fool can make things bigger, more complex, and more violent. It takes a touch of genius -- and a lot of courage -- to move in the opposite direction" -- Albert Einstein

From Chris.Barker at noaa.gov Tue May 2 10:02:10 2006
From: Chris.Barker at noaa.gov (Christopher Barker)
Date: Tue May 2 10:02:10 2006
Subject: [Numpy-discussion] numpy putmask() method and a scalar.
In-Reply-To: <87wtd5gkq9.fsf@peds-pc311.bsd.uchicago.edu>
References: <4456A284.90805@noaa.gov> <87wtd5gkq9.fsf@peds-pc311.bsd.uchicago.edu>
Message-ID: <44579074.1020607@noaa.gov>

John Hunter wrote:
>     Sasha> Do we really need the putmask method? With fancy indexing
>     Sasha> the obvious way to do it is
>
>     >>>> a[a>3] = 3

Duh! I totally forgot about that, despite the fact that that was the one thing I missed when moving from Matlab to Numeric a few years back.

> I'm +1 for backwards compatibility with the Numeric API where the cost
> of a fix isn't too painful.

I agree; however, does Numeric have a putmask() method, or only a putmask function? The numpy putmask function works fine. Still, this looks like a bug that may well apply to other methods, so it's probably worth fixing.

-Chris

--
Christopher Barker, Ph.D.
Oceanographer NOAA/OR&R/HAZMAT (206) 526-6959 voice 7600 Sand Point Way NE (206) 526-6329 fax Seattle, WA 98115 (206) 526-6317 main reception Chris.Barker at noaa.gov From tgrav at mac.com Tue May 2 10:12:07 2006 From: tgrav at mac.com (Tommy Grav) Date: Tue May 2 10:12:07 2006 Subject: [Numpy-discussion] where function In-Reply-To: References: Message-ID: <71497875-D8D7-4369-BAA0-9369C48974E9@mac.com> > Thanks I had tried that earlier but it failed, so I checked again and realized that pyfits was set up to work with numarray. Changing that to numpy made it work perfectly :) Cheers Tommy tgrav at mac.com http://homepage.mac.com/tgrav/ "Any intelligent fool can make things bigger, more complex, and more violent. It takes a touch of genius -- and a lot of courage -- to move in the opposite direction" -- Albert Einstein -------------- next part -------------- An HTML attachment was scrubbed... URL: From charlesr.harris at gmail.com Tue May 2 10:37:07 2006 From: charlesr.harris at gmail.com (Charles R Harris) Date: Tue May 2 10:37:07 2006 Subject: [Numpy-discussion] bug: wrong itemsizes between NumPy strings and numarray In-Reply-To: <200605021826.21114.faltet@carabos.com> References: <200605021826.21114.faltet@carabos.com> Message-ID: Francesc, Completely off topic, but are you aware of the lexsort function in numpy and numarray? It is like argsort but takes a list of (vector)keys and performs a stable sort on each key in turn, so for record arrays you can get the effect of sorting on column A, then column B, etc. I thought I would mention it because you seem to use argsort a lot and, well, because I wrote it ;) BTW, thanks for PyTables, it is quite wonderful. Chuck -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From kwgoodman at gmail.com Tue May 2 10:39:07 2006 From: kwgoodman at gmail.com (Keith Goodman) Date: Tue May 2 10:39:07 2006 Subject: [Numpy-discussion] where function In-Reply-To: References: Message-ID: On 5/2/06, Tommy Grav wrote: > I have a fits file that I read in with pyfits (which use numpy). > Some of the elements in the data has a 'nan' value. How can > I most easily find these elements and replace them with 0.? I learned how to do this a few minutes ago from an example on this list. The example was a[a>3] = 3. I took a guess at isnan since that is what it is called in Octave. >> x = asmatrix(random.uniform(0,1,(3,3))) >> x matrix([[ 0.6183926 , 0.00306816, 0.36471066], [ 0.24329805, 0.44638449, 0.63253303], [ 0.86444777, 0.61926557, 0.82174768]]) >> x[0,1] = nan >> x matrix([[ 0.6183926 , nan, 0.36471066], [ 0.24329805, 0.44638449, 0.63253303], [ 0.86444777, 0.61926557, 0.82174768]]) >> x[isnan(x)] = 0 >> x matrix([[ 0.6183926 , 0. , 0.36471066], [ 0.24329805, 0.44638449, 0.63253303], [ 0.86444777, 0.61926557, 0.82174768]]) From faltet at carabos.com Tue May 2 11:29:05 2006 From: faltet at carabos.com (Francesc Altet) Date: Tue May 2 11:29:05 2006 Subject: [Numpy-discussion] lexsort In-Reply-To: References: <200605021826.21114.faltet@carabos.com> Message-ID: <200605022027.59369.faltet@carabos.com> A Dimarts 02 Maig 2006 19:36, Charles R Harris va escriure: > Francesc, > > Completely off topic, but are you aware of the lexsort function in numpy > and numarray? It is like argsort but takes a list of (vector)keys and > performs a stable sort on each key in turn, so for record arrays you can > get the effect of sorting on column A, then column B, etc. I thought I > would mention it because you seem to use argsort a lot and, well, because I > wrote it ;) Thanks for pointing this out. In fact, I had no idea of this capability in numarray (numpy neither). 
I'll have to look more carefully into this to fully realize the kind of things that can be done with it. But it seems very promising anyway :-) > BTW, thanks for PyTables, it is quite wonderful. You are welcome! -- >0,0< Francesc Altet http://www.carabos.com/ V V Cárabos Coop. V. Enjoy Data "-" From kwgoodman at gmail.com Tue May 2 11:47:05 2006 From: kwgoodman at gmail.com (Keith Goodman) Date: Tue May 2 11:47:05 2006 Subject: [Numpy-discussion] bug: matrix.std() and matrix.var() In-Reply-To: <5a8cce280605020726o24ae05dbwee1df927cccd2b3f@mail.gmail.com> References: <5a8cce280605020726o24ae05dbwee1df927cccd2b3f@mail.gmail.com> Message-ID: On 5/2/06, Vlad Popovici wrote: > Hi, > > It seems to me that there is a bug in the matrix code: both methods std() > and var() fail with > "ValueError: matrices are not aligned" exception. Note that the same methods > for array objects work correctly. > I am using NumPy version 0.9.7. > > Example: > >>> m = matrix(arange(1,10)).reshape(3,3) > >>> m.std() > Traceback (most recent call last): > File "", line 1, in ? > File > "/usr/lib/python2.4/site-packages/numpy/core/defmatrix.py", > line 149, in __mul__ > return N.dot(self, other) > ValueError: matrices are not aligned > >>> asarray(m).std() > 2.7386127875258306 That might have been fixed recently in SVN. Have a look at this thread: http://sourceforge.net/mailarchive/forum.php?thread_id=10254986&forum_id=4890 >> m = matrix(arange(1,10)).reshape(3,3) >> m.std() matrix([[ 2.73861279]]) >> m.var() matrix([[ 7.5]]) >> numpy.__version__ '0.9.7.2435' From fperez.net at gmail.com Tue May 2 12:23:05 2006 From: fperez.net at gmail.com (Fernando Perez) Date: Tue May 2 12:23:05 2006 Subject: [Numpy-discussion] Why cblas not blas? In-Reply-To: <200605020900.24814.luszczek@cs.utk.edu> References: <4457165D.8050508@gmx.de> <200605020900.24814.luszczek@cs.utk.edu> Message-ID: Hi Piotr, On 5/2/06, Piotr Luszczek wrote: > PS. Be careful with GOTO BLAS on Intel Dual Core and Itanium.
> I've seen problems in performance and (!) correctness. Mr. Goto > has been notified and hopefully is working on it. Quick question: a colleague from ORNL to whom I forwarded this information asked me whether the problem appears also on single-cpus used with multithreaded code. Do you know? I think he uses both AMD and Itanium chips, but I'm not 100% certain. Cheers, f From luszczek at cs.utk.edu Tue May 2 12:36:11 2006 From: luszczek at cs.utk.edu (Piotr Luszczek) Date: Tue May 2 12:36:11 2006 Subject: [Numpy-discussion] Why cblas not blas? In-Reply-To: References: <4457165D.8050508@gmx.de> <200605020900.24814.luszczek@cs.utk.edu> Message-ID: <200605021536.06303.luszczek@cs.utk.edu> On Tuesday 02 May 2006 14:22, Fernando Perez wrote: > Hi Piotr, > > On 5/2/06, Piotr Luszczek wrote: > > PS. Be careful with GOTO BLAS on Intel Dual Core and Itanium. > > I've seen problems in performance and (!) correctness. Mr. Goto > > has been notified and hopefully is working on it. > > Quick question: a colleague from ORNL to whom I forwarded this > information asked me whether the problem appears also on single-cpus > used with multithreaded code. Do you know? I think he uses both AMD > and Itanium chips, but I'm not 100% certain. The problems we had were in multithreaded code on Dual Core Intel. And on Itanium the problems were with single-threaded code. I don't know the details of the problems with AMD CPUs; they've been reported on the GOTO BLAS mailing list. Piotr From fullung at gmail.com Tue May 2 16:28:01 2006 From: fullung at gmail.com (Albert Strasheim) Date: Tue May 2 16:28:01 2006 Subject: [Numpy-discussion] Crash on failed memory allocation Message-ID: <00d001c66e3f$ef293cd0$0a84a8c0@dsp.sun.ac.za> Hello all Stefan van der Walt and I have discovered two bugs when working with large blocks of memory and array descriptors.
Example code that causes problems:

import numpy as N
print N.__version__
x = []
i = 20000
j = 10000
names = ['a', 'b']
formats = ['f8', 'f8']
descr = N.dtype({'names' : names, 'formats' : formats})
for y in range(i):
    x.append(N.empty((j,), dtype=descr)['a'])
N.asarray(x)

With i and j large and a big descriptor, you run out of process address space (typically 2 GB?) during the list append. This raises a MemoryError. However, with a slightly smaller list, you run out of memory in asarray. This causes a segfault on Linux. This problem can also manifest itself as a TypeError. On Windows with r2462 I got this message:

Traceback (most recent call last):
  File "numpybug.py", line 7, in ?
    N.asarray(x)
  File "C:\Python24\Lib\site-packages\numpy\core\numeric.py", line 116, in asarray
    return array(a, dtype, copy=False, order=order)
TypeError: a float is required

Stefan also discovered the following bug:

import numpy as N
descr = N.dtype({'names' : ['a'], 'formats' : ['foo']}, align=1)

Notice the invalid typestring. Combined with align=1 this seems to be guaranteed to crash.
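As a side note on the descriptor used in this report: a minimal sketch of how the same compound dtype behaves in a present-day NumPy (assumption: modern `import numpy as np` style; the 2006-era build discussed here crashed instead of raising):

```python
import numpy as np

# Two float64 fields, as in the report: the compound dtype is 16 bytes wide.
descr = np.dtype({'names': ['a', 'b'], 'formats': ['f8', 'f8']})
print(descr.itemsize)  # 16

# The invalid typestring from Stefan's example: current NumPy rejects it
# with a TypeError instead of crashing the interpreter.
try:
    np.dtype({'names': ['a'], 'formats': ['foo']}, align=1)
except TypeError as exc:
    print('rejected:', exc)
```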
Regards, Albert From ted.horst at earthlink.net Tue May 2 20:11:02 2006 From: ted.horst at earthlink.net (Ted Horst) Date: Tue May 2 20:11:02 2006 Subject: [Numpy-discussion] scalarmath fails to build Message-ID: <43873FB5-0060-4893-B398-181A62755B1E@earthlink.net> Here is a patch:

--- numpy/core/src/scalarmathmodule.c.src (revision 2471)
+++ numpy/core/src/scalarmathmodule.c.src (working copy)
@@ -597,7 +597,12 @@
 {
     PyObject *ret;
     @name@ arg1, arg2, out;
+#if @cmplx@
+    @otyp@ out1;
+    out1.real = out1.imag = 0;
+#else
     @otyp@ out1=0;
+#endif
     int retstatus;
     switch(_@name@_convert2_to_ctypes(a, &arg1, b, &arg2)) {

From charlesr.harris at gmail.com Tue May 2 21:12:05 2006 From: charlesr.harris at gmail.com (Charles R Harris) Date: Tue May 2 21:12:05 2006 Subject: [Numpy-discussion] lexsort In-Reply-To: <200605022027.59369.faltet@carabos.com> References: <200605021826.21114.faltet@carabos.com> <200605022027.59369.faltet@carabos.com> Message-ID: Hi, On 5/2/06, Francesc Altet wrote: > > A Dimarts 02 Maig 2006 19:36, Charles R Harris va escriure: > > Francesc, > > > > Completely off topic, but are you aware of the lexsort function in numpy > > and numarray? It is like argsort but takes a list of (vector)keys and > > performs a stable sort on each key in turn, so for record arrays you can > > get the effect of sorting on column A, then column B, etc. I thought I > > would mention it because you seem to use argsort a lot and, well, > because I > > wrote it ;) > > Thanks for pointing this out. In fact, I had no idea of this > capability in numarray (numpy neither). I'll have to look more > carefully into this to fully realize the kind of things that can be > done with it. But it seems very promising anyway :-) As an example: In [21]: a Out[21]: array([[0, 1], [1, 0], [1, 1], [0, 1], [1, 0]]) In [22]: a[lexsort((a[:,1],a[:,0]))] Out[22]: array([[0, 1], [0, 1], [1, 0], [1, 0], [1, 1]]) Hmm, I notice that lexsort requires a tuple and won't accept a list.
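Chuck's interactive lexsort session above, rewritten as a small self-contained sketch (assumption: a modern `import numpy as np` namespace rather than the 2006 interactive session's bare names):

```python
import numpy as np

a = np.array([[0, 1],
              [1, 0],
              [1, 1],
              [0, 1],
              [1, 0]])

# lexsort sorts on the *last* key first, so passing (column 1, column 0)
# sorts primarily on column 0 and breaks ties on column 1; the sort is stable.
order = np.lexsort((a[:, 1], a[:, 0]))
print(a[order])
# [[0 1]
#  [0 1]
#  [1 0]
#  [1 0]
#  [1 1]]
```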
I wonder if there is a good reason for that. Travis? Chuck -------------- next part -------------- An HTML attachment was scrubbed... URL: From arnd.baecker at web.de Tue May 2 23:51:04 2006 From: arnd.baecker at web.de (Arnd Baecker) Date: Tue May 2 23:51:04 2006 Subject: [Numpy-discussion] scalarmath fails to build In-Reply-To: <43873FB5-0060-4893-B398-181A62755B1E@earthlink.net> References: <43873FB5-0060-4893-B398-181A62755B1E@earthlink.net> Message-ID: On Tue, 2 May 2006, Ted Horst wrote: > Here is a patch: > > > --- numpy/core/src/scalarmathmodule.c.src (revision 2471) > +++ numpy/core/src/scalarmathmodule.c.src (working copy) > @@ -597,7 +597,12 @@ > { > PyObject *ret; > @name@ arg1, arg2, out; > +#if @cmplx@ > + @otyp@ out1; > + out1.real = out1.imag = 0; > +#else > @otyp@ out1=0; > +#endif > int retstatus; > switch(_@name@_convert2_to_ctypes(a, &arg1, b, &arg2)) { Thanks - applied: http://projects.scipy.org/scipy/numpy/changeset/2472 (Travis, I hope I got that right ;-) After this numpy compiles for me, but on testing (64 Bit opteron) I get: In [1]: import numpy from lib import * -> failed: 'module' object has no attribute 'nmath' import linalg -> failed: 'module' object has no attribute 'nmath' In numpy/lib/__init__.py the construct __all__ = ['nmath','math'] is used, but `nmath` is not mentioned explicitly before. In [2]: numpy.__version__ Out[2]: '0.9.7.2471' In [3]: numpy.test(10) Found 5 tests for numpy.distutils.misc_util Found 4 tests for numpy.lib.getlimits Found 30 tests for numpy.core.numerictypes Found 13 tests for numpy.core.umath Found 8 tests for numpy.lib.arraysetops Warning: FAILURE importing tests for /home/abaecker/BUILDS3/BuildDir/inst_numpy/lib/python2.4/site-packages/numpy/lib/tests/test_type_check.py:7: AttributeError: 'module' object has no attribute 'nmath' (in ?)
Found 93 tests for numpy.core.multiarray Found 3 tests for numpy.dft.helper Found 36 tests for numpy.core.ma Found 2 tests for numpy.core.oldnumeric Warning: FAILURE importing tests for /home/abaecker/BUILDS3/BuildDir/inst_numpy/lib/python2.4/site-packages/numpy/lib/tests/test_twodim_base.py:7: ImportError: cannot import name rot90 (in ?) Found 8 tests for numpy.core.defmatrix Found 1 tests for numpy.lib.ufunclike Warning: FAILURE importing tests for /home/abaecker/BUILDS3/BuildDir/inst_numpy/lib/python2.4/site-packages/numpy/lib/tests/test_function_base.py:7: AttributeError: 'module' object has no attribute 'nmath' (in ?) Found 1 tests for numpy.lib.polynomial Found 6 tests for numpy.core.records Found 19 tests for numpy.core.numeric Warning: FAILURE importing tests for /home/abaecker/BUILDS3/BuildDir/inst_numpy/lib/python2.4/site-packages/numpy/lib/tests/test_index_tricks.py:4: ImportError: cannot import name r_ (in ?) Warning: FAILURE importing tests for /home/abaecker/BUILDS3/BuildDir/inst_numpy/lib/python2.4/site-packages/numpy/lib/tests/test_shape_base.py:5: AttributeError: 'module' object has no attribute 'nmath' (in ?) Found 0 tests for __main__ ......................................................E............................................................................ ...............................................................E...E..............................
====================================================================== ERROR: check_manyways (numpy.lib.tests.test_arraysetops.test_aso) ---------------------------------------------------------------------- Traceback (most recent call last): File "/home/abaecker/BUILDS3/BuildDir/inst_numpy/lib/python2.4/site-packages/numpy/lib/tests/test_arraysetops.py", line 128, in check_manyways a = numpy.fix( nItem / 10 * numpy.random.random( nItem ) ) AttributeError: 'module' object has no attribute 'fix' ====================================================================== ERROR: check_basic (numpy.core.tests.test_defmatrix.test_algebra) ---------------------------------------------------------------------- Traceback (most recent call last): File "/home/abaecker/BUILDS3/BuildDir/inst_numpy/lib/python2.4/site-packages/numpy/core/tests/test_defmatrix.py", line 115, in check_basic import numpy.linalg as linalg File "/home/abaecker/BUILDS3/BuildDir/inst_numpy/lib/python2.4/site-packages/numpy/linalg/__init__.py", line 4, in ? from linalg import * File "/home/abaecker/BUILDS3/BuildDir/inst_numpy/lib/python2.4/site-packages/numpy/linalg/linalg.py", line 19, in ? from numpy.lib import * AttributeError: 'module' object has no attribute 'nmath' ====================================================================== ERROR: check_basic (numpy.core.tests.test_defmatrix.test_properties) ---------------------------------------------------------------------- Traceback (most recent call last): File "/home/abaecker/BUILDS3/BuildDir/inst_numpy/lib/python2.4/site-packages/numpy/core/tests/test_defmatrix.py", line 44, in check_basic import numpy.linalg as linalg File "/home/abaecker/BUILDS3/BuildDir/inst_numpy/lib/python2.4/site-packages/numpy/linalg/__init__.py", line 4, in ? from linalg import * File "/home/abaecker/BUILDS3/BuildDir/inst_numpy/lib/python2.4/site-packages/numpy/linalg/linalg.py", line 19, in ?
from numpy.lib import * AttributeError: 'module' object has no attribute 'nmath' ---------------------------------------------------------------------- Ran 229 tests in 0.440s FAILED (errors=3) Out[3]: Best, Arnd From arnd.baecker at web.de Wed May 3 00:03:14 2006 From: arnd.baecker at web.de (Arnd Baecker) Date: Wed May 3 00:03:14 2006 Subject: [Numpy-discussion] scalarmath fails to build In-Reply-To: References: <43873FB5-0060-4893-B398-181A62755B1E@earthlink.net> Message-ID: On Wed, 3 May 2006, Arnd Baecker wrote: > In [1]: import numpy > from lib import * -> failed: 'module' object has no attribute 'nmath' > import linalg -> failed: 'module' object has no attribute 'nmath' > > In numpy/lib/__init__.py the construct > __all__ = ['nmath','math'] > is used, but `nmath` is not mentioned explicitely before. By looking in the trac history I found the simple reason: Index: numpy/lib/__init__.py =================================================================== --- numpy/lib/__init__.py (revision 2472) +++ numpy/lib/__init__.py (working copy) @@ -18,7 +18,7 @@ from arraysetops import * import math -__all__ = ['nmath','math'] +__all__ = ['emath','math'] __all__ += type_check.__all__ __all__ += index_tricks.__all__ __all__ += function_base.__all__ Patch is applied, http://projects.scipy.org/scipy/numpy/changeset/2473 So now numpy builds again and no errors on test. Back to real work ;-). Arnd From a.u.r.e.l.i.a.n at gmx.net Wed May 3 01:16:01 2006 From: a.u.r.e.l.i.a.n at gmx.net (Johannes Loehnert) Date: Wed May 3 01:16:01 2006 Subject: [Numpy-discussion] Test fails for rev. 2473 Message-ID: <200605031014.30834.a.u.r.e.l.i.a.n@gmx.net> Hi, with the current svn version, numpy.test(10,10) gives the following: [...everything ok...] check_scalar (numpy.lib.tests.test_function_base.test_vectorize) ... ok check_simple (numpy.lib.tests.test_function_base.test_vectorize) ... 
ok ====================================================================== ERROR: check_doctests (numpy.lib.tests.test_ufunclike.test_docs) ---------------------------------------------------------------------- Traceback (most recent call last): File "/home/jloehnert/python/mylibs/numpy/lib/tests/test_ufunclike.py", line 59, in check_doctests def check_doctests(self): return self.rundocs() File "/scratch/jloehnert/python-svn/lib/python2.3/site-packages/numpy/testing/numpytest.py", line 185, in rundocs tests = doctest.DocTestFinder().find(m) AttributeError: 'module' object has no attribute 'DocTestFinder' ====================================================================== ERROR: check_doctests (numpy.lib.tests.test_polynomial.test_docs) ---------------------------------------------------------------------- Traceback (most recent call last): File "/home/jloehnert/python/mylibs/numpy/lib/tests/test_polynomial.py", line 79, in check_doctests def check_doctests(self): return self.rundocs() File "/scratch/jloehnert/python-svn/lib/python2.3/site-packages/numpy/testing/numpytest.py", line 185, in rundocs tests = doctest.DocTestFinder().find(m) AttributeError: 'module' object has no attribute 'DocTestFinder' ---------------------------------------------------------------------- Ran 364 tests in 1.747s FAILED (errors=2) Out[5]: In [6]: numpy.__version__ Out[6]: '0.9.7.2473' Johannes From nwagner at iam.uni-stuttgart.de Wed May 3 01:20:59 2006 From: nwagner at iam.uni-stuttgart.de (Nils Wagner) Date: Wed May 3 01:20:59 2006 Subject: [Numpy-discussion] Test fails for rev. 2473 In-Reply-To: <200605031014.30834.a.u.r.e.l.i.a.n@gmx.net> References: <200605031014.30834.a.u.r.e.l.i.a.n@gmx.net> Message-ID: <445867A6.3060304@iam.uni-stuttgart.de> Johannes Loehnert wrote: > Hi, > > with the current svn version, numpy.test(10,10) gives the following: > > [...everything ok...] > check_scalar (numpy.lib.tests.test_function_base.test_vectorize) ... 
ok > check_simple (numpy.lib.tests.test_function_base.test_vectorize) ... ok > > ====================================================================== > ERROR: check_doctests (numpy.lib.tests.test_ufunclike.test_docs) > ---------------------------------------------------------------------- > Traceback (most recent call last): > File "/home/jloehnert/python/mylibs/numpy/lib/tests/test_ufunclike.py", line > 59, in check_doctests > def check_doctests(self): return self.rundocs() > File > "/scratch/jloehnert/python-svn/lib/python2.3/site-packages/numpy/testing/numpytest.py", > line 185, in rundocs > tests = doctest.DocTestFinder().find(m) > AttributeError: 'module' object has no attribute 'DocTestFinder' > > ====================================================================== > ERROR: check_doctests (numpy.lib.tests.test_polynomial.test_docs) > ---------------------------------------------------------------------- > Traceback (most recent call last): > File "/home/jloehnert/python/mylibs/numpy/lib/tests/test_polynomial.py", > line 79, in check_doctests > def check_doctests(self): return self.rundocs() > File > "/scratch/jloehnert/python-svn/lib/python2.3/site-packages/numpy/testing/numpytest.py", > line 185, in rundocs > tests = doctest.DocTestFinder().find(m) > AttributeError: 'module' object has no attribute 'DocTestFinder' > > ---------------------------------------------------------------------- > Ran 364 tests in 1.747s > > FAILED (errors=2) > Out[5]: > > In [6]: numpy.__version__ > Out[6]: '0.9.7.2473' > > > Johannes > > > ------------------------------------------------------- > Using Tomcat but need to do more? Need to support web services, security? 
> Get stuff done quickly with pre-integrated technology to make your job easier > Download IBM WebSphere Application Server v.1.0.1 based on Apache Geronimo > http://sel.as-us.falkag.net/sel?cmd=lnk&kid=120709&bid=263057&dat=121642 > _______________________________________________ > Numpy-discussion mailing list > Numpy-discussion at lists.sourceforge.net > https://lists.sourceforge.net/lists/listinfo/numpy-discussion > I cannot reproduce these errors. 32bit Ran 364 tests in 2.457s OK >>> numpy.__version__ '0.9.7.2473' 64bit Ran 364 tests in 0.858s OK Nils From arnd.baecker at web.de Wed May 3 01:35:04 2006 From: arnd.baecker at web.de (Arnd Baecker) Date: Wed May 3 01:35:04 2006 Subject: [Numpy-discussion] Scalar math module is ready for testing In-Reply-To: <4451C076.40608@ieee.org> References: <4451C076.40608@ieee.org> Message-ID: On Fri, 28 Apr 2006, Travis Oliphant wrote: > > The scalar math module is complete and ready to be tested. It should > speed up code that relies heavily on scalar arithmetic by by-passing the > ufunc machinery. > > It needs lots of testing to be sure that it is doing the "right" > thing. 
To enable scalarmath you need to > > import numpy.core.scalarmath After numpy compiles on the 64 Bit machine I tried: import numpy import numpy.core.scalarmath numpy.test(1) The only failure is ====================================================================== FAIL: check_basic (numpy.lib.tests.test_function_base.test_prod) ---------------------------------------------------------------------- Traceback (most recent call last): File "/home/abaecker/BUILDS3/BuildDir/inst_numpy/lib/python2.4/site-packages/numpy/lib/tests/test_function_base.py", line 149, in check_basic assert_equal(prod(a),26400) File "/home/abaecker/BUILDS3/BuildDir/inst_numpy/lib/python2.4/site-packages/numpy/testing/utils.py", line 128, in assert_equal assert desired == actual, msg AssertionError: Items are not equal: DESIRED: 26400 ACTUAL: 26400L ---------------------------------------------------------------------- Ran 363 tests in 0.246s Not sure if this is more a bug in assert_equal? Testing scipy with `import numpy.core.scalarmath` before leads to quite a few errors, many of them related to sparse, see below for an excerpt, and a couple of others.
Best, Arnd ====================================================================== ERROR: check_eye (scipy.sparse.tests.test_sparse.test_construct_utils) ---------------------------------------------------------------------- Traceback (most recent call last): File "/home/abaecker/BUILDS3/BuildDir/inst_numpy/lib/python2.4/site-packages/scipy/sparse/tests/test_sparse.py", line 626, in check_eye a = speye(2, 3 ) File "/home/abaecker/BUILDS3/BuildDir/inst_numpy/lib/python2.4/site-packages/scipy/sparse/sparse.py", line 2810, in speye return spdiags(diags, k, n, m) File "/home/abaecker/BUILDS3/BuildDir/inst_numpy/lib/python2.4/site-packages/scipy/sparse/sparse.py", line 2792, in spdiags return csc_matrix((a, rowa, ptra), dims=(M, N)) File "/home/abaecker/BUILDS3/BuildDir/inst_numpy/lib/python2.4/site-packages/scipy/sparse/sparse.py", line 574, in __init__ self._check() File "/home/abaecker/BUILDS3/BuildDir/inst_numpy/lib/python2.4/site-packages/scipy/sparse/sparse.py", line 590, in _check raise ValueError, "nzmax must not be less than nnz" ValueError: nzmax must not be less than nnz ====================================================================== ERROR: check_identity (scipy.sparse.tests.test_sparse.test_construct_utils) ---------------------------------------------------------------------- Traceback (most recent call last): File "/home/abaecker/BUILDS3/BuildDir/inst_numpy/lib/python2.4/site-packages/scipy/sparse/tests/test_sparse.py", line 621, in check_identity a = spidentity(3) File "/home/abaecker/BUILDS3/BuildDir/inst_numpy/lib/python2.4/site-packages/scipy/sparse/sparse.py", line 2800, in spidentity return spdiags( diags, 0, n, n ) File "/home/abaecker/BUILDS3/BuildDir/inst_numpy/lib/python2.4/site-packages/scipy/sparse/sparse.py", line 2792, in spdiags return csc_matrix((a, rowa, ptra), dims=(M, N)) File "/home/abaecker/BUILDS3/BuildDir/inst_numpy/lib/python2.4/site-packages/scipy/sparse/sparse.py", line 574, in __init__ self._check() File 
"/home/abaecker/BUILDS3/BuildDir/inst_numpy/lib/python2.4/site-packages/scipy/sparse/sparse.py", line 590, in _check raise ValueError, "nzmax must not be less than nnz" ValueError: nzmax must not be less than nnz ====================================================================== ERROR: check_add (scipy.sparse.tests.test_sparse.test_csc) ---------------------------------------------------------------------- Traceback (most recent call last): File "/home/abaecker/BUILDS3/BuildDir/inst_numpy/lib/python2.4/site-packages/scipy/sparse/tests/test_sparse.py", line 33, in setUp self.datsp = self.spmatrix(self.dat) File "/home/abaecker/BUILDS3/BuildDir/inst_numpy/lib/python2.4/site-packages/scipy/sparse/sparse.py", line 574, in __init__ self._check() File "/home/abaecker/BUILDS3/BuildDir/inst_numpy/lib/python2.4/site-packages/scipy/sparse/sparse.py", line 590, in _check raise ValueError, "nzmax must not be less than nnz" ValueError: nzmax must not be less than nnz ====================================================================== ERROR: Check whether adding a dense matrix to a sparse matrix works ---------------------------------------------------------------------- Traceback (most recent call last): File "/home/abaecker/BUILDS3/BuildDir/inst_numpy/lib/python2.4/site-packages/scipy/sparse/tests/test_sparse.py", line 33, in setUp self.datsp = self.spmatrix(self.dat) File "/home/abaecker/BUILDS3/BuildDir/inst_numpy/lib/python2.4/site-packages/scipy/sparse/sparse.py", line 574, in __init__ self._check() File "/home/abaecker/BUILDS3/BuildDir/inst_numpy/lib/python2.4/site-packages/scipy/sparse/sparse.py", line 590, in _check raise ValueError, "nzmax must not be less than nnz" ValueError: nzmax must not be less than nnz ====================================================================== ERROR: Check whether the copy=True and copy=False keywords work ---------------------------------------------------------------------- Traceback (most recent call last): File 
"/home/abaecker/BUILDS3/BuildDir/inst_numpy/lib/python2.4/site-packages/scipy/sparse/tests/test_sparse.py", line 33, in setUp self.datsp = self.spmatrix(self.dat) File "/home/abaecker/BUILDS3/BuildDir/inst_numpy/lib/python2.4/site-packages/scipy/sparse/sparse.py", line 574, in __init__ self._check() File "/home/abaecker/BUILDS3/BuildDir/inst_numpy/lib/python2.4/site-packages/scipy/sparse/sparse.py", line 590, in _check raise ValueError, "nzmax must not be less than nnz" ValueError: nzmax must not be less than nnz [....] ====================================================================== ERROR: Solve: single precision ---------------------------------------------------------------------- Traceback (most recent call last): File "/home/abaecker/BUILDS3/BuildDir/inst_numpy/lib/python2.4/site-packages/scipy/linsolve/umfpack/tests/test_umfpack.py", line 67, in setUp self.a = spdiags([[1, 2, 3, 4, 5], [6, 5, 8, 9, 10]], [0, 1], 5, 5) File "/home/abaecker/BUILDS3/BuildDir/inst_numpy/lib/python2.4/site-packages/scipy/sparse/sparse.py", line 2792, in spdiags return csc_matrix((a, rowa, ptra), dims=(M, N)) File "/home/abaecker/BUILDS3/BuildDir/inst_numpy/lib/python2.4/site-packages/scipy/sparse/sparse.py", line 574, in __init__ self._check() File "/home/abaecker/BUILDS3/BuildDir/inst_numpy/lib/python2.4/site-packages/scipy/sparse/sparse.py", line 590, in _check raise ValueError, "nzmax must not be less than nnz" ValueError: nzmax must not be less than nnz ====================================================================== ERROR: check_exact (scipy.stats.tests.test_morestats.test_ansari) ---------------------------------------------------------------------- Traceback (most recent call last): File "/home/abaecker/BUILDS3/BuildDir/inst_numpy/lib/python2.4/site-packages/scipy/stats/tests/test_morestats.py", line 79, in check_exact W,pval = stats.ansari([1,2,3,4],[15,5,20,8,10,12]) File 
"/home/abaecker/BUILDS3/BuildDir/inst_numpy/lib/python2.4/site-packages/scipy/stats/morestats.py", line 591, in ansari pval = 2.0*sum(a1[:cind+1])/total TypeError: unsupported operand type(s) for *: 'float' and 'float32scalar' ====================================================================== FAIL: check_simple_complex (scipy.linalg.tests.test_decomp.test_eigvals) ---------------------------------------------------------------------- Traceback (most recent call last): File "/home/abaecker/BUILDS3/BuildDir/inst_numpy/lib/python2.4/site-packages/scipy/linalg/tests/test_decomp.py", line 53, in check_simple_complex assert_array_almost_equal(w,exact_w) File "/home/abaecker/BUILDS3/BuildDir/inst_numpy/lib/python2.4/site-packages/numpy/testing/utils.py", line 231, in assert_array_almost_equal assert cond,\ AssertionError: Arrays are not almost equal (mismatch 66.6666666667%): Array 1: [ 9.2983779e+00 +6.5630282e-01j -5.5210285e-16 -1.7300711e-16j -2.9837791e-01 +3.4369718e-01j] Array 2: [ 1.+0.0705825j 0.+0.j 1.-1.1518855j] ====================================================================== FAIL: check_definition (scipy.fftpack.tests.test_basic.test_ifft) ---------------------------------------------------------------------- Traceback (most recent call last): File "/home/abaecker/BUILDS3/BuildDir/inst_numpy/lib/python2.4/site-packages/scipy/fftpack/tests/test_basic.py", line 183, in check_definition assert_array_almost_equal(y,y1) File "/home/abaecker/BUILDS3/BuildDir/inst_numpy/lib/python2.4/site-packages/numpy/testing/utils.py", line 231, in assert_array_almost_equal assert cond,\ AssertionError: Arrays are not almost equal (mismatch 100.0%): Array 1: [ 2.5 +0.375j 0.0883883+0.0883883j -0.125 -0.5j 0.0883883-0.0883883j -0.5 -0.375j -0.0883883-0.0... 
Array 2: [ 1.+0.15j 1.+1.j 1.+4.j 1.-1.j 1.+0.75j 1.+1.j 1.-0.5714286j 1.-1.j ] ====================================================================== FAIL: check_definition (scipy.fftpack.tests.test_basic.test_irfft) ---------------------------------------------------------------------- Traceback (most recent call last): File "/home/abaecker/BUILDS3/BuildDir/inst_numpy/lib/python2.4/site-packages/scipy/fftpack/tests/test_basic.py", line 340, in check_definition assert_array_almost_equal(y,y1) File "/home/abaecker/BUILDS3/BuildDir/inst_numpy/lib/python2.4/site-packages/numpy/testing/utils.py", line 231, in assert_array_almost_equal assert cond,\ AssertionError: Arrays are not almost equal (mismatch 100.0%): Array 1: [ 2.625 -1.6856602 -0.375 -1.1856602 0.625 0.4356602 -0.375 0.9356602] Array 2: [ 1. 1. 1. 1. 1. 1. 1. 1.] ====================================================================== FAIL: check_h1vp (scipy.special.tests.test_basic.test_h1vp) ---------------------------------------------------------------------- Traceback (most recent call last): File "/home/abaecker/BUILDS3/BuildDir/inst_numpy/lib/python2.4/site-packages/scipy/special/tests/test_basic.py", line 1118, in check_h1vp assert_almost_equal(h1,h1real,8) File "/home/abaecker/BUILDS3/BuildDir/inst_numpy/lib/python2.4/site-packages/numpy/testing/utils.py", line 148, in assert_almost_equal assert round(abs(desired - actual),decimal) == 0, msg AssertionError: Items are not equal: DESIRED: (0.49812630170362004+63.055272295669909j) ACTUAL: (1+126.58490844594492j) ====================================================================== FAIL: check_h2vp (scipy.special.tests.test_basic.test_h2vp) ---------------------------------------------------------------------- Traceback (most recent call last): File "/home/abaecker/BUILDS3/BuildDir/inst_numpy/lib/python2.4/site-packages/scipy/special/tests/test_basic.py", line 1125, in check_h2vp assert_almost_equal(h2,h2real,8) File 
"/home/abaecker/BUILDS3/BuildDir/inst_numpy/lib/python2.4/site-packages/numpy/testing/utils.py", line 148, in assert_almost_equal assert round(abs(desired - actual),decimal) == 0, msg AssertionError: Items are not equal: DESIRED: (0.49812630170362004-63.055272295669909j) ACTUAL: (1-126.58490844594492j) ====================================================================== FAIL: check_nils (scipy.linalg.tests.test_matfuncs.test_signm) ---------------------------------------------------------------------- Traceback (most recent call last): File "/home/abaecker/BUILDS3/BuildDir/inst_numpy/lib/python2.4/site-packages/scipy/linalg/tests/test_matfuncs.py", line 42, in check_nils assert_array_almost_equal(r,cr) File "/home/abaecker/BUILDS3/BuildDir/inst_numpy/lib/python2.4/site-packages/numpy/testing/utils.py", line 231, in assert_array_almost_equal assert cond,\ AssertionError: Arrays are not almost equal (mismatch 100.0%): Array 1: [[ 20.728 -6.576 29.592 32.88 -6.576] [ -5.808 2.936 -8.712 -9.68 1.936] [ -6.24 2.08 -8.36 -10.4 ... Array 2: [[ 11.9493333 -2.2453333 15.3173333 21.6533333 -2.2453333] [ -3.8426667 0.4986667 -4.5906667 -7.1866667 0.498... ====================================================================== FAIL: check_bad (scipy.linalg.tests.test_matfuncs.test_sqrtm) ---------------------------------------------------------------------- Traceback (most recent call last): File "/home/abaecker/BUILDS3/BuildDir/inst_numpy/lib/python2.4/site-packages/scipy/linalg/tests/test_matfuncs.py", line 99, in check_bad assert_array_almost_equal(dot(esa,esa),a) File "/home/abaecker/BUILDS3/BuildDir/inst_numpy/lib/python2.4/site-packages/numpy/testing/utils.py", line 231, in assert_array_almost_equal assert cond,\ AssertionError: Arrays are not almost equal (mismatch 100.0%): Array 1: [[ nan +nanj nan +n... Array 2: [[ 1. 0. 0. 1. ] [ 0. 0.03125 0. 0. ] [ 0. 0. 0.03125 0. ] [ ... 
----------------------------------------------------------------------
Ran 1119 tests in 1.151s

FAILED (failures=7, errors=82)

From nwagner at iam.uni-stuttgart.de  Wed May  3 01:42:04 2006
From: nwagner at iam.uni-stuttgart.de (Nils Wagner)
Date: Wed May 3 01:42:04 2006
Subject: [Numpy-discussion] Scalar math module is ready for testing
In-Reply-To:
References: <4451C076.40608@ieee.org>
Message-ID: <44586C88.80702@iam.uni-stuttgart.de>

Arnd Baecker wrote:
> On Fri, 28 Apr 2006, Travis Oliphant wrote:
>
>> The scalar math module is complete and ready to be tested.  It should
>> speed up code that relies heavily on scalar arithmetic by by-passing the
>> ufunc machinery.
>>
>> It needs lots of testing to be sure that it is doing the "right"
>> thing.  To enable scalarmath you need to
>>
>> import numpy.core.scalarmath
>
> After numpy compiles on the 64 Bit machine I tried:
>
> import numpy
> import numpy.core.scalarmath
> numpy.test(1)
>
> The only failure is
>
> ======================================================================
> FAIL: check_basic (numpy.lib.tests.test_function_base.test_prod)
> ----------------------------------------------------------------------
> Traceback (most recent call last):
>   File "/home/abaecker/BUILDS3/BuildDir/inst_numpy/lib/python2.4/site-packages/numpy/lib/tests/test_function_base.py", line 149, in check_basic
>     assert_equal(prod(a),26400)
>   File "/home/abaecker/BUILDS3/BuildDir/inst_numpy/lib/python2.4/site-packages/numpy/testing/utils.py", line 128, in assert_equal
>     assert desired == actual, msg
> AssertionError:
> Items are not equal:
> DESIRED: 26400
> ACTUAL: 26400L
>
> ----------------------------------------------------------------------
> Ran 363 tests in 0.246s
>
> Not sure if this is more a bug of assert_equal?
>
> Testing scipy with `import numpy.core.scalarmath` before leads
> to quite a few errors, many of them related to sparse; see below for an
> excerpt, and a couple of others.
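The DESIRED: 26400 / ACTUAL: 26400L report above is worth unpacking: assert_equal only evaluates `desired == actual`, and in ordinary Python a mere int/long type difference does not break `==` -- which is why a failure here suggests the new scalar type's comparison, not the value, is what misbehaves. A minimal illustration of that distinction in modern Python (the 26400L long type no longer exists in Python 3, so a float stands in here for "same value, different type"):

```python
# Value equality can hold even when the concrete types differ.  In the
# failure quoted above, assert_equal only evaluates `desired == actual`,
# so a pure type difference should not, by itself, trip it.
a = 26400          # plain int
b = float(26400)   # same value, different type

assert a == b                  # the assert_equal-style comparison passes
assert type(a) is not type(b)  # even though the types are not the same
print(a == b, type(a).__name__, type(b).__name__)
```

If the comparison fails despite the values matching, the suspect is the `==` implementation of the scalar type itself rather than assert_equal.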
>
> Best, Arnd
>
> ======================================================================
> ERROR: check_eye (scipy.sparse.tests.test_sparse.test_construct_utils)
> ----------------------------------------------------------------------
> Traceback (most recent call last):
>   File "/home/abaecker/BUILDS3/BuildDir/inst_numpy/lib/python2.4/site-packages/scipy/sparse/tests/test_sparse.py", line 626, in check_eye
>     a = speye(2, 3 )
>   File "/home/abaecker/BUILDS3/BuildDir/inst_numpy/lib/python2.4/site-packages/scipy/sparse/sparse.py", line 2810, in speye
>     return spdiags(diags, k, n, m)
>   File "/home/abaecker/BUILDS3/BuildDir/inst_numpy/lib/python2.4/site-packages/scipy/sparse/sparse.py", line 2792, in spdiags
>     return csc_matrix((a, rowa, ptra), dims=(M, N))
>   File "/home/abaecker/BUILDS3/BuildDir/inst_numpy/lib/python2.4/site-packages/scipy/sparse/sparse.py", line 574, in __init__
>     self._check()
>   File "/home/abaecker/BUILDS3/BuildDir/inst_numpy/lib/python2.4/site-packages/scipy/sparse/sparse.py", line 590, in _check
>     raise ValueError, "nzmax must not be less than nnz"
> ValueError: nzmax must not be less than nnz
>
> ======================================================================
> ERROR: check_identity (scipy.sparse.tests.test_sparse.test_construct_utils)
> ----------------------------------------------------------------------
> Traceback (most recent call last):
>   File "/home/abaecker/BUILDS3/BuildDir/inst_numpy/lib/python2.4/site-packages/scipy/sparse/tests/test_sparse.py", line 621, in check_identity
>     a = spidentity(3)
>   File "/home/abaecker/BUILDS3/BuildDir/inst_numpy/lib/python2.4/site-packages/scipy/sparse/sparse.py", line 2800, in spidentity
>     return spdiags( diags, 0, n, n )
>   File "/home/abaecker/BUILDS3/BuildDir/inst_numpy/lib/python2.4/site-packages/scipy/sparse/sparse.py", line 2792, in spdiags
>     return csc_matrix((a, rowa, ptra), dims=(M, N))
>   File "/home/abaecker/BUILDS3/BuildDir/inst_numpy/lib/python2.4/site-packages/scipy/sparse/sparse.py", line 574, in __init__
>     self._check()
>   File "/home/abaecker/BUILDS3/BuildDir/inst_numpy/lib/python2.4/site-packages/scipy/sparse/sparse.py", line 590, in _check
>     raise ValueError, "nzmax must not be less than nnz"
> ValueError: nzmax must not be less than nnz
>
> ======================================================================
> ERROR: check_add (scipy.sparse.tests.test_sparse.test_csc)
> ----------------------------------------------------------------------
> Traceback (most recent call last):
>   File "/home/abaecker/BUILDS3/BuildDir/inst_numpy/lib/python2.4/site-packages/scipy/sparse/tests/test_sparse.py", line 33, in setUp
>     self.datsp = self.spmatrix(self.dat)
>   File "/home/abaecker/BUILDS3/BuildDir/inst_numpy/lib/python2.4/site-packages/scipy/sparse/sparse.py", line 574, in __init__
>     self._check()
>   File "/home/abaecker/BUILDS3/BuildDir/inst_numpy/lib/python2.4/site-packages/scipy/sparse/sparse.py", line 590, in _check
>     raise ValueError, "nzmax must not be less than nnz"
> ValueError: nzmax must not be less than nnz
>
> ======================================================================
> ERROR: Check whether adding a dense matrix to a sparse matrix works
> ----------------------------------------------------------------------
> Traceback (most recent call last):
>   File "/home/abaecker/BUILDS3/BuildDir/inst_numpy/lib/python2.4/site-packages/scipy/sparse/tests/test_sparse.py", line 33, in setUp
>     self.datsp = self.spmatrix(self.dat)
>   File "/home/abaecker/BUILDS3/BuildDir/inst_numpy/lib/python2.4/site-packages/scipy/sparse/sparse.py", line 574, in __init__
>     self._check()
>   File "/home/abaecker/BUILDS3/BuildDir/inst_numpy/lib/python2.4/site-packages/scipy/sparse/sparse.py", line 590, in _check
>     raise ValueError, "nzmax must not be less than nnz"
> ValueError: nzmax must not be less than nnz
>
> ======================================================================
> ERROR: Check whether the copy=True and copy=False keywords work
> ----------------------------------------------------------------------
> Traceback (most recent call last):
>   File "/home/abaecker/BUILDS3/BuildDir/inst_numpy/lib/python2.4/site-packages/scipy/sparse/tests/test_sparse.py", line 33, in setUp
>     self.datsp = self.spmatrix(self.dat)
>   File "/home/abaecker/BUILDS3/BuildDir/inst_numpy/lib/python2.4/site-packages/scipy/sparse/sparse.py", line 574, in __init__
>     self._check()
>   File "/home/abaecker/BUILDS3/BuildDir/inst_numpy/lib/python2.4/site-packages/scipy/sparse/sparse.py", line 590, in _check
>     raise ValueError, "nzmax must not be less than nnz"
> ValueError: nzmax must not be less than nnz
>
> [....]
>
> ======================================================================
> ERROR: Solve: single precision
> ----------------------------------------------------------------------
> Traceback (most recent call last):
>   File "/home/abaecker/BUILDS3/BuildDir/inst_numpy/lib/python2.4/site-packages/scipy/linsolve/umfpack/tests/test_umfpack.py", line 67, in setUp
>     self.a = spdiags([[1, 2, 3, 4, 5], [6, 5, 8, 9, 10]], [0, 1], 5, 5)
>   File "/home/abaecker/BUILDS3/BuildDir/inst_numpy/lib/python2.4/site-packages/scipy/sparse/sparse.py", line 2792, in spdiags
>     return csc_matrix((a, rowa, ptra), dims=(M, N))
>   File "/home/abaecker/BUILDS3/BuildDir/inst_numpy/lib/python2.4/site-packages/scipy/sparse/sparse.py", line 574, in __init__
>     self._check()
>   File "/home/abaecker/BUILDS3/BuildDir/inst_numpy/lib/python2.4/site-packages/scipy/sparse/sparse.py", line 590, in _check
>     raise ValueError, "nzmax must not be less than nnz"
> ValueError: nzmax must not be less than nnz
>
> ======================================================================
> ERROR: check_exact (scipy.stats.tests.test_morestats.test_ansari)
>
> ----------------------------------------------------------------------
> Traceback (most recent call last):
>   File "/home/abaecker/BUILDS3/BuildDir/inst_numpy/lib/python2.4/site-packages/scipy/stats/tests/test_morestats.py", line 79, in check_exact
>     W,pval = stats.ansari([1,2,3,4],[15,5,20,8,10,12])
>   File "/home/abaecker/BUILDS3/BuildDir/inst_numpy/lib/python2.4/site-packages/scipy/stats/morestats.py", line 591, in ansari
>     pval = 2.0*sum(a1[:cind+1])/total
> TypeError: unsupported operand type(s) for *: 'float' and 'float32scalar'
>
> [....]
>
> ----------------------------------------------------------------------
> Ran 1119 tests in 1.151s
>
> FAILED (failures=7, errors=82)

On 32 bit, scipy.test(1) results in

======================================================================
ERROR: check_exact (scipy.stats.tests.test_morestats.test_ansari)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/usr/lib/python2.4/site-packages/scipy/stats/tests/test_morestats.py", line 79, in check_exact
    W,pval = stats.ansari([1,2,3,4],[15,5,20,8,10,12])
  File "/usr/lib/python2.4/site-packages/scipy/stats/morestats.py", line 591, in ansari
    pval = 2.0*sum(a1[:cind+1])/total
TypeError: unsupported operand type(s) for *: 'float' and 'float32scalar'

======================================================================
FAIL: check_simple_complex (scipy.linalg.tests.test_decomp.test_eigvals)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/usr/lib/python2.4/site-packages/scipy/linalg/tests/test_decomp.py", line 53, in check_simple_complex
    assert_array_almost_equal(w,exact_w)
  File "/usr/lib/python2.4/site-packages/numpy/testing/utils.py", line 231, in assert_array_almost_equal
    assert cond,\
AssertionError:
Arrays are not almost equal (mismatch 66.6666666667%):
        Array 1: [ 9.2983779e+00 +6.5630282e-01j  -4.1690526e-16 -9.3738002e-17j
  -2.9837791e-01 +3.4369718e-01j]
        Array 2: [ 1.+0.0705825j  0.+0.j  1.-1.1518855j]

======================================================================
FAIL: check_definition (scipy.fftpack.tests.test_basic.test_ifft)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/usr/lib/python2.4/site-packages/scipy/fftpack/tests/test_basic.py", line 183, in check_definition
    assert_array_almost_equal(y,y1)
  File "/usr/lib/python2.4/site-packages/numpy/testing/utils.py", line 231, in assert_array_almost_equal
    assert cond,\
AssertionError:
Arrays are not almost equal (mismatch 100.0%):
        Array 1: [ 2.5 +0.375j  0.0883883+0.0883883j  -0.125 -0.5j
  0.0883883-0.0883883j  -0.5 -0.375j  -0.0883883-0.0...
        Array 2: [ 1.+0.15j  1.+1.j  1.+4.j  1.-1.j  1.+0.75j
  1.+1.j  1.-0.5714286j  1.-1.j ]

======================================================================
FAIL: check_definition (scipy.fftpack.tests.test_basic.test_irfft)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/usr/lib/python2.4/site-packages/scipy/fftpack/tests/test_basic.py", line 340, in check_definition
    assert_array_almost_equal(y,y1)
  File "/usr/lib/python2.4/site-packages/numpy/testing/utils.py", line 231, in assert_array_almost_equal
    assert cond,\
AssertionError:
Arrays are not almost equal (mismatch 100.0%):
        Array 1: [ 2.625 -1.6856602 -0.375 -1.1856602  0.625  0.4356602 -0.375
  0.9356602]
        Array 2: [ 1. 1. 1. 1. 1. 1. 1. 1.]

======================================================================
FAIL: check_h1vp (scipy.special.tests.test_basic.test_h1vp)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/usr/lib/python2.4/site-packages/scipy/special/tests/test_basic.py", line 1118, in check_h1vp
    assert_almost_equal(h1,h1real,8)
  File "/usr/lib/python2.4/site-packages/numpy/testing/utils.py", line 148, in assert_almost_equal
    assert round(abs(desired - actual),decimal) == 0, msg
AssertionError:
Items are not equal:
DESIRED: (0.49812630170362004+63.055272295669901j)
ACTUAL: (1+126.58490844594493j)

======================================================================
FAIL: check_h2vp (scipy.special.tests.test_basic.test_h2vp)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/usr/lib/python2.4/site-packages/scipy/special/tests/test_basic.py", line 1125, in check_h2vp
    assert_almost_equal(h2,h2real,8)
  File "/usr/lib/python2.4/site-packages/numpy/testing/utils.py", line 148, in assert_almost_equal
    assert round(abs(desired - actual),decimal) == 0, msg
AssertionError:
Items are not equal:
DESIRED: (0.49812630170362004-63.055272295669901j)
ACTUAL: (1-126.58490844594493j)

======================================================================
FAIL: check_nils (scipy.linalg.tests.test_matfuncs.test_signm)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/usr/lib/python2.4/site-packages/scipy/linalg/tests/test_matfuncs.py", line 42, in check_nils
    assert_array_almost_equal(r,cr)
  File "/usr/lib/python2.4/site-packages/numpy/testing/utils.py", line 231, in assert_array_almost_equal
    assert cond,\
AssertionError:
Arrays are not almost equal (mismatch 100.0%):
        Array 1: [[ 20.728  -6.576  29.592  32.88  -6.576]
  [ -5.808   2.936  -8.712  -9.68   1.936]
  [ -6.24    2.08   -8.36  -10.4 ...
        Array 2: [[ 11.9493333  -2.2453333  15.3173333  21.6533333  -2.2453333]
  [ -3.8426667   0.4986667  -4.5906667  -7.1866667   0.498...

======================================================================
FAIL: check_bad (scipy.linalg.tests.test_matfuncs.test_sqrtm)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/usr/lib/python2.4/site-packages/scipy/linalg/tests/test_matfuncs.py", line 99, in check_bad
    assert_array_almost_equal(dot(esa,esa),a)
  File "/usr/lib/python2.4/site-packages/numpy/testing/utils.py", line 231, in assert_array_almost_equal
    assert cond,\
AssertionError:
Arrays are not almost equal (mismatch 100.0%):
        Array 1: [[ nan +nanj  nan +nanj  nan ...
        Array 2: [[ 1.       0.       0.       1.     ]
  [ 0.       0.03125  0.       0.     ]
  [ 0.       0.       0.03125  0.     ]
  [ ...

----------------------------------------------------------------------
Ran 1516 tests in 7.346s

FAILED (failures=7, errors=1)

and numpy.test(1) yields

======================================================================
FAIL: check_basic (numpy.lib.tests.test_function_base.test_prod)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/usr/lib/python2.4/site-packages/numpy/lib/tests/test_function_base.py", line 149, in check_basic
    assert_equal(prod(a),26400)
  File "/usr/lib/python2.4/site-packages/numpy/testing/utils.py", line 128, in assert_equal
    assert desired == actual, msg
AssertionError:
Items are not equal:
DESIRED: 26400
ACTUAL: 26400L

----------------------------------------------------------------------
Ran 363 tests in 0.935s

FAILED (failures=1)

Nils

From fullung at gmail.com  Wed May  3 04:18:01 2006
From: fullung at gmail.com (Albert Strasheim)
Date: Wed May 3 04:18:01 2006
Subject: [Numpy-discussion] Array descriptors not compared correctly causes asarray to copy data
Message-ID: <019401c66ea3$1c754590$0a84a8c0@dsp.sun.ac.za>

Hello all

There seems to be a problem with asarray when using more than one instance of
an equivalent dtype. Example code:

import numpy as N
print N.__version__

import time

a = N.dtype({'names' : ['i', 'j'], 'formats' : [N.float64, N.float64]})
b = N.dtype({'names' : ['i', 'j'], 'formats' : [N.float64, N.float64]})

print a == b
print a.descr == b.descr

_starttime = time.clock()
arr = N.zeros((5000,1000), dtype=a)
for x in arr:
    y = N.asarray(x, dtype=a)
print 'done in %f seconds' % (time.clock() - _starttime,)

_starttime = time.clock()
for x in arr:
    y = N.asarray(x, dtype=b)
print 'done in %f seconds' % (time.clock() - _starttime,)

On my system I get the following result:

0.9.7.2462
False
True
done in 0.153871 seconds
done in 8.726785 seconds

So while the descrs are equal, Python doesn't seem to think that the
dtypes are, which is probably causing the problem.

Trac ticket at http://projects.scipy.org/scipy/numpy/ticket/94.

Regards,

Albert

From schofield at ftw.at  Wed May  3 04:28:15 2006
From: schofield at ftw.at (Ed Schofield)
Date: Wed May 3 04:28:15 2006
Subject: [Numpy-discussion] Object array creation from sequences
Message-ID: <445894E0.7030303@ftw.at>

Hi all,

NumPy currently does the following:

>>> s = set([1, 100, 10])
>>> a = numpy.array(s)
>>> a
array(set([1, 100, 10]), dtype=object)
>>> a.shape
()

Many functions in NumPy's functional interface, like numpy.sort(), inherit
this behaviour:

>>> b = numpy.sort(s)
>>> b
array(set([1, 10, 100]), dtype=object)
>>> b.shape
()

I'd like to propose two modifications to improve array construction from
non-list sequences:

1. We inspect whether the data has a __len__ method. If it does, and it
returns an integer, we construct an array out of the C equivalent of
list(data).

Others on this list have noted that NumPy also creates a rank-0 object
array from generators:

>>> c = numpy.array(i*2 for i in xrange(10))
>>> c
array(<generator object at 0x...>, dtype=object)

This proposal wouldn't affect this case, since generators do not in
general have a __len__ attribute.

2.
A stronger version of the above proposal: if the data has an __iter__
method, we construct an array out of the C equivalent of list(data).
This would handle the generator case above correctly until we have a
more efficient implementation. Creating an array from an infinite
sequence would loop forever, just as list(inf_generator) does.

-- Ed

From schofield at ftw.at  Wed May  3 04:59:05 2006
From: schofield at ftw.at (Ed Schofield)
Date: Wed May 3 04:59:05 2006
Subject: [Numpy-discussion] Object array creation from sequences
In-Reply-To: <445894E0.7030303@ftw.at>
References: <445894E0.7030303@ftw.at>
Message-ID: <44589BD0.9060909@ftw.at>

Ed Schofield wrote:
>>>> s = set([1, 100, 10])
>>>> a = numpy.array(s)
>>>> a
> array(set([1, 100, 10]), dtype=object)
> [snip]
>>>> b = numpy.sort(s)
>>>> b
> array(set([1, 10, 100]), dtype=object)

Oops, the output I gave here was perhaps confusing. The set elements can
appear in arbitrary order, but the output here should be the same as above.

-- Ed

From umut.tabak at student.kuleuven.be  Wed May  3 08:34:10 2006
From: umut.tabak at student.kuleuven.be (umut tabak)
Date: Wed May 3 08:34:10 2006
Subject: [Numpy-discussion] Numpy installation failure
Message-ID: <4458CD34.6000604@student.kuleuven.be>

Dear all,

I am new to Python, though not new to the programming world. I think I
have the Numeric package, because the import seems OK:

>>> from Numeric import *
>>>

But when I try to view a picture file which is supplied in the install
and test documentation of numpy, it fails:

>>> view(greece)
Traceback (most recent call last):
  File "<stdin>", line 1, in ?
NameError: name 'view' is not defined

There it is written that the numpy directory must be under the demo
directory, but I do not have a demo directory under the python directory.
Something is missing, I guess, but I could not figure out what; actually
I did not have much time to search for the error :), so I hope for your
understanding.
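A note on the NameError above: going by the install notes umut quotes, view() apparently lives in a demo script under a demo directory, not in the Numeric package itself, so `from Numeric import *` will never define it. A quick, modern-Python way to check which of the packages mentioned in this thread are importable at all (importlib.util.find_spec is standard library; the package names are just the ones discussed here):

```python
import importlib.util

# find_spec() returns None for top-level packages that are not installed,
# without actually importing anything.
for name in ("numpy", "Numeric"):
    spec = importlib.util.find_spec(name)
    print(name, "is importable" if spec is not None else "is not installed")
```

On a machine without the old Numeric package this prints "Numeric is not installed", which separates "package missing" from "name not defined by the package" -- the latter being umut's actual problem.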
I am seeking a step-by-step installation description, so your help is
very much appreciated.

Regards,

U.T.

From ndarray at mac.com  Wed May  3 08:49:23 2006
From: ndarray at mac.com (Sasha)
Date: Wed May 3 08:49:23 2006
Subject: [Numpy-discussion] Test fails for rev. 2473
In-Reply-To: <200605031014.30834.a.u.r.e.l.i.a.n@gmx.net>
References: <200605031014.30834.a.u.r.e.l.i.a.n@gmx.net>
Message-ID:

On 5/3/06, Johannes Loehnert wrote:

> [snip]
> "/scratch/jloehnert/python-svn/lib/python2.3/site-packages/numpy/testing/numpytest.py",
> line 185, in rundocs
>     tests = doctest.DocTestFinder().find(m)
> AttributeError: 'module' object has no attribute 'DocTestFinder'

It looks like this test expects the Python 2.4 version of doctest. If we
are going to support 2.3, this is a bug.

From fullung at gmail.com  Wed May  3 09:00:13 2006
From: fullung at gmail.com (Albert Strasheim)
Date: Wed May 3 09:00:13 2006
Subject: [Numpy-discussion] Test fails for rev. 2473
In-Reply-To:
Message-ID: <020401c66eca$94a57b80$0a84a8c0@dsp.sun.ac.za>

Hello all

> -----Original Message-----
> From: numpy-discussion-admin at lists.sourceforge.net [mailto:numpy-
> discussion-admin at lists.sourceforge.net] On Behalf Of Sasha
> Sent: 03 May 2006 17:49
> To: Johannes Loehnert
> Cc: numpy-discussion at lists.sourceforge.net
> Subject: Re: [Numpy-discussion] Test fails for rev. 2473
>
> On 5/3/06, Johannes Loehnert wrote:
>
> > [snip]
> > "/scratch/jloehnert/python-svn/lib/python2.3/site-packages/numpy/testing/numpytest.py",
> > line 185, in rundocs
> >     tests = doctest.DocTestFinder().find(m)
> > AttributeError: 'module' object has no attribute 'DocTestFinder'
>
> It looks like this test expects the Python 2.4 version of doctest. If we
> are going to support 2.3, this is a bug.

Are doctests really worth it? They are reasonably useful when developing
new code, but "normal" tests might be better once the code is done.
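To make the doctest-versus-"normal"-test contrast concrete, here is a sketch of the same checks written both ways; the function is a made-up example, not anything from NumPy's actual test suite:

```python
import doctest
import unittest


def log2_int(n):
    """Integer part of the base-2 logarithm of a positive integer.

    The examples below are a doctest:

    >>> log2_int(8)
    3
    >>> log2_int(1)
    0
    """
    count = 0
    while n > 1:
        n //= 2
        count += 1
    return count


class TestLog2Int(unittest.TestCase):
    """The same checks as a "normal" test, visible to tools like trace.py."""

    def test_values(self):
        self.assertEqual(log2_int(8), 3)
        self.assertEqual(log2_int(1), 0)


# Run both flavours: testmod() picks up the docstring examples; the
# loader/runner pair picks up the TestCase without calling unittest.main().
doctest.testmod()
suite = unittest.defaultTestLoader.loadTestsFromTestCase(TestLog2Int)
assert unittest.TextTestRunner(verbosity=0).run(suite).wasSuccessful()
```

Under Python 2.3, where doctest.DocTestFinder does not exist, only the unittest half of this would run unchanged -- which is roughly the portability argument for "normal" tests.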
I also found out that they break trace.py, a very useful tool included
with Python for determining code coverage -- knowledge that would be very
useful for identifying areas where NumPy's test suite can be improved.

I prepared two patches to turn the existing doctests into "normal" tests:

http://projects.scipy.org/scipy/numpy/ticket/87
http://projects.scipy.org/scipy/numpy/ticket/88

Regards,

Albert

From pearu at scipy.org  Wed May  3 09:04:02 2006
From: pearu at scipy.org (Pearu Peterson)
Date: Wed May 3 09:04:02 2006
Subject: [Numpy-discussion] Test fails for rev. 2473
In-Reply-To:
References: <200605031014.30834.a.u.r.e.l.i.a.n@gmx.net>
Message-ID:

On Wed, 3 May 2006, Sasha wrote:

> On 5/3/06, Johannes Loehnert wrote:
>
>> [snip]
>> "/scratch/jloehnert/python-svn/lib/python2.3/site-packages/numpy/testing/numpytest.py",
>> line 185, in rundocs
>>     tests = doctest.DocTestFinder().find(m)
>> AttributeError: 'module' object has no attribute 'DocTestFinder'
>
> It looks like this test expects the Python 2.4 version of doctest. If we
> are going to support 2.3, this is a bug.

The bug is fixed in svn.

Thanks,
Pearu

From tim.hochberg at cox.net  Wed May  3 09:20:05 2006
From: tim.hochberg at cox.net (Tim Hochberg)
Date: Wed May 3 09:20:05 2006
Subject: [Numpy-discussion] Test fails for rev. 2473
In-Reply-To: <020401c66eca$94a57b80$0a84a8c0@dsp.sun.ac.za>
References: <020401c66eca$94a57b80$0a84a8c0@dsp.sun.ac.za>
Message-ID: <4458D85F.5040202@cox.net>

Albert Strasheim wrote:
> Hello all
>
>> -----Original Message-----
>> From: numpy-discussion-admin at lists.sourceforge.net [mailto:numpy-
>> discussion-admin at lists.sourceforge.net] On Behalf Of Sasha
>> Sent: 03 May 2006 17:49
>> To: Johannes Loehnert
>> Cc: numpy-discussion at lists.sourceforge.net
>> Subject: Re: [Numpy-discussion] Test fails for rev.
2473
>>
>> On 5/3/06, Johannes Loehnert wrote:
>>
>>> [snip]
>>> "/scratch/jloehnert/python-svn/lib/python2.3/site-packages/numpy/testing/numpytest.py",
>>> line 185, in rundocs
>>>     tests = doctest.DocTestFinder().find(m)
>>> AttributeError: 'module' object has no attribute 'DocTestFinder'
>>
>> It looks like this test expects the Python 2.4 version of doctest. If we
>> are going to support 2.3, this is a bug.
>
> Are doctests really worth it? They are reasonably useful when developing new
> code, but "normal" tests might be better once the code is done. I also found
> out that they break trace.py, a very useful tool included with Python for
> determining code coverage -- knowledge that would be very useful for
> identifying areas where NumPy's test suite can be improved.
>
> I prepared two patches to turn the existing doctests into "normal" tests:
>
> http://projects.scipy.org/scipy/numpy/ticket/87
> http://projects.scipy.org/scipy/numpy/ticket/88

In general, I'm not a big fan of turning doctests into "normal" tests.
Any time you muck with a working test you have to worry about introducing
bugs into the test suite. In addition, doctests are frequently, although
certainly not always, clearer than their unit test counterparts. It seems
that the time would be better spent fixing trace to do the right thing in
the presence of doctests. Then again, you've already spent the time, so
it's too late to unspend it, I suppose.

-tim

From fullung at gmail.com  Wed May  3 09:41:08 2006
From: fullung at gmail.com (Albert Strasheim)
Date: Wed May 3 09:41:08 2006
Subject: [Numpy-discussion] Test fails for rev. 2473
In-Reply-To: <4458D85F.5040202@cox.net>
Message-ID: <020e01c66ed0$421b0dc0$0a84a8c0@dsp.sun.ac.za>

Hello Tim

> -----Original Message-----
> From: Tim Hochberg [mailto:tim.hochberg at cox.net]
> Sent: 03 May 2006 18:21
> To: Albert Strasheim
> Cc: numpy-discussion at lists.sourceforge.net
> Subject: Re: [Numpy-discussion] Test fails for rev.
2473 > > Albert Strasheim wrote: > >I prepared two patches to turn the existing doctests into "normal" tests: > > > >http://projects.scipy.org/scipy/numpy/ticket/87 > >http://projects.scipy.org/scipy/numpy/ticket/88 > > > > > > > In general, I'm not a big fan of turning doctests into "normal" tests. > Any time you muck with a working test you have to worry about > introducing bugs into the test suite. In addition doctests are > frequently, although certainly not always. clearer than their unit test > counterparts. It seems that the time would be better spent fixing trace > to do the right thing in the presence of doctests. Then again, you've > already spent the time so it's to late to unspend it I suppose. I agree with you -- rewriting tests is suboptimal. However, in the presence of not-perfect-yet tools I prefer to adapt my way of working so that I can still use the tool instead of hoping that the tool will get fixed at some undefined time in the future. Anyway, as I noted in ticket #87, the last doctest in test_ufunclike might be hiding a bug or there might be floating point math issues I'm unaware of. When I run my "normal" tests for log2, I get: Traceback (most recent call last): File "numpybug3.py", line 7, in ? assert_array_equal(b, array([2.169925, 1.20163386, 2.70043972])) File "C:\Python24\Lib\site-packages\numpy\testing\utils.py", line 204, in assert_array_equal assert cond,\ AssertionError: Arrays are not equal (mismatch 100.0%): Array 1: [ 2.1699250014423126 1.2016338611696504 2.7004397181410922] Array 2: [ 2.1699250000000001 1.2016338600000001 2.7004397199999999] Is this the expected behaviour? Regards, Albert From tim.hochberg at cox.net Wed May 3 10:00:13 2006 From: tim.hochberg at cox.net (Tim Hochberg) Date: Wed May 3 10:00:13 2006 Subject: [Numpy-discussion] Test fails for rev. 
2473 In-Reply-To: <020e01c66ed0$421b0dc0$0a84a8c0@dsp.sun.ac.za> References: <020e01c66ed0$421b0dc0$0a84a8c0@dsp.sun.ac.za> Message-ID: <4458E1DD.9070008@cox.net> Albert Strasheim wrote: >Hello Tim > > > >>-----Original Message----- >>From: Tim Hochberg [mailto:tim.hochberg at cox.net] >>Sent: 03 May 2006 18:21 >>To: Albert Strasheim >>Cc: numpy-discussion at lists.sourceforge.net >>Subject: Re: [Numpy-discussion] Test fails for rev. 2473 >> >>Albert Strasheim wrote: >> >> > > > > > >>>I prepared two patches to turn the existing doctests into "normal" tests: >>> >>>http://projects.scipy.org/scipy/numpy/ticket/87 >>>http://projects.scipy.org/scipy/numpy/ticket/88 >>> >>> >>> >>> >>> >>In general, I'm not a big fan of turning doctests into "normal" tests. >>Any time you muck with a working test you have to worry about >>introducing bugs into the test suite. In addition doctests are >>frequently, although certainly not always. clearer than their unit test >>counterparts. It seems that the time would be better spent fixing trace >>to do the right thing in the presence of doctests. Then again, you've >>already spent the time so it's to late to unspend it I suppose. >> >> > >I agree with you -- rewriting tests is suboptimal. However, in the presence >of not-perfect-yet tools I prefer to adapt my way of working so that I can >still use the tool instead of hoping that the tool will get fixed at some >undefined time in the future. > > I just googled and there seem to be some patches relating to trace.py and doctest, but not being a user of trace, I don't really know what its issues are with doctest, so I'm unsure if they would address the issue you are having. >Anyway, as I noted in ticket #87, the last doctest in test_ufunclike might >be hiding a bug or there might be floating point math issues I'm unaware of. >When I run my "normal" tests for log2, I get: > >Traceback (most recent call last): > File "numpybug3.py", line 7, in ? 
> assert_array_equal(b, array([2.169925, 1.20163386, 2.70043972])) > File "C:\Python24\Lib\site-packages\numpy\testing\utils.py", line 204, in >assert_array_equal > assert cond,\ >AssertionError: >Arrays are not equal (mismatch 100.0%): > Array 1: [ 2.1699250014423126 1.2016338611696504 >2.7004397181410922] > Array 2: [ 2.1699250000000001 1.2016338600000001 >2.7004397199999999] > >Is this the expected behaviour? > > I think that this is a non-bug; it's just that the regular test is more picky here. Take a look at this: >>> a = n.array([ 2.1699250014423126, 1.2016338611696504,2.7004397181410922]) >>> a array([ 2.169925 , 1.20163386, 2.70043972]) >>> b = n.array([ 2.169925 , 1.20163386, 2.70043972]) >>> b array([ 2.169925 , 1.20163386, 2.70043972]) >>> a == b array([False, False, False], dtype=bool) >>> [x for x in a] [2.1699250014423126, 1.2016338611696504, 2.7004397181410922] >>> [x for x in b] [2.1699250000000001, 1.2016338600000001, 2.7004397199999999] So what we have here is that numpy displays fewer digits than it actually knows. Presumably this doctest was pasted in from the interpreter, where not all the digits show up; when you moved it to a unittest that compares the actual numbers, things started to fail. Regards, -tim From faltet at carabos.com Wed May 3 10:27:12 2006 From: faltet at carabos.com (Francesc Altet) Date: Wed May 3 10:27:12 2006 Subject: [Numpy-discussion] ANN: PyTables 1.3.1 - Hierarchical datasets Message-ID: <200605031925.51430.faltet@carabos.com> =========================== Announcing PyTables 1.3.1 =========================== This is a new minor release of PyTables. In it, you will find support for NumPy integers as indexes of datasets and many bug fixes. *Important note*: one of the fixes addresses a bug in the flushing of I/O buffers that was introduced back in PyTables 1.1. So, for those of you that want to improve the integrity of the PyTables files during unexpected crashes, an upgrade is strongly encouraged.
Go to the PyTables web site for downloading the beast: http://www.pytables.org/ or keep reading for more info about the new features and bugs fixed. Changes more in depth ===================== Improvements: - NumPy integer scalars are supported as indexes for ``__getitem__()`` and ``__setitem__()`` methods in ``Leaf`` objects. In addition, any object that exposes the __index__() method (see PEP 357) is supported as well. This latter feature will be more useful for those of you that start using Python 2.5. Meanwhile, PyTables will use its own guessing (quite trivial, in fact) in order to convert indexes to 64-bit integers (you know, PyTables does support 64-bit indexes even on 32-bit platforms). Bug fixes: - ``Leaf.flush()`` didn't force an actual flush on-disk at HDF5 level, raising the chances of leaving the file in an inconsistent state during an unexpected shutdown of the program. Now, it works as it should. This bug was around from PyTables 1.1 on. Thanks to Andrew Straw for complaining about this persistently enough. ;-) - The code for recognizing a leaf class in a native HDF5 file was not aware of the ``trMap`` option of ``openFile()``, thus giving potentially scary error messages. This has been fixed. - When an iterator was used on an unbound table, PyTables crashed. A workaround has been implemented so as to avoid this. In addition, a better solution has been devised, but as it requires an in-depth refactoring, it will be delivered with the PyTables 1.4 series. - Added an ``Enum.__ne__()`` method, so that two equal enumerations no longer compare as both equal (``==``) and unequal (``!=``) at the same time. Closes ticket #8 (thanks to Ashley Walsh). - Work around a Python bug when storing floats in attributes under some locales. See ticket #9 (thanks to Fabio Zadrozny). Deprecated features: - None Backward-incompatible changes: - Please see the ``RELEASE-NOTES.txt`` file.
Important note for Windows users ================================ If you want to use PyTables with Python 2.4 on Windows platforms, you will need to get the HDF5 library compiled for MSVC 7.1, aka .NET 2003. It can be found at: ftp://ftp.ncsa.uiuc.edu/HDF/HDF5/current/bin/windows/5-165-win-net.ZIP Users of Python 2.3 on Windows will have to download the version of HDF5 compiled with MSVC 6.0 available at: ftp://ftp.ncsa.uiuc.edu/HDF/HDF5/current/bin/windows/5-165-win.ZIP What it is ========== **PyTables** is a package for managing hierarchical datasets, designed to efficiently cope with extremely large amounts of data (with support for full 64-bit file addressing). It features an object-oriented interface that, combined with C extensions for the performance-critical parts of the code, makes it a very easy-to-use tool for high performance data storage and retrieval. PyTables runs on top of the HDF5 library and the numarray package (NumPy and Numeric are also supported) for achieving maximum throughput and convenient use. Moreover, PyTables I/O for table objects is buffered, implemented in C and carefully tuned so that you can reach much better performance with PyTables than with your own home-grown wrappings to the HDF5 library. PyTables sports indexing capabilities as well, allowing selections in tables exceeding one billion rows in just seconds. Platforms ========= This version has been extensively checked on quite a few platforms, like Linux on Intel32 (Pentium), Win on Intel32 (Pentium), Linux on Intel64 (Itanium2), FreeBSD on AMD64 (Opteron), Linux on PowerPC (and PowerPC64) and MacOSX on PowerPC. For other platforms, chances are that the code can be easily compiled and run without further issues. Please contact us if you experience problems.
Resources ========= Go to the PyTables web site for more details: http://www.pytables.org About the HDF5 library: http://hdf.ncsa.uiuc.edu/HDF5/ About numarray: http://www.stsci.edu/resources/software_hardware/numarray To know more about the company behind the PyTables development, see: http://www.carabos.com/ Acknowledgments =============== Thanks to the various users who provided feature improvements, patches, bug reports, support and suggestions. See the ``THANKS`` file in the distribution package for an (incomplete) list of contributors. Many thanks also to SourceForge, which has helped to make and distribute this package! And last but not least, a big thank you to THG (http://www.hdfgroup.org/) for sponsoring many of the new features recently introduced in PyTables. Share your experience ===================== Let us know of any bugs, suggestions, gripes, kudos, etc. you may have. ---- **Enjoy data!** -- The PyTables Team From nodwell at physics.ubc.ca Wed May 3 11:44:20 2006 From: nodwell at physics.ubc.ca (Eric Nodwell) Date: Wed May 3 11:44:20 2006 Subject: [Numpy-discussion] extension to io.read_array : parse strings Message-ID: I've made a small modification to import_array.py so that I can parse a string as if it were a file. For example: >>> B = """# comment line ... 3 4 5 ... 2 1 5""" >>> io.read_array(B, datainstring=1) array([[ 3., 4., 5.], [ 2., 1., 5.]]) I rather expect that there already exists a standard way to do this, and if so, I would appreciate it if someone could illustrate. Otherwise, if no convenient way of parsing strings into arrays exists at the moment, and if anyone is of the opinion that this might be generally useful, then I will submit this as a patch. (For my own purposes I needed to parse web form input.)
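One standard route that needs no change to read_array is to wrap the string in a file-like object, since the reader only needs something it can pull lines from. The sketch below makes the same point with a plain parser standing in for scipy.io.read_array (the helper name parse_array_string is made up for illustration; the StringIO class was in the cStringIO/StringIO modules in the Python of this era):

```python
from io import StringIO  # StringIO/cStringIO modules in Python 2.3/2.4

def parse_array_string(s):
    """Parse whitespace-separated numbers, skipping '#' comment lines."""
    rows = []
    for line in StringIO(s):  # StringIO makes the string act like a file
        line = line.strip()
        if line and not line.startswith('#'):
            rows.append([float(v) for v in line.split()])
    return rows

B = """# comment line
3 4 5
2 1 5"""
print(parse_array_string(B))  # -> [[3.0, 4.0, 5.0], [2.0, 1.0, 5.0]]
```

The same StringIO-wrapping trick should work with any loader that accepts an open file object rather than a filename.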
Eric From tom.denniston at alum.dartmouth.org Wed May 3 13:01:12 2006 From: tom.denniston at alum.dartmouth.org (Tom Denniston) Date: Wed May 3 13:01:12 2006 Subject: [Numpy-discussion] numpy imports Message-ID: When I import numpy in python -v mode I notice that it imports a slew of modules. Is this intentional? Are the implicit dependencies necessary or have I just misconfigured or built something wrong? Has anyone noticed the same behavior? --Tom From robert.kern at gmail.com Wed May 3 13:15:01 2006 From: robert.kern at gmail.com (Robert Kern) Date: Wed May 3 13:15:01 2006 Subject: [Numpy-discussion] Re: numpy imports In-Reply-To: References: Message-ID: Tom Denniston wrote: > When I import numpy in python -v mode I notice that it imports a slew > of modules. Is this intentional? More or less. There is quite a bit of accumulated history from Numeric and scipy_base that numpy is trying to stay relatively close to API-wise. numpy et al. are frequently used from the interactive prompt, probably more so than many other Python packages. There is an expectation that "import numpy" makes nearly everything public in the package accessible (maybe a dot or two down, but still accessible). Unfortunately, that entails a lot of internal imports. Paring down that list of imports in any significant manner is going to break code. I wish we could, but I don't think it's practical at this point in time. -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco
From kwgoodman at gmail.com Wed May 3 15:21:10 2006 From: kwgoodman at gmail.com (Keith Goodman) Date: Wed May 3 15:21:10 2006 Subject: [Numpy-discussion] numpy imports In-Reply-To: References: Message-ID: On 5/3/06, Tom Denniston wrote: > When I import numpy in python -v mode I notice that it imports a slew > of modules. Is this intentional? Are the implicit dependencies > necessary or have I just misconfigured or built something wrong? Has > anyone noticed the same behavior? > > --Tom > > > ------------------------------------------------------- > Using Tomcat but need to do more? Need to support web services, security? > Get stuff done quickly with pre-integrated technology to make your job easier > Download IBM WebSphere Application Server v.1.0.1 based on Apache Geronimo > http://sel.as-us.falkag.net/sel?cmdlnk&kid0709&bid&3057&dat1642 > _______________________________________________ > Numpy-discussion mailing list > Numpy-discussion at lists.sourceforge.net > https://lists.sourceforge.net/lists/listinfo/numpy-discussion > Where does the IBM ad come from? Are ads attached to the email I send to this list? The ad does not appear in the archive: http://sourceforge.net/mailarchive/forum.php?thread_id=10299357&forum_id=4890 From robert.kern at gmail.com Wed May 3 15:25:04 2006 From: robert.kern at gmail.com (Robert Kern) Date: Wed May 3 15:25:04 2006 Subject: [Numpy-discussion] Re: numpy imports In-Reply-To: References: Message-ID: Keith Goodman wrote: >> ------------------------------------------------------- >> Using Tomcat but need to do more? Need to support web services, security?
>> Get stuff done quickly with pre-integrated technology to make your job >> easier >> Download IBM WebSphere Application Server v.1.0.1 based on Apache >> Geronimo >> http://sel.as-us.falkag.net/sel?cmdlnk&kid0709&bid&3057&dat1642 >> _______________________________________________ >> Numpy-discussion mailing list >> Numpy-discussion at lists.sourceforge.net >> https://lists.sourceforge.net/lists/listinfo/numpy-discussion >> > > Where does the IBM ad come from? Are ads attached to the email I send > to this list? > > The ad does not appear in the archive: > > http://sourceforge.net/mailarchive/forum.php?thread_id=10299357&forum_id=4890 Sourceforge adds them. -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From ndarray at mac.com Wed May 3 15:44:14 2006 From: ndarray at mac.com (Sasha) Date: Wed May 3 15:44:14 2006 Subject: [Numpy-discussion] Doctests vs. unittests Message-ID: In a recent thread (see "Test fails for rev. 2473") Albert suggested that unittests should be preferred over doctests. I disagree and would like to hear some opinions on this topic. I believe unittests and doctests serve two distinct purposes. Unittests should provide broader coverage than doctests; in particular, tests for rarely used corner cases, or tests added when a bug is fixed, should be written as unittests. Doctests should mostly be used as automatically tested examples in docstrings. Doctests should be selected primarily based on their readability and relevance to the rest of the docstring rather than completeness of test coverage. There is one use of doctests that may justify conversion to unittests. We should encourage users to submit tests, and the doctest format is much more accessible than unittest.
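As a concrete illustration of the kind of docstring doctest meant here — a readable example that doubles as an automatically checked test (clip is a made-up toy function, not a numpy one; DocTestFinder is the same class the rundocs traceback earlier in this thread refers to):

```python
import doctest

def clip(x, lo, hi):
    """Clamp x to the closed interval [lo, hi].

    >>> clip(5, 0, 10)
    5
    >>> clip(-3, 0, 10)
    0
    >>> clip(42, 0, 10)
    10
    """
    return max(lo, min(hi, x))

# DocTestFinder extracts the examples from the docstring;
# the runner executes them and tallies tries/failures.
runner = doctest.DocTestRunner()
for test in doctest.DocTestFinder().find(clip, 'clip'):
    runner.run(test)
print(runner.tries, runner.failures)  # -> 3 0
```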
Doctests that appear in separate test files rather than as examples in docstrings should probably be converted eventually, but this should not discourage anyone from writing the tests as doctests in the first place. From oliphant at ee.byu.edu Wed May 3 22:39:11 2006 From: oliphant at ee.byu.edu (Travis Oliphant) Date: Wed May 3 22:39:11 2006 Subject: [Numpy-discussion] lexsort In-Reply-To: References: <200605021826.21114.faltet@carabos.com> <200605022027.59369.faltet@carabos.com> Message-ID: <4459937A.2080108@ee.byu.edu> Charles R Harris wrote: > Hi, > > On 5/2/06, *Francesc Altet* > wrote: > > On Tuesday 02 May 2006 19:36, Charles R Harris wrote: > > Francesc, > > > > Completely off topic, but are you aware of the lexsort function > in numpy > > and numarray? It is like argsort but takes a list of > (vector) keys and > > performs a stable sort on each key in turn, so for record arrays > you can > > get the effect of sorting on column A, then column B, etc. I > thought I > > would mention it because you seem to use argsort a lot and, > well, because I > > wrote it ;) > > Thanks for pointing this out. In fact, I had no idea of this > capability in numarray (nor numpy). I'll have to look more > carefully into this to fully realize the kind of things that can be > done with it. But it seems very promising anyway :-) > > > > As an example: > > In [21]: a > Out[21]: > array([[0, 1], > [1, 0], > [1, 1], > [0, 1], > [1, 0]]) > > In [22]: a[lexsort((a[:,1],a[:,0]))] > Out[22]: > array([[0, 1], > [0, 1], > [1, 0], > [1, 0], > [1, 1]]) > > > Hmm, I notice that lexsort requires a tuple and won't accept a list. I > wonder if there is a good reason for that. Travis? > Can't think of one right now except haste in coding... -Travis From arnd.baecker at web.de Thu May 4 00:34:13 2006 From: arnd.baecker at web.de (Arnd Baecker) Date: Thu May 4 00:34:13 2006 Subject: [Numpy-discussion] profiling code with hotshot and Numeric/numpy/scipy/...
Message-ID: Hi, I am trying to profile some code which I am converting from Numeric to numpy. However, the whole profiling seems to break down on the level of hotshot, depending on whether the code is run with Numeric/numpy or (old) scipy. For the attached test I get:
- old scipy: all is fine!
But all these fail:
- Numeric 24.2
- numpy version: 0.9.7.2256
- scipy version: 0.4.8
"Failing" means that I don't get a break-down on which routine takes how much time. First I would be interested whether someone else sees the same behaviour, or if we screwed up something with our installation. If this is reproducible, the question is whether one can do something about it ... Any help and ideas are very welcome! Best, Arnd

python test_profile_problem.py scipy
#############################################
scipy version: 0.3.2
scipy case: Size: 79776
         18001 function calls in 0.660 CPU seconds
   Ordered by: internal time, call count
   ncalls  tottime  percall  cumtime  percall filename:lineno(function)
        1    0.269    0.269    0.660    0.660 test_profile_problem.py:38(main)
     1000    0.118    0.000    0.235    0.000 scimath.py:31(log)
     2000    0.068    0.000    0.129    0.000 type_check.py:96(isreal)
     2000    0.053    0.000    0.057    0.000 type_check.py:86(imag)
     6000    0.039    0.000    0.039    0.000 type_check.py:12(asarray)
     1000    0.036    0.000    0.156    0.000 scimath.py:25(sqrt)
     2000    0.034    0.000    0.034    0.000 Numeric.py:583(ravel)
     2000    0.027    0.000    0.027    0.000 Numeric.py:655(sometrue)
     2000    0.017    0.000    0.078    0.000 function_base.py:36(any)
        0    0.000             0.000          profile:0(profiler)

python test_profile_problem.py Numeric
#############################################
Numeric version: 24.2
Numeric case: Size: 1374
         1 function calls in 0.385 CPU seconds
   Ordered by: internal time, call count
   ncalls  tottime  percall  cumtime  percall filename:lineno(function)
        1    0.385    0.385    0.385    0.385 test_profile_problem.py:38(main)
        0    0.000             0.000          profile:0(profiler)

python test_profile_problem.py numpy
#############################################
numpy version: 0.9.7.2256
numpy case: Size:
1426
         1 function calls in 0.346 CPU seconds
   Ordered by: internal time, call count
   ncalls  tottime  percall  cumtime  percall filename:lineno(function)
        1    0.346    0.346    0.346    0.346 test_profile_problem.py:38(main)
        0    0.000             0.000          profile:0(profiler)

python test_profile_problem.py new-scipy
#############################################
scipy version: 0.4.8
new-scipy case: Size: 1426
         1 function calls in 0.327 CPU seconds
   Ordered by: internal time, call count
   ncalls  tottime  percall  cumtime  percall filename:lineno(function)
        1    0.327    0.327    0.327    0.327 test_profile_problem.py:38(main)
        0    0.000             0.000          profile:0(profiler)
-------------- next part -------------- A non-text attachment was scrubbed... Name: test_profile_problem.py Type: text/x-python Size: 1597 bytes Desc: URL: From schofield at ftw.at Thu May 4 01:31:15 2006 From: schofield at ftw.at (Ed Schofield) Date: Thu May 4 01:31:15 2006 Subject: [Numpy-discussion] profiling code with hotshot and Numeric/numpy/scipy/... In-Reply-To: References: Message-ID: <4459BCDE.6070002@ftw.at> Arnd Baecker wrote: > Hi, > > I am trying to profile some code which I am converting from > Numeric to numpy. > However, the whole profiling seems to break down on the level of hotshot, > depending on whether the code is run with Numeric/numpy or (old) scipy. > > For the attached test I get: > - old scipy: all is fine! > But all these fail > - Numeric 24.2: > - numpy version: 0.9.7.2256 > - scipy version: 0.4.8 > > "Failing" means that I don't get a break-down on which routine takes how > much time. > > First I would be interested whether someone else sees the same behaviour, > or if we screwed up something with our installation. > I've had trouble with this too. I get more meaningful results using prof.run('function()') instead of prof.runcall. I wrote myself a little wrapper function, which I'll include below. But I'm still mystified why hotshot's runcall doesn't work ...
-- Ed

-----------------------

import hotshot, hotshot.stats  # needed by the wrapper below

def profilerun(function, logfilename='temp.prof'):
    """A nice wrapper for the hotshot profiler.

    Usage:
        profilerun('my_statement')

    Example:
    >>> from scipy.linalg import inv
    >>> from numpy import rand
    >>> def timewaste(arg1=None, arg2=None):
    >>>     print "Arguments 1 and 2 are: " + str(arg1) + " and " + str(arg2)
    >>>     a = rand(1000,1000)
    >>>     b = inv(a)
    >>>
    >>> profilerun('timewaste()')

    Example output:
             7 function calls in 0.917 CPU seconds

       Ordered by: internal time, call count

       ncalls  tottime  percall  cumtime  percall filename:lineno(function)
            1    0.916    0.916    0.917    0.917 basic.py:176(inv)
            1    0.000    0.000    0.000    0.000 function_base.py:162(asarray_chkfinite)
            1    0.000    0.000    0.917    0.917 <console>:1(timewaste)
            1    0.000    0.000    0.000    0.000 __init__.py:28(get_lapack_funcs)
            1    0.000    0.000    0.000    0.000 _internal.py:28(__init__)
            1    0.000    0.000    0.000    0.000 numeric.py:70(asarray)
            1    0.000    0.000    0.000    0.000 _internal.py:36(__getitem__)
            0    0.000             0.000          profile:0(profiler)
    """
    prof = hotshot.Profile(logfilename)
    output = prof.run(function)
    print "Output of function is:"
    print output
    prof.close()
    stats = hotshot.stats.load(logfilename)
    stats.strip_dirs()
    stats.sort_stats('time', 'calls')
    stats.print_stats()

From arnd.baecker at web.de Thu May 4 01:32:15 2006 From: arnd.baecker at web.de (Arnd Baecker) Date: Thu May 4 01:32:15 2006 Subject: [Numpy-discussion] profiling code with hotshot and Numeric/numpy/scipy/... In-Reply-To: <4459B603.2060304@ftw.at> References: <4459B603.2060304@ftw.at> Message-ID: Hi Ed, On Thu, 4 May 2006, Ed Schofield wrote: > Arnd Baecker wrote: > > Hi, > > > > I am trying to profile some code which I am converting from > > Numeric to numpy. > > However, the whole profiling seems to break down on the level of hotshot, > > depending on whether the code is run with Numeric/numpy or (old) scipy. > > > > For the attached test I get: > > - old scipy: all is fine!
> > But all these fail > > - Numeric 24.2: > > - numpy version: 0.9.7.2256 > > - scipy version: 0.4.8 > > > > "Failing" means that I don't get a break-down on which routine takes how > > much time. > > > > First I would be interested whether someone else sees the same behaviour, > > or if we screwed up something with our installation. > > > > I've had trouble with this too. I get more meaningful results using > prof.run('function()') instead of prof.runcall. I wrote myself a little > wrapper function, which I'll include below. Thanks for the wrapper - but it seems that it does not help in my case: in my script I replaced prof.runcall(main) with prof.run("main()") and still see the same output, i.e. no information in the Numeric/numpy/scipy cases. For your example (with the corresponding modifications) I get meaningful results in all cases. Completely puzzled ... > But I'm still mystified why hotshot's runcall doesn't work ... Something really weird must be going on. Thanks, Arnd > ----------------------- > > > def profilerun(function, logfilename='temp.prof'): > """A nice wrapper for the hotshot profiler.
> Usage: > profilerun('my_statement') > > Example: > >>> from scipy.linalg import inv > >>> from numpy import rand > >>> def timewaste(arg1=None, arg2=None): > >>> print "Arguments 1 and 2 are: " + str(arg1) + " and " + > str(arg2) > >>> a = rand(1000,1000) > >>> b = linalg.inv(a) > >>> > >>> profilerun('timewaste()') > > Example output: > 7 function calls in 0.917 CPU seconds > > Ordered by: internal time, call count > > ncalls tottime percall cumtime percall filename:lineno(function) > 1 0.916 0.916 0.917 0.917 basic.py:176(inv) > 1 0.000 0.000 0.000 0.000 > function_base.py:162(asarray_chkfinite) > 1 0.000 0.000 0.917 0.917 <console>:1(timewaste) > 1 0.000 0.000 0.000 0.000 > __init__.py:28(get_lapack_funcs) > 1 0.000 0.000 0.000 0.000 _internal.py:28(__init__) > 1 0.000 0.000 0.000 0.000 numeric.py:70(asarray) > 1 0.000 0.000 0.000 0.000 > _internal.py:36(__getitem__) > 0 0.000 0.000 profile:0(profiler) > > """ > prof = hotshot.Profile(logfilename) > output = prof.run(function) > print "Output of function is:" > print output > prof.close() > stats = hotshot.stats.load(logfilename) > stats.strip_dirs() > stats.sort_stats('time', 'calls') > stats.print_stats() > > > From steffen.loeck at gmx.de Thu May 4 06:59:14 2006 From: steffen.loeck at gmx.de (Steffen Loeck) Date: Thu May 4 06:59:14 2006 Subject: [Numpy-discussion] Scalar math module is ready for testing In-Reply-To: <4451C076.40608@ieee.org> References: <4451C076.40608@ieee.org> Message-ID: <200605041557.15172.steffen.loeck@gmx.de> > The scalar math module is complete and ready to be tested. It should > speed up code that relies heavily on scalar arithmetic by by-passing the > ufunc machinery. > > It needs lots of testing to be sure that it is doing the "right" > thing. To enable scalarmath you need to > > import numpy.core.scalarmath After changing some code to numpy/new scipy my programs slowed down remarkably, so I did some comparisons between Numeric, numpy and math using timeit.py.
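(For reference, the same kind of measurement can be driven from the timeit module inside Python rather than via the timeit.py script — a minimal sketch using only math.sin, so it runs anywhere; absolute numbers will of course differ from the tables that follow:)

```python
import timeit

# Programmatic equivalent of:
#   timeit.py -s "from math import sin; x=0.1" "for i in range(9): x=sin(x)"
t = timeit.Timer("for i in range(9): x = sin(x)",
                 setup="from math import sin; x = 0.1")
# repeat() returns one total time per run; take the best and scale to usec/loop
per_loop_usec = min(t.repeat(repeat=3, number=1000)) / 1000 * 1e6
print("%.2f usec per loop" % per_loop_usec)
```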
At first the sin(x) function was tested. Results (in usec per loop):

                            sin-scalar   sin-array
Numeric                        118          130
numpy                          66.3         191
numpy + scalarmath             65.8         112
numpy + math                   12.3         143
numpy + math + scalarmath      14.7         37.8
Numeric + math                 14.8         22.3

The scripts are shown at the end. So using numpy.core.scalarmath improves speed. The fastest way, however, is using Numeric and the sin function from math. Second is the test of the modulo operation %:

                            %-array
Numeric                        17.1
numpy                         299
numpy + scalarmath             55.6

Things get faster using scalarmath but are still four times slower than with Numeric. Is there any possibility to speed up the modulo operation?

scripts:

Numeric:

/usr/lib/python2.3/timeit.py \
 -s "from Numeric import sin,arange; x=0.1" "for i in arange(9): x=sin(x)"

/usr/lib/python2.3/timeit.py \
 -s "from Numeric import sin,zeros,arange; x=zeros(10, 'd'); x[0]=0.1"\
 "for i in arange(9): x[i+1]=sin(x[i])"

/usr/lib/python2.3/timeit.py \
 -s "from Numeric import zeros,arange; x=zeros(10, 'd'); x[0]=0.1"\
 "for i in arange(9): x[i+1]=(x[i]+1.1)%(1.0)"

numpy:

/usr/lib/python2.3/timeit.py \
 -s "from numpy import sin,arange; x=0.1" "for i in arange(9): x=sin(x)"

/usr/lib/python2.3/timeit.py \
 -s "from numpy import sin,zeros,arange; x=zeros(10, 'd'); x[0]=0.1"\
 "for i in arange(9): x[i+1]=sin(x[i])"

/usr/lib/python2.3/timeit.py \
 -s "from numpy import zeros,arange; x=zeros(10, 'd'); x[0]=0.1"\
 "for i in arange(9): x[i+1]=(x[i]+1.1)%(1.0)"

numpy + scalarmath:

/usr/lib/python2.3/timeit.py \
 -s "import numpy.core.scalarmath; from numpy import sin; x=0.1"\
 "for i in xrange(9): x=sin(x)"

/usr/lib/python2.3/timeit.py \
 -s "import numpy.core.scalarmath; from numpy import sin,zeros,arange; x=zeros(10, 'd'); x[0]=0.1"\
 "for i in arange(9): x[i+1]=sin(x[i])"

/usr/lib/python2.3/timeit.py \
 -s "import numpy.core.scalarmath; from numpy import zeros,arange; x=zeros(10, 'd'); x[0]=0.1"\
 "for i in arange(9): x[i+1]=(x[i]+1.1)%(1.0)"

numpy + math:

/usr/lib/python2.3/timeit.py \
 -s "from math import sin; from
numpy import arange; x=0.1"\
 "for i in arange(9): x=sin(x)"

/usr/lib/python2.3/timeit.py \
 -s "from math import sin; from numpy import zeros,arange; x=zeros(10, 'd'); x[0]=0.1"\
 "for i in arange(9): x[i+1]=sin(x[i])"

numpy + scalarmath + math:

/usr/lib/python2.3/timeit.py \
 -s "import numpy.core.scalarmath; from math import sin; from numpy import arange; x=0.1"\
 "for i in arange(9): x=sin(x)"

/usr/lib/python2.3/timeit.py \
 -s "import numpy.core.scalarmath; from math import sin; from numpy import zeros,arange; x=zeros(10, 'd'); x[0]=0.1"\
 "for i in arange(9): x[i+1]=sin(x[i])"

Numeric + math:

/usr/lib/python2.3/timeit.py \
 -s "from math import sin; from Numeric import arange; x=0.1"\
 "for i in arange(9): x=sin(x)"

/usr/lib/python2.3/timeit.py \
 -s "from math import sin; from Numeric import zeros,arange; x=zeros(10, 'd'); x[0]=0.1"\
 "for i in arange(9): x[i+1]=sin(x[i])"

Regards, Steffen From stefan at sun.ac.za Thu May 4 09:08:05 2006 From: stefan at sun.ac.za (Stefan van der Walt) Date: Thu May 4 09:08:05 2006 Subject: [Numpy-discussion] memory de-allocation problems running 'digitize' Message-ID: <20060504160705.GA15868@mentat.za.net> Hi everyone, Albert Strasheim and I found a memory de-allocation error in _compiled_base_.c, which manifests itself when running 'digitize'. This code triggers the bug:

import numpy as N
for i in range(100):
    N.digitize([1,2,3,4],[1,3])
    N.digitize([0,1,2,3,4],[1,3])

The ticket, with valgrind output, is filed at http://projects.scipy.org/scipy/numpy/ticket/95 A docstring describing what 'digitize' does would also be a useful addition. Regards Stéfan From fperez.net at gmail.com Thu May 4 09:17:10 2006 From: fperez.net at gmail.com (Fernando Perez) Date: Thu May 4 09:17:10 2006 Subject: [Numpy-discussion] profiling code with hotshot and Numeric/numpy/scipy/...
In-Reply-To: References: <4459B603.2060304@ftw.at> Message-ID: On 5/4/06, Arnd Baecker wrote: > Thanks for the wrapper - but it seems that it does not help in my case: > I replaced in my script > prof.runcall(main) > by > prof.run("main()") > and still see the same output, i.e. no information in the > Numeric/numpy/scipy cases. Have you tried in ipython %prun or '%run -p'? The first will run a single statement, the second a whole script, under the control of the OLD python profiler (not hotshot). While the 'profile' module lacks some of the niceties of hotshot, it may at least work here. Not a permanent solution, but it could get you moving. Cheers, f From jonathan.taylor at stanford.edu Thu May 4 10:48:05 2006 From: jonathan.taylor at stanford.edu (Jonathan Taylor) Date: Thu May 4 10:48:05 2006 Subject: [Numpy-discussion] recarray behaviour Message-ID: <445A3E32.2000405@stanford.edu> I posted a message to this list about a weird recarray error (and sent a second one with the actual pickled data that failed for me). Just wondering if anyone is/was able to reproduce this error. In recarray.py and recarrayII.py I have the exact same dtype description and try to create an array with the exact same list (their equality evaluates to True, and printed they are identical), but one raises an error, the other doesn't. Further, I can't use pdb to investigate the error... I have another example (recarrayIII.py) that gives this same error and behaves rather unpythonically (if that really is a word....). I couldn't figure out what was going on based on the recarray description in the numpy book.
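(For what it's worth, one classic source of exactly this kind of asymmetry is that structured/record arrays want a list of *tuples*, not a list of lists — a sketch with a hypothetical two-field dtype standing in for the real one from the attached scripts:)

```python
import numpy as np

# Hypothetical stand-in for the real dtype in recarray.py / recarrayII.py.
dt = np.dtype([('Week', 'f8'), ('SBP', 'f8')])

rows = [(12.0, 129.0), (24.0, 128.0)]   # tuples: accepted
a = np.array(rows, dtype=dt)
print(a['Week'])                         # -> [12. 24.]

# A list of lists with the same values may be rejected outright;
# whether (and how) it fails has varied across numpy versions.
try:
    np.array([[12.0, 129.0], [24.0, 128.0]], dtype=dt)
except Exception as e:
    print("list-of-lists form rejected:", type(e).__name__)
```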
=========================================
jtaylo at kff:~/Desktop$ python recarrayII.py
[(12.0, (2005, 4, 22, 0, 0, 0, 4, 112, -1), 501.0, 1.0, 2.0, 0.0, 0.0,
1.0, 91.5, 1.0, 1.0, 87.0, 1.0, 129.0, 76.0, 107.0, 11.0), (24.0,
(2005, 2, 1, 0, 0, 0, 1, 32, -1), 504.0, 1.0, 2.0, 0.0, 0.0, 1.0,
166.0, 2.0, 1.0, 84.0, 1.0, 128.0, 78.0, 401.0, 7.0)]
[('Week', '

-------------- next part --------------
A non-text attachment was scrubbed...
Name: recarrayII.py
Type: text/x-python
Size: 135 bytes
Desc: not available
URL:
-------------- next part --------------
An embedded and charset-unspecified text was scrubbed...
Name: dump.pickle
URL:
-------------- next part --------------
A non-text attachment was scrubbed...
Name: recarrayIII.py
Type: text/x-python
Size: 148 bytes
Desc: not available
URL:
-------------- next part --------------
A non-text attachment was scrubbed...
Name: jonathan.taylor.vcf
Type: text/x-vcard
Size: 329 bytes
Desc: not available
URL:

From arnd.baecker at web.de Thu May 4 11:38:01 2006
From: arnd.baecker at web.de (Arnd Baecker)
Date: Thu May 4 11:38:01 2006
Subject: [Numpy-discussion] profiling code with hotshot and Numeric/numpy/scipy/...
In-Reply-To:
References: <4459B603.2060304@ftw.at>
Message-ID:

On Thu, 4 May 2006, Fernando Perez wrote:

> On 5/4/06, Arnd Baecker wrote:
>
> > Thanks for the wrapper - but it seems that it does not help in my case:
> > I replaced in my script
> >     prof.runcall(main)
> > by
> >     prof.run("main()")
> > and still see the same output, i.e. no information in the
> > Numeric/numpy/scipy cases.
>
> Have you tried in ipython %prun or '%run -p'? The first will run a
> single statement, the second a whole script, under the control of the
> OLD python profiler (not hotshot). While the 'profile' module lacks
> some of the niceties of hotshot, it may at least work here.
I use hotshot in combination with hotshot2cachegrind and kcachegrind
to get a nice visual display and line-by-line profilings
(http://mail.enthought.com/pipermail/enthought-dev/2006-January/001075.html
has all the links ;-).
As far as I know this is not possible with the "old profiler". So yes

> Not a permanent solution, but it could get you moving.

At the moment even timeit.py is sufficient, but `prun` is indeed a very
convenient option!

However, the output of this is also puzzling, ranging from 6 (almost
useless) lines for Numeric to an overwhelming 6 screens (also useless?)
for numpy (see below).
((Also, if you vary the order of the tests you also get different
results...))

Ouch, this looks like a mess - I know benchmarking/profiling is
considered to be tricky - but that tricky?

Many thanks,

Arnd

#############################################
Numeric version: 24.2

4 function calls in 0.490 CPU seconds

Ordered by: internal time

ncalls tottime percall cumtime percall filename:lineno(function)
1 0.480 0.480 0.480 0.480 test_profile_problem_via_ipython.py:38(main)
1 0.010 0.010 0.490 0.490 :1(?)
1 0.000 0.000 0.480 0.480 test_profile_problem_via_ipython.py:7(?)
0 0.000 0.000 profile:0(profiler)
1 0.000 0.000 0.490 0.490 profile:0(execfile(filename,prog_ns))

#############################################
scipy version: 0.3.2

18004 function calls in 1.080 CPU seconds

Ordered by: internal time

ncalls tottime percall cumtime percall filename:lineno(function)
1 0.380 0.380 1.080 1.080 test_profile_problem_via_ipython.py:38(main)
1000 0.230 0.000 0.460 0.000 scimath.py:31(log)
2000 0.090 0.000 0.190 0.000 type_check.py:96(isreal)
2000 0.080 0.000 0.080 0.000 Numeric.py:655(sometrue)
2000 0.070 0.000 0.190 0.000 function_base.py:36(any)
6000 0.070 0.000 0.070 0.000 type_check.py:12(asarray)
2000 0.070 0.000 0.080 0.000 type_check.py:86(imag)
1000 0.050 0.000 0.240 0.000 scimath.py:25(sqrt)
2000 0.040 0.000 0.040 0.000 Numeric.py:583(ravel)
0 0.000 0.000 profile:0(profiler)
1 0.000 0.000 1.080 1.080 profile:0(execfile(filename,prog_ns))
1 0.000 0.000 1.080 1.080 test_profile_problem_via_ipython.py:7(?)
1 0.000 0.000 1.080 1.080 :1(?)

#############################################
numpy version: 0.9.7.2256

33673 function calls (31376 primitive calls) in 1.380 CPU seconds

Ordered by: internal time

ncalls tottime percall cumtime percall filename:lineno(function)
1 0.480 0.480 0.480 0.480 test_profile_problem_via_ipython.py:43(main)
1438 0.130 0.000 0.130 0.000 posixpath.py:171(exists)
3523 0.090 0.000 0.130 0.000 sre_parse.py:206(get)
486/59 0.080 0.000 0.380 0.006 sre_parse.py:367(_parse)
886/59 0.060 0.000 0.100 0.002 sre_compile.py:24(_compile)
364/59 0.060 0.000 0.380 0.006 sre_parse.py:312(_parse_sub)
4426 0.050 0.000 0.050 0.000 sre_parse.py:187(__next)
1032/408 0.050 0.000 0.050 0.000 sre_parse.py:147(getwidth)
81 0.040 0.000 0.040 0.000 auxfuncs.py:253(l_and)
150/94 0.030 0.000 0.190 0.002 glob.py:9(glob)
2269 0.030 0.000 0.040 0.000 sre_parse.py:200(match)
291 0.020 0.000 0.020 0.000 sre_compile.py:180(_optimize_charset)
2228 0.020 0.000 0.020 0.000 sre_parse.py:145(append)
1 0.020 0.020 0.020 0.020 linalg.py:3(?)
886 0.020 0.000 0.020 0.000 sre_parse.py:98(__init__)
59 0.020 0.000 0.020 0.000 sre_parse.py:75(__init__)
59 0.020 0.000 0.060 0.001 sre_compile.py:331(_compile_info)
79 0.010 0.000 0.010 0.000 auxfuncs.py:265(l_not)
111 0.010 0.000 0.010 0.000 sre_parse.py:221(isname)
1400 0.010 0.000 0.010 0.000 glob.py:49()
30 0.010 0.000 0.010 0.000 numerictypes.py:284(_add_array_type)
1 0.010 0.010 0.010 0.010 ma.py:9(?)
349 0.010 0.000 0.020 0.000 sre_compile.py:324(_simple)
56 0.010 0.000 0.020 0.000 glob.py:42(glob1)
40 0.010 0.000 0.010 0.000 sre_parse.py:135(__delitem__)
17/1 0.010 0.001 1.380 1.380 :1(?)
1 0.010 0.010 0.530 0.530 crackfortran.py:14(?)
1 0.010 0.010 0.010 0.010 defmatrix.py:2(?)
1 0.010 0.010 0.010 0.010 numeric.py:1(?)
349 0.010 0.000 0.010 0.000 sre_parse.py:141(__getslice__)
55 0.010 0.000 0.010 0.000 posixpath.py:117(dirname)
2914 0.010 0.000 0.010 0.000 posixpath.py:56(join)
1 0.010 0.010 0.020 0.020 capi_maps.py:12(?)
1 0.000 0.000 0.000 0.000 records.py:1(?)
1 0.000 0.000 0.000 0.000 numerictypes.py:161(_add_aliases)
1 0.000 0.000 0.000 0.000 unittest.py:640(TextTestRunner)
1 0.000 0.000 0.000 0.000 ma.py:76(default_fill_value)
1 0.000 0.000 0.000 0.000 arraysetops.py:26(?)
1 0.000 0.000 0.000 0.000 function_base.py:2(?)
1 0.000 0.000 0.000 0.000 index_tricks.py:343(_index_expression_class)
1 0.000 0.000 0.000 0.000 ma.py:263(__init__)
473 0.000 0.000 0.000 0.000 sre_parse.py:269(_escape)
1 0.000 0.000 0.000 0.000 ma.py:320(domain_safe_divide)
1 0.000 0.000 0.000 0.000 ma.py:38(_MaskedPrintOption)
1 0.000 0.000 0.000 0.000 ma.py:261(domain_tan)
1 0.000 0.000 0.100 0.100 _import_tools.py:301(get_pkgdocs)
2 0.000 0.000 0.000 0.000 info.py:27(?)
1 0.000 0.000 0.310 0.310 __init__.py:15(?)
2 0.000 0.000 0.000 0.000 UserDict.py:41(has_key)
1 0.000 0.000 0.000 0.000 __config__.py:3(?)
16 0.000 0.000 0.000 0.000 auxfuncs.py:259(l_or)
1 0.000 0.000 0.010 0.010 numerictypes.py:76(?)
1 0.000 0.000 0.000 0.000 cfuncs.py:15(?)
1 0.000 0.000 0.000 0.000 __version__.py:1(?)
1 0.000 0.000 0.000 0.000 copy.py:389(_EmptyClass)
47 0.000 0.000 0.000 0.000 ma.py:2104(_m)
1 0.000 0.000 0.000 0.000 records.py:42(format_parser)
56 0.000 0.000 0.000 0.000 posixpath.py:39(normcase)
1 0.000 0.000 0.000 0.000 numpytest.py:180(_SciPyTextTestResult)
1 0.000 0.000 0.000 0.000 numpytest.py:99(_dummy_stream)
1 0.000 0.000 0.000 0.000 numeric.py:484(_setdef)
2 0.000 0.000 0.000 0.000 __svn_version__.py:1(?)
1 0.000 0.000 1.380 1.380 test_profile_problem_via_ipython.py:12(?)
1 0.000 0.000 0.000 0.000 index_tricks.py:229(r_class)
2 0.000 0.000 0.000 0.000 ma.py:496(size)
2 0.000 0.000 0.000 0.000 _import_tools.py:280(_format_titles)
5 0.000 0.000 0.000 0.000 ma.py:209(filled)
1 0.000 0.000 0.000 0.000 add_newdocs.py:2(?)
1 0.000 0.000 0.000 0.000 ma.py:1976(_maximum_operation)
2 0.000 0.000 0.000 0.000 ma.py:493(shape)
1 0.000 0.000 0.000 0.000 _import_tools.py:92(_get_sorted_names)
3 0.000 0.000 0.000 0.000 info.py:3(?)
6 0.000 0.000 0.000 0.000 sre_parse.py:218(isdigit)
16 0.000 0.000 0.000 0.000 _import_tools.py:258(log)
1 0.000 0.000 0.000 0.000 ufunclike.py:4(?)
59 0.000 0.000 0.000 0.000 sre_parse.py:183(__init__)
2 0.000 0.000 0.000 0.000 sre_compile.py:229(_mk_bitmap)
1 0.000 0.000 0.000 0.000 ma.py:29(MAError)
1 0.000 0.000 0.000 0.000 index_tricks.py:241(c_class)
1 0.000 0.000 0.000 0.000 auxfuncs.py:247(__init__)
1 0.000 0.000 0.000 0.000 unittest.py:78(TestResult)
1 0.000 0.000 0.000 0.000 index_tricks.py:278(ndindex)
1 0.000 0.000 0.000 0.000 fileinput.py:174(FileInput)
1 0.000 0.000 0.000 0.000 copy.py:54(Error)
2 0.000 0.000 0.000 0.000 info.py:28(?)
18 0.000 0.000 0.000 0.000 numerictypes.py:93(_evalname)
0 0.000 0.000 profile:0(profiler)
1 0.000 0.000 0.000 0.000 machar.py:8(?)
1 0.000 0.000 0.000 0.000 numpytest.py:197(SciPyTextTestRunner)
21 0.000 0.000 0.000 0.000 numerictypes.py:349(obj2sctype)
1 0.000 0.000 0.000 0.000 _import_tools.py:331(PackageLoaderDebug)
97 0.000 0.000 0.000 0.000 string.py:125(join)
1 0.000 0.000 0.210 0.210 _import_tools.py:118(__call__)
1 0.000 0.000 0.000 0.000 version.py:1(?)
349 0.000 0.000 0.000 0.000 sre_parse.py:139(__setitem__)
1 0.000 0.000 0.000 0.000 unittest.py:136(TestCase)
1 0.000 0.000 0.000 0.000 arrayprint.py:4(?)
14 0.000 0.000 0.110 0.008 _import_tools.py:236(_execcmd)
1 0.000 0.000 0.000 0.000 ma.py:499(MaskedArray)
1 0.000 0.000 0.000 0.000 stat.py:54(S_ISREG)
1 0.000 0.000 0.590 0.590 __init__.py:3(?)
2 0.000 0.000 0.000 0.000 UserDict.py:50(get)
59 0.000 0.000 0.560 0.009 sre.py:216(_compile)
1 0.000 0.000 0.000 0.000 index_tricks.py:145(concatenator)
1 0.000 0.000 0.000 0.000 unittest.py:686(TestProgram)
1 0.000 0.000 0.000 0.000 index_tricks.py:3(?)
1 0.000 0.000 0.000 0.000 utils.py:1(?)
1 0.000 0.000 0.000 0.000 oldnumeric.py:3(?)
1 0.000 0.000 0.000 0.000 index_tricks.py:250(__init__)
1 0.000 0.000 0.000 0.000 auxfuncs.py:15(?)
1 0.000 0.000 0.000 0.000 fnmatch.py:72(translate)
1 0.000 0.000 0.000 0.000 function_base.py:847(add_newdoc)
15 0.000 0.000 0.000 0.000 auxfuncs.py:393(gentitle)
2 0.000 0.000 0.000 0.000 index_tricks.py:82(__init__)
1 0.000 0.000 0.000 0.000 auxfuncs.py:243(F2PYError)
1 0.000 0.000 0.000 0.000 twodim_base.py:3(?)
8 0.000 0.000 0.000 0.000 _import_tools.py:268(_get_doc_title)
1 0.000 0.000 0.000 0.000 records.py:133(recarray)
1 0.000 0.000 0.000 0.000 defchararray.py:16(chararray)
2 0.000 0.000 0.000 0.000 oldnumeric.py:552(size)
264 0.000 0.000 0.000 0.000 sre_parse.py:80(opengroup)
[...]
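An aside for readers reproducing this thread today: hotshot was later removed from the standard library, but the same kind of measurement can be sketched with cProfile and pstats. This profiles a stand-in sin-iteration workload, not Arnd's actual test_profile_problem_via_ipython.py script.

```python
import cProfile
import io
import math
import pstats

def main():
    # Stand-in workload, similar in spirit to the sin benchmarks above.
    x = 0.1
    for _ in range(100000):
        x = math.sin(x)
    return x

profiler = cProfile.Profile()
profiler.enable()
result = main()
profiler.disable()

# Render the top 5 entries by internal time, like the listings above.
buf = io.StringIO()
pstats.Stats(profiler, stream=buf).sort_stats('tottime').print_stats(5)
report = buf.getvalue()
print(report)
```

Unlike the 2006 `profile` module, cProfile is implemented in C and adds far less overhead, though C-level time still shows up only as the caller's internal time.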
From david.huard at gmail.com Thu May 4 12:10:03 2006
From: david.huard at gmail.com (David Huard)
Date: Thu May 4 12:10:03 2006
Subject: [Numpy-discussion] recarray behaviour
In-Reply-To: <445A3E32.2000405@stanford.edu>
References: <445A3E32.2000405@stanford.edu>
Message-ID: <91cf711d0605041209s4ad089c3n7a679a9c6f7e40ce@mail.gmail.com>

I couldn't reproduce the error message you got. The scripts you provided
seem to lack the pickle part. However, my guess is that you dumped an
array object using pickle, and tried to retrieve it later on. For me
this has never worked. I believe you should use the load and dump
functions provided by numpy.

If this does not solve your problem, please send one complete script
that brings it up.

David
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From andorxor at gmx.de Thu May 4 12:13:12 2006
From: andorxor at gmx.de (Stephan Tolksdorf)
Date: Thu May 4 12:13:12 2006
Subject: [Numpy-discussion] System_info options for setup.py
Message-ID: <445A5226.3060305@gmx.de>

Hi,

is there any information besides scipy.org/FAQ and
Installing_SciPy/Windows on building NumPy (on Windows)?

Is there any documentation of the various system_info options
configurable in site.cfg other than (the uncommented) system_info.py?
More specifically, what do the various BLAS/LAPACK options stand for?
Should I use atlas or blas_opt/lapack_opt? Is amd an option to directly
support ACML?

Can I build NumPy with BLAS only, without LAPACK?

It should be possible to combine ATLAS/LAPACK compiled with GCC/G77 and
Numpy with Visual C++, right?

Thanks in advance for any help. I'll try to update the wiki with the
information I get.
Regards,
Stephan

From fullung at gmail.com Thu May 4 13:59:04 2006
From: fullung at gmail.com (Albert Strasheim)
Date: Thu May 4 13:59:04 2006
Subject: [Numpy-discussion] API inconsistencies
Message-ID: <03f101c66fbd$79377ee0$0a84a8c0@dsp.sun.ac.za>

Hello all

I noticed some inconsistencies in the NumPy API that might warrant some
attention. On the one hand we have functions like the following:

empty((d1,...,dn),dtype=int,order='C')
empty((d1,...,dn),dtype=int,order='C')
ones(shape, dtype=int_) (no order?)

whereas the functions for generating random matrices look like this:

rand(d0, d1, ..., dn)
randn(d0, d1, ..., dn)

Was this done for a specific reason? Numeric compatibility perhaps? Is
this something that should be changed?

Thanks!

Regards,
Albert

From robert.kern at gmail.com Thu May 4 14:09:02 2006
From: robert.kern at gmail.com (Robert Kern)
Date: Thu May 4 14:09:02 2006
Subject: [Numpy-discussion] Re: API inconsistencies
In-Reply-To: <03f101c66fbd$79377ee0$0a84a8c0@dsp.sun.ac.za>
References: <03f101c66fbd$79377ee0$0a84a8c0@dsp.sun.ac.za>
Message-ID:

Albert Strasheim wrote:
> Hello all
>
> I noticed some inconsistencies in the NumPy API that might warrant some
> attention. On the one hand we have functions like the following:
>
> empty((d1,...,dn),dtype=int,order='C')
> empty((d1,...,dn),dtype=int,order='C')
> ones(shape, dtype=int_) (no order?)
>
> whereas the functions for generating random matrices look like this:
>
> rand(d0, d1, ..., dn)
> randn(d0, d1, ..., dn)
>
> Was this done for a specific reason? Numeric compatibility perhaps?

Mostly. rand() and randn() are convenience functions anyways, so they
present an API that is convenient for their particular task.

> Is this
> something that should be changed?

Not unless you want to be responsible for breaking substantial amounts
of code.
--
Robert Kern

"I have come to believe that the whole world is an enigma, a harmless
enigma that is made terrible by our own mad attempt to interpret it as
though it had an underlying truth."
  -- Umberto Eco

From fullung at gmail.com Thu May 4 14:11:08 2006
From: fullung at gmail.com (Albert Strasheim)
Date: Thu May 4 14:11:08 2006
Subject: [Numpy-discussion] System_info options for setup.py
In-Reply-To: <445A5226.3060305@gmx.de>
Message-ID: <03f201c66fbf$393acf70$0a84a8c0@dsp.sun.ac.za>

Hello Stephan

> -----Original Message-----
> From: numpy-discussion-admin at lists.sourceforge.net [mailto:numpy-
> discussion-admin at lists.sourceforge.net] On Behalf Of Stephan Tolksdorf
> Sent: 04 May 2006 21:13
> To: numpy-discussion at lists.sourceforge.net
> Subject: [Numpy-discussion] System_info options for setup.py
>
> Hi,
>
> is there any information besides scipy.org/FAQ and
> Installing_SciPy/Windows on building NumPy (on Windows)?
>
> Is there any documentation of the various system_info options
> configurable in site.cfg other than (the uncommented) system_info.py?
> More specifically, what do the various BLAS/LAPACK options stand for?
> Should I use atlas or blas_opt/lapack_opt? Is amd an option to directly
> support ACML?
> Can I build NumPy with BLAS only, without LAPACK?
> It should be possible to combine ATLAS/LAPACK compiled with GCC/G77 and
> Numpy with Visual C++, right?

I can confirm that this is possible. However, I noticed that my system
is actually performing really badly with NumPy built in this way, so
something probably went awry somewhere in my build. I'm investigating
the issue.

Anyway, the build goes something like this:

Build ATLAS with GCC/G77. This is described in the ATLAS FAQ.

Build FLAPACK from sources. To do this, I used NumPy's lapack_src
option. When the build crashes due to too many open files, restart it.
Hopefully you'll get a liblapack.a out here somewhere. Throw this
somewhere with your ATLAS libs. Rename libatlas.a to atlas.lib, etc.
and in site.cfg in numpy/numpy/distutils put the following:

[atlas]
library_dirs = C:\home\albert\work2\ATLAS\lib\WinNT_P4SSE2
atlas_libs = lapack, f77blas, cblas, atlas, g2c

This LAPACK library is the complete LAPACK library. ATLAS provides some
LAPACK functions. To use these instead of the ones in FLAPACK, use ar x
to unpack the object files from the (tiny) LAPACK library made by ATLAS
and ar r to pack them back into the complete LAPACK library.

You need to grab libg2c.a from a Cygwin or MinGW install if I remember
correctly.

Now build with something like this:

copy site.cfg c:\home\albert\work2\numpy\numpy\distutils
cd /d "c:\home\albert\work2\numpy"
del /f /s /q build
del /f /s /q dist
del /f /s /q numpy\core\__svn_version__.py
del /f /s /q numpy\f2py\__svn_version__.py
python setup.py config --compiler=msvc build --compiler=msvc bdist_wininst

I also tried building with CLAPACK but this effort ended in tears. I'll
try it again another day. If you want to try, you need to build libI77,
libF77 and the LAPACK library using the solution files provided on the
netlib site. You might also need to add blaswr.c (might be cblaswr.c,
included with the CLAPACK sources) in there somewhere to get CLAPACK to
call ATLAS BLAS.

Another thing I've noticed is that you can reasonably link against
GCC-compiled libraries with MSVC, but using MSVC libraries with GCC can
cause link or other problems. Also, GCC-compiled code can't be debugged
with the MSVC debugger as far as I can tell.

If anybody else has some suggestions for easing this process, I'd love
to hear from you.

Regards,
Albert

From fullung at gmail.com Thu May 4 14:37:10 2006
From: fullung at gmail.com (Albert Strasheim)
Date: Thu May 4 14:37:10 2006
Subject: [Numpy-discussion] scalarmathmodule changes broke MSVC build; patch available
Message-ID: <03fc01c66fc2$daabd270$0a84a8c0@dsp.sun.ac.za>

Hello all

Some changes to scalarmathmodule.c.src have broken the build with at
least MSVC 7.1.
The generated .c contains code like this:

static PyObject *
cfloat_power(PyObject *a, PyObject *b, PyObject *c)
{
    PyObject *ret;
    cfloat arg1, arg2, out;
#if 1
    cfloat out1;
    out1.real = out.imag = 0;
#else
    cfloat out1=0;
#endif
    int retstatus;

As far as I know, this is not valid C. Ticket with patch here:

http://projects.scipy.org/scipy/numpy/ticket/96

By the way, how about setting up buildbot so that we can avoid these
problems in future? I'd be very happy to maintain build slaves for a few
Windows configurations.

Regards,
Albert

From fullung at gmail.com Thu May 4 14:51:06 2006
From: fullung at gmail.com (Albert Strasheim)
Date: Thu May 4 14:51:06 2006
Subject: [Numpy-discussion] Matrix multiply benchmark
Message-ID: <03fd01c66fc4$c058b530$0a84a8c0@dsp.sun.ac.za>

Hello all

My current work involves multiplication of some rather large matrices
and vectors, so I was wondering about benchmarks for figuring out how
fast NumPy is multiplying.

Matrix Toolkits for Java (MTJ) has a Java Native Interface for calling
through to ATLAS, MKL and friends. Some benchmark results with some
interesting graphs are here:

http://rs.cipr.uib.no/mtj/benchmark.html

There is also some Java code for measuring the number of floating point
operations per second here:

http://rs.cipr.uib.no/mtj/bench/NNIGEMM.html

I attempted to adapt this code to Python (suggestions and fixes
welcome). My attempt at benchmarking general matrix-matrix
multiplication:

#!/usr/bin/env python

import time
import numpy as N

print N.__version__
print N.__config__.blas_opt_info

for n in range(50,501,10):
    A = N.rand(n,n)
    B = N.rand(n,n)
    C = N.empty_like(A)
    alpha = N.rand()
    beta = N.rand()
    if n < 100:
        r = 100
    else:
        r = 10
    # this gets the cache warmed up?
    for i in range(10):
        C[:,:] = N.dot(alpha*A, beta*B)
    t1 = time.clock()
    for i in range(r):
        C[:,:] = N.dot(alpha*A, beta*B)
    t2 = time.clock()
    s = t2 - t1
    f = 2 * (n + 1) * n * n
    mfs = (f / (s * 1000000.)) * r
    print '%d %f' % (n, mfs)

I think you might want to make r a bit larger to get more accurate
results for smaller matrices, depending on your CPU speed.

Is this benchmark comparable to the MTJ benchmark? NumPy might not be
performing the same operations. MTJ probably uses BLAS to do the scaling
and multiplication with one function call to dgemm.

By the way, is there a better way of assigning into a preallocated
matrix?

I eagerly await your comments and/or results.

Regards,
Albert

From stefan at sun.ac.za Thu May 4 14:57:19 2006
From: stefan at sun.ac.za (Stefan van der Walt)
Date: Thu May 4 14:57:19 2006
Subject: [Numpy-discussion] Re: API inconsistencies
In-Reply-To:
References: <03f101c66fbd$79377ee0$0a84a8c0@dsp.sun.ac.za>
Message-ID: <20060504215627.GC11436@mentat.za.net>

On Thu, May 04, 2006 at 04:08:04PM -0500, Robert Kern wrote:
> Albert Strasheim wrote:
> > Hello all
> >
> > I noticed some inconsistencies in the NumPy API that might warrant some
> > attention. On the one hand we have functions like the following:
> >
> > empty((d1,...,dn),dtype=int,order='C')
> > empty((d1,...,dn),dtype=int,order='C')
> > ones(shape, dtype=int_) (no order?)
> >
> > whereas the functions for generating random matrices look like this:
> >
> > rand(d0, d1, ..., dn)
> > randn(d0, d1, ..., dn)
> >
> > Was this done for a specific reason? Numeric compatibility perhaps?
>
> Mostly. rand() and randn() are convenience functions anyways, so they
> present an API that is convenient for their particular task.
>
> > Is this
> > something that should be changed?
>
> Not unless you want to be responsible for breaking substantial amounts
> of code.

Why can't we modify the code to accept a new calling convention, in
addition to the older one?
Maybe try to run standard_normal(args), but if it throws a TypeError
call standard_normal(*args).

Regards
Stéfan

From robert.kern at gmail.com Thu May 4 15:08:09 2006
From: robert.kern at gmail.com (Robert Kern)
Date: Thu May 4 15:08:09 2006
Subject: [Numpy-discussion] Re: API inconsistencies
In-Reply-To: <20060504215627.GC11436@mentat.za.net>
References: <03f101c66fbd$79377ee0$0a84a8c0@dsp.sun.ac.za>
 <20060504215627.GC11436@mentat.za.net>
Message-ID:

Stefan van der Walt wrote:
> Maybe try to run standard_normal(args), but if it throws a TypeError
> call standard_normal(*args).

I prefer that a single function has a single calling convention, and
that we do not try to guess what the user wants. If you want the
convention (mostly) consistent with zeros() and ones() that uses a
tuple, use numpy.random.standard_normal(). If you want the
often-convenient convention that uses multiple arguments, use randn().

numpy.random.standard_normal(shape) == numpy.randn(*shape)
numpy.random.random(shape) == numpy.rand(*shape)

--
Robert Kern

"I have come to believe that the whole world is an enigma, a harmless
enigma that is made terrible by our own mad attempt to interpret it as
though it had an underlying truth."
  -- Umberto Eco

From fullung at gmail.com Thu May 4 15:12:05 2006
From: fullung at gmail.com (Albert Strasheim)
Date: Thu May 4 15:12:05 2006
Subject: [Numpy-discussion] Re: API inconsistencies
In-Reply-To:
Message-ID: <03fe01c66fc7$b28a7030$0a84a8c0@dsp.sun.ac.za>

Hey Robert and list

> -----Original Message-----
> From: numpy-discussion-admin at lists.sourceforge.net [mailto:numpy-
> discussion-admin at lists.sourceforge.net] On Behalf Of Robert Kern
> Sent: 04 May 2006 23:08
> To: numpy-discussion at lists.sourceforge.net
> Subject: [Numpy-discussion] Re: API inconsistencies
>
> Albert Strasheim wrote:
> > Hello all
> >
> > I noticed some inconsistencies in the NumPy API that might warrant some
> > attention.
> > On the one hand we have functions like the following:
> >
> > empty((d1,...,dn),dtype=int,order='C')
> > empty((d1,...,dn),dtype=int,order='C')
> > ones(shape, dtype=int_) (no order?)
> >
> > whereas the functions for generating random matrices look like this:
> >
> > rand(d0, d1, ..., dn)
> > randn(d0, d1, ..., dn)
> >
> > Was this done for a specific reason? Numeric compatibility perhaps?
>
> Mostly. rand() and randn() are convenience functions anyways, so they
> present an API that is convenient for their particular task.
>
> > Is this
> > something that should be changed?
>
> Not unless you want to be responsible for breaking substantial amounts
> of code.

What's the current thinking on NumPy backwards compatibility with
Numeric and ndarray? Is NumPy 1.0 aiming to be compatible, or will some
porting effort be considered to be the default setting?

In case the answer is the latter, and rand is supposed to be a
convenient way of calling numpy.random.random(someshape), as you said on
the scipy-user list, shouldn't their arguments be the same? Having to
choose between the two inconveniences of typing long names or
remembering more than one convention for calling functions, I'd prefer
to choose neither. But that's just me. ;-)

Regards,
Albert

From cookedm at physics.mcmaster.ca Thu May 4 15:18:08 2006
From: cookedm at physics.mcmaster.ca (David M. Cooke)
Date: Thu May 4 15:18:08 2006
Subject: [Numpy-discussion] Re: numexpr: optimizing pow
In-Reply-To: <440FB3C7.7070704@cox.net> (Tim Hochberg's message of
 "Wed, 08 Mar 2006 21:49:11 -0700")
References: <440AF97C.5040106@cox.net> <440CFCA1.70007@cox.net>
 <440DD209.5060900@cox.net> <20060307185127.GA31063@arbutus.physics.mcmaster.ca>
 <440F80A3.8070006@cox.net> <440FB3C7.7070704@cox.net>
Message-ID:

Tim Hochberg writes:

> I just checked in some changes that do aggressive optimization on the
> pow operator in numexpr. Now all integral and half integral powers
> between [-50 and 50] are computed using multiplies and sqrt.
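The evaluation scheme Tim describes (integral and half-integral powers via multiplies plus a single sqrt) can be sketched in plain Python as binary exponentiation. This is an illustration of the idea only, not numexpr's actual compile-time machinery, which Tim notes goes to extra lengths to minimize the number of multiplies.

```python
import math

def eval_power(x, p):
    """Evaluate x**p for integral or half-integral p using only
    multiplies, plus one sqrt for the half-integral part.
    Illustrative sketch; numexpr performs the equivalent expansion
    at compile time, on whole arrays."""
    if p < 0:
        return 1.0 / eval_power(x, -p)
    n = int(p)
    extra = math.sqrt(x) if p != n else 1.0  # folds in the x**0.5 part
    result = 1.0
    base = x
    while n:                # binary exponentiation: O(log n) multiplies
        if n & 1:
            result *= base
        base *= base
        n >>= 1
    return result * extra
```

For example, `eval_power(x, 2.5)` costs one sqrt and two multiplies, versus a full `pow` call.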
> (Empirically 50 seemed to be the closest round number to the breakeven
> point.)
>
> I mention this primarily because I think it's cool. But also, it's the
> kind of optimization that I don't think would be feasible in numpy
> itself short of defining a whole pile of special cases, either
> separate ufuncs or separate loops within a single ufunc, one for each
> case that needed optimizing. Otherwise the bookkeeping overhead would
> overwhelm the savings of replacing pow with multiplies.
>
> Now all of the bookkeeping is done in Python, which makes it easy; and
> done once ahead of time and translated into bytecode, which makes it
> fast. The actual code that does the optimization is included below for
> those of you interested enough to care, but not interested enough to
> check it out of the sandbox. It could be made simpler, but I jump
> through some hoops to avoid unnecessary multiplies. For instance,
> starting 'r' as 'OpNode('ones_like', [a])' would simplify things
> significantly, but at the cost of adding an extra multiply in most
> cases.
>
> That brings up an interesting idea. If 'mul' were made smarter, so
> that it recognized OpNode('ones_like', [a]) and ConstantNode(1), then
> not only would that speed some 'mul' cases up, it would simplify the
> code for 'pow' as well. I'll have to look into that tomorrow.

Instead of using a separate ones_like opcode, why don't you just add a
ConstantNode(1) instead?

--
|>|\/|<
/--------------------------------------------------------------------------\
|David M. Cooke    http://arbutus.physics.mcmaster.ca/dmc/
|cookedm at physics.mcmaster.ca

From robert.kern at gmail.com Thu May 4 15:22:02 2006
From: robert.kern at gmail.com (Robert Kern)
Date: Thu May 4 15:22:02 2006
Subject: [Numpy-discussion] Re: API inconsistencies
In-Reply-To: <03fe01c66fc7$b28a7030$0a84a8c0@dsp.sun.ac.za>
References: <03fe01c66fc7$b28a7030$0a84a8c0@dsp.sun.ac.za>
Message-ID:

Albert Strasheim wrote:
> What's the current thinking on NumPy backwards compatibility with
> Numeric and ndarray? Is NumPy 1.0 aiming to be compatible, or will some
> porting effort be considered to be the default setting?

We are trying to be as compatible with Numeric as possible by using the
convertcode.py script. We hope that that will get people 99% of the way
there.

numarray modules will take some more effort to convert.

> In case the answer is the latter, and rand is supposed to be a
> convenient way of calling numpy.random.random(someshape), as you said
> on the scipy-user list, shouldn't their arguments be the same? Having
> to choose between the two inconveniences of typing long names or
> remembering more than one convention for calling functions, I'd prefer
> to choose neither. But that's just me. ;-)

Most of the convenience is the calling convention, not the shorter name.

--
Robert Kern

"I have come to believe that the whole world is an enigma, a harmless
enigma that is made terrible by our own mad attempt to interpret it as
though it had an underlying truth."
  -- Umberto Eco

From tim.hochberg at cox.net Thu May 4 15:34:12 2006
From: tim.hochberg at cox.net (Tim Hochberg)
Date: Thu May 4 15:34:12 2006
Subject: [Numpy-discussion] Re: numexpr: optimizing pow
In-Reply-To:
References: <440AF97C.5040106@cox.net> <440CFCA1.70007@cox.net>
 <440DD209.5060900@cox.net> <20060307185127.GA31063@arbutus.physics.mcmaster.ca>
 <440F80A3.8070006@cox.net> <440FB3C7.7070704@cox.net>
Message-ID: <445A8139.2050505@cox.net>

David M. Cooke wrote:

>Tim Hochberg writes:
>
>>I just checked in some changes that do aggressive optimization on the
>>pow operator in numexpr. Now all integral and half integral powers
>>between [-50 and 50] are computed using multiplies and sqrt.
>>(Empirically 50 seemed to be the closest round number to the breakeven
>>point.)
>>
>>I mention this primarily because I think it's cool. But also, it's the
>>kind of optimization that I don't think would be feasible in numpy
>>itself short of defining a whole pile of special cases, either
>>separate ufuncs or separate loops within a single ufunc, one for each
>>case that needed optimizing. Otherwise the bookkeeping overhead would
>>overwhelm the savings of replacing pow with multiplies.
>>
>>Now all of the bookkeeping is done in Python, which makes it easy; and
>>done once ahead of time and translated into bytecode, which makes it
>>fast. The actual code that does the optimization is included below for
>>those of you interested enough to care, but not interested enough to
>>check it out of the sandbox. It could be made simpler, but I jump
>>through some hoops to avoid unnecessary multiplies. For instance,
>>starting 'r' as 'OpNode('ones_like', [a])' would simplify things
>>significantly, but at the cost of adding an extra multiply in most
>>cases.
>>
>>That brings up an interesting idea. If 'mul' were made smarter, so
>>that it recognized OpNode('ones_like', [a]) and ConstantNode(1), then
>>not only would that speed some 'mul' cases up, it would simplify the
>>code for 'pow' as well. I'll have to look into that tomorrow.
>>
>
>Instead of using a separate ones_like opcode, why don't you just add a
>ConstantNode(1) instead?

You think I can remember something like that a month or whatever later?
You may be giving me too much credit ;-)

Hmm..... Ok. I think I remember. IIRC, the reason is that
ConstantNode(1) is slightly different than OpNode('ones_like', [a]) in
some corner cases.
Specifically pow(a**0) will produce an array of ones in one case, but a
scalar 1 in the other case.

-tim

From fullung at gmail.com Thu May 4 15:36:05 2006
From: fullung at gmail.com (Albert Strasheim)
Date: Thu May 4 15:36:05 2006
Subject: [Numpy-discussion] Re: API inconsistencies
In-Reply-To:
Message-ID: <040501c66fcb$13dea380$0a84a8c0@dsp.sun.ac.za>

Robert Kern wrote:
> Albert Strasheim wrote:
>
> > What's the current thinking on NumPy backwards compatibility with
> > Numeric and ndarray? Is NumPy 1.0 aiming to be compatible, or will
> > some porting effort be considered to be the default setting?
>
> We are trying to be as compatible with Numeric as possible by using the
> convertcode.py script. We hope that that will get people 99% of the way
> there.
>
> numarray modules will take some more effort to convert.

This is a good way of dealing with this issue.

> > In case the answer is the latter, and rand is supposed to be a
> > convenient way of calling numpy.random.random(someshape), as you said
> > on the scipy-user list, shouldn't their arguments be the same? Having
> > to choose between the two inconveniences of typing long names or
> > remembering more than one convention for calling functions, I'd
> > prefer to choose neither. But that's just me. ;-)
>
> Most of the convenience is the calling convention, not the shorter name.

In my opinion, having one calling convention for functions in the base
namespace is much more convenient than the convenience provided by these
special cases.

Are there many convenience functions, or are rand and randn the only
two? A quick search didn't turn up any others, but I might have missed
some.
Regards, Albert From robert.kern at gmail.com Thu May 4 15:46:01 2006 From: robert.kern at gmail.com (Robert Kern) Date: Thu May 4 15:46:01 2006 Subject: [Numpy-discussion] Re: API inconsistencies In-Reply-To: <040501c66fcb$13dea380$0a84a8c0@dsp.sun.ac.za> References: <040501c66fcb$13dea380$0a84a8c0@dsp.sun.ac.za> Message-ID: Albert Strasheim wrote: > Are there many convenience functions, or are rand and randn the only two? A > quick search didn't turn up any others, but I might have missed some. rand() and randn() are the only ones that come to mind. There may be a couple more Matlab-inspired functions floating around, though. -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From fperez.net at gmail.com Thu May 4 22:18:02 2006 From: fperez.net at gmail.com (Fernando Perez) Date: Thu May 4 22:18:02 2006 Subject: [Numpy-discussion] profiling code with hotshot and Numeric/numpy/scipy/... In-Reply-To: References: <4459B603.2060304@ftw.at> Message-ID: On 5/4/06, Arnd Baecker wrote: > I use hotshot in combination with hotshot2cachegrind > and kcachegrind to get a nice visual display and line-by-line > profilings > (http://mail.enthought.com/pipermail/enthought-dev/2006-January/001075.html > has all the links ;-). > As far as I know this is not possible with the "old profiler". I've seen that, and it really looks fantastic. I need to start using it... So yes > However, the output of this is also puzzling, ranging > from 6 (almost useless) lines for Numeric and > to overwhelming 6 screens (also useless?) for numpy (see below). > ((Also, if you vary the order of the tests you also get > different results...)) > > Ouch, this looks like a mess - I know benchmarking/profiling > is considered to be tricky - but that tricky? 
I'm by no means a profiling guru, but I wonder if this difference is due to the fact that numpy uses much more python code than C, while much of Numeric's work is done inside C routines. As far as the profile module is concerned, C code is invisible. If I read the info you posted correctly, it also seems like the time in your particular example is spent on many different things, rather than there being a single obvious bottleneck. That can make the optimization of this case a bit problematic. Cheers, f From fullung at gmail.com Thu May 4 23:21:15 2006 From: fullung at gmail.com (Albert Strasheim) Date: Thu May 4 23:21:15 2006 Subject: [Numpy-discussion] Running NumPy tests from inside source tree fails Message-ID: <002301c66fe7$42a16600$0a84a8c0@dsp.sun.ac.za> Hello all I'm trying to run the NumPy tests from inside the source tree. This depends on set_package_path() finding the package files I built. According to the set_package_path documentation: set_package_path should be called from a test_file.py that satisfies the following tree structure: //test_file.py Then the first existing path name from the following list /build/lib.- /.. My source tree (and probably everyone else's) looks like this: numpy numpy\build numpy\build\lib.- ... numpy\core\tests\test_foo.py This means that set_package_path isn't going to do the right thing when trying to run these tests. And indeed, I get an ImportError when I try. A better strategy would probably be to search up from dirname(abspath(testfile)) until you reach the current working directory, instead of the hardcoded dirname(dirname(abspath(testfile)) currently being used. Pearu, think we could fix this (assuming it's broken and that I didn't miss something obvious)? Regards, Albert From arnd.baecker at web.de Fri May 5 00:36:06 2006 From: arnd.baecker at web.de (Arnd Baecker) Date: Fri May 5 00:36:06 2006 Subject: [Numpy-discussion] profiling code with hotshot and Numeric/numpy/scipy/... 
In-Reply-To: References: <4459B603.2060304@ftw.at> Message-ID: On Thu, 4 May 2006, Fernando Perez wrote: > On 5/4/06, Arnd Baecker wrote: > > > I use hotshot in combination with hotshot2cachegrind > > and kcachegrind to get a nice visual display and line-by-line > > profilings > > (http://mail.enthought.com/pipermail/enthought-dev/2006-January/001075.html > > has all the links ;-). > > As far as I know this is not possible with the "old profiler". > > I've seen that, and it really looks fantastic. I need to start using it... > > So yes And no (at least with python <2.5): http://docs.python.org/dev/lib/module-hotshot.html When searching around on c.l.p. I also found this very interesting looking tool to analyze hotshot output: http://www.vrplumber.com/programming/runsnakerun/ """RunSnakeRun is a small GUI utility that allows you to view HotShot profiler dumps in a sortable GUI view. It loads the dumps incrementally in the background so that you can begin viewing the profile results fairly quickly.""" (I haven't tested it yet though ...) > > However, the output of this is also puzzling, ranging > > from 6 (almost useless) lines for Numeric and > > to overwhelming 6 screens (also useless?) for numpy (see below). > > ((Also, if you vary the order of the tests you also get > > different results...)) > > > > Ouch, this looks like a mess - I know benchmarking/profiling > > is considered to be tricky - but that tricky? > > I'm by no means a profiling guru, but I wonder if this difference is > due to the fact that numpy uses much more python code than C, while > much of Numeric's work is done inside C routines. As far as the > profile module is concerned, C code is invisible. I can understand that a python profiler cannot look into C-code (well, even that might be possible with tricks), but shouldn't one be able to get the time before and after a line is executed and from this infer the time spent on a given line of code? 
Maybe that's not how these profilers work, but that's the way I would sprinkle my code with calls to time A bit more googling reveals an old cookbook example http://aspn.activestate.com/ASPN/Cookbook/Python/Recipe/81535 """This recipe lets you take into account time spent in C modules when profiling your Python code""" I start to wonder, whether things like this are properly taken into account in any of the available profilers?? > If I read the info you posted correctly, it also seems like the time > in your particular example is spent on many different things, rather > than there being a single obvious bottleneck. That can make the > optimization of this case a bit problematic. The example from scipy import * # from math import sqrt def main(): x=arange(1,100.0) for i in xrange(10000): y=sin(x) y2=log(x) z=sqrt(i) z2=abs(y**3) is pretty cooked up. Still I had success (well, it really was constructed for this ;-) with it in the past: http://www.physik.tu-dresden.de/~baecker/talks/pyco/pyco_.html#benchmarking-and-profiling Without importing sqrt from math, the output of kcachegrind http://www.physik.tu-dresden.de/~baecker/talks/pyco/BenchExamples/test_with_old_scipy.png suggests that 39% of the time is spent in sqrt and 61% in log. Importing sqrt from math leads to http://www.physik.tu-dresden.de/~baecker/talks/pyco/BenchExamples/test_with_old_scipy_and_mathsqrt.png I.e., only the log operation is visible. So things went fine in this particular example, but after all this I got really sceptical about profiling in python. A bit more googling revealed this very interesting thread http://thread.gmane.org/gmane.comp.python.devel/73166 which discusses short-comings of hotshot and suggests lsprof as replacement. cProfile (which is based on lsprof) will be in Python 2.5 And according to http://jcalderone.livejournal.com/21124.html there is also the possibility to use the output of cProfile with KCacheGrind. 
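For later readers: the cProfile-plus-pstats workflow mentioned above looks roughly like this with the API that eventually shipped (cProfile was new in Python 2.5; this sketch uses the modern interface):

```python
import cProfile
import io
import math
import pstats

def main():
    # toy workload in the spirit of the examples in this thread
    return sum(math.sqrt(i) for i in range(10000))

profiler = cProfile.Profile()
profiler.enable()
main()
profiler.disable()

# Render a sorted report into a string instead of stdout.
buf = io.StringIO()
stats = pstats.Stats(profiler, stream=buf)
stats.sort_stats("cumulative").print_stats(5)   # five costliest entries
report = buf.getvalue()
print(report)
```

The same statistics can be written out with stats.dump_stats(filename) and post-processed by external tools, which is the route the KCacheGrind conversion scripts mentioned above take.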
So all this does not help me right now (Presently I don't have the time to install python 2.5 alpha + numpy + scipy and test cProfile + the corresponding scripts), but in the longer run it might be the solution ... Best, Arnd

From arnd.baecker at web.de Fri May 5 02:18:05 2006 From: arnd.baecker at web.de (Arnd Baecker) Date: Fri May 5 02:18:05 2006 Subject: [Numpy-discussion] Re: API inconsistencies In-Reply-To: References: <03fe01c66fc7$b28a7030$0a84a8c0@dsp.sun.ac.za> Message-ID: On Thu, 4 May 2006, Robert Kern wrote:
> Albert Strasheim wrote:
> > What's the current thinking NumPy backwards compatibility with Numeric and
> > ndarray? Is NumPy 1.0 aiming to be compatible, or will some porting effort
> > be considered to be the default setting?
>
> We are trying to be as compatible with Numeric as possible by using the
> convertcode.py script. We hope that that will get people 99% of the way there.

In the wiki there is a page documenting necessary changes and possible problems when converting from Numeric: http://www.scipy.org/Converting_from_Numeric

> numarray modules will take some more effort to convert.

See also http://www.scipy.org/Converting_from_numarray Please add anything you observe while converting old code. Best, Arnd

From ryanlists at gmail.com Fri May 5 06:07:06 2006 From: ryanlists at gmail.com (Ryan Krauss) Date: Fri May 5 06:07:06 2006 Subject: [Numpy-discussion] Profiling problem Message-ID: I was trying to help Arnd and ended up confusing myself even more. I tried to run this cooked up example:

from scipy import *
# from math import sqrt

def main():
    x=arange(1,100.0)
    for i in xrange(10000):
        y=sin(x)
        y2=log(x)
        z=sqrt(i)
        z2=abs(y**3)

using prun main() in ipython and this is the output:

In [5]: prun main()
      10005 function calls in 1.050 CPU seconds

Ordered by: internal time

   ncalls  tottime  percall  cumtime  percall filename:lineno(function)
        1    0.930    0.930    1.050    1.050 prof.py:4(main)
    10000    0.120    0.000    0.120    0.000 :0(abs)
        1    0.000    0.000    0.000    0.000 :0(arange)
        1    0.000    0.000    1.050    1.050 profile:0(main())
        1    0.000    0.000    0.000    0.000 :0(setprofile)
        1    0.000    0.000    1.050    1.050 :1(?)
        0    0.000             0.000          profile:0(profiler)

What does that mean?
It looks to me like 0.12 total seconds were spent calling abs (with no mention of what command is calling abs), and the rest of the 1.05 total seconds is unaccounted for. Am I misunderstanding this or doing something else wrong? Thanks, Ryan From robert.kern at gmail.com Fri May 5 10:16:07 2006 From: robert.kern at gmail.com (Robert Kern) Date: Fri May 5 10:16:07 2006 Subject: [Numpy-discussion] Re: Profiling problem In-Reply-To: References: Message-ID: Ryan Krauss wrote: > What does that mean? It looks to me like 0.12 total seconds were > spent calling abs (with no mention of what command is calling abs), > and the rest of the 1.05 total seconds is unaccounted for. Am I > misunderstanding this or doing something else wrong? Donuts get you dollars that the profiler does not recognize ufuncs as functions that it ought to be profiling. -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From arnd.baecker at web.de Fri May 5 12:17:02 2006 From: arnd.baecker at web.de (Arnd Baecker) Date: Fri May 5 12:17:02 2006 Subject: [Numpy-discussion] Re: Profiling problem In-Reply-To: References: Message-ID: On Fri, 5 May 2006, Robert Kern wrote: > Ryan Krauss wrote: > > > What does that mean? It looks to me like 0.12 total seconds were > > spent calling abs (with no mention of what command is calling abs), > > and the rest of the 1.05 total seconds is unaccounted for. Am I > > misunderstanding this or doing something else wrong? > > Donuts get you dollars that the profiler does not recognize ufuncs as functions > that it ought to be profiling. So does that mean that something has to be done to the ufuncs so that the profiler can recognize them, or that the profiler has to be improved in some way? 
Just for completeness: I installed python 2.5 to test out the new cProfile which gives for

################################
import cProfile
from numpy import *
from math import sqrt

def main():
    x=arange(1,100.0)
    for i in xrange(10000):
        y=sin(x)
        y2=log(x)
        z=sqrt(i)
        z2=abs(y**3)

cProfile.run("main()")
################################

the following result:

      20004 function calls in 0.611 CPU seconds

Ordered by: standard name

   ncalls  tottime  percall  cumtime  percall filename:lineno(function)
        1    0.000    0.000    0.611    0.611 :1()
        1    0.515    0.515    0.611    0.611 tst_prof.py:7(main)
    10000    0.081    0.000    0.081    0.000 {abs}
    10000    0.016    0.000    0.016    0.000 {math.sqrt}
        1    0.000    0.000    0.000    0.000 {method 'disable' of '_lsprof.Profiler' objects}
        1    0.000    0.000    0.000    0.000 {numpy.core.multiarray.arange}

Best, Arnd

From fullung at gmail.com Fri May 5 16:18:03 2006 From: fullung at gmail.com (Albert Strasheim) Date: Fri May 5 16:18:03 2006 Subject: [Numpy-discussion] Array interface Message-ID: <01a501c6709a$0568b320$0a84a8c0@dsp.sun.ac.za> Hello all I have a few quick questions regarding the array interface. First off, according to the documentation of array(...):

"""
array(object, dtype=None, copy=1, order=None, subok=0, ndmin=0)

will return an array from object with the specified date-type

Inputs:
  object - an array, any object exposing the array interface, any
           object whose __array__ method returns an array, or any
           (nested) sequence.
"""

array calls _array_fromobject in the C code, but I can't quite figure out how this code determines whether an arbitrary object is exposing the array interface. Any info would be appreciated. Next up, __array_struct__. According to Guide to NumPy:

"""
__array_struct__

A PyCObject that wraps a pointer to a PyArrayInterface structure. This
is only useful on the C-level for rapid implementation of the array
interface, using a single attribute lookup.
"""

Does an object have to implement __array_struct__, or is this optional?
I would think that it might be optional, so that pure-Python objects can implement the array interface. In this case, what should such an object do about __array_struct__? Return None? Not implement the method? Something else? Thanks! Regards, Albert

From fullung at gmail.com Fri May 5 19:49:04 2006 From: fullung at gmail.com (Albert Strasheim) Date: Fri May 5 19:49:04 2006 Subject: [Numpy-discussion] ctypes and NumPy In-Reply-To: <00d901c66d6c$2d2569c0$0a84a8c0@dsp.sun.ac.za> Message-ID: <000001c670b7$946f3540$0502010a@dsp.sun.ac.za> Hello all After some more playing with ctypes, I've come up with some ideas of using ctypes with NumPy without having to change ndarray. Info on the wiki: http://www.scipy.org/Cookbook/Ctypes I've figured out how to overlay ctypes onto NumPy arrays. Used in conjunction with decorators, this offers a reasonably seamless way of passing NumPy arrays through ctypes to C functions. I have some ideas for doing the reverse, i.e. overlaying NumPy arrays onto ctypes, but I haven't been able to get this to work yet. Any suggestions in this area would be appreciated. It is useful to be able to do this when you have a function returning something like a double*. In this case you probably have some knowledge about the size of this buffer, so you can construct an array with the right dtype and shape to work further on this array. Regards, Albert

From chanley at stsci.edu Mon May 8 10:31:24 2006 From: chanley at stsci.edu (Christopher Hanley) Date: Mon May 8 10:31:24 2006 Subject: [Numpy-discussion] numpy does not build under Solaris 8 -- may be related to ticket #96 Message-ID: <445F7B16.2040006@stsci.edu> Numpy does not currently build under Solaris 8. I have tracked this problem to the following function definition in scalarmathmodule.c.src:

static PyObject *
@name@_power(PyObject *a, PyObject *b, PyObject *c)

The declaration of "int retstatus" needs to be moved ahead of the #if/else statements in which variable assignment occurs.
This makes the function C-like in syntax instead of C++. The native Solaris compilers are picky that way. I have checked in this change. Someone else will need to test on Windows to see if this corrects ticket #96. This change is a slight modification to revision 2472. Chris

From stefan at sun.ac.za Mon May 8 10:36:21 2006 From: stefan at sun.ac.za (Stefan van der Walt) Date: Mon May 8 10:36:21 2006 Subject: [Numpy-discussion] array without string representation Message-ID: <20060507204643.GA18870@mentat.za.net> Hi everyone, While playing around with record arrays, I managed to create an object that I can't print. This snippet:

import numpy as N
r = N.zeros(shape=(3,3), dtype=[('x','f4')]).view(N.recarray)
y = r.view('f4')
print "Created y"
print y

produces a traceback:

In [51]: print y
------------------------------------------------------------
Traceback (most recent call last):
  File "", line 1, in ?
  File "/home/stefan//lib/python2.4/site-packages/numpy/core/numeric.py", line 272, in array_str
    return array2string(a, max_line_width, precision, suppress_small, ' ', "", str)
  File "/home/stefan//lib/python2.4/site-packages/numpy/core/arrayprint.py", line 198, in array2string
    separator, prefix)
  File "/home/stefan//lib/python2.4/site-packages/numpy/core/arrayprint.py", line 145, in _array2string
    format_function = a._format
  File "/home/stefan//lib/python2.4/site-packages/numpy/core/records.py", line 176, in __getattribute__
    res = fielddict[attr][:2]
TypeError: unsubscriptable object

Has anyone else seen this sort of thing before? Regards Stéfan

From fullung at gmail.com Mon May 8 10:51:51 2006 From: fullung at gmail.com (Albert Strasheim) Date: Mon May 8 10:51:51 2006 Subject: [Numpy-discussion] Incorrect result when multiplying views of record arrays Message-ID: <00b201c67221$d9ce6ff0$0502010a@dsp.sun.ac.za> Hello all I've found a possible bug when multiplying views of record arrays. Strangely enough, this only shows up on Linux.
Luckily, the unit tests for my library picked up on this. The code:

import numpy as N
rdt = N.dtype({'names' : ['i', 'j'], 'formats' : [N.intc, N.float64]})
x = N.array([[(1,5.),(2,6.),(-1,0.)]], dtype=rdt)
y = N.array([[(1,1.),(2,2.),(-1,0.)], [(1,3.),(2,4.),(-1,0.)]], dtype=rdt)
xv = x['j'][:,:-1]
yv = y['j'][:,:-1]
z = N.dot(yv, N.transpose(xv))

The operands:

yv = [[ 1.  2.]
      [ 3.  4.]]
xv = [[ 5.  6.]]
z = [[  5.]
     [ 15.]]

The expected result is [[17, 39]]', which is what I get on Windows. If I copy the views prior to multiplying:

a = N.array(xv)
b = N.array(yv)
c = N.dot(b, N.transpose(a))

I get the correct result on both platforms. Trac ticket at http://projects.scipy.org/scipy/numpy/ticket/98. Cheers, Albert

From tim.hochberg at cox.net Mon May 8 11:36:00 2006 From: tim.hochberg at cox.net (Tim Hochberg) Date: Mon May 8 11:36:00 2006 Subject: [Numpy-discussion] basearray / arraykit Message-ID: <445F8F71.4070801@cox.net> I created a branch to work on basearray and arraykit: http://svn.scipy.org/svn/numpy/branches/arraykit Basearray, as most of you probably know by now, is the array superclass that Travis, Sasha and I have all talked about at various times with slightly different emphasis. Arraykit is something I've mentioned in connection with basearray: a toolkit for creating custom array like objects. Here's a brief example; this is what a custom array class that just supported indexing and shape would look like using arraykit:

import numpy.arraykit as _kit

class customarray(_kit.basearray):
    __new__ = _kit.fromobj
    __getitem__ = _kit.getitem
    __setitem__ = _kit.setitem
    shape = property(_kit.getshape, _kit.setshape)

In practice you'd probably define a few more things like __repr__, etc, but this should get the idea across. Stuff like the above already works. However there are several areas that need more work:

1. arraykit is very tangled up with multiarraymodule.c. I would like to decouple these.
2. A lot of stuff has an extra layer of python in it.
I would like to remove that to minimize the extra overhead custom array types get.
3. The functions that are available closely reflect my particular application. If someone else uses this, they're likely to want functions that are not yet exposed through arraykit.

That's it for now. More updates as events warrant. -tim

From chanley at stsci.edu Mon May 8 13:50:01 2006 From: chanley at stsci.edu (Christopher Hanley) Date: Mon May 8 13:50:01 2006 Subject: [Numpy-discussion] problems with index arrays and byte order Message-ID: <445FAEE5.4040500@stsci.edu> Greetings, The following example was uncovered by a pyfits user during testing. It appears that there is a significant problem with index arrays handling byte order properly. Example:

>>> import numpy
>>> a = numpy.array([1,2,3,4,5,6],'>f8')
>>> a[3:5]
array([ 4.,  5.])
>>> a[[3,4]]
array([  2.05531309e-320,   2.56123631e-320])
>>> a[numpy.array([3,4])]
array([  2.05531309e-320,   2.56123631e-320])
>>> numpy.__version__
'0.9.7.2477'

This test was conducted on a Red Hat Enterprise system. Chris

From pau.gargallo at gmail.com Mon May 8 16:17:19 2006 From: pau.gargallo at gmail.com (Pau Gargallo) Date: Mon May 8 16:17:19 2006 Subject: [Numpy-discussion] axis in vdot? Message-ID: <6ef8f3380605081419ife00c4dn653e5f87e5c6d1b3@mail.gmail.com> hi, all given two nd-arrays x and y, i would like to know the most efficient way to compute

>>> (x*y).sum( axis=n )

When using broadcasting rules x, y and (x*y).sum(axis=n) may happen to be much smaller than x*y, so it would be great if (x*y).sum(axis=n) could be computed without actually building x*y. Is there an existing way to do that in NumPy? If not, would it make sense to add an axis argument to vdot to perform such an operation?
something like >>> vdot( x,y, axis=n ) thanks a lot, pau From andorxor at gmx.de Mon May 8 16:33:32 2006 From: andorxor at gmx.de (Stephan Tolksdorf) Date: Mon May 8 16:33:32 2006 Subject: [Numpy-discussion] numpy does not build under Solaris 8 -- may be related to ticket #96 In-Reply-To: <445F7B16.2040006@stsci.edu> References: <445F7B16.2040006@stsci.edu> Message-ID: <445F8337.8020008@gmx.de> > Someone else will need to test on Windows to see if this corrects > ticket #96. It's the same error Albert reported and your modification has fixed it. Stephan From steve at shrogers.com Mon May 8 16:47:22 2006 From: steve at shrogers.com (Steven H. Rogers) Date: Mon May 8 16:47:22 2006 Subject: [Numpy-discussion] Call for Papers APL Quote Quad 34:4 Message-ID: <445CA494.4080601@shrogers.com> The ACM Special Interest Group for APL is broadening it's scope from APL and J to all Array Programming Languages. Papers on the design, implementation, and use of NumPy would be welcome. ANNOUNCEMENT Issue 34:3 of APL QUOTE QUAD is now closed and ready for publishing. We are now starting to process the next issue (34:4), and we need new material. Therefore, I am sending this CALL FOR PAPERS APL QUOTE QUAD The next issue of APL Quote Quad is being designed. Prospective authors are encouraged to submit papers on any of the usual subjects of interest related to Array-Processing Languages (APL, APL2, J, and so forth). Submitted papers, in Microsoft Word (.doc), Rich Text Format (.rtf), Openoffice format (.scw), Latex (.tex) or Acrobat (.pdf) should be addressed to Manuel Alfonseca Manuel.Alfons... @uam.es Care must be taken to make the submitted papers self-contained, eg. if they require special APL typesettings. The tentative time limit for the new material is May 31st, 2006. 
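Coming back to Pau's (x*y).sum(axis=n) question a few messages up: NumPy later grew einsum (added in version 1.6, well after this thread), which expresses exactly this contraction without materializing the x*y temporary:

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.random((3, 4))
y = rng.random((3, 4))

# Sum of the elementwise product over the last axis, computed
# without building the full x*y intermediate array:
z = np.einsum("ij,ij->i", x, y)

assert np.allclose(z, (x * y).sum(axis=1))
print(z.shape)   # -> (3,)
```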
From david at ar.media.kyoto-u.ac.jp Mon May 8 17:06:22 2006 From: david at ar.media.kyoto-u.ac.jp (David Cournapeau) Date: Mon May 8 17:06:22 2006 Subject: [Numpy-discussion] Changing the printing format for numpy array Message-ID: <445EF420.6040307@ar.media.kyoto-u.ac.jp> Hi there, I would like to change the default printing format of float numbers when using numpy arrays. On the "old" numpy documentation, it is said that importing sys and changing the value of sys.float_output_precision can achieve this. Unfortunately, I don't manage to make it work under ipython (or the standard cpython interpreter, for that matter):

# Change default format
import sys
sys.float_output_suppress_small = 1
sys.float_output_precision = 3

import numpy
b = numpy.linspace(0.1, 0.0001, 5)
print b
print numpy.array2string(b, precision = 3, suppress_small = 1)

Is this a bug, or did I miss anything? I basically want "print b" to behave like "print numpy.array2string(b, precision = 3, suppress_small = 1)". Thanks for the help, David

From ted.horst at earthlink.net Mon May 8 17:10:11 2006 From: ted.horst at earthlink.net (Ted Horst) Date: Mon May 8 17:10:11 2006 Subject: [Numpy-discussion] scalarmathmodule changes broke MSVC build; patch available In-Reply-To: <03fc01c66fc2$daabd270$0a84a8c0@dsp.sun.ac.za> References: <03fc01c66fc2$daabd270$0a84a8c0@dsp.sun.ac.za> Message-ID: <5F8B4509-2D51-431E-84BE-DAA64B088C70@earthlink.net> Sorry about that. I'm not sure why that compiled for me. gcc 4 seems to allow this. Ted On May 4, 2006, at 16:36, Albert Strasheim wrote:

> Hello all
>
> Some changes to scalarmathmodule.c.src have broken the build with at least
> MSVC 7.1. The generated .c contains code like this:
>
> static PyObject *
> cfloat_power(PyObject *a, PyObject *b, PyObject *c)
> {
>     PyObject *ret;
>     cfloat arg1, arg2, out;
> #if 1
>     cfloat out1;
>     out1.real = out.imag = 0;
> #else
>     cfloat out1=0;
> #endif
>     int retstatus;
>
> As far as I know, this is not valid C.
Ticket with patch here:
>
> http://projects.scipy.org/scipy/numpy/ticket/96
>
> By the way, how about setting up buildbot so that we can avoid these
> problems in future? I'd be very happy to maintain build slaves for a few
> Windows configurations.
>
> Regards,
>
> Albert

From ryanlists at gmail.com Mon May 8 18:20:23 2006 From: ryanlists at gmail.com (Ryan Krauss) Date: Mon May 8 18:20:23 2006 Subject: [Numpy-discussion] downsample vector with averaging Message-ID: I need to downsample some data while averaging it. Basically, I have a vector and I want to take for example every ten points and average them together so that the new vector would be made up of

newvect[0]=oldvect[0:9].mean()
newvect[1]=oldevect[10:19].mean()
....

Is there a built-in or vectorized way to do this? I default to thinking in for loops, but that can lead to slow code. Thanks, Ryan

From tim.hochberg at cox.net Mon May 8 18:32:10 2006 From: tim.hochberg at cox.net (Tim Hochberg) Date: Mon May 8 18:32:10 2006 Subject: [Numpy-discussion] downsample vector with averaging In-Reply-To: References: Message-ID: <445FF0AD.5040005@cox.net> Ryan Krauss wrote:
> I need to downsample some data while averaging it. Basically, I have
> a vector and I want to take for example every ten points and average
> them together so that the new vector would be made up of
> newvect[0]=oldvect[0:9].mean()
> newvect[1]=oldevect[10:19].mean()
> ....
>
> Is there a built-in or vectorized way to do this? I default to
> thinking in for loops, but that can lead to slow code.
You could try something like this:

>>> import numpy
>>> a = numpy.arange(100)   # stand in for the real data
>>> a.shape = 10,10
>>> a.mean(axis=1)
array([  4.5,  14.5,  24.5,  34.5,  44.5,  54.5,  64.5,  74.5,  84.5,  94.5])
>>> a.shape = [-1]          # put a back like we found it

-tim

> Thanks,
> Ryan
>
> -------------------------------------------------------
> Using Tomcat but need to do more? Need to support web services, security?
> Get stuff done quickly with pre-integrated technology to make your job easier
> Download IBM WebSphere Application Server v.1.0.1 based on Apache Geronimo
> http://sel.as-us.falkag.net/sel?cmd=k&kid0709&bid&3057&dat1642
> _______________________________________________
> Numpy-discussion mailing list
> Numpy-discussion at lists.sourceforge.net
> https://lists.sourceforge.net/lists/listinfo/numpy-discussion

From sransom at nrao.edu Mon May 8 18:36:03 2006 From: sransom at nrao.edu (Scott Ransom) Date: Mon May 8 18:36:03 2006 Subject: [Numpy-discussion] downsample vector with averaging In-Reply-To: References: Message-ID: <20060509013403.GA28744@ssh.cv.nrao.edu> How about this:

---------------------------------------------
import numpy as Num

def downsample(vector, factor):
    """
    downsample(vector, factor):
        Downsample (by averaging) a vector by an integer factor.
    """
    if (len(vector) % factor):
        print "Length of 'vector' is not divisible by 'factor'=%d!" % factor
        return 0
    vector.shape = (len(vector)/factor, factor)
    return Num.mean(vector, axis=1)
---------------------------------------------

Scott

On Mon, May 08, 2006 at 01:17:16PM -0400, Ryan Krauss wrote:
> I need to downsample some data while averaging it. Basically, I have
> a vector and I want to take for example every ten points and average
> them together so that the new vector would be made up of
> newvect[0]=oldvect[0:9].mean()
> newvect[1]=oldevect[10:19].mean()
> ....
>
> Is there a built-in or vectorized way to do this?
> I default to thinking in for loops, but that can lead to slow code.
>
> Thanks,
>
> Ryan

--
Scott M. Ransom            Address:  NRAO
Phone: (434) 296-0320      520 Edgemont Rd.
email: sransom at nrao.edu  Charlottesville, VA 22903 USA
GPG Fingerprint: 06A9 9553 78BE 16DB 407B FFCA 9BFA B6FF FFD3 2989

From ryanlists at gmail.com Mon May 8 19:09:09 2006 From: ryanlists at gmail.com (Ryan Krauss) Date: Mon May 8 19:09:09 2006 Subject: [Numpy-discussion] downsample vector with averaging In-Reply-To: <20060509013403.GA28744@ssh.cv.nrao.edu> References: <20060509013403.GA28744@ssh.cv.nrao.edu> Message-ID: Thanks Scott and Tim. These look good, and very similar. On 5/8/06, Scott Ransom wrote:
> How about this:
>
> ---------------------------------------------
> import numpy as Num
>
> def downsample(vector, factor):
>     """
>     downsample(vector, factor):
>         Downsample (by averaging) a vector by an integer factor.
>     """
>     if (len(vector) % factor):
>         print "Length of 'vector' is not divisible by 'factor'=%d!" % factor
>         return 0
>     vector.shape = (len(vector)/factor, factor)
>     return Num.mean(vector, axis=1)
> ---------------------------------------------
>
> Scott
>
> On Mon, May 08, 2006 at 01:17:16PM -0400, Ryan Krauss wrote:
> > I need to downsample some data while averaging it.
Basically, I have > > a vector and I want to take for example every ten points and average > > them together so that the new vector would be made up of > > newvect[0]=oldvect[0:9].mean() > > newvect[1]=oldevect[10:19].mean() > > .... > > > > Is there a built-in or vectorized way to do this? I default to > > thinking in for loops, but that can lead to slow code. > > > > Thanks, > > > > Ryan > > > > > > ------------------------------------------------------- > > Using Tomcat but need to do more? Need to support web services, security? > > Get stuff done quickly with pre-integrated technology to make your job > > easier > > Download IBM WebSphere Application Server v.1.0.1 based on Apache Geronimo > > http://sel.as-us.falkag.net/sel?cmd_______________________________________________ > > Numpy-discussion mailing list > > Numpy-discussion at lists.sourceforge.net > > https://lists.sourceforge.net/lists/listinfo/numpy-discussion > > -- > -- > Scott M. Ransom Address: NRAO > Phone: (434) 296-0320 520 Edgemont Rd. > email: sransom at nrao.edu Charlottesville, VA 22903 USA > GPG Fingerprint: 06A9 9553 78BE 16DB 407B FFCA 9BFA B6FF FFD3 2989 > From jdhunter at ace.bsd.uchicago.edu Mon May 8 19:37:17 2006 From: jdhunter at ace.bsd.uchicago.edu (John Hunter) Date: Mon May 8 19:37:17 2006 Subject: [Numpy-discussion] downsample vector with averaging In-Reply-To: ("Ryan Krauss"'s message of "Mon, 8 May 2006 13:17:16 -0400") References: Message-ID: <87vesg6i94.fsf@peds-pc311.bsd.uchicago.edu> >>>>> "Ryan" == Ryan Krauss writes: Ryan> I need to downsample some data while averaging it. Ryan> Basically, I have a vector and I want to take for example Ryan> every ten points and average them together so that the new Ryan> vector would be made up of newvect[0]=oldvect[0:9].mean() Ryan> newvect[1]=oldevect[10:19].mean() .... Ryan> Is there a built-in or vectorized way to do this? I default Ryan> to thinking in for loops, but that can lead to slow code. 
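Pulling the reshape-and-average recipe from the replies above into one self-contained sketch (the function name and the ValueError handling are my additions; the mechanics are Scott's reshape followed by a mean over axis 1):

```python
import numpy as np

def downsample(vector, factor):
    """Downsample a 1-D array by an integer factor, averaging each group."""
    vector = np.asarray(vector)
    if len(vector) % factor:
        raise ValueError("length %d is not divisible by factor %d"
                         % (len(vector), factor))
    # Each row of the reshaped view holds one group of `factor` samples.
    return vector.reshape(-1, factor).mean(axis=1)

v = np.arange(20.0)
print(downsample(v, 10))  # two groups of ten: their means are 4.5 and 14.5
```

Unlike Scott's version, which returns 0 on a length mismatch, this sketch raises an exception, so a silent failure cannot propagate.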
You might look at the matlab function decimate -- it first does a chebyshev low-pass filter before it does the down-sampling. Conceptually similar to what you are proposing with simple averaging but with a little more sophistication http://www-ccs.ucsd.edu/matlab/toolbox/signal/decimate.html An open source version (GPL) for octave by Paul Kienzle, who is one of the authors of the matplotlib quadmesh functionality and apparently a python convertee, is here http://users.powernet.co.uk/kienzle/octave/matcompat/scripts/signal/decimate.m and it looks like someone has already translated this to python using scipy.signal http://www.bigbold.com/snippets/posts/show/1209 Some variant of this would be a nice addition to scipy. JDH From a.u.r.e.l.i.a.n at gmx.net Tue May 9 01:18:24 2006 From: a.u.r.e.l.i.a.n at gmx.net (Johannes Loehnert) Date: Tue May 9 01:18:24 2006 Subject: [Numpy-discussion] Boolean indexing Message-ID: <200605091016.50881.a.u.r.e.l.i.a.n@gmx.net> Hi, I posted this before on scipy-user, maybe that was the wrong place. Have a look at this: # ===================================== In [1]: import numpy In [2]: numpy.__version__ Out[2]: '0.9.7.2484' In [3]: a = numpy.array([[0, 1],[2, 3], [4, 5],[6, 7],[8, 9]]) In [4]: b = numpy.array([False, True, True, True, False]) In [5]: a[b] Out[5]: array([[2, 3], [4, 5], [6, 7]]) In [6]: a[b,:] --------------------------------------------------------------------------- exceptions.IndexError Traceback (most recent call last) /home/jloehnert/ IndexError: arrays used as indices must be of integer type In [7]: a[numpy.nonzero(b),:] Out[7]: array([[2, 3], [4, 5], [6, 7]]) # ====================== Is there a reason to forbid a[b, :] with boolean array b? 
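Until that changes, the nonzero() route shown in In [7] can be made to return a plain 2-D result by taking the first element of the tuple that nonzero() returns; a sketch (later numpy releases also ended up accepting the boolean form a[b, :] directly):

```python
import numpy as np

a = np.array([[0, 1], [2, 3], [4, 5], [6, 7], [8, 9]])
b = np.array([False, True, True, True, False])

# nonzero() returns a tuple of index arrays; take its first element
rows = a[np.nonzero(b)[0], :]
print(rows)  # the three rows where b is True

# current numpy allows the boolean form along an axis as well:
assert (a[b, :] == rows).all()
```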
Johannes From oliphant.travis at ieee.org Tue May 9 01:42:09 2006 From: oliphant.travis at ieee.org (Travis Oliphant) Date: Tue May 9 01:42:09 2006 Subject: [Numpy-discussion] Boolean indexing In-Reply-To: <200605091016.50881.a.u.r.e.l.i.a.n@gmx.net> References: <200605091016.50881.a.u.r.e.l.i.a.n@gmx.net> Message-ID: <4460558C.80006@ieee.org> Johannes Loehnert wrote: > Hi, > > I posted this before on scipy-user, maybe that was the wrong place. > > Have a look at this: > > # ===================================== > In [1]: import numpy > > In [2]: numpy.__version__ > Out[2]: '0.9.7.2484' > > In [3]: a = numpy.array([[0, 1],[2, 3], [4, 5],[6, 7],[8, 9]]) > > In [4]: b = numpy.array([False, True, True, True, False]) > > In [5]: a[b] > Out[5]: > array([[2, 3], > [4, 5], > [6, 7]]) > > In [6]: a[b,:] > --------------------------------------------------------------------------- > exceptions.IndexError Traceback (most recent > call last) > > /home/jloehnert/ > > IndexError: arrays used as indices must be of integer type > > In [7]: a[numpy.nonzero(b),:] > Out[7]: > array([[2, 3], > [4, 5], > [6, 7]]) > # ====================== > > Is there a reason to forbid a[b, :] with boolean array b? > I think admitting this behavior would require even more checks in already complicated code. I'm not sure it's worth it. I don't have a principled opposition, but it's not on my list of things to implement. -Travis From olivetti at itc.it Tue May 9 04:29:21 2006 From: olivetti at itc.it (Emanuele Olivetti) Date: Tue May 9 04:29:21 2006 Subject: [Numpy-discussion] Guide to Numpy book In-Reply-To: <6382066a0605020646u6d752d84v1c2711101e108883@mail.gmail.com> References: <3FA6601C-819F-4F15-A670-829FC428F47B@cortechs.net> <4452C145.8050803@geodynamics.org> <6382066a0605020646u6d752d84v1c2711101e108883@mail.gmail.com> Message-ID: <44607CCB.402@itc.it> My copy is dated January 31. Did you receive updates after this thread started?
Thanks, Emanuele Charlie Moad wrote: > On 4/30/06, Vidar Gundersen wrote: >> ===== Original message from Luis Armendariz | 29 Apr 2006: >> >> What is the newest version of Guide to numpy? The recent one I got is >> >> dated at Jan 9 2005 on the cover. >> > The one I got yesterday is dated March 15, 2006. >> >> aren't the updates supposed to be sent out >> to customers when available? > > I was waiting to hear a reply on this, because I am curious about > getting updates as well. Our labs copy reads Jan 20. How often > should we expect updates? I am guessing the date variations on the > front page are from latex each time the doc is regenerated. From fullung at gmail.com Tue May 9 05:36:16 2006 From: fullung at gmail.com (Albert Strasheim) Date: Tue May 9 05:36:16 2006 Subject: [Numpy-discussion] problems with index arrays and byte order In-Reply-To: <445FAEE5.4040500@stsci.edu> Message-ID: <005501c67365$10206f10$0502010a@dsp.sun.ac.za> Hello all The issue Chris mentioned seems to be fixed in r2487 (did this happen in r2480 or was it another change that fixed it?). http://projects.scipy.org/scipy/numpy/changeset/2480 >>> import numpy >>> a = numpy.array([1,2,3,4,5,6],'>f8') >>> a[3:5] array([ 4., 5.]) >>> >>> a[[3,4]] array([ 4., 5.]) >>> a[numpy.array([3,4])] array([ 4., 5.]) >>> numpy.__version__ '0.9.7.2487' Regards, Albert > -----Original Message----- > From: numpy-discussion-admin at lists.sourceforge.net [mailto:numpy- > discussion-admin at lists.sourceforge.net] On Behalf Of Christopher Hanley > Sent: 08 May 2006 22:50 > To: numpy-discussion > Subject: [Numpy-discussion] problems with index arrays and byte order > > Greetings, > > The following example was uncovered by a pyfits user during testing. It > appears that there is a significant problem with index array handling > byte order properly. 
> > Example: > > >>> import numpy > >>> a = numpy.array([1,2,3,4,5,6],'>f8') > >>> a[3:5] > array([ 4., 5.]) > >>> a[[3,4]] > array([ 2.05531309e-320, 2.56123631e-320]) > >>> a[numpy.array([3,4])] > array([ 2.05531309e-320, 2.56123631e-320]) > >>> numpy.__version__ > '0.9.7.2477' > > This test was conducted on a Red Hat Enterprise system. > > Chris From nvf at MIT.EDU Tue May 9 10:05:08 2006 From: nvf at MIT.EDU (Nick Fotopoulos) Date: Tue May 9 10:05:08 2006 Subject: [Numpy-discussion] Re: Numpy-discussion digest, Vol 1 #1753 - 1 msg In-Reply-To: <20060509030933.A19D3F53A@sc8-sf-spam2.sourceforge.net> References: <20060509030933.A19D3F53A@sc8-sf-spam2.sourceforge.net> Message-ID: <4460CB00.10601@mit.edu> Dear all, I sure wish I had seen John's link before writing my own decimate, but such is life. I am going to transition this thread to the scipy list since it seems more as if it belongs there. I have a comment which may be way, way beyond the scope of Ryan's intended application, but which probably should be made before a decimate function is added to scipy. If you will be FFTing your decimated signal and care at all about the phase, then you may want to consider using something like filtfilt rather than just applying a single filter. It filters a signal forwards then backwards in order to avoid introducing a phase delay, and is actually used in the Matlab implementation of decimate. I found an implementation in Python here: http://article.gmane.org/gmane.comp.python.scientific.user/1164/ While the author, Andrew Straw, seems to be suffering edge effects, it seems as if he's not windowing his data. In my application, his filtfilt seems to do its job quite nicely. Then again, I am also not a signals whiz. 
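The forward-backward idea Nick describes can be sketched without scipy at all. Here `one_pole` is a hypothetical stand-in for a real low-pass filter (the name and the smoothing constant are mine): a single causal pass delays the peak, while running the filter a second time over the reversed output cancels that phase lag.

```python
import numpy as np

def one_pole(x, alpha):
    """Causal one-pole low-pass; introduces a phase lag (delayed peak)."""
    y = np.empty(len(x))
    acc = x[0]
    for i, v in enumerate(x):
        acc = alpha * v + (1.0 - alpha) * acc
        y[i] = acc
    return y

def zero_phase(x, alpha):
    """filtfilt-style: filter forwards, then filter the reversed result."""
    return one_pole(one_pole(x, alpha)[::-1], alpha)[::-1]

n = np.arange(101)
bump = np.exp(-0.5 * ((n - 50) / 5.0) ** 2)  # symmetric pulse at sample 50
print(np.argmax(one_pole(bump, 0.2)))    # peak is delayed past sample 50
print(np.argmax(zero_phase(bump, 0.2)))  # peak stays near sample 50
```

A real decimate would of course use a proper filter design (Chebyshev, as John mentioned) rather than this toy smoother, but the phase-cancellation mechanism is the same.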
Take care, Nick numpy-discussion-request at lists.sourceforge.net wrote: > Message: 1 > To: "Ryan Krauss" > Cc: numpy-discussion > Subject: Re: [Numpy-discussion] downsample vector with averaging > From: John Hunter > Date: Mon, 08 May 2006 21:31:51 -0500 > > You might look at the matlab function decimate -- it first does a > chebyshev low-pass filter before it does the down-sampling. > Conceptually similar to what you are proposing with simple averaging > but with a little more sophistication > > http://www-ccs.ucsd.edu/matlab/toolbox/signal/decimate.html > > An open source version (GPL) for octave by Paul Kienzle, who is one of > the authors of the matplotlib quadmesh functionality and apparently a > python convertee, is here > > http://users.powernet.co.uk/kienzle/octave/matcompat/scripts/signal/decimate.m > > and it looks like someone has already translated this to python using > scipy.signal > > http://www.bigbold.com/snippets/posts/show/1209 > > Some variant of this would be a nice addition to scipy. > > JDH > From chanley at stsci.edu Tue May 9 10:27:41 2006 From: chanley at stsci.edu (Christopher Hanley) Date: Tue May 9 10:27:41 2006 Subject: [Numpy-discussion] field access in recarray broken when field name matches class method or attribute Message-ID: <4460CEF0.1080604@stsci.edu> It is not possible to access a named recarray field if that name matches the name of a recarray class method or attribute. Example code is below: >>> from numpy import rec >>> r = rec.fromrecords([[456,'dbe',1.2],[2,'de',1.3]],names='num,name,field') >>> r.field('num') array([456, 2]) >>> r.field('field') >>> I have opened a new ticket on the Trac site with this bug. Chris From eric at enthought.com Tue May 9 11:15:03 2006 From: eric at enthought.com (eric jones) Date: Tue May 9 11:15:03 2006 Subject: [Numpy-discussion] downsample vector with averaging In-Reply-To: References: Message-ID: <4460DBE1.90104@enthought.com> Look into ufunc.reduceat(). 
It'll help with your specific case. It doesn't do it in one step but still should be pretty efficient. >>> a = ones(15) >>> a array([1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1]) >>> add.reduceat(a,[0,5,10]) array([5, 5, 5]) >>> add.reduceat(a,[0,5,10])/5.0 array([ 1., 1., 1.]) eric Ryan Krauss wrote: > I need to downsample some data while averaging it. Basically, I have > a vector and I want to take for example every ten points and average > them together so that the new vector would be made up of > newvect[0]=oldvect[0:9].mean() > newvect[1]=oldevect[10:19].mean() > .... > > Is there a built-in or vectorized way to do this? I default to > thinking in for loops, but that can lead to slow code. > > Thanks, > > Ryan > > > ------------------------------------------------------- > Using Tomcat but need to do more? Need to support web services, security? > Get stuff done quickly with pre-integrated technology to make your job > easier > Download IBM WebSphere Application Server v.1.0.1 based on Apache > Geronimo > http://sel.as-us.falkag.net/sel?cmd=k&kid0709&bid&3057&dat1642 > _______________________________________________ > Numpy-discussion mailing list > Numpy-discussion at lists.sourceforge.net > https://lists.sourceforge.net/lists/listinfo/numpy-discussion > From ryanlists at gmail.com Tue May 9 13:11:23 2006 From: ryanlists at gmail.com (Ryan Krauss) Date: Tue May 9 13:11:23 2006 Subject: [Numpy-discussion] column_stack with len +/-1 Message-ID: I am trying to column_stack a bunch of data vectors that I think should be the same length. For whatever reason, my DAQ system doesn't always put out exactly the same length vector. They should be 11,000 elements long, but some are 11,000+/-1. How do I efficently drop one or two elements of the lists that are too long without making a bunch of accidental copies? What would newlist=[item[0:11000] for item in oldlist] do? 
Ryan From alexander.belopolsky at gmail.com Tue May 9 13:13:21 2006 From: alexander.belopolsky at gmail.com (Alexander Belopolsky) Date: Tue May 9 13:13:21 2006 Subject: [Numpy-discussion] Changing the printing format for numpy array In-Reply-To: <445EF420.6040307@ar.media.kyoto-u.ac.jp> References: <445EF420.6040307@ar.media.kyoto-u.ac.jp> Message-ID: On 5/8/06, David Cournapeau wrote: > [...] > I would like to change the default printing format of float numbers > when using numpy arrays. In numpy you use set_printoptions for that: >>> from numpy import * >>> set_printoptions(precision=2) >>> array([1/3.]) array([ 0.33]) Unfortunately, this does not affect dimensionless arrays and array scalars: >>> array(1/3.) array(0.33333333333333331) >>> float_(1/3.) 0.33333333333333331 (Travis said this was not a bug.) From chanley at stsci.edu Tue May 9 13:19:02 2006 From: chanley at stsci.edu (Christopher Hanley) Date: Tue May 9 13:19:02 2006 Subject: [Numpy-discussion] cannot create single field recarray using string format input Message-ID: <4460F8F1.20703@stsci.edu> An error is raised when attempting to create a single field recarray using string format as input. If there is more than one format specified in the string, no error is raised. However, it is possible to create a single field recarray if the format is given as a member of a one-element list.
Example code below: In [1]: from numpy import rec In [2]: rr = rec.array(None, formats="f4,i4,i8,f8", shape=100) In [3]: rr.dtype Out[3]: dtype([('f1', '<f4'), ('f2', '<i4'), ('f3', '<i8'), ('f4', '<f8')]) In [7]: rr = rec.array(None, formats="f4", shape=100) --------------------------------------------------------------------------- exceptions.TypeError Traceback (most recent call last) /data/sparty1/dev/site-packages/lib/python/numpy/core/records.py in array(obj, formats, names, titles, shape, byteorder, aligned, offset, strides) 414 return recarray(shape, formats, names=names, titles=titles, 415 buf=obj, offset=offset, strides=strides, --> 416 byteorder=byteorder, aligned=aligned) 417 elif isinstance(obj, str): 418 return fromstring(obj, formats, names=names, titles=titles, /data/sparty1/dev/site-packages/lib/python/numpy/core/records.py in __new__(subtype, shape, formats, names, titles, buf, offset, strides, byteorder, aligned) 152 descr = formats 153 else: --> 154 parsed = format_parser(formats, names, titles, aligned) 155 descr = parsed._descr 156 /data/sparty1/dev/site-packages/lib/python/numpy/core/records.py in __init__(self, formats, names, titles, aligned) 42 class format_parser: 43 def __init__(self, formats, names, titles, aligned=False): ---> 44 self._parseFormats(formats, aligned) 45 self._setfieldnames(names, titles) 46 self._createdescr() /data/sparty1/dev/site-packages/lib/python/numpy/core/records.py in _parseFormats(self, formats, aligned) 51 dtype = sb.dtype(formats, aligned) 52 fields = dtype.fields ---> 53 keys = fields[-1] 54 self._f_formats = [fields[key][0] for key in keys] 55 self._offsets = [fields[key][1] for key in keys] TypeError: unsubscriptable object In [8]: rr = rec.array(None, formats=["f4"], shape=100) In [9]: rr.dtype Out[9]: dtype([('f1', '<f4')]) Chris From tim.hochberg at cox.net (Tim Hochberg) Subject: Re: [Numpy-discussion] column_stack with len +/-1 In-Reply-To: References: Message-ID: <4460F920.6020108@cox.net> Ryan Krauss wrote: > I am trying to column_stack a bunch of data vectors that I think > should be the same length. For whatever reason, my DAQ system doesn't > always put out exactly the same length vector. They should be 11,000 > elements long, but some are 11,000+/-1.
> > How do I efficiently drop one or two elements of the lists that are too > long without making a bunch of accidental copies? > > What would > newlist=[item[0:11000] for item in oldlist] > do? That should be fine in terms of copying; 'item[0:11000]' will be a view of item, not a copy and thus will be cheap in terms of memory and time. However, if the vectors are really 11000+/-1 in length, don't you need 'item[:10999]'? -tim > > Ryan From oliphant.travis at ieee.org Tue May 9 15:10:47 2006 From: oliphant.travis at ieee.org (Travis Oliphant) Date: Tue May 9 15:10:47 2006 Subject: [Numpy-discussion] Getting 0.9.8 out this week Message-ID: <4461131B.1050907@ieee.org> I'd like to get 0.9.8 of NumPy released by the end of the week. There are a few Trac tickets that need to be resolved by then. In particular #83 suggests returning scalars instead of 1x1 matrices from certain reduce-like methods. Please chime in on your preference. I'm waiting to hear more feedback before applying the patch. If you can help out on any other ticket that would be much appreciated. For my part, the scalar math module needs to have a re-worked coercion model so that mixing in Python scalars is handled correctly (as well as the case of mixing unsigned and signed integers of the same type --- neither can be cast to each other). I'm open to any suggestions and help.
This should eliminate many of the problems causing tests to fail currently with scalar math. The 0.9.8 release is past due now. I was out of town last week and did not have much time for NumPy. -Travis From gruben at bigpond.net.au Tue May 9 16:39:45 2006 From: gruben at bigpond.net.au (Gary Ruben) Date: Tue May 9 16:39:45 2006 Subject: Re: [Numpy-discussion] downsample vector with averaging In-Reply-To: <445FF0AD.5040005@cox.net> References: <445FF0AD.5040005@cox.net> Message-ID: <446127FF.4090909@bigpond.net.au> Thanks to Tim and Scott, I've added a new example to the rebinning example in the cookbook which takes advantage of using mean. This also does what Ryan wants, but extends it to arbitrary dimensions. Gary R. From tim.hochberg at cox.net Tue May 9 18:36:21 2006 From: tim.hochberg at cox.net (Tim Hochberg) Date: Tue May 9 18:36:21 2006 Subject: Re: [Numpy-discussion] Getting 0.9.8 out this week In-Reply-To: <4461131B.1050907@ieee.org> References: <4461131B.1050907@ieee.org> Message-ID: <4461434F.6060604@cox.net> Travis Oliphant wrote: > > I'd like to get 0.9.8 of NumPy released by the end of the week. > There are a few Trac tickets that need to be resolved by then. > In particular #83 suggests returning scalars instead of 1x1 matrices > from certain reduce-like methods. Please chime in on your > preference. I'm waiting to hear more feedback before applying the patch. Do you want replies to the list? Or on the ticket. Or both? I'll start with the list. Let's start with the example in the ticket: >>> m matrix([[0, 1, 2], [3, 4, 5], [6, 7, 8]]) >>> m.argmax() matrix([[8]]) Does anyone else think that this is a fairly nonsensical result? Not that this is specific to matrices, the array result is just as weird: >>> a array([[0, 1, 2], [3, 4, 5], [6, 7, 8]]) >>> a.argmax() 8 Given that obj[obj.argmax()] should really equal obj.max(), argmax with no axis specified should really either raise an exception or return a tuple. Anyway, that's off topic.
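For reference, the tuple form can already be recovered from the flat index that argmax returns, and the identity Tim states holds through the .flat view; a sketch using numpy.unravel_index:

```python
import numpy as np

a = np.arange(9).reshape(3, 3)
flat = a.argmax()                  # 8: an index into the flattened array
assert a.flat[flat] == a.max()     # obj.flat[obj.argmax()] == obj.max()
# convert the flat index into the (row, col) tuple for the 2-D array:
row, col = np.unravel_index(flat, a.shape)
print(row, col)  # 2 2
```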
Let's consider something that makes more sense: >>> m.max() matrix([[8]]) >>> a.max() 8 OK, so how would we explain this result? One way would be to refer to the implementation, saying that first we flatten the matrix/array and then we apply max to it. That sort of makes sense for arrays, but seems pretty outlandish for matrices which are always 2D and don't so much have the concept of flattening. A more appealing description is that we successively apply max along every axis. That is: >>> m.max(1).max(0) matrix([[8]]) >>> a.max(1).max(0) 8 [I start with the last axis here since it makes things more symmetric between the array and matrix case] If we switch to having m.max() return a scalar, then this equivalence goes away. That makes things harder to explain. So, in the absence of more compelling use cases than those presented in the ticket I'd be inclined to leave things as they are. Of course I'm not a user of the matrix class, so take that for what it's worth. -tim > > If you can help out on any other ticket that would be much appreciated. > > For my part, the scalar math module needs to have a re-worked coercion > model so that mixing in Python scalars is handled correctly (as well > as the case of mixing unsigned and signed integers of the same type > --- neither can be cast to each other). I'm open to any suggestions > and help. This should eliminate many of the problems causing tests to > fail currently with scalar math. > The 0.9.8 release is past due now. I was out of town last week and > did not have much time for NumPy. > > -Travis > > > > > > ------------------------------------------------------- > Using Tomcat but need to do more? Need to support web services, security?
> Get stuff done quickly with pre-integrated technology to make your job > easier > Download IBM WebSphere Application Server v.1.0.1 based on Apache > Geronimo > http://sel.as-us.falkag.net/sel?cmd=lnk&kid=120709&bid=263057&dat=121642 > _______________________________________________ > Numpy-discussion mailing list > Numpy-discussion at lists.sourceforge.net > https://lists.sourceforge.net/lists/listinfo/numpy-discussion > > From ndarray at mac.com Tue May 9 18:39:27 2006 From: ndarray at mac.com (Sasha) Date: Tue May 9 18:39:27 2006 Subject: Re: [Numpy-discussion] Getting 0.9.8 out this week In-Reply-To: <4461131B.1050907@ieee.org> References: <4461131B.1050907@ieee.org> Message-ID: On 5/9/06, Travis Oliphant wrote: > ... > In particular #83 suggests returning scalars instead of 1x1 matrices > from certain reduce-like methods. Please chime in on your preference. > I'm waiting to hear more feedback before applying the patch. > I don't use matrices, so I don't have a strong opinion on this issue. On the other hand this brings back an old and, in my view, unfinished discussion on what aggregating methods should return in numpy: a rank-0 array or a scalar. If you remember I once advocated a point of view that both rank-0 arrays and array scalars are necessary in numpy and that we need to have consistent rules about when to return what. That discussion resulted in the following current behavior: >>> x = zeros((2,2)) >>> x[1,1] 0 >>> x[1,1,...] array(0) An important use case is: >>> y = x[1,1,...] >>> y[...] = 1 >>> x array([[0, 0], [0, 1]]) Note that in case of matrices: >>> m = matrix(x) >>> m[1,1] 0 >>> m[1,1,...] matrix([[0]]) I don't know if this was deliberately implemented this way or just comes as a side effect of the ndarray behavior. I also proposed back then to generalize the aggregating operations to allow aggregation over multiple axes and make x.sum() return a scalar, but a.sum((0,1)) return a rank-0 array. On the matrix feature in question I am -0.
The proposed change trades one inconsistency for another, so why bother. From wbaxter at gmail.com Tue May 9 19:32:00 2006 From: wbaxter at gmail.com (Bill Baxter) Date: Tue May 9 19:32:00 2006 Subject: [Numpy-discussion] Who uses matrix? Message-ID: On 5/10/06, Tim Hochberg wrote: > Of course I'm not a user of the matrix class, so take that for what it's worth. On 5/10/06, Sasha wrote: > > > I don't use matrices, so I don't have a strong opinion on this issue. > > Hmm. And in addition to you two, I'm pretty sure I've seen Travis mention he doesn't use matrix, either. That plus the lack of response to my and others' previous posts on the topic kinda makes me wonder whether there are actually *any* hardcore numpy users out there using the matrix class. I've been using matrix, but I don't consider myself a numpy hardcore at this point. On the issue in question about returning scalars vs matrices from things like sum(), my initial reaction is that any time you have a 1x1 matrix it should just be returned as a scalar. In linear algebra terms, a 1x1 matrix *is* a scalar for all intents and purposes, so it doesn't need to be all dressed up in this fancy 'matrix' garb, as in "matrix([[8]])". But I'm not sure it's worth getting all in a hurry to make the change before numpy 0.9.8, or ever for that matter. --bill From aisaac at american.edu Tue May 9 20:16:10 2006 From: aisaac at american.edu (Alan G Isaac) Date: Tue May 9 20:16:10 2006 Subject: Re: [Numpy-discussion] Who uses matrix? In-Reply-To: References: Message-ID: On Wed, 10 May 2006, Bill Baxter apparently wrote: > kinda makes me wonder whether there are actually any > hardcore numpy users out there using the matrix class. > On the issue in question about returning scalars vs > matrices from things like sum(), my initial reaction is > that any time you have a 1x1 matrix it should just be > returned as a scalar.
In linear algebra terms, a 1x1 > matrix is a scalar for all intents and purposes I use matrices when doing linear algebra. I believe that matrices will be used a lot by new users, especially those coming from matrix oriented languages such as GAUSS and Matlab. Regarding 1 by 1 matrices, I see two competing considerations: - a one by one matrix is not a scalar, as conformity for multiplication makes obvious - matrix oriented languages, like GAUSS or Matlab, do not draw this distinction for two obvious reasons * it did not fit their initial design (where everything was effectively a matrix) * people find it a convenient convention to conflate the two So that leaves, it seems, three options for numpy: - do it "right" (i.e., make the distinction, allow for explicit conversion, and raise an exception when the two are obviously confused as in addition or multiplication when 1 by 1 is not conformable with the other argument) - conform to existing practice in other languages and likely to the expectations of users migrating from other languages - offer a switch, so that 1 by 1 matrices behave like scalars BUT the issue about what to return from sum() is entirely separate. I prefer it to be a scalar. Cheers, Alan Isaac From aisaac at american.edu Tue May 9 20:16:41 2006 From: aisaac at american.edu (Alan G Isaac) Date: Tue May 9 20:16:41 2006 Subject: Re: [Numpy-discussion] Getting 0.9.8 out this week In-Reply-To: <4461434F.6060604@cox.net> References: <4461131B.1050907@ieee.org><4461434F.6060604@cox.net> Message-ID: On Tue, 09 May 2006, Tim Hochberg apparently wrote: > Let's start with the example in the ticket: > >>> m.argmax() > matrix([[8]]) > Does anyone else think that this is a fairly nonsensical result? Yes. The result should be a scalar. > Not that this is specific to matrices, the array result is > just as weird: > >>> a.argmax() > 8 This is desirable. This is just the meaning of axis=None in this context.
I do not see a reason to discard this convenience and resort to a.ravel().argmax() > Anyway, that's off topic. Let's consider something that makes more sense: > >>> m.max() > matrix([[8]]) > >>> a.max() > 8 With axis=None, I want a scalar in both cases. But if 1 by 1 matrices end up being treated as scalars, I may not care. > If we switch to having m.max() return a scalar, then this > equivalence goes away. That makes things harder to > explain. Again, it is just the meaning of axis=None in this context. Right? Cheers, Alan Isaac From ndarray at mac.com Tue May 9 20:28:00 2006 From: ndarray at mac.com (Sasha) Date: Tue May 9 20:28:00 2006 Subject: Re: [Numpy-discussion] Who uses matrix? In-Reply-To: References: Message-ID: On 5/9/06, Bill Baxter wrote: > ..... In linear algebra terms, a 1x1 matrix *is* a scalar for all intents and purposes ... In linear algebra you start with an N-dimensional space V then define an NxN dimensional space of matrices M. You can view that space as either linear space V'xV or a ring of linear operators V->V. Thus M is both a (non-commutative) ring (you can add and multiply its elements) and a vector space (you can add and multiply by scalars). [An object that is both a ring and a linear space is called an "algebra"] Diagonal matrices with all diagonal elements equal behave like scalars w.r.t. multiplication, but it is different multiplication in the two cases. In a mathematical expression a . A where a is a scalar and A is a matrix "." is the R x M -> M operation dictated by the linear space structure. On the other hand in (a . I) * A, "*" is the non-commutative MxM->M operation dictated by the ring structure. Note that for N>1 M does not contain 1x1 matrices and still has elements that are "scalar for *some* intents and purposes": a . I.
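Numerically, the two products Sasha distinguishes agree in value even though they are different operations; a small sketch (using the later `@` matrix-multiplication operator, so the two multiplications are spelled differently):

```python
import numpy as np

A = np.array([[1.0, 2.0], [3.0, 4.0]])
a = 3.0

scaled = a * A                  # the scalar (module) action R x M -> M
via_ring = (a * np.eye(2)) @ A  # the ring product with the scalar-like a.I
print(np.allclose(scaled, via_ring))  # True
```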
Matrix algebra M can be extended to include rectangular matrices and matrices of different sizes, but by doing this one loses both the ring (not all pairs of matrices can be multiplied) and the linear space structures (not all pairs of matrices can be added). All this long introduction was to demonstrate that even from the linear algebra point of view a 1x1 matrix (an element of the space R'xR) is not the same as a scalar (an element of R). In numpy, however, algebraic differences between 1x1 matrices and scalars are not as important as the fact that matrices are mutable while scalars are not. From haase at msg.ucsf.edu Tue May 9 21:25:26 2006 From: haase at msg.ucsf.edu (Sebastian Haase) Date: Tue May 9 21:25:26 2006 Subject: [Numpy-discussion] C extension in new numpy: help to port from numarray Message-ID: <44616AF9.5010905@msg.ucsf.edu> Hi, I'm a long time numarray user. I have some SWIG typemaps that I have been using for quite some time. They are C++ oriented and support creating templated functions. I only cover the case of contiguous "input" arrays of 1D, 2D and 3D. (I would like to ensure that NO implicit type conversions are made so that I can use the same scheme to have arrays changed on the C/C++ side and can later see/access those in python.) (as I just added some text to the scipy wiki: www.scipy.org/Converting_from_numarray) I use something like: PyArrayObject *NAarr = NA_InputArray(inputPyObject, PYARR_TC, NUM_C_ARRAY); arr = (double *) NA_OFFSETDATA(NAarr); What is new numpy's equivalent of NA_InputArray? I tried looking http://numeric.scipy.org/array_interface.html but did not get the needed info. (numarray info on this is here: http://stsdas.stsci.edu/numarray/numarray-1.5.html/module-C-API.html) Thanks for the new numpy - I'm very excited !!
Sebastian Haase From tim.hochberg at cox.net Tue May 9 21:34:19 2006 From: tim.hochberg at cox.net (Tim Hochberg) Date: Tue May 9 21:34:19 2006 Subject: Re: [Numpy-discussion] Getting 0.9.8 out this week In-Reply-To: References: <4461131B.1050907@ieee.org><4461434F.6060604@cox.net> Message-ID: <44616D16.6090406@cox.net> Alan G Isaac wrote: >On Tue, 09 May 2006, Tim Hochberg apparently wrote: > > >>Let's start with the example in the ticket: >> >>> m.argmax() >> matrix([[8]]) >>Does anyone else think that this is a fairly nonsensical result? >> >> > >Yes. The result should be a scalar. > > Why? The current behaviour is both more self consistent and easier to explain, so it would be nice to see some examples of how a scalar would be advantageous here. > > >>Not that this is specific to matrices, the array result is >>just as weird: >> >>> a.argmax() >> 8 >> >> > >This is desirable. >This is just the meaning of axis=None in this context. >I do not see a reason to discard this convenience >and resort to a.ravel().argmax() > > And that is useful how? How do you plan to use the result without using ravel or its equivalent at some point anyway? Now if a.argmax() returned (2, 2) in this case, that would be useful, but it would also probably be a little bit of a pain to implement. And a little inconsistent with the other operators. But useful. >>Anyway, that's off topic. Let's consider something that makes more sense: >> >>> m.max() >> matrix([[8]]) >> >>> a.max() >> 8 >> >> > >With axis=None, I want a scalar in both cases. >But if 1 by 1 matrices end up being treated as scalars, >I may not care. > > I was going to say that 1x1 matrices are like scalars for most practical purposes, but while that's true for arrays, it's not really true for matrices since '*' stands for matrixmultiply, which doesn't work with 1x1 matrices. So, one is left with either munging dot or munging the axis=None stuff. Hmph....
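Tim's point shows up directly in the matrix class itself (still available, though discouraged, in current numpy): a true scalar scales m elementwise, while a 1x1 matrix goes through the matrix-multiply shape check. A sketch:

```python
import numpy as np

m = np.matrix([[0, 1, 2], [3, 4, 5], [6, 7, 8]])

print((2 * m)[0, 1])        # 2: a Python scalar scales every element
try:
    np.matrix([[2]]) * m    # '*' is matrix multiply: (1,1) x (3,3) misaligned
except ValueError:
    print("1x1 matrix is not aligned with 3x3")
```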
>>If we switch to having m.max() return a scalar, then this >>equivalence goes away. That makes things harder to >>explain. >> >> > >Again, it is just the meaning of axis=None in this context. >Right? > > Is it? Perhaps you should write down a candidate description of how axis should work for matrices. It would be best if you could come up with something that looks internally self consistent. At present axis=None means returning a 1x1 matrix, and there's a fairly self consistent story that we can tell to explain this behaviour. I haven't seen a description for the alternative behaviour other than it does one thing when axis is a number and something very different when axis=None. That seems less than ideal. Regards, -tim From haase at msg.ucsf.edu Tue May 9 22:00:26 2006 From: haase at msg.ucsf.edu (Sebastian Haase) Date: Tue May 9 22:00:26 2006 Subject: [Numpy-discussion] C extension in new numpy: help to port from numarray In-Reply-To: <44616AF9.5010905@msg.ucsf.edu> References: <44616AF9.5010905@msg.ucsf.edu> Message-ID: <44617334.2070606@msg.ucsf.edu> One additional question: is PyArray_FromDimsAndData creating a copy? I have very large image data and cannot afford copies :-( -Thanks. Sebastian Haase Sebastian Haase wrote: > Hi, > I'm a long time numarray user. > I have some SWIG typemaps that I'm using for quite some time. > They are C++ oriented and support creating template'd functions. > I only cover the case of contiguous "input" arrays of 1D, 2D and 3D. > (I would like to ensure that NO implicit type conversions are made so > that I can use the same scheme to have arrays changed on the C/C++ side > and can later see/access those in python.)
> > (as I just added some text to the scipy wiki: > www.scipy.org/Converting_from_numarray) > I use something like: > > PyArrayObject *NAarr = NA_InputArray(inputPyObject, PYARR_TC, > NUM_C_ARRAY); > arr = (double *) NA_OFFSETDATA(NAarr); > > > What is new numpy's equivalent of NA_InputArray > > I tried looking http://numeric.scipy.org/array_interface.html but did > not get the needed info. > > (numarray info on this is here: > http://stsdas.stsci.edu/numarray/numarray-1.5.html/module-C-API.html) > > Thanks for the new numpy - I'm very excited !! > Sebastian Haase > > > > ------------------------------------------------------- > Using Tomcat but need to do more? Need to support web services, security? > Get stuff done quickly with pre-integrated technology to make your job > easier > Download IBM WebSphere Application Server v.1.0.1 based on Apache Geronimo > http://sel.as-us.falkag.net/sel?cmd=lnk&kid=120709&bid=263057&dat=121642 > _______________________________________________ > Numpy-discussion mailing list > Numpy-discussion at lists.sourceforge.net > https://lists.sourceforge.net/lists/listinfo/numpy-discussion From wbaxter at gmail.com Tue May 9 22:11:05 2006 From: wbaxter at gmail.com (Bill Baxter) Date: Tue May 9 22:11:05 2006 Subject: [Numpy-discussion] Who uses matrix? In-Reply-To: References: Message-ID: On 5/10/06, Sasha wrote: > > On 5/9/06, Bill Baxter wrote: > > ..... In linear algebra terms, a 1x1 matrix *is* a scalar for all > intents and purposes ... > > In linear algebra you start with an N-dimensional space V then define > an NxN dimensional space of matrices M. [...] > All this long introduction was to demonstrate that even from the > linear algebra point of view a 1x1 matrix (an element of the space > R'xR) is not the same as a scalar (an element of R). Ok, fair enough. I think what I was trying to express is that it's rarely useful to have such a thing as a 1x1 matrix in linear algebra. 
For instance, I can't recall a single textbook or journal article I've read where a distinction was made between a true scalar product between two vectors and the 1x1 matrix resulting from the matrix product of x^t * y. But I'll admit that most of what I read is more like applied math than pure math. On the other hand, I would expect that most people trying to write programs to do actual calculations are also going to be more interested in practical applications of math than math theory. Also, if you want to get nit-picky about what is correct in terms of rigorous math, it raises the question as to whether it even makes any sense to apply .sum() to an element of R^n x R^m. In the end Numpy is a package for performing practical computations. So the question shouldn't be whether a 1x1 matrix is really the same thing as a scalar or not, but which way is going to make life the easiest for the people writing the code. Currently numpy lets you multiply a 1x1 matrix times another matrix as if the 1x1 were a scalar. That seems like a practical and useful behavior to me, regardless of its correctness. Anyway, back to sum -- if .sum() is to return a scalar, then what about .sum(axis=0)? Should that be a 1-D array of scalars rather than a matrix? If you answer no, then what about .sum(axis=0).sum(axis=1)? (Unrelated issue, but it seems that .sum(axis=0) and .sum(axis=1) both return row vectors, whereas I would expect the axis=1 variety to be a column vector.) Anyway, seems to me like Tim (I think) said. This is just introducing new inconsistencies in place of old ones, so what's the point. In numpy, however, algebraic differences between 1x1 matrices and > scalars are not as important as the fact that matrices are mutable > while scalars are not. > Seems like for most code the lack of methods and attributes on the scalar would be the bigger deal than the mutability difference. But either way, I'm not sure what point you're trying to make there.
Scalars should go away because 1x1 matrices are more flexible? Regards, --bill -------------- next part -------------- An HTML attachment was scrubbed... URL: From erin.sheldon at gmail.com Tue May 9 22:14:02 2006 From: erin.sheldon at gmail.com (Erin Sheldon) Date: Tue May 9 22:14:02 2006 Subject: [Numpy-discussion] numpy byte ordering bug Message-ID: <331116dc0605092213k35d42c32v9395439ed9265db1@mail.gmail.com> Hi all- The previous byte order problems seem to be fixed in the svn numpy. Here is another odd one: This is on my little-endian linux box. >>> numpy.__version__ '0.9.7.2490' >>> x = numpy.arange(10,dtype='<f8') >>> x array([ 0., 1., 2., 3., 4., 5., 6., 7., 8., 9.]) >>> x = numpy.arange(10,dtype='>f8') >>> x array([ 0.00000000e+000, 1.00000000e+000, 1.37186586e+303, -5.82360826e-011, -7.98920843e+292, 3.60319875e-021, 4.94303335e+282, -2.09830067e-031, -2.87854483e+272, 1.29367874e-041]) Clearly a byte ordering problem. This also fails for '>f4', but works for '>i4' and '>i8'. Erin From tim.hochberg at cox.net Tue May 9 22:37:13 2006 From: tim.hochberg at cox.net (Tim Hochberg) Date: Tue May 9 22:37:13 2006 Subject: [Numpy-discussion] Who uses matrix? In-Reply-To: References: Message-ID: <44617BC5.4060207@cox.net> Bill Baxter wrote: > [MATH elided because I'm too tired to try to follow it] > > > Anyway, back to sum -- if .sum() is to return a scalar, then what > about .sum(axis=0)? Should that be a 1-D array of scalars rather > than a matrix? If you answer no, then what about > .sum(axis=0).sum(axis=1)? (Unrelated issue, but it seems that > .sum(axis=0) and .sum(axis=1) both return row vectors, whereas I would > expect the axis=1 variety to be a column vector.) For matrices, sum(0) returns a 1xN matrix, while sum(1) returns an Nx1 vector as you expect. For arrays, it just returns a 1-D array, which isn't row or column, it's just 1-D. > Anyway, seems to me like Tim (I think) said.
This is just > introducing new inconsistencies in place of old ones, so what's the > point. Well as much as possible the end result should be (a) useful and (b) easy to explain. I suspect that the problem with the matrix class is that not enough people have experience with it, so we're left with either blindly following the lead of other matrix packages, some of which I know do stupid things, or taking our best guess. I suspect things will need to be tweaked as more experience with it piles up. > In numpy, however, algebraic differences between 1x1 matrices and > scalars are not as important as the fact that matrices are mutable > while scalars are not. > > > Seems like for most code the lack of methods and attributes on the > scalar would be the bigger deal than the mutability difference. Scalars in numpy *do* have methods and attributes, which may be why Sasha doesn't think that difference is a big deal ;-). > But either way, I'm not sure what point you're trying to make > there. Scalars should go away because 1x1 matrices are more flexible? Actually I thought that Sasha's position was that both scalars and *rank-0* [aka shape=()] arrays were useful in different circumstances and that we shouldn't completely annihilate rank-0 arrays in favor of scalars. I'm not quite sure what that has to do with 1x1 arrays, which are a different kettle of fish, although also weird because of broadcasting. I admit to never taking the time to fully decipher Sasha's position on rank-0 arrays though. Speaking of rank-0 arrays, here's a wacky idea that Sasha will probably appreciate even if it (probably) turns out to be impractical: what if matrix reduce returned 0xN and Nx0 arrays instead of 1xN and Nx1 arrays? This preserves the column/rowness of the result; however, a rank-0 array falls out of the axis=None case naturally. Rank-0 arrays are nearly identical to scalars (except for the mutability bit, which I suspect is a pretty minor issue).
I generated an Nx0 array by hand to try this out; some stuff works, but at least moderate tweaking would be required to make this work correctly. -tim From wbaxter at gmail.com Tue May 9 22:51:15 2006 From: wbaxter at gmail.com (Bill Baxter) Date: Tue May 9 22:51:15 2006 Subject: [Numpy-discussion] Who uses matrix? In-Reply-To: <44617BC5.4060207@cox.net> References: <44617BC5.4060207@cox.net> Message-ID: On 5/10/06, Tim Hochberg wrote: > > > For matrices, sum(0) returns a 1xN matrix, while sum(1) returns a Nx1 > vector as you expect. For arrays, it just returns a 1-D array, which > isn't row or column it's just 1-D. Ok, that's new in numpy 0.9.6 apparently. I was checking sum()'s behavior on a 0.9.5 install. > Anyway, seems to me like Tim (I think) said. This is just > > introducing new inconsistencies in place of old ones, so what's the > > point. > > Well as much as possible the end result should be (a) useful and (b) > easy to explain. I suspect that the problem with the matrix class is > that not enough people have experience with it, so we're left with > either blindly following the lead of other matrix packages, some of > which I know do stupid things, or taking our best guess. I suspect > things will need to be tweaked as more experience with it piles up. That's kinda why I asked if anyone is seriously using it. In my case, with the number of gotchas I ran across, I feel like I might have been better off just writing my code from the beginning with arrays and calling numpy.dot() to multiply things when I needed to. Unless the matrix class somehow becomes more "central" to numpy I think it's going to continue to languish as a dangly appendage off the main numpy that mostly sorta works most of the time. > In numpy, however, algebraic differences between 1x1 matrices and > > scalars are not as important as the fact that matrices are mutable
> > > > > > Seems like for most code the lack of methods and attributes on the > > scalar would be the bigger deal than the mutability difference. > > Scalars in numpy *do* have methods and attributes, which may be why > Sasha doesn't think that difference is a big deal ;-). Ah. I haven't really understood all this business about the "new numpy scalar module". I thought we were talking about returning plain old python scalars. Regards, --bill -------------- next part -------------- An HTML attachment was scrubbed... URL: From fperez.net at gmail.com Tue May 9 23:06:09 2006 From: fperez.net at gmail.com (Fernando Perez) Date: Tue May 9 23:06:09 2006 Subject: [Numpy-discussion] Who uses matrix? In-Reply-To: References: <44617BC5.4060207@cox.net> Message-ID: On 5/9/06, Bill Baxter wrote: > That's kinda why I asked if anyone is seriously using it. In my case, with > the number of gotchas I ran across, I feel like I might have been better off > just writing my code from the beginning with arrays and calling numpy.dot() > to multiply things when I needed to. Unless the matrix class somehow > becomes more "central" to numpy I think it's going to continue to languish > as a dangly appendage off the main numpy that mostly sorta works most of the > time. Just as a data point: I do lots of linear algebra using Numeric (production code which I'm just about to move to numpy now, but it doesn't matter for this discussion). I always do everything using plain arrays, and I'm happy to call dot() or transpose() here and there. I personally find that the added syntactic overhead of arrays far offsets having to think about all the special cases that seem to arise with the matrix objects. This also makes much of my code work transparently as the dimensionality of the problem (reflected in the rank of the arrays) changes, as long as I'm careful with how I write certain things. 
Cheers, f From aisaac at american.edu Tue May 9 23:19:03 2006 From: aisaac at american.edu (Alan G Isaac) Date: Tue May 9 23:19:03 2006 Subject: [Numpy-discussion] Getting 0.9.8 out this week In-Reply-To: <44616D16.6090406@cox.net> References: <4461131B.1050907@ieee.org><4461434F.6060604@cox.net> <44616D16.6090406@cox.net> Message-ID: >> On Tue, 09 May 2006, Tim Hochberg apparently wrote: >>> Let's start with the example in the ticket: >>> >>> m.argmax() >>> matrix([[8]]) >>> Does anyone else think that this is a fairly nonsensical result? > Alan G Isaac wrote: >> Yes. The result should be a scalar. On Tue, 09 May 2006, Tim Hochberg apparently wrote: > Why? The current behaviour is both more self consistent and easier to > explain, so it would be nice to see some examples of how a scalar would > be advantageous here. Because that is the result for arrays. For consistency, I think m.argmax() and m.A.argmax() should be equivalent. (Also along axes, and therefore never returning matrices. And let ravel really ravel it, rather than duplicating hstack! What is the principle at work: must matrices always produce matrices almost no matter what we do with them? I prefer the principle that standard matrix operations on matrices return matrices. But then, I see I am not ready to be consistent, as I do want m.max(num) to be a matrix ... ) > Now if a.argmax() returned (2, 2) in this case, that would > be useful Agreed. I should not have implied that a scalar return is "better" than a tuple in this case. But this seems a radical change in behavior. If this is the behavior desired in this case, does that not suggest a behavior change for every case? That is, are you not in effect arguing that argmax should return some kind of indexing object such that a.max(num) == a[a.argmax(num)] This seems quite useful but entirely new.
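The indexing-object behaviour discussed here -- an argmax whose result can index the array directly -- can be emulated with unravel_index in later numpy releases (an illustrative sketch, not the 0.9.x behaviour under debate):

```python
import numpy as np

a = np.arange(9).reshape(3, 3)

flat = a.argmax()                       # flat index into the ravelled array
idx = np.unravel_index(flat, a.shape)   # convert it to a coordinate tuple

assert idx == (2, 2)
assert a[idx] == a.max()                # the tuple satisfies a[a.argmax-ish] == a.max()
```

So the "tuple argmax" semantics sketched in the thread ended up being available as a two-step idiom rather than a change to argmax itself.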
Cheers, Alan Isaac From aisaac at american.edu Tue May 9 23:19:05 2006 From: aisaac at american.edu (Alan G Isaac) Date: Tue May 9 23:19:05 2006 Subject: [Numpy-discussion] Getting 0.9.8 out this week In-Reply-To: <44616D16.6090406@cox.net> References: <4461131B.1050907@ieee.org><4461434F.6060604@cox.net> <44616D16.6090406@cox.net> Message-ID: On Tue, 09 May 2006, Tim Hochberg apparently wrote: > Perhaps you should write down a candidate description of > how axis should work for matrices. It would be best if you > could come up with something that looks internally self > consistent. At present axis=None means returning a 1x1 matrix > and there's a fairly self consistent story that we can > tell to explain this behaviour. You mean that m.max(1).max(0) == m.max() (which I take it is the new behavior)? > I haven't seen a description for the alternative behaviour > other than it does one thing when axis is a number and > something very different when axis=None. That seems less > than ideal. OK, I agree. There are several cases where there have been requests to return scalars instead of 1x1 matrices. This is starting to look like that, and I do not want to take a stand on such questions. But for context, the "principle" (such as it is) that I had in mind is essentially that axis=None is a request for an *element* of a matrix. Cheers, Alan Isaac From jonathan.taylor at stanford.edu Wed May 10 00:54:18 2006 From: jonathan.taylor at stanford.edu (Jonathan Taylor) Date: Wed May 10 00:54:18 2006 Subject: [Numpy-discussion] Who uses matrix? In-Reply-To: References: <44617BC5.4060207@cox.net> Message-ID: <44619BE6.3070409@stanford.edu> my $0.02: i have never used a matrix in numpy, though i am quite a regular user of numpy/Numeric/numarray (now just numpy, of course). i agree with fernando about the overhead of using "dot" and "transpose" once in a while..... the overhead is small and it sometimes forces one to be a little less lazy in writing code.
-- jonathan Bill Baxter wrote: > > On 5/10/06, *Tim Hochberg* > wrote: > > > For matrices, sum(0) returns a 1xN matrix, while sum(1) returns a Nx1 > vector as you expect. For arrays, it just returns a 1-D array, which > isn't row or column it's just 1-D. > > > Ok, that's new in numpy 0.9.6 apparently. I was checking sum()'s > behavior on a 0.9.5 install. > > > Anyway, seems to me like Tim (I think) said. This is just > > introducing new inconsistencies in place of old ones, so what's the > > point. > > Well as much as possible the end result should be (a) useful and (b) > easy to explain. I suspect that the problem with the matrix class is > that not enough people have experience with it, so we're left with > either blindly following the lead of other matrix packages, some of > which I know do stupid things, or taking our best guess. I suspect > things will need to be tweaked as more experience with it piles up. > > > That's kinda why I asked if anyone is seriously using it. In my case, > with the number of gotchas I ran across, I feel like I might have been > better off just writing my code from the beginning with arrays and > calling numpy.dot() to multiply things when I needed to. Unless the > matrix class somehow becomes more "central" to numpy I think it's > going to continue to languish as a dangly appendage off the main numpy > that mostly sorta works most of the time. > > > In numpy, however, algebraic differences between 1x1 > matrices and > > scalars are not as important as the fact that matrices are > mutable > > while scalars are not. > > > > > > Seems like for most code the lack of methods and attributes on the > > scalar would be the bigger deal than the mutability difference. > > Scalars in numpy *do* have methods and attributes, which may be why > Sasha doesn't think that difference is a big deal ;-). > > > Ah. I haven't really understood all this business about the "new > numpy scalar module".
I thought we were talking about returning plain > old python scalars. > > Regards, > --bill -- ------------------------------------------------------------------------ I'm part of the Team in Training: please support our efforts for the Leukemia and Lymphoma Society! http://www.active.com/donate/tntsvmb/tntsvmbJTaylor GO TEAM !!! ------------------------------------------------------------------------ Jonathan Taylor Tel: 650.723.9230 Dept. of Statistics Fax: 650.725.8977 Sequoia Hall, 137 www-stat.stanford.edu/~jtaylo 390 Serra Mall Stanford, CA 94305 From jonathan.taylor at stanford.edu Wed May 10 01:21:38 2006 From: jonathan.taylor at stanford.edu (Jonathan Taylor) Date: Wed May 10 01:21:38 2006 Subject: [Numpy-discussion] broken recarray Message-ID: <4461A259.8050105@stanford.edu> hi, sorry to sound like a broken recarray, but i {\em really} don't understand the origin of this error message. i have more complicated examples, but i don't even understand this very simple unpythonic behaviour. can anybody help me out? i have reproduced this error on three different machines with python2.4 (two debian unstables, one red hat ? (a redhat enterprise server at work that i installed numpy on)). my complicated examples also reproduce on these three different machines..... ------------------------------------------------------ [jtaylo at miller jtaylo]$ python2.4 Python 2.4 (#1, Feb 9 2006, 18:46:06) [GCC 3.2.3 20030502 (Red Hat Linux 3.2.3-53)] on linux2 Type "help", "copyright", "credits" or "license" for more information. >>> import numpy as N >>> >>> desc = N.dtype({'names':['x'], 'formats':[N.Float]}) >>> >>> print N.array([(3.,),(4.,)], dtype=desc) [(3.0,) (4.0,)] >>> print N.array([[3.],[4.]], dtype=desc) Traceback (most recent call last): File "<stdin>", line 1, in ? TypeError: expected a readable buffer object >>> >>> --------------------------------------------------------------------------------------- any ideas?
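The first form works because each record of a structured dtype is passed as a tuple, while a nested list is treated as a buffer-like row. A sketch with a later numpy (np.float64 standing in for the old N.Float, and the exact exception raised for the list form has varied across releases, so it is caught broadly here):

```python
import numpy as np

desc = np.dtype({'names': ['x'], 'formats': [np.float64]})

# Records supplied as tuples: accepted.
ok = np.array([(3.,), (4.,)], dtype=desc)
assert ok['x'].tolist() == [3.0, 4.0]

# Records supplied as lists: raised TypeError ("expected a readable
# buffer object") in 2006-era numpy; newer releases may word or type
# the error differently.
try:
    np.array([[3.], [4.]], dtype=desc)
except Exception:
    pass
```

So the quick fix for the session above is to write the rows as tuples, `[(3.,), (4.,)]`, rather than lists.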
jonathan -- ------------------------------------------------------------------------ I'm part of the Team in Training: please support our efforts for the Leukemia and Lymphoma Society! http://www.active.com/donate/tntsvmb/tntsvmbJTaylor GO TEAM !!! ------------------------------------------------------------------------ Jonathan Taylor Tel: 650.723.9230 Dept. of Statistics Fax: 650.725.8977 Sequoia Hall, 137 www-stat.stanford.edu/~jtaylo 390 Serra Mall Stanford, CA 94305 From svetosch at gmx.net Wed May 10 01:26:16 2006 From: svetosch at gmx.net (Sven Schreiber) Date: Wed May 10 01:26:16 2006 Subject: [Numpy-discussion] Getting 0.9.8 out this week In-Reply-To: <4461131B.1050907@ieee.org> References: <4461131B.1050907@ieee.org> Message-ID: <4461A357.5000106@gmx.net> Travis Oliphant schrieb: > > I'd like to get 0.9.8 of NumPy released by the end of the week. There > are a few Trac tickets that need to be resolved by then. > In particular #83 suggests returning scalars instead of 1x1 matrices > from certain reduce-like methods. Please chime in on your preference. > I'm waiting to hear more feedback before applying the patch. > If somebody wants to somehow consolidate the numbers in a matrix into a single number I think it is most natural to return a scalar, which probably is the most intuitive representation of a single number. Also, I can think of many cases where you want to use the resulting number in a scalar multiplication, but I cannot think of a single case where you want to have an exception raised because of non-conforming shapes of the (1,1)-matrix-result and some other matrix. A slightly different issue (I think) is the question of what's the best way to tell numpy that you want to consolidate (or call it reduce, or whatever) over all axes. After reading the other messages so far in this thread, it seems to me that this second issue caused some of the concern, not so much the return type itself.
(But I don't have an opinion on this latter syntax-like question, I don't know enough about the issues involved here.) Good luck for the release, and many thanks, Sven From svetosch at gmx.net Wed May 10 01:42:01 2006 From: svetosch at gmx.net (Sven Schreiber) Date: Wed May 10 01:42:01 2006 Subject: [Numpy-discussion] Who uses matrix? In-Reply-To: <44619BE6.3070409@stanford.edu> References: <44617BC5.4060207@cox.net> <44619BE6.3070409@stanford.edu> Message-ID: <4461A70E.5000601@gmx.net> Jonathan Taylor schrieb: > my $0.02: > > i have never used a matrix in numpy, though i am quite a regular user of > numpy/Numeric/numarray (now just numpy, of course). > > i agree with fernando about the overhead of using "dot" and "transpose" > once in a while..... the overhead is small and it sometimes forces one > to be a little less lazy in writing code. > The overhead is not so small imho, and if you use numpy for what you may call rapid prototyping of matrix calculations, the loss of productivity and code readability is substantial. My impression from this list is that there are many people who use numpy for code that is developed and optimized for quite a long time. In those cases it may be best to use arrays. But I think there are many potential numpy users out there who want to quickly implement some matrix calculation of a journal article or textbook. Numpy will not attract those people if they have to write inverse( dot(transpose(a), a) ) instead of (a.T*a).I as always only my 2c as well, of course cheers, sven From schofield at ftw.at Wed May 10 01:52:14 2006 From: schofield at ftw.at (Ed Schofield) Date: Wed May 10 01:52:14 2006 Subject: [Numpy-discussion] Who uses matrix? 
In-Reply-To: <44617BC5.4060207@cox.net> References: <44617BC5.4060207@cox.net> Message-ID: <4461A71A.30602@ftw.at> Tim Hochberg wrote: > Actually I thought that Sasha's position was that both scalars and > *rank-0* [aka shape=()] arrays were useful in different circumstances > and that we shouldn't completely anihilate rank-0 arrays in favor of > scalars. I'm not quite sure what that has to do with 1x1 arrays which > are a different kettle of fish, although also weird because of > broadcasting. I admit to never taking the time to fully deciphre > Sasha's position on rank-0 arrays though. > > Speaking of rank-0 arrays, here's a wacky idea that Sasha will > probably appreciate even if it (probably) turns out to be impracticle: > what if matrix reduce returned 0xN and Nx0 array instead of 1xN and > Nx1 arrays. This preserves the column/rowness of the result, however > a rank-0 array falls out of the axis=None case naturally. Rank-0 > arrays are nearly identical to scalars (except for the mutability bit, > which I suspect is a pretty minor issue). I generated a Nx0 array by > hand to try this out; some stuff works, but at least moderate tweaking > would be required to make this work correctly. I believe something like this is the right way to solve the problem. Reduce operations on arrays work well by also reducing dimensionality. I think reduce operations on matrices are clunky because matrices are inherently 2d, and that the result of matrix reduce should *not* be a matrix, but a *vector* -- a 1d object, similar to a 1d array but with orientation information. Tim's wacky 0xN and Nx0 objects are something like row and column vectors. But the simplest construction of this would probably be a new vector object. This needn't be complex, and could share much of the matrix code. If there's sufficient interest I could write a simple vector class and post it for review. 
-- Ed From schofield at ftw.at Wed May 10 02:25:20 2006 From: schofield at ftw.at (Ed Schofield) Date: Wed May 10 02:25:20 2006 Subject: [Numpy-discussion] argmax In-Reply-To: <4461434F.6060604@cox.net> References: <4461131B.1050907@ieee.org> <4461434F.6060604@cox.net> Message-ID: <4461B26F.4060309@ftw.at> Tim Hochberg wrote: > >>> m > matrix([[0, 1, 2], > [3, 4, 5], > [6, 7, 8]]) > >>> m.argmax() > matrix([[8]]) > > Does anyone else think that this is a fairly nonsensical result? Not > that this is specific to matrices, the array result is just as weird: > > >>> a > array([[0, 1, 2], > [3, 4, 5], > [6, 7, 8]]) > >>> a.argmax() > 8 > > Given that obj[obj.argmax()] should really equal obj.max(), argmax > with no axis specified should really either raise an exception or > return a tuple. I came across this last week, and I came to a similar conclusion. I agree that a sequence of indices would be far more useful. This sequence could be either an array or a tuple: With a tuple: >>> a array([[0, 1, 2], [3, 4, 5], [6, 7, 8]]) >>> a.argmax() (2, 2) >>> a[a.argmax()] == a.max() True >>> b = array([0, 10, 20]) >>> b.argmax() (2,) With an array: >>> a array([[0, 1, 2], [3, 4, 5], [6, 7, 8]]) >>> a.argmax() array([(2,2)] >>> a[tuple(a.argmax())] == a.max() True >>> b = array([0, 10, 20]) >>> print b.argmax() 2 >>> type(b.argmax()) [currently ] I think either one would be more useful than the current ravel().argmax() behaviour. A tuple would be more consistent with the nonzero method. -- Ed From st at sigmasquared.net Wed May 10 02:31:22 2006 From: st at sigmasquared.net (Stephan Tolksdorf) Date: Wed May 10 02:31:22 2006 Subject: [Numpy-discussion] Getting 0.9.8 out this week In-Reply-To: <4461434F.6060604@cox.net> References: <4461131B.1050907@ieee.org> <4461434F.6060604@cox.net> Message-ID: <4461B2B0.30408@sigmasquared.net> I'm +1 for returning scalars instead of 1x1 for reduce-like methods on matrices in case no axis is specified. 
Tim Hochberg wrote: > >>> m.max(1).max(0) > matrix([[8]]) > >>> a.max(1).max(0) > 8 (...) > If we switch to having m.max() return a scalar, then this equivalence > goes away. That makes things harder to explain. > > So, in the absence of more compelling use cases that those presented in > the ticket I'd be inclined to leave things as they are. Of course I'm > not a user of the matrix class, so take that for what it's worth. I don't think symmetry between matrix and array classes should be an argument for doing things a certain way for matrices when there are other arguments against it, because matrices were meant to behave differently in the first place. As to the consistency of explanation, why isn't it consistent to say that max() returns a scalar and max(axis) returns a column or row vector (in form of a matrix)? I don't see a problem with m.max(1).max(0) still returning a 1x1 matrix, though I think for matrix multiplication broadcasting rules shouldn't be applied to 1x1 matrices. (Is there any use for broadcasting rules applied to matrix multiplication in general?) Regards, Stephan From st at sigmasquared.net Wed May 10 03:39:19 2006 From: st at sigmasquared.net (Stephan Tolksdorf) Date: Wed May 10 03:39:19 2006 Subject: [Numpy-discussion] Who uses matrix? In-Reply-To: <4461A71A.30602@ftw.at> References: <44617BC5.4060207@cox.net> <4461A71A.30602@ftw.at> Message-ID: <4461C295.5080201@sigmasquared.net> Ed Schofield wrote: > I believe something like this is the right way to solve the problem. > Reduce operations on arrays work well by also reducing dimensionality. > I think reduce operations on matrices are clunky because matrices are > inherently 2d, and that the result of matrix reduce should *not* be a > matrix, but a *vector* -- a 1d object, similar to a 1d array but with > orientation information. For me a vector with "orientation" is a matrix. 
While it sometimes might be convenient not to differentiate between column and row vectors, which could be an argument for a vector without orientation, I wouldn't want to make a difference between a row vector and matrix, both are linear forms in the mathematical sense. Why shouldn't reduce operations just return a scalar for no axis argument and a Nx1 or 1xN matrix when an axis is specified? Regards, Stephan From pjssilva at ime.usp.br Wed May 10 03:57:02 2006 From: pjssilva at ime.usp.br (Paulo J. S. Silva) Date: Wed May 10 03:57:02 2006 Subject: [Numpy-discussion] Who uses matrix? In-Reply-To: References: Message-ID: <1147258568.29685.12.camel@localhost.localdomain> Bill Baxter wrote: > Ok, fair enough. I think what I was trying to express is that it's > rarely useful to have such a thing as a 1x1 matrix in linear algebra. > For instance, I can't recall a single textbook or journal article I've > read where a distinction was made between a true scalar product > between two vectors and the 1x1 matrix resulting from the matrix > product of, x^t * y. But I'll admit that most of what I read is more > like applied math than pure math. On the other hand, I would expect > that most people trying to write programs to do actual calculations > are also going to be more interested in practical applications of math > than math theory. > > > Also, if you want to get nit-picky about what is correct in terms of > rigorous math, it rasies the question as to whether it even makes any > sense to apply .sum() to an element of R^n x R^m. In the end Numpy > is a package for performing practical computations. +1. 1x1 matrices usually appear when we compute a inner product. I also read (and write) lots (fewer) papers and it is very usual to define the (real) inner product as x.T*y (where x and y are column vectors: nx1 matrices). Of course this is an abuse of notation as the inner product should return a real number. As you see Mathematics does this sometimes, an abuse of notation. 
Actually, I feel that matrices are very important in numpy, for the compatibility reasons cited before. You can call me lazy, but my mind really prefers the second option below: > inverse( dot(transpose(a), a) ) instead of (a.T*a).I Best, Paulo -- Paulo José da Silva e Silva Professor Assistente do Dep. de Ciência da Computação (Assistant Professor of the Computer Science Dept.) Universidade de São Paulo - Brazil e-mail: pjssilva at ime.usp.br Web: http://www.ime.usp.br/~pjssilva Teoria é o que não entendemos o (Theory is something we don't) suficiente para chamar de prática. (understand well enough to call practice) From phddas at yahoo.com Wed May 10 03:58:04 2006 From: phddas at yahoo.com (Fred J.) Date: Wed May 10 03:58:04 2006 Subject: [Numpy-discussion] numpy book Message-ID: <20060510105734.44985.qmail@web54607.mail.yahoo.com> Hi where do I get the numpy book from? a link would be fine. and thanks for making it available for such a low price. From steve at shrogers.com Wed May 10 05:38:01 2006 From: steve at shrogers.com (Steven H. Rogers) Date: Wed May 10 05:38:01 2006 Subject: [Numpy-discussion] numpy book In-Reply-To: <20060510105734.44985.qmail@web54607.mail.yahoo.com> References: <20060510105734.44985.qmail@web54607.mail.yahoo.com> Message-ID: <4461DEDD.9030306@shrogers.com> http://www.trelgol.com/ Fred J. wrote: > Hi > > where do I get the numpy book from? a link would be fine. and thanks for > making it available for such a low price. -- Steven H. Rogers, Ph.D., steve at shrogers.com Weblog: http://shrogers.com/weblog "He who refuses to do arithmetic is doomed to talk nonsense."
-- John McCarthy From joseph.a.crider at Boeing.com Wed May 10 06:59:13 2006 From: joseph.a.crider at Boeing.com (Crider, Joseph A) Date: Wed May 10 06:59:13 2006 Subject: [Numpy-discussion] Compiling numpy-0.9.6 on Solaris 9 Message-ID: I am considering moving to SciPy for our project as it has some features that we need and that we don't have time to write ourselves. However, I need to be able to support several different architectures, with Solaris 9 being the most important at this time. I've succeeded in building numpy on Cygwin and on another Sun running Solaris 8 (with gcc 2.95.3 and python 2.4), but I'm not getting very far with Solaris 9 (with gcc 3.3.2 and python 2.4). Here are a few lines from the output of the command "python setup.py install --home=~": Generating build/src/numpy/core/config.h customize SunFCompiler customize SunFCompiler gcc options: '-fno-strict-aliasing -DNDEBUG -g -O3 -Wall -Wstrict-prototypes -fPIC' compile options: '-I/usr/local/include/python2.4 -Inumpy/core/src -Inumpy/core/include -I/usr/loca/include/python2.4 -c' gcc: _configtest.c Segmentation Fault Segmentation Fault failure. removing: _configtest.c _configtest.o The last few lines of the traceback are: File "numpy/core/setup.py", line 37, in generate_config_h raise "ERROR: Failed to test configuration" ERROR: Failed to test configuration This system does have the Sun compiler and gcc 2.95.3 installed also, but I don't know how to change to another compiler without messing up my path. (Our project does use some Python 2.4 features and Python 2.2 is installed in the same directory as gcc 2.95.3 while gcc 3.3.2 is in the same directory as Python 2.4.) Any suggestions? J. 
Allen Crider (256)461-2699 From aisaac at american.edu Wed May 10 07:22:13 2006 From: aisaac at american.edu (Alan G Isaac) Date: Wed May 10 07:22:13 2006 Subject: [Numpy-discussion] numpy book In-Reply-To: <20060510105734.44985.qmail@web54607.mail.yahoo.com> References: <20060510105734.44985.qmail@web54607.mail.yahoo.com> Message-ID: On Wed, 10 May 2006, "Fred J." apparently wrote: > where do I get the numpy book from? http://www.tramy.us/ Cheers, Alan Isaac From stefan at sun.ac.za Wed May 10 07:41:19 2006 From: stefan at sun.ac.za (Stefan van der Walt) Date: Wed May 10 07:41:19 2006 Subject: [Numpy-discussion] Converting a flat index to an array index Message-ID: <20060510144032.GB15544@mentat.za.net> Hi all, Methods like argmax() and argmin() return an index into a flattened array. For example, x = N.zeros((3,3)) x[1,0] = 1 x.argmax() == 3 Is there an easy way to convert between this flat index back to the original array index, i.e. from 3 to (1,0)? If not, the attached code might be a useful addition. Regards St?fan -------------- next part -------------- A non-text attachment was scrubbed... Name: idx_fromflat.py Type: text/x-python Size: 1487 bytes Desc: not available URL: From joseph.a.crider at Boeing.com Wed May 10 08:28:02 2006 From: joseph.a.crider at Boeing.com (Crider, Joseph A) Date: Wed May 10 08:28:02 2006 Subject: [Numpy-discussion] Compiling numpy-0.9.6 on Solaris 9 In-Reply-To: <4461FBD1.6000109@sigmasquared.net> Message-ID: According to one of our system administrators, she installed Python 2.4 from a precompiled package on Sunfreeware.com. The message that is displayed when Python is started indicates it was built using gcc 3.3.2, so I would have expected more problems on our Solaris 8 machine (which only has gcc 2.95.3) than Solaris 9. J. 
Allen Crider (256)461-2699 -----Original Message----- From: Stephan Tolksdorf [mailto:st at sigmasquared.net] Sent: Wednesday, May 10, 2006 9:42 AM To: Crider, Joseph A Subject: Re: [Numpy-discussion] Compiling numpy-0.9.6 on Solaris 9 Addendum: Same runtime library should suffice normally. Stephan > Not knowing anything about Sun, I'd just like to note that you need to > make sure that Python and its extension (Numpy) are built with the same > compiler and runtime library. You are probably aware of this and made > sure they are, but just in case... > > Regards, > Stephan > > > Crider, Joseph A wrote: >> I am considering moving to SciPy for our project as it has some features >> that we need and that we don't have time to write ourselves. However, I >> need to be able to support several different architectures, with Solaris >> 9 being the most important at this time. I've succeeded in building >> numpy on Cygwin and on another Sun running Solaris 8 (with gcc 2.95.3 >> and python 2.4), but I'm not getting very far with Solaris 9 (with gcc >> 3.3.2 and python 2.4). >> >> Here are a few lines from the output of the command "python setup.py >> install --home=~": >> >> Generating build/src/numpy/core/config.h >> customize SunFCompiler >> customize SunFCompiler >> gcc options: '-fno-strict-aliasing -DNDEBUG -g -O3 -Wall >> -Wstrict-prototypes -fPIC' >> compile options: '-I/usr/local/include/python2.4 -Inumpy/core/src >> -Inumpy/core/include -I/usr/loca/include/python2.4 -c' >> gcc: _configtest.c >> Segmentation Fault >> Segmentation Fault >> failure. >> removing: _configtest.c _configtest.o >> >> The last few lines of the traceback are: >> File "numpy/core/setup.py", line 37, in generate_config_h >> raise "ERROR: Failed to test configuration" >> ERROR: Failed to test configuration >> >> This system does have the Sun compiler and gcc 2.95.3 installed also, >> but I don't know how to change to another compiler without messing up my >> path. 
(Our project does use some Python 2.4 features and Python 2.2 is >> installed in the same directory as gcc 2.95.3 while gcc 3.3.2 is in the >> same directory as Python 2.4.) >> >> Any suggestions? >> >> J. Allen Crider >> (256)461-2699 >> >> >> >> ------------------------------------------------------- >> Using Tomcat but need to do more? Need to support web services, security? >> Get stuff done quickly with pre-integrated technology to make your job >> easier >> Download IBM WebSphere Application Server v.1.0.1 based on Apache >> Geronimo >> http://sel.as-us.falkag.net/sel?cmd=k&kid0709&bid&3057&dat1642 >> _______________________________________________ >> Numpy-discussion mailing list >> Numpy-discussion at lists.sourceforge.net >> https://lists.sourceforge.net/lists/listinfo/numpy-discussion >> >> > > From chanley at stsci.edu Wed May 10 09:28:00 2006 From: chanley at stsci.edu (Christopher Hanley) Date: Wed May 10 09:28:00 2006 Subject: [Numpy-discussion] use of boolean index array returns byteswapped values Message-ID: <44621454.1040008@stsci.edu> When using a big-endian array on a little-endian OS, the use of a boolean array as an index array causes the resulting array to have byteswapped values. Example code is below: In [14]: a = numpy.array([1,2,3,4,5,6,7,8,9],'>f8') In [15]: a Out[15]: array([ 1., 2., 3., 4., 5., 6., 7., 8., 9.]) In [16]: x = numpy.where( (a>2) & (a<6) ) In [17]: x Out[17]: (array([2, 3, 4]),) In [18]: a[x] Out[18]: array([ 3., 4., 5.]) In [19]: y = ( (a>2) & (a<6) ) In [20]: y Out[20]: array([False, False, True, True, True, False, False, False, False], dtype=bool) In [21]: a[y] Out[21]: array([ 1.04346664e-320, 2.05531309e-320, 2.56123631e-320]) This bug was originally discovered by Erin Sheldon while testing pyfits. I have submitted this bug report on the Trac site. 
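The report above translates directly into a small regression test: on any NumPy release where this bug is fixed, the where() form and the boolean-mask form must agree. This sketch passes on current releases:

```python
import numpy as np

a = np.array([1, 2, 3, 4, 5, 6, 7, 8, 9], '>f8')   # big-endian regardless of host
mask = (a > 2) & (a < 6)
assert mask.tolist() == [False, False, True, True, True,
                         False, False, False, False]

# Both index forms must return properly byte-ordered values:
assert a[np.where(mask)].tolist() == [3.0, 4.0, 5.0]
assert a[mask].tolist() == [3.0, 4.0, 5.0]
```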
Chris From gruel at astro.ufl.edu Wed May 10 09:47:13 2006 From: gruel at astro.ufl.edu (Nicolas Gruel) Date: Wed May 10 09:47:13 2006 Subject: [Numpy-discussion] numpy book In-Reply-To: References: <20060510105734.44985.qmail@web54607.mail.yahoo.com> Message-ID: <200605101222.06050.gruel@astro.ufl.edu> And the new version? I bought mine some time ago and was very surprised to see that some people have a newer version. I'm not complaining; I'm pretty sure that Travis is very busy with NumPy, but perhaps a good thing to do would be to automatically send a link to the latest version of the book. Perhaps a web site with a login+passwd, given when you buy the book, would be the perfect solution. Travis could upload his version to the site and people could check it themselves. Just a suggestion, but I would like to have my book updated; my version is dated October 27 2005... N. On Wednesday 10 May 2006 10:28, Alan G Isaac wrote: > On Wed, 10 May 2006, "Fred J." apparently wrote: > > where do I get the numpy book from? > > http://www.tramy.us/ > > Cheers, > Alan Isaac > > > > > > > ------------------------------------------------------- > Using Tomcat but need to do more? Need to support web services, security?
> Get stuff done quickly with pre-integrated technology to make your job > easier Download IBM WebSphere Application Server v.1.0.1 based on Apache > Geronimo > http://sel.as-us.falkag.net/sel?cmd=lnk&kid=120709&bid=263057&dat=121642 > _______________________________________________ > Numpy-discussion mailing list > Numpy-discussion at lists.sourceforge.net > https://lists.sourceforge.net/lists/listinfo/numpy-discussion From oliphant.travis at ieee.org Wed May 10 09:54:13 2006 From: oliphant.travis at ieee.org (Travis Oliphant) Date: Wed May 10 09:54:13 2006 Subject: [Numpy-discussion] numpy byte ordering bug In-Reply-To: <331116dc0605092213k35d42c32v9395439ed9265db1@mail.gmail.com> References: <331116dc0605092213k35d42c32v9395439ed9265db1@mail.gmail.com> Message-ID: <44621A6D.8010002@ieee.org> Erin Sheldon wrote: > Hi all- > > The previous byte order problems seem to be fixed > in the svn numpy. Here is another odd one: > > This is on my little-endian linux box. >>>> numpy.__version__ > '0.9.7.2490' >>>> x = numpy.arange(10,dtype='<f8') >>>> x > array([ 0., 1., 2., 3., 4., 5., 6., 7., 8., 9.]) >>>> x = numpy.arange(10,dtype='>f8') >>>> x > array([ 0.00000000e+000, 1.00000000e+000, 1.37186586e+303, > -5.82360826e-011, -7.98920843e+292, 3.60319875e-021, > 4.94303335e+282, -2.09830067e-031, -2.87854483e+272, > 1.29367874e-041]) > > Clearly a byte ordering problem. This also fails for '>f4' , but works > for > '>i4' and '>i8'. It works by accident on integer arrays. arange for non-native byte-order needs to be disabled or handled separately by byte-swapping after completion. Enter a Ticket if you haven't already. -Travis From oliphant.travis at ieee.org Wed May 10 10:02:14 2006 From: oliphant.travis at ieee.org (Travis Oliphant) Date: Wed May 10 10:02:14 2006 Subject: [Numpy-discussion] Who uses matrix?
In-Reply-To: References: Message-ID: <44621C57.6050302@ieee.org> Bill Baxter wrote: > > On 5/10/06, *Tim Hochberg* > wrote: > > Of course I'm not a user of the matrix class, so take that for what > it's worth. > > On 5/10/06, *Sasha* > wrote: > > > I don't use matrices, so I don't have a strong opinion on this issue. > > > Hmm. And in addition to you two, I'm pretty sure I've seen Travis > mention he doesn't use matrix, either. That plus the lack of response > to my and others' previous posts on the topic kinda makes me wonder > whether there are actually *any* hardcore numpy users out there using > the matrix class. That's not true, I do use matrices. I just use them in small doses --- to simplify certain expressions. I don't typically use them for all arrays in my code, however. -Travis From oliphant.travis at ieee.org Wed May 10 10:06:05 2006 From: oliphant.travis at ieee.org (Travis Oliphant) Date: Wed May 10 10:06:05 2006 Subject: [Numpy-discussion] C extension in new numpy: help to port from numarray In-Reply-To: <44617334.2070606@msg.ucsf.edu> References: <44616AF9.5010905@msg.ucsf.edu> <44617334.2070606@msg.ucsf.edu> Message-ID: <44621D61.5040102@ieee.org> Sebastian Haase wrote: > One additional question: > is PyArray_FromDimsAndData creating a copy ? > I have very large image data and cannot afford copies :-( No, it uses the data as the memory space for the array (but you have to either manage that memory area yourself or reset the OWNDATA flag to get NumPy to delete it for you on array deletion). 
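The same no-copy ownership rules are visible from pure Python: np.frombuffer wraps externally owned memory without copying, and the resulting array has OWNDATA off, so NumPy never frees memory it did not allocate. A sketch with a modern NumPy:

```python
import numpy as np

buf = bytearray(8 * 5)                     # "externally owned" memory
a = np.frombuffer(buf, dtype=np.float64)   # wraps it; no copy is made

a[0] = 3.14
assert buf[:8] == np.float64(3.14).tobytes()  # same bytes, same memory
assert not a.flags.owndata                    # NumPy won't free this buffer
assert a.base is not None                     # it holds a reference instead
```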
-Travis From oliphant.travis at ieee.org Wed May 10 10:19:06 2006 From: oliphant.travis at ieee.org (Travis Oliphant) Date: Wed May 10 10:19:06 2006 Subject: [Numpy-discussion] argmax In-Reply-To: <4461B26F.4060309@ftw.at> References: <4461131B.1050907@ieee.org> <4461434F.6060604@cox.net> <4461B26F.4060309@ftw.at> Message-ID: <44621F89.3050903@ieee.org> Ed Schofield wrote: > Tim Hochberg wrote: > >> >>> m >> matrix([[0, 1, 2], >> [3, 4, 5], >> [6, 7, 8]]) >> >>> m.argmax() >> matrix([[8]]) >> >> Does anyone else think that this is a fairly nonsensical result? Not >> that this is specific to matrices, the array result is just as weird: >> >> >>> a >> array([[0, 1, 2], >> [3, 4, 5], >> [6, 7, 8]]) >> >>> a.argmax() >> 8 >> >> Given that obj[obj.argmax()] should really equal obj.max(), argmax >> with no axis specified should really either raise an exception or >> return a tuple. >> > > I came across this last week, and I came to a similar conclusion. I > agree that a sequence of indices would be far more useful. This > sequence could be either an array or a tuple: > > but a.flat[a.argmax()] == a.max() works also. I'm not convinced it's worth special-casing the argmax method because a.flat exists already. -Travis From haase at msg.ucsf.edu Wed May 10 10:41:02 2006 From: haase at msg.ucsf.edu (Sebastian Haase) Date: Wed May 10 10:41:02 2006 Subject: [Numpy-discussion] C extension in new numpy: help to port from numarray In-Reply-To: <44621D61.5040102@ieee.org> References: <44616AF9.5010905@msg.ucsf.edu> <44617334.2070606@msg.ucsf.edu> <44621D61.5040102@ieee.org> Message-ID: <200605101040.20624.haase@msg.ucsf.edu> On Wednesday 10 May 2006 10:05, Travis Oliphant wrote: > Sebastian Haase wrote: > > One additional question: > > is PyArray_FromDimsAndData creating a copy ? 
> > I have very large image data and cannot afford copies :-( > > No, it uses the data as the memory space for the array (but you have to > either manage that memory area yourself or reset the OWNDATA flag to get > NumPy to delete it for you on array deletion). > > -Travis Thanks for the reply. Regarding "setting the OWNDATA flag": How does NumPy know if it should call free (C code) or delete [] (C++ code) ? I have been told that at least on some platforms its crucial to properly match those with if you used malloc or the new-operator (only available in C++). Thanks, - Sebastian From tim.hochberg at cox.net Wed May 10 11:16:04 2006 From: tim.hochberg at cox.net (Tim Hochberg) Date: Wed May 10 11:16:04 2006 Subject: [Numpy-discussion] argmax In-Reply-To: <44621F89.3050903@ieee.org> References: <4461131B.1050907@ieee.org> <4461434F.6060604@cox.net> <4461B26F.4060309@ftw.at> <44621F89.3050903@ieee.org> Message-ID: <44622DC3.9040507@cox.net> Travis Oliphant wrote: > Ed Schofield wrote: > >> Tim Hochberg wrote: >> >> >>> >>> m >>> matrix([[0, 1, 2], >>> [3, 4, 5], >>> [6, 7, 8]]) >>> >>> m.argmax() >>> matrix([[8]]) >>> >>> Does anyone else think that this is a fairly nonsensical result? Not >>> that this is specific to matrices, the array result is just as weird: >>> >>> >>> a >>> array([[0, 1, 2], >>> [3, 4, 5], >>> [6, 7, 8]]) >>> >>> a.argmax() >>> 8 >>> >>> Given that obj[obj.argmax()] should really equal obj.max(), argmax >>> with no axis specified should really either raise an exception or >>> return a tuple. >>> >> >> >> I came across this last week, and I came to a similar conclusion. I >> agree that a sequence of indices would be far more useful. This >> sequence could be either an array or a tuple: >> >> > > > but > > a.flat[a.argmax()] == a.max() > > works also. > > > I'm not convinced it's worth special-casing the argmax method because > a.flat exists already. That's reasonable. 
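The a.flat trick, together with the flat-to-tuple conversion Stéfan van der Walt asked about earlier in the thread, in runnable form; np.unravel_index is the library spelling of that conversion in later NumPy releases:

```python
import numpy as np

a = np.arange(9).reshape(3, 3)
flat = a.argmax()                    # index into the flattened array
assert flat == 8
assert a.flat[flat] == a.max()       # the trick discussed above

# Convert the flat index back to a tuple index:
assert np.unravel_index(flat, a.shape) == (2, 2)
assert a[np.unravel_index(flat, a.shape)] == a.max()
```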
However, we should probably add a note to the docstring of argmin and argmax that describes this trick. -tim From Chris.Barker at noaa.gov Wed May 10 11:34:12 2006 From: Chris.Barker at noaa.gov (Christopher Barker) Date: Wed May 10 11:34:12 2006 Subject: [Numpy-discussion] Who uses matrix? In-Reply-To: References: Message-ID: <44623207.2020306@noaa.gov> Bill Baxter wrote: > (Unrelated > issue, but it seems that .sum(axis=0) and .sum(axis=1) both return row > vectors, whereas I would expect the axis=1 variety to be a column vector.) It works for me (): >>> m matrix([[1, 2], [3, 4]]) >>> m.sum(0) matrix([[4, 6]]) >>> m.sum(1) matrix([[3], [7]]) >>> N.__version__ '0.9.6' > In numpy, however, algebraic differences between 1x1 matrices and >> scalars are not as important as the fact that matrices are mutable >> while scalars are not. >> > Seems like for most code the lack of methods and attributes on the scalar > would be the bigger deal than the mutability difference. But either way, > I'm not sure what point you're trying to make there. Scalars should go > away > because 1x1 matrices are more flexible? Isn't that what rank-0 arrays are for? > Scalars in numpy *do* have methods and attributes, which may be why > Sasha doesn't think that difference is a big deal ;-). Then __repr__ needs to be fixed: >>> a array([0, 1, 2, 3, 4]) >>> s = a.sum() >>> type(s) >>> type(10) >>> repr(s) '10' >>> repr(10) '10' Is there even a direct way to construct a numpy scalar? > Actually I thought that Sasha's position was that both scalars and > *rank-0* [aka shape=()] arrays were useful in different circumstances > and that we shouldn't completely annihilate rank-0 arrays in favor of > scalars. What is the difference, except that rank-0 arrays are mutable? And I do think a mutable scalar is a good thing to have. Why not make numpy scalars mutable, and then would there be a difference? -Chris -- Christopher Barker, Ph.D.
Oceanographer NOAA/OR&R/HAZMAT (206) 526-6959 voice 7600 Sand Point Way NE (206) 526-6329 fax Seattle, WA 98115 (206) 526-6317 main reception Chris.Barker at noaa.gov From oliphant.travis at ieee.org Wed May 10 11:35:04 2006 From: oliphant.travis at ieee.org (Travis Oliphant) Date: Wed May 10 11:35:04 2006 Subject: [Numpy-discussion] C extension in new numpy: help to port from numarray In-Reply-To: <200605101040.20624.haase@msg.ucsf.edu> References: <44616AF9.5010905@msg.ucsf.edu> <44617334.2070606@msg.ucsf.edu> <44621D61.5040102@ieee.org> <200605101040.20624.haase@msg.ucsf.edu> Message-ID: <44623231.4040603@ieee.org> Sebastian Haase wrote: > On Wednesday 10 May 2006 10:05, Travis Oliphant wrote: > >> Sebastian Haase wrote: >> >>> One additional question: >>> is PyArray_FromDimsAndData creating a copy ? >>> I have very large image data and cannot afford copies :-( >>> >> No, it uses the data as the memory space for the array (but you have to >> either manage that memory area yourself or reset the OWNDATA flag to get >> NumPy to delete it for you on array deletion). >> >> -Travis >> > > Thanks for the reply. > Regarding "setting the OWNDATA flag": > How does NumPy know if it should call free (C code) or delete [] (C++ code) ? > It doesn't. It always uses _pya_free which is a macro that is defined to either system free or Python's memory-manager equivalent. It should always be paired with _pya_malloc. Yes, you can have serious problems by mixing memory allocators. In other-words, unless you know what you are doing it is unwise to set the OWNDATA flag for data that was defined elsewhere. My favorite method is to simply let NumPy create the memory for you (e.g. use PyArray_SimpleNew). Then, you won't have trouble. If that method is not possible, then the next best thing to do is to define a simple Python Object that uses reference counting to manage the memory for you. 
Then, you point array->base to that object so that its reference count gets decremented when the array disappears. The simple Python Object defines its tp_dealloc function to call the appropriate free. -Travis From oliphant.travis at ieee.org Wed May 10 11:43:03 2006 From: oliphant.travis at ieee.org (Travis Oliphant) Date: Wed May 10 11:43:03 2006 Subject: [Numpy-discussion] C extension in new numpy: help to port from numarray In-Reply-To: <44616AF9.5010905@msg.ucsf.edu> References: <44616AF9.5010905@msg.ucsf.edu> Message-ID: <4462341A.6010704@ieee.org> Sebastian Haase wrote: > Hi, > I'm a long time numarray user. > I have some SWIG typemaps that I've been using for quite some time. > They are C++ oriented and support creating template'd functions. > I only cover the case of contiguous "input" arrays of 1D,2D and 3D. > (I would like to ensure that NO implicit type conversions are made so > that I can use the same scheme to have arrays changed on the C/C++ > side and can later see/access those in python.) > > (as I just added some text to the scipy wiki: > www.scipy.org/Converting_from_numarray) > I use something like: > > PyArrayObject *NAarr = NA_InputArray(inputPyObject, PYARR_TC, > NUM_C_ARRAY); > arr = (double *) NA_OFFSETDATA(NAarr); > > > What is new numpy's equivalent of NA_InputArray In the scipy/Lib/ndimage package is a numcompat.c and numcompat.h file that implements several of the equivalents. I'd like to see a module like this get formalized and placed into numpy itself so that most numarray extensions simply have to be re-compiled to work with NumPy. Here is the relevant information (although I don't know what PYARR_TC is...)
typedef enum { tAny, tBool=PyArray_BOOL, tInt8=PyArray_INT8, tUInt8=PyArray_UINT8, tInt16=PyArray_INT16, tUInt16=PyArray_UINT16, tInt32=PyArray_INT32, tUInt32=PyArray_UINT32, tInt64=PyArray_INT64, tUInt64=PyArray_UINT64, tFloat32=PyArray_FLOAT32, tFloat64=PyArray_FLOAT64, tComplex32=PyArray_COMPLEX64, tComplex64=PyArray_COMPLEX128, tObject=PyArray_OBJECT, /* placeholder... does nothing */ tDefault = tFloat64, #if BITSOF_LONG == 64 tLong = tInt64, #else tLong = tInt32, #endif tMaxType } NumarrayType; typedef enum { NUM_CONTIGUOUS=CONTIGUOUS, NUM_NOTSWAPPED=NOTSWAPPED, NUM_ALIGNED=ALIGNED, NUM_WRITABLE=WRITEABLE, NUM_COPY=ENSURECOPY, NUM_C_ARRAY = (NUM_CONTIGUOUS | NUM_ALIGNED | NUM_NOTSWAPPED), NUM_UNCONVERTED = 0 } NumRequirements; #define _NAtype_toDescr(type) (((type)==tAny) ? NULL : \ PyArray_DescrFromType(type)) #define NA_InputArray(obj, type, flags) \ (PyArrayObject *)\ PyArray_FromAny(obj, _NAtype_toDescr(type), 0, 0, flags, NULL) #define NA_OFFSETDATA(a) ((void *) PyArray_DATA(a)) From ndarray at mac.com Wed May 10 11:47:03 2006 From: ndarray at mac.com (Sasha) Date: Wed May 10 11:47:03 2006 Subject: [Numpy-discussion] Who uses matrix? In-Reply-To: <44623207.2020306@noaa.gov> References: <44623207.2020306@noaa.gov> Message-ID: On 5/10/06, Christopher Barker wrote: > ... > Is there even a direct way to construct a numpy scalar? > Yes, >>> type(int_(0)) >>> type(float_(0)) RTFM :-) > > Actually I thought that Sasha's position was that both scalars and > > *rank-0* [aka shape=()] arrays were useful in different circumstances > > and that we shouldn't completely anihilate rank-0 arrays in favor of > > scalars. > > What is the difference? except that rank-o arrays are mutable, and I do > think a mutable scalar is a good thing to have. Why not make numpy > scalars mutable, and then would there be a difference? Mutable objects cannot have value based hash, which practically means they cannot be used as keys in python dictionaries. 
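The hashability point is easy to demonstrate with a current NumPy: scalar types are immutable and hashable, while even a rank-0 array is mutable and therefore unhashable:

```python
import numpy as np

s = np.float64(1.0)            # an immutable numpy scalar
d = {s: 'scalar'}              # usable as a dict key...
assert d[1.0] == 'scalar'      # ...and it hashes like the equal Python float

a = np.array(1.0)              # a mutable rank-0 array
a[...] = 2.0                   # mutable in place
try:
    hash(a)                    # ndarrays define no hash
except TypeError:
    pass
else:
    raise AssertionError("expected rank-0 arrays to be unhashable")
```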
This may change in python 3.0, but meanwhile mutable scalars are not an option. From oliphant.travis at ieee.org Wed May 10 12:59:02 2006 From: oliphant.travis at ieee.org (Travis Oliphant) Date: Wed May 10 12:59:02 2006 Subject: [Numpy-discussion] basearray / arraykit In-Reply-To: <445F8F71.4070801@cox.net> References: <445F8F71.4070801@cox.net> Message-ID: <446245AB.6000900@ieee.org> Tim Hochberg wrote: > > I created a branch to work on basearray and arraykit: > > http://svn.scipy.org/svn/numpy/branches/arraykit > > Basearray, as most of you probably know by now is the array superclass > that Travis, Sasha and I have > all talked about at various times with slightly different emphasis. I'm thinking that fancy-indexing should be re-factored a bit so that view-based indexing is tried first and then on error, fancy-indexing is tried. Right now, it goes through the fancy-indexing check and that seems to slow things down more than it needs to for simple indexing operations. Perhaps it would make sense for basearray to implement simple indexing while the ndarray would augment the basic indexing. Is adding basic Numeric-like indexing something you see as useful to basearray? -Travis From tim.hochberg at cox.net Wed May 10 13:30:05 2006 From: tim.hochberg at cox.net (Tim Hochberg) Date: Wed May 10 13:30:05 2006 Subject: [Numpy-discussion] basearray / arraykit In-Reply-To: <446245AB.6000900@ieee.org> References: <445F8F71.4070801@cox.net> <446245AB.6000900@ieee.org> Message-ID: <44624D24.3050406@cox.net> Travis Oliphant wrote: > Tim Hochberg wrote: > >> >> I created a branch to work on basearray and arraykit: >> >> http://svn.scipy.org/svn/numpy/branches/arraykit >> >> Basearray, as most of you probably know by now is the array >> superclass that Travis, Sasha and I have all talked about at various >> times with slightly different emphasis. 
> > > I'm thinking that fancy-indexing should be re-factored a bit so that > view-based indexing is tried first and then on error, fancy-indexing > is tried. Right now, it goes through the fancy-indexing check and > that seems to slow things down more than it needs to for simple > indexing operations. That sounds like a good idea. I would like to see fancy indexing broken out for arraykit if not necessarily for basearray. > > Perhaps it would make sense for basearray to implement simple indexing > while the ndarray would augment the basic indexing. > > > Is adding basic Numeric-like indexing something you see as useful to > basearray? Yes! No! Maybe ;-) Each time I think this over I come to a slightly different conclusion. At one point I was thinking that basearray should support shape, __getitem__ and __setitem__ (and had I thought about it at the time, I would have preferred basic indexing here). However at present I'm thinking that basearray should really just support your basic array protocol and nothing else. If we added the above three methods then that makes life harder for someone who wants to create an array subclass that is either immutable or has a fixed shape. Sure shape and/or __setitem__ can be overridden with something that raises some sort of exception, but it's exactly that sort of stuff that I was interested in getting away from with basearray (although admittedly this would be on a much smaller scale). I can't think of a real problem with supplying just a read only version of shape and getitem, but it also doesn't seem very useful. So, as I said I lean towards the simplest, thinnest interface possible. However, it may be a good idea to put together another subclass of basearray that supports shape, __getitem__, __setitem__ [in their basic forms], __repr__ and __str__. This could be part of the proposal to add basearray to the core. 
That way the basearray module could export something that's directly useful to people in addition to basearray which is really only useful as a basis for other stuff. Also, like I said I would use this for arraykit if it were available (and might even be willing to do the work myself if I find the time). I have considered splitting out fancy indexing in my simplified array class using some sort of psuedo attribute (similar to flat). If I was doing that, I'd actually prefer to split out the two different types of fancy indexing (boolean versus integer) so that they could be applied separately. -tim > > > -Travis > > > > ------------------------------------------------------- > Using Tomcat but need to do more? Need to support web services, security? > Get stuff done quickly with pre-integrated technology to make your job > easier > Download IBM WebSphere Application Server v.1.0.1 based on Apache > Geronimo > http://sel.as-us.falkag.net/sel?cmd=lnk&kid=120709&bid=263057&dat=121642 > _______________________________________________ > Numpy-discussion mailing list > Numpy-discussion at lists.sourceforge.net > https://lists.sourceforge.net/lists/listinfo/numpy-discussion > > From ndarray at mac.com Wed May 10 13:43:04 2006 From: ndarray at mac.com (Sasha) Date: Wed May 10 13:43:04 2006 Subject: [Numpy-discussion] basearray / arraykit In-Reply-To: <446245AB.6000900@ieee.org> References: <445F8F71.4070801@cox.net> <446245AB.6000900@ieee.org> Message-ID: On 5/10/06, Travis Oliphant wrote: > ... > I'm thinking that fancy-indexing should be re-factored a bit so that > view-based indexing is tried first and then on error, fancy-indexing is > tried. Right now, it goes through the fancy-indexing check and that > seems to slow things down more than it needs to for simple indexing > operations. Is it too late to reconsider the decision to further overload [] to support fancy indexing? 
It would be nice to restrict [] to view based indexing and require a function call for copy-based. If that is not an option, I would like to propose to have no __getitem__ in the basearray and instead have a rich collection of various functions such as "take" which can be used by the derived classes to create their own __getitem__ . Independent of the fate of the [] operator, I would like to have means to specify exactly what I want without having to rely on the smartness of the fancy-indexing check. For example, in the current version, I can either do x[[1,2,3]] or x.take([1,2,3]). For a 2d x I can do x.take([1,2,3], axis=1) as an alternative to x[:,[1,2,3]], but I cannot find an equivalent of x[[3,2,1],[1,2,3]]. I think [] syntax is preferable in the interactive setting, where it allows one to get the result with a few keystrokes. In addition [] has special syntactic properties in python (special meaning of : and ... within []) that allow some nifty looking syntax not available for member functions. On the other hand in programming, and especially in writing reusable code, specialized member functions such as "take" are more appropriate for several reasons. (1) Robustness: x.take(i) will do the same thing if i is a tuple, list, or array of any integer type, while with x[i] it is anybody's guess and the results may change with the changes in numpy. (2) Performance: the fancy-indexing check is expensive. (3) Code readability: in the interactive session when you type x[i], i is either supplied literally or is defined on the same screen, but if i comes as an argument to the function, it may be hard to figure out whether i is expected to be an integer or whether a list of integers is also ok.
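The distinctions above can be demonstrated directly. A sketch against a current NumPy: basic slices return views while fancy indexes return copies, take with an axis matches the slice-style fancy index, and the x[[3,2,1],[1,2,3]] form pairs the index arrays pointwise rather than forming a cross product:

```python
import numpy as np

# Basic (slice) indexing returns a view; fancy indexing returns a copy.
a = np.arange(6)
v = a[1:4]
v[0] = 99
assert a[1] == 99          # the view shares a's memory
c = a[[1, 2, 3]]
c[0] = -1
assert a[1] == 99          # the copy does not

# take(..., axis=...) matches the slice-style fancy index:
x = np.arange(12).reshape(3, 4)
assert (x.take([1, 2, 3], axis=1) == x[:, [1, 2, 3]]).all()

# Paired index arrays pick elements pointwise:
y = np.arange(16).reshape(4, 4)
assert y[[3, 2, 1], [1, 2, 3]].tolist() == [y[3, 1], y[2, 2], y[1, 3]]
```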
From oliphant.travis at ieee.org Wed May 10 13:55:04 2006 From: oliphant.travis at ieee.org (Travis Oliphant) Date: Wed May 10 13:55:04 2006 Subject: [Numpy-discussion] basearray / arraykit In-Reply-To: <4462516A.3010007@cox.net> References: <445F8F71.4070801@cox.net> <446245AB.6000900@ieee.org> <4462516A.3010007@cox.net> Message-ID: <446252D1.2050300@ieee.org> Tim Hochberg wrote: > > Since I'm actually messing with trying to untangle arraykit from > multiarray right now, let me ask you a question: there are several > functions in arrayobject.c that look like they should be part of the > API. Notably: > > PyArray_CopyObject > PyArray_MapIterNew > PyArray_MapIterBind > PyArray_GetMap > PyArray_MapIterReset > PyArray_MapIterNext > PyArray_SetMap > PyArray_IntTupleFromIntp > > However, they don't appear to show up. They also aren't in > *_api_order.txt, where I presume the list of all exported functions > lives. Is this on purpose, or is it an oversight? > Some of these are an oversight. The Mapping-related ones require a little more explanation, though. Initially I had thought to allow mapping iterators to live independently of array indexing. But, it never worked out that way. I think they could be made part of the API, but they need to be used correctly together. In particular, you can't really "re-bind" another array to a mapping iterator. You have to create a new one, so it may be a little confusing. But, I see no reason to not let these things out on their own. -Travis From oliphant.travis at ieee.org Wed May 10 13:58:08 2006 From: oliphant.travis at ieee.org (Travis Oliphant) Date: Wed May 10 13:58:08 2006 Subject: [Numpy-discussion] basearray / arraykit In-Reply-To: References: <445F8F71.4070801@cox.net> <446245AB.6000900@ieee.org> Message-ID: <446253AE.4090402@ieee.org> Sasha wrote: > On 5/10/06, Travis Oliphant wrote: >> ... 
>> I'm thinking that fancy-indexing should be re-factored a bit so that >> view-based indexing is tried first and then on error, fancy-indexing is >> tried. Right now, it goes through the fancy-indexing check and that >> seems to slow things down more than it needs to for simple indexing >> operations. > > Is it too late to reconsider the decision to further overload [] to > support fancy indexing? It would be nice to restrict [] to view > based indexing and require a function call for copy-based. If that is > not an option, I would like to propose to have no __getitem__ in the > basearray and instead have rich collection of various functions such > as "take" which can be used by the derived classes to create their own > __getitem__ . It may be too late since the fancy-indexing was actually introduced by numarray. It does seem to be a feature that people like. > > Independent of the fate of the [] operator, I would like to have means > to specify exactly what I want without having to rely on the smartness > of the fancy-indexing check. For example, in the current version, I > can either do x[[1,2,3]] or x.take([1,2,3]). For a 2d x I can do > x.take([1,2,3], axis=1) as an alternative to x[:,[1,2,3]], but I > cannot find an equivalent of x[[3,2,1],[1,2,3]]. It probably isn't there. Perhaps it should be. As you've guessed, a lot of the overloading of [] is because inside it you can use simplified syntax to generate slices. I would like to see the slice syntax extended so it could be used inside function calls as well as a means to generate slice objects on-the-fly. Perhaps for Python 3.0 this could be suggested. -Travis From ndarray at mac.com Wed May 10 14:05:04 2006 From: ndarray at mac.com (Sasha) Date: Wed May 10 14:05:04 2006 Subject: [Numpy-discussion] basearray / arraykit In-Reply-To: <445F8F71.4070801@cox.net> References: <445F8F71.4070801@cox.net> Message-ID: On 5/8/06, Tim Hochberg wrote: > [...] 
Here's a brief example; > this is what a custom array class that just supported indexing and > shape would look like using arraykit: > > import numpy.arraykit as _kit > > class customarray(_kit.basearray): > __new__ = _kit.fromobj > __getitem__ = _kit.getitem > __setitem__ = _kit.setitem > shape = property(_kit.getshape, _kit.setshape) I see the following problem with your approach: customarray.__new__ is supposed to return an instance of customarray, but in your example it returns a basearray. You may like an approach that I took in writing r.py. In the context of your example, I would make fromobj a classmethod of _kit.basearray and use the type argument to allocate the new object (type->tp_alloc(type, 0);). This way customarray(...) will return a customarray as expected. All _kit methods that return arrays can take the same approach and become classmethods of _kit.basearray. The drawback is the pollution of the base class namespace, but this may be acceptable if you name the baseclass methods with a leading underscore. From oliphant.travis at ieee.org Wed May 10 14:26:05 2006 From: oliphant.travis at ieee.org (Travis Oliphant) Date: Wed May 10 14:26:05 2006 Subject: [Numpy-discussion] reduce-type results from matrices Message-ID: <44625A4D.7020905@ieee.org> Thanks for the discussion on Ticket #83 (whether or not to return scalars from matrix methods). I like many of the comments and generally agree with Tim and Sasha's question of why we should bother to replace one inconsistency with another. However, I've been swayed by two facts: 1) People who use matrices seem to like having the results be returned as scalars 2) Multiplication by a 1x1 matrix won't work on most matrices but multiplication by a scalar will. These two facts lean towards accepting the patch. Therefore, the Ticket #83 patch will be applied. 
Best regards, -Travis From st at sigmasquared.net Wed May 10 14:33:02 2006 From: st at sigmasquared.net (Stephan Tolksdorf) Date: Wed May 10 14:33:02 2006 Subject: [Numpy-discussion] Building on Windows Message-ID: <44625BE2.4010708@sigmasquared.net> Hi, there are still some (mostly minor) problems with the Windows build of Numpy (MinGW/Cygwin/MSVC). I'd be happy to produce patches and documentation enhancements, but before doing so I'd like to know if there's interest from one of the core developers to review/commit these patches afterwards. I'm asking because in the past questions and suggestions regarding the building process of Numpy (especially on Windows) often remained unanswered on this list. I realise that many developers don't use Windows and that the distutils build is a complex beast, but the current situation seems a bit unsatisfactory - and I would like to help. Would there be any interest for further refactoring of the build code over and above patching errors? Stephan From pearu at scipy.org Wed May 10 14:41:03 2006 From: pearu at scipy.org (Pearu Peterson) Date: Wed May 10 14:41:03 2006 Subject: [Numpy-discussion] Building on Windows In-Reply-To: <44625BE2.4010708@sigmasquared.net> References: <44625BE2.4010708@sigmasquared.net> Message-ID: On Wed, 10 May 2006, Stephan Tolksdorf wrote: > Hi, > > there are still some (mostly minor) problems with the Windows build of Numpy > (MinGW/Cygwin/MSVC). I'd be happy to produce patches and documentation > enhancements, but before doing so I'd like to know if there's interest from > one of the core developers to review/commit these patches afterwards. I'm > asking because in the past questions and suggestions regarding the building > process of Numpy (especially on Windows) often remained unanswered on this > list. I realise that many developers don't use Windows and that the distutils > build is a complex beast, but the current situation seems a bit > unsatisfactory - and I would like to help. 
> Would there be any interest for further refactoring of the build code over > and above patching errors? Yes. Note that patches should not break other platforms. I am currently successfully using the mingw32 compiler to build numpy. Python is Enthon23 and Enthon24, which conveniently contain all compiler tools. Pearu From oliphant.travis at ieee.org Wed May 10 14:42:01 2006 From: oliphant.travis at ieee.org (Travis Oliphant) Date: Wed May 10 14:42:01 2006 Subject: [Numpy-discussion] Building on Windows In-Reply-To: <44625BE2.4010708@sigmasquared.net> References: <44625BE2.4010708@sigmasquared.net> Message-ID: <44625E03.9080105@ieee.org> Stephan Tolksdorf wrote: > Hi, > > there are still some (mostly minor) problems with the Windows build of > Numpy (MinGW/Cygwin/MSVC). I'd be happy to produce patches and > documentation enhancements, but before doing so I'd like to know if > there's interest from one of the core developers to review/commit > these patches afterwards. I'm asking because in the past questions and > suggestions regarding the building process of Numpy (especially on > Windows) often remained unanswered on this list. I realise that many > developers don't use Windows and that the distutils build is a complex > beast, but the current situation seems a bit unsatisfactory - and I > would like to help. I think your assessment is a bit harsh. I regularly build on MinGW so I know it works there (at least at release time). I also have applied several patches with the express purpose of getting the build working on MSVC and Cygwin. So, go ahead and let us know what problems you are having. You are correct that my main build platform is not Windows, but I think several other people do use Windows regularly and we definitely want to support it. 
-Travis From tim.hochberg at cox.net Wed May 10 14:47:01 2006 From: tim.hochberg at cox.net (Tim Hochberg) Date: Wed May 10 14:47:01 2006 Subject: [Numpy-discussion] basearray / arraykit In-Reply-To: References: <445F8F71.4070801@cox.net> Message-ID: <44625F1F.6070003@cox.net> Sasha wrote: > On 5/8/06, Tim Hochberg wrote: > >> [...] Here's a brief example; >> this is what a custom array class that just supported indexing and >> shape would look like using arraykit: >> >> import numpy.arraykit as _kit >> >> class customarray(_kit.basearray): >> __new__ = _kit.fromobj >> __getitem__ = _kit.getitem >> __setitem__ = _kit.setitem >> shape = property(_kit.getshape, _kit.setshape) > > > I see the following problem with your approach: customarray.__new__ is > supposed to return an instance of customarray, but in your example it > returns a basearray. Actually, it doesn't. The signature of fromobj is: fromobj(subtype, obj, dtype=None, order="C"). It returns an object of type subtype (as long as subtype is derived from basearray). At present, fromobj is implemented in Python as: def fromobj(subtype, obj, dtype=None, order="C"): if order not in ["C", "FORTRAN"]: raise ValueError("Order must be either 'C' or 'FORTRAN', not %r" % order) nda = _numpy.array(obj, dtype, order=order) return basearray.__new__(subtype, nda.shape, nda.dtype, nda.data, order=order) That's kind of kludgy, and I plan to remove the dependence on numpy.array at some point, but it seems to work OK. > You may like an approach that I took in writing > r.py . In > the context of your example, I would make fromobj a classmethod of > _kit.basearray and use the type argument to allocate the new object > (type->tp_alloc(type, 0);). This way customarray(...) will return > customarray as expected. > > All _kit methods that return arrays can take the same approach and > become classmethods of _kit.basearray. 
The drawback is the pollution > of the base class namespace, but this may be acceptable if you name > the baseclass methods with a leading underscore. I'd rather avoid that since one of my goals is to remove name pollution. I'll keep it in mind though if I run into problems with the above approach. -tim From tim.hochberg at cox.net Wed May 10 14:49:04 2006 From: tim.hochberg at cox.net (Tim Hochberg) Date: Wed May 10 14:49:04 2006 Subject: [Numpy-discussion] Building on Windows In-Reply-To: <44625E03.9080105@ieee.org> References: <44625BE2.4010708@sigmasquared.net> <44625E03.9080105@ieee.org> Message-ID: <44625FB9.50501@cox.net> Travis Oliphant wrote: > Stephan Tolksdorf wrote: > >> Hi, >> >> there are still some (mostly minor) problems with the Windows build >> of Numpy (MinGW/Cygwin/MSVC). I'd be happy to produce patches and >> documentation enhancements, but before doing so I'd like to know if >> there's interest from one of the core developers to review/commit >> these patches afterwards. I'm asking because in the past questions >> and suggestions regarding the building process of Numpy (especially >> on Windows) often remained unanswered on this list. I realise that >> many developers don't use Windows and that the distutils build is a >> complex beast, but the current situation seems a bit unsatisfactory >> - and I would like to help. > > > > I think your assessment is a bit harsh. I regularly build on MinGW > so I know it works there (at least at release time). I also have > applied several patches with the express purpose of getting the build > working on MSVC and Cygwin. > > So, go ahead and let us know what problems you are having. You are > correct that my main build platform is not Windows, but I think > several other people do use Windows regularly and we definitely want > to support it. > Indeed. I build from SVN at least once a week using MSVC and it's been compiling warning-free and passing all tests for me for some time. 
-tim From tim.hochberg at cox.net Wed May 10 14:53:03 2006 From: tim.hochberg at cox.net (Tim Hochberg) Date: Wed May 10 14:53:03 2006 Subject: [Numpy-discussion] basearray / arraykit In-Reply-To: References: <445F8F71.4070801@cox.net> <446245AB.6000900@ieee.org> Message-ID: <44626084.7000001@cox.net> Sasha wrote: > On 5/10/06, Travis Oliphant wrote: > >> ... >> I'm thinking that fancy-indexing should be re-factored a bit so that >> view-based indexing is tried first and then on error, fancy-indexing is >> tried. Right now, it goes through the fancy-indexing check and that >> seems to slow things down more than it needs to for simple indexing >> operations. > > > Is it too late to reconsider the decision to further overload [] to > support fancy indexing? It would be nice to restrict [] to view > based indexing and require a function call for copy-based. If that is > not an option, I would like to propose to have no __getitem__ in the > basearray and instead have rich collection of various functions such > as "take" which can be used by the derived classes to create their own > __getitem__ . This is exactly the approach taken by arraykit. > > Independent of the fate of the [] operator, I would like to have means > to specify exactly what I want without having to rely on the smartness > of the fancy-indexing check. For example, in the current version, I > can either do x[[1,2,3]] or x.take([1,2,3]). For a 2d x I can do > x.take([1,2,3], axis=1) as an alternative to x[:,[1,2,3]], but I > cannot find an equivalent of x[[3,2,1],[1,2,3]]. > > I think [] syntax preferable in the interactive setting, where it > allows to get the result with a few keystrokes. In addition [] has > special syntactic properties in python (special meaning of : and ... > within []) that allows some nifty looking syntax not available for > member functions. This is why I was considering using pseudo attributes, similar to flat, for my basearray subclass. 
I could hang [] off of them and use all of the normal array indexing syntax. I haven't come up with ideal names yet, but it could look something like: x[:3, 5:] # normal view indexing x.at[[3,2,1], [1,2,3]] # integer array indexing x.iff[[1,0,1], [2,1,0]] # boolean array indexing. > On the other hand in programming, and especially in > writing reusable code specialized member functions such as "take" are > more appropriate for several resons. (1) Robustness, x.take(i) will do > the same thing if i is a tuple, list, or array of any integer type, > while with x[i] it is anybodys guess and the results may change with > the changes in numpy. (2) Performance: fancy-indexing check is > expensive. (3) Code readability: in the interactive session when you > type x[i], i is either supplied literally or is defined on the same > screen, but if i comes as an argument to the function, it may be hard > to figure out whether i expected to be an integer or a list of > integers is also ok. Sounds reasonable to me. -tim From tim.hochberg at cox.net Wed May 10 15:07:04 2006 From: tim.hochberg at cox.net (Tim Hochberg) Date: Wed May 10 15:07:04 2006 Subject: [Numpy-discussion] basearray / arraykit In-Reply-To: <44626012.6050506@ieee.org> References: <445F8F71.4070801@cox.net> <446245AB.6000900@ieee.org> <44624D24.3050406@cox.net> <44626012.6050506@ieee.org> Message-ID: <446263EF.5050408@cox.net> Travis Oliphant wrote: > Tim Hochberg wrote: > >>> Is adding basic Numeric-like indexing something you see as useful to >>> basearray? >> >> >> Yes! No! Maybe ;-) > > > Got ya, loud and clear :-) > > I understand the confusion. > > I think we should do the following (for release 1.0) > > > 1) Implement a base-array with no getitem method nor setitem method at > all > > 2) Implement a sub-class that supports only creation of data-types > corresponding to existing Python scalars (Boolean, Long-based > integers, Double-based floats, complex and object types). 
Then, all > array accesses should return the underlying Python objects. > This sub-class should also only do view-based indexing (basically it's > old Numeric behavior inside of NumPy). > > 3) Implement the ndarray as a sub-class of #2 that does fancy indexing > and returns array-scalars > > > Item 1) should be pushed for inclusion in 2.6 and possibly even > something like 2) +1 Let me point out an interesting possibility. If ndarray inherits from basearray, only one of them needs to have the current __new__ method. That means that we could do the following rearrangement, if we felt like it: 1. Remove 'array' 2. Rename 'ndarray' to 'array' 3. Put the old functionality of array into array.__new__ The current functionality of ndarray.__new__ would still be available as basearray.__new__. I mention this partly because I can think of things to do with the name ndarray: for example, use it as the name of the subclass in (2). -tim From oliphant.travis at ieee.org Wed May 10 15:38:01 2006 From: oliphant.travis at ieee.org (Travis Oliphant) Date: Wed May 10 15:38:01 2006 Subject: [Numpy-discussion] basearray / arraykit In-Reply-To: <44624D24.3050406@cox.net> References: <445F8F71.4070801@cox.net> <446245AB.6000900@ieee.org> <44624D24.3050406@cox.net> Message-ID: <44626012.6050506@ieee.org> Tim Hochberg wrote: >> Is adding basic Numeric-like indexing something you see as useful to >> basearray? > > Yes! No! Maybe ;-) Got ya, loud and clear :-) I understand the confusion. I think we should do the following (for release 1.0) 1) Implement a base-array with no getitem method nor setitem method at all 2) Implement a sub-class that supports only creation of data-types corresponding to existing Python scalars (Boolean, Long-based integers, Double-based floats, complex and object types). Then, all array accesses should return the underlying Python objects. This sub-class should also only do view-based indexing (basically it's old Numeric behavior inside of NumPy). 
3) Implement the ndarray as a sub-class of #2 that does fancy indexing and returns array-scalars Item 1) should be pushed for inclusion in 2.6 and possibly even something like 2) -Travis From fullung at gmail.com Wed May 10 15:40:02 2006 From: fullung at gmail.com (Albert Strasheim) Date: Wed May 10 15:40:02 2006 Subject: [Numpy-discussion] Building on Windows In-Reply-To: <44625FB9.50501@cox.net> Message-ID: <006901c67482$93e310a0$0502010a@dsp.sun.ac.za> Hello all, It seems that many people are building on Windows without problems, except for Stephan and myself. Let me start by saying that yes, the default build on Windows with MinGW and Visual Studio works nicely. However, is anybody building with ATLAS and finding that experience to be equally painless? If so, *please* can you tell me how you've organized your libraries (which libraries? CLAPACK? FLAPACK? .a? .lib? What about ATLAS's LAPACK functions? What about building ATLAS as DLL?). Also, I'd be very interested in the contents of your site.cfg. I've been trying for many weeks to do some small subset of the above without hacking into the core of numpy.distutils. So far, no luck. Does anybody do debug builds on Windows? Again, please tell me how you do this, because I would really like to be able to build a debug version of NumPy for debugging with the MSVS compiler. As for compiler warnings, last time I checked, distutils seems to be suppressing the output from the compiler, except when the build actually fails. Or am I mistaken? 
Eagerly awaiting Windows build nirvana, Albert > -----Original Message----- > From: numpy-discussion-admin at lists.sourceforge.net [mailto:numpy- > discussion-admin at lists.sourceforge.net] On Behalf Of Tim Hochberg > Sent: 10 May 2006 23:49 > To: Travis Oliphant > Cc: Stephan Tolksdorf; numpy-discussion > Subject: Re: [Numpy-discussion] Building on Windows > > Travis Oliphant wrote: > > > Stephan Tolksdorf wrote: > > > >> Hi, > >> > >> there are still some (mostly minor) problems with the Windows build > >> of Numpy (MinGW/Cygwin/MSVC). I'd be happy to produce patches and > >> documentation enhancements, but before doing so I'd like to know if > >> there's interest from one of the core developers to review/commit > >> these patches afterwards. I'm asking because in the past questions > >> and suggestions regarding the building process of Numpy (especially > >> on Windows) often remained unanswered on this list. I realise that > >> many developers don't use Windows and that the distutils build is a > >> complex beast, but the current situation seems a bit unsatisfactory > >> - and I would like to help. > > > > I think your assessment is a bit harsh. I regularly build on MinGW > > so I know it works there (at least at release time). I also have > > applied several patches with the express purpose of getting the build > > working on MSVC and Cygwin. > > > > So, go ahead and let us know what problems you are having. You are > > correct that my main build platform is not Windows, but I think > > several other people do use Windows regularly and we definitely want > > to support it. > > > Indeeed. I build from SVN at least once a week using MSVC and it's been > compiling warning free and passing all tests for me for some time. 
> > -tim From st at sigmasquared.net Wed May 10 16:49:02 2006 From: st at sigmasquared.net (Stephan Tolksdorf) Date: Wed May 10 16:49:02 2006 Subject: [Numpy-discussion] Building on Windows In-Reply-To: <006901c67482$93e310a0$0502010a@dsp.sun.ac.za> References: <006901c67482$93e310a0$0502010a@dsp.sun.ac.za> Message-ID: <44627BC0.3080403@sigmasquared.net> Hi Albert, in the following you'll find an abridged preview version of my MSVC+ATLAS+Lapack build tutorial ;-) You probably already know most of it, but maybe it helps. You'll need a current Cygwin with g77 and MinGW-libraries installed. Atlas: ====== Download and extract the latest development ATLAS (3.7.11). Comment out line 77 in Atlas/CONFIG/probe_SSE3.c. Run "make" and choose the appropriate options for your system. Don't activate posix threads (for now). Overwrite the compiler and linker flags with flags that include "-mno-cygwin". Use the default architecture settings. Atlas and the test suite hopefully compile without an error now. Lapack: ======= Download and extract www.netlib.org/lapack/lapack.tgz and apply the most current patch from www.netlib.org/lapack-dev/ Replace lapack/make.inc with lapack/INSTALL/make.inc.LINUX. Append "-mno-cygwin" to OPTS, NOOPT and LOADOPTS in make.inc. Add ".PHONY: install testing timing" as the last line to lapack/Makefile. Run "make install lib" in the lapack root directory in Cygwin. ("make testing timing" should also work now, but you probably want to use your optimised BLAS for that. Some errors in the tests are to be expected.) Atlas + Lapack: =============== Copy the generated lapack_LINUX.a together with "libatlas.a", "libcblas.a", "libf77blas.a", "liblapack.a" into a convenient directory. In Cygwin execute the following command sequence in that directory to get an ATLAS-optimized LAPACK library "ar x liblapack.a ar r lapack_LINUX.a *.o rm *.o mv lapack_LINUX.a liblapack.a" Now make a copy of all lib*.a's to *.lib's, i.e. 
duplicate libatlas.a to atlas.lib, in order to allow distutils to recognize the libs and at the same time provide the correct versions for MSVC. Copy libg2c.a and libgcc.a from cygwin/lib/gcc/i686-pc-mingw32/3.4.4 to this directory and again make .lib copies. Compile and install numpy: ========================== Put "[atlas] library_dirs = d:\path\to\your\BlasDirectory atlas_libs = lapack,f77blas,cblas,atlas,g2c,gcc" into your site.cfg in the numpy root directory. Open an Visual Studio 2003 command prompt and run "Path\To\Python.exe setup.py config --compiler=msvc build --compiler=msvc bdist_wininst". Use the resulting dist/numpy-VERSION.exe installer to install Numpy. Testing: In a Python console run import numpy.testing numpy.testing.NumpyTest(numpy).run() ... hopefully without an error. Test your code base. I'll wikify an extended version in the next days. Stephan Albert Strasheim wrote: > Hello all, > > It seems that many people are building on Windows without problems, except > for Stephan and myself. > > Let me start by staying that yes, the default build on Windows with MinGW > and Visual Studio works nicely. > > However, is anybody building with ATLAS and finding that experience to be > equally painless? If so, *please* can you tell me how you've organized your > libraries (which libraries? CLAPACK? FLAPACK? .a? .lib? What about ATLAS's > LAPACK functions? What about building ATLAS as DLL?). Also, I'd be very > interested in the contents of your site.cfg. I've been trying for many > weeks to do some small subset of the above without hacking into the core of > numpy.distutils. So far, no luck. > > Does anybody do debug builds on Windows? Again, please tell me how you do > this, because I would really like to be able to build a debug version of > NumPy for debugging with the MSVS compiler. > > As for compiler warnings, last time I checked, distutils seems to be > suppressing the output from the compiler, except when the build actually > fails. 
Or am I mistaken? > > Eagerly awaiting Windows build nirvana, > > Albert > >> -----Original Message----- >> From: numpy-discussion-admin at lists.sourceforge.net [mailto:numpy- >> discussion-admin at lists.sourceforge.net] On Behalf Of Tim Hochberg >> Sent: 10 May 2006 23:49 >> To: Travis Oliphant >> Cc: Stephan Tolksdorf; numpy-discussion >> Subject: Re: [Numpy-discussion] Building on Windows >> >> Travis Oliphant wrote: >> >>> Stephan Tolksdorf wrote: >>> >>>> Hi, >>>> >>>> there are still some (mostly minor) problems with the Windows build >>>> of Numpy (MinGW/Cygwin/MSVC). I'd be happy to produce patches and >>>> documentation enhancements, but before doing so I'd like to know if >>>> there's interest from one of the core developers to review/commit >>>> these patches afterwards. I'm asking because in the past questions >>>> and suggestions regarding the building process of Numpy (especially >>>> on Windows) often remained unanswered on this list. I realise that >>>> many developers don't use Windows and that the distutils build is a >>>> complex beast, but the current situation seems a bit unsatisfactory >>>> - and I would like to help. >>> I think your assessment is a bit harsh. I regularly build on MinGW >>> so I know it works there (at least at release time). I also have >>> applied several patches with the express purpose of getting the build >>> working on MSVC and Cygwin. >>> >>> So, go ahead and let us know what problems you are having. You are >>> correct that my main build platform is not Windows, but I think >>> several other people do use Windows regularly and we definitely want >>> to support it. >>> >> Indeeed. I build from SVN at least once a week using MSVC and it's been >> compiling warning free and passing all tests for me for some time. >> >> -tim > > > > ------------------------------------------------------- > Using Tomcat but need to do more? Need to support web services, security? 
> Get stuff done quickly with pre-integrated technology to make your job easier > Download IBM WebSphere Application Server v.1.0.1 based on Apache Geronimo > http://sel.as-us.falkag.net/sel?cmd=lnk&kid=120709&bid=263057&dat=121642 > _______________________________________________ > Numpy-discussion mailing list > Numpy-discussion at lists.sourceforge.net > https://lists.sourceforge.net/lists/listinfo/numpy-discussion > > From tim.hochberg at cox.net Wed May 10 20:20:02 2006 From: tim.hochberg at cox.net (Tim Hochberg) Date: Wed May 10 20:20:02 2006 Subject: [Numpy-discussion] Building on Windows In-Reply-To: <006901c67482$93e310a0$0502010a@dsp.sun.ac.za> References: <006901c67482$93e310a0$0502010a@dsp.sun.ac.za> Message-ID: <4462AD4D.8070401@cox.net> Albert Strasheim wrote: >Hello all, > >It seems that many people are building on Windows without problems, except >for Stephan and myself. > >Let me start by staying that yes, the default build on Windows with MinGW >and Visual Studio works nicely. > >However, is anybody building with ATLAS and finding that experience to be >equally painless? If so, *please* can you tell me how you've organized your >libraries (which libraries? CLAPACK? FLAPACK? .a? .lib? What about ATLAS's >LAPACK functions? What about building ATLAS as DLL?). Also, I'd be very >interested in the contents of your site.cfg. I've been trying for many >weeks to do some small subset of the above without hacking into the core of >numpy.distutils. So far, no luck. > > Sorry, no help here. I'm just doing vanilla builds. >Does anybody do debug builds on Windows? Again, please tell me how you do >this, because I would really like to be able to build a debug version of >NumPy for debugging with the MSVS compiler. > > Again just vanilla builds, although this is something I'd like to try one of these days. (Is that MSVC compiler, or is that yet another compiler for windows). 
>As for compiler warnings, last time I checked, distutils seems to be >suppressing the output from the compiler, except when the build actually >fails. Or am I mistaken? > > Hmm. I hadn't thought about that. It certainly spits out plenty of warnings when the build fails, so I assumed that it was always spitting out warnings. [Fiddle] Ouch! It does indeed seem to supress warnings on a successful compilation. Anyone know a way to stop that off the top of their head? >Eagerly awaiting Windows build nirvana, > > Heh! Regards, -tim From ryanlists at gmail.com Wed May 10 20:21:03 2006 From: ryanlists at gmail.com (Ryan Krauss) Date: Wed May 10 20:21:03 2006 Subject: [Numpy-discussion] array of arbitrary objects Message-ID: Is it possible with numpy to create arrays of arbitrary objects? Specifically, I have defined a symbolic string class with operator overloading for most simple math operations: 'a'*'b' ==> 'a*b' Can I create two matrices of these symbolic string objects and multiply those matrices together? (simply doing array([[a,b],[c,d]]) did not work. the symbolic strings got cast to regular strings) Thanks, Ryan From ndarray at mac.com Wed May 10 20:28:01 2006 From: ndarray at mac.com (Sasha) Date: Wed May 10 20:28:01 2006 Subject: [Numpy-discussion] array of arbitrary objects In-Reply-To: References: Message-ID: You have to specify dtype to be object: >>> from numpy import * >>> class X(str): pass ... >>> a,b,c,d = map(X, 'abcd') >>> array([[a,b],[c,d]],'O') array([[a, b], [c, d]], dtype=object) >>> _ * 2 array([[aa, bb], [cc, dd]], dtype=object) On 5/10/06, Ryan Krauss wrote: > Is it possible with numpy to create arrays of arbitrary objects? > Specifically, I have defined a symbolic string class with operator > overloading for most simple math operations: 'a'*'b' ==> 'a*b' > > Can I create two matrices of these symbolic string objects and > multiply those matrices together? > > (simply doing array([[a,b],[c,d]]) did not work. 
the symbolic strings > got cast to regular strings) > > Thanks, > > Ryan > From robert.kern at gmail.com Wed May 10 20:28:05 2006 From: robert.kern at gmail.com (Robert Kern) Date: Wed May 10 20:28:05 2006 Subject: [Numpy-discussion] Re: array of arbitrary objects In-Reply-To: References: Message-ID: Ryan Krauss wrote: > Is it possible with numpy to create arrays of arbitrary objects? > Specifically, I have defined a symbolic string class with operator > overloading for most simple math operations: 'a'*'b' ==> 'a*b' > > Can I create two matrices of these symbolic string objects and > multiply those matrices together? > > (simply doing array([[a,b],[c,d]]) did not work. the symbolic strings > got cast to regular strings) Use dtype=object . -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth."
-- Umberto Eco From fullung at gmail.com Wed May 10 20:34:00 2006 From: fullung at gmail.com (Albert Strasheim) Date: Wed May 10 20:34:00 2006 Subject: [Numpy-discussion] Building on Windows In-Reply-To: <4462AD4D.8070401@cox.net> Message-ID: <008601c674ab$97e7fa70$0502010a@dsp.sun.ac.za> Hello all > -----Original Message----- > From: Tim Hochberg [mailto:tim.hochberg at cox.net] > Sent: 11 May 2006 05:20 > To: Albert Strasheim > Cc: 'numpy-discussion' > Subject: Re: [Numpy-discussion] Building on Windows > > Albert Strasheim wrote: > > >Hello all, > > > >It seems that many people are building on Windows without problems, > > except for Stephan and myself. > > > >Let me start by staying that yes, the default build on Windows with MinGW > >and Visual Studio works nicely. > > > >However, is anybody building with ATLAS and finding that experience to be > >equally painless? If so, *please* can you tell me how you've organized > >your libraries (which libraries? CLAPACK? FLAPACK? .a? .lib? What about > >ATLAS's LAPACK functions? What about building ATLAS as DLL?). Also, I'd > >be very interested in the contents of your site.cfg. I've been trying > > for many weeks to do some small subset of the above without hacking > >into the core of numpy.distutils. So far, no luck. > > Sorry, no help here. I'm just doing vanilla builds. I like vanilla, but I'd love to try one of the other hundred flavors! ;-) > >Does anybody do debug builds on Windows? Again, please tell me how you do > >this, because I would really like to be able to build a debug version of > >NumPy for debugging with the MSVS compiler. > > > > > Again just vanilla builds, although this is something I'd like to try > one of these days. (Is that MSVC compiler, or is that yet another > compiler for windows). MSVS (MS Visual Studio) and MSVC can probably be considered to be the same thing. However, you have many flavors (argh!). 
The Microsoft Visual C++ Toolkit 2003 only includes MSVC, while Visual C++ Express Edition 2005 and all the "pay-to-play" editions include MSVC and the MSV[SC] debugger. I think there's also another debugger called WinDbg which is included with the Platform SDK. > >As for compiler warnings, last time I checked, distutils seems to be > >suppressing the output from the compiler, except when the build actually > >fails. Or am I mistaken? > > > Hmm. I hadn't thought about that. It certainly spits out plenty of > warnings when the build fails, so I assumed that it was always spitting > out warnings. [Fiddle] Ouch! It does indeed seem to supress warnings on > a successful compilation. Anyone know a way to stop that off the top of > their head? See the following URL for the kind of pain this causes: http://article.gmane.org/gmane.comp.python.numeric.general/5219 > >Eagerly awaiting Windows build nirvana, > > > > > Heh! Thanks for your feedback. Not nirvana yet, but vanilla will do for now. Cheers, Albert From fullung at gmail.com Wed May 10 20:39:01 2006 From: fullung at gmail.com (Albert Strasheim) Date: Wed May 10 20:39:01 2006 Subject: [Numpy-discussion] Re: array of arbitrary objects In-Reply-To: Message-ID: <008701c674ac$67f6c3e0$0502010a@dsp.sun.ac.za> Hello all > -----Original Message----- > From: numpy-discussion-admin at lists.sourceforge.net [mailto:numpy- > discussion-admin at lists.sourceforge.net] On Behalf Of Robert Kern > Sent: 11 May 2006 05:28 > To: numpy-discussion at lists.sourceforge.net > Subject: [Numpy-discussion] Re: array of arbitrary objects > > Ryan Krauss wrote: > > Is it possible with numpy to create arrays of arbitrary objects? > > Specifically, I have defined a symbolic string class with operator > > overloading for most simple math operations: 'a'*'b' ==> 'a*b' > > > > Can I create two matrices of these symbolic string objects and > > multiply those matrices together? > > > > (simply doing array([[a,b],[c,d]]) did not work. 
the symbolic strings > > got cast to regular strings) > > Use dtype=object . How does one go about putting tuples into an object array? Consider the following example, courtesy of Louis Cordier: In [1]: import numpy as N In [2]: a = [(1,2), (2,3), (3,4)] In [3]: len(a) Out[3]: 3 In [5]: N.array(a, 'O') Out[5]: array([[1, 2], [2, 3], [3, 4]], dtype=object) instead of something like this: array([[(1, 2), (2, 3), (3, 4)]], dtype=object) Cheers, Albert From robert.kern at gmail.com Wed May 10 20:43:01 2006 From: robert.kern at gmail.com (Robert Kern) Date: Wed May 10 20:43:01 2006 Subject: [Numpy-discussion] Re: array of arbitrary objects In-Reply-To: <008701c674ac$67f6c3e0$0502010a@dsp.sun.ac.za> References: <008701c674ac$67f6c3e0$0502010a@dsp.sun.ac.za> Message-ID: Albert Strasheim wrote: > How does one go about putting tuples into an object array? Very carefully. In [2]: a = empty(3, dtype=object) In [3]: a Out[3]: array([None, None, None], dtype=object) In [4]: a[:] = [(1,2), (2,3), (3,4)] In [5]: a Out[5]: array([(1, 2), (2, 3), (3, 4)], dtype=object) In [6]: a.shape Out[6]: (3,) -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." 
-- Umberto Eco From tim.hochberg at cox.net Wed May 10 20:48:04 2006 From: tim.hochberg at cox.net (Tim Hochberg) Date: Wed May 10 20:48:04 2006 Subject: [Numpy-discussion] Re: array of arbitrary objects In-Reply-To: <008701c674ac$67f6c3e0$0502010a@dsp.sun.ac.za> References: <008701c674ac$67f6c3e0$0502010a@dsp.sun.ac.za> Message-ID: <4462B3DF.5060809@cox.net> Albert Strasheim wrote: >Hello all > > > >>-----Original Message----- >>From: numpy-discussion-admin at lists.sourceforge.net [mailto:numpy- >>discussion-admin at lists.sourceforge.net] On Behalf Of Robert Kern >>Sent: 11 May 2006 05:28 >>To: numpy-discussion at lists.sourceforge.net >>Subject: [Numpy-discussion] Re: array of arbitrary objects >> >>Ryan Krauss wrote: >> >> >>>Is it possible with numpy to create arrays of arbitrary objects? >>>Specifically, I have defined a symbolic string class with operator >>>overloading for most simple math operations: 'a'*'b' ==> 'a*b' >>> >>>Can I create two matrices of these symbolic string objects and >>>multiply those matrices together? >>> >>>(simply doing array([[a,b],[c,d]]) did not work. the symbolic strings >>>got cast to regular strings) >>> >>> >>Use dtype=object . >> >> > >How does one go about putting tuples into an object array? 
> >Consider the following example, courtesy of Louis Cordier: > >In [1]: import numpy as N >In [2]: a = [(1,2), (2,3), (3,4)] >In [3]: len(a) >Out[3]: 3 >In [5]: N.array(a, 'O') >Out[5]: >array([[1, 2], > [2, 3], > [3, 4]], dtype=object) > >instead of something like this: > >array([[(1, 2), (2, 3), (3, 4)]], dtype=object) > > Creating object arrays is always going to be a bit of a pain, however, the following approach will work here: >>> import numpy as N >>> a = [(1,2), (2,3), (3,4)] >>> b = N.zeros([len(a)], object) >>> b[:] = a >>> b array([(1, 2), (2, 3), (3, 4)], dtype=object) >>> type(b[0]) Regards, -tim From st at sigmasquared.net Wed May 10 23:26:02 2006 From: st at sigmasquared.net (Stephan Tolksdorf) Date: Wed May 10 23:26:02 2006 Subject: [Numpy-discussion] Building on Windows In-Reply-To: <4462AD4D.8070401@cox.net> References: <006901c67482$93e310a0$0502010a@dsp.sun.ac.za> <4462AD4D.8070401@cox.net> Message-ID: <4462D8B8.4010405@sigmasquared.net> >> As for compiler warnings, last time I checked, distutils seems to be >> suppressing the output from the compiler, except when the build actually >> fails. Or am I mistaken? >> >> > Hmm. I hadn't thought about that. It certainly spits out plenty of > warnings when the build fails, so I assumed that it was always spitting > out warnings. [Fiddle] Ouch! It does indeed seem to supress warnings on > a successful compilation. Anyone know a way to stop that off the top of > their head? A quick fix is to throw out the customized spawn by commenting out line 40 in distutils/ccompiler.py. Stephan From wbaxter at gmail.com Thu May 11 01:28:01 2006 From: wbaxter at gmail.com (Bill Baxter) Date: Thu May 11 01:28:01 2006 Subject: [Numpy-discussion] Creating small arrays from strings and concatenating with empty arrays Message-ID: Two quick questions: ---------1------------ Is there any better way to intialize an array from a string than this: A = asarray(matrix("1 2 3")) Or is that as good as it gets? 
I suppose it's not so inefficient. ----------2------------- A lot of time I'd like to be able to write loops like this: A = array([]) for row in function_that_returns_iterable_of_one_d_arrays(): A = vstack((A,row)) But that generates an error the first iteration because the shape of A is wrong. I could stick in a reshape in the loop: A = array([]) for row in function_that_returns_iterable_of_one_d_arrays(): if not A.shape[0]: A.reshape(0,row.shape[0]) A = vstack((A,row)) Meh, but I don't really like looking at 'if's in loops when I know they're really only going to be true once the first time. In Matlab the empty matrix [] can be concatenated with anything and just takes on its shape. It's handy for writing code like the above. Thanks, --bill From mateusz at loskot.net Thu May 11 04:39:15 2006 From: mateusz at loskot.net (Mateusz Łoskot) Date: Thu May 11 04:39:15 2006 Subject: [Numpy-discussion] NumPy, Numeric or numarray or all of them? Message-ID: <4463220B.20909@loskot.net> Hi, I'm a developer contributing to the GDAL project (http://www.gdal.org). GDAL is a core GeoSpatial Library that has had Python bindings for a while. Recently, we did get some reports from users that GDAL bindings do not work with the NumPy package. We've learned on the NumPy website that it's a new derivation from the Numeric code base. So, now we are facing the question what we should do? Should we completely port our project to use NumPy, or stay with Numeric for a while (e.g. 1 year)? There is also the idea to support both packages. Python plays a very important role in the GDAL project, so our concerns are quite critical for future development. This situation brings some questions we'd like to ask the NumPy Dev Team: Is it fair to say we are unlikely to see Numeric releases for new Pythons in the future? Can we consider NumPy as the only package in future?
Simply, we are wondering which Python library we should develop for NumPy, Numeric or numarray to be most generally useful. What's the recommended way to go now? We'd appreciate your assistance on this issue. Best regards -- Mateusz Łoskot http://mateusz.loskot.net From hetland at tamu.edu Thu May 11 07:16:00 2006 From: hetland at tamu.edu (Robert Hetland) Date: Thu May 11 07:16:00 2006 Subject: [Numpy-discussion] Who uses matrix? In-Reply-To: References: <44623207.2020306@noaa.gov> Message-ID: <1C3F540D-B712-480F-A8AF-031DDCA525A3@tamu.edu> I never use matrices primarily because I am worried about one more data type floating around in my code. That is, data is often read in or constructed as lists, and must be converted to an array to do anything useful. Take a simple example of optimal interpolation: Read in the data (as a list?), construct the background error covariance arrays (arrays), then do about three lines of linear algebra; e.g., W = dot(linalg.inv(B + O), Bi) # weights A = dot(self.Di,W).transpose() # analysis Ea = diag(sqrt(self.Be - dot(W.transpose(), Bi))) # analysis error Is it worth it to convert the arrays to matrices in order to do this handful of calculations? Almost. I covet the shorthand .T notation in the matrix object while getting RSI typing in t-r-a-n-s-p-o-s-e. Also, for involved calculations inverse, transpose et al. are long enough words such that the line always wraps, resulting in less-readable code. Should I give in? If there were some shorthand links to inverse and transpose methods in the array object, I would definitely stick with arrays. -Rob p.s. By the way, array broadcasting saves much pain in creating the background error covariance arrays. Yeah for array broadcasting!
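[Editor's note on the broadcasting cheer above: constructing a covariance matrix from pairwise separations is exactly the kind of thing broadcasting collapses into one line. A minimal sketch, using a made-up 1-D grid and Gaussian covariance model — none of this is taken from Rob's actual code:]

```python
import numpy as np

# Hypothetical 1-D analysis grid; the Gaussian model and the length
# scale of 2.0 are invented purely to illustrate broadcasting.
x = np.linspace(0.0, 10.0, 5)

# Broadcasting a (5, 1) column against a (1, 5) row produces the full
# (5, 5) matrix of pairwise separations with no explicit loop.
d = x[:, np.newaxis] - x[np.newaxis, :]
B = np.exp(-(d / 2.0) ** 2)  # background error covariance

print(B.shape)  # (5, 5)
```

[The same pattern extends to 2-D grids by broadcasting the separations in each coordinate before combining them.]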
----- Rob Hetland, Assistant Professor Dept of Oceanography, Texas A&M University p: 979-458-0096, f: 979-845-6331 e: hetland at tamu.edu, w: http://pong.tamu.edu From oliphant.travis at ieee.org Thu May 11 08:41:01 2006 From: oliphant.travis at ieee.org (Travis Oliphant) Date: Thu May 11 08:41:01 2006 Subject: [Numpy-discussion] NumPy, Numeric or numarray or all of them? In-Reply-To: <4463220B.20909@loskot.net> References: <4463220B.20909@loskot.net> Message-ID: <44635ABE.5030200@ieee.org> Mateusz Łoskot wrote: > Hi, > > I'm a developer contributing to GDAL project (http://www.gdal.org). > GDAL is a core GeoSpatial Library that has had Python bindings > for a while. Recently, we did get some reports from users that GDAL > bindings do not work with NumPy package. > Most packages can be "ported" simply by replacing #include "Numeric/arrayobject.h" with #include "numpy/arrayobject.h" and making sure the include files are retrieved from the right place. NumPy was designed to make porting from Numeric a breeze. > > This situation brings some questions we'd like to ask NumPy Dev Team: > > Is it fair to say we are unlikely to see Numeric releases for new > Pythons in the future? > Yes, that's fair. Nobody is maintaining Numeric. > Can we consider NumPy as the only package in future? > Yes. That's where development is currently active. > Simply, we are wondering which Python library we should develop for > NumPy, Numeric or numarray to be most generally useful. > NumPy is the merging of Numeric and numarray to bring people together. I think you should develop for NumPy. In practical terms, it is pretty easy to port from Numeric. > What's the recommended way to go now? > I've ported tens of packages to NumPy from Numeric and have had very little trouble. It is not difficult. Most of the time, simply replacing the *.h file with the one from numpy works fine. It might be a bit trickier to get your headers from the right place.
The directory is returned by import numpy.distutils numpy.distutils.misc_util.get_numpy_include_dirs() Give it a try it's not very difficult. -Travis From Chris.Barker at noaa.gov Thu May 11 09:15:01 2006 From: Chris.Barker at noaa.gov (Christopher Barker) Date: Thu May 11 09:15:01 2006 Subject: [Numpy-discussion] basearray / arraykit In-Reply-To: <44626012.6050506@ieee.org> References: <445F8F71.4070801@cox.net> <446245AB.6000900@ieee.org> <44624D24.3050406@cox.net> <44626012.6050506@ieee.org> Message-ID: <446362CC.6000004@noaa.gov> Travis Oliphant wrote: > 1) Implement a base-array with no getitem method nor setitem method at all > > 2) Implement a sub-class that supports only creation of data-types > corresponding to existing Python scalars (Boolean, Long-based integers, > Double-based floats, complex and object types). Then, all array > accesses should return the underlying Python objects. > This sub-class should also only do view-based indexing (basically it's > old Numeric behavior inside of NumPy). > Item 1) should be pushed for inclusion in 2.6 and possibly even > something like 2) + sys.maxint Having even this very basic n-d object in the standard lib would be a MAJOR boon to python. However, as I think about it, one reason I'd really like to see an nd-array in python is as a standard way to pass binary data around. Right now, I'm working with the GDAL lib for geo-referenced raster images, PIL, numpy and wxPython. I'm making a lot of copies to and from python strings to pass the binary data back and forth. If all these libs used the same nd-array, this would be much more efficient. However, that would require non-python data types, most notably a byte (or char, whatever) type. -Chris -- Christopher Barker, Ph.D. 
Oceanographer NOAA/OR&R/HAZMAT (206) 526-6959 voice 7600 Sand Point Way NE (206) 526-6329 fax Seattle, WA 98115 (206) 526-6317 main reception Chris.Barker at noaa.gov From Chris.Barker at noaa.gov Thu May 11 09:25:14 2006 From: Chris.Barker at noaa.gov (Christopher Barker) Date: Thu May 11 09:25:14 2006 Subject: [Numpy-discussion] Creating small arrays from strings and concatenating with empty arrays In-Reply-To: References: Message-ID: <44636533.3020201@noaa.gov> Bill Baxter wrote: > Two quick questions: > ---------1------------ > Is there any better way to intialize an array from a string than this: > > A = asarray(matrix("1 2 3")) How about: >>> import numpy as N >>> N.fromstring("1 2 3", sep = " ") array([1, 2, 3]) or >>> N.fromstring("1 2 3", dtype = N.Float, sep = " ") array([ 1., 2., 3.]) If you pass a non-empty "sep" parameter, it parses the string, rather than treating is as binary data. fromfile works this way too -- thanks Travis! -Chris -- Christopher Barker, Ph.D. Oceanographer NOAA/OR&R/HAZMAT (206) 526-6959 voice 7600 Sand Point Way NE (206) 526-6329 fax Seattle, WA 98115 (206) 526-6317 main reception Chris.Barker at noaa.gov From oliphant.travis at ieee.org Thu May 11 09:29:01 2006 From: oliphant.travis at ieee.org (Travis Oliphant) Date: Thu May 11 09:29:01 2006 Subject: [Numpy-discussion] basearray / arraykit In-Reply-To: <446362CC.6000004@noaa.gov> References: <445F8F71.4070801@cox.net> <446245AB.6000900@ieee.org> <44624D24.3050406@cox.net> <44626012.6050506@ieee.org> <446362CC.6000004@noaa.gov> Message-ID: <44636613.7050201@ieee.org> Christopher Barker wrote: > Travis Oliphant wrote: >> 1) Implement a base-array with no getitem method nor setitem method >> at all >> >> 2) Implement a sub-class that supports only creation of data-types >> corresponding to existing Python scalars (Boolean, Long-based >> integers, Double-based floats, complex and object types). Then, all >> array accesses should return the underlying Python objects. 
>> This sub-class should also only do view-based indexing (basically >> it's old Numeric behavior inside of NumPy). > >> Item 1) should be pushed for inclusion in 2.6 and possibly even >> something like 2) > > + sys.maxint > > Having even this very basic n-d object in the standard lib would be a > MAJOR boon to python. > I totally agree. I've been advertising this for at least 8 months, but nobody is really willing to work on it (or fund it). At least we have a summer student who is going to try and get Google summer-of-code money for it. If you have any ability to bump up the ratings of summer of code applications. Please consider bumping up his application. > However, as I think about it, one reason I'd really like to see an > nd-array in python is as a standard way to pass binary data around. > Right now, I'm working with the GDAL lib for geo-referenced raster > images, PIL, numpy and wxPython. I'm making a lot of copies to and > from python strings to pass the binary data back and forth. If all > these libs used the same nd-array, this would be much more efficient. > However, that would require non-python data types, most notably a byte > (or char, whatever) type. > Anything in Python would need to define at least a basic bytes type, for exactly this purpose. So, we are on the same page. I'm thinking now that the data-type object now in NumPy would make a nice addition to Python as well for a standard way to define data-types. Then, Python itself wouldn't have to do anything useful with non-standard arrays (except pass them around), but it would at least have a way to describe them. 
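[Editor's note: Travis's point about the data-type object as a general description mechanism can be made concrete — a dtype can describe an entire C-style record layout, not just a scalar type. A small sketch; the field names and layout are invented for illustration:]

```python
import numpy as np

# A dtype describing a packed record of two doubles and a 32-bit int.
point = np.dtype([('x', '<f8'), ('y', '<f8'), ('count', '<i4')])

print(point.names)     # ('x', 'y', 'count')
print(point.itemsize)  # 20 bytes: 8 + 8 + 4, unpadded by default
```

[A description like this is exactly what another library would need in order to interpret a foreign block of memory without copying it.]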
-Travis From tim.hochberg at cox.net Thu May 11 09:45:02 2006 From: tim.hochberg at cox.net (Tim Hochberg) Date: Thu May 11 09:45:02 2006 Subject: [Numpy-discussion] basearray / arraykit In-Reply-To: <44636613.7050201@ieee.org> References: <445F8F71.4070801@cox.net> <446245AB.6000900@ieee.org> <44624D24.3050406@cox.net> <44626012.6050506@ieee.org> <446362CC.6000004@noaa.gov> <44636613.7050201@ieee.org> Message-ID: <446369D6.7060205@cox.net> Travis Oliphant wrote: > Christopher Barker wrote: > >> Travis Oliphant wrote: >> >>> 1) Implement a base-array with no getitem method nor setitem method >>> at all >>> >>> 2) Implement a sub-class that supports only creation of data-types >>> corresponding to existing Python scalars (Boolean, Long-based >>> integers, Double-based floats, complex and object types). Then, all >>> array accesses should return the underlying Python objects. >>> This sub-class should also only do view-based indexing (basically >>> it's old Numeric behavior inside of NumPy). >> >> >>> Item 1) should be pushed for inclusion in 2.6 and possibly even >>> something like 2) >> >> >> + sys.maxint >> >> Having even this very basic n-d object in the standard lib would be a >> MAJOR boon to python. >> > > I totally agree. I've been advertising this for at least 8 months, > but nobody is really willing to work on it (or fund it). At least we > have a summer student who is going to try and get Google > summer-of-code money for it. If you have any ability to bump up the > ratings of summer of code applications. Please consider bumping up > his application. > >> However, as I think about it, one reason I'd really like to see an >> nd-array in python is as a standard way to pass binary data around. >> Right now, I'm working with the GDAL lib for geo-referenced raster >> images, PIL, numpy and wxPython. I'm making a lot of copies to and >> from python strings to pass the binary data back and forth. 
If all >> these libs used the same nd-array, this would be much more efficient. >> However, that would require non-python data types, most notably a >> byte (or char, whatever) type. >> > Anything in Python would need to define at least a basic bytes type, > for exactly this purpose. So, we are on the same page. > > I'm thinking now that the data-type object now in NumPy would make a > nice addition to Python as well for a standard way to define > data-types. Then, Python itself wouldn't have to do anything useful > with non-standard arrays (except pass them around), but it would at > least have a way to describe them. On this front, it's probably worth at least thinking a bit about whether there is any prospect of harmonizing ctypes type notation and the numpy data-type object. It seems somewhat silly to have (1) array.array's notation for types ['i', 'f', etc], (2) ctypes notation for types [c_int, c_float, etc] and (3) numpy's notation for types [dtype('<i4'), dtype('<f8'), etc]. -tim From Chris.Barker at noaa.gov Thu May 11 2006 From: Chris.Barker at noaa.gov (Christopher Barker) Date: Thu May 11 2006 Subject: [Numpy-discussion] NumPy, Numeric or numarray or all of them? In-Reply-To: <4463220B.20909@loskot.net> References: <4463220B.20909@loskot.net> Message-ID: <44636A10.1030909@noaa.gov> Mateusz Łoskot wrote: > I'm a developer contributing to GDAL project (http://www.gdal.org). > GDAL is a core GeoSpatial Library that has had Python bindings > for a while. Recently, we did get some reports from users that GDAL > bindings do not work with NumPy package. Speaking as a long time numpy (Numeric, etc.) user, and a new user of GDAL, I had no idea GDAL worked with num* at all! at least not directly. In fact, I thought I was going to have to write that code myself. Where do I find docs for this? I'm sure I've just missed something, but I'm finding the docs a bit sparse. On the other hand, I am also finding GDAL to be an excellent library, and have so far gotten it to do what I need it to do. So kudos! > So, now we are facing the question what we should do? > Should we completely port our project to use NumPy I say yes. At least for new code. numpy is the way of the future, and the more projects commit to it the better.
> or to stay with Numeric for a while (e.g. 1 year). There is also idea to support both > packages. That's a reasonable option as well. In fact, the APIs are pretty darn similar. Also, keep in mind that if you go exclusively to numpy, the newest version of Numeric will still work well with it. The conversion of numpy to Numeric arrays is very efficient, thanks to them both using the new "array protocol". Same goes for numarray. > Is it fair to say we are unlikely to see Numeric releases for new > Pythons in the future? I'm guessing someone might do a little maintainance. For the short term, the current Numeric will probably build just fine against the next couple releases of python -- python's good that way! > Can we consider NumPy as the only package in future? > Simply, we are wondering which Python library we should develop for > NumPy, Numeric or numarray to be most generally useful. Speaking as a very interested observer, but not a developer of any of the num* packages: numpy is the way of the future. As an observer, I think that's pretty clear. On the other hand, it is still beta software, so dropping Numeric just yet may not be appropriate. If you don't already support numarray, there is no reason to do so now. It will do the python numerical community a world of good for all of us to get back to a single array package. Note also that we hope some day to get a simple n-d array object into the python standard library. When that happens, that object is all but guaranteed to be compatible with numpy. -Chris -- Christopher Barker, Ph.D. 
Oceanographer NOAA/OR&R/HAZMAT (206) 526-6959 voice 7600 Sand Point Way NE (206) 526-6329 fax Seattle, WA 98115 (206) 526-6317 main reception Chris.Barker at noaa.gov From ndarray at mac.com Thu May 11 10:03:02 2006 From: ndarray at mac.com (Sasha) Date: Thu May 11 10:03:02 2006 Subject: [Numpy-discussion] basearray / arraykit In-Reply-To: <446369D6.7060205@cox.net> References: <445F8F71.4070801@cox.net> <446245AB.6000900@ieee.org> <44624D24.3050406@cox.net> <44626012.6050506@ieee.org> <446362CC.6000004@noaa.gov> <44636613.7050201@ieee.org> <446369D6.7060205@cox.net> Message-ID: On 5/11/06, Tim Hochberg wrote: > [...] > On this front, it's probably at least thinking a bit about whether there > is any prospect of harmonizing ctypes type notation and the numpy > data-type object. It seems somewhat silly to have (1) array.arrays > notation for types ['i'. 'f', etc], (2) ctypes notation for types > [c_int, c_float, etc] and (3) numpy's notation for types [dtype(' dtype(' Don't forget (4) the struct module (similar to (1), but not exactly the same). Also I am not familiar with Boost.Python, but it must have some way to reflect C++ types to Python. If anyone on this list uses Boost.Python, please think if we can borrow any ideas from there. From ndarray at mac.com Thu May 11 10:15:02 2006 From: ndarray at mac.com (Sasha) Date: Thu May 11 10:15:02 2006 Subject: [Numpy-discussion] NumPy, Numeric or numarray or all of them? In-Reply-To: <44636A10.1030909@noaa.gov> References: <4463220B.20909@loskot.net> <44636A10.1030909@noaa.gov> Message-ID: On 5/11/06, Christopher Barker wrote: > [...] > > or to stay with Numeric for a while (e.g. 1 year). There is also idea to support both > > packages. > > That's a reasonable option as well. In fact, the APIs are pretty darn > similar. Also, keep in mind that if you go exclusively to numpy, the > newest version of Numeric will still work well with it. 
The conversion > of numpy to Numeric arrays is very efficient, thanks to them both using > the new "array protocol". Same goes for numarray. I have started using the array protocol recently, and it is very useful. It allows you to eliminate compile time dependency on any of the particular array package from your extension modules. If your package already has an array-like object, that object should provide an __array_struct__ attribute and that will make it acceptable to asarray method from any of the latest array packages (numpy, numarray, or Numeric). If you have functions that take array as arguments they should be modified to accept any object that has __array_struct__. From st at sigmasquared.net Thu May 11 10:55:13 2006 From: st at sigmasquared.net (Stephan Tolksdorf) Date: Thu May 11 10:55:13 2006 Subject: [Numpy-discussion] Compiler warnings Message-ID: <44637A3D.1060203@sigmasquared.net> Seems like the setup script currently suppresses quite a lot of warnings, not only in MSVC. I created a ticket (#113) with a patch that fixes many of the warnings under Visual C. Maybe someone else could have a look at the remaining ones. @Albert: Do you still have a linking problem with the math functions? I haven't seen such a warning... 
Ciao, Stephan From oliphant at ee.byu.edu Thu May 11 11:13:03 2006 From: oliphant at ee.byu.edu (Travis Oliphant) Date: Thu May 11 11:13:03 2006 Subject: [Numpy-discussion] basearray / arraykit In-Reply-To: <446369D6.7060205@cox.net> References: <445F8F71.4070801@cox.net> <446245AB.6000900@ieee.org> <44624D24.3050406@cox.net> <44626012.6050506@ieee.org> <446362CC.6000004@noaa.gov> <44636613.7050201@ieee.org> <446369D6.7060205@cox.net> Message-ID: <44637E83.7090207@ee.byu.edu> Tim Hochberg wrote: > Travis Oliphant wrote: > >> Christopher Barker wrote: >> >>> Travis Oliphant wrote: >>> >>>> 1) Implement a base-array with no getitem method nor setitem method >>>> at all >>>> >>>> 2) Implement a sub-class that supports only creation of data-types >>>> corresponding to existing Python scalars (Boolean, Long-based >>>> integers, Double-based floats, complex and object types). Then, >>>> all array accesses should return the underlying Python objects. >>>> This sub-class should also only do view-based indexing (basically >>>> it's old Numeric behavior inside of NumPy). >>> >>> >>> >>>> Item 1) should be pushed for inclusion in 2.6 and possibly even >>>> something like 2) >>> >>> >>> >>> + sys.maxint >>> >>> Having even this very basic n-d object in the standard lib would be >>> a MAJOR boon to python. >>> >> >> I totally agree. I've been advertising this for at least 8 months, >> but nobody is really willing to work on it (or fund it). At least >> we have a summer student who is going to try and get Google >> summer-of-code money for it. If you have any ability to bump up the >> ratings of summer of code applications. Please consider bumping up >> his application. >> >>> However, as I think about it, one reason I'd really like to see an >>> nd-array in python is as a standard way to pass binary data around. >>> Right now, I'm working with the GDAL lib for geo-referenced raster >>> images, PIL, numpy and wxPython. 
I'm making a lot of copies to and >>> from python strings to pass the binary data back and forth. If all >>> these libs used the same nd-array, this would be much more >>> efficient. However, that would require non-python data types, most >>> notably a byte (or char, whatever) type. >>> >> Anything in Python would need to define at least a basic bytes type, >> for exactly this purpose. So, we are on the same page. >> >> I'm thinking now that the data-type object now in NumPy would make a >> nice addition to Python as well for a standard way to define >> data-types. Then, Python itself wouldn't have to do anything >> useful with non-standard arrays (except pass them around), but it >> would at least have a way to describe them. > > > On this front, it's probably at least thinking a bit about whether > there is any prospect of harmonizing ctypes type notation and the > numpy data-type object. It seems somewhat silly to have (1) > array.arrays notation for types ['i'. 'f', etc], (2) ctypes notation > for types [c_int, c_float, etc] and (3) numpy's notation for types > [dtype(' I agree in spirit. Python is growing several ways to do the same thing because determining what data-type you are dealing with is useful in many contexts. Unfortunately, there is not a lot of cross-pollination between the people. I've contributed to a couple of threads to try and at least raise awareness that data-description is something NumPy has been thinking about, also. 
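[Editor's note: the three overlapping notations Tim lists can be put side by side. A sketch; the assumption, flagged in the comments, is that the platform's C int is 32 bits, which holds on common platforms but is not guaranteed:]

```python
import array
import ctypes
import numpy as np

# The same 32-bit signed integers spelled three ways. The 'i' typecode
# and ctypes.c_int are platform ints, assumed here to be 32-bit.
a = array.array('i', [1, 2, 3])          # stdlib array-module typecode
c = (ctypes.c_int * 3)(1, 2, 3)          # ctypes type
n = np.array([1, 2, 3], dtype=np.int32)  # numpy dtype

# All three describe one memory layout, so numpy can view the others:
print(np.frombuffer(a, dtype=np.int32))  # [1 2 3]
print(np.ctypeslib.as_array(c))          # [1 2 3]
```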
-Travis From fperez.net at gmail.com Thu May 11 11:24:08 2006 From: fperez.net at gmail.com (Fernando Perez) Date: Thu May 11 11:24:08 2006 Subject: [Numpy-discussion] basearray / arraykit In-Reply-To: <44637E83.7090207@ee.byu.edu> References: <445F8F71.4070801@cox.net> <446245AB.6000900@ieee.org> <44624D24.3050406@cox.net> <44626012.6050506@ieee.org> <446362CC.6000004@noaa.gov> <44636613.7050201@ieee.org> <446369D6.7060205@cox.net> <44637E83.7090207@ee.byu.edu> Message-ID: On 5/11/06, Travis Oliphant wrote: > I agree in spirit. Python is growing several ways to do the same thing > because determining what data-type you are dealing with is useful in > many contexts. Unfortunately, there is not a lot of cross-pollination > between the people. I've contributed to a couple of threads to try and > at least raise awareness that data-description is something NumPy has > been thinking about, also. Sounds like a good topic for the discussion idea. I started the wiki for it: http://scipy.org/SciPy06DiscussionWithGuido Gotta run now, I'll announce it on list later. f From lapataia28-northern at yahoo.com Thu May 11 11:52:04 2006 From: lapataia28-northern at yahoo.com (Lalo) Date: Thu May 11 11:52:04 2006 Subject: [Numpy-discussion] Building on Windows In-Reply-To: <44627BC0.3080403@sigmasquared.net> Message-ID: <20060511185101.63772.qmail@web36201.mail.mud.yahoo.com> Hi, I built on Cygwin using the detailed ATLAS+LAPACK build tutorial below. It's great, thanks! 
My site.cfg is: ------------------------------------------------- [DEFAULT] library_dirs = /usr/lib:/usr/local/lib:/usr/lib/python2.4/config include_dirs = /usr/include:/usr/local/include search_static_first = 0 [atlas] library_dirs = /usr/local/lib/atlas # for overriding the names of the atlas libraries atlas_libs = lapack, f77blas, cblas, atlas ------------------------------------------------- However, I had to create the following link, not sure why it wouldn't find the library otherwise: ln -s /usr/lib/python2.4/config/libpython2.4.dll.a /usr/lib/libpython2.4.dll.a Hope this helps, Lalo ----- Original Message ---- From: Stephan Tolksdorf To: Albert Strasheim Cc: numpy-discussion Sent: Wednesday, May 10, 2006 4:48:16 PM Subject: Re: [Numpy-discussion] Building on Windows Hi Albert, in the following you find an abridged preview version of my MSVC+ATLAS+Lapack build tutorial ;-) You probably already know most of it, but maybe it helps. You'll need a current Cygwin with g77 and MinGW-libraries installed. Atlas: ====== Download and extract the latest development ATLAS (3.7.11). Comment out line 77 in Atlas/CONFIG/probe_SSE3.c. Run "make" and choose the appropriate options for your system. Don't activate posix threads (for now). Overwrite the compiler and linker flags with flags that include "-mno-cygwin". Use the default architecture settings. Atlas and the test suite hopefully compile without an error now. Lapack: ======= Download and extract www.netlib.org/lapack/lapack.tgz and apply the most current patch from www.netlib.org/lapack-dev/ Replace lapack/make.inc with lapack/INSTALL/make.inc.LINUX. Append "-mno-cygwin" to OPTS, NOOPT and LOADOPTS in make.inc. Add ".PHONY: install testing timing" as the last line to lapack/Makefile. Run "make install lib" in the lapack root directory in Cygwin. ("make testing timing" should also work now, but you probably want to use your optimised BLAS for that. Some errors in the tests are to be expected.)
Atlas + Lapack: =============== Copy the generated lapack_LINUX.a together with "libatlas.a", "libcblas.a", "libf77blas.a", "liblapack.a" into a convenient directory. In Cygwin execute the following command sequence in that directory to get an ATLAS-optimized LAPACK library "ar x liblapack.a ar r lapack_LINUX.a *.o rm *.o mv lapack_LINUX.a liblapack.a" Now make a copy of all lib*.a's to *.lib's, i.e. duplicate libatlas.a to atlas.lib, in order to allow distutils to recognize the libs and at the same time provide the correct versions for MSVC. Copy libg2c.a and libgcc.a from cygwin/lib/gcc/i686-pc-mingw32/3.4.4 to this directory and again make .lib copies. Compile and install numpy: ========================== Put "[atlas] library_dirs = d:\path\to\your\BlasDirectory atlas_libs = lapack,f77blas,cblas,atlas,g2c,gcc" into your site.cfg in the numpy root directory. Open a Visual Studio 2003 command prompt and run "Path\To\Python.exe setup.py config --compiler=msvc build --compiler=msvc bdist_wininst". Use the resulting dist/numpy-VERSION.exe installer to install Numpy. Testing: In a Python console run import numpy.testing numpy.testing.NumpyTest(numpy).run() ... hopefully without an error. Test your code base. I'll wikify an extended version in the next few days. Stephan Albert Strasheim wrote: > Hello all, > > It seems that many people are building on Windows without problems, except > for Stephan and myself. > > Let me start by saying that yes, the default build on Windows with MinGW > and Visual Studio works nicely. > > However, is anybody building with ATLAS and finding that experience to be > equally painless? If so, *please* can you tell me how you've organized your > libraries (which libraries? CLAPACK? FLAPACK? .a? .lib? What about ATLAS's > LAPACK functions? What about building ATLAS as DLL?). Also, I'd be very > interested in the contents of your site.cfg.
I've been trying for many > weeks to do some small subset of the above without hacking into the core of > numpy.distutils. So far, no luck. > > Does anybody do debug builds on Windows? Again, please tell me how you do > this, because I would really like to be able to build a debug version of > NumPy for debugging with the MSVS compiler. > > As for compiler warnings, last time I checked, distutils seems to be > suppressing the output from the compiler, except when the build actually > fails. Or am I mistaken? > > Eagerly awaiting Windows build nirvana, > > Albert > >> -----Original Message----- >> From: numpy-discussion-admin at lists.sourceforge.net [mailto:numpy- >> discussion-admin at lists.sourceforge.net] On Behalf Of Tim Hochberg >> Sent: 10 May 2006 23:49 >> To: Travis Oliphant >> Cc: Stephan Tolksdorf; numpy-discussion >> Subject: Re: [Numpy-discussion] Building on Windows >> >> Travis Oliphant wrote: >> >>> Stephan Tolksdorf wrote: >>> >>>> Hi, >>>> >>>> there are still some (mostly minor) problems with the Windows build >>>> of Numpy (MinGW/Cygwin/MSVC). I'd be happy to produce patches and >>>> documentation enhancements, but before doing so I'd like to know if >>>> there's interest from one of the core developers to review/commit >>>> these patches afterwards. I'm asking because in the past questions >>>> and suggestions regarding the building process of Numpy (especially >>>> on Windows) often remained unanswered on this list. I realise that >>>> many developers don't use Windows and that the distutils build is a >>>> complex beast, but the current situation seems a bit unsatisfactory >>>> - and I would like to help. >>> I think your assessment is a bit harsh. I regularly build on MinGW >>> so I know it works there (at least at release time). I also have >>> applied several patches with the express purpose of getting the build >>> working on MSVC and Cygwin. >>> >>> So, go ahead and let us know what problems you are having. 
You are >>> correct that my main build platform is not Windows, but I think >>> several other people do use Windows regularly and we definitely want >>> to support it. >>> >> Indeed. I build from SVN at least once a week using MSVC and it's been >> compiling warning-free and passing all tests for me for some time. >> >> -tim > > > > ------------------------------------------------------- > Using Tomcat but need to do more? Need to support web services, security? > Get stuff done quickly with pre-integrated technology to make your job easier > Download IBM WebSphere Application Server v.1.0.1 based on Apache Geronimo > http://sel.as-us.falkag.net/sel?cmd=lnk&kid=120709&bid=263057&dat=121642 > _______________________________________________ > Numpy-discussion mailing list > Numpy-discussion at lists.sourceforge.net > https://lists.sourceforge.net/lists/listinfo/numpy-discussion > > -------------- next part -------------- An HTML attachment was scrubbed...
URL: From fullung at gmail.com Thu May 11 12:04:03 2006 From: fullung at gmail.com (Albert Strasheim) Date: Thu May 11 12:04:03 2006 Subject: [Numpy-discussion] Compiler warnings In-Reply-To: <44637A3D.1060203@sigmasquared.net> Message-ID: <013f01c6752d$952a3ff0$0502010a@dsp.sun.ac.za> Hello all > -----Original Message----- > From: numpy-discussion-admin at lists.sourceforge.net [mailto:numpy- > discussion-admin at lists.sourceforge.net] On Behalf Of Stephan Tolksdorf > Sent: 11 May 2006 19:54 > To: numpy-discussion at lists.sourceforge.net > Subject: [Numpy-discussion] Compiler warnings > > Seems like the setup script currently suppresses quite a lot of > warnings, not only in MSVC. I created a ticket (#113) with a patch that > fixes many of the warnings under Visual C. Maybe someone else could have > a look at the remaining ones. > > @Albert: Do you still have a linking problem with the math functions? I > haven't seen such a warning... I sent Travis a description of the problem and he fixed it a while back. Nice work on fixing the compiler warnings. I'll take a look at what's left over when your patch is merged. Regards, Albert From st at sigmasquared.net Thu May 11 12:36:02 2006 From: st at sigmasquared.net (Stephan Tolksdorf) Date: Thu May 11 12:36:02 2006 Subject: [Numpy-discussion] Building on Windows In-Reply-To: <20060511185101.63772.qmail@web36201.mail.mud.yahoo.com> References: <20060511185101.63772.qmail@web36201.mail.mud.yahoo.com> Message-ID: <446391FD.5030901@sigmasquared.net> Lalo wrote: > Hi, > > I built on Cygwin using the detailed ATLAS+LAPACK build tutorial below. > It's great, thanks! If I've understood you correctly, you built Numpy in Cygwin using the _Python_ compiler from Cygwin, not the official Python binary distribution. As far as I know, that is currently not supported and probably unsafe because the resulting extension modules and Python will be linked to incompatible runtime libraries (msvcrt and the Cygwin one). 
If you want to build Numpy under Windows without MSVC, currently the easiest way is to download MinGW from http://sourceforge.net/forum/forum.php?forum_id=539405 install it and put the MinGW/bin folder on the path of a normal Windows command prompt. You do not need MSYS if you have built ATLAS and LAPACK with Cygwin and the -mno-cygwin option, as previously described. Now run "d:\path\to\official\python.exe setup.py config --compiler=mingw32 build --compiler=mingw32 bdist_wininst" in the Numpy root directory in the Windows command prompt window (not Cygwin). Stephan > [... quoted copy of Lalo's message, the build tutorial and Albert's original message snipped; see above ...] From cookedm at physics.mcmaster.ca Thu May 11 12:48:05 2006 From: cookedm at physics.mcmaster.ca (David M. Cooke) Date: Thu May 11 12:48:05 2006 Subject: [Numpy-discussion] Getting 0.9.8 out this week In-Reply-To: <4461131B.1050907@ieee.org> (Travis Oliphant's message of "Tue, 09 May 2006 16:09:31 -0600") References: <4461131B.1050907@ieee.org> Message-ID: Travis Oliphant writes: > I'd like to get 0.9.8 of NumPy released by the end of the week. > There are a few Trac tickets that need to be resolved by then. > > In particular #83 suggests returning scalars instead of 1x1 matrices > from certain reduce-like methods. Please chime in on your preference. > I'm waiting to hear more feedback before applying the patch. > > If you can help out on any other ticket that would be much > appreciated. I'd like to fix up #81 (Numpy should be installable with setuptools' easy_install), but I'm not going to have any time to work on it before the weekend. -- |>|\/|< /--------------------------------------------------------------------------\ |David M. Cooke http://arbutus.physics.mcmaster.ca/dmc/ |cookedm at physics.mcmaster.ca From fperez.net at gmail.com Thu May 11 13:01:05 2006 From: fperez.net at gmail.com (Fernando Perez) Date: Thu May 11 13:01:05 2006 Subject: [Numpy-discussion] Discussion with Guido: an idea for scipy'06 Message-ID: Hi all, I think the presence of Guido as keynote speaker for scipy'06 is something we should try to benefit from as much as possible.
I figured a good way to do that would be to set aside a one hour period, at the end of the first day (his keynote is that day), to hold a discussion with him on all aspects of Python which are relevant to us as a group of users, and which he may contribute feedback to, incorporate into future versions, etc. I floated the idea privately by some people and got no negative (and some positive) feedback, so I'm now putting it out on the lists. If you all think it's a waste of time, it's easy to just kill the thing. Since I know that many on this list may not be able to attend the conference, but may still have valuable ideas to contribute, I thought the best way to proceed (assuming people want to do this) would be to prepare the key points for discussion on a public forum, true to the spirit of open source collaboration. For this purpose, I've just created a stub page in a hurry: http://scipy.org/SciPy06DiscussionWithGuido Feel free to contribute to it. Hopefully there (and on-list) we can sort out interesting questions, and we can contact Guido a few days before the conference so he has a chance to read it in advance. Cheers, f ps - I didn't link to this page from anywhere else on the wiki, so outside of this message it won't be easy to find. I just didn't feel comfortable touching the more 'visible' pages, but if this idea floats, we should make it easier to find by linking to it on one of the conference pages. From joris at ster.kuleuven.ac.be Thu May 11 14:14:01 2006 From: joris at ster.kuleuven.ac.be (joris at ster.kuleuven.ac.be) Date: Thu May 11 14:14:01 2006 Subject: [Numpy-discussion] resize() Message-ID: <1147381970.4463a8d2ca072@webmail.ster.kuleuven.be> Hi, I was surprised by the following effect of resize() >>> from numpy import * # 0.9.6 >>> a = array([1,2,3,4]) >>> a.resize(2,2) >>> a array([[1, 2], [3, 4]]) >>> a.resize(2,3) Traceback (most recent call last): File "", line 1, in ? 
ValueError: cannot resize an array that has been referenced or is referencing another array in this way. Use the resize function Where exactly is the reference? I just started the interactive python shell, did nothing else... On the other hand, restarting python and executing >>> from numpy import * >>> a = array([1,2,3,4]) >>> a.resize(2,3) >>> a array([[1, 2, 3], [4, 0, 0]]) does work... Why didn't it work for the first case? Joris Disclaimer: http://www.kuleuven.be/cwis/email_disclaimer.htm From fonnesbeck at gmail.com Thu May 11 15:26:02 2006 From: fonnesbeck at gmail.com (Chris Fonnesbeck) Date: Thu May 11 15:26:02 2006 Subject: [Numpy-discussion] OS X binary installers Message-ID: <723eb6930605111525m1718b70al19cbca2aabe31f22@mail.gmail.com> I am now distributing Mac binary installers for both numpy and scipy in a "meta-package" along with a couple other modules (Matplotlib, PyMC). This will hopefully resolve some of the version conflicts that some have been experiencing with my builds of numpy and scipy that were not compiled together. These builds are recent svn checkouts, and I hope to update them approximately weekly. In addition, now that I have a new dual core Intel Mac Mini, I am distributing both PPC and Intel versions. You can download either at http://trichech.us in the OS X downloads section. Chris -- Chris Fonnesbeck + Atlanta, GA + http://trichech.us -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From filip at ftv.pl Thu May 11 16:00:04 2006 From: filip at ftv.pl (Filip Wasilewski) Date: Thu May 11 16:00:04 2006 Subject: [Numpy-discussion] resize() In-Reply-To: <1147381970.4463a8d2ca072@webmail.ster.kuleuven.be> References: <1147381970.4463a8d2ca072@webmail.ster.kuleuven.be> Message-ID: <1162264868.20060512005926@gmail.com> Hi joris, > I was surprised by the following effect of resize() >>>> from numpy import * # 0.9.6 >>>> a = array([1,2,3,4]) >>>> a.resize(2,2) >>>> a > array([[1, 2], > [3, 4]]) >>>> a.resize(2,3) > Traceback (most recent call last): > File "", line 1, in ? > ValueError: cannot resize an array that has been referenced or is referencing > another array in this way. Use the resize function > Where exactly is the reference? I just started the interactive python shell, > did nothing else... You have also typed >>> a which in turn prints the repr() of the array and causes a side effect in interactive mode (the `a` array is also referenced by the _ special variable after this). Try running this code as a script or use `print a`: >>> a.resize(2,2) >>> print a [[1 2] [3 4]] >>> a.resize(2,3) >>> print a [[1 2 3] [4 0 0]] > On the other hand, restarting python and executing >>>> from numpy import * >>>> a = array([1,2,3,4]) >>>> a.resize(2,3) >>>> a > array([[1, 2, 3], > [4, 0, 0]]) > does work... Yes, no extra referencing before array resizing here. > Why didn't it work for the first case? This is just a small interactive mode feature and does not happen during normal script execution. cheers, fw From wbaxter at gmail.com Thu May 11 16:29:03 2006 From: wbaxter at gmail.com (Bill Baxter) Date: Thu May 11 16:29:03 2006 Subject: [Numpy-discussion] Creating small arrays from strings and concatenating with empty arrays In-Reply-To: <44636533.3020201@noaa.gov> References: <44636533.3020201@noaa.gov> Message-ID: Ahh, I hadn't noticed the fromstring/fromfile methods. Hmm.
That seems ok for making a row-at-a-time, but it doesn't support the full syntax of the matrix string constructor, which allows for things like >>> numpy.matrix("[1 2; 2 3;3 4]") matrix([[1, 2], [2, 3], [3, 4]]) On the other hand since it's 'matrix', it turns things like "1 2 3" into [[1,2,3]] instead of just [1,2,3]. I think an array version of the matrix string constructor that returns the latter would be handy. But it's admittedly a pretty minor thing. ---bb On 5/12/06, Christopher Barker wrote: > > > Bill Baxter wrote: > > Two quick questions: > > ---------1------------ > > Is there any better way to initialize an array from a string than this: > > > > A = asarray(matrix("1 2 3")) > > How about: > > >>> import numpy as N > >>> N.fromstring("1 2 3", sep = " ") > array([1, 2, 3]) > > or > > >>> N.fromstring("1 2 3", dtype = N.Float, sep = " ") > array([ 1., 2., 3.]) > > If you pass a non-empty "sep" parameter, it parses the string, rather > than treating it as binary data. fromfile works this way too -- thanks > Travis! > > -Chris > > > -- > Christopher Barker, Ph.D. > Oceanographer > > NOAA/OR&R/HAZMAT (206) 526-6959 voice > 7600 Sand Point Way NE (206) 526-6329 fax > Seattle, WA 98115 (206) 526-6317 main reception > > Chris.Barker at noaa.gov > > -- William V.
Baxter III OLM Digital Kono Dens Building Rm 302 1-8-8 Wakabayashi Setagaya-ku Tokyo, Japan 154-0023 +81 (3) 3422-3380 -------------- next part -------------- An HTML attachment was scrubbed... URL: From wbaxter at gmail.com Thu May 11 17:26:04 2006 From: wbaxter at gmail.com (Bill Baxter) Date: Thu May 11 17:26:04 2006 Subject: [Numpy-discussion] numpy crash on numpy.tri(-1) Message-ID: Subject says it all. numpy.tri(-1) crashes the python process. >>> numpy.__version__ '0.9.6' --bb -------------- next part -------------- An HTML attachment was scrubbed... URL: From robert.kern at gmail.com Thu May 11 17:38:02 2006 From: robert.kern at gmail.com (Robert Kern) Date: Thu May 11 17:38:02 2006 Subject: [Numpy-discussion] Re: numpy crash on numpy.tri(-1) In-Reply-To: References: Message-ID: Bill Baxter wrote: > Subject says it all. numpy.tri(-1) crashes the python process. > >>>> numpy.__version__ > '0.9.6' On OS X: >>> import numpy >>> numpy.tri(-1) array([], shape=(0, 0), dtype=int32) >>> numpy.__version__ '0.9.7.2476' While probably not "right" in any sense of the word, it doesn't crash in recent versions. In the future, it would be good to post the output of the crash. Sometimes people use the word "crash" to imply that an exception made it to the toplevel rather than, say, a segfault at the C level. Knowing what kind of "crash" is important. With segfaults, it is also usually helpful to know the platform you are on. With segfaults in linear algebra code, it is extremely helpful to know what BLAS and LAPACK you used to compile numpy. Thank you. -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." 
-- Umberto Eco From wbaxter at gmail.com Thu May 11 17:51:13 2006 From: wbaxter at gmail.com (Bill Baxter) Date: Thu May 11 17:51:13 2006 Subject: [Numpy-discussion] Re: numpy crash on numpy.tri(-1) In-Reply-To: References: Message-ID: Sorry about the lack of detail. The platform is Windows. I'm using binaries off the Scipy website. And by crash I mean a dialog pops up in my pycrust shell saying "pythonw.exe has encountered a problem and needs to close. We are sorry for the inconvenience", and then asks me if I want to send Microsoft an error report. Perhaps it is fixed in a more recent version of numpy, though. --bb On 5/12/06, Robert Kern wrote: > > Bill Baxter wrote: > > Subject says it all. numpy.tri(-1) crashes the python process. > > > >>>> numpy.__version__ > > '0.9.6' > > On OS X: > > >>> import numpy > >>> numpy.tri(-1) > array([], shape=(0, 0), dtype=int32) > >>> numpy.__version__ > '0.9.7.2476' > > While probably not "right" in any sense of the word, it doesn't crash in > recent > versions. > > In the future, it would be good to post the output of the crash. Sometimes > people use the word "crash" to imply that an exception made it to the > toplevel > rather than, say, a segfault at the C level. Knowing what kind of "crash" > is > important. With segfaults, it is also usually helpful to know the platform > you > are on. With segfaults in linear algebra code, it is extremely helpful to > know > what BLAS and LAPACK you used to compile numpy. > > Thank you. > > -- > Robert Kern > > "I have come to believe that the whole world is an enigma, a harmless > enigma > that is made terrible by our own mad attempt to interpret it as though it > had > an underlying truth." > -- Umberto Eco -- William V. Baxter III OLM Digital Kono Dens Building Rm 302 1-8-8 Wakabayashi Setagaya-ku Tokyo, Japan 154-0023 +81 (3) 3422-3380 -------------- next part -------------- An HTML attachment was scrubbed... URL: From wbaxter at gmail.com Fri May 12 01:18:01 2006 From: wbaxter at gmail.com (Bill Baxter) Date: Fri May 12 01:18:01 2006 Subject: [Numpy-discussion] Who uses matrix? In-Reply-To: <1C3F540D-B712-480F-A8AF-031DDCA525A3@tamu.edu> References: <44623207.2020306@noaa.gov> <1C3F540D-B712-480F-A8AF-031DDCA525A3@tamu.edu> Message-ID: On 5/11/06, Robert Hetland wrote: > > > Is it worth it to convert the arrays to matrices in order to do this > handful of calculations? Almost. I covet the shorthand .T notation > in matrix object while getting RSI typing in t-r-a-n-s-p-o-s-e. > Also, for involved calculations inverse, transpose et al. are long > enough words such that the line always wraps, resulting in less- > readable code. +1 on a .T shortcut for arrays. +1 on a .H shortcut for arrays, too. (Instead of .conj().transpose()) I'm not wild about the .I shortcut. I prefer to have big expensive operations like a matrix inverse stand out a little more when I'm looking at the code. And I hardly ever need an inverse anyway (usually an lu_factor or SVD or something like that will do what I need more efficiently and robustly than directly taking an inverse). I just finished writing my first few hundred lines of code using array instead of matrix. It went fine.
Here are my impressions: it was nice not having to worry so much about whether my vectors were row vectors or column vectors all the time, or whether this or that was matrix type or not. It felt much less like I was fighting with numpy than when I was using matrix. I also ported a little bit of matlab code that was full of apostrophes (matlab's transpose operator). With numpy arrays, all but one of the transposes went away. I realized also that most of what I end up doing is wrangling data to get it in the right shape and form so that I can to do just a few lines of linear algebra on it, similar to what you observe, Rob, so it makes little sense to have everything be matrices all the time. So looks like I'm joining the camp of "matrix -- not worth the bother" folks. The .T shortcut for arrays would still be nice though. And some briefer way to make a new axis than 'numpy.newaxis' would be nice too (I'm trying to keep my global namespace clean these days). --- Whoa I just noticed that a[None,:] has the same effect as 'newaxis'. Is that a blessed way to do things? Regards, Bill -------------- next part -------------- An HTML attachment was scrubbed... URL: From svetosch at gmx.net Fri May 12 01:34:11 2006 From: svetosch at gmx.net (Sven Schreiber) Date: Fri May 12 01:34:11 2006 Subject: [Numpy-discussion] Who uses matrix? In-Reply-To: References: <44623207.2020306@noaa.gov> <1C3F540D-B712-480F-A8AF-031DDCA525A3@tamu.edu> Message-ID: <4464484E.2040200@gmx.net> Bill Baxter schrieb: > So looks like I'm joining the camp of "matrix -- not worth the bother" > folks. The .T shortcut for arrays would still be nice though. > Bill, I think I see what you mean. Nevertheless, I prefer my vectors to have a row-or-column orientation, for example because the requirement of conforming shapes for multiplication reveals (some of) my bugs when I slice a vector out of a matrix but along the wrong axis. It seems that otherwise I would get a bogus result without necessarily realizing it. 
I guess that also tells something about my inefficient matrix coding style in general, but still... For rapidly implementing some published formulae I prefer to map them to (numpy) code as directly and proof-readably as possible, at least initially. And that means with matrices. cheers, sven From tim.hochberg at cox.net Fri May 12 08:06:32 2006 From: tim.hochberg at cox.net (Tim Hochberg) Date: Fri May 12 08:06:32 2006 Subject: [Numpy-discussion] Who uses matrix? In-Reply-To: References: <44623207.2020306@noaa.gov> <1C3F540D-B712-480F-A8AF-031DDCA525A3@tamu.edu> Message-ID: <4464A3F9.3010204@cox.net> Bill Baxter wrote: > On 5/11/06, *Robert Hetland* > wrote: > > > Is it worth it to convert the arrays to matrices in order to do this > handful of calculation? Almost. I covet the shorthand .T notation > in matrix object while getting RSI typing in t-r-a-n-s-p-o-s-e. > Also, for involved calculations inverse, transpose et. al are long > enough words such that the line always wraps, resulting in less- > readable code. > Note that T(a) is only one character longer than a.T and is trivially implementable today. If you're doing enough typing of t-r-a-n-s-p-o-s-e to induce RSI, it's surely not going to hurt you to type: def T(a): return a.transpose() etc, somewhere at the top of your module. > > +1 on a .T shortcut for arrays. > +1 on a .H shortcut for arrays, too. (Instead of .conj().transpose()) -1. These are really only meaningul for 2D arrays. Without the axis keyword they generally do the wrong things for arrays of other dimensionality (there I usually, but not always, want to transpose the first two or last two axes). In addition, ndarray is already overstuffed with methods and attributes, let's not add any more without careful consideration. > I'm not wild about the .I shortcut. I prefer to see big expensive > operations like a matrix inverse to stand out a little more when I'm > looking at the code. 
And I hardly ever need an inverse anyway > (usually an lu_factor or SVD or something like that will do what I > need more efficiently and robustly than directly taking an inverse). Agreed. In addition, inverse is another thing that's specific to 2D arrays. > I just finished writing my first few hundred lines of code using array > instead of matrix. It went fine. Here are my impressions: it was > nice not having to worry so much about whether my vectors were row > vectors or column vectors all the time, or whether this or that was > matrix type or not. It felt much less like I was fighting with numpy > than when I was using matrix. I also ported a little bit of matlab > code that was full of apostrophes (matlab's transpose operator). With > numpy arrays, all but one of the transposes went away. I realized > also that most of what I end up doing is wrangling data to get it in > the right shape and form so that I can to do just a few lines of > linear algebra on it, similar to what you observe, Rob, so it makes > little sense to have everything be matrices all the time. > > So looks like I'm joining the camp of "matrix -- not worth the bother" > folks. The .T shortcut for arrays would still be nice though. > > And some briefer way to make a new axis than 'numpy.newaxis' would be > nice too (I'm trying to keep my global namespace clean these days). > --- Whoa I just noticed that a[None,:] has the same effect as > 'newaxis'. Is that a blessed way to do things? numpy.newaxis *is* None. I don't think that I'd spell it that way since None doesn't really have the same conotations as newaxis, so I think I'll stick with np.newaxis, which is how I spell these things. For an even weirder example, which only exists in a parallel universe, I thought about proposing alls as another alias for None. This was so that, for instance, one could do: sum(a, axes=all) instead of sum(a, axes=None). That's clearer, but the cognitive disonance of 'all=None' scared me away. 
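Tim's aside that numpy.newaxis *is* None can be checked directly; a minimal sketch (any reasonably current numpy should behave this way):

```python
import numpy as np

# newaxis is literally the None object, so a[None, :] and
# a[np.newaxis, :] are the exact same indexing expression
assert np.newaxis is None

a = np.array([1.0, 2.0, 3.0])   # shape (3,)
row = a[np.newaxis, :]          # shape (1, 3)
col = a[:, None]                # shape (3, 1)
print(row.shape, col.shape)
```

So `a[None, :]` is indeed "blessed" in the sense that it cannot ever mean anything else, though spelling it `np.newaxis` usually reads better.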
-tim From st at sigmasquared.net Fri May 12 15:03:13 2006 From: st at sigmasquared.net (Stephan Tolksdorf) Date: Fri May 12 15:03:13 2006 Subject: [Numpy-discussion] Some numpy.distutils question Message-ID: <44650286.9060209@sigmasquared.net> Hi all Could maybe someone explain which combinations of Blas, Atlas and Lapack Numpy should support in principle? For example, atlas_info in system_info.py takes a special code path and defines a macro when 'lapack' is not among the found libraries but a lib with the hard-coded name 'lapack_atlas' is. This looks like a Numpy with ATLAS-only option, but it is not documented. It looks like the numpy.distutils mingw32compiler class is copy of the one in the python distutils - but it seems to be out of date. Shouldn't we now make use of the Python one, or resync numpy's code? What should be done with the suppressed compiler warnings? Should there be some config switch to activate them, should they always be shown? Ciao, Stephan From charlesr.harris at gmail.com Fri May 12 17:47:30 2006 From: charlesr.harris at gmail.com (Charles R Harris) Date: Fri May 12 17:47:30 2006 Subject: [Numpy-discussion] Creating small arrays from strings and concatenating with empty arrays In-Reply-To: References: <44636533.3020201@noaa.gov> Message-ID: On 5/11/06, Bill Baxter wrote: > > Ahh, I hadn't noticed the fromstring/fromfile methods. > > Hmm. That seems ok for making a row-at-a-time, but it doesn't support the > full syntax of the matrix string constructor, which allows for things like > > >>> numpy.matrix("[1 2; 2 3;3 4]") > matrix([[1, 2], > [2, 3], > [3, 4]]) > You can reshape the array returned by fromstring, i.e., In [6]: fromstring("1 2 2 3 3 4", sep=" ").reshape(-1,2) Out[6]: array([[1, 2], [2, 3], [3, 4]]) Chuck -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From Chris.Barker at noaa.gov Fri May 12 21:45:08 2006 From: Chris.Barker at noaa.gov (Christopher Barker) Date: Fri May 12 21:45:08 2006 Subject: [Numpy-discussion] Creating small arrays from strings and concatenating with empty arrays In-Reply-To: References: <44636533.3020201@noaa.gov> Message-ID: <4464B6D2.2040701@noaa.gov> Bill Baxter wrote: > it doesn't support the > full syntax of the matrix string constructor, which allows for things like > >>>> numpy.matrix("[1 2; 2 3;3 4]") > matrix([[1, 2], > [2, 3], > [3, 4]]) > > I think an array version of the matrix string constructor that returns the > latter would be handy. > But it's admittedly a pretty minor thing. I agree it's pretty minor indeed, but as long as the code is in the matrix object, why not in the array object as well? As I think about it, I can see two reasons: 1) arrays are n-d. after commas and semi-colons, how do you construct a higher-than-rank-two array? 2) is it really that much harder to type the parentheses?: I suppose there is a bit of inefficiency in creating all those tuples, just to have them dumped, but I can't imagine that ever really matters. By the way, you can do: >>> a = numpy.fromstring("1 2; 2 3; 3 4", sep=" ").reshape((-1,2)) >>> a array([[1, 2], [2, 3], [3, 4]]) Which, admittedly, is kind of clunky, and, in fact, the ";" is being ignored, but you can put it there to remind yourself what you meant. A note about fromstring/fromfile: I sometimes might have a mix of separators, like the above example. It would be nice I I could pass in more than one that would get used. the above example will only work if there is a space in addition to the semi-colon. It would be nice to be able to do: a = numpy.fromstring("1 2;2 3;3 4", sep=" ;") or a = numpy.fromstring("1,2;2,3;3,4", sep=",;") and have that work. Travis, I believe you said that this code was inspired by my Scanfile code I posted on this list a while back. 
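A sketch of the multi-separator reader Chris is asking for — `fromtext` is a hypothetical name, not an existing numpy function; it simply treats any run of whitespace or separator characters as one delimiter before handing the tokens to numpy (the `fromstring(..., sep=" ")` idiom itself was later deprecated in favor of dedicated text readers):

```python
import re
import numpy as np

def fromtext(s, seps=",;", dtype=float):
    # hypothetical helper: split on any run of whitespace or of the
    # characters in `seps`, scanf-style, ignoring what lies between numbers
    pattern = r"[\s" + re.escape(seps) + r"]+"
    tokens = [tok for tok in re.split(pattern, s) if tok]
    return np.array(tokens, dtype=dtype)

a = fromtext("1,2;2,3;3,4").reshape(-1, 2)
print(a)
```

With this approach `"1 2;2 3;3 4"`, `"1,2;2,3;3,4"`, and `"1 2 2 3 3 4"` all parse to the same six numbers.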
In that code, I allowed any character that ?scanf didn't interpret as a number be used as a separator: if you asked for the next ten numbers in the file, you'd get the next ten numbers, regardless of what was in between them. While that seems kind of ripe for masking errors, I find that I need to know what the file format I'm working with looks like anyway, and while this approach might mask an error when you read the data, it'll show up soon enough later on, and it sure does make it easy to use and code. Maybe a special string for sep could give us this behavior, like "*" or something. I'm also not sure it's the best idea to put this functionality into fromstring, rather than a separate function, perhaps fromtext()? (or scantext(), or ? ) That's not a big deal, but it just seems like it's a bit hidden there, and scanning a string is a very different operation that interpreting that string as binary data. -Chris -- Christopher Barker, Ph.D. Oceanographer NOAA/OR&R/HAZMAT (206) 526-6959 voice 7600 Sand Point Way NE (206) 526-6329 fax Seattle, WA 98115 (206) 526-6317 main reception Chris.Barker at noaa.gov From david.huard at gmail.com Fri May 12 22:09:53 2006 From: david.huard at gmail.com (David Huard) Date: Fri May 12 22:09:53 2006 Subject: [Numpy-discussion] Weird slicing behavior Message-ID: <91cf711d0605120950x23ac031fv88c10fd28f868172@mail.gmail.com> Hi, I noticed that slicing outside bounds with lists returns the whole list : >>> a = range(5) >>> a[-7:] [0, 1, 2, 3, 4] while in numpy, array has a weird behavior >>> a = numpy.arange(5) >>> a[-7:] array([3, 4]) # Ok, it takes modulo(index, length) but >>> a[-11:] array([0, 1, 2, 3, 4]) # ??? Cheers, David -------------- next part -------------- An HTML attachment was scrubbed... URL: From mateusz at loskot.net Sat May 13 06:49:20 2006 From: mateusz at loskot.net (=?UTF-8?B?TWF0ZXVzeiDFgW9za290?=) Date: Sat May 13 06:49:20 2006 Subject: [Numpy-discussion] NumPy, Numeric or numarray or all of them? 
In-Reply-To: <44635ABE.5030200@ieee.org> References: <4463220B.20909@loskot.net> <44635ABE.5030200@ieee.org> Message-ID: <4465E108.9050703@loskot.net> Travis Oliphant wrote: > Mateusz Łoskot wrote: > >> >> I'm a developer contributing to GDAL project (http://www.gdal.org). >> GDAL is a core GeoSpatial Library that has had Python bindings >> for a while. Recently, we did get some reports from users that GDAL >> bindings do not work with NumPy package. > > > Most packages can be "ported" simply by replacing include > "Numeric/arrayobject.h" with include "numpy/arrayobject.h" and making > sure the include files are retrieved from the right place. NumPy was > designed to make porting from Numeric a breeze. Yes, porting to NumPy seems to be straightforward. >> This situation brings some questions we'd like to ask NumPy Dev >> Team: >> >> Is it fair to say we are unlikely to see Numeric releases for new >> Pythons in the future? >> > > Yes, that's fair. Nobody is maintaining Numeric. I understand. >> Can we consider NumPy as the only package in future? >> > > Yes. That's where development is currently active. Clear. So, I'm now working on moving GDAL's Python bindings to NumPy. >> What's the recommended way to go now? >> > > > I've ported tens of packages to NumPy from Numeric and have had very > little trouble. It is not difficult. Most of the time, simply > replacing the *.h file with the one from numpy works fine. It might > be a bit trickier to get your headers from the right place. The > directory is returned by > > import numpy.distutils > numpy.distutils.misc_util.get_numpy_include_dirs() > > Give it a try, it's not very difficult. Thank you very much for valuable tips. I've just started to port it. I'll give some note about results.
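Travis's include-directory recipe can be checked from Python; note that later numpy releases exposed the same directory more directly as `numpy.get_include()` (the `numpy.distutils` spelling was eventually deprecated), so this sketch uses that form:

```python
import os
import numpy as np

# the directory that must be on the C compiler's include path so that
#   #include "numpy/arrayobject.h"
# resolves when building an extension such as the GDAL bindings
inc = np.get_include()
print(inc)
assert os.path.isdir(os.path.join(inc, "numpy"))
```

The header swap itself is then just `"Numeric/arrayobject.h"` → `"numpy/arrayobject.h"` in the extension's C source, as Travis describes.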
Cheers -- Mateusz Łoskot http://mateusz.loskot.net From mateusz at loskot.net Sat May 13 07:21:55 2006 From: mateusz at loskot.net (=?UTF-8?B?TWF0ZXVzeiDFgW9za290?=) Date: Sat May 13 07:21:55 2006 Subject: [Numpy-discussion] NumPy, Numeric or numarray or all of them? In-Reply-To: <44636A10.1030909@noaa.gov> References: <4463220B.20909@loskot.net> <44636A10.1030909@noaa.gov> Message-ID: <4465E839.2000604@loskot.net> Christopher Barker wrote: > Mateusz Łoskot wrote: > >> I'm a developer contributing to GDAL project (http://www.gdal.org). >> GDAL is a core GeoSpatial Library that has had Python bindings >> for a while. Recently, we did get some reports from users that GDAL >> bindings do not work with NumPy package. > > > Speaking as a long time numpy (Numeric, etc.) user, and a new user of > GDAL, I had no idea GDAL worked with num* at all! at least not directly. Yes, it is :-) > In fact, I thought I was going to have to write that code myself. Where > do I find docs for this? I'm sure I've just missed something, but I'm > finding the docs a bit sparse. Do you mean docs for Python bindings of GDAL? AFAIK the only docs for Python-oriented users is the "GDAL API Tutorial" (http://www.gdal.org/gdal_tutorial.html). Although I'm not sure whether there is a manual reference for the gdal.py module, so I'll take up this subject soon. In the meantime, I'd suggest using the epydoc tool to generate some manual - it won't be a complete reference but it can be usable. > On the other hand, I am also finding GDAL to be an excellent library, > and have so far gotten it to do what I need it to do. So kudos! Thanks! >> Can we consider NumPy as the only package in future? >> Simply, we are wondering which Python library we should develop for >> NumPy, Numeric or numarray to be most generally useful. > > > Speaking as a very interested observer, but not a developer of any of > the num* packages: > > numpy is the way of the future. As an observer, I think that's pretty > clear.
On the other hand, it is still beta software, so dropping Numeric > just yet may not be appropriate. If you don't already support numarray, > there is no reason to do so now. > > It will do the python numerical community a world of good for all of us > to get back to a single array package. > > Note also that we hope some day to get a simple n-d array object into > the python standard library. When that happens, that object is all but > guaranteed to be compatible with numpy. There are two separate bindings to Python in GDAL: - native, called traditional - SWIG based, called New Generation Python So, we've decided to port the latter to NumPy. Thank you -- Mateusz Łoskot http://mateusz.loskot.net From amcmorl at gmail.com Sat May 13 20:03:59 2006 From: amcmorl at gmail.com (Angus McMorland) Date: Sat May 13 20:03:59 2006 Subject: [Numpy-discussion] Dot product threading? Message-ID: <44669A5B.8040700@gmail.com> Is there a way to specify which dimensions I want dot to work over? For example, if I have two arrays: In [78]:ma = array([4,5,6]) # shape = (1,3) In [79]:mb = ma.transpose() # shape = (3,1) In [80]:dot(mb,ma) Out[80]: array([[16, 20, 24], [20, 25, 30], [24, 30, 36]]) No problems there. Now I want to do that multiple times, threading over the first dimension of larger arrays: In [85]:mc = array([[[4,5,6]],[[7,8,9]]]) # shape = (2, 1, 3) In [86]:md = array([[[4],[5],[6]],[[7],[8],[9]]]) #shape = (2, 3, 1) and I want to calculate the two (3, 1) x (1, 3) dot products to get a shape = (2, 3, 3) result, so that res[i,...] == dot(md[i,...], mc[i,...]) >From my example above res[0,...] would be the same as dot(mb,ma) and res[1,...] would be In [108]:dot(md[1,...],mc[1,...]) Out[108]: array([[49, 56, 63], [56, 64, 72], [63, 72, 81]]) I could do it by explicitly looping over the first dimension (as suggested by my generic example), but it seems like there should be a better way by specifying to the dot function the dimensions over which it should be 'dotting'.
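The stacked product Angus describes can in fact be written without an explicit Python loop. The 2006-era answer was `swapaxes` plus a loop, but later numpy grew `matmul` (the `@` operator) and `einsum`, both of which broadcast over the leading dimensions — a sketch assuming one of those later versions:

```python
import numpy as np

mc = np.array([[[4, 5, 6]], [[7, 8, 9]]])   # shape (2, 1, 3)
md = mc.swapaxes(1, 2)                      # shape (2, 3, 1)

# matmul treats the last two axes as matrices and broadcasts the rest,
# so this stacks the two (3, 1) x (1, 3) products into shape (2, 3, 3)
res = md @ mc
assert res.shape == (2, 3, 3)

# einsum spells out the same contraction explicitly: for each n,
# res[n, i, k] = sum_j md[n, i, j] * mc[n, j, k]
res2 = np.einsum("nij,njk->nik", md, mc)
assert (res == res2).all()
```

Here `res[0]` and `res[1]` match the two `dot()` results quoted in the question.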
Cheers, Angus. -- Angus McMorland email a.mcmorland at auckland.ac.nz mobile +64-21-155-4906 PhD Student, Neurophysiology / Multiphoton & Confocal Imaging Physiology, University of Auckland phone +64-9-3737-599 x89707 Armourer, Auckland University Fencing Secretary, Fencing North Inc. From robert.kern at gmail.com Sat May 13 20:49:03 2006 From: robert.kern at gmail.com (Robert Kern) Date: Sat May 13 20:49:03 2006 Subject: [Numpy-discussion] Re: Dot product threading? In-Reply-To: <44669A5B.8040700@gmail.com> References: <44669A5B.8040700@gmail.com> Message-ID: Angus McMorland wrote: > Is there a way to specify which dimensions I want dot to work over? Use swapaxes() on the arrays to put the desired axes in the right places. In [2]: numpy.swapaxes? Type: function Base Class: String Form: Namespace: Interactive File: /Library/Frameworks/Python.framework/Versions/2.4/lib/python2.4/site-packages/numpy-0.9.7.2476-py2.4-macosx-10.4-ppc.egg/numpy/core/oldnumeric.py Definition: numpy.swapaxes(a, axis1, axis2) Docstring: swapaxes(a, axis1, axis2) returns array a with axis1 and axis2 interchanged. In [3]: numpy.dot? Type: builtin_function_or_method Base Class: String Form: Namespace: Interactive Docstring: matrixproduct(a,b) Returns the dot product of a and b for arrays of floating point types. Like the generic numpy equivalent the product sum is over the last dimension of a and the second-to-last dimension of b. NB: The first argument is not conjugated. -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." 
-- Umberto Eco From stephen.walton at csun.edu Sun May 14 18:20:47 2006 From: stephen.walton at csun.edu (Stephen Walton) Date: Sun May 14 18:20:47 2006 Subject: [Numpy-discussion] downsample vector with averaging In-Reply-To: <87vesg6i94.fsf@peds-pc311.bsd.uchicago.edu> References: <87vesg6i94.fsf@peds-pc311.bsd.uchicago.edu> Message-ID: <4467D47B.30907@csun.edu> John Hunter wrote: > >An open source version (GPL) for octave by Paul Kienzle, who is one of >the authors of the matplotlib quadmesh functionality and apparently a >python convertee, is here > > http://users.powernet.co.uk/kienzle/octave/matcompat/scripts/signal/decimate.m > >and it looks like someone has already translated this to python using >scipy.signal > > http://www.bigbold.com/snippets/posts/show/1209 > >Some variant of this would be a nice addition to scipy. > > > I agree, and I just found by experiment, putting the code from the second link into scipy was pretty easy (I put it into signaltools.py). But the Octave code is GPL and the Python translation is pretty much a straight copy of it. Plus I don't know how to get hold of the Python translator. So, how to proceed from here? From simon at arrowtheory.com Mon May 15 00:21:37 2006 From: simon at arrowtheory.com (Simon Burton) Date: Mon May 15 00:21:37 2006 Subject: [Numpy-discussion] numpy.float64(numpy.NaN)!=numpy.NaN Message-ID: <20060515170846.609be180.simon@arrowtheory.com> >>> numpy.float64(numpy.NaN)==numpy.NaN False Hmm. Bug or feature ? >>> numpy.__version__ 0.9.7.2502 Simon. -- Simon Burton, B.Sc. Licensed PO Box 8066 ANU Canberra 2601 Australia Ph. 
61 02 6249 6940 http://arrowtheory.com From a.u.r.e.l.i.a.n at gmx.net Mon May 15 00:51:20 2006 From: a.u.r.e.l.i.a.n at gmx.net (Johannes Loehnert) Date: Mon May 15 00:51:20 2006 Subject: [Numpy-discussion] numpy.float64(numpy.NaN)!=numpy.NaN In-Reply-To: <20060515170846.609be180.simon@arrowtheory.com> References: <20060515170846.609be180.simon@arrowtheory.com> Message-ID: <200605150936.32199.a.u.r.e.l.i.a.n@gmx.net> Hi, > >>> numpy.float64(numpy.NaN)==numpy.NaN > > False > According to the standards, two NaNs should never be equal (since nan represents an 'unknown' value). So the 'real' bug is this: ------------------------ In [1]: import numpy as N In [2]: N.nan == N.nan # wrong result! Out[2]: True In [3]: N.array([N.nan]) == N.array([N.nan]) # correct result Out[3]: array([False], dtype=bool) In [4]: N.__version__ Out[4]: '0.9.7.2484' ------------------------ Johannes From fullung at gmail.com Mon May 15 01:11:40 2006 From: fullung at gmail.com (Albert Strasheim) Date: Mon May 15 01:11:40 2006 Subject: [Numpy-discussion] numpy.float64(numpy.NaN)!=numpy.NaN In-Reply-To: <20060515170846.609be180.simon@arrowtheory.com> Message-ID: <004401c677f4$f17c5260$0502010a@dsp.sun.ac.za> Hey Simon I think this is a feature. NaN is never equal to anything else, including itself. MATLAB does the same thing: >> nan==nan ans = 0 >> inf==inf ans = 1 Regards, Albert > -----Original Message----- > From: numpy-discussion-admin at lists.sourceforge.net [mailto:numpy- > discussion-admin at lists.sourceforge.net] On Behalf Of Simon Burton > Sent: 15 May 2006 09:09 > To: numpy-discussion at lists.sourceforge.net > Subject: [Numpy-discussion] numpy.float64(numpy.NaN)!=numpy.NaN > > > >>> numpy.float64(numpy.NaN)==numpy.NaN > False > > Hmm. Bug or feature ? > > >>> numpy.__version__ > 0.9.7.2502 > > Simon. 
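The IEEE 754 rule the thread converges on — a NaN compares unequal to everything, itself included — is easy to demonstrate, along with the `isnan` test that should be used instead of `==`:

```python
import math
import numpy as np

# NaN != NaN, per IEEE 754, for numpy scalars and plain Python floats alike
assert not (np.float64(np.nan) == np.nan)
assert float("nan") != float("nan")

# the right way to detect NaN is isnan, never equality
assert np.isnan(np.nan)
assert math.isnan(float("nan"))

# elementwise comparison on arrays follows the same rule
a = np.array([np.nan])
assert not (a == a).any()
```

This also explains why `a != a` is a serviceable NaN mask on arrays: it is True exactly at the NaN positions.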
From simon at arrowtheory.com Mon May 15 01:46:26 2006 From: simon at arrowtheory.com (Simon Burton) Date: Mon May 15 01:46:26 2006 Subject: [Numpy-discussion] numpy.float64(numpy.NaN)!=numpy.NaN In-Reply-To: <20060515170846.609be180.simon@arrowtheory.com> References: <20060515170846.609be180.simon@arrowtheory.com> Message-ID: <20060515181956.6330ea7b.simon@arrowtheory.com> On Mon, 15 May 2006 17:08:46 +1000 Simon Burton wrote: > > >>> numpy.float64(numpy.NaN)==numpy.NaN > False > > Hmm. Bug or feature ? I don't know, but I just found the isnan function. Along with some other curious functions: ['isnan', 'nan', 'nan_to_num', 'nanargmax', 'nanargmin', 'nanmax', 'nanmin', 'nansum'] :) Simon. -- Simon Burton, B.Sc. Licensed PO Box 8066 ANU Canberra 2601 Australia Ph. 61 02 6249 6940 http://arrowtheory.com From ndarray at mac.com Mon May 15 05:52:01 2006 From: ndarray at mac.com (Sasha) Date: Mon May 15 05:52:01 2006 Subject: [Numpy-discussion] numpy.float64(numpy.NaN)!=numpy.NaN In-Reply-To: <20060515170846.609be180.simon@arrowtheory.com> References: <20060515170846.609be180.simon@arrowtheory.com> Message-ID: It's a feature IEEE standard requires that NaNs are not equal to any floating point numbers including other NaNs. On 5/15/06, Simon Burton wrote: > > >>> numpy.float64(numpy.NaN)==numpy.NaN > False > > Hmm. Bug or feature ? > > >>> numpy.__version__ > 0.9.7.2502 > > Simon. > > -- > Simon Burton, B.Sc. > Licensed PO Box 8066 > ANU Canberra 2601 > Australia > Ph. 61 02 6249 6940 > http://arrowtheory.com > > > ------------------------------------------------------- > Using Tomcat but need to do more? Need to support web services, security? 
> Get stuff done quickly with pre-integrated technology to make your job easier > Download IBM WebSphere Application Server v.1.0.1 based on Apache Geronimo > http://sel.as-us.falkag.net/sel?cmd=lnk&kid=120709&bid=263057&dat=121642 > _______________________________________________ > Numpy-discussion mailing list > Numpy-discussion at lists.sourceforge.net > https://lists.sourceforge.net/lists/listinfo/numpy-discussion > From ndarray at mac.com Mon May 15 06:38:01 2006 From: ndarray at mac.com (Sasha) Date: Mon May 15 06:38:01 2006 Subject: [Numpy-discussion] numpy.float64(numpy.NaN)!=numpy.NaN In-Reply-To: References: <20060515170846.609be180.simon@arrowtheory.com> Message-ID: BTW, >>> from numpy import * >>> nan == nan False >>> from decimal import * >>> Decimal('NaN') == Decimal('NaN') False and finally (may not work on all platforms): >>> float('NaN') == float('NaN') False On 5/15/06, Sasha wrote: > It's a feature IEEE standard requires that NaNs are not equal to any > floating point numbers including other NaNs. > > On 5/15/06, Simon Burton wrote: > > > > >>> numpy.float64(numpy.NaN)==numpy.NaN > > False > > > > Hmm. Bug or feature ? > > > > >>> numpy.__version__ > > 0.9.7.2502 > > > > Simon. > > > > -- > > Simon Burton, B.Sc. > > Licensed PO Box 8066 > > ANU Canberra 2601 > > Australia > > Ph. 61 02 6249 6940 > > http://arrowtheory.com > > > > > > ------------------------------------------------------- > > Using Tomcat but need to do more? Need to support web services, security? 
> > Get stuff done quickly with pre-integrated technology to make your job easier > > Download IBM WebSphere Application Server v.1.0.1 based on Apache Geronimo > > http://sel.as-us.falkag.net/sel?cmd=lnk&kid=120709&bid=263057&dat=121642 > > _______________________________________________ > > Numpy-discussion mailing list > > Numpy-discussion at lists.sourceforge.net > > https://lists.sourceforge.net/lists/listinfo/numpy-discussion > > > From simon at arrowtheory.com Mon May 15 06:57:28 2006 From: simon at arrowtheory.com (Simon Burton) Date: Mon May 15 06:57:28 2006 Subject: [Numpy-discussion] numpy.float64(numpy.NaN)!=numpy.NaN In-Reply-To: References: <20060515170846.609be180.simon@arrowtheory.com> Message-ID: <20060515232824.302f0aab.simon@arrowtheory.com> On Mon, 15 May 2006 09:21:59 -0400 Sasha wrote: > > BTW, > > >>> from numpy import * > >>> nan == nan > False > > >>> from decimal import * > >>> Decimal('NaN') == Decimal('NaN') > False > > and finally (may not work on all platforms): > > >>> float('NaN') == float('NaN') > False Yes. It looks like the "isnan" function is what I need. I use NaN for missing data points, and I do masking on NaN, etc. Quite handy really. Simon. -- Simon Burton, B.Sc. Licensed PO Box 8066 ANU Canberra 2601 Australia Ph. 61 02 6249 6940 http://arrowtheory.com From st at sigmasquared.net Mon May 15 07:15:32 2006 From: st at sigmasquared.net (Stephan Tolksdorf) Date: Mon May 15 07:15:32 2006 Subject: [Numpy-discussion] Building on Windows In-Reply-To: <44625E03.9080105@ieee.org> References: <44625BE2.4010708@sigmasquared.net> <44625E03.9080105@ieee.org> Message-ID: <4468895A.8060501@sigmasquared.net> > So, go ahead and let us know what problems you are having. You are > correct that my main build platform is not Windows, but I think several > other people do use Windows regularly and we definitely want to support it. I attached a patch to ticket #114 that fixes various build issues. 
I've also updated http://www.scipy.org/Installing_SciPy/Windows with a new tutorial on how to build Numpy/Scipy on Windows. Any comments or suggestions are very welcome. Should we make ATLAS-optimized Windows binaries of Numpy available? Maybe starting with 0.9.8? I could provide (32bit) Athlon64 binaries. Stephan From karol.langner at kn.pl Mon May 15 13:51:47 2006 From: karol.langner at kn.pl (Karol Langner) Date: Mon May 15 13:51:47 2006 Subject: [Numpy-discussion] basearray / arraykit In-Reply-To: <44636613.7050201@ieee.org> References: <445F8F71.4070801@cox.net> <446362CC.6000004@noaa.gov> <44636613.7050201@ieee.org> Message-ID: <200605152237.51096.karol.langner@kn.pl> On Thursday 11 May 2006 18:28, Travis Oliphant wrote: > Christopher Barker wrote: > > Travis Oliphant wrote: > >> 1) Implement a base-array with no getitem method nor setitem method > >> at all > >> > >> 2) Implement a sub-class that supports only creation of data-types > >> corresponding to existing Python scalars (Boolean, Long-based > >> integers, Double-based floats, complex and object types). Then, all > >> array accesses should return the underlying Python objects. > >> This sub-class should also only do view-based indexing (basically > >> it's old Numeric behavior inside of NumPy). > >> > >> Item 1) should be pushed for inclusion in 2.6 and possibly even > >> something like 2) > > > > + sys.maxint > > > > Having even this very basic n-d object in the standard lib would be a > > MAJOR boon to python. > > I totally agree. I've been advertising this for at least 8 months, but > nobody is really willing to work on it (or fund it). At least we have > a summer student who is going to try and get Google summer-of-code money > for it. If you have any ability to bump up the ratings of summer of > code applications. Please consider bumping up his application. I am this student Travis mentioned. Right now I am intently looking through past archives and slowly digging into the basearray stuff. 
If I get the SoC money, I officially start working on this May 23rd, in close contact with eveyone connected with numpy I hope. If I don't get it, I may still go ahead, but not so eagerly for sure, as then I'll need to take up a summer job. Those interested that don't have inside access to SoC ;) can find my application here: http://www.mml.ch.pwr.wroc.pl/langner/SoC-NDarray.txt Karol -- written by Karol Langner pon maj 15 22:29:43 CEST 2006 From wbaxter at gmail.com Mon May 15 18:10:22 2006 From: wbaxter at gmail.com (Bill Baxter) Date: Mon May 15 18:10:22 2006 Subject: [Numpy-discussion] Creating small arrays from strings and concatenating with empty arrays In-Reply-To: References: <44636533.3020201@noaa.gov> Message-ID: On 5/13/06, Charles R Harris wrote: > > > > On 5/11/06, Bill Baxter wrote: > > > > Ahh, I hadn't noticed the fromstring/fromfile methods. > > > > Hmm. That seems ok for making a row-at-a-time, but it doesn't support > > the full syntax of the matrix string constructor, which allows for things > > like > > > > >>> numpy.matrix("[1 2; 2 3;3 4]") > > matrix([[1, 2], > > [2, 3], > > [3, 4]]) > > > > You can reshape the array returned by fromstring, i.e., > > In [6]: fromstring("1 2 2 3 3 4", sep=" ").reshape(-1,2) > Out[6]: > array([[1, 2], > [2, 3], > [3, 4]]) > But if the string comes from someplace other than a literal right there in the code (like loaded from a file or passed in as an argument or something), I may not know the shape in advance. I'll just stick with the matrix constructor, since for my case, I do know the array dim is 2. --bill -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From simon at arrowtheory.com Mon May 15 20:55:11 2006 From: simon at arrowtheory.com (Simon Burton) Date: Mon May 15 20:55:11 2006 Subject: [Numpy-discussion] nonzero() behaviour has changed Message-ID: <20060516134106.551fe813.simon@arrowtheory.com> Sometime between 0.9.5 and 0.9.7.2502 the behaviour of nonzero changed: >>> a=numpy.array([1,1,0,1,1]) >>> a.nonzero() array([0, 1, 3, 4]) >>> now it returns a tuple: >>> a=numpy.array((0,1,1,0,0)) >>> a.nonzero() (array([1, 2]),) This is rather unpleasant for me. For example, my colleague uses OSX and finds only numpy 0.9.5 under darwinports. :( Simon. -- Simon Burton, B.Sc. Licensed PO Box 8066 ANU Canberra 2601 Australia Ph. 61 02 6249 6940 http://arrowtheory.com From oliphant.travis at ieee.org Mon May 15 21:28:32 2006 From: oliphant.travis at ieee.org (Travis Oliphant) Date: Mon May 15 21:28:32 2006 Subject: [Numpy-discussion] nonzero() behaviour has changed In-Reply-To: <20060516134106.551fe813.simon@arrowtheory.com> References: <20060516134106.551fe813.simon@arrowtheory.com> Message-ID: <44695185.9090700@ieee.org> Simon Burton wrote: > Sometime between 0.9.5 and 0.9.7.2502 the behaviour of nonzero changed: > > >>>> a=numpy.array([1,1,0,1,1]) >>>> a.nonzero() >>>> > array([0, 1, 3, 4]) > > > now it returns a tuple: > > >>>> a=numpy.array((0,1,1,0,0)) >>>> a.nonzero() >>>> > (array([1, 2]),) > > This is rather unpleasant for me. For example, my colleague > uses OSX and finds only numpy 0.9.5 under darwinports. > > Chris Fonnesbeck releases quite up-to-date binary releases of NumPy and SciPy for Mac OSX. An alternative solution is to use the functional form: nonzero(a) works the same as a.nonzero() did before for backwards compatibility. The behavior of a.nonzero() was changed for compatibility with numarray.
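The change Travis describes — `a.nonzero()` returning a tuple of index arrays, one per dimension, numarray-style — can be sketched as follows; `flatnonzero` later became the idiomatic way to recover the old single-array answer (note that in later numpy the functional `numpy.nonzero` returns the tuple form too):

```python
import numpy as np

a = np.array([1, 1, 0, 1, 1])

# the method returns a tuple with one index array per dimension
idx = a.nonzero()
assert isinstance(idx, tuple) and len(idx) == 1
assert idx[0].tolist() == [0, 1, 3, 4]

# the tuple form exists so that indexing round-trips in any dimension:
# a[a.nonzero()] picks out exactly the nonzero elements
assert a[idx].tolist() == [1, 1, 1, 1]

# flatnonzero gives the old flat-array behaviour directly
assert np.flatnonzero(a).tolist() == [0, 1, 3, 4]
```

For a 2-D array the same tuple carries a row-index array and a column-index array, which is what makes `a[a.nonzero()]` work uniformly.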
-Travis From simon at arrowtheory.com Mon May 15 22:46:20 2006 From: simon at arrowtheory.com (Simon Burton) Date: Mon May 15 22:46:20 2006 Subject: [Numpy-discussion] nonzero() behaviour has changed In-Reply-To: <44695185.9090700@ieee.org> References: <20060516134106.551fe813.simon@arrowtheory.com> <44695185.9090700@ieee.org> Message-ID: <20060516153639.23ac52fa.simon@arrowtheory.com> On Mon, 15 May 2006 22:13:57 -0600 Travis Oliphant wrote: > > Chris Fonnesbeck releases quite up-to-date binary releases of NumPy and > SciPy for Mac OSX. Yes: http://homepage.mac.com/fonnesbeck/mac/index.html > > An alternative solution is to use the functional form: > > nonzero(a) works the same as a.nonzero() did before for backwards > compatibility. > > The behavior of a.nonzero() was changed for compatibility with numarray. Is this a general strategy employed by numpy? I.e. functions have old semantics, methods have numarray semantics? Scary. Simon. -- Simon Burton, B.Sc. Licensed PO Box 8066 ANU Canberra 2601 Australia Ph. 61 02 6249 6940 http://arrowtheory.com From cadot at science-and-technology.nl Tue May 16 05:58:07 2006 From: cadot at science-and-technology.nl (Sidney Cadot) Date: Tue May 16 05:58:07 2006 Subject: [Numpy-discussion] Argument of complex number array? Message-ID: <1BDFF588-F29A-443C-84D7-D8773E5C4779@science-and-technology.nl> Hi all, I am looking for a function to calculate the argument (i.e., the 'phase') from a numarray containing complex numbers. A bit to my surprise, I don't see this listed under the ufuncs. Now of course I can do something like arg = arctan2(z.imag, z.real) ... But I would have hoped for a direct function that does this. Any thoughts? (Perhaps I am missing something?) Cheerio, Sidney From pau.gargallo at gmail.com Tue May 16 06:22:44 2006 From: pau.gargallo at gmail.com (Pau Gargallo) Date: Tue May 16 06:22:44 2006 Subject: [Numpy-discussion] bug in argmax (?)
Message-ID: <6ef8f3380605150721m5dd6f637ne428ef98a1a16512@mail.gmail.com> hi all, argmax gives me some errors for arrays with more than 2 dimensions: >>> import numpy >>> numpy.__version__ '0.9.7.2503' >>> x = numpy.zeros((2,3,4)) >>> x.argmax(0) Traceback (most recent call last): File "", line 1, in ? ValueError: bad axis2 argument to swapaxes >>> x.argmax(1) Traceback (most recent call last): File "", line 1, in ? ValueError: bad axis2 argument to swapaxes >>> x.argmax(2) array([[0, 0, 0], [0, 0, 0]]) does this happen to anyone else? pau From st at sigmasquared.net Tue May 16 06:41:11 2006 From: st at sigmasquared.net (Stephan Tolksdorf) Date: Tue May 16 06:41:11 2006 Subject: [Numpy-discussion] Sourceforge delaying emails to list? Message-ID: <4469D615.3090402@sigmasquared.net> Hi all, am I the only one who is sometimes (particularly today) receiving emails from the numpy-discussion list up to one day after they have been sent? Or is there a problem with sf.net? Stephan From joris at ster.kuleuven.ac.be Tue May 16 06:41:33 2006 From: joris at ster.kuleuven.ac.be (joris at ster.kuleuven.ac.be) Date: Tue May 16 06:41:33 2006 Subject: [Numpy-discussion] numpy documentation Message-ID: <1147786827.4469d64b6b0cf@webmail.ster.kuleuven.be> Hi, I've started a numpy documentation wikipage with a list of examples illustrating the use of each numpy function. Although far from complete, you can have a look at what I currently have: http://scipy.org/JorisDeRidder I would like to make the example list more visible for the numpy community, though. Any suggestions? And, needless to say, if anyone likes to contribute, please jump in!
Cheers, Joris Disclaimer: http://www.kuleuven.be/cwis/email_disclaimer.htm From pau.gargallo at gmail.com Tue May 16 06:56:22 2006 From: pau.gargallo at gmail.com (Pau Gargallo) Date: Tue May 16 06:56:22 2006 Subject: [Numpy-discussion] Bug in ndarray.argmax In-Reply-To: <44561C11.3000601@cmp.uea.ac.uk> References: <44561C11.3000601@cmp.uea.ac.uk> Message-ID: <6ef8f3380605160655p5aa35228r778b5d240f6ed760@mail.gmail.com> On 5/1/06, Pierre Barbier de Reuille wrote: > Hello, > > I noticed a bug in ndarray.argmax which prevents getting the argmax > from any axis but the last one. > I attach a patch to correct this. > Also, here is a small Python script to test the behaviour of argmax I > implemented: > > ==8<====8<====8<====8<====8<====8<====8<====8<===8<=== > > from numpy import array, random, all > > a = random.normal( 0, 1, ( 4,5,6,7,8 ) ) > for i in xrange( a.ndim ): > amax = a.max( i ) > aargmax = a.argmax( i ) > axes = range( a.ndim ) > axes.remove( i ) > assert all( amax == aargmax.choose( *a.transpose( i, *axes ) ) ) > > ==8<====8<====8<====8<====8<====8<====8<====8<===8<=== > > Pierre > > > diff numpy-0.9.6/numpy/core/src/multiarraymodule.c numpy-0.9.6.mod/numpy/core/src/multiarraymodule.c > 1952a1953,1955 > > If orign > ap->nd, then we cannot "swap it back" > > as the dimension does not exist anymore. It means > > the axis must be put back at the end of the array.
> 1956c1959,1979 > < (op) = (PyAO *)PyArray_SwapAxes((ap), axis, orign); \ > --- > > int nb_dims = (ap)->nd; \ > > if (orign > nb_dims-1 ) { \ > > PyArray_Dims dims; \ > > int i; \ > > dims.ptr = ( intp* )malloc( sizeof( intp )*nb_dims );\ > > dims.len = nb_dims; \ > > for(i = 0 ; i < axis ; ++i) \ > > { \ > > dims.ptr[i] = i; \ > > } \ > > for(i = axis ; i < nb_dims-1 ; ++i) \ > > { \ > > dims.ptr[i] = i+1; \ > > } \ > > dims.ptr[nb_dims-1] = axis; \ > > (op) = (PyAO *)PyArray_Transpose((ap), &dims ); \ > > } \ > > else \ > > { \ > > (op) = (PyAO *)PyArray_SwapAxes((ap), axis, orign); \ > > } \ > > > The bug seems to be still there in the current svn version, so I filed a ticket for this. It's the first time I do such a thing, so someone competent should _please_ take a look at it. Thanks, and sorry in advance if I did something wrong. pau From wbaxter at gmail.com Tue May 16 07:12:02 2006 From: wbaxter at gmail.com (Bill Baxter) Date: Tue May 16 07:12:02 2006 Subject: [Numpy-discussion] Who uses matrix?
In-Reply-To: <4464A3F9.3010204@cox.net> References: <44623207.2020306@noaa.gov> <1C3F540D-B712-480F-A8AF-031DDCA525A3@tamu.edu> <4464A3F9.3010204@cox.net> Message-ID: On 5/13/06, Tim Hochberg wrote: > > Bill Baxter wrote: > > > > > +1 on a .T shortcut for arrays. > > +1 on a .H shortcut for arrays, too. (Instead of .conj().transpose()) > > -1. > > These are really only meaningful for 2D arrays. Without the axis keyword > they generally do the wrong things for arrays of other dimensionality > (there I usually, but not always, want to transpose the first two or > last two axes). In addition, ndarray is already overstuffed with methods > and attributes, let's not add any more without careful consideration. What am I missing here? There's already a .transpose() method on array. From my quick little test it doesn't seem like the argument is useful: >>> a = num.rand(2,2) >>> a array([[ 0.96685836, 0.55643033], [ 0.86387107, 0.39331451]]) >>> a.transpose(1) array([ 0.96685836, 0.55643033]) >>> a array([[ 0.96685836, 0.55643033], [ 0.86387107, 0.39331451]]) >>> a.transpose(0) array([ 0.96685836, 0.86387107]) >>> a.transpose() array([[ 0.96685836, 0.86387107], [ 0.55643033, 0.39331451]]) (Python 2.4 / Windows XP) But maybe I'm using it wrong. The docstring isn't much help: --------------------------- >>> help(a.transpose) Help on built-in function transpose: transpose(...) m.transpose() --------------------------- Assuming I'm missing something about transpose(), that still doesn't rule out a shorter functional form, like a.T() or a.t(). The T(a) that you suggest is short, but it's just not the order people are used to seeing things in their math. Besides, having a global function with a name like T is just asking for trouble. And defining it in every function that uses it would get tedious. Even if the .T shortcut were added as an attribute and not a function, you could still use .transpose() for N-d arrays when the default two axes weren't the ones you wanted to swap.
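For what it's worth, the proposed shortcuts are easy to prototype as read-only properties on an ndarray subclass; the class name TArray and the subclassing approach here are purely illustrative, not part of any actual proposal:

```python
import numpy as np

class TArray(np.ndarray):
    """Illustrative ndarray subclass exposing .T and .H shortcuts."""

    @property
    def T(self):
        # plain transpose; for a 2-D array this is the usual matrix transpose
        return self.transpose()

    @property
    def H(self):
        # conjugate (Hermitian) transpose
        return self.conj().transpose()

a = np.array([[1.0, 2.0], [3.0, 4.0]]).view(TArray)
print(a.T)
z = np.array([[1j, 0.0], [2.0, -1j]]).view(TArray)
print(z.H)
```

Being properties, they keep the a.T spelling free of parentheses, which is most of the appeal over a.transpose().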
Yes, 2-D *is* a special case of array, but -- correct me if I'm wrong -- it's a very common special case. Matlab's transpose operator, for instance, just raises an error if it's used on anything other than a 1D- or 2D- array. There's some other way to shuffle axes around for N-d arrays in matlab, but I forget what. Not saying that the way Matlab does it is right, but my own experience reading and writing code of various sorts is that 2D stuff is far more common than arbitrary N-d. But for some reason it seems like most people active on this mailing list see things as just the opposite. Perhaps it's just my little niche of the world that is mostly 2D. The other thing is that I suspect you don't get so many transposes when doing arbitrary N-d array stuff so there's not as much need for a shortcut. But again, not much experience with typical N-d array usages here. > > And some briefer way to make a new axis than 'numpy.newaxis' would be > > nice too (I'm trying to keep my global namespace clean these days). > > --- Whoa I just noticed that a[None,:] has the same effect as > > 'newaxis'. Is that a blessed way to do things? > > numpy.newaxis *is* None. I don't think that I'd spell it that way since > None doesn't really have the same connotations as newaxis, so I think > I'll stick with np.newaxis, which is how I spell these things. Well, to a newbie, none of the notation used for indexing has much of any connotation. It's all just arbitrary symbols. ":" means the whole range? 2:10 means a range from 2 to 9? Sure, most of those are familiar to Python users (even '...'?) but even a heavy python user would be puzzled by something like r_[1:2:3j]. Or reshape(-1,2). The j is a convention, the -1 is a convention. NewAxis seems common enough to me that it's worth just learning None as another numpy convention. As long as "numpy.newaxis *is* None", as you say, and not "is *currently* None, but subject to change without notice",
It has the feel of being something that's a fundamental part of the language. The first few times I saw indexing with numpy.newaxis it made no sense to me. How can you index with a new axis? What's the new axis'th element of an array? "None" says to me the index we're selecting is "None" as in "None of the above", i.e. we're not taking from the elements that are there, or not using up any of our current axes on this index. Also when you see None, it's clear to a Python user that there's some trickery going on, but when you see a[NewAxis,:] (as it was the first time I saw it) it's not clear if NewAxis is some numeric value defined in the code you're reading or by Numpy or what. For whatever reason None seems more logical and symmetrical to me than numpy.newaxis. Plus it seems that documentation can't be attached to numpy.newaxis because of it being None, which I recall also confused me at first. "help(numpy.newaxis)" gives you a welcome to Python rather than information about using numpy.newaxis. Regards, --Bill -------------- next part -------------- An HTML attachment was scrubbed... URL: From skip at pobox.com Tue May 16 07:20:38 2006 From: skip at pobox.com (skip at pobox.com) Date: Tue May 16 07:20:38 2006 Subject: [Numpy-discussion] Sourceforge delaying emails to list? In-Reply-To: <4469D615.3090402@sigmasquared.net> References: <4469D615.3090402@sigmasquared.net> Message-ID: <17513.57172.453964.424726@montanaro.dyndns.org> Stephan> am I the only one who is sometimes (particularly today) Stephan> receiving emails from the numpy-discussion list up to one day Stephan> after they have been sent? Or is there a problem with sf.net? I suspect it's just the status quo. sf.net is rarely "speedy". Skip From tim.hochberg at cox.net Tue May 16 08:29:09 2006 From: tim.hochberg at cox.net (Tim Hochberg) Date: Tue May 16 08:29:09 2006 Subject: [Numpy-discussion] Who uses matrix? 
In-Reply-To: References: <44623207.2020306@noaa.gov> <1C3F540D-B712-480F-A8AF-031DDCA525A3@tamu.edu> <4464A3F9.3010204@cox.net> Message-ID: <4469EF9A.50202@cox.net> Bill Baxter wrote: > On 5/13/06, *Tim Hochberg* > wrote: > > Bill Baxter wrote: > > > > > +1 on a .T shortcut for arrays. > > +1 on a .H shortcut for arrays, too. (Instead of > .conj().transpose()) > > -1. > > These are really only meaningul for 2D arrays. Without the axis > keyword > they generally do the wrong things for arrays of other dimensionality > (there I usually, but not always, want to transpose the first two or > last two axes). In addition, ndarray is already overstuffed with > methods > and attributes, let's not add any more without careful consideration. > > > What am I missing here? There's already a .transpose() method on array. > From my quick little test it doesn't seem like the argument is useful: > >>> a = num.rand(2,2) > >>> a > array([[ 0.96685836, 0.55643033], > [ 0.86387107, 0.39331451]]) > >>> a.transpose(1) > array([ 0.96685836, 0.55643033]) > >>> a > array([[ 0.96685836, 0.55643033], > [ 0.86387107, 0.39331451]]) > >>> a.transpose(0) > array([ 0.96685836, 0.86387107]) > >>> a.transpose() > array([[ 0.96685836, 0.86387107], > [ 0.55643033 , 0.39331451]]) > > (Python 2.4 / Windows XP) > > But maybe I'm using it wrong. The docstring isn't much help: > --------------------------- > >>> help(a.transpose) > Help on built-in function transpose: > > transpose(...) > m.transpose() > --------------------------- No, the docstring does not appear to be useful. I'm not sure what's happening to your argument there, something bogus I assume; you are supposed to pass a sequence, which becomes the new axis order. For example. 
>>> import numpy >>> a = numpy.arange(6).reshape([3,2,1]) >>> a.shape (3, 2, 1) >>> a.copy().transpose().shape # By default, the axes are just reversed (1, 2, 3) >>> a.copy().transpose([0,2,1]).shape # Transpose last two axes (3, 1, 2) >>> a.copy().transpose([1,0,2]).shape # Transpose first two axes (2, 3, 1) Most of the time, when I'm using 3D arrays, I use one of the last two versions of transpose. Back to passing a single number. Looking at array_transpose, it appears that a.transpose(1) and a.transpose([1]) are equivalent. The problem is that transpose is accepting sequences of lengths other than len(a.shape). The results appear bogus: chunks of the array actually disappear as you illustrate. It looks like this can be fixed by just adding a check in PyArray_Transpose that permute->len is the same as ap->nd. I won't have time to get to this today; if possible, could you enter a ticket on this so it doesn't fall through the cracks? > > Assuming I'm missing something about transpose(), that still doesn't > rule out a shorter functional form, like a.T() or a.t(). > > The T(a) that you suggest is short, but it's just not the order people > are used to seeing things in their math. Besides, having a global > function with a name like T is just asking for trouble. And defining > it in every function that uses it would get tedious. > > Even if the .T shortcut were added as an attribute and not a function, > you could still use .transpose() for N-d arrays when the default two > axes weren't the ones you wanted to swap. Yes, 2-D *is* a special > case of array, but -- correct me if I'm wrong -- it's a very common > special case. Not in my experience. I can't claim that this is typical for everyone, but my usage is most often of 3D arrays that represent arrays of matrices. Also common is representing an array of 2x2 (or 4x4) matrices as [[X00, X01], [X10, X11]], where the various Xs are (big) 1D arrays or scalars.
This is because I often know that one of the Xs is either all ones or all zeros and by storing them as scalar values I can save space, and, by special-casing one and zero to elide subsequent operations, time. My usage of vanilla 2D arrays is relatively limited and occurs mostly where I interface with the outside world. > Matlab's transpose operator, for instance, just raises an error if > it's used on anything other than a 1D- or 2D- array. There's some > other way to shuffle axes around for N-d arrays in matlab, but I > forget what. Not saying that the way Matlab does it is right, but my > own experience reading and writing code of various sorts is that 2D > stuff is far more common than arbitrary N-d. But for some reason it > seems like most people active on this mailing list see things as just > the opposite. Perhaps it's just my little niche of the world that is > mostly 2D. The other thing is that I suspect you don't get so many > transposes when doing arbitrary N-d array stuff so there's not as much > need for a shortcut. But again, not much experience with typical N-d > array usages here. It's hard to say. People coming from Matlab etc. tend to see things in terms of 2D. Some of that may be just that they work in different problem domains than, for instance, I do. Image processing for example seems to be mostly 2D. Although in some cases you might actually want to work on stacks of images, in which case you're back to the 3D regime. Part of it may also be that once you work in an ND array language for a while you see ND solutions to some problems that previously you only saw 2D solutions for. > > > > > And some briefer way to make a new axis than 'numpy.newaxis' > would be > > nice too (I'm trying to keep my global namespace clean these days). > > --- Whoa I just noticed that a[None,:] has the same effect as > > 'newaxis'. Is that a blessed way to do things? > > numpy.newaxis *is* None.
I don't think that I'd spell it that way > since > None doesn't really have the same connotations as newaxis, so I think > I'll stick with np.newaxis, which is how I spell these things. > > > Well, to a newbie, none of the notation used for indexing has much of > any connotation. It's all just arbitrary symbols. ":" means the > whole range? 2:10 means a range from 2 to 9? Sure, most of those are > familiar to Python users (even '...'?) but even a heavy python user > would be puzzled by something like r_[1:2:3j]. Or reshape(-1,2). The > j is a convention, the -1 is a convention. NewAxis seems common > enough to me that it's worth just learning None as another numpy > convention. > > As long as "numpy.newaxis *is* None", as you say, and not "is > *currently* None, but subject to change without notice" , It's been None since the beginning, so I don't think it's likely to change. Not being in charge, I can't guarantee that, but it would likely be a huge pain to do, so I don't see it happening. > then I think I'd rather use None. It has the feel of being something > that's a fundamental part of the language. The first few times I saw > indexing with numpy.newaxis it made no sense to me. How can you index > with a new axis? What's the new axis'th element of an array? "None" > says to me the index we're selecting is "None" as in "None of the > above", i.e. we're not taking from the elements that are there, or not > using up any of our current axes on this index. Also when you see > None, it's clear to a Python user that there's some trickery going on, > but when you see a[NewAxis,:] (as it was the first time I saw it) > it's not clear if NewAxis is some numeric value defined in the code > you're reading or by Numpy or what. For whatever reason None seems > more logical and symmetrical to me than numpy.newaxis. Plus it seems > that documentation can't be attached to numpy.newaxis because of it > being None, which I recall also confused me at first.
> "help(numpy.newaxis)" gives you a welcome to Python rather than > information about using numpy.newaxis. In the abstract, I don't see any problem with that, but assuming that you're sharing code with others, everyone's likely to be happier if you spell newaxis as newaxis, just for consistency's sake. On a related note, I think newaxis is probably overused, at least by me. I just grepped my code for newaxis, and I'd say at least half the uses of newaxis would have been clearer as reshape. -tim From aisaac at american.edu Tue May 16 12:09:02 2006 From: aisaac at american.edu (Alan G Isaac) Date: Tue May 16 12:09:02 2006 Subject: [Numpy-discussion] nonzero() behaviour has changed In-Reply-To: <44695185.9090700@ieee.org> References: <20060516134106.551fe813.simon@arrowtheory.com><44695185.9090700@ieee.org> Message-ID: 1. I hope for an array 'a' that nonzero(a) and a.nonzero() will produce the same result, whatever convention is chosen. For 1d arrays only the numarray behavior can be consistent with nd arrays (which Numeric's 'nonzero' did not handle). But ... 2. How are people using this? I trust that the numarray behavior was well considered, but I would have expected coordinates to be grouped rather than spread across the arrays in the tuple. Thank you for any insight, Alan Isaac From simon at arrowtheory.com Tue May 16 17:34:10 2006 From: simon at arrowtheory.com (Simon Burton) Date: Tue May 16 17:34:10 2006 Subject: [Numpy-discussion] Sourceforge delaying emails to list? In-Reply-To: <4469D615.3090402@sigmasquared.net> References: <4469D615.3090402@sigmasquared.net> Message-ID: <20060517103347.56fd9194.simon@arrowtheory.com> On Tue, 16 May 2006 15:39:33 +0200 Stephan Tolksdorf wrote: > > Hi all, > > am I the only one who is sometimes (particularly today) receiving emails > from the numpy-discussion list up to one day after they have been sent? > Or is there a problem with sf.net? Yes, me too. It's very confusing, and rather random. Simon. -- Simon Burton, B.Sc.
Licensed PO Box 8066 ANU Canberra 2601 Australia Ph. 61 02 6249 6940 http://arrowtheory.com From simon at arrowtheory.com Tue May 16 18:18:05 2006 From: simon at arrowtheory.com (Simon Burton) Date: Tue May 16 18:18:05 2006 Subject: [Numpy-discussion] nonzero() behaviour has changed In-Reply-To: References: <20060516134106.551fe813.simon@arrowtheory.com> <44695185.9090700@ieee.org> Message-ID: <20060517111742.421bdf3b.simon@arrowtheory.com> On Tue, 16 May 2006 15:15:23 -0400 Alan G Isaac wrote: > 2. How are people using this? I trust that the numarray > behavior was well considered, but I would have expected > coordinates to be grouped rather than spread across > the arrays in the tuple. Yes, this strikes me as bizarre. How about we make a new function, e.g. argwhere, that returns an array of indices? argwhere( array( [[0,1],[0,1]] ) ) -> [[0,1],[1,1]] Simon. -- Simon Burton, B.Sc. Licensed PO Box 8066 ANU Canberra 2601 Australia Ph. 61 02 6249 6940 http://arrowtheory.com From tim.leslie at gmail.com Tue May 16 21:24:01 2006 From: tim.leslie at gmail.com (Tim Leslie) Date: Tue May 16 21:24:01 2006 Subject: [Numpy-discussion] Argument of complex number array? In-Reply-To: <1BDFF588-F29A-443C-84D7-D8773E5C4779@science-and-technology.nl> References: <1BDFF588-F29A-443C-84D7-D8773E5C4779@science-and-technology.nl> Message-ID: On 5/16/06, Sidney Cadot wrote: > > Hi all, > > I am looking for a function to calculate the argument (i.e., the > 'phase') from a numarray containing complex numbers. A bit to my > surprise, I don't see this listed under the ufuncs. > > Now of course I can do something like > > arg = arctan2(z.imag, z.real) > > ... But I would have hoped for a direct function that does this. > > Any thoughts? (Perhaps I am missing something?) angle(z, deg=0) Return the angle of the complex argument z.
>>> z = complex(1, 1) >>> angle(z)==pi/4 True Cheers, Timl Cheerio, Sidney > > > > ------------------------------------------------------- > Using Tomcat but need to do more? Need to support web services, security? > Get stuff done quickly with pre-integrated technology to make your job > easier > Download IBM WebSphere Application Server v.1.0.1 based on Apache Geronimo > http://sel.as-us.falkag.net/sel?cmd=lnk&kid=120709&bid=263057&dat=121642 > _______________________________________________ > Numpy-discussion mailing list > Numpy-discussion at lists.sourceforge.net > https://lists.sourceforge.net/lists/listinfo/numpy-discussion > -------------- next part -------------- An HTML attachment was scrubbed... URL: From faltet at carabos.com Wed May 17 01:38:02 2006 From: faltet at carabos.com (Francesc Altet) Date: Wed May 17 01:38:02 2006 Subject: [Numpy-discussion] nonzero() behaviour has changed In-Reply-To: <20060517111742.421bdf3b.simon@arrowtheory.com> References: <20060516134106.551fe813.simon@arrowtheory.com> <20060517111742.421bdf3b.simon@arrowtheory.com> Message-ID: <200605171036.53706.faltet@carabos.com> On Wednesday 17 May 2006 03:17, Simon Burton wrote: > On Tue, 16 May 2006 15:15:23 -0400 > > Alan G Isaac wrote: > > 2. How are people using this? I trust that the numarray > > behavior was well considered, but I would have expected > > coordinates to be grouped rather than spread across > > the arrays in the tuple. > > Yes, this strikes me as bizarre. > How about we make a new function, e.g. argwhere, that > returns an array of indices? > > argwhere( array( [[0,1],[0,1]] ) ) -> [[0,1],[1,1]] +1 That could be quite useful. -- >0,0< Francesc Altet http://www.carabos.com/ V V Cárabos Coop. V. Enjoy Data "-" From cadot at science-and-technology.nl Wed May 17 01:45:03 2006 From: cadot at science-and-technology.nl (Sidney Cadot) Date: Wed May 17 01:45:03 2006 Subject: [Numpy-discussion] Argument of complex number array?
In-Reply-To: References: <1BDFF588-F29A-443C-84D7-D8773E5C4779@science-and-technology.nl> Message-ID: On May 17, 2006, at 6:23, Tim Leslie wrote: > angle(z, deg=0) Great, I missed that completely. Thanks! Sidney From joris at ster.kuleuven.ac.be Wed May 17 02:54:12 2006 From: joris at ster.kuleuven.ac.be (joris at ster.kuleuven.ac.be) Date: Wed May 17 02:54:12 2006 Subject: [Numpy-discussion] numpy docs Message-ID: <1147859593.446af2893a9a8@webmail.ster.kuleuven.be> On Tuesday 16 May 2006 16:32, you wrote: > That's great! > I think it would be nice if it could somehow become a gateway for > docstrings. Are you only interested in examples? I'm not sure what your > intentions are, but it would be nice if there were a Wiki page like yours > where people could contribute docstring fixes and then those fixes would > eventually find their way into the source with the help of someone with CVS > write access. Usage examples like the ones on your page are needed in the > docstrings too. My fear though is that with a wiki page, there's no real > incentive to be concise. People tend to just add more rather than erasing > something someone else wrote. I don't really want to read a flame war in my > docstrings about whether the sum() method is superior to the sum() function, > etc. > > --bill +1 on the docstrings. At the moment I will put my efforts in making the example list more complete, though. It's actually a great way to learn about the numpy features. I recommend numarray users to simply browse through the example list and pick up the new numpy features by reading some of the examples. But be aware that there are actually many more as the list is not yet complete. And don't hesitate to contribute! 
;-) Cheers, Joris Disclaimer: http://www.kuleuven.be/cwis/email_disclaimer.htm From jg307 at cam.ac.uk Wed May 17 04:20:06 2006 From: jg307 at cam.ac.uk (James Graham) Date: Wed May 17 04:20:06 2006 Subject: [Numpy-discussion] Distutils/NAG compiler problems Message-ID: <446B06B8.30003@cam.ac.uk> I have been experiencing difficulty with the final linking step when using f2py and the NAG Fortran 95 compiler, using the latest release version of numpy (0.9.6). I believe this is caused by a typo in the options being passed to the compiler (at least, making this change fixes the problem): --- /usr/lib/python2.3/site-packages/numpy/distutils/fcompiler/nag.py 2006-01-06 21:29:40.000000000 +0000 +++ /scratch/jgraham/nag.py 2006-05-17 10:46:38.000000000 +0100 @@ -22,7 +22,7 @@ def get_flags_linker_so(self): if sys.platform=='darwin': return ['-unsharedf95','-Wl,-bundle,-flat_namespace,-undefined,suppress'] - return ["-Wl,shared"] + return ["-Wl,-shared"] def get_flags_opt(self): return ['-O4'] def get_flags_arch(self): (that may be incorrectly wrapped) -- "You see stars that clear have been dead for years But the idea just lives on..." -- Bright Eyes From mateusz at loskot.net Wed May 17 08:35:09 2006 From: mateusz at loskot.net (Mateusz Loskot) Date: Wed May 17 08:35:09 2006 Subject: [Numpy-discussion] NumPy, Numeric or numarray or all of them? In-Reply-To: <446A0A86.3030400@noaa.gov> References: <4463220B.20909@loskot.net> <44636A10.1030909@noaa.gov> <4465E839.2000604@loskot.net> <446A0A86.3030400@noaa.gov> Message-ID: <446B4278.50401@loskot.net> Christopher Barker wrote: > Mateusz Łoskot wrote: >> Christopher Barker wrote: >> >>> Speaking as a long time numpy (Numeric, etc.) user, and a new >>> user of GDAL, I had no idea GDAL worked with num* at all! at >>> least not directly. >> >> >> Yes, it is :-) >> >>> In fact, I thought I was going to have to write that code myself. >>> Where do I find docs for this?
I'm sure I've just missed >>> something, but I'm finding the docs a bit sparse. >> >> >> Do you mean docs for Python bindings of GDAL? > > > I meant docs for the num* part. Yes, agreed. >> AFAIK the only docs for Python oriented users is the "GDAL API >> Tutorial" (http://www.gdal.org/gdal_tutorial.html). > > > That's all I've found. And there was no indication of what datatypes > were being returned (or accepted). So far, all I'm doing is reading > images, and it looks to me like ReadRaster() is returning a string, > for instance. I'd love to have it return a numpy array. Now, it's using the Numeric package. NumPy will be used soon. >> In the meantime, I'd suggest to use the epydoc tool to generate some >> manual - it won't be a complete reference but it can be usable. > > > Good idea. Someone should get around to doing that, and post it > somewhere. If I do it, I'll let you know. I'm going to include an epydoc configuration file in the GDAL source tree. Such a file is similar to a doxygen config and can be used to generate html/pdf documentation with a single command. I think including generated docs is not necessary. >> There are two separate bindings to Python in GDAL: - native, called >> traditional - SWIG based, called New Generation Python > > > How do I know which ones I'm using? When you're building GDAL you can use appropriate options for the ./configure script. > Were the "traditional" ones done by hand? Yes, traditional pymod. Cheers -- Mateusz Łoskot http://mateusz.loskot.net From cookedm at physics.mcmaster.ca Wed May 17 13:53:01 2006 From: cookedm at physics.mcmaster.ca (David M. Cooke) Date: Wed May 17 13:53:01 2006 Subject: [Numpy-discussion] Getting 0.9.8 out this week In-Reply-To: (David M. Cooke's message of "Thu, 11 May 2006 15:47:51 -0400") References: <4461131B.1050907@ieee.org> Message-ID: cookedm at physics.mcmaster.ca (David M. Cooke) writes: > Travis Oliphant writes: > >> I'd like to get 0.9.8 of NumPy released by the end of the week.
>> There are a few Trac tickets that need to be resolved by then. >> >> In particular #83 suggests returning scalars instead of 1x1 matrices >> from certain reduce-like methods. Please chime in on your preference. >> I'm waiting to hear more feedback before applying the patch. >> >> If you can help out on any other ticket that would be much >> appreciated. > > I'd like to fix up #81 (Numpy should be installable with setuptool's > easy_install), but I'm not going to have any time to work on it before > the weekend. I'm good to go now. #81 is fixed with a hack, but it's the only way I can see to do it (without a major restructuring of numpy.distutils). The only showstopper I can see is probably #110, which seems to show we're leaking memory in the ufunc machinery (haven't looked into it, though). -- |>|\/|< /--------------------------------------------------------------------------\ |David M. Cooke http://arbutus.physics.mcmaster.ca/dmc/ |cookedm at physics.mcmaster.ca From cookedm at physics.mcmaster.ca Wed May 17 13:55:02 2006 From: cookedm at physics.mcmaster.ca (David M. Cooke) Date: Wed May 17 13:55:02 2006 Subject: [Numpy-discussion] Distutils/NAG compiler problems In-Reply-To: <446B06B8.30003@cam.ac.uk> (James Graham's message of "Wed, 17 May 2006 12:19:20 +0100") References: <446B06B8.30003@cam.ac.uk> Message-ID: James Graham writes: > I have been experiencing difficulty with the final linking step when > using f2py and the NAG Fortran 95 compiler, using the latest release > version of numpy (0.9.6).
I believe this is caused by a typo in the > options being passed to the compiler (at least, making this change > fixes the problem): > > --- /usr/lib/python2.3/site-packages/numpy/distutils/fcompiler/nag.py > 2006-01-06 21:29:40.000000000 +0000 > +++ /scratch/jgraham/nag.py 2006-05-17 10:46:38.000000000 +0100 > @@ -22,7 +22,7 @@ > def get_flags_linker_so(self): > if sys.platform=='darwin': > return > ['-unsharedf95','-Wl,-bundle,-flat_namespace,-undefined,suppress'] > - return ["-Wl,shared"] > + return ["-Wl,-shared"] > def get_flags_opt(self): > return ['-O4'] > def get_flags_arch(self): > > (that may be incorrectly wrapped) I've applied that to svn. It'll be in 0.9.8. -- |>|\/|< /--------------------------------------------------------------------------\ |David M. Cooke http://arbutus.physics.mcmaster.ca/dmc/ |cookedm at physics.mcmaster.ca From aisaac at american.edu Wed May 17 14:34:04 2006 From: aisaac at american.edu (Alan G Isaac) Date: Wed May 17 14:34:04 2006 Subject: [Numpy-discussion] nonzero() behaviour has changed In-Reply-To: References: <20060516134106.551fe813.simon@arrowtheory.com><44695185.9090700@ieee.org> Message-ID: On Tue, 16 May 2006, Alan G Isaac apparently wrote: > 2. How are people using this? I trust that the numarray > behavior was well considered, but I would have expected > coordinates to be grouped rather than spread across the > arrays in the tuple. OK, just to satisfy my curiosity: does the silence mean that nobody is using 'nonzero' in a fashion that leads them to prefer the current behavior to the "obvious" alternative of grouping the coordinates? Is the current behavior just an inherited convention, or is it useful in specific applications? Sorry to ask again, but I'm interested to know the application that is facilitated by getting one coordinate at a time, or possibly by getting e.g. an array of first coordinates without the others. 
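(An illustrative sketch of the two index layouts under discussion — one array per axis versus grouped coordinate rows — written with the `import numpy as np` convention rather than the thread's `from numpy import *` style:)

```python
import numpy as np

a = np.array([[0, 1, 2],
              [3, 4, 5]])

# Split layout: nonzero() returns one index array per axis,
# which is exactly the form that fancy indexing consumes.
rows, cols = np.nonzero(a)
vals = a[rows, cols]            # the nonzero elements themselves

# Grouped layout: one (row, col) coordinate pair per nonzero element,
# obtained simply by transposing the split layout.
grouped = np.transpose(np.nonzero(a))
```

The split layout answers "give me the elements" directly (`a[rows, cols]`), while the grouped layout is the natural form for iterating over coordinates one point at a time.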
Thank you, Alan Isaac From oliphant.travis at ieee.org Wed May 17 15:20:01 2006 From: oliphant.travis at ieee.org (Travis Oliphant) Date: Wed May 17 15:20:01 2006 Subject: [Numpy-discussion] nonzero() behaviour has changed In-Reply-To: <20060517111742.421bdf3b.simon@arrowtheory.com> References: <20060516134106.551fe813.simon@arrowtheory.com> <44695185.9090700@ieee.org> <20060517111742.421bdf3b.simon@arrowtheory.com> Message-ID: <446BA147.7020802@ieee.org> Simon Burton wrote: > On Tue, 16 May 2006 15:15:23 -0400 > Alan G Isaac wrote: > > >> 2. How are people using this? I trust that the numarray >> behavior was well considered, but I would have expected >> coordinates to be grouped rather than spread across >> the arrays in the tuple. >> > > The split-tuple is for fancy-indexing. That's how you index a multidimensional array using an array of integers. The output of a.nonzero() is set up to do that so that a[a.nonzero()] works. > Yes, this strikes me as bizarre. > How about we make a new function, e.g. argwhere, that > returns an array of indices ? > > argwhere( array( [[0,1],[0,1]] ) ) -> [[0,1],[1,1]] > > I could see the value of this kind of function too. -Travis From oliphant.travis at ieee.org Wed May 17 15:21:17 2006 From: oliphant.travis at ieee.org (Travis Oliphant) Date: Wed May 17 15:21:17 2006 Subject: [Numpy-discussion] NumPy 0.9.8 to be released today Message-ID: <446BA1AA.8000300@ieee.org> If there are no further difficulties, I'm going to release 0.9.8 today. Then work on 1.0 can begin. The 1.0 release will consist of a series of release candidates. Thank you to David Cooke for his recent flurry of fixes. Thanks to all the other developers who have contributed as well. -Travis From a.mcmorland at auckland.ac.nz Wed May 17 17:39:19 2006 From: a.mcmorland at auckland.ac.nz (Angus McMorland) Date: Wed May 17 17:39:19 2006 Subject: [Numpy-discussion] Dot product threading?
In-Reply-To: References: <44669A5B.8040700@gmail.com> Message-ID: <446BC1FA.2060404@auckland.ac.nz> Robert Kern wrote: > Angus McMorland wrote: > >>Is there a way to specify which dimensions I want dot to work over? > > Use swapaxes() on the arrays to put the desired axes in the right places. Thanks for your reply, Robert. I've explored a bit further, and have made sense of what's going on, to some extent, but have further questions. My interpretation of the dot docstring is that the shapes I need are: a.shape == (2,3,1) and b.shape == (2,1,3) so that the sum is over the 1s, giving result.shape == (2,3,3) but: In [85]:ma = array([[[4],[5],[6]],[[7],[8],[9]]]) #shape = (2, 3, 1) In [86]:mb = array([[[4,5,6]],[[7,8,9]]]) #shape = (2, 1, 3) so In [87]:res = dot(ma,mb) In [88]:res.shape Out[88]:(2, 3, 2, 3) such that res[i,:,j,:] == dot(ma[i,:,:], mb[j,:,:]) which means that I can take the results I want out of res by slicing (somehow) res[0,:,0,:] and res[1,:,1,:] out. Is there an easier way, which would make dot only calculate the dot products for the cases where i==j (which is what I call threading over the first dimension)? Since the docstring makes no mention of what happens over other dimensions, should that be added, or is this the conventional numpy behaviour that I need to get used to? Cheers, Angus -- Angus McMorland email a.mcmorland at auckland.ac.nz mobile +64-21-155-4906 PhD Student, Neurophysiology / Multiphoton & Confocal Imaging Physiology, University of Auckland phone +64-9-3737-599 x89707 Armourer, Auckland University Fencing Secretary, Fencing North Inc.
From robert.kern at gmail.com Wed May 17 19:13:04 2006 From: robert.kern at gmail.com (Robert Kern) Date: Wed May 17 19:13:04 2006 Subject: [Numpy-discussion] Re: Dot product threading? In-Reply-To: <446BC1FA.2060404@auckland.ac.nz> References: <44669A5B.8040700@gmail.com> <446BC1FA.2060404@auckland.ac.nz> Message-ID: Angus McMorland wrote: > Robert Kern wrote: > >>Angus McMorland wrote: >> >>>Is there a way to specify which dimensions I want dot to work over? >> >>Use swapaxes() on the arrays to put the desired axes in the right places. > > Thanks for your reply, Robert. I've explored a bit further, and have > made sense of what's going on, to some extent, but have further questions. > > My interpretation of the dot docstring, is that the shapes I need are: > a.shape == (2,3,1) and b.shape == (2,1,3) > so that the sum is over the 1s, giving result.shape == (2,3,3) I'm not sure why you think you should get that resulting shape.
Yes, it will "sum" over the 1s (in this case there is only one element in those axes so there is nothing really to sum). What exactly are the semantics of the operation that you want? I can't tell from just the input and output shapes. > but: > In [85]:ma = array([[[4],[5],[6]],[[7],[8],[9]]]) #shape = (2, 3, 1) > In [86]:mb = array([[[4,5,6]],[[7,8,9]]]) #shape = (2, 1, 3) > so > In [87]:res = dot(ma,mb).shape > In [88]:res.shape > Out[88]:(2, 3, 2, 3) > such that > res[i,:,j,:] == dot(ma[i,:,:], mb[j,:,:]) > which means that I can take the results I want out of res by slicing > (somehow) res[0,:,0,:] and res[1,:,1,:] out. > > Is there an easier way, which would make dot only calculate the dot > products for the cases where i==j (which is what I call threading over > the first dimension)? I'm afraid I really don't understand the operation that you want. > Since the docstring makes no mention of what happens over other > dimensions, should that be added, or is this the conventional numpy > behaviour that I need to get used to? It's fairly conventional for operations that reduce values along an axis to a single value. The remaining axes are left untouched. E.g. In [1]: from numpy import * In [2]: a = random.randint(0, 10, size=(3,4,5)) In [3]: s1 = sum(a, axis=1) In [4]: a.shape Out[4]: (3, 4, 5) In [5]: s1.shape Out[5]: (3, 5) In [6]: for i in range(3): ...: for j in range(5): ...: print i, j, (sum(a[i,:,j]) == s1[i,j]).all() ...: ...: 0 0 True 0 1 True 0 2 True 0 3 True 0 4 True 1 0 True 1 1 True 1 2 True 1 3 True 1 4 True 2 0 True 2 1 True 2 2 True 2 3 True 2 4 True -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." 
-- Umberto Eco From schofield at ftw.at Thu May 18 01:56:02 2006 From: schofield at ftw.at (Ed Schofield) Date: Thu May 18 01:56:02 2006 Subject: [Numpy-discussion] nonzero() behaviour has changed In-Reply-To: References: <20060516134106.551fe813.simon@arrowtheory.com><44695185.9090700@ieee.org> Message-ID: <446C3636.3070906@ftw.at> Alan G Isaac wrote: > On Tue, 16 May 2006, Alan G Isaac apparently wrote: > >> 2. How are people using this? I trust that the numarray >> behavior was well considered, but I would have expected >> coordinates to be grouped rather than spread across the >> arrays in the tuple. >> > > OK, just to satisfy my curiosity: > does the silence mean that nobody is using > 'nonzero' in a fashion that leads them to > prefer the current behavior to the "obvious" > alternative of grouping the coordinates? > Is the current behavior just an inherited > convention, or is it useful in specific applications? > I also think a function that groups coordinates would be useful. Here's a prototype: def argnonzero(a): return transpose(a.nonzero()) This has the same effect as: def argnonzero(a): nz = a.nonzero() return array([[nz[i][j] for i in xrange(a.ndim)] for j in xrange(len(nz[0]))]) The output is always a 2d array, so >>> a array([[ 0, 1, 2, 3], [ 4, 0, 6, 7], [ 8, 9, 0, 11]]) >>> argnonzero(a) array([[0, 1], [0, 2], [0, 3], [1, 0], [1, 2], [1, 3], [2, 0], [2, 1], [2, 3]]) >>> b = a[0] >>> argnonzero(b) array([[1], [2], [3]]) >>> c = array([a, a-1, a-2]) >>> argnonzero(c) array([[0, 0, 1], [0, 0, 2], ... It looks a little clumsy for 1d arrays, but I'd argue that, if NumPy were to offer a function like this, it should always return a 2d array whose rows are the coordinates for consistency, rather than returning some squeezed version for indices into 1d arrays. I'd support the addition of such a function to NumPy. Although it's tiny, it's not obvious, and it might be useful. 
-- Ed From wbaxter at gmail.com Thu May 18 02:37:01 2006 From: wbaxter at gmail.com (Bill Baxter) Date: Thu May 18 02:37:01 2006 Subject: [Numpy-discussion] nonzero() behaviour has changed In-Reply-To: <446C3636.3070906@ftw.at> References: <20060516134106.551fe813.simon@arrowtheory.com> <44695185.9090700@ieee.org> <446C3636.3070906@ftw.at> Message-ID: Aren't there quite a few functions that return indices like nonzero() does? Should they all have arg*** versions that just transpose the output of the non arg*** version? Maybe the answer is yes, but to me if all it takes to get what you need is transpose() then I'm not sure how that fails to be obvious. You have [[x y z] [p d q]] and you want ([x p][y d][z q]). Looks like a job for transpose! Maybe a note in the docstring pointing out "the obvious" is what is really warranted. As in "hey, look, you can transpose the output of this function if you'd like the indices the other way, but the way it is it's more useful for indexing, as in a[a.nonzero()]." Sure would be nice if all you had to type was a.nonzero().T, though... ;-P --bill On 5/18/06, Ed Schofield wrote: > > Alan G Isaac wrote: > > On Tue, 16 May 2006, Alan G Isaac apparently wrote: > > > >> 2. How are people using this? I trust that the numarray > >> behavior was well considered, but I would have expected > >> coordinates to be grouped rather than spread across the > >> arrays in the tuple. > >> > > > > OK, just to satisfy my curiosity: > > does the silence mean that nobody is using > > 'nonzero' in a fashion that leads them to > > prefer the current behavior to the "obvious" > > alternative of grouping the coordinates? > > Is the current behavior just an inherited > > convention, or is it useful in specific applications? > > > > I also think a function that groups coordinates would be useful.
Here's > a prototype: > > def argnonzero(a): > return transpose(a.nonzero()) > > This has the same effect as: > > def argnonzero(a): > nz = a.nonzero() > return array([[nz[i][j] for i in xrange(a.ndim)] for j in > xrange(len(nz[0]))]) > > The output is always a 2d array, so > > >>> a > array([[ 0, 1, 2, 3], > [ 4, 0, 6, 7], > [ 8, 9, 0, 11]]) > > >>> argnonzero(a) > array([[0, 1], > [0, 2], > [0, 3], > [1, 0], > [1, 2], > [1, 3], > [2, 0], > [2, 1], > [2, 3]]) > > >>> b = a[0] > >>> argnonzero(b) > array([[1], > [2], > [3]]) > > >>> c = array([a, a-1, a-2]) > >>> argnonzero(c) > array([[0, 0, 1], > [0, 0, 2], > ... > > It looks a little clumsy for 1d arrays, but I'd argue that, if NumPy > were to offer a function like this, it should always return a 2d array > whose rows are the coordinates for consistency, rather than returning > some squeezed version for indices into 1d arrays. > > I'd support the addition of such a function to NumPy. Although it's > tiny, it's not obvious, and it might be useful. > > -- Ed > > > > ------------------------------------------------------- > Using Tomcat but need to do more? Need to support web services, security? > Get stuff done quickly with pre-integrated technology to make your job > easier > Download IBM WebSphere Application Server v.1.0.1 based on Apache Geronimo > http://sel.as-us.falkag.net/sel?cmd=lnk&kid=120709&bid=263057&dat=121642 > _______________________________________________ > Numpy-discussion mailing list > Numpy-discussion at lists.sourceforge.net > https://lists.sourceforge.net/lists/listinfo/numpy-discussion > -- William V. Baxter III OLM Digital Kono Dens Building Rm 302 1-8-8 Wakabayashi Setagaya-ku Tokyo, Japan 154-0023 +81 (3) 3422-3380 -------------- next part -------------- An HTML attachment was scrubbed... URL: From pau.gargallo at gmail.com Thu May 18 02:48:01 2006 From: pau.gargallo at gmail.com (Pau Gargallo) Date: Thu May 18 02:48:01 2006 Subject: [Numpy-discussion] Re: Dot product threading? 
In-Reply-To: References: <44669A5B.8040700@gmail.com> <446BC1FA.2060404@auckland.ac.nz> Message-ID: <6ef8f3380605180247g3b084681o930ea6ea1bf4ecd9@mail.gmail.com> > I'm afraid I really don't understand the operation that you want. I think that the operation Angus wants is the following (at least I would like that one ;-) if you have two 2darrays of shapes: a.shape = (n,k) b.shape = (k,m) you get: dot( a,b ).shape == (n,m) Now, if you have higher dimensional arrays (kind of "arrays of matrices") a.shape = I+(n,k) b.shape = J+(k,m) where I and J are tuples, you get dot( a,b ).shape == I+J+(n,m) dot( a,b )[ i,j ] == dot( a[i],b[j] ) #i,j represent tuples That is the current behaviour, it computes the matrix product between every possible pair. For me that is similar to 'outer' but with matrix product. But sometimes it would be useful (at least for me) to have: a.shape = I+(n,k) b.shape = I+(k,m) and to get only: dot2( a,b ).shape == I+(n,m) dot2( a,b )[i] == dot2( a[i], b[i] ) This would be a natural extension of the scalar product (a*b)[i] == a[i]*b[i] If dot2 was a kind of ufunc, this will be the expected behaviour, while the current dot's behaviour will be obtained by dot2.outer(a,b). Does this make any sense? Cheers, pau From schofield at ftw.at Thu May 18 03:40:01 2006 From: schofield at ftw.at (Ed Schofield) Date: Thu May 18 03:40:01 2006 Subject: [Numpy-discussion] nonzero() behaviour has changed In-Reply-To: References: <20060516134106.551fe813.simon@arrowtheory.com> <44695185.9090700@ieee.org> <446C3636.3070906@ftw.at> Message-ID: <446C4E85.5000503@ftw.at> Bill Baxter wrote: > Aren't there quite a few functions that return indices like nonzero() > does? > Should they all have arg*** versions that just transpose the output of > the non arg*** version? > > Maybe the answer is yes, but to me if all it takes to get what you > needs is transpose() then I'm not sure how that fails to be obvious. > You have [[x y z] [p d q]] and you want ([x p][y d][z q]). 
Looks like > a job for transpose! > Maybe a note in the docstring pointing out "the obvious" is what is > really warranted. As in "hey, look, you can transpose the output of > this function if you'd like the indices the other way, but the way it > is it's more useful for indexing, as in a[ a.nonzero()]." Well, it wasn't immediately obvious to me how it could be done so elegantly. But I agree that a new function isn't necessary. I've re-written the docstring for the nonzero method in SVN to point out how to do it. > Sure would be nice if all you had to type was a.nonzero().T, though... ;-P No, this wouldn't be possible -- the output of the nonzero method is a tuple, not an array. Perhaps this is why it's not _that_ obvious ;) -- Ed From amcmorl at gmail.com Thu May 18 04:15:13 2006 From: amcmorl at gmail.com (Angus McMorland) Date: Thu May 18 04:15:13 2006 Subject: [Numpy-discussion] Dot product threading? In-Reply-To: <6ef8f3380605180247g3b084681o930ea6ea1bf4ecd9@mail.gmail.com> References: <44669A5B.8040700@gmail.com> <446BC1FA.2060404@auckland.ac.nz> <6ef8f3380605180247g3b084681o930ea6ea1bf4ecd9@mail.gmail.com> Message-ID: <446C571B.5050700@gmail.com> Pau Gargallo wrote: > I think that the operation Angus wants is the following (at least I > would like that one ;-) > [snip] > > But sometimes it would be useful (at least for me) to have: > a.shape = I+(n,k) > b.shape = I+(k,m) > and to get only: > dot2( a,b ).shape == I+(n,m) > dot2( a,b )[i] == dot2( a[i], b[i] ) > > This would be a natural extension of the scalar product (a*b)[i] == > a[i]*b[i] > If dot2 was a kind of ufunc, this will be the expected behaviour, > while the current dot's behaviour will be obtained by dot2.outer(a,b). > > Does this make any sense? Thank-you Pau, this looks exactly like what I want. For further explanation, here, I believe, is an implementation of the desired routine using a loop. It would, however, be great to do this using quicker (ufunc?) machinery. 
Pau, can you confirm that this is the same as the routine you're interested in? def dot2(a,b): '''Returns dot product of last two dimensions of two 3-D arrays, threaded over first dimension.''' try: assert a.shape[2] == b.shape[1] assert a.shape[0] == b.shape[0] except AssertionError: print "incorrect input shapes" res = zeros( (a.shape[0], a.shape[1], b.shape[2]), dtype=float ) for i in range(a.shape[0]): res[i,...] = dot( a[i,...], b[i,...] ) return res I think the 'arrays of 2-D matrices' comment (which I've snipped out, oh well) captures the idea well. Angus. -- Angus McMorland email a.mcmorland at auckland.ac.nz mobile +64-21-155-4906 PhD Student, Neurophysiology / Multiphoton & Confocal Imaging Physiology, University of Auckland phone +64-9-3737-599 x89707 Armourer, Auckland University Fencing Secretary, Fencing North Inc. From martin.wiechert at gmx.de Thu May 18 04:25:04 2006 From: martin.wiechert at gmx.de (Martin Wiechert) Date: Thu May 18 04:25:04 2006 Subject: [Numpy-discussion] NumPy 0.9.8 to be released today In-Reply-To: <446BA1AA.8000300@ieee.org> References: <446BA1AA.8000300@ieee.org> Message-ID: <200605181323.19679.martin.wiechert@gmx.de> Does scipy 0.4.8 work on top of that? Martin On Thursday 18 May 2006 00:20, Travis Oliphant wrote: > If there are no further difficulties, I'm going to release 0.9.8 > today. Then work on 1.0 can begin. The 1.0 release will consist of a > series of release candidates. > > Thank you to David Cooke for his recent flurry of fixes. Thanks to all > the other developers who have contributed as well. > > -Travis
From pau.gargallo at gmail.com Thu May 18 04:35:09 2006 From: pau.gargallo at gmail.com (Pau Gargallo) Date: Thu May 18 04:35:09 2006 Subject: [Numpy-discussion] Dot product threading? In-Reply-To: <446C571B.5050700@gmail.com> References: <44669A5B.8040700@gmail.com> <446BC1FA.2060404@auckland.ac.nz> <6ef8f3380605180247g3b084681o930ea6ea1bf4ecd9@mail.gmail.com> <446C571B.5050700@gmail.com> Message-ID: <6ef8f3380605180434w77092836m8c470ba993b337e8@mail.gmail.com> > Pau, can you confirm that this is the same > as the routine you're interested in? > > def dot2(a,b): > '''Returns dot product of last two dimensions of two 3-D arrays, > threaded over first dimension.''' > try: > assert a.shape[2] == b.shape[1] > assert a.shape[0] == b.shape[0] > except AssertionError: > print "incorrect input shapes" > res = zeros( (a.shape[0], a.shape[1], b.shape[2]), dtype=float ) > for i in range(a.shape[0]): > res[i,...] = dot( a[i,...], b[i,...] ) > return res > yes, that is what I would like. I would like it even with more dimensions and with all the broadcasting rules ;-) These can probably be achieved by building actual 'arrays of matrices' (an array of matrix objects) and then using the ufunc machinery. But I think that a simple dot2 function (with another name, of course) will still be very useful.
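(A loop-free sketch of the dot2 idea discussed above, using only broadcasting and a sum over the contraction axis — note this is an illustrative aside, not code from the thread; later NumPy releases added `matmul`, which broadcasts over leading axes in exactly this way:)

```python
import numpy as np

def dot2(a, b):
    """Matrix product over the last two axes of a and b, threaded
    (with broadcasting) over any leading axes.

    a: (..., n, k), b: (..., k, m)  ->  result: (..., n, m)
    """
    # a[..., :, :, None] has shape (..., n, k, 1)
    # b[..., None, :, :] has shape (..., 1, k, m)
    # Their product has shape (..., n, k, m); summing out the k axis
    # (axis -2) leaves the batched matrix product of shape (..., n, m).
    return (a[..., :, :, np.newaxis] * b[..., np.newaxis, :, :]).sum(axis=-2)
```

For Angus's shapes this gives a (2, 3, 3) result from (2, 3, 1) and (2, 1, 3) inputs, matching dot(a[i], b[i]) for each i, and the intermediate-array cost can be avoided only with a compiled loop (or the later `matmul`).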
pau From wbaxter at gmail.com Thu May 18 04:45:07 2006 From: wbaxter at gmail.com (Bill Baxter) Date: Thu May 18 04:45:07 2006 Subject: [Numpy-discussion] nonzero() behaviour has changed In-Reply-To: <446C4E85.5000503@ftw.at> References: <20060516134106.551fe813.simon@arrowtheory.com> <44695185.9090700@ieee.org> <446C3636.3070906@ftw.at> <446C4E85.5000503@ftw.at> Message-ID: On 5/18/06, Ed Schofield wrote: > > Bill Baxter wrote: > > Sure would be nice if all you had to type was a.nonzero().T, though... > ;-P > > No, this wouldn't be possible -- the output of the nonzero method is a > tuple, not an array. Perhaps this is why it's not _that_ obvious ;) Oh, I see. I did miss that bit. I think I may have even done something recently myself like vstack(where(a > val)).transpose() not realizing that plain old transpose() would work in place of vstack(xxx).transpose(). If you feel like copy-pasting your doc addition for nonzero() over to where() also, that would be nice. What other functions work like that? ... ... actually it looks like most other functions similar to nonzero() return a boolean array, then you use where() if you need an index list. isnan(), iscomplex(), isinf(), isreal(), isneginf(), etc and of course all the boolean operators like a>0. So nonzero() is kind of an oddball. Is it actually any different from where(a!=0)? --bill -------------- next part -------------- An HTML attachment was scrubbed... URL: From gnurser at googlemail.com Thu May 18 05:17:19 2006 From: gnurser at googlemail.com (George Nurser) Date: Thu May 18 05:17:19 2006 Subject: [Numpy-discussion] nonzero() behaviour has changed In-Reply-To: <446C4E85.5000503@ftw.at> References: <20060516134106.551fe813.simon@arrowtheory.com> <44695185.9090700@ieee.org> <446C3636.3070906@ftw.at> <446C4E85.5000503@ftw.at> Message-ID: <1d1e6ea70605180516s28f36707j7105ec3eed0d3d72@mail.gmail.com> This is a very useful thread. Made me think about what the present arrangement is supposed to do. 
My usage of this is to get out indices corresponding to a given condition. So E.g. In [38]: a = arange(12).reshape(3,2,2) Out[39]: array([[[ 0, 1], [ 2, 3]], [[ 4, 5], [ 6, 7]], [[ 8, 9], [10, 11]]]) In [40]: xx = where(a%2==0) In [41]: xx Out[41]: (array([0, 0, 1, 1, 2, 2]), array([0, 1, 0, 1, 0, 1]), array([0, 0, 0, 0, 0, 0])) The transpose idea should have been obvious to me... In [48]: for i,j,k in transpose(xx): ....: print i,j,k ....: ....: 0 0 0 0 1 0 1 0 0 1 1 0 2 0 0 2 1 0 A 1D array In [57]: aa = a.ravel() In [49]: xx = where(aa%2==0) In [53]: xx Out[53]: (array([ 0, 2, 4, 6, 8, 10]),) needs tuple unpacking: In [52]: for i, in transpose(xx): ....: print i ....: ....: 0 2 4 6 8 10 Or more simply, the ugly In [58]: for i in xx[0]: ....: print i ....: ....: 0 2 4 6 8 10 Alternatively, instead of transpose, we can simply use zip. E.g. (3D) In [42]: for i,j,k in zip(*xx): ....: print i,j,k or (1D, again still needs unpacking) In [56]: for i, in zip(*xx): ....: print i From wbaxter at gmail.com Thu May 18 05:40:11 2006 From: wbaxter at gmail.com (Bill Baxter) Date: Thu May 18 05:40:11 2006 Subject: [Numpy-discussion] Re: Dot product threading? In-Reply-To: <6ef8f3380605180247g3b084681o930ea6ea1bf4ecd9@mail.gmail.com> References: <44669A5B.8040700@gmail.com> <446BC1FA.2060404@auckland.ac.nz> <6ef8f3380605180247g3b084681o930ea6ea1bf4ecd9@mail.gmail.com> Message-ID: In Einstein summation notation, what numpy.dot() does now is: c_riqk = a_rij * b_qjk And you want: c_[r]ik = a_[r]ij * b_[r]jk where the brackets indicate a 'no summation' index. Writing the ESN makes it clearer to me anyway. :-) --bb On 5/18/06, Pau Gargallo wrote: > > > I'm afraid I really don't understand the operation that you want.
> > I think that the operation Angus wants is the following (at least I > would like that one ;-) > > if you have two 2darrays of shapes: > a.shape = (n,k) > b.shape = (k,m) > you get: > dot( a,b ).shape == (n,m) > > Now, if you have higher dimensional arrays (kind of "arrays of matrices") > a.shape = I+(n,k) > b.shape = J+(k,m) > where I and J are tuples, you get > dot( a,b ).shape == I+J+(n,m) > dot( a,b )[ i,j ] == dot( a[i],b[j] ) #i,j represent tuples > That is the current behaviour, it computes the matrix product between > every possible pair. > For me that is similar to 'outer' but with matrix product. > > But sometimes it would be useful (at least for me) to have: > a.shape = I+(n,k) > b.shape = I+(k,m) > and to get only: > dot2( a,b ).shape == I+(n,m) > dot2( a,b )[i] == dot2( a[i], b[i] ) > > This would be a natural extension of the scalar product (a*b)[i] == > a[i]*b[i] > If dot2 was a kind of ufunc, this will be the expected behaviour, > while the current dot's behaviour will be obtained by dot2.outer(a,b). > > Does this make any sense? > > Cheers, > pau > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From joris at ster.kuleuven.ac.be Thu May 18 05:54:03 2006 From: joris at ster.kuleuven.ac.be (joris at ster.kuleuven.ac.be) Date: Thu May 18 05:54:03 2006 Subject: [Numpy-discussion] Numpy Example List Message-ID: <1147956762.446c6e1aa128f@webmail.ster.kuleuven.be> Newsflash The Numpy Example List has now passed its 100th example, and has now its own wikipage: http://scipy.org/Numpy_Example_List Thanks to Pau Gargallo for his contributions. 
Joris Disclaimer: http://www.kuleuven.be/cwis/email_disclaimer.htm From gruben at bigpond.net.au Thu May 18 07:38:03 2006 From: gruben at bigpond.net.au (Gary Ruben) Date: Thu May 18 07:38:03 2006 Subject: [Numpy-discussion] Numpy Example List proposals In-Reply-To: <1147956762.446c6e1aa128f@webmail.ster.kuleuven.be> References: <1147956762.446c6e1aa128f@webmail.ster.kuleuven.be> Message-ID: <446C8685.1060203@bigpond.net.au> I want to thank Joris for this fantastic resource. Well done. Further to Bill's suggestion, perhaps the first example(s) in each box could be the docstring example. For functions/methods already with docstrings, we could paste that into the wiki page. Those without could be pasted back the other way. Maybe we could convince the developers to add an example() function to numpy and scipy which takes the function name as an argument and accesses a file containing these examples and spits it out. This way we could have many examples in the documentation without polluting the docstrings with pages of text. The Maxima CAS package provides help, example() and apropos() functions. In this case example() actually executes the example input to generate the output which we could do with numpy/scipy as well. apropos() returns similar sounding functions. I contacted Fernando Perez the other day because I discovered a neat recipe for this functionality in the python cookbook here: http://aspn.activestate.com/ASPN/Cookbook/Python/Recipe/409000 so we could do this in numpy/scipy too. Gary R. joris at ster.kuleuven.ac.be wrote: > Newsflash > > The Numpy Example List has now passed its 100th example, > and has now its own wikipage: http://scipy.org/Numpy_Example_List > > Thanks to Pau Gargallo for his contributions. 
> > Joris From aisaac at american.edu Thu May 18 07:43:08 2006 From: aisaac at american.edu (Alan G Isaac) Date: Thu May 18 07:43:08 2006 Subject: [Numpy-discussion] nonzero() behaviour has changed In-Reply-To: <446BA0B9.7000704@ieee.org> References: <20060516134106.551fe813.simon@arrowtheory.com><44695185.9090700@ieee.org> <446BA0B9.7000704@ieee.org> Message-ID: > Alan G Isaac wrote: >> 1. I hope for an array 'a' that nonzero(a) and a.nonzero() >> will produce the same result, whatever convention is >> chosen. On Wed, 17 May 2006, Travis Oliphant apparently wrote: > Unfortunately this won't work because the nonzero(a) behavior was > inherited from Numeric but the .nonzero() from numarray. > This is the price of trying to merge two user groups. The little bit > of pain is worth it, I think. One last comment: This is definitely trading off surprises for future users against ease for incoming Numeric users. So ... might the best be to make numpy consistent but, assuming the more consistent numarray behavior is chosen, provide the Numeric behavior as part of the compatibility module? fwiw, Alan From wbaxter at gmail.com Thu May 18 07:57:09 2006 From: wbaxter at gmail.com (Bill Baxter) Date: Thu May 18 07:57:09 2006 Subject: [Numpy-discussion] Indexing arrays with arrays Message-ID: One thing I haven't quite managed to figure out yet, is what the heck indexing an array with an array is supposed to give you. This is sort of an offshoot of the "nonzero()" discussion. I was wondering why nonzero() and where() return tuples at all instead of just an array, till I tried it. >>> b array([[0, 1, 2], [3, 4, 5], [6, 7, 8]]) >>> b[ num.asarray(num.where(b>4)) ] array([[[3, 4, 5], [6, 7, 8], [6, 7, 8], [6, 7, 8]], [[6, 7, 8], [0, 1, 2], [3, 4, 5], [6, 7, 8]]]) Whoa, not sure what that is. Can someone explain the current rule and in what situations is it useful? Thanks --bb -------------- next part -------------- An HTML attachment was scrubbed...
URL: From nicolas.chauvat at logilab.fr Thu May 18 08:04:09 2006 From: nicolas.chauvat at logilab.fr (Nicolas Chauvat) Date: Thu May 18 08:04:09 2006 Subject: [Numpy-discussion] [ANN] EuroPython 2006 - call for papers Message-ID: <20060518150512.GG4480@crater.logilab.fr> Hi Lists, This year again EuroPython will see pythonistas from all over flock together at the same place and the same time: EuroPython 2006 - July, 3rd to 5th - CERN, Geneva, Switzerland Please do not hesitate to submit your presentation proposals to share your insights and experience with all who have an interest in Python put to good use in Science. http://www.europython.org Deadline is May 31st. See you there, -- Nicolas Chauvat logilab.fr - services en informatique avancée et gestion de connaissances From aisaac at american.edu Thu May 18 08:14:03 2006 From: aisaac at american.edu (Alan G Isaac) Date: Thu May 18 08:14:03 2006 Subject: [Numpy-discussion] Indexing arrays with arrays In-Reply-To: References: Message-ID: On Thu, 18 May 2006, Bill Baxter apparently wrote: > One thing I haven't quite managed to figure out yet, is > what the heck indexing an array with an array is supposed > to give you. I think you want section 3.3.6.1 of Travis's book, which is easily one of the hardest sections of the book. I find it nonobvious that when x and y are nd-arrays that x[y] should differ from x[tuple(y)] or x[list(y)], but as explained in this section it does in a big way. Cheers, Alan Isaac From pau.gargallo at gmail.com Thu May 18 08:24:10 2006 From: pau.gargallo at gmail.com (Pau Gargallo) Date: Thu May 18 08:24:10 2006 Subject: [Numpy-discussion] Indexing arrays with arrays In-Reply-To: References: Message-ID: <6ef8f3380605180823h55275363ld0b53e1173c567dd@mail.gmail.com> On 5/18/06, Bill Baxter wrote: > One thing I haven't quite managed to figure out yet, is what the heck > indexing an array with an array is supposed to give you. > This is sort of an offshoot of the "nonzero()" discussion.
> I was wondering why nonzero() and where() return tuples at all instead of > just an array, till I tried it. > > >>> b > array([[0, 1, 2], > [3, 4, 5], > [6, 7, 8]]) > >>> b[ num.asarray(num.where(b>4)) ] > array([[[3, 4, 5], > [6, 7, 8], > [6, 7, 8], > [6, 7, 8]], > > [[6, 7, 8], > [0, 1, 2], > [3, 4, 5], > [6, 7, 8]]]) > > Whoa, not sure what that is. Can someone explain the current rule and in > what situations is it useful? > > Thanks > --bb I think that if b and x are arrays then, b[x]_ijk = b[ x_ijk ] where ijk is whatever x needs to be indexed with. Note that the indexing on b is _only_ on its first dimension. >>> from numpy import * >>> b = arange(9).reshape(3,3) >>> b array([[0, 1, 2], [3, 4, 5], [6, 7, 8]]) >>> x,y = where(b>4) >>> xy = asarray( (x,y) ) >>> xy array([[1, 2, 2, 2], [2, 0, 1, 2]]) >>> b[x,y] array([5, 6, 7, 8]) >>> b[xy] array([[[3, 4, 5], [6, 7, 8], [6, 7, 8], [6, 7, 8]], [[6, 7, 8], [0, 1, 2], [3, 4, 5], [6, 7, 8]]]) >>> asarray( (b[x],b[y]) ) array([[[3, 4, 5], [6, 7, 8], [6, 7, 8], [6, 7, 8]], [[6, 7, 8], [0, 1, 2], [3, 4, 5], [6, 7, 8]]]) From faltet at carabos.com Thu May 18 09:08:06 2006 From: faltet at carabos.com (Francesc Altet) Date: Thu May 18 09:08:06 2006 Subject: [Numpy-discussion] Numpy Example List In-Reply-To: <1147956762.446c6e1aa128f@webmail.ster.kuleuven.be> References: <1147956762.446c6e1aa128f@webmail.ster.kuleuven.be> Message-ID: <200605181806.56193.faltet@carabos.com> On Thursday 18 May 2006 14:52, joris at ster.kuleuven.ac.be wrote: > Newsflash > > The Numpy Example List has now passed its 100th example, > and has now its own wikipage: http://scipy.org/Numpy_Example_List > > Thanks to Pau Gargallo for his contributions. Wow, very good resource and indeed I'll check it frequently! Thanks! -- >0,0< Francesc Altet     http://www.carabos.com/ V V Cárabos Coop. V. 
Enjoy Data "-" From oliphant.travis at ieee.org Thu May 18 09:41:04 2006 From: oliphant.travis at ieee.org (Travis Oliphant) Date: Thu May 18 09:41:04 2006 Subject: [Numpy-discussion] Indexing arrays with arrays In-Reply-To: <6ef8f3380605180823h55275363ld0b53e1173c567dd@mail.gmail.com> References: <6ef8f3380605180823h55275363ld0b53e1173c567dd@mail.gmail.com> Message-ID: <446CA360.6040901@ieee.org> Pau Gargallo wrote: > On 5/18/06, Bill Baxter wrote: >> One thing I haven't quite managed to figure out yet, is what the heck >> indexing an array with an array is supposed to give you. >> This is sort of an offshoot of the "nonzero()" discussion. >> I was wondering why nonzero() and where() return tuples at all >> instead of >> just an array, till I tried it. >> >> >>> b >> array([[0, 1, 2], >> [3, 4, 5], >> [6, 7, 8]]) >> >>> b[ num.asarray(num.where(b>4)) ] >> array([[[3, 4, 5], >> [6, 7, 8], >> [6, 7, 8], >> [6, 7, 8]], >> >> [[6, 7, 8], >> [0, 1, 2], >> [3, 4, 5], >> [6, 7, 8]]]) >> >> Whoa, not sure what that is. Can someone explain the current rule >> and in >> what situations is it useful? I can't take too much credit for the indexing behavior. I took what numarray had done and extended it just a little bit to allow mixing of slices and index arrays. It was easily one of the most complicated things to write for NumPy. What you are observing is called "partial indexing" by Numarray. The indexing rules were also spelled out in emails to this list last year and placed in the design document for what was then called Numeric3 (I'm not sure if that design document is still visible or not; it could probably be found under old.scipy.org). The section in my book that covers this is based on that information. I can't pretend to have use cases for all the indexing fanciness because I was building off the work that numarray had already pioneered. 
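The "partial indexing" rule can be written down concretely: each entry of a 2-d integer index array picks out a whole row along the first axis of the target, so the result has shape index.shape + target.shape[1:]. A small sketch, using the same b and xy as earlier in the thread (the behaviour shown also holds in later NumPy versions):

```python
import numpy as np

b = np.arange(9).reshape(3, 3)
xy = np.array([[1, 2, 2, 2],
               [2, 0, 1, 2]])

# Each entry of xy indexes axis 0 of b, so the result has
# shape xy.shape + b.shape[1:] == (2, 4, 3).
partial = b[xy]
stacked = np.asarray([b[row] for row in xy])
assert partial.shape == (2, 4, 3)
assert np.array_equal(partial, stacked)

# Indexing with the *tuple* (x, y) instead pairs the index arrays
# element-wise and returns the matching scalar elements of b.
x, y = xy
assert np.array_equal(b[x, y], np.array([5, 6, 7, 8]))
```

So b[xy] stacks whole rows, while b[x, y] (tuple indexing, as returned by where()) picks individual elements.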
-Travis From schofield at ftw.at Thu May 18 10:17:15 2006 From: schofield at ftw.at (Ed Schofield) Date: Thu May 18 10:17:15 2006 Subject: [Numpy-discussion] nonzero() behaviour has changed In-Reply-To: References: <20060516134106.551fe813.simon@arrowtheory.com> <44695185.9090700@ieee.org> <446C3636.3070906@ftw.at> <446C4E85.5000503@ftw.at> Message-ID: <446CABF3.5020802@ftw.at> Bill Baxter wrote: > On 5/18/06, *Ed Schofield* > wrote: > > Bill Baxter wrote: > > Sure would be nice if all you had to type was a.nonzero().T, > though... ;-P > > No, this wouldn't be possible -- the output of the nonzero method is a > tuple, not an array. Perhaps this is why it's not _that_ obvious ;) > > > Oh, I see. I did miss that bit. I think I may have even done > something recently myself like > vstack(where(a > val)).transpose() > not realizing that plain old transpose() would work in place of > vstack(xxx).transpose(). > > If you feel like copy-pasting your doc addition for nonzero() over to > where() also, that would be nice. Okay, done. > What other functions work like that? ... actually it looks like most other functions similar to > nonzero() return a boolean array, then you use where() if you need an > index list. isnan(), iscomplex(), isinf(), isreal(), isneginf(), etc > and of course all the boolean operators like a>0. So nonzero() is > kind of an oddball. Is it actually any different from where(a!=0)? I think it's equivalent, only slightly more efficient... 
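The equivalence discussed here is easy to check; a minimal sketch (the behaviour shown also holds in later NumPy releases):

```python
import numpy as np

a = np.array([[0, 2, 0],
              [3, 0, 4]])

# nonzero() and where(a != 0) return the same tuple of index arrays.
nz = a.nonzero()
wh = np.where(a != 0)
assert all(np.array_equal(i, j) for i, j in zip(nz, wh))

# transpose() turns the tuple into one (row, col) coordinate per line --
# exactly what vstack(...).transpose() was doing by hand.
coords = np.transpose(nz)
assert coords.tolist() == [[0, 1], [1, 0], [1, 2]]
```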
-- Ed From schofield at ftw.at Thu May 18 10:21:10 2006 From: schofield at ftw.at (Ed Schofield) Date: Thu May 18 10:21:10 2006 Subject: [Numpy-discussion] nonzero() behaviour has changed In-Reply-To: <1d1e6ea70605180516s28f36707j7105ec3eed0d3d72@mail.gmail.com> References: <20060516134106.551fe813.simon@arrowtheory.com> <44695185.9090700@ieee.org> <446C3636.3070906@ftw.at> <446C4E85.5000503@ftw.at> <1d1e6ea70605180516s28f36707j7105ec3eed0d3d72@mail.gmail.com> Message-ID: <446CACEB.6030505@ftw.at> George Nurser wrote: > This is a very useful thread. Made me think about what the present > arrangement is supposed to do. Great! > Alternatively, instead of transpose, we can simply use zip. > > E.g. (3D) > In [42]: for i,j,k in zip(*xx): > ....: print i,j,k That's interesting. I've never thought of zip as a transpose operation before, but I guess it is ... -- Ed From fperez.net at gmail.com Thu May 18 10:28:02 2006 From: fperez.net at gmail.com (Fernando Perez) Date: Thu May 18 10:28:02 2006 Subject: [Numpy-discussion] Re: Dot product threading? In-Reply-To: References: <44669A5B.8040700@gmail.com> <446BC1FA.2060404@auckland.ac.nz> <6ef8f3380605180247g3b084681o930ea6ea1bf4ecd9@mail.gmail.com> Message-ID: On 5/18/06, Bill Baxter wrote: > In Einstein summation notation, what numpy.dot() does now is: > c_riqk = a_rij * b_qjk > > And you want: > c_[r]ik = a_[r]ij * b_[r]jk > > where the brackets indicate a 'no summation' index. > Writing the ESN makes it clearer to me anyway. :-) I recently needed something similar to this, and being too lazy to think up the proper numpy ax-swapping kung-fu, I just opened up weave and was done with it in a hurry. Here it is, in case anyone finds the basic idea of any use. Cheers, f ### class mt3_dot_factory(object): """Generate the mt3t contract function, holding necessary state. 
This class only needs to be instantiated once, though it doesn't try to enforce this via singleton/borg approaches at all.""" def __init__(self): # The actual expression to contract indices, as a list of strings to be # interpolated into the C++ source mat_ten = ['mat(i,m)*ten(m,j,k)', # first index 'mat(j,m)*ten(i,m,k)', # second 'mat(k,m)*ten(i,j,m)', # third ] # Source template code_tpl = """ for (int i=0;i<order;i++) for (int j=0;j<order;j++) for (int k=0;k<order;k++) for (int m=0;m<order;m++) out(i,j,k) += %s; """ # One C++ snippet per contraction index self.code = [code_tpl % expr for expr in mat_ten] def __call__(self, mat, ten, idx): """mat*ten -> tensor. A special type of matrix-tensor contraction over a single index. The returned array has the following structure: out(i,j,k) = sum_m(mat(i,m)*ten(m,j,k)) if idx==0 out(i,j,k) = sum_m(mat(j,m)*ten(i,m,k)) if idx==1 out(i,j,k) = sum_m(mat(k,m)*ten(i,j,m)) if idx==2 Inputs: - mat: an NxN array. - ten: an NxNxN array. - idx: the position of the index to contract over, 0, 1 or 2.""" # Minimal input validation - we use asserts so they don't kick in # under a -O run of python. assert len(mat.shape)==2,\ "mat must be a rank 2 array, shape=%s" % mat.shape assert mat.shape[0]==mat.shape[1],\ "Only square matrices are supported: mat shape=%s" % mat.shape assert len(ten.shape)==3,\ "ten must be a rank 3 array, shape=%s" % ten.shape assert ten.shape[0]==ten.shape[1]==ten.shape[2],\ "Only equal-dim tensors are supported: ten shape=%s" % ten.shape order = mat.shape[0] out = zeros_like(ten) inline(self.code[idx],('mat','ten','out','order'), type_converters = converters.blitz) return out # Make actual instance mt3_dot = mt3_dot_factory() From aisaac at american.edu Thu May 18 10:56:04 2006 From: aisaac at american.edu (Alan G Isaac) Date: Thu May 18 10:56:04 2006 Subject: [Numpy-discussion] nonzero() behaviour has changed In-Reply-To: <446CACEB.6030505@ftw.at> References: <20060516134106.551fe813.simon@arrowtheory.com> <44695185.9090700@ieee.org> <446C3636.3070906@ftw.at> <446C4E85.5000503@ftw.at> <1d1e6ea70605180516s28f36707j7105ec3eed0d3d72@mail.gmail.com><446CACEB.6030505@ftw.at> Message-ID: On Thu, 18 May 2006, Ed Schofield apparently wrote: > I've never thought of 
> zip as a transpose operation > before, but I guess it is http://mail.python.org/pipermail/python-list/2004-December/257416.html But I like it. Cheers, Alan Isaac From jonathan.taylor at utoronto.ca Thu May 18 10:59:02 2006 From: jonathan.taylor at utoronto.ca (Jonathan Taylor) Date: Thu May 18 10:59:02 2006 Subject: [Numpy-discussion] NumPy 0.9.8 to be released today In-Reply-To: <446BA1AA.8000300@ieee.org> References: <446BA1AA.8000300@ieee.org> Message-ID: <463e11f90605181058v777535f6y645ebd4bdbeda9c2@mail.gmail.com> When 0.9.8 comes out does that mean the API is stable, or will there be API changes before 1.0? Jon. On 5/17/06, Travis Oliphant wrote: > > If there are no further difficulties, I'm going to release 0.9.8 > today. Then work on 1.0 can begin. The 1.0 release will consist of a > series of release candidates. > > Thank you to David Cooke for his recent flurry of fixes. Thanks to all > the other developers who have contributed as well. > > -Travis > > > > ------------------------------------------------------- > Using Tomcat but need to do more? Need to support web services, security? > Get stuff done quickly with pre-integrated technology to make your job easier > Download IBM WebSphere Application Server v.1.0.1 based on Apache Geronimo > http://sel.as-us.falkag.net/sel?cmd=lnk&kid=120709&bid=263057&dat=121642 > _______________________________________________ > Numpy-discussion mailing list > Numpy-discussion at lists.sourceforge.net > https://lists.sourceforge.net/lists/listinfo/numpy-discussion > From mark at mitre.org Thu May 18 14:09:00 2006 From: mark at mitre.org (Mark Heslep) Date: Thu May 18 14:09:00 2006 Subject: [Numpy-discussion] Guide to NumPy latest? Message-ID: <446CE229.9060302@mitre.org> So is the January '06 version of the Guide to Numpy still the most recent version? 
Mark From wbaxter at gmail.com Thu May 18 18:46:01 2006 From: wbaxter at gmail.com (Bill Baxter) Date: Thu May 18 18:46:01 2006 Subject: [Numpy-discussion] Indexing arrays with arrays In-Reply-To: <446CA360.6040901@ieee.org> References: <6ef8f3380605180823h55275363ld0b53e1173c567dd@mail.gmail.com> <446CA360.6040901@ieee.org> Message-ID: I read the chapter in your book and sort of vaguely understand what it does now. Take an array A and index array ind: >>> A array([[ 0, 5, 10, 15], [ 1, 6, 11, 16], [ 2, 7, 12, 17], [ 3, 8, 13, 18], [ 4, 9, 14, 19]]) >>> ind array([[1, 3, 4], [0, 2, 1]]) And you get >>> A[ind] array([[[ 1, 6, 11, 16], [ 3, 8, 13, 18], [ 4, 9, 14, 19]], [[ 0, 5, 10, 15], [ 2, 7, 12, 17], [ 1, 6, 11, 16]]]) In this case it's roughly equivalent to [ A[row] for row in ind ]. >>> num.asarray( [ A[r] for r in ind ] ) array([[[ 1, 6, 11, 16], [ 3, 8, 13, 18], [ 4, 9, 14, 19]], [[ 0, 5, 10, 15], [ 2, 7, 12, 17], [ 1, 6, 11, 16]]]) >>> So I guess it could be useful if you want to take a bunch of different random samples of your data and stack them all up. E.g. you have a (1000,50) shaped grid of data, and you want to take N random samplings, each consisting of 100 rows from the original grid, and then put them all together into an (N,100,50) array. Or say you want to make a stack of sliding windows on the data like rows 0-5, then rows 1-6, then rows 2-7, etc to make a big (1000-5,5,50) array. Might be useful for that kind of thing. But thinking about applying an index obj of shape (2,3,4) to a (10,20,30,40,50) shaped array just makes my head hurt. :-) Does anyone actually use it, though? I also found it unexpected that A[ (ind[0], ind[1] ) ] doesn't do the same thing as A[ind] when ind.shape=( A.ndim, N). List of array -- as in A[ [ind[0], ind[1]] ] -- seems to act just like tuple of array also. --bill -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From christian at marquardt.sc Thu May 18 23:46:04 2006 From: christian at marquardt.sc (Christian Marquardt) Date: Thu May 18 23:46:04 2006 Subject: [Numpy-discussion] Guide to NumPy latest? In-Reply-To: <446CE229.9060302@mitre.org> References: <446CE229.9060302@mitre.org> Message-ID: <22653.193.17.11.23.1148021135.squirrel@webmail.marquardt.sc> Mine says February 28, 2006 on the title page. As this has come up quite often - is there any kind of mechanism to obtain the most recent version? Regards, Christian. On Thu, May 18, 2006 23:07, Mark Heslep wrote: > So is the January '06 version of the Guide to Numpy still the most > recent version? > > Mark > > > ------------------------------------------------------- > Using Tomcat but need to do more? Need to support web services, security? > Get stuff done quickly with pre-integrated technology to make your job > easier > Download IBM WebSphere Application Server v.1.0.1 based on Apache Geronimo > http://sel.as-us.falkag.net/sel?cmd=lnk&kid=120709&bid=263057&dat=121642 > _______________________________________________ > Numpy-discussion mailing list > Numpy-discussion at lists.sourceforge.net > https://lists.sourceforge.net/lists/listinfo/numpy-discussion > From satyaupadhya at yahoo.co.in Fri May 19 06:18:07 2006 From: satyaupadhya at yahoo.co.in (Satya Upadhya) Date: Fri May 19 06:18:07 2006 Subject: [Numpy-discussion] Matrix query Message-ID: <20060519131729.55818.qmail@web8503.mail.in.yahoo.com> Dear Python Users and Gurus, My problem is that I have a distance matrix obtained from the software package PHYLIP. I have placed it in a text file, and using ordinary python I am able to save the matrix elements as a multidimensional array. My problem is that I now wish to convert this multidimensional array into a matrix, and subsequently I wish to find the eigenvectors and eigenvalues of this matrix. I tried using scipy and also earlier the Numeric and LinearAlgebra modules but I am facing problems. 
I can generate a new matrix using scipy (and also with Numeric/LinearAlgebra) and can obtain the corresponding eigenvectors. But I am facing a real problem in making scipy or Numeric/LinearAlgebra accept my multidimensional array as a matrix it can recognize. Please help/give a pointer to a similar query. Thanking you, Satya -------------- next part -------------- An HTML attachment was scrubbed... URL: From joris at ster.kuleuven.ac.be Fri May 19 07:46:02 2006 From: joris at ster.kuleuven.ac.be (joris at ster.kuleuven.ac.be) Date: Fri May 19 07:46:02 2006 Subject: [Numpy-discussion] Numpy Example List Message-ID: <1148049894.446dd9e65009b@webmail.ster.kuleuven.be> For your information: http://scipy.org/Numpy_Example_List - all examples are now syntax highlighted (thanks Pau Gargallo!) - most examples have now a clickable "See also:" footnote - is extended to 118 examples Joris Disclaimer: http://www.kuleuven.be/cwis/email_disclaimer.htm From pau.gargallo at gmail.com Fri May 19 09:57:08 2006 From: pau.gargallo at gmail.com (Pau Gargallo) Date: Fri May 19 09:57:08 2006 Subject: [Numpy-discussion] Numpy Example List In-Reply-To: <1148049894.446dd9e65009b@webmail.ster.kuleuven.be> References: <1148049894.446dd9e65009b@webmail.ster.kuleuven.be> Message-ID: <6ef8f3380605190956l3bda1095gf9aa3f80513d884c@mail.gmail.com> On 5/19/06, joris at ster.kuleuven.ac.be wrote: > > For your information: http://scipy.org/Numpy_Example_List > - all examples are now syntax highlighted (thanks Pau Gargallo!) > - most examples have now a clickable "See also:" footnote > - is extended to 118 examples > > Joris > Thanks to you Joris !! 118 examples in a week, you did a huge amount of work! The syntax coloring has some problems. 
We are using now the moinmoin command: {{{#!python numbers=disable so moinmoin is coloring the interactive sessions as if they were python files. This gives errors when the output of the interactive session doesn't look like python code. See for example http://www.scipy.org/Numpy_Example_List#histogram where the print commands print opening brackets "[" and not the closing ones. Does someone know an easy way to solve this? Is there some way to deactivate the #!python color highlighting for some lines of the code? All around the scipy.org site there are examples of interactive python sessions, it would be nice to have a special syntax highlighter for them. Like: {{{#!python_interactive maybe too much effort just for colors? pau From fred at ucar.edu Fri May 19 09:59:02 2006 From: fred at ucar.edu (Fred Clare) Date: Fri May 19 09:59:02 2006 Subject: [Numpy-discussion] isdtype Message-ID: <5292be8b1f2b781cb213bc9464f088be@ucar.edu> None of the functions in section 4.5 of the manual seem to be implemented: >>> import numpy >>> numpy.__version__ '0.9.6' >>> a = numpy.array([0.]) >>> numpy.isdtype(a) Traceback (most recent call last): File "<stdin>", line 1, in ? AttributeError: 'module' object has no attribute 'isdtype' ? Fred Clare From aisaac at american.edu Fri May 19 11:04:01 2006 From: aisaac at american.edu (Alan G Isaac) Date: Fri May 19 11:04:01 2006 Subject: [Numpy-discussion] matrix direct sum Message-ID: It seems to me that the core question is whether you really need the direct sum to be constructed (seems unlikely), or whether you just need the information in the constituent matrices. How about a direct sum class, which takes a list of matrices as an argument, and appropriately defines the operations you need. Cheers, Alan Isaac PS I have argued that this is a better way to handle the Kronecker product as well, but as here I did not offer code. ;-) If you code it, please share it. 
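A direct sum class along these lines could start from a plain block-diagonal constructor; a minimal sketch (the name direct_sum is made up for illustration, and a real class as Alan suggests would wrap the list of blocks and define operations lazily instead of materializing the big matrix):

```python
import numpy as np

def direct_sum(mats):
    """Block-diagonal direct sum of a list of 2-d arrays."""
    rows = sum(m.shape[0] for m in mats)
    cols = sum(m.shape[1] for m in mats)
    out = np.zeros((rows, cols), dtype=mats[0].dtype)
    r = c = 0
    for m in mats:
        # Place each block on the diagonal, zeros elsewhere.
        out[r:r + m.shape[0], c:c + m.shape[1]] = m
        r += m.shape[0]
        c += m.shape[1]
    return out

A = np.array([[1, 2], [3, 4]])
B = np.array([[5]])
S = direct_sum([A, B])
assert S.tolist() == [[1, 2, 0], [3, 4, 0], [0, 0, 5]]
```

Eigenvalue and determinant computations, for example, can then be done block by block without ever forming S.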
From schofield at ftw.at Sat May 20 04:51:01 2006 From: schofield at ftw.at (Ed Schofield) Date: Sat May 20 04:51:01 2006 Subject: [Numpy-discussion] a.squeeze() for 1d and 0d arrays Message-ID: <446F0230.6050101@ftw.at> Hi all, I've discovered a bug in my SciPy code for sparse matrix slicing that was caused by the following behaviour of squeeze(): >>> a = array([3]) # array of shape (1,) >>> type(a.squeeze()) That is, squeezing a 1-dim array returns an array scalar. Could we change this to return a 0-dim array instead? Another related question is this: >>> b = array(3) # 0-dim array >>> type(b.squeeze()) I find this behaviour surprising too; the docstring claims that squeeze eliminates any length-1 dimensions, but a 0-dimensional array has shape (), without any length-1 dimensions. So shouldn't squeeze() leave 0-dimensional arrays alone? -- Ed From svetosch at gmx.net Sat May 20 06:40:12 2006 From: svetosch at gmx.net (Sven Schreiber) Date: Sat May 20 06:40:12 2006 Subject: [Numpy-discussion] Guide to NumPy latest? In-Reply-To: <22653.193.17.11.23.1148021135.squirrel@webmail.marquardt.sc> References: <446CE229.9060302@mitre.org> <22653.193.17.11.23.1148021135.squirrel@webmail.marquardt.sc> Message-ID: <446F1BF8.9010108@gmx.net> Christian Marquardt schrieb: > Mine says February 28, 2006 on the title page. > > As this has come up quite often - is there any kind of mechanism to > obtain the most recent version? > I would also kindly like to ask why nobody (?) gets the updates of the numpy book. I am totally in favor of the idea to raise funds by selling good documentation, but the deal was to get the updates when they come out. 
Thanks, Sven From aisaac at american.edu Sat May 20 07:23:01 2006 From: aisaac at american.edu (Alan G Isaac) Date: Sat May 20 07:23:01 2006 Subject: [Numpy-discussion] Guide to NumPy latest? In-Reply-To: <446F1BF8.9010108@gmx.net> References: <446CE229.9060302@mitre.org> <22653.193.17.11.23.1148021135.squirrel@webmail.marquardt.sc><446F1BF8.9010108@gmx.net> Message-ID: On Sat, 20 May 2006, Sven Schreiber apparently wrote: > I would also kindly like to ask why nobody (?) gets the > updates of the numpy book. I am totally in favor of the > idea to raise funds by selling good documentation, but the > deal was to get the updates when they come out. While you are phrasing this fairly politely, I think you should not *assume* you have not gotten your most recent "update". Rather you should ask: what is the definition of an "update"? You seem to be defining it by the date on the title page, but that need not be Travis's definition. (I would agree that he could reduce some traffic on this list by clarifying that, perhaps by versioning the book.) Personally, I do not care to receive a copy each time a few typos are corrected or some grammar is changed. I'd rather receive a copy each time there are important additions or corrections. I assume that this is Travis's practice. Cheers, Alan Isaac From oliphant.travis at ieee.org Sat May 20 07:54:05 2006 From: oliphant.travis at ieee.org (Travis Oliphant) Date: Sat May 20 07:54:05 2006 Subject: [Numpy-discussion] Updates to "Guide to NumPy" Message-ID: <446F2D68.90607@ieee.org> Owners of "Guide to NumPy": I'm a little late with updates. I really apologize for that. It's due to not having a good system in place to allow people to get the updates in a timely manner. The latest version is from March. It includes a bit more information about writing C-extensions and some corrections. I've been sending them to people who urgently need them and request it. I'm working on an update to go along with 0.9.8 release. 
When it is ready I will send a location and password for people to download it. I expect 2-3 weeks for its release. If you need the March copy earlier than that, please let me know and I'll send you a personal update. Thanks for your patience. I hope to send updates to the manual at major releases of NumPy. Best regards, -Travis Oliphant From olivetti at itc.it Sat May 20 13:58:06 2006 From: olivetti at itc.it (Emanuele Olivetti) Date: Sat May 20 13:58:06 2006 Subject: [Numpy-discussion] numpy and mayavi Message-ID: <20060520205738.GC17718@eloy.itc.it> I need to use a 3D visualization toolkit together with the amazing numpy. On the scipy website mayavi/VTK are suggested but as far as I tried, mayavi uses only Numeric and doesn't like numpy arrays (and mayavi, last release, dates 13 September 2005, so a bit early for numpy). Do you know something about it, or could you suggest alternative 3D visualization packages? Basically my needs are related to this task: I've got 3D matrices and I'd like to see them not only as 2D slices. Thanks in advance, Emanuele From joris at ster.kuleuven.ac.be Sat May 20 15:15:00 2006 From: joris at ster.kuleuven.ac.be (joris at ster.kuleuven.ac.be) Date: Sat May 20 15:15:00 2006 Subject: [Numpy-discussion] ascii input/output cookbook doc Message-ID: <1148163231.446f949fd98cd@webmail.ster.kuleuven.ac.be> For your information: I just created a small cookbook document http://scipy.org/Cookbook/InputOutput where it is explained how one can read and write Numpy arrays in human readable (ascii) format. The document describes how one can use read_array/write_array if SciPy is installed, or how one can use load/save if Matplotlib is installed. When neither of these two packages is installed, one basically has no other choice than to improvise, so I also give here a few examples how one could do this. Imho, there is something unsatisfactory about this need to improvise. 
Even when one defines Numpy crudely as the N-dimensional array object, and Scipy as the science you can do with these array objects, then I would intuitively still expect that ascii input/output would belong to Numpy rather than to Scipy. There are Numpy support functions for binary format, and for pickled format, but strangely enough not really for ascii format. tofile() and fromfile() do not preserve the shape of a 2d array, and are in practice therefore hardly usable. There may be a significant fraction of Numpy users that do not need SciPy for their work, and have only Numpy installed. My guess is that the read_array and write_array functions have already been re-invented many many times by these users. Imho, I therefore think that Numpy deserves its own read_array/write_array method. Does anyone else have this feeling, or am I the only one? :o) Joris Disclaimer: http://www.kuleuven.be/cwis/email_disclaimer.htm From oliphant.travis at ieee.org Sat May 20 15:26:03 2006 From: oliphant.travis at ieee.org (Travis Oliphant) Date: Sat May 20 15:26:03 2006 Subject: [Numpy-discussion] ascii input/output cookbook doc In-Reply-To: <1148163231.446f949fd98cd@webmail.ster.kuleuven.ac.be> References: <1148163231.446f949fd98cd@webmail.ster.kuleuven.ac.be> Message-ID: <446F9751.5030108@ieee.org> joris at ster.kuleuven.ac.be wrote: > For your information: I just created a small cookbook document > http://scipy.org/Cookbook/InputOutput > where it is explained how one can read and write Numpy arrays in human > readable (ascii) format. > > > The document describes how one can use read_array/write_array if SciPy > is installed, or how one can use load/save if Matplotlib is installed. > When neither of these two packages is installed, one basically has no > other choice than to improvise, so I also give here a few examples how > one could do this. > > Imho, there is something unsatisfactory about this need to improvise. 
> Ascii input/output of numpy arrays seems to me a very basic need. Even > when one defines Numpy crudely as the N-dimensional array object, and > Scipy as the science you can do with these array objects, then I would > intuitively still expect that ascii input/output would belong to Numpy > rather than to Scipy. There are Numpy support functions for binary format, > and for pickled format, but strangely enough not really for ascii format. > tofile() and fromfile() do not preserve the shape of a 2d array, and are > in practice therefore hardly usable. > > There may be a significant fraction of Numpy users that do not need SciPy > for their work, and have only Numpy installed. My guess is that the > read_array and write_array functions have already been re-invented many > many times by these users. Imho, I therefore think that Numpy deserves > its own read_array/write_array method. Does anyone else have this feeling, > or am I the only one? :o) > I think you are correct. I'd like to see better ascii input-output. That's why it's supported on a fundamental level in tofile and fromfile. SciPy's support for ascii reading and writing is rather slow as it has a lot of features. Something a little less grandiose, but still able to read and write simple ascii tables would be a good thing to bring into NumPy. General-purpose parsing can be very difficult, but a simple parser for 2-d arrays would probably be very useful. On the other hand, I've found that even though it understands only one separator at this point, fromfile is still pretty useful for table processing as long as you know the shape of what you want. -Travis > Joris > > > Disclaimer: http://www.kuleuven.be/cwis/email_disclaimer.htm > > > > ------------------------------------------------------- > Using Tomcat but need to do more? Need to support web services, security? 
> Get stuff done quickly with pre-integrated technology to make your job easier > Download IBM WebSphere Application Server v.1.0.1 based on Apache Geronimo > http://sel.as-us.falkag.net/sel?cmd=lnk&kid=120709&bid=263057&dat=121642 > _______________________________________________ > Numpy-discussion mailing list > Numpy-discussion at lists.sourceforge.net > https://lists.sourceforge.net/lists/listinfo/numpy-discussion > From simon at arrowtheory.com Sat May 20 19:43:02 2006 From: simon at arrowtheory.com (Simon Burton) Date: Sat May 20 19:43:02 2006 Subject: [Numpy-discussion] implicit coersion int8 -> int16 ? Message-ID: <20060521122810.54f24311.simon@arrowtheory.com> >>> a=numpy.array([1,2,3],numpy.Int8) >>> a array([1, 2, 3], dtype=int8) >>> a*2 array([2, 4, 6], dtype=int16) >>> My little 2 lives happily as an int8, so why is the int8 array cast to an int16 ? I spent quite a while finding this bug. Furthermore, this works: >>> a*a array([1, 4, 9], dtype=int8) but not this: >>> a*numpy.array(2,dtype=numpy.Int8) array([2, 4, 6], dtype=int16) >>> print numpy.__version__ 0.9.9.2533 Simon. -- Simon Burton, B.Sc. Licensed PO Box 8066 ANU Canberra 2601 Australia Ph. 61 02 6249 6940 http://arrowtheory.com From simon at arrowtheory.com Sat May 20 19:45:04 2006 From: simon at arrowtheory.com (Simon Burton) Date: Sat May 20 19:45:04 2006 Subject: [Numpy-discussion] implicit coersion int8 -> int16 ? In-Reply-To: <20060521122810.54f24311.simon@arrowtheory.com> References: <20060521122810.54f24311.simon@arrowtheory.com> Message-ID: <20060521122954.21305c89.simon@arrowtheory.com> On Sun, 21 May 2006 12:28:10 +0100 Simon Burton wrote: > > >>> a=numpy.array([1,2,3],numpy.Int8) > >>> a > array([1, 2, 3], dtype=int8) > >>> a*2 > array([2, 4, 6], dtype=int16) > >>> > > My little 2 lives happily as an int8, so why is the int8 array > cast to an int16 ? As a workaround, I found this: >>> a*=2 >>> a array([2, 4, 6], dtype=int8) Simon. -- Simon Burton, B.Sc. 
Licensed PO Box 8066 ANU Canberra 2601 Australia Ph. 61 02 6249 6940 http://arrowtheory.com From oliphant.travis at ieee.org Sat May 20 20:27:08 2006 From: oliphant.travis at ieee.org (Travis Oliphant) Date: Sat May 20 20:27:08 2006 Subject: [Numpy-discussion] implicit coersion int8 -> int16 ? In-Reply-To: <20060521122810.54f24311.simon@arrowtheory.com> References: <20060521122810.54f24311.simon@arrowtheory.com> Message-ID: <446FDDF5.3090001@ieee.org> Simon Burton wrote: >>>> a=numpy.array([1,2,3],numpy.Int8) >>>> a >>>> > array([1, 2, 3], dtype=int8) > >>>> a*2 >>>> > array([2, 4, 6], dtype=int16) > > > My little 2 lives happily as an int8, so why is the int8 array > cast to an int16 ? > Thanks for catching this bug in the logic of scalar upcasting. It should only affect this one case. It's now been fixed. -Travis From bryan at cole.uklinux.net Sun May 21 00:40:01 2006 From: bryan at cole.uklinux.net (Bryan Cole) Date: Sun May 21 00:40:01 2006 Subject: [Numpy-discussion] Re: numpy and mayavi In-Reply-To: <20060520205738.GC17718@eloy.itc.it> References: <20060520205738.GC17718@eloy.itc.it> Message-ID: <1148197014.32313.61.camel@pc1.cole.uklinux.net> On Sat, 2006-05-20 at 22:57 +0200, Emanuele Olivetti wrote: > I need to use a 3D visualization toolkit together with the > amazing numpy. On the scipy website mayavi/VTK are suggested but > as far as I tried, mayavi uses only Numeric and doesn't like > numpy arrays (and mayavi, last release, dates 13 September 2005 > so a bit earlier for numpy). You can safely install Numeric alongside numpy; you'll need to do this in order to run mayavi. Alternatively, you could try Paraview (http://www.paraview.org). Like mayavi, Paraview is based on VTK, but it's written in Tcl/Tk rather than python. It's more feature complete than mayavi and easier to use, in my view. A quick test shows that VTK-5 is happy to accept numpy arrays as "Void Pointers" for its vtkDataArrays. 
Using this method, you can construct any vtkDataObject from numpy arrays. If you just want to turn your 3D array into vtkImageData, you can use the vtkImageImport filter. Once you've got a vtkDataSet (vtkImageData or some other form), you can save this as a .vtk file, which either mayavi or paraview can then open. If you're not already familiar with VTK programming, then another way to get going is to convert your numpy array directly to a VTK data file using a pure python script. The file formats are documented at http://www.vtk.org/pdf/file-formats.pdf and they can be written in either binary or ASCII form. The 'legacy' .vtk formats are quite simple to construct. If you want to use mayavi from a script, then you need to convert your numpy arrays to Numeric arrays (using the Numeric.asarray function) for transfer to mayavi as needed. HTH Bryan > Do you know something about it or > could suggest alternative 3D visualization packages? > Basically my needs are related to this task: I've 3D matrices > and I'd like to see them not only as 2D slices. > > Thanks in advance, > > Emanuele From aisaac at american.edu Sun May 21 06:06:05 2006 From: aisaac at american.edu (Alan G Isaac) Date: Sun May 21 06:06:05 2006 Subject: [Numpy-discussion] Re: numpy and mayavi In-Reply-To: <1148197014.32313.61.camel@pc1.cole.uklinux.net> References: <20060520205738.GC17718@eloy.itc.it> <1148197014.32313.61.camel@pc1.cole.uklinux.net> Message-ID: On Sun, 21 May 2006, Bryan Cole apparently wrote: > another way to get going is to convert your numpy array > directly to a VTK data file using a pure python script.
For which http://cens.ioc.ee/projects/pyvtk/ may be useful. Cheers, Alan Isaac From simon at arrowtheory.com Sun May 21 19:47:02 2006 From: simon at arrowtheory.com (Simon Burton) Date: Sun May 21 19:47:02 2006 Subject: [Numpy-discussion] casting homogeneous struct arrays Message-ID: <20060522124610.3b8ba40a.simon@arrowtheory.com> This is something I will need to be able do: >>> a=numpy.array( [(1,2,3)], list('lll') ) >>> a.astype( 'l' ) Traceback (most recent call last): File "", line 1, in ? TypeError: an integer is required (and what a strange error message). Is there a workaround ? .tostring/.fromstring incurs a memcopy copy, if i am not mistaken. Simon. -- Simon Burton, B.Sc. Licensed PO Box 8066 ANU Canberra 2601 Australia Ph. 61 02 6249 6940 http://arrowtheory.com From oliphant.travis at ieee.org Mon May 22 00:48:12 2006 From: oliphant.travis at ieee.org (Travis Oliphant) Date: Mon May 22 00:48:12 2006 Subject: [Numpy-discussion] casting homogeneous struct arrays In-Reply-To: <20060522124610.3b8ba40a.simon@arrowtheory.com> References: <20060522124610.3b8ba40a.simon@arrowtheory.com> Message-ID: <44716C9A.4020502@ieee.org> Simon Burton wrote: > This is something I will need to be able do: > > >>>> a=numpy.array( [(1,2,3)], list('lll') ) >>>> >>>> a.astype( 'l' ) >>>> Currently record-arrays can't be cast like this to built-in types. it's true the error message could be more informative. What do you think should actually be done here anyway? How do you want to cast 3 long's to 1 long?
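(A hedged illustration of the no-copy route — getting shape (1, 3) via a view. The explicit int64 fields and the 'f0'/'f1'/'f2' field names are assumptions of this sketch, written against a current numpy rather than the 0.9.x of this thread:)

```python
import numpy as np

# One record with three same-typed integer fields, mirroring the
# numpy.array([(1, 2, 3)], list('lll')) example from the thread
# (int64 is used here for definiteness instead of the platform 'l').
a = np.array([(1, 2, 3)],
             dtype=[('f0', np.int64), ('f1', np.int64), ('f2', np.int64)])

# Reinterpret each record as a length-3 subarray: the result has
# shape (1, 3) and shares a's buffer -- no data is copied.
b = a.view((np.int64, 3))
print(b.shape)      # (1, 3)

b[0, 0] = 99        # writing through the view...
print(a['f0'][0])   # ...is visible in the original record
```

Because the view aliases the same memory, this sidesteps the tostring/fromstring copy mentioned above.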
You can use the .view method to view a record-array as a different data-type without the data copy that going through tostring and fromstring would incur. -Travis From simon at arrowtheory.com Mon May 22 01:03:01 2006 From: simon at arrowtheory.com (Simon Burton) Date: Mon May 22 01:03:01 2006 Subject: [Numpy-discussion] casting homogeneous struct arrays In-Reply-To: <44716C9A.4020502@ieee.org> References: <20060522124610.3b8ba40a.simon@arrowtheory.com> <44716C9A.4020502@ieee.org> Message-ID: <20060522174701.78a6ee9e.simon@arrowtheory.com> On Mon, 22 May 2006 01:47:38 -0600 Travis Oliphant wrote: > > Simon Burton wrote: > > This is something I will need to be able do: > > > > > >>>> a=numpy.array( [(1,2,3)], list('lll') ) > >>>> > >>>> a.astype( 'l' ) > >>>> > > Currently record-arrays can't be cast like this to built-in types. > it's true the error message could be more informative. > > What do you think should actually be done here anyway? How do you want > to cast 3 long's to 1 long? with shape = (1,3). I am visualizing the fields as columns, and when the array is homogeneous it seems natural to be able to switch between the two array types. Simon. -- Simon Burton, B.Sc. Licensed PO Box 8066 ANU Canberra 2601 Australia Ph. 61 02 6249 6940 http://arrowtheory.com From martin.wiechert at gmx.de Mon May 22 04:10:02 2006 From: martin.wiechert at gmx.de (Martin Wiechert) Date: Mon May 22 04:10:02 2006 Subject: [Numpy-discussion] numpy (?) bug. Message-ID: <200605221307.47277.martin.wiechert@gmx.de> Hi list, I've a rather huge and involved application which now that I've updated a couple of its dependencies (numpy/PyQwt/ScientificPython ...) keeps crashing on me after "certain patterns of interaction". I've pasted a typical backtrace below, the top part looks always very similar, in particular multiarray.so is always there. Also it's always an illegal call to free (). So you gurus out there, does this mean that numpy is the culprit? Any help would be appreciated.
Thanks, Martin *** glibc detected *** python: free(): invalid pointer: 0xb7a95ac0 *** ======= Backtrace: ========= /lib/libc.so.6[0xb7c00911] /lib/libc.so.6(__libc_free+0x84)[0xb7c01f84] /usr/local/lib/libpython2.4.so.1.0(PyObject_Free+0x51)[0xb7e3cf31] /usr/local/lib/python2.4/site-packages/numpy/core/multiarray.so[0xb7a47d97] /usr/local/lib/python2.4/site-packages/numpy/core/multiarray.so[0xb7a60dca] /usr/local/lib/python2.4/site-packages/numpy/core/umath.so[0xb7a2bd9f] /usr/local/lib/libpython2.4.so.1.0(PyCFunction_Call+0x11d)[0xb7e3964d] /usr/local/lib/libpython2.4.so.1.0(PyEval_EvalFrame+0x4e8e)[0xb7e7542e] /usr/local/lib/libpython2.4.so.1.0(PyEval_EvalCodeEx+0x869)[0xb7e765c9] /usr/local/lib/libpython2.4.so.1.0(PyEval_EvalFrame+0x408b)[0xb7e7462b] /usr/local/lib/libpython2.4.so.1.0(PyEval_EvalCodeEx+0x869)[0xb7e765c9] /usr/local/lib/libpython2.4.so.1.0[0xb7e25fce] /usr/local/lib/libpython2.4.so.1.0(PyObject_Call+0x37)[0xb7e0a217] /usr/local/lib/libpython2.4.so.1.0[0xb7e11b05] /usr/local/lib/libpython2.4.so.1.0(PyObject_Call+0x37)[0xb7e0a217] /usr/local/lib/libpython2.4.so.1.0[0xb7e5192e] /usr/local/lib/libpython2.4.so.1.0[0xb7e4aea5] /usr/local/lib/libpython2.4.so.1.0(PyObject_Call+0x37)[0xb7e0a217] /usr/local/lib/libpython2.4.so.1.0(PyEval_EvalFrame+0x25fd)[0xb7e72b9d] /usr/local/lib/libpython2.4.so.1.0(PyEval_EvalCodeEx+0x869)[0xb7e765c9] /usr/local/lib/libpython2.4.so.1.0(PyEval_EvalFrame+0x408b)[0xb7e7462b] /usr/local/lib/libpython2.4.so.1.0(PyEval_EvalCodeEx+0x869)[0xb7e765c9] /usr/local/lib/libpython2.4.so.1.0[0xb7e25efa] /usr/local/lib/libpython2.4.so.1.0(PyObject_Call+0x37)[0xb7e0a217] /usr/local/lib/libpython2.4.so.1.0[0xb7e11b05] /usr/local/lib/libpython2.4.so.1.0(PyObject_Call+0x37)[0xb7e0a217] /usr/local/lib/libpython2.4.so.1.0(PyEval_CallObjectWithKeywords+0x7c) [0xb7e6f91c] /usr/local/lib/python2.4/site-packages/sip.so[0xb3aa2817] /usr/local/lib/python2.4/site-packages/qt.so[0xb3b7bb80] /usr/lib/libqt-mt.so.3 
(_ZN7QObject15activate_signalEP15QConnectionListP8QUObject+0x16d)[0xb3618b5d] /usr/lib/libqt-mt.so.3 (_ZN9QListView13doubleClickedEP13QListViewItemRK6QPointi+0xe6)[0xb3964686] /usr/lib/libqt-mt.so.3 (_ZN9QListView29contentsMouseDoubleClickEventEP11QMouseEvent+0x168) [0xb36fc908] /usr/local/lib/python2.4/site-packages/qt.so[0xb3d6ae7c] /usr/lib/libqt-mt.so.3 (_ZN11QScrollView29viewportMouseDoubleClickEventEP11QMouseEvent+0xa5) [0xb372e345] /usr/local/lib/python2.4/site-packages/qt.so[0xb3d6a7bc] /usr/lib/libqt-mt.so.3(_ZN11QScrollView11eventFilterEP7QObjectP6QEvent+0x1e1) [0xb372b821] /usr/lib/libqt-mt.so.3(_ZN9QListView11eventFilterEP7QObjectP6QEvent+0xa6) [0xb36f9c96] /usr/local/lib/python2.4/site-packages/qt.so[0xb3d66aab] /usr/lib/libqt-mt.so.3(_ZN7QObject16activate_filtersEP6QEvent+0x5c) [0xb361845c] /usr/lib/libqt-mt.so.3(_ZN7QObject5eventEP6QEvent+0x3b)[0xb36184cb] /usr/lib/libqt-mt.so.3(_ZN7QWidget5eventEP6QEvent+0x2c)[0xb36514fc] /usr/lib/libqt-mt.so.3 (_ZN12QApplication14internalNotifyEP7QObjectP6QEvent+0x97)[0xb35b9c47] /usr/lib/libqt-mt.so.3(_ZN12QApplication6notifyEP7QObjectP6QEvent+0x1cb) [0xb35bab6b] /usr/local/lib/python2.4/site-packages/qt.so[0xb3f3bf05] /usr/lib/libqt-mt.so.3(_ZN9QETWidget19translateMouseEventEPK7_XEvent+0x4c2) [0xb3559c42] /usr/lib/libqt-mt.so.3(_ZN12QApplication15x11ProcessEventEP7_XEvent+0x916) [0xb3558e16] /usr/lib/libqt-mt.so.3(_ZN10QEventLoop13processEventsEj+0x4aa)[0xb356945a] /usr/lib/libqt-mt.so.3(_ZN10QEventLoop9enterLoopEv+0x48)[0xb35d0a78] /usr/lib/libqt-mt.so.3(_ZN10QEventLoop4execEv+0x2e)[0xb35d090e] /usr/lib/libqt-mt.so.3(_ZN12QApplication4execEv+0x1f)[0xb35b97ff] /usr/local/lib/python2.4/site-packages/qt.so[0xb3f39d6e] /usr/local/lib/libpython2.4.so.1.0(PyCFunction_Call+0x14d)[0xb7e3967d] /usr/local/lib/libpython2.4.so.1.0(PyEval_EvalFrame+0x4e8e)[0xb7e7542e] /usr/local/lib/libpython2.4.so.1.0(PyEval_EvalCodeEx+0x869)[0xb7e765c9] /usr/local/lib/libpython2.4.so.1.0(PyEval_EvalCode+0x63)[0xb7e76643] 
/usr/local/lib/libpython2.4.so.1.0(PyRun_FileExFlags+0xb7)[0xb7e9b5c7] /usr/local/lib/libpython2.4.so.1.0(PyRun_SimpleFileExFlags+0x198)[0xb7e9b7c8] /usr/local/lib/libpython2.4.so.1.0(PyRun_AnyFileExFlags+0x7a)[0xb7e9baba] /usr/local/lib/libpython2.4.so.1.0(Py_Main+0xbad)[0xb7ea1f3d] python(main+0x32)[0x80485e2] /lib/libc.so.6(__libc_start_main+0xdc)[0xb7bb287c] python[0x8048521] ======= Memory map: ======== 08048000-08049000 r-xp 00000000 03:05 205745 /usr/local/bin/python 08049000-0804a000 rw-p 00000000 03:05 205745 /usr/local/bin/python 0804a000-087cc000 rw-p 0804a000 00:00 0 [heap] b1f00000-b1f21000 rw-p b1f00000 00:00 0 b1f21000-b2000000 ---p b1f21000 00:00 0 b204b000-b21d7000 rw-p b204b000 00:00 0 b229d000-b24ef000 rw-p b229d000 00:00 0 b2551000-b2583000 rw-p b2551000 00:00 0 b25b5000-b25e7000 rw-p b25b5000 00:00 0 b2619000-b2713000 rw-p b2619000 00:00 0 b2745000-b2786000 rw-p b2745000 00:00 0 b2786000-b278f000 r-xp 00000000 03:05 42242 /usr/X11R6/lib/X11/locale/lib/common/xomGeneric.so.2 b278f000-b2790000 rw-p 00008000 03:05 42242 /usr/X11R6/lib/X11/locale/lib/common/xomGeneric.so.2 b2790000-b27d6000 r--p 00000000 03:05 47073 /var/X11R6/compose-cache/l2_024_35fe9fba b27d6000-b27f1000 r-xp 00000000 03:05 42237 /usr/X11R6/lib/X11/locale/lib/common/ximcp.so.2 b27f1000-b27f3000 rw-p 0001b000 03:05 42237 /usr/X11R6/lib/X11/locale/lib/common/ximcp.so.2 b27f3000-b2816000 r-xp 00000000 03:05 43814 /usr/lib/qt3/plugins/inputmethods/libqsimple.so b2816000-b2817000 rw-p 00022000 03:05 43814 /usr/lib/qt3/plugins/inputmethods/libqsimple.so b2817000-b281f000 r-xp 00000000 03:05 13957 /lib/libnss_files-2.4.so b281f000-b2821000 rw-p 00007000 03:05 13957 /lib/libnss_files-2.4.so b2821000-b2832000 r-xp 00000000 03:05 13951 /lib/libnsl-2.4.so b2832000-b2834000 rw-p 00010000 03:05 13951 /lib/libnsl-2.4.so b2834000-b2836000 rw-p b2834000 00:00 0 b2836000-b2860000 r-xp 00000000 03:05 61788 /opt/kde3/lib/libkdefx.so.4.2.0 b2860000-b2862000 rw-p 00029000 03:05 61788 
/opt/kde3/lib/libkdefx.so.4.2.0 b2862000-b2878000 r-xp 00000000 03:05 61766 /opt/kde3/lib/kde3/plugins/styles/light.so b2878000-b2879000 rw-p 00015000 03:05 61766 /opt/kde3/lib/kde3/plugins/styles/light.so b2879000-b289a000 r--p 00000000 03:05 36221 /usr/X11R6/lib/X11/fonts/truetype/DejaVuSerif.ttf b28a8000-b28c8000 r--p 00000000 03:05 36218 /usr/X11R6/lib/X11/fonts/truetype/DejaVuSerif-Bold.ttf b28c8000-b2909000 rw-p b28c8000 00:00 0 b2909000-b2934000 r-xp 00000000 03:05 19354 /usr/lib/liblcms.so.1.0.15 b2934000-b2936000 rw-p 0002a000 03:05 19354 /usr/lib/liblcms.so.1.0.15 b2936000-b2938000 rw-p b2936000 00:00 0 b2938000-b29a5000 r-xp 00000000 03:05 21150 /usr/lib/libmng.so.1.1.0.9 b29a5000-b29a8000 rw-p 0006c000 03:05 21150 /usr/lib/libmng.so.1.1.0.9 b29ab000-b29b3000 r-xp 00000000 03:05 43815 /usr/lib/qt3/plugins/inputmethods/libqxim.so b29b3000-b29b4000 rw-p 00008000 03:05 43815 /usr/lib/qt3/plugins/inputmethods/libqxim.so b29b4000-b29b7000 r-xp 00000000 03:05 43813 /usr/lib/qt3/plugins/inputmethods/libqimsw-none.so b29b7000-b29b8000 rw-p 00003000 03:05 43813 /usr/lib/qt3/plugins/inputmethods/libqimsw-none.so b29b8000-b29bf000 r-xp 00000000 03:05 43812 /usr/lib/qt3/plugins/inputmethods/libqimsw-multi.so b29bf000-b29c0000 rw-p 00007000 03:05 43812 /usr/lib/qt3/plugins/inputmethods/libqimsw-multi.so b29c0000-b29de000 r-xp 00000000 03:05 17664 /usr/lib/libjpeg.so.62.0.0 b29de000-b29df000 rw-p 0001d000 03:05 17664 /usr/lib/libjpeg.so.62.0.0 b29e0000-b29e8000 r-xp 00000000 03:05 13961 /lib/libnss_nis-2.4.so b29e8000-b29ea000 rw-p 00007000 03:05 13961 /lib/libnss_nis-2.4.so b29ea000-b29f0000 r-xp 00000000 03:05 13953 /lib/libnss_compat-2.4.so b29f0000-b29f2000 rw-p 00005000 03:05 13953 /lib/libnss_compat-2.4.so b29f2000-b29f6000 r-xp 00000000 03:05 43810 /usr/lib/qt3/plugins/imageformats/libqmng.so b29f6000-b29f7000 rw-p 00003000 03:05 43810 /usr/lib/qt3/plugins/imageformats/libqmng.so b29f7000-b29f8000 r-xp 00000000 03:05 42239 
/usr/X11R6/lib/X11/locale/lib/common/xlcUTF8Load.so.2 b29f8000-b29f9000 rw-p 00000000 03:05 42239 /usr/X11R6/lib/X11/locale/lib/common/xlcUTF8Load.so.2 b29f9000-b29ff000 r--s 00001000 03:05 68835 /var/cache/fontconfig/d0814903482a18ed8717ceb08fcf4410.cache-2 b29ff000-b2a04000 r--s 00001000 0Aborted From robert.kern at gmail.com Mon May 22 04:23:00 2006 From: robert.kern at gmail.com (Robert Kern) Date: Mon May 22 04:23:00 2006 Subject: [Numpy-discussion] Re: numpy (?) bug. In-Reply-To: <200605221307.47277.martin.wiechert@gmx.de> References: <200605221307.47277.martin.wiechert@gmx.de> Message-ID: Martin Wiechert wrote: > Hi list, > > I've a rather huge and involved application which now that I've updateded a > couple of its dependencies (numpy/PyQwt/ScientificPython ...) keeps crashing > on me after "certain patterns of interaction". I've pasted a typical > backtrace below, the top part looks always very similar, in particular > multiarray.so is always there. Also it's always an illegal call to free (). > > So you gurus out there, does this mean that numpy is the culprit? Possibly. Without access to your code, it's impossible for us to tell and even more impossible for us to fix it. If you can narrow it down to the function in numpy.core.multiarray that's being called and the arguments you are passing to it, we might be able to do something. -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From martin.wiechert at gmx.de Mon May 22 05:17:04 2006 From: martin.wiechert at gmx.de (Martin Wiechert) Date: Mon May 22 05:17:04 2006 Subject: [Numpy-discussion] Re: numpy (?) bug. In-Reply-To: References: <200605221307.47277.martin.wiechert@gmx.de> Message-ID: <200605221415.11183.martin.wiechert@gmx.de> Robert, Thanks for your reply. I have managed to get a gdb backtrace, see below. Does this help? 
Can I extract more useful information from gdb (never used it before)? Btw. I have no problem showing my code (besides embarrassment), but I'm pretty certain you don't want to read it ;-) I'll try to narrow it down, but it's difficult, e.g. it doesn't seem to happen the first time the guilty code is executed but rather when it is called the third or fourth time or even later. Thanks, Martin. On Monday 22 May 2006 13:21, Robert Kern wrote: > Martin Wiechert wrote: > > Hi list, > > > > I've a rather huge and involved application which now that I've updateded > > a couple of its dependencies (numpy/PyQwt/ScientificPython ...) keeps > > crashing on me after "certain patterns of interaction". I've pasted a > > typical backtrace below, the top part looks always very similar, in > > particular multiarray.so is always there. Also it's always an illegal > > call to free (). > > > > So you gurus out there, does this mean that numpy is the culprit? > > Possibly. Without access to your code, it's impossible for us to tell and > even more impossible for us to fix it. If you can narrow it down to the > function in numpy.core.multiarray that's being called and the arguments you > are passing to it, we might be able to do something. 
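(A note on how such a trace is typically captured, for readers who, like Martin, have not used gdb before — a sketch assuming GNU gdb; 'onset_view.py' is the script name that appears in the trace below:)

```text
$ gdb --args python onset_view.py
(gdb) run
  ... interact with the application until it crashes, then:
(gdb) backtrace
```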
(gdb) backtrace #0 0xffffe410 in __kernel_vsyscall () #1 0xb7c6b7d0 in raise () from /lib/libc.so.6 #2 0xb7c6cea3 in abort () from /lib/libc.so.6 #3 0xb7ca0f8b in __libc_message () from /lib/libc.so.6 #4 0xb7ca6911 in malloc_printerr () from /lib/libc.so.6 #5 0xb7ca7f84 in free () from /lib/libc.so.6 #6 0xb7ee2f31 in PyObject_Free (p=0x2) at Objects/obmalloc.c:798 #7 0xb7af9d97 in arraydescr_dealloc (self=0xb7b47ac0) at numpy/core/src/arrayobject.c:8956 #8 0xb7b12dca in array_dealloc (self=0x8714a18) at numpy/core/src/arrayobject.c:1477 #9 0xb7addd9f in PyUFunc_GenericReduction (self=0x80a49a0, args=0xb299152c, kwds=, operation=2) at numpy/core/src/ufuncobject.c:2521 #10 0xb7edf64d in PyCFunction_Call (func=0xb2d2c82c, arg=0xb299152c, kw=0x6) at Objects/methodobject.c:77 #11 0xb7f1b42e in PyEval_EvalFrame (f=0x8299d24) at Python/ceval.c:3563 #12 0xb7f1c5c9 in PyEval_EvalCodeEx (co=0xb2f623a0, globals=0xb41e59bc, locals=0x0, args=0x812dd1c, argcount=1, kws=0x812dd20, kwcount=0, defs=0x0, defcount=0, closure=0x0) at Python/ceval.c:2736 #13 0xb7f1a62b in PyEval_EvalFrame (f=0x812db8c) at Python/ceval.c:3656 #14 0xb7f1c5c9 in PyEval_EvalCodeEx (co=0xb2f414a0, globals=0xb2f3a0b4, locals=0x0, args=0xb2d45c78, argcount=1, kws=0x869bdf8, kwcount=2, defs=0xb2d148f8, defcount=8, closure=0x0) at Python/ceval.c:2736 #15 0xb7ecbfce in function_call (func=0xb2d148b4, arg=0xb2d45c6c, kw=0xb28052d4) at Objects/funcobject.c:548 #16 0xb7eb0217 in PyObject_Call (func=0x5e02, arg=0xb2d45c6c, kw=0xb28052d4) at Objects/abstract.c:1795 #17 0xb7eb7b05 in instancemethod_call (func=0xb427e324, arg=0xb2d45c6c, kw=0xb28052d4) at Objects/classobject.c:2447 #18 0xb7eb0217 in PyObject_Call (func=0x5e02, arg=0xb7c0102c, kw=0xb28052d4) at Objects/abstract.c:1795 #19 0xb7ef792e in slot_tp_init (self=0xb27fdbac, args=0xb7c0102c, kwds=0xb28052d4) at Objects/typeobject.c:4759 #20 0xb7ef0ea5 in type_call (type=, args=0xb7c0102c, kwds=0xb28052d4) at Objects/typeobject.c:435 #21 0xb7eb0217 in 
PyObject_Call (func=0x5e02, arg=0xb7c0102c, kw=0xb28052d4) at Objects/abstract.c:1795 #22 0xb7f18b9d in PyEval_EvalFrame (f=0x849ef24) at Python/ceval.c:3771 #23 0xb7f1c5c9 in PyEval_EvalCodeEx (co=0xb2d1b0a0, globals=0xb2d12f0c, locals=0x0, args=0x80ab9ec, argcount=2, kws=0x80ab9f4, kwcount=0, defs=0x0, defcount=0, closure=0x0) at Python/ceval.c:2736 #24 0xb7f1a62b in PyEval_EvalFrame (f=0x80ab88c) at Python/ceval.c:3656 #25 0xb7f1c5c9 in PyEval_EvalCodeEx (co=0xb7bd63e0, globals=0xb7bcd24c, locals=0x0, args=0xb297eee8, argcount=4, kws=0x0, kwcount=0, defs=0xb2d27d18, defcount=2, closure=0x0) at Python/ceval.c:2736 #26 0xb7ecbefa in function_call (func=0xb2d31b8c, arg=0xb297eedc, kw=0x0) at Objects/funcobject.c:548 #27 0xb7eb0217 in PyObject_Call (func=0x5e02, arg=0xb297eedc, kw=0x0) at Objects/abstract.c:1795 #28 0xb7eb7b05 in instancemethod_call (func=0xb427e2fc, arg=0xb297eedc, kw=0x0) at Objects/classobject.c:2447 #29 0xb7eb0217 in PyObject_Call (func=0x5e02, arg=0xb29a8d9c, kw=0x0) at Objects/abstract.c:1795 #30 0xb7f1591c in PyEval_CallObjectWithKeywords (func=0xb427e2fc, arg=0xb29a8d9c, kw=0x0) at Python/ceval.c:3430 #31 0xb3b48817 in initsip () from /usr/local/lib/python2.4/site-packages/sip.so #32 0xb3c21b80 in initqt () from /usr/local/lib/python2.4/site-packages/qt.so #33 0xb36beb5d in QObject::activate_signal () from /usr/lib/libqt-mt.so.3 #34 0xb3a0a686 in QListView::doubleClicked () from /usr/lib/libqt-mt.so.3 #35 0xb37a2908 in QListView::contentsMouseDoubleClickEvent () from /usr/lib/libqt-mt.so.3 #36 0xb3e10e7c in initqt () from /usr/local/lib/python2.4/site-packages/qt.so #37 0xb37d4345 in QScrollView::viewportMouseDoubleClickEvent () from /usr/lib/libqt-mt.so.3 #38 0xb3e107bc in initqt () from /usr/local/lib/python2.4/site-packages/qt.so #39 0xb37d1821 in QScrollView::eventFilter () from /usr/lib/libqt-mt.so.3 #40 0xb379fc96 in QListView::eventFilter () from /usr/lib/libqt-mt.so.3 #41 0xb3e0caab in initqt () from 
/usr/local/lib/python2.4/site-packages/qt.so #42 0xb36be45c in QObject::activate_filters () from /usr/lib/libqt-mt.so.3 #43 0xb36be4cb in QObject::event () from /usr/lib/libqt-mt.so.3 #44 0xb36f74fc in QWidget::event () from /usr/lib/libqt-mt.so.3 #45 0xb365fc47 in QApplication::internalNotify () from /usr/lib/libqt-mt.so.3 #46 0xb3660b6b in QApplication::notify () from /usr/lib/libqt-mt.so.3 #47 0xb3fe1f05 in initqt () from /usr/local/lib/python2.4/site-packages/qt.so #48 0xb35ffc42 in QETWidget::translateMouseEvent () from /usr/lib/libqt-mt.so.3 #49 0xb35fee16 in QApplication::x11ProcessEvent () from /usr/lib/libqt-mt.so.3 #50 0xb360f45a in QEventLoop::processEvents () from /usr/lib/libqt-mt.so.3 #51 0xb3676a78 in QEventLoop::enterLoop () from /usr/lib/libqt-mt.so.3 #52 0xb367690e in QEventLoop::exec () from /usr/lib/libqt-mt.so.3 #53 0xb365f7ff in QApplication::exec () from /usr/lib/libqt-mt.so.3 #54 0xb3fdfd6e in initqt () from /usr/local/lib/python2.4/site-packages/qt.so #55 0xb7edf67d in PyCFunction_Call (func=0xb2f42c8c, arg=0xb7c0102c, kw=0x0) at Objects/methodobject.c:108 #56 0xb7f1b42e in PyEval_EvalFrame (f=0x80577d4) at Python/ceval.c:3563 #57 0xb7f1c5c9 in PyEval_EvalCodeEx (co=0xb7be1360, globals=0xb7c19824, locals=0xb7c19824, args=0x0, argcount=0, kws=0x0, kwcount=0, defs=0x0, defcount=0, closure=0x0) at Python/ceval.c:2736 #58 0xb7f1c643 in PyEval_EvalCode (co=0xb7be1360, globals=0xb7c19824, locals=0xb7c19824) at Python/ceval.c:484 #59 0xb7f415c7 in PyRun_FileExFlags (fp=0x804a008, filename=0xbfb61faf "onset_view.py", start=257, globals=0xb7c19824, locals=0xb7c19824, closeit=1, flags=0xbfb60814) at Python/pythonrun.c:1265 #60 0xb7f417c8 in PyRun_SimpleFileExFlags (fp=0x804a008, filename=0xbfb61faf "onset_view.py", closeit=1, flags=0xbfb60814) at Python/pythonrun.c:860 #61 0xb7f41aba in PyRun_AnyFileExFlags (fp=0x804a008, filename=0xbfb61faf "onset_view.py", closeit=1, flags=0xbfb60814) at Python/pythonrun.c:664 #62 0xb7f47f3d in Py_Main (argc=1, 
argv=0xbfb608e4) at Modules/main.c:493 #63 0x080485e2 in main (argc=Cannot access memory at address 0x5e02 ) at Modules/python.c:23 From ivilata at carabos.com Mon May 22 05:20:13 2006 From: ivilata at carabos.com (Ivan Vilata i Balaguer) Date: Mon May 22 05:20:13 2006 Subject: [Numpy-discussion] Power with negative exponent Message-ID: <4471AC10.9070000@carabos.com> Hi all, when working with numexpr, I have come across a curiosity in both numarray and numpy:: In [30]:b = numpy.array([1,2,3,4]) In [31]:b ** -1 Out[31]:array([1, 0, 0, 0]) In [32]:4 ** -1 Out[32]:0.25 In [33]: According to http://docs.python.org/ref/power.html: For int and long int operands, the result has the same type as the operands (after coercion) unless the second argument is negative; in that case, all arguments are converted to float and a float result is delivered. Then, shouldn?t be ``b ** -1 == array([1.0, 0.5, 0.33333333, 0.25])``? Is this behaviour intentional? (I googled for previous messages on the topic but I didn?t find any.) Thanks, :: Ivan Vilata i Balaguer >qo< http://www.carabos.com/ C?rabos Coop. V. V V Enjoy Data "" -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 307 bytes Desc: OpenPGP digital signature URL: From simon at arrowtheory.com Mon May 22 05:49:04 2006 From: simon at arrowtheory.com (Simon Burton) Date: Mon May 22 05:49:04 2006 Subject: [Numpy-discussion] Re: numpy (?) bug. 
In-Reply-To: <200605221415.11183.martin.wiechert@gmx.de> References: <200605221307.47277.martin.wiechert@gmx.de> <200605221415.11183.martin.wiechert@gmx.de> Message-ID: <20060522223128.0967a826.simon@arrowtheory.com> On Mon, 22 May 2006 14:15:11 +0200 Martin Wiechert wrote: > #9 0xb7addd9f in PyUFunc_GenericReduction (self=0x80a49a0, args=0xb299152c, > kwds=, operation=2) > at numpy/core/src/ufuncobject.c:2521 I was getting segfaults with version 0.9.7.2523: #0 0xb7c5d612 in PyUFunc_GenericFunction (self=0x81b6d08, args=0x76484a40, mps=0xbfffad10) at ufuncobject.c:1484 An svn update fixed it. Simon. -- Simon Burton, B.Sc. Licensed PO Box 8066 ANU Canberra 2601 Australia Ph. 61 02 6249 6940 http://arrowtheory.com From Martin.Wiechert at mpimf-heidelberg.mpg.de Mon May 22 07:26:09 2006 From: Martin.Wiechert at mpimf-heidelberg.mpg.de (Martin Takeo Wiechert) Date: Mon May 22 07:26:09 2006 Subject: [Numpy-discussion] Re: numpy (?) bug. In-Reply-To: References: <200605221307.47277.martin.wiechert@gmx.de> Message-ID: <200605221624.01235.wiechert@mpimf-heidelberg.mpg.de> Robert, I nailed it down. Look at the short interactive session below.
numpy version is 0.9.8. Regards, Martin. P.S.: Simon, thanks for your hint. 0.9.8 is only a few days old. When did you do your svn update? Python 2.4.3 (#1, May 12 2006, 05:35:54) [GCC 4.1.0 (SUSE Linux)] on linux2 Type "help", "copyright", "credits" or "license" for more information. >>> from numpy import * >>> multiply.reduceat ((15,15,15,15), (0,2)) array([225, 225]) >>> multiply.reduceat ((15,15,15,15), (0,2)) *** glibc detected *** python: free(): invalid pointer: 0xb7a2eac0 *** ======= Backtrace: ========= /lib/libc.so.6[0xb7c1a911] /lib/libc.so.6(__libc_free+0x84)[0xb7c1bf84] /usr/local/lib/libpython2.4.so.1.0(PyObject_Free+0x51)[0xb7e56f31] /usr/local/lib/python2.4/site-packages/numpy/core/multiarray.so[0xb79e0d97] /usr/local/lib/python2.4/site-packages/numpy/core/multiarray.so[0xb79f9dca] /usr/local/lib/python2.4/site-packages/numpy/core/umath.so[0xb7983d9f] /usr/local/lib/libpython2.4.so.1.0(PyCFunction_Call+0x11d)[0xb7e5364d] /usr/local/lib/libpython2.4.so.1.0(PyEval_EvalFrame+0x4e8e)[0xb7e8f42e] /usr/local/lib/libpython2.4.so.1.0(PyEval_EvalCodeEx+0x869)[0xb7e905c9] /usr/local/lib/libpython2.4.so.1.0(PyEval_EvalCode+0x63)[0xb7e90643] /usr/local/lib/libpython2.4.so.1.0(PyRun_InteractiveOneFlags+0x1fd) [0xb7eb512d] /usr/local/lib/libpython2.4.so.1.0(PyRun_InteractiveLoopFlags+0x5b) [0xb7eb526b] /usr/local/lib/libpython2.4.so.1.0(PyRun_AnyFileExFlags+0x47)[0xb7eb5a87] /usr/local/lib/libpython2.4.so.1.0(Py_Main+0xbad)[0xb7ebbf3d] python(main+0x32)[0x80485e2] /lib/libc.so.6(__libc_start_main+0xdc)[0xb7bcc87c] python[0x8048521] ======= Memory map: ======== 08048000-08049000 r-xp 00000000 03:05 205745 /usr/local/bin/python 08049000-0804a000 rw-p 00000000 03:05 205745 /usr/local/bin/python 0804a000-081ad000 rw-p 0804a000 00:00 0 [heap] b7000000-b7021000 rw-p b7000000 00:00 0 b7021000-b7100000 ---p b7021000 00:00 0 b71b4000-b7297000 rw-p b71b4000 00:00 0 b7297000-b72b2000 r-xp 00000000 03:05 212490 /usr/local/lib/python2.4/site-packages/numpy/random/mtrand.so 
b72b2000-b72b6000 rw-p 0001a000 03:05 212490 /usr/local/lib/python2.4/site-packages/numpy/random/mtrand.so b72b6000-b72d0000 r-xp 00000000 03:05 201845 /usr/lib/libg2c.so.0.0.0 b72d0000-b72d1000 rw-p 00019000 03:05 201845 /usr/lib/libg2c.so.0.0.0 b72d1000-b72d4000 rw-p b72d1000 00:00 0 b72e2000-b72eb000 r-xp 00000000 03:05 212480 /usr/local/lib/python2.4/site-packages/numpy/dft/fftpack_lite.so b72eb000-b72ec000 rw-p 00008000 03:05 212480 /usr/local/lib/python2.4/site-packages/numpy/dft/fftpack_lite.so b72ec000-b758c000 r-xp 00000000 03:05 212489 /usr/local/lib/python2.4/site-packages/numpy/linalg/lapack_lite.so b758c000-b758e000 rw-p 0029f000 03:05 212489 /usr/local/lib/python2.4/site-packages/numpy/linalg/lapack_lite.so b758e000-b75ef000 rw-p b758e000 00:00 0 b75ef000-b75f2000 r-xp 00000000 03:05 208618 /usr/local/lib/python2.4/lib-dynload/math.so b75f2000-b75f3000 rw-p 00002000 03:05 208618 /usr/local/lib/python2.4/lib-dynload/math.so b75f3000-b75f5000 r-xp 00000000 03:05 212481 /usr/local/lib/python2.4/site-packages/numpy/lib/_compiled_base.so b75f5000-b75f6000 rw-p 00002000 03:05 212481 /usr/local/lib/python2.4/site-packages/numpy/lib/_compiled_base.so b75f6000-b7610000 r-xp 00000000 03:05 212486 /usr/local/lib/python2.4/site-packages/numpy/core/scalarmath.so b7610000-b7611000 rw-p 00019000 03:05 212486 /usr/local/lib/python2.4/site-packages/numpy/core/scalarmath.so b7611000-b7614000 r-xp 00000000 03:05 208625 /usr/local/lib/python2.4/lib-dynload/mmap.so b7614000-b7615000 rw-p 00003000 03:05 208625 /usr/local/lib/python2.4/lib-dynload/mmap.so b7615000-b7656000 rw-p b7615000 00:00 0 b7656000-b765a000 r-xp 00000000 03:05 208644 /usr/local/lib/python2.4/lib-dynload/strop.so b765a000-b765c000 rw-p 00003000 03:05 208644 /usr/local/lib/python2.4/lib-dynload/strop.so b765c000-b765f000 r-xp 00000000 03:05 208595 /usr/local/lib/python2.4/lib-dynload/cStringIO.so b765f000-b7660000 rw-p 00003000 03:05 208595 /usr/local/lib/python2.4/lib-dynload/cStringIO.so 
b7660000-b7671000 r-xp 00000000 03:05 208619 /usr/local/lib/python2.4/lib-dynload/cPickle.so b7671000-b7672000 rw-p 00010000 03:05 208619 /usr/local/lib/python2.4/lib-dynload/cPickle.so b7672000-b7964000 r-xp 00000000 03:05 212484 /usr/local/lib/python2.4/site-packages/numpy/core/_dotblas.so b7964000-b7966000 rw-p 002f1000 03:05 212484 /usr/local/lib/python2.4/site-packages/numpy/core/_dotblas.so b7966000-b798e000 r-xp 00000000 03:05 212487 /usr/local/lib/python2.4/site-packages/numpy/core/umath.so b798e000-b7991000 rw-p 00027000 03:05 212487 /usr/local/lib/python2.4/site-packages/numpy/core/umath.so b7991000-b79d3000 rw-p b7991000 00:00 0 b79d3000-b7a28000 r-xp 00000000 03:05 212482 /usr/local/lib/python2.4/site-packages/numpy/core/multiarray.so b7a28000-b7a32000 rw-p 00054000 03:05 212482 /usr/local/lib/python2.4/site-packages/numpy/core/multiarray.so b7a32000-b7a6d000 r-xp 00000000 03:05 17777 /lib/libncurses.so.5.5 b7a6d000-b7a78000 rw-p 0003a000 03:05 17777 /lib/libncurses.so.5.5 b7a78000-b7a79000 rw-p b7a78000 00:00 0 b7a79000-b7aba000 r-xp 00000000 03:05 17792 /usr/lib/libncursesw.so.5.5 b7aba000-b7ac6000 rw-p 00040000 03:05 17792 /usr/lib/libncursesw.so.5.5 b7ac6000-b7af0000 r-xp 00000000 03:05 18393 /lib/libreadline.so.5.1 b7af0000-b7af4000 rw-p 0002a000 03:05 18393 /lib/libreadline.so.5.1 b7af4000-b7af5000 rw-p b7af4000 00:00 0 b7af5000-b7af8000 r-xp 00000000 03:05 208646 /usr/local/lib/python2.4/lib-dynload/readline.so b7af8000-b7af9000 rw-p 000030Aborted From fullung at gmail.com Mon May 22 07:35:01 2006 From: fullung at gmail.com (Albert Strasheim) Date: Mon May 22 07:35:01 2006 Subject: [Numpy-discussion] Re: numpy (?) bug. In-Reply-To: <200605221624.01235.wiechert@mpimf-heidelberg.mpg.de> Message-ID: <005001c67dac$ba3b1940$0a84a8c0@dsp.sun.ac.za> Hello all This bug seems to be present in 0.9.9.2536. Martin, it would be great if you could create a ticket in Trac. 
http://projects.scipy.org/scipy/numpy/newticket Regards, Albert > -----Original Message----- > From: numpy-discussion-admin at lists.sourceforge.net [mailto:numpy- > discussion-admin at lists.sourceforge.net] On Behalf Of Martin Takeo Wiechert > Sent: 22 May 2006 16:24 > To: numpy-discussion at lists.sourceforge.net > Subject: Re: [Numpy-discussion] Re: numpy (?) bug. > > Robert, > > I nailed it down. Look at the short interactive session below. numpy > version > is 0.9.8. > > Regards, Martin. > > P.S.: Simon, thanks for your hint. 0.9.8 is only a few days old. When did > you > do your svn update? From nwagner at iam.uni-stuttgart.de Mon May 22 07:35:13 2006 From: nwagner at iam.uni-stuttgart.de (Nils Wagner) Date: Mon May 22 07:35:13 2006 Subject: [Fwd: Re: [Numpy-discussion] Re: numpy (?) bug.] Message-ID: <4471CBF0.3090602@iam.uni-stuttgart.de> -------------- next part -------------- An embedded message was scrubbed... From: unknown sender Subject: no subject Date: no date Size: 38 URL: From nwagner at iam.uni-stuttgart.de Mon May 22 10:32:05 2006 From: nwagner at iam.uni-stuttgart.de (Nils Wagner) Date: Mon, 22 May 2006 16:32:05 +0200 Subject: [Numpy-discussion] Re: numpy (?) bug. In-Reply-To: <200605221624.01235.wiechert@mpimf-heidelberg.mpg.de> References: <200605221307.47277.martin.wiechert@gmx.de> <200605221624.01235.wiechert@mpimf-heidelberg.mpg.de> Message-ID: <4471CB65.9090903@iam.uni-stuttgart.de> Martin Takeo Wiechert wrote: > Robert, > > I nailed it down. Look at the short interactive session below. numpy version > is 0.9.8. > > Regards, Martin. > > P.S.: Simon, thanks for your hint. 0.9.8 is only a few days old. When did you > do your svn update? > > > Python 2.4.3 (#1, May 12 2006, 05:35:54) > [GCC 4.1.0 (SUSE Linux)] on linux2 > Type "help", "copyright", "credits" or "license" for more information. 
> >>>> from numpy import * >>>> multiply.reduceat ((15,15,15,15), (0,2)) >>>> > array([225, 225]) > >>>> multiply.reduceat ((15,15,15,15), (0,2)) >>>> > *** glibc detected *** python: free(): invalid pointer: 0xb7a2eac0 *** > ======= Backtrace: ========= > /lib/libc.so.6[0xb7c1a911] > /lib/libc.so.6(__libc_free+0x84)[0xb7c1bf84] > /usr/local/lib/libpython2.4.so.1.0(PyObject_Free+0x51)[0xb7e56f31] > /usr/local/lib/python2.4/site-packages/numpy/core/multiarray.so[0xb79e0d97] > /usr/local/lib/python2.4/site-packages/numpy/core/multiarray.so[0xb79f9dca] > /usr/local/lib/python2.4/site-packages/numpy/core/umath.so[0xb7983d9f] > /usr/local/lib/libpython2.4.so.1.0(PyCFunction_Call+0x11d)[0xb7e5364d] > /usr/local/lib/libpython2.4.so.1.0(PyEval_EvalFrame+0x4e8e)[0xb7e8f42e] > /usr/local/lib/libpython2.4.so.1.0(PyEval_EvalCodeEx+0x869)[0xb7e905c9] > /usr/local/lib/libpython2.4.so.1.0(PyEval_EvalCode+0x63)[0xb7e90643] > /usr/local/lib/libpython2.4.so.1.0(PyRun_InteractiveOneFlags+0x1fd) > [0xb7eb512d] > /usr/local/lib/libpython2.4.so.1.0(PyRun_InteractiveLoopFlags+0x5b) > [0xb7eb526b] > /usr/local/lib/libpython2.4.so.1.0(PyRun_AnyFileExFlags+0x47)[0xb7eb5a87] > /usr/local/lib/libpython2.4.so.1.0(Py_Main+0xbad)[0xb7ebbf3d] > python(main+0x32)[0x80485e2] > /lib/libc.so.6(__libc_start_main+0xdc)[0xb7bcc87c] > python[0x8048521] > ======= Memory map: ======== > 08048000-08049000 r-xp 00000000 03:05 205745 /usr/local/bin/python > 08049000-0804a000 rw-p 00000000 03:05 205745 /usr/local/bin/python > 0804a000-081ad000 rw-p 0804a000 00:00 0 [heap] > b7000000-b7021000 rw-p b7000000 00:00 0 > b7021000-b7100000 ---p b7021000 00:00 0 > b71b4000-b7297000 rw-p b71b4000 00:00 0 > b7297000-b72b2000 r-xp 00000000 03:05 > 212490 /usr/local/lib/python2.4/site-packages/numpy/random/mtrand.so > b72b2000-b72b6000 rw-p 0001a000 03:05 > 212490 /usr/local/lib/python2.4/site-packages/numpy/random/mtrand.so > b72b6000-b72d0000 r-xp 00000000 03:05 201845 /usr/lib/libg2c.so.0.0.0 > b72d0000-b72d1000 rw-p 
00019000 03:05 201845 /usr/lib/libg2c.so.0.0.0 > b72d1000-b72d4000 rw-p b72d1000 00:00 0 > b72e2000-b72eb000 r-xp 00000000 03:05 > 212480 /usr/local/lib/python2.4/site-packages/numpy/dft/fftpack_lite.so > b72eb000-b72ec000 rw-p 00008000 03:05 > 212480 /usr/local/lib/python2.4/site-packages/numpy/dft/fftpack_lite.so > b72ec000-b758c000 r-xp 00000000 03:05 > 212489 /usr/local/lib/python2.4/site-packages/numpy/linalg/lapack_lite.so > b758c000-b758e000 rw-p 0029f000 03:05 > 212489 /usr/local/lib/python2.4/site-packages/numpy/linalg/lapack_lite.so > b758e000-b75ef000 rw-p b758e000 00:00 0 > b75ef000-b75f2000 r-xp 00000000 03:05 > 208618 /usr/local/lib/python2.4/lib-dynload/math.so > b75f2000-b75f3000 rw-p 00002000 03:05 > 208618 /usr/local/lib/python2.4/lib-dynload/math.so > b75f3000-b75f5000 r-xp 00000000 03:05 > 212481 /usr/local/lib/python2.4/site-packages/numpy/lib/_compiled_base.so > b75f5000-b75f6000 rw-p 00002000 03:05 > 212481 /usr/local/lib/python2.4/site-packages/numpy/lib/_compiled_base.so > b75f6000-b7610000 r-xp 00000000 03:05 > 212486 /usr/local/lib/python2.4/site-packages/numpy/core/scalarmath.so > b7610000-b7611000 rw-p 00019000 03:05 > 212486 /usr/local/lib/python2.4/site-packages/numpy/core/scalarmath.so > b7611000-b7614000 r-xp 00000000 03:05 > 208625 /usr/local/lib/python2.4/lib-dynload/mmap.so > b7614000-b7615000 rw-p 00003000 03:05 > 208625 /usr/local/lib/python2.4/lib-dynload/mmap.so > b7615000-b7656000 rw-p b7615000 00:00 0 > b7656000-b765a000 r-xp 00000000 03:05 > 208644 /usr/local/lib/python2.4/lib-dynload/strop.so > b765a000-b765c000 rw-p 00003000 03:05 > 208644 /usr/local/lib/python2.4/lib-dynload/strop.so > b765c000-b765f000 r-xp 00000000 03:05 > 208595 /usr/local/lib/python2.4/lib-dynload/cStringIO.so > b765f000-b7660000 rw-p 00003000 03:05 > 208595 /usr/local/lib/python2.4/lib-dynload/cStringIO.so > b7660000-b7671000 r-xp 00000000 03:05 > 208619 /usr/local/lib/python2.4/lib-dynload/cPickle.so > b7671000-b7672000 rw-p 00010000 03:05 > 
208619 /usr/local/lib/python2.4/lib-dynload/cPickle.so > b7672000-b7964000 r-xp 00000000 03:05 > 212484 /usr/local/lib/python2.4/site-packages/numpy/core/_dotblas.so > b7964000-b7966000 rw-p 002f1000 03:05 > 212484 /usr/local/lib/python2.4/site-packages/numpy/core/_dotblas.so > b7966000-b798e000 r-xp 00000000 03:05 > 212487 /usr/local/lib/python2.4/site-packages/numpy/core/umath.so > b798e000-b7991000 rw-p 00027000 03:05 > 212487 /usr/local/lib/python2.4/site-packages/numpy/core/umath.so > b7991000-b79d3000 rw-p b7991000 00:00 0 > b79d3000-b7a28000 r-xp 00000000 03:05 > 212482 /usr/local/lib/python2.4/site-packages/numpy/core/multiarray.so > b7a28000-b7a32000 rw-p 00054000 03:05 > 212482 /usr/local/lib/python2.4/site-packages/numpy/core/multiarray.so > b7a32000-b7a6d000 r-xp 00000000 03:05 17777 /lib/libncurses.so.5.5 > b7a6d000-b7a78000 rw-p 0003a000 03:05 17777 /lib/libncurses.so.5.5 > b7a78000-b7a79000 rw-p b7a78000 00:00 0 > b7a79000-b7aba000 r-xp 00000000 03:05 17792 /usr/lib/libncursesw.so.5.5 > b7aba000-b7ac6000 rw-p 00040000 03:05 17792 /usr/lib/libncursesw.so.5.5 > b7ac6000-b7af0000 r-xp 00000000 03:05 18393 /lib/libreadline.so.5.1 > b7af0000-b7af4000 rw-p 0002a000 03:05 18393 /lib/libreadline.so.5.1 > b7af4000-b7af5000 rw-p b7af4000 00:00 0 > b7af5000-b7af8000 r-xp 00000000 03:05 > 208646 /usr/local/lib/python2.4/lib-dynload/readline.so > b7af8000-b7af9000 rw-p 000030Aborted > > > ------------------------------------------------------- > Using Tomcat but need to do more? Need to support web services, security? > Get stuff done quickly with pre-integrated technology to make your job easier > Download IBM WebSphere Application Server v.1.0.1 based on Apache Geronimo > http://sel.as-us.falkag.net/sel?cmd=lnk&kid=120709&bid=263057&dat=121642 > _______________________________________________ > Numpy-discussion mailing list > Numpy-discussion at lists.sourceforge.net > https://lists.sourceforge.net/lists/listinfo/numpy-discussion > I can reproduce it. 
Numpy version 0.9.9.2537 Scipy version 0.4.9.1906 Starting program: /usr/bin/python [Thread debugging using libthread_db enabled] [New Thread 16384 (LWP 17182)] Python 2.4.1 (#1, Sep 12 2005, 23:33:18) [GCC 4.0.2 20050901 (prerelease) (SUSE Linux)] on linux2 Type "help", "copyright", "credits" or "license" for more information. >>> from numpy import * >>> multiply.reduceat ((15,15,15,15), (0,2)) array([225, 225]) >>> multiply.reduceat ((15,15,15,15), (0,2)) array([225, 225]) >>> multiply.reduceat ((15,15,15,15), (0,2)) array([225, 225]) >>> multiply.reduceat ((15,15,15,15), (0,2)) *** glibc detected *** free(): invalid pointer: 0x00002aaaabe87a40 *** Program received signal SIGABRT, Aborted. [Switching to Thread 16384 (LWP 17182)] 0x00002aaaab6164f9 in kill () from /lib64/libc.so.6 (gdb) bt #0 0x00002aaaab6164f9 in kill () from /lib64/libc.so.6 #1 0x00002aaaaadf3821 in pthread_kill () from /lib64/libpthread.so.0 #2 0x00002aaaaadf3be2 in raise () from /lib64/libpthread.so.0 #3 0x00002aaaab61759d in abort () from /lib64/libc.so.6 #4 0x00002aaaab64a7be in __libc_message () from /lib64/libc.so.6 #5 0x00002aaaab64f76c in malloc_printerr () from /lib64/libc.so.6 #6 0x00002aaaab65025a in free () from /lib64/libc.so.6 #7 0x00002aaaabd51879 in array_dealloc (self=0x749be0) at arrayobject.c:1477 #8 0x00002aaaaac1ae77 in insertdict (mp=0x50b0b0, key=0x2aaaaaae79c0, hash=12160036574, value=0x2aaaaadbee30) at dictobject.c:397 #9 0x00002aaaaac1b147 in PyDict_SetItem (op=0x50b0b0, key=0x2aaaaaae79c0, value=0x2aaaaadbee30) at dictobject.c:551 #10 0x00002aaaaac206e4 in PyObject_GenericSetAttr (obj=0x2aaaaaac2bb0, name=0x2aaaaaae79c0, value=0x2aaaaadbee30) at object.c:1370 #11 0x00002aaaaac20136 in PyObject_SetAttr (v=0x2aaaaaac2bb0, name=0x2aaaaaae79c0, value=0x2aaaaadbee30) at object.c:1128 #12 0x00002aaaaac202cd in PyObject_SetAttrString (v=0x2aaaaaac2bb0, name=, w=0x2aaaaadbee30) at object.c:1044 #13 0x00002aaaaac73572 in sys_displayhook (self=, o=0x74c4d0) at sysmodule.c:105 
#14 0x00002aaaaabfa760 in PyObject_Call (func=, arg=, kw=) at abstract.c:1751 #15 0x00002aaaaac4efe0 in PyEval_CallObjectWithKeywords (func=0x2aaaaaacf170, arg=0x2aaaaccaa890, kw=0x0) at ceval.c:3419 #16 0x00002aaaaac53a07 in PyEval_EvalFrame (f=0x541960) at ceval.c:1507 #17 0x00002aaaaac55404 in PyEval_EvalCodeEx (co=0x2aaaaccadab0, globals=, locals=, args=0x0, argcount=0, kws=0x0, kwcount=0, defs=0x0, defcount=0, closure=0x0) at ceval.c:2730 #18 0x00002aaaaac556d2 in PyEval_EvalCode (co=, globals=, locals=) at ceval.c:484 #19 0x00002aaaaac70719 in run_node (n=, filename=, globals=0x503b50, locals=0x503b50, flags=) at pythonrun.c:1265 #20 0x00002aaaaac71bc7 in PyRun_InteractiveOneFlags (fp=, filename=0x2aaaaac95e73 "", flags=0x7fffff82c870) at pythonrun.c:762 #21 0x00002aaaaac71cbe in PyRun_InteractiveLoopFlags (fp=0x2aaaab809e00, filename=0x2aaaaac95e73 "", flags=0x7fffff82c870) at pythonrun.c:695 #22 0x00002aaaaac7221c in PyRun_AnyFileExFlags (fp=0x2aaaab809e00, filename=0x2aaaaac95e73 "", closeit=0, flags=0x7fffff82c870) at pythonrun.c:658 #23 0x00002aaaaac77b25 in Py_Main (argc=, argv=0x7fffff82e7bf) at main.c:484 #24 0x00002aaaab603ced in __libc_start_main () from /lib64/libc.so.6 #25 0x00000000004006ea in _start () at start.S:113 #26 0x00007fffff82c908 in ?? () #27 0x00002aaaaabc19c0 in rtld_errno () from /lib64/ld-linux-x86-64.so.2 Nils --------------070904090207090509070803-- From robert.kern at gmail.com Mon May 22 07:51:38 2006 From: robert.kern at gmail.com (Robert Kern) Date: Mon May 22 07:51:38 2006 Subject: [Numpy-discussion] Re: numpy (?) bug. In-Reply-To: <005001c67dac$ba3b1940$0a84a8c0@dsp.sun.ac.za> References: <200605221624.01235.wiechert@mpimf-heidelberg.mpg.de> <005001c67dac$ba3b1940$0a84a8c0@dsp.sun.ac.za> Message-ID: Albert Strasheim wrote: > Hello all > > This bug seems to be present in 0.9.9.2536. In order to bracket the bug, I will add that I do not see it in 0.9.7.2477 . 
-- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From martin.wiechert at gmx.de Mon May 22 07:52:17 2006 From: martin.wiechert at gmx.de (Martin Wiechert) Date: Mon May 22 07:52:17 2006 Subject: [Numpy-discussion] Re: numpy (?) bug. In-Reply-To: <005001c67dac$ba3b1940$0a84a8c0@dsp.sun.ac.za> References: <005001c67dac$ba3b1940$0a84a8c0@dsp.sun.ac.za> Message-ID: <200605221649.47954.martin.wiechert@gmx.de> I've created ticket #128 Regards, Martin On Monday 22 May 2006 16:33, Albert Strasheim wrote: > Hello all > > This bug seems to be present in 0.9.9.2536. Martin, it would be great if > you could create a ticket in Trac. > > http://projects.scipy.org/scipy/numpy/newticket > > Regards, > > Albert > > > -----Original Message----- > > From: numpy-discussion-admin at lists.sourceforge.net [mailto:numpy- > > discussion-admin at lists.sourceforge.net] On Behalf Of Martin Takeo > > Wiechert Sent: 22 May 2006 16:24 > > To: numpy-discussion at lists.sourceforge.net > > Subject: Re: [Numpy-discussion] Re: numpy (?) bug. > > > > Robert, > > > > I nailed it down. Look at the short interactive session below. numpy > > version > > is 0.9.8. > > > > Regards, Martin. > > > > P.S.: Simon, thanks for your hint. 0.9.8 is only a few days old. When did > > you > > do your svn update? From bblais at bryant.edu Mon May 22 09:50:02 2006 From: bblais at bryant.edu (Brian Blais) Date: Mon May 22 09:50:02 2006 Subject: [Numpy-discussion] saving data: which format is recommended? Message-ID: <4471EB90.7030005@bryant.edu> Hello, I am trying to save numpy arrays (actually a list of them) for later use, and distribution to others. Up until yesterday, I'd been using the zpickle module from the Cookbook, which is just the pickle binary format with gzip compression. Yesterday, I upgraded my operating system, and now I can't read those files. I am using numpy 0.9.9.2536, and unfortunately I can't recall the version that I was using before, but it was relatively recent. I also upgraded from Python 2.3 to 2.4. Trying to load the "old" files, I get: AttributeError: 'module' object has no attribute 'dtypedescr' The file consists of a single dictionary with two elements, like: var={'im': numpy.zeros((5,5)),'im_scale_shift':[0.0,1.0]} My question isn't how I can load these "old" files, because I can regenerate them. I would like to know what file format I should be using so that I don't have to worry about upgrades/version differences when I want to load them. Is there a preferred way to do this? I thought pickle was that way, but perhaps I don't understand how pickle works. Thanks, Brian Blais From faltet at carabos.com Mon May 22 11:15:06 2006 From: faltet at carabos.com (Francesc Altet) Date: Mon May 22 11:15:06 2006 Subject: [Numpy-discussion] saving data: which format is recommended?
In-Reply-To: <4471EB90.7030005@bryant.edu> References: <4471EB90.7030005@bryant.edu> Message-ID: <1148321632.7596.21.camel@localhost.localdomain> On Mon, 22 May 2006 at 12:49 -0400, Brian Blais wrote: > I am trying to save numpy arrays (actually a list of them) for later use, and > distribution to others. Up until yesterday, I've been using the zpickle module from > the Cookbook, which is just pickle binary format with gzip compression. Yesterday, I > upgraded my operating system, and now I can't read those files. I am using numpy > 0.9.9.2536, and unfortunately I can't recall the version that I was using, but it was > pretty relatively recent. I also upgraded from Python 2.3 to 2.4. Trying to load > the "old" files, I get: > > AttributeError: 'module' object has no attribute 'dtypedescr' > > > the file consists of a single dictionary, with two elements, like: > > var={'im': numpy.zeros((5,5)),'im_scale_shift':[0.0,1.0]} This could be because NumPy objects have undergone some structural changes in recent months. After version 1.0 there will (hopefully) be no more changes in the structure, so your pickles will be more stable (but again, you might have problems in the long run, e.g. when NumPy 2.0 appears). > > My question isn't how can I load these "old" files, because I can regenerate them. I > would like to know what file format I should be using so that I don't have to worry > about upgrades/version differences when I want to load them. Is there a preferred > way to do this? I thought pickle was that way, but perhaps I don't understand how > pickle works. If you need full compatibility, a better approach than pickle-based solutions is to use the .tofile() and .fromfile() methods, but you need to save the metadata for your objects (type, shape, etc.) separately.
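This approach can be sketched as follows (a minimal illustration, not from the original thread; the file names and the JSON sidecar are arbitrary choices — any way of recording dtype and shape works):

```python
import json
import numpy as np

a = np.arange(12, dtype=np.float64).reshape(3, 4)

# Save the raw bytes with .tofile(), and the metadata needed to
# reconstruct the array (dtype and shape) in a small sidecar file.
a.tofile("data.raw")
with open("data.json", "w") as f:
    json.dump({"dtype": str(a.dtype), "shape": a.shape}, f)

# Load: read the metadata first, then rebuild the array from the bytes.
with open("data.json") as f:
    meta = json.load(f)
b = np.fromfile("data.raw", dtype=meta["dtype"]).reshape(meta["shape"])
```

Because the on-disk format is just a flat buffer plus self-describing metadata, it does not depend on the names of numpy's internal classes, which is what makes it robust across versions.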
If you need full support for saving data & metadata for your NumPy objects in a transparent way that is independent of pickle, you may want to have a look at PyTables [1] or NetCDF4 [2]. Both packages should be able to save NumPy datasets without needing to worry about future changes in NumPy data structures. Both are ultimately based on the HDF5 format [3], which has a pretty strong commitment to backward/forward format compatibility across its versions. [1]http://www.pytables.org [2]http://www.cdc.noaa.gov/people/jeffrey.s.whitaker/python/netCDF4.html [3]http://hdf.ncsa.uiuc.edu/HDF5 Cheers, -- >0,0< Francesc Altet http://www.carabos.com/ V V Cárabos Coop. V. Enjoy Data "-" From oliphant.travis at ieee.org Mon May 22 11:28:03 2006 From: oliphant.travis at ieee.org (Travis Oliphant) Date: Mon May 22 11:28:03 2006 Subject: [Numpy-discussion] saving data: which format is recommended? In-Reply-To: <4471EB90.7030005@bryant.edu> References: <4471EB90.7030005@bryant.edu> Message-ID: <44720288.2080902@ieee.org> Brian Blais wrote: > Hello, > > I am trying to save numpy arrays (actually a list of them) for later > use, and > distribution to others. Up until yesterday, I've been using the > zpickle module from > the Cookbook, which is just pickle binary format with gzip > compression. Yesterday, I > upgraded my operating system, and now I can't read those files. I am > using numpy > 0.9.9.2536, and unfortunately I can't recall the version that I was > using, but it was > pretty relatively recent. I also upgraded from Python 2.3 to 2.4. > Trying to load > the "old" files, I get: > > AttributeError: 'module' object has no attribute 'dtypedescr' The name "dtypedescr" was changed to "dtype" back in early February. The problem with pickle is that it is quite sensitive to these kinds of changes. Such changes are actually rare, but in the early stages of NumPy they were more common. This should be more stable now.
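The sensitivity Travis describes can be illustrated with a sketch (not from the original thread): the pickle byte stream records numpy's internal constructor names by their module path, so renaming an internal (as `dtypedescr` became `dtype`) makes old bytes unloadable. The exact embedded names vary by numpy version:

```python
import pickle
import numpy as np

# The same structure as in Brian's report.
var = {"im": np.zeros((5, 5)), "im_scale_shift": [0.0, 1.0]}

blob = pickle.dumps(var, protocol=2)

# The stream embeds numpy's internal reconstruction machinery by name
# (a "...multiarray..." module path); if a later release renames those
# internals, loading the old bytes raises AttributeError.
assert b"multiarray" in blob

# Round-tripping within one numpy version works fine, of course.
restored = pickle.loads(blob)
```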
I don't expect changes that will cause pickled NumPy arrays to fail in the future. If you needed to read the data on these files, it is likely possible with a little tweaking. While pickle is convenient and the actual data is guaranteed to be readable, reconstructing the data requires that certain names won't change. Many people use other methods for persistence because of this. -Travis From oliphant.travis at ieee.org Mon May 22 11:47:04 2006 From: oliphant.travis at ieee.org (Travis Oliphant) Date: Mon May 22 11:47:04 2006 Subject: [Numpy-discussion] Re: numpy (?) bug. In-Reply-To: <200605221624.01235.wiechert@mpimf-heidelberg.mpg.de> References: <200605221307.47277.martin.wiechert@gmx.de> <200605221624.01235.wiechert@mpimf-heidelberg.mpg.de> Message-ID: <447206E5.6060608@ieee.org> Martin Takeo Wiechert wrote: > Robert, > > I nailed it down. Look at the short interactive session below. numpy version > is 0.9.8. > > Regards, Martin. > > P.S.: Simon, thanks for your hint. 0.9.8 is only a few days old. When did you > do your svn update? > > > Python 2.4.3 (#1, May 12 2006, 05:35:54) > [GCC 4.1.0 (SUSE Linux)] on linux2 > Type "help", "copyright", "credits" or "license" for more information. > >>>> from numpy import * >>>> multiply.reduceat ((15,15,15,15), (0,2)) >>>> > array([225, 225]) > >>>> multiply.reduceat ((15,15,15,15), (0,2)) >>>> Thanks for tracking this down. It was a reference-count bug on the data-type object. The builtin data-types should never be freed, but an attempt was made due to the bug. This should be fixed in SVN now. 
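For context, the crashing call itself is well-defined (a sketch run against a current numpy, in which the refcount bug is fixed): `reduceat` splits the input at the given indices and applies the reduction to each slice, so `(0, 2)` multiplies elements 0-1 and elements 2-3 separately:

```python
import numpy as np

# reduceat with indices (0, 2) reduces slices [0:2] and [2:4]:
# 15 * 15 = 225 for each slice, matching the array([225, 225])
# Martin saw before the crash.
result = np.multiply.reduceat(np.array([15, 15, 15, 15]), [0, 2])
print(result)  # -> [225 225]
```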
-Travis

> [Martin's interactive session, glibc backtrace, and process memory map snipped; the full report is quoted earlier in the thread.]

From ivilata at carabos.com Mon May 22 23:50:03 2006 From: ivilata at carabos.com (Ivan Vilata i Balaguer) Date: Mon May 22 23:50:03 2006 Subject: [Numpy-discussion] Pow() with negative exponent Message-ID: <4472B013.1010607@carabos.com> (I'm sending this again because I'm afraid the previous post may have qualified as spam because of its subject. Sorry for the inconvenience.) Hi all, when working with numexpr, I have come across a curiosity in both numarray and numpy:: In [30]:b = numpy.array([1,2,3,4]) In [31]:b ** -1 Out[31]:array([1, 0, 0, 0]) In [32]:4 ** -1 Out[32]:0.25 In [33]: According to http://docs.python.org/ref/power.html: For int and long int operands, the result has the same type as the operands (after coercion) unless the second argument is negative; in that case, all arguments are converted to float and a float result is delivered. Then, shouldn't ``b ** -1`` be ``array([1.0, 0.5, 0.33333333, 0.25])`` (i.e. a floating point result)? Is this behaviour intentional? (I googled for previous messages on the topic but I didn't find any.) Thanks, :: Ivan Vilata i Balaguer >qo< http://www.carabos.com/ Cárabos Coop. V. V V Enjoy Data "" -------------- next part -------------- A non-text attachment was scrubbed...
Name: signature.asc Type: application/pgp-signature Size: 307 bytes Desc: OpenPGP digital signature URL: From ajikoe at gmail.com Mon May 22 23:58:05 2006 From: ajikoe at gmail.com (Pujo Aji) Date: Mon May 22 23:58:05 2006 Subject: [Numpy-discussion] Pow() with negative exponent In-Reply-To: <4472B013.1010607@carabos.com> References: <4472B013.1010607@carabos.com> Message-ID: Use 'f' to tell numpy that its array elements are of a float type: b = numpy.array([1,2,3,4],'f') An alternative is to put a dot after each number: b = numpy.array([1. ,2. ,3. ,4.]) This hopefully solves your problem. Cheers, pujo On 5/23/06, Ivan Vilata i Balaguer wrote: > > (I'm sending this again because I'm afraid the previous post may have > qualified as spam because of its subject. Sorry for the inconvenience.) > > Hi all, when working with numexpr, I have come across a curiosity in > both numarray and numpy:: > > In [30]:b = numpy.array([1,2,3,4]) > In [31]:b ** -1 > Out[31]:array([1, 0, 0, 0]) > In [32]:4 ** -1 > Out[32]:0.25 > In [33]: > > According to http://docs.python.org/ref/power.html: > > For int and long int operands, the result has the same type as the > operands (after coercion) unless the second argument is negative; in > that case, all arguments are converted to float and a float result is > delivered. > > Then, shouldn't ``b ** -1`` be ``array([1.0, 0.5, 0.33333333, 0.25])`` > (i.e. a floating point result)? Is this behaviour intentional? (I > googled for previous messages on the topic but I didn't find any.) > > Thanks, > > :: > > Ivan Vilata i Balaguer >qo< http://www.carabos.com/ > Cárabos Coop. V. V V Enjoy Data > "" > > > > -------------- next part -------------- An HTML attachment was scrubbed...
URL: From ivilata at carabos.com Tue May 23 00:27:03 2006 From: ivilata at carabos.com (Ivan Vilata i Balaguer) Date: Tue May 23 00:27:03 2006 Subject: [Numpy-discussion] Pow() with negative exponent In-Reply-To: References: <4472B013.1010607@carabos.com> Message-ID: <4472B8D0.6040606@carabos.com> Pujo Aji wrote:: > use 'f' to tell numpy that its array element is a float type: > b = numpy.array([1,2,3,4],'f') > > an alternative is to put a dot after each number: > b = numpy.array([1. ,2. ,3. ,4.]) > > This hopefully solves your problem. You're right, but according to the Python reference docs, having an integer base and a negative integer exponent should still return a floating point result, without the need to convert the base to floating point beforehand. I wonder if the numpy/numarray behavior is based on some implicit policy which states that operating on integers with integers should always return integers, for return type predictability, or something like that. Could someone please shed some light on this? Thanks! Pujo Aji wrote:: > On 5/23/06, *Ivan Vilata i Balaguer* > wrote: > [...] > According to http://docs.python.org/ref/power.html: > > For int and long int operands, the result has the same type as the > operands (after coercion) unless the second argument is negative; in > that case, all arguments are converted to float and a float result is > delivered. > > Then, shouldn't ``b ** -1`` be ``array([1.0, 0.5, 0.33333333, 0.25])`` > (i.e. a floating point result)? Is this behaviour intentional? (I > googled for previous messages on the topic but I didn't find any.) :: Ivan Vilata i Balaguer >qo< http://www.carabos.com/ Cárabos Coop. V. V V Enjoy Data "" -------------- next part -------------- A non-text attachment was scrubbed...
Name: signature.asc Type: application/pgp-signature Size: 307 bytes Desc: OpenPGP digital signature URL: From ajikoe at gmail.com Tue May 23 01:08:12 2006 From: ajikoe at gmail.com (Pujo Aji) Date: Tue May 23 01:08:12 2006 Subject: [Numpy-discussion] Pow() with negative exponent In-Reply-To: <4472B8D0.6040606@carabos.com> References: <4472B013.1010607@carabos.com> <4472B8D0.6040606@carabos.com> Message-ID: Numpy optimizes the Python process by explicitly defining the element type of the array, just like C++. Python lets you work with automatic conversion... but that slows down the process, like having extra code to check the type of each array element. I suggest you check the numpy reference instead of the python reference when using numpy. Sincerely yours, pujo On 5/23/06, Ivan Vilata i Balaguer wrote: > > Pujo Aji wrote:: > > > use 'f' to tell numpy that its array element is a float type: > > b = numpy.array([1,2,3,4],'f') > > > > an alternative is to put a dot after each number: > > b = numpy.array([1. ,2. ,3. ,4.]) > > > > This hopefully solves your problem. > > You're right, but according to the Python reference docs, having an integer > base and a negative integer exponent should still return a floating > point result, without the need to convert the base to floating point > beforehand. > > I wonder if the numpy/numarray behavior is based on some implicit policy > which states that operating on integers with integers should always return > integers, for return type predictability, or something like that. Could > someone please shed some light on this? Thanks! > > Pujo Aji wrote:: > > > On 5/23/06, *Ivan Vilata i Balaguer* > > wrote: > > [...] > > According to http://docs.python.org/ref/power.html: > > > > For int and long int operands, the result has the same type as the > > operands (after coercion) unless the second argument is negative; in > > that case, all arguments are converted to float and a float result is > > delivered.
> > > > Then, shouldn't ``b ** -1`` be ``array([1.0, 0.5, 0.33333333, 0.25])`` > > (i.e. a floating-point result)? Is this behaviour intentional? (I > > googled for previous messages on the topic but I didn't find any.) > > :: > > Ivan Vilata i Balaguer >qo< http://www.carabos.com/ > Cárabos Coop. V. V V Enjoy Data > "" > > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From aisaac at american.edu Tue May 23 04:56:36 2006 From: aisaac at american.edu (Alan G Isaac) Date: Tue May 23 04:56:36 2006 Subject: [Numpy-discussion] Pow() with negative exponent In-Reply-To: References: <4472B013.1010607@carabos.com><4472B8D0.6040606@carabos.com> Message-ID: On Tue, 23 May 2006, Pujo Aji apparently wrote: > I suggest you check the numpy reference instead of the python > reference when using numpy. http://www.scipy.org/Documentation fyi, Alan Isaac From ivilata at carabos.com Tue May 23 05:52:14 2006 From: ivilata at carabos.com (Ivan Vilata i Balaguer) Date: Tue May 23 05:52:14 2006 Subject: [Numpy-discussion] Pow() with negative exponent In-Reply-To: References: <4472B013.1010607@carabos.com> <4472B8D0.6040606@carabos.com> Message-ID: <447304FD.5090403@carabos.com> Pujo Aji wrote:: > NumPy optimizes the Python process by explicitly defining the element type > of the array. > Just like C++. > > Python lets you work with automatic conversion... but it slows down the > process. > Like having extra code to check the type of each array element. > > I suggest you check the numpy reference instead of the python reference when > using numpy. OK, I see that predictability of the type of the output result matters. ;) Besides that, I've been told that, according to the manual, power() (as every other ufunc) uses its ``types`` member to find out the type of the result depending only on the types of its arguments. It makes sense to avoid checking for particular values with possibly large arrays for efficiency, as you point out.
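A small sketch of that point (written against a recent NumPy; the exact entries of the ``types`` listing vary between versions):

```python
import numpy as np

# With a float base array, the output type is float for any exponent,
# so the Python-style result comes out directly:
b = np.array([1, 2, 3, 4], dtype=float)
print(b ** -1)  # 1.0, 0.5, 0.333..., 0.25

# power(), like every ufunc, picks its output type from the input
# types alone, as listed in its ``types`` member:
print(np.power.types)  # includes entries like 'dd->d' (double, double -> double)
```

Since the output type is chosen from the argument types before any values are seen, making the base floating point is the reliable way to get a floating-point result.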
I expected Python-like behaviour, but I understand this is not the most appropriate thing to do for a high-performance package (but then, I was not able to find that out using the public docs). Thanks for your help, :: Ivan Vilata i Balaguer >qo< http://www.carabos.com/ Cárabos Coop. V. V V Enjoy Data "" -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 307 bytes Desc: OpenPGP digital signature URL: From joris at ster.kuleuven.ac.be Tue May 23 07:16:01 2006 From: joris at ster.kuleuven.ac.be (joris at ster.kuleuven.ac.be) Date: Tue May 23 07:16:01 2006 Subject: [Numpy-discussion] Numpy PR Message-ID: <1148393680.447318d0a7796@webmail.ster.kuleuven.be> Hi, I was thinking about our PR for numpy. Imo, the best page that we can currently show to newcomers is www.scipy.org. There they find out what NumPy is, where to download it, documentation, cookbook recipes, examples, libraries that build on NumPy like SciPy, etc. In addition, it's the page that, imho, looks the most professional. Googling for "numpy" gives: 1) numeric.scipy.org/ Travis' webpage on Numpy. Travis, would you consider putting a much more pronounced link to scipy.org? The current link is at the very bottom of the page and has no further comments... 2) www.numpy.org/ One is redirected to the sourceforge site. Question: why not to the scipy.org site? The reason why no wiki page is set up here is, I guess, because there is already one at scipy.org. So why not link directly to it? 3) www.pfdubois.com/numpy/ This site is actually closed. It only contains a short paragraph pointing to the page http://sourceforge.net/projects/numpy. 4) www.pfdubois.com/numpy/html2/numpy.html This is a potentially very confusing web page. It constantly talks about 'Numpy' but is actually referring to the obsolete 'Numeric'. Perhaps this page could be taken down? 5) sourceforge.net/projects/numpy The download site for NumPy.
This page doesn't contain a link to scipy.org. 6) www.python.org/moin/NumericAndScientificnumpy.html Informationless webpage. 7) wiki.python.org/moin/NumericAndScientific Up-to-date webpage, but refers to numeric.scipy.org for NumPy. 8) www.scipy.org/ And YES, finally... :o) Perhaps we could try to take scipy.org a bit higher in the Google ranking? I am not a HTML expert at all, but may a header in the www.scipy.org source code like help? Cheers, Joris Disclaimer: http://www.kuleuven.be/cwis/email_disclaimer.htm From pfdubois at gmail.com Tue May 23 08:19:02 2006 From: pfdubois at gmail.com (Paul Dubois) Date: Tue May 23 08:19:02 2006 Subject: [Numpy-discussion] Numpy PR In-Reply-To: <1148393680.447318d0a7796@webmail.ster.kuleuven.be> References: <1148393680.447318d0a7796@webmail.ster.kuleuven.be> Message-ID: I killed pfdubois/numpy. On 23 May 2006 07:18:32 -0700, joris at ster.kuleuven.ac.be < joris at ster.kuleuven.ac.be> wrote: > > Hi, > > I was thinking about our PR for numpy. Imo, the best page that we can > currently show to newcomers is www.scipy.org. There they find out what is > Numpy, where you can download it, documentation, cookbook recipes, > examples, > libraries that build on NumPy like SciPy, etc. In addition, it's the page > that, imho, looks the most professional. > > > Googling for "numpy" gives: > > 1) numeric.scipy.org/ > > Travis' webpage on Numpy. Travis, would you consider putting a much more > pronounced link to scipy.org? The current link to is at the very bottom of > the page and has no further comments... > > 2) www.numpy.org/ > > One is redirected to the sourceforge site. Question: why not to the > scipy.org > site? The reason why no wiki page is set up here is, I guess, because > there > is already one at scipy.org. So why not directly linking to it? > > > 3) www.pfdubois.com/numpy/ > > This site is actually closed. It only contains a short paragraph pointing > to > the page http://sourceforge.net/projects/numpy. 
> > > 4) www.pfdubois.com/numpy/html2/numpy.html > > This is a potentially very confusing web page. It constantly talks about > 'Numpy' but is actually refering to the obsolete 'Numeric'. Perhaps this > page > could be taken down? > > > 5) sourceforge.net/projects/numpy > > The download site for NumPy. This page doesn't contain a link to scipy.org > . > > > 6) www.python.org/moin/NumericAndScientificnumpy.html > > Informationless webpage. > > > 7) wiki.python.org/moin/NumericAndScientific > > Up-to-date webpage, but refers to numeric.scipy.org for NumPy. > > > 8) www.scipy.org/ > > And YES, finally... :o) > > > Perhaps we could try to take scipy.org a bit higher in the Google ranking? > I am not a HTML expert at all, but may a header in the www.scipy.orgsource > code like > > > > help? > > Cheers, > Joris > > > Disclaimer: http://www.kuleuven.be/cwis/email_disclaimer.htm > > > > ------------------------------------------------------- > Using Tomcat but need to do more? Need to support web services, security? > Get stuff done quickly with pre-integrated technology to make your job > easier > Download IBM WebSphere Application Server v.1.0.1 based on Apache Geronimo > http://sel.as-us.falkag.net/sel?cmd=lnk&kid=120709&bid=263057&dat=121642 > _______________________________________________ > Numpy-discussion mailing list > Numpy-discussion at lists.sourceforge.net > https://lists.sourceforge.net/lists/listinfo/numpy-discussion > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From jonathan.taylor at utoronto.ca Tue May 23 10:58:04 2006 From: jonathan.taylor at utoronto.ca (Jonathan Taylor) Date: Tue May 23 10:58:04 2006 Subject: [Numpy-discussion] Numpy PR In-Reply-To: <1148393680.447318d0a7796@webmail.ster.kuleuven.be> References: <1148393680.447318d0a7796@webmail.ster.kuleuven.be> Message-ID: <463e11f90605231056n552712dcy9badd68a6b6701ac@mail.gmail.com> I updated the python moin pages. 
On 5/23/06, joris at ster.kuleuven.ac.be wrote: > Hi, > > I was thinking about our PR for numpy. Imo, the best page that we can > currently show to newcomers is www.scipy.org. There they find out what is > Numpy, where you can download it, documentation, cookbook recipes, examples, > libraries that build on NumPy like SciPy, etc. In addition, it's the page > that, imho, looks the most professional. > > > Googling for "numpy" gives: > > 1) numeric.scipy.org/ > > Travis' webpage on Numpy. Travis, would you consider putting a much more > pronounced link to scipy.org? The current link to is at the very bottom of > the page and has no further comments... > > 2) www.numpy.org/ > > One is redirected to the sourceforge site. Question: why not to the scipy.org > site? The reason why no wiki page is set up here is, I guess, because there > is already one at scipy.org. So why not directly linking to it? > > > 3) www.pfdubois.com/numpy/ > > This site is actually closed. It only contains a short paragraph pointing to > the page http://sourceforge.net/projects/numpy. > > > 4) www.pfdubois.com/numpy/html2/numpy.html > > This is a potentially very confusing web page. It constantly talks about > 'Numpy' but is actually refering to the obsolete 'Numeric'. Perhaps this page > could be taken down? > > > 5) sourceforge.net/projects/numpy > > The download site for NumPy. This page doesn't contain a link to scipy.org. > > > 6) www.python.org/moin/NumericAndScientificnumpy.html > > Informationless webpage. > > > 7) wiki.python.org/moin/NumericAndScientific > > Up-to-date webpage, but refers to numeric.scipy.org for NumPy. > > > 8) www.scipy.org/ > > And YES, finally... :o) > > > Perhaps we could try to take scipy.org a bit higher in the Google ranking? > I am not a HTML expert at all, but may a header in the www.scipy.org source > code like > > > > help? 
> > Cheers, > Joris > > > Disclaimer: http://www.kuleuven.be/cwis/email_disclaimer.htm > > > > ------------------------------------------------------- > Using Tomcat but need to do more? Need to support web services, security? > Get stuff done quickly with pre-integrated technology to make your job easier > Download IBM WebSphere Application Server v.1.0.1 based on Apache Geronimo > http://sel.as-us.falkag.net/sel?cmd=lnk&kid=120709&bid=263057&dat=121642 > _______________________________________________ > Numpy-discussion mailing list > Numpy-discussion at lists.sourceforge.net > https://lists.sourceforge.net/lists/listinfo/numpy-discussion > From rob at hooft.net Tue May 23 11:52:06 2006 From: rob at hooft.net (Rob Hooft) Date: Tue May 23 11:52:06 2006 Subject: [Numpy-discussion] Numpy PR In-Reply-To: <463e11f90605231056n552712dcy9badd68a6b6701ac@mail.gmail.com> References: <1148393680.447318d0a7796@webmail.ster.kuleuven.be> <463e11f90605231056n552712dcy9badd68a6b6701ac@mail.gmail.com> Message-ID: <4473598D.5000400@hooft.net> -----BEGIN PGP SIGNED MESSAGE----- Hash: SHA1 Jonathan Taylor wrote: | I updated the python moin pages. I created a link from my own bookmark page, which is reasonably ranked in google.... ;-) If we all link to scipy.org somewhere useful, it will be raised in googles ranking automatically, and completely legitimately. Rob - -- Rob W.W. 
Hooft || rob at hooft.net || http://www.hooft.net/people/rob/ -----BEGIN PGP SIGNATURE----- Version: GnuPG v1.4.3 (GNU/Linux) Comment: Using GnuPG with Thunderbird - http://enigmail.mozdev.org iD8DBQFEc1mNH7J/Cv8rb3QRAocsAJ9cYiAy211XLBzTO7LEEhvW+3AxkgCdGdYt Cu//sOM1WzC68YKeFAZMlG0= =hZy8 -----END PGP SIGNATURE----- From jdhunter at ace.bsd.uchicago.edu Tue May 23 12:14:04 2006 From: jdhunter at ace.bsd.uchicago.edu (John Hunter) Date: Tue May 23 12:14:04 2006 Subject: [Numpy-discussion] Numpy PR In-Reply-To: <4473598D.5000400@hooft.net> (Rob Hooft's message of "Tue, 23 May 2006 20:50:53 +0200") References: <1148393680.447318d0a7796@webmail.ster.kuleuven.be> <463e11f90605231056n552712dcy9badd68a6b6701ac@mail.gmail.com> <4473598D.5000400@hooft.net> Message-ID: <877j4czhjv.fsf@peds-pc311.bsd.uchicago.edu> >>>>> "Rob" == Rob Hooft writes: Rob> Jonathan Taylor wrote: | I updated the python moin pages. Rob> I created a link from my own bookmark page, which is Rob> reasonably ranked in google.... ;-) If we all link to Rob> scipy.org somewhere useful, it will be raised in googles Rob> ranking automatically, and completely legitimately. A very good (legitimate) way to raise your google page rank is to post announcements to python-announce and python-list with the keywords you want google to match on in the subject heading. Mix these up between announces to cover the space ANN: scipy.xxx: scientific tools for python ANN: scipy.xyz: python algorithms and array methods etc..... These will be magnified across the net through RSS and mirrors, much faster than a few people making links on their homepages. Also, include the keywords you want google to match in the title field of the scipy.org html. The title is now simply scipy.org and making it something like "scientific tools for python" will help. 
JDH From Chris.Barker at noaa.gov Tue May 23 12:52:07 2006 From: Chris.Barker at noaa.gov (Christopher Barker) Date: Tue May 23 12:52:07 2006 Subject: [Numpy-discussion] Numpy PR In-Reply-To: <4473598D.5000400@hooft.net> References: <1148393680.447318d0a7796@webmail.ster.kuleuven.be> <463e11f90605231056n552712dcy9badd68a6b6701ac@mail.gmail.com> <4473598D.5000400@hooft.net> Message-ID: <447367B1.10600@noaa.gov> Rob Hooft wrote: > I created a link from my own bookmark page, which is reasonably ranked > in google.... ;-) If we all link to scipy.org somewhere useful, it will > be raised in googles ranking automatically, and completely legitimately. ideally, put your link in so that users click on "numpy" to get there, that has a large impact on Google (see google bombing, or google "failure" for an explanation) like this: numpy However, I"m not sure that the scipy site should be the first one people find. I vote for creating a good the home page for the sourceforge site at www.numpy.org. Travis' page at numeric.scipy.org would be a good start. -Chris -- Christopher Barker, Ph.D. Oceanographer NOAA/OR&R/HAZMAT (206) 526-6959 voice 7600 Sand Point Way NE (206) 526-6329 fax Seattle, WA 98115 (206) 526-6317 main reception Chris.Barker at noaa.gov From ERIC.C.SCHUG at saic.com Tue May 23 15:18:11 2006 From: ERIC.C.SCHUG at saic.com (Schug, Eric C.) Date: Tue May 23 15:18:11 2006 Subject: [Numpy-discussion] win32all dependance Message-ID: When installing Numpy 0.9.8 then running import scipy I get the following Error import testing -> failed: No module named win32pdh Base on reviewing earlier releases appears to have a dependence on win32all which was removed in numpy-0.9.6r1.win32-py2.4.exe Could this dependence be removed from this latest version? Thanks Eric Schug -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From lazycoding at gmail.com Tue May 23 21:46:05 2006 From: lazycoding at gmail.com (Wenjie He) Date: Tue May 23 21:46:05 2006 Subject: [Numpy-discussion] Fail in building numpy(svn). Message-ID: I just follow the tutorial in http://www.scipy.org/Installing_SciPy/Windows to build numpy step by step, and everything is OK except building numpy. I use cygwin with gcc to compile them all, and use pre-compiler python-2.4 in windows xp sp2. I try build in both windows native command line and cygwin with: python.exe setup.py config --compiler=mingw32 build --compiler=mingw32 bdist_wininst And the result is the same with the error which is attached below. It is my first time to build a python module, and I don't know where to start to fix the error. Can anyone show me the way, plz? Wenjie -- I'm lazy, I'm coding. http://my.donews.com/henotii -------------- next part -------------- A non-text attachment was scrubbed... Name: error.log Type: application/octet-stream Size: 10180 bytes Desc: not available URL: From schofield at ftw.at Wed May 24 01:47:22 2006 From: schofield at ftw.at (Ed Schofield) Date: Wed May 24 01:47:22 2006 Subject: [Numpy-discussion] win32all dependance In-Reply-To: References: Message-ID: <44741D41.1060700@ftw.at> Schug, Eric C. wrote: > > When installing Numpy 0.9.8 then running import scipy I get the > following Error > > import testing -> failed: No module named win32pdh > > > > Base on reviewing earlier releases appears to have a dependence on > win32all which was removed in > > numpy-0.9.6r1.win32-py2.4.exe > > Could this dependence be removed from this latest version? > Doh! My 0.9.6 rebuild was just a stop-gap measure; I commented out the offending lines in numpy/testing/utils.py, but didn't change the trunk. I should have communicated this better. Travis, shall we just remove all lines from 67 to 97? 
-- Ed From pearu at scipy.org Wed May 24 01:53:07 2006 From: pearu at scipy.org (Pearu Peterson) Date: Wed May 24 01:53:07 2006 Subject: [Numpy-discussion] win32all dependance In-Reply-To: <44741D41.1060700@ftw.at> References: <44741D41.1060700@ftw.at> Message-ID: On Wed, 24 May 2006, Ed Schofield wrote: > Schug, Eric C. wrote: >> >> When installing Numpy 0.9.8 then running import scipy I get the >> following Error >> >> import testing -> failed: No module named win32pdh >> >> >> >> Base on reviewing earlier releases appears to have a dependence on >> win32all which was removed in >> >> numpy-0.9.6r1.win32-py2.4.exe >> >> Could this dependence be removed from this latest version? >> > > Doh! My 0.9.6 rebuild was just a stop-gap measure; I commented out the > offending lines in numpy/testing/utils.py, but didn't change the trunk. > I should have communicated this better. Travis, shall we just remove > all lines from 67 to 97? No, the code in these lines is not dead. I'll commit a fix to this issue in a moment. Pearu From arnd.baecker at web.de Wed May 24 07:49:06 2006 From: arnd.baecker at web.de (Arnd Baecker) Date: Wed May 24 07:49:06 2006 Subject: [Numpy-discussion] fwd: ANN: PyX 0.9 released Message-ID: Hi, some of you might have heard about PyX, http://pyx.sourceforge.net/ which """is a Python package for the creation of PostScript and PDF files. It combines an abstraction of the PostScript drawing model with a TeX/LaTeX interface. Features * PostScript and PDF output for device independent, free scalable figures * seamless TeX/LaTeX integration * full access to PostScript features like paths, linestyles, fill patterns, transformations, clipping, bitmap inclusion, etc. * advanced geometric operations on paths like intersections, transformations, splitting, smoothing, etc. * sophisticated graph generation: modular design, pluggable axes, axes partitioning based on rational number arithmetics, flexible graph styles, etc. 
""" Below is the release announcement for PyX 0.9. Best, Arnd P.S. To simplify the plotting of numpy arrays (1D and 2D) with PyX you could have a look at pyxgraph, http://www.physik.tu-dresden.de/~baecker/python/pyxgraph.html (which is still under heavy development ...;-) ---------- Forwarded message ---------- Date: Wed, 24 May 2006 15:23:22 +0200 From: Andre Wobst To: PyX-user , PyX-devel Subject: [PyX-user] ANN: PyX 0.9 released Hi! We're proud to announce the release of PyX 0.9! After quite some time we finally managed to prepare a new major release. Many improvements and fixes are included (see the attached list of changes), but there are a couple of highlights which should be mentioned separately: This release adds a set of deformers to PyX for path manipulations like smoothing, shifting, etc. A new set of extensively documented examples describing various aspects of PyX in a cookbook-like fashion have been written. Type 1 font-stripping is now handled by a newly written Python module. The evaluation of functions for graph plotting is now left to Python. Thereby some obscure data manipulation could be removed from the bar style for handling of nested bar graphs. Transparency is now supported for PDF output. Let me try to summarize some of the *visible* changes (to existing code out there) we needed to apply to facilitate some of the major goals of this release: - The path system has passed another restructuring phase. The normpath, which allows for all the advanced path features of PyX, have been moved into a separate module normpath. The parametrization of the paths (i.e. of the normpaths) is now handled by normpathparam instances allowing for mixing arc lengths and normpathparam instances in any order in many path functions like split etc. The normpathparam instances allow the addition of arc lengths to walk along a path, for example starting at the end of a path or at an intersection point. 
- The evaluation of mathematical expressions in the classes from the graph.data module is now left to Python. While this leads to a huge set of other improvements (like being not restricted to the floats datatype anymore), there are no differences between the evaluation of the expression compared to Python anymore. As Python by default still uses integer division for integer arguments, the meaning of a function expression like "1/2" has changed dramatically. In earlier versions of PyX for this example the value 0.5 was calculated, now it becomes 0. (I'm looking forward to Python 3000 to get rid of this situation once and for all! :-)) - Bars graphs on a list of data sets, which in earlier versions have been automatically converted to use a nested bar axis, don't do that automatic conversion anymore. You may want to look at the example http://pyx.sourceforge.net/examples/bargraphs/compare.html to learn more about the new and more flexible solution. - The stroke styles linestyle and dash now use the rellength feature by default. Furthermore the rellength feature was adjusted to not modify the dash lengths for the default linewidth. Hence you're affected by this change when you have used the rellength feature before or you used linestyles on linewidths different from the default. - The bbox calcuation was modified in two respects: First it now properly takes into account the shape of bezier boxes (so the real bounding box is now used instead of the control box). Secondly PyX now takes into account the linewidths when stroking a path and adds it to the bounding box. Happy PyXing ... J?rg, Michael, and Andr? ------------------ 0.9 (2006/05/24): - most important changes (to be included in the release notes): - mathtree removal (warning about integer division) - barpos style does not build tuples for nestedbar axes automatically - new deformers for path manipulation (for smoothing, shifting, ... 
paths) - font modules: - new framework for font handling - own implementation of type1 font stripping (old pdftex code fragments removed) - complete type1 font command representation and glyph path extraction from font programs - t1code extension module (C version of de-/encoding routines used in Type 1 font files) - AFM file parser - graph modules: - data module: - mathtree removal: more flexibility due to true python expressions - default style instantiation bug (reported by Gregory Novak) - style module: - automatic subaxis tuple creation removed in barpos (create tuples in expressions now; subnames argument removed since it became pointless; adujstaxis became independend from selectstyle for all styles now) - remove multiple painting of frompath in histogram and barpos styles - fix missing attribute select when using a bar style once only (reported by Alan Isaac) - fix histograms for negative y-coordinates (reported by Dominic Ford, bug #1492548) - fix histogram to stroke lines to the baseline for steps=0 when two subsequent values are equal - add key method for histogram style (reported by Hagemann, bug #1371554) - implement a changebar style - graph, axis and style module: - support for mutual linking of axes between graphs - new domethods dependency handling - separate axis range calculation from dolayout - axis.parter module: - linear and logarthmic partitioners always need lists now (as it was documented all the time; renamed tickdist/labeldist to tickdists/labeldists; renamed tickpos/labelpos to tickpreexps/labelpreexps) - axis module: - patch to tickpos and vtickpos (reported by Wojciech Smigaj, cf. patch #1286112) - anchoredpathaxis added (suggested by Wojciech Smigaj) - properly handle range rating on inversed axis (reported by Dominic Ford, cf. 
bug #1461513) - invalidate axis partitions with a single label only by the distance rater - fallback (with warning) to linear partitioner on a small logarithmics scale - painter module: - patch to allow for tickattrs=None (reported by Wojciech Smigaj, cf. patch #1286116) - color module: - transparency support (PDF only) - conversion between colorspaces - nonlinear palettes added - the former palette must now be initialized as linearpalette - remove min and max arguments of palettes - text module: - improve escapestring to handle all ascii characters - correct vshift when text size is modified by a text.size instance - recover from exceptions (reported by Alan Isaac) - handle missing italic angle information in tfm for pdf output (reported by Brett Calcott) - allow for .def and .fd files in texmessage.loaddef (new name for texmessage.loadfd, which was restricted to .fd files) - path module: - correct closepath (do not invalidate currentpoint but set it to the beginning of the current subpath); structural rework of pathitems - calculate real bboxes for Bezier curves - fix intersection due to non-linear parametrization of bezier curves - add rotate methods to path, normpath, normsubpath, and normsubpathitems - add flushskippedline to normsubpath - add arclentoparam to normsubpath and normsubpathitems - path is no longer a canvasitem - reduce number of parameters of outputPS/outputPDF methods (do not pass context and registry) - normpath module: - contains normpath, normsubpath and normpathparam which have originally been in the path module - return "invalid" for certain path operations when the curve "speed" is below a certain threshold - normpath is no longer a canvasitem - reduce number of parameters of outputPS/outputPDF methods (do not pass context and registry) - deformer module: - rewritten smoothed to make use of the subnormpath facilities - rewritten parallel for arbitrary paths - deco module: - add basic text decorator - allow arrows at arbitrary positions 
along the path - connector module: - boxdists parameter needs to be a list/tuple of two items now - changed the orientation of the angle parameters - trafo module: - renamed _apply to apply_pt - introduce _epsilon for checking the singularity of a trafo - epsfile module: - use rectclip instead of clip to remove the clipping path from the PostScript stack, which otherwise might create strange effects for certain PostScript files (reported by Gert Ingold) - dvifile module: - silently ignore TrueType fonts in font mapping files (reported by Gabriel Vasseur) - type1font module: - accept [ and ] as separators in encoding files (reported by Mojca Miklavec, cf. bug #1429524) - canvas module: - remove registerPS/registerPDF in favour of registering resourcing during the outputPS/outputPDF run - move bbox handling to registry - rename outputPS/outputPDF -> processPS/processPDF - remove set method of canvas - add a pipeGS method to directly pass the PyX output to ghostscript - allow file instances as parameter of the writeXXXfile methods (feature request #1419658 by Jason Pratt) - document modules: - allow file instances as parameter of the writeXXXfile methods (feature request #1419658 by Jason Pratt) - style module: - make rellength the default for dash styles - random notes: - switched to subversion on 2006/03/09 -- by _ _ _ Dr. André Wobst / \ \ / ) wobsta at users.sourceforge.net, http://www.wobsta.de/ / _ \ \/\/ / PyX - High quality PostScript and PDF figures (_/ \_)_/\_/ with Python & TeX: visit http://pyx.sourceforge.net/ ------------------------------------------------------- All the advantages of Linux Managed Hosting--Without the Cost and Risk! Fully trained technicians. The highest number of Red Hat certifications in the hosting industry. Fanatical Support.
Click to learn more http://sel.as-us.falkag.net/sel?cmd=lnk&kid=107521&bid=248729&dat=121642 _______________________________________________ PyX-user mailing list PyX-user at lists.sourceforge.net https://lists.sourceforge.net/lists/listinfo/pyx-user From ajikoe at gmail.com Wed May 24 08:20:19 2006 From: ajikoe at gmail.com (Pujo Aji) Date: Wed May 24 08:20:19 2006 Subject: [Numpy-discussion] error numpy 0.9.8 Message-ID: Hello, I have problem importing numpy 0.9.8: While writing: >>> import numpy I 've got this error message: import testing -> failed: No module named win32pdh This is not happened in 0.9.6 Can anybody help me? Thanks, pujo -------------- next part -------------- An HTML attachment was scrubbed... URL: From st at sigmasquared.net Wed May 24 08:35:03 2006 From: st at sigmasquared.net (Stephan Tolksdorf) Date: Wed May 24 08:35:03 2006 Subject: [Numpy-discussion] error numpy 0.9.8 In-Reply-To: References: Message-ID: <44747CDF.7050009@sigmasquared.net> The current release accidentally depends on the win32all package: sourceforge.net/projects/pywin32/ This should be fixed in the latest SVN version. Stephan From ajikoe at gmail.com Wed May 24 08:44:02 2006 From: ajikoe at gmail.com (Pujo Aji) Date: Wed May 24 08:44:02 2006 Subject: [Numpy-discussion] error numpy 0.9.8 In-Reply-To: <44747CDF.7050009@sigmasquared.net> References: <44747CDF.7050009@sigmasquared.net> Message-ID: Thanks, It works. pujo On 5/24/06, Stephan Tolksdorf wrote: > > The current release accidentally depends on the win32all package: > sourceforge.net/projects/pywin32/ > > This should be fixed in the latest SVN version. > > Stephan > -------------- next part -------------- An HTML attachment was scrubbed... URL: From jorgen.stenarson at bostream.nu Wed May 24 11:42:06 2006 From: jorgen.stenarson at bostream.nu (=?ISO-8859-1?Q?J=F6rgen_Stenarson?=) Date: Wed May 24 11:42:06 2006 Subject: [Numpy-discussion] Dot product threading? 
In-Reply-To: <6ef8f3380605180434w77092836m8c470ba993b337e8@mail.gmail.com> References: <44669A5B.8040700@gmail.com> <446BC1FA.2060404@auckland.ac.nz> <6ef8f3380605180247g3b084681o930ea6ea1bf4ecd9@mail.gmail.com> <446C571B.5050700@gmail.com> <6ef8f3380605180434w77092836m8c470ba993b337e8@mail.gmail.com> Message-ID: <4474A8BA.4040208@bostream.nu> This thread discusses one of the things highest on my wishlist for numpy. I have attached my first attempt at code that creates a broadcasting function: it broadcasts a function over arrays where the last N indices are assumed to be for use by the function; N=2 would be used for matrices. It is implemented for Numeric. /Jörgen Pau Gargallo wrote: >> Pau, can you confirm that this is the same >> as the routine you're interested in? >> >> def dot2(a,b): >> '''Returns dot product of last two dimensions of two 3-D arrays, >> threaded over first dimension.''' >> try: >> assert a.shape[1] == b.shape[2] >> assert a.shape[0] == b.shape[0] >> except AssertionError: >> print "incorrect input shapes" >> res = zeros( (a.shape[0], a.shape[1], a.shape[1]), dtype=float ) >> for i in range(a.shape[0]): >> res[i,...] = dot( a[i,...], b[i,...] ) >> return res >> > > yes, that is what I would like. I would like it even with more > dimensions and with all the broadcasting rules ;-) > These can probably be achieved by building actual 'arrays of matrices' > (an array of matrix objects) and then using the ufunc machinery. > But I think that a simple dot2 function (with another name of course) > will still be very useful. > > pau -------------- next part -------------- An embedded and charset-unspecified text was scrubbed... Name: jsarray.py URL: From gkmohan at gmail.com Wed May 24 16:40:05 2006 From: gkmohan at gmail.com (Krishna Mohan Gundu) Date: Wed May 24 16:40:05 2006 Subject: [Numpy-discussion] numpy-0.9.5 build is not clean Message-ID: <70ec82800605241639g5ce7ddeay17b596c1b4335ab4@mail.gmail.com> Hi, I am trying to build numpy-0.9.5, downloaded from the sourceforge download page, as higher versions are not yet tested for pygsl. The build seems to be broken. I uninstalled the existing numpy and started the build from scratch, but I get the following errors when I import numpy after the install. ==== >>> from numpy import * import core -> failed: module compiled against version 90504 of C-API but this version of numpy is 90500 import random -> failed: 'module' object has no attribute 'dtype' import linalg -> failed: module compiled against version 90504 of C-API but this version of numpy is 90500 ==== Any help is appreciated. Am I doing something wrong or is it known that this build is broken? thanks, Krishna. From chanley at stsci.edu Wed May 24 17:32:33 2006 From: chanley at stsci.edu (Christopher Hanley) Date: Wed May 24 17:32:33 2006 Subject: [Numpy-discussion] need help building on windows Message-ID: <20060524203136.CJJ76221@comet.stsci.edu> Hi, I am new to working on the Windows XP OS and I am having a problem building numpy from source. I am attempting to build using the MinGW compilers on a Windows XP box which has Python 2.4.1 installed.
I do not have nor do I intend to use any optimized libraries like ATLAS. I have checked out numpy revision 2542. This is as clean a build as you could probably get. I am attaching my build log to this message. If anyone has any helpful hints they would be most appreciated. Thanks, Chris -------------- next part -------------- A non-text attachment was scrubbed... Name: build.log Type: application/octet-stream Size: 28662 bytes Desc: not available URL: From robert.kern at gmail.com Wed May 24 19:22:05 2006 From: robert.kern at gmail.com (Robert Kern) Date: Wed May 24 19:22:05 2006 Subject: [Numpy-discussion] Re: need help building on windows In-Reply-To: <20060524203136.CJJ76221@comet.stsci.edu> References: <20060524203136.CJJ76221@comet.stsci.edu> Message-ID: Christopher Hanley wrote: > Hi, > > I am new to working on the Windows XP OS and I am having a problem building numpy from source. I am attempting to build using the MinGW compilers on a Windows XP box which has Python 2.4.1 installed. I do not have nor do I intend to use any optimized libraries like ATLAS. I have checked out numpy revision 2542. This is as clean a build as you could probably get. I am attaching my build log to this message. If anyone has any helpful hints they would be most appreciated. It's a known issue: http://projects.scipy.org/scipy/numpy/ticket/129 -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." 
-- Umberto Eco From strawman at astraw.com Wed May 24 22:00:10 2006 From: strawman at astraw.com (Andrew Straw) Date: Wed May 24 22:00:10 2006 Subject: [Numpy-discussion] numpy-0.9.5 build is not clean In-Reply-To: <70ec82800605241639g5ce7ddeay17b596c1b4335ab4@mail.gmail.com> References: <70ec82800605241639g5ce7ddeay17b596c1b4335ab4@mail.gmail.com> Message-ID: <447539B2.80206@astraw.com> Dear Krishna, it looks like there are some mixed versions of numpy floating around on your system. Before building, remove the "build" directory completely. Krishna Mohan Gundu wrote: > Hi, > > I am trying to build numpy-0.9.5, downloaded from sourceforge download > page, as higher versions are not yet tested for pygsl. The build seems > to be broken. I uninstall existing numpy and start build from scratch > but I get the following errors when I import numpy after install. > > ==== > >>>> from numpy import * >>> > import core -> failed: module compiled against version 90504 of C-API > but this version of numpy is 90500 > import random -> failed: 'module' object has no attribute 'dtype' > import linalg -> failed: module compiled against version 90504 of > C-API but this version of numpy is 90500 > ==== > > Any help is appreciated. Am I doing something wrong or is it known > that this build is broken? > > thanks, > Krishna. > > > ------------------------------------------------------- > All the advantages of Linux Managed Hosting--Without the Cost and Risk! > Fully trained technicians. The highest number of Red Hat > certifications in > the hosting industry. Fanatical Support. 
Click to learn more > http://sel.as-us.falkag.net/sel?cmd=lnk&kid=107521&bid=248729&dat=121642 > _______________________________________________ > Numpy-discussion mailing list > Numpy-discussion at lists.sourceforge.net > https://lists.sourceforge.net/lists/listinfo/numpy-discussion From gkmohan at gmail.com Wed May 24 22:41:04 2006 From: gkmohan at gmail.com (Krishna Mohan Gundu) Date: Wed May 24 22:41:04 2006 Subject: [Numpy-discussion] numpy-0.9.5 build is not clean In-Reply-To: <447539B2.80206@astraw.com> References: <70ec82800605241639g5ce7ddeay17b596c1b4335ab4@mail.gmail.com> <447539B2.80206@astraw.com> Message-ID: <70ec82800605242240v6cf2f893j53f7ec6b0511b79a@mail.gmail.com> Dear Andrew, Thanks for your reply. As I said earlier, I deleted the existing numpy installation and the build directories. I am more than confident that I did it right. Is there any way I can prove myself wrong? I also tried importing umath.so from the build directory === $ cd numpy-0.9.5/build/lib.linux-x86_64-2.4/numpy/core $ ls -l umath.so -rwxr-xr-x 1 krishna users 463541 May 22 17:46 umath.so $ python Python 2.4.3 (#1, Apr 8 2006, 19:10:42) [GCC 3.2.3 20030502 (Red Hat Linux 3.2.3-49)] on linux2 Type "help", "copyright", "credits" or "license" for more information. >>> import umath Traceback (most recent call last): File "", line 1, in ? RuntimeError: module compiled against version 90500 of C-API but this version of numpy is 90504 >>> === So something is wrong with the build for sure. Could there be anything wrong other than not deleting the build directory? thanks, Krishna. On 5/24/06, Andrew Straw wrote: > Dear Krishna, it looks like there are some mixed versions of numpy > floating around on your system. Before building, remove the "build" > directory completely. > > Krishna Mohan Gundu wrote: > > > Hi, > > > > I am trying to build numpy-0.9.5, downloaded from sourceforge download > > page, as higher versions are not yet tested for pygsl. The build seems > > to be broken.
I uninstall existing numpy and start build from scratch > > but I get the following errors when I import numpy after install. > > > > ==== > > > >>>> from numpy import * > >>> > > import core -> failed: module compiled against version 90504 of C-API > > but this version of numpy is 90500 > > import random -> failed: 'module' object has no attribute 'dtype' > > import linalg -> failed: module compiled against version 90504 of > > C-API but this version of numpy is 90500 > > ==== > > > > Any help is appreciated. Am I doing something wrong or is it known > > that this build is broken? > > > > thanks, > > Krishna. > > > > > > ------------------------------------------------------- > > All the advantages of Linux Managed Hosting--Without the Cost and Risk! > > Fully trained technicians. The highest number of Red Hat > > certifications in > > the hosting industry. Fanatical Support. Click to learn more > > http://sel.as-us.falkag.net/sel?cmd=lnk&kid=107521&bid=248729&dat=121642 > > _______________________________________________ > > Numpy-discussion mailing list > > Numpy-discussion at lists.sourceforge.net > > https://lists.sourceforge.net/lists/listinfo/numpy-discussion > > > From rudolphv at gmail.com Thu May 25 01:36:08 2006 From: rudolphv at gmail.com (Rudolph van der Merwe) Date: Thu May 25 01:36:08 2006 Subject: [Numpy-discussion] Numpy 0.9.8 fails under Mac OS X Message-ID: <97670e910605250135l609b6b67k27f1b5d87ea17666@mail.gmail.com> I built and installed Numpy 0.9.8 successfully from the source using python setup.py build sudo python setup.py install When I run the unit-test though, it fails with the following error: [~] python ActivePython 2.4.3 Build 11 (ActiveState Software Inc.) based on Python 2.4.3 (#1, Apr 3 2006, 18:07:18) [GCC 3.3 20030304 (Apple Computer, Inc. build 1666)] on darwin Type "help", "copyright", "credits" or "license" for more information. 
>>> import numpy >>> numpy.__version__ '0.9.8' >>> numpy.test(1,1) Found 5 tests for numpy.distutils.misc_util Found 3 tests for numpy.lib.getlimits Found 30 tests for numpy.core.numerictypes Found 13 tests for numpy.core.umath Found 1 tests for numpy.core.scalarmath Found 8 tests for numpy.lib.arraysetops Found 42 tests for numpy.lib.type_check Found 95 tests for numpy.core.multiarray Found 3 tests for numpy.dft.helper Found 36 tests for numpy.core.ma Found 9 tests for numpy.lib.twodim_base Found 2 tests for numpy.core.oldnumeric Found 9 tests for numpy.core.defmatrix Found 1 tests for numpy.lib.ufunclike Found 35 tests for numpy.lib.function_base Found 1 tests for numpy.lib.polynomial Found 6 tests for numpy.core.records Found 19 tests for numpy.core.numeric Found 4 tests for numpy.lib.index_tricks Found 46 tests for numpy.lib.shape_base Found 0 tests for __main__ ...................................................python(9463) malloc: *** Deallocation of a pointer not malloced: 0xd668; This could be a double free(), or free() called with the middle of an allocated block; Try setting environment variable MallocHelp to see tools to help debug python(9463) malloc: *** Deallocation of a pointer not malloced: 0xd4e8; This could be a double free(), or free() called with the middle of an allocated block; Try setting environment variable MallocHelp to see tools to help debug python(9463) malloc: *** Deallocation of a pointer not malloced: 0xd650; This could be a double free(), or free() called with the middle of an allocated block; Try setting environment variable MallocHelp to see tools to help debug python(9463) malloc: *** Deallocation of a pointer not malloced: 0xd560; This could be a double free(), or free() called with the middle of an allocated block; Try setting environment variable MallocHelp to see tools to help debug python(9463) malloc: *** Deallocation of a pointer not malloced: 0xd4a0; This could be a double free(), or free() called with the middle of an 
allocated block; Try setting environment variable MallocHelp to see tools to help debug python(9463) malloc: *** Deallocation of a pointer not malloced: 0xd3f8; This could be a double free(), or free() called with the middle of an allocated block; Try setting environment variable MallocHelp to see tools to help debug python(9463) malloc: *** Deallocation of a pointer not malloced: 0xd470; This could be a double free(), or free() called with the middle of an allocated block; Try setting environment variable MallocHelp to see tools to help debug python(9463) malloc: *** Deallocation of a pointer not malloced: 0xd320; This could be a double free(), or free() called with the middle of an allocated block; Try setting environment variable MallocHelp to see tools to help debug python(9463) malloc: *** Deallocation of a pointer not malloced: 0xd728; This could be a double free(), or free() called with the middle of an allocated block; Try setting environment variable MallocHelp to see tools to help debug python(9463) malloc: *** Deallocation of a pointer not malloced: 0xd440; This could be a double free(), or free() called with the middle of an allocated block; Try setting environment variable MallocHelp to see tools to help debug python(9463) malloc: *** Deallocation of a pointer not malloced: 0xd410; This could be a double free(), or free() called with the middle of an allocated block; Try setting environment variable MallocHelp to see tools to help debug python(9463) malloc: *** Deallocation of a pointer not malloced: 0xd590; This could be a double free(), or free() called with the middle of an allocated block; Try setting environment variable MallocHelp to see tools to help debug python(9463) malloc: *** Deallocation of a pointer not malloced: 0xd458; This could be a double free(), or free() called with the middle of an allocated block; Try setting environment variable MallocHelp to see tools to help debug 
F............................................................................................................................................................................................................................................................................................................................ ====================================================================== FAIL: check_types (numpy.core.tests.test_scalarmath.test_types) ---------------------------------------------------------------------- Traceback (most recent call last): File "/Library/Frameworks/Python.framework/Versions/2.4/lib/python2.4/site-packages/numpy/core/tests/test_scalarmath.py", line 63, in check_types assert val.dtype.num == typeconv[k,l] and \ AssertionError: error with (13,14) ---------------------------------------------------------------------- Ran 368 tests in 1.517s FAILED (failures=1) Any idea what is causing this? Thanks Rudolph -- Rudolph van der Merwe From aisaac at american.edu Thu May 25 11:11:07 2006 From: aisaac at american.edu (Alan G Isaac) Date: Thu May 25 11:11:07 2006 Subject: [Numpy-discussion] numpy.sin Message-ID: I am a user, not a numerics type, so this is undoubtedly a naive question. Might the sin function be written to give greater accuracy for large real numbers? It seems that significant digits are in some sense being discarded needlessly. I.e., compare these: >>> sin(linspace(0,10*pi,11)) array([ 0.00000000e+00, 1.22460635e-16, -2.44921271e-16, 3.67381906e-16, -4.89842542e-16, 6.12303177e-16, -7.34763812e-16, 8.57224448e-16, -9.79685083e-16, 1.10214572e-15, -1.22460635e-15]) >>> sin(linspace(0,10*pi,11)%(2*pi)) array([ 0.00000000e+00, 1.22460635e-16, 0.00000000e+00, 1.22460635e-16, 0.00000000e+00, 1.22460635e-16, 0.00000000e+00, 1.22460635e-16, 0.00000000e+00, 1.22460635e-16, 0.00000000e+00]) Just wondering. 
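A side note on the values Alan shows: float pi is only the nearest double to the real π, so k*pi lands slightly off the true zero crossing and sin() faithfully reports a tiny nonzero value. A stdlib-only sketch of this effect (the relation below is a first-order approximation, not an exact identity):

```python
import math

# sin(math.pi) recovers, to first order, the representation error of pi:
# pi_real - math.pi, roughly 1.2e-16.
delta = math.sin(math.pi)
print(delta)

# 2*math.pi is computed exactly (doubling a double), so sin(2*math.pi)
# should come out close to -2*delta:
print(math.sin(2 * math.pi), -2 * delta)
```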
Cheers, Alan Isaac From robert.kern at gmail.com Thu May 25 11:37:09 2006 From: robert.kern at gmail.com (Robert Kern) Date: Thu May 25 11:37:09 2006 Subject: [Numpy-discussion] Re: numpy.sin In-Reply-To: References: Message-ID: Alan G Isaac wrote: > I am a user, not a numerics type, > so this is undoubtedly a naive question. > > Might the sin function be written to give greater > accuracy for large real numbers? It seems that significant > digits are in some sense being discarded needlessly. Not really. The floating point representation of pi is not exact. The problem only gets worse when you multiply it with something. The method you showed of using % (2*pi) is only accurate when the values are created by multiplying the same pi by another value. Otherwise, it just introduces another source of error, I think. This is one of the few places where a version of trig functions that directly operate on degrees is preferred. 360.0*n is exactly representable by floating point arithmetic until n~=12509998964918 (give or take a power of two). Doing % 360 can be done exactly. -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From aisaac at american.edu Thu May 25 11:47:08 2006 From: aisaac at american.edu (Alan G Isaac) Date: Thu May 25 11:47:08 2006 Subject: [Numpy-discussion] Re: numpy.sin In-Reply-To: References: Message-ID: On Thu, 25 May 2006, Robert Kern apparently wrote: > The method you showed of using % (2*pi) is only accurate > when the values are created by multiplying the same pi by > another value. Otherwise, it just introduces another > source of error, I think. Just to be clear, I meant not (!)
to presumptuously propose a method for improving things, but just to illustrate the issue: both the loss of accuracy, and the obvious conceptual point that there is (in an abstract sense, at least) no need for sin(x) and sin(x + 2*pi) to differ. Thanks, Alan From rob at hooft.net Thu May 25 11:53:06 2006 From: rob at hooft.net (Rob Hooft) Date: Thu May 25 11:53:06 2006 Subject: [Numpy-discussion] Re: numpy.sin In-Reply-To: References: Message-ID: <4475FCE2.6000802@hooft.net> -----BEGIN PGP SIGNED MESSAGE----- Hash: SHA1 Robert Kern wrote: | Alan G Isaac wrote: | |>I am a user, not a numerics type, |>so this is undoubtedly a naive question. |> |>Might the sin function be written to give greater |>accuracy for large real numbers? It seems that significant |>digits are in some sense being discarded needlessly. | | | Not really. The floating point representation of pi is not exact. The problem | only gets worse when you multiply it with something. The method you showed of | using % (2*pi) is only accurate when the values are created by multiplying the | same pi by another value. Otherwise, it just introduces another source of error, | I think. | | This is one of the few places where a version of trig functions that directly | operate on degrees are preferred. 360.0*n is exactly representable by floating | point arithmetic until n~=12509998964918 (give or take a power of two). Doing % | 360 can be done exactly. This reminds me of a story Richard Feynman tells in his autobiography. He used to say: "if you can pose a mathematical question in 10 seconds, I can solve it with 10% accuracy in one minute just calculating in my head". This worked for a long time, until someone told him "please calculate the sine of a million". Actual mantissa bits are used up by the multiple of two-pi, and those are lost at the back of the calculated value. Calculating the sine of a million with the same precision as the sine of zero requires 20 more bits of accuracy. Rob - -- Rob W.W.
Hooft || rob at hooft.net || http://www.hooft.net/people/rob/ -----BEGIN PGP SIGNATURE----- Version: GnuPG v1.4.3 (GNU/Linux) Comment: Using GnuPG with Thunderbird - http://enigmail.mozdev.org iD8DBQFEdfziH7J/Cv8rb3QRAiO8AKCQdJ+9EMOP6bOmUX0NIhuWVoEFQgCgmvTS fgO08dI16AUFcYKkpRJXg/Q= =qQXI -----END PGP SIGNATURE----- From alexander.belopolsky at gmail.com Thu May 25 12:02:03 2006 From: alexander.belopolsky at gmail.com (Alexander Belopolsky) Date: Thu May 25 12:02:03 2006 Subject: [Numpy-discussion] Re: numpy.sin In-Reply-To: <4475FCE2.6000802@hooft.net> References: <4475FCE2.6000802@hooft.net> Message-ID: This is not really a numpy issue, but a general floating-point problem. Consider this: >>> x=linspace(0,10*pi,11) >>> all(array(map(math.sin, x))==sin(x)) True If anything can be improved, that would be the C math library. On 5/25/06, Rob Hooft wrote: > -----BEGIN PGP SIGNED MESSAGE----- > Hash: SHA1 > > Robert Kern wrote: > | Alan G Isaac wrote: > | > |>I am a user, not a numerics type, > |>so this is undoubtedly a naive question. > |> > |>Might the sin function be written to give greater > |>accuracy for large real numbers? It seems that significant > |>digits are in some sense being discarded needlessly. > | > | > | Not really. The floating point representation of pi is not exact. The > problem > | only gets worse when you multiply it with something. The method you > showed of > | using % (2*pi) is only accurate when the values are created by > multiplying the > | same pi by another value. Otherwise, it just introduces another source > of error, > | I think. > | > | This is one of the few places where a version of trig functions that > directly > | operate on degrees are preferred. 360.0*n is exactly representable by > floating > | point arithmetic until n~=12509998964918 (give or take a power of > two). Doing % > | 360 can be done exactly. > > This reminds me of a story Richard Feynman tells in his autobiography.
> He used to say: "if you can pose a mathematical question in 10 seconds, > I can solve it with 10% accuracy in one minute just calculating in my > head". This worked for a long time, until someone told him "please > calculate the sine of a million". > > Actual mantissa bits are used by the multiple of two-pi, and those are > lost at the back of the calculated value. Calculating the sine of a > million with the same precision as the sine of zero requires 20 more > bits of accuracy. > > Rob > - -- > Rob W.W. Hooft || rob at hooft.net || http://www.hooft.net/people/rob/ > -----BEGIN PGP SIGNATURE----- > Version: GnuPG v1.4.3 (GNU/Linux) > Comment: Using GnuPG with Thunderbird - http://enigmail.mozdev.org > > iD8DBQFEdfziH7J/Cv8rb3QRAiO8AKCQdJ+9EMOP6bOmUX0NIhuWVoEFQgCgmvTS > fgO08dI16AUFcYKkpRJXg/Q= > =qQXI > -----END PGP SIGNATURE----- > > > ------------------------------------------------------- > All the advantages of Linux Managed Hosting--Without the Cost and Risk! > Fully trained technicians. The highest number of Red Hat certifications in > the hosting industry. Fanatical Support. Click to learn more > http://sel.as-us.falkag.net/sel?cmd=lnk&kid=107521&bid=248729&dat=121642 > _______________________________________________ > Numpy-discussion mailing list > Numpy-discussion at lists.sourceforge.net > https://lists.sourceforge.net/lists/listinfo/numpy-discussion > From robert.kern at gmail.com Thu May 25 12:10:01 2006 From: robert.kern at gmail.com (Robert Kern) Date: Thu May 25 12:10:01 2006 Subject: [Numpy-discussion] Re: numpy.sin In-Reply-To: References: Message-ID: Alan G Isaac wrote: > On Thu, 25 May 2006, Robert Kern apparently wrote: > >>The method you showed of using % (2*pi) is only accurate >>when the values are created by multiplying the same pi by >>another value. Otherwise, it just introduces another >>source of error, I think. > > Just to be clear, I meant not (!) 
to presumptuously propose > a method for improving things, but just to illustrate the > issue: both the loss of accuracy, and the obvious conceptual > point that there is (in an abstract sense, at least) no need > for sin(x) and sin(x + 2*pi) to differ. But numpy doesn't deal with abstract senses. It deals with concrete floating point arithmetic. The best value you can *use* for pi in that expression is not the real irrational π. And the best floating-point algorithm you can use for sin() won't (and shouldn't!) assume that sin(x) will equal sin(x + 2*pi). That your demonstration results in the desired exact 0.0 for multiples of 2*pi is an accident. The results for values other than integer multiples of pi will be as wrong or more wrong. It does not demonstrate that floating-point sin(x) and sin(x + 2*pi) need not differ. -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From aisaac at american.edu Thu May 25 12:11:17 2006 From: aisaac at american.edu (Alan G Isaac) Date: Thu May 25 12:11:17 2006 Subject: [Numpy-discussion] Re: numpy.sin In-Reply-To: References: <4475FCE2.6000802@hooft.net> Message-ID: On Thu, 25 May 2006, Alexander Belopolsky apparently wrote: > This is not really a numpy issue, but general floating point problem. > Consider this: >>>> x=linspace(0,10*pi,11) >>>> all(array(map(math.sin, x))==sin(x)) > True I think this misses the point. I was not suggesting numpy results differ from the C math library results.
>>> x1=sin(linspace(0,10*pi,21)) >>> x2=sin(linspace(0,10*pi,21)%(2*pi)) >>> all(x1==x2) False >>> x1 array([ 0.00000000e+00, 1.00000000e+00, 1.22460635e-16, -1.00000000e+00, -2.44921271e-16, 1.00000000e+00, 3.67381906e-16, -1.00000000e+00, -4.89842542e-16, 1.00000000e+00, 6.12303177e-16, -1.00000000e+00, -7.34763812e-16, 1.00000000e+00, 8.57224448e-16, -1.00000000e+00, -9.79685083e-16, 1.00000000e+00, 1.10214572e-15, -1.00000000e+00, -1.22460635e-15]) >>> x2 array([ 0.00000000e+00, 1.00000000e+00, 1.22460635e-16, -1.00000000e+00, 0.00000000e+00, 1.00000000e+00, 1.22460635e-16, -1.00000000e+00, 0.00000000e+00, 1.00000000e+00, 1.22460635e-16, -1.00000000e+00, 0.00000000e+00, 1.00000000e+00, 1.22460635e-16, -1.00000000e+00, 0.00000000e+00, 1.00000000e+00, 1.22460635e-16, -1.00000000e+00, 0.00000000e+00]) I'd rather have x2: I'm just asking if there is anything exploitable here. Robert suggests not. Cheers, Alan Isaac From aisaac at american.edu Thu May 25 12:16:13 2006 From: aisaac at american.edu (Alan G Isaac) Date: Thu May 25 12:16:13 2006 Subject: [Numpy-discussion] Re: numpy.sin In-Reply-To: References: Message-ID: On Thu, 25 May 2006, Robert Kern apparently wrote: > But numpy doesn't deal with abstract senses. It deals with > concrete floating point arithmetic. Of course. > That your demonstration results in the desired exact 0.0 > for multiples of 2*pi is an accident. The results for > values other than integer multiples of pi will be as wrong > or more wrong. It seems that a continuity argument should undermine that as a general claim. Right? But like I said: I was just wondering if there was anything exploitable here. 
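Robert Kern's earlier remark about degree-based trig can be sketched concretely; sin_degrees below is an illustrative helper, not an existing numpy or math function. IEEE fmod (Python's % on floats) is exact, so for arguments built from exactly representable degree counts the reduction itself introduces no error:

```python
import math

def sin_degrees(x_deg):
    # Reduce in degrees first: x_deg % 360.0 is computed exactly, so the
    # only pi-related rounding enters in the single radians conversion.
    return math.sin(math.radians(x_deg % 360.0))

# A million full turns plus 30 degrees:
print(sin_degrees(360000030.0))
```

Compare math.sin(math.radians(360000030.0)), where the huge radian value already carries the rounding error of the degrees-to-radians conversion before sin() ever sees it.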
Thanks, Alan From robert.kern at gmail.com Thu May 25 12:29:06 2006 From: robert.kern at gmail.com (Robert Kern) Date: Thu May 25 12:29:06 2006 Subject: [Numpy-discussion] Re: numpy.sin In-Reply-To: References: Message-ID: Alan G Isaac wrote: > On Thu, 25 May 2006, Robert Kern apparently wrote: >>That your demonstration results in the desired exact 0.0 >>for multiples of 2*pi is an accident. The results for >>values other than integer multiples of pi will be as wrong >>or more wrong. > > It seems that a continuity argument should undermine that as > a general claim. Right? What continuity? This is floating-point arithmetic. [~]$ bc -l bc 1.06 Copyright 1991-1994, 1997, 1998, 2000 Free Software Foundation, Inc. This is free software with ABSOLUTELY NO WARRANTY. For details type `warranty'. scale = 50 s(1000000) -.34999350217129295211765248678077146906140660532871 [~]$ python Python 2.4.1 (#2, Mar 31 2005, 00:05:10) [GCC 3.3 20030304 (Apple Computer, Inc. build 1666)] on darwin Type "help", "copyright", "credits" or "license" for more information. >>> from numpy import * >>> sin(1000000.0) -0.34999350217129299 >>> sin(1000000.0 % (2*pi)) -0.34999350213477698 >>> > But like I said: I was just wondering if there was anything > exploitable here. Like I said: not really. -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From robert.kern at gmail.com Thu May 25 12:39:07 2006 From: robert.kern at gmail.com (Robert Kern) Date: Thu May 25 12:39:07 2006 Subject: [Numpy-discussion] Re: numpy.sin In-Reply-To: References: Message-ID: Alan G Isaac wrote: > On Thu, 25 May 2006, Robert Kern apparently wrote: >>That your demonstration results in the desired exact 0.0 >>for multiples of 2*pi is an accident. The results for >>values other than integer multiples of pi will be as wrong >>or more wrong. 
> > It seems that a continuity argument should undermine that as > a general claim. Right? Let me clarify. You created your values by multiplying the floating-point approximation pi by an integer value. When you perform the operation % (2*pi) on those values, the result happens to be exact or nearly so but only because you used the same approximation of pi. Doing that operation on an arbitrary value (like 1000000) only introduces more error to the calculation. Floating-point sin(1000000.0) should return a value within eps (~2**-52) of the true, real-valued function sin(1000000). Calculating (1000000 % (2*pi)) introduces error in two places: the approximation pi and the operation %. A floating-point implementation of sin(.) will return a value within eps of the real sin(.) of the value that is the result of the floating-point operation (1000000 % (2*pi)), which already has some error accumulated. -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From alexander.belopolsky at gmail.com Thu May 25 13:14:13 2006 From: alexander.belopolsky at gmail.com (Alexander Belopolsky) Date: Thu May 25 13:14:13 2006 Subject: [Numpy-discussion] Re: numpy.sin In-Reply-To: References: Message-ID: I agree with Robert. In fact, on FPUs such as x87, where floating point registers have extended precision, sin(x % (2*pi)) will give you a less precise answer than sin(x). The improved precision that you see is illusory: given that 64-bit pi is not the precise mathematical pi, sin(2*pi) is not 0. On 5/25/06, Robert Kern wrote: > Alan G Isaac wrote: > > On Thu, 25 May 2006, Robert Kern apparently wrote: > > >>That your demonstration results in the desired exact 0.0 > >>for multiples of 2*pi is an accident. The results for > >>values other than integer multiples of pi will be as wrong > >>or more wrong.
> > > > It seems that a continuity argument should undermine that as > > a general claim. Right? > > Let me clarify. Since you created your values by multiplying the floating-point > approximation pi by an integer value. When you perform the operation % (2*pi) on > those values, the result happens to be exact or nearly so but only because you > used the same approximation of pi. Doing that operation on an arbitrary value > (like 1000000) only introduces more error to the calculation. Floating-point > sin(1000000.0) should return a value within eps (~2**-52) of the true, > real-valued function sin(1000000). Calculating (1000000 % (2*pi)) introduces > error in two places: the approximation pi and the operation %. A floating-point > implementation of sin(.) will return a value within eps of the real sin(.) of > the value that is the result of the floating-point operation (1000000 % (2*pi)), > which already has some error accumulated. > > -- > Robert Kern > > "I have come to believe that the whole world is an enigma, a harmless enigma > that is made terrible by our own mad attempt to interpret it as though it had > an underlying truth." > -- Umberto Eco > > > > ------------------------------------------------------- > All the advantages of Linux Managed Hosting--Without the Cost and Risk! > Fully trained technicians. The highest number of Red Hat certifications in > the hosting industry. Fanatical Support. Click to learn more > http://sel.as-us.falkag.net/sel?cmd=lnk&kid=107521&bid=248729&dat=121642 > _______________________________________________ > Numpy-discussion mailing list > Numpy-discussion at lists.sourceforge.net > https://lists.sourceforge.net/lists/listinfo/numpy-discussion > From aisaac at american.edu Thu May 25 13:18:05 2006 From: aisaac at american.edu (Alan G Isaac) Date: Thu May 25 13:18:05 2006 Subject: [Numpy-discussion] Re: numpy.sin In-Reply-To: References: Message-ID: On Thu, 25 May 2006, Robert Kern apparently wrote: > What continuity? 
This is floating-point arithmetic. Sure, but a continuity argument suggests (in the absence of specific floating point reasons to doubt it) that a better approximation at one point will mean better approximations nearby. E.g., >>> epsilon = 0.00001 >>> sin(100*pi+epsilon) 9.999999976550551e-006 >>> sin((100*pi+epsilon)%(2*pi)) 9.9999999887966145e-006 Compare to the bc result of 9.9999999998333333e-006 bc 1.05 Copyright 1991, 1992, 1993, 1994, 1997, 1998 Free Software Foundation, Inc. This is free software with ABSOLUTELY NO WARRANTY. For details type `warranty'. scale = 50 epsilon = 0.00001 s(100*pi + epsilon) .00000999999999983333333333416666666666468253968254 Cheers, Alan From aisaac at american.edu Thu May 25 13:29:04 2006 From: aisaac at american.edu (Alan G Isaac) Date: Thu May 25 13:29:04 2006 Subject: [Numpy-discussion] Re: numpy.sin In-Reply-To: References: Message-ID: On Thu, 25 May 2006, Robert Kern apparently wrote: > Let me clarify. Since you created your values by > multiplying the floating-point approximation pi by an > integer value. When you perform the operation % (2*pi) on > those values, the result happens to be exact or nearly so > but only because you used the same approximation of pi. > Doing that operation on an arbitrary value (like 1000000) > only introduces more error to the calculation. > Floating-point sin(1000000.0) should return a value within > eps (~2**-52) of the true, real-valued function > sin(1000000). Calculating (1000000 % (2*pi)) introduces > error in two places: the approximation pi and the > operation %. A floating-point implementation of sin(.) > will return a value within eps of the real sin(.) of the > value that is the result of the floating-point operation > (1000000 % (2*pi)), which already has some error > accumulated. I do not think that we have any disagreement here, except possibly over eps, which is not constant for different argument sizes.
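Alan's closing point, that eps is not constant across argument sizes, can be seen directly from the spacing of adjacent doubles (a stdlib-only sketch; math.ulp requires Python 3.9+):

```python
import math

# One ulp (the gap to the next representable double) grows with
# magnitude, so a large argument to sin() is known less precisely in
# absolute terms, even though sin() is accurate to roughly an ulp of
# whatever argument it actually receives.
for x in (1.0, 1e3, 1e6, 1e9):
    print(x, math.ulp(x))
```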
So I wondered if there was a tradeoff: smaller eps (from a smaller argument) for the cost of computational error in an additional operation. Anyway, thanks for the feedback on this. Cheers, Alan From robert.kern at gmail.com Thu May 25 13:43:12 2006 From: robert.kern at gmail.com (Robert Kern) Date: Thu May 25 13:43:12 2006 Subject: [Numpy-discussion] Re: numpy.sin In-Reply-To: References: Message-ID: Alan G Isaac wrote: > On Thu, 25 May 2006, Robert Kern apparently wrote: > >>What continuity? This is floating-point arithmetic. > > Sure, but a continuity argument suggests (in the absence of > specific floating point reasons to doubt it) that a better > approximation at one point will mean better approximations > nearby. E.g., > >>>>epsilon = 0.00001 >>>>sin(100*pi+epsilon) > > 9.999999976550551e-006 > >>>>sin((100*pi+epsilon)%(2*pi)) > > 9.9999999887966145e-006 > > Compare to the bc result of > 9.9999999998333333e-006 > > bc 1.05 > Copyright 1991, 1992, 1993, 1994, 1997, 1998 Free Software Foundation, Inc. > This is free software with ABSOLUTELY NO WARRANTY. > For details type `warranty'. > scale = 50 > epsilon = 0.00001 > s(100*pi + epsilon) > .00000999999999983333333333416666666666468253968254 You aren't using bc correctly. bc 1.06 Copyright 1991-1994, 1997, 1998, 2000 Free Software Foundation, Inc. This is free software with ABSOLUTELY NO WARRANTY. For details type `warranty'. 100*pi 0 If you know that you are epsilon from n*2*π (the real number, not the floating point one), you should just be calculating sin(epsilon). Usually, you do not know this, and % (2*pi) will not tell you this. (100*pi + epsilon) is not the same thing as (100*π + epsilon). FWIW, for the calculation that you did in bc, numpy.sin() gives the same results (up to the last digit): >>> from numpy import * >>> sin(0.00001) 9.9999999998333335e-06 You wanted to know if there is something exploitable to improve the accuracy of numpy.sin(). In general, there is not.
However, if you know the difference between your value and an integer multiple of the real number 2*π, then you can do your floating-point calculation on that difference. However, you will not in general get an improvement by using % (2*pi) to calculate that difference. -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From ndarray at mac.com Thu May 25 14:29:07 2006 From: ndarray at mac.com (Sasha) Date: Thu May 25 14:29:07 2006 Subject: [Numpy-discussion] Re: numpy.sin In-Reply-To: References: Message-ID: This example looks like an artifact of decimal to binary conversion. Consider this: >>> epsilon = 1./2**16 >>> epsilon 1.52587890625e-05 >>> sin(100*pi+epsilon) 1.5258789063872671e-05 >>> sin((100*pi+epsilon)%(2*pi)) 1.5258789076118735e-05 and in bc: scale=50 epsilon = 1./2.^16 s(100*pi + epsilon) .00001525878906190788105354014301687863346141310239 On 5/25/06, Alan G Isaac wrote: > On Thu, 25 May 2006, Robert Kern apparently wrote: > > What continuity? This is floating-point arithmetic. > > Sure, but a continuity argument suggests (in the absence of > specific floating point reasons to doubt it) that a better > approximation at one point will mean better approximations > nearby. E.g., > > >>> epsilon = 0.00001 > >>> sin(100*pi+epsilon) > 9.999999976550551e-006 > >>> sin((100*pi+epsilon)%(2*pi)) > 9.9999999887966145e-006 > > Compare to the bc result of > 9.9999999998333333e-006 > > > bc 1.05 > Copyright 1991, 1992, 1993, 1994, 1997, 1998 Free Software Foundation, Inc. > This is free software with ABSOLUTELY NO WARRANTY. > For details type `warranty'.
> scale = 50 > epsilon = 0.00001 > s(100*pi + epsilon) > .00000999999999983333333333416666666666468253968254 > > Cheers, > Alan > > > > > ------------------------------------------------------- > All the advantages of Linux Managed Hosting--Without the Cost and Risk! > Fully trained technicians. The highest number of Red Hat certifications in > the hosting industry. Fanatical Support. Click to learn more > http://sel.as-us.falkag.net/sel?cmdlnk&kid7521&bid$8729&dat1642 > _______________________________________________ > Numpy-discussion mailing list > Numpy-discussion at lists.sourceforge.net > https://lists.sourceforge.net/lists/listinfo/numpy-discussion > From yi at yiqiang.net Thu May 25 15:34:01 2006 From: yi at yiqiang.net (Yi Qiang) Date: Thu May 25 15:34:01 2006 Subject: [Numpy-discussion] Problem compiling numpy against ATLAS on amd64 (Mandriva 2006) Message-ID: <447630A0.60205@yiqiang.net> Hi list, I searched the archives and found various threads regarding this issue and I have not found a solution there. 
software versions: gfortran 4.0.1 atlas 3.6.0 lapack 3.0 Basically numpy spits out this message when I try to compile it: gcc: numpy/linalg/lapack_litemodule.c /usr/bin/gfortran -shared build/temp.linux-x86_64-2.4/numpy/linalg/lapack_litemodule.o -L/usr/local/lib/atlas -llapack -lptf77blas -lptcblas -latlas -lgfortran -o build/lib.linux-x86_64-2.4/numpy/linalg/lapack_lite.so /usr/bin/ld: /usr/local/lib/atlas/liblapack.a(dlamch.o): relocation R_X86_64_32S against `a local symbol' can not be used when making a shared object; recompile with -fPIC /usr/local/lib/atlas/liblapack.a: could not read symbols: Bad value collect2: ld returned 1 exit status /usr/bin/ld: /usr/local/lib/atlas/liblapack.a(dlamch.o): relocation R_X86_64_32S against `a local symbol' can not be used when making a shared object; recompile with -fPIC /usr/local/lib/atlas/liblapack.a: could not read symbols: Bad value collect2: ld returned 1 exit status error: Command "/usr/bin/gfortran -shared build/temp.linux-x86_64-2.4/numpy/linalg/lapack_litemodule.o -L/usr/local/lib/atlas -llapack -lptf77blas -lptcblas -latlas -lgfortran -o build/lib.linux-x86_64-2.4/numpy/linalg/lapack_lite.so" failed with exit status 1 However I have compiled all the software explicitly with the -fPIC flag on. Attached is my make.inc for LAPACK and my Makefile for ATLAS. I followed these instructions to create a hybrid LAPACK/ATLAS archive: http://math-atlas.sourceforge.net/errata.html#completelp Interestingly enough, if I just use the bare version of ATLAS, numpy compiles fine. If I use the bare version of LAPACK, numpy compiles fine . Any help would be greatly appreciated. 
-Yi From yi at yiqiang.net Thu May 25 15:41:07 2006 From: yi at yiqiang.net (Yi Qiang) Date: Thu May 25 15:41:07 2006 Subject: [Numpy-discussion] Problem compiling numpy against ATLAS on amd64 (Mandriva 2006) In-Reply-To: <447630A0.60205@yiqiang.net> References: <447630A0.60205@yiqiang.net> Message-ID: <4476326E.6030308@yiqiang.net> Yi Qiang wrote: > Interestingly enough, if I just use the bare version of ATLAS, numpy > compiles fine. If I use the bare version of LAPACK, numpy compiles fine . Actually, I take that back. I get the same error when trying to link against the standalone version of LAPACK, so that suggests something went wrong there. And here are the files I forgot to attach! > > Any help would be greatly appreciated. > > > > -Yi -------------- next part -------------- An embedded and charset-unspecified text was scrubbed... Name: make.inc URL: -------------- next part -------------- An embedded and charset-unspecified text was scrubbed... Name: Make.Linux_HAMMER64SSE2_2 URL: From cookedm at physics.mcmaster.ca Thu May 25 15:46:01 2006 From: cookedm at physics.mcmaster.ca (David M.
Cooke) Date: Thu May 25 15:46:01 2006 Subject: [Numpy-discussion] Problem compiling numpy against ATLAS on amd64 (Mandriva 2006) In-Reply-To: <4476326E.6030308@yiqiang.net> (Yi Qiang's message of "Thu, 25 May 2006 15:40:46 -0700") References: <447630A0.60205@yiqiang.net> <4476326E.6030308@yiqiang.net> Message-ID: Yi Qiang writes: > Yi Qiang wrote: >> https://lists.sourceforge.net/lists/listinfo/numpy-discussion > #################################################################### > # LAPACK make include file. # > # LAPACK, Version 3.0 # > # June 30, 1999 # > #################################################################### > # > SHELL = /bin/sh > # > # The machine (platform) identifier to append to the library names > # > PLAT = _LINUX > # > # Modify the FORTRAN and OPTS definitions to refer to the > # compiler and desired compiler options for your machine. NOOPT > # refers to the compiler options desired when NO OPTIMIZATION is > # selected. Define LOADER and LOADOPTS to refer to the loader and > # desired load options for your machine. > # > FORTRAN = gfortran > OPTS = -fPIC -funroll-all-loops -fno-f2c -O2 > DRVOPTS = $(OPTS) > NOOPT = Maybe NOOPT needs -fPIC? That's the only one I see where it could be missing. -- |>|\/|< /--------------------------------------------------------------------------\ |David M. Cooke http://arbutus.physics.mcmaster.ca/dmc/ |cookedm at physics.mcmaster.ca From aisaac at american.edu Thu May 25 15:47:04 2006 From: aisaac at american.edu (Alan G Isaac) Date: Thu May 25 15:47:04 2006 Subject: [Numpy-discussion] Re: numpy.sin In-Reply-To: References: Message-ID: On Thu, 25 May 2006, Robert Kern apparently wrote: > You aren't using bc correctly. Ooops. I am not a user and was just following your post without reading the manual. I hope the below fixes pi; and (I think) it makes the same point I tried to make before: a continuity argument renders the general claim you made suspect. 
(Of course it's looking like a pretty narrow range of possible benefit as well.) > If you know that you are epsilon from n*2*? (the real > number, not the floating point one), you should just be > calculating sin(epsilon). Usually, you do not know this, > and % (2*pi) will not tell you this. (100*pi + epsilon) is > not the same thing as (100*? + epsilon). Yes, I understand all this. Of course, it is not quite an answer to the question: can '%(2*pi)' offer an advantage in the right circumstances? And the original question was again different: can we learn from such calculations that **some** method might offer an improvement? Anyway, you have already been more than generous with your time. Thanks! Alan bc 1.05 Copyright 1991, 1992, 1993, 1994, 1997, 1998 Free Software Foundation, Inc. This is free software with ABSOLUTELY NO WARRANTY. For details type `warranty'. scale = 50 pi = 4*a(1) epsilon = 0.00001 s(100*pi + epsilon) .00000999999999983333333333416666666666468253967996 or 9.999999999833333e-006 compared to: >>> epsilon = 0.00001 >>> sin(100*pi+epsilon) 9.999999976550551e-006 >>> sin((100*pi+epsilon)%(2*pi)) 9.9999999887966145e-006 From robert.kern at gmail.com Thu May 25 16:15:24 2006 From: robert.kern at gmail.com (Robert Kern) Date: Thu May 25 16:15:24 2006 Subject: [Numpy-discussion] Re: Problem compiling numpy against ATLAS on amd64 (Mandriva 2006) In-Reply-To: References: <447630A0.60205@yiqiang.net> <4476326E.6030308@yiqiang.net> Message-ID: David M. Cooke wrote: > Yi Qiang writes: >>FORTRAN = gfortran >>OPTS = -fPIC -funroll-all-loops -fno-f2c -O2 >>DRVOPTS = $(OPTS) >>NOOPT = > > Maybe NOOPT needs -fPIC? That's the only one I see where it could be > missing. That sounds right. dlamch is not supposed to be compiled with optimization, IIRC. -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." 
-- Umberto Eco From robert.kern at gmail.com Thu May 25 16:40:07 2006 From: robert.kern at gmail.com (Robert Kern) Date: Thu May 25 16:40:07 2006 Subject: [Numpy-discussion] Re: numpy.sin In-Reply-To: References: Message-ID: Alan G Isaac wrote: > On Thu, 25 May 2006, Robert Kern apparently wrote: > >>You aren't using bc correctly. > > Ooops. I am not a user and was just following your post > without reading the manual. I hope the below fixes pi; > and (I think) it makes the same point I tried to make before: > a continuity argument renders the general claim you > made suspect. (Of course it's looking like a pretty > narrow range of possible benefit as well.) Yes, you probably can construct cases where the % (2*pi) step will ultimately yield an answer closer to what you want. You cannot expect that step to give *reliable* improvements. >>If you know that you are epsilon from n*2*π (the real >>number, not the floating point one), you should just be >>calculating sin(epsilon). Usually, you do not know this, >>and % (2*pi) will not tell you this. (100*pi + epsilon) is >>not the same thing as (100*π + epsilon). > > Yes, I understand all this. Of course, > it is not quite an answer to the question: > can '%(2*pi)' offer an advantage in the > right circumstances? Not in any that aren't contrived. Not in any situations where you don't already have enough knowledge to do a better calculation (e.g. calculating sin(epsilon) rather than sin(2*n*pi + epsilon)). > And the original question > was again different: can we learn > from such calculations that **some** method might > offer an improvement? No, those calculations make no such revelation. Good implementations of sin() already reduce the argument into a small range around 0 just to make the calculation feasible. They do so much more accurately than doing % (2*pi) but they can only work with the information given to the function.
It cannot know that, for example, you generated the inputs by multiplying the double-precision approximation of π by an integer. You can look at the implementation in fdlibm: http://www.netlib.org/fdlibm/s_sin.c > bc 1.05 > Copyright 1991, 1992, 1993, 1994, 1997, 1998 Free Software Foundation, Inc. > This is free software with ABSOLUTELY NO WARRANTY. > For details type `warranty'. > scale = 50 > pi = 4*a(1) > epsilon = 0.00001 > s(100*pi + epsilon) > .00000999999999983333333333416666666666468253967996 > or > 9.999999999833333e-006 > > compared to: > >>>>epsilon = 0.00001 >>>>sin(100*pi+epsilon) > > 9.999999976550551e-006 > >>>>sin((100*pi+epsilon)%(2*pi)) > > 9.9999999887966145e-006 As Sasha noted, that is an artifact of bc's use of decimal rather than binary, and Python's conversion of the literal "0.00001" into binary. [scipy]$ bc -l bc 1.06 Copyright 1991-1994, 1997, 1998, 2000 Free Software Foundation, Inc. This is free software with ABSOLUTELY NO WARRANTY. For details type `warranty'. scale = 50 pi = 4*a(1) epsilon = 1./2.^16 s(100*pi + epsilon) .00001525878906190788105354014301687863346141309981 s(epsilon) .00001525878906190788105354014301687863346141310239 [scipy]$ python Python 2.4.1 (#2, Mar 31 2005, 00:05:10) [GCC 3.3 20030304 (Apple Computer, Inc. build 1666)] on darwin Type "help", "copyright", "credits" or "license" for more information. >>> from numpy import * >>> epsilon = 1./2.**16 >>> sin(100*pi + epsilon) 1.5258789063872268e-05 >>> sin((100*pi + epsilon) % (2*pi)) 1.5258789076118735e-05 >>> sin(epsilon) 1.5258789061907882e-05 I do recommend reading up more on floating point arithmetic. A good paper is Goldberg's "What Every Computer Scientist Should Know About Floating-Point Arithmetic": http://www.physics.ohio-state.edu/~dws/grouplinks/floating_point_math.pdf -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth."
-- Umberto Eco From simon at arrowtheory.com Sun May 28 00:51:00 2006 From: simon at arrowtheory.com (Simon Burton) Date: Sun May 28 00:51:00 2006 Subject: [Numpy-discussion] dtype: hashing and cmp Message-ID: <20060528173303.6cd4c5c6.simon@arrowtheory.com> Is there a reason why dtype's are unhashable ? (ouch) On another point, is there a canonical list of dtype's ? I'd like to test the dtype of an array, and I always end up with something like this: if array.dtype == numpy.dtype('l'): ... When I would prefer to write something like: if array.dtype == numpy.Int32: ... (i can never remember these char codes !) Alternatively, should dtype's __cmp__ promote the other arg to a dtype before the compare ?
I guess not, since that would break a lot of code: eg. dtype(None) is legal. Simon. -- Simon Burton, B.Sc. Licensed PO Box 8066 ANU Canberra 2601 Australia Ph. 61 02 6249 6940 http://arrowtheory.com From erin.sheldon at gmail.com Sun May 28 08:34:16 2006 From: erin.sheldon at gmail.com (Erin Sheldon) Date: Sun May 28 08:34:16 2006 Subject: [Numpy-discussion] fromfile for dtype=Int8 Message-ID: <331116dc0605280833r6e5021a2i7a7f02c9b6c9ae5a@mail.gmail.com> Hi everyone - The "fromfile" method isn't working for Int8 in ascii mode: # cat test.dat 3 4 5 >>> import numpy as np >>> np.__version__ '0.9.9.2547' >>> np.fromfile('test.dat', sep='\n', dtype=np.Int16) array([3, 4, 5], dtype=int16) >>> np.fromfile('test.dat', sep='\n', dtype=np.Int8) Traceback (most recent call last): File "", line 1, in ? ValueError: don't know how to read character files with that array type Was this intended? Erin From robert.kern at gmail.com Sun May 28 12:35:01 2006 From: robert.kern at gmail.com (Robert Kern) Date: Sun May 28 12:35:01 2006 Subject: [Numpy-discussion] Re: dtype: hashing and cmp In-Reply-To: <20060528173303.6cd4c5c6.simon@arrowtheory.com> References: <20060528173303.6cd4c5c6.simon@arrowtheory.com> Message-ID: Simon Burton wrote: > Is there a reason why dtype's are unhashable ? (ouch) No one thought about it, probably. If you would like to submit a patch, I think we would check it in. > On another point, is there a canonical list of dtype's ? > I'd like to test the dtype of an array, and I always > end up with something like this: > > if array.dtype == numpy.dtype('l'): ... > > When I would prefer to write something like: > > if array.dtype == numpy.Int32: ... numpy.int32 There is a list on page 20 of _The Guide to NumPy_. It is included in the sample chapters: http://www.tramy.us/scipybooksample.pdf > (i can never remember these char codes !) > > Alternatively, should dtype's __cmp__ promote the other arg > to a dtype before the compare ?
> I guess not, since that would break a lot of code: eg. dtype(None) > is legal. Correct, it should not. -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From simon at arrowtheory.com Sun May 28 12:53:03 2006 From: simon at arrowtheory.com (Simon Burton) Date: Sun May 28 12:53:03 2006 Subject: [Numpy-discussion] Re: dtype: hashing and cmp In-Reply-To: References: <20060528173303.6cd4c5c6.simon@arrowtheory.com> Message-ID: <20060529055411.5bc43330.simon@arrowtheory.com> On Sun, 28 May 2006 14:33:37 -0500 Robert Kern wrote: > > > if array.dtype == numpy.Int32: ... > > numpy.int32 No that doesn't work. >>> numpy.int32 >>> numpy.int32 == numpy.dtype('l') False Simon. -- Simon Burton, B.Sc. Licensed PO Box 8066 ANU Canberra 2601 Australia Ph. 61 02 6249 6940 http://arrowtheory.com From robert.kern at gmail.com Sun May 28 13:04:03 2006 From: robert.kern at gmail.com (Robert Kern) Date: Sun May 28 13:04:03 2006 Subject: [Numpy-discussion] Re: dtype: hashing and cmp In-Reply-To: <20060529055411.5bc43330.simon@arrowtheory.com> References: <20060528173303.6cd4c5c6.simon@arrowtheory.com> <20060529055411.5bc43330.simon@arrowtheory.com> Message-ID: Simon Burton wrote: > On Sun, 28 May 2006 14:33:37 -0500 > Robert Kern wrote: > >>>if array.dtype == numpy.Int32: ... >> >>numpy.int32 > > No that doesn't work. > >>>>numpy.int32 > > > >>>>numpy.int32 == numpy.dtype('l') > > False >>> from numpy import * >>> a = linspace(0, 10, 11) >>> a.dtype == dtype(float64) True -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." 
-- Umberto Eco From oliphant.travis at ieee.org Sun May 28 13:38:01 2006 From: oliphant.travis at ieee.org (Travis Oliphant) Date: Sun May 28 13:38:01 2006 Subject: [Numpy-discussion] Re: dtype: hashing and cmp In-Reply-To: <20060529055411.5bc43330.simon@arrowtheory.com> References: <20060528173303.6cd4c5c6.simon@arrowtheory.com> <20060529055411.5bc43330.simon@arrowtheory.com> Message-ID: <447A09DF.3060803@ieee.org> Simon Burton wrote: > On Sun, 28 May 2006 14:33:37 -0500 > Robert Kern wrote: > > >>> if array.dtype == numpy.Int32: ... >>> >> numpy.int32 >> > > > No that doesn't work. > > Yeah, the "canonical" types (e.g. int32, float64, etc) are actually scalar objects. The type objects themselves are dtype(int32). I don't think they are currently listed anywhere in Python (except there is one for every canonical scalar object). The difference between the scalar object and the data-type object did not become clear until December 2005. Previously the scalar object was used as the data-type (obviously there is still a relationship between them). -Travis >>>> numpy.int32 >>>> > > >>>> numpy.int32 == numpy.dtype('l') >>>> > False > > > Simon. > > From oliphant.travis at ieee.org Sun May 28 13:42:04 2006 From: oliphant.travis at ieee.org (Travis Oliphant) Date: Sun May 28 13:42:04 2006 Subject: ***[Possible UCE]*** [Numpy-discussion] fromfile for dtype=Int8 In-Reply-To: <331116dc0605280833r6e5021a2i7a7f02c9b6c9ae5a@mail.gmail.com> References: <331116dc0605280833r6e5021a2i7a7f02c9b6c9ae5a@mail.gmail.com> Message-ID: <447A0AF9.5050104@ieee.org> Erin Sheldon wrote: > Hi everyone - > > The "fromfile" method isn't working for Int8 in > ascii mode: > > # cat test.dat > 3 > 4 > 5 The problem is that the internal _scan method for that data-type has not been written (it was not just a character code for fscanf). It should not be too hard to write but hasn't been done yet. Perhaps you can file a ticket so we don't lose track of it. 
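Pending the `_scan` fix Travis describes, Erin's file can be read today by going through a type that text-mode reading does handle and downcasting afterwards. This is an illustrative workaround against a modern numpy, not code from the thread:

```python
import os
import tempfile
import numpy as np

# Re-create Erin's test.dat: one small integer per line.
with tempfile.NamedTemporaryFile('w', suffix='.dat', delete=False) as f:
    f.write('3\n4\n5\n')
    path = f.name

# Read with a wider integer type, then downcast to int8.
a = np.loadtxt(path, dtype=np.int16).astype(np.int8)
os.remove(path)

assert a.dtype == np.int8
assert a.tolist() == [3, 4, 5]
```

The downcast is safe here only because the values fit in an int8; a real loader would want to check the range before converting.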
-Travis From simon at arrowtheory.com Sun May 28 13:57:02 2006 From: simon at arrowtheory.com (Simon Burton) Date: Sun May 28 13:57:02 2006 Subject: [Numpy-discussion] Re: dtype: hashing and cmp In-Reply-To: <447A09DF.3060803@ieee.org> References: <20060528173303.6cd4c5c6.simon@arrowtheory.com> <20060529055411.5bc43330.simon@arrowtheory.com> <447A09DF.3060803@ieee.org> Message-ID: <20060529065757.5d784334.simon@arrowtheory.com> On Sun, 28 May 2006 14:36:47 -0600 Travis Oliphant wrote: > Simon Burton wrote: > > On Sun, 28 May 2006 14:33:37 -0500 > > Robert Kern wrote: > > > > > >>> if array.dtype == numpy.Int32: ... > >>> > >> numpy.int32 > >> > > > > > > No that doesn't work. > > > > > > Yeah, the "canonical" types (e.g. int32, float64, etc) are actually > scalar objects. The type objects themselves are dtype(int32). I don't > think they are currently listed anywhere in Python (except there is one > for every canonical scalar object). ... Can we promote the numarray names: Int32 etc. to their dtype equivalents ? I don't see why having Int32='l' is any more useful than Int32=dtype('l'), and the latter works with cmp (and also is more helpful in the interactive interpreter). Simon. -- Simon Burton, B.Sc. Licensed PO Box 8066 ANU Canberra 2601 Australia Ph.
61 02 6249 6940 http://arrowtheory.com From oliphant.travis at ieee.org Sun May 28 14:57:02 2006 From: oliphant.travis at ieee.org (Travis Oliphant) Date: Sun May 28 14:57:02 2006 Subject: [Numpy-discussion] Re: dtype: hashing and cmp In-Reply-To: <20060529065757.5d784334.simon@arrowtheory.com> References: <20060528173303.6cd4c5c6.simon@arrowtheory.com> <20060529055411.5bc43330.simon@arrowtheory.com> <447A09DF.3060803@ieee.org> <20060529065757.5d784334.simon@arrowtheory.com> Message-ID: <447A1C69.4050602@ieee.org> Simon Burton wrote: > On Sun, 28 May 2006 14:36:47 -0600 > Travis Oliphant wrote: > > >> Simon Burton wrote: >> >>> On Sun, 28 May 2006 14:33:37 -0500 >>> Robert Kern wrote: >>> >>> >>>>> if array.dtype == numpy.Int32: ... >>>>> >>>>> >>>> numpy.int32 >>>> >>>> >>> >>> No that doesn't work. >>> >>> >>> >> Yeah, the "canonical" types (e.g. int32, float64, etc) are actually >> scalar objects. The type objects themselves are dtype(int32). I don't >> think they are currently listed anywhere in Python (except there is one >> for every canonical scalar object). >> > ... > > Can we promote the numarray names: Int32 etc. to their dtype equivalents ? > Perhaps. There is the concern that it might break Numeric compatibility, though. -Travis From maxim.krikun at gmail.com Mon May 29 05:01:03 2006 From: maxim.krikun at gmail.com (Maxim Krikun) Date: Mon May 29 05:01:03 2006 Subject: [Numpy-discussion] 24bit arrays Message-ID: <45bc4390605290500h5de9ccabmacfb6d5d357ed2a7@mail.gmail.com> Hi all. I'm writing a tool to access uncompressed audio files (currently in wave, aiff and wave64 formats) using numarray.memmap module -- it maps the whole file to memory, then finds the waveform data region and casts it to an array of appropriate type and shape. This works pretty well for both 16-bit integer and 32-bit float data, but not for 24-bit files, since there is no Int24 data type in numarray.
Is there some clever way to achieve the same goal for 24bit data without copying everything into a new 32-bit array? The typical 24bit audio file contains two interleaved channels, i.e. frames of 3bytes+3bytes, so it can be cast to (nframes,3) Int32, or (nframes,2,3) Int8 array, but this is hardly a useful representation for audio data. --Maxim
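One loop-free way to get from the (nframes, 2, 3) byte view Maxim describes to signed 32-bit samples is to combine the byte columns with shifts and fix up the sign in a single vectorized pass. This is a sketch, not from the thread; it assumes big-endian samples, and the byte string is made-up test data rather than a real file:

```python
import numpy as np

# Two frames of two interleaved channels, 3 bytes per sample, big-endian:
# the samples encoded are 1, -1, 8388607 (max) and -8388608 (min).
raw = np.frombuffer(
    b'\x00\x00\x01\xff\xff\xff\x7f\xff\xff\x80\x00\x00',
    dtype=np.uint8).reshape(-1, 2, 3)

# Assemble the three bytes of each sample into one integer...
x = ((raw[..., 0].astype(np.int32) << 16) |
     (raw[..., 1].astype(np.int32) << 8) |
      raw[..., 2].astype(np.int32))

# ...and sign-extend from 24 to 32 bits, still without a Python loop.
x = np.where(x & 0x800000, x - (1 << 24), x)

# Scale to floats in [-1, 1) if floating-point samples are wanted.
samples = x / 2.0 ** 23
```

For little-endian files the byte columns just swap roles; a memory-mapped file can be used in place of the `frombuffer` call, since only cheap views and one vectorized pass over the data are involved.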
From simon at arrowtheory.com Mon May 29 05:17:01 2006 From: simon at arrowtheory.com (Simon Burton) Date: Mon May 29 05:17:01 2006 Subject: [Numpy-discussion] 24bit arrays In-Reply-To: <45bc4390605290500h5de9ccabmacfb6d5d357ed2a7@mail.gmail.com> References: <45bc4390605290500h5de9ccabmacfb6d5d357ed2a7@mail.gmail.com> Message-ID: <20060529221738.6fb03e53.simon@arrowtheory.com> On Mon, 29 May 2006 14:00:34 +0200 "Maxim Krikun" wrote: > > The typical 24bit audio file contains two interleaved channels, i.e. > frames of 3bytes+3bytes, so it can be cast to (nframes,3) Int32, or > (nframes,2,3) Int8 array, but this is hardly a useful representation for > audio data. Why not ? It's good for slicing and dicing, anything else and you should convert it to float before operating on it. Simon. -- Simon Burton, B.Sc. Licensed PO Box 8066 ANU Canberra 2601 Australia Ph. 61 02 6249 6940 http://arrowtheory.com From pfdubois at gmail.com Mon May 29 10:21:05 2006 From: pfdubois at gmail.com (Paul Dubois) Date: Mon May 29 10:21:05 2006 Subject: [Numpy-discussion] Special issue of CiSE on Python -- Correspondents please reply Message-ID: In April I sent out a request for proposals for a special issue of Computing in Science and Engineering on Python's use in science and engineering. Due to being somewhat inexperienced with a new mailer, I lost some of the correspondence. Would those with whom I corresponded send me back something about what we were talking about? I know some additional people had gotten dragged into the conversation. I did not lose letters from: Jarrod Millman Kent-Andre Mardal Xuan Shi Ryan Krauss I have the word doc only from Peter Bienstman. I have the text outline only from Arnd Baecker Sorry to be so clumsy. Paul Dubois -------------- next part -------------- An HTML attachment was scrubbed...
URL: From pfdubois at gmail.com Mon May 29 15:20:02 2006 From: pfdubois at gmail.com (Paul Dubois) Date: Mon May 29 15:20:02 2006 Subject: [Numpy-discussion] ...never mind, found everything Message-ID: Please disregard my previous posting about my special issue correspondence. All is well. Gmail and my twitching don't work well together. I need a computer that ignores me. -------------- next part -------------- An HTML attachment was scrubbed... URL: From simon at arrowtheory.com Mon May 29 23:56:00 2006 From: simon at arrowtheory.com (Simon Burton) Date: Mon May 29 23:56:00 2006 Subject: [Numpy-discussion] when does numpy create temporaries ? Message-ID: <20060530165457.769baf5f.simon@arrowtheory.com> Consider these two operations: >>> a=numpy.empty( 1024 ) >>> b=numpy.empty( 1024 ) >>> a[1:] = b[:-1] >>> a[1:] = a[:-1] >>> It seems that in the second operation we need to copy the view a[:-1] but in the first operation we don't need to copy b[:-1]. How does numpy detect this, or does it always copy the source when assigning to a slice ? I've poked around the (numpy) code a bit and tried some benchmarks, but it's still not so clear to me. Simon. -- Simon Burton, B.Sc. Licensed PO Box 8066 ANU Canberra 2601 Australia Ph. 61 02 6249 6940 http://arrowtheory.com From rmuller at sandia.gov Tue May 30 09:30:16 2006 From: rmuller at sandia.gov (Rick Muller) Date: Tue May 30 09:30:16 2006 Subject: [Numpy-discussion] arrays that start from negative numbers? Message-ID: <6B389811-2185-4C00-A5FC-4E48E8D44ECB@sandia.gov> Is it possible to create arrays that run from, say -5:5, rather than 0:11? Analogous to the Fortran "allocate(A(-5:5))" command?
I'm translating a F90 code to Python, and it would be easier to do this than to use a python dictionary. Thanks in advance. Rick Muller rmuller at sandia.gov From rays at blue-cove.com Tue May 30 09:45:02 2006 From: rays at blue-cove.com (Ray Schumacher) Date: Tue May 30 09:45:02 2006 Subject: [Numpy-discussion] Re: 24bit arrays Message-ID: <6.2.5.2.2.20060530084113.0587c1e0@blue-cove.com> >Is there some clever way to achieve the same goal for 24bit data without >copying everything into a new 32-bit array? I agree with the other post, Int8 x 3 can be used with slices to get a lot done, depending on data tasks desired, but not all >The typical 24bit audio file contains two interleaved channels, i.e. >frames of 3bytes+3bytes, so it can be cast to (nframes,3) Int32, or >(nframes,2,3) Int8 array, but this is hardly a useful representation for >audio data. Along these lines, I have been working with 24 bit ADC data returned from pyUSB as tuples, which I need to convert to Float32 and save, like this: WRAP = 2.**23 BITS24 = 2.**24 try: chValue = struct.unpack(">I", struct.pack(">4b", 0,*dataTuple[byteN:byteN+3]) )[0] except: chValue = 0 if chValue>WRAP: chValue = ((BITS24-chValue) / WRAP) * gainFactors[thisCh] else: chValue = (-chValue / WRAP) * gainFactors[thisCh] data[thisCh].append(chValue) which is really slow (no real time is possible). Is there a much faster way evident to others here? We are going to do a pyd in C otherwise... Ray From aisaac at american.edu Tue May 30 10:26:07 2006 From: aisaac at american.edu (Alan Isaac) Date: Tue May 30 10:26:07 2006 Subject: [Numpy-discussion] arrays that start from negative numbers? In-Reply-To: <6B389811-2185-4C00-A5FC-4E48E8D44ECB@sandia.gov> References: <6B389811-2185-4C00-A5FC-4E48E8D44ECB@sandia.gov> Message-ID: On Tue, 30 May 2006, Rick Muller wrote: > Is it possible to create arrays that run from, say -5:5, rather than > 0:11? 
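[Editor's note: Ray's per-sample struct loop above can be vectorized in numpy. The sketch below is illustrative only — the function name and argument layout are made up, and it implements standard big-endian two's-complement decoding; Ray's own sign/gain conventions would need to be checked against his hardware.]

```python
import numpy as np

def decode24(raw_bytes, gain=1.0):
    """Vectorized sketch: decode big-endian signed 24-bit samples to float.

    `raw_bytes` is a flat sequence of bytes, three per sample, as they
    might arrive from the USB device.
    """
    b = np.asarray(raw_bytes, dtype=np.uint8).reshape(-1, 3)
    # Assemble each 24-bit value from its three big-endian bytes.
    val = (b[:, 0].astype(np.int32) << 16) \
        | (b[:, 1].astype(np.int32) << 8) \
        | b[:, 2].astype(np.int32)
    # Sign-extend: values at or above 2**23 are negative in two's complement.
    val -= (val >= 1 << 23).astype(np.int32) * (1 << 24)
    # Scale to [-1, 1) and apply a per-channel gain.
    return val.astype(np.float32) / np.float32(1 << 23) * gain
```

This replaces the per-sample `struct.pack`/`unpack` round-trip with three array operations over all samples at once, which is typically orders of magnitude faster than the Python loop.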
Python 2.4.1 (#65, Mar 30 2005, 09:13:57) [MSC v.1310 32 bit (Intel)] on win32 Type "help", "copyright", "credits" or "license" for more information. >>> import numpy as N >>> x = N.arange(-5,6) >>> x array([-5, -4, -3, -2, -1, 0, 1, 2, 3, 4, 5]) >>> y=N.arange(11) >>> y-5 array([-5, -4, -3, -2, -1, 0, 1, 2, 3, 4, 5]) hth, Alan Isaac From hetland at tamu.edu Tue May 30 10:33:03 2006 From: hetland at tamu.edu (Rob Hetland) Date: Tue May 30 10:33:03 2006 Subject: [Numpy-discussion] arrays that start from negative numbers? In-Reply-To: <6B389811-2185-4C00-A5FC-4E48E8D44ECB@sandia.gov> References: <6B389811-2185-4C00-A5FC-4E48E8D44ECB@sandia.gov> Message-ID: <8F52C872-9FF9-4B50-A728-DD271812D6D3@tamu.edu> I believe Rick is talking about negative indices (possible in FORTRAN), in which case the answer is no. -Rob On May 30, 2006, at 11:28 AM, Rick Muller wrote: > Is it possible to create arrays that run from, say -5:5, rather > than 0:11? Analogous to the Fortran "allocate(A(-5:5))" command? > I'm translating a F90 code to Python, and it would be easier to do > this than to use a python dictionary. ----- Rob Hetland, Assistant Professor Dept of Oceanography, Texas A&M University p: 979-458-0096, f: 979-845-6331 e: hetland at tamu.edu, w: http://pong.tamu.edu From aisaac at american.edu Tue May 30 10:43:04 2006 From: aisaac at american.edu (Alan Isaac) Date: Tue May 30 10:43:04 2006 Subject: [Numpy-discussion] arrays that start from negative numbers? In-Reply-To: <8F52C872-9FF9-4B50-A728-DD271812D6D3@tamu.edu> References: <6B389811-2185-4C00-A5FC-4E48E8D44ECB@sandia.gov><8F52C872-9FF9-4B50-A728-DD271812D6D3@tamu.edu> Message-ID: > On May 30, 2006, at 11:28 AM, Rick Muller wrote: >> Is it possible to create arrays that run from, say -5:5, rather >> than 0:11? Analogous to the Fortran "allocate(A(-5:5))" command? >> I'm translating a F90 code to Python, and it would be easier to do >> this than to use a python dictionary. 
On Tue, 30 May 2006, Rob Hetland wrote: > I believe Rick is talking about negative indices (possible in > FORTRAN), in which case the answer is no. I see. Perhaps this is still relevant? (Or perhaps not.) >>> y=N.arange(11) >>> x=range(-5,6) >>> y[x] array([ 6, 7, 8, 9, 10, 0, 1, 2, 3, 4, 5]) >>> hth, Alan Isaac From hetland at tamu.edu Tue May 30 10:55:02 2006 From: hetland at tamu.edu (Rob Hetland) Date: Tue May 30 10:55:02 2006 Subject: [Numpy-discussion] arrays that start from negative numbers? In-Reply-To: References: <6B389811-2185-4C00-A5FC-4E48E8D44ECB@sandia.gov><8F52C872-9FF9-4B50-A728-DD271812D6D3@tamu.edu> Message-ID: <286B76A0-7A06-4459-9B81-ABC0F347128B@tamu.edu> Yes, brilliant. The answer is yes, but you need to modify the array to have it make sense; you need to fold the array over, so that the 'negative' indices reference data from the rear of the array.... I was thinking about having the first negative index first... I'm still not sure if this will be 'skating on dull blades with sharp knives' in converting fortran to numpy. More generally to the problem of code conversion, I think that a direct fortran -> numpy translation is not the best thing -- the numpy code should be vectorized. The array indexing problems will (mostly) go away when the fortran code is vectorized, and will result in much faster python code in the end as well. -r On May 30, 2006, at 12:42 PM, Alan Isaac wrote: >> On May 30, 2006, at 11:28 AM, Rick Muller wrote: >>> Is it possible to create arrays that run from, say -5:5, rather >>> than 0:11? Analogous to the Fortran "allocate(A(-5:5))" command? >>> I'm translating a F90 code to Python, and it would be easier to do >>> this than to use a python dictionary. > > > On Tue, 30 May 2006, Rob Hetland wrote: >> I believe Rick is talking about negative indices (possible in >> FORTRAN), in which case the answer is no. > > > > I see. > Perhaps this is still relevant? > (Or perhaps not.) 
> >>>> y=N.arange(11) >>>> x=range(-5,6) >>>> y[x] > array([ 6, 7, 8, 9, 10, 0, 1, 2, 3, 4, 5]) >>>> > > hth, > Alan Isaac > > > > > > > ------------------------------------------------------- > All the advantages of Linux Managed Hosting--Without the Cost and > Risk! > Fully trained technicians. The highest number of Red Hat > certifications in > the hosting industry. Fanatical Support. Click to learn more > http://sel.as-us.falkag.net/sel? > cmd=lnk&kid=107521&bid=248729&dat=121642 > _______________________________________________ > Numpy-discussion mailing list > Numpy-discussion at lists.sourceforge.net > https://lists.sourceforge.net/lists/listinfo/numpy-discussion ----- Rob Hetland, Assistant Professor Dept of Oceanography, Texas A&M University p: 979-458-0096, f: 979-845-6331 e: hetland at tamu.edu, w: http://pong.tamu.edu From rmuller at sandia.gov Tue May 30 11:37:06 2006 From: rmuller at sandia.gov (Rick Muller) Date: Tue May 30 11:37:06 2006 Subject: [Numpy-discussion] arrays that start from negative numbers? In-Reply-To: <8F52C872-9FF9-4B50-A728-DD271812D6D3@tamu.edu> References: <6B389811-2185-4C00-A5FC-4E48E8D44ECB@sandia.gov> <8F52C872-9FF9-4B50-A728-DD271812D6D3@tamu.edu> Message-ID: <67A1F71A-73F3-4918-9F78-84013EC75B61@sandia.gov> Indeed I am. Thanks for the reply On May 30, 2006, at 11:32 AM, Rob Hetland wrote: > > I believe Rick is talking about negative indices (possible in > FORTRAN), in which case the answer is no. > > -Rob > > On May 30, 2006, at 11:28 AM, Rick Muller wrote: > >> Is it possible to create arrays that run from, say -5:5, rather >> than 0:11? Analogous to the Fortran "allocate(A(-5:5))" command? >> I'm translating a F90 code to Python, and it would be easier to do >> this than to use a python dictionary. 
> > ----- > Rob Hetland, Assistant Professor > Dept of Oceanography, Texas A&M University > p: 979-458-0096, f: 979-845-6331 > e: hetland at tamu.edu, w: http://pong.tamu.edu > > Rick Muller rmuller at sandia.gov From david.huard at gmail.com Tue May 30 13:05:06 2006 From: david.huard at gmail.com (David Huard) Date: Tue May 30 13:05:06 2006 Subject: [Numpy-discussion] arrays that start from negative numbers? In-Reply-To: <67A1F71A-73F3-4918-9F78-84013EC75B61@sandia.gov> References: <6B389811-2185-4C00-A5FC-4E48E8D44ECB@sandia.gov> <8F52C872-9FF9-4B50-A728-DD271812D6D3@tamu.edu> <67A1F71A-73F3-4918-9F78-84013EC75B61@sandia.gov> Message-ID: <91cf711d0605301304v3ac168dub01ed7ae89b66df6@mail.gmail.com> Just a thought: would it be possible to overload the array __getitem__ method ? I can do it with lists, but not with arrays... For instance, class fortarray(list): def __getitem__(self, index): return list.__getitem__(self, index+5) and >>> l = fortarray() >>> l.append(1) >>> l[-5] 1 There is certainly a more elegant way to define the class with the starting index as an argument, but I didn't look into it. For arrays, this doesn't work out of the box, but I'd surprised if there was no way to tweak it to do the same. Good luck David 2006/5/30, Rick Muller : > > Indeed I am. Thanks for the reply > On May 30, 2006, at 11:32 AM, Rob Hetland wrote: > > > > > I believe Rick is talking about negative indices (possible in > > FORTRAN), in which case the answer is no. > > > > -Rob > > > > On May 30, 2006, at 11:28 AM, Rick Muller wrote: > > > >> Is it possible to create arrays that run from, say -5:5, rather > >> than 0:11? Analogous to the Fortran "allocate(A(-5:5))" command? > >> I'm translating a F90 code to Python, and it would be easier to do > >> this than to use a python dictionary. 
> > > > ----- > > Rob Hetland, Assistant Professor > > Dept of Oceanography, Texas A&M University > > p: 979-458-0096, f: 979-845-6331 > > e: hetland at tamu.edu, w: http://pong.tamu.edu > > > > > > Rick Muller > rmuller at sandia.gov > > > > > > > ------------------------------------------------------- > All the advantages of Linux Managed Hosting--Without the Cost and Risk! > Fully trained technicians. The highest number of Red Hat certifications in > the hosting industry. Fanatical Support. Click to learn more > http://sel.as-us.falkag.net/sel?cmd=lnk&kid=107521&bid=248729&dat=121642 > _______________________________________________ > Numpy-discussion mailing list > Numpy-discussion at lists.sourceforge.net > https://lists.sourceforge.net/lists/listinfo/numpy-discussion > -------------- next part -------------- An HTML attachment was scrubbed... URL: From robert.kern at gmail.com Tue May 30 13:18:03 2006 From: robert.kern at gmail.com (Robert Kern) Date: Tue May 30 13:18:03 2006 Subject: [Numpy-discussion] Re: arrays that start from negative numbers? In-Reply-To: <91cf711d0605301304v3ac168dub01ed7ae89b66df6@mail.gmail.com> References: <6B389811-2185-4C00-A5FC-4E48E8D44ECB@sandia.gov> <8F52C872-9FF9-4B50-A728-DD271812D6D3@tamu.edu> <67A1F71A-73F3-4918-9F78-84013EC75B61@sandia.gov> <91cf711d0605301304v3ac168dub01ed7ae89b66df6@mail.gmail.com> Message-ID: David Huard wrote: > Just a thought: > would it be possible to overload the array __getitem__ method ? > > I can do it with lists, but not with arrays... > > For instance, > > class fortarray(list): > def __getitem__(self, index): > return list.__getitem__(self, index+5) > > and >>>> l = fortarray() >>>> l.append(1) >>>> l[-5] > 1 > > There is certainly a more elegant way to define the class with the > starting index as an argument, but I didn't look into it. For arrays, > this doesn't work out of the box, but I'd surprised if there was no way > to tweak it to do the same. 
One certainly could write a subclass of array that handles arbitrarily-based indices. On the other hand, writing a correct and consistent implementation would be very tricky. On the gripping hand, a quick hack might suffice if one only needed to use it locally, like inside a single function, and convert to and from real arrays at the boundaries. -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From rmuller at sandia.gov Tue May 30 13:33:02 2006 From: rmuller at sandia.gov (Rick Muller) Date: Tue May 30 13:33:02 2006 Subject: [Numpy-discussion] arrays that start from negative numbers? In-Reply-To: <91cf711d0605301304v3ac168dub01ed7ae89b66df6@mail.gmail.com> References: <6B389811-2185-4C00-A5FC-4E48E8D44ECB@sandia.gov> <8F52C872-9FF9-4B50-A728-DD271812D6D3@tamu.edu> <67A1F71A-73F3-4918-9F78-84013EC75B61@sandia.gov> <91cf711d0605301304v3ac168dub01ed7ae89b66df6@mail.gmail.com> Message-ID: <2C2B7EF5-2827-41DE-9D8B-D86C913565CA@sandia.gov> I certainly think that something along these lines would be possible. However, in the end I just decided to keep track of the indices using a Python dictionary, which means to access A[-3] I actually have to call A[index[-3]]. A little clunkier, but I was worried that the other solutions would be brittle in the long run. Thanks for all of the comments. On May 30, 2006, at 2:04 PM, David Huard wrote: > Just a thought: > would it be possible to overload the array __getitem__ method ? > > I can do it with lists, but not with arrays... > > For instance, > > class fortarray(list): > def __getitem__(self, index): > return list.__getitem__(self, index+5) > > and > >>> l = fortarray() > >>> l.append(1) > >>> l[-5] > 1 > > There is certainly a more elegant way to define the class with the > starting index as an argument, but I didn't look into it. 
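[Editor's note: Robert's "quick hack" might look something like the sketch below. The class name and interface are made up for illustration; as he warns, slices, fancy indexing, views, and ufunc results would all need much more care.]

```python
import numpy as np

class OffsetArray(np.ndarray):
    """Sketch of an ndarray subclass indexed from `lo` rather than 0.

    Only plain integer indexing is handled.
    """
    def __new__(cls, lo, hi, dtype=float):
        obj = np.zeros(hi - lo + 1, dtype=dtype).view(cls)
        obj.lo = lo
        return obj

    def __array_finalize__(self, obj):
        # Propagate the offset to views and copies; default to 0.
        self.lo = getattr(obj, 'lo', 0)

    def __getitem__(self, i):
        return np.ndarray.__getitem__(self, i - self.lo)

    def __setitem__(self, i, val):
        np.ndarray.__setitem__(self, i - self.lo, val)

a = OffsetArray(-5, 5)   # indexed -5 .. 5, eleven elements
a[-5] = 1.0
a[5] = 2.0
```

Note one design cost: Python's usual end-relative meaning of negative indices is lost, since `a[-1]` now names the element at logical position -1 rather than the last element.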
For > arrays, this doesn't work out of the box, but I'd surprised if > there was no way to tweak it to do the same. > > Good luck > David > > 2006/5/30, Rick Muller : > Indeed I am. Thanks for the reply > On May 30, 2006, at 11:32 AM, Rob Hetland wrote: > > > > > I believe Rick is talking about negative indices (possible in > > FORTRAN), in which case the answer is no. > > > > -Rob > > > > On May 30, 2006, at 11:28 AM, Rick Muller wrote: > > > >> Is it possible to create arrays that run from, say -5:5, rather > >> than 0:11? Analogous to the Fortran "allocate(A(-5:5))" command? > >> I'm translating a F90 code to Python, and it would be easier to do > >> this than to use a python dictionary. > > > > ----- > > Rob Hetland, Assistant Professor > > Dept of Oceanography, Texas A&M University > > p: 979-458-0096, f: 979-845-6331 > > e: hetland at tamu.edu, w: http://pong.tamu.edu > > > > > > Rick Muller > rmuller at sandia.gov > > > > > > > ------------------------------------------------------- > All the advantages of Linux Managed Hosting--Without the Cost and > Risk! > Fully trained technicians. The highest number of Red Hat > certifications in > the hosting industry. Fanatical Support. Click to learn more > http://sel.as-us.falkag.net/sel? > cmd=lnk&kid=107521&bid=248729&dat=121642 > _______________________________________________ > Numpy-discussion mailing list > Numpy-discussion at lists.sourceforge.net > https://lists.sourceforge.net/lists/listinfo/numpy-discussion > Rick Muller rmuller at sandia.gov -------------- next part -------------- An HTML attachment was scrubbed... URL: From oliphant.travis at ieee.org Tue May 30 19:54:05 2006 From: oliphant.travis at ieee.org (Travis Oliphant) Date: Tue May 30 19:54:05 2006 Subject: [Numpy-discussion] Any Numeric or numarray users on this list? Message-ID: <447D051E.9000709@ieee.org> Please help the developers by responding to a few questions. 1) Have you transitioned or started to transition to NumPy (i.e. 
import numpy)? 2) Will you transition within the next 6 months? (if you answered No to #1) 3) Please, explain your reason(s) for not making the switch. (if you answered No to #2) 4) Please provide any suggestions for improving NumPy. Thanks for your answers. NumPy Developers From mrovner at cadence.com Tue May 30 20:16:03 2006 From: mrovner at cadence.com (Mike Rovner) Date: Tue May 30 20:16:03 2006 Subject: [Numpy-discussion] cx_freezing numpy problem. Message-ID: Hi, I'm trying to package numpy based application but get following TB: No scipy-style subpackage 'core' found in /home/mrovner/dev/psgapp/src/gui/lnx32/dvip/numpy. Ignoring: No module named _internal Traceback (most recent call last): File "/home/mrovner/src/cx_Freeze-3.0.2/initscripts/Console.py", line 26, in ? exec code in m.__dict__ File "dvip.py", line 42, in ? File "dvip.py", line 31, in dvip_gui File "mainui.py", line 1, in ? File "psgdb.pyx", line 162, in psgdb File "/lan/dfm/grp_mm_data1/dev/tools/linux-x86_32/lib/python2.4/site-packages/numpy/__init__.py", line 35, in ? verbose=NUMPY_IMPORT_VERBOSE,postpone=False) File "/lan/dfm/grp_mm_data1/dev/tools/linux-x86_32/lib/python2.4/site-packages/numpy/_import_tools.py", line 173, in __call__ self._init_info_modules(packages or None) File "/lan/dfm/grp_mm_data1/dev/tools/linux-x86_32/lib/python2.4/site-packages/numpy/_import_tools.py", line 68, in _init_info_modules exec 'import %s.info as info' % (package_name) File "", line 1, in ? File "/lan/dfm/grp_mm_data1/dev/tools/linux-x86_32/lib/python2.4/site-packages/numpy/lib/__init__.py", line 5, in ? from type_check import * File "/lan/dfm/grp_mm_data1/dev/tools/linux-x86_32/lib/python2.4/site-packages/numpy/lib/type_check.py", line 8, in ? import numpy.core.numeric as _nx File "/lan/dfm/grp_mm_data1/dev/tools/linux-x86_32/lib/python2.4/site-packages/numpy/core/__init__.py", line 6, in ? import umath File "ExtensionLoader.py", line 12, in ? 
AttributeError: 'module' object has no attribute '_ARRAY_API' I did freezing with: FreezePython --install-dir=lnx32 --include-modules=numpy --include-modules=numpy.core dvip.py I'm using Python-2.4.2 numpy-0.9.8 cx_Freeze-3.0.2 on linux. Everything compiled from source. Any help appreciated. Thanks, Mike From wbaxter at gmail.com Tue May 30 20:44:02 2006 From: wbaxter at gmail.com (Bill Baxter) Date: Tue May 30 20:44:02 2006 Subject: [Numpy-discussion] Any Numeric or numarray users on this list? In-Reply-To: <447D051E.9000709@ieee.org> References: <447D051E.9000709@ieee.org> Message-ID: pyOpenGL is one project that hasn't upgraded to numpy yet. http://pyopengl.sourceforge.net/ I think the issue is just that noone is really maintaining it, rather than any difficulty in porting to numpy. Since he's probably not reading this list, might be a good idea to send the project admin a copy of the survey: mcfletch at users.sourceforge.net --bb On 5/31/06, Travis Oliphant wrote: > > > Please help the developers by responding to a few questions. > > > > 1) Have you transitioned or started to transition to NumPy (i.e. import > numpy)? > > > > > 2) Will you transition within the next 6 months? (if you answered No to > #1) > > > > > 3) Please, explain your reason(s) for not making the switch. (if you > answered No to #2) > > > > > > 4) Please provide any suggestions for improving NumPy. > > > > > > Thanks for your answers. > > > NumPy Developers > > > > > ------------------------------------------------------- > All the advantages of Linux Managed Hosting--Without the Cost and Risk! > Fully trained technicians. The highest number of Red Hat certifications in > the hosting industry. Fanatical Support. 
Click to learn more > http://sel.as-us.falkag.net/sel?cmd=lnk&kid=107521&bid=248729&dat=121642 > _______________________________________________ > Numpy-discussion mailing list > Numpy-discussion at lists.sourceforge.net > https://lists.sourceforge.net/lists/listinfo/numpy-discussion > -- William V. Baxter III OLM Digital Kono Dens Building Rm 302 1-8-8 Wakabayashi Setagaya-ku Tokyo, Japan 154-0023 +81 (3) 3422-3380 -------------- next part -------------- An HTML attachment was scrubbed... URL: From ndarray at mac.com Tue May 30 20:57:05 2006 From: ndarray at mac.com (Sasha) Date: Tue May 30 20:57:05 2006 Subject: [Numpy-discussion] Any Numeric or numarray users on this list? In-Reply-To: <447D051E.9000709@ieee.org> References: <447D051E.9000709@ieee.org> Message-ID: I am a Numeric user. On 5/30/06, Travis Oliphant wrote: > > Please help the developers by responding to a few questions. > > > > 1) Have you transitioned or started to transition to NumPy (i.e. import > numpy)? > Started transition. Most applications were easily ported to Numpy. I am still deciding whether or not support both Numpy and Numeric during the transition period. > > 2) Will you transition within the next 6 months? (if you answered No to #1) > Yes, as soon as numpy 1.0 is released. > 4) Please provide any suggestions for improving NumPy. > That's a big topic! Without expanding on anything: - optimized array of interned strings (compatible with char** at the C level) - optimized array of arrays (a restriction of dtype=object array) - use BLAS in umath From andrewm at object-craft.com.au Tue May 30 21:32:01 2006 From: andrewm at object-craft.com.au (Andrew McNamara) Date: Tue May 30 21:32:01 2006 Subject: [Numpy-discussion] Any Numeric or numarray users on this list? In-Reply-To: <447D051E.9000709@ieee.org> References: <447D051E.9000709@ieee.org> Message-ID: <20060531043103.0C6996F4C81@longblack.object-craft.com.au> We use Numeric in NetEpi Analysis (www.netepi.org). 
>Please help the developers by responding to a few questions. > >1) Have you transitioned or started to transition to NumPy (i.e. import >numpy)? No. >2) Will you transition within the next 6 months? (if you answered No to #1) Unknown - someone will have to fund the work. >3) Please, explain your reason(s) for not making the switch. (if you >answered No to #2) NetEpi Analysis implements C extensions to do fast set options on integer Numeric arrays, as well as to support mmap'ed Numeric arrays. I haven't looked at what is required to port these to Numpy (or replace with native Numpy features). NetEpi Analysis also uses rpy, which will potentially need to be updated to support Numpy. We're also concerned about speed - but I haven't done any testing against the latest Numpy. -- Andrew McNamara, Senior Developer, Object Craft http://www.object-craft.com.au/ From rob at hooft.net Tue May 30 21:48:05 2006 From: rob at hooft.net (Rob Hooft) Date: Tue May 30 21:48:05 2006 Subject: [Numpy-discussion] Any Numeric or numarray users on this list? In-Reply-To: <447D051E.9000709@ieee.org> References: <447D051E.9000709@ieee.org> Message-ID: <447D1FDF.6010508@hooft.net> -----BEGIN PGP SIGNED MESSAGE----- Hash: SHA1 Travis Oliphant wrote: Numeric user since 1998. Writing commercial application software for control of machines. The machine is sold with the application software, closed source. | 1) Have you transitioned or started to transition to NumPy (i.e. import | numpy)? No | 2) Will you transition within the next 6 months? (if you answered No to #1) Maybe. | 3) Please, explain your reason(s) for not making the switch. (if you | answered No to #2) We are by now late adopters of everything. Everything else we use (we use about 30 non-GPL opensource packages in our development environment, some of which are using Numeric themselves) will need to migrate as well. Our own code is interspersed with Numeric calls, and amounts to about 200k lines... Rob - -- Rob W.W. 
Hooft || rob at hooft.net || http://www.hooft.net/people/rob/ -----BEGIN PGP SIGNATURE----- Version: GnuPG v1.4.3 (GNU/Linux) Comment: Using GnuPG with Thunderbird - http://enigmail.mozdev.org iD8DBQFEfR/fH7J/Cv8rb3QRAvbjAKCCMLpbBbWSBDsRZZzL0+p4HTqcLACbBwiB YLFwX1oEULCH068j2I7ZoDg= =0gj8 -----END PGP SIGNATURE----- From jensj at fysik.dtu.dk Tue May 30 23:15:01 2006 From: jensj at fysik.dtu.dk (Jens =?iso-8859-1?Q?J=F8rgen_Mortensen?=) Date: Tue May 30 23:15:01 2006 Subject: [Numpy-discussion] Any Numeric or numarray users on this list? In-Reply-To: <447D051E.9000709@ieee.org> References: <447D051E.9000709@ieee.org> Message-ID: <42703.80.167.103.49.1149056031.squirrel@webmail.fysik.dtu.dk> We use Numeric for our "Atomic Simulation Environment" and for a Density Functional Theory code: http://wiki.fysik.dtu.dk/ASE http://wiki.fysik.dtu.dk/gridcode > 1) Have you transitioned or started to transition to NumPy (i.e. import > numpy)? No. > 2) Will you transition within the next 6 months? (if you answered No to > #1) Yes. Only problem is that ASE relies on Konrad Hinsen's Scientific.IO.NetCDF module which is still a Numeric thing. I saw recently that this module has been converted to numpy and put in SciPy/sandbox. What is the future of this module? > 4) Please provide any suggestions for improving NumPy. Can't think of anything! Jens Jørgen Mortensen From arnd.baecker at web.de Wed May 31 00:18:01 2006 From: arnd.baecker at web.de (Arnd Baecker) Date: Wed May 31 00:18:01 2006 Subject: [Numpy-discussion] when does numpy create temporaries ? 
In-Reply-To: <20060530165457.769baf5f.simon@arrowtheory.com> References: <20060530165457.769baf5f.simon@arrowtheory.com> Message-ID: On Tue, 30 May 2006, Simon Burton wrote: > Consider these two operations: > > >>> a=numpy.empty( 1024 ) > >>> b=numpy.empty( 1024 ) > >>> a[1:] = b[:-1] > >>> a[1:] = a[:-1] > >>> > > It seems that in the second operation > we need to copy the view a[:-1] > but in the first operation we don't need > to copy b[:-1]. > > How does numpy detect this, or does it > always copy the source when assigning to a slice ? > > I've poked around the (numpy) code a bit and > tried some benchmarks, but it's still not so > clear to me. Hi, not being able to give an answer to this question, I would like to emphasize that this can be a very important issue: Firstly, I don't know how to monitor the memory usage *during* the execution of a line of code. (Putting numpy.testing.memusage() before and after that line does not help, if I remember things correctly). With Numeric I ran into a memory problem with the code appended below. It turned out, that internally a copy had been made which for huge arrays brought my system into swapping. (numpy behaves the same as Numeric. Moreover, it seems to consume around 8.5 MB more memory than Numeric?!) So I strongly agree that it would be nice to know in advance when temporaries are created. In addition it would be good to be able to debug memory allocation. (For example, with f2py there is the option -DF2PY_REPORT_ON_ARRAY_COPY=1 Note that this only works when generating the wrapper library, i.e., there is no switch to turn this on or off afterwards, at least as far as I know). 
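[Editor's note: Simon's original question can at least be probed empirically. The sketch below checks whether the overlapping assignment behaves as if a temporary copy of the source were made first; whether the assertion holds depends on the numpy version (recent releases detect the overlap and make a temporary).]

```python
import numpy as np

# Does a[1:] = a[:-1] behave as if the source were copied first?
a = np.arange(8)
expected = a.copy()
expected[1:] = a[:-1].copy()   # unambiguous: explicit temporary
a[1:] = a[:-1]                 # the overlapping case Simon asks about
assert (a == expected).all(), "overlap handled without a copy"
print(a)  # with a copy-on-overlap implementation: [0 0 1 2 3 4 5 6]
```

The same probe with `b[:-1]` as the source never needs a temporary, since `a` and `b` do not share memory.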
Best, Arnd ########################################################## from Numeric import * #from numpy import * import os pid=os.getpid() print "Process id: ",pid N=200 NB=30 # number of wavefunctions NT=20 # time steps print "Expected size of `wfk` (in KB):", N*N*NB*8/1024.0 print "Expected size of `time_arr` (in KB):", N*N*NT*16/1024.0 wfk=zeros( (N,N,NB),Float) phase=ones(NB,Complex) time_arr=zeros( (N,N,NT),Complex) print "press enter and watch the memory" raw_input("(eg. with pmap %d | grep total)" % (pid) ) # this one does a full copy of wfk, because it is complex !!! #while 1: # for tn in range(NT): # time_arr[:,:,tn]+=dot(wfk, phase) # # # memory usage: varies around: # - 38524K/57276K with Numeric # - 46980K/66360K with numpy while 1: for tn in range(NT): time_arr[:,:,tn]+=dot(wfk, phase.real)+1j*dot(wfk, phase.imag) # # memory usage: varies around: # - 38524K/40412K with Numeric # - 46984K/47616K with numpy ################################################################ From rbastian at free.fr Wed May 31 00:54:06 2006 From: rbastian at free.fr (=?iso-8859-1?q?Ren=E9=20Bastian?=) Date: Wed May 31 00:54:06 2006 Subject: [Numpy-discussion] Any Numeric or numarray users on this list? In-Reply-To: <447D051E.9000709@ieee.org> References: <447D051E.9000709@ieee.org> Message-ID: <06053109532800.00754@rbastian> Le Mercredi 31 Mai 2006 04:53, Travis Oliphant a ?crit : > Please help the developers by responding to a few questions. > > I am a numarray user > > 1) Have you transitioned or started to transition to NumPy (i.e. import > numpy)? no I tried to install numpy but the installation failed. > > 2) Will you transition within the next 6 months? (if you answered No to #1) > no, (hm, but if numarray will be prohibited, ...) > > 3) Please, explain your reason(s) for not making the switch. (if you > answered No to #2) > numarray works and works fine (from version number 0.8 to 1.5) > > > > 4) Please provide any suggestions for improving NumPy. 
> > > > > > Thanks for your answers. > > > NumPy Developers > > -- Ren? Bastian http://www.musiques-rb.org http://pythoneon.musiques-rb.org From ajikoe at gmail.com Wed May 31 01:46:02 2006 From: ajikoe at gmail.com (Pujo Aji) Date: Wed May 31 01:46:02 2006 Subject: [Numpy-discussion] Any Numeric or numarray users on this list? In-Reply-To: <447D051E.9000709@ieee.org> References: <447D051E.9000709@ieee.org> Message-ID: On 5/31/06, Travis Oliphant wrote: > > > Please help the developers by responding to a few questions. > > I'm a Numeric user. > > 1) Have you transitioned or started to transition to NumPy (i.e. import > numpy)? > No > > > 2) Will you transition within the next 6 months? (if you answered No to > #1) > No > > > > 3) Please, explain your reason(s) for not making the switch. (if you > answered No to #2) > Numeric is ok and the conversion somehow make my unittest fail..... 4) Please provide any suggestions for improving NumPy. > I think the conversion between Numpy and numeric should be > compatible...... > > > > > Thanks for your answers. > > > NumPy Developers > > > > > ------------------------------------------------------- > All the advantages of Linux Managed Hosting--Without the Cost and Risk! > Fully trained technicians. The highest number of Red Hat certifications in > the hosting industry. Fanatical Support. Click to learn more > http://sel.as-us.falkag.net/sel?cmd=lnk&kid=107521&bid=248729&dat=121642 > _______________________________________________ > Numpy-discussion mailing list > Numpy-discussion at lists.sourceforge.net > https://lists.sourceforge.net/lists/listinfo/numpy-discussion > -------------- next part -------------- An HTML attachment was scrubbed... URL: From faltet at carabos.com Wed May 31 04:52:04 2006 From: faltet at carabos.com (Francesc Altet) Date: Wed May 31 04:52:04 2006 Subject: [Numpy-discussion] Any Numeric or numarray users on this list? 
In-Reply-To: <447D051E.9000709@ieee.org> References: <447D051E.9000709@ieee.org> Message-ID: <200605311351.11499.faltet@carabos.com> Hi Travis, Here you have the answers for PyTables project (www.pytables.org). A Dimecres 31 Maig 2006 04:53, Travis Oliphant va escriure: > 1) Have you transitioned or started to transition to NumPy (i.e. import > numpy)? Not yet, although suport for it is in-place through the use of the array protocol. > 2) Will you transition within the next 6 months? (if you answered No to #1) We don't know, but other projects in our radar make us to think that we will not be able to do that in this timeframe. > 3) Please, explain your reason(s) for not making the switch. (if you > answered No to #2) As I said before, it is mainly a matter of priorities. Also, numarray works very well for PyTables usage, and besides, NumPy 1.0 is not yet there. > 4) Please provide any suggestions for improving NumPy. You are already doing a *great* work. Perhaps pushing numexpr in NumPy would be nice. Also working in introducing a simple array class in Python core and using the array protocol to access the data would be very good. Cheers, -- >0,0< Francesc Altet ? ? http://www.carabos.com/ V V C?rabos Coop. V. ??Enjoy Data "-" From schofield at ftw.at Wed May 31 06:17:03 2006 From: schofield at ftw.at (Ed Schofield) Date: Wed May 31 06:17:03 2006 Subject: [Numpy-discussion] Suggestions for NumPy In-Reply-To: <447D051E.9000709@ieee.org> References: <447D051E.9000709@ieee.org> Message-ID: <27BE229E-1192-4643-8454-5E0790A0AC7F@ftw.at> On 31/05/2006, at 4:53 AM, Travis Oliphant wrote: > Please help the developers by responding to a few questions. I've ported my code to NumPy. But I have some suggestions for improving NumPy. 
I've now entered them as these tickets: Improvements for NumPy's web presence: http://projects.scipy.org/scipy/numpy/ticket/132 Squeeze behaviour for 1d and 0d arrays: http://projects.scipy.org/scipy/numpy/ticket/133 Array creation from sequences: http://projects.scipy.org/scipy/numpy/ticket/134 -- Ed From nwagner at iam.uni-stuttgart.de Wed May 31 06:35:03 2006 From: nwagner at iam.uni-stuttgart.de (Nils Wagner) Date: Wed May 31 06:35:03 2006 Subject: [Numpy-discussion] test_scalarmath.py", line 63 Message-ID: <447D9B2E.90500@iam.uni-stuttgart.de> >>> numpy.__version__ '0.9.9.2553' numpy.test(1,10) results in ====================================================================== FAIL: check_types (numpy.core.tests.test_scalarmath.test_types) ---------------------------------------------------------------------- Traceback (most recent call last): File "/usr/lib64/python2.4/site-packages/numpy/core/tests/test_scalarmath.py", line 63, in check_types assert val.dtype.num == typeconv[k,l] and \ AssertionError: error with (0,7) ---------------------------------------------------------------------- Ran 368 tests in 0.479s FAILED (failures=1) Nils From jg307 at cam.ac.uk Wed May 31 06:57:04 2006 From: jg307 at cam.ac.uk (James Graham) Date: Wed May 31 06:57:04 2006 Subject: [Numpy-discussion] Distutils problem with g95 Message-ID: <447DA07D.5010003@cam.ac.uk> numpy.distutils seems to have difficulties detecting the current version of the g95 compiler. I believe this is because the output of `g95 --version` has changed. The patch below seems to correct the problem (i.e. it now works with the latest g95) but my regexp foo is very weak so it may not be correct/optimal. 
--- /usr/lib64/python2.3/site-packages/numpy/distutils/fcompiler/g95.py 2006-01-06 21:29:40.000000000 +0000 +++ /home/jgraham/lib64/python/numpy/distutils/fcompiler/g95.py 2006-05-26 12:49:50.000000000 +0100 @@ -9,7 +9,7 @@ class G95FCompiler(FCompiler): compiler_type = 'g95' - version_pattern = r'G95.*\(experimental\) \(g95!\) (?P<version>.*)\).*' + version_pattern = r'G95.*(?:\(experimental\))? \(g95!\) (?P<version>.*)\).*' executables = { 'version_cmd' : ["g95", "--version"], -- "You see stars that clear have been dead for years But the idea just lives on..." -- Bright Eyes From strang at nmr.mgh.harvard.edu Wed May 31 07:01:10 2006 From: strang at nmr.mgh.harvard.edu (Gary Strangman) Date: Wed May 31 07:01:10 2006 Subject: [Numpy-discussion] Any Numeric or numarray users on this list? In-Reply-To: <447D051E.9000709@ieee.org> References: <447D051E.9000709@ieee.org> Message-ID: Numeric user since somewhere near "the beginning" (~1996-7?). > 1) Have you transitioned or started to transition to NumPy (i.e. import > numpy)? No. Just dabbling so far. > 2) Will you transition within the next 6 months? (if you answered No to #1) Depends. > 3) Please, explain your reason(s) for not making the switch. (if you answered > No to #2) (i) Numpy out-of-the-box build failed on my linux boxes, and I will need to upgrade linux and Windoze simultaneously. (ii) So far, Numeric is still working for me. (iii) Time ... I also have ~200k lines of code to convert. (iv) Like others, I guess I'm waiting for NumPy 1.0 to take a stab. > 4) Please provide any suggestions for improving NumPy. From what I've tried/tested so far on Windoze, Numpy looks awesome. Thanks Travis! And the rest of the development team! (Now, if I could only truly understand and remember all the new indexing options ... esp. a[tuple] vs. a[array] .....) 
Gary From pearu at scipy.org Wed May 31 07:01:12 2006 From: pearu at scipy.org (Pearu Peterson) Date: Wed May 31 07:01:12 2006 Subject: [Numpy-discussion] Distutils problem with g95 In-Reply-To: <447DA07D.5010003@cam.ac.uk> References: <447DA07D.5010003@cam.ac.uk> Message-ID: On Wed, 31 May 2006, James Graham wrote: > numpy.distutils seems to have difficulties detecting the current version of > the g95 compiler. I believe this is because the output of `g95 --version` has > changed. The patch below seems to correct the problem (i.e. it now works with > the latest g95) but my regexp foo is very weak so it may not be > correct/optimal. Could you send me the output of g95 --version for reference? Thanks, Pearu From jg307 at cam.ac.uk Wed May 31 07:09:03 2006 From: jg307 at cam.ac.uk (James Graham) Date: Wed May 31 07:09:03 2006 Subject: [Numpy-discussion] Distutils problem with g95 In-Reply-To: References: <447DA07D.5010003@cam.ac.uk> Message-ID: <447DA348.2000405@cam.ac.uk> Pearu Peterson wrote: > > > On Wed, 31 May 2006, James Graham wrote: > >> numpy.distutils seems to have difficulties detecting the current >> version of the g95 compiler. I believe this is because the output of >> `g95 --version` has changed. The patch below seems to correct the >> problem (i.e. it now works with the latest g95) but my regexp foo is >> very weak so it may not be correct/optimal. > > Could you send me the output of > > g95 --version > > for reference? $ g95 --version G95 (GCC 4.0.3 (g95!) May 22 2006) Copyright (C) 2002-2005 Free Software Foundation, Inc. G95 comes with NO WARRANTY, to the extent permitted by law. You may redistribute copies of G95 under the terms of the GNU General Public License. For more information about these matters, see the file named COPYING -- "You see stars that clear have been dead for years But the idea just lives on..." 
-- Bright Eyes From rays at blue-cove.com Wed May 31 07:12:03 2006 From: rays at blue-cove.com (RayS) Date: Wed May 31 07:12:03 2006 Subject: [Numpy-discussion] Re: Any Numeric or numarray users on this list? In-Reply-To: <20060531031122.CA26E33E0B@sc8-sf-spam1.sourceforge.net> References: <20060531031122.CA26E33E0B@sc8-sf-spam1.sourceforge.net> Message-ID: <6.2.3.4.2.20060531063528.02bf0970@blue-cove.com> At 08:10 PM 5/30/2006, you wrote: >1) Have you transitioned or started to transition to NumPy (i.e. import >numpy)? only by following the threads here, so far no download yet >2) Will you transition within the next 6 months? (if you answered No to #1) yes, on this next project (assuming the small array, <2048, performance compares to Numeric >3) Please, explain your reason(s) for not making the switch. (if you >answered No to #2) if no, it is because most projects involve small bin number FFTs and correlations >4) Please provide any suggestions for improving NumPy. 24 bit signed integer type, for the new class of ADCs coming out (or at least the ability to cast efficiently to Float32) a GPU back-end option for FFT ;-) Thanks, it's all good, Ray From pearu at scipy.org Wed May 31 07:24:12 2006 From: pearu at scipy.org (Pearu Peterson) Date: Wed May 31 07:24:12 2006 Subject: [Numpy-discussion] Distutils problem with g95 In-Reply-To: <447DA348.2000405@cam.ac.uk> References: <447DA07D.5010003@cam.ac.uk> <447DA348.2000405@cam.ac.uk> Message-ID: On Wed, 31 May 2006, James Graham wrote: > Pearu Peterson wrote: >> >> Could you send me the output of >> >> g95 --version >> >> for reference? > > $ g95 --version > > G95 (GCC 4.0.3 (g95!) May 22 2006) Thanks, I have applied the patch with modifications to numpy svn. Let me know if it fails to work. 
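For anyone following the patch, here is a quick sanity check of the two patterns with Python's `re` module (a sketch; the group name `version` follows the numpy.distutils `FCompiler` convention, since the angle-bracketed name in the patch was stripped by the list archiver, and the banner string is the one James posted):

```python
import re

# Old and amended version patterns from the g95.py patch.
old = r'G95.*\(experimental\) \(g95!\) (?P<version>.*)\).*'
new = r'G95.*(?:\(experimental\))? \(g95!\) (?P<version>.*)\).*'

# Banner reported by `g95 --version` for the May 2006 build.
banner = 'G95 (GCC 4.0.3 (g95!) May 22 2006)'

# The old pattern requires a literal "(experimental)" and so fails;
# the new one makes that token optional and matches.
print(re.match(old, banner))                   # None
print(re.match(new, banner).group('version'))  # May 22 2006
```

Note that the captured "version" is really the build date, because the new-style banner no longer carries a separate version token.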
Pearu From jdhunter at ace.bsd.uchicago.edu Wed May 31 07:32:05 2006 From: jdhunter at ace.bsd.uchicago.edu (John Hunter) Date: Wed May 31 07:32:05 2006 Subject: [Numpy-discussion] masked arrays Message-ID: <87d5duclvb.fsf@peds-pc311.bsd.uchicago.edu> I'm a bit of an ma newbie. I have a 2D masked array R and want to extract the non-masked values in the last column. Below I use logical indexing, but I suspect there is a "built-in" way w/ masked arrays. I read through the docstring, but didn't see anything better. In [66]: c = R[:,-1] In [67]: m = R.mask[:,-1] In [69]: c[m==0] Out[69]: array(data = [ 0.94202899 0.51839465 0.24080268 0.26198439 0.29877369 2.06856187 0.91415831 0.64994426 0.96544036 1.11259755 2.53623188 0.71571906 0.18394649 0.78037904 0.60869565 3.56744705 0.44147157 0.07692308 0.27090301 0.16610925 0.57068004 0.80267559 0.57636566 0.23634337 1.9509476 0.50761427 0.09587514 0.45039019 0.14381271 0.69007804 2.44481605 0.2909699 0.45930881 1.37123746 2.00668896 3.1638796 1.0735786 1.06800446 0.18952062 1.55964326 1.16833891 0.17502787 1.16610925 0.85507246 0.42140468 0.04236343 1.01337793 0.22853958 1.76365663 1.78372352 0.96209588 0.73578595 0.94760312 1.59531773 0.88963211], mask = [False False False False False False False False False False False False False False False False False False False False False False False False False False False False False False False False False False False False False False False False False False False False False False False False False False False False False False False], fill_value=1e+20) From nwagner at iam.uni-stuttgart.de Wed May 31 08:48:02 2006 From: nwagner at iam.uni-stuttgart.de (Nils Wagner) Date: Wed May 31 08:48:02 2006 Subject: [Numpy-discussion] numpy.test(1,10) results in a segfault Message-ID: <447DBA8D.5090908@iam.uni-stuttgart.de> test_wrap (numpy.core.tests.test_umath.test_special_methods) ... 
ok check_types (numpy.core.tests.test_scalarmath.test_types)*** glibc detected *** free() : invalid pointer: 0xb7ab74a0 *** Program received signal SIGABRT, Aborted. [Switching to Thread 16384 (LWP 14948)] 0xb7ca51f1 in kill () from /lib/i686/libc.so.6 (gdb) bt #0 0xb7ca51f1 in kill () from /lib/i686/libc.so.6 #1 0xb7e90401 in pthread_kill () from /lib/i686/libpthread.so.0 #2 0xb7e9044b in raise () from /lib/i686/libpthread.so.0 #3 0xb7ca4f84 in raise () from /lib/i686/libc.so.6 #4 0xb7ca6498 in abort () from /lib/i686/libc.so.6 #5 0xb7cd8cf6 in __libc_message () from /lib/i686/libc.so.6 #6 0xb7cde367 in malloc_printerr () from /lib/i686/libc.so.6 #7 0xb7cdfacf in free () from /lib/i686/libc.so.6 #8 0xb7ae3fc5 in gentype_dealloc (v=0x0) at scalartypes.inc.src:281 #9 0xb7f57bed in PyEval_EvalFrame () from /usr/lib/libpython2.4.so.1.0 #10 0xb7f5ad21 in PyEval_EvalCodeEx () from /usr/lib/libpython2.4.so.1.0 #11 0xb7f58cc5 in PyEval_EvalFrame () from /usr/lib/libpython2.4.so.1.0 #12 0xb7f5ad21 in PyEval_EvalCodeEx () from /usr/lib/libpython2.4.so.1.0 #13 0xb7f1113a in function_call () from /usr/lib/libpython2.4.so.1.0 #14 0xb7ef9c37 in PyObject_Call () from /usr/lib/libpython2.4.so.1.0 #15 0xb7f02edb in instancemethod_call () from /usr/lib/libpython2.4.so.1.0 #16 0xb7ef9c37 in PyObject_Call () from /usr/lib/libpython2.4.so.1.0 ---Type to continue, or q to quit--- #17 0xb7f58097 in PyEval_EvalFrame () from /usr/lib/libpython2.4.so.1.0 #18 0xb7f5ad21 in PyEval_EvalCodeEx () from /usr/lib/libpython2.4.so.1.0 #19 0xb7f1113a in function_call () from /usr/lib/libpython2.4.so.1.0 #20 0xb7ef9c37 in PyObject_Call () from /usr/lib/libpython2.4.so.1.0 #21 0xb7f02edb in instancemethod_call () from /usr/lib/libpython2.4.so.1.0 #22 0xb7ef9c37 in PyObject_Call () from /usr/lib/libpython2.4.so.1.0 #23 0xb7f34c2c in slot_tp_call () from /usr/lib/libpython2.4.so.1.0 #24 0xb7ef9c37 in PyObject_Call () from /usr/lib/libpython2.4.so.1.0 #25 0xb7f58097 in PyEval_EvalFrame () from 
/usr/lib/libpython2.4.so.1.0 #26 0xb7f5ad21 in PyEval_EvalCodeEx () from /usr/lib/libpython2.4.so.1.0 #27 0xb7f1113a in function_call () from /usr/lib/libpython2.4.so.1.0 #28 0xb7ef9c37 in PyObject_Call () from /usr/lib/libpython2.4.so.1.0 #29 0xb7f02edb in instancemethod_call () from /usr/lib/libpython2.4.so.1.0 #30 0xb7ef9c37 in PyObject_Call () from /usr/lib/libpython2.4.so.1.0 #31 0xb7f34c2c in slot_tp_call () from /usr/lib/libpython2.4.so.1.0 #32 0xb7ef9c37 in PyObject_Call () from /usr/lib/libpython2.4.so.1.0 #33 0xb7f58097 in PyEval_EvalFrame () from /usr/lib/libpython2.4.so.1.0 ---Type to continue, or q to quit--- #34 0xb7f5a663 in PyEval_EvalFrame () from /usr/lib/libpython2.4.so.1.0 #35 0xb7f5ad21 in PyEval_EvalCodeEx () from /usr/lib/libpython2.4.so.1.0 #36 0xb7f58cc5 in PyEval_EvalFrame () from /usr/lib/libpython2.4.so.1.0 #37 0xb7f5ad21 in PyEval_EvalCodeEx () from /usr/lib/libpython2.4.so.1.0 #38 0xb7f58cc5 in PyEval_EvalFrame () from /usr/lib/libpython2.4.so.1.0 #39 0xb7f5ad21 in PyEval_EvalCodeEx () from /usr/lib/libpython2.4.so.1.0 #40 0xb7f5aff5 in PyEval_EvalCode () from /usr/lib/libpython2.4.so.1.0 #41 0xb7f75778 in run_node () from /usr/lib/libpython2.4.so.1.0 #42 0xb7f77228 in PyRun_InteractiveOneFlags () from /usr/lib/libpython2.4.so.1.0 #43 0xb7f77396 in PyRun_InteractiveLoopFlags () from /usr/lib/libpython2.4.so.1.0 #44 0xb7f774a7 in PyRun_AnyFileExFlags () from /usr/lib/libpython2.4.so.1.0 #45 0xb7f7d66a in Py_Main () from /usr/lib/libpython2.4.so.1.0 #46 0x0804871a in main (argc=0, argv=0x0) at ccpython.cc:10 Can someone reproduce the segfault ? 
Linux amanda 2.6.11.4-21.12-default #1 Wed May 10 09:38:20 UTC 2006 i686 athlon i386 GNU/Linux >>> numpy.__version__ '0.9.9.2553' From pgmdevlist at mailcan.com Wed May 31 09:45:00 2006 From: pgmdevlist at mailcan.com (Pierre GM) Date: Wed May 31 09:45:00 2006 Subject: [Numpy-discussion] masked arrays In-Reply-To: <87d5duclvb.fsf@peds-pc311.bsd.uchicago.edu> References: <87d5duclvb.fsf@peds-pc311.bsd.uchicago.edu> Message-ID: <200605311243.49056.pgmdevlist@mailcan.com> On Wednesday 31 May 2006 10:25, John Hunter wrote: > I'm a bit of an ma newbie. I have a 2D masked array R and want to > extract the non-masked values in the last column. Below I use logical > indexing, but I suspect there is a "built-in" way w/ masked arrays. I > read through the docstring, but didn't see anything better. R[:,-1].compressed() should do the trick. From eric at enthought.com Wed May 31 09:58:08 2006 From: eric at enthought.com (eric) Date: Wed May 31 09:58:08 2006 Subject: [Numpy-discussion] Any Numeric or numarray users on this list? In-Reply-To: <447D051E.9000709@ieee.org> References: <447D051E.9000709@ieee.org> Message-ID: <447DCB07.8050607@enthought.com> > > Please help the developers by responding to a few questions. > > > > 1) Have you transitioned or started to transition to NumPy (i.e. > import numpy)? We (Enthought) have started. > > > 2) Will you transition within the next 6 months? (if you answered No > to #1) We have a number of deployed applications that use Numeric heavily. Much of our code is now NumPy/Numeric compatible, but it is not well tested on NumPy. That said, a recent large project will be delivered on NumPy this summer, and we are releasing an update to a legacy app using NumPy in the next month or so. Pearu, Travis, and the Numeric->NumPy conversion scripts have been very helpful in this respect. It has been (and remains) a big effort to get the ship turned in the direction of NumPy, but we're committed to it. 
We are very much looking forward to using its new features. > > > 3) Please, explain your reason(s) for not making the switch. (if you > answered No to #2) Just time right now. We have noticed one major slowdown in code, but it is a known issue (scalar math). This was easily fixed with a little weave code for the time being (so now we're actually 2-3 times faster than the old Numeric code. :-) > > 4) Please provide any suggestions for improving NumPy. No strong opinions here yet as I (sadly) haven't gotten to use it much. The scalar math speed hit us once, so others will probably hit it as well. Thanks again for all the amazing work on this stuff. It has already had an amazing impact on the community involvement and growth. From my own experience, I understand why others are slow to convert. Enthought has wanted to be an early adopter from the beginning, and we are still not there because of the effort involved in conversion and testing along with time pressures from other projects. Still, there is a nice feedback loop that happens here. As scipy/numpy continue to improve (more functionality, 64-bit stability, etc.) and more projects convert over, there are more reasons for people to update their code to the latest and greatest. My bet is it'll take 2-3 more years for the transition to run its course. see ya, eric > > > > > > Thanks for your answers. > > > NumPy Developers > > > > > ------------------------------------------------------- > All the advantages of Linux Managed Hosting--Without the Cost and Risk! > Fully trained technicians. The highest number of Red Hat > certifications in > the hosting industry. Fanatical Support. 
Click to learn more > http://sel.as-us.falkag.net/sel?cmd=lnk&kid=107521&bid=248729&dat=121642 > _______________________________________________ > Numpy-discussion mailing list > Numpy-discussion at lists.sourceforge.net > https://lists.sourceforge.net/lists/listinfo/numpy-discussion > From Chris.Barker at noaa.gov Wed May 31 10:09:03 2006 From: Chris.Barker at noaa.gov (Christopher Barker) Date: Wed May 31 10:09:03 2006 Subject: [Numpy-discussion] Suggestions for NumPy In-Reply-To: <27BE229E-1192-4643-8454-5E0790A0AC7F@ftw.at> References: <447D051E.9000709@ieee.org> <27BE229E-1192-4643-8454-5E0790A0AC7F@ftw.at> Message-ID: <447DCD79.3000808@noaa.gov> Ed Schofield wrote: > Improvements for NumPy's web presence: > http://projects.scipy.org/scipy/numpy/ticket/132 From that page: NumPy's web presence could be improved by: 2. Pointing www.numpy.org to numeric.scipy.org instead of the SF page I don't like this. *numpy is not scipy*. It should have its own page (which would refer to scipy). That page should be something better than the raw sourceforge page, however. A lot of us use numpy without anything else from the scipy project, and scipy is still a major pain in the *&&^* to build. Can you even build it with gcc 4 yet? 
Below I use logical >indexing, but I suspect there is a "built-in" way w/ masked arrays. I >read through the docstring, but didn't see anything better. > >In [66]: c = R[:,-1] > >In [67]: m = R.mask[:,-1] > > >In [69]: c[m==0] >Out[69]: >array(data = > [ 0.94202899 0.51839465 0.24080268 0.26198439 0.29877369 > 2.06856187 > 0.91415831 0.64994426 0.96544036 1.11259755 2.53623188 > 0.71571906 > 0.18394649 0.78037904 0.60869565 3.56744705 0.44147157 > 0.07692308 > 0.27090301 0.16610925 0.57068004 0.80267559 0.57636566 > 0.23634337 > 1.9509476 0.50761427 0.09587514 0.45039019 0.14381271 > 0.69007804 > 2.44481605 0.2909699 0.45930881 1.37123746 2.00668896 > 3.1638796 > 1.0735786 1.06800446 0.18952062 1.55964326 1.16833891 > 0.17502787 > 1.16610925 0.85507246 0.42140468 0.04236343 1.01337793 > 0.22853958 > 1.76365663 1.78372352 0.96209588 0.73578595 0.94760312 > 1.59531773 > 0.88963211], > mask = > [False False False False False False False False False False False > False > False False False False False False False False False False False > False > False False False False False False False False False False False > False > False False False False False False False False False False False > False > False False False False False False False], > fill_value=1e+20) > > > > >------------------------------------------------------- >All the advantages of Linux Managed Hosting--Without the Cost and Risk! >Fully trained technicians. The highest number of Red Hat certifications in >the hosting industry. Fanatical Support. 
Click to learn more >http://sel.as-us.falkag.net/sel?cmd=lnk&kid=107521&bid=248729&dat=121642 >_______________________________________________ >Numpy-discussion mailing list >Numpy-discussion at lists.sourceforge.net >https://lists.sourceforge.net/lists/listinfo/numpy-discussion > > From paustin at eos.ubc.ca Wed May 31 11:42:01 2006 From: paustin at eos.ubc.ca (Philip Austin) Date: Wed May 31 11:42:01 2006 Subject: [Numpy-discussion] masked arrays In-Reply-To: <447DC7F9.9060201@astraw.com> References: <87d5duclvb.fsf@peds-pc311.bsd.uchicago.edu> <447DC7F9.9060201@astraw.com> Message-ID: <17533.58189.41909.508462@owl.eos.ubc.ca> > John Hunter wrote: > >I read through the docstring, but didn't see anything better. Andrew Straw writes: > John, you want c.compressed(). I've also found the old Numeric documentation to be helpful: http://numeric.scipy.org/numpydoc/numpy-22.html regards, Phil From bsouthey at gmail.com Wed May 31 11:48:04 2006 From: bsouthey at gmail.com (Bruce Southey) Date: Wed May 31 11:48:04 2006 Subject: [Numpy-discussion] Any Numeric or numarray users on this list? In-Reply-To: <447D051E.9000709@ieee.org> References: <447D051E.9000709@ieee.org> Message-ID: Hi, On 5/30/06, Travis Oliphant wrote: > > Please help the developers by responding to a few questions. > > > > 1) Have you transitioned or started to transition to NumPy (i.e. import > numpy)? Yes and No > 2) Will you transition within the next 6 months? (if you answered No to #1) Probably for new code only. Having ported numarray code to NumPy there are too many quirks that need to be found. > 3) Please, explain your reason(s) for not making the switch. (if you > answered No to #2) Hopefully 1.0 will be out by then :-). Also bugs and performance will at a similar level to numeric and numarray. > 4) Please provide any suggestions for improving NumPy. The main one at present is to provide a stable release that can serve as a reference point for users. 
This is more a reflection of having a stable version of numpy for reference rather than having to check the svn for an appropriate version. Bruce > Thanks for your answers. > > > NumPy Developers > > > > > ------------------------------------------------------- > All the advantages of Linux Managed Hosting--Without the Cost and Risk! > Fully trained technicians. The highest number of Red Hat certifications in > the hosting industry. Fanatical Support. Click to learn more > http://sel.as-us.falkag.net/sel?cmd=lnk&kid=107521&bid=248729&dat=121642 > _______________________________________________ > Numpy-discussion mailing list > Numpy-discussion at lists.sourceforge.net > https://lists.sourceforge.net/lists/listinfo/numpy-discussion > From mrovner at cadence.com Wed May 31 13:26:11 2006 From: mrovner at cadence.com (Mike Rovner) Date: Wed May 31 13:26:11 2006 Subject: [Numpy-discussion] Re: cx_freezing numpy problem. In-Reply-To: References: Message-ID: Seems, the problem was that _internal.py was not picked up by Freeze. Using --include-modules=numpy.core._internal helps. Mike Rovner wrote: > Hi, > > I'm trying to package numpy based application but get following TB: > > No scipy-style subpackage 'core' found in > /home/mrovner/dev/psgapp/src/gui/lnx32/dvip/numpy. Ignoring: No module > named _internal > Traceback (most recent call last): > File "/home/mrovner/src/cx_Freeze-3.0.2/initscripts/Console.py", line > 26, in ? > exec code in m.__dict__ > File "dvip.py", line 42, in ? > File "dvip.py", line 31, in dvip_gui > File "mainui.py", line 1, in ? > File "psgdb.pyx", line 162, in psgdb > File > "/lan/dfm/grp_mm_data1/dev/tools/linux-x86_32/lib/python2.4/site-packages/numpy/__init__.py", > line 35, in ? 
> verbose=NUMPY_IMPORT_VERBOSE,postpone=False) > File > "/lan/dfm/grp_mm_data1/dev/tools/linux-x86_32/lib/python2.4/site-packages/numpy/_import_tools.py", > line 173, in __call__ > self._init_info_modules(packages or None) > File > "/lan/dfm/grp_mm_data1/dev/tools/linux-x86_32/lib/python2.4/site-packages/numpy/_import_tools.py", > line 68, in _init_info_modules > exec 'import %s.info as info' % (package_name) > File "", line 1, in ? > File > "/lan/dfm/grp_mm_data1/dev/tools/linux-x86_32/lib/python2.4/site-packages/numpy/lib/__init__.py", > line 5, in ? > from type_check import * > File > "/lan/dfm/grp_mm_data1/dev/tools/linux-x86_32/lib/python2.4/site-packages/numpy/lib/type_check.py", > line 8, in ? > import numpy.core.numeric as _nx > File > "/lan/dfm/grp_mm_data1/dev/tools/linux-x86_32/lib/python2.4/site-packages/numpy/core/__init__.py", > line 6, in ? > import umath > File "ExtensionLoader.py", line 12, in ? > AttributeError: 'module' object has no attribute '_ARRAY_API' > > I did freezing with: > FreezePython --install-dir=lnx32 --include-modules=numpy > --include-modules=numpy.core dvip.py > > I'm using Python-2.4.2 numpy-0.9.8 cx_Freeze-3.0.2 on linux. Everything > compiled from source. > > Any help appreciated. > > Thanks, > Mike > > > > ------------------------------------------------------- > All the advantages of Linux Managed Hosting--Without the Cost and Risk! > Fully trained technicians. The highest number of Red Hat certifications in > the hosting industry. Fanatical Support. 
Click to learn more > http://sel.as-us.falkag.net/sel?cmd=lnk&kid=107521&bid=248729&dat=121642 From gkmohan at gmail.com Wed May 31 13:29:02 2006 From: gkmohan at gmail.com (Krishna Mohan Gundu) Date: Wed May 31 13:29:02 2006 Subject: [Numpy-discussion] numpy-0.9.5 build is not clean In-Reply-To: <70ec82800605242240v6cf2f893j53f7ec6b0511b79a@mail.gmail.com> References: <70ec82800605241639g5ce7ddeay17b596c1b4335ab4@mail.gmail.com> <447539B2.80206@astraw.com> <70ec82800605242240v6cf2f893j53f7ec6b0511b79a@mail.gmail.com> Message-ID: <70ec82800605311327x254d44aci6552314f63d02ce1@mail.gmail.com> Dear Andrew, I forgot to delete the include files from the previous installation, which I had installed manually by copying the include files. Sorry for the trouble. Hope this helps someone who makes the same mistake I did. cheers, Krishna. On 5/24/06, Krishna Mohan Gundu wrote: > Dear Andrew, > > Thanks for your reply. As I said earlier I deleted the existing numpy > installation and the build directories. I am more than confident that > I did it right. Is there any way I can prove myself wrong? > > I also tried importing umath.so from the build directory > === > $ cd numpy-0.9.5/build/lib.linux-x86_64-2.4/numpy/core > $ ls -l umath.so > -rwxr-xr-x 1 krishna users 463541 May 22 17:46 umath.so > $ python > Python 2.4.3 (#1, Apr 8 2006, 19:10:42) > [GCC 3.2.3 20030502 (Red Hat Linux 3.2.3-49)] on linux2 > Type "help", "copyright", "credits" or "license" for more information. > >>> import umath > Traceback (most recent call last): > File "", line 1, in ? > RuntimeError: module compiled against version 90500 of C-API but this > version of numpy is 90504 > >>> > === > > So something is wrong with the build for sure. Could there be anything > wrong other than not deleting the build directory? > > thanks, > Krishna. > > On 5/24/06, Andrew Straw wrote: > > Dear Krishna, it looks like there are some mixed versions of numpy > > floating around on your system. 
Before building, remove the "build" > > directory completely. > > > > Krishna Mohan Gundu wrote: > > > > > Hi, > > > > > > I am trying to build numpy-0.9.5, downloaded from sourceforge download > > > page, as higher versions are not yet tested for pygsl. The build seems > > > to be broken. I uninstall existing numpy and start build from scratch > > > but I get the following errors when I import numpy after install. > > > > > > ==== > > > > > >>>> from numpy import * > > >>> > > > import core -> failed: module compiled against version 90504 of C-API > > > but this version of numpy is 90500 > > > import random -> failed: 'module' object has no attribute 'dtype' > > > import linalg -> failed: module compiled against version 90504 of > > > C-API but this version of numpy is 90500 > > > ==== > > > > > > Any help is appreciated. Am I doing something wrong or is it known > > > that this build is broken? > > > > > > thanks, > > > Krishna. > > > > > > > > > ------------------------------------------------------- > > > All the advantages of Linux Managed Hosting--Without the Cost and Risk! > > > Fully trained technicians. The highest number of Red Hat > > > certifications in > > > the hosting industry. Fanatical Support. Click to learn more > > > http://sel.as-us.falkag.net/sel?cmd=lnk&kid=107521&bid=248729&dat=121642 > > > _______________________________________________ > > > Numpy-discussion mailing list > > > Numpy-discussion at lists.sourceforge.net > > > https://lists.sourceforge.net/lists/listinfo/numpy-discussion > > > > > > > From nvf at MIT.EDU Wed May 31 13:48:04 2006 From: nvf at MIT.EDU (Nick Fotopoulos) Date: Wed May 31 13:48:04 2006 Subject: [Numpy-discussion] Re: Any Numeric or numarray users on this list? 
In-Reply-To: <20060531031123.F3D7532B05@sc8-sf-spam1.sourceforge.net> References: <20060531031123.F3D7532B05@sc8-sf-spam1.sourceforge.net> Message-ID: <5C73A4D7-66AB-4737-A41E-C61DACFCBB87@mit.edu> > 1) Have you transitioned or started to transition to NumPy (i.e. > import > numpy)? Yes, I've pretty much decided that numpy is the way to go with my young analysis codes, even though it is going to be somewhat painful to distribute to my colleagues and among compute nodes in clusters. I am saved by the fact that we all work within an NFS space, so I can host up-to-date compiles of numpy and scipy without requiring root access nor requiring everyone to compile their own. Of course, I will end up giving them a few lines of LD_LIBRARY_PATH to add. Old codes that used Numeric have all transitioned smoothly to numpy in my internal tests. I may distribute two versions (one Numeric, one numpy) for wider distribution outside my clusters, but there may be a better way to do this. > 2) Will you transition within the next 6 months? (if you answered > No to #1) > 3) Please, explain your reason(s) for not making the switch. (if you > answered No to #2) > 4) Please provide any suggestions for improving NumPy. I think NumPy is fantastic. That said, even the fantastic can improve. I am very happy with numpy as software, so my comments are mostly about adoption and accessibility. I'd really like to see the team getting packages (32 and 64 bit) into standard Redhat, Debian, and Fink repositories quickly after a release. I believe Redhat is at numpy 0.9.5 and neither Debian (testing) nor Fink have packages in the default repositories. Having to add extra lines to /etc/apt/sources.list (or Redhat equivalent) to grab packages from private repositories will dissuade a lot of people from adopting numpy. Many of them are the same people who are unable to compile it themselves. 
Up-to-date packages also help me with my problem in #1 -- an admin will happily yum an rpm for me on all of the cluster nodes, but might not be willing to install it from a nonstandard source. I agree that Googlability is very important and easily addressable. I'm glad someone brought this up. Btw, googling "numpy rpm" brings up http://pylab.sourceforge.net/, which is some more of Travis's old Numeric stuff (labeled as such on the page). Thanks for the hard work in coding and thanks for keeping a thriving discussion list going. Take care, Nick From fperez.net at gmail.com Wed May 31 14:36:02 2006 From: fperez.net at gmail.com (Fernando Perez) Date: Wed May 31 14:36:02 2006 Subject: [Numpy-discussion] Suggestions for NumPy In-Reply-To: <447DCD79.3000808@noaa.gov> References: <447D051E.9000709@ieee.org> <27BE229E-1192-4643-8454-5E0790A0AC7F@ftw.at> <447DCD79.3000808@noaa.gov> Message-ID: On 5/31/06, Christopher Barker wrote: > > Ed Schofield wrote: > > Improvements for NumPy's web presence: > > http://projects.scipy.org/scipy/numpy/ticket/132 > > From that page: > > NumPy's web presence could be improved by: > > 2. Pointing www.numpy.org to numeric.scipy.org instead of the SF page > > I don't like this. *numpy is not scipy*. It should have its own page > (which would refer to scipy). That page should be something better than > the raw sourceforge page, however. Well, ipython is not scipy either, and yet its homepage is ipython.scipy.org. I think it's simply a matter of convenience that the Enthought hosting infrastructure is so much more pleasant to use than SF, that other projects use scipy.org as an umbrella. In that sense, I think it's fair to say that numpy is part of the 'scipy family'. I don't know, at least this doesn't particularly bother me. > A lot of us use numpy without anything else from the scipy project, and > scipy is still a major pain in the *&&^* to build. Can you even build it > with gcc 4 yet? 
I built it on a recent ubuntu not too long ago, without any glitches. I can check again tonight on a fresh Dapper with up-to-date SVN if you want. Cheers, f From rowen at cesmail.net Wed May 31 14:49:01 2006 From: rowen at cesmail.net (Russell E. Owen) Date: Wed May 31 14:49:01 2006 Subject: [Numpy-discussion] Re: Any Numeric or numarray users on this list? References: <447D051E.9000709@ieee.org> Message-ID: In article <447D051E.9000709 at ieee.org>, Travis Oliphant wrote: > Please help the developers by responding to a few questions. > > 1) Have you transitioned or started to transition to NumPy (i.e. import > numpy)? No, not beyond installing it to see if it works. > 2) Will you transition within the next 6 months? (if you answered No to #1) I expect to start to transition within a few months of both numpy and pyfits-with-numpy being released and being reported as stable and fast. > 4) Please provide any suggestions for improving NumPy. Please improve notification of documentation updates. I keep seeing complaints from folks who've bought the numpy documentation that they get no notification of updates. That makes me very reluctant to buy the documentation myself. I wish that full support for masked arrays had made it in (i.e. masked arrays are first-class citizens that are accepted by all functions). The inability in numeric to apply 2d filters on masked image arrays is the main thing missing for me in numarray. 
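Tying back to the masked-array thread earlier in this digest: the `compressed()` idiom suggested by Pierre GM and Andrew Straw, alongside John Hunter's logical-indexing version, can be sketched as follows (a minimal example written against the modern `numpy.ma` module; the 2006 `MA` API behaved similarly but differed in details):

```python
import numpy as np

# A small 2-D masked array standing in for John's array R; one entry
# in the last column is masked.
R = np.ma.array([[1.0, 2.0],
                 [3.0, 4.0],
                 [5.0, 6.0]],
                mask=[[False, False],
                      [False, True],
                      [False, False]])

# Logical-indexing version from the original question.
c = R[:, -1]
m = np.asarray(R.mask)[:, -1]
by_hand = c[~m]

# The built-in way suggested in the replies.
built_in = R[:, -1].compressed()

print(built_in)  # [2. 6.]
```

Both approaches keep only the non-masked values of the last column; `compressed()` additionally returns a plain ndarray rather than a masked array.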
-- Russell From stefan at sun.ac.za Wed May 31 16:08:02 2006 From: stefan at sun.ac.za (Stefan van der Walt) Date: Wed May 31 16:08:02 2006 Subject: [Numpy-discussion] numpy.test(1,10) results in a segfault In-Reply-To: <447DBA8D.5090908@iam.uni-stuttgart.de> References: <447DBA8D.5090908@iam.uni-stuttgart.de> Message-ID: <20060531230706.GA32246@mentat.za.net> I filed this as ticket #135: http://projects.scipy.org/scipy/numpy/ticket/135 Regards Stéfan On Wed, May 31, 2006 at 05:47:25PM +0200, Nils Wagner wrote: > test_wrap (numpy.core.tests.test_umath.test_special_methods) ... ok > check_types (numpy.core.tests.test_scalarmath.test_types)*** glibc > detected *** free() : invalid pointer: > 0xb7ab74a0 *** > > Program received signal SIGABRT, Aborted. From oliphant.travis at ieee.org Wed May 31 17:40:00 2006 From: oliphant.travis at ieee.org (Travis Oliphant) Date: Wed May 31 17:40:00 2006 Subject: [Numpy-discussion] numpy.test(1,10) results in a segfault In-Reply-To: <20060531230706.GA32246@mentat.za.net> References: <447DBA8D.5090908@iam.uni-stuttgart.de> <20060531230706.GA32246@mentat.za.net> Message-ID: <447E3711.9080609@ieee.org> Stefan van der Walt wrote: > I filed this as ticket #135: > > http://projects.scipy.org/scipy/numpy/ticket/135 > > Thanks. This one is due to a bug/oddity in Python itself. Apparently complex-number subtypes can't use a different memory allocator than the Python memory allocator. I've let Python powers know about the bug and worked around it in NumPy, for now. -Travis From aisaac at american.edu Wed May 31 19:58:03 2006 From: aisaac at american.edu (Alan G Isaac) Date: Wed May 31 19:58:03 2006 Subject: [Numpy-discussion] Re: Any Numeric or numarray users on this list? In-Reply-To: References: <447D051E.9000709@ieee.org> Message-ID: On Wed, 31 May 2006, "Russell E. Owen" apparently wrote: > Please improve notification of documentation updates.
> I keep seeing complaints from folks who've bought the > numpy documentation that they get no notification of > updates. That makes me very reluctant to buy the > documentation myself. The documentation is excellent, and I've been completely satisfied with Travis's handling of the updates. It is also a minimal investment in an excellent project. fwiw, Alan Isaac From oliphant.travis at ieee.org Wed May 31 21:03:03 2006 From: oliphant.travis at ieee.org (Travis Oliphant) Date: Wed May 31 21:03:03 2006 Subject: [Numpy-discussion] test_scalarmath.py", line 63 In-Reply-To: <447D9B2E.90500@iam.uni-stuttgart.de> References: <447D9B2E.90500@iam.uni-stuttgart.de> Message-ID: <447E66D8.7020908@ieee.org> Nils Wagner wrote: >>>> numpy.__version__ >>>> > '0.9.9.2553' > > > numpy.test(1,10) results in > ====================================================================== > FAIL: check_types (numpy.core.tests.test_scalarmath.test_types) > ---------------------------------------------------------------------- > Traceback (most recent call last): > File > "/usr/lib64/python2.4/site-packages/numpy/core/tests/test_scalarmath.py", > line 63, in check_types > assert val.dtype.num == typeconv[k,l] and \ > AssertionError: error with (0,7) > This is probably on a 64-bit system. It would be great if you could take the code in the test module and adapt it to print out the typecodes that are obtained using 0-dimensional arrays. Of course, maybe that's a better way to run the test.... -Travis > ---------------------------------------------------------------------- > Ran 368 tests in 0.479s > > FAILED (failures=1) > > Nils > > > > ------------------------------------------------------- > All the advantages of Linux Managed Hosting--Without the Cost and Risk! > Fully trained technicians. The highest number of Red Hat certifications in > the hosting industry. Fanatical Support. 
Click to learn more > http://sel.as-us.falkag.net/sel?cmd=lnk&kid=107521&bid=248729&dat=121642 > _______________________________________________ > Numpy-discussion mailing list > Numpy-discussion at lists.sourceforge.net > https://lists.sourceforge.net/lists/listinfo/numpy-discussion > From fperez.net at gmail.com Wed May 31 21:20:04 2006 From: fperez.net at gmail.com (Fernando Perez) Date: Wed May 31 21:20:04 2006 Subject: [Numpy-discussion] Re: Any Numeric or numarray users on this list? In-Reply-To: References: <447D051E.9000709@ieee.org> Message-ID: On 5/31/06, Alan G Isaac wrote: > On Wed, 31 May 2006, "Russell E. Owen" apparently wrote: > > Please improve notification of documentation updates. > > I keep seeing complaints from folks who've bought the > > numpy documentation that they get no notification of > > updates. That makes me very reluctant to buy the > > documentation myself. > > The documentation is excellent, and I've been completely > satisfied with Travis's handling of the updates. I'll add my voice on this front. When I've needed a special update (for a workshop, where I needed to print out hardcopies as up-to-date as possible), Travis was very forthcoming and gave me quickly his most recent copy. So while a few weeks ago a couple of emails may not have been replied quite on the spot, overall I don't feel in any way slighted by his handling of the doc system, quite the opposite. And he also indicated he was in the process of setting up a more automated system. To be honest, I'd rather wait for a manual update than see Travis devote one or two evenings to configuring something of this nature when he could be coding for numpy :) Cheers, f From wbaxter at gmail.com Mon May 1 04:49:15 2006 From: wbaxter at gmail.com (Bill Baxter) Date: Mon May 1 04:49:15 2006 Subject: [Numpy-discussion] Some questions about dot() In-Reply-To: References: <4455D188.7030505@bigpond.net.au> Message-ID: Hi Gary, On 5/1/06, Gary Ruben wrote: > > Hi Bill, > > It looks to me like dot() is doing the right thing. Can you post an > example of why you think it's wrong? It /is/ behaving as documented, if that's what you mean. But the question is why it acts that way. Simple example: >>> numpy.__version__, os.name ('0.9.5', 'nt') >>> a = numpy.asmatrix([1.,2.,3.]).T >>> a matrix([[ 1.], [ 2.], [ 3.]]) >>> numpy.dot(a,a) Traceback (most recent call last): File "", line 1, in ? ValueError: matrices are not aligned >>> numpy.dot(a.T,a) matrix([[ 14.]]) Everywhere I've ever encountered a dot product before it's been equivalent to the transpose of A times B. So a 'dot()' function that acts exactly like a matrix multiply is a bit surprising to me. After poking around some more I found numpy.vdot() which is apparently supposed to do the standard "vector" dot product. However, all I get from that is: >>> a matrix([[ 1.], [ 2.], [ 3.]]) >>> numpy.vdot(a,a) Traceback (most recent call last): File "", line 1, in ?
ValueError: vectors have different lengths >>> Also in the same numpy.core._dotblas module as dot and vdot, there's an 'inner', which claims to be an inner product, but seems to only work when called with both arguments transposed as follows: >>> numpy.inner(a.T, a.T) array([[ 14.]]) 2a) re. the docstring - this looks like a 'bug'; presumably an old > docstring not correctly updated. I think maybe 'matrixproduct' is supposed to be 'matrixmultiply' which /is/ a synonym for dot. 2b) "generic numpy equivalent" - agree that this isn't very enlightening. -- William V. Baxter III OLM Digital Kono Dens Building Rm 302 1-8-8 Wakabayashi Setagaya-ku Tokyo, Japan 154-0023 +81 (3) 3422-3380 -------------- next part -------------- An HTML attachment was scrubbed... URL: From aisaac at american.edu Mon May 1 05:26:01 2006 From: aisaac at american.edu (Alan G Isaac) Date: Mon May 1 05:26:01 2006 Subject: [Numpy-discussion] Some questions about dot() In-Reply-To: References: Message-ID: On Mon, 1 May 2006, Bill Baxter apparently wrote: > Seems like it would make more sense to have dot() follow > the mathematical convention of a.T * b, and have > a separate function, like mult() or matrixmult(), do what > dot() does currently. Is there historical baggage of some > kind here preventing that? Or some maybe there's > a different definition of dot product from another branch > of mathematics that I'm not familiar with? Historically, 'dot' was essentially an alias for 'multiarray.matrixproduct' in Numeric. This is a long standing use that I would not expect to change. (But I am just a user.) I believe you have found a documentation bug, as matrixproduct either no longer exists or is well hidden. On the more general point ... Can you point to a definition that matches your proposed use? The most common definition I know for 'dot' is between vectors, which do not "transpose". In numpy this is 'vdot', which returns a scalar product. 
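For readers following along with a present-day numpy, the three functions compared in this thread can be checked directly; note that vdot has since been changed to flatten its arguments, so the ValueError quoted above no longer occurs (a sketch of current behaviour, not the 0.9.5 behaviour):

```python
import numpy as np

# On 1-d arrays, dot() already is the textbook dot product: no
# transpose is needed because a rank-1 array has no orientation.
v = np.array([1., 2., 3.])
assert np.dot(v, v) == 14.0
assert np.inner(v, v) == 14.0

# On (n,1) "column vector" arrays, dot() follows matrix-multiply
# alignment rules, so the transpose must be written out.
a = v.reshape(3, 1)
assert np.dot(a.T, a)[0, 0] == 14.0

# vdot() flattens both arguments first, so it accepts the column
# shape directly and returns a plain scalar.
assert np.vdot(a, a) == 14.0
```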
The production of a dot product between two column vectors in a linear algebra context by transposing and then matrix multiplying is, I believe, a convenience rather than a definition of any sort. If we care about the details, the result is also not a scalar. Cheers, Alan Isaac From p.barbier-de-reuille at uea.ac.uk Mon May 1 07:34:06 2006 From: p.barbier-de-reuille at uea.ac.uk (Pierre Barbier de Reuille) Date: Mon May 1 07:34:06 2006 Subject: [Numpy-discussion] Bug in ndarray.argmax Message-ID: <44561C11.3000601@cmp.uea.ac.uk> Hello, I noticed a bug in ndarray.argmax which prevents getting the argmax from any axis but the last one. I attach a patch to correct this. Also, here is a small Python script to test the argmax behaviour I implemented: ==8<====8<====8<====8<====8<====8<====8<====8<===8<=== from numpy import array, random, all a = random.normal( 0, 1, ( 4,5,6,7,8 ) ) for i in xrange( a.ndim ): amax = a.max( i ) aargmax = a.argmax( i ) axes = range( a.ndim ) axes.remove( i ) assert all( amax == aargmax.choose( *a.transpose( i, *axes ) ) ) ==8<====8<====8<====8<====8<====8<====8<====8<===8<=== Pierre -------------- next part -------------- An embedded and charset-unspecified text was scrubbed... Name: swapback.patch URL: From gruben at bigpond.net.au Mon May 1 07:39:10 2006 From: gruben at bigpond.net.au (Gary Ruben) Date: Mon May 1 07:39:10 2006 Subject: [Numpy-discussion] Some questions about dot() In-Reply-To: References: <4455D188.7030505@bigpond.net.au> Message-ID: <44561D5E.7020905@bigpond.net.au> Hi Bill, I see what you mean. It looks to me like the functionality hasn't been extended to the matrix type in a sensible way, but I can see issues with this. I would expect dot() to be defined for a rank-1 array or vector object, but not necessarily for a rank-2 array or matrix. Linear algebra texts tend to hide the issue of the rank of the object being transposed when they equate the dot product with a.T*b (or a*b.T).
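The rank issue Gary raises can be seen directly in numpy: .T on a rank-1 array is a no-op ("vectors do not transpose"), and dot() on rank-2 inputs is really a batch of rank-1 dot products, which is why it behaves as a matrix multiply (a quick sketch):

```python
import numpy as np

# Transposing a rank-1 array changes nothing, so the a.T * b recipe
# from linear algebra texts presumes rank-2 column vectors.
v = np.arange(3.0)
assert v.T.shape == v.shape == (3,)

# dot() on rank-2 arrays is one matrix multiply: every row of A
# dotted with every column of B simultaneously.
A = np.arange(6.0).reshape(2, 3)
B = np.arange(12.0).reshape(3, 4)
C = np.dot(A, B)
for i in range(2):
    for j in range(4):
        assert C[i, j] == np.dot(A[i, :], B[:, j])
```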
It could be argued that if dot takes arguments which are matrix objects of shape (n,1), it could sensibly consider them to be 'column vectors', and should return a scalar. However, these are rank-2 arrays and I think Travis explicitly wrote the dot functionality for rank-2 arrays to support taking multiple rank-1 dot products simultaneously. I don't know if there are other issues around changing the dot() behaviour to act differently for matrix objects and array objects, but I suspect that's what is required. Gary R. Bill Baxter wrote: > Hi Gary, > > On 5/1/06, *Gary Ruben* < gruben at bigpond.net.au > > wrote: > > Hi Bill, > > It looks to me like dot() is doing the right thing. Can you post an > example of why you think it's wrong? > > > It /is/ behaving as documented, if that's what you mean. But the > question is why it acts that way. > Simple example: > >>> numpy.__version__, os.name > ('0.9.5', 'nt') > >>> a = numpy.asmatrix([1.,2.,3.]).T > >>> a > matrix([[ 1.], > [ 2.], > [ 3.]]) > >>> numpy.dot(a,a) > Traceback (most recent call last): > File "", line 1, in ? > ValueError: matrices are not aligned > >>> numpy.dot(a.T,a) > matrix([[ 14.]]) > > Everywhere I've ever encountered a dot product before it's been > equivalent to the transpose of A times B. So a 'dot()' function that > acts exactly like a matrix multiply is a bit surprising to me. > > After poking around some more I found numpy.vdot() which is apparently > supposed to do the standard "vector" dot product. However, all I get > from that is: > >>> a > matrix([[ 1.], > [ 2.], > [ 3.]]) > >>> numpy.vdot(a,a) > Traceback (most recent call last): > File "", line 1, in ? 
> ValueError: vectors have different lengths > >>> > > Also in the same numpy.core._dotblas module as dot and vdot, there's an > 'inner', which claims to be an inner product, but seems to only work > when called with both arguments transposed as follows: > >>> numpy.inner(a.T, a.T) > array([[ 14.]]) From fullung at gmail.com Mon May 1 15:12:06 2006 From: fullung at gmail.com (Albert Strasheim) Date: Mon May 1 15:12:06 2006 Subject: [Numpy-discussion] ctypes and NumPy Message-ID: <00d901c66d6c$2d2569c0$0a84a8c0@dsp.sun.ac.za> Hello all I've been working on wrapping a C library for use with NumPy for the past week. After struggling to get it "just right" with SWIG and hand-written C API, I tried ctypes and I was able to do with ctypes in 4 hours what I was unable to do with SWIG or the C API in 5 days (probably mostly due to incompetence on my part ;-)). So there's my ctypes testimonial. I have a few questions regarding using of ctypes with NumPy. I'd appreciate any feedback on better ways of accomplishing what I've done so far. 1. Passing pointers to NumPy data to C functions I would like to pass the data pointer of a NumPy array to a C function via ctypes. Currently I'm doing the following in C: #ifdef _DEBUG #define DEBUG__ #undef _DEBUG #endif #include "Python.h" #ifdef DEBUG__ #define _DEBUG #undef DEBUG__ #endif typedef struct PyArrayObject { PyObject_HEAD char* data; int nd; void* dimensions; void* strides; void* descr; int flags; void* weakreflist; } PyArrayObject; extern void* PyArray_DATA(PyObject* obj) { return (void*) (((PyArrayObject*)(obj))->data); } First some notes regarding the code above. The preprocessor goop is there to turn off the _DEBUG define, if any, to prevent Python from trying to link against its debug library (python24_d) on Windows, even when you do a debug build of your own code. Including arrayobject.h seems to introduce some Python library symbols, so that's why I also had to extract the definition of PyArrayObject from arrayobject.h. 
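Worth noting for later readers: numpy eventually grew a built-in answer to exactly this problem, so the C shim above is no longer necessary. The array's ctypes attribute exposes the data pointer directly (a sketch, assuming a current numpy):

```python
import ctypes
import numpy as np

a = np.array([1.0, 2.0, 3.0])

# The raw address of the array's first element, as a Python int --
# the same value the PyArray_DATA shim above returns.
addr = a.ctypes.data
assert isinstance(addr, int)

# The same address as a typed pointer, ready to pass to a C function
# expecting double*.
ptr = a.ctypes.data_as(ctypes.POINTER(ctypes.c_double))
assert ptr[0] == 1.0 and ptr[2] == 3.0
```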
Now I can build my code in debug or release mode without having to worry about Python. As a companion to this C function that allows me to get the data pointer of a NumPy array I have on the Python side: def c_arraydata(a, t): return cast(foo.PyArray_DATA(a), POINTER(t)) def arraydata_intp(a): return N.intp(foo.PyArray_DATA(a)) I use c_arraydata to cast a NumPy array for wrapped functions expecting something like a double*. I use arraydata_intp when I need to deal with something like a double**. I make the double** buffer as an array of intp and then assign each element to point to an array of 'f8', the address of which I get from arraydata_intp. The reason I'm jumping through all these hoops is so that I can get at the data pointer of a NumPy array. ndarray.data is a Python buffer object and I didn't manage to find any other way to obtain this pointer. If it doesn't exist already, it would be very useful if NumPy arrays exposed a way to get this information, by calling something like ndarray.dataptr or ndarray.dataintp. Once this is possible, there could be more integration with ctypes. See item 3. 2. Checking struct alignment With the following ctypes struct: class svm_node(Structure): _fields_ = [ ('index', c_int), ('value', c_double) ] I can do: print svm_node.index.offset print svm_node.index.size print svm_node.value.offset print svm_node.value.size which prints out: 0, 4, 8, 8 on my system. The corresponding array description is: In [58]: dtype({'names' : ['index', 'value'], 'formats' : [intc, 'f8']}, align=1) Out[58]: dtype([('index', ' In [48]: descr['index'].alignment Out[48]: 4 In [49]: descr['index'].itemsize Out[49]: 4 However, there doesn't seem to be an equivalent in the array description to the offset parameter that ctypes structs have. Is there a way to get this information? It would be useful to have it, since then one could make sure that the NumPy array and the ctypes struct line up in memory. 3. 
Further integration with ctypes From the ctypes tutorial: """You can also customize ctypes argument conversion to allow instances of your own classes be used as function arguments. ctypes looks for an _as_parameter_ attribute and uses this as the function argument. Of course, it must be one of integer, string, or unicode: ... If you don't want to store the instance's data in the _as_parameter_ instance variable, you could define a property which makes the data available.""" If I understand correctly, you could also accomplish the same thing by implementing the from_param class method. I don't think it's well defined what _as_parameter_ (or from_param) should do for an arbitrary NumPy array, so there are a few options. 1. Allow the user to add _as_parameter_ or from_param to an ndarray instance. I don't know if this is possible at all (it doesn't seem to work at the moment because ndarray is a "built-in" type). 2. Allow _as_parameter_ to be a property with the user being able to specify the get method at construction time (or allow the user to specify the from_param method). For svm_node I might do something like: def svm_node_as_parameter(self): return cast(self.dataptr, POINTER(svm_node)) svm_node_descr = \ dtype({'names' : ['index', 'value'], 'formats' : [N.intc, 'f8']}, align=1) node = array([...], dtype=svm_node_descr, ctypes_as_parameter=svm_node_as_parameter) 3. As a next step, provide defaults for _as_parameter_ where possible. The scalar types could set it to the corresponding ctype (or None if ctypes can't be imported). Arrays with "basic" data, such as 'f8' and friends could set up a property that calls ctypes.cast(self.dataptr, POINTER(corresponding_ctype)). Thanks for reading. :-) Comments would be appreciated. If some of my suggestions seem implementation-worthy, I'd be willing to try to implement them.
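On question 2 above: numpy's dtype does record per-field byte offsets, the counterpart of the ctypes .offset attribute; they live in the dtype.fields mapping. A sketch, assuming a modern numpy where align=True requests C-struct padding (offsets match ctypes on a typical platform):

```python
import ctypes
import numpy as np

class svm_node(ctypes.Structure):
    _fields_ = [('index', ctypes.c_int), ('value', ctypes.c_double)]

# align=True asks numpy to lay the fields out with C-struct padding.
descr = np.dtype([('index', np.intc), ('value', 'f8')], align=True)

# dtype.fields maps each field name to (field dtype, byte offset).
for name, _ctype in svm_node._fields_:
    fdtype, offset = descr.fields[name][:2]
    cfield = getattr(svm_node, name)
    assert offset == cfield.offset
    assert fdtype.itemsize == cfield.size

# With matching offsets the two layouts line up element for element.
assert descr.itemsize == ctypes.sizeof(svm_node)
```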
Regards, Albert From Chris.Barker at noaa.gov Mon May 1 17:07:22 2006 From: Chris.Barker at noaa.gov (Christopher Barker) Date: Mon May 1 17:07:22 2006 Subject: [Numpy-discussion] numpy putmask() method and a scalar. Message-ID: <4456A284.90805@noaa.gov> Hi all, I was just trying to use the putmask() method to replace a bunch of values with the same value and got an error, where the putmask() function works fine: >>> import numpy as N >>> a = N.arange(5) >>> a array([0, 1, 2, 3, 4]) >>> a.putmask(a > 3, 3) Traceback (most recent call last): File "", line 1, in ? ValueError: putmask: mask and data must be the same size >>> N.putmask(a, a > 3, 3) >>> a array([0, 1, 2, 3, 3]) and: >>> help (N.ndarray.putmask) putmask(...) a.putmask(values, mask) sets a.flat[n] = v[n] for each n where mask.flat[n] is TRUE. v can be scalar. indicates that it should work. -Chris -- Christopher Barker, Ph.D. Oceanographer NOAA/OR&R/HAZMAT (206) 526-6959 voice 7600 Sand Point Way NE (206) 526-6329 fax Seattle, WA 98115 (206) 526-6317 main reception Chris.Barker at noaa.gov From ndarray at mac.com Mon May 1 19:07:02 2006 From: ndarray at mac.com (Sasha) Date: Mon May 1 19:07:02 2006 Subject: [Numpy-discussion] numpy putmask() method and a scalar. In-Reply-To: <4456A284.90805@noaa.gov> References: <4456A284.90805@noaa.gov> Message-ID: Do we really need the putmask method? With fancy indexing the obvious way to do it is >>> a[a>3] = 3 and it works fine. Maybe we can drop putmask method before 1.0 instead of fixing the bug. On 5/1/06, Christopher Barker wrote: > Hi all, > > I was just trying to use the putmask() method to replace a bunch of > values with the same value and got an error, where the putmask() > function works fine: > > >>> import numpy as N > >>> a = N.arange(5) > >>> a > array([0, 1, 2, 3, 4]) > >>> a.putmask(a > 3, 3) > Traceback (most recent call last): > File "", line 1, in ?
> ValueError: putmask: mask and data must be the same size > > >>> N.putmask(a, a > 3, 3) > >>> a > array([0, 1, 2, 3, 3]) > > and: > > >>> help (N.ndarray.putmask) > putmask(...) > a.putmask(values, mask) sets a.flat[n] = v[n] for each n where > mask.flat[n] is TRUE. v can be scalar. > > indicates that it should work. > > -Chris > > > -- > Christopher Barker, Ph.D. > Oceanographer > > NOAA/OR&R/HAZMAT (206) 526-6959 voice > 7600 Sand Point Way NE (206) 526-6329 fax > Seattle, WA 98115 (206) 526-6317 main reception > > Chris.Barker at noaa.gov > From jdhunter at ace.bsd.uchicago.edu Mon May 1 20:43:04 2006 From: jdhunter at ace.bsd.uchicago.edu (John Hunter) Date: Mon May 1 20:43:04 2006 Subject: [Numpy-discussion] numpy putmask() method and a scalar. In-Reply-To: (Sasha's message of "Mon, 1 May 2006 22:06:15 -0400") References: <4456A284.90805@noaa.gov> Message-ID: <87wtd5gkq9.fsf@peds-pc311.bsd.uchicago.edu> >>>>> "Sasha" == Sasha writes: Sasha> Do we really need the putmask method? With fancy indexing Sasha> the obvious way to do it is >>>> a[a>3] = 3 Sasha> and it works fine. Maybe we can drop putmask method before Sasha> 1.0 instead of fixing the bug. I'm +1 for backwards compatibility with the Numeric API where the cost of a fix isn't too painful. The way to speed numpy adoption is to make old Numeric code "just work" where possible with the new API.
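Both spellings under discussion can be checked side by side with a current numpy, where the module-level putmask function survives and takes a scalar value exactly as the docstring promises (a sketch):

```python
import numpy as np

# Boolean (fancy) indexing: the idiomatic modern spelling.
a = np.arange(5)
a[a > 3] = 3
assert a.tolist() == [0, 1, 2, 3, 3]

# The putmask *function* accepts a scalar value, which is what the
# ndarray.putmask docstring quoted above promises.
b = np.arange(5)
np.putmask(b, b > 3, 3)
assert b.tolist() == [0, 1, 2, 3, 3]
```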
Bear in mind that many people may not even begin the port attempt until well after numpy 1.0. JDH From andorxor at gmx.de Tue May 2 01:30:08 2006 From: andorxor at gmx.de (Stephan Tolksdorf) Date: Tue May 2 01:30:08 2006 Subject: [Numpy-discussion] Why cblas not blas? Message-ID: <4457165D.8050508@gmx.de> Hi While I'm just trying to get Atlas compiled on my Win system (so far without success) I'm wondering why NumPy is using the cblas interface to BLAS instead of the normal Fortran one. If it was using the normal BLAS interface, one could just use the binaries from the optimized ACML, MKL, or GOTO libraries. Is this for technical or purely historical reasons? Stephan From N.Gorsic at vipnet.hr Tue May 2 05:27:22 2006 From: N.Gorsic at vipnet.hr (Neven Gorsic) Date: Tue May 2 05:27:22 2006 Subject: [Numpy-discussion] ImportError: No module named Numeric Message-ID: <89684A5E33D0BC4CA1CA32E6E6499E7C0101184E@MAIL02.win.vipnet.hr> I have WinXP SP1 and I installed: python-2.4.3.msi and 3 packages: numpy-0.9.6r1.win32-py2.4.exe scipy-0.4.8.win32-py2.4-pentium4sse2.exe py2exe-0.6.5.win32-py2.4.exe All 3 directories are placed in the C:\Python24\Lib\site-packages. After I type import Numeric I got "ImportError: No module named Numeric". But SciPy which needs NumPy works fine (no Import Error). Can you tell me what to do? Neven -------------- next part -------------- An HTML attachment was scrubbed...
URL: From a.u.r.e.l.i.a.n at gmx.net Tue May 2 05:36:17 2006 From: a.u.r.e.l.i.a.n at gmx.net (Johannes Loehnert) Date: Tue May 2 05:36:17 2006 Subject: [Numpy-discussion] ImportError: No module named Numeric In-Reply-To: <89684A5E33D0BC4CA1CA32E6E6499E7C0101184E@MAIL02.win.vipnet.hr> References: <89684A5E33D0BC4CA1CA32E6E6499E7C0101184E@MAIL02.win.vipnet.hr> Message-ID: <200605021434.59560.a.u.r.e.l.i.a.n@gmx.net> On Tuesday 02 May 2006 14:25, Neven Gorsic wrote: > I have WinXp SP1 and I installed: > > python-2.4.3.msi > > and 3 packeges: > > numpy-0.9.6r1.win32-py2.4.exe > scipy-0.4.8.win32-py2.4-pentium4sse2.exe > py2exe-0.6.5.win32-py2.4.exe > > All 3 directories are placed in the C:\Python24\Lib\site-packages. > After I type import Numeric I got "ImportError: No module named > Numeric". > But SciPy which needs NumPy works fine (no Impert Error). NumPy is the 'new' Numeric, i.e. the successor of Numeric. Instead of ``import Numeric``, use ``import numpy``. Johannes From luszczek at cs.utk.edu Tue May 2 06:05:00 2006 From: luszczek at cs.utk.edu (Piotr Luszczek) Date: Tue May 2 06:05:00 2006 Subject: [Numpy-discussion] Why cblas not blas? In-Reply-To: <4457165D.8050508@gmx.de> References: <4457165D.8050508@gmx.de> Message-ID: <200605020900.24814.luszczek@cs.utk.edu> On Tuesday 02 May 2006 03:20, Stephan Tolksdorf wrote: > Hi > > While I'm just trying to get Atlas compiled on my Win system (so far > without success) I'm wondering why NumPy is using the cblas interface > to BLAS instead of the normal Fortran one. If it was using the normal > BLAS interface, one could just use the binaries from the optimized > ACML, MKL, or GOTO libraries. Is this for technical or purely > historical reasons? Stephan, I cannot speak about history but I can mention a few technical reasons. 1. CBLAS handles both C's row-major order as well as Fortran's column-major order of storing matrices. Since numpy supports both, it's important to be efficient in both cases. 2. 
There is a set of wrappers that provide CBLAS interface on top of vendor BLAS. It's here: http://netlib.org/blas/blast-forum/cblas.tgz It uses a few tricks to make sure that nearly all operations are as efficient as vendor BLAS regardless of element order (C or Fortran). 3. CBLAS is supposed to be adopted by vendors. Just like BLAS, which once was only the underlying library for LINPACK and then LAPACK. Now it's considered a standard. So there is nothing "normal" about Fortran BLAS. It's just something that grew standard over time. And in fact Intel has CBLAS in MKL (at least version 8.0): http://www.ualberta.ca/AICT/RESEARCH/LinuxClusters/doc/mkl/mklqref/cblas.htm Piotr PS. Be careful with GOTO BLAS on Intel Dual Core and Itanium. I've seen problems in performance and (!) correctness. Mr. Goto has been notified and hopefully is working on it. From schofield at ftw.at Tue May 2 06:17:01 2006 From: schofield at ftw.at (Ed Schofield) Date: Tue May 2 06:17:01 2006 Subject: [Numpy-discussion] ImportError: No module named Numeric In-Reply-To: <89684A5E33D0BC4CA1CA32E6E6499E7C0101186D@MAIL02.win.vipnet.hr> References: <89684A5E33D0BC4CA1CA32E6E6499E7C0101186D@MAIL02.win.vipnet.hr> Message-ID: <44575C83.1000003@ftw.at> Neven Gorsic wrote: > I try to learn to use NumPy but all examples from numpy.pdf are > unusable: > vector1 = array((1,2,3,4,5)), ... > Can you tell me, where can I find any up-to-date manual/tutorial about > numpy > or any syntax/example? > Yes, for now you can use "from numpy import *" at the top of your file or session; then the examples should work. I also suggest you read the Python tutorial at http://www.python.org/doc/current/tut/ (e.g. Section 6.4 on Packages).
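Ed's two suggestions, spelled out (either style makes the manual's bare array(...) examples run):

```python
# Qualified imports: the recommended style for scripts.
import numpy
vector1 = numpy.array((1, 2, 3, 4, 5))
assert vector1.sum() == 15

# Or import the names you need, so the manual's examples work as-is.
from numpy import array
vector2 = array((1, 2, 3, 4, 5))
assert (vector1 == vector2).all()
```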
-- Ed From cwmoad at gmail.com Tue May 2 06:47:04 2006 From: cwmoad at gmail.com (Charlie Moad) Date: Tue May 2 06:47:04 2006 Subject: [Numpy-discussion] Guide to Numpy book In-Reply-To: References: <3FA6601C-819F-4F15-A670-829FC428F47B@cortechs.net> <4452C145.8050803@geodynamics.org> Message-ID: <6382066a0605020646u6d752d84v1c2711101e108883@mail.gmail.com> On 4/30/06, Vidar Gundersen wrote: > ===== Original message from Luis Armendariz | 29 Apr 2006: > >> What is the newest version of Guide to numpy? The recent one I got is > >> dated at Jan 9 2005 on the cover. > > The one I got yesterday is dated March 15, 2006. > > aren't the updates supposed to be sent out > to customers when available? I was waiting to hear a reply on this, because I am curious about getting updates as well. Our lab's copy reads Jan 20. How often should we expect updates? I am guessing the date variations on the front page are from latex each time the doc is regenerated. Thanks, Charlie From andorxor at gmx.de Tue May 2 06:59:21 2006 From: andorxor at gmx.de (Stephan Tolksdorf) Date: Tue May 2 06:59:21 2006 Subject: [Numpy-discussion] Why cblas not blas? In-Reply-To: <200605020900.24814.luszczek@cs.utk.edu> References: <4457165D.8050508@gmx.de> <200605020900.24814.luszczek@cs.utk.edu> Message-ID: <4457656C.5030605@gmx.de> Hi Piotr, thanks for the informative answer! My worry was that the additional level of indirection of the Netlib cblas wrapper might notably slow down calculations (at least for small matrices). On the other hand, this effect is probably negligible when taking into account all the Python overhead and the fact that NumPy supports both column- and row-major matrices. Although ACML supports a C interface it is not compatible with CBLAS, as far as I know. MKL seems to do better in this regard. Goto is now available as a source distribution, I wonder where this might lead.
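Piotr's point about the two element orders is visible from Python: numpy tracks per-array layout in its flags, and a CBLAS-style interface lets the underlying library consume either layout without a forced copy (a quick sketch):

```python
import numpy as np

c = np.ones((3, 4))            # C (row-major) layout by default
f = np.asfortranarray(c)       # Fortran (column-major) copy

assert c.flags['C_CONTIGUOUS'] and not c.flags['F_CONTIGUOUS']
assert f.flags['F_CONTIGUOUS'] and not f.flags['C_CONTIGUOUS']

# dot() accepts any mix of layouts and produces the same result.
assert np.allclose(np.dot(c, f.T), np.dot(c, c.T))
```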
By the way, I got my Atlas 3.7.11 compilation working on Cygwin by selecting the P4 branch, although it's running on an Opteron processor. Stephan From dr.popovici at gmail.com Tue May 2 07:27:04 2006 From: dr.popovici at gmail.com (Vlad Popovici) Date: Tue May 2 07:27:04 2006 Subject: [Numpy-discussion] bug: matrix.std() and matrix.var() Message-ID: <5a8cce280605020726o24ae05dbwee1df927cccd2b3f@mail.gmail.com> Hi, It seems to me that there is a bug in the matrix code: both methods std() and var() fail with "ValueError: matrices are not aligned" exception. Note that the same methods for array objects work correctly. I am using NumPy version 0.9.7. Example: >>> m = matrix(arange(1,10)).reshape(3,3) >>> m.std() Traceback (most recent call last): File "", line 1, in ? File "/usr/lib/python2.4/site-packages/numpy/core/defmatrix.py", line 149, in __mul__ return N.dot(self, other) ValueError: matrices are not aligned >>> asarray(m).std() 2.7386127875258306 Temporarily, I can use asarray() to get the right values, but this should be corrected. Best wishes, Vlad -- Vlad POPOVICI, PhD. Swiss Institute for Bioinformatics -------------- next part -------------- An HTML attachment was scrubbed... URL: From charlesr.harris at gmail.com Tue May 2 07:53:05 2006 From: charlesr.harris at gmail.com (Charles R Harris) Date: Tue May 2 07:53:05 2006 Subject: [Numpy-discussion] Why cblas not blas? In-Reply-To: <4457656C.5030605@gmx.de> References: <4457165D.8050508@gmx.de> <200605020900.24814.luszczek@cs.utk.edu> <4457656C.5030605@gmx.de> Message-ID: Hi, On 5/2/06, Stephan Tolksdorf wrote: > > Hi Piotr, > > thanks for the informative answer! > > My worry was that the additional level of indirection of the Netlib > cblas wrapper might notably slow down calculations (at least for small > matrices). Atlas *is* a bit slow for small, dense matrices due to its generality. It also produces very large binaries when statically linked. 
For numpy I think this is fine, but if you want high performance with small matrices and need small binaries you will do better rolling your own routines. I think the Goto code for some architectures has been moved into Atlas, but I don't recall which ones offhand. Pearu is probably the expert for those sort of questions. Chuck -------------- next part -------------- An HTML attachment was scrubbed... URL: From kwgoodman at gmail.com Tue May 2 08:35:01 2006 From: kwgoodman at gmail.com (Keith Goodman) Date: Tue May 2 08:35:01 2006 Subject: [Numpy-discussion] numpy putmask() method and a scalar. In-Reply-To: References: <4456A284.90805@noaa.gov> Message-ID: On 5/1/06, Sasha wrote: > Do we really need the putmask method? With fancy indexing the obvious > way to do it is > > >>> a[a>3] = 3 That would be a great example for the NumPy for Matlab Users page. http://www.scipy.org/NumPy_for_Matlab_Users From schofield at ftw.at Tue May 2 09:04:03 2006 From: schofield at ftw.at (Ed Schofield) Date: Tue May 2 09:04:03 2006 Subject: [Numpy-discussion] bug: matrix.std() and matrix.var() In-Reply-To: <5a8cce280605020726o24ae05dbwee1df927cccd2b3f@mail.gmail.com> References: <5a8cce280605020726o24ae05dbwee1df927cccd2b3f@mail.gmail.com> Message-ID: <445783F1.7030102@ftw.at> Vlad Popovici wrote: > Hi, > > It seems to me that there is a bug in the matrix code: both methods > std() and var() fail with > "ValueError: matrices are not aligned" exception. Note that the same > methods for array objects work correctly. > I am using NumPy version 0.9.7. Travis fixed this last week in SVN. But thanks for reporting it! 
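The clamping idiom from the putmask thread above can be sketched as follows; this is a minimal illustration using the modern `import numpy as np` spelling rather than the 2006-era API:

```python
import numpy as np

# Clamp with boolean (fancy) indexing: assign 3 wherever the mask a > 3 is True.
a = np.array([1, 5, 2, 7, 3])
a[a > 3] = 3
print(a)  # [1 3 2 3 3]

# numpy.putmask expresses the same in-place operation as a function call.
b = np.array([1, 5, 2, 7, 3])
np.putmask(b, b > 3, 3)
print(b)  # [1 3 2 3 3]
```

Both forms modify the array in place; the boolean-indexing form is the one suggested as a candidate for the NumPy for Matlab Users page.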
-- Ed From faltet at carabos.com Tue May 2 09:27:10 2006 From: faltet at carabos.com (Francesc Altet) Date: Tue May 2 09:27:10 2006 Subject: [Numpy-discussion] bug: wrong itemsizes between NumPy strings and numarray Message-ID: <200605021826.21114.faltet@carabos.com> Hi, The PyTables test suite has just discovered a subtle bug when doing conversions between NumPy strings and numarray. In principle I'd say that this is a problem with numarray, but as this problem arose when updating numpy to the latest SVN version, I don't know what to think, frankly. The problem can be seen in:

In [1]: import numpy
In [2]: numpy.__version__
Out[2]: '0.9.7.2466'
In [3]: from numarray import strings
In [4]: strings.array(numpy.array(['aa'],'S1')).itemsize()
Out[4]: 1
In [5]: strings.array(numpy.array(['aa'],'S2')).itemsize()
Out[5]: 2
In [6]: strings.array(numpy.array(['aa'],'S3')).itemsize()
Out[6]: 2
In [7]: strings.array(numpy.array(['aa'],'S30')).itemsize()
Out[7]: 2

i.e. the numarray element size out of the conversion is always 2 (i.e. the actual size of the element in the list), even though the original NumPy object can have bigger itemsizes. However, with another NumPy version (dated from 3 weeks ago or so):

In [1]: import numpy
In [2]: numpy.__version__
Out[2]: '0.9.7.2278'
In [3]: from numarray import strings
In [4]: strings.array(numpy.array(['aa'],'S1')).itemsize()
Out[4]: 1
In [5]: strings.array(numpy.array(['aa'],'S2')).itemsize()
Out[5]: 2
In [6]: strings.array(numpy.array(['aa'],'S3')).itemsize()
Out[6]: 3
In [7]: strings.array(numpy.array(['aa'],'S30')).itemsize()
Out[7]: 30

i.e. it works as intended. I report the bug here because I don't know who is the actual culprit. Cheers, -- >0,0< Francesc Altet http://www.carabos.com/ V V Cárabos Coop. V.
Enjoy Data "-" From tgrav at mac.com Tue May 2 09:34:03 2006 From: tgrav at mac.com (Tommy Grav) Date: Tue May 2 09:34:03 2006 Subject: [Numpy-discussion] where function Message-ID: Hi, I have a fits file that I read in with pyfits (which uses numpy). Some of the elements in the data have a 'nan' value. How can I most easily find these elements and replace them with 0.? Cheers Tommy tgrav at mac.com http://homepage.mac.com/tgrav/ "Any intelligent fool can make things bigger, more complex, and more violent. It takes a touch of genius -- and a lot of courage -- to move in the opposite direction" -- Albert Einstein -------------- next part -------------- An HTML attachment was scrubbed... URL: From Chris.Barker at noaa.gov Tue May 2 10:02:10 2006 From: Chris.Barker at noaa.gov (Christopher Barker) Date: Tue May 2 10:02:10 2006 Subject: [Numpy-discussion] numpy putmask() method and a scalar. In-Reply-To: <87wtd5gkq9.fsf@peds-pc311.bsd.uchicago.edu> References: <4456A284.90805@noaa.gov> <87wtd5gkq9.fsf@peds-pc311.bsd.uchicago.edu> Message-ID: <44579074.1020607@noaa.gov> John Hunter wrote: > Sasha> Do we really need the putmask method? With fancy indexing > Sasha> the obvious way to do it is > > >>>> a[a>3] = 3 Duh! I totally forgot about that, despite the fact that that was the one thing I missed when moving from Matlab to Numeric a few years back. > I'm +1 for backwards compatibility with the Numeric API where the cost > of a fix isn't too painful. I agree; however, does Numeric have a putmask() method, or only a putmask() function? The numpy putmask function works fine. Still, this looks like a bug that may well apply to other methods, so it's probably worth fixing. -Chris -- Christopher Barker, Ph.D.
Oceanographer NOAA/OR&R/HAZMAT (206) 526-6959 voice 7600 Sand Point Way NE (206) 526-6329 fax Seattle, WA 98115 (206) 526-6317 main reception Chris.Barker at noaa.gov From tgrav at mac.com Tue May 2 10:12:07 2006 From: tgrav at mac.com (Tommy Grav) Date: Tue May 2 10:12:07 2006 Subject: [Numpy-discussion] where function In-Reply-To: References: Message-ID: <71497875-D8D7-4369-BAA0-9369C48974E9@mac.com> > Thanks I had tried that earlier but it failed, so I checked again and realized that pyfits was set up to work with numarray. Changing that to numpy made it work perfectly :) Cheers Tommy tgrav at mac.com http://homepage.mac.com/tgrav/ "Any intelligent fool can make things bigger, more complex, and more violent. It takes a touch of genius -- and a lot of courage -- to move in the opposite direction" -- Albert Einstein -------------- next part -------------- An HTML attachment was scrubbed... URL: From charlesr.harris at gmail.com Tue May 2 10:37:07 2006 From: charlesr.harris at gmail.com (Charles R Harris) Date: Tue May 2 10:37:07 2006 Subject: [Numpy-discussion] bug: wrong itemsizes between NumPy strings and numarray In-Reply-To: <200605021826.21114.faltet@carabos.com> References: <200605021826.21114.faltet@carabos.com> Message-ID: Francesc, Completely off topic, but are you aware of the lexsort function in numpy and numarray? It is like argsort but takes a list of (vector)keys and performs a stable sort on each key in turn, so for record arrays you can get the effect of sorting on column A, then column B, etc. I thought I would mention it because you seem to use argsort a lot and, well, because I wrote it ;) BTW, thanks for PyTables, it is quite wonderful. Chuck -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From kwgoodman at gmail.com Tue May 2 10:39:07 2006 From: kwgoodman at gmail.com (Keith Goodman) Date: Tue May 2 10:39:07 2006 Subject: [Numpy-discussion] where function In-Reply-To: References: Message-ID: On 5/2/06, Tommy Grav wrote: > I have a fits file that I read in with pyfits (which uses numpy). > Some of the elements in the data have a 'nan' value. How can > I most easily find these elements and replace them with 0.? I learned how to do this a few minutes ago from an example on this list. The example was a[a>3] = 3. I took a guess at isnan since that is what it is called in Octave.

>> x = asmatrix(random.uniform(0,1,(3,3)))
>> x
matrix([[ 0.6183926 , 0.00306816, 0.36471066],
        [ 0.24329805, 0.44638449, 0.63253303],
        [ 0.86444777, 0.61926557, 0.82174768]])
>> x[0,1] = nan
>> x
matrix([[ 0.6183926 , nan, 0.36471066],
        [ 0.24329805, 0.44638449, 0.63253303],
        [ 0.86444777, 0.61926557, 0.82174768]])
>> x[isnan(x)] = 0
>> x
matrix([[ 0.6183926 , 0. , 0.36471066],
        [ 0.24329805, 0.44638449, 0.63253303],
        [ 0.86444777, 0.61926557, 0.82174768]])

From faltet at carabos.com Tue May 2 11:29:05 2006 From: faltet at carabos.com (Francesc Altet) Date: Tue May 2 11:29:05 2006 Subject: [Numpy-discussion] lexsort In-Reply-To: References: <200605021826.21114.faltet@carabos.com> Message-ID: <200605022027.59369.faltet@carabos.com> On Tuesday 02 May 2006 19:36, Charles R Harris wrote: > Francesc, > > Completely off topic, but are you aware of the lexsort function in numpy > and numarray? It is like argsort but takes a list of (vector)keys and > performs a stable sort on each key in turn, so for record arrays you can > get the effect of sorting on column A, then column B, etc. I thought I > would mention it because you seem to use argsort a lot and, well, because I > wrote it ;) Thanks for pointing this out. In fact, I had no idea of this capability in numarray (nor numpy).
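The multi-key sort Charles describes can be sketched with a small example (the name data here is hypothetical, and the snippet uses the modern `numpy` API; note that `lexsort` treats its *last* key as the primary sort key):

```python
import numpy as np

surnames = np.array(['Hertz', 'Galilei', 'Hertz'])
first_names = np.array(['Heinrich', 'Galileo', 'Gustav'])

# The last key passed to lexsort is the primary one, so this sorts by
# surname first and breaks ties on first name (a stable sort on each key).
order = np.lexsort((first_names, surnames))
print([f"{surnames[i]}, {first_names[i]}" for i in order])
# ['Galilei, Galileo', 'Hertz, Gustav', 'Hertz, Heinrich']
```

This gives the "sort on column A, then column B" effect mentioned above, applied to parallel arrays instead of a record array.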
I'll have to look more carefully into this to fully realize the kind of things that can be done with it. But it seems very promising anyway :-) > BTW, thanks for PyTables, it is quite wonderful. You are welcome! -- >0,0< Francesc Altet http://www.carabos.com/ V V Cárabos Coop. V. Enjoy Data "-" From kwgoodman at gmail.com Tue May 2 11:47:05 2006 From: kwgoodman at gmail.com (Keith Goodman) Date: Tue May 2 11:47:05 2006 Subject: [Numpy-discussion] bug: matrix.std() and matrix.var() In-Reply-To: <5a8cce280605020726o24ae05dbwee1df927cccd2b3f@mail.gmail.com> References: <5a8cce280605020726o24ae05dbwee1df927cccd2b3f@mail.gmail.com> Message-ID: On 5/2/06, Vlad Popovici wrote: > Hi, > > It seems to me that there is a bug in the matrix code: both methods std() > and var() fail with > "ValueError: matrices are not aligned" exception. Note that the same methods > for array objects work correctly. > I am using NumPy version 0.9.7. > > Example: > >>> m = matrix(arange(1,10)).reshape(3,3) > >>> m.std() > Traceback (most recent call last): > File "", line 1, in ? > File > "/usr/lib/python2.4/site-packages/numpy/core/defmatrix.py", > line 149, in __mul__ > return N.dot(self, other) > ValueError: matrices are not aligned > >>> asarray(m).std() > 2.7386127875258306 That might have been fixed recently in SVN. Have a look at this thread: http://sourceforge.net/mailarchive/forum.php?thread_id=10254986&forum_id=4890

>> m = matrix(arange(1,10)).reshape(3,3)
>> m.std()
matrix([[ 2.73861279]])
>> m.var()
matrix([[ 7.5]])
>> numpy.__version__
'0.9.7.2435'

From fperez.net at gmail.com Tue May 2 12:23:05 2006 From: fperez.net at gmail.com (Fernando Perez) Date: Tue May 2 12:23:05 2006 Subject: [Numpy-discussion] Why cblas not blas? In-Reply-To: <200605020900.24814.luszczek@cs.utk.edu> References: <4457165D.8050508@gmx.de> <200605020900.24814.luszczek@cs.utk.edu> Message-ID: Hi Piotr, On 5/2/06, Piotr Luszczek wrote: > PS. Be careful with GOTO BLAS on Intel Dual Core and Itanium.
> I've seen problems in performance and (!) correctness. Mr. Goto > has been notified and hopefully is working on it. Quick question: a colleague from ORNL to whom I forwarded this information asked me whether the problem also appears on single CPUs running multithreaded code. Do you know? I think he uses both AMD and Itanium chips, but I'm not 100% certain. Cheers, f From luszczek at cs.utk.edu Tue May 2 12:36:11 2006 From: luszczek at cs.utk.edu (Piotr Luszczek) Date: Tue May 2 12:36:11 2006 Subject: [Numpy-discussion] Why cblas not blas? In-Reply-To: References: <4457165D.8050508@gmx.de> <200605020900.24814.luszczek@cs.utk.edu> Message-ID: <200605021536.06303.luszczek@cs.utk.edu> On Tuesday 02 May 2006 14:22, Fernando Perez wrote: > Hi Piotr, > > On 5/2/06, Piotr Luszczek wrote: > > PS. Be careful with GOTO BLAS on Intel Dual Core and Itanium. > > I've seen problems in performance and (!) correctness. Mr. Goto > > has been notified and hopefully is working on it. > > Quick question: a colleague from ORNL to whom I forwarded this > information asked me whether the problem also appears on single CPUs > running multithreaded code. Do you know? I think he uses both AMD > and Itanium chips, but I'm not 100% certain. The problems we had were in multithreaded code on Dual Core Intel, and on Itanium the problems were with single-threaded code. I don't know the details of the problems with AMD CPUs; they were reported on the GOTO BLAS mailing list. Piotr From fullung at gmail.com Tue May 2 16:28:01 2006 From: fullung at gmail.com (Albert Strasheim) Date: Tue May 2 16:28:01 2006 Subject: [Numpy-discussion] Crash on failed memory allocation Message-ID: <00d001c66e3f$ef293cd0$0a84a8c0@dsp.sun.ac.za> Hello all, Stefan van der Walt and I have discovered two bugs when working with large blocks of memory and array descriptors.
Example code that causes problems:

import numpy as N
print N.__version__

x = []
i = 20000
j = 10000
names = ['a', 'b']
formats = ['f8', 'f8']
descr = N.dtype({'names' : names, 'formats' : formats})
for y in range(i):
    x.append(N.empty((j,), dtype=descr)['a'])
N.asarray(x)

With i and j large and a big descriptor, you run out of process address space (typically 2 GB?) during the list append. This raises a MemoryError. However, with a slightly smaller list, you run out of memory in asarray. This causes a segfault on Linux. This problem can also manifest itself as a TypeError. On Windows with r2462 I got this message:

Traceback (most recent call last):
  File "numpybug.py", line 7, in ?
    N.asarray(x)
  File "C:\Python24\Lib\site-packages\numpy\core\numeric.py", line 116, in asarray
    return array(a, dtype, copy=False, order=order)
TypeError: a float is required

Stefan also discovered the following bug:

import numpy as N
descr = N.dtype({'names' : ['a'], 'formats' : ['foo']}, align=1)

Notice the invalid typestring. Combined with align=1 this seems to be guaranteed to crash.
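For contrast with the crashing snippet, here is a sketch of the same structured-dtype construction with a valid format string (field names and formats chosen to mirror Albert's example); in current NumPy a bogus format such as 'foo' raises a clean TypeError rather than crashing:

```python
import numpy as np

# Well-formed structured dtype: two 8-byte float fields, as in the report.
descr = np.dtype({'names': ['a', 'b'], 'formats': ['f8', 'f8']})
arr = np.empty((4,), dtype=descr)
arr['a'] = 1.0
arr['b'] = 2.0
print(descr.itemsize)  # 16: two f8 fields of 8 bytes each

# An invalid format string should fail cleanly, not segfault.
try:
    np.dtype({'names': ['a'], 'formats': ['foo']}, align=True)
except TypeError as exc:
    print('rejected:', exc)
```

Indexing a structured array by field name, as the loop in the bug report does with `['a']`, returns a view of that single field.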
Regards, Albert From ted.horst at earthlink.net Tue May 2 20:11:02 2006 From: ted.horst at earthlink.net (Ted Horst) Date: Tue May 2 20:11:02 2006 Subject: [Numpy-discussion] scalarmath fails to build Message-ID: <43873FB5-0060-4893-B398-181A62755B1E@earthlink.net> Here is a patch:

--- numpy/core/src/scalarmathmodule.c.src	(revision 2471)
+++ numpy/core/src/scalarmathmodule.c.src	(working copy)
@@ -597,7 +597,12 @@
 {
     PyObject *ret;
     @name@ arg1, arg2, out;
+#if @cmplx@
+    @otyp@ out1;
+    out1.real = out.imag = 0;
+#else
     @otyp@ out1=0;
+#endif
     int retstatus;
     switch(_@name@_convert2_to_ctypes(a, &arg1, b, &arg2)) {

From charlesr.harris at gmail.com Tue May 2 21:12:05 2006 From: charlesr.harris at gmail.com (Charles R Harris) Date: Tue May 2 21:12:05 2006 Subject: [Numpy-discussion] lexsort In-Reply-To: <200605022027.59369.faltet@carabos.com> References: <200605021826.21114.faltet@carabos.com> <200605022027.59369.faltet@carabos.com> Message-ID: Hi, On 5/2/06, Francesc Altet wrote: > > On Tuesday 02 May 2006 19:36, Charles R Harris wrote: > > Francesc, > > > > Completely off topic, but are you aware of the lexsort function in numpy > > and numarray? It is like argsort but takes a list of (vector)keys and > > performs a stable sort on each key in turn, so for record arrays you can > > get the effect of sorting on column A, then column B, etc. I thought I > > would mention it because you seem to use argsort a lot and, well, > because I > > wrote it ;) > > Thanks for pointing this out. In fact, I had no idea of this > capability in numarray (numpy neither). As an example:

In [21]: a
Out[21]:
array([[0, 1],
       [1, 0],
       [1, 1],
       [0, 1],
       [1, 0]])

In [22]: a[lexsort((a[:,1],a[:,0]))]
Out[22]:
array([[0, 1],
       [0, 1],
       [1, 0],
       [1, 0],
       [1, 1]])

Hmm, I notice that lexsort requires a tuple and won't accept a list.
I wonder if there is a good reason for that. Travis? Chuck -------------- next part -------------- An HTML attachment was scrubbed... URL: From arnd.baecker at web.de Tue May 2 23:51:04 2006 From: arnd.baecker at web.de (Arnd Baecker) Date: Tue May 2 23:51:04 2006 Subject: [Numpy-discussion] scalarmath fails to build In-Reply-To: <43873FB5-0060-4893-B398-181A62755B1E@earthlink.net> References: <43873FB5-0060-4893-B398-181A62755B1E@earthlink.net> Message-ID: On Tue, 2 May 2006, Ted Horst wrote:

> Here is a patch:
>
> --- numpy/core/src/scalarmathmodule.c.src	(revision 2471)
> +++ numpy/core/src/scalarmathmodule.c.src	(working copy)
> @@ -597,7 +597,12 @@
>  {
>      PyObject *ret;
>      @name@ arg1, arg2, out;
> +#if @cmplx@
> +    @otyp@ out1;
> +    out1.real = out.imag = 0;
> +#else
>      @otyp@ out1=0;
> +#endif
>      int retstatus;
>      switch(_@name@_convert2_to_ctypes(a, &arg1, b, &arg2)) {

Thanks - applied: http://projects.scipy.org/scipy/numpy/changeset/2472 (Travis, I hope I got that right ;-) After this numpy compiles for me, but on testing (64-bit Opteron) I get:

In [1]: import numpy
from lib import * -> failed: 'module' object has no attribute 'nmath'
import linalg -> failed: 'module' object has no attribute 'nmath'

In numpy/lib/__init__.py the construct __all__ = ['nmath','math'] is used, but `nmath` is not mentioned explicitly before.

In [2]: numpy.__version__
Out[2]: '0.9.7.2471'

In [3]: numpy.test(10)
Found 5 tests for numpy.distutils.misc_util
Found 4 tests for numpy.lib.getlimits
Found 30 tests for numpy.core.numerictypes
Found 13 tests for numpy.core.umath
Found 8 tests for numpy.lib.arraysetops
Warning: FAILURE importing tests for /home/abaecker/BUILDS3/BuildDir/inst_numpy/lib/python2.4/site-packages/numpy/lib/tests/test_type_check.py:7: AttributeError: 'module' object has no attribute 'nmath' (in ?)
Found 93 tests for numpy.core.multiarray
Found 3 tests for numpy.dft.helper
Found 36 tests for numpy.core.ma
Found 2 tests for numpy.core.oldnumeric
Warning: FAILURE importing tests for /home/abaecker/BUILDS3/BuildDir/inst_numpy/lib/python2.4/site-packages/numpy/lib/tests/test_twodim_base.py:7: ImportError: cannot import name rot90 (in ?)
Found 8 tests for numpy.core.defmatrix
Found 1 tests for numpy.lib.ufunclike
Warning: FAILURE importing tests for /home/abaecker/BUILDS3/BuildDir/inst_numpy/lib/python2.4/site-packages/numpy/lib/tests/test_function_base.py:7: AttributeError: 'module' object has no attribute 'nmath' (in ?)
Found 1 tests for numpy.lib.polynomial
Found 6 tests for numpy.core.records
Found 19 tests for numpy.core.numeric
Warning: FAILURE importing tests for /home/abaecker/BUILDS3/BuildDir/inst_numpy/lib/python2.4/site-packages/numpy/lib/tests/test_index_tricks.py:4: ImportError: cannot import name r_ (in ?)
Warning: FAILURE importing tests for /home/abaecker/BUILDS3/BuildDir/inst_numpy/lib/python2.4/site-packages/numpy/lib/tests/test_shape_base.py:5: AttributeError: 'module' object has no attribute 'nmath' (in ?)
Found 0 tests for __main__
......................................................E.................................................................................................................................................E...E..............................
======================================================================
ERROR: check_manyways (numpy.lib.tests.test_arraysetops.test_aso)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/home/abaecker/BUILDS3/BuildDir/inst_numpy/lib/python2.4/site-packages/numpy/lib/tests/test_arraysetops.py", line 128, in check_manyways
    a = numpy.fix( nItem / 10 * numpy.random.random( nItem ) )
AttributeError: 'module' object has no attribute 'fix'

======================================================================
ERROR: check_basic (numpy.core.tests.test_defmatrix.test_algebra)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/home/abaecker/BUILDS3/BuildDir/inst_numpy/lib/python2.4/site-packages/numpy/core/tests/test_defmatrix.py", line 115, in check_basic
    import numpy.linalg as linalg
  File "/home/abaecker/BUILDS3/BuildDir/inst_numpy/lib/python2.4/site-packages/numpy/linalg/__init__.py", line 4, in ?
    from linalg import *
  File "/home/abaecker/BUILDS3/BuildDir/inst_numpy/lib/python2.4/site-packages/numpy/linalg/linalg.py", line 19, in ?
    from numpy.lib import *
AttributeError: 'module' object has no attribute 'nmath'

======================================================================
ERROR: check_basic (numpy.core.tests.test_defmatrix.test_properties)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/home/abaecker/BUILDS3/BuildDir/inst_numpy/lib/python2.4/site-packages/numpy/core/tests/test_defmatrix.py", line 44, in check_basic
    import numpy.linalg as linalg
  File "/home/abaecker/BUILDS3/BuildDir/inst_numpy/lib/python2.4/site-packages/numpy/linalg/__init__.py", line 4, in ?
    from linalg import *
  File "/home/abaecker/BUILDS3/BuildDir/inst_numpy/lib/python2.4/site-packages/numpy/linalg/linalg.py", line 19, in ?
    from numpy.lib import *
AttributeError: 'module' object has no attribute 'nmath'

----------------------------------------------------------------------
Ran 229 tests in 0.440s

FAILED (errors=3) Out[3]: Best, Arnd From arnd.baecker at web.de Wed May 3 00:03:14 2006 From: arnd.baecker at web.de (Arnd Baecker) Date: Wed May 3 00:03:14 2006 Subject: [Numpy-discussion] scalarmath fails to build In-Reply-To: References: <43873FB5-0060-4893-B398-181A62755B1E@earthlink.net> Message-ID: On Wed, 3 May 2006, Arnd Baecker wrote: > In [1]: import numpy > from lib import * -> failed: 'module' object has no attribute 'nmath' > import linalg -> failed: 'module' object has no attribute 'nmath' > > In numpy/lib/__init__.py the construct > __all__ = ['nmath','math'] > is used, but `nmath` is not mentioned explicitly before. By looking in the trac history I found the simple reason:

Index: numpy/lib/__init__.py
===================================================================
--- numpy/lib/__init__.py	(revision 2472)
+++ numpy/lib/__init__.py	(working copy)
@@ -18,7 +18,7 @@
 from arraysetops import *
 import math
-__all__ = ['nmath','math']
+__all__ = ['emath','math']
 __all__ += type_check.__all__
 __all__ += index_tricks.__all__
 __all__ += function_base.__all__

Patch is applied, http://projects.scipy.org/scipy/numpy/changeset/2473 So now numpy builds again and no errors on test. Back to real work ;-). Arnd From a.u.r.e.l.i.a.n at gmx.net Wed May 3 01:16:01 2006 From: a.u.r.e.l.i.a.n at gmx.net (Johannes Loehnert) Date: Wed May 3 01:16:01 2006 Subject: [Numpy-discussion] Test fails for rev. 2473 Message-ID: <200605031014.30834.a.u.r.e.l.i.a.n@gmx.net> Hi, with the current svn version, numpy.test(10,10) gives the following: [...everything ok...] check_scalar (numpy.lib.tests.test_function_base.test_vectorize) ... ok check_simple (numpy.lib.tests.test_function_base.test_vectorize) ...
ok ====================================================================== ERROR: check_doctests (numpy.lib.tests.test_ufunclike.test_docs) ---------------------------------------------------------------------- Traceback (most recent call last): File "/home/jloehnert/python/mylibs/numpy/lib/tests/test_ufunclike.py", line 59, in check_doctests def check_doctests(self): return self.rundocs() File "/scratch/jloehnert/python-svn/lib/python2.3/site-packages/numpy/testing/numpytest.py", line 185, in rundocs tests = doctest.DocTestFinder().find(m) AttributeError: 'module' object has no attribute 'DocTestFinder' ====================================================================== ERROR: check_doctests (numpy.lib.tests.test_polynomial.test_docs) ---------------------------------------------------------------------- Traceback (most recent call last): File "/home/jloehnert/python/mylibs/numpy/lib/tests/test_polynomial.py", line 79, in check_doctests def check_doctests(self): return self.rundocs() File "/scratch/jloehnert/python-svn/lib/python2.3/site-packages/numpy/testing/numpytest.py", line 185, in rundocs tests = doctest.DocTestFinder().find(m) AttributeError: 'module' object has no attribute 'DocTestFinder' ---------------------------------------------------------------------- Ran 364 tests in 1.747s FAILED (errors=2) Out[5]: In [6]: numpy.__version__ Out[6]: '0.9.7.2473' Johannes From nwagner at iam.uni-stuttgart.de Wed May 3 01:20:59 2006 From: nwagner at iam.uni-stuttgart.de (Nils Wagner) Date: Wed May 3 01:20:59 2006 Subject: [Numpy-discussion] Test fails for rev. 2473 In-Reply-To: <200605031014.30834.a.u.r.e.l.i.a.n@gmx.net> References: <200605031014.30834.a.u.r.e.l.i.a.n@gmx.net> Message-ID: <445867A6.3060304@iam.uni-stuttgart.de> Johannes Loehnert wrote: > Hi, > > with the current svn version, numpy.test(10,10) gives the following: > > [...everything ok...] > check_scalar (numpy.lib.tests.test_function_base.test_vectorize) ... 
ok > check_simple (numpy.lib.tests.test_function_base.test_vectorize) ... ok > > ====================================================================== > ERROR: check_doctests (numpy.lib.tests.test_ufunclike.test_docs) > ---------------------------------------------------------------------- > Traceback (most recent call last): > File "/home/jloehnert/python/mylibs/numpy/lib/tests/test_ufunclike.py", line > 59, in check_doctests > def check_doctests(self): return self.rundocs() > File > "/scratch/jloehnert/python-svn/lib/python2.3/site-packages/numpy/testing/numpytest.py", > line 185, in rundocs > tests = doctest.DocTestFinder().find(m) > AttributeError: 'module' object has no attribute 'DocTestFinder' > > ====================================================================== > ERROR: check_doctests (numpy.lib.tests.test_polynomial.test_docs) > ---------------------------------------------------------------------- > Traceback (most recent call last): > File "/home/jloehnert/python/mylibs/numpy/lib/tests/test_polynomial.py", > line 79, in check_doctests > def check_doctests(self): return self.rundocs() > File > "/scratch/jloehnert/python-svn/lib/python2.3/site-packages/numpy/testing/numpytest.py", > line 185, in rundocs > tests = doctest.DocTestFinder().find(m) > AttributeError: 'module' object has no attribute 'DocTestFinder' > > ---------------------------------------------------------------------- > Ran 364 tests in 1.747s > > FAILED (errors=2) > Out[5]: > > In [6]: numpy.__version__ > Out[6]: '0.9.7.2473' > > > Johannes > > > ------------------------------------------------------- > Using Tomcat but need to do more? Need to support web services, security? 
> Get stuff done quickly with pre-integrated technology to make your job easier > Download IBM WebSphere Application Server v.1.0.1 based on Apache Geronimo > http://sel.as-us.falkag.net/sel?cmd=lnk&kid=120709&bid=263057&dat=121642 > _______________________________________________ > Numpy-discussion mailing list > Numpy-discussion at lists.sourceforge.net > https://lists.sourceforge.net/lists/listinfo/numpy-discussion > I cannot reproduce these errors. 32bit Ran 364 tests in 2.457s OK >>> numpy.__version__ '0.9.7.2473' 64bit Ran 364 tests in 0.858s OK Nils From arnd.baecker at web.de Wed May 3 01:35:04 2006 From: arnd.baecker at web.de (Arnd Baecker) Date: Wed May 3 01:35:04 2006 Subject: [Numpy-discussion] Scalar math module is ready for testing In-Reply-To: <4451C076.40608@ieee.org> References: <4451C076.40608@ieee.org> Message-ID: On Fri, 28 Apr 2006, Travis Oliphant wrote: > > The scalar math module is complete and ready to be tested. It should > speed up code that relies heavily on scalar arithmetic by by-passing the > ufunc machinery. > > It needs lots of testing to be sure that it is doing the "right" > thing. 
To enable scalarmath you need to > > import numpy.core.scalarmath After numpy compiles on the 64-bit machine I tried:

import numpy
import numpy.core.scalarmath
numpy.test(1)

The only failure is

======================================================================
FAIL: check_basic (numpy.lib.tests.test_function_base.test_prod)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/home/abaecker/BUILDS3/BuildDir/inst_numpy/lib/python2.4/site-packages/numpy/lib/tests/test_function_base.py", line 149, in check_basic
    assert_equal(prod(a),26400)
  File "/home/abaecker/BUILDS3/BuildDir/inst_numpy/lib/python2.4/site-packages/numpy/testing/utils.py", line 128, in assert_equal
    assert desired == actual, msg
AssertionError: Items are not equal:
DESIRED: 26400
ACTUAL: 26400L

----------------------------------------------------------------------
Ran 363 tests in 0.246s

Not sure if this is more a bug in assert_equal? Testing scipy with `import numpy.core.scalarmath` before leads to quite a few errors, many of them related to sparse, see below for an excerpt, and a couple of others.
Best, Arnd ====================================================================== ERROR: check_eye (scipy.sparse.tests.test_sparse.test_construct_utils) ---------------------------------------------------------------------- Traceback (most recent call last): File "/home/abaecker/BUILDS3/BuildDir/inst_numpy/lib/python2.4/site-packages/scipy/sparse/tests/test_sparse.py", line 626, in check_eye a = speye(2, 3 ) File "/home/abaecker/BUILDS3/BuildDir/inst_numpy/lib/python2.4/site-packages/scipy/sparse/sparse.py", line 2810, in speye return spdiags(diags, k, n, m) File "/home/abaecker/BUILDS3/BuildDir/inst_numpy/lib/python2.4/site-packages/scipy/sparse/sparse.py", line 2792, in spdiags return csc_matrix((a, rowa, ptra), dims=(M, N)) File "/home/abaecker/BUILDS3/BuildDir/inst_numpy/lib/python2.4/site-packages/scipy/sparse/sparse.py", line 574, in __init__ self._check() File "/home/abaecker/BUILDS3/BuildDir/inst_numpy/lib/python2.4/site-packages/scipy/sparse/sparse.py", line 590, in _check raise ValueError, "nzmax must not be less than nnz" ValueError: nzmax must not be less than nnz ====================================================================== ERROR: check_identity (scipy.sparse.tests.test_sparse.test_construct_utils) ---------------------------------------------------------------------- Traceback (most recent call last): File "/home/abaecker/BUILDS3/BuildDir/inst_numpy/lib/python2.4/site-packages/scipy/sparse/tests/test_sparse.py", line 621, in check_identity a = spidentity(3) File "/home/abaecker/BUILDS3/BuildDir/inst_numpy/lib/python2.4/site-packages/scipy/sparse/sparse.py", line 2800, in spidentity return spdiags( diags, 0, n, n ) File "/home/abaecker/BUILDS3/BuildDir/inst_numpy/lib/python2.4/site-packages/scipy/sparse/sparse.py", line 2792, in spdiags return csc_matrix((a, rowa, ptra), dims=(M, N)) File "/home/abaecker/BUILDS3/BuildDir/inst_numpy/lib/python2.4/site-packages/scipy/sparse/sparse.py", line 574, in __init__ self._check() File 
"/home/abaecker/BUILDS3/BuildDir/inst_numpy/lib/python2.4/site-packages/scipy/sparse/sparse.py", line 590, in _check raise ValueError, "nzmax must not be less than nnz" ValueError: nzmax must not be less than nnz ====================================================================== ERROR: check_add (scipy.sparse.tests.test_sparse.test_csc) ---------------------------------------------------------------------- Traceback (most recent call last): File "/home/abaecker/BUILDS3/BuildDir/inst_numpy/lib/python2.4/site-packages/scipy/sparse/tests/test_sparse.py", line 33, in setUp self.datsp = self.spmatrix(self.dat) File "/home/abaecker/BUILDS3/BuildDir/inst_numpy/lib/python2.4/site-packages/scipy/sparse/sparse.py", line 574, in __init__ self._check() File "/home/abaecker/BUILDS3/BuildDir/inst_numpy/lib/python2.4/site-packages/scipy/sparse/sparse.py", line 590, in _check raise ValueError, "nzmax must not be less than nnz" ValueError: nzmax must not be less than nnz ====================================================================== ERROR: Check whether adding a dense matrix to a sparse matrix works ---------------------------------------------------------------------- Traceback (most recent call last): File "/home/abaecker/BUILDS3/BuildDir/inst_numpy/lib/python2.4/site-packages/scipy/sparse/tests/test_sparse.py", line 33, in setUp self.datsp = self.spmatrix(self.dat) File "/home/abaecker/BUILDS3/BuildDir/inst_numpy/lib/python2.4/site-packages/scipy/sparse/sparse.py", line 574, in __init__ self._check() File "/home/abaecker/BUILDS3/BuildDir/inst_numpy/lib/python2.4/site-packages/scipy/sparse/sparse.py", line 590, in _check raise ValueError, "nzmax must not be less than nnz" ValueError: nzmax must not be less than nnz ====================================================================== ERROR: Check whether the copy=True and copy=False keywords work ---------------------------------------------------------------------- Traceback (most recent call last): File 
"/home/abaecker/BUILDS3/BuildDir/inst_numpy/lib/python2.4/site-packages/scipy/sparse/tests/test_sparse.py", line 33, in setUp self.datsp = self.spmatrix(self.dat) File "/home/abaecker/BUILDS3/BuildDir/inst_numpy/lib/python2.4/site-packages/scipy/sparse/sparse.py", line 574, in __init__ self._check() File "/home/abaecker/BUILDS3/BuildDir/inst_numpy/lib/python2.4/site-packages/scipy/sparse/sparse.py", line 590, in _check raise ValueError, "nzmax must not be less than nnz" ValueError: nzmax must not be less than nnz [....] ====================================================================== ERROR: Solve: single precision ---------------------------------------------------------------------- Traceback (most recent call last): File "/home/abaecker/BUILDS3/BuildDir/inst_numpy/lib/python2.4/site-packages/scipy/linsolve/umfpack/tests/test_umfpack.py", line 67, in setUp self.a = spdiags([[1, 2, 3, 4, 5], [6, 5, 8, 9, 10]], [0, 1], 5, 5) File "/home/abaecker/BUILDS3/BuildDir/inst_numpy/lib/python2.4/site-packages/scipy/sparse/sparse.py", line 2792, in spdiags return csc_matrix((a, rowa, ptra), dims=(M, N)) File "/home/abaecker/BUILDS3/BuildDir/inst_numpy/lib/python2.4/site-packages/scipy/sparse/sparse.py", line 574, in __init__ self._check() File "/home/abaecker/BUILDS3/BuildDir/inst_numpy/lib/python2.4/site-packages/scipy/sparse/sparse.py", line 590, in _check raise ValueError, "nzmax must not be less than nnz" ValueError: nzmax must not be less than nnz ====================================================================== ERROR: check_exact (scipy.stats.tests.test_morestats.test_ansari) ---------------------------------------------------------------------- Traceback (most recent call last): File "/home/abaecker/BUILDS3/BuildDir/inst_numpy/lib/python2.4/site-packages/scipy/stats/tests/test_morestats.py", line 79, in check_exact W,pval = stats.ansari([1,2,3,4],[15,5,20,8,10,12]) File 
"/home/abaecker/BUILDS3/BuildDir/inst_numpy/lib/python2.4/site-packages/scipy/stats/morestats.py", line 591, in ansari pval = 2.0*sum(a1[:cind+1])/total TypeError: unsupported operand type(s) for *: 'float' and 'float32scalar' ====================================================================== FAIL: check_simple_complex (scipy.linalg.tests.test_decomp.test_eigvals) ---------------------------------------------------------------------- Traceback (most recent call last): File "/home/abaecker/BUILDS3/BuildDir/inst_numpy/lib/python2.4/site-packages/scipy/linalg/tests/test_decomp.py", line 53, in check_simple_complex assert_array_almost_equal(w,exact_w) File "/home/abaecker/BUILDS3/BuildDir/inst_numpy/lib/python2.4/site-packages/numpy/testing/utils.py", line 231, in assert_array_almost_equal assert cond,\ AssertionError: Arrays are not almost equal (mismatch 66.6666666667%): Array 1: [ 9.2983779e+00 +6.5630282e-01j -5.5210285e-16 -1.7300711e-16j -2.9837791e-01 +3.4369718e-01j] Array 2: [ 1.+0.0705825j 0.+0.j 1.-1.1518855j] ====================================================================== FAIL: check_definition (scipy.fftpack.tests.test_basic.test_ifft) ---------------------------------------------------------------------- Traceback (most recent call last): File "/home/abaecker/BUILDS3/BuildDir/inst_numpy/lib/python2.4/site-packages/scipy/fftpack/tests/test_basic.py", line 183, in check_definition assert_array_almost_equal(y,y1) File "/home/abaecker/BUILDS3/BuildDir/inst_numpy/lib/python2.4/site-packages/numpy/testing/utils.py", line 231, in assert_array_almost_equal assert cond,\ AssertionError: Arrays are not almost equal (mismatch 100.0%): Array 1: [ 2.5 +0.375j 0.0883883+0.0883883j -0.125 -0.5j 0.0883883-0.0883883j -0.5 -0.375j -0.0883883-0.0... 
Array 2: [ 1.+0.15j 1.+1.j 1.+4.j 1.-1.j 1.+0.75j 1.+1.j 1.-0.5714286j 1.-1.j ] ====================================================================== FAIL: check_definition (scipy.fftpack.tests.test_basic.test_irfft) ---------------------------------------------------------------------- Traceback (most recent call last): File "/home/abaecker/BUILDS3/BuildDir/inst_numpy/lib/python2.4/site-packages/scipy/fftpack/tests/test_basic.py", line 340, in check_definition assert_array_almost_equal(y,y1) File "/home/abaecker/BUILDS3/BuildDir/inst_numpy/lib/python2.4/site-packages/numpy/testing/utils.py", line 231, in assert_array_almost_equal assert cond,\ AssertionError: Arrays are not almost equal (mismatch 100.0%): Array 1: [ 2.625 -1.6856602 -0.375 -1.1856602 0.625 0.4356602 -0.375 0.9356602] Array 2: [ 1. 1. 1. 1. 1. 1. 1. 1.] ====================================================================== FAIL: check_h1vp (scipy.special.tests.test_basic.test_h1vp) ---------------------------------------------------------------------- Traceback (most recent call last): File "/home/abaecker/BUILDS3/BuildDir/inst_numpy/lib/python2.4/site-packages/scipy/special/tests/test_basic.py", line 1118, in check_h1vp assert_almost_equal(h1,h1real,8) File "/home/abaecker/BUILDS3/BuildDir/inst_numpy/lib/python2.4/site-packages/numpy/testing/utils.py", line 148, in assert_almost_equal assert round(abs(desired - actual),decimal) == 0, msg AssertionError: Items are not equal: DESIRED: (0.49812630170362004+63.055272295669909j) ACTUAL: (1+126.58490844594492j) ====================================================================== FAIL: check_h2vp (scipy.special.tests.test_basic.test_h2vp) ---------------------------------------------------------------------- Traceback (most recent call last): File "/home/abaecker/BUILDS3/BuildDir/inst_numpy/lib/python2.4/site-packages/scipy/special/tests/test_basic.py", line 1125, in check_h2vp assert_almost_equal(h2,h2real,8) File 
"/home/abaecker/BUILDS3/BuildDir/inst_numpy/lib/python2.4/site-packages/numpy/testing/utils.py", line 148, in assert_almost_equal assert round(abs(desired - actual),decimal) == 0, msg AssertionError: Items are not equal: DESIRED: (0.49812630170362004-63.055272295669909j) ACTUAL: (1-126.58490844594492j) ====================================================================== FAIL: check_nils (scipy.linalg.tests.test_matfuncs.test_signm) ---------------------------------------------------------------------- Traceback (most recent call last): File "/home/abaecker/BUILDS3/BuildDir/inst_numpy/lib/python2.4/site-packages/scipy/linalg/tests/test_matfuncs.py", line 42, in check_nils assert_array_almost_equal(r,cr) File "/home/abaecker/BUILDS3/BuildDir/inst_numpy/lib/python2.4/site-packages/numpy/testing/utils.py", line 231, in assert_array_almost_equal assert cond,\ AssertionError: Arrays are not almost equal (mismatch 100.0%): Array 1: [[ 20.728 -6.576 29.592 32.88 -6.576] [ -5.808 2.936 -8.712 -9.68 1.936] [ -6.24 2.08 -8.36 -10.4 ... Array 2: [[ 11.9493333 -2.2453333 15.3173333 21.6533333 -2.2453333] [ -3.8426667 0.4986667 -4.5906667 -7.1866667 0.498... ====================================================================== FAIL: check_bad (scipy.linalg.tests.test_matfuncs.test_sqrtm) ---------------------------------------------------------------------- Traceback (most recent call last): File "/home/abaecker/BUILDS3/BuildDir/inst_numpy/lib/python2.4/site-packages/scipy/linalg/tests/test_matfuncs.py", line 99, in check_bad assert_array_almost_equal(dot(esa,esa),a) File "/home/abaecker/BUILDS3/BuildDir/inst_numpy/lib/python2.4/site-packages/numpy/testing/utils.py", line 231, in assert_array_almost_equal assert cond,\ AssertionError: Arrays are not almost equal (mismatch 100.0%): Array 1: [[ nan +nanj nan +n... Array 2: [[ 1. 0. 0. 1. ] [ 0. 0.03125 0. 0. ] [ 0. 0. 0.03125 0. ] [ ... 
---------------------------------------------------------------------- Ran 1119 tests in 1.151s FAILED (failures=7, errors=82) From nwagner at iam.uni-stuttgart.de Wed May 3 01:42:04 2006 From: nwagner at iam.uni-stuttgart.de (Nils Wagner) Date: Wed May 3 01:42:04 2006 Subject: [Numpy-discussion] Scalar math module is ready for testing In-Reply-To: References: <4451C076.40608@ieee.org> Message-ID: <44586C88.80702@iam.uni-stuttgart.de> Arnd Baecker wrote: > On Fri, 28 Apr 2006, Travis Oliphant wrote: > > >> The scalar math module is complete and ready to be tested. It should >> speed up code that relies heavily on scalar arithmetic by by-passing the >> ufunc machinery. >> >> It needs lots of testing to be sure that it is doing the "right" >> thing. To enable scalarmath you need to >> >> import numpy.core.scalarmath >> > > > After numpy compiles on the 64 Bit machine I tried: > > import numpy > import numpy.core.scalarmath > numpy.test(1) > > The only failure is > > ====================================================================== > FAIL: check_basic (numpy.lib.tests.test_function_base.test_prod) > ---------------------------------------------------------------------- > Traceback (most recent call last): > File > "/home/abaecker/BUILDS3/BuildDir/inst_numpy/lib/python2.4/site-packages/numpy/lib/tests/test_function_base.py", > line 149, in check_basic > assert_equal(prod(a),26400) > File > "/home/abaecker/BUILDS3/BuildDir/inst_numpy/lib/python2.4/site-packages/numpy/testing/utils.py", > line 128, in assert_equal > assert desired == actual, msg > AssertionError: > Items are not equal: > DESIRED: 26400 > ACTUAL: 26400L > > ---------------------------------------------------------------------- > Ran 363 tests in 0.246s > > > Not sure if this is more a bug of assert_equal? > > > Testing scipy with `import numpy.core.scalarmath` before leads > to quite a few errors, many of them related to sparse, see below for an > exerpt, and a couple of others. 
> > Best, Arnd > > > ====================================================================== > ERROR: check_eye (scipy.sparse.tests.test_sparse.test_construct_utils) > ---------------------------------------------------------------------- > Traceback (most recent call last): > File > "/home/abaecker/BUILDS3/BuildDir/inst_numpy/lib/python2.4/site-packages/scipy/sparse/tests/test_sparse.py", > line 626, in check_eye > a = speye(2, 3 ) > File > "/home/abaecker/BUILDS3/BuildDir/inst_numpy/lib/python2.4/site-packages/scipy/sparse/sparse.py", > line 2810, in speye > return spdiags(diags, k, n, m) > File > "/home/abaecker/BUILDS3/BuildDir/inst_numpy/lib/python2.4/site-packages/scipy/sparse/sparse.py", > line 2792, in spdiags > return csc_matrix((a, rowa, ptra), dims=(M, N)) > File > "/home/abaecker/BUILDS3/BuildDir/inst_numpy/lib/python2.4/site-packages/scipy/sparse/sparse.py", > line 574, in __init__ > self._check() > File > "/home/abaecker/BUILDS3/BuildDir/inst_numpy/lib/python2.4/site-packages/scipy/sparse/sparse.py", > line 590, in _check > raise ValueError, "nzmax must not be less than nnz" > ValueError: nzmax must not be less than nnz > > ====================================================================== > ERROR: check_identity > (scipy.sparse.tests.test_sparse.test_construct_utils) > ---------------------------------------------------------------------- > Traceback (most recent call last): > File > "/home/abaecker/BUILDS3/BuildDir/inst_numpy/lib/python2.4/site-packages/scipy/sparse/tests/test_sparse.py", > line 621, in check_identity > a = spidentity(3) > File > "/home/abaecker/BUILDS3/BuildDir/inst_numpy/lib/python2.4/site-packages/scipy/sparse/sparse.py", > line 2800, in spidentity > return spdiags( diags, 0, n, n ) > File > "/home/abaecker/BUILDS3/BuildDir/inst_numpy/lib/python2.4/site-packages/scipy/sparse/sparse.py", > line 2792, in spdiags > return csc_matrix((a, rowa, ptra), dims=(M, N)) > File > 
"/home/abaecker/BUILDS3/BuildDir/inst_numpy/lib/python2.4/site-packages/scipy/sparse/sparse.py", > line 574, in __init__ > self._check() > File > "/home/abaecker/BUILDS3/BuildDir/inst_numpy/lib/python2.4/site-packages/scipy/sparse/sparse.py", > line 590, in _check > raise ValueError, "nzmax must not be less than nnz" > ValueError: nzmax must not be less than nnz > > ====================================================================== > ERROR: check_add (scipy.sparse.tests.test_sparse.test_csc) > ---------------------------------------------------------------------- > Traceback (most recent call last): > File > "/home/abaecker/BUILDS3/BuildDir/inst_numpy/lib/python2.4/site-packages/scipy/sparse/tests/test_sparse.py", > line 33, in setUp > self.datsp = self.spmatrix(self.dat) > File > "/home/abaecker/BUILDS3/BuildDir/inst_numpy/lib/python2.4/site-packages/scipy/sparse/sparse.py", > line 574, in __init__ > self._check() > File > "/home/abaecker/BUILDS3/BuildDir/inst_numpy/lib/python2.4/site-packages/scipy/sparse/sparse.py", > line 590, in _check > raise ValueError, "nzmax must not be less than nnz" > ValueError: nzmax must not be less than nnz > > ====================================================================== > ERROR: Check whether adding a dense matrix to a sparse matrix works > ---------------------------------------------------------------------- > Traceback (most recent call last): > File > "/home/abaecker/BUILDS3/BuildDir/inst_numpy/lib/python2.4/site-packages/scipy/sparse/tests/test_sparse.py", > line 33, in setUp > self.datsp = self.spmatrix(self.dat) > File > "/home/abaecker/BUILDS3/BuildDir/inst_numpy/lib/python2.4/site-packages/scipy/sparse/sparse.py", > line 574, in __init__ > self._check() > File > "/home/abaecker/BUILDS3/BuildDir/inst_numpy/lib/python2.4/site-packages/scipy/sparse/sparse.py", > line 590, in _check > raise ValueError, "nzmax must not be less than nnz" > ValueError: nzmax must not be less than nnz > > 
====================================================================== > ERROR: Check whether the copy=True and copy=False keywords work > ---------------------------------------------------------------------- > Traceback (most recent call last): > File > "/home/abaecker/BUILDS3/BuildDir/inst_numpy/lib/python2.4/site-packages/scipy/sparse/tests/test_sparse.py", > line 33, in setUp > self.datsp = self.spmatrix(self.dat) > File > "/home/abaecker/BUILDS3/BuildDir/inst_numpy/lib/python2.4/site-packages/scipy/sparse/sparse.py", > line 574, in __init__ > self._check() > File > "/home/abaecker/BUILDS3/BuildDir/inst_numpy/lib/python2.4/site-packages/scipy/sparse/sparse.py", > line 590, in _check > raise ValueError, "nzmax must not be less than nnz" > ValueError: nzmax must not be less than nnz > > > [....] > > > ====================================================================== > ERROR: Solve: single precision > ---------------------------------------------------------------------- > Traceback (most recent call last): > File > "/home/abaecker/BUILDS3/BuildDir/inst_numpy/lib/python2.4/site-packages/scipy/linsolve/umfpack/tests/test_umfpack.py", > line 67, in setUp > self.a = spdiags([[1, 2, 3, 4, 5], [6, 5, 8, 9, 10]], [0, 1], 5, 5) > File > "/home/abaecker/BUILDS3/BuildDir/inst_numpy/lib/python2.4/site-packages/scipy/sparse/sparse.py", > line 2792, in spdiags > return csc_matrix((a, rowa, ptra), dims=(M, N)) > File > "/home/abaecker/BUILDS3/BuildDir/inst_numpy/lib/python2.4/site-packages/scipy/sparse/sparse.py", > line 574, in __init__ > self._check() > File > "/home/abaecker/BUILDS3/BuildDir/inst_numpy/lib/python2.4/site-packages/scipy/sparse/sparse.py", > line 590, in _check > raise ValueError, "nzmax must not be less than nnz" > ValueError: nzmax must not be less than nnz > > > > ====================================================================== > ERROR: check_exact (scipy.stats.tests.test_morestats.test_ansari) > 
---------------------------------------------------------------------- > Traceback (most recent call last): > File > "/home/abaecker/BUILDS3/BuildDir/inst_numpy/lib/python2.4/site-packages/scipy/stats/tests/test_morestats.py", > line 79, in check_exact > W,pval = stats.ansari([1,2,3,4],[15,5,20,8,10,12]) > File > "/home/abaecker/BUILDS3/BuildDir/inst_numpy/lib/python2.4/site-packages/scipy/stats/morestats.py", > line 591, in ansari > pval = 2.0*sum(a1[:cind+1])/total > TypeError: unsupported operand type(s) for *: 'float' and 'float32scalar' > > ====================================================================== > FAIL: check_simple_complex (scipy.linalg.tests.test_decomp.test_eigvals) > ---------------------------------------------------------------------- > Traceback (most recent call last): > File > "/home/abaecker/BUILDS3/BuildDir/inst_numpy/lib/python2.4/site-packages/scipy/linalg/tests/test_decomp.py", > line 53, in check_simple_complex > assert_array_almost_equal(w,exact_w) > File > "/home/abaecker/BUILDS3/BuildDir/inst_numpy/lib/python2.4/site-packages/numpy/testing/utils.py", > line 231, in assert_array_almost_equal > assert cond,\ > AssertionError: > Arrays are not almost equal (mismatch 66.6666666667%): > Array 1: [ 9.2983779e+00 +6.5630282e-01j -5.5210285e-16 > -1.7300711e-16j > -2.9837791e-01 +3.4369718e-01j] > Array 2: [ 1.+0.0705825j 0.+0.j 1.-1.1518855j] > > > ====================================================================== > FAIL: check_definition (scipy.fftpack.tests.test_basic.test_ifft) > ---------------------------------------------------------------------- > Traceback (most recent call last): > File > "/home/abaecker/BUILDS3/BuildDir/inst_numpy/lib/python2.4/site-packages/scipy/fftpack/tests/test_basic.py", > line 183, in check_definition > assert_array_almost_equal(y,y1) > File > "/home/abaecker/BUILDS3/BuildDir/inst_numpy/lib/python2.4/site-packages/numpy/testing/utils.py", > line 231, in assert_array_almost_equal > assert cond,\ > 
AssertionError: > Arrays are not almost equal (mismatch 100.0%): > Array 1: [ 2.5 +0.375j 0.0883883+0.0883883j -0.125 > -0.5j > 0.0883883-0.0883883j -0.5 -0.375j -0.0883883-0.0... > Array 2: [ 1.+0.15j 1.+1.j 1.+4.j 1.-1.j > 1.+0.75j > 1.+1.j 1.-0.5714286j 1.-1.j ] > > > ====================================================================== > FAIL: check_definition (scipy.fftpack.tests.test_basic.test_irfft) > ---------------------------------------------------------------------- > Traceback (most recent call last): > File > "/home/abaecker/BUILDS3/BuildDir/inst_numpy/lib/python2.4/site-packages/scipy/fftpack/tests/test_basic.py", > line 340, in check_definition > assert_array_almost_equal(y,y1) > File > "/home/abaecker/BUILDS3/BuildDir/inst_numpy/lib/python2.4/site-packages/numpy/testing/utils.py", > line 231, in assert_array_almost_equal > assert cond,\ > AssertionError: > Arrays are not almost equal (mismatch 100.0%): > Array 1: [ 2.625 -1.6856602 -0.375 -1.1856602 0.625 > 0.4356602 -0.375 > 0.9356602] > Array 2: [ 1. 1. 1. 1. 1. 1. 1. 1.] 
> > > ====================================================================== > FAIL: check_h1vp (scipy.special.tests.test_basic.test_h1vp) > ---------------------------------------------------------------------- > Traceback (most recent call last): > File > "/home/abaecker/BUILDS3/BuildDir/inst_numpy/lib/python2.4/site-packages/scipy/special/tests/test_basic.py", > line 1118, in check_h1vp > assert_almost_equal(h1,h1real,8) > File > "/home/abaecker/BUILDS3/BuildDir/inst_numpy/lib/python2.4/site-packages/numpy/testing/utils.py", > line 148, in assert_almost_equal > assert round(abs(desired - actual),decimal) == 0, msg > AssertionError: > Items are not equal: > DESIRED: (0.49812630170362004+63.055272295669909j) > ACTUAL: (1+126.58490844594492j) > > ====================================================================== > FAIL: check_h2vp (scipy.special.tests.test_basic.test_h2vp) > ---------------------------------------------------------------------- > Traceback (most recent call last): > File > "/home/abaecker/BUILDS3/BuildDir/inst_numpy/lib/python2.4/site-packages/scipy/special/tests/test_basic.py", > line 1125, in check_h2vp > assert_almost_equal(h2,h2real,8) > File > "/home/abaecker/BUILDS3/BuildDir/inst_numpy/lib/python2.4/site-packages/numpy/testing/utils.py", > line 148, in assert_almost_equal > assert round(abs(desired - actual),decimal) == 0, msg > AssertionError: > Items are not equal: > DESIRED: (0.49812630170362004-63.055272295669909j) > ACTUAL: (1-126.58490844594492j) > > ====================================================================== > FAIL: check_nils (scipy.linalg.tests.test_matfuncs.test_signm) > ---------------------------------------------------------------------- > Traceback (most recent call last): > File > "/home/abaecker/BUILDS3/BuildDir/inst_numpy/lib/python2.4/site-packages/scipy/linalg/tests/test_matfuncs.py", > line 42, in check_nils > assert_array_almost_equal(r,cr) > File > 
"/home/abaecker/BUILDS3/BuildDir/inst_numpy/lib/python2.4/site-packages/numpy/testing/utils.py", > line 231, in assert_array_almost_equal > assert cond,\ > AssertionError: > Arrays are not almost equal (mismatch 100.0%): > Array 1: [[ 20.728 -6.576 29.592 32.88 -6.576] > [ -5.808 2.936 -8.712 -9.68 1.936] > [ -6.24 2.08 -8.36 -10.4 ... > Array 2: [[ 11.9493333 -2.2453333 15.3173333 21.6533333 > -2.2453333] > [ -3.8426667 0.4986667 -4.5906667 -7.1866667 0.498... > > > ====================================================================== > FAIL: check_bad (scipy.linalg.tests.test_matfuncs.test_sqrtm) > ---------------------------------------------------------------------- > Traceback (most recent call last): > File > "/home/abaecker/BUILDS3/BuildDir/inst_numpy/lib/python2.4/site-packages/scipy/linalg/tests/test_matfuncs.py", > line 99, in check_bad > assert_array_almost_equal(dot(esa,esa),a) > File > "/home/abaecker/BUILDS3/BuildDir/inst_numpy/lib/python2.4/site-packages/numpy/testing/utils.py", > line 231, in assert_array_almost_equal > assert cond,\ > AssertionError: > Arrays are not almost equal (mismatch 100.0%): > Array 1: [[ nan > +nanj > nan +n... > Array 2: [[ 1. 0. 0. 1. ] > [ 0. 0.03125 0. 0. ] > [ 0. 0. 0.03125 0. ] > [ ... > > > ---------------------------------------------------------------------- > Ran 1119 tests in 1.151s > > FAILED (failures=7, errors=82) > > > > ------------------------------------------------------- > Using Tomcat but need to do more? Need to support web services, security? 
> Get stuff done quickly with pre-integrated technology to make your job easier > Download IBM WebSphere Application Server v.1.0.1 based on Apache Geronimo > http://sel.as-us.falkag.net/sel?cmd=lnk&kid=120709&bid=263057&dat=121642 > _______________________________________________ > Numpy-discussion mailing list > Numpy-discussion at lists.sourceforge.net > https://lists.sourceforge.net/lists/listinfo/numpy-discussion > On 32 bit : scipy.test(1) results in ====================================================================== ERROR: check_exact (scipy.stats.tests.test_morestats.test_ansari) ---------------------------------------------------------------------- Traceback (most recent call last): File "/usr/lib/python2.4/site-packages/scipy/stats/tests/test_morestats.py", line 79, in check_exact W,pval = stats.ansari([1,2,3,4],[15,5,20,8,10,12]) File "/usr/lib/python2.4/site-packages/scipy/stats/morestats.py", line 591, in ansari pval = 2.0*sum(a1[:cind+1])/total TypeError: unsupported operand type(s) for *: 'float' and 'float32scalar' ====================================================================== FAIL: check_simple_complex (scipy.linalg.tests.test_decomp.test_eigvals) ---------------------------------------------------------------------- Traceback (most recent call last): File "/usr/lib/python2.4/site-packages/scipy/linalg/tests/test_decomp.py", line 53, in check_simple_complex assert_array_almost_equal(w,exact_w) File "/usr/lib/python2.4/site-packages/numpy/testing/utils.py", line 231, in assert_array_almost_equal assert cond,\ AssertionError: Arrays are not almost equal (mismatch 66.6666666667%): Array 1: [ 9.2983779e+00 +6.5630282e-01j -4.1690526e-16 -9.3738002e-17j -2.9837791e-01 +3.4369718e-01j] Array 2: [ 1.+0.0705825j 0.+0.j 1.-1.1518855j] ====================================================================== FAIL: check_definition (scipy.fftpack.tests.test_basic.test_ifft) ---------------------------------------------------------------------- 
Traceback (most recent call last): File "/usr/lib/python2.4/site-packages/scipy/fftpack/tests/test_basic.py", line 183, in check_definition assert_array_almost_equal(y,y1) File "/usr/lib/python2.4/site-packages/numpy/testing/utils.py", line 231, in assert_array_almost_equal assert cond,\ AssertionError: Arrays are not almost equal (mismatch 100.0%): Array 1: [ 2.5 +0.375j 0.0883883+0.0883883j -0.125 -0.5j 0.0883883-0.0883883j -0.5 -0.375j -0.0883883-0.0... Array 2: [ 1.+0.15j 1.+1.j 1.+4.j 1.-1.j 1.+0.75j 1.+1.j 1.-0.5714286j 1.-1.j ] ====================================================================== FAIL: check_definition (scipy.fftpack.tests.test_basic.test_irfft) ---------------------------------------------------------------------- Traceback (most recent call last): File "/usr/lib/python2.4/site-packages/scipy/fftpack/tests/test_basic.py", line 340, in check_definition assert_array_almost_equal(y,y1) File "/usr/lib/python2.4/site-packages/numpy/testing/utils.py", line 231, in assert_array_almost_equal assert cond,\ AssertionError: Arrays are not almost equal (mismatch 100.0%): Array 1: [ 2.625 -1.6856602 -0.375 -1.1856602 0.625 0.4356602 -0.375 0.9356602] Array 2: [ 1. 1. 1. 1. 1. 1. 1. 1.] 
====================================================================== FAIL: check_h1vp (scipy.special.tests.test_basic.test_h1vp) ---------------------------------------------------------------------- Traceback (most recent call last): File "/usr/lib/python2.4/site-packages/scipy/special/tests/test_basic.py", line 1118, in check_h1vp assert_almost_equal(h1,h1real,8) File "/usr/lib/python2.4/site-packages/numpy/testing/utils.py", line 148, in assert_almost_equal assert round(abs(desired - actual),decimal) == 0, msg AssertionError: Items are not equal: DESIRED: (0.49812630170362004+63.055272295669901j) ACTUAL: (1+126.58490844594493j) ====================================================================== FAIL: check_h2vp (scipy.special.tests.test_basic.test_h2vp) ---------------------------------------------------------------------- Traceback (most recent call last): File "/usr/lib/python2.4/site-packages/scipy/special/tests/test_basic.py", line 1125, in check_h2vp assert_almost_equal(h2,h2real,8) File "/usr/lib/python2.4/site-packages/numpy/testing/utils.py", line 148, in assert_almost_equal assert round(abs(desired - actual),decimal) == 0, msg AssertionError: Items are not equal: DESIRED: (0.49812630170362004-63.055272295669901j) ACTUAL: (1-126.58490844594493j) ====================================================================== FAIL: check_nils (scipy.linalg.tests.test_matfuncs.test_signm) ---------------------------------------------------------------------- Traceback (most recent call last): File "/usr/lib/python2.4/site-packages/scipy/linalg/tests/test_matfuncs.py", line 42, in check_nils assert_array_almost_equal(r,cr) File "/usr/lib/python2.4/site-packages/numpy/testing/utils.py", line 231, in assert_array_almost_equal assert cond,\ AssertionError: Arrays are not almost equal (mismatch 100.0%): Array 1: [[ 20.728 -6.576 29.592 32.88 -6.576] [ -5.808 2.936 -8.712 -9.68 1.936] [ -6.24 2.08 -8.36 -10.4 ... 
Array 2: [[ 11.9493333 -2.2453333 15.3173333 21.6533333 -2.2453333]
 [ -3.8426667 0.4986667 -4.5906667 -7.1866667 0.498...

======================================================================
FAIL: check_bad (scipy.linalg.tests.test_matfuncs.test_sqrtm)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/usr/lib/python2.4/site-packages/scipy/linalg/tests/test_matfuncs.py", line 99, in check_bad
    assert_array_almost_equal(dot(esa,esa),a)
  File "/usr/lib/python2.4/site-packages/numpy/testing/utils.py", line 231, in assert_array_almost_equal
    assert cond,\
AssertionError:
Arrays are not almost equal (mismatch 100.0%):
 Array 1: [[ nan +nanj nan +nanj nan ...
 Array 2: [[ 1. 0. 0. 1. ]
 [ 0. 0.03125 0. 0. ]
 [ 0. 0. 0.03125 0. ]
 [ ...

----------------------------------------------------------------------
Ran 1516 tests in 7.346s

FAILED (failures=7, errors=1)

and numpy.test(1) yields

======================================================================
FAIL: check_basic (numpy.lib.tests.test_function_base.test_prod)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/usr/lib/python2.4/site-packages/numpy/lib/tests/test_function_base.py", line 149, in check_basic
    assert_equal(prod(a),26400)
  File "/usr/lib/python2.4/site-packages/numpy/testing/utils.py", line 128, in assert_equal
    assert desired == actual, msg
AssertionError:
Items are not equal:
DESIRED: 26400
ACTUAL: 26400L

----------------------------------------------------------------------
Ran 363 tests in 0.935s

FAILED (failures=1)

Nils

From fullung at gmail.com Wed May 3 04:18:01 2006
From: fullung at gmail.com (Albert Strasheim)
Date: Wed May 3 04:18:01 2006
Subject: [Numpy-discussion] Array descriptors not compared correctly causes asarray to copy data
Message-ID: <019401c66ea3$1c754590$0a84a8c0@dsp.sun.ac.za>

Hello all

There seems to be a problem with asarray when using more than one instance of
an equivalent dtype. Example code:

import numpy as N
print N.__version__
import time

a = N.dtype({'names' : ['i', 'j'], 'formats' : [N.float64, N.float64]})
b = N.dtype({'names' : ['i', 'j'], 'formats' : [N.float64, N.float64]})
print a == b
print a.descr == b.descr

_starttime = time.clock()
arr = N.zeros((5000,1000), dtype=a)
for x in arr:
    y = N.asarray(x, dtype=a)
print 'done in %f seconds' % (time.clock() - _starttime,)

_starttime = time.clock()
for x in arr:
    y = N.asarray(x, dtype=b)
print 'done in %f seconds' % (time.clock() - _starttime,)

On my system I get the following result:

0.9.7.2462
False
True
done in 0.153871 seconds
done in 8.726785 seconds

So while the descrs are equal, Python doesn't seem to think that the
dtypes are, which is probably causing the problem.

Trac ticket at http://projects.scipy.org/scipy/numpy/ticket/94.

Regards,

Albert

From schofield at ftw.at Wed May 3 04:28:15 2006
From: schofield at ftw.at (Ed Schofield)
Date: Wed May 3 04:28:15 2006
Subject: [Numpy-discussion] Object array creation from sequences
Message-ID: <445894E0.7030303@ftw.at>

Hi all,

NumPy currently does the following:

>>> s = set([1, 100, 10])
>>> a = numpy.array(s)
>>> a
array(set([1, 100, 10]), dtype=object)
>>> a.shape
()

Many functions in NumPy's functional interface, like numpy.sort(),
inherit this behaviour:

>>> b = numpy.sort(s)
>>> b
array(set([1, 10, 100]), dtype=object)
>>> b.shape
()

I'd like to propose two modifications to improve array construction
from non-list sequences:

1. We inspect whether the data has a __len__ method. If it does, and it
returns an integer, we construct an array out of the C equivalent of
list(data).

Others on this list have noted that NumPy also creates a rank-0 object
array from generators:

>>> c = numpy.array(i*2 for i in xrange(10))
>>> c
array(<generator object at ...>, dtype=object)

This proposal wouldn't affect this case, since generators do not in
general have a __len__ attribute.

2.
A stronger version of the above proposal: if the data has an __iter__
method, we construct an array out of the C equivalent of list(data).
This would handle the generator case above correctly until we have a
more efficient implementation. Creating an array from an infinite
sequence would loop forever, just as list(inf_generator) does.

-- Ed

From schofield at ftw.at Wed May 3 04:59:05 2006
From: schofield at ftw.at (Ed Schofield)
Date: Wed May 3 04:59:05 2006
Subject: [Numpy-discussion] Object array creation from sequences
In-Reply-To: <445894E0.7030303@ftw.at>
References: <445894E0.7030303@ftw.at>
Message-ID: <44589BD0.9060909@ftw.at>

Ed Schofield wrote:
> >>> s = set([1, 100, 10])
> >>> a = numpy.array(s)
> >>> a
> array(set([1, 100, 10]), dtype=object)
[snip]
> >>> b = numpy.sort(s)
> >>> b
> array(set([1, 10, 100]), dtype=object)

Oops, the output I gave here was perhaps confusing. The set elements
can appear in arbitrary order, but the output here should be the same
as above.

-- Ed

From umut.tabak at student.kuleuven.be Wed May 3 08:34:10 2006
From: umut.tabak at student.kuleuven.be (umut tabak)
Date: Wed May 3 08:34:10 2006
Subject: [Numpy-discussion] Numpy installation failure
Message-ID: <4458CD34.6000604@student.kuleuven.be>

Dear all,

I am new to Python, however not new to the programming world. I have
got the Numeric package, I think, because the output seems OK:

>>> from Numeric import *
>>>

It seems OK, but when I try to view a picture file which is supplied in
the install and test documentation of numpy, it fails:

>>> view(greece)
Traceback (most recent call last):
  File "<stdin>", line 1, in ?
NameError: name 'view' is not defined

There it is written that the numpy directory must be under the demo
directory, but I do not have a demo directory under the python
directory. Something is missing, I guess, but I could not figure out
what; actually I did not have much time to search and find the error
:), hope for your understanding.
I am seeking a step by step installation description so your help is very much appreciated. Regards, U.T. From ndarray at mac.com Wed May 3 08:49:23 2006 From: ndarray at mac.com (Sasha) Date: Wed May 3 08:49:23 2006 Subject: [Numpy-discussion] Test fails for rev. 2473 In-Reply-To: <200605031014.30834.a.u.r.e.l.i.a.n@gmx.net> References: <200605031014.30834.a.u.r.e.l.i.a.n@gmx.net> Message-ID: On 5/3/06, Johannes Loehnert wrote: > [snip] > "/scratch/jloehnert/python-svn/lib/python2.3/site-packages/numpy/testing/numpytest.py", > line 185, in rundocs > tests = doctest.DocTestFinder().find(m) > AttributeError: 'module' object has no attribute 'DocTestFinder' It looks like this test expects python 2.4 version of doctest. If we are going to support 2.3, this is a bug. From fullung at gmail.com Wed May 3 09:00:13 2006 From: fullung at gmail.com (Albert Strasheim) Date: Wed May 3 09:00:13 2006 Subject: [Numpy-discussion] Test fails for rev. 2473 In-Reply-To: Message-ID: <020401c66eca$94a57b80$0a84a8c0@dsp.sun.ac.za> Hello all > -----Original Message----- > From: numpy-discussion-admin at lists.sourceforge.net [mailto:numpy- > discussion-admin at lists.sourceforge.net] On Behalf Of Sasha > Sent: 03 May 2006 17:49 > To: Johannes Loehnert > Cc: numpy-discussion at lists.sourceforge.net > Subject: Re: [Numpy-discussion] Test fails for rev. 2473 > > On 5/3/06, Johannes Loehnert wrote: > > > [snip] > > "/scratch/jloehnert/python-svn/lib/python2.3/site- > packages/numpy/testing/numpytest.py", > > line 185, in rundocs > > tests = doctest.DocTestFinder().find(m) > > AttributeError: 'module' object has no attribute 'DocTestFinder' > > It looks like this test expects python 2.4 version of doctest. If we > are going to support 2.3, this is a bug. Are doctests really worth it? They are reasonably useful when developing new code, but "normal" tests might be better once the code is done. 
I also found out that they break trace.py, a very useful tool included with Python for determining code coverage -- knowledge that would be very useful for identifying areas where NumPy's test suite can be improved. I prepared two patches to turn the existing doctests into "normal" tests: http://projects.scipy.org/scipy/numpy/ticket/87 http://projects.scipy.org/scipy/numpy/ticket/88 Regards, Albert From pearu at scipy.org Wed May 3 09:04:02 2006 From: pearu at scipy.org (Pearu Peterson) Date: Wed May 3 09:04:02 2006 Subject: [Numpy-discussion] Test fails for rev. 2473 In-Reply-To: References: <200605031014.30834.a.u.r.e.l.i.a.n@gmx.net> Message-ID: On Wed, 3 May 2006, Sasha wrote: > On 5/3/06, Johannes Loehnert wrote: > >> [snip] >> "/scratch/jloehnert/python-svn/lib/python2.3/site-packages/numpy/testing/numpytest.py", >> line 185, in rundocs >> tests = doctest.DocTestFinder().find(m) >> AttributeError: 'module' object has no attribute 'DocTestFinder' > > It looks like this test expects python 2.4 version of doctest. If we > are going to support 2.3, this is a bug. The bug is fixed in svn. Thanks, Pearu From tim.hochberg at cox.net Wed May 3 09:20:05 2006 From: tim.hochberg at cox.net (Tim Hochberg) Date: Wed May 3 09:20:05 2006 Subject: [Numpy-discussion] Test fails for rev. 2473 In-Reply-To: <020401c66eca$94a57b80$0a84a8c0@dsp.sun.ac.za> References: <020401c66eca$94a57b80$0a84a8c0@dsp.sun.ac.za> Message-ID: <4458D85F.5040202@cox.net> Albert Strasheim wrote: >Hello all > > > >>-----Original Message----- >>From: numpy-discussion-admin at lists.sourceforge.net [mailto:numpy- >>discussion-admin at lists.sourceforge.net] On Behalf Of Sasha >>Sent: 03 May 2006 17:49 >>To: Johannes Loehnert >>Cc: numpy-discussion at lists.sourceforge.net >>Subject: Re: [Numpy-discussion] Test fails for rev. 
2473 >> >>On 5/3/06, Johannes Loehnert wrote: >> >> >> >>>[snip] >>>"/scratch/jloehnert/python-svn/lib/python2.3/site- >>> >>> >>packages/numpy/testing/numpytest.py", >> >> >>>line 185, in rundocs >>> tests = doctest.DocTestFinder().find(m) >>>AttributeError: 'module' object has no attribute 'DocTestFinder' >>> >>> >>It looks like this test expects python 2.4 version of doctest. If we >>are going to support 2.3, this is a bug. >> >> > >Are doctests really worth it? They are reasonably useful when developing new >code, but "normal" tests might be better once the code is done. I also found >out that they break trace.py, a very useful tool included with Python for >determining code coverage -- knowledge that would be very useful for >identifying areas where NumPy's test suite can be improved. > >I prepared two patches to turn the existing doctests into "normal" tests: > >http://projects.scipy.org/scipy/numpy/ticket/87 >http://projects.scipy.org/scipy/numpy/ticket/88 > > > In general, I'm not a big fan of turning doctests into "normal" tests. Any time you muck with a working test you have to worry about introducing bugs into the test suite. In addition doctests are frequently, although certainly not always, clearer than their unit test counterparts. It seems that the time would be better spent fixing trace to do the right thing in the presence of doctests. Then again, you've already spent the time so it's too late to unspend it I suppose. -tim From fullung at gmail.com Wed May 3 09:41:08 2006 From: fullung at gmail.com (Albert Strasheim) Date: Wed May 3 09:41:08 2006 Subject: [Numpy-discussion] Test fails for rev. 2473 In-Reply-To: <4458D85F.5040202@cox.net> Message-ID: <020e01c66ed0$421b0dc0$0a84a8c0@dsp.sun.ac.za> Hello Tim > -----Original Message----- > From: Tim Hochberg [mailto:tim.hochberg at cox.net] > Sent: 03 May 2006 18:21 > To: Albert Strasheim > Cc: numpy-discussion at lists.sourceforge.net > Subject: Re: [Numpy-discussion] Test fails for rev.
2473 > > Albert Strasheim wrote: > >I prepared two patches to turn the existing doctests into "normal" tests: > > > >http://projects.scipy.org/scipy/numpy/ticket/87 > >http://projects.scipy.org/scipy/numpy/ticket/88 > > > > > > > In general, I'm not a big fan of turning doctests into "normal" tests. > Any time you muck with a working test you have to worry about > introducing bugs into the test suite. In addition doctests are > frequently, although certainly not always. clearer than their unit test > counterparts. It seems that the time would be better spent fixing trace > to do the right thing in the presence of doctests. Then again, you've > already spent the time so it's to late to unspend it I suppose. I agree with you -- rewriting tests is suboptimal. However, in the presence of not-perfect-yet tools I prefer to adapt my way of working so that I can still use the tool instead of hoping that the tool will get fixed at some undefined time in the future. Anyway, as I noted in ticket #87, the last doctest in test_ufunclike might be hiding a bug or there might be floating point math issues I'm unaware of. When I run my "normal" tests for log2, I get: Traceback (most recent call last): File "numpybug3.py", line 7, in ? assert_array_equal(b, array([2.169925, 1.20163386, 2.70043972])) File "C:\Python24\Lib\site-packages\numpy\testing\utils.py", line 204, in assert_array_equal assert cond,\ AssertionError: Arrays are not equal (mismatch 100.0%): Array 1: [ 2.1699250014423126 1.2016338611696504 2.7004397181410922] Array 2: [ 2.1699250000000001 1.2016338600000001 2.7004397199999999] Is this the expected behaviour? Regards, Albert From tim.hochberg at cox.net Wed May 3 10:00:13 2006 From: tim.hochberg at cox.net (Tim Hochberg) Date: Wed May 3 10:00:13 2006 Subject: [Numpy-discussion] Test fails for rev. 
2473 In-Reply-To: <020e01c66ed0$421b0dc0$0a84a8c0@dsp.sun.ac.za> References: <020e01c66ed0$421b0dc0$0a84a8c0@dsp.sun.ac.za> Message-ID: <4458E1DD.9070008@cox.net> Albert Strasheim wrote: >Hello Tim > > > >>-----Original Message----- >>From: Tim Hochberg [mailto:tim.hochberg at cox.net] >>Sent: 03 May 2006 18:21 >>To: Albert Strasheim >>Cc: numpy-discussion at lists.sourceforge.net >>Subject: Re: [Numpy-discussion] Test fails for rev. 2473 >> >>Albert Strasheim wrote: >> >> > > > > > >>>I prepared two patches to turn the existing doctests into "normal" tests: >>> >>>http://projects.scipy.org/scipy/numpy/ticket/87 >>>http://projects.scipy.org/scipy/numpy/ticket/88 >>> >>> >>> >>> >>> >>In general, I'm not a big fan of turning doctests into "normal" tests. >>Any time you muck with a working test you have to worry about >>introducing bugs into the test suite. In addition doctests are >>frequently, although certainly not always. clearer than their unit test >>counterparts. It seems that the time would be better spent fixing trace >>to do the right thing in the presence of doctests. Then again, you've >>already spent the time so it's to late to unspend it I suppose. >> >> > >I agree with you -- rewriting tests is suboptimal. However, in the presence >of not-perfect-yet tools I prefer to adapt my way of working so that I can >still use the tool instead of hoping that the tool will get fixed at some >undefined time in the future. > > I just googled and there seem to be some patches relating to trace.py and doctest, but not being a user of trace, I don't really know what its issues are with doctest, so I'm unsure if they would address the issue you are having. >Anyway, as I noted in ticket #87, the last doctest in test_ufunclike might >be hiding a bug or there might be floating point math issues I'm unaware of. >When I run my "normal" tests for log2, I get: > >Traceback (most recent call last): > File "numpybug3.py", line 7, in ? 
> assert_array_equal(b, array([2.169925, 1.20163386, 2.70043972])) > File "C:\Python24\Lib\site-packages\numpy\testing\utils.py", line 204, in >assert_array_equal > assert cond,\ >AssertionError: >Arrays are not equal (mismatch 100.0%): > Array 1: [ 2.1699250014423126 1.2016338611696504 >2.7004397181410922] > Array 2: [ 2.1699250000000001 1.2016338600000001 >2.7004397199999999] > >Is this the expected behaviour? > > I think that this is a non-bug; it's just that the regular test is more picky here. Take a look at this: >>> a = n.array([ 2.1699250014423126, 1.2016338611696504,2.7004397181410922]) >>> a array([ 2.169925 , 1.20163386, 2.70043972]) >>> b = n.array([ 2.169925 , 1.20163386, 2.70043972]) >>> b array([ 2.169925 , 1.20163386, 2.70043972]) >>> a == b array([False, False, False], dtype=bool) >>> [x for x in a] [2.1699250014423126, 1.2016338611696504, 2.7004397181410922] >>> [x for x in b] [2.1699250000000001, 1.2016338600000001, 2.7004397199999999] So what we have here is that numpy displays fewer digits than it actually knows. Presumably this doctest was pasted in from the interpreter, where not all the digits show up; then when you moved it to unittest, where you are comparing the actual numbers, things start to fail. Regards, -tim From faltet at carabos.com Wed May 3 10:27:12 2006 From: faltet at carabos.com (Francesc Altet) Date: Wed May 3 10:27:12 2006 Subject: [Numpy-discussion] ANN: PyTables 1.3.1 - Hierarchical datasets Message-ID: <200605031925.51430.faltet@carabos.com> =========================== Announcing PyTables 1.3.1 =========================== This is a new minor release of PyTables. In it, you will find support for NumPy integers as indexes of datasets and many bug fixes. *Important note*: one of the fixes addresses a bug in the flushing of I/O buffers that was introduced back in PyTables 1.1. So, for those of you who want to improve the integrity of PyTables files during unexpected crashes, an upgrade is strongly encouraged.
Go to the PyTables web site for downloading the beast: http://www.pytables.org/ or keep reading for more info about the new features and bugs fixed. Changes more in depth ===================== Improvements: - NumPy integer scalars are supported as indexes for ``__getitem__()`` and ``__setitem__()`` methods in ``Leaf`` objects. In addition, any object that exposes the __index__() method (see PEP 357) is supported as well. This latter feature will be more useful for those of you who start using Python 2.5. Meanwhile, PyTables will use its own guessing (quite trivial, in fact) in order to convert indexes to 64-bit integers (you know, PyTables does support 64-bit indexes even on 32-bit platforms). Bug fixes: - ``Leaf.flush()`` didn't force an actual flush on-disk at the HDF5 level, raising the chances of leaving the file in an inconsistent state during an unexpected shutdown of the program. Now, it works as it should. This bug was around from PyTables 1.1 on. Thanks to Andrew Straw for complaining about this persistently enough. ;-) - The code for recognizing a leaf class in a native HDF5 file was not aware of the ``trMap`` option of ``openFile()``, thus giving potentially scary error messages. This has been fixed. - When an iterator was used on an unbound table, PyTables crashed. A workaround has been implemented so as to avoid this. In addition, a better solution has been devised, but as it requires an in-depth refactoring, it will be delivered with the PyTables 1.4 series. - Added an ``Enum.__ne__()`` method to avoid equal enumerations comparing as both True and False. Closes ticket #8 (thanks to Ashley Walsh). - Work around a Python bug when storing floats in attributes under some locales. See ticket #9 (thanks to Fabio Zadrozny). Deprecated features: - None Backward-incompatible changes: - Please, see the ``RELEASE-NOTES.txt`` file.
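The ``Enum.__ne__()`` item above reflects a general Python pitfall: under Python 2, a class that defines ``__eq__`` without a matching ``__ne__`` leaves ``!=`` with its default identity-based behaviour, so two equal objects can answer True to both ``a == b`` and ``a != b``. A minimal sketch of the fix pattern, using a hypothetical enumeration class rather than PyTables' actual ``Enum`` implementation:

```python
class Enum(object):
    """Hypothetical enumeration type -- not PyTables' actual Enum."""

    def __init__(self, names):
        self._names = frozenset(names)

    def __eq__(self, other):
        return isinstance(other, Enum) and self._names == other._names

    # Without this, Python 2 falls back to identity comparison for "!=",
    # so two equal Enum instances can be both "==" and "!=" at once.
    def __ne__(self, other):
        return not self.__eq__(other)

a = Enum(['red', 'green'])
b = Enum(['red', 'green'])
print(a == b, a != b)  # True False
```

Defining ``__ne__`` as the negation of ``__eq__`` keeps the two operators consistent by construction.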
Important note for Windows users ================================ If you are willing to use PyTables with Python 2.4 on Windows platforms, you will need to get the HDF5 library compiled for MSVC 7.1, aka .NET 2003. It can be found at: ftp://ftp.ncsa.uiuc.edu/HDF/HDF5/current/bin/windows/5-165-win-net.ZIP Users of Python 2.3 on Windows will have to download the version of HDF5 compiled with MSVC 6.0 available in: ftp://ftp.ncsa.uiuc.edu/HDF/HDF5/current/bin/windows/5-165-win.ZIP What it is ========== **PyTables** is a package for managing hierarchical datasets, designed to efficiently cope with extremely large amounts of data (with support for full 64-bit file addressing). It features an object-oriented interface that, combined with C extensions for the performance-critical parts of the code, makes it a very easy-to-use tool for high performance data storage and retrieval. PyTables runs on top of the HDF5 library and the numarray package (NumPy and Numeric are also supported) for achieving maximum throughput and convenient use. Besides, PyTables I/O for table objects is buffered, implemented in C and carefully tuned so that you can reach much better performance with PyTables than with your own home-grown wrappings to the HDF5 library. PyTables sports indexing capabilities as well, allowing selections in tables exceeding one billion rows in just seconds. Platforms ========= This version has been extensively checked on quite a few platforms, like Linux on Intel32 (Pentium), Win on Intel32 (Pentium), Linux on Intel64 (Itanium2), FreeBSD on AMD64 (Opteron), Linux on PowerPC (and PowerPC64) and MacOSX on PowerPC. For other platforms, chances are that the code can be easily compiled and run without further issues. Please, contact us in case you are experiencing problems.
Resources ========= Go to the PyTables web site for more details: http://www.pytables.org About the HDF5 library: http://hdf.ncsa.uiuc.edu/HDF5/ About numarray: http://www.stsci.edu/resources/software_hardware/numarray To know more about the company behind the PyTables development, see: http://www.carabos.com/ Acknowledgments =============== Thanks to the various users who provided feature improvements, patches, bug reports, support and suggestions. See the ``THANKS`` file in the distribution package for an (incomplete) list of contributors. Many thanks also to SourceForge who have helped to make and distribute this package! And last but not least, a big thank you to THG (http://www.hdfgroup.org/) for sponsoring many of the new features recently introduced in PyTables. Share your experience ===================== Let us know of any bugs, suggestions, gripes, kudos, etc. you may have. ---- **Enjoy data!** -- The PyTables Team From nodwell at physics.ubc.ca Wed May 3 11:44:20 2006 From: nodwell at physics.ubc.ca (Eric Nodwell) Date: Wed May 3 11:44:20 2006 Subject: [Numpy-discussion] extension to io.read_array : parse strings Message-ID: I've made a small modification to import_array.py so that I can parse a string as if it were a file. For example: >>> B = """# comment line ... 3 4 5 ... 2 1 5""" >>> io.read_array(B, datainstring=1) array([[ 3., 4., 5.], [ 2., 1., 5.]]) I rather expect that there already exists a standard way to do this, and if so, I would appreciate it if someone could illustrate. Otherwise, if no convenient way of parsing strings into arrays exists at the moment, and if anyone is of the opinion that this might be generally useful, then I will submit this as a patch. (For my own purposes I needed to parse web form input.)
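On the question of a standard way: the usual Python idiom for treating a string as a file is to wrap it in a ``StringIO`` object, which any reader that accepts a file-like object can consume unchanged (whether ``io.read_array`` accepts file-like objects is a separate question). A generic sketch, with a plain stdlib parser standing in for ``read_array``:

```python
from io import StringIO  # the Python 2 of this thread would use cStringIO

B = """# comment line
3 4 5
2 1 5"""

def parse_array(fileobj):
    """Parse whitespace-separated numbers from a file-like object,
    skipping blank lines and '#' comment lines."""
    rows = []
    for line in fileobj:
        line = line.strip()
        if not line or line.startswith('#'):
            continue
        rows.append([float(tok) for tok in line.split()])
    return rows

print(parse_array(StringIO(B)))  # [[3.0, 4.0, 5.0], [2.0, 1.0, 5.0]]
```

Because ``StringIO`` implements the file protocol (iteration, ``read``, ``readline``), the same parser works on real files and on web form input alike.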
Eric From tom.denniston at alum.dartmouth.org Wed May 3 13:01:12 2006 From: tom.denniston at alum.dartmouth.org (Tom Denniston) Date: Wed May 3 13:01:12 2006 Subject: [Numpy-discussion] numpy imports Message-ID: When I import numpy in python -v mode I notice that it imports a slew of modules. Is this intentional? Are the implicit dependencies necessary or have I just misconfigured or build something wrong? Has anyone noticed the same behavior? --Tom From robert.kern at gmail.com Wed May 3 13:15:01 2006 From: robert.kern at gmail.com (Robert Kern) Date: Wed May 3 13:15:01 2006 Subject: [Numpy-discussion] Re: numpy imports In-Reply-To: References: Message-ID: Tom Denniston wrote: > When I import numpy in python -v mode I notice that it imports a slew > of modules. Is this intentional? More or less. There is quite a bit of accumulated history from Numeric and scipy_base that numpy is trying to stay relatively close to API-wise. numpy et. al. are frequently used from the interactive prompt, probably more so than many other Python packages. There is an expectation that "import numpy" makes nearly everything public in the package accessible (maybe a dot or two down, but still accessible). Unfortunately, that entails a lot of internal imports. Paring down that list of imports in any significant manner is going to break code. I wish we could, but I don't think it's practical at this point in time. -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco
URL: From kwgoodman at gmail.com Wed May 3 15:21:10 2006 From: kwgoodman at gmail.com (Keith Goodman) Date: Wed May 3 15:21:10 2006 Subject: [Numpy-discussion] numpy imports In-Reply-To: References: Message-ID: On 5/3/06, Tom Denniston wrote: > When I import numpy in python -v mode I notice that it imports a slew > of modules. Is this intentional? Are the implicit dependencies > necessary or have I just misconfigured or build something wrong? Has > anyone noticed the same behavior? > > --Tom > > > ------------------------------------------------------- > Using Tomcat but need to do more? Need to support web services, security? > Get stuff done quickly with pre-integrated technology to make your job easier > Download IBM WebSphere Application Server v.1.0.1 based on Apache Geronimo > http://sel.as-us.falkag.net/sel?cmdlnk&kid0709&bid&3057&dat1642 > _______________________________________________ > Numpy-discussion mailing list > Numpy-discussion at lists.sourceforge.net > https://lists.sourceforge.net/lists/listinfo/numpy-discussion > Where does the IBM ad come from? Are ads attached to the email I send to this list? The ad does not appear in the archive: http://sourceforge.net/mailarchive/forum.php?thread_id=10299357&forum_id=4890 From robert.kern at gmail.com Wed May 3 15:25:04 2006 From: robert.kern at gmail.com (Robert Kern) Date: Wed May 3 15:25:04 2006 Subject: [Numpy-discussion] Re: numpy imports In-Reply-To: References: Message-ID: Keith Goodman wrote: >> ------------------------------------------------------- >> Using Tomcat but need to do more? Need to support web services, security? 
>> Get stuff done quickly with pre-integrated technology to make your job >> easier >> Download IBM WebSphere Application Server v.1.0.1 based on Apache >> Geronimo >> http://sel.as-us.falkag.net/sel?cmdlnk&kid0709&bid&3057&dat1642 >> _______________________________________________ >> Numpy-discussion mailing list >> Numpy-discussion at lists.sourceforge.net >> https://lists.sourceforge.net/lists/listinfo/numpy-discussion >> > > Where does the IBM ad come from? Are ads attached to the email I send > to this list? > > The ad does not appear in the archive: > > http://sourceforge.net/mailarchive/forum.php?thread_id=10299357&forum_id=4890 Sourceforge adds them. -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From ndarray at mac.com Wed May 3 15:44:14 2006 From: ndarray at mac.com (Sasha) Date: Wed May 3 15:44:14 2006 Subject: [Numpy-discussion] Doctests vs. unittests Message-ID: In a recent thread (see "Test fails for rev. 2473") Albert suggested that unittests should be preferred over doctests. I disagree and would like to hear some opinions on this topic. I believe unittests and doctests serve two distinct purposes. Unittests should provide broader coverage than doctests; in particular, tests for rarely used corner cases or tests that are added when a bug is fixed should be written as unittests. Doctests should mostly be used as automatically tested examples in docstrings. Doctests should be selected primarily based on their readability and relevance to the rest of the docstring rather than completeness of test coverage. There is one use of doctests that may justify conversion to unittests. We should encourage users to submit tests, and the doctest format is much more accessible than unittest.
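To make the two styles concrete, here is a minimal doctest embedded in a docstring and driven through ``doctest.DocTestFinder`` -- the same Python 2.4 API whose absence under 2.3 triggered the failure reported earlier in this thread. This is a generic example, not taken from NumPy's test suite:

```python
import doctest

def clip(x, lo, hi):
    """Clamp x to the closed interval [lo, hi].

    >>> clip(5, 0, 10)
    5
    >>> clip(-3, 0, 10)
    0
    >>> clip(42, 0, 10)
    10
    """
    return max(lo, min(hi, x))

# Collect and run the examples found in clip's docstring.
finder = doctest.DocTestFinder()
runner = doctest.DocTestRunner()
for test in finder.find(clip, name='clip', globs={'clip': clip}):
    runner.run(test)
print(runner.tries, runner.failures)  # 3 0
```

The same three checks written as a unittest would be assertions in a ``TestCase`` method; the doctest version doubles as documentation, which is the trade-off under discussion.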
Doctests that appear in separate test files rather than as examples in docstrings should probably be converted eventually, but this should not discourage anyone from writing the tests as doctests in the first place. From oliphant at ee.byu.edu Wed May 3 22:39:11 2006 From: oliphant at ee.byu.edu (Travis Oliphant) Date: Wed May 3 22:39:11 2006 Subject: [Numpy-discussion] lexsort In-Reply-To: References: <200605021826.21114.faltet@carabos.com> <200605022027.59369.faltet@carabos.com> Message-ID: <4459937A.2080108@ee.byu.edu> Charles R Harris wrote: > Hi, > > On 5/2/06, *Francesc Altet* > wrote: > > On Tuesday 02 May 2006 19:36, Charles R Harris wrote: > > Francesc, > > > > Completely off topic, but are you aware of the lexsort function > in numpy > > and numarray? It is like argsort but takes a list of > (vector)keys and > > performs a stable sort on each key in turn, so for record arrays > you can > > get the effect of sorting on column A, then column B, etc. I > thought I > > would mention it because you seem to use argsort a lot and, > well, because I > > wrote it ;) > > Thanks for pointing this out. In fact, I had no idea of this > capability in numarray (numpy neither). I'll have to look more > carefully into this to fully realize the kind of things that can be > done with it. But it seems very promising anyway :-) > > > > As an example: > > In [21]: a > Out[21]: > array([[0, 1], > [1, 0], > [1, 1], > [0, 1], > [1, 0]]) > > In [22]: a[lexsort((a[:,1],a[:,0]))] > Out[22]: > array([[0, 1], > [0, 1], > [1, 0], > [1, 0], > [1, 1]]) > > > Hmm, I notice that lexsort requires a tuple and won't accept a list. I > wonder if there is a good reason for that. Travis? > Can't think of one right now except haste in coding... -Travis From arnd.baecker at web.de Thu May 4 00:34:13 2006 From: arnd.baecker at web.de (Arnd Baecker) Date: Thu May 4 00:34:13 2006 Subject: [Numpy-discussion] profiling code with hotshot and Numeric/numpy/scipy/...
Message-ID: Hi, I am trying to profile some code which I am converting from Numeric to numpy. However, the whole profiling seems to break down on the level of hotshot, depending on whether the code is run with Numeric/numpy or (old) scipy. For the attached test I get: - old scipy: all is fine! But all these fail - Numeric 24.2: - numpy version: 0.9.7.2256 - scipy version: 0.4.8 "Failing" means that I don't get a break-down on which routine takes how much time. First I would be interested whether someone else sees the same behaviour, or if we screwed up something with our installation. If this is reproducible, the question is, whether one can do something about it ... Any help and ideas are very welcome! Best, Arnd python test_profile_problem.py scipy ############################################# scipy version: 0.3.2 scipy case: Size: 79776 18001 function calls in 0.660 CPU seconds Ordered by: internal time, call count ncalls tottime percall cumtime percall filename:lineno(function) 1 0.269 0.269 0.660 0.660 test_profile_problem.py:38(main) 1000 0.118 0.000 0.235 0.000 scimath.py:31(log) 2000 0.068 0.000 0.129 0.000 type_check.py:96(isreal) 2000 0.053 0.000 0.057 0.000 type_check.py:86(imag) 6000 0.039 0.000 0.039 0.000 type_check.py:12(asarray) 1000 0.036 0.000 0.156 0.000 scimath.py:25(sqrt) 2000 0.034 0.000 0.034 0.000 Numeric.py:583(ravel) 2000 0.027 0.000 0.027 0.000 Numeric.py:655(sometrue) 2000 0.017 0.000 0.078 0.000 function_base.py:36(any) 0 0.000 0.000 profile:0(profiler) python test_profile_problem.py Numeric ############################################# Numeric version: 24.2 Numeric case: Size: 1374 1 function calls in 0.385 CPU seconds Ordered by: internal time, call count ncalls tottime percall cumtime percall filename:lineno(function) 1 0.385 0.385 0.385 0.385 test_profile_problem.py:38(main) 0 0.000 0.000 profile:0(profiler) python test_profile_problem.py numpy ############################################# numpy version: 0.9.7.2256 numpy case: Size: 
1426 1 function calls in 0.346 CPU seconds Ordered by: internal time, call count ncalls tottime percall cumtime percall filename:lineno(function) 1 0.346 0.346 0.346 0.346 test_profile_problem.py:38(main) 0 0.000 0.000 profile:0(profiler) python test_profile_problem.py new-scipy ############################################# scipy version: 0.4.8 new-scipy case: Size: 1426 1 function calls in 0.327 CPU seconds Ordered by: internal time, call count ncalls tottime percall cumtime percall filename:lineno(function) 1 0.327 0.327 0.327 0.327 test_profile_problem.py:38(main) 0 0.000 0.000 profile:0(profiler) -------------- next part -------------- A non-text attachment was scrubbed... Name: test_profile_problem.py Type: text/x-python Size: 1597 bytes Desc: URL: From schofield at ftw.at Thu May 4 01:31:15 2006 From: schofield at ftw.at (Ed Schofield) Date: Thu May 4 01:31:15 2006 Subject: [Numpy-discussion] profiling code with hotshot and Numeric/numpy/scipy/... In-Reply-To: References: Message-ID: <4459BCDE.6070002@ftw.at> Arnd Baecker wrote: > Hi, > > I am trying to profile some code which I am converting from > Numeric to numpy. > However, the whole profiling seems to break down on the level of hotshot, > depending on whether the code is run with Numeric/numpy or (old) scipy. > > For the attached test I get: > - old scipy: all is fine! > But all these fail > - Numeric 24.2: > - numpy version: 0.9.7.2256 > - scipy version: 0.4.8 > > "Failing" means that I don't get a break-down on which routine takes how > much time. > > First I would be interested whether someone else sees the same behaviour, > or if we screwed up something with our installation. > I've had trouble with this too. I get more meaningful results using prof.run('function()') instead of prof.runcall. I wrote myself a little wrapper function, which I'll include below. But I'm still mystified why hotshot's runcall doesn't work ... 
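For comparison, the ``runcall``-plus-stats pattern that misbehaves here under hotshot can be written against the stdlib ``cProfile``/``pstats`` pair (added in Python 2.5, and the usual replacement for hotshot since). This is a generic sketch with a made-up ``main()`` workload, not the attached test script:

```python
import cProfile
import io
import pstats

def main():
    # Hypothetical stand-in for the numeric workload being profiled.
    total = 0.0
    for i in range(10000):
        total += i * 0.5
    return total

prof = cProfile.Profile()
result = prof.runcall(main)  # same runcall idiom as hotshot.Profile

# Render the per-function breakdown, sorted like the outputs above.
stream = io.StringIO()
stats = pstats.Stats(prof, stream=stream)
stats.strip_dirs().sort_stats('time', 'calls').print_stats()
print(stream.getvalue())
```

Unlike the hotshot results in this thread, the ``ncalls``/``tottime`` breakdown here reliably includes ``main`` itself, since ``cProfile`` records the profiled call directly.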
-- Ed ----------------------- def profilerun(function, logfilename='temp.prof'): """A nice wrapper for the hotshot profiler. Usage: profilerun('my_statement') Example: >>> from scipy.linalg import inv >>> from numpy import rand >>> def timewaste(arg1=None, arg2=None): >>> print "Arguments 1 and 2 are: " + str(arg1) + " and " + str(arg2) >>> a = rand(1000,1000) >>> b = linalg.inv(a) >>> >>> profilerun('timewaste()') Example output: 7 function calls in 0.917 CPU seconds Ordered by: internal time, call count ncalls tottime percall cumtime percall filename:lineno(function) 1 0.916 0.916 0.917 0.917 basic.py:176(inv) 1 0.000 0.000 0.000 0.000 function_base.py:162(asarray_chkfinite) 1 0.000 0.000 0.917 0.917 :1(timewaste) 1 0.000 0.000 0.000 0.000 __init__.py:28(get_lapack_funcs) 1 0.000 0.000 0.000 0.000 _internal.py:28(__init__) 1 0.000 0.000 0.000 0.000 numeric.py:70(asarray) 1 0.000 0.000 0.000 0.000 _internal.py:36(__getitem__) 0 0.000 0.000 profile:0(profiler) """ prof = hotshot.Profile(logfilename) output = prof.run(function) print "Output of function is:" print output prof.close() stats = hotshot.stats.load(logfilename) stats.strip_dirs() stats.sort_stats('time', 'calls') stats.print_stats() From arnd.baecker at web.de Thu May 4 01:32:15 2006 From: arnd.baecker at web.de (Arnd Baecker) Date: Thu May 4 01:32:15 2006 Subject: [Numpy-discussion] profiling code with hotshot and Numeric/numpy/scipy/... In-Reply-To: <4459B603.2060304@ftw.at> References: <4459B603.2060304@ftw.at> Message-ID: Hi Ed, On Thu, 4 May 2006, Ed Schofield wrote: > Arnd Baecker wrote: > > Hi, > > > > I am trying to profile some code which I am converting from > > Numeric to numpy. > > However, the whole profiling seems to break down on the level of hotshot, > > depending on whether the code is run with Numeric/numpy or (old) scipy. > > > > For the attached test I get: > > - old scipy: all is fine! 
> > But all these fail > > - Numeric 24.2: > > - numpy version: 0.9.7.2256 > > - scipy version: 0.4.8 > > > > "Failing" means that I don't get a break-down on which routine takes how > > much time. > > > > First I would be interested whether someone else sees the same behaviour, > > or if we screwed up something with our installation. > > > > I've had trouble with this too. I get more meaningful results using > prof.run('function()') instead of prof.runcall. I wrote myself a little > wrapper function, which I'll include below. Thanks for the wrapper - but it seems that it does not help in my case: in my script I replaced prof.runcall(main) with prof.run("main()") and still see the same output, i.e. no information in the Numeric/numpy/scipy cases. For your example (with the corresponding modifications) I get meaningful results in all cases. Completely puzzled ... > But I'm still mystified why hotshot's runcall doesn't work ... Something really weird must be going on. Thanks, Arnd > ----------------------- > > > def profilerun(function, logfilename='temp.prof'): > """A nice wrapper for the hotshot profiler.
> Usage: > profilerun('my_statement') > > Example: > >>> from scipy.linalg import inv > >>> from numpy import rand > >>> def timewaste(arg1=None, arg2=None): > >>> print "Arguments 1 and 2 are: " + str(arg1) + " and " + > str(arg2) > >>> a = rand(1000,1000) > >>> b = linalg.inv(a) > >>> > >>> profilerun('timewaste()') > > Example output: > 7 function calls in 0.917 CPU seconds > > Ordered by: internal time, call count > > ncalls tottime percall cumtime percall filename:lineno(function) > 1 0.916 0.916 0.917 0.917 basic.py:176(inv) > 1 0.000 0.000 0.000 0.000 > function_base.py:162(asarray_chkfinite) > 1 0.000 0.000 0.917 0.917 console>:1(timewaste) > 1 0.000 0.000 0.000 0.000 > __init__.py:28(get_lapack_funcs) > 1 0.000 0.000 0.000 0.000 _internal.py:28(__init__) > 1 0.000 0.000 0.000 0.000 numeric.py:70(asarray) > 1 0.000 0.000 0.000 0.000 > _internal.py:36(__getitem__) > 0 0.000 0.000 profile:0(profiler) > > """ > prof = hotshot.Profile(logfilename) > output = prof.run(function) > print "Output of function is:" > print output > prof.close() > stats = hotshot.stats.load(logfilename) > stats.strip_dirs() > stats.sort_stats('time', 'calls') > stats.print_stats() > > > From steffen.loeck at gmx.de Thu May 4 06:59:14 2006 From: steffen.loeck at gmx.de (Steffen Loeck) Date: Thu May 4 06:59:14 2006 Subject: [Numpy-discussion] Scalar math module is ready for testing In-Reply-To: <4451C076.40608@ieee.org> References: <4451C076.40608@ieee.org> Message-ID: <200605041557.15172.steffen.loeck@gmx.de> > The scalar math module is complete and ready to be tested. It should > speed up code that relies heavily on scalar arithmetic by by-passing the > ufunc machinery. > > It needs lots of testing to be sure that it is doing the "right" > thing. To enable scalarmath you need to > > import numpy.core.scalarmath After changing some code to numpy/new scipy my programs slowed down remarkably, so I did some comparison between Numeric, numpy and math using timeit.py.
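The ``timeit.py`` command lines that follow can also be driven from the ``timeit`` module API inside a script, which avoids shelling out for each measurement. A small sketch of the same pattern using a plain ``math.sin`` loop (the Numeric/numpy variants differ only in the setup string):

```python
import timeit

# Module-API equivalent of:
#   timeit.py -s "from math import sin; x=0.1" "for i in range(9): x = sin(x)"
timer = timeit.Timer(
    stmt="for i in range(9): x = sin(x)",
    setup="from math import sin; x = 0.1",
)

number = 10000
usec_per_loop = timer.timeit(number=number) / number * 1e6
print("%g usec per loop" % usec_per_loop)
```

``Timer.timeit`` returns the total wall-clock time for ``number`` executions, so dividing by ``number`` gives the per-loop figure quoted in the tables below.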
At first the sin(x) function was tested:

Results (in usec per loop):
                               sin-scalar    sin-array
 Numeric                          118           130
 numpy                             66.3         191
 numpy + scalarmath                65.8         112
 numpy + math                      12.3         143
 numpy + math + scalarmath         14.7          37.8
 Numeric + math                    14.8          22.3

The scripts are shown at the end. So using numpy.core.scalarmath
improves speed. The fastest way however is using Numeric and the sin
function from math.

Second is the test of the modulo operation %:
                               %-array
 Numeric                          17.1
 numpy                           299
 numpy + scalarmath               55.6

Things get faster using scalarmath but are still four times slower than
with Numeric. Is there any possibility to speed up the modulo operation?

scripts:

Numeric:
/usr/lib/python2.3/timeit.py \
 -s "from Numeric import sin,arange; x=0.1" "for i in arange(9): x=sin(x)"
/usr/lib/python2.3/timeit.py \
 -s "from Numeric import sin,zeros,arange; x=zeros(10, 'd'); x[0]=0.1"\
 "for i in arange(9): x[i+1]=sin(x[i])"
/usr/lib/python2.3/timeit.py \
 -s "from Numeric import zeros,arange; x=zeros(10, 'd'); x[0]=0.1"\
 "for i in arange(9): x[i+1]=(x[i]+1.1)%(1.0)"

numpy:
/usr/lib/python2.3/timeit.py \
 -s "from numpy import sin,arange; x=0.1" "for i in arange(9): x=sin(x)"
/usr/lib/python2.3/timeit.py \
 -s "from numpy import sin,zeros,arange; x=zeros(10, 'd'); x[0]=0.1"\
 "for i in arange(9): x[i+1]=sin(x[i])"
/usr/lib/python2.3/timeit.py \
 -s "from numpy import zeros,arange; x=zeros(10, 'd'); x[0]=0.1"\
 "for i in arange(9): x[i+1]=(x[i]+1.1)%(1.0)"

numpy + scalarmath:
/usr/lib/python2.3/timeit.py \
 -s "import numpy.core.scalarmath; from numpy import sin; x=0.1"\
 "for i in xrange(9): x=sin(x)"
/usr/lib/python2.3/timeit.py \
 -s "import numpy.core.scalarmath; from numpy import sin,zeros,arange; x=zeros(10, 'd'); x[0]=0.1"\
 "for i in arange(9): x[i+1]=sin(x[i])"
/usr/lib/python2.3/timeit.py \
 -s "import numpy.core.scalarmath; from numpy import zeros,arange; x=zeros(10, 'd'); x[0]=0.1"\
 "for i in arange(9): x[i+1]=(x[i]+1.1)%(1.0)"

numpy + math:
/usr/lib/python2.3/timeit.py \
 -s "from math import sin; from
numpy import arange; x=0.1"\
 "for i in arange(9): x=sin(x)"
/usr/lib/python2.3/timeit.py \
 -s "from math import sin; from numpy import zeros,arange; x=zeros(10, 'd'); x[0]=0.1"\
 "for i in arange(9): x[i+1]=sin(x[i])"

numpy + scalarmath + math:
/usr/lib/python2.3/timeit.py \
 -s "import numpy.core.scalarmath; from math import sin; from numpy import arange; x=0.1"\
 "for i in arange(9): x=sin(x)"
/usr/lib/python2.3/timeit.py \
 -s "import numpy.core.scalarmath; from math import sin; from numpy import zeros,arange; x=zeros(10, 'd'); x[0]=0.1"\
 "for i in arange(9): x[i+1]=sin(x[i])"

Numeric + math:
/usr/lib/python2.3/timeit.py \
 -s "from math import sin; from Numeric import arange; x=0.1"\
 "for i in arange(9): x=sin(x)"
/usr/lib/python2.3/timeit.py \
 -s "from math import sin; from Numeric import zeros,arange; x=zeros(10, 'd'); x[0]=0.1"\
 "for i in arange(9): x[i+1]=sin(x[i])"

Regards,
Steffen

From stefan at sun.ac.za Thu May 4 09:08:05 2006
From: stefan at sun.ac.za (Stefan van der Walt)
Date: Thu May 4 09:08:05 2006
Subject: [Numpy-discussion] memory de-allocation problems running 'digitize'
Message-ID: <20060504160705.GA15868@mentat.za.net>

Hi everyone,

Albert Strasheim and I found a memory de-allocation error in
_compiled_base_.c, which manifests itself when running 'digitize'. This
code triggers the bug:

import numpy as N

for i in range(100):
    N.digitize([1,2,3,4],[1,3])
    N.digitize([0,1,2,3,4],[1,3])

The ticket, with valgrind output, is filed at

http://projects.scipy.org/scipy/numpy/ticket/95

A docstring describing what 'digitize' does would also be a useful
addition.

Regards
Stéfan

From fperez.net at gmail.com Thu May 4 09:17:10 2006
From: fperez.net at gmail.com (Fernando Perez)
Date: Thu May 4 09:17:10 2006
Subject: [Numpy-discussion] profiling code with hotshot and Numeric/numpy/scipy/...
In-Reply-To: 
References: <4459B603.2060304@ftw.at>
Message-ID: 

On 5/4/06, Arnd Baecker wrote:

> Thanks for the wrapper - but it seems that it does not help in my case:
> I replaced in my script
>     prof.runcall(main)
> by
>     prof.run("main()")
> and still see the same output, i.e. no information in the
> Numeric/numpy/scipy cases.

Have you tried in ipython %prun or '%run -p'? The first will run a
single statement, the second a whole script, under the control of the
OLD python profiler (not hotshot). While the 'profile' module lacks
some of the niceties of hotshot, it may at least work here.

Not a permanent solution, but it could get you moving.

Cheers,

f

From jonathan.taylor at stanford.edu Thu May 4 10:48:05 2006
From: jonathan.taylor at stanford.edu (Jonathan Taylor)
Date: Thu May 4 10:48:05 2006
Subject: [Numpy-discussion] recarray behaviour
Message-ID: <445A3E32.2000405@stanford.edu>

I posted a message to this list about a weird recarray error (and sent a
second one with the actual pickled data that failed for me). Just
wondering if anyone is/was able to reproduce this error.

In recarray.py and recarrayII.py I have the exact same dtype description
and try to create an array with the exact same list (their equality
evaluates to True, and printed they are identical), but one raises an
error, the other doesn't. Further, I can't use pdb to investigate the
error...

I have another example (recarrayIII.py) that gives this same error and
behaves rather unpythonically (if that really is a word....). I couldn't
figure out what was going on based on the recarray description in the
numpy book.
=========================================
jtaylo at kff:~/Desktop$ python recarrayII.py
[(12.0, (2005, 4, 22, 0, 0, 0, 4, 112, -1), 501.0, 1.0, 2.0, 0.0, 0.0,
1.0, 91.5, 1.0, 1.0, 87.0, 1.0, 129.0, 76.0, 107.0, 11.0), (24.0, (2005,
2, 1, 0, 0, 0, 1, 32, -1), 504.0, 1.0, 2.0, 0.0, 0.0, 1.0, 166.0, 2.0,
1.0, 84.0, 1.0, 128.0, 78.0, 401.0, 7.0)]
[('Week', '
-------------- next part --------------
A non-text attachment was scrubbed...
Name: recarrayII.py
Type: text/x-python
Size: 135 bytes
Desc: not available
URL: 
-------------- next part --------------
An embedded and charset-unspecified text was scrubbed...
Name: dump.pickle
URL: 
-------------- next part --------------
A non-text attachment was scrubbed...
Name: recarrayIII.py
Type: text/x-python
Size: 148 bytes
Desc: not available
URL: 
-------------- next part --------------
A non-text attachment was scrubbed...
Name: jonathan.taylor.vcf
Type: text/x-vcard
Size: 329 bytes
Desc: not available
URL: 

From arnd.baecker at web.de Thu May 4 11:38:01 2006
From: arnd.baecker at web.de (Arnd Baecker)
Date: Thu May 4 11:38:01 2006
Subject: [Numpy-discussion] profiling code with hotshot and Numeric/numpy/scipy/...
In-Reply-To: 
References: <4459B603.2060304@ftw.at>
Message-ID: 

On Thu, 4 May 2006, Fernando Perez wrote:

> On 5/4/06, Arnd Baecker wrote:
>
> > Thanks for the wrapper - but it seems that it does not help in my case:
> > I replaced in my script
> >     prof.runcall(main)
> > by
> >     prof.run("main()")
> > and still see the same output, i.e. no information in the
> > Numeric/numpy/scipy cases.
>
> Have you tried in ipython %prun or '%run -p'? The first will run a
> single statement, the second a whole script, under the control of the
> OLD python profiler (not hotshot). While the 'profile' module lacks
> some of the niceties of hotshot, it may at least work here.
I use hotshot in combination with hotshot2cachegrind and kcachegrind
to get a nice visual display and line-by-line profilings
(http://mail.enthought.com/pipermail/enthought-dev/2006-January/001075.html
has all the links ;-).
As far as I know this is not possible with the "old profiler". So yes

> Not a permanent solution, but it could get you moving.

At the moment even timeit.py is sufficient, but `prun` is indeed a very
convenient option! However, the output of this is also puzzling, ranging
from 6 (almost useless) lines for Numeric to an overwhelming 6 screens
(also useless?) for numpy (see below).
((Also, if you vary the order of the tests you also get different
results...))

Ouch, this looks like a mess - I know benchmarking/profiling is
considered to be tricky - but that tricky?

Many thanks, Arnd

#############################################
Numeric version: 24.2

         4 function calls in 0.490 CPU seconds

   Ordered by: internal time

   ncalls  tottime  percall  cumtime  percall filename:lineno(function)
        1    0.480    0.480    0.480    0.480 test_profile_problem_via_ipython.py:38(main)
        1    0.010    0.010    0.490    0.490 :1(?)
        1    0.000    0.000    0.480    0.480 test_profile_problem_via_ipython.py:7(?)
        0    0.000             0.000          profile:0(profiler)
        1    0.000    0.000    0.490    0.490 profile:0(execfile(filename,prog_ns))

#############################################
scipy version: 0.3.2

         18004 function calls in 1.080 CPU seconds

   Ordered by: internal time

   ncalls  tottime  percall  cumtime  percall filename:lineno(function)
        1    0.380    0.380    1.080    1.080 test_profile_problem_via_ipython.py:38(main)
     1000    0.230    0.000    0.460    0.000 scimath.py:31(log)
     2000    0.090    0.000    0.190    0.000 type_check.py:96(isreal)
     2000    0.080    0.000    0.080    0.000 Numeric.py:655(sometrue)
     2000    0.070    0.000    0.190    0.000 function_base.py:36(any)
     6000    0.070    0.000    0.070    0.000 type_check.py:12(asarray)
     2000    0.070    0.000    0.080    0.000 type_check.py:86(imag)
     1000    0.050    0.000    0.240    0.000 scimath.py:25(sqrt)
     2000    0.040    0.000    0.040    0.000 Numeric.py:583(ravel)
        0    0.000             0.000          profile:0(profiler)
        1    0.000    0.000    1.080    1.080 profile:0(execfile(filename,prog_ns))
        1    0.000    0.000    1.080    1.080 test_profile_problem_via_ipython.py:7(?)
        1    0.000    0.000    1.080    1.080 :1(?)

#############################################
numpy version: 0.9.7.2256

         33673 function calls (31376 primitive calls) in 1.380 CPU seconds

   Ordered by: internal time

   ncalls  tottime  percall  cumtime  percall filename:lineno(function)
        1    0.480    0.480    0.480    0.480 test_profile_problem_via_ipython.py:43(main)
     1438    0.130    0.000    0.130    0.000 posixpath.py:171(exists)
     3523    0.090    0.000    0.130    0.000 sre_parse.py:206(get)
   486/59    0.080    0.000    0.380    0.006 sre_parse.py:367(_parse)
   886/59    0.060    0.000    0.100    0.002 sre_compile.py:24(_compile)
   364/59    0.060    0.000    0.380    0.006 sre_parse.py:312(_parse_sub)
     4426    0.050    0.000    0.050    0.000 sre_parse.py:187(__next)
 1032/408    0.050    0.000    0.050    0.000 sre_parse.py:147(getwidth)
       81    0.040    0.000    0.040    0.000 auxfuncs.py:253(l_and)
   150/94    0.030    0.000    0.190    0.002 glob.py:9(glob)
     2269    0.030    0.000    0.040    0.000 sre_parse.py:200(match)
      291    0.020    0.000    0.020    0.000 sre_compile.py:180(_optimize_charset)
     2228    0.020    0.000    0.020    0.000 sre_parse.py:145(append)
        1    0.020    0.020    0.020    0.020 linalg.py:3(?)
      886    0.020    0.000    0.020    0.000 sre_parse.py:98(__init__)
       59    0.020    0.000    0.020    0.000 sre_parse.py:75(__init__)
       59    0.020    0.000    0.060    0.001 sre_compile.py:331(_compile_info)
       79    0.010    0.000    0.010    0.000 auxfuncs.py:265(l_not)
      111    0.010    0.000    0.010    0.000 sre_parse.py:221(isname)
     1400    0.010    0.000    0.010    0.000 glob.py:49()
       30    0.010    0.000    0.010    0.000 numerictypes.py:284(_add_array_type)
        1    0.010    0.010    0.010    0.010 ma.py:9(?)
      349    0.010    0.000    0.020    0.000 sre_compile.py:324(_simple)
       56    0.010    0.000    0.020    0.000 glob.py:42(glob1)
       40    0.010    0.000    0.010    0.000 sre_parse.py:135(__delitem__)
     17/1    0.010    0.001    1.380    1.380 :1(?)
        1    0.010    0.010    0.530    0.530 crackfortran.py:14(?)
        1    0.010    0.010    0.010    0.010 defmatrix.py:2(?)
        1    0.010    0.010    0.010    0.010 numeric.py:1(?)
      349    0.010    0.000    0.010    0.000 sre_parse.py:141(__getslice__)
       55    0.010    0.000    0.010    0.000 posixpath.py:117(dirname)
     2914    0.010    0.000    0.010    0.000 posixpath.py:56(join)
        1    0.010    0.010    0.020    0.020 capi_maps.py:12(?)
        1    0.000    0.000    0.000    0.000 records.py:1(?)
        1    0.000    0.000    0.000    0.000 numerictypes.py:161(_add_aliases)
        1    0.000    0.000    0.000    0.000 unittest.py:640(TextTestRunner)
        1    0.000    0.000    0.000    0.000 ma.py:76(default_fill_value)
        1    0.000    0.000    0.000    0.000 arraysetops.py:26(?)
        1    0.000    0.000    0.000    0.000 function_base.py:2(?)
        1    0.000    0.000    0.000    0.000 index_tricks.py:343(_index_expression_class)
        1    0.000    0.000    0.000    0.000 ma.py:263(__init__)
      473    0.000    0.000    0.000    0.000 sre_parse.py:269(_escape)
        1    0.000    0.000    0.000    0.000 ma.py:320(domain_safe_divide)
        1    0.000    0.000    0.000    0.000 ma.py:38(_MaskedPrintOption)
        1    0.000    0.000    0.000    0.000 ma.py:261(domain_tan)
        1    0.000    0.000    0.100    0.100 _import_tools.py:301(get_pkgdocs)
        2    0.000    0.000    0.000    0.000 info.py:27(?)
        1    0.000    0.000    0.310    0.310 __init__.py:15(?)
        2    0.000    0.000    0.000    0.000 UserDict.py:41(has_key)
        1    0.000    0.000    0.000    0.000 __config__.py:3(?)
       16    0.000    0.000    0.000    0.000 auxfuncs.py:259(l_or)
        1    0.000    0.000    0.010    0.010 numerictypes.py:76(?)
        1    0.000    0.000    0.000    0.000 cfuncs.py:15(?)
        1    0.000    0.000    0.000    0.000 __version__.py:1(?)
        1    0.000    0.000    0.000    0.000 copy.py:389(_EmptyClass)
       47    0.000    0.000    0.000    0.000 ma.py:2104(_m)
        1    0.000    0.000    0.000    0.000 records.py:42(format_parser)
       56    0.000    0.000    0.000    0.000 posixpath.py:39(normcase)
        1    0.000    0.000    0.000    0.000 numpytest.py:180(_SciPyTextTestResult)
        1    0.000    0.000    0.000    0.000 numpytest.py:99(_dummy_stream)
        1    0.000    0.000    0.000    0.000 numeric.py:484(_setdef)
        2    0.000    0.000    0.000    0.000 __svn_version__.py:1(?)
        1    0.000    0.000    1.380    1.380 test_profile_problem_via_ipython.py:12(?)
        1    0.000    0.000    0.000    0.000 index_tricks.py:229(r_class)
        2    0.000    0.000    0.000    0.000 ma.py:496(size)
        2    0.000    0.000    0.000    0.000 _import_tools.py:280(_format_titles)
        5    0.000    0.000    0.000    0.000 ma.py:209(filled)
        1    0.000    0.000    0.000    0.000 add_newdocs.py:2(?)
        1    0.000    0.000    0.000    0.000 ma.py:1976(_maximum_operation)
        2    0.000    0.000    0.000    0.000 ma.py:493(shape)
        1    0.000    0.000    0.000    0.000 _import_tools.py:92(_get_sorted_names)
        3    0.000    0.000    0.000    0.000 info.py:3(?)
        6    0.000    0.000    0.000    0.000 sre_parse.py:218(isdigit)
       16    0.000    0.000    0.000    0.000 _import_tools.py:258(log)
        1    0.000    0.000    0.000    0.000 ufunclike.py:4(?)
       59    0.000    0.000    0.000    0.000 sre_parse.py:183(__init__)
        2    0.000    0.000    0.000    0.000 sre_compile.py:229(_mk_bitmap)
        1    0.000    0.000    0.000    0.000 ma.py:29(MAError)
        1    0.000    0.000    0.000    0.000 index_tricks.py:241(c_class)
        1    0.000    0.000    0.000    0.000 auxfuncs.py:247(__init__)
        1    0.000    0.000    0.000    0.000 unittest.py:78(TestResult)
        1    0.000    0.000    0.000    0.000 index_tricks.py:278(ndindex)
        1    0.000    0.000    0.000    0.000 fileinput.py:174(FileInput)
        1    0.000    0.000    0.000    0.000 copy.py:54(Error)
        2    0.000    0.000    0.000    0.000 info.py:28(?)
       18    0.000    0.000    0.000    0.000 numerictypes.py:93(_evalname)
        0    0.000             0.000          profile:0(profiler)
        1    0.000    0.000    0.000    0.000 machar.py:8(?)
        1    0.000    0.000    0.000    0.000 numpytest.py:197(SciPyTextTestRunner)
       21    0.000    0.000    0.000    0.000 numerictypes.py:349(obj2sctype)
        1    0.000    0.000    0.000    0.000 _import_tools.py:331(PackageLoaderDebug)
       97    0.000    0.000    0.000    0.000 string.py:125(join)
        1    0.000    0.000    0.210    0.210 _import_tools.py:118(__call__)
        1    0.000    0.000    0.000    0.000 version.py:1(?)
      349    0.000    0.000    0.000    0.000 sre_parse.py:139(__setitem__)
        1    0.000    0.000    0.000    0.000 unittest.py:136(TestCase)
        1    0.000    0.000    0.000    0.000 arrayprint.py:4(?)
       14    0.000    0.000    0.110    0.008 _import_tools.py:236(_execcmd)
        1    0.000    0.000    0.000    0.000 ma.py:499(MaskedArray)
        1    0.000    0.000    0.000    0.000 stat.py:54(S_ISREG)
        1    0.000    0.000    0.590    0.590 __init__.py:3(?)
        2    0.000    0.000    0.000    0.000 UserDict.py:50(get)
       59    0.000    0.000    0.560    0.009 sre.py:216(_compile)
        1    0.000    0.000    0.000    0.000 index_tricks.py:145(concatenator)
        1    0.000    0.000    0.000    0.000 unittest.py:686(TestProgram)
        1    0.000    0.000    0.000    0.000 index_tricks.py:3(?)
        1    0.000    0.000    0.000    0.000 utils.py:1(?)
        1    0.000    0.000    0.000    0.000 oldnumeric.py:3(?)
        1    0.000    0.000    0.000    0.000 index_tricks.py:250(__init__)
        1    0.000    0.000    0.000    0.000 auxfuncs.py:15(?)
        1    0.000    0.000    0.000    0.000 fnmatch.py:72(translate)
        1    0.000    0.000    0.000    0.000 function_base.py:847(add_newdoc)
       15    0.000    0.000    0.000    0.000 auxfuncs.py:393(gentitle)
        2    0.000    0.000    0.000    0.000 index_tricks.py:82(__init__)
        1    0.000    0.000    0.000    0.000 auxfuncs.py:243(F2PYError)
        1    0.000    0.000    0.000    0.000 twodim_base.py:3(?)
        8    0.000    0.000    0.000    0.000 _import_tools.py:268(_get_doc_title)
        1    0.000    0.000    0.000    0.000 records.py:133(recarray)
        1    0.000    0.000    0.000    0.000 defchararray.py:16(chararray)
        2    0.000    0.000    0.000    0.000 oldnumeric.py:552(size)
      264    0.000    0.000    0.000    0.000 sre_parse.py:80(opengroup)
   [...]
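PS: if hotshot keeps producing empty tables, the same kind of
per-function table can also be collected by driving the pure-Python
profiler and pstats directly. A minimal sketch - the main() below is only
a made-up stand-in for the real test script's entry point, not the script
discussed above:

```python
# Profile a single callable with the stdlib profiler, then print a
# hotshot.stats-style table.  main() is a hypothetical workload.
import profile
import pstats

def main():
    total = 0.0
    for i in range(100000):
        total += i ** 0.5
    return total

p = profile.Profile()
result = p.runcall(main)           # profile one call, like hotshot's runcall
stats = pstats.Stats(p)            # read the stats straight off the profiler
stats.strip_dirs()
stats.sort_stats('time', 'calls')  # same ordering as the tables above
stats.print_stats(10)              # show the ten most expensive entries
```

This skips the temp-file round trip through hotshot.stats.load, at the
cost of the old profiler's higher per-call overhead.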
From david.huard at gmail.com Thu May 4 12:10:03 2006
From: david.huard at gmail.com (David Huard)
Date: Thu May 4 12:10:03 2006
Subject: [Numpy-discussion] recarray behaviour
In-Reply-To: <445A3E32.2000405@stanford.edu>
References: <445A3E32.2000405@stanford.edu>
Message-ID: <91cf711d0605041209s4ad089c3n7a679a9c6f7e40ce@mail.gmail.com>

I couldn't reproduce the error message you got. The scripts you provided
seem to lack the pickle part. However, my guess is that you dumped an
array object using pickle, and tried to retrieve it later on. For me this
has never worked. I believe you should use the load and dump functions
provided by numpy.

If this does not solve your problem, please send one complete script that
brings it up.

David
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From andorxor at gmx.de Thu May 4 12:13:12 2006
From: andorxor at gmx.de (Stephan Tolksdorf)
Date: Thu May 4 12:13:12 2006
Subject: [Numpy-discussion] System_info options for setup.py
Message-ID: <445A5226.3060305@gmx.de>

Hi,

is there any information besides scipy.org/FAQ and
Installing_SciPy/Windows on building NumPy (on Windows)?

Is there any documentation of the various system_info options
configurable in site.cfg other than (the uncommented) system_info.py?
More specifically, what do the various BLAS/LAPACK options stand for?
Should I use atlas or blas_opt/lapack_opt? Is amd an option to directly
support ACML?

Can I build NumPy with BLAS only, without LAPACK?

It should be possible to combine ATLAS/LAPACK compiled with GCC/G77 and
Numpy with Visual C++, right?

Thanks in advance for any help. I'll try to update the wiki with the
information I get.
Regards,
Stephan

From fullung at gmail.com Thu May 4 13:59:04 2006
From: fullung at gmail.com (Albert Strasheim)
Date: Thu May 4 13:59:04 2006
Subject: [Numpy-discussion] API inconsistencies
Message-ID: <03f101c66fbd$79377ee0$0a84a8c0@dsp.sun.ac.za>

Hello all

I noticed some inconsistencies in the NumPy API that might warrant some
attention. On the one hand we have functions like the following:

empty((d1,...,dn),dtype=int,order='C')
empty((d1,...,dn),dtype=int,order='C')
ones(shape, dtype=int_) (no order?)

whereas the functions for generating random matrices look like this:

rand(d0, d1, ..., dn)
randn(d0, d1, ..., dn)

Was this done for a specific reason? Numeric compatibility perhaps? Is
this something that should be changed?

Thanks!

Regards,
Albert

From robert.kern at gmail.com Thu May 4 14:09:02 2006
From: robert.kern at gmail.com (Robert Kern)
Date: Thu May 4 14:09:02 2006
Subject: [Numpy-discussion] Re: API inconsistencies
In-Reply-To: <03f101c66fbd$79377ee0$0a84a8c0@dsp.sun.ac.za>
References: <03f101c66fbd$79377ee0$0a84a8c0@dsp.sun.ac.za>
Message-ID: 

Albert Strasheim wrote:
> Hello all
>
> I noticed some inconsistencies in the NumPy API that might warrant some
> attention. On the one hand we have functions like the following:
>
> empty((d1,...,dn),dtype=int,order='C')
> empty((d1,...,dn),dtype=int,order='C')
> ones(shape, dtype=int_) (no order?)
>
> whereas the functions for generating random matrices look like this:
>
> rand(d0, d1, ..., dn)
> randn(d0, d1, ..., dn)
>
> Was this done for a specific reason? Numeric compatibility perhaps?

Mostly. rand() and randn() are convenience functions anyways, so they
present an API that is convenient for their particular task.

> Is this
> something that should be changed?

Not unless you want to be responsible for breaking substantial amounts
of code.
-- 
Robert Kern

"I have come to believe that the whole world is an enigma, a harmless
enigma that is made terrible by our own mad attempt to interpret it as
though it had an underlying truth."
  -- Umberto Eco

From fullung at gmail.com Thu May 4 14:11:08 2006
From: fullung at gmail.com (Albert Strasheim)
Date: Thu May 4 14:11:08 2006
Subject: [Numpy-discussion] System_info options for setup.py
In-Reply-To: <445A5226.3060305@gmx.de>
Message-ID: <03f201c66fbf$393acf70$0a84a8c0@dsp.sun.ac.za>

Hello Stephan

> -----Original Message-----
> From: numpy-discussion-admin at lists.sourceforge.net [mailto:numpy-
> discussion-admin at lists.sourceforge.net] On Behalf Of Stephan Tolksdorf
> Sent: 04 May 2006 21:13
> To: numpy-discussion at lists.sourceforge.net
> Subject: [Numpy-discussion] System_info options for setup.py
>
> Hi,
>
> is there any information besides scipy.org/FAQ and
> Installing_SciPy/Windows on building NumPy (on Windows)?
>
> Is there any documentation of the various system_info options
> configurable in site.cfg other than (the uncommented) system_info.py?
> More specifically, what do the various BLAS/LAPACK options stand for?
> Should I use atlas or blas_opt/lapack_opt? Is amd an option to directly
> support ACML?
> Can I build NumPy with BLAS only, without LAPACK?
> It should be possible to combine ATLAS/LAPACK compiled with GCC/G77 and
> Numpy with Visual C++, right?

I can confirm that this is possible. However, I noticed that my system is
actually performing really badly with NumPy built in this way, so
something probably went awry somewhere in my build. I'm investigating the
issue.

Anyway, the build goes something like this:

Build ATLAS with GCC/G77. This is described in the ATLAS FAQ.

Build FLAPACK from sources. To do this, I used NumPy's lapack_src option.
When the build crashes due to too many open files, restart it. Hopefully
you'll get a liblapack.a out here somewhere. Throw this somewhere with
your ATLAS libs. Rename libatlas.a to atlas.lib, etc.
and in site.cfg in numpy/numpy/distutils put the following:

[atlas]
library_dirs = C:\home\albert\work2\ATLAS\lib\WinNT_P4SSE2
atlas_libs = lapack, f77blas, cblas, atlas, g2c

This LAPACK library is the complete LAPACK library. ATLAS provides some
LAPACK functions. To use these instead of the ones in FLAPACK, use ar x
to unpack the object files from the (tiny) LAPACK library made by ATLAS
and ar r to pack them back into the complete LAPACK library.

You need to grab libg2c.a from a Cygwin or MinGW install if I remember
correctly.

Now build with something like this:

copy site.cfg c:\home\albert\work2\numpy\numpy\distutils
cd /d "c:\home\albert\work2\numpy"
del /f /s /q build
del /f /s /q dist
del /f /s /q numpy\core\__svn_version__.py
del /f /s /q numpy\f2py\__svn_version__.py
python setup.py config --compiler=msvc build --compiler=msvc bdist_wininst

I also tried building with CLAPACK but this effort ended in tears. I'll
try it again another day. If you want to try, you need to build libI77,
libF77 and the LAPACK library using the solution files provided on the
netlib site. You might also need to add blaswr.c (might be cblaswr.c,
included with the CLAPACK sources) in there somewhere to get CLAPACK to
call ATLAS BLAS.

Other things I've noticed: you can reasonably link against GCC-compiled
libraries with MSVC, but using MSVC libraries with GCC can cause link or
other problems. Also, GCC-compiled code can't be debugged with the MSVC
debugger as far as I can tell.

If anybody else has some suggestions for easing this process, I'd love to
hear from you.

Regards,
Albert

From fullung at gmail.com Thu May 4 14:37:10 2006
From: fullung at gmail.com (Albert Strasheim)
Date: Thu May 4 14:37:10 2006
Subject: [Numpy-discussion] scalarmathmodule changes broke MSVC build; patch available
Message-ID: <03fc01c66fc2$daabd270$0a84a8c0@dsp.sun.ac.za>

Hello all

Some changes to scalarmathmodule.c.src have broken the build with at
least MSVC 7.1.
The generated .c contains code like this:

static PyObject *
cfloat_power(PyObject *a, PyObject *b, PyObject *c)
{
    PyObject *ret;
    cfloat arg1, arg2, out;
#if 1
    cfloat out1;
    out1.real = out.imag = 0;
#else
    cfloat out1=0;
#endif
    int retstatus;

As far as I know, this is not valid C89 (a declaration following a
statement), which is what MSVC insists on. Ticket with patch here:

http://projects.scipy.org/scipy/numpy/ticket/96

By the way, how about setting up buildbot so that we can avoid these
problems in future? I'd be very happy to maintain build slaves for a few
Windows configurations.

Regards,
Albert

From fullung at gmail.com Thu May 4 14:51:06 2006
From: fullung at gmail.com (Albert Strasheim)
Date: Thu May 4 14:51:06 2006
Subject: [Numpy-discussion] Matrix multiply benchmark
Message-ID: <03fd01c66fc4$c058b530$0a84a8c0@dsp.sun.ac.za>

Hello all

My current work involves multiplication of some rather large matrices and
vectors, so I was wondering about benchmarks for figuring out how fast
NumPy is multiplying.

Matrix Toolkits for Java (MTJ) has a Java Native Interface for calling
through to ATLAS, MKL and friends. Some benchmark results with some
interesting graphs are here:

http://rs.cipr.uib.no/mtj/benchmark.html

There is also some Java code for measuring the number of floating point
operations per second here:

http://rs.cipr.uib.no/mtj/bench/NNIGEMM.html

I attempted to adapt this code to Python (suggestions and fixes welcome).
My attempt at benchmarking general matrix-matrix multiplication:

#!/usr/bin/env python

import time

import numpy as N

print N.__version__
print N.__config__.blas_opt_info

for n in range(50,501,10):
    A = N.rand(n,n)
    B = N.rand(n,n)
    C = N.empty_like(A)
    alpha = N.rand()
    beta = N.rand()
    if n < 100:
        r = 100
    else:
        r = 10
    # this gets the cache warmed up?
    for i in range(10):
        C[:,:] = N.dot(alpha*A, beta*B)
    t1 = time.clock()
    for i in range(r):
        C[:,:] = N.dot(alpha*A, beta*B)
    t2 = time.clock()
    s = t2 - t1
    f = 2 * (n + 1) * n * n
    mfs = (f / (s * 1000000.)) * r
    print '%d %f' % (n, mfs)

I think you might want to make r a bit larger to get more accurate
results for smaller matrices, depending on your CPU speed.

Is this benchmark comparable to the MTJ benchmark? NumPy might not be
performing the same operations. MTJ probably uses BLAS to do the scaling
and multiplication with one function call to dgemm.

By the way, is there a better way of assigning into a preallocated
matrix?

I eagerly await your comments and/or results.

Regards,
Albert

From stefan at sun.ac.za Thu May 4 14:57:19 2006
From: stefan at sun.ac.za (Stefan van der Walt)
Date: Thu May 4 14:57:19 2006
Subject: [Numpy-discussion] Re: API inconsistencies
In-Reply-To: 
References: <03f101c66fbd$79377ee0$0a84a8c0@dsp.sun.ac.za>
Message-ID: <20060504215627.GC11436@mentat.za.net>

On Thu, May 04, 2006 at 04:08:04PM -0500, Robert Kern wrote:
> Albert Strasheim wrote:
> > Hello all
> >
> > I noticed some inconsistencies in the NumPy API that might warrant some
> > attention. On the one hand we have functions like the following:
> >
> > empty((d1,...,dn),dtype=int,order='C')
> > empty((d1,...,dn),dtype=int,order='C')
> > ones(shape, dtype=int_) (no order?)
> >
> > whereas the functions for generating random matrices look like this:
> >
> > rand(d0, d1, ..., dn)
> > randn(d0, d1, ..., dn)
> >
> > Was this done for a specific reason? Numeric compatibility perhaps?
>
> Mostly. rand() and randn() are convenience functions anyways, so they
> present an API that is convenient for their particular task.
>
> > Is this
> > something that should be changed?
>
> Not unless you want to be responsible for breaking substantial amounts
> of code.

Why can't we modify the code to accept a new calling convention, in
addition to the older one?
Maybe try to run standard_normal(args), but if it throws a TypeError
call standard_normal(*args).

Regards
Stéfan

From robert.kern at gmail.com Thu May 4 15:08:09 2006
From: robert.kern at gmail.com (Robert Kern)
Date: Thu May 4 15:08:09 2006
Subject: [Numpy-discussion] Re: API inconsistencies
In-Reply-To: <20060504215627.GC11436@mentat.za.net>
References: <03f101c66fbd$79377ee0$0a84a8c0@dsp.sun.ac.za>
 <20060504215627.GC11436@mentat.za.net>
Message-ID: 

Stefan van der Walt wrote:

> Maybe try to run standard_normal(args), but if it throws a TypeError
> call standard_normal(*args).

I prefer that a single function has a single calling convention, and
that we do not try to guess what the user wants.

If you want the convention (mostly) consistent with zeros() and ones()
that uses a tuple, use numpy.random.standard_normal(). If you want the
often-convenient convention that uses multiple arguments, use randn().

numpy.random.standard_normal(shape) == numpy.randn(*shape)
numpy.random.random(shape) == numpy.rand(*shape)

-- 
Robert Kern

"I have come to believe that the whole world is an enigma, a harmless
enigma that is made terrible by our own mad attempt to interpret it as
though it had an underlying truth."
  -- Umberto Eco

From fullung at gmail.com Thu May 4 15:12:05 2006
From: fullung at gmail.com (Albert Strasheim)
Date: Thu May 4 15:12:05 2006
Subject: [Numpy-discussion] Re: API inconsistencies
In-Reply-To: 
Message-ID: <03fe01c66fc7$b28a7030$0a84a8c0@dsp.sun.ac.za>

Hey Robert and list

> -----Original Message-----
> From: numpy-discussion-admin at lists.sourceforge.net [mailto:numpy-
> discussion-admin at lists.sourceforge.net] On Behalf Of Robert Kern
> Sent: 04 May 2006 23:08
> To: numpy-discussion at lists.sourceforge.net
> Subject: [Numpy-discussion] Re: API inconsistencies
>
> Albert Strasheim wrote:
> > Hello all
> >
> > I noticed some inconsistencies in the NumPy API that might warrant some
> > attention.
> > On the one hand we have functions like the following:
> >
> > empty((d1,...,dn),dtype=int,order='C')
> > empty((d1,...,dn),dtype=int,order='C')
> > ones(shape, dtype=int_) (no order?)
> >
> > whereas the functions for generating random matrices look like this:
> >
> > rand(d0, d1, ..., dn)
> > randn(d0, d1, ..., dn)
> >
> > Was this done for a specific reason? Numeric compatibility perhaps?
>
> Mostly. rand() and randn() are convenience functions anyways, so they
> present an
> API that is convenient for their particular task.
>
> > Is this
> > something that should be changed?
>
> Not unless you want to be responsible for breaking substantial amounts
> of code.

What's the current thinking on NumPy backwards compatibility with Numeric
and numarray? Is NumPy 1.0 aiming to be compatible, or will some porting
effort be considered to be the default setting?

In case the answer is the latter, and rand is supposed to be a convenient
way of calling numpy.random.random(someshape), as you said on the
scipy-user list, shouldn't their arguments be the same? Having to choose
between the two inconveniences of typing long names and remembering more
than one convention for calling functions, I'd prefer to choose neither.
But that's just me. ;-)

Regards,
Albert

From cookedm at physics.mcmaster.ca Thu May 4 15:18:08 2006
From: cookedm at physics.mcmaster.ca (David M. Cooke)
Date: Thu May 4 15:18:08 2006
Subject: [Numpy-discussion] Re: numexpr: optimizing pow
In-Reply-To: <440FB3C7.7070704@cox.net> (Tim Hochberg's message of "Wed, 08
 Mar 2006 21:49:11 -0700")
References: <440AF97C.5040106@cox.net> <440CFCA1.70007@cox.net>
 <440DD209.5060900@cox.net> <20060307185127.GA31063@arbutus.physics.mcmaster.ca>
 <440F80A3.8070006@cox.net> <440FB3C7.7070704@cox.net>
Message-ID: 

Tim Hochberg writes:

> I just checked in some changes that do aggressive optimization on the
> pow operator in numexpr. Now all integral and half integral powers
> between [-50 and 50] are computed using multiplies and sqrt.
> (Empirically 50 seemed to be the closest round number to the breakeven
> point.)
>
> I mention this primarily because I think it's cool. But also, it's the
> kind of optimization that I don't think would be feasible in numpy
> itself short of defining a whole pile of special cases, either
> separate ufuncs or separate loops within a single ufunc, one for each
> case that needed optimizing. Otherwise the bookkeeping overhead would
> overwhelm the savings of replacing pow with multiplies.
>
> Now all of the bookkeeping is done in Python, which makes it easy; and
> done once ahead of time and translated into bytecode, which makes it
> fast. The actual code that does the optimization is included below for
> those of you interested enough to care, but not interested enough to
> check it out of the sandbox. It could be made simpler, but I jump
> through some hoops to avoid unnecessary multiplies. For instance,
> starting 'r' as 'OpNode('ones_like', [a])' would simplify things
> significantly, but at the cost of adding an extra multiply in most
> cases.
>
> That brings up an interesting idea. If 'mul' were made smarter, so
> that it recognized OpNode('ones_like', [a]) and ConstantNode(1), then
> not only would that speed some 'mul' cases up, it would simplify the
> code for 'pow' as well. I'll have to look into that tomorrow.

Instead of using a separate ones_like opcode, why don't you just add a
ConstantNode(1) instead?

-- 
|>|\/|<
/--------------------------------------------------------------------------\
|David M.
Cooke http://arbutus.physics.mcmaster.ca/dmc/ |cookedm at physics.mcmaster.ca From robert.kern at gmail.com Thu May 4 15:22:02 2006 From: robert.kern at gmail.com (Robert Kern) Date: Thu May 4 15:22:02 2006 Subject: [Numpy-discussion] Re: API inconsistencies In-Reply-To: <03fe01c66fc7$b28a7030$0a84a8c0@dsp.sun.ac.za> References: <03fe01c66fc7$b28a7030$0a84a8c0@dsp.sun.ac.za> Message-ID: Albert Strasheim wrote: > What's the current thinking on NumPy backwards compatibility with Numeric and > ndarray? Is NumPy 1.0 aiming to be compatible, or will some porting effort > be considered to be the default setting? We are trying to be as compatible with Numeric as possible by using the convertcode.py script. We hope that that will get people 99% of the way there. numarray modules will take some more effort to convert. > In case the answer is the latter, and rand is supposed to be a convenient way > of calling numpy.random.random(someshape), as you said on the scipy-user > list, shouldn't their arguments be the same? Having to choose between the > two inconveniences of typing long names or remembering more than one > convention for calling functions, I'd prefer to choose neither. But that's > just me. ;-) Most of the convenience is the calling convention, not the shorter name. -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From tim.hochberg at cox.net Thu May 4 15:34:12 2006 From: tim.hochberg at cox.net (Tim Hochberg) Date: Thu May 4 15:34:12 2006 Subject: [Numpy-discussion] Re: numexpr: optimizing pow In-Reply-To: References: <440AF97C.5040106@cox.net> <440CFCA1.70007@cox.net> <440DD209.5060900@cox.net> <20060307185127.GA31063@arbutus.physics.mcmaster.ca> <440F80A3.8070006@cox.net> <440FB3C7.7070704@cox.net> Message-ID: <445A8139.2050505@cox.net> David M.
Cooke wrote: >Tim Hochberg writes: > > > >>I just checked in some changes that do aggressive optimization on the >>pow operator in numexpr. Now all integral and half integral powers >>between [-50 and 50] are computed using multiplies and sqrt. >>(Empirically 50 seemed to be the closest round number to the breakeven >>point.) >> >>I mention this primarily because I think it's cool. But also, it's the >>kind of optimization that I don't think would be feasible in numpy >>itself short of defining a whole pile of special cases, either >>separate ufuncs or separate loops within a single ufunc, one for each >>case that needed optimizing. Otherwise the bookkeeping overhead would >>overwhelm the savings of replacing pow with multiplies. >> >>Now all of the bookkeeping is done in Python, which makes it easy; and >>done once ahead of time and translated into bytecode, which makes it >>fast. The actual code that does the optimization is included below for >>those of you interested enough to care, but not interested enough to >>check it out of the sandbox. It could be made simpler, but I jump >>through some hoops to avoid unnecessary multiplies. For instance, >>starting 'r' as 'OpNode('ones_like', [a])' would simplify things >>significantly, but at the cost of adding an extra multiply in most >>cases. >> >>That brings up an interesting idea. If 'mul' were made smarter, so >>that it recognized OpNode('ones_like', [a]) and ConstantNode(1), then >>not only would that speed some 'mul' cases up, it would simplify the >>code for 'pow' as well. I'll have to look into that tomorrow. >> >> > >Instead of using a separate ones_like opcode, why don't you just add a >ConstantNode(1) instead? > > You think I can remember something like that a month or whatever later? You may be giving me too much credit ;-) Hmm..... Ok. I think I remember. IIRC, the reason is that ConstantNode(1) is slightly different than OpNode('ones_like', [a]) in some corner cases.
Specifically pow(a**0) will produce an array of ones in one case, but a scalar 1 in the other case. -tim From fullung at gmail.com Thu May 4 15:36:05 2006 From: fullung at gmail.com (Albert Strasheim) Date: Thu May 4 15:36:05 2006 Subject: [Numpy-discussion] Re: API inconsistencies In-Reply-To: Message-ID: <040501c66fcb$13dea380$0a84a8c0@dsp.sun.ac.za> Robert Kern wrote: > Albert Strasheim wrote: > > > What's the current thinking on NumPy backwards compatibility with Numeric > and > > ndarray? Is NumPy 1.0 aiming to be compatible, or will some porting > effort > > be considered to be the default setting? > > We are trying to be as compatible with Numeric as possible by using the > convertcode.py script. We hope that that will get people 99% of the way > there. > > numarray modules will take some more effort to convert. This is a good way of dealing with this issue. > > In case the answer is the latter, and rand is supposed to be a convenient > way > of calling numpy.random.random(someshape), as you said on the scipy-user > > list, shouldn't their arguments be the same? Having to choose between > the > > two inconveniences of typing long names or remembering more than one > > convention for calling functions, I'd prefer to choose neither. But > that's > > just me. ;-) > > Most of the convenience is the calling convention, not the shorter name. In my opinion, having one calling convention for functions in the base namespace is much more convenient than the convenience provided by these special cases. Are there many convenience functions, or are rand and randn the only two? A quick search didn't turn up any others, but I might have missed some.
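[Editor's note: to make the contrast under discussion concrete, here is a sketch of the two calling conventions side by side. It uses the numpy.random namespace; both spellings exist in numpy, but treat the exact names as an assumption about your numpy version.]

```python
import numpy as np

# MATLAB-style convenience form: dimensions as separate arguments.
a = np.random.rand(2, 3)

# The shape-tuple form used by zeros(), ones() and empty().
b = np.random.random((2, 3))

# Both produce a 2x3 array of uniform samples on [0, 1).
assert a.shape == b.shape == (2, 3)
```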
Regards, Albert From robert.kern at gmail.com Thu May 4 15:46:01 2006 From: robert.kern at gmail.com (Robert Kern) Date: Thu May 4 15:46:01 2006 Subject: [Numpy-discussion] Re: API inconsistencies In-Reply-To: <040501c66fcb$13dea380$0a84a8c0@dsp.sun.ac.za> References: <040501c66fcb$13dea380$0a84a8c0@dsp.sun.ac.za> Message-ID: Albert Strasheim wrote: > Are there many convenience functions, or are rand and randn the only two? A > quick search didn't turn up any others, but I might have missed some. rand() and randn() are the only ones that come to mind. There may be a couple more Matlab-inspired functions floating around, though. -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From fperez.net at gmail.com Thu May 4 22:18:02 2006 From: fperez.net at gmail.com (Fernando Perez) Date: Thu May 4 22:18:02 2006 Subject: [Numpy-discussion] profiling code with hotshot and Numeric/numpy/scipy/... In-Reply-To: References: <4459B603.2060304@ftw.at> Message-ID: On 5/4/06, Arnd Baecker wrote: > I use hotshot in combination with hotshot2cachegrind > and kcachegrind to get a nice visual display and line-by-line > profilings > (http://mail.enthought.com/pipermail/enthought-dev/2006-January/001075.html > has all the links ;-). > As far as I know this is not possible with the "old profiler". I've seen that, and it really looks fantastic. I need to start using it... So yes > However, the output of this is also puzzling, ranging > from 6 (almost useless) lines for Numeric and > to overwhelming 6 screens (also useless?) for numpy (see below). > ((Also, if you vary the order of the tests you also get > different results...)) > > Ouch, this looks like a mess - I know benchmarking/profiling > is considered to be tricky - but that tricky? 
I'm by no means a profiling guru, but I wonder if this difference is due to the fact that numpy uses much more python code than C, while much of Numeric's work is done inside C routines. As far as the profile module is concerned, C code is invisible. If I read the info you posted correctly, it also seems like the time in your particular example is spent on many different things, rather than there being a single obvious bottleneck. That can make the optimization of this case a bit problematic. Cheers, f From fullung at gmail.com Thu May 4 23:21:15 2006 From: fullung at gmail.com (Albert Strasheim) Date: Thu May 4 23:21:15 2006 Subject: [Numpy-discussion] Running NumPy tests from inside source tree fails Message-ID: <002301c66fe7$42a16600$0a84a8c0@dsp.sun.ac.za> Hello all I'm trying to run the NumPy tests from inside the source tree. This depends on set_package_path() finding the package files I built. According to the set_package_path documentation: set_package_path should be called from a test_file.py that satisfies the following tree structure: //test_file.py Then the first existing path name from the following list /build/lib.- /.. My source tree (and probably everyone else's) looks like this: numpy numpy\build numpy\build\lib.- ... numpy\core\tests\test_foo.py This means that set_package_path isn't going to do the right thing when trying to run these tests. And indeed, I get an ImportError when I try. A better strategy would probably be to search up from dirname(abspath(testfile)) until you reach the current working directory, instead of the hardcoded dirname(dirname(abspath(testfile))) currently being used. Pearu, think we could fix this (assuming it's broken and that I didn't miss something obvious)? Regards, Albert From arnd.baecker at web.de Fri May 5 00:36:06 2006 From: arnd.baecker at web.de (Arnd Baecker) Date: Fri May 5 00:36:06 2006 Subject: [Numpy-discussion] profiling code with hotshot and Numeric/numpy/scipy/...
In-Reply-To: References: <4459B603.2060304@ftw.at> Message-ID: On Thu, 4 May 2006, Fernando Perez wrote: > On 5/4/06, Arnd Baecker wrote: > > > I use hotshot in combination with hotshot2cachegrind > > and kcachegrind to get a nice visual display and line-by-line > > profilings > > (http://mail.enthought.com/pipermail/enthought-dev/2006-January/001075.html > > has all the links ;-). > > As far as I know this is not possible with the "old profiler". > > I've seen that, and it really looks fantastic. I need to start using it... > > So yes And no (at least with python <2.5): http://docs.python.org/dev/lib/module-hotshot.html When searching around on c.l.p. I also found this very interesting looking tool to analyze hotshot output: http://www.vrplumber.com/programming/runsnakerun/ """RunSnakeRun is a small GUI utility that allows you to view HotShot profiler dumps in a sortable GUI view. It loads the dumps incrementally in the background so that you can begin viewing the profile results fairly quickly.""" (I haven't tested it yet though ...) > > However, the output of this is also puzzling, ranging > > from 6 (almost useless) lines for Numeric and > > to overwhelming 6 screens (also useless?) for numpy (see below). > > ((Also, if you vary the order of the tests you also get > > different results...)) > > > > Ouch, this looks like a mess - I know benchmarking/profiling > > is considered to be tricky - but that tricky? > > I'm by no means a profiling guru, but I wonder if this difference is > due to the fact that numpy uses much more python code than C, while > much of Numeric's work is done inside C routines. As far as the > profile module is concerned, C code is invisible. I can understand that a python profiler cannot look into C-code (well, even that might be possible with tricks), but shouldn't one be able to get the time before and after a line is executed and from this infer the time spent on a given line of code? 
Maybe that's not how these profilers work, but that's the way I would sprinkle my code with calls to time A bit more googling reveals an old cookbook example http://aspn.activestate.com/ASPN/Cookbook/Python/Recipe/81535 """This recipe lets you take into account time spent in C modules when profiling your Python code""" I start to wonder, whether things like this are properly taken into account in any of the available profilers?? > If I read the info you posted correctly, it also seems like the time > in your particular example is spent on many different things, rather > than there being a single obvious bottleneck. That can make the > optimization of this case a bit problematic. The example from scipy import * # from math import sqrt def main(): x=arange(1,100.0) for i in xrange(10000): y=sin(x) y2=log(x) z=sqrt(i) z2=abs(y**3) is pretty cooked up. Still I had success (well, it really was constructed for this ;-) with it in the past: http://www.physik.tu-dresden.de/~baecker/talks/pyco/pyco_.html#benchmarking-and-profiling Without importing sqrt from math, the output of kcachegrind http://www.physik.tu-dresden.de/~baecker/talks/pyco/BenchExamples/test_with_old_scipy.png suggests that 39% of the time is spent in sqrt and 61% in log. Importing sqrt from math leads to http://www.physik.tu-dresden.de/~baecker/talks/pyco/BenchExamples/test_with_old_scipy_and_mathsqrt.png I.e., only the log operation is visible. So things went fine in this particular example, but after all this I got really sceptical about profiling in python. A bit more googling revealed this very interesting thread http://thread.gmane.org/gmane.comp.python.devel/73166 which discusses short-comings of hotshot and suggests lsprof as replacement. cProfile (which is based on lsprof) will be in Python 2.5 And according to http://jcalderone.livejournal.com/21124.html there is also the possibility to use the output of cProfile with KCacheGrind. 
So all this does not help me right now (Presently I don't have the time to install python 2.5 alpha + numpy + scipy and test cProfile + the corresponding scripts), but in the longer run it might be the solution ... Best, Arnd From arnd.baecker at web.de Fri May 5 02:18:05 2006 From: arnd.baecker at web.de (Arnd Baecker) Date: Fri May 5 02:18:05 2006 Subject: [Numpy-discussion] Re: API inconsistencies In-Reply-To: References: <03fe01c66fc7$b28a7030$0a84a8c0@dsp.sun.ac.za> Message-ID: On Thu, 4 May 2006, Robert Kern wrote: > Albert Strasheim wrote: > > > What's the current thinking on NumPy backwards compatibility with Numeric and > > ndarray? Is NumPy 1.0 aiming to be compatible, or will some porting effort > > be considered to be the default setting? > > We are trying to be as compatible with Numeric as possible by using the > convertcode.py script. We hope that that will get people 99% of the way there. In the wiki there is a page documenting necessary changes and possible problems when converting from Numeric http://www.scipy.org/Converting_from_Numeric > numarray modules will take some more effort to convert. See also http://www.scipy.org/Converting_from_numarray Please add anything you observe while converting old code. Best, Arnd From ryanlists at gmail.com Fri May 5 06:07:06 2006 From: ryanlists at gmail.com (Ryan Krauss) Date: Fri May 5 06:07:06 2006 Subject: [Numpy-discussion] Profiling problem Message-ID: I was trying to help Arnd and ended up confusing myself even more. I tried to run this cooked up example:

from scipy import *
# from math import sqrt

def main():
    x=arange(1,100.0)
    for i in xrange(10000):
        y=sin(x)
        y2=log(x)
        z=sqrt(i)
        z2=abs(y**3)

using prun main() in ipython and this is the output:

In [5]: prun main()
         10005 function calls in 1.050 CPU seconds

   Ordered by: internal time

   ncalls  tottime  percall  cumtime  percall filename:lineno(function)
        1    0.930    0.930    1.050    1.050 prof.py:4(main)
    10000    0.120    0.000    0.120    0.000 :0(abs)
        1    0.000    0.000    0.000    0.000 :0(arange)
        1    0.000    0.000    1.050    1.050 profile:0(main())
        1    0.000    0.000    0.000    0.000 :0(setprofile)
        1    0.000    0.000    1.050    1.050 :1(?)
        0    0.000             0.000          profile:0(profiler)

What does that mean?
It looks to me like 0.12 total seconds were spent calling abs (with no mention of what command is calling abs), and the rest of the 1.05 total seconds is unaccounted for. Am I misunderstanding this or doing something else wrong? Thanks, Ryan From robert.kern at gmail.com Fri May 5 10:16:07 2006 From: robert.kern at gmail.com (Robert Kern) Date: Fri May 5 10:16:07 2006 Subject: [Numpy-discussion] Re: Profiling problem In-Reply-To: References: Message-ID: Ryan Krauss wrote: > What does that mean? It looks to me like 0.12 total seconds were > spent calling abs (with no mention of what command is calling abs), > and the rest of the 1.05 total seconds is unaccounted for. Am I > misunderstanding this or doing something else wrong? Donuts get you dollars that the profiler does not recognize ufuncs as functions that it ought to be profiling. -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From arnd.baecker at web.de Fri May 5 12:17:02 2006 From: arnd.baecker at web.de (Arnd Baecker) Date: Fri May 5 12:17:02 2006 Subject: [Numpy-discussion] Re: Profiling problem In-Reply-To: References: Message-ID: On Fri, 5 May 2006, Robert Kern wrote: > Ryan Krauss wrote: > > > What does that mean? It looks to me like 0.12 total seconds were > > spent calling abs (with no mention of what command is calling abs), > > and the rest of the 1.05 total seconds is unaccounted for. Am I > > misunderstanding this or doing something else wrong? > > Donuts get you dollars that the profiler does not recognize ufuncs as functions > that it ought to be profiling. So does that mean that something has to be done to the ufuncs so that the profiler can recognize them, or that the profiler has to be improved in some way? 
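[Editor's note: one crude way around this, since time spent inside C ufuncs is simply attributed to the calling frame, is to time the calls directly instead of profiling. A minimal sketch; the `timed` helper is hypothetical, not part of numpy or any profiler.]

```python
import time
import numpy as np

def timed(label, fn, *args):
    """Call fn(*args) and print the elapsed wall-clock time.

    A blunt substitute for a profiler when the work happens inside
    C ufuncs that Python profilers cannot see into.
    """
    t0 = time.time()
    result = fn(*args)
    print("%-6s %.6f s" % (label, time.time() - t0))
    return result

x = np.arange(1, 100.0)
y = timed("sin", np.sin, x)
y2 = timed("log", np.log, x)
```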
Just for completeness: I installed python 2.5 to test out the new cProfile which gives for

################################
import cProfile
from numpy import *
from math import sqrt

def main():
    x=arange(1,100.0)
    for i in xrange(10000):
        y=sin(x)
        y2=log(x)
        z=sqrt(i)
        z2=abs(y**3)

cProfile.run("main()")
################################

the following result:

         20004 function calls in 0.611 CPU seconds

   Ordered by: standard name

   ncalls  tottime  percall  cumtime  percall filename:lineno(function)
        1    0.000    0.000    0.611    0.611 :1()
        1    0.515    0.515    0.611    0.611 tst_prof.py:7(main)
    10000    0.081    0.000    0.081    0.000 {abs}
    10000    0.016    0.000    0.016    0.000 {math.sqrt}
        1    0.000    0.000    0.000    0.000 {method 'disable' of '_lsprof.Profiler' objects}
        1    0.000    0.000    0.000    0.000 {numpy.core.multiarray.arange}

Best, Arnd From fullung at gmail.com Fri May 5 16:18:03 2006 From: fullung at gmail.com (Albert Strasheim) Date: Fri May 5 16:18:03 2006 Subject: [Numpy-discussion] Array interface Message-ID: <01a501c6709a$0568b320$0a84a8c0@dsp.sun.ac.za> Hello all I have a few quick questions regarding the array interface. First off, according to the documentation of array(...):

"""
array(object, dtype=None, copy=1, order=None, subok=0,ndmin=0)

will return an array from object with the specified data-type

Inputs:
  object - an array, any object exposing the array interface, any
           object whose __array__ method returns an array, or any
           (nested) sequence.
"""

array calls _array_fromobject in the C code, but I can't quite figure out how this code determines whether an arbitrary object is exposing the array interface. Any info would be appreciated. Next up, __array_struct__. According to Guide to NumPy:

"""
__array_struct__
    A PyCObject that wraps a pointer to a PyArrayInterface structure.
    This is only useful on the C-level for rapid implementation of the
    array interface, using a single attribute lookup.
"""

Does an object have to implement __array_struct__, or is this optional?
I would think that it might be optional, so that pure-Python objects can implement the array interface. In this case, what should such an object do about __array_struct__? Return None? Not implement the method? Something else? Thanks! Regards, Albert From fullung at gmail.com Fri May 5 19:49:04 2006 From: fullung at gmail.com (Albert Strasheim) Date: Fri May 5 19:49:04 2006 Subject: [Numpy-discussion] ctypes and NumPy In-Reply-To: <00d901c66d6c$2d2569c0$0a84a8c0@dsp.sun.ac.za> Message-ID: <000001c670b7$946f3540$0502010a@dsp.sun.ac.za> Hello all After some more playing with ctypes, I've come up with some ideas of using ctypes with NumPy without having to change ndarray. Info on the wiki: http://www.scipy.org/Cookbook/Ctypes I've figured out how to overlay ctypes onto NumPy arrays. Used in conjunction with decorators, this offers a reasonably seamless way of passing NumPy arrays through ctypes to C functions. I have some ideas for doing the reverse, i.e. overlaying NumPy arrays onto ctypes, but I haven't been able to get this to work yet. Any suggestions in this area would be appreciated. It is useful to be able to do this when you have a function returning something like a double*. In this case you probably have some knowledge about the size of this buffer, so you can construct an array with the right dtype and shape to work further on this array. Regards, Albert From chanley at stsci.edu Mon May 8 10:31:24 2006 From: chanley at stsci.edu (Christopher Hanley) Date: Mon May 8 10:31:24 2006 Subject: [Numpy-discussion] numpy does not build under Solaris 8 -- may be related to ticket #96 Message-ID: <445F7B16.2040006@stsci.edu> Numpy does not currently build under Solaris 8. I have tracked this problem to the following function definition in scalarmathmodule.c.src:

static PyObject *
@name@_power(PyObject *a, PyObject *b, PyObject *c)

The declaration of "int retstatus" needs to be moved ahead of the #if/else statements in which variable assignment occurs.
This makes the function C-like in syntax instead of C++. The native Solaris compilers are picky that way. I have checked in this change. Someone else will need to test on Windows to see if this corrects ticket #96. This change is a slight modification to revision 2472. Chris From stefan at sun.ac.za Mon May 8 10:36:21 2006 From: stefan at sun.ac.za (Stefan van der Walt) Date: Mon May 8 10:36:21 2006 Subject: [Numpy-discussion] array without string representation Message-ID: <20060507204643.GA18870@mentat.za.net> Hi everyone, While playing around with record arrays, I managed to create an object that I can't print. This snippet:

import numpy as N
r = N.zeros(shape=(3,3), dtype=[('x','f4')]).view(N.recarray)
y = r.view('f4')
print "Created y"
print y

produces a traceback:

In [51]: print y
------------------------------------------------------------
Traceback (most recent call last):
  File "", line 1, in ?
  File "/home/stefan//lib/python2.4/site-packages/numpy/core/numeric.py", line 272, in array_str
    return array2string(a, max_line_width, precision, suppress_small, ' ', "", str)
  File "/home/stefan//lib/python2.4/site-packages/numpy/core/arrayprint.py", line 198, in array2string
    separator, prefix)
  File "/home/stefan//lib/python2.4/site-packages/numpy/core/arrayprint.py", line 145, in _array2string
    format_function = a._format
  File "/home/stefan//lib/python2.4/site-packages/numpy/core/records.py", line 176, in __getattribute__
    res = fielddict[attr][:2]
TypeError: unsubscriptable object

Has anyone else seen this sort of thing before? Regards Stéfan From fullung at gmail.com Mon May 8 10:51:51 2006 From: fullung at gmail.com (Albert Strasheim) Date: Mon May 8 10:51:51 2006 Subject: [Numpy-discussion] Incorrect result when multiplying views of record arrays Message-ID: <00b201c67221$d9ce6ff0$0502010a@dsp.sun.ac.za> Hello all I've found a possible bug when multiplying views of record arrays. Strangely enough, this only shows up on Linux.
Luckily, the unit tests for my library picked up on this. The code:

import numpy as N
rdt = N.dtype({'names' : ['i', 'j'], 'formats' : [N.intc, N.float64]})
x = N.array([[(1,5.),(2,6.),(-1,0.)]], dtype=rdt)
y = N.array([[(1,1.),(2,2.),(-1,0.)], [(1,3.),(2,4.),(-1,0.)]], dtype=rdt)
xv = x['j'][:,:-1]
yv = y['j'][:,:-1]
z = N.dot(yv, N.transpose(xv))

The operands:

yv = [[ 1. 2.]
      [ 3. 4.]]
xv = [[ 5. 6.]]
z = [[ 5.]
     [ 15.]]

The expected result is [[17, 39]]', which is what I get on Windows. If I copy the views prior to multiplying:

a = N.array(xv)
b = N.array(yv)
c = N.dot(b, N.transpose(a))

I get the correct result on both platforms. Trac ticket at http://projects.scipy.org/scipy/numpy/ticket/98. Cheers, Albert From tim.hochberg at cox.net Mon May 8 11:36:00 2006 From: tim.hochberg at cox.net (Tim Hochberg) Date: Mon May 8 11:36:00 2006 Subject: [Numpy-discussion] basearray / arraykit Message-ID: <445F8F71.4070801@cox.net> I created a branch to work on basearray and arraykit: http://svn.scipy.org/svn/numpy/branches/arraykit Basearray, as most of you probably know by now, is the array superclass that Travis, Sasha and I have all talked about at various times with slightly different emphasis. Arraykit is something I've mentioned in connection with basearray: a toolkit for creating custom array like objects. Here's a brief example; this is what a custom array class that just supported indexing and shape would look like using arraykit:

import numpy.arraykit as _kit

class customarray(_kit.basearray):
    __new__ = _kit.fromobj
    __getitem__ = _kit.getitem
    __setitem__ = _kit.setitem
    shape = property(_kit.getshape, _kit.setshape)

In practice you'd probably define a few more things like __repr__, etc, but this should get the idea across. Stuff like the above already works. However there are several areas that need more work: 1. arraykit is very tangled up with multiarraymodule.c. I would like to decouple these. 2. A lot of stuff has an extra layer of python in it.
I would like to remove that to minimize the extra overhead custom array types get. 3. The functions that are available closely reflect my particular application. If someone else uses this, they're likely to want functions that are not yet exposed through arraykit. That's it for now. More updates as events warrant. -tim From chanley at stsci.edu Mon May 8 13:50:01 2006 From: chanley at stsci.edu (Christopher Hanley) Date: Mon May 8 13:50:01 2006 Subject: [Numpy-discussion] problems with index arrays and byte order Message-ID: <445FAEE5.4040500@stsci.edu> Greetings, The following example was uncovered by a pyfits user during testing. It appears that there is a significant problem with index arrays handling byte order properly. Example:

>>> import numpy
>>> a = numpy.array([1,2,3,4,5,6],'>f8')
>>> a[3:5]
array([ 4., 5.])
>>> a[[3,4]]
array([ 2.05531309e-320, 2.56123631e-320])
>>> a[numpy.array([3,4])]
array([ 2.05531309e-320, 2.56123631e-320])
>>> numpy.__version__
'0.9.7.2477'

This test was conducted on a Red Hat Enterprise system. Chris From pau.gargallo at gmail.com Mon May 8 16:17:19 2006 From: pau.gargallo at gmail.com (Pau Gargallo) Date: Mon May 8 16:17:19 2006 Subject: [Numpy-discussion] axis in vdot? Message-ID: <6ef8f3380605081419ife00c4dn653e5f87e5c6d1b3@mail.gmail.com> hi, all given two nd-arrays x and y, i would like to know the most efficient way to compute >>> (x*y).sum( axis=n ) When using broadcasting rules x, y and (x*y).sum(axis=n) may happen to be much smaller than x*y, so it will be great if (x*y).sum(axis=n) could be computed without actually building x*y. Is there an existing way to do that in NumPy? If not, would it make sense to add an axis argument to vdot to perform such operation?
something like

>>> vdot( x,y, axis=n )

thanks a lot, pau From andorxor at gmx.de Mon May 8 16:33:32 2006 From: andorxor at gmx.de (Stephan Tolksdorf) Date: Mon May 8 16:33:32 2006 Subject: [Numpy-discussion] numpy does not build under Solaris 8 -- may be related to ticket #96 In-Reply-To: <445F7B16.2040006@stsci.edu> References: <445F7B16.2040006@stsci.edu> Message-ID: <445F8337.8020008@gmx.de> > Someone else will need to test on Windows to see if this corrects > ticket #96. It's the same error Albert reported and your modification has fixed it. Stephan From steve at shrogers.com Mon May 8 16:47:22 2006 From: steve at shrogers.com (Steven H. Rogers) Date: Mon May 8 16:47:22 2006 Subject: [Numpy-discussion] Call for Papers APL Quote Quad 34:4 Message-ID: <445CA494.4080601@shrogers.com> The ACM Special Interest Group for APL is broadening its scope from APL and J to all Array Programming Languages. Papers on the design, implementation, and use of NumPy would be welcome. ANNOUNCEMENT Issue 34:3 of APL QUOTE QUAD is now closed and ready for publishing. We are now starting to process the next issue (34:4), and we need new material. Therefore, I am sending this CALL FOR PAPERS APL QUOTE QUAD The next issue of APL Quote Quad is being designed. Prospective authors are encouraged to submit papers on any of the usual subjects of interest related to Array-Processing Languages (APL, APL2, J, and so forth). Submitted papers, in Microsoft Word (.doc), Rich Text Format (.rtf), Openoffice format (.scw), LaTeX (.tex) or Acrobat (.pdf) should be addressed to Manuel Alfonseca Manuel.Alfons... @uam.es Care must be taken to make the submitted papers self-contained, e.g. if they require special APL typesettings. The tentative time limit for the new material is May 31st, 2006.
From david at ar.media.kyoto-u.ac.jp Mon May 8 17:06:22 2006 From: david at ar.media.kyoto-u.ac.jp (David Cournapeau) Date: Mon May 8 17:06:22 2006 Subject: [Numpy-discussion] Changing the printing format for numpy array Message-ID: <445EF420.6040307@ar.media.kyoto-u.ac.jp> Hi there, I would like to change the default printing format of float numbers when using numpy arrays. On the "old" numpy documentation, it is said that import sys and changing the value of sys.float_output_precision can achieve this. Unfortunately, I don't manage to make it work under ipython (or the standard cpython interpreter, for that matter):

# Change default format
import sys
sys.float_output_suppress_small = 1
sys.float_output_precision = 3

import numpy
b = numpy.linspace(0.1, 0.0001, 5)
print b
print numpy.array2string(b, precision = 3, suppress_small = 1)

Is this a bug, or did I miss anything? I basically want "print b" to behave like "print numpy.array2string(b, precision = 3, suppress_small = 1)", thanks for the help, David From ted.horst at earthlink.net Mon May 8 17:10:11 2006 From: ted.horst at earthlink.net (Ted Horst) Date: Mon May 8 17:10:11 2006 Subject: [Numpy-discussion] scalarmathmodule changes broke MSVC build; patch available In-Reply-To: <03fc01c66fc2$daabd270$0a84a8c0@dsp.sun.ac.za> References: <03fc01c66fc2$daabd270$0a84a8c0@dsp.sun.ac.za> Message-ID: <5F8B4509-2D51-431E-84BE-DAA64B088C70@earthlink.net> Sorry about that. I'm not sure why that compiled for me. gcc 4 seems to allow this. Ted On May 4, 2006, at 16:36, Albert Strasheim wrote: > Hello all > > Some changes to scalarmathmodule.c.src have broken the build with > at least > MSVC 7.1. The generated .c contains code like this:
>
> static PyObject *
> cfloat_power(PyObject *a, PyObject *b, PyObject *c)
> {
>     PyObject *ret;
>     cfloat arg1, arg2, out;
> #if 1
>     cfloat out1;
>     out1.real = out.imag = 0;
> #else
>     cfloat out1=0;
> #endif
>     int retstatus;
>
> As far as I know, this is not valid C.
Ticket with patch here: > > http://projects.scipy.org/scipy/numpy/ticket/96 > > By the way, how about setting up buildbot so that we can avoid these > problems in future? I'd be very happy to maintain build slaves for > a few > Windows configurations. > > Regards, > > Albert > From ryanlists at gmail.com Mon May 8 18:20:23 2006 From: ryanlists at gmail.com (Ryan Krauss) Date: Mon May 8 18:20:23 2006 Subject: [Numpy-discussion] downsample vector with averaging Message-ID: I need to downsample some data while averaging it. Basically, I have a vector and I want to take for example every ten points and average them together so that the new vector would be made up of newvect[0]=oldvect[0:9].mean() newvect[1]=oldevect[10:19].mean() .... Is there a built-in or vectorized way to do this? I default to thinking in for loops, but that can lead to slow code. Thanks, Ryan From tim.hochberg at cox.net Mon May 8 18:21:38 2006 From: tim.hochberg at cox.net (Tim Hochberg) Date: Mon May 8 18:21:38 2006 Subject: [Numpy-discussion] basearray / arraykit Message-ID: <445F8A61.8040107@cox.net> From tim.hochberg at cox.net Mon May 8 18:32:10 2006 From: tim.hochberg at cox.net (Tim Hochberg) Date: Mon May 8 18:32:10 2006 Subject: [Numpy-discussion] downsample vector with averaging In-Reply-To: References: Message-ID: <445FF0AD.5040005@cox.net> Ryan Krauss wrote: > I need to downsample some data while averaging it. Basically, I have > a vector and I want to take for example every ten points and average > them together so that the new vector would be made up of > newvect[0]=oldvect[0:9].mean() > newvect[1]=oldevect[10:19].mean() > .... > > Is there a built-in or vectorized way to do this? I default to > thinking in for loops, but that can lead to slow code. 
You could try something like this: >>> import numpy >>> a = numpy.arange(100) # stand in for the real data >>> a.shape = 10,10 >>> a.mean(axis=1) array([ 4.5, 14.5, 24.5, 34.5, 44.5, 54.5, 64.5, 74.5, 84.5, 94.5]) >>> a.shape = [-1] # put a back like we found it -tim > > Thanks, > > Ryan > > > ------------------------------------------------------- > Using Tomcat but need to do more? Need to support web services, security? > Get stuff done quickly with pre-integrated technology to make your job > easier > Download IBM WebSphere Application Server v.1.0.1 based on Apache > Geronimo > http://sel.as-us.falkag.net/sel?cmd=k&kid0709&bid&3057&dat1642 > _______________________________________________ > Numpy-discussion mailing list > Numpy-discussion at lists.sourceforge.net > https://lists.sourceforge.net/lists/listinfo/numpy-discussion > > From sransom at nrao.edu Mon May 8 18:36:03 2006 From: sransom at nrao.edu (Scott Ransom) Date: Mon May 8 18:36:03 2006 Subject: [Numpy-discussion] downsample vector with averaging In-Reply-To: References: Message-ID: <20060509013403.GA28744@ssh.cv.nrao.edu> How about this: --------------------------------------------- import numpy as Num def downsample(vector, factor): """ downsample(vector, factor): Downsample (by averaging) a vector by an integer factor. """ if (len(vector) % factor): print "Length of 'vector' is not divisible by 'factor'=%d!" % factor return 0 vector.shape = (len(vector)/factor, factor) return Num.mean(vector, axis=1) --------------------------------------------- Scott On Mon, May 08, 2006 at 01:17:16PM -0400, Ryan Krauss wrote: > I need to downsample some data while averaging it. Basically, I have > a vector and I want to take for example every ten points and average > them together so that the new vector would be made up of > newvect[0]=oldvect[0:9].mean() > newvect[1]=oldevect[10:19].mean() > .... > > Is there a built-in or vectorized way to do this? 
I default to > thinking in for loops, but that can lead to slow code. > > Thanks, > > Ryan > > > ------------------------------------------------------- > Using Tomcat but need to do more? Need to support web services, security? > Get stuff done quickly with pre-integrated technology to make your job > easier > Download IBM WebSphere Application Server v.1.0.1 based on Apache Geronimo > http://sel.as-us.falkag.net/sel?cmd_______________________________________________ > Numpy-discussion mailing list > Numpy-discussion at lists.sourceforge.net > https://lists.sourceforge.net/lists/listinfo/numpy-discussion -- -- Scott M. Ransom Address: NRAO Phone: (434) 296-0320 520 Edgemont Rd. email: sransom at nrao.edu Charlottesville, VA 22903 USA GPG Fingerprint: 06A9 9553 78BE 16DB 407B FFCA 9BFA B6FF FFD3 2989 From ryanlists at gmail.com Mon May 8 19:09:09 2006 From: ryanlists at gmail.com (Ryan Krauss) Date: Mon May 8 19:09:09 2006 Subject: [Numpy-discussion] downsample vector with averaging In-Reply-To: <20060509013403.GA28744@ssh.cv.nrao.edu> References: <20060509013403.GA28744@ssh.cv.nrao.edu> Message-ID: Thanks Scott and Tim. These look good, and very similar. On 5/8/06, Scott Ransom wrote: > How about this: > > --------------------------------------------- > import numpy as Num > > def downsample(vector, factor): > """ > downsample(vector, factor): > Downsample (by averaging) a vector by an integer factor. > """ > if (len(vector) % factor): > print "Length of 'vector' is not divisible by 'factor'=%d!" % factor > return 0 > vector.shape = (len(vector)/factor, factor) > return Num.mean(vector, axis=1) > --------------------------------------------- > > Scott > > > On Mon, May 08, 2006 at 01:17:16PM -0400, Ryan Krauss wrote: > > I need to downsample some data while averaging it. 
Basically, I have > > a vector and I want to take for example every ten points and average > > them together so that the new vector would be made up of > > newvect[0]=oldvect[0:9].mean() > > newvect[1]=oldevect[10:19].mean() > > .... > > > > Is there a built-in or vectorized way to do this? I default to > > thinking in for loops, but that can lead to slow code. > > > > Thanks, > > > > Ryan > > > > > > ------------------------------------------------------- > > Using Tomcat but need to do more? Need to support web services, security? > > Get stuff done quickly with pre-integrated technology to make your job > > easier > > Download IBM WebSphere Application Server v.1.0.1 based on Apache Geronimo > > http://sel.as-us.falkag.net/sel?cmd_______________________________________________ > > Numpy-discussion mailing list > > Numpy-discussion at lists.sourceforge.net > > https://lists.sourceforge.net/lists/listinfo/numpy-discussion > > -- > -- > Scott M. Ransom Address: NRAO > Phone: (434) 296-0320 520 Edgemont Rd. > email: sransom at nrao.edu Charlottesville, VA 22903 USA > GPG Fingerprint: 06A9 9553 78BE 16DB 407B FFCA 9BFA B6FF FFD3 2989 > From jdhunter at ace.bsd.uchicago.edu Mon May 8 19:37:17 2006 From: jdhunter at ace.bsd.uchicago.edu (John Hunter) Date: Mon May 8 19:37:17 2006 Subject: [Numpy-discussion] downsample vector with averaging In-Reply-To: ("Ryan Krauss"'s message of "Mon, 8 May 2006 13:17:16 -0400") References: Message-ID: <87vesg6i94.fsf@peds-pc311.bsd.uchicago.edu> >>>>> "Ryan" == Ryan Krauss writes: Ryan> I need to downsample some data while averaging it. Ryan> Basically, I have a vector and I want to take for example Ryan> every ten points and average them together so that the new Ryan> vector would be made up of newvect[0]=oldvect[0:9].mean() Ryan> newvect[1]=oldevect[10:19].mean() .... Ryan> Is there a built-in or vectorized way to do this? I default Ryan> to thinking in for loops, but that can lead to slow code. 
You might look at the matlab function decimate -- it first does a chebyshev low-pass filter before it does the down-sampling. Conceptually similar to what you are proposing with simple averaging but with a little more sophistication http://www-ccs.ucsd.edu/matlab/toolbox/signal/decimate.html An open source version (GPL) for octave by Paul Kienzle, who is one of the authors of the matplotlib quadmesh functionality and apparently a python convertee, is here http://users.powernet.co.uk/kienzle/octave/matcompat/scripts/signal/decimate.m and it looks like someone has already translated this to python using scipy.signal http://www.bigbold.com/snippets/posts/show/1209 Some variant of this would be a nice addition to scipy. JDH From a.u.r.e.l.i.a.n at gmx.net Tue May 9 01:18:24 2006 From: a.u.r.e.l.i.a.n at gmx.net (Johannes Loehnert) Date: Tue May 9 01:18:24 2006 Subject: [Numpy-discussion] Boolean indexing Message-ID: <200605091016.50881.a.u.r.e.l.i.a.n@gmx.net> Hi, I posted this before on scipy-user, maybe that was the wrong place. Have a look at this: # ===================================== In [1]: import numpy In [2]: numpy.__version__ Out[2]: '0.9.7.2484' In [3]: a = numpy.array([[0, 1],[2, 3], [4, 5],[6, 7],[8, 9]]) In [4]: b = numpy.array([False, True, True, True, False]) In [5]: a[b] Out[5]: array([[2, 3], [4, 5], [6, 7]]) In [6]: a[b,:] --------------------------------------------------------------------------- exceptions.IndexError Traceback (most recent call last) /home/jloehnert/ IndexError: arrays used as indices must be of integer type In [7]: a[numpy.nonzero(b),:] Out[7]: array([[2, 3], [4, 5], [6, 7]]) # ====================== Is there a reason to forbid a[b, :] with boolean array b? 
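The nonzero workaround in In [7] rests on an equivalence worth spelling out: indexing with a boolean mask selects the rows where the mask is True, exactly as integer indexing with the positions of those True entries does. A minimal sketch of that equivalence, using nonzero(b)[0] to get a plain integer index array (this form also works in current numpy):

```python
import numpy

a = numpy.array([[0, 1], [2, 3], [4, 5], [6, 7], [8, 9]])
b = numpy.array([False, True, True, True, False])

# Boolean indexing keeps the rows where the mask is True...
masked = a[b]

# ...which is the same selection as integer indexing with the
# positions of the True entries, i.e. the nonzero() workaround.
idx = numpy.nonzero(b)[0]
by_index = a[idx]
```

Both spellings pick out rows 1, 2 and 3, matching Out[5] and Out[7] above.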
Johannes From oliphant.travis at ieee.org Tue May 9 01:42:09 2006 From: oliphant.travis at ieee.org (Travis Oliphant) Date: Tue May 9 01:42:09 2006 Subject: [Numpy-discussion] Boolean indexing In-Reply-To: <200605091016.50881.a.u.r.e.l.i.a.n@gmx.net> References: <200605091016.50881.a.u.r.e.l.i.a.n@gmx.net> Message-ID: <4460558C.80006@ieee.org> Johannes Loehnert wrote: > Hi, > > I posted this before on scipy-user, maybe that was the wrong place. > > Have a look at this: > > # ===================================== > In [1]: import numpy > > In [2]: numpy.__version__ > Out[2]: '0.9.7.2484' > > In [3]: a = numpy.array([[0, 1],[2, 3], [4, 5],[6, 7],[8, 9]]) > > In [4]: b = numpy.array([False, True, True, True, False]) > > In [5]: a[b] > Out[5]: > array([[2, 3], > [4, 5], > [6, 7]]) > > In [6]: a[b,:] > --------------------------------------------------------------------------- > exceptions.IndexError Traceback (most recent > call last) > > /home/jloehnert/ > > IndexError: arrays used as indices must be of integer type > > In [7]: a[numpy.nonzero(b),:] > Out[7]: > array([[2, 3], > [4, 5], > [6, 7]]) > # ====================== > > Is there a reason to forbid a[b, :] with boolean array b? > I think admitting this behavior would require even more checks in already complicated code. I'm not sure it's worth it. I don't have a principled opposition, but it's not on my list of things to implement. -Travis From olivetti at itc.it Tue May 9 04:29:21 2006 From: olivetti at itc.it (Emanuele Olivetti) Date: Tue May 9 04:29:21 2006 Subject: [Numpy-discussion] Guide to Numpy book In-Reply-To: <6382066a0605020646u6d752d84v1c2711101e108883@mail.gmail.com> References: <3FA6601C-819F-4F15-A670-829FC428F47B@cortechs.net> <4452C145.8050803@geodynamics.org> <6382066a0605020646u6d752d84v1c2711101e108883@mail.gmail.com> Message-ID: <44607CCB.402@itc.it> My copy dates January 31. Did you receive updates after this thread started?
Thanks, Emanuele Charlie Moad wrote: > On 4/30/06, Vidar Gundersen wrote: >> ===== Original message from Luis Armendariz | 29 Apr 2006: >> >> What is the newest version of Guide to numpy? The recent one I got is >> >> dated at Jan 9 2005 on the cover. >> > The one I got yesterday is dated March 15, 2006. >> >> aren't the updates supposed to be sent out >> to customers when available? > > I was waiting to hear a reply on this, because I am curious about > getting updates as well. Our labs copy reads Jan 20. How often > should we expect updates? I am guessing the date variations on the > front page are from latex each time the doc is regenerated. From fullung at gmail.com Tue May 9 05:36:16 2006 From: fullung at gmail.com (Albert Strasheim) Date: Tue May 9 05:36:16 2006 Subject: [Numpy-discussion] problems with index arrays and byte order In-Reply-To: <445FAEE5.4040500@stsci.edu> Message-ID: <005501c67365$10206f10$0502010a@dsp.sun.ac.za> Hello all The issue Chris mentioned seems to be fixed in r2487 (did this happen in r2480 or was it another change that fixed it?). http://projects.scipy.org/scipy/numpy/changeset/2480 >>> import numpy >>> a = numpy.array([1,2,3,4,5,6],'>f8') >>> a[3:5] array([ 4., 5.]) >>> >>> a[[3,4]] array([ 4., 5.]) >>> a[numpy.array([3,4])] array([ 4., 5.]) >>> numpy.__version__ '0.9.7.2487' Regards, Albert > -----Original Message----- > From: numpy-discussion-admin at lists.sourceforge.net [mailto:numpy- > discussion-admin at lists.sourceforge.net] On Behalf Of Christopher Hanley > Sent: 08 May 2006 22:50 > To: numpy-discussion > Subject: [Numpy-discussion] problems with index arrays and byte order > > Greetings, > > The following example was uncovered by a pyfits user during testing. It > appears that there is a significant problem with index array handling > byte order properly. 
> > Example: > > >>> import numpy > >>> a = numpy.array([1,2,3,4,5,6],'>f8') > >>> a[3:5] > array([ 4., 5.]) > >>> a[[3,4]] > array([ 2.05531309e-320, 2.56123631e-320]) > >>> a[numpy.array([3,4])] > array([ 2.05531309e-320, 2.56123631e-320]) > >>> numpy.__version__ > '0.9.7.2477' > > This test was conducted on a Red Hat Enterprise system. > > Chris From nvf at MIT.EDU Tue May 9 10:05:08 2006 From: nvf at MIT.EDU (Nick Fotopoulos) Date: Tue May 9 10:05:08 2006 Subject: [Numpy-discussion] Re: Numpy-discussion digest, Vol 1 #1753 - 1 msg In-Reply-To: <20060509030933.A19D3F53A@sc8-sf-spam2.sourceforge.net> References: <20060509030933.A19D3F53A@sc8-sf-spam2.sourceforge.net> Message-ID: <4460CB00.10601@mit.edu> Dear all, I sure wish I had seen John's link before writing my own decimate, but such is life. I am going to transition this thread to the scipy list since it seems more as if it belongs there. I have a comment which may be way, way beyond the scope of Ryan's intended application, but which probably should be made before a decimate function is added to scipy. If you will be FFTing your decimated signal and care at all about the phase, then you may want to consider using something like filtfilt rather than just applying a single filter. It filters a signal forwards then backwards in order to avoid introducing a phase delay, and is actually used in the Matlab implementation of decimate. I found an implementation in Python here: http://article.gmane.org/gmane.comp.python.scientific.user/1164/ While the author, Andrew Straw, seems to be suffering edge effects, it seems as if he's not windowing his data. In my application, his filtfilt seems to do its job quite nicely. Then again, I am also not a signals whiz. 
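The forward-backward trick filtfilt uses can be sketched with plain numpy. The 5-tap moving average below is a hypothetical stand-in for a properly designed low-pass filter: one causal pass delays the signal, and running the same filter over the reversed output, then reversing again, cancels that delay:

```python
import numpy as np

def causal_ma(x, ntaps=5):
    # Causal moving-average FIR filter: truncating the full
    # convolution to len(x) keeps only past samples.
    b = np.ones(ntaps) / ntaps
    return np.convolve(x, b)[:len(x)]

def filtfilt_sketch(x, ntaps=5):
    # Filter forwards, then filter the reversed result and
    # reverse back: the two phase delays cancel.
    forward = causal_ma(x, ntaps)
    return causal_ma(forward[::-1], ntaps)[::-1]

# An impulse makes the phase behaviour visible.
x = np.zeros(101)
x[50] = 1.0
t = np.arange(101)

one_pass = causal_ma(x)
two_pass = filtfilt_sketch(x)

def centroid(y):
    # Centre of mass of the filter output along the time axis.
    return (t * y).sum() / y.sum()
```

A single causal pass moves the centroid of the impulse response from sample 50 to 52 (a two-sample delay for five taps); the forward-backward pass leaves it at 50.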
Take care, Nick numpy-discussion-request at lists.sourceforge.net wrote: > Message: 1 > To: "Ryan Krauss" > Cc: numpy-discussion > Subject: Re: [Numpy-discussion] downsample vector with averaging > From: John Hunter > Date: Mon, 08 May 2006 21:31:51 -0500 > > You might look at the matlab function decimate -- it first does a > chebyshev low-pass filter before it does the down-sampling. > Conceptually similar to what you are proposing with simple averaging > but with a little more sophistication > > http://www-ccs.ucsd.edu/matlab/toolbox/signal/decimate.html > > An open source version (GPL) for octave by Paul Kienzle, who is one of > the authors of the matplotlib quadmesh functionality and apparently a > python convertee, is here > > http://users.powernet.co.uk/kienzle/octave/matcompat/scripts/signal/decimate.m > > and it looks like someone has already translated this to python using > scipy.signal > > http://www.bigbold.com/snippets/posts/show/1209 > > Some variant of this would be a nice addition to scipy. > > JDH > From chanley at stsci.edu Tue May 9 10:27:41 2006 From: chanley at stsci.edu (Christopher Hanley) Date: Tue May 9 10:27:41 2006 Subject: [Numpy-discussion] field access in recarray broken when field name matches class method or attribute Message-ID: <4460CEF0.1080604@stsci.edu> It is not possible to access a named recarray field if that name matches the name of a recarray class method or attribute. Example code is below: >>> from numpy import rec >>> r = rec.fromrecords([[456,'dbe',1.2],[2,'de',1.3]],names='num,name,field') >>> r.field('num') array([456, 2]) >>> r.field('field') >>> I have opened a new ticket on the Trac site with this bug. Chris From eric at enthought.com Tue May 9 11:15:03 2006 From: eric at enthought.com (eric jones) Date: Tue May 9 11:15:03 2006 Subject: [Numpy-discussion] downsample vector with averaging In-Reply-To: References: Message-ID: <4460DBE1.90104@enthought.com> Look into ufunc.reduceat(). 
It'll help with your specific case. It doesn't do it in one step but still should be pretty efficient. >>> a = ones(15) >>> a array([1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1]) >>> add.reduceat(a,[0,5,10]) array([5, 5, 5]) >>> add.reduceat(a,[0,5,10])/5.0 array([ 1., 1., 1.]) eric Ryan Krauss wrote: > I need to downsample some data while averaging it. Basically, I have > a vector and I want to take for example every ten points and average > them together so that the new vector would be made up of > newvect[0]=oldvect[0:9].mean() > newvect[1]=oldevect[10:19].mean() > .... > > Is there a built-in or vectorized way to do this? I default to > thinking in for loops, but that can lead to slow code. > > Thanks, > > Ryan From ryanlists at gmail.com Tue May 9 13:11:23 2006 From: ryanlists at gmail.com (Ryan Krauss) Date: Tue May 9 13:11:23 2006 Subject: [Numpy-discussion] column_stack with len +/-1 Message-ID: I am trying to column_stack a bunch of data vectors that I think should be the same length. For whatever reason, my DAQ system doesn't always put out exactly the same length vector. They should be 11,000 elements long, but some are 11,000+/-1. How do I efficiently drop one or two elements of the lists that are too long without making a bunch of accidental copies? What would newlist=[item[0:11000] for item in oldlist] do?
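Assuming the vectors are numpy arrays (for plain Python lists a slice would copy), the list comprehension above is cheap: ndarray slices are views, and trimming everything to the shortest length also covers vectors that came out one element short. A sketch, with hypothetical stand-ins for the DAQ output:

```python
import numpy as np

# Hypothetical stand-ins for the DAQ vectors: 11,000 +/- 1 samples.
vectors = [np.ones(11001), np.ones(11000), np.ones(10999)]

# v[:n] on an ndarray is a view, not a copy, so trimming is free;
# column_stack then makes the one unavoidable copy when it
# assembles the 2-D result.
n = min(len(v) for v in vectors)
stacked = np.column_stack([v[:n] for v in vectors])
```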
Ryan From alexander.belopolsky at gmail.com Tue May 9 13:13:21 2006 From: alexander.belopolsky at gmail.com (Alexander Belopolsky) Date: Tue May 9 13:13:21 2006 Subject: [Numpy-discussion] Changing the printing format for numpy array In-Reply-To: <445EF420.6040307@ar.media.kyoto-u.ac.jp> References: <445EF420.6040307@ar.media.kyoto-u.ac.jp> Message-ID: On 5/8/06, David Cournapeau wrote: > [...] > I would like to change the default printing format of float numbers > when using numpy arrays. In numpy you use set_printoptions for that: >>> from numpy import * >>> set_printoptions(precision=2) >>> array([1/3.]) array([ 0.33]) Unfortunately, this does not affect dimensionless arrays and array scalars: >>> array(1/3.) array(0.33333333333333331) >>> float_(1/3.) 0.33333333333333331 (Travis said this was not a bug.) From chanley at stsci.edu Tue May 9 13:19:02 2006 From: chanley at stsci.edu (Christopher Hanley) Date: Tue May 9 13:19:02 2006 Subject: [Numpy-discussion] cannot create single field recarray using string format input Message-ID: <4460F8F1.20703@stsci.edu> An error is raised when attempting to create a single field recarray using string format as input. If there is more than one format specified in the string no error is raised. However, it is possible to create a single field recarray if the format is given as a member of a 1-element list.
Example code below: In [1]: from numpy import rec In [2]: rr = rec.array(None, formats="f4,i4,i8,f8", shape=100) In [3]: rr.dtype Out[3]: dtype([('f1', '<f4'), ('f2', '<i4'), ('f3', '<i8'), ('f4', '<f8')]) /data/sparty1/dev/site-packages/lib/python/numpy/core/records.py in array(obj, formats, names, titles, shape, byteorder, aligned, offset, strides) 414 return recarray(shape, formats, names=names, titles=titles, 415 buf=obj, offset=offset, strides=strides, --> 416 byteorder=byteorder, aligned=aligned) 417 elif isinstance(obj, str): 418 return fromstring(obj, formats, names=names, titles=titles, /data/sparty1/dev/site-packages/lib/python/numpy/core/records.py in __new__(subtype, shape, formats, names, titles, buf, offset, strides, byteorder, aligned) 152 descr = formats 153 else: --> 154 parsed = format_parser(formats, names, titles, aligned) 155 descr = parsed._descr 156 /data/sparty1/dev/site-packages/lib/python/numpy/core/records.py in __init__(self, formats, names, titles, aligned) 42 class format_parser: 43 def __init__(self, formats, names, titles, aligned=False): ---> 44 self._parseFormats(formats, aligned) 45 self._setfieldnames(names, titles) 46 self._createdescr() /data/sparty1/dev/site-packages/lib/python/numpy/core/records.py in _parseFormats(self, formats, aligned) 51 dtype = sb.dtype(formats, aligned) 52 fields = dtype.fields ---> 53 keys = fields[-1] 54 self._f_formats = [fields[key][0] for key in keys] 55 self._offsets = [fields[key][1] for key in keys] TypeError: unsubscriptable object In [8]: rr = rec.array(None, formats=["f4"], shape=100) In [9]: rr.dtype Out[9]: dtype([('f1', '<f4')]) References: Message-ID: <4460F920.6020108@cox.net> Ryan Krauss wrote: > I am trying to column_stack a bunch of data vectors that I think > should be the same length. For whatever reason, my DAQ system doesn't > always put out exactly the same length vector. They should be 11,000 > elements long, but some are 11,000+/-1. > How do I efficiently drop one or two elements of the lists that are too > long without making a bunch of accidental copies? > > What would > newlist=[item[0:11000] for item in oldlist] > do? That should be fine in terms of copying; 'item[0:11000]' will be a view of item, not a copy and thus will be cheap in terms of memory and time. However, if the vectors are really 11000+/-1 in length, don't you need 'items[:10999]'? -tim > > Ryan From oliphant.travis at ieee.org Tue May 9 15:10:47 2006 From: oliphant.travis at ieee.org (Travis Oliphant) Date: Tue May 9 15:10:47 2006 Subject: [Numpy-discussion] Getting 0.9.8 out this week Message-ID: <4461131B.1050907@ieee.org> I'd like to get 0.9.8 of NumPy released by the end of the week. There are a few Trac tickets that need to be resolved by then. In particular #83 suggests returning scalars instead of 1x1 matrices from certain reduce-like methods. Please chime in on your preference. I'm waiting to hear more feedback before applying the patch. If you can help out on any other ticket that would be much appreciated. For my part, the scalar math module needs to have a re-worked coercion model so that mixing in Python scalars is handled correctly (as well as the case of mixing unsigned and signed integers of the same type --- neither can be cast to each other). I'm open to any suggestions and help.
This should eliminate many of the problems causing tests to fail currently with scalar math. The 0.9.8 release is past due now. I was out of town last week and did not have much time for NumPy. -Travis From gruben at bigpond.net.au Tue May 9 16:39:45 2006 From: gruben at bigpond.net.au (Gary Ruben) Date: Tue May 9 16:39:45 2006 Subject: [Numpy-discussion] downsample vector with averaging In-Reply-To: <445FF0AD.5040005@cox.net> References: <445FF0AD.5040005@cox.net> Message-ID: <446127FF.4090909@bigpond.net.au> Thanks to Tim and Scott, I've added a new example to the rebinning example in the cookbook which takes advantage of using mean. This also does what Ryan wants, but extends it to arbitrary dimensions. Gary R. From tim.hochberg at cox.net Tue May 9 18:36:21 2006 From: tim.hochberg at cox.net (Tim Hochberg) Date: Tue May 9 18:36:21 2006 Subject: [Numpy-discussion] Getting 0.9.8 out this week In-Reply-To: <4461131B.1050907@ieee.org> References: <4461131B.1050907@ieee.org> Message-ID: <4461434F.6060604@cox.net> Travis Oliphant wrote: > > I'd like to get 0.9.8 of NumPy released by the end of the week. > There are a few Trac tickets that need to be resolved by then. > In particular #83 suggests returning scalars instead of 1x1 matrices > from certain reduce-like methods. Please chime in on your > preference. I'm waiting to here more feedback before applying the patch. Do you want replies to the list? Or on the ticket. Or both? I'll start with the list? Let's start with the example in the ticket: >>> m matrix([[0, 1, 2], [3, 4, 5], [6, 7, 8]]) >>> m.argmax() matrix([[8]]) Does anyone else think that this is a fairly nonsensical result? Not that this is specific to matrices, the array result is just as weird: >>> a array([[0, 1, 2], [3, 4, 5], [6, 7, 8]]) >>> a.argmax() 8 Given that obj[obj.argmax()] should really equal obj.max(), argmax with no axis specified should really either raise an exception or return a tuple. Anyway, that's off topic. 
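The complaint that obj[obj.argmax()] should equal obj.max() can at least be patched up by hand for the 2-D example above: the flat index that argmax returns with no axis argument maps back to coordinates with divmod:

```python
import numpy as np

a = np.arange(9).reshape(3, 3)        # the array from the example above

flat = a.argmax()                     # an index into the flattened array
i, j = divmod(int(flat), a.shape[1])  # recover (row, column) for 2-D

assert a[i, j] == a.max()
```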
Let's consider something that makes more sense: >>> m.max() matrix([[8]]) >>> a.max() 8 OK, so how would we explain this result? One way would be to refer to the implementation, saying that first we flatten the matrix/array and then we apply max to it. That sort of makes sense for arrays, but seems pretty outlandish for matrices which are always 2D and don't so much have the concept of flattening. A more appealing description is that we successively apply max along every axis. That is: >>> m.max(1).max(0) matrix([[8]]) >>> a.max(1).max(0) 8 [I start with the last axis here since it makes things more symmetric between the array and matrix case] If we switch to having m.max() return a scalar, then this equivalence goes away. That makes things harder to explain. So, in the absence of more compelling use cases than those presented in the ticket I'd be inclined to leave things as they are. Of course I'm not a user of the matrix class, so take that for what it's worth. -tim > > If you can help out on any other ticket that would be much appreciated. > > For my part, the scalar math module need to have a re-worked coercion > model so that mixing in Python scalars is handled correctly (as well > as the case of mixing unsigned and signed integers of the same type > --- neither can be cast to each other). I'm open to any suggestions > and help. This should eliminate many of the problems causing tests to > fail currently with scalar math. > The 0.9.8 release is past due now. I was out of town last week and > did not have much time for NumPy. > > -Travis > > > > > > ------------------------------------------------------- > Using Tomcat but need to do more? Need to support web services, security?
> Get stuff done quickly with pre-integrated technology to make your job > easier > Download IBM WebSphere Application Server v.1.0.1 based on Apache > Geronimo > http://sel.as-us.falkag.net/sel?cmd=lnk&kid=120709&bid=263057&dat=121642 > _______________________________________________ > Numpy-discussion mailing list > Numpy-discussion at lists.sourceforge.net > https://lists.sourceforge.net/lists/listinfo/numpy-discussion > > From ndarray at mac.com Tue May 9 18:39:27 2006 From: ndarray at mac.com (Sasha) Date: Tue May 9 18:39:27 2006 Subject: [Numpy-discussion] Getting 0.9.8 out this week In-Reply-To: <4461131B.1050907@ieee.org> References: <4461131B.1050907@ieee.org> Message-ID: On 5/9/06, Travis Oliphant wrote: > ... > In particular #83 suggests returning scalars instead of 1x1 matrices > from certain reduce-like methods. Please chime in on your preference. > I'm waiting to here more feedback before applying the patch. > I don't use matrices, so I don't have a strong opinion on this issue. On the other hand this brings back an old and, in my view, unfinished discussion on what aggregating methods should return in numpy: a rank-0 array or a scalar. If you remember I once advocated a point of view that both rank-0 arrays and array scalars are necessary in numpy and that we need to have consistent rules about when to return what. That discussion resulted in the following current behavior: >>> x = zeros((2,2)) >>> x[1,1] 0 >>> x[1,1,...] array(0) an important use case is: >>> y = x[1,1,...] >>> y[...] = 1 >>> x array([[0, 0], [0, 1]]) Note that in case of matrices: >>> m = matrix(x) >>> m[1,1] 0 >>> m[1,1,...] matrix([[0]]) I don't know if this was deliberately implemented this way or just comes as a side effect of the ndarray behavior. I also proposed back then to generalize the aggregating operations to allow aggregation over multiple axes and make x.sum() return a scalar, but x.sum((0,1)) return a rank-0 array. On the matrix feature in question I am -0.
The proposed change trades one inconsistency for another, so why bother. From wbaxter at gmail.com Tue May 9 19:32:00 2006 From: wbaxter at gmail.com (Bill Baxter) Date: Tue May 9 19:32:00 2006 Subject: [Numpy-discussion] Who uses matrix? Message-ID: On 5/10/06, Tim Hochberg wrote: > Of course I'm not a user of the matrix class, so take that for what it's worth. On 5/10/06, Sasha wrote: > > > I don't use matrices, so I don't have a strong opinion on this issue. > > Hmm. And in addition to you two, I'm pretty sure I've seen Travis mention he doesn't use matrix, either. That plus the lack of response to my and others' previous posts on the topic kinda makes me wonder whether there are actually *any* hardcore numpy users out there using the matrix class. I've been using matrix, but I don't consider myself a numpy hardcore at this point. On the issue in question about returning scalars vs matrices from things like sum(), my initial reaction is that any time you have a 1x1 matrix it should just be returned as a scalar. In linear algebra terms, a 1x1 matrix *is* a scalar for all intents and purposes, so it doesn't need to be all dressed up in this fancy 'matrix' garb, as in "matrix([[8]])". But I'm not sure it's worth getting all in a hurry to make the change before numpy 0.9.8, or ever for that matter. --bill From aisaac at american.edu Tue May 9 20:16:10 2006 From: aisaac at american.edu (Alan G Isaac) Date: Tue May 9 20:16:10 2006 Subject: [Numpy-discussion] Who uses matrix? In-Reply-To: References: Message-ID: On Wed, 10 May 2006, Bill Baxter apparently wrote: > kinda makes me wonder whether there are actually any > hardcore numpy users out there using the matrix class. > On the issue in question about returning scalars vs > matrices from things like sum(), my initial reaction is > that any time you have a 1x1 matrix it should just be > returned as a scalar.
> In linear algebra terms, a 1x1 > matrix is a scalar for all intents and purposes I use matrices when doing linear algebra. I believe that matrices will be used a lot by new users, especially those coming from matrix oriented languages such as GAUSS and Matlab. Regarding 1 by 1 matrices, I see two competing considerations: - a one by one matrix is not a scalar, as conformity for multiplication makes obvious - matrix oriented languages, like GAUSS or Matlab, do not draw this distinction for two obvious reasons * it did not fit their initial design (where everything was effectively a matrix) * people find it a convenient convention to conflate the two So that leaves, it seems, three options for numpy: - do it "right" (i.e., make the distinction, allow for explicit conversion, and raise an exception when the two are obviously confused as in addition or multiplication when 1 by 1 is not conformable with the other argument) - conform to existing practice in other languages and likely to the expectations of users migrating from other languages - offer a switch, so that 1 by 1 matrices behave like scalars BUT the issue about what to return from sum() is entirely separate. I prefer it to be a scalar. Cheers, Alan Isaac From aisaac at american.edu Tue May 9 20:16:41 2006 From: aisaac at american.edu (Alan G Isaac) Date: Tue May 9 20:16:41 2006 Subject: [Numpy-discussion] Getting 0.9.8 out this week In-Reply-To: <4461434F.6060604@cox.net> References: <4461131B.1050907@ieee.org><4461434F.6060604@cox.net> Message-ID: On Tue, 09 May 2006, Tim Hochberg apparently wrote: > Let's start with the example in the ticket: > >>> m.argmax() > matrix([[8]]) > Does anyone else think that this is a fairly nonsensical result? Yes. The result should be a scalar. > Not that this is specific to matrices, the array result is > just as weird: > >>> a.argmax() > 8 This is desirable. This is just the meaning of axis=None in this context.
I do not see a reason to discard this convenience and resort to a.ravel().argmax() > Anyway, that's off topic. Let's consider something that makes more sense: > >>> m.max() > matrix([[8]]) > >>> a.max() > 8 With axis=None, I want a scalar in both cases. But if 1 by 1 matrices end up being treated as scalars, I may not care. > If we switch to having m.max() return a scalar, then this > equivalence goes away. That makes things harder to > explain. Again, it is just the meaning of axis=None in this context. Right? Cheers, Alan Isaac From ndarray at mac.com Tue May 9 20:28:00 2006 From: ndarray at mac.com (Sasha) Date: Tue May 9 20:28:00 2006 Subject: [Numpy-discussion] Who uses matrix? In-Reply-To: References: Message-ID: On 5/9/06, Bill Baxter wrote: > ..... In linear algebra terms, a 1x1 matrix *is* a scalar for all intents and purposes ... In linear algebra you start with an N-dimensional space V, then define an NxN dimensional space of matrices M. You can view that space as either linear space V'xV or a ring of linear operators V->V. Thus M is both a (non-commutative) ring (you can add and multiply its elements) and a vector space (you can add and multiply by scalars). [An object that is both a ring and a linear space is called an "algebra"] Diagonal matrices with all diagonal elements equal behave like scalars w.r.t. multiplication, but it is different multiplication in the two cases. In a mathematical expression a . A where a is a scalar and A is a matrix, "." is the R x M -> M operation dictated by the linear space structure. On the other hand in (a . I) * A, "*" is the non-commutative MxM->M operation dictated by the ring structure. Note that for N>1 M does not contain 1x1 matrices and still has elements that are "scalar for *some* intents and purposes": a . I.
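Sasha's two multiplications can be sketched with numpy arrays (a hypothetical illustration of the algebra, not code from the thread):

```python
import numpy as np

a = 3.0                                  # a scalar: an element of R
A = np.array([[1.0, 2.0], [3.0, 4.0]])  # an element of the matrix algebra M
I = np.eye(2)                            # the identity of the ring M

lhs = a * A              # R x M -> M: scaling, from the linear-space structure
rhs = np.dot(a * I, A)   # M x M -> M: the ring product with the "scalar-like" a.I

# The two results coincide, which is why a.I "behaves like" the scalar a,
# even though the two operations are genuinely different.
assert np.allclose(lhs, rhs)
```

The coincidence of the two results for a.I is exactly the "scalar for *some* intents and purposes" Sasha describes.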
Matrix algebra M can be extended to include rectangular matrices and matrices of different sizes, but by doing this one loses both the ring (not all pairs of matrices can be multiplied) and the linear space structures (not all pairs of matrices can be added). All this long introduction was to demonstrate that even from the linear algebra point of view a 1x1 matrix (an element of the space R'xR) is not the same as a scalar (an element of R). In numpy, however, algebraic differences between 1x1 matrices and scalars are not as important as the fact that matrices are mutable while scalars are not. From haase at msg.ucsf.edu Tue May 9 21:25:26 2006 From: haase at msg.ucsf.edu (Sebastian Haase) Date: Tue May 9 21:25:26 2006 Subject: [Numpy-discussion] C extension in new numpy: help to port from numarray Message-ID: <44616AF9.5010905@msg.ucsf.edu> Hi, I'm a long time numarray user. I have some SWIG typemaps that I have been using for quite some time. They are C++ oriented and support creating template'd functions. I only cover the case of contiguous "input" arrays of 1D,2D and 3D. (I would like to ensure that NO implicit type conversions are made so that I can use the same scheme to have arrays changed on the C/C++ side and can later see/access those in python.) (as I just added some text to the scipy wiki: www.scipy.org/Converting_from_numarray) I use something like: PyArrayObject *NAarr = NA_InputArray(inputPyObject, PYARR_TC, NUM_C_ARRAY); arr = (double *) NA_OFFSETDATA(NAarr); What is new numpy's equivalent of NA_InputArray? I tried looking at http://numeric.scipy.org/array_interface.html but did not get the needed info. (numarray info on this is here: http://stsdas.stsci.edu/numarray/numarray-1.5.html/module-C-API.html) Thanks for the new numpy - I'm very excited !!
Sebastian Haase From tim.hochberg at cox.net Tue May 9 21:34:19 2006 From: tim.hochberg at cox.net (Tim Hochberg) Date: Tue May 9 21:34:19 2006 Subject: [Numpy-discussion] Getting 0.9.8 out this week In-Reply-To: References: <4461131B.1050907@ieee.org><4461434F.6060604@cox.net> Message-ID: <44616D16.6090406@cox.net> Alan G Isaac wrote: >On Tue, 09 May 2006, Tim Hochberg apparently wrote: > > >>Let's start with the example in the ticket: >> >>> m.argmax() >> matrix([[8]]) >>Does anyone else think that this is a fairly nonsensical result? >> >> > >Yes. The result should be a scalar. > > Why? The current behaviour is both more self consistent and easier to explain, so it would be nice to see some examples of how a scalar would be advantageous here. > > >>Not that this is specific to matrices, the array result is >>just as weird: >> >>> a.argmax() >> 8 >> >> > >This is desirable. >This is just the meaning of axis=None in this context. >I do not see a reason to discard this convenience >and resort to a.ravel().argmax() > > And that is useful how? How do you plan to use the result without using ravel or its equivalent at some point anyway? Now if a.argmax() returned (2, 2) in this case, that would be useful, but it would also probably be a little bit of a pain to implement. And a little inconsistent with the other operators. But useful. >>Anyway, that's off topic. Let's consider something that makes more sense: >> >>> m.max() >> matrix([[8]]) >> >>> a.max() >> 8 >> >> > >With axis=None, I want a scalar in both cases. >But if 1 by 1 matrices end up being treated as scalars, >I may not care. > > I was going to say that 1x1 matrices are like scalars for most practical purposes, but while that's true for arrays, it's not really true for matrices since '*' stands for matrixmultiply, which doesn't work with 1x1 matrices. So, one is left with either munging dot or munging the axis=None stuff. Hmph....
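Tim's point that a 1x1 object acts like a scalar for arrays but not for matrices can be illustrated like this (a sketch against a current numpy release, which raises in the matrix case; the 0.9.x behavior discussed in the thread differed):

```python
import numpy as np

one_by_one = np.array([[2.0]])
A = np.eye(3)

# Elementwise '*' broadcasts the 1x1 array exactly like a scalar:
assert np.allclose(one_by_one * A, 2.0 * A)

# For matrices '*' is matrix multiplication, and a 1x1 matrix is not
# conformable with a 3x3 matrix, so the product raises ValueError:
m = np.matrix([[2.0]])
M = np.matrix(np.eye(3))
try:
    m * M
    conformable = True
except ValueError:
    conformable = False
assert not conformable
```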
>>If we switch to having m.max() return a scalar, then this >>equivalence goes away. That makes things harder to >>explain. >> >> > >Again, it is just the meaning of axis=None in this context. >Right? > > Is it? Perhaps you should write down a candidate description of how axis should work for matrices. It would be best if you could come up with something that looks internally self consistent. At present axis=None means returns 1x1 matrix and there's a fairly self consistent story that we can tell to explain this behaviour. I haven't seen a description for the alternative behaviour other than it does one thing when axis is a number and something very different when axis=None. That seems less than ideal. Regards, -tim From haase at msg.ucsf.edu Tue May 9 22:00:26 2006 From: haase at msg.ucsf.edu (Sebastian Haase) Date: Tue May 9 22:00:26 2006 Subject: [Numpy-discussion] C extension in new numpy: help to port from numarray In-Reply-To: <44616AF9.5010905@msg.ucsf.edu> References: <44616AF9.5010905@msg.ucsf.edu> Message-ID: <44617334.2070606@msg.ucsf.edu> One additional question: is PyArray_FromDimsAndData creating a copy ? I have very large image data and cannot afford copies :-( -Thanks. Sebastian Haase Sebastian Haase wrote: > Hi, > I'm a long time numarray user. > I have some SWIG typemaps that I'm using for quite some time. > They are C++ oriented and support creating template'd function. > I only cover the case of contiguous "input" arrays of 1D,2D and 3D. > (I would like to ensure that NO implicit type conversions are made so > that I can use the same scheme to have arrays changed on the C/C++ side > and can later see/access those in python.) 
> > (as I just added some text to the scipy wiki: > www.scipy.org/Converting_from_numarray) > I use something like: > > PyArrayObject *NAarr = NA_InputArray(inputPyObject, PYARR_TC, > NUM_C_ARRAY); > arr = (double *) NA_OFFSETDATA(NAarr); > > > What is new numpy's equivalent of NA_InputArray > > I tried looking http://numeric.scipy.org/array_interface.html but did > not get the needed info. > > (numarray info on this is here: > http://stsdas.stsci.edu/numarray/numarray-1.5.html/module-C-API.html) > > Thanks for the new numpy - I'm very excited !! > Sebastian Haase > _______________________________________________ > Numpy-discussion mailing list > Numpy-discussion at lists.sourceforge.net > https://lists.sourceforge.net/lists/listinfo/numpy-discussion From wbaxter at gmail.com Tue May 9 22:11:05 2006 From: wbaxter at gmail.com (Bill Baxter) Date: Tue May 9 22:11:05 2006 Subject: [Numpy-discussion] Who uses matrix? In-Reply-To: References: Message-ID: On 5/10/06, Sasha wrote: > > On 5/9/06, Bill Baxter wrote: > > ..... In linear algebra terms, a 1x1 matrix *is* a scalar for all > intents and purposes ... > > In linear algebra you start with an N-dimensional space V then define > an NxN dimensional space of matrices M. [...] > All this long introduction was to demonstrate that even from the > linear algebra point of view a 1x1 matrix (an element of the space > R'xR) is not the same as a scalar (an element of R). Ok, fair enough. I think what I was trying to express is that it's rarely useful to have such a thing as a 1x1 matrix in linear algebra.
For instance, I can't recall a single textbook or journal article I've read where a distinction was made between a true scalar product between two vectors and the 1x1 matrix resulting from the matrix product x^t * y. But I'll admit that most of what I read is more like applied math than pure math. On the other hand, I would expect that most people trying to write programs to do actual calculations are also going to be more interested in practical applications of math than math theory. Also, if you want to get nit-picky about what is correct in terms of rigorous math, it raises the question as to whether it even makes any sense to apply .sum() to an element of R^n x R^m. In the end Numpy is a package for performing practical computations. So the question shouldn't be whether a 1x1 matrix is really the same thing as a scalar or not, but which way is going to make life the easiest for the people writing the code. Currently numpy lets you multiply a 1x1 matrix times another matrix as if the 1x1 were a scalar. That seems like a practical and useful behavior to me, regardless of its correctness. Anyway, back to sum -- if .sum() is to return a scalar, then what about .sum(axis=0)? Should that be a 1-D array of scalars rather than a matrix? If you answer no, then what about .sum(axis=0).sum(axis=1)? (Unrelated issue, but it seems that .sum(axis=0) and .sum(axis=1) both return row vectors, whereas I would expect the axis=1 variety to be a column vector.) Anyway, seems to be like Tim (I think) said. This is just introducing new inconsistencies in place of old ones, so what's the point. In numpy, however, algebraic differences between 1x1 matrices and > scalars are not as important as the fact that matrices are mutable > while scalars are not. > Seems like for most code the lack of methods and attributes on the scalar would be the bigger deal than the mutability difference. But either way, I'm not sure what point you're trying to make there.
Scalars should go away because 1x1 matrices are more flexible? Regards, --bill -------------- next part -------------- An HTML attachment was scrubbed... URL: From erin.sheldon at gmail.com Tue May 9 22:14:02 2006 From: erin.sheldon at gmail.com (Erin Sheldon) Date: Tue May 9 22:14:02 2006 Subject: [Numpy-discussion] numpy byte ordering bug Message-ID: <331116dc0605092213k35d42c32v9395439ed9265db1@mail.gmail.com> Hi all- The previous byte order problems seem to be fixed in the svn numpy. Here is another odd one: This is on my little-endian linux box. >>> numpy.__version__ '0.9.7.2490' >>> x = numpy.arange(10,dtype='<f8') >>> x array([ 0., 1., 2., 3., 4., 5., 6., 7., 8., 9.]) >>> x = numpy.arange(10,dtype='>f8') >>> x array([ 0.00000000e+000, 1.00000000e+000, 1.37186586e+303, -5.82360826e-011, -7.98920843e+292, 3.60319875e-021, 4.94303335e+282, -2.09830067e-031, -2.87854483e+272, 1.29367874e-041]) Clearly a byte ordering problem. This also fails for '>f4', but works for '>i4' and '>i8'. Erin From tim.hochberg at cox.net Tue May 9 22:37:13 2006 From: tim.hochberg at cox.net (Tim Hochberg) Date: Tue May 9 22:37:13 2006 Subject: [Numpy-discussion] Who uses matrix? In-Reply-To: References: Message-ID: <44617BC5.4060207@cox.net> Bill Baxter wrote: > [MATH elided because I'm too tired to try to follow it] > > > Anyway, back to sum -- if .sum() is to return a scalar, then what > about .sum(axis=0)? Should that be a 1-D array of scalars rather > than a matrix? If you answer no, then what about > .sum(axis=0).sum(axis=1)? (Unrelated issue, but it seems that > .sum(axis=0) and .sum(axis=1) both return row vectors, whereas I would > expect the axis=1 variety to be a column vector.) For matrices, sum(0) returns a 1xN matrix, while sum(1) returns an Nx1 vector as you expect. For arrays, it just returns a 1-D array, which isn't row or column, it's just 1-D. > Anyway, seems to be like Tim (I think) said.
This is just > introducing new inconsistencies in place of old ones, so what's the > point. Well as much as possible the end result should be (a) useful and (b) easy to explain. I suspect that the problem with the matrix class is that not enough people have experience with it, so we're left with either blindly following the lead of other matrix packages, some of which I know do stupid things, or taking our best guess. I suspect things will need to be tweaked as more experience with it piles up. > In numpy, however, algebraic differences between 1x1 matrices and > scalars are not as important as the fact that matrices are mutable > while scalars are not. > > > Seems like for most code the lack of methods and attributes on the > scalar would be the bigger deal than the mutability difference. Scalars in numpy *do* have methods and attributes, which may be why Sasha doesn't think that difference is a big deal ;-). > But either way, I'm not sure what point you're trying to make > there. Scalars should go away because 1x1 matrices are more flexible? Actually I thought that Sasha's position was that both scalars and *rank-0* [aka shape=()] arrays were useful in different circumstances and that we shouldn't completely annihilate rank-0 arrays in favor of scalars. I'm not quite sure what that has to do with 1x1 arrays which are a different kettle of fish, although also weird because of broadcasting. I admit to never taking the time to fully decipher Sasha's position on rank-0 arrays though. Speaking of rank-0 arrays, here's a wacky idea that Sasha will probably appreciate even if it (probably) turns out to be impractical: what if matrix reduce returned 0xN and Nx0 arrays instead of 1xN and Nx1 arrays. This preserves the column/rowness of the result, however a rank-0 array falls out of the axis=None case naturally. Rank-0 arrays are nearly identical to scalars (except for the mutability bit, which I suspect is a pretty minor issue).
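The rank-0 vs scalar distinction Tim mentions can be sketched like this (a minimal illustration, assuming today's numpy API rather than 0.9.x):

```python
import numpy as np

r0 = np.array(5.0)      # rank-0 array: shape == ()
s = np.float64(5.0)     # numpy scalar

assert r0.shape == () and r0.ndim == 0
assert s.shape == ()    # numpy scalars expose the same array-like attributes

# The practical difference: the rank-0 array is mutable in place,
# while the scalar is not.
r0[()] = 7.0
assert float(r0) == 7.0
```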
I generated an Nx0 array by hand to try this out; some stuff works, but at least moderate tweaking would be required to make this work correctly. -tim From wbaxter at gmail.com Tue May 9 22:51:15 2006 From: wbaxter at gmail.com (Bill Baxter) Date: Tue May 9 22:51:15 2006 Subject: [Numpy-discussion] Who uses matrix? In-Reply-To: <44617BC5.4060207@cox.net> References: <44617BC5.4060207@cox.net> Message-ID: On 5/10/06, Tim Hochberg wrote: > > > For matrices, sum(0) returns a 1xN matrix, while sum(1) returns an Nx1 > vector as you expect. For arrays, it just returns a 1-D array, which > isn't row or column it's just 1-D. Ok, that's new in numpy 0.9.6 apparently. I was checking sum()'s behavior on a 0.9.5 install. > Anyway, seems to be like Tim (I think) said. This is just > > introducing new inconsistencies in place of old ones, so what's the > > point. > > Well as much as possible the end result should be (a) useful and (b) > easy to explain. I suspect that the problem with the matrix class is > that not enough people have experience with it, so we're left with > either blindly following the lead of other matrix packages, some of > which I know do stupid things, or taking our best guess. I suspect > things will need to be tweaked as more experience with it piles up. That's kinda why I asked if anyone is seriously using it. In my case, with the number of gotchas I ran across, I feel like I might have been better off just writing my code from the beginning with arrays and calling numpy.dot() to multiply things when I needed to. Unless the matrix class somehow becomes more "central" to numpy I think it's going to continue to languish as a dangly appendage off the main numpy that mostly sorta works most of the time. > In numpy, however, algebraic differences between 1x1 matrices and > > scalars are not as important as the fact that matrices are mutable
> > > > > > Seems like for most code the lack of methods and attributes on the > > scalar would be the bigger deal than the mutability difference. > > Scalars in numpy *do* have methods and attributes, which may be why > Sasha doesn't think that difference is a big deal ;-). Ah. I haven't really understood all this business about the "new numpy scalar module". I thought we were talking about returning plain old python scalars. Regards, --bill -------------- next part -------------- An HTML attachment was scrubbed... URL: From fperez.net at gmail.com Tue May 9 23:06:09 2006 From: fperez.net at gmail.com (Fernando Perez) Date: Tue May 9 23:06:09 2006 Subject: [Numpy-discussion] Who uses matrix? In-Reply-To: References: <44617BC5.4060207@cox.net> Message-ID: On 5/9/06, Bill Baxter wrote: > That's kinda why I asked if anyone is seriously using it. In my case, with > the number of gotchas I ran across, I feel like I might have been better off > just writing my code from the beginning with arrays and calling numpy.dot() > to multiply things when I needed to. Unless the matrix class somehow > becomes more "central" to numpy I think it's going to continue to languish > as a dangly appendage off the main numpy that mostly sorta works most of the > time. Just as a data point: I do lots of linear algebra using Numeric (production code which I'm just about to move to numpy now, but it doesn't matter for this discussion). I always do everything using plain arrays, and I'm happy to call dot() or transpose() here and there. I personally find that the added syntactic overhead of arrays far offsets having to think about all the special cases that seem to arise with the matrix objects. This also makes much of my code work transparently as the dimensionality of the problem (reflected in the rank of the arrays) changes, as long as I'm careful with how I write certain things. 
Cheers, f From aisaac at american.edu Tue May 9 23:19:03 2006 From: aisaac at american.edu (Alan G Isaac) Date: Tue May 9 23:19:03 2006 Subject: [Numpy-discussion] Getting 0.9.8 out this week In-Reply-To: <44616D16.6090406@cox.net> References: <4461131B.1050907@ieee.org><4461434F.6060604@cox.net> <44616D16.6090406@cox.net> Message-ID: >> On Tue, 09 May 2006, Tim Hochberg apparently wrote: >>> Let's start with the example in the ticket: >>> >>> m.argmax() >>> matrix([[8]]) >>> Does anyone else think that this is a fairly nonsensical result? > Alan G Isaac wrote: >> Yes. The result should be a scalar. On Tue, 09 May 2006, Tim Hochberg apparently wrote: > Why? The current behaviour is both more self consistent and easier to > explain, so it would be nice to see some examples of how a scalar would > be advantageous here. Because that is the result for array. For consistency, I think m.argmax() m.A.argmax() should be equivalent. (Also along axes, and therefore never returning matrices. And let ravel really ravel it, rather than duplicating hstack! What is the principle at work: must matrices always produce matrices almost no matter what we do with them? I prefer the principle that standard matrix operations on matrices return matrices. But then, I see I am not ready to be consistent, as I do want m.max(num) to be a matrix ... ) > Now if a.argmax() returned (2, 2) in this case, that would > be useful Agreed. I should not have implied that a scalar return is "better" than a tuple in this case. But this seems a radical change in behavior. If this is the behavior desired in this case, does that not suggest a behavior change for every case? That is, are you not in effect arguing that argmax should return some kind of indexing object such that a.max(num) == a[a.argmax(num)] This seems quite useful but entirely new.
Cheers, Alan Isaac From aisaac at american.edu Tue May 9 23:19:05 2006 From: aisaac at american.edu (Alan G Isaac) Date: Tue May 9 23:19:05 2006 Subject: [Numpy-discussion] Getting 0.9.8 out this week In-Reply-To: <44616D16.6090406@cox.net> References: <4461131B.1050907@ieee.org><4461434F.6060604@cox.net> <44616D16.6090406@cox.net> Message-ID: On Tue, 09 May 2006, Tim Hochberg apparently wrote: > Perhaps you should write down a candidate description of > how axis should work for matrices. It would be best if you > could come up with something that looks internally self > consistent. At present axis=None means returns 1x1 matrix > and there's a fairly self consistent story that we can > tell to explain this behaviour. You mean that x.max(1).max(0) == m.max() (which I take it is the new behavior)? > I haven't seen a description for the alternative behaviour > other than it does one thing when axis is a number and > something very different when axis=None. That seems less > than ideal. OK, I agree. There are several cases where there have been requests to return scalars instead of 1x1 matrices. This is starting to look like that, and I do not want to take a stand on such questions. But for context, the "principle" (such as it is) that I had in mind is essentially that axis=None is a request for an *element* of a matrix. Cheers, Alan Isaac From jonathan.taylor at stanford.edu Wed May 10 00:54:18 2006 From: jonathan.taylor at stanford.edu (Jonathan Taylor) Date: Wed May 10 00:54:18 2006 Subject: [Numpy-discussion] Who uses matrix? In-Reply-To: References: <44617BC5.4060207@cox.net> Message-ID: <44619BE6.3070409@stanford.edu> my $0.02: i have never used a matrix in numpy, though i am quite a regular user of numpy/Numeric/numarray (now just numpy, of course). i agree with fernando about the overhead of using "dot" and "transpose" once in a while..... the overhead is small and it sometimes forces one to be a little less lazy in writing code.
-- jonathan Bill Baxter wrote: > , o > > On 5/10/06, *Tim Hochberg* > wrote: > > > For matrices, sum(0) returns a 1xN matrix, while sum(1) returns a Nx1 > vector as you expect. For arrays, it just returns a 1-D array, which > isn't row or column it's just 1-D. > > > Ok, that's new in numpy 0.9.6 apparently. I was checking sum()s > behavior on a 0.9.5 install. > > > Anyway, seems to be like Tim (I think) said. This is just > > introducing new inconsistencies in place of old ones, so what's the > > point. > > Well as much as possible the end result should be (a) useful and (b) > easy to explain. I suspect that the problem with the matrix class is > that not enough people have experience with it, so we're left with > either blindly following the lead of other matrix packages, some of > which I know do stupid things, or taking our best guess. I suspect > things will need to be tweaked as more experience with it piles up. > > > That's kinda why I asked if anyone is seriously using it. In my case, > with the number of gotchas I ran across, I feel like I might have been > better off just writing my code from the beginning with arrays and > calling numpy.dot() to multiply things when I needed to. Unless the > matrix class somehow becomes more "central" to numpy I think it's > going to continue to languish as a dangly appendage off the main numpy > that mostly sorta works most of the time. > > > In numpy, however, algebraic differences between 1x1 > matrices and > > scalars are not as important as the fact that matrices are > mutable > > while scalars are not. > > > > > > Seems like for most code the lack of methods and attributes on the > > scalar would be the bigger deal than the mutability difference. > > Scalars in numpy *do* have methods and attributes, which may be why > Sasha doesn't think that difference is a big deal ;-). > > > Ah. I haven't really understood all this business about the "new > numpy scalar module". 
I thought we were talking about returning plain > old python scalars. > > Regards, > --bill -- ------------------------------------------------------------------------ I'm part of the Team in Training: please support our efforts for the Leukemia and Lymphoma Society! http://www.active.com/donate/tntsvmb/tntsvmbJTaylor GO TEAM !!! ------------------------------------------------------------------------ Jonathan Taylor Tel: 650.723.9230 Dept. of Statistics Fax: 650.725.8977 Sequoia Hall, 137 www-stat.stanford.edu/~jtaylo 390 Serra Mall Stanford, CA 94305 From jonathan.taylor at stanford.edu Wed May 10 01:21:38 2006 From: jonathan.taylor at stanford.edu (Jonathan Taylor) Date: Wed May 10 01:21:38 2006 Subject: [Numpy-discussion] broken recarray Message-ID: <4461A259.8050105@stanford.edu> hi, sorry to sound like a broken recarray, but i {\em really} don't understand the origin of this error message. i have more complicated examples, but i don't even understand this very simple unpythonic behaviour. can anybody help me out? i have reproduced this error on three different machines with python2.4 (two debian unstables, one red hat ? (a redhat enterprise server at work that i installed numpy on)). my complicated examples also reproduce on these three different machines..... ------------------------------------------------------ [jtaylo at miller jtaylo]$ python2.4 Python 2.4 (#1, Feb 9 2006, 18:46:06) [GCC 3.2.3 20030502 (Red Hat Linux 3.2.3-53)] on linux2 Type "help", "copyright", "credits" or "license" for more information. >>> import numpy as N >>> >>> desc = N.dtype({'names':['x'], 'formats':[N.Float]}) >>> >>> print N.array([(3.,),(4.,)], dtype=desc) [(3.0,) (4.0,)] >>> print N.array([[3.],[4.]], dtype=desc) Traceback (most recent call last): File "", line 1, in ? TypeError: expected a readable buffer object >>> >>> --------------------------------------------------------------------------------------- any ideas? 
jonathan -- ------------------------------------------------------------------------ I'm part of the Team in Training: please support our efforts for the Leukemia and Lymphoma Society! http://www.active.com/donate/tntsvmb/tntsvmbJTaylor GO TEAM !!! ------------------------------------------------------------------------ Jonathan Taylor Tel: 650.723.9230 Dept. of Statistics Fax: 650.725.8977 Sequoia Hall, 137 www-stat.stanford.edu/~jtaylo 390 Serra Mall Stanford, CA 94305 From svetosch at gmx.net Wed May 10 01:26:16 2006 From: svetosch at gmx.net (Sven Schreiber) Date: Wed May 10 01:26:16 2006 Subject: [Numpy-discussion] Getting 0.9.8 out this week In-Reply-To: <4461131B.1050907@ieee.org> References: <4461131B.1050907@ieee.org> Message-ID: <4461A357.5000106@gmx.net> Travis Oliphant schrieb: > > I'd like to get 0.9.8 of NumPy released by the end of the week. There > are a few Trac tickets that need to be resolved by then. > In particular #83 suggests returning scalars instead of 1x1 matrices > from certain reduce-like methods. Please chime in on your preference. > I'm waiting to hear more feedback before applying the patch. > If somebody wants to somehow consolidate the numbers in a matrix into a single number I think it is most natural to return a scalar, which probably is the most intuitive representation of a single number. Also, I can think of many cases where you want to use the resulting number in a scalar multiplication, but I cannot think of a single case where you want to have an exception raised because of non-conforming shapes of the (1,1)-matrix-result and some other matrix. A slightly different issue (I think) is the question of what's the best way to tell numpy that you want to consolidate (or call it reduce, or whatever) over all axes. After reading the other messages so far in this thread, it seems to me that this second issue caused some of the concern, not so much the return type itself.
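For reference, the scalar-returning behavior Sven argues for is what later numpy releases ended up adopting; a sketch against a modern numpy (not the 0.9.x under discussion):

```python
import numpy as np

m = np.matrix([[0, 1, 2], [3, 4, 5], [6, 7, 8]])

total = m.sum()   # axis=None consolidates everything into a single number
assert np.isscalar(total) and total == 36

# With an explicit axis the matrix orientation is preserved:
assert m.sum(axis=0).shape == (1, 3)   # row
assert m.sum(axis=1).shape == (3, 1)   # column
```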
(But I don't have an opinion on this latter syntax-like question, I don't know enough about the issues involved here.) Good luck for the release, and many thanks, Sven From svetosch at gmx.net Wed May 10 01:42:01 2006 From: svetosch at gmx.net (Sven Schreiber) Date: Wed May 10 01:42:01 2006 Subject: [Numpy-discussion] Who uses matrix? In-Reply-To: <44619BE6.3070409@stanford.edu> References: <44617BC5.4060207@cox.net> <44619BE6.3070409@stanford.edu> Message-ID: <4461A70E.5000601@gmx.net> Jonathan Taylor schrieb: > my $0.02: > > i have never used a matrix in numpy, though i am quite a regular user of > numpy/Numeric/numarray (now just numpy, of course). > > i agree with fernando about the overhead of using "dot" and "transpose" > once in a while..... the overhead is small and it sometimes forces one > to be a little less lazy in writing code. > The overhead is not so small imho, and if you use numpy for what you may call rapid prototyping of matrix calculations, the loss of productivity and code readability is substantial. My impression from this list is that there are many people who use numpy for code that is developed and optimized for quite a long time. In those cases it may be best to use arrays. But I think there are many potential numpy users out there who want to quickly implement some matrix calculation of a journal article or textbook. Numpy will not attract those people if they have to write inverse( dot(transpose(a), a) ) instead of (a.T*a).I as always only my 2c as well, of course cheers, sven From schofield at ftw.at Wed May 10 01:52:14 2006 From: schofield at ftw.at (Ed Schofield) Date: Wed May 10 01:52:14 2006 Subject: [Numpy-discussion] Who uses matrix? 
In-Reply-To: <44617BC5.4060207@cox.net> References: <44617BC5.4060207@cox.net> Message-ID: <4461A71A.30602@ftw.at> Tim Hochberg wrote: > Actually I thought that Sasha's position was that both scalars and > *rank-0* [aka shape=()] arrays were useful in different circumstances > and that we shouldn't completely anihilate rank-0 arrays in favor of > scalars. I'm not quite sure what that has to do with 1x1 arrays which > are a different kettle of fish, although also weird because of > broadcasting. I admit to never taking the time to fully deciphre > Sasha's position on rank-0 arrays though. > > Speaking of rank-0 arrays, here's a wacky idea that Sasha will > probably appreciate even if it (probably) turns out to be impracticle: > what if matrix reduce returned 0xN and Nx0 array instead of 1xN and > Nx1 arrays. This preserves the column/rowness of the result, however > a rank-0 array falls out of the axis=None case naturally. Rank-0 > arrays are nearly identical to scalars (except for the mutability bit, > which I suspect is a pretty minor issue). I generated a Nx0 array by > hand to try this out; some stuff works, but at least moderate tweaking > would be required to make this work correctly. I believe something like this is the right way to solve the problem. Reduce operations on arrays work well by also reducing dimensionality. I think reduce operations on matrices are clunky because matrices are inherently 2d, and that the result of matrix reduce should *not* be a matrix, but a *vector* -- a 1d object, similar to a 1d array but with orientation information. Tim's wacky 0xN and Nx0 objects are something like row and column vectors. But the simplest construction of this would probably be a new vector object. This needn't be complex, and could share much of the matrix code. If there's sufficient interest I could write a simple vector class and post it for review. 
-- Ed From schofield at ftw.at Wed May 10 02:25:20 2006 From: schofield at ftw.at (Ed Schofield) Date: Wed May 10 02:25:20 2006 Subject: [Numpy-discussion] argmax In-Reply-To: <4461434F.6060604@cox.net> References: <4461131B.1050907@ieee.org> <4461434F.6060604@cox.net> Message-ID: <4461B26F.4060309@ftw.at> Tim Hochberg wrote: > >>> m > matrix([[0, 1, 2], > [3, 4, 5], > [6, 7, 8]]) > >>> m.argmax() > matrix([[8]]) > > Does anyone else think that this is a fairly nonsensical result? Not > that this is specific to matrices, the array result is just as weird: > > >>> a > array([[0, 1, 2], > [3, 4, 5], > [6, 7, 8]]) > >>> a.argmax() > 8 > > Given that obj[obj.argmax()] should really equal obj.max(), argmax > with no axis specified should really either raise an exception or > return a tuple. I came across this last week, and I came to a similar conclusion. I agree that a sequence of indices would be far more useful. This sequence could be either an array or a tuple: With a tuple: >>> a array([[0, 1, 2], [3, 4, 5], [6, 7, 8]]) >>> a.argmax() (2, 2) >>> a[a.argmax()] == a.max() True >>> b = array([0, 10, 20]) >>> b.argmax() (2,) With an array: >>> a array([[0, 1, 2], [3, 4, 5], [6, 7, 8]]) >>> a.argmax() array([(2,2)] >>> a[tuple(a.argmax())] == a.max() True >>> b = array([0, 10, 20]) >>> print b.argmax() 2 >>> type(b.argmax()) [currently ] I think either one would be more useful than the current ravel().argmax() behaviour. A tuple would be more consistent with the nonzero method. -- Ed From st at sigmasquared.net Wed May 10 02:31:22 2006 From: st at sigmasquared.net (Stephan Tolksdorf) Date: Wed May 10 02:31:22 2006 Subject: [Numpy-discussion] Getting 0.9.8 out this week In-Reply-To: <4461434F.6060604@cox.net> References: <4461131B.1050907@ieee.org> <4461434F.6060604@cox.net> Message-ID: <4461B2B0.30408@sigmasquared.net> I'm +1 for returning scalars instead of 1x1 for reduce-like methods on matrices in case no axis is specified. 
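The tuple-of-indices result Ed sketches above can already be obtained with unravel_index (a sketch using the current numpy API as an assumption, rather than the behavior change he proposes):

```python
import numpy as np

a = np.array([[0, 1, 2], [3, 4, 5], [6, 7, 8]])

flat = a.argmax()                      # current behavior: flat index into a.ravel()
idx = np.unravel_index(flat, a.shape)  # the (2, 2) tuple discussed above

assert flat == 8
assert idx == (2, 2)
assert a[idx] == a.max()
```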
Tim Hochberg wrote: > >>> m.max(1).max(0) > matrix([[8]]) > >>> a.max(1).max(0) > 8 (...) > If we switch to having m.max() return a scalar, then this equivalence > goes away. That makes things harder to explain. > > So, in the absence of more compelling use cases that those presented in > the ticket I'd be inclined to leave things as they are. Of course I'm > not a user of the matrix class, so take that for what it's worth. I don't think symmetry between matrix and array classes should be an argument for doing things a certain way for matrices when there are other arguments against it, because matrices were meant to behave differently in the first place. As to the consistency of explanation, why isn't it consistent to say that max() returns a scalar and max(axis) returns a column or row vector (in form of a matrix)? I don't see a problem with m.max(1).max(0) still returning a 1x1 matrix, though I think for matrix multiplication broadcasting rules shouldn't be applied to 1x1 matrices. (Is there any use for broadcasting rules applied to matrix multiplication in general?) Regards, Stephan From st at sigmasquared.net Wed May 10 03:39:19 2006 From: st at sigmasquared.net (Stephan Tolksdorf) Date: Wed May 10 03:39:19 2006 Subject: [Numpy-discussion] Who uses matrix? In-Reply-To: <4461A71A.30602@ftw.at> References: <44617BC5.4060207@cox.net> <4461A71A.30602@ftw.at> Message-ID: <4461C295.5080201@sigmasquared.net> Ed Schofield wrote: > I believe something like this is the right way to solve the problem. > Reduce operations on arrays work well by also reducing dimensionality. > I think reduce operations on matrices are clunky because matrices are > inherently 2d, and that the result of matrix reduce should *not* be a > matrix, but a *vector* -- a 1d object, similar to a 1d array but with > orientation information. For me a vector with "orientation" is a matrix. 
While it sometimes might be convenient not to differentiate between column and row vectors, which could be an argument for a vector without orientation, I wouldn't want to make a difference between a row vector and a matrix; both are linear forms in the mathematical sense. Why shouldn't reduce operations just return a scalar for no axis argument and an Nx1 or 1xN matrix when an axis is specified? Regards, Stephan From pjssilva at ime.usp.br Wed May 10 03:57:02 2006 From: pjssilva at ime.usp.br (Paulo J. S. Silva) Date: Wed May 10 03:57:02 2006 Subject: [Numpy-discussion] Who uses matrix? In-Reply-To: References: Message-ID: <1147258568.29685.12.camel@localhost.localdomain> Bill Baxter wrote: > Ok, fair enough. I think what I was trying to express is that it's > rarely useful to have such a thing as a 1x1 matrix in linear algebra. > For instance, I can't recall a single textbook or journal article I've > read where a distinction was made between a true scalar product > between two vectors and the 1x1 matrix resulting from the matrix > product of x^t * y. But I'll admit that most of what I read is more > like applied math than pure math. On the other hand, I would expect > that most people trying to write programs to do actual calculations > are also going to be more interested in practical applications of math > than math theory. > > > Also, if you want to get nit-picky about what is correct in terms of > rigorous math, it raises the question as to whether it even makes any > sense to apply .sum() to an element of R^n x R^m. In the end Numpy > is a package for performing practical computations. +1. 1x1 matrices usually appear when we compute an inner product. I also read (and write) lots (fewer) papers and it is very common to define the (real) inner product as x.T*y (where x and y are column vectors: nx1 matrices). Of course this is an abuse of notation as the inner product should return a real number. As you see, mathematics does this sometimes: an abuse of notation.
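The 1x1-matrix-versus-scalar distinction under discussion is easy to demonstrate; a sketch using numpy's matrix class:

```python
import numpy

x = numpy.matrix([[1.0], [2.0], [3.0]])   # a column vector as a 3x1 matrix
ip = x.T * x                              # matrix product: a 1x1 matrix
print(ip.shape)                           # (1, 1)

a = numpy.asarray(x).ravel()              # the same data as a plain 1-d array
s = numpy.dot(a, a)                       # a true scalar: 14.0
```

The matrix product preserves 2-d-ness (hence the 1x1 result), while the array dot of two 1-d arrays collapses to the scalar that the mathematical inner product would give.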
Actually, I feel that matrices are very important in numpy, for the compatibility reasons cited before. You can call me lazy, but my mind really prefers the second option below: > inverse( dot(transpose(a), a) ) instead of (a.T*a).I Best, Paulo -- Paulo José da Silva e Silva Professor Assistente do Dep. de Ciência da Computação (Assistant Professor of the Computer Science Dept.) Universidade de São Paulo - Brazil e-mail: pjssilva at ime.usp.br Web: http://www.ime.usp.br/~pjssilva Teoria é o que não entendemos o (Theory is something we don't) suficiente para chamar de prática. (understand well enough to call practice) From phddas at yahoo.com Wed May 10 03:58:04 2006 From: phddas at yahoo.com (Fred J.) Date: Wed May 10 03:58:04 2006 Subject: [Numpy-discussion] numpy book Message-ID: <20060510105734.44985.qmail@web54607.mail.yahoo.com> Hi where do I get the numpy book from? a link would be fine. and thanks for making it available for such a low price. --------------------------------- New Yahoo! Messenger with Voice. Call regular phones from your PC and save big. -------------- next part -------------- An HTML attachment was scrubbed... URL: From steve at shrogers.com Wed May 10 05:38:01 2006 From: steve at shrogers.com (Steven H. Rogers) Date: Wed May 10 05:38:01 2006 Subject: [Numpy-discussion] numpy book In-Reply-To: <20060510105734.44985.qmail@web54607.mail.yahoo.com> References: <20060510105734.44985.qmail@web54607.mail.yahoo.com> Message-ID: <4461DEDD.9030306@shrogers.com> http://www.trelgol.com/ Fred J. wrote: > Hi > > where do I get the numpy book from? a link would be fine. and thanks for > making it available for such a low price. > > ------------------------------------------------------------------------ > New Yahoo! Messenger with Voice. Call regular phones from your PC > > and save big. -- Steven H. Rogers, Ph.D., steve at shrogers.com Weblog: http://shrogers.com/weblog "He who refuses to do arithmetic is doomed to talk nonsense."
-- John McCarthy From joseph.a.crider at Boeing.com Wed May 10 06:59:13 2006 From: joseph.a.crider at Boeing.com (Crider, Joseph A) Date: Wed May 10 06:59:13 2006 Subject: [Numpy-discussion] Compiling numpy-0.9.6 on Solaris 9 Message-ID: I am considering moving to SciPy for our project as it has some features that we need and that we don't have time to write ourselves. However, I need to be able to support several different architectures, with Solaris 9 being the most important at this time. I've succeeded in building numpy on Cygwin and on another Sun running Solaris 8 (with gcc 2.95.3 and python 2.4), but I'm not getting very far with Solaris 9 (with gcc 3.3.2 and python 2.4). Here are a few lines from the output of the command "python setup.py install --home=~": Generating build/src/numpy/core/config.h customize SunFCompiler customize SunFCompiler gcc options: '-fno-strict-aliasing -DNDEBUG -g -O3 -Wall -Wstrict-prototypes -fPIC' compile options: '-I/usr/local/include/python2.4 -Inumpy/core/src -Inumpy/core/include -I/usr/loca/include/python2.4 -c' gcc: _configtest.c Segmentation Fault Segmentation Fault failure. removing: _configtest.c _configtest.o The last few lines of the traceback are: File "numpy/core/setup.py", line 37, in generate_config_h raise "ERROR: Failed to test configuration" ERROR: Failed to test configuration This system does have the Sun compiler and gcc 2.95.3 installed also, but I don't know how to change to another compiler without messing up my path. (Our project does use some Python 2.4 features and Python 2.2 is installed in the same directory as gcc 2.95.3 while gcc 3.3.2 is in the same directory as Python 2.4.) Any suggestions? J. 
Allen Crider (256)461-2699 From aisaac at american.edu Wed May 10 07:22:13 2006 From: aisaac at american.edu (Alan G Isaac) Date: Wed May 10 07:22:13 2006 Subject: [Numpy-discussion] numpy book In-Reply-To: <20060510105734.44985.qmail@web54607.mail.yahoo.com> References: <20060510105734.44985.qmail@web54607.mail.yahoo.com> Message-ID: On Wed, 10 May 2006, "Fred J." apparently wrote: > where do I get the numpy book from? http://www.tramy.us/ Cheers, Alan Isaac From stefan at sun.ac.za Wed May 10 07:41:19 2006 From: stefan at sun.ac.za (Stefan van der Walt) Date: Wed May 10 07:41:19 2006 Subject: [Numpy-discussion] Converting a flat index to an array index Message-ID: <20060510144032.GB15544@mentat.za.net> Hi all, Methods like argmax() and argmin() return an index into a flattened array. For example, x = N.zeros((3,3)) x[1,0] = 1 x.argmax() == 3 Is there an easy way to convert this flat index back to the original array index, i.e. from 3 to (1,0)? If not, the attached code might be a useful addition. Regards Stéfan -------------- next part -------------- A non-text attachment was scrubbed... Name: idx_fromflat.py Type: text/x-python Size: 1487 bytes Desc: not available URL: From joseph.a.crider at Boeing.com Wed May 10 08:28:02 2006 From: joseph.a.crider at Boeing.com (Crider, Joseph A) Date: Wed May 10 08:28:02 2006 Subject: [Numpy-discussion] Compiling numpy-0.9.6 on Solaris 9 In-Reply-To: <4461FBD1.6000109@sigmasquared.net> Message-ID: According to one of our system administrators, she installed Python 2.4 from a precompiled package on Sunfreeware.com. The message that is displayed when Python is started indicates it was built using gcc 3.3.2, so I would have expected more problems on our Solaris 8 machine (which only has gcc 2.95.3) than Solaris 9. J.
Allen Crider (256)461-2699 -----Original Message----- From: Stephan Tolksdorf [mailto:st at sigmasquared.net] Sent: Wednesday, May 10, 2006 9:42 AM To: Crider, Joseph A Subject: Re: [Numpy-discussion] Compiling numpy-0.9.6 on Solaris 9 Addendum: Same runtime library should suffice normally. Stephan > Not knowing anything about Sun, I'd just like to note that you need to > make sure that Python and its extension (Numpy) are built with the same > compiler and runtime library. You are probably aware of this and made > sure they are, but just in case... > > Regards, > Stephan > > > Crider, Joseph A wrote: >> I am considering moving to SciPy for our project as it has some features >> that we need and that we don't have time to write ourselves. However, I >> need to be able to support several different architectures, with Solaris >> 9 being the most important at this time. I've succeeded in building >> numpy on Cygwin and on another Sun running Solaris 8 (with gcc 2.95.3 >> and python 2.4), but I'm not getting very far with Solaris 9 (with gcc >> 3.3.2 and python 2.4). >> >> Here are a few lines from the output of the command "python setup.py >> install --home=~": >> >> Generating build/src/numpy/core/config.h >> customize SunFCompiler >> customize SunFCompiler >> gcc options: '-fno-strict-aliasing -DNDEBUG -g -O3 -Wall >> -Wstrict-prototypes -fPIC' >> compile options: '-I/usr/local/include/python2.4 -Inumpy/core/src >> -Inumpy/core/include -I/usr/loca/include/python2.4 -c' >> gcc: _configtest.c >> Segmentation Fault >> Segmentation Fault >> failure. >> removing: _configtest.c _configtest.o >> >> The last few lines of the traceback are: >> File "numpy/core/setup.py", line 37, in generate_config_h >> raise "ERROR: Failed to test configuration" >> ERROR: Failed to test configuration >> >> This system does have the Sun compiler and gcc 2.95.3 installed also, >> but I don't know how to change to another compiler without messing up my >> path. 
(Our project does use some Python 2.4 features and Python 2.2 is >> installed in the same directory as gcc 2.95.3 while gcc 3.3.2 is in the >> same directory as Python 2.4.) >> >> Any suggestions? >> >> J. Allen Crider >> (256)461-2699 >> >> >> >> ------------------------------------------------------- >> Using Tomcat but need to do more? Need to support web services, security? >> Get stuff done quickly with pre-integrated technology to make your job >> easier >> Download IBM WebSphere Application Server v.1.0.1 based on Apache >> Geronimo >> http://sel.as-us.falkag.net/sel?cmd=k&kid0709&bid&3057&dat1642 >> _______________________________________________ >> Numpy-discussion mailing list >> Numpy-discussion at lists.sourceforge.net >> https://lists.sourceforge.net/lists/listinfo/numpy-discussion >> >> > > From chanley at stsci.edu Wed May 10 09:28:00 2006 From: chanley at stsci.edu (Christopher Hanley) Date: Wed May 10 09:28:00 2006 Subject: [Numpy-discussion] use of boolean index array returns byteswapped values Message-ID: <44621454.1040008@stsci.edu> When using a big-endian array on a little-endian OS, the use of a boolean array as an index array causes the resulting array to have byteswapped values. Example code is below: In [14]: a = numpy.array([1,2,3,4,5,6,7,8,9],'>f8') In [15]: a Out[15]: array([ 1., 2., 3., 4., 5., 6., 7., 8., 9.]) In [16]: x = numpy.where( (a>2) & (a<6) ) In [17]: x Out[17]: (array([2, 3, 4]),) In [18]: a[x] Out[18]: array([ 3., 4., 5.]) In [19]: y = ( (a>2) & (a<6) ) In [20]: y Out[20]: array([False, False, True, True, True, False, False, False, False], dtype=bool) In [21]: a[y] Out[21]: array([ 1.04346664e-320, 2.05531309e-320, 2.56123631e-320]) This bug was originally discovered by Erin Sheldon while testing pyfits. I have submitted this bug report on the Trac site. 
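Until the bug is fixed, a possible workaround (a sketch, consistent with the session above: where() works, so index by positions, or make a native-byte-order copy before boolean indexing):

```python
import numpy

a = numpy.array([1, 2, 3, 4, 5, 6, 7, 8, 9], dtype='>f8')  # big-endian
mask = (a > 2) & (a < 6)

# Workaround 1: convert the boolean mask to integer positions first.
via_where = a[numpy.where(mask)]

# Workaround 2: make a native-endian copy, then use the mask directly.
native = a.astype(numpy.float64)
via_mask = native[mask]

print(via_where)   # positions (2, 3, 4) -> the values 3., 4., 5.
```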
Chris From gruel at astro.ufl.edu Wed May 10 09:47:13 2006 From: gruel at astro.ufl.edu (Nicolas Gruel) Date: Wed May 10 09:47:13 2006 Subject: [Numpy-discussion] numpy book In-Reply-To: References: <20060510105734.44985.qmail@web54607.mail.yahoo.com> Message-ID: <200605101222.06050.gruel@astro.ufl.edu> And the new version? I bought mine some time ago and was surprised to see that some people have a newer version. I'm not complaining (I'm sure Travis is very busy with NumPy), but perhaps a good thing to do would be to automatically send a link to the latest version of the book. Perhaps a web site with a login and password, given when you buy the book, would be the perfect solution: Travis could upload his new version to the site and people could check it themselves. Just a suggestion, but I would like to have my book updated; my version is dated October 27, 2005... N. Le Wednesday 10 Mai 2006 10:28, Alan G Isaac a écrit : > On Wed, 10 May 2006, "Fred J." apparently wrote: > > where do I get the numpy book from? > > http://www.tramy.us/ > > Cheers, > Alan Isaac > > > > > > > ------------------------------------------------------- > Using Tomcat but need to do more? Need to support web services, security?
> Get stuff done quickly with pre-integrated technology to make your job > easier Download IBM WebSphere Application Server v.1.0.1 based on Apache > Geronimo > http://sel.as-us.falkag.net/sel?cmd=lnk&kid=120709&bid=263057&dat=121642 > _______________________________________________ > Numpy-discussion mailing list > Numpy-discussion at lists.sourceforge.net > https://lists.sourceforge.net/lists/listinfo/numpy-discussion From oliphant.travis at ieee.org Wed May 10 09:54:13 2006 From: oliphant.travis at ieee.org (Travis Oliphant) Date: Wed May 10 09:54:13 2006 Subject: [Numpy-discussion] numpy byte ordering bug In-Reply-To: <331116dc0605092213k35d42c32v9395439ed9265db1@mail.gmail.com> References: <331116dc0605092213k35d42c32v9395439ed9265db1@mail.gmail.com> Message-ID: <44621A6D.8010002@ieee.org> Erin Sheldon wrote: > Hi all- > > The previous byte order problems seem to be fixed > in the svn numpy. Here is another odd one: > > This is on my little-endian linux box. >>>> numpy.__version__ > '0.9.7.2490' >>>> x = numpy.arange(10,dtype='<f8') >>>> x > array([ 0., 1., 2., 3., 4., 5., 6., 7., 8., 9.]) >>>> x = numpy.arange(10,dtype='>f8') >>>> x > array([ 0.00000000e+000, 1.00000000e+000, 1.37186586e+303, > -5.82360826e-011, -7.98920843e+292, 3.60319875e-021, > 4.94303335e+282, -2.09830067e-031, -2.87854483e+272, > 1.29367874e-041]) > > Clearly a byte ordering problem. This also fails for '>f4' , but works > for > '>i4' and '>i8'. It works by accident on integer arrays. arange for non-native byte-order needs to be disabled or handled separately by byte-swapping after completion. Enter a Ticket if you haven't already. -Travis From oliphant.travis at ieee.org Wed May 10 10:02:14 2006 From: oliphant.travis at ieee.org (Travis Oliphant) Date: Wed May 10 10:02:14 2006 Subject: [Numpy-discussion] Who uses matrix?
In-Reply-To: References: Message-ID: <44621C57.6050302@ieee.org> Bill Baxter wrote: > > On 5/10/06, *Tim Hochberg* > wrote: > > Of course I'm not a user of the matrix class, so take that for what > it's worth. > > On 5/10/06, *Sasha* > wrote: > > > I don't use matrices, so I don't have a strong opinion on this issue. > > > Hmm. And in addition to you two, I'm pretty sure I've seen Travis > mention he doesn't use matrix, either. That plus the lack of response > to my and others' previous posts on the topic kinda makes me wonder > whether there are actually *any* hardcore numpy users out there using > the matrix class. That's not true, I do use matrices. I just use them in small doses --- to simplify certain expressions. I don't typically use them for all arrays in my code, however. -Travis From oliphant.travis at ieee.org Wed May 10 10:06:05 2006 From: oliphant.travis at ieee.org (Travis Oliphant) Date: Wed May 10 10:06:05 2006 Subject: [Numpy-discussion] C extension in new numpy: help to port from numarray In-Reply-To: <44617334.2070606@msg.ucsf.edu> References: <44616AF9.5010905@msg.ucsf.edu> <44617334.2070606@msg.ucsf.edu> Message-ID: <44621D61.5040102@ieee.org> Sebastian Haase wrote: > One additional question: > is PyArray_FromDimsAndData creating a copy ? > I have very large image data and cannot afford copies :-( No, it uses the data as the memory space for the array (but you have to either manage that memory area yourself or reset the OWNDATA flag to get NumPy to delete it for you on array deletion). 
-Travis From oliphant.travis at ieee.org Wed May 10 10:19:06 2006 From: oliphant.travis at ieee.org (Travis Oliphant) Date: Wed May 10 10:19:06 2006 Subject: [Numpy-discussion] argmax In-Reply-To: <4461B26F.4060309@ftw.at> References: <4461131B.1050907@ieee.org> <4461434F.6060604@cox.net> <4461B26F.4060309@ftw.at> Message-ID: <44621F89.3050903@ieee.org> Ed Schofield wrote: > Tim Hochberg wrote: > >> >>> m >> matrix([[0, 1, 2], >> [3, 4, 5], >> [6, 7, 8]]) >> >>> m.argmax() >> matrix([[8]]) >> >> Does anyone else think that this is a fairly nonsensical result? Not >> that this is specific to matrices, the array result is just as weird: >> >> >>> a >> array([[0, 1, 2], >> [3, 4, 5], >> [6, 7, 8]]) >> >>> a.argmax() >> 8 >> >> Given that obj[obj.argmax()] should really equal obj.max(), argmax >> with no axis specified should really either raise an exception or >> return a tuple. >> > > I came across this last week, and I came to a similar conclusion. I > agree that a sequence of indices would be far more useful. This > sequence could be either an array or a tuple: > > but a.flat[a.argmax()] == a.max() works also. I'm not convinced it's worth special-casing the argmax method because a.flat exists already. -Travis From haase at msg.ucsf.edu Wed May 10 10:41:02 2006 From: haase at msg.ucsf.edu (Sebastian Haase) Date: Wed May 10 10:41:02 2006 Subject: [Numpy-discussion] C extension in new numpy: help to port from numarray In-Reply-To: <44621D61.5040102@ieee.org> References: <44616AF9.5010905@msg.ucsf.edu> <44617334.2070606@msg.ucsf.edu> <44621D61.5040102@ieee.org> Message-ID: <200605101040.20624.haase@msg.ucsf.edu> On Wednesday 10 May 2006 10:05, Travis Oliphant wrote: > Sebastian Haase wrote: > > One additional question: > > is PyArray_FromDimsAndData creating a copy ? 
> > I have very large image data and cannot afford copies :-( > > No, it uses the data as the memory space for the array (but you have to > either manage that memory area yourself or reset the OWNDATA flag to get > NumPy to delete it for you on array deletion). > > -Travis Thanks for the reply. Regarding "setting the OWNDATA flag": How does NumPy know if it should call free (C code) or delete [] (C++ code) ? I have been told that at least on some platforms it's crucial to properly match those according to whether you used malloc or the new operator (only available in C++). Thanks, - Sebastian From tim.hochberg at cox.net Wed May 10 11:16:04 2006 From: tim.hochberg at cox.net (Tim Hochberg) Date: Wed May 10 11:16:04 2006 Subject: [Numpy-discussion] argmax In-Reply-To: <44621F89.3050903@ieee.org> References: <4461131B.1050907@ieee.org> <4461434F.6060604@cox.net> <4461B26F.4060309@ftw.at> <44621F89.3050903@ieee.org> Message-ID: <44622DC3.9040507@cox.net> Travis Oliphant wrote: > Ed Schofield wrote: > >> Tim Hochberg wrote: >> >> >>> >>> m >>> matrix([[0, 1, 2], >>> [3, 4, 5], >>> [6, 7, 8]]) >>> >>> m.argmax() >>> matrix([[8]]) >>> >>> Does anyone else think that this is a fairly nonsensical result? Not >>> that this is specific to matrices, the array result is just as weird: >>> >>> >>> a >>> array([[0, 1, 2], >>> [3, 4, 5], >>> [6, 7, 8]]) >>> >>> a.argmax() >>> 8 >>> >>> Given that obj[obj.argmax()] should really equal obj.max(), argmax >>> with no axis specified should really either raise an exception or >>> return a tuple. >>> >> >> >> I came across this last week, and I came to a similar conclusion. I >> agree that a sequence of indices would be far more useful. This >> sequence could be either an array or a tuple: >> >> > > > but > > a.flat[a.argmax()] == a.max() > > works also. > > > I'm not convinced it's worth special-casing the argmax method because > a.flat exists already. That's reasonable.
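Spelling out the a.flat trick, plus one way to recover a tuple index from the flat one with plain divmod arithmetic (the divmod step is just a sketch for the 2-d case, not an existing numpy method):

```python
import numpy

a = numpy.array([[0, 1, 2],
                 [3, 4, 5],
                 [6, 7, 8]])

flat_idx = a.argmax()               # index into the flattened array: 8
assert a.flat[flat_idx] == a.max()  # the equivalence noted above

# Recover a (row, column) tuple from the flat index of a 2-d array.
row, col = divmod(int(flat_idx), a.shape[1])
assert a[row, col] == a.max()
print(row, col)                     # 2 2
```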
However, we should probably add a note to the docstring of argmin and argmax that describes this trick. -tim From Chris.Barker at noaa.gov Wed May 10 11:34:12 2006 From: Chris.Barker at noaa.gov (Christopher Barker) Date: Wed May 10 11:34:12 2006 Subject: [Numpy-discussion] Who uses matrix? In-Reply-To: References: Message-ID: <44623207.2020306@noaa.gov> Bill Baxter wrote: > (Unrelated > issue, but it seems that .sum(axis=0) and .sum(axis=1) both return row > vectors, whereas I would expect the axis=1 variety to be a column vector.) It works for me: >>> m matrix([[1, 2], [3, 4]]) >>> m.sum(0) matrix([[4, 6]]) >>> m.sum(1) matrix([[3], [7]]) >>> N.__version__ '0.9.6' > In numpy, however, algebraic differences between 1x1 matrices and >> scalars are not as important as the fact that matrices are mutable >> while scalars are not. >> > Seems like for most code the lack of methods and attributes on the scalar > would be the bigger deal than the mutability difference. But either way, > I'm not sure what point you're trying to make there. Scalars should go > away > because 1x1 matrices are more flexible? Isn't that what rank-0 arrays are for? > Scalars in numpy *do* have methods and attributes, which may be why > Sasha doesn't think that difference is a big deal ;-). Then __repr__ needs to be fixed: >>> a array([0, 1, 2, 3, 4]) >>> s = a.sum() >>> type(s) >>> type(10) >>> repr(s) '10' >>> repr(10) '10' Is there even a direct way to construct a numpy scalar? > Actually I thought that Sasha's position was that both scalars and > *rank-0* [aka shape=()] arrays were useful in different circumstances > and that we shouldn't completely annihilate rank-0 arrays in favor of > scalars. What is the difference, except that rank-0 arrays are mutable? And I do think a mutable scalar is a good thing to have. Why not make numpy scalars mutable, and then would there be a difference? -Chris -- Christopher Barker, Ph.D.
Oceanographer NOAA/OR&R/HAZMAT (206) 526-6959 voice 7600 Sand Point Way NE (206) 526-6329 fax Seattle, WA 98115 (206) 526-6317 main reception Chris.Barker at noaa.gov From oliphant.travis at ieee.org Wed May 10 11:35:04 2006 From: oliphant.travis at ieee.org (Travis Oliphant) Date: Wed May 10 11:35:04 2006 Subject: [Numpy-discussion] C extension in new numpy: help to port from numarray In-Reply-To: <200605101040.20624.haase@msg.ucsf.edu> References: <44616AF9.5010905@msg.ucsf.edu> <44617334.2070606@msg.ucsf.edu> <44621D61.5040102@ieee.org> <200605101040.20624.haase@msg.ucsf.edu> Message-ID: <44623231.4040603@ieee.org> Sebastian Haase wrote: > On Wednesday 10 May 2006 10:05, Travis Oliphant wrote: > >> Sebastian Haase wrote: >> >>> One additional question: >>> is PyArray_FromDimsAndData creating a copy ? >>> I have very large image data and cannot afford copies :-( >>> >> No, it uses the data as the memory space for the array (but you have to >> either manage that memory area yourself or reset the OWNDATA flag to get >> NumPy to delete it for you on array deletion). >> >> -Travis >> > > Thanks for the reply. > Regarding "setting the OWNDATA flag": > How does NumPy know if it should call free (C code) or delete [] (C++ code) ? > It doesn't. It always uses _pya_free which is a macro that is defined to either system free or Python's memory-manager equivalent. It should always be paired with _pya_malloc. Yes, you can have serious problems by mixing memory allocators. In other words, unless you know what you are doing it is unwise to set the OWNDATA flag for data that was defined elsewhere. My favorite method is to simply let NumPy create the memory for you (e.g. use PyArray_SimpleNew). Then, you won't have trouble. If that method is not possible, then the next best thing to do is to define a simple Python Object that uses reference counting to manage the memory for you.
Then, you point array->base to that object so that its reference count gets decremented when the array disappears. The simple Python Object defines its tp_dealloc function to call the appropriate free. -Travis From oliphant.travis at ieee.org Wed May 10 11:43:03 2006 From: oliphant.travis at ieee.org (Travis Oliphant) Date: Wed May 10 11:43:03 2006 Subject: [Numpy-discussion] C extension in new numpy: help to port from numarray In-Reply-To: <44616AF9.5010905@msg.ucsf.edu> References: <44616AF9.5010905@msg.ucsf.edu> Message-ID: <4462341A.6010704@ieee.org> Sebastian Haase wrote: > Hi, > I'm a long-time numarray user. > I have some SWIG typemaps that I'm using for quite some time. > They are C++ oriented and support creating template'd functions. > I only cover the case of contiguous "input" arrays of 1D, 2D and 3D. > (I would like to ensure that NO implicit type conversions are made so > that I can use the same scheme to have arrays changed on the C/C++ > side and can later see/access those in python.) > > (as I just added some text to the scipy wiki: > www.scipy.org/Converting_from_numarray) > I use something like: > > PyArrayObject *NAarr = NA_InputArray(inputPyObject, PYARR_TC, > NUM_C_ARRAY); > arr = (double *) NA_OFFSETDATA(NAarr); > > > What is new numpy's equivalent of NA_InputArray In the scipy/Lib/ndimage package is a numcompat.c and numcompat.h file that implements several of the equivalents. I'd like to see a module like this get formalized and placed into numpy itself so that most numarray extensions simply have to be re-compiled to work with NumPy. Here is the relevant information (although I don't know what PYARR_TC is...)
typedef enum {
    tAny,
    tBool=PyArray_BOOL,
    tInt8=PyArray_INT8,
    tUInt8=PyArray_UINT8,
    tInt16=PyArray_INT16,
    tUInt16=PyArray_UINT16,
    tInt32=PyArray_INT32,
    tUInt32=PyArray_UINT32,
    tInt64=PyArray_INT64,
    tUInt64=PyArray_UINT64,
    tFloat32=PyArray_FLOAT32,
    tFloat64=PyArray_FLOAT64,
    tComplex32=PyArray_COMPLEX64,
    tComplex64=PyArray_COMPLEX128,
    tObject=PyArray_OBJECT,    /* placeholder... does nothing */
    tDefault = tFloat64,
#if BITSOF_LONG == 64
    tLong = tInt64,
#else
    tLong = tInt32,
#endif
    tMaxType
} NumarrayType;

typedef enum {
    NUM_CONTIGUOUS=CONTIGUOUS,
    NUM_NOTSWAPPED=NOTSWAPPED,
    NUM_ALIGNED=ALIGNED,
    NUM_WRITABLE=WRITEABLE,
    NUM_COPY=ENSURECOPY,
    NUM_C_ARRAY = (NUM_CONTIGUOUS | NUM_ALIGNED | NUM_NOTSWAPPED),
    NUM_UNCONVERTED = 0
} NumRequirements;

#define _NAtype_toDescr(type) (((type)==tAny) ? NULL : \
    PyArray_DescrFromType(type))

#define NA_InputArray(obj, type, flags) \
    (PyArrayObject *)\
    PyArray_FromAny(obj, _NAtype_toDescr(type), 0, 0, flags, NULL)

#define NA_OFFSETDATA(a) ((void *) PyArray_DATA(a))

From ndarray at mac.com Wed May 10 11:47:03 2006 From: ndarray at mac.com (Sasha) Date: Wed May 10 11:47:03 2006 Subject: [Numpy-discussion] Who uses matrix? In-Reply-To: <44623207.2020306@noaa.gov> References: <44623207.2020306@noaa.gov> Message-ID: On 5/10/06, Christopher Barker wrote: > ... > Is there even a direct way to construct a numpy scalar? > Yes, >>> type(int_(0)) >>> type(float_(0)) RTFM :-) > > Actually I thought that Sasha's position was that both scalars and > > *rank-0* [aka shape=()] arrays were useful in different circumstances > > and that we shouldn't completely annihilate rank-0 arrays in favor of > > scalars. > > What is the difference? except that rank-0 arrays are mutable, and I do > think a mutable scalar is a good thing to have. Why not make numpy > scalars mutable, and then would there be a difference? Mutable objects cannot have value-based hash, which practically means they cannot be used as keys in python dictionaries.
This may change in python 3.0, but meanwhile mutable scalars are not an option. From oliphant.travis at ieee.org Wed May 10 12:59:02 2006 From: oliphant.travis at ieee.org (Travis Oliphant) Date: Wed May 10 12:59:02 2006 Subject: [Numpy-discussion] basearray / arraykit In-Reply-To: <445F8F71.4070801@cox.net> References: <445F8F71.4070801@cox.net> Message-ID: <446245AB.6000900@ieee.org> Tim Hochberg wrote: > > I created a branch to work on basearray and arraykit: > > http://svn.scipy.org/svn/numpy/branches/arraykit > > Basearray, as most of you probably know by now, is the array superclass > that Travis, Sasha and I have > all talked about at various times with slightly different emphasis. I'm thinking that fancy-indexing should be re-factored a bit so that view-based indexing is tried first and then on error, fancy-indexing is tried. Right now, it goes through the fancy-indexing check and that seems to slow things down more than it needs to for simple indexing operations. Perhaps it would make sense for basearray to implement simple indexing while the ndarray would augment the basic indexing. Is adding basic Numeric-like indexing something you see as useful to basearray?
> > > I'm thinking that fancy-indexing should be re-factored a bit so that > view-based indexing is tried first and then on error, fancy-indexing > is tried. Right now, it goes through the fancy-indexing check and > that seems to slow things down more than it needs to for simple > indexing operations. That sounds like a good idea. I would like to see fancy indexing broken out for arraykit if not necessarily for basearray. > > Perhaps it would make sense for basearray to implement simple indexing > while the ndarray would augment the basic indexing. > > > Is adding basic Numeric-like indexing something you see as useful to > basearray? Yes! No! Maybe ;-) Each time I think this over I come to a slightly different conclusion. At one point I was thinking that basearray should support shape, __getitem__ and __setitem__ (and had I thought about it at the time, I would have preferred basic indexing here). However at present I'm thinking that basearray should really just support your basic array protocol and nothing else. If we added the above three methods then that makes life harder for someone who wants to create an array subclass that is either immutable or has a fixed shape. Sure shape and/or __setitem__ can be overridden with something that raises some sort of exception, but it's exactly that sort of stuff that I was interested in getting away from with basearray (although admittedly this would be on a much smaller scale). I can't think of a real problem with supplying just a read only version of shape and getitem, but it also doesn't seem very useful. So, as I said I lean towards the simplest, thinnest interface possible. However, it may be a good idea to put together another subclass of basearray that supports shape, __getitem__, __setitem__ [in their basic forms], __repr__ and __str__. This could be part of the proposal to add basearray to the core. 
That way the basearray module could export something that's directly useful to people in addition to basearray which is really only useful as a basis for other stuff. Also, like I said, I would use this for arraykit if it were available (and might even be willing to do the work myself if I find the time). I have considered splitting out fancy indexing in my simplified array class using some sort of pseudo attribute (similar to flat). If I was doing that, I'd actually prefer to split out the two different types of fancy indexing (boolean versus integer) so that they could be applied separately. -tim > > > -Travis > > > > ------------------------------------------------------- > Using Tomcat but need to do more? Need to support web services, security? > Get stuff done quickly with pre-integrated technology to make your job > easier > Download IBM WebSphere Application Server v.1.0.1 based on Apache > Geronimo > http://sel.as-us.falkag.net/sel?cmd=lnk&kid=120709&bid=263057&dat=121642 > _______________________________________________ > Numpy-discussion mailing list > Numpy-discussion at lists.sourceforge.net > https://lists.sourceforge.net/lists/listinfo/numpy-discussion > > From ndarray at mac.com Wed May 10 13:43:04 2006 From: ndarray at mac.com (Sasha) Date: Wed May 10 13:43:04 2006 Subject: [Numpy-discussion] basearray / arraykit In-Reply-To: <446245AB.6000900@ieee.org> References: <445F8F71.4070801@cox.net> <446245AB.6000900@ieee.org> Message-ID: On 5/10/06, Travis Oliphant wrote: > ... > I'm thinking that fancy-indexing should be re-factored a bit so that > view-based indexing is tried first and then on error, fancy-indexing is > tried. Right now, it goes through the fancy-indexing check and that > seems to slow things down more than it needs to for simple indexing > operations. Is it too late to reconsider the decision to further overload [] to support fancy indexing?
It would be nice to restrict [] to view based indexing and require a function call for copy-based. If that is not an option, I would like to propose to have no __getitem__ in the basearray and instead have a rich collection of various functions such as "take" which can be used by the derived classes to create their own __getitem__ . Independent of the fate of the [] operator, I would like to have means to specify exactly what I want without having to rely on the smartness of the fancy-indexing check. For example, in the current version, I can either do x[[1,2,3]] or x.take([1,2,3]). For a 2d x I can do x.take([1,2,3], axis=1) as an alternative to x[:,[1,2,3]], but I cannot find an equivalent of x[[3,2,1],[1,2,3]]. I think [] syntax is preferable in the interactive setting, where it allows one to get the result with a few keystrokes. In addition [] has special syntactic properties in python (special meaning of : and ... within []) that allows some nifty looking syntax not available for member functions. On the other hand in programming, and especially in writing reusable code, specialized member functions such as "take" are more appropriate for several reasons. (1) Robustness: x.take(i) will do the same thing if i is a tuple, list, or array of any integer type, while with x[i] it is anybody's guess, and the results may change with the changes in numpy. (2) Performance: the fancy-indexing check is expensive. (3) Code readability: in the interactive session when you type x[i], i is either supplied literally or is defined on the same screen, but if i comes as an argument to a function, it may be hard to figure out whether i is expected to be an integer or whether a list of integers is also ok.
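For concreteness, the index forms Sasha compares can be written out side by side. This is a small sketch against a present-day numpy; the last case shows the element-wise pairing of index arrays for which he finds no obvious take() spelling:

```python
import numpy as np

x = np.arange(16).reshape(4, 4)

# take() along axis 0 selects the same rows as integer fancy indexing
rows_take = x.take([1, 2, 3], axis=0)
rows_fancy = x[[1, 2, 3]]
assert (rows_take == rows_fancy).all()

# take() along axis 1 mirrors x[:, [1, 2, 3]]
cols_take = x.take([1, 2, 3], axis=1)
cols_fancy = x[:, [1, 2, 3]]
assert (cols_take == cols_fancy).all()

# x[[3, 2, 1], [1, 2, 3]] pairs the two index arrays element-wise,
# yielding the elements x[3, 1], x[2, 2], x[1, 3]
pairs = x[[3, 2, 1], [1, 2, 3]]
assert (pairs == np.array([x[3, 1], x[2, 2], x[1, 3]])).all()
```

Note that the first two forms return rectangular selections, while the paired form returns a 1-d array of picked elements.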
From oliphant.travis at ieee.org Wed May 10 13:55:04 2006 From: oliphant.travis at ieee.org (Travis Oliphant) Date: Wed May 10 13:55:04 2006 Subject: [Numpy-discussion] basearray / arraykit In-Reply-To: <4462516A.3010007@cox.net> References: <445F8F71.4070801@cox.net> <446245AB.6000900@ieee.org> <4462516A.3010007@cox.net> Message-ID: <446252D1.2050300@ieee.org> Tim Hochberg wrote: > > Since I'm actually messing with trying to untangle arraykit from > multiarray right now, let me ask you a question: there are several > functions in arrayobject.c that look like they should be part of the > API. Notably: > > PyArray_CopyObject > PyArray_MapIterNew > PyArray_MapIterBind > PyArray_GetMap > PyArray_MapIterReset > PyArray_MapIterNext > PyArray_SetMap > PyArray_IntTupleFromIntp > > However, they don't appear to show up. They also aren't in > *_api_order.txt, where I presume the list of all exported functions > lives. Is this on purpose, or is it an oversight? > Some of these are an oversight. The Mapping-related ones require a little more explanation, though. Initially I had thought to allow mapping iterators to live independently of array indexing. But, it never worked out that way. I think they could be made part of the API, but they need to be used correctly together. In particular, you can't really "re-bind" another array to a mapping iterator. You have to create a new one, so it may be a little confusing. But, I see no reason not to let these things out on their own. -Travis From oliphant.travis at ieee.org Wed May 10 13:58:08 2006 From: oliphant.travis at ieee.org (Travis Oliphant) Date: Wed May 10 13:58:08 2006 Subject: [Numpy-discussion] basearray / arraykit In-Reply-To: References: <445F8F71.4070801@cox.net> <446245AB.6000900@ieee.org> Message-ID: <446253AE.4090402@ieee.org> Sasha wrote: > On 5/10/06, Travis Oliphant wrote: >> ...
>> I'm thinking that fancy-indexing should be re-factored a bit so that >> view-based indexing is tried first and then on error, fancy-indexing is >> tried. Right now, it goes through the fancy-indexing check and that >> seems to slow things down more than it needs to for simple indexing >> operations. > > Is it too late to reconsider the decision to further overload [] to > support fancy indexing? It would be nice to restrict [] to view > based indexing and require a function call for copy-based. If that is > not an option, I would like to propose to have no __getitem__ in the > basearray and instead have rich collection of various functions such > as "take" which can be used by the derived classes to create their own > __getitem__ . It may be too late since the fancy-indexing was actually introduced by numarray. It does seem to be a feature that people like. > > Independent of the fate of the [] operator, I would like to have means > to specify exactly what I want without having to rely on the smartness > of the fancy-indexing check. For example, in the current version, I > can either do x[[1,2,3]] or x.take([1,2,3]). For a 2d x I can do > x.take([1,2,3], axis=1) as an alternative to x[:,[1,2,3]], but I > cannot find an equivalent of x[[3,2,1],[1,2,3]]. It probably isn't there. Perhaps it should be. As you've guessed, a lot of the overloading of [] is because inside it you can use simplified syntax to generate slices. I would like to see the slice syntax extended so it could be used inside function calls as well as a means to generate slice objects on-the-fly. Perhaps for Python 3.0 this could be suggested. -Travis From ndarray at mac.com Wed May 10 14:05:04 2006 From: ndarray at mac.com (Sasha) Date: Wed May 10 14:05:04 2006 Subject: [Numpy-discussion] basearray / arraykit In-Reply-To: <445F8F71.4070801@cox.net> References: <445F8F71.4070801@cox.net> Message-ID: On 5/8/06, Tim Hochberg wrote: > [...] 
Here's a brief example; > this is what a custom array class that just supported indexing and > shape would look like using arraykit: > > import numpy.arraykit as _kit > > class customarray(_kit.basearray): > __new__ = _kit.fromobj > __getitem__ = _kit.getitem > __setitem__ = _kit.setitem > shape = property(_kit.getshape, _kit.setshape) I see the following problem with your approach: customarray.__new__ is supposed to return an instance of customarray, but in your example it returns a basearray. You may like an approach that I took in writing r.py . In the context of your example, I would make fromobj a classmethod of _kit.basearray and use the type argument to allocate the new object (type->tp_alloc(type, 0);). This way customarray(...) will return customarray as expected. All _kit methods that return arrays can take the same approach and become classmethods of _kit.basearray. The drawback is the pollution of the base class namespace, but this may be acceptable if you name the baseclass methods with a leading underscore. From oliphant.travis at ieee.org Wed May 10 14:26:05 2006 From: oliphant.travis at ieee.org (Travis Oliphant) Date: Wed May 10 14:26:05 2006 Subject: [Numpy-discussion] reduce-type results from matrices Message-ID: <44625A4D.7020905@ieee.org> Thanks for the discussion on Ticket #83 (whether or not to return scalars from matrix methods). I like many of the comments and generally agree with Tim and Sasha's question of why bother replacing one inconsistency with another. However, I've been swayed by two facts: 1) People that use matrices more seem to like having the results be returned as scalars 2) Multiplication by a 1x1 matrix won't work on most matrices but multiplication by a scalar will. These two facts lean towards accepting the patch. Therefore, the Ticket #83 patch will be applied.
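For illustration, fact 2) can be checked directly; a quick sketch using numpy.matrix as it behaves in current releases:

```python
import numpy as np

A = np.asmatrix([[1., 2.], [3., 4.]])
one_by_one = np.asmatrix([[2.]])

# A 1x1 matrix is not conformable with a 2x2 matrix under
# matrix multiplication...
try:
    one_by_one * A
except ValueError:
    print("1x1 matrix * 2x2 matrix fails: shapes are not aligned")

# ...while an ordinary scalar scales any matrix elementwise
assert ((2.0 * A) == np.asmatrix([[2., 4.], [6., 8.]])).all()
```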
Best regards, -Travis From st at sigmasquared.net Wed May 10 14:33:02 2006 From: st at sigmasquared.net (Stephan Tolksdorf) Date: Wed May 10 14:33:02 2006 Subject: [Numpy-discussion] Building on Windows Message-ID: <44625BE2.4010708@sigmasquared.net> Hi, there are still some (mostly minor) problems with the Windows build of Numpy (MinGW/Cygwin/MSVC). I'd be happy to produce patches and documentation enhancements, but before doing so I'd like to know if there's interest from one of the core developers to review/commit these patches afterwards. I'm asking because in the past questions and suggestions regarding the building process of Numpy (especially on Windows) often remained unanswered on this list. I realise that many developers don't use Windows and that the distutils build is a complex beast, but the current situation seems a bit unsatisfactory - and I would like to help. Would there be any interest for further refactoring of the build code over and above patching errors? Stephan From pearu at scipy.org Wed May 10 14:41:03 2006 From: pearu at scipy.org (Pearu Peterson) Date: Wed May 10 14:41:03 2006 Subject: [Numpy-discussion] Building on Windows In-Reply-To: <44625BE2.4010708@sigmasquared.net> References: <44625BE2.4010708@sigmasquared.net> Message-ID: On Wed, 10 May 2006, Stephan Tolksdorf wrote: > Hi, > > there are still some (mostly minor) problems with the Windows build of Numpy > (MinGW/Cygwin/MSVC). I'd be happy to produce patches and documentation > enhancements, but before doing so I'd like to know if there's interest from > one of the core developers to review/commit these patches afterwards. I'm > asking because in the past questions and suggestions regarding the building > process of Numpy (especially on Windows) often remained unanswered on this > list. I realise that many developers don't use Windows and that the distutils > build is a complex beast, but the current situation seems a bit > unsatisfactory - and I would like to help. 
> Would there be any interest for further refactoring of the build code over > and above patching errors? Yes. Note that patches should not break other platforms. I am currently successfully using the mingw32 compiler to build numpy. Python is Enthon23 and Enthon24 that conveniently contain all compiler tools. Pearu From oliphant.travis at ieee.org Wed May 10 14:42:01 2006 From: oliphant.travis at ieee.org (Travis Oliphant) Date: Wed May 10 14:42:01 2006 Subject: [Numpy-discussion] Building on Windows In-Reply-To: <44625BE2.4010708@sigmasquared.net> References: <44625BE2.4010708@sigmasquared.net> Message-ID: <44625E03.9080105@ieee.org> Stephan Tolksdorf wrote: > Hi, > > there are still some (mostly minor) problems with the Windows build of > Numpy (MinGW/Cygwin/MSVC). I'd be happy to produce patches and > documentation enhancements, but before doing so I'd like to know if > there's interest from one of the core developers to review/commit > these patches afterwards. I'm asking because in the past questions and > suggestions regarding the building process of Numpy (especially on > Windows) often remained unanswered on this list. I realise that many > developers don't use Windows and that the distutils build is a complex > beast, but the current situation seems a bit unsatisfactory - and I > would like to help. I think your assessment is a bit harsh. I regularly build on MinGW so I know it works there (at least at release time). I also have applied several patches with the express purpose of getting the build working on MSVC and Cygwin. So, go ahead and let us know what problems you are having. You are correct that my main build platform is not Windows, but I think several other people do use Windows regularly and we definitely want to support it.
-Travis From tim.hochberg at cox.net Wed May 10 14:47:01 2006 From: tim.hochberg at cox.net (Tim Hochberg) Date: Wed May 10 14:47:01 2006 Subject: [Numpy-discussion] basearray / arraykit In-Reply-To: References: <445F8F71.4070801@cox.net> Message-ID: <44625F1F.6070003@cox.net> Sasha wrote: > On 5/8/06, Tim Hochberg wrote: > >> [...] Here's a brief example; >> this is what a custom array class that just supported indexing and >> shape would look like using arraykit: >> >> import numpy.arraykit as _kit >> >> class customarray(_kit.basearray): >> __new__ = _kit.fromobj >> __getitem__ = _kit.getitem >> __setitem__ = _kit.setitem >> shape = property(_kit.getshape, _kit.setshape) > > > I see the following problem with your approach: customarray.__new__ is > supposed to return an instance of customarray, but in your example it > returns a basearray. Actually, it doesn't. The signature of fromobj is: fromobj(subtype, obj, dtype=None, order="C"). It returns an object of type subtype (as long as subtype is derived from basearray). At present, fromobj is implemented in Python as: def fromobj(subtype, obj, dtype=None, order="C"): if order not in ["C", "FORTRAN"]: raise ValueError("Order must be either 'C' or 'FORTRAN', not %r" % order) nda = _numpy.array(obj, dtype, order=order) return basearray.__new__(subtype, nda.shape, nda.dtype, nda.data, order=order) That's kind of kludgy, and I plan to remove the dependence on numpy.array at some point, but it seems to work OK. > You may like an approach that I took in writing > r.py . In > the context of your example, I would make fromobj a classmethod of > _kit.basearray and use the type argument to allocate the new object > (type->tp_alloc(type, 0);). This way customarray(...) will return > customarray as expected. > > All _kit methods that return arrays can take the same approach and > become classmethods of _kit.basearray.
The drawback is the pollution > of the base class namespace, but this may be acceptable if you name > the baseclass methods with a leading underscore. I'd rather avoid that since one of my goals is to remove name pollution. I'll keep it in mind though if I run into problems with the above approach. -tim From tim.hochberg at cox.net Wed May 10 14:49:04 2006 From: tim.hochberg at cox.net (Tim Hochberg) Date: Wed May 10 14:49:04 2006 Subject: [Numpy-discussion] Building on Windows In-Reply-To: <44625E03.9080105@ieee.org> References: <44625BE2.4010708@sigmasquared.net> <44625E03.9080105@ieee.org> Message-ID: <44625FB9.50501@cox.net> Travis Oliphant wrote: > Stephan Tolksdorf wrote: > >> Hi, >> >> there are still some (mostly minor) problems with the Windows build >> of Numpy (MinGW/Cygwin/MSVC). I'd be happy to produce patches and >> documentation enhancements, but before doing so I'd like to know if >> there's interest from one of the core developers to review/commit >> these patches afterwards. I'm asking because in the past questions >> and suggestions regarding the building process of Numpy (especially >> on Windows) often remained unanswered on this list. I realise that >> many developers don't use Windows and that the distutils build is a >> complex beast, but the current situation seems a bit unsatisfactory >> - and I would like to help. > > > > I think your assessment is a bit harsh. I regularly build on MinGW > so I know it works there (at least at release time). I also have > applied several patches with the express purpose of getting the build > working on MSVC and Cygwin. > > So, go ahead and let us know what problems you are having. You are > correct that my main build platform is not Windows, but I think > several other people do use Windows regularly and we definitely want > to support it. > Indeed. I build from SVN at least once a week using MSVC and it's been compiling warning free and passing all tests for me for some time.
-tim From tim.hochberg at cox.net Wed May 10 14:53:03 2006 From: tim.hochberg at cox.net (Tim Hochberg) Date: Wed May 10 14:53:03 2006 Subject: [Numpy-discussion] basearray / arraykit In-Reply-To: References: <445F8F71.4070801@cox.net> <446245AB.6000900@ieee.org> Message-ID: <44626084.7000001@cox.net> Sasha wrote: > On 5/10/06, Travis Oliphant wrote: > >> ... >> I'm thinking that fancy-indexing should be re-factored a bit so that >> view-based indexing is tried first and then on error, fancy-indexing is >> tried. Right now, it goes through the fancy-indexing check and that >> seems to slow things down more than it needs to for simple indexing >> operations. > > > Is it too late to reconsider the decision to further overload [] to > support fancy indexing? It would be nice to restrict [] to view > based indexing and require a function call for copy-based. If that is > not an option, I would like to propose to have no __getitem__ in the > basearray and instead have a rich collection of various functions such > as "take" which can be used by the derived classes to create their own > __getitem__ . This is exactly the approach taken by arraykit. > > Independent of the fate of the [] operator, I would like to have means > to specify exactly what I want without having to rely on the smartness > of the fancy-indexing check. For example, in the current version, I > can either do x[[1,2,3]] or x.take([1,2,3]). For a 2d x I can do > x.take([1,2,3], axis=1) as an alternative to x[:,[1,2,3]], but I > cannot find an equivalent of x[[3,2,1],[1,2,3]]. > > I think [] syntax is preferable in the interactive setting, where it > allows one to get the result with a few keystrokes. In addition [] has > special syntactic properties in python (special meaning of : and ... > within []) that allows some nifty looking syntax not available for > member functions. This is why I was considering using pseudo attributes, similar to flat, for my basearray subclass.
I could hang [] off of them and use all of the normal array indexing syntax. I haven't come up with ideal names yet, but it could look something like: x[:3, 5:] # normal view indexing x.at[[3,2,1], [1,2,3]] # integer array indexing x.iff[[1,0,1], [2,1,0]] # boolean array indexing. > On the other hand in programming, and especially in > writing reusable code, specialized member functions such as "take" are > more appropriate for several reasons. (1) Robustness: x.take(i) will do > the same thing if i is a tuple, list, or array of any integer type, > while with x[i] it is anybody's guess and the results may change with > the changes in numpy. (2) Performance: the fancy-indexing check is > expensive. (3) Code readability: in the interactive session when you > type x[i], i is either supplied literally or is defined on the same > screen, but if i comes as an argument to a function, it may be hard > to figure out whether i is expected to be an integer or whether a list of > integers is also ok. Sounds reasonable to me. -tim From tim.hochberg at cox.net Wed May 10 15:07:04 2006 From: tim.hochberg at cox.net (Tim Hochberg) Date: Wed May 10 15:07:04 2006 Subject: [Numpy-discussion] basearray / arraykit In-Reply-To: <44626012.6050506@ieee.org> References: <445F8F71.4070801@cox.net> <446245AB.6000900@ieee.org> <44624D24.3050406@cox.net> <44626012.6050506@ieee.org> Message-ID: <446263EF.5050408@cox.net> Travis Oliphant wrote: > Tim Hochberg wrote: > >>> Is adding basic Numeric-like indexing something you see as useful to >>> basearray? >> >> >> Yes! No! Maybe ;-) > > > Got ya, loud and clear :-) > > I understand the confusion. > > I think we should do the following (for release 1.0) > > > 1) Implement a base-array with no getitem method nor setitem method at > all > > 2) Implement a sub-class that supports only creation of data-types > corresponding to existing Python scalars (Boolean, Long-based > integers, Double-based floats, complex and object types).
Then, all > array accesses should return the underlying Python objects. > This sub-class should also only do view-based indexing (basically it's > old Numeric behavior inside of NumPy). > > 3) Implement the ndarray as a sub-class of #2 that does fancy indexing > and returns array-scalars > > > Item 1) should be pushed for inclusion in 2.6 and possibly even > something like 2) +1 Let me point out an interesting possibility. If ndarray inherits from basearray, only one of them needs to have the current __new__ method. That means that we could do the following rearrangement, if we felt like it: 1. Remove 'array' 2. Rename 'ndarray' to 'array' 3. Put the old functionality of array into array.__new__ The current functionality of ndarray.__new__ would still be available as basearray.__new__. I mention this partly because I can think of things to do with the name ndarray: for example, use it as the name of the subclass in (2). -tim From oliphant.travis at ieee.org Wed May 10 15:38:01 2006 From: oliphant.travis at ieee.org (Travis Oliphant) Date: Wed May 10 15:38:01 2006 Subject: [Numpy-discussion] basearray / arraykit In-Reply-To: <44624D24.3050406@cox.net> References: <445F8F71.4070801@cox.net> <446245AB.6000900@ieee.org> <44624D24.3050406@cox.net> Message-ID: <44626012.6050506@ieee.org> Tim Hochberg wrote: >> Is adding basic Numeric-like indexing something you see as useful to >> basearray? > > Yes! No! Maybe ;-) Got ya, loud and clear :-) I understand the confusion. I think we should do the following (for release 1.0) 1) Implement a base-array with no getitem method nor setitem method at all 2) Implement a sub-class that supports only creation of data-types corresponding to existing Python scalars (Boolean, Long-based integers, Double-based floats, complex and object types). Then, all array accesses should return the underlying Python objects. This sub-class should also only do view-based indexing (basically it's old Numeric behavior inside of NumPy). 
3) Implement the ndarray as a sub-class of #2 that does fancy indexing and returns array-scalars Item 1) should be pushed for inclusion in 2.6 and possibly even something like 2) -Travis From fullung at gmail.com Wed May 10 15:40:02 2006 From: fullung at gmail.com (Albert Strasheim) Date: Wed May 10 15:40:02 2006 Subject: [Numpy-discussion] Building on Windows In-Reply-To: <44625FB9.50501@cox.net> Message-ID: <006901c67482$93e310a0$0502010a@dsp.sun.ac.za> Hello all, It seems that many people are building on Windows without problems, except for Stephan and myself. Let me start by stating that yes, the default build on Windows with MinGW and Visual Studio works nicely. However, is anybody building with ATLAS and finding that experience to be equally painless? If so, *please* can you tell me how you've organized your libraries (which libraries? CLAPACK? FLAPACK? .a? .lib? What about ATLAS's LAPACK functions? What about building ATLAS as a DLL?). Also, I'd be very interested in the contents of your site.cfg. I've been trying for many weeks to do some small subset of the above without hacking into the core of numpy.distutils. So far, no luck. Does anybody do debug builds on Windows? Again, please tell me how you do this, because I would really like to be able to build a debug version of NumPy for debugging with the MSVS compiler. As for compiler warnings, last time I checked, distutils seems to be suppressing the output from the compiler, except when the build actually fails.
Eagerly awaiting Windows build nirvana, Albert > -----Original Message----- > From: numpy-discussion-admin at lists.sourceforge.net [mailto:numpy- > discussion-admin at lists.sourceforge.net] On Behalf Of Tim Hochberg > Sent: 10 May 2006 23:49 > To: Travis Oliphant > Cc: Stephan Tolksdorf; numpy-discussion > Subject: Re: [Numpy-discussion] Building on Windows > > Travis Oliphant wrote: > > > Stephan Tolksdorf wrote: > > > >> Hi, > >> > >> there are still some (mostly minor) problems with the Windows build > >> of Numpy (MinGW/Cygwin/MSVC). I'd be happy to produce patches and > >> documentation enhancements, but before doing so I'd like to know if > >> there's interest from one of the core developers to review/commit > >> these patches afterwards. I'm asking because in the past questions > >> and suggestions regarding the building process of Numpy (especially > >> on Windows) often remained unanswered on this list. I realise that > >> many developers don't use Windows and that the distutils build is a > >> complex beast, but the current situation seems a bit unsatisfactory > >> - and I would like to help. > > > > I think your assessment is a bit harsh. I regularly build on MinGW > > so I know it works there (at least at release time). I also have > > applied several patches with the express purpose of getting the build > > working on MSVC and Cygwin. > > > > So, go ahead and let us know what problems you are having. You are > > correct that my main build platform is not Windows, but I think > > several other people do use Windows regularly and we definitely want > > to support it. > > > Indeeed. I build from SVN at least once a week using MSVC and it's been > compiling warning free and passing all tests for me for some time. 
> > -tim From st at sigmasquared.net Wed May 10 16:49:02 2006 From: st at sigmasquared.net (Stephan Tolksdorf) Date: Wed May 10 16:49:02 2006 Subject: [Numpy-discussion] Building on Windows In-Reply-To: <006901c67482$93e310a0$0502010a@dsp.sun.ac.za> References: <006901c67482$93e310a0$0502010a@dsp.sun.ac.za> Message-ID: <44627BC0.3080403@sigmasquared.net> Hi Albert, below you'll find an abridged preview version of my MSVC+ATLAS+Lapack build tutorial ;-) You probably already know most of it, but maybe it helps. You'll need a current Cygwin with g77 and MinGW-libraries installed. Atlas: ====== Download and extract the latest development ATLAS (3.7.11). Comment out line 77 in Atlas/CONFIG/probe_SSE3.c. Run "make" and choose the appropriate options for your system. Don't activate posix threads (for now). Overwrite the compiler and linker flags with flags that include "-mno-cygwin". Use the default architecture settings. Atlas and the test suite hopefully compile without an error now. Lapack: ======= Download and extract www.netlib.org/lapack/lapack.tgz and apply the most current patch from www.netlib.org/lapack-dev/ Replace lapack/make.inc with lapack/INSTALL/make.inc.LINUX. Append "-mno-cygwin" to OPTS, NOOPT and LOADOPTS in make.inc. Add ".PHONY: install testing timing" as the last line to lapack/Makefile. Run "make install lib" in the lapack root directory in Cygwin. ("make testing timing" should also work now, but you probably want to use your optimised BLAS for that. Some errors in the tests are to be expected.) Atlas + Lapack: =============== Copy the generated lapack_LINUX.a together with "libatlas.a", "libcblas.a", "libf77blas.a", "liblapack.a" into a convenient directory. In Cygwin execute the following command sequence in that directory to get an ATLAS-optimized LAPACK library "ar x liblapack.a ar r lapack_LINUX.a *.o rm *.o mv lapack_LINUX.a liblapack.a" Now make a copy of all lib*.a's to *.lib's, i.e.
duplicate libatlas.a to atlas.lib, in order to allow distutils to recognize the libs and at the same time provide the correct versions for MSVC. Copy libg2c.a and libgcc.a from cygwin/lib/gcc/i686-pc-mingw32/3.4.4 to this directory and again make .lib copies. Compile and install numpy: ========================== Put "[atlas] library_dirs = d:\path\to\your\BlasDirectory atlas_libs = lapack,f77blas,cblas,atlas,g2c,gcc" into your site.cfg in the numpy root directory. Open a Visual Studio 2003 command prompt and run "Path\To\Python.exe setup.py config --compiler=msvc build --compiler=msvc bdist_wininst". Use the resulting dist/numpy-VERSION.exe installer to install Numpy. Testing: In a Python console run import numpy.testing numpy.testing.NumpyTest(numpy).run() ... hopefully without an error. Test your code base. I'll wikify an extended version in the next few days. Stephan Albert Strasheim wrote: > Hello all, > > It seems that many people are building on Windows without problems, except > for Stephan and myself. > > Let me start by staying that yes, the default build on Windows with MinGW > and Visual Studio works nicely. > > However, is anybody building with ATLAS and finding that experience to be > equally painless? If so, *please* can you tell me how you've organized your > libraries (which libraries? CLAPACK? FLAPACK? .a? .lib? What about ATLAS's > LAPACK functions? What about building ATLAS as DLL?). Also, I'd be very > interested in the contents of your site.cfg. I've been trying for many > weeks to do some small subset of the above without hacking into the core of > numpy.distutils. So far, no luck. > > Does anybody do debug builds on Windows? Again, please tell me how you do > this, because I would really like to be able to build a debug version of > NumPy for debugging with the MSVS compiler. > > As for compiler warnings, last time I checked, distutils seems to be > suppressing the output from the compiler, except when the build actually > fails.
Or am I mistaken? > > Eagerly awaiting Windows build nirvana, > > Albert > >> -----Original Message----- >> From: numpy-discussion-admin at lists.sourceforge.net [mailto:numpy- >> discussion-admin at lists.sourceforge.net] On Behalf Of Tim Hochberg >> Sent: 10 May 2006 23:49 >> To: Travis Oliphant >> Cc: Stephan Tolksdorf; numpy-discussion >> Subject: Re: [Numpy-discussion] Building on Windows >> >> Travis Oliphant wrote: >> >>> Stephan Tolksdorf wrote: >>> >>>> Hi, >>>> >>>> there are still some (mostly minor) problems with the Windows build >>>> of Numpy (MinGW/Cygwin/MSVC). I'd be happy to produce patches and >>>> documentation enhancements, but before doing so I'd like to know if >>>> there's interest from one of the core developers to review/commit >>>> these patches afterwards. I'm asking because in the past questions >>>> and suggestions regarding the building process of Numpy (especially >>>> on Windows) often remained unanswered on this list. I realise that >>>> many developers don't use Windows and that the distutils build is a >>>> complex beast, but the current situation seems a bit unsatisfactory >>>> - and I would like to help. >>> I think your assessment is a bit harsh. I regularly build on MinGW >>> so I know it works there (at least at release time). I also have >>> applied several patches with the express purpose of getting the build >>> working on MSVC and Cygwin. >>> >>> So, go ahead and let us know what problems you are having. You are >>> correct that my main build platform is not Windows, but I think >>> several other people do use Windows regularly and we definitely want >>> to support it. >>> >> Indeeed. I build from SVN at least once a week using MSVC and it's been >> compiling warning free and passing all tests for me for some time. >> >> -tim > > > > ------------------------------------------------------- > Using Tomcat but need to do more? Need to support web services, security? 
> Get stuff done quickly with pre-integrated technology to make your job easier > Download IBM WebSphere Application Server v.1.0.1 based on Apache Geronimo > http://sel.as-us.falkag.net/sel?cmd=lnk&kid=120709&bid=263057&dat=121642 > _______________________________________________ > Numpy-discussion mailing list > Numpy-discussion at lists.sourceforge.net > https://lists.sourceforge.net/lists/listinfo/numpy-discussion > > From tim.hochberg at cox.net Wed May 10 20:20:02 2006 From: tim.hochberg at cox.net (Tim Hochberg) Date: Wed May 10 20:20:02 2006 Subject: [Numpy-discussion] Building on Windows In-Reply-To: <006901c67482$93e310a0$0502010a@dsp.sun.ac.za> References: <006901c67482$93e310a0$0502010a@dsp.sun.ac.za> Message-ID: <4462AD4D.8070401@cox.net> Albert Strasheim wrote: >Hello all, > >It seems that many people are building on Windows without problems, except >for Stephan and myself. > >Let me start by stating that yes, the default build on Windows with MinGW >and Visual Studio works nicely. > >However, is anybody building with ATLAS and finding that experience to be >equally painless? If so, *please* can you tell me how you've organized your >libraries (which libraries? CLAPACK? FLAPACK? .a? .lib? What about ATLAS's >LAPACK functions? What about building ATLAS as DLL?). Also, I'd be very >interested in the contents of your site.cfg. I've been trying for many >weeks to do some small subset of the above without hacking into the core of >numpy.distutils. So far, no luck. > > Sorry, no help here. I'm just doing vanilla builds. >Does anybody do debug builds on Windows? Again, please tell me how you do >this, because I would really like to be able to build a debug version of >NumPy for debugging with the MSVS compiler. > > Again just vanilla builds, although this is something I'd like to try one of these days. (Is that the MSVC compiler, or is that yet another compiler for Windows?)
>As for compiler warnings, last time I checked, distutils seems to be >suppressing the output from the compiler, except when the build actually >fails. Or am I mistaken? > > Hmm. I hadn't thought about that. It certainly spits out plenty of warnings when the build fails, so I assumed that it was always spitting out warnings. [Fiddle] Ouch! It does indeed seem to suppress warnings on a successful compilation. Anyone know a way to stop that off the top of their head? >Eagerly awaiting Windows build nirvana, > > Heh! Regards, -tim From ryanlists at gmail.com Wed May 10 20:21:03 2006 From: ryanlists at gmail.com (Ryan Krauss) Date: Wed May 10 20:21:03 2006 Subject: [Numpy-discussion] array of arbitrary objects Message-ID: Is it possible with numpy to create arrays of arbitrary objects? Specifically, I have defined a symbolic string class with operator overloading for most simple math operations: 'a'*'b' ==> 'a*b' Can I create two matrices of these symbolic string objects and multiply those matrices together? (simply doing array([[a,b],[c,d]]) did not work. the symbolic strings got cast to regular strings) Thanks, Ryan From ndarray at mac.com Wed May 10 20:28:01 2006 From: ndarray at mac.com (Sasha) Date: Wed May 10 20:28:01 2006 Subject: [Numpy-discussion] array of arbitrary objects In-Reply-To: References: Message-ID: You have to specify dtype to be object: >>> from numpy import * >>> class X(str): pass ... >>> a,b,c,d = map(X, 'abcd') >>> array([[a,b],[c,d]],'O') array([[a, b], [c, d]], dtype=object) >>> _ * 2 array([[aa, bb], [cc, dd]], dtype=object) On 5/10/06, Ryan Krauss wrote: > Is it possible with numpy to create arrays of arbitrary objects? > Specifically, I have defined a symbolic string class with operator > overloading for most simple math operations: 'a'*'b' ==> 'a*b' > > Can I create two matrices of these symbolic string objects and > multiply those matrices together? > > (simply doing array([[a,b],[c,d]]) did not work. 
the symbolic strings > got cast to regular strings) > > Thanks, > > Ryan > From robert.kern at gmail.com Wed May 10 20:28:05 2006 From: robert.kern at gmail.com (Robert Kern) Date: Wed May 10 20:28:05 2006 Subject: [Numpy-discussion] Re: array of arbitrary objects In-Reply-To: References: Message-ID: Ryan Krauss wrote: > Is it possible with numpy to create arrays of arbitrary objects? > Specifically, I have defined a symbolic string class with operator > overloading for most simple math operations: 'a'*'b' ==> 'a*b' > > Can I create two matrices of these symbolic string objects and > multiply those matrices together? > > (simply doing array([[a,b],[c,d]]) did not work. the symbolic strings > got cast to regular strings) Use dtype=object . -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." 
-- Umberto Eco From fullung at gmail.com Wed May 10 20:34:00 2006 From: fullung at gmail.com (Albert Strasheim) Date: Wed May 10 20:34:00 2006 Subject: [Numpy-discussion] Building on Windows In-Reply-To: <4462AD4D.8070401@cox.net> Message-ID: <008601c674ab$97e7fa70$0502010a@dsp.sun.ac.za> Hello all > -----Original Message----- > From: Tim Hochberg [mailto:tim.hochberg at cox.net] > Sent: 11 May 2006 05:20 > To: Albert Strasheim > Cc: 'numpy-discussion' > Subject: Re: [Numpy-discussion] Building on Windows > > Albert Strasheim wrote: > > >Hello all, > > > >It seems that many people are building on Windows without problems, > > except for Stephan and myself. > > > >Let me start by staying that yes, the default build on Windows with MinGW > >and Visual Studio works nicely. > > > >However, is anybody building with ATLAS and finding that experience to be > >equally painless? If so, *please* can you tell me how you've organized > >your libraries (which libraries? CLAPACK? FLAPACK? .a? .lib? What about > >ATLAS's LAPACK functions? What about building ATLAS as DLL?). Also, I'd > >be very interested in the contents of your site.cfg. I've been trying > > for many weeks to do some small subset of the above without hacking > >into the core of numpy.distutils. So far, no luck. > > Sorry, no help here. I'm just doing vanilla builds. I like vanilla, but I'd love to try one of the other hundred flavors! ;-) > >Does anybody do debug builds on Windows? Again, please tell me how you do > >this, because I would really like to be able to build a debug version of > >NumPy for debugging with the MSVS compiler. > > > > > Again just vanilla builds, although this is something I'd like to try > one of these days. (Is that MSVC compiler, or is that yet another > compiler for windows). MSVS (MS Visual Studio) and MSVC can probably be considered to be the same thing. However, you have many flavors (argh!). 
The Microsoft Visual C++ Toolkit 2003 only includes MSVC, while Visual C++ Express Edition 2005 and all the "pay-to-play" editions include MSVC and the MSV[SC] debugger. I think there's also another debugger called WinDbg which is included with the Platform SDK. > >As for compiler warnings, last time I checked, distutils seems to be > >suppressing the output from the compiler, except when the build actually > >fails. Or am I mistaken? > > > Hmm. I hadn't thought about that. It certainly spits out plenty of > warnings when the build fails, so I assumed that it was always spitting > out warnings. [Fiddle] Ouch! It does indeed seem to supress warnings on > a successful compilation. Anyone know a way to stop that off the top of > their head? See the following URL for the kind of pain this causes: http://article.gmane.org/gmane.comp.python.numeric.general/5219 > >Eagerly awaiting Windows build nirvana, > > > > > Heh! Thanks for your feedback. Not nirvana yet, but vanilla will do for now. Cheers, Albert From fullung at gmail.com Wed May 10 20:39:01 2006 From: fullung at gmail.com (Albert Strasheim) Date: Wed May 10 20:39:01 2006 Subject: [Numpy-discussion] Re: array of arbitrary objects In-Reply-To: Message-ID: <008701c674ac$67f6c3e0$0502010a@dsp.sun.ac.za> Hello all > -----Original Message----- > From: numpy-discussion-admin at lists.sourceforge.net [mailto:numpy- > discussion-admin at lists.sourceforge.net] On Behalf Of Robert Kern > Sent: 11 May 2006 05:28 > To: numpy-discussion at lists.sourceforge.net > Subject: [Numpy-discussion] Re: array of arbitrary objects > > Ryan Krauss wrote: > > Is it possible with numpy to create arrays of arbitrary objects? > > Specifically, I have defined a symbolic string class with operator > > overloading for most simple math operations: 'a'*'b' ==> 'a*b' > > > > Can I create two matrices of these symbolic string objects and > > multiply those matrices together? > > > > (simply doing array([[a,b],[c,d]]) did not work. 
the symbolic strings > > got cast to regular strings) > > Use dtype=object . How does one go about putting tuples into an object array? Consider the following example, courtesy of Louis Cordier: In [1]: import numpy as N In [2]: a = [(1,2), (2,3), (3,4)] In [3]: len(a) Out[3]: 3 In [5]: N.array(a, 'O') Out[5]: array([[1, 2], [2, 3], [3, 4]], dtype=object) instead of something like this: array([[(1, 2), (2, 3), (3, 4)]], dtype=object) Cheers, Albert From robert.kern at gmail.com Wed May 10 20:43:01 2006 From: robert.kern at gmail.com (Robert Kern) Date: Wed May 10 20:43:01 2006 Subject: [Numpy-discussion] Re: array of arbitrary objects In-Reply-To: <008701c674ac$67f6c3e0$0502010a@dsp.sun.ac.za> References: <008701c674ac$67f6c3e0$0502010a@dsp.sun.ac.za> Message-ID: Albert Strasheim wrote: > How does one go about putting tuples into an object array? Very carefully. In [2]: a = empty(3, dtype=object) In [3]: a Out[3]: array([None, None, None], dtype=object) In [4]: a[:] = [(1,2), (2,3), (3,4)] In [5]: a Out[5]: array([(1, 2), (2, 3), (3, 4)], dtype=object) In [6]: a.shape Out[6]: (3,) -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." 
-- Umberto Eco From tim.hochberg at cox.net Wed May 10 20:48:04 2006 From: tim.hochberg at cox.net (Tim Hochberg) Date: Wed May 10 20:48:04 2006 Subject: [Numpy-discussion] Re: array of arbitrary objects In-Reply-To: <008701c674ac$67f6c3e0$0502010a@dsp.sun.ac.za> References: <008701c674ac$67f6c3e0$0502010a@dsp.sun.ac.za> Message-ID: <4462B3DF.5060809@cox.net> Albert Strasheim wrote: >Hello all > > > >>-----Original Message----- >>From: numpy-discussion-admin at lists.sourceforge.net [mailto:numpy- >>discussion-admin at lists.sourceforge.net] On Behalf Of Robert Kern >>Sent: 11 May 2006 05:28 >>To: numpy-discussion at lists.sourceforge.net >>Subject: [Numpy-discussion] Re: array of arbitrary objects >> >>Ryan Krauss wrote: >> >> >>>Is it possible with numpy to create arrays of arbitrary objects? >>>Specifically, I have defined a symbolic string class with operator >>>overloading for most simple math operations: 'a'*'b' ==> 'a*b' >>> >>>Can I create two matrices of these symbolic string objects and >>>multiply those matrices together? >>> >>>(simply doing array([[a,b],[c,d]]) did not work. the symbolic strings >>>got cast to regular strings) >>> >>> >>Use dtype=object . >> >> > >How does one go about putting tuples into an object array? 
> >Consider the following example, courtesy of Louis Cordier: > >In [1]: import numpy as N >In [2]: a = [(1,2), (2,3), (3,4)] >In [3]: len(a) >Out[3]: 3 >In [5]: N.array(a, 'O') >Out[5]: >array([[1, 2], > [2, 3], > [3, 4]], dtype=object) > >instead of something like this: > >array([[(1, 2), (2, 3), (3, 4)]], dtype=object) > > Creating object arrays is always going to be a bit of a pain, however, the following approach will work here: >>> import numpy as N >>> a = [(1,2), (2,3), (3,4)] >>> b = N.zeros([len(a)], object) >>> b[:] = a >>> b array([(1, 2), (2, 3), (3, 4)], dtype=object) >>> type(b[0]) <type 'tuple'> Regards, -tim From st at sigmasquared.net Wed May 10 23:26:02 2006 From: st at sigmasquared.net (Stephan Tolksdorf) Date: Wed May 10 23:26:02 2006 Subject: [Numpy-discussion] Building on Windows In-Reply-To: <4462AD4D.8070401@cox.net> References: <006901c67482$93e310a0$0502010a@dsp.sun.ac.za> <4462AD4D.8070401@cox.net> Message-ID: <4462D8B8.4010405@sigmasquared.net> >> As for compiler warnings, last time I checked, distutils seems to be >> suppressing the output from the compiler, except when the build actually >> fails. Or am I mistaken? >> >> > Hmm. I hadn't thought about that. It certainly spits out plenty of > warnings when the build fails, so I assumed that it was always spitting > out warnings. [Fiddle] Ouch! It does indeed seem to suppress warnings on > a successful compilation. Anyone know a way to stop that off the top of > their head? A quick fix is to throw out the customized spawn by commenting out line 40 in distutils/ccompiler.py. Stephan From wbaxter at gmail.com Thu May 11 01:28:01 2006 From: wbaxter at gmail.com (Bill Baxter) Date: Thu May 11 01:28:01 2006 Subject: [Numpy-discussion] Creating small arrays from strings and concatenating with empty arrays Message-ID: Two quick questions: ---------1------------ Is there any better way to initialize an array from a string than this: A = asarray(matrix("1 2 3")) Or is that as good as it gets? 
I suppose it's not so inefficient. ----------2------------- A lot of the time I'd like to be able to write loops like this: A = array([]) for row in function_that_returns_iterable_of_one_d_arrays(): A = vstack((A,row)) But that generates an error the first iteration because the shape of A is wrong. I could stick in a reshape in the loop: A = array([]) for row in function_that_returns_iterable_of_one_d_arrays(): if not A.shape[0]: A = A.reshape(0, row.shape[0]) A = vstack((A,row)) Meh, but I don't really like looking at 'if's in loops when I know they're really only going to be true once the first time. In Matlab the empty matrix [] can be concatenated with anything and just takes on its shape. It's handy for writing code like the above. Thanks, --bill -------------- next part -------------- An HTML attachment was scrubbed... URL: From mateusz at loskot.net Thu May 11 04:39:15 2006 From: mateusz at loskot.net (Mateusz Łoskot) Date: Thu May 11 04:39:15 2006 Subject: [Numpy-discussion] NumPy, Numeric or numarray or all of them? Message-ID: <4463220B.20909@loskot.net> Hi, I'm a developer contributing to GDAL project (http://www.gdal.org). GDAL is a core GeoSpatial Library that has had Python bindings for a while. Recently, we did get some reports from users that GDAL bindings do not work with the NumPy package. We've learned on the NumPy website that it's a new derivation from the Numeric code base. So, now we are facing the question of what we should do. Should we completely port our project to use NumPy, or stay with Numeric for a while (e.g. 1 year)? There is also an idea to support both packages. Python plays a very important role in the GDAL project, so our concerns are quite critical for future development. This situation brings some questions we'd like to ask the NumPy Dev Team: Is it fair to say we are unlikely to see Numeric releases for new Pythons in the future? Can we consider NumPy as the only package in future? 
Simply, we are wondering which Python library we should develop for NumPy, Numeric or numarray to be most generally useful. What's the recommended way to go now? We'd appreciate your assistance on this issue. Best regards -- Mateusz Łoskot http://mateusz.loskot.net From hetland at tamu.edu Thu May 11 07:16:00 2006 From: hetland at tamu.edu (Robert Hetland) Date: Thu May 11 07:16:00 2006 Subject: [Numpy-discussion] Who uses matrix? In-Reply-To: References: <44623207.2020306@noaa.gov> Message-ID: <1C3F540D-B712-480F-A8AF-031DDCA525A3@tamu.edu> I never use matrices, primarily because I am worried about one more data type floating around in my code. That is, data is often read in or constructed as lists, and must be converted to an array to do anything useful. Take a simple example of optimal interpolation: Read in the data (as a list?), construct the background error covariance arrays (arrays), then do about three lines of linear algebra; e.g., W = dot(linalg.inv(B + O), Bi) # weights A = dot(self.Di,W).transpose() # analysis Ea = diag(sqrt(self.Be - dot(W.transpose(), Bi))) # analysis error Is it worth it to convert the arrays to matrices in order to do this handful of calculations? Almost. I covet the shorthand .T notation in the matrix object while getting RSI typing in t-r-a-n-s-p-o-s-e. Also, for involved calculations inverse, transpose et al. are long enough words that the line always wraps, resulting in less-readable code. Should I give in? If there were some shorthand links to the inverse and transpose methods in the array object, I would definitely stick with arrays. -Rob p.s. By the way, array broadcasting saves much pain in creating the background error covariance arrays. Yeah for array broadcasting! 
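To illustrate the broadcasting point above, here is a small sketch of building a distance-based background error covariance matrix without loops (the locations, the length scale, and the Gaussian shape are all invented for illustration, not taken from the original post):

```python
import numpy as np

# Hypothetical 1-D observation locations (made up for illustration).
x = np.array([0.0, 1.0, 2.5, 4.0])

# Broadcasting a column (shape (4, 1)) against a row (shape (1, 4))
# produces the full (4, 4) matrix of pairwise separations with no
# explicit double loop.
dist = np.abs(x[:, None] - x[None, :])

# A Gaussian-shaped background error covariance built from the
# distances; the length scale of 2.0 is an arbitrary choice.
B = np.exp(-(dist / 2.0) ** 2)

print(B.shape)  # (4, 4)
```

The same pattern extends to 2-D positions by broadcasting separate coordinate arrays against each other.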
----- Rob Hetland, Assistant Professor Dept of Oceanography, Texas A&M University p: 979-458-0096, f: 979-845-6331 e: hetland at tamu.edu, w: http://pong.tamu.edu From oliphant.travis at ieee.org Thu May 11 08:41:01 2006 From: oliphant.travis at ieee.org (Travis Oliphant) Date: Thu May 11 08:41:01 2006 Subject: [Numpy-discussion] NumPy, Numeric or numarray or all of them? In-Reply-To: <4463220B.20909@loskot.net> References: <4463220B.20909@loskot.net> Message-ID: <44635ABE.5030200@ieee.org> Mateusz ?oskot wrote: > Hi, > > I'm a developer contributing to GDAL project (http://www.gdal.org). > GDAL is a core GeoSpatial Librarry that has had Python bindings > for a while. Recently, we did get some reports from users that GDAL > bindings do not work with NumPy package. > Most packages can be "ported" simply by replacing include "Numeric/arrayobject.h" with include "numpy/arrayobject.h" and making sure the include files are retrieved from the right place. NumPy was designed to make porting from Numeric a breeze. > > This situation brings some questions we'd like to ask NumPy Dev Team: > > Is it fair to say we are unlikely to see Numeric releases for new > Pythons in the future? > Yes, that's fair. Nobody is maintaining Numeric. > Can we consider NumPy as the only package in future? > Yes. That's where development is currently active. > Simply, we are wondering which Python library we should develop for > NumPy, Numeric or numarray to be most generally useful. > NumPy is the merging of Numeric and numarray to bring people together. I think you should develop for NumPy. In practical terms, it is pretty easy to port from Numeric. > What's the recommended way to go now? > I've ported tens of packages to NumPy from Numeric and have had very little trouble. It is not difficult. Most of the time, simply replacing the *.h file with the one from numpy works fine. It might be a bit trickier to get your headers from the right place. 
The directory is returned by import numpy.distutils numpy.distutils.misc_util.get_numpy_include_dirs() Give it a try it's not very difficult. -Travis From Chris.Barker at noaa.gov Thu May 11 09:15:01 2006 From: Chris.Barker at noaa.gov (Christopher Barker) Date: Thu May 11 09:15:01 2006 Subject: [Numpy-discussion] basearray / arraykit In-Reply-To: <44626012.6050506@ieee.org> References: <445F8F71.4070801@cox.net> <446245AB.6000900@ieee.org> <44624D24.3050406@cox.net> <44626012.6050506@ieee.org> Message-ID: <446362CC.6000004@noaa.gov> Travis Oliphant wrote: > 1) Implement a base-array with no getitem method nor setitem method at all > > 2) Implement a sub-class that supports only creation of data-types > corresponding to existing Python scalars (Boolean, Long-based integers, > Double-based floats, complex and object types). Then, all array > accesses should return the underlying Python objects. > This sub-class should also only do view-based indexing (basically it's > old Numeric behavior inside of NumPy). > Item 1) should be pushed for inclusion in 2.6 and possibly even > something like 2) + sys.maxint Having even this very basic n-d object in the standard lib would be a MAJOR boon to python. However, as I think about it, one reason I'd really like to see an nd-array in python is as a standard way to pass binary data around. Right now, I'm working with the GDAL lib for geo-referenced raster images, PIL, numpy and wxPython. I'm making a lot of copies to and from python strings to pass the binary data back and forth. If all these libs used the same nd-array, this would be much more efficient. However, that would require non-python data types, most notably a byte (or char, whatever) type. -Chris -- Christopher Barker, Ph.D. 
Oceanographer NOAA/OR&R/HAZMAT (206) 526-6959 voice 7600 Sand Point Way NE (206) 526-6329 fax Seattle, WA 98115 (206) 526-6317 main reception Chris.Barker at noaa.gov From Chris.Barker at noaa.gov Thu May 11 09:25:14 2006 From: Chris.Barker at noaa.gov (Christopher Barker) Date: Thu May 11 09:25:14 2006 Subject: [Numpy-discussion] Creating small arrays from strings and concatenating with empty arrays In-Reply-To: References: Message-ID: <44636533.3020201@noaa.gov> Bill Baxter wrote: > Two quick questions: > ---------1------------ > Is there any better way to initialize an array from a string than this: > > A = asarray(matrix("1 2 3")) How about: >>> import numpy as N >>> N.fromstring("1 2 3", sep = " ") array([1, 2, 3]) or >>> N.fromstring("1 2 3", dtype = N.Float, sep = " ") array([ 1., 2., 3.]) If you pass a non-empty "sep" parameter, it parses the string, rather than treating it as binary data. fromfile works this way too -- thanks Travis! -Chris -- Christopher Barker, Ph.D. Oceanographer NOAA/OR&R/HAZMAT (206) 526-6959 voice 7600 Sand Point Way NE (206) 526-6329 fax Seattle, WA 98115 (206) 526-6317 main reception Chris.Barker at noaa.gov From oliphant.travis at ieee.org Thu May 11 09:29:01 2006 From: oliphant.travis at ieee.org (Travis Oliphant) Date: Thu May 11 09:29:01 2006 Subject: [Numpy-discussion] basearray / arraykit In-Reply-To: <446362CC.6000004@noaa.gov> References: <445F8F71.4070801@cox.net> <446245AB.6000900@ieee.org> <44624D24.3050406@cox.net> <44626012.6050506@ieee.org> <446362CC.6000004@noaa.gov> Message-ID: <44636613.7050201@ieee.org> Christopher Barker wrote: > Travis Oliphant wrote: >> 1) Implement a base-array with no getitem method nor setitem method >> at all >> >> 2) Implement a sub-class that supports only creation of data-types >> corresponding to existing Python scalars (Boolean, Long-based >> integers, Double-based floats, complex and object types). Then, all >> array accesses should return the underlying Python objects. 
>> This sub-class should also only do view-based indexing (basically >> it's old Numeric behavior inside of NumPy). > >> Item 1) should be pushed for inclusion in 2.6 and possibly even >> something like 2) > > + sys.maxint > > Having even this very basic n-d object in the standard lib would be a > MAJOR boon to python. > I totally agree. I've been advertising this for at least 8 months, but nobody is really willing to work on it (or fund it). At least we have a summer student who is going to try and get Google summer-of-code money for it. If you have any ability to bump up the ratings of summer of code applications, please consider bumping up his application. > However, as I think about it, one reason I'd really like to see an > nd-array in python is as a standard way to pass binary data around. > Right now, I'm working with the GDAL lib for geo-referenced raster > images, PIL, numpy and wxPython. I'm making a lot of copies to and > from python strings to pass the binary data back and forth. If all > these libs used the same nd-array, this would be much more efficient. > However, that would require non-python data types, most notably a byte > (or char, whatever) type. > Anything in Python would need to define at least a basic bytes type, for exactly this purpose. So, we are on the same page. I'm thinking now that the data-type object now in NumPy would make a nice addition to Python as well for a standard way to define data-types. Then, Python itself wouldn't have to do anything useful with non-standard arrays (except pass them around), but it would at least have a way to describe them. 
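As a concrete sketch of the kind of data description meant here (the record layout and field names below are invented, and modern numpy spelling is used):

```python
import numpy as np

# A data-type object describing a fixed C-like record layout: four
# unsigned bytes making up an RGBA pixel (field names are invented).
pixel = np.dtype([('r', np.uint8), ('g', np.uint8),
                  ('b', np.uint8), ('a', np.uint8)])

# Any library that understood this description could interpret a raw
# 16-byte buffer as four such records without copying.
buf = np.zeros(4, dtype=pixel)

print(pixel.itemsize, buf.nbytes)  # 4 16
```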
-Travis From tim.hochberg at cox.net Thu May 11 09:45:02 2006 From: tim.hochberg at cox.net (Tim Hochberg) Date: Thu May 11 09:45:02 2006 Subject: [Numpy-discussion] basearray / arraykit In-Reply-To: <44636613.7050201@ieee.org> References: <445F8F71.4070801@cox.net> <446245AB.6000900@ieee.org> <44624D24.3050406@cox.net> <44626012.6050506@ieee.org> <446362CC.6000004@noaa.gov> <44636613.7050201@ieee.org> Message-ID: <446369D6.7060205@cox.net> Travis Oliphant wrote: > Christopher Barker wrote: > >> Travis Oliphant wrote: >> >>> 1) Implement a base-array with no getitem method nor setitem method >>> at all >>> >>> 2) Implement a sub-class that supports only creation of data-types >>> corresponding to existing Python scalars (Boolean, Long-based >>> integers, Double-based floats, complex and object types). Then, all >>> array accesses should return the underlying Python objects. >>> This sub-class should also only do view-based indexing (basically >>> it's old Numeric behavior inside of NumPy). >> >> >>> Item 1) should be pushed for inclusion in 2.6 and possibly even >>> something like 2) >> >> >> + sys.maxint >> >> Having even this very basic n-d object in the standard lib would be a >> MAJOR boon to python. >> > > I totally agree. I've been advertising this for at least 8 months, > but nobody is really willing to work on it (or fund it). At least we > have a summer student who is going to try and get Google > summer-of-code money for it. If you have any ability to bump up the > ratings of summer of code applications. Please consider bumping up > his application. > >> However, as I think about it, one reason I'd really like to see an >> nd-array in python is as a standard way to pass binary data around. >> Right now, I'm working with the GDAL lib for geo-referenced raster >> images, PIL, numpy and wxPython. I'm making a lot of copies to and >> from python strings to pass the binary data back and forth. 
If all >> these libs used the same nd-array, this would be much more efficient. >> However, that would require non-python data types, most notably a >> byte (or char, whatever) type. >> > Anything in Python would need to define at least a basic bytes type, > for exactly this purpose. So, we are on the same page. > > I'm thinking now that the data-type object now in NumPy would make a > nice addition to Python as well for a standard way to define > data-types. Then, Python itself wouldn't have to do anything useful > with non-standard arrays (except pass them around), but it would at > least have a way to describe them. On this front, it's probably worth at least thinking a bit about whether there is any prospect of harmonizing ctypes type notation and the numpy data-type object. It seems somewhat silly to have (1) array.array's notation for types ['i', 'f', etc], (2) ctypes notation for types [c_int, c_float, etc] and (3) numpy's notation for types [dtype('<i4'), dtype('<f8'), etc]. From Chris.Barker at noaa.gov Thu May 11 2006 From: Chris.Barker at noaa.gov (Christopher Barker) Date: Thu May 11 2006 Subject: [Numpy-discussion] NumPy, Numeric or numarray or all of them? In-Reply-To: <4463220B.20909@loskot.net> References: <4463220B.20909@loskot.net> Message-ID: <44636A10.1030909@noaa.gov> Mateusz Łoskot wrote: > I'm a developer contributing to GDAL project (http://www.gdal.org). > GDAL is a core GeoSpatial Library that has had Python bindings > for a while. Recently, we did get some reports from users that GDAL > bindings do not work with NumPy package. Speaking as a long time numpy (Numeric, etc.) user, and a new user of GDAL, I had no idea GDAL worked with num* at all! at least not directly. In fact, I thought I was going to have to write that code myself. Where do I find docs for this? I'm sure I've just missed something, but I'm finding the docs a bit sparse. On the other hand, I am also finding GDAL to be an excellent library, and have so far gotten it to do what I need it to do. So kudos! > So, now we are facing the question what we should do? > Should we completely port our project to use NumPy I say yes. At least for new code. numpy is the way of the future, and the more projects commit to it the better. 
> or to stay with Numeric for a while (e.g. 1 year). There is also an idea to support both > packages. That's a reasonable option as well. In fact, the APIs are pretty darn similar. Also, keep in mind that if you go exclusively to numpy, the newest version of Numeric will still work well with it. The conversion of numpy to Numeric arrays is very efficient, thanks to them both using the new "array protocol". Same goes for numarray. > Is it fair to say we are unlikely to see Numeric releases for new > Pythons in the future? I'm guessing someone might do a little maintenance. For the short term, the current Numeric will probably build just fine against the next couple releases of python -- python's good that way! > Can we consider NumPy as the only package in future? > Simply, we are wondering which Python library we should develop for > NumPy, Numeric or numarray to be most generally useful. Speaking as a very interested observer, but not a developer of any of the num* packages: numpy is the way of the future. As an observer, I think that's pretty clear. On the other hand, it is still beta software, so dropping Numeric just yet may not be appropriate. If you don't already support numarray, there is no reason to do so now. It will do the python numerical community a world of good for all of us to get back to a single array package. Note also that we hope some day to get a simple n-d array object into the python standard library. When that happens, that object is all but guaranteed to be compatible with numpy. -Chris -- Christopher Barker, Ph.D. 
Oceanographer NOAA/OR&R/HAZMAT (206) 526-6959 voice 7600 Sand Point Way NE (206) 526-6329 fax Seattle, WA 98115 (206) 526-6317 main reception Chris.Barker at noaa.gov From ndarray at mac.com Thu May 11 10:03:02 2006 From: ndarray at mac.com (Sasha) Date: Thu May 11 10:03:02 2006 Subject: [Numpy-discussion] basearray / arraykit In-Reply-To: <446369D6.7060205@cox.net> References: <445F8F71.4070801@cox.net> <446245AB.6000900@ieee.org> <44624D24.3050406@cox.net> <44626012.6050506@ieee.org> <446362CC.6000004@noaa.gov> <44636613.7050201@ieee.org> <446369D6.7060205@cox.net> Message-ID: On 5/11/06, Tim Hochberg wrote: > [...] > On this front, it's probably worth at least thinking a bit about whether there > is any prospect of harmonizing ctypes type notation and the numpy > data-type object. It seems somewhat silly to have (1) array.array's > notation for types ['i', 'f', etc], (2) ctypes notation for types > [c_int, c_float, etc] and (3) numpy's notation for types [dtype('<i4'), > dtype('<f8'), etc]. Don't forget (4) the struct module (similar to (1), but not exactly the same). Also I am not familiar with Boost.Python, but it must have some way to reflect C++ types to Python. If anyone on this list uses Boost.Python, please think if we can borrow any ideas from there. From ndarray at mac.com Thu May 11 10:15:02 2006 From: ndarray at mac.com (Sasha) Date: Thu May 11 10:15:02 2006 Subject: [Numpy-discussion] NumPy, Numeric or numarray or all of them? In-Reply-To: <44636A10.1030909@noaa.gov> References: <4463220B.20909@loskot.net> <44636A10.1030909@noaa.gov> Message-ID: On 5/11/06, Christopher Barker wrote: > [...] > > or to stay with Numeric for a while (e.g. 1 year). There is also an idea to support both > > packages. > > That's a reasonable option as well. In fact, the APIs are pretty darn > similar. Also, keep in mind that if you go exclusively to numpy, the > newest version of Numeric will still work well with it. 
The conversion > of numpy to Numeric arrays is very efficient, thanks to them both using > the new "array protocol". Same goes for numarray. I have started using the array protocol recently, and it is very useful. It allows you to eliminate compile time dependency on any of the particular array package from your extension modules. If your package already has an array-like object, that object should provide an __array_struct__ attribute and that will make it acceptable to asarray method from any of the latest array packages (numpy, numarray, or Numeric). If you have functions that take array as arguments they should be modified to accept any object that has __array_struct__. From st at sigmasquared.net Thu May 11 10:55:13 2006 From: st at sigmasquared.net (Stephan Tolksdorf) Date: Thu May 11 10:55:13 2006 Subject: [Numpy-discussion] Compiler warnings Message-ID: <44637A3D.1060203@sigmasquared.net> Seems like the setup script currently suppresses quite a lot of warnings, not only in MSVC. I created a ticket (#113) with a patch that fixes many of the warnings under Visual C. Maybe someone else could have a look at the remaining ones. @Albert: Do you still have a linking problem with the math functions? I haven't seen such a warning... 
Ciao, Stephan From oliphant at ee.byu.edu Thu May 11 11:13:03 2006 From: oliphant at ee.byu.edu (Travis Oliphant) Date: Thu May 11 11:13:03 2006 Subject: [Numpy-discussion] basearray / arraykit In-Reply-To: <446369D6.7060205@cox.net> References: <445F8F71.4070801@cox.net> <446245AB.6000900@ieee.org> <44624D24.3050406@cox.net> <44626012.6050506@ieee.org> <446362CC.6000004@noaa.gov> <44636613.7050201@ieee.org> <446369D6.7060205@cox.net> Message-ID: <44637E83.7090207@ee.byu.edu> Tim Hochberg wrote: > Travis Oliphant wrote: > >> Christopher Barker wrote: >> >>> Travis Oliphant wrote: >>> >>>> 1) Implement a base-array with no getitem method nor setitem method >>>> at all >>>> >>>> 2) Implement a sub-class that supports only creation of data-types >>>> corresponding to existing Python scalars (Boolean, Long-based >>>> integers, Double-based floats, complex and object types). Then, >>>> all array accesses should return the underlying Python objects. >>>> This sub-class should also only do view-based indexing (basically >>>> it's old Numeric behavior inside of NumPy). >>> >>> >>> >>>> Item 1) should be pushed for inclusion in 2.6 and possibly even >>>> something like 2) >>> >>> >>> >>> + sys.maxint >>> >>> Having even this very basic n-d object in the standard lib would be >>> a MAJOR boon to python. >>> >> >> I totally agree. I've been advertising this for at least 8 months, >> but nobody is really willing to work on it (or fund it). At least >> we have a summer student who is going to try and get Google >> summer-of-code money for it. If you have any ability to bump up the >> ratings of summer of code applications. Please consider bumping up >> his application. >> >>> However, as I think about it, one reason I'd really like to see an >>> nd-array in python is as a standard way to pass binary data around. >>> Right now, I'm working with the GDAL lib for geo-referenced raster >>> images, PIL, numpy and wxPython. 
I'm making a lot of copies to and >>> from python strings to pass the binary data back and forth. If all >>> these libs used the same nd-array, this would be much more >>> efficient. However, that would require non-python data types, most >>> notably a byte (or char, whatever) type. >>> >> Anything in Python would need to define at least a basic bytes type, >> for exactly this purpose. So, we are on the same page. >> >> I'm thinking now that the data-type object now in NumPy would make a >> nice addition to Python as well for a standard way to define >> data-types. Then, Python itself wouldn't have to do anything >> useful with non-standard arrays (except pass them around), but it >> would at least have a way to describe them. > > > On this front, it's probably worth at least thinking a bit about whether > there is any prospect of harmonizing ctypes type notation and the > numpy data-type object. It seems somewhat silly to have (1) > array.array's notation for types ['i', 'f', etc], (2) ctypes notation > for types [c_int, c_float, etc] and (3) numpy's notation for types > [dtype(' I agree in spirit. Python is growing several ways to do the same thing because determining what data-type you are dealing with is useful in many contexts. Unfortunately, there is not a lot of cross-pollination between the people. I've contributed to a couple of threads to try and at least raise awareness that data-description is something NumPy has been thinking about, also.
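[The three overlapping type notations listed above can be put side by side. A small sketch; the equivalences shown assume a typical platform where a C int is 32 bits:]

```python
import array
import ctypes
import numpy as np

# One logical type -- "signed 32-bit integer" -- spelled three ways:
buf = array.array('i', [1, 2, 3])  # (1) array.array typecode
ct = ctypes.c_int32                # (2) ctypes type object
dt = np.dtype('i4')                # (3) numpy dtype string

# numpy can already translate from the other two notations:
print(np.asarray(buf))                 # prints [1 2 3]
print(dt.itemsize == ctypes.sizeof(ct))  # prints True
```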
-Travis From fperez.net at gmail.com Thu May 11 11:24:08 2006 From: fperez.net at gmail.com (Fernando Perez) Date: Thu May 11 11:24:08 2006 Subject: [Numpy-discussion] basearray / arraykit In-Reply-To: <44637E83.7090207@ee.byu.edu> References: <445F8F71.4070801@cox.net> <446245AB.6000900@ieee.org> <44624D24.3050406@cox.net> <44626012.6050506@ieee.org> <446362CC.6000004@noaa.gov> <44636613.7050201@ieee.org> <446369D6.7060205@cox.net> <44637E83.7090207@ee.byu.edu> Message-ID: On 5/11/06, Travis Oliphant wrote: > I agree in spirit. Python is growing several ways to do the same thing > because determining what data-type you are dealing with is useful in > many contexts. Unfortunately, there is not a lot of cross-pollination > between the people. I've contributed to a couple of threads to try and > at least raise awareness that data-description is something NumPy has > been thinking about, also. Sounds like a good topic for the discussion idea. I started the wiki for it: http://scipy.org/SciPy06DiscussionWithGuido Gotta run now, I'll announce it on list later. f From lapataia28-northern at yahoo.com Thu May 11 11:52:04 2006 From: lapataia28-northern at yahoo.com (Lalo) Date: Thu May 11 11:52:04 2006 Subject: [Numpy-discussion] Building on Windows In-Reply-To: <44627BC0.3080403@sigmasquared.net> Message-ID: <20060511185101.63772.qmail@web36201.mail.mud.yahoo.com> Hi, I built on Cygwin using the detailed ATLAS+LAPACK build tutorial below. It's great, thanks! 
My site.cfg is: ------------------------------------------------- [DEFAULT] library_dirs = /usr/lib:/usr/local/lib:/usr/lib/python2.4/config include_dirs = /usr/include:/usr/local/include search_static_first = 0 [atlas] library_dirs = /usr/local/lib/atlas # for overriding the names of the atlas libraries atlas_libs = lapack, f77blas, cblas, atlas ------------------------------------------------- However, I had to create the following link, not sure why it wouldn't find the library otherwise: ln -s /usr/lib/python2.4/config/libpython2.4.dll.a /usr/lib/libpython2.4.dll.a Hope this helps, Lalo ----- Original Message ---- From: Stephan Tolksdorf To: Albert Strasheim Cc: numpy-discussion Sent: Wednesday, May 10, 2006 4:48:16 PM Subject: Re: [Numpy-discussion] Building on Windows Hi Albert, in the following you find an abridged preview version of my MSVC+ATLAS+Lapack build tutorial ;-) You probably already know most of it, but maybe it helps. You'll need a current Cygwin with g77 and MinGW-libraries installed. Atlas: ====== Download and extract latest development ATLAS (3.7.11). Comment out line 77 in Atlas/CONFIG/probe_SSE3.c. Run "make" and choose the appropriate options for your system. Don't activate posix threads (for now). Overwrite the compiler and linker flags with flags that include "-mno-cygwin". Use the default architecture settings. Atlas and the test suite hopefully compile without an error now. Lapack: ======= Download and extract www.netlib.org/lapack/lapack.tgz and apply the most current patch from www.netlib.org/lapack-dev/ Replace lapack/make.inc with lapack/INSTALL/make.inc.LINUX. Append "-mno-cygwin" to OPTS, NOOPT and LOADOPTS in make.inc. Add ".PHONY: install testing timing" as the last line to lapack/Makefile. Run "make install lib" in the lapack root directory in Cygwin. ("make testing timing" should also work now, but you probably want to use your optimised BLAS for that. Some errors in the tests are to be expected.)
Atlas + Lapack: =============== Copy the generated lapack_LINUX.a together with "libatlas.a", "libcblas.a", "libf77blas.a", "liblapack.a" into a convenient directory. In Cygwin execute the following command sequence in that directory to get an ATLAS-optimized LAPACK library "ar x liblapack.a ar r lapack_LINUX.a *.o rm *.o mv lapack_LINUX.a liblapack.a" Now make a copy of all lib*.a's to *.lib's, i.e. duplicate libatlas.a to atlas.lib, in order to allow distutils to recognize the libs and at the same time provide the correct versions for MSVC. Copy libg2c.a and libgcc.a from cygwin/lib/gcc/i686-pc-mingw32/3.4.4 to this directory and again make .lib copies. Compile and install numpy: ========================== Put "[atlas] library_dirs = d:\path\to\your\BlasDirectory atlas_libs = lapack,f77blas,cblas,atlas,g2c,gcc" into your site.cfg in the numpy root directory. Open a Visual Studio 2003 command prompt and run "Path\To\Python.exe setup.py config --compiler=msvc build --compiler=msvc bdist_wininst". Use the resulting dist/numpy-VERSION.exe installer to install Numpy. Testing: In a Python console run import numpy.testing numpy.testing.NumpyTest(numpy).run() ... hopefully without an error. Test your code base. I'll wikify an extended version in the next few days. Stephan Albert Strasheim wrote: > Hello all, > > It seems that many people are building on Windows without problems, except > for Stephan and myself. > > Let me start by saying that yes, the default build on Windows with MinGW > and Visual Studio works nicely. > > However, is anybody building with ATLAS and finding that experience to be > equally painless? If so, *please* can you tell me how you've organized your > libraries (which libraries? CLAPACK? FLAPACK? .a? .lib? What about ATLAS's > LAPACK functions? What about building ATLAS as DLL?). Also, I'd be very > interested in the contents of your site.cfg.
I've been trying for many > weeks to do some small subset of the above without hacking into the core of > numpy.distutils. So far, no luck. > > Does anybody do debug builds on Windows? Again, please tell me how you do > this, because I would really like to be able to build a debug version of > NumPy for debugging with the MSVS compiler. > > As for compiler warnings, last time I checked, distutils seems to be > suppressing the output from the compiler, except when the build actually > fails. Or am I mistaken? > > Eagerly awaiting Windows build nirvana, > > Albert > >> -----Original Message----- >> From: numpy-discussion-admin at lists.sourceforge.net [mailto:numpy- >> discussion-admin at lists.sourceforge.net] On Behalf Of Tim Hochberg >> Sent: 10 May 2006 23:49 >> To: Travis Oliphant >> Cc: Stephan Tolksdorf; numpy-discussion >> Subject: Re: [Numpy-discussion] Building on Windows >> >> Travis Oliphant wrote: >> >>> Stephan Tolksdorf wrote: >>> >>>> Hi, >>>> >>>> there are still some (mostly minor) problems with the Windows build >>>> of Numpy (MinGW/Cygwin/MSVC). I'd be happy to produce patches and >>>> documentation enhancements, but before doing so I'd like to know if >>>> there's interest from one of the core developers to review/commit >>>> these patches afterwards. I'm asking because in the past questions >>>> and suggestions regarding the building process of Numpy (especially >>>> on Windows) often remained unanswered on this list. I realise that >>>> many developers don't use Windows and that the distutils build is a >>>> complex beast, but the current situation seems a bit unsatisfactory >>>> - and I would like to help. >>> I think your assessment is a bit harsh. I regularly build on MinGW >>> so I know it works there (at least at release time). I also have >>> applied several patches with the express purpose of getting the build >>> working on MSVC and Cygwin. >>> >>> So, go ahead and let us know what problems you are having. 
You are >>> correct that my main build platform is not Windows, but I think >>> several other people do use Windows regularly and we definitely want >>> to support it. >>> >> Indeed. I build from SVN at least once a week using MSVC and it's been >> compiling warning-free and passing all tests for me for some time. >> >> -tim > > > > ------------------------------------------------------- > Using Tomcat but need to do more? Need to support web services, security? > Get stuff done quickly with pre-integrated technology to make your job easier > Download IBM WebSphere Application Server v.1.0.1 based on Apache Geronimo > http://sel.as-us.falkag.net/sel?cmd=lnk&kid=120709&bid=263057&dat=121642 > _______________________________________________ > Numpy-discussion mailing list > Numpy-discussion at lists.sourceforge.net > https://lists.sourceforge.net/lists/listinfo/numpy-discussion > > -------------- next part -------------- An HTML attachment was scrubbed...
URL: From fullung at gmail.com Thu May 11 12:04:03 2006 From: fullung at gmail.com (Albert Strasheim) Date: Thu May 11 12:04:03 2006 Subject: [Numpy-discussion] Compiler warnings In-Reply-To: <44637A3D.1060203@sigmasquared.net> Message-ID: <013f01c6752d$952a3ff0$0502010a@dsp.sun.ac.za> Hello all > -----Original Message----- > From: numpy-discussion-admin at lists.sourceforge.net [mailto:numpy- > discussion-admin at lists.sourceforge.net] On Behalf Of Stephan Tolksdorf > Sent: 11 May 2006 19:54 > To: numpy-discussion at lists.sourceforge.net > Subject: [Numpy-discussion] Compiler warnings > > Seems like the setup script currently suppresses quite a lot of > warnings, not only in MSVC. I created a ticket (#113) with a patch that > fixes many of the warnings under Visual C. Maybe someone else could have > a look at the remaining ones. > > @Albert: Do you still have a linking problem with the math functions? I > haven't seen such a warning... I sent Travis a description of the problem and he fixed it a while back. Nice work on fixing the compiler warnings. I'll take a look at what's left over when your patch is merged. Regards, Albert From st at sigmasquared.net Thu May 11 12:36:02 2006 From: st at sigmasquared.net (Stephan Tolksdorf) Date: Thu May 11 12:36:02 2006 Subject: [Numpy-discussion] Building on Windows In-Reply-To: <20060511185101.63772.qmail@web36201.mail.mud.yahoo.com> References: <20060511185101.63772.qmail@web36201.mail.mud.yahoo.com> Message-ID: <446391FD.5030901@sigmasquared.net> Lalo wrote: > Hi, > > I built on Cygwin using the detailed ATLAS+LAPACK build tutorial below. > It's great, thanks! If I've understood you correctly, you built Numpy in Cygwin using the _Python_ compiler from Cygwin, not the official Python binary distribution. As far as I know, that is currently not supported and probably unsafe because the resulting extension modules and Python will be linked to incompatible runtime libraries (msvcrt and the Cygwin one). 
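[Whichever toolchain ends up being used, it helps to confirm afterwards what the finished build actually linked against. A small sketch, assuming a reasonably recent numpy release (where the old numpy.testing.NumpyTest(numpy).run() invocation quoted earlier in this thread has since become numpy.test()):]

```python
import numpy as np

print(np.__version__)  # confirm which build is actually being imported
np.show_config()       # report the BLAS/LAPACK libraries it was compiled against
```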
If you want to build Numpy under Windows without MSVC, currently the easiest way is to download MinGW from http://sourceforge.net/forum/forum.php?forum_id=539405 install it and put the MinGW/bin folder on the path of a normal windows command prompt. You do not need MSYS if you have built ATLAS and LAPACK with Cygwin and the -mno-cygwin option, as previously described. Now run "d:\path\to\official\python.exe setup.py config --compiler=mingw32 build --compiler=mingw32 bdist_wininst" in the Numpy root directory in the windows command prompt window (not Cygwin). Stephan > > My site.cfg is: > > ------------------------------------------------- > [DEFAULT] > library_dirs = /usr/lib:/usr/local/lib:/usr/lib/python2.4/config > include_dirs = /usr/include:/usr/local/include > search_static_first = 0 > > [atlas] > library_dirs = /usr/local/lib/atlas > # for overriding the names of the atlas libraries > atlas_libs = lapack, f77blas, cblas, atlas > ------------------------------------------------- > > However, I had to create the following link, not sure why it wouldn't > find the library otherwise: > > ln -s /usr/lib/python2.4/config/libpython2.4.dll.a > /usr/lib/libpython2.4.dll.a > > Hope this helps, > Lalo > > > ----- Original Message ---- > From: Stephan Tolksdorf > To: Albert Strasheim > Cc: numpy-discussion > Sent: Wednesday, May 10, 2006 4:48:16 PM > Subject: Re: [Numpy-discussion] Building on Windows > > Hi Albert, > > in the following you find an abridged preview version of my > MSVC+ATLAS+Lapack build tutorial ;-) You probably already know most of > it, but maybe it helps. > > You'll need a current Cygwin with g77 and MinGW-libraries installed. > > Atlas: > ====== > Download and extract latest development ATLAS (3.7.11). Comment out line > 77 in Atlas/CONFIG/probe_SSE3.c. Run "make" and choose the appropriate > options for your system. Don't activate posix threads (for now). > Overwrite the compiler and linker flags with flags that include > "-mno-cygwin".
Use the default architecture settings. Atlas and the test > suite hopefully compile without an error now. > > Lapack: > ======= > Download and extract www.netlib.org/lapack/lapack.tgz > and apply the most > current patch from www.netlib.org/lapack-dev/ > > Replace lapack/make.inc with lapack/INSTALL/make.inc.LINUX. Append > "-mno-cygwin" to OPTS, NOOPT and LOADOPTS in make.inc. Add > ".PHONY: install testing timing" as the last line to lapack/Makefile. > Run "make install lib" in the lapack root directory in Cygwin. ("make > testing timing" and should also work now, but you probably want to use > your optimised BLAS for that. Some errors in the tests are to be expected.) > > Atlas + Lapack: > =============== > Copy the generated lapack_LINUX.a together with "libatlas.a", > "libcblas.a", "libf77blas.a", "liblapack.a" into a convenient directory. > > In Cygwin execute the following command sequence in that directory to > get an ATLAS-optimized LAPACK library > "ar x liblapack.a > ar r lapack_LINUX.a *.o > rm *.o > mv lapack_LINUX.a liblapack.a" > > Now make a copy of all lib*.a's to *.lib's, i.e. duplicate libatlas.a to > atlas.lib, in order to allow distutils to recognize the libs and at the > same time provide the correct versions for MSVC. > > Copy libg2c.a and libgcc.a from cygwin/lib/gcc/i686-pc-mingw32/3.4.4 to > this directory and again make .lib copies. > > Compile and install numpy: > ========================== > > Put > "[atlas] > library_dirs = d:\path\to\your\BlasDirectory > atlas_libs = lapack,f77blas,cblas,atlas,g2c,gcc" > into your site.cfg in the numpy root directory. > > Open an Visual Studio 2003 command prompt and run > "Path\To\Python.exe setup.py config --compiler=msvc build > --compiler=msvc bdist_wininst". > Use the resulting dist/numpy-VERSION.exe installer to install Numpy. > > Testing: > In a Python console run > import numpy.testing > numpy.testing.NumpyTest(numpy).run() > ... hopefully without an error. Test your code base. 
> > I'll wikify an extended version in the next days. > > Stephan > > > Albert Strasheim wrote: > > Hello all, > > > > It seems that many people are building on Windows without problems, > except > > for Stephan and myself. > > > > Let me start by staying that yes, the default build on Windows with MinGW > > and Visual Studio works nicely. > > > > However, is anybody building with ATLAS and finding that experience to be > > equally painless? If so, *please* can you tell me how you've > organized your > > libraries (which libraries? CLAPACK? FLAPACK? .a? .lib? What about > ATLAS's > > LAPACK functions? What about building ATLAS as DLL?). Also, I'd be very > > interested in the contents of your site.cfg. I've been trying for many > > weeks to do some small subset of the above without hacking into the > core of > > numpy.distutils. So far, no luck. > > > > Does anybody do debug builds on Windows? Again, please tell me how you do > > this, because I would really like to be able to build a debug version of > > NumPy for debugging with the MSVS compiler. > > > > As for compiler warnings, last time I checked, distutils seems to be > > suppressing the output from the compiler, except when the build actually > > fails. Or am I mistaken? > > > > Eagerly awaiting Windows build nirvana, > > > > Albert > > > >> -----Original Message----- > >> From: numpy-discussion-admin at lists.sourceforge.net [mailto:numpy- > >> discussion-admin at lists.sourceforge.net] On Behalf Of Tim Hochberg > >> Sent: 10 May 2006 23:49 > >> To: Travis Oliphant > >> Cc: Stephan Tolksdorf; numpy-discussion > >> Subject: Re: [Numpy-discussion] Building on Windows > >> > >> Travis Oliphant wrote: > >> > >>> Stephan Tolksdorf wrote: > >>> > >>>> Hi, > >>>> > >>>> there are still some (mostly minor) problems with the Windows build > >>>> of Numpy (MinGW/Cygwin/MSVC). 
I'd be happy to produce patches and > >>>> documentation enhancements, but before doing so I'd like to know if > >>>> there's interest from one of the core developers to review/commit > >>>> these patches afterwards. I'm asking because in the past questions > >>>> and suggestions regarding the building process of Numpy (especially > >>>> on Windows) often remained unanswered on this list. I realise that > >>>> many developers don't use Windows and that the distutils build is a > >>>> complex beast, but the current situation seems a bit unsatisfactory > >>>> - and I would like to help. > >>> I think your assessment is a bit harsh. I regularly build on MinGW > >>> so I know it works there (at least at release time). I also have > >>> applied several patches with the express purpose of getting the build > >>> working on MSVC and Cygwin. > >>> > >>> So, go ahead and let us know what problems you are having. You are > >>> correct that my main build platform is not Windows, but I think > >>> several other people do use Windows regularly and we definitely want > >>> to support it. > >>> > >> Indeeed. I build from SVN at least once a week using MSVC and it's been > >> compiling warning free and passing all tests for me for some time. > >> > >> -tim > > > > > > > > ------------------------------------------------------- > > Using Tomcat but need to do more? Need to support web services, security? > > Get stuff done quickly with pre-integrated technology to make your > job easier > > Download IBM WebSphere Application Server v.1.0.1 based on Apache > Geronimo > > > http://sel.as-us.falkag.net/sel?cmd=lnk&kid=120709&bid=263057&dat=121642 > > > _______________________________________________ > > Numpy-discussion mailing list > > Numpy-discussion at lists.sourceforge.net > > https://lists.sourceforge.net/lists/listinfo/numpy-discussion > > > > > > > > ------------------------------------------------------- > Using Tomcat but need to do more? 
Need to support web services, security? > Get stuff done quickly with pre-integrated technology to make your job > easier > Download IBM WebSphere Application Server v.1.0.1 based on Apache Geronimo > http://sel.as-us.falkag.net/sel?cmd=lnk&kid=120709&bid=263057&dat=121642 > > _______________________________________________ > Numpy-discussion mailing list > Numpy-discussion at lists.sourceforge.net > https://lists.sourceforge.net/lists/listinfo/numpy-discussion From cookedm at physics.mcmaster.ca Thu May 11 12:48:05 2006 From: cookedm at physics.mcmaster.ca (David M. Cooke) Date: Thu May 11 12:48:05 2006 Subject: [Numpy-discussion] Getting 0.9.8 out this week In-Reply-To: <4461131B.1050907@ieee.org> (Travis Oliphant's message of "Tue, 09 May 2006 16:09:31 -0600") References: <4461131B.1050907@ieee.org> Message-ID: Travis Oliphant writes: > I'd like to get 0.9.8 of NumPy released by the end of the week. > There are a few Trac tickets that need to be resolved by then. > > In particular #83 suggests returning scalars instead of 1x1 matrices > from certain reduce-like methods. Please chime in on your preference. > I'm waiting to hear more feedback before applying the patch. > > If you can help out on any other ticket that would be much > appreciated. I'd like to fix up #81 (Numpy should be installable with setuptools' easy_install), but I'm not going to have any time to work on it before the weekend. -- |>|\/|< /--------------------------------------------------------------------------\ |David M. Cooke http://arbutus.physics.mcmaster.ca/dmc/ |cookedm at physics.mcmaster.ca From fperez.net at gmail.com Thu May 11 13:01:05 2006 From: fperez.net at gmail.com (Fernando Perez) Date: Thu May 11 13:01:05 2006 Subject: [Numpy-discussion] Discussion with Guido: an idea for scipy'06 Message-ID: Hi all, I think the presence of Guido as keynote speaker for scipy'06 is something we should try to benefit from as much as possible.
I figured a good way to do that would be to set aside a one hour period, at the end of the first day (his keynote is that day), to hold a discussion with him on all aspects of Python which are relevant to us as a group of users, and which he may contribute feedback to, incorporate into future versions, etc. I floated the idea privately by some people and got no negative (and some positive) feedback, so I'm now putting it out on the lists. If you all think it's a waste of time, it's easy to just kill the thing. Since I know that many on this list may not be able to attend the conference, but may still have valuable ideas to contribute, I thought the best way to proceed (assuming people want to do this) would be to prepare the key points for discussion on a public forum, true to the spirit of open source collaboration. For this purpose, I've just created a stub page in a hurry: http://scipy.org/SciPy06DiscussionWithGuido Feel free to contribute to it. Hopefully there (and on-list) we can sort out interesting questions, and we can contact Guido a few days before the conference so he has a chance to read it in advance. Cheers, f ps - I didn't link to this page from anywhere else on the wiki, so outside of this message it won't be easy to find. I just didn't feel comfortable touching the more 'visible' pages, but if this idea floats, we should make it easier to find by linking to it on one of the conference pages. From joris at ster.kuleuven.ac.be Thu May 11 14:14:01 2006 From: joris at ster.kuleuven.ac.be (joris at ster.kuleuven.ac.be) Date: Thu May 11 14:14:01 2006 Subject: [Numpy-discussion] resize() Message-ID: <1147381970.4463a8d2ca072@webmail.ster.kuleuven.be> Hi, I was surprised by the following effect of resize() >>> from numpy import * # 0.9.6 >>> a = array([1,2,3,4]) >>> a.resize(2,2) >>> a array([[1, 2], [3, 4]]) >>> a.resize(2,3) Traceback (most recent call last): File "", line 1, in ? 
ValueError: cannot resize an array that has been referenced or is referencing another array in this way. Use the resize function Where exactly is the reference? I just started the interactive python shell, did nothing else... On the other hand, restarting python and executing >>> from numpy import * >>> a = array([1,2,3,4]) >>> a.resize(2,3) >>> a array([[1, 2, 3], [4, 0, 0]]) does work... Why didn't it work for the first case? Joris Disclaimer: http://www.kuleuven.be/cwis/email_disclaimer.htm From fonnesbeck at gmail.com Thu May 11 15:26:02 2006 From: fonnesbeck at gmail.com (Chris Fonnesbeck) Date: Thu May 11 15:26:02 2006 Subject: [Numpy-discussion] OS X binary installers Message-ID: <723eb6930605111525m1718b70al19cbca2aabe31f22@mail.gmail.com> I am now distributing Mac binary installers for both numpy and scipy in a "meta-package" along with a couple other modules (Matplotlib, PyMC). This will hopefully resolve some of the version conflicts that some have been experiencing with my builds of numpy and scipy that were not compiled together. These builds are recent svn checkouts, and I hope to update them approximately weekly. In addition, now that I have a new dual core Intel Mac Mini, I am distributing both PPC and Intel versions. You can download either at http://trichech.us in the OS X downloads section. Chris -- Chris Fonnesbeck + Atlanta, GA + http://trichech.us -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From filip at ftv.pl Thu May 11 16:00:04 2006 From: filip at ftv.pl (Filip Wasilewski) Date: Thu May 11 16:00:04 2006 Subject: [Numpy-discussion] resize() In-Reply-To: <1147381970.4463a8d2ca072@webmail.ster.kuleuven.be> References: <1147381970.4463a8d2ca072@webmail.ster.kuleuven.be> Message-ID: <1162264868.20060512005926@gmail.com> Hi joris, > I was surprised by the following effect of resize() >>>> from numpy import * # 0.9.6 >>>> a = array([1,2,3,4]) >>>> a.resize(2,2) >>>> a > array([[1, 2], > [3, 4]]) >>>> a.resize(2,3) > Traceback (most recent call last): > File "", line 1, in ? > ValueError: cannot resize an array that has been referenced or is referencing > another array in this way. Use the resize function > Where exactly is the reference? I just started the interactive python shell, > did nothing else... You have also typed >>> a which in turn prints repr() of the array and causes some side effect in the interactive mode (the `a` array is also referenced by the _ special variable after this). Try running this code as a script or use `print a`: >>> a.resize(2,2) >>> print a [[1 2] [3 4]] >>> a.resize(2,3) >>> print a [[1 2 3] [4 0 0]] > On the other hand, restarting python and executing >>>> from numpy import * >>>> a = array([1,2,3,4]) >>>> a.resize(2,3) >>>> a > array([[1, 2, 3], > [4, 0, 0]]) > does work... Yes, no extra referencing before array resizing here. > Why didn't it work for the first case? This is just a small interactive mode feature and does not happen during normal script execution. cheers, fw From wbaxter at gmail.com Thu May 11 16:29:03 2006 From: wbaxter at gmail.com (Bill Baxter) Date: Thu May 11 16:29:03 2006 Subject: [Numpy-discussion] Creating small arrays from strings and concatenating with empty arrays In-Reply-To: <44636533.3020201@noaa.gov> References: <44636533.3020201@noaa.gov> Message-ID: Ahh, I hadn't noticed the fromstring/fromfile methods. Hmm.
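[Returning to the resize() thread: the reference-count behaviour Filip describes can be reproduced deliberately in a script. Current numpy releases also expose a refcheck argument on ndarray.resize to override the check; a sketch (refcheck=False is safe only when you know no other view depends on the buffer):]

```python
import numpy as np

a = np.array([1, 2, 3, 4])
a.resize(2, 2)          # fine: nothing else references the array

b = a                   # a second reference, like the interactive `_`
try:
    a.resize(2, 3)      # numpy refuses while extra references exist
except ValueError:
    pass

a.resize(2, 3, refcheck=False)   # explicit override
print(a)                # the new cells are zero-filled: [[1 2 3] [4 0 0]]
```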
That seems ok for making a row-at-a-time, but it doesn't support the full syntax of the matrix string constructor, which allows for things like >>> numpy.matrix("[1 2; 2 3;3 4]") matrix([[1, 2], [2, 3], [3, 4]]) On the other hand since it's 'matrix', it turns things like "1 2 3" into [[1,2,3]] instead of just [1,2,3]. I think an array version of the matrix string constructor that returns the latter would be handy. But it's admittedly a pretty minor thing. ---bb On 5/12/06, Christopher Barker wrote: > > > Bill Baxter wrote: > > Two quick questions: > > ---------1------------ > > Is there any better way to intialize an array from a string than this: > > > > A = asarray(matrix("1 2 3")) > > How about: > > >>> import numpy as N > >>> N.fromstring("1 2 3", sep = " ") > array([1, 2, 3]) > > or > > >>> N.fromstring("1 2 3", dtype = N.Float, sep = " ") > array([ 1., 2., 3.]) > > If you pass a non-empty "sep" parameter, it parses the string, rather > than treating is as binary data. fromfile works this way too -- thanks > Travis! > > -Chris > > > -- > Christopher Barker, Ph.D. > Oceanographer > > NOAA/OR&R/HAZMAT (206) 526-6959 voice > 7600 Sand Point Way NE (206) 526-6329 fax > Seattle, WA 98115 (206) 526-6317 main reception > > Chris.Barker at noaa.gov > > > ------------------------------------------------------- > Using Tomcat but need to do more? Need to support web services, security? > Get stuff done quickly with pre-integrated technology to make your job > easier > Download IBM WebSphere Application Server v.1.0.1 based on Apache Geronimo > http://sel.as-us.falkag.net/sel?cmd=lnk&kid=120709&bid=263057&dat=121642 > _______________________________________________ > Numpy-discussion mailing list > Numpy-discussion at lists.sourceforge.net > https://lists.sourceforge.net/lists/listinfo/numpy-discussion > -- William V. 
Baxter III OLM Digital Kono Dens Building Rm 302 1-8-8 Wakabayashi Setagaya-ku Tokyo, Japan 154-0023 +81 (3) 3422-3380 -------------- next part -------------- An HTML attachment was scrubbed... URL: From wbaxter at gmail.com Thu May 11 17:26:04 2006 From: wbaxter at gmail.com (Bill Baxter) Date: Thu May 11 17:26:04 2006 Subject: [Numpy-discussion] numpy crash on numpy.tri(-1) Message-ID: Subject says it all. numpy.tri(-1) crashes the python process. >>> numpy.__version__ '0.9.6' --bb -------------- next part -------------- An HTML attachment was scrubbed... URL: From robert.kern at gmail.com Thu May 11 17:38:02 2006 From: robert.kern at gmail.com (Robert Kern) Date: Thu May 11 17:38:02 2006 Subject: [Numpy-discussion] Re: numpy crash on numpy.tri(-1) In-Reply-To: References: Message-ID: Bill Baxter wrote: > Subject says it all. numpy.tri(-1) crashes the python process. > >>>> numpy.__version__ > '0.9.6' On OS X: >>> import numpy >>> numpy.tri(-1) array([], shape=(0, 0), dtype=int32) >>> numpy.__version__ '0.9.7.2476' While probably not "right" in any sense of the word, it doesn't crash in recent versions. In the future, it would be good to post the output of the crash. Sometimes people use the word "crash" to imply that an exception made it to the toplevel rather than, say, a segfault at the C level. Knowing what kind of "crash" is important. With segfaults, it is also usually helpful to know the platform you are on. With segfaults in linear algebra code, it is extremely helpful to know what BLAS and LAPACK you used to compile numpy. Thank you. -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." 
-- Umberto Eco From wbaxter at gmail.com Thu May 11 17:51:13 2006 From: wbaxter at gmail.com (Bill Baxter) Date: Thu May 11 17:51:13 2006 Subject: [Numpy-discussion] Re: numpy crash on numpy.tri(-1) In-Reply-To: References: Message-ID: Sorry about the lack of detail. The platform is Windows. I'm using binaries off the Scipy website. And by crash I mean a dialog pops up in my pycrust shell saying "pythonw.exe has encountered a problem and needs to close. We are sorry for the inconvenience", and then asks me if I want to send Microsoft an error report. Perhaps it is fixed in a more recent version of numpy, though. --bb On 5/12/06, Robert Kern wrote: > > Bill Baxter wrote: > > Subject says it all. numpy.tri(-1) crashes the python process. > > > >>>> numpy.__version__ > > '0.9.6' > > On OS X: > > >>> import numpy > >>> numpy.tri(-1) > array([], shape=(0, 0), dtype=int32) > >>> numpy.__version__ > '0.9.7.2476' > > While probably not "right" in any sense of the word, it doesn't crash in > recent > versions. > > In the future, it would be good to post the output of the crash. Sometimes > people use the word "crash" to imply that an exception made it to the > toplevel > rather than, say, a segfault at the C level. Knowing what kind of "crash" > is > important. With segfaults, it is also usually helpful to know the platform > you > are on. With segfaults in linear algebra code, it is extremely helpful to > know > what BLAS and LAPACK you used to compile numpy. > > Thank you. > > -- > Robert Kern > > "I have come to believe that the whole world is an enigma, a harmless > enigma > that is made terrible by our own mad attempt to interpret it as though it > had > an underlying truth." > -- Umberto Eco > > > > ------------------------------------------------------- > Using Tomcat but need to do more? Need to support web services, security?
> Get stuff done quickly with pre-integrated technology to make your job > easier > Download IBM WebSphere Application Server v.1.0.1 based on Apache Geronimo > http://sel.as-us.falkag.net/sel?cmd=lnk&kid=120709&bid=263057&dat=121642 > _______________________________________________ > Numpy-discussion mailing list > Numpy-discussion at lists.sourceforge.net > https://lists.sourceforge.net/lists/listinfo/numpy-discussion > -- William V. Baxter III OLM Digital Kono Dens Building Rm 302 1-8-8 Wakabayashi Setagaya-ku Tokyo, Japan 154-0023 +81 (3) 3422-3380 -------------- next part -------------- An HTML attachment was scrubbed... URL: From wbaxter at gmail.com Fri May 12 01:18:01 2006 From: wbaxter at gmail.com (Bill Baxter) Date: Fri May 12 01:18:01 2006 Subject: [Numpy-discussion] Who uses matrix? In-Reply-To: <1C3F540D-B712-480F-A8AF-031DDCA525A3@tamu.edu> References: <44623207.2020306@noaa.gov> <1C3F540D-B712-480F-A8AF-031DDCA525A3@tamu.edu> Message-ID: On 5/11/06, Robert Hetland wrote: > > > Is it worth it to convert the arrays to matrices in order to do this > handful of calculation? Almost. I covet the shorthand .T notation > in matrix object while getting RSI typing in t-r-a-n-s-p-o-s-e. > Also, for involved calculations inverse, transpose et. al are long > enough words such that the line always wraps, resulting in less- > readable code. +1 on a .T shortcut for arrays. +1 on a .H shortcut for arrays, too. (Instead of .conj().transpose()) I'm not wild about the .I shortcut. I prefer to see big expensive operations like a matrix inverse to stand out a little more when I'm looking at the code. And I hardly ever need an inverse anyway (usually an lu_factor or SVD or something like that will do what I need more efficiently and robustly than directly taking an inverse). I just finished writing my first few hundred lines of code using array instead of matrix. It went fine. 
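For what it's worth, the two shorthands being voted on here are trivial to sketch as module-level helpers today. The names `T` and `H` below are hypothetical, not part of numpy:

```python
import numpy as np

def T(a):
    # shorthand for a.transpose()
    return a.transpose()

def H(a):
    # conjugate (Hermitian) transpose, i.e. a.conj().transpose()
    return a.conj().transpose()

a = np.array([[1 + 2j, 3 - 1j]])   # a 1x2 row vector
print(T(a).shape)                  # (2, 1)
print(H(a))                        # column vector with conjugated entries
```

Defining these once at the top of a module gives most of the typing relief of a `.T` attribute without touching ndarray itself.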
Here are my impressions: it was nice not having to worry so much about whether my vectors were row vectors or column vectors all the time, or whether this or that was matrix type or not. It felt much less like I was fighting with numpy than when I was using matrix. I also ported a little bit of matlab code that was full of apostrophes (matlab's transpose operator). With numpy arrays, all but one of the transposes went away. I realized also that most of what I end up doing is wrangling data to get it in the right shape and form so that I can do just a few lines of linear algebra on it, similar to what you observe, Rob, so it makes little sense to have everything be matrices all the time. So looks like I'm joining the camp of "matrix -- not worth the bother" folks. The .T shortcut for arrays would still be nice though. And some briefer way to make a new axis than 'numpy.newaxis' would be nice too (I'm trying to keep my global namespace clean these days). --- Whoa I just noticed that a[None,:] has the same effect as 'newaxis'. Is that a blessed way to do things? Regards, Bill -------------- next part -------------- An HTML attachment was scrubbed... URL: From svetosch at gmx.net Fri May 12 01:34:11 2006 From: svetosch at gmx.net (Sven Schreiber) Date: Fri May 12 01:34:11 2006 Subject: [Numpy-discussion] Who uses matrix? In-Reply-To: References: <44623207.2020306@noaa.gov> <1C3F540D-B712-480F-A8AF-031DDCA525A3@tamu.edu> Message-ID: <4464484E.2040200@gmx.net> Bill Baxter schrieb: > So looks like I'm joining the camp of "matrix -- not worth the bother" > folks. The .T shortcut for arrays would still be nice though. > Bill, I think I see what you mean. Nevertheless, I prefer my vectors to have a row-or-column orientation, for example because the requirement of conforming shapes for multiplication reveals (some of) my bugs when I slice a vector out of a matrix but along the wrong axis. It seems that otherwise I would get a bogus result without necessarily realizing it.
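Sven's point can be made concrete: keeping vectors 2-D preserves their orientation, so a wrong pairing fails loudly instead of silently producing a number. A minimal sketch:

```python
import numpy as np

A = np.arange(6.).reshape(2, 3)

col = A[:, 0:1]            # shape (2, 1): orientation preserved
try:
    np.dot(col, col)       # (2,1) x (2,1) shapes don't conform -> error
except ValueError:
    print("shape mismatch caught")

v = A[:, 0]                # shape (2,): orientation lost
print(np.dot(v, v))        # 9.0 -- with 1-D arrays dot() proceeds either way
```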
I guess that also tells something about my inefficient matrix coding style in general, but still... For rapidly implementing some published formulae I prefer to map them to (numpy) code as directly and proof-readably as possible, at least initially. And that means with matrices. cheers, sven From tim.hochberg at cox.net Fri May 12 08:06:32 2006 From: tim.hochberg at cox.net (Tim Hochberg) Date: Fri May 12 08:06:32 2006 Subject: [Numpy-discussion] Who uses matrix? In-Reply-To: References: <44623207.2020306@noaa.gov> <1C3F540D-B712-480F-A8AF-031DDCA525A3@tamu.edu> Message-ID: <4464A3F9.3010204@cox.net> Bill Baxter wrote: > On 5/11/06, *Robert Hetland* > wrote: > > > Is it worth it to convert the arrays to matrices in order to do this > handful of calculation? Almost. I covet the shorthand .T notation > in matrix object while getting RSI typing in t-r-a-n-s-p-o-s-e. > Also, for involved calculations inverse, transpose et. al are long > enough words such that the line always wraps, resulting in less- > readable code. > Note that T(a) is only one character longer than a.T and is trivially implementable today. If you're doing enough typing of t-r-a-n-s-p-o-s-e to induce RSI, it's surely not going to hurt you to type: def T(a): return a.transpose() etc, somewhere at the top of your module. > > +1 on a .T shortcut for arrays. > +1 on a .H shortcut for arrays, too. (Instead of .conj().transpose()) -1. These are really only meaningful for 2D arrays. Without the axis keyword they generally do the wrong things for arrays of other dimensionality (there I usually, but not always, want to transpose the first two or last two axes). In addition, ndarray is already overstuffed with methods and attributes, let's not add any more without careful consideration. > I'm not wild about the .I shortcut. I prefer to see big expensive > operations like a matrix inverse to stand out a little more when I'm > looking at the code.
And I hardly ever need an inverse anyway > (usually an lu_factor or SVD or something like that will do what I > need more efficiently and robustly than directly taking an inverse). Agreed. In addition, inverse is another thing that's specific to 2D arrays. > I just finished writing my first few hundred lines of code using array > instead of matrix. It went fine. Here are my impressions: it was > nice not having to worry so much about whether my vectors were row > vectors or column vectors all the time, or whether this or that was > matrix type or not. It felt much less like I was fighting with numpy > than when I was using matrix. I also ported a little bit of matlab > code that was full of apostrophes (matlab's transpose operator). With > numpy arrays, all but one of the transposes went away. I realized > also that most of what I end up doing is wrangling data to get it in > the right shape and form so that I can do just a few lines of > linear algebra on it, similar to what you observe, Rob, so it makes > little sense to have everything be matrices all the time. > > So looks like I'm joining the camp of "matrix -- not worth the bother" > folks. The .T shortcut for arrays would still be nice though. > > And some briefer way to make a new axis than 'numpy.newaxis' would be > nice too (I'm trying to keep my global namespace clean these days). > --- Whoa I just noticed that a[None,:] has the same effect as > 'newaxis'. Is that a blessed way to do things? numpy.newaxis *is* None. I don't think that I'd spell it that way since None doesn't really have the same connotations as newaxis, so I think I'll stick with np.newaxis, which is how I spell these things. For an even weirder example, which only exists in a parallel universe, I thought about proposing alls as another alias for None. This was so that, for instance, one could do: sum(a, axes=all) instead of sum(a, axes=None). That's clearer, but the cognitive dissonance of 'all=None' scared me away.
-tim From st at sigmasquared.net Fri May 12 15:03:13 2006 From: st at sigmasquared.net (Stephan Tolksdorf) Date: Fri May 12 15:03:13 2006 Subject: [Numpy-discussion] Some numpy.distutils question Message-ID: <44650286.9060209@sigmasquared.net> Hi all Could maybe someone explain which combinations of Blas, Atlas and Lapack Numpy should support in principle? For example, atlas_info in system_info.py takes a special code path and defines a macro when 'lapack' is not among the found libraries but a lib with the hard-coded name 'lapack_atlas' is. This looks like a Numpy with ATLAS-only option, but it is not documented. It looks like the numpy.distutils mingw32compiler class is copy of the one in the python distutils - but it seems to be out of date. Shouldn't we now make use of the Python one, or resync numpy's code? What should be done with the suppressed compiler warnings? Should there be some config switch to activate them, should they always be shown? Ciao, Stephan From charlesr.harris at gmail.com Fri May 12 17:47:30 2006 From: charlesr.harris at gmail.com (Charles R Harris) Date: Fri May 12 17:47:30 2006 Subject: [Numpy-discussion] Creating small arrays from strings and concatenating with empty arrays In-Reply-To: References: <44636533.3020201@noaa.gov> Message-ID: On 5/11/06, Bill Baxter wrote: > > Ahh, I hadn't noticed the fromstring/fromfile methods. > > Hmm. That seems ok for making a row-at-a-time, but it doesn't support the > full syntax of the matrix string constructor, which allows for things like > > >>> numpy.matrix("[1 2; 2 3;3 4]") > matrix([[1, 2], > [2, 3], > [3, 4]]) > You can reshape the array returned by fromstring, i.e., In [6]: fromstring("1 2 2 3 3 4", sep=" ").reshape(-1,2) Out[6]: array([[1, 2], [2, 3], [3, 4]]) Chuck -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From Chris.Barker at noaa.gov Fri May 12 21:45:08 2006 From: Chris.Barker at noaa.gov (Christopher Barker) Date: Fri May 12 21:45:08 2006 Subject: [Numpy-discussion] Creating small arrays from strings and concatenating with empty arrays In-Reply-To: References: <44636533.3020201@noaa.gov> Message-ID: <4464B6D2.2040701@noaa.gov> Bill Baxter wrote: > it doesn't support the > full syntax of the matrix string constructor, which allows for things like > >>>> numpy.matrix("[1 2; 2 3;3 4]") > matrix([[1, 2], > [2, 3], > [3, 4]]) > > I think an array version of the matrix string constructor that returns the > latter would be handy. > But it's admittedly a pretty minor thing. I agree it's pretty minor indeed, but as long as the code is in the matrix object, why not in the array object as well? As I think about it, I can see two reasons: 1) arrays are n-d. after commas and semi-colons, how do you construct a higher-than-rank-two array? 2) is it really that much harder to type the parentheses?: I suppose there is a bit of inefficiency in creating all those tuples, just to have them dumped, but I can't imagine that ever really matters. By the way, you can do: >>> a = numpy.fromstring("1 2; 2 3; 3 4", sep=" ").reshape((-1,2)) >>> a array([[1, 2], [2, 3], [3, 4]]) Which, admittedly, is kind of clunky, and, in fact, the ";" is being ignored, but you can put it there to remind yourself what you meant. A note about fromstring/fromfile: I sometimes might have a mix of separators, like the above example. It would be nice I I could pass in more than one that would get used. the above example will only work if there is a space in addition to the semi-colon. It would be nice to be able to do: a = numpy.fromstring("1 2;2 3;3 4", sep=" ;") or a = numpy.fromstring("1,2;2,3;3,4", sep=",;") and have that work. Travis, I believe you said that this code was inspired by my Scanfile code I posted on this list a while back. 
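The multi-separator behaviour wished for above can be sketched with a small helper. The name `fromtext` and its signature below are hypothetical, not an existing numpy function:

```python
import re
import numpy as np

def fromtext(s, seps=" ;,", dtype=float):
    # hypothetical helper: treat any run of the characters in `seps`
    # as a single separator between numbers
    pattern = "[%s]+" % re.escape(seps)
    tokens = [t for t in re.split(pattern, s.strip()) if t]
    return np.array([dtype(t) for t in tokens])

a = fromtext("1,2;2,3;3,4").reshape(-1, 2)
print(a.shape)   # (3, 2)
```

This accepts "1 2; 2 3" and "1,2;2,3" alike, at the cost of masking some malformed input, as discussed above.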
In that code, I allowed any character that ?scanf didn't interpret as a number be used as a separator: if you asked for the next ten numbers in the file, you'd get the next ten numbers, regardless of what was in between them. While that seems kind of ripe for masking errors, I find that I need to know what the file format I'm working with looks like anyway, and while this approach might mask an error when you read the data, it'll show up soon enough later on, and it sure does make it easy to use and code. Maybe a special string for sep could give us this behavior, like "*" or something. I'm also not sure it's the best idea to put this functionality into fromstring, rather than a separate function, perhaps fromtext()? (or scantext(), or ? ) That's not a big deal, but it just seems like it's a bit hidden there, and scanning a string is a very different operation that interpreting that string as binary data. -Chris -- Christopher Barker, Ph.D. Oceanographer NOAA/OR&R/HAZMAT (206) 526-6959 voice 7600 Sand Point Way NE (206) 526-6329 fax Seattle, WA 98115 (206) 526-6317 main reception Chris.Barker at noaa.gov From david.huard at gmail.com Fri May 12 22:09:53 2006 From: david.huard at gmail.com (David Huard) Date: Fri May 12 22:09:53 2006 Subject: [Numpy-discussion] Weird slicing behavior Message-ID: <91cf711d0605120950x23ac031fv88c10fd28f868172@mail.gmail.com> Hi, I noticed that slicing outside bounds with lists returns the whole list : >>> a = range(5) >>> a[-7:] [0, 1, 2, 3, 4] while in numpy, array has a weird behavior >>> a = numpy.arange(5) >>> a[-7:] array([3, 4]) # Ok, it takes modulo(index, length) but >>> a[-11:] array([0, 1, 2, 3, 4]) # ??? Cheers, David -------------- next part -------------- An HTML attachment was scrubbed... URL: From mateusz at loskot.net Sat May 13 06:49:20 2006 From: mateusz at loskot.net (=?UTF-8?B?TWF0ZXVzeiDFgW9za290?=) Date: Sat May 13 06:49:20 2006 Subject: [Numpy-discussion] NumPy, Numeric or numarray or all of them? 
In-Reply-To: <44635ABE.5030200@ieee.org> References: <4463220B.20909@loskot.net> <44635ABE.5030200@ieee.org> Message-ID: <4465E108.9050703@loskot.net> Travis Oliphant wrote: > Mateusz ?oskot wrote: > >> >> I'm a developer contributing to GDAL project (http://www.gdal.org). >> GDAL is a core GeoSpatial Librarry that has had Python bindings >> for a while. Recently, we did get some reports from users that GDAL >> bindings do not work with NumPy package. > > > Most packages can be "ported" simply by replacing include > "Numeric/arrayobject.h" with include "numpy/arrayobject.h" and making > sure the include files are retrieved from the right place. NumPy was > designed to make porting from Numeric a breeze. Yes, portging to NumPy seems to be straightforward. >> This situation brings some questions we'd like to ask NumPy Dev >> Team: >> >> Is it fair to say we are unlikely to see Numeric releases for new >> Pythons in the future? >> > > Yes, that's fair. Nobody is maintaining Numeric. I understand. >> Can we consider NumPy as the only package in future? >> > > Yes. That's where development is currently active. Clear. So, I'm now working on moving GDAL bindings to Python to NumPy. >> What's the recommended way to go now? >> > > > I've ported tens of packages to NumPy from Numeric and have had very > little trouble. It is not difficult. Most of the time, simply > replacing the *.h file with the one from numpy works fine. It might > be a bit trickier to get your headers from the right place. The > directory is returned by > > import numpy.distutils > numpy.distutils.misc_util.get_numpy_include_dirs() > > Give it a try it's not very difficult. Thank you very much for valuable tips. I've just started to port it. I'll give some note about results. 
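A note on the include-directory lookup Travis quotes above: later numpy releases expose the same path directly as numpy.get_include() (the numpy.distutils spelling was eventually deprecated), and the directory it returns contains the arrayobject.h header the port needs:

```python
import os
import numpy as np

# later numpy versions expose the C header directory directly:
inc = np.get_include()
header = os.path.join(inc, "numpy", "arrayobject.h")
print(os.path.isfile(header))   # True
```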
Cheers -- Mateusz ?oskot http://mateusz.loskot.net From mateusz at loskot.net Sat May 13 07:21:55 2006 From: mateusz at loskot.net (=?UTF-8?B?TWF0ZXVzeiDFgW9za290?=) Date: Sat May 13 07:21:55 2006 Subject: [Numpy-discussion] NumPy, Numeric or numarray or all of them? In-Reply-To: <44636A10.1030909@noaa.gov> References: <4463220B.20909@loskot.net> <44636A10.1030909@noaa.gov> Message-ID: <4465E839.2000604@loskot.net> Christopher Barker wrote: > Mateusz ?oskot wrote: > >> I'm a developer contributing to GDAL project (http://www.gdal.org). >> GDAL is a core GeoSpatial Librarry that has had Python bindings >> for a while. Recently, we did get some reports from users that GDAL >> bindings do not work with NumPy package. > > > Speaking as a long time numpy (Numeric, etc.) user, and a new user of > GDAL, I had no idea GDAL worked with num* at all! at least not directly. Yes, it is :-) > In fact, I thought I was going to have to write that code myself. Where > do I find docs for this? I'm sure I've just missed something, but I'm > finding the docs a bit sparse. Do you mean docs for Python bindings of GDAL? AFAIK the only docs for Python oriented users is the "GDAL API Tutorial" (http://www.gdal.org/gdal_tutorial.html). Although, I'm not sure if there is or not manual reference for gdal.py module, so I'll take up this subject soon. In the meantime, I'd suggest to use epydoc tool to generate some manual - it won't be a complete reference but it can be usable. > On the other hand, I am also finding GDAL to be an excellent library, > and have so far gotten it to do what I need it to do. So kudos! Thanks! >> Can we consider NumPy as the only package in future? >> Simply, we are wondering which Python library we should develop for >> NumPy, Numeric or numarray to be most generally useful. > > > Speaking as a very interested observer, but not a developer of any of > the num* packages: > > numpy is the way of the future. As an observer, I think that's pretty > clear. 
On the other hand, it is still beta software, so dropping Numeric > just yet may not be appropriate. If you don't already support numarray, > there is no reason to do so now. > > It will do the python numerical community a world of good for all of us > to get back to a single array package. > > Note also that we hope some day to get a simple n-d array object into > the python standard library. When that happens, that object is all but > guaranteed to be compatible with numpy. There are two separate bindings to Python in GDAL: - native, called traditional - SWIG based, called New Generation Python So, we've decided to port the latter to NumPy. Thank you -- Mateusz ?oskot http://mateusz.loskot.net From amcmorl at gmail.com Sat May 13 20:03:59 2006 From: amcmorl at gmail.com (Angus McMorland) Date: Sat May 13 20:03:59 2006 Subject: [Numpy-discussion] Dot product threading? Message-ID: <44669A5B.8040700@gmail.com> Is there a way to specify which dimensions I want dot to work over? For example, if I have two arrays: In [78]:ma = array([4,5,6]) # shape = (1,3) In [79]:mb = ma.transpose() # shape = (3,1) In [80]:dot(mb,ma) Out[80]: array([[16, 20, 24], [20, 25, 30], [24, 30, 36]]) No problems there. Now I want to do that multiple times, threading over the first dimension of larger arrays: In [85]:mc = array([[[4,5,6]],[[7,8,9]]]) # shape = (2, 1, 3) In [86]:md = array([[[4],[5],[6]],[[7],[8],[9]]]) #shape = (2, 3, 1) and I want to calculate the two (3, 1) x (1, 3) dot products to get a shape = (2, 3, 3) result, so that res[i,...] == dot(md[i,...], mc[i,...]) >From my example above res[0,...] would be the same as dot(mb,ma) and res[1,...] would be In [108]:dot(md[1,...],mc[1,...]) Out[108]: array([[49, 56, 63], [56, 64, 72], [63, 72, 81]]) I could do it by explicitly looping over the first dimension (as suggested by my generic example), but it seems like there should be a better way by specifying to the dot function the dimensions over which it should be 'dotting'. 
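The explicit loop described above is easy to write down, and it doubles as a check for any vectorized alternative (later numpy versions broadcast this pattern directly via matmul):

```python
import numpy as np

mc = np.array([[[4, 5, 6]], [[7, 8, 9]]])   # shape (2, 1, 3)
md = mc.transpose(0, 2, 1)                  # shape (2, 3, 1)

# explicit loop over the first dimension
res = np.array([np.dot(md[i], mc[i]) for i in range(md.shape[0])])
print(res.shape)     # (2, 3, 3)
print(res[1, 0, 0])  # 49

# matmul broadcasts over the leading axes, giving the same result in one call
assert np.array_equal(res, np.matmul(md, mc))
```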
Cheers, Angus. -- Angus McMorland email a.mcmorland at auckland.ac.nz mobile +64-21-155-4906 PhD Student, Neurophysiology / Multiphoton & Confocal Imaging Physiology, University of Auckland phone +64-9-3737-599 x89707 Armourer, Auckland University Fencing Secretary, Fencing North Inc. From robert.kern at gmail.com Sat May 13 20:49:03 2006 From: robert.kern at gmail.com (Robert Kern) Date: Sat May 13 20:49:03 2006 Subject: [Numpy-discussion] Re: Dot product threading? In-Reply-To: <44669A5B.8040700@gmail.com> References: <44669A5B.8040700@gmail.com> Message-ID: Angus McMorland wrote: > Is there a way to specify which dimensions I want dot to work over? Use swapaxes() on the arrays to put the desired axes in the right places. In [2]: numpy.swapaxes? Type: function Base Class: String Form: Namespace: Interactive File: /Library/Frameworks/Python.framework/Versions/2.4/lib/python2.4/site-packages/numpy-0.9.7.2476-py2.4-macosx-10.4-ppc.egg/numpy/core/oldnumeric.py Definition: numpy.swapaxes(a, axis1, axis2) Docstring: swapaxes(a, axis1, axis2) returns array a with axis1 and axis2 interchanged. In [3]: numpy.dot? Type: builtin_function_or_method Base Class: String Form: Namespace: Interactive Docstring: matrixproduct(a,b) Returns the dot product of a and b for arrays of floating point types. Like the generic numpy equivalent the product sum is over the last dimension of a and the second-to-last dimension of b. NB: The first argument is not conjugated. -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." 
-- Umberto Eco From stephen.walton at csun.edu Sun May 14 18:20:47 2006 From: stephen.walton at csun.edu (Stephen Walton) Date: Sun May 14 18:20:47 2006 Subject: [Numpy-discussion] downsample vector with averaging In-Reply-To: <87vesg6i94.fsf@peds-pc311.bsd.uchicago.edu> References: <87vesg6i94.fsf@peds-pc311.bsd.uchicago.edu> Message-ID: <4467D47B.30907@csun.edu> John Hunter wrote: > >An open source version (GPL) for octave by Paul Kienzle, who is one of >the authors of the matplotlib quadmesh functionality and apparently a >python convertee, is here > > http://users.powernet.co.uk/kienzle/octave/matcompat/scripts/signal/decimate.m > >and it looks like someone has already translated this to python using >scipy.signal > > http://www.bigbold.com/snippets/posts/show/1209 > >Some variant of this would be a nice addition to scipy. > > > I agree, and I just found by experiment, putting the code from the second link into scipy was pretty easy (I put it into signaltools.py). But the Octave code is GPL and the Python translation is pretty much a straight copy of it. Plus I don't know how to get hold of the Python translator. So, how to proceed from here? From simon at arrowtheory.com Mon May 15 00:21:37 2006 From: simon at arrowtheory.com (Simon Burton) Date: Mon May 15 00:21:37 2006 Subject: [Numpy-discussion] numpy.float64(numpy.NaN)!=numpy.NaN Message-ID: <20060515170846.609be180.simon@arrowtheory.com> >>> numpy.float64(numpy.NaN)==numpy.NaN False Hmm. Bug or feature ? >>> numpy.__version__ 0.9.7.2502 Simon. -- Simon Burton, B.Sc. Licensed PO Box 8066 ANU Canberra 2601 Australia Ph. 
61 02 6249 6940 http://arrowtheory.com From a.u.r.e.l.i.a.n at gmx.net Mon May 15 00:51:20 2006 From: a.u.r.e.l.i.a.n at gmx.net (Johannes Loehnert) Date: Mon May 15 00:51:20 2006 Subject: [Numpy-discussion] numpy.float64(numpy.NaN)!=numpy.NaN In-Reply-To: <20060515170846.609be180.simon@arrowtheory.com> References: <20060515170846.609be180.simon@arrowtheory.com> Message-ID: <200605150936.32199.a.u.r.e.l.i.a.n@gmx.net> Hi, > >>> numpy.float64(numpy.NaN)==numpy.NaN > > False > According to the standards, two NaNs should never be equal (since nan represents an 'unknown' value). So the 'real' bug is this: ------------------------ In [1]: import numpy as N In [2]: N.nan == N.nan # wrong result! Out[2]: True In [3]: N.array([N.nan]) == N.array([N.nan]) # correct result Out[3]: array([False], dtype=bool) In [4]: N.__version__ Out[4]: '0.9.7.2484' ------------------------ Johannes From fullung at gmail.com Mon May 15 01:11:40 2006 From: fullung at gmail.com (Albert Strasheim) Date: Mon May 15 01:11:40 2006 Subject: [Numpy-discussion] numpy.float64(numpy.NaN)!=numpy.NaN In-Reply-To: <20060515170846.609be180.simon@arrowtheory.com> Message-ID: <004401c677f4$f17c5260$0502010a@dsp.sun.ac.za> Hey Simon I think this is a feature. NaN is never equal to anything else, including itself. MATLAB does the same thing: >> nan==nan ans = 0 >> inf==inf ans = 1 Regards, Albert > -----Original Message----- > From: numpy-discussion-admin at lists.sourceforge.net [mailto:numpy- > discussion-admin at lists.sourceforge.net] On Behalf Of Simon Burton > Sent: 15 May 2006 09:09 > To: numpy-discussion at lists.sourceforge.net > Subject: [Numpy-discussion] numpy.float64(numpy.NaN)!=numpy.NaN > > > >>> numpy.float64(numpy.NaN)==numpy.NaN > False > > Hmm. Bug or feature ? > > >>> numpy.__version__ > 0.9.7.2502 > > Simon. 
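A quick demonstration of the behaviour described here: a NaN compares unequal to everything, itself included, so detection goes through isnan() rather than ==:

```python
import math
import numpy as np

x = float("nan")
print(x == x)                               # False: NaN never equals anything
print(math.isnan(x))                        # True
print(bool(np.isnan(np.float64(np.nan))))   # True

# elementwise detection instead of ==
a = np.array([1.0, np.nan, 3.0])
print(np.isnan(a))                          # [False  True  False]
```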
From simon at arrowtheory.com Mon May 15 01:46:26 2006 From: simon at arrowtheory.com (Simon Burton) Date: Mon May 15 01:46:26 2006 Subject: [Numpy-discussion] numpy.float64(numpy.NaN)!=numpy.NaN In-Reply-To: <20060515170846.609be180.simon@arrowtheory.com> References: <20060515170846.609be180.simon@arrowtheory.com> Message-ID: <20060515181956.6330ea7b.simon@arrowtheory.com> On Mon, 15 May 2006 17:08:46 +1000 Simon Burton wrote: > > >>> numpy.float64(numpy.NaN)==numpy.NaN > False > > Hmm. Bug or feature ? I don't know, but I just found the isnan function. Along with some other curious functions: ['isnan', 'nan', 'nan_to_num', 'nanargmax', 'nanargmin', 'nanmax', 'nanmin', 'nansum'] :) Simon. -- Simon Burton, B.Sc. Licensed PO Box 8066 ANU Canberra 2601 Australia Ph. 61 02 6249 6940 http://arrowtheory.com From ndarray at mac.com Mon May 15 05:52:01 2006 From: ndarray at mac.com (Sasha) Date: Mon May 15 05:52:01 2006 Subject: [Numpy-discussion] numpy.float64(numpy.NaN)!=numpy.NaN In-Reply-To: <20060515170846.609be180.simon@arrowtheory.com> References: <20060515170846.609be180.simon@arrowtheory.com> Message-ID: It's a feature IEEE standard requires that NaNs are not equal to any floating point numbers including other NaNs. On 5/15/06, Simon Burton wrote: > > >>> numpy.float64(numpy.NaN)==numpy.NaN > False > > Hmm. Bug or feature ? > > >>> numpy.__version__ > 0.9.7.2502 > > Simon. > > -- > Simon Burton, B.Sc. > Licensed PO Box 8066 > ANU Canberra 2601 > Australia > Ph. 61 02 6249 6940 > http://arrowtheory.com > > > ------------------------------------------------------- > Using Tomcat but need to do more? Need to support web services, security? 
> Get stuff done quickly with pre-integrated technology to make your job easier > Download IBM WebSphere Application Server v.1.0.1 based on Apache Geronimo > http://sel.as-us.falkag.net/sel?cmd=lnk&kid=120709&bid=263057&dat=121642 > _______________________________________________ > Numpy-discussion mailing list > Numpy-discussion at lists.sourceforge.net > https://lists.sourceforge.net/lists/listinfo/numpy-discussion > From ndarray at mac.com Mon May 15 06:38:01 2006 From: ndarray at mac.com (Sasha) Date: Mon May 15 06:38:01 2006 Subject: [Numpy-discussion] numpy.float64(numpy.NaN)!=numpy.NaN In-Reply-To: References: <20060515170846.609be180.simon@arrowtheory.com> Message-ID: BTW, >>> from numpy import * >>> nan == nan False >>> from decimal import * >>> Decimal('NaN') == Decimal('NaN') False and finally (may not work on all platforms): >>> float('NaN') == float('NaN') False On 5/15/06, Sasha wrote: > It's a feature IEEE standard requires that NaNs are not equal to any > floating point numbers including other NaNs. > > On 5/15/06, Simon Burton wrote: > > > > >>> numpy.float64(numpy.NaN)==numpy.NaN > > False > > > > Hmm. Bug or feature ? > > > > >>> numpy.__version__ > > 0.9.7.2502 > > > > Simon. > > > > -- > > Simon Burton, B.Sc. > > Licensed PO Box 8066 > > ANU Canberra 2601 > > Australia > > Ph. 61 02 6249 6940 > > http://arrowtheory.com > > > > > > ------------------------------------------------------- > > Using Tomcat but need to do more? Need to support web services, security? 
> > Get stuff done quickly with pre-integrated technology to make your job easier > > Download IBM WebSphere Application Server v.1.0.1 based on Apache Geronimo > > http://sel.as-us.falkag.net/sel?cmd=lnk&kid=120709&bid=263057&dat=121642 > > _______________________________________________ > > Numpy-discussion mailing list > > Numpy-discussion at lists.sourceforge.net > > https://lists.sourceforge.net/lists/listinfo/numpy-discussion > > > From simon at arrowtheory.com Mon May 15 06:57:28 2006 From: simon at arrowtheory.com (Simon Burton) Date: Mon May 15 06:57:28 2006 Subject: [Numpy-discussion] numpy.float64(numpy.NaN)!=numpy.NaN In-Reply-To: References: <20060515170846.609be180.simon@arrowtheory.com> Message-ID: <20060515232824.302f0aab.simon@arrowtheory.com> On Mon, 15 May 2006 09:21:59 -0400 Sasha wrote: > > BTW, > > >>> from numpy import * > >>> nan == nan > False > > >>> from decimal import * > >>> Decimal('NaN') == Decimal('NaN') > False > > and finally (may not work on all platforms): > > >>> float('NaN') == float('NaN') > False Yes. It looks like the "isnan" function is what I need. I use NaN for missing data points, and I do masking on NaN, etc. Quite handy really. Simon. -- Simon Burton, B.Sc. Licensed PO Box 8066 ANU Canberra 2601 Australia Ph. 61 02 6249 6940 http://arrowtheory.com From st at sigmasquared.net Mon May 15 07:15:32 2006 From: st at sigmasquared.net (Stephan Tolksdorf) Date: Mon May 15 07:15:32 2006 Subject: [Numpy-discussion] Building on Windows In-Reply-To: <44625E03.9080105@ieee.org> References: <44625BE2.4010708@sigmasquared.net> <44625E03.9080105@ieee.org> Message-ID: <4468895A.8060501@sigmasquared.net> > So, go ahead and let us know what problems you are having. You are > correct that my main build platform is not Windows, but I think several > other people do use Windows regularly and we definitely want to support it. I attached a patch to ticket #114 that fixes various build issues. 
I've also updated http://www.scipy.org/Installing_SciPy/Windows with a new tutorial on how to build Numpy/Scipy on Windows. Any comments or suggestions are very welcome. Should we make ATLAS-optimized Windows binaries of Numpy available? Maybe starting with 0.9.8? I could provide (32bit) Athlon64 binaries. Stephan From karol.langner at kn.pl Mon May 15 13:51:47 2006 From: karol.langner at kn.pl (Karol Langner) Date: Mon May 15 13:51:47 2006 Subject: [Numpy-discussion] basearray / arraykit In-Reply-To: <44636613.7050201@ieee.org> References: <445F8F71.4070801@cox.net> <446362CC.6000004@noaa.gov> <44636613.7050201@ieee.org> Message-ID: <200605152237.51096.karol.langner@kn.pl> On Thursday 11 May 2006 18:28, Travis Oliphant wrote: > Christopher Barker wrote: > > Travis Oliphant wrote: > >> 1) Implement a base-array with no getitem method nor setitem method > >> at all > >> > >> 2) Implement a sub-class that supports only creation of data-types > >> corresponding to existing Python scalars (Boolean, Long-based > >> integers, Double-based floats, complex and object types). Then, all > >> array accesses should return the underlying Python objects. > >> This sub-class should also only do view-based indexing (basically > >> it's old Numeric behavior inside of NumPy). > >> > >> Item 1) should be pushed for inclusion in 2.6 and possibly even > >> something like 2) > > > > + sys.maxint > > > > Having even this very basic n-d object in the standard lib would be a > > MAJOR boon to python. > > I totally agree. I've been advertising this for at least 8 months, but > nobody is really willing to work on it (or fund it). At least we have > a summer student who is going to try and get Google summer-of-code money > for it. If you have any ability to bump up the ratings of summer of > code applications. Please consider bumping up his application. I am this student Travis mentioned. Right now I am intently looking through past archives and slowly digging into the basearray stuff. 
If I get the SoC money, I officially start working on this May 23rd, in close contact with everyone connected with numpy I hope. If I don't get it, I may still go ahead, but not so eagerly for sure, as then I'll need to take up a summer job. Those interested that don't have inside access to SoC ;) can find my application here: http://www.mml.ch.pwr.wroc.pl/langner/SoC-NDarray.txt Karol -- written by Karol Langner Mon May 15 22:29:43 CEST 2006 From wbaxter at gmail.com Mon May 15 18:10:22 2006 From: wbaxter at gmail.com (Bill Baxter) Date: Mon May 15 18:10:22 2006 Subject: [Numpy-discussion] Creating small arrays from strings and concatenating with empty arrays In-Reply-To: References: <44636533.3020201@noaa.gov> Message-ID: On 5/13/06, Charles R Harris wrote: > > > > On 5/11/06, Bill Baxter wrote: > > > > Ahh, I hadn't noticed the fromstring/fromfile methods. > > > > Hmm. That seems ok for making a row-at-a-time, but it doesn't support > > the full syntax of the matrix string constructor, which allows for things > > like > > > > >>> numpy.matrix("[1 2; 2 3;3 4]") > > matrix([[1, 2], > > [2, 3], > > [3, 4]]) > > > > You can reshape the array returned by fromstring, i.e., > > In [6]: fromstring("1 2 2 3 3 4", sep=" ").reshape(-1,2) > Out[6]: > array([[1, 2], > [2, 3], > [3, 4]]) > But if the string comes from someplace other than a literal right there in the code (like loaded from a file or passed in as an argument or something), I may not know the shape in advance. I'll just stick with the matrix constructor, since for my case, I do know the array dim is 2. --bill -------------- next part -------------- An HTML attachment was scrubbed...
URL: From simon at arrowtheory.com Mon May 15 20:55:11 2006 From: simon at arrowtheory.com (Simon Burton) Date: Mon May 15 20:55:11 2006 Subject: [Numpy-discussion] nonzero() behaviour has changed Message-ID: <20060516134106.551fe813.simon@arrowtheory.com> Sometime between 0.9.5 and 0.9.7.2502 the behaviour of nonzero changed: >>> a=numpy.array([1,1,0,1,1]) >>> a.nonzero() array([0, 1, 3, 4]) >>> now it returns a tuple: >>> a=numpy.array((0,1,1,0,0)) >>> a.nonzero() (array([1, 2]),) This is rather unpleasant for me. For example, my colleague uses OSX and finds only numpy 0.9.5 under darwinports. :( Simon. -- Simon Burton, B.Sc. Licensed PO Box 8066 ANU Canberra 2601 Australia Ph. 61 02 6249 6940 http://arrowtheory.com From oliphant.travis at ieee.org Mon May 15 21:28:32 2006 From: oliphant.travis at ieee.org (Travis Oliphant) Date: Mon May 15 21:28:32 2006 Subject: [Numpy-discussion] nonzero() behaviour has changed In-Reply-To: <20060516134106.551fe813.simon@arrowtheory.com> References: <20060516134106.551fe813.simon@arrowtheory.com> Message-ID: <44695185.9090700@ieee.org> Simon Burton wrote: > Sometime between 0.9.5 and 0.9.7.2502 the behaviour of nonzero changed: > > >>>> a=numpy.array([1,1,0,1,1]) >>>> a.nonzero() >>>> > array([0, 1, 3, 4]) > > > now it returns a tuple: > > >>>> a=numpy.array((0,1,1,0,0)) >>>> a.nonzero() >>>> > (array([1, 2]),) > > This is rather unpleasant for me. For example, my colleague > uses OSX and finds only numpy 0.9.5 under darwinports. > > Chris Fonnesbeck releases quite up-to-date binary releases of NumPy and SciPy for Mac OSX. An alternative solution is to use the functional form: nonzero(a) works the same as a.nonzero() did before for backwards compatibility. The behavior of a.nonzero() was changed for compatibility with numarray. 
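[Editorial note: the tuple-returning behaviour Travis describes here is the one NumPy kept. A minimal sketch, run against a current NumPy release, of why the tuple form is convenient:]

```python
import numpy as np

a = np.array([1, 1, 0, 1, 1])

# The method returns a tuple holding one index array per dimension
# (the numarray-style behaviour Travis describes).
idx = a.nonzero()
print(idx)        # (array([0, 1, 3, 4]),)

# The tuple form plugs straight back into fancy indexing:
print(a[idx])     # [1 1 1 1]

# For a 1-d array, the old flat index array is just the tuple's only element:
print(idx[0])     # [0 1 3 4]
```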
-Travis From simon at arrowtheory.com Mon May 15 22:46:20 2006 From: simon at arrowtheory.com (Simon Burton) Date: Mon May 15 22:46:20 2006 Subject: [Numpy-discussion] nonzero() behaviour has changed In-Reply-To: <44695185.9090700@ieee.org> References: <20060516134106.551fe813.simon@arrowtheory.com> <44695185.9090700@ieee.org> Message-ID: <20060516153639.23ac52fa.simon@arrowtheory.com> On Mon, 15 May 2006 22:13:57 -0600 Travis Oliphant wrote: > > Chris Fonnesbeck releases quite up-to-date binary releases of NumPy and > SciPy for Mac OSX. Yes: http://homepage.mac.com/fonnesbeck/mac/index.html > > An alternative solution is to use the functional form: > > nonzero(a) works the same as a.nonzero() did before for backwards > compatibility. > > The behavior of a.nonzero() was changed for compatibility with numarray. Is this a general strategy employed by numpy ? Ie. functions have old semantics, methods have numarray semantics ? Scary. Simon. -- Simon Burton, B.Sc. Licensed PO Box 8066 ANU Canberra 2601 Australia Ph. 61 02 6249 6940 http://arrowtheory.com From cadot at science-and-technology.nl Tue May 16 05:58:07 2006 From: cadot at science-and-technology.nl (Sidney Cadot) Date: Tue May 16 05:58:07 2006 Subject: [Numpy-discussion] Argument of complex number array? Message-ID: <1BDFF588-F29A-443C-84D7-D8773E5C4779@science-and-technology.nl> Hi all, I am looking for a function to calculate the argument (i.e., the 'phase') from a numarray containing complex numbers. A bit to my surprise, I don't see this listed under the ufunc's. Now of course I can do something like arg = arctan2(z.imag, z.real) ... But I would have hoped for a direct function that does this. Any thoughts? (Perhaps I am missing something?) Cheerio, Sidney From pau.gargallo at gmail.com Tue May 16 06:22:44 2006 From: pau.gargallo at gmail.com (Pau Gargallo) Date: Tue May 16 06:22:44 2006 Subject: [Numpy-discussion] bug in argmax (?) 
Message-ID: <6ef8f3380605150721m5dd6f637ne428ef98a1a16512@mail.gmail.com> hi all, argmax gives me some errors for arrays with more than 2 dimensions: >>> import numpy >>> numpy.__version__ '0.9.7.2503' >>> x = numpy.zeros((2,3,4)) >>> x.argmax(0) Traceback (most recent call last): File "", line 1, in ? ValueError: bad axis2 argument to swapaxes >>> x.argmax(1) Traceback (most recent call last): File "", line 1, in ? ValueError: bad axis2 argument to swapaxes >>> x.argmax(2) array([[0, 0, 0], [0, 0, 0]]) does this happen to anyone else? pau From st at sigmasquared.net Tue May 16 06:41:11 2006 From: st at sigmasquared.net (Stephan Tolksdorf) Date: Tue May 16 06:41:11 2006 Subject: [Numpy-discussion] Sourceforge delaying emails to list? Message-ID: <4469D615.3090402@sigmasquared.net> Hi all, am I the only one who is sometimes (particularly today) receiving emails from the numpy-discussion list up to one day after they have been sent? Or is there a problem with sf.net? Stephan From joris at ster.kuleuven.ac.be Tue May 16 06:41:33 2006 From: joris at ster.kuleuven.ac.be (joris at ster.kuleuven.ac.be) Date: Tue May 16 06:41:33 2006 Subject: [Numpy-discussion] numpy documentation Message-ID: <1147786827.4469d64b6b0cf@webmail.ster.kuleuven.be> Hi, I've started a numpy documentation wikipage with a list of examples illustrating the use of each numpy function. Although far from complete, you can have a look at what I currently have: http://scipy.org/JorisDeRidder I would like to make the example list more visible for the numpy community, though. Any suggestions? And, needless to say, if anyone likes to contribute, please jump in! 
Cheers, Joris Disclaimer: http://www.kuleuven.be/cwis/email_disclaimer.htm From pau.gargallo at gmail.com Tue May 16 06:56:22 2006 From: pau.gargallo at gmail.com (Pau Gargallo) Date: Tue May 16 06:56:22 2006 Subject: [Numpy-discussion] Bug in ndarray.argmax In-Reply-To: <44561C11.3000601@cmp.uea.ac.uk> References: <44561C11.3000601@cmp.uea.ac.uk> Message-ID: <6ef8f3380605160655p5aa35228r778b5d240f6ed760@mail.gmail.com> On 5/1/06, Pierre Barbier de Reuille wrote: > Hello, > > I noticed a bug in ndarray.argmax which prevents getting the argmax > from any axis but the last one. > I join a patch to correct this. > Also, here is a small python code to test the behaviour of argmax I > implemented : > > ==8<====8<====8<====8<====8<====8<====8<====8<===8<=== > > from numpy import array, random, all > > a = random.normal( 0, 1, ( 4,5,6,7,8 ) ) > for i in xrange( a.ndim ): > amax = a.max( i ) > aargmax = a.argmax( i ) > axes = range( a.ndim ) > axes.remove( i ) > assert all( amax == aargmax.choose( *a.transpose( i, *axes ) ) ) > > ==8<====8<====8<====8<====8<====8<====8<====8<===8<=== > > Pierre > > > diff numpy-0.9.6/numpy/core/src/multiarraymodule.c numpy-0.9.6.mod/numpy/core/src/multiarraymodule.c > 1952a1953,1955 > > If orign > ap->nd, then we cannot "swap it back" > > as the dimension does not exist anymore. It means > > the axis must be put back at the end of the array. 
> 1956c1959,1979 > < (op) = (PyAO *)PyArray_SwapAxes((ap), axis, orign); \ > --- > > int nb_dims = (ap)->nd; \ > > if (orign > nb_dims-1 ) { \ > > PyArray_Dims dims; \ > > int i; \ > > dims.ptr = ( intp* )malloc( sizeof( intp )*nb_dims );\ > > dims.len = nb_dims; \ > > for(i = 0 ; i < axis ; ++i) \ > > { \ > > dims.ptr[i] = i; \ > > } \ > > for(i = axis ; i < nb_dims-1 ; ++i) \ > > { \ > > dims.ptr[i] = i+1; \ > > } \ > > dims.ptr[nb_dims-1] = axis; \ > > (op) = (PyAO *)PyArray_Transpose((ap), &dims ); \ > > } \ > > else \ > > { \ > > (op) = (PyAO *)PyArray_SwapAxes((ap), axis, orign); \ > > } \ > > > The bug seems to be still there in the current svn version, so I filled out a ticket for this. Is the first time i do such a thing, so someone competent should _please_ take a look at it. Thanks, and sorry in advance if i did something wrong. pau From wbaxter at gmail.com Tue May 16 07:12:02 2006 From: wbaxter at gmail.com (Bill Baxter) Date: Tue May 16 07:12:02 2006 Subject: [Numpy-discussion] Who uses matrix? 
In-Reply-To: <4464A3F9.3010204@cox.net> References: <44623207.2020306@noaa.gov> <1C3F540D-B712-480F-A8AF-031DDCA525A3@tamu.edu> <4464A3F9.3010204@cox.net> Message-ID: On 5/13/06, Tim Hochberg wrote: > > Bill Baxter wrote: > > > > > +1 on a .T shortcut for arrays. > > +1 on a .H shortcut for arrays, too. (Instead of .conj().transpose()) > > -1. > > These are really only meaningul for 2D arrays. Without the axis keyword > they generally do the wrong things for arrays of other dimensionality > (there I usually, but not always, want to transpose the first two or > last two axes). In addition, ndarray is already overstuffed with methods > and attributes, let's not add any more without careful consideration. What am I missing here? There's already a .transpose() method on array. >From my quick little test it doesn't seem like the argument is useful: >>> a = num.rand(2,2) >>> a array([[ 0.96685836, 0.55643033], [ 0.86387107, 0.39331451]]) >>> a.transpose(1) array([ 0.96685836, 0.55643033]) >>> a array([[ 0.96685836, 0.55643033], [ 0.86387107, 0.39331451]]) >>> a.transpose(0) array([ 0.96685836, 0.86387107]) >>> a.transpose() array([[ 0.96685836, 0.86387107], [ 0.55643033, 0.39331451]]) (Python 2.4 / Windows XP) But maybe I'm using it wrong. The docstring isn't much help: --------------------------- >>> help(a.transpose) Help on built-in function transpose: transpose(...) m.transpose() --------------------------- Assuming I'm missing something about transpose(), that still doesn't rule out a shorter functional form, like a.T() or a.t(). The T(a) that you suggest is short, but it's just not the order people are used to seeing things in their math. Besides, having a global function with a name like T is just asking for trouble. And defining it in every function that uses it would get tedious. Even if the .T shortcut were added as an attribute and not a function, you could still use .transpose() for N-d arrays when the default two axes weren't the ones you wanted to swap. 
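[Editorial note: the axes argument that .transpose() expects is a full permutation of the array's axes, which is what the single-integer calls above were mangling. A minimal sketch of the intended usage, run against a current NumPy, where a wrong-length permutation raises a ValueError instead of silently dropping data:]

```python
import numpy as np

a = np.arange(24).reshape(2, 3, 4)

# No argument: reverse all axes (the N-d generalisation of the 2-D transpose).
print(a.transpose().shape)         # (4, 3, 2)

# A full permutation of the axes: here, swap only the last two.
print(a.transpose(0, 2, 1).shape)  # (2, 4, 3)

# On a 2-D array the no-argument form is the familiar matrix transpose.
m = np.array([[1.0, 2.0],
              [3.0, 4.0]])
print(m.transpose())               # [[1. 3.]
                                   #  [2. 4.]]
```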
Yes, 2-D *is* a special case of array, but -- correct me if I'm wrong -- it's a very common special case. Matlab's transpose operator, for instance, just raises an error if it's used on anything other than a 1D- or 2D- array. There's some other way to shuffle axes around for N-d arrays in matlab, but I forget what. Not saying that the way Matlab does it is right, but my own experience reading and writing code of various sorts is that 2D stuff is far more common than arbitrary N-d. But for some reason it seems like most people active on this mailing list see things as just the opposite. Perhaps it's just my little niche of the world that is mostly 2D. The other thing is that I suspect you don't get so many transposes when doing arbitrary N-d array stuff so there's not as much need for a shortcut. But again, not much experience with typical N-d array usages here. > > And some briefer way to make a new axis than 'numpy.newaxis' would be > > nice too (I'm trying to keep my global namespace clean these days). > > --- Whoa I just noticed that a[None,:] has the same effect as > > 'newaxis'. Is that a blessed way to do things? > > numpy.newaxis *is* None. I don't think that I'd spell it that way since > None doesn't really have the same connotations as newaxis, so I think > I'll stick with np.newaxis, which is how I spell these things. Well, to a newbie, none of the notation used for indexing has much of any connotation. It's all just arbitrary symbols. ":" means the whole range? 2:10 means a range from 2 to 9? Sure, most of those are familiar to Python users (even '...'?) but even a heavy python user would be puzzled by something like r_[1:2:3j]. Or reshape(-1,2). The j is a convention, the -1 is a convention. NewAxis seems common enough to me that it's worth just learning None as another numpy convention. As long as "numpy.newaxis *is* None", as you say, and not "is *currently* None, but subject to change without notice", then I think I'd rather use None. 
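[Editorial note: the a[None,:] equivalence discussed here is easy to verify. A minimal sketch, assuming a current NumPy, where numpy.newaxis is still defined as None:]

```python
import numpy as np

# The two spellings name the very same object.
assert np.newaxis is None

a = np.arange(3)              # shape (3,)

row = a[np.newaxis, :]        # shape (1, 3)
col = a[:, None]              # shape (3, 1), the same trick spelled with None

print(row.shape, col.shape)   # (1, 3) (3, 1)
```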
It has the feel of being something that's a fundamental part of the language. The first few times I saw indexing with numpy.newaxis it made no sense to me. How can you index with a new axis? What's the new axis'th element of an array? "None" says to me the index we're selecting is "None" as in "None of the above", i.e. we're not taking from the elements that are there, or not using up any of our current axes on this index. Also when you see None, it's clear to a Python user that there's some trickery going on, but when you see a[NewAxis,:] (as it was the first time I saw it) it's not clear if NewAxis is some numeric value defined in the code you're reading or by Numpy or what. For whatever reason None seems more logical and symmetrical to me than numpy.newaxis. Plus it seems that documentation can't be attached to numpy.newaxis because of it being None, which I recall also confused me at first. "help(numpy.newaxis)" gives you a welcome to Python rather than information about using numpy.newaxis. Regards, --Bill -------------- next part -------------- An HTML attachment was scrubbed... URL: From skip at pobox.com Tue May 16 07:20:38 2006 From: skip at pobox.com (skip at pobox.com) Date: Tue May 16 07:20:38 2006 Subject: [Numpy-discussion] Sourceforge delaying emails to list? In-Reply-To: <4469D615.3090402@sigmasquared.net> References: <4469D615.3090402@sigmasquared.net> Message-ID: <17513.57172.453964.424726@montanaro.dyndns.org> Stephan> am I the only one who is sometimes (particularly today) Stephan> receiving emails from the numpy-discussion list up to one day Stephan> after they have been sent? Or is there a problem with sf.net? I suspect it's just the status quo. sf.net is rarely "speedy". Skip From tim.hochberg at cox.net Tue May 16 08:29:09 2006 From: tim.hochberg at cox.net (Tim Hochberg) Date: Tue May 16 08:29:09 2006 Subject: [Numpy-discussion] Who uses matrix? 
In-Reply-To: References: <44623207.2020306@noaa.gov> <1C3F540D-B712-480F-A8AF-031DDCA525A3@tamu.edu> <4464A3F9.3010204@cox.net> Message-ID: <4469EF9A.50202@cox.net> Bill Baxter wrote: > On 5/13/06, *Tim Hochberg* > wrote: > > Bill Baxter wrote: > > > > > +1 on a .T shortcut for arrays. > > +1 on a .H shortcut for arrays, too. (Instead of > .conj().transpose()) > > -1. > > These are really only meaningul for 2D arrays. Without the axis > keyword > they generally do the wrong things for arrays of other dimensionality > (there I usually, but not always, want to transpose the first two or > last two axes). In addition, ndarray is already overstuffed with > methods > and attributes, let's not add any more without careful consideration. > > > What am I missing here? There's already a .transpose() method on array. > From my quick little test it doesn't seem like the argument is useful: > >>> a = num.rand(2,2) > >>> a > array([[ 0.96685836, 0.55643033], > [ 0.86387107, 0.39331451]]) > >>> a.transpose(1) > array([ 0.96685836, 0.55643033]) > >>> a > array([[ 0.96685836, 0.55643033], > [ 0.86387107, 0.39331451]]) > >>> a.transpose(0) > array([ 0.96685836, 0.86387107]) > >>> a.transpose() > array([[ 0.96685836, 0.86387107], > [ 0.55643033 , 0.39331451]]) > > (Python 2.4 / Windows XP) > > But maybe I'm using it wrong. The docstring isn't much help: > --------------------------- > >>> help(a.transpose) > Help on built-in function transpose: > > transpose(...) > m.transpose() > --------------------------- No, the docstring does not appear to be useful. I'm not sure what's happening to your argument there, something bogus I assume; you are supposed to pass a sequence, which becomes the new axis order. For example. 
>>> import numpy >>> a = numpy.arange(6).reshape([3,2,1]) >>> a.shape (3, 2, 1) >>> a.copy().transpose().shape # By default, the axes are just reversed (1, 2, 3) >>> a.copy().transpose([0,2,1]).shape # Transpose last two axes (3, 1, 2) >>> a.copy().transpose([1,0,2]).shape # Transpose first two axes (2, 3, 1) Most of the time, when I'm using 3D arrays, I use one of the last two versions of transpose. Back to passing a single number. Looking at array_transpose, it appears that a.transpose(1) and a.transpose([1]) are equivalent. The problem is that transpose is accepting sequences of lengths other than len(a.shape). The results appear bogus: chunks of the array actually disappear as you illustrate. It looks like this can be fixed by just adding a check in PyArray_Transpose that permute->len is the same as ap->nd. I won't have time to get to this today; if possible, could you enter a ticket on this so it doesn't fall through the cracks? > > Assuming I'm missing something about transpose(), that still doesn't > rule out a shorter functional form, like a.T() or a.t(). > > The T(a) that you suggest is short, but it's just not the order people > are used to seeing things in their math. Besides, having a global > function with a name like T is just asking for trouble. And defining > it in every function that uses it would get tedious. > > Even if the .T shortcut were added as an attribute and not a function, > you could still use .transpose() for N-d arrays when the default two > axes weren't the ones you wanted to swap. Yes, 2-D *is* a special > case of array, but -- correct me if I'm wrong -- it's a very common > special case. Not in my experience. I can't claim that this is typical for everyone, but my usage is most often of 3D arrays that represent arrays of matrices. Also common is representing an array of 2x2 (or 4x4) matrices as [[X00, X01], [X10, X11]], where the various Xs are (big) 1D arrays or scalars. 
This is because I often know that one of the Xs is either all ones or all zeros and by storing them as scalar values I can save space, and by eliding subsequent operations by special casing one and zero, time. My usage of vanilla 2D arrays is relatively limited and occurs mostly where I interface with the outside world. > Matlab's transpose operator, for instance, just raises an error if > it's used on anything other than a 1D- or 2D- array. There's some > other way to shuffle axes around for N-d arrays in matlab, but I > forget what. Not saying that the way Matlab does it is right, but my > own experience reading and writing code of various sorts is that 2D > stuff is far more common than arbitrary N-d. But for some reason it > seems like most people active on this mailing list see things as just > the opposite. Perhaps it's just my little niche of the world that is > mostly 2D. The other thing is that I suspect you don't get so many > transposes when doing arbitrary N-d array stuff so there's not as much > need for a shortcut. But again, not much experience with typical N-d > array usages here. It's hard to say. People coming from Matlab etc tend to see things in terms of 2D. Some of that may be just that they work in different problem domains than, for instance, I do. Image processing for example seems to be mostly 2D. Although in some cases you might actually want to work on stacks of images, in which case you're back to the 3D regime. Part of it may also be that once you work in an ND array language for a while you see ND solutions to some problems that previously you only saw 2D solutions for. > > > > > And some briefer way to make a new axis than 'numpy.newaxis' > would be > > nice too (I'm trying to keep my global namespace clean these days). > > --- Whoa I just noticed that a[None,:] has the same effect as > > 'newaxis'. Is that a blessed way to do things? > > numpy.newaxis *is* None. 
I don't think that I'd spell it that way > since > None doesn't really have the same conotations as newaxis, so I think > I'll stick with np.newaxis, which is how I spell these things. > > > Well, to a newbie, none of the notation used for indexing has much of > any connotation. It's all just arbitrary symbols. ":" means the > whole range? 2:10 means a range from 2 to 9? Sure, most of those are > familiar to Python users (even '...'?) but even a heavy python user > would be puzzled by something like r_[1:2:3j]. Or reshape(-1,2). The > j is a convention, the -1 is a convention. NewAxis seems common > enough to me that it's worth just learning None as another numpy > convention. > > As long as "numpy.newaxis *is* None", as you say, and not "is > *currently* None, but subject to change without notice" , It's been None since the beginning, so I don't think it's likely to change. Not being in charge, I can't guarantee that, but it would likely be a huge pain to do so I don't see it happening. > then I think I'd rather use None. It has the feel of being something > that's a fundamental part of the language. The first few times I saw > indexing with numpy.newaxis it made no sense to me. How can you index > with a new axis? What's the new axis'th element of an array? "None" > says to me the index we're selecting is "None" as in "None of the > above", i.e. we're not taking from the elements that are there, or not > using up any of our current axes on this index. Also when you see > None, it's clear to a Python user that there's some trickery going on, > but when you see a[NewAxis,:] (as it was the first time I saw it) > it's not clear if NewAxis is some numeric value defined in the code > you're reading or by Numpy or what. For whatever reason None seems > more logical and symmetrical to me than numpy.newaxis. Plus it seems > that documentation can't be attached to numpy.newaxis because of it > being None, which I recall also confused me at first. 
> "help(numpy.newaxis)" gives you a welcome to Python rather than > information about using numpy.newaxis. In abstract, I don't see any problem with that, but assuming that you're sharing code with others, everyone's likely to be happier if you spell newaxis as newaxis, just for consistency's sake. On a related note, I think newaxis is probably overused, at least by me: I just grepped my code for newaxis, and I'd say at least half the uses of newaxis would have been clearer as reshape. -tim From aisaac at american.edu Tue May 16 12:09:02 2006 From: aisaac at american.edu (Alan G Isaac) Date: Tue May 16 12:09:02 2006 Subject: [Numpy-discussion] nonzero() behaviour has changed In-Reply-To: <44695185.9090700@ieee.org> References: <20060516134106.551fe813.simon@arrowtheory.com><44695185.9090700@ieee.org> Message-ID: 1. I hope for an array 'a' that nonzero(a) and a.nonzero() will produce the same result, whatever convention is chosen. For 1d arrays only the numarray behavior can be consistent with nd arrays (which Numeric's 'nonzero' did not handle). But ... 2. How are people using this? I trust that the numarray behavior was well considered, but I would have expected coordinates to be grouped rather than spread across the arrays in the tuple. Thank you for any insight, Alan Isaac From simon at arrowtheory.com Tue May 16 17:34:10 2006 From: simon at arrowtheory.com (Simon Burton) Date: Tue May 16 17:34:10 2006 Subject: [Numpy-discussion] Sourceforge delaying emails to list? In-Reply-To: <4469D615.3090402@sigmasquared.net> References: <4469D615.3090402@sigmasquared.net> Message-ID: <20060517103347.56fd9194.simon@arrowtheory.com> On Tue, 16 May 2006 15:39:33 +0200 Stephan Tolksdorf wrote: > > Hi all, > > am I the only one who is sometimes (particularly today) receiving emails > from the numpy-discussion list up to one day after they have been sent? > Or is there a problem with sf.net? Yes, me too. It's very confusing, and rather random. Simon. -- Simon Burton, B.Sc. 
Licensed PO Box 8066 ANU Canberra 2601 Australia Ph. 61 02 6249 6940 http://arrowtheory.com From simon at arrowtheory.com Tue May 16 18:18:05 2006 From: simon at arrowtheory.com (Simon Burton) Date: Tue May 16 18:18:05 2006 Subject: [Numpy-discussion] nonzero() behaviour has changed In-Reply-To: References: <20060516134106.551fe813.simon@arrowtheory.com> <44695185.9090700@ieee.org> Message-ID: <20060517111742.421bdf3b.simon@arrowtheory.com> On Tue, 16 May 2006 15:15:23 -0400 Alan G Isaac wrote: > 2. How are people using this? I trust that the numarray > behavior was well considered, but I would have expected > coordinates to be grouped rather than spread across > the arrays in the tuple. Yes, this strikes me as bizarre. How about we make a new function, eg. argwhere, that returns an array of indices ? argwhere( array( [[0,1],[0,1]] ) ) -> [[0,1],[1,1]] Simon. -- Simon Burton, B.Sc. Licensed PO Box 8066 ANU Canberra 2601 Australia Ph. 61 02 6249 6940 http://arrowtheory.com From tim.leslie at gmail.com Tue May 16 21:24:01 2006 From: tim.leslie at gmail.com (Tim Leslie) Date: Tue May 16 21:24:01 2006 Subject: [Numpy-discussion] Argument of complex number array? In-Reply-To: <1BDFF588-F29A-443C-84D7-D8773E5C4779@science-and-technology.nl> References: <1BDFF588-F29A-443C-84D7-D8773E5C4779@science-and-technology.nl> Message-ID: On 5/16/06, Sidney Cadot wrote: > > Hi all, > > I am looking for a function to calculate the argument (i.e., the > 'phase') from a numarray containing complex numbers. A bit to my > surprise, I don't see this listed under the ufunc's. > > Now of course I can do something like > > arg = arctan2(z.imag, z.real) > > ... But I would have hoped for a direct function that does this. > > Any thoughts? (Perhaps I am missing something?) angle(z, deg=0) Return the angle of the complex argument z. 
>>> z = complex(1, 1) >>> angle(z)==pi/4 True Cheers, Timl Cheerio, Sidney -------------- next part -------------- An HTML attachment was scrubbed... URL: From faltet at carabos.com Wed May 17 01:38:02 2006 From: faltet at carabos.com (Francesc Altet) Date: Wed May 17 01:38:02 2006 Subject: [Numpy-discussion] nonzero() behaviour has changed In-Reply-To: <20060517111742.421bdf3b.simon@arrowtheory.com> References: <20060516134106.551fe813.simon@arrowtheory.com> <20060517111742.421bdf3b.simon@arrowtheory.com> Message-ID: <200605171036.53706.faltet@carabos.com> On Wednesday 17 May 2006 03:17, Simon Burton wrote: > On Tue, 16 May 2006 15:15:23 -0400 > > Alan G Isaac wrote: > > 2. How are people using this? I trust that the numarray > > behavior was well considered, but I would have expected > > coordinates to be grouped rather than spread across > > the arrays in the tuple. > > Yes, this strikes me as bizarre. > How about we make a new function, eg. argwhere, that > returns an array of indices ? > > argwhere( array( [[0,1],[0,1]] ) ) -> [[0,1],[1,1]] +1 That could be quite useful. -- >0,0< Francesc Altet http://www.carabos.com/ V V Cárabos Coop. V. Enjoy Data "-" From cadot at science-and-technology.nl Wed May 17 01:45:03 2006 From: cadot at science-and-technology.nl (Sidney Cadot) Date: Wed May 17 01:45:03 2006 Subject: [Numpy-discussion] Argument of complex number array? 
In-Reply-To: References: <1BDFF588-F29A-443C-84D7-D8773E5C4779@science-and-technology.nl> Message-ID: On May 17, 2006, at 6:23, Tim Leslie wrote: > angle(z, deg=0) Great, I missed that completely. Thanks! Sidney From joris at ster.kuleuven.ac.be Wed May 17 02:54:12 2006 From: joris at ster.kuleuven.ac.be (joris at ster.kuleuven.ac.be) Date: Wed May 17 02:54:12 2006 Subject: [Numpy-discussion] numpy docs Message-ID: <1147859593.446af2893a9a8@webmail.ster.kuleuven.be> On Tuesday 16 May 2006 16:32, you wrote: > That's great! > I think it would be nice if it could somehow become a gateway for > docstrings. Are you only interested in examples? I'm not sure what your > intentions are, but it would be nice if there were a Wiki page like yours > where people could contribute docstring fixes and then those fixes would > eventually find their way into the source with the help of someone with CVS > write access. Usage examples like the ones on your page are needed in the > docstrings too. My fear though is that with a wiki page, there's no real > incentive to be concise. People tend to just add more rather than erasing > something someone else wrote. I don't really want to read a flame war in my > docstrings about whether the sum() method is superior to the sum() function, > etc. > > --bill +1 on the docstrings. At the moment I will put my efforts in making the example list more complete, though. It's actually a great way to learn about the numpy features. I recommend numarray users to simply browse through the example list and pick up the new numpy features by reading some of the examples. But be aware that there are actually many more as the list is not yet complete. And don't hesitate to contribute! 
;-) Cheers, Joris Disclaimer: http://www.kuleuven.be/cwis/email_disclaimer.htm From jg307 at cam.ac.uk Wed May 17 04:20:06 2006 From: jg307 at cam.ac.uk (James Graham) Date: Wed May 17 04:20:06 2006 Subject: [Numpy-discussion] Distutils/NAG compiler problems Message-ID: <446B06B8.30003@cam.ac.uk> I have been experiencing difficulty with the final linking step when using f2py and the NAG Fortran 95 compiler, using the latest release version of numpy (0.9.6). I believe this is caused by a typo in the options being passed to the compiler (at least, making this change fixes the problem): --- /usr/lib/python2.3/site-packages/numpy/distutils/fcompiler/nag.py 2006-01-06 21:29:40.000000000 +0000 +++ /scratch/jgraham/nag.py 2006-05-17 10:46:38.000000000 +0100 @@ -22,7 +22,7 @@ def get_flags_linker_so(self): if sys.platform=='darwin': return ['-unsharedf95','-Wl,-bundle,-flat_namespace,-undefined,suppress'] - return ["-Wl,shared"] + return ["-Wl,-shared"] def get_flags_opt(self): return ['-O4'] def get_flags_arch(self): (that may be incorrectly wrapped) -- "You see stars that clear have been dead for years But the idea just lives on..." -- Bright Eyes From mateusz at loskot.net Wed May 17 08:35:09 2006 From: mateusz at loskot.net (Mateusz Loskot) Date: Wed May 17 08:35:09 2006 Subject: [Numpy-discussion] NumPy, Numeric or numarray or all of them? In-Reply-To: <446A0A86.3030400@noaa.gov> References: <4463220B.20909@loskot.net> <44636A10.1030909@noaa.gov> <4465E839.2000604@loskot.net> <446A0A86.3030400@noaa.gov> Message-ID: <446B4278.50401@loskot.net> Christopher Barker wrote: > Mateusz Łoskot wrote: >> Christopher Barker wrote: >> >>> Speaking as a long time numpy (Numeric, etc.) user, and a new >>> user of GDAL, I had no idea GDAL worked with num* at all! at >>> least not directly. >> >> >> Yes, it is :-) >> >>> In fact, I thought I was going to have to write that code myself. >>> Where do I find docs for this? 
I'm sure I've just missed >> something, but I'm finding the docs a bit sparse. >> >> >> Do you mean docs for Python bindings of GDAL? > > > I meant docs for the num* part. Yes, agreed. >> AFAIK the only docs for Python oriented users is the "GDAL API >> Tutorial" (http://www.gdal.org/gdal_tutorial.html). > > > That's all I've found. And there was no indication of what datatypes > were being returned (or accepted). So far, all I'm doing is reading > images, and it looks to me like ReadRaster() is returning a string, > for instance. I'd love to have it return a numpy array. Now, it's using Numeric package. NumPy will be used soon. >> In the meantime, I'd suggest to use epydoc tool to generate some >> manual - it won't be a complete reference but it can be usable. > > > Good idea. Someone should get around to doing that, and post it > somewhere. If I do it, I'll let you know. I'm going to include epydoc configuration file in GDAL source tree. Such file is similar to doxygen config and can be used to generate html/pdf documentations with single command. I think including generated docs is not necessary. >> There are two separate bindings to Python in GDAL: - native, called >> traditional - SWIG based, called New Generation Python > > > How do I know which ones I'm using? When you're building GDAL you can use appropriate options for the ./configure script. > Were the "tradition" ones done by hand? Yes, traditional pymod. Cheers -- Mateusz Łoskot http://mateusz.loskot.net From cookedm at physics.mcmaster.ca Wed May 17 13:53:01 2006 From: cookedm at physics.mcmaster.ca (David M. Cooke) Date: Wed May 17 13:53:01 2006 Subject: [Numpy-discussion] Getting 0.9.8 out this week In-Reply-To: (David M. Cooke's message of "Thu, 11 May 2006 15:47:51 -0400") References: <4461131B.1050907@ieee.org> Message-ID: cookedm at physics.mcmaster.ca (David M. Cooke) writes: > Travis Oliphant writes: > >> I'd like to get 0.9.8 of NumPy released by the end of the week. 
>> There are a few Trac tickets that need to be resolved by then. >> >> In particular #83 suggests returning scalars instead of 1x1 matrices >> from certain reduce-like methods. Please chime in on your preference. >> I'm waiting to hear more feedback before applying the patch. >> >> If you can help out on any other ticket that would be much >> appreciated. > > I'd like to fix up #81 (Numpy should be installable with setuptool's > easy_install), but I'm not going to have any time to work on it before > the weekend. I'm good to go now. #81 is fixed with a hack, but it's the only way I can see to do it (without a major restructuring of numpy.distutils). Only showstopper I can see is probably #110, which seems to show we're leaking memory in the ufunc machinery (haven't looked into it, though). -- |>|\/|< /--------------------------------------------------------------------------\ |David M. Cooke http://arbutus.physics.mcmaster.ca/dmc/ |cookedm at physics.mcmaster.ca From cookedm at physics.mcmaster.ca Wed May 17 13:55:02 2006 From: cookedm at physics.mcmaster.ca (David M. Cooke) Date: Wed May 17 13:55:02 2006 Subject: [Numpy-discussion] Distutils/NAG compiler problems In-Reply-To: <446B06B8.30003@cam.ac.uk> (James Graham's message of "Wed, 17 May 2006 12:19:20 +0100") References: <446B06B8.30003@cam.ac.uk> Message-ID: James Graham writes: > I have been experiencing difficulty with the final linking step when > using f2py and the NAG Fortran 95 compiler, using the latest release > version of numpy (0.9.6).
I believe this is caused by a typo in the > options being passed to the compiler (at least, making this change > fixes the problem): > > --- /usr/lib/python2.3/site-packages/numpy/distutils/fcompiler/nag.py > 2006-01-06 21:29:40.000000000 +0000 > +++ /scratch/jgraham/nag.py 2006-05-17 10:46:38.000000000 +0100 > @@ -22,7 +22,7 @@ > def get_flags_linker_so(self): > if sys.platform=='darwin': > return > ['-unsharedf95','-Wl,-bundle,-flat_namespace,-undefined,suppress'] > - return ["-Wl,shared"] > + return ["-Wl,-shared"] > def get_flags_opt(self): > return ['-O4'] > def get_flags_arch(self): > > (that may be incorrectly wrapped) I've applied that to svn. It'll be in 0.9.8. -- |>|\/|< /--------------------------------------------------------------------------\ |David M. Cooke http://arbutus.physics.mcmaster.ca/dmc/ |cookedm at physics.mcmaster.ca From aisaac at american.edu Wed May 17 14:34:04 2006 From: aisaac at american.edu (Alan G Isaac) Date: Wed May 17 14:34:04 2006 Subject: [Numpy-discussion] nonzero() behaviour has changed In-Reply-To: References: <20060516134106.551fe813.simon@arrowtheory.com><44695185.9090700@ieee.org> Message-ID: On Tue, 16 May 2006, Alan G Isaac apparently wrote: > 2. How are people using this? I trust that the numarray > behavior was well considered, but I would have expected > coordinates to be grouped rather than spread across the > arrays in the tuple. OK, just to satisfy my curiosity: does the silence mean that nobody is using 'nonzero' in a fashion that leads them to prefer the current behavior to the "obvious" alternative of grouping the coordinates? Is the current behavior just an inherited convention, or is it useful in specific applications? Sorry to ask again, but I'm interested to know the application that is facilitated by getting one coordinate at a time, or possibly by getting e.g. an array of first coordinates without the others. 
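For concreteness, here is a small sketch of the two conventions I mean, on a toy array (just illustrative numpy; nothing here is a proposal for the API, and the "grouped" form is simply the tuple form stacked and transposed):

```python
import numpy

a = numpy.array([[0, 1, 0],
                 [2, 0, 3]])

# current convention: a tuple of per-axis index arrays,
# directly usable for fancy indexing as a[a.nonzero()]
rows, cols = a.nonzero()

# the "grouped" alternative: one full coordinate per row
grouped = numpy.transpose(a.nonzero())
```

Here rows is [0, 1, 1] and cols is [1, 0, 2], spread across the tuple, while grouped is [[0, 1], [1, 0], [1, 2]], one coordinate per nonzero element.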
Thank you, Alan Isaac From oliphant.travis at ieee.org Wed May 17 15:20:01 2006 From: oliphant.travis at ieee.org (Travis Oliphant) Date: Wed May 17 15:20:01 2006 Subject: [Numpy-discussion] nonzero() behaviour has changed In-Reply-To: <20060517111742.421bdf3b.simon@arrowtheory.com> References: <20060516134106.551fe813.simon@arrowtheory.com> <44695185.9090700@ieee.org> <20060517111742.421bdf3b.simon@arrowtheory.com> Message-ID: <446BA147.7020802@ieee.org> Simon Burton wrote: > On Tue, 16 May 2006 15:15:23 -0400 > Alan G Isaac wrote: > > >> 2. How are people using this? I trust that the numarray >> behavior was well considered, but I would have expected >> coordinates to be grouped rather than spread across >> the arrays in the tuple. >> > > The split-tuple is for fancy-indexing. That's how you index a multidimensional array using an array of integers. The output of a.nonzero() is set up that way so that a[a.nonzero()] works. > Yes, this strikes me as bizarre. > How about we make a new function, eg. argwhere, that > returns an array of indices ? > > argwhere( array( [[0,1],[0,1]] ) ) -> [[0,1],[1,1]] > > I could see the value of this kind of function too. -Travis From oliphant.travis at ieee.org Wed May 17 15:21:17 2006 From: oliphant.travis at ieee.org (Travis Oliphant) Date: Wed May 17 15:21:17 2006 Subject: [Numpy-discussion] NumPy 0.9.8 to be released today Message-ID: <446BA1AA.8000300@ieee.org> If there are no further difficulties, I'm going to release 0.9.8 today. Then work on 1.0 can begin. The 1.0 release will consist of a series of release candidates. Thank you to David Cooke for his recent flurry of fixes. Thanks to all the other developers who have contributed as well. -Travis From a.mcmorland at auckland.ac.nz Wed May 17 17:39:19 2006 From: a.mcmorland at auckland.ac.nz (Angus McMorland) Date: Wed May 17 17:39:19 2006 Subject: [Numpy-discussion] Dot product threading?
In-Reply-To: References: <44669A5B.8040700@gmail.com> Message-ID: <446BC1FA.2060404@auckland.ac.nz> Robert Kern wrote: > Angus McMorland wrote: > >>Is there a way to specify which dimensions I want dot to work over? > > Use swapaxes() on the arrays to put the desired axes in the right places. Thanks for your reply, Robert. I've explored a bit further, and have made sense of what's going on, to some extent, but have further questions. My interpretation of the dot docstring is that the shapes I need are: a.shape == (2,3,1) and b.shape == (2,1,3) so that the sum is over the 1s, giving result.shape == (2,3,3) but: In [85]:ma = array([[[4],[5],[6]],[[7],[8],[9]]]) #shape = (2, 3, 1) In [86]:mb = array([[[4,5,6]],[[7,8,9]]]) #shape = (2, 1, 3) so In [87]:res = dot(ma,mb) In [88]:res.shape Out[88]:(2, 3, 2, 3) such that res[i,:,j,:] == dot(ma[i,:,:], mb[j,:,:]) which means that I can take the results I want out of res by slicing (somehow) res[0,:,0,:] and res[1,:,1,:] out. Is there an easier way, which would make dot only calculate the dot products for the cases where i==j (which is what I call threading over the first dimension)? Since the docstring makes no mention of what happens over other dimensions, should that be added, or is this the conventional numpy behaviour that I need to get used to? Cheers, Angus -- Angus McMorland email a.mcmorland at auckland.ac.nz mobile +64-21-155-4906 PhD Student, Neurophysiology / Multiphoton & Confocal Imaging Physiology, University of Auckland phone +64-9-3737-599 x89707 Armourer, Auckland University Fencing Secretary, Fencing North Inc.
From robert.kern at gmail.com Wed May 17 19:13:04 2006 From: robert.kern at gmail.com (Robert Kern) Date: Wed May 17 19:13:04 2006 Subject: [Numpy-discussion] Re: Dot product threading? In-Reply-To: <446BC1FA.2060404@auckland.ac.nz> References: <44669A5B.8040700@gmail.com> <446BC1FA.2060404@auckland.ac.nz> Message-ID: Angus McMorland wrote: > Robert Kern wrote: > >>Angus McMorland wrote: >> >>>Is there a way to specify which dimensions I want dot to work over? >> >>Use swapaxes() on the arrays to put the desired axes in the right places. > > Thanks for your reply, Robert. I've explored a bit further, and have > made sense of what's going on, to some extent, but have further questions. > > My interpretation of the dot docstring, is that the shapes I need are: > a.shape == (2,3,1) and b.shape == (2,1,3) > so that the sum is over the 1s, giving result.shape == (2,3,3) I'm not sure why you think you should get that resulting shape.
Yes, it will "sum" over the 1s (in this case there is only one element in those axes so there is nothing really to sum). What exactly are the semantics of the operation that you want? I can't tell from just the input and output shapes. > but: > In [85]:ma = array([[[4],[5],[6]],[[7],[8],[9]]]) #shape = (2, 3, 1) > In [86]:mb = array([[[4,5,6]],[[7,8,9]]]) #shape = (2, 1, 3) > so > In [87]:res = dot(ma,mb) > In [88]:res.shape > Out[88]:(2, 3, 2, 3) > such that > res[i,:,j,:] == dot(ma[i,:,:], mb[j,:,:]) > which means that I can take the results I want out of res by slicing > (somehow) res[0,:,0,:] and res[1,:,1,:] out. > > Is there an easier way, which would make dot only calculate the dot > products for the cases where i==j (which is what I call threading over > the first dimension)? I'm afraid I really don't understand the operation that you want. > Since the docstring makes no mention of what happens over other > dimensions, should that be added, or is this the conventional numpy > behaviour that I need to get used to? It's fairly conventional for operations that reduce values along an axis to a single value. The remaining axes are left untouched. E.g. In [1]: from numpy import * In [2]: a = random.randint(0, 10, size=(3,4,5)) In [3]: s1 = sum(a, axis=1) In [4]: a.shape Out[4]: (3, 4, 5) In [5]: s1.shape Out[5]: (3, 5) In [6]: for i in range(3): ...: for j in range(5): ...: print i, j, (sum(a[i,:,j]) == s1[i,j]).all() ...: ...: 0 0 True 0 1 True 0 2 True 0 3 True 0 4 True 1 0 True 1 1 True 1 2 True 1 3 True 1 4 True 2 0 True 2 1 True 2 2 True 2 3 True 2 4 True -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth."
-- Umberto Eco From schofield at ftw.at Thu May 18 01:56:02 2006 From: schofield at ftw.at (Ed Schofield) Date: Thu May 18 01:56:02 2006 Subject: [Numpy-discussion] nonzero() behaviour has changed In-Reply-To: References: <20060516134106.551fe813.simon@arrowtheory.com><44695185.9090700@ieee.org> Message-ID: <446C3636.3070906@ftw.at> Alan G Isaac wrote: > On Tue, 16 May 2006, Alan G Isaac apparently wrote: > >> 2. How are people using this? I trust that the numarray >> behavior was well considered, but I would have expected >> coordinates to be grouped rather than spread across the >> arrays in the tuple. >> > > OK, just to satisfy my curiosity: > does the silence mean that nobody is using > 'nonzero' in a fashion that leads them to > prefer the current behavior to the "obvious" > alternative of grouping the coordinates? > Is the current behavior just an inherited > convention, or is it useful in specific applications? > I also think a function that groups coordinates would be useful. Here's a prototype: def argnonzero(a): return transpose(a.nonzero()) This has the same effect as: def argnonzero(a): nz = a.nonzero() return array([[nz[i][j] for i in xrange(a.ndim)] for j in xrange(len(nz[0]))]) The output is always a 2d array, so >>> a array([[ 0, 1, 2, 3], [ 4, 0, 6, 7], [ 8, 9, 0, 11]]) >>> argnonzero(a) array([[0, 1], [0, 2], [0, 3], [1, 0], [1, 2], [1, 3], [2, 0], [2, 1], [2, 3]]) >>> b = a[0] >>> argnonzero(b) array([[1], [2], [3]]) >>> c = array([a, a-1, a-2]) >>> argnonzero(c) array([[0, 0, 1], [0, 0, 2], ... It looks a little clumsy for 1d arrays, but I'd argue that, if NumPy were to offer a function like this, it should always return a 2d array whose rows are the coordinates for consistency, rather than returning some squeezed version for indices into 1d arrays. I'd support the addition of such a function to NumPy. Although it's tiny, it's not obvious, and it might be useful. 
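For the record, here is a self-contained check that the one-line and explicit definitions above agree on this example (transcribed with range rather than xrange so it runs as-is):

```python
import numpy

def argnonzero(a):
    # one-liner: stack the per-axis index arrays from nonzero()
    # into one (npoints, ndim) array of coordinates
    return numpy.transpose(a.nonzero())

def argnonzero_loop(a):
    # the explicit nested-comprehension version
    nz = a.nonzero()
    return numpy.array([[nz[i][j] for i in range(a.ndim)]
                        for j in range(len(nz[0]))])

a = numpy.array([[0, 1, 2, 3],
                 [4, 0, 6, 7],
                 [8, 9, 0, 11]])

# both give one row of coordinates per nonzero element
assert (argnonzero(a) == argnonzero_loop(a)).all()
```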
-- Ed From wbaxter at gmail.com Thu May 18 02:37:01 2006 From: wbaxter at gmail.com (Bill Baxter) Date: Thu May 18 02:37:01 2006 Subject: [Numpy-discussion] nonzero() behaviour has changed In-Reply-To: <446C3636.3070906@ftw.at> References: <20060516134106.551fe813.simon@arrowtheory.com> <44695185.9090700@ieee.org> <446C3636.3070906@ftw.at> Message-ID: Aren't there quite a few functions that return indices like nonzero() does? Should they all have arg*** versions that just transpose the output of the non arg*** version? Maybe the answer is yes, but to me if all it takes to get what you need is transpose() then I'm not sure how that fails to be obvious. You have [[x y z] [p d q]] and you want ([x p][y d][z q]). Looks like a job for transpose! Maybe a note in the docstring pointing out "the obvious" is what is really warranted. As in "hey, look, you can transpose the output of this function if you'd like the indices the other way, but the way it is it's more useful for indexing, as in a[a.nonzero()]." Sure would be nice if all you had to type was a.nonzero().T, though... ;-P --bill On 5/18/06, Ed Schofield wrote: > > Alan G Isaac wrote: > > On Tue, 16 May 2006, Alan G Isaac apparently wrote: > > > >> 2. How are people using this? I trust that the numarray > >> behavior was well considered, but I would have expected > >> coordinates to be grouped rather than spread across the > >> arrays in the tuple. > >> > > > > OK, just to satisfy my curiosity: > > does the silence mean that nobody is using > > 'nonzero' in a fashion that leads them to > > prefer the current behavior to the "obvious" > > alternative of grouping the coordinates? > > Is the current behavior just an inherited > > convention, or is it useful in specific applications? > > > > I also think a function that groups coordinates would be useful.
Here's > a prototype: > > def argnonzero(a): > return transpose(a.nonzero()) > > This has the same effect as: > > def argnonzero(a): > nz = a.nonzero() > return array([[nz[i][j] for i in xrange(a.ndim)] for j in > xrange(len(nz[0]))]) > > The output is always a 2d array, so > > >>> a > array([[ 0, 1, 2, 3], > [ 4, 0, 6, 7], > [ 8, 9, 0, 11]]) > > >>> argnonzero(a) > array([[0, 1], > [0, 2], > [0, 3], > [1, 0], > [1, 2], > [1, 3], > [2, 0], > [2, 1], > [2, 3]]) > > >>> b = a[0] > >>> argnonzero(b) > array([[1], > [2], > [3]]) > > >>> c = array([a, a-1, a-2]) > >>> argnonzero(c) > array([[0, 0, 1], > [0, 0, 2], > ... > > It looks a little clumsy for 1d arrays, but I'd argue that, if NumPy > were to offer a function like this, it should always return a 2d array > whose rows are the coordinates for consistency, rather than returning > some squeezed version for indices into 1d arrays. > > I'd support the addition of such a function to NumPy. Although it's > tiny, it's not obvious, and it might be useful. > > -- Ed > > > > ------------------------------------------------------- > Using Tomcat but need to do more? Need to support web services, security? > Get stuff done quickly with pre-integrated technology to make your job > easier > Download IBM WebSphere Application Server v.1.0.1 based on Apache Geronimo > http://sel.as-us.falkag.net/sel?cmd=lnk&kid=120709&bid=263057&dat=121642 > _______________________________________________ > Numpy-discussion mailing list > Numpy-discussion at lists.sourceforge.net > https://lists.sourceforge.net/lists/listinfo/numpy-discussion > -- William V. Baxter III OLM Digital Kono Dens Building Rm 302 1-8-8 Wakabayashi Setagaya-ku Tokyo, Japan 154-0023 +81 (3) 3422-3380 -------------- next part -------------- An HTML attachment was scrubbed... URL: From pau.gargallo at gmail.com Thu May 18 02:48:01 2006 From: pau.gargallo at gmail.com (Pau Gargallo) Date: Thu May 18 02:48:01 2006 Subject: [Numpy-discussion] Re: Dot product threading? 
In-Reply-To: References: <44669A5B.8040700@gmail.com> <446BC1FA.2060404@auckland.ac.nz> Message-ID: <6ef8f3380605180247g3b084681o930ea6ea1bf4ecd9@mail.gmail.com> > I'm afraid I really don't understand the operation that you want. I think that the operation Angus wants is the following (at least I would like that one ;-) if you have two 2darrays of shapes: a.shape = (n,k) b.shape = (k,m) you get: dot( a,b ).shape == (n,m) Now, if you have higher dimensional arrays (kind of "arrays of matrices") a.shape = I+(n,k) b.shape = J+(k,m) where I and J are tuples, you get dot( a,b ).shape == I+(n,)+J+(m,) dot( a,b )[ i,:,j,: ] == dot( a[i],b[j] ) #i,j represent tuples That is the current behaviour, it computes the matrix product between every possible pair. For me that is similar to 'outer' but with matrix product. But sometimes it would be useful (at least for me) to have: a.shape = I+(n,k) b.shape = I+(k,m) and to get only: dot2( a,b ).shape == I+(n,m) dot2( a,b )[i] == dot2( a[i], b[i] ) This would be a natural extension of the scalar product (a*b)[i] == a[i]*b[i] If dot2 was a kind of ufunc, this would be the expected behaviour, while the current dot's behaviour would be obtained by dot2.outer(a,b). Does this make any sense? Cheers, pau From schofield at ftw.at Thu May 18 03:40:01 2006 From: schofield at ftw.at (Ed Schofield) Date: Thu May 18 03:40:01 2006 Subject: [Numpy-discussion] nonzero() behaviour has changed In-Reply-To: References: <20060516134106.551fe813.simon@arrowtheory.com> <44695185.9090700@ieee.org> <446C3636.3070906@ftw.at> Message-ID: <446C4E85.5000503@ftw.at> Bill Baxter wrote: > Aren't there quite a few functions that return indices like nonzero() > does? > Should they all have arg*** versions that just transpose the output of > the non arg*** version? > > Maybe the answer is yes, but to me if all it takes to get what you > need is transpose() then I'm not sure how that fails to be obvious. > You have [[x y z] [p d q]] and you want ([x p][y d][z q]).
Looks like > a job for transpose! > Maybe a note in the docstring pointing out "the obvious" is what is > really warranted. As in "hey, look, you can transpose the output of > this function if you'd like the indices the other way, but the way it > is it's more useful for indexing, as in a[ a.nonzero()]." Well, it wasn't immediately obvious to me how it could be done so elegantly. But I agree that a new function isn't necessary. I've re-written the docstring for the nonzero method in SVN to point out how to do it. > Sure would be nice if all you had to type was a.nonzero().T, though... ;-P No, this wouldn't be possible -- the output of the nonzero method is a tuple, not an array. Perhaps this is why it's not _that_ obvious ;) -- Ed From amcmorl at gmail.com Thu May 18 04:15:13 2006 From: amcmorl at gmail.com (Angus McMorland) Date: Thu May 18 04:15:13 2006 Subject: [Numpy-discussion] Dot product threading? In-Reply-To: <6ef8f3380605180247g3b084681o930ea6ea1bf4ecd9@mail.gmail.com> References: <44669A5B.8040700@gmail.com> <446BC1FA.2060404@auckland.ac.nz> <6ef8f3380605180247g3b084681o930ea6ea1bf4ecd9@mail.gmail.com> Message-ID: <446C571B.5050700@gmail.com> Pau Gargallo wrote: > I think that the operation Angus wants is the following (at least I > would like that one ;-) > [snip] > > But sometimes it would be useful (at least for me) to have: > a.shape = I+(n,k) > b.shape = I+(k,m) > and to get only: > dot2( a,b ).shape == I+(n,m) > dot2( a,b )[i] == dot2( a[i], b[i] ) > > This would be a natural extension of the scalar product (a*b)[i] == > a[i]*b[i] > If dot2 was a kind of ufunc, this will be the expected behaviour, > while the current dot's behaviour will be obtained by dot2.outer(a,b). > > Does this make any sense? Thank-you Pau, this looks exactly like what I want. For further explanation, here, I believe, is an implementation of the desired routine using a loop. It would, however, be great to do this using quicker (ufunc?) machinery. 
Pau, can you confirm that this is the same as the routine you're interested in?

def dot2(a,b):
    '''Returns dot product of last two dimensions of two 3-D arrays,
    threaded over first dimension.'''
    try:
        assert a.shape[2] == b.shape[1]
        assert a.shape[0] == b.shape[0]
    except AssertionError:
        print "incorrect input shapes"
    res = zeros( (a.shape[0], a.shape[1], b.shape[2]), dtype=float )
    for i in range(a.shape[0]):
        res[i,...] = dot( a[i,...], b[i,...] )
    return res

I think the 'arrays of 2-D matrices' comment (which I've snipped out, oh well) captures the idea well. Angus. -- Angus McMorland email a.mcmorland at auckland.ac.nz mobile +64-21-155-4906 PhD Student, Neurophysiology / Multiphoton & Confocal Imaging Physiology, University of Auckland phone +64-9-3737-599 x89707 Armourer, Auckland University Fencing Secretary, Fencing North Inc. From martin.wiechert at gmx.de Thu May 18 04:25:04 2006 From: martin.wiechert at gmx.de (Martin Wiechert) Date: Thu May 18 04:25:04 2006 Subject: [Numpy-discussion] NumPy 0.9.8 to be released today In-Reply-To: <446BA1AA.8000300@ieee.org> References: <446BA1AA.8000300@ieee.org> Message-ID: <200605181323.19679.martin.wiechert@gmx.de> Does scipy 0.4.8 work on top of that? Martin On Thursday 18 May 2006 00:20, Travis Oliphant wrote: > If there are no further difficulties, I'm going to release 0.9.8 > today. Then work on 1.0 can begin. The 1.0 release will consist of a > series of release candidates. > > Thank you to David Cooke for his recent flurry of fixes. Thanks to all > the other developers who have contributed as well. > > -Travis
From pau.gargallo at gmail.com Thu May 18 04:35:09 2006 From: pau.gargallo at gmail.com (Pau Gargallo) Date: Thu May 18 04:35:09 2006 Subject: [Numpy-discussion] Dot product threading? In-Reply-To: <446C571B.5050700@gmail.com> References: <44669A5B.8040700@gmail.com> <446BC1FA.2060404@auckland.ac.nz> <6ef8f3380605180247g3b084681o930ea6ea1bf4ecd9@mail.gmail.com> <446C571B.5050700@gmail.com> Message-ID: <6ef8f3380605180434w77092836m8c470ba993b337e8@mail.gmail.com> > Pau, can you confirm that this is the same > as the routine you're interested in? > > def dot2(a,b): > '''Returns dot product of last two dimensions of two 3-D arrays, > threaded over first dimension.''' > try: > assert a.shape[2] == b.shape[1] > assert a.shape[0] == b.shape[0] > except AssertionError: > print "incorrect input shapes" > res = zeros( (a.shape[0], a.shape[1], b.shape[2]), dtype=float ) > for i in range(a.shape[0]): > res[i,...] = dot( a[i,...], b[i,...] ) > return res > yes, that is what I would like. I would like it even with more dimensions and with all the broadcasting rules ;-) These can probably be achieved by building actual 'arrays of matrices' (an array of matrix objects) and then using the ufunc machinery. But I think that a simple dot2 function (with another name of course) will still be very useful.
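In the meantime, here is a loop-free sketch of such a dot2 using nothing but broadcasting; note that it builds a full I+(n,k,m) intermediate, so it trades memory for the explicit loop (a sketch only, not a proposal for the real thing):

```python
import numpy

def dot2(a, b):
    # threaded matrix product: for a.shape == I+(n,k) and
    # b.shape == I+(k,m), returns shape I+(n,m) with
    # result[i] == dot(a[i], b[i]) for every leading index i.
    # a[..., :, :, newaxis] has shape I+(n,k,1) and
    # b[..., newaxis, :, :] has shape I+(1,k,m); their product
    # broadcasts to I+(n,k,m), and summing over k gives I+(n,m).
    return (a[..., :, :, numpy.newaxis] *
            b[..., numpy.newaxis, :, :]).sum(axis=-2)

# the (2,3,1) x (2,1,3) example from earlier in the thread
ma = numpy.array([[[4.], [5.], [6.]], [[7.], [8.], [9.]]])
mb = numpy.array([[[4., 5., 6.]], [[7., 8., 9.]]])
res = dot2(ma, mb)
```

Here res.shape is (2, 3, 3), and res[i] matches dot(ma[i], mb[i]) for i = 0, 1, without ever forming the cross terms.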
pau From wbaxter at gmail.com Thu May 18 04:45:07 2006 From: wbaxter at gmail.com (Bill Baxter) Date: Thu May 18 04:45:07 2006 Subject: [Numpy-discussion] nonzero() behaviour has changed In-Reply-To: <446C4E85.5000503@ftw.at> References: <20060516134106.551fe813.simon@arrowtheory.com> <44695185.9090700@ieee.org> <446C3636.3070906@ftw.at> <446C4E85.5000503@ftw.at> Message-ID: On 5/18/06, Ed Schofield wrote: > > Bill Baxter wrote: > > Sure would be nice if all you had to type was a.nonzero().T, though... > ;-P > > No, this wouldn't be possible -- the output of the nonzero method is a > tuple, not an array. Perhaps this is why it's not _that_ obvious ;) Oh, I see. I did miss that bit. I think I may have even done something recently myself like vstack(where(a > val)).transpose() not realizing that plain old transpose() would work in place of vstack(xxx).transpose(). If you feel like copy-pasting your doc addition for nonzero() over to where() also, that would be nice. What other functions work like that? ... ... actually it looks like most other functions similar to nonzero() return a boolean array, then you use where() if you need an index list. isnan(), iscomplex(), isinf(), isreal(), isneginf(), etc and of course all the boolean operators like a>0. So nonzero() is kind of an oddball. Is it actually any different from where(a!=0)? --bill -------------- next part -------------- An HTML attachment was scrubbed... URL: From gnurser at googlemail.com Thu May 18 05:17:19 2006 From: gnurser at googlemail.com (George Nurser) Date: Thu May 18 05:17:19 2006 Subject: [Numpy-discussion] nonzero() behaviour has changed In-Reply-To: <446C4E85.5000503@ftw.at> References: <20060516134106.551fe813.simon@arrowtheory.com> <44695185.9090700@ieee.org> <446C3636.3070906@ftw.at> <446C4E85.5000503@ftw.at> Message-ID: <1d1e6ea70605180516s28f36707j7105ec3eed0d3d72@mail.gmail.com> This is a very useful thread. Made me think about what the present arrangement is supposed to do. 
My usage of this is to get out indices corresponding to a given condition. So E.g. In [38]: a = arange(12).reshape(3,2,2) In [39]: a Out[39]: array([[[ 0, 1], [ 2, 3]], [[ 4, 5], [ 6, 7]], [[ 8, 9], [10, 11]]]) In [40]: xx = where(a%2==0) In [41]: xx Out[41]: (array([0, 0, 1, 1, 2, 2]), array([0, 1, 0, 1, 0, 1]), array([0, 0, 0, 0, 0, 0])) The transpose idea should have been obvious to me... In [48]: for i,j,k in transpose(xx): ....: print i,j,k ....: ....: 0 0 0 0 1 0 1 0 0 1 1 0 2 0 0 2 1 0 A 1D array In [57]: aa = a.ravel() In [49]: xx = where(aa%2==0) In [53]: xx Out[53]: (array([ 0, 2, 4, 6, 8, 10]),) needs tuple unpacking: In [52]: for i, in transpose(xx): ....: print i ....: ....: 0 2 4 6 8 10 Or more simply, the ugly In [58]: for i in xx[0]: ....: print i ....: ....: 0 2 4 6 8 10 Alternatively, instead of transpose, we can simply use zip. E.g. (3D) In [42]: for i,j,k in zip(*xx): ....: print i,j,k or (1D, again still needs unpacking) In [56]: for i, in zip(*xx): ....: print i From wbaxter at gmail.com Thu May 18 05:40:11 2006 From: wbaxter at gmail.com (Bill Baxter) Date: Thu May 18 05:40:11 2006 Subject: [Numpy-discussion] Re: Dot product threading? In-Reply-To: <6ef8f3380605180247g3b084681o930ea6ea1bf4ecd9@mail.gmail.com> References: <44669A5B.8040700@gmail.com> <446BC1FA.2060404@auckland.ac.nz> <6ef8f3380605180247g3b084681o930ea6ea1bf4ecd9@mail.gmail.com> Message-ID: In Einstein summation notation, what numpy.dot() does now is: c_riqk = a_rij * b_qjk And you want: c_[r]ik = a_[r]ij * b_[r]jk where the brackets indicate a 'no summation' index. Writing the ESN makes it clearer to me anyway. :-) --bb On 5/18/06, Pau Gargallo wrote: > > > I'm afraid I really don't understand the operation that you want.
> > I think that the operation Angus wants is the following (at least I > would like that one ;-) > > if you have two 2darrays of shapes: > a.shape = (n,k) > b.shape = (k,m) > you get: > dot( a,b ).shape == (n,m) > > Now, if you have higher dimensional arrays (kind of "arrays of matrices") > a.shape = I+(n,k) > b.shape = J+(k,m) > where I and J are tuples, you get > dot( a,b ).shape == I+(n,)+J+(m,) > dot( a,b )[ i,:,j,: ] == dot( a[i],b[j] ) #i,j represent tuples > That is the current behaviour, it computes the matrix product between > every possible pair. > For me that is similar to 'outer' but with matrix product. > > But sometimes it would be useful (at least for me) to have: > a.shape = I+(n,k) > b.shape = I+(k,m) > and to get only: > dot2( a,b ).shape == I+(n,m) > dot2( a,b )[i] == dot2( a[i], b[i] ) > > This would be a natural extension of the scalar product (a*b)[i] == > a[i]*b[i] > If dot2 was a kind of ufunc, this would be the expected behaviour, > while the current dot's behaviour would be obtained by dot2.outer(a,b). > > Does this make any sense? > > Cheers, > pau > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From joris at ster.kuleuven.ac.be Thu May 18 05:54:03 2006 From: joris at ster.kuleuven.ac.be (joris at ster.kuleuven.ac.be) Date: Thu May 18 05:54:03 2006 Subject: [Numpy-discussion] Numpy Example List Message-ID: <1147956762.446c6e1aa128f@webmail.ster.kuleuven.be> Newsflash The Numpy Example List has now passed its 100th example, and now has its own wikipage: http://scipy.org/Numpy_Example_List Thanks to Pau Gargallo for his contributions.
Joris Disclaimer: http://www.kuleuven.be/cwis/email_disclaimer.htm From gruben at bigpond.net.au Thu May 18 07:38:03 2006 From: gruben at bigpond.net.au (Gary Ruben) Date: Thu May 18 07:38:03 2006 Subject: [Numpy-discussion] Numpy Example List proposals In-Reply-To: <1147956762.446c6e1aa128f@webmail.ster.kuleuven.be> References: <1147956762.446c6e1aa128f@webmail.ster.kuleuven.be> Message-ID: <446C8685.1060203@bigpond.net.au> I want to thank Joris for this fantastic resource. Well done. Further to Bill's suggestion, perhaps the first example(s) in each box could be the docstring example. For functions/methods already with docstrings, we could paste that into the wiki page. Those without could be pasted back the other way. Maybe we could convince the developers to add an example() function to numpy and scipy which takes the function name as an argument and accesses a file containing these examples and spits it out. This way we could have many examples in the documentation without polluting the docstrings with pages of text. The Maxima CAS package provides help, example() and apropos() functions. In this case example() actually executes the example input to generate the output which we could do with numpy/scipy as well. apropos() returns similar sounding functions. I contacted Fernando Perez the other day because I discovered a neat recipe for this functionality in the python cookbook here: http://aspn.activestate.com/ASPN/Cookbook/Python/Recipe/409000 so we could do this in numpy/scipy too. Gary R. joris at ster.kuleuven.ac.be wrote: > Newsflash > > The Numpy Example List has now passed its 100th example, > and has now its own wikipage: http://scipy.org/Numpy_Example_List > > Thanks to Pau Gargallo for his contributions. 
> > Joris From aisaac at american.edu Thu May 18 07:43:08 2006 From: aisaac at american.edu (Alan G Isaac) Date: Thu May 18 07:43:08 2006 Subject: [Numpy-discussion] nonzero() behaviour has changed In-Reply-To: <446BA0B9.7000704@ieee.org> References: <20060516134106.551fe813.simon@arrowtheory.com><44695185.9090700@ieee.org> <446BA0B9.7000704@ieee.org> Message-ID: > Alan G Isaac wrote: >> 1. I hope for an array 'a' that nonzero(a) and a.nonzero() >> will produce the same result, whatever convention is >> chosen. On Wed, 17 May 2006, Travis Oliphant apparently wrote: > Unfortunately this won't work because the nonzero(a) behavior was > inherited from Numeric but the .nonzero() from numarray. > This is the price of trying to merge two user groups. The little bit > of pain is worth it, I think. One last comment: This is definitely trading off surprises for future users against ease for incoming Numeric users. So ... might the best to be to make numpy consistent but, assuming the more consistent numarray behavior is chosen, provide the Numeric behavior as part of the compatability module? fwiw, Alan From wbaxter at gmail.com Thu May 18 07:57:09 2006 From: wbaxter at gmail.com (Bill Baxter) Date: Thu May 18 07:57:09 2006 Subject: [Numpy-discussion] Indexing arrays with arrays Message-ID: One thing I haven't quite managed to figure out yet, is what the heck indexing an array with an array is supposed to give you. This is sort of an offshoot of the "nonzero()" discussion. I was wondering why nonzero() and where() return tuples at all instead of just an array, till I tried it. >>> b array([[0, 1, 2], [3, 4, 5], [6, 7, 8]]) >>> b[ num.asarray(num.where(b>4)) ] array([[[3, 4, 5], [6, 7, 8], [6, 7, 8], [6, 7, 8]], [[6, 7, 8], [0, 1, 2], [3, 4, 5], [6, 7, 8]]]) Whoa, not sure what that is. Can someone explain the current rule and in what situations is it useful? Thanks --bb -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From nicolas.chauvat at logilab.fr Thu May 18 08:04:09 2006 From: nicolas.chauvat at logilab.fr (Nicolas Chauvat) Date: Thu May 18 08:04:09 2006 Subject: [Numpy-discussion] [ANN] EuroPython 2006 - call for papers Message-ID: <20060518150512.GG4480@crater.logilab.fr> Hi Lists, This year again EuroPython will see pythonistas from all over flock together at the same place and the same time : EuroPython 2006 - July, 3rd to 5th - CERN, Geneva, Switzerland Please do not hesitate to submit your presentation proposals to share your insights and experience with all who have an interest in Python put to good use in Science. http://www.europython.org Deadline is May 31st. See you there, -- Nicolas Chauvat logilab.fr - services en informatique avanc?e et gestion de connaissances From aisaac at american.edu Thu May 18 08:14:03 2006 From: aisaac at american.edu (Alan G Isaac) Date: Thu May 18 08:14:03 2006 Subject: [Numpy-discussion] Indexing arrays with arrays In-Reply-To: References: Message-ID: On Thu, 18 May 2006, Bill Baxter apparently wrote: > One thing I haven't quite managed to figure out yet, is > what the heck indexing an array with an array is supposed > to give you. I think you want section 3.3.6.1 of Travis's book, which is easily one of the hardest sections of the book. I find it nonobvious that when x and y are nd-arrays that x[y] should differ from x[tuple(y)] or x[list(y)], but as explained in this section it does in a big way. Cheers, Alan Isaac From pau.gargallo at gmail.com Thu May 18 08:24:10 2006 From: pau.gargallo at gmail.com (Pau Gargallo) Date: Thu May 18 08:24:10 2006 Subject: [Numpy-discussion] Indexing arrays with arrays In-Reply-To: References: Message-ID: <6ef8f3380605180823h55275363ld0b53e1173c567dd@mail.gmail.com> On 5/18/06, Bill Baxter wrote: > One thing I haven't quite managed to figure out yet, is what the heck > indexing an array with an array is supposed to give you. > This is sort of an offshoot of the "nonzero()" discussion. 
> I was wondering why nonzero() and where() return tuples at all instead of
> just an array, till I tried it.
>
> >>> b
> array([[0, 1, 2],
> [3, 4, 5],
> [6, 7, 8]])
> >>> b[ num.asarray(num.where(b>4)) ]
> array([[[3, 4, 5],
> [6, 7, 8],
> [6, 7, 8],
> [6, 7, 8]],
>
> [[6, 7, 8],
> [0, 1, 2],
> [3, 4, 5],
> [6, 7, 8]]])
>
> Whoa, not sure what that is. Can someone explain the current rule and in
> what situations is it useful?
>
> Thanks
> --bb

i think that if b and x are arrays then,

b[x]_ijk = b[ x_ijk ]

where ijk is whatever x needs to be indexed with. Note that the indexing on b is _only_ on its first dimension.

>>> from numpy import *
>>> b = arange(9).reshape(3,3)
>>> b
array([[0, 1, 2],
       [3, 4, 5],
       [6, 7, 8]])
>>> x,y = where(b>4)
>>> xy = asarray( (x,y) )
>>> xy
array([[1, 2, 2, 2],
       [2, 0, 1, 2]])
>>> b[x,y]
array([5, 6, 7, 8])
>>> b[xy]
array([[[3, 4, 5],
        [6, 7, 8],
        [6, 7, 8],
        [6, 7, 8]],

       [[6, 7, 8],
        [0, 1, 2],
        [3, 4, 5],
        [6, 7, 8]]])
>>> asarray( (b[x],b[y]) )
array([[[3, 4, 5],
        [6, 7, 8],
        [6, 7, 8],
        [6, 7, 8]],

       [[6, 7, 8],
        [0, 1, 2],
        [3, 4, 5],
        [6, 7, 8]]])

From faltet at carabos.com Thu May 18 09:08:06 2006 From: faltet at carabos.com (Francesc Altet) Date: Thu May 18 09:08:06 2006 Subject: [Numpy-discussion] Numpy Example List In-Reply-To: <1147956762.446c6e1aa128f@webmail.ster.kuleuven.be> References: <1147956762.446c6e1aa128f@webmail.ster.kuleuven.be> Message-ID: <200605181806.56193.faltet@carabos.com> A Dijous 18 Maig 2006 14:52, joris at ster.kuleuven.ac.be va escriure:
> Newsflash
>
> The Numpy Example List has now passed its 100th example,
> and has now its own wikipage: http://scipy.org/Numpy_Example_List
>
> Thanks to Pau Gargallo for his contributions.

Wow, very good resource and indeed I'll check it frequently! Thanks!

--
>0,0<   Francesc Altet     http://www.carabos.com/
 V   V   Cárabos Coop. V. 
Enjoy Data
  "-"

From oliphant.travis at ieee.org Thu May 18 09:41:04 2006 From: oliphant.travis at ieee.org (Travis Oliphant) Date: Thu May 18 09:41:04 2006 Subject: [Numpy-discussion] Indexing arrays with arrays In-Reply-To: <6ef8f3380605180823h55275363ld0b53e1173c567dd@mail.gmail.com> References: <6ef8f3380605180823h55275363ld0b53e1173c567dd@mail.gmail.com> Message-ID: <446CA360.6040901@ieee.org> Pau Gargallo wrote:
> On 5/18/06, Bill Baxter wrote:
>> One thing I haven't quite managed to figure out yet, is what the heck
>> indexing an array with an array is supposed to give you.
>> This is sort of an offshoot of the "nonzero()" discussion.
>> I was wondering why nonzero() and where() return tuples at all
>> instead of
>> just an array, till I tried it.
>>
>> >>> b
>> array([[0, 1, 2],
>> [3, 4, 5],
>> [6, 7, 8]])
>> >>> b[ num.asarray(num.where(b>4)) ]
>> array([[[3, 4, 5],
>> [6, 7, 8],
>> [6, 7, 8],
>> [6, 7, 8]],
>>
>> [[6, 7, 8],
>> [0, 1, 2],
>> [3, 4, 5],
>> [6, 7, 8]]])
>>
>> Whoa, not sure what that is. Can someone explain the current rule
>> and in
>> what situations is it useful?

I can't take too much credit for the indexing behavior. I took what numarray had done and extended it just a little bit to allow mixing of slices and index arrays. It was easily one of the most complicated things to write for NumPy.

What you are observing is called "partial indexing" by Numarray. The indexing rules were also spelled out in emails to this list last year and placed in the design document for what was then called Numeric3 (I'm not sure if that design document is still visible; it could probably be found under old.scipy.org). The section in my book that covers this is based on that information.

I can't pretend to have use cases for all the indexing fanciness because I was building off the work that numarray had already pioneered. 
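[Editorial aside: the "partial indexing" rule discussed above can be checked directly — indexing the first axis with an index array of shape S yields a result of shape S + A.shape[1:]. A quick sketch in present-day NumPy, not from the original thread:]

```python
import numpy as np

A = np.arange(24).reshape(4, 6)    # a (4, 6) array
ind = np.array([[0, 2], [3, 1]])   # a (2, 2) index array

# "Partial indexing": ind indexes only the first axis of A, so the
# result has shape ind.shape + A.shape[1:] -- each entry picks a row.
out = A[ind]
assert out.shape == (2, 2, 6)
assert (out[1, 0] == A[3]).all()
```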
-Travis From schofield at ftw.at Thu May 18 10:17:15 2006 From: schofield at ftw.at (Ed Schofield) Date: Thu May 18 10:17:15 2006 Subject: [Numpy-discussion] nonzero() behaviour has changed In-Reply-To: References: <20060516134106.551fe813.simon@arrowtheory.com> <44695185.9090700@ieee.org> <446C3636.3070906@ftw.at> <446C4E85.5000503@ftw.at> Message-ID: <446CABF3.5020802@ftw.at> Bill Baxter wrote: > On 5/18/06, *Ed Schofield* > wrote: > > Bill Baxter wrote: > > Sure would be nice if all you had to type was a.nonzero().T, > though... ;-P > > No, this wouldn't be possible -- the output of the nonzero method is a > tuple, not an array. Perhaps this is why it's not _that_ obvious ;) > > > Oh, I see. I did miss that bit. I think I may have even done > something recently myself like > vstack(where(a > val)).transpose() > not realizing that plain old transpose() would work in place of > vstack(xxx).transpose(). > > If you feel like copy-pasting your doc addition for nonzero() over to > where() also, that would be nice. Okay, done. > What other functions work like that? ... little> ... actually it looks like most other functions similar to > nonzero() return a boolean array, then you use where() if you need an > index list. isnan(), iscomplex(), isinf(), isreal(), isneginf(), etc > and of course all the boolean operators like a>0. So nonzero() is > kind of an oddball. Is it actually any different from where(a!=0)? I think it's equivalent, only slightly more efficient... 
-- Ed From schofield at ftw.at Thu May 18 10:21:10 2006 From: schofield at ftw.at (Ed Schofield) Date: Thu May 18 10:21:10 2006 Subject: [Numpy-discussion] nonzero() behaviour has changed In-Reply-To: <1d1e6ea70605180516s28f36707j7105ec3eed0d3d72@mail.gmail.com> References: <20060516134106.551fe813.simon@arrowtheory.com> <44695185.9090700@ieee.org> <446C3636.3070906@ftw.at> <446C4E85.5000503@ftw.at> <1d1e6ea70605180516s28f36707j7105ec3eed0d3d72@mail.gmail.com> Message-ID: <446CACEB.6030505@ftw.at> George Nurser wrote:
> This is a very useful thread. Made me think about what the present
> arrangement is supposed to do.

Great!

> Alternatively, instead of transpose, we can simply use zip.
>
> E.g. (3D)
> In [42]: for i,j,k in zip(*xx):
>    ....:     print i,j,k

That's interesting. I've never thought of zip as a transpose operation before, but I guess it is ...

-- Ed

From fperez.net at gmail.com Thu May 18 10:28:02 2006 From: fperez.net at gmail.com (Fernando Perez) Date: Thu May 18 10:28:02 2006 Subject: [Numpy-discussion] Re: Dot product threading? In-Reply-To: References: <44669A5B.8040700@gmail.com> <446BC1FA.2060404@auckland.ac.nz> <6ef8f3380605180247g3b084681o930ea6ea1bf4ecd9@mail.gmail.com> Message-ID: On 5/18/06, Bill Baxter wrote:
> In Einstein summation notation, what numpy.dot() does now is:
> c_riqk = a_rij * b_qjk
>
> And you want:
> c_[r]ik = a_[r]ij * b_[r]jk
>
> where the brackets indicate a 'no summation' index.
> Writing the ESN makes it clearer to me anyway. :-)

I recently needed something similar to this, and being too lazy to think up the proper numpy ax-swapping kung-fu, I just opened up weave and was done with it in a hurry. Here it is, in case anyone finds the basic idea of any use.

Cheers,

f

###

class mt3_dot_factory(object):
    """Generate the mt3 contract function, holding necessary state.
    This class only needs to be instantiated once, though it doesn't try
    to enforce this via singleton/borg approaches at all."""

    def __init__(self):
        # The actual expression to contract indices, as a list of strings to be
        # interpolated into the C++ source
        mat_ten = ['mat(i,m)*ten(m,j,k)',  # first index
                   'mat(j,m)*ten(i,m,k)',  # second
                   'mat(k,m)*ten(i,j,m)',  # third
                   ]
        # Source template
        code_tpl = """
        for (int i=0;i<order;i++)
          for (int j=0;j<order;j++)
            for (int k=0;k<order;k++) {
              out(i,j,k) = 0.0;
              for (int m=0;m<order;m++)
                out(i,j,k) += %s;
            }
        """
        self.code = [code_tpl % expr for expr in mat_ten]

    def __call__(self, mat, ten, idx):
        """mt3(mat, ten, idx) -> tensor.

        A special type of matrix-tensor contraction over a single index.
        The returned array has the following structure:

        out(i,j,k) = sum_m(mat(i,m)*ten(m,j,k)) if idx==0
        out(i,j,k) = sum_m(mat(j,m)*ten(i,m,k)) if idx==1
        out(i,j,k) = sum_m(mat(k,m)*ten(i,j,m)) if idx==2

        Inputs:
          - mat: an NxN array.
          - ten: an NxNxN array.
          - idx: the position of the index to contract over, 0 1 or 2."""

        # Minimal input validation - we use asserts so they don't kick in
        # under a -O run of python.
        assert len(mat.shape)==2,\
            "mat must be a rank 2 array, shape=%s" % mat.shape
        assert mat.shape[0]==mat.shape[1],\
            "Only square matrices are supported: mat shape=%s" % mat.shape
        assert len(ten.shape)==3,\
            "ten must be a rank 3 array, shape=%s" % ten.shape
        assert ten.shape[0]==ten.shape[1]==ten.shape[2],\
            "Only equal-dim tensors are supported: ten shape=%s" % ten.shape
        order = mat.shape[0]
        out = zeros_like(ten)
        inline(self.code[idx], ('mat','ten','out','order'),
               type_converters = converters.blitz)
        return out

# Make actual instance
mt3_dot = mt3_dot_factory()

From aisaac at american.edu Thu May 18 10:56:04 2006 From: aisaac at american.edu (Alan G Isaac) Date: Thu May 18 10:56:04 2006 Subject: [Numpy-discussion] nonzero() behaviour has changed In-Reply-To: <446CACEB.6030505@ftw.at> References: <20060516134106.551fe813.simon@arrowtheory.com> <44695185.9090700@ieee.org> <446C3636.3070906@ftw.at> <446C4E85.5000503@ftw.at> <1d1e6ea70605180516s28f36707j7105ec3eed0d3d72@mail.gmail.com><446CACEB.6030505@ftw.at> Message-ID: On Thu, 18 May 2006, Ed Schofield apparently wrote:
> I've never thought of
zip as a transpose operation
> before, but I guess it is

http://mail.python.org/pipermail/python-list/2004-December/257416.html

But I like it.

Cheers, Alan Isaac

From jonathan.taylor at utoronto.ca Thu May 18 10:59:02 2006 From: jonathan.taylor at utoronto.ca (Jonathan Taylor) Date: Thu May 18 10:59:02 2006 Subject: [Numpy-discussion] NumPy 0.9.8 to be released today In-Reply-To: <446BA1AA.8000300@ieee.org> References: <446BA1AA.8000300@ieee.org> Message-ID: <463e11f90605181058v777535f6y645ebd4bdbeda9c2@mail.gmail.com> When 0.9.8 comes out does that mean the API is stable, or will there be API changes before 1.0?

Jon.

On 5/17/06, Travis Oliphant wrote:
>
> If there are no further difficulties, I'm going to release 0.9.8
> today. Then work on 1.0 can begin. The 1.0 release will consist of a
> series of release candidates.
>
> Thank you to David Cooke for his recent flurry of fixes. Thanks to all
> the other developers who have contributed as well.
>
> -Travis
>
>
>
> -------------------------------------------------------
> Using Tomcat but need to do more? Need to support web services, security?
> Get stuff done quickly with pre-integrated technology to make your job easier
> Download IBM WebSphere Application Server v.1.0.1 based on Apache Geronimo
> http://sel.as-us.falkag.net/sel?cmd=lnk&kid=120709&bid=263057&dat=121642
> _______________________________________________
> Numpy-discussion mailing list
> Numpy-discussion at lists.sourceforge.net
> https://lists.sourceforge.net/lists/listinfo/numpy-discussion
>

From mark at mitre.org Thu May 18 14:09:00 2006 From: mark at mitre.org (Mark Heslep) Date: Thu May 18 14:09:00 2006 Subject: [Numpy-discussion] Guide to NumPy latest? Message-ID: <446CE229.9060302@mitre.org> So is the January '06 version of the Guide to Numpy still the most recent version? 
Mark From wbaxter at gmail.com Thu May 18 18:46:01 2006 From: wbaxter at gmail.com (Bill Baxter) Date: Thu May 18 18:46:01 2006 Subject: [Numpy-discussion] Indexing arrays with arrays In-Reply-To: <446CA360.6040901@ieee.org> References: <6ef8f3380605180823h55275363ld0b53e1173c567dd@mail.gmail.com> <446CA360.6040901@ieee.org> Message-ID: I read the chapter in your book and sort of vaguely understand what it does now. Take an array A and index array ind: >>> A array([[ 0, 5, 10, 15], [ 1, 6, 11, 16], [ 2, 7, 12, 17], [ 3, 8, 13, 18], [ 4, 9, 14, 19]]) >>> ind array([[1, 3, 4], [0, 2, 1]]) And you get >>> A[ind] array([[[ 1, 6, 11, 16], [ 3, 8, 13, 18], [ 4, 9, 14, 19]], [[ 0, 5, 10, 15], [ 2, 7, 12, 17], [ 1, 6, 11, 16]]]) In this case it's roughly equivalent to [ A[row] for row in ind ]. >>> num.asarray( [ A[r] for r in ind ] ) array([[[ 1, 6, 11, 16], [ 3, 8, 13, 18], [ 4, 9, 14, 19]], [[ 0, 5, 10, 15], [ 2, 7, 12, 17], [ 1, 6, 11, 16]]]) >>> So I guess it could be useful if you want to take a bunch of different random samples of your data and stack them all up. E.g. you have a (1000,50) shaped grid of data, and you want to take N random samplings, each consisting of 100 rows from the original grid, and then put them all together into an (N,100,50) array. Or say you want to make a stack of sliding windows on the data like rows 0-5, then rows 1-6, then rows 2-7, etc to make a big (1000-5,5,50) array. Might be useful for that kind of thing. But thinking about applying an index obj of shape (2,3,4) to a (10,20,30,40,50) shaped array just makes my head hurt. :-) Does anyone actually use it, though? I also found it unexpected that A[ (ind[0], ind[1] ) ] doesn't do the same thing as A[ind] when ind.shape=( A.ndim, N). List of array -- as in A[ [ind[0], ind[1]] ] -- seems to act just like tuple of array also. --bill -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From christian at marquardt.sc Thu May 18 23:46:04 2006 From: christian at marquardt.sc (Christian Marquardt) Date: Thu May 18 23:46:04 2006 Subject: [Numpy-discussion] Guide to NumPy latest? In-Reply-To: <446CE229.9060302@mitre.org> References: <446CE229.9060302@mitre.org> Message-ID: <22653.193.17.11.23.1148021135.squirrel@webmail.marquardt.sc> Mine says February 28, 2006 on the title page. As this has come up quite often - is there any kind of mechanism to obtain the most recent version? Regards, Christian. On Thu, May 18, 2006 23:07, Mark Heslep wrote: > So is the January '06 version of the Guide to Numpy still the most > recent version? > > Mark > > > ------------------------------------------------------- > Using Tomcat but need to do more? Need to support web services, security? > Get stuff done quickly with pre-integrated technology to make your job > easier > Download IBM WebSphere Application Server v.1.0.1 based on Apache Geronimo > http://sel.as-us.falkag.net/sel?cmd=lnk&kid=120709&bid=263057&dat=121642 > _______________________________________________ > Numpy-discussion mailing list > Numpy-discussion at lists.sourceforge.net > https://lists.sourceforge.net/lists/listinfo/numpy-discussion > From satyaupadhya at yahoo.co.in Fri May 19 06:18:07 2006 From: satyaupadhya at yahoo.co.in (Satya Upadhya) Date: Fri May 19 06:18:07 2006 Subject: [Numpy-discussion] Matrix query Message-ID: <20060519131729.55818.qmail@web8503.mail.in.yahoo.com> Dear Python Users and Gurus, My problem is that i have a distance matrix obtained from the software package PHYLIP. I have placed it in a text file, and using ordinary python i am able to save the matrix elements as a multidimensional array. My problem is that i wish to now convert this multidimensional array into a matrix and subsequently i wish to find the eigenvectors and eigenvalues of this matrix. I tried using scipy and also earlier the Numeric and LinearAlgebra modules but i am facing problems. 
I can generate a new matrix using scipy (and also with Numeric/LinearAlgebra) and can obtain the corresponding eigenvectors. But i am facing a real problem in making scipy or Numeric/LinearAlgebra accept my multidimensional array as a matrix it can recognize.

Please help/give a pointer to a similar query.

Thanking you,
Satya

-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From joris at ster.kuleuven.ac.be Fri May 19 07:46:02 2006 From: joris at ster.kuleuven.ac.be (joris at ster.kuleuven.ac.be) Date: Fri May 19 07:46:02 2006 Subject: [Numpy-discussion] Numpy Example List Message-ID: <1148049894.446dd9e65009b@webmail.ster.kuleuven.be> For your information: http://scipy.org/Numpy_Example_List
- all examples are now syntax highlighted (thanks Pau Gargallo!)
- most examples now have a clickable "See also:" footnote
- is extended to 118 examples

Joris

Disclaimer: http://www.kuleuven.be/cwis/email_disclaimer.htm

From pau.gargallo at gmail.com Fri May 19 09:57:08 2006 From: pau.gargallo at gmail.com (Pau Gargallo) Date: Fri May 19 09:57:08 2006 Subject: [Numpy-discussion] Numpy Example List In-Reply-To: <1148049894.446dd9e65009b@webmail.ster.kuleuven.be> References: <1148049894.446dd9e65009b@webmail.ster.kuleuven.be> Message-ID: <6ef8f3380605190956l3bda1095gf9aa3f80513d884c@mail.gmail.com> On 5/19/06, joris at ster.kuleuven.ac.be wrote:
>
> For your information: http://scipy.org/Numpy_Example_List
> - all examples are now syntax highlighted (thanks Pau Gargallo!)
> - most examples now have a clickable "See also:" footnote
> - is extended to 118 examples
>
> Joris
>

Thanks to you Joris !! 118 examples in a week, you did a huge amount of work!

The color syntax has some problems. 
We are now using the moinmoin command:

{{{#!python numbers=disable

so moinmoin is coloring the interactive sessions as if they were python files. This gives errors when the output of the interactive session doesn't look like python code. See for example http://www.scipy.org/Numpy_Example_List#histogram where the print commands print opening brackets "[" and not the closing ones.

Does someone know an easy way to solve this? Is there some way to deactivate the #!python color highlighting for some lines of the code?

All around the scipy.org site there are examples of interactive python sessions, so it would be nice to have a special syntax highlighter for them. Like:

{{{#!python_interactive

maybe too much effort just for colors?

pau

From fred at ucar.edu Fri May 19 09:59:02 2006 From: fred at ucar.edu (Fred Clare) Date: Fri May 19 09:59:02 2006 Subject: [Numpy-discussion] isdtype Message-ID: <5292be8b1f2b781cb213bc9464f088be@ucar.edu> None of the functions in section 4.5 of the manual seem to be implemented:

>>> import numpy
>>> numpy.__version__
'0.9.6'
>>> a = numpy.array([0.])
>>> numpy.isdtype(a)
Traceback (most recent call last):
File "", line 1, in ?
AttributeError: 'module' object has no attribute 'isdtype'

Fred Clare

From aisaac at american.edu Fri May 19 11:04:01 2006 From: aisaac at american.edu (Alan G Isaac) Date: Fri May 19 11:04:01 2006 Subject: [Numpy-discussion] matrix direct sum Message-ID: It seems to me that the core question is whether you really need the direct sum to be constructed (seems unlikely), or whether you just need the information in the constituent matrices. How about a direct sum class, which takes a list of matrices as an argument, and appropriately defines the operations you need.

Cheers, Alan Isaac

PS I have argued that this is a better way to handle the Kronecker product as well, but as here I did not offer code. ;-) If you code it, please share it. 
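[Editorial aside: a minimal sketch of the direct sum class Alan describes — the class name, `matvec` method, and block-list storage are hypothetical illustrations, not an existing numpy API:]

```python
import numpy as np

class DirectSum:
    """Direct sum (block-diagonal operator) stored as its blocks.

    Only matrix-vector multiplication is sketched; other operations
    (trace, determinant, ...) factor over the blocks the same way,
    without ever constructing the full block-diagonal matrix.
    """
    def __init__(self, blocks):
        self.blocks = [np.asarray(b) for b in blocks]
        self.shape = (sum(b.shape[0] for b in self.blocks),
                      sum(b.shape[1] for b in self.blocks))

    def matvec(self, v):
        # Multiply each block against its slice of v and concatenate.
        out, start = [], 0
        for b in self.blocks:
            out.append(np.dot(b, v[start:start + b.shape[1]]))
            start += b.shape[1]
        return np.concatenate(out)

ds = DirectSum([np.eye(2), 2 * np.eye(3)])
assert ds.shape == (5, 5)
assert (ds.matvec(np.ones(5)) == np.array([1., 1., 2., 2., 2.])).all()
```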
From schofield at ftw.at Sat May 20 04:51:01 2006 From: schofield at ftw.at (Ed Schofield) Date: Sat May 20 04:51:01 2006 Subject: [Numpy-discussion] a.squeeze() for 1d and 0d arrays Message-ID: <446F0230.6050101@ftw.at> Hi all,

I've discovered a bug in my SciPy code for sparse matrix slicing that was caused by the following behaviour of squeeze():

>>> a = array([3]) # array of shape (1,)
>>> type(a.squeeze())

That is, squeezing a 1-dim array returns an array scalar. Could we change this to return a 0-dim array instead?

Another related question is this:

>>> b = array(3) # 0-dim array
>>> type(b.squeeze())

I find this behaviour surprising too; the docstring claims that squeeze eliminates any length-1 dimensions, but a 0-dimensional array has shape (), without any length-1 dimensions. So shouldn't squeeze() leave 0-dimensional arrays alone?

-- Ed

From svetosch at gmx.net Sat May 20 06:40:12 2006 From: svetosch at gmx.net (Sven Schreiber) Date: Sat May 20 06:40:12 2006 Subject: [Numpy-discussion] Guide to NumPy latest? In-Reply-To: <22653.193.17.11.23.1148021135.squirrel@webmail.marquardt.sc> References: <446CE229.9060302@mitre.org> <22653.193.17.11.23.1148021135.squirrel@webmail.marquardt.sc> Message-ID: <446F1BF8.9010108@gmx.net> Christian Marquardt schrieb:
> Mine says February 28, 2006 on the title page.
>
> As this has come up quite often - is there any kind of mechanism to
> obtain the most recent version?
>

I would also kindly like to ask why nobody (?) gets the updates of the numpy book. I am totally in favor of the idea to raise funds by selling good documentation, but the deal was to get the updates when they come out. 
Thanks, Sven From aisaac at american.edu Sat May 20 07:23:01 2006 From: aisaac at american.edu (Alan G Isaac) Date: Sat May 20 07:23:01 2006 Subject: [Numpy-discussion] Guide to NumPy latest? In-Reply-To: <446F1BF8.9010108@gmx.net> References: <446CE229.9060302@mitre.org> <22653.193.17.11.23.1148021135.squirrel@webmail.marquardt.sc><446F1BF8.9010108@gmx.net> Message-ID: On Sat, 20 May 2006, Sven Schreiber apparently wrote: > I would also kindly like to ask why nobody (?) gets the > updates of the numpy book. I am totally in favor of the > idea to raise funds by selling good documentation, but the > deal was to get the updates when they come out. While you are phrasing this fairly politely, I think you should not *assume* you have not gotten your most recent "update". Rather you should ask: what is the definition of an "update"? You seem to be defining it by the date on the title page, but that need not be Travis's definition. (I would agree that he could reduce some traffic on this list by clarifying that, perhaps by versioning the book.) Personally, I do not care to receive a copy each time a few typos are corrected or some grammar is changed. I'd rather receive a copy each time there are important additions or corrections. I assume that this is Travis's practice. Cheers, Alan Isaac From oliphant.travis at ieee.org Sat May 20 07:54:05 2006 From: oliphant.travis at ieee.org (Travis Oliphant) Date: Sat May 20 07:54:05 2006 Subject: [Numpy-discussion] Updates to "Guide to NumPy" Message-ID: <446F2D68.90607@ieee.org> Owners of "Guide to NumPy": I'm a little late with updates. I really apologize for that. It's due to not having a good system in place to allow people to get the updates in a timely manner. The latest version is from March. It includes a bit more information about writing C-extensions and some corrections. I've been sending them to people who urgently need them and request it. I'm working on an update to go along with 0.9.8 release. 
When it is ready I will send a location and password for people to download it. I expect 2-3 weeks for its release. If you need the March copy earlier than that, please let me know and I'll send you a personal update.

Thanks for your patience. I hope to send updates to the manual at major releases of NumPy.

Best regards,

-Travis Oliphant

From olivetti at itc.it Sat May 20 13:58:06 2006 From: olivetti at itc.it (Emanuele Olivetti) Date: Sat May 20 13:58:06 2006 Subject: [Numpy-discussion] numpy and mayavi Message-ID: <20060520205738.GC17718@eloy.itc.it> I need to use a 3D visualization toolkit together with the amazing numpy. On the scipy website mayavi/VTK are suggested but as far as I tried, mayavi uses only Numeric and doesn't like numpy arrays (and mayavi's last release dates from 13 September 2005, so a bit early for numpy). Do you know something about it or could you suggest alternative 3D visualization packages? Basically my needs are related to this task: I've 3D matrices and I'd like to see them not only as 2D slices.

Thanks in advance,

Emanuele

From joris at ster.kuleuven.ac.be Sat May 20 15:15:00 2006 From: joris at ster.kuleuven.ac.be (joris at ster.kuleuven.ac.be) Date: Sat May 20 15:15:00 2006 Subject: [Numpy-discussion] ascii input/output cookbook doc Message-ID: <1148163231.446f949fd98cd@webmail.ster.kuleuven.ac.be> For your information: I just created a small cookbook document http://scipy.org/Cookbook/InputOutput where it is explained how one can read and write Numpy arrays in human readable (ascii) format.

The document describes how one can use read_array/write_array if SciPy is installed, or how one can use load/save if Matplotlib is installed. When neither of these two packages is installed, one basically has no other choice than to improvise, so I also give here a few examples of how one could do this.

Imho, there is something unsatisfactory about this need to improvise. Ascii input/output of numpy arrays seems to me a very basic need. 
Even when one defines Numpy crudely as the N-dimensional array object, and Scipy as the science you can do with these array objects, I would intuitively still expect that ascii input/output would belong to Numpy rather than to Scipy. There are Numpy support functions for binary format, and for pickled format, but strangely enough not really for ascii format. tofile() and fromfile() do not preserve the shape of a 2d array, and are in practice therefore hardly usable.

There may be a significant fraction of Numpy users that do not need SciPy for their work, and have only Numpy installed. My guess is that the read_array and write_array functions have already been re-invented many many times by these users. Imho, I therefore think that Numpy deserves its own read_array/write_array method. Does anyone else have this feeling, or am I the only one? :o)

Joris

Disclaimer: http://www.kuleuven.be/cwis/email_disclaimer.htm

From oliphant.travis at ieee.org Sat May 20 15:26:03 2006 From: oliphant.travis at ieee.org (Travis Oliphant) Date: Sat May 20 15:26:03 2006 Subject: [Numpy-discussion] ascii input/output cookbook doc In-Reply-To: <1148163231.446f949fd98cd@webmail.ster.kuleuven.ac.be> References: <1148163231.446f949fd98cd@webmail.ster.kuleuven.ac.be> Message-ID: <446F9751.5030108@ieee.org> joris at ster.kuleuven.ac.be wrote:
> For your information: I just created a small cookbook document
> http://scipy.org/Cookbook/InputOutput
> where it is explained how one can read and write Numpy arrays in human
> readable (ascii) format.
>
>
> The document describes how one can use read_array/write_array if SciPy
> is installed, or how one can use load/save if Matplotlib is installed.
> When neither of these two packages is installed, one basically has no
> other choice then to improvise, so I also give here a few examples how
> one could do this.
>
> Imho, there is something unsatisfactorily about this need to improvise. 
> Ascii input/output of numpy arrays seems to me a very basic need. Even > when one defines Numpy crudely as the N-dimensional array object, and > Scipy as the science you can do with these array objects, then I would > intuitively still expect that ascii input/output would belong to Numpy > rather than to Scipy. There are Numpy support functions for binary format, > and for pickled format, but strangely enough not really for ascii format. > tofile() and fromfile() do not preserve the shape of a 2d array, and are > in practice therefore hardly usable. > > There may be a signficant fraction of Numpy users that do not need SciPy > for their work, and have only Numpy installed. My guess is that the > read_array and write_array functions have already been re-invented many > many times by these users. Imho, I therefore think that Numpy deserves > its own read_array/write_array method. Does anyone else have this feeling, > or am I the only one? :o) > I think you are correct. I'd like to see better ascii input-output. That's why it's supported on a fundamental level in tofile and fromfile. SciPy's support for ascii reading and writing is rather slow as it has a lot of features. Something a little-less grandiose, but still able to read and write simple ascii tables would be a good thing to bring into NumPy. General-purpose parsing can be very difficult, but a simple parser for 2-d arrays would probably be very useful. On the other hand, I've found that even though it understands only one separator at this point, fromfile is still pretty useful for table processing as long as you know the shape of what you want. -Travis > Joris > > > Disclaimer: http://www.kuleuven.be/cwis/email_disclaimer.htm > > > > ------------------------------------------------------- > Using Tomcat but need to do more? Need to support web services, security? 
> Get stuff done quickly with pre-integrated technology to make your job easier > Download IBM WebSphere Application Server v.1.0.1 based on Apache Geronimo > http://sel.as-us.falkag.net/sel?cmd=lnk&kid=120709&bid=263057&dat=121642 > _______________________________________________ > Numpy-discussion mailing list > Numpy-discussion at lists.sourceforge.net > https://lists.sourceforge.net/lists/listinfo/numpy-discussion > From simon at arrowtheory.com Sat May 20 19:43:02 2006 From: simon at arrowtheory.com (Simon Burton) Date: Sat May 20 19:43:02 2006 Subject: [Numpy-discussion] implicit coersion int8 -> int16 ? Message-ID: <20060521122810.54f24311.simon@arrowtheory.com> >>> a=numpy.array([1,2,3],numpy.Int8) >>> a array([1, 2, 3], dtype=int8) >>> a*2 array([2, 4, 6], dtype=int16) >>> My little 2 lives happily as an int8, so why is the int8 array cast to an int16 ? I spent quite a while finding this bug. Furthermore, this works: >>> a*a array([1, 4, 9], dtype=int8) but not this: >>> a*numpy.array(2,dtype=numpy.Int8) array([2, 4, 6], dtype=int16) >>> print numpy.__version__ 0.9.9.2533 Simon. -- Simon Burton, B.Sc. Licensed PO Box 8066 ANU Canberra 2601 Australia Ph. 61 02 6249 6940 http://arrowtheory.com From simon at arrowtheory.com Sat May 20 19:45:04 2006 From: simon at arrowtheory.com (Simon Burton) Date: Sat May 20 19:45:04 2006 Subject: [Numpy-discussion] implicit coersion int8 -> int16 ? In-Reply-To: <20060521122810.54f24311.simon@arrowtheory.com> References: <20060521122810.54f24311.simon@arrowtheory.com> Message-ID: <20060521122954.21305c89.simon@arrowtheory.com> On Sun, 21 May 2006 12:28:10 +0100 Simon Burton wrote: > > >>> a=numpy.array([1,2,3],numpy.Int8) > >>> a > array([1, 2, 3], dtype=int8) > >>> a*2 > array([2, 4, 6], dtype=int16) > >>> > > My little 2 lives happily as an int8, so why is the int8 array > cast to an int16 ? As a workaround, i found this: >>> a*=2 >>> a array([2, 4, 6], dtype=int8) Simon. -- Simon Burton, B.Sc. 
Licensed PO Box 8066 ANU Canberra 2601 Australia Ph. 61 02 6249 6940 http://arrowtheory.com From oliphant.travis at ieee.org Sat May 20 20:27:08 2006 From: oliphant.travis at ieee.org (Travis Oliphant) Date: Sat May 20 20:27:08 2006 Subject: [Numpy-discussion] implicit coersion int8 -> int16 ? In-Reply-To: <20060521122810.54f24311.simon@arrowtheory.com> References: <20060521122810.54f24311.simon@arrowtheory.com> Message-ID: <446FDDF5.3090001@ieee.org> Simon Burton wrote: >>>> a=numpy.array([1,2,3],numpy.Int8) >>>> a >>>> > array([1, 2, 3], dtype=int8) > >>>> a*2 >>>> > array([2, 4, 6], dtype=int16) > > > My little 2 lives happily as an int8, so why is the int8 array > cast to an int16 ? > Thanks for catching this bug in the logic of scalar upcasting. It should only affect this one case. It's now been fixed. -Travis From bryan at cole.uklinux.net Sun May 21 00:40:01 2006 From: bryan at cole.uklinux.net (Bryan Cole) Date: Sun May 21 00:40:01 2006 Subject: [Numpy-discussion] Re: numpy and mayavi In-Reply-To: <20060520205738.GC17718@eloy.itc.it> References: <20060520205738.GC17718@eloy.itc.it> Message-ID: <1148197014.32313.61.camel@pc1.cole.uklinux.net> On Sat, 2006-05-20 at 22:57 +0200, Emanuele Olivetti wrote: > I need to use a 3D visualization toolkit together with the > amazing numpy. On the scipy website mayavi/VTK are suggested but > as far as I tried, mayavi uses only Numeric and doesn't like > numpy arrays (and mayavi, last release, dates 13 September 2005 > so a bit early for numpy). You can safely install Numeric alongside numpy; you'll need to do this in order to run mayavi. Alternatively, you could try Paraview (http://www.paraview.org). Like mayavi, Paraview is based on VTK, but it's written in Tcl/Tk rather than python. It's more feature complete than mayavi and easier to use, in my view. A quick test shows that VTK-5 is happy to accept numpy arrays as "Void Pointers" for its vtkDataArrays.
Using this method, you can construct any vtkDataObject from numpy arrays. If you just want to turn your 3D array into vtkImageData, you can use the vtkImageImport filter. Once you've got a vtkDataSet (vtkImageData or some other form), you can save this as a .vtk file, which either mayavi or paraview can then open. If you're not already familiar with VTK programming, then another way to get going is to convert your numpy array directly to a VTK data file using a pure python script. The file formats are documented at http://www.vtk.org/pdf/file-formats.pdf and they can be written in either binary or ascii form. The 'legacy' .vtk formats are quite simple to construct. If you want to use mayavi from a script, then you need to convert your numpy arrays to Numeric arrays (using the Numeric.asarray function) for transfer to mayavi as needed. HTH Bryan > Do you know something about it or > could suggest alternative 3D visualization packages? > Basically my needs are related to this task: I've 3D matrices > and I'd like to see them not only as 2D slices. > > Thanks in advance, > > Emanuele From aisaac at american.edu Sun May 21 06:06:05 2006 From: aisaac at american.edu (Alan G Isaac) Date: Sun May 21 06:06:05 2006 Subject: [Numpy-discussion] Re: numpy and mayavi In-Reply-To: <1148197014.32313.61.camel@pc1.cole.uklinux.net> References: <20060520205738.GC17718@eloy.itc.it> <1148197014.32313.61.camel@pc1.cole.uklinux.net> Message-ID: On Sun, 21 May 2006, Bryan Cole apparently wrote: > another way to get going is to convert your numpy array > directly to a VTK data file using a pure python script.
For which http://cens.ioc.ee/projects/pyvtk/ may be useful. Cheers, Alan Isaac From simon at arrowtheory.com Sun May 21 19:47:02 2006 From: simon at arrowtheory.com (Simon Burton) Date: Sun May 21 19:47:02 2006 Subject: [Numpy-discussion] casting homogeneous struct arrays Message-ID: <20060522124610.3b8ba40a.simon@arrowtheory.com> This is something I will need to be able to do: >>> a=numpy.array( [(1,2,3)], list('lll') ) >>> a.astype( 'l' ) Traceback (most recent call last): File "", line 1, in ? TypeError: an integer is required (and what a strange error message). Is there a workaround ? .tostring/.fromstring incurs a memory copy, if I am not mistaken. Simon. -- Simon Burton, B.Sc. Licensed PO Box 8066 ANU Canberra 2601 Australia Ph. 61 02 6249 6940 http://arrowtheory.com From oliphant.travis at ieee.org Mon May 22 00:48:12 2006 From: oliphant.travis at ieee.org (Travis Oliphant) Date: Mon May 22 00:48:12 2006 Subject: [Numpy-discussion] casting homogeneous struct arrays In-Reply-To: <20060522124610.3b8ba40a.simon@arrowtheory.com> References: <20060522124610.3b8ba40a.simon@arrowtheory.com> Message-ID: <44716C9A.4020502@ieee.org> Simon Burton wrote: > This is something I will need to be able to do: > > >>>> a=numpy.array( [(1,2,3)], list('lll') ) >>>> >>>> a.astype( 'l' ) >>>> Currently record-arrays can't be cast like this to built-in types. It's true the error message could be more informative. What do you think should actually be done here anyway? How do you want to cast 3 longs to 1 long?
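To make the question concrete: when every field of the record dtype has the same type, the fields can already be reinterpreted as an extra length-3 axis through a view, with no cast and no data copy. A minimal sketch against a present-day NumPy (the record dtype is written with explicit field tuples, and the field names are just placeholders; `list('lll')` as a dtype spec is specific to the NumPy of this thread):

```python
import numpy as np

# Hypothetical reconstruction of the array from the post, with the
# homogeneous record dtype spelled out: three C-long fields.
a = np.array([(1, 2, 3)], dtype=[('f0', 'l'), ('f1', 'l'), ('f2', 'l')])

# All fields share one type, so the records can be reinterpreted as a
# plain long array; .view reuses the same memory rather than copying.
flat = a.view('l').reshape(len(a), -1)

print(flat.shape)   # (1, 3)
print(flat)         # [[1 2 3]]
```

The same idea scales to any number of records: an (n,)-shaped record array with k identical fields becomes an (n, k) view with the fields as columns. Later NumPy releases also grew a named helper for this, numpy.lib.recfunctions.structured_to_unstructured.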
You can use the .view method to view a record-array as a different data-type without going through a data-copy using tostring and fromstring. -Travis From simon at arrowtheory.com Mon May 22 01:03:01 2006 From: simon at arrowtheory.com (Simon Burton) Date: Mon May 22 01:03:01 2006 Subject: [Numpy-discussion] casting homogeneous struct arrays In-Reply-To: <44716C9A.4020502@ieee.org> References: <20060522124610.3b8ba40a.simon@arrowtheory.com> <44716C9A.4020502@ieee.org> Message-ID: <20060522174701.78a6ee9e.simon@arrowtheory.com> On Mon, 22 May 2006 01:47:38 -0600 Travis Oliphant wrote: > > Simon Burton wrote: > > This is something I will need to be able to do: > > > > > >>>> a=numpy.array( [(1,2,3)], list('lll') ) > >>>> > >>>> a.astype( 'l' ) > >>>> > > Currently record-arrays can't be cast like this to built-in types. > It's true the error message could be more informative. > > What do you think should actually be done here anyway? How do you want > to cast 3 longs to 1 long? with shape = (1,3). I am visualizing the fields as columns, and when the array is homogeneous it seems natural to be able to switch between the two array types. Simon. -- Simon Burton, B.Sc. Licensed PO Box 8066 ANU Canberra 2601 Australia Ph. 61 02 6249 6940 http://arrowtheory.com From martin.wiechert at gmx.de Mon May 22 04:10:02 2006 From: martin.wiechert at gmx.de (Martin Wiechert) Date: Mon May 22 04:10:02 2006 Subject: [Numpy-discussion] numpy (?) bug. Message-ID: <200605221307.47277.martin.wiechert@gmx.de> Hi list, I've a rather huge and involved application which now that I've updated a couple of its dependencies (numpy/PyQwt/ScientificPython ...) keeps crashing on me after "certain patterns of interaction". I've pasted a typical backtrace below, the top part looks always very similar, in particular multiarray.so is always there. Also it's always an illegal call to free (). So you gurus out there, does this mean that numpy is the culprit? Any help would be appreciated.
Thanks, Martin *** glibc detected *** python: free(): invalid pointer: 0xb7a95ac0 *** ======= Backtrace: ========= /lib/libc.so.6[0xb7c00911] /lib/libc.so.6(__libc_free+0x84)[0xb7c01f84] /usr/local/lib/libpython2.4.so.1.0(PyObject_Free+0x51)[0xb7e3cf31] /usr/local/lib/python2.4/site-packages/numpy/core/multiarray.so[0xb7a47d97] /usr/local/lib/python2.4/site-packages/numpy/core/multiarray.so[0xb7a60dca] /usr/local/lib/python2.4/site-packages/numpy/core/umath.so[0xb7a2bd9f] /usr/local/lib/libpython2.4.so.1.0(PyCFunction_Call+0x11d)[0xb7e3964d] /usr/local/lib/libpython2.4.so.1.0(PyEval_EvalFrame+0x4e8e)[0xb7e7542e] /usr/local/lib/libpython2.4.so.1.0(PyEval_EvalCodeEx+0x869)[0xb7e765c9] /usr/local/lib/libpython2.4.so.1.0(PyEval_EvalFrame+0x408b)[0xb7e7462b] /usr/local/lib/libpython2.4.so.1.0(PyEval_EvalCodeEx+0x869)[0xb7e765c9] /usr/local/lib/libpython2.4.so.1.0[0xb7e25fce] /usr/local/lib/libpython2.4.so.1.0(PyObject_Call+0x37)[0xb7e0a217] /usr/local/lib/libpython2.4.so.1.0[0xb7e11b05] /usr/local/lib/libpython2.4.so.1.0(PyObject_Call+0x37)[0xb7e0a217] /usr/local/lib/libpython2.4.so.1.0[0xb7e5192e] /usr/local/lib/libpython2.4.so.1.0[0xb7e4aea5] /usr/local/lib/libpython2.4.so.1.0(PyObject_Call+0x37)[0xb7e0a217] /usr/local/lib/libpython2.4.so.1.0(PyEval_EvalFrame+0x25fd)[0xb7e72b9d] /usr/local/lib/libpython2.4.so.1.0(PyEval_EvalCodeEx+0x869)[0xb7e765c9] /usr/local/lib/libpython2.4.so.1.0(PyEval_EvalFrame+0x408b)[0xb7e7462b] /usr/local/lib/libpython2.4.so.1.0(PyEval_EvalCodeEx+0x869)[0xb7e765c9] /usr/local/lib/libpython2.4.so.1.0[0xb7e25efa] /usr/local/lib/libpython2.4.so.1.0(PyObject_Call+0x37)[0xb7e0a217] /usr/local/lib/libpython2.4.so.1.0[0xb7e11b05] /usr/local/lib/libpython2.4.so.1.0(PyObject_Call+0x37)[0xb7e0a217] /usr/local/lib/libpython2.4.so.1.0(PyEval_CallObjectWithKeywords+0x7c) [0xb7e6f91c] /usr/local/lib/python2.4/site-packages/sip.so[0xb3aa2817] /usr/local/lib/python2.4/site-packages/qt.so[0xb3b7bb80] /usr/lib/libqt-mt.so.3 
(_ZN7QObject15activate_signalEP15QConnectionListP8QUObject+0x16d)[0xb3618b5d] /usr/lib/libqt-mt.so.3 (_ZN9QListView13doubleClickedEP13QListViewItemRK6QPointi+0xe6)[0xb3964686] /usr/lib/libqt-mt.so.3 (_ZN9QListView29contentsMouseDoubleClickEventEP11QMouseEvent+0x168) [0xb36fc908] /usr/local/lib/python2.4/site-packages/qt.so[0xb3d6ae7c] /usr/lib/libqt-mt.so.3 (_ZN11QScrollView29viewportMouseDoubleClickEventEP11QMouseEvent+0xa5) [0xb372e345] /usr/local/lib/python2.4/site-packages/qt.so[0xb3d6a7bc] /usr/lib/libqt-mt.so.3(_ZN11QScrollView11eventFilterEP7QObjectP6QEvent+0x1e1) [0xb372b821] /usr/lib/libqt-mt.so.3(_ZN9QListView11eventFilterEP7QObjectP6QEvent+0xa6) [0xb36f9c96] /usr/local/lib/python2.4/site-packages/qt.so[0xb3d66aab] /usr/lib/libqt-mt.so.3(_ZN7QObject16activate_filtersEP6QEvent+0x5c) [0xb361845c] /usr/lib/libqt-mt.so.3(_ZN7QObject5eventEP6QEvent+0x3b)[0xb36184cb] /usr/lib/libqt-mt.so.3(_ZN7QWidget5eventEP6QEvent+0x2c)[0xb36514fc] /usr/lib/libqt-mt.so.3 (_ZN12QApplication14internalNotifyEP7QObjectP6QEvent+0x97)[0xb35b9c47] /usr/lib/libqt-mt.so.3(_ZN12QApplication6notifyEP7QObjectP6QEvent+0x1cb) [0xb35bab6b] /usr/local/lib/python2.4/site-packages/qt.so[0xb3f3bf05] /usr/lib/libqt-mt.so.3(_ZN9QETWidget19translateMouseEventEPK7_XEvent+0x4c2) [0xb3559c42] /usr/lib/libqt-mt.so.3(_ZN12QApplication15x11ProcessEventEP7_XEvent+0x916) [0xb3558e16] /usr/lib/libqt-mt.so.3(_ZN10QEventLoop13processEventsEj+0x4aa)[0xb356945a] /usr/lib/libqt-mt.so.3(_ZN10QEventLoop9enterLoopEv+0x48)[0xb35d0a78] /usr/lib/libqt-mt.so.3(_ZN10QEventLoop4execEv+0x2e)[0xb35d090e] /usr/lib/libqt-mt.so.3(_ZN12QApplication4execEv+0x1f)[0xb35b97ff] /usr/local/lib/python2.4/site-packages/qt.so[0xb3f39d6e] /usr/local/lib/libpython2.4.so.1.0(PyCFunction_Call+0x14d)[0xb7e3967d] /usr/local/lib/libpython2.4.so.1.0(PyEval_EvalFrame+0x4e8e)[0xb7e7542e] /usr/local/lib/libpython2.4.so.1.0(PyEval_EvalCodeEx+0x869)[0xb7e765c9] /usr/local/lib/libpython2.4.so.1.0(PyEval_EvalCode+0x63)[0xb7e76643] 
/usr/local/lib/libpython2.4.so.1.0(PyRun_FileExFlags+0xb7)[0xb7e9b5c7] /usr/local/lib/libpython2.4.so.1.0(PyRun_SimpleFileExFlags+0x198)[0xb7e9b7c8] /usr/local/lib/libpython2.4.so.1.0(PyRun_AnyFileExFlags+0x7a)[0xb7e9baba] /usr/local/lib/libpython2.4.so.1.0(Py_Main+0xbad)[0xb7ea1f3d] python(main+0x32)[0x80485e2] /lib/libc.so.6(__libc_start_main+0xdc)[0xb7bb287c] python[0x8048521] ======= Memory map: ======== 08048000-08049000 r-xp 00000000 03:05 205745 /usr/local/bin/python 08049000-0804a000 rw-p 00000000 03:05 205745 /usr/local/bin/python 0804a000-087cc000 rw-p 0804a000 00:00 0 [heap] b1f00000-b1f21000 rw-p b1f00000 00:00 0 b1f21000-b2000000 ---p b1f21000 00:00 0 b204b000-b21d7000 rw-p b204b000 00:00 0 b229d000-b24ef000 rw-p b229d000 00:00 0 b2551000-b2583000 rw-p b2551000 00:00 0 b25b5000-b25e7000 rw-p b25b5000 00:00 0 b2619000-b2713000 rw-p b2619000 00:00 0 b2745000-b2786000 rw-p b2745000 00:00 0 b2786000-b278f000 r-xp 00000000 03:05 42242 /usr/X11R6/lib/X11/locale/lib/common/xomGeneric.so.2 b278f000-b2790000 rw-p 00008000 03:05 42242 /usr/X11R6/lib/X11/locale/lib/common/xomGeneric.so.2 b2790000-b27d6000 r--p 00000000 03:05 47073 /var/X11R6/compose-cache/l2_024_35fe9fba b27d6000-b27f1000 r-xp 00000000 03:05 42237 /usr/X11R6/lib/X11/locale/lib/common/ximcp.so.2 b27f1000-b27f3000 rw-p 0001b000 03:05 42237 /usr/X11R6/lib/X11/locale/lib/common/ximcp.so.2 b27f3000-b2816000 r-xp 00000000 03:05 43814 /usr/lib/qt3/plugins/inputmethods/libqsimple.so b2816000-b2817000 rw-p 00022000 03:05 43814 /usr/lib/qt3/plugins/inputmethods/libqsimple.so b2817000-b281f000 r-xp 00000000 03:05 13957 /lib/libnss_files-2.4.so b281f000-b2821000 rw-p 00007000 03:05 13957 /lib/libnss_files-2.4.so b2821000-b2832000 r-xp 00000000 03:05 13951 /lib/libnsl-2.4.so b2832000-b2834000 rw-p 00010000 03:05 13951 /lib/libnsl-2.4.so b2834000-b2836000 rw-p b2834000 00:00 0 b2836000-b2860000 r-xp 00000000 03:05 61788 /opt/kde3/lib/libkdefx.so.4.2.0 b2860000-b2862000 rw-p 00029000 03:05 61788 
/opt/kde3/lib/libkdefx.so.4.2.0 b2862000-b2878000 r-xp 00000000 03:05 61766 /opt/kde3/lib/kde3/plugins/styles/light.so b2878000-b2879000 rw-p 00015000 03:05 61766 /opt/kde3/lib/kde3/plugins/styles/light.so b2879000-b289a000 r--p 00000000 03:05 36221 /usr/X11R6/lib/X11/fonts/truetype/DejaVuSerif.ttf b28a8000-b28c8000 r--p 00000000 03:05 36218 /usr/X11R6/lib/X11/fonts/truetype/DejaVuSerif-Bold.ttf b28c8000-b2909000 rw-p b28c8000 00:00 0 b2909000-b2934000 r-xp 00000000 03:05 19354 /usr/lib/liblcms.so.1.0.15 b2934000-b2936000 rw-p 0002a000 03:05 19354 /usr/lib/liblcms.so.1.0.15 b2936000-b2938000 rw-p b2936000 00:00 0 b2938000-b29a5000 r-xp 00000000 03:05 21150 /usr/lib/libmng.so.1.1.0.9 b29a5000-b29a8000 rw-p 0006c000 03:05 21150 /usr/lib/libmng.so.1.1.0.9 b29ab000-b29b3000 r-xp 00000000 03:05 43815 /usr/lib/qt3/plugins/inputmethods/libqxim.so b29b3000-b29b4000 rw-p 00008000 03:05 43815 /usr/lib/qt3/plugins/inputmethods/libqxim.so b29b4000-b29b7000 r-xp 00000000 03:05 43813 /usr/lib/qt3/plugins/inputmethods/libqimsw-none.so b29b7000-b29b8000 rw-p 00003000 03:05 43813 /usr/lib/qt3/plugins/inputmethods/libqimsw-none.so b29b8000-b29bf000 r-xp 00000000 03:05 43812 /usr/lib/qt3/plugins/inputmethods/libqimsw-multi.so b29bf000-b29c0000 rw-p 00007000 03:05 43812 /usr/lib/qt3/plugins/inputmethods/libqimsw-multi.so b29c0000-b29de000 r-xp 00000000 03:05 17664 /usr/lib/libjpeg.so.62.0.0 b29de000-b29df000 rw-p 0001d000 03:05 17664 /usr/lib/libjpeg.so.62.0.0 b29e0000-b29e8000 r-xp 00000000 03:05 13961 /lib/libnss_nis-2.4.so b29e8000-b29ea000 rw-p 00007000 03:05 13961 /lib/libnss_nis-2.4.so b29ea000-b29f0000 r-xp 00000000 03:05 13953 /lib/libnss_compat-2.4.so b29f0000-b29f2000 rw-p 00005000 03:05 13953 /lib/libnss_compat-2.4.so b29f2000-b29f6000 r-xp 00000000 03:05 43810 /usr/lib/qt3/plugins/imageformats/libqmng.so b29f6000-b29f7000 rw-p 00003000 03:05 43810 /usr/lib/qt3/plugins/imageformats/libqmng.so b29f7000-b29f8000 r-xp 00000000 03:05 42239 
/usr/X11R6/lib/X11/locale/lib/common/xlcUTF8Load.so.2 b29f8000-b29f9000 rw-p 00000000 03:05 42239 /usr/X11R6/lib/X11/locale/lib/common/xlcUTF8Load.so.2 b29f9000-b29ff000 r--s 00001000 03:05 68835 /var/cache/fontconfig/d0814903482a18ed8717ceb08fcf4410.cache-2 b29ff000-b2a04000 r--s 00001000 0Aborted From robert.kern at gmail.com Mon May 22 04:23:00 2006 From: robert.kern at gmail.com (Robert Kern) Date: Mon May 22 04:23:00 2006 Subject: [Numpy-discussion] Re: numpy (?) bug. In-Reply-To: <200605221307.47277.martin.wiechert@gmx.de> References: <200605221307.47277.martin.wiechert@gmx.de> Message-ID: Martin Wiechert wrote: > Hi list, > > I've a rather huge and involved application which now that I've updateded a > couple of its dependencies (numpy/PyQwt/ScientificPython ...) keeps crashing > on me after "certain patterns of interaction". I've pasted a typical > backtrace below, the top part looks always very similar, in particular > multiarray.so is always there. Also it's always an illegal call to free (). > > So you gurus out there, does this mean that numpy is the culprit? Possibly. Without access to your code, it's impossible for us to tell and even more impossible for us to fix it. If you can narrow it down to the function in numpy.core.multiarray that's being called and the arguments you are passing to it, we might be able to do something. -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From martin.wiechert at gmx.de Mon May 22 05:17:04 2006 From: martin.wiechert at gmx.de (Martin Wiechert) Date: Mon May 22 05:17:04 2006 Subject: [Numpy-discussion] Re: numpy (?) bug. In-Reply-To: References: <200605221307.47277.martin.wiechert@gmx.de> Message-ID: <200605221415.11183.martin.wiechert@gmx.de> Robert, Thanks for your reply. I have managed to get a gdb backtrace, see below. Does this help? 
Can I extract more useful information from gdb (never used it before)? Btw. I have no problem showing my code (besides embarrassment), but I'm pretty certain you don't want to read it ;-) I'll try to narrow it down, but it's difficult, e.g. it doesn't seem to happen the first time the guilty code is executed but rather when it is called the third or fourth time or even later. Thanks, Martin. On Monday 22 May 2006 13:21, Robert Kern wrote: > Martin Wiechert wrote: > > Hi list, > > > > I've a rather huge and involved application which now that I've updateded > > a couple of its dependencies (numpy/PyQwt/ScientificPython ...) keeps > > crashing on me after "certain patterns of interaction". I've pasted a > > typical backtrace below, the top part looks always very similar, in > > particular multiarray.so is always there. Also it's always an illegal > > call to free (). > > > > So you gurus out there, does this mean that numpy is the culprit? > > Possibly. Without access to your code, it's impossible for us to tell and > even more impossible for us to fix it. If you can narrow it down to the > function in numpy.core.multiarray that's being called and the arguments you > are passing to it, we might be able to do something. 
(gdb) backtrace #0 0xffffe410 in __kernel_vsyscall () #1 0xb7c6b7d0 in raise () from /lib/libc.so.6 #2 0xb7c6cea3 in abort () from /lib/libc.so.6 #3 0xb7ca0f8b in __libc_message () from /lib/libc.so.6 #4 0xb7ca6911 in malloc_printerr () from /lib/libc.so.6 #5 0xb7ca7f84 in free () from /lib/libc.so.6 #6 0xb7ee2f31 in PyObject_Free (p=0x2) at Objects/obmalloc.c:798 #7 0xb7af9d97 in arraydescr_dealloc (self=0xb7b47ac0) at numpy/core/src/arrayobject.c:8956 #8 0xb7b12dca in array_dealloc (self=0x8714a18) at numpy/core/src/arrayobject.c:1477 #9 0xb7addd9f in PyUFunc_GenericReduction (self=0x80a49a0, args=0xb299152c, kwds=, operation=2) at numpy/core/src/ufuncobject.c:2521 #10 0xb7edf64d in PyCFunction_Call (func=0xb2d2c82c, arg=0xb299152c, kw=0x6) at Objects/methodobject.c:77 #11 0xb7f1b42e in PyEval_EvalFrame (f=0x8299d24) at Python/ceval.c:3563 #12 0xb7f1c5c9 in PyEval_EvalCodeEx (co=0xb2f623a0, globals=0xb41e59bc, locals=0x0, args=0x812dd1c, argcount=1, kws=0x812dd20, kwcount=0, defs=0x0, defcount=0, closure=0x0) at Python/ceval.c:2736 #13 0xb7f1a62b in PyEval_EvalFrame (f=0x812db8c) at Python/ceval.c:3656 #14 0xb7f1c5c9 in PyEval_EvalCodeEx (co=0xb2f414a0, globals=0xb2f3a0b4, locals=0x0, args=0xb2d45c78, argcount=1, kws=0x869bdf8, kwcount=2, defs=0xb2d148f8, defcount=8, closure=0x0) at Python/ceval.c:2736 #15 0xb7ecbfce in function_call (func=0xb2d148b4, arg=0xb2d45c6c, kw=0xb28052d4) at Objects/funcobject.c:548 #16 0xb7eb0217 in PyObject_Call (func=0x5e02, arg=0xb2d45c6c, kw=0xb28052d4) at Objects/abstract.c:1795 #17 0xb7eb7b05 in instancemethod_call (func=0xb427e324, arg=0xb2d45c6c, kw=0xb28052d4) at Objects/classobject.c:2447 #18 0xb7eb0217 in PyObject_Call (func=0x5e02, arg=0xb7c0102c, kw=0xb28052d4) at Objects/abstract.c:1795 #19 0xb7ef792e in slot_tp_init (self=0xb27fdbac, args=0xb7c0102c, kwds=0xb28052d4) at Objects/typeobject.c:4759 #20 0xb7ef0ea5 in type_call (type=, args=0xb7c0102c, kwds=0xb28052d4) at Objects/typeobject.c:435 #21 0xb7eb0217 in 
PyObject_Call (func=0x5e02, arg=0xb7c0102c, kw=0xb28052d4) at Objects/abstract.c:1795 #22 0xb7f18b9d in PyEval_EvalFrame (f=0x849ef24) at Python/ceval.c:3771 #23 0xb7f1c5c9 in PyEval_EvalCodeEx (co=0xb2d1b0a0, globals=0xb2d12f0c, locals=0x0, args=0x80ab9ec, argcount=2, kws=0x80ab9f4, kwcount=0, defs=0x0, defcount=0, closure=0x0) at Python/ceval.c:2736 #24 0xb7f1a62b in PyEval_EvalFrame (f=0x80ab88c) at Python/ceval.c:3656 #25 0xb7f1c5c9 in PyEval_EvalCodeEx (co=0xb7bd63e0, globals=0xb7bcd24c, locals=0x0, args=0xb297eee8, argcount=4, kws=0x0, kwcount=0, defs=0xb2d27d18, defcount=2, closure=0x0) at Python/ceval.c:2736 #26 0xb7ecbefa in function_call (func=0xb2d31b8c, arg=0xb297eedc, kw=0x0) at Objects/funcobject.c:548 #27 0xb7eb0217 in PyObject_Call (func=0x5e02, arg=0xb297eedc, kw=0x0) at Objects/abstract.c:1795 #28 0xb7eb7b05 in instancemethod_call (func=0xb427e2fc, arg=0xb297eedc, kw=0x0) at Objects/classobject.c:2447 #29 0xb7eb0217 in PyObject_Call (func=0x5e02, arg=0xb29a8d9c, kw=0x0) at Objects/abstract.c:1795 #30 0xb7f1591c in PyEval_CallObjectWithKeywords (func=0xb427e2fc, arg=0xb29a8d9c, kw=0x0) at Python/ceval.c:3430 #31 0xb3b48817 in initsip () from /usr/local/lib/python2.4/site-packages/sip.so #32 0xb3c21b80 in initqt () from /usr/local/lib/python2.4/site-packages/qt.so #33 0xb36beb5d in QObject::activate_signal () from /usr/lib/libqt-mt.so.3 #34 0xb3a0a686 in QListView::doubleClicked () from /usr/lib/libqt-mt.so.3 #35 0xb37a2908 in QListView::contentsMouseDoubleClickEvent () from /usr/lib/libqt-mt.so.3 #36 0xb3e10e7c in initqt () from /usr/local/lib/python2.4/site-packages/qt.so #37 0xb37d4345 in QScrollView::viewportMouseDoubleClickEvent () from /usr/lib/libqt-mt.so.3 #38 0xb3e107bc in initqt () from /usr/local/lib/python2.4/site-packages/qt.so #39 0xb37d1821 in QScrollView::eventFilter () from /usr/lib/libqt-mt.so.3 #40 0xb379fc96 in QListView::eventFilter () from /usr/lib/libqt-mt.so.3 #41 0xb3e0caab in initqt () from 
/usr/local/lib/python2.4/site-packages/qt.so #42 0xb36be45c in QObject::activate_filters () from /usr/lib/libqt-mt.so.3 #43 0xb36be4cb in QObject::event () from /usr/lib/libqt-mt.so.3 #44 0xb36f74fc in QWidget::event () from /usr/lib/libqt-mt.so.3 #45 0xb365fc47 in QApplication::internalNotify () from /usr/lib/libqt-mt.so.3 #46 0xb3660b6b in QApplication::notify () from /usr/lib/libqt-mt.so.3 #47 0xb3fe1f05 in initqt () from /usr/local/lib/python2.4/site-packages/qt.so #48 0xb35ffc42 in QETWidget::translateMouseEvent () from /usr/lib/libqt-mt.so.3 #49 0xb35fee16 in QApplication::x11ProcessEvent () from /usr/lib/libqt-mt.so.3 #50 0xb360f45a in QEventLoop::processEvents () from /usr/lib/libqt-mt.so.3 #51 0xb3676a78 in QEventLoop::enterLoop () from /usr/lib/libqt-mt.so.3 #52 0xb367690e in QEventLoop::exec () from /usr/lib/libqt-mt.so.3 #53 0xb365f7ff in QApplication::exec () from /usr/lib/libqt-mt.so.3 #54 0xb3fdfd6e in initqt () from /usr/local/lib/python2.4/site-packages/qt.so #55 0xb7edf67d in PyCFunction_Call (func=0xb2f42c8c, arg=0xb7c0102c, kw=0x0) at Objects/methodobject.c:108 #56 0xb7f1b42e in PyEval_EvalFrame (f=0x80577d4) at Python/ceval.c:3563 #57 0xb7f1c5c9 in PyEval_EvalCodeEx (co=0xb7be1360, globals=0xb7c19824, locals=0xb7c19824, args=0x0, argcount=0, kws=0x0, kwcount=0, defs=0x0, defcount=0, closure=0x0) at Python/ceval.c:2736 #58 0xb7f1c643 in PyEval_EvalCode (co=0xb7be1360, globals=0xb7c19824, locals=0xb7c19824) at Python/ceval.c:484 #59 0xb7f415c7 in PyRun_FileExFlags (fp=0x804a008, filename=0xbfb61faf "onset_view.py", start=257, globals=0xb7c19824, locals=0xb7c19824, closeit=1, flags=0xbfb60814) at Python/pythonrun.c:1265 #60 0xb7f417c8 in PyRun_SimpleFileExFlags (fp=0x804a008, filename=0xbfb61faf "onset_view.py", closeit=1, flags=0xbfb60814) at Python/pythonrun.c:860 #61 0xb7f41aba in PyRun_AnyFileExFlags (fp=0x804a008, filename=0xbfb61faf "onset_view.py", closeit=1, flags=0xbfb60814) at Python/pythonrun.c:664 #62 0xb7f47f3d in Py_Main (argc=1, 
argv=0xbfb608e4) at Modules/main.c:493 #63 0x080485e2 in main (argc=Cannot access memory at address 0x5e02 ) at Modules/python.c:23 From ivilata at carabos.com Mon May 22 05:20:13 2006 From: ivilata at carabos.com (Ivan Vilata i Balaguer) Date: Mon May 22 05:20:13 2006 Subject: [Numpy-discussion] Power with negative exponent Message-ID: <4471AC10.9070000@carabos.com> Hi all, when working with numexpr, I have come across a curiosity in both numarray and numpy:: In [30]:b = numpy.array([1,2,3,4]) In [31]:b ** -1 Out[31]:array([1, 0, 0, 0]) In [32]:4 ** -1 Out[32]:0.25 In [33]: According to http://docs.python.org/ref/power.html: For int and long int operands, the result has the same type as the operands (after coercion) unless the second argument is negative; in that case, all arguments are converted to float and a float result is delivered. Then, shouldn't ``b ** -1`` be ``array([1.0, 0.5, 0.33333333, 0.25])``? Is this behaviour intentional? (I googled for previous messages on the topic but I didn't find any.) Thanks, :: Ivan Vilata i Balaguer >qo< http://www.carabos.com/ Cárabos Coop. V. V V Enjoy Data "" -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 307 bytes Desc: OpenPGP digital signature URL: From simon at arrowtheory.com Mon May 22 05:49:04 2006 From: simon at arrowtheory.com (Simon Burton) Date: Mon May 22 05:49:04 2006 Subject: [Numpy-discussion] Re: numpy (?) bug.
In-Reply-To: <200605221415.11183.martin.wiechert@gmx.de> References: <200605221307.47277.martin.wiechert@gmx.de> <200605221415.11183.martin.wiechert@gmx.de> Message-ID: <20060522223128.0967a826.simon@arrowtheory.com> On Mon, 22 May 2006 14:15:11 +0200 Martin Wiechert wrote: > #9 0xb7addd9f in PyUFunc_GenericReduction (self=0x80a49a0, args=0xb299152c, > kwds=, operation=2) > at numpy/core/src/ufuncobject.c:2521 I was getting segfaults with version 0.9.7.2523: #0 0xb7c5d612 in PyUFunc_GenericFunction (self=0x81b6d08, args=0x76484a40, mps=0xbfffad10) at ufuncobject.c:1484 An svn update fixed it. Simon. -- Simon Burton, B.Sc. Licensed PO Box 8066 ANU Canberra 2601 Australia Ph. 61 02 6249 6940 http://arrowtheory.com From Martin.Wiechert at mpimf-heidelberg.mpg.de Mon May 22 07:26:09 2006 From: Martin.Wiechert at mpimf-heidelberg.mpg.de (Martin Takeo Wiechert) Date: Mon May 22 07:26:09 2006 Subject: [Numpy-discussion] Re: numpy (?) bug. In-Reply-To: <200605221307.47277.martin.wiechert@gmx.de> References: <200605221307.47277.martin.wiechert@gmx.de> Message-ID: <200605221624.01235.wiechert@mpimf-heidelberg.mpg.de> Robert, I nailed it down. Look at the short interactive session below.
numpy version is 0.9.8. Regards, Martin. P.S.: Simon, thanks for your hint. 0.9.8 is only a few days old. When did you do your svn update? Python 2.4.3 (#1, May 12 2006, 05:35:54) [GCC 4.1.0 (SUSE Linux)] on linux2 Type "help", "copyright", "credits" or "license" for more information. >>> from numpy import * >>> multiply.reduceat ((15,15,15,15), (0,2)) array([225, 225]) >>> multiply.reduceat ((15,15,15,15), (0,2)) *** glibc detected *** python: free(): invalid pointer: 0xb7a2eac0 *** ======= Backtrace: ========= /lib/libc.so.6[0xb7c1a911] /lib/libc.so.6(__libc_free+0x84)[0xb7c1bf84] /usr/local/lib/libpython2.4.so.1.0(PyObject_Free+0x51)[0xb7e56f31] /usr/local/lib/python2.4/site-packages/numpy/core/multiarray.so[0xb79e0d97] /usr/local/lib/python2.4/site-packages/numpy/core/multiarray.so[0xb79f9dca] /usr/local/lib/python2.4/site-packages/numpy/core/umath.so[0xb7983d9f] /usr/local/lib/libpython2.4.so.1.0(PyCFunction_Call+0x11d)[0xb7e5364d] /usr/local/lib/libpython2.4.so.1.0(PyEval_EvalFrame+0x4e8e)[0xb7e8f42e] /usr/local/lib/libpython2.4.so.1.0(PyEval_EvalCodeEx+0x869)[0xb7e905c9] /usr/local/lib/libpython2.4.so.1.0(PyEval_EvalCode+0x63)[0xb7e90643] /usr/local/lib/libpython2.4.so.1.0(PyRun_InteractiveOneFlags+0x1fd) [0xb7eb512d] /usr/local/lib/libpython2.4.so.1.0(PyRun_InteractiveLoopFlags+0x5b) [0xb7eb526b] /usr/local/lib/libpython2.4.so.1.0(PyRun_AnyFileExFlags+0x47)[0xb7eb5a87] /usr/local/lib/libpython2.4.so.1.0(Py_Main+0xbad)[0xb7ebbf3d] python(main+0x32)[0x80485e2] /lib/libc.so.6(__libc_start_main+0xdc)[0xb7bcc87c] python[0x8048521] ======= Memory map: ======== 08048000-08049000 r-xp 00000000 03:05 205745 /usr/local/bin/python 08049000-0804a000 rw-p 00000000 03:05 205745 /usr/local/bin/python 0804a000-081ad000 rw-p 0804a000 00:00 0 [heap] b7000000-b7021000 rw-p b7000000 00:00 0 b7021000-b7100000 ---p b7021000 00:00 0 b71b4000-b7297000 rw-p b71b4000 00:00 0 b7297000-b72b2000 r-xp 00000000 03:05 212490 /usr/local/lib/python2.4/site-packages/numpy/random/mtrand.so 
b72b2000-b72b6000 rw-p 0001a000 03:05 212490 /usr/local/lib/python2.4/site-packages/numpy/random/mtrand.so b72b6000-b72d0000 r-xp 00000000 03:05 201845 /usr/lib/libg2c.so.0.0.0 b72d0000-b72d1000 rw-p 00019000 03:05 201845 /usr/lib/libg2c.so.0.0.0 b72d1000-b72d4000 rw-p b72d1000 00:00 0 b72e2000-b72eb000 r-xp 00000000 03:05 212480 /usr/local/lib/python2.4/site-packages/numpy/dft/fftpack_lite.so b72eb000-b72ec000 rw-p 00008000 03:05 212480 /usr/local/lib/python2.4/site-packages/numpy/dft/fftpack_lite.so b72ec000-b758c000 r-xp 00000000 03:05 212489 /usr/local/lib/python2.4/site-packages/numpy/linalg/lapack_lite.so b758c000-b758e000 rw-p 0029f000 03:05 212489 /usr/local/lib/python2.4/site-packages/numpy/linalg/lapack_lite.so b758e000-b75ef000 rw-p b758e000 00:00 0 b75ef000-b75f2000 r-xp 00000000 03:05 208618 /usr/local/lib/python2.4/lib-dynload/math.so b75f2000-b75f3000 rw-p 00002000 03:05 208618 /usr/local/lib/python2.4/lib-dynload/math.so b75f3000-b75f5000 r-xp 00000000 03:05 212481 /usr/local/lib/python2.4/site-packages/numpy/lib/_compiled_base.so b75f5000-b75f6000 rw-p 00002000 03:05 212481 /usr/local/lib/python2.4/site-packages/numpy/lib/_compiled_base.so b75f6000-b7610000 r-xp 00000000 03:05 212486 /usr/local/lib/python2.4/site-packages/numpy/core/scalarmath.so b7610000-b7611000 rw-p 00019000 03:05 212486 /usr/local/lib/python2.4/site-packages/numpy/core/scalarmath.so b7611000-b7614000 r-xp 00000000 03:05 208625 /usr/local/lib/python2.4/lib-dynload/mmap.so b7614000-b7615000 rw-p 00003000 03:05 208625 /usr/local/lib/python2.4/lib-dynload/mmap.so b7615000-b7656000 rw-p b7615000 00:00 0 b7656000-b765a000 r-xp 00000000 03:05 208644 /usr/local/lib/python2.4/lib-dynload/strop.so b765a000-b765c000 rw-p 00003000 03:05 208644 /usr/local/lib/python2.4/lib-dynload/strop.so b765c000-b765f000 r-xp 00000000 03:05 208595 /usr/local/lib/python2.4/lib-dynload/cStringIO.so b765f000-b7660000 rw-p 00003000 03:05 208595 /usr/local/lib/python2.4/lib-dynload/cStringIO.so 
b7660000-b7671000 r-xp 00000000 03:05 208619 /usr/local/lib/python2.4/lib-dynload/cPickle.so b7671000-b7672000 rw-p 00010000 03:05 208619 /usr/local/lib/python2.4/lib-dynload/cPickle.so b7672000-b7964000 r-xp 00000000 03:05 212484 /usr/local/lib/python2.4/site-packages/numpy/core/_dotblas.so b7964000-b7966000 rw-p 002f1000 03:05 212484 /usr/local/lib/python2.4/site-packages/numpy/core/_dotblas.so b7966000-b798e000 r-xp 00000000 03:05 212487 /usr/local/lib/python2.4/site-packages/numpy/core/umath.so b798e000-b7991000 rw-p 00027000 03:05 212487 /usr/local/lib/python2.4/site-packages/numpy/core/umath.so b7991000-b79d3000 rw-p b7991000 00:00 0 b79d3000-b7a28000 r-xp 00000000 03:05 212482 /usr/local/lib/python2.4/site-packages/numpy/core/multiarray.so b7a28000-b7a32000 rw-p 00054000 03:05 212482 /usr/local/lib/python2.4/site-packages/numpy/core/multiarray.so b7a32000-b7a6d000 r-xp 00000000 03:05 17777 /lib/libncurses.so.5.5 b7a6d000-b7a78000 rw-p 0003a000 03:05 17777 /lib/libncurses.so.5.5 b7a78000-b7a79000 rw-p b7a78000 00:00 0 b7a79000-b7aba000 r-xp 00000000 03:05 17792 /usr/lib/libncursesw.so.5.5 b7aba000-b7ac6000 rw-p 00040000 03:05 17792 /usr/lib/libncursesw.so.5.5 b7ac6000-b7af0000 r-xp 00000000 03:05 18393 /lib/libreadline.so.5.1 b7af0000-b7af4000 rw-p 0002a000 03:05 18393 /lib/libreadline.so.5.1 b7af4000-b7af5000 rw-p b7af4000 00:00 0 b7af5000-b7af8000 r-xp 00000000 03:05 208646 /usr/local/lib/python2.4/lib-dynload/readline.so b7af8000-b7af9000 rw-p 000030Aborted From fullung at gmail.com Mon May 22 07:35:01 2006 From: fullung at gmail.com (Albert Strasheim) Date: Mon May 22 07:35:01 2006 Subject: [Numpy-discussion] Re: numpy (?) bug. In-Reply-To: <200605221624.01235.wiechert@mpimf-heidelberg.mpg.de> Message-ID: <005001c67dac$ba3b1940$0a84a8c0@dsp.sun.ac.za> Hello all This bug seems to be present in 0.9.9.2536. Martin, it would be great if you could create a ticket in Trac. 
http://projects.scipy.org/scipy/numpy/newticket Regards, Albert > -----Original Message----- > From: numpy-discussion-admin at lists.sourceforge.net [mailto:numpy- > discussion-admin at lists.sourceforge.net] On Behalf Of Martin Takeo Wiechert > Sent: 22 May 2006 16:24 > To: numpy-discussion at lists.sourceforge.net > Subject: Re: [Numpy-discussion] Re: numpy (?) bug. > > Robert, > > I nailed it down. Look at the short interactive session below. numpy > version > is 0.9.8. > > Regards, Martin. > > P.S.: Simon, thanks for your hint. 0.9.8 is only a few days old. When did > you > do your svn update? From nwagner at iam.uni-stuttgart.de Mon May 22 07:35:13 2006 From: nwagner at iam.uni-stuttgart.de (Nils Wagner) Date: Mon May 22 07:35:13 2006 Subject: [Fwd: Re: [Numpy-discussion] Re: numpy (?) bug.] Message-ID: <4471CBF0.3090602@iam.uni-stuttgart.de> -------------- next part -------------- An embedded message was scrubbed... From: unknown sender Subject: no subject Date: no date Size: 38 URL: From nwagner at iam.uni-stuttgart.de Mon May 22 10:32:05 2006 From: nwagner at iam.uni-stuttgart.de (Nils Wagner) Date: Mon, 22 May 2006 16:32:05 +0200 Subject: [Numpy-discussion] Re: numpy (?) bug. In-Reply-To: <200605221624.01235.wiechert@mpimf-heidelberg.mpg.de> References: <200605221307.47277.martin.wiechert@gmx.de> <200605221624.01235.wiechert@mpimf-heidelberg.mpg.de> Message-ID: <4471CB65.9090903@iam.uni-stuttgart.de> Martin Takeo Wiechert wrote: > Robert, > > I nailed it down. Look at the short interactive session below. numpy version > is 0.9.8. > > Regards, Martin. > > P.S.: Simon, thanks for your hint. 0.9.8 is only a few days old. When did you > do your svn update? > > > Python 2.4.3 (#1, May 12 2006, 05:35:54) > [GCC 4.1.0 (SUSE Linux)] on linux2 > Type "help", "copyright", "credits" or "license" for more information. 
> >>>> from numpy import * >>>> multiply.reduceat ((15,15,15,15), (0,2)) >>>> > array([225, 225]) > >>>> multiply.reduceat ((15,15,15,15), (0,2)) >>>> > *** glibc detected *** python: free(): invalid pointer: 0xb7a2eac0 *** > ======= Backtrace: ========= > /lib/libc.so.6[0xb7c1a911] > /lib/libc.so.6(__libc_free+0x84)[0xb7c1bf84] > /usr/local/lib/libpython2.4.so.1.0(PyObject_Free+0x51)[0xb7e56f31] > /usr/local/lib/python2.4/site-packages/numpy/core/multiarray.so[0xb79e0d97] > /usr/local/lib/python2.4/site-packages/numpy/core/multiarray.so[0xb79f9dca] > /usr/local/lib/python2.4/site-packages/numpy/core/umath.so[0xb7983d9f] > /usr/local/lib/libpython2.4.so.1.0(PyCFunction_Call+0x11d)[0xb7e5364d] > /usr/local/lib/libpython2.4.so.1.0(PyEval_EvalFrame+0x4e8e)[0xb7e8f42e] > /usr/local/lib/libpython2.4.so.1.0(PyEval_EvalCodeEx+0x869)[0xb7e905c9] > /usr/local/lib/libpython2.4.so.1.0(PyEval_EvalCode+0x63)[0xb7e90643] > /usr/local/lib/libpython2.4.so.1.0(PyRun_InteractiveOneFlags+0x1fd) > [0xb7eb512d] > /usr/local/lib/libpython2.4.so.1.0(PyRun_InteractiveLoopFlags+0x5b) > [0xb7eb526b] > /usr/local/lib/libpython2.4.so.1.0(PyRun_AnyFileExFlags+0x47)[0xb7eb5a87] > /usr/local/lib/libpython2.4.so.1.0(Py_Main+0xbad)[0xb7ebbf3d] > python(main+0x32)[0x80485e2] > /lib/libc.so.6(__libc_start_main+0xdc)[0xb7bcc87c] > python[0x8048521] > ======= Memory map: ======== > 08048000-08049000 r-xp 00000000 03:05 205745 /usr/local/bin/python > 08049000-0804a000 rw-p 00000000 03:05 205745 /usr/local/bin/python > 0804a000-081ad000 rw-p 0804a000 00:00 0 [heap] > b7000000-b7021000 rw-p b7000000 00:00 0 > b7021000-b7100000 ---p b7021000 00:00 0 > b71b4000-b7297000 rw-p b71b4000 00:00 0 > b7297000-b72b2000 r-xp 00000000 03:05 > 212490 /usr/local/lib/python2.4/site-packages/numpy/random/mtrand.so > b72b2000-b72b6000 rw-p 0001a000 03:05 > 212490 /usr/local/lib/python2.4/site-packages/numpy/random/mtrand.so > b72b6000-b72d0000 r-xp 00000000 03:05 201845 /usr/lib/libg2c.so.0.0.0 > b72d0000-b72d1000 rw-p 
00019000 03:05 201845 /usr/lib/libg2c.so.0.0.0 > b72d1000-b72d4000 rw-p b72d1000 00:00 0 > b72e2000-b72eb000 r-xp 00000000 03:05 > 212480 /usr/local/lib/python2.4/site-packages/numpy/dft/fftpack_lite.so > b72eb000-b72ec000 rw-p 00008000 03:05 > 212480 /usr/local/lib/python2.4/site-packages/numpy/dft/fftpack_lite.so > b72ec000-b758c000 r-xp 00000000 03:05 > 212489 /usr/local/lib/python2.4/site-packages/numpy/linalg/lapack_lite.so > b758c000-b758e000 rw-p 0029f000 03:05 > 212489 /usr/local/lib/python2.4/site-packages/numpy/linalg/lapack_lite.so > b758e000-b75ef000 rw-p b758e000 00:00 0 > b75ef000-b75f2000 r-xp 00000000 03:05 > 208618 /usr/local/lib/python2.4/lib-dynload/math.so > b75f2000-b75f3000 rw-p 00002000 03:05 > 208618 /usr/local/lib/python2.4/lib-dynload/math.so > b75f3000-b75f5000 r-xp 00000000 03:05 > 212481 /usr/local/lib/python2.4/site-packages/numpy/lib/_compiled_base.so > b75f5000-b75f6000 rw-p 00002000 03:05 > 212481 /usr/local/lib/python2.4/site-packages/numpy/lib/_compiled_base.so > b75f6000-b7610000 r-xp 00000000 03:05 > 212486 /usr/local/lib/python2.4/site-packages/numpy/core/scalarmath.so > b7610000-b7611000 rw-p 00019000 03:05 > 212486 /usr/local/lib/python2.4/site-packages/numpy/core/scalarmath.so > b7611000-b7614000 r-xp 00000000 03:05 > 208625 /usr/local/lib/python2.4/lib-dynload/mmap.so > b7614000-b7615000 rw-p 00003000 03:05 > 208625 /usr/local/lib/python2.4/lib-dynload/mmap.so > b7615000-b7656000 rw-p b7615000 00:00 0 > b7656000-b765a000 r-xp 00000000 03:05 > 208644 /usr/local/lib/python2.4/lib-dynload/strop.so > b765a000-b765c000 rw-p 00003000 03:05 > 208644 /usr/local/lib/python2.4/lib-dynload/strop.so > b765c000-b765f000 r-xp 00000000 03:05 > 208595 /usr/local/lib/python2.4/lib-dynload/cStringIO.so > b765f000-b7660000 rw-p 00003000 03:05 > 208595 /usr/local/lib/python2.4/lib-dynload/cStringIO.so > b7660000-b7671000 r-xp 00000000 03:05 > 208619 /usr/local/lib/python2.4/lib-dynload/cPickle.so > b7671000-b7672000 rw-p 00010000 03:05 > 
208619 /usr/local/lib/python2.4/lib-dynload/cPickle.so > b7672000-b7964000 r-xp 00000000 03:05 > 212484 /usr/local/lib/python2.4/site-packages/numpy/core/_dotblas.so > b7964000-b7966000 rw-p 002f1000 03:05 > 212484 /usr/local/lib/python2.4/site-packages/numpy/core/_dotblas.so > b7966000-b798e000 r-xp 00000000 03:05 > 212487 /usr/local/lib/python2.4/site-packages/numpy/core/umath.so > b798e000-b7991000 rw-p 00027000 03:05 > 212487 /usr/local/lib/python2.4/site-packages/numpy/core/umath.so > b7991000-b79d3000 rw-p b7991000 00:00 0 > b79d3000-b7a28000 r-xp 00000000 03:05 > 212482 /usr/local/lib/python2.4/site-packages/numpy/core/multiarray.so > b7a28000-b7a32000 rw-p 00054000 03:05 > 212482 /usr/local/lib/python2.4/site-packages/numpy/core/multiarray.so > b7a32000-b7a6d000 r-xp 00000000 03:05 17777 /lib/libncurses.so.5.5 > b7a6d000-b7a78000 rw-p 0003a000 03:05 17777 /lib/libncurses.so.5.5 > b7a78000-b7a79000 rw-p b7a78000 00:00 0 > b7a79000-b7aba000 r-xp 00000000 03:05 17792 /usr/lib/libncursesw.so.5.5 > b7aba000-b7ac6000 rw-p 00040000 03:05 17792 /usr/lib/libncursesw.so.5.5 > b7ac6000-b7af0000 r-xp 00000000 03:05 18393 /lib/libreadline.so.5.1 > b7af0000-b7af4000 rw-p 0002a000 03:05 18393 /lib/libreadline.so.5.1 > b7af4000-b7af5000 rw-p b7af4000 00:00 0 > b7af5000-b7af8000 r-xp 00000000 03:05 > 208646 /usr/local/lib/python2.4/lib-dynload/readline.so > b7af8000-b7af9000 rw-p 000030Aborted > > > ------------------------------------------------------- > Using Tomcat but need to do more? Need to support web services, security? > Get stuff done quickly with pre-integrated technology to make your job easier > Download IBM WebSphere Application Server v.1.0.1 based on Apache Geronimo > http://sel.as-us.falkag.net/sel?cmd=lnk&kid=120709&bid=263057&dat=121642 > _______________________________________________ > Numpy-discussion mailing list > Numpy-discussion at lists.sourceforge.net > https://lists.sourceforge.net/lists/listinfo/numpy-discussion > I can reproduce it. 
Numpy version 0.9.9.2537 Scipy version 0.4.9.1906 Starting program: /usr/bin/python [Thread debugging using libthread_db enabled] [New Thread 16384 (LWP 17182)] Python 2.4.1 (#1, Sep 12 2005, 23:33:18) [GCC 4.0.2 20050901 (prerelease) (SUSE Linux)] on linux2 Type "help", "copyright", "credits" or "license" for more information. >>> from numpy import * >>> multiply.reduceat ((15,15,15,15), (0,2)) array([225, 225]) >>> multiply.reduceat ((15,15,15,15), (0,2)) array([225, 225]) >>> multiply.reduceat ((15,15,15,15), (0,2)) array([225, 225]) >>> multiply.reduceat ((15,15,15,15), (0,2)) *** glibc detected *** free(): invalid pointer: 0x00002aaaabe87a40 *** Program received signal SIGABRT, Aborted. [Switching to Thread 16384 (LWP 17182)] 0x00002aaaab6164f9 in kill () from /lib64/libc.so.6 (gdb) bt #0 0x00002aaaab6164f9 in kill () from /lib64/libc.so.6 #1 0x00002aaaaadf3821 in pthread_kill () from /lib64/libpthread.so.0 #2 0x00002aaaaadf3be2 in raise () from /lib64/libpthread.so.0 #3 0x00002aaaab61759d in abort () from /lib64/libc.so.6 #4 0x00002aaaab64a7be in __libc_message () from /lib64/libc.so.6 #5 0x00002aaaab64f76c in malloc_printerr () from /lib64/libc.so.6 #6 0x00002aaaab65025a in free () from /lib64/libc.so.6 #7 0x00002aaaabd51879 in array_dealloc (self=0x749be0) at arrayobject.c:1477 #8 0x00002aaaaac1ae77 in insertdict (mp=0x50b0b0, key=0x2aaaaaae79c0, hash=12160036574, value=0x2aaaaadbee30) at dictobject.c:397 #9 0x00002aaaaac1b147 in PyDict_SetItem (op=0x50b0b0, key=0x2aaaaaae79c0, value=0x2aaaaadbee30) at dictobject.c:551 #10 0x00002aaaaac206e4 in PyObject_GenericSetAttr (obj=0x2aaaaaac2bb0, name=0x2aaaaaae79c0, value=0x2aaaaadbee30) at object.c:1370 #11 0x00002aaaaac20136 in PyObject_SetAttr (v=0x2aaaaaac2bb0, name=0x2aaaaaae79c0, value=0x2aaaaadbee30) at object.c:1128 #12 0x00002aaaaac202cd in PyObject_SetAttrString (v=0x2aaaaaac2bb0, name=, w=0x2aaaaadbee30) at object.c:1044 #13 0x00002aaaaac73572 in sys_displayhook (self=, o=0x74c4d0) at sysmodule.c:105 
#14 0x00002aaaaabfa760 in PyObject_Call (func=, arg=, kw=) at abstract.c:1751 #15 0x00002aaaaac4efe0 in PyEval_CallObjectWithKeywords (func=0x2aaaaaacf170, arg=0x2aaaaccaa890, kw=0x0) at ceval.c:3419 #16 0x00002aaaaac53a07 in PyEval_EvalFrame (f=0x541960) at ceval.c:1507 #17 0x00002aaaaac55404 in PyEval_EvalCodeEx (co=0x2aaaaccadab0, globals=, locals=, args=0x0, argcount=0, kws=0x0, kwcount=0, defs=0x0, defcount=0, closure=0x0) at ceval.c:2730 #18 0x00002aaaaac556d2 in PyEval_EvalCode (co=, globals=, locals=) at ceval.c:484 #19 0x00002aaaaac70719 in run_node (n=, filename=, globals=0x503b50, locals=0x503b50, flags=) at pythonrun.c:1265 #20 0x00002aaaaac71bc7 in PyRun_InteractiveOneFlags (fp=, filename=0x2aaaaac95e73 "", flags=0x7fffff82c870) at pythonrun.c:762 #21 0x00002aaaaac71cbe in PyRun_InteractiveLoopFlags (fp=0x2aaaab809e00, filename=0x2aaaaac95e73 "", flags=0x7fffff82c870) at pythonrun.c:695 #22 0x00002aaaaac7221c in PyRun_AnyFileExFlags (fp=0x2aaaab809e00, filename=0x2aaaaac95e73 "", closeit=0, flags=0x7fffff82c870) at pythonrun.c:658 #23 0x00002aaaaac77b25 in Py_Main (argc=, argv=0x7fffff82e7bf) at main.c:484 #24 0x00002aaaab603ced in __libc_start_main () from /lib64/libc.so.6 #25 0x00000000004006ea in _start () at start.S:113 #26 0x00007fffff82c908 in ?? () #27 0x00002aaaaabc19c0 in rtld_errno () from /lib64/ld-linux-x86-64.so.2 Nils --------------070904090207090509070803-- From robert.kern at gmail.com Mon May 22 07:51:38 2006 From: robert.kern at gmail.com (Robert Kern) Date: Mon May 22 07:51:38 2006 Subject: [Numpy-discussion] Re: numpy (?) bug. In-Reply-To: <005001c67dac$ba3b1940$0a84a8c0@dsp.sun.ac.za> References: <200605221624.01235.wiechert@mpimf-heidelberg.mpg.de> <005001c67dac$ba3b1940$0a84a8c0@dsp.sun.ac.za> Message-ID: Albert Strasheim wrote: > Hello all > > This bug seems to be present in 0.9.9.2536. In order to bracket the bug, I will add that I do not see it in 0.9.7.2477 . 
-- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From martin.wiechert at gmx.de Mon May 22 07:52:17 2006 From: martin.wiechert at gmx.de (Martin Wiechert) Date: Mon May 22 07:52:17 2006 Subject: [Numpy-discussion] Re: numpy (?) bug. In-Reply-To: <005001c67dac$ba3b1940$0a84a8c0@dsp.sun.ac.za> References: <005001c67dac$ba3b1940$0a84a8c0@dsp.sun.ac.za> Message-ID: <200605221649.47954.martin.wiechert@gmx.de> I've created ticket #128 Regards, Martin On Monday 22 May 2006 16:33, Albert Strasheim wrote: > Hello all > > This bug seems to be present in 0.9.9.2536. Martin, it would be great if > you could create a ticket in Trac. > > http://projects.scipy.org/scipy/numpy/newticket > > Regards, > > Albert > > > -----Original Message----- > > From: numpy-discussion-admin at lists.sourceforge.net [mailto:numpy- > > discussion-admin at lists.sourceforge.net] On Behalf Of Martin Takeo > > Wiechert Sent: 22 May 2006 16:24 > > To: numpy-discussion at lists.sourceforge.net > > Subject: Re: [Numpy-discussion] Re: numpy (?) bug. > > > > Robert, > > > > I nailed it down. Look at the short interactive session below. numpy > > version > > is 0.9.8. > > > > Regards, Martin. > > > > P.S.: Simon, thanks for your hint. 0.9.8 is only a few days old. When did > > you > > do your svn update? From bblais at bryant.edu Mon May 22 09:50:02 2006 From: bblais at bryant.edu (Brian Blais) Date: Mon May 22 09:50:02 2006 Subject: [Numpy-discussion] saving data: which format is recommended? Message-ID: <4471EB90.7030005@bryant.edu> Hello, I am trying to save numpy arrays (actually a list of them) for later use, and distribution to others. Up until yesterday, I've been using the zpickle module from the Cookbook, which is just pickle binary format with gzip compression. Yesterday, I upgraded my operating system, and now I can't read those files. I am using numpy 0.9.9.2536, and unfortunately I can't recall the version that I was using, but it was relatively recent. I also upgraded from Python 2.3 to 2.4. Trying to load the "old" files, I get: AttributeError: 'module' object has no attribute 'dtypedescr' the file consists of a single dictionary, with two elements, like: var={'im': numpy.zeros((5,5)),'im_scale_shift':[0.0,1.0]} My question isn't how can I load these "old" files, because I can regenerate them. I would like to know what file format I should be using so that I don't have to worry about upgrades/version differences when I want to load them. Is there a preferred way to do this? I thought pickle was that way, but perhaps I don't understand how pickle works. thanks, Brian Blais From faltet at carabos.com Mon May 22 11:15:06 2006 From: faltet at carabos.com (Francesc Altet) Date: Mon May 22 11:15:06 2006 Subject: [Numpy-discussion] saving data: which format is recommended?
In-Reply-To: <4471EB90.7030005@bryant.edu> References: <4471EB90.7030005@bryant.edu> Message-ID: <1148321632.7596.21.camel@localhost.localdomain> On Mon, 22 May 2006 at 12:49 -0400, Brian Blais wrote: > I am trying to save numpy arrays (actually a list of them) for later use, and > distribution to others. Up until yesterday, I've been using the zpickle module from > the Cookbook, which is just pickle binary format with gzip compression. Yesterday, I > upgraded my operating system, and now I can't read those files. I am using numpy > 0.9.9.2536, and unfortunately I can't recall the version that I was using, but it was > pretty relatively recent. I also upgraded from Python 2.3 to 2.4. Trying to load > the "old" files, I get: > > AttributeError: 'module' object has no attribute 'dtypedescr' > > > the file consists of a single dictionary, with two elements, like: > > var={'im': numpy.zeros((5,5)),'im_scale_shift':[0.0,1.0]} This could be because NumPy objects have undergone some changes in their structure in the last few months. After version 1.0 there will (hopefully) be no more changes in the structure, so your pickles will be more stable (but again, you might have problems in the long run, i.e. when NumPy 2.0 appears). > > My question isn't how can I load these "old" files, because I can regenerate them. I > would like to know what file format I should be using so that I don't have to worry > about upgrades/version differences when I want to load them. Is there a preferred > way to do this? I thought pickle was that way, but perhaps I don't understand how > pickle works. If you need full compatibility, a better approach than pickle-based solutions is the .tofile()/.fromfile() pair of methods, but you need to save the metadata for your objects (type, shape, etc.) separately.
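To make the .tofile()/.fromfile() suggestion concrete, here is a minimal sketch; the helper names (``save_raw``/``load_raw``) and the semicolon-separated metadata file format are purely illustrative, not something from this thread:

```python
import numpy as np

def save_raw(arr, data_path, meta_path):
    # .tofile() writes only the raw element bytes; the dtype and shape
    # must be recorded separately to reconstruct the array later.
    arr.tofile(data_path)
    with open(meta_path, "w") as f:
        f.write("%s;%s" % (arr.dtype.str, ",".join(str(n) for n in arr.shape)))

def load_raw(data_path, meta_path):
    # Read the metadata back, then reinterpret the raw bytes accordingly.
    with open(meta_path) as f:
        dtype_str, shape_str = f.read().split(";")
    shape = tuple(int(n) for n in shape_str.split(","))
    return np.fromfile(data_path, dtype=np.dtype(dtype_str)).reshape(shape)
```

Unlike a pickle, the data file contains no NumPy class names at all, so it keeps loading across NumPy versions as long as you can parse your own metadata.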
If you need full support for saving data & metadata for your NumPy objects in a transparent way that is independent of pickle, you may want to have a look at PyTables [1] or NetCDF4 [2]. Both packages should be able to save NumPy datasets without any need to worry about future changes in NumPy data structures. Both packages are ultimately based on the HDF5 format [3], which has a pretty strong commitment to backward/forward format compatibility across its versions. [1]http://www.pytables.org [2]http://www.cdc.noaa.gov/people/jeffrey.s.whitaker/python/netCDF4.html [3]http://hdf.ncsa.uiuc.edu/HDF5 Cheers, -- >0,0< Francesc Altet http://www.carabos.com/ V V Cárabos Coop. V. Enjoy Data "-" From oliphant.travis at ieee.org Mon May 22 11:28:03 2006 From: oliphant.travis at ieee.org (Travis Oliphant) Date: Mon May 22 11:28:03 2006 Subject: [Numpy-discussion] saving data: which format is recommended? In-Reply-To: <4471EB90.7030005@bryant.edu> References: <4471EB90.7030005@bryant.edu> Message-ID: <44720288.2080902@ieee.org> Brian Blais wrote: > Hello, > > I am trying to save numpy arrays (actually a list of them) for later > use, and > distribution to others. Up until yesterday, I've been using the > zpickle module from > the Cookbook, which is just pickle binary format with gzip > compression. Yesterday, I > upgraded my operating system, and now I can't read those files. I am > using numpy > 0.9.9.2536, and unfortunately I can't recall the version that I was > using, but it was > pretty relatively recent. I also upgraded from Python 2.3 to 2.4. > Trying to load > the "old" files, I get: > > AttributeError: 'module' object has no attribute 'dtypedescr' The name "dtypedescr" was changed to "dtype" back in early February. The problem with pickle is that it is quite sensitive to these kinds of changes. Such changes are actually rare, but in the early stages of NumPy they were more common. This should be more stable now.
I don't expect changes that will cause pickled NumPy arrays to fail in the future. If you needed to read the data on these files, it is likely possible with a little tweaking. While pickle is convenient and the actual data is guaranteed to be readable, reconstructing the data requires that certain names won't change. Many people use other methods for persistence because of this. -Travis From oliphant.travis at ieee.org Mon May 22 11:47:04 2006 From: oliphant.travis at ieee.org (Travis Oliphant) Date: Mon May 22 11:47:04 2006 Subject: [Numpy-discussion] Re: numpy (?) bug. In-Reply-To: <200605221624.01235.wiechert@mpimf-heidelberg.mpg.de> References: <200605221307.47277.martin.wiechert@gmx.de> <200605221624.01235.wiechert@mpimf-heidelberg.mpg.de> Message-ID: <447206E5.6060608@ieee.org> Martin Takeo Wiechert wrote: > Robert, > > I nailed it down. Look at the short interactive session below. numpy version > is 0.9.8. > > Regards, Martin. > > P.S.: Simon, thanks for your hint. 0.9.8 is only a few days old. When did you > do your svn update? > > > Python 2.4.3 (#1, May 12 2006, 05:35:54) > [GCC 4.1.0 (SUSE Linux)] on linux2 > Type "help", "copyright", "credits" or "license" for more information. > >>>> from numpy import * >>>> multiply.reduceat ((15,15,15,15), (0,2)) >>>> > array([225, 225]) > >>>> multiply.reduceat ((15,15,15,15), (0,2)) >>>> Thanks for tracking this down. It was a reference-count bug on the data-type object. The builtin data-types should never be freed, but an attempt was made due to the bug. This should be fixed in SVN now. 
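For readers unfamiliar with reduceat, the call that triggered the crash is itself perfectly well-defined; a quick sketch of what it computes (values taken from the sessions above):

```python
import numpy as np

a = np.array([15, 15, 15, 15])
# Given indices (0, 2), reduceat reduces over the segments a[0:2] and
# a[2:4], so each output element is 15 * 15 = 225.
out = np.multiply.reduceat(a, [0, 2])
print(out)  # [225 225]

# On a build with the reference-count fix, repeating the call is harmless;
# on the broken builds a later call corrupted the allocator, as the
# glibc "invalid pointer" aborts above show.
for _ in range(5):
    assert (np.multiply.reduceat(a, [0, 2]) == [225, 225]).all()
```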
-Travis > *** glibc detected *** python: free(): invalid pointer: 0xb7a2eac0 *** > ======= Backtrace: ========= > /lib/libc.so.6[0xb7c1a911] > /lib/libc.so.6(__libc_free+0x84)[0xb7c1bf84] > /usr/local/lib/libpython2.4.so.1.0(PyObject_Free+0x51)[0xb7e56f31] > /usr/local/lib/python2.4/site-packages/numpy/core/multiarray.so[0xb79e0d97] > /usr/local/lib/python2.4/site-packages/numpy/core/multiarray.so[0xb79f9dca] > /usr/local/lib/python2.4/site-packages/numpy/core/umath.so[0xb7983d9f] > /usr/local/lib/libpython2.4.so.1.0(PyCFunction_Call+0x11d)[0xb7e5364d] > /usr/local/lib/libpython2.4.so.1.0(PyEval_EvalFrame+0x4e8e)[0xb7e8f42e] > /usr/local/lib/libpython2.4.so.1.0(PyEval_EvalCodeEx+0x869)[0xb7e905c9] > /usr/local/lib/libpython2.4.so.1.0(PyEval_EvalCode+0x63)[0xb7e90643] > /usr/local/lib/libpython2.4.so.1.0(PyRun_InteractiveOneFlags+0x1fd) > [0xb7eb512d] > /usr/local/lib/libpython2.4.so.1.0(PyRun_InteractiveLoopFlags+0x5b) > [0xb7eb526b] > /usr/local/lib/libpython2.4.so.1.0(PyRun_AnyFileExFlags+0x47)[0xb7eb5a87] > /usr/local/lib/libpython2.4.so.1.0(Py_Main+0xbad)[0xb7ebbf3d] > python(main+0x32)[0x80485e2] > /lib/libc.so.6(__libc_start_main+0xdc)[0xb7bcc87c] > python[0x8048521] > ======= Memory map: ======== > 08048000-08049000 r-xp 00000000 03:05 205745 /usr/local/bin/python > 08049000-0804a000 rw-p 00000000 03:05 205745 /usr/local/bin/python > 0804a000-081ad000 rw-p 0804a000 00:00 0 [heap] > b7000000-b7021000 rw-p b7000000 00:00 0 > b7021000-b7100000 ---p b7021000 00:00 0 > b71b4000-b7297000 rw-p b71b4000 00:00 0 > b7297000-b72b2000 r-xp 00000000 03:05 > 212490 /usr/local/lib/python2.4/site-packages/numpy/random/mtrand.so > b72b2000-b72b6000 rw-p 0001a000 03:05 > 212490 /usr/local/lib/python2.4/site-packages/numpy/random/mtrand.so > b72b6000-b72d0000 r-xp 00000000 03:05 201845 /usr/lib/libg2c.so.0.0.0 > b72d0000-b72d1000 rw-p 00019000 03:05 201845 /usr/lib/libg2c.so.0.0.0 > b72d1000-b72d4000 rw-p b72d1000 00:00 0 > b72e2000-b72eb000 r-xp 00000000 03:05 > 212480 
/usr/local/lib/python2.4/site-packages/numpy/dft/fftpack_lite.so > b72eb000-b72ec000 rw-p 00008000 03:05 > 212480 /usr/local/lib/python2.4/site-packages/numpy/dft/fftpack_lite.so > b72ec000-b758c000 r-xp 00000000 03:05 > 212489 /usr/local/lib/python2.4/site-packages/numpy/linalg/lapack_lite.so > b758c000-b758e000 rw-p 0029f000 03:05 > 212489 /usr/local/lib/python2.4/site-packages/numpy/linalg/lapack_lite.so > b758e000-b75ef000 rw-p b758e000 00:00 0 > b75ef000-b75f2000 r-xp 00000000 03:05 > 208618 /usr/local/lib/python2.4/lib-dynload/math.so > b75f2000-b75f3000 rw-p 00002000 03:05 > 208618 /usr/local/lib/python2.4/lib-dynload/math.so > b75f3000-b75f5000 r-xp 00000000 03:05 > 212481 /usr/local/lib/python2.4/site-packages/numpy/lib/_compiled_base.so > b75f5000-b75f6000 rw-p 00002000 03:05 > 212481 /usr/local/lib/python2.4/site-packages/numpy/lib/_compiled_base.so > b75f6000-b7610000 r-xp 00000000 03:05 > 212486 /usr/local/lib/python2.4/site-packages/numpy/core/scalarmath.so > b7610000-b7611000 rw-p 00019000 03:05 > 212486 /usr/local/lib/python2.4/site-packages/numpy/core/scalarmath.so > b7611000-b7614000 r-xp 00000000 03:05 > 208625 /usr/local/lib/python2.4/lib-dynload/mmap.so > b7614000-b7615000 rw-p 00003000 03:05 > 208625 /usr/local/lib/python2.4/lib-dynload/mmap.so > b7615000-b7656000 rw-p b7615000 00:00 0 > b7656000-b765a000 r-xp 00000000 03:05 > 208644 /usr/local/lib/python2.4/lib-dynload/strop.so > b765a000-b765c000 rw-p 00003000 03:05 > 208644 /usr/local/lib/python2.4/lib-dynload/strop.so > b765c000-b765f000 r-xp 00000000 03:05 > 208595 /usr/local/lib/python2.4/lib-dynload/cStringIO.so > b765f000-b7660000 rw-p 00003000 03:05 > 208595 /usr/local/lib/python2.4/lib-dynload/cStringIO.so > b7660000-b7671000 r-xp 00000000 03:05 > 208619 /usr/local/lib/python2.4/lib-dynload/cPickle.so > b7671000-b7672000 rw-p 00010000 03:05 > 208619 /usr/local/lib/python2.4/lib-dynload/cPickle.so > b7672000-b7964000 r-xp 00000000 03:05 > 212484 
/usr/local/lib/python2.4/site-packages/numpy/core/_dotblas.so > b7964000-b7966000 rw-p 002f1000 03:05 > 212484 /usr/local/lib/python2.4/site-packages/numpy/core/_dotblas.so > b7966000-b798e000 r-xp 00000000 03:05 > 212487 /usr/local/lib/python2.4/site-packages/numpy/core/umath.so > b798e000-b7991000 rw-p 00027000 03:05 > 212487 /usr/local/lib/python2.4/site-packages/numpy/core/umath.so > b7991000-b79d3000 rw-p b7991000 00:00 0 > b79d3000-b7a28000 r-xp 00000000 03:05 > 212482 /usr/local/lib/python2.4/site-packages/numpy/core/multiarray.so > b7a28000-b7a32000 rw-p 00054000 03:05 > 212482 /usr/local/lib/python2.4/site-packages/numpy/core/multiarray.so > b7a32000-b7a6d000 r-xp 00000000 03:05 17777 /lib/libncurses.so.5.5 > b7a6d000-b7a78000 rw-p 0003a000 03:05 17777 /lib/libncurses.so.5.5 > b7a78000-b7a79000 rw-p b7a78000 00:00 0 > b7a79000-b7aba000 r-xp 00000000 03:05 17792 /usr/lib/libncursesw.so.5.5 > b7aba000-b7ac6000 rw-p 00040000 03:05 17792 /usr/lib/libncursesw.so.5.5 > b7ac6000-b7af0000 r-xp 00000000 03:05 18393 /lib/libreadline.so.5.1 > b7af0000-b7af4000 rw-p 0002a000 03:05 18393 /lib/libreadline.so.5.1 > b7af4000-b7af5000 rw-p b7af4000 00:00 0 > b7af5000-b7af8000 r-xp 00000000 03:05 > 208646 /usr/local/lib/python2.4/lib-dynload/readline.so > b7af8000-b7af9000 rw-p 000030Aborted > > > ------------------------------------------------------- > Using Tomcat but need to do more? Need to support web services, security? 
> Get stuff done quickly with pre-integrated technology to make your job easier > Download IBM WebSphere Application Server v.1.0.1 based on Apache Geronimo > http://sel.as-us.falkag.net/sel?cmd=lnk&kid=120709&bid=263057&dat=121642 > _______________________________________________ > Numpy-discussion mailing list > Numpy-discussion at lists.sourceforge.net > https://lists.sourceforge.net/lists/listinfo/numpy-discussion > From ivilata at carabos.com Mon May 22 23:50:03 2006 From: ivilata at carabos.com (Ivan Vilata i Balaguer) Date: Mon May 22 23:50:03 2006 Subject: [Numpy-discussion] Pow() with negative exponent Message-ID: <4472B013.1010607@carabos.com> (I'm sending this again because I'm afraid the previous post may have qualified as spam because of its subject. Sorry for the inconvenience.) Hi all, when working with numexpr, I have come across a curiosity in both numarray and numpy:: In [30]:b = numpy.array([1,2,3,4]) In [31]:b ** -1 Out[31]:array([1, 0, 0, 0]) In [32]:4 ** -1 Out[32]:0.25 In [33]: According to http://docs.python.org/ref/power.html: For int and long int operands, the result has the same type as the operands (after coercion) unless the second argument is negative; in that case, all arguments are converted to float and a float result is delivered. Then, shouldn't ``b ** -1`` be ``array([1.0, 0.5, 0.33333333, 0.25])`` (i.e. a floating point result)? Is this behaviour intentional? (I googled for previous messages on the topic but I didn't find any.) Thanks, :: Ivan Vilata i Balaguer >qo< http://www.carabos.com/ Cárabos Coop. V. V V Enjoy Data "" -------------- next part -------------- A non-text attachment was scrubbed...
Name: signature.asc Type: application/pgp-signature Size: 307 bytes Desc: OpenPGP digital signature URL: From ajikoe at gmail.com Mon May 22 23:58:05 2006 From: ajikoe at gmail.com (Pujo Aji) Date: Mon May 22 23:58:05 2006 Subject: [Numpy-discussion] Pow() with negative exponent In-Reply-To: <4472B013.1010607@carabos.com> References: <4472B013.1010607@carabos.com> Message-ID: Use 'f' to tell numpy that the array elements are of float type: b = numpy.array([1,2,3,4],'f') An alternative is to put a dot after each number: b = numpy.array([1. ,2. ,3. ,4.]) This hopefully solves your problem. Cheers, pujo On 5/23/06, Ivan Vilata i Balaguer wrote: > > (I'm sending this again because I'm afraid the previous post may have > qualified as spam because of it subject. Sorry for the inconvenience.) > > Hi all, when working with numexpr, I have come across a curiosity in > both numarray and numpy:: > > In [30]:b = numpy.array([1,2,3,4]) > In [31]:b ** -1 > Out[31]:array([1, 0, 0, 0]) > In [32]:4 ** -1 > Out[32]:0.25 > In [33]: > > According to http://docs.python.org/ref/power.html: > > For int and long int operands, the result has the same type as the > operands (after coercion) unless the second argument is negative; in > that case, all arguments are converted to float and a float result is > delivered. > > Then, shouldn't be ``b ** -1 == array([1.0, 0.5, 0.33333333, 0.25])`` > (i.e. a floating point result)? Is this behaviour intentional? (I > googled for previous messages on the topic but I didn't find any.) > > Thanks, > > :: > > Ivan Vilata i Balaguer >qo< http://www.carabos.com/ > Cárabos Coop. V. V V Enjoy Data > "" > > > > -------------- next part -------------- An HTML attachment was scrubbed...
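Pujo's fix can be checked directly. Note that as a sketch this uses a plain float dtype; be aware that, independently of this thread, much later NumPy releases refuse integer arrays raised to negative integer powers outright, so the conversion to float is needed in any case:

```python
import numpy as np

# Equivalent to the 'f'/trailing-dot spellings: a floating-point array.
b = np.array([1, 2, 3, 4], dtype=float)
print(b ** -1)  # reciprocals: 1.0, 0.5, 0.333..., 0.25

# An existing integer array can instead be converted on the fly:
bi = np.array([1, 2, 3, 4])
print(bi.astype(float) ** -1)
```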
URL: From ivilata at carabos.com Tue May 23 00:27:03 2006 From: ivilata at carabos.com (Ivan Vilata i Balaguer) Date: Tue May 23 00:27:03 2006 Subject: [Numpy-discussion] Pow() with negative exponent In-Reply-To: References: <4472B013.1010607@carabos.com> Message-ID: <4472B8D0.6040606@carabos.com> Pujo Aji wrote:: > use 'f' to tell numpy that its array element is a float type: > b = numpy.array([1,2,3,4],'f') > > an alternative is to put dot after the number: > b = numpy.array([1. ,2. ,3. ,4.]) > > This hopefully solve your problem. You're right, but according to the Python reference docs, having an integer base and a negative integer exponent should still return a floating point result, without the need of converting the base to floating point beforehand. I wonder if the numpy/numarray behavior is based on some implicit policy which states that operating on integers with integers should always return integers, for return type predictability, or something like that. Could someone please shed some light on this? Thanks! Pujo Aji wrote:: > On 5/23/06, *Ivan Vilata i Balaguer* > wrote: > [...] > According to http://docs.python.org/ref/power.html: > > For int and long int operands, the result has the same type as the > operands (after coercion) unless the second argument is negative; in > that case, all arguments are converted to float and a float result is > delivered. > > Then, shouldn't be ``b ** -1 == array([1.0, 0.5, 0.33333333, 0.25])`` > (i.e. a floating point result)? Is this behaviour intentional? (I > googled for previous messages on the topic but I didn't find any.) :: Ivan Vilata i Balaguer >qo< http://www.carabos.com/ Cárabos Coop. V. V V Enjoy Data "" -------------- next part -------------- A non-text attachment was scrubbed...
Name: signature.asc Type: application/pgp-signature Size: 307 bytes Desc: OpenPGP digital signature URL: From ajikoe at gmail.com Tue May 23 01:08:12 2006 From: ajikoe at gmail.com (Pujo Aji) Date: Tue May 23 01:08:12 2006 Subject: [Numpy-discussion] Pow() with negative exponent In-Reply-To: <4472B8D0.6040606@carabos.com> References: <4472B013.1010607@carabos.com> <4472B8D0.6040606@carabos.com> Message-ID: Numpy optimize the python process by explicitly define the element type of array. Just like C++. Python let you work with automatic converting... but it slows down the process. Like having extra code to check the type of your element array. I suggest you check the numpy reference instead of python reference when using numpy. Sincerely Yours, pujo On 5/23/06, Ivan Vilata i Balaguer wrote: > > En/na Pujo Aji ha escrit:: > > > use 'f' to tell numpy that its array element is a float type: > > b = numpy.array([1,2,3,4],'f') > > > > an alternative is to put dot after the number: > > b = numpy.array([1. ,2. ,3. ,4.]) > > > > This hopefully solve your problem. > > You're right, but according to Python reference docs, having an integer > base and a negative integer exponent should still return a floating > point result, without the need of converting the base to floating point > beforehand. > > I wonder if the numpy/numarray behavior is based on some implicit policy > which states that operating integers with integers should always return > integers, for return type predictability, or something like that. Could > someone please shed some light on this? Thanks! > > En/na Pujo Aji ha escrit:: > > > On 5/23/06, *Ivan Vilata i Balaguer* > > wrote: > > [...] > > According to http://docs.python.org/ref/power.html: > > > > For int and long int operands, the result has the same type as the > > operands (after coercion) unless the second argument is negative; > in > > that case, all arguments are converted to float and a float result > is > > delivered. 
> > > > Then, shouldn't be ``b ** -1 == array([1.0, 0.5, 0.33333333, 0.25 > ])`` > > (i.e. a floating point result)? Is this behaviour intentional? (I > > googled for previous messages on the topic but I didn't find any.) > > :: > > Ivan Vilata i Balaguer >qo< http://www.carabos.com/ > Cárabos Coop. V. V V Enjoy Data > "" > > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From aisaac at american.edu Tue May 23 04:56:36 2006 From: aisaac at american.edu (Alan G Isaac) Date: Tue May 23 04:56:36 2006 Subject: [Numpy-discussion] Pow() with negative exponent In-Reply-To: References: <4472B013.1010607@carabos.com><4472B8D0.6040606@carabos.com> Message-ID: On Tue, 23 May 2006, Pujo Aji apparently wrote: > I suggest you check the numpy reference instead of python > reference when using numpy. http://www.scipy.org/Documentation fyi, Alan Isaac From ivilata at carabos.com Tue May 23 05:52:14 2006 From: ivilata at carabos.com (Ivan Vilata i Balaguer) Date: Tue May 23 05:52:14 2006 Subject: [Numpy-discussion] Pow() with negative exponent In-Reply-To: References: <4472B013.1010607@carabos.com> <4472B8D0.6040606@carabos.com> Message-ID: <447304FD.5090403@carabos.com> En/na Pujo Aji ha escrit:: > Numpy optimize the python process by explicitly define the element type > of array. > Just like C++. > > Python let you work with automatic converting... but it slows down the > process. > Like having extra code to check the type of your element array. > > I suggest you check the numpy reference instead of python reference then > using numpy. OK, I see that predictability of the type of the output result matters. ;) Besides that, I've been told that, according to the manual, power() (as every other ufunc) uses its ``types`` member to find out the type of the result depending only on the types of its arguments. It makes sense to avoid checking for particular values with possibly large arrays for efficiency, as you point out. 
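A tiny sketch makes that dtype-driven rule concrete (this assumes a session where numpy is importable as ``np``; note that what ``int_array ** negative_int`` itself returns has differed between numpy versions, so the sketch sticks to cases whose results are stable):

```python
import numpy as np

# power(), like any ufunc, picks its result dtype from the input
# dtypes alone -- the actual values involved play no role.
b_int = np.array([1, 2, 3, 4])            # integer dtype
b_float = np.array([1.0, 2.0, 3.0, 4.0])  # float dtype

print((b_int ** 2).dtype.kind)   # 'i': integers in, integers out
print(b_float ** -1)             # floats in, floats out: 1.0, 0.5, 0.333..., 0.25

# Mixing dtypes promotes to the wider type, again independently of values:
print((b_int * 2.0).dtype.kind)  # 'f'
```

The point is that both results are fully determined by ``b_int.dtype`` and ``b_float.dtype``; deciding the output type from the exponent's value would cost an extra pass over the data.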
I expected Python-like behaviour, but I understand this is not the most appropriate thing to do for a high-performance package (but then, I was not able to find out using the public docs). Thanks for your help, :: Ivan Vilata i Balaguer >qo< http://www.carabos.com/ Cárabos Coop. V. V V Enjoy Data "" -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 307 bytes Desc: OpenPGP digital signature URL: From joris at ster.kuleuven.ac.be Tue May 23 07:16:01 2006 From: joris at ster.kuleuven.ac.be (joris at ster.kuleuven.ac.be) Date: Tue May 23 07:16:01 2006 Subject: [Numpy-discussion] Numpy PR Message-ID: <1148393680.447318d0a7796@webmail.ster.kuleuven.be> Hi, I was thinking about our PR for numpy. Imo, the best page that we can currently show to newcomers is www.scipy.org. There they find out what is Numpy, where you can download it, documentation, cookbook recipes, examples, libraries that build on NumPy like SciPy, etc. In addition, it's the page that, imho, looks the most professional. Googling for "numpy" gives: 1) numeric.scipy.org/ Travis' webpage on Numpy. Travis, would you consider putting a much more pronounced link to scipy.org? The current link is at the very bottom of the page and has no further comments... 2) www.numpy.org/ One is redirected to the sourceforge site. Question: why not to the scipy.org site? The reason why no wiki page is set up here is, I guess, because there is already one at scipy.org. So why not directly linking to it? 3) www.pfdubois.com/numpy/ This site is actually closed. It only contains a short paragraph pointing to the page http://sourceforge.net/projects/numpy. 4) www.pfdubois.com/numpy/html2/numpy.html This is a potentially very confusing web page. It constantly talks about 'Numpy' but is actually refering to the obsolete 'Numeric'. Perhaps this page could be taken down? 5) sourceforge.net/projects/numpy The download site for NumPy. 
This page doesn't contain a link to scipy.org. 6) www.python.org/moin/NumericAndScientificnumpy.html Informationless webpage. 7) wiki.python.org/moin/NumericAndScientific Up-to-date webpage, but refers to numeric.scipy.org for NumPy. 8) www.scipy.org/ And YES, finally... :o) Perhaps we could try to take scipy.org a bit higher in the Google ranking? I am not a HTML expert at all, but may a header in the www.scipy.org source code like help? Cheers, Joris Disclaimer: http://www.kuleuven.be/cwis/email_disclaimer.htm From pfdubois at gmail.com Tue May 23 08:19:02 2006 From: pfdubois at gmail.com (Paul Dubois) Date: Tue May 23 08:19:02 2006 Subject: [Numpy-discussion] Numpy PR In-Reply-To: <1148393680.447318d0a7796@webmail.ster.kuleuven.be> References: <1148393680.447318d0a7796@webmail.ster.kuleuven.be> Message-ID: I killed pfdubois/numpy. On 23 May 2006 07:18:32 -0700, joris at ster.kuleuven.ac.be < joris at ster.kuleuven.ac.be> wrote: > > Hi, > > I was thinking about our PR for numpy. Imo, the best page that we can > currently show to newcomers is www.scipy.org. There they find out what is > Numpy, where you can download it, documentation, cookbook recipes, > examples, > libraries that build on NumPy like SciPy, etc. In addition, it's the page > that, imho, looks the most professional. > > > Googling for "numpy" gives: > > 1) numeric.scipy.org/ > > Travis' webpage on Numpy. Travis, would you consider putting a much more > pronounced link to scipy.org? The current link to is at the very bottom of > the page and has no further comments... > > 2) www.numpy.org/ > > One is redirected to the sourceforge site. Question: why not to the > scipy.org > site? The reason why no wiki page is set up here is, I guess, because > there > is already one at scipy.org. So why not directly linking to it? > > > 3) www.pfdubois.com/numpy/ > > This site is actually closed. It only contains a short paragraph pointing > to > the page http://sourceforge.net/projects/numpy. 
> > > 4) www.pfdubois.com/numpy/html2/numpy.html > > This is a potentially very confusing web page. It constantly talks about > 'Numpy' but is actually refering to the obsolete 'Numeric'. Perhaps this > page > could be taken down? > > > 5) sourceforge.net/projects/numpy > > The download site for NumPy. This page doesn't contain a link to scipy.org > . > > > 6) www.python.org/moin/NumericAndScientificnumpy.html > > Informationless webpage. > > > 7) wiki.python.org/moin/NumericAndScientific > > Up-to-date webpage, but refers to numeric.scipy.org for NumPy. > > > 8) www.scipy.org/ > > And YES, finally... :o) > > > Perhaps we could try to take scipy.org a bit higher in the Google ranking? > I am not a HTML expert at all, but may a header in the www.scipy.orgsource > code like > > > > help? > > Cheers, > Joris > > > Disclaimer: http://www.kuleuven.be/cwis/email_disclaimer.htm > > > > ------------------------------------------------------- > Using Tomcat but need to do more? Need to support web services, security? > Get stuff done quickly with pre-integrated technology to make your job > easier > Download IBM WebSphere Application Server v.1.0.1 based on Apache Geronimo > http://sel.as-us.falkag.net/sel?cmd=lnk&kid=120709&bid=263057&dat=121642 > _______________________________________________ > Numpy-discussion mailing list > Numpy-discussion at lists.sourceforge.net > https://lists.sourceforge.net/lists/listinfo/numpy-discussion > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From jonathan.taylor at utoronto.ca Tue May 23 10:58:04 2006 From: jonathan.taylor at utoronto.ca (Jonathan Taylor) Date: Tue May 23 10:58:04 2006 Subject: [Numpy-discussion] Numpy PR In-Reply-To: <1148393680.447318d0a7796@webmail.ster.kuleuven.be> References: <1148393680.447318d0a7796@webmail.ster.kuleuven.be> Message-ID: <463e11f90605231056n552712dcy9badd68a6b6701ac@mail.gmail.com> I updated the python moin pages. 
On 5/23/06, joris at ster.kuleuven.ac.be wrote: > Hi, > > I was thinking about our PR for numpy. Imo, the best page that we can > currently show to newcomers is www.scipy.org. There they find out what is > Numpy, where you can download it, documentation, cookbook recipes, examples, > libraries that build on NumPy like SciPy, etc. In addition, it's the page > that, imho, looks the most professional. > > > Googling for "numpy" gives: > > 1) numeric.scipy.org/ > > Travis' webpage on Numpy. Travis, would you consider putting a much more > pronounced link to scipy.org? The current link to is at the very bottom of > the page and has no further comments... > > 2) www.numpy.org/ > > One is redirected to the sourceforge site. Question: why not to the scipy.org > site? The reason why no wiki page is set up here is, I guess, because there > is already one at scipy.org. So why not directly linking to it? > > > 3) www.pfdubois.com/numpy/ > > This site is actually closed. It only contains a short paragraph pointing to > the page http://sourceforge.net/projects/numpy. > > > 4) www.pfdubois.com/numpy/html2/numpy.html > > This is a potentially very confusing web page. It constantly talks about > 'Numpy' but is actually refering to the obsolete 'Numeric'. Perhaps this page > could be taken down? > > > 5) sourceforge.net/projects/numpy > > The download site for NumPy. This page doesn't contain a link to scipy.org. > > > 6) www.python.org/moin/NumericAndScientificnumpy.html > > Informationless webpage. > > > 7) wiki.python.org/moin/NumericAndScientific > > Up-to-date webpage, but refers to numeric.scipy.org for NumPy. > > > 8) www.scipy.org/ > > And YES, finally... :o) > > > Perhaps we could try to take scipy.org a bit higher in the Google ranking? > I am not a HTML expert at all, but may a header in the www.scipy.org source > code like > > > > help? 
> > Cheers, > Joris > > > Disclaimer: http://www.kuleuven.be/cwis/email_disclaimer.htm > > > > ------------------------------------------------------- > Using Tomcat but need to do more? Need to support web services, security? > Get stuff done quickly with pre-integrated technology to make your job easier > Download IBM WebSphere Application Server v.1.0.1 based on Apache Geronimo > http://sel.as-us.falkag.net/sel?cmd=lnk&kid=120709&bid=263057&dat=121642 > _______________________________________________ > Numpy-discussion mailing list > Numpy-discussion at lists.sourceforge.net > https://lists.sourceforge.net/lists/listinfo/numpy-discussion > From rob at hooft.net Tue May 23 11:52:06 2006 From: rob at hooft.net (Rob Hooft) Date: Tue May 23 11:52:06 2006 Subject: [Numpy-discussion] Numpy PR In-Reply-To: <463e11f90605231056n552712dcy9badd68a6b6701ac@mail.gmail.com> References: <1148393680.447318d0a7796@webmail.ster.kuleuven.be> <463e11f90605231056n552712dcy9badd68a6b6701ac@mail.gmail.com> Message-ID: <4473598D.5000400@hooft.net> -----BEGIN PGP SIGNED MESSAGE----- Hash: SHA1 Jonathan Taylor wrote: | I updated the python moin pages. I created a link from my own bookmark page, which is reasonably ranked in google.... ;-) If we all link to scipy.org somewhere useful, it will be raised in googles ranking automatically, and completely legitimately. Rob - -- Rob W.W. 
Hooft || rob at hooft.net || http://www.hooft.net/people/rob/ -----BEGIN PGP SIGNATURE----- Version: GnuPG v1.4.3 (GNU/Linux) Comment: Using GnuPG with Thunderbird - http://enigmail.mozdev.org iD8DBQFEc1mNH7J/Cv8rb3QRAocsAJ9cYiAy211XLBzTO7LEEhvW+3AxkgCdGdYt Cu//sOM1WzC68YKeFAZMlG0= =hZy8 -----END PGP SIGNATURE----- From jdhunter at ace.bsd.uchicago.edu Tue May 23 12:14:04 2006 From: jdhunter at ace.bsd.uchicago.edu (John Hunter) Date: Tue May 23 12:14:04 2006 Subject: [Numpy-discussion] Numpy PR In-Reply-To: <4473598D.5000400@hooft.net> (Rob Hooft's message of "Tue, 23 May 2006 20:50:53 +0200") References: <1148393680.447318d0a7796@webmail.ster.kuleuven.be> <463e11f90605231056n552712dcy9badd68a6b6701ac@mail.gmail.com> <4473598D.5000400@hooft.net> Message-ID: <877j4czhjv.fsf@peds-pc311.bsd.uchicago.edu> >>>>> "Rob" == Rob Hooft writes: Rob> Jonathan Taylor wrote: | I updated the python moin pages. Rob> I created a link from my own bookmark page, which is Rob> reasonably ranked in google.... ;-) If we all link to Rob> scipy.org somewhere useful, it will be raised in googles Rob> ranking automatically, and completely legitimately. A very good (legitimate) way to raise your google page rank is to post announcements to python-announce and python-list with the keywords you want google to match on in the subject heading. Mix these up between announces to cover the space ANN: scipy.xxx: scientific tools for python ANN: scipy.xyz: python algorithms and array methods etc..... These will be magnified across the net through RSS and mirrors, much faster than a few people making links on their homepages. Also, include the keywords you want google to match in the title field of the scipy.org html. The title is now simply scipy.org and making it something like "scientific tools for python" will help. 
JDH From Chris.Barker at noaa.gov Tue May 23 12:52:07 2006 From: Chris.Barker at noaa.gov (Christopher Barker) Date: Tue May 23 12:52:07 2006 Subject: [Numpy-discussion] Numpy PR In-Reply-To: <4473598D.5000400@hooft.net> References: <1148393680.447318d0a7796@webmail.ster.kuleuven.be> <463e11f90605231056n552712dcy9badd68a6b6701ac@mail.gmail.com> <4473598D.5000400@hooft.net> Message-ID: <447367B1.10600@noaa.gov> Rob Hooft wrote: > I created a link from my own bookmark page, which is reasonably ranked > in google.... ;-) If we all link to scipy.org somewhere useful, it will > be raised in googles ranking automatically, and completely legitimately. ideally, put your link in so that users click on "numpy" to get there, that has a large impact on Google (see google bombing, or google "failure" for an explanation) like this: numpy However, I"m not sure that the scipy site should be the first one people find. I vote for creating a good the home page for the sourceforge site at www.numpy.org. Travis' page at numeric.scipy.org would be a good start. -Chris -- Christopher Barker, Ph.D. Oceanographer NOAA/OR&R/HAZMAT (206) 526-6959 voice 7600 Sand Point Way NE (206) 526-6329 fax Seattle, WA 98115 (206) 526-6317 main reception Chris.Barker at noaa.gov From ERIC.C.SCHUG at saic.com Tue May 23 15:18:11 2006 From: ERIC.C.SCHUG at saic.com (Schug, Eric C.) Date: Tue May 23 15:18:11 2006 Subject: [Numpy-discussion] win32all dependance Message-ID: When installing Numpy 0.9.8 then running import scipy I get the following Error import testing -> failed: No module named win32pdh Base on reviewing earlier releases appears to have a dependence on win32all which was removed in numpy-0.9.6r1.win32-py2.4.exe Could this dependence be removed from this latest version? Thanks Eric Schug -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From lazycoding at gmail.com Tue May 23 21:46:05 2006 From: lazycoding at gmail.com (Wenjie He) Date: Tue May 23 21:46:05 2006 Subject: [Numpy-discussion] Fail in building numpy(svn). Message-ID: I just follow the tutorial in http://www.scipy.org/Installing_SciPy/Windows to build numpy step by step, and everything is OK except building numpy. I use cygwin with gcc to compile them all, and use pre-compiler python-2.4 in windows xp sp2. I try build in both windows native command line and cygwin with: python.exe setup.py config --compiler=mingw32 build --compiler=mingw32 bdist_wininst And the result is the same with the error which is attached below. It is my first time to build a python module, and I don't know where to start to fix the error. Can anyone show me the way, plz? Wenjie -- I'm lazy, I'm coding. http://my.donews.com/henotii -------------- next part -------------- A non-text attachment was scrubbed... Name: error.log Type: application/octet-stream Size: 10180 bytes Desc: not available URL: From schofield at ftw.at Wed May 24 01:47:22 2006 From: schofield at ftw.at (Ed Schofield) Date: Wed May 24 01:47:22 2006 Subject: [Numpy-discussion] win32all dependance In-Reply-To: References: Message-ID: <44741D41.1060700@ftw.at> Schug, Eric C. wrote: > > When installing Numpy 0.9.8 then running import scipy I get the > following Error > > import testing -> failed: No module named win32pdh > > > > Base on reviewing earlier releases appears to have a dependence on > win32all which was removed in > > numpy-0.9.6r1.win32-py2.4.exe > > Could this dependence be removed from this latest version? > Doh! My 0.9.6 rebuild was just a stop-gap measure; I commented out the offending lines in numpy/testing/utils.py, but didn't change the trunk. I should have communicated this better. Travis, shall we just remove all lines from 67 to 97? 
-- Ed From pearu at scipy.org Wed May 24 01:53:07 2006 From: pearu at scipy.org (Pearu Peterson) Date: Wed May 24 01:53:07 2006 Subject: [Numpy-discussion] win32all dependance In-Reply-To: <44741D41.1060700@ftw.at> References: <44741D41.1060700@ftw.at> Message-ID: On Wed, 24 May 2006, Ed Schofield wrote: > Schug, Eric C. wrote: >> >> When installing Numpy 0.9.8 then running import scipy I get the >> following Error >> >> import testing -> failed: No module named win32pdh >> >> >> >> Base on reviewing earlier releases appears to have a dependence on >> win32all which was removed in >> >> numpy-0.9.6r1.win32-py2.4.exe >> >> Could this dependence be removed from this latest version? >> > > Doh! My 0.9.6 rebuild was just a stop-gap measure; I commented out the > offending lines in numpy/testing/utils.py, but didn't change the trunk. > I should have communicated this better. Travis, shall we just remove > all lines from 67 to 97? No, the code in these lines is not dead. I'll commit a fix to this issue in a moment. Pearu From arnd.baecker at web.de Wed May 24 07:49:06 2006 From: arnd.baecker at web.de (Arnd Baecker) Date: Wed May 24 07:49:06 2006 Subject: [Numpy-discussion] fwd: ANN: PyX 0.9 released Message-ID: Hi, some of you might have heard about PyX, http://pyx.sourceforge.net/ which """is a Python package for the creation of PostScript and PDF files. It combines an abstraction of the PostScript drawing model with a TeX/LaTeX interface. Features * PostScript and PDF output for device independent, free scalable figures * seamless TeX/LaTeX integration * full access to PostScript features like paths, linestyles, fill patterns, transformations, clipping, bitmap inclusion, etc. * advanced geometric operations on paths like intersections, transformations, splitting, smoothing, etc. * sophisticated graph generation: modular design, pluggable axes, axes partitioning based on rational number arithmetics, flexible graph styles, etc. 
""" Below is the release announcement for PyX 0.9. Best, Arnd P.S. To simplify the plotting of numpy arrays (1D and 2D) with PyX you could have a look at pyxgraph, http://www.physik.tu-dresden.de/~baecker/python/pyxgraph.html (which is still under heavy development ...;-) ---------- Forwarded message ---------- Date: Wed, 24 May 2006 15:23:22 +0200 From: Andre Wobst To: PyX-user , PyX-devel Subject: [PyX-user] ANN: PyX 0.9 released Hi! We're proud to announce the release of PyX 0.9! After quite some time we finally managed to prepare a new major release. Many improvements and fixes are included (see the attached list of changes), but there are a couple of highlights which should be mentioned separately: This release adds a set of deformers to PyX for path manipulations like smoothing, shifting, etc. A new set of extensively documented examples describing various aspects of PyX in a cookbook-like fashion have been written. Type 1 font-stripping is now handled by a newly written Python module. The evaluation of functions for graph plotting is now left to Python. Thereby some obscure data manipulation could be removed from the bar style for handling of nested bar graphs. Transparency is now supported for PDF output. Let me try to summarize some of the *visible* changes (to existing code out there) we needed to apply to facilitate some of the major goals of this release: - The path system has passed another restructuring phase. The normpath, which allows for all the advanced path features of PyX, have been moved into a separate module normpath. The parametrization of the paths (i.e. of the normpaths) is now handled by normpathparam instances allowing for mixing arc lengths and normpathparam instances in any order in many path functions like split etc. The normpathparam instances allow the addition of arc lengths to walk along a path, for example starting at the end of a path or at an intersection point. 
- The evaluation of mathematical expressions in the classes from the graph.data module is now left to Python. While this leads to a huge set of other improvements (like being not restricted to the floats datatype anymore), there are no differences between the evaluation of the expression compared to Python anymore. As Python by default still uses integer division for integer arguments, the meaning of a function expression like "1/2" has changed dramatically. In earlier versions of PyX for this example the value 0.5 was calculated, now it becomes 0. (I'm looking forward to Python 3000 to get rid of this situation once and for all! :-)) - Bars graphs on a list of data sets, which in earlier versions have been automatically converted to use a nested bar axis, don't do that automatic conversion anymore. You may want to look at the example http://pyx.sourceforge.net/examples/bargraphs/compare.html to learn more about the new and more flexible solution. - The stroke styles linestyle and dash now use the rellength feature by default. Furthermore the rellength feature was adjusted to not modify the dash lengths for the default linewidth. Hence you're affected by this change when you have used the rellength feature before or you used linestyles on linewidths different from the default. - The bbox calcuation was modified in two respects: First it now properly takes into account the shape of bezier boxes (so the real bounding box is now used instead of the control box). Secondly PyX now takes into account the linewidths when stroking a path and adds it to the bounding box. Happy PyXing ... J?rg, Michael, and Andr? ------------------ 0.9 (2006/05/24): - most important changes (to be included in the release notes): - mathtree removal (warning about integer division) - barpos style does not build tuples for nestedbar axes automatically - new deformers for path manipulation (for smoothing, shifting, ... 
paths) - font modules: - new framework for font handling - own implementation of type1 font stripping (old pdftex code fragments removed) - complete type1 font command representation and glyph path extraction from font programs - t1code extension module (C version of de-/encoding routines used in Type 1 font files) - AFM file parser - graph modules: - data module: - mathtree removal: more flexibility due to true python expressions - default style instantiation bug (reported by Gregory Novak) - style module: - automatic subaxis tuple creation removed in barpos (create tuples in expressions now; subnames argument removed since it became pointless; adujstaxis became independend from selectstyle for all styles now) - remove multiple painting of frompath in histogram and barpos styles - fix missing attribute select when using a bar style once only (reported by Alan Isaac) - fix histograms for negative y-coordinates (reported by Dominic Ford, bug #1492548) - fix histogram to stroke lines to the baseline for steps=0 when two subsequent values are equal - add key method for histogram style (reported by Hagemann, bug #1371554) - implement a changebar style - graph, axis and style module: - support for mutual linking of axes between graphs - new domethods dependency handling - separate axis range calculation from dolayout - axis.parter module: - linear and logarthmic partitioners always need lists now (as it was documented all the time; renamed tickdist/labeldist to tickdists/labeldists; renamed tickpos/labelpos to tickpreexps/labelpreexps) - axis module: - patch to tickpos and vtickpos (reported by Wojciech Smigaj, cf. patch #1286112) - anchoredpathaxis added (suggested by Wojciech Smigaj) - properly handle range rating on inversed axis (reported by Dominic Ford, cf. 
bug #1461513) - invalidate axis partitions with a single label only by the distance rater - fallback (with warning) to linear partitioner on a small logarithmics scale - painter module: - patch to allow for tickattrs=None (reported by Wojciech Smigaj, cf. patch #1286116) - color module: - transparency support (PDF only) - conversion between colorspaces - nonlinear palettes added - the former palette must now be initialized as linearpalette - remove min and max arguments of palettes - text module: - improve escapestring to handle all ascii characters - correct vshift when text size is modified by a text.size instance - recover from exceptions (reported by Alan Isaac) - handle missing italic angle information in tfm for pdf output (reported by Brett Calcott) - allow for .def and .fd files in texmessage.loaddef (new name for texmessage.loadfd, which was restricted to .fd files) - path module: - correct closepath (do not invalidate currentpoint but set it to the beginning of the current subpath); structural rework of pathitems - calculate real bboxes for Bezier curves - fix intersection due to non-linear parametrization of bezier curves - add rotate methods to path, normpath, normsubpath, and normsubpathitems - add flushskippedline to normsubpath - add arclentoparam to normsubpath and normsubpathitems - path is no longer a canvasitem - reduce number of parameters of outputPS/outputPDF methods (do not pass context and registry) - normpath module: - contains normpath, normsubpath and normpathparam which have originally been in the path module - return "invalid" for certain path operations when the curve "speed" is below a certain threshold - normpath is no longer a canvasitem - reduce number of parameters of outputPS/outputPDF methods (do not pass context and registry) - deformer module: - rewritten smoothed to make use of the subnormpath facilities - rewritten parallel for arbitrary paths - deco module: - add basic text decorator - allow arrows at arbitrary positions 
along the path - connector module: - boxdists parameter need to be a list/tuple of two items now - changed the orientation of the angle parameters - trafo module: - renamed _apply to apply_pt - introduce _epsilon for checking the singularity of a trafo - epsfile module: - use rectclip instead of clip to remove the clipping path from the PostScript stack, which otherwise might create strange effects for certain PostScript files (reported by Gert Ingold) - dvifile module: - silently ignore TrueType fonts in font mapping files (reported by Gabriel Vasseur) - type1font module: - accept [ and ] as separators in encoding files (reported by Mojca Miklavec, cf. bug #1429524) - canvas module: - remove registerPS/registerPDF in favour of registering resourcing during the outputPS/outputPDF run - move bbox handling to registry - rename outputPS/outputPDF -> processPS/processPDF - remove set method of canvas - add a pipeGS method to directly pass the PyX output to ghostscript - allow file instances as parameter of the writeXXXfile methods (feature request #1419658 by Jason Pratt) - document modules: - allow file instances as parameter of the writeXXXfile methods (feature request #1419658 by Jason Pratt) - style module: - make rellength the default for dash styles - random notes: - switched to subversion on 2006/03/09 -- by _ _ _ Dr. Andr? Wobst / \ \ / ) wobsta at users.sourceforge.net, http://www.wobsta.de/ / _ \ \/\/ / PyX - High quality PostScript and PDF figures (_/ \_)_/\_/ with Python & TeX: visit http://pyx.sourceforge.net/ ------------------------------------------------------- All the advantages of Linux Managed Hosting--Without the Cost and Risk! Fully trained technicians. The highest number of Red Hat certifications in the hosting industry. Fanatical Support. 
Click to learn more http://sel.as-us.falkag.net/sel?cmd=lnk&kid=107521&bid=248729&dat=121642 _______________________________________________ PyX-user mailing list PyX-user at lists.sourceforge.net https://lists.sourceforge.net/lists/listinfo/pyx-user From ajikoe at gmail.com Wed May 24 08:20:19 2006 From: ajikoe at gmail.com (Pujo Aji) Date: Wed May 24 08:20:19 2006 Subject: [Numpy-discussion] error numpy 0.9.8 Message-ID: Hello, I have problem importing numpy 0.9.8: While writing: >>> import numpy I 've got this error message: import testing -> failed: No module named win32pdh This is not happened in 0.9.6 Can anybody help me? Thanks, pujo -------------- next part -------------- An HTML attachment was scrubbed... URL: From st at sigmasquared.net Wed May 24 08:35:03 2006 From: st at sigmasquared.net (Stephan Tolksdorf) Date: Wed May 24 08:35:03 2006 Subject: [Numpy-discussion] error numpy 0.9.8 In-Reply-To: References: Message-ID: <44747CDF.7050009@sigmasquared.net> The current release accidentally depends on the win32all package: sourceforge.net/projects/pywin32/ This should be fixed in the latest SVN version. Stephan From ajikoe at gmail.com Wed May 24 08:44:02 2006 From: ajikoe at gmail.com (Pujo Aji) Date: Wed May 24 08:44:02 2006 Subject: [Numpy-discussion] error numpy 0.9.8 In-Reply-To: <44747CDF.7050009@sigmasquared.net> References: <44747CDF.7050009@sigmasquared.net> Message-ID: Thanks, It works. pujo On 5/24/06, Stephan Tolksdorf wrote: > > The current release accidentally depends on the win32all package: > sourceforge.net/projects/pywin32/ > > This should be fixed in the latest SVN version. > > Stephan > -------------- next part -------------- An HTML attachment was scrubbed... URL: From jorgen.stenarson at bostream.nu Wed May 24 11:42:06 2006 From: jorgen.stenarson at bostream.nu (=?ISO-8859-1?Q?J=F6rgen_Stenarson?=) Date: Wed May 24 11:42:06 2006 Subject: [Numpy-discussion] Dot product threading? 
In-Reply-To: <6ef8f3380605180434w77092836m8c470ba993b337e8@mail.gmail.com> References: <44669A5B.8040700@gmail.com> <446BC1FA.2060404@auckland.ac.nz> <6ef8f3380605180247g3b084681o930ea6ea1bf4ecd9@mail.gmail.com> <446C571B.5050700@gmail.com> <6ef8f3380605180434w77092836m8c470ba993b337e8@mail.gmail.com> Message-ID: <4474A8BA.4040208@bostream.nu> This thread discusses one of the things highest on my wishlist for numpy. I have attached my first attempt at a code that will create a broadcasting function that will broadcast a function over arrays where the last N indices are assumed to be for use by the function, N=2 would be used for matrices. It is implemented for Numeric. /Jörgen Pau Gargallo skrev: >> Pau, can you confirm that this is the same >> as the routine you're interested in? >> >> def dot2(a,b): >> '''Returns dot product of last two dimensions of two 3-D arrays, >> threaded over first dimension.''' >> try: >> assert a.shape[1] == b.shape[2] >> assert a.shape[0] == b.shape[0] >> except AssertionError: >> print "incorrect input shapes" >> res = zeros( (a.shape[0], a.shape[1], a.shape[1]), dtype=float ) >> for i in range(a.shape[0]): >> res[i,...] = dot( a[i,...], b[i,...] ) >> return res >> > yes, that is what I would like. I would like it even with more > dimensions and with all the broadcasting rules ;-) > These can probably be achieved by building actual 'arrays of matrices' > (an array of matrix objects) and then using the ufunc machinery. > But I think that a simple dot2 function (with an other name of course) > will still very useful. > > pau > > > ------------------------------------------------------- > Using Tomcat but need to do more? Need to support web services, security? 
> _______________________________________________ > Numpy-discussion mailing list > Numpy-discussion at lists.sourceforge.net > https://lists.sourceforge.net/lists/listinfo/numpy-discussion > -------------- next part -------------- An embedded and charset-unspecified text was scrubbed... Name: jsarray.py URL: From gkmohan at gmail.com Wed May 24 16:40:05 2006 From: gkmohan at gmail.com (Krishna Mohan Gundu) Date: Wed May 24 16:40:05 2006 Subject: [Numpy-discussion] numpy-0.9.5 build is not clean Message-ID: <70ec82800605241639g5ce7ddeay17b596c1b4335ab4@mail.gmail.com> Hi, I am trying to build numpy-0.9.5, downloaded from the sourceforge download page, as higher versions are not yet tested for pygsl. The build seems to be broken. I uninstall the existing numpy and start the build from scratch, but I get the following errors when I import numpy after install. ==== >>> from numpy import * import core -> failed: module compiled against version 90504 of C-API but this version of numpy is 90500 import random -> failed: 'module' object has no attribute 'dtype' import linalg -> failed: module compiled against version 90504 of C-API but this version of numpy is 90500 ==== Any help is appreciated. Am I doing something wrong or is it known that this build is broken? thanks, Krishna. From chanley at stsci.edu Wed May 24 17:32:33 2006 From: chanley at stsci.edu (Christopher Hanley) Date: Wed May 24 17:32:33 2006 Subject: [Numpy-discussion] need help building on windows Message-ID: <20060524203136.CJJ76221@comet.stsci.edu> Hi, I am new to working on the Windows XP OS and I am having a problem building numpy from source. I am attempting to build using the MinGW compilers on a Windows XP box which has Python 2.4.1 installed.
I do not have nor do I intend to use any optimized libraries like ATLAS. I have checked out numpy revision 2542. This is as clean a build as you could probably get. I am attaching my build log to this message. If anyone has any helpful hints they would be most appreciated. Thanks, Chris -------------- next part -------------- A non-text attachment was scrubbed... Name: build.log Type: application/octet-stream Size: 28662 bytes Desc: not available URL: From robert.kern at gmail.com Wed May 24 19:22:05 2006 From: robert.kern at gmail.com (Robert Kern) Date: Wed May 24 19:22:05 2006 Subject: [Numpy-discussion] Re: need help building on windows In-Reply-To: <20060524203136.CJJ76221@comet.stsci.edu> References: <20060524203136.CJJ76221@comet.stsci.edu> Message-ID: Christopher Hanley wrote: > Hi, > > I am new to working on the Windows XP OS and I am having a problem building numpy from source. I am attempting to build using the MinGW compilers on a Windows XP box which has Python 2.4.1 installed. I do not have nor do I intend to use any optimized libraries like ATLAS. I have checked out numpy revision 2542. This is as clean a build as you could probably get. I am attaching my build log to this message. If anyone has any helpful hints they would be most appreciated. It's a known issue: http://projects.scipy.org/scipy/numpy/ticket/129 -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." 
-- Umberto Eco From strawman at astraw.com Wed May 24 22:00:10 2006 From: strawman at astraw.com (Andrew Straw) Date: Wed May 24 22:00:10 2006 Subject: [Numpy-discussion] numpy-0.9.5 build is not clean In-Reply-To: <70ec82800605241639g5ce7ddeay17b596c1b4335ab4@mail.gmail.com> References: <70ec82800605241639g5ce7ddeay17b596c1b4335ab4@mail.gmail.com> Message-ID: <447539B2.80206@astraw.com> Dear Krishna, it looks like there are some mixed versions of numpy floating around on your system. Before building, remove the "build" directory completely. Krishna Mohan Gundu wrote: > Hi, > > I am trying to build numpy-0.9.5, downloaded from sourceforge download > page, as higher versions are not yet tested for pygsl. The build seems > to be broken. I uninstall existing numpy and start build from scratch > but I get the following errors when I import numpy after install. > > ==== > >>>> from numpy import * >>> > import core -> failed: module compiled against version 90504 of C-API > but this version of numpy is 90500 > import random -> failed: 'module' object has no attribute 'dtype' > import linalg -> failed: module compiled against version 90504 of > C-API but this version of numpy is 90500 > ==== > > Any help is appreciated. Am I doing something wrong or is it known > that this build is broken? > > thanks, > Krishna.
From gkmohan at gmail.com Wed May 24 22:41:04 2006 From: gkmohan at gmail.com (Krishna Mohan Gundu) Date: Wed May 24 22:41:04 2006 Subject: [Numpy-discussion] numpy-0.9.5 build is not clean In-Reply-To: <447539B2.80206@astraw.com> References: <70ec82800605241639g5ce7ddeay17b596c1b4335ab4@mail.gmail.com> <447539B2.80206@astraw.com> Message-ID: <70ec82800605242240v6cf2f893j53f7ec6b0511b79a@mail.gmail.com> Dear Andrew, Thanks for your reply. As I said earlier, I deleted the existing numpy installation and the build directories. I am more than confident that I did it right. Is there any way I can prove myself wrong? I also tried importing umath.so from the build directory === $ cd numpy-0.9.5/build/lib.linux-x86_64-2.4/numpy/core $ ls -l umath.so -rwxr-xr-x 1 krishna users 463541 May 22 17:46 umath.so $ python Python 2.4.3 (#1, Apr 8 2006, 19:10:42) [GCC 3.2.3 20030502 (Red Hat Linux 3.2.3-49)] on linux2 Type "help", "copyright", "credits" or "license" for more information. >>> import umath Traceback (most recent call last): File "", line 1, in ? RuntimeError: module compiled against version 90500 of C-API but this version of numpy is 90504 >>> === So something is wrong with the build for sure. Could there be anything wrong other than not deleting the build directory? thanks, Krishna. On 5/24/06, Andrew Straw wrote: > Dear Krishna, it looks like there are some mixed versions of numpy > floating around on your system. Before building, remove the "build" > directory completely. > > Krishna Mohan Gundu wrote: > > > Hi, > > > > I am trying to build numpy-0.9.5, downloaded from sourceforge download > > page, as higher versions are not yet tested for pygsl. The build seems > > to be broken.
I uninstall existing numpy and start build from scratch > > but I get the following errors when I import numpy after install. > > > > ==== > > > >>>> from numpy import * > >>> > > import core -> failed: module compiled against version 90504 of C-API > > but this version of numpy is 90500 > > import random -> failed: 'module' object has no attribute 'dtype' > > import linalg -> failed: module compiled against version 90504 of > > C-API but this version of numpy is 90500 > > ==== > > > > Any help is appreciated. Am I doing something wrong or is it known > > that this build is broken? > > > > thanks, > > Krishna. From rudolphv at gmail.com Thu May 25 01:36:08 2006 From: rudolphv at gmail.com (Rudolph van der Merwe) Date: Thu May 25 01:36:08 2006 Subject: [Numpy-discussion] Numpy 0.9.8 fails under Mac OS X Message-ID: <97670e910605250135l609b6b67k27f1b5d87ea17666@mail.gmail.com> I built and installed Numpy 0.9.8 successfully from the source using python setup.py build sudo python setup.py install When I run the unit-test though, it fails with the following error: [~] python ActivePython 2.4.3 Build 11 (ActiveState Software Inc.) based on Python 2.4.3 (#1, Apr 3 2006, 18:07:18) [GCC 3.3 20030304 (Apple Computer, Inc. build 1666)] on darwin Type "help", "copyright", "credits" or "license" for more information.
>>> import numpy >>> numpy.__version__ '0.9.8' >>> numpy.test(1,1) Found 5 tests for numpy.distutils.misc_util Found 3 tests for numpy.lib.getlimits Found 30 tests for numpy.core.numerictypes Found 13 tests for numpy.core.umath Found 1 tests for numpy.core.scalarmath Found 8 tests for numpy.lib.arraysetops Found 42 tests for numpy.lib.type_check Found 95 tests for numpy.core.multiarray Found 3 tests for numpy.dft.helper Found 36 tests for numpy.core.ma Found 9 tests for numpy.lib.twodim_base Found 2 tests for numpy.core.oldnumeric Found 9 tests for numpy.core.defmatrix Found 1 tests for numpy.lib.ufunclike Found 35 tests for numpy.lib.function_base Found 1 tests for numpy.lib.polynomial Found 6 tests for numpy.core.records Found 19 tests for numpy.core.numeric Found 4 tests for numpy.lib.index_tricks Found 46 tests for numpy.lib.shape_base Found 0 tests for __main__ ...................................................python(9463) malloc: *** Deallocation of a pointer not malloced: 0xd668; This could be a double free(), or free() called with the middle of an allocated block; Try setting environment variable MallocHelp to see tools to help debug python(9463) malloc: *** Deallocation of a pointer not malloced: 0xd4e8; This could be a double free(), or free() called with the middle of an allocated block; Try setting environment variable MallocHelp to see tools to help debug python(9463) malloc: *** Deallocation of a pointer not malloced: 0xd650; This could be a double free(), or free() called with the middle of an allocated block; Try setting environment variable MallocHelp to see tools to help debug python(9463) malloc: *** Deallocation of a pointer not malloced: 0xd560; This could be a double free(), or free() called with the middle of an allocated block; Try setting environment variable MallocHelp to see tools to help debug python(9463) malloc: *** Deallocation of a pointer not malloced: 0xd4a0; This could be a double free(), or free() called with the middle of an 
allocated block; Try setting environment variable MallocHelp to see tools to help debug python(9463) malloc: *** Deallocation of a pointer not malloced: 0xd3f8; This could be a double free(), or free() called with the middle of an allocated block; Try setting environment variable MallocHelp to see tools to help debug python(9463) malloc: *** Deallocation of a pointer not malloced: 0xd470; This could be a double free(), or free() called with the middle of an allocated block; Try setting environment variable MallocHelp to see tools to help debug python(9463) malloc: *** Deallocation of a pointer not malloced: 0xd320; This could be a double free(), or free() called with the middle of an allocated block; Try setting environment variable MallocHelp to see tools to help debug python(9463) malloc: *** Deallocation of a pointer not malloced: 0xd728; This could be a double free(), or free() called with the middle of an allocated block; Try setting environment variable MallocHelp to see tools to help debug python(9463) malloc: *** Deallocation of a pointer not malloced: 0xd440; This could be a double free(), or free() called with the middle of an allocated block; Try setting environment variable MallocHelp to see tools to help debug python(9463) malloc: *** Deallocation of a pointer not malloced: 0xd410; This could be a double free(), or free() called with the middle of an allocated block; Try setting environment variable MallocHelp to see tools to help debug python(9463) malloc: *** Deallocation of a pointer not malloced: 0xd590; This could be a double free(), or free() called with the middle of an allocated block; Try setting environment variable MallocHelp to see tools to help debug python(9463) malloc: *** Deallocation of a pointer not malloced: 0xd458; This could be a double free(), or free() called with the middle of an allocated block; Try setting environment variable MallocHelp to see tools to help debug 
F............................................................................................................................................................................................................................................................................................................................ ====================================================================== FAIL: check_types (numpy.core.tests.test_scalarmath.test_types) ---------------------------------------------------------------------- Traceback (most recent call last): File "/Library/Frameworks/Python.framework/Versions/2.4/lib/python2.4/site-packages/numpy/core/tests/test_scalarmath.py", line 63, in check_types assert val.dtype.num == typeconv[k,l] and \ AssertionError: error with (13,14) ---------------------------------------------------------------------- Ran 368 tests in 1.517s FAILED (failures=1) Any idea what is causing this? Thanks Rudolph -- Rudolph van der Merwe From aisaac at american.edu Thu May 25 11:11:07 2006 From: aisaac at american.edu (Alan G Isaac) Date: Thu May 25 11:11:07 2006 Subject: [Numpy-discussion] numpy.sin Message-ID: I am a user, not a numerics type, so this is undoubtedly a naive question. Might the sin function be written to give greater accuracy for large real numbers? It seems that significant digits are in some sense being discarded needlessly. I.e., compare these: >>> sin(linspace(0,10*pi,11)) array([ 0.00000000e+00, 1.22460635e-16, -2.44921271e-16, 3.67381906e-16, -4.89842542e-16, 6.12303177e-16, -7.34763812e-16, 8.57224448e-16, -9.79685083e-16, 1.10214572e-15, -1.22460635e-15]) >>> sin(linspace(0,10*pi,11)%(2*pi)) array([ 0.00000000e+00, 1.22460635e-16, 0.00000000e+00, 1.22460635e-16, 0.00000000e+00, 1.22460635e-16, 0.00000000e+00, 1.22460635e-16, 0.00000000e+00, 1.22460635e-16, 0.00000000e+00]) Just wondering. 
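The same effect can be reproduced in plain Python with just the math module (a minimal sketch, independent of numpy; the exact tiny values depend on the platform's libm, so only their rough size matters):

```python
import math

# 10*math.pi is exactly representable, but it differs from the real
# 10*pi by about 1.2e-15, and sin has slope +/-1 near its zeros, so
# the "error" below is really a gap in the argument, not in sin():
x = 10 * math.pi
y = math.sin(x)
print(y)  # tiny but nonzero, around -1.2e-15

# Reducing by the same floating-point 2*pi gives exactly 0.0 here --
# prettier, but only because the same approximate pi cancels out:
print(math.sin(x % (2 * math.pi)))  # 0.0
```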
Cheers, Alan Isaac From robert.kern at gmail.com Thu May 25 11:37:09 2006 From: robert.kern at gmail.com (Robert Kern) Date: Thu May 25 11:37:09 2006 Subject: [Numpy-discussion] Re: numpy.sin In-Reply-To: References: Message-ID: Alan G Isaac wrote: > I am a user, not a numerics type, > so this is undoubtedly a naive question. > > Might the sin function be written to give greater > accuracy for large real numbers? It seems that significant > digits are in some sense being discarded needlessly. Not really. The floating point representation of pi is not exact. The problem only gets worse when you multiply it with something. The method you showed of using % (2*pi) is only accurate when the values are created by multiplying the same pi by another value. Otherwise, it just introduces another source of error, I think. This is one of the few places where a version of trig functions that directly operate on degrees is preferred. 360.0*n is exactly representable by floating point arithmetic until n~=12509998964918 (give or take a power of two). Doing % 360 can be done exactly. -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From aisaac at american.edu Thu May 25 11:47:08 2006 From: aisaac at american.edu (Alan G Isaac) Date: Thu May 25 11:47:08 2006 Subject: [Numpy-discussion] Re: numpy.sin In-Reply-To: References: Message-ID: On Thu, 25 May 2006, Robert Kern apparently wrote: > The method you showed of using % (2*pi) is only accurate > when the values are created by multiplying the same pi by > another value. Otherwise, it just introduces another > source of error, I think. Just to be clear, I meant not (!)
to presumptuously propose a method for improving things, but just to illustrate the issue: both the loss of accuracy, and the obvious conceptual point that there is (in an abstract sense, at least) no need for sin(x) and sin(x+ 2*pi) to differ. Thanks, Alan From rob at hooft.net Thu May 25 11:53:06 2006 From: rob at hooft.net (Rob Hooft) Date: Thu May 25 11:53:06 2006 Subject: [Numpy-discussion] Re: numpy.sin In-Reply-To: References: Message-ID: <4475FCE2.6000802@hooft.net> Robert Kern wrote: | Alan G Isaac wrote: | |>I am a user, not a numerics type, |>so this is undoubtedly a naive question. |> |>Might the sin function be written to give greater |>accuracy for large real numbers? It seems that significant |>digits are in some sense being discarded needlessly. | | | Not really. The floating point representation of pi is not exact. The problem | only gets worse when you multiply it with something. The method you showed of | using % (2*pi) is only accurate when the values are created by multiplying the | same pi by another value. Otherwise, it just introduces another source of error, | I think. | | This is one of the few places where a version of trig functions that directly | operate on degrees is preferred. 360.0*n is exactly representable by floating | point arithmetic until n~=12509998964918 (give or take a power of two). Doing % | 360 can be done exactly. This reminds me of a story Richard Feynman tells in his autobiography. He used to say: "if you can pose a mathematical question in 10 seconds, I can solve it with 10% accuracy in one minute just calculating in my head". This worked for a long time, until someone told him "please calculate the sine of a million". Mantissa bits are used up by the multiple of two-pi, and those are lost from the back of the calculated value. Calculating the sine of a million with the same precision as the sine of zero requires 20 more bits of accuracy. Rob -- Rob W.W.
Hooft || rob at hooft.net || http://www.hooft.net/people/rob/ From alexander.belopolsky at gmail.com Thu May 25 12:02:03 2006 From: alexander.belopolsky at gmail.com (Alexander Belopolsky) Date: Thu May 25 12:02:03 2006 Subject: [Numpy-discussion] Re: numpy.sin In-Reply-To: <4475FCE2.6000802@hooft.net> References: <4475FCE2.6000802@hooft.net> Message-ID: This is not really a numpy issue, but a general floating point problem. Consider this: >>> x=linspace(0,10*pi,11) >>> all(array(map(math.sin, x))==sin(x)) True If anything can be improved, that would be the C math library. On 5/25/06, Rob Hooft wrote: > Robert Kern wrote: > | Alan G Isaac wrote: > | > |>I am a user, not a numerics type, > |>so this is undoubtedly a naive question. > |> > |>Might the sin function be written to give greater > |>accuracy for large real numbers? It seems that significant > |>digits are in some sense being discarded needlessly. > | > | > | Not really. The floating point representation of pi is not exact. The > problem | only gets worse when you multiply it with something. The method you > showed of | using % (2*pi) is only accurate when the values are created by > multiplying the | same pi by another value. Otherwise, it just introduces another source > of error, | I think. > | > | This is one of the few places where a version of trig functions that > directly | operate on degrees is preferred. 360.0*n is exactly representable by > floating | point arithmetic until n~=12509998964918 (give or take a power of > two). Doing % | 360 can be done exactly. > > This reminds me of a story Richard Feynman tells in his autobiography.
> He used to say: "if you can pose a mathematical question in 10 seconds, > I can solve it with 10% accuracy in one minute just calculating in my > head". This worked for a long time, until someone told him "please > calculate the sine of a million". > > Actual mantissa bits are used by the multiple of two-pi, and those are > lost at the back of the calculated value. Calculating the sine of a > million with the same precision as the sine of zero requires 20 more > bits of accuracy. > > Rob > -- > Rob W.W. Hooft || rob at hooft.net || http://www.hooft.net/people/rob/ From robert.kern at gmail.com Thu May 25 12:10:01 2006 From: robert.kern at gmail.com (Robert Kern) Date: Thu May 25 12:10:01 2006 Subject: [Numpy-discussion] Re: numpy.sin In-Reply-To: References: Message-ID: Alan G Isaac wrote: > On Thu, 25 May 2006, Robert Kern apparently wrote: > >>The method you showed of using % (2*pi) is only accurate >>when the values are created by multiplying the same pi by >>another value. Otherwise, it just introduces another >>source of error, I think. > > Just to be clear, I meant not (!)
to presumptuously propose > a method for improving things, but just to illustrate the > issue: both the loss of accuracy, and the obvious conceptual > point that there is (in an abstract sense, at least) no need > for sin(x) and sin(x+ 2*pi) to differ. But numpy doesn't deal with abstract senses. It deals with concrete floating point arithmetic. The best value you can *use* for pi in that expression is not the real irrational π. And the best floating-point algorithm you can use for sin() won't (and shouldn't!) assume that sin(x) will equal sin(x + 2*pi). That your demonstration results in the desired exact 0.0 for multiples of 2*pi is an accident. The results for values other than integer multiples of pi will be as wrong or more wrong. It does not demonstrate that floating-point sin(x) and sin(x + 2*pi) need not differ. -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From aisaac at american.edu Thu May 25 12:11:17 2006 From: aisaac at american.edu (Alan G Isaac) Date: Thu May 25 12:11:17 2006 Subject: [Numpy-discussion] Re: numpy.sin In-Reply-To: References: <4475FCE2.6000802@hooft.net> Message-ID: On Thu, 25 May 2006, Alexander Belopolsky apparently wrote: > This is not really a numpy issue, but general floating point problem. > Consider this: >>>> x=linspace(0,10*pi,11) >>>> all(array(map(math.sin, x))==sin(x)) > True I think this misses the point. I was not suggesting numpy results differ from the C math library results.
>>> x1=sin(linspace(0,10*pi,21)) >>> x2=sin(linspace(0,10*pi,21)%(2*pi)) >>> all(x1==x2) False >>> x1 array([ 0.00000000e+00, 1.00000000e+00, 1.22460635e-16, -1.00000000e+00, -2.44921271e-16, 1.00000000e+00, 3.67381906e-16, -1.00000000e+00, -4.89842542e-16, 1.00000000e+00, 6.12303177e-16, -1.00000000e+00, -7.34763812e-16, 1.00000000e+00, 8.57224448e-16, -1.00000000e+00, -9.79685083e-16, 1.00000000e+00, 1.10214572e-15, -1.00000000e+00, -1.22460635e-15]) >>> x2 array([ 0.00000000e+00, 1.00000000e+00, 1.22460635e-16, -1.00000000e+00, 0.00000000e+00, 1.00000000e+00, 1.22460635e-16, -1.00000000e+00, 0.00000000e+00, 1.00000000e+00, 1.22460635e-16, -1.00000000e+00, 0.00000000e+00, 1.00000000e+00, 1.22460635e-16, -1.00000000e+00, 0.00000000e+00, 1.00000000e+00, 1.22460635e-16, -1.00000000e+00, 0.00000000e+00]) I'd rather have x2: I'm just asking if there is anything exploitable here. Robert suggests not. Cheers, Alan Isaac From aisaac at american.edu Thu May 25 12:16:13 2006 From: aisaac at american.edu (Alan G Isaac) Date: Thu May 25 12:16:13 2006 Subject: [Numpy-discussion] Re: numpy.sin In-Reply-To: References: Message-ID: On Thu, 25 May 2006, Robert Kern apparently wrote: > But numpy doesn't deal with abstract senses. It deals with > concrete floating point arithmetic. Of course. > That your demonstration results in the desired exact 0.0 > for multiples of 2*pi is an accident. The results for > values other than integer multiples of pi will be as wrong > or more wrong. It seems that a continuity argument should undermine that as a general claim. Right? But like I said: I was just wondering if there was anything exploitable here. 
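One possibly exploitable special case is Robert's earlier degrees suggestion, sketched here in plain Python (sin_deg is a made-up helper for illustration, not anything in numpy or the math module):

```python
import math

def sin_deg(x_deg):
    # Reduce in degrees first: 360.0*n is an exact double up to about
    # 2**53/360, so the reduction loses nothing for such inputs; only
    # then convert the small remainder to radians.
    return math.sin(math.radians(x_deg % 360.0))

# A huge argument that is still exactly representable; the degree
# reduction recovers exactly 30.0, so no phase information is lost:
print(sin_deg(360.0 * 10**12 + 30.0))  # ~0.5
```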
Thanks, Alan From robert.kern at gmail.com Thu May 25 12:29:06 2006 From: robert.kern at gmail.com (Robert Kern) Date: Thu May 25 12:29:06 2006 Subject: [Numpy-discussion] Re: numpy.sin In-Reply-To: References: Message-ID: Alan G Isaac wrote: > On Thu, 25 May 2006, Robert Kern apparently wrote: >>That your demonstration results in the desired exact 0.0 >>for multiples of 2*pi is an accident. The results for >>values other than integer multiples of pi will be as wrong >>or more wrong. > > It seems that a continuity argument should undermine that as > a general claim. Right? What continuity? This is floating-point arithmetic. [~]$ bc -l bc 1.06 Copyright 1991-1994, 1997, 1998, 2000 Free Software Foundation, Inc. This is free software with ABSOLUTELY NO WARRANTY. For details type `warranty'. scale = 50 s(1000000) -.34999350217129295211765248678077146906140660532871 [~]$ python Python 2.4.1 (#2, Mar 31 2005, 00:05:10) [GCC 3.3 20030304 (Apple Computer, Inc. build 1666)] on darwin Type "help", "copyright", "credits" or "license" for more information. >>> from numpy import * >>> sin(1000000.0) -0.34999350217129299 >>> sin(1000000.0 % (2*pi)) -0.34999350213477698 >>> > But like I said: I was just wondering if there was anything > exploitable here. Like I said: not really. -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From robert.kern at gmail.com Thu May 25 12:39:07 2006 From: robert.kern at gmail.com (Robert Kern) Date: Thu May 25 12:39:07 2006 Subject: [Numpy-discussion] Re: numpy.sin In-Reply-To: References: Message-ID: Alan G Isaac wrote: > On Thu, 25 May 2006, Robert Kern apparently wrote: >>That your demonstration results in the desired exact 0.0 >>for multiples of 2*pi is an accident. The results for >>values other than integer multiples of pi will be as wrong >>or more wrong. 
> > It seems that a continuity argument should undermine that as > a general claim. Right? Let me clarify. You created your values by multiplying the floating-point approximation pi by an integer value, so when you perform the operation % (2*pi) on those values, the result happens to be exact or nearly so, but only because you used the same approximation of pi. Doing that operation on an arbitrary value (like 1000000) only introduces more error to the calculation. Floating-point sin(1000000.0) should return a value within eps (~2**-52) of the true, real-valued function sin(1000000). Calculating (1000000 % (2*pi)) introduces error in two places: the approximation pi and the operation %. A floating-point implementation of sin(.) will return a value within eps of the real sin(.) of the value that is the result of the floating-point operation (1000000 % (2*pi)), which already has some error accumulated. -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From alexander.belopolsky at gmail.com Thu May 25 13:14:13 2006 From: alexander.belopolsky at gmail.com (Alexander Belopolsky) Date: Thu May 25 13:14:13 2006 Subject: [Numpy-discussion] Re: numpy.sin In-Reply-To: References: Message-ID: I agree with Robert. In fact, on FPUs such as x87, where floating point registers have extended precision, sin(x % (2*pi)) will give you a less precise answer than sin(x). The improved precision that you see is illusory because, given that the 64-bit pi is not the precise mathematical pi, sin(2*pi) is not 0. On 5/25/06, Robert Kern wrote: > Alan G Isaac wrote: > > On Thu, 25 May 2006, Robert Kern apparently wrote: > > >>That your demonstration results in the desired exact 0.0 > >>for multiples of 2*pi is an accident. The results for > >>values other than integer multiples of pi will be as wrong > >>or more wrong.
> > > > It seems that a continuity argument should undermine that as > > a general claim. Right? > > Let me clarify. Since you created your values by multiplying the floating-point > approximation pi by an integer value. When you perform the operation % (2*pi) on > those values, the result happens to be exact or nearly so but only because you > used the same approximation of pi. Doing that operation on an arbitrary value > (like 1000000) only introduces more error to the calculation. Floating-point > sin(1000000.0) should return a value within eps (~2**-52) of the true, > real-valued function sin(1000000). Calculating (1000000 % (2*pi)) introduces > error in two places: the approximation pi and the operation %. A floating-point > implementation of sin(.) will return a value within eps of the real sin(.) of > the value that is the result of the floating-point operation (1000000 % (2*pi)), > which already has some error accumulated. > > -- > Robert Kern > > "I have come to believe that the whole world is an enigma, a harmless enigma > that is made terrible by our own mad attempt to interpret it as though it had > an underlying truth." > -- Umberto Eco From aisaac at american.edu Thu May 25 13:18:05 2006 From: aisaac at american.edu (Alan G Isaac) Date: Thu May 25 13:18:05 2006 Subject: [Numpy-discussion] Re: numpy.sin In-Reply-To: References: Message-ID: On Thu, 25 May 2006, Robert Kern apparently wrote: > What continuity?
This is floating-point arithmetic. Sure, but a continuity argument suggests (in the absence of specific floating point reasons to doubt it) that a better approximation at one point will mean better approximations nearby. E.g., >>> epsilon = 0.00001 >>> sin(100*pi+epsilon) 9.999999976550551e-006 >>> sin((100*pi+epsilon)%(2*pi)) 9.9999999887966145e-006 Compare to the bc result of 9.9999999998333333e-006 bc 1.05 Copyright 1991, 1992, 1993, 1994, 1997, 1998 Free Software Foundation, Inc. This is free software with ABSOLUTELY NO WARRANTY. For details type `warranty'. scale = 50 epsilon = 0.00001 s(100*pi + epsilon) .00000999999999983333333333416666666666468253968254 Cheers, Alan From aisaac at american.edu Thu May 25 13:29:04 2006 From: aisaac at american.edu (Alan G Isaac) Date: Thu May 25 13:29:04 2006 Subject: [Numpy-discussion] Re: numpy.sin In-Reply-To: References: Message-ID: On Thu, 25 May 2006, Robert Kern apparently wrote: > Let me clarify. Since you created your values by > multiplying the floating-point approximation pi by an > integer value. When you perform the operation % (2*pi) on > those values, the result happens to be exact or nearly so > but only because you used the same approximation of pi. > Doing that operation on an arbitrary value (like 1000000) > only introduces more error to the calculation. > Floating-point sin(1000000.0) should return a value within > eps (~2**-52) of the true, real-valued function > sin(1000000). Calculating (1000000 % (2*pi)) introduces > error in two places: the approximation pi and the > operation %. A floating-point implementation of sin(.) > will return a value within eps of the real sin(.) of the > value that is the result of the floating-point operation > (1000000 % (2*pi)), which already has some error > accumulated. I do not think that we have any disagreement here, except possibly over eps, which is not constant for different argument sizes.
So I wondered if there was a tradeoff: smaller eps (from smaller argument) for the cost of computational error in an additional operation. Anyway, thanks for the feedback on this. Cheers, Alan From robert.kern at gmail.com Thu May 25 13:43:12 2006 From: robert.kern at gmail.com (Robert Kern) Date: Thu May 25 13:43:12 2006 Subject: [Numpy-discussion] Re: numpy.sin In-Reply-To: References: Message-ID: Alan G Isaac wrote: > On Thu, 25 May 2006, Robert Kern apparently wrote: > >>What continuity? This is floating-point arithmetic. > > Sure, but a continuity argument suggests (in the absence of > specific floating point reasons to doubt it) that a better > approximation at one point will mean better approximations > nearby. E.g., > >>>>epsilon = 0.00001 >>>>sin(100*pi+epsilon) > > 9.999999976550551e-006 > >>>>sin((100*pi+epsilon)%(2*pi)) > > 9.9999999887966145e-006 > > Compare to the bc result of > 9.9999999998333333e-006 > > bc 1.05 > Copyright 1991, 1992, 1993, 1994, 1997, 1998 Free Software Foundation, Inc. > This is free software with ABSOLUTELY NO WARRANTY. > For details type `warranty'. > scale = 50 > epsilon = 0.00001 > s(100*pi + epsilon) > .00000999999999983333333333416666666666468253968254 You aren't using bc correctly. bc 1.06 Copyright 1991-1994, 1997, 1998, 2000 Free Software Foundation, Inc. This is free software with ABSOLUTELY NO WARRANTY. For details type `warranty'. 100*pi 0 If you know that you are epsilon from n*2*π (the real number, not the floating point one), you should just be calculating sin(epsilon). Usually, you do not know this, and % (2*pi) will not tell you this. (100*pi + epsilon) is not the same thing as (100*π + epsilon). FWIW, for the calculation that you did in bc, numpy.sin() gives the same results (up to the last digit): >>> from numpy import * >>> sin(0.00001) 9.9999999998333335e-06 You wanted to know whether there is something exploitable to improve the accuracy of numpy.sin(). In general, there is not.
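[Editor's note] Kern's claim is easy to probe with plain Python floats (a rough sketch using the standard math module; exact digits are platform-dependent):

```python
import math

two_pi = 2 * math.pi

# Built from the SAME float approximation of pi, the reduction comes out
# (nearly) exact -- but only by construction, as described in the thread.
eps = 1.0 / 2**16
x = 100 * math.pi + eps
print(abs(x % two_pi - eps))   # tiny, but nonzero in general

# For an arbitrary large argument, % (2*pi) just injects error on top of
# the library's own, far more careful, internal argument reduction.
print(abs(math.sin(1_000_000.0) - math.sin(1_000_000.0 % two_pi)))
# nonzero: the two argument reductions disagree
```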
However, if you know the difference between your value and an integer multiple of the real number 2*π, then you can do your floating-point calculation on that difference. Even so, you will not in general get an improvement by using % (2*pi) to calculate that difference. -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From ndarray at mac.com Thu May 25 14:29:07 2006 From: ndarray at mac.com (Sasha) Date: Thu May 25 14:29:07 2006 Subject: [Numpy-discussion] Re: numpy.sin In-Reply-To: References: Message-ID: This example looks like an artifact of decimal to binary conversion. Consider this: >>> epsilon = 1./2**16 >>> epsilon 1.52587890625e-05 >>> sin(100*pi+epsilon) 1.5258789063872671e-05 >>> sin((100*pi+epsilon)%(2*pi)) 1.5258789076118735e-05 and in bc: scale=50 epsilon = 1./2.^16 s(100*pi + epsilon) .00001525878906190788105354014301687863346141310239 On 5/25/06, Alan G Isaac wrote: > On Thu, 25 May 2006, Robert Kern apparently wrote: > > What continuity? This is floating-point arithmetic. > > Sure, but a continuity argument suggests (in the absence of > specific floating point reasons to doubt it) that a better > approximation at one point will mean better approximations > nearby. E.g., > > >>> epsilon = 0.00001 > >>> sin(100*pi+epsilon) > 9.999999976550551e-006 > >>> sin((100*pi+epsilon)%(2*pi)) > 9.9999999887966145e-006 > > Compare to the bc result of > 9.9999999998333333e-006 > > > bc 1.05 > Copyright 1991, 1992, 1993, 1994, 1997, 1998 Free Software Foundation, Inc. > This is free software with ABSOLUTELY NO WARRANTY. > For details type `warranty'.
> scale = 50 > epsilon = 0.00001 > s(100*pi + epsilon) > .00000999999999983333333333416666666666468253968254 > > Cheers, > Alan From yi at yiqiang.net Thu May 25 15:34:01 2006 From: yi at yiqiang.net (Yi Qiang) Date: Thu May 25 15:34:01 2006 Subject: [Numpy-discussion] Problem compiling numpy against ATLAS on amd64 (Mandriva 2006) Message-ID: <447630A0.60205@yiqiang.net> Hi list, I searched the archives and found various threads regarding this issue and I have not found a solution there.
software versions: gfortran 4.0.1 atlas 3.6.0 lapack 3.0 Basically numpy spits out this message when I try to compile it: gcc: numpy/linalg/lapack_litemodule.c /usr/bin/gfortran -shared build/temp.linux-x86_64-2.4/numpy/linalg/lapack_litemodule.o -L/usr/local/lib/atlas -llapack -lptf77blas -lptcblas -latlas -lgfortran -o build/lib.linux-x86_64-2.4/numpy/linalg/lapack_lite.so /usr/bin/ld: /usr/local/lib/atlas/liblapack.a(dlamch.o): relocation R_X86_64_32S against `a local symbol' can not be used when making a shared object; recompile with -fPIC /usr/local/lib/atlas/liblapack.a: could not read symbols: Bad value collect2: ld returned 1 exit status /usr/bin/ld: /usr/local/lib/atlas/liblapack.a(dlamch.o): relocation R_X86_64_32S against `a local symbol' can not be used when making a shared object; recompile with -fPIC /usr/local/lib/atlas/liblapack.a: could not read symbols: Bad value collect2: ld returned 1 exit status error: Command "/usr/bin/gfortran -shared build/temp.linux-x86_64-2.4/numpy/linalg/lapack_litemodule.o -L/usr/local/lib/atlas -llapack -lptf77blas -lptcblas -latlas -lgfortran -o build/lib.linux-x86_64-2.4/numpy/linalg/lapack_lite.so" failed with exit status 1 However I have compiled all the software explicitly with the -fPIC flag on. Attached is my make.inc for LAPACK and my Makefile for ATLAS. I followed these instructions to create a hybrid LAPACK/ATLAS archive: http://math-atlas.sourceforge.net/errata.html#completelp Interestingly enough, if I just use the bare version of ATLAS, numpy compiles fine. If I use the bare version of LAPACK, numpy compiles fine . Any help would be greatly appreciated. 
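[Editor's note] The linker message means some object inside liblapack.a was built without -fPIC. A self-contained way to see (and check for) the offending relocation type on amd64 (a sketch; the file names are made up, and gcc/readelf are assumed to be available):

```shell
# Build the same tiny function with and without -fPIC and compare
# relocations. Objects destined for a shared library must avoid the
# absolute R_X86_64_32/R_X86_64_32S relocations that ld complains about.
cat > pic_demo.c <<'EOF'
static double cached = 0.0;
double *where(void) { return &cached; }
EOF
gcc -O0 -fno-pic -fno-pie -c pic_demo.c -o nopic.o   # like the failing dlamch.o
gcc -O0 -fPIC -c pic_demo.c -o pic.o
readelf --relocs nopic.o    # shows absolute R_X86_64_32/32S entries
readelf --relocs pic.o      # only PC-relative (PIC-safe) relocations
```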
-Yi From yi at yiqiang.net Thu May 25 15:41:07 2006 From: yi at yiqiang.net (Yi Qiang) Date: Thu May 25 15:41:07 2006 Subject: [Numpy-discussion] Problem compiling numpy against ATLAS on amd64 (Mandriva 2006) In-Reply-To: <447630A0.60205@yiqiang.net> References: <447630A0.60205@yiqiang.net> Message-ID: <4476326E.6030308@yiqiang.net> Yi Qiang wrote: > Interestingly enough, if I just use the bare version of ATLAS, numpy > compiles fine. If I use the bare version of LAPACK, numpy compiles fine . Actually, I take that back. I get the same error when trying to link against the standalone version of LAPACK, so that suggests something went wrong there. And here are the files I forgot to attach! > > Any help would be greatly appreciated. > > > > -Yi -------------- next part -------------- An embedded and charset-unspecified text was scrubbed... Name: make.inc URL: -------------- next part -------------- An embedded and charset-unspecified text was scrubbed... Name: Make.Linux_HAMMER64SSE2_2 URL: From cookedm at physics.mcmaster.ca Thu May 25 15:46:01 2006 From: cookedm at physics.mcmaster.ca (David M.
Cooke) Date: Thu May 25 15:46:01 2006 Subject: [Numpy-discussion] Problem compiling numpy against ATLAS on amd64 (Mandriva 2006) In-Reply-To: <4476326E.6030308@yiqiang.net> (Yi Qiang's message of "Thu, 25 May 2006 15:40:46 -0700") References: <447630A0.60205@yiqiang.net> <4476326E.6030308@yiqiang.net> Message-ID: Yi Qiang writes: > Yi Qiang wrote: >> https://lists.sourceforge.net/lists/listinfo/numpy-discussion > #################################################################### > # LAPACK make include file. # > # LAPACK, Version 3.0 # > # June 30, 1999 # > #################################################################### > # > SHELL = /bin/sh > # > # The machine (platform) identifier to append to the library names > # > PLAT = _LINUX > # > # Modify the FORTRAN and OPTS definitions to refer to the > # compiler and desired compiler options for your machine. NOOPT > # refers to the compiler options desired when NO OPTIMIZATION is > # selected. Define LOADER and LOADOPTS to refer to the loader and > # desired load options for your machine. > # > FORTRAN = gfortran > OPTS = -fPIC -funroll-all-loops -fno-f2c -O2 > DRVOPTS = $(OPTS) > NOOPT = Maybe NOOPT needs -fPIC? That's the only one I see where it could be missing. -- |>|\/|< /--------------------------------------------------------------------------\ |David M. Cooke http://arbutus.physics.mcmaster.ca/dmc/ |cookedm at physics.mcmaster.ca From aisaac at american.edu Thu May 25 15:47:04 2006 From: aisaac at american.edu (Alan G Isaac) Date: Thu May 25 15:47:04 2006 Subject: [Numpy-discussion] Re: numpy.sin In-Reply-To: References: Message-ID: On Thu, 25 May 2006, Robert Kern apparently wrote: > You aren't using bc correctly. Ooops. I am not a user and was just following your post without reading the manual. I hope the below fixes pi; and (I think) it makes the same point I tried to make before: a continuity argument renders the general claim you made suspect. 
(Of course it's looking like a pretty narrow range of possible benefit as well.) > If you know that you are epsilon from n*2*π (the real > number, not the floating point one), you should just be > calculating sin(epsilon). Usually, you do not know this, > and % (2*pi) will not tell you this. (100*pi + epsilon) is > not the same thing as (100*π + epsilon). Yes, I understand all this. Of course, it is not quite an answer to the question: can '%(2*pi)' offer an advantage in the right circumstances? And the original question was again different: can we learn from such calculations that **some** method might offer an improvement? Anyway, you have already been more than generous with your time. Thanks! Alan bc 1.05 Copyright 1991, 1992, 1993, 1994, 1997, 1998 Free Software Foundation, Inc. This is free software with ABSOLUTELY NO WARRANTY. For details type `warranty'. scale = 50 pi = 4*a(1) epsilon = 0.00001 s(100*pi + epsilon) .00000999999999983333333333416666666666468253967996 or 9.999999999833333e-006 compared to: >>> epsilon = 0.00001 >>> sin(100*pi+epsilon) 9.999999976550551e-006 >>> sin((100*pi+epsilon)%(2*pi)) 9.9999999887966145e-006 From robert.kern at gmail.com Thu May 25 16:15:24 2006 From: robert.kern at gmail.com (Robert Kern) Date: Thu May 25 16:15:24 2006 Subject: [Numpy-discussion] Re: Problem compiling numpy against ATLAS on amd64 (Mandriva 2006) In-Reply-To: References: <447630A0.60205@yiqiang.net> <4476326E.6030308@yiqiang.net> Message-ID: David M. Cooke wrote: > Yi Qiang writes: >>FORTRAN = gfortran >>OPTS = -fPIC -funroll-all-loops -fno-f2c -O2 >>DRVOPTS = $(OPTS) >>NOOPT = > > Maybe NOOPT needs -fPIC? That's the only one I see where it could be > missing. That sounds right. dlamch is not supposed to be compiled with optimization, IIRC. -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth."
-- Umberto Eco From robert.kern at gmail.com Thu May 25 16:40:07 2006 From: robert.kern at gmail.com (Robert Kern) Date: Thu May 25 16:40:07 2006 Subject: [Numpy-discussion] Re: numpy.sin In-Reply-To: References: Message-ID: Alan G Isaac wrote: > On Thu, 25 May 2006, Robert Kern apparently wrote: > >>You aren't using bc correctly. > > Ooops. I am not a user and was just following your post > without reading the manual. I hope the below fixes pi; > and (I think) it makes the same point I tried to make before: > a continuity argument renders the general claim you > made suspect. (Of course it's looking like a pretty > narrow range of possible benefit as well.) Yes, you probably can construct cases where the % (2*pi) step will ultimately yield an answer closer to what you want. You cannot expect that step to give *reliable* improvements. >>If you know that you are epsilon from n*2*π (the real >>number, not the floating point one), you should just be >>calculating sin(epsilon). Usually, you do not know this, >>and % (2*pi) will not tell you this. (100*pi + epsilon) is >>not the same thing as (100*π + epsilon). > > Yes, I understand all this. Of course, > it is not quite an answer to the question: > can '%(2*pi)' offer an advantage in the > right circumstances? Not in any that aren't contrived. Not in any situations where you don't already have enough knowledge to do a better calculation (e.g. calculating sin(epsilon) rather than sin(2*n*pi + epsilon)). > And the original question > was again different: can we learn > from such calculations that **some** method might > offer an improvement? No, those calculations make no such revelation. Good implementations of sin() already reduce the argument into a small range around 0 just to make the calculation feasible. They do so much more accurately than doing % (2*pi) but they can only work with the information given to the function.
It cannot know that, for example, you generated the inputs by multiplying the double-precision approximation of π by an integer. You can look at the implementation in fdlibm: http://www.netlib.org/fdlibm/s_sin.c > bc 1.05 > Copyright 1991, 1992, 1993, 1994, 1997, 1998 Free Software Foundation, Inc. > This is free software with ABSOLUTELY NO WARRANTY. > For details type `warranty'. > scale = 50 > pi = 4*a(1) > epsilon = 0.00001 > s(100*pi + epsilon) > .00000999999999983333333333416666666666468253967996 > or > 9.999999999833333e-006 > > compared to: > >>>>epsilon = 0.00001 >>>>sin(100*pi+epsilon) > > 9.999999976550551e-006 > >>>>sin((100*pi+epsilon)%(2*pi)) > > 9.9999999887966145e-006 As Sasha noted, that is an artifact of bc's use of decimal rather than binary, and Python's conversion of the literal "0.00001" into binary. [scipy]$ bc -l bc 1.06 Copyright 1991-1994, 1997, 1998, 2000 Free Software Foundation, Inc. This is free software with ABSOLUTELY NO WARRANTY. For details type `warranty'. scale = 50 pi = 4*a(1) epsilon = 1./2.^16 s(100*pi + epsilon) .00001525878906190788105354014301687863346141309981 s(epsilon) .00001525878906190788105354014301687863346141310239 [scipy]$ python Python 2.4.1 (#2, Mar 31 2005, 00:05:10) [GCC 3.3 20030304 (Apple Computer, Inc. build 1666)] on darwin Type "help", "copyright", "credits" or "license" for more information. >>> from numpy import * >>> epsilon = 1./2.**16 >>> sin(100*pi + epsilon) 1.5258789063872268e-05 >>> sin((100*pi + epsilon) % (2*pi)) 1.5258789076118735e-05 >>> sin(epsilon) 1.5258789061907882e-05 I do recommend reading up more on floating point arithmetic. A good paper is Goldberg's "What Every Computer Scientist Should Know About Floating-Point Arithmetic": http://www.physics.ohio-state.edu/~dws/grouplinks/floating_point_math.pdf -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth."
-- Umberto Eco From simon at arrowtheory.com Sun May 28 00:51:00 2006 From: simon at arrowtheory.com (Simon Burton) Date: Sun May 28 00:51:00 2006 Subject: [Numpy-discussion] dtype: hashing and cmp Message-ID: <20060528173303.6cd4c5c6.simon@arrowtheory.com> Is there a reason why dtype's are unhashable ? (ouch) On another point, is there a canonical list of dtype's ? I'd like to test the dtype of an array, and I always end up with something like this: if array.dtype == numpy.dtype('l'): ... When I would prefer to write something like: if array.dtype == numpy.Int32: ... (i can never remember these char codes !) Alternatively, should dtype's __cmp__ promote the other arg to a dtype before the compare ?
I guess not, since that would break a lot of code: eg. dtype(None) is legal. Simon. -- Simon Burton, B.Sc. Licensed PO Box 8066 ANU Canberra 2601 Australia Ph. 61 02 6249 6940 http://arrowtheory.com From erin.sheldon at gmail.com Sun May 28 08:34:16 2006 From: erin.sheldon at gmail.com (Erin Sheldon) Date: Sun May 28 08:34:16 2006 Subject: [Numpy-discussion] fromfile for dtype=Int8 Message-ID: <331116dc0605280833r6e5021a2i7a7f02c9b6c9ae5a@mail.gmail.com> Hi everyone - The "fromfile" method isn't working for Int8 in ascii mode: # cat test.dat 3 4 5 >>> import numpy as np >>> np.__version__ '0.9.9.2547' >>> np.fromfile('test.dat', sep='\n', dtype=np.Int16) array([3, 4, 5], dtype=int16) >>> np.fromfile('test.dat', sep='\n', dtype=np.Int8) Traceback (most recent call last): File "", line 1, in ? ValueError: don't know how to read character files with that array type Was this intended? Erin From robert.kern at gmail.com Sun May 28 12:35:01 2006 From: robert.kern at gmail.com (Robert Kern) Date: Sun May 28 12:35:01 2006 Subject: [Numpy-discussion] Re: dtype: hashing and cmp In-Reply-To: <20060528173303.6cd4c5c6.simon@arrowtheory.com> References: <20060528173303.6cd4c5c6.simon@arrowtheory.com> Message-ID: Simon Burton wrote: > Is there a reason why dtype's are unhashable ? (ouch) No one thought about it, probably. If you would like to submit a patch, I think we would check it in. > On another point, is there a canonical list of dtype's ? > I'd like to test the dtype of an array, and I always > end up with something like this: > > if array.dtype == numpy.dtype('l'): ... > > When I would prefer to write something like: > > if array.dtype == numpy.Int32: ... numpy.int32 There is a list on page 20 of _The Guide to NumPy_. It is included in the sample chapters: http://www.tramy.us/scipybooksample.pdf > (i can never remember these char codes !) > > Alternatively, should dtype's __cmp__ promote the other arg > to a dtype before the compare ?
> I guess not, since that would break a lot of code: eg. dtype(None) > is legal. Correct, it should not. -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From simon at arrowtheory.com Sun May 28 12:53:03 2006 From: simon at arrowtheory.com (Simon Burton) Date: Sun May 28 12:53:03 2006 Subject: [Numpy-discussion] Re: dtype: hashing and cmp In-Reply-To: References: <20060528173303.6cd4c5c6.simon@arrowtheory.com> Message-ID: <20060529055411.5bc43330.simon@arrowtheory.com> On Sun, 28 May 2006 14:33:37 -0500 Robert Kern wrote: > > > if array.dtype == numpy.Int32: ... > > numpy.int32 No that doesn't work. >>> numpy.int32 >>> numpy.int32 == numpy.dtype('l') False Simon. -- Simon Burton, B.Sc. Licensed PO Box 8066 ANU Canberra 2601 Australia Ph. 61 02 6249 6940 http://arrowtheory.com From robert.kern at gmail.com Sun May 28 13:04:03 2006 From: robert.kern at gmail.com (Robert Kern) Date: Sun May 28 13:04:03 2006 Subject: [Numpy-discussion] Re: dtype: hashing and cmp In-Reply-To: <20060529055411.5bc43330.simon@arrowtheory.com> References: <20060528173303.6cd4c5c6.simon@arrowtheory.com> <20060529055411.5bc43330.simon@arrowtheory.com> Message-ID: Simon Burton wrote: > On Sun, 28 May 2006 14:33:37 -0500 > Robert Kern wrote: > >>>if array.dtype == numpy.Int32: ... >> >>numpy.int32 > > No that doesn't work. > >>>>numpy.int32 > > > >>>>numpy.int32 == numpy.dtype('l') > > False >>> from numpy import * >>> a = linspace(0, 10, 11) >>> a.dtype == dtype(float64) True -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." 
-- Umberto Eco From oliphant.travis at ieee.org Sun May 28 13:38:01 2006 From: oliphant.travis at ieee.org (Travis Oliphant) Date: Sun May 28 13:38:01 2006 Subject: [Numpy-discussion] Re: dtype: hashing and cmp In-Reply-To: <20060529055411.5bc43330.simon@arrowtheory.com> References: <20060528173303.6cd4c5c6.simon@arrowtheory.com> <20060529055411.5bc43330.simon@arrowtheory.com> Message-ID: <447A09DF.3060803@ieee.org> Simon Burton wrote: > On Sun, 28 May 2006 14:33:37 -0500 > Robert Kern wrote: > > >>> if array.dtype == numpy.Int32: ... >>> >> numpy.int32 >> > > > No that doesn't work. > > Yeah, the "canonical" types (e.g. int32, float64, etc) are actually scalar objects. The type objects themselves are dtype(int32). I don't think they are currently listed anywhere in Python (except there is one for every canonical scalar object). The difference between the scalar object and the data-type object did not become clear until December 2005. Previously the scalar object was used as the data-type (obviously there is still a relationship between them). -Travis >>>> numpy.int32 >>>> > > >>>> numpy.int32 == numpy.dtype('l') >>>> > False > > > Simon. > > From oliphant.travis at ieee.org Sun May 28 13:42:04 2006 From: oliphant.travis at ieee.org (Travis Oliphant) Date: Sun May 28 13:42:04 2006 Subject: ***[Possible UCE]*** [Numpy-discussion] fromfile for dtype=Int8 In-Reply-To: <331116dc0605280833r6e5021a2i7a7f02c9b6c9ae5a@mail.gmail.com> References: <331116dc0605280833r6e5021a2i7a7f02c9b6c9ae5a@mail.gmail.com> Message-ID: <447A0AF9.5050104@ieee.org> Erin Sheldon wrote: > Hi everyone - > > The "fromfile" method isn't working for Int8 in > ascii mode: > > # cat test.dat > 3 > 4 > 5 The problem is that the internal _scan method for that data-type has not been written (it was not just a character code for fscanf). It should not be too hard to write but hasn't been done yet. Perhaps you can file a ticket so we don't lose track of it. 
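[Editor's note] Until the int8 scan path exists, one workaround is to parse with a width that text-mode fromfile does handle (int16, as Erin's session shows) and then downcast. A sketch, with a made-up temporary file standing in for test.dat:

```python
import os
import tempfile
import numpy as np

# Mirror the post's test.dat: ASCII integers, one per line.
with tempfile.NamedTemporaryFile("w", suffix=".dat", delete=False) as f:
    f.write("3\n4\n5\n")
    path = f.name

# Parse with a dtype that fromfile's text mode supports, then downcast.
a = np.fromfile(path, sep="\n", dtype=np.int16).astype(np.int8)
print(a, a.dtype)   # [3 4 5] int8
os.unlink(path)
```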
-Travis From simon at arrowtheory.com Sun May 28 13:57:02 2006 From: simon at arrowtheory.com (Simon Burton) Date: Sun May 28 13:57:02 2006 Subject: [Numpy-discussion] Re: dtype: hashing and cmp In-Reply-To: <447A09DF.3060803@ieee.org> References: <20060528173303.6cd4c5c6.simon@arrowtheory.com> <20060529055411.5bc43330.simon@arrowtheory.com> <447A09DF.3060803@ieee.org> Message-ID: <20060529065757.5d784334.simon@arrowtheory.com> On Sun, 28 May 2006 14:36:47 -0600 Travis Oliphant wrote: > Simon Burton wrote: > > On Sun, 28 May 2006 14:33:37 -0500 > > Robert Kern wrote: > > > > > >>> if array.dtype == numpy.Int32: ... > >>> > >> numpy.int32 > >> > > > > > > No that doesn't work. > > > > > > Yeah, the "canonical" types (e.g. int32, float64, etc) are actually > scalar objects. The type objects themselves are dtype(int32). I don't > think they are currently listed anywhere in Python (except there is one > for every canonical scalar object). ... Can we promote the numarray names: Int32 etc. to their dtype equivalents ? I don't see why having Int32='l' is any more usefull that Int32=dtype('l'), and the later works with cmp (and also is more helpful in the interactive interpreter). Simon. -- Simon Burton, B.Sc. Licensed PO Box 8066 ANU Canberra 2601 Australia Ph. 
61 02 6249 6940 http://arrowtheory.com From oliphant.travis at ieee.org Sun May 28 14:57:02 2006 From: oliphant.travis at ieee.org (Travis Oliphant) Date: Sun May 28 14:57:02 2006 Subject: [Numpy-discussion] Re: dtype: hashing and cmp In-Reply-To: <20060529065757.5d784334.simon@arrowtheory.com> References: <20060528173303.6cd4c5c6.simon@arrowtheory.com> <20060529055411.5bc43330.simon@arrowtheory.com> <447A09DF.3060803@ieee.org> <20060529065757.5d784334.simon@arrowtheory.com> Message-ID: <447A1C69.4050602@ieee.org> Simon Burton wrote: > On Sun, 28 May 2006 14:36:47 -0600 > Travis Oliphant wrote: > > >> Simon Burton wrote: >> >>> On Sun, 28 May 2006 14:33:37 -0500 >>> Robert Kern wrote: >>> >>> >>> >>>>> if array.dtype == numpy.Int32: ... >>>>> >>>>> >>>> numpy.int32 >>>> >>>> >>> >>> No that doesn't work. >>> >>> >>> >> Yeah, the "canonical" types (e.g. int32, float64, etc) are actually >> scalar objects. The type objects themselves are dtype(int32). I don't >> think they are currently listed anywhere in Python (except there is one >> for every canonical scalar object). >> > ... > > Can we promote the numarray names: Int32 etc. to their dtype equivalents ? > Perhaps. There is the concern that it might break Numeric compatibility, though. -Travis From maxim.krikun at gmail.com Mon May 29 05:01:03 2006 From: maxim.krikun at gmail.com (Maxim Krikun) Date: Mon May 29 05:01:03 2006 Subject: [Numpy-discussion] 24bit arrays Message-ID: <45bc4390605290500h5de9ccabmacfb6d5d357ed2a7@mail.gmail.com> Hi all. I'm writing a tool to access uncompressed audio files (currently in wave, aiff and wave64 formats) using numarray.memmap module -- it maps the whole file to memory, then finds the waveform data region and casts it to and array of appropriate type and shape. This works pretty well for both 16-bit integer and 32-bit float data, but not for 24-bit files, since there is no Int24 data type in numarray. 
Is there some clever way to achieve the same goal for 24bit data without copying everything into a new 32-bit array? The typical 24bit audio file contains two interleaved channels, i.e. frames of 3bytes+3bytes, so it can be cast to (nframes,3) Int32, or (nframes,2,3) Int8 array, but this is hardly a useful representation for audio data. --Maxim From simon at arrowtheory.com Mon May 29 05:17:01 2006 From: simon at arrowtheory.com (Simon Burton) Date: Mon May 29 05:17:01 2006 Subject: [Numpy-discussion] 24bit arrays In-Reply-To: <45bc4390605290500h5de9ccabmacfb6d5d357ed2a7@mail.gmail.com> References: <45bc4390605290500h5de9ccabmacfb6d5d357ed2a7@mail.gmail.com> Message-ID: <20060529221738.6fb03e53.simon@arrowtheory.com> On Mon, 29 May 2006 14:00:34 +0200 "Maxim Krikun" wrote: > > The typical 24bit audio file contains two interleaved channels, i.e. > frames of 3bytes+3bytes, so it can be cast to (nframes,3) Int32, or > (nframes,2,3) Int8 array, but this is hardly a useful representation for > audio data. Why not ? It's good for slicing and dicing, anything else and you should convert it to float before operating on it. Simon. -- Simon Burton, B.Sc. Licensed PO Box 8066 ANU Canberra 2601 Australia Ph. 61 02 6249 6940 http://arrowtheory.com From pfdubois at gmail.com Mon May 29 10:21:05 2006 From: pfdubois at gmail.com (Paul Dubois) Date: Mon May 29 10:21:05 2006 Subject: [Numpy-discussion] Special issue of CiSE on Python -- Correspondents please reply Message-ID: In April I sent out a request for proposals for a special issue of Computing in Science and Engineering on Python's use in science and engineering. Due to being somewhat inexperienced with a new mailer, I lost some of the correspondence. Would those with whom I corresponded send me back something about what we were talking about? I know some additional people had gotten dragged into the conversation. I did not lose letters from: Jarrod Millman Kent-Andre Mardal Xuan Shi Ryan Krauss I have the word doc only from Peter Bienstman. I have the text outline only from Arnd Baecker Sorry to be so clumsy. Paul Dubois -------------- next part -------------- An HTML attachment was scrubbed...
URL: From pfdubois at gmail.com Mon May 29 15:20:02 2006 From: pfdubois at gmail.com (Paul Dubois) Date: Mon May 29 15:20:02 2006 Subject: [Numpy-discussion] ...never mind, found everything Message-ID: Please disregard my previous posting about my special issue correspondence. All is well. Gmail and my twitching don't work well together. I need a computer that ignores me. -------------- next part -------------- An HTML attachment was scrubbed... URL: From simon at arrowtheory.com Mon May 29 23:56:00 2006 From: simon at arrowtheory.com (Simon Burton) Date: Mon May 29 23:56:00 2006 Subject: [Numpy-discussion] when does numpy create temporaries ? Message-ID: <20060530165457.769baf5f.simon@arrowtheory.com> Consider these two operations: >>> a=numpy.empty( 1024 ) >>> b=numpy.empty( 1024 ) >>> a[1:] = b[:-1] >>> a[1:] = a[:-1] >>> It seems that in the second operation we need to copy the view a[:-1] but in the first operation we don't need to copy b[:-1]. How does numpy detect this, or does it always copy the source when assigning to a slice ? I've poked around the (numpy) code a bit and tried some benchmarks, but it's still not so clear to me. Simon. -- Simon Burton, B.Sc. Licensed PO Box 8066 ANU Canberra 2601 Australia Ph. 61 02 6249 6940 http://arrowtheory.com From rmuller at sandia.gov Tue May 30 09:30:16 2006 From: rmuller at sandia.gov (Rick Muller) Date: Tue May 30 09:30:16 2006 Subject: [Numpy-discussion] arrays that start from negative numbers? Message-ID: <6B389811-2185-4C00-A5FC-4E48E8D44ECB@sandia.gov> Is it possible to create arrays that run from, say -5:5, rather than 0:11? Analogous to the Fortran "allocate(A(-5:5))" command?
I'm translating a F90 code to Python, and it would be easier to do this than to use a python dictionary. Thanks in advance. Rick Muller rmuller at sandia.gov From rays at blue-cove.com Tue May 30 09:45:02 2006 From: rays at blue-cove.com (Ray Schumacher) Date: Tue May 30 09:45:02 2006 Subject: [Numpy-discussion] Re: 24bit arrays Message-ID: <6.2.5.2.2.20060530084113.0587c1e0@blue-cove.com> >Is there some clever way to achieve the same goal for 24bit data without >copying everything into a new 32-bit array? I agree with the other post, Int8 x 3 can be used with slices to get a lot done, depending on data tasks desired, but not all >The typical 24bit audio file contains two interleaved channels, i.e. >frames of 3bytes+3bytes, so it can be cast to (nframes,3) Int32, or >(nframes,2,3) Int8 array, but this is hardly a useful representation for >audio data. Along these lines, I have been working with 24 bit ADC data returned from pyUSB as tuples, which I need to convert to Float32 and save, like this:

WRAP = 2.**23
BITS24 = 2.**24
try:
    chValue = struct.unpack(">I",
        struct.pack(">4b", 0, *dataTuple[byteN:byteN+3]))[0]
except:
    chValue = 0
if chValue > WRAP:
    chValue = ((BITS24 - chValue) / WRAP) * gainFactors[thisCh]
else:
    chValue = (-chValue / WRAP) * gainFactors[thisCh]
data[thisCh].append(chValue)

which is really slow (no real time is possible). Is there a much faster way evident to others here? We are going to do a pyd in C otherwise... Ray From aisaac at american.edu Tue May 30 10:26:07 2006 From: aisaac at american.edu (Alan Isaac) Date: Tue May 30 10:26:07 2006 Subject: [Numpy-discussion] arrays that start from negative numbers? In-Reply-To: <6B389811-2185-4C00-A5FC-4E48E8D44ECB@sandia.gov> References: <6B389811-2185-4C00-A5FC-4E48E8D44ECB@sandia.gov> Message-ID: On Tue, 30 May 2006, Rick Muller wrote: > Is it possible to create arrays that run from, say -5:5, rather than > 0:11?
Python 2.4.1 (#65, Mar 30 2005, 09:13:57) [MSC v.1310 32 bit (Intel)] on win32 Type "help", "copyright", "credits" or "license" for more information. >>> import numpy as N >>> x = N.arange(-5,6) >>> x array([-5, -4, -3, -2, -1, 0, 1, 2, 3, 4, 5]) >>> y=N.arange(11) >>> y-5 array([-5, -4, -3, -2, -1, 0, 1, 2, 3, 4, 5]) hth, Alan Isaac From hetland at tamu.edu Tue May 30 10:33:03 2006 From: hetland at tamu.edu (Rob Hetland) Date: Tue May 30 10:33:03 2006 Subject: [Numpy-discussion] arrays that start from negative numbers? In-Reply-To: <6B389811-2185-4C00-A5FC-4E48E8D44ECB@sandia.gov> References: <6B389811-2185-4C00-A5FC-4E48E8D44ECB@sandia.gov> Message-ID: <8F52C872-9FF9-4B50-A728-DD271812D6D3@tamu.edu> I believe Rick is talking about negative indices (possible in FORTRAN), in which case the answer is no. -Rob On May 30, 2006, at 11:28 AM, Rick Muller wrote: > Is it possible to create arrays that run from, say -5:5, rather > than 0:11? Analogous to the Fortran "allocate(A(-5:5))" command? > I'm translating a F90 code to Python, and it would be easier to do > this than to use a python dictionary. ----- Rob Hetland, Assistant Professor Dept of Oceanography, Texas A&M University p: 979-458-0096, f: 979-845-6331 e: hetland at tamu.edu, w: http://pong.tamu.edu From aisaac at american.edu Tue May 30 10:43:04 2006 From: aisaac at american.edu (Alan Isaac) Date: Tue May 30 10:43:04 2006 Subject: [Numpy-discussion] arrays that start from negative numbers? In-Reply-To: <8F52C872-9FF9-4B50-A728-DD271812D6D3@tamu.edu> References: <6B389811-2185-4C00-A5FC-4E48E8D44ECB@sandia.gov><8F52C872-9FF9-4B50-A728-DD271812D6D3@tamu.edu> Message-ID: > On May 30, 2006, at 11:28 AM, Rick Muller wrote: >> Is it possible to create arrays that run from, say -5:5, rather >> than 0:11? Analogous to the Fortran "allocate(A(-5:5))" command? >> I'm translating a F90 code to Python, and it would be easier to do >> this than to use a python dictionary. 
On Tue, 30 May 2006, Rob Hetland wrote: > I believe Rick is talking about negative indices (possible in > FORTRAN), in which case the answer is no. I see. Perhaps this is still relevant? (Or perhaps not.) >>> y=N.arange(11) >>> x=range(-5,6) >>> y[x] array([ 6, 7, 8, 9, 10, 0, 1, 2, 3, 4, 5]) >>> hth, Alan Isaac From hetland at tamu.edu Tue May 30 10:55:02 2006 From: hetland at tamu.edu (Rob Hetland) Date: Tue May 30 10:55:02 2006 Subject: [Numpy-discussion] arrays that start from negative numbers? In-Reply-To: References: <6B389811-2185-4C00-A5FC-4E48E8D44ECB@sandia.gov><8F52C872-9FF9-4B50-A728-DD271812D6D3@tamu.edu> Message-ID: <286B76A0-7A06-4459-9B81-ABC0F347128B@tamu.edu> Yes, brilliant. The answer is yes, but you need to modify the array to have it make sense; you need to fold the array over, so that the 'negative' indices reference data from the rear of the array.... I was thinking about having the first negative index first... I'm still not sure if this will be 'skating on dull blades with sharp knives' in converting fortran to numpy. More generally to the problem of code conversion, I think that a direct fortran -> numpy translation is not the best thing -- the numpy code should be vectorized. The array indexing problems will (mostly) go away when the fortran code is vectorized, and will result in much faster python code in the end as well. -r On May 30, 2006, at 12:42 PM, Alan Isaac wrote: >> On May 30, 2006, at 11:28 AM, Rick Muller wrote: >>> Is it possible to create arrays that run from, say -5:5, rather >>> than 0:11? Analogous to the Fortran "allocate(A(-5:5))" command? >>> I'm translating a F90 code to Python, and it would be easier to do >>> this than to use a python dictionary. > > > On Tue, 30 May 2006, Rob Hetland wrote: >> I believe Rick is talking about negative indices (possible in >> FORTRAN), in which case the answer is no. > > > > I see. > Perhaps this is still relevant? > (Or perhaps not.) 
> >>>> y=N.arange(11) >>>> x=range(-5,6) >>>> y[x] > array([ 6, 7, 8, 9, 10, 0, 1, 2, 3, 4, 5]) >>>> > > hth, > Alan Isaac > > > > > > > ------------------------------------------------------- > All the advantages of Linux Managed Hosting--Without the Cost and > Risk! > Fully trained technicians. The highest number of Red Hat > certifications in > the hosting industry. Fanatical Support. Click to learn more > http://sel.as-us.falkag.net/sel? > cmd=lnk&kid=107521&bid=248729&dat=121642 > _______________________________________________ > Numpy-discussion mailing list > Numpy-discussion at lists.sourceforge.net > https://lists.sourceforge.net/lists/listinfo/numpy-discussion ----- Rob Hetland, Assistant Professor Dept of Oceanography, Texas A&M University p: 979-458-0096, f: 979-845-6331 e: hetland at tamu.edu, w: http://pong.tamu.edu From rmuller at sandia.gov Tue May 30 11:37:06 2006 From: rmuller at sandia.gov (Rick Muller) Date: Tue May 30 11:37:06 2006 Subject: [Numpy-discussion] arrays that start from negative numbers? In-Reply-To: <8F52C872-9FF9-4B50-A728-DD271812D6D3@tamu.edu> References: <6B389811-2185-4C00-A5FC-4E48E8D44ECB@sandia.gov> <8F52C872-9FF9-4B50-A728-DD271812D6D3@tamu.edu> Message-ID: <67A1F71A-73F3-4918-9F78-84013EC75B61@sandia.gov> Indeed I am. Thanks for the reply On May 30, 2006, at 11:32 AM, Rob Hetland wrote: > > I believe Rick is talking about negative indices (possible in > FORTRAN), in which case the answer is no. > > -Rob > > On May 30, 2006, at 11:28 AM, Rick Muller wrote: > >> Is it possible to create arrays that run from, say -5:5, rather >> than 0:11? Analogous to the Fortran "allocate(A(-5:5))" command? >> I'm translating a F90 code to Python, and it would be easier to do >> this than to use a python dictionary. 
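A minimal sketch of how the "fold the array over" bookkeeping could instead be pushed into a tiny wrapper class (a hypothetical `FortArray`, written against a current numpy; only plain integer get/set indexing is handled, and slices, fancy indexing and arithmetic would all need more care before this is generally usable):

```python
import numpy

class FortArray(numpy.ndarray):
    """Toy ndarray subclass indexed from a lower bound other than 0."""

    def __new__(cls, data, lower=0):
        obj = numpy.asarray(data).view(cls)
        obj.lower = lower
        return obj

    def __array_finalize__(self, obj):
        # propagate the lower bound to views created from this array
        self.lower = getattr(obj, 'lower', 0)

    def __getitem__(self, index):
        # shift plain integer indices; leave everything else untouched
        if isinstance(index, (int, numpy.integer)):
            index = index - self.lower
        return numpy.ndarray.__getitem__(self, index)

    def __setitem__(self, index, value):
        if isinstance(index, (int, numpy.integer)):
            index = index - self.lower
        numpy.ndarray.__setitem__(self, index, value)

# Fortran-style allocate(A(-5:5)) equivalent
a = FortArray(numpy.arange(-5, 6), lower=-5)
print(a[-5], a[0], a[5])   # -> -5 0 5
```

Here `a[i]` simply maps to position `i - lower` in the underlying buffer, so the data stays in its natural order instead of being folded to the rear of the array.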
> > ----- > Rob Hetland, Assistant Professor > Dept of Oceanography, Texas A&M University > p: 979-458-0096, f: 979-845-6331 > e: hetland at tamu.edu, w: http://pong.tamu.edu > > Rick Muller rmuller at sandia.gov From david.huard at gmail.com Tue May 30 13:05:06 2006 From: david.huard at gmail.com (David Huard) Date: Tue May 30 13:05:06 2006 Subject: [Numpy-discussion] arrays that start from negative numbers? In-Reply-To: <67A1F71A-73F3-4918-9F78-84013EC75B61@sandia.gov> References: <6B389811-2185-4C00-A5FC-4E48E8D44ECB@sandia.gov> <8F52C872-9FF9-4B50-A728-DD271812D6D3@tamu.edu> <67A1F71A-73F3-4918-9F78-84013EC75B61@sandia.gov> Message-ID: <91cf711d0605301304v3ac168dub01ed7ae89b66df6@mail.gmail.com> Just a thought: would it be possible to overload the array __getitem__ method ? I can do it with lists, but not with arrays... For instance, class fortarray(list): def __getitem__(self, index): return list.__getitem__(self, index+5) and >>> l = fortarray() >>> l.append(1) >>> l[-5] 1 There is certainly a more elegant way to define the class with the starting index as an argument, but I didn't look into it. For arrays, this doesn't work out of the box, but I'd surprised if there was no way to tweak it to do the same. Good luck David 2006/5/30, Rick Muller : > > Indeed I am. Thanks for the reply > On May 30, 2006, at 11:32 AM, Rob Hetland wrote: > > > > > I believe Rick is talking about negative indices (possible in > > FORTRAN), in which case the answer is no. > > > > -Rob > > > > On May 30, 2006, at 11:28 AM, Rick Muller wrote: > > > >> Is it possible to create arrays that run from, say -5:5, rather > >> than 0:11? Analogous to the Fortran "allocate(A(-5:5))" command? > >> I'm translating a F90 code to Python, and it would be easier to do > >> this than to use a python dictionary. 
> > > > ----- > > Rob Hetland, Assistant Professor > > Dept of Oceanography, Texas A&M University > > p: 979-458-0096, f: 979-845-6331 > > e: hetland at tamu.edu, w: http://pong.tamu.edu > > > > > > Rick Muller > rmuller at sandia.gov > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From robert.kern at gmail.com Tue May 30 13:18:03 2006 From: robert.kern at gmail.com (Robert Kern) Date: Tue May 30 13:18:03 2006 Subject: [Numpy-discussion] Re: arrays that start from negative numbers? In-Reply-To: <91cf711d0605301304v3ac168dub01ed7ae89b66df6@mail.gmail.com> References: <6B389811-2185-4C00-A5FC-4E48E8D44ECB@sandia.gov> <8F52C872-9FF9-4B50-A728-DD271812D6D3@tamu.edu> <67A1F71A-73F3-4918-9F78-84013EC75B61@sandia.gov> <91cf711d0605301304v3ac168dub01ed7ae89b66df6@mail.gmail.com> Message-ID: David Huard wrote: > Just a thought: > would it be possible to overload the array __getitem__ method ? > > I can do it with lists, but not with arrays... > > For instance, > > class fortarray(list): > def __getitem__(self, index): > return list.__getitem__(self, index+5) > > and >>>> l = fortarray() >>>> l.append(1) >>>> l[-5] > 1 > > There is certainly a more elegant way to define the class with the > starting index as an argument, but I didn't look into it. For arrays, > this doesn't work out of the box, but I'd surprised if there was no way > to tweak it to do the same.
One certainly could write a subclass of array that handles arbitrarily-based indices. On the other hand, writing a correct and consistent implementation would be very tricky. On the gripping hand, a quick hack might suffice if one only needed to use it locally, like inside a single function, and convert to and from real arrays at the boundaries. -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From rmuller at sandia.gov Tue May 30 13:33:02 2006 From: rmuller at sandia.gov (Rick Muller) Date: Tue May 30 13:33:02 2006 Subject: [Numpy-discussion] arrays that start from negative numbers? In-Reply-To: <91cf711d0605301304v3ac168dub01ed7ae89b66df6@mail.gmail.com> References: <6B389811-2185-4C00-A5FC-4E48E8D44ECB@sandia.gov> <8F52C872-9FF9-4B50-A728-DD271812D6D3@tamu.edu> <67A1F71A-73F3-4918-9F78-84013EC75B61@sandia.gov> <91cf711d0605301304v3ac168dub01ed7ae89b66df6@mail.gmail.com> Message-ID: <2C2B7EF5-2827-41DE-9D8B-D86C913565CA@sandia.gov> I certainly think that something along these lines would be possible. However, in the end I just decided to keep track of the indices using a Python dictionary, which means to access A[-3] I actually have to call A[index[-3]]. A little clunkier, but I was worried that the other solutions would be brittle in the long run. Thanks for all of the comments. On May 30, 2006, at 2:04 PM, David Huard wrote: > Just a thought: > would it be possible to overload the array __getitem__ method ? > > I can do it with lists, but not with arrays... > > For instance, > > class fortarray(list): > def __getitem__(self, index): > return list.__getitem__(self, index+5) > > and > >>> l = fortarray() > >>> l.append(1) > >>> l[-5] > 1 > > There is certainly a more elegant way to define the class with the > starting index as an argument, but I didn't look into it. 
For > arrays, this doesn't work out of the box, but I'd surprised if > there was no way to tweak it to do the same. > > Good luck > David > > 2006/5/30, Rick Muller : > Indeed I am. Thanks for the reply > On May 30, 2006, at 11:32 AM, Rob Hetland wrote: > > > > > I believe Rick is talking about negative indices (possible in > > FORTRAN), in which case the answer is no. > > > > -Rob > > > > On May 30, 2006, at 11:28 AM, Rick Muller wrote: > > > >> Is it possible to create arrays that run from, say -5:5, rather > >> than 0:11? Analogous to the Fortran "allocate(A(-5:5))" command? > >> I'm translating a F90 code to Python, and it would be easier to do > >> this than to use a python dictionary. > > > > ----- > > Rob Hetland, Assistant Professor > > Dept of Oceanography, Texas A&M University > > p: 979-458-0096, f: 979-845-6331 > > e: hetland at tamu.edu, w: http://pong.tamu.edu > > > > > > Rick Muller > rmuller at sandia.gov > > > > > > > ------------------------------------------------------- > All the advantages of Linux Managed Hosting--Without the Cost and > Risk! > Fully trained technicians. The highest number of Red Hat > certifications in > the hosting industry. Fanatical Support. Click to learn more > http://sel.as-us.falkag.net/sel? > cmd=lnk&kid=107521&bid=248729&dat=121642 > _______________________________________________ > Numpy-discussion mailing list > Numpy-discussion at lists.sourceforge.net > https://lists.sourceforge.net/lists/listinfo/numpy-discussion > Rick Muller rmuller at sandia.gov -------------- next part -------------- An HTML attachment was scrubbed... URL: From oliphant.travis at ieee.org Tue May 30 19:54:05 2006 From: oliphant.travis at ieee.org (Travis Oliphant) Date: Tue May 30 19:54:05 2006 Subject: [Numpy-discussion] Any Numeric or numarray users on this list? Message-ID: <447D051E.9000709@ieee.org> Please help the developers by responding to a few questions. 1) Have you transitioned or started to transition to NumPy (i.e. 
import numpy)? 2) Will you transition within the next 6 months? (if you answered No to #1) 3) Please, explain your reason(s) for not making the switch. (if you answered No to #2) 4) Please provide any suggestions for improving NumPy. Thanks for your answers. NumPy Developers From mrovner at cadence.com Tue May 30 20:16:03 2006 From: mrovner at cadence.com (Mike Rovner) Date: Tue May 30 20:16:03 2006 Subject: [Numpy-discussion] cx_freezing numpy problem. Message-ID: Hi, I'm trying to package numpy based application but get following TB:

No scipy-style subpackage 'core' found in /home/mrovner/dev/psgapp/src/gui/lnx32/dvip/numpy. Ignoring: No module named _internal
Traceback (most recent call last):
  File "/home/mrovner/src/cx_Freeze-3.0.2/initscripts/Console.py", line 26, in ?
    exec code in m.__dict__
  File "dvip.py", line 42, in ?
  File "dvip.py", line 31, in dvip_gui
  File "mainui.py", line 1, in ?
  File "psgdb.pyx", line 162, in psgdb
  File "/lan/dfm/grp_mm_data1/dev/tools/linux-x86_32/lib/python2.4/site-packages/numpy/__init__.py", line 35, in ?
    verbose=NUMPY_IMPORT_VERBOSE,postpone=False)
  File "/lan/dfm/grp_mm_data1/dev/tools/linux-x86_32/lib/python2.4/site-packages/numpy/_import_tools.py", line 173, in __call__
    self._init_info_modules(packages or None)
  File "/lan/dfm/grp_mm_data1/dev/tools/linux-x86_32/lib/python2.4/site-packages/numpy/_import_tools.py", line 68, in _init_info_modules
    exec 'import %s.info as info' % (package_name)
  File "", line 1, in ?
  File "/lan/dfm/grp_mm_data1/dev/tools/linux-x86_32/lib/python2.4/site-packages/numpy/lib/__init__.py", line 5, in ?
    from type_check import *
  File "/lan/dfm/grp_mm_data1/dev/tools/linux-x86_32/lib/python2.4/site-packages/numpy/lib/type_check.py", line 8, in ?
    import numpy.core.numeric as _nx
  File "/lan/dfm/grp_mm_data1/dev/tools/linux-x86_32/lib/python2.4/site-packages/numpy/core/__init__.py", line 6, in ?
    import umath
  File "ExtensionLoader.py", line 12, in ?
AttributeError: 'module' object has no attribute '_ARRAY_API' I did freezing with: FreezePython --install-dir=lnx32 --include-modules=numpy --include-modules=numpy.core dvip.py I'm using Python-2.4.2 numpy-0.9.8 cx_Freeze-3.0.2 on linux. Everything compiled from source. Any help appreciated. Thanks, Mike From wbaxter at gmail.com Tue May 30 20:44:02 2006 From: wbaxter at gmail.com (Bill Baxter) Date: Tue May 30 20:44:02 2006 Subject: [Numpy-discussion] Any Numeric or numarray users on this list? In-Reply-To: <447D051E.9000709@ieee.org> References: <447D051E.9000709@ieee.org> Message-ID: pyOpenGL is one project that hasn't upgraded to numpy yet. http://pyopengl.sourceforge.net/ I think the issue is just that noone is really maintaining it, rather than any difficulty in porting to numpy. Since he's probably not reading this list, might be a good idea to send the project admin a copy of the survey: mcfletch at users.sourceforge.net --bb On 5/31/06, Travis Oliphant wrote: > > > Please help the developers by responding to a few questions. > > > > 1) Have you transitioned or started to transition to NumPy (i.e. import > numpy)? > > > > > 2) Will you transition within the next 6 months? (if you answered No to > #1) > > > > > 3) Please, explain your reason(s) for not making the switch. (if you > answered No to #2) > > > > > > 4) Please provide any suggestions for improving NumPy. > > > > > > Thanks for your answers. > > > NumPy Developers > > -- William V. Baxter III OLM Digital Kono Dens Building Rm 302 1-8-8 Wakabayashi Setagaya-ku Tokyo, Japan 154-0023 +81 (3) 3422-3380 -------------- next part -------------- An HTML attachment was scrubbed... URL: From ndarray at mac.com Tue May 30 20:57:05 2006 From: ndarray at mac.com (Sasha) Date: Tue May 30 20:57:05 2006 Subject: [Numpy-discussion] Any Numeric or numarray users on this list? In-Reply-To: <447D051E.9000709@ieee.org> References: <447D051E.9000709@ieee.org> Message-ID: I am a Numeric user. On 5/30/06, Travis Oliphant wrote: > > Please help the developers by responding to a few questions. > > > > 1) Have you transitioned or started to transition to NumPy (i.e. import > numpy)? > Started transition. Most applications were easily ported to Numpy. I am still deciding whether or not support both Numpy and Numeric during the transition period. > > 2) Will you transition within the next 6 months? (if you answered No to #1) > Yes, as soon as numpy 1.0 is released. > 4) Please provide any suggestions for improving NumPy. > That's a big topic! Without expanding on anything: - optimized array of interned strings (compatible with char** at the C level) - optimized array of arrays (a restriction of dtype=object array) - use BLAS in umath From andrewm at object-craft.com.au Tue May 30 21:32:01 2006 From: andrewm at object-craft.com.au (Andrew McNamara) Date: Tue May 30 21:32:01 2006 Subject: [Numpy-discussion] Any Numeric or numarray users on this list? In-Reply-To: <447D051E.9000709@ieee.org> References: <447D051E.9000709@ieee.org> Message-ID: <20060531043103.0C6996F4C81@longblack.object-craft.com.au> We use Numeric in NetEpi Analysis (www.netepi.org).
>Please help the developers by responding to a few questions. > >1) Have you transitioned or started to transition to NumPy (i.e. import >numpy)? No. >2) Will you transition within the next 6 months? (if you answered No to #1) Unknown - someone will have to fund the work. >3) Please, explain your reason(s) for not making the switch. (if you >answered No to #2) NetEpi Analysis implements C extensions to do fast set options on integer Numeric arrays, as well as to support mmap'ed Numeric arrays. I haven't looked at what is required to port these to Numpy (or replace with native Numpy features). NetEpi Analysis also uses rpy, which will potentially need to be updated to support Numpy. We're also concerned about speed - but I haven't done any testing against the latest Numpy. -- Andrew McNamara, Senior Developer, Object Craft http://www.object-craft.com.au/ From rob at hooft.net Tue May 30 21:48:05 2006 From: rob at hooft.net (Rob Hooft) Date: Tue May 30 21:48:05 2006 Subject: [Numpy-discussion] Any Numeric or numarray users on this list? In-Reply-To: <447D051E.9000709@ieee.org> References: <447D051E.9000709@ieee.org> Message-ID: <447D1FDF.6010508@hooft.net> -----BEGIN PGP SIGNED MESSAGE----- Hash: SHA1 Travis Oliphant wrote: Numeric user since 1998. Writing commercial application software for control of machines. The machine is sold with the application software, closed source. | 1) Have you transitioned or started to transition to NumPy (i.e. import | numpy)? No | 2) Will you transition within the next 6 months? (if you answered No to #1) Maybe. | 3) Please, explain your reason(s) for not making the switch. (if you | answered No to #2) We are by now late adopters of everything. Everything else we use (we use about 30 non-GPL opensource packages in our development environment, some of which are using Numeric themselves) will need to migrate as well. Our own code is interspersed with Numeric calls, and amounts to about 200k lines... Rob - -- Rob W.W. 
Hooft || rob at hooft.net || http://www.hooft.net/people/rob/ -----BEGIN PGP SIGNATURE----- Version: GnuPG v1.4.3 (GNU/Linux) Comment: Using GnuPG with Thunderbird - http://enigmail.mozdev.org iD8DBQFEfR/fH7J/Cv8rb3QRAvbjAKCCMLpbBbWSBDsRZZzL0+p4HTqcLACbBwiB YLFwX1oEULCH068j2I7ZoDg= =0gj8 -----END PGP SIGNATURE----- From jensj at fysik.dtu.dk Tue May 30 23:15:01 2006 From: jensj at fysik.dtu.dk (Jens Jørgen Mortensen) Date: Tue May 30 23:15:01 2006 Subject: [Numpy-discussion] Any Numeric or numarray users on this list? In-Reply-To: <447D051E.9000709@ieee.org> References: <447D051E.9000709@ieee.org> Message-ID: <42703.80.167.103.49.1149056031.squirrel@webmail.fysik.dtu.dk> We use Numeric for our "Atomic Simulation Environment" and for a Density Functional Theory code: http://wiki.fysik.dtu.dk/ASE http://wiki.fysik.dtu.dk/gridcode > 1) Have you transitioned or started to transition to NumPy (i.e. import > numpy)? No. > 2) Will you transition within the next 6 months? (if you answered No to > #1) Yes. Only problem is that ASE relies on Konrad Hinsen's Scientific.IO.NetCDF module which is still a Numeric thing. I saw recently that this module has been converted to numpy and put in SciPy/sandbox. What is the future of this module? > 4) Please provide any suggestions for improving NumPy. Can't think of anything! Jens Jørgen Mortensen From arnd.baecker at web.de Wed May 31 00:18:01 2006 From: arnd.baecker at web.de (Arnd Baecker) Date: Wed May 31 00:18:01 2006 Subject: [Numpy-discussion] when does numpy create temporaries ?
In-Reply-To: <20060530165457.769baf5f.simon@arrowtheory.com> References: <20060530165457.769baf5f.simon@arrowtheory.com> Message-ID: On Tue, 30 May 2006, Simon Burton wrote: > Consider these two operations: > > >>> a=numpy.empty( 1024 ) > >>> b=numpy.empty( 1024 ) > >>> a[1:] = b[:-1] > >>> a[1:] = a[:-1] > >>> > > It seems that in the second operation > we need to copy the view a[:-1] > but in the first operation we don't need > to copy b[:-1]. > > How does numpy detect this, or does it > always copy the source when assigning to a slice ? > > I've poked around the (numpy) code a bit and > tried some benchmarks, but it's still not so > clear to me. Hi, not being able to give an answer to this question, I would like to emphasize that this can be a very important issue: Firstly, I don't know how to monitor the memory usage *during* the execution of a line of code. (Putting numpy.testing.memusage() before and after that line does not help, if I remember things correctly). With Numeric I ran into a memory problem with the code appended below. It turned out, that internally a copy had been made which for huge arrays brought my system into swapping. (numpy behaves the same as Numeric. Moreover, it seems to consume around 8.5 MB more memory than Numeric?!) So I strongly agree that it would be nice to know in advance when temporaries are created. In addition it would be good to be able to debug memory allocation. (For example, with f2py there is the option -DF2PY_REPORT_ON_ARRAY_COPY=1 Note that this only works when generating the wrapper library, i.e., there is no switch to turn this on or off afterwards, at least as far as I know). 
Best, Arnd

##########################################################
from Numeric import *
#from numpy import *
import os
pid=os.getpid()
print "Process id: ",pid

N=200
NB=30    # number of wavefunctions
NT=20    # time steps

print "Expected size of `wfk` (in KB):", N*N*NB*8/1024.0
print "Expected size of `time_arr` (in KB):", N*N*NT*16/1024.0

wfk=zeros( (N,N,NB),Float)
phase=ones(NB,Complex)
time_arr=zeros( (N,N,NT),Complex)

print "press enter and watch the memory"
raw_input("(eg. with pmap %d | grep total)" % (pid) )

# this one does a full copy of wfk, because it is complex !!!
#while 1:
#    for tn in range(NT):
#        time_arr[:,:,tn]+=dot(wfk, phase)
#
# memory usage: varies around:
#  - 38524K/57276K with Numeric
#  - 46980K/66360K with numpy

while 1:
    for tn in range(NT):
        time_arr[:,:,tn]+=dot(wfk, phase.real)+1j*dot(wfk, phase.imag)
#
# memory usage: varies around:
#  - 38524K/40412K with Numeric
#  - 46984K/47616K with numpy
################################################################

From rbastian at free.fr Wed May 31 00:54:06 2006 From: rbastian at free.fr (René Bastian) Date: Wed May 31 00:54:06 2006 Subject: [Numpy-discussion] Any Numeric or numarray users on this list? In-Reply-To: <447D051E.9000709@ieee.org> References: <447D051E.9000709@ieee.org> Message-ID: <06053109532800.00754@rbastian> On Wednesday 31 May 2006 04:53, Travis Oliphant wrote: > Please help the developers by responding to a few questions. > > I am a numarray user > > 1) Have you transitioned or started to transition to NumPy (i.e. import > numpy)? no I tried to install numpy but the installation failed. > > 2) Will you transition within the next 6 months? (if you answered No to #1) > no, (hm, but if numarray will be prohibited, ...) > > 3) Please, explain your reason(s) for not making the switch. (if you > answered No to #2) > numarray works and works fine (from version number 0.8 to 1.5) > > > > 4) Please provide any suggestions for improving NumPy.
> > > > Thanks for your answers. > > > NumPy Developers > > -- René Bastian http://www.musiques-rb.org http://pythoneon.musiques-rb.org From ajikoe at gmail.com Wed May 31 01:46:02 2006 From: ajikoe at gmail.com (Pujo Aji) Date: Wed May 31 01:46:02 2006 Subject: [Numpy-discussion] Any Numeric or numarray users on this list? In-Reply-To: <447D051E.9000709@ieee.org> References: <447D051E.9000709@ieee.org> Message-ID: On 5/31/06, Travis Oliphant wrote: > > > Please help the developers by responding to a few questions. > > I'm a Numeric user. > > 1) Have you transitioned or started to transition to NumPy (i.e. import > numpy)? > No > > > 2) Will you transition within the next 6 months? (if you answered No to > #1) > No > > > > 3) Please, explain your reason(s) for not making the switch. (if you > answered No to #2) > Numeric is ok and the conversion somehow make my unittest fail..... 4) Please provide any suggestions for improving NumPy. > I think the conversion between Numpy and numeric should be > compatible...... > > > > > Thanks for your answers. > > > NumPy Developers > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From faltet at carabos.com Wed May 31 04:52:04 2006 From: faltet at carabos.com (Francesc Altet) Date: Wed May 31 04:52:04 2006 Subject: [Numpy-discussion] Any Numeric or numarray users on this list?
In-Reply-To: <447D051E.9000709@ieee.org> References: <447D051E.9000709@ieee.org> Message-ID: <200605311351.11499.faltet@carabos.com> Hi Travis, Here you have the answers for PyTables project (www.pytables.org). On Wednesday 31 May 2006 04:53, Travis Oliphant wrote: > 1) Have you transitioned or started to transition to NumPy (i.e. import > numpy)? Not yet, although support for it is in place through the use of the array protocol. > 2) Will you transition within the next 6 months? (if you answered No to #1) We don't know, but other projects on our radar make us think that we will not be able to do that in this timeframe. > 3) Please, explain your reason(s) for not making the switch. (if you > answered No to #2) As I said before, it is mainly a matter of priorities. Also, numarray works very well for PyTables usage, and besides, NumPy 1.0 is not yet there. > 4) Please provide any suggestions for improving NumPy. You are already doing a *great* work. Perhaps pushing numexpr in NumPy would be nice. Also working in introducing a simple array class in Python core and using the array protocol to access the data would be very good. Cheers, -- >0,0< Francesc Altet http://www.carabos.com/ V V Cárabos Coop. V. Enjoy Data "-" From schofield at ftw.at Wed May 31 06:17:03 2006 From: schofield at ftw.at (Ed Schofield) Date: Wed May 31 06:17:03 2006 Subject: [Numpy-discussion] Suggestions for NumPy In-Reply-To: <447D051E.9000709@ieee.org> References: <447D051E.9000709@ieee.org> Message-ID: <27BE229E-1192-4643-8454-5E0790A0AC7F@ftw.at> On 31/05/2006, at 4:53 AM, Travis Oliphant wrote: > Please help the developers by responding to a few questions. I've ported my code to NumPy. But I have some suggestions for improving NumPy.
I've now entered them as these tickets: Improvements for NumPy's web presence: http://projects.scipy.org/scipy/numpy/ticket/132 Squeeze behaviour for 1d and 0d arrays: http://projects.scipy.org/scipy/numpy/ticket/133 Array creation from sequences: http://projects.scipy.org/scipy/numpy/ticket/134 -- Ed From nwagner at iam.uni-stuttgart.de Wed May 31 06:35:03 2006 From: nwagner at iam.uni-stuttgart.de (Nils Wagner) Date: Wed May 31 06:35:03 2006 Subject: [Numpy-discussion] test_scalarmath.py", line 63 Message-ID: <447D9B2E.90500@iam.uni-stuttgart.de> >>> numpy.__version__ '0.9.9.2553' numpy.test(1,10) results in ====================================================================== FAIL: check_types (numpy.core.tests.test_scalarmath.test_types) ---------------------------------------------------------------------- Traceback (most recent call last): File "/usr/lib64/python2.4/site-packages/numpy/core/tests/test_scalarmath.py", line 63, in check_types assert val.dtype.num == typeconv[k,l] and \ AssertionError: error with (0,7) ---------------------------------------------------------------------- Ran 368 tests in 0.479s FAILED (failures=1) Nils From jg307 at cam.ac.uk Wed May 31 06:57:04 2006 From: jg307 at cam.ac.uk (James Graham) Date: Wed May 31 06:57:04 2006 Subject: [Numpy-discussion] Distutils problem with g95 Message-ID: <447DA07D.5010003@cam.ac.uk> numpy.distutils seems to have difficulties detecting the current version of the g95 compiler. I believe this is because the output of `g95 --version` has changed. The patch below seems to correct the problem (i.e. it now works with the latest g95) but my regexp foo is very weak so it may not be correct/optimal. 
--- /usr/lib64/python2.3/site-packages/numpy/distutils/fcompiler/g95.py	2006-01-06 21:29:40.000000000 +0000
+++ /home/jgraham/lib64/python/numpy/distutils/fcompiler/g95.py	2006-05-26 12:49:50.000000000 +0100
@@ -9,7 +9,7 @@
 class G95FCompiler(FCompiler):
     compiler_type = 'g95'
-    version_pattern = r'G95.*\(experimental\) \(g95!\) (?P<version>.*)\).*'
+    version_pattern = r'G95.*(?:\(experimental\))? \(g95!\) (?P<version>.*)\).*'
     executables = {
         'version_cmd'  : ["g95", "--version"],
-- "You see stars that clear have been dead for years But the idea just lives on..."
Gary From pearu at scipy.org Wed May 31 07:01:12 2006 From: pearu at scipy.org (Pearu Peterson) Date: Wed May 31 07:01:12 2006 Subject: [Numpy-discussion] Distutils problem with g95 In-Reply-To: <447DA07D.5010003@cam.ac.uk> References: <447DA07D.5010003@cam.ac.uk> Message-ID: On Wed, 31 May 2006, James Graham wrote: > numpy.distutils seems to have difficulties detecting the current version of > the g95 compiler. I believe this is because the output of `g95 --version` has > changed. The patch below seems to correct the problem (i.e. it now works with > the latest g95) but my regexp foo is very weak so it may not be > correct/optimal. Could you send me the output of g95 --version for reference? Thanks, Pearu From jg307 at cam.ac.uk Wed May 31 07:09:03 2006 From: jg307 at cam.ac.uk (James Graham) Date: Wed May 31 07:09:03 2006 Subject: [Numpy-discussion] Distutils problem with g95 In-Reply-To: References: <447DA07D.5010003@cam.ac.uk> Message-ID: <447DA348.2000405@cam.ac.uk> Pearu Peterson wrote: > > > On Wed, 31 May 2006, James Graham wrote: > >> numpy.distutils seems to have difficulties detecting the current >> version of the g95 compiler. I believe this is because the output of >> `g95 --version` has changed. The patch below seems to correct the >> problem (i.e. it now works with the latest g95) but my regexp foo is >> very weak so it may not be correct/optimal. > > Could you send me the output of > > g95 --version > > for reference? $ g95 --version G95 (GCC 4.0.3 (g95!) May 22 2006) Copyright (C) 2002-2005 Free Software Foundation, Inc. G95 comes with NO WARRANTY, to the extent permitted by law. You may redistribute copies of G95 under the terms of the GNU General Public License. For more information about these matters, see the file named COPYING -- "You see stars that clear have been dead for years But the idea just lives on..." 
-- Bright Eyes From rays at blue-cove.com Wed May 31 07:12:03 2006 From: rays at blue-cove.com (RayS) Date: Wed May 31 07:12:03 2006 Subject: [Numpy-discussion] Re: Any Numeric or numarray users on this list? In-Reply-To: <20060531031122.CA26E33E0B@sc8-sf-spam1.sourceforge.net> References: <20060531031122.CA26E33E0B@sc8-sf-spam1.sourceforge.net> Message-ID: <6.2.3.4.2.20060531063528.02bf0970@blue-cove.com> At 08:10 PM 5/30/2006, you wrote: >1) Have you transitioned or started to transition to NumPy (i.e. import >numpy)? only by following the threads here, so far no download yet >2) Will you transition within the next 6 months? (if you answered No to #1) yes, on this next project (assuming the small-array, <2048, performance compares to Numeric) >3) Please, explain your reason(s) for not making the switch. (if you >answered No to #2) if no, it is because most projects involve small bin number FFTs and correlations >4) Please provide any suggestions for improving NumPy. a 24-bit signed integer type, for the new class of ADCs coming out (or at least the ability to cast efficiently to Float32) a GPU back-end option for FFT ;-) Thanks, it's all good, Ray From pearu at scipy.org Wed May 31 07:24:12 2006 From: pearu at scipy.org (Pearu Peterson) Date: Wed May 31 07:24:12 2006 Subject: [Numpy-discussion] Distutils problem with g95 In-Reply-To: <447DA348.2000405@cam.ac.uk> References: <447DA07D.5010003@cam.ac.uk> <447DA348.2000405@cam.ac.uk> Message-ID: On Wed, 31 May 2006, James Graham wrote: > Pearu Peterson wrote: >> >> Could you send me the output of >> >> g95 --version >> >> for reference? > > $ g95 --version > > G95 (GCC 4.0.3 (g95!) May 22 2006) Thanks, I have applied the patch with modifications to numpy svn. Let me know if it fails to work.
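The patched pattern from this thread can be sanity-checked against both banner styles with a short snippet. This is only a sketch: the named group is written out in full as `(?P<version>...)`, and the old-style banner below is an assumed example, while the new-style one is the output James quoted.

```python
import re

# Pattern from the patch in this thread: the "(experimental)" marker
# is made optional so both old and new g95 banners match.
version_pattern = r'G95.*(?:\(experimental\))? \(g95!\) (?P<version>.*)\).*'

# Old-style banner is an assumed example; new-style is from the thread.
old_banner = 'G95 (GCC 4.0.1 (experimental) (g95!) Jan 12 2006)'
new_banner = 'G95 (GCC 4.0.3 (g95!) May 22 2006)'

for banner in (old_banner, new_banner):
    m = re.search(version_pattern, banner)
    # The trailing \).* lets the greedy group backtrack to the final ')'.
    print(m.group('version') if m else 'no match')
```

Running this prints the date portion of each banner, confirming the optional group does not break matching of the older output.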
Pearu From jdhunter at ace.bsd.uchicago.edu Wed May 31 07:32:05 2006 From: jdhunter at ace.bsd.uchicago.edu (John Hunter) Date: Wed May 31 07:32:05 2006 Subject: [Numpy-discussion] masked arrays Message-ID: <87d5duclvb.fsf@peds-pc311.bsd.uchicago.edu> I'm a bit of an ma newbie. I have a 2D masked array R and want to extract the non-masked values in the last column. Below I use logical indexing, but I suspect there is a "built-in" way w/ masked arrays. I read through the docstring, but didn't see anything better. In [66]: c = R[:,-1] In [67]: m = R.mask[:,-1] In [69]: c[m==0] Out[69]: array(data = [ 0.94202899 0.51839465 0.24080268 0.26198439 0.29877369 2.06856187 0.91415831 0.64994426 0.96544036 1.11259755 2.53623188 0.71571906 0.18394649 0.78037904 0.60869565 3.56744705 0.44147157 0.07692308 0.27090301 0.16610925 0.57068004 0.80267559 0.57636566 0.23634337 1.9509476 0.50761427 0.09587514 0.45039019 0.14381271 0.69007804 2.44481605 0.2909699 0.45930881 1.37123746 2.00668896 3.1638796 1.0735786 1.06800446 0.18952062 1.55964326 1.16833891 0.17502787 1.16610925 0.85507246 0.42140468 0.04236343 1.01337793 0.22853958 1.76365663 1.78372352 0.96209588 0.73578595 0.94760312 1.59531773 0.88963211], mask = [False False False False False False False False False False False False False False False False False False False False False False False False False False False False False False False False False False False False False False False False False False False False False False False False False False False False False False False], fill_value=1e+20) From nwagner at iam.uni-stuttgart.de Wed May 31 08:48:02 2006 From: nwagner at iam.uni-stuttgart.de (Nils Wagner) Date: Wed May 31 08:48:02 2006 Subject: [Numpy-discussion] numpy.test(1,10) results in a segfault Message-ID: <447DBA8D.5090908@iam.uni-stuttgart.de> test_wrap (numpy.core.tests.test_umath.test_special_methods) ... 
ok check_types (numpy.core.tests.test_scalarmath.test_types)*** glibc detected *** free() : invalid pointer: 0xb7ab74a0 *** Program received signal SIGABRT, Aborted. [Switching to Thread 16384 (LWP 14948)] 0xb7ca51f1 in kill () from /lib/i686/libc.so.6 (gdb) bt #0 0xb7ca51f1 in kill () from /lib/i686/libc.so.6 #1 0xb7e90401 in pthread_kill () from /lib/i686/libpthread.so.0 #2 0xb7e9044b in raise () from /lib/i686/libpthread.so.0 #3 0xb7ca4f84 in raise () from /lib/i686/libc.so.6 #4 0xb7ca6498 in abort () from /lib/i686/libc.so.6 #5 0xb7cd8cf6 in __libc_message () from /lib/i686/libc.so.6 #6 0xb7cde367 in malloc_printerr () from /lib/i686/libc.so.6 #7 0xb7cdfacf in free () from /lib/i686/libc.so.6 #8 0xb7ae3fc5 in gentype_dealloc (v=0x0) at scalartypes.inc.src:281 #9 0xb7f57bed in PyEval_EvalFrame () from /usr/lib/libpython2.4.so.1.0 #10 0xb7f5ad21 in PyEval_EvalCodeEx () from /usr/lib/libpython2.4.so.1.0 #11 0xb7f58cc5 in PyEval_EvalFrame () from /usr/lib/libpython2.4.so.1.0 #12 0xb7f5ad21 in PyEval_EvalCodeEx () from /usr/lib/libpython2.4.so.1.0 #13 0xb7f1113a in function_call () from /usr/lib/libpython2.4.so.1.0 #14 0xb7ef9c37 in PyObject_Call () from /usr/lib/libpython2.4.so.1.0 #15 0xb7f02edb in instancemethod_call () from /usr/lib/libpython2.4.so.1.0 #16 0xb7ef9c37 in PyObject_Call () from /usr/lib/libpython2.4.so.1.0 ---Type to continue, or q to quit--- #17 0xb7f58097 in PyEval_EvalFrame () from /usr/lib/libpython2.4.so.1.0 #18 0xb7f5ad21 in PyEval_EvalCodeEx () from /usr/lib/libpython2.4.so.1.0 #19 0xb7f1113a in function_call () from /usr/lib/libpython2.4.so.1.0 #20 0xb7ef9c37 in PyObject_Call () from /usr/lib/libpython2.4.so.1.0 #21 0xb7f02edb in instancemethod_call () from /usr/lib/libpython2.4.so.1.0 #22 0xb7ef9c37 in PyObject_Call () from /usr/lib/libpython2.4.so.1.0 #23 0xb7f34c2c in slot_tp_call () from /usr/lib/libpython2.4.so.1.0 #24 0xb7ef9c37 in PyObject_Call () from /usr/lib/libpython2.4.so.1.0 #25 0xb7f58097 in PyEval_EvalFrame () from 
/usr/lib/libpython2.4.so.1.0 #26 0xb7f5ad21 in PyEval_EvalCodeEx () from /usr/lib/libpython2.4.so.1.0 #27 0xb7f1113a in function_call () from /usr/lib/libpython2.4.so.1.0 #28 0xb7ef9c37 in PyObject_Call () from /usr/lib/libpython2.4.so.1.0 #29 0xb7f02edb in instancemethod_call () from /usr/lib/libpython2.4.so.1.0 #30 0xb7ef9c37 in PyObject_Call () from /usr/lib/libpython2.4.so.1.0 #31 0xb7f34c2c in slot_tp_call () from /usr/lib/libpython2.4.so.1.0 #32 0xb7ef9c37 in PyObject_Call () from /usr/lib/libpython2.4.so.1.0 #33 0xb7f58097 in PyEval_EvalFrame () from /usr/lib/libpython2.4.so.1.0 ---Type to continue, or q to quit--- #34 0xb7f5a663 in PyEval_EvalFrame () from /usr/lib/libpython2.4.so.1.0 #35 0xb7f5ad21 in PyEval_EvalCodeEx () from /usr/lib/libpython2.4.so.1.0 #36 0xb7f58cc5 in PyEval_EvalFrame () from /usr/lib/libpython2.4.so.1.0 #37 0xb7f5ad21 in PyEval_EvalCodeEx () from /usr/lib/libpython2.4.so.1.0 #38 0xb7f58cc5 in PyEval_EvalFrame () from /usr/lib/libpython2.4.so.1.0 #39 0xb7f5ad21 in PyEval_EvalCodeEx () from /usr/lib/libpython2.4.so.1.0 #40 0xb7f5aff5 in PyEval_EvalCode () from /usr/lib/libpython2.4.so.1.0 #41 0xb7f75778 in run_node () from /usr/lib/libpython2.4.so.1.0 #42 0xb7f77228 in PyRun_InteractiveOneFlags () from /usr/lib/libpython2.4.so.1.0 #43 0xb7f77396 in PyRun_InteractiveLoopFlags () from /usr/lib/libpython2.4.so.1.0 #44 0xb7f774a7 in PyRun_AnyFileExFlags () from /usr/lib/libpython2.4.so.1.0 #45 0xb7f7d66a in Py_Main () from /usr/lib/libpython2.4.so.1.0 #46 0x0804871a in main (argc=0, argv=0x0) at ccpython.cc:10 Can someone reproduce the segfault ? 
Linux amanda 2.6.11.4-21.12-default #1 Wed May 10 09:38:20 UTC 2006 i686 athlon i386 GNU/Linux >>> numpy.__version__ '0.9.9.2553' From pgmdevlist at mailcan.com Wed May 31 09:45:00 2006 From: pgmdevlist at mailcan.com (Pierre GM) Date: Wed May 31 09:45:00 2006 Subject: [Numpy-discussion] masked arrays In-Reply-To: <87d5duclvb.fsf@peds-pc311.bsd.uchicago.edu> References: <87d5duclvb.fsf@peds-pc311.bsd.uchicago.edu> Message-ID: <200605311243.49056.pgmdevlist@mailcan.com> On Wednesday 31 May 2006 10:25, John Hunter wrote: > I'm a bit of an ma newbie. I have a 2D masked array R and want to > extract the non-masked values in the last column. Below I use logical > indexing, but I suspect there is a "built-in" way w/ masked arrays. I > read through the docstring, but didn't see anything better. R[:,-1].compressed() should do the trick. From eric at enthought.com Wed May 31 09:58:08 2006 From: eric at enthought.com (eric) Date: Wed May 31 09:58:08 2006 Subject: [Numpy-discussion] Any Numeric or numarray users on this list? In-Reply-To: <447D051E.9000709@ieee.org> References: <447D051E.9000709@ieee.org> Message-ID: <447DCB07.8050607@enthought.com> > > Please help the developers by responding to a few questions. > > > > 1) Have you transitioned or started to transition to NumPy (i.e. > import numpy)? We (Enthought) have started. > > > 2) Will you transition within the next 6 months? (if you answered No > to #1) We have a number of deployed applications that use Numeric heavily. Much of our code is now NumPy/Numeric compatible, but it is not well tested on NumPy. That said, a recent large project will be delivered on NumPy this summer, and we are releasing an update to a legacy app using NumPy in the next month or so. Pearu, Travis, and the Numeric->NumPy conversion scripts have been very helpful in this respect. It has been (and remains) a big effort to get the ship turned in the direction of NumPy, but we're committed to it. 
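Pierre's `R[:,-1].compressed()` suggestion above can be illustrated with a minimal sketch; the small masked array below is made-up data standing in for John's `R`.

```python
import numpy.ma as ma

# Hypothetical 2D masked array: mask out one entry in the last column.
R = ma.masked_array([[1.0, 0.94],
                     [2.0, 0.52],
                     [3.0, 0.24]],
                    mask=[[False, False],
                          [False, True],
                          [False, False]])

# One call replaces the manual mask-indexing from the original post:
# take the last column, then keep only the unmasked entries.
vals = R[:, -1].compressed()
print(vals)  # a plain 1-D ndarray of the unmasked last-column values
```

`compressed()` returns an ordinary ndarray, so the result can be fed to functions that do not understand masked arrays.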
We are very much looking forward to using its new features. > > > 3) Please, explain your reason(s) for not making the switch. (if you > answered No to #2) Just time right now. We have noticed one major slow down in code, but it is a known issue (scalar math). This was easily fixed with a little weave code in the time being (so now we're actually 2-3 times faster than the old Numeric code. :-) > > 4) Please provide any suggestions for improving NumPy. No strong opinions here yet as I (sadly) haven't gotten to use it much yet. The scalar math speed hit us once, so others will probably hit it as well. Thanks again for all the amazing work on this stuff. It has already had an amazing impact on the community involvement and growth. From my own experience, I understand why others are slow to convert. Enthought has wanted to be an early adopter from the beginning, and we are still not there because of the effort involved in conversion and testing along with time pressures from other projects. Still, there is a nice feed back loop that happens here. As scipy/numpy continue to improve (more functionality, 64-bit stability, etc.) and more projects convert over, there are more reasons for people to update their code to the latest and greatest. My bet is it'll take 2-3 more years for the transition to run its course. see ya, eric > > > > > > Thanks for your answers. > > > NumPy Developers > > > > > ------------------------------------------------------- > All the advantages of Linux Managed Hosting--Without the Cost and Risk! > Fully trained technicians. The highest number of Red Hat > certifications in > the hosting industry. Fanatical Support. 
Click to learn more > http://sel.as-us.falkag.net/sel?cmd=lnk&kid=107521&bid=248729&dat=121642 > _______________________________________________ > Numpy-discussion mailing list > Numpy-discussion at lists.sourceforge.net > https://lists.sourceforge.net/lists/listinfo/numpy-discussion > From Chris.Barker at noaa.gov Wed May 31 10:09:03 2006 From: Chris.Barker at noaa.gov (Christopher Barker) Date: Wed May 31 10:09:03 2006 Subject: [Numpy-discussion] Suggestions for NumPy In-Reply-To: <27BE229E-1192-4643-8454-5E0790A0AC7F@ftw.at> References: <447D051E.9000709@ieee.org> <27BE229E-1192-4643-8454-5E0790A0AC7F@ftw.at> Message-ID: <447DCD79.3000808@noaa.gov> Ed Schofield wrote: > Improvements for NumPy's web presence: > http://projects.scipy.org/scipy/numpy/ticket/132 From that page: NumPy's web presence could be improved by: 2. Pointing www.numpy.org to numeric.scipy.org instead of the SF page I don't like this. *numpy is not scipy*. It should have its own page (which would refer to scipy). That page should be something better than the raw sourceforge page, however. A lot of us use numpy without anything else from the scipy project, and scipy is still a major pain in the *&&^* to build. Can you even build it with gcc 4 yet? I like the other ideas. -Chris -- Christopher Barker, Ph.D. Oceanographer NOAA/OR&R/HAZMAT (206) 526-6959 voice 7600 Sand Point Way NE (206) 526-6329 fax Seattle, WA 98115 (206) 526-6317 main reception Chris.Barker at noaa.gov From strawman at astraw.com Wed May 31 11:26:06 2006 From: strawman at astraw.com (Andrew Straw) Date: Wed May 31 11:26:06 2006 Subject: [Numpy-discussion] masked arrays In-Reply-To: <87d5duclvb.fsf@peds-pc311.bsd.uchicago.edu> References: <87d5duclvb.fsf@peds-pc311.bsd.uchicago.edu> Message-ID: <447DC7F9.9060201@astraw.com> John, you want c.compressed(). John Hunter wrote: >I'm a bit of an ma newbie. I have a 2D masked array R and want to >extract the non-masked values in the last column.
Below I use logical >indexing, but I suspect there is a "built-in" way w/ masked arrays. I >read through the docstring, but didn't see anything better. > >In [66]: c = R[:,-1] > >In [67]: m = R.mask[:,-1] > > >In [69]: c[m==0] >Out[69]: >array(data = > [ 0.94202899 0.51839465 0.24080268 0.26198439 0.29877369 > 2.06856187 > 0.91415831 0.64994426 0.96544036 1.11259755 2.53623188 > 0.71571906 > 0.18394649 0.78037904 0.60869565 3.56744705 0.44147157 > 0.07692308 > 0.27090301 0.16610925 0.57068004 0.80267559 0.57636566 > 0.23634337 > 1.9509476 0.50761427 0.09587514 0.45039019 0.14381271 > 0.69007804 > 2.44481605 0.2909699 0.45930881 1.37123746 2.00668896 > 3.1638796 > 1.0735786 1.06800446 0.18952062 1.55964326 1.16833891 > 0.17502787 > 1.16610925 0.85507246 0.42140468 0.04236343 1.01337793 > 0.22853958 > 1.76365663 1.78372352 0.96209588 0.73578595 0.94760312 > 1.59531773 > 0.88963211], > mask = > [False False False False False False False False False False False > False > False False False False False False False False False False False > False > False False False False False False False False False False False > False > False False False False False False False False False False False > False > False False False False False False False], > fill_value=1e+20) > > > > >------------------------------------------------------- >All the advantages of Linux Managed Hosting--Without the Cost and Risk! >Fully trained technicians. The highest number of Red Hat certifications in >the hosting industry. Fanatical Support. 
Click to learn more >http://sel.as-us.falkag.net/sel?cmd=lnk&kid=107521&bid=248729&dat=121642 >_______________________________________________ >Numpy-discussion mailing list >Numpy-discussion at lists.sourceforge.net >https://lists.sourceforge.net/lists/listinfo/numpy-discussion > > From paustin at eos.ubc.ca Wed May 31 11:42:01 2006 From: paustin at eos.ubc.ca (Philip Austin) Date: Wed May 31 11:42:01 2006 Subject: [Numpy-discussion] masked arrays In-Reply-To: <447DC7F9.9060201@astraw.com> References: <87d5duclvb.fsf@peds-pc311.bsd.uchicago.edu> <447DC7F9.9060201@astraw.com> Message-ID: <17533.58189.41909.508462@owl.eos.ubc.ca> > John Hunter wrote: > >I read through the docstring, but didn't see anything better. Andrew Straw writes: > John, you want c.compressed(). I've also found the old Numeric documentation to be helpful: http://numeric.scipy.org/numpydoc/numpy-22.html regards, Phil From bsouthey at gmail.com Wed May 31 11:48:04 2006 From: bsouthey at gmail.com (Bruce Southey) Date: Wed May 31 11:48:04 2006 Subject: [Numpy-discussion] Any Numeric or numarray users on this list? In-Reply-To: <447D051E.9000709@ieee.org> References: <447D051E.9000709@ieee.org> Message-ID: Hi, On 5/30/06, Travis Oliphant wrote: > > Please help the developers by responding to a few questions. > > > > 1) Have you transitioned or started to transition to NumPy (i.e. import > numpy)? Yes and No > 2) Will you transition within the next 6 months? (if you answered No to #1) Probably for new code only. Having ported numarray code to NumPy, there are too many quirks that need to be found. > 3) Please, explain your reason(s) for not making the switch. (if you > answered No to #2) Hopefully 1.0 will be out by then :-). Also, bugs and performance will be at a similar level to numeric and numarray. > 4) Please provide any suggestions for improving NumPy. The main one at present is to provide a stable release that can serve as a reference point for users.
This is more a reflection of having a stable version of numpy for reference rather than having to check the svn for an appropriate version. Bruce > Thanks for your answers. > > > NumPy Developers > > > > > ------------------------------------------------------- > All the advantages of Linux Managed Hosting--Without the Cost and Risk! > Fully trained technicians. The highest number of Red Hat certifications in > the hosting industry. Fanatical Support. Click to learn more > http://sel.as-us.falkag.net/sel?cmd=lnk&kid=107521&bid=248729&dat=121642 > _______________________________________________ > Numpy-discussion mailing list > Numpy-discussion at lists.sourceforge.net > https://lists.sourceforge.net/lists/listinfo/numpy-discussion > From mrovner at cadence.com Wed May 31 13:26:11 2006 From: mrovner at cadence.com (Mike Rovner) Date: Wed May 31 13:26:11 2006 Subject: [Numpy-discussion] Re: cx_freezing numpy problem. In-Reply-To: References: Message-ID: Seems, the problem was that _internal.py was not picked up by Freeze. Using --include-modules=numpy.core._internal helps. Mike Rovner wrote: > Hi, > > I'm trying to package numpy based application but get following TB: > > No scipy-style subpackage 'core' found in > /home/mrovner/dev/psgapp/src/gui/lnx32/dvip/numpy. Ignoring: No module > named _internal > Traceback (most recent call last): > File "/home/mrovner/src/cx_Freeze-3.0.2/initscripts/Console.py", line > 26, in ? > exec code in m.__dict__ > File "dvip.py", line 42, in ? > File "dvip.py", line 31, in dvip_gui > File "mainui.py", line 1, in ? > File "psgdb.pyx", line 162, in psgdb > File > "/lan/dfm/grp_mm_data1/dev/tools/linux-x86_32/lib/python2.4/site-packages/numpy/__init__.py", > line 35, in ? 
> verbose=NUMPY_IMPORT_VERBOSE,postpone=False) > File > "/lan/dfm/grp_mm_data1/dev/tools/linux-x86_32/lib/python2.4/site-packages/numpy/_import_tools.py", > line 173, in __call__ > self._init_info_modules(packages or None) > File > "/lan/dfm/grp_mm_data1/dev/tools/linux-x86_32/lib/python2.4/site-packages/numpy/_import_tools.py", > line 68, in _init_info_modules > exec 'import %s.info as info' % (package_name) > File "", line 1, in ? > File > "/lan/dfm/grp_mm_data1/dev/tools/linux-x86_32/lib/python2.4/site-packages/numpy/lib/__init__.py", > line 5, in ? > from type_check import * > File > "/lan/dfm/grp_mm_data1/dev/tools/linux-x86_32/lib/python2.4/site-packages/numpy/lib/type_check.py", > line 8, in ? > import numpy.core.numeric as _nx > File > "/lan/dfm/grp_mm_data1/dev/tools/linux-x86_32/lib/python2.4/site-packages/numpy/core/__init__.py", > line 6, in ? > import umath > File "ExtensionLoader.py", line 12, in ? > AttributeError: 'module' object has no attribute '_ARRAY_API' > > I did freezing with: > FreezePython --install-dir=lnx32 --include-modules=numpy > --include-modules=numpy.core dvip.py > > I'm using Python-2.4.2 numpy-0.9.8 cx_Freeze-3.0.2 on linux. Everything > compiled from source. > > Any help appreciated. > > Thanks, > Mike > > > > ------------------------------------------------------- > All the advantages of Linux Managed Hosting--Without the Cost and Risk! > Fully trained technicians. The highest number of Red Hat certifications in > the hosting industry. Fanatical Support. 
Click to learn more > http://sel.as-us.falkag.net/sel?cmd=lnk&kid=107521&bid=248729&dat=121642 From gkmohan at gmail.com Wed May 31 13:29:02 2006 From: gkmohan at gmail.com (Krishna Mohan Gundu) Date: Wed May 31 13:29:02 2006 Subject: [Numpy-discussion] numpy-0.9.5 build is not clean In-Reply-To: <70ec82800605242240v6cf2f893j53f7ec6b0511b79a@mail.gmail.com> References: <70ec82800605241639g5ce7ddeay17b596c1b4335ab4@mail.gmail.com> <447539B2.80206@astraw.com> <70ec82800605242240v6cf2f893j53f7ec6b0511b79a@mail.gmail.com> Message-ID: <70ec82800605311327x254d44aci6552314f63d02ce1@mail.gmail.com> Dear Andrew, I forgot to delete the include files from the previous installation, which I installed manually by copying the include files. Sorry for the trouble. Hope this helps someone, if (s)he does the same mistake as mine. cheers, Krishna. On 5/24/06, Krishna Mohan Gundu wrote: > Dear Andrew, > > Thanks for your reply. As I said earlier I deleted the existing numpy > installation and the build directories. I am more than confidant that > I did it right. Is there anyway I can prove myself wrong? > > I also tried importing umath.so from build directory > === > $ cd numpy-0.9.5/build/lib.linux-x86_64-2.4/numpy/core > $ ls -l umath.so > -rwxr-xr-x 1 krishna users 463541 May 22 17:46 umath.so > $ python > Python 2.4.3 (#1, Apr 8 2006, 19:10:42) > [GCC 3.2.3 20030502 (Red Hat Linux 3.2.3-49)] on linux2 > Type "help", "copyright", "credits" or "license" for more information. > >>> import umath > Traceback (most recent call last): > File "", line 1, in ? > RuntimeError: module compiled against version 90500 of C-API but this > version of numpy is 90504 > >>> > === > > So something is wrong with the build for sure. Could there be anything > wrong other than not deleting the build directory? > > thanks, > Krishna. > > On 5/24/06, Andrew Straw wrote: > > Dear Krishna, it looks like there are some mixed versions of numpy > > floating around on your system. 
Before building, remove the "build" > > directory completely. > > > > Krishna Mohan Gundu wrote: > > > > > Hi, > > > > > > I am trying to build numpy-0.9.5, downloaded from sourceforge download > > > page, as higher versions are not yet tested for pygsl. The build seems > > > to be broken. I uninstall existing numpy and start build from scratch > > > but I get the following errors when I import numpy after install. > > > > > > ==== > > > > > >>>> from numpy import * > > >>> > > > import core -> failed: module compiled against version 90504 of C-API > > > but this version of numpy is 90500 > > > import random -> failed: 'module' object has no attribute 'dtype' > > > import linalg -> failed: module compiled against version 90504 of > > > C-API but this version of numpy is 90500 > > > ==== > > > > > > Any help is appreciated. Am I doing something wrong or is it known > > > that this build is broken? > > > > > > thanks, > > > Krishna. > > > > > > > > > ------------------------------------------------------- > > > All the advantages of Linux Managed Hosting--Without the Cost and Risk! > > > Fully trained technicians. The highest number of Red Hat > > > certifications in > > > the hosting industry. Fanatical Support. Click to learn more > > > http://sel.as-us.falkag.net/sel?cmd=lnk&kid=107521&bid=248729&dat=121642 > > > _______________________________________________ > > > Numpy-discussion mailing list > > > Numpy-discussion at lists.sourceforge.net > > > https://lists.sourceforge.net/lists/listinfo/numpy-discussion > > > > > > > From nvf at MIT.EDU Wed May 31 13:48:04 2006 From: nvf at MIT.EDU (Nick Fotopoulos) Date: Wed May 31 13:48:04 2006 Subject: [Numpy-discussion] Re: Any Numeric or numarray users on this list? 
In-Reply-To: <20060531031123.F3D7532B05@sc8-sf-spam1.sourceforge.net> References: <20060531031123.F3D7532B05@sc8-sf-spam1.sourceforge.net> Message-ID: <5C73A4D7-66AB-4737-A41E-C61DACFCBB87@mit.edu> > 1) Have you transitioned or started to transition to NumPy (i.e. > import > numpy)? Yes, I've pretty much decided that numpy is the way to go with my young analysis codes, even though it is going to be somewhat painful to distribute to my colleagues and among compute nodes in clusters. I am saved by the fact that we all work within an NFS space, so I can host up-to-date compiles of numpy and scipy without requiring root access nor requiring everyone to compile their own. Of course, I will end up giving them a few lines of LD_LIBRARY_PATH to add. Old codes that used Numeric have all transitioned smoothly to numpy in my internal tests. I may distribute two versions (one Numeric, one numpy) for wider distribution outside my clusters, but there may be a better way to do this. > 2) Will you transition within the next 6 months? (if you answered > No to #1) > 3) Please, explain your reason(s) for not making the switch. (if you > answered No to #2) > 4) Please provide any suggestions for improving NumPy. I think NumPy is fantastic. That said, even the fantastic can improve. I am very happy with numpy as software, so my comments are mostly about adoption and accessibility. I'd really like to see the team getting packages (32 and 64 bit) into standard Redhat, Debian, and Fink repositories quickly after a release. I believe Redhat is at numpy 0.9.5 and neither Debian (testing) nor Fink have packages in the default repositories. Having to add extra lines to /etc/apt/sources.list (or Redhat equivalent) to grab packages from private repositories will dissuade a lot of people from adopting numpy. Many of them are the same people who are unable to compile it themselves. 
Up-to-date packages also help me with my problem in #1 -- an admin will happily yum an rpm for me on all of the cluster nodes, but might not be willing to install it from a nonstandard source. I agree that Googlability is very important and easily addressable. I'm glad someone brought this up. Btw, googling "numpy rpm" brings up http://pylab.sourceforge.net/, which is some more of Travis's old Numeric stuff (labeled as such on the page). Thanks for the hard work in coding and thanks for keeping a thriving discussion list going. Take care, Nick From fperez.net at gmail.com Wed May 31 14:36:02 2006 From: fperez.net at gmail.com (Fernando Perez) Date: Wed May 31 14:36:02 2006 Subject: [Numpy-discussion] Suggestions for NumPy In-Reply-To: <447DCD79.3000808@noaa.gov> References: <447D051E.9000709@ieee.org> <27BE229E-1192-4643-8454-5E0790A0AC7F@ftw.at> <447DCD79.3000808@noaa.gov> Message-ID: On 5/31/06, Christopher Barker wrote: > > Ed Schofield wrote: > > Improvements for NumPy's web presence: > > http://projects.scipy.org/scipy/numpy/ticket/132 > > From that page: > > NumPy's web presence could be improved by: > > 2. Pointing www.numpy.org to numeric.scipy.org instead of the SF page > > I don't like this. *numpy is not scipy*. It should have its own page > (which would refer to scipy). That page should be something better than > the raw sourceforge page, however. Well, ipython is not scipy either, and yet its homepage is ipython.scipy.org. I think it's simply a matter of convenience that the Enthought hosting infrastructure is so much more pleasant to use than SF, that other projects use scipy.org as an umbrella. In that sense, I think it's fair to say that numpy is part of the 'scipy family'. I don't know, at least this doesn't particularly bother me. > A lot of us use numpy without anything else from the scipy project, and > scipy is still a major pain in the *&&^* to build. Can you even build it > with gcc 4 yet?
I built it on a recent Ubuntu not too long ago, without any glitches. I can check again tonight on a fresh Dapper with up-to-date SVN if you want. Cheers, f From rowen at cesmail.net Wed May 31 14:49:01 2006 From: rowen at cesmail.net (Russell E. Owen) Date: Wed May 31 14:49:01 2006 Subject: [Numpy-discussion] Re: Any Numeric or numarray users on this list? References: <447D051E.9000709@ieee.org> Message-ID: In article <447D051E.9000709 at ieee.org>, Travis Oliphant wrote: > Please help the developers by responding to a few questions. > > 1) Have you transitioned or started to transition to NumPy (i.e. import > numpy)? No, not beyond installing it to see if it works. > 2) Will you transition within the next 6 months? (if you answered No to #1) I expect to start to transition within a few months of both numpy and pyfits-with-numpy being released and being reported as stable and fast. > 4) Please provide any suggestions for improving NumPy. Please improve notification of documentation updates. I keep seeing complaints from folks who've bought the numpy documentation that they get no notification of updates. That makes me very reluctant to buy the documentation myself. I wish that full support for masked arrays had made it in (i.e. masked arrays are first class citizens that are accepted by all functions). The inability to apply 2d filters to masked image arrays is the main thing still missing for me in numarray.
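[Editor's note: the masked-array limitation Russell describes can be illustrated with a small sketch. This is written against the numpy.ma interface of current releases, not code from the thread; the names and values are invented. A 2-D filter that knows nothing about masks cannot consume a masked image directly, so the usual workaround is to fill the masked pixels first.]

```python
import numpy as np

# A 3x3 "image" with one bad pixel masked out.
img = np.ma.masked_array(
    np.arange(9.0).reshape(3, 3),
    mask=[[0, 0, 0], [0, 1, 0], [0, 0, 0]],
)

# Generic 2-D filters accept plain ndarrays, not masked arrays, so the
# masked pixel is filled first -- here with the mean of the unmasked ones.
filled = img.filled(img.mean())
print(filled[1, 1])  # -> 4.0 (mean of the eight unmasked pixels)
```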
-- Russell From stefan at sun.ac.za Wed May 31 16:08:02 2006 From: stefan at sun.ac.za (Stefan van der Walt) Date: Wed May 31 16:08:02 2006 Subject: [Numpy-discussion] numpy.test(1,10) results in a segfault In-Reply-To: <447DBA8D.5090908@iam.uni-stuttgart.de> References: <447DBA8D.5090908@iam.uni-stuttgart.de> Message-ID: <20060531230706.GA32246@mentat.za.net> I filed this as ticket #135: http://projects.scipy.org/scipy/numpy/ticket/135 Regards Stéfan On Wed, May 31, 2006 at 05:47:25PM +0200, Nils Wagner wrote: > test_wrap (numpy.core.tests.test_umath.test_special_methods) ... ok > check_types (numpy.core.tests.test_scalarmath.test_types)*** glibc > detected *** free() : invalid pointer: > 0xb7ab74a0 *** > > Program received signal SIGABRT, Aborted. From oliphant.travis at ieee.org Wed May 31 17:40:00 2006 From: oliphant.travis at ieee.org (Travis Oliphant) Date: Wed May 31 17:40:00 2006 Subject: [Numpy-discussion] numpy.test(1,10) results in a segfault In-Reply-To: <20060531230706.GA32246@mentat.za.net> References: <447DBA8D.5090908@iam.uni-stuttgart.de> <20060531230706.GA32246@mentat.za.net> Message-ID: <447E3711.9080609@ieee.org> Stefan van der Walt wrote: > I filed this as ticket #135: > > http://projects.scipy.org/scipy/numpy/ticket/135 > > Thanks. This one is due to a bug/oddity in Python itself. Apparently complex-number subtypes can't use a different memory allocator than the Python memory allocator. I've let the Python powers know about the bug and worked around it in NumPy, for now. -Travis From aisaac at american.edu Wed May 31 19:58:03 2006 From: aisaac at american.edu (Alan G Isaac) Date: Wed May 31 19:58:03 2006 Subject: [Numpy-discussion] Re: Any Numeric or numarray users on this list? In-Reply-To: References: <447D051E.9000709@ieee.org> Message-ID: On Wed, 31 May 2006, "Russell E. Owen" apparently wrote: > Please improve notification of documentation updates.
> I keep seeing complaints from folks who've bought the > numpy documentation that they get no notification of > updates. That makes me very reluctant to buy the > documentation myself. The documentation is excellent, and I've been completely satisfied with Travis's handling of the updates. It is also a minimal investment in an excellent project. fwiw, Alan Isaac From oliphant.travis at ieee.org Wed May 31 21:03:03 2006 From: oliphant.travis at ieee.org (Travis Oliphant) Date: Wed May 31 21:03:03 2006 Subject: [Numpy-discussion] test_scalarmath.py", line 63 In-Reply-To: <447D9B2E.90500@iam.uni-stuttgart.de> References: <447D9B2E.90500@iam.uni-stuttgart.de> Message-ID: <447E66D8.7020908@ieee.org> Nils Wagner wrote: >>>> numpy.__version__ >>>> > '0.9.9.2553' > > > numpy.test(1,10) results in > ====================================================================== > FAIL: check_types (numpy.core.tests.test_scalarmath.test_types) > ---------------------------------------------------------------------- > Traceback (most recent call last): > File > "/usr/lib64/python2.4/site-packages/numpy/core/tests/test_scalarmath.py", > line 63, in check_types > assert val.dtype.num == typeconv[k,l] and \ > AssertionError: error with (0,7) > This is probably on a 64-bit system. It would be great if you could take the code in the test module and adapt it to print out the typecodes that are obtained using 0-dimensional arrays. Of course, maybe that's a better way to run the test.... -Travis > ---------------------------------------------------------------------- > Ran 368 tests in 0.479s > > FAILED (failures=1) > > Nils
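[Editor's sketch of the diagnostic Travis asks for, a guess at the idea rather than the actual test-module code: build 0-dimensional arrays of each type pair, add them, and print the resulting typecode. The exact promotions printed depend on the numpy version's casting rules, which is precisely where a 64-bit box and the test's table can disagree.]

```python
import numpy as np

# Print the dtype produced by arithmetic on 0-d arrays of each type pair.
types = [np.int8, np.int16, np.int32, np.int64, np.float32, np.float64]
results = {}
for a in types:
    for b in types:
        out = (np.array(1, dtype=a) + np.array(1, dtype=b)).dtype
        results[(a.__name__, b.__name__)] = out.name
        print("%-8s + %-8s -> %s" % (a.__name__, b.__name__, out.name))
```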
> _______________________________________________ > Numpy-discussion mailing list > Numpy-discussion at lists.sourceforge.net > https://lists.sourceforge.net/lists/listinfo/numpy-discussion > From fperez.net at gmail.com Wed May 31 21:20:04 2006 From: fperez.net at gmail.com (Fernando Perez) Date: Wed May 31 21:20:04 2006 Subject: [Numpy-discussion] Re: Any Numeric or numarray users on this list? In-Reply-To: References: <447D051E.9000709@ieee.org> Message-ID: On 5/31/06, Alan G Isaac wrote: > On Wed, 31 May 2006, "Russell E. Owen" apparently wrote: > > Please improve notification of documentation updates. > > I keep seeing complaints from folks who've bought the > > numpy documentation that they get no notification of > > updates. That makes me very reluctant to buy the > > documentation myself. > > The documentation is excellent, and I've been completely > satisfied with Travis's handling of the updates. I'll add my voice on this front. When I've needed a special update (for a workshop, where I needed to print out hardcopies as up-to-date as possible), Travis was very forthcoming and quickly gave me his most recent copy. So while a few weeks ago a couple of emails may not have been answered right away, overall I don't feel in any way slighted by his handling of the doc system, quite the opposite. And he also indicated he was in the process of setting up a more automated system. To be honest, I'd rather wait for a manual update than see Travis devote one or two evenings to configuring something of this nature when he could be coding for numpy :)