From cournape at gmail.com Sun Mar 1 03:00:07 2009 From: cournape at gmail.com (David Cournapeau) Date: Sun, 1 Mar 2009 17:00:07 +0900 Subject: [Numpy-discussion] Call for testing: full blas/lapack builds of numpy on windows 64 bits In-Reply-To: References: <49A82386.9030208@ar.media.kyoto-u.ac.jp> <5b8d13220902271133pe80530bj36200e42a1aeafd1@mail.gmail.com> <49A862D7.60009@gmail.com> <5b8d13220902281057g4a53ccb1i8fdbf37cc511116e@mail.gmail.com> Message-ID: <5b8d13220903010000p44815686o175379771788928f@mail.gmail.com> On Sun, Mar 1, 2009 at 12:26 PM, Bruce Southey wrote: > Or just a Vista bug. > > A possible option could be using wine64 > (http://wiki.winehq.org/Wine64) . But it is probably more work and > even if it did work it may not be informative. Wine64 is not yet usable AFAIK - it was only a few weeks / months ago that they were capable of running a hello world, I believe. Numpy is notably more complicated than a hello world :) One problem is gcc - gcc 4.4 will be the first release to properly support win64 (I don't claim to understand all the details, though). > I virtually avoid windows for work so I can not answer your question. > I knew about the issues from another person trying to compile code > using MinGW on another almost identical 64 bit Vista system. Even the > related thread seemed to have solutions that only worked for some people > and I do not know if a solution was found in that case. Which related thread? > Anyhow, I do agree that having Python 2.6 support is more important > than running the anti-virus software. I am afraid that if it crashes from time to time, people will complain to us, not to the AV software. Hopefully, once gdb runs reliably on that platform, we will be able to diagnose the problem a bit better. David From kgdunn at gmail.com Sun Mar 1 10:37:01 2009 From: kgdunn at gmail.com (Kevin Dunn) Date: Sun, 1 Mar 2009 10:37:01 -0500 Subject: [Numpy-discussion] Help on subclassing numpy.ma: __array_wrap__ Message-ID: Hi everyone, I'm subclassing Numpy's MaskedArray to create a data class that handles missing data, but adds some extra info I need to carry around. However I've been having problems keeping this extra info attached to the subclass instances after performing operations on them. The bare-bones script that I've copied here shows the basic issue: http://pastebin.com/f69b979b8 There are 2 classes: one where I am able to subclass numpy (with help from the great description at http://www.scipy.org/Subclasses), and the other where I subclass numpy.ma, using the same ideas again. When stepping through the code in a debugger, lines 76 to 96, I can see that the numpy subclass, called DT, calls DT.__array_wrap__() after it completes unary and binary operations. But the numpy.ma subclass, called DTMA, does not seem to call DTMA.__array_wrap__(), especially line 111. Just to test this idea, I overrode the __mul__ function in my DTMA subclass to call DTMA.__array_wrap__() and it returns my extra attributes, in the same way that Numpy did. My questions are: (a) Is MA intended to be subclassed? (b) If so, perhaps I'm missing something to make this work. Any pointers will be appreciated. So far it seems the only way for me to sub-class numpy.ma is to override all numpy.ma functions of interest for my class and add a DTMA.__array_wrap__() call to the end of them. Hopefully there is an easier way.
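For reference, the kind of workaround I mean looks roughly like this -- a simplified, hypothetical sketch, not the exact code from the pastebin:

    import numpy as np
    import numpy.ma as ma

    class DTMA(ma.MaskedArray):
        # masked array that tries to carry an 'extra' attribute through operations
        # (simplified sketch; the real class needs mask/fill handling too)
        def __new__(cls, data, extra=None, **kwargs):
            obj = ma.MaskedArray.__new__(cls, data, **kwargs)
            obj.extra = extra
            return obj

        def __array_finalize__(self, obj):
            ma.MaskedArray.__array_finalize__(self, obj)
            self.extra = getattr(obj, 'extra', None)

        def __array_wrap__(self, out_arr, context=None):
            # re-attach the extra info after a ufunc
            out = ma.MaskedArray.__array_wrap__(self, out_arr, context)
            out.extra = self.extra
            return out

        # the manual workaround: force __array_wrap__ ourselves, since the
        # masked multiply doesn't seem to call it for me
        def __mul__(self, other):
            return self.__array_wrap__(ma.MaskedArray.__mul__(self, other))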
Related to this question, was there a particular outcome from this archived discussion (I only joined the list recently): http://article.gmane.org/gmane.comp.python.numeric.general/24315 because that dictionary object would be exactly what I'm after here. Thanks, Kevin -------------- next part -------------- An HTML attachment was scrubbed... URL: From josef.pktd at gmail.com Sun Mar 1 11:30:42 2009 From: josef.pktd at gmail.com (josef.pktd at gmail.com) Date: Sun, 1 Mar 2009 11:30:42 -0500 Subject: [Numpy-discussion] Help on subclassing numpy.ma: __array_wrap__ In-Reply-To: References: Message-ID: <1cd32cbb0903010830y50699ff0v7f18c0d7ad9f2500@mail.gmail.com> On Sun, Mar 1, 2009 at 10:37 AM, Kevin Dunn wrote: > Hi everyone, > > I'm subclassing Numpy's MaskedArray to create a data class that handles > missing data, but adds some extra info I need to carry around. However I've > been having problems keeping this extra info attached to the subclass > instances after performing operations on them. > > The bare-bones script that I've copied here shows the basic issue: > http://pastebin.com/f69b979b8 There are 2 classes: one where I am able to > subclass numpy (with help from the great description at > http://www.scipy.org/Subclasses), and the other where I subclass numpy.ma, > using the same ideas again. > > When stepping through the code in a debugger, lines 76 to 96, I can see that > the numpy subclass, called DT, calls DT.__array_wrap__() after it completes > unary and binary operations. But the numpy.ma subclass, called DTMA, does > not seem to call DTMA.__array_wrap__(), especially line 111. > > Just to test this idea, I overrode the __mul__ function in my DTMA subclass > to call DTMA.__array_wrap__() and it returns my extra attributes, in the > same way that Numpy did. > > My questions are: > > (a) Is MA intended to be subclassed? > > (b) If so, perhaps I'm missing something to make this work. Any pointers > will be appreciated. > > So far it seems the only way for me to sub-class numpy.ma is to override all > numpy.ma functions of interest for my class and add a DTMA.__array_wrap__() > call to the end of them. Hopefully there is an easier way. > > Related to this question, was there a particular outcome from this > archived discussion (I only joined the list recently): > http://article.gmane.org/gmane.comp.python.numeric.general/24315 because > that dictionary object would be exactly what I'm after here. > > Thanks, > > Kevin > timeseries in the scikits are subclassing MaskedArray and might provide some examples http://scipy.org/scipy/scikits/browser/trunk/timeseries/scikits/timeseries/tseries.py#L446 Josef From zachary.pincus at yale.edu Sun Mar 1 13:59:16 2009 From: zachary.pincus at yale.edu (Zachary Pincus) Date: Sun, 1 Mar 2009 13:59:16 -0500 Subject: [Numpy-discussion] Bilateral filter In-Reply-To: <9457e7c80902281515n223156d9m6379b5dafe4a8352@mail.gmail.com> References: <710F2847B0018641891D9A216027636029C431@ex3.envision.co.il> <9457e7c80902130150h4dc8755eqf6097fc0d20b6522@mail.gmail.com> <710F2847B0018641891D9A216027636029C432@ex3.envision.co.il> <9457e7c80902141029w451160a2te6f7e6eb57db2d1e@mail.gmail.com> <2966A073-2846-4950-8A48-F170990503D8@yale.edu> <710F2847B0018641891D9A216027636029C450@ex3.envision.co.il> <25CF9D14-91C8-4BDB-A824-7365EDBD6130@yale.edu> <9457e7c80902281515n223156d9m6379b5dafe4a8352@mail.gmail.com> Message-ID: >> Well, the latest cython doesn't help -- both errors still appear as >> below. (Also, the latest cython can't run the numpy tests either.)
>> I'm >> befuddled. > > That's pretty weird. Did you remove the .so that was built as well as > any source files, before doing build_ext with the latest Cython? Also > good to make sure that the latest Cython is, in fact, the one being > used. Yeah... and I just tried that again, with the same results. I have no idea what could be going wrong. E.g. why would 'cimport numpy as np' not add np to the namespace on my machine whereas it does so on yours... Also, I assume that constructs like: cdef int i, dim = data.dimensions[0] are some special numpy-support syntax that's supposed to be added by the cimport numpy line? (Because numpy arrays don't expose a 'dimensions' attribute to python code...) It's like for some reason on my machine, cython isn't building its numpy support correctly. Which is understandable in the light of the fact that cython can't pass the numpy tests on my machine either. Odd indeed. Maybe I'll try on the cython list, since you guys seem to have demonstrated that the problem isn't in the bilateral code! Zach From nadavh at visionsense.com Sun Mar 1 14:36:33 2009 From: nadavh at visionsense.com (Nadav Horesh) Date: Sun, 1 Mar 2009 21:36:33 +0200 Subject: [Numpy-discussion] Bilateral filter References: <710F2847B0018641891D9A216027636029C431@ex3.envision.co.il><9457e7c80902130150h4dc8755eqf6097fc0d20b6522@mail.gmail.com><710F2847B0018641891D9A216027636029C432@ex3.envision.co.il><9457e7c80902141029w451160a2te6f7e6eb57db2d1e@mail.gmail.com><2966A073-2846-4950-8A48-F170990503D8@yale.edu><710F2847B0018641891D9A216027636029C450@ex3.envision.co.il><25CF9D14-91C8-4BDB-A824-7365EDBD6130@yale.edu><9457e7c80902281515n223156d9m6379b5dafe4a8352@mail.gmail.com> Message-ID: <710F2847B0018641891D9A216027636029C460@ex3.envision.co.il> 1. "dimensions" is a field in the C struct, that describes the array object. 2. Is there a chance that the header file numpy/arrayobject.h belongs to a different numpy version that you run? I am not very experienced with cython (I suppose that Stefan has some experience). As you said, probably the cython list is a better place to look for an answer. I would be happy to see how this issue is resolved. Nadav -----Original Message----- From: numpy-discussion-bounces at scipy.org on behalf of Zachary Pincus Sent: Sun 01-Mar-09 20:59 To: Discussion of Numerical Python Subject: Re: [Numpy-discussion] Bilateral filter >> Well, the latest cython doesn't help -- both errors still appear as >> below. (Also, the latest cython can't run the numpy tests either.) >> I'm >> befuddled. > > That's pretty weird. Did you remove the .so that was built as well as > any source files, before doing build_ext with the latest Cython? Also > good to make sure that the latest Cython is, in fact, the one being > used. Yeah... and I just tried that again, with the same results. I have no idea what could be going wrong. E.g. why would 'cimport numpy as np' not add np to the namespace on my machine whereas it does so on yours... Also, I assume that constructs like: cdef int i, dim = data.dimensions[0] are some special numpy-support syntax that's supposed to be added by the cimport numpy line? (Because numpy arrays don't expose a 'dimensions' attribute to python code...) It's like for some reason on my machine, cython isn't building its numpy support correctly. Which is understandable in the light of the fact that cython can't pass the numpy tests on my machine either. Odd indeed. Maybe I'll try on the cython list, since you guys seem to have demonstrated that the problem isn't in the bilateral code!
Zach _______________________________________________ Numpy-discussion mailing list Numpy-discussion at scipy.org http://projects.scipy.org/mailman/listinfo/numpy-discussion -------------- next part -------------- A non-text attachment was scrubbed... Name: winmail.dat Type: application/ms-tnef Size: 4284 bytes Desc: not available URL: From simpson at math.toronto.edu Sun Mar 1 16:12:14 2009 From: simpson at math.toronto.edu (Gideon Simpson) Date: Sun, 1 Mar 2009 16:12:14 -0500 Subject: [Numpy-discussion] loadtxt slow Message-ID: <39294C61-1CFD-41B7-BA86-C559E22B2744@math.toronto.edu> So I have some data sets of about 160000 floating point numbers stored in text files. I find that loadtxt is rather slow. Is this to be expected? Would it be faster if it were loading binary data? -gideon From robert.kern at gmail.com Sun Mar 1 16:17:44 2009 From: robert.kern at gmail.com (Robert Kern) Date: Sun, 1 Mar 2009 15:17:44 -0600 Subject: [Numpy-discussion] loadtxt slow In-Reply-To: <39294C61-1CFD-41B7-BA86-C559E22B2744@math.toronto.edu> References: <39294C61-1CFD-41B7-BA86-C559E22B2744@math.toronto.edu> Message-ID: <3d375d730903011317h2ae194rcda4c32fd758fed0@mail.gmail.com> On Sun, Mar 1, 2009 at 15:12, Gideon Simpson wrote: > So I have some data sets of about 160000 floating point numbers stored > in text files. ?I find that loadtxt is rather slow. ?Is this to be > expected? Probably. You don't say exactly what you mean by "slow", so it's difficult to tell. But it is unlikely that you are running into some slow corner case or something that no one else has seen. >?Would it be faster if it were loading binary data? Substantially. -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From zachary.pincus at yale.edu Sun Mar 1 16:53:30 2009 From: zachary.pincus at yale.edu (Zachary Pincus) Date: Sun, 1 Mar 2009 16:53:30 -0500 Subject: [Numpy-discussion] Bilateral filter In-Reply-To: <710F2847B0018641891D9A216027636029C460@ex3.envision.co.il> References: <710F2847B0018641891D9A216027636029C431@ex3.envision.co.il><9457e7c80902130150h4dc8755eqf6097fc0d20b6522@mail.gmail.com><710F2847B0018641891D9A216027636029C432@ex3.envision.co.il><9457e7c80902141029w451160a2te6f7e6eb57db2d1e@mail.gmail.com><2966A073-2846-4950-8A48-F170990503D8@yale.edu><710F2847B0018641891D9A216027636029C450@ex3.envision.co.il><25CF9D14-91C8-4BDB-A824-7365EDBD6130@yale.edu><9457e7c80902281515n223156d9m6379b5dafe4a8352@mail.gmail.com> <710F2847B0018641891D9A216027636029C460@ex3.envision.co.il> Message-ID: <9D37E933-1A25-4C97-B897-029F1FCF5773@yale.edu> Hi guys, Dag, the cython person who seems to deal with the numpy stuff, had this to say: > - cimport and import are different things; you need both. > - The "dimensions" field is in Cython renamed "shape" to be closer > to the Python interface. This is done in Cython/Includes/numpy.pxd After including both the 'cimport' and 'import' lines, and changing 'dimensions' to 'shape', things work perfectly. And now the shoe is on the other foot -- I wonder why you guys were getting proper results with what Dag claims to be buggy code! Odd indeed. Zach On Mar 1, 2009, at 2:36 PM, Nadav Horesh wrote: > 1. "dimensions" is a field in the C struct, that describes the array > object. > 2. Is there a chance that the header file numpy/arrayobject.h > belongs to a > different numpy version that you run? 
> > I am not very experienced with cython (I suppose that Stefan has > some experience). > As you said, probably the cython list is a better place to look for > an answer. I would be happy to see how this issue resolved. > > Nadav > > > > -----????? ??????----- > ???: numpy-discussion-bounces at scipy.org ??? Zachary Pincus > ????: ? 01-???-09 20:59 > ??: Discussion of Numerical Python > ????: Re: [Numpy-discussion] Bilateral filter > >>> Well, the latest cython doesn't help -- both errors still appear as >>> below. (Also, the latest cython can't run the numpy tests either.) >>> I'm >>> befuddled. >> >> That's pretty weird. Did you remove the .so that was build as well >> as >> any source files, before doing build_ext with the latest Cython? >> Also >> good to make sure that the latest Cython is, in fact, the one being >> used. > > Yeah... and I just tried that again, with the same results. I have no > idea what could be going wrong. E.g. why would 'cimport numpy as np' > not add np to the namespace on my machine whereas it does so on > yours... > > Also, I assume that constructs like: > cdef int i, dim = data.dimensions[0] > are some special numpy-support syntax that's supposed to be added by > the cimport numpy line? (Because numpy arrays don't expose a > 'dimensions' attribute to python code...) It's like for some reason on > my machine, cython isn't building its numpy support correctly. Which > is understandable in the light of the fact that cython can't pass the > numpy tests on my machine either. Odd indeed. Maybe I'll try on the > cython list, since you guys seem to have demonstrated that the problem > isn't in the bilateral code! > > Zach > _______________________________________________ > Numpy-discussion mailing list > Numpy-discussion at scipy.org > http://projects.scipy.org/mailman/listinfo/numpy-discussion > > _______________________________________________ > Numpy-discussion mailing list > Numpy-discussion at scipy.org > http://projects.scipy.org/mailman/listinfo/numpy-discussion From stefan at sun.ac.za Sun Mar 1 16:53:34 2009 From: stefan at sun.ac.za (=?ISO-8859-1?Q?St=E9fan_van_der_Walt?=) Date: Sun, 1 Mar 2009 23:53:34 +0200 Subject: [Numpy-discussion] Bilateral filter In-Reply-To: References: <710F2847B0018641891D9A216027636029C431@ex3.envision.co.il> <9457e7c80902130150h4dc8755eqf6097fc0d20b6522@mail.gmail.com> <710F2847B0018641891D9A216027636029C432@ex3.envision.co.il> <9457e7c80902141029w451160a2te6f7e6eb57db2d1e@mail.gmail.com> <2966A073-2846-4950-8A48-F170990503D8@yale.edu> <710F2847B0018641891D9A216027636029C450@ex3.envision.co.il> <25CF9D14-91C8-4BDB-A824-7365EDBD6130@yale.edu> <9457e7c80902281515n223156d9m6379b5dafe4a8352@mail.gmail.com> Message-ID: <9457e7c80903011353u5792e9c3n5d153050c57f1236@mail.gmail.com> Zach, I put the source my Cython generated here: http://mentat.za.net/refer/bilateral_base.c Can you try to compile that? Cheers St?fan 2009/3/1 Zachary Pincus : >>> Well, the latest cython doesn't help -- both errors still appear as >>> below. (Also, the latest cython can't run the numpy tests either.) >>> I'm >>> befuddled. >> >> That's pretty weird. ?Did you remove the .so that was build as well as >> any source files, before doing build_ext with the latest Cython? ?Also >> good to make sure that the latest Cython is, in fact, the one being >> used. > > Yeah... and I just tried that again, with the same results. I have no > idea what could be going wrong. E.g. 
why would 'cimport numpy as np' > not add np to the namespace on my machine whereas it does so on yours... From stefan at sun.ac.za Sun Mar 1 17:09:57 2009 From: stefan at sun.ac.za (=?ISO-8859-1?Q?St=E9fan_van_der_Walt?=) Date: Mon, 2 Mar 2009 00:09:57 +0200 Subject: [Numpy-discussion] Bilateral filter In-Reply-To: <9D37E933-1A25-4C97-B897-029F1FCF5773@yale.edu> References: <710F2847B0018641891D9A216027636029C431@ex3.envision.co.il> <9457e7c80902141029w451160a2te6f7e6eb57db2d1e@mail.gmail.com> <2966A073-2846-4950-8A48-F170990503D8@yale.edu> <710F2847B0018641891D9A216027636029C450@ex3.envision.co.il> <25CF9D14-91C8-4BDB-A824-7365EDBD6130@yale.edu> <9457e7c80902281515n223156d9m6379b5dafe4a8352@mail.gmail.com> <710F2847B0018641891D9A216027636029C460@ex3.envision.co.il> <9D37E933-1A25-4C97-B897-029F1FCF5773@yale.edu> Message-ID: <9457e7c80903011409n2f082037mf782c7a4c5014205@mail.gmail.com> Hey Zach, 2009/3/1 Zachary Pincus : > Dag, the cython person who seems to deal with the numpy stuff, had > this to say: >> - cimport and import are different things; you need both. >> - The "dimensions" field is in Cython renamed "shape" to be closer >> to the Python interface. This is done in Cython/Includes/numpy.pxd Thanks for following up. I made the fixes in: http://github.com/stefanv/bilateral.git I think we should combine all these image processing algorithms (and the ones you sent to the list) into an image processing scikit. We've certainly got enough algorithms lying around! If you think that's a good idea, I'll set it up this coming week. Cheers St?fan From zachary.pincus at yale.edu Sun Mar 1 17:23:02 2009 From: zachary.pincus at yale.edu (Zachary Pincus) Date: Sun, 1 Mar 2009 17:23:02 -0500 Subject: [Numpy-discussion] Bilateral filter In-Reply-To: <9457e7c80903011409n2f082037mf782c7a4c5014205@mail.gmail.com> References: <710F2847B0018641891D9A216027636029C431@ex3.envision.co.il> <9457e7c80902141029w451160a2te6f7e6eb57db2d1e@mail.gmail.com> <2966A073-2846-4950-8A48-F170990503D8@yale.edu> <710F2847B0018641891D9A216027636029C450@ex3.envision.co.il> <25CF9D14-91C8-4BDB-A824-7365EDBD6130@yale.edu> <9457e7c80902281515n223156d9m6379b5dafe4a8352@mail.gmail.com> <710F2847B0018641891D9A216027636029C460@ex3.envision.co.il> <9D37E933-1A25-4C97-B897-029F1FCF5773@yale.edu> <9457e7c80903011409n2f082037mf782c7a4c5014205@mail.gmail.com> Message-ID: > 2009/3/1 Zachary Pincus : >> Dag, the cython person who seems to deal with the numpy stuff, had >> this to say: >>> - cimport and import are different things; you need both. >>> - The "dimensions" field is in Cython renamed "shape" to be closer >>> to the Python interface. This is done in Cython/Includes/numpy.pxd > > Thanks for following up. I made the fixes in: > > http://github.com/stefanv/bilateral.git Cool! Does this, out of curiosity, break things for you? (Or Nadav?) > I think we should combine all these image processing algorithms (and > the ones you sent to the list) into an image processing scikit. We've > certainly got enough algorithms lying around! > > If you think that's a good idea, I'll set it up this coming week. I'm all for it. I've got a few other bits lying around that might be good there too: - 2D iso-contour finding (sub-pixel precision) - 2D image warping via thin-plate splines I also have some code for various geometric algorithms lying around: - calculating optimal rigid alignments of point-sets ("Procrustes Analysis") - line intersections, closest points to lines, distance to lines, etc. if that would be of any use to anyone. 
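(The rigid-alignment bit is tiny, by the way -- the core is mostly one SVD. A rough, untested sketch of the idea, with made-up names, for two (n, d) point sets A and B:

    import numpy as np

    def rigid_align(A, B):
        # rotation R and translation t minimizing ||dot(A, R) + t - B||
        # (the usual SVD-based Procrustes/Kabsch solution)
        Am = A.mean(axis=0)
        Bm = B.mean(axis=0)
        U, s, Vt = np.linalg.svd(np.dot((A - Am).T, B - Bm))
        R = np.dot(U, Vt)
        if np.linalg.det(R) < 0:  # exclude reflections
            U[:, -1] *= -1
            R = np.dot(U, Vt)
        t = Bm - np.dot(Am, R)
        return R, t

The full code handles weighting and degenerate cases as well.)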
Zach From stefan at sun.ac.za Sun Mar 1 17:35:15 2009 From: stefan at sun.ac.za (=?ISO-8859-1?Q?St=E9fan_van_der_Walt?=) Date: Mon, 2 Mar 2009 00:35:15 +0200 Subject: [Numpy-discussion] Bilateral filter In-Reply-To: References: <710F2847B0018641891D9A216027636029C431@ex3.envision.co.il> <710F2847B0018641891D9A216027636029C450@ex3.envision.co.il> <25CF9D14-91C8-4BDB-A824-7365EDBD6130@yale.edu> <9457e7c80902281515n223156d9m6379b5dafe4a8352@mail.gmail.com> <710F2847B0018641891D9A216027636029C460@ex3.envision.co.il> <9D37E933-1A25-4C97-B897-029F1FCF5773@yale.edu> <9457e7c80903011409n2f082037mf782c7a4c5014205@mail.gmail.com> Message-ID: <9457e7c80903011435u41029845ma47d0996d6bfe63d@mail.gmail.com> 2009/3/2 Zachary Pincus : >> http://github.com/stefanv/bilateral.git > > Cool! Does this, out of curiosity, break things for you? (Or Nadav?) I wish I had some way to test. Do you maybe have a short example that I can convert to a test? > I'm all for it. I've got a few other bits lying around that might be > good there too: > - 2D iso-contour finding (sub-pixel precision) > - 2D image warping via thin-plate splines > I also have some code for various geometric algorithms lying around: > - calculating optimal rigid alignments of point-sets ("Procrustes > Analysis") > - line intersections, closest points to lines, distance to lines, etc. > > if that would be of any use to anyone. Definitely. In addition I have code for polygon clipping, Hough transforms, grey-level co-occurrence matrices, connected components, shortest paths and linear position-invariant filtering. Cheers Stéfan From michael.s.gilbert at gmail.com Sun Mar 1 14:29:54 2009 From: michael.s.gilbert at gmail.com (Michael Gilbert) Date: Sun, 1 Mar 2009 14:29:54 -0500 Subject: [Numpy-discussion] loadtxt slow In-Reply-To: <39294C61-1CFD-41B7-BA86-C559E22B2744@math.toronto.edu> References: <39294C61-1CFD-41B7-BA86-C559E22B2744@math.toronto.edu> Message-ID: <20090301142954.46d15837.michael.s.gilbert@gmail.com> On Sun, 1 Mar 2009 16:12:14 -0500 Gideon Simpson wrote: > So I have some data sets of about 160000 floating point numbers stored > in text files. I find that loadtxt is rather slow. Is this to be > expected? Would it be faster if it were loading binary data? i have run into this as well. loadtxt uses a python list to allocate memory for the data it reads in, so once you get to about 1/4th of your available memory, it will start allocating the updated list (every time it reads a new value from your data file) in swap instead of main memory, which is ridiculously slow (in fact it causes my system to be quite unresponsive and a jumpy cursor). i have rewritten loadtxt to be smarter about allocating memory, but it is slower overall and doesn't support all of the original arguments/options (yet). i have some ideas to make it smarter/more efficient, but have not had the time to work on it recently. i will send the current version to the list tomorrow when i have access to the system that it is on.
best wishes, mike From michael.s.gilbert at gmail.com Sun Mar 1 14:32:58 2009 From: michael.s.gilbert at gmail.com (Michael Gilbert) Date: Sun, 1 Mar 2009 14:32:58 -0500 Subject: [Numpy-discussion] loadtxt slow In-Reply-To: <20090301142954.46d15837.michael.s.gilbert@gmail.com> References: <39294C61-1CFD-41B7-BA86-C559E22B2744@math.toronto.edu> <20090301142954.46d15837.michael.s.gilbert@gmail.com> Message-ID: <20090301143258.55e93675.michael.s.gilbert@gmail.com> On Sun, 1 Mar 2009 14:29:54 -0500 Michael Gilbert wrote: > i have rewritten loadtxt to be smarter about allocating memory, but > it is slower overall and doesn't support all of the original > arguments/options (yet). i had meant to say that my version is slower for smaller data sets (when you aren't close to your main memory limit), but it is orders of magnitude faster for large data sets. From bpederse at gmail.com Sun Mar 1 19:51:00 2009 From: bpederse at gmail.com (Brent Pedersen) Date: Sun, 1 Mar 2009 16:51:00 -0800 Subject: [Numpy-discussion] loadtxt slow In-Reply-To: <20090301142954.46d15837.michael.s.gilbert@gmail.com> References: <39294C61-1CFD-41B7-BA86-C559E22B2744@math.toronto.edu> <20090301142954.46d15837.michael.s.gilbert@gmail.com> Message-ID: On Sun, Mar 1, 2009 at 11:29 AM, Michael Gilbert wrote: > On Sun, 1 Mar 2009 16:12:14 -0500 Gideon Simpson wrote: > >> So I have some data sets of about 160000 floating point numbers stored >> in text files. ?I find that loadtxt is rather slow. ?Is this to be >> expected? ?Would it be faster if it were loading binary data? > > i have run into this as well. ?loadtxt uses a python list to allocate > memory for the data it reads in, so once you get to about 1/4th of your > available memory, it will start allocating the updated list (every > time it reads a new value from your data file) in swap instead of main > memory, which is rediculously slow (in fact it causes my system to be > quite unresponsive and a jumpy cursor). i have rewritten loadtxt to be > smarter about allocating memory, but it is slower overall and doesn't > support all of the original arguments/options (yet). ?i have some > ideas to make it smarter/more efficient, but have not had the time > to work on it recently. > > i will send the current version to the list tomorrow when i have access > to the system that it is on. > > best wishes, > mike > _______________________________________________ > Numpy-discussion mailing list > Numpy-discussion at scipy.org > http://projects.scipy.org/mailman/listinfo/numpy-discussion > to address the slowness, i use wrappers around savetxt/loadtxt that save/load a .npy file along with/instead of the .txt file. -- and the loadtxt wrapper checks if the .npy is up-to-date. code here: http://rafb.net/p/dGBJjg80.html of course it's still slow the first time. i look forward to your speedups. -brentp From efiring at hawaii.edu Sun Mar 1 21:47:05 2009 From: efiring at hawaii.edu (Eric Firing) Date: Sun, 01 Mar 2009 16:47:05 -1000 Subject: [Numpy-discussion] loadtxt slow In-Reply-To: <39294C61-1CFD-41B7-BA86-C559E22B2744@math.toronto.edu> References: <39294C61-1CFD-41B7-BA86-C559E22B2744@math.toronto.edu> Message-ID: <49AB48A9.4070806@hawaii.edu> Gideon Simpson wrote: > So I have some data sets of about 160000 floating point numbers stored > in text files. I find that loadtxt is rather slow. Is this to be > expected? Would it be faster if it were loading binary data? Depending on the format you may be able to use numpy.fromfile, which I suspect would be much faster. 
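For a flat, whitespace-separated file of floats, a minimal sketch (the filename and column count here are made up):

    import numpy as np
    data = np.fromfile('data.txt', sep=' ')  # text mode: whitespace-separated values
    data = data.reshape(-1, 4)               # e.g. if the file is a table with 4 columns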
It only handles very simple ascii formats, though. Eric From zachary.pincus at yale.edu Sun Mar 1 22:17:26 2009 From: zachary.pincus at yale.edu (Zachary Pincus) Date: Sun, 1 Mar 2009 22:17:26 -0500 Subject: [Numpy-discussion] Bilateral filter In-Reply-To: <9457e7c80903011435u41029845ma47d0996d6bfe63d@mail.gmail.com> References: <710F2847B0018641891D9A216027636029C431@ex3.envision.co.il> <710F2847B0018641891D9A216027636029C450@ex3.envision.co.il> <25CF9D14-91C8-4BDB-A824-7365EDBD6130@yale.edu> <9457e7c80902281515n223156d9m6379b5dafe4a8352@mail.gmail.com> <710F2847B0018641891D9A216027636029C460@ex3.envision.co.il> <9D37E933-1A25-4C97-B897-029F1FCF5773@yale.edu> <9457e7c80903011409n2f082037mf782c7a4c5014205@mail.gmail.com> <9457e7c80903011435u41029845ma47d0996d6bfe63d@mail.gmail.com> Message-ID: Hi St?fan, >>> http://github.com/stefanv/bilateral.git >> >> Cool! Does this, out of curiosity, break things for you? (Or Nadav?) > > I wish I had some way to test. Do you maybe have a short example that > I can convert to a test? Here's my test case for basic working-ness (e.g. non exception- throwing) of that bilateral code: In [7]: bilateral.bilateral(numpy.arange(25).reshape((5,5)), 4, 10) Out[7]: array([[ 7, 7, 7, 8, 8], [ 9, 9, 9, 10, 10], [11, 11, 12, 12, 12], [13, 13, 14, 14, 14], [15, 15, 16, 16, 16]]) That's all I'd been using to provoke the errors before, so presumably if you get that far with the fixed code, then things should be good as far as cython's concerned? >> I'm all for it. I've got a few other bits lying around that might be >> good there too: >> - 2D iso-contour finding (sub-pixel precision) >> - 2D image warping via thin-plate splines > >> I also have some code for various geometric algorithms lying around: >> - calculating optimal rigid alignments of point-sets ("Procrustes >> Analysis") >> - line intersections, closest points to lines, distance to lines, >> etc. >> >> if that would be of any use to anyone. > > Definitely. In addition I have code for polygon clipping, hough > transforms, grey-level co-occurrence matrices, connected components, > shortest paths and linear position-invariant filtering. Aah, fantastic. The co-occurrence matrix stuff will be very useful to me! Zach From mail at stevesimmons.com Mon Mar 2 01:50:27 2009 From: mail at stevesimmons.com (Stephen Simmons) Date: Mon, 02 Mar 2009 07:50:27 +0100 Subject: [Numpy-discussion] Easy way to vectorize a loop? In-Reply-To: <49A862D7.60009@gmail.com> References: <49A82386.9030208@ar.media.kyoto-u.ac.jp> <5b8d13220902271133pe80530bj36200e42a1aeafd1@mail.gmail.com> <49A862D7.60009@gmail.com> Message-ID: <49AB81B3.6000409@stevesimmons.com> Hi, Can anyone help me out with a simple way to vectorize this loop? # idx and vals are arrays with indexes and values used to update array data # data = numpy.ndarray(shape=(100,100,100,100), dtype='f4') flattened = data.ravel() for i in range(len(vals)): flattened[idx[i]]+=vals[i] Many thanks! Stephen From robert.kern at gmail.com Mon Mar 2 01:58:27 2009 From: robert.kern at gmail.com (Robert Kern) Date: Mon, 2 Mar 2009 00:58:27 -0600 Subject: [Numpy-discussion] Easy way to vectorize a loop? 
In-Reply-To: <49AB81B3.6000409@stevesimmons.com> References: <49A82386.9030208@ar.media.kyoto-u.ac.jp> <5b8d13220902271133pe80530bj36200e42a1aeafd1@mail.gmail.com> <49A862D7.60009@gmail.com> <49AB81B3.6000409@stevesimmons.com> Message-ID: <3d375d730903012258q47faf4dbl92a859deb88aa91e@mail.gmail.com> On Mon, Mar 2, 2009 at 00:50, Stephen Simmons wrote: > Hi, > > Can anyone help me out with a simple way to vectorize this loop? > > # idx and vals are arrays with indexes and values used to update array data > # data = numpy.ndarray(shape=(100,100,100,100), dtype='f4') > flattened = data.ravel() > for i in range(len(vals)): > ? ?flattened[idx[i]]+=vals[i] flattened[idx] = vals -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From cimrman3 at ntc.zcu.cz Mon Mar 2 03:56:35 2009 From: cimrman3 at ntc.zcu.cz (Robert Cimrman) Date: Mon, 02 Mar 2009 09:56:35 +0100 Subject: [Numpy-discussion] intersect1d and setmember1d In-Reply-To: References: <90CBFFFE6273484B9579400AC950800502024765@ntsydexm01.pc.internal.macquarie.com> <243385.2089.qm@web94910.mail.in2.yahoo.com> Message-ID: <49AB9F43.4060804@ntc.zcu.cz> Neil wrote: > mudit sharma yahoo.com> writes: > >> intersect1d and setmember1d doesn't give expected results in case there are > duplicate values in either >> array becuase it works by sorting data and substracting previous value. Is > there an alternative in numpy >> to get indices of intersected values. >> >> In [31]: p nonzero(setmember1d(v1.Id, v2.Id))[0] >> [ 0 1 2 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 > <-------------- index 2 shouldn't be here look at the >> data below. >> 26 27 28 29] >> >> In [32]: p v1.Id[:10] >> [ 232. 232. 233. 233. 234. 234. 235. 235. 237. 237.] >> >> In [33]: p v2.Id[:10] >> [ 232. 232. 234. 234. 235. 235. 236. 236. 237. 237.] >> > > > As far as I know there isn't an obvious way to get the functionality of > setmember1d working on non-unique inputs. However, I've needed this operation > quite a lot, so here's a function I wrote that does it. It's only a few times > slower than numpy's setmember1d. You're welcome to use it. Hi Neil! I would like to add your function to arraysetops.py - is it ok? Just the name would be changed to setmember1d_nu, to follow the naming in the module (like intersect1d_nu). Thank you, r. From neilcrighton at gmail.com Mon Mar 2 04:39:28 2009 From: neilcrighton at gmail.com (Neil Crighton) Date: Mon, 2 Mar 2009 09:39:28 +0000 (UTC) Subject: [Numpy-discussion] intersect1d and setmember1d References: <90CBFFFE6273484B9579400AC950800502024765@ntsydexm01.pc.internal.macquarie.com> <243385.2089.qm@web94910.mail.in2.yahoo.com> <49AB9F43.4060804@ntc.zcu.cz> Message-ID: Robert Cimrman ntc.zcu.cz> writes: > Hi Neil! > > I would like to add your function to arraysetops.py - is it ok? Just the > name would be changed to setmember1d_nu, to follow the naming in the > module (like intersect1d_nu). > > Thank you, > r. > That's fine! There's no licence attached, it's in the public domain. 
Neil From dwf at cs.toronto.edu Mon Mar 2 05:19:18 2009 From: dwf at cs.toronto.edu (David Warde-Farley) Date: Mon, 2 Mar 2009 05:19:18 -0500 Subject: [Numpy-discussion] Slicing/selection in multiple dimensions simultaneously In-Reply-To: <463e11f90902280927k42a01ae5j7c0ed87ece03dca0@mail.gmail.com> References: <268febdf0709111511n3ca15d42o85d31831178d96a@mail.gmail.com> <46E71591.20802@gmail.com> <46E72116.8040408@enthought.com> <463e11f90902261900o748940b6yf8410abda82524cc@mail.gmail.com> <069E94BE-B877-47C8-A723-703A7E3620B9@cs.toronto.edu> <6CF9CBA8-B21A-44F2-BE77-218DBBC05648@cs.toronto.edu> <463e11f90902280927k42a01ae5j7c0ed87ece03dca0@mail.gmail.com> Message-ID: On 28-Feb-09, at 12:27 PM, Jonathan Taylor wrote: > This does seem like the only way to write this nicely. Unfortunately, > I think this may be wasteful memory wise (in contrast to what the > obvious matlab code would do) as it constructs an array with the whole > first index intact at first. True enough, though if I understand correctly, this is only a _view_ onto the original array, and nothing is immediately copied. So it does waste memory creating a view and then a view on the view, but I don't think it's proportional to the size of the returned array. Maybe Robert or someone else can confirm this. David From timmichelsen at gmx-topmail.de Mon Mar 2 05:26:05 2009 From: timmichelsen at gmx-topmail.de (Timmie) Date: Mon, 2 Mar 2009 10:26:05 +0000 (UTC) Subject: [Numpy-discussion] saving an array of strings Message-ID: Hello, can numpy.savetxt save an array of strings? I got the following error when saving an array containing strings formatted from datetime objects:

    File "C:\Programme\pythonxy\python\lib\site-packages\numpy\lib\io.py", line 542, in savetxt
        fh.write(format % tuple(row) + '\n')
    TypeError: float argument required

Thanks for any help & regards, Timmie From brennan.williams at visualreservoir.com Mon Mar 2 06:08:49 2009 From: brennan.williams at visualreservoir.com (Brennan Williams) Date: Tue, 03 Mar 2009 00:08:49 +1300 Subject: [Numpy-discussion] populating an array Message-ID: <49ABBE41.3020805@visualreservoir.com> Ok... I'm using Traits and numpy. I have a 3D grid with directions I,J and K. I have NI,NJ,NK cells in the I,J,K directions so I have NI*NJ*NK cells overall. I have data arrays with a value for each cell in the grid. I'm going to store this as a 1D array, i.e. 1....ncells where ncells=NI*NJ*NK, rather than as a 3D array. Apart from lots of other data arrays that will be read in from external files, I want to create I, J and K data arrays where the 'I' array contains the I index for the cell, the 'J' array the J index etc. I haven't used numpy extensively and I'm coming from a Fortran/C background so I'm hideously inefficient. At the moment I have something like....

    self.iarray=zeros(self.ncells,dtype=int)
    for icell in arange(1,self.ncells+2):
        # icell=i+(j-1)*ni+(k-1)*ni*nj
        i,j,k=rn2ijk(icell,self.ni,self.nj)
        self.iarray[icell]=i

rn2ijk is defined as...

    def rn2ijk(n,ni,nj):
        nij=ni*nj
        k=(n+nij-1)/nij
        ij=n-(k-1)*nij
        j=(ij+ni-1)/ni
        i=ij-(j-1)*ni
        return i,j,k

Obviously I can improve my use of rn2ijk as for a start I'm recalculating nij for every cell (and I can have anything from a few thousand to a few million cells). But I'm sure I could do this more efficiently by probably starting off with a 3d array, looping over i,j,k and then reshaping it into a 1d array. Ideas?
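(For what it's worth, I suppose even my 1D loop could be replaced by integer arithmetic over all the cell numbers at once -- an untested sketch of what I mean:

    n = arange(1, self.ncells + 1)   # 1-based cell numbers, as in rn2ijk
    nij = self.ni * self.nj
    k = (n - 1) // nij + 1
    ij = n - (k - 1) * nij
    j = (ij - 1) // self.ni + 1
    i = ij - (j - 1) * self.ni
    self.iarray, self.jarray, self.karray = i, j, k

but maybe there's something cleaner.)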
Brennan From nwagner at iam.uni-stuttgart.de Mon Mar 2 07:46:37 2009 From: nwagner at iam.uni-stuttgart.de (Nils Wagner) Date: Mon, 02 Mar 2009 13:46:37 +0100 Subject: [Numpy-discussion] AttributeError: 'str' object has no attribute 'seek' Message-ID: Hi all, I encountered a problem wrt loadtxt. Traceback (most recent call last): File "mac.py", line 9, in mac = loadtxt('mac_diff.pmat.gz',skiprows=27,comments='!',usecols=(0,2,4),dtype='|S40') File "/data/home/nwagner/local/lib/python2.5/site-packages/numpy/lib/io.py", line 384, in loadtxt fh = seek_gzip_factory(fname) File "/data/home/nwagner/local/lib/python2.5/site-packages/numpy/lib/io.py", line 51, in seek_gzip_factory f.seek = new.instancemethod(seek, f) AttributeError: 'str' object has no attribute 'seek' >>> numpy.__version__ '1.3.0.dev6520' Nils From gael.varoquaux at normalesup.org Mon Mar 2 08:16:07 2009 From: gael.varoquaux at normalesup.org (Gael Varoquaux) Date: Mon, 2 Mar 2009 14:16:07 +0100 Subject: [Numpy-discussion] Call for testing: full blas/lapack builds of numpy on windows 64 bits In-Reply-To: <5b8d13220902281051n31332145m2f23556e48f261ea@mail.gmail.com> References: <49A82386.9030208@ar.media.kyoto-u.ac.jp> <5b8d13220902271133pe80530bj36200e42a1aeafd1@mail.gmail.com> <5b8d13220902281051n31332145m2f23556e48f261ea@mail.gmail.com> Message-ID: <20090302131607.GA19213@phare.normalesup.org> On Sun, Mar 01, 2009 at 03:51:31AM +0900, David Cournapeau wrote: > Great, thank you very much for those informations. It looks like we > will be able to provide a 64 bits numpy binary for 1.3.0. Kudos David. Your efforts are invaluable. Ga?l From watson.jim at gmail.com Mon Mar 2 09:02:33 2009 From: watson.jim at gmail.com (James Watson) Date: Mon, 2 Mar 2009 14:02:33 +0000 Subject: [Numpy-discussion] porting NumPy to Python 3 In-Reply-To: <4989DAD7.9060507@gmail.com> References: <4989DAD7.9060507@gmail.com> Message-ID: The following are very simple changes that allow the 2to3 program to run on numpy without warnings. Can someone check / commit? numpy/linalg/lapack_lite/make_lite.py: 144c144 < if 'BLAS' in filename --- > if 'BLAS' in filename: numpy/distutils/misc_util.py: 957c957,958 < map(data_dict[p].add,files) --- > for f in files: > data_dict[p].add(f) 983c984,985 < map(self.add_data_files, files) --- > for f in files: > self.add_data_files(f) numpy/distutils/command/build_src.py: 468c468,469 < map(self.mkpath, target_dirs) --- > for td in target_dirs: > self.mkpath(td) 635c636,637 < map(self.mkpath, target_dirs) --- > for td in target_dirs: > self.mkpath(td) numpy/f2py/crackfortran.py: 686c686 < raise 'appenddecl: Unknown variable definition key:', k --- > raise Exception('appenddecl: Unknown variable definition key: ' + k) 1545c1545 < raise 'postcrack: Expected block dictionary instead of ',block --- > raise Exception('postcrack: Expected block dictionary instead of ' + block) numpy/lib/function_base.py: 549c549 < raise 'Internal Shape Error' --- > raise Exception('Internal Shape Error') These changes are because 1. Python 3 no longer allows string exceptions (http://www.python.org/dev/peps/pep-0352). The recommended method is to use 'except Exception', 2. map has new behaviour, and the 2to3 tool recommends changing simple map calls to for loops, as returning the result of the map is wasteful. 
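To illustrate the second point, here's a snippet of my own (not part of the patch):

    # Python 2: map eagerly builds a list of None results that is thrown away
    map(self.mkpath, target_dirs)
    # Python 3: map is lazy, so the call above would not invoke mkpath at all;
    # the explicit loop is correct under both versions and allocates nothing extra
    for d in target_dirs:
        self.mkpath(d)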
Warnings generated by 2to3 on numpy revision 6521: RefactoringTool: Warnings/messages while refactoring: RefactoringTool: ### In file ./numpy/distutils/misc_util.py ### RefactoringTool: Line 957: You should use a for loop here RefactoringTool: Line 983: You should use a for loop here RefactoringTool: ### In file ./numpy/distutils/command/build_src.py ### RefactoringTool: Line 468: You should use a for loop here RefactoringTool: Line 635: You should use a for loop here RefactoringTool: ### In file ./numpy/f2py/crackfortran.py ### RefactoringTool: Line 686: could not convert: raise 'appenddecl: Unknown variable definition key:', k RefactoringTool: Python 3 does not support string exceptions RefactoringTool: Line 1545: could not convert: raise 'postcrack: Expected block dictionary instead of ',block RefactoringTool: Python 3 does not support string exceptions RefactoringTool: ### In file ./numpy/lib/function_base.py ### RefactoringTool: Line 549: could not convert: raise 'Internal Shape Error' RefactoringTool: Python 3 does not support string exceptions RefactoringTool: There was 1 error: RefactoringTool: Can't parse ./numpy/linalg/lapack_lite/make_lite.py: ParseError: bad input: type=4, value='\n', context=('', (144, 29)) From lists_ravi at lavabit.com Mon Mar 2 10:34:24 2009 From: lists_ravi at lavabit.com (Ravi) Date: Mon, 2 Mar 2009 10:34:24 -0500 Subject: [Numpy-discussion] Easy way to vectorize a loop? In-Reply-To: <3d375d730903012258q47faf4dbl92a859deb88aa91e@mail.gmail.com> References: <49A82386.9030208@ar.media.kyoto-u.ac.jp> <49AB81B3.6000409@stevesimmons.com> <3d375d730903012258q47faf4dbl92a859deb88aa91e@mail.gmail.com> Message-ID: <200903021034.28623.lists_ravi@lavabit.com> On Monday 02 March 2009 01:58:27 Robert Kern wrote: > > for i in range(len(vals)): > > ? ?flattened[idx[i]]+=vals[i] > > flattened[idx] = vals Assuming 'idx' and 'vals' are one-dimensional arrays, that should be flattened[ idx[:numpy.size(vals)] ] += vals or flattened[ idx ] += vals if 'vals' and 'idx' have the same size. Regards, Ravi From robert.kern at gmail.com Mon Mar 2 12:23:07 2009 From: robert.kern at gmail.com (Robert Kern) Date: Mon, 2 Mar 2009 11:23:07 -0600 Subject: [Numpy-discussion] Easy way to vectorize a loop? In-Reply-To: <200903021034.28623.lists_ravi@lavabit.com> References: <49A82386.9030208@ar.media.kyoto-u.ac.jp> <49AB81B3.6000409@stevesimmons.com> <3d375d730903012258q47faf4dbl92a859deb88aa91e@mail.gmail.com> <200903021034.28623.lists_ravi@lavabit.com> Message-ID: <3d375d730903020923hadc5000n8a780dbed210b7f5@mail.gmail.com> On Mon, Mar 2, 2009 at 09:34, Ravi wrote: > On Monday 02 March 2009 01:58:27 Robert Kern wrote: >> > for i in range(len(vals)): >> > ? ?flattened[idx[i]]+=vals[i] >> >> flattened[idx] = vals > > Assuming 'idx' and 'vals' are one-dimensional arrays, that should be > ?flattened[ idx[:numpy.size(vals)] ] += vals > or > ?flattened[ idx ] += vals > if 'vals' and 'idx' have the same size. Oops. I missed the +. Actually, neither of these will work when idx has repeated indices. Instead, use: flattened += np.bincount(idx, vals) -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." 
-- Umberto Eco From robert.kern at gmail.com Mon Mar 2 12:25:24 2009 From: robert.kern at gmail.com (Robert Kern) Date: Mon, 2 Mar 2009 11:25:24 -0600 Subject: [Numpy-discussion] Slicing/selection in multiple dimensions simultaneously In-Reply-To: References: <268febdf0709111511n3ca15d42o85d31831178d96a@mail.gmail.com> <46E71591.20802@gmail.com> <46E72116.8040408@enthought.com> <463e11f90902261900o748940b6yf8410abda82524cc@mail.gmail.com> <069E94BE-B877-47C8-A723-703A7E3620B9@cs.toronto.edu> <6CF9CBA8-B21A-44F2-BE77-218DBBC05648@cs.toronto.edu> <463e11f90902280927k42a01ae5j7c0ed87ece03dca0@mail.gmail.com> Message-ID: <3d375d730903020925u647d432em1b76693c6955be13@mail.gmail.com> On Mon, Mar 2, 2009 at 04:19, David Warde-Farley wrote: > On 28-Feb-09, at 12:27 PM, Jonathan Taylor wrote: > >> This does seem like the only way to write this nicely. ?Unfortunately, >> I think this may be wasteful memory wise (in contrast to what the >> obvious matlab code would do) as it constructs an array with the whole >> first index intact at first. > > True enough, though if I understand correctly, this is only a _view_ > onto the original array, and nothing is immediately copied. So it does > waste memory creating a view and then a view on the view, but I don't > think it's proportional to the size of the returned array. a[[2,3,6], ...][..., [3,2]] You're doing fancy indexing, so there are copies both times. -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From robert.kern at gmail.com Mon Mar 2 12:28:08 2009 From: robert.kern at gmail.com (Robert Kern) Date: Mon, 2 Mar 2009 11:28:08 -0600 Subject: [Numpy-discussion] intersect1d and setmember1d In-Reply-To: References: <90CBFFFE6273484B9579400AC950800502024765@ntsydexm01.pc.internal.macquarie.com> <243385.2089.qm@web94910.mail.in2.yahoo.com> <49AB9F43.4060804@ntc.zcu.cz> Message-ID: <3d375d730903020928q6e0f69ddldc2a81102e8cc840@mail.gmail.com> On Mon, Mar 2, 2009 at 03:39, Neil Crighton wrote: > Robert Cimrman ntc.zcu.cz> writes: > >> Hi Neil! >> >> I would like to add your function to arraysetops.py - is it ok? Just the >> name would be changed to setmember1d_nu, to follow the naming in the >> module (like intersect1d_nu). >> >> Thank you, >> r. >> > > That's fine! ?There's no licence attached, it's in the public domain. Do you mind if we just add you to the THANKS.txt file, and consider you as a "NumPy Developer" per the LICENSE.txt as having released that code under the numpy license? If we're dotting our i's and crossing our t's legally, that's a bit more straightforward (oddly enough). -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From robert.kern at gmail.com Mon Mar 2 12:30:55 2009 From: robert.kern at gmail.com (Robert Kern) Date: Mon, 2 Mar 2009 11:30:55 -0600 Subject: [Numpy-discussion] saving an array of strings In-Reply-To: References: Message-ID: <3d375d730903020930j6cce4f68i2b8b2b2bdd9b65f5@mail.gmail.com> On Mon, Mar 2, 2009 at 04:26, Timmie wrote: > Hello, > can numpy.savetxt save an array of strings? You need to use fmt= argument to specify the format string(s). The default is %10.5f, so it only works for floats. 
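For example, something along these lines should work for an array of strings (sketch with made-up data):

    import numpy as np
    a = np.array(['2009-03-02 10:00', '2009-03-02 11:00'])
    np.savetxt('times.txt', a, fmt='%s')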
-- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From robert.kern at gmail.com Mon Mar 2 12:33:57 2009 From: robert.kern at gmail.com (Robert Kern) Date: Mon, 2 Mar 2009 11:33:57 -0600 Subject: [Numpy-discussion] populating an array In-Reply-To: <49ABBE41.3020805@visualreservoir.com> References: <49ABBE41.3020805@visualreservoir.com> Message-ID: <3d375d730903020933q1529cf04ya4de3895b13d862f@mail.gmail.com> On Mon, Mar 2, 2009 at 05:08, Brennan Williams wrote: > Ok... I'm using Traits and numpy. > I have a 3D grid with directions I,J and K. > I have NI,NJ,NK cells in the I,J,K directions so I have NI*NJ*NK cells > overall. > I have data arrays with a value for each cell in the grid. > I'm going to store this as a 1D array, i.e. 1....ncells where > ncells=NI*NJ*NK rather than as a 3D array > Apart from lots of other data arrays that will be read in from external > files, I want to create I, J and K data arrays > where the 'I' array contains the I index for the cell, the 'J' array the > J index etc. I, J, K = numpy.mgrid[0:self.ni, 0:self.nj, 0:self.nk] self.iarray = I.ravel() self.jarray = J.ravel() self.karray = K.ravel() -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From dwf at cs.toronto.edu Mon Mar 2 14:10:36 2009 From: dwf at cs.toronto.edu (David Warde-Farley) Date: Mon, 2 Mar 2009 14:10:36 -0500 Subject: [Numpy-discussion] Slicing/selection in multiple dimensions simultaneously In-Reply-To: <3d375d730903020925u647d432em1b76693c6955be13@mail.gmail.com> References: <268febdf0709111511n3ca15d42o85d31831178d96a@mail.gmail.com> <46E71591.20802@gmail.com> <46E72116.8040408@enthought.com> <463e11f90902261900o748940b6yf8410abda82524cc@mail.gmail.com> <069E94BE-B877-47C8-A723-703A7E3620B9@cs.toronto.edu> <6CF9CBA8-B21A-44F2-BE77-218DBBC05648@cs.toronto.edu> <463e11f90902280927k42a01ae5j7c0ed87ece03dca0@mail.gmail.com> <3d375d730903020925u647d432em1b76693c6955be13@mail.gmail.com> Message-ID: <55D1F4AE-AFAB-4309-A059-AD4EACAF40D1@cs.toronto.edu> On 2-Mar-09, at 12:25 PM, Robert Kern wrote: > a[[2,3,6], ...][..., [3,2]] > > You're doing fancy indexing, so there are copies both times. D'oh! So I guess the only way to avoid the second copy is to do what Jon initially suggested, i.e. a[ix_([2,3,6],range(a.shape[1]),[3,2])] ? I suppose xrange would be better than arange() or range() as it wouldn't create and destroy the list all at once. D From stefan at sun.ac.za Mon Mar 2 15:08:22 2009 From: stefan at sun.ac.za (=?ISO-8859-1?Q?St=E9fan_van_der_Walt?=) Date: Mon, 2 Mar 2009 22:08:22 +0200 Subject: [Numpy-discussion] AttributeError: 'str' object has no attribute 'seek' In-Reply-To: References: Message-ID: <9457e7c80903021208r6af5bd4cof2d50a0ed2967786@mail.gmail.com> Nils, 2009/3/2 Nils Wagner : > I encountered a problem wrt loadtxt. > > ? File > "/data/home/nwagner/local/lib/python2.5/site-packages/numpy/lib/io.py", > line 384, in loadtxt > ? ? fh = seek_gzip_factory(fname) Would you mind trying latest SVN? 
Thanks Stéfan From robert.kern at gmail.com Mon Mar 2 15:10:29 2009 From: robert.kern at gmail.com (Robert Kern) Date: Mon, 2 Mar 2009 14:10:29 -0600 Subject: [Numpy-discussion] Slicing/selection in multiple dimensions simultaneously In-Reply-To: <55D1F4AE-AFAB-4309-A059-AD4EACAF40D1@cs.toronto.edu> References: <268febdf0709111511n3ca15d42o85d31831178d96a@mail.gmail.com> <46E72116.8040408@enthought.com> <463e11f90902261900o748940b6yf8410abda82524cc@mail.gmail.com> <069E94BE-B877-47C8-A723-703A7E3620B9@cs.toronto.edu> <6CF9CBA8-B21A-44F2-BE77-218DBBC05648@cs.toronto.edu> <463e11f90902280927k42a01ae5j7c0ed87ece03dca0@mail.gmail.com> <3d375d730903020925u647d432em1b76693c6955be13@mail.gmail.com> <55D1F4AE-AFAB-4309-A059-AD4EACAF40D1@cs.toronto.edu> Message-ID: <3d375d730903021210g724e82a5h6b3a6a61f2832d7a@mail.gmail.com> On Mon, Mar 2, 2009 at 13:10, David Warde-Farley wrote: > On 2-Mar-09, at 12:25 PM, Robert Kern wrote: > >> a[[2,3,6], ...][..., [3,2]] >> >> You're doing fancy indexing, so there are copies both times. > > D'oh! > > So I guess the only way to avoid the second copy is to do what Jon > initially suggested, i.e. a[ix_([2,3,6],range(a.shape[1]),[3,2])] ? > > I suppose xrange would be better than arange() or range() as it > wouldn't create and destroy the list all at once. I believe an array would be created from it, so arange() would be the "best" bet there. But really, the difference is so trivial. -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From simpson at math.toronto.edu Mon Mar 2 15:37:33 2009 From: simpson at math.toronto.edu (Gideon Simpson) Date: Mon, 2 Mar 2009 15:37:33 -0500 Subject: [Numpy-discussion] Floating point question Message-ID: <407DDC51-3338-4333-AA5E-32BC293D0403@math.toronto.edu> I recently discovered that for 8 byte floating point numbers, my fortran compilers (gfortran 4.2 and ifort 11.0) on an OS X core 2 duo machine believe the smallest number is 2.220507...E-308. I presume that my C compilers have similar results. I then discovered that the smallest floating point number in python 2.5 is 4.9065...E-324. I have been using numpy to generate data, saving it with savetxt, and then reading it in as ASCII into my fortran code. Recently, it crapped out on something because it didn't like reading in a number that small, though it is apparently perfectly acceptable to python. My two questions are: 1. What is the best way to handle this? Is it just to add a filter of the form u = u * ( np.abs(u) > 2.3e-308 ) 2. What gives? What's the origin of this (perceived) inconsistency in floating points across languages within the same platform? I recognize that this isn't specific to Scipy/Numpy, but thought someone here might have the answer. -gideon From michael.s.gilbert at gmail.com Mon Mar 2 15:45:28 2009 From: michael.s.gilbert at gmail.com (Michael S. Gilbert) Date: Mon, 2 Mar 2009 15:45:28 -0500 Subject: [Numpy-discussion] loadtxt slow In-Reply-To: <20090301142954.46d15837.michael.s.gilbert@gmail.com> References: <39294C61-1CFD-41B7-BA86-C559E22B2744@math.toronto.edu> <20090301142954.46d15837.michael.s.gilbert@gmail.com> Message-ID: <20090302154528.09e1c0f5.michael.s.gilbert@gmail.com> On Sun, 1 Mar 2009 14:29:54 -0500, Michael Gilbert wrote: > i will send the current version to the list tomorrow when i have access > to the system that it is on. attached is my current version of loadtxt.
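in case the attachment gets scrubbed by the list software, the core idea is roughly this (a stripped-down sketch with a made-up name and none of the option handling; it assumes every data row has the same number of columns):

    import numpy as np

    def loadtxt_lowmem(fname, dtype=float, comments='#', delimiter=None, skiprows=0):
        # pass 1: count the data rows so the output can be preallocated
        nrows = 0
        ncols = 0
        fh = open(fname)
        for i, line in enumerate(fh):
            if i < skiprows:
                continue
            line = line.split(comments)[0].strip()
            if line:
                ncols = len(line.split(delimiter))
                nrows += 1
        fh.close()
        # pass 2: fill a preallocated array instead of growing a python list
        out = np.empty((nrows, ncols), dtype=dtype)
        fh = open(fname)
        row = 0
        for i, line in enumerate(fh):
            if i < skiprows:
                continue
            line = line.split(comments)[0].strip()
            if line:
                out[row] = [dtype(v) for v in line.split(delimiter)]
                row += 1
        fh.close()
        return out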
like i said, it's slower for small data sets (because it reads through the whole data file twice). the first loop is used to figure out how much memory to allocate, and i can optimize this by intelligently seeking through the file. but like i said, i haven't had the time to implement it. all of the options should work, except for "converters" (i have never used "converters" and i couldn't figure out exactly what it does based on a quick read-through of the docs). best wishes, mike -------------- next part -------------- A non-text attachment was scrubbed... Name: myloadtxt Type: application/octet-stream Size: 1658 bytes Desc: not available URL: From dsdale24 at gmail.com Mon Mar 2 15:45:30 2009 From: dsdale24 at gmail.com (Darren Dale) Date: Mon, 2 Mar 2009 15:45:30 -0500 Subject: [Numpy-discussion] RFR: #1008 Loss of precision in (complex) arcsinh & arctanh In-Reply-To: <5b8d13220902280734i3baaeac3h1473535223b43@mail.gmail.com> References: <5b8d13220902280734i3baaeac3h1473535223b43@mail.gmail.com> Message-ID: On Sat, Feb 28, 2009 at 10:34 AM, David Cournapeau wrote: > On Sat, Feb 28, 2009 at 11:08 PM, Pauli Virtanen wrote: > > > > http://scipy.org/scipy/numpy/ticket/1008 > > > > http://codereview.appspot.com/22054 > > I added a few comments - the only significant one concerns types for > unit tests: I think it would be nice to test for float and long double > as well. > I saw some related test failures after updating and reinstalling my checkout. Deleting build and site-packages/numpy, and then reinstalling was all that was required for the tests to pass. -------------- next part -------------- An HTML attachment was scrubbed... URL: From robert.kern at gmail.com Mon Mar 2 15:48:09 2009 From: robert.kern at gmail.com (Robert Kern) Date: Mon, 2 Mar 2009 14:48:09 -0600 Subject: [Numpy-discussion] Floating point question In-Reply-To: <407DDC51-3338-4333-AA5E-32BC293D0403@math.toronto.edu> References: <407DDC51-3338-4333-AA5E-32BC293D0403@math.toronto.edu> Message-ID: <3d375d730903021248g331c6a49mf5c5843db9300a9@mail.gmail.com> On Mon, Mar 2, 2009 at 14:37, Gideon Simpson wrote: > I recently discovered that for 8 byte floating point numbers, my > fortran compilers (gfortran 4.2 and ifort 11.0) on an OS X core 2 duo > machine believe the ?smallest number 2.220507...E-308. ?I presume that > my C compilers have similar results. > > I then discovered that the smallest floating point number in python > 2.5 is 4.9065...E-324. ?I have been using numpy to generate data, > saving it with savetxt, and then reading it in as ASCII into my > fortran code. ?Recently, it crapped out on something because it didn't > like reading it a number that small, though it is apparently perfectly > acceptable to python. > > My two questions are: > > 1. ?What is the best way to handle this? ?Is it just to add a filter > of the form > > ? ? ? ?u = u * ( np.abs(u) > 2.3 e-308) You can get the precise value from finfo: In [2]: from numpy import finfo In [3]: f = finfo(float64) In [4]: f.tiny Out[4]: array(2.2250738585072014e-308) I'd probably do something like this: u[abs(u) < f.tiny] = 0.0 > 2. ?What gives? ?What's the origin of this (perceived) inconsistency > in floating points across languages within the same platform? 4.9065...E-324 is a denormalized float. http://en.wikipedia.org/wiki/Denormal_number -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." 
-- Umberto Eco From pav at iki.fi Mon Mar 2 15:50:37 2009 From: pav at iki.fi (Pauli Virtanen) Date: Mon, 2 Mar 2009 20:50:37 +0000 (UTC) Subject: [Numpy-discussion] RFR: #1008 Loss of precision in (complex) arcsinh & arctanh References: <5b8d13220902280734i3baaeac3h1473535223b43@mail.gmail.com> Message-ID: Mon, 02 Mar 2009 15:45:30 -0500, Darren Dale wrote: [clip] > I saw some related test failures after updating and reinstalling my > checkout. Deleting build and site-packages/numpy, and then reinstalling > was all that was required for the tests to pass. The distutils build system apparently doesn't track dependencies to the *.inc.src files. -- Pauli Virtanen From stefan at sun.ac.za Mon Mar 2 15:53:13 2009 From: stefan at sun.ac.za (=?ISO-8859-1?Q?St=E9fan_van_der_Walt?=) Date: Mon, 2 Mar 2009 22:53:13 +0200 Subject: [Numpy-discussion] porting NumPy to Python 3 In-Reply-To: <6a17e9ee0902100434w344e8c87p963e5c029ff2b5d3@mail.gmail.com> References: <4989DAD7.9060507@gmail.com> <6a17e9ee0902100434w344e8c87p963e5c029ff2b5d3@mail.gmail.com> Message-ID: <9457e7c80903021253r73923dbdn2c033c3341b3ec46@mail.gmail.com> 2009/2/10 Scott Sinclair : >> 2009/2/10 James Watson : >> I want to make sure diffs are against latest code, but keep getting >> this svn error: >> svn update >> svn: OPTIONS of 'http://scipy.org/svn/numpy/trunk': Could not read >> status line: Connection reset by peer (http://scipy.org) > > There is some problem at the moment. > > This seems to be quite common recently, during the very early morning > (USA time zones). Note that there is also a git repository available. Clone using git clone --origin svn git://github.com/pv/numpy-svn.git You can always update to the latest version using `git fetch`. Cheers St?fan From michael.s.gilbert at gmail.com Mon Mar 2 16:00:16 2009 From: michael.s.gilbert at gmail.com (Michael S. Gilbert) Date: Mon, 2 Mar 2009 16:00:16 -0500 Subject: [Numpy-discussion] Floating point question In-Reply-To: <407DDC51-3338-4333-AA5E-32BC293D0403@math.toronto.edu> References: <407DDC51-3338-4333-AA5E-32BC293D0403@math.toronto.edu> Message-ID: <20090302160016.90172699.michael.s.gilbert@gmail.com> On Mon, 2 Mar 2009 15:37:33 -0500, Gideon Simpson wrote: > My two questions are: > > 1. What is the best way to handle this? Is it just to add a filter > of the form > > u = u * ( np.abs(u) > 2.3 e-308) > > 2. What gives? What's the origin of this (perceived) inconsistency > in floating points across languages within the same platform? how are you calculating fmin? numpy has a built-in function that will tell you this information: >>> numpy.finfo( numpy.float ).min -1.7976931348623157e+308 hopefully this helps shed some light on your questions. regards, mike From simpson at math.toronto.edu Mon Mar 2 15:59:42 2009 From: simpson at math.toronto.edu (Gideon Simpson) Date: Mon, 2 Mar 2009 15:59:42 -0500 Subject: [Numpy-discussion] Floating point question In-Reply-To: <20090302160016.90172699.michael.s.gilbert@gmail.com> References: <407DDC51-3338-4333-AA5E-32BC293D0403@math.toronto.edu> <20090302160016.90172699.michael.s.gilbert@gmail.com> Message-ID: <119AFF41-AA94-498F-9F1B-5AD42B5FC3BC@math.toronto.edu> On Mar 2, 2009, at 4:00 PM, Michael S. Gilbert wrote: > > how are you calculating fmin? numpy has a built-in function that > will tell you this information: > >>>> numpy.finfo( numpy.float ).min > -1.7976931348623157e+308 > > hopefully this helps shed some light on your questions. 
> > regards, > mike When I first discovered this, I was computing: numpy.exp(-x**2) If you try x = 26.7, you'll get 2.4877503498797906e-310 I then confirmed this by dividing 1. by 2. until python decided the answer was 0.0 -gideon From robert.kern at gmail.com Mon Mar 2 15:59:43 2009 From: robert.kern at gmail.com (Robert Kern) Date: Mon, 2 Mar 2009 14:59:43 -0600 Subject: [Numpy-discussion] Floating point question In-Reply-To: <20090302160016.90172699.michael.s.gilbert@gmail.com> References: <407DDC51-3338-4333-AA5E-32BC293D0403@math.toronto.edu> <20090302160016.90172699.michael.s.gilbert@gmail.com> Message-ID: <3d375d730903021259r53595448l5463bcb0e313abf7@mail.gmail.com> On Mon, Mar 2, 2009 at 15:00, Michael S. Gilbert wrote: > On Mon, 2 Mar 2009 15:37:33 -0500, Gideon Simpson wrote: >> My two questions are: >> >> 1. ?What is the best way to handle this? ?Is it just to add a filter >> of the form >> >> ? ? ? u = u * ( np.abs(u) > 2.3 e-308) >> >> 2. ?What gives? ?What's the origin of this (perceived) inconsistency >> in floating points across languages within the same platform? > > how are you calculating fmin? ?numpy has a built-in function that > will tell you this information: > >>>> numpy.finfo( numpy.float ).min > -1.7976931348623157e+308 > > hopefully this helps shed some light on your questions. That's the most-negative representable float, but it is not the smallest-positive (normalized) float (.tiny), which is what he needs to work with. -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From stefan at sun.ac.za Mon Mar 2 16:12:48 2009 From: stefan at sun.ac.za (=?ISO-8859-1?Q?St=E9fan_van_der_Walt?=) Date: Mon, 2 Mar 2009 23:12:48 +0200 Subject: [Numpy-discussion] porting NumPy to Python 3 In-Reply-To: References: <4989DAD7.9060507@gmail.com> Message-ID: <9457e7c80903021312x372b5ff5i7d361ce115367f65@mail.gmail.com> Thanks, James. Applied in r6535. 2009/3/2 James Watson : > The following are very simple changes that allow the 2to3 program to > run on numpy without warnings. ?Can someone check / commit? From peridot.faceted at gmail.com Mon Mar 2 16:18:59 2009 From: peridot.faceted at gmail.com (Anne Archibald) Date: Mon, 2 Mar 2009 16:18:59 -0500 Subject: [Numpy-discussion] Floating point question In-Reply-To: <407DDC51-3338-4333-AA5E-32BC293D0403@math.toronto.edu> References: <407DDC51-3338-4333-AA5E-32BC293D0403@math.toronto.edu> Message-ID: On 02/03/2009, Gideon Simpson wrote: > I recently discovered that for 8 byte floating point numbers, my > fortran compilers (gfortran 4.2 and ifort 11.0) on an OS X core 2 duo > machine believe the smallest number 2.220507...E-308. I presume that > my C compilers have similar results. > > I then discovered that the smallest floating point number in python > 2.5 is 4.9065...E-324. I have been using numpy to generate data, > saving it with savetxt, and then reading it in as ASCII into my > fortran code. Recently, it crapped out on something because it didn't > like reading it a number that small, though it is apparently perfectly > acceptable to python. > > My two questions are: > > 1. What is the best way to handle this? Is it just to add a filter > of the form > > u = u * ( np.abs(u) > 2.3 e-308) > > 2. What gives? What's the origin of this (perceived) inconsistency > in floating points across languages within the same platform? 
> > I recognize that this isn't specific to Scipy/Numpy, but thought > someone here might have the answer.

What's happening is that numbers like 1e-310 are represented by "denormals". If we use base 10 to explain, suppose that floating point numbers could only have five digits of mantissa and two digits of exponent:

1.3000 e 00
2.1000 e-35
1.0000 e-99

Now what to do if you divide that last number in half? You can write it as:

0.5000 e-99

But this is a bit of an anomaly: unlike all normal floating-point numbers, it has a leading zero, and there are only four digits of information in the mantissa. In binary it's even more of an anomaly, since in binary you can take advantage of the fact that all normal mantissas start with one by not bothering to store the one. But it turns out that implementing denormals is useful to provide graceful degradation as numbers underflow, so it's in IEEE. It appears that some FORTRAN implementations cannot handle denormals, which is giving you trouble. It's usually fairly safe to simply replace all denormals by zero. Anne

From stefan at sun.ac.za Mon Mar 2 17:17:11 2009 From: stefan at sun.ac.za (=?ISO-8859-1?Q?St=E9fan_van_der_Walt?=) Date: Tue, 3 Mar 2009 00:17:11 +0200 Subject: [Numpy-discussion] Hosting infrastructure upgrade tomorrow Message-ID: <9457e7c80903021417p7d35d01dh46769ba5c93fb4cf@mail.gmail.com>

Hi all, Tomorrow afternoon at 14:00 UTC, the SciPy SVN and Trac services will be migrated to a new machine. Please be advised that, for a period of two hours, access to these and other services hosted on scipy.org may be unavailable. Regards Stéfan

From bgerke at slac.stanford.edu Mon Mar 2 20:20:53 2009 From: bgerke at slac.stanford.edu (Brian Gerke) Date: Mon, 2 Mar 2009 17:20:53 -0800 Subject: [Numpy-discussion] problem with assigning to recarrays In-Reply-To: <3d375d730902272258k32be62a7j8120064ad1017de8@mail.gmail.com> References: <3d375d730902271630w5f924425t9a88939273c32cce@mail.gmail.com> <2E4F7165-0A1D-47EB-9AF5-64093D99056B@slac.stanford.edu> <3d375d730902272258k32be62a7j8120064ad1017de8@mail.gmail.com> Message-ID: <51AC2892-5916-4948-B4DE-DFB17B7B3E74@slac.stanford.edu>

Many thanks for your willingness to help out with this. Not to belabor the point, but I notice that the rules you lay out below don't quite explain why the following syntax works as I originally expected: r[0].field1 = 1 I'm guessing this is because r[0].field1 is already an existing scalar object, with an address in memory, so it can be changed in place, whereas this syntax would need to create an entirely new array object (not even a copy, exactly): r[where(r.field1 == 0)].field1 No need to respond if this understanding is correct. I just wanted to write it down in case someone else is searching the archive with this question in the future. BFG

On Feb 27, 2009, at 10:58 PM, Robert Kern wrote: > On Fri, Feb 27, 2009 at 19:06, Brian Gerke wrote: >> >> On Feb 27, 2009, at 4:30 PM, Robert Kern wrote: >>>> >>> r[where(r.field1 == 1.)] makes a copy. There is no way for us to >>> construct a view onto the original memory for this circumstance given >>> numpy's memory model. >> >> Many thanks for the quick reply. I assume that this is true only for >> record arrays, not for ordinary arrays? Certainly I can make an >> assignment in this way with a normal array. > > Well, you are doing two very different things. Let's back up a bit. > > Python gives us two hooks to modify an object in-place with an > assignment: __setitem__ and __setattr__. > > x[<index>] = y ==> x.__setitem__(<index>, y) > x.<attr> = y ==> x.__setattr__('<attr>', y) > > Now, we don't need to restrict ourselves to just variables for 'x'; we > can have any expression that evaluates to an object. > > (<expression>)[<index>] = y ==> (<expression>).__setitem__(<index>, y) > (<expression>).<attr> = y ==> (<expression>).__setattr__('<attr>', y) > > The key here is that the (<expression>) on the LHS is evaluated just like > any expression appearing anywhere else in your code. The only special > in-place behavior is restricted to the *outermost* [<index>] or .<attr>. > > So when you do this: > > r[where(r.field1 == 1.)].field2 = 1.0 > > it translates to something like this: > > tmp = r.__getitem__(where(r.field1 == 1.0)) # Makes a copy! > tmp.__setattr__('field2', 1.0) > > Note that the first line is a __getitem__, not a __setitem__ which can > modify r in-place. > >> Also, if it is truly impossible to change this behavior, or to have it >> raise an error--then are there any best-practice suggestions for how >> to remember and avoid running into this non-obvious behavior? If one >> thinks of record arrays as inheriting from numpy arrays, then this >> problem is certainly unexpected. > > It's a natural consequence of the preceding rules. This is a Python > thing, not a difference between numpy arrays and record arrays. Just > keep those rules in mind. > >> Also, I've just found that the following syntax does do what is >> expected: >> >> (r.field2)[where(field1 == 1.)] = 1. >> >> It is at least a little aesthetically displeasing that the syntax >> works one way but not the other. Perhaps my best bet is to stick with >> this syntax and forget that the other exists? A less-than-satisfying >> solution, but workable. > > If you drop the extraneous bits, it becomes a fair bit more readable: > > r.field2[r.field1 == 1] = 1 > > This is idiomatic; you'll see it all over the place where record > arrays are used. The reason that this form modifies r in-place is > because r.__getattr__('field2') is able to return a view rather than a copy. > > -- > Robert Kern > > "I have come to believe that the whole world is an enigma, a harmless > enigma that is made terrible by our own mad attempt to interpret it as > though it had an underlying truth." > -- Umberto Eco > _______________________________________________ > Numpy-discussion mailing list > Numpy-discussion at scipy.org > http://projects.scipy.org/mailman/listinfo/numpy-discussion
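(An aside for readers skimming the archive: the copy-versus-view distinction Robert lays out above is easy to see directly. A quick, hypothetical demonstration; the field names follow Brian's example but the data are made up:)

import numpy as np

# a small record array with two float fields
r = np.rec.fromarrays([np.array([0., 1., 1.]), np.zeros(3)],
                      names='field1,field2')

r[r.field1 == 1].field2 = 99.0   # fancy index first: assigns into a throwaway copy
print r.field2                   # [ 0.  0.  0.]  -- r is unchanged

r.field2[r.field1 == 1] = 99.0   # attribute first: r.field2 is a view, so this writes through
print r.field2                   # [  0.  99.  99.]

From robert.kern at gmail.com Mon Mar 2 20:35:45 2009 From: robert.kern at gmail.com (Robert Kern) Date: Mon, 2 Mar 2009 19:35:45 -0600 Subject: [Numpy-discussion] problem with assigning to recarrays In-Reply-To: <51AC2892-5916-4948-B4DE-DFB17B7B3E74@slac.stanford.edu> References: <3d375d730902271630w5f924425t9a88939273c32cce@mail.gmail.com> <2E4F7165-0A1D-47EB-9AF5-64093D99056B@slac.stanford.edu> <3d375d730902272258k32be62a7j8120064ad1017de8@mail.gmail.com> <51AC2892-5916-4948-B4DE-DFB17B7B3E74@slac.stanford.edu> Message-ID: <3d375d730903021735g37aad23ds2adde81f865e8b68@mail.gmail.com>

On Mon, Mar 2, 2009 at 19:20, Brian Gerke wrote: > > Many thanks for your willingness to help out with this. Not to > belabor the point, but I notice that the rules you lay out below don't > quite explain why the following syntax works as I originally expected: > > r[0].field1 = 1 > > I'm guessing this is because r[0].field1 is already an existing scalar > object, with an address in memory, so it can be changed in place, > whereas this syntax would need to create an entirely new array object > (not even a copy, exactly): > > r[where(r.field1 == 0)].field1 > > No need to respond if this understanding is correct.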
I just wanted to > write it down in case someone else is searching the archive with this > question in the future. Close. It's not "already existing", but a record scalar that gets created by a[0] is a view onto the corresponding element in the original array and is mutable. -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From nwagner at iam.uni-stuttgart.de Tue Mar 3 02:35:52 2009 From: nwagner at iam.uni-stuttgart.de (Nils Wagner) Date: Tue, 03 Mar 2009 08:35:52 +0100 Subject: [Numpy-discussion] AttributeError: 'str' object has no attribute 'seek' In-Reply-To: <9457e7c80903021208r6af5bd4cof2d50a0ed2967786@mail.gmail.com> References: <9457e7c80903021208r6af5bd4cof2d50a0ed2967786@mail.gmail.com> Message-ID: On Mon, 2 Mar 2009 22:08:22 +0200 St?fan van der Walt wrote: > Nils, > > 2009/3/2 Nils Wagner : >> I encountered a problem wrt loadtxt. >> >> File >> "/data/home/nwagner/local/lib/python2.5/site-packages/numpy/lib/io.py", >> line 384, in loadtxt >> fh = seek_gzip_factory(fname) > > Would you mind trying latest SVN? > > Thanks > St?fan Hi St?fan, Works for me. Thank you very much ! BTW, is it possible to use more than one character to indicate the start of a comment ? I would like to use both '!' and '$'. Cheers, Nils From stefan at sun.ac.za Tue Mar 3 04:11:19 2009 From: stefan at sun.ac.za (=?ISO-8859-1?Q?St=E9fan_van_der_Walt?=) Date: Tue, 3 Mar 2009 11:11:19 +0200 Subject: [Numpy-discussion] Slicing/selection in multiple dimensions simultaneously In-Reply-To: <3d375d730902271238p7fe29192hb953df2c5f87c245@mail.gmail.com> References: <268febdf0709111511n3ca15d42o85d31831178d96a@mail.gmail.com> <46E71591.20802@gmail.com> <46E72116.8040408@enthought.com> <463e11f90902261900o748940b6yf8410abda82524cc@mail.gmail.com> <3d375d730902271238p7fe29192hb953df2c5f87c245@mail.gmail.com> Message-ID: <9457e7c80903030111y590b4e34g2f7d1c42117acbe8@mail.gmail.com> Hi Robert 2009/2/27 Robert Kern : >> a[ix_([2,3,6],range(a.shape[1]),[3,2])] >> >> If anyone knows a better way? > > One could probably make ix_() take slice objects, too, to generate the > correct arange() in the appropriate place. I was wondering how one would implement this, since the ix_ function has no knowledge of the dimensions of "a". The best I could do was to allow a[ix_[[2,3,6], :3, [3, 2]] to work (see attached patch). Cheers St?fan -------------- next part -------------- A non-text attachment was scrubbed... Name: 0001-Allow-fully-specified-ranges-in-ix_.patch Type: text/x-diff Size: 3517 bytes Desc: not available URL: From david at ar.media.kyoto-u.ac.jp Tue Mar 3 04:58:15 2009 From: david at ar.media.kyoto-u.ac.jp (David Cournapeau) Date: Tue, 03 Mar 2009 18:58:15 +0900 Subject: [Numpy-discussion] RFR: fix signed/unsigned comparison warnings in numpy Message-ID: <49ACFF37.608@ar.media.kyoto-u.ac.jp> Hi, A small patch to fix some last warnings (numpy almost builds warning free with -W -Wall -Wextra now). I am not sure about those (signed/unsigned casts are potentially dangerous), so I did not apply them directly. 
It did help me discovering a bug or two in numpy (fixed in the trunk): http://codereview.appspot.com/24043/ http://github.com/cournape/numpy/tree/unsigned_warn cheers, David From david at ar.media.kyoto-u.ac.jp Tue Mar 3 05:26:35 2009 From: david at ar.media.kyoto-u.ac.jp (David Cournapeau) Date: Tue, 03 Mar 2009 19:26:35 +0900 Subject: [Numpy-discussion] RFR: fix signed/unsigned comparison warnings in numpy In-Reply-To: <49ACFF37.608@ar.media.kyoto-u.ac.jp> References: <49ACFF37.608@ar.media.kyoto-u.ac.jp> Message-ID: <49AD05DB.4050703@ar.media.kyoto-u.ac.jp> David Cournapeau wrote: > Hi, > > A small patch to fix some last warnings (numpy almost builds warning > free with -W -Wall -Wextra now). I am not sure about those > (signed/unsigned casts are potentially dangerous), so I did not apply > them directly. It did help me discovering a bug or two in numpy (fixed > in the trunk): > > http://codereview.appspot.com/24043/ > I managed to screw up the link, here is the real one: http://codereview.appspot.com/24043/show thanks Stefan !, David From watson.jim at gmail.com Tue Mar 3 07:10:55 2009 From: watson.jim at gmail.com (James Watson) Date: Tue, 3 Mar 2009 12:10:55 +0000 Subject: [Numpy-discussion] numpy and python 2.6 on windows: please test Message-ID: > I would appreciate if people would test building numpy > (trunk); in particular since some issues are moderately complex and > system dependent On Vista with VS2008, numpy rev r6535, I get the following behaviour: 1. Building and installing numpy on python 2.6.1 compiled in debug mode succeeds, but 'import numpy' returns 'ImportError: No module named multiarray'. 2. Building and installing numpy using python compiled in release mode succeeds, 'import numpy' succeeds, but 'numpy.test()' crashes the interpreter. When running in a debug version of python, 'from numpy.core import multiarray' raises an ImportError, but this does not happen with the release version, where multiarray functions seem to work. Could this be related to the PyImport_Import and PyImport_ImportModule changes made in 2.6 (bottom of http://docs.python.org/whatsnew/2.6.html)? James. From david at ar.media.kyoto-u.ac.jp Tue Mar 3 07:09:20 2009 From: david at ar.media.kyoto-u.ac.jp (David Cournapeau) Date: Tue, 03 Mar 2009 21:09:20 +0900 Subject: [Numpy-discussion] numpy and python 2.6 on windows: please test In-Reply-To: References: Message-ID: <49AD1DF0.5080707@ar.media.kyoto-u.ac.jp> Hi James, James Watson wrote: >> I would appreciate if people would test building numpy >> (trunk); in particular since some issues are moderately complex and >> system dependent >> > > On Vista with VS2008, numpy rev r6535, I get the following behaviour: > 1. Building and installing numpy on python 2.6.1 compiled in debug > mode succeeds, but 'import numpy' returns 'ImportError: No module > named multiarray'. > Yes, debug mode does not work well if at all. I am not even sure whether this is a regression or not - I am very unfamiliar with how debugging works on the python + windows + VS combination. If you have some insight/recommendations, I would be glad to fix this (e.g. how is this supposed to work ?) - one problem is that building full python on windows with MS compilers is a PITA - or are there any pre-built debugged versions somewhere ? > 2. Building and installing numpy using python compiled in release mode > succeeds, 'import numpy' succeeds, but 'numpy.test()' crashes the > interpreter. > Yes, this has actually nothing to do with python 2.6. 
I noticed the crash, thought naively it would be easy to fix, but it is actually quite nasty. I've reverted the change which introduced the crash for the time being (r6541). > Could this be related to the PyImport_Import and PyImport_ImportModule > changes made in 2.6 (bottom of > http://docs.python.org/whatsnew/2.6.html)? > I don't know - I would like to say no, but I am not quite sure, specially since the exact rules for dynamic loading of code are still not clear to me on windows, David From matthieu.brucher at gmail.com Tue Mar 3 07:46:12 2009 From: matthieu.brucher at gmail.com (Matthieu Brucher) Date: Tue, 3 Mar 2009 13:46:12 +0100 Subject: [Numpy-discussion] numpy and python 2.6 on windows: please test In-Reply-To: References: Message-ID: 2009/3/3 James Watson : >> I would appreciate if people would test building numpy >> (trunk); in particular since some issues are moderately complex and >> system dependent > > On Vista with VS2008, numpy rev r6535, I get the following behaviour: > 1. Building and installing numpy on python 2.6.1 compiled in debug > mode succeeds, but 'import numpy' returns 'ImportError: No module > named multiarray'. > 2. Building and installing numpy using python compiled in release mode > succeeds, 'import numpy' succeeds, but 'numpy.test()' crashes the > interpreter. > > When running in a debug version of python, 'from numpy.core import > multiarray' raises an ImportError, but this does not happen with the > release version, where multiarray functions seem to work. > > Could this be related to the PyImport_Import and PyImport_ImportModule > changes made in 2.6 (bottom of > http://docs.python.org/whatsnew/2.6.html)? Windows debug extensions have a suffix, d. If you don't install the debug version of numpy, you can't use it with debug Python. Matthieu -- Information System Engineer, Ph.D. Website: http://matthieu-brucher.developpez.com/ Blogs: http://matt.eifelle.com and http://blog.developpez.com/?blog=92 LinkedIn: http://www.linkedin.com/in/matthieubrucher From stefan at sun.ac.za Tue Mar 3 08:06:57 2009 From: stefan at sun.ac.za (=?ISO-8859-1?Q?St=E9fan_van_der_Walt?=) Date: Tue, 3 Mar 2009 15:06:57 +0200 Subject: [Numpy-discussion] RFR: fix signed/unsigned comparison warnings in numpy In-Reply-To: <49AD05DB.4050703@ar.media.kyoto-u.ac.jp> References: <49ACFF37.608@ar.media.kyoto-u.ac.jp> <49AD05DB.4050703@ar.media.kyoto-u.ac.jp> Message-ID: <9457e7c80903030506h67276f6bh6f6a1fdbf09f344@mail.gmail.com> 2009/3/3 David Cournapeau : > David Cournapeau wrote: >> Hi, >> >> ? ? A small patch to fix some last warnings (numpy almost builds warning >> free with -W -Wall -Wextra now). I am not sure about those >> (signed/unsigned casts are potentially dangerous), so I did not apply >> them directly. It did help me discovering a bug or two in numpy (fixed >> in the trunk): >> >> http://codereview.appspot.com/24043/ >> > > I managed to screw up the link, here is the real one: > > http://codereview.appspot.com/24043/show Looks good! Thanks, David, for explaining to me why the type cast in 977 #if @unsigntyp@ 978 if(LONG_MIN < (@ctype@)x && (@ctype@)x < LONG_MAX) 979 return PyInt_FromLong(x); 980 #else has somewhat tricky semantics. 
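(For readers who have not hit this before, a small NumPy illustration of the wraparound behaviour that makes such signed/unsigned casts dangerous; the values below are examples only, not taken from David's patch:)

import numpy as np

x = np.uint64(2**63 + 5)   # fits in uint64, but is larger than LONG_MAX on a 64-bit platform
y = x.astype(np.int64)     # a C-style cast: the value wraps around to a negative number
print x, y                 # 9223372036854775813 -9223372036854775803
# a range check written against the *cast* value would wrongly conclude
# that x fits in a signed long, which is why the order of cast and
# comparison in the snippet above needs care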
A person is never too old to learn some more C :-) Cheers Stéfan

From watson.jim at gmail.com Tue Mar 3 08:20:29 2009 From: watson.jim at gmail.com (James Watson) Date: Tue, 3 Mar 2009 13:20:29 +0000 Subject: [Numpy-discussion] numpy and python 2.6 on windows: please test In-Reply-To: References: Message-ID:

> Windows debug extensions have a suffix, d. If you don't install the > debug version of numpy, you can't use it with debug Python.

Ah, thank you. Sorry for the newb question: how do you install the debug version (for msvc)?

From david at ar.media.kyoto-u.ac.jp Tue Mar 3 09:01:58 2009 From: david at ar.media.kyoto-u.ac.jp (David Cournapeau) Date: Tue, 03 Mar 2009 23:01:58 +0900 Subject: [Numpy-discussion] SVN and TRAC migrations starting NOW Message-ID: <49AD3856.3090407@ar.media.kyoto-u.ac.jp>

Dear Numpy and Scipy developers, We are now starting the svn and trac migrations to new servers:

- The svn repositories of both numpy and scipy are now unavailable, and should be available around 16:00 UTC (3rd March 2009). You will then be able to update/commit again.
- Trac for numpy and scipy are also unavailable.

We will send an email when everything is back up, The Scipy website administrators

From jmiller at stsci.edu Tue Mar 3 11:20:19 2009 From: jmiller at stsci.edu (Todd Miller) Date: Tue, 03 Mar 2009 11:20:19 -0500 Subject: [Numpy-discussion] 64-bit numpy questions? Message-ID: <49AD58C3.20907@stsci.edu>

Hi, I've been looking at a 64-bit numpy problem we were having on Solaris:

>>> a=numpy.zeros(0x180000000,dtype='b1')
>>> a.data
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
ValueError: size must be zero or positive

A working fix seemed to be this:

Index: arrayobject.c
===================================================================
--- arrayobject.c (revision 6530)
+++ arrayobject.c (working copy)
@@ -6774,7 +6774,7 @@
 static PyObject *
 array_data_get(PyArrayObject *self)
 {
-    intp nbytes;
+    Py_ssize_t nbytes;
     if (!(PyArray_ISONESEGMENT(self))) {
         PyErr_SetString(PyExc_AttributeError, "cannot get single-"\
                         "segment buffer for discontiguous array");
@@ -6782,10 +6782,10 @@
     }
     nbytes = PyArray_NBYTES(self);
     if PyArray_ISWRITEABLE(self) {
-        return PyBuffer_FromReadWriteObject((PyObject *)self, 0, (int) nbytes);
+        return PyBuffer_FromReadWriteObject((PyObject *)self, 0, (Py_ssize_t) nbytes);
     }
     else {
-        return PyBuffer_FromObject((PyObject *)self, 0, (int) nbytes);
+        return PyBuffer_FromObject((PyObject *)self, 0, (Py_ssize_t) nbytes);
     }
 }

This fix could be simpler but still illustrates the typical problem: use of (or cast to) int rather than something "pointer sized". I can see that a lot of effort has gone into making numpy 64-bit enabled, but I also see a number of uses of int which look like problems on LP64 platforms. Is anyone using numpy in 64-bit environments on a day-to-day basis? Are you using very large arrays, i.e. over 2G in size? Cheers, Todd

From gael.varoquaux at normalesup.org Tue Mar 3 11:22:24 2009 From: gael.varoquaux at normalesup.org (Gael Varoquaux) Date: Tue, 3 Mar 2009 17:22:24 +0100 Subject: [Numpy-discussion] 64-bit numpy questions? In-Reply-To: <49AD58C3.20907@stsci.edu> References: <49AD58C3.20907@stsci.edu> Message-ID: <20090303162224.GA31266@phare.normalesup.org>

On Tue, Mar 03, 2009 at 11:20:19AM -0500, Todd Miller wrote: > Is anyone using numpy in 64-bit environments on a day-to-day basis? I am. > Are you using very large arrays, i.e. over 2G in size? I believe so, but I may be wrong.
Ga?l From robince at gmail.com Tue Mar 3 11:29:58 2009 From: robince at gmail.com (Robin) Date: Tue, 3 Mar 2009 16:29:58 +0000 Subject: [Numpy-discussion] 64-bit numpy questions? In-Reply-To: <49AD58C3.20907@stsci.edu> References: <49AD58C3.20907@stsci.edu> Message-ID: On Tue, Mar 3, 2009 at 4:20 PM, Todd Miller wrote: >?Is anyone using numpy in 64-bit > environments on a day-to-day basis? Yes - linux x86_64 > Are you using very large arrays, > i.e. ?over 2G in size? I have been using arrays this size and larger (mainly sparse matrices) without any problem (except for the machine running out of memory :) Cheers Robin > > Cheers, > Todd > > > _______________________________________________ > Numpy-discussion mailing list > Numpy-discussion at scipy.org > http://projects.scipy.org/mailman/listinfo/numpy-discussion > From hanni.ali at gmail.com Tue Mar 3 11:40:33 2009 From: hanni.ali at gmail.com (Hanni Ali) Date: Tue, 3 Mar 2009 16:40:33 +0000 Subject: [Numpy-discussion] 64-bit numpy questions? In-Reply-To: <49AD58C3.20907@stsci.edu> References: <49AD58C3.20907@stsci.edu> Message-ID: <789d27b10903030840vfb4b219t6c479e148d5b42eb@mail.gmail.com> > > Is anyone using numpy in 64-bit environments on a day-to-day basis? Windows 2003 64 > Are you using very large arrays, i.e. over 2G in size? Yes without any problems, using Python 2.6. Hanni -------------- next part -------------- An HTML attachment was scrubbed... URL: From oliphant at enthought.com Tue Mar 3 11:46:19 2009 From: oliphant at enthought.com (Travis E. Oliphant) Date: Tue, 03 Mar 2009 10:46:19 -0600 Subject: [Numpy-discussion] 64-bit numpy questions? In-Reply-To: <49AD58C3.20907@stsci.edu> References: <49AD58C3.20907@stsci.edu> Message-ID: <49AD5EDB.9000709@enthought.com> Todd Miller wrote: > Hi, > > I've been looking at a 64-bit numpy problem we were having on Solaris: > > >>> a=numpy.zeros(0x180000000,dtype='b1') > >>> a.data > Traceback (most recent call last): > File "", line 1, in > ValueError: size must be zero or positive > > A working fix seemed to be this: > > Index: arrayobject.c > =================================================================== > --- arrayobject.c (revision 6530) > +++ arrayobject.c (working copy) > @@ -6774,7 +6774,7 @@ > static PyObject * > array_data_get(PyArrayObject *self) > { > - intp nbytes; > + Py_ssize_t nbytes; > if (!(PyArray_ISONESEGMENT(self))) { > PyErr_SetString(PyExc_AttributeError, "cannot get single-"\ > "segment buffer for discontiguous array"); > @@ -6782,10 +6782,10 @@ > } > nbytes = PyArray_NBYTES(self); > if PyArray_ISWRITEABLE(self) { > - return PyBuffer_FromReadWriteObject((PyObject *)self, 0, (int) > nbytes); > + return PyBuffer_FromReadWriteObject((PyObject *)self, 0, > (Py_ssize_t) nbytes); > } > else { > - return PyBuffer_FromObject((PyObject *)self, 0, (int) nbytes); > + return PyBuffer_FromObject((PyObject *)self, 0, (Py_ssize_t) > nbytes); > } > } > > This fix could be simpler but still illustrates the typical problem: > use of (or cast to) int rather than something "pointer sized". > This looks like a problem with the port to Python2.5 not getting all the Python C-API changes. There is no need to change the intp nbytes line, but the un-necessary casting to (int) in the calls to the PyBuffer_ should absolutely be changed at least for Python 2.5 and above. 
-Travis

From david at ar.media.kyoto-u.ac.jp Tue Mar 3 13:14:38 2009 From: david at ar.media.kyoto-u.ac.jp (David Cournapeau) Date: Wed, 04 Mar 2009 03:14:38 +0900 Subject: [Numpy-discussion] Proposed schedule for numpy 1.3.0 Message-ID: <49AD738E.3080508@ar.media.kyoto-u.ac.jp>

Hi, A few weeks ago, we had a discussion about the 1.3.0 release schedule, but we did not end up stating a schedule. I am volunteering to be the release manager for 1.3.0, and suggest the following:

Beta: 15th March (only doc + severe regressions accepted after, branch trunk into 1.3.x, trunk opened for 1.4.0)
RC: 23rd March (nothing but build issues/blockers)
Release date: 1st April

If you are in the middle of something important with a lot of changes, and you don't think you can make it for 15th March, please notify ASAP, in particular for C code (the exact dates can be changed in that case). What constitutes severe regression and blocker issues is decided by the release manager. I also started to update the release notes: http://projects.scipy.org/scipy/numpy/browser/trunk/doc/release/1.3.0-notes.rst Please fill in any missing information (in particular for documentation, and ma, which I have not followed in detail). cheers, David

From pgmdevlist at gmail.com Tue Mar 3 13:57:43 2009 From: pgmdevlist at gmail.com (Pierre GM) Date: Tue, 3 Mar 2009 13:57:43 -0500 Subject: [Numpy-discussion] Help on subclassing numpy.ma: __array_wrap__ In-Reply-To: References: Message-ID: <4FD60851-93C5-4C3E-9CA4-B5C59DCA6998@gmail.com>

Kevin, Sorry for the delayed answer.

> > (a) Is MA intended to be subclassed?

Yes, that's actually the reason why the class was rewritten, to simplify subclassing. As Josef suggested, you can check the scikits.timeseries package that makes an extensive use of MaskedArray as baseclass.

> > (b) If so, perhaps I'm missing something to make this work. Any > pointers will be appreciated.

As you've run a debugger on your sources, you must have noticed the calls to MaskedArray._update_from. In your case, the simplest is to define DTMA._update_from as such:

_____
def _update_from(self, obj):
    ma.MaskedArray._update_from(self, obj)
    self._attr = getattr(obj, '_attr', {'EmptyDict':[]})
_____

Now, because MaskedArray.__array_wrap__() itself calls _update_from, you don't actually need a specific DTMA.__array_wrap__ (unless you have some specific operations to perform, but it doesn't seem to be the case). Now for a word of explanation: __array_wrap__ is intended to transform the output of a numpy function into an object of your class. When we use the numpy.ma functions, we don't need that, we just need to retrieve some of the attributes of the initial MA. That's why _update_from was introduced. Of course, I'm to blame for not making that aspect explicit in the doc. I'm going to try to correct that. In any case, let me know how it goes. P.
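(To make that recipe concrete, here is a minimal, self-contained sketch along the lines Pierre describes; the class name, the extra attribute and the usage are made up for illustration, and it has not been tested against every numpy.ma version:)

import numpy as np
import numpy.ma as ma

class DTMA(ma.MaskedArray):
    """Masked array carrying an extra attribute through operations."""

    def __new__(cls, data, extra=None, **kwargs):
        obj = ma.masked_array(data, **kwargs).view(cls)
        obj._extra = extra if extra is not None else {}
        return obj

    def _update_from(self, obj):
        # let MaskedArray copy its own internals first ...
        ma.MaskedArray._update_from(self, obj)
        # ... then carry the extra attribute across views and results
        self._extra = getattr(obj, '_extra', getattr(self, '_extra', {}))

a = DTMA([1., 2., 3.], mask=[0, 1, 0], extra={'units': 'm'})
print (a * 2)._extra   # {'units': 'm'} -- survives the multiplication

On Mar 1, 2009, at 10:37 AM, Kevin Dunn wrote: > Hi everyone, > > I'm subclassing Numpy's MaskedArray to create a data class that > handles missing data, but adds some extra info I need to carry > around. However I've been having problems keeping this extra info > attached to the subclass instances after performing operations on > them. > The bare-bones script that I've copied here shows the basic issue: http://pastebin.com/f69b979b8 > There are 2 classes: one where I am able to subclass numpy (with > help from the great description at http://www.scipy.org/Subclasses), > and the other where I subclass numpy.ma, using the same ideas again.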
> > When stepping through the code in a debugger, lines 76 to 96, I can > see that the numpy subclass, called DT, calls DT.__array_wrap__() > after it completes unary and binary operations. But the numpy.ma > subclass, called DTMA, does not seem to call DTMA.__array_wrap__(), > especially line 111. > > Just to test this idea, I overrode the __mul__ function in my DTMA > subclass to call DTMA.__array_wrap__() and it returns my extra > attributes, in the same way that Numpy did. > > My questions are: > > (b) If so, perhaps I'm missing something to make this work. Any > pointers will be appreciated. > > So far it seems the only way for me to sub-class numpy.ma is to > override all numpy.ma functions of interest for my class and add a > DTMA.__array_wrap() call to the end of them. Hopefully there is an > easier way. > Related to this question, was there are particular outcome from this > archived discussion (I only joined the list recently): http://article.gmane.org/gmane.comp.python.numeric.general/24315 > because that dictionary object would be exactly what I'm after here. > Thanks, > > Kevin > > > > _______________________________________________ > Numpy-discussion mailing list > Numpy-discussion at scipy.org > http://projects.scipy.org/mailman/listinfo/numpy-discussion From pwang at enthought.com Tue Mar 3 14:05:50 2009 From: pwang at enthought.com (Peter Wang) Date: Tue, 3 Mar 2009 13:05:50 -0600 Subject: [Numpy-discussion] SVN and Trac servers are back up In-Reply-To: <49AD3856.3090407@ar.media.kyoto-u.ac.jp> References: <49AD3856.3090407@ar.media.kyoto-u.ac.jp> Message-ID: <67278550-7BBB-4499-B578-CC05702533ED@enthought.com> Hi everyone, We have moved the scipy and numpy Trac and SVN servers to a new machine. We have also moved the scikits SVN repository, but not its Trac (scipy.org/scipy/scikits). The SVN repositories for wavelets, mpi4py, and other projects that are hosted on scipy have not been moved yet, and will be temporarily unavailable until we get them moved over. Please poke around (gently!) and let us know if you experience any broken links, incorrect redirects, and the like. A few things to note: - The URLs for the trac pages have been simplified to: http://projects.scipy.org/numpy http://projects.scipy.org/scipy You should be seemlessly redirected to these sites if you try to access any of the old URLs (which were of the form /scipy/scipy/ or / scipy/numpy/). - The mailman archives and listinfo pages should now redirect to mail.scipy.org/mailman/ and mail.scipy.org/pipermail/. Again, this should be seemless, so if you experience any difficulties please let us know. Thanks, Peter, Stefan, and David From david at ar.media.kyoto-u.ac.jp Tue Mar 3 14:02:49 2009 From: david at ar.media.kyoto-u.ac.jp (David Cournapeau) Date: Wed, 04 Mar 2009 04:02:49 +0900 Subject: [Numpy-discussion] SVN and Trac servers are back up In-Reply-To: <67278550-7BBB-4499-B578-CC05702533ED@enthought.com> References: <49AD3856.3090407@ar.media.kyoto-u.ac.jp> <67278550-7BBB-4499-B578-CC05702533ED@enthought.com> Message-ID: <49AD7ED9.6070905@ar.media.kyoto-u.ac.jp> Hi Peter, Peter Wang wrote: > Hi everyone, > > We have moved the scipy and numpy Trac and SVN servers to a new > machine. We have also moved the scikits SVN repository, but not its > Trac (scipy.org/scipy/scikits). The SVN repositories for wavelets, > mpi4py, and other projects that are hosted on scipy have not been > moved yet, and will be temporarily unavailable until we get them moved > over. > > Please poke around (gently!) 
and let us know if you experience any > broken links, incorrect redirects, and the like. A few things to note: > > - The URLs for the trac pages have been simplified to: > http://projects.scipy.org/numpy > http://projects.scipy.org/scipy > You should be seemlessly redirected to these sites if you try to > access any of the old URLs (which were of the form /scipy/scipy/ or / > scipy/numpy/). > It looks like modifying tickets still cause some trouble: after clicking "submit changes", the server seems to hang, and the request never ends, Thanks again for all your work on the migration, David From watson.jim at gmail.com Tue Mar 3 14:31:37 2009 From: watson.jim at gmail.com (James Watson) Date: Tue, 3 Mar 2009 19:31:37 +0000 Subject: [Numpy-discussion] numpy and python 2.6 on windows: please test In-Reply-To: References: Message-ID: >> Windows debug extensions have a suffix, d. If you don't install the >> debug version of numpy, you can't use it with debug Python. *red face* forgot about --debug....... > Yes, this has actually nothing to do with python 2.6. I noticed the > crash, thought naively it would be easy to fix, but it is actually quite > nasty. I've reverted the change which introduced the crash for the time > being (r6541). In case this helps, there are only 2 test lines in a single test in r6535 which crash the interpreter on python-2.6.1-win32: numpy/ma/tests/test_mrecords.py, in test_get(), - assert_equal(mbase_first.tolist(), (1,1.1,'one')) - assert_equal(mbase_last.tolist(), (None,None,None)) mbase[int].tolist() crashes Python in _Py_Dealloc(PyObject *op). mbase.tolist() does not. > I am very unfamiliar with how debugging > works on the python + windows + VS combination. If you have some > insight/recommendations, I would be glad to fix this (e.g. how is this > supposed to work ?) I'm not sure if there's a better way, but I've found it easiest to run python via a debug run from within VS, installing and testing numpy from there. The 2.6.1 sources build fine with VS2008. James. From pwang at enthought.com Tue Mar 3 14:39:05 2009 From: pwang at enthought.com (Peter Wang) Date: Tue, 3 Mar 2009 13:39:05 -0600 Subject: [Numpy-discussion] SVN and Trac servers are back up In-Reply-To: <49AD7ED9.6070905@ar.media.kyoto-u.ac.jp> References: <49AD3856.3090407@ar.media.kyoto-u.ac.jp> <67278550-7BBB-4499-B578-CC05702533ED@enthought.com> <49AD7ED9.6070905@ar.media.kyoto-u.ac.jp> Message-ID: On Mar 3, 2009, at 1:02 PM, David Cournapeau wrote: > It looks like modifying tickets still cause some trouble: after > clicking > "submit changes", the server seems to hang, and the request never > ends, > > Thanks again for all your work on the migration, > David Can you try again? I looked again and it looks like there are definitely some files that were not writeable by the Apache server. Thanks, Peter From pgmdevlist at gmail.com Tue Mar 3 14:52:00 2009 From: pgmdevlist at gmail.com (Pierre GM) Date: Tue, 3 Mar 2009 14:52:00 -0500 Subject: [Numpy-discussion] Proposed schedule for numpy 1.3.0 In-Reply-To: <49AD738E.3080508@ar.media.kyoto-u.ac.jp> References: <49AD738E.3080508@ar.media.kyoto-u.ac.jp> Message-ID: <4D1C6E63-6B8B-4E1F-AEFA-9344CEA8E551@gmail.com> David, > I also started to update the release notes: > > http://projects.scipy.org/scipy/numpy/browser/trunk/doc/release/1.3.0-notes.rst I get a 404. Anyhow, on the ma side: * structured arrays should now be fully supported by MaskedArray (r6463, r6324, r6305, r6300, r6294...) 
* Minor bug fixes (r6356, r6352, r6335, r6299, r6298) * Improved support for __iter__ (r6326) * made baseclass, sharedmask and hardmask accesible to the user (but read-only) + doc update From chanley at stsci.edu Tue Mar 3 14:59:50 2009 From: chanley at stsci.edu (Christopher Hanley) Date: Tue, 03 Mar 2009 14:59:50 -0500 Subject: [Numpy-discussion] Proposed schedule for numpy 1.3.0 In-Reply-To: <4D1C6E63-6B8B-4E1F-AEFA-9344CEA8E551@gmail.com> References: <49AD738E.3080508@ar.media.kyoto-u.ac.jp> <4D1C6E63-6B8B-4E1F-AEFA-9344CEA8E551@gmail.com> Message-ID: <49AD8C36.5040504@stsci.edu> Pierre GM wrote: > David, >> I also started to update the release notes: >> >> http://projects.scipy.org/scipy/numpy/browser/trunk/doc/release/1.3.0-notes.rst > > I get a 404. > > Anyhow, on the ma side: > * structured arrays should now be fully supported by MaskedArray > (r6463, r6324, r6305, r6300, r6294...) > * Minor bug fixes (r6356, r6352, r6335, r6299, r6298) > * Improved support for __iter__ (r6326) > * made baseclass, sharedmask and hardmask accesible to the user (but > read-only) > > + doc update I also received a 404. However it is available in svn (numpy/doc/release). Chris -- Christopher Hanley Senior Systems Software Engineer Space Telescope Science Institute 3700 San Martin Drive Baltimore MD, 21218 (410) 338-4338 From stefan at sun.ac.za Tue Mar 3 15:01:08 2009 From: stefan at sun.ac.za (=?ISO-8859-1?Q?St=E9fan_van_der_Walt?=) Date: Tue, 3 Mar 2009 22:01:08 +0200 Subject: [Numpy-discussion] SVN and Trac servers are back up In-Reply-To: References: <49AD3856.3090407@ar.media.kyoto-u.ac.jp> <67278550-7BBB-4499-B578-CC05702533ED@enthought.com> <49AD7ED9.6070905@ar.media.kyoto-u.ac.jp> Message-ID: <9457e7c80903031201sd7e8e55y4f8340f1ec271072@mail.gmail.com> 2009/3/3 Peter Wang : > Can you try again? ?I looked again and it looks like there are > definitely some files that were not writeable by the Apache server. I think it is the notification system that is failing to send out e-mails. I'll look into it. Cheers St?fan From cournape at gmail.com Tue Mar 3 15:02:49 2009 From: cournape at gmail.com (David Cournapeau) Date: Wed, 4 Mar 2009 05:02:49 +0900 Subject: [Numpy-discussion] Proposed schedule for numpy 1.3.0 In-Reply-To: <4D1C6E63-6B8B-4E1F-AEFA-9344CEA8E551@gmail.com> References: <49AD738E.3080508@ar.media.kyoto-u.ac.jp> <4D1C6E63-6B8B-4E1F-AEFA-9344CEA8E551@gmail.com> Message-ID: <5b8d13220903031202q4695592ax60e4bf063aae4cd8@mail.gmail.com> On Wed, Mar 4, 2009 at 4:52 AM, Pierre GM wrote: > David, >> I also started to update the release notes: >> >> http://projects.scipy.org/scipy/numpy/browser/trunk/doc/release/1.3.0-notes.rst > > I get a 404. Yes, sorry about that, the server migration has caused some changes here. I will wait for the DNS update to get up to my connection. Anyway, the file is in svn, though, so feel free to update it accordingly. 
thanks for the info on masked arrays, David From pwang at enthought.com Tue Mar 3 15:04:06 2009 From: pwang at enthought.com (Peter Wang) Date: Tue, 3 Mar 2009 14:04:06 -0600 Subject: [Numpy-discussion] Proposed schedule for numpy 1.3.0 In-Reply-To: <4D1C6E63-6B8B-4E1F-AEFA-9344CEA8E551@gmail.com> References: <49AD738E.3080508@ar.media.kyoto-u.ac.jp> <4D1C6E63-6B8B-4E1F-AEFA-9344CEA8E551@gmail.com> Message-ID: <4D202E8B-3D81-440E-A300-23F59D808132@enthought.com> On Mar 3, 2009, at 1:52 PM, Pierre GM wrote: > David, >> I also started to update the release notes: >> >> http://projects.scipy.org/scipy/numpy/browser/trunk/doc/release/1.3.0-notes.rst > > I get a 404. This is most likely a DNS issue. Please try pinging projects.scipy.org from the command line, and you should see it resolve to the IP address 216.62.213.249. If not, then it means that the DNS update has not reached your network yet. This should resolve itself within the next couple of hours. If you continue to get 404s at that point, please let us know. Thanks for your patience, Peter From stefan at sun.ac.za Tue Mar 3 15:06:19 2009 From: stefan at sun.ac.za (=?ISO-8859-1?Q?St=E9fan_van_der_Walt?=) Date: Tue, 3 Mar 2009 22:06:19 +0200 Subject: [Numpy-discussion] SVN and Trac servers are back up In-Reply-To: <9457e7c80903031201sd7e8e55y4f8340f1ec271072@mail.gmail.com> References: <49AD3856.3090407@ar.media.kyoto-u.ac.jp> <67278550-7BBB-4499-B578-CC05702533ED@enthought.com> <49AD7ED9.6070905@ar.media.kyoto-u.ac.jp> <9457e7c80903031201sd7e8e55y4f8340f1ec271072@mail.gmail.com> Message-ID: <9457e7c80903031206t61787786h16b9df98027eb4d9@mail.gmail.com> 2009/3/3 St?fan van der Walt : > 2009/3/3 Peter Wang : >> Can you try again? ?I looked again and it looks like there are >> definitely some files that were not writeable by the Apache server. > > I think it is the notification system that is failing to send out > e-mails. ?I'll look into it. Should be fixed now. Thanks St?fan From efiring at hawaii.edu Tue Mar 3 16:15:31 2009 From: efiring at hawaii.edu (Eric Firing) Date: Tue, 03 Mar 2009 11:15:31 -1000 Subject: [Numpy-discussion] [Fwd: Issue 113 in sphinx: nodes.py KeyError "entries" in numpy doc build] Message-ID: <49AD9DF3.4060302@hawaii.edu> I have been trying to build numpy docs from the svn doc subdirectory, and I have been notifying the sphinx people when things don't work. They have made several fixes, but here is one that will require a change on the numpy side. I know zip about sphinx extensions, so I don't expect to be able to suggest a fix myself. (I am assuming that the doc subdirectory in svn *should* be buildable--am I correct?) Eric -------------- next part -------------- An embedded message was scrubbed... From: issues-noreply at bitbucket.org Subject: Issue 113 in sphinx: nodes.py KeyError "entries" in numpy doc build Date: Tue, 03 Mar 2009 20:23:46 +0000 Size: 2267 URL: From pav at iki.fi Tue Mar 3 17:21:38 2009 From: pav at iki.fi (Pauli Virtanen) Date: Tue, 3 Mar 2009 22:21:38 +0000 (UTC) Subject: [Numpy-discussion] [Fwd: Issue 113 in sphinx: nodes.py KeyError "entries" in numpy doc build] References: <49AD9DF3.4060302@hawaii.edu> Message-ID: Tue, 03 Mar 2009 11:15:31 -1000, Eric Firing wrote: > I have been trying to build numpy docs from the svn doc subdirectory, > and I have been notifying the sphinx people when things don't work. They > have made several fixes, but here is one that will require a change on > the numpy side. I know zip about sphinx extensions, so I don't expect > to be able to suggest a fix myself. 
> > (I am assuming that the doc subdirectory in svn *should* be > buildable--am I correct?)

Yes. But not necessarily with Sphinx 0.6.dev. -- Pauli Virtanen

From neilcrighton at gmail.com Tue Mar 3 18:02:37 2009 From: neilcrighton at gmail.com (Neil Crighton) Date: Tue, 3 Mar 2009 23:02:37 +0000 (UTC) Subject: [Numpy-discussion] intersect1d and setmember1d References: <90CBFFFE6273484B9579400AC950800502024765@ntsydexm01.pc.internal.macquarie.com> <243385.2089.qm@web94910.mail.in2.yahoo.com> <49AB9F43.4060804@ntc.zcu.cz> <3d375d730903020928q6e0f69ddldc2a81102e8cc840@mail.gmail.com> Message-ID:

Robert Kern <robert.kern at gmail.com> writes: > Do you mind if we just add you to the THANKS.txt file, and consider > you as a "NumPy Developer" per the LICENSE.txt as having released that > code under the numpy license? If we're dotting our i's and crossing > our t's legally, that's a bit more straightforward (oddly enough).

No, I don't mind having it released under the numpy licence. Neil

From jonathan.taylor at utoronto.ca Tue Mar 3 18:52:50 2009 From: jonathan.taylor at utoronto.ca (Jonathan Taylor) Date: Tue, 3 Mar 2009 18:52:50 -0500 Subject: [Numpy-discussion] Faster way to generate a rotation matrix? Message-ID: <463e11f90903031552s518ca37cn9980f472a1baad79@mail.gmail.com>

Hi, I am doing optimization on a vector of rotation angles tx, ty and tz using scipy.optimize.fmin. Unfortunately the function that I am optimizing needs the rotation matrix corresponding to this vector, so it is getting constructed once for each iteration with new values. From profiling I can see that the function I am using to construct this rotation matrix is a bottleneck. I am currently using:

def rotation(theta):
    tx, ty, tz = theta
    Rx = np.array([[1, 0, 0], [0, cos(tx), -sin(tx)], [0, sin(tx), cos(tx)]])
    Ry = np.array([[cos(ty), 0, -sin(ty)], [0, 1, 0], [sin(ty), 0, cos(ty)]])
    Rz = np.array([[cos(tz), -sin(tz), 0], [sin(tz), cos(tz), 0], [0, 0, 1]])
    return np.dot(Rx, np.dot(Ry, Rz))

Is there a faster way to do this? Perhaps I can do this faster with a small cython module, but this might be overkill? Thanks for any help, Jonathan.

From jonathan.taylor at utoronto.ca Tue Mar 3 18:53:44 2009 From: jonathan.taylor at utoronto.ca (Jonathan Taylor) Date: Tue, 3 Mar 2009 18:53:44 -0500 Subject: [Numpy-discussion] Faster way to generate a rotation matrix? In-Reply-To: <463e11f90903031552s518ca37cn9980f472a1baad79@mail.gmail.com> References: <463e11f90903031552s518ca37cn9980f472a1baad79@mail.gmail.com> Message-ID: <463e11f90903031553p6245231fmfea3e6b04bd2a0a@mail.gmail.com>

Sorry.. obviously having some copy and paste trouble here.
The message should be as follows:

Hi, I am doing optimization on a vector of rotation angles tx, ty and tz using scipy.optimize.fmin. Unfortunately the function that I am optimizing needs the rotation matrix corresponding to this vector, so it is getting constructed once for each iteration with new values. From profiling I can see that the function I am using to construct this rotation matrix is a bottleneck. I am currently using:

def rotation(theta):
    tx, ty, tz = theta
    Rx = np.array([[1, 0, 0], [0, cos(tx), -sin(tx)], [0, sin(tx), cos(tx)]])
    Ry = np.array([[cos(ty), 0, -sin(ty)], [0, 1, 0], [sin(ty), 0, cos(ty)]])
    Rz = np.array([[cos(tz), -sin(tz), 0], [sin(tz), cos(tz), 0], [0, 0, 1]])
    return np.dot(Rx, np.dot(Ry, Rz))

Is there a faster way to do this? Perhaps I can do this faster with a small cython module, but this might be overkill? Thanks for any help, Jonathan.

From robert.kern at gmail.com Tue Mar 3 19:19:10 2009 From: robert.kern at gmail.com (Robert Kern) Date: Tue, 3 Mar 2009 18:19:10 -0600 Subject: [Numpy-discussion] Faster way to generate a rotation matrix? In-Reply-To: <463e11f90903031553p6245231fmfea3e6b04bd2a0a@mail.gmail.com> References: <463e11f90903031552s518ca37cn9980f472a1baad79@mail.gmail.com> <463e11f90903031553p6245231fmfea3e6b04bd2a0a@mail.gmail.com> Message-ID: <3d375d730903031619y7727b4d3ked07cba58f9b7828@mail.gmail.com>

On Tue, Mar 3, 2009 at 17:53, Jonathan Taylor wrote: > Sorry.. obviously having some copy and paste trouble here. The > message should be as follows: > [... same question and rotation() as above ...]

You could look up the full form of the rotation matrix in terms of the angles, or use sympy to do the same. The latter might be more convenient given that the reference you find might be using a different convention for the angles. James Diebel's "Representing Attitude: Euler Angles, Unit Quaternions, and Rotation Vectors" is a nice, comprehensive reference for such formulae.

http://citeseerx.ist.psu.edu/viewdoc/download;jsessionid=5F5145BE25D61F87478B25AD1493C8F4?doi=10.1.1.110.5134&rep=rep1&type=pdf&ei=QcetSefqF4GEsQPnx4jSBA&sig2=HjJILSBPFgJTfuifbvKrxw&usg=AFQjCNFbABIxusr-NEbgrinhtR6buvjaYA

-- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco
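(An editorial aside for archive readers: for the exact sign convention in rotation() above, multiplying out np.dot(Rx, np.dot(Ry, Rz)) by hand gives the closed form below, with six trig calls and a single array construction instead of two matrix products; a sketch worth checking numerically against the original before relying on it:)

import numpy as np
from numpy import sin, cos

def rotation(theta):
    tx, ty, tz = theta
    cx, sx = cos(tx), sin(tx)
    cy, sy = cos(ty), sin(ty)
    cz, sz = cos(tz), sin(tz)
    # expanded form of np.dot(Rx, np.dot(Ry, Rz)) for the matrices above
    return np.array([
        [cy*cz,            -cy*sz,            -sy],
        [cx*sz - sx*sy*cz,  cx*cz + sx*sy*sz, -sx*cy],
        [sx*sz + cx*sy*cz,  sx*cz - cx*sy*sz,  cx*cy]])

From sccolbert at gmail.com Tue Mar 3 20:14:11 2009 From: sccolbert at gmail.com (Chris Colbert) Date: Tue, 3 Mar 2009 20:14:11 -0500 Subject: [Numpy-discussion] Faster way to generate a rotation matrix?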
In-Reply-To: <3d375d730903031619y7727b4d3ked07cba58f9b7828@mail.gmail.com> References: <463e11f90903031552s518ca37cn9980f472a1baad79@mail.gmail.com> <463e11f90903031553p6245231fmfea3e6b04bd2a0a@mail.gmail.com> <3d375d730903031619y7727b4d3ked07cba58f9b7828@mail.gmail.com> Message-ID: <7f014ea60903031714h6a1a78aah6a05d8c95da032c7@mail.gmail.com> In addition to what Robert said, you also only need to calculate six transcendentals: cx = cos(tx) sx = sin(tx) cy = cos(ty) sy = sin(ty) cz = cos(tz) sz = sin(tz) you, are making sixteen transcendental calls in your loop each time. I can also recommend Chapter 2 of Introduction to Robotics: Mechanics and Controls by John J. Craig for more on more efficient transformations. On Tue, Mar 3, 2009 at 7:19 PM, Robert Kern wrote: > On Tue, Mar 3, 2009 at 17:53, Jonathan Taylor > wrote: > > Sorry.. obviously having some copy and paste trouble here. The > > message should be as follows: > > > > Hi, > > > > I am doing optimization on a vector of rotation angles tx,ty and tz > > using scipy.optimize.fmin. Unfortunately the function that I am > > optimizing needs the rotation matrix corresponding to this vector so > > it is getting constructed once for each iteration with new values. > > >From profiling I can see that the function I am using to construct > > this rotation matrix is a bottleneck. I am currently using: > > > > def rotation(theta): > > tx,ty,tz = theta > > > > Rx = np.array([[1,0,0], [0, cos(tx), -sin(tx)], [0, sin(tx), cos(tx)]]) > > Ry = np.array([[cos(ty), 0, -sin(ty)], [0, 1, 0], [sin(ty), 0, > cos(ty)]]) > > Rz = np.array([[cos(tz), -sin(tz), 0], [sin(tz), cos(tz), 0], [0,0,1]]) > > > > return np.dot(Rx, np.dot(Ry, Rz)) > > > > Is there a faster way to do this? Perhaps I can do this faster with a > > small cython module, but this might be overkill? > > You could look up to the full form of the rotation matrix in terms of > the angles, or use sympy to do the same. The latter might be more > convenient given that the reference you find might be using a > different convention for the angles. James Diebel's "Representing > Attitude: Euler Angles, Unit Quaternions, and Rotation Vectors" is a > nice, comprehensive reference for such formulae. > > > http://citeseerx.ist.psu.edu/viewdoc/download;jsessionid=5F5145BE25D61F87478B25AD1493C8F4?doi=10.1.1.110.5134&rep=rep1&type=pdf&ei=QcetSefqF4GEsQPnx4jSBA&sig2=HjJILSBPFgJTfuifbvKrxw&usg=AFQjCNFbABIxusr-NEbgrinhtR6buvjaYA > > -- > Robert Kern > > "I have come to believe that the whole world is an enigma, a harmless > enigma that is made terrible by our own mad attempt to interpret it as > though it had an underlying truth." > -- Umberto Eco > _______________________________________________ > Numpy-discussion mailing list > Numpy-discussion at scipy.org > http://projects.scipy.org/mailman/listinfo/numpy-discussion > -------------- next part -------------- An HTML attachment was scrubbed... URL: From sccolbert at gmail.com Tue Mar 3 20:15:12 2009 From: sccolbert at gmail.com (Chris Colbert) Date: Tue, 3 Mar 2009 20:15:12 -0500 Subject: [Numpy-discussion] Faster way to generate a rotation matrix? 
In-Reply-To: <7f014ea60903031714h6a1a78aah6a05d8c95da032c7@mail.gmail.com> References: <463e11f90903031552s518ca37cn9980f472a1baad79@mail.gmail.com> <463e11f90903031553p6245231fmfea3e6b04bd2a0a@mail.gmail.com> <3d375d730903031619y7727b4d3ked07cba58f9b7828@mail.gmail.com> <7f014ea60903031714h6a1a78aah6a05d8c95da032c7@mail.gmail.com> Message-ID: <7f014ea60903031715y3622f301u36b08c88a2c8e958@mail.gmail.com> sorry, i meant you're making 12 calls, not 16... Chris On Tue, Mar 3, 2009 at 8:14 PM, Chris Colbert wrote: > In addition to what Robert said, you also only need to calculate six > transcendentals: > > cx = cos(tx) > sx = sin(tx) > cy = cos(ty) > sy = sin(ty) > cz = cos(tz) > sz = sin(tz) > > you, are making sixteen transcendental calls in your loop each time. > > I can also recommend Chapter 2 of Introduction to Robotics: Mechanics and > Controls by John J. Craig for more on more efficient transformations. > > > > > > On Tue, Mar 3, 2009 at 7:19 PM, Robert Kern wrote: > >> On Tue, Mar 3, 2009 at 17:53, Jonathan Taylor >> wrote: >> > Sorry.. obviously having some copy and paste trouble here. The >> > message should be as follows: >> > >> > Hi, >> > >> > I am doing optimization on a vector of rotation angles tx,ty and tz >> > using scipy.optimize.fmin. Unfortunately the function that I am >> > optimizing needs the rotation matrix corresponding to this vector so >> > it is getting constructed once for each iteration with new values. >> > >From profiling I can see that the function I am using to construct >> > this rotation matrix is a bottleneck. I am currently using: >> > >> > def rotation(theta): >> > tx,ty,tz = theta >> > >> > Rx = np.array([[1,0,0], [0, cos(tx), -sin(tx)], [0, sin(tx), >> cos(tx)]]) >> > Ry = np.array([[cos(ty), 0, -sin(ty)], [0, 1, 0], [sin(ty), 0, >> cos(ty)]]) >> > Rz = np.array([[cos(tz), -sin(tz), 0], [sin(tz), cos(tz), 0], >> [0,0,1]]) >> > >> > return np.dot(Rx, np.dot(Ry, Rz)) >> > >> > Is there a faster way to do this? Perhaps I can do this faster with a >> > small cython module, but this might be overkill? >> >> You could look up to the full form of the rotation matrix in terms of >> the angles, or use sympy to do the same. The latter might be more >> convenient given that the reference you find might be using a >> different convention for the angles. James Diebel's "Representing >> Attitude: Euler Angles, Unit Quaternions, and Rotation Vectors" is a >> nice, comprehensive reference for such formulae. >> >> >> http://citeseerx.ist.psu.edu/viewdoc/download;jsessionid=5F5145BE25D61F87478B25AD1493C8F4?doi=10.1.1.110.5134&rep=rep1&type=pdf&ei=QcetSefqF4GEsQPnx4jSBA&sig2=HjJILSBPFgJTfuifbvKrxw&usg=AFQjCNFbABIxusr-NEbgrinhtR6buvjaYA >> >> -- >> Robert Kern >> >> "I have come to believe that the whole world is an enigma, a harmless >> enigma that is made terrible by our own mad attempt to interpret it as >> though it had an underlying truth." >> -- Umberto Eco >> _______________________________________________ >> Numpy-discussion mailing list >> Numpy-discussion at scipy.org >> http://projects.scipy.org/mailman/listinfo/numpy-discussion >> > > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From robert.kern at gmail.com Tue Mar 3 20:26:38 2009 From: robert.kern at gmail.com (Robert Kern) Date: Tue, 3 Mar 2009 19:26:38 -0600 Subject: [Numpy-discussion] Slicing/selection in multiple dimensions simultaneously In-Reply-To: <9457e7c80903030111y590b4e34g2f7d1c42117acbe8@mail.gmail.com> References: <268febdf0709111511n3ca15d42o85d31831178d96a@mail.gmail.com> <46E71591.20802@gmail.com> <46E72116.8040408@enthought.com> <463e11f90902261900o748940b6yf8410abda82524cc@mail.gmail.com> <3d375d730902271238p7fe29192hb953df2c5f87c245@mail.gmail.com> <9457e7c80903030111y590b4e34g2f7d1c42117acbe8@mail.gmail.com> Message-ID: <3d375d730903031726u4e26a5efm8ca51abc775de1db@mail.gmail.com> On Tue, Mar 3, 2009 at 03:11, St?fan van der Walt wrote: > Hi Robert > > 2009/2/27 Robert Kern : >>> a[ix_([2,3,6],range(a.shape[1]),[3,2])] >>> >>> If anyone knows a better way? >> >> One could probably make ix_() take slice objects, too, to generate the >> correct arange() in the appropriate place. > > I was wondering how one would implement this, since the ix_ function > has no knowledge of the dimensions of "a". No, you're right. It doesn't work. -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From borreguero at gmail.com Tue Mar 3 20:53:00 2009 From: borreguero at gmail.com (Jose Borreguero) Date: Tue, 3 Mar 2009 20:53:00 -0500 Subject: [Numpy-discussion] how to multiply the rows of a matrix by a different number? Message-ID: <7cced4ed0903031753s11ae1aa9p3de97d88c4ba0837@mail.gmail.com> I guess there has to be an easy way for this. I have: M.shape=(10000,3) N.shape=(10000,) I want to do this: for i in range(10000): M[i]*=N[i] without the explicit loop -Jose -------------- next part -------------- An HTML attachment was scrubbed... URL: From josef.pktd at gmail.com Tue Mar 3 21:11:09 2009 From: josef.pktd at gmail.com (josef.pktd at gmail.com) Date: Tue, 3 Mar 2009 21:11:09 -0500 Subject: [Numpy-discussion] how to multiply the rows of a matrix by a different number? In-Reply-To: <7cced4ed0903031753s11ae1aa9p3de97d88c4ba0837@mail.gmail.com> References: <7cced4ed0903031753s11ae1aa9p3de97d88c4ba0837@mail.gmail.com> Message-ID: <1cd32cbb0903031811v741946c6ne20efe35eb3593a9@mail.gmail.com> On Tue, Mar 3, 2009 at 8:53 PM, Jose Borreguero wrote: > I guess there has to be an easy way for this. I have: > M.shape=(10000,3) > N.shape=(10000,) > > I want to do this: > for i in range(10000): > M[i]*=N[i] > without the explicit loop > >>> M = np.ones((10,3)) >>> N = np.arange(10) >>> N.shape (10,) >>> (N[:,np.newaxis]).shape (10, 1) >>> M*N[:,np.newaxis] array([[ 0., 0., 0.], [ 1., 1., 1.], [ 2., 2., 2.], [ 3., 3., 3.], [ 4., 4., 4.], [ 5., 5., 5.], [ 6., 6., 6.], [ 7., 7., 7.], [ 8., 8., 8.], [ 9., 9., 9.]]) >>> M *= N[:,np.newaxis] >>> M Josef From focke at slac.stanford.edu Tue Mar 3 21:06:50 2009 From: focke at slac.stanford.edu (Warren Focke) Date: Tue, 3 Mar 2009 18:06:50 -0800 (PST) Subject: [Numpy-discussion] how to multiply the rows of a matrix by a different number? In-Reply-To: <7cced4ed0903031753s11ae1aa9p3de97d88c4ba0837@mail.gmail.com> References: <7cced4ed0903031753s11ae1aa9p3de97d88c4ba0837@mail.gmail.com> Message-ID: M *= N[:, newaxis] On Tue, 3 Mar 2009, Jose Borreguero wrote: > I guess there has to be an easy way for this. 
I have: > M.shape=(10000,3) > N.shape=(10000,) > > I want to do this: > for i in range(10000): > M[i]*=N[i] > without the explicit loop > > > -Jose > From jonathan.taylor at utoronto.ca Tue Mar 3 23:41:37 2009 From: jonathan.taylor at utoronto.ca (Jonathan Taylor) Date: Tue, 3 Mar 2009 23:41:37 -0500 Subject: [Numpy-discussion] Faster way to generate a rotation matrix? In-Reply-To: <7f014ea60903031715y3622f301u36b08c88a2c8e958@mail.gmail.com> References: <463e11f90903031552s518ca37cn9980f472a1baad79@mail.gmail.com> <463e11f90903031553p6245231fmfea3e6b04bd2a0a@mail.gmail.com> <3d375d730903031619y7727b4d3ked07cba58f9b7828@mail.gmail.com> <7f014ea60903031714h6a1a78aah6a05d8c95da032c7@mail.gmail.com> <7f014ea60903031715y3622f301u36b08c88a2c8e958@mail.gmail.com> Message-ID: <463e11f90903032041o305dfd31r686a1e69cfc1484a@mail.gmail.com> Thanks, All these things make sense and I should have known to calculate the sins and cosines up front. I managed a few more "tricks" and knocked off 40% of the computation time: def rotation(theta, R = np.zeros((3,3))): cx,cy,cz = np.cos(theta) sx,sy,sz = np.sin(theta) R.flat = (cx*cz - sx*cy*sz, cx*sz + sx*cy*cz, sx*sy, -sx*cz - cx*cy*sz, -sx*sz + cx*cy*cz, cx*sy, sy*sz, -sy*cz, cy) return R Pretty evil looking ;) but still wouldn't mind somehow getting it faster. Am I right in thinking that I wouldn't get much of a speedup by rewriting this in C as most of the time is spent in necessary python functions? Thanks again, Jon. On Tue, Mar 3, 2009 at 8:15 PM, Chris Colbert wrote: > sorry, i meant you're making 12 calls, not 16... > > Chris > > On Tue, Mar 3, 2009 at 8:14 PM, Chris Colbert wrote: >> >> In addition to what Robert said, you also only need to calculate six >> transcendentals: >> >> cx = cos(tx) >> sx = sin(tx) >> cy = cos(ty) >> sy = sin(ty) >> cz = cos(tz) >> sz = sin(tz) >> >> you, are making sixteen transcendental calls in your loop each time. >> >> I can also recommend Chapter 2 of Introduction to Robotics: Mechanics and >> Controls by John J. Craig for more on more efficient transformations. >> >> >> >> >> >> On Tue, Mar 3, 2009 at 7:19 PM, Robert Kern wrote: >>> >>> On Tue, Mar 3, 2009 at 17:53, Jonathan Taylor >>> wrote: >>> > Sorry.. obviously having some copy and paste trouble here. ?The >>> > message should be as follows: >>> > >>> > Hi, >>> > >>> > I am doing optimization on a vector of rotation angles tx,ty and tz >>> > using scipy.optimize.fmin. ?Unfortunately the function that I am >>> > optimizing needs the rotation matrix corresponding to this vector so >>> > it is getting constructed once for each iteration with new values. >>> > >From profiling I can see that the function I am using to construct >>> > this rotation matrix is a bottleneck. ?I am currently using: >>> > >>> > def rotation(theta): >>> > ? tx,ty,tz = theta >>> > >>> > ? Rx = np.array([[1,0,0], [0, cos(tx), -sin(tx)], [0, sin(tx), >>> > cos(tx)]]) >>> > ? Ry = np.array([[cos(ty), 0, -sin(ty)], [0, 1, 0], [sin(ty), 0, >>> > cos(ty)]]) >>> > ? Rz = np.array([[cos(tz), -sin(tz), 0], [sin(tz), cos(tz), 0], >>> > [0,0,1]]) >>> > >>> > ? return np.dot(Rx, np.dot(Ry, Rz)) >>> > >>> > Is there a faster way to do this? ?Perhaps I can do this faster with a >>> > small cython module, but this might be overkill? >>> >>> You could look up to the full form of the rotation matrix in terms of >>> the angles, or use sympy to do the same. The latter might be more >>> convenient given that the reference you find might be using a >>> different convention for the angles. 
James Diebel's "Representing >>> Attitude: Euler Angles, Unit Quaternions, and Rotation Vectors" is a >>> nice, comprehensive reference for such formulae. >>> >>> >>> http://citeseerx.ist.psu.edu/viewdoc/download;jsessionid=5F5145BE25D61F87478B25AD1493C8F4?doi=10.1.1.110.5134&rep=rep1&type=pdf&ei=QcetSefqF4GEsQPnx4jSBA&sig2=HjJILSBPFgJTfuifbvKrxw&usg=AFQjCNFbABIxusr-NEbgrinhtR6buvjaYA >>> >>> -- >>> Robert Kern >>> >>> "I have come to believe that the whole world is an enigma, a harmless >>> enigma that is made terrible by our own mad attempt to interpret it as >>> though it had an underlying truth." >>> ?-- Umberto Eco >>> _______________________________________________ >>> Numpy-discussion mailing list >>> Numpy-discussion at scipy.org >>> http://projects.scipy.org/mailman/listinfo/numpy-discussion >> > > > _______________________________________________ > Numpy-discussion mailing list > Numpy-discussion at scipy.org > http://projects.scipy.org/mailman/listinfo/numpy-discussion > > From robert.kern at gmail.com Tue Mar 3 23:57:23 2009 From: robert.kern at gmail.com (Robert Kern) Date: Tue, 3 Mar 2009 22:57:23 -0600 Subject: [Numpy-discussion] Faster way to generate a rotation matrix? In-Reply-To: <463e11f90903032041o305dfd31r686a1e69cfc1484a@mail.gmail.com> References: <463e11f90903031552s518ca37cn9980f472a1baad79@mail.gmail.com> <463e11f90903031553p6245231fmfea3e6b04bd2a0a@mail.gmail.com> <3d375d730903031619y7727b4d3ked07cba58f9b7828@mail.gmail.com> <7f014ea60903031714h6a1a78aah6a05d8c95da032c7@mail.gmail.com> <7f014ea60903031715y3622f301u36b08c88a2c8e958@mail.gmail.com> <463e11f90903032041o305dfd31r686a1e69cfc1484a@mail.gmail.com> Message-ID: <3d375d730903032057v4b1f8324gd18d2d1cf02aae06@mail.gmail.com> On Tue, Mar 3, 2009 at 22:41, Jonathan Taylor wrote: > Thanks, ?All these things make sense and I should have known to > calculate the sins and cosines up front. ?I managed a few more > "tricks" and knocked off 40% of the computation time: > > def rotation(theta, R = np.zeros((3,3))): > ? ?cx,cy,cz = np.cos(theta) > ? ?sx,sy,sz = np.sin(theta) > ? ?R.flat = (cx*cz - sx*cy*sz, cx*sz + sx*cy*cz, sx*sy, > ? ? ? ?-sx*cz - cx*cy*sz, -sx*sz + cx*cy*cz, > ? ? ? ?cx*sy, sy*sz, -sy*cz, cy) > ? ?return R > > Pretty evil looking ;) but still wouldn't mind somehow getting it faster. > > Am I right in thinking that I wouldn't get much of a speedup by > rewriting this in C as most of the time is spent in necessary python > functions? You would be able to get rid of the Python function call overhead of rotation(), the ufunc machinery, and unnecessary array creation/deletion. It could be worth your time experimenting with a quick Cython implementation. That might help you (and the rest of us!) answer this question instinctively the next time around. -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From josef.pktd at gmail.com Wed Mar 4 00:11:08 2009 From: josef.pktd at gmail.com (josef.pktd at gmail.com) Date: Wed, 4 Mar 2009 00:11:08 -0500 Subject: [Numpy-discussion] Faster way to generate a rotation matrix? 
In-Reply-To: <463e11f90903032041o305dfd31r686a1e69cfc1484a@mail.gmail.com> References: <463e11f90903031552s518ca37cn9980f472a1baad79@mail.gmail.com> <463e11f90903031553p6245231fmfea3e6b04bd2a0a@mail.gmail.com> <3d375d730903031619y7727b4d3ked07cba58f9b7828@mail.gmail.com> <7f014ea60903031714h6a1a78aah6a05d8c95da032c7@mail.gmail.com> <7f014ea60903031715y3622f301u36b08c88a2c8e958@mail.gmail.com> <463e11f90903032041o305dfd31r686a1e69cfc1484a@mail.gmail.com> Message-ID: <1cd32cbb0903032111w38b913e1i7493c1c7aea0f5e1@mail.gmail.com> On Tue, Mar 3, 2009 at 11:41 PM, Jonathan Taylor wrote: > Thanks, All these things make sense and I should have known to > calculate the sins and cosines up front. I managed a few more > "tricks" and knocked off 40% of the computation time: > > def rotation(theta, R = np.zeros((3,3))): > cx,cy,cz = np.cos(theta) > sx,sy,sz = np.sin(theta) > R.flat = (cx*cz - sx*cy*sz, cx*sz + sx*cy*cz, sx*sy, > -sx*cz - cx*cy*sz, -sx*sz + cx*cy*cz, > cx*sy, sy*sz, -sy*cz, cy) > return R > > Pretty evil looking ;) but still wouldn't mind somehow getting it faster. One of the usual recommendations on the python list is also to load functions into the local scope to avoid the lookup in the module. e.g.
npcos = np.cos > or I think the usual: `from numpy import cos, sin, zeros` should be > better for speed > > also you still have a few duplicate multiplications, e.g. cx*cz, cx*sz, ..? > but this looks already like micro optimization. > > Josef > _______________________________________________ > Numpy-discussion mailing list > Numpy-discussion at scipy.org > http://projects.scipy.org/mailman/listinfo/numpy-discussion > -------------- next part -------------- An HTML attachment was scrubbed... URL: From hoytak at gmail.com Wed Mar 4 01:43:25 2009 From: hoytak at gmail.com (Hoyt Koepke) Date: Tue, 3 Mar 2009 22:43:25 -0800 Subject: [Numpy-discussion] Faster way to generate a rotation matrix? In-Reply-To: <1cd32cbb0903032111w38b913e1i7493c1c7aea0f5e1@mail.gmail.com> References: <463e11f90903031552s518ca37cn9980f472a1baad79@mail.gmail.com> <463e11f90903031553p6245231fmfea3e6b04bd2a0a@mail.gmail.com> <3d375d730903031619y7727b4d3ked07cba58f9b7828@mail.gmail.com> <7f014ea60903031714h6a1a78aah6a05d8c95da032c7@mail.gmail.com> <7f014ea60903031715y3622f301u36b08c88a2c8e958@mail.gmail.com> <463e11f90903032041o305dfd31r686a1e69cfc1484a@mail.gmail.com> <1cd32cbb0903032111w38b913e1i7493c1c7aea0f5e1@mail.gmail.com> Message-ID: <4db580fd0903032243j479d9063l930fb5d7956f14c9@mail.gmail.com> > def rotation(theta, R = np.zeros((3,3))): > cx,cy,cz = np.cos(theta) > sx,sy,sz = np.sin(theta) > R.flat = (cx*cz - sx*cy*sz, cx*sz + sx*cy*cz, sx*sy, > -sx*cz - cx*cy*sz, -sx*sz + cx*cy*cz, > cx*sy, sy*sz, -sy*cz, cy) > return R > > Pretty evil looking ;) but still wouldn't mind somehow getting it faster in cython, the above would be (something like): from numpy cimport ndarray cdef extern from "math.h": double cos(double) double sin(double) def rotation(ndarray[double] theta, ndarray[double, ndim=2] R = np.zeros((3,3))): cdef double cx = cos(theta[0]), cy = cos(theta[1]), cz = cos(theta[2]) cdef double sx = sin(theta[0]), sy = sin(theta[1]), sz = sin(theta[2]) R[0,0] = cx*cz - sx*cy*sz R[0,1] = cx*sz + sx*cy*cz R[0,2] = sx*sy ... R[2,2] = cy return R ++++++++++++++++++++++++++++++++++++++++++++++++ + Hoyt Koepke + University of Washington Department of Statistics + http://www.stat.washington.edu/~hoytak/ + hoytak at gmail.com ++++++++++++++++++++++++++++++++++++++++++ From hoytak at cs.ubc.ca Wed Mar 4 01:50:08 2009 From: hoytak at cs.ubc.ca (Hoyt Koepke) Date: Tue, 3 Mar 2009 22:50:08 -0800 Subject: [Numpy-discussion] Faster way to generate a rotation matrix?
In-Reply-To: <4db580fd0903032243j479d9063l930fb5d7956f14c9@mail.gmail.com> References: <463e11f90903031552s518ca37cn9980f472a1baad79@mail.gmail.com> <463e11f90903031553p6245231fmfea3e6b04bd2a0a@mail.gmail.com> <3d375d730903031619y7727b4d3ked07cba58f9b7828@mail.gmail.com> <7f014ea60903031714h6a1a78aah6a05d8c95da032c7@mail.gmail.com> <7f014ea60903031715y3622f301u36b08c88a2c8e958@mail.gmail.com> <463e11f90903032041o305dfd31r686a1e69cfc1484a@mail.gmail.com> <1cd32cbb0903032111w38b913e1i7493c1c7aea0f5e1@mail.gmail.com> <4db580fd0903032243j479d9063l930fb5d7956f14c9@mail.gmail.com> Message-ID: <4db580fd0903032250p2fe8051flb64b3f762c498ba9@mail.gmail.com> Hello, > def rotation(theta, R = np.zeros((3,3))): > cx,cy,cz = np.cos(theta) > sx,sy,sz = np.sin(theta) > R.flat = (cx*cz - sx*cy*sz, cx*sz + sx*cy*cz, sx*sy, > -sx*cz - cx*cy*sz, -sx*sz + cx*cy*cz, > cx*sy, sy*sz, -sy*cz, cy) > return R > > Pretty evil looking ;) but still wouldn't mind somehow getting it faster I would definitely encourage you to check out cython. I have to write lots of numerically intensive stuff in my python code, and I tend to cythonize it a lot. In cython, the above would be (something like): from numpy cimport ndarray cdef extern from "math.h": double cos(double) double sin(double) def rotation(ndarray[double] theta, ndarray[double, ndim=2] R = np.zeros((3,3))): cdef double cx = cos(theta[0]), cy = cos(theta[1]), cz = cos(theta[2]) cdef double sx = sin(theta[0]), sy = sin(theta[1]), sz = sin(theta[2]) R[0,0] = cx*cz - sx*cy*sz R[0,1] = cx*sz + sx*cy*cz R[0,2] = sx*sy ... R[2,2] = cy return R And that will probably be orders of magnitude faster than what you currently have, as everything but the function call and the return statement would become C code. Compilers these days are very good at optimizing that kind of thing too. --Hoyt ++++++++++++++++++++++++++++++++++++++++++++++++ + Hoyt Koepke + University of Washington Department of Statistics + http://www.stat.washington.edu/~hoytak/ + hoytak at gmail.com ++++++++++++++++++++++++++++++++++++++++++ From dwf at cs.toronto.edu Wed Mar 4 01:56:23 2009 From: dwf at cs.toronto.edu (David Warde-Farley) Date: Wed, 4 Mar 2009 01:56:23 -0500 Subject: [Numpy-discussion] Faster way to generate a rotation matrix? In-Reply-To: <463e11f90903032041o305dfd31r686a1e69cfc1484a@mail.gmail.com> References: <463e11f90903031552s518ca37cn9980f472a1baad79@mail.gmail.com> <463e11f90903031553p6245231fmfea3e6b04bd2a0a@mail.gmail.com> <3d375d730903031619y7727b4d3ked07cba58f9b7828@mail.gmail.com> <7f014ea60903031714h6a1a78aah6a05d8c95da032c7@mail.gmail.com> <7f014ea60903031715y3622f301u36b08c88a2c8e958@mail.gmail.com> <463e11f90903032041o305dfd31r686a1e69cfc1484a@mail.gmail.com> Message-ID: <40FF8AD3-FBC3-4B91-8F47-5255BDD3F10D@cs.toronto.edu> On 3-Mar-09, at 11:41 PM, Jonathan Taylor wrote: > def rotation(theta, R = np.zeros((3,3))): Hey Jon, Just a note, in case you haven't heard this schpiel before: be careful when you use mutables as default arguments. It can lead to unexpected behaviour down the line. The reason is that the np.zeros() is only called once when the function is read by the interpreter, and that reference is retained between calls. Example: >>> def f(a=[0,0,0]): ... print sum(a) ... a[0] += 1 ... >>> f() 0 >>> f() 1 >>> f() 2 In your example this is fine, since you're overwriting every single value in the function body, but if you weren't, it would lead to problems that would be difficult to debug. (On a related note, never try to reverse an array in-place with slice syntax, i.e. a[:] = a[::-1] is bad mojo and cost me about a day. :P) Cheers, David
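To make the hazard concrete, here is a minimal sketch (reusing Jonathan's rotation() from above; this sketch is not part of the original exchange): every call hands back the very same array object, so a second call silently overwrites the result of the first unless the caller copies it.

import numpy as np

def rotation(theta, R=np.zeros((3, 3))):
    # the default R is created once, at def time, and shared by all calls
    cx, cy, cz = np.cos(theta)
    sx, sy, sz = np.sin(theta)
    R.flat = (cx*cz - sx*cy*sz, cx*sz + sx*cy*cz, sx*sy,
              -sx*cz - cx*cy*sz, -sx*sz + cx*cy*cz, cx*sy,
              sy*sz, -sy*cz, cy)
    return R

R1 = rotation(np.array([0.1, 0.2, 0.3]))
R2 = rotation(np.array([0.4, 0.5, 0.6]))
print R1 is R2             # True: both names point at the shared default
print np.allclose(R1, R2)  # True: the first result has been clobbered

Returning R.copy(), or taking an explicit out-style argument, avoids the trap at the cost of an allocation per call.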
From robert.kern at gmail.com Wed Mar 4 01:58:54 2009 From: robert.kern at gmail.com (Robert Kern) Date: Wed, 4 Mar 2009 00:58:54 -0600 Subject: [Numpy-discussion] Faster way to generate a rotation matrix? In-Reply-To: <40FF8AD3-FBC3-4B91-8F47-5255BDD3F10D@cs.toronto.edu> References: <463e11f90903031552s518ca37cn9980f472a1baad79@mail.gmail.com> <463e11f90903031553p6245231fmfea3e6b04bd2a0a@mail.gmail.com> <3d375d730903031619y7727b4d3ked07cba58f9b7828@mail.gmail.com> <7f014ea60903031714h6a1a78aah6a05d8c95da032c7@mail.gmail.com> <7f014ea60903031715y3622f301u36b08c88a2c8e958@mail.gmail.com> <463e11f90903032041o305dfd31r686a1e69cfc1484a@mail.gmail.com> <40FF8AD3-FBC3-4B91-8F47-5255BDD3F10D@cs.toronto.edu> Message-ID: <3d375d730903032258t44205c55j2c7488b39d81e43d@mail.gmail.com> On Wed, Mar 4, 2009 at 00:56, David Warde-Farley wrote: > On 3-Mar-09, at 11:41 PM, Jonathan Taylor wrote: > >> def rotation(theta, R = np.zeros((3,3))): > > Hey Jon, > > Just a note, in case you haven't heard this schpiel before: be careful > when you use mutables as default arguments. It can lead to unexpected > behaviour down the line. > > The reason is that the np.zeros() is only called once when the > function is read by the interpreter, and that reference is retained > between calls. I'm pretty sure that's exactly why he did it, and that's what he's calling evil. -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From dwf at cs.toronto.edu Wed Mar 4 02:28:11 2009 From: dwf at cs.toronto.edu (David Warde-Farley) Date: Wed, 4 Mar 2009 02:28:11 -0500 Subject: [Numpy-discussion] Faster way to generate a rotation matrix? In-Reply-To: <3d375d730903032258t44205c55j2c7488b39d81e43d@mail.gmail.com> References: <463e11f90903031552s518ca37cn9980f472a1baad79@mail.gmail.com> <463e11f90903031553p6245231fmfea3e6b04bd2a0a@mail.gmail.com> <3d375d730903031619y7727b4d3ked07cba58f9b7828@mail.gmail.com> <7f014ea60903031714h6a1a78aah6a05d8c95da032c7@mail.gmail.com> <7f014ea60903031715y3622f301u36b08c88a2c8e958@mail.gmail.com> <463e11f90903032041o305dfd31r686a1e69cfc1484a@mail.gmail.com> <40FF8AD3-FBC3-4B91-8F47-5255BDD3F10D@cs.toronto.edu> <3d375d730903032258t44205c55j2c7488b39d81e43d@mail.gmail.com> Message-ID: On 4-Mar-09, at 1:58 AM, Robert Kern wrote: > I'm pretty sure that's exactly why he did it, and that's what he's > calling evil. As ever, such nuance is lost on me. I didn't bother to check whether or not it was in the original function. Robert to the rescue. :) It's a neat trick, the default argument one, but I'd probably shoot myself in the foot if I tried to use it. I've taken to the somewhat unpythonic convention of passing in "out" arrays everywhere to avoid repeated allocations, i.e. def foo(x, y, out=None): if out is None: out = np.empty(...) Also, I guess it only works when the dimensions are fixed (like in this case). David From dwf at cs.toronto.edu Wed Mar 4 02:30:30 2009 From: dwf at cs.toronto.edu (David Warde-Farley) Date: Wed, 4 Mar 2009 02:30:30 -0500 Subject: [Numpy-discussion] Faster way to generate a rotation matrix?
In-Reply-To: <4db580fd0903032250p2fe8051flb64b3f762c498ba9@mail.gmail.com> References: <463e11f90903031552s518ca37cn9980f472a1baad79@mail.gmail.com> <463e11f90903031553p6245231fmfea3e6b04bd2a0a@mail.gmail.com> <3d375d730903031619y7727b4d3ked07cba58f9b7828@mail.gmail.com> <7f014ea60903031714h6a1a78aah6a05d8c95da032c7@mail.gmail.com> <7f014ea60903031715y3622f301u36b08c88a2c8e958@mail.gmail.com> <463e11f90903032041o305dfd31r686a1e69cfc1484a@mail.gmail.com> <1cd32cbb0903032111w38b913e1i7493c1c7aea0f5e1@mail.gmail.com> <4db580fd0903032243j479d9063l930fb5d7956f14c9@mail.gmail.com> <4db580fd0903032250p2fe8051flb64b3f762c498ba9@mail.gmail.com> Message-ID: On 4-Mar-09, at 1:50 AM, Hoyt Koepke wrote: > I would definitely encourage you to check out cython. I have to write > lots of numerically intensive stuff in my python code, and I tend to > cythonize it a lot. Seconded. I recently took some distance computation code and Cythonized it, and I got an absolutely absurd speedup after about five minutes of effort to remind myself how numpy+cython play together (see: http://wiki.cython.org/tutorials/numpy ). I know some people are reluctant to use anything where their code isn't standard python (possibly with ctypes fiddling) but Cython is definitely worth it for replacing bottlenecks. David From cimrman3 at ntc.zcu.cz Wed Mar 4 03:13:38 2009 From: cimrman3 at ntc.zcu.cz (Robert Cimrman) Date: Wed, 04 Mar 2009 09:13:38 +0100 Subject: [Numpy-discussion] intersect1d and setmember1d In-Reply-To: References: <90CBFFFE6273484B9579400AC950800502024765@ntsydexm01.pc.internal.macquarie.com> <243385.2089.qm@web94910.mail.in2.yahoo.com> <49AB9F43.4060804@ntc.zcu.cz> <3d375d730903020928q6e0f69ddldc2a81102e8cc840@mail.gmail.com> Message-ID: <49AE3832.7040805@ntc.zcu.cz> Neil Crighton wrote: > Robert Kern gmail.com> writes: > >> Do you mind if we just add you to the THANKS.txt file, and consider >> you as a "NumPy Developer" per the LICENSE.txt as having released that >> code under the numpy license? If we're dotting our i's and crossing >> our t's legally, that's a bit more straightforward (oddly enough). >> > > No, I don't mind having it released under the numpy licence. OK, I will take care of including it - how should I proceed now? - has the workflow discussion settled somehow? r. From cimrman3 at ntc.zcu.cz Wed Mar 4 03:21:09 2009 From: cimrman3 at ntc.zcu.cz (Robert Cimrman) Date: Wed, 04 Mar 2009 09:21:09 +0100 Subject: [Numpy-discussion] intersect1d and setmember1d In-Reply-To: <49AE3832.7040805@ntc.zcu.cz> References: <90CBFFFE6273484B9579400AC950800502024765@ntsydexm01.pc.internal.macquarie.com> <243385.2089.qm@web94910.mail.in2.yahoo.com> <49AB9F43.4060804@ntc.zcu.cz> <3d375d730903020928q6e0f69ddldc2a81102e8cc840@mail.gmail.com> <49AE3832.7040805@ntc.zcu.cz> Message-ID: <49AE39F5.7060206@ntc.zcu.cz> Robert Cimrman wrote: > Neil Crighton wrote: >> Robert Kern gmail.com> writes: >> >>> Do you mind if we just add you to the THANKS.txt file, and consider >>> you as a "NumPy Developer" per the LICENSE.txt as having released that >>> code under the numpy license? If we're dotting our i's and crossing >>> our t's legally, that's a bit more straightforward (oddly enough). >>> >> No, I don't mind having it released under the numpy licence. > > OK, I will take care of including it - how should I proceed now? - has > the workflow discussion settled somehow? I have created http://projects.scipy.org/numpy/ticket/1036 - the patch will go there. r.
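For readers who have not used the two functions under discussion, a small usage sketch of the arraysetops API of that era (setmember1d documented its inputs as arrays of unique elements; ticket #1036 above is where the patch discussed in this thread went):

import numpy as np

a = np.array([5, 7, 1, 2])    # unique elements, as the docs assume
b = np.array([2, 4, 3, 1, 5])

print np.intersect1d(a, b)    # [1 2 5]  -- sorted common values
print np.setmember1d(a, b)    # [ True False  True  True]
                              # for each element of a: is it also in b?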
From cimrman3 at ntc.zcu.cz Wed Mar 4 03:27:44 2009 From: cimrman3 at ntc.zcu.cz (Robert Cimrman) Date: Wed, 04 Mar 2009 09:27:44 +0100 Subject: [Numpy-discussion] Faster way to generate a rotation matrix? In-Reply-To: <463e11f90903031553p6245231fmfea3e6b04bd2a0a@mail.gmail.com> References: <463e11f90903031552s518ca37cn9980f472a1baad79@mail.gmail.com> <463e11f90903031553p6245231fmfea3e6b04bd2a0a@mail.gmail.com> Message-ID: <49AE3B80.30907@ntc.zcu.cz> Jonathan Taylor wrote: > Sorry.. obviously having some copy and paste trouble here. The > message should be as follows: > > Hi, > > I am doing optimization on a vector of rotation angles tx,ty and tz > using scipy.optimize.fmin. Unfortunately the function that I am > optimizing needs the rotation matrix corresponding to this vector so > it is getting constructed once for each iteration with new values. >> >From profiling I can see that the function I am using to construct > this rotation matrix is a bottleneck. I am currently using: > > def rotation(theta): > tx,ty,tz = theta > > Rx = np.array([[1,0,0], [0, cos(tx), -sin(tx)], [0, sin(tx), cos(tx)]]) > Ry = np.array([[cos(ty), 0, -sin(ty)], [0, 1, 0], [sin(ty), 0, cos(ty)]]) > Rz = np.array([[cos(tz), -sin(tz), 0], [sin(tz), cos(tz), 0], [0,0,1]]) > > return np.dot(Rx, np.dot(Ry, Rz)) > > Is there a faster way to do this? Perhaps I can do this faster with a > small cython module, but this might be overkill? > > Thanks for any help, > Jonathan. An alternative to specifying the rotation by the three angles tx,ty and tz could be creating directly the rotation matrix given an axis and an angle: def make_axis_rotation_matrix(direction, angle): """ Create a rotation matrix corresponding to the rotation around a general axis by a specified angle. R = dd^T + cos(a) (I - dd^T) + sin(a) skew(d) Parameters: angle : float a direction : array d """ d = np.array(direction, dtype=np.float64) d /= np.linalg.norm(d) eye = np.eye(3, dtype=np.float64) ddt = np.outer(d, d) skew = np.array([[ 0, d[2], -d[1]], [-d[2], 0, d[0]], [d[1], -d[0], 0]], dtype=np.float64) mtx = ddt + np.cos(angle) * (eye - ddt) + np.sin(angle) * skew return mtx r. From charlesr.harris at gmail.com Wed Mar 4 04:07:40 2009 From: charlesr.harris at gmail.com (Charles R Harris) Date: Wed, 4 Mar 2009 02:07:40 -0700 Subject: [Numpy-discussion] Faster way to generate a rotation matrix? In-Reply-To: <49AE3B80.30907@ntc.zcu.cz> References: <463e11f90903031552s518ca37cn9980f472a1baad79@mail.gmail.com> <463e11f90903031553p6245231fmfea3e6b04bd2a0a@mail.gmail.com> <49AE3B80.30907@ntc.zcu.cz> Message-ID: On Wed, Mar 4, 2009 at 1:27 AM, Robert Cimrman wrote: > Jonathan Taylor wrote: > > Sorry.. obviously having some copy and paste trouble here. The > > message should be as follows: > > > > Hi, > > > > I am doing optimization on a vector of rotation angles tx,ty and tz > > using scipy.optimize.fmin. Unfortunately the function that I am > > optimizing needs the rotation matrix corresponding to this vector so > > it is getting constructed once for each iteration with new values. > >> >From profiling I can see that the function I am using to construct > > this rotation matrix is a bottleneck. 
I am currently using: > > > > def rotation(theta): > > tx,ty,tz = theta > > > > Rx = np.array([[1,0,0], [0, cos(tx), -sin(tx)], [0, sin(tx), > cos(tx)]]) > > Ry = np.array([[cos(ty), 0, -sin(ty)], [0, 1, 0], [sin(ty), 0, > cos(ty)]]) > > Rz = np.array([[cos(tz), -sin(tz), 0], [sin(tz), cos(tz), 0], > [0,0,1]]) > > > > return np.dot(Rx, np.dot(Ry, Rz)) > > > > Is there a faster way to do this? Perhaps I can do this faster with a > > small cython module, but this might be overkill? > > > > Thanks for any help, > > Jonathan. > > An alternative to specifying the rotation by the three angles tx,ty and > tz could be creating directly the rotation matrix given an axis and an > angle: > > def make_axis_rotation_matrix(direction, angle): > """ > Create a rotation matrix corresponding to the rotation around a general > axis by a specified angle. > > R = dd^T + cos(a) (I - dd^T) + sin(a) skew(d) > > Parameters: > > angle : float a > direction : array d > """ > d = np.array(direction, dtype=np.float64) > d /= np.linalg.norm(d) > > eye = np.eye(3, dtype=np.float64) > ddt = np.outer(d, d) > skew = np.array([[ 0, d[2], -d[1]], > [-d[2], 0, d[0]], > [d[1], -d[0], 0]], dtype=np.float64) > > mtx = ddt + np.cos(angle) * (eye - ddt) + np.sin(angle) * skew > return mtx > It might be worth looking at the function in the original problem to see if it can be cast to a different form. Multiplication by a 3d skew matrix can also be represented as a cross product. BTW, the formula above is the matrix exponential of a skew matrix and rotations in higher dimensions can be represented that way also. Chuck -------------- next part -------------- An HTML attachment was scrubbed... URL: From dagss at student.matnat.uio.no Wed Mar 4 06:18:43 2009 From: dagss at student.matnat.uio.no (Dag Sverre Seljebotn) Date: Wed, 04 Mar 2009 12:18:43 +0100 Subject: [Numpy-discussion] PyArray_SETITEM with object arrays in Cython In-Reply-To: <6c476c8a0902111412y2cb6985x973ff830d1a1412@mail.gmail.com> References: <6c476c8a0902110710q4c9b16e4vcc3e315f64a84caf@mail.gmail.com> <76550d7a4d21a5fbd6764825cc547ae1.squirrel@webmail.uio.no> <6c476c8a0902111412y2cb6985x973ff830d1a1412@mail.gmail.com> Message-ID: <49AE6393.1000602@student.matnat.uio.no> Wes McKinney wrote: > This still doesn't explain why the buffer interface was slow. I finally remembered to look at this; there seems to be a problem in your code: > def reindexObject(ndarray[object, ndim=1] index, > ndarray[object, ndim=1] arr, > dict idxMap): > ''' > Using the provided new index, a given array, and a mapping of > index-value > correspondences in the value array, return a new ndarray conforming to > the new index. > ''' > cdef object idx, value > > cdef int length = index.shape[0] > cdef ndarray[object, ndim = 1] result = np.empty(length, dtype=object) > > cdef int i = 0 > for i from 0 <= i < length: > idx = index[i] > if not PyDict_Contains(idxMap, idx): > result[i] = None > continue > value = arr[idxMap[idx]] > result[i] = value > return result The problem is with arr[idxMap[idx]]. The result from idxMap[idx] is a Python object, which leads to non-efficient indexing. Assign it to a typed variable first (e.g. cdef Py_ssize_t j = idxMap[idx]) and index with arr[j] instead. Dag Sverre
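A minimal sketch of the repair being described, reduced to a hypothetical helper (the names are illustrative, not from the original code): coercing the dict value to a C integer once lets the indexing below use the fast typed-buffer path rather than a generic Python item lookup.

from numpy cimport ndarray

def lookup(ndarray[object, ndim=1] arr, dict idxMap, object idx):
    # idxMap[idx] is a Python object; assigning it to a typed
    # variable converts it to a C integer exactly once...
    cdef Py_ssize_t j = idxMap[idx]
    # ...so this index is a direct buffer access, not __getitem__
    return arr[j]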
From sturla at molden.no Wed Mar 4 06:57:43 2009 From: sturla at molden.no (Sturla Molden) Date: Wed, 04 Mar 2009 12:57:43 +0100 Subject: [Numpy-discussion] loadtxt issues In-Reply-To: <7cc4bc500902102140h6cc85bb3wa28c89a147ddce73@mail.gmail.com> References: <7cc4bc500902102140h6cc85bb3wa28c89a147ddce73@mail.gmail.com> Message-ID: <49AE6CB7.9030901@molden.no> On 2/11/2009 6:40 AM, A B wrote: > Hi, > > How do I write a loadtxt command to read in the following file and > store each data point as the appropriate data type: > > 12|h|34.5|44.5 > 14552|bbb|34.5|42.5 > dt = {'names': ('gender','age','weight','bal'), 'formats': ('i4', > 'S4','f4', 'f4')} Does this work for you? dt = {'names': ('gender','age','weight','bal'), 'formats': ('i4','S4','f4', 'f4')} with open('filename.txt', 'rt') as f: linelst = [line.strip('\n').split('|') for line in f] n = len(linelst) data = numpy.zeros(n, dtype=numpy.dtype(dt)) for i, (gender, age, weight, bal) in enumerate(linelst): data[i] = (int(gender), age, float(weight), float(bal)) S.M. From sturla at molden.no Wed Mar 4 07:01:10 2009 From: sturla at molden.no (Sturla Molden) Date: Wed, 04 Mar 2009 13:01:10 +0100 Subject: [Numpy-discussion] loadtxt issues In-Reply-To: <49AE6CB7.9030901@molden.no> References: <7cc4bc500902102140h6cc85bb3wa28c89a147ddce73@mail.gmail.com> <49AE6CB7.9030901@molden.no> Message-ID: <49AE6D86.10605@molden.no> On 3/4/2009 12:57 PM, Sturla Molden wrote: > Does this work for you? Never mind, it seems my e-mail got messed up. I ought to keep them sorted by date... S.M.
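For the record, the single call the original loadtxt question appears to be after would look something like this (a sketch, not verified against the loadtxt of that vintage; 'filename.txt' stands in for the real file):

import numpy as np

dt = {'names': ('gender', 'age', 'weight', 'bal'),
      'formats': ('i4', 'S4', 'f4', 'f4')}
data = np.loadtxt('filename.txt', dtype=dt, delimiter='|')
print data['gender']   # int32 column
print data['bal']      # float32 column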
Second, (I am guessing here -- the profile will tell you) that the bottleneck is the "call back" to the rotation matrix function from the optimizer. That can be expensive if the optimizer is doing it a lot. I had a similar situation with a numerical integration scheme using SciPy. When I wrote a C version of the integration it ran 10 times faster. Can you get a C-optimizer? Then use ctypes or something else to call it all from Python? -- Lou Pecora, my views are my own. --- On Tue, 3/3/09, Jonathan Taylor wrote: > From: Jonathan Taylor > Subject: Re: [Numpy-discussion] Faster way to generate a rotation matrix? > To: "Discussion of Numerical Python" > Date: Tuesday, March 3, 2009, 11:41 PM > Thanks, All these things make sense and I should have known > to > calculate the sins and cosines up front. I managed a few > more > "tricks" and knocked off 40% of the computation > time: > > def rotation(theta, R = np.zeros((3,3))): > cx,cy,cz = np.cos(theta) > sx,sy,sz = np.sin(theta) > R.flat = (cx*cz - sx*cy*sz, cx*sz + sx*cy*cz, sx*sy, > -sx*cz - cx*cy*sz, -sx*sz + cx*cy*cz, > cx*sy, sy*sz, -sy*cz, cy) > return R > > Pretty evil looking ;) but still wouldn't mind somehow > getting it faster. > > Am I right in thinking that I wouldn't get much of a > speedup by > rewriting this in C as most of the time is spent in > necessary python > functions? > > Thanks again, > Jon. From david at ar.media.kyoto-u.ac.jp Wed Mar 4 08:13:54 2009 From: david at ar.media.kyoto-u.ac.jp (David Cournapeau) Date: Wed, 04 Mar 2009 22:13:54 +0900 Subject: [Numpy-discussion] Why using cblas in umath_test ? Message-ID: <49AE7E92.7020202@ar.media.kyoto-u.ac.jp> Hi, I re-enabled umath tests (to test generalized ufuncs), to fix remaining issues, but I think there is something fundamentally wrong with it: it assumes cblas is available, which is not true. It happens to work on (some) Linux and mac os X only because those platforms provide cblas and blas in the same libraries (Atlas, Accelerate framework). Is there a rationale for using cblas at all ? Why not using straight C functions - it is not like we care about speed for tests, right ? Or am I missing something ? cheers, David From fperez.net at gmail.com Wed Mar 4 08:51:16 2009 From: fperez.net at gmail.com (Fernando Perez) Date: Wed, 4 Mar 2009 05:51:16 -0800 Subject: [Numpy-discussion] ANN: python for scientific computing at SIAM CSE 09 Message-ID: Hi all, sorry for the spam, but in case any of you are coming to the SIAM Conference on Computational Science and Engineering (CSE09) in Miami: http://www.siam.org/meetings/cse09/ you might be interested in stopping by the Python sessions on Thursday: http://meetings.siam.org/sess/dsp_programsess.cfm?SESSIONCODE=8044 http://meetings.siam.org/sess/dsp_programsess.cfm?SESSIONCODE=8045 http://meetings.siam.org/sess/dsp_programsess.cfm?SESSIONCODE=8046 Think of it as the East Coast March mini-edition of Scipy'09 ;) Cheers, f From python-ml at nn7.de Wed Mar 4 09:01:39 2009 From: python-ml at nn7.de (Soeren Sonnenburg) Date: Wed, 04 Mar 2009 15:01:39 +0100 Subject: [Numpy-discussion] calling _import_array() twice crashes python Message-ID: <1236175299.29694.39.camel@localhost> Dear all, I've written a wrapper enabling to run python code from within octave (and vice versa). To this end I am embedding python in octave. So I am calling Py_Initialize(); _import_array(); Py_Finalize(); multiple times. While things work nicely on the first run, I am getting a crash on _import_array() on the second run... 
Is there any cleanup function I should potentially call that would prevent this? Soeren From oliphant at enthought.com Wed Mar 4 09:24:06 2009 From: oliphant at enthought.com (Travis E. Oliphant) Date: Wed, 04 Mar 2009 08:24:06 -0600 Subject: [Numpy-discussion] NumPy SVN? Message-ID: <49AE8F06.1060909@enthought.com> Is commit to NumPy SVN still turned off? How do I get a working SVN again? -Travis From david at ar.media.kyoto-u.ac.jp Wed Mar 4 09:37:40 2009 From: david at ar.media.kyoto-u.ac.jp (David Cournapeau) Date: Wed, 04 Mar 2009 23:37:40 +0900 Subject: [Numpy-discussion] NumPy SVN? In-Reply-To: <49AE8F06.1060909@enthought.com> References: <49AE8F06.1060909@enthought.com> Message-ID: <49AE9234.1010902@ar.media.kyoto-u.ac.jp> Travis E. Oliphant wrote: > Is commit to NumPy SVN still turned off? How do I get a working SVN > again? > It is on - I could commit a few things 1-2 hours ago. If you still get an administrative error message ("repo is read only ..."), it means you are on the old repo. cheers, David From pwang at enthought.com Wed Mar 4 10:05:39 2009 From: pwang at enthought.com (Peter Wang) Date: Wed, 4 Mar 2009 09:05:39 -0600 Subject: [Numpy-discussion] NumPy SVN? In-Reply-To: <49AE9234.1010902@ar.media.kyoto-u.ac.jp> References: <49AE8F06.1060909@enthought.com> <49AE9234.1010902@ar.media.kyoto-u.ac.jp> Message-ID: On Mar 4, 2009, at 8:37 AM, David Cournapeau wrote: > Travis E. Oliphant wrote: >> Is commit to NumPy SVN still turned off? How do I get a working SVN >> again? >> > > It is on - I could commit a few things 1-2 hours ago. If you still get > an administrative error message ("repo is read only ..."), it means > you > are on the old repo. > > cheers, > David Yeah, this is an Enthought-internal IT issue, which I will fix this morning as soon as I get in to the office. -Peter From scott.sinclair.za at gmail.com Wed Mar 4 10:06:30 2009 From: scott.sinclair.za at gmail.com (Scott Sinclair) Date: Wed, 4 Mar 2009 17:06:30 +0200 Subject: [Numpy-discussion] NumPy SVN? In-Reply-To: <49AE9234.1010902@ar.media.kyoto-u.ac.jp> References: <49AE8F06.1060909@enthought.com> <49AE9234.1010902@ar.media.kyoto-u.ac.jp> Message-ID: <6a17e9ee0903040706q4554273eraadfec36c61fa992@mail.gmail.com> > 2009/3/4 David Cournapeau : > Travis E. Oliphant wrote: >> Is commit to NumPy SVN still turned off? ? How do I get a working SVN >> again? > > It is on - I could commit a few things 1-2 hours ago. If you still get > an administrative error message ("repo is read only ..."), it means you > are on the old repo. The local checkout on my machine stopped working: $ svn up svn: 'http://scipy.org/svn/numpy/trunk' path not found I had to do a fresh checkout from http://svn.scipy.org/svn/numpy/ (note changed URL). Maybe the problem is related? It looks like http://scipy.org/svn/numpy no longer resolves to the SVN repository... Cheers, Scott From cournape at gmail.com Wed Mar 4 10:13:44 2009 From: cournape at gmail.com (David Cournapeau) Date: Thu, 5 Mar 2009 00:13:44 +0900 Subject: [Numpy-discussion] NumPy SVN? In-Reply-To: <6a17e9ee0903040706q4554273eraadfec36c61fa992@mail.gmail.com> References: <49AE8F06.1060909@enthought.com> <49AE9234.1010902@ar.media.kyoto-u.ac.jp> <6a17e9ee0903040706q4554273eraadfec36c61fa992@mail.gmail.com> Message-ID: <5b8d13220903040713x4cadf587hd7e3518365977105@mail.gmail.com> On Thu, Mar 5, 2009 at 12:06 AM, Scott Sinclair wrote: > I had to do a fresh checkout from http://svn.scipy.org/svn/numpy/ > (note changed URL). I did not know we could access svn from scipy.org. 
I have alway used svn.scipy.org - in which case you don't need to do anything to go to the new repo. It should be transparent. David From lou_boog2000 at yahoo.com Wed Mar 4 10:28:11 2009 From: lou_boog2000 at yahoo.com (Lou Pecora) Date: Wed, 4 Mar 2009 07:28:11 -0800 (PST) Subject: [Numpy-discussion] Faster way to generate a rotation matrix? In-Reply-To: <49AE719B.4030109@molden.no> Message-ID: <361283.282.qm@web34405.mail.mud.yahoo.com> Whoops. I see you have profiled your code. Sorry to re-suggest that. But I agree with those who suggest a C speed up using ctypes or cthyon. However, thanks for posting your question. It caused a LOT of very useful responses that I didn't know about. Thanks to all who replied. -- Lou Pecora, my views are my own. From bsouthey at gmail.com Wed Mar 4 11:33:23 2009 From: bsouthey at gmail.com (Bruce Southey) Date: Wed, 04 Mar 2009 10:33:23 -0600 Subject: [Numpy-discussion] NumPy SVN? In-Reply-To: <5b8d13220903040713x4cadf587hd7e3518365977105@mail.gmail.com> References: <49AE8F06.1060909@enthought.com> <49AE9234.1010902@ar.media.kyoto-u.ac.jp> <6a17e9ee0903040706q4554273eraadfec36c61fa992@mail.gmail.com> <5b8d13220903040713x4cadf587hd7e3518365977105@mail.gmail.com> Message-ID: <49AEAD53.1020601@gmail.com> David Cournapeau wrote: > On Thu, Mar 5, 2009 at 12:06 AM, Scott Sinclair > wrote: > > >> I had to do a fresh checkout from http://svn.scipy.org/svn/numpy/ >> (note changed URL). >> > > I did not know we could access svn from scipy.org. I have alway used > svn.scipy.org - in which case you don't need to do anything to go to > the new repo. It should be transparent. > > David > _______________________________________________ > Numpy-discussion mailing list > Numpy-discussion at scipy.org > http://projects.scipy.org/mailman/listinfo/numpy-discussion > See links at: http://www.scipy.org/Download ''' Bleeding-edge repository access (See also the Developer Zone.) NumPy svn co http://scipy.org/svn/numpy/trunk numpy SciPy svn co http://scipy.org/svn/scipy/trunk scipy ''' But the Developer Zone page (http://www.scipy.org/Developer_Zone) refers to the other eg for numpy: http://svn.scipy.org/svn/numpy/trunk/ Bruce From cournape at gmail.com Wed Mar 4 11:58:57 2009 From: cournape at gmail.com (David Cournapeau) Date: Thu, 5 Mar 2009 01:58:57 +0900 Subject: [Numpy-discussion] NumPy SVN? In-Reply-To: <49AEAD53.1020601@gmail.com> References: <49AE8F06.1060909@enthought.com> <49AE9234.1010902@ar.media.kyoto-u.ac.jp> <6a17e9ee0903040706q4554273eraadfec36c61fa992@mail.gmail.com> <5b8d13220903040713x4cadf587hd7e3518365977105@mail.gmail.com> <49AEAD53.1020601@gmail.com> Message-ID: <5b8d13220903040858x195f668dw60474241439e5a2b@mail.gmail.com> On Thu, Mar 5, 2009 at 1:33 AM, Bruce Southey wrote: > David Cournapeau wrote: >> On Thu, Mar 5, 2009 at 12:06 AM, Scott Sinclair >> wrote: >> >> >>> I had to do a fresh checkout from http://svn.scipy.org/svn/numpy/ >>> (note changed URL). >>> >> >> I did not know we could access svn from scipy.org. I have alway used >> svn.scipy.org - in which case you don't need to do anything to go to >> the new repo. It should be transparent. >> >> David >> _______________________________________________ >> Numpy-discussion mailing list >> Numpy-discussion at scipy.org >> http://projects.scipy.org/mailman/listinfo/numpy-discussion >> > See links at: > http://www.scipy.org/Download > > ''' > Bleeding-edge repository access > (See also the Developer Zone.) 
> NumPy > svn co http://scipy.org/svn/numpy/trunk numpy > SciPy > svn co http://scipy.org/svn/scipy/trunk scipy > ''' Damn wiki, I wanted to fix that, but it was already fixed. David From borreguero at gmail.com Wed Mar 4 14:17:26 2009 From: borreguero at gmail.com (Jose Borreguero) Date: Wed, 4 Mar 2009 14:17:26 -0500 Subject: [Numpy-discussion] how to multiply the rows of a matrix by a different number? In-Reply-To: <1cd32cbb0903031811v741946c6ne20efe35eb3593a9@mail.gmail.com> References: <7cced4ed0903031753s11ae1aa9p3de97d88c4ba0837@mail.gmail.com> <1cd32cbb0903031811v741946c6ne20efe35eb3593a9@mail.gmail.com> Message-ID: <7cced4ed0903041117w44129abby458e56299e6e9f8@mail.gmail.com> Sweet! I found that *M*b.reshape(10000,1)* will also do the trick. Any guess which method is faster? On Tue, Mar 3, 2009 at 9:11 PM, wrote: > On Tue, Mar 3, 2009 at 8:53 PM, Jose Borreguero > wrote: > > I guess there has to be an easy way for this. I have: > > M.shape=(10000,3) > > N.shape=(10000,) > > > > I want to do this: > > for i in range(10000): > > M[i]*=N[i] > > without the explicit loop > > > > >>> M = np.ones((10,3)) > >>> N = np.arange(10) > >>> N.shape > (10,) > >>> (N[:,np.newaxis]).shape > (10, 1) > >>> M*N[:,np.newaxis] > array([[ 0., 0., 0.], > [ 1., 1., 1.], > [ 2., 2., 2.], > [ 3., 3., 3.], > [ 4., 4., 4.], > [ 5., 5., 5.], > [ 6., 6., 6.], > [ 7., 7., 7.], > [ 8., 8., 8.], > [ 9., 9., 9.]]) > > >>> M *= N[:,np.newaxis] > >>> M > > Josef > _______________________________________________ > Numpy-discussion mailing list > Numpy-discussion at scipy.org > http://projects.scipy.org/mailman/listinfo/numpy-discussion > -------------- next part -------------- An HTML attachment was scrubbed... URL: From charlesr.harris at gmail.com Wed Mar 4 15:18:55 2009 From: charlesr.harris at gmail.com (Charles R Harris) Date: Wed, 4 Mar 2009 13:18:55 -0700 Subject: [Numpy-discussion] Apropos ticked #913 Message-ID: Hi David, It isn't clear to me that trac is mailing out the ticket reponses, so I'm posting to the list. > #913: max is bogus if nan is in the array > --------------------+------------------------------------------------------- > Reporter: cdavid | Owner: somebody > Type: defect | Status: closed > Priority: normal | Milestone: 1.3.0 > Component: Other | Version: none > Severity: major | Resolution: fixed > Keywords: | > --------------------+------------------------------------------------------- > > Comment(by cdavid): > > What's the status on this, Chuck ? What do max/min and amax/amin do ? There are python max/min and their behaviour depends on the scalar type. I haven't looked at the numpy scalars to see precisely what they do. Numpy max/min are aliases for amax/amin defined when the core is imported. The functions amax/amin in turn map to the array methods max/min which call the maximum.reduce/minimum.reduce ufuncs, so they all propagate nans, i.e., if the array contains a nan, nan will be the return value. The nonpropagating comparisons are the ufuncs fmax/fmin and there are no corresponding array methods. I think fmax/fmin should be renamed fmaximum/fminimum before the release of 1.3 and the names fmax/fmin reserved for the reduced versions to match the names amax/amin. I'll do that if there are no objections. Chuck -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From stefan at sun.ac.za Wed Mar 4 15:37:48 2009 From: stefan at sun.ac.za (=?ISO-8859-1?Q?St=E9fan_van_der_Walt?=) Date: Wed, 4 Mar 2009 22:37:48 +0200 Subject: [Numpy-discussion] Apropos ticked #913 In-Reply-To: References: Message-ID: <9457e7c80903041237h212f1ba2v312f13ca1e4d0fc6@mail.gmail.com> 2009/3/4 Charles R Harris : > It isn't clear to me that trac is mailing out the ticket reponses, so I'm > posting to the list. It should work now. Cheers St?fan From charlesr.harris at gmail.com Wed Mar 4 15:48:20 2009 From: charlesr.harris at gmail.com (Charles R Harris) Date: Wed, 4 Mar 2009 13:48:20 -0700 Subject: [Numpy-discussion] Apropos ticked #913 In-Reply-To: <9457e7c80903041237h212f1ba2v312f13ca1e4d0fc6@mail.gmail.com> References: <9457e7c80903041237h212f1ba2v312f13ca1e4d0fc6@mail.gmail.com> Message-ID: On Wed, Mar 4, 2009 at 1:37 PM, St?fan van der Walt wrote: > 2009/3/4 Charles R Harris : > > It isn't clear to me that trac is mailing out the ticket reponses, so I'm > > posting to the list. > > It should work now. > I think it was sending mail to an old address. I've re-updated my account and hopefully that will fix it. Chuck -------------- next part -------------- An HTML attachment was scrubbed... URL: From pav at iki.fi Wed Mar 4 15:57:15 2009 From: pav at iki.fi (Pauli Virtanen) Date: Wed, 4 Mar 2009 20:57:15 +0000 (UTC) Subject: [Numpy-discussion] Apropos ticked #913 References: Message-ID: Wed, 04 Mar 2009 13:18:55 -0700, Charles R Harris wrote: [clip] > There are python max/min and their behaviour depends on the scalar type. > I haven't looked at the numpy scalars to see precisely what they do. > > Numpy max/min are aliases for amax/amin defined when the core is > imported. The functions amax/amin in turn map to the array methods > max/min which call the maximum.reduce/minimum.reduce ufuncs, so they all > propagate nans, i.e., if the array contains a nan, nan will be the > return value. > > The nonpropagating comparisons are the ufuncs fmax/fmin and there are no > corresponding array methods. I think fmax/fmin should be renamed > fmaximum/fminimum before the release of 1.3 and the names fmax/fmin > reserved for the reduced versions to match the names amax/amin. I'll do > that if there are no objections. Aren't the nonpropagating versions of `amax` and `amin` called `nanmax` and `nanmin`? But these are functions, not array methods. What does the `f` in the beginning of `fmax` and `fmin` stand for? -- Pauli Virtanen From charlesr.harris at gmail.com Wed Mar 4 16:25:25 2009 From: charlesr.harris at gmail.com (Charles R Harris) Date: Wed, 4 Mar 2009 14:25:25 -0700 Subject: [Numpy-discussion] Apropos ticked #913 In-Reply-To: References: Message-ID: On Wed, Mar 4, 2009 at 1:57 PM, Pauli Virtanen wrote: > Wed, 04 Mar 2009 13:18:55 -0700, Charles R Harris wrote: > [clip] > > There are python max/min and their behaviour depends on the scalar type. > > I haven't looked at the numpy scalars to see precisely what they do. > > > > Numpy max/min are aliases for amax/amin defined when the core is > > imported. The functions amax/amin in turn map to the array methods > > max/min which call the maximum.reduce/minimum.reduce ufuncs, so they all > > propagate nans, i.e., if the array contains a nan, nan will be the > > return value. > > > > The nonpropagating comparisons are the ufuncs fmax/fmin and there are no > > corresponding array methods. 
I think fmax/fmin should be renamed > > fmaximum/fminimum before the release of 1.3 and the names fmax/fmin > > reserved for the reduced versions to match the names amax/amin. I'll do > > that if there are no objections. > > Aren't the nonpropagating versions of `amax` and `amin` called `nanmax` > and `nanmin`? But these are functions, not array methods. > > What does the `f` in the beginning of `fmax` and `fmin` stand for? > The functions fmax/fmin are C standard library names, I assume the f stands for floating like the f in fabs. Nanmax and nanmin work by replacing nans with a fill value and then performing the specified operation. For instance, nanmin replaces nans with inf. In contrast, the functions fmax and fmin are real ufuncs and return nan when *both* the inputs are nans, return the non-nan value when only one of the inputs is a nan, and do the normal comparisons when both inputs are valid.
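An illustrative session (typed from memory rather than pasted from an interpreter, so treat the exact output as a sketch of the intended behavior):

>>> import numpy as np
>>> np.maximum(1.0, np.nan)   # propagating comparison: any nan poisons the result
nan
>>> np.fmax(1.0, np.nan)      # nonpropagating: nan only when *both* inputs are nan
1.0
>>> np.fmax(np.nan, np.nan)
nan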
Chuck -------------- next part -------------- An HTML attachment was scrubbed... URL: From gareth.elston.floss at googlemail.com Wed Mar 4 17:10:10 2009 From: gareth.elston.floss at googlemail.com (Gareth Elston) Date: Wed, 4 Mar 2009 22:10:10 +0000 Subject: [Numpy-discussion] A module for homogeneous transformation matrices, Euler angles and quaternions Message-ID: <2352c0540903041410j263dbb4dk6d6a2662ae7c4216@mail.gmail.com> I found a nice module for these transforms at http://www.lfd.uci.edu/~gohlke/code/transformations.py.html . I've been using an older version for some time and thought it might make a good addition to numpy/scipy. I made some simple mods to the older version to add a couple of functions I needed and to allow it to be used with Python 2.4. The module is pure Python (2.5, with numpy 1.2 imported), includes doctests, and is BSD licensed. Here's the first part of the module docstring: """Homogeneous Transformation Matrices and Quaternions. A library for calculating 4x4 matrices for translating, rotating, mirroring, scaling, shearing, projecting, orthogonalizing, and superimposing arrays of homogenous coordinates as well as for converting between rotation matrices, Euler angles, and quaternions. """ I'd like to see this added to numpy/scipy so I know I've got some reading to do (scipy.org/Developer_Zone and the huge scipy-dev discussions on Scipy development infrastructure / workflow) to make sure it follows the guidelines, but where would people like to see this? In numpy? scipy? scikits? elsewhere? I seem to remember that there was a first draft of a guide for developers being written. Are there any links available? Thanks, Gareth. From dagss at student.matnat.uio.no Wed Mar 4 17:24:40 2009 From: dagss at student.matnat.uio.no (Dag Sverre Seljebotn) Date: Wed, 04 Mar 2009 23:24:40 +0100 Subject: [Numpy-discussion] Cython numerical syntax revisited Message-ID: <49AEFFA8.3050903@student.matnat.uio.no> This is NOT yet discussed on the Cython list; I wanted to check with more numerical users to see if the issue should even be brought up there. The idea behind the current syntax was to keep things as close as possible to Python/NumPy, and only provide some "hints" to Cython for optimization. My problem with this now is that a) it's too easy to get non-optimized code without a warning by letting in untyped indices, b) I think the whole thing is a bit too "magic" and that it is too unclear what is going on to newcomers (though I'm guessing there). My proposal: Introduce an explicit "buffer syntax":

arr = np.zeros(..)
cdef int[:,:] buf = arr # 2D buffer

Here, buf would be something else than arr; it is a separate view to the array for low-level purposes. This has certain disadvantages; consider:

a1 = np.zeros(...) + 1; a2 = np.zeros(...) + 2
cdef int[:] b1 = a1, b2 = a2

Here, one would need to use b1 and b2 for for-loop arithmetic, but a1 and a2 for vectorized operations and slicing.
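Concretely, the split would look something like this (a sketch of the proposed semantics only; none of this is implemented today):

cdef Py_ssize_t i
for i in range(b1.shape[0]):
    b1[i] += b2[i]    # element access goes through the typed buffer views
a3 = a1 * 2 + a2      # vectorized arithmetic still goes through the Python objects
a4 = a1[::2]          # ...and so does slicing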
"b1 + b2" would mean something else and not-NumPy-related (at first disallowed, but see below). "print b1" would likely coerce b1 to a Python memoryview and print "" (at least on newer Python versions); one would need to use some function to get b1 back to a NumPy array. Advantages:
- More explicit
- Leaves a path open in the syntax for introducing low-level slicing and arithmetic as separate operations in Cython independent of NumPy (think Numexpr compile-time folded into Cython code).
- Possible to have some creative syntax like "int[0:]" for disallowing negative wraparound and perhaps even "int[-3:]" for non-zero-based indexing.
More details: http://wiki.cython.org/enhancements/buffersyntax (Of course, compatibility with existing code will be taken seriously no matter how this plays out!) -- Dag Sverre From jonathan.taylor at utoronto.ca Wed Mar 4 18:02:18 2009 From: jonathan.taylor at utoronto.ca (Jonathan Taylor) Date: Wed, 4 Mar 2009 18:02:18 -0500 Subject: [Numpy-discussion] Faster way to generate a rotation matrix? In-Reply-To: <361283.282.qm@web34405.mail.mud.yahoo.com> References: <49AE719B.4030109@molden.no> <361283.282.qm@web34405.mail.mud.yahoo.com> Message-ID: <463e11f90903041502l7d480bebxcb633df8d87e6413@mail.gmail.com> Just for other people's reference I eventually went with a cython version that goes about twice as fast as my old post. Here it is:

import numpy as np
cimport numpy as np

cdef extern from "math.h":
    double cos(double)
    double sin(double)

def rotation(np.ndarray[double] theta):
    cdef np.ndarray[double, ndim=2] R = np.zeros((3,3))
    cdef double cx = cos(theta[0]), cy = cos(theta[1]), cz = cos(theta[2])
    cdef double sx = sin(theta[0]), sy = sin(theta[1]), sz = sin(theta[2])
    R[0,0] = cx*cz - sx*cy*sz
    R[0,1] = cx*sz + sx*cy*cz
    R[0,2] = sx*sy
    R[1,0] = -sx*cz - cx*cy*sz
    R[1,1] = -sx*sz + cx*cy*cz
    R[1,2] = cx*sy
    R[2,0] = sy*sz
    R[2,1] = -sy*cz
    R[2,2] = cy
    return R

Best, Jon. On Wed, Mar 4, 2009 at 10:28 AM, Lou Pecora wrote: > > Whoops. I see you have profiled your code. Sorry to re-suggest that. > > But I agree with those who suggest a C speed up using ctypes or Cython. > > However, thanks for posting your question. It caused a LOT of very useful responses that I didn't know about. Thanks to all who replied. > > -- Lou Pecora, my views are my own. > > > > > _______________________________________________ > Numpy-discussion mailing list > Numpy-discussion at scipy.org > http://projects.scipy.org/mailman/listinfo/numpy-discussion > From strawman at astraw.com Wed Mar 4 18:08:09 2009 From: strawman at astraw.com (Andrew Straw) Date: Wed, 04 Mar 2009 15:08:09 -0800 Subject: [Numpy-discussion] Cython numerical syntax revisited In-Reply-To: <49AEFFA8.3050903@student.matnat.uio.no> References: <49AEFFA8.3050903@student.matnat.uio.no> Message-ID: <49AF09D9.1090408@astraw.com> Dag Sverre Seljebotn wrote: > This is NOT yet discussed on the Cython list; I wanted to check with > more numerical users to see if the issue should even be brought up there. > > The idea behind the current syntax was to keep things as close as > possible to Python/NumPy, and only provide some "hints" to Cython for > optimization. My problem with this now is that a) it's too easy to get > non-optimized code without a warning by letting in untyped indices, b) I > think the whole thing is a bit too "magic" and that it is too unclear > what is going on to newcomers (though I'm guessing there). These may be issues, but I think keeping "cython -a my_module.pyx" in one's development cycle and inspecting the output will lead to great enlightenment on the part of the Cython user. Perhaps this should be advertised more prominently? I always do this with any Cython-generated code, and it works wonders. > My proposal: Introduce an explicit "buffer syntax": > > arr = np.zeros(..) > cdef int[:,:] buf = arr # 2D buffer My initial reaction is that it seems to be a second implementation of buffer interaction in Cython, and therefore yet another thing to keep in mind and it's unclear to me how different it would be from the "traditional" Cython numpy ndarray behavior and how the behavior of the two approaches might differ, perhaps in subtle ways. So that's a disadvantage from my perspective. I agree that some of your ideas are advantages, however. Also, it seems it would allow one to (more easily) interact with buffer objects in sophisticated ways without needing the GIL, which is another advantage. Could some or all of this be added to the current numpy buffer implementation, or does it really need the new syntax? Also, is there anything possible with buffer objects that would be limited by the choice of syntax you propose? I imagine this might not work with structured data types (then again, it might...). From sturla at molden.no Wed Mar 4 19:11:49 2009 From: sturla at molden.no (Sturla Molden) Date: Thu, 5 Mar 2009 01:11:49 +0100 (CET) Subject: [Numpy-discussion] Cython numerical syntax revisited In-Reply-To: <49AEFFA8.3050903@student.matnat.uio.no> References: <49AEFFA8.3050903@student.matnat.uio.no> Message-ID: > arr = np.zeros(..) > cdef int[:,:] buf = arr # 2D buffer > > Here, buf would be something else than arr; it is a separate view to the > array for low-level purposes. I like your proposal. The reason we use Fortran for numerical computing is that Fortran makes it easy to manipulate arrays. C or C++ sucks terribly for anything related to numerical computing, and arrays in particular. Cython is currently not better than C. The ndarray syntax you added last summer is useless if we need to pass the array or a view/slice to another function. That is almost always the case. While the syntax is there, the overhead is unbearable, and it doesn't even work with cdefs. Thus one is back to working with those pesky C pointers. And they are even more pesky in Cython, because pointer arithmetic is disallowed. Currently, I think the best way to use Cython with numpy is to call PyArray_AsCArray and use the normal C idiom array[i][j][k]. This works well if we define some array type with C99 restrict pointers:

ctypedef double *array1D_t "double *restrict"
ctypedef double **array2D_t "double *restrict *restrict"
ctypedef double ***array3D_t "double *restrict *restrict *restrict"

Thus, having Cython emit C99 takes away some of the pain, but it's not nearly as nice as Fortran and f2py. Creating a subarray will e.g. be very painful. In Fortran we just slice the array like we do in Python with NumPy, and the compiler takes care of the rest. > - Leaves a path open in the syntax for introducing low-level slicing and > arithmetic as separate operations in Cython independent of NumPy (think > Numexpr compile-time folded into Cython code). 
Fortran 90/95 does this already, which is a major reason for choosing it for numerical computing. If you have something like this working, I believe many scientists would be happy to retire Fortran. It's not that anyone likes it that much. Anyhow, I don't see myself retiring Fortran and f2py any time soon. Sturla Molden From mail at stevesimmons.com Wed Mar 4 19:54:23 2009 From: mail at stevesimmons.com (Stephen Simmons) Date: Thu, 05 Mar 2009 01:54:23 +0100 Subject: [Numpy-discussion] Example code for Numpy C preprocessor 'repeat' directive? In-Reply-To: References: Message-ID: <49AF22BF.2050902@stevesimmons.com> Hi, Please can someone suggest resources for learning how to use the 'repeat' macros in numpy C code to avoid repeating sections of type-specific code for each data type? Ideally there would be two types of resources: (i) a description of how the repeat macros are meant to be used/compiled; and (ii) suggestion for a numpy source file that best illustrates this. Thanks in advance! Stephen P.S. The motivation for this is that I'm trying to write an optimised numpy implementation of SQL-style aggregation operators for an OLAP data analysis project (using PyTables to store large numpy data sets). bincount() is being used to implement "SELECT SUM(x) FROM TBL WHERE y GROUP BY fn(z)".
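In NumPy terms the pattern is roughly this (illustrative only; g stands for the integer group labels computed by fn(z)):

import numpy as np
x = np.array([1.0, 2.0, 3.0, 4.0])        # values to aggregate
y = np.array([True, True, False, True])   # WHERE filter
g = np.array([0, 1, 0, 1])                # GROUP BY labels
sums = np.bincount(g[y], weights=x[y])    # sums[k] == SUM(x) for group k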
My modified bincount code can handle a wider variety of index, weight and output array data types. It also supports passing in the output array as a parameter, allowing multipass aggregation routines. I got the code working for a small number of data type combinations, but now I'm drowning in an exponential explosion of manually maintained data type combinations:

---snip----
} else if ((weight_type==NPY_FLOAT)&&(out_type==PyArray_DOUBLE)) {
    ...
} else if (bin_type==PyArray_INTP) {
    for (i=0; i<len; i++) {
        bin = ((npy_intp *) bin_data)[i];
        if (bin>=0 && bin<=max_bin)
            ((double*)out_data)[bin] += *((float *)(weights_data + i*wt_stride));
    }
} else if (bin_type==PyArray_UINT8) {
    for (i=0; i<len; i++) {
        bin = ((npy_uint8 *) bin_data)[i];
        if (bin>=0 && bin<=max_bin)
            ((double*)out_data)[bin] += *((float *)(weights_data + i*wt_stride));
    }
---snip----

'repeat' directives in C comments are obviously the proper way to avoid manually generating all this boilerplate code. Unfortunately I haven't yet understood how to make the autogenerated type-specific code link back into a common function entry point. Either there is some compiler/distutils magic going on, or it's explained in a different numpy source file from where I'm looking right now... From charlesr.harris at gmail.com Wed Mar 4 20:46:51 2009 From: charlesr.harris at gmail.com (Charles R Harris) Date: Wed, 4 Mar 2009 18:46:51 -0700 Subject: [Numpy-discussion] Example code for Numpy C preprocessor 'repeat' directive? In-Reply-To: <49AF22BF.2050902@stevesimmons.com> References: <49AF22BF.2050902@stevesimmons.com> Message-ID: On Wed, Mar 4, 2009 at 5:54 PM, Stephen Simmons wrote: > Hi, > > Please can someone suggest resources for learning how to use the > 'repeat' macros in numpy C code to avoid repeating sections of > type-specific code for each data type? Ideally there would be two types > of resources: (i) a description of how the repeat macros are meant to be > used/compiled; and (ii) suggestion for a numpy source file that best > illustrates this. > > Thanks in advance! > Stephen > > P.S. The motivation for this is that I'm trying to write an optimised numpy > implementation of SQL-style aggregation operators for an OLAP data > analysis project (using PyTables to store large numpy data sets). > bincount() is being used to implement "SELECT SUM(x) FROM TBL WHERE y > GROUP BY fn(z)". My modified bincount code can handle a wider variety of > index, weight and output array data types. It also supports passing in > the output array as a parameter, allowing multipass aggregation routines. > > I got the code working for a small number of data type combinations, but > now I'm drowning in an exponential explosion of manually maintained data > type combinations
> ---snip----
> } else if ((weight_type==NPY_FLOAT)&&(out_type==PyArray_DOUBLE)) {
>     ...
> } else if (bin_type==PyArray_INTP) {
>     for (i=0; i<len; i++) {
>         bin = ((npy_intp *) bin_data)[i];
>         if (bin>=0 && bin<=max_bin)
>             ((double*)out_data)[bin] += *((float *)(weights_data +
> i*wt_stride));
>     }
> } else if (bin_type==PyArray_UINT8) {
>     for (i=0; i<len; i++) {
>         bin = ((npy_uint8 *) bin_data)[i];
>         if (bin>=0 && bin<=max_bin)
>             ((double*)out_data)[bin] += *((float *)(weights_data +
> i*wt_stride));
>     }
> ---snip----
> > 'repeat' directives in C comments are obviously the proper way to avoid > manually generating all this boilerplate code. Unfortunately I haven't yet > understood how to make the autogenerated type-specific code link back > into a common function entry point. Either there is some > compiler/distutils magic going on, or it's explained in a different > numpy source file from where I'm looking right now...

Are you referring to example code like the following?

/**begin repeat
 * #type = a, b, c#
 */
void func@type@(void) {}
/**end repeat**/

Templated code like that is preprocessed by calling process_file or process_str in the module /numpy/numpy/distutils/conv_template.py. There is a small amount of documentation in that file. You can also use the command line like so:

python conv_template.py file.xxx.src

Of course, conv_template.py needs to be in your path for that to work.
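For instance, as I read the template above, the body between the repeat markers is emitted once per listed value with @type@ substituted, i.e. it expands to:

void funca(void) {}
void funcb(void) {}
void funcc(void) {}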
The templating facility provided by conv_template is pretty basic but adequate for numpy. There are other template systems floating around that might also serve your needs. Chuck -------------- next part -------------- An HTML attachment was scrubbed... URL: From hoytak at gmail.com Wed Mar 4 21:10:22 2009 From: hoytak at gmail.com (Hoyt Koepke) Date: Wed, 4 Mar 2009 18:10:22 -0800 Subject: [Numpy-discussion] Faster way to generate a rotation matrix? In-Reply-To: <463e11f90903041502l7d480bebxcb633df8d87e6413@mail.gmail.com> References: <49AE719B.4030109@molden.no> <361283.282.qm@web34405.mail.mud.yahoo.com> <463e11f90903041502l7d480bebxcb633df8d87e6413@mail.gmail.com> Message-ID: <4db580fd0903041810j649eda6ckc0668c36ff90a308@mail.gmail.com> You can get even more of a speed up with a couple tricks, though they might not be noticeable. The following is my modified version of your code:

import numpy as np
cimport cython
from numpy cimport ndarray, empty

cdef extern from "math.h":
    double cos(double)
    double sin(double)

def rotation(ndarray[double] theta):
    # I think the syntax for empty is the same in the cimported numpy.pxd, should check
    cdef ndarray[double, ndim=2, mode="c"] R = empty( (3,3) )
    cdef double cx = cos(theta[0]), cy = cos(theta[1]), cz = cos(theta[2])
    cdef double sx = sin(theta[0]), sy = sin(theta[1]), sz = sin(theta[2])
    with cython.boundscheck(False):
        R[0,0] = cx*cz - sx*cy*sz
        R[0,1] = cx*sz + sx*cy*cz
        R[0,2] = sx*sy
        R[1,0] = -sx*cz - cx*cy*sz
        R[1,1] = -sx*sz + cx*cy*cz
        R[1,2] = cx*sy
        R[2,0] = sy*sz
        R[2,1] = -sy*cz
        R[2,2] = cy
    return R

++++++++++++++++++++++++++++++++++++++++++++++++ + Hoyt Koepke + University of Washington Department of Statistics + http://www.stat.washington.edu/~hoytak/ + hoytak at gmail.com ++++++++++++++++++++++++++++++++++++++++++ From jonathan.taylor at utoronto.ca Wed Mar 4 22:28:25 2009 From: jonathan.taylor at utoronto.ca (Jonathan Taylor) Date: Wed, 4 Mar 2009 22:28:25 -0500 Subject: [Numpy-discussion] A module for homogeneous transformation matrices, Euler angles and quaternions In-Reply-To: <2352c0540903041410j263dbb4dk6d6a2662ae7c4216@mail.gmail.com> References: <2352c0540903041410j263dbb4dk6d6a2662ae7c4216@mail.gmail.com> Message-ID: <463e11f90903041928j7508b2fcu4abbaa65cfe11460@mail.gmail.com> Looks cool but a lot of this should be done in an extension module to make it fast. Perhaps start this process off as a separate entity until stability is achieved. I would be tempted to do some of this using cython. I just found that generating a rotation matrix from euler angles is about 10x faster when done properly with cython. J. On Wed, Mar 4, 2009 at 5:10 PM, Gareth Elston wrote: > I found a nice module for these transforms at > http://www.lfd.uci.edu/~gohlke/code/transformations.py.html . I've > been using an older version for some time and thought it might make a > good addition to numpy/scipy. I made some simple mods to the older > version to add a couple of functions I needed and to allow it to be > used with Python 2.4. > > The module is pure Python (2.5, with numpy 1.2 imported), includes > doctests, and is BSD licensed. Here's the first part of the module > docstring: > > """Homogeneous Transformation Matrices and Quaternions. > > A library for calculating 4x4 matrices for translating, rotating, mirroring, > scaling, shearing, projecting, orthogonalizing, and superimposing arrays of > homogenous coordinates as well as for converting between rotation matrices, > Euler angles, and quaternions. > """ > > I'd like to see this added to numpy/scipy so I know I've got some > reading to do (scipy.org/Developer_Zone and the huge scipy-dev > discussions on Scipy development infrastructure / workflow) to make > sure it follows the guidelines, but where would people like to see > this? In numpy? scipy? scikits? elsewhere? > > I seem to remember that there was a first draft of a guide for > developers being written. Are there any links available? > > Thanks, > Gareth. 
> _______________________________________________ > Numpy-discussion mailing list > Numpy-discussion at scipy.org > http://projects.scipy.org/mailman/listinfo/numpy-discussion > From david at ar.media.kyoto-u.ac.jp Wed Mar 4 23:09:25 2009 From: david at ar.media.kyoto-u.ac.jp (David Cournapeau) Date: Thu, 05 Mar 2009 13:09:25 +0900 Subject: [Numpy-discussion] Apropos ticked #913 In-Reply-To: References: Message-ID: <49AF5075.2070207@ar.media.kyoto-u.ac.jp> Charles R Harris wrote: > > > On Wed, Mar 4, 2009 at 1:57 PM, Pauli Virtanen > wrote: > > Wed, 04 Mar 2009 13:18:55 -0700, Charles R Harris wrote: > [clip] > > There are python max/min and their behaviour depends on the > scalar type. > > I haven't looked at the numpy scalars to see precisely what they do. > > > > Numpy max/min are aliases for amax/amin defined when the core is > > imported. The functions amax/amin in turn map to the array methods > > max/min which call the maximum.reduce/minimum.reduce ufuncs, so > they all > > propagate nans, i.e., if the array contains a nan, nan will be the > > return value. > > > > The nonpropagating comparisons are the ufuncs fmax/fmin and > there are no > > corresponding array methods. I think fmax/fmin should be renamed > > fmaximum/fminimum before the release of 1.3 and the names fmax/fmin > > reserved for the reduced versions to match the names amax/amin. > I'll do > > that if there are no objections. > > Aren't the nonpropagating versions of `amax` and `amin` called > `nanmax` > and `nanmin`? But these are functions, not array methods. > > What does the `f` in the beginning of `fmax` and `fmin` stand for? > > > The functions fmax/fmin are C standard library names, I assume the f > stands for floating like the f in fabs. Nanmax and nanmin work by > replacing nans with a fill value and then performing the specified > operation. For instance, nanmin replaces nans with inf. In contrast, > the functions fmax and fmin are real ufuncs and return nan when *both* > the inputs are nans, return the non-nan value when only one of the > inputs is a nan, and do the normal comparisons when both inputs are valid. Thanks for the clarification. I agree fmax/fmin is better because of the C convention. We should clearly document the difference between those function, though. Would you have time to implement something similar for sort (sort is important for correct and relatively efficient support of nanmedian I think) ? If not, that's ok, we'll do without for 1.3 series, thanks, David From charlesr.harris at gmail.com Thu Mar 5 00:32:20 2009 From: charlesr.harris at gmail.com (Charles R Harris) Date: Wed, 4 Mar 2009 22:32:20 -0700 Subject: [Numpy-discussion] Apropos ticked #913 In-Reply-To: <49AF5075.2070207@ar.media.kyoto-u.ac.jp> References: <49AF5075.2070207@ar.media.kyoto-u.ac.jp> Message-ID: On Wed, Mar 4, 2009 at 9:09 PM, David Cournapeau < david at ar.media.kyoto-u.ac.jp> wrote: > Charles R Harris wrote: > > > > > > On Wed, Mar 4, 2009 at 1:57 PM, Pauli Virtanen > > wrote: > > > > Wed, 04 Mar 2009 13:18:55 -0700, Charles R Harris wrote: > > [clip] > > > There are python max/min and their behaviour depends on the > > scalar type. > > > I haven't looked at the numpy scalars to see precisely what they > do. > > > > > > Numpy max/min are aliases for amax/amin defined when the core is > > > imported. 
The functions amax/amin in turn map to the array methods > > > max/min which call the maximum.reduce/minimum.reduce ufuncs, so > they all > > > propagate nans, i.e., if the array contains a nan, nan will be the > > > return value. > > > > > > The nonpropagating comparisons are the ufuncs fmax/fmin and > there are no > > > corresponding array methods. I think fmax/fmin should be renamed > > > fmaximum/fminimum before the release of 1.3 and the names fmax/fmin > > > reserved for the reduced versions to match the names amax/amin. > > I'll do > > > that if there are no objections. > > > > Aren't the nonpropagating versions of `amax` and `amin` called > `nanmax` > > and `nanmin`? But these are functions, not array methods. > > > > What does the `f` in the beginning of `fmax` and `fmin` stand for? > > > > The functions fmax/fmin are C standard library names, I assume the f > stands for floating like the f in fabs. Nanmax and nanmin work by > replacing nans with a fill value and then performing the specified > operation. For instance, nanmin replaces nans with inf. In contrast, > the functions fmax and fmin are real ufuncs and return nan when *both* > the inputs are nans, return the non-nan value when only one of the > inputs is a nan, and do the normal comparisons when both inputs are valid. Thanks for the clarification. I agree fmax/fmin is better because of the C convention. We should clearly document the difference between those functions, though. Would you have time to implement something similar for sort (sort is important for correct and relatively efficient support of nanmedian I think) ? If not, that's ok, we'll do without for 1.3 series, thanks, David From charlesr.harris at gmail.com Thu Mar 5 00:32:20 2009 From: charlesr.harris at gmail.com (Charles R Harris) Date: Wed, 4 Mar 2009 22:32:20 -0700 Subject: [Numpy-discussion] Apropos ticked #913 In-Reply-To: <49AF5075.2070207@ar.media.kyoto-u.ac.jp> References: <49AF5075.2070207@ar.media.kyoto-u.ac.jp> Message-ID: On Wed, Mar 4, 2009 at 9:09 PM, David Cournapeau < david at ar.media.kyoto-u.ac.jp> wrote: > Charles R Harris wrote: > > On Wed, Mar 4, 2009 at 1:57 PM, Pauli Virtanen > > wrote: > > > > Wed, 04 Mar 2009 13:18:55 -0700, Charles R Harris wrote: > > [clip] > > > There are python max/min and their behaviour depends on the > scalar type. > > > I haven't looked at the numpy scalars to see precisely what they do. > > > > > > Numpy max/min are aliases for amax/amin defined when the core is > > > imported. The functions amax/amin in turn map to the array methods > > > max/min which call the maximum.reduce/minimum.reduce ufuncs, so > they all > > > propagate nans, i.e., if the array contains a nan, nan will be the > > > return value. > > > > > > The nonpropagating comparisons are the ufuncs fmax/fmin and > there are no > > > corresponding array methods. I think fmax/fmin should be renamed > > > fmaximum/fminimum before the release of 1.3 and the names fmax/fmin > > > reserved for the reduced versions to match the names amax/amin. > I'll do > > > that if there are no objections. > > > > Aren't the nonpropagating versions of `amax` and `amin` called > `nanmax` > > and `nanmin`? But these are functions, not array methods. > > > > What does the `f` in the beginning of `fmax` and `fmin` stand for? > > > > The functions fmax/fmin are C standard library names, I assume the f > > stands for floating like the f in fabs. Nanmax and nanmin work by > > replacing nans with a fill value and then performing the specified > > operation. For instance, nanmin replaces nans with inf. In contrast, > > the functions fmax and fmin are real ufuncs and return nan when *both* > > the inputs are nans, return the non-nan value when only one of the > > inputs is a nan, and do the normal comparisons when both inputs are > valid. > > Thanks for the clarification. I agree fmax/fmin is better because of the > C convention. Better in what way? I was suggesting renaming them to fmaximum/fminimum but am perfectly happy with the current names if you feel fmax/fmin are better because of the C connection. I was just looking for a reasonable short name for fmax.reduce/fmin.reduce and thought fmax/fmin would be naturals, unfortunately, they were already taken ;) One thing that still bothers me a bit is the return value of fmax/fmin when comparing two complex nan values. A complex number is a nan whenever the real or imaginary part is nan, and currently the functions return such a number but originally they returned a complex number with both parts set to nan. The current implementation was a compromise that kept the code simple while never explicitly using a nan value, i.e., the nan came from one of the inputs. I avoided the explicit use of a nan value because the NAN macro was possibly unreliable at the time. I'm open to thoughts on what the behavior should be. > We should clearly document the difference between those > functions, though. You mean the differences with nanmax/nanmin? Would you have time to implement something similar for > sort (sort is important for correct and relatively efficient support of > nanmedian I think) ? If not, that's ok, we'll do without for 1.3 series, I would rather take more time for the sort functions. It would be easy to make the nans sort to one end or the other in merge sort, but I would want to make sure that quicksort was still efficient. I'm also not convinced that would solve the median problem. If 60% of the entries were nans would nan be the median? If not we would have to find where the nans began or ended and that would most likely need searchsorted to be fixed also. So in the case of sort and median I think we should first settle what the behavior should be, then do benchmarks and testing to see if we are happy with the result. Chuck -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From robert.kern at gmail.com Thu Mar 5 00:48:38 2009 From: robert.kern at gmail.com (Robert Kern) Date: Wed, 4 Mar 2009 23:48:38 -0600 Subject: [Numpy-discussion] Apropos ticked #913 In-Reply-To: References: <49AF5075.2070207@ar.media.kyoto-u.ac.jp> Message-ID: <3d375d730903042148w19b5332akd5d6e789368f9b1f@mail.gmail.com> On Wed, Mar 4, 2009 at 23:32, Charles R Harris wrote: > One thing that still bothers me a bit is the return value of fmax/fmin when > comparing two complex nan values. A complex number is a nan whenever the > real or imaginary part is nan, and currently the functions return such a > number but originally they returned a complex number with both parts set to > nan. The current implementation was a compromise that kept the code simple > while never explicitly using a nan value, i.e., the nan came from one of the > inputs. I avoided the explicit use of a nan value because the NAN macro was > possibly unreliable at the time. I'm open to thoughts on what the behavior > should be. Do we have examples from other implementations? -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From david at ar.media.kyoto-u.ac.jp Thu Mar 5 00:37:44 2009 From: david at ar.media.kyoto-u.ac.jp (David Cournapeau) Date: Thu, 05 Mar 2009 14:37:44 +0900 Subject: [Numpy-discussion] Apropos ticked #913 In-Reply-To: References: <49AF5075.2070207@ar.media.kyoto-u.ac.jp> Message-ID: <49AF6528.90108@ar.media.kyoto-u.ac.jp> Charles R Harris wrote: > On Wed, Mar 4, 2009 at 9:09 PM, David Cournapeau > wrote: > > Charles R Harris wrote: > > On Wed, Mar 4, 2009 at 1:57 PM, Pauli Virtanen > > wrote: > > > > Wed, 04 Mar 2009 13:18:55 -0700, Charles R Harris wrote: > > [clip] > > > There are python max/min and their behaviour depends on the > scalar type. > > > I haven't looked at the numpy scalars to see precisely what they do. > > > > > > Numpy max/min are aliases for amax/amin defined when the core is > > > imported. The functions amax/amin in turn map to the array methods > > > max/min which call the maximum.reduce/minimum.reduce ufuncs, so > they all > > > propagate nans, i.e., if the array contains a nan, nan will be the > > > return value. > > > > > > The nonpropagating comparisons are the ufuncs fmax/fmin and > there are no > > > corresponding array methods. I think fmax/fmin should be renamed > > > fmaximum/fminimum before the release of 1.3 and the names fmax/fmin > > > reserved for the reduced versions to match the names amax/amin. > I'll do > > > that if there are no objections. > > > > Aren't the nonpropagating versions of `amax` and `amin` called > `nanmax` > > and `nanmin`? But these are functions, not array methods. > > > > What does the `f` in the beginning of `fmax` and `fmin` stand for? > > > > The functions fmax/fmin are C standard library names, I assume the f > > stands for floating like the f in fabs. Nanmax and nanmin work by > > replacing nans with a fill value and then performing the specified > > operation. For instance, nanmin replaces nans with inf. In contrast, > > the functions fmax and fmin are real ufuncs and return nan when *both* > > the inputs are nans, return the non-nan value when only one of the > > inputs is a nan, and do the normal comparisons when both inputs are valid. Thanks for the clarification. 
I agree fmax/fmin is better because > of the > C convention. > > > Better in what way? I was suggesting renaming them to > fmaximum/fminimum but am perfectly happy with the current names if you > feel fmax/fmin are better because of the C connection. Oops, I read the contrary of what you meant :) My rationale for the name fmax/fmin is that their behavior is a bit surprising for people not used to C, so having a different name than C would only add to the confusion. It is obviously not a strong rationale. > One thing that still bothers me a bit is the return value of fmax/fmin > when comparing two complex nan values. A complex number is a nan > whenever the real or imaginary part is nan, and currently the > functions return such a number but originally they returned a complex > number with both parts set to nan. The current implementation was a > compromise that kept the code simple while never explicitly using a > nan value, i.e., the nan came from one of the inputs. I avoided the > explicit use of a nan value because the NAN macro was possibly > unreliable at the time. I'm open to thoughts on what the behavior > should be. Is it a problem if only one part (real or imaginary) is nan ? We should have a reliable NAN macro - this should be part of the npymath library, IMO. I will look into it. > > We should clearly document the difference between those > functions, though. > > You mean the differences with nanmax/nanmin? max (undefined behavior with nan) vs fmax (same semantics as C counterpart) vs nanmax (ignore nan).
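For the docs, a contrast along these lines would make it obvious at a glance (illustrative, from my reading of the descriptions above):

>>> import numpy as np
>>> a = np.array([1.0, np.nan])
>>> a.max()              # propagates nan, like maximum.reduce
nan
>>> np.fmax.reduce(a)    # C fmax semantics: the lone nan is ignored
1.0
>>> np.nanmax(a)         # replaces nan with a fill value first
1.0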
In particular, I think it would be helpful to document the differences with matlab and R, and suggestions on how to replace which function from those environments with numpy equivalent code. I can do this. > > Would you have time to implement something similar for > sort (sort is important for correct and relatively efficient > support of > nanmedian I think) ? If not, that's ok, we'll do without > for 1.3 series, > > I would rather take more time for the sort functions. Sure. My own experience is that this kind of code handling nan is difficult to make right. We especially need a relatively good set of tests, because of compiler/platform specificities. > I'm also not convinced that would solve the median problem. If 60% of > the entries were nans would nan be the median? If not we would have to > find where the nans began or ended and that would most likely need > searchsorted to be fixed also. I meant nanmedian, sorry. The current implementation is slow and/or buggy (I should check the related tickets, though; maybe it was a scipy ticket) David From stefan at sun.ac.za Thu Mar 5 01:00:22 2009 From: stefan at sun.ac.za (=?ISO-8859-1?Q?St=E9fan_van_der_Walt?=) Date: Thu, 5 Mar 2009 08:00:22 +0200 Subject: [Numpy-discussion] Cython numerical syntax revisited In-Reply-To: <49AEFFA8.3050903@student.matnat.uio.no> References: <49AEFFA8.3050903@student.matnat.uio.no> Message-ID: <9457e7c80903042200m43953be7m2a1271d09a6ebf1f@mail.gmail.com> Hi Dag 2009/3/5 Dag Sverre Seljebotn : > More details: http://wiki.cython.org/enhancements/buffersyntax Interesting proposal! Am I correct in thinking that you'd have to re-implement a lot of NumPy yourself to get this working? Or are you planning to build on NumPy + C-API? Cheers Stéfan From david at ar.media.kyoto-u.ac.jp Thu Mar 5 00:45:23 2009 From: david at ar.media.kyoto-u.ac.jp (David Cournapeau) Date: Thu, 05 Mar 2009 14:45:23 +0900 Subject: [Numpy-discussion] Apropos ticked #913 In-Reply-To: <3d375d730903042148w19b5332akd5d6e789368f9b1f@mail.gmail.com> References: <49AF5075.2070207@ar.media.kyoto-u.ac.jp> <3d375d730903042148w19b5332akd5d6e789368f9b1f@mail.gmail.com> Message-ID: <49AF66F3.8080406@ar.media.kyoto-u.ac.jp> Robert Kern wrote: > On Wed, Mar 4, 2009 at 23:32, Charles R Harris > wrote: > > >> One thing that still bothers me a bit is the return value of fmax/fmin when >> comparing two complex nan values. A complex number is a nan whenever the >> real or imaginary part is nan, and currently the functions return such a >> number but originally they returned a complex number with both parts set to >> nan. The current implementation was a compromise that kept the code simple >> while never explicitly using a nan value, i.e., the nan came from one of the >> inputs. I avoided the explicit use of a nan value because the NAN macro was >> possibly unreliable at the time. I'm open to thoughts on what the behavior >> should be. >> > > Do we have examples from other implementations? > matlab max is similar to our fmax:

max(1, nan) -> 1
max(nan, nan) -> nan

(matlab doc is not clear: they say they ignore nan, but then what's the point of nanmax ? I cannot see a different behavior between both functions in matlab) Then, for complex numbers, matlab does some unexpected things as well. First:

a = 1 + nan*i -> print NaN + 1.000i
b = nan + 1*i -> print NaN + NaNi

And then:

a = 1 + nan*i;
max(a, a) -> NaN
isreal(a) -> 0
isreal(max(a, a)) -> 1

cheers, David From stefan at sun.ac.za Thu Mar 5 01:10:08 2009 From: stefan at sun.ac.za (=?ISO-8859-1?Q?St=E9fan_van_der_Walt?=) Date: Thu, 5 Mar 2009 08:10:08 +0200 Subject: [Numpy-discussion] A module for homogeneous transformation matrices, Euler angles and quaternions In-Reply-To: <2352c0540903041410j263dbb4dk6d6a2662ae7c4216@mail.gmail.com> References: <2352c0540903041410j263dbb4dk6d6a2662ae7c4216@mail.gmail.com> Message-ID: <9457e7c80903042210q50041615j607162ae794b9be6@mail.gmail.com> Hi Gareth 2009/3/5 Gareth Elston : > I seem to remember that there was a first draft of a guide for > developers being written. Are there any links available? Sorry, I should have posted that already. We are still setting up Trac to support a proper work-flow, which should be done soon. In the meantime, I would suggest creating a repository on http://github.com, so that the code is out in the open. Then we can start the process of maturing it, and maybe even look at the optimisations Jonathan suggested. Regards Stéfan From david at ar.media.kyoto-u.ac.jp Thu Mar 5 00:53:55 2009 From: david at ar.media.kyoto-u.ac.jp (David Cournapeau) Date: Thu, 05 Mar 2009 14:53:55 +0900 Subject: [Numpy-discussion] Apropos ticked #913 In-Reply-To: <3d375d730903042148w19b5332akd5d6e789368f9b1f@mail.gmail.com> References: <49AF5075.2070207@ar.media.kyoto-u.ac.jp> <3d375d730903042148w19b5332akd5d6e789368f9b1f@mail.gmail.com> Message-ID: <49AF68F3.7050800@ar.media.kyoto-u.ac.jp> Robert Kern wrote: > On Wed, Mar 4, 2009 at 23:32, Charles R Harris > wrote: > > >> One thing that still bothers me a bit is the return value of fmax/fmin when >> comparing two complex nan values. 
A complex number is a nan whenever the >> real or imaginary part is nan, and currently the functions return such a >> number but originally they returned a complex number with both parts set to >> nan. The current implementation was a compromise that kept the code simple >> while never explicitly using a nan value, i.e., the nan came from one of the >> inputs. I avoided the explicit use of a nan value because the NAN macro >> was possibly unreliable at the time. I'm open to thoughts on what the >> behavior should be. >> > > Do we have examples from other implementations? > For R, I believe there is no NaN complex number: 1+NaN * 1i, NaN, NaN+1i are all printed the same as NaN, and according to the doc on complex number (?Im): Complex vectors can be created with 'complex'. The vector can be specified either by giving its length, its real and imaginary parts, or modulus and argument. (Giving just the length generates a vector of complex zeroes.) 'as.complex' attempts to coerce its argument to be of complex type: like 'as.vector' it strips attributes including names. All forms of 'NA' and 'NaN' are coerced to a complex 'NA', for which both the real and imaginary parts are 'NA'. I would guess this is a consequence of R being biased toward statistics, with NA being mostly for missing data ? cheers, David From charlesr.harris at gmail.com Thu Mar 5 01:28:23 2009 From: charlesr.harris at gmail.com (Charles R Harris) Date: Wed, 4 Mar 2009 23:28:23 -0700 Subject: [Numpy-discussion] Apropos ticked #913 In-Reply-To: <49AF66F3.8080406@ar.media.kyoto-u.ac.jp> References: <49AF5075.2070207@ar.media.kyoto-u.ac.jp> <3d375d730903042148w19b5332akd5d6e789368f9b1f@mail.gmail.com> <49AF66F3.8080406@ar.media.kyoto-u.ac.jp> Message-ID: On Wed, Mar 4, 2009 at 10:45 PM, David Cournapeau < david at ar.media.kyoto-u.ac.jp> wrote: > Robert Kern wrote: > > On Wed, Mar 4, 2009 at 23:32, Charles R Harris > > wrote: > > > > > >> One thing that still bothers me a bit is the return value of fmax/fmin > when >> comparing two complex nan values. A complex number is a nan whenever the >> real or imaginary part is nan, and currently the functions return such a >> number but originally they returned a complex number with both parts set to >> nan. The current implementation was a compromise that kept the code simple >> while never explicitly using a nan value, i.e., the nan came from one of the >> inputs. I avoided the explicit use of a nan value because the NAN macro was >> possibly unreliable at the time. I'm open to thoughts on what the behavior >> should be. >> > > Do we have examples from other implementations? > > matlab max is similar to our fmax:
>
> max(1, nan) -> 1
> max(nan, nan) -> nan
>
> (matlab doc is not clear: they say they ignore nan, but then what's the > point of nanmax ? I cannot see a different behavior between both > functions in matlab) > > Then, for complex numbers, matlab does some unexpected things as well. > First:
>
> a = 1 + nan*i -> print NaN + 1.000i
> b = nan + 1*i -> print NaN + NaNi
>
> And then:
>
> a = 1 + nan*i;
> max(a, a) -> NaN
> isreal(a) -> 0
> isreal(max(a, a)) -> 1
> Heh, it's somehow comforting to know Matlab finds it a bit confusing too. I suppose what bothers me is that fmax/fmin return the first argument when both are nans. For reals, that is simply a nan, no problem, but for complex numbers it seems a bit arbitrary. Maybe I'm being a bit obsessive here. Chuck -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From robert.kern at gmail.com Thu Mar 5 01:33:05 2009 From: robert.kern at gmail.com (Robert Kern) Date: Thu, 5 Mar 2009 00:33:05 -0600 Subject: [Numpy-discussion] Apropos ticked #913 In-Reply-To: References: <49AF5075.2070207@ar.media.kyoto-u.ac.jp> <3d375d730903042148w19b5332akd5d6e789368f9b1f@mail.gmail.com> <49AF66F3.8080406@ar.media.kyoto-u.ac.jp> Message-ID: <3d375d730903042233q228a8b29q8075ed4099e2034e@mail.gmail.com> On Thu, Mar 5, 2009 at 00:28, Charles R Harris wrote: > Heh, it's somehow comforting to know Matlab finds it a bit confusing too. I > suppose what bothers me is that fmax/fmin return the first argument when > both are nans. For reals, that is simply a nan, no problem, but for complex > numbers it seems a bit arbitrary. Maybe I'm being a bit obsessive here. Perhaps we can explicitly label the behavior undefined and reserve the right to nail it down later. Presumably, if there is a use case for one way or the other, someone will complain that it either doesn't work the way they want or that they would like to have guarantees that the current behavior will be preserved. -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From dalcinl at gmail.com Thu Mar 5 02:12:38 2009 From: dalcinl at gmail.com (Lisandro Dalcin) Date: Thu, 5 Mar 2009 04:12:38 -0300 Subject: [Numpy-discussion] calling _import_array() twice crashes python In-Reply-To: <1236175299.29694.39.camel@localhost> References: <1236175299.29694.39.camel@localhost> Message-ID: In general, using complex extension modules like numpy between matching pairs of Py_Initialize()/Py_Finalize() is tricky... Extension modules have to be VERY carefully written so as to permit such a usage pattern... It is too easy to forget the init/cleanup/finalize steps... I was able to manage this in one of my projects (mpi4py), but my package is by far simpler than numpy ... Does Octave have something like Python's Py_AtExit()? Then you should be able to register a callback to call Py_Finalize() only once, at the very end of your Octave process... Regarding initialization, you can call Py_Initialize() as many times as you want... if Python is already initialized, Py_Initialize() is a no-op ... Moreover, such an approach is going to be much faster... No need to reinitialize the full Python runtime every time you want to run some (possibly tiny) piece of Python code ... On Wed, Mar 4, 2009 at 11:01 AM, Soeren Sonnenburg wrote: > Dear all, > > I've written a wrapper enabling to run python code from within octave > (and vice versa). To this end I am embedding python in octave. So I am > calling
>
> Py_Initialize();
> _import_array();
>
> Py_Finalize();
>
> multiple times. While things work nicely on the first run, I am getting > a crash on _import_array() on the second run... > > Is there any cleanup function I should potentially call that would > prevent this? 
> > Soeren > > _______________________________________________ > Numpy-discussion mailing list > Numpy-discussion at scipy.org > http://projects.scipy.org/mailman/listinfo/numpy-discussion > -- Lisandro Dalcín --------------- Centro Internacional de Métodos Computacionales en Ingeniería (CIMEC) Instituto de Desarrollo Tecnológico para la Industria Química (INTEC) Consejo Nacional de Investigaciones Científicas y Técnicas (CONICET) PTLC - Güemes 3450, (3000) Santa Fe, Argentina Tel/Fax: +54-(0)342-451.1594 From dagss at student.matnat.uio.no Thu Mar 5 02:44:29 2009 From: dagss at student.matnat.uio.no (Dag Sverre Seljebotn) Date: Thu, 05 Mar 2009 08:44:29 +0100 Subject: [Numpy-discussion] Cython numerical syntax revisited In-Reply-To: <9457e7c80903042200m43953be7m2a1271d09a6ebf1f@mail.gmail.com> References: <49AEFFA8.3050903@student.matnat.uio.no> <9457e7c80903042200m43953be7m2a1271d09a6ebf1f@mail.gmail.com> Message-ID: <49AF82DD.8070909@student.matnat.uio.no> Stéfan van der Walt wrote: > Hi Dag > > 2009/3/5 Dag Sverre Seljebotn : >> More details: http://wiki.cython.org/enhancements/buffersyntax > > Interesting proposal! Am I correct in thinking that you'd have to > re-implement a lot of NumPy yourself to get this working? Or are you > planning to build on NumPy + C-API? First off, the proposal now is simply to change the syntax for existing features, which would simply disable arithmetic and slicing this time around. Slicing could perhaps happen over summer, but arithmetic would likely not happen for some time. The only point now is that before the syntax and work habit is *too* fixed, one could leave the road more open for it. But yes, to implement that one would need to reimplement parts of NumPy to get it working. But because code would be generated specifically for the situation inline, I think it would be more like reimplementing Numexpr than reimplementing NumPy. I think one could simply invoke Numexpr as a first implementation (and make it an optional Cython plugin). The fact that any performance improvements cannot be done incrementally and transparently though is certainly speaking against the syntax/semantics. -- Dag Sverre From dagss at student.matnat.uio.no Thu Mar 5 02:51:36 2009 From: dagss at student.matnat.uio.no (Dag Sverre Seljebotn) Date: Thu, 05 Mar 2009 08:51:36 +0100 Subject: [Numpy-discussion] Cython numerical syntax revisited In-Reply-To: References: <49AEFFA8.3050903@student.matnat.uio.no> Message-ID: <49AF8488.1040106@student.matnat.uio.no> Sturla Molden wrote: >> arr = np.zeros(..) >> cdef int[:,:] buf = arr # 2D buffer >> >> Here, buf would be something else than arr; it is a separate view to the >> array for low-level purposes. > > I like your proposal. The reason we use Fortran for numerical computing is > that Fortran makes it easy to manipulate arrays. C or C++ sucks terribly > for anything related to numerical computing, and arrays in particular. What's your take on Blitz++? Around here when you say C++ and numerical in the same sentence, Blitz++ is what they mean. > Cython is currently not better than C. The ndarray syntax you added last > summer is useless if we need to pass the array or a view/slice to another > function. That is almost always the case. While the syntax is there, the This can be fixed with the existing syntax: http://trac.cython.org/cython_trac/ticket/177 Introducing this syntax would actually mean less time to focus on "real usability issues" like that. 
OTOH, if the syntax I propose is superior, it's better to introduce it early in a long-term perspective. > Fortran 90/95 does this already, which is a major reason for chosing it > for numerical computing. If you have something like this working, I > believe many scientists would be happy to retire Fortran. It's not that > anyone likes it that much. Anyhow, I don't see myself retiring Fortran and > f2py any time soon. That's certainly an interesting perspective. Requires a lot of work though :-) -- Dag Sverre From dagss at student.matnat.uio.no Thu Mar 5 03:05:08 2009 From: dagss at student.matnat.uio.no (Dag Sverre Seljebotn) Date: Thu, 05 Mar 2009 09:05:08 +0100 Subject: [Numpy-discussion] Cython numerical syntax revisited In-Reply-To: <49AF09D9.1090408@astraw.com> References: <49AEFFA8.3050903@student.matnat.uio.no> <49AF09D9.1090408@astraw.com> Message-ID: <49AF87B4.7090204@student.matnat.uio.no> Andrew Straw wrote: > Dag Sverre Seljebotn wrote: >> This is NOT yet discussed on the Cython list; I wanted to check with >> more numerical users to see if the issue should even be brought up there. >> >> The idea behind the current syntax was to keep things as close as >> possible to Python/NumPy, and only provide some "hints" to Cython for >> optimization. My problem with this now is that a) it's too easy to get >> non-optimized code without a warning by letting in untyped indices, b) I >> think the whole thing is a bit too "magic" and that it is too unclear >> what is going on to newcomers (though I'm guessing there). > > These may be issues, but I think keeping "cython -a my_module.pyx" in > one's development cycle and inspecting the output will lead to great > enlightenment on the part of the Cython user. Perhaps this should be > advertised more prominently? I always do this with any Cython-generated > code, and it works wonders. Well, I do so too (or rather just open the generated C file in emacs, but since I'm working on Cython I'm more used to read that garbage than others I suppose :-) ). But it feels like "one extra step" which must be done. A syntax highlighter in emacs highlighting C operations would help as well though. >> My proposal: Introduce an explicit "buffer syntax": >> >> arr = np.zeros(..) >> cdef int[:,:] buf = arr # 2D buffer > > My initial reaction is that it seems to be a second implementation of > buffer interaction Cython, and therefore yet another thing to keep in Well, it would use much of the same implementation of course :-) It's more of a change in the parser and a few rules here and there than anything else. > mind and it's unclear to me how different it would be from the > "traditional" Cython numpy ndarray behavior and how the behavior of the > two approaches might differ, perhaps in subtle ways. So that's a > disadvantage from my perspective. I agree that some of your ideas are > advantages, however. Also, it seems it would allow one to (more easily) > interact with buffer objects in sophisticated ways without needing the > GIL, which is another advantage. > > Could some or all of this be added to the current numpy buffer > implementation, or does it really need the new syntax? The new features I mention? 
Most of it could be wedged into the existing syntax, but at the expense of making things even less clear (or at least I feel so) -- for instance, if we decided that Cython was to take care of operations like "a + b" in a NumExpr-like fashion, then it would mean that all declared buffers would get their Python versions of their arithmetic operators hidden and fixed. To make a point, it would mean that the NumPy matrix object would suddenly get componentwise multiplication when assigned to an ndarray[int] variable! (Granted, my feeling is that the matrix class should be better avoided anyway, but...) Regarding passing buffers to C/Fortran, it's just a matter of coming up with a nice syntax. > Also, is there anything possible with buffer objects that would be > limited by the choice of syntax you propose? I imagine this might not > work with structured data types (then again, it might...). mystruct[:,:] should just work, if that is what you mean. It's simply a matter of a) adding something to the parser, and b) disabling the current pass-through of "what the buffer cannot handle" to the Python runtime. -- Dag Sverre From faltet at pytables.org Thu Mar 5 03:39:29 2009 From: faltet at pytables.org (Francesc Alted) Date: Thu, 5 Mar 2009 09:39:29 +0100 Subject: [Numpy-discussion] Cython numerical syntax revisited In-Reply-To: <49AF82DD.8070909@student.matnat.uio.no> References: <49AEFFA8.3050903@student.matnat.uio.no> <9457e7c80903042200m43953be7m2a1271d09a6ebf1f@mail.gmail.com> <49AF82DD.8070909@student.matnat.uio.no> Message-ID: <200903050939.30247.faltet@pytables.org> A Thursday 05 March 2009, Dag Sverre Seljebotn escrigué: > But yes, to implement that one would need to reimplement parts of > NumPy to get it working. But because code would be generated > specifically for the situation inline, I think it would be more like > reimplementing Numexpr than reimplementing NumPy. I think one could > simply invoke Numexpr as a first implementation (and make it an > optional Cython plugin). At first sight, having a kind of Numexpr kernel inside Cython would be great, but provided that you can already call Numexpr from both Python/Cython, I wonder what the advantage would be. As I see it, it would be better to have:

c = numexpr.evaluate("a + b")

in the middle of Cython code than just:

c = a + b

in the sense that the former would allow the programmer to see whether Numexpr is called explicitly or not. One should not forget that Numexpr starts to be competitive only for expressions whose array operands+result sizes are around the CPU cache size or larger (unless transcendental functions are used and the local Numexpr has support for Intel VML, in which case this size can be substantially lower). So, getting Numexpr (or the Cython implementation of it) automatically called for *every* expression is not necessarily a Good Thing, IMO.
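To make the contrast concrete (sketch; it assumes the numexpr package is installed and importable):

import numpy as np
import numexpr

a = np.random.rand(10**7)
b = np.random.rand(10**7)

c1 = a + 2*b                       # plain NumPy: one temporary array per operation
c2 = numexpr.evaluate("a + 2*b")   # numexpr: parses the string, evaluates blockwise

For operands much smaller than the cache the plain expression is the faster one; the string parsing and blocked evaluation only pay off around the sizes mentioned above.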
Cheers, -- Francesc Alted From dagss at student.matnat.uio.no Thu Mar 5 04:11:50 2009 From: dagss at student.matnat.uio.no (Dag Sverre Seljebotn) Date: Thu, 05 Mar 2009 10:11:50 +0100 Subject: [Numpy-discussion] Cython numerical syntax revisited In-Reply-To: <200903050939.30247.faltet@pytables.org> References: <49AEFFA8.3050903@student.matnat.uio.no> <9457e7c80903042200m43953be7m2a1271d09a6ebf1f@mail.gmail.com> <49AF82DD.8070909@student.matnat.uio.no> <200903050939.30247.faltet@pytables.org> Message-ID: <49AF9756.8020203@student.matnat.uio.no> Francesc Alted wrote: > A Thursday 05 March 2009, Dag Sverre Seljebotn escrigué: >> But yes, to implement that one would need to reimplement parts of >> NumPy to get it working. But because code would be generated >> specifically for the situation inline, I think it would be more like >> reimplementing Numexpr than reimplementing NumPy. I think one could >> simply invoke Numexpr as a first implementation (and make it an >> optional Cython plugin). > > At first sight, having a kind of Numexpr kernel inside Cython would be > great, but provided that you can already call Numexpr from both > Python/Cython, I wonder what the advantage would be. As I > see it, it would be better to have: > > c = numexpr.evaluate("a + b") > > in the middle of Cython code than just: > > c = a + b > > in the sense that the former would allow the programmer to see whether > Numexpr is called explicitly or not. The former would need to invoke the parser etc., which one would *not* need to do when one has the Cython compilation step. When I mention numexpr it is simply because work has already gone into it to optimize these things; that experience could hopefully be kept, while discarding the parser and opcode system. I know too little about these things, but look: Cython can relatively easily transform things like

cdef int[:,:] a = ..., b = ...
c = a + b * b

into a double for-loop with c[i,j] = a[i,j] + b[i,j] * b[i,j] at its core.
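Spelled out, the generated code would conceptually do this (a sketch of the intended semantics, not of any existing Cython output):

cdef Py_ssize_t i, j
for i in range(a.shape[0]):
    for j in range(a.shape[1]):
        c[i, j] = a[i, j] + b[i, j] * b[i, j]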
A little more work could have it iterate the smallest dimension innermost dynamically (in strided mode). If a and b are declared as contiguous arrays and "restrict", I suppose the C compiler could do the most efficient thing in a lot of cases? (I.e. "cdef restrict int[:,:,"c"]" or similar) However if one has a strided array, numexpr could still give an advantage over such a loop. Or? But anyway, this is easily one year ahead of us, unless more numerical Cython developers show up. -- Dag Sverre From faltet at pytables.org Thu Mar 5 04:24:51 2009 From: faltet at pytables.org (Francesc Alted) Date: Thu, 5 Mar 2009 10:24:51 +0100 Subject: [Numpy-discussion] Cython numerical syntax revisited In-Reply-To: <49AF9756.8020203@student.matnat.uio.no> References: <49AEFFA8.3050903@student.matnat.uio.no> <200903050939.30247.faltet@pytables.org> <49AF9756.8020203@student.matnat.uio.no> Message-ID: <200903051024.51977.faltet@pytables.org> A Thursday 05 March 2009, Dag Sverre Seljebotn escrigué: > > At first sight, having a kind of Numexpr kernel inside Cython would > > be great, but provided that you can already call Numexpr from both > > Python/Cython, I wonder what the advantage would be. As > > I see it, it would be better to have: > > > > c = numexpr.evaluate("a + b") > > > > in the middle of Cython code than just: > > > > c = a + b > > > > in the sense that the former would allow the programmer to see > > whether Numexpr is called explicitly or not. > > The former would need to invoke the parser etc., which one would > *not* need to do when one has the Cython compilation step. Ah, yes. That's a good point. > When I mention numexpr it is simply because work has already gone into it > to optimize these things; that experience could hopefully be kept, > while discarding the parser and opcode system. > > I know too little about these things, but look: > > Cython can relatively easily transform things like > > cdef int[:,:] a = ..., b = ... > c = a + b * b > > into a double for-loop with c[i,j] = a[i,j] + b[i,j] * b[i,j] at its > core. A little more work could have it iterate the smallest dimension > innermost dynamically (in strided mode). > > If a and b are declared as contiguous arrays and "restrict", I > suppose the C compiler could do the most efficient thing in a lot of > cases? (I.e. "cdef restrict int[:,:,"c"]" or similar) Agreed. > > However if one has a strided array, numexpr could still give an > advantage over such a loop. Or? Well, I suppose that, provided that Cython could perform the for-loop transformation, giving support for strided arrays would be relatively trivial, and the performance would be similar to numexpr in this case. The case for unaligned arrays would be a bit different, as the following trick is used: whenever an unaligned array is detected, a new 'copy' opcode is issued so that, for each data block, a copy is done in order to make the data aligned. As the block sizes are chosen to fit easily in the CPU's level-1 cache, this copy operation is done very fast and impacts rather little on performance.
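In pseudo-NumPy the trick looks roughly like this (a toy 1-D illustration, not Numexpr's actual implementation):

import numpy as np

def blocked_add(a, b, block=4096):  # block sized to fit comfortably in L1 cache
    out = np.empty_like(a)
    for s in range(0, a.size, block):
        ta = np.ascontiguousarray(a[s:s+block])  # aligned copy of one block
        tb = np.ascontiguousarray(b[s:s+block])
        out[s:s+block] = ta + tb
    return out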
Around here when you say C++ and numerical > in the same sentence, Blitz++ is what they mean. I have not looked at it for a long time (8 years or so). It is based on profane C++ templates that makes debugging impossible. The compiler does not emit meaningful diagnostic messages, and very often the compiler cannot tell on which line the error occurred. It was efficient for small arrays if loops could be completely unrolled by the template metaprogram. For large arrays, it produced intermediate arrays as no C++ compiler could do escape analysis. > Introducing this syntax would actually mean less time to focus on "real > usability issues" like that. OTOH, if the syntax I propose is superior, > it's better to introduce it early in a long-term perspective. There is not much difference between cdef int[:,:] array and cdef numpy.ndarray[int, dim=2] array except that the latter is a Python object. The only minor issue with that is the GIL. On the other hand, the former is not a Python object, which means it is not garbage collected. S.M. From dagss at student.matnat.uio.no Thu Mar 5 05:33:52 2009 From: dagss at student.matnat.uio.no (Dag Sverre Seljebotn) Date: Thu, 05 Mar 2009 11:33:52 +0100 Subject: [Numpy-discussion] Cython numerical syntax revisited In-Reply-To: <200903051029.35458.faltet@pytables.org> References: <49AEFFA8.3050903@student.matnat.uio.no> <49AF9756.8020203@student.matnat.uio.no> <200903051024.51977.faltet@pytables.org> <200903051029.35458.faltet@pytables.org> Message-ID: <49AFAA90.2030408@student.matnat.uio.no> Francesc Alted wrote: > A Thursday 05 March 2009, Francesc Alted escrigu?: > >> Well, I suppose that, provided that Cython could perform the for-loop >> transformation, giving support for strided arrays would be relatively >> trivial, and the performance would be similar than numexpr in this >> case. >> > > Mmh, perhaps not so trivial, because that implies that the stride of an > array should be known in compilation time, and that would require a new > qualifier when declaring the array. Tricky... > No, one could do the same thing that NumPy does (I think, never looked into it in detail), i.e: decide on dimension to do innermost dynamically from strides and sizes save the stride in that dimension for each array for loop using n-dimensional iterator with larger per-loop overhead: save offsets for loop on the innermost dimension with lower per-loop overhead: component-wise operation using offsets and innermost strides Dag Sverre From robince at gmail.com Thu Mar 5 05:40:14 2009 From: robince at gmail.com (Robin) Date: Thu, 5 Mar 2009 10:40:14 +0000 Subject: [Numpy-discussion] indexing question Message-ID: Hi, I have an indexing problem, and I know it's a bit lazy to ask the list, sometime when people do interesting tricks come up so I hope no one minds! I have a 2D array X.shape = (a,b) and I want to change it into new array which is shape (2,(a*b)) which has the following form: [ X[0,0], X[0,1] X[1,0], X[1,1] X[2,0], X[2,1] .... X[a,0], X[a,1] X[0,1], X[0,2] X[1,1], X[1,2] ... ] The first access is trials and the second axis is the different outputs, which I am trying to represent as an overlapping sliding window of 2 samples (but in general n). Because of the repeats I'm not sure if I can do it without loops. 
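For concreteness, a plain-loop reference version of the layout described above (a minimal, untested sketch; n is the window length, 2 in the example, with the output shape following the correction below):

import numpy as np

def sliding_windows_loop(X, n=2):
    # stack one (a, n) block per window start 0 .. b-n
    a, b = X.shape
    blocks = [X[:, i:i + n] for i in range(b - n + 1)]
    return np.vstack(blocks)          # shape ((b - n + 1) * a, n)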
Thanks for any help

Cheers

Robin

From dagss at student.matnat.uio.no  Thu Mar  5 05:41:58 2009
From: dagss at student.matnat.uio.no (Dag Sverre Seljebotn)
Date: Thu, 05 Mar 2009 11:41:58 +0100
Subject: [Numpy-discussion] Cython numerical syntax revisited
In-Reply-To: <49AFA6FA.3090507@molden.no>
References: <49AEFFA8.3050903@student.matnat.uio.no> <49AF8488.1040106@student.matnat.uio.no> <49AFA6FA.3090507@molden.no>
Message-ID: <49AFAC76.8070909@student.matnat.uio.no>

Sturla Molden wrote:
>> Introducing this syntax would actually mean less time to focus on "real
>> usability issues" like that. OTOH, if the syntax I propose is superior,
>> it's better to introduce it early in a long-term perspective.
>
> There is not much difference between
>
> cdef int[:,:] array
>
> and
>
> cdef numpy.ndarray[int, dim=2] array
>
> except that the latter is a Python object. The only minor issue with
> that is the GIL. On the other hand, the former is not a Python object,
> which means it is not garbage collected.

As with all syntax, the difference is mostly psychological. The former
means "now I need fast access and will want to hit the metal, and will no
longer look at my array through a NumPy object but through a buffer view",
whereas the latter means "let Cython optimize some of the NumPy
operations".

About garbage collection, int[:,:] would always be a buffer view onto an
underlying object which *would* be garbage collected. I.e. it is *not*
stack-allocated memory; so when you do

cdef np.int_t[:,:] arr = np.zeros((10,10), np.int)

then the memory of the array is garbage collected insofar as the result
of np.zeros is. "arr" simply adds a reference to the underlying object
(and slices add another reference, and so on).

Support for GIL-less programming is on the wanted-list anyway for both
syntaxes though; Cython can know when one does something illegal and only
let through certain uses of the variable, so both syntaxes work for this.

Dag Sverre

From robince at gmail.com  Thu Mar  5 05:43:54 2009
From: robince at gmail.com (Robin)
Date: Thu, 5 Mar 2009 10:43:54 +0000
Subject: [Numpy-discussion] indexing question
In-Reply-To: References: Message-ID:

On Thu, Mar 5, 2009 at 10:40 AM, Robin wrote:
> Hi,
>
> I have an indexing problem, and I know it's a bit lazy to ask the
> list, sometime when people do interesting tricks come up so I hope no
> one minds!
>
> I have a 2D array X.shape = (a,b)
>
> and I want to change it into new array which is shape (2,(a*b)) which
> has the following form:
>
actually the new array would have dimensions (2, a*(b-1)) as the last
bin wouldn't have the second point.

Cheers

Robin

From sturla at molden.no  Thu Mar  5 05:48:31 2009
From: sturla at molden.no (Sturla Molden)
Date: Thu, 05 Mar 2009 11:48:31 +0100
Subject: [Numpy-discussion] Cython numerical syntax revisited
In-Reply-To: <49AF9756.8020203@student.matnat.uio.no>
References: <49AEFFA8.3050903@student.matnat.uio.no> <9457e7c80903042200m43953be7m2a1271d09a6ebf1f@mail.gmail.com> <49AF82DD.8070909@student.matnat.uio.no> <200903050939.30247.faltet@pytables.org> <49AF9756.8020203@student.matnat.uio.no>
Message-ID: <49AFADFF.2080006@molden.no>

On 3/5/2009 10:11 AM, Dag Sverre Seljebotn wrote:

> Cython can relatively easily transform things like
>
> cdef int[:,:] a = ..., b = ...
> c = a + b * b

Now you are wandering far into Fortran territory...

> If a and b are declared as contiguous arrays and "restrict", I suppose
> the C compiler could do the most efficient thing in a lot of cases?
> (I.e.
"cdef restrict int[:,:,"c"]" or similar) A Fortran compiler can compile a vectorized expression like a = b*c(i,:) + sin(k) into do j=1,n a(j) = b(j)*c(i,j) + sin(k(j)) end do The compiler do this because Fortran has strict rules prohibiting aliasing, and because the instrinsic function sin is declared 'elemental'. On the other hand, if the expression contains functions not declared 'elemental' or 'pure', there may be side effects, and temporary copies must be made. The same could happen if the expression contained variables declared 'pointer', in which case it could contain aliases. I think in the case of numexpr, it assumes that NumPy ufuncs are elemental like Fortran intrinsics. Matlab's JIT compiler works with the assumption that arrays are inherently immutable (everything has copy-on-write semantics). That makes life easier. > However if one has a strided array, numexpr could still give an > advantage over such a loop. Or? Fortran compilers often makes copies of strided arrays. The trick is to make sure the working arrays fit in cache. Again, this is safe when the expression only contains 'elemental' or 'pure' functions. Fortran also often does "copy-in copy-out" if a function is called with a strided array as argument. S.M. From robince at gmail.com Thu Mar 5 05:57:01 2009 From: robince at gmail.com (Robin) Date: Thu, 5 Mar 2009 10:57:01 +0000 Subject: [Numpy-discussion] indexing question In-Reply-To: References: Message-ID: On Thu, Mar 5, 2009 at 10:40 AM, Robin wrote: > Hi, > > I have an indexing problem, and I know it's a bit lazy to ask the > list, sometime when people do interesting tricks come up so I hope no > one minds! > > I have a 2D array X.shape = (a,b) > > and I want to change it into new array which is shape (2,(a*b)) which > has the following form: > [ ?X[0,0], X[0,1] > ? X[1,0], X[1,1] > ? X[2,0], X[2,1] > ? .... > ? X[a,0], X[a,1] > ? X[0,1], X[0,2] > ? X[1,1], X[1,2] > ... > ] > Ah, so it's a bit easier than I thought at first glance: X[ ix_( (b-1)*range(a), [0,1]) ] does the trick I think Sorry for the noise. Robin From robince at gmail.com Thu Mar 5 06:09:29 2009 From: robince at gmail.com (Robin) Date: Thu, 5 Mar 2009 11:09:29 +0000 Subject: [Numpy-discussion] indexing question In-Reply-To: References: Message-ID: On Thu, Mar 5, 2009 at 10:57 AM, Robin wrote: > On Thu, Mar 5, 2009 at 10:40 AM, Robin wrote: >> Hi, >> >> I have an indexing problem, and I know it's a bit lazy to ask the >> list, sometime when people do interesting tricks come up so I hope no >> one minds! >> >> I have a 2D array X.shape = (a,b) >> >> and I want to change it into new array which is shape (2,(a*b)) which >> has the following form: >> [ ?X[0,0], X[0,1] >> ? X[1,0], X[1,1] >> ? X[2,0], X[2,1] >> ? .... >> ? X[a,0], X[a,1] >> ? X[0,1], X[0,2] >> ? X[1,1], X[1,2] >> ... >> ] >> > > Ah, so it's a bit easier than I thought at first glance: > > X[ ix_( (b-1)*range(a), [0,1]) ] > does the trick I think Not doing well this morning - that's wrong of course... I need to stack lots of such blocks for [0,1], [1,2], [2,3] etc.. up to [b-1,b]. So I guess the question still stands... 
Robin

From slaunger at gmail.com  Thu Mar  5 07:18:23 2009
From: slaunger at gmail.com (Kim Hansen)
Date: Thu, 5 Mar 2009 13:18:23 +0100
Subject: [Numpy-discussion] Numpy array in iterable
In-Reply-To: <1cd32cbb0902250540u66da1969pfe481dafbb061cea@mail.gmail.com>
References: <1cd32cbb0902250540u66da1969pfe481dafbb061cea@mail.gmail.com>
Message-ID:

Hi again

It turned out not to be quite good enough as is, as it requires unique
values for both arrays. Whereas this is often true for the second
argument, it is never true for the first argument in my use case, and I
struggled with that for some time until I realized I could use unique1d
with the return_inverse optional parameter set to True:

from numpy import setmember1d, unique1d

def ismember(totest, members):
    """
    A setmember1d, which works for totest arrays with duplicate values
    """
    uniques_in_test, rev_idx = unique1d(totest, return_inverse=True)
    uniques_in_members_mask = setmember1d(uniques_in_test, members)
    # Use this instead if members is not unique
    # uniques_in_members_mask = setmember1d(uniques_in_test, unique1d(members))
    return uniques_in_members_mask[rev_idx]

I saw someone else providing an alternative implementation of this,
which was longer and included a loop. I do not know which is the most
efficient one, but I understand this one better.

-- Slaunger

2009/2/25 :
> On Wed, Feb 25, 2009 at 7:28 AM, Kim Hansen wrote:
>> Hi Numpy discussions
>> Quite often I find myself wanting to generate a boolean mask for fancy
>> slicing of some array, where the mask itself is generated by checking
>> if its value has one of several relevant values (corresponding to
>> states)
>> So at the element level this corresponds to checking if
>> element in iterable
>> But I can't use the in operator on a numpy array:
>>
>> In [1]: test = arange(5)
>> In [2]: states = [0, 2]
>> In [3]: mask = test in states
>> ---------------------------------------------------------------------------
>> ValueError                                Traceback (most recent call last)
>> C:\Documents and Settings\kha\ in ()
>> ValueError: The truth value of an array with more than one element is ambiguous.
>> Use a.any() or a.all()
>>
>> I can however make my own utility function which works effectively the
>> same way by iterating through the states
>>
>> In [4]: for i, state in enumerate(states):
>>    ...:     if i == 0:
>>    ...:         result = test == state
>>    ...:     else:
>>    ...:         result |= test == state
>>    ...:
>>    ...:
>> In [5]: result
>> Out[5]: array([ True, False,  True, False, False], dtype=bool)
>>
>> However, I would have thought such an "array.is_in()" utility function
>> was already available in the numpy package?
>>
>> But I can't find it, and I am curious to hear if it is there or if it is
>> just available in another form which I have simply overlooked.
>>
>> If it is not there I think it could be a nice extra utility function
>> for the ndarray object.
>> >> --Slaunger >> _______________________________________________ >> Numpy-discussion mailing list >> Numpy-discussion at scipy.org >> http://projects.scipy.org/mailman/listinfo/numpy-discussion >> > > does this help: > >>>> np.setmember1d(test,states) > array([ True, False, ?True, False, False], dtype=bool) > > Josef > _______________________________________________ > Numpy-discussion mailing list > Numpy-discussion at scipy.org > http://projects.scipy.org/mailman/listinfo/numpy-discussion > From cimrman3 at ntc.zcu.cz Thu Mar 5 07:28:14 2009 From: cimrman3 at ntc.zcu.cz (Robert Cimrman) Date: Thu, 05 Mar 2009 13:28:14 +0100 Subject: [Numpy-discussion] Numpy array in iterable In-Reply-To: References: <1cd32cbb0902250540u66da1969pfe481dafbb061cea@mail.gmail.com> Message-ID: <49AFC55E.2000603@ntc.zcu.cz> Kim Hansen wrote: > Hi again > > It turned out not to be quite good enough as is, as it requires unique > values for both arrays. Whereas this is often true for the second > argument, it is never true for the first argument in my use case, and > I struggled with that for some time until i realized I could use > unique1d with the rever_index optional parameter set True > > def ismember(totest, members) > """ > A setmember1d, which works for totest arrays with duplicate values > """ > uniques_in_test, rev_idx = unique1d(totest, return_inverse=True) > uniques_in_members_mask = setmember1d(uniques_in_test, members) > # Use this instead is members is not unique > # uniques_in_members_mask = setmember1d(uniques_in_test, > unique1d(members)) > return uniques_in_members_mask[rev_idx] > > I saw someone else providing an alternative implementation of this, > which was longer and included a loop. I do not know which is the most > efficient one, but I understand this one better. > > -- Slaunger I have added your implementation to http://projects.scipy.org/numpy/ticket/1036 - is it ok with you to add the function eventually into arraysetops.py, under the numpy (BSD) license? cheers, r. From faltet at pytables.org Thu Mar 5 07:35:33 2009 From: faltet at pytables.org (Francesc Alted) Date: Thu, 5 Mar 2009 13:35:33 +0100 Subject: [Numpy-discussion] Cython numerical syntax revisited In-Reply-To: <49AFAA90.2030408@student.matnat.uio.no> References: <49AEFFA8.3050903@student.matnat.uio.no> <200903051029.35458.faltet@pytables.org> <49AFAA90.2030408@student.matnat.uio.no> Message-ID: <200903051335.34056.faltet@pytables.org> A Thursday 05 March 2009, Dag Sverre Seljebotn escrigu?: > No, one could do the same thing that NumPy does (I think, never > looked into it in detail), i.e: > > decide on dimension to do innermost dynamically from strides and > sizes save the stride in that dimension for each array > for loop using n-dimensional iterator with larger per-loop overhead: > save offsets > for loop on the innermost dimension with lower per-loop overhead: > component-wise operation using offsets and innermost strides I see. Yes, it seems definitely doable. However, I don't understand very well when you say that you have to "decide on dimension to do innermost dynamically". For me, this dimension should always be the trailing dimension, in order to maximize the locality of data. Or I'm missing something? 
-- Francesc Alted From slaunger at gmail.com Thu Mar 5 07:38:44 2009 From: slaunger at gmail.com (Kim Hansen) Date: Thu, 5 Mar 2009 13:38:44 +0100 Subject: [Numpy-discussion] Numpy array in iterable In-Reply-To: <49AFC55E.2000603@ntc.zcu.cz> References: <1cd32cbb0902250540u66da1969pfe481dafbb061cea@mail.gmail.com> <49AFC55E.2000603@ntc.zcu.cz> Message-ID: >2009/3/5 Robert Cimrman : > I have added your implementation to > http://projects.scipy.org/numpy/ticket/1036 - is it ok with you to add > the function eventually into arraysetops.py, under the numpy (BSD) license? > > cheers, > r. > Yes, that would be fine with me. In fact that would be an honor! There is some formatting issue in the code you copied into the ticket... Cheers, Kim From cimrman3 at ntc.zcu.cz Thu Mar 5 07:46:32 2009 From: cimrman3 at ntc.zcu.cz (Robert Cimrman) Date: Thu, 05 Mar 2009 13:46:32 +0100 Subject: [Numpy-discussion] Numpy array in iterable In-Reply-To: References: <1cd32cbb0902250540u66da1969pfe481dafbb061cea@mail.gmail.com> <49AFC55E.2000603@ntc.zcu.cz> Message-ID: <49AFC9A8.5040907@ntc.zcu.cz> Kim Hansen wrote: >> 2009/3/5 Robert Cimrman : >> I have added your implementation to >> http://projects.scipy.org/numpy/ticket/1036 - is it ok with you to add >> the function eventually into arraysetops.py, under the numpy (BSD) license? >> >> cheers, >> r. >> > Yes, that would be fine with me. In fact that would be an honor! > There is some formatting issue in the code you copied into the ticket... > Cheers, > Kim Great! It's a nice use case for return_inverse=True in unique1d(). I have fixed the formatting, but cannot remove the previous comment. r. From slaunger at gmail.com Thu Mar 5 07:56:20 2009 From: slaunger at gmail.com (Kim Hansen) Date: Thu, 5 Mar 2009 13:56:20 +0100 Subject: [Numpy-discussion] Numpy array in iterable In-Reply-To: <49AFC9A8.5040907@ntc.zcu.cz> References: <1cd32cbb0902250540u66da1969pfe481dafbb061cea@mail.gmail.com> <49AFC55E.2000603@ntc.zcu.cz> <49AFC9A8.5040907@ntc.zcu.cz> Message-ID: >2009/3/5 Robert Cimrman : > > Great! It's a nice use case for return_inverse=True in unique1d(). > > I have fixed the formatting, but cannot remove the previous comment. > > r. ;-) Thank you for fixing the formatting, --Kim From dagss at student.matnat.uio.no Thu Mar 5 08:09:26 2009 From: dagss at student.matnat.uio.no (Dag Sverre Seljebotn) Date: Thu, 05 Mar 2009 14:09:26 +0100 Subject: [Numpy-discussion] Cython numerical syntax revisited In-Reply-To: <200903051335.34056.faltet@pytables.org> References: <49AEFFA8.3050903@student.matnat.uio.no> <200903051029.35458.faltet@pytables.org> <49AFAA90.2030408@student.matnat.uio.no> <200903051335.34056.faltet@pytables.org> Message-ID: <49AFCF06.8070209@student.matnat.uio.no> Francesc Alted wrote: > A Thursday 05 March 2009, Dag Sverre Seljebotn escrigu?: > >> No, one could do the same thing that NumPy does (I think, never >> looked into it in detail), i.e: >> >> decide on dimension to do innermost dynamically from strides and >> sizes save the stride in that dimension for each array >> for loop using n-dimensional iterator with larger per-loop overhead: >> save offsets >> for loop on the innermost dimension with lower per-loop overhead: >> component-wise operation using offsets and innermost strides >> > > I see. Yes, it seems definitely doable. However, I don't understand > very well when you say that you have to "decide on dimension to do > innermost dynamically". 
> For me, this dimension should always be the trailing dimension, in
> order to maximize the locality of data. Or I'm missing something?

For a transposed array (or Fortran-ordered one) it will be the leading.
Not sure whether it is possible with other kinds of views (where e.g. a
middle dimension varies fastest), but the NumPy model doesn't preclude it
and I suppose it would be possible with stride_tricks.

Dag Sverre

From faltet at pytables.org  Thu Mar  5 08:35:00 2009
From: faltet at pytables.org (Francesc Alted)
Date: Thu, 5 Mar 2009 14:35:00 +0100
Subject: [Numpy-discussion] Cython numerical syntax revisited
In-Reply-To: <49AFCF06.8070209@student.matnat.uio.no>
References: <49AEFFA8.3050903@student.matnat.uio.no> <200903051335.34056.faltet@pytables.org> <49AFCF06.8070209@student.matnat.uio.no>
Message-ID: <200903051435.01008.faltet@pytables.org>

A Thursday 05 March 2009, Dag Sverre Seljebotn escrigué:
> Francesc Alted wrote:
> > A Thursday 05 March 2009, Dag Sverre Seljebotn escrigué:
> >> No, one could do the same thing that NumPy does (I think, never
> >> looked into it in detail), i.e:
> >>
> >> decide on dimension to do innermost dynamically from strides and
> >> sizes save the stride in that dimension for each array
> >> for loop using n-dimensional iterator with larger per-loop
> >> overhead: save offsets
> >> for loop on the innermost dimension with lower per-loop
> >> overhead: component-wise operation using offsets and innermost
> >> strides
> >
> > I see. Yes, it seems definitely doable. However, I don't
> > understand very well when you say that you have to "decide on
> > dimension to do innermost dynamically". For me, this dimension
> > should always be the trailing dimension, in order to maximize the
> > locality of data. Or I'm missing something?
>
> For a transposed array (or Fortran-ordered one) it will be the
> leading.

Good point. I was not aware of this subtlety. In fact, numexpr does not
deal well with transposed views of NumPy arrays. Filed the bug in:

http://code.google.com/p/numexpr/issues/detail?id=18

> Not sure whether it is possible with other kinds of views
> (where e.g. a middle dimension varies fastest), but the NumPy model
> doesn't preclude it and I suppose it would be possible with
> stride_tricks.

Middle dimensions varying first? Oh my! :)

--
Francesc Alted

From sturla at molden.no  Thu Mar  5 10:04:44 2009
From: sturla at molden.no (Sturla Molden)
Date: Thu, 5 Mar 2009 16:04:44 +0100 (CET)
Subject: [Numpy-discussion] Cython numerical syntax revisited
In-Reply-To: <200903051435.01008.faltet@pytables.org>
References: <49AEFFA8.3050903@student.matnat.uio.no> <200903051335.34056.faltet@pytables.org> <49AFCF06.8070209@student.matnat.uio.no> <200903051435.01008.faltet@pytables.org>
Message-ID:

> A Thursday 05 March 2009, Dag Sverre Seljebotn escrigué:
> Good point. I was not aware of this subtlety. In fact, numexpr does not
> deal well with transposed views of NumPy arrays. Filed the bug in:
>
> http://code.google.com/p/numexpr/issues/detail?id=18
>
>> Not sure whether it is possible with other kinds of views
>> (where e.g. a middle dimension varies fastest), but the NumPy model
>> doesn't preclude it and I suppose it would be possible with
>> stride_tricks.
>
> Middle dimensions varying first? Oh my! :)

I cannot see any obvious justification for letting the middle dimension
vary first.
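To make the stride bookkeeping in this sub-thread concrete, here is a
minimal sketch (plain Python; nothing assumed beyond the .strides
attribute) of picking the innermost axis dynamically:

import numpy as np

a = np.zeros((3, 4))
print(a.strides)     # e.g. (32, 8): the trailing axis has the smallest stride
print(a.T.strides)   # (8, 32): for the transposed view it is the leading axis

def innermost_axis(arr):
    # NumPy-style dynamic choice: run the inner loop over the
    # axis with the smallest absolute stride
    return int(np.argmin(np.abs(arr.strides)))

print(innermost_axis(a), innermost_axis(a.T))   # 1 0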
C ordering is natural if we work with a "pointer to an array of pointers"
or an "array of arrays", which in both cases will be indexed as
array[i][j] in C:

array[i][j] = (array[i])[j] = *(array[i]+j) = *(*(array+i)+j)

While C has arrays and pointers, the difference is almost never visible
to the programmer. This has led some to erroneously believe that "C has
no arrays, only pointers". However:

double array[512];
double *p = array;

Now sizeof(array) will be sizeof(double)*512, whereas sizeof(p) will be
sizeof(long). This is one of very few cases where C arrays and pointers
behave differently, but it demonstrates the existence of arrays in C.

The justification for Fortran ordering is in the mathematics. Say we have
a set of linear equations

A * X = B

and are going to solve for X, using some magical subroutine 'solve'. The
most efficient way to store these arrays becomes the Fortran ordering.
That is,

call solve(A, X, B)

will be mathematically equivalent to the loop

do i = 1, n
   call solve(A, X(:,i), B)
end do

All the arrays in the call to solve are still kept contiguous! This would
not be the case with C ordering, which is an important reason that C
sucks so much for numerical computing. To write efficient linear algebra
in C, we have to store matrices in memory transposed to how they appear
in mathematical equations. In fact, Matlab uses Fortran ordering because
of this.

While C ordering feels natural to computer scientists, who love the
beauty of pointer and array symmetries, it is a major obstacle for
scientists and engineers from other fields. It is perhaps the most
important reason why numerical code written in Fortran tends to be
faster: if a matrix is rank n x m in the equation, it should be rank
n x m in the program as well, right? Not so with C.

The better performance of Fortran for numerical code is often blamed on
pointer aliasing in C. I believe wrong memory layout at the hands of the
programmer is actually the more important reason. In fact, whenever I
have done comparisons, the C compiler has always produced the faster
machine code (gcc vs. gfortran or g77, icc vs. ifort). But to avoid the
pitfall, one must be aware of it. And when a programmer's specialization
is in another field, this is usually not the case. Most scientists doing
some casual C, Java or Python programming fall straight into the trap.

That is also why I personally feel it was a bad choice to let C ordering
be the default in NumPy.

S.M.

From david at ar.media.kyoto-u.ac.jp  Thu Mar  5 10:15:53 2009
From: david at ar.media.kyoto-u.ac.jp (David Cournapeau)
Date: Fri, 06 Mar 2009 00:15:53 +0900
Subject: [Numpy-discussion] Cython numerical syntax revisited
In-Reply-To: References: <49AEFFA8.3050903@student.matnat.uio.no> <200903051335.34056.faltet@pytables.org> <49AFCF06.8070209@student.matnat.uio.no> <200903051435.01008.faltet@pytables.org>
Message-ID: <49AFECA9.60402@ar.media.kyoto-u.ac.jp>

Sturla Molden wrote:
> The justification for Fortran ordering is in the mathematics. Say we have
> a set of linear equations
>
> A * X = B
>
> and are going to solve for X, using some magical subroutine 'solve'. The
> most efficient way to store these arrays becomes the Fortran ordering.
> That is,
>
> call solve(A, X, B)
>
> will be mathematically equivalent to the loop
>
> do i = 1, n
>    call solve(A, X(:,i), B)
> end do
>
> All the arrays in the call to solve are still kept contiguous! This would
> not be the case with C ordering, which is an important reason that C sucks
> so much for numerical computing.
To write efficient linear algebra in C, > we have to store matrices in memory transposed to how they appear in > mathematical equations. In fact, Matlab uses Fortran ordering because of > this. > I don't think that's true: I am pretty sure matlab follows fortran ordering because matlab was born as a "shell" around BLAS, LINPACK and co. I don't understand your argument about Row vs Column matters: which one is best depends on your linear algebra equations. You give an example where Fortran is better, but I can find example where C order will be more appropriate. Most of the time, for anything non trivial, which one is best depends on the dimensions of the problem (Kalman filtering in high dimensional spaces for example), because some parts of the equations are better handled in a row-order fashion, and some other parts in a column order fashion. I don't know whether this is true, but I have read several times that the column order in Fortran is historical and due to some specificities of the early IBM - but I have of course no idea what the hardware in 1954 looks like from a programming point of view :) This does not prevent it from being a happy accident with regard to performances, though. cheers, David From faltet at pytables.org Thu Mar 5 11:30:12 2009 From: faltet at pytables.org (Francesc Alted) Date: Thu, 5 Mar 2009 17:30:12 +0100 Subject: [Numpy-discussion] Cython numerical syntax revisited In-Reply-To: <49AFECA9.60402@ar.media.kyoto-u.ac.jp> References: <49AEFFA8.3050903@student.matnat.uio.no> <49AFECA9.60402@ar.media.kyoto-u.ac.jp> Message-ID: <200903051730.13640.faltet@pytables.org> A Thursday 05 March 2009, David Cournapeau escrigu?: > I don't understand your argument about Row vs Column matters: which > one is best depends on your linear algebra equations. You give an > example where Fortran is better, but I can find example where C order > will be more appropriate. Most of the time, for anything non trivial, > which one is best depends on the dimensions of the problem (Kalman > filtering in high dimensional spaces for example), because some parts > of the equations are better handled in a row-order fashion, and some > other parts in a column order fashion. Yeah. Yet another (simple) example coming from linear algebra: a matrix multiplied by a vector. Given a (matrix): a = [[0,1,2], [3,4,5], [6,7,8]] and b (vector): b = [[1], [2], [3]] the most intuitive way to do the multiplication is to take the 1st row of a and do a dot product against b, repeating the process for 2nd and 3rd rows of a. C order coincides with this rule, and it is optimal from the point of view of memory access, while Fortran order is not. -- Francesc Alted From nadavh at visionsense.com Thu Mar 5 14:02:10 2009 From: nadavh at visionsense.com (Nadav Horesh) Date: Thu, 5 Mar 2009 21:02:10 +0200 Subject: [Numpy-discussion] Interpolation via Fourier transform Message-ID: <710F2847B0018641891D9A216027636029C46C@ex3.envision.co.il> I apology for this off topic question: I have a 2D FT of size N x N, and I would like to reconstruct the original signal with a lower sampling frequency directly (without using an interpolation procedure): Given M < N the goal is to compute a M x M "time domain" signal. In the case of 1D signal the trick is simple --- given a length N freq. domain Sig: sig = np.fft.ifft(Sig, M) This trick does not work in 2D: sig = np.fft.ifft2(Sig, (M,M)) is far from being the right answer. Any ideas? 
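For reference, what the 1-D call above computes — a small self-contained
check (the numpy.fft documentation states that a length argument smaller
than the input crops the frequency array before transforming):

import numpy as np

N, M = 16, 8
Sig = np.fft.fft(np.random.randn(N))
sig = np.fft.ifft(Sig, M)                      # ifft of the first M bins
print(np.allclose(sig, np.fft.ifft(Sig[:M])))  # True
# The 2-D analogue, ifft2(Sig, (M, M)), likewise keeps only the
# leading M x M corner of the spectrum.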
From charlesr.harris at gmail.com Thu Mar 5 14:16:35 2009 From: charlesr.harris at gmail.com (Charles R Harris) Date: Thu, 5 Mar 2009 12:16:35 -0700 Subject: [Numpy-discussion] Missing svn mailings? Message-ID: It looks like the new system is failing to mail svn commit notifications... Chuck -------------- next part -------------- An HTML attachment was scrubbed... URL: From mtrumpis at berkeley.edu Thu Mar 5 14:51:44 2009 From: mtrumpis at berkeley.edu (M Trumpis) Date: Thu, 5 Mar 2009 11:51:44 -0800 Subject: [Numpy-discussion] Interpolation via Fourier transform In-Reply-To: <710F2847B0018641891D9A216027636029C46C@ex3.envision.co.il> References: <710F2847B0018641891D9A216027636029C46C@ex3.envision.co.il> Message-ID: Hi Nadav.. if you want a lower resolution 2d function with the same field of view (or whatever term is appropriate to your case), then in principle you can truncate your higher frequencies and do this: sig = ifft2_func(sig[N/2 - M/2:N/2 + M/2, N/2 - M/2:N/2+M/2]) I like to use an fft that transforms from an array indexing negative-to-positive freqs to an array that indexes negative-to-positive spatial points, so in both spaces, the origin is at (N/2,N/2). Then the expression works as-is. The problem is if you've got different indexing in one or both spaces (typically positive frequencies followed by negative) you can play around with a change of variables in your DFT in one or both spaces. If the DFT is defined as a computing frequencies from 0,N, then putting in n' = n-N/2 leads to a term like exp(1j*pi*q) that multiplies f[q]. Here's a toy example: a = np.cos(2*np.pi*5*np.arange(64)/64.) P.plot(np.fft.fft(a).real) P.plot(np.fft.fft(np.power(-1,np.arange(64))*a).real) The second one is centered about index N/2 Similarly, if you need to change the limits of the summation of the DFT from 0,N to -N/2,N/2, then you can multiply exp(1j*pi*n) to the outside of the summation. Like I said, easy enough in principle! Mike On Thu, Mar 5, 2009 at 11:02 AM, Nadav Horesh wrote: > > I apology for this off topic question: > > ?I have a 2D FT of size N x N, and I would like to reconstruct the original signal with a lower sampling frequency directly (without using an interpolation procedure): Given M < N the goal is to compute a M x M "time domain" signal. > > ?In the case of 1D signal the trick is simple --- given a length N freq. domain Sig: > > ?sig = np.fft.ifft(Sig, M) > > This trick does not work in 2D: > > ?sig = np.fft.ifft2(Sig, (M,M)) > > is far from being the right answer. > > ?Any ideas? > _______________________________________________ > Numpy-discussion mailing list > Numpy-discussion at scipy.org > http://mail.scipy.org/mailman/listinfo/numpy-discussion > From peridot.faceted at gmail.com Thu Mar 5 15:06:56 2009 From: peridot.faceted at gmail.com (Anne Archibald) Date: Thu, 5 Mar 2009 15:06:56 -0500 Subject: [Numpy-discussion] Interpolation via Fourier transform In-Reply-To: References: <710F2847B0018641891D9A216027636029C46C@ex3.envision.co.il> Message-ID: 2009/3/5 M Trumpis : > Hi Nadav.. if you want a lower resolution 2d function with the same > field of view (or whatever term is appropriate to your case), then in > principle you can truncate your higher frequencies and do this: > > sig = ifft2_func(sig[N/2 - M/2:N/2 + M/2, N/2 - M/2:N/2+M/2]) > > I like to use an fft that transforms from an array indexing > negative-to-positive freqs to an array that indexes > negative-to-positive spatial points, so in both spaces, the origin is > at (N/2,N/2). 
Then the expression works as-is. > > The problem is if you've got different indexing in one or both spaces > (typically positive frequencies followed by negative) you can play > around with a change of variables in your DFT in one or both spaces. > If the DFT is defined as a computing frequencies from 0,N, then > putting in n' = n-N/2 leads to a term like exp(1j*pi*q) that > multiplies f[q]. Here's a toy example: > > a = np.cos(2*np.pi*5*np.arange(64)/64.) > > P.plot(np.fft.fft(a).real) > > P.plot(np.fft.fft(np.power(-1,np.arange(64))*a).real) > > The second one is centered about index N/2 > > Similarly, if you need to change the limits of the summation of the > DFT from 0,N to -N/2,N/2, then you can multiply exp(1j*pi*n) to the > outside of the summation. > > Like I said, easy enough in principle! There's also the hit-it-with-a-hammer approach: Just downsample in x then in y, using the one-dimensional transforms. Anne From pwang at enthought.com Thu Mar 5 15:45:20 2009 From: pwang at enthought.com (Peter Wang) Date: Thu, 5 Mar 2009 14:45:20 -0600 Subject: [Numpy-discussion] Missing svn mailings? In-Reply-To: References: Message-ID: <219F1401-34AF-4CE2-9271-378B3E78AD9E@enthought.com> On Mar 5, 2009, at 1:16 PM, Charles R Harris wrote: > It looks like the new system is failing to mail svn commit > notifications... Chuck Thanks for the heads up; it should be fixed now. -Peter From stefan at sun.ac.za Thu Mar 5 16:15:40 2009 From: stefan at sun.ac.za (=?ISO-8859-1?Q?St=E9fan_van_der_Walt?=) Date: Thu, 5 Mar 2009 23:15:40 +0200 Subject: [Numpy-discussion] indexing question In-Reply-To: References: Message-ID: <9457e7c80903051315h5dcb9f03jfcafd122a78df6d1@mail.gmail.com> Hi Robin 2009/3/5 Robin : > On Thu, Mar 5, 2009 at 10:57 AM, Robin wrote: >> On Thu, Mar 5, 2009 at 10:40 AM, Robin wrote: >>> Hi, >>> >>> I have an indexing problem, and I know it's a bit lazy to ask the >>> list, sometime when people do interesting tricks come up so I hope no >>> one minds! >>> >>> I have a 2D array X.shape = (a,b) >>> >>> and I want to change it into new array which is shape (2,(a*b)) which >>> has the following form: >>> [ ?X[0,0], X[0,1] >>> ? X[1,0], X[1,1] >>> ? X[2,0], X[2,1] >>> ? .... >>> ? X[a,0], X[a,1] >>> ? X[0,1], X[0,2] >>> ? X[1,1], X[1,2] >>> ... >>> ] >>> >From the array you wrote down above, I assume you meant ((a*b-1), 2): In [23]: x = np.arange(16).reshape((4,4)) In [24]: x Out[24]: array([[ 0, 1, 2, 3], [ 4, 5, 6, 7], [ 8, 9, 10, 11], [12, 13, 14, 15]]) In [25]: x.strides Out[25]: (16, 4) In [26]: np.lib.stride_tricks.as_strided(x, shape=(3, 4, 2), strides=(4, 16, 4)) Out[26]: array([[[ 0, 1], [ 4, 5], [ 8, 9], [12, 13]], [[ 1, 2], [ 5, 6], [ 9, 10], [13, 14]], [[ 2, 3], [ 6, 7], [10, 11], [14, 15]]]) In [27]: np.lib.stride_tricks.as_strided(x, shape=(3, 4, 2), strides=(4, 16, 4)).reshape((12, 2)) Out[27]: array([[ 0, 1], [ 4, 5], [ 8, 9], [12, 13], [ 1, 2], [ 5, 6], [ 9, 10], [13, 14], [ 2, 3], [ 6, 7], [10, 11], [14, 15]]) Does that help? 
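One caveat with the literal numbers above: the 4s and 16s are byte strides
for this particular dtype. A dtype-independent spelling of the same trick
(a sketch):

import numpy as np
from numpy.lib.stride_tricks import as_strided

x = np.arange(16).reshape((4, 4))
a, b = x.shape
n = 2    # window length
w = as_strided(x, shape=(b - n + 1, a, n),
               # (next window, next row, within a window)
               strides=(x.strides[1], x.strides[0], x.strides[1]))
out = w.reshape(((b - n + 1) * a, n))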
Regards St?fan From robince at gmail.com Thu Mar 5 16:34:36 2009 From: robince at gmail.com (Robin) Date: Thu, 5 Mar 2009 21:34:36 +0000 Subject: [Numpy-discussion] indexing question In-Reply-To: <9457e7c80903051315h5dcb9f03jfcafd122a78df6d1@mail.gmail.com> References: <9457e7c80903051315h5dcb9f03jfcafd122a78df6d1@mail.gmail.com> Message-ID: On Thu, Mar 5, 2009 at 9:15 PM, St?fan van der Walt wrote: > Hi Robin > > 2009/3/5 Robin : >> On Thu, Mar 5, 2009 at 10:57 AM, Robin wrote: >>> On Thu, Mar 5, 2009 at 10:40 AM, Robin wrote: >>>> Hi, >>>> >>>> I have an indexing problem, and I know it's a bit lazy to ask the >>>> list, sometime when people do interesting tricks come up so I hope no >>>> one minds! >>>> >>>> I have a 2D array X.shape = (a,b) >>>> >>>> and I want to change it into new array which is shape (2,(a*b)) which >>>> has the following form: >>>> [ ?X[0,0], X[0,1] >>>> ? X[1,0], X[1,1] >>>> ? X[2,0], X[2,1] >>>> ? .... >>>> ? X[a,0], X[a,1] >>>> ? X[0,1], X[0,2] >>>> ? X[1,1], X[1,2] >>>> ... >>>> ] >>>> > > >From the array you wrote down above, I assume you meant ((a*b-1), 2): > > In [23]: x = np.arange(16).reshape((4,4)) > > In [24]: x > Out[24]: > array([[ 0, ?1, ?2, ?3], > ? ? ? [ 4, ?5, ?6, ?7], > ? ? ? [ 8, ?9, 10, 11], > ? ? ? [12, 13, 14, 15]]) > > In [25]: x.strides > Out[25]: (16, 4) > > In [26]: np.lib.stride_tricks.as_strided(x, shape=(3, 4, 2), strides=(4, 16, 4)) > Out[26]: > array([[[ 0, ?1], > ? ? ? ?[ 4, ?5], > ? ? ? ?[ 8, ?9], > ? ? ? ?[12, 13]], > > ? ? ? [[ 1, ?2], > ? ? ? ?[ 5, ?6], > ? ? ? ?[ 9, 10], > ? ? ? ?[13, 14]], > > ? ? ? [[ 2, ?3], > ? ? ? ?[ 6, ?7], > ? ? ? ?[10, 11], > ? ? ? ?[14, 15]]]) > > In [27]: np.lib.stride_tricks.as_strided(x, shape=(3, 4, 2), > strides=(4, 16, 4)).reshape((12, 2)) > Out[27]: > array([[ 0, ?1], > ? ? ? [ 4, ?5], > ? ? ? [ 8, ?9], > ? ? ? [12, 13], > ? ? ? [ 1, ?2], > ? ? ? [ 5, ?6], > ? ? ? [ 9, 10], > ? ? ? [13, 14], > ? ? ? [ 2, ?3], > ? ? ? [ 6, ?7], > ? ? ? [10, 11], > ? ? ? [14, 15]]) > > Does that help? Ah thats great thanks... I had realised it could be done with as_strided and a reshape from your excellent slides - but I had trouble figure out the new strides so I settled on making a list with _ix and the hstack'ing the list. This is much neater though. Thanks, Robin From charlesr.harris at gmail.com Thu Mar 5 23:11:53 2009 From: charlesr.harris at gmail.com (Charles R Harris) Date: Thu, 5 Mar 2009 21:11:53 -0700 Subject: [Numpy-discussion] Ticket mailing down? Message-ID: I'm not receiving notifications of new/modified tickets. Anyone else having this problem? ... Chuck -------------- next part -------------- An HTML attachment was scrubbed... URL: From scott.sinclair.za at gmail.com Fri Mar 6 00:47:31 2009 From: scott.sinclair.za at gmail.com (Scott Sinclair) Date: Fri, 6 Mar 2009 07:47:31 +0200 Subject: [Numpy-discussion] Ticket mailing down? In-Reply-To: References: Message-ID: <6a17e9ee0903052147i22514847p1bf58c8d7c3cabb6@mail.gmail.com> > 2009/3/6 Charles R Harris : > I'm not receiving notifications of new/modified tickets. Anyone else having > this problem? ... Chuck I haven't seen anything since 3rd March. Cheers, Scott From charlesr.harris at gmail.com Fri Mar 6 00:53:15 2009 From: charlesr.harris at gmail.com (Charles R Harris) Date: Thu, 5 Mar 2009 22:53:15 -0700 Subject: [Numpy-discussion] Ticket mailing down? 
In-Reply-To: <6a17e9ee0903052147i22514847p1bf58c8d7c3cabb6@mail.gmail.com>
References: <6a17e9ee0903052147i22514847p1bf58c8d7c3cabb6@mail.gmail.com>
Message-ID:

On Thu, Mar 5, 2009 at 10:47 PM, Scott Sinclair wrote:
> > 2009/3/6 Charles R Harris :
> > I'm not receiving notifications of new/modified tickets. Anyone else
> > having this problem? ... Chuck
>
> I haven't seen anything since 3rd March.

That was my original problem but it looks like it has been fixed. I just
found the most recent mailings, the subject line has changed and my
filter wasn't putting them where they belonged. I also had to re-update
my address to receive the svn mailings.

Chuck
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From scott.sinclair.za at gmail.com  Fri Mar  6 01:02:16 2009
From: scott.sinclair.za at gmail.com (Scott Sinclair)
Date: Fri, 6 Mar 2009 08:02:16 +0200
Subject: [Numpy-discussion] Ticket mailing down?
In-Reply-To: References: <6a17e9ee0903052147i22514847p1bf58c8d7c3cabb6@mail.gmail.com>
Message-ID: <6a17e9ee0903052202k3697e0c0t773701898073c788@mail.gmail.com>

> 2009/3/6 Charles R Harris :
> On Thu, Mar 5, 2009 at 10:47 PM, Scott Sinclair wrote:
>>
>> > 2009/3/6 Charles R Harris :
>> > I'm not receiving notifications of new/modified tickets. Anyone else
>> > having this problem? ... Chuck
>>
>> I haven't seen anything since 3rd March.
>
> That was my original problem but it looks like it has been fixed. I just
> found the most recent mailings, the subject line has changed and my filter
> wasn't putting them where they belonged. I also had to re-update my address
> to receive the svn mailings.

Hmm. I'm still subscribed to the lists. I'll just wait and see.

Cheers,
Scott

From cimrman3 at ntc.zcu.cz  Fri Mar  6 09:06:04 2009
From: cimrman3 at ntc.zcu.cz (Robert Cimrman)
Date: Fri, 06 Mar 2009 15:06:04 +0100
Subject: [Numpy-discussion] setmember1d_nu
Message-ID: <49B12DCC.4040307@ntc.zcu.cz>

Hi all,

I have added to the ticket [1] a script that compares the proposed
setmember1d_nu() implementations of Neil and Kim. Comments are welcome!

[1] http://projects.scipy.org/numpy/ticket/1036

r.

From rmay31 at gmail.com  Fri Mar  6 10:28:59 2009
From: rmay31 at gmail.com (Ryan May)
Date: Fri, 6 Mar 2009 09:28:59 -0600
Subject: [Numpy-discussion] numpy-svn mails
Message-ID:

Hi,

Is anyone getting mails of the SVN commits? I've gotten 1 spam message
from that list, but no commits.

Ryan

--
Ryan May
Graduate Research Assistant
School of Meteorology
University of Oklahoma
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From nadavh at visionsense.com  Fri Mar  6 12:04:34 2009
From: nadavh at visionsense.com (Nadav Horesh)
Date: Fri, 6 Mar 2009 19:04:34 +0200
Subject: [Numpy-discussion] Interpolation via Fourier transform
References: <710F2847B0018641891D9A216027636029C46C@ex3.envision.co.il>
Message-ID: <710F2847B0018641891D9A216027636029C46D@ex3.envision.co.il>

I found the solution I needed for my peculiar case after reading your
email, based on the following stages:

I have a N x N frequency-domain matrix Z

1. Use fftshift to obtain a DC centered matrix
   Note: fftshift(fft(a)) replaces np.fft.fft(np.power(-1,np.arange(64))*a)
   Zs = np.fft.fftshift(Z)
2. pad Zs with zeros
   scale = int(ceil(float(N)/M))
   MM = scale*M
   Ztemp = np.zeros((MM,MM), dtype=complex)
   Ztemp[(MM-N)//2:(N-MM)//2,(MM-N)//2:(N-MM)//2] = Zs
3. Shift back to a "normal order"
   Ztemp = np.fft.ifftshift(Ztemp)
4. Transform to the "time domain" and sub-sample
   z = np.fft.ifft2(Ztemp)[::scale, ::scale]

I went this way since I needed the aliasing, otherwise I could just
truncate Zs to size MxM.

  Thank you,

    Nadav.

-----Original Message-----
From: numpy-discussion-bounces at scipy.org on behalf of M Trumpis
Sent: Thu 05-Mar-09 21:51
To: Discussion of Numerical Python
Subject: Re: [Numpy-discussion] Interpolation via Fourier transform

Hi Nadav.. if you want a lower resolution 2d function with the same
field of view (or whatever term is appropriate to your case), then in
principle you can truncate your higher frequencies and do this:

sig = ifft2_func(sig[N/2 - M/2:N/2 + M/2, N/2 - M/2:N/2+M/2])

[...]

Like I said, easy enough in principle!

Mike

On Thu, Mar 5, 2009 at 11:02 AM, Nadav Horesh wrote:
> [...]
>
> Any ideas?
> _______________________________________________
> Numpy-discussion mailing list
> Numpy-discussion at scipy.org
> http://mail.scipy.org/mailman/listinfo/numpy-discussion
>
_______________________________________________
Numpy-discussion mailing list
Numpy-discussion at scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion
-------------- next part --------------
A non-text attachment was scrubbed...
Name: winmail.dat
Type: application/ms-tnef
Size: 4579 bytes
Desc: not available
URL:

From strawman at astraw.com  Fri Mar  6 12:16:33 2009
From: strawman at astraw.com (Andrew Straw)
Date: Fri, 06 Mar 2009 09:16:33 -0800
Subject: [Numpy-discussion] N-D array interface page is out of date
In-Reply-To: <9457e7c80902030506l7094e8d3x33996b861f61bff8@mail.gmail.com>
References: <49791FA1.3020803@astraw.com> <4987FDE9.5030303@astraw.com> <9457e7c80902030506l7094e8d3x33996b861f61bff8@mail.gmail.com>
Message-ID: <49B15A71.1070307@astraw.com>

Hi,

I have updated http://numpy.scipy.org/array_interface.shtml to have a
giant warning first paragraph describing how that information is
outdated. Additionally, I have updated http://numpy.scipy.org/ to point
people to the buffer interface described in PEP 3118 and implemented in
Python 2.6/3.0.
Furthermore, I have suggested Cython has a way to write code for older
Pythons that will automatically support the buffer interface in newer
Pythons.

If you have knowledge about these matters (Travis O. and Dag,
especially), I'd appreciate it if you could read over the pages to ensure
everything is actually correct.

Thanks,
Andrew

Stéfan van der Walt wrote:
> 2009/2/3 Andrew Straw :
>> Can someone with appropriate permissions fix the page or give me the
>> appropriate permissions so I can do it? I think even deleting the page
>> is better than keeping it as-is.
>
> Who all has editing access to this page? Is it hosted on scipy.org?
>
> Stéfan
> _______________________________________________
> Numpy-discussion mailing list
> Numpy-discussion at scipy.org
> http://projects.scipy.org/mailman/listinfo/numpy-discussion

From stefan at sun.ac.za  Fri Mar  6 12:30:49 2009
From: stefan at sun.ac.za (=?ISO-8859-1?Q?St=E9fan_van_der_Walt?=)
Date: Fri, 6 Mar 2009 19:30:49 +0200
Subject: [Numpy-discussion] Interpolation via Fourier transform
In-Reply-To: <710F2847B0018641891D9A216027636029C46D@ex3.envision.co.il>
References: <710F2847B0018641891D9A216027636029C46C@ex3.envision.co.il> <710F2847B0018641891D9A216027636029C46D@ex3.envision.co.il>
Message-ID: <9457e7c80903060930v49477c75o84d5fbb8f364666@mail.gmail.com>

Hi Nadav

You can also read the interesting discussion at
http://projects.scipy.org/numpy/ticket/748 which also contains some
padding code. I still disagree with the conclusion, but oh well :)

Cheers
Stéfan

2009/3/6 Nadav Horesh :
> I found the solution I needed for my peculiar case after reading your
> email, based on the following stages:
>
> I have a N x N frequency-domain matrix Z
>
> 1. Use fftshift to obtain a DC centered matrix
>    Note: fftshift(fft(a)) replaces np.fft.fft(np.power(-1,np.arange(64))*a)
>    Zs = np.fft.fftshift(Z)
> 2. pad Zs with zeros
>    scale = int(ceil(float(N)/M))
>    MM = scale*M
>    Ztemp = np.zeros((MM,MM), dtype=complex)
>    Ztemp[(MM-N)//2:(N-MM)//2,(MM-N)//2:(N-MM)//2] = Zs
> 3. Shift back to a "normal order"
>    Ztemp = np.fft.ifftshift(Ztemp)
> 4. Transform to the "time domain" and sub-sample
>    z = np.fft.ifft2(Ztemp)[::scale, ::scale]
>
> I went this way since I needed the aliasing, otherwise I could just
> truncate Zs to size MxM.

From nadavh at visionsense.com  Fri Mar  6 12:33:54 2009
From: nadavh at visionsense.com (Nadav Horesh)
Date: Fri, 6 Mar 2009 19:33:54 +0200
Subject: [Numpy-discussion] Interpolation via Fourier transform
References: <710F2847B0018641891D9A216027636029C46C@ex3.envision.co.il>
Message-ID: <710F2847B0018641891D9A216027636029C46E@ex3.envision.co.il>

It was one of the first things I tried, without success

  Nadav.

-----Original Message-----
From: numpy-discussion-bounces at scipy.org on behalf of Anne Archibald
Sent: Thu 05-Mar-09 22:06
To: Discussion of Numerical Python
Subject: Re: [Numpy-discussion] Interpolation via Fourier transform

2009/3/5 M Trumpis :
> Hi Nadav.. if you want a lower resolution 2d function with the same
> field of view (or whatever term is appropriate to your case), then in
> principle you can truncate your higher frequencies and do this:
>
> sig = ifft2_func(sig[N/2 - M/2:N/2 + M/2, N/2 - M/2:N/2+M/2])
>
> [...]
>
> Like I said, easy enough in principle!

There's also the hit-it-with-a-hammer approach:

Just downsample in x then in y, using the one-dimensional transforms.

Anne
_______________________________________________
Numpy-discussion mailing list
Numpy-discussion at scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion
-------------- next part --------------
A non-text attachment was scrubbed...
Name: winmail.dat
Type: application/ms-tnef
Size: 3955 bytes
Desc: not available
URL:

From matthew.brett at gmail.com  Fri Mar  6 13:27:08 2009
From: matthew.brett at gmail.com (Matthew Brett)
Date: Fri, 6 Mar 2009 10:27:08 -0800
Subject: [Numpy-discussion] Cython numerical syntax revisited
In-Reply-To: <49AEFFA8.3050903@student.matnat.uio.no>
References: <49AEFFA8.3050903@student.matnat.uio.no>
Message-ID: <1e2af89e0903061027k734e0374wd157ab0c3730c73c@mail.gmail.com>

Hi,

> The idea behind the current syntax was to keep things as close as
> possible to Python/NumPy, and only provide some "hints" to Cython for
> optimization. My problem with this now is that a) it's too easy to get
> non-optimized code without a warning by letting in untyped indices, b) I
> think the whole thing is a bit too "magic" and that it is too unclear
> what is going on to newcomers (though I'm guessing there).
>
> My proposal: Introduce an explicit "buffer syntax":
>
> arr = np.zeros(..)
> cdef int[:,:] buf = arr # 2D buffer

I like this proposal a lot; it seems a great deal clearer to me than
the earlier syntax; it helps me think of the new Cython thing that I
have in a different and more natural way.

Best,

Matthew

From peridot.faceted at gmail.com  Fri Mar  6 13:45:57 2009
From: peridot.faceted at gmail.com (Anne Archibald)
Date: Fri, 6 Mar 2009 13:45:57 -0500
Subject: [Numpy-discussion] Cython numerical syntax revisited
In-Reply-To: <200903051029.35458.faltet@pytables.org>
References: <49AEFFA8.3050903@student.matnat.uio.no> <49AF9756.8020203@student.matnat.uio.no> <200903051024.51977.faltet@pytables.org> <200903051029.35458.faltet@pytables.org>
Message-ID:

2009/3/5 Francesc Alted :
> A Thursday 05 March 2009, Francesc Alted escrigué:
>> Well, I suppose that, provided that Cython could perform the for-loop
>> transformation, giving support for strided arrays would be relatively
>> trivial, and the performance would be similar than numexpr in this
>> case.
>
> Mmh, perhaps not so trivial, because that implies that the stride of an
> array should be known in compilation time, and that would require a new
> qualifier when declaring the array. Tricky...

Not necessarily. You can transform

a[1,2,3]

into

*(a.data + 1*a.strides[0] + 2*a.strides[1] + 3*a.strides[2])

without any need for static information beyond that a is 3-dimensional.
This would already be valuable, though perhaps you'd want to be able to declare that a particular dimension had stride 1 to simplify things. You could then use this same implementation to add automatic iteration. Anne > > Cheers, > > -- > Francesc Alted > _______________________________________________ > Numpy-discussion mailing list > Numpy-discussion at scipy.org > http://projects.scipy.org/mailman/listinfo/numpy-discussion > From charlesr.harris at gmail.com Fri Mar 6 13:46:22 2009 From: charlesr.harris at gmail.com (Charles R Harris) Date: Fri, 6 Mar 2009 11:46:22 -0700 Subject: [Numpy-discussion] numpy-svn mails In-Reply-To: References: Message-ID: On Fri, Mar 6, 2009 at 8:28 AM, Ryan May wrote: > Hi, > > Is anyone getting mails of the SVN commits? I've gotten 1 spam message > from that list, but no commits. > > Ryan > I'm not seeing them either...Chuck -------------- next part -------------- An HTML attachment was scrubbed... URL: From pwang at enthought.com Fri Mar 6 15:17:00 2009 From: pwang at enthought.com (Peter Wang) Date: Fri, 6 Mar 2009 14:17:00 -0600 Subject: [Numpy-discussion] numpy-svn mails In-Reply-To: References: Message-ID: <7E1976F2-7A6E-4479-8003-7DE6012B8E6C@enthought.com> On Mar 6, 2009, at 12:46 PM, Charles R Harris wrote: > On Fri, Mar 6, 2009 at 8:28 AM, Ryan May wrote: > Hi, > > Is anyone getting mails of the SVN commits? I've gotten 1 spam > message from that list, but no commits. > > Ryan > > I'm not seeing them either...Chuck Hey guys, I'm working on this problem now. You might see a spurious email here or there, and I will let everyone know on both scipy and numpy lists when they are going again. In the interim, please use Trac to look at checkins: http://projects.scipy.org/numpy/log/ http://projects.scipy.org/scipy/log/ Thanks for your patience, Peter From patrickmarshwx at gmail.com Fri Mar 6 15:41:56 2009 From: patrickmarshwx at gmail.com (Patrick Marsh) Date: Fri, 6 Mar 2009 14:41:56 -0600 Subject: [Numpy-discussion] Build Failure on WIndows Vista Message-ID: Greetings, I am running Windows Vista Ultimate and trying to build numpy from the SVN branch using MSVC 2003. I have been able to build previously, but with my latest SVN update I am no longer able to build. My CPU is an Intel Core2 T7600 @2.33GHz. The error is below. e:\svn\numpy\numpy\core\include\numpy\npy_cpu.h(44) : fatal error C1189: #error: Unknown CPU, please report this to numpy maintainers with information about your platform (OS, CPU and compiler) error: Command "D:\Program Files\Microsoft Visual Studio 2003\bin\cl.exe /c /nologo /Ox /MD /W3 /GX /DNDEBUG -Inumpy\core\include -Ibuild\src.win32-2.5\numpy\core\include/numpy -Inumpy\core\src -Inumpy\core\include -ID:\Python25\include -ID:\Python25\PC /Tcbuild\src.win32-2.5\numpy\core\src\_sortmodule.c /Fobuild\temp.win32-2.5\Release\build\src.win32-2.5\numpy\core\src\_sortmodule.obj" failed with exit status 2 -Patrick -- Patrick Marsh Graduate Research Assistant School of Meteorology University of Oklahoma http://www.patricktmarsh.com From charlesr.harris at gmail.com Fri Mar 6 16:01:07 2009 From: charlesr.harris at gmail.com (Charles R Harris) Date: Fri, 6 Mar 2009 14:01:07 -0700 Subject: [Numpy-discussion] Changeset 6557 Message-ID: Hi David, Currently, bint.i = __STR2INTCST("ABCD"); It is probably more portable to just initialize the union union { char c[4]; npy_uint32 i; } bint = {'A','B','C','D'}; If you use const union the initialization will be done at compile time. 
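The byte-order probe can also be tried from Python — a sketch of the idea
using NumPy instead of a C union:

import numpy as np

b = np.array([0x01020304], dtype=np.uint32).view(np.uint8)
print(b)    # [4 3 2 1] on little-endian machines, [1 2 3 4] on big-endian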
From charlesr.harris at gmail.com Fri Mar 6 16:53:17 2009
From: charlesr.harris at gmail.com (Charles R Harris)
Date: Fri, 6 Mar 2009 14:53:17 -0700
Subject: [Numpy-discussion] Changeset 6557

On Fri, Mar 6, 2009 at 2:01 PM, Charles R Harris wrote:
> Hi David,
>
> Currently,
>
> bint.i = __STR2INTCST("ABCD");
>
> It is probably more portable to just initialize the union
>
> union {
>     char c[4];
>     npy_uint32 i;
> } bint = {'A','B','C','D'};
>
> If you use const union the initialization will be done at compile time.

Better yet

const union {
    npy_uint32 i;
    char c[4];
} bint = {0x01020304};

And check for the numbers 1,2,3,4.

Chuck

From charlesreid1 at gmail.com Fri Mar 6 17:36:10 2009
From: charlesreid1 at gmail.com (charles reid)
Date: Fri, 6 Mar 2009 15:36:10 -0700
Subject: [Numpy-discussion] can't take FFT of ndarray

Hi there -

I've imported some data from a file, and it's in a list called mixfrac.
I'd like to take the Fourier transform of the data, but when I try to
take the FFT of the list, I get this error:

---------------------------------------------------------------------------
TypeError                                 Traceback (most recent call last)

/Users/charles/apriori/read.py in ()

/Library/Python/2.5/site-packages/numpy-1.3.0.dev5825-py2.5-macosx-10.3-i386.egg/numpy/fft/fftpack.pyc
in fft(a, n, axis)
    105     """
    106
--> 107     return _raw_fft(a, n, axis, fftpack.cffti, fftpack.cfftf, _fft_cache)
    108
    109

/Library/Python/2.5/site-packages/numpy-1.3.0.dev5825-py2.5-macosx-10.3-i386.egg/numpy/fft/fftpack.pyc
in _raw_fft(a, n, axis, init_function, work_function, fft_cache)
     64     if axis != -1:
     65         a = swapaxes(a, axis, -1)
---> 66     r = work_function(a, wsave)
     67     if axis != -1:
     68         r = swapaxes(r, axis, -1)

TypeError: array cannot be safely cast to required type

so I convert to an array and run fft(mixfracarray).

mixfracarray = array(mixfrac)
fft(mixfracarray)

whereupon I receive the error again, with an identical traceback ending in

TypeError: array cannot be safely cast to required type

This is strange, because I can run fft(array([0,0,0,1,1,1])), or
fft([0,0,0,1,1,1]), perfectly fine. This is passing an array and a list,
respectively.

type(mixfrac) is list and size(mixfrac) is 100; type(mixfracarray) is
ndarray, and mixfracarray.shape is (100,). I've also tried taking the FFT
of the transpose of mixfracarray, but that doesn't work either.

I'm stumped - why can't I run an FFT on either mixfrac or mixfracarray?

Charles
From charlesr.harris at gmail.com Fri Mar 6 17:44:32 2009
From: charlesr.harris at gmail.com (Charles R Harris)
Date: Fri, 6 Mar 2009 15:44:32 -0700
Subject: [Numpy-discussion] can't take FFT of ndarray

On Fri, Mar 6, 2009 at 3:36 PM, charles reid wrote:
> Hi there -
>
> I've imported some data from a file, and it's in a list called mixfrac.
> [...]
> I'm stumped - why can't I run an FFT on either mixfrac or mixfracarray?

After you convert to an array what is the array type? I suspect an object
in there somewhere.

Chuck

From charlesreid1 at gmail.com Fri Mar 6 17:51:38 2009
From: charlesreid1 at gmail.com (charles reid)
Date: Fri, 6 Mar 2009 15:51:38 -0700
Subject: [Numpy-discussion] can't take FFT of ndarray

In [3]: type(mixfrac)
Out[3]: <type 'list'>

In [4]: mixfracarray=array(mixfrac)

In [5]: type(mixfracarray)
Out[5]: <type 'numpy.ndarray'>

(Is that what you were referring to?)

On Fri, Mar 6, 2009 at 3:44 PM, Charles R Harris wrote:
> [...]
> After you convert to an array what is the array type? I suspect an
> object in there somewhere.
>
> Chuck

From charlesr.harris at gmail.com Fri Mar 6 18:09:49 2009
From: charlesr.harris at gmail.com (Charles R Harris)
Date: Fri, 6 Mar 2009 16:09:49 -0700
Subject: [Numpy-discussion] can't take FFT of ndarray

On Fri, Mar 6, 2009 at 3:51 PM, charles reid wrote:
> In [3]: type(mixfrac)
> Out[3]: <type 'list'>
>
> In [4]: mixfracarray=array(mixfrac)
>
> In [5]: type(mixfracarray)
> Out[5]: <type 'numpy.ndarray'>

Try mixfracarray.dtype ...Chuck
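(A short sketch of where this usually ends up: if the file was parsed
into strings, the dtype gives it away, and an explicit cast fixes it.
The values here are hypothetical:)

import numpy as np

mixfrac = ['0.12', '0.50', '0.38']         # strings read from a text file
arr = np.array(mixfrac)
print arr.dtype                            # a string dtype such as |S4

arr = np.array(mixfrac, dtype=np.float64)  # cast to double before the FFT
spectrum = np.fft.fft(arr)                 # now succeeds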
From charlesreid1 at gmail.com Fri Mar 6 18:22:17 2009
From: charlesreid1 at gmail.com (charles reid)
Date: Fri, 6 Mar 2009 16:22:17 -0700
Subject: [Numpy-discussion] can't take FFT of ndarray

In [3]: mixfracarray=array(mixfrac)

In [4]: mixfracarray.dtype
Out[4]: dtype('|S17')

On Fri, Mar 6, 2009 at 4:09 PM, Charles R Harris wrote:
> [...]
> Try mixfracarray.dtype ...Chuck

From matthieu.brucher at gmail.com Fri Mar 6 18:23:35 2009
From: matthieu.brucher at gmail.com (Matthieu Brucher)
Date: Sat, 7 Mar 2009 00:23:35 +0100
Subject: [Numpy-discussion] can't take FFT of ndarray

This indicates that the values are strings, so you can't make an FFT
from them. Convert your array to a float or double array first.

Matthieu

2009/3/7 charles reid :
> In [3]: mixfracarray=array(mixfrac)
>
> In [4]: mixfracarray.dtype
> Out[4]: dtype('|S17')
> [...]

--
Information System Engineer, Ph.D.
Website: http://matthieu-brucher.developpez.com/
Blogs: http://matt.eifelle.com and http://blog.developpez.com/?blog=92
LinkedIn: http://www.linkedin.com/in/matthieubrucher

From charlesr.harris at gmail.com Fri Mar 6 18:26:21 2009
From: charlesr.harris at gmail.com (Charles R Harris)
Date: Fri, 6 Mar 2009 16:26:21 -0700
Subject: [Numpy-discussion] can't take FFT of ndarray

On Fri, Mar 6, 2009 at 4:22 PM, charles reid wrote:
> In [3]: mixfracarray=array(mixfrac)
>
> In [4]: mixfracarray.dtype
> Out[4]: dtype('|S17')

It's a string array. What does your file look like and how do you
import it?

Chuck

From charlesreid1 at gmail.com Fri Mar 6 18:32:40 2009
From: charlesreid1 at gmail.com (charles reid)
Date: Fri, 6 Mar 2009 16:32:40 -0700
Subject: [Numpy-discussion] can't take FFT of ndarray

Fixed the problem - I was importing a bunch of numbers from a file, and
I wasn't casting them as doubles. Thanks for the help!

Charles

On Fri, Mar 6, 2009 at 4:26 PM, Charles R Harris wrote:
> [...]
> It's a string array. What does your file look like and how do you
> import it?
>
> Chuck

From stefan at sun.ac.za Fri Mar 6 20:18:06 2009
From: stefan at sun.ac.za (=?ISO-8859-1?Q?St=E9fan_van_der_Walt?=)
Date: Sat, 7 Mar 2009 03:18:06 +0200
Subject: [Numpy-discussion] Assigning complex values to a real array
Message-ID: <9457e7c80903061718i1dbbcdc7td1c9dfb82e0b387@mail.gmail.com>

Hi all,

The following code succeeds, while I thought it should fail:

a = np.zeros(6)  # real
b = np.arange(6)*(2+3j)  # complex
a[1] = b[1]  # shouldn't this break?

What is the rationale behind this behaviour?

Cheers
Stéfan

From charlesr.harris at gmail.com Fri Mar 6 20:28:03 2009
From: charlesr.harris at gmail.com (Charles R Harris)
Date: Fri, 6 Mar 2009 18:28:03 -0700
Subject: [Numpy-discussion] Assigning complex values to a real array
In-Reply-To: <9457e7c80903061718i1dbbcdc7td1c9dfb82e0b387@mail.gmail.com>
References: <9457e7c80903061718i1dbbcdc7td1c9dfb82e0b387@mail.gmail.com>

On Fri, Mar 6, 2009 at 6:18 PM, Stéfan van der Walt wrote:
> Hi all,
>
> The following code succeeds, while I thought it should fail:
>
> a = np.zeros(6)  # real
> b = np.arange(6)*(2+3j)  # complex
> a[1] = b[1]  # shouldn't this break?
>
> What is the rationale behind this behaviour?

The same as this:

In [1]: a = zeros(2)

In [2]: a[0] = '1'

In [3]: a
Out[3]: array([ 1.,  0.])

The question is whether such usage is likely in error, calling for an
exception, or a useful convenience to avoid a cast. I tend to think that
casts should be made explicit in such cases but it's a fine line. What
about this?

In [5]: a = zeros(2)

In [6]: a[0] = 1

In [7]: a
Out[7]: array([ 1.,  0.])

Should a cast be required? What if the lhs is a float32 and the rhs is a
python float?

Chuck

From stefan at sun.ac.za Sat Mar 7 04:30:03 2009
From: stefan at sun.ac.za (=?ISO-8859-1?Q?St=E9fan_van_der_Walt?=)
Date: Sat, 7 Mar 2009 11:30:03 +0200
Subject: [Numpy-discussion] Assigning complex values to a real array
Message-ID: <9457e7c80903070130j5672a114m159a7b922931b205@mail.gmail.com>

2009/3/7 Charles R Harris :
>> a = np.zeros(6)  # real
>> b = np.arange(6)*(2+3j)  # complex
>> a[1] = b[1]  # shouldn't this break?
>> What is the rationale behind this behaviour?
>
> The same as this:
>
> In [1]: a = zeros(2)
>
> In [2]: a[0] = '1'
>
> In [3]: a
> Out[3]: array([ 1.,  0.])

This difference is that, in your example, no information is lost. When
assigning a complex value to a real array, you are probably doing
something wrong.

Cheers
Stéfan

From robert.kern at gmail.com Sat Mar 7 04:35:35 2009
From: robert.kern at gmail.com (Robert Kern)
Date: Sat, 7 Mar 2009 03:35:35 -0600
Subject: [Numpy-discussion] Assigning complex values to a real array
Message-ID: <3d375d730903070135u28fb4085x86d0d6139d2dd28f@mail.gmail.com>

On Sat, Mar 7, 2009 at 03:30, Stéfan van der Walt wrote:
> 2009/3/7 Charles R Harris :
>>> a = np.zeros(6)  # real
>>> b = np.arange(6)*(2+3j)  # complex
>>> a[1] = b[1]  # shouldn't this break?
> [...]
> This difference is that, in your example, no information is lost.
> When assigning a complex value to a real array, you are probably doing
> something wrong.

In [5]: z = zeros(3, int)

In [6]: z[1] = 1.5

In [7]: z
Out[7]: array([0, 1, 0])

--
Robert Kern

"I have come to believe that the whole world is an enigma, a harmless
enigma that is made terrible by our own mad attempt to interpret it as
though it had an underlying truth." -- Umberto Eco

From stefan at sun.ac.za Sat Mar 7 05:10:10 2009
From: stefan at sun.ac.za (=?ISO-8859-1?Q?St=E9fan_van_der_Walt?=)
Date: Sat, 7 Mar 2009 12:10:10 +0200
Subject: [Numpy-discussion] Assigning complex values to a real array
Message-ID: <9457e7c80903070210o5704963exa5b5596970a22d8a@mail.gmail.com>

2009/3/7 Robert Kern :
> In [5]: z = zeros(3, int)
>
> In [6]: z[1] = 1.5
>
> In [7]: z
> Out[7]: array([0, 1, 0])

Blind moment, sorry. So, what is your take -- should this kind of thing
pass silently?

Regards
Stéfan

From robert.kern at gmail.com Sat Mar 7 05:18:42 2009
From: robert.kern at gmail.com (Robert Kern)
Date: Sat, 7 Mar 2009 04:18:42 -0600
Subject: [Numpy-discussion] Assigning complex values to a real array
Message-ID: <3d375d730903070218v6fd8e607kd0da6da9afefaa9d@mail.gmail.com>

On Sat, Mar 7, 2009 at 04:10, Stéfan van der Walt wrote:
> [...]
> Blind moment, sorry. So, what is your take -- should this kind of
> thing pass silently?

Downcasting data is a necessary operation sometimes. We explicitly made
a choice a long time ago to allow this.

--
Robert Kern

From stefan at sun.ac.za Sat Mar 7 09:29:30 2009
From: stefan at sun.ac.za (=?ISO-8859-1?Q?St=E9fan_van_der_Walt?=)
Date: Sat, 7 Mar 2009 16:29:30 +0200
Subject: [Numpy-discussion] Assigning complex values to a real array
Message-ID: <9457e7c80903070629t282fc492u55aba87a2ed8b8d3@mail.gmail.com>

2009/3/7 Robert Kern :
> [...]
>> Blind moment, sorry. So, what is your take -- should this kind of
>> thing pass silently?
>
> Downcasting data is a necessary operation sometimes. We explicitly
> made a choice a long time ago to allow this.

Would it be possible to, optionally, throw an exception?

S.

From dsdale24 at gmail.com Sat Mar 7 10:15:10 2009
From: dsdale24 at gmail.com (Darren Dale)
Date: Sat, 7 Mar 2009 10:15:10 -0500
Subject: [Numpy-discussion] Assigning complex values to a real array
In-Reply-To: <3d375d730903070218v6fd8e607kd0da6da9afefaa9d@mail.gmail.com>

On Sat, Mar 7, 2009 at 5:18 AM, Robert Kern wrote:
> Downcasting data is a necessary operation sometimes. We explicitly
> made a choice a long time ago to allow this.

In that case, do you know why this raises an exception:

np.int64(10+20j)

Darren

From cournape at gmail.com Sat Mar 7 11:30:03 2009
From: cournape at gmail.com (David Cournapeau)
Date: Sun, 8 Mar 2009 01:30:03 +0900
Subject: [Numpy-discussion] Build Failure on Windows Vista
Message-ID: <5b8d13220903070830k4619030flf86c8198f1c3141a@mail.gmail.com>

On Sat, Mar 7, 2009 at 5:41 AM, Patrick Marsh wrote:
> Greetings,
>
> I am running Windows Vista Ultimate and trying to build numpy from the
> SVN branch using MSVC 2003. I have been able to build previously, but
> with my latest SVN update I am no longer able to build.

Should be fixed in r6559 (note that building with MSVC 2003 will still
fail in the random module, though, because of a MSVC compiler
limitation).

cheers,

David

From dsdale24 at gmail.com Sat Mar 7 11:46:41 2009
From: dsdale24 at gmail.com (Darren Dale)
Date: Sat, 7 Mar 2009 11:46:41 -0500
Subject: [Numpy-discussion] question about ndarray.astype(None)

I was wondering about the behavior of ndarray.astype when passed None.
Currently this defaults to float64, does anyone know why it doesn't
default to the instance's dtype? Defaulting to float64 seems too
arbitrary.

Thanks,
Darren

From cournape at gmail.com Sat Mar 7 12:01:41 2009
From: cournape at gmail.com (David Cournapeau)
Date: Sun, 8 Mar 2009 02:01:41 +0900
Subject: [Numpy-discussion] Why using cblas in umath_test ?
In-Reply-To: <49AE7E92.7020202@ar.media.kyoto-u.ac.jp>
References: <49AE7E92.7020202@ar.media.kyoto-u.ac.jp>
Message-ID: <5b8d13220903070901g598c4359hceec95d110a13a52@mail.gmail.com>

On Wed, Mar 4, 2009 at 10:13 PM, David Cournapeau wrote:
> Is there a rationale for using cblas at all ? Why not using straight
> C functions - it is not like we care about speed for tests, right ? Or
> am I missing something ?

Since nobody reacted, I removed the corresponding cblas calls, and use
the straightforward pure C implementation instead.

David

From dsdale24 at gmail.com Sat Mar 7 12:23:00 2009
From: dsdale24 at gmail.com (Darren Dale)
Date: Sat, 7 Mar 2009 12:23:00 -0500
Subject: [Numpy-discussion] possible bug: __array_wrap__ is not called during arithmetic operations in some cases
References: <49A1DE17.4070001@hawaii.edu> <444F6AEF-9255-4350-AE1B-B9494A039D43@gmail.com>

On Sun, Feb 22, 2009 at 7:01 PM, Darren Dale wrote:
> On Sun, Feb 22, 2009 at 6:35 PM, Darren Dale wrote:
>> On Sun, Feb 22, 2009 at 6:28 PM, Pierre GM wrote:
>>> On Feb 22, 2009, at 6:21 PM, Eric Firing wrote:
>>>> Darren Dale wrote:
>>>>> Does anyone know why __array_wrap__ is not called for subclasses
>>>>> during arithmetic operations where an iterable like a list or tuple
>>>>> appears to the right of the subclass? When I do "mine*[1,2,3]",
>>>>> array_wrap is not called and I get an ndarray instead of a MyArray.
>>>>> "[1,2,3]*mine" is fine, as is "mine*array([1,2,3])". I see the same
>>>>> issue with division,
>>>>
>>>> The masked array subclass does not show this behavior:
>>>
>>> Because MaskedArray.__mul__ and others are redefined.
>>>
>>> Darren, you can fix your problem by redefining MyArray.__mul__ as:
>>>
>>> def __mul__(self, other):
>>>     return np.ndarray.__mul__(self, np.asanyarray(other))
>>>
>>> forcing the second term to be a ndarray (or a subclass of). You can
>>> do the same thing for the other functions (__add__, __radd__, ...)
>>
>> Thanks for the suggestion. I know this can be done, but ufuncs like
>> np.multiply(mine,[1,2,3]) will still not work. Plus, if I reimplement
>> these methods, I take some small performance hit. I've been putting a
>> lot of work in lately to get quantities to work with numpy's stock
>> ufuncs.
>
> I should point out:
>
> import numpy as np
>
> a=np.array([1,2,3,4])
> b=np.ma.masked_where(a>2,a)
> np.multiply([1,2,3,4],b) # yields a masked array
> np.multiply(b,[1,2,3,4]) # yields an ndarray

I'm not familiar with the numpy codebase, could anyone help me figure
out where I should look to try to fix this bug? I've got a nice set of
generators that work with nosetools to test all combinations of
numerical dtypes, including combinations of scalars, arrays, and
iterables of each type. In my quantities package, just testing
multiplication yields 1031 failures, all of which appear to be caused by
this bug (#1026 on trac) or bug #826.

Thanks,
Darren

From cournape at gmail.com Sat Mar 7 12:41:20 2009
From: cournape at gmail.com (David Cournapeau)
Date: Sun, 8 Mar 2009 02:41:20 +0900
Subject: [Numpy-discussion] Changeset 6557
Message-ID: <5b8d13220903070941y729b3d6ewe0dd46ca9e23a767@mail.gmail.com>

On Sat, Mar 7, 2009 at 6:01 AM, Charles R Harris wrote:
> Hi David,
>
> Currently,
>
> bint.i = __STR2INTCST("ABCD");
>
> It is probably more portable to just initialize the union
>
> union {
>     char c[4];
>     npy_uint32 i;
> } bint = {'A','B','C','D'};

Ah, tempting, right ? It does not work. It has exactly the same problem
as multibyte initialization, that is it is undefined, or at least there
are some platforms where, depending on the compiler, the result will be
different. Mac OS X makes this easy to test (on x86). With your
initialization scheme, bint.c[0] is 'A' whether I compile with -arch x86
or -arch ppc (mac os x can run ppc code on intel thanks to rosetta, a
JIT ppc vm).
With mine, it does what is expected ('A' on big endian - ppc, and 'D' on
little endian).

cheers,

David

From charlesr.harris at gmail.com Sat Mar 7 12:52:06 2009
From: charlesr.harris at gmail.com (Charles R Harris)
Date: Sat, 7 Mar 2009 10:52:06 -0700
Subject: [Numpy-discussion] Changeset 6557
In-Reply-To: <5b8d13220903070941y729b3d6ewe0dd46ca9e23a767@mail.gmail.com>

On Sat, Mar 7, 2009 at 11:41 AM, David Cournapeau wrote:
> [...]
> Ah, tempting, right ? It does not work.

Yes, but look at the second version. It does essentially what your macro
does, only uses 1,2,3,4 instead of 'A','B','C','D'. I'm tempted to just
make the change but it's your baby...

Chuck

From cournape at gmail.com Sat Mar 7 13:02:03 2009
From: cournape at gmail.com (David Cournapeau)
Date: Sun, 8 Mar 2009 03:02:03 +0900
Subject: [Numpy-discussion] Changeset 6557
Message-ID: <5b8d13220903071002v5fcdb25fq143f4a81a9cd5a1d@mail.gmail.com>

On Sun, Mar 8, 2009 at 2:52 AM, Charles R Harris wrote:
> Yes, but look at the second version. It does essentially what your
> macro does, only uses 1,2,3,4 instead of 'A','B','C','D'.

Yes, it is the same thing, so I don't see the point of changing :) The
const union does not help, BTW. At least with gcc on mac os x, as long
as you have an if to test bint.c[0], it does not look smart enough to
detect all this is constant. But then, it is not like this function
benefits from any optimization anyway - it is only used at import time,
and the function call is likely to be the most costly part anyway.

David

From charlesr.harris at gmail.com Sat Mar 7 13:20:56 2009
From: charlesr.harris at gmail.com (Charles R Harris)
Date: Sat, 7 Mar 2009 11:20:56 -0700
Subject: [Numpy-discussion] Changeset 6557

On Sat, Mar 7, 2009 at 11:02 AM, David Cournapeau wrote:
> Yes, it is the same thing, so I don't see the point of changing :)

The macro is ugly, unneeded, and obfuscating. Why construct a number
from characters and shifts when you can just *write it down*?

> The const union does not help, BTW.

True, it is initialized here:

    movl    $16909060, -8(%ebp)

contrast that with the macro.

Chuck

From charlesr.harris at gmail.com Sat Mar 7 13:42:27 2009
From: charlesr.harris at gmail.com (Charles R Harris)
Date: Sat, 7 Mar 2009 11:42:27 -0700
Subject: [Numpy-discussion] Changeset 6557

On Sat, Mar 7, 2009 at 11:20 AM, Charles R Harris wrote:
> [...]
> True, it is initialized here:
>
>     movl    $16909060, -8(%ebp)
>
> contrast that with the macro.

And here is the optimized code (-O2) to print out the byte values in
order:

main:
    leal    4(%esp), %ecx
    andl    $-16, %esp
    pushl   -4(%ecx)
    pushl   %ebp
    movl    %esp, %ebp
    pushl   %ecx
    subl    $20, %esp
    movl    $4, 4(%esp)
    movl    $.LC0, (%esp)
    call    printf
    movl    $3, 4(%esp)
    movl    $.LC0, (%esp)
    call    printf
    movl    $2, 4(%esp)
    movl    $.LC0, (%esp)
    call    printf
    movl    $1, 4(%esp)
    movl    $.LC0, (%esp)
    call    printf
    ...

Note the compiler *knows* the byte values, the union never appears.

Chuck

From cournape at gmail.com Sat Mar 7 13:57:24 2009
From: cournape at gmail.com (David Cournapeau)
Date: Sun, 8 Mar 2009 03:57:24 +0900
Subject: [Numpy-discussion] Changeset 6557
Message-ID: <5b8d13220903071057i6ada4872ra64744b726bc6e8a@mail.gmail.com>

On Sun, Mar 8, 2009 at 3:20 AM, Charles R Harris wrote:
> The macro is ugly, unneeded, and obfuscating. Why construct a number
> from characters and shifts when you can just *write it down*?

The idea was to replace the 'ABCD' multi-byte constant. If you think
that writing down the corresponding integer is cleaner, so be it - I
don't care either way. I am not sure I see a difference between
'A' << 24 .... and 103...., though.

> True, it is initialized here:
>
>     movl    $16909060, -8(%ebp)

The generated assembly is exactly the same whether the constant is
initialized through the macro or the integer (the actual integer is in
the assembly). But in the following case:

const union {
    npy_uint32 i;
    char c[4];
} bint = {some constant};

switch (bint.c[0]) {
    case 'A':
        etc....
}

The compiler did not remove the conditionals corresponding to the
switch - const or not.

cheers,

David

From matthew.brett at gmail.com Sat Mar 7 14:09:09 2009
From: matthew.brett at gmail.com (Matthew Brett)
Date: Sat, 7 Mar 2009 11:09:09 -0800
Subject: [Numpy-discussion] structured array comparisons?
Message-ID: <1e2af89e0903071109p4b85b069v6adec990faef1783@mail.gmail.com>

Hi,

I'm having some difficulty understanding how structured array
comparisons work, and would be grateful for any help. In the simple
case, I get what I expect:

In [42]: a = np.zeros((), dtype=[('f1', 'f8'),('f2', 'f8')])

In [43]: a == a
Out[43]: True

If one of the fields is itself an array, and the other is a scalar, the
shape of the truth value appears to be based on the comparison of that
array, ignoring the scalar:

In [44]: a = np.zeros((), dtype=[('f1', 'f8', 8),('f2', 'f8')])

In [45]: a == a
Out[45]: array([ True,  True,  True,  True,  True,  True,  True,  True], dtype=bool)

If the scalar is different, then the shape is from the array, but the
truth value is from the scalar:

In [46]: b = a.copy()

In [47]: b['f2'] = 3

In [48]: a == b
Out[48]: array([False, False, False, False, False, False, False, False], dtype=bool)

If there are two arrays, it blows up, even comparing to itself:

In [49]: a = np.zeros((), dtype=[('f1', 'f8', 8),('f2', 'f8', 2)])

In [50]: a == a
---------------------------------------------------------------------------
ValueError                                Traceback (most recent call last)

/home/mb312/ in ()

ValueError: shape mismatch: objects cannot be broadcast to a single shape

Is this all expected by someone?

Thanks a lot,

Matthew
From charlesr.harris at gmail.com Sat Mar 7 14:34:18 2009
From: charlesr.harris at gmail.com (Charles R Harris)
Date: Sat, 7 Mar 2009 12:34:18 -0700
Subject: [Numpy-discussion] Changeset 6557
In-Reply-To: <5b8d13220903071057i6ada4872ra64744b726bc6e8a@mail.gmail.com>

On Sat, Mar 7, 2009 at 11:57 AM, David Cournapeau wrote:
> [...]
> The compiler did not remove the conditionals corresponding to the
> switch - const or not.

I got curious to see just how it would all go together. Here is the C:

#include <stdio.h>

enum {little_endian, big_endian, unknown};

static int order(void)
{
    const union {
        int i;
        char c[sizeof(int)];
    } test = {0x01020304};

    if (test.c[0] == 1) {
        return big_endian;
    }
    else if (test.c[0] == 4) {
        return little_endian;
    }
    else {
        return unknown;
    }
}

int main(int argc, char **argv)
{
    printf("%d\n", order());
    return 0;
}

And here is the gcc -S -O2 compiled assembly:

    .file   "order.c"
    .section    .rodata.str1.1,"aMS",@progbits,1
.LC0:
    .string "%d\n"
    .text
    .p2align 4,,15
.globl main
    .type   main, @function
main:
    leal    4(%esp), %ecx
    andl    $-16, %esp
    pushl   -4(%ecx)
    pushl   %ebp
    movl    %esp, %ebp
    pushl   %ecx
    subl    $20, %esp
    movl    $0, 4(%esp)  <<<<<<<<<
    movl    $.LC0, (%esp)
    call    printf
    addl    $20, %esp
    xorl    %eax, %eax
    popl    %ecx
    popl    %ebp
    leal    -4(%ecx), %esp
    ret
    .size   main, .-main
    .ident  "GCC: (GNU) 4.3.0 20080428 (Red Hat 4.3.0-8)"
    .section    .note.GNU-stack,"",@progbits

The order function has been inlined and the return value, 0, is loaded
for printing at the marked line. That line is the only place where
anything remains of the order function. The compiler knows the return
value and puts it on the stack for the printf call, nothing is computed
at run time.

Chuck

From josef.pktd at gmail.com Sat Mar 7 18:29:50 2009
From: josef.pktd at gmail.com (josef.pktd at gmail.com)
Date: Sat, 7 Mar 2009 18:29:50 -0500
Subject: [Numpy-discussion] np.random.multinomial weird results
Message-ID: <1cd32cbb0903071529o3dcb0bc1k27c87c68ec12afc6@mail.gmail.com>

np.random.multinomial looks weird. Are these bugs, or is there something
not correct with the explanation?

Josef

From the help/docstring:

>>> np.random.multinomial(20, [1/6.]*6, size=2)
array([[3, 4, 3, 3, 4, 3],
       [2, 4, 3, 4, 0, 7]])

For the first run, we threw 3 times 1, 4 times 2, etc. For the second,
we threw 2 times 1, 4 times 2, etc.

Note: we also get a 7 in a six sided dice

some more examples with a funny shaped six sided dice:

>>> rvsmn=np.random.multinomial(20, [1/6.]*6, size=2000)
>>> for i in range(rvsmn.min(),rvsmn.max()+1):print i, (rvsmn==i).sum(0)/20.0
0 [ 2.9   2.25  2.45  2.55  2.65  2.85]
1 [  9.15   9.75  10.8   11.4   11.1   10.7 ]
2 [ 20.8   20.    20.25  19.65  18.9   19.2 ]
3 [ 23.75  24.4   23.3   22.75  23.5   23.15]
4 [ 20.85  20.8   20.4   20.95  20.15  19.25]
5 [ 12.6   12.55  12.6   12.55  13.3   14.75]
6 [ 6.4   6.65  6.95  6.55  6.8   6.35]
7 [ 2.8   2.25  2.45  2.8   2.55  2.75]
8 [ 0.5   0.85  0.55  0.55  0.85  0.85]
9 [ 0.2   0.4   0.15  0.1   0.15  0.05]
10 [ 0.05  0.1   0.1   0.1   0.05  0.1 ]
11 [ 0.    0.    0.    0.05  0.    0.  ]

>>> rvsmn=np.random.multinomial(1, [1/6.]*6, size=2000)
>>> for i in range(rvsmn.min(),rvsmn.max()+1):print i, (rvsmn==i).sum(0)/20.0
0 [ 81.9   83.35  84.85  84.25  83.7   81.95]
1 [ 18.1   16.65  15.15  15.75  16.3   18.05]

>>> rvsmn=np.random.multinomial(2, [1/6.]*6, size=2000)
>>> for i in range(rvsmn.min(),rvsmn.max()+1):print i, (rvsmn==i).sum(0)/20.0
0 [ 70.45  71.6   68.9   68.1   68.    69.75]
1 [ 26.45  26.1   28.35  28.75  29.6   27.15]
2 [ 3.1   2.3   2.75  3.15  2.4   3.1 ]

>>> rvsmn=np.random.multinomial(2000, [1/6.]*6, size=1)
>>> rvsmn.shape
(1, 6)
>>> rvsmn
array([[330, 348, 332, 326, 337, 327]])
>>> rvsmn=np.random.multinomial(2000, [1/6.]*6)
>>> rvsmn.shape
(6,)
>>> rvsmn
array([334, 322, 323, 348, 322, 351])

Note: these are the tests for multinomial

class TestMultinomial(TestCase):
    def test_basic(self):
        random.multinomial(100, [0.2, 0.8])

    def test_zero_probability(self):
        random.multinomial(100, [0.2, 0.8, 0.0, 0.0, 0.0])

From robert.kern at gmail.com Sat Mar 7 18:57:34 2009
From: robert.kern at gmail.com (Robert Kern)
Date: Sat, 7 Mar 2009 17:57:34 -0600
Subject: [Numpy-discussion] np.random.multinomial weird results
In-Reply-To: <1cd32cbb0903071529o3dcb0bc1k27c87c68ec12afc6@mail.gmail.com>
Message-ID: <3d375d730903071557u5ad7ba8dr71a13eb185e964a1@mail.gmail.com>

On Sat, Mar 7, 2009 at 17:29, wrote:
> np.random.multinomial looks weird. Are these bugs, or is there
> something not correct with the explanation?

I would like to know how you are interpreting the documentation.

>>>> np.random.multinomial(20, [1/6.]*6, size=2)
> array([[3, 4, 3, 3, 4, 3],
>        [2, 4, 3, 4, 0, 7]])
> For the first run, we threw 3 times 1, 4 times 2, etc. For the second,
> we threw 2 times 1, 4 times 2, etc.
>
> Note: we also get a 7 in a six sided dice

No you don't. That value means that in the second trial of 20 tosses,
you rolled a 6-spot seven times. The result of drawing from a
multinomial distribution is the number of times a particular result
came up, *not* the results themselves.

> some more examples with a funny shaped six sided dice:
> [...]

And? What do you think you are testing here? A more appropriate test
would be:

rvsmn = np.random.multinomial(N, np.ones(M)/M, size=L)
assert is_kinda_close(rvsmn.mean(axis=0) / N, np.ones(M)/M)

> Note: these are the tests for multinomial
>
> class TestMultinomial(TestCase):
>     def test_basic(self):
>         random.multinomial(100, [0.2, 0.8])
>
>     def test_zero_probability(self):
>         random.multinomial(100, [0.2, 0.8, 0.0, 0.0, 0.0])

These are testing that the call doesn't fail.

--
Robert Kern

"I have come to believe that the whole world is an enigma, a harmless
enigma that is made terrible by our own mad attempt to interpret it as
though it had an underlying truth." -- Umberto Eco

From josef.pktd at gmail.com Sat Mar 7 19:41:08 2009
From: josef.pktd at gmail.com (josef.pktd at gmail.com)
Date: Sat, 7 Mar 2009 19:41:08 -0500
Subject: [Numpy-discussion] np.random.multinomial weird results
In-Reply-To: <3d375d730903071557u5ad7ba8dr71a13eb185e964a1@mail.gmail.com>
Message-ID: <1cd32cbb0903071641h3aa46977i9494bf05e86f9eb8@mail.gmail.com>

On Sat, Mar 7, 2009 at 6:57 PM, Robert Kern wrote:
> [...]
> No you don't. That value means that in the second trial of 20 tosses,
> you rolled a 6-spot seven times. The result of drawing from a
> multinomial distribution is the number of times a particular result
> came up, *not* the results themselves.

Sorry, I was working on a multinomial logit distribution, and even
though I read the docstring of np.random.multinomial, I didn't pay
enough attention. So I misinterpreted what the random variable is
supposed to mean, and that didn't make any sense.

Now it looks clearer.

Thanks,

Josef
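(Robert's suggested check, spelled out as a runnable sketch - the
is_kinda_close placeholder is replaced by np.allclose with a loose
tolerance of my choosing:)

import numpy as np

N, M, L = 20, 6, 100000
rvsmn = np.random.multinomial(N, np.ones(M)/M, size=L)

# every draw distributes exactly N counts over the M outcomes
assert (rvsmn.sum(axis=1) == N).all()

# the empirical outcome frequencies approach the probability vector
assert np.allclose(rvsmn.mean(axis=0)/N, np.ones(M)/M, atol=0.01)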
URL: From cournape at gmail.com Sun Mar 8 03:12:52 2009 From: cournape at gmail.com (David Cournapeau) Date: Sun, 8 Mar 2009 16:12:52 +0900 Subject: [Numpy-discussion] Changeset 6557 In-Reply-To: References: <5b8d13220903070941y729b3d6ewe0dd46ca9e23a767@mail.gmail.com> <5b8d13220903071002v5fcdb25fq143f4a81a9cd5a1d@mail.gmail.com> <5b8d13220903071057i6ada4872ra64744b726bc6e8a@mail.gmail.com> <5b8d13220903072210m2bfd1d73i11cdffbd52b00ad3@mail.gmail.com> Message-ID: <5b8d13220903072312j1e42a18dg3abf98f74c1a865c@mail.gmail.com> On Sun, Mar 8, 2009 at 3:49 PM, Charles R Harris wrote: > So it's off to look at the > tickets and trying to fix bugs. Urrgh. Oh, and I suppose I should look into > the argmax/argmin functions and see how they handle nans. I think they don't at the moment: they have an implementation defined behavior. If you're in for ref count fun, there is a bug which I did not manage to squash when I looked at it: http://projects.scipy.org/numpy/ticket/1032 cheers, David From stefan at sun.ac.za Sun Mar 8 09:15:21 2009 From: stefan at sun.ac.za (=?ISO-8859-1?Q?St=E9fan_van_der_Walt?=) Date: Sun, 8 Mar 2009 15:15:21 +0200 Subject: [Numpy-discussion] structured array comparisons? In-Reply-To: <1e2af89e0903071109p4b85b069v6adec990faef1783@mail.gmail.com> References: <1e2af89e0903071109p4b85b069v6adec990faef1783@mail.gmail.com> Message-ID: <9457e7c80903080615j633b5236te732702119d88e74@mail.gmail.com> 2009/3/7 Matthew Brett : > If there are two arrays, it blows up, even comparing to itself: > > In [49]: a = np.zeros((), dtype=[('f1', 'f8', 8),('f2', 'f8', 2)]) I wonder what the best approach would be. To return a structured array with the same fields, but dtype changed to bool? St?fan From dsdale24 at gmail.com Sun Mar 8 10:18:24 2009 From: dsdale24 at gmail.com (Darren Dale) Date: Sun, 8 Mar 2009 10:18:24 -0400 Subject: [Numpy-discussion] segfaults when passing ndarray subclass to ufunc with out=None In-Reply-To: References: Message-ID: On Sun, Feb 8, 2009 at 12:49 PM, Darren Dale wrote: > I am seeing some really strange behavior when I try to pass an ndarray > subclass and out=None to numpy's ufuncs. This example will reproduce the > problem with svn numpy, the first print statement yields 1 as expected, the > second yields "" and the third yields a > segmentation fault: > > import numpy as np > > class MyArray(np.ndarray): > > __array_priority__ = 20 > > def __new__(cls): > return np.asarray(1).view(cls).copy() > > def __repr__(self): > return 'my_array' > > __str__ = __repr__ > > def __mul__(self, other): > return super(MyArray, self).__mul__(other) > > def __rmul__(self, other): > return super(MyArray, self).__rmul__(other) > > mine = MyArray() > print np.multiply(1, 1, None) > x = np.multiply(mine, mine, None) > print type(x) > print x > I think I might have found a fix for this. 
From dsdale24 at gmail.com Sun Mar 8 10:18:24 2009 From: dsdale24 at gmail.com (Darren Dale) Date: Sun, 8 Mar 2009 10:18:24 -0400 Subject: [Numpy-discussion] segfaults when passing ndarray subclass to ufunc with out=None In-Reply-To: References: Message-ID: On Sun, Feb 8, 2009 at 12:49 PM, Darren Dale wrote: > I am seeing some really strange behavior when I try to pass an ndarray > subclass and out=None to numpy's ufuncs. This example will reproduce the > problem with svn numpy, the first print statement yields 1 as expected, the > second yields "" and the third yields a > segmentation fault: > > import numpy as np > > class MyArray(np.ndarray): > > __array_priority__ = 20 > > def __new__(cls): > return np.asarray(1).view(cls).copy() > > def __repr__(self): > return 'my_array' > > __str__ = __repr__ > > def __mul__(self, other): > return super(MyArray, self).__mul__(other) > > def __rmul__(self, other): > return super(MyArray, self).__rmul__(other) > > mine = MyArray() > print np.multiply(1, 1, None) > x = np.multiply(mine, mine, None) > print type(x) > print x > I think I might have found a fix for this. The following patch allows my script to run without a segfault: $ svn diff Index: umath_ufunc_object.inc =================================================================== --- umath_ufunc_object.inc (revision 6566) +++ umath_ufunc_object.inc (working copy) @@ -3212,13 +3212,10 @@ output_wrap[i] = wrap; if (j < nargs) { obj = PyTuple_GET_ITEM(args, j); - if (obj == Py_None) { - continue; - } if (PyArray_CheckExact(obj)) { output_wrap[i] = Py_None; } - else { + else if (obj != Py_None) { PyObject *owrap = PyObject_GetAttrString(obj,"__array_wrap__"); incref = 0; if (!(owrap) || !(PyCallable_Check(owrap))) { That call to continue skipped this bit of code in the loop, which is apparently important: if (incref) { Py_XINCREF(output_wrap[i]); } I've tested the trunk on 64 bit linux, with and without this patch applied, and I get the same result in both cases: 1 known failure, 11 skips. Is there any chance someone could consider applying this patch before 1.3 ships? Darren -------------- next part -------------- An HTML attachment was scrubbed... URL:
From dsdale24 at gmail.com Sun Mar 8 12:31:10 2009 From: dsdale24 at gmail.com (Darren Dale) Date: Sun, 8 Mar 2009 12:31:10 -0400 Subject: [Numpy-discussion] strange multiplication behavior with numpy.float64 and ndarray subclass In-Reply-To: References: Message-ID: On Wed, Jan 21, 2009 at 12:43 PM, Pierre GM wrote: > > On Jan 21, 2009, at 11:34 AM, Darren Dale wrote: > > > I have a simple test script here that multiplies an ndarray subclass > > with another number. Can anyone help me understand why each of these > > combinations returns a new instance of MyArray: > > > > mine = MyArray() > > print type(np.float32(1)*mine) > > print type(mine*np.float32(1)) > > print type(mine*np.float64(1)) > > print type(1*mine) > > print type(mine*1) > > > > but this one returns a np.float64 instance? > > FYI, that's the same behavior as observed in ticket #826. A first > thread addressed that issue > http://www.mail-archive.com/numpy-discussion@scipy.org/msg13235.html > But so far, no answer has been suggested. > Any help welcome. I believe ticket #826 can be solved with the application of this patch: $ svn diff scalarmathmodule.c.src Index: scalarmathmodule.c.src =================================================================== --- scalarmathmodule.c.src (revision 6566) +++ scalarmathmodule.c.src (working copy) @@ -566,6 +566,10 @@ Py_DECREF(descr1); return ret; } + else if (PyArray_GetPriority(a, PyArray_SUBTYPE_PRIORITY) > \ + PyArray_SUBTYPE_PRIORITY) { + return -2; + } else if ((temp = PyArray_ScalarFromObject(a)) != NULL) { int retval; retval = _@name@_convert_to_ctype(temp, arg1); I've run the unit tests and get the same results with and without the patch applied, but it solves the problem in my script and also the problem with masked arrays. Darren -------------- next part -------------- An HTML attachment was scrubbed... URL:
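(To make the intent of the patch above concrete: with a positive __array_priority__, scalar math should defer to the subclass instead of collapsing to np.float64. A minimal sketch of the behavior the patch is meant to restore -- the class is invented for the illustration, and this is not numpy's internals:)

import numpy as np

class Mine(np.ndarray):
    # Anything above the ndarray default of 0.0 asks numpy to defer
    # to the subclass in mixed scalar/subclass operations.
    __array_priority__ = 10.0

mine = np.asarray(1.0).view(Mine)
# Before the patch this returned np.float64; with it, one expects the
# subclass to survive:
print type(np.float64(1) * mine)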
Additionally, I have updated http://numpy.scipy.org/ to point > people to the buffer interface described in PEP 3118 and implemented in > Python 2.6/3.0. Furthermore, I have suggested Cython has a way to write > code for older Pythons that will automatically support the buffer > interface in newer Pythons. > > If you have knowledge about these matters (Travis O. and Dag, > especially), I'd appreciate it if you could read over the pages to > ensure everything is actually correct. I wonder if it would make sense to redirect the page here: http://docs.scipy.org/doc/numpy/reference/arrays.interface.html so that it would be easier to edit etc. in the future? -- Pauli Virtanen From strawman at astraw.com Sun Mar 8 14:00:55 2009 From: strawman at astraw.com (Andrew Straw) Date: Sun, 08 Mar 2009 11:00:55 -0700 Subject: [Numpy-discussion] numpy documentation editor - retrieve password? Message-ID: <49B407D7.6070805@astraw.com> Hi, I created a login for the numpy documentation editor but cannot remember my password. Would it be possible to have it sent to me or a new one generated? It would be great to have a button on the website so that I could do this myself, but if that's too much pain, my username is AndrewStraw. Thanks, Andrew From gael.varoquaux at normalesup.org Sun Mar 8 14:04:45 2009 From: gael.varoquaux at normalesup.org (Gael Varoquaux) Date: Sun, 8 Mar 2009 19:04:45 +0100 Subject: [Numpy-discussion] numpy documentation editor - retrieve password? In-Reply-To: <49B407D7.6070805@astraw.com> References: <49B407D7.6070805@astraw.com> Message-ID: <20090308180445.GB14341@phare.normalesup.org> On Sun, Mar 08, 2009 at 11:00:55AM -0700, Andrew Straw wrote: > Hi, > I created a login for the numpy documentation editor but cannot remember > my password. Would it be possible to have it sent to me or a new one > generated? It would be great to have a button on the website so that I > could do this myself, but if that's too much pain, my username is > AndrewStraw. I'm on it (just so other admins don't change the password twice). Ga?l From strawman at astraw.com Sun Mar 8 15:00:22 2009 From: strawman at astraw.com (Andrew Straw) Date: Sun, 08 Mar 2009 12:00:22 -0700 Subject: [Numpy-discussion] N-D array interface page is out of date In-Reply-To: References: <49791FA1.3020803@astraw.com> <4987FDE9.5030303@astraw.com> <9457e7c80902030506l7094e8d3x33996b861f61bff8@mail.gmail.com> <49B15A71.1070307@astraw.com> Message-ID: <49B415C6.5070009@astraw.com> Pauli Virtanen wrote: > Hi, > > Fri, 06 Mar 2009 09:16:33 -0800, Andrew Straw wrote: >> I have updated http://numpy.scipy.org/array_interface.shtml to have a >> giant warning first paragraph describing how that information is >> outdated. Additionally, I have updated http://numpy.scipy.org/ to point >> people to the buffer interface described in PEP 3118 and implemented in >> Python 2.6/3.0. Furthermore, I have suggested Cython has a way to write >> code for older Pythons that will automatically support the buffer >> interface in newer Pythons. >> >> If you have knowledge about these matters (Travis O. and Dag, >> especially), I'd appreciate it if you could read over the pages to >> ensure everything is actually correct. > > I wonder if it would make sense to redirect the page here: > > http://docs.scipy.org/doc/numpy/reference/arrays.interface.html > > so that it would be easier to edit etc. in the future? > Yes, great idea. I just updated the page to point to the page you linked (which I didn't know existed -- thanks for pointing it out). 
Also, I have made several changes to arrays.interface.rst which I will upload once my password situation gets resolved.
From strawman at astraw.com Sun Mar 8 15:02:58 2009 From: strawman at astraw.com (Andrew Straw) Date: Sun, 08 Mar 2009 12:02:58 -0700 Subject: [Numpy-discussion] numpy.scipy.org Message-ID: <49B41662.10108@astraw.com> Hi all, I have been doing some editing of http://numpy.scipy.org . In general, however, lots of this page is redundant and outdated compared to lots of other documentation that has now sprung up. Shall we kill this page off, redirect it to another page, or continue updating it? (For this latter option, patches are welcome.) -Andrew
From dsdale24 at gmail.com Sun Mar 8 15:04:29 2009 From: dsdale24 at gmail.com (Darren Dale) Date: Sun, 8 Mar 2009 15:04:29 -0400 Subject: [Numpy-discussion] possible bug: __array_wrap__ is not called during arithmetic operations in some cases In-Reply-To: References: <49A1DE17.4070001@hawaii.edu> <444F6AEF-9255-4350-AE1B-B9494A039D43@gmail.com> Message-ID: On Sat, Mar 7, 2009 at 1:23 PM, Darren Dale wrote: > On Sun, Feb 22, 2009 at 7:01 PM, Darren Dale wrote: > >> On Sun, Feb 22, 2009 at 6:35 PM, Darren Dale wrote: >> >>> On Sun, Feb 22, 2009 at 6:28 PM, Pierre GM wrote: >>> >>>> >>>> On Feb 22, 2009, at 6:21 PM, Eric Firing wrote: >>>> >>>> > Darren Dale wrote: >>>> >> Does anyone know why __array_wrap__ is not called for subclasses >>>> >> during >>>> >> arithmetic operations where an iterable like a list or tuple >>>> >> appears to >>>> >> the right of the subclass? When I do "mine*[1,2,3]", array_wrap is >>>> >> not >>>> >> called and I get an ndarray instead of a MyArray. "[1,2,3]*mine" is >>>> >> fine, as is "mine*array([1,2,3])". I see the same issue with >>>> >> division, >>>> > >>>> > The masked array subclass does not show this behavior: >>>> >>>> Because MaskedArray.__mul__ and others are redefined. >>>> >>>> Darren, you can fix your problem by redefining MyArray.__mul__ as: >>>> >>>> def __mul__(self, other): >>>> return np.ndarray.__mul__(self, np.asanyarray(other)) >>>> >>>> forcing the second term to be a ndarray (or a subclass of). You can do >>>> the same thing for the other functions (__add__, __radd__, ...) >>> >>> >>> Thanks for the suggestion. I know this can be done, but ufuncs like >>> np.multiply(mine,[1,2,3]) will still not work. Plus, if I reimplement these >>> methods, I take some small performance hit. I've been putting a lot of work >>> in lately to get quantities to work with numpy's stock ufuncs. >>> >> >> I should point out: >> >> import numpy as np >> >> a=np.array([1,2,3,4]) >> b=np.ma.masked_where(a>2,a) >> np.multiply([1,2,3,4],b) # yields a masked array >> np.multiply(b,[1,2,3,4]) # yields an ndarray >> >> > I'm not familiar with the numpy codebase, could anyone help me figure out > where I should look to try to fix this bug? I've got a nice set of > generators that work with nosetools to test all combinations of numerical > dtypes, including combinations of scalars, arrays, and iterables of each > type. In my quantities package, just testing multiplication yields 1031 > failures, all of which appear to be caused by this bug (#1026 on trak) or > bug #826. I finally managed to track down the source of this problem. _find_array_wrap steps through the inputs, asking each of them for their __array_wrap__ and binding it to wrap. If more than one input defines __array_wrap__, you enter a block that selects one based on array priority, and binds it back to wrap.
The problem was when the first input defines array_wrap but the second one does not. In that case, _find_array_wrap never bothered to rebind the desired wraps[0] to wrap, so wrap remains Null or None, and wrap is what is returned to the calling function. I've tested numpy with this patch applied, and didn't see any regressions. Would someone please consider committing it? Thanks, Darren $ svn diff numpy/core/src/umath_ufunc_object.inc Index: numpy/core/src/umath_ufunc_object.inc =================================================================== --- numpy/core/src/umath_ufunc_object.inc (revision 6569) +++ numpy/core/src/umath_ufunc_object.inc (working copy) @@ -3173,8 +3173,10 @@ PyErr_Clear(); } } + if (np >= 1) { + wrap = wraps[0]; + } if (np >= 2) { - wrap = wraps[0]; maxpriority = PyArray_GetPriority(with_wrap[0], PyArray_SUBTYPE_PRIORITY); for (i = 1; i < np; ++i) { -------------- next part -------------- An HTML attachment was scrubbed... URL: From python-ml at nn7.de Sun Mar 8 16:44:46 2009 From: python-ml at nn7.de (Soeren Sonnenburg) Date: Sun, 08 Mar 2009 21:44:46 +0100 Subject: [Numpy-discussion] [RFC] running r, octave from python Message-ID: <1236545086.23095.26.camel@localhost> Dear all, a Shogun 0.7.1 is out and available at http://www.shogun-toolbox.org which contains one new feature that might be of interest to python-scipy/numpy users. The eierlegendewollmilchsau interface. In case you don't know what this term stands for use google images :-) It is one file that will interface shogun to octave,r,python,matlab. It provides commands to run code in foreign languages: Example: from elwms import elwms import numpy x=numpy.array([[1,2,3],[4,5,6]],dtype=numpy.float64) y=numpy.array([[7,8,9],[0,1,2]],dtype=numpy.float64) elwms('run_octave','octavecode', 'disp("hi")') a,b,c=elwms('run_octave','x', x, 'y', y, 'octavecode', 'class(x), disp(x),results=list(x+y,1,{"a"})') res1=elwms('run_octave','x', x, 'y', y, 'octavecode', 'disp(x); disp(y); results=x+y+rand(2,3)\n') res2=elwms('run_octave','A', ['test','bla','foo'], 'octavecode', ''' disp(A); disp("hi"); results={"a","b","c"} ''') This would pass around matrices x and y do some processing and return results. So you could use your old matlab scripts passing around strings cells, or whatever (u)int8/16/32, single/double matrix type. See http://www.shogun-toolbox.org/doc/elwmsinterface.html . Don't even try to run octave from python from octave etc nested. Neither octave, R nor python-numpy nor libshogun supports this :-) Soeren From charlesr.harris at gmail.com Sun Mar 8 16:48:03 2009 From: charlesr.harris at gmail.com (Charles R Harris) Date: Sun, 8 Mar 2009 14:48:03 -0600 Subject: [Numpy-discussion] possible bug: __array_wrap__ is not called during arithmetic operations in some cases In-Reply-To: References: <49A1DE17.4070001@hawaii.edu> <444F6AEF-9255-4350-AE1B-B9494A039D43@gmail.com> Message-ID: On Sun, Mar 8, 2009 at 1:04 PM, Darren Dale wrote: > On Sat, Mar 7, 2009 at 1:23 PM, Darren Dale wrote: > >> On Sun, Feb 22, 2009 at 7:01 PM, Darren Dale wrote: >> >>> On Sun, Feb 22, 2009 at 6:35 PM, Darren Dale wrote: >>> >>>> On Sun, Feb 22, 2009 at 6:28 PM, Pierre GM wrote: >>>> >>>>> >>>>> On Feb 22, 2009, at 6:21 PM, Eric Firing wrote: >>>>> >>>>> > Darren Dale wrote: >>>>> >> Does anyone know why __array_wrap__ is not called for subclasses >>>>> >> during >>>>> >> arithmetic operations where an iterable like a list or tuple >>>>> >> appears to >>>>> >> the right of the subclass? 
When I do "mine*[1,2,3]", array_wrap is >>>>> >> not >>>>> >> called and I get an ndarray instead of a MyArray. "[1,2,3]*mine" is >>>>> >> fine, as is "mine*array([1,2,3])". I see the same issue with >>>>> >> division, >>>>> > >>>>> > The masked array subclass does not show this behavior: >>>>> >>>>> Because MaskedArray.__mul__ and others are redefined. >>>>> >>>>> Darren, you can fix your problem by redefining MyArray.__mul__ as: >>>>> >>>>> def __mul__(self, other): >>>>> return np.ndarray.__mul__(self, np.asanyarray(other)) >>>>> >>>>> forcing the second term to be a ndarray (or a subclass of). You can do >>>>> the same thing for the other functions (__add__, __radd__, ...) >>>> >>>> >>>> Thanks for the suggestion. I know this can be done, but ufuncs like >>>> np.multiply(mine,[1,2,3]) will still not work. Plus, if I reimplement these >>>> methods, I take some small performance hit. I've been putting a lot of work >>>> in lately to get quantities to work with numpy's stock ufuncs. >>>> >>> >>> I should point out: >>> >>> import numpy as np >>> >>> a=np.array([1,2,3,4]) >>> b=np.ma.masked_where(a>2,a) >>> np.multiply([1,2,3,4],b) # yields a masked array >>> np.multiply(b,[1,2,3,4]) # yields an ndarray >>> >>> >> I'm not familiar with the numpy codebase, could anyone help me figure out >> where I should look to try to fix this bug? I've got a nice set of >> generators that work with nosetools to test all combinations of numerical >> dtypes, including combinations of scalars, arrays, and iterables of each >> type. In my quantities package, just testing multiplication yields 1031 >> failures, all of which appear to be caused by this bug (#1026 on trak) or >> bug #826. > > > > I finally managed to track done the source of this problem. > _find_array_wrap steps through the inputs, asking each of them for their > __array_wrap__ and binding it to wrap. If more than one input defines > __array_wrap__, you enter a block that selects one based on array priority, > and binds it back to wrap. The problem was when the first input defines > array_wrap but the second one does not. In that case, _find_array_wrap never > bothered to rebind the desired wraps[0] to wrap, so wrap remains Null or > None, and wrap is what is returned to the calling function. > > I've tested numpy with this patch applied, and didn't see any regressions. > Would someone please consider committing it? > > Thanks, > Darren > > $ svn diff numpy/core/src/umath_ufunc_object.inc > Index: numpy/core/src/umath_ufunc_object.inc > =================================================================== > --- numpy/core/src/umath_ufunc_object.inc (revision 6569) > +++ numpy/core/src/umath_ufunc_object.inc (working copy) > @@ -3173,8 +3173,10 @@ > PyErr_Clear(); > } > } > + if (np >= 1) { > + wrap = wraps[0]; > + } > if (np >= 2) { > - wrap = wraps[0]; > maxpriority = PyArray_GetPriority(with_wrap[0], > PyArray_SUBTYPE_PRIORITY); > for (i = 1; i < np; ++i) { > Applied in r6573. Thanks. Chuck -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From charlesr.harris at gmail.com Sun Mar 8 16:54:46 2009 From: charlesr.harris at gmail.com (Charles R Harris) Date: Sun, 8 Mar 2009 14:54:46 -0600 Subject: [Numpy-discussion] possible bug: __array_wrap__ is not called during arithmetic operations in some cases In-Reply-To: References: <49A1DE17.4070001@hawaii.edu> <444F6AEF-9255-4350-AE1B-B9494A039D43@gmail.com> Message-ID: On Sun, Mar 8, 2009 at 2:48 PM, Charles R Harris wrote: > > > On Sun, Mar 8, 2009 at 1:04 PM, Darren Dale wrote: > >> On Sat, Mar 7, 2009 at 1:23 PM, Darren Dale wrote: >> >>> On Sun, Feb 22, 2009 at 7:01 PM, Darren Dale wrote: >>> >>>> On Sun, Feb 22, 2009 at 6:35 PM, Darren Dale wrote: >>>> >>>>> On Sun, Feb 22, 2009 at 6:28 PM, Pierre GM wrote: >>>>> >>>>>> >>>>>> On Feb 22, 2009, at 6:21 PM, Eric Firing wrote: >>>>>> >>>>>> > Darren Dale wrote: >>>>>> >> Does anyone know why __array_wrap__ is not called for subclasses >>>>>> >> during >>>>>> >> arithmetic operations where an iterable like a list or tuple >>>>>> >> appears to >>>>>> >> the right of the subclass? When I do "mine*[1,2,3]", array_wrap is >>>>>> >> not >>>>>> >> called and I get an ndarray instead of a MyArray. "[1,2,3]*mine" is >>>>>> >> fine, as is "mine*array([1,2,3])". I see the same issue with >>>>>> >> division, >>>>>> > >>>>>> > The masked array subclass does not show this behavior: >>>>>> >>>>>> Because MaskedArray.__mul__ and others are redefined. >>>>>> >>>>>> Darren, you can fix your problem by redefining MyArray.__mul__ as: >>>>>> >>>>>> def __mul__(self, other): >>>>>> return np.ndarray.__mul__(self, np.asanyarray(other)) >>>>>> >>>>>> forcing the second term to be a ndarray (or a subclass of). You can do >>>>>> the same thing for the other functions (__add__, __radd__, ...) >>>>> >>>>> >>>>> Thanks for the suggestion. I know this can be done, but ufuncs like >>>>> np.multiply(mine,[1,2,3]) will still not work. Plus, if I reimplement these >>>>> methods, I take some small performance hit. I've been putting a lot of work >>>>> in lately to get quantities to work with numpy's stock ufuncs. >>>>> >>>> >>>> I should point out: >>>> >>>> import numpy as np >>>> >>>> a=np.array([1,2,3,4]) >>>> b=np.ma.masked_where(a>2,a) >>>> np.multiply([1,2,3,4],b) # yields a masked array >>>> np.multiply(b,[1,2,3,4]) # yields an ndarray >>>> >>>> >>> I'm not familiar with the numpy codebase, could anyone help me figure out >>> where I should look to try to fix this bug? I've got a nice set of >>> generators that work with nosetools to test all combinations of numerical >>> dtypes, including combinations of scalars, arrays, and iterables of each >>> type. In my quantities package, just testing multiplication yields 1031 >>> failures, all of which appear to be caused by this bug (#1026 on trak) or >>> bug #826. >> >> >> >> I finally managed to track done the source of this problem. >> _find_array_wrap steps through the inputs, asking each of them for their >> __array_wrap__ and binding it to wrap. If more than one input defines >> __array_wrap__, you enter a block that selects one based on array priority, >> and binds it back to wrap. The problem was when the first input defines >> array_wrap but the second one does not. In that case, _find_array_wrap never >> bothered to rebind the desired wraps[0] to wrap, so wrap remains Null or >> None, and wrap is what is returned to the calling function. >> >> I've tested numpy with this patch applied, and didn't see any regressions. >> Would someone please consider committing it? 
>> >> Thanks, >> Darren >> >> $ svn diff numpy/core/src/umath_ufunc_object.inc >> Index: numpy/core/src/umath_ufunc_object.inc >> =================================================================== >> --- numpy/core/src/umath_ufunc_object.inc (revision 6569) >> +++ numpy/core/src/umath_ufunc_object.inc (working copy) >> @@ -3173,8 +3173,10 @@ >> PyErr_Clear(); >> } >> } >> + if (np >= 1) { >> + wrap = wraps[0]; >> + } >> if (np >= 2) { >> - wrap = wraps[0]; >> maxpriority = PyArray_GetPriority(with_wrap[0], >> PyArray_SUBTYPE_PRIORITY); >> for (i = 1; i < np; ++i) { >> > > Applied in r6573. Thanks. > Oh, and can you provide a test for this fix? Yes, I'll send a patch for a test as soon as it's ready. 6573 closes two tickets, 1026 and 1022. Did you see the patch I sent for issue #826? It is also posted at the bug report. Darren -------------- next part -------------- An HTML attachment was scrubbed... URL:
From dsdale24 at gmail.com Sun Mar 8 17:27:15 2009 From: dsdale24 at gmail.com (Darren Dale) Date: Sun, 8 Mar 2009 17:27:15 -0400 Subject: [Numpy-discussion] possible bug: __array_wrap__ is not called during arithmetic operations in some cases In-Reply-To: References: <49A1DE17.4070001@hawaii.edu> <444F6AEF-9255-4350-AE1B-B9494A039D43@gmail.com> Message-ID: On Sun, Mar 8, 2009 at 5:02 PM, Darren Dale wrote: > On Sun, Mar 8, 2009 at 4:54 PM, Charles R Harris < > charlesr.harris at gmail.com> wrote: > >> >> >> On Sun, Mar 8, 2009 at 2:48 PM, Charles R Harris < >> charlesr.harris at gmail.com> wrote: >> >>> >>> >>> On Sun, Mar 8, 2009 at 1:04 PM, Darren Dale wrote: >>> >>>> On Sat, Mar 7, 2009 at 1:23 PM, Darren Dale wrote: >>>> >>>>> On Sun, Feb 22, 2009 at 7:01 PM, Darren Dale wrote: >>>>> >>>>>> On Sun, Feb 22, 2009 at 6:35 PM, Darren Dale wrote: >>>>>> >>>>>>> On Sun, Feb 22, 2009 at 6:28 PM, Pierre GM wrote: >>>>>>> >>>>>>>> >>>>>>>> On Feb 22, 2009, at 6:21 PM, Eric Firing wrote: >>>>>>>> >>>>>>>> > Darren Dale wrote: >>>>>>>> >> Does anyone know why __array_wrap__ is not called for subclasses >>>>>>>> >> during >>>>>>>> >> arithmetic operations where an iterable like a list or tuple >>>>>>>> >> appears to >>>>>>>> >> the right of the subclass? When I do "mine*[1,2,3]", array_wrap >>>>>>>> is >>>>>>>> >> not >>>>>>>> >> called and I get an ndarray instead of a MyArray. "[1,2,3]*mine" >>>>>>>> is >>>>>>>> >> fine, as is "mine*array([1,2,3])". I see the same issue with >>>>>>>> >> division, >>>>>>>> > >>>>>>>> > The masked array subclass does not show this behavior: >>>>>>>> >>>>>>>> Because MaskedArray.__mul__ and others are redefined. >>>>>>>> >>>>>>>> Darren, you can fix your problem by redefining MyArray.__mul__ as: >>>>>>>> >>>>>>>> def __mul__(self, other): >>>>>>>> return np.ndarray.__mul__(self, np.asanyarray(other)) >>>>>>>> >>>>>>>> forcing the second term to be a ndarray (or a subclass of). You can >>>>>>>> do >>>>>>>> the same thing for the other functions (__add__, __radd__, ...) >>>>>>> >>>>>>> >>>>>>> Thanks for the suggestion. I know this can be done, but ufuncs like >>>>>>> np.multiply(mine,[1,2,3]) will still not work. Plus, if I reimplement these >>>>>>> methods, I take some small performance hit. I've been putting a lot of work >>>>>>> in lately to get quantities to work with numpy's stock ufuncs. >>>>>>> >>>>>> >>>>>> I should point out: >>>>>> >>>>>> import numpy as np >>>>>> >>>>>> a=np.array([1,2,3,4]) >>>>>> b=np.ma.masked_where(a>2,a) >>>>>> np.multiply([1,2,3,4],b) # yields a masked array >>>>>> np.multiply(b,[1,2,3,4]) # yields an ndarray >>>>>> >>>>>> >>>>> I'm not familiar with the numpy codebase, could anyone help me figure >>>>> out where I should look to try to fix this bug? I've got a nice set of >>>>> generators that work with nosetools to test all combinations of numerical >>>>> dtypes, including combinations of scalars, arrays, and iterables of each >>>>> type.
In my quantities package, just testing multiplication yields 1031 >>>> failures, all of which appear to be caused by this bug (#1026 on trak) or >>>> bug #826. >>> >>> >>> >>> I finally managed to track done the source of this problem. >>> _find_array_wrap steps through the inputs, asking each of them for their >>> __array_wrap__ and binding it to wrap. If more than one input defines >>> __array_wrap__, you enter a block that selects one based on array priority, >>> and binds it back to wrap. The problem was when the first input defines >>> array_wrap but the second one does not. In that case, _find_array_wrap never >>> bothered to rebind the desired wraps[0] to wrap, so wrap remains Null or >>> None, and wrap is what is returned to the calling function. >>> >>> I've tested numpy with this patch applied, and didn't see any >>> regressions. Would someone please consider committing it? >>> >>> Thanks, >>> Darren >>> >>> $ svn diff numpy/core/src/umath_ufunc_object.inc >>> Index: numpy/core/src/umath_ufunc_object.inc >>> =================================================================== >>> --- numpy/core/src/umath_ufunc_object.inc (revision 6569) >>> +++ numpy/core/src/umath_ufunc_object.inc (working copy) >>> @@ -3173,8 +3173,10 @@ >>> PyErr_Clear(); >>> } >>> } >>> + if (np >= 1) { >>> + wrap = wraps[0]; >>> + } >>> if (np >= 2) { >>> - wrap = wraps[0]; >>> maxpriority = PyArray_GetPriority(with_wrap[0], >>> PyArray_SUBTYPE_PRIORITY); >>> for (i = 1; i < np; ++i) { >>> >> >> Applied in r6573. Thanks. >> > > Oh, and can you provide a test for this fix? > Yes, I'll send a patch for a test as soon as its ready. 6573 closes two tickets, 1026 and 1022. Did you see the patch I sent for issue #826? It is also posted at the bug report. Darren -------------- next part -------------- An HTML attachment was scrubbed... URL: From dsdale24 at gmail.com Sun Mar 8 17:27:15 2009 From: dsdale24 at gmail.com (Darren Dale) Date: Sun, 8 Mar 2009 17:27:15 -0400 Subject: [Numpy-discussion] possible bug: __array_wrap__ is not called during arithmetic operations in some cases In-Reply-To: References: <49A1DE17.4070001@hawaii.edu> <444F6AEF-9255-4350-AE1B-B9494A039D43@gmail.com> Message-ID: On Sun, Mar 8, 2009 at 5:02 PM, Darren Dale wrote: > On Sun, Mar 8, 2009 at 4:54 PM, Charles R Harris < > charlesr.harris at gmail.com> wrote: > >> >> >> On Sun, Mar 8, 2009 at 2:48 PM, Charles R Harris < >> charlesr.harris at gmail.com> wrote: >> >>> >>> >>> On Sun, Mar 8, 2009 at 1:04 PM, Darren Dale wrote: >>> >>>> On Sat, Mar 7, 2009 at 1:23 PM, Darren Dale wrote: >>>> >>>>> On Sun, Feb 22, 2009 at 7:01 PM, Darren Dale wrote: >>>>> >>>>>> On Sun, Feb 22, 2009 at 6:35 PM, Darren Dale wrote: >>>>>> >>>>>>> On Sun, Feb 22, 2009 at 6:28 PM, Pierre GM wrote: >>>>>>> >>>>>>>> >>>>>>>> On Feb 22, 2009, at 6:21 PM, Eric Firing wrote: >>>>>>>> >>>>>>>> > Darren Dale wrote: >>>>>>>> >> Does anyone know why __array_wrap__ is not called for subclasses >>>>>>>> >> during >>>>>>>> >> arithmetic operations where an iterable like a list or tuple >>>>>>>> >> appears to >>>>>>>> >> the right of the subclass? When I do "mine*[1,2,3]", array_wrap >>>>>>>> is >>>>>>>> >> not >>>>>>>> >> called and I get an ndarray instead of a MyArray. "[1,2,3]*mine" >>>>>>>> is >>>>>>>> >> fine, as is "mine*array([1,2,3])". I see the same issue with >>>>>>>> >> division, >>>>>>>> > >>>>>>>> > The masked array subclass does not show this behavior: >>>>>>>> >>>>>>>> Because MaskedArray.__mul__ and others are redefined. 
>>>>>>>> >>>>>>>> Darren, you can fix your problem by redefining MyArray.__mul__ as: >>>>>>>> >>>>>>>> def __mul__(self, other): >>>>>>>> return np.ndarray.__mul__(self, np.asanyarray(other)) >>>>>>>> >>>>>>>> forcing the second term to be a ndarray (or a subclass of). You can >>>>>>>> do >>>>>>>> the same thing for the other functions (__add__, __radd__, ...) >>>>>>> >>>>>>> >>>>>>> Thanks for the suggestion. I know this can be done, but ufuncs like >>>>>>> np.multiply(mine,[1,2,3]) will still not work. Plus, if I reimplement these >>>>>>> methods, I take some small performance hit. I've been putting a lot of work >>>>>>> in lately to get quantities to work with numpy's stock ufuncs. >>>>>>> >>>>>> >>>>>> I should point out: >>>>>> >>>>>> import numpy as np >>>>>> >>>>>> a=np.array([1,2,3,4]) >>>>>> b=np.ma.masked_where(a>2,a) >>>>>> np.multiply([1,2,3,4],b) # yields a masked array >>>>>> np.multiply(b,[1,2,3,4]) # yields an ndarray >>>>>> >>>>>> >>>>> I'm not familiar with the numpy codebase, could anyone help me figure >>>>> out where I should look to try to fix this bug? I've got a nice set of >>>>> generators that work with nosetools to test all combinations of numerical >>>>> dtypes, including combinations of scalars, arrays, and iterables of each >>>>> type. In my quantities package, just testing multiplication yields 1031 >>>>> failures, all of which appear to be caused by this bug (#1026 on trak) or >>>>> bug #826. >>>> >>>> >>>> >>>> I finally managed to track done the source of this problem. >>>> _find_array_wrap steps through the inputs, asking each of them for their >>>> __array_wrap__ and binding it to wrap. If more than one input defines >>>> __array_wrap__, you enter a block that selects one based on array priority, >>>> and binds it back to wrap. The problem was when the first input defines >>>> array_wrap but the second one does not. In that case, _find_array_wrap never >>>> bothered to rebind the desired wraps[0] to wrap, so wrap remains Null or >>>> None, and wrap is what is returned to the calling function. >>>> >>>> I've tested numpy with this patch applied, and didn't see any >>>> regressions. Would someone please consider committing it? >>>> >>>> Thanks, >>>> Darren >>>> >>>> $ svn diff numpy/core/src/umath_ufunc_object.inc >>>> Index: numpy/core/src/umath_ufunc_object.inc >>>> =================================================================== >>>> --- numpy/core/src/umath_ufunc_object.inc (revision 6569) >>>> +++ numpy/core/src/umath_ufunc_object.inc (working copy) >>>> @@ -3173,8 +3173,10 @@ >>>> PyErr_Clear(); >>>> } >>>> } >>>> + if (np >= 1) { >>>> + wrap = wraps[0]; >>>> + } >>>> if (np >= 2) { >>>> - wrap = wraps[0]; >>>> maxpriority = PyArray_GetPriority(with_wrap[0], >>>> PyArray_SUBTYPE_PRIORITY); >>>> for (i = 1; i < np; ++i) { >>>> >>> >>> Applied in r6573. Thanks. >>> >> >> Oh, and can you provide a test for this fix? >> > > Yes, I'll send a patch for a test as soon as its ready. 6573 closes two > tickets, 1026 and 1022. Did you see the patch I sent for issue #826? It is > also posted at the bug report. 
Index: numpy/core/tests/test_umath.py =================================================================== --- numpy/core/tests/test_umath.py (revision 6573) +++ numpy/core/tests/test_umath.py (working copy) @@ -240,6 +240,19 @@ assert_equal(args[1], a) self.failUnlessEqual(i, 0) + def test_wrap_with_iterable(self): + # test fix for bug #1026: + class with_wrap(np.ndarray): + __array_priority = 10 + def __new__(cls): + return np.asarray(1).view(cls).copy() + def __array_wrap__(self, arr, context): + return arr.view(type(self)) + a = with_wrap() + x = ncu.multiply(a, (1, 2, 3)) + self.failUnless(isinstance(x, with_wrap)) + assert_array_equal(x, np.array((1, 2, 3))) + def test_old_wrap(self): class with_wrap(object): def __array__(self): -------------- next part -------------- An HTML attachment was scrubbed... URL: From charlesr.harris at gmail.com Sun Mar 8 17:55:55 2009 From: charlesr.harris at gmail.com (Charles R Harris) Date: Sun, 8 Mar 2009 15:55:55 -0600 Subject: [Numpy-discussion] possible bug: __array_wrap__ is not called during arithmetic operations in some cases In-Reply-To: References: <49A1DE17.4070001@hawaii.edu> <444F6AEF-9255-4350-AE1B-B9494A039D43@gmail.com> Message-ID: On Sun, Mar 8, 2009 at 3:02 PM, Darren Dale wrote: > On Sun, Mar 8, 2009 at 4:54 PM, Charles R Harris < > charlesr.harris at gmail.com> wrote: > >> >> >> On Sun, Mar 8, 2009 at 2:48 PM, Charles R Harris < >> charlesr.harris at gmail.com> wrote: >> >>> >>> >>> On Sun, Mar 8, 2009 at 1:04 PM, Darren Dale wrote: >>> >>>> On Sat, Mar 7, 2009 at 1:23 PM, Darren Dale wrote: >>>> >>>>> On Sun, Feb 22, 2009 at 7:01 PM, Darren Dale wrote: >>>>> >>>>>> On Sun, Feb 22, 2009 at 6:35 PM, Darren Dale wrote: >>>>>> >>>>>>> On Sun, Feb 22, 2009 at 6:28 PM, Pierre GM wrote: >>>>>>> >>>>>>>> >>>>>>>> On Feb 22, 2009, at 6:21 PM, Eric Firing wrote: >>>>>>>> >>>>>>>> > Darren Dale wrote: >>>>>>>> >> Does anyone know why __array_wrap__ is not called for subclasses >>>>>>>> >> during >>>>>>>> >> arithmetic operations where an iterable like a list or tuple >>>>>>>> >> appears to >>>>>>>> >> the right of the subclass? When I do "mine*[1,2,3]", array_wrap >>>>>>>> is >>>>>>>> >> not >>>>>>>> >> called and I get an ndarray instead of a MyArray. "[1,2,3]*mine" >>>>>>>> is >>>>>>>> >> fine, as is "mine*array([1,2,3])". I see the same issue with >>>>>>>> >> division, >>>>>>>> > >>>>>>>> > The masked array subclass does not show this behavior: >>>>>>>> >>>>>>>> Because MaskedArray.__mul__ and others are redefined. >>>>>>>> >>>>>>>> Darren, you can fix your problem by redefining MyArray.__mul__ as: >>>>>>>> >>>>>>>> def __mul__(self, other): >>>>>>>> return np.ndarray.__mul__(self, np.asanyarray(other)) >>>>>>>> >>>>>>>> forcing the second term to be a ndarray (or a subclass of). You can >>>>>>>> do >>>>>>>> the same thing for the other functions (__add__, __radd__, ...) >>>>>>> >>>>>>> >>>>>>> Thanks for the suggestion. I know this can be done, but ufuncs like >>>>>>> np.multiply(mine,[1,2,3]) will still not work. Plus, if I reimplement these >>>>>>> methods, I take some small performance hit. I've been putting a lot of work >>>>>>> in lately to get quantities to work with numpy's stock ufuncs. 
>>>>>>> >>>>>> >>>>>> I should point out: >>>>>> >>>>>> import numpy as np >>>>>> >>>>>> a=np.array([1,2,3,4]) >>>>>> b=np.ma.masked_where(a>2,a) >>>>>> np.multiply([1,2,3,4],b) # yields a masked array >>>>>> np.multiply(b,[1,2,3,4]) # yields an ndarray >>>>>> >>>>>> >>>>> I'm not familiar with the numpy codebase, could anyone help me figure >>>>> out where I should look to try to fix this bug? I've got a nice set of >>>>> generators that work with nosetools to test all combinations of numerical >>>>> dtypes, including combinations of scalars, arrays, and iterables of each >>>>> type. In my quantities package, just testing multiplication yields 1031 >>>>> failures, all of which appear to be caused by this bug (#1026 on trak) or >>>>> bug #826. >>>> >>>> >>>> >>>> I finally managed to track done the source of this problem. >>>> _find_array_wrap steps through the inputs, asking each of them for their >>>> __array_wrap__ and binding it to wrap. If more than one input defines >>>> __array_wrap__, you enter a block that selects one based on array priority, >>>> and binds it back to wrap. The problem was when the first input defines >>>> array_wrap but the second one does not. In that case, _find_array_wrap never >>>> bothered to rebind the desired wraps[0] to wrap, so wrap remains Null or >>>> None, and wrap is what is returned to the calling function. >>>> >>>> I've tested numpy with this patch applied, and didn't see any >>>> regressions. Would someone please consider committing it? >>>> >>>> Thanks, >>>> Darren >>>> >>>> $ svn diff numpy/core/src/umath_ufunc_object.inc >>>> Index: numpy/core/src/umath_ufunc_object.inc >>>> =================================================================== >>>> --- numpy/core/src/umath_ufunc_object.inc (revision 6569) >>>> +++ numpy/core/src/umath_ufunc_object.inc (working copy) >>>> @@ -3173,8 +3173,10 @@ >>>> PyErr_Clear(); >>>> } >>>> } >>>> + if (np >= 1) { >>>> + wrap = wraps[0]; >>>> + } >>>> if (np >= 2) { >>>> - wrap = wraps[0]; >>>> maxpriority = PyArray_GetPriority(with_wrap[0], >>>> PyArray_SUBTYPE_PRIORITY); >>>> for (i = 1; i < np; ++i) { >>>> >>> >>> Applied in r6573. Thanks. >>> >> >> Oh, and can you provide a test for this fix? >> > > > Yes, I'll send a patch for a test as soon as its ready. 6573 closes two > tickets, 1026 and 1022. Did you see the patch I sent for issue #826? It is > also posted at the bug report. > Applied in r6574 . I'm not all that familiar with the priority machinery but the patch looked like it wouldn't break anything and since it fixes things for you, in it went. I haven't closed the ticket yet but will do so if you provide some tests. Chuck -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From charlesr.harris at gmail.com Sun Mar 8 18:04:48 2009 From: charlesr.harris at gmail.com (Charles R Harris) Date: Sun, 8 Mar 2009 16:04:48 -0600 Subject: [Numpy-discussion] possible bug: __array_wrap__ is not called during arithmetic operations in some cases In-Reply-To: References: <444F6AEF-9255-4350-AE1B-B9494A039D43@gmail.com> Message-ID: On Sun, Mar 8, 2009 at 3:27 PM, Darren Dale wrote: > On Sun, Mar 8, 2009 at 5:02 PM, Darren Dale wrote: > >> On Sun, Mar 8, 2009 at 4:54 PM, Charles R Harris < >> charlesr.harris at gmail.com> wrote: >> >>> >>> >>> On Sun, Mar 8, 2009 at 2:48 PM, Charles R Harris < >>> charlesr.harris at gmail.com> wrote: >>> >>>> >>>> >>>> On Sun, Mar 8, 2009 at 1:04 PM, Darren Dale wrote: >>>> >>>>> On Sat, Mar 7, 2009 at 1:23 PM, Darren Dale wrote: >>>>> >>>>>> On Sun, Feb 22, 2009 at 7:01 PM, Darren Dale wrote: >>>>>> >>>>>>> On Sun, Feb 22, 2009 at 6:35 PM, Darren Dale wrote: >>>>>>> >>>>>>>> On Sun, Feb 22, 2009 at 6:28 PM, Pierre GM wrote: >>>>>>>> >>>>>>>>> >>>>>>>>> On Feb 22, 2009, at 6:21 PM, Eric Firing wrote: >>>>>>>>> >>>>>>>>> > Darren Dale wrote: >>>>>>>>> >> Does anyone know why __array_wrap__ is not called for subclasses >>>>>>>>> >> during >>>>>>>>> >> arithmetic operations where an iterable like a list or tuple >>>>>>>>> >> appears to >>>>>>>>> >> the right of the subclass? When I do "mine*[1,2,3]", array_wrap >>>>>>>>> is >>>>>>>>> >> not >>>>>>>>> >> called and I get an ndarray instead of a MyArray. "[1,2,3]*mine" >>>>>>>>> is >>>>>>>>> >> fine, as is "mine*array([1,2,3])". I see the same issue with >>>>>>>>> >> division, >>>>>>>>> > >>>>>>>>> > The masked array subclass does not show this behavior: >>>>>>>>> >>>>>>>>> Because MaskedArray.__mul__ and others are redefined. >>>>>>>>> >>>>>>>>> Darren, you can fix your problem by redefining MyArray.__mul__ as: >>>>>>>>> >>>>>>>>> def __mul__(self, other): >>>>>>>>> return np.ndarray.__mul__(self, np.asanyarray(other)) >>>>>>>>> >>>>>>>>> forcing the second term to be a ndarray (or a subclass of). You can >>>>>>>>> do >>>>>>>>> the same thing for the other functions (__add__, __radd__, ...) >>>>>>>> >>>>>>>> >>>>>>>> Thanks for the suggestion. I know this can be done, but ufuncs like >>>>>>>> np.multiply(mine,[1,2,3]) will still not work. Plus, if I reimplement these >>>>>>>> methods, I take some small performance hit. I've been putting a lot of work >>>>>>>> in lately to get quantities to work with numpy's stock ufuncs. >>>>>>>> >>>>>>> >>>>>>> I should point out: >>>>>>> >>>>>>> import numpy as np >>>>>>> >>>>>>> a=np.array([1,2,3,4]) >>>>>>> b=np.ma.masked_where(a>2,a) >>>>>>> np.multiply([1,2,3,4],b) # yields a masked array >>>>>>> np.multiply(b,[1,2,3,4]) # yields an ndarray >>>>>>> >>>>>>> >>>>>> I'm not familiar with the numpy codebase, could anyone help me figure >>>>>> out where I should look to try to fix this bug? I've got a nice set of >>>>>> generators that work with nosetools to test all combinations of numerical >>>>>> dtypes, including combinations of scalars, arrays, and iterables of each >>>>>> type. In my quantities package, just testing multiplication yields 1031 >>>>>> failures, all of which appear to be caused by this bug (#1026 on trak) or >>>>>> bug #826. >>>>> >>>>> >>>>> >>>>> I finally managed to track done the source of this problem. >>>>> _find_array_wrap steps through the inputs, asking each of them for their >>>>> __array_wrap__ and binding it to wrap. 
If more than one input defines >>>>> __array_wrap__, you enter a block that selects one based on array priority, >>>>> and binds it back to wrap. The problem was when the first input defines >>>>> array_wrap but the second one does not. In that case, _find_array_wrap never >>>>> bothered to rebind the desired wraps[0] to wrap, so wrap remains Null or >>>>> None, and wrap is what is returned to the calling function. >>>>> >>>>> I've tested numpy with this patch applied, and didn't see any >>>>> regressions. Would someone please consider committing it? >>>>> >>>>> Thanks, >>>>> Darren >>>>> >>>>> $ svn diff numpy/core/src/umath_ufunc_object.inc >>>>> Index: numpy/core/src/umath_ufunc_object.inc >>>>> =================================================================== >>>>> --- numpy/core/src/umath_ufunc_object.inc (revision 6569) >>>>> +++ numpy/core/src/umath_ufunc_object.inc (working copy) >>>>> @@ -3173,8 +3173,10 @@ >>>>> PyErr_Clear(); >>>>> } >>>>> } >>>>> + if (np >= 1) { >>>>> + wrap = wraps[0]; >>>>> + } >>>>> if (np >= 2) { >>>>> - wrap = wraps[0]; >>>>> maxpriority = PyArray_GetPriority(with_wrap[0], >>>>> PyArray_SUBTYPE_PRIORITY); >>>>> for (i = 1; i < np; ++i) { >>>>> >>>> >>>> Applied in r6573. Thanks. >>>> >>> >>> Oh, and can you provide a test for this fix? >>> >> >> Yes, I'll send a patch for a test as soon as its ready. 6573 closes two >> tickets, 1026 and 1022. Did you see the patch I sent for issue #826? It is >> also posted at the bug report. > > > > Index: numpy/core/tests/test_umath.py > =================================================================== > --- numpy/core/tests/test_umath.py (revision 6573) > +++ numpy/core/tests/test_umath.py (working copy) > @@ -240,6 +240,19 @@ > assert_equal(args[1], a) > self.failUnlessEqual(i, 0) > > + def test_wrap_with_iterable(self): > + # test fix for bug #1026: > + class with_wrap(np.ndarray): > + __array_priority = 10 > + def __new__(cls): > + return np.asarray(1).view(cls).copy() > + def __array_wrap__(self, arr, context): > + return arr.view(type(self)) > + a = with_wrap() > + x = ncu.multiply(a, (1, 2, 3)) > + self.failUnless(isinstance(x, with_wrap)) > + assert_array_equal(x, np.array((1, 2, 3))) > + > def test_old_wrap(self): > class with_wrap(object): > def __array__(self): > Thanks. This was applied in r6575. Chuck > > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From dsdale24 at gmail.com Sun Mar 8 18:38:31 2009 From: dsdale24 at gmail.com (Darren Dale) Date: Sun, 8 Mar 2009 18:38:31 -0400 Subject: [Numpy-discussion] possible bug: __array_wrap__ is not called during arithmetic operations in some cases In-Reply-To: References: Message-ID: On Sun, Mar 8, 2009 at 6:04 PM, Charles R Harris wrote: > > > On Sun, Mar 8, 2009 at 3:27 PM, Darren Dale wrote: > >> On Sun, Mar 8, 2009 at 5:02 PM, Darren Dale wrote: >> >>> On Sun, Mar 8, 2009 at 4:54 PM, Charles R Harris < >>> charlesr.harris at gmail.com> wrote: >>> >>>> >>>> >>>> On Sun, Mar 8, 2009 at 2:48 PM, Charles R Harris < >>>> charlesr.harris at gmail.com> wrote: >>>> >>>>> >>>>> >>>>> On Sun, Mar 8, 2009 at 1:04 PM, Darren Dale wrote: >>>>> >>>>>> On Sat, Mar 7, 2009 at 1:23 PM, Darren Dale wrote: >>>>>> >>>>>>> On Sun, Feb 22, 2009 at 7:01 PM, Darren Dale wrote: >>>>>>> >>>>>>>> On Sun, Feb 22, 2009 at 6:35 PM, Darren Dale wrote: >>>>>>>> >>>>>>>>> On Sun, Feb 22, 2009 at 6:28 PM, Pierre GM wrote: >>>>>>>>> >>>>>>>>>> >>>>>>>>>> On Feb 22, 2009, at 6:21 PM, Eric Firing wrote: >>>>>>>>>> >>>>>>>>>> > Darren Dale wrote: >>>>>>>>>> >> Does anyone know why __array_wrap__ is not called for >>>>>>>>>> subclasses >>>>>>>>>> >> during >>>>>>>>>> >> arithmetic operations where an iterable like a list or tuple >>>>>>>>>> >> appears to >>>>>>>>>> >> the right of the subclass? When I do "mine*[1,2,3]", array_wrap >>>>>>>>>> is >>>>>>>>>> >> not >>>>>>>>>> >> called and I get an ndarray instead of a MyArray. >>>>>>>>>> "[1,2,3]*mine" is >>>>>>>>>> >> fine, as is "mine*array([1,2,3])". I see the same issue with >>>>>>>>>> >> division, >>>>>>>>>> > >>>>>>>>>> > The masked array subclass does not show this behavior: >>>>>>>>>> >>>>>>>>>> Because MaskedArray.__mul__ and others are redefined. >>>>>>>>>> >>>>>>>>>> Darren, you can fix your problem by redefining MyArray.__mul__ as: >>>>>>>>>> >>>>>>>>>> def __mul__(self, other): >>>>>>>>>> return np.ndarray.__mul__(self, np.asanyarray(other)) >>>>>>>>>> >>>>>>>>>> forcing the second term to be a ndarray (or a subclass of). You >>>>>>>>>> can do >>>>>>>>>> the same thing for the other functions (__add__, __radd__, ...) >>>>>>>>> >>>>>>>>> >>>>>>>>> Thanks for the suggestion. I know this can be done, but ufuncs like >>>>>>>>> np.multiply(mine,[1,2,3]) will still not work. Plus, if I reimplement these >>>>>>>>> methods, I take some small performance hit. I've been putting a lot of work >>>>>>>>> in lately to get quantities to work with numpy's stock ufuncs. >>>>>>>>> >>>>>>>> >>>>>>>> I should point out: >>>>>>>> >>>>>>>> import numpy as np >>>>>>>> >>>>>>>> a=np.array([1,2,3,4]) >>>>>>>> b=np.ma.masked_where(a>2,a) >>>>>>>> np.multiply([1,2,3,4],b) # yields a masked array >>>>>>>> np.multiply(b,[1,2,3,4]) # yields an ndarray >>>>>>>> >>>>>>>> >>>>>>> I'm not familiar with the numpy codebase, could anyone help me figure >>>>>>> out where I should look to try to fix this bug? I've got a nice set of >>>>>>> generators that work with nosetools to test all combinations of numerical >>>>>>> dtypes, including combinations of scalars, arrays, and iterables of each >>>>>>> type. In my quantities package, just testing multiplication yields 1031 >>>>>>> failures, all of which appear to be caused by this bug (#1026 on trak) or >>>>>>> bug #826. >>>>>> >>>>>> >>>>>> >>>>>> I finally managed to track done the source of this problem. >>>>>> _find_array_wrap steps through the inputs, asking each of them for their >>>>>> __array_wrap__ and binding it to wrap. 
If more than one input defines >>>>>> __array_wrap__, you enter a block that selects one based on array priority, >>>>>> and binds it back to wrap. The problem was when the first input defines >>>>>> array_wrap but the second one does not. In that case, _find_array_wrap never >>>>>> bothered to rebind the desired wraps[0] to wrap, so wrap remains Null or >>>>>> None, and wrap is what is returned to the calling function. >>>>>> >>>>>> I've tested numpy with this patch applied, and didn't see any >>>>>> regressions. Would someone please consider committing it? >>>>>> >>>>>> Thanks, >>>>>> Darren >>>>>> >>>>>> $ svn diff numpy/core/src/umath_ufunc_object.inc >>>>>> Index: numpy/core/src/umath_ufunc_object.inc >>>>>> =================================================================== >>>>>> --- numpy/core/src/umath_ufunc_object.inc (revision 6569) >>>>>> +++ numpy/core/src/umath_ufunc_object.inc (working copy) >>>>>> @@ -3173,8 +3173,10 @@ >>>>>> PyErr_Clear(); >>>>>> } >>>>>> } >>>>>> + if (np >= 1) { >>>>>> + wrap = wraps[0]; >>>>>> + } >>>>>> if (np >= 2) { >>>>>> - wrap = wraps[0]; >>>>>> maxpriority = PyArray_GetPriority(with_wrap[0], >>>>>> PyArray_SUBTYPE_PRIORITY); >>>>>> for (i = 1; i < np; ++i) { >>>>>> >>>>> >>>>> Applied in r6573. Thanks. >>>>> >>>> >>>> Oh, and can you provide a test for this fix? >>>> >>> >>> Yes, I'll send a patch for a test as soon as it's ready. 6573 closes two >>> tickets, 1026 and 1022. Did you see the patch I sent for issue #826? It is >>> also posted at the bug report. >> >> >> >> Index: numpy/core/tests/test_umath.py >> =================================================================== >> --- numpy/core/tests/test_umath.py (revision 6573) >> +++ numpy/core/tests/test_umath.py (working copy) >> @@ -240,6 +240,19 @@ >> assert_equal(args[1], a) >> self.failUnlessEqual(i, 0) >> >> + def test_wrap_with_iterable(self): >> + # test fix for bug #1026: >> + class with_wrap(np.ndarray): >> + __array_priority = 10 >> + def __new__(cls): >> + return np.asarray(1).view(cls).copy() >> + def __array_wrap__(self, arr, context): >> + return arr.view(type(self)) >> + a = with_wrap() >> + x = ncu.multiply(a, (1, 2, 3)) >> + self.failUnless(isinstance(x, with_wrap)) >> + assert_array_equal(x, np.array((1, 2, 3))) >> + >> def test_old_wrap(self): >> class with_wrap(object): >> def __array__(self): >> > > Thanks. This was applied in r6575. > Chuck, I'm sorry, there was a typo in that test. It should have said __array_priority__, not __array_priority. It didn't influence the test result, which failed without the patch and passed with it, but I think it should still be fixed. Darren -------------- next part -------------- An HTML attachment was scrubbed... URL:
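(The typo is an easy one to miss, because Python silently name-mangles a class attribute with two leading and no trailing underscores, so numpy never even sees it and no error is raised. A quick illustration -- the classes here are invented for the example:)

import numpy as np

class A(np.ndarray):
    __array_priority = 10    # typo: mangled to _A__array_priority, invisible to numpy

class B(np.ndarray):
    __array_priority__ = 10  # the attribute numpy actually consults

print A(1).__array_priority__   # 0.0 -- falls back to the ndarray default
print B(1).__array_priority__   # 10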
From dsdale24 at gmail.com Sun Mar 8 18:42:56 2009 From: dsdale24 at gmail.com (Darren Dale) Date: Sun, 8 Mar 2009 18:42:56 -0400 Subject: [Numpy-discussion] strange multiplication behavior with numpy.float64 and ndarray subclass In-Reply-To: References: Message-ID: On Sun, Mar 8, 2009 at 12:31 PM, Darren Dale wrote: > On Wed, Jan 21, 2009 at 12:43 PM, Pierre GM wrote: > >> >> On Jan 21, 2009, at 11:34 AM, Darren Dale wrote: >> >> > I have a simple test script here that multiplies an ndarray subclass >> > with another number. Can anyone help me understand why each of these >> > combinations returns a new instance of MyArray: >> > >> > mine = MyArray() >> > print type(np.float32(1)*mine) >> > print type(mine*np.float32(1)) >> > print type(mine*np.float64(1)) >> > print type(1*mine) >> > print type(mine*1) >> > >> > but this one returns a np.float64 instance? >> >> FYI, that's the same behavior as observed in ticket #826. A first >> thread addressed that issue >> http://www.mail-archive.com/numpy-discussion@scipy.org/msg13235.html >> But so far, no answer has been suggested. >> Any help welcome. > > I believe ticket #826 can be solved with the application of this patch: > > $ svn diff scalarmathmodule.c.src > Index: scalarmathmodule.c.src > =================================================================== > --- scalarmathmodule.c.src (revision 6566) > +++ scalarmathmodule.c.src (working copy) > @@ -566,6 +566,10 @@ > Py_DECREF(descr1); > return ret; > } > + else if (PyArray_GetPriority(a, PyArray_SUBTYPE_PRIORITY) > \ > + PyArray_SUBTYPE_PRIORITY) { > + return -2; > + } > else if ((temp = PyArray_ScalarFromObject(a)) != NULL) { > int retval; > retval = _@name@_convert_to_ctype(temp, arg1); > > I've run the unit tests and get the same results with and without the patch > applied, but it solves the problem in my script and also the problem with > masked arrays. Here is a test for this patch, maybe issue #826 can be closed. Index: numpy/core/tests/test_umath.py =================================================================== --- numpy/core/tests/test_umath.py (revision 6575) +++ numpy/core/tests/test_umath.py (working copy) @@ -253,6 +253,17 @@ self.failUnless(isinstance(x, with_wrap)) assert_array_equal(x, np.array((1, 2, 3))) + def test_priority_with_scalar(self): + # test fix for bug #826: + class A(np.ndarray): + __array_priority__ = 10 + def __new__(cls): + return np.asarray(1.0, 'float64').view(cls).copy() + a = A() + x = np.float64(1)*a + self.failUnless(isinstance(x, A)) + assert_array_equal(x, np.array(1)) + def test_old_wrap(self): class with_wrap(object): def __array__(self): -------------- next part -------------- An HTML attachment was scrubbed...
URL: From charlesr.harris at gmail.com Sun Mar 8 18:45:39 2009 From: charlesr.harris at gmail.com (Charles R Harris) Date: Sun, 8 Mar 2009 16:45:39 -0600 Subject: [Numpy-discussion] possible bug: __array_wrap__ is not called during arithmetic operations in some cases In-Reply-To: References: Message-ID: On Sun, Mar 8, 2009 at 4:38 PM, Darren Dale wrote: > On Sun, Mar 8, 2009 at 6:04 PM, Charles R Harris < > charlesr.harris at gmail.com> wrote: > >> >> >> On Sun, Mar 8, 2009 at 3:27 PM, Darren Dale wrote: >> >>> On Sun, Mar 8, 2009 at 5:02 PM, Darren Dale wrote: >>> >>>> On Sun, Mar 8, 2009 at 4:54 PM, Charles R Harris < >>>> charlesr.harris at gmail.com> wrote: >>>> >>>>> >>>>> >>>>> On Sun, Mar 8, 2009 at 2:48 PM, Charles R Harris < >>>>> charlesr.harris at gmail.com> wrote: >>>>> >>>>>> >>>>>> >>>>>> On Sun, Mar 8, 2009 at 1:04 PM, Darren Dale wrote: >>>>>> >>>>>>> On Sat, Mar 7, 2009 at 1:23 PM, Darren Dale wrote: >>>>>>> >>>>>>>> On Sun, Feb 22, 2009 at 7:01 PM, Darren Dale wrote: >>>>>>>> >>>>>>>>> On Sun, Feb 22, 2009 at 6:35 PM, Darren Dale wrote: >>>>>>>>> >>>>>>>>>> On Sun, Feb 22, 2009 at 6:28 PM, Pierre GM wrote: >>>>>>>>>> >>>>>>>>>>> >>>>>>>>>>> On Feb 22, 2009, at 6:21 PM, Eric Firing wrote: >>>>>>>>>>> >>>>>>>>>>> > Darren Dale wrote: >>>>>>>>>>> >> Does anyone know why __array_wrap__ is not called for >>>>>>>>>>> subclasses >>>>>>>>>>> >> during >>>>>>>>>>> >> arithmetic operations where an iterable like a list or tuple >>>>>>>>>>> >> appears to >>>>>>>>>>> >> the right of the subclass? When I do "mine*[1,2,3]", >>>>>>>>>>> array_wrap is >>>>>>>>>>> >> not >>>>>>>>>>> >> called and I get an ndarray instead of a MyArray. >>>>>>>>>>> "[1,2,3]*mine" is >>>>>>>>>>> >> fine, as is "mine*array([1,2,3])". I see the same issue with >>>>>>>>>>> >> division, >>>>>>>>>>> > >>>>>>>>>>> > The masked array subclass does not show this behavior: >>>>>>>>>>> >>>>>>>>>>> Because MaskedArray.__mul__ and others are redefined. >>>>>>>>>>> >>>>>>>>>>> Darren, you can fix your problem by redefining MyArray.__mul__ >>>>>>>>>>> as: >>>>>>>>>>> >>>>>>>>>>> def __mul__(self, other): >>>>>>>>>>> return np.ndarray.__mul__(self, np.asanyarray(other)) >>>>>>>>>>> >>>>>>>>>>> forcing the second term to be a ndarray (or a subclass of). You >>>>>>>>>>> can do >>>>>>>>>>> the same thing for the other functions (__add__, __radd__, ...) >>>>>>>>>> >>>>>>>>>> >>>>>>>>>> Thanks for the suggestion. I know this can be done, but ufuncs >>>>>>>>>> like np.multiply(mine,[1,2,3]) will still not work. Plus, if I reimplement >>>>>>>>>> these methods, I take some small performance hit. I've been putting a lot of >>>>>>>>>> work in lately to get quantities to work with numpy's stock ufuncs. >>>>>>>>>> >>>>>>>>> >>>>>>>>> I should point out: >>>>>>>>> >>>>>>>>> import numpy as np >>>>>>>>> >>>>>>>>> a=np.array([1,2,3,4]) >>>>>>>>> b=np.ma.masked_where(a>2,a) >>>>>>>>> np.multiply([1,2,3,4],b) # yields a masked array >>>>>>>>> np.multiply(b,[1,2,3,4]) # yields an ndarray >>>>>>>>> >>>>>>>>> >>>>>>>> I'm not familiar with the numpy codebase, could anyone help me >>>>>>>> figure out where I should look to try to fix this bug? I've got a nice set >>>>>>>> of generators that work with nosetools to test all combinations of numerical >>>>>>>> dtypes, including combinations of scalars, arrays, and iterables of each >>>>>>>> type. In my quantities package, just testing multiplication yields 1031 >>>>>>>> failures, all of which appear to be caused by this bug (#1026 on trak) or >>>>>>>> bug #826. 
>>>>>>> >>>>>>> >>>>>>> >>>>>>> I finally managed to track done the source of this problem. >>>>>>> _find_array_wrap steps through the inputs, asking each of them for their >>>>>>> __array_wrap__ and binding it to wrap. If more than one input defines >>>>>>> __array_wrap__, you enter a block that selects one based on array priority, >>>>>>> and binds it back to wrap. The problem was when the first input defines >>>>>>> array_wrap but the second one does not. In that case, _find_array_wrap never >>>>>>> bothered to rebind the desired wraps[0] to wrap, so wrap remains Null or >>>>>>> None, and wrap is what is returned to the calling function. >>>>>>> >>>>>>> I've tested numpy with this patch applied, and didn't see any >>>>>>> regressions. Would someone please consider committing it? >>>>>>> >>>>>>> Thanks, >>>>>>> Darren >>>>>>> >>>>>>> $ svn diff numpy/core/src/umath_ufunc_object.inc >>>>>>> Index: numpy/core/src/umath_ufunc_object.inc >>>>>>> =================================================================== >>>>>>> --- numpy/core/src/umath_ufunc_object.inc (revision 6569) >>>>>>> +++ numpy/core/src/umath_ufunc_object.inc (working copy) >>>>>>> @@ -3173,8 +3173,10 @@ >>>>>>> PyErr_Clear(); >>>>>>> } >>>>>>> } >>>>>>> + if (np >= 1) { >>>>>>> + wrap = wraps[0]; >>>>>>> + } >>>>>>> if (np >= 2) { >>>>>>> - wrap = wraps[0]; >>>>>>> maxpriority = PyArray_GetPriority(with_wrap[0], >>>>>>> PyArray_SUBTYPE_PRIORITY); >>>>>>> for (i = 1; i < np; ++i) { >>>>>>> >>>>>> >>>>>> Applied in r6573. Thanks. >>>>>> >>>>> >>>>> Oh, and can you provide a test for this fix? >>>>> >>>> >>>> Yes, I'll send a patch for a test as soon as its ready. 6573 closes two >>>> tickets, 1026 and 1022. Did you see the patch I sent for issue #826? It is >>>> also posted at the bug report. >>> >>> >>> >>> Index: numpy/core/tests/test_umath.py >>> =================================================================== >>> --- numpy/core/tests/test_umath.py (revision 6573) >>> +++ numpy/core/tests/test_umath.py (working copy) >>> @@ -240,6 +240,19 @@ >>> assert_equal(args[1], a) >>> self.failUnlessEqual(i, 0) >>> >>> + def test_wrap_with_iterable(self): >>> + # test fix for bug #1026: >>> + class with_wrap(np.ndarray): >>> + __array_priority = 10 >>> + def __new__(cls): >>> + return np.asarray(1).view(cls).copy() >>> + def __array_wrap__(self, arr, context): >>> + return arr.view(type(self)) >>> + a = with_wrap() >>> + x = ncu.multiply(a, (1, 2, 3)) >>> + self.failUnless(isinstance(x, with_wrap)) >>> + assert_array_equal(x, np.array((1, 2, 3))) >>> + >>> def test_old_wrap(self): >>> class with_wrap(object): >>> def __array__(self): >>> >> >> Thanks. This was applied in r6575. >> > > Chuck, I'm sorry, there was a typo in that test. It should have said > __array_priority__, not __array_priority. It didnt influence the test > result, which failed without the patch and passed with it, but I think it > should still be fixed. > Fixed... Chuck -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From charlesr.harris at gmail.com Sun Mar 8 19:00:39 2009 From: charlesr.harris at gmail.com (Charles R Harris) Date: Sun, 8 Mar 2009 17:00:39 -0600 Subject: [Numpy-discussion] strange multiplication behavior with numpy.float64 and ndarray subclass In-Reply-To: References: Message-ID: On Sun, Mar 8, 2009 at 4:42 PM, Darren Dale wrote: > On Sun, Mar 8, 2009 at 12:31 PM, Darren Dale wrote: > >> On Wed, Jan 21, 2009 at 12:43 PM, Pierre GM wrote: >> >>> >>> On Jan 21, 2009, at 11:34 AM, Darren Dale wrote: >>> >>> > I have a simple test script here that multiplies an ndarray subclass >>> > with another number. Can anyone help me understand why each of these >>> > combinations returns a new instance of MyArray: >>> > >>> > mine = MyArray() >>> > print type(np.float32(1)*mine) >>> > print type(mine*np.float32(1)) >>> > print type(mine*np.float64(1)) >>> > print type(1*mine) >>> > print type(mine*1) >>> > >>> > but this one returns a np.float64 instance? >>> >>> FYI, that's the same behavior as observed in ticket #826. A first >>> thread addressed that issue >>> http://www.mail-archive.com/numpy-discussion at scipy.org/msg13235.html >>> But so far, no answer has been suggested. >>> Any help welcome. >> >> >> I believe ticket #826 can be solved with the application of this patch: >> >> >> $ svn diff scalarmathmodule.c.src >> Index: scalarmathmodule.c.src >> =================================================================== >> --- scalarmathmodule.c.src (revision 6566) >> +++ scalarmathmodule.c.src (working copy) >> @@ -566,6 +566,10 @@ >> Py_DECREF(descr1); >> return ret; >> } >> + else if (PyArray_GetPriority(a, PyArray_SUBTYPE_PRIORITY) > \ >> + PyArray_SUBTYPE_PRIORITY) { >> + return -2; >> + } >> else if ((temp = PyArray_ScalarFromObject(a)) != NULL) { >> int retval; >> retval = _ at name@_convert_to_ctype(temp, arg1); >> >> >> I've run the unit tests and get the same results with and without the >> patch applied, but it solves the problem in my script and also the problem >> with masked arrays. > > > Here is a test for this patch, maybe issue #826 can be closed. > > Index: numpy/core/tests/test_umath.py > =================================================================== > --- numpy/core/tests/test_umath.py (revision 6575) > +++ numpy/core/tests/test_umath.py (working copy) > @@ -253,6 +253,17 @@ > self.failUnless(isinstance(x, with_wrap)) > assert_array_equal(x, np.array((1, 2, 3))) > > + def test_priority_with_scalar(self): > + # test fix for bug #826: > + class A(np.ndarray): > + __array_priority__ = 10 > + def __new__(cls): > + return np.asarray(1.0, 'float64').view(cls).copy() > + a = A() > + x = np.float64(1)*a > + self.failUnless(isinstance(x, A)) > + assert_array_equal(x, np.array(1)) > + > def test_old_wrap(self): > class with_wrap(object): > def __array__(self): > > __ Added in r6578... Chuck -------------- next part -------------- An HTML attachment was scrubbed... URL: From dsdale24 at gmail.com Sun Mar 8 19:12:08 2009 From: dsdale24 at gmail.com (Darren Dale) Date: Sun, 8 Mar 2009 19:12:08 -0400 Subject: [Numpy-discussion] numpy.fix and subclasses In-Reply-To: References: Message-ID: On Sun, Feb 22, 2009 at 11:49 PM, Darren Dale wrote: > On Sun, Feb 22, 2009 at 10:35 PM, Darren Dale wrote: > >> I've been finding some numpy functions that could maybe be improved to >> work better with ndarray subclasses. For example: >> >> def fix(x, y=None): >> x = nx.asanyarray(x) >> if y is None: >> y = nx.zeros_like(x) >> y1 = nx.floor(x) >> y2 = nx.ceil(x) >> y[...] 
= nx.where(x >= 0, y1, y2) >> return y >> >> This implementation is a problematic for subclasses, since it does not >> allow metadata to propagate using the usual ufunc machinery of >> __array_wrap__, like ceil and floor do. nx.zeros_like does yield another >> instance of type(x), but y does not get x's metadata (such as units or a >> mask). Would it be possible to do something like: >> >> if y is None: >> y = x*0 >> >> "where" is another function that could maybe be improved to work with the >> rules established by array_priority, but I'm a lousy C programmer and I >> haven't actually looked into how this would work. If "where" respected >> array_priority, fix could be implemented as: >> >> def fix(x, y=None): >> x = nx.asanyarray(x) >> y1 = nx.floor(x) >> y2 = nx.ceil(x) >> if y is None: >> return nx.where(x >= 0, y1, y2) >> y[...] = nx.where(x >= 0, y1, y2) >> return y > > > Actually, I just remembered that quantities tries to prevent things like > ([1,2,3,4]*m)[:2] = [0,1], since the units dont match, so setting y=x*0 and > then setting data to a slice of y would be problematic. It would be most > desirable for "where" to respect __array_priority__, if possible. Any > comments? > I was wondering if we could consider applying a decorator to functions like fix that do not tie into the ufunc machinery that determines an appropriate __array_wrap__ to call. It would be simple enough to write a decorator that does the same thing as _find_array_wrap in umath_ufunc_object.inc, and if an array_wrap method is identified, apply it to the output of the existing function. This way numpy's functions would be more cooperative with ndarray subclasses. I don't mind writing the decorator and some unit tests, but I don't have a lot of free time so I would like to discuss it first. Does it sound reasonable? Thanks, Darren -------------- next part -------------- An HTML attachment was scrubbed... URL: From dsdale24 at gmail.com Sun Mar 8 19:14:33 2009 From: dsdale24 at gmail.com (Darren Dale) Date: Sun, 8 Mar 2009 19:14:33 -0400 Subject: [Numpy-discussion] strange multiplication behavior with numpy.float64 and ndarray subclass In-Reply-To: References: Message-ID: On Sun, Mar 8, 2009 at 7:00 PM, Charles R Harris wrote: > > > On Sun, Mar 8, 2009 at 4:42 PM, Darren Dale wrote: > >> On Sun, Mar 8, 2009 at 12:31 PM, Darren Dale wrote: >> >>> On Wed, Jan 21, 2009 at 12:43 PM, Pierre GM wrote: >>> >>>> >>>> On Jan 21, 2009, at 11:34 AM, Darren Dale wrote: >>>> >>>> > I have a simple test script here that multiplies an ndarray subclass >>>> > with another number. Can anyone help me understand why each of these >>>> > combinations returns a new instance of MyArray: >>>> > >>>> > mine = MyArray() >>>> > print type(np.float32(1)*mine) >>>> > print type(mine*np.float32(1)) >>>> > print type(mine*np.float64(1)) >>>> > print type(1*mine) >>>> > print type(mine*1) >>>> > >>>> > but this one returns a np.float64 instance? >>>> >>>> FYI, that's the same behavior as observed in ticket #826. A first >>>> thread addressed that issue >>>> http://www.mail-archive.com/numpy-discussion at scipy.org/msg13235.html >>>> But so far, no answer has been suggested. >>>> Any help welcome. 
>>> >>> >>> I believe ticket #826 can be solved with the application of this patch: >>> >>> >>> $ svn diff scalarmathmodule.c.src >>> Index: scalarmathmodule.c.src >>> =================================================================== >>> --- scalarmathmodule.c.src (revision 6566) >>> +++ scalarmathmodule.c.src (working copy) >>> @@ -566,6 +566,10 @@ >>> Py_DECREF(descr1); >>> return ret; >>> } >>> + else if (PyArray_GetPriority(a, PyArray_SUBTYPE_PRIORITY) > \ >>> + PyArray_SUBTYPE_PRIORITY) { >>> + return -2; >>> + } >>> else if ((temp = PyArray_ScalarFromObject(a)) != NULL) { >>> int retval; >>> retval = _ at name@_convert_to_ctype(temp, arg1); >>> >>> >>> I've run the unit tests and get the same results with and without the >>> patch applied, but it solves the problem in my script and also the problem >>> with masked arrays. >> >> >> Here is a test for this patch, maybe issue #826 can be closed. >> >> Index: numpy/core/tests/test_umath.py >> =================================================================== >> --- numpy/core/tests/test_umath.py (revision 6575) >> +++ numpy/core/tests/test_umath.py (working copy) >> @@ -253,6 +253,17 @@ >> self.failUnless(isinstance(x, with_wrap)) >> assert_array_equal(x, np.array((1, 2, 3))) >> >> + def test_priority_with_scalar(self): >> + # test fix for bug #826: >> + class A(np.ndarray): >> + __array_priority__ = 10 >> + def __new__(cls): >> + return np.asarray(1.0, 'float64').view(cls).copy() >> + a = A() >> + x = np.float64(1)*a >> + self.failUnless(isinstance(x, A)) >> + assert_array_equal(x, np.array(1)) >> + >> def test_old_wrap(self): >> class with_wrap(object): >> def __array__(self): >> >> __ > > > Added in r6578... Chuck > Thank you very much. -------------- next part -------------- An HTML attachment was scrubbed... URL: From stefan at sun.ac.za Sun Mar 8 20:24:59 2009 From: stefan at sun.ac.za (=?ISO-8859-1?Q?St=E9fan_van_der_Walt?=) Date: Mon, 9 Mar 2009 02:24:59 +0200 Subject: [Numpy-discussion] numpy.scipy.org In-Reply-To: <49B41662.10108@astraw.com> References: <49B41662.10108@astraw.com> Message-ID: <9457e7c80903081724s74d69bc1nd314ec5ad860941a@mail.gmail.com> 2009/3/8 Andrew Straw : > I have been doing some editing of http://numpy.scipy.org . In general, Thanks for keeping an eye on this page! > however, lots of this page is redundant and outdated compared to lots of > other documentation that has now sprung up. Shall we kill this page off, > redirect it to another page, or continue updating it? (For this latter > option, patches are welcome.) I like Pauli's suggestion of linking it to the documentation editor. St?fan From robert.kern at gmail.com Sun Mar 8 20:40:02 2009 From: robert.kern at gmail.com (Robert Kern) Date: Sun, 8 Mar 2009 19:40:02 -0500 Subject: [Numpy-discussion] numpy.scipy.org In-Reply-To: <49B41662.10108@astraw.com> References: <49B41662.10108@astraw.com> Message-ID: <3d375d730903081740i433eab9ah4e5f4634c76512b0@mail.gmail.com> On Sun, Mar 8, 2009 at 14:02, Andrew Straw wrote: > Hi all, > > I have been doing some editing of http://numpy.scipy.org . In general, > however, lots of this page is redundant and outdated compared to lots of > other documentation that has now sprung up. Shall we kill this page off, > redirect it to another page, or continue updating it? (For this latter > option, patches are welcome.) We do need a single landing page that has the usual project information: brief description of the package's capabilities, download URLs, checkout information, mailing lists, etc. 
-- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From strawman at astraw.com Mon Mar 9 01:18:11 2009 From: strawman at astraw.com (Andrew Straw) Date: Sun, 08 Mar 2009 22:18:11 -0700 Subject: [Numpy-discussion] N-D array interface page is out of date In-Reply-To: <49B415C6.5070009@astraw.com> References: <49791FA1.3020803@astraw.com> <4987FDE9.5030303@astraw.com> <9457e7c80902030506l7094e8d3x33996b861f61bff8@mail.gmail.com> <49B15A71.1070307@astraw.com> <49B415C6.5070009@astraw.com> Message-ID: <49B4A693.1030701@astraw.com> Andrew Straw wrote: > Pauli Virtanen wrote: >> Hi, >> >> Fri, 06 Mar 2009 09:16:33 -0800, Andrew Straw wrote: >>> I have updated http://numpy.scipy.org/array_interface.shtml to have a >>> giant warning first paragraph describing how that information is >>> outdated. Additionally, I have updated http://numpy.scipy.org/ to point >>> people to the buffer interface described in PEP 3118 and implemented in >>> Python 2.6/3.0. Furthermore, I have suggested Cython has a way to write >>> code for older Pythons that will automatically support the buffer >>> interface in newer Pythons. >>> >>> If you have knowledge about these matters (Travis O. and Dag, >>> especially), I'd appreciate it if you could read over the pages to >>> ensure everything is actually correct. >> I wonder if it would make sense to redirect the page here: >> >> http://docs.scipy.org/doc/numpy/reference/arrays.interface.html >> >> so that it would be easier to edit etc. in the future? >> > > > Yes, great idea. I just updated the page to point to the page you linked > (which I didn't know existed -- thanks for pointing it out). > > Also, I have made several changes to arrays.interface.rst which I will > upload once my password situation gets resolved. OK, I now have a password (thanks Ga?l), but I don't have edit permissions on that page. So I'm attaching a patch against that page source that incorporates the stuff that was on the old page that's not in the new page. I'm happy to apply this myself if someone gives me edit permissions. I wasn't able to check out all the ReST formatting with the online editor because I don't have edit permissions. -Andrew -------------- next part -------------- A non-text attachment was scrubbed... Name: arrays.interface.patch Type: text/x-diff Size: 5157 bytes Desc: not available URL: From gael.varoquaux at normalesup.org Mon Mar 9 02:00:30 2009 From: gael.varoquaux at normalesup.org (Gael Varoquaux) Date: Mon, 9 Mar 2009 07:00:30 +0100 Subject: [Numpy-discussion] N-D array interface page is out of date In-Reply-To: <49B4A693.1030701@astraw.com> References: <49791FA1.3020803@astraw.com> <4987FDE9.5030303@astraw.com> <9457e7c80902030506l7094e8d3x33996b861f61bff8@mail.gmail.com> <49B15A71.1070307@astraw.com> <49B415C6.5070009@astraw.com> <49B4A693.1030701@astraw.com> Message-ID: <20090309060030.GB17141@phare.normalesup.org> On Sun, Mar 08, 2009 at 10:18:11PM -0700, Andrew Straw wrote: > OK, I now have a password (thanks Ga?l), but I don't have edit > permissions on that page. So I'm attaching a patch against that page > source that incorporates the stuff that was on the old page that's not > in the new page. > I'm happy to apply this myself if someone gives me edit permissions. I > wasn't able to check out all the ReST formatting with the online editor > because I don't have edit permissions. 
You don't have permissions to the corresponding page in the doc wiki:
http://docs.scipy.org/numpy/docs/numpy-docs/reference/arrays.interface.rst/
?

I am a bit lost, if this is the case.

Gaël

From strawman at astraw.com Mon Mar 9 02:45:19 2009 From: strawman at astraw.com (Andrew Straw) Date: Sun, 08 Mar 2009 23:45:19 -0700 Subject: [Numpy-discussion] N-D array interface page is out of date In-Reply-To: <20090309060030.GB17141@phare.normalesup.org> References: <49791FA1.3020803@astraw.com> <4987FDE9.5030303@astraw.com> <9457e7c80902030506l7094e8d3x33996b861f61bff8@mail.gmail.com> <49B15A71.1070307@astraw.com> <49B415C6.5070009@astraw.com> <49B4A693.1030701@astraw.com> <20090309060030.GB17141@phare.normalesup.org> Message-ID: <49B4BAFF.9050102@astraw.com>

Gael Varoquaux wrote:
> On Sun, Mar 08, 2009 at 10:18:11PM -0700, Andrew Straw wrote:
>> OK, I now have a password (thanks Gaël), but I don't have edit permissions on that page. So I'm attaching a patch against that page source that incorporates the stuff that was on the old page that's not in the new page.
>
>> I'm happy to apply this myself if someone gives me edit permissions. I wasn't able to check out all the ReST formatting with the online editor because I don't have edit permissions.
>
> You don't have permissions to the corresponding page in the doc wiki:
> http://docs.scipy.org/numpy/docs/numpy-docs/reference/arrays.interface.rst/
> ?
>
> I am a bit lost, if this is the case.

OK, thanks for the pointer. Somehow I navigated to a view of that page that I could not edit. I have now uploaded my changes, including bits that hadn't yet made it into the Sphinx-based documentation from the original page on numpy.scipy.org.

-Andrew

From stefan at sun.ac.za Mon Mar 9 03:35:33 2009 From: stefan at sun.ac.za (=?ISO-8859-1?Q?St=E9fan_van_der_Walt?=) Date: Mon, 9 Mar 2009 09:35:33 +0200 Subject: [Numpy-discussion] Changes and new workflow on Trac In-Reply-To: <9457e7c80903081744n78c936fdl3065285e2fea5ae2@mail.gmail.com> References: <9457e7c80903081744n78c936fdl3065285e2fea5ae2@mail.gmail.com> Message-ID: <9457e7c80903090035i3970aebfq8ef683ffe916283a@mail.gmail.com>

Hi all,

Here is an outline of recent changes made to the Trac system.

I have modified the ticket workflow on projects.scipy.org/{numpy,scipy} to accommodate patch review (see http://mentat.za.net/refer/workflow.png). I hope this facility will make it easier to contribute, and I would like to have your feedback/suggestions.

Instructions to contributors:

* [http://projects.scipy.org/numpy/newticket Contribute a patch] or file a bug report
* [http://docs.scipy.org Write documentation]
* [http://projects.scipy.org/numpy/report/12 Review patches] available
* [http://projects.scipy.org/numpy/report/13 Apply reviewed patches]

The last two are new items.

A ticket can be marked "needs_review", whereafter it can be changed to "review_positive" or "needs_work". Also, a "design decision needed" state is provided for stalled tickets.

Other changes:

To simplify ticket structure, "severity" was removed ("priority" should be used instead). Furthermore, tickets are no longer "accepted", but simply "assigned". You can still assign tickets to yourself.

Source repository:

A git repository is available on http://projects.scipy.org/git and http://projects.scipy.org/git/{numpy,scipy}.

This repository can be browsed from Trac by clicking on the "Git Repo" button, or at

http://projects.scipy.org/{numpy,scipy}/browse_git

It can be cloned from

http://projects.scipy.org/numpy.git

Pauli installed the necessary SVN post-commit hooks to ensure that the git repository is always up to date.

Ticket mailing lists:

Trac tries to send out e-mails, but only a handful are going through. We are investigating the problem.

Comments and suggestions are very welcome! Thank you to David, Peter and Pauli for all their hard work on the server setup during the past week.

Regards
Stéfan

From david at ar.media.kyoto-u.ac.jp Mon Mar 9 05:47:59 2009 From: david at ar.media.kyoto-u.ac.jp (David Cournapeau) Date: Mon, 09 Mar 2009 18:47:59 +0900 Subject: [Numpy-discussion] Handling named temporary files in tests Message-ID: <49B4E5CF.4070504@ar.media.kyoto-u.ac.jp>

Hi,

While fixing several Windows-specific unit test failures, I encountered some problems I am not sure how to solve. In particular, we have a relatively common idiom as follows:

Open file securely with a visible name (using NamedTemporaryFile)
write some content into it
open the file with another open call

Of course, this does not work on Windows. NamedTemporaryFile is basically useless on this platform (Windows refuses to let a process reopen a file opened from NamedTemporaryFile). I can see two solutions:

- using mkstemp + re-opening the file later from the name returned by mkstemp: AFAICT, this basically defeats the whole purpose of mkstemp
- have our own layer to bypass mkstemp on Windows, where security is not a concern anyway, and use the proper functions on sane platforms.

Do people have an opinion on this? Or maybe a solution to the problem altogether?

cheers,

David

From faltet at pytables.org Mon Mar 9 06:59:46 2009 From: faltet at pytables.org (Francesc Alted) Date: Mon, 9 Mar 2009 11:59:46 +0100 Subject: [Numpy-discussion] Handling named temporary files in tests In-Reply-To: <49B4E5CF.4070504@ar.media.kyoto-u.ac.jp> References: <49B4E5CF.4070504@ar.media.kyoto-u.ac.jp> Message-ID: <200903091159.47138.faltet@pytables.org>

A Monday 09 March 2009, David Cournapeau escrigué:
> Hi,
>
> While fixing several Windows-specific unit test failures, I encountered some problems I am not sure how to solve. In particular, we have a relatively common idiom as follows:
>
> Open file securely with a visible name (using NamedTemporaryFile)
> write some content into it
> open the file with another open call
>
> Of course, this does not work on Windows. NamedTemporaryFile is basically useless on this platform (Windows refuses to let a process reopen a file opened from NamedTemporaryFile). I can see two solutions:
> - using mkstemp + re-opening the file later from the name returned by mkstemp: AFAICT, this basically defeats the whole purpose of mkstemp
> - have our own layer to bypass mkstemp on Windows, where security is not a concern anyway, and use the proper functions on sane platforms.
>
> Do people have an opinion on this? Or maybe a solution to the problem altogether?

We have a similar use case in the PyTables project, and we ended up implementing a class that is meant to be used as a mixin from which the test classes inherit.

The class is, more or less:

class TempFileMixin:
    def setUp(self):
        """Set ``h5file`` and ``h5fname`` instance attributes."""
        self.h5fname = tempfile.mktemp(suffix='.h5')
        self.h5file = tables.openFile(
            self.h5fname, 'w', title=self._getName())

    def tearDown(self):
        """Close ``h5file`` and remove ``h5fname``."""
        self.h5file.close()
        self.h5file = None
        os.remove(self.h5fname)

    def _reopen(self, mode='r'):
        """Reopen ``h5file`` in the specified ``mode``."""
        self.h5file.close()
        self.h5file = tables.openFile(self.h5fname, mode)
        return True

The advantage is that, by simply inheriting from `TempFileMixin`, the developer has the ``h5file`` (file handle) and ``h5fname`` (file name) attributes available. The mixin is responsible for opening, closing and removing the temporary file. In addition, the `_reopen()` method allows you to manually close and re-open the temporary file, if that is what you want. An example of use:

# Test for building very large MD columns without defaults
class MDLargeColTestCase(common.TempFileMixin, common.PyTablesTestCase):
    reopen = True

    def test01_create(self):
        "Create a Table with a very large MD column. Ticket #211."
        N = 2**18
        cols = {'col1': Int8Col(shape=N, dflt=0)}
        tbl = self.h5file.createTable('/', 'test', cols)
        tbl.row.append()  # add a single row
        tbl.flush()
        if self.reopen:
            self._reopen()
            tbl = self.h5file.root.test
        # Check the value
        assert allequal(tbl[0]['col1'], zeros(N, 'i1'))

This proved to be pretty handy in practice.

HTH,

-- Francesc Alted

From ndbecker2 at gmail.com Mon Mar 9 07:47:40 2009 From: ndbecker2 at gmail.com (Neal Becker) Date: Mon, 09 Mar 2009 07:47:40 -0400 Subject: [Numpy-discussion] doc error in fromregex Message-ID:

http://docs.scipy.org/doc/numpy/reference/generated/numpy.fromregex.html#numpy.fromregex says 'str or file', but I don't think it takes str, only a file name

From stefan at sun.ac.za Mon Mar 9 08:13:17 2009 From: stefan at sun.ac.za (=?ISO-8859-1?Q?St=E9fan_van_der_Walt?=) Date: Mon, 9 Mar 2009 14:13:17 +0200 Subject: [Numpy-discussion] doc error in fromregex In-Reply-To: References: Message-ID: <9457e7c80903090513q7773f61bpc65522ca75bd8f21@mail.gmail.com>

The code contains

if not hasattr(file, "read"):
    file = open(file, 'r')

so it should work.

2009/3/9 Neal Becker :
> http://docs.scipy.org/doc/numpy/reference/generated/numpy.fromregex.html#numpy.fromregex
> says 'str or file', but I don't think it takes str, only a file name

From ndbecker2 at gmail.com Mon Mar 9 08:33:47 2009 From: ndbecker2 at gmail.com (Neal Becker) Date: Mon, 09 Mar 2009 08:33:47 -0400 Subject: [Numpy-discussion] doc error in fromregex References: <9457e7c80903090513q7773f61bpc65522ca75bd8f21@mail.gmail.com> Message-ID:

Stéfan van der Walt wrote:

> The code contains
>
> if not hasattr(file, "read"):
>     file = open(file, 'r')
>
> so it should work.
>
> 2009/3/9 Neal Becker :
>> http://docs.scipy.org/doc/numpy/reference/generated/numpy.fromregex.html#numpy.fromregex
>> says 'str or file', but I don't think it takes str, only a file name

Oh, so you mean pass a filename or an open file. I thought it meant it could read from a string.
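A small illustration of what the docs mean (the file name here is made up): fromregex accepts either a file name string or an open file object, but not the data itself as a string.

import numpy as np

# build a throwaway data file
f = open('test.dat', 'w')
f.write("1312 foo\n1534 bar\n444 qux\n")
f.close()

dt = [('num', np.int64), ('key', 'S3')]
a = np.fromregex('test.dat', r"(\d+)\s+(\S+)", dt)        # file name
b = np.fromregex(open('test.dat'), r"(\d+)\s+(\S+)", dt)  # open file object
print a['num']  # [1312 1534  444]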
From stefan at sun.ac.za Mon Mar 9 08:41:22 2009 From: stefan at sun.ac.za (=?ISO-8859-1?Q?St=E9fan_van_der_Walt?=) Date: Mon, 9 Mar 2009 14:41:22 +0200 Subject: [Numpy-discussion] doc error in fromregex In-Reply-To: References: <9457e7c80903090513q7773f61bpc65522ca75bd8f21@mail.gmail.com> Message-ID: <9457e7c80903090541w4bc0aad7s663084a5ca5ba3dd@mail.gmail.com> 2009/3/9 Neal Becker : > http://docs.scipy.org/doc/numpy/reference/generated/numpy.fromregex.html#numpy.fromregex >>> says 'str or file', but I don't think it takes str, only file name > > Oh, so you mean pass a filename or an open file. ?I thought it meant it could > read from a string. Yes, the docs say: file : str or file File name or file object to read. Cheers St?fan From dsdale24 at gmail.com Mon Mar 9 09:50:41 2009 From: dsdale24 at gmail.com (Darren Dale) Date: Mon, 9 Mar 2009 08:50:41 -0500 Subject: [Numpy-discussion] suggestion for generalizing numpy functions Message-ID: I spent some time over the weekend fixing a few bugs in numpy that were exposed when attempting to use ufuncs with ndarray subclasses. It got me thinking that, with relatively little work, numpy's functions could be made to be more general. For example, the numpy.ma module redefines many of the standard ufuncs in order to do some preprocessing before the builtin ufunc is called. Likewise, in the units/quantities package I have been working on, I would like to perform a dimensional analysis to make sure an operation is allowed before I call a ufunc that might change data in place. Imagine an ndarray subclass with methods like __gfunc_pre__ and __gfunc_post__. __gfunc_pre__ could accept the context that is currently provided to __array_wrap__ (the inputs and the function called), perform whatever preprocessing is desired, and maybe return a dictionary containing metadata. Numpy functions could then be wrapped with a decorator that 1) calls __gfunc_pre__ and obtain any metadata that is returned 2) calls the wrapped functions, and then 3) calls __gfunc_post__, which might be very similar to __array_wrap__ except that it would also accept the metadata created by __gfunc_pre__. In cases where the routines to be called by __gfunc_pre__ and _post__ depend on what function is called, the the subclass could implement routines and store them in a dictionary-like object that is keyed using the function called. I have been exploring this approach with Quantities and it seems to work well. For example: def __gfunc_pre__(self, gfunc, *args): try: return gfunc_pre_registry[gfunc](*args) except KeyError: return {} I think such an approach for generalizing numpy's functions could be implemented without being disruptive to the existing __array_wrap__ framework. The decorator would attempt to identify an input or output array to use to call __gfunc_pre__ and _post__. If it finds them, it uses them. If it doesnt find them, no harm done, the existing __array_wrap__ mechanisms are still in place if the wrapped function is a ufunc. One other nice feature: the metadata that is returned by __gfunc_pre__ could contain an optional flag that the decorator attempts to pass to the wrapped function so that __gfunc_pre__ and _post are not called for any decorated internal functions. That way the subclass could specify that __gfunc_pre__ and _post should be called only for the outer-most function. Comments? Darren -------------- next part -------------- An HTML attachment was scrubbed... 
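To make the proposal above concrete, here is one possible shape of such a decorator. This is only a sketch: the __gfunc_pre__/__gfunc_post__ hook names come from Darren's message, while the lookup logic, the hook signatures and the wrapped example are assumptions, not an agreed numpy API.

import numpy as np
from functools import wraps

def generalize(func):
    # Hypothetical decorator: call the (proposed) __gfunc_pre__ hook of
    # the first input that defines one, run the wrapped function, then
    # hand the result and the metadata to __gfunc_post__.
    @wraps(func)
    def wrapper(*args, **kwargs):
        obj = None
        for arg in args:
            if hasattr(arg, '__gfunc_pre__'):
                obj = arg
                break
        meta = obj.__gfunc_pre__(func, *args) if obj is not None else {}
        result = func(*args, **kwargs)
        if obj is not None and hasattr(obj, '__gfunc_post__'):
            result = obj.__gfunc_post__(func, result, meta)
        return result
    return wrapper

fix = generalize(np.fix)  # e.g. wrapping an existing numpy function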
URL: From david at ar.media.kyoto-u.ac.jp Mon Mar 9 11:49:50 2009 From: david at ar.media.kyoto-u.ac.jp (David Cournapeau) Date: Tue, 10 Mar 2009 00:49:50 +0900 Subject: [Numpy-discussion] NotImplemented string in clip not transformed into exception ? Message-ID: <49B53A9E.8060403@ar.media.kyoto-u.ac.jp> Hi, While fixing a segfault in clip, I noticed a strange behavior: import numpy as np # Print NotImplemented, but does not raise any exception a = np.complex128().clip('rrr', 1) Where is this string output coming from ? From numpy or python ? How can I transform this into a proper exception ? cheers, David From p0707 at o2.pl Mon Mar 9 12:55:37 2009 From: p0707 at o2.pl (=?UTF-8?Q?p0707?=) Date: Mon, 09 Mar 2009 17:55:37 +0100 Subject: [Numpy-discussion] =?utf-8?q?Array_with_different_types?= Message-ID: <767e8f6f.487f455f.49b54a09.5331c@o2.pl> Hi! How can I store such arrays: A 1 2.3 1.2 3 d 1.2 B 4 2.3 5.2 3 c 1.2 A 1 2.3 ? 3 e 1.2 using NumPy with support for basic functions: sum, max, min. Is it possible? If not, how I can do this in effective way? Thanks for help Peter From Chris.Barker at noaa.gov Mon Mar 9 13:33:39 2009 From: Chris.Barker at noaa.gov (Christopher Barker) Date: Mon, 09 Mar 2009 10:33:39 -0700 Subject: [Numpy-discussion] PyCon, anyone? Message-ID: <49B552F3.7020007@noaa.gov> Hey folks, I'm trying to get an idea of how many folks from the numpy/scipy/mpl community will be at PyCon this year. If enough of us, maybe a sprint is in order, but in any case, it might be nice to get together. Please send me a note off-list (to keep the clutter down) if you are going. I may compile a list and post that, so let me know if it's OK to post your name. -Chris -- Christopher Barker, Ph.D. Oceanographer Emergency Response Division NOAA/NOS/OR&R (206) 526-6959 voice 7600 Sand Point Way NE (206) 526-6329 fax Seattle, WA 98115 (206) 526-6317 main reception Chris.Barker at noaa.gov From 1-2 at gmx.net Mon Mar 9 14:01:03 2009 From: 1-2 at gmx.net (=?ISO-8859-15?Q?Julius_Schl=FCter?=) Date: Mon, 09 Mar 2009 19:01:03 +0100 Subject: [Numpy-discussion] ImportError: ccompiler Message-ID: <49B5595F.3010707@gmx.net> Hi, I'm a Newbie, trying to compile Numpy on Vista with Python 2.6, following this guide: http://code.google.com/p/pyamg/wiki/CompilingOnWinXP It worked one time. Now that I've updated Numpy to the latest subversion and tried a second time, I get this error: C:\Users\...>python C:\Python26\Lib\site-packages\numpy\setup.py config --compiler=mingw32 build --compiler=mingw32 install Running from numpy source directory. Traceback (most recent call last): File "C:\Python26\Lib\site-packages\numpy\setup.py", line 96, in setup_package() File "C:\Python26\Lib\site-packages\numpy\setup.py", line 68, in setup_package from numpy.distutils.core import setup File "C:\Python26\Lib\site-packages\numpy\numpy\distutils\__init__.py", line 6, in import ccompiler File "C:\Python26\Lib\site-packages\numpy\numpy\distutils\ccompiler.py", line6, in from distutils.ccompiler import * File "C:\python26\Lib\site-packages\numpy\distutils\__init__.py", line 6, in import ccompiler File "C:\python26\Lib\site-packages\numpy\distutils\ccompiler.py", line 7, in from distutils import ccompiler ImportError: cannot import name ccompiler My PATH: %SystemRoot%\system32;%SystemRoot%;%SystemRoot%\System32\Wbem;C:\Program Files\Microsoft SQL Server\90\Tools\binn\;C:\python26\;C:\Program Files\Bazaar;C:\Program Files\TortoiseSVN\bin;C:\MinGW\bin\ There was someone on this list almost two years ago who had the same problem. 
But he had \site-packages\numpy\ in his path, which I don't have. Any help is highly appreciated. -Jules From cournape at gmail.com Mon Mar 9 14:21:32 2009 From: cournape at gmail.com (David Cournapeau) Date: Tue, 10 Mar 2009 03:21:32 +0900 Subject: [Numpy-discussion] ImportError: ccompiler In-Reply-To: <49B5595F.3010707@gmx.net> References: <49B5595F.3010707@gmx.net> Message-ID: <5b8d13220903091121r3b06c7a0s28efff888561b44d@mail.gmail.com> On Tue, Mar 10, 2009 at 3:01 AM, Julius Schl?ter <1-2 at gmx.net> wrote: > Hi, > > I'm a Newbie, trying to compile Numpy on Vista with Python 2.6, > following this guide: > http://code.google.com/p/pyamg/wiki/CompilingOnWinXP First, the shortest path to numpy, specially on windows, is to use the binary installer. We don't have yet a python 2.6 installer, but the upcoming numpy 1.3 will have one (numpy 1.3.0 is scheduled for April 1st). > > It worked one time. Now that I've updated Numpy to the latest subversion > and tried a second time, I get this error: > > C:\Users\...>python C:\Python26\Lib\site-packages\numpy\setup.py config > --compiler=mingw32 build --compiler=mingw32 install You should not call the setup.py from the installed numpy (in site-packages). You should call the one in your svn checkout (which should not be in site-packages - site-package is *only* for installation. You should never even have to look into it, except for debugging and things like that). So basically, reserve yourself a directory for numpy, and in it: svn co http://svn.scipy.org/svn/numpy/trunk cd trunk python setup.py build -c mingw32 install You should remove the previously installed version before installing: rd /s /q C:?python26?Lib?site-packages?numpy But again, if you can wait for a few weeks, you will be better served with the binary installer, cheers, David From 1-2 at gmx.net Mon Mar 9 14:55:20 2009 From: 1-2 at gmx.net (=?UTF-8?B?SnVsaXVzIFNjaGzDvHRlcg==?=) Date: Mon, 09 Mar 2009 19:55:20 +0100 Subject: [Numpy-discussion] ImportError: ccompiler In-Reply-To: <5b8d13220903091121r3b06c7a0s28efff888561b44d@mail.gmail.com> References: <49B5595F.3010707@gmx.net> <5b8d13220903091121r3b06c7a0s28efff888561b44d@mail.gmail.com> Message-ID: <49B56618.50601@gmx.net> Hi David, Thanks very much! - Jules > > But again, if you can wait for a few weeks, you will be better served > with the binary installer, > > cheers, > > David > _______________________________________________ > Numpy-discussion mailing list > Numpy-discussion at scipy.org > http://mail.scipy.org/mailman/listinfo/numpy-discussion > From lists_ravi at lavabit.com Mon Mar 9 15:34:17 2009 From: lists_ravi at lavabit.com (Ravi) Date: Mon, 9 Mar 2009 15:34:17 -0400 Subject: [Numpy-discussion] ImportError: ccompiler In-Reply-To: <5b8d13220903091121r3b06c7a0s28efff888561b44d@mail.gmail.com> References: <49B5595F.3010707@gmx.net> <5b8d13220903091121r3b06c7a0s28efff888561b44d@mail.gmail.com> Message-ID: <200903091534.25363.lists_ravi@lavabit.com> Hi David, On Monday 09 March 2009 14:21:32 David Cournapeau wrote: > First, the shortest path to numpy, specially on windows, is to use the > binary installer. We don't have yet a python 2.6 installer, but the > upcoming numpy 1.3 will have one (numpy 1.3.0 is scheduled for April > 1st). If numpy 1.3.0 is available April 1, will compatible versions of scipy binaries also be available then? In other words, if numpy 1.3.0 is available for Python 2.6 on Windows XP on April 1, will a scipy 0.7.x binary that works on the same platform be available? 
If so, that is extremely good news, for I can finally get rid of lots of hacks (specifically to work around MSVC 7.1 deficiencies) in my code. MSVC 9.0 (used to build python 2.6) will, hopefully, be a little better.

Regards,
Ravi

From pav at iki.fi Mon Mar 9 15:35:23 2009 From: pav at iki.fi (Pauli Virtanen) Date: Mon, 9 Mar 2009 19:35:23 +0000 (UTC) Subject: [Numpy-discussion] Buildbot issues Message-ID:

Hi,

There seem to be some problems with the buildbot:

- It's not building on new commits automatically. IIRC this could be fixed by removing some (all?) of buildmaster's cache files, and/or switching to PersistentSVNPoller.
- The buildmaster apparently has the old 'scipy.org' SVN url that doesn't work any more in its config.
- The FreeBSD_64 slave (+ maybe others) has a wrong Numpy test stanza in its Makefile, so tests won't run.

-- Pauli Virtanen

From cournape at gmail.com Mon Mar 9 15:38:24 2009 From: cournape at gmail.com (David Cournapeau) Date: Tue, 10 Mar 2009 04:38:24 +0900 Subject: [Numpy-discussion] ImportError: ccompiler In-Reply-To: <200903091534.25363.lists_ravi@lavabit.com> References: <49B5595F.3010707@gmx.net> <5b8d13220903091121r3b06c7a0s28efff888561b44d@mail.gmail.com> <200903091534.25363.lists_ravi@lavabit.com> Message-ID: <5b8d13220903091238s2ac7dac4s7cd64b5fdf66908d@mail.gmail.com>

On Tue, Mar 10, 2009 at 4:34 AM, Ravi wrote:
> If numpy 1.3.0 is available April 1, will compatible versions of scipy binaries also be available then? In other words, if numpy 1.3.0 is available for Python 2.6 on Windows XP on April 1, will a scipy 0.7.x binary that works on the same platform be available?

scipy 0.7 binaries should work with the 1.3.0 without problems. If not, we will build new scipy binaries.

cheers,

David

From stefan at sun.ac.za Mon Mar 9 15:39:27 2009 From: stefan at sun.ac.za (=?ISO-8859-1?Q?St=E9fan_van_der_Walt?=) Date: Mon, 9 Mar 2009 21:39:27 +0200 Subject: [Numpy-discussion] ImportError: ccompiler In-Reply-To: <200903091534.25363.lists_ravi@lavabit.com> References: <49B5595F.3010707@gmx.net> <5b8d13220903091121r3b06c7a0s28efff888561b44d@mail.gmail.com> <200903091534.25363.lists_ravi@lavabit.com> Message-ID: <9457e7c80903091239w8e7d593o985ee882db8c11d9@mail.gmail.com>

Hi Ravi

2009/3/9 Ravi :
> If numpy 1.3.0 is available April 1, will compatible versions of scipy binaries also be available then? In other words, if numpy 1.3.0 is available for Python 2.6 on Windows XP on April 1, will a scipy 0.7.x binary that works on the same platform be available?

We made some changes in 1.2 so that you no longer need to upgrade SciPy when you move to a different version of NumPy.

Regards
Stéfan

From charlesr.harris at gmail.com Mon Mar 9 15:44:54 2009 From: charlesr.harris at gmail.com (Charles R Harris) Date: Mon, 9 Mar 2009 13:44:54 -0600 Subject: [Numpy-discussion] Buildbot issues In-Reply-To: References: Message-ID:

On Mon, Mar 9, 2009 at 1:35 PM, Pauli Virtanen wrote:
> Hi,
>
> There seem to be some problems with the buildbot:
>
> - It's not building on new commits automatically. IIRC this could be fixed by removing some (all?) of buildmaster's cache files, and/or switching to PersistentSVNPoller.
> - The buildmaster apparently has the old 'scipy.org' SVN url that doesn't work any more in its config.
> - The FreeBSD_64 slave (+ maybe others) has a wrong Numpy test stanza in its Makefile, so tests won't run.

I wonder if there is also a problem related to the svn commits not generating mail?

Chuck
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From lists_ravi at lavabit.com Mon Mar 9 16:20:33 2009 From: lists_ravi at lavabit.com (Ravi) Date: Mon, 9 Mar 2009 16:20:33 -0400 Subject: [Numpy-discussion] ImportError: ccompiler In-Reply-To: <5b8d13220903091238s2ac7dac4s7cd64b5fdf66908d@mail.gmail.com> References: <49B5595F.3010707@gmx.net> <200903091534.25363.lists_ravi@lavabit.com> <5b8d13220903091238s2ac7dac4s7cd64b5fdf66908d@mail.gmail.com> Message-ID: <200903091620.33958.lists_ravi@lavabit.com>

On Monday 09 March 2009 15:38:24 David Cournapeau wrote:
> > If numpy 1.3.0 is available April 1, will compatible versions of scipy binaries also be available then? In other words, if numpy 1.3.0 is available for Python 2.6 on Windows XP on April 1, will a scipy 0.7.x binary that works on the same platform be available?
>
> scipy 0.7 binaries should work with the 1.3.0 without problems. If not, we will build new scipy binaries.

As far as I know, a scipy 0.7.0 binary is not available for python 2.6. If a numpy 1.3.0 binary is available for python 2.6, will a corresponding scipy 0.7.x binary be available for python 2.6? Or is my assumption -- that the scipy 0.7.0 binary available for python 2.5 will not work with python 2.6 -- incorrect?

Regards,
Ravi

From cournape at gmail.com Mon Mar 9 16:41:06 2009 From: cournape at gmail.com (David Cournapeau) Date: Tue, 10 Mar 2009 05:41:06 +0900 Subject: [Numpy-discussion] ImportError: ccompiler In-Reply-To: <200903091620.33958.lists_ravi@lavabit.com> References: <49B5595F.3010707@gmx.net> <200903091534.25363.lists_ravi@lavabit.com> <5b8d13220903091238s2ac7dac4s7cd64b5fdf66908d@mail.gmail.com> <200903091620.33958.lists_ravi@lavabit.com> Message-ID: <5b8d13220903091341x5529d48fu719e2a07f1238aaf@mail.gmail.com>

On Tue, Mar 10, 2009 at 5:20 AM, Ravi wrote:
>
> As far as I know, a scipy 0.7.0 binary is not available for python 2.6.

Yes, you're right. To build scipy, we need numpy, though :) The good news is that we worked on making sure scipy 0.7 mostly works with python 2.6, using numpy svn.

> If a numpy 1.3.0 binary is available for python 2.6, will a corresponding scipy 0.7.x binary be available for python 2.6?

Yes.

> Or is my assumption -- that the scipy 0.7.0 binary available for python 2.5 will not work with python 2.6 -- incorrect?

No, this is correct. Python "minor" (2.5 vs 2.6) releases are not backward compatible. On Windows, a check is done so as to avoid installing a python 2.6 binary in python 2.5.

cheers,

David

> Regards,
> Ravi
>
> _______________________________________________
> Numpy-discussion mailing list
> Numpy-discussion at scipy.org
> http://mail.scipy.org/mailman/listinfo/numpy-discussion
>

From markbak at gmail.com Mon Mar 9 16:45:42 2009 From: markbak at gmail.com (Mark Bakker) Date: Mon, 9 Mar 2009 21:45:42 +0100 Subject: [Numpy-discussion] Another question on reading from binary FORTRAN file Message-ID: <6946b9500903091345u6badcbeaw2641a761bf714bf4@mail.gmail.com>

Hello -

I tried to figure this out from the list, but haven't succeeded yet.

I have a simple FORTRAN binary file.
It contains:
1 integer
1 float
1 array with 16 numbers (float)

How do I read these into Python?
Thanks, Mark From dsdale24 at gmail.com Mon Mar 9 17:37:01 2009 From: dsdale24 at gmail.com (Darren Dale) Date: Mon, 9 Mar 2009 17:37:01 -0400 Subject: [Numpy-discussion] suggestion for generalizing numpy functions In-Reply-To: References: Message-ID: On Mon, Mar 9, 2009 at 9:50 AM, Darren Dale wrote: > I spent some time over the weekend fixing a few bugs in numpy that were > exposed when attempting to use ufuncs with ndarray subclasses. It got me > thinking that, with relatively little work, numpy's functions could be made > to be more general. For example, the numpy.ma module redefines many of the > standard ufuncs in order to do some preprocessing before the builtin ufunc > is called. Likewise, in the units/quantities package I have been working on, > I would like to perform a dimensional analysis to make sure an operation is > allowed before I call a ufunc that might change data in place. > > Imagine an ndarray subclass with methods like __gfunc_pre__ and > __gfunc_post__. __gfunc_pre__ could accept the context that is currently > provided to __array_wrap__ (the inputs and the function called), perform > whatever preprocessing is desired, and maybe return a dictionary containing > metadata. Numpy functions could then be wrapped with a decorator that 1) > calls __gfunc_pre__ and obtain any metadata that is returned 2) calls the > wrapped functions, and then 3) calls __gfunc_post__, which might be very > similar to __array_wrap__ except that it would also accept the metadata > created by __gfunc_pre__. > > In cases where the routines to be called by __gfunc_pre__ and _post__ > depend on what function is called, the the subclass could implement routines > and store them in a dictionary-like object that is keyed using the function > called. I have been exploring this approach with Quantities and it seems to > work well. For example: > > def __gfunc_pre__(self, gfunc, *args): > try: > return gfunc_pre_registry[gfunc](*args) > except KeyError: > return {} > > I think such an approach for generalizing numpy's functions could be > implemented without being disruptive to the existing __array_wrap__ > framework. The decorator would attempt to identify an input or output array > to use to call __gfunc_pre__ and _post__. If it finds them, it uses them. If > it doesnt find them, no harm done, the existing __array_wrap__ mechanisms > are still in place if the wrapped function is a ufunc. > > One other nice feature: the metadata that is returned by __gfunc_pre__ > could contain an optional flag that the decorator attempts to pass to the > wrapped function so that __gfunc_pre__ and _post are not called for any > decorated internal functions. That way the subclass could specify that > __gfunc_pre__ and _post should be called only for the outer-most function. > > Comments? > I'm attaching a proof of concept script, maybe it will better illustrate what I am talking about. Darren -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: gfuncs.py Type: text/x-python Size: 1868 bytes Desc: not available URL: From oliphant at enthought.com Mon Mar 9 18:08:16 2009 From: oliphant at enthought.com (Travis E. 
Oliphant) Date: Mon, 09 Mar 2009 17:08:16 -0500 Subject: [Numpy-discussion] suggestion for generalizing numpy functions In-Reply-To: References: Message-ID: <49B59350.7080601@enthought.com> Darren Dale wrote: > On Mon, Mar 9, 2009 at 9:50 AM, Darren Dale > wrote: > > I spent some time over the weekend fixing a few bugs in numpy that > were exposed when attempting to use ufuncs with ndarray > subclasses. It got me thinking that, with relatively little work, > numpy's functions could be made to be more general. For example, > the numpy.ma module redefines many of the > standard ufuncs in order to do some preprocessing before the > builtin ufunc is called. Likewise, in the units/quantities package > I have been working on, I would like to perform a dimensional > analysis to make sure an operation is allowed before I call a > ufunc that might change data in place. > The suggestions behind this idea are interesting. It seems related to me, to the concept of "contexts" that Eric presented at SciPy a couple of years ago that keeps coming up at Enthought. It may be of benefit to solve the problem from that perspective rather than the "sub-class" perspective. Unfortunately, I don't have time to engage this discussion as it deserves, but I wanted to encourage you because I think there are good ideas in what you are doing. The sub-class route may be a decent solution, but it also might be worthwhile to think from the perspective of contexts as well. Basically, the context idea is that rather than "sub-class" the ndarray, you create a more powerful name-space for code that uses arrays to live in. Because python code can execute using a namespace that is any dictionary-like thing, you can create a "namespace" object with more powerful getters and setters that intercepts the getting and setting of names as the Python code is executing. This allows every variable to be "adapted" in a manner analagous to "type-maps" in SWIG --- but in a more powerful way. We have been taking advantage of this basic but powerful idea quite a bit. Unit-handling is a case where "contexts" and generic functions rather than sub-classes appears to be an approach to solving the problem. The other important idea about contexts is that you can layer-on adapters on getting and setting variables into the namespace which provide more hooks for doing some powerful things in easy-to-remember ways. I apologize if it sounds like I'm hi-jacking your question to promote an agenda. I really like the generality you are trying to reach with your suggestions and just wanted to voice the opinion that it might be better to look for a solution using the two dimensions of "objects" and "namespaces" (o.k. generic functions are probably another dimension in my metaphor) rather than just sub-classes of objects. -- Travis Oliphant Enthought, Inc. (512) 536-1057 (office) (512) 536-1059 (fax) http://www.enthought.com oliphant at enthought.com From michael.s.gilbert at gmail.com Mon Mar 9 18:21:45 2009 From: michael.s.gilbert at gmail.com (Michael S. Gilbert) Date: Mon, 9 Mar 2009 18:21:45 -0400 Subject: [Numpy-discussion] Another question on reading from binary FORTRAN file In-Reply-To: <6946b9500903091345u6badcbeaw2641a761bf714bf4@mail.gmail.com> References: <6946b9500903091345u6badcbeaw2641a761bf714bf4@mail.gmail.com> Message-ID: <20090309182145.69669d76.michael.s.gilbert@gmail.com> On Mon, 9 Mar 2009 21:45:42 +0100, Mark Bakker wrote: > Hello - > > I tried to figure this out from the list, but haven't succeeded yet. 
> I have a simple FORTRAN binary file.
> It contains:
> 1 integer
> 1 float
> 1 array with 16 numbers (float)
>
> How do I read these into Python?

I figured this out a long time (4 years) ago, but haven't thought about it for a while, so I don't really have the answer, but I can provide a little guidance. Fortran pads its output, so you just have to figure out how the padding works and how to get around it to get the information that you actually need.

I suggest writing out some examples via fortran and using hexdump to view the results, then you can determine the pattern. For example, you would get something like:

$ hexdump -C fort.out
0000008 0000000F 00000008

when you use fortran to write 15 as an integer. Then you can use python's binary file i/o to read in the file and extract the information that you are interested in. Hope this helps.

Regards,
Mike

From Chris.Barker at noaa.gov Mon Mar 9 18:43:15 2009 From: Chris.Barker at noaa.gov (Christopher Barker) Date: Mon, 09 Mar 2009 15:43:15 -0700 Subject: [Numpy-discussion] Another question on reading from binary FORTRAN file In-Reply-To: <20090309182145.69669d76.michael.s.gilbert@gmail.com> References: <6946b9500903091345u6badcbeaw2641a761bf714bf4@mail.gmail.com> <20090309182145.69669d76.michael.s.gilbert@gmail.com> Message-ID: <49B59B83.9000002@noaa.gov>

> On Mon, 9 Mar 2009 21:45:42 +0100, Mark Bakker wrote:
>> I tried to figure this out from the list, but haven't succeeded yet.
>>
>> I have a simple FORTRAN binary file.
>> It contains:
>> 1 integer
>> 1 float
>> 1 array with 16 numbers (float)
>>
>> How do I read these into Python?

there was a lengthy discussion of reading FORTRAN binary files on this list within the last few months -- lots of good info there. However, this is a pretty simple subset of the problem, so you might start with the struct module:

http://www.python.org/doc/2.5.4/lib/module-struct.html

import struct

pad = 1
format = "%ixi%ixf%ix16f" % (pad, pad, pad)
num_bytes = struct.calcsize(format)
data = infile.read(num_bytes)
values = struct.unpack(format, data)

you will need to play with the pad value (1 or 2), and you may not need it between the single values (I think this all depends on your FORTRAN compiler)

-Chris

-- Christopher Barker, Ph.D. Oceanographer Emergency Response Division NOAA/NOS/OR&R (206) 526-6959 voice 7600 Sand Point Way NE (206) 526-6329 fax Seattle, WA 98115 (206) 526-6317 main reception Chris.Barker at noaa.gov

From fperez.net at gmail.com Mon Mar 9 20:46:28 2009 From: fperez.net at gmail.com (Fernando Perez) Date: Mon, 9 Mar 2009 17:46:28 -0700 Subject: [Numpy-discussion] ANN: python for scientific computing at SIAM CSE 09 In-Reply-To: References: Message-ID:

Hi folks,

On Wed, Mar 4, 2009 at 6:51 AM, Fernando Perez wrote:
> Hi all,
>
> sorry for the spam, but in case any of you are coming to the SIAM Conference on Computational Science and Engineering (CSE09) in Miami:
>
> http://www.siam.org/meetings/cse09/

A little trip report:

http://fdoperez.blogspot.com/2009/03/python-at-siam-cse09-meeting.html

and the slides I have so far, for those who may be interested (I'll continue to add more as I get them):

https://cirl.berkeley.edu/fperez/py4science/2009_siam_cse/

Thanks to all the speakers!

Cheers,

f

From charlesr.harris at gmail.com Mon Mar 9 23:31:39 2009 From: charlesr.harris at gmail.com (Charles R Harris) Date: Mon, 9 Mar 2009 21:31:39 -0600 Subject: [Numpy-discussion] sign, signbit and nans once again. Message-ID:

I want to get this settled for the 1.3 release.
My thoughts are: - signbit returns the signbit whether or not the number is a nan. - sign returns nan for nans. Copysign is currently unimplemented. I'm thinking of adding it, but making it return nans when copying the sign of a nan. This isn't how it is in BSD, but the behavior in this case is unspecified by the standard. Chuck -------------- next part -------------- An HTML attachment was scrubbed... URL: From michael.s.gilbert at gmail.com Mon Mar 9 23:41:12 2009 From: michael.s.gilbert at gmail.com (Michael Gilbert) Date: Mon, 9 Mar 2009 23:41:12 -0400 Subject: [Numpy-discussion] Another question on reading from binary FORTRAN file In-Reply-To: <20090309182145.69669d76.michael.s.gilbert@gmail.com> References: <6946b9500903091345u6badcbeaw2641a761bf714bf4@mail.gmail.com> <20090309182145.69669d76.michael.s.gilbert@gmail.com> Message-ID: <20090309234112.8f5c8495.michael.s.gilbert@gmail.com> On Mon, 9 Mar 2009 18:21:45 -0400 "Michael S. Gilbert" wrote: > On Mon, 9 Mar 2009 21:45:42 +0100, Mark Bakker wrote: > > > Hello - > > > > I tried to figure this out from the list, but haven't succeeded yet. > > > > I have a simple FORTRAN binary file. > > It contains: > > 1 integer > > 1 float > > 1 array with 16 numbers (float) > > > > How do I read these into Python? > > I figured this out a long time (4 years) ago, but haven't thought about > it for a while, so I don't really have the answer, but I can provide > a little guidance. Fortran pads its output, so you just have to > figure out how the padding works and how to get around it to get the > information that you actually need. > > I suggest writing out some examples via fortran and using hexdump to > view the results, then you can determine the pattern. For example, > you would get something like: > > $ hexdump -C fort.out > 0000008 0000000F 00000008 > > when you use fortran to write 15 as an integer. Then you can use > python's binary file i/o to read in the file and extract the > information that you are interested in. Hope this helps. > > Regards, > Mike I probably should have mentioned fromfile, which you can actually use to read the binary data: fid = open( 'fort.out' , 'r' ) junk = numpy.fromfile( fid , numpy.int , 1 ) integer = numpy.fromfile( fid, numpy.int , 1 ) junk = numpy.fromfile( fid , numpy.int , 2 ) floats = numpy.fromfile( fid , numpy.float , 16 ) . . . fid.close() Regards, Mike From robert.kern at gmail.com Mon Mar 9 23:54:09 2009 From: robert.kern at gmail.com (Robert Kern) Date: Mon, 9 Mar 2009 22:54:09 -0500 Subject: [Numpy-discussion] sign, signbit and nans once again. In-Reply-To: References: Message-ID: <3d375d730903092054u35bab24ekc8318b53bba7ce8f@mail.gmail.com> On Mon, Mar 9, 2009 at 22:31, Charles R Harris wrote: > I want to get this settled for the 1.3 release. My thoughts are: > > signbit returns the signbit whether or not the number is a nan. > sign returns nan for nans. > > Copysign is currently unimplemented. I'm thinking of adding it, but making > it return nans when copying the sign of a nan. This isn't how it is in BSD, > but the behavior in this case is unspecified by the standard. math.copysign() just uses the available copysign() function from libm. I would prefer that we do the same. -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." 
-- Umberto Eco From shuwj5460 at 163.com Tue Mar 10 01:26:51 2009 From: shuwj5460 at 163.com (shuwj5460 at 163.com) Date: Tue, 10 Mar 2009 13:26:51 +0800 Subject: [Numpy-discussion] why numpy.round get a different result from python round function? Message-ID: <20090310131954.4DC8.SHUWJ5460@163.com> hi, I read the doc for numpy about round function: ----- For values exactly halfway between rounded decimal values, Numpy rounds to the nearest even value. Thus 1.5 and 2.5 round to 2.0, -0.5 and 0.5 round to 0.0, etc. Results may also be surprising due to the inexact representation of decimal fractions in the IEEE floating point standard [16] and errors introduced when scaling by powers of ten. ---- why not numpy round keep the same with python round function? or provide anothor function to do so? david.shu 2009.3.10 -- <> From cournape at gmail.com Tue Mar 10 02:27:32 2009 From: cournape at gmail.com (David Cournapeau) Date: Tue, 10 Mar 2009 15:27:32 +0900 Subject: [Numpy-discussion] Numpy documentation: status and distribution for 1.3.0 Message-ID: <5b8d13220903092327ycdc6bd2od1405141da736f42@mail.gmail.com> Hi, For the upcoming 1.3.0 release, I would like to distribute the (built) documentation in some way. But first, I need to be able to build it :) What are the exact requirements to build the documentation ? Is sphinx 0.5 enough ? I can't manage to build it on either mac os x or linux: ... dumping search index... Exception occurred: File "/Users/david/local/stow/sphinx.dev/lib/python2.5/site-packages/sphinx/search.py", line 151, in get_descrefs pdict[name] = (fn2index[doc], i) KeyError: 'reference/c-api.types-and-structures' The full traceback has been saved in /var/folders/b-/b-BC2bPYFouYhoybrvprFE+++TI/-Tmp-/sphinx-err-PKglvL.log, if you want to report the issue to the author. Please also report this if it was a user error, so that a better error message can be provided next time. There are also some errors on mac os x about too many opened files (which can be alleviated by running the make html again, but obviously, that's not great). I don't know if there are easy solutions to that problem, cheers, David From efiring at hawaii.edu Tue Mar 10 03:12:18 2009 From: efiring at hawaii.edu (Eric Firing) Date: Mon, 09 Mar 2009 21:12:18 -1000 Subject: [Numpy-discussion] Numpy documentation: status and distribution for 1.3.0 In-Reply-To: <5b8d13220903092327ycdc6bd2od1405141da736f42@mail.gmail.com> References: <5b8d13220903092327ycdc6bd2od1405141da736f42@mail.gmail.com> Message-ID: <49B612D2.2020306@hawaii.edu> David Cournapeau wrote: > Hi, > > For the upcoming 1.3.0 release, I would like to distribute the (built) > documentation in some way. But first, I need to be able to build it :) > What are the exact requirements to build the documentation ? Is sphinx > 0.5 enough ? I can't manage to build it on either mac os x or linux: David, A few days ago I was trying to build it on linux. I could not build it with 0.5.1 or with 0.6dev. I submitted tickets to sphinx for the latter, and I think the first three problems were fixed before I hit one that required a change on the numpy side. I was not (and still am not) prepared to work on it, so that was the end of it. http://www.mail-archive.com/numpy-discussion at scipy.org/msg15953.html Eric > > ... > dumping search index... 
From nwagner at iam.uni-stuttgart.de Tue Mar 10 05:37:49 2009
From: nwagner at iam.uni-stuttgart.de (Nils Wagner)
Date: Tue, 10 Mar 2009 10:37:49 +0100
Subject: [Numpy-discussion] dot product
Message-ID:

Hi all,

The dot product can be defined for two vectors x and y by x·y = |x||y| cos(theta), where theta is the angle between the vectors and |x| is the norm. Now assume that we have arrays (matrices)

X = [x_1, ..., x_m]
Y = [y_1, ..., y_s]
m <> s

Is there a built-in function to compute the following matrix?

for i in arange(0, m):
    for j in arange(0, s):
        MAC[i, j] = dot(X[:, i], Y[:, j])**2 / (dot(X[:, i], X[:, i]) * dot(Y[:, j], Y[:, j]))

Each element of the matrix is the squared cosine of the angle between the corresponding pair of vectors.

Nils

From nadavh at visionsense.com Tue Mar 10 06:15:38 2009
From: nadavh at visionsense.com (Nadav Horesh)
Date: Tue, 10 Mar 2009 12:15:38 +0200
Subject: [Numpy-discussion] dot product
References: Message-ID: <710F2847B0018641891D9A216027636029C474@ex3.envision.co.il>

dot(X.transpose(), Y)**2 / ( (X*X).sum(0)[:,None] * (Y*Y).sum(0) )

Nadav
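The one-liner above can be checked against the explicit double loop with a short self-contained script (a sketch; the matrix sizes and random test data here are arbitrary):

import numpy as np

m, s, n = 3, 5, 4
X = np.random.rand(n, m)   # columns are the vectors x_i
Y = np.random.rand(n, s)   # columns are the vectors y_j

# double loop, as in the original post
MAC = np.empty((m, s))
for i in np.arange(0, m):
    for j in np.arange(0, s):
        MAC[i, j] = np.dot(X[:, i], Y[:, j])**2 / (np.dot(X[:, i], X[:, i]) * np.dot(Y[:, j], Y[:, j]))

# vectorized form from the reply
MAC2 = np.dot(X.transpose(), Y)**2 / ((X*X).sum(0)[:, None] * (Y*Y).sum(0))
print np.allclose(MAC, MAC2)   # True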
From sturla at molden.no Tue Mar 10 07:27:44 2009
From: sturla at molden.no (Sturla Molden)
Date: Tue, 10 Mar 2009 12:27:44 +0100
Subject: [Numpy-discussion] Another question on reading from binary FORTRAN file
In-Reply-To: <6946b9500903091345u6badcbeaw2641a761bf714bf4@mail.gmail.com>
References: <6946b9500903091345u6badcbeaw2641a761bf714bf4@mail.gmail.com>
Message-ID: <49B64EB0.2060109@molden.no>

On 3/9/2009 9:45 PM, Mark Bakker wrote:
> How do I read these into Python?

One option is to read the file in Fortran (using the same compiler), and call Fortran from python using f2py. If you can write the file in Fortran, you can read the file in Fortran. Another option is to find out how the binary data are stored by Fortran (experimenting, hex editor, etc.) and read them using numpy.fromfile. You can also use numpy.memmap with a recarray.

Sturla Molden

From watson.jim at gmail.com Tue Mar 10 09:22:46 2009
From: watson.jim at gmail.com (James Watson)
Date: Tue, 10 Mar 2009 13:22:46 +0000
Subject: [Numpy-discussion] syntax error in scalartypes.inc.src
Message-ID:

In revision 6609, numpy fails to build on linux x86_64 due to an extra comma on line 779 of numpy/core/src/scalartypes.inc.src.

From david.huard at gmail.com Tue Mar 10 09:37:24 2009
From: david.huard at gmail.com (David Huard)
Date: Tue, 10 Mar 2009 09:37:24 -0400
Subject: [Numpy-discussion] Changes and new workflow on Trac
In-Reply-To: <9457e7c80903090035i3970aebfq8ef683ffe916283a@mail.gmail.com>
References: <9457e7c80903081744n78c936fdl3065285e2fea5ae2@mail.gmail.com> <9457e7c80903090035i3970aebfq8ef683ffe916283a@mail.gmail.com>
Message-ID: <91cf711d0903100637m57ce3236v51ddf6ecf9f55ced@mail.gmail.com>

Stefan,

The SciPy site is really nice, but the NumPy site returns a Page Load Error.

David

On Mon, Mar 9, 2009 at 3:35 AM, Stéfan van der Walt wrote:
> Hi all,
> Here is an outline of recent changes made to the Trac system.
>
> I have modified the ticket workflow on projects.scipy.org/{numpy,scipy} to accommodate patch review (see http://mentat.za.net/refer/workflow.png). I hope this facility will make it easier to contribute, and I would like to have your feedback/suggestions.
>
> Instructions to contributors:
> * [http://projects.scipy.org/numpy/newticket Contribute a patch] or file a bug report
> * [http://docs.scipy.org Write documentation]
> * [http://projects.scipy.org/numpy/report/12 Review patches] available
> * [http://projects.scipy.org/numpy/report/13 Apply reviewed patches]
> The last two are new items.
>
> A ticket can be marked "needs_review", whereafter it can be changed to "review_positive" or "needs_work". Also, a "design decision needed" state is provided for stalled tickets.
>
> Other changes: To simplify ticket structure, "severity" was removed ("priority" should be used instead). Furthermore, tickets are no longer "accepted", but simply "assigned". You can still assign tickets to yourself.
>
> Source repository: A git repository is available on http://projects.scipy.org/git and http://projects.scipy.org/git/{numpy,scipy}. This repository can be browsed from Trac by clicking on the "Git Repo" button, or at http://projects.scipy.org/{numpy,scipy}/browse_git
> It can be cloned from http://projects.scipy.org/numpy.git
> Pauli installed the necessary SVN post-commit hooks to ensure that the git repository is always up to date.
>
> Ticket mailing lists: Trac tries to send out e-mails, but only a handful are going through. We are investigating the problem.
>
> Comments and suggestions are very welcome! Thank you to David, Peter and Pauli for all their hard work on the server setup during the past week.
>
> Regards
> Stéfan
From stefan at sun.ac.za Tue Mar 10 09:44:45 2009
From: stefan at sun.ac.za (Stéfan van der Walt)
Date: Tue, 10 Mar 2009 15:44:45 +0200
Subject: [Numpy-discussion] Changes and new workflow on Trac
In-Reply-To: <91cf711d0903100637m57ce3236v51ddf6ecf9f55ced@mail.gmail.com>
References: <9457e7c80903081744n78c936fdl3065285e2fea5ae2@mail.gmail.com> <9457e7c80903090035i3970aebfq8ef683ffe916283a@mail.gmail.com> <91cf711d0903100637m57ce3236v51ddf6ecf9f55ced@mail.gmail.com>
Message-ID: <9457e7c80903100644m2a3fb2fy3c7c6362e95168a1@mail.gmail.com>

Hi David

2009/3/10 David Huard :
> The SciPy site is really nice, but the NumPy site returns a Page Load Error.

Which page are you referring to? http://projects.scipy.org/numpy seems to work fine.

Cheers
Stéfan

From david at ar.media.kyoto-u.ac.jp Tue Mar 10 09:52:30 2009
From: david at ar.media.kyoto-u.ac.jp (David Cournapeau)
Date: Tue, 10 Mar 2009 22:52:30 +0900
Subject: [Numpy-discussion] syntax error in scalartypes.inc.src
In-Reply-To: References: Message-ID: <49B6709E.30408@ar.media.kyoto-u.ac.jp>

James Watson wrote:
> In revision 6609, numpy fails to build on linux x86_64 due to an extra comma on line 779 of numpy/core/src/scalartypes.inc.src.

Fixed in r6615,

cheers,
David

From stefan at sun.ac.za Tue Mar 10 10:50:01 2009
From: stefan at sun.ac.za (Stéfan van der Walt)
Date: Tue, 10 Mar 2009 16:50:01 +0200
Subject: [Numpy-discussion] why numpy.round get a different result from python round function?
In-Reply-To: <20090310131954.4DC8.SHUWJ5460@163.com>
References: <20090310131954.4DC8.SHUWJ5460@163.com>
Message-ID: <9457e7c80903100750y42cec7d6i18eac6e8300d859c@mail.gmail.com>

Hi David

2009/3/10 shuwj5460 at 163.com :
> I read the doc for numpy about round function:
> -----
> For values exactly halfway between rounded decimal values, Numpy rounds to the nearest even value. Thus 1.5 and 2.5 round to 2.0, -0.5 and 0.5 round to 0.0, etc.
> ----
> Why does numpy's round not keep the same behavior as the python round function? Or could numpy provide another function that does?

In Python 3.0, the builtin round also goes to the nearest even number. You can round to nearest using something like

np.floor(np.abs(x) + 0.5) * np.sign(x)

Regards
Stéfan
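The difference between the two conventions, and the suggested workaround, in a quick sketch (Python 2 era, where the builtin round rounds halves away from zero):

import numpy as np

x = np.array([0.5, 1.5, 2.5, -0.5])
print np.round(x)                             # [ 0.  2.  2. -0.]  ties go to the nearest even value
print np.floor(np.abs(x) + 0.5) * np.sign(x)  # [ 1.  2.  3. -1.]  ties go away from zero, like the builtin round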
From chanley at stsci.edu Tue Mar 10 10:57:54 2009
From: chanley at stsci.edu (Christopher Hanley)
Date: Tue, 10 Mar 2009 10:57:54 -0400
Subject: [Numpy-discussion] new numpy error in 1.3.0.dev6618
Message-ID: <49B67FF2.1090702@stsci.edu>

======================================================================
ERROR: test_float_repr (test_scalarmath.TestRepr)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/Users/chanley/dev/site-packages/lib/python/numpy/core/tests/test_scalarmath.py", line 101, in test_float_repr
    val2 = t(eval(val_repr))
  File "<string>", line 1, in <module>
NameError: name 'nan' is not defined
----------------------------------------------------------------------
Ran 2018 tests in 10.311s

FAILED (KNOWNFAIL=1, SKIP=1, errors=1)

>>> numpy.__version__
'1.3.0.dev6618'

This was run on an Intel Mac running OS X 10.5.6.

Chris

--
Christopher Hanley
Senior Systems Software Engineer
Space Telescope Science Institute
3700 San Martin Drive
Baltimore MD, 21218
(410) 338-4338

From markbak at gmail.com Tue Mar 10 11:05:06 2009
From: markbak at gmail.com (Mark Bakker)
Date: Tue, 10 Mar 2009 16:05:06 +0100
Subject: [Numpy-discussion] Another question on reading from binary FORTRAN file
Message-ID: <6946b9500903100805t2117d1fcv3ecacc11fd8ed9cd@mail.gmail.com>

Thanks, Mike. This seems to be a really easy way. One more question. It turns out my file also contains a character string of 16 characters. I tried

np.fromfile(fd, np.str, 16)

But that gives an error. Can I read a character string with fromfile? I know I can read with fd.read, but I am wondering if there is an option for fromfile as well.

Thanks, Mark

> From: Michael Gilbert
> fid = open('fort.out', 'rb')
> junk = numpy.fromfile(fid, numpy.int, 1)
> integer = numpy.fromfile(fid, numpy.int, 1)
> junk = numpy.fromfile(fid, numpy.int, 2)
> floats = numpy.fromfile(fid, numpy.float, 16)

From michael.s.gilbert at gmail.com Tue Mar 10 11:37:24 2009
From: michael.s.gilbert at gmail.com (Michael S. Gilbert)
Date: Tue, 10 Mar 2009 11:37:24 -0400
Subject: [Numpy-discussion] Another question on reading from binary FORTRAN file
In-Reply-To: <6946b9500903100805t2117d1fcv3ecacc11fd8ed9cd@mail.gmail.com>
References: <6946b9500903100805t2117d1fcv3ecacc11fd8ed9cd@mail.gmail.com>
Message-ID: <20090310113724.9bcc8e0b.michael.s.gilbert@gmail.com>

On Tue, 10 Mar 2009 16:05:06 +0100, Mark Bakker wrote:
> Can I read a character string with fromfile?

Strings are indeed supported by numpy. You're looking for the "numpy.character" data type.

Regards,
Mike

From sturla at molden.no Tue Mar 10 11:55:57 2009
From: sturla at molden.no (Sturla Molden)
Date: Tue, 10 Mar 2009 16:55:57 +0100
Subject: [Numpy-discussion] Another question on reading from binary FORTRAN file
In-Reply-To: <6946b9500903100805t2117d1fcv3ecacc11fd8ed9cd@mail.gmail.com>
References: <6946b9500903100805t2117d1fcv3ecacc11fd8ed9cd@mail.gmail.com>
Message-ID: <49B68D8D.4010503@molden.no>

Mark Bakker wrote:
> I tried
> np.fromfile(fd, np.str, 16)

np.fromfile(fd, np.uint8, 16).tostring()

From david at ar.media.kyoto-u.ac.jp Tue Mar 10 11:47:33 2009
From: david at ar.media.kyoto-u.ac.jp (David Cournapeau)
Date: Wed, 11 Mar 2009 00:47:33 +0900
Subject: [Numpy-discussion] Can we assume both FPU and ALU to have same endianness for numpy ?
Message-ID: <49B68B95.7020703@ar.media.kyoto-u.ac.jp>

Hi,

While working on portable macros for NAN, INF and co, I was wondering why the current version of my code was working (http://projects.scipy.org/numpy/browser/trunk/numpy/core/include/numpy/npy_math.h, first lines). I then realized that IEEE 754 did not impose an endianness, contrary to my belief. The macros would fail if the FPU and the ALU were using a different endianness. Is this still a possibility on the architectures we want to support ?

cheers,
David
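One quick way to see what a given machine actually does is to compare the integer byte order with the byte layout of a known double (a sketch; nothing here is numpy-specific):

import struct
import sys

print sys.byteorder                        # byte order the machine uses for integers
# 1.0 as a native C double; on a machine whose FPU stores doubles
# little-endian, the significant bytes (f0 3f) come last:
print struct.pack('=d', 1.0).encode('hex')   # e.g. '000000000000f03f' on x86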
From faltet at pytables.org Tue Mar 10 12:13:13 2009
From: faltet at pytables.org (Francesc Alted)
Date: Tue, 10 Mar 2009 17:13:13 +0100
Subject: [Numpy-discussion] Can we assume both FPU and ALU to have same endianness for numpy ?
In-Reply-To: <49B68B95.7020703@ar.media.kyoto-u.ac.jp>
References: <49B68B95.7020703@ar.media.kyoto-u.ac.jp>
Message-ID: <200903101713.13264.faltet@pytables.org>

On Tuesday 10 March 2009, David Cournapeau wrote:
> While working on portable macros for NAN, INF and co, I was wondering why the current version of my code was working (http://projects.scipy.org/numpy/browser/trunk/numpy/core/include/numpy/npy_math.h, first lines). I then realized that IEEE 754 did not impose an endianness, contrary to my belief. The macros would fail if the FPU and the ALU were using a different endianness. Is this still a possibility on the architectures we want to support ?

Could you be more explicit? Currently, there is only a part of the processor that does floating point arithmetic. In old systems, there was an FPU located outside of the main processor, but in modern ones, I'd say that the FPU is always integrated in the main ALU. At any rate, having an ALU and FPU with different endianness sounds *very* weird to my ears.

Cheers,
-- Francesc Alted

From markbak at gmail.com Tue Mar 10 12:21:23 2009
From: markbak at gmail.com (Mark Bakker)
Date: Tue, 10 Mar 2009 17:21:23 +0100
Subject: [Numpy-discussion] array 2 string
Message-ID: <6946b9500903100921q5efc020ered1157d264828ab2@mail.gmail.com>

Hello,

I want to convert an array to a string. I like array2string, but it puts these annoying square brackets around the array, like

[[1 2 3],
 [4 5 6]]

Is there any way we can suppress the square brackets and get (this is what is written with savetxt, but I cannot get it to store in a variable)

1 2 3
4 5 6

Thanks, Mark

From david at ar.media.kyoto-u.ac.jp Tue Mar 10 12:05:14 2009
From: david at ar.media.kyoto-u.ac.jp (David Cournapeau)
Date: Wed, 11 Mar 2009 01:05:14 +0900
Subject: [Numpy-discussion] Can we assume both FPU and ALU to have same endianness for numpy ?
In-Reply-To: <200903101713.13264.faltet@pytables.org>
References: <49B68B95.7020703@ar.media.kyoto-u.ac.jp> <200903101713.13264.faltet@pytables.org>
Message-ID: <49B68FBA.9010603@ar.media.kyoto-u.ac.jp>

Francesc Alted wrote:
> Could you be more explicit? Currently, there is only a part of the processor that does floating point arithmetic. In old systems, there was an FPU located outside of the main processor, but in modern ones, I'd say that the FPU is always integrated in the main ALU.

I am asking whether we can assume that both integer and floating point representation uses the same endianness for all architectures we want to support. I thought IEEE 754 imposed everything to be big endian, but then discovered this was wrong.

> At any rate, having an ALU and FPU with different endianness sounds *very* weird to my ears.

According to wikipedia, it is (was?)
possible: http://en.wikipedia.org/wiki/Endianness#Floating-point_and_endianness

Now, whether this happens with current architectures, I don't know. I have tested my code on ppc, x86, x86_64 and sparc, and all of them share the same endianness for ALU and FPU. But maybe some others don't (ARM? ARM is maybe the platform I am the least familiar with, but it is potentially one of the most interesting - with things like ARM-based netbooks and other low-power devices; we can wait a while before idl or matlab are ported to ARM, I think :) ).

cheers,
David

From michael.s.gilbert at gmail.com Tue Mar 10 12:33:12 2009
From: michael.s.gilbert at gmail.com (Michael S. Gilbert)
Date: Tue, 10 Mar 2009 12:33:12 -0400
Subject: [Numpy-discussion] array 2 string
In-Reply-To: <6946b9500903100921q5efc020ered1157d264828ab2@mail.gmail.com>
References: <6946b9500903100921q5efc020ered1157d264828ab2@mail.gmail.com>
Message-ID: <20090310123312.06579170.michael.s.gilbert@gmail.com>

On Tue, 10 Mar 2009 17:21:23 +0100, Mark Bakker wrote:
> Is there any way we can suppress the square brackets and get (this is what is written with savetxt, but I cannot get it to store in a variable)
> 1 2 3
> 4 5 6

This isn't pretty, but:

out = ''
for i in range(0, 2):
    for j in range(0, 3):
        out += str(A[i, j]) + ' '
    out += '\n'
print out

From sccolbert at gmail.com Tue Mar 10 12:35:47 2009
From: sccolbert at gmail.com (Chris Colbert)
Date: Tue, 10 Mar 2009 12:35:47 -0400
Subject: [Numpy-discussion] array 2 string
In-Reply-To: <6946b9500903100921q5efc020ered1157d264828ab2@mail.gmail.com>
References: <6946b9500903100921q5efc020ered1157d264828ab2@mail.gmail.com>
Message-ID: <7f014ea60903100935o2256fe68u1bdafcd4ad895115@mail.gmail.com>

a = array(....)
b = str(a).replace('[', '').replace(']', '')

there's probably a better way, but it works.

From pgmdevlist at gmail.com Tue Mar 10 12:37:23 2009
From: pgmdevlist at gmail.com (Pierre GM)
Date: Tue, 10 Mar 2009 12:37:23 -0400
Subject: [Numpy-discussion] array 2 string
In-Reply-To: <7f014ea60903100935o2256fe68u1bdafcd4ad895115@mail.gmail.com>
References: <6946b9500903100921q5efc020ered1157d264828ab2@mail.gmail.com> <7f014ea60903100935o2256fe68u1bdafcd4ad895115@mail.gmail.com>
Message-ID:

Simplifying the loops from a previous poster:

>>> "\n".join((" ".join((str(_) for _ in x)) for x in a))
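For the archives, the generator-expression version in use (a short sketch):

import numpy as np

a = np.array([[1, 2, 3], [4, 5, 6]])
print "\n".join(" ".join(str(v) for v in row) for row in a)
# 1 2 3
# 4 5 6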
From faltet at pytables.org Tue Mar 10 12:56:13 2009
From: faltet at pytables.org (Francesc Alted)
Date: Tue, 10 Mar 2009 17:56:13 +0100
Subject: [Numpy-discussion] Can we assume both FPU and ALU to have same endianness for numpy ?
In-Reply-To: <49B68FBA.9010603@ar.media.kyoto-u.ac.jp>
References: <49B68B95.7020703@ar.media.kyoto-u.ac.jp> <200903101713.13264.faltet@pytables.org> <49B68FBA.9010603@ar.media.kyoto-u.ac.jp>
Message-ID: <200903101756.13686.faltet@pytables.org>

On Tuesday 10 March 2009, David Cournapeau wrote:
> I am asking whether we can assume that both integer and floating point representation uses the same endianness for all architectures we want to support. I thought IEEE 754 imposed everything to be big endian, but then discovered this was wrong.

Well, provided that most modern processors have the FPU and ALU integrated in the same die, I'd say that it is safe to assume that both must have the same endianness. When/if NumPy starts to support GPUs, then it would probably be a good time to ask again about this ;-)

> According to wikipedia, it is (was?) possible: http://en.wikipedia.org/wiki/Endianness#Floating-point_and_endianness
> Now, whether this happens with current architectures, I don't know. I have tested my code on ppc, x86, x86_64 and sparc, and all of them share the same endianness for ALU and FPU. But maybe some others don't (ARM? ...)

Yeah, exactly :)

-- Francesc Alted

From josef.pktd at gmail.com Tue Mar 10 13:02:35 2009
From: josef.pktd at gmail.com (josef.pktd at gmail.com)
Date: Tue, 10 Mar 2009 12:02:35 -0500
Subject: [Numpy-discussion] array 2 string
In-Reply-To: References: <6946b9500903100921q5efc020ered1157d264828ab2@mail.gmail.com> <7f014ea60903100935o2256fe68u1bdafcd4ad895115@mail.gmail.com>
Message-ID: <1cd32cbb0903101002n2faf6d4fk16658a369c9c8d4c@mail.gmail.com>

On Tue, Mar 10, 2009 at 11:37 AM, Pierre GM wrote:
> Simplifying the loops from a previous poster:
> >>> "\n".join((" ".join((str(_) for _ in x)) for x in a))

or if you want to control the formatting, e.g.

print "\n".join(("%-10.6f " * a.shape[1] % tuple(x) for x in a))

or

print "\n".join(("%6d" * a.shape[1] % tuple(x) for x in a))
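Another route to what the original poster asked for -- the exact savetxt output, captured in a variable -- is to hand savetxt a StringIO buffer instead of a filename (a sketch; this relies on savetxt accepting any file-like object with a write method):

from StringIO import StringIO
import numpy as np

a = np.array([[1, 2, 3], [4, 5, 6]])
buf = StringIO()
np.savetxt(buf, a, fmt='%d')
print repr(buf.getvalue())   # '1 2 3\n4 5 6\n'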
print "\n".join(("%-10.6f "*a.shape[1] % tuple(x) for x in a)) or print "\n".join(("%6d"*a.shape[1] % tuple(x) for x in a)) From david.huard at gmail.com Tue Mar 10 13:28:38 2009 From: david.huard at gmail.com (David Huard) Date: Tue, 10 Mar 2009 13:28:38 -0400 Subject: [Numpy-discussion] Changes and new workflow on Trac In-Reply-To: <9457e7c80903100644m2a3fb2fy3c7c6362e95168a1@mail.gmail.com> References: <9457e7c80903081744n78c936fdl3065285e2fea5ae2@mail.gmail.com> <9457e7c80903090035i3970aebfq8ef683ffe916283a@mail.gmail.com> <91cf711d0903100637m57ce3236v51ddf6ecf9f55ced@mail.gmail.com> <9457e7c80903100644m2a3fb2fy3c7c6362e95168a1@mail.gmail.com> Message-ID: <91cf711d0903101028j7dbd3faet42c6ec41b062a2d7@mail.gmail.com> On Tue, Mar 10, 2009 at 9:44 AM, St?fan van der Walt wrote: > Hi David > > 2009/3/10 David Huard : > > Stefan, > > > > The SciPy site is really nice, but the NumPy site returns a Page Load > Error. > > Which page are you referring to? > > http://projects.scipy.org/numpy > > seems to work fine. > Yes, this one. I deleted the cookies for the page and then it worked. > > Cheers > St?fan > _______________________________________________ > Numpy-discussion mailing list > Numpy-discussion at scipy.org > http://mail.scipy.org/mailman/listinfo/numpy-discussion > -------------- next part -------------- An HTML attachment was scrubbed... URL: From david.huard at gmail.com Tue Mar 10 13:41:03 2009 From: david.huard at gmail.com (David Huard) Date: Tue, 10 Mar 2009 13:41:03 -0400 Subject: [Numpy-discussion] Changes and new workflow on Trac In-Reply-To: <91cf711d0903101028j7dbd3faet42c6ec41b062a2d7@mail.gmail.com> References: <9457e7c80903081744n78c936fdl3065285e2fea5ae2@mail.gmail.com> <9457e7c80903090035i3970aebfq8ef683ffe916283a@mail.gmail.com> <91cf711d0903100637m57ce3236v51ddf6ecf9f55ced@mail.gmail.com> <9457e7c80903100644m2a3fb2fy3c7c6362e95168a1@mail.gmail.com> <91cf711d0903101028j7dbd3faet42c6ec41b062a2d7@mail.gmail.com> Message-ID: <91cf711d0903101041m5f678bf7ice81130ca0981220@mail.gmail.com> On Tue, Mar 10, 2009 at 1:28 PM, David Huard wrote: > > > On Tue, Mar 10, 2009 at 9:44 AM, St?fan van der Walt wrote: > >> Hi David >> >> 2009/3/10 David Huard : >> > Stefan, >> > >> > The SciPy site is really nice, but the NumPy site returns a Page Load >> Error. >> >> Which page are you referring to? >> >> http://projects.scipy.org/numpy >> >> seems to work fine. >> > > Yes, this one. I deleted the cookies for the page and then it worked. > > >> but, if I try to login, I get the same error again. I tried to reset the password, register under a new name, but I always get the following message: The browser has stopped trying to retrieve the requested item. The site is redirecting the request in a way that will never complete. Thanks, David > >> Cheers >> St?fan >> _______________________________________________ >> Numpy-discussion mailing list >> Numpy-discussion at scipy.org >> http://mail.scipy.org/mailman/listinfo/numpy-discussion >> > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From pav at iki.fi Tue Mar 10 14:22:12 2009 From: pav at iki.fi (Pauli Virtanen) Date: Tue, 10 Mar 2009 18:22:12 +0000 (UTC) Subject: [Numpy-discussion] Numpy documentation: status and distribution for 1.3.0 References: <5b8d13220903092327ycdc6bd2od1405141da736f42@mail.gmail.com> Message-ID: Tue, 10 Mar 2009 15:27:32 +0900, David Cournapeau wrote: > For the upcoming 1.3.0 release, I would like to distribute the (built) > documentation in some way. 
> But first, I need to be able to build it :)

Yep, buildability would be a nice feature :)

> What are the exact requirements to build the documentation ? Is sphinx 0.5 enough ? I can't manage to build it on either mac os x or linux:

Sphinx 0.5.1 worksforme, and on two different Linux machines (and Python versions), so I doubt it's somehow specific to my setup.

Sphinx 0.6.dev doesn't work at the moment with autosummary. It's a bit of a moving target, so I haven't made keeping it working a priority.

> dumping search index... Exception occurred:
>   File "/Users/david/local/stow/sphinx.dev/lib/python2.5/site-packages/sphinx/search.py", line 151, in get_descrefs
>     pdict[name] = (fn2index[doc], i)
> KeyError: 'reference/c-api.types-and-structures'
> The full traceback has been saved in /var/folders/b-/b-BC2bPYFouYhoybrvprFE+++TI/-Tmp-/sphinx-err-PKglvL.log, if you want to report the issue to the author. Please also report this if it was a user error, so that a better error message can be provided next time.

This is a Sphinx error I run into from time to time. Usually

    make clean

helps, but I'm not sure what causes this. The error looks a bit like http://bitbucket.org/birkenfeld/sphinx/issue/81/ but I think Ctrl+C is not a requirement for triggering it. Did you get this error from a clean build?

> There are also some errors on mac os x about too many opened files (which can be alleviated by running the make html again, but obviously, that's not great). I don't know if there are easy solutions to that problem,

At which step did this error occur?

-- Pauli Virtanen

From pav at iki.fi Tue Mar 10 14:26:23 2009
From: pav at iki.fi (Pauli Virtanen)
Date: Tue, 10 Mar 2009 18:26:23 +0000 (UTC)
Subject: [Numpy-discussion] Numpy documentation: status and distribution for 1.3.0
References: <5b8d13220903092327ycdc6bd2od1405141da736f42@mail.gmail.com>
Message-ID:

Tue, 10 Mar 2009 18:22:12 +0000, Pauli Virtanen wrote:
> Tue, 10 Mar 2009 15:27:32 +0900, David Cournapeau wrote:
>> For the upcoming 1.3.0 release, I would like to distribute the (built) documentation in some way. But first, I need to be able to build it :)
> Yep, buildability would be a nice feature :)
>> What are the exact requirements to build the documentation ? Is sphinx 0.5 enough ?
> Sphinx 0.5.1 worksforme, and on two different Linux machines (and Python versions), so I doubt it's somehow specific to my setup.

One additional thing that needs to be taken care of is that "import numpy" in the python used to run Sphinx imports the version of numpy you want to generate documentation for.

-- Pauli Virtanen

From Chris.Barker at noaa.gov Tue Mar 10 14:32:32 2009
From: Chris.Barker at noaa.gov (Christopher Barker)
Date: Tue, 10 Mar 2009 11:32:32 -0700
Subject: [Numpy-discussion] Google summer of Code 2009
In-Reply-To: <49B67625.2010706@stsci.edu>
References: <49B57EE3.9050307@creativetrax.com> <49B65E34.3080706@stsci.edu> <88e473830903100615m195a1510j3a809673d72da615@mail.gmail.com> <49B67625.2010706@stsci.edu>
Message-ID: <49B6B240.4030306@noaa.gov>

Michael Droettboom wrote:
> The PSF will do the work of applying to Google -- we can encourage prospective students and mentors to apply through the PSF.

hmmm -- I wonder if that is best -- it would put MPL projects in competition with all other python projects.

My first thought is that a SciPy application would be best -- with SciPy, numpy, MPL, Sage, Cython, etc, it's plenty big, but would have a bit more focus.
As an example, wxPython has been a mentoring organization for the last few years. Not that I'm volunteering to put together the application.... -Chris -- Christopher Barker, Ph.D. Oceanographer Emergency Response Division NOAA/NOS/OR&R (206) 526-6959 voice 7600 Sand Point Way NE (206) 526-6329 fax Seattle, WA 98115 (206) 526-6317 main reception Chris.Barker at noaa.gov From rob.clewley at gmail.com Tue Mar 10 14:39:45 2009 From: rob.clewley at gmail.com (Rob Clewley) Date: Tue, 10 Mar 2009 13:39:45 -0500 Subject: [Numpy-discussion] [JOB] Short-term python programming consultant - funds expire soon! Message-ID: Dear Pythonistas, Our open-source software project (PyDSTool) has money to hire an experienced Python programmer on a short-term, per-task basis as a technical consultant (i.e., no fringe benefits offered). The work can be done remotely and will be paid after the satisfactory completion of the objectives. The work must be completed by the end of April, when the current funds expire. The basic work plan and design documents are already laid out from previous work on these tasks, but the finer details will be negotiable. We plan to pay approximately $2-3k per task, depending on the exact code design and amount of time required. Prospective consultants could be professionals or students but must have proven experience with SWIG and both python and numpy distutils, and be willing to write a short document about the completed work for future maintenance purposes. We have a template for a simple contract and invoices can be relatively coarse-grained. As an open-source project, all contributed code will be BSD licensed as part of our project, although it will retain attribution of your authorship. We have two objectives for this work, which could be satisfied by two individual consultants but more likely by one: (1) This objective involves completing the implementation of automated compilation of C code into DLLs. These DLLs are dynamically created from a user's specification in python. The DLLs can be updated and reloaded if the user changes specifications at the python level. This functionality is crucial to providing fast solving of differential equations using legacy solvers written in C and Fortran. This functionality is relatively independent from the inner workings of our project so there should be minimal overhead to completing this task. We need to complete the integration of an existing code idea for this objective with the main trunk of our project. The existing code works as a stand-alone test for our C legacy solver but is not completed for our Fortran legacy solver (so that numpy's distutils needs to be used instead of python distutils) and needs to be integrated into the current SVN trunk. The design document and implementation for the C solver should be a helpful template for the Fortran solver. (2) We need a setup.py package installer for our project that automatically compiles the static parts of the legacy differential equation solvers during installation according to the directory structure and SWIG/distutils implementation to be completed in objective (1). If the consultant is experienced with writing python package installers, he/she may wish to negotiate working on a more advanced system such as an egg installer. PyDSTool (pydstool.sourceforge.net) is a multi-platform, open-source environment offering a range of library tools and utilities for research in dynamical systems modeling for scientists and engineers. Please contact Dr. 
Rob Clewley (rclewley) at (@) the Department of Mathematics, Georgia State University (gsu.edu) for more information. -- Robert H. Clewley, Ph.D. Assistant Professor Department of Mathematics and Statistics and Neuroscience Institute Georgia State University 720 COE, 30 Pryor St Atlanta, GA 30303, USA tel: 404-413-6420 fax: 404-413-6403 http://www2.gsu.edu/~matrhc http://brainsbehavior.gsu.edu/ From millman at berkeley.edu Tue Mar 10 15:07:32 2009 From: millman at berkeley.edu (Jarrod Millman) Date: Tue, 10 Mar 2009 12:07:32 -0700 Subject: [Numpy-discussion] [SciPy-user] Google summer of Code 2009 In-Reply-To: <49B6B240.4030306@noaa.gov> References: <49B57EE3.9050307@creativetrax.com> <49B65E34.3080706@stsci.edu> <88e473830903100615m195a1510j3a809673d72da615@mail.gmail.com> <49B67625.2010706@stsci.edu> <49B6B240.4030306@noaa.gov> Message-ID: On Tue, Mar 10, 2009 at 11:32 AM, Christopher Barker wrote: > hmmm -- I wonder if that is best -- it would put MPL projects in > competition with all other python projects. > > My first thought is that a SciPy application would be best -- with > SciPy, numpy, MPL, Sage, Cython, etc, it's plenty big, but would have a > bit more focus. I spoke with the SoC coordinator about this last year and was told they would prefer us to stay under the PSF umbrella. This year they plan to sponsor fewer mentoring organizations, I believe (so less chance we would get accepted). Finally, the deadline for submitting an application to be a mentoring organization is Friday (March 13) at 12 noon PDT: http://code.google.com/opensource/gsoc/2009/faqs.html#0_1_mentoring_orgs_52990812492_14255507054617844 From mdroe at stsci.edu Tue Mar 10 15:07:50 2009 From: mdroe at stsci.edu (Michael Droettboom) Date: Tue, 10 Mar 2009 15:07:50 -0400 Subject: [Numpy-discussion] [Matplotlib-users] Google summer of Code 2009 In-Reply-To: <49B6B240.4030306@noaa.gov> References: <49B57EE3.9050307@creativetrax.com> <49B65E34.3080706@stsci.edu> <88e473830903100615m195a1510j3a809673d72da615@mail.gmail.com> <49B67625.2010706@stsci.edu> <49B6B240.4030306@noaa.gov> Message-ID: <49B6BA86.3080500@stsci.edu> Christopher Barker wrote: > Michael Droettboom wrote: > >> The PSF will do the work of applying to Google -- we can encourage >> prospective students and mentors to apply through the PSF. >> > > hmmm -- I wonder if that is best -- it would put MPL projects in > competition with all other python projects. > > My first thought is that a SciPy application would be best -- with > SciPy, numpy, MPL, Sage, Cython, etc, it's plenty big, but would have a > bit more focus. > > As an example, wxPython has been a mentoring organization for the last > few years. > > Not that I'm volunteering to put together the application.... > There's the kicker -- I believe (and I haven't been heavily involved in this, so correct me if I'm wrong) the due date for applications is this Friday. I don't have time to do that either. MPL tagged along with the PSF last year and it worked great (very low administrative overhead for us mentors), until it fell through for unrelated reasons. Cheers, Mike -- Michael Droettboom Science Software Branch Operations and Engineering Division Space Telescope Science Institute Operated by AURA for NASA From charlesr.harris at gmail.com Tue Mar 10 15:08:17 2009 From: charlesr.harris at gmail.com (Charles R Harris) Date: Tue, 10 Mar 2009 13:08:17 -0600 Subject: [Numpy-discussion] What is the logical value of nan? Message-ID: It isn't 0 so it should be True. Any disagreement?... 
Chuck

From nadavh at visionsense.com Tue Mar 10 15:06:02 2009
From: nadavh at visionsense.com (Nadav Horesh)
Date: Tue, 10 Mar 2009 21:06:02 +0200
Subject: [Numpy-discussion] What is the logical value of nan?
References: Message-ID: <710F2847B0018641891D9A216027636029C47A@ex3.envision.co.il>

I think it should be considered as roughly (the numerical) equivalent to None, therefore False.

Nadav

From stefan at sun.ac.za Tue Mar 10 16:07:47 2009
From: stefan at sun.ac.za (Stéfan van der Walt)
Date: Tue, 10 Mar 2009 22:07:47 +0200
Subject: [Numpy-discussion] Changes and new workflow on Trac
In-Reply-To: <91cf711d0903101041m5f678bf7ice81130ca0981220@mail.gmail.com>
References: <9457e7c80903081744n78c936fdl3065285e2fea5ae2@mail.gmail.com> <91cf711d0903101041m5f678bf7ice81130ca0981220@mail.gmail.com>
Message-ID: <9457e7c80903101307y22bda648r2a73c64fd7fb07b9@mail.gmail.com>

2009/3/10 David Huard :
> but, if I try to login, I get the same error again. I tried to reset the password, register under a new name, but I always get the following message:
> The browser has stopped trying to retrieve the requested item. The site is redirecting the request in a way that will never complete.

Does anyone else see the behaviour David is describing?

Stéfan

From charlesr.harris at gmail.com Tue Mar 10 16:11:22 2009
From: charlesr.harris at gmail.com (Charles R Harris)
Date: Tue, 10 Mar 2009 14:11:22 -0600
Subject: [Numpy-discussion] Changes and new workflow on Trac
In-Reply-To: <9457e7c80903101307y22bda648r2a73c64fd7fb07b9@mail.gmail.com>
References: <9457e7c80903081744n78c936fdl3065285e2fea5ae2@mail.gmail.com> <9457e7c80903101307y22bda648r2a73c64fd7fb07b9@mail.gmail.com>
Message-ID:

On Tue, Mar 10, 2009 at 2:07 PM, Stéfan van der Walt wrote:
> Does anyone else see the behaviour David is describing?

I don't. David, what browser are you using?

Chuck
From david.huard at gmail.com Tue Mar 10 16:18:52 2009
From: david.huard at gmail.com (David Huard)
Date: Tue, 10 Mar 2009 16:18:52 -0400
Subject: [Numpy-discussion] Changes and new workflow on Trac
In-Reply-To: References: <9457e7c80903081744n78c936fdl3065285e2fea5ae2@mail.gmail.com> <9457e7c80903101307y22bda648r2a73c64fd7fb07b9@mail.gmail.com>
Message-ID: <91cf711d0903101318x74aa34e0u5bc77787d721c127@mail.gmail.com>

Plain old firefox 3.0.6 on fedora 9.

On Tue, Mar 10, 2009 at 4:11 PM, Charles R Harris wrote:
> I don't. David, what browser are you using?
> Chuck

From pav at iki.fi Tue Mar 10 17:09:45 2009
From: pav at iki.fi (Pauli Virtanen)
Date: Tue, 10 Mar 2009 21:09:45 +0000 (UTC)
Subject: [Numpy-discussion] What is the logical value of nan?
References: Message-ID:

Tue, 10 Mar 2009 13:08:17 -0600, Charles R Harris wrote:
> It isn't 0 so it should be True. Any disagreement?

+1

Nonzero Python object, hence True. Moreover, it's also True in Python:

>>> import numpy as np
>>> type(np.nan)
<type 'float'>
>>> bool(np.nan)
True

IMHO, we should follow Python here, otherwise unnecessary confusion may arise.

-- Pauli Virtanen

From stefan at sun.ac.za Tue Mar 10 17:16:54 2009
From: stefan at sun.ac.za (Stéfan van der Walt)
Date: Tue, 10 Mar 2009 23:16:54 +0200
Subject: [Numpy-discussion] What is the logical value of nan?
In-Reply-To: References: Message-ID: <9457e7c80903101416p3c2dab15w5d9c2436eab07bf3@mail.gmail.com>

2009/3/10 Pauli Virtanen :
> Nonzero Python object, hence True. Moreover, it's also True in Python:

Also in C:

#include <stdio.h>
#include <math.h>

int main() {
    double nan = sqrt(-1);
    printf("%f\n", nan);
    printf("%i\n", nan != 0);
    return 0;
}

$ ./nan
nan
1

Cheers
Stéfan

From stefan at sun.ac.za Tue Mar 10 17:25:49 2009
From: stefan at sun.ac.za (Stéfan van der Walt)
Date: Tue, 10 Mar 2009 23:25:49 +0200
Subject: [Numpy-discussion] Buildbot issues
In-Reply-To: References: Message-ID: <9457e7c80903101425v426da4f1s2b3d6ae0a1c0082c@mail.gmail.com>

Hi Pauli

2009/3/9 Pauli Virtanen :
> There seem to be some problems with the buildbot:
> - It's not building on new commits automatically.
> IIRC this could be fixed by removing some (all?) of buildmaster's cache files, and/or switching to PersistentSVNPoller.

The firewall has still not been opened up. I'm sorry this is taking so long, but I have to go through the appropriate channels for each change to the firewall rules. I look forward to snakebite.org being online; I'll send Trent Nelson an email to get some more info.

Regards
Stéfan
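The elementwise consequence of the choice being discussed in the nan thread, in a short sketch:

import numpy as np

a = np.array([0.0, 1.0, np.nan])
print bool(np.nan)     # True, as above
print a.astype(bool)   # [False  True  True] : nan is nonzero, hence True elementwise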
From charlesr.harris at gmail.com Tue Mar 10 17:49:15 2009
From: charlesr.harris at gmail.com (Charles R Harris)
Date: Tue, 10 Mar 2009 15:49:15 -0600
Subject: [Numpy-discussion] What is the logical value of nan?
In-Reply-To: <9457e7c80903101416p3c2dab15w5d9c2436eab07bf3@mail.gmail.com>
References: <9457e7c80903101416p3c2dab15w5d9c2436eab07bf3@mail.gmail.com>
Message-ID:

On Tue, Mar 10, 2009 at 3:16 PM, Stéfan van der Walt wrote:
> 2009/3/10 Pauli Virtanen :
> > Nonzero Python object, hence True. Moreover, it's also True in Python:
> Also in C:
> #include <stdio.h>
> #include <math.h>
> int main() {
>     double nan = sqrt(-1);
>     printf("%f\n", nan);
>     printf("%i\n", nan != 0);
>     return 0;
> }
> $ ./nan
> nan
> 1

So resolved, it is True.

Chuck

From tim.hochberg at ieee.org Tue Mar 10 18:19:03 2009
From: tim.hochberg at ieee.org (Timothy Hochberg)
Date: Tue, 10 Mar 2009 15:19:03 -0700
Subject: [Numpy-discussion] What is the logical value of nan?
In-Reply-To: References: <9457e7c80903101416p3c2dab15w5d9c2436eab07bf3@mail.gmail.com>
Message-ID:

On Tue, Mar 10, 2009 at 2:49 PM, Charles R Harris wrote:
> So resolved, it is True.

I appear to be late to the party, but IMO it should raise an exception in those cases where it's feasible to do so.

--
.  __
.   |-\
.
.  tim.hochberg at ieee.org
URL: From osman at fuse.net Tue Mar 10 23:13:18 2009 From: osman at fuse.net (Osman) Date: Wed, 11 Mar 2009 03:13:18 +0000 (UTC) Subject: [Numpy-discussion] =?utf-8?q?Automatic_differentiation_=28was_Re?= =?utf-8?q?=3A=09second-order_gradient=29?= References: <9457e7c80810300836m2f5daebauef4de83f983a0999@mail.gmail.com> Message-ID: Hi, I just saw this python package : PyDX which may answer your needs. The original URL is not working, but the svn location exists. http://gr.anu.edu.au/svn/people/sdburton/pydx/doc/user-guide.html svn co http://gr.anu.edu.au/svn/people/sdburton/pydx br -osman From david at ar.media.kyoto-u.ac.jp Wed Mar 11 01:26:34 2009 From: david at ar.media.kyoto-u.ac.jp (David Cournapeau) Date: Wed, 11 Mar 2009 14:26:34 +0900 Subject: [Numpy-discussion] What is the logical value of nan? In-Reply-To: References: <9457e7c80903101416p3c2dab15w5d9c2436eab07bf3@mail.gmail.com> Message-ID: <49B74B8A.9050605@ar.media.kyoto-u.ac.jp> Charles R Harris wrote: > > > On Tue, Mar 10, 2009 at 4:19 PM, Timothy Hochberg > > wrote: > > > > On Tue, Mar 10, 2009 at 2:49 PM, Charles R Harris > > wrote: > > > > On Tue, Mar 10, 2009 at 3:16 PM, St?fan van der Walt > > wrote: > > 2009/3/10 Pauli Virtanen >: > > Nonzero Python object, hence True. Moreover, it's also > True in Python: > > Also in C: > > #include > #include > > int main() { > double nan = sqrt(-1); > printf("%f\n", nan); > printf("%i\n", bool(nan)); > return 0; > } > > $ ./nan > nan > 1 > > > So resolved, it is True. > > > I appear to be late to the party, but IMO it should raise an > exception in those cases where it's feasible to do so. > > > That also seems reasonable to me. There is also the unresolved issue > of whether casting nan to an integer should raise an exception, > currently it is just converted to 0. I think it is reasonable as well - but I am worried about the integration with seterr (not just for this case, but in general in our way toward better handling of this kind of things). I note that matlab convert nan to 0 as well - presumably they did not handle it besides what C guarantees (that is not much in that case I believe): a = nan; int32(a); % gives 0 in C: #define _ISOC99_SOURCE #include #include int main(void) { printf("nan is %f\n", NAN); printf("nan is %d\n", (int)NAN); return 0; } prints nan and 0 respectively - it may well be implementation dependent, but it seems that (int)nan simply gives back the nan binary representation. cheers, David From charlesr.harris at gmail.com Wed Mar 11 01:45:25 2009 From: charlesr.harris at gmail.com (Charles R Harris) Date: Tue, 10 Mar 2009 23:45:25 -0600 Subject: [Numpy-discussion] new numpy error in 1.3.0.dev6618 In-Reply-To: <49B67FF2.1090702@stsci.edu> References: <49B67FF2.1090702@stsci.edu> Message-ID: On Tue, Mar 10, 2009 at 8:57 AM, Christopher Hanley wrote: > ====================================================================== > ERROR: test_float_repr (test_scalarmath.TestRepr) > ---------------------------------------------------------------------- > Traceback (most recent call last): > File > > "/Users/chanley/dev/site-packages/lib/python/numpy/core/tests/test_scalarmath.py", > line 101, in test_float_repr > val2 = t(eval(val_repr)) > File "", line 1, in > NameError: name 'nan' is not defined > > ---------------------------------------------------------------------- > Ran 2018 tests in 10.311s > > FAILED (KNOWNFAIL=1, SKIP=1, errors=1) > > >>> numpy.__version__ > '1.3.0.dev6618' > >>> > > > This was run on a Intel Mac running OS X 10.5.6. 
> There are other problems: >>> np.float64(-0.0) -0.0 >>> np.float128(-0.0) -0 >>> np.float32(-0.0) -0 I suppose this is a side effect of float64 being derived from python float. Now that we have endian (and we should expose it to python somewhere, maybe in info/finfo) it should be possible to simplify the test using honest ieee for various extreme values. I suppose we will also need to distinguish between quad precision (SPARC) and extended precision somewhere. Numpy can't do that at the moment. However, it can be detected at runtime or we could use some architecture specific macros. Chuck -------------- next part -------------- An HTML attachment was scrubbed... URL: From Chris.Barker at noaa.gov Wed Mar 11 01:50:24 2009 From: Chris.Barker at noaa.gov (Christopher Barker) Date: Tue, 10 Mar 2009 22:50:24 -0700 Subject: [Numpy-discussion] What is the logical value of nan? In-Reply-To: References: Message-ID: <49B75120.9030707@noaa.gov> Pauli Virtanen wrote: > Tue, 10 Mar 2009 13:08:17 -0600, Charles R Harris wrote: >> It isn't 0 so it should be True. Any disagreement? - 1 > Nonzero Python object, hence True. Empty sequences are False also. There was a lot of discussion about all this when Guido added Bool types to python. Personally, I don't think zero should be false, I think only False (and maybe None) should be false -- is it so hard to write: "if x != 0:", rather than "if x:"? but there is a LOT of legacy to 0 being False! Anyway, Laura Creighton wrote a great post about it, with this basic thesis: > > Python does not distinguish between True and > > False -- Python makes the distinction between something and nothing. In that context, NaN is nothing, thus False. my $0.02 -Chris From david at ar.media.kyoto-u.ac.jp Wed Mar 11 01:37:29 2009 From: david at ar.media.kyoto-u.ac.jp (David Cournapeau) Date: Wed, 11 Mar 2009 14:37:29 +0900 Subject: [Numpy-discussion] new numpy error in 1.3.0.dev6618 In-Reply-To: References: <49B67FF2.1090702@stsci.edu> Message-ID: <49B74E19.8050600@ar.media.kyoto-u.ac.jp> Charles R Harris wrote: > > > On Tue, Mar 10, 2009 at 8:57 AM, Christopher Hanley > wrote: > > ====================================================================== > ERROR: test_float_repr (test_scalarmath.TestRepr) > ---------------------------------------------------------------------- > Traceback (most recent call last): > File > "/Users/chanley/dev/site-packages/lib/python/numpy/core/tests/test_scalarmath.py", > line 101, in test_float_repr > val2 = t(eval(val_repr)) > File "", line 1, in > NameError: name 'nan' is not defined > > ---------------------------------------------------------------------- > Ran 2018 tests in 10.311s > > FAILED (KNOWNFAIL=1, SKIP=1, errors=1) > > >>> numpy.__version__ > '1.3.0.dev6618' > >>> > > > This was run on a Intel Mac running OS X 10.5.6. > > > There are other problems: > > >>> np.float64(-0.0) > -0.0 > >>> np.float128(-0.0) > -0 > >>> np.float32(-0.0) > -0 This is not a regression at least (I have just tested on 1.2.1). There is still a lot of work we can do to make this better - but this will have to wait for 1.4 I believe, because those are really hard to get right (they depend on both C runtimes and python versions). 
I will look at the mac thing, I wonder why it only appear on this OS, though, cheers, David From charlesr.harris at gmail.com Wed Mar 11 01:59:04 2009 From: charlesr.harris at gmail.com (Charles R Harris) Date: Tue, 10 Mar 2009 23:59:04 -0600 Subject: [Numpy-discussion] new numpy error in 1.3.0.dev6618 In-Reply-To: References: <49B67FF2.1090702@stsci.edu> Message-ID: On Tue, Mar 10, 2009 at 11:45 PM, Charles R Harris < charlesr.harris at gmail.com> wrote: > > > On Tue, Mar 10, 2009 at 8:57 AM, Christopher Hanley wrote: > >> ====================================================================== >> ERROR: test_float_repr (test_scalarmath.TestRepr) >> ---------------------------------------------------------------------- >> Traceback (most recent call last): >> File >> >> "/Users/chanley/dev/site-packages/lib/python/numpy/core/tests/test_scalarmath.py", >> line 101, in test_float_repr >> val2 = t(eval(val_repr)) >> File "", line 1, in >> NameError: name 'nan' is not defined >> >> ---------------------------------------------------------------------- >> Ran 2018 tests in 10.311s >> >> FAILED (KNOWNFAIL=1, SKIP=1, errors=1) >> >> >>> numpy.__version__ >> '1.3.0.dev6618' >> >>> >> >> >> This was run on a Intel Mac running OS X 10.5.6. >> > > There are other problems: > > >>> np.float64(-0.0) > -0.0 > >>> np.float128(-0.0) > -0 > >>> np.float32(-0.0) > -0 > > I suppose this is a side effect of float64 being derived from python float. > > Now that we have endian (and we should expose it to python somewhere, maybe > in info/finfo) it should be possible to simplify the test using honest ieee > for various extreme values. I suppose we will also need to distinguish > between quad precision (SPARC) and extended precision somewhere. Numpy can't > do that at the moment. However, it can be detected at runtime or we could > use some architecture specific macros. > I also wonder if this is related to ticket #1038. That's on SPARC and ppc, but I suspect problems in finfo for longdoubles. If you print out the values that cause this error, one of them doesn't look right to me even on linux. Chuck -------------- next part -------------- An HTML attachment was scrubbed... URL: From pearu at cens.ioc.ee Wed Mar 11 02:52:07 2009 From: pearu at cens.ioc.ee (Pearu Peterson) Date: Wed, 11 Mar 2009 08:52:07 +0200 (EET) Subject: [Numpy-discussion] What is the logical value of nan? In-Reply-To: <49B75120.9030707@noaa.gov> References: <49B75120.9030707@noaa.gov> Message-ID: <38254.62.65.217.106.1236754327.squirrel@cens.ioc.ee> On Wed, March 11, 2009 7:50 am, Christopher Barker wrote: > > > Python does not distinguish between True and > > > False -- Python makes the distinction between something and nothing. > > In that context, NaN is nothing, thus False. Mathematically speaking, NaN is a quantity with undefined value. Closer analysis of a particular case may reveal that it may be some finite number, or an infinity with some direction, or be intrinsically undefined. NaN is something that cannot be defined because its value is not unique. Nothing would be the content of empty set. 
Pearu

From david at ar.media.kyoto-u.ac.jp  Wed Mar 11 02:43:22 2009
From: david at ar.media.kyoto-u.ac.jp (David Cournapeau)
Date: Wed, 11 Mar 2009 15:43:22 +0900
Subject: [Numpy-discussion] Portable macro to get NAN, INF, positive and
	negative zero
Message-ID: <49B75D8A.7050507@ar.media.kyoto-u.ac.jp>

Hi,

For the record, I have just added the following functionalities to
numpy, which may simplify some C code:
    - NPY_NAN/NPY_INFINITY/NPY_PZERO/NPY_NZERO: macros to get nan, inf,
positive and negative zeros. Rationale: some code uses NAN, _get_nan,
etc... NAN is a GNU C extension, INFINITY is not available on many C
compilers. The NPY_ macros are defined from the IEEE754 format, and as
such should be very fast (the values should be inlined).
    - we can now use inline safely in numpy C code: it is defined to
something recognized by the compiler, or nothing if inline is not
supported. It is NOT defined publicly to avoid namespace pollution.
    - NPY_INLINE is a macro which can be used publicly, and has the same
usage as inline.

cheers,

David

From cournape at gmail.com  Wed Mar 11 03:20:47 2009
From: cournape at gmail.com (David Cournapeau)
Date: Wed, 11 Mar 2009 16:20:47 +0900
Subject: [Numpy-discussion] Numpy documentation: status and distribution
	for 1.3.0
In-Reply-To: 
References: <5b8d13220903092327ycdc6bd2od1405141da736f42@mail.gmail.com>
Message-ID: <5b8d13220903110020i29b3b237j1b57a0e42f186f9a@mail.gmail.com>

On Wed, Mar 11, 2009 at 3:22 AM, Pauli Virtanen wrote:
> Tue, 10 Mar 2009 15:27:32 +0900, David Cournapeau wrote:
>> For the upcoming 1.3.0 release, I would like to distribute the (built)
>> documentation in some way. But first, I need to be able to build it :)
>
> Yep, buildability would be a nice feature :)

Yes, indeed. Ideally, I would like the doc to build on as many platforms
as possible.

>> What are the exact requirements to build the documentation ? Is sphinx
>> 0.5 enough ? I can't manage to build it on either mac os x or linux:
>
> Sphinx 0.5.1 worksforme, and on two different Linux machines (and Python
> versions), so I doubt it's somehow specific to my setup.

Yes, it is strange - I can make it work on my workstation, which has the
same distribution as my laptop (where it was failing). I am still unsure
about the possible differences (sphinx version was of course the same).

> Sphinx 0.6.dev doesn't work at the moment with autosummary. It's a bit of
> a moving target, so I haven't made keeping it working a priority.

Sure - I was actually afraid I needed sphinx 0.6.

> This is a Sphinx error I run into from time to time. Usually
>
>         make clean
>
> helps, but I'm not sure what causes this. The error looks a bit like
>
>         http://bitbucket.org/birkenfeld/sphinx/issue/81/
>
> but I think Ctrl+C is not a requirement for triggering it. Did you get
> this error from a clean build?

Ah, that may be part of the problem. I can't make a clean build on mac
os x, because of the "too many opened files" thing. Maybe mac os x has a
ulimit kind of thing I should set up to avoid this.

>> There are also some errors on mac os x about too many opened files
>> (which can be alleviated by running the make html again, but obviously,
>> that's not great). I don't know if there are easy solutions to that
>> problem,
>
> At which step did this error occur?

It occurs at the html output phase ("writing output") - maybe there is a
bug in sphinx with some files which are not closed properly.
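Returning to the npy_math macros announced above, a minimal C sketch of
their intended use; this assumes the usual numpy/npy_math.h install
location (with the NumPy include directory passed to the compiler), and
that npy_isnan/npy_isinf are the matching classification macros in the
same header:

/* Classify the special values exposed by npy_math.h. */
#include <stdio.h>
#include <numpy/npy_math.h>

static NPY_INLINE const char *classify(double x)
{
    if (npy_isnan(x)) return "nan";
    if (npy_isinf(x)) return "inf";
    return "finite";
}

int main(void)
{
    /* Expected output: nan inf finite finite */
    printf("%s %s %s %s\n", classify(NPY_NAN), classify(NPY_INFINITY),
           classify(NPY_PZERO), classify(NPY_NZERO));
    return 0;
}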
Concerning the doc, I would like to add a few notes about the work we did for the C math lib: is it ok to add a chapter to the C reference guide, or is there a more appropriate place ? cheers, David From sebastian.walter at gmail.com Wed Mar 11 06:12:07 2009 From: sebastian.walter at gmail.com (Sebastian Walter) Date: Wed, 11 Mar 2009 11:12:07 +0100 Subject: [Numpy-discussion] Automatic differentiation (was Re: second-order gradient) In-Reply-To: References: <9457e7c80810300836m2f5daebauef4de83f983a0999@mail.gmail.com> Message-ID: There are several possibilities, some of them are listed on http://en.wikipedia.org/wiki/Automatic_differentiation == pycppad http://www.seanet.com/~bradbell/pycppad/index.xml pycppad is a wrapper of the C++ library CppAD ( http://www.coin-or.org/CppAD/ ) the wrapper can do up to second order derivatives very efficiently in the so-called reverse mode of AD requires boost::python == pyadolc http://github.com/b45ch1/pyadolc which is a wrapper for the C++ library ADOL-C ( http://www.math.tu-dresden.de/~adol-c/ ) this can do abritrary degree of derivatives and works quite well with numpy, i.e. you can work with numpy arrays also quite efficient in the so-called reverse mode of AD requires boost::python == ScientificPython http://dirac.cnrs-orleans.fr/ScientificPython/ScientificPythonManual/ can provide first order derivatives. But as far as I understand only first order derivatives of functions f: R -> R and only in the usually not so efficient forward mode of AD pure python == Algopy http://github.com/b45ch1/algopy/tree/master pure python, arbitrary derivatives in forward and reverse mode still quite experimental. Offers also the possibility to differentiate functions that make heavy use of matrix operations. == sympy this is not automatic differentiation but symbolic differentiation but is sometimes useful hope that helps, Sebastian On Wed, Mar 11, 2009 at 4:13 AM, Osman wrote: > Hi, > > I just saw this python package : PyDX ?which may answer your needs. > The original URL is not working, but the svn location exists. > > http://gr.anu.edu.au/svn/people/sdburton/pydx/doc/user-guide.html > > svn co http://gr.anu.edu.au/svn/people/sdburton/pydx > > br > -osman > > _______________________________________________ > Numpy-discussion mailing list > Numpy-discussion at scipy.org > http://mail.scipy.org/mailman/listinfo/numpy-discussion > From pav at iki.fi Wed Mar 11 07:38:06 2009 From: pav at iki.fi (Pauli Virtanen) Date: Wed, 11 Mar 2009 11:38:06 +0000 (UTC) Subject: [Numpy-discussion] Numpy documentation: status and distribution for 1.3.0 References: <5b8d13220903092327ycdc6bd2od1405141da736f42@mail.gmail.com> <5b8d13220903110020i29b3b237j1b57a0e42f186f9a@mail.gmail.com> Message-ID: Wed, 11 Mar 2009 16:20:47 +0900, David Cournapeau wrote: > On Wed, Mar 11, 2009 at 3:22 AM, Pauli Virtanen wrote: [clip] >> Sphinx 0.5.1 worksforme, and on two different Linux machines (and >> Python versions), so I doubt it's somehow specific to my setup. > > Yes, it is strange - I can make it work on my workstation, which has the > same distribution as my laptop (where it was failing). I am still > unsure about the possible differences (sphinx version was of course the > same). Did you check Pythonpath and egg-overriding-pythonpath issues? There's also some magic in the autosummary extension, but it's not *too* black, so I'd be surprised if it was behind these troubles. [clip: Sphinx issue #81] > Ah, that may be part of the problem. 
> I can't make a clean build on mac
> os x, because of the "too many opened files" thing. Maybe mac os x has a
> ulimit kind of thing I should set up to avoid this.

Perhaps it even has ulimit, being a sort of POSIX system?

> Concerning the doc, I would like to add a few notes about the work we
> did for the C math lib: is it ok to add a chapter to the C reference
> guide, or is there a more appropriate place?

C reference guide is probably the correct place. Since the topic is a bit
orthogonal to anything else there currently, I'd suggest creating a new
file c-api.npymath.rst and linking it to the toctree in c-api.rst

-- 
Pauli Virtanen

From sturla at molden.no  Wed Mar 11 08:12:04 2009
From: sturla at molden.no (Sturla Molden)
Date: Wed, 11 Mar 2009 13:12:04 +0100
Subject: [Numpy-discussion] What is the logical value of nan?
In-Reply-To: 
References: 
Message-ID: <49B7AA94.8090509@molden.no>

Charles R Harris wrote:
> It isn't 0 so it should be True. Any disagreement?... Chuck

NaN is not a number equal to 0, so it should be True? NaN is not a number
different from 0, so it should be False? Also see Pearu's comment.

Why not raise an exception when NaN is evaluated in a boolean context?
bool(NaN) has no obvious interpretation, so it should be considered an
error.

Sturla Molden

From sturla at molden.no  Wed Mar 11 08:18:51 2009
From: sturla at molden.no (Sturla Molden)
Date: Wed, 11 Mar 2009 13:18:51 +0100
Subject: [Numpy-discussion] What is the logical value of nan?
In-Reply-To: 
References: <9457e7c80903101416p3c2dab15w5d9c2436eab07bf3@mail.gmail.com>
Message-ID: <49B7AC2B.6010707@molden.no>

Charles R Harris wrote:
>
> #include <stdio.h>
> #include <math.h>
>
> int main() {
>     double nan = sqrt(-1);
>     printf("%f\n", nan);
>     printf("%i\n", bool(nan));
>     return 0;
> }
>
> $ ./nan
> nan
> 1
>
> So resolved, it is True.

Unless specified in the ISO C standard, I'd say this is system and
compiler dependent.

Should NumPy rely on a specific binary representation of NaN?

A related issue is the boolean value of Inf and -Inf.

Sturla Molden

From david at ar.media.kyoto-u.ac.jp  Wed Mar 11 08:48:18 2009
From: david at ar.media.kyoto-u.ac.jp (David Cournapeau)
Date: Wed, 11 Mar 2009 21:48:18 +0900
Subject: [Numpy-discussion] Numpy documentation: status and distribution
	for 1.3.0
In-Reply-To: 
References: <5b8d13220903092327ycdc6bd2od1405141da736f42@mail.gmail.com>
	<5b8d13220903110020i29b3b237j1b57a0e42f186f9a@mail.gmail.com>
Message-ID: <49B7B312.4010203@ar.media.kyoto-u.ac.jp>

Pauli Virtanen wrote:
>
> Did you check Pythonpath and egg-overriding-pythonpath issues? There's
> also some magic in the autosummary extension, but it's not *too* black,
> so I'd be surprised if it was behind these troubles.

I think the problem boils down to building from scratch at once.

> Perhaps it even has ulimit, being a sort of POSIX system?

Yes, and it works. I am not convinced it is not a bug in sphinx, but
increasing the maximum number of open files from 256 to 1000 works.

> C reference guide is probably the correct place. Since the topic is a bit
> orthogonal to anything else there currently, I'd suggest creating a new
> file c-api.npymath.rst and linking it to the toctree in c-api.rst

That's what I ended up doing. Thanks, now I can build the doc on
windows, mac os x and linux,

David

From bsouthey at gmail.com  Wed Mar 11 10:24:31 2009
From: bsouthey at gmail.com (Bruce Southey)
Date: Wed, 11 Mar 2009 09:24:31 -0500
Subject: [Numpy-discussion] What is the logical value of nan?
In-Reply-To: <49B7AC2B.6010707@molden.no> References: <9457e7c80903101416p3c2dab15w5d9c2436eab07bf3@mail.gmail.com> <49B7AC2B.6010707@molden.no> Message-ID: <49B7C99F.7080706@gmail.com> Sturla Molden wrote: > Charles R Harris wrote: > >> #include >> #include >> >> int main() { >> double nan = sqrt(-1); >> printf("%f\n", nan); >> printf("%i\n", bool(nan)); >> return 0; >> } >> >> $ ./nan >> nan >> 1 >> >> >> So resolved, it is True. >> > Unless specified in the ISO C standard, I'd say this is system and > compiler dependent. > > Should NumPy rely on a specific binary representation of NaN? > > A related issue is the boolean value of Inf and -Inf. > > Sturla Molden > _______________________________________________ > Numpy-discussion mailing list > Numpy-discussion at scipy.org > http://mail.scipy.org/mailman/listinfo/numpy-discussion > This is one link that shows the different representation of these numbers in IEEE 754: http://www.psc.edu/general/software/packages/ieee/ieee.php It is a little clearer than Wikipedia: http://en.wikipedia.org/wiki/IEEE_754-1985 Numpy's nan/NaN/NAN, inf/Inf/PINF, and NINF are not nothing so not zero. Also, I think that conversion to an integer should be an error for all of these because there is no equivalent representation of these floating point numbers as integers and I think that using zero for NaN is wrong. Now for the other two special representations, I would presume that Numpy's PZERO (positive zero) and NZERO (negative zero) are treated as nothing. Conversion to integer for these should be zero. However, I noticed that the standard has just been revised that may eventually influence Numpy: http://en.wikipedia.org/wiki/IEEE_754r http://en.wikipedia.org/wiki/IEEE_754-2008 Note this defines the min/max behavior: * |min(x,NaN) = min(NaN,x) = x| * |max(x,NaN) = max(NaN,x) = x| Bruce From mforbes at physics.ubc.ca Wed Mar 11 10:28:14 2009 From: mforbes at physics.ubc.ca (Michael McNeil Forbes) Date: Wed, 11 Mar 2009 08:28:14 -0600 Subject: [Numpy-discussion] array 2 string In-Reply-To: <20090310123312.06579170.michael.s.gilbert@gmail.com> References: <6946b9500903100921q5efc020ered1157d264828ab2@mail.gmail.com> <20090310123312.06579170.michael.s.gilbert@gmail.com> Message-ID: On 10 Mar 2009, at 10:33 AM, Michael S. Gilbert wrote: > On Tue, 10 Mar 2009 17:21:23 +0100, Mark Bakker wrote: >> Hello, >> >> I want to convert an array to a string. >> >> I like array2string, but it puts these annoying square brackets >> around >> the array, like >> >> [[1 2 3], >> [3 4 5]] >> >> Anyway we can suppress the square brackets and get (this is what is >> written with savetxt, but I cannot get it to store in a variable) >> 1 2 3 >> 4 5 6 How about using StringIO: >>> a = np.array([[1,2,3],[4,5,6]]) >>> f = StringIO() >>> savetxt(f, a, fmt="%i") >>> s = f.getvalue() >>> f.close() >>> print s 1 2 3 4 5 6 Michael. From lou_boog2000 at yahoo.com Wed Mar 11 11:00:27 2009 From: lou_boog2000 at yahoo.com (Lou Pecora) Date: Wed, 11 Mar 2009 08:00:27 -0700 (PDT) Subject: [Numpy-discussion] What is the logical value of nan? In-Reply-To: <49B7C99F.7080706@gmail.com> Message-ID: <812524.40249.qm@web34401.mail.mud.yahoo.com> --- On Wed, 3/11/09, Bruce Southey wrote: > From: Bruce Southey > Subject: Re: [Numpy-discussion] What is the logical value of nan? 
> To: "Discussion of Numerical Python" > Date: Wednesday, March 11, 2009, 10:24 AM > > This is one link that shows the different representation of > these > numbers in IEEE 754: > http://www.psc.edu/general/software/packages/ieee/ieee.php > It is a little clearer than Wikipedia: > http://en.wikipedia.org/wiki/IEEE_754-1985 Thanks. Useful sites. > Numpy's nan/NaN/NAN, inf/Inf/PINF, and NINF are not > nothing so not zero. Agreed. +1 > Also, I think that conversion to an integer should be an > error for all of these because there is no equivalent > representation of these floating > point numbers as integers and I think that using zero for > NaN is wrong. Another +1 > Now for the other two special representations, I would > presume that > Numpy's PZERO (positive zero) and NZERO (negative zero) > are treated as > nothing. Conversion to integer for these should be zero. Yet another +1. -- Lou Pecora, my views are my own. From charlesr.harris at gmail.com Wed Mar 11 11:02:03 2009 From: charlesr.harris at gmail.com (Charles R Harris) Date: Wed, 11 Mar 2009 09:02:03 -0600 Subject: [Numpy-discussion] Portable macro to get NAN, INF, positive and negative zero In-Reply-To: <49B75D8A.7050507@ar.media.kyoto-u.ac.jp> References: <49B75D8A.7050507@ar.media.kyoto-u.ac.jp> Message-ID: On Wed, Mar 11, 2009 at 12:43 AM, David Cournapeau < david at ar.media.kyoto-u.ac.jp> wrote: > Hi, > > For the record, I have just added the following functionalities to > numpy, which may simplify some C code: > - NPY_NAN/NPY_INFINITY/NPY_PZERO/NPY_NZERO: macros to get nan, inf, > positive and negative zeros. Rationale: some code use NAN, _get_nan, > etc... NAN is a GNU C extension, INFINITY is not available on many C > compilers. The NPY_ macros are defined from the IEEE754 format, and as > such should be very fast (the values should be inlined). > - we can now use inline safely in numpy C code: it is defined to > something recognized by the compiler or nothing if inline is not > supported. It is NOT defined publicly to avoid namespace pollution. > - NPY_INLINE is a macro which can be used publicly, and has the same > usage as inline. > Great. This should be helpful. Chuck -------------- next part -------------- An HTML attachment was scrubbed... URL: From charlesr.harris at gmail.com Wed Mar 11 11:16:42 2009 From: charlesr.harris at gmail.com (Charles R Harris) Date: Wed, 11 Mar 2009 09:16:42 -0600 Subject: [Numpy-discussion] What is the logical value of nan? In-Reply-To: <49B7C99F.7080706@gmail.com> References: <9457e7c80903101416p3c2dab15w5d9c2436eab07bf3@mail.gmail.com> <49B7AC2B.6010707@molden.no> <49B7C99F.7080706@gmail.com> Message-ID: On Wed, Mar 11, 2009 at 8:24 AM, Bruce Southey wrote: > Sturla Molden wrote: > > Charles R Harris wrote: > > > >> #include > >> #include > >> > >> int main() { > >> double nan = sqrt(-1); > >> printf("%f\n", nan); > >> printf("%i\n", bool(nan)); > >> return 0; > >> } > >> > >> $ ./nan > >> nan > >> 1 > >> > >> > >> So resolved, it is True. > >> > > Unless specified in the ISO C standard, I'd say this is system and > > compiler dependent. > > > > Should NumPy rely on a specific binary representation of NaN? > > > > A related issue is the boolean value of Inf and -Inf. 
> > > Sturla Molden
> > _______________________________________________
> > Numpy-discussion mailing list
> > Numpy-discussion at scipy.org
> > http://mail.scipy.org/mailman/listinfo/numpy-discussion
> >
> This is one link that shows the different representation of these
> numbers in IEEE 754:
> http://www.psc.edu/general/software/packages/ieee/ieee.php
> It is a little clearer than Wikipedia:
> http://en.wikipedia.org/wiki/IEEE_754-1985
>
> Numpy's nan/NaN/NAN, inf/Inf/PINF, and NINF are not nothing so not zero.
> Also, I think that conversion to an integer should be an error for all
> of these because there is no equivalent representation of these floating
> point numbers as integers and I think that using zero for NaN is wrong.
>
> Now for the other two special representations, I would presume that
> Numpy's PZERO (positive zero) and NZERO (negative zero) are treated as
> nothing. Conversion to integer for these should be zero.
>
> However, I noticed that the standard has just been revised that may
> eventually influence Numpy:
> http://en.wikipedia.org/wiki/IEEE_754r
> http://en.wikipedia.org/wiki/IEEE_754-2008
>
> Note this defines the min/max behavior:
>
>     * |min(x,NaN) = min(NaN,x) = x|
>     * |max(x,NaN) = max(NaN,x) = x|

We have this behavior in numpy with the fmax/fmin functions.

Chuck

From charlesr.harris at gmail.com  Wed Mar 11 11:39:05 2009
From: charlesr.harris at gmail.com (Charles R Harris)
Date: Wed, 11 Mar 2009 09:39:05 -0600
Subject: [Numpy-discussion] Don't call e Euler's constant.
Message-ID: 

Traditionally, Euler's constant is 0.57721 56649 01532 86060 65120 90082
40243 10421 59335 93992... see wikipedia.

The constant e is sometimes called Euler's number -- shouldn't that be
Napier or Bernoulli in a pc world -- but I think e is more universally
understood and the distinction between "constant" and "number" is rather
obscure.

Chuck

From sccolbert at gmail.com  Wed Mar 11 11:54:44 2009
From: sccolbert at gmail.com (Chris Colbert)
Date: Wed, 11 Mar 2009 11:54:44 -0400
Subject: [Numpy-discussion] Don't call e Euler's constant.
In-Reply-To: 
References: 
Message-ID: <7f014ea60903110854m6e1b1e3bn57ea9009fa485696@mail.gmail.com>

As long as we all agree that e has a value of 2.71828 18284 59045 23536...,
it's just a matter of semantics. The constant you reference is denoted by
the lowercase Greek letter gamma.

Chris

On Wed, Mar 11, 2009 at 11:39 AM, Charles R Harris
<charlesr.harris at gmail.com> wrote:

> Traditionally, Euler's constant is 0.57721 56649 01532 86060 65120 90082
> 40243 10421 59335 93992... see wikipedia.
> The constant e is sometimes called Euler's number -- shouldn't that be
> Napier or Bernoulli in a pc world -- but I think e is more universally
> understood and the distinction between "constant" and "number" is rather
> obscure.
>
> Chuck
>
> _______________________________________________
> Numpy-discussion mailing list
> Numpy-discussion at scipy.org
> http://mail.scipy.org/mailman/listinfo/numpy-discussion
>

From cournape at gmail.com  Wed Mar 11 12:58:23 2009
From: cournape at gmail.com (David Cournapeau)
Date: Thu, 12 Mar 2009 01:58:23 +0900
Subject: [Numpy-discussion] Don't call e Euler's constant.
In-Reply-To: References: Message-ID: <5b8d13220903110958t31c5d7d1h50afffa32f196ea7@mail.gmail.com> On Thu, Mar 12, 2009 at 12:39 AM, Charles R Harris wrote: > Traditionally, Euler's constant is 0.57721 56649 01532 86060 65120 90082 > 40243 10421 59335 93992... You're right, Euler constant is generally gamma. Euler number is not that great either (euler numbers in geometry for example), so I just renamed it to base of natural logarithm, David From Chris.Barker at noaa.gov Wed Mar 11 13:06:52 2009 From: Chris.Barker at noaa.gov (Christopher Barker) Date: Wed, 11 Mar 2009 10:06:52 -0700 Subject: [Numpy-discussion] What is the logical value of nan? In-Reply-To: <49B7AA94.8090509@molden.no> References: <49B7AA94.8090509@molden.no> Message-ID: <49B7EFAC.1040509@noaa.gov> Sturla Molden wrote: > Why not raise an exception when NaN is evaluated in a boolean > context? bool(NaN) has no obvious interpretation, so it should be > considered an error. +1 Though there is clearly a lot of legacy around this, so maybe it's best to follow C convention (sigh). Bruce Southey wrote: > Also, I think that conversion to an integer should be an error for > all of these because there is no equivalent representation of these > floating point numbers as integers and I think that using zero for > NaN is wrong. +1 A silent wrong conversion is MUCH worse than an exception! As for MATLAB, it was entirely doubles for a long time -- I don't think it's a good example of well thought-out float<->integer interactions. > Now for the other two special representations, I would presume that > Numpy's PZERO (positive zero) and NZERO (negative zero) are treated > as nothing. Conversion to integer for these should be zero. +1 > Note this defines the min/max behavior: > > * |min(x,NaN) = min(NaN,x) = x| * |max(x,NaN) = max(NaN,x) = x| nice -- it's nice to have these defined -- of course, who knows how long it will be (never?) before compilers/libraries support this. -Chris -- Christopher Barker, Ph.D. Oceanographer Emergency Response Division NOAA/NOS/OR&R (206) 526-6959 voice 7600 Sand Point Way NE (206) 526-6329 fax Seattle, WA 98115 (206) 526-6317 main reception Chris.Barker at noaa.gov From charlesr.harris at gmail.com Wed Mar 11 13:41:07 2009 From: charlesr.harris at gmail.com (Charles R Harris) Date: Wed, 11 Mar 2009 11:41:07 -0600 Subject: [Numpy-discussion] What is the logical value of nan? In-Reply-To: <49B7EFAC.1040509@noaa.gov> References: <49B7AA94.8090509@molden.no> <49B7EFAC.1040509@noaa.gov> Message-ID: On Wed, Mar 11, 2009 at 11:06 AM, Christopher Barker wrote: > Sturla Molden wrote: > > Why not raise an exception when NaN is evaluated in a boolean > > context? bool(NaN) has no obvious interpretation, so it should be > > considered an error. > > +1 > > Though there is clearly a lot of legacy around this, so maybe it's best > to follow C convention (sigh). > > Bruce Southey wrote: > > Also, I think that conversion to an integer should be an error for > > all of these because there is no equivalent representation of these > > floating point numbers as integers and I think that using zero for > > NaN is wrong. > > +1 > > A silent wrong conversion is MUCH worse than an exception! > > As for MATLAB, it was entirely doubles for a long time -- I don't think > it's a good example of well thought-out float<->integer interactions. > > > > Now for the other two special representations, I would presume that > > Numpy's PZERO (positive zero) and NZERO (negative zero) are treated > > as nothing. 
Conversion to integer for these should be zero. > > +1 > > > Note this defines the min/max behavior: > > > > * |min(x,NaN) = min(NaN,x) = x| * |max(x,NaN) = max(NaN,x) = x| > > nice -- it's nice to have these defined -- of course, who knows how long > it will be (never?) before compilers/libraries support this. > Raising exceptions in ufuncs is going to take some work as the inner loops are void functions without any means of indicating an error. Exceptions also need to be thread safe. So I am not opposed but it is something for the future. Casting seems to be implemented in arraytypes.inc.src as void functions also without provision for errors. I would also like to see casting implemented as ufuncs but that is a separate discussion. Chuck -------------- next part -------------- An HTML attachment was scrubbed... URL: From charlesr.harris at gmail.com Wed Mar 11 13:57:44 2009 From: charlesr.harris at gmail.com (Charles R Harris) Date: Wed, 11 Mar 2009 11:57:44 -0600 Subject: [Numpy-discussion] What is the logical value of nan? In-Reply-To: References: <49B7AA94.8090509@molden.no> <49B7EFAC.1040509@noaa.gov> Message-ID: On Wed, Mar 11, 2009 at 11:41 AM, Charles R Harris < charlesr.harris at gmail.com> wrote: > > > On Wed, Mar 11, 2009 at 11:06 AM, Christopher Barker < > Chris.Barker at noaa.gov> wrote: > >> Sturla Molden wrote: >> > Why not raise an exception when NaN is evaluated in a boolean >> > context? bool(NaN) has no obvious interpretation, so it should be >> > considered an error. >> >> +1 >> >> Though there is clearly a lot of legacy around this, so maybe it's best >> to follow C convention (sigh). >> >> Bruce Southey wrote: >> > Also, I think that conversion to an integer should be an error for >> > all of these because there is no equivalent representation of these >> > floating point numbers as integers and I think that using zero for >> > NaN is wrong. >> >> +1 >> >> A silent wrong conversion is MUCH worse than an exception! >> >> As for MATLAB, it was entirely doubles for a long time -- I don't think >> it's a good example of well thought-out float<->integer interactions. >> >> >> > Now for the other two special representations, I would presume that >> > Numpy's PZERO (positive zero) and NZERO (negative zero) are treated >> > as nothing. Conversion to integer for these should be zero. >> >> +1 >> >> > Note this defines the min/max behavior: >> > >> > * |min(x,NaN) = min(NaN,x) = x| * |max(x,NaN) = max(NaN,x) = x| >> >> nice -- it's nice to have these defined -- of course, who knows how long >> it will be (never?) before compilers/libraries support this. >> > > Raising exceptions in ufuncs is going to take some work as the inner loops > are void functions without any means of indicating an error. Exceptions > also need to be thread safe. So I am not opposed but it is something for the > future. > > Casting seems to be implemented in arraytypes.inc.src as void functions > also without provision for errors. I would also like to see casting > implemented as ufuncs but that is a separate discussion. > Hmm, I don't really want to see type conversions go through all the calling machinery of ufuncs. Let's say I'd like to see the kind of inner loops used for ufuncs also used for type conversions. So I would see introducing exceptions starting with at least two steps: 1) Have ufunc type loops return an integer error code. This should just involve replacing void with int and returning 0 in the current loops. 2) Change the type conversion loops to ufunc type loops. 
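A hypothetical sketch of step 1, purely to illustrate the proposal: the
loop name and error convention here are invented for the example, and
today's inner loops really are void functions:

/* Hypothetical int-returning inner loop. Current NumPy loops have the
 * signature
 *     void loop(char **args, npy_intp *dimensions, npy_intp *steps,
 *               void *func);
 * only the int return value and the error path are new in this sketch. */
static int
DOUBLE_to_INT_loop(char **args, npy_intp *dimensions, npy_intp *steps,
                   void *func)
{
    npy_intp i, n = dimensions[0];
    char *in = args[0], *out = args[1];

    for (i = 0; i < n; i++, in += steps[0], out += steps[1]) {
        double x = *(double *)in;
        if (npy_isnan(x)) {
            return -1;   /* no integer equivalent of NaN: report an error */
        }
        *(int *)out = (int)x;
    }
    return 0;            /* success */
}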
Chuck

From rmay31 at gmail.com  Wed Mar 11 14:15:49 2009
From: rmay31 at gmail.com (Ryan May)
Date: Wed, 11 Mar 2009 13:15:49 -0500
Subject: [Numpy-discussion] Intel MKL on Core2 system
Message-ID: 

Hi,

I noticed the following in numpy/distutils/system_info.py while trying to
get numpy to build against MKL:

            if cpu.is_Itanium():
                plt = '64'
                #l = 'mkl_ipf'
            elif cpu.is_Xeon():
                plt = 'em64t'
                #l = 'mkl_em64t'
            else:
                plt = '32'
                #l = 'mkl_ia32'

So in the autodetection for MKL, the only way to get plt (platform) set to
'em64t' is to test true for a Xeon. This function returns false on my Core2
Duo system, even though the platform is very much 'em64t'. I think that
check should instead read:

elif cpu.is_Xeon() or cpu.is_Core2():

Thoughts?

Ryan

-- 
Ryan May
Graduate Research Assistant
School of Meteorology
University of Oklahoma
Sent from: Norman Oklahoma United States.

From sturla at molden.no  Wed Mar 11 14:19:01 2009
From: sturla at molden.no (Sturla Molden)
Date: Wed, 11 Mar 2009 19:19:01 +0100
Subject: [Numpy-discussion] What is the logical value of nan?
In-Reply-To: 
References: <49B7AA94.8090509@molden.no> <49B7EFAC.1040509@noaa.gov>
Message-ID: <49B80095.6030206@molden.no>

Charles R Harris wrote:
>
> Raising exceptions in ufuncs is going to take some work as the inner
> loops are void functions without any means of indicating an error.
> Exceptions also need to be thread safe. So I am not opposed but it is
> something for the future.

I just saw David Cournapeau's post regarding a NPY_NAN macro. As it uses
the IEEE754 binary format, at least NPY_NAN should be True in a boolean
context. So bool(nan) is True then.

And that's what happens now on my computer as well:

>>> bool(nan)
True

I don't like Python exceptions raised inside ufuncs. In the future NumPy
might add OpenMP support to ufuncs (multicore CPUs are getting common),
and Python exceptions would prevent that, or at least make it difficult
(cf. the GIL).

S.M.

From faltet at pytables.org  Wed Mar 11 14:34:00 2009
From: faltet at pytables.org (Francesc Alted)
Date: Wed, 11 Mar 2009 19:34:00 +0100
Subject: [Numpy-discussion] Intel MKL on Core2 system
In-Reply-To: 
References: 
Message-ID: <200903111934.00897.faltet@pytables.org>

A Wednesday 11 March 2009, Ryan May escrigué:
> Hi,
>
> I noticed the following in numpy/distutils/system_info.py while
> trying to get numpy to build against MKL:
>
>             if cpu.is_Itanium():
>                 plt = '64'
>                 #l = 'mkl_ipf'
>             elif cpu.is_Xeon():
>                 plt = 'em64t'
>                 #l = 'mkl_em64t'
>             else:
>                 plt = '32'
>                 #l = 'mkl_ia32'
>
> So in the autodetection for MKL, the only way to get plt (platform)
> set to 'em64t' is to test true for a Xeon. This function returns
> false on my Core2 Duo system, even though the platform is very much
> 'em64t'. I think that check should instead read:
>
> elif cpu.is_Xeon() or cpu.is_Core2():
>
> Thoughts?

This may help you to see the developer's view on this subject:

http://projects.scipy.org/numpy/ticket/994

Cheers,

-- 
Francesc Alted

From charlesr.harris at gmail.com  Wed Mar 11 14:36:58 2009
From: charlesr.harris at gmail.com (Charles R Harris)
Date: Wed, 11 Mar 2009 12:36:58 -0600
Subject: [Numpy-discussion] What is the logical value of nan?
In-Reply-To: <49B80095.6030206@molden.no>
References: <49B7AA94.8090509@molden.no> <49B7EFAC.1040509@noaa.gov>
	<49B80095.6030206@molden.no>
Message-ID: 

On Wed, Mar 11, 2009 at 12:19 PM, Sturla Molden wrote:

> Charles R Harris wrote:
> >
> > Raising exceptions in ufuncs is going to take some work as the inner
> > loops are void functions without any means of indicating an error.
> > Exceptions also need to be thread safe. So I am not opposed but it is
> > something for the future.
>
> I just saw David Cournapeau's post regarding a NPY_NAN macro. As it uses
> the IEEE754 binary format, at least NPY_NAN should be True in a boolean
> context. So bool(nan) is True then.
>
> And that's what happens now on my computer as well:
>
> >>> bool(nan)
> True
>
> I don't like Python exceptions raised inside ufuncs. In the future NumPy
> might add OpenMP support to ufuncs (multicore CPUs are getting common),
> and Python exceptions would prevent that, or at least make it difficult
> (cf. the GIL).

I think numpy needs some way to raise these errors, but error handling is
always tricky. Do you have any suggestions as to how you would like to do
it? I was thinking that adding an int return to the loops would provide
some way of indicating errors without specifying how they were to be
handled at this point.

Chuck

From cournape at gmail.com  Wed Mar 11 14:41:41 2009
From: cournape at gmail.com (David Cournapeau)
Date: Thu, 12 Mar 2009 03:41:41 +0900
Subject: [Numpy-discussion] Intel MKL on Core2 system
In-Reply-To: 
References: 
Message-ID: <5b8d13220903111141rbe68cf8ide161a92272358c1@mail.gmail.com>

On Thu, Mar 12, 2009 at 3:15 AM, Ryan May wrote:
> Hi,
>
> I noticed the following in numpy/distutils/system_info.py while trying to
> get numpy to build against MKL:
>
>             if cpu.is_Itanium():
>                 plt = '64'
>                 #l = 'mkl_ipf'
>             elif cpu.is_Xeon():
>                 plt = 'em64t'
>                 #l = 'mkl_em64t'
>             else:
>                 plt = '32'
>                 #l = 'mkl_ia32'
>
> So in the autodetection for MKL, the only way to get plt (platform) set to
> 'em64t' is to test true for a Xeon.  This function returns false on my
> Core2 Duo system, even though the platform is very much 'em64t'.  I think
> that check should instead read:
>
> elif cpu.is_Xeon() or cpu.is_Core2():
>
> Thoughts?

I think this whole code is inherently fragile. A much better solution
is to make the build process customization easier and more
straightforward. Auto-detection will never work well.

David

From rmay31 at gmail.com  Wed Mar 11 14:55:02 2009
From: rmay31 at gmail.com (Ryan May)
Date: Wed, 11 Mar 2009 13:55:02 -0500
Subject: [Numpy-discussion] Intel MKL on Core2 system
In-Reply-To: <5b8d13220903111141rbe68cf8ide161a92272358c1@mail.gmail.com>
References: <5b8d13220903111141rbe68cf8ide161a92272358c1@mail.gmail.com>
Message-ID: 

On Wed, Mar 11, 2009 at 1:41 PM, David Cournapeau wrote:
> On Thu, Mar 12, 2009 at 3:15 AM, Ryan May wrote:
> > Hi,
> >
> > I noticed the following in numpy/distutils/system_info.py while trying to
> > get numpy to build against MKL:
> >
> >             if cpu.is_Itanium():
> >                 plt = '64'
> >                 #l = 'mkl_ipf'
> >             elif cpu.is_Xeon():
> >                 plt = 'em64t'
> >                 #l = 'mkl_em64t'
> >             else:
> >                 plt = '32'
> >                 #l = 'mkl_ia32'
> >
> > So in the autodetection for MKL, the only way to get plt (platform) set
> > to 'em64t' is to test true for a Xeon.
> > This function returns false on my Core2
> > Duo system, even though the platform is very much 'em64t'. I think that
> > check should instead read:
> >
> > elif cpu.is_Xeon() or cpu.is_Core2():
> >
> > Thoughts?
>
> I think this whole code is inherently fragile. A much better solution
> is to make the build process customization easier and more
> straightforward. Auto-detection will never work well.
>
> David
> _______________________________________________
> Numpy-discussion mailing list
> Numpy-discussion at scipy.org
> http://mail.scipy.org/mailman/listinfo/numpy-discussion

Fair enough.

Ryan

-- 
Ryan May
Graduate Research Assistant
School of Meteorology
University of Oklahoma

From rmay31 at gmail.com  Wed Mar 11 14:56:19 2009
From: rmay31 at gmail.com (Ryan May)
Date: Wed, 11 Mar 2009 13:56:19 -0500
Subject: [Numpy-discussion] Intel MKL on Core2 system
In-Reply-To: <200903111934.00897.faltet@pytables.org>
References: <200903111934.00897.faltet@pytables.org>
Message-ID: 

On Wed, Mar 11, 2009 at 1:34 PM, Francesc Alted wrote:
> A Wednesday 11 March 2009, Ryan May escrigué:
> > Hi,
> >
> > I noticed the following in numpy/distutils/system_info.py while
> > trying to get numpy to build against MKL:
> >
> >             if cpu.is_Itanium():
> >                 plt = '64'
> >                 #l = 'mkl_ipf'
> >             elif cpu.is_Xeon():
> >                 plt = 'em64t'
> >                 #l = 'mkl_em64t'
> >             else:
> >                 plt = '32'
> >                 #l = 'mkl_ia32'
> >
> > So in the autodetection for MKL, the only way to get plt (platform)
> > set to 'em64t' is to test true for a Xeon. This function returns
> > false on my Core2 Duo system, even though the platform is very much
> > 'em64t'. I think that check should instead read:
> >
> > elif cpu.is_Xeon() or cpu.is_Core2():
> >
> > Thoughts?
>
> This may help you to see the developer's view on this subject:
>
> http://projects.scipy.org/numpy/ticket/994
>
> Cheers,
>
> --
> Francesc Alted
> _______________________________________________
> Numpy-discussion mailing list
> Numpy-discussion at scipy.org
> http://mail.scipy.org/mailman/listinfo/numpy-discussion

You know, I knew this sounded familiar. If you regularly build against
MKL, can you send me your site.cfg? I've had a lot more success getting
the build to work using the autodetection than the blas_opt and
lapack_opt sections. Since the autodetection doesn't seem like the
accepted way, I'd love to see how to get the accepted way to actually
work.
:)

Not that I'm an expert in that sort of black magic, but the next worked
fine for me and numexpr:

[mkl]

# Example for using MKL 10.0
#library_dirs = /opt/intel/mkl/10.0.2.018/lib/em64t
#include_dirs = /opt/intel/mkl/10.0.2.018/include

# Example for the MKL included in Intel C 11.0 compiler
library_dirs = /opt/intel/Compiler/11.0/074/mkl/lib/em64t/
include_dirs = /opt/intel/Compiler/11.0/074/mkl/include/

##the following set of libraries is suited for compilation
##with the GNU C compiler (gcc). Refer to the MKL documentation
##if you use other compilers (e.g., Intel C compiler)
mkl_libs = mkl_gf_lp64, mkl_gnu_thread, mkl_core

HTH,

-- 
Francesc Alted

From rmay31 at gmail.com  Wed Mar 11 15:24:57 2009
From: rmay31 at gmail.com (Ryan May)
Date: Wed, 11 Mar 2009 14:24:57 -0500
Subject: [Numpy-discussion] Intel MKL on Core2 system
In-Reply-To: <200903112020.18209.faltet@pytables.org>
References: <200903111934.00897.faltet@pytables.org>
	<200903112020.18209.faltet@pytables.org>
Message-ID: 

On Wed, Mar 11, 2009 at 2:20 PM, Francesc Alted wrote:
> A Wednesday 11 March 2009, Ryan May escrigué:
> > You know, I knew this sounded familiar. If you regularly build
> > against MKL, can you send me your site.cfg? I've had a lot more
> > success getting the build to work using the autodetection than the
> > blas_opt and lapack_opt sections. Since the autodetection doesn't
> > seem like the accepted way, I'd love to see how to get the accepted
> > way to actually work. :)
>
> Not that I'm an expert in that sort of black magic, but the next worked
> fine for me and numexpr:
>
> [mkl]
>
> # Example for using MKL 10.0
> #library_dirs = /opt/intel/mkl/10.0.2.018/lib/em64t
> #include_dirs = /opt/intel/mkl/10.0.2.018/include
>
> # Example for the MKL included in Intel C 11.0 compiler
> library_dirs = /opt/intel/Compiler/11.0/074/mkl/lib/em64t/
> include_dirs = /opt/intel/Compiler/11.0/074/mkl/include/
>
> ##the following set of libraries is suited for compilation
> ##with the GNU C compiler (gcc). Refer to the MKL documentation
> ##if you use other compilers (e.g., Intel C compiler)
> mkl_libs = mkl_gf_lp64, mkl_gnu_thread, mkl_core

Thanks. That's actually pretty close to what I had. I was actually
thinking that you were using only blas_opt and lapack_opt, since
supposedly the [mkl] style section is deprecated. Thus far, I cannot
get these to work with MKL.

Ryan

-- 
Ryan May
Graduate Research Assistant
School of Meteorology
University of Oklahoma
Sent from: Norman Oklahoma United States.

From cournape at gmail.com  Wed Mar 11 15:28:14 2009
From: cournape at gmail.com (David Cournapeau)
Date: Thu, 12 Mar 2009 04:28:14 +0900
Subject: [Numpy-discussion] What is the logical value of nan?
In-Reply-To: 
References: <49B7AA94.8090509@molden.no> <49B7EFAC.1040509@noaa.gov>
	<49B80095.6030206@molden.no>
Message-ID: <5b8d13220903111228k22e1896fte13c043119d0022d@mail.gmail.com>

On Thu, Mar 12, 2009 at 3:36 AM, Charles R Harris wrote:
>
> On Wed, Mar 11, 2009 at 12:19 PM, Sturla Molden wrote:
>>
>> Charles R Harris wrote:
>> >
>> > Raising exceptions in ufuncs is going to take some work as the inner
>> > loops are void functions without any means of indicating an error.
>> > Exceptions also need to be thread safe. So I am not opposed but it is
>> > something for the future.
>>
>> I just saw David Cournapeau's post regarding a NPY_NAN macro. As it uses
>> the IEEE754 binary format, at least NPY_NAN should be True in a boolean
>> context.
So bool(nan) is True then. >> >> And that's what happens now on my computer as well: >> >> ?>>> bool(nan) >> True >> >> I don't like Python exception's raised inside ufuncs. In the future we >> NumPy might add OpenMP support to ufuncs (multicore CPUs are getting >> common), and Python exceptions would prevent that, or at least make it >> difficult (cf. the GIL). > > I think numpy needs someway to raise these errors, but error handling is > always tricky. Do you have any suggestions as to how you would like to do > it? I was thinking that adding an int return to the loops would provide some > way of indicating errors without specifying how they were to be handled at > this point. I think that we should think carefully about how to set up a good error system within numpy. If we keep adding ad-hoc error handling, I am afraid it will be hard to read and maintain. We could have something like: typedef struct { int error ; const char *str ; } ErrorStruct ; static ErrorStruct UfuncErrors [] = { {CODE1, "error 1 string"}, ...}; and the related functions to get strings from code. Currently, we can't really pass errors through several callees because we don't have a commonly agreed set of errors. If we don't use an errno, I don't think there are any other options, David From rmay31 at gmail.com Wed Mar 11 15:52:42 2009 From: rmay31 at gmail.com (Ryan May) Date: Wed, 11 Mar 2009 14:52:42 -0500 Subject: [Numpy-discussion] Error building SciPy SVN with NumPy SVN Message-ID: Hi, This is what I'm getting when I try to build scipy HEAD: building library "superlu_src" sources building library "arpack" sources building library "sc_c_misc" sources building library "sc_cephes" sources building library "sc_mach" sources building library "sc_toms" sources building library "sc_amos" sources building library "sc_cdf" sources building library "sc_specfun" sources building library "statlib" sources building extension "scipy.cluster._vq" sources error: /home/rmay/.local/lib64/python2.5/site-packages/numpy/distutils/command/../mingw/gfortran_vs2003_hack.c: No such file or directory This didn't happen until I updated to *numpy* SVN HEAD. Numpy itself is building without errors and no tests fail on my system. Any ideas? Ryan -- Ryan May Graduate Research Assistant School of Meteorology University of Oklahoma Sent from: Norman Oklahoma United States. -------------- next part -------------- An HTML attachment was scrubbed... URL: From cournape at gmail.com Wed Mar 11 16:00:31 2009 From: cournape at gmail.com (David Cournapeau) Date: Thu, 12 Mar 2009 05:00:31 +0900 Subject: [Numpy-discussion] Error building SciPy SVN with NumPy SVN In-Reply-To: References: Message-ID: <5b8d13220903111300q70e955a8kb4e31606151bc34d@mail.gmail.com> On Thu, Mar 12, 2009 at 4:52 AM, Ryan May wrote: > Hi, > > This is what I'm getting when I try to build scipy HEAD: > > building library "superlu_src" sources > building library "arpack" sources > building library "sc_c_misc" sources > building library "sc_cephes" sources > building library "sc_mach" sources > building library "sc_toms" sources > building library "sc_amos" sources > building library "sc_cdf" sources > building library "sc_specfun" sources > building library "statlib" sources > building extension "scipy.cluster._vq" sources > error: > /home/rmay/.local/lib64/python2.5/site-packages/numpy/distutils/command/../mingw/gfortran_vs2003_hack.c: > No such file or directory > > This didn't happen until I updated to *numpy* SVN HEAD.? 
Numpy itself is > building without errors and no tests fail on my system.? Any ideas? Yes, as the name implies, it is an ugly hack to support gfortran on windows - and the hack itself is implemented in an ugly way. I will fix it tomorrow - in the mean time, copying the file from svn into the directory where the file is looked for should do it - the file is not used on linux anyway. David From cournape at gmail.com Wed Mar 11 16:06:39 2009 From: cournape at gmail.com (David Cournapeau) Date: Thu, 12 Mar 2009 05:06:39 +0900 Subject: [Numpy-discussion] Implementing hashing protocol for dtypes Message-ID: <5b8d13220903111306j784b845an940b1b0b35f877e7@mail.gmail.com> Hi, I was looking at #936, to implement correctly the hashing protocol for dtypes. Am I right to believe that tp_hash should recursively descend fields for compound dtypes, and the hash value should depend on the size/ndim/typenum/byteorder for each "atomic" dtype + fields name (and titles) ? Contrary to comparison, we can't reuse the python C api, since PyObject_Hash cannot be applied to the fields dict, right ? cheers, David From rmay31 at gmail.com Wed Mar 11 16:25:00 2009 From: rmay31 at gmail.com (Ryan May) Date: Wed, 11 Mar 2009 15:25:00 -0500 Subject: [Numpy-discussion] Error building SciPy SVN with NumPy SVN In-Reply-To: <5b8d13220903111300q70e955a8kb4e31606151bc34d@mail.gmail.com> References: <5b8d13220903111300q70e955a8kb4e31606151bc34d@mail.gmail.com> Message-ID: On Wed, Mar 11, 2009 at 3:00 PM, David Cournapeau wrote: > On Thu, Mar 12, 2009 at 4:52 AM, Ryan May wrote: > > Hi, > > > > This is what I'm getting when I try to build scipy HEAD: > > > > building library "superlu_src" sources > > building library "arpack" sources > > building library "sc_c_misc" sources > > building library "sc_cephes" sources > > building library "sc_mach" sources > > building library "sc_toms" sources > > building library "sc_amos" sources > > building library "sc_cdf" sources > > building library "sc_specfun" sources > > building library "statlib" sources > > building extension "scipy.cluster._vq" sources > > error: > > > /home/rmay/.local/lib64/python2.5/site-packages/numpy/distutils/command/../mingw/gfortran_vs2003_hack.c: > > No such file or directory > > > > This didn't happen until I updated to *numpy* SVN HEAD. Numpy itself is > > building without errors and no tests fail on my system. Any ideas? > > Yes, as the name implies, it is an ugly hack to support gfortran on > windows - and the hack itself is implemented in an ugly way. I will > fix it tomorrow - in the mean time, copying the file from svn into the > directory where the file is looked for should do it - the file is not > used on linux anyway. > That's fine. I just wanted to make sure I didn't do something weird while getting numpy built with MKL. Ryan -- Ryan May Graduate Research Assistant School of Meteorology University of Oklahoma Sent from: Norman Oklahoma United States. -------------- next part -------------- An HTML attachment was scrubbed... URL: From haase at msg.ucsf.edu Wed Mar 11 16:29:35 2009 From: haase at msg.ucsf.edu (Sebastian Haase) Date: Wed, 11 Mar 2009 21:29:35 +0100 Subject: [Numpy-discussion] code performanceon windows (32 and/or 64 bit) using SWIG: C++ compiler MS vs.cygwin Message-ID: Hi, I was wondering if people could comment on which compiler produces faster code, MS-VS2003 or cygwin g++ ? I use Python 2.5 and SWIG. I have C/C++ routines for large (maybe 10MB, 100MB or even >1GB (on XP 64bit)) data processing. 
I'm not talking about BLAS or anything like that .... just for-loops mostly on contiguous memory. Or should the speed / memory performance of the resulting code be the same ? Thanks, Sebastian Haase From robert.kern at gmail.com Wed Mar 11 16:36:22 2009 From: robert.kern at gmail.com (Robert Kern) Date: Wed, 11 Mar 2009 15:36:22 -0500 Subject: [Numpy-discussion] Implementing hashing protocol for dtypes In-Reply-To: <5b8d13220903111306j784b845an940b1b0b35f877e7@mail.gmail.com> References: <5b8d13220903111306j784b845an940b1b0b35f877e7@mail.gmail.com> Message-ID: <3d375d730903111336j2f354eecv2a33e3b7a1324ec3@mail.gmail.com> On Wed, Mar 11, 2009 at 15:06, David Cournapeau wrote: > Hi, > > I was looking at #936, to implement correctly the hashing protocol for > dtypes. Am I right to believe that tp_hash should recursively descend > fields for compound dtypes, and the hash value should depend on the > size/ndim/typenum/byteorder for each "atomic" dtype + fields name (and > titles) ? Contrary to comparison, we can't reuse the python C api, > since PyObject_Hash cannot be applied to the fields dict, right ? Usually, one constructs a hashable analogue; e.g. taking the .descr and converting all of the lists to tuples. Then use PyObject_Hash on that. -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From sccolbert at gmail.com Wed Mar 11 17:00:59 2009 From: sccolbert at gmail.com (Chris Colbert) Date: Wed, 11 Mar 2009 17:00:59 -0400 Subject: [Numpy-discussion] code performanceon windows (32 and/or 64 bit) using SWIG: C++ compiler MS vs.cygwin In-Reply-To: References: Message-ID: <7f014ea60903111400q6107865as1d4194a15e5b126b@mail.gmail.com> i don't know the correct answer... but i imagine it would be fairly easy to compile a couple of representative scipts on each compiler and compare their performance. On Wed, Mar 11, 2009 at 4:29 PM, Sebastian Haase wrote: > Hi, > I was wondering if people could comment on which compiler produces faster > code, > MS-VS2003 or cygwin g++ ? > I use Python 2.5 and SWIG. I have C/C++ routines for large (maybe > 10MB, 100MB or even >1GB (on XP 64bit)) data processing. > I'm not talking about BLAS or anything like that .... just for-loops > mostly on contiguous memory. > Or should the speed / memory performance of the resulting code be the same > ? > > > Thanks, > Sebastian Haase > _______________________________________________ > Numpy-discussion mailing list > Numpy-discussion at scipy.org > http://mail.scipy.org/mailman/listinfo/numpy-discussion > -------------- next part -------------- An HTML attachment was scrubbed... URL: From gareth.elston.floss at googlemail.com Wed Mar 11 19:07:19 2009 From: gareth.elston.floss at googlemail.com (Gareth Elston) Date: Wed, 11 Mar 2009 23:07:19 +0000 Subject: [Numpy-discussion] A module for homogeneous transformation matrices, Euler angles and quaternions In-Reply-To: <463e11f90903041928j7508b2fcu4abbaa65cfe11460@mail.gmail.com> References: <2352c0540903041410j263dbb4dk6d6a2662ae7c4216@mail.gmail.com> <463e11f90903041928j7508b2fcu4abbaa65cfe11460@mail.gmail.com> Message-ID: <2352c0540903111607r32d50c4fm976f010a76e1f72d@mail.gmail.com> Does anyone know any good internet references for defining and using homogeneous transformation matrices, especially oblique projection matrices? 
I'm writing some tests for transformations.py and I'm getting unexpected results, quite possibly because I'm making naive assumptions about how to use projection_matrix(). Thanks, Gareth. On Thu, Mar 5, 2009 at 3:28 AM, Jonathan Taylor wrote: > Looks cool but a lot of this should be done in an extension module to > make it fast. ?Perhaps starting this process off as a separate entity > until stability is acheived. ?I would be tempted to do some of this > using cython. ?I just wrote found that generating a rotation matrix > from euler angles is about 10x faster when done properly with cython. > > J. > > On Wed, Mar 4, 2009 at 5:10 PM, Gareth Elston > wrote: >> I found a nice module for these transforms at >> http://www.lfd.uci.edu/~gohlke/code/transformations.py.html . I've >> been using an older version for some time and thought it might make a >> good addition to numpy/scipy. I made some simple mods to the older >> version to add a couple of functions I needed and to allow it to be >> used with Python 2.4. >> >> The module is pure Python (2.5, with numpy 1.2 imported), includes >> doctests, and is BSD licensed. Here's the first part of the module >> docstring: >> >> """Homogeneous Transformation Matrices and Quaternions. >> >> A library for calculating 4x4 matrices for translating, rotating, mirroring, >> scaling, shearing, projecting, orthogonalizing, and superimposing arrays of >> homogenous coordinates as well as for converting between rotation matrices, >> Euler angles, and quaternions. >> """ >> >> I'd like to see this added to numpy/scipy so I know I've got some >> reading to do (scipy.org/Developer_Zone and the huge scipy-dev >> discussions on Scipy development infrastructure / workflow) to make >> sure it follows the guidelines, but where would people like to see >> this? In numpy? scipy? scikits? elsewhere? >> >> I seem to remember that there was a first draft of a guide for >> developers being written. Are there any links available? >> >> Thanks, >> Gareth. From sccolbert at gmail.com Wed Mar 11 19:52:58 2009 From: sccolbert at gmail.com (Chris Colbert) Date: Wed, 11 Mar 2009 19:52:58 -0400 Subject: [Numpy-discussion] A module for homogeneous transformation matrices, Euler angles and quaternions In-Reply-To: <2352c0540903041410j263dbb4dk6d6a2662ae7c4216@mail.gmail.com> References: <2352c0540903041410j263dbb4dk6d6a2662ae7c4216@mail.gmail.com> Message-ID: <7f014ea60903111652s7e8d9c25l970e3da4961a2fcf@mail.gmail.com> there has already been a port of the robotics toolbox for matlab into python which is built on numpy: http://code.google.com/p/robotics-toolbox-python/ which contains all the function you are describing. Chris On Wed, Mar 4, 2009 at 6:10 PM, Gareth Elston < gareth.elston.floss at googlemail.com> wrote: > I found a nice module for these transforms at > http://www.lfd.uci.edu/~gohlke/code/transformations.py.html. I've > been using an older version for some time and thought it might make a > good addition to numpy/scipy. I made some simple mods to the older > version to add a couple of functions I needed and to allow it to be > used with Python 2.4. > > The module is pure Python (2.5, with numpy 1.2 imported), includes > doctests, and is BSD licensed. Here's the first part of the module > docstring: > > """Homogeneous Transformation Matrices and Quaternions. 
>
> A library for calculating 4x4 matrices for translating, rotating, mirroring,
> scaling, shearing, projecting, orthogonalizing, and superimposing arrays of
> homogenous coordinates as well as for converting between rotation matrices,
> Euler angles, and quaternions.
> """
>
> I'd like to see this added to numpy/scipy so I know I've got some
> reading to do (scipy.org/Developer_Zone and the huge scipy-dev
> discussions on Scipy development infrastructure / workflow) to make
> sure it follows the guidelines, but where would people like to see
> this? In numpy? scipy? scikits? elsewhere?
>
> I seem to remember that there was a first draft of a guide for
> developers being written. Are there any links available?
>
> Thanks,
> Gareth.
> _______________________________________________
> Numpy-discussion mailing list
> Numpy-discussion at scipy.org
> http://projects.scipy.org/mailman/listinfo/numpy-discussion

From shuwj5460 at 163.com  Wed Mar 11 20:55:13 2009
From: shuwj5460 at 163.com (shuwj5460 at 163.com)
Date: Thu, 12 Mar 2009 08:55:13 +0800
Subject: [Numpy-discussion] is it a bug?
Message-ID: <20090312084423.F627.SHUWJ5460@163.com>

Hi,

import numpy as np
x = np.arange(30)
x.shape = (2,3,5)

idx = np.array([0,1])
e = x[0,idx,:]
print e.shape
#----> return (2,5). ok.

idx = np.array([0,1])
e = x[0,:,idx]
print e.shape
#-----> return (2,3). I think the right answer should be (3,2). Is
#       it a bug here? my numpy version is 1.2.1.

Regards

David
-- 
<>

From jonathan.taylor at utoronto.ca  Wed Mar 11 22:51:07 2009
From: jonathan.taylor at utoronto.ca (Jonathan Taylor)
Date: Wed, 11 Mar 2009 22:51:07 -0400
Subject: [Numpy-discussion] is it a bug?
In-Reply-To: <20090312084423.F627.SHUWJ5460@163.com>
References: <20090312084423.F627.SHUWJ5460@163.com>
Message-ID: <463e11f90903111951i15a75333v1637983361b841e7@mail.gmail.com>

You lost me on

> x = np.arange(30)
> x.shape = (2,3,5)

For me I get:

In [2]: x = np.arange(30)

In [3]: x.shape
Out[3]: (30,)

which is what I would expect. Perhaps I missed something?

Jon.

On Wed, Mar 11, 2009 at 8:55 PM, shuwj5460 at 163.com wrote:
> Hi,
>
> import numpy as np
> x = np.arange(30)
> x.shape = (2,3,5)
>
> idx = np.array([0,1])
> e = x[0,idx,:]
> print e.shape
> #----> return (2,5). ok.
>
> idx = np.array([0,1])
> e = x[0,:,idx]
> print e.shape
>
> #-----> return (2,3). I think the right answer should be (3,2). Is
> #       it a bug here? my numpy version is 1.2.1.
>
> Regards
>
> David
> --
> <>
>
> _______________________________________________
> Numpy-discussion mailing list
> Numpy-discussion at scipy.org
> http://mail.scipy.org/mailman/listinfo/numpy-discussion

From robert.kern at gmail.com  Wed Mar 11 22:57:42 2009
From: robert.kern at gmail.com (Robert Kern)
Date: Wed, 11 Mar 2009 21:57:42 -0500
Subject: [Numpy-discussion] is it a bug?
In-Reply-To: <463e11f90903111951i15a75333v1637983361b841e7@mail.gmail.com>
References: <20090312084423.F627.SHUWJ5460@163.com>
	<463e11f90903111951i15a75333v1637983361b841e7@mail.gmail.com>
Message-ID: <3d375d730903111957v394c353fjc8afb5fa6d4b75fa@mail.gmail.com>

On Wed, Mar 11, 2009 at 21:51, Jonathan Taylor wrote:
> You lost me on
>> x = np.arange(30)
>> x.shape = (2,3,5)
>
> For me I get:
> In [2]: x = np.arange(30)
>
> In [3]: x.shape
> Out[3]: (30,)
>
> which is what I would expect. Perhaps I missed something?

He is reshaping x by assigning (2,3,5) to its shape tuple, not asserting
that it is equal to (2,3,5) without modification.
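A minimal interactive sketch of that distinction: assigning to .shape
reshapes the array in place (and raises if the sizes don't match), while
reshape returns the reshaped result as a new view:

>>> import numpy as np
>>> x = np.arange(30)
>>> x.shape = (2, 3, 5)     # in-place reshape
>>> x.shape
(2, 3, 5)
>>> y = np.arange(30).reshape(2, 3, 5)   # same result, returned as a view
>>> y.shape
(2, 3, 5)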
-- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From josef.pktd at gmail.com Wed Mar 11 22:58:50 2009 From: josef.pktd at gmail.com (josef.pktd at gmail.com) Date: Wed, 11 Mar 2009 21:58:50 -0500 Subject: [Numpy-discussion] is it a bug? In-Reply-To: <463e11f90903111951i15a75333v1637983361b841e7@mail.gmail.com> References: <20090312084423.F627.SHUWJ5460@163.com> <463e11f90903111951i15a75333v1637983361b841e7@mail.gmail.com> Message-ID: <1cd32cbb0903111958w3232c278wf978bf4dc9656840@mail.gmail.com> On Wed, Mar 11, 2009 at 9:51 PM, Jonathan Taylor wrote: > You lost me on >> x = np.arange(30) >> x.shape = (2,3,5) > > For me I get: > In [2]: x = np.arange(30) > > In [3]: x.shape > Out[3]: (30,) > > which is what I would expect. Perhaps I missed something? > > Jon. > On Wed, Mar 11, 2009 at 8:55 PM, shuwj5460 at 163.com wrote: >> Hi, >> >> import numpy as np >> x = np.arange(30) >> x.shape = (2,3,5) >> >> idx = np.array([0,1]) >> e = x[0,idx,:] >> print e.shape >> #----> return (2,5). ok. >> >> idx = np.array([0,1]) >> e = x[0,:,idx] >> print e.shape >> >> #-----> return (2,3). I think the right answer should be (3,2). Is >> # it a bug here? my numpy version is 1.2.1. >> >> >> Regards >> >> David same problem with reshape instead of assigning to shape: >>> x = np.arange(30).reshape(2,3,5) >>> idx = np.array([0,1]); e = x[0,:,idx]; e.shape (2, 3) >>> idx = np.array([0,1]); e = x[0,:,:2]; e.shape (3, 2) >>> e = x[0,:,[0,1]]; e.shape (2, 3) >>> e = x[0,np.arange(3)[:,np.newaxis],[0,1]]; e.shape (3, 2) >>> e = x[0,0:3,[0,1]]; e.shape (2, 3) I was trying to figure out what the broadcasting rules are doing, but the combination of slice : and an index looks weird, and I'm using this pattern all the time. Josef From robert.kern at gmail.com Wed Mar 11 23:02:15 2009 From: robert.kern at gmail.com (Robert Kern) Date: Wed, 11 Mar 2009 22:02:15 -0500 Subject: [Numpy-discussion] is it a bug? In-Reply-To: <20090312084423.F627.SHUWJ5460@163.com> References: <20090312084423.F627.SHUWJ5460@163.com> Message-ID: <3d375d730903112002n1f7dfc90n9b3319644b7f516b@mail.gmail.com> On Wed, Mar 11, 2009 at 19:55, shuwj5460 at 163.com wrote: > Hi, > > import numpy as np > x = np.arange(30) > x.shape = (2,3,5) > > idx = np.array([0,1]) > e = x[0,idx,:] > print e.shape > #----> return (2,5). ok. > > idx = np.array([0,1]) > e = x[0,:,idx] > print e.shape > > #-----> return (2,3). I think the right answer should be (3,2). Is > # it a bug here? my numpy version is 1.2.1. It's certainly weird, but it's working as designed. Fancy indexing via arrays is a separate subsystem from indexing via slices. Basically, fancy indexing decides the outermost shape of the result (e.g. the leftmost items in the shape tuple). If there are any sliced axes, they are *appended* to the end of that shape tuple. -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From josef.pktd at gmail.com Wed Mar 11 23:22:07 2009 From: josef.pktd at gmail.com (josef.pktd at gmail.com) Date: Wed, 11 Mar 2009 22:22:07 -0500 Subject: [Numpy-discussion] is it a bug?
In-Reply-To: <3d375d730903112002n1f7dfc90n9b3319644b7f516b@mail.gmail.com> References: <20090312084423.F627.SHUWJ5460@163.com> <3d375d730903112002n1f7dfc90n9b3319644b7f516b@mail.gmail.com> Message-ID: <1cd32cbb0903112022j43df4d8dx4af7d70b705d349b@mail.gmail.com> On Wed, Mar 11, 2009 at 10:02 PM, Robert Kern wrote: > On Wed, Mar 11, 2009 at 19:55, shuwj5460 at 163.com wrote: >> Hi, >> >> import numpy as np >> x = np.arange(30) >> x.shape = (2,3,5) >> >> idx = np.array([0,1]) >> e = x[0,idx,:] >> print e.shape >> #----> return (2,5). ok. >> >> idx = np.array([0,1]) >> e = x[0,:,idx] >> print e.shape >> >> #-----> return (2,3). I think the right answer should be (3,2). Is >> # it a bug here? my numpy version is 1.2.1. > > It's certainly weird, but it's working as designed. Fancy indexing via > arrays is a separate subsystem from indexing via slices. Basically, > fancy indexing decides the outermost shape of the result (e.g. the > leftmost items in the shape tuple). If there are any sliced axes, they > are *appended* to the end of that shape tuple. > > -- > Robert Kern But the swapping of axes doesn't seem to happen on the first 2 dimensions (my main use case) >>> x = np.arange(30).reshape(3,5,2) >>> idx = np.array([0,1]); e = x[:,[0,1],0]; e.shape (3, 2) >>> idx = np.array([0,1]); e = x[:,:2,0]; e.shape (3, 2) >>> idx = np.array([0,1]); e = x[0,:,[0,1]]; e.shape (2, 5) >>> idx = np.array([0,1]); e = x[0,:,:2]; e.shape (5, 2) Is there a way to use swapaxes in the 3 or more dimension case that would get the "correct" axis order back? Josef From cournape at gmail.com Wed Mar 11 23:38:31 2009 From: cournape at gmail.com (David Cournapeau) Date: Thu, 12 Mar 2009 12:38:31 +0900 Subject: [Numpy-discussion] code performanceon windows (32 and/or 64 bit) using SWIG: C++ compiler MS vs.cygwin In-Reply-To: References: Message-ID: <5b8d13220903112038m21269ffbj8aaf0719c5242fc@mail.gmail.com> On Thu, Mar 12, 2009 at 5:29 AM, Sebastian Haase wrote: > Hi, > I was wondering if people could comment on which compiler produces faster code, > MS-VS2003 or cygwin g++ ? > I use Python 2.5 and SWIG. I have C/C++ routines for large (maybe > 10MB, 100MB or even >1GB (on XP 64bit)) data processing. > I'm not talking about BLAS or anything like that .... just for-loops > mostly on contiguous memory. On windows xp 64 bits, the choice is easy: there is no working native g++ compiler yet, there are quite a few bugs (in particular the driver is broken, which means you have to call the compiler, assembler and linker manually). AFAIK, cygwin cannot run 64 bits binaries (cygwin itself is only available on 32 bits for sure), and you can't cross compile easily.
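For a single extension of your own, the general shape of a cross build would be something like this (only a sketch - the toolchain prefix, the paths and the library name below are all assumptions that depend on which mingw-w64 build you install):

# cross compile foo.c into a 64 bits windows extension (hypothetical paths)
x86_64-w64-mingw32-gcc -shared foo.c -o foo.pyd \
    -I/path/to/python25/include -L/path/to/python25/libs -lpython25

but nothing like this is tested for numpy itself.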
David From cournape at gmail.com Wed Mar 11 23:49:19 2009 From: cournape at gmail.com (David Cournapeau) Date: Thu, 12 Mar 2009 12:49:19 +0900 Subject: [Numpy-discussion] Implementing hashing protocol for dtypes In-Reply-To: <3d375d730903111336j2f354eecv2a33e3b7a1324ec3@mail.gmail.com> References: <5b8d13220903111306j784b845an940b1b0b35f877e7@mail.gmail.com> <3d375d730903111336j2f354eecv2a33e3b7a1324ec3@mail.gmail.com> Message-ID: <5b8d13220903112049r772643bbt6d99dd14071ca699@mail.gmail.com> On Thu, Mar 12, 2009 at 5:36 AM, Robert Kern wrote: > On Wed, Mar 11, 2009 at 15:06, David Cournapeau wrote: >> Hi, >> >> I was looking at #936, to implement correctly the hashing protocol for >> dtypes. Am I right to believe that tp_hash should recursively descend >> fields for compound dtypes, and the hash value should depend on the >> size/ndim/typenum/byteorder for each "atomic" dtype + fields name (and >> titles) ? Contrary to comparison, we can't reuse the python C api, >> since PyObject_Hash cannot be applied to the fields dict, right ? > > Usually, one constructs a hashable analogue; e.g. taking the .descr > and converting all of the lists to tuples. Then use PyObject_Hash on > that. Is the .descr of two dtypes guaranteed to be equal whenever the dtypes are equal ? It is not obvious to me that PyArray_EquivTypes is equivalent to comparing the descr ? David From robert.kern at gmail.com Thu Mar 12 00:00:24 2009 From: robert.kern at gmail.com (Robert Kern) Date: Wed, 11 Mar 2009 23:00:24 -0500 Subject: [Numpy-discussion] Implementing hashing protocol for dtypes In-Reply-To: <5b8d13220903112049r772643bbt6d99dd14071ca699@mail.gmail.com> References: <5b8d13220903111306j784b845an940b1b0b35f877e7@mail.gmail.com> <3d375d730903111336j2f354eecv2a33e3b7a1324ec3@mail.gmail.com> <5b8d13220903112049r772643bbt6d99dd14071ca699@mail.gmail.com> Message-ID: <3d375d730903112100k1258ca32qb89c4dd9423d24ad@mail.gmail.com> On Wed, Mar 11, 2009 at 22:49, David Cournapeau wrote: > On Thu, Mar 12, 2009 at 5:36 AM, Robert Kern wrote: >> On Wed, Mar 11, 2009 at 15:06, David Cournapeau wrote: >>> Hi, >>> >>> I was looking at #936, to implement correctly the hashing protocol for >>> dtypes. Am I right to believe that tp_hash should recursively descend >>> fields for compound dtypes, and the hash value should depend on the >>> size/ndim/typenum/byteorder for each "atomic" dtype + fields name (and >>> titles) ? Contrary to comparison, we can't reuse the python C api, >>> since PyObject_Hash cannot be applied to the fields dict, right ? >> >> Usually, one constructs a hashable analogue; e.g. taking the .descr >> and converting all of the lists to tuples. Then use PyObject_Hash on >> that. > > Is the .descr of two dtypes guaranteed to be equal whenever the dtypes > are equal ? It is not obvious to me that PyArray_EquivTypes is > equivalent to comparing the descr ? It was an example. -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From stefan at sun.ac.za Thu Mar 12 02:34:20 2009 From: stefan at sun.ac.za (=?ISO-8859-1?Q?St=E9fan_van_der_Walt?=) Date: Thu, 12 Mar 2009 08:34:20 +0200 Subject: [Numpy-discussion] is it a bug? 
In-Reply-To: <3d375d730903112002n1f7dfc90n9b3319644b7f516b@mail.gmail.com> References: <20090312084423.F627.SHUWJ5460@163.com> <3d375d730903112002n1f7dfc90n9b3319644b7f516b@mail.gmail.com> Message-ID: <9457e7c80903112334q6adf8df7v598e041c149dd8cb@mail.gmail.com> 2009/3/12 Robert Kern : >> idx = np.array([0,1]) >> e = x[0,:,idx] >> print e.shape >> >> #-----> return (2,3). I think the right answer should be (3,2). Is >> # it a bug here? my numpy version is 1.2.1. > > It's certainly weird, but it's working as designed. Fancy indexing via > arrays is a separate subsystem from indexing via slices. Basically, > fancy indexing decides the outermost shape of the result (e.g. the > leftmost items in the shape tuple). If there are any sliced axes, they > are *appended* to the end of that shape tuple. This was my understanding, but now I see: In [31]: x = np.random.random([4,5,6,7]) In [32]: idx = np.array([1,2]) In [33]: x[:, idx, idx, :].shape Out[33]: (4, 2, 7) Cheers Stéfan From faltet at pytables.org Thu Mar 12 04:05:19 2009 From: faltet at pytables.org (Francesc Alted) Date: Thu, 12 Mar 2009 09:05:19 +0100 Subject: [Numpy-discussion] Intel MKL on Core2 system In-Reply-To: References: <200903112020.18209.faltet@pytables.org> Message-ID: <200903120905.20466.faltet@pytables.org> A Wednesday 11 March 2009, Ryan May escrigué: > Thanks. That's actually pretty close to what I had. I was actually > thinking that you were using only blas_opt and lapack_opt, since > supposedly the [mkl] style section is deprecated. Thus far, I cannot > get these to work with MKL. Well, my configuration was meant to link with the VML integrated in the MKL, but I'd say that it would be similar for blas and lapack. What's your configuration? What's the error you are running into? Cheers, -- Francesc Alted From haase at msg.ucsf.edu Thu Mar 12 05:15:55 2009 From: haase at msg.ucsf.edu (Sebastian Haase) Date: Thu, 12 Mar 2009 10:15:55 +0100 Subject: [Numpy-discussion] code performanceon windows (32 and/or 64 bit) using SWIG: C++ compiler MS vs.cygwin In-Reply-To: <5b8d13220903112039xc8bc283x74519bb3143bab9e@mail.gmail.com> References: <5b8d13220903112038m21269ffbj8aaf0719c5242fc@mail.gmail.com> <5b8d13220903112039xc8bc283x74519bb3143bab9e@mail.gmail.com> Message-ID: On Thu, Mar 12, 2009 at 4:39 AM, David Cournapeau wrote: > On Thu, Mar 12, 2009 at 12:38 PM, David Cournapeau wrote: >> and you can't >> cross compile easily. > > Of course, this applies to numpy/scipy - you can cross compile your > own extensions relatively easily (at least I don't see why it would > not be possible). > > David Thanks for the reply. I actually don't have easy access to the MS compiler. David, will you be making 64bit binary versions of numpy+scipy available ? Cross compiling .... I have never done that: I suppose it's an additional option to "g++" and having extra libraries somewhere, right ?
-Sebastian From david at ar.media.kyoto-u.ac.jp Thu Mar 12 05:11:31 2009 From: david at ar.media.kyoto-u.ac.jp (David Cournapeau) Date: Thu, 12 Mar 2009 18:11:31 +0900 Subject: [Numpy-discussion] code performanceon windows (32 and/or 64 bit) using SWIG: C++ compiler MS vs.cygwin In-Reply-To: References: <5b8d13220903112038m21269ffbj8aaf0719c5242fc@mail.gmail.com> <5b8d13220903112039xc8bc283x74519bb3143bab9e@mail.gmail.com> Message-ID: <49B8D1C3.5030008@ar.media.kyoto-u.ac.jp> Sebastian Haase wrote: > On Thu, Mar 12, 2009 at 4:39 AM, David Cournapeau wrote: > >> On Thu, Mar 12, 2009 at 12:38 PM, David Cournapeau wrote: >> >>> and you can't >>> cross compile easily. >>> >> Of course, this applies to numpy/scipy - you can cross compile your >> own extensions relatively easily (at least I don't see why it would >> not be possible). >> >> David >> > > Thanks for the reply. > I actually don't have easy access to the MS compiler. > The MS compilers are available freely - the trick is to get the PSDK which contains the free compilers targeting 64 bits (Visual studio express does not include 64 bits targeting compilers). AFAIK, no 64 bits hosted compiler is available freely. So the situation is that you get the 32 bits compilers binaries which run on windows 64 bits through WoW and which target 64 bits. It is easier than it sounds :) You have to be careful with the versions: - python 2.5 -> use compiler version 14 (that is VS 2005 -> 64 bits freely available through PSDK 6.0) - python 2.6 -> use compiler version 15 (that is VS 2008 -> 64 bits freely available through PSDK 6.1(a)) > David, will you be making 64bit binary versions of numpy+scipy available ? > Numpy, yes. For scipy, I have not yet managed to build it successfully: neither the C++ nor the Fortran GNU compiler runs well. I have no access to a non free fortran compiler on windows 64 bits. And anyway, it will be experimental; in particular, I won't distribute the corresponding toolchain, and the mingw project does not distribute a native toolchain either (you would have to build it yourself). So if you can limit yourself to numpy, you are better off with MS compilers for now I think. > Cross compiling .... I have never done that: I suppose it's an > additional option to "g++" and having extra libraries somewhere, right ? > Yes - it is easy for your own extensions. I would like numpy to be cross-compilable, because the free cross compilers (windows 32 bits hosted -> targeting 64 bits) are much more stable than the native ones. The native ones do not work well for the moment (both g++ and gfortran segfault or worse). The problem is that most projects which use mingw-w64 can cross compile (from linux or mac os X) easily through autoconf, so people do not care so much about the native compilers. cheers, David From wright at esrf.fr Thu Mar 12 05:51:20 2009 From: wright at esrf.fr (Jon Wright) Date: Thu, 12 Mar 2009 10:51:20 +0100 Subject: [Numpy-discussion] numpy via easy_install on windows In-Reply-To: References: <5b8d13220903112038m21269ffbj8aaf0719c5242fc@mail.gmail.com> <5b8d13220903112039xc8bc283x74519bb3143bab9e@mail.gmail.com> Message-ID: <49B8DB18.8070503@esrf.fr> Hello, If I do: C:\> easy_install numpy ... on a windows box, it attempts to do a source download and build, which typically doesn't work. If however I use: C:\> easy_install numpy==1.0.4 ... then the magic works just fine. Any chance of a more recent bdist_egg being made available for windows?
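For context, the reason I ask: I'd like numpy pulled in automatically as a dependency, i.e. the usual setuptools idiom (just a sketch - 'mypackage' and the version pin are placeholders):

from setuptools import setup

setup(name='mypackage',
      version='0.1',
      # this line is what makes easy_install go and fetch numpy
      install_requires=['numpy >= 1.2'],
      )

and that only works if easy_install can find either a buildable sdist or a binary egg for the platform.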
Thanks Jon From david at ar.media.kyoto-u.ac.jp Thu Mar 12 05:41:12 2009 From: david at ar.media.kyoto-u.ac.jp (David Cournapeau) Date: Thu, 12 Mar 2009 18:41:12 +0900 Subject: [Numpy-discussion] numpy via easy_install on windows In-Reply-To: <49B8DB18.8070503@esrf.fr> References: <5b8d13220903112038m21269ffbj8aaf0719c5242fc@mail.gmail.com> <5b8d13220903112039xc8bc283x74519bb3143bab9e@mail.gmail.com> <49B8DB18.8070503@esrf.fr> Message-ID: <49B8D8B8.9050202@ar.media.kyoto-u.ac.jp> Hi Jon, Jon Wright wrote: > Hello, > > If I do: > > C:\> easy_install numpy > > ... on a windows box, it attempts to do a source download and build, > which typically doesn't work. If however I use: > > C:\> easy_install numpy==1.0.4 > > ... then the magic works just fine. Any chance of a more recent > bdist_egg being made available for windows? > Is there a reason why you would not just use the binary installer ? cheers, David From asbach at ient.rwth-aachen.de Thu Mar 12 06:26:39 2009 From: asbach at ient.rwth-aachen.de (Mark Asbach) Date: Thu, 12 Mar 2009 11:26:39 +0100 Subject: [Numpy-discussion] image processing using numpy-scipy? In-Reply-To: <332268.94796.qm@web94913.mail.in2.yahoo.com> References: <332268.94796.qm@web94913.mail.in2.yahoo.com> Message-ID: <2EA4EE08-F24B-4166-A663-2DEF29A0297C@ient.rwth-aachen.de> Hi there, > I have read the docs of PIL but there is no function for this. Can I > use numpy-scipy for the matter? > The image size is 1K. did you have a look at OpenCV? http://sourceforge.net/projects/opencvlibrary A couple of weeks ago we implemented the numpy array interface, so data exchange is easy [check out from SVN]. Best, Mark -- Mark Asbach Institut für Nachrichtentechnik, RWTH Aachen University http://www.ient.rwth-aachen.de/cms/team/m_asbach From wright at esrf.fr Thu Mar 12 07:08:06 2009 From: wright at esrf.fr (Jon Wright) Date: Thu, 12 Mar 2009 12:08:06 +0100 Subject: [Numpy-discussion] numpy via easy_install on windows In-Reply-To: <49B8D8B8.9050202@ar.media.kyoto-u.ac.jp> References: <5b8d13220903112038m21269ffbj8aaf0719c5242fc@mail.gmail.com> <5b8d13220903112039xc8bc283x74519bb3143bab9e@mail.gmail.com> <49B8DB18.8070503@esrf.fr> <49B8D8B8.9050202@ar.media.kyoto-u.ac.jp> Message-ID: <49B8ED16.4060906@esrf.fr> David Cournapeau wrote: > Hi Jon, > > Jon Wright wrote: >> Hello, >> >> If I do: >> >> C:\> easy_install numpy >> >> ... on a windows box, it attempts to do a source download and build, >> which typically doesn't work. If however I use: >> >> C:\> easy_install numpy==1.0.4 >> >> ... then the magic works just fine. Any chance of a more recent >> bdist_egg being made available for windows? >> > > Is there a reason why you would not just use the binary installer ? I'd like to have numpy as a dependency being pulled into a virtualenv automatically. Is that possible with the binary installer? Thanks, Jon From zachary.pincus at yale.edu Thu Mar 12 08:11:30 2009 From: zachary.pincus at yale.edu (Zachary Pincus) Date: Thu, 12 Mar 2009 08:11:30 -0400 Subject: [Numpy-discussion] image processing using numpy-scipy? In-Reply-To: <2EA4EE08-F24B-4166-A663-2DEF29A0297C@ient.rwth-aachen.de> References: <332268.94796.qm@web94913.mail.in2.yahoo.com> <2EA4EE08-F24B-4166-A663-2DEF29A0297C@ient.rwth-aachen.de> Message-ID: <44784810-6A1C-470D-8AEC-B8538B6E9B2C@yale.edu> > did you have a look at OpenCV?
> > http://sourceforge.net/projects/opencvlibrary > > Since a couple of weeks, we have implemented the numpy array > interface so data exchange is easy [check out from SVN]. Oh fantastic! That is great news indeed. Zach From cournape at gmail.com Thu Mar 12 08:13:04 2009 From: cournape at gmail.com (David Cournapeau) Date: Thu, 12 Mar 2009 21:13:04 +0900 Subject: [Numpy-discussion] Implementing hashing protocol for dtypes In-Reply-To: <3d375d730903112100k1258ca32qb89c4dd9423d24ad@mail.gmail.com> References: <5b8d13220903111306j784b845an940b1b0b35f877e7@mail.gmail.com> <3d375d730903111336j2f354eecv2a33e3b7a1324ec3@mail.gmail.com> <5b8d13220903112049r772643bbt6d99dd14071ca699@mail.gmail.com> <3d375d730903112100k1258ca32qb89c4dd9423d24ad@mail.gmail.com> Message-ID: <5b8d13220903120513gb19b2dfn7712b7d972a6a726@mail.gmail.com> On Thu, Mar 12, 2009 at 1:00 PM, Robert Kern wrote: > > It was an example. Ok, guess I will have to learn the difference between i.e. and e.g. one day. Anyway, here is a first shot at it: http://codereview.appspot.com/26052 I added a few tests which fail with trunk and work with the patch (for example, two equivalent types now hash the same), only tested on Linux so far. I am not sure I took into account every case: I am not familiar with the PyArray_Descr API (this patch was a good excuse to dive into this part of the code), and I also noticed a few discrepancies with the doc (the fields struct member never seems to be NULL, but set to None for builtin types). cheers, David From cournape at gmail.com Thu Mar 12 08:13:54 2009 From: cournape at gmail.com (David Cournapeau) Date: Thu, 12 Mar 2009 21:13:54 +0900 Subject: [Numpy-discussion] Implementing hashing protocol for dtypes In-Reply-To: <5b8d13220903120513gb19b2dfn7712b7d972a6a726@mail.gmail.com> References: <5b8d13220903111306j784b845an940b1b0b35f877e7@mail.gmail.com> <3d375d730903111336j2f354eecv2a33e3b7a1324ec3@mail.gmail.com> <5b8d13220903112049r772643bbt6d99dd14071ca699@mail.gmail.com> <3d375d730903112100k1258ca32qb89c4dd9423d24ad@mail.gmail.com> <5b8d13220903120513gb19b2dfn7712b7d972a6a726@mail.gmail.com> Message-ID: <5b8d13220903120513g71bc9dafhf5369f75662024f4@mail.gmail.com> On Thu, Mar 12, 2009 at 9:13 PM, David Cournapeau wrote: > On Thu, Mar 12, 2009 at 1:00 PM, Robert Kern wrote: > >> >> It was an example. > > Ok, guess I will have to learn the difference between i.e. and e.g. one day. > > Anyway, here is a first shot at it: > > http://codereview.appspot.com/26052 Sorry, the link is http://codereview.appspot.com/26052/show David From david at ar.media.kyoto-u.ac.jp Thu Mar 12 08:23:23 2009 From: david at ar.media.kyoto-u.ac.jp (David Cournapeau) Date: Thu, 12 Mar 2009 21:23:23 +0900 Subject: [Numpy-discussion] 1.3 release: getting rid of sourceforge ? Message-ID: <49B8FEBB.7010407@ar.media.kyoto-u.ac.jp> Hi, I was wondering if there was any reason for still using sourceforge ? AFAIK, we only use it to put the files there, and dealing with sourceforge to upload files is less than optimal to say the least. Is there any drawback to directly put the files to scipy.org ? 
cheers, David From stefan at sun.ac.za Thu Mar 12 09:19:21 2009 From: stefan at sun.ac.za (=?ISO-8859-1?Q?St=E9fan_van_der_Walt?=) Date: Thu, 12 Mar 2009 15:19:21 +0200 Subject: [Numpy-discussion] Implementing hashing protocol for dtypes In-Reply-To: <5b8d13220903120513gb19b2dfn7712b7d972a6a726@mail.gmail.com> References: <5b8d13220903111306j784b845an940b1b0b35f877e7@mail.gmail.com> <3d375d730903111336j2f354eecv2a33e3b7a1324ec3@mail.gmail.com> <5b8d13220903112049r772643bbt6d99dd14071ca699@mail.gmail.com> <3d375d730903112100k1258ca32qb89c4dd9423d24ad@mail.gmail.com> <5b8d13220903120513gb19b2dfn7712b7d972a6a726@mail.gmail.com> Message-ID: <9457e7c80903120619r16cf3038ve84b06d4dadd8c8e@mail.gmail.com> 2009/3/12 David Cournapeau : > Anyway, here is a first shot at it: > > http://codereview.appspot.com/26052 Design question: should [('x', float), ('y', float)] and [('t', float), ('s', float)] hash to the same value or not? Regards St?fan From bsouthey at gmail.com Thu Mar 12 09:19:38 2009 From: bsouthey at gmail.com (Bruce Southey) Date: Thu, 12 Mar 2009 08:19:38 -0500 Subject: [Numpy-discussion] Portable macro to get NAN, INF, positive and negative zero In-Reply-To: <49B75D8A.7050507@ar.media.kyoto-u.ac.jp> References: <49B75D8A.7050507@ar.media.kyoto-u.ac.jp> Message-ID: <49B90BEA.9030702@gmail.com> David Cournapeau wrote: > Hi, > > For the record, I have just added the following functionalities to > numpy, which may simplify some C code: > - NPY_NAN/NPY_INFINITY/NPY_PZERO/NPY_NZERO: macros to get nan, inf, > positive and negative zeros. Rationale: some code use NAN, _get_nan, > etc... NAN is a GNU C extension, INFINITY is not available on many C > compilers. The NPY_ macros are defined from the IEEE754 format, and as > such should be very fast (the values should be inlined). > - we can now use inline safely in numpy C code: it is defined to > something recognized by the compiler or nothing if inline is not > supported. It is NOT defined publicly to avoid namespace pollution. > - NPY_INLINE is a macro which can be used publicly, and has the same > usage as inline. > > cheers, > > David > _______________________________________________ > Numpy-discussion mailing list > Numpy-discussion at scipy.org > http://mail.scipy.org/mailman/listinfo/numpy-discussion > Hi, I am curious how this relates to Zach's comment in the thread on 'Infinity Definitions': http://mail.scipy.org/pipermail/numpy-discussion/2008-July/035740.html > If I recall correctly, one reason for the plethora of infinity > definitions (which had been mentioned previously on the list) was that > the repr for some or all float/complex types was generated by code in > the host OS, and not in numpy. As such, these reprs were different for > different platforms. As there was a desire to ensure that reprs could > always be evaluated, the various ways that inf and nan could be spit > out by the host libs were all included. > > Has this been fixed now, so that repr(inf), (etc.) looks identical on > all platforms? If this is no longer a concern then we should be able to remove those duplicate definitions and use of uppercase. 
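A quick check that could be run on each platform (only a sketch; repr, str and the '%' operator go through separate codepaths, so all three are worth printing):

import numpy as np

for val in (np.inf, -np.inf, np.nan):
    print repr(val), str(val), '%s' % val

If every platform prints the same thing, the duplicate definitions really are redundant.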
Bruce From david at ar.media.kyoto-u.ac.jp Thu Mar 12 09:15:43 2009 From: david at ar.media.kyoto-u.ac.jp (David Cournapeau) Date: Thu, 12 Mar 2009 22:15:43 +0900 Subject: [Numpy-discussion] Implementing hashing protocol for dtypes In-Reply-To: <9457e7c80903120619r16cf3038ve84b06d4dadd8c8e@mail.gmail.com> References: <5b8d13220903111306j784b845an940b1b0b35f877e7@mail.gmail.com> <3d375d730903111336j2f354eecv2a33e3b7a1324ec3@mail.gmail.com> <5b8d13220903112049r772643bbt6d99dd14071ca699@mail.gmail.com> <3d375d730903112100k1258ca32qb89c4dd9423d24ad@mail.gmail.com> <5b8d13220903120513gb19b2dfn7712b7d972a6a726@mail.gmail.com> <9457e7c80903120619r16cf3038ve84b06d4dadd8c8e@mail.gmail.com> Message-ID: <49B90AFF.2040504@ar.media.kyoto-u.ac.jp> St?fan van der Walt wrote: > 2009/3/12 David Cournapeau : > >> Anyway, here is a first shot at it: >> >> http://codereview.appspot.com/26052 >> > > Design question: should [('x', float), ('y', float)] and [('t', > float), ('s', float)] hash to the same value or not? > According to: http://docs.python.org/reference/datamodel.html#object.__hash__ The only constraint is that a == b -> hash(a) == hash(b) (which is broken currently in numpy, even for builtin dtypes). The main problem is that I am not very clear yet on what a == b is for dtypes (the code for PyArray_EquivTypes goes through PyObject_Compare for compound types). In your example, both dtypes are not equal (and they do not hash the same). cheers, David From rmay31 at gmail.com Thu Mar 12 09:42:05 2009 From: rmay31 at gmail.com (Ryan May) Date: Thu, 12 Mar 2009 08:42:05 -0500 Subject: [Numpy-discussion] Intel MKL on Core2 system In-Reply-To: <200903120905.20466.faltet@pytables.org> References: <200903112020.18209.faltet@pytables.org> <200903120905.20466.faltet@pytables.org> Message-ID: On Thu, Mar 12, 2009 at 3:05 AM, Francesc Alted wrote: > A Wednesday 11 March 2009, Ryan May escrigu?: > > Thanks. That's actually pretty close to what I had. I was actually > > thinking that you were using only blas_opt and lapack_opt, since > > supposedly the [mkl] style section is deprecated. Thus far, I cannot > > get these to work with MKL. > > Well, my configuration was thought to link with the VML integrated in > the MKL, but I'd say that it would be similar for blas and lapack. > What's you configuration? What's the error you are running into? I can get it working now with either the [mkl] section like your config or the following config: [DEFAULT] include_dirs = /opt/intel/mkl/10.0.2.018/include/ library_dirs = /opt/intel/mkl/10.0.2.018/lib/em64t/:/usr/lib [blas] libraries = mkl_gf_lp64, mkl_gnu_thread, mkl_core, iomp5 [lapack] libraries = mkl_lapack, mkl_gf_lp64, mkl_gnu_thread, mkl_core, iomp5 It's just confusing I guess because if I change blas and lapack to blas_opt and lapack_opt, I cannot get it to work. The only reason I even care is that site.cfg.example leads me to believe that the *_opt sections are the way you're supposed to add them. Ryan -- Ryan May Graduate Research Assistant School of Meteorology University of Oklahoma -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From cournape at gmail.com Thu Mar 12 09:43:10 2009 From: cournape at gmail.com (David Cournapeau) Date: Thu, 12 Mar 2009 22:43:10 +0900 Subject: [Numpy-discussion] Portable macro to get NAN, INF, positive and negative zero In-Reply-To: <49B90BEA.9030702@gmail.com> References: <49B75D8A.7050507@ar.media.kyoto-u.ac.jp> <49B90BEA.9030702@gmail.com> Message-ID: <5b8d13220903120643n6eddee1dt1008b59c0ae042f8@mail.gmail.com> On Thu, Mar 12, 2009 at 10:19 PM, Bruce Southey wrote: > David Cournapeau wrote: >> Hi, >> >> ? ? For the record, I have just added the following functionalities to >> numpy, which may simplify some C code: >> ? ? - NPY_NAN/NPY_INFINITY/NPY_PZERO/NPY_NZERO: macros to get nan, inf, >> positive and negative zeros. Rationale: some code use NAN, _get_nan, >> etc... NAN is a GNU C extension, INFINITY is not available on many C >> compilers. The NPY_ macros are defined from the IEEE754 format, and as >> such should be very fast (the values should be inlined). >> ? ? - we can now use inline safely in numpy C code: it is defined to >> something recognized by the compiler or nothing if inline is not >> supported. It is NOT defined publicly to avoid namespace pollution. >> ? ? - NPY_INLINE is a macro which can be used publicly, and has the same >> usage as inline. >> >> cheers, >> >> David >> _______________________________________________ >> Numpy-discussion mailing list >> Numpy-discussion at scipy.org >> http://mail.scipy.org/mailman/listinfo/numpy-discussion >> > Hi, > I am curious how this relates to Zach's comment in the thread on > 'Infinity Definitions': It does not directly - but I implemented those macro after Pauli, Charles (Harris) and me worked on improving formatting; those macro replace several ad-hoc solutions through the numpy code base. Concerning formatting, there is much more consistency with python 2.6 (because python itself bypasses the C runtime and does the parsing itself), and we followed them. With numpy 1.3, you should almost never see anything else than nan/inf on any platform. There are still some cases where it fails, and some cases we can't do anything about (print '%s' % a, print a, print '%f' % a all go through different codepath, and we can't control at least one of them, I don't remember which one). > > If this is no longer a concern then we should be able to remove those > duplicate definitions and use of uppercase. Yes, we should also fix the pretty print options, so that arrays and not just scalar arrays print nicely: a = np.array([np.nan, 1, 2]) print a -> NaN, ... print a[0] -> nan But this is much easier, as the code is in python. cheers, David From david at ar.media.kyoto-u.ac.jp Thu Mar 12 09:30:20 2009 From: david at ar.media.kyoto-u.ac.jp (David Cournapeau) Date: Thu, 12 Mar 2009 22:30:20 +0900 Subject: [Numpy-discussion] Intel MKL on Core2 system In-Reply-To: References: <200903112020.18209.faltet@pytables.org> <200903120905.20466.faltet@pytables.org> Message-ID: <49B90E6C.3070707@ar.media.kyoto-u.ac.jp> Ryan May wrote: > > [DEFAULT] > include_dirs = /opt/intel/mkl/10.0.2.018/include/ > > library_dirs = /opt/intel/mkl/10.0.2.018/lib/em64t/:/usr/lib > > > [blas] > libraries = mkl_gf_lp64, mkl_gnu_thread, mkl_core, iomp5 > > [lapack] > libraries = mkl_lapack, mkl_gf_lp64, mkl_gnu_thread, mkl_core, iomp5 > > It's just confusing I guess because if I change blas and lapack to > blas_opt and lapack_opt, I cannot get it to work. 
Yes, the whole thing is very confusing; trying to understand it when I try to be compatible with it in numscons drove me crazy (the changes with default section handling in python 2.6 did no help). IMHO, we should get rid of all this at some point, and use something much simpler (one file, no sections, just straight LIBPATH + LIBS + CPPATH options), because the current code has gone much beyond the madness point. But it will break some configurations for sure. cheers, David From rmay31 at gmail.com Thu Mar 12 09:58:53 2009 From: rmay31 at gmail.com (Ryan May) Date: Thu, 12 Mar 2009 08:58:53 -0500 Subject: [Numpy-discussion] Intel MKL on Core2 system In-Reply-To: <49B90E6C.3070707@ar.media.kyoto-u.ac.jp> References: <200903112020.18209.faltet@pytables.org> <200903120905.20466.faltet@pytables.org> <49B90E6C.3070707@ar.media.kyoto-u.ac.jp> Message-ID: On Thu, Mar 12, 2009 at 8:30 AM, David Cournapeau < david at ar.media.kyoto-u.ac.jp> wrote: > Ryan May wrote: > > > > [DEFAULT] > > include_dirs = /opt/intel/mkl/10.0.2.018/include/ > > > > library_dirs = /opt/intel/mkl/10.0.2.018/lib/em64t/:/usr/lib > > > > > > [blas] > > libraries = mkl_gf_lp64, mkl_gnu_thread, mkl_core, iomp5 > > > > [lapack] > > libraries = mkl_lapack, mkl_gf_lp64, mkl_gnu_thread, mkl_core, iomp5 > > > > It's just confusing I guess because if I change blas and lapack to > > blas_opt and lapack_opt, I cannot get it to work. > > > Yes, the whole thing is very confusing; trying to understand it when I > try to be compatible with it in numscons drove me crazy (the changes > with default section handling in python 2.6 did no help). IMHO, we > should get rid of all this at some point, and use something much simpler > (one file, no sections, just straight LIBPATH + LIBS + CPPATH options), > because the current code has gone much beyond the madness point. But it > will break some configurations for sure. > Glad to hear it's not just me. I was beginning to think I was being thick headed.... Ryan -- Ryan May Graduate Research Assistant School of Meteorology University of Oklahoma Sent from: Norman Oklahoma United States. -------------- next part -------------- An HTML attachment was scrubbed... URL: From cournape at gmail.com Thu Mar 12 10:02:48 2009 From: cournape at gmail.com (David Cournapeau) Date: Thu, 12 Mar 2009 23:02:48 +0900 Subject: [Numpy-discussion] Error building SciPy SVN with NumPy SVN In-Reply-To: References: <5b8d13220903111300q70e955a8kb4e31606151bc34d@mail.gmail.com> Message-ID: <5b8d13220903120702j2a0cff0h29777fc3255b33c4@mail.gmail.com> On Thu, Mar 12, 2009 at 5:25 AM, Ryan May wrote: > That's fine.? I just wanted to make sure I didn't do something weird while > getting numpy built with MKL. It should be fixed in r6650 David From rmay31 at gmail.com Thu Mar 12 10:23:26 2009 From: rmay31 at gmail.com (Ryan May) Date: Thu, 12 Mar 2009 09:23:26 -0500 Subject: [Numpy-discussion] Error building SciPy SVN with NumPy SVN In-Reply-To: <5b8d13220903120702j2a0cff0h29777fc3255b33c4@mail.gmail.com> References: <5b8d13220903111300q70e955a8kb4e31606151bc34d@mail.gmail.com> <5b8d13220903120702j2a0cff0h29777fc3255b33c4@mail.gmail.com> Message-ID: On Thu, Mar 12, 2009 at 9:02 AM, David Cournapeau wrote: > On Thu, Mar 12, 2009 at 5:25 AM, Ryan May wrote: > > > That's fine. I just wanted to make sure I didn't do something weird > while > > getting numpy built with MKL. > > It should be fixed in r6650 > Fixed for me. I get a segfault running scipy.test(), but that's probably due to MKL. Thanks, David. 
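If a backtrace would help, I can run the test suite under gdb, something along these lines:

$ gdb --args python -c "import scipy; scipy.test()"
(gdb) run
... wait for the segfault ...
(gdb) bt

and post whatever MKL frames show up.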
Ryan -- Ryan May Graduate Research Assistant School of Meteorology University of Oklahoma Sent from: Norman Oklahoma United States. From cournape at gmail.com Thu Mar 12 10:55:34 2009 From: cournape at gmail.com (David Cournapeau) Date: Thu, 12 Mar 2009 23:55:34 +0900 Subject: [Numpy-discussion] Error building SciPy SVN with NumPy SVN In-Reply-To: References: <5b8d13220903111300q70e955a8kb4e31606151bc34d@mail.gmail.com> <5b8d13220903120702j2a0cff0h29777fc3255b33c4@mail.gmail.com> Message-ID: <5b8d13220903120755l5cf2d9a0xda3bb476788cad88@mail.gmail.com> On Thu, Mar 12, 2009 at 11:23 PM, Ryan May wrote: > > Fixed for me. I get a segfault running scipy.test(), but that's probably > due to MKL. Yes, it is. Scipy runs the test suite fine for me. David From rmay31 at gmail.com Thu Mar 12 11:10:23 2009 From: rmay31 at gmail.com (Ryan May) Date: Thu, 12 Mar 2009 10:10:23 -0500 Subject: [Numpy-discussion] Error building SciPy SVN with NumPy SVN In-Reply-To: <5b8d13220903120755l5cf2d9a0xda3bb476788cad88@mail.gmail.com> References: <5b8d13220903111300q70e955a8kb4e31606151bc34d@mail.gmail.com> <5b8d13220903120702j2a0cff0h29777fc3255b33c4@mail.gmail.com> <5b8d13220903120755l5cf2d9a0xda3bb476788cad88@mail.gmail.com> Message-ID: On Thu, Mar 12, 2009 at 9:55 AM, David Cournapeau wrote: > On Thu, Mar 12, 2009 at 11:23 PM, Ryan May wrote: > > > > > Fixed for me. I get a segfault running scipy.test(), but that's probably > > due to MKL. > > Yes, it is. Scipy runs the test suite fine for me. > While scipy builds, matplotlib's basemap toolkit spits this out: running install running build running config_cc unifing config_cc, config, build_clib, build_ext, build commands --compiler options running config_fc unifing config_fc, config, build_clib, build_ext, build commands --fcompiler options running build_src building extension "mpl_toolkits.basemap._proj" sources error: build/src.linux-x86_64-2.5/gfortran_vs2003_hack.c: No such file or directory Any ideas? Ryan -- Ryan May Graduate Research Assistant School of Meteorology University of Oklahoma Sent from: Norman Oklahoma United States. From faltet at pytables.org Thu Mar 12 11:11:05 2009 From: faltet at pytables.org (Francesc Alted) Date: Thu, 12 Mar 2009 16:11:05 +0100 Subject: [Numpy-discussion] Intel MKL on Core2 system In-Reply-To: References: <200903120905.20466.faltet@pytables.org> Message-ID: <200903121611.06152.faltet@pytables.org> A Thursday 12 March 2009, Ryan May escrigué: > I can get it working now with either the [mkl] section like your > config or the following config: > > [DEFAULT] > include_dirs = /opt/intel/mkl/10.0.2.018/include/ > library_dirs = /opt/intel/mkl/10.0.2.018/lib/em64t/:/usr/lib ^^^^^^^^^ I see that you are using a multi-directory path here. My understanding was that this is not supported by numpy.distutils, but apparently it worked for you (?), or if you get rid of the ':/usr/lib' trailing part of library_dirs it works ok too?
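That is, does a single-path variant like this still link (a sketch reusing your own entries):

[DEFAULT]
include_dirs = /opt/intel/mkl/10.0.2.018/include/
library_dirs = /opt/intel/mkl/10.0.2.018/lib/em64t/

[blas]
libraries = mkl_gf_lp64, mkl_gnu_thread, mkl_core, iomp5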
-- Francesc Alted From rmay31 at gmail.com Thu Mar 12 11:49:29 2009 From: rmay31 at gmail.com (Ryan May) Date: Thu, 12 Mar 2009 10:49:29 -0500 Subject: [Numpy-discussion] Intel MKL on Core2 system In-Reply-To: <200903121611.06152.faltet@pytables.org> References: <200903120905.20466.faltet@pytables.org> <200903121611.06152.faltet@pytables.org> Message-ID: On Thu, Mar 12, 2009 at 10:11 AM, Francesc Alted wrote: > A Thursday 12 March 2009, Ryan May escrigu?: > > I can get it working now with either the [mkl] section like your > > config or the following config: > > > > [DEFAULT] > > include_dirs = /opt/intel/mkl/10.0.2.018/include/ > > library_dirs = /opt/intel/mkl/10.0.2.018/lib/em64t/:/usr/lib > ^^^^^^^^^ > I see that you are using a multi-directory path here. My understanding > was that this is not supported by numpy.distutils, but apparently it > worked for you (?), or if you get rid of the ':/usr/lib' trailing part > of library_dirs it works ok too? > Well, if by multi-directory you mean the colon-separated list, this is what is documented in site.cfg.example and used by the gentoo ebuild on my system. I need the /usr/lib part so that it can pick up libblas.so and liblapack.so. Otherwise, it won't link in MKL. Ryan -- Ryan May Graduate Research Assistant School of Meteorology University of Oklahoma Sent from: Norman Oklahoma United States. -------------- next part -------------- An HTML attachment was scrubbed... URL: From cournape at gmail.com Thu Mar 12 13:00:40 2009 From: cournape at gmail.com (David Cournapeau) Date: Fri, 13 Mar 2009 02:00:40 +0900 Subject: [Numpy-discussion] Error building SciPy SVN with NumPy SVN In-Reply-To: References: <5b8d13220903111300q70e955a8kb4e31606151bc34d@mail.gmail.com> <5b8d13220903120702j2a0cff0h29777fc3255b33c4@mail.gmail.com> <5b8d13220903120755l5cf2d9a0xda3bb476788cad88@mail.gmail.com> Message-ID: <5b8d13220903121000r7857cb75vbae7b2cfbf6c3fb1@mail.gmail.com> On Fri, Mar 13, 2009 at 12:10 AM, Ryan May wrote: > On Thu, Mar 12, 2009 at 9:55 AM, David Cournapeau > wrote: >> >> On Thu, Mar 12, 2009 at 11:23 PM, Ryan May wrote: >> >> > >> > Fixed for me.? I get a segfault running scipy.test(), but that's >> > probably >> > due to MKL. >> >> Yes, it is. Scipy run the test suite fine for me. 
> > While scipy builds, matplotlib's basemap toolkit spits this out: > > running install > running build > running config_cc > unifing config_cc, config, build_clib, build_ext, build commands --compiler > options > running config_fc > unifing config_fc, config, build_clib, build_ext, build commands --fcompiler > options > running build_src > building extension "mpl_toolkits.basemap._proj" sources > error: build/src.linux-x86_64-2.5/gfortran_vs2003_hack.c: No such file or > directory Ok, I've just back out the changes in 6653 - let's not break everything now :) David From rmay31 at gmail.com Thu Mar 12 13:04:53 2009 From: rmay31 at gmail.com (Ryan May) Date: Thu, 12 Mar 2009 12:04:53 -0500 Subject: [Numpy-discussion] Error building SciPy SVN with NumPy SVN In-Reply-To: <5b8d13220903121000r7857cb75vbae7b2cfbf6c3fb1@mail.gmail.com> References: <5b8d13220903111300q70e955a8kb4e31606151bc34d@mail.gmail.com> <5b8d13220903120702j2a0cff0h29777fc3255b33c4@mail.gmail.com> <5b8d13220903120755l5cf2d9a0xda3bb476788cad88@mail.gmail.com> <5b8d13220903121000r7857cb75vbae7b2cfbf6c3fb1@mail.gmail.com> Message-ID: On Thu, Mar 12, 2009 at 12:00 PM, David Cournapeau wrote: > On Fri, Mar 13, 2009 at 12:10 AM, Ryan May wrote: > > On Thu, Mar 12, 2009 at 9:55 AM, David Cournapeau > > wrote: > >> > >> On Thu, Mar 12, 2009 at 11:23 PM, Ryan May wrote: > >> > >> > > >> > Fixed for me. I get a segfault running scipy.test(), but that's > >> > probably > >> > due to MKL. > >> > >> Yes, it is. Scipy run the test suite fine for me. > > > > While scipy builds, matplotlib's basemap toolkit spits this out: > > > > running install > > running build > > running config_cc > > unifing config_cc, config, build_clib, build_ext, build commands > --compiler > > options > > running config_fc > > unifing config_fc, config, build_clib, build_ext, build commands > --fcompiler > > options > > running build_src > > building extension "mpl_toolkits.basemap._proj" sources > > error: build/src.linux-x86_64-2.5/gfortran_vs2003_hack.c: No such file or > > directory > > Ok, I've just back out the changes in 6653 - let's not break everything now > :) > Thanks, that fixed it. Ryan -- Ryan May Graduate Research Assistant School of Meteorology University of Oklahoma Sent from: Norman Oklahoma United States. -------------- next part -------------- An HTML attachment was scrubbed... URL: From sturla at molden.no Thu Mar 12 14:15:32 2009 From: sturla at molden.no (Sturla Molden) Date: Thu, 12 Mar 2009 19:15:32 +0100 Subject: [Numpy-discussion] [SciPy-user] numpy aligned memory In-Reply-To: <49B924AA.9030509@astraw.com> References: <4d5dd8c20903081003m3322b5c4j21521f88eb647ae3@mail.gmail.com> <49B53ADB.9010809@molden.no> <49B924AA.9030509@astraw.com> Message-ID: <49B95144.2090304@molden.no> On 3/12/2009 4:05 PM, Andrew Straw wrote: > So, what's your take on having each row aligned? Is this also useful for > FFTW, for example? If so, we should perhaps come up with a better > routine for the cookbook. Ok, so here is how it could be done. It fails for a reason I'll attribute to a bug in NumPy. 
import numpy as np def _nextpow(b,isize): i = 1 while b**i < isize: i += 1 return b**i def aligned_zeros(shape, boundary=16, dtype=float, order='C', imagealign=True): if (not imagealign) or (not hasattr(shape,'__len__')): N = np.prod(shape) d = np.dtype(dtype) tmp = np.zeros(N * d.itemsize + boundary, dtype=np.uint8) address = tmp.__array_interface__['data'][0] offset = (boundary - address % boundary) % boundary return tmp[offset:offset+N*d.itemsize]\ .view(dtype=d)\ .reshape(shape, order=order) else: if order == 'C': ndim0 = shape[-1] dim0 = -1 else: ndim0 = shape[0] dim0 = 0 d = np.dtype(dtype) bshape = [i for i in shape] padding = boundary + _nextpow(boundary, d.itemsize) - d.itemsize bshape[dim0] = ndim0*d.itemsize + padding print bshape tmp = np.zeros(bshape, dtype=np.uint8, order=order) address = tmp.__array_interface__['data'][0] offset = (boundary - address % boundary) % boundary aligned_slice = slice(offset, offset + ndim0*d.itemsize) if tmp.flags['C_CONTIGUOUS']: tmp = tmp[..., aligned_slice] print tmp.shape else: tmp = tmp[aligned_slice, ...] print tmp.shape return tmp.view(dtype=dtype) # this will often fail, # probably a bug in numpy So lets reproduce the NumPy issue: >>> a = zeros((10,52), dtype=uint8) >>> b = a[:, 3:8*2+3] >>> b.shape (10, 16) >>> b.view(dtype=float) Traceback (most recent call last): File "", line 1, in b.view(dtype=float) ValueError: new type not compatible with array. However: >>> a = zeros((10,16), dtype=uint8) >>> a.view(dtype=float) array([[ 0., 0.], [ 0., 0.], [ 0., 0.], [ 0., 0.], [ 0., 0.], [ 0., 0.], [ 0., 0.], [ 0., 0.], [ 0., 0.], [ 0., 0.]]) Until we find a way to overcome this, it will be difficult to align rows to particular byte boundaries. It fails even if we make sure the padding is a multiple of the item size: padding = (boundary + _nextpow(boundary, d.itemsize) \ - d.itemsize) * d.itemsize Very annoying.. Using allocators in libraries (e.g. FFTW) would not help either, as NumPy would fail in the same way. Maybe we can force NumPy to do the right thing by hard-coding an array descriptor? We can do this in Cython though, as it supports pointers and double indirection. But it would be like using C. Sturla Molden From dagss at student.matnat.uio.no Thu Mar 12 14:59:48 2009 From: dagss at student.matnat.uio.no (Dag Sverre Seljebotn) Date: Thu, 12 Mar 2009 19:59:48 +0100 Subject: [Numpy-discussion] Poll: Semantics for % in Cython Message-ID: <49B95BA4.8010800@student.matnat.uio.no> (First off, is it OK to continue polling the NumPy list now and then on Cython language decisions? Or should I expect that any interested Cython users follow the Cython list?) In Python, if I write "-1 % 5", I get 4. However, in C if I write "-1 % 5" I get -1. The question is, what should I get in Cython if I write (a % b) where a and b are cdef ints? Should I [ ] Get 4, because it should behave just like in Python, avoiding surprises when adding types to existing algorithms (this will require extra logic and be a bit slower) [ ] Get -1, because they're C ints, and besides one isn't using Cython if one doesn't care about performance Whatever we do, this also affects the division operator, so that one in any case will have a==(a//b)*b+a%b. (Orthogonal to this, we can introduce compiler directives to change the meaning of the operator from the default in a code blocks, and/or make special functions for the semantics that are not chosen as default.) 
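To make the difference concrete, a small sketch of what the two choices mean once a and b are typed (the cdef declarations are the only Cython-specific bit here):

cdef int a = -1
cdef int b = 5
print a % b    # Python semantics: 4;   C semantics: -1
print a // b   # Python floor division: -1;   C truncation: 0

Either pair satisfies a == (a // b) * b + a % b, which is why the two operators have to change together.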
-- Dag Sverre From gael.varoquaux at normalesup.org Thu Mar 12 14:50:30 2009 From: gael.varoquaux at normalesup.org (Gael Varoquaux) Date: Thu, 12 Mar 2009 19:50:30 +0100 Subject: [Numpy-discussion] Poll: Semantics for % in Cython In-Reply-To: <49B95BA4.8010800@student.matnat.uio.no> References: <49B95BA4.8010800@student.matnat.uio.no> Message-ID: <20090312185030.GA17774@phare.normalesup.org> On Thu, Mar 12, 2009 at 07:59:48PM +0100, Dag Sverre Seljebotn wrote: > (First off, is it OK to continue polling the NumPy list now and then on > Cython language decisions? Or should I expect that any interested Cython > users follow the Cython list?) Yes, IMHO. > In Python, if I write "-1 % 5", I get 4. However, in C if I write "-1 % > 5" I get -1. The question is, what should I get in Cython if I write (a > % b) where a and b are cdef ints? Should I > [ ] Get 4, because it should behave just like in Python, avoiding > surprises when adding types to existing algorithms (this will require > extra logic and be a bit slower) > [ ] Get -1, because they're C ints, and besides one isn't using > Cython if one doesn't care about performance Behave like in Python. Cython should try to be as Python-like as possible, IMHO. I would like to think of it as an (optionally) statically-typed Python. My 2 cents, Gaël From vincent.thierion at ema.fr Thu Mar 12 14:37:48 2009 From: vincent.thierion at ema.fr (vincent.thierion at ema.fr) Date: Thu, 12 Mar 2009 19:37:48 +0100 Subject: [Numpy-discussion] Numpy and Scientific Python Message-ID: <20090312193748.8141784pq0tei740@webmail.ema.fr> Hello, I use numpy and Scientific Python for my work. I installed them in a way that lets me use them on a remote OS, by copying them and using sys.path.append. Many times it works, but sometimes (depending on the Python version) I receive this error: ImportError: $MYLIBFOLDER/site-packages/ numpy/core/multiarray.so: cannot open shared object file: No such file or directory Yet this file exists in the expected place. The error occurs on Python 2.3.4 (#1, Dec 11 2007, 18:02:43) [GCC 3.4.6 20060404 (Red Hat 3.4.6-9)] Thank you in advance Vincent ---------------------------------------------------- This message was sent by the EMA IMP webmail server. From bsouthey at gmail.com Thu Mar 12 16:21:38 2009 From: bsouthey at gmail.com (Bruce Southey) Date: Thu, 12 Mar 2009 15:21:38 -0500 Subject: [Numpy-discussion] Portable macro to get NAN, INF, positive and negative zero In-Reply-To: <5b8d13220903120643n6eddee1dt1008b59c0ae042f8@mail.gmail.com> References: <49B75D8A.7050507@ar.media.kyoto-u.ac.jp> <49B90BEA.9030702@gmail.com> <5b8d13220903120643n6eddee1dt1008b59c0ae042f8@mail.gmail.com> Message-ID: <49B96ED2.1010806@gmail.com> David Cournapeau wrote: > On Thu, Mar 12, 2009 at 10:19 PM, Bruce Southey wrote: > >> David Cournapeau wrote: >> >>> Hi, >>> >>> For the record, I have just added the following functionalities to >>> numpy, which may simplify some C code: >>> - NPY_NAN/NPY_INFINITY/NPY_PZERO/NPY_NZERO: macros to get nan, inf, >>> positive and negative zeros. Rationale: some code use NAN, _get_nan, >>> etc... NAN is a GNU C extension, INFINITY is not available on many C >>> compilers. The NPY_ macros are defined from the IEEE754 format, and as >>> such should be very fast (the values should be inlined). >>> - we can now use inline safely in numpy C code: it is defined to >>> something recognized by the compiler or nothing if inline is not >>> supported. It is NOT defined publicly to avoid namespace pollution.
>>> - NPY_INLINE is a macro which can be used publicly, and has the same >>> usage as inline. >>> >>> cheers, >>> >>> David >>> >> Hi, >> I am curious how this relates to Zach's comment in the thread on >> 'Infinity Definitions': > > It does not directly - but I implemented those macros after Pauli, > Charles (Harris) and I worked on improving formatting; those macros > replace several ad-hoc solutions throughout the numpy code base. > > Concerning formatting, there is much more consistency with python 2.6 > (because python itself bypasses the C runtime and does the parsing > itself), and we followed them. With numpy 1.3, you should almost never > see anything else than nan/inf on any platform. There are still some > cases where it fails, and some cases we can't do anything about (print > '%s' % a, print a, print '%f' % a all go through different codepaths, > and we can't control at least one of them, I don't remember which > one). > >> If this is no longer a concern then we should be able to remove those >> duplicate definitions and use of uppercase. >> > > Yes, we should also fix the pretty print options, so that arrays and > not just scalar arrays print nicely: > > a = np.array([np.nan, 1, 2]) > print a -> NaN, ... > print a[0] -> nan > > But this is much easier, as the code is in python. > > cheers, > > David > Okay, I have created ticket 1051 for this change with, hopefully, patches that address this. The patches remove these duplicate definitions and uppercase names, but these other usages should be deprecated (but I do not know how). After the changes, all tests pass on my Linux system for Python 2.4, 2.5 and 2.6. Regards Bruce From stefan at sun.ac.za Thu Mar 12 17:41:54 2009 From: stefan at sun.ac.za (=?ISO-8859-1?Q?St=E9fan_van_der_Walt?=) Date: Thu, 12 Mar 2009 23:41:54 +0200 Subject: [Numpy-discussion] Poll: Semantics for % in Cython In-Reply-To: <49B95BA4.8010800@student.matnat.uio.no> References: <49B95BA4.8010800@student.matnat.uio.no> Message-ID: <9457e7c80903121441r1a2e159by11212d1bb3a32bdf@mail.gmail.com> Hi Dag 2009/3/12 Dag Sverre Seljebotn : > (First off, is it OK to continue polling the NumPy list now and then on > Cython language decisions? Or should I expect that any interested Cython > users follow the Cython list?) Given that many of the subscribers make use of the NumPy support in Cython, I don't think they would mind; I, for one, don't. > In Python, if I write "-1 % 5", I get 4. However, in C if I write "-1 % > 5" I get -1. The question is, what should I get in Cython if I write (a > % b) where a and b are cdef ints? Should I > > [ ] Get 4, because it should behave just like in Python, avoiding > surprises when adding types to existing algorithms (this will require > extra logic and be a bit slower) I'd much prefer this option. When students struggle to get their code faster, my advice to them is: "run it through Cython, and if you are still not happy, start tweaking this and that". It would be much harder to take that route if you had to take a number of exceptional behaviours into account.
> (Orthogonal to this, we can introduce compiler directives to change the > meaning of the operator from the default in a code blocks, and/or make > special functions for the semantics that are not chosen as default.) In my experience, keeping the rules simple has a big benefit (the "programmer's brain cache" can only hold a small number of items -- a very good analogy made by Fernando Perez), so I would prefer not to have this option. Regards Stéfan From charlesr.harris at gmail.com Thu Mar 12 18:22:16 2009 From: charlesr.harris at gmail.com (Charles R Harris) Date: Thu, 12 Mar 2009 16:22:16 -0600 Subject: [Numpy-discussion] Poll: Semantics for % in Cython In-Reply-To: <49B95BA4.8010800@student.matnat.uio.no> References: <49B95BA4.8010800@student.matnat.uio.no> Message-ID: On Thu, Mar 12, 2009 at 12:59 PM, Dag Sverre Seljebotn < dagss at student.matnat.uio.no> wrote: > (First off, is it OK to continue polling the NumPy list now and then on > Cython language decisions? Or should I expect that any interested Cython > users follow the Cython list?) > > In Python, if I write "-1 % 5", I get 4. However, in C if I write "-1 % > 5" I get -1. The question is, what should I get in Cython if I write (a > % b) where a and b are cdef ints? Should I > I almost always want the python version, even in C, because I want the results to lie in the interval [0,5) like a good modulus function should ;) I suppose the question is: is '%' standing for the modulus or is it standing for the remainder in whatever version of division is being used. This is similar to the difference between the trunc and floor functions; I find using the floor function causes fewer problems, but it isn't the default. That said, I think it best to leave '%' with its C default and add a special modulus function for the python version. Changing its meaning in C-like code is going to confuse things. Chuck From stefan at sun.ac.za Thu Mar 12 18:29:36 2009 From: stefan at sun.ac.za (=?ISO-8859-1?Q?St=E9fan_van_der_Walt?=) Date: Fri, 13 Mar 2009 00:29:36 +0200 Subject: [Numpy-discussion] Poll: Semantics for % in Cython In-Reply-To: References: <49B95BA4.8010800@student.matnat.uio.no> Message-ID: <9457e7c80903121529u6fd98f7udc475191a9128dda@mail.gmail.com> 2009/3/13 Charles R Harris : > That said, I think it best to leave '%' with its C default and add a special > modulus function for the python version. Changing its meaning in C-like code > is going to confuse things. This is Cython code, so I think there is an argument to be made that it is Python-like! Stéfan From sturla at molden.no Thu Mar 12 18:45:02 2009 From: sturla at molden.no (Sturla Molden) Date: Thu, 12 Mar 2009 23:45:02 +0100 (CET) Subject: [Numpy-discussion] Poll: Semantics for % in Cython In-Reply-To: <9457e7c80903121529u6fd98f7udc475191a9128dda@mail.gmail.com> References: <49B95BA4.8010800@student.matnat.uio.no> <9457e7c80903121529u6fd98f7udc475191a9128dda@mail.gmail.com> Message-ID: <9219a32801d4bac88921285491654ea7.squirrel@webmail.uio.no> > 2009/3/13 Charles R Harris : >> That said, I think it best to leave '%' with its C default and add a >> special >> modulus function for the python version. Changing its meaning in C-like >> code >> is going to confuse things. > > This is Cython code, so I think there is an argument to be made that > it is Python-like!
I'll just repeat what I've already said on the Cython mailing list: I think C types should behave like C types and Python objects like Python objects. If a C long suddenly starts to return double when divided by another C long, then that will be a major source of confusion on my part. If I want the behaviour of Python integers, Cython lets me use Python objects. I don't declare a variable cdef long if I want it to behave like a Python int. Sturla Molden From robert.kern at gmail.com Thu Mar 12 19:05:10 2009 From: robert.kern at gmail.com (Robert Kern) Date: Thu, 12 Mar 2009 18:05:10 -0500 Subject: [Numpy-discussion] Poll: Semantics for % in Cython In-Reply-To: <9219a32801d4bac88921285491654ea7.squirrel@webmail.uio.no> References: <49B95BA4.8010800@student.matnat.uio.no> <9457e7c80903121529u6fd98f7udc475191a9128dda@mail.gmail.com> <9219a32801d4bac88921285491654ea7.squirrel@webmail.uio.no> Message-ID: <3d375d730903121605x545d12d3jf8207e5aaa893778@mail.gmail.com> On Thu, Mar 12, 2009 at 17:45, Sturla Molden wrote: > > > > >> 2009/3/13 Charles R Harris : >>> That said, I think it best to leave '%' with its C default and add a >>> special >>> modulus function for the python version. Changing its meaning in C-like >>> code >>> is going to confuse things. >> >> This is Cython code, so I think there is an argument to be made that >> it is Python-like! > > > I'll just repeat what I've already said on the Cython mailing list: > > I think C types should behave like C types and Python objects like Python > objects. If a C long suddenly starts to return double when divided by > another C long, then that will be a major source of confusion on my part. > If I want the behaviour of Python integers, Cython lets me use Python > objects. I don't declare a variable cdef long if I want it to behave like > a Python int. That may be part of the confusion. The expression "-1%5" has no variables. Perhaps Dag can clarify what he is asking about: # Constants? (No one uses just constants in expressions, # really, but consistency with the other choices will # affect this.) -1 % 5 # Explicitly declared C types? cdef long i, j, k i = -1 j = 5 k = i % j # Python types? i = -1 j = 5 k = i % j # A mixture? cdef long i i = -1 j = 5 k = i % j When I do (2147483647 + 2147483647) in current Cython, to choose another operation, does it use C types, or does it construct PyInts? I.e., do I get C wraparound arithmetic, or do I get a PyLong? I recommend making % behave consistently with the other operators; i.e. if + uses C semantics, % should, too. -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From shuwj5460 at 163.com Thu Mar 12 21:08:47 2009 From: shuwj5460 at 163.com (shuwj5460 at 163.com) Date: Fri, 13 Mar 2009 09:08:47 +0800 Subject: [Numpy-discussion] is it a bug? In-Reply-To: References: Message-ID: <20090313084847.E2A8.SHUWJ5460@163.com> > > On Wed, Mar 11, 2009 at 19:55, shuwj5460 at 163.com wrote: > > Hi, > > > > import numpy as np > > x = np.arange(30) > > x.shape = (2,3,5) > > > > idx = np.array([0,1]) > > e = x[0,idx,:] > > print e.shape > > #----> return (2,5). ok. > > > > idx = np.array([0,1]) > > e = x[0,:,idx] > > print e.shape > > > > #-----> return (2,3). I think the right answer should be (3,2). Is > > # ? ? ? it a bug here? my numpy version is 1.2.1. > > It's certainly weird, but it's working as designed. 
Fancy indexing via > arrays is a separate subsystem from indexing via slices. Basically, > fancy indexing decides the outermost shape of the result (e.g. the > leftmost items in the shape tuple). If there are any sliced axes, they > are *appended* to the end of that shape tuple. > x = np.arange(30) x.shape = (2,3,5) idx = np.array([0,1,3,4]) e = x[:,:,idx] print e.shape #---> return (2,3,4) just as I thought. e = x[0,:,idx] print e.shape #---> return (4,3). e = x[:,0,idx] print e.shape #---> return (2,4). not (4,2). why do these three cases execute so # differently? From robert.kern at gmail.com Thu Mar 12 21:07:04 2009 From: robert.kern at gmail.com (Robert Kern) Date: Thu, 12 Mar 2009 20:07:04 -0500 Subject: [Numpy-discussion] is it a bug? In-Reply-To: <9457e7c80903112334q6adf8df7v598e041c149dd8cb@mail.gmail.com> References: <20090312084423.F627.SHUWJ5460@163.com> <3d375d730903112002n1f7dfc90n9b3319644b7f516b@mail.gmail.com> <9457e7c80903112334q6adf8df7v598e041c149dd8cb@mail.gmail.com> Message-ID: <3d375d730903121807t7f2cc89fq483af70ba20897e7@mail.gmail.com> On Thu, Mar 12, 2009 at 01:34, Stéfan van der Walt wrote: > 2009/3/12 Robert Kern : >>> idx = np.array([0,1]) >>> e = x[0,:,idx] >>> print e.shape >>> >>> #-----> return (2,3). I think the right answer should be (3,2). Is >>> # it a bug here? my numpy version is 1.2.1. >> >> It's certainly weird, but it's working as designed. Fancy indexing via >> arrays is a separate subsystem from indexing via slices. Basically, >> fancy indexing decides the outermost shape of the result (e.g. the >> leftmost items in the shape tuple). If there are any sliced axes, they >> are *appended* to the end of that shape tuple. > > This was my understanding, but now I see: > > In [31]: x = np.random.random([4,5,6,7]) > > In [32]: idx = np.array([1,2]) > > In [33]: x[:, idx, idx, :].shape > Out[33]: (4, 2, 7) Hmm. Well, your guess is as good as mine at this point. -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From cournape at gmail.com Thu Mar 12 23:17:03 2009 From: cournape at gmail.com (David Cournapeau) Date: Fri, 13 Mar 2009 12:17:03 +0900 Subject: [Numpy-discussion] numpy via easy_install on windows In-Reply-To: <49B8ED16.4060906@esrf.fr> References: <5b8d13220903112038m21269ffbj8aaf0719c5242fc@mail.gmail.com> <5b8d13220903112039xc8bc283x74519bb3143bab9e@mail.gmail.com> <49B8DB18.8070503@esrf.fr> <49B8D8B8.9050202@ar.media.kyoto-u.ac.jp> <49B8ED16.4060906@esrf.fr> Message-ID: <5b8d13220903122017w4af35501p4376916f20200527@mail.gmail.com> On Thu, Mar 12, 2009 at 8:08 PM, Jon Wright wrote: > > I'd like to have numpy as a dependency being pulled into a virtualenv > automatically. Is that possible with the binary installer? I don't think so - but I would think that people using virtualenv are familiar with compiling software. I now remember that numpy could not be built from sources by easy_install, but I believe we fixed the problem. Would you mind trying a recent svn checkout? I would like this to be fixed if that's still a problem. Distributing eggs on windows would be too troublesome, I would prefer avoiding it if we can, David From oliphant at enthought.com Thu Mar 12 23:43:13 2009 From: oliphant at enthought.com (Travis E. Oliphant) Date: Thu, 12 Mar 2009 22:43:13 -0500 Subject: [Numpy-discussion] is it a bug?
In-Reply-To: <20090313084847.E2A8.SHUWJ5460@163.com> References: <20090313084847.E2A8.SHUWJ5460@163.com> Message-ID: <49B9D651.1090205@enthought.com> shuwj5460 at 163.com wrote: >> >> It's certainly weird, but it's working as designed. Fancy indexing via >> arrays is a separate subsystem from indexing via slices. Basically, >> fancy indexing decides the outermost shape of the result (e.g. the >> leftmost items in the shape tuple). If there are any sliced axes, they >> are *appended* to the end of that shape tuple. >> >> > x = np.arange(30) > x.shape = (2,3,5) > > idx = np.array([0,1,3,4]) > e = x[:,:,idx] > print e.shape > #---> return (2,3,4) just as me think. > > e = x[0,:,idx] > print e.shape > #---> return (4,3). > > e = x[:,0,idx] > print e.shape > #---> return (2,4). not (4,2). why these three cases excute so > # differently? > This is probably best characterized as a wart stemming from a use-case oversight in the approach created to handle mixing simple indexing and advanced indexing. Basically, you can understand what happens by noting that when when scalars are used in combination with index arrays, they are treated as if they were part of an indexing array. In other words 0 is interpreted as [0] (or 1 is interpreted as [1]) when combined with advanced indexing. This is in part so that scalars will be broadcast to the shape of any indexing array to correctly handle indexing in other use-cases. Then, when advanced indexing is combined with ':' or '...' some special rules show up in determining the output shape that have to do with resolving potential ambiguities. It is arguable that the rules for resolving ambiguities are a bit simplistic and therefore don't handle some real use-cases very well like the case you show. On the other hand, simple rules are better even if the rules about combining ':' and '...' and advanced indexing are not well-known. So, to be a little more clear about what is going on, define idx2 = [0] and then ask what should the shapes of x[idx2, :, idx] and x[:, idx2, idx] be? Remember that advanced indexing will broadcast idx2 and idx to the same shape ( in this case (4,) but they could broadcast to any shape at all). This broadcasted result shape must be somehow combined with the shape resulting from performing the slice selection. With x[:, idx2, idx] it is unambiguous to tack the broadcasted shape to the end of the shape resulting from the slice-selection (i.e. x[:,0,0].shape). This leads to the (2,4) result. Now, what about x[idx2, :, idx]? The idx2 and idx are still broadcast to the same shape which could be any shape (in this particular case it is (4,)), but the slice-selection is done "in the middle". So, where should the shape of the slice selection (i.e. x[0,:,0].shape) be placed in the output shape? At the time this is determined, there is no notion that idx2 "came from a scalar" and so it could have come from any array. Therefore, when there is this kind of ambiguity, the code always places the broadcasted shape at the beginning. Thus, the result is (4,) + (3,) --> (4.3). Perhaps it is a bit surprising in this particular case, but it is working as designed. I admit that this particular asymmetry does create some cognitive dissonance which leaves something to be desired. -Travis From oliphant at enthought.com Thu Mar 12 23:48:28 2009 From: oliphant at enthought.com (Travis E. Oliphant) Date: Thu, 12 Mar 2009 22:48:28 -0500 Subject: [Numpy-discussion] is it a bug? 
In-Reply-To: <3d375d730903121807t7f2cc89fq483af70ba20897e7@mail.gmail.com> References: <20090312084423.F627.SHUWJ5460@163.com> <3d375d730903112002n1f7dfc90n9b3319644b7f516b@mail.gmail.com> <9457e7c80903112334q6adf8df7v598e041c149dd8cb@mail.gmail.com> <3d375d730903121807t7f2cc89fq483af70ba20897e7@mail.gmail.com> Message-ID: <49B9D78C.1000309@enthought.com> Robert Kern wrote: > On Thu, Mar 12, 2009 at 01:34, St?fan van der Walt wrote: > >> 2009/3/12 Robert Kern : >> >>>> idx = np.array([0,1]) >>>> e = x[0,:,idx] >>>> print e.shape >>>> >>>> #-----> return (2,3). I think the right answer should be (3,2). Is >>>> # it a bug here? my numpy version is 1.2.1. >>>> >>> It's certainly weird, but it's working as designed. Fancy indexing via >>> arrays is a separate subsystem from indexing via slices. Basically, >>> fancy indexing decides the outermost shape of the result (e.g. the >>> leftmost items in the shape tuple). If there are any sliced axes, they >>> are *appended* to the end of that shape tuple. >>> >> This was my understanding, but now I see: >> >> In [31]: x = np.random.random([4,5,6,7]) >> >> In [32]: idx = np.array([1,2]) >> >> In [33]: x[:, idx, idx, :].shape >> Out[33]: (4, 2, 7) >> > > Hmm. Well, your guess is as good as mine at this point. > > Referencing my previous post on this topic. In this case, it is unambiguous to replace dimensions 1 and 2 with the result of broadcasting idx and idx together. Thus the (5,6) dimensions is replaced by the (2,) result of indexing leaving the outer dimensions in-tact, thus (4,2,7) is the result. I could be persuaded that this attempt to differentiate "unambiguous" from "ambiguous" sub-space replacements was mis-guided and we should have stuck with the simpler rule expressed above. But, it seemed so aesthetically pleasing to swap-out the indexed sub-space when it was possible to do it. -Travis From david at ar.media.kyoto-u.ac.jp Fri Mar 13 01:42:04 2009 From: david at ar.media.kyoto-u.ac.jp (David Cournapeau) Date: Fri, 13 Mar 2009 14:42:04 +0900 Subject: [Numpy-discussion] A question about import in numpy and in place build Message-ID: <49B9F22C.7010800@ar.media.kyoto-u.ac.jp> Hi, While making sure in-place builds work, I got the following problem: python setup.py build_ext -i python -c "import numpy as np; np.test()" -> many errors The error are all import errors: Traceback (most recent call last): File "/usr/media/src/dsp/numpy/git/numpy/tests/test_ctypeslib.py", line 83, in test_shape self.assert_(p.from_param(np.array([[1,2]]))) File "numpy/ctypeslib.py", line 150, in from_param return obj.ctypes File "numpy/core/__init__.py", line 27, in __all__ += numeric.__all__ NameError: name 'numeric' is not defined Now, in the numpy/core/__init__.py, there are some "from numeric import *" lines, but no "import numeric". So indeed numeric is not defined. But why does this work for 'normal' numpy builds ? I want to be sure I don't introduce some subtle issues before fixing the problem the obvious way, cheers, David From stefan at sun.ac.za Fri Mar 13 02:00:30 2009 From: stefan at sun.ac.za (=?ISO-8859-1?Q?St=E9fan_van_der_Walt?=) Date: Fri, 13 Mar 2009 08:00:30 +0200 Subject: [Numpy-discussion] is it a bug? 
In-Reply-To: <49B9D78C.1000309@enthought.com> References: <20090312084423.F627.SHUWJ5460@163.com> <3d375d730903112002n1f7dfc90n9b3319644b7f516b@mail.gmail.com> <9457e7c80903112334q6adf8df7v598e041c149dd8cb@mail.gmail.com> <3d375d730903121807t7f2cc89fq483af70ba20897e7@mail.gmail.com> <49B9D78C.1000309@enthought.com> Message-ID: <9457e7c80903122300t73302ef6u572298e6f848b0f3@mail.gmail.com> Hey Travis! 2009/3/13 Travis E. Oliphant : > Referencing my previous post on this topic. ? In this case, it is > unambiguous to replace dimensions 1 and 2 with the result of > broadcasting idx and idx together. ? Thus the (5,6) dimensions is > replaced by the (2,) result of indexing leaving the outer dimensions > in-tact, ?thus (4,2,7) is the result. > > I could be persuaded that this attempt to differentiate "unambiguous" > from "ambiguous" sub-space replacements was mis-guided and we should > have stuck with the simpler rule expressed above. ? ?But, it seemed so > aesthetically pleasing to swap-out the indexed sub-space when it was > possible to do it. Thank you for the explanation! It makes sense, intuitively, it is just hard to explain all these rules to newcomers. It also makes it a bit more difficult to tell a machine how to interpret the result of an indexing operation. Are we too far down the road to change this behaviour? I guess some code may already depend on it. Have a great Friday, St?fan From robert.kern at gmail.com Fri Mar 13 02:05:37 2009 From: robert.kern at gmail.com (Robert Kern) Date: Fri, 13 Mar 2009 01:05:37 -0500 Subject: [Numpy-discussion] A question about import in numpy and in place build In-Reply-To: <49B9F22C.7010800@ar.media.kyoto-u.ac.jp> References: <49B9F22C.7010800@ar.media.kyoto-u.ac.jp> Message-ID: <3d375d730903122305x4e8e60ddh36c0cf9da9715303@mail.gmail.com> On Fri, Mar 13, 2009 at 00:42, David Cournapeau wrote: > Hi, > > ? ?While making sure in-place builds work, I got the following problem: > > python setup.py build_ext -i > python -c "import numpy as np; np.test()" > -> many errors > > The error are all import errors: > > Traceback (most recent call last): > ?File "/usr/media/src/dsp/numpy/git/numpy/tests/test_ctypeslib.py", > line 83, in test_shape > ? ?self.assert_(p.from_param(np.array([[1,2]]))) > ?File "numpy/ctypeslib.py", line 150, in from_param > ? ?return obj.ctypes > ?File "numpy/core/__init__.py", line 27, in > ? ?__all__ += numeric.__all__ > NameError: name 'numeric' is not defined > > Now, in the numpy/core/__init__.py, there are some "from numeric import > *" lines, but no "import numeric". So indeed numeric is not defined. But > why does this work for 'normal' numpy builds ? I want to be sure I don't > introduce some subtle issues before fixing the problem the obvious way, When it does work, the reason is because the import mechanism will place the "numeric" module into the "numpy.core" namespace as soon as it can, so it is usually available in the __init__ after a "from numeric import *". nose tries to control imports a little more tightly as it navigates packages looking for tests, so it can sometimes expose corner cases like this. In any case, it's okay to change the __init__.py's to be explicit about doing both "import numeric" and "from numeric import *". -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." 
-- Umberto Eco From david at ar.media.kyoto-u.ac.jp Fri Mar 13 02:02:36 2009 From: david at ar.media.kyoto-u.ac.jp (David Cournapeau) Date: Fri, 13 Mar 2009 15:02:36 +0900 Subject: [Numpy-discussion] A question about import in numpy and in place build In-Reply-To: <3d375d730903122305x4e8e60ddh36c0cf9da9715303@mail.gmail.com> References: <49B9F22C.7010800@ar.media.kyoto-u.ac.jp> <3d375d730903122305x4e8e60ddh36c0cf9da9715303@mail.gmail.com> Message-ID: <49B9F6FC.9060502@ar.media.kyoto-u.ac.jp> Robert Kern wrote: > > When it does work, the reason is because the import mechanism will > place the "numeric" module into the "numpy.core" namespace as soon as > it can, so it is usually available in the __init__ after a "from > numeric import *". nose tries to control imports a little more tightly > as it navigates packages looking for tests, so it can sometimes expose > corner cases like this. > Ok, thanks for the explanation. > In any case, it's okay to change the __init__.py's to be explicit > about doing both "import numeric" and "from numeric import *". > Is adding additional imports fine too ? Or should we fix those in the unittest instead to avoid more namespace pollution ? David From robert.kern at gmail.com Fri Mar 13 02:22:48 2009 From: robert.kern at gmail.com (Robert Kern) Date: Fri, 13 Mar 2009 01:22:48 -0500 Subject: [Numpy-discussion] A question about import in numpy and in place build In-Reply-To: <49B9F6FC.9060502@ar.media.kyoto-u.ac.jp> References: <49B9F22C.7010800@ar.media.kyoto-u.ac.jp> <3d375d730903122305x4e8e60ddh36c0cf9da9715303@mail.gmail.com> <49B9F6FC.9060502@ar.media.kyoto-u.ac.jp> Message-ID: <3d375d730903122322s5d29411awbb52433a796fac57@mail.gmail.com> On Fri, Mar 13, 2009 at 01:02, David Cournapeau wrote: > Robert Kern wrote: >> >> When it does work, the reason is because the import mechanism will >> place the "numeric" module into the "numpy.core" namespace as soon as >> it can, so it is usually available in the __init__ after a "from >> numeric import *". nose tries to control imports a little more tightly >> as it navigates packages looking for tests, so it can sometimes expose >> corner cases like this. >> > > Ok, thanks for the explanation. > >> In any case, it's okay to change the __init__.py's to be explicit >> about doing both "import numeric" and "from numeric import *". > > Is adding additional imports fine too ? Or should we fix those in the > unittest instead to avoid more namespace pollution ? What do you mean? -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From dagss at student.matnat.uio.no Fri Mar 13 02:41:07 2009 From: dagss at student.matnat.uio.no (Dag Sverre Seljebotn) Date: Fri, 13 Mar 2009 07:41:07 +0100 Subject: [Numpy-discussion] Poll: Semantics for % in Cython In-Reply-To: <3d375d730903121605x545d12d3jf8207e5aaa893778@mail.gmail.com> References: <49B95BA4.8010800@student.matnat.uio.no> <9457e7c80903121529u6fd98f7udc475191a9128dda@mail.gmail.com> <9219a32801d4bac88921285491654ea7.squirrel@webmail.uio.no> <3d375d730903121605x545d12d3jf8207e5aaa893778@mail.gmail.com> Message-ID: <49BA0003.7010602@student.matnat.uio.no> Robert Kern wrote: > On Thu, Mar 12, 2009 at 17:45, Sturla Molden wrote: >> >> >> >>> 2009/3/13 Charles R Harris : >>>> That said, I think it best to leave '%' with its C default and add a >>>> special >>>> modulus function for the python version. 
Changing its meaning in C-like >>>> code >>>> is going to confuse things. >>> This is Cython code, so I think there is an argument to be made that >>> it is Python-like! >> >> I'll just repeat what I've already said on the Cython mailing list: >> >> I think C types should behave like C types and Python objects like Python >> objects. If a C long suddenly starts to return double when divided by >> another C long, then that will be a major source of confusion on my part. >> If I want the behaviour of Python integers, Cython lets me use Python >> objects. I don't declare a variable cdef long if I want it to behave like >> a Python int. Whether division returns float or not is an orthogonal and unrelated issue (and when it does, which is not the default, // is the C division operator). When we say that this affects division; what we mean is that -7 // 6 returns -2 in Python; so that (-7 // 6)*6 + (-7 % 6) == -2*6 + 5 == -7. With C behaviour of % and /, I believe -7 // 6 == -1. (And when I use // it is to be unambigious; by default you can use / as well for the same thing.) > That may be part of the confusion. The expression "-1%5" has no > variables. Perhaps Dag can clarify what he is asking about: > > # Constants? (No one uses just constants in expressions, > # really, but consistency with the other choices will > # affect this.) > -1 % 5 > > # Explicitly declared C types? > cdef long i, j, k > i = -1 > j = 5 > k = i % j This one is what I'm really asking about. > When I do (2147483647 + 2147483647) in current Cython, to choose > another operation, does it use C types, or does it construct PyInts? > I.e., do I get C wraparound arithmetic, or do I get a PyLong? C wraparound. Suggestions welcome :-) > I recommend making % behave consistently with the other operators; > i.e. if + uses C semantics, % should, too. -- Dag Sverre From david at ar.media.kyoto-u.ac.jp Fri Mar 13 02:13:25 2009 From: david at ar.media.kyoto-u.ac.jp (David Cournapeau) Date: Fri, 13 Mar 2009 15:13:25 +0900 Subject: [Numpy-discussion] A question about import in numpy and in place build In-Reply-To: <3d375d730903122322s5d29411awbb52433a796fac57@mail.gmail.com> References: <49B9F22C.7010800@ar.media.kyoto-u.ac.jp> <3d375d730903122305x4e8e60ddh36c0cf9da9715303@mail.gmail.com> <49B9F6FC.9060502@ar.media.kyoto-u.ac.jp> <3d375d730903122322s5d29411awbb52433a796fac57@mail.gmail.com> Message-ID: <49B9F985.4050706@ar.media.kyoto-u.ac.jp> Robert Kern wrote: >> Is adding additional imports fine too ? Or should we fix those in the >> unittest instead to avoid more namespace pollution ? >> > > What do you mean? > > For example, we have: ====================================================================== ERROR: Failure: ImportError (cannot import name format) ---------------------------------------------------------------------- Traceback (most recent call last): File "/var/lib/python-support/python2.5/nose/loader.py", line 364, in loadTestsFromName addr.filename, addr.module) File "/var/lib/python-support/python2.5/nose/importer.py", line 39, in importFromPath return self.importFromDir(dir_path, fqname) File "/var/lib/python-support/python2.5/nose/importer.py", line 84, in importFromDir mod = load_module(part_fqname, fh, filename, desc) File "/usr/media/src/dsp/numpy/git/numpy/lib/tests/test_format.py", line 286, in from numpy.lib import format ImportError: cannot import name format But there is no numpy.lib.format.py-related import at all in numpy.lib.__init__.py. 
cheers, David From robert.kern at gmail.com Fri Mar 13 02:34:02 2009 From: robert.kern at gmail.com (Robert Kern) Date: Fri, 13 Mar 2009 01:34:02 -0500 Subject: [Numpy-discussion] Poll: Semantics for % in Cython In-Reply-To: <49BA0003.7010602@student.matnat.uio.no> References: <49B95BA4.8010800@student.matnat.uio.no> <9457e7c80903121529u6fd98f7udc475191a9128dda@mail.gmail.com> <9219a32801d4bac88921285491654ea7.squirrel@webmail.uio.no> <3d375d730903121605x545d12d3jf8207e5aaa893778@mail.gmail.com> <49BA0003.7010602@student.matnat.uio.no> Message-ID: <3d375d730903122334i399fa28fgaccd05990de76c5d@mail.gmail.com> On Fri, Mar 13, 2009 at 01:41, Dag Sverre Seljebotn wrote: > Robert Kern wrote: >> That may be part of the confusion. The expression "-1%5" has no >> variables. Perhaps Dag can clarify what he is asking about: >> >> ? # Constants? ?(No one uses just constants in expressions, >> ? # really, but consistency with the other choices will >> ? # affect this.) >> ? -1 % 5 >> >> ? # Explicitly declared C types? >> ? cdef long i, j, k >> ? i = -1 >> ? j = 5 >> ? k = i % j > > This one is what I'm really asking about. My opinion on this is that C semantics have been explicitly requested, so they should be used. One possibility (that may be opening a can of worms) is to have two sets of operators, one that does "native" semantics (C for cdef longs, Python for Python ints) and one that does Python semantics even on cdef longs. I leave it to you to decide which one gets blessed with "%" and which has to use the alternate ("~%"? there's a whole PEP sitting around which goes over various options). -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From robert.kern at gmail.com Fri Mar 13 02:38:12 2009 From: robert.kern at gmail.com (Robert Kern) Date: Fri, 13 Mar 2009 01:38:12 -0500 Subject: [Numpy-discussion] A question about import in numpy and in place build In-Reply-To: <49B9F985.4050706@ar.media.kyoto-u.ac.jp> References: <49B9F22C.7010800@ar.media.kyoto-u.ac.jp> <3d375d730903122305x4e8e60ddh36c0cf9da9715303@mail.gmail.com> <49B9F6FC.9060502@ar.media.kyoto-u.ac.jp> <3d375d730903122322s5d29411awbb52433a796fac57@mail.gmail.com> <49B9F985.4050706@ar.media.kyoto-u.ac.jp> Message-ID: <3d375d730903122338r766041b2j69eb02781adfe4e3@mail.gmail.com> On Fri, Mar 13, 2009 at 01:13, David Cournapeau wrote: > Robert Kern wrote: >>> Is adding additional imports fine too ? Or should we fix those in the >>> unittest instead to avoid more namespace pollution ? >>> >> >> What do you mean? >> >> > > For example, we have: > > ====================================================================== > ERROR: Failure: ImportError (cannot import name format) > ---------------------------------------------------------------------- > Traceback (most recent call last): > ?File "/var/lib/python-support/python2.5/nose/loader.py", line 364, in > loadTestsFromName > ? ?addr.filename, addr.module) > ?File "/var/lib/python-support/python2.5/nose/importer.py", line 39, in > importFromPath > ? ?return self.importFromDir(dir_path, fqname) > ?File "/var/lib/python-support/python2.5/nose/importer.py", line 84, in > importFromDir > ? ?mod = load_module(part_fqname, fh, filename, desc) > ?File "/usr/media/src/dsp/numpy/git/numpy/lib/tests/test_format.py", > line 286, in > ? 
?from numpy.lib import format > ImportError: cannot import name format > > But there is no numpy.lib.format.py-related import at all in > numpy.lib.__init__.py. There shouldn't need to be (and also, there shouldn't be, in this case). That's an odd bug in nose, then. It should be able to import a module from a package. Nothing needs to be in __init__.py for that to work. FWIW, I just change to a different directory, and the in-place build tests fine. -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From david at ar.media.kyoto-u.ac.jp Fri Mar 13 02:36:13 2009 From: david at ar.media.kyoto-u.ac.jp (David Cournapeau) Date: Fri, 13 Mar 2009 15:36:13 +0900 Subject: [Numpy-discussion] A question about import in numpy and in place build In-Reply-To: <3d375d730903122338r766041b2j69eb02781adfe4e3@mail.gmail.com> References: <49B9F22C.7010800@ar.media.kyoto-u.ac.jp> <3d375d730903122305x4e8e60ddh36c0cf9da9715303@mail.gmail.com> <49B9F6FC.9060502@ar.media.kyoto-u.ac.jp> <3d375d730903122322s5d29411awbb52433a796fac57@mail.gmail.com> <49B9F985.4050706@ar.media.kyoto-u.ac.jp> <3d375d730903122338r766041b2j69eb02781adfe4e3@mail.gmail.com> Message-ID: <49B9FEDD.6070801@ar.media.kyoto-u.ac.jp> Robert Kern wrote: > > There shouldn't need to be (and also, there shouldn't be, in this > case). That's an odd bug in nose, then. It should be able to import a > module from a package. Nothing needs to be in __init__.py for that to > work. > > FWIW, I just change to a different directory, and the in-place build tests fine. > Yes, I did not think about testing this case, and it runs fine for me too. In that case, I guess it is too much of a narrow case to do anything about, David From fperez.net at gmail.com Fri Mar 13 03:23:57 2009 From: fperez.net at gmail.com (Fernando Perez) Date: Fri, 13 Mar 2009 00:23:57 -0700 Subject: [Numpy-discussion] Poll: Semantics for % in Cython In-Reply-To: <3d375d730903122334i399fa28fgaccd05990de76c5d@mail.gmail.com> References: <49B95BA4.8010800@student.matnat.uio.no> <9457e7c80903121529u6fd98f7udc475191a9128dda@mail.gmail.com> <9219a32801d4bac88921285491654ea7.squirrel@webmail.uio.no> <3d375d730903121605x545d12d3jf8207e5aaa893778@mail.gmail.com> <49BA0003.7010602@student.matnat.uio.no> <3d375d730903122334i399fa28fgaccd05990de76c5d@mail.gmail.com> Message-ID: On Thu, Mar 12, 2009 at 11:34 PM, Robert Kern wrote: > One possibility (that may be opening a can of worms) is to have two > sets of operators, one that does "native" semantics (C for cdef longs, > Python for Python ints) and one that does Python semantics even on > cdef longs. I leave it to you to decide which one gets blessed with > "%" and which has to use the alternate ("~%"? there's a whole PEP > sitting around which goes over various options). Without going into the whole pep 225 discussion, would it make sense for this particular case only, to consider instead a new %% operator? It could be the partner to the (/,//) pair that provide Python/C semantics for division, perhaps. That way, we'd know that the division-like operators come in 2 variants. We already know that the moment we do cdef i that thing will not behave like a python int (e.g., its overflow behavior becomes constrained to what happens inside of a finite bit-width, instead of having Python's auto-growth into arbitrary length ints). 
So it seems acceptable to me that once I cdef integer variables, I'll need to keep the C/Python semantics in mind for / and %, and having pairs (/,//), (%,%%) of operators to access each type of behavior sounds reasonable to me. Just a thought. Cheers, f From dagss at student.matnat.uio.no Fri Mar 13 04:27:09 2009 From: dagss at student.matnat.uio.no (Dag Sverre Seljebotn) Date: Fri, 13 Mar 2009 09:27:09 +0100 Subject: [Numpy-discussion] Poll: Semantics for % in Cython In-Reply-To: References: <49B95BA4.8010800@student.matnat.uio.no> <9457e7c80903121529u6fd98f7udc475191a9128dda@mail.gmail.com> <9219a32801d4bac88921285491654ea7.squirrel@webmail.uio.no> <3d375d730903121605x545d12d3jf8207e5aaa893778@mail.gmail.com> <49BA0003.7010602@student.matnat.uio.no> <3d375d730903122334i399fa28fgaccd05990de76c5d@mail.gmail.com> Message-ID: <49BA18DD.9060804@student.matnat.uio.no> Fernando Perez wrote: > On Thu, Mar 12, 2009 at 11:34 PM, Robert Kern wrote: > > >> One possibility (that may be opening a can of worms) is to have two >> sets of operators, one that does "native" semantics (C for cdef longs, >> Python for Python ints) and one that does Python semantics even on >> cdef longs. I leave it to you to decide which one gets blessed with >> "%" and which has to use the alternate ("~%"? there's a whole PEP >> sitting around which goes over various options). >> > > Without going into the whole pep 225 discussion, would it make sense > for this particular case only, to consider instead a new %% operator? > This exact proposal was up on the Cython list. The fear was that if Python decides to use this operator for something else in the future... Please note that there are 3 different division operators: - / under Py3/future import, which always returns float ("truediv") - // (which is the same as / in Py2), which always floors the result, i.e. -7 // 6 == -2 - The C /, which truncates the result, i.e. -7 / 6 == -1 So there's no directly corresponding pair of // and %% (one would need to introduce /// or similar for C division in addition, in that case). Dag Sverre From schut at sarvision.nl Fri Mar 13 06:13:27 2009 From: schut at sarvision.nl (Vincent Schut) Date: Fri, 13 Mar 2009 11:13:27 +0100 Subject: [Numpy-discussion] is it a bug? In-Reply-To: <49B9D651.1090205@enthought.com> References: <20090313084847.E2A8.SHUWJ5460@163.com> <49B9D651.1090205@enthought.com> Message-ID: Travis E. Oliphant wrote: > shuwj5460 at 163.com wrote: snipsnip Travis, thanks for the excellent explanation! It clears up something which I think is related to this, and which I've been wanting to ask on the ml for some time already. Now here's the case. I often have 4d arrays that are actually related sets of satellite imagery, and have the form of [date, band, y, x]. These can get pretty large, so I like to prevent too much broadcasting or reshape-copy-ing when indexing to save some memory. However, I regularly have to apply some boolean index of [date, y, x] to each of the band dimensions (think of the bool mask as a threshold based on just one of the bands). Currently I usually loop over all band indices, e.g. for b in range(data.shape[1]): data[:, b, :, :][mask] = 0 Would there be a way to do this in a more numpy-like fashion that is also memory-efficient? Thanks, Vincent.
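One loop-free possibility for Vincent's case, sketched under the assumption that data has shape (dates, bands, y, x) and mask has shape (dates, y, x); since a transpose is only a view, moving the band axis to the front lets the boolean mask cover the other three axes in a single assignment, with no band-sized temporary:

    import numpy as np

    dates, bands, ny, nx = 3, 4, 5, 6            # small stand-in sizes
    data = np.random.rand(dates, bands, ny, nx)
    mask = data[:, 0] > 0.5                      # boolean mask, shape (dates, y, x)

    view = data.transpose(1, 0, 2, 3)            # shape (bands, dates, y, x), still a view
    view[:, mask] = 0                            # zeros every band wherever mask is True

Because view shares memory with data, the assignment modifies data in place.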
From dineshbvadhia at hotmail.com Fri Mar 13 06:15:38 2009 From: dineshbvadhia at hotmail.com (Dinesh B Vadhia) Date: Fri, 13 Mar 2009 03:15:38 -0700 Subject: [Numpy-discussion] Sorting large numbers of co-ordinate pairs Message-ID: I have a large number (> 1bn) of (32-bit) integer co-ordinates (i, j) in a file. The i are ordered and the j unordered eg. ... 6940, 22886 6940, 38277 6940, 43788 7007, 0 7007, 2362 7007, 34 etc. ... I want to create (j, i) with j ordered and i unordered and store in a file ie. ... 38277, 567 38277, 90023 38277, 6940 43788, 5672 43788, 98 etc ... My computers have sufficient memory (2gb on one and 8gb on another). Any ideas how I could do this using numpy? Dinesh -------------- next part -------------- An HTML attachment was scrubbed... URL: From sturla at molden.no Fri Mar 13 06:57:40 2009 From: sturla at molden.no (Sturla Molden) Date: Fri, 13 Mar 2009 11:57:40 +0100 (CET) Subject: [Numpy-discussion] Sorting large numbers of co-ordinate pairs In-Reply-To: References: Message-ID: If you just want i to be unordered, use numpy.argsort on j. S.M. > I have a large number (> 1bn) of (32-bit) integer co-ordinates (i, j) in a > file. The i are ordered and the j unordered eg. > ... > 6940, 22886 > 6940, 38277 > 6940, 43788 > 7007, 0 > 7007, 2362 > 7007, 34 > etc. > ... > > I want to create (j, i) with j ordered and i unordered and store in a file > ie. > ... > 38277, 567 > 38277, 90023 > 38277, 6940 > 43788, 5672 > 43788, 98 > etc > ... > > My computers have sufficient memory (2gb on one and 8gb on another). > > Any ideas how I could do this using numpy? > > Dinesh > _______________________________________________ > Numpy-discussion mailing list > Numpy-discussion at scipy.org > http://mail.scipy.org/mailman/listinfo/numpy-discussion > From sturla at molden.no Fri Mar 13 07:12:01 2009 From: sturla at molden.no (Sturla Molden) Date: Fri, 13 Mar 2009 12:12:01 +0100 (CET) Subject: [Numpy-discussion] Poll: Semantics for % in Cython In-Reply-To: <3d375d730903122334i399fa28fgaccd05990de76c5d@mail.gmail.com> References: <49B95BA4.8010800@student.matnat.uio.no> <9457e7c80903121529u6fd98f7udc475191a9128dda@mail.gmail.com> <9219a32801d4bac88921285491654ea7.squirrel@webmail.uio.no> <3d375d730903121605x545d12d3jf8207e5aaa893778@mail.gmail.com> <49BA0003.7010602@student.matnat.uio.no> <3d375d730903122334i399fa28fgaccd05990de76c5d@mail.gmail.com> Message-ID: > On Fri, Mar 13, 2009 at 01:41, Dag Sverre Seljebotn >>> ? # Explicitly declared C types? >>> ? cdef long i, j, k >>> ? i = -1 >>> ? j = 5 >>> ? k = i % j >> >> This one is what I'm really asking about. > > My opinion on this is that C semantics have been explicitly requested, > so they should be used. I agree with this. I feel that "cdef long i, j, k" is a request to "step into C". But here I feel the Cython team is trying to make me step into a broken C. 
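To make Sturla's argsort suggestion concrete, here is a sketch on a tiny in-memory stand-in for Dinesh's data; a real billion-row file would need numpy.memmap or chunked out-of-core sorting layered on top of the same idea:

    import numpy as np

    pairs = np.array([[6940, 22886],
                      [6940, 38277],
                      [7007,     0],
                      [7007,  2362]], dtype=np.int32)  # columns: i, j

    order = np.argsort(pairs[:, 1], kind='mergesort')  # stable sort on the j column
    ji = pairs[order][:, ::-1]                         # swap columns: now (j, i), j ordered
    print(ji)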
Sturla Molden From dagss at student.matnat.uio.no Fri Mar 13 07:47:22 2009 From: dagss at student.matnat.uio.no (Dag Sverre Seljebotn) Date: Fri, 13 Mar 2009 12:47:22 +0100 Subject: [Numpy-discussion] Poll: Semantics for % in Cython In-Reply-To: References: <49B95BA4.8010800@student.matnat.uio.no> <9457e7c80903121529u6fd98f7udc475191a9128dda@mail.gmail.com> <9219a32801d4bac88921285491654ea7.squirrel@webmail.uio.no> <3d375d730903121605x545d12d3jf8207e5aaa893778@mail.gmail.com> <49BA0003.7010602@student.matnat.uio.no> <3d375d730903122334i399fa28fgaccd05990de76c5d@mail.gmail.com> Message-ID: <49BA47CA.2030905@student.matnat.uio.no> Sturla Molden wrote: >> On Fri, Mar 13, 2009 at 01:41, Dag Sverre Seljebotn >> > > >>>> # Explicitly declared C types? >>>> cdef long i, j, k >>>> i = -1 >>>> j = 5 >>>> k = i % j >>>> >>> This one is what I'm really asking about. >>> >> My opinion on this is that C semantics have been explicitly requested, >> so they should be used. >> > > I agree with this. I feel that "cdef long i, j, k" is a request to "step > into C". But here I feel the Cython team is trying to make me step into a > broken C. > Well, first of all the Cython team as a whole is still undecided on the issue, notably project lead Robert Bradshaw is either undecided or leaning towards your view. There's been lot of feedback, and they tend to fall into two groups: - Declaring types means requesting C semantics - Declaring types means typing the Python language to make it faster So it's more about making it "typed Python" than "broken C" IMO. (Introducing a new set of types for "typed Python" is an idea that could please everybody, but I fear the confusion it would bring myself...) Interestingly the Sage list, Cython list and NumPy lists all seem about equally divided on the issue. Dag Sverre From sturla at molden.no Fri Mar 13 08:32:43 2009 From: sturla at molden.no (Sturla Molden) Date: Fri, 13 Mar 2009 13:32:43 +0100 Subject: [Numpy-discussion] Poll: Semantics for % in Cython In-Reply-To: <49BA47CA.2030905@student.matnat.uio.no> References: <49B95BA4.8010800@student.matnat.uio.no> <9457e7c80903121529u6fd98f7udc475191a9128dda@mail.gmail.com> <9219a32801d4bac88921285491654ea7.squirrel@webmail.uio.no> <3d375d730903121605x545d12d3jf8207e5aaa893778@mail.gmail.com> <49BA0003.7010602@student.matnat.uio.no> <3d375d730903122334i399fa28fgaccd05990de76c5d@mail.gmail.com> <49BA47CA.2030905@student.matnat.uio.no> Message-ID: <49BA526B.2030204@molden.no> On 3/13/2009 12:47 PM, Dag Sverre Seljebotn wrote: > (Introducing a new set of types for "typed Python" is an idea that could > please everybody, but I fear the confusion it would bring myself...) AFAIK, Python 3 has optional type annotations. Sturla Molden From Chris.Barker at noaa.gov Fri Mar 13 11:52:40 2009 From: Chris.Barker at noaa.gov (Christopher Barker) Date: Fri, 13 Mar 2009 08:52:40 -0700 Subject: [Numpy-discussion] Poll: Semantics for % in Cython In-Reply-To: <3d375d730903122334i399fa28fgaccd05990de76c5d@mail.gmail.com> References: <49B95BA4.8010800@student.matnat.uio.no> <9457e7c80903121529u6fd98f7udc475191a9128dda@mail.gmail.com> <9219a32801d4bac88921285491654ea7.squirrel@webmail.uio.no> <3d375d730903121605x545d12d3jf8207e5aaa893778@mail.gmail.com> <49BA0003.7010602@student.matnat.uio.no> <3d375d730903122334i399fa28fgaccd05990de76c5d@mail.gmail.com> Message-ID: <49BA8148.2020605@noaa.gov> Robert Kern wrote: >>> # Explicitly declared C types? >>> cdef long i, j, k >>> i = -1 >>> j = 5 >>> k = i % j >> This one is what I'm really asking about. 
> My opinion on this is that C semantics have been explicitly requested, > so they should be used. maybe ... > One possibility (that may be opening a can of worms) is to have two > sets of operators, one that does "native" semantics (C for cdef longs, > Python for Python ints) and one that does Python semantics even on > cdef longs. ouch! no. I think this is a case of practicality vs. purity. A common use case would be that a person starts out with their code in python, then moves it to cython, then adds the cdef, testing (or not!) as they go. The problem here is that yes, there are going to be differences when you apply a cdef, but a difference like this may very well not show up at all in tests (unless the user is aware enough of this particular issue to explicitly test for it). Now the code is broken in a subtle, and hard to find way that could turn up who knows when, with what data. This is kind of like the "new division" issue with python itself -- it is much better to simply be explicit: "/" means float division, "//" means integer division, regardless of the types of the operands. If you apply the same principle here, then we should have one operator for "c style modulo", and one for "python style modulo", regardless of the types of the operands. -Chris -- Christopher Barker, Ph.D. Oceanographer Emergency Response Division NOAA/NOS/OR&R (206) 526-6959 voice 7600 Sand Point Way NE (206) 526-6329 fax Seattle, WA 98115 (206) 526-6317 main reception Chris.Barker at noaa.gov From Chris.Barker at noaa.gov Fri Mar 13 12:00:12 2009 From: Chris.Barker at noaa.gov (Christopher Barker) Date: Fri, 13 Mar 2009 09:00:12 -0700 Subject: [Numpy-discussion] numpy via easy_install on windows In-Reply-To: <5b8d13220903122017w4af35501p4376916f20200527@mail.gmail.com> References: <5b8d13220903112038m21269ffbj8aaf0719c5242fc@mail.gmail.com> <5b8d13220903112039xc8bc283x74519bb3143bab9e@mail.gmail.com> <49B8DB18.8070503@esrf.fr> <49B8D8B8.9050202@ar.media.kyoto-u.ac.jp> <49B8ED16.4060906@esrf.fr> <5b8d13220903122017w4af35501p4376916f20200527@mail.gmail.com> Message-ID: <49BA830C.5060109@noaa.gov> David Cournapeau wrote: > On Thu, Mar 12, 2009 at 8:08 PM, Jon Wright wrote: >> I'd like to have numpy as a dependency being pulled into a virtualenv >> automatically. Is that possible with the binary installer? > > I don't think so - but I would think that people using virtualenv are > familiar with compiling software. not on Windows, anyway -- for the most part, people use easy_install with virtualenv. Compiling stuff on Windows is a big ol' pain in the ^%^&$$, and remarkably few people do it. easy_install is quite capable of installing binary packages (except Universal ones on OS X...), it would be nice if numpy supported it. > I now remember that numpy could not be built from sources by > easy_install, but I believe we fixed the problem. It would still only work if the user was properly set up to compile python extensions -- not a very common occurrence. -Chris -- Christopher Barker, Ph.D.
Oceanographer Emergency Response Division NOAA/NOS/OR&R (206) 526-6959 voice 7600 Sand Point Way NE (206) 526-6329 fax Seattle, WA 98115 (206) 526-6317 main reception Chris.Barker at noaa.gov From stefan at sun.ac.za Fri Mar 13 12:57:56 2009 From: stefan at sun.ac.za (=?ISO-8859-1?Q?St=E9fan_van_der_Walt?=) Date: Fri, 13 Mar 2009 18:57:56 +0200 Subject: [Numpy-discussion] Implementing hashing protocol for dtypes In-Reply-To: <5b8d13220903120513g71bc9dafhf5369f75662024f4@mail.gmail.com> References: <5b8d13220903111306j784b845an940b1b0b35f877e7@mail.gmail.com> <3d375d730903111336j2f354eecv2a33e3b7a1324ec3@mail.gmail.com> <5b8d13220903112049r772643bbt6d99dd14071ca699@mail.gmail.com> <3d375d730903112100k1258ca32qb89c4dd9423d24ad@mail.gmail.com> <5b8d13220903120513gb19b2dfn7712b7d972a6a726@mail.gmail.com> <5b8d13220903120513g71bc9dafhf5369f75662024f4@mail.gmail.com> Message-ID: <9457e7c80903130957t27b269e5nb2e7db14a7bb8d@mail.gmail.com> 2009/3/12 David Cournapeau : > Sorry, the link is http://codereview.appspot.com/26052/show I've tried the patch, and it works well! Bonus marks for all the useful comments and tests! I am +1. Cheers St?fan From cournape at gmail.com Fri Mar 13 13:14:22 2009 From: cournape at gmail.com (David Cournapeau) Date: Sat, 14 Mar 2009 02:14:22 +0900 Subject: [Numpy-discussion] numpy via easy_install on windows In-Reply-To: <49BA830C.5060109@noaa.gov> References: <5b8d13220903112038m21269ffbj8aaf0719c5242fc@mail.gmail.com> <5b8d13220903112039xc8bc283x74519bb3143bab9e@mail.gmail.com> <49B8DB18.8070503@esrf.fr> <49B8D8B8.9050202@ar.media.kyoto-u.ac.jp> <49B8ED16.4060906@esrf.fr> <5b8d13220903122017w4af35501p4376916f20200527@mail.gmail.com> <49BA830C.5060109@noaa.gov> Message-ID: <5b8d13220903131014u5509e801m926d3b96813d8cb1@mail.gmail.com> On Sat, Mar 14, 2009 at 1:00 AM, Christopher Barker wrote: > David Cournapeau wrote: >> On Thu, Mar 12, 2009 at 8:08 PM, Jon Wright wrote: >>> I'd like to have numpy as a dependency being pulled into a virtualenv >>> automatically. Is that possible with the binary installer? >> >> I don't think so - but I would think that people using virtualenv are >> familiar with compiling softwares. > > not on Windows, anyway -- for the most part, people use easy_install > with vitualenv. Compiling stuff on Windows is a big 'ol pain in the > ^%^&$$, and remarkably few people do it. easy_install is quite capable > of installing binary packages (except Universal ones on OS_X...), it > would be nice if numpy supported it. Numpy can be built as an egg. We just don't distribute the eggs. Distributing eggs would mean going back to the whole SSE/SSE3/no sse mess. > > It would still only work if the user was properly set up to compile > python extensions -- not a very common occurrence. But then what's the point of installing numpy in virtualenv ? Why not installing it system-wide ? The whole business of pushing people to install multiple versions of the same package for actual deployment is very wrong IMO. cheers, David From faltet at pytables.org Fri Mar 13 13:41:35 2009 From: faltet at pytables.org (Francesc Alted) Date: Fri, 13 Mar 2009 18:41:35 +0100 Subject: [Numpy-discussion] [ANN] PyTables 2.1.1 released Message-ID: <200903131841.35200.faltet@pytables.org> =========================== Announcing PyTables 2.1.1 =========================== PyTables is a library for managing hierarchical datasets and designed to efficiently cope with extremely large amounts of data with support for full 64-bit file addressing. 
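For readers new to the package, a minimal usage sketch; the file name and array contents here are made up, and this uses the PyTables 2.x openFile/createArray spelling:

    import numpy as np
    import tables

    h5 = tables.openFile('demo.h5', mode='w')                     # new HDF5 file
    h5.createArray(h5.root, 'x', np.arange(10), 'example array')  # store a numpy array
    h5.close()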
PyTables runs on top of the HDF5 library and NumPy package for achieving maximum throughput and convenient use. This is a maintenance release, so you should not expect API changes. Instead, a handful of bugs, like `File` not being subclassable, incorrectly retrieved default values for data types, a memory leak, and more, have been fixed. Besides, some enhancements have been implemented, like improved Unicode support for filenames, better handling of Unicode attributes, and the possibility to create very large data types exceeding 64 KB in size (with some limitations). Last but not least, this is the first PyTables version fully tested against Python 2.6. It is worth noting that binaries for Windows and Python 2.6 wears the newest HDF5 1.8.2 libraries (instead of the traditional HDF5 1.6.x) now. In case you want to know more in detail what has changed in this version, have a look at: http://www.pytables.org/moin/ReleaseNotes/Release_2.1.1 You can download a source package with generated PDF and HTML docs, as well as binaries for Windows, from: http://www.pytables.org/download/stable For an on-line version of the manual, visit: http://www.pytables.org/docs/manual-2.1.1 You may want to fetch an evaluation version for PyTables Pro from: http://www.pytables.org/download/evaluation Resources ========= About PyTables: http://www.pytables.org About the HDF5 library: http://www.hdfgroup.org/HDF5/ About NumPy: http://numpy.scipy.org/ Acknowledgments =============== Thanks to many users who provided feature improvements, patches, bug reports, support and suggestions. See the ``THANKS`` file in the distribution package for a (incomplete) list of contributors. Most specially, a lot of kudos go to the HDF5 and NumPy (and numarray!) makers. Without them, PyTables simply would not exist. Share your experience ===================== Let us know of any bugs, suggestions, gripes, kudos, etc. you may have. ---- **Enjoy data!** -- The PyTables Team -- Francesc Alted From wright at esrf.fr Fri Mar 13 14:15:20 2009 From: wright at esrf.fr (Jon Wright) Date: Fri, 13 Mar 2009 19:15:20 +0100 Subject: [Numpy-discussion] numpy via easy_install on windows In-Reply-To: <5b8d13220903122017w4af35501p4376916f20200527@mail.gmail.com> References: <5b8d13220903112038m21269ffbj8aaf0719c5242fc@mail.gmail.com> <5b8d13220903112039xc8bc283x74519bb3143bab9e@mail.gmail.com> <49B8DB18.8070503@esrf.fr> <49B8D8B8.9050202@ar.media.kyoto-u.ac.jp> <49B8ED16.4060906@esrf.fr> <5b8d13220903122017w4af35501p4376916f20200527@mail.gmail.com> Message-ID: <49BAA2B8.9050804@esrf.fr> David Cournapeau wrote: > I now remember that numpy could not be built from sources by > easy_install, but I believe we fixed the problem. Would you mind using > on a recent svn checkout ? I would like this to be fixed if that's > still a problem, With the current svn (6661) I can build using mingw32, install in a virtualenv and create an egg which can be pulled in as a dependency. Thanks a lot for all your hard work to make this possible! What I want is a simpler way to install things for people to try out our programs. We currently have dependencies on at least numpy, matplotlib, PIL, Pmw and PyOpenGl and having to go through a series of 6 different installations can be a bit intimidating. Any suggestions as to how best to distribute such a beast is most welcome. Perhaps it is just a question of having the "superpack" named in so that setuptools recognises it as the file it is supposed to go fetch for windows? Or maybe better for us to just deliver a "basket of eggs"? 
Many thanks for any advice, Jon From josh8912 at yahoo.com Fri Mar 13 14:16:01 2009 From: josh8912 at yahoo.com (JJ) Date: Fri, 13 Mar 2009 11:16:01 -0700 (PDT) Subject: [Numpy-discussion] random number generator--problems? Message-ID: <638884.19316.qm@web54007.mail.re2.yahoo.com> Hello: I just ran across this article saying that the random number generator in Linux is broken. http://www-cdf.fnal.gov/publications/cdf6850_badrand.pdf The article starts: <> As I understand it, Numpy uses a different random number generator (Mersenne Twister), but I just wanted to verify that any problems with the Linux random number generators do not apply to Numpy. Can someone please verify this? Thanks John From robert.kern at gmail.com Fri Mar 13 14:27:54 2009 From: robert.kern at gmail.com (Robert Kern) Date: Fri, 13 Mar 2009 13:27:54 -0500 Subject: [Numpy-discussion] random number generator--problems? In-Reply-To: <638884.19316.qm@web54007.mail.re2.yahoo.com> References: <638884.19316.qm@web54007.mail.re2.yahoo.com> Message-ID: <3d375d730903131127p38787abcy8eab27e87892c2d7@mail.gmail.com> On Fri, Mar 13, 2009 at 13:16, JJ wrote: > > Hello: > I just ran across this article saying that the random number generator in Linux is broken. > > http://www-cdf.fnal.gov/publications/cdf6850_badrand.pdf > > The article starts: > < system-library generator random. Since Linux rand and Linux/UNIX random > use identical algorithms, we will use ?random? to refer to both. The defect > is uncovered when random fails a simple empirical test.>> > > As I understand it, Numpy uses a different random number generator (Mersenne Twister), but I just wanted to verify that any problems with the Linux random number generators do not apply to Numpy. ?Can someone please verify this? Yes. -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From cournape at gmail.com Fri Mar 13 14:30:22 2009 From: cournape at gmail.com (David Cournapeau) Date: Sat, 14 Mar 2009 03:30:22 +0900 Subject: [Numpy-discussion] numpy via easy_install on windows In-Reply-To: <49BAA2B8.9050804@esrf.fr> References: <5b8d13220903112038m21269ffbj8aaf0719c5242fc@mail.gmail.com> <5b8d13220903112039xc8bc283x74519bb3143bab9e@mail.gmail.com> <49B8DB18.8070503@esrf.fr> <49B8D8B8.9050202@ar.media.kyoto-u.ac.jp> <49B8ED16.4060906@esrf.fr> <5b8d13220903122017w4af35501p4376916f20200527@mail.gmail.com> <49BAA2B8.9050804@esrf.fr> Message-ID: <5b8d13220903131130y7f313e45lf5a2860bf96a4087@mail.gmail.com> On Sat, Mar 14, 2009 at 3:15 AM, Jon Wright wrote: > What I want is a simpler way to install things for people to try out our > programs. We currently have dependencies on at least numpy, matplotlib, > PIL, Pmw and PyOpenGl and having to go through a series of 6 different > installations can be a bit intimidating. Any suggestions as to how best > to distribute such a beast is most welcome. When distributing things, I see only two solutions: either you distribute everything separately (ala linux), or you integrate everything. On windows, integrating is almost always the right solution: you get the uninstall option, etc... It depends on how much resource you can spend on it, but if I were to distribute things on windows, I would build a msi/bdist_wininst of every package, and wrap this into another installer (which is almost exactly what the superpack does). 
That's how every big piece of software on windows works AFAIK: every MS product installs this way for example. I don't claim any deep knowledge on the things behind bdist_wininst, but nsis, which is the open source system I use to build the numpy and scipy so-called superpacks, is powerful, maintained and well documented. Wrapping all the installers in one would be easy - if you need options, and in particular to control each installer independently, then it would become more difficult. cheers, David From robert.kern at gmail.com Fri Mar 13 14:37:39 2009 From: robert.kern at gmail.com (Robert Kern) Date: Fri, 13 Mar 2009 13:37:39 -0500 Subject: [Numpy-discussion] numpy via easy_install on windows In-Reply-To: <5b8d13220903131014u5509e801m926d3b96813d8cb1@mail.gmail.com> References: <5b8d13220903112038m21269ffbj8aaf0719c5242fc@mail.gmail.com> <5b8d13220903112039xc8bc283x74519bb3143bab9e@mail.gmail.com> <49B8DB18.8070503@esrf.fr> <49B8D8B8.9050202@ar.media.kyoto-u.ac.jp> <49B8ED16.4060906@esrf.fr> <5b8d13220903122017w4af35501p4376916f20200527@mail.gmail.com> <49BA830C.5060109@noaa.gov> <5b8d13220903131014u5509e801m926d3b96813d8cb1@mail.gmail.com> Message-ID: <3d375d730903131137q9ae6abbk95363ae7b9a14de@mail.gmail.com> On Fri, Mar 13, 2009 at 12:14, David Cournapeau wrote: > But then what's the point of installing numpy in virtualenv ? Why not > installing it system-wide ? The whole business of pushing people to > install multiple versions of the same package for actual deployment is > very wrong IMO. Who says he's deploying? -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From Chris.Barker at noaa.gov Fri Mar 13 15:22:13 2009 From: Chris.Barker at noaa.gov (Christopher Barker) Date: Fri, 13 Mar 2009 12:22:13 -0700 Subject: [Numpy-discussion] numpy via easy_install on windows In-Reply-To: <5b8d13220903131130y7f313e45lf5a2860bf96a4087@mail.gmail.com> References: <5b8d13220903112038m21269ffbj8aaf0719c5242fc@mail.gmail.com> <5b8d13220903112039xc8bc283x74519bb3143bab9e@mail.gmail.com> <49B8DB18.8070503@esrf.fr> <49B8D8B8.9050202@ar.media.kyoto-u.ac.jp> <49B8ED16.4060906@esrf.fr> <5b8d13220903122017w4af35501p4376916f20200527@mail.gmail.com> <49BAA2B8.9050804@esrf.fr> <5b8d13220903131130y7f313e45lf5a2860bf96a4087@mail.gmail.com> Message-ID: <49BAB265.9060204@noaa.gov> David Cournapeau wrote: > It depends on how much resource you can spend on it, but if I were to > distribute things on windows, I would build a msi/bdist_wininst of > every package, and wrap this into another installer (which is almost > exactly what the superpack does). This would stomp on the person's existing python installation -- which is the point of virtualenv -- no one should use the system python for stuff that they distribute this way. So virtualenv or perhaps distributing all of python too, and putting it in a unique location, is the way to go. Another option is py2exe, bbfreeze, PyInstaller. They will install everything needed. I think bbfreeze even supplies a custom interpreter, so you can essentially build a custom python distro with it. -Chris -- Christopher Barker, Ph.D.
From robert.kern at gmail.com  Fri Mar 13 14:37:39 2009
From: robert.kern at gmail.com (Robert Kern)
Date: Fri, 13 Mar 2009 13:37:39 -0500
Subject: [Numpy-discussion] numpy via easy_install on windows
In-Reply-To: <5b8d13220903131014u5509e801m926d3b96813d8cb1@mail.gmail.com>
References: <5b8d13220903112038m21269ffbj8aaf0719c5242fc@mail.gmail.com> <5b8d13220903112039xc8bc283x74519bb3143bab9e@mail.gmail.com> <49B8DB18.8070503@esrf.fr> <49B8D8B8.9050202@ar.media.kyoto-u.ac.jp> <49B8ED16.4060906@esrf.fr> <5b8d13220903122017w4af35501p4376916f20200527@mail.gmail.com> <49BA830C.5060109@noaa.gov> <5b8d13220903131014u5509e801m926d3b96813d8cb1@mail.gmail.com>
Message-ID: <3d375d730903131137q9ae6abbk95363ae7b9a14de@mail.gmail.com>

On Fri, Mar 13, 2009 at 12:14, David Cournapeau wrote:
> But then what's the point of installing numpy in virtualenv ? Why not
> installing it system-wide ? The whole business of pushing people to
> install multiple versions of the same package for actual deployment is
> very wrong IMO.

Who says he's deploying?

--
Robert Kern

"I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco

From Chris.Barker at noaa.gov  Fri Mar 13 15:22:13 2009
From: Chris.Barker at noaa.gov (Christopher Barker)
Date: Fri, 13 Mar 2009 12:22:13 -0700
Subject: [Numpy-discussion] numpy via easy_install on windows
In-Reply-To: <5b8d13220903131130y7f313e45lf5a2860bf96a4087@mail.gmail.com>
References: <5b8d13220903112038m21269ffbj8aaf0719c5242fc@mail.gmail.com> <5b8d13220903112039xc8bc283x74519bb3143bab9e@mail.gmail.com> <49B8DB18.8070503@esrf.fr> <49B8D8B8.9050202@ar.media.kyoto-u.ac.jp> <49B8ED16.4060906@esrf.fr> <5b8d13220903122017w4af35501p4376916f20200527@mail.gmail.com> <49BAA2B8.9050804@esrf.fr> <5b8d13220903131130y7f313e45lf5a2860bf96a4087@mail.gmail.com>
Message-ID: <49BAB265.9060204@noaa.gov>

David Cournapeau wrote:
> It depends on how much resource you can spend on it, but if I were to
> distribute things on windows, I would build an msi/bdist_wininst installer of
> every package, and wrap these into another installer (which is almost
> exactly what the superpack does).

This would stomp on the person's existing python installation -- which is the point of virtualenv -- no one should use the system python for stuff that they distribute this way. So virtualenv or perhaps distributing all of python too, and putting it in a unique location, is the way to go.

Another option is py2exe, bbfreeze, or PyInstaller. They will install everything needed. I think bbfreeze even supplies a custom interpreter, so you can essentially build a custom python distro with it.

-Chris

--
Christopher Barker, Ph.D.
Oceanographer

Emergency Response Division
NOAA/NOS/OR&R            (206) 526-6959   voice
7600 Sand Point Way NE   (206) 526-6329   fax
Seattle, WA  98115       (206) 526-6317   main reception

Chris.Barker at noaa.gov

From patrickmarshwx at gmail.com  Fri Mar 13 16:15:13 2009
From: patrickmarshwx at gmail.com (Patrick Marsh)
Date: Fri, 13 Mar 2009 15:15:13 -0500
Subject: [Numpy-discussion] Build Failure on IA64
Message-ID:

Hi,

I'm trying to build numpy from SVN and ran across this error:
numpy/core/include/numpy/npy_cpu.h:44:10: error: #error Unknown CPU, please report this to numpy maintainers with information about your platform (OS, CPU and compiler)

This is on a linux machine using gcc. Here is the processor information:

processor  : 0
vendor     : GenuineIntel
arch       : IA-64
family     : Itanium 2
model      : 2
revision   : 1
archrev    : 0
features   : branchlong
cpu number : 0
cpu regs   : 4
cpu MHz    : 1500.000000
itc MHz    : 1500.000000
BogoMIPS   : 2244.60
siblings   : 1

Patrick

--
Patrick Marsh
Graduate Research Assistant
School of Meteorology
University of Oklahoma
http://www.patricktmarsh.com

From charlesr.harris at gmail.com  Fri Mar 13 16:46:36 2009
From: charlesr.harris at gmail.com (Charles R Harris)
Date: Fri, 13 Mar 2009 14:46:36 -0600
Subject: [Numpy-discussion] Build Failure on IA64
In-Reply-To:
References:
Message-ID:

On Fri, Mar 13, 2009 at 2:15 PM, Patrick Marsh wrote:
> Hi,
>
> I'm trying to build numpy from SVN and ran across this error:
> numpy/core/include/numpy/npy_cpu.h:44:10: error: #error Unknown CPU,
> please report this to numpy maintainers with information about your
> platform (OS, CPU and compiler)
>
> This is on a linux machine using gcc. Here is the processor information:
>
> processor  : 0
> vendor     : GenuineIntel
> arch       : IA-64
> family     : Itanium 2
> model      : 2
> revision   : 1
> archrev    : 0
> features   : branchlong
> cpu number : 0
> cpu regs   : 4
> cpu MHz    : 1500.000000
> itc MHz    : 1500.000000
> BogoMIPS   : 2244.60
> siblings   : 1
>

Thanks. Looks like the macro __ia64 should get defined, and google says ia64-linux is little-endian.

Chuck
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From charlesr.harris at gmail.com  Fri Mar 13 22:18:45 2009
From: charlesr.harris at gmail.com (Charles R Harris)
Date: Fri, 13 Mar 2009 20:18:45 -0600
Subject: [Numpy-discussion] Build Failure on IA64
In-Reply-To:
References:
Message-ID:

On Fri, Mar 13, 2009 at 2:15 PM, Patrick Marsh wrote:
> Hi,
>
> I'm trying to build numpy from SVN and ran across this error:
> numpy/core/include/numpy/npy_cpu.h:44:10: error: #error Unknown CPU,
> please report this to numpy maintainers with information about your
> platform (OS, CPU and compiler)
>
> This is on a linux machine using gcc. Here is the processor information:
>
> processor  : 0
> vendor     : GenuineIntel
> arch       : IA-64
> family     : Itanium 2
> model      : 2
> revision   : 1
> archrev    : 0
> features   : branchlong
> cpu number : 0
> cpu regs   : 4
> cpu MHz    : 1500.000000
> itc MHz    : 1500.000000
> BogoMIPS   : 2244.60
> siblings   : 1
>

OK, I added some macros in r6662, can you give it a shot? Do you know if folks are using other OSs or compilers on that system?

Chuck
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From cournape at gmail.com  Fri Mar 13 22:49:09 2009
From: cournape at gmail.com (David Cournapeau)
Date: Sat, 14 Mar 2009 11:49:09 +0900
Subject: [Numpy-discussion] numpy via easy_install on windows
In-Reply-To: <3d375d730903131137q9ae6abbk95363ae7b9a14de@mail.gmail.com>
References: <5b8d13220903112039xc8bc283x74519bb3143bab9e@mail.gmail.com> <49B8DB18.8070503@esrf.fr> <49B8D8B8.9050202@ar.media.kyoto-u.ac.jp> <49B8ED16.4060906@esrf.fr> <5b8d13220903122017w4af35501p4376916f20200527@mail.gmail.com> <49BA830C.5060109@noaa.gov> <5b8d13220903131014u5509e801m926d3b96813d8cb1@mail.gmail.com> <3d375d730903131137q9ae6abbk95363ae7b9a14de@mail.gmail.com>
Message-ID: <5b8d13220903131949g3f6e08f5g480aa534e2f889dc@mail.gmail.com>

On Sat, Mar 14, 2009 at 3:37 AM, Robert Kern wrote:
> On Fri, Mar 13, 2009 at 12:14, David Cournapeau wrote:
>> But then what's the point of installing numpy in virtualenv ? Why not
>> installing it system-wide ? The whole business of pushing people to
>> install multiple versions of the same package for actual deployment is
>> very wrong IMO.
>
> Who says he's deploying?

that's how I understand "I want things to be easy for people to try out programs".

David

From cournape at gmail.com  Fri Mar 13 22:52:32 2009
From: cournape at gmail.com (David Cournapeau)
Date: Sat, 14 Mar 2009 11:52:32 +0900
Subject: [Numpy-discussion] numpy via easy_install on windows
In-Reply-To: <49BAB265.9060204@noaa.gov>
References: <5b8d13220903112039xc8bc283x74519bb3143bab9e@mail.gmail.com> <49B8DB18.8070503@esrf.fr> <49B8D8B8.9050202@ar.media.kyoto-u.ac.jp> <49B8ED16.4060906@esrf.fr> <5b8d13220903122017w4af35501p4376916f20200527@mail.gmail.com> <49BAA2B8.9050804@esrf.fr> <5b8d13220903131130y7f313e45lf5a2860bf96a4087@mail.gmail.com> <49BAB265.9060204@noaa.gov>
Message-ID: <5b8d13220903131952v861cd85v8d1c9be5788bfb60@mail.gmail.com>

On Sat, Mar 14, 2009 at 4:22 AM, Christopher Barker wrote:
>
> They will install everything needed. I think bbfreeze even supplies a
> custom interpreter, so you can essentially build a custom python distro
> with it.

I think it is a much better solution. Maybe it is just me, but I am not convinced virtualenv is a good way to deploy programs on many machines, especially on windows. That seems very foreign to the way applications are expected to be installed in this environment,

David

From cournape at gmail.com  Sat Mar 14 12:01:26 2009
From: cournape at gmail.com (David Cournapeau)
Date: Sun, 15 Mar 2009 01:01:26 +0900
Subject: [Numpy-discussion] Reminder: code freeze for bet at the end of the WE
Message-ID: <5b8d13220903140901s71935029m5ade4fdf98c22eaf@mail.gmail.com>

hi,

Just a friendly reminder that I will close the trunk for 1.3.0 at the end of 15th March (I will more likely do it at the end of Monday Japan time which roughly corresponds to 15th March midnight Pacific time),

cheers,

David

From patrickmarshwx at gmail.com  Sat Mar 14 12:09:12 2009
From: patrickmarshwx at gmail.com (Patrick Marsh)
Date: Sat, 14 Mar 2009 11:09:12 -0500
Subject: [Numpy-discussion] Build Failure on IA64
In-Reply-To:
References:
Message-ID:

On Fri, Mar 13, 2009 at 9:18 PM, Charles R Harris wrote:
>
>
> On Fri, Mar 13, 2009 at 2:15 PM, Patrick Marsh
> wrote:
>>
>> Hi,
>>
>> I'm trying to build numpy from SVN and ran across this error:
>> numpy/core/include/numpy/npy_cpu.h:44:10: error: #error Unknown CPU,
>> please report this to numpy maintainers with information about your
>> platform (OS, CPU and compiler)
>>
>> This is on a linux machine using gcc.
Here is the processor information:
>>
>> processor  : 0
>> vendor     : GenuineIntel
>> arch       : IA-64
>> family     : Itanium 2
>> model      : 2
>> revision   : 1
>> archrev    : 0
>> features   : branchlong
>> cpu number : 0
>> cpu regs   : 4
>> cpu MHz    : 1500.000000
>> itc MHz    : 1500.000000
>> BogoMIPS   : 2244.60
>> siblings   : 1
>
> OK, I added some macros in r6662, can you give it a shot? Do you know if
> folks are using other OSs or compilers on that system?
>
> Chuck
>

Worked just fine. I don't think anyone is using another OS on this system. As for other compilers, I do have reason to believe there are others on there; however, I don't use them and don't know what they are. I'll try to ask around on Monday.

Patrick

--
Patrick Marsh
Graduate Research Assistant
School of Meteorology
University of Oklahoma
http://www.patricktmarsh.com

From josef.pktd at gmail.com  Sat Mar 14 12:57:58 2009
From: josef.pktd at gmail.com (josef.pktd at gmail.com)
Date: Sat, 14 Mar 2009 12:57:58 -0400
Subject: [Numpy-discussion] Reminder: code freeze for bet at the end of the WE
In-Reply-To: <5b8d13220903140901s71935029m5ade4fdf98c22eaf@mail.gmail.com>
References: <5b8d13220903140901s71935029m5ade4fdf98c22eaf@mail.gmail.com>
Message-ID: <1cd32cbb0903140957u59267c4dna20d9a5dd7acd482@mail.gmail.com>

On Sat, Mar 14, 2009 at 12:01 PM, David Cournapeau wrote:
> hi,
>
> Just a friendly reminder that I will close the trunk for 1.3.0 at the
> end of 15th March (I will more likely do it at the end of Monday Japan
> time which roughly corresponds to 15th March midnight Pacific time),
>
> cheers,
>
> David

Any chance for tickets 921 and 923? I would like to remove some test failures in the random numbers in scipy.stats.distributions.

Josef

From charlesr.harris at gmail.com  Sat Mar 14 13:40:13 2009
From: charlesr.harris at gmail.com (Charles R Harris)
Date: Sat, 14 Mar 2009 11:40:13 -0600
Subject: [Numpy-discussion] Reminder: code freeze for bet at the end of the WE
In-Reply-To: <1cd32cbb0903140957u59267c4dna20d9a5dd7acd482@mail.gmail.com>
References: <5b8d13220903140901s71935029m5ade4fdf98c22eaf@mail.gmail.com> <1cd32cbb0903140957u59267c4dna20d9a5dd7acd482@mail.gmail.com>
Message-ID:

On Sat, Mar 14, 2009 at 10:57 AM, wrote:
> On Sat, Mar 14, 2009 at 12:01 PM, David Cournapeau
> wrote:
> > hi,
> >
> > Just a friendly reminder that I will close the trunk for 1.3.0 at the
> > end of 15th March (I will more likely do it at the end of Monday Japan
> > time which roughly corresponds to 15th March midnight Pacific time),
> >

The fixes look small and I'd like them to go in. Can you put together some short tests for these fixes? Would it help if you had commit privileges in Numpy?

Chuck
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From cournape at gmail.com  Sat Mar 14 13:52:34 2009
From: cournape at gmail.com (David Cournapeau)
Date: Sun, 15 Mar 2009 02:52:34 +0900
Subject: [Numpy-discussion] Reminder: code freeze for bet at the end of the WE
In-Reply-To:
References: <5b8d13220903140901s71935029m5ade4fdf98c22eaf@mail.gmail.com> <1cd32cbb0903140957u59267c4dna20d9a5dd7acd482@mail.gmail.com>
Message-ID: <5b8d13220903141052k783e183am627a64178fbcb80e@mail.gmail.com>

On Sun, Mar 15, 2009 at 2:40 AM, Charles R Harris wrote:
>
> The fixes look small and I'd like them to go in. Can you put together some
> short tests for these fixes? Would it help if you had commit privileges in
> Numpy?
Yes, I was about to suggest giving Josef commit access to numpy, I unfortunately won't have much time to do anything but release tasks in the next few days, including review. If someone else (you :) ) can review the changes before they go in, then there is no reason why they can't go in - assuming they come in very soon,

David

From charlesr.harris at gmail.com  Sat Mar 14 14:04:18 2009
From: charlesr.harris at gmail.com (Charles R Harris)
Date: Sat, 14 Mar 2009 12:04:18 -0600
Subject: [Numpy-discussion] Reminder: code freeze for bet at the end of the WE
In-Reply-To: <5b8d13220903141052k783e183am627a64178fbcb80e@mail.gmail.com>
References: <5b8d13220903140901s71935029m5ade4fdf98c22eaf@mail.gmail.com> <1cd32cbb0903140957u59267c4dna20d9a5dd7acd482@mail.gmail.com> <5b8d13220903141052k783e183am627a64178fbcb80e@mail.gmail.com>
Message-ID:

On Sat, Mar 14, 2009 at 11:52 AM, David Cournapeau wrote:
> On Sun, Mar 15, 2009 at 2:40 AM, Charles R Harris
> wrote:
>
> >
> > The fixes look small and I'd like them to go in. Can you put together
> some
> > short tests for these fixes? Would it help if you had commit privileges
> in
> > Numpy?
>
> Yes, I was about to suggest giving Josef commit access to numpy, I
> unfortunately won't have much time to do anything but release tasks in
> the next few days, including review. If someone else (you :) ) can
> review the changes before they go in, then there is no reason why
> they can't go in - assuming they come in very soon,
>

The fixes are both one-liners. Testing... I wonder if tests for things like random distributions and computational accuracy of special functions shouldn't be separate scripts. They can be large files, a la special functions, or time consuming, and it doesn't help that parts are needed by both scipy and numpy. I don't feel competent to say that the fixes are correct, so I'll trust Josef in that regard.

Chuck
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From sturla at molden.no  Sat Mar 14 14:13:59 2009
From: sturla at molden.no (Sturla Molden)
Date: Sat, 14 Mar 2009 19:13:59 +0100 (CET)
Subject: [Numpy-discussion] Reminder: code freeze for bet at the end of the WE
In-Reply-To: <5b8d13220903140901s71935029m5ade4fdf98c22eaf@mail.gmail.com>
References: <5b8d13220903140901s71935029m5ade4fdf98c22eaf@mail.gmail.com>
Message-ID: <081c78a4da3090750456b9dfedff6527.squirrel@webmail.uio.no>

Will memmap be fixed to use offsets correctly before 1.3?
> hi, > > Just a friendly reminder that I will close the trunk for 1.3.0 at the > end of 15th March (I will more likely do it at the end of Monday Japan > time which roughly corresponds to 15th March midnight Pacific time), > > cheers, > > David > _______________________________________________ > Numpy-discussion mailing list > Numpy-discussion at scipy.org > http://mail.scipy.org/mailman/listinfo/numpy-discussion > From josef.pktd at gmail.com Sat Mar 14 14:14:25 2009 From: josef.pktd at gmail.com (josef.pktd at gmail.com) Date: Sat, 14 Mar 2009 14:14:25 -0400 Subject: [Numpy-discussion] Reminder: code freeze for bet at the end of the WE In-Reply-To: <5b8d13220903141052k783e183am627a64178fbcb80e@mail.gmail.com> References: <5b8d13220903140901s71935029m5ade4fdf98c22eaf@mail.gmail.com> <1cd32cbb0903140957u59267c4dna20d9a5dd7acd482@mail.gmail.com> <5b8d13220903141052k783e183am627a64178fbcb80e@mail.gmail.com> Message-ID: <1cd32cbb0903141114j723999c8uf18ad47d6641a048@mail.gmail.com> On Sat, Mar 14, 2009 at 1:52 PM, David Cournapeau wrote: > On Sun, Mar 15, 2009 at 2:40 AM, Charles R Harris > wrote: > >> >> The fixes look small and I'd like them to go in. Can you put together some >> short tests for these fixes? Would it help if you had commit privileges in >> Numpy? > > Yes, I was about to suggest giving Josef commit access to numpy, I > unfortunately won't have much time to do anything but release tasks in > the next few days, including review. If someone else (you :) ) can > review the changes, before they go in, then there is no reason why > they can't go in - assuming they come in very soon, > > David The correctness of the random numbers are tested in scipy.stats. They are not tested in np.random.tests. Currently, I have the test for logser disabled because it always fails, for hypergeometric, I picked parameters for which the random numbers are correct. Once the bugs are fixed, I can add or re-enable the tests for the current failures. Here are some tests, that should fail with the current trunk and pass after the fix. I don't have an unpatched version of numpy available right now, but these are the cases that initially showed the bugs. Can you verify that they fail on current or recent trunk? They don't fail on my patched version. But it has been some time ago that I did this and I would need to check the details again if these tests don't fail on the current trunk. {{{ import numpy as np assert np.all(np.random.hypergeometric(3,18,11,size=10) < 4) assert np.all(np.random.hypergeometric(18,3,11,size=10) > 0) pr = 0.8 N = 100000 rvsn = np.random.logseries(pr,size=N) # these two frequency counts should be close to theoretical numbers with this large sample assert np.sum(rvsn==1) / float(N) > 0.45 # theoretical: 0.49706795 assert np.sum(rvsn==1) / float(N) < 0.23 # theoretical: 0.19882718 }}} About commit access: it would be convenient to have it, but not necessary since there are only a few things that I can contribute to numpy directly. Josef From sturla at molden.no Sat Mar 14 14:23:12 2009 From: sturla at molden.no (Sturla Molden) Date: Sat, 14 Mar 2009 19:23:12 +0100 (CET) Subject: [Numpy-discussion] Reminder: code freeze for bet at the end of the WE In-Reply-To: <081c78a4da3090750456b9dfedff6527.squirrel@webmail.uio.no> References: <5b8d13220903140901s71935029m5ade4fdf98c22eaf@mail.gmail.com> <081c78a4da3090750456b9dfedff6527.squirrel@webmail.uio.no> Message-ID: <7412e2b88e3bd6f2d01d6e0127b5e11d.squirrel@webmail.uio.no> > > Will memmap be fixed to use offsets correctly before 1.3? 
I posted this to scipy-dev (possibly wrong list) on March 9, so I'll repeat it here: In Python 2.6, mmap has an offset keyword. NumPy's memmap should use this to allow big files to be memory mapped on 32 bit systems. Only a minor change is required:

if float(sys.version[:3]) > 2.5:

    bytes = bytes - offset

    mm = mmap.mmap(fid.fileno(), bytes, access=acc, offset=offset)

    self = ndarray.__new__(subtype, shape, dtype=descr, buffer=mm,
        offset=0, order=order)

else:

    mm = mmap.mmap(fid.fileno(), bytes, access=acc)

    self = ndarray.__new__(subtype, shape, dtype=descr, buffer=mm,
        offset=offset, order=order)


Instead of just:

mm = mmap.mmap(fid.fileno(), bytes, access=acc)

self = ndarray.__new__(subtype, shape, dtype=descr, buffer=mm,
    offset=offset, order=order)

Regards,

Sturla Molden
-------------- next part --------------
A non-text attachment was scrubbed...
Name: memmap.py
Type: text/x-python
Size: 9249 bytes
Desc: not available
URL:
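For reference, the Python 2.6 offset keyword used above behaves like this minimal sketch (the file name and window size are made up, and the file is assumed to be large enough; note that mmap requires the offset to be a multiple of mmap.ALLOCATIONGRANULARITY, a detail any memmap patch would need to respect):

    import mmap

    f = open("big.dat", "rb")
    # map only a 4096-byte window starting one allocation granule into
    # the file, instead of mapping the whole (possibly > 2 GB) file
    off = mmap.ALLOCATIONGRANULARITY
    m = mmap.mmap(f.fileno(), 4096, access=mmap.ACCESS_READ, offset=off)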
From charlesr.harris at gmail.com  Sat Mar 14 14:26:01 2009
From: charlesr.harris at gmail.com (Charles R Harris)
Date: Sat, 14 Mar 2009 12:26:01 -0600
Subject: [Numpy-discussion] Reminder: code freeze for bet at the end of the WE
In-Reply-To: <7412e2b88e3bd6f2d01d6e0127b5e11d.squirrel@webmail.uio.no>
References: <5b8d13220903140901s71935029m5ade4fdf98c22eaf@mail.gmail.com> <081c78a4da3090750456b9dfedff6527.squirrel@webmail.uio.no> <7412e2b88e3bd6f2d01d6e0127b5e11d.squirrel@webmail.uio.no>
Message-ID:

Hi Sturla,

On Sat, Mar 14, 2009 at 12:23 PM, Sturla Molden wrote:
>
> > > Will memmap be fixed to use offsets correctly before 1.3?
>
> I posted this to scipy-dev (possibly wrong list) on March 9, so I'll
> repeat it here: In Python 2.6, mmap has an offset keyword. NumPy's memmap
> should use this to allow big files to be memory mapped on 32 bit systems.
> Only a minor change is required:
>
> if float(sys.version[:3]) > 2.5:
>
>     bytes = bytes - offset
>
>     mm = mmap.mmap(fid.fileno(), bytes, access=acc, offset=offset)
>
>     self = ndarray.__new__(subtype, shape, dtype=descr, buffer=mm,
>         offset=0, order=order)
>
> else:
>
>     mm = mmap.mmap(fid.fileno(), bytes, access=acc)
>
>     self = ndarray.__new__(subtype, shape, dtype=descr, buffer=mm,
>         offset=offset, order=order)
>
>
> Instead of just:
>
> mm = mmap.mmap(fid.fileno(), bytes, access=acc)
>
> self = ndarray.__new__(subtype, shape, dtype=descr, buffer=mm,
>     offset=offset, order=order)
>

Can you open a ticket for this?

Chuck
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From charlesr.harris at gmail.com  Sat Mar 14 14:37:32 2009
From: charlesr.harris at gmail.com (Charles R Harris)
Date: Sat, 14 Mar 2009 12:37:32 -0600
Subject: [Numpy-discussion] Reminder: code freeze for bet at the end of the WE
In-Reply-To: <1cd32cbb0903141114j723999c8uf18ad47d6641a048@mail.gmail.com>
References: <5b8d13220903140901s71935029m5ade4fdf98c22eaf@mail.gmail.com> <1cd32cbb0903140957u59267c4dna20d9a5dd7acd482@mail.gmail.com> <5b8d13220903141052k783e183am627a64178fbcb80e@mail.gmail.com> <1cd32cbb0903141114j723999c8uf18ad47d6641a048@mail.gmail.com>
Message-ID:

On Sat, Mar 14, 2009 at 12:14 PM, wrote:
> On Sat, Mar 14, 2009 at 1:52 PM, David Cournapeau
> wrote:
> > On Sun, Mar 15, 2009 at 2:40 AM, Charles R Harris
> > wrote:
> >
> >>
> >> The fixes look small and I'd like them to go in. Can you put together
> some
> >> short tests for these fixes? Would it help if you had commit privileges
> in
> >> Numpy?
> >
> > Yes, I was about to suggest giving Josef commit access to numpy, I
> > unfortunately won't have much time to do anything but release tasks in
> > the next few days, including review. If someone else (you :) ) can
> > review the changes before they go in, then there is no reason why
> > they can't go in - assuming they come in very soon,
> >
> > David
>
> The correctness of the random numbers are tested in scipy.stats. They
> are not tested in np.random.tests.
> Currently, I have the test for logser disabled because it always
> fails, for hypergeometric, I picked parameters for which the random
> numbers are correct. Once the bugs are fixed, I can add or re-enable
> the tests for the current failures.
>
> Here are some tests, that should fail with the current trunk and pass
> after the fix. I don't have an unpatched version of numpy available
> right now, but these are the cases that initially showed the bugs. Can
> you verify that they fail on current or recent trunk? They don't fail
> on my patched version. But it has been some time ago that I did this
> and I would need to check the details again if these tests don't fail
> on the current trunk.
>
> {{{
> import numpy as np
>
> assert np.all(np.random.hypergeometric(3,18,11,size=10) < 4)
> assert np.all(np.random.hypergeometric(18,3,11,size=10) > 0)
>
> pr = 0.8
> N = 100000
> rvsn = np.random.logseries(pr,size=N)
> # these two frequency counts should be close to theoretical numbers
> with this large sample
> assert np.sum(rvsn==1) / float(N) > 0.45 # theoretical: 0.49706795
> assert np.sum(rvsn==1) / float(N) < 0.23 # theoretical: 0.19882718
> }}}
>

I can verify that these currently fail on my machine. I'll make regression tests out of them and then commit the fixes.

Chuck
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From sturla at molden.no  Sat Mar 14 14:43:02 2009
From: sturla at molden.no (Sturla Molden)
Date: Sat, 14 Mar 2009 19:43:02 +0100 (CET)
Subject: [Numpy-discussion] Reminder: code freeze for bet at the end of the WE
In-Reply-To:
References: <5b8d13220903140901s71935029m5ade4fdf98c22eaf@mail.gmail.com> <081c78a4da3090750456b9dfedff6527.squirrel@webmail.uio.no> <7412e2b88e3bd6f2d01d6e0127b5e11d.squirrel@webmail.uio.no>
Message-ID: <0e6277ac2d3858d4ee09204d73bd1b18.squirrel@webmail.uio.no>

> Can you open a ticket for this?

Done. Ticket #1053

Sturla

From charlesr.harris at gmail.com  Sat Mar 14 15:11:27 2009
From: charlesr.harris at gmail.com (Charles R Harris)
Date: Sat, 14 Mar 2009 13:11:27 -0600
Subject: [Numpy-discussion] Reminder: code freeze for bet at the end of the WE
In-Reply-To: <1cd32cbb0903141114j723999c8uf18ad47d6641a048@mail.gmail.com>
References: <5b8d13220903140901s71935029m5ade4fdf98c22eaf@mail.gmail.com> <1cd32cbb0903140957u59267c4dna20d9a5dd7acd482@mail.gmail.com> <5b8d13220903141052k783e183am627a64178fbcb80e@mail.gmail.com> <1cd32cbb0903141114j723999c8uf18ad47d6641a048@mail.gmail.com>
Message-ID:

Hi Josef,

On Sat, Mar 14, 2009 at 12:14 PM, wrote:

>
> {{{
> import numpy as np
>
> assert np.all(np.random.hypergeometric(3,18,11,size=10) < 4)
> assert np.all(np.random.hypergeometric(18,3,11,size=10) > 0)
>
> pr = 0.8
> N = 100000
> rvsn = np.random.logseries(pr,size=N)
> # these two frequency counts should be close to theoretical numbers
> with this large sample
> assert np.sum(rvsn==1) / float(N) > 0.45 # theoretical: 0.49706795
> assert np.sum(rvsn==1) / float(N) < 0.23 # theoretical: 0.19882718
> }}}
>

I just see one frequency count here.
Do you mean that the frequency count should fall in that range with some probability?

Chuck
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From josef.pktd at gmail.com  Sat Mar 14 15:37:03 2009
From: josef.pktd at gmail.com (josef.pktd at gmail.com)
Date: Sat, 14 Mar 2009 15:37:03 -0400
Subject: [Numpy-discussion] Reminder: code freeze for bet at the end of the WE
In-Reply-To:
References: <5b8d13220903140901s71935029m5ade4fdf98c22eaf@mail.gmail.com> <1cd32cbb0903140957u59267c4dna20d9a5dd7acd482@mail.gmail.com> <5b8d13220903141052k783e183am627a64178fbcb80e@mail.gmail.com> <1cd32cbb0903141114j723999c8uf18ad47d6641a048@mail.gmail.com>
Message-ID: <1cd32cbb0903141237q26909cdauf292ddecd6611e80@mail.gmail.com>

On Sat, Mar 14, 2009 at 3:11 PM, Charles R Harris wrote:
> Hi Josef,
>
> On Sat, Mar 14, 2009 at 12:14 PM, wrote:
>
>
>>
>> {{{
>> import numpy as np
>>
>> assert np.all(np.random.hypergeometric(3,18,11,size=10) < 4)
>> assert np.all(np.random.hypergeometric(18,3,11,size=10) > 0)
>>
>> pr = 0.8
>> N = 100000
>> rvsn = np.random.logseries(pr,size=N)
>> # these two frequency counts should be close to theoretical numbers
>> with this large sample

Sorry, cut and paste error: the second case is k=2.
For k=1 the unpatched version undersamples, for k=2 the unpatched version oversamples; that's the reason for the inequalities; the bugfix should reallocate them correctly.

for several runs with N = 100000, I get with the patched version

>>> rvsn = np.random.logseries(pr,size=N); np.sum(rvsn==1) / float(N)
in range: 0.4951, 0.4984   # unpatched version is too small

>>> rvsn = np.random.logseries(pr,size=N); np.sum(rvsn==2) / float(N)
in range: 0.1980, 0.2001   # unpatched version is too large

with constraints a bit more tight, it should be:

>> assert np.sum(rvsn==1) / float(N) > 0.49   # theoretical: 0.49706795
>> assert np.sum(rvsn==2) / float(N) < 0.205   # theoretical: 0.19882718

Josef

>> }}}
>
> I just see one frequency count here. Do you mean that the frequency count
> should fall in that range with some probability?
>
> Chuck
>

From charlesr.harris at gmail.com  Sat Mar 14 15:52:31 2009
From: charlesr.harris at gmail.com (Charles R Harris)
Date: Sat, 14 Mar 2009 13:52:31 -0600
Subject: [Numpy-discussion] Reminder: code freeze for bet at the end of the WE
In-Reply-To: <1cd32cbb0903141237q26909cdauf292ddecd6611e80@mail.gmail.com>
References: <5b8d13220903140901s71935029m5ade4fdf98c22eaf@mail.gmail.com> <1cd32cbb0903140957u59267c4dna20d9a5dd7acd482@mail.gmail.com> <5b8d13220903141052k783e183am627a64178fbcb80e@mail.gmail.com> <1cd32cbb0903141114j723999c8uf18ad47d6641a048@mail.gmail.com> <1cd32cbb0903141237q26909cdauf292ddecd6611e80@mail.gmail.com>
Message-ID:

On Sat, Mar 14, 2009 at 1:37 PM, wrote:
> On Sat, Mar 14, 2009 at 3:11 PM, Charles R Harris
> wrote:
> > Hi Josef,
> >
> > On Sat, Mar 14, 2009 at 12:14 PM, wrote:
> >
> >
> >>
> >> {{{
> >> import numpy as np
> >>
> >> assert np.all(np.random.hypergeometric(3,18,11,size=10) < 4)
> >> assert np.all(np.random.hypergeometric(18,3,11,size=10) > 0)
> >>
> >> pr = 0.8
> >> N = 100000
> >> rvsn = np.random.logseries(pr,size=N)
> >> # these two frequency counts should be close to theoretical numbers
> >> with this large sample
>
> Sorry, cut and paste error: the second case is k=2.
> For k=1 the unpatched version undersamples, for k=2 the unpatched
> version oversamples; that's the reason for the inequalities; the
> bugfix should reallocate them correctly.
> > for several runs with N = 100000, I get with the patched version > > >>> rvsn = np.random.logseries(pr,size=N); np.sum(rvsn==1) / float(N) > in range: 0.4951, 0.4984 # unpatched version is too small > > >>> rvsn = np.random.logseries(pr,size=N); np.sum(rvsn==2) / float(N) > in range: 0.1980, 0.2001 # unpatched version is too large > > with constraints a bit more tight, it should be: > > >> assert np.sum(rvsn==1) / float(N) > 0.49 # theoretical: 0.49706795 > >> assert np.sum(rvsn==2) / float(N) < 0.205 # theoretical: 0.19882718 > OK. One more question: how often do the tests fail? I want to include a note to repeat testing if the test fails. Chuck -------------- next part -------------- An HTML attachment was scrubbed... URL: From charlesr.harris at gmail.com Sat Mar 14 16:12:26 2009 From: charlesr.harris at gmail.com (Charles R Harris) Date: Sat, 14 Mar 2009 14:12:26 -0600 Subject: [Numpy-discussion] Reminder: code freeze for bet at the end of the WE In-Reply-To: References: <5b8d13220903140901s71935029m5ade4fdf98c22eaf@mail.gmail.com> <1cd32cbb0903140957u59267c4dna20d9a5dd7acd482@mail.gmail.com> <5b8d13220903141052k783e183am627a64178fbcb80e@mail.gmail.com> <1cd32cbb0903141114j723999c8uf18ad47d6641a048@mail.gmail.com> <1cd32cbb0903141237q26909cdauf292ddecd6611e80@mail.gmail.com> Message-ID: On Sat, Mar 14, 2009 at 1:52 PM, Charles R Harris wrote: > > > On Sat, Mar 14, 2009 at 1:37 PM, wrote: > >> On Sat, Mar 14, 2009 at 3:11 PM, Charles R Harris >> wrote: >> > Hi Josef, >> > >> > On Sat, Mar 14, 2009 at 12:14 PM, wrote: >> > >> > >> >> >> >> {{{ >> >> import numpy as np >> >> >> >> assert np.all(np.random.hypergeometric(3,18,11,size=10) < 4) >> >> assert np.all(np.random.hypergeometric(18,3,11,size=10) > 0) >> >> >> >> pr = 0.8 >> >> N = 100000 >> >> rvsn = np.random.logseries(pr,size=N) >> >> # these two frequency counts should be close to theoretical numbers >> >> with this large sample >> >> Sorry, cut and paste error, the second case is k=2 >> for k=1 the unpatched version undersamples, for k=2 the unpatched >> version oversamples, that's the reason for the inequalities; the >> bugfix should reallocate them correctly. >> >> for several runs with N = 100000, I get with the patched version >> >> >>> rvsn = np.random.logseries(pr,size=N); np.sum(rvsn==1) / float(N) >> in range: 0.4951, 0.4984 # unpatched version is too small >> >> >>> rvsn = np.random.logseries(pr,size=N); np.sum(rvsn==2) / float(N) >> in range: 0.1980, 0.2001 # unpatched version is too large >> >> with constraints a bit more tight, it should be: >> >> >> assert np.sum(rvsn==1) / float(N) > 0.49 # theoretical: 0.49706795 >> >> assert np.sum(rvsn==2) / float(N) < 0.205 # theoretical: 0.19882718 >> > > OK. One more question: how often do the tests fail? I want to include a > note to repeat testing if the test fails. > Mind, I don't want to test the distribution in detail, I just want something that fails with the current code and passes with the new. Chuck -------------- next part -------------- An HTML attachment was scrubbed... 
URL:

From sturla at molden.no  Sat Mar 14 16:16:29 2009
From: sturla at molden.no (Sturla Molden)
Date: Sat, 14 Mar 2009 21:16:29 +0100 (CET)
Subject: [Numpy-discussion] Enhancements for NumPy's FFTs
In-Reply-To: <5b8d13220903140901s71935029m5ade4fdf98c22eaf@mail.gmail.com>
References: <5b8d13220903140901s71935029m5ade4fdf98c22eaf@mail.gmail.com>
Message-ID: <0072be2d392cefde68c19f1fb6e2a7eb.squirrel@webmail.uio.no>

1) I have noticed that fftpack_litemodule.c does not release the GIL around calls to functions in fftpack.c. I cannot see any obvious reason for this. As far as I can tell, the functions in fftpack.c are re-entrant.

2) If fftpack_lite did release the GIL, it would allow functions in numpy.fft to use multithreading for multiple FFTs in parallel (threading.Thread is ok, no special compilation needed).

3) Is there any reason numpy.fft does not have dct? If not, I'd suggest addition of numpy.fft.dct and numpy.fft.idct.

4) Regarding ticket #400: Cython now makes this easy. NumPy's FFTs should be exposed to C extensions without calling back to Python.


Can I open a ticket for this and take care of it? At least 1, 2 and 4 should only take me an hour or so to write, so it might even be ready for 1.3.0.

Sturla Molden
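Point 2 would enable code along the lines of this minimal sketch (array sizes are arbitrary; the workers only actually overlap once the GIL is released around the C transform calls):

    import threading
    import numpy as np

    arrays = [np.random.rand(2 ** 16) for _ in range(4)]
    results = [None] * 4

    def run_fft(i):
        # each worker computes one transform; with the GIL released in
        # fftpack_lite these calls can run concurrently on a multicore box
        results[i] = np.fft.fft(arrays[i])

    threads = [threading.Thread(target=run_fft, args=(i,)) for i in range(4)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()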
From sturla at molden.no  Sat Mar 14 16:22:25 2009
From: sturla at molden.no (Sturla Molden)
Date: Sat, 14 Mar 2009 21:22:25 +0100 (CET)
Subject: [Numpy-discussion] Reminder: code freeze for bet at the end of the WE
In-Reply-To:
References: <5b8d13220903140901s71935029m5ade4fdf98c22eaf@mail.gmail.com> <1cd32cbb0903140957u59267c4dna20d9a5dd7acd482@mail.gmail.com> <5b8d13220903141052k783e183am627a64178fbcb80e@mail.gmail.com> <1cd32cbb0903141114j723999c8uf18ad47d6641a048@mail.gmail.com> <1cd32cbb0903141237q26909cdauf292ddecd6611e80@mail.gmail.com>
Message-ID:

> On Sat, Mar 14, 2009 at 1:37 PM, wrote:

> OK. One more question: how often do the tests fail? I want to include a
> note
> to repeat testing if the test fails.

I don't like this. I think the prngs should use fixed seeds known to pass the test. Depending on confidence intervals in the unit tests is really, really bad style. Tests should be deterministic.

S.M.

From charlesr.harris at gmail.com  Sat Mar 14 16:24:35 2009
From: charlesr.harris at gmail.com (Charles R Harris)
Date: Sat, 14 Mar 2009 14:24:35 -0600
Subject: [Numpy-discussion] Enhancements for NumPy's FFTs
In-Reply-To: <0072be2d392cefde68c19f1fb6e2a7eb.squirrel@webmail.uio.no>
References: <5b8d13220903140901s71935029m5ade4fdf98c22eaf@mail.gmail.com> <0072be2d392cefde68c19f1fb6e2a7eb.squirrel@webmail.uio.no>
Message-ID:

On Sat, Mar 14, 2009 at 2:16 PM, Sturla Molden wrote:
>
>
> 1) I have noticed that fftpack_litemodule.c does not release the GIL
> around calls to functions in fftpack.c. I cannot see any obvious reason for
> this. As far as I can tell, the functions in fftpack.c are re-entrant.
>
> 2) If fftpack_lite did release the GIL, it would allow functions in
> numpy.fft to use multithreading for multiple FFTs in parallel
> (threading.Thread is ok, no special compilation needed).
>
> 3) Is there any reason numpy.fft does not have dct? If not, I'd suggest
> addition of numpy.fft.dct and numpy.fft.idct.
>
> 4) Regarding ticket #400: Cython now makes this easy. NumPy's FFTs should
> be exposed to C extensions without calling back to Python.
>
>
> Can I open a ticket for this and take care of it? At least 1, 2 and 4
> should only take me an hour or so to write, so it might even be ready for
> 1.3.0.
>

Give it a shot. Note that the fft transforms also use int instead of intp, which limits the maximum transform size to 32 bits. Fixing that is somewhere on my todo list but I would be happy to leave it to you ;) Although I expect transforms > 2GB aren't all that common.

Chuck
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From charlesr.harris at gmail.com  Sat Mar 14 16:25:39 2009
From: charlesr.harris at gmail.com (Charles R Harris)
Date: Sat, 14 Mar 2009 14:25:39 -0600
Subject: [Numpy-discussion] Reminder: code freeze for bet at the end of the WE
In-Reply-To:
References: <5b8d13220903140901s71935029m5ade4fdf98c22eaf@mail.gmail.com> <1cd32cbb0903140957u59267c4dna20d9a5dd7acd482@mail.gmail.com> <5b8d13220903141052k783e183am627a64178fbcb80e@mail.gmail.com> <1cd32cbb0903141114j723999c8uf18ad47d6641a048@mail.gmail.com> <1cd32cbb0903141237q26909cdauf292ddecd6611e80@mail.gmail.com>
Message-ID:

On Sat, Mar 14, 2009 at 2:22 PM, Sturla Molden wrote:
>
> On Sat, Mar 14, 2009 at 1:37 PM, wrote:
>
> > OK. One more question: how often do the tests fail? I want to include a
> > note
> > to repeat testing if the test fails.
>
> I don't like this. I think the prngs should use fixed seeds known to pass
> the test. Depending on confidence intervals in the unit tests is really,
> really bad style. Tests should be deterministic.
>

Good idea...

Chuck
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From josef.pktd at gmail.com  Sat Mar 14 16:28:01 2009
From: josef.pktd at gmail.com (josef.pktd at gmail.com)
Date: Sat, 14 Mar 2009 16:28:01 -0400
Subject: [Numpy-discussion] Reminder: code freeze for bet at the end of the WE
In-Reply-To:
References: <5b8d13220903140901s71935029m5ade4fdf98c22eaf@mail.gmail.com> <1cd32cbb0903140957u59267c4dna20d9a5dd7acd482@mail.gmail.com> <5b8d13220903141052k783e183am627a64178fbcb80e@mail.gmail.com> <1cd32cbb0903141114j723999c8uf18ad47d6641a048@mail.gmail.com> <1cd32cbb0903141237q26909cdauf292ddecd6611e80@mail.gmail.com>
Message-ID: <1cd32cbb0903141328t4af47344mbd2ed4a300873277@mail.gmail.com>

On Sat, Mar 14, 2009 at 4:22 PM, Sturla Molden wrote:
>> On Sat, Mar 14, 2009 at 1:37 PM, wrote:
>
>> OK. One more question: how often do the tests fail? I want to include a
>> note
>> to repeat testing if the test fails.
>
> I don't like this. I think the prngs should use fixed seeds known to pass
> the test. Depending on confidence intervals in the unit tests is really,
> really bad style. Tests should be deterministic.
>
> S.M.
>

The hypergeometric tests are on the support of the distribution and should never fail. And the outcome is not random.

The test of logser with N = 100000 also should be pretty exact and fail only with very low probability in the patched version. But again this is tested in scipy.stats.

I think Sturla's idea to find a random seed that differentiates before and after will be better for numpy, and using only a small sample size e.g. N=1000, since it's pretty fast. But since I don't have an unpatched numpy version available right now, I cannot do this.

Josef
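That seed hunt could look like the following minimal sketch (the 0.48 threshold and the range of seeds tried are illustrative only; a seed qualifies when the counts separate patched from unpatched behavior):

    import numpy as np

    N = 1000
    for seed in range(100):
        np.random.seed(seed)
        rvsn = np.random.logseries(0.8, size=N)
        # the unpatched generator undersamples k=1 (theoretical frequency
        # is about 0.497), so a seed with a clearly high k=1 count on a
        # patched build makes a sharp regression test
        if np.sum(rvsn == 1) / float(N) > 0.48:
            print "candidate seed:", seed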
From charlesr.harris at gmail.com  Sat Mar 14 16:58:00 2009
From: charlesr.harris at gmail.com (Charles R Harris)
Date: Sat, 14 Mar 2009 14:58:00 -0600
Subject: [Numpy-discussion] Reminder: code freeze for bet at the end of the WE
In-Reply-To: <1cd32cbb0903141328t4af47344mbd2ed4a300873277@mail.gmail.com>
References: <5b8d13220903140901s71935029m5ade4fdf98c22eaf@mail.gmail.com> <1cd32cbb0903140957u59267c4dna20d9a5dd7acd482@mail.gmail.com> <5b8d13220903141052k783e183am627a64178fbcb80e@mail.gmail.com> <1cd32cbb0903141114j723999c8uf18ad47d6641a048@mail.gmail.com> <1cd32cbb0903141237q26909cdauf292ddecd6611e80@mail.gmail.com> <1cd32cbb0903141328t4af47344mbd2ed4a300873277@mail.gmail.com>
Message-ID:

On Sat, Mar 14, 2009 at 2:28 PM, wrote:
> On Sat, Mar 14, 2009 at 4:22 PM, Sturla Molden wrote:
> >> On Sat, Mar 14, 2009 at 1:37 PM, wrote:
> >
> >> OK. One more question: how often do the tests fail? I want to include a
> >> note
> >> to repeat testing if the test fails.
> >
> > I don't like this. I think the prngs should use fixed seeds known to pass
> > the test. Depending on confidence intervals in the unit tests is really,
> > really bad style. Tests should be deterministic.
> >
> > S.M.
> >
>
> The hypergeometric tests are on the support of the distribution and
> should never fail. And the outcome is not random.
>
> The test of logser with N = 100000 also should be pretty exact and
> fail only with very low probability in the patched version. But again
> this is tested in scipy.stats.
>
> I think Sturla's idea to find a random seed that differentiates before
> and after will be better for numpy, and using only a small sample size
> e.g. N=1000, since it's pretty fast. But since I don't have an
> unpatched numpy version available right now, I cannot do this.
>

Done. Thanks for the fixes and tests.

Chuck
-------------- next part --------------
An HTML attachment was scrubbed...
URL:
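Following Sturla's fixed-seed suggestion, a deterministic shape for such a regression test could be this sketch (a real test would assert against counts recorded once from a known-good build):

    import numpy as np

    def draw_counts(seed):
        np.random.seed(seed)
        rvsn = np.random.logseries(0.8, size=1000)
        return np.sum(rvsn == 1), np.sum(rvsn == 2)

    # a fixed seed makes the drawn sample, and hence the counts, exactly
    # reproducible, so the assertion can never fail by bad luck
    assert draw_counts(1234) == draw_counts(1234)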
From charlesr.harris at gmail.com  Sat Mar 14 17:01:31 2009
From: charlesr.harris at gmail.com (Charles R Harris)
Date: Sat, 14 Mar 2009 15:01:31 -0600
Subject: [Numpy-discussion] Enhancements for NumPy's FFTs
In-Reply-To:
References: <5b8d13220903140901s71935029m5ade4fdf98c22eaf@mail.gmail.com> <0072be2d392cefde68c19f1fb6e2a7eb.squirrel@webmail.uio.no>
Message-ID:

On Sat, Mar 14, 2009 at 2:24 PM, Charles R Harris wrote:
>
>
> On Sat, Mar 14, 2009 at 2:16 PM, Sturla Molden wrote:
>
>>
>>
>> 1) I have noticed that fftpack_litemodule.c does not release the GIL
>> around calls to functions in fftpack.c. I cannot see any obvious reason for
>> this. As far as I can tell, the functions in fftpack.c are re-entrant.
>>
>> 2) If fftpack_lite did release the GIL, it would allow functions in
>> numpy.fft to use multithreading for multiple FFTs in parallel
>> (threading.Thread is ok, no special compilation needed).
>>
>> 3) Is there any reason numpy.fft does not have dct? If not, I'd suggest
>> addition of numpy.fft.dct and numpy.fft.idct.
>>
>> 4) Regarding ticket #400: Cython now makes this easy. NumPy's FFTs should
>> be exposed to C extensions without calling back to Python.
>>
>>
>> Can I open a ticket for this and take care of it? At least 1, 2 and 4
>> should only take me an hour or so to write, so it might even be ready for
>> 1.3.0.
>>
> > Give it a shot. Note that the fft transforms also use int instead of intp,
> which limits the maximum transform size to 32 bits. Fixing that is somewhere
> on my todo list but I would be happy to leave it to you ;) Although I expect
> transforms > 2GB aren't all that common.
>

On the reentrant bit, IIRC fftpack builds a table of sin/cos. It might be worth checking/making that thread safe.

Chuck
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From josef.pktd at gmail.com  Sat Mar 14 17:12:14 2009
From: josef.pktd at gmail.com (josef.pktd at gmail.com)
Date: Sat, 14 Mar 2009 17:12:14 -0400
Subject: [Numpy-discussion] Reminder: code freeze for bet at the end of the WE
In-Reply-To:
References: <5b8d13220903140901s71935029m5ade4fdf98c22eaf@mail.gmail.com> <5b8d13220903141052k783e183am627a64178fbcb80e@mail.gmail.com> <1cd32cbb0903141114j723999c8uf18ad47d6641a048@mail.gmail.com> <1cd32cbb0903141237q26909cdauf292ddecd6611e80@mail.gmail.com> <1cd32cbb0903141328t4af47344mbd2ed4a300873277@mail.gmail.com>
Message-ID: <1cd32cbb0903141412r593c6edek74aa18054bf1863d@mail.gmail.com>

On Sat, Mar 14, 2009 at 4:58 PM, Charles R Harris wrote:
>
>
> On Sat, Mar 14, 2009 at 2:28 PM, wrote:
>>
>> On Sat, Mar 14, 2009 at 4:22 PM, Sturla Molden wrote:
>> >> On Sat, Mar 14, 2009 at 1:37 PM, wrote:
>> >
>> >> OK. One more question: how often do the tests fail? I want to include a
>> >> note
>> >> to repeat testing if the test fails.
>> >
>> > I don't like this. I think the prngs should use fixed seeds known to
>> > pass
>> > the test. Depending on confidence intervals in the unit tests is
>> > really,
>> > really bad style. Tests should be deterministic.
>> >
>> > S.M.
>> >
>>
>> The hypergeometric tests are on the support of the distribution and
>> should never fail. And the outcome is not random.
>>
>> The test of logser with N = 100000 also should be pretty exact and
>> fail only with very low probability in the patched version. But again
>> this is tested in scipy.stats.
>>
>> I think Sturla's idea to find a random seed that differentiates before
>> and after will be better for numpy, and using only a small sample size
>> e.g. N=1000, since it's pretty fast. But since I don't have an
>> unpatched numpy version available right now, I cannot do this.
>
> Done. Thanks for the fixes and tests.
>
> Chuck
>

Thanks for taking care of this. I will run my scipy.stats.distribution test over it before 1.3 is released and enable the tests in scipy trunk after the release.

Josef

From sturla at molden.no  Sat Mar 14 17:58:18 2009
From: sturla at molden.no (Sturla Molden)
Date: Sat, 14 Mar 2009 22:58:18 +0100 (CET)
Subject: [Numpy-discussion] Enhancements for NumPy's FFTs
In-Reply-To:
References: <5b8d13220903140901s71935029m5ade4fdf98c22eaf@mail.gmail.com> <0072be2d392cefde68c19f1fb6e2a7eb.squirrel@webmail.uio.no>
Message-ID: <3cabdb306b26d51ebc46ce5a9a3f623f.squirrel@webmail.uio.no>

> On Sat, Mar 14, 2009 at 2:24 PM, Charles R Harris wrote:

>> Give it a shot. Note that the fft transforms also use int instead of
>> intp,
>> which limits the maximum transform size to 32 bits. Fixing that is
>> somewhere
>> on my todo list but I would be happy to leave it to you ;) Although I
>> expect
>> transforms > 2GB aren't all that common.
>>
>
> On the reentrant bit, IIRC fftpack builds a table of sin/cos. It might be
> worth checking/making that thread safe.

Thanks, I'll take a careful look at it.
Sturla

From charlesr.harris at gmail.com  Sat Mar 14 19:35:12 2009
From: charlesr.harris at gmail.com (Charles R Harris)
Date: Sat, 14 Mar 2009 17:35:12 -0600
Subject: [Numpy-discussion] Enhancements for NumPy's FFTs
In-Reply-To: <3cabdb306b26d51ebc46ce5a9a3f623f.squirrel@webmail.uio.no>
References: <5b8d13220903140901s71935029m5ade4fdf98c22eaf@mail.gmail.com> <0072be2d392cefde68c19f1fb6e2a7eb.squirrel@webmail.uio.no> <3cabdb306b26d51ebc46ce5a9a3f623f.squirrel@webmail.uio.no>
Message-ID:

On Sat, Mar 14, 2009 at 3:58 PM, Sturla Molden wrote:
> On Sat, Mar 14, 2009 at 2:24 PM, Charles R Harris
>
> >> Give it a shot. Note that the fft transforms also use int instead of
> >> intp,
> >> which limits the maximum transform size to 32 bits. Fixing that is
> >> somewhere
> >> on my todo list but I would be happy to leave it to you ;) Although I
> >> expect
> >> transforms > 2GB aren't all that common.
> >>
> >
> > On the reentrant bit, IIRC fftpack builds a table of sin/cos. It might be
> > worth checking/making that thread safe.
>
> Thanks, I'll take a careful look at it.
>

There is also a ticket (#579) to add an implementation of the Bluestein algorithm for doing prime order fft's. This could also be used for zoom type fft's. There is lots of fft stuff to be done. I wonder if some of it shouldn't go in Scipy? I think David added some dcts to Scipy.

Chuck
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From sturla at molden.no  Sat Mar 14 21:02:55 2009
From: sturla at molden.no (Sturla Molden)
Date: Sun, 15 Mar 2009 02:02:55 +0100 (CET)
Subject: [Numpy-discussion] Enhancements for NumPy's FFTs
In-Reply-To:
References: <5b8d13220903140901s71935029m5ade4fdf98c22eaf@mail.gmail.com> <0072be2d392cefde68c19f1fb6e2a7eb.squirrel@webmail.uio.no> <3cabdb306b26d51ebc46ce5a9a3f623f.squirrel@webmail.uio.no>
Message-ID:

> On Sat, Mar 14, 2009 at 3:58 PM, Sturla Molden wrote:

> There is also a ticket (#579) to add an implementation of the Bluestein
> algorithm for doing prime order fft's. This could also be used for zoom
> type fft's. There is lots of fft stuff to be done. I wonder if some of it
> shouldn't go in Scipy? I think David added some dcts to Scipy.

I am not changing or adding algorithms for now. This is just to prevent NumPy from locking up the interpreter while doing FFTs.

The loops that are worth multithreading are done in C in fftpack_litemodule.c, not in Python in fftpack.py. I have added OpenMP pragmas around them. When NumPy gets a build process that supports OpenMP, they will execute in parallel. On GCC 4.4 this means compiling with -fopenmp and linking -lgomp -lpthread (that goes for mingw/cygwin as well).

The init function seems to be thread safe. cffti and rffti work on arrays created in the callers (fftpack_cffti and fftpack_rffti); no global objects are touched.

I'm attaching a version of fftpack_litemodule.c that fixes most of what I mentioned.

Sturla Molden
-------------- next part --------------
An embedded and charset-unspecified text was scrubbed...
Name: fftpack_litemodule.c
URL:

From sturla at molden.no  Sat Mar 14 21:23:50 2009
From: sturla at molden.no (Sturla Molden)
Date: Sun, 15 Mar 2009 02:23:50 +0100 (CET)
Subject: [Numpy-discussion] Enhancements for NumPy's FFTs
In-Reply-To:
References: <5b8d13220903140901s71935029m5ade4fdf98c22eaf@mail.gmail.com> <0072be2d392cefde68c19f1fb6e2a7eb.squirrel@webmail.uio.no>
Message-ID:

> Give it a shot.
Note that the fft transforms also use int instead of
> intp,
> which limits the maximum transform size to 32 bits. Fixing that is
> somewhere
> on my todo list but I would be happy to leave it to you ;) Although I
> expect
> transforms > 2GB aren't all that common.

By the way... When looking at fftpack.c there are two things that would likely improve the performance.

1) If we used ISO C (aka C99), arrays could be restricted, thus allowing more aggressive optimization. Now the compiler has to assume aliasing between function arguments. But as the C code is translated from Fortran, this is not the case.

2) In C, indexing arrays with unsigned integers is much more efficient (cf. AMD's optimization guide). I think the use of signed integers as array indices is inherited from Fortran77 FFTPACK. We should probably index the arrays with unsigned longs.

Sturla Molden

From charlesr.harris at gmail.com  Sat Mar 14 21:33:53 2009
From: charlesr.harris at gmail.com (Charles R Harris)
Date: Sat, 14 Mar 2009 19:33:53 -0600
Subject: [Numpy-discussion] Enhancements for NumPy's FFTs
In-Reply-To:
References: <5b8d13220903140901s71935029m5ade4fdf98c22eaf@mail.gmail.com> <0072be2d392cefde68c19f1fb6e2a7eb.squirrel@webmail.uio.no>
Message-ID:

On Sat, Mar 14, 2009 at 7:23 PM, Sturla Molden wrote:
>
> > Give it a shot. Note that the fft transforms also use int instead of
> > intp,
> > which limits the maximum transform size to 32 bits. Fixing that is
> > somewhere
> > on my todo list but I would be happy to leave it to you ;) Although I
> > expect
> > transforms > 2GB aren't all that common.
>
>
> By the way... When looking at fftpack.c there are two things that would
> likely improve the performance.
>
> 1) If we used ISO C (aka C99), arrays could be restricted, thus allowing
> more aggressive optimization. Now the compiler has to assume aliasing
> between function arguments. But as the C code is translated from Fortran,
> this is not the case.
>

We can't count on C99 at this point. Maybe David will add something so we can use c99 when it is available.

>
> 2) In C, indexing arrays with unsigned integers is much more efficient
> (cf. AMD's optimization guide). I think the use of signed integers as array
> indices is inherited from Fortran77 FFTPACK. We should probably index the
> arrays with unsigned longs.
>

I don't have a problem with this, although I'm not sure what npy type is appropriate without looking. Were you thinking of size_t? I was tempted by that. But why is it more efficient? I haven't seen any special instructions at the assembly level, so unless there is some sort of global optimization that isn't obvious I don't know where the advantage is. I always figured that a really good optimizer should derive the FFT if you just give it the DFT code ;)

Chuck
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From charlesr.harris at gmail.com  Sat Mar 14 21:36:07 2009
From: charlesr.harris at gmail.com (Charles R Harris)
Date: Sat, 14 Mar 2009 19:36:07 -0600
Subject: [Numpy-discussion] Enhancements for NumPy's FFTs
In-Reply-To:
References: <5b8d13220903140901s71935029m5ade4fdf98c22eaf@mail.gmail.com> <0072be2d392cefde68c19f1fb6e2a7eb.squirrel@webmail.uio.no> <3cabdb306b26d51ebc46ce5a9a3f623f.squirrel@webmail.uio.no>
Message-ID:

On Sat, Mar 14, 2009 at 7:02 PM, Sturla Molden wrote:
>
> > On Sat, Mar 14, 2009 at 3:58 PM, Sturla Molden wrote:
>
> > There is also a ticket (#579) to add an implementation of the Bluestein
> > algorithm for doing prime order fft's.
This could also be used for zoom
> > type fft's. There is lots of fft stuff to be done. I wonder if some of it
> > shouldn't go in Scipy? I think David added some dcts to Scipy.
>
> I am not changing or adding algorithms for now. This is just to prevent
> NumPy from locking up the interpreter while doing FFTs.
>

Well, I was hoping to get you sucked into doing some work here ;)

>
> The loops that are worth multithreading are done in C in
> fftpack_litemodule.c, not in Python in fftpack.py. I have added OpenMP
> pragmas around them. When NumPy gets a build process that supports OpenMP,
> they will execute in parallel. On GCC 4.4 this means compiling with -fopenmp
> and linking -lgomp -lpthread (that goes for mingw/cygwin as well).
>
> The init function seems to be thread safe. cffti and rffti work on arrays
> created in the callers (fftpack_cffti and fftpack_rffti); no global
> objects are touched.
>
> I'm attaching a version of fftpack_litemodule.c that fixes most of what I
> mentioned.
>

Can you put it somewhere for review? I don't think this should go into 1.3 at this late date but 1.4 is a good chance. Hopefully we will get the next release out a bit faster than this one.

Chuck
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From sturla at molden.no  Sat Mar 14 22:26:35 2009
From: sturla at molden.no (Sturla Molden)
Date: Sun, 15 Mar 2009 03:26:35 +0100 (CET)
Subject: [Numpy-discussion] Enhancements for NumPy's FFTs
In-Reply-To:
References: <5b8d13220903140901s71935029m5ade4fdf98c22eaf@mail.gmail.com> <0072be2d392cefde68c19f1fb6e2a7eb.squirrel@webmail.uio.no>
Message-ID: <51b2932a31a036cb30d719c2bbbbe458.squirrel@webmail.uio.no>

> On Sat, Mar 14, 2009 at 7:23 PM, Sturla Molden wrote:

> We can't count on C99 at this point. Maybe David will add something so we
> can use c99 when it is available.

Ok, but most GNU compilers have a __restrict__ extension for C89 and C++ that we could use. And MSVC has a compiler pragma in VS2003 and a __restrict extension in VS2005 and later versions. So we could define a macro RESTRICT to be restrict in ISO C99, __restrict__ in GCC 3 and 4, __restrict in recent versions of MSVC, and nothing elsewhere.

> I don't have a problem with this, although I'm not sure what npy type is
> appropriate without looking. Were you thinking of size_t? I was tempted by
> that. But why is it more efficient? I haven't seen any special
> instructions
> at the assembly level, so unless there is some sort of global optimization
> that isn't obvious I don't know where the advantage is.

It may be that my memory serves me badly. I thought I read it here, but it does not show examples of different assembly code being generated. So I think I'll just leave it for now and experiment with this later.

http://support.amd.com/us/Processor_TechDocs/22007.pdf

Is an npy_intp 64 bit on 64 bit systems?

Sturla Molden

From charlesr.harris at gmail.com  Sat Mar 14 22:34:00 2009
From: charlesr.harris at gmail.com (Charles R Harris)
Date: Sat, 14 Mar 2009 20:34:00 -0600
Subject: [Numpy-discussion] Enhancements for NumPy's FFTs
In-Reply-To: <51b2932a31a036cb30d719c2bbbbe458.squirrel@webmail.uio.no>
References: <5b8d13220903140901s71935029m5ade4fdf98c22eaf@mail.gmail.com> <0072be2d392cefde68c19f1fb6e2a7eb.squirrel@webmail.uio.no> <51b2932a31a036cb30d719c2bbbbe458.squirrel@webmail.uio.no>
Message-ID:

On Sat, Mar 14, 2009 at 8:26 PM, Sturla Molden wrote:
>
> On Sat, Mar 14, 2009 at 7:23 PM, Sturla Molden wrote:
>
> > We can't count on C99 at this point.
Maybe David will add something so we
> can use c99 when it is available.
>
> Ok, but most GNU compilers have a __restrict__ extension for C89 and C++
> that we could use. And MSVC has a compiler pragma in VS2003 and a
> __restrict extension in VS2005 and later versions. So we could define a macro
> RESTRICT to be restrict in ISO C99, __restrict__ in GCC 3 and 4,
> __restrict in recent versions of MSVC, and nothing elsewhere.
>
>
> > I don't have a problem with this, although I'm not sure what npy type is
> > appropriate without looking. Were you thinking of size_t? I was tempted
> by
> > that. But why is it more efficient? I haven't seen any special
> > instructions
> > at the assembly level, so unless there is some sort of global
> optimization
> > that isn't obvious I don't know where the advantage is.
>
> It may be that my memory serves me badly. I thought I read it here, but it
> does not show examples of different assembly code being generated. So I
> think I'll just leave it for now and experiment with this later.
>
> http://support.amd.com/us/Processor_TechDocs/22007.pdf
>
> Is an npy_intp 64 bit on 64 bit systems?
>

Yes, it is the same size as a pointer, but it is signed...

Chuck
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From sturla at molden.no  Sat Mar 14 22:52:30 2009
From: sturla at molden.no (Sturla Molden)
Date: Sun, 15 Mar 2009 03:52:30 +0100 (CET)
Subject: [Numpy-discussion] Enhancements for NumPy's FFTs
In-Reply-To: <51b2932a31a036cb30d719c2bbbbe458.squirrel@webmail.uio.no>
References: <5b8d13220903140901s71935029m5ade4fdf98c22eaf@mail.gmail.com> <0072be2d392cefde68c19f1fb6e2a7eb.squirrel@webmail.uio.no> <51b2932a31a036cb30d719c2bbbbe458.squirrel@webmail.uio.no>
Message-ID:

> Ok, but most GNU compilers have a __restrict__ extension for C89 and C++
> that we could use. And MSVC has a compiler pragma in VS2003 and a
> __restrict extension in VS2005 and later versions. So we could define a macro
> RESTRICT to be restrict in ISO C99, __restrict__ in GCC 3 and 4,
> __restrict in recent versions of MSVC, and nothing elsewhere.

I know it's ugly, but something like this:

#define RESTRICT
#define INLINE
/* use GNU extensions if possible */
#ifdef __GNUC__
#if (__GNUC__ >= 3)
#undef RESTRICT
#undef INLINE
#define RESTRICT __restrict__
#define INLINE __inline__
#endif
#endif
/* use MSVC extensions if possible */
#ifdef _MSC_VER
#if (_MSC_VER >= 1400)
#undef RESTRICT
#undef INLINE
#define RESTRICT __restrict
#define INLINE inline
#endif
#endif
#ifdef __cplusplus
extern "C" {
#undef INLINE
#define INLINE inline
#else
/* use C99 if possible */
#if (__STDC_VERSION__ >= 199901L)
#undef RESTRICT
#undef INLINE
#define RESTRICT restrict
#define INLINE inline
#endif
#endif

#ifdef DOUBLE
typedef double Treal;
#else
typedef float Treal;
#endif
typedef Treal *RESTRICT Vreal; /* V as in "vector" */

S.M.

From charlesr.harris at gmail.com  Sat Mar 14 22:52:57 2009
From: charlesr.harris at gmail.com (Charles R Harris)
Date: Sat, 14 Mar 2009 20:52:57 -0600
Subject: [Numpy-discussion] Enhancements for NumPy's FFTs
In-Reply-To: <51b2932a31a036cb30d719c2bbbbe458.squirrel@webmail.uio.no>
References: <5b8d13220903140901s71935029m5ade4fdf98c22eaf@mail.gmail.com> <0072be2d392cefde68c19f1fb6e2a7eb.squirrel@webmail.uio.no> <51b2932a31a036cb30d719c2bbbbe458.squirrel@webmail.uio.no>
Message-ID:

On Sat, Mar 14, 2009 at 8:26 PM, Sturla Molden wrote:
>
> On Sat, Mar 14, 2009 at 7:23 PM, Sturla Molden wrote:
>
> > We can't count on C99 at this point. Maybe David will add something so we
> > can use c99 when it is available.
> > Ok, but most GNU compilers have a __restrict__ extension for C89 and C++ > that we could use. And MSVC has a compiler pragma in VS2003 and a > __restrict extension in VS2005 and later versions. So we could define a > macro RESTRICT to be restrict in ISO C99, __restrict__ in GCC 3 and 4, > __restrict in recent versions of MSVC, and nothing elsewhere. > > I know it's ugly, but something like this: > Can't help but be ugly when dealing with all the compilers out there.
> #define RESTRICT
> #define INLINE
> /* use GNU extensions if possible */
> #ifdef __GNUC__
> #if (__GNUC__ >= 3)
> #undef RESTRICT
> #undef INLINE
> #define RESTRICT __restrict__
> #define INLINE __inline__
> #endif
> #endif
> /* use MSVC extensions if possible */
> #ifdef _MSC_VER
> #if (_MSC_VER >= 1400)
> #undef RESTRICT
> #undef INLINE
> #define RESTRICT __restrict
> #define INLINE inline
> #endif
> #endif
I think MSVC uses _inline
> #ifdef __cplusplus
> extern "C" {
> #undef INLINE
> #define INLINE inline
> #else
> /* use C99 if possible */
> #if (__STDC_VERSION__ >= 199901L)
> #undef RESTRICT
> #undef INLINE
> #define RESTRICT restrict
> #define INLINE inline
> #endif
> #endif
What does this last bit do? We implicitly assume that IEEE floating point is available.
> #ifdef DOUBLE
> typedef double Treal;
> #else
> typedef float Treal;
> #endif
> typedef Treal *RESTRICT Vreal;  /* V as in "vector" */
I'm not sure about the names. I would prefer to keep the declarations along the lines of

double * NPY_RESTRICT ptmp;

as it is easier to read and understand without going to the macro definition. Note that David has a quite involved check for the inline keyword implementation and I expect he would want to do the same for the restrict keyword.
I think using lots of #if defined(xxx) might be easier, but I leave that stuff alone. It's David's headache. Chuck
From sturla at molden.no Sun Mar 15 00:44:04 2009 From: sturla at molden.no (Sturla Molden) Date: Sun, 15 Mar 2009 05:44:04 +0100 (CET) Subject: [Numpy-discussion] Enhancements for NumPy's FFTs In-Reply-To: References: <5b8d13220903140901s71935029m5ade4fdf98c22eaf@mail.gmail.com> <0072be2d392cefde68c19f1fb6e2a7eb.squirrel@webmail.uio.no> <51b2932a31a036cb30d719c2bbbbe458.squirrel@webmail.uio.no> Message-ID: <7ae53e8ff27f6a3dbe4177ce4c56b877.squirrel@webmail.uio.no>
> > I think MSVC uses _inline
No, MSVC uses a double underscore. That is, __restrict for variable names and __declspec(restrict) for function return values.
>> #if (__STDC_VERSION__ >= 199901L)
>> #undef RESTRICT
>> #undef INLINE
>> #define RESTRICT restrict
>> #define INLINE inline
>> #endif
>> #endif
> > What does this last bit do? We implicitly assume that IEEE floating point > is > available.
It uses the restrict keyword in C99. The last test will pass on any C99 compiler.
>> #ifdef DOUBLE
>> typedef double Treal;
>> #else
>> typedef float Treal;
>> #endif
>> typedef Treal *RESTRICT Vreal;  /* V as in "vector" */
> I'm not sure about the names. I would prefer to keep the declarations > along > the lines of > > double * NPY_RESTRICT ptmp;
Yes, but then I would have to go in and edit all function bodies in fftpack.c as well. fftpack.c currently uses "Treal array[]" as its naming convention, so I just changed that to "Vreal array". I changed all ints to long for 64 bit support. Well, it compiles and runs ok on my computer now. I'll open a ticket for the FFT. I'll attach the C files to the ticket. Sturla Molden
From charlesr.harris at gmail.com Sun Mar 15 01:00:17 2009 From: charlesr.harris at gmail.com (Charles R Harris) Date: Sat, 14 Mar 2009 23:00:17 -0600 Subject: [Numpy-discussion] Enhancements for NumPy's FFTs In-Reply-To: <7ae53e8ff27f6a3dbe4177ce4c56b877.squirrel@webmail.uio.no> References: <5b8d13220903140901s71935029m5ade4fdf98c22eaf@mail.gmail.com> <0072be2d392cefde68c19f1fb6e2a7eb.squirrel@webmail.uio.no> <51b2932a31a036cb30d719c2bbbbe458.squirrel@webmail.uio.no> <7ae53e8ff27f6a3dbe4177ce4c56b877.squirrel@webmail.uio.no> Message-ID:
On Sat, Mar 14, 2009 at 10:44 PM, Sturla Molden wrote: > > > > I think MSVC uses _inline > > No, MSVC uses a double underscore. That is, __restrict for variable names > and __declspec(restrict) for function return values. > Yes, but MSVC uses _inline for inline.
> >> #if (__STDC_VERSION__ >= 199901L)
> >> #undef RESTRICT
> >> #undef INLINE
> >> #define RESTRICT restrict
> >> #define INLINE inline
> >> #endif
> >> #endif
> > What does this last bit do? We implicitly assume that IEEE floating point > > is > > available. > > It uses the restrict keyword in C99. The last test will pass on any C99 > compiler.
> >> #ifdef DOUBLE
> >> typedef double Treal;
> >> #else
> >> typedef float Treal;
> >> #endif
> >> typedef Treal *RESTRICT Vreal;  /* V as in "vector" */
> > I'm not sure about the names. I would prefer to keep the declarations > > along > > the lines of > > > > double * NPY_RESTRICT ptmp;
So use a local define. > > Yes, but then I would have to go in and edit all function bodies in > fftpack.c as well. fftpack.c currently uses "Treal array[]" as its naming > convention, so I just changed that to "Vreal array". > > I changed all ints to long for 64 bit support.
> Long is 32 bits on 64 bit windows. You need long long there. That's why npy_intp is preferred. Chuck
From cournape at gmail.com Sun Mar 15 03:18:48 2009 From: cournape at gmail.com (David Cournapeau) Date: Sun, 15 Mar 2009 16:18:48 +0900 Subject: [Numpy-discussion] Enhancements for NumPy's FFTs In-Reply-To: <0072be2d392cefde68c19f1fb6e2a7eb.squirrel@webmail.uio.no> References: <5b8d13220903140901s71935029m5ade4fdf98c22eaf@mail.gmail.com> <0072be2d392cefde68c19f1fb6e2a7eb.squirrel@webmail.uio.no> Message-ID: <5b8d13220903150018o5ea70799j2759cf2579892840@mail.gmail.com>
On Sun, Mar 15, 2009 at 5:16 AM, Sturla Molden wrote: > > > 1) I have noticed that fftpack_litemodule.c does not release the GIL > around calls to functions in fftpack.c. I cannot see any obvious reason for > this. As far as I can tell, the functions in fftpack.c are re-entrant. > > 2) If fftpack_lite did release the GIL, it would allow functions in > numpy.fft to use multithreading for multiple FFTs in parallel > (threading.Thread is OK; no special compilation needed). Both are fine to modify for 1.3. > > 3) Is there any reason numpy.fft does not have dct? If not, I'd suggest > addition of numpy.fft.dct and numpy.fft.idct. numpy.fft is only here for compatibility with older array packages AFAIK. So we should not add new features. I would much prefer to see those kinds of things in scipy.fftpack (which already has DCT I, II and III). Adding dct to numpy.fft will only add to the confusion, I think. There is already enough duplication between numpy and scipy; let's not add more. > > 4) Regarding ticket #400: Cython now makes this easy. NumPy's FFTs should > be exposed to C extensions without calling back to Python. Agreed - that's one area where we could do much more, but this has to be done carefully. I am a bit worried about doing this kind of thing at the last minute. 1, 2 are OK for the trunk, but not 4 (because it impacts the public API), IMO. cheers, David
From cournape at gmail.com Sun Mar 15 03:24:03 2009 From: cournape at gmail.com (David Cournapeau) Date: Sun, 15 Mar 2009 16:24:03 +0900 Subject: [Numpy-discussion] Enhancements for NumPy's FFTs In-Reply-To: References: <5b8d13220903140901s71935029m5ade4fdf98c22eaf@mail.gmail.com> <0072be2d392cefde68c19f1fb6e2a7eb.squirrel@webmail.uio.no> <51b2932a31a036cb30d719c2bbbbe458.squirrel@webmail.uio.no> Message-ID: <5b8d13220903150024k3ecea5eft1707d441418fff2e@mail.gmail.com>
On Sun, Mar 15, 2009 at 12:46 PM, Charles R Harris wrote: > > > As it is easier to read and understand without going to the macro > definition. Note that David has a quite involved check for the inline > keyword implementation and I expect he would want to do the same for the > restrict keyword. I think using lots of #if defined(xxx) might be easier It is easier but less maintainable and less portable. With checks, we can deal with new platforms more easily. And it is more robust. Things depending on the platform should be the exception, not the norm. cheers, David
From dsdale24 at gmail.com Sun Mar 15 10:46:30 2009 From: dsdale24 at gmail.com (Darren Dale) Date: Sun, 15 Mar 2009 10:46:30 -0400 Subject: [Numpy-discussion] suggestion for generalizing numpy functions In-Reply-To: <49B59350.7080601@enthought.com> References: <49B59350.7080601@enthought.com> Message-ID:
Hi Travis, On Mon, Mar 9, 2009 at 6:08 PM, Travis E.
Oliphant wrote: > Darren Dale wrote: > > On Mon, Mar 9, 2009 at 9:50 AM, Darren Dale > > wrote: > > > > I spent some time over the weekend fixing a few bugs in numpy that > > were exposed when attempting to use ufuncs with ndarray > > subclasses. It got me thinking that, with relatively little work, > > numpy's functions could be made to be more general. For example, > > the numpy.ma module redefines many of the > > standard ufuncs in order to do some preprocessing before the > > builtin ufunc is called. Likewise, in the units/quantities package > > I have been working on, I would like to perform a dimensional > > analysis to make sure an operation is allowed before I call a > > ufunc that might change data in place. > > > > The suggestions behind this idea are interesting. It seems related, to me, to the concept of "contexts" that Eric presented at SciPy a couple of years ago and that keeps coming up at Enthought. It may be of benefit to solve the problem from that perspective rather than the "sub-class" perspective. > > Unfortunately, I don't have time to engage this discussion as it > deserves, but I wanted to encourage you because I think there are good > ideas in what you are doing. The sub-class route may be a decent > solution, but it also might be worthwhile to think from the perspective > of contexts as well. > > Basically, the context idea is that rather than sub-classing the ndarray, > you create a more powerful name-space for code that uses arrays to live > in. Because python code can execute using a namespace that is any > dictionary-like thing, you can create a "namespace" object with more > powerful getters and setters that intercepts the getting and setting of > names as the Python code is executing. > > This allows every variable to be "adapted" in a manner analogous to > "type-maps" in SWIG --- but in a more powerful way. We have been > taking advantage of this basic but powerful idea quite a bit. > Unit-handling is a case where "contexts" and generic functions, rather > than sub-classes, appear to be an approach to solving the problem. > > The other important idea about contexts is that you can layer on > adapters for getting and setting variables in the namespace, which > provide more hooks for doing some powerful things in easy-to-remember > ways. > > I apologize if it sounds like I'm hijacking your question to promote an > agenda. I really like the generality you are trying to reach with your > suggestions and just wanted to voice the opinion that it might be better > to look for a solution using the two dimensions of "objects" and > "namespaces" (o.k. generic functions are probably another dimension in > my metaphor) rather than just sub-classes of objects. > Contexts may be an alternative approach, but I do not understand the vision or how they would be applied to the problem of unit handling. The Quantities package is already in a useful and working state, based on an ndarray subclass. My goal at this point is to make quantities more useful with numpy/scipy. There is already a mechanism for doing so; it just needs to be tweaked in order for it to be more generally applicable. Hopefully I can interest some of the current numpy developers in the discussion after 1.3 is released. Darren
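To make the "context" idea above concrete, here is a minimal sketch of a dictionary-like namespace that intercepts variable lookups. The LoggingContext name and its behaviour are purely illustrative and are not from any existing package; a unit-aware context could use the same __getitem__ hook to wrap each value in a quantity type before an expression sees it:

class LoggingContext(dict):
    # a mapping that reports (and could adapt) every name it hands out
    def __getitem__(self, name):
        value = dict.__getitem__(self, name)
        print "looking up %s" % name  # wrap/adapt value here
        return value

ctx = LoggingContext(a=1.0, b=2.0)
# eval consults a non-plain-dict locals mapping through __getitem__,
# so every lookup can be adapted before the expression uses it
print eval("a + b", {}, ctx)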
From sturla at molden.no Sun Mar 15 11:19:48 2009 From: sturla at molden.no (Sturla Molden) Date: Sun, 15 Mar 2009 16:19:48 +0100 (CET) Subject: [Numpy-discussion] Enhancements for NumPy's FFTs In-Reply-To: <5b8d13220903150018o5ea70799j2759cf2579892840@mail.gmail.com> References: <5b8d13220903140901s71935029m5ade4fdf98c22eaf@mail.gmail.com> <0072be2d392cefde68c19f1fb6e2a7eb.squirrel@webmail.uio.no> <5b8d13220903150018o5ea70799j2759cf2579892840@mail.gmail.com> Message-ID: <644e0ebe8fb9c5be55892719f5643a3d.squirrel@webmail.uio.no>
>> 1) I have noticed that fftpack_litemodule.c does not release the GIL >> around calls to functions in fftpack.c. I cannot see any obvious reason >> for >> this. As far as I can tell, the functions in fftpack.c are re-entrant. >> >> 2) If fftpack_lite did release the GIL, it would allow functions in >> numpy.fft to use multithreading for multiple FFTs in parallel >> (threading.Thread is OK; no special compilation needed). > > Both are fine to modify for 1.3. There is a version of fftpack_litemodule.c, fftpack.c and fftpack.h that does this attached to ticket #1055. The two important changes are releasing the GIL and using npy_intp for 64 bit support. Minor changes: There is a restrict qualifier in fftpack.c. If it is not compiled with C99, it tries to use similar GNU or MS extensions. There are some OpenMP pragmas in fftpack_litemodule.c. If you don't compile with OpenMP support, they do nothing. If you do compile with OpenMP, they will make certain FFTs run in parallel. I can comment them out if you prefer. Sturla Molden
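For reference, this is the kind of usage pattern the GIL change in ticket #1055 would enable; a sketch only, not code from the patch itself:

import threading
import numpy as np

def worker(signals, results, i):
    # each thread computes one independent transform
    results[i] = np.fft.fft(signals[i])

signals = [np.random.randn(2**16) for i in range(4)]
results = [None] * 4
threads = [threading.Thread(target=worker, args=(signals, results, i))
           for i in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()

With the GIL held for the duration of each transform the threads above serialize; with it released, they can use one core each.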
From david at ar.media.kyoto-u.ac.jp Sun Mar 15 11:33:28 2009 From: david at ar.media.kyoto-u.ac.jp (David Cournapeau) Date: Mon, 16 Mar 2009 00:33:28 +0900 Subject: [Numpy-discussion] Enhancements for NumPy's FFTs In-Reply-To: <644e0ebe8fb9c5be55892719f5643a3d.squirrel@webmail.uio.no> References: <5b8d13220903140901s71935029m5ade4fdf98c22eaf@mail.gmail.com> <0072be2d392cefde68c19f1fb6e2a7eb.squirrel@webmail.uio.no> <5b8d13220903150018o5ea70799j2759cf2579892840@mail.gmail.com> <644e0ebe8fb9c5be55892719f5643a3d.squirrel@webmail.uio.no> Message-ID: <49BD1FC8.1060205@ar.media.kyoto-u.ac.jp>
Sturla Molden wrote: > > There is a version of fftpack_litemodule.c, fftpack.c and fftpack.h that > does this attached to ticket #1055. The two important changes are > releasing the GIL and using npy_intp for 64 bit support. > Would it be possible to make the changes as a patch (svn diff) - this makes things easier to review. > Minor changes: > > There is a restrict qualifier in fftpack.c. If it is not compiled with > C99, it tries to use similar GNU or MS extensions. > > There are some OpenMP pragmas in fftpack_litemodule.c. If you don't compile > with OpenMP support, they do nothing. If you do compile with OpenMP, they > will make certain FFTs run in parallel. I can comment them out if you > prefer. > Yes, I would be more comfortable without them (for 1.3). This is > typically the kind of small change which can be a PITA to deal with > just before a release because it breaks some platforms in non-obvious > ways. For the restrict keyword support, I will add a distutils check to avoid the compiler-specifics (again after 1.3). cheers, David
From pav at iki.fi Sun Mar 15 12:32:59 2009 From: pav at iki.fi (Pauli Virtanen) Date: Sun, 15 Mar 2009 16:32:59 +0000 (UTC) Subject: [Numpy-discussion] Enhancements for NumPy's FFTs References: <5b8d13220903140901s71935029m5ade4fdf98c22eaf@mail.gmail.com> <0072be2d392cefde68c19f1fb6e2a7eb.squirrel@webmail.uio.no> <5b8d13220903150018o5ea70799j2759cf2579892840@mail.gmail.com> <644e0ebe8fb9c5be55892719f5643a3d.squirrel@webmail.uio.no> <49BD1FC8.1060205@ar.media.kyoto-u.ac.jp> Message-ID:
Mon, 16 Mar 2009 00:33:28 +0900, David Cournapeau wrote: > Sturla Molden wrote: >> >> There is a version of fftpack_litemodule.c, fftpack.c and fftpack.h >> that does this attached to ticket #1055. The two important changes are >> releasing the GIL and using npy_intp for 64 bit support. >> >> > Would it be possible to make the changes as a patch (svn diff) - this > makes things easier to review. Also, you could post the patch on the http://codereview.appspot.com site. Then it would be easier both to review and to keep track of its revisions. (Attachments, especially whole-file ones sent to the mailing list, are IMHO significantly more cumbersome for all parties concerned.) In practice the code review tool is also easier to use than sending SVN diffs to the mailing list. If you are working on a SVN checkout, I recommend using the upload tool http://codereview.appspot.com/static/upload.py to upload the patch to the code review site. Just do

python upload.py

on the SVN checkout containing your changes. (You'll need a Google account for this, though.) When revising the patch after the initial upload, specify the Codereview site issue number to the upload:

python upload.py -i 12345

so that the new version of the patch is marked as an improved version of the old one. -- Pauli Virtanen
From sturla at molden.no Sun Mar 15 12:43:17 2009 From: sturla at molden.no (Sturla Molden) Date: Sun, 15 Mar 2009 17:43:17 +0100 (CET) Subject: [Numpy-discussion] Enhancements for NumPy's FFTs In-Reply-To: <49BD1FC8.1060205@ar.media.kyoto-u.ac.jp> References: <5b8d13220903140901s71935029m5ade4fdf98c22eaf@mail.gmail.com> <0072be2d392cefde68c19f1fb6e2a7eb.squirrel@webmail.uio.no> <5b8d13220903150018o5ea70799j2759cf2579892840@mail.gmail.com> <644e0ebe8fb9c5be55892719f5643a3d.squirrel@webmail.uio.no> <49BD1FC8.1060205@ar.media.kyoto-u.ac.jp> Message-ID:
> Would it be possible to make the changes as a patch (svn diff) - this > makes things easier to review. I've added diff files to ticket #1055. > Yes, I would be more comfortable without them (for 1.3). This is > typically the kind of small change which can be a PITA to deal with > just before a release because it breaks some platforms in non-obvious > ways. Ok, they are commented out. > For the restrict keyword support, I will add a distutils check to avoid > the compiler-specifics (again after 1.3). I've added a header file npy_restrict.h that defines a NPY_RESTRICT symbol.
Best regards, Sturla
From sturla at molden.no Sun Mar 15 12:48:51 2009 From: sturla at molden.no (Sturla Molden) Date: Sun, 15 Mar 2009 17:48:51 +0100 (CET) Subject: [Numpy-discussion] Enhancements for NumPy's FFTs In-Reply-To: References: <5b8d13220903140901s71935029m5ade4fdf98c22eaf@mail.gmail.com> <0072be2d392cefde68c19f1fb6e2a7eb.squirrel@webmail.uio.no> <5b8d13220903150018o5ea70799j2759cf2579892840@mail.gmail.com> <644e0ebe8fb9c5be55892719f5643a3d.squirrel@webmail.uio.no> <49BD1FC8.1060205@ar.media.kyoto-u.ac.jp> Message-ID:
> Mon, 16 Mar 2009 00:33:28 +0900, David Cournapeau wrote: > > Also, you could post the patch on the http://codereview.appspot.com site. > Then it would be easier both to review and to keep track of its > revisions I have posted the files here: http://projects.scipy.org/numpy/ticket/1055 Sturla
From pav at iki.fi Sun Mar 15 13:10:54 2009 From: pav at iki.fi (Pauli Virtanen) Date: Sun, 15 Mar 2009 17:10:54 +0000 (UTC) Subject: [Numpy-discussion] Enhancements for NumPy's FFTs References: <5b8d13220903140901s71935029m5ade4fdf98c22eaf@mail.gmail.com> <0072be2d392cefde68c19f1fb6e2a7eb.squirrel@webmail.uio.no> <5b8d13220903150018o5ea70799j2759cf2579892840@mail.gmail.com> <644e0ebe8fb9c5be55892719f5643a3d.squirrel@webmail.uio.no> <49BD1FC8.1060205@ar.media.kyoto-u.ac.jp> Message-ID:
Sun, 15 Mar 2009 17:48:51 +0100, Sturla Molden wrote: >> Mon, 16 Mar 2009 00:33:28 +0900, David Cournapeau wrote: >> >> Also, you could post the patch on the http://codereview.appspot.com >> site. Then it would be easier both to review and to keep track of its >> revisions > > I have posted the files here: > > http://projects.scipy.org/numpy/ticket/1055 Well, that's nearly as good. (Though submitting a single svn diff containing all changes would have been a bit easier to handle than separate patches for each file. But a small nitpick only.) But I wonder if there is a way to improve the behavior of Trac with attachments/patches; there currently seem to be some warts: - Can a non-admin user delete or mark some attachments obsolete? - Trac doesn't want to show all patches in HTML, and doesn't recognize the .diff suffix. Maybe this can be fixed. - Inline comments in patches would be nice, as in codereview. -- Pauli Virtanen
From sturla at molden.no Sun Mar 15 13:43:37 2009 From: sturla at molden.no (Sturla Molden) Date: Sun, 15 Mar 2009 18:43:37 +0100 (CET) Subject: [Numpy-discussion] Enhancements for NumPy's FFTs In-Reply-To: References: <5b8d13220903140901s71935029m5ade4fdf98c22eaf@mail.gmail.com> <0072be2d392cefde68c19f1fb6e2a7eb.squirrel@webmail.uio.no> <5b8d13220903150018o5ea70799j2759cf2579892840@mail.gmail.com> <644e0ebe8fb9c5be55892719f5643a3d.squirrel@webmail.uio.no> <49BD1FC8.1060205@ar.media.kyoto-u.ac.jp> Message-ID:
> Well, that's nearly as good. (Though submitting a single svn diff > containing all changes would have been a bit easier to handle than > separate patches for each file. But a small nitpick only.) The problem is I am really bad at using these tools. I have TortoiseSVN installed, but no idea how to use it. :( I cannot delete any file attachment in trac, but I can overwrite the files I've posted. S.M.
From sturla at molden.no Sun Mar 15 14:57:10 2009 From: sturla at molden.no (Sturla Molden) Date: Sun, 15 Mar 2009 19:57:10 +0100 (CET) Subject: [Numpy-discussion] Superfluous array transpose (cf. ticket #1054) Message-ID: <1ffdb2ab49b762b119b39943c7d30021.squirrel@webmail.uio.no>
Regarding ticket #1054: what is the reason for this strange behaviour?
>>> a = np.zeros((10,10), order='F')
>>> a.flags
  C_CONTIGUOUS : False
  F_CONTIGUOUS : True
  OWNDATA : True
  WRITEABLE : True
  ALIGNED : True
  UPDATEIFCOPY : False
>>> (a+1).flags
  C_CONTIGUOUS : True
  F_CONTIGUOUS : False
  OWNDATA : True
  WRITEABLE : True
  ALIGNED : True
  UPDATEIFCOPY : False

Sturla Molden
From charlesr.harris at gmail.com Sun Mar 15 15:10:52 2009 From: charlesr.harris at gmail.com (Charles R Harris) Date: Sun, 15 Mar 2009 13:10:52 -0600 Subject: [Numpy-discussion] Py_ssize_t Message-ID:
Some of the calls to the python c-api have been changed to use Py_ssize_t. As Py_ssize_t was not available in Python 2.4, I wonder if we check whether it is defined and set it to int if not. Also, are we running tests on Python 2.4 for the release? Chuck
From cournape at gmail.com Sun Mar 15 15:19:45 2009 From: cournape at gmail.com (David Cournapeau) Date: Mon, 16 Mar 2009 04:19:45 +0900 Subject: [Numpy-discussion] Py_ssize_t In-Reply-To: References: Message-ID: <5b8d13220903151219n7b2c076cy1a61d8ab4bdd86f5@mail.gmail.com>
On Mon, Mar 16, 2009 at 4:10 AM, Charles R Harris wrote: > Some of the calls to the python c-api have been changed to use Py_ssize_t. > As Py_ssize_t was not available in Python 2.4, I wonder if we check whether it is > defined and set it to int if not. Yes, we do, in ndarrayobject.h > Also, are we running tests on Python 2.4 > for the release? At least I do - I have access to a RHEL machine, where the default python is 2.4. Since it is 64 bits as well, it is pretty good for detecting those kinds of problems. David
From pav at iki.fi Sun Mar 15 16:51:29 2009 From: pav at iki.fi (Pauli Virtanen) Date: Sun, 15 Mar 2009 20:51:29 +0000 (UTC) Subject: [Numpy-discussion] Superfluous array transpose (cf. ticket #1054) References: <1ffdb2ab49b762b119b39943c7d30021.squirrel@webmail.uio.no> Message-ID:
Sun, 15 Mar 2009 19:57:10 +0100, Sturla Molden wrote: > Regarding ticket #1054: what is the reason for this strange behaviour?
>>>> a = np.zeros((10,10), order='F')
>>>> a.flags
> C_CONTIGUOUS : False
> F_CONTIGUOUS : True
> OWNDATA : True
> WRITEABLE : True
> ALIGNED : True
> UPDATEIFCOPY : False
>>>> (a+1).flags
> C_CONTIGUOUS : True
> F_CONTIGUOUS : False
> OWNDATA : True
> WRITEABLE : True
> ALIGNED : True
> UPDATEIFCOPY : False
New numpy arrays are by default in C-order, I believe. -- Pauli Virtanen
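A short illustration of the practical consequence: in-place operations do preserve the Fortran ordering, since no new array is created, so a copy-free workaround for the case above is:

>>> a = np.zeros((10,10), order='F')
>>> a += 1                    # in-place: same buffer, order kept
>>> a.flags['F_CONTIGUOUS']
True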
So, if you have used IPython and it has made a significant contribution to your project, work, research, company, whatever, I'd be very grateful if you let me know. A short paragraph on what this benefit has been is all I ask. Once I gather any information I get, I would contact directly some of the responders to ask for your authorization before quoting you. I should stress that any information you give me will only go in a documentation packet in support of my legal/residency process here in the USA (think of it as an oversized, obnoxiously detailed CV that goes beyond just publications and regular academic information). To keep traffic off this list, please send your replies directly to me, either at this address or my regular work one: Fernando.Perez at berkeley.edu In advance, many thanks to anyone willing to reply. I've never asked for anything in return for working on IPython and the ecosystem of scientific Python tools, but this is actually very important, so any information you can provide me will be very useful. Best regards, Fernando Perez. From dwf at cs.toronto.edu Mon Mar 16 04:11:20 2009 From: dwf at cs.toronto.edu (David Warde-Farley) Date: Mon, 16 Mar 2009 04:11:20 -0400 Subject: [Numpy-discussion] inplace dot products In-Reply-To: References: Message-ID: <4028D465-0C82-4267-AAC8-26177A5BBCC6@cs.toronto.edu> On 20-Feb-09, at 6:41 AM, Olivier Grisel wrote: > Alright, thanks for the reply. > > Is there a canonical way /sample code to gain low level access to > blas / lapack > atlas routines using ctypes from numpy / scipy code? > > I don't mind fixing the dimensions and the ndtype of my array if it > can > decrease the memory overhead. I got some clarification from Pearu Peterson off-list. For gemm the issue is that if the matrix C is not Fortran-ordered, it will be copied, and that copy will be over-written. order='F' when creating the array being overwritten will fix this. DWF From pearu at cens.ioc.ee Mon Mar 16 04:27:58 2009 From: pearu at cens.ioc.ee (Pearu Peterson) Date: Mon, 16 Mar 2009 10:27:58 +0200 (EET) Subject: [Numpy-discussion] Superfluous array transpose (cf. ticket #1054) In-Reply-To: <1ffdb2ab49b762b119b39943c7d30021.squirrel@webmail.uio.no> References: <1ffdb2ab49b762b119b39943c7d30021.squirrel@webmail.uio.no> Message-ID: <39525.172.17.0.4.1237192078.squirrel@cens.ioc.ee> On Sun, March 15, 2009 8:57 pm, Sturla Molden wrote: > > Regarding ticket #1054. What is the reason for this strange behaviour? > >>>> a = np.zeros((10,10),order='F') >>>> a.flags > C_CONTIGUOUS : False > F_CONTIGUOUS : True > OWNDATA : True > WRITEABLE : True > ALIGNED : True > UPDATEIFCOPY : False >>>> (a+1).flags > C_CONTIGUOUS : True > F_CONTIGUOUS : False > OWNDATA : True > WRITEABLE : True > ALIGNED : True > UPDATEIFCOPY : False I wonder if this behavior could be considered as a bug because it does not seem to have any advantages but only hides the storage order change and that may introduce inefficiencies. If a operation produces new array then the new array should have the storage properties of the lhs operand. That would allow writing code a = zeros(, order='F') b = a + 1 instead of a = zeros(, order='F') b = a[:] b += 1 to keep the storage properties in operations. Regards, Pearu From sturla at molden.no Mon Mar 16 10:05:31 2009 From: sturla at molden.no (Sturla Molden) Date: Mon, 16 Mar 2009 15:05:31 +0100 Subject: [Numpy-discussion] Superfluous array transpose (cf. 
From pearu at cens.ioc.ee Mon Mar 16 04:27:58 2009 From: pearu at cens.ioc.ee (Pearu Peterson) Date: Mon, 16 Mar 2009 10:27:58 +0200 (EET) Subject: [Numpy-discussion] Superfluous array transpose (cf. ticket #1054) In-Reply-To: <1ffdb2ab49b762b119b39943c7d30021.squirrel@webmail.uio.no> References: <1ffdb2ab49b762b119b39943c7d30021.squirrel@webmail.uio.no> Message-ID: <39525.172.17.0.4.1237192078.squirrel@cens.ioc.ee>
On Sun, March 15, 2009 8:57 pm, Sturla Molden wrote: > > Regarding ticket #1054: what is the reason for this strange behaviour? >
>>>> a = np.zeros((10,10), order='F')
>>>> a.flags
> C_CONTIGUOUS : False
> F_CONTIGUOUS : True
> OWNDATA : True
> WRITEABLE : True
> ALIGNED : True
> UPDATEIFCOPY : False
>>>> (a+1).flags
> C_CONTIGUOUS : True
> F_CONTIGUOUS : False
> OWNDATA : True
> WRITEABLE : True
> ALIGNED : True
> UPDATEIFCOPY : False
I wonder if this behavior could be considered a bug, because it does not seem to have any advantages but only hides the storage order change, and that may introduce inefficiencies. If an operation produces a new array, then the new array should have the storage properties of the lhs operand. That would allow writing code

a = zeros(<shape>, order='F')
b = a + 1

instead of

a = zeros(<shape>, order='F')
b = a[:]
b += 1

to keep the storage properties in operations. Regards, Pearu
From sturla at molden.no Mon Mar 16 10:05:31 2009 From: sturla at molden.no (Sturla Molden) Date: Mon, 16 Mar 2009 15:05:31 +0100 Subject: [Numpy-discussion] Superfluous array transpose (cf. ticket #1054) In-Reply-To: <39525.172.17.0.4.1237192078.squirrel@cens.ioc.ee> References: <1ffdb2ab49b762b119b39943c7d30021.squirrel@webmail.uio.no> <39525.172.17.0.4.1237192078.squirrel@cens.ioc.ee> Message-ID: <49BE5CAB.9050204@molden.no>
On 3/16/2009 9:27 AM, Pearu Peterson wrote: > If an operation produces a new array, then the new array should have the > storage properties of the lhs operand. That would not be enough, as 1+a would behave differently from a+1. The former would change storage order and the latter would not. Broadcasting arrays adds further to the complexity of the problem. It seems necessary to do something like this to avoid the trap when using f2py:

def some_fortran_function(x):
    if x.flags['C_CONTIGUOUS']:
        # reinterpret the C-ordered buffer as Fortran-ordered,
        # call the wrapper, and reinterpret the result back
        shape = x.shape[::-1]
        _x = x.reshape(shape, order='F')
        _y = _f2py_wrapper(_x)
        shape = _y.shape[::-1]
        return _y.reshape(shape, order='C')
    else:
        return _f2py_wrapper(x)

And then preferably never use Fortran ordered arrays directly. Sturla Molden
From rmay31 at gmail.com Mon Mar 16 12:35:58 2009 From: rmay31 at gmail.com (Ryan May) Date: Mon, 16 Mar 2009 11:35:58 -0500 Subject: [Numpy-discussion] svn and tickets email status Message-ID:
Hi, What's the status on SVN and ticket email notifications? The only messages I'm seeing since the switch are the occasional spam. Should I try re-subscribing? Ryan -- Ryan May Graduate Research Assistant School of Meteorology University of Oklahoma Sent from: Norman Oklahoma United States.
From pearu at cens.ioc.ee Mon Mar 16 12:54:38 2009 From: pearu at cens.ioc.ee (Pearu Peterson) Date: Mon, 16 Mar 2009 18:54:38 +0200 (EET) Subject: [Numpy-discussion] Superfluous array transpose (cf. ticket #1054) In-Reply-To: <49BE5CAB.9050204@molden.no> References: <1ffdb2ab49b762b119b39943c7d30021.squirrel@webmail.uio.no> <39525.172.17.0.4.1237192078.squirrel@cens.ioc.ee> <49BE5CAB.9050204@molden.no> Message-ID: <10076.62.65.217.106.1237222478.squirrel@cens.ioc.ee>
On Mon, March 16, 2009 4:05 pm, Sturla Molden wrote: > On 3/16/2009 9:27 AM, Pearu Peterson wrote: > >> If an operation produces a new array, then the new array should have the >> storage properties of the lhs operand. > > That would not be enough, as 1+a would behave differently from a+1. The > former would change storage order and the latter would not. Actually, 1+a would be handled by the __radd__ method, and hence the storage order would be defined by the rhs (the lhs of the __radd__ method). > Broadcasting arrays adds further to the complexity of the problem. I guess similar rules should be applied to storage order then. Pearu
From charlesr.harris at gmail.com Mon Mar 16 13:00:16 2009 From: charlesr.harris at gmail.com (Charles R Harris) Date: Mon, 16 Mar 2009 11:00:16 -0600 Subject: [Numpy-discussion] svn and tickets email status In-Reply-To: References: Message-ID:
2009/3/16 Ryan May > Hi, > > What's the status on SVN and ticket email notifications? The only messages > I'm seeing since the switch are the occasional spam. Should I try > re-subscribing? > I get the ticket notifications, but I think the svn notifications are still broken. I needed to update my email address to receive ticket notifications; the mail was going to an old address after the change. Chuck
From cournape at gmail.com Mon Mar 16 14:21:21 2009 From: cournape at gmail.com (David Cournapeau) Date: Tue, 17 Mar 2009 03:21:21 +0900 Subject: [Numpy-discussion] 1.3.x branch created - trunk now opened for 1.4 Message-ID: <5b8d13220903161121n795238f6oa4e2c7bbff1f7edb@mail.gmail.com>
Hi, I have just started the 1.3.x branch - as such, any change done to the trunk will not end up in the 1.3 release. I will announce the 1.3 beta release within the day, hopefully. cheers, David
From pzs at dcs.gla.ac.uk Mon Mar 16 17:22:22 2009 From: pzs at dcs.gla.ac.uk (Peter Saffrey) Date: Mon, 16 Mar 2009 21:22:22 -0000 Subject: [Numpy-discussion] Overlapping ranges References: Message-ID:
I'm trying to file a set of data points, defined by genome coordinates, into bins, also based on genome coordinates. Each data point is (chromosome, start, end, point) and each bin is (chromosome, start, end). I have about 140 million points to file into around 100,000 bins. Both are (roughly) evenly distributed over the 24 chromosomes (1-22, X and Y). Genome coordinates are integers and my data points are floats. For each data point, (end - start) is roughly 1000, but the bins are of uneven widths. Bins might also overlap - in that case, I need to know all the bins that a point overlaps. By overlap, I mean the start or end of the data point (or both) is inside the bin, or that the point entirely covers the bin. At the moment, I'm using a fairly naive approach that finds roughly where in the genome (which gene) each point might be and then checks it against the bins in that gene. If I split the problem into chromosomes, I feel sure there must be some super-fast matrix approach I can apply using numpy, but I'm struggling a bit. Can anybody suggest something? Peter
From robert.kern at gmail.com Mon Mar 16 17:29:04 2009 From: robert.kern at gmail.com (Robert Kern) Date: Mon, 16 Mar 2009 16:29:04 -0500 Subject: [Numpy-discussion] Overlapping ranges In-Reply-To: References: Message-ID: <3d375d730903161429j80c2a0ehba223f203d297131@mail.gmail.com>
2009/3/16 Peter Saffrey : > At the moment, I'm using a fairly naive approach that finds roughly where in the > genome (which gene) each point might be and then checks it against the > bins in that gene. If I split the problem into chromosomes, I feel sure > there must be some super-fast matrix approach I can apply using numpy, but > I'm struggling a bit. Can anybody suggest something? You probably need something algorithmically better, like interval trees. There are a couple of C/Python implementations floating around. -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco
From josef.pktd at gmail.com Mon Mar 16 18:31:29 2009 From: josef.pktd at gmail.com (josef.pktd at gmail.com) Date: Mon, 16 Mar 2009 18:31:29 -0400 Subject: [Numpy-discussion] Overlapping ranges In-Reply-To: <3d375d730903161429j80c2a0ehba223f203d297131@mail.gmail.com> References: <3d375d730903161429j80c2a0ehba223f203d297131@mail.gmail.com> Message-ID: <1cd32cbb0903161531t3051f440n1379c9257f914123@mail.gmail.com>
On Mon, Mar 16, 2009 at 5:29 PM, Robert Kern wrote: > 2009/3/16 Peter Saffrey : >> At the moment, I'm using a fairly naive approach that finds roughly where in the >> genome (which gene) each point might be and then checks it against the >> bins in that gene.
>> If I split the problem into chromosomes, I feel sure >> there must be some super-fast matrix approach I can apply using numpy, but >> I'm struggling a bit. Can anybody suggest something? > > You probably need something algorithmically better, like interval > trees. There are a couple of C/Python implementations floating around. If I understand your problem correctly, then on a smaller-scale problem something like this should work:

{{{
import numpy as np
B = np.array([[1,3],[2,5],[7,10],[6,15],[14,20]])  # bins
P = np.c_[np.arange(1,16), 4+np.arange(1,16)]      # points
# a point overlaps a bin unless it ends before the bin starts
# or starts after the bin ends
mask = ~((P[:,0:1] > B[:,1:2].T) | (P[:,1:2] < B[:,0:1].T))
pointidx, binidx = mask.nonzero()
}}}

Josef
From cournape at gmail.com Tue Mar 17 2009 From: cournape at gmail.com (David Cournapeau) Date: Tue, 17 Mar 2009 Subject: [Numpy-discussion] Enhancements for NumPy's FFTs References: <5b8d13220903140901s71935029m5ade4fdf98c22eaf@mail.gmail.com> <0072be2d392cefde68c19f1fb6e2a7eb.squirrel@webmail.uio.no> <5b8d13220903150018o5ea70799j2759cf2579892840@mail.gmail.com> <644e0ebe8fb9c5be55892719f5643a3d.squirrel@webmail.uio.no> <49BD1FC8.1060205@ar.media.kyoto-u.ac.jp> Message-ID: <5b8d13220903170207m7e25146id1c3fbd1aefbdbda@mail.gmail.com>
On Mon, Mar 16, 2009 at 2:43 AM, Sturla Molden wrote: > >> Well, that's nearly as good. (Though submitting a single svn diff >> containing all changes would have been a bit easier to handle than >> separate patches for each file. But a small nitpick only.) > The problem is I am really bad at using these tools. I have TortoiseSVN > installed, but no idea how to use it. :( You can use the command line version: svn diff gives exactly what you need. Another thing is to separate different issues into different patches - Treal -> double is different from npy_intp for indexing, which is different from the threading issue. It really makes life easier when reviewing code. I have a git branch with those changes, but I don't think I will include it for 1.3. I don't have the time to make sure the fftpack code really is thread-safe, and I don't want to merge the code without at least one person to review it. cheers, David
From stefan at sun.ac.za Tue Mar 17 09:25:37 2009 From: stefan at sun.ac.za (=?ISO-8859-1?Q?St=E9fan_van_der_Walt?=) Date: Tue, 17 Mar 2009 15:25:37 +0200 Subject: [Numpy-discussion] Buildbot not building? In-Reply-To: References: Message-ID: <9457e7c80903170625mdf3f16u594caf21ab615906@mail.gmail.com>
Pauli, 2009/2/14 Pauli Virtanen : > It seems that the buildbot.scipy.org is not picking up the changes in > Numpy trunk. > > I'd guess this could be some issue with SVNPoller. At least it doesn't > preserve states across buildmaster restarts, so replacing it with the > following might help: The firewall on the buildbot server is finally sorted out. I have applied the change you suggested. Thanks! Cheers Stéfan
From Chris.Barker at noaa.gov Tue Mar 17 12:24:56 2009 From: Chris.Barker at noaa.gov (Christopher Barker) Date: Tue, 17 Mar 2009 09:24:56 -0700 Subject: [Numpy-discussion] Enhancements for NumPy's FFTs In-Reply-To: <5b8d13220903170207m7e25146id1c3fbd1aefbdbda@mail.gmail.com> References: <5b8d13220903140901s71935029m5ade4fdf98c22eaf@mail.gmail.com> <0072be2d392cefde68c19f1fb6e2a7eb.squirrel@webmail.uio.no> <5b8d13220903150018o5ea70799j2759cf2579892840@mail.gmail.com> <644e0ebe8fb9c5be55892719f5643a3d.squirrel@webmail.uio.no> <49BD1FC8.1060205@ar.media.kyoto-u.ac.jp> <5b8d13220903170207m7e25146id1c3fbd1aefbdbda@mail.gmail.com> Message-ID: <49BFCED8.3060700@noaa.gov>
David Cournapeau wrote: > You can use the command line version: svn diff gives exactly what you need. http://www.sliksvn.com/en/download is a good source of the command line client for Windows.
-Chris -- Christopher Barker, Ph.D. Oceanographer Emergency Response Division NOAA/NOS/OR&R (206) 526-6959 voice 7600 Sand Point Way NE (206) 526-6329 fax Seattle, WA 98115 (206) 526-6317 main reception Chris.Barker at noaa.gov
From pav at iki.fi Tue Mar 17 14:21:22 2009 From: pav at iki.fi (Pauli Virtanen) Date: Tue, 17 Mar 2009 18:21:22 +0000 (UTC) Subject: [Numpy-discussion] Buildbot not building? References: <9457e7c80903170625mdf3f16u594caf21ab615906@mail.gmail.com> Message-ID:
Tue, 17 Mar 2009 15:25:37 +0200, Stéfan van der Walt wrote: > Pauli, > > 2009/2/14 Pauli Virtanen : >> It seems that the buildbot.scipy.org is not picking up the changes in >> Numpy trunk. >> >> I'd guess this could be some issue with SVNPoller. At least it doesn't >> preserve states across buildmaster restarts, so replacing it with the >> following might help: > > The firewall on the buildbot server is finally sorted out. I have > applied the change you suggested. Excellent! It seems that the old SVN url is still in the buildmaster config, however, so builds don't work yet: http://buildbot.scipy.org/builders/Linux_x86_Ubuntu/builds/7/steps/svn/logs/stdio -- Pauli Virtanen
From pav at iki.fi Tue Mar 17 14:29:02 2009 From: pav at iki.fi (Pauli Virtanen) Date: Tue, 17 Mar 2009 18:29:02 +0000 (UTC) Subject: [Numpy-discussion] Slicing/selection in multiple dimensions simultaneously References: <268febdf0709111511n3ca15d42o85d31831178d96a@mail.gmail.com> <46E71591.20802@gmail.com> <46E72116.8040408@enthought.com> <463e11f90902261900o748940b6yf8410abda82524cc@mail.gmail.com> <3d375d730902271238p7fe29192hb953df2c5f87c245@mail.gmail.com> <9457e7c80903030111y590b4e34g2f7d1c42117acbe8@mail.gmail.com> <3d375d730903031726u4e26a5efm8ca51abc775de1db@mail.gmail.com> Message-ID:
Tue, 03 Mar 2009 19:26:38 -0600, Robert Kern wrote: > On Tue, Mar 3, 2009 at 03:11, Stéfan van der Walt > wrote: [clip: ix_(...,:,...)] > No, you're right. It doesn't work. The only way to make it work seems to be to define a special shaped slice object that indexing understands... I wonder if these would be worth the trouble. -- Pauli Virtanen
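A common workaround, pending such a slice object, is to expand the slice into an explicit range so that np.ix_ can broadcast it together with the index lists; a small illustrative sketch:

import numpy as np
a = np.arange(60).reshape(3, 4, 5)
rows, planes = [0, 2], [1, 3]
# the intended effect of ix_(rows, :, planes)
sub = a[np.ix_(rows, range(a.shape[1]), planes)]
print sub.shape   # (2, 4, 2)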
?http://buildbot.scipy.org/waterfall?show_events=false > > and the buildbot gets the changes OK. Now only the SVN url needs fixing... Oops, should be fixed now. How frequently is your Linux buildslave online? Cheers St?fan From thomas.robitaille at gmail.com Wed Mar 18 14:30:19 2009 From: thomas.robitaille at gmail.com (Thomas Robitaille) Date: Wed, 18 Mar 2009 14:30:19 -0400 Subject: [Numpy-discussion] Concatenating string arrays Message-ID: Hello, I am trying to find an efficient way to concatenate the elements of two same-length numpy str arrays. For example if I define the following arrays: import numpy as np arr1 = np.array(['a','b','c']) arr2 = np.array(['d','e','f']) I would like to produce a third array that would contain ['ad','be','cf']. Is there an efficient way to do this? I could do this element by element, but I need a faster method, as I need to do this on arrays with several million elements. Thanks for any help, Thomas From sturla at molden.no Wed Mar 18 14:52:24 2009 From: sturla at molden.no (Sturla Molden) Date: Wed, 18 Mar 2009 19:52:24 +0100 Subject: [Numpy-discussion] Concatenating string arrays In-Reply-To: References: Message-ID: <49C142E8.9000906@molden.no> On 3/18/2009 7:30 PM, Thomas Robitaille wrote: > import numpy as np > arr1 = np.array(['a','b','c']) > arr2 = np.array(['d','e','f']) > > I would like to produce a third array that would contain > ['ad','be','cf']. Is there an efficient way to do this? I could do > this element by element, but I need a faster method, as I need to do > this on arrays with several million elements. >>> arr1 = np.array(['a','b','c']) >>> arr2 = np.array(['d','e','f']) >>> arr3 = np.zeros(6, dtype='|S1') >>> arr3[::2] = arr1 >>> arr3[1::2] = arr2 >>> arr3.view(dtype='|S2') array(['ad', 'be', 'cf'], dtype='|S2') Does this help? Sturla Molden From faltet at pytables.org Wed Mar 18 15:37:56 2009 From: faltet at pytables.org (Francesc Alted) Date: Wed, 18 Mar 2009 20:37:56 +0100 Subject: [Numpy-discussion] Concatenating string arrays In-Reply-To: <49C142E8.9000906@molden.no> References: <49C142E8.9000906@molden.no> Message-ID: <200903182037.57159.faltet@pytables.org> A Wednesday 18 March 2009, Sturla Molden escrigu?: > On 3/18/2009 7:30 PM, Thomas Robitaille wrote: > > import numpy as np > > arr1 = np.array(['a','b','c']) > > arr2 = np.array(['d','e','f']) > > > > I would like to produce a third array that would contain > > ['ad','be','cf']. Is there an efficient way to do this? I could do > > this element by element, but I need a faster method, as I need to > > do this on arrays with several million elements. > > > >>> arr1 = np.array(['a','b','c']) > >>> arr2 = np.array(['d','e','f']) > >>> arr3 = np.zeros(6, dtype='|S1') > >>> arr3[::2] = arr1 > >>> arr3[1::2] = arr2 > >>> arr3.view(dtype='|S2') > > array(['ad', 'be', 'cf'], > dtype='|S2') Nice example. After looking at this, it is apparent how beneficial can be the mutable types provided by NumPy. -- Francesc Alted From thomas.robitaille at gmail.com Wed Mar 18 15:49:31 2009 From: thomas.robitaille at gmail.com (Thomas Robitaille) Date: Wed, 18 Mar 2009 15:49:31 -0400 Subject: [Numpy-discussion] Concatenating string arrays In-Reply-To: <49C142E8.9000906@molden.no> References: <49C142E8.9000906@molden.no> Message-ID: >> import numpy as np >> arr1 = np.array(['a','b','c']) >> arr2 = np.array(['d','e','f']) >> >> I would like to produce a third array that would contain >> ['ad','be','cf']. Is there an efficient way to do this? 
From faltet at pytables.org Wed Mar 18 15:37:56 2009 From: faltet at pytables.org (Francesc Alted) Date: Wed, 18 Mar 2009 20:37:56 +0100 Subject: [Numpy-discussion] Concatenating string arrays In-Reply-To: <49C142E8.9000906@molden.no> References: <49C142E8.9000906@molden.no> Message-ID: <200903182037.57159.faltet@pytables.org>
A Wednesday 18 March 2009, Sturla Molden escrigué: > On 3/18/2009 7:30 PM, Thomas Robitaille wrote: > > import numpy as np > > arr1 = np.array(['a','b','c']) > > arr2 = np.array(['d','e','f']) > > > > I would like to produce a third array that would contain > > ['ad','be','cf']. Is there an efficient way to do this? I could do > > this element by element, but I need a faster method, as I need to > > do this on arrays with several million elements. > >>> arr1 = np.array(['a','b','c']) > >>> arr2 = np.array(['d','e','f']) > >>> arr3 = np.zeros(6, dtype='|S1') > >>> arr3[::2] = arr1 > >>> arr3[1::2] = arr2 > >>> arr3.view(dtype='|S2') > array(['ad', 'be', 'cf'], dtype='|S2') Nice example. After looking at this, it is apparent how beneficial the mutable types provided by NumPy can be. -- Francesc Alted
From thomas.robitaille at gmail.com Wed Mar 18 15:49:31 2009 From: thomas.robitaille at gmail.com (Thomas Robitaille) Date: Wed, 18 Mar 2009 15:49:31 -0400 Subject: [Numpy-discussion] Concatenating string arrays In-Reply-To: <49C142E8.9000906@molden.no> References: <49C142E8.9000906@molden.no> Message-ID:
>> import numpy as np >> arr1 = np.array(['a','b','c']) >> arr2 = np.array(['d','e','f']) >> >> I would like to produce a third array that would contain >> ['ad','be','cf']. Is there an efficient way to do this? I could do >> this element by element, but I need a faster method, as I need to do >> this on arrays with several million elements. > >>>> arr1 = np.array(['a','b','c']) >>>> arr2 = np.array(['d','e','f']) >>>> arr3 = np.zeros(6, dtype='|S1') >>>> arr3[::2] = arr1 >>>> arr3[1::2] = arr2 >>>> arr3.view(dtype='|S2') > array(['ad', 'be', 'cf'], dtype='|S2') > > Does this help? This works wonderfully - thanks! Tom
From cournape at gmail.com Wed Mar 18 22:43:29 2009 From: cournape at gmail.com (David Cournapeau) Date: Thu, 19 Mar 2009 11:43:29 +0900 Subject: [Numpy-discussion] [Announce] Numpy 1.3.0b1 Message-ID: <5b8d13220903181943u32c3604dtbdbbae89ad687c3e@mail.gmail.com>
Hi, I am pleased to announce the release of the first beta for numpy 1.3.0. You can find source tarballs and installers for both Mac OS X and Windows on the sourceforge page: https://sourceforge.net/projects/numpy/ The release notes for the 1.3.0 release are below. The Numpy developers

=========================
NumPy 1.3.0 Release Notes
=========================

This minor release includes numerous bug fixes, official python 2.6 support, and several new features such as generalized ufuncs.

Highlights
==========

Python 2.6 support
~~~~~~~~~~~~~~~~~~

Python 2.6 is now supported on all previously supported platforms, including windows. http://www.python.org/dev/peps/pep-0361/

Generalized ufuncs
~~~~~~~~~~~~~~~~~~

http://projects.scipy.org/numpy/ticket/887

Experimental Windows 64 bits support
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Numpy can now be built on windows 64 bits (amd64 only, not IA64), with both MS compilers and mingw-w64 compilers. This is *highly experimental*: DO NOT USE FOR PRODUCTION USE. See INSTALL.txt, Windows 64 bits section, for more information on limitations and how to build it yourself.

New features
============

Formatting issues
~~~~~~~~~~~~~~~~~

Float formatting is now handled by numpy instead of the C runtime: this enables locale-independent formatting and more robust fromstring and related methods. Special values (inf and nan) are also more consistent across platforms (nan vs IND/NaN, etc...), and more consistent with recent python formatting work (in 2.6 and later).

Nan handling in max/min
~~~~~~~~~~~~~~~~~~~~~~~

The maximum/minimum ufuncs now reliably propagate nans. If one of the arguments is a nan, then nan is returned. This affects np.min/np.max, amin/amax and the array methods max/min. New ufuncs fmax and fmin have been added to deal with non-propagating nans.

Nan handling in sign
~~~~~~~~~~~~~~~~~~~~

The ufunc sign now returns nan for the sign of a nan.

New ufuncs
~~~~~~~~~~

#. fmax - same as maximum for integer types and non-nan floats. Returns the non-nan argument if one argument is nan and returns nan if both arguments are nan.
#. fmin - same as minimum for integer types and non-nan floats. Returns the non-nan argument if one argument is nan and returns nan if both arguments are nan.
#. deg2rad - converts degrees to radians, same as the radians ufunc.
#. rad2deg - converts radians to degrees, same as the degrees ufunc.
#. log2 - base 2 logarithm.
#. exp2 - base 2 exponential.
#. logaddexp - add numbers stored as logarithms and return the logarithm of the result.
#. logaddexp2 - add numbers stored as base 2 logarithms and return the base 2 logarithm of the result.
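A short illustrative session showing the nan handling and log-domain ufuncs described above:

>>> import numpy as np
>>> np.maximum(1.0, np.nan)   # maximum now propagates the nan
nan
>>> np.fmax(1.0, np.nan)      # fmax returns the non-nan argument
1.0
>>> np.allclose(np.logaddexp(np.log(0.5), np.log(0.25)), np.log(0.75))
True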
Masked arrays
~~~~~~~~~~~~~

TODO

gfortran support on windows
~~~~~~~~~~~~~~~~~~~~~~~~~~~

Gfortran can now be used as a fortran compiler for numpy on windows, even when the C compiler is Visual Studio (VS 2005 and above; VS 2003 will NOT work). Gfortran + Visual studio does not work on windows 64 bits (but gcc + gfortran does). It is unclear whether it will be possible to use gfortran and visual studio at all on x64.

Arch option for windows binary
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Automatic arch detection can now be bypassed from the command line for the superpack installer:

numpy-1.3.0-superpack-win32.exe /arch=nosse

will install a numpy which works on any x86, even if the running computer supports the SSE instruction set.

Deprecated features
===================

Histogram
~~~~~~~~~

The semantics of histogram have been modified to fix long-standing issues with outlier handling. The main changes concern

#. the definition of the bin edges, now including the rightmost edge, and
#. the handling of upper outliers, now ignored rather than tallied in the rightmost bin.

The previous behavior is still accessible using `new=False`, but this is deprecated, and will be removed entirely in 1.4.0.

Documentation changes
=====================

A lot of documentation improvements.

New C API
=========

Multiarray API
~~~~~~~~~~~~~~

The following functions have been added to the multiarray C API:

* PyArray_GetEndianness: to get runtime endianness

New defines
~~~~~~~~~~~

New public C defines are available for ARCH specific code through numpy/npy_cpu.h:

* NPY_CPU_X86: x86 arch (32 bits)
* NPY_CPU_AMD64: amd64 arch (x86_64, NOT Itanium)
* NPY_CPU_PPC: 32 bits ppc
* NPY_CPU_PPC64: 64 bits ppc
* NPY_CPU_SPARC: 32 bits sparc
* NPY_CPU_SPARC64: 64 bits sparc
* NPY_CPU_S390: S390
* NPY_CPU_PARISC: PARISC

New macros for CPU endianness have been added as well (see internal changes below for details):

* NPY_BYTE_ORDER: integer
* NPY_LITTLE_ENDIAN/NPY_BIG_ENDIAN defines

Those provide portable alternatives to the glibc endian.h macros for platforms without it.

Portable NAN, INFINITY, etc...
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

npy_math.h now makes available several portable macros to get NAN and INFINITY:

* NPY_NAN: equivalent to NAN, which is a GNU extension
* NPY_INFINITY: equivalent to C99 INFINITY
* NPY_PZERO, NPY_NZERO: positive and negative zero respectively

Corresponding single and extended precision macros are available as well. All references to NAN, or home-grown computation of NAN on the fly, have been removed for consistency.

Internal changes
================

numpy.core math configuration revamp
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

This should make the porting to new platforms easier, and more robust. In particular, the configuration stage does not need to execute any code on the target platform, which is a first step toward cross-compilation. http://projects.scipy.org/numpy/browser/trunk/doc/neps/math_config_clean.txt

umath refactor
~~~~~~~~~~~~~~

A lot of code cleanup for umath/ufunc code (charris).

Improvements to build warnings
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Numpy can now build with -W -Wall without warnings. http://projects.scipy.org/numpy/browser/trunk/doc/neps/warnfix.txt

Separate core math library
~~~~~~~~~~~~~~~~~~~~~~~~~~

The core math functions (sin, cos, etc... for basic C types) have been put into a separate library; it acts as a compatibility layer, to support most C99 maths functions (real only for now).
The library includes platform-specific fixes for various maths functions, so using those versions should be more robust than using your platform functions directly. The API for existing functions is exactly the same as the C99 math functions API; the only difference is the npy prefix (npy_cos vs cos). The core library will be made available to any extension in 1.4.0.

CPU arch detection
~~~~~~~~~~~~~~~~~~

npy_cpu.h defines numpy specific CPU defines, such as NPY_CPU_X86, etc... Those are portable across OS and toolchains, and set up when the header is parsed, so that they can be safely used even in the case of cross-compilation (the values are not set when numpy is built), or for multi-arch binaries (e.g. fat binaries on Mac OS X). npy_endian.h defines numpy specific endianness defines, modeled on the glibc endian.h. NPY_BYTE_ORDER is equivalent to BYTE_ORDER, and one of NPY_LITTLE_ENDIAN or NPY_BIG_ENDIAN is defined. As for CPU archs, those are set when the header is parsed by the compiler, and as such can be used for cross-compilation and multi-arch binaries.
From sole at esrf.fr Thu Mar 19 04:07:20 2009 From: sole at esrf.fr (=?ISO-8859-1?Q?=22V=2EA=2E_Sol=E9=22?=) Date: Thu, 19 Mar 2009 09:07:20 +0100 Subject: [Numpy-discussion] How to force a particular windows numpy installation? Message-ID: <49C1FD38.20506@esrf.fr>
Hello, Recent versions of binary numpy installers try to detect the target CPU in order to select the proper extensions. My problem is that I am distributing frozen versions of an analysis code and I would like to target the widest range of CPUs. If I install numpy 1.3.0b1, the installer installs numpy-1.3.0b1-sse2.exe. Am I right to suppose the frozen application will not run on non-SSE2 machines? (Old Athlon XP, for example) Is it possible to force the binary installer to use a different CPU target (for instance just SSE1)? I am still using numpy 1.0.3 on python 2.5 because it does not seem to detect the CPU, but for python 2.6 I have no other choice. Thanks for your time, Armando
From faltet at pytables.org Thu Mar 19 04:20:45 2009 From: faltet at pytables.org (Francesc Alted) Date: Thu, 19 Mar 2009 09:20:45 +0100 Subject: [Numpy-discussion] How to force a particular windows numpy installation? In-Reply-To: <49C1FD38.20506@esrf.fr> References: <49C1FD38.20506@esrf.fr> Message-ID: <200903190920.47310.faltet@pytables.org>
A Thursday 19 March 2009, V.A. Solé escrigué: > Hello, > > Recent versions of binary numpy installers try to detect the target > CPU in order to select the proper extensions. > > My problem is that I am distributing frozen versions of an analysis > code and I would like to target the widest range of CPUs. > If I install numpy 1.3.0b1, the installer installs > numpy-1.3.0b1-sse2.exe. > > Am I right to suppose the frozen application will not run on non-SSE2 > machines? (Old Athlon XP, for example) > > Is it possible to force the binary installer to use a different CPU > target (for instance just SSE1)? From the NumPy 1.3.0b1 announcement:

Arch option for windows binary
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Automatic arch detection can now be bypassed from the command line for the superpack installer:

numpy-1.3.0-superpack-win32.exe /arch=nosse

will install a numpy which works on any x86, even if the running computer supports the SSE instruction set.
HTH, -- Francesc Alted From sole at esrf.fr Thu Mar 19 04:42:04 2009 From: sole at esrf.fr (=?ISO-8859-1?Q?=22V=2E_Armando_Sol=E9=22?=) Date: Thu, 19 Mar 2009 09:42:04 +0100 Subject: [Numpy-discussion] How to force a particular windows numpy installation? In-Reply-To: <200903190920.47310.faltet@pytables.org> References: <49C1FD38.20506@esrf.fr> <200903190920.47310.faltet@pytables.org> Message-ID: <49C2055C.6000408@esrf.fr> Francesc Alted wrote: > Arch option for windows binary > ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ > > Automatic arch detection can now be bypassed from the command line for > the superpack installer: > > numpy-1.3.0-superpack-win32.exe /arch=nosse > > will install a numpy which works on any x86, even if the running > computer supports the SSE instruction set. > Thanks a lot / Moltes gràcies, Armando From gael.varoquaux at normalesup.org Thu Mar 19 05:12:27 2009 From: gael.varoquaux at normalesup.org (Gael Varoquaux) Date: Thu, 19 Mar 2009 10:12:27 +0100 Subject: [Numpy-discussion] [Announce] Numpy 1.3.0b1 In-Reply-To: <5b8d13220903181943u32c3604dtbdbbae89ad687c3e@mail.gmail.com> References: <5b8d13220903181943u32c3604dtbdbbae89ad687c3e@mail.gmail.com> Message-ID: <20090319091227.GA7238@phare.normalesup.org> On Thu, Mar 19, 2009 at 11:43:29AM +0900, David Cournapeau wrote: > I am pleased to announce the release of the first beta for numpy > 1.3.0. You can find source tarballs and installers for both Mac OS X > and Windows on the sourceforge page: > https://sourceforge.net/projects/numpy/ > The release notes for the 1.3.0 release are below, You can also add that np.load now does memmap transparently if asked. This is minor, but important for a certain class of people (especially my colleagues).
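For reference, a minimal sketch of the usage I mean (the file name is invented, and this assumes the new mmap_mode keyword):

    import numpy as np

    np.save('big.npy', np.arange(10))
    # Ask np.load for a memory-map instead of reading the file into RAM;
    # mmap_mode takes the same modes as numpy.memmap ('r', 'r+', 'c').
    data = np.load('big.npy', mmap_mode='r')
    print data[3:6]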
Thanks for all your work, Gaël From cimrman3 at ntc.zcu.cz Thu Mar 19 06:45:38 2009 From: cimrman3 at ntc.zcu.cz (Robert Cimrman) Date: Thu, 19 Mar 2009 11:45:38 +0100 Subject: [Numpy-discussion] setmember1d_nu In-Reply-To: <49B12DCC.4040307@ntc.zcu.cz> References: <49B12DCC.4040307@ntc.zcu.cz> Message-ID: <49C22252.1010506@ntc.zcu.cz> Re-hi! Robert Cimrman wrote: > Hi all, > > I have added to the ticket [1] a script that compares the proposed > setmember1d_nu() implementations of Neil and Kim. Comments are welcome! > > [1] http://projects.scipy.org/numpy/ticket/1036 I have attached a patch incorporating the solution that the involved people agreed on, so review, please. best regards, r. From numpy-discussion at maubp.freeserve.co.uk Thu Mar 19 07:02:36 2009 From: numpy-discussion at maubp.freeserve.co.uk (Peter) Date: Thu, 19 Mar 2009 11:02:36 +0000 Subject: [Numpy-discussion] [Announce] Numpy 1.3.0b1 In-Reply-To: <5b8d13220903181943u32c3604dtbdbbae89ad687c3e@mail.gmail.com> References: <5b8d13220903181943u32c3604dtbdbbae89ad687c3e@mail.gmail.com> Message-ID: <320fb6e00903190402ybb10f80r573b9fa63fc3e81b@mail.gmail.com> On Thu, Mar 19, 2009 at 2:43 AM, David Cournapeau wrote: > New defines > ~~~~~~~~~~~ > > New public C defines are available for ARCH specific code through > numpy/npy_cpu.h: > > * NPY_CPU_X86: x86 arch (32 bits) > * NPY_CPU_AMD64: amd64 arch (x86_64, NOT Itanium) > * NPY_CPU_PPC: 32 bits ppc > * NPY_CPU_PPC64: 64 bits ppc > * NPY_CPU_SPARC: 32 bits sparc > * NPY_CPU_SPARC64: 64 bits sparc > * NPY_CPU_S390: S390 > * NPY_CPU_PARISC: PARISC > Is there any desire to include a public C define for Itanium processors? I don't have one, nor am I likely to, but it just looked like an omission. http://projects.scipy.org/numpy/browser/trunk/numpy/core/include/numpy/npy_cpu.h Peter From cournape at gmail.com Thu Mar 19 09:07:08 2009 From: cournape at gmail.com (David Cournapeau) Date: Thu, 19 Mar 2009 22:07:08 +0900 Subject: [Numpy-discussion] [Announce] Numpy 1.3.0b1 In-Reply-To: <20090319091227.GA7238@phare.normalesup.org> References: <5b8d13220903181943u32c3604dtbdbbae89ad687c3e@mail.gmail.com> <20090319091227.GA7238@phare.normalesup.org> Message-ID: <5b8d13220903190607g17375843o6e4d4f7185d6f58e@mail.gmail.com> On Thu, Mar 19, 2009 at 6:12 PM, Gael Varoquaux wrote: > On Thu, Mar 19, 2009 at 11:43:29AM +0900, David Cournapeau wrote: >> I am pleased to announce the release of the first beta for numpy >> 1.3.0. You can find source tarballs and installers for both Mac OS X >> and Windows on the sourceforge page: > >> https://sourceforge.net/projects/numpy/ > >> The release note for the 1.3.0 release are below, > > You can also add that np.load now does memmap transparently if asked. > This is minor, but important for a certain class of people (especially my > collegues). > > Thanks for all your work, Well, why not adding it yourself, then ? It is generally easier, faster and more accurate for the people involved with the changes to add it themselves to the release notes :) David From gael.varoquaux at normalesup.org Thu Mar 19 09:32:22 2009 From: gael.varoquaux at normalesup.org (Gael Varoquaux) Date: Thu, 19 Mar 2009 14:32:22 +0100 Subject: [Numpy-discussion] [Announce] Numpy 1.3.0b1 In-Reply-To: <5b8d13220903190607g17375843o6e4d4f7185d6f58e@mail.gmail.com> References: <5b8d13220903181943u32c3604dtbdbbae89ad687c3e@mail.gmail.com> <20090319091227.GA7238@phare.normalesup.org> <5b8d13220903190607g17375843o6e4d4f7185d6f58e@mail.gmail.com> Message-ID: <20090319133222.GC7238@phare.normalesup.org> On Thu, Mar 19, 2009 at 10:07:08PM +0900, David Cournapeau wrote: > Well, why not adding it yourself, then ? It is generally easier, > faster and more accurate for the people involved with the changes to add it > themselves to the release notes :) I am being stupid. How do I do this? I am sure you already gave the answer, but I can't find it. Sorry. Gaël From sturla at molden.no Thu Mar 19 09:36:09 2009 From: sturla at molden.no (Sturla Molden) Date: Thu, 19 Mar 2009 14:36:09 +0100 Subject: [Numpy-discussion] [Announce] Numpy 1.3.0b1 In-Reply-To: <5b8d13220903190607g17375843o6e4d4f7185d6f58e@mail.gmail.com> References: <5b8d13220903181943u32c3604dtbdbbae89ad687c3e@mail.gmail.com> <20090319091227.GA7238@phare.normalesup.org> <5b8d13220903190607g17375843o6e4d4f7185d6f58e@mail.gmail.com> Message-ID: <49C24A49.6090304@molden.no> On 3/19/2009 2:07 PM, David Cournapeau wrote: > Well, why not adding it yourself, then ? It is generally easier, > faster and more accurate for the people involved with the changes to add it > themselves to the release notes :) How do we do that? E.g. I added an important update to memmap.py in ticket 1053, but it has not even been reviewed. You should really set up a better system for receiving and reviewing contributions. Otherwise people will not care.
Sturla Molden From cournape at gmail.com Thu Mar 19 09:39:12 2009 From: cournape at gmail.com (David Cournapeau) Date: Thu, 19 Mar 2009 22:39:12 +0900 Subject: [Numpy-discussion] [Announce] Numpy 1.3.0b1 In-Reply-To: <20090319133222.GC7238@phare.normalesup.org> References: <5b8d13220903181943u32c3604dtbdbbae89ad687c3e@mail.gmail.com> <20090319091227.GA7238@phare.normalesup.org> <5b8d13220903190607g17375843o6e4d4f7185d6f58e@mail.gmail.com> <20090319133222.GC7238@phare.normalesup.org> Message-ID: <5b8d13220903190639i54a7856fo3ac1173b4bc0b360@mail.gmail.com> On Thu, Mar 19, 2009 at 10:32 PM, Gael Varoquaux wrote: > On Thu, Mar 19, 2009 at 10:07:08PM +0900, David Cournapeau wrote: >> Well, why not adding it yourself, then ? It is generally easier, >> faster and more accurate for the people involved with the changes to add it >> themselves to the release notes :) > > I am being stupid. How do I do this? I am sure you already gave the > answer, but I can't find it. You have svn rights, right ? Then, it is just about adding your own contributions to doc/release/release-1.3.0.rst. David From gael.varoquaux at normalesup.org Thu Mar 19 10:00:34 2009 From: gael.varoquaux at normalesup.org (Gael Varoquaux) Date: Thu, 19 Mar 2009 15:00:34 +0100 Subject: [Numpy-discussion] [Announce] Numpy 1.3.0b1 In-Reply-To: <5b8d13220903190639i54a7856fo3ac1173b4bc0b360@mail.gmail.com> References: <5b8d13220903181943u32c3604dtbdbbae89ad687c3e@mail.gmail.com> <20090319091227.GA7238@phare.normalesup.org> <5b8d13220903190607g17375843o6e4d4f7185d6f58e@mail.gmail.com> <20090319133222.GC7238@phare.normalesup.org> <5b8d13220903190639i54a7856fo3ac1173b4bc0b360@mail.gmail.com> Message-ID: <20090319140034.GE7238@phare.normalesup.org> On Thu, Mar 19, 2009 at 10:39:12PM +0900, David Cournapeau wrote: > On Thu, Mar 19, 2009 at 10:32 PM, Gael Varoquaux > wrote: > > On Thu, Mar 19, 2009 at 10:07:08PM +0900, David Cournapeau wrote: > >> Well, why not adding it yourself, then ? It is generally easier, > >> faster and more accurate for the people involved with the changes to add it > >> themselves to the release notes :) > > I am being stupid. How do I do this? I am sure you already gave the > > answer, but I can't find it. > You have svn rights, right ? No. And I am not asking for them. I'd rather have a buffer between me and numpy, because I don't feel I know numpy well enough to commit directly. Gaël From cournape at gmail.com Thu Mar 19 10:01:38 2009 From: cournape at gmail.com (David Cournapeau) Date: Thu, 19 Mar 2009 23:01:38 +0900 Subject: [Numpy-discussion] [Announce] Numpy 1.3.0b1 In-Reply-To: <20090319140034.GE7238@phare.normalesup.org> References: <5b8d13220903181943u32c3604dtbdbbae89ad687c3e@mail.gmail.com> <20090319091227.GA7238@phare.normalesup.org> <5b8d13220903190607g17375843o6e4d4f7185d6f58e@mail.gmail.com> <20090319133222.GC7238@phare.normalesup.org> <5b8d13220903190639i54a7856fo3ac1173b4bc0b360@mail.gmail.com> <20090319140034.GE7238@phare.normalesup.org> Message-ID: <5b8d13220903190701q723f2591ge5ddf1040a580f04@mail.gmail.com> On Thu, Mar 19, 2009 at 11:00 PM, Gael Varoquaux > No. And I am not asking for them. I'd rather have a buffer between me and > numpy, because I don't feel I know numpy well enough to commit directly.
committing a text file should be easy enough, even for you ;) David From gael.varoquaux at normalesup.org Thu Mar 19 10:15:29 2009 From: gael.varoquaux at normalesup.org (Gael Varoquaux) Date: Thu, 19 Mar 2009 15:15:29 +0100 Subject: [Numpy-discussion] [Announce] Numpy 1.3.0b1 In-Reply-To: <5b8d13220903190701q723f2591ge5ddf1040a580f04@mail.gmail.com> References: <5b8d13220903181943u32c3604dtbdbbae89ad687c3e@mail.gmail.com> <20090319091227.GA7238@phare.normalesup.org> <5b8d13220903190607g17375843o6e4d4f7185d6f58e@mail.gmail.com> <20090319133222.GC7238@phare.normalesup.org> <5b8d13220903190639i54a7856fo3ac1173b4bc0b360@mail.gmail.com> <20090319140034.GE7238@phare.normalesup.org> <5b8d13220903190701q723f2591ge5ddf1040a580f04@mail.gmail.com> Message-ID: <20090319141529.GF7238@phare.normalesup.org> On Thu, Mar 19, 2009 at 11:01:38PM +0900, David Cournapeau wrote: > On Thu, Mar 19, 2009 at 11:00 PM, Gael Varoquaux > > > No. And I am not asking for them. I'd rather have a buffer between me and > > numpy, because I don't feel I know numpy well enough to commit directly. > committing a text file should be easy enough, even for you ;) OK, let's put it this way: if you feel it makes your life easier for me to have SVN access, I don't mind. But I won't use it for anything else than docs and text files (or maybe a patch after review, but my volume of patches produced has been very, very small). Gaël From charlesr.harris at gmail.com Thu Mar 19 10:49:31 2009 From: charlesr.harris at gmail.com (Charles R Harris) Date: Thu, 19 Mar 2009 08:49:31 -0600 Subject: [Numpy-discussion] [Announce] Numpy 1.3.0b1 In-Reply-To: <320fb6e00903190402ybb10f80r573b9fa63fc3e81b@mail.gmail.com> References: <5b8d13220903181943u32c3604dtbdbbae89ad687c3e@mail.gmail.com> <320fb6e00903190402ybb10f80r573b9fa63fc3e81b@mail.gmail.com> Message-ID: On Thu, Mar 19, 2009 at 5:02 AM, Peter < numpy-discussion at maubp.freeserve.co.uk> wrote: > On Thu, Mar 19, 2009 at 2:43 AM, David Cournapeau > wrote: > > New defines > > ~~~~~~~~~~~ > > > > New public C defines are available for ARCH specific code through > > numpy/npy_cpu.h: > > > > * NPY_CPU_X86: x86 arch (32 bits) > > * NPY_CPU_AMD64: amd64 arch (x86_64, NOT Itanium) > > * NPY_CPU_PPC: 32 bits ppc > > * NPY_CPU_PPC64: 64 bits ppc > > * NPY_CPU_SPARC: 32 bits sparc > > * NPY_CPU_SPARC64: 64 bits sparc > > * NPY_CPU_S390: S390 > > * NPY_CPU_PARISC: PARISC > > > > Is there any desire to include a public C define for Itanium > processors? I don't have one, nor am I likely to, but it just looked > like an omission. > > http://projects.scipy.org/numpy/browser/trunk/numpy/core/include/numpy/npy_cpu.h > It's there, just not documented yet. Chuck -------------- next part -------------- An HTML attachment was scrubbed... URL: From rpyle at post.harvard.edu Thu Mar 19 11:17:43 2009 From: rpyle at post.harvard.edu (Robert Pyle) Date: Thu, 19 Mar 2009 11:17:43 -0400 Subject: [Numpy-discussion] [Announce] Numpy 1.3.0b1 In-Reply-To: <5b8d13220903181943u32c3604dtbdbbae89ad687c3e@mail.gmail.com> References: <5b8d13220903181943u32c3604dtbdbbae89ad687c3e@mail.gmail.com> Message-ID: Hi, First of all, thanks to everyone for all the hard work. On Mar 18, 2009, at 10:43 PM, David Cournapeau wrote: > I am pleased to announce the release of the first beta for numpy > 1.3.0.
You can find source tarballs and installers for both Mac OS X I'm on a dual G5 Mac running OS X 10.5.6 and Enthought's EPD python: Python 2.5.2 |EPD Py25 4.1.30101| (r252:60911, Dec 19 2008, 15:28:32) I deleted my old numpy, downloaded the Mac .dmg file and went through what claimed to be a successful installation, only to find no numpy. I tracked the new version down to /Library/Python/2.5/site-packages, a directory that I didn't know existed (site-packages is the only thing there). So I downloaded the source tarball and installed in the usual way with no problem into /Library/Frameworks/Python.framework/Versions/4.1.30101/lib/python2.5/ site-packages/ So my question is, why did the Mac .mpkg installer put numpy in the wrong place?
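(A quick check of which copy the interpreter actually picks up, for anyone chasing the same thing; nothing here is specific to this build:

    import numpy
    print numpy.__file__   # the path that "import numpy" resolved to

)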
I'm getting one test failure with 1.3.0b1 --- FAIL: test_umath.TestComplexFunctions.test_loss_of_precision(<type 'numpy.complex256'>,) ---------------------------------------------------------------------- Traceback (most recent call last): File "/Library/Frameworks/Python.framework/Versions/4.1.30101/lib/ python2.5/site-packages/nose-0.10.3.0001-py2.5.egg/nose/case.py", line 182, in runTest self.test(*self.arg) File "/Library/Frameworks/Python.framework/Versions/4.1.30101/lib/ python2.5/site-packages/numpy/core/tests/test_umath.py", line 498, in check_loss_of_precision check(x_series, 2*eps) File "/Library/Frameworks/Python.framework/Versions/4.1.30101/lib/ python2.5/site-packages/numpy/core/tests/test_umath.py", line 480, in check assert np.all(d < rtol), (np.argmax(d), x[np.argmax(d)], d.max()) AssertionError: (0, nan, nan) ---------------------------------------------------------------------- Bob Pyle Cambridge, MA From charlesr.harris at gmail.com Thu Mar 19 11:35:57 2009 From: charlesr.harris at gmail.com (Charles R Harris) Date: Thu, 19 Mar 2009 09:35:57 -0600 Subject: [Numpy-discussion] [Announce] Numpy 1.3.0b1 In-Reply-To: References: <5b8d13220903181943u32c3604dtbdbbae89ad687c3e@mail.gmail.com> Message-ID: On Thu, Mar 19, 2009 at 9:17 AM, Robert Pyle wrote: > Hi, > > First of all, thanks to everyone for all the hard work. > > On Mar 18, 2009, at 10:43 PM, David Cournapeau wrote: > > > I am pleased to announce the release of the first beta for numpy > > 1.3.0. You can find source tarballs and installers for both Mac OS X > > I'm on a dual G5 Mac running OS X 10.5.6 and Enthought's EPD python: > Python 2.5.2 |EPD Py25 4.1.30101| (r252:60911, Dec 19 2008, > 15:28:32) > > I deleted my old numpy, downloaded the Mac .dmg file and went through > what claimed to be a successful installation, only to find no numpy. > I tracked the new version down to /Library/Python/2.5/site-packages, a > directory that I didn't know existed (site-packages is the only thing > there). So I downloaded the source tarball and installed in the usual > way with no problem into > > /Library/Frameworks/Python.framework/Versions/4.1.30101/lib/python2.5/ > site-packages/ > > So my question is, why did the Mac .mpkg installer put numpy in the > wrong place? > > I'm getting one test failure with 1.3.0b1 --- > > FAIL: test_umath.TestComplexFunctions.test_loss_of_precision(<type 'numpy.complex256'>,) > ---------------------------------------------------------------------- > Traceback (most recent call last): > File "/Library/Frameworks/Python.framework/Versions/4.1.30101/lib/ > python2.5/site-packages/nose-0.10.3.0001-py2.5.egg/nose/case.py", line > 182, in runTest > self.test(*self.arg) > File "/Library/Frameworks/Python.framework/Versions/4.1.30101/lib/ > python2.5/site-packages/numpy/core/tests/test_umath.py", line 498, in > check_loss_of_precision > check(x_series, 2*eps) > File "/Library/Frameworks/Python.framework/Versions/4.1.30101/lib/ > python2.5/site-packages/numpy/core/tests/test_umath.py", line 480, in > check > assert np.all(d < rtol), (np.argmax(d), x[np.argmax(d)], d.max()) > AssertionError: (0, nan, nan) > Yes, that test fails on some architectures. What type of cpu do you have? It would help if you could track down the cause of the nans, see ticket #1038. Chuck -------------- next part -------------- An HTML attachment was scrubbed... URL: From aisaac at american.edu Thu Mar 19 12:00:45 2009 From: aisaac at american.edu (Alan G Isaac) Date: Thu, 19 Mar 2009 12:00:45 -0400 Subject: [Numpy-discussion] [Announce] Numpy 1.3.0b1 In-Reply-To: <5b8d13220903181943u32c3604dtbdbbae89ad687c3e@mail.gmail.com> References: <5b8d13220903181943u32c3604dtbdbbae89ad687c3e@mail.gmail.com> Message-ID: <49C26C2D.5070808@american.edu> I am really grateful to have NumPy on Python 2.6! Alan Isaac Python 2.6.1 (r261:67517, Dec 4 2008, 16:51:00) [MSC v.1500 32 bit (Intel)] on win32 Type "help", "copyright", "credits" or "license" for more information. >>> import numpy >>> numpy.test() Running unit tests for numpy NumPy version 1.3.0b1 NumPy is installed in C:\Python26\lib\site-packages\numpy Python version 2.6.1 (r261:67517, Dec 4 2008, 16:51:00) [MSC v.1500 32 bit (Intel)] nose version 0.11.0 [snip] ---------------------------------------------------------------------- Ran 1881 tests in 12.406s OK (KNOWNFAIL=3, SKIP=1) >>> From rpyle at post.harvard.edu Thu Mar 19 12:21:59 2009 From: rpyle at post.harvard.edu (Robert Pyle) Date: Thu, 19 Mar 2009 12:21:59 -0400 Subject: [Numpy-discussion] [Announce] Numpy 1.3.0b1 In-Reply-To: References: <5b8d13220903181943u32c3604dtbdbbae89ad687c3e@mail.gmail.com> Message-ID: <68E46E50-2CFA-4755-B890-11096A67CE45@post.harvard.edu> On Mar 19, 2009, at 11:35 AM, Charles R Harris wrote: > > > On Thu, Mar 19, 2009 at 9:17 AM, Robert Pyle > wrote: > I'm getting one test failure with 1.3.0b1 --- > > FAIL: test_umath.TestComplexFunctions.test_loss_of_precision(<type 'numpy.complex256'>,) > ---------------------------------------------------------------------- > Traceback (most recent call last): > File "/Library/Frameworks/Python.framework/Versions/4.1.30101/lib/ > python2.5/site-packages/nose-0.10.3.0001-py2.5.egg/nose/case.py", line > 182, in runTest > self.test(*self.arg) > File "/Library/Frameworks/Python.framework/Versions/4.1.30101/lib/ > python2.5/site-packages/numpy/core/tests/test_umath.py", line 498, in > check_loss_of_precision > check(x_series, 2*eps) > File "/Library/Frameworks/Python.framework/Versions/4.1.30101/lib/ > python2.5/site-packages/numpy/core/tests/test_umath.py", line 480, in > check > assert np.all(d < rtol), (np.argmax(d), x[np.argmax(d)], d.max()) > AssertionError: (0, nan, nan) > > Yes, that test fails on some architectures. What type of cpu do you > have?
It would help if you could track down the cause of the nans, > see ticket #1038. CPU is PPC (G5). I added a print statement in the test to pin things down a bit. The failing test appears to be d = np.absolute(np.arcsinh(x)/np.arcsinh(x+0j).real - 1) assert np.all(d < rtol), (np.argmax(d), x[np.argmax(d)], d.max()) with dtype = <type 'numpy.complex256'> It passes with dtype = <type 'numpy.complex64'> and dtype = <type 'numpy.complex128'> Is that any help? Bob From charlesr.harris at gmail.com Thu Mar 19 13:01:40 2009 From: charlesr.harris at gmail.com (Charles R Harris) Date: Thu, 19 Mar 2009 11:01:40 -0600 Subject: [Numpy-discussion] [Announce] Numpy 1.3.0b1 In-Reply-To: <68E46E50-2CFA-4755-B890-11096A67CE45@post.harvard.edu> References: <5b8d13220903181943u32c3604dtbdbbae89ad687c3e@mail.gmail.com> <68E46E50-2CFA-4755-B890-11096A67CE45@post.harvard.edu> Message-ID: On Thu, Mar 19, 2009 at 10:21 AM, Robert Pyle wrote: > > On Mar 19, 2009, at 11:35 AM, Charles R Harris wrote: > > > > > > > On Thu, Mar 19, 2009 at 9:17 AM, Robert Pyle > > wrote: > > I'm getting one test failure with 1.3.0b1 --- > > > > FAIL: test_umath.TestComplexFunctions.test_loss_of_precision(<type 'numpy.complex256'>,) > > ---------------------------------------------------------------------- > > Traceback (most recent call last): > > File "/Library/Frameworks/Python.framework/Versions/4.1.30101/lib/ > > python2.5/site-packages/nose-0.10.3.0001-py2.5.egg/nose/case.py", line > > 182, in runTest > > self.test(*self.arg) > > File "/Library/Frameworks/Python.framework/Versions/4.1.30101/lib/ > > python2.5/site-packages/numpy/core/tests/test_umath.py", line 498, in > > check_loss_of_precision > > check(x_series, 2*eps) > > File "/Library/Frameworks/Python.framework/Versions/4.1.30101/lib/ > > python2.5/site-packages/numpy/core/tests/test_umath.py", line 480, in > > check > > assert np.all(d < rtol), (np.argmax(d), x[np.argmax(d)], d.max()) > > AssertionError: (0, nan, nan) > > > > Yes, that test fails on some architectures. What type of cpu do you > > have? It would help if you could track down the cause of the nans, > > see ticket #1038. > > CPU is PPC (G5). I added a print statement in the test to pin things > down a bit. The failing test appears to be > > d = np.absolute(np.arcsinh(x)/np.arcsinh(x+0j).real - 1) > assert np.all(d < rtol), (np.argmax(d), x[np.argmax(d)], > d.max()) > > with dtype = <type 'numpy.complex256'> > > It passes with dtype = <type 'numpy.complex64'> and dtype = <type 'numpy.complex128'> > > Is that any help? > Not yet ;) I think there is a problem with the range of values in x that might have their source in the finfo values. So it would help if you could pin down just where x goes wrong by printing it out. That is what the short script that I included in the ticket comments does. Mind, I think you will need to do a bit of exploration. I don't think the failures are significant in that it probably doesn't need to test the range of values that it does, but it would be nice to understand precisely why it fails. Chuck -------------- next part -------------- An HTML attachment was scrubbed... URL: From rpyle at post.harvard.edu Thu Mar 19 13:19:18 2009 From: rpyle at post.harvard.edu (Robert Pyle) Date: Thu, 19 Mar 2009 13:19:18 -0400 Subject: [Numpy-discussion] [Announce] Numpy 1.3.0b1 In-Reply-To: References: <5b8d13220903181943u32c3604dtbdbbae89ad687c3e@mail.gmail.com> <68E46E50-2CFA-4755-B890-11096A67CE45@post.harvard.edu> Message-ID: <8F39D662-5696-4511-B2D2-27678D88BB41@post.harvard.edu> Hi Chuck, On Mar 19, 2009, at 1:01 PM, Charles R Harris wrote: > Is that any help?
> Not yet ;) I think there is a problem with the range of values in x > that might have their source in the finfo values. So it would help > if you could pin down just where x goes wrong by printing it out. > That is what the short script that I included in the ticket comments > does. Mind, I think you will need to do a bit of exploration. I > don't think the failures are significant in that it probably doesn't > need to test the range of values that it does, but it would be nice > to understand precisely why it fails. Sorry. I didn't read clear to the end of the ticket. I assume the script you mean is

----------------------------------------------------------------
#! /usr/bin/env python
import numpy as np

def check_loss_of_precision(dtype):
    """Check loss of precision in complex arc* functions"""

    # Check against known-good functions

    info = np.finfo(dtype)
    real_dtype = dtype(0.).real.dtype
    eps = info.eps

    x_series = np.logspace(np.log10(info.tiny/eps).real, -3, 200,
                           endpoint=False)
    x_basic = np.logspace(dtype(-3.).real, -1e-8, 10)

    print x_series

if __name__ == "__main__":
    check_loss_of_precision(np.longcomplex)
----------------------------------------------------------------

When I run this, it says x_series is an array of 200 NaNs. That would certainly explain why the assertion in test_umath.py failed! Bob From charlesr.harris at gmail.com Thu Mar 19 13:24:46 2009 From: charlesr.harris at gmail.com (Charles R Harris) Date: Thu, 19 Mar 2009 11:24:46 -0600 Subject: [Numpy-discussion] [Announce] Numpy 1.3.0b1 In-Reply-To: <8F39D662-5696-4511-B2D2-27678D88BB41@post.harvard.edu> References: <5b8d13220903181943u32c3604dtbdbbae89ad687c3e@mail.gmail.com> <68E46E50-2CFA-4755-B890-11096A67CE45@post.harvard.edu> <8F39D662-5696-4511-B2D2-27678D88BB41@post.harvard.edu> Message-ID: On Thu, Mar 19, 2009 at 11:19 AM, Robert Pyle wrote: > Hi Chuck, > On Mar 19, 2009, at 1:01 PM, Charles R Harris wrote: > > Is that any help? > > > > Not yet ;) I think there is a problem with the range of values in x > > that might have their source in the finfo values. So it would help > > if you could pin down just where x goes wrong by printing it out. > > That is what the short script that I included in the ticket comments > > does. Mind, I think you will need to do a bit of exploration. I > > don't think the failures are significant in that it probably doesn't > > need to test the range of values that it does, but it would be nice > > to understand precisely why it fails. > > Sorry. I didn't read clear to the end of the ticket. I assume the > script you mean is > ---------------------------------------------------------------- > #! /usr/bin/env python > import numpy as np > > def check_loss_of_precision(dtype): > """Check loss of precision in complex arc* functions""" > > # Check against known-good functions > > info = np.finfo(dtype) > real_dtype = dtype(0.).real.dtype > eps = info.eps > > x_series = np.logspace(np.log10(info.tiny/eps).real, -3, 200, > endpoint=False) > x_basic = np.logspace(dtype(-3.).real, -1e-8, 10) > > print x_series > > if __name__ == "__main__" : > check_loss_of_precision(np.longcomplex) > ---------------------------------------------------------------- > > > When I run this, it says x_series is an array of 200 NaNs. That would > certainly explain why the assertion in test_umath.py failed! > Yep, that's it. Can you see what info.tiny/eps is in this case? Also info.tiny and eps separately. Chuck -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From pav at iki.fi Thu Mar 19 13:38:16 2009 From: pav at iki.fi (Pauli Virtanen) Date: Thu, 19 Mar 2009 17:38:16 +0000 (UTC) Subject: [Numpy-discussion] [Announce] Numpy 1.3.0b1 References: <5b8d13220903181943u32c3604dtbdbbae89ad687c3e@mail.gmail.com> <68E46E50-2CFA-4755-B890-11096A67CE45@post.harvard.edu> <8F39D662-5696-4511-B2D2-27678D88BB41@post.harvard.edu> Message-ID: Thu, 19 Mar 2009 13:19:18 -0400, Robert Pyle wrote: [clip] > When I run this, it says x_series is an array of 200 NaNs. That would > certainly explain why the assertion in test_umath.py failed! Thanks for tracking this! Can you check what your platform gives for: import numpy as np info = np.finfo(np.longcomplex) print "eps:", info.eps, info.eps.dtype print "tiny:", info.tiny, info.tiny.dtype print "log10:", np.log10(info.tiny), np.log10(info.tiny/info.eps) Thanks! -- Pauli Virtanen From David.Sallis at noaa.gov Thu Mar 19 13:50:38 2009 From: David.Sallis at noaa.gov (David E. Sallis) Date: Thu, 19 Mar 2009 12:50:38 -0500 Subject: [Numpy-discussion] [Announce] Numpy 1.3.0b1 In-Reply-To: <5b8d13220903181943u32c3604dtbdbbae89ad687c3e@mail.gmail.com> References: <5b8d13220903181943u32c3604dtbdbbae89ad687c3e@mail.gmail.com> Message-ID: <49C285EE.2020109@noaa.gov> David Cournapeau said the following on 3/18/2009 9:43 PM: > I am pleased to announce the release of the first beta for numpy 1.3.0. I would totally love to begin using this. Can I trouble you to include MD5 (or PGP, or SHA) signatures for your download files in your release notes as you have for your previous versions? It's an IT security thing. Many thanks. Kudos on an excellent product. It makes my work with HDF-5 tremendously easier. I am extremely grateful! -- David E. Sallis, Software Architect General Dynamics Information Technology NOAA Coastal Data Development Center Stennis Space Center, Mississippi 228.688.3805 david.sallis at gdit.com david.sallis at noaa.gov -------------------------------------------- "Better Living Through Software Engineering" -------------------------------------------- From rpyle at post.harvard.edu Thu Mar 19 14:10:40 2009 From: rpyle at post.harvard.edu (Robert Pyle) Date: Thu, 19 Mar 2009 14:10:40 -0400 Subject: [Numpy-discussion] [Announce] Numpy 1.3.0b1 In-Reply-To: References: <5b8d13220903181943u32c3604dtbdbbae89ad687c3e@mail.gmail.com> <68E46E50-2CFA-4755-B890-11096A67CE45@post.harvard.edu> <8F39D662-5696-4511-B2D2-27678D88BB41@post.harvard.edu> Message-ID: <8BBF56CC-44F7-4A5C-A2AE-0736A6B043CD@post.harvard.edu> On Mar 19, 2009, at 1:24 PM, Charles R Harris wrote: > Yep, that's it. Can you see what info.tiny/eps is in this case. Also > info.tiny and eps separately. > > Chuck eps = 1.3817869701e-76 info.tiny = -1.08420217274e-19 info.tiny/eps = -7.84637716375e+56 Bob From rpyle at post.harvard.edu Thu Mar 19 14:13:31 2009 From: rpyle at post.harvard.edu (Robert Pyle) Date: Thu, 19 Mar 2009 14:13:31 -0400 Subject: [Numpy-discussion] [Announce] Numpy 1.3.0b1 In-Reply-To: References: <5b8d13220903181943u32c3604dtbdbbae89ad687c3e@mail.gmail.com> <68E46E50-2CFA-4755-B890-11096A67CE45@post.harvard.edu> <8F39D662-5696-4511-B2D2-27678D88BB41@post.harvard.edu> Message-ID: <66DBF91F-BDD0-45C0-84D3-C3152689781E@post.harvard.edu> On Mar 19, 2009, at 1:38 PM, Pauli Virtanen wrote: Thanks for tracking this! 
Can you check what your platform gives for: > import numpy as np > info = np.finfo(np.longcomplex) > print "eps:", info.eps, info.eps.dtype > print "tiny:", info.tiny, info.tiny.dtype > print "log10:", np.log10(info.tiny), np.log10(info.tiny/info.eps) eps: 1.3817869701e-76 float128 tiny: -1.08420217274e-19 float128 log10: nan nan Bob From Jim.Vickroy at noaa.gov Thu Mar 19 14:21:03 2009 From: Jim.Vickroy at noaa.gov (Jim Vickroy) Date: Thu, 19 Mar 2009 12:21:03 -0600 Subject: [Numpy-discussion] [Announce] Numpy 1.3.0b1 In-Reply-To: <8BBF56CC-44F7-4A5C-A2AE-0736A6B043CD@post.harvard.edu> References: <5b8d13220903181943u32c3604dtbdbbae89ad687c3e@mail.gmail.com> <68E46E50-2CFA-4755-B890-11096A67CE45@post.harvard.edu> <8F39D662-5696-4511-B2D2-27678D88BB41@post.harvard.edu> <8BBF56CC-44F7-4A5C-A2AE-0736A6B043CD@post.harvard.edu> Message-ID: <49C28D0F.9050108@noaa.gov> Apologies for the spam, but I would suggest that the subject be changed to reflect the topic (i.e., Mac OS X problem with this release). Thanks, -- jv Robert Pyle wrote: > On Mar 19, 2009, at 1:24 PM, Charles R Harris wrote: > > >> Yep, that's it. Can you see what info.tiny/eps is in this case? Also >> info.tiny and eps separately. >> >> Chuck >> > > eps = 1.3817869701e-76 > info.tiny = -1.08420217274e-19 > info.tiny/eps = -7.84637716375e+56 > > Bob > > > _______________________________________________ > Numpy-discussion mailing list > Numpy-discussion at scipy.org > http://mail.scipy.org/mailman/listinfo/numpy-discussion > -------------- next part -------------- An HTML attachment was scrubbed... URL: From stefan at sun.ac.za Thu Mar 19 14:24:21 2009 From: stefan at sun.ac.za (=?ISO-8859-1?Q?St=E9fan_van_der_Walt?=) Date: Thu, 19 Mar 2009 20:24:21 +0200 Subject: [Numpy-discussion] [Announce] Numpy 1.3.0b1 In-Reply-To: <49C24A49.6090304@molden.no> References: <5b8d13220903181943u32c3604dtbdbbae89ad687c3e@mail.gmail.com> <20090319091227.GA7238@phare.normalesup.org> <5b8d13220903190607g17375843o6e4d4f7185d6f58e@mail.gmail.com> <49C24A49.6090304@molden.no> Message-ID: <9457e7c80903191124g34afff43l7549837b159872ec@mail.gmail.com> Hi Sturla 2009/3/19 Sturla Molden : > You should really set up a better system for receiving and reviewing > contributions. Otherwise people will not care. The ticket was not set "ready for review" until Pauli did it this evening. You're supposed to have permission to do that, but if you don't it's a mistake and should be fixed. Cheers Stéfan From vincent.thierion at ema.fr Thu Mar 19 14:36:01 2009 From: vincent.thierion at ema.fr (Vincent Thierion) Date: Thu, 19 Mar 2009 19:36:01 +0100 Subject: [Numpy-discussion] numpy for 64 bits machine Message-ID: Hello, I built the numpy module for 32 bits architecture (it seems the default building). However, my programs using this module have to be launched on remote worker nodes whose architecture can be 32 bits as well 64 bits (grid computing). First experimentations on 64 bits machines is bad, my programs don't work (problem of shared files). Is there someone who can provide me some documentations or hints to build 64 bits numpy for linux machines (mainly SL4) ? Thanks Vincent -------------- next part -------------- An HTML attachment was scrubbed... URL: From charlesr.harris at gmail.com Thu Mar 19 15:01:44 2009 From: charlesr.harris at gmail.com (Charles R Harris) Date: Thu, 19 Mar 2009 13:01:44 -0600 Subject: [Numpy-discussion] [Announce] Numpy 1.3.0b1 In-Reply-To: <66DBF91F-BDD0-45C0-84D3-C3152689781E@post.harvard.edu> References: <5b8d13220903181943u32c3604dtbdbbae89ad687c3e@mail.gmail.com> <68E46E50-2CFA-4755-B890-11096A67CE45@post.harvard.edu> <8F39D662-5696-4511-B2D2-27678D88BB41@post.harvard.edu> <66DBF91F-BDD0-45C0-84D3-C3152689781E@post.harvard.edu> Message-ID: 2009/3/19 Robert Pyle > > On Mar 19, 2009, at 1:38 PM, Pauli Virtanen wrote: > > Thanks for tracking this! Can you check what your platform gives for: > > > import numpy as np > > info = np.finfo(np.longcomplex) > > print "eps:", info.eps, info.eps.dtype > > print "tiny:", info.tiny, info.tiny.dtype > > print "log10:", np.log10(info.tiny), np.log10(info.tiny/info.eps) > > eps: 1.3817869701e-76 float128 > tiny: -1.08420217274e-19 float128 > log10: nan nan > The log of a negative number is nan, so part of the problem is the value of tiny. The sizes of the values also look suspect to me. On my machine In [8]: finfo(longcomplex).eps Out[8]: 1.084202172485504434e-19 In [9]: finfo(float128).tiny Out[9]: array(3.3621031431120935063e-4932, dtype=float128) So at a minimum eps and tiny are reversed.
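Put differently, a build in this state fails even a basic sanity check along these lines (a throwaway sketch, not part of the test suite):

    import numpy as np

    info = np.finfo(np.longcomplex)
    # tiny must be a small positive normal number, so its log10 is finite;
    # with a negative tiny, as reported above, both checks fail.
    assert info.tiny > 0
    assert np.isfinite(np.log10(info.tiny))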
Chuck -------------- next part -------------- An HTML attachment was scrubbed... URL: From charlesr.harris at gmail.com Thu Mar 19 15:46:53 2009 From: charlesr.harris at gmail.com (Charles R Harris) Date: Thu, 19 Mar 2009 13:46:53 -0600 Subject: [Numpy-discussion] [Announce] Numpy 1.3.0b1 In-Reply-To: References: <5b8d13220903181943u32c3604dtbdbbae89ad687c3e@mail.gmail.com> <68E46E50-2CFA-4755-B890-11096A67CE45@post.harvard.edu> <8F39D662-5696-4511-B2D2-27678D88BB41@post.harvard.edu> <66DBF91F-BDD0-45C0-84D3-C3152689781E@post.harvard.edu> Message-ID: On Thu, Mar 19, 2009 at 1:01 PM, Charles R Harris wrote: > > > 2009/3/19 Robert Pyle > >> >> On Mar 19, 2009, at 1:38 PM, Pauli Virtanen wrote: >> >> Thanks for tracking this! Can you check what your platform gives for: >> >> > import numpy as np >> > info = np.finfo(np.longcomplex) >> > print "eps:", info.eps, info.eps.dtype >> > print "tiny:", info.tiny, info.tiny.dtype >> > print "log10:", np.log10(info.tiny), np.log10(info.tiny/info.eps) >> >> eps: 1.3817869701e-76 float128 >> tiny: -1.08420217274e-19 float128 >> log10: nan nan >> > > The log of a negative number is nan, so part of the problem is the value of > tiny. The sizes of the values also look suspect to me. On my machine > > In [8]: finfo(longcomplex).eps > Out[8]: 1.084202172485504434e-19 > > In [9]: finfo(float128).tiny > Out[9]: array(3.3621031431120935063e-4932, dtype=float128) > > So at a minimum eps and tiny are reversed. > I started to look at the code for this but my eyes rolled up in my head and I passed out. It could use some improvements... Chuck -------------- next part -------------- An HTML attachment was scrubbed... URL: From pav at iki.fi Thu Mar 19 15:48:13 2009 From: pav at iki.fi (Pauli Virtanen) Date: Thu, 19 Mar 2009 19:48:13 +0000 (UTC) Subject: [Numpy-discussion] Buildbot not building? References: <9457e7c80903170625mdf3f16u594caf21ab615906@mail.gmail.com> <9457e7c80903180026n6075497ct953974a2d634220e@mail.gmail.com> Message-ID: Wed, 18 Mar 2009 09:26:29 +0200, Stéfan van der Walt wrote: > 2009/3/17 Pauli Virtanen : >> Ok, it seems like the issue with no updates is now sorted out: >> >>
http://buildbot.scipy.org/waterfall?show_events=false >> and the buildbot gets the changes OK. Now only the SVN url needs >> fixing... > Oops, should be fixed now. How frequently is your Linux buildslave > online? It's my desktop, so on average a couple of hours per day. Is it worthwhile to keep it? I see that the slave jumping on and off adds some clutter even with show_events=false to the waterfall plot, which is not so nice. -- Pauli Virtanen From stefan at sun.ac.za Thu Mar 19 16:40:21 2009 From: stefan at sun.ac.za (=?ISO-8859-1?Q?St=E9fan_van_der_Walt?=) Date: Thu, 19 Mar 2009 22:40:21 +0200 Subject: [Numpy-discussion] Buildbot not building? In-Reply-To: References: <9457e7c80903170625mdf3f16u594caf21ab615906@mail.gmail.com> <9457e7c80903180026n6075497ct953974a2d634220e@mail.gmail.com> Message-ID: <9457e7c80903191340u8fb5b1avfbeb08aa6629eccb@mail.gmail.com> 2009/3/19 Pauli Virtanen : > It's my desktop, so on average a couple of hours per day. Is it > worthwhile to keep it? > > I see that the slave jumping on and off adds some clutter even with > show_events=false to the waterfall plot, which is not so nice. Having a permanently connected machine would be ideal, but since we don't have such a machine available, I'd rather keep yours than lose the capability completely. Cheers Stéfan From Sul at hcp.med.harvard.edu Thu Mar 19 17:08:19 2009 From: Sul at hcp.med.harvard.edu (Sul, Young L) Date: Thu, 19 Mar 2009 17:08:19 -0400 Subject: [Numpy-discussion] numscons missing directory? Message-ID: Hi, I'm trying to install numpy on a Solaris 10 intel (well AMD) system. I've been struggling to get numpy installed using the native sunperf libraries and have tried to use numscons. Numscons, however, throws an error and complains about a missing directory. It seems that the scons-local directory is not created (see below). Am I missing a step? I'm assuming the scons-local directory should be created when numscons is installed. I did: easy_install numscons (the upgrade changed nothing) I then pulled the latest version of numpy via SVN, and from the numpy directory ran python setupscons.py install Executing scons command (pkg is numpy.core): /usr/bin/python "/usr/lib/python2.4/site-packages/numscons-0.9.4-py2.4.egg/numscons/scons-local/scons.py" -f numpy/core/SConstruct -I. scons_tool_path="" src_dir="numpy/core" pkg_name="numpy.core" log_level=50 distutils_libdir="../../../../build/lib.solaris-2.10-i86pc-2.4" cc_opt=/usr/lib/python2.4/pycc cc_opt_path="/usr/lib/python2.4" f77_opt=sunf77 f77_opt_path="/usr/bin" cxx_opt=/usr/lib/python2.4/pyCC cxx_opt_path="/usr/lib/python2.4" include_bootstrap=../../../../numpy/core/include silent=0 bootstrapping=1 /usr/bin/python: can't open file '/usr/lib/python2.4/site-packages/numscons-0.9.4-py2.4.egg/numscons/scons-local/scons.py': [Errno 2] No such file or directory Full output follows here: Running from numpy source directory.
non-existing path in 'numpy/distutils': 'site.cfg' F2PY Version 2_6696 non-existing path in 'numpy/core': 'code_generators/array_api_order.txt' non-existing path in 'numpy/core': 'code_generators/multiarray_api_order.txt' running install running build running config_cc unifing config_cc, config, build_clib, build_ext, build commands --compiler options running config_fc unifing config_fc, config, build_clib, build_ext, build commands --fcompiler options running build_src building py_modules sources building extension "numpy.core" sources building extension "numpy.lib" sources building extension "numpy.numarray" sources building extension "numpy.fft" sources building extension "numpy.linalg" sources building extension "numpy.random" sources building extension "numpy" sources building data_files sources running build_py copying build/src.solaris-2.10-i86pc-2.4/numpy/distutils/__config__.py -> build/lib.solaris-2.10-i86pc-2.4/numpy/distutils copying numpy/f2py/__svn_version__.py -> build/lib.solaris-2.10-i86pc-2.4/numpy/f2py copying numpy/core/__svn_version__.py -> build/lib.solaris-2.10-i86pc-2.4/numpy/core running build_ext customize UnixCCompiler customize UnixCCompiler using build_ext running scons customize UnixCCompiler Found executable /usr/lib/python2.4/pycc customize SunFCompiler Found executable /usr/bin/f90 customize SunFCompiler customize UnixCCompiler customize UnixCCompiler using scons Found executable /usr/lib/python2.4/pyCC is bootstrapping ? True Found executable /usr/bin/f90 Executing scons command (pkg is numpy.core): /usr/bin/python "/usr/lib/python2.4/site-packages/numscons-0.9.4-py2.4.egg/numscons/scons-local/scons.py" -f numpy/core/SConstruct -I. scons_tool_path="" src_dir="numpy/core" pkg_name="numpy.core" log_level=50 distutils_libdir="../../../../build/lib.solaris-2.10-i86pc-2.4" cc_opt=/usr/lib/python2.4/pycc cc_opt_path="/usr/lib/python2.4" f77_opt=sunf77 f77_opt_path="/usr/bin" cxx_opt=/usr/lib/python2.4/pyCC cxx_opt_path="/usr/lib/python2.4" include_bootstrap=../../../../numpy/core/include silent=0 bootstrapping=1 /usr/bin/python: can't open file '/usr/lib/python2.4/site-packages/numscons-0.9.4-py2.4.egg/numscons/scons-local/scons.py': [Errno 2] No such file or directory error: Error while executing scons command. See above for more information. If you think it is a problem in numscons, you can also try executing the scons command with --log-level option for more detailed output of what numscons is doing, for example --log-level=0; the lowest the level is, the more detailed the output it. -------------- next part -------------- An HTML attachment was scrubbed... URL: From cournape at gmail.com Thu Mar 19 22:06:21 2009 From: cournape at gmail.com (David Cournapeau) Date: Fri, 20 Mar 2009 11:06:21 +0900 Subject: [Numpy-discussion] numpy for 64 bits machine In-Reply-To: References: Message-ID: <5b8d13220903191906p1e5ae2f3l7c8c2fbaacb34bb3@mail.gmail.com> 2009/3/20 Vincent Thierion : > Hello, > > I built the numpy module for 32 bits architecture (it seems the default > building). However, my programs using this module have to be launched on > remote worker nodes whose architecture can be 32 bits as well 64 bits (grid > computing). First experimentations on 64 bits machines is bad, my programs > don't work (problem of shared files). 
Is there someone who can provide me > some documentations or hints to build 64 bits numpy for linux machines It is exactly the same as for 32 bits, but you need to build numpy on a 64 bits machine (you can't build 64 bits numpy on a 32 bits machine). The shared problem may be that you forgot to add the -fPIC compilation flag when building blas/lapack/atlas, but it is hard to tell without more details, David From cournape at gmail.com Thu Mar 19 22:16:57 2009 From: cournape at gmail.com (David Cournapeau) Date: Fri, 20 Mar 2009 11:16:57 +0900 Subject: [Numpy-discussion] numscons missing directory? In-Reply-To: References: Message-ID: <5b8d13220903191916v76956e20pee6a46f84ce6baa5@mail.gmail.com> 2009/3/20 Sul, Young L : > Numscons, however, throws an error and complains about a missing directory. > It seems that the scons-local directory is not created (see below). Am I > missing a step? I'm assuming the scons-local directory should be created > when numscons is installed. Yes, it should. You are not the first person to report this, but every time I try to reproduce it, I can't, and get numscons correctly installed. That's really weird. As a temporary workaround, you may install from sources. Which version of setuptools are you using ? Can you confirm that the numscons/scons-local directory is empty (it should contain a scons installation). cheers, David From cournape at gmail.com Thu Mar 19 22:28:49 2009 From: cournape at gmail.com (David Cournapeau) Date: Fri, 20 Mar 2009 11:28:49 +0900 Subject: [Numpy-discussion] [Announce] Numpy 1.3.0b1 In-Reply-To: <49C24A49.6090304@molden.no> References: <5b8d13220903181943u32c3604dtbdbbae89ad687c3e@mail.gmail.com> <20090319091227.GA7238@phare.normalesup.org> <5b8d13220903190607g17375843o6e4d4f7185d6f58e@mail.gmail.com> <49C24A49.6090304@molden.no> Message-ID: <5b8d13220903191928y3d86047cvf60475972a104843@mail.gmail.com> On Thu, Mar 19, 2009 at 10:36 PM, Sturla Molden wrote: > How do we do that? > > E.g. I added an important update to memmap.py in ticket 1053, but it has > not even been reviewed. In Gael's case, I was just talking about mentioning his contributions in the release notes (those French are lazy). > > You should really set up a better system for receiving and reviewing > contributions. Otherwise people will not care. I certainly won't disagree - there has been some work on improving our workflow in this area. For now, we can at least tag the trac tickets which need review, and see them easily. Of course, this can't solve the man-power problem, but Pauli, Stefan and I intend to work more on improving our workflow.
David From cournape at gmail.com Thu Mar 19 22:31:18 2009 From: cournape at gmail.com (David Cournapeau) Date: Fri, 20 Mar 2009 11:31:18 +0900 Subject: [Numpy-discussion] [Announce] Numpy 1.3.0b1 In-Reply-To: <20090319141529.GF7238@phare.normalesup.org> References: <5b8d13220903181943u32c3604dtbdbbae89ad687c3e@mail.gmail.com> <20090319091227.GA7238@phare.normalesup.org> <5b8d13220903190607g17375843o6e4d4f7185d6f58e@mail.gmail.com> <20090319133222.GC7238@phare.normalesup.org> <5b8d13220903190639i54a7856fo3ac1173b4bc0b360@mail.gmail.com> <20090319140034.GE7238@phare.normalesup.org> <5b8d13220903190701q723f2591ge5ddf1040a580f04@mail.gmail.com> <20090319141529.GF7238@phare.normalesup.org> Message-ID: <5b8d13220903191931n434f69fh5c203a38e5dfb9bf@mail.gmail.com> On Thu, Mar 19, 2009 at 11:15 PM, Gael Varoquaux wrote: > On Thu, Mar 19, 2009 at 11:01:38PM +0900, David Cournapeau wrote: >> On Thu, Mar 19, 2009 at 11:00 PM, Gael Varoquaux >> >> > No. And I am not asking for them. I'd rather have a buffer between me and >> > numpy, because I don't feel I know numpy well-enough to commit directly. > >> committing a text file should be easy enough, even for you ;) > > OK, let's put it this way: if you feel it makes your life easier for me > to have SVN acces, I don't mind. But I won't use it for anything else > than docs and text file (or maybe a patch after review, but my volume of > patches produced has been very, very small). I think we can trust each other enough, without enforcing this (with per directory write access). We will add you, then (what's your login ?). cheers, David From gael.varoquaux at normalesup.org Fri Mar 20 02:03:14 2009 From: gael.varoquaux at normalesup.org (Gael Varoquaux) Date: Fri, 20 Mar 2009 07:03:14 +0100 Subject: [Numpy-discussion] [Announce] Numpy 1.3.0b1 In-Reply-To: <5b8d13220903191931n434f69fh5c203a38e5dfb9bf@mail.gmail.com> References: <5b8d13220903181943u32c3604dtbdbbae89ad687c3e@mail.gmail.com> <20090319091227.GA7238@phare.normalesup.org> <5b8d13220903190607g17375843o6e4d4f7185d6f58e@mail.gmail.com> <20090319133222.GC7238@phare.normalesup.org> <5b8d13220903190639i54a7856fo3ac1173b4bc0b360@mail.gmail.com> <20090319140034.GE7238@phare.normalesup.org> <5b8d13220903190701q723f2591ge5ddf1040a580f04@mail.gmail.com> <20090319141529.GF7238@phare.normalesup.org> <5b8d13220903191931n434f69fh5c203a38e5dfb9bf@mail.gmail.com> Message-ID: <20090320060314.GA6581@phare.normalesup.org> On Fri, Mar 20, 2009 at 11:31:18AM +0900, David Cournapeau wrote: > > OK, let's put it this way: if you feel it makes your life easier for me > > to have SVN acces, I don't mind. But I won't use it for anything else > > than docs and text file (or maybe a patch after review, but my volume of > > patches produced has been very, very small). > I think we can trust each other enough, without enforcing this (with > per directory write access). Maybe I am a dormant agent from a rival project waiting to torpedo scipy with bad commits. > We will add you, then (what's your login ?). My login on trac and on the other svn's is GaelVaroquaux. My login on the Linux server is gael.varoquaux. I think you care about the first one. Thanks, now I am going to feel even more ashamed not to contribute much. 
Gaël From vincent.thierion at ema.fr Fri Mar 20 06:09:49 2009 From: vincent.thierion at ema.fr (Vincent Thierion) Date: Fri, 20 Mar 2009 11:09:49 +0100 Subject: [Numpy-discussion] numpy for 64 bits machine In-Reply-To: <5b8d13220903191906p1e5ae2f3l7c8c2fbaacb34bb3@mail.gmail.com> References: <5b8d13220903191906p1e5ae2f3l7c8c2fbaacb34bb3@mail.gmail.com> Message-ID: Hello, Is there an "easy way" to build numpy on remote 64 bits machines on which I don't have any root privileges? The shared problem seems related to 64 bits / 32 bits building. Vincent 2009/3/20 David Cournapeau > 2009/3/20 Vincent Thierion : > > Hello, > > > > I built the numpy module for 32 bits architecture (it seems the default > > building). However, my programs using this module have to be launched on > > remote worker nodes whose architecture can be 32 bits as well 64 bits > (grid > > computing). First experimentations on 64 bits machines is bad, my > programs > > don't work (problem of shared files). Is there someone who can provide me > > some documentations or hints to build 64 bits numpy for linux machines > > It is exactly the same as for 32 bits, but you need to build numpy on > a 64 bits machine (you can't build 64 bits numpy on a 32 bits > machine). > > The shared problem may be that you forgot to add -fPIC compilation > flag when building blas/lapack/atlas, but it is hard to tell without > more details, > > David > _______________________________________________ > Numpy-discussion mailing list > Numpy-discussion at scipy.org > http://mail.scipy.org/mailman/listinfo/numpy-discussion > -------------- next part -------------- An HTML attachment was scrubbed... URL: From nwagner at iam.uni-stuttgart.de Fri Mar 20 06:20:39 2009 From: nwagner at iam.uni-stuttgart.de (Nils Wagner) Date: Fri, 20 Mar 2009 11:20:39 +0100 Subject: [Numpy-discussion] numpy for 64 bits machine In-Reply-To: References: <5b8d13220903191906p1e5ae2f3l7c8c2fbaacb34bb3@mail.gmail.com> Message-ID: On Fri, 20 Mar 2009 11:09:49 +0100 Vincent Thierion wrote: > Hello, > > Is there an "easy way" to build numpy on remote 64 bits > machines on which I > don't have any root privileges? python setup.py install --prefix=$HOME/local Nils From cournape at gmail.com Fri Mar 20 06:59:39 2009 From: cournape at gmail.com (David Cournapeau) Date: Fri, 20 Mar 2009 19:59:39 +0900 Subject: [Numpy-discussion] [Announce] Numpy 1.3.0b1 In-Reply-To: <49C285EE.2020109@noaa.gov> References: <5b8d13220903181943u32c3604dtbdbbae89ad687c3e@mail.gmail.com> <49C285EE.2020109@noaa.gov> Message-ID: <5b8d13220903200359y54dd2c17ud80c75aa406d0512@mail.gmail.com> On Fri, Mar 20, 2009 at 2:50 AM, David E. Sallis wrote: > David Cournapeau said the following on 3/18/2009 9:43 PM: >> I am pleased to announce the release of the first beta for numpy 1.3.0. > > I would totally love to begin using this. Can I trouble you to include MD5 (or PGP, or SHA) signatures for your download files in > your release notes as you have for your previous versions? It's an IT security thing. Many thanks. I added the md5 for every file released in the notes. cheers, David From cournape at gmail.com Fri Mar 20 07:03:11 2009 From: cournape at gmail.com (David Cournapeau) Date: Fri, 20 Mar 2009 20:03:11 +0900 Subject: [Numpy-discussion] [Announce] Numpy 1.3.0b1 In-Reply-To: <5b8d13220903200359y54dd2c17ud80c75aa406d0512@mail.gmail.com> References: <5b8d13220903181943u32c3604dtbdbbae89ad687c3e@mail.gmail.com> <49C285EE.2020109@noaa.gov> <5b8d13220903200359y54dd2c17ud80c75aa406d0512@mail.gmail.com> Message-ID: <5b8d13220903200403x6098d43ar6d225c57338b33d8@mail.gmail.com> On Fri, Mar 20, 2009 at 7:59 PM, David Cournapeau wrote: > On Fri, Mar 20, 2009 at 2:50 AM, David E. Sallis wrote: >> David Cournapeau said the following on 3/18/2009 9:43 PM: >>> I am pleased to announce the release of the first beta for numpy 1.3.0. >> >> I would totally love to begin using this. Can I trouble you to include MD5 (or PGP, or SHA) signatures for your download files in >> your release notes as you have for your previous versions? It's an IT security thing. Many thanks. > > I added the md5 for every file released in the notes. I have also added a .msi for the windows 64 bits installer as well, David From chaos.proton at gmail.com Fri Mar 20 08:15:50 2009 From: chaos.proton at gmail.com (Grissiom) Date: Fri, 20 Mar 2009 20:15:50 +0800 Subject: [Numpy-discussion] using assertEqual in unittest to test two np.ndarray? Message-ID: Hi all, When I try to use assertEqual in unittest to test my numpy code I got this: ====================================================================== ERROR: test_test (__main__.Test_data_ana) ---------------------------------------------------------------------- Traceback (most recent call last): File "./unit_test.py", line 24, in test_test [4, 5, 6]])) File "/usr/lib/python2.5/unittest.py", line 332, in failUnlessEqual if not first == second: ValueError: The truth value of an array with more than one element is ambiguous. Use a.any() or a.all() ---------------------------------------------------------------------- Ran 2 tests in 0.003s FAILED (errors=1) ================================================ I know I should use array_equal to test two arrays but it would be more convenient to implement it as __eq__. Any hints?
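What I do for now is collapse the comparison to a single boolean, roughly like this (a minimal sketch based on the failing test above; numpy.testing also gives readable failure messages):

    import unittest
    import numpy as np

    class Test_data_ana(unittest.TestCase):
        def test_test(self):
            a = np.array([[1, 2, 3], [4, 5, 6]])
            b = np.array([[1, 2, 3], [4, 5, 6]])
            # assertEqual evaluates `a == b` as a truth value, which is
            # ambiguous for arrays; reduce it to one boolean instead:
            self.assertTrue(np.array_equal(a, b))
            # or use numpy's own assertion, which raises on any mismatch:
            np.testing.assert_array_equal(a, b)

    if __name__ == '__main__':
        unittest.main()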
Thanks in advance. -- Cheers, Grissiom -------------- next part -------------- An HTML attachment was scrubbed... URL: From giorgio.luciano at inwind.it Fri Mar 20 09:14:06 2009 From: giorgio.luciano at inwind.it (giorgio.luciano at inwind.it) Date: Fri, 20 Mar 2009 14:14:06 +0100 (CET) Subject: [Numpy-discussion] data software, chemometrics, GUI and scikit Message-ID: <3465130.327311237554846220.JavaMail.root@wmail4.libero.it> Dear All, we are proceeding with the building of our data analysis software and, also to avoid raising licensing problems, we are splitting the "core" part and the GUI part. We will try to open a scikit for scipy for chemometrics and we are trying to port all the essential routines that already exist. One question that arose for the GUI is what kind of backend to use. Matplotlib seems a very good choice, but since for the 3d part we want to use Mayavi, we asked ourselves why not use Chaco. Can someone help us in the choice by highlighting the pros and cons? Thanks in advance, and all the people who want to help with the scikit are very welcome.
Cheers Giorgio From bsouthey at gmail.com Fri Mar 20 09:45:19 2009 From: bsouthey at gmail.com (Bruce Southey) Date: Fri, 20 Mar 2009 08:45:19 -0500 Subject: [Numpy-discussion] [Announce] Numpy 1.3.0b1 In-Reply-To: <5b8d13220903200403x6098d43ar6d225c57338b33d8@mail.gmail.com> References: <5b8d13220903181943u32c3604dtbdbbae89ad687c3e@mail.gmail.com> <49C285EE.2020109@noaa.gov> <5b8d13220903200359y54dd2c17ud80c75aa406d0512@mail.gmail.com> <5b8d13220903200403x6098d43ar6d225c57338b33d8@mail.gmail.com> Message-ID: On Fri, Mar 20, 2009 at 6:03 AM, David Cournapeau wrote: > On Fri, Mar 20, 2009 at 7:59 PM, David Cournapeau wrote: >> On Fri, Mar 20, 2009 at 2:50 AM, David E. Sallis wrote: >>> David Cournapeau said the following on 3/18/2009 9:43 PM: >>>> I am pleased to announce the release of the first beta for numpy 1.3.0. >>> >>> I would totally love to begin using this. Can I trouble you to include MD5 (or PGP, or SHA) signatures for your download files in >>> your release notes as you have for your previous versions? It's an IT security thing. Many thanks. >> >> I added the md5 for every file released in the notes. > > I have also added a .msi for the windows 64 bits installer as well, > Great! I still have the same problem on my Intel vista 64 system (Intel QX6700; CPU-Z reports the instruction set as MMX, SSE, SSE2, SSE3, SSSE3, EM64T) with McAfee. I found that double clicking the Python executable in the GUI also allows me to import numpy. But I must disable McAfee's on-demand scan for both IDLE and command line. I am also seeing a crash with Python2.6.1 when running numpy.test(). The output below with verbose=2. Also this code crashes: >>> import numpy as np >>> info = np.finfo(np.longcomplex) From the Windows Problem signature: Fault Module Name: umath.pyd Bruce C:\>C:\Python26\python.exe -c "import numpy; print numpy.__version__; print numpy.show_config()" 1.3.0b1 blas_info: libraries = ['blas'] library_dirs = ['C:\\local\\lib'] language = f77 lapack_info: libraries = ['lapack'] library_dirs = ['C:\\local\\lib'] language = f77 atlas_threads_info: NOT AVAILABLE blas_opt_info: libraries = ['blas'] library_dirs = ['C:\\local\\lib'] language = f77 define_macros = [('NO_ATLAS_INFO', 1)] atlas_blas_threads_info: NOT AVAILABLE lapack_opt_info: libraries = ['lapack', 'blas'] library_dirs = ['C:\\local\\lib'] language = f77 define_macros = [('NO_ATLAS_INFO', 1)] atlas_info: NOT AVAILABLE lapack_mkl_info: NOT AVAILABLE blas_mkl_info: NOT AVAILABLE atlas_blas_info: NOT AVAILABLE mkl_info: NOT AVAILABLE None C:\> test_vecvecouter (test_numeric.TestDot) ... ok test_lengths (test_numeric.TestFromiter) ... ok test_types (test_numeric.TestFromiter) ... ok test_values (test_numeric.TestFromiter) ... ok test_boolean (test_numeric.TestIndex) ... ok test_empty_like (test_numeric.TestLikeFuncs) ... ok test_zeros_like (test_numeric.TestLikeFuncs) ... ok test_cumproduct (test_numeric.TestNonarrayArgs) ... ok test_mean (test_numeric.TestNonarrayArgs) ... ok test_size (test_numeric.TestNonarrayArgs) ... ok test_squeeze (test_numeric.TestNonarrayArgs) ... ok test_std (test_numeric.TestNonarrayArgs) ... ok test_var (test_numeric.TestNonarrayArgs) ... ok test_copies (test_numeric.TestResize) ... ok test_zeroresize (test_numeric.TestResize) ... ok test_divide_err (test_numeric.TestSeterr) ... ok test_set (test_numeric.TestSeterr) ... ok test_basic (test_numeric.TestStdVar) ... ok test_ddof1 (test_numeric.TestStdVar) ... ok test_ddof2 (test_numeric.TestStdVar) ... 
ok test_basic (test_numeric.TestStdVarComplex) ... ok Parametric test factory. ... ok Parametric test factory. ... ok test_no_parameter_modification (test_numeric.test_allclose_inf) ... ok test_scalar_loses1 (test_numerictypes.TestCommonType) ... ok test_scalar_loses2 (test_numerictypes.TestCommonType) ... ok test_scalar_wins (test_numerictypes.TestCommonType) ... ok test_scalar_wins2 (test_numerictypes.TestCommonType) ... ok test_scalar_wins3 (test_numerictypes.TestCommonType) ... ok test_assign (test_numerictypes.TestEmptyField) ... ok test_no_tuple (test_numerictypes.TestMultipleFields) ... ok test_return (test_numerictypes.TestMultipleFields) ... ok Check creation from list of list of tuples ... ok Check creation from list of tuples ... ok Check creation from tuples ... ok Check creation from list of list of tuples ... ok Check creation from list of tuples ... ok Check creation from tuples ... ok Check creation from list of list of tuples ... ok Check creation from list of tuples ... ok Check creation from tuples ... ok Check creation from list of list of tuples ... ok Check creation from list of tuples ... ok Check creation from tuples ... ok Check creation of 0-dimensional objects ... ok Check creation of multi-dimensional objects ... ok Check creation of single-dimensional objects ... ok Check creation of 0-dimensional objects ... ok Check creation of multi-dimensional objects ... ok Check creation of single-dimensional objects ... ok Check reading the top fields of a nested array ... ok Check reading the nested fields of a nested array (1st level) ... ok Check access nested descriptors of a nested array (1st level) ... ok Check reading the nested fields of a nested array (2nd level) ... ok Check access nested descriptors of a nested array (2nd level) ... ok Check reading the top fields of a nested array ... ok Check reading the nested fields of a nested array (1st level) ... ok Check access nested descriptors of a nested array (1st level) ... ok Check reading the nested fields of a nested array (2nd level) ... ok Check access nested descriptors of a nested array (2nd level) ... ok test_access_fields (test_numerictypes.test_read_values_plain_multiple) ... ok test_access_fields (test_numerictypes.test_read_values_plain_single) ... ok Check formatting. ... ok Check formatting. ... ok Check formatting. ... ok Check formatting of nan & inf. ... ok Check formatting of nan & inf. ... ok Check formatting of nan & inf. ... ok Check formatting of complex types. ... ok Check formatting of complex types. ... ok Check formatting of complex types. ... ok Check inf/nan formatting of complex types. ... ok Check inf/nan formatting of complex types. ... ok Check inf/nan formatting of complex types. ... ok Check inf/nan formatting of complex types. ... ok Check inf/nan formatting of complex types. ... ok Check inf/nan formatting of complex types. ... ok Check inf/nan formatting of complex types. ... ok Check inf/nan formatting of complex types. ... ok Check inf/nan formatting of complex types. ... ok Check inf/nan formatting of complex types. ... ok Check inf/nan formatting of complex types. ... ok Check inf/nan formatting of complex types. ... ok Check inf/nan formatting of complex types. ... ok Check inf/nan formatting of complex types. ... ok Check inf/nan formatting of complex types. ... ok Check inf/nan formatting of complex types. ... ok Check inf/nan formatting of complex types. ... ok Check inf/nan formatting of complex types. ... ok Check inf/nan formatting of complex types. ... 
ok Check inf/nan formatting of complex types. ... ok Check inf/nan formatting of complex types. ... ok Check inf/nan formatting of complex types. ... ok Check inf/nan formatting of complex types. ... ok Check inf/nan formatting of complex types. ... ok Check inf/nan formatting of complex types. ... ok Check inf/nan formatting of complex types. ... ok Check inf/nan formatting of complex types. ... ok Check inf/nan formatting of complex types. ... ok Check inf/nan formatting of complex types. ... ok Check inf/nan formatting of complex types. ... ok Check inf/nan formatting of complex types. ... ok Check inf/nan formatting of complex types. ... ok Check inf/nan formatting of complex types. ... ok Check inf/nan formatting of complex types. ... ok Check inf/nan formatting of complex types. ... ok Check inf/nan formatting of complex types. ... ok Check inf/nan formatting of complex types. ... ok Check inf/nan formatting of complex types. ... ok Check inf/nan formatting of complex types. ... ok Check inf/nan formatting of complex types. ... ok Check inf/nan formatting of complex types. ... ok Check inf/nan formatting of complex types. ... ok Check inf/nan formatting of complex types. ... ok Check inf/nan formatting of complex types. ... ok Check inf/nan formatting of complex types. ... ok Check inf/nan formatting of complex types. ... ok Check inf/nan formatting of complex types. ... ok Check inf/nan formatting of complex types. ... ok Check formatting when using print ... ok Check formatting when using print ... ok Check formatting when using print ... ok Check formatting when using print ... ok Check formatting when using print ... ok Check formatting when using print ... ok test_print.test_locale_single ... ok test_print.test_locale_double ... ok test_print.test_locale_longdouble ... ok test_fromrecords (test_records.TestFromrecords) ... ok test_method_array (test_records.TestFromrecords) ... ok test_method_array2 (test_records.TestFromrecords) ... ok test_recarray_conflict_fields (test_records.TestFromrecords) ... ok test_recarray_from_names (test_records.TestFromrecords) ... ok test_recarray_from_obj (test_records.TestFromrecords) ... ok test_recarray_from_repr (test_records.TestFromrecords) ... ok test_recarray_fromarrays (test_records.TestFromrecords) ... ok test_recarray_fromfile (test_records.TestFromrecords) ... ok test_recarray_slices (test_records.TestFromrecords) ... ok test_assignment1 (test_records.TestRecord) ... ok test_assignment2 (test_records.TestRecord) ... ok test_invalid_assignment (test_records.TestRecord) ... ok test_records.test_find_duplicate ... ok Ticket #143 ... ok Ticket #111 ... ok Ticket #616 ... ok Ticket #119 ... ok Ticket #546 ... ok Ticket #516 ... ok Make sure optimization is not called in this case. ... ok Ticket #947. ... ok Ticket #501 ... ok Test for changeset r5065 ... ok Ticket #788, changeset r5155 ... ok Ticket #791 ... ok Ticket #151 ... ok test_binary_repr_0_width (test_regression.TestRegression) ... ok Ticket #950 ... ok Ticket #60 ... ok test_bool_indexing_invalid_nr_elements (test_regression.TestRegression) ... ok Ticket #194 ... ok test_char_array_creation (test_regression.TestRegression) ... ok Ticket #50 ... ok Ticket #246 ... ok Ticket #222 ... ok test_complex_dtype_printing (test_regression.TestRegression) ... ok Ticket #789, changeset 5217. ... ok Convolve should raise an error for empty input array. ... ok Ticket #658 ... ok Ticket #771: strides are not set correctly when reshaping 0-sized ... ok Ticket #658 ... ok Ticket #91 ... 
ok Test for ticket #551, changeset r5140 ... ok Ticket #588 ... ok Ticket #35 ... ok Ticket #335 ... ok Ticket #344 ... ok Ticket #334 ... ok test_empty_array_type (test_regression.TestRegression) ... ok Ticket #105 ... ok Ticket #955 ... ok Ticket #302 ... ok Correct behaviour of ticket #194 ... ok Ticket #657 ... ok test_flat_index_byteswap (test_regression.TestRegression) ... ok Ticket #640, floats from string ... ok Ticket #674 ... ok Ticket #816 ... ok Ticket #882 ... ok Ticket #503 ... ok test_fromstring (test_regression.TestRegression) ... ok Ticket #632 ... ok Ticket #128 ... ok Ticket #64 ... ok Ticket #65 ... ok Ticket #99 ... ok Ticket #3 ... ok Ticket #483 ... ok Ticket #71 ... ok test_large_fancy_indexing (test_regression.TestRegression) ... ok Lexsort memory error ... ok Ticket #61 ... ok Ticket #17 ... ok Ticket #254 ... ok Ticket #271 ... ok Ticket #473 ... ok Ticket #125 ... ok Ticket #83 ... ok Ticket #714 ... ok Ticket #243 ... ok Ticket #196 ... ok Ticket #327 ... ok Ticket 702 ... ok Ticket #562 ... ok Ticket #95 ... ok Ticket #126 ... ok Ticket #106 ... ok Ticket #93 ... ok Ticket #7 ... ok Ticket #330 ... ok test_mem_fromiter_invalid_dtype_string (test_regression.TestRegression) ... ok Ticket #572 ... ok Ticket #298 ... ok Ticket #62 ... ok Ticket #583 ... ok Ticket #448 ... ok Ticket #603 ... ok Ticket #514 ... ok Ticket #469 ... ok Ticket #325 ... ok test_method_args (test_regression.TestRegression) ... ok Ticket #339 ... ok Ticket #449 ... ok Ticket #273 ... ok Ticket #324 ... ok Ticket #49 ... ok Ticket #413 ... ok Ticket #58. ... ok Non-native endian arrays were incorrectly filled with scalars before ... ok Ticket #341 ... ok Ticket #552 ... ok test_object_argmax (test_regression.TestRegression) ... ok test_object_array_assign (test_regression.TestRegression) ... ok Ticket #86 ... ok Ticket #270 ... ok Ticket #711 ... ok Ticket #633 ... ok Ticket #239 ... ok test_object_casting (test_regression.TestRegression) ... ok Ticket #251 ... ok Ticket #16 ... ok Ticket #28 ... ok Ticket #396 ... ok Ticket #553 ... ok Ticket #554 ... ok Ticket #555 ... ok Ticket #374 ... ok Ticket #322 ... ok Ticket #160 ... ok Ticket #312 ... ok Ticket #372 ... ok Ticket #202 ... ok Ticket #793, changeset r5215 ... ok Ticket #40 ... ok Ticket #713 ... ok Changeset #3443 ... ok Ticket #378 ... ok Ticket #352 ... ok Make sure reshape order works. ... ok Ticket #67 ... ok Ticket #72 ... ok test_searchsorted_variable_length (test_regression.TestRegression) ... ok test_sign_bit (test_regression.TestRegression) ... ok Ticket 794. ... ok Ticket #47 ... ok Ticket #133 ... ok test_startswith (test_regression.TestRegression) ... ok Changeset 3557 ... ok Check argsort for strings containing zeros. ... ok Ticket #342 ... ok Ticket #540 ... ok Check sort for strings containing zeros. ... ok Ticket #265 ... ok Ensure that 'take' honours output parameter. ... ok Fix in r2836 ... ok Ticket #31 ... ok test_uint64_from_negative (test_regression.TestRegression) ... ok Ticket #825 ... ok Ticket #600 ... ok Ticket #190 ... ok Ticket #79 ... ok Ticket #205 ... ok Implemented in r2840 ... ok test_void_coercion (test_regression.TestRegression) ... ok test_void_copyswap (test_regression.TestRegression) ... ok No ticket ... ok Ticket #205 ... ok Ticket #43 ... ok test_int_from_long (test_scalarmath.TestConversion) ... ok test_large_types (test_scalarmath.TestPower) ... 
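(For anyone trying to reproduce Bruce's report above, here is a minimal, self-contained restatement of the steps already quoted in this thread; it adds nothing beyond what Bruce posted, and on an affected win64 build the finfo line is where the crash occurs.)

import numpy as np

print np.__version__       # 1.3.0b1 in the report above
np.show_config()           # prints the BLAS/LAPACK build info shown above

# the two-line crash reported on Vista 64 (faulting module: umath.pyd)
info = np.finfo(np.longcomplex)

np.test(verbose=2)         # the nose-based test run whose output is quoted above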
From Sul at hcp.med.harvard.edu  Fri Mar 20 11:30:52 2009
From: Sul at hcp.med.harvard.edu (Sul, Young L)
Date: Fri, 20 Mar 2009 11:30:52 -0400
Subject: [Numpy-discussion] numscons missing directory?
In-Reply-To: <5b8d13220903191916v76956e20pee6a46f84ce6baa5@mail.gmail.com>
References: <5b8d13220903191916v76956e20pee6a46f84ce6baa5@mail.gmail.com>
Message-ID: 

Hi!

In /usr/lib/python2.4/site-packages/numscons-0.9.4-py2.4.egg/numscons
There is this:

root at nightingale # ls -F
__init__.py*   core/      numdist/                tools/
__init__.pyc   misc.py*   testcode_snippets.py*   version.py*
checkers/      misc.pyc   testcode_snippets.pyc   version.pyc

I'm not clear on what you mean by setuptools (I'm new to the python world)? I do see a directory called setuptools in site-packages, and it says setuptools-0.6c9-py2.4.egg

Thanks for all your help!

-y

-----Original Message-----
From: numpy-discussion-bounces at scipy.org [mailto:numpy-discussion-bounces at scipy.org] On Behalf Of David Cournapeau
Sent: Thursday, March 19, 2009 10:17 PM
To: Discussion of Numerical Python
Subject: Re: [Numpy-discussion] numscons missing directory?

2009/3/20 Sul, Young L :
> Numscons, however, throws an error and complains about a missing directory.
> It seems that the scons-local directory is not created (see below). Am I
> missing a step? I'm assuming the scons-local directory should be created
> when numscons is installed.

Yes, it should.

You are not the first person to report this, but every time I try to reproduce it, I can't, and get numscons correctly installed. That's really weird. As a temporary workaround, you may install from sources.

Which version of setuptools are you using? Can you confirm that the numscons/scons-local directory is empty (it should contain a scons installation)?

cheers,

David
_______________________________________________
Numpy-discussion mailing list
Numpy-discussion at scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion

From Sul at hcp.med.harvard.edu  Fri Mar 20 12:04:22 2009
From: Sul at hcp.med.harvard.edu (Sul, Young L)
Date: Fri, 20 Mar 2009 12:04:22 -0400
Subject: [Numpy-discussion] numscons missing directory?
In-Reply-To: <5b8d13220903191916v76956e20pee6a46f84ce6baa5@mail.gmail.com>
References: <5b8d13220903191916v76956e20pee6a46f84ce6baa5@mail.gmail.com>
Message-ID: 

Hi,

I went to https://launchpad.net/numpy.scons.support/+download and grabbed numscons-0.9.2.tar.bz2. But you get this error when installing numpy:

RuntimeError: You need numscons >= 0.9.3 to build numpy with numscons (detected 0.9.2)

So I did the easy install upgrade (easy_install -U numscons). You then end up with two directories in site-packages: numscons and numscons-0.9.4-py2.4.egg. The numscons directory seems to have the proper stuff inside of it, but the ....egg directory doesn't.

Is there a way to directly grab the latest version? I didn't see any URL for SVN posted anywhere.

-y

-----Original Message-----
From: numpy-discussion-bounces at scipy.org [mailto:numpy-discussion-bounces at scipy.org] On Behalf Of David Cournapeau
Sent: Thursday, March 19, 2009 10:17 PM
To: Discussion of Numerical Python
Subject: Re: [Numpy-discussion] numscons missing directory?

2009/3/20 Sul, Young L :
> Numscons, however, throws an error and complains about a missing directory.
> It seems that the scons-local directory is not created (see below). Am I
> missing a step? I'm assuming the scons-local directory should be created
> when numscons is installed.

Yes, it should.

You are not the first person to report this, but every time I try to reproduce it, I can't, and get numscons correctly installed. That's really weird. As a temporary workaround, you may install from sources.

Which version of setuptools are you using? Can you confirm that the numscons/scons-local directory is empty (it should contain a scons installation)?

cheers,

David
_______________________________________________
Numpy-discussion mailing list
Numpy-discussion at scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion

From David.Sallis at noaa.gov  Fri Mar 20 13:18:02 2009
From: David.Sallis at noaa.gov (David E. Sallis)
Date: Fri, 20 Mar 2009 12:18:02 -0500
Subject: [Numpy-discussion] [Announce] Numpy 1.3.0b1
In-Reply-To: <5b8d13220903200403x6098d43ar6d225c57338b33d8@mail.gmail.com>
References: <5b8d13220903181943u32c3604dtbdbbae89ad687c3e@mail.gmail.com>
	<49C285EE.2020109@noaa.gov>
	<5b8d13220903200359y54dd2c17ud80c75aa406d0512@mail.gmail.com>
	<5b8d13220903200403x6098d43ar6d225c57338b33d8@mail.gmail.com>
Message-ID: <49C3CFCA.3060903@noaa.gov>

David Cournapeau said the following on 3/20/2009 6:03 AM:
> On Fri, Mar 20, 2009 at 7:59 PM, David Cournapeau wrote:
>> On Fri, Mar 20, 2009 at 2:50 AM, David E. Sallis wrote:
>>> David Cournapeau said the following on 3/18/2009 9:43 PM:
>>>> I am pleased to announce the release of the first beta for numpy 1.3.0.
>>> I would totally love to begin using this. Can I trouble you to include MD5 (or PGP, or SHA) signatures for your download files in
>>> your release notes as you have for your previous versions? It's an IT security thing. Many thanks.
>> I added the md5 for every file released in the notes.
>
> I have also added a .msi for the windows 64 bits installer as well,

Excellent; thank you again, David!

--
David E. Sallis, Software Architect
General Dynamics Information Technology
NOAA Coastal Data Development Center
Stennis Space Center, Mississippi
228.688.3805
david.sallis at gdit.com
david.sallis at noaa.gov
--------------------------------------------
"Better Living Through Software Engineering"
--------------------------------------------

From Sul at hcp.med.harvard.edu  Fri Mar 20 16:19:02 2009
From: Sul at hcp.med.harvard.edu (Sul, Young L)
Date: Fri, 20 Mar 2009 16:19:02 -0400
Subject: [Numpy-discussion] more on that missing directory
Message-ID: 

Hi,

(I'm on a Solaris 10 intel system, and am trying to use the sunperf libraries)

I downloaded the 0.9.4 branch of numpy.scons.support, and tried to install from that. An immediate problem is that some files seem to have embedded ^Ms in them. I had to clean and rerun a few times before numpy installed.

I tried to run the tests, but got the "0 tests run" message. In this case, none of the files had their execute bits set. If I ran each test individually, they seemed to pass, with the exception of one, which seems to be a known issue.

Now, I am trying to install scipy via numscons. It looked like it was going to work, but it barfed. From the output it looks like whatever is building the compile commands forgot to add the cc command at the beginning of the line (see below; I've highlighted the barf). Can someone point me to the file that builds the commands? It seems like an easy fix.

Executing scons command (pkg is scipy.interpolate): /usr/bin/python "/usr/lib/python2.4/site-packages/numscons/scons-local/scons.py" -f scipy/interpolate/SConstruct -I.
scons_tool_path="" src_dir="scipy/interpolate" pkg_name="scipy.interpolate" log_level=50 distutils_libdir="../../../../build/lib.solaris-2.10-i86pc-2.4" cc_opt=/usr/lib/python2.4/pycc cc_opt_path="/usr/lib/python2.4" f77_opt=sunf77 f77_opt_path="/usr/bin" cxx_opt=/usr/lib/python2.4/pyCC cxx_opt_path="/usr/lib/python2.4" include_bootstrap=/usr/lib/python2.4/site-packages/numpy/core/include silent=0 bootstrapping=0

scons: Reading SConscript files ...
compiler suncc (lang CXX) has no configuration in /usr/lib/python2.4/site-packages/numscons/core/configurations/cxxcompiler.cfg => using default configuration.
Checking sunf77 C compatibility runtime ...(cached) -R/opt/SUNWspro/lib -L/opt/SUNWspro/lib -L/opt/SUNWspro/prod/lib -L/usr/ccs/lib -L/lib -L/usr/lib -lfui -lfai -lfsu -lsunmath -lmtsk -lm
scons: done reading SConscript files.
scons: Building targets ...

#######HERE IS THE ERROR#######
o build/scons/scipy/interpolate/src/so__interpolate.o -c -KPIC -I/usr/lib/python2.4/site-packages/numpy/core/include -I/usr/include/python2.4 -I/usr/lib/python2.4/site-packages/numpy/f2py/src build/scons/scipy/interpolate/src/_interpolate.cpp
sh: o: not found
#######HERE IS THE ERROR#######

CC -o build/scons/scipy/interpolate/_interpolate.so -G build/scons/scipy/interpolate/src/so__interpolate.o -Lbuild/scons/scipy/interpolate -R/opt/SUNWspro/lib -L/opt/SUNWspro/lib -L/opt/SUNWspro/prod/lib -L/usr/ccs/lib -L/lib -L/usr/lib -lfui -lfai -lfsu -lsunmath -lmtsk -lm
ld: fatal: file build/scons/scipy/interpolate/src/so__interpolate.o: open failed: No such file or directory
ld: fatal: File processing errors. No output written to build/scons/scipy/interpolate/_interpolate.so
scons: *** [build/scons/scipy/interpolate/_interpolate.so] Error 1
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From Chris.Barker at noaa.gov  Fri Mar 20 16:39:38 2009
From: Chris.Barker at noaa.gov (Christopher Barker)
Date: Fri, 20 Mar 2009 13:39:38 -0700
Subject: [Numpy-discussion] using assertEqual in unittest to test two np.ndarray?
In-Reply-To: 
References: 
Message-ID: <49C3FF0A.40206@noaa.gov>

Grissiom wrote:
> I know I should use array_equal to test two arrays

Not answering your question, but I hadn't known about array_equal, so when I saw this, I thought: great! I can get rid of a bunch of ugly code in my tests. However, it doesn't work as I would like for NaNs:

>>> a
array([  1.,   2.,  NaN,   4.])
>>> b
array([  1.,   2.,  NaN,   4.])
>>> np.array_equal(a,b)
False

which makes sense, as:

>>> np.nan == np.nan
False

however, for my purposes, and for my tests, if two arrays have NaNs in the same places, and all the other values are equal, they should be considered equal. I guess my way of thinking about it is that:

np.array_equal(a, a)

should always return True.

I understand that there are good reasons that NaN != NaN. However, it sure would be nice to have an array_equal that fit my purposes -- maybe a flag one could set?

np.array_equal(a1, a2, nan_equal=True)

-Chris

--
Christopher Barker, Ph.D.
Oceanographer

Emergency Response Division
NOAA/NOS/OR&R            (206) 526-6959   voice
7600 Sand Point Way NE   (206) 526-6329   fax
Seattle, WA  98115       (206) 526-6317   main reception

Chris.Barker at noaa.gov

From pgmdevlist at gmail.com  Fri Mar 20 16:49:19 2009
From: pgmdevlist at gmail.com (Pierre GM)
Date: Fri, 20 Mar 2009 16:49:19 -0400
Subject: [Numpy-discussion] using assertEqual in unittest to test two np.ndarray?
In-Reply-To: <49C3FF0A.40206@noaa.gov>
References: <49C3FF0A.40206@noaa.gov>
Message-ID: 

On Mar 20, 2009, at 4:39 PM, Christopher Barker wrote:
> Grissiom wrote:
>> I know I should use array_equal to test two arrays
>
> Not answering your question, but I hadn't known about array_equal, so
> when I saw this, I thought: great! I can get rid of a bunch of ugly
> code
> in my tests. However, it doesn't work as I would like for NaNs:

Chris, have you tried assert_equal redefined in numpy.ma.testutils? It was designed for masked arrays, but automatically deals with NaNs:

>>> from numpy.ma.testutils import assert_equal
>>> a = np.array([1,2,np.nan,4])
>>> assert_equal(a,a)

From josef.pktd at gmail.com  Fri Mar 20 17:03:40 2009
From: josef.pktd at gmail.com (josef.pktd at gmail.com)
Date: Fri, 20 Mar 2009 17:03:40 -0400
Subject: [Numpy-discussion] using assertEqual in unittest to test two np.ndarray?
In-Reply-To: 
References: <49C3FF0A.40206@noaa.gov>
Message-ID: <1cd32cbb0903201403v5d6dee0fhd4dec27f2cc5ae8c@mail.gmail.com>

On Fri, Mar 20, 2009 at 4:49 PM, Pierre GM wrote:
>
> On Mar 20, 2009, at 4:39 PM, Christopher Barker wrote:
>
>> Grissiom wrote:
>>> I know I should use array_equal to test two arrays
>>
>> Not answering your question, but I hadn't known about array_equal, so
>> when I saw this, I thought: great! I can get rid of a bunch of ugly
>> code
>> in my tests. However, it doesn't work as I would like for NaNs:
>
> Chris, have you tried assert_equal redefined in numpy.ma.testutils?
> It was designed for masked arrays, but automatically deals with NaNs:
> >>> from numpy.ma.testutils import assert_equal
> >>> a = np.array([1,2,np.nan,4])
> >>> assert_equal(a,a)

for testing purposes it is available in numpy testing:
from numpy.testing import assert_equal, assert_almost_equal, assert_array_equal

>>> a = np.array([  1.,   2.,  np.NaN,   4.])
>>> assert_array_equal(a,a)

does not raise AssertionError

>>> assert_array_equal(a,a+1)
Traceback (most recent call last):
  File "", line 1, in 
    assert_array_equal(a,a+1)
  File "C:\Programs\Python25\lib\site-packages\numpy\testing\utils.py", line 303, in assert_array_equal
    verbose=verbose, header='Arrays are not equal')
  File "C:\Programs\Python25\lib\site-packages\numpy\testing\utils.py", line 295, in assert_array_compare
    raise AssertionError(msg)
AssertionError:
Arrays are not equal

(mismatch 100.0%)
 x: array([  1.,   2.,  NaN,   4.])
 y: array([  2.,   3.,  NaN,   5.])

From chaos.proton at gmail.com  Fri Mar 20 20:31:02 2009
From: chaos.proton at gmail.com (Grissiom)
Date: Sat, 21 Mar 2009 08:31:02 +0800
Subject: [Numpy-discussion] using assertEqual in unittest to test two np.ndarray?
In-Reply-To: <1cd32cbb0903201403v5d6dee0fhd4dec27f2cc5ae8c@mail.gmail.com>
References: <49C3FF0A.40206@noaa.gov>
	<1cd32cbb0903201403v5d6dee0fhd4dec27f2cc5ae8c@mail.gmail.com>
Message-ID: 

On Sat, Mar 21, 2009 at 05:03, wrote:
> for testing purposes it is available in numpy testing:
> from numpy.testing import assert_equal, assert_almost_equal, assert_array_equal
> >>> a = np.array([  1.,   2.,  np.NaN,   4.])
> >>> assert_array_equal(a,a)
>
> does not raise AssertionError
>
> >>> assert_array_equal(a,a+1)
> Traceback (most recent call last):
>   File "", line 1, in 
>     assert_array_equal(a,a+1)
>   File "C:\Programs\Python25\lib\site-packages\numpy\testing\utils.py", line 303, in assert_array_equal
>     verbose=verbose, header='Arrays are not equal')
>   File "C:\Programs\Python25\lib\site-packages\numpy\testing\utils.py", line 295, in assert_array_compare
>     raise AssertionError(msg)
> AssertionError:
> Arrays are not equal
>
> (mismatch 100.0%)
>  x: array([  1.,   2.,  NaN,   4.])
>  y: array([  2.,   3.,  NaN,   5.])
>

Great! Thanks! In my case, a NaN indicates something has gone wrong and I want testing to fail on it. So it meets my demand.

One thing more, when I help(np.testing) I only got this:
=============================================
Help on package numpy.testing in numpy:

NAME
    numpy.testing - Common test support for all numpy test scripts.

FILE
    /usr/lib/python2.5/site-packages/numpy/testing/__init__.py

DESCRIPTION
    This single module should provide all the common functionality for numpy tests
    in a single location, so that test scripts can just import it and work right
    away.

PACKAGE CONTENTS
    decorators
    noseclasses
    nosetester
    nulltester
    numpytest
    parametric
    setup
    setupscons
    utils

DATA
    verbose = 0
================================================

So I have to dir it to see if there are any other useful functions. It would be perfect to document package methods like assert_equal here.

Thanks very much~ ;)

--
Cheers,
Grissiom
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From cournape at gmail.com  Fri Mar 20 21:19:47 2009
From: cournape at gmail.com (David Cournapeau)
Date: Sat, 21 Mar 2009 10:19:47 +0900
Subject: [Numpy-discussion] more on that missing directory
In-Reply-To: 
References: 
Message-ID: <5b8d13220903201819n7c46f62bgc603fffd8d418666@mail.gmail.com>

Hi,

2009/3/21 Sul, Young L :
> Hi,
>
> (I'm on a Solaris 10 intel system, and am trying to use the sunperf
> libraries)

> An immediate problem is that some files seem to have embedded ^Ms in them. I
> had to clean and rerun a few times before numpy installed.

Could you tell me what those files are? In numscons or numpy? Those files should be fixed; neither numpy nor numscons should have any CRLF-style end of lines.

> Now, I am trying to install scipy via numscons. It looked like it was going
> to work, but it barfed. From the output it looks like whatever is building
> the compile commands forgot to add the cc command at the beginning of the
> line (see below; I've highlighted the barf).

Yes, it is a bug in scons - its way of looking for compilers is buggy on solaris. I will look into it later today (I don't have a solaris installation handy ATM),

cheers,

David

From josef.pktd at gmail.com  Fri Mar 20 22:15:46 2009
From: josef.pktd at gmail.com (josef.pktd at gmail.com)
Date: Fri, 20 Mar 2009 22:15:46 -0400
Subject: [Numpy-discussion] using assertEqual in unittest to test two np.ndarray?
In-Reply-To: 
References: <49C3FF0A.40206@noaa.gov>
	<1cd32cbb0903201403v5d6dee0fhd4dec27f2cc5ae8c@mail.gmail.com>
Message-ID: <1cd32cbb0903201915qdd3b5f1x2eed98f58fbfd5a2@mail.gmail.com>

2009/3/20 Grissiom :
> On Sat, Mar 21, 2009 at 05:03, wrote:
>>
>> for testing purposes it is available in numpy testing:
>> from numpy.testing import assert_equal, assert_almost_equal,
>> assert_array_equal
>> >>> a = np.array([  1.,   2.,  np.NaN,   4.])
>> >>> assert_array_equal(a,a)
>>
>> does not raise AssertionError
>>
>> >>> assert_array_equal(a,a+1)
>> Traceback (most recent call last):
>>   File "", line 1, in 
>>     assert_array_equal(a,a+1)
>>   File "C:\Programs\Python25\lib\site-packages\numpy\testing\utils.py",
>> line 303, in assert_array_equal
>>     verbose=verbose, header='Arrays are not equal')
>>   File "C:\Programs\Python25\lib\site-packages\numpy\testing\utils.py",
>> line 295, in assert_array_compare
>>     raise AssertionError(msg)
>> AssertionError:
>> Arrays are not equal
>>
>> (mismatch 100.0%)
>>  x: array([  1.,   2.,  NaN,   4.])
>>  y: array([  2.,   3.,  NaN,   5.])
>
> Great! Thanks! In my case, a NaN indicates something has gone wrong and I
> want testing to fail on it. So it meets my demand.
>
> One thing more, when I help(np.testing) I only got this:
> =============================================
> Help on package numpy.testing in numpy:
>
> NAME
>     numpy.testing - Common test support for all numpy test scripts.
>
> FILE
>     /usr/lib/python2.5/site-packages/numpy/testing/__init__.py
>
> DESCRIPTION
>     This single module should provide all the common functionality for numpy tests
>     in a single location, so that test scripts can just import it and work right
>     away.
>
> PACKAGE CONTENTS
>     decorators
>     noseclasses
>     nosetester
>     nulltester
>     numpytest
>     parametric
>     setup
>     setupscons
>     utils
>
> DATA
>     verbose = 0
> ================================================
>
> So I have to dir it to see if there are any other useful functions. It would be
> perfect to document package methods like assert_equal here.
>
> Thanks very much~ ;)
>
> --
> Cheers,
> Grissiom
>

The testing assert functions are not well documented; I usually just use assert_array_almost_equal with decimal precision for float arrays. Also useful is assert_(), which is better than the assert statement since it survives the optimization flag for python compile.

You can browse the help editor
http://docs.scipy.org/numpy/docs/numpy.testing.utils/
To see the precise definition and difference between the different asserts you have to check the source, source button on editor page.

There are also the
http://projects.scipy.org/numpy/wiki/TestingGuidelines , if you haven't seen them yet; they describe the general test setup with nose but not the assert functions.

Josef

If you know where to look there is some information:

>>> help(numpy.testing.utils)
Help on module numpy.testing.utils in numpy.testing:

NAME
    numpy.testing.utils - Utility function to facilitate testing.

FILE
    c:\programs\python25\lib\site-packages\numpy\testing\utils.py

FUNCTIONS
    assert_almost_equal(actual, desired, decimal=7, err_msg='', verbose=True)
        Raise an assertion if two items are not equal.

        I think this should be part of unittest.py

        The test i

...
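(Pulling together the numpy.testing helpers discussed in this thread, a small self-contained example; the calls are the ones shown above, though the exact failure messages vary by numpy version:)

import numpy as np
from numpy.testing import assert_array_equal, assert_array_almost_equal

a = np.array([1.0, 2.0, np.nan, 4.0])
b = np.array([1.0, 2.0, 3.0, 4.0])

assert_array_equal(a, a)       # passes: NaNs in matching positions are accepted
assert_array_almost_equal(b, b + 1e-9, decimal=6)  # passes: equal to 6 decimals

try:
    assert_array_equal(a, a + 1)   # genuine mismatch
except AssertionError, e:
    print e                        # prints the 'Arrays are not equal' report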
From Sul at hcp.med.harvard.edu  Fri Mar 20 22:32:56 2009
From: Sul at hcp.med.harvard.edu (Sul, Young L)
Date: Fri, 20 Mar 2009 22:32:56 -0400
Subject: [Numpy-discussion] more on that missing directory
In-Reply-To: <5b8d13220903201819n7c46f62bgc603fffd8d418666@mail.gmail.com>
References: , <5b8d13220903201819n7c46f62bgc603fffd8d418666@mail.gmail.com>
Message-ID: 

I'll have to get back to you on the files.

Would you like a login to a solaris 10 system? I could provide that.

________________________________________
From: numpy-discussion-bounces at scipy.org [numpy-discussion-bounces at scipy.org] On Behalf Of David Cournapeau [cournape at gmail.com]
Sent: Friday, March 20, 2009 9:19 PM
To: Discussion of Numerical Python
Subject: Re: [Numpy-discussion] more on that missing directory

Hi,

2009/3/21 Sul, Young L :
> Hi,
>
> (I'm on a Solaris 10 intel system, and am trying to use the sunperf
> libraries)

> An immediate problem is that some files seem to have embedded ^Ms in them. I
> had to clean and rerun a few times before numpy installed.

Could you tell me what those files are? In numscons or numpy? Those files should be fixed; neither numpy nor numscons should have any CRLF-style end of lines.

> Now, I am trying to install scipy via numscons. It looked like it was going
> to work, but it barfed. From the output it looks like whatever is building
> the compile commands forgot to add the cc command at the beginning of the
> line (see below; I've highlighted the barf).

Yes, it is a bug in scons - its way of looking for compilers is buggy on solaris. I will look into it later today (I don't have a solaris installation handy ATM),

cheers,

David
_______________________________________________
Numpy-discussion mailing list
Numpy-discussion at scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion

From cournape at gmail.com  Fri Mar 20 23:15:09 2009
From: cournape at gmail.com (David Cournapeau)
Date: Sat, 21 Mar 2009 12:15:09 +0900
Subject: [Numpy-discussion] more on that missing directory
In-Reply-To: 
References: <5b8d13220903201819n7c46f62bgc603fffd8d418666@mail.gmail.com>
Message-ID: <5b8d13220903202015s6f9c6884yf40a218d3ec04097@mail.gmail.com>

On Sat, Mar 21, 2009 at 11:32 AM, Sul, Young L wrote:
> I'll have to get back to you on the files.
>
> Would you like a login to a solaris 10 system? I could provide that.

That could be useful, yes. I had a solaris 10 install, but I am afraid I had to wipe it out at some point, and I don't remember installing solaris 10 as a particularly enjoyable experience.

David

From chaos.proton at gmail.com  Sat Mar 21 06:30:40 2009
From: chaos.proton at gmail.com (Grissiom)
Date: Sat, 21 Mar 2009 18:30:40 +0800
Subject: [Numpy-discussion] using assertEqual in unittest to test two np.ndarray?
In-Reply-To: <1cd32cbb0903201915qdd3b5f1x2eed98f58fbfd5a2@mail.gmail.com>
References: <49C3FF0A.40206@noaa.gov>
	<1cd32cbb0903201403v5d6dee0fhd4dec27f2cc5ae8c@mail.gmail.com>
	<1cd32cbb0903201915qdd3b5f1x2eed98f58fbfd5a2@mail.gmail.com>
Message-ID: 

On Sat, Mar 21, 2009 at 10:15, wrote:
>
> The testing assert functions are not well documented; I usually just
> use assert_array_almost_equal with decimal precision for float arrays.
> Also useful is assert_(), which is better than the assert statement
> since it survives the optimization flag for python compile.
>
> You can browse the help editor
> http://docs.scipy.org/numpy/docs/numpy.testing.utils/
> To see the precise definition and difference between the different
> asserts you have to check the source, source button on editor page.
>
> There are also the
> http://projects.scipy.org/numpy/wiki/TestingGuidelines , if you
> haven't seen them yet; they describe the general test setup with nose
> but not the assert functions.
>
> Josef
>
> If you know where to look there is some information:
>
> >>> help(numpy.testing.utils)
> Help on module numpy.testing.utils in numpy.testing:
>
> NAME
>     numpy.testing.utils - Utility function to facilitate testing.
>
> FILE
>     c:\programs\python25\lib\site-packages\numpy\testing\utils.py
>
> FUNCTIONS
>     assert_almost_equal(actual, desired, decimal=7, err_msg='', verbose=True)
>         Raise an assertion if two items are not equal.
>
>         I think this should be part of unittest.py
>
>         The test i
>
> ...

Thanks really~ It helped a lot. ;)

--
Cheers,
Grissiom
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From josef.pktd at gmail.com  Sat Mar 21 13:24:19 2009
From: josef.pktd at gmail.com (josef.pktd at gmail.com)
Date: Sat, 21 Mar 2009 13:24:19 -0400
Subject: [Numpy-discussion] numpy.testing in the docs?
Message-ID: <1cd32cbb0903211024j56cbd72bn28b0b50ed08f2853@mail.gmail.com>

In following up on a question, I didn't find numpy.testing anywhere in the sphinx-generated docs.

Since the asserts especially are useful for other applications as well, it would be nice to have it in the help file.

Where in the docs is it supposed to go?

Josef

From pav at iki.fi  Sat Mar 21 13:47:53 2009
From: pav at iki.fi (Pauli Virtanen)
Date: Sat, 21 Mar 2009 17:47:53 +0000 (UTC)
Subject: [Numpy-discussion] numpy.testing in the docs?
References: <1cd32cbb0903211024j56cbd72bn28b0b50ed08f2853@mail.gmail.com>
Message-ID: 

Sat, 21 Mar 2009 13:24:19 -0400, josef.pktd wrote:
> In following up on a question, I didn't find numpy.testing anywhere in
> the sphinx-generated docs.
>
> Since the asserts especially are useful for other applications as well, it
> would be nice to have it in the help file.
>
> Where in the docs is it supposed to go?

A new file, routines.testing.rst, would perhaps be the best place.

-- 
Pauli Virtanen

From pnorthug at gmail.com  Sat Mar 21 14:18:56 2009
From: pnorthug at gmail.com (Paul Northug)
Date: Sat, 21 Mar 2009 19:18:56 +0100
Subject: [Numpy-discussion] memoization with ndarray arguments
Message-ID: 

I would like to 'memoize' the objective, derivative and hessian functions, each taking a 1d double ndarray argument X, that are passed as arguments to scipy.optimize.fmin_ncg.

Each of these 3 functions has calculations in common that are expensive to compute and are a function of X. It seems fmin_ncg computes these quantities at the same X over the course of the optimization.

How should I go about doing this?

numpy arrays are not hashable, maybe for a good reason. I tried anyway by keeping a dict of hash(tuple(X)), but started having collisions. So I switched to md5.new(X).digest() as the hash function and it seems to work ok. In a quick search, I saw cPickle.dumps and repr are also used as key values.

I am assuming this is a common problem with functions with numpy array arguments and was wondering what the best approach is (including not using memoization).

Thanks,
Pål.
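(A minimal sketch of the caching pattern being asked about here. This is an illustration, not code from the thread; the replies below suggest essentially this, plus a library, joblib, that generalizes it. tuple(X) is used as the hashable key and only the most recent entry is kept:)

import numpy as np

def memoize_latest(func):
    """Cache func's result for the most recently seen X only."""
    cache = {}
    def wrapper(X):
        key = tuple(X)           # 1d float ndarray -> hashable key
        if key not in cache:
            cache.clear()        # keep at most one entry
            cache[key] = func(X)
        return cache[key]
    return wrapper

@memoize_latest
def common_terms(X):
    # stand-in for the expensive subexpressions shared by the
    # objective, derivative and hessian
    return np.dot(X, X), np.exp(X)

def objective(X):
    sq, ex = common_terms(X)
    return sq + ex.sum()

def derivative(X):
    sq, ex = common_terms(X)
    return 2.0 * X + ex          # reuses the cached terms at the same X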
From Jason.Woolard at noaa.gov  Sat Mar 21 19:35:39 2009
From: Jason.Woolard at noaa.gov (Jason.Woolard at noaa.gov)
Date: Sat, 21 Mar 2009 19:35:39 -0400
Subject: [Numpy-discussion] List to Array question?
Message-ID: 

hi all,

I'm sort of new to Numpy and I haven't had any luck with the docs or examples on this, so I thought I would ask here. I have this small piece of code that's working, but I'm wondering if the list really needs to be created or if this is an extra step that could be eliminated to speed things up a bit. It seems like the data could be dumped directly into the numpy array.

>>> infile = file.File('test.dat',mode='r') #A binary file containing x,y,z data
>>> xdata = [] #Create some empty lists
>>> ydata = []
>>> zdata = []
>>> for p in infile:
        xdata.append(p.x) #Append data to list
        ydata.append(p.y)
        zdata.append(p.z)

>>> easting = numpy.array(xdata,dtype=float32) #Convert to array
>>> northing = numpy.array(ydata,dtype=float32)
>>> height = numpy.array(zdata,dtype=float32)
>>> print height
[-39.54999924 -39.54999924 -39.61000061 ..., -39.54000092 -39.52999878
 -39.52999878]

I also tried this and it worked, but I'd have to loop through the file each time (x,y,z) and that was slower than converting from the list.

>>> xiterator = (p.x for p in infile)
>>> x = numpy.fromiter(xiterator, dtype=float32, count=-1)
>>>
>>> ziterator = (p.z for p in infile)
>>> z = numpy.fromiter(ziterator, dtype=float32, count=-1)

From what I've read, the numpy arrays need to be pre-allocated, and I have a header that will give me this info, but I can't seem to get the data into the array.

Sorry if this is something obvious. Thanks in advance.

JW

From efiring at hawaii.edu  Sat Mar 21 20:20:41 2009
From: efiring at hawaii.edu (Eric Firing)
Date: Sat, 21 Mar 2009 14:20:41 -1000
Subject: [Numpy-discussion] List to Array question?
In-Reply-To: 
References: 
Message-ID: <49C58459.5090903@hawaii.edu>

Jason.Woolard at noaa.gov wrote:
> hi all,
>
> I'm sort of new to Numpy and I haven't had any luck with the docs or examples on this, so I thought I would ask here. I have this small piece of code that's working, but I'm wondering if the list really needs to be created or if this is an extra step that could be eliminated to speed things up a bit. It seems like the data could be dumped directly into the numpy array.
>

If the file consists of a header followed by a sequence of binary numbers, then you can use the numpy.fromfile function to read the whole sequence into an array of shape (n,3). You don't even have to know what n is beforehand if you are reading to the end of the file. You do have to know how to construct a suitable dtype to match what is in the file. And you do have to know how to read the header, at least to the extent of positioning the file at the start of the binary chunk before calling numpy.fromfile.

Eric

>>>> infile = file.File('test.dat',mode='r') #A binary file containing x,y,z data
>>>> xdata = [] #Create some empty lists
>>>> ydata = []
>>>> zdata = []
>>>> for p in infile:
> xdata.append(p.x) #Append data to list
> ydata.append(p.y)
> zdata.append(p.z)
>
>>>> easting = numpy.array(xdata,dtype=float32) #Convert to array
>>>> northing = numpy.array(ydata,dtype=float32)
>>>> height = numpy.array(zdata,dtype=float32)
>>>> print height
> [-39.54999924 -39.54999924 -39.61000061 ..., -39.54000092 -39.52999878
> -39.52999878]
>
> I also tried this and it worked, but I'd have to loop through the file each time (x,y,z) and that was slower than converting from the list.
>
>>>> xiterator = (p.x for p in infile)
>>>> x = numpy.fromiter(xiterator, dtype=float32, count=-1)
>>>>
>>>> ziterator = (p.z for p in infile)
>>>> z = numpy.fromiter(ziterator, dtype=float32, count=-1)
>
> From what I've read, the numpy arrays need to be pre-allocated, and I have a header that will give me this info, but I can't seem to get the data into the array.
>
> Sorry if this is something obvious. Thanks in advance.
>
> JW
> _______________________________________________
> Numpy-discussion mailing list
> Numpy-discussion at scipy.org
> http://mail.scipy.org/mailman/listinfo/numpy-discussion

From pav at iki.fi  Sat Mar 21 20:58:13 2009
From: pav at iki.fi (Pauli Virtanen)
Date: Sun, 22 Mar 2009 00:58:13 +0000 (UTC)
Subject: [Numpy-discussion] Doc update for 1.3.0?
Message-ID: 

Hi all, (esp. David)

Is there still time for a merge from the doc wiki for 1.3.x?

Stefan already merged several reviewed docstrings a while ago. My current worry is that there's quite a bit of good work still in there that would be useful to have in 1.3.0, but which, despite being an improvement over what we have now, is not perfect.

The only issue is actually cherry-picking "good" changes:

- The new version should be a (possibly slight) improvement over the old one
- No inappropriate content (eg. malicious doctests)

Previously, I did the cherry-picking just by reading through the full patch, but now I added a feature to the doc wiki that allows to parallelize this:

    1. Go to http://docs.scipy.org/numpy/patch/
    2. Click on a link in the patch list. This shows a diff vs. SVN
    3. See if the change makes sense
    4. Change "OK to apply" to Yes or No, accordingly.

This serves a separate purpose from the review system and is separate from it: OK to Apply indicates that a docstring revision is "better than previously", whereas the review system indicates whether it is "perfect".

I'm doing a first pass through them. I invite other people who have the reviewer permissions to check my judgment.

Comments or suggestions?

-- 
Pauli Virtanen

From stefan at sun.ac.za  Sat Mar 21 21:14:50 2009
From: stefan at sun.ac.za (=?ISO-8859-1?Q?St=E9fan_van_der_Walt?=)
Date: Sun, 22 Mar 2009 03:14:50 +0200
Subject: [Numpy-discussion] Doc update for 1.3.0?
In-Reply-To: 
References: 
Message-ID: <9457e7c80903211814k1b55899fs2b6ed7a8b1c5a2be@mail.gmail.com>

Hi Pauli

2009/3/22 Pauli Virtanen :
> Previously, I did the cherry-picking just by reading through the full
> patch, but now I added a feature to the doc wiki that allows to
> parallelize this:
>
>     1. Go to http://docs.scipy.org/numpy/patch/
>     2. Click on a link in the patch list. This shows a diff vs. SVN
>     3. See if the change makes sense
>     4. Change "OK to apply" to Yes or No, accordingly.

This is neat! I think it is going to be very useful to quickly decide which changes go through right before a release. Does the status get reset after an edit to the docstring?

There is a small problem, in that when I click on any function it shows that it is set to "No", with the option to change to "Yes" -- even when the current status is yes!

Cheers
Stéfan

From pav at iki.fi  Sat Mar 21 21:26:12 2009
From: pav at iki.fi (Pauli Virtanen)
Date: Sun, 22 Mar 2009 01:26:12 +0000 (UTC)
Subject: [Numpy-discussion] Doc update for 1.3.0?
References: <9457e7c80903211814k1b55899fs2b6ed7a8b1c5a2be@mail.gmail.com>
Message-ID: 

Sun, 22 Mar 2009 03:14:50 +0200, Stéfan van der Walt wrote:
> Hi Pauli
>
> 2009/3/22 Pauli Virtanen :
>> Previously, I did the cherry-picking just by reading through the full
>> patch, but now I added a feature to the doc wiki that allows to
>> parallelize this:
>>
>>     1. Go to http://docs.scipy.org/numpy/patch/
>>     2. Click on a link in the patch list. This shows a diff vs. SVN
>>     3. See if the change makes sense
>>     4. Change "OK to apply" to Yes or No, accordingly.
>
> This is neat! I think it is going to be very useful to quickly decide
> which changes go through right before a release. Does the status get
> reset after an edit to the docstring?

Yes, it gets reset on edit. (Ie. defaults to False for new docstring revisions.)

> There is a small problem, in that when I click on any function it shows
> that it is set to "No", with the option to change to "Yes" -- even when
> the current status is yes!

Uh, maybe it's the time of the day, but somehow I fail to understand what is the problem you are having. Could you explain again what you see?

Looking at the Django template, it should be quite impossible to have "Yes (Change to Yes)" or "No (Change to No)" appear in the output. It works as expected for me. Maybe you refreshed the Patch page before I changed the status of the linked entry to Yes?

-- 
Pauli Virtanen

From david at ar.media.kyoto-u.ac.jp  Sun Mar 22 01:09:07 2009
From: david at ar.media.kyoto-u.ac.jp (David Cournapeau)
Date: Sun, 22 Mar 2009 14:09:07 +0900
Subject: [Numpy-discussion] Doc update for 1.3.0?
In-Reply-To: 
References: 
Message-ID: <49C5C7F3.3010609@ar.media.kyoto-u.ac.jp>

Hi Pauli,

Pauli Virtanen wrote:
> Hi all, (esp. David)
>
> Is there still time for a merge from the doc wiki for 1.3.x?
>
> Stefan already merged several reviewed docstrings a while ago. My current
> worry is that there's quite a bit of good work still in there that would be
> useful to have in 1.3.0, but which, despite being an improvement over
> what we have now, is not perfect.
>

You can backport as many docstring changes as possible, since there is little chance to break anything just from docstrings.

There has been no complaint about the beta so far, and at least from the sourceforge stats, a reasonable number of downloads have been made (~500) across all the binaries, so we may push the RC back a bit if you need some time to merge those changes,

cheers,

David

From matthew.brett at gmail.com  Sun Mar 22 01:46:41 2009
From: matthew.brett at gmail.com (Matthew Brett)
Date: Sat, 21 Mar 2009 22:46:41 -0700
Subject: [Numpy-discussion] Unhelpful errors trying to create very large arrays?
Message-ID: <1e2af89e0903212246v5a5be2acn92acd2aa4707b1f9@mail.gmail.com>

Hello,

I found this a little confusing:

In [11]: n = 2500000000

In [12]: np.arange(n).shape
Out[12]: (0,)

Maybe this should raise an error instead.

This was a little more obvious, but perhaps again a more explicit error would be helpful?

In [13]: np.zeros((n,))
---------------------------------------------------------------------------
OverflowError                             Traceback (most recent call last)

/home/mb312/tmp/max_speed.py in ()
----> 1
      2
      3
      4
      5

OverflowError: long int too large to convert to int

Best,

Matthew

From charlesr.harris at gmail.com  Sun Mar 22 01:56:16 2009
From: charlesr.harris at gmail.com (Charles R Harris)
Date: Sat, 21 Mar 2009 23:56:16 -0600
Subject: [Numpy-discussion] Doc update for 1.3.0?
In-Reply-To: <49C5C7F3.3010609@ar.media.kyoto-u.ac.jp>
References: <49C5C7F3.3010609@ar.media.kyoto-u.ac.jp>
Message-ID: 

On Sat, Mar 21, 2009 at 11:09 PM, David Cournapeau <david at ar.media.kyoto-u.ac.jp> wrote:

> Hi Pauli,
>
> Pauli Virtanen wrote:
> > Hi all, (esp. David)
> >
> > Is there still time for a merge from the doc wiki for 1.3.x?
> >
> > Stefan already merged several reviewed docstrings a while ago. My current
> > worry is that there's quite a bit of good work still in there that would be
> > useful to have in 1.3.0, but which, despite being an improvement over
> > what we have now, is not perfect.
> >
>
> You can backport as many docstring changes as possible, since there is
> little chance to break anything just from docstrings.
>
> There has been no complaint about the beta so far, and at least from the
> sourceforge stats, a reasonable number of downloads have been made (~500)
> across all the binaries, so we may push the RC back a bit if you need
> some time to merge those changes,
>

Let's announce the RC somewhere prominent on the scipy page so it gets more notice and testing. I didn't see any mention of the beta when I looked today.

Chuck
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From charlesr.harris at gmail.com  Sun Mar 22 02:03:54 2009
From: charlesr.harris at gmail.com (Charles R Harris)
Date: Sun, 22 Mar 2009 00:03:54 -0600
Subject: [Numpy-discussion] Unhelpful errors trying to create very large arrays?
In-Reply-To: <1e2af89e0903212246v5a5be2acn92acd2aa4707b1f9@mail.gmail.com>
References: <1e2af89e0903212246v5a5be2acn92acd2aa4707b1f9@mail.gmail.com>
Message-ID: 

On Sat, Mar 21, 2009 at 11:46 PM, Matthew Brett wrote:

> Hello,
>
> I found this a little confusing:
>
> In [11]: n = 2500000000
>
> In [12]: np.arange(n).shape
> Out[12]: (0,)
>
> Maybe this should raise an error instead.
>
> This was a little more obvious, but perhaps again a more explicit
> error would be helpful?
>
> In [13]: np.zeros((n,))
> ---------------------------------------------------------------------------
> OverflowError                             Traceback (most recent call last)
>
> /home/mb312/tmp/max_speed.py in ()
> ----> 1
>       2
>       3
>       4
>       5
>
> OverflowError: long int too large to convert to int
>

Open a ticket. For testing purposes, such large integers are easier to parse if they are written as products, i.e., something like n = 25*10**8. That is about 10 GB for an integer array. How much memory does your machine have?

Chuck
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From matthew.brett at gmail.com  Sun Mar 22 02:08:06 2009
From: matthew.brett at gmail.com (Matthew Brett)
Date: Sat, 21 Mar 2009 23:08:06 -0700
Subject: [Numpy-discussion] Unhelpful errors trying to create very large arrays?
In-Reply-To: 
References: <1e2af89e0903212246v5a5be2acn92acd2aa4707b1f9@mail.gmail.com>
Message-ID: <1e2af89e0903212308v207c6b92qd0cbadddc2408b98@mail.gmail.com>

Hi,

>> I found this a little confusing:
>>
>> In [11]: n = 2500000000
>>
>> In [12]: np.arange(n).shape
>> Out[12]: (0,)
>>
>> Maybe this should raise an error instead.
>>
>> This was a little more obvious, but perhaps again a more explicit
>> error would be helpful?
>>
>> In [13]: np.zeros((n,))
>> ---------------------------------------------------------------------------
>> OverflowError                             Traceback (most recent call last)
>>
>> /home/mb312/tmp/max_speed.py in ()
>> ----> 1
>>       2
>>       3
>>       4
>>       5
>>
>> OverflowError: long int too large to convert to int
>
> Open a ticket. For testing purposes, such large integers are easier to parse
> if they are written as products, i.e., something like n = 25*10**8. That is
> about 10 GB for an integer array. How much memory does your machine have?

The machine has got 2GB. I notice this gives much more helpful memory errors on a 64 bit machine with 4GB of memory.

I will open a ticket,

Thanks,

Matthew

From david at ar.media.kyoto-u.ac.jp  Sun Mar 22 02:33:42 2009
From: david at ar.media.kyoto-u.ac.jp (David Cournapeau)
Date: Sun, 22 Mar 2009 15:33:42 +0900
Subject: [Numpy-discussion] Doc update for 1.3.0?
In-Reply-To: 
References: <49C5C7F3.3010609@ar.media.kyoto-u.ac.jp>
Message-ID: <49C5DBC6.2050304@ar.media.kyoto-u.ac.jp>

Charles R Harris wrote:
>
> Let's announce the RC somewhere prominent on the scipy page so it gets
> more notice and testing. I didn't see any mention of the beta when I
> looked today.

Yes, you're right, I completely forgot it. On a side-note, I think the whole release process should be more automated. There are too many things to do manually for the process to be reliable,

cheers,

David

From charlesr.harris at gmail.com  Sun Mar 22 03:14:59 2009
From: charlesr.harris at gmail.com (Charles R Harris)
Date: Sun, 22 Mar 2009 01:14:59 -0600
Subject: [Numpy-discussion] Doc update for 1.3.0?
In-Reply-To: <49C5DBC6.2050304@ar.media.kyoto-u.ac.jp>
References: <49C5C7F3.3010609@ar.media.kyoto-u.ac.jp>
	<49C5DBC6.2050304@ar.media.kyoto-u.ac.jp>
Message-ID: 

On Sun, Mar 22, 2009 at 12:33 AM, David Cournapeau <david at ar.media.kyoto-u.ac.jp> wrote:

> Charles R Harris wrote:
> >
> > Let's announce the RC somewhere prominent on the scipy page so it gets
> > more notice and testing. I didn't see any mention of the beta when I
> > looked today.
>
> Yes, you're right, I completely forgot it. On a side-note, I think the
> whole release process should be more automated. There are too many
> things to do manually for the process to be reliable,
>

+1. On another side note, I think the include files npy_cpu.h and npy_endian.h should be combined in the npy_cpu.h file. As is, first you modify one, then you gotta go modify the other; it's a subdivision too far. I don't recall if these files are part of the c-api, but if so I would like to do it before the release.

Chuck
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From cournape at gmail.com  Sun Mar 22 04:31:06 2009
From: cournape at gmail.com (David Cournapeau)
Date: Sun, 22 Mar 2009 17:31:06 +0900
Subject: [Numpy-discussion] Unhelpful errors trying to create very large arrays?
In-Reply-To: <1e2af89e0903212308v207c6b92qd0cbadddc2408b98@mail.gmail.com>
References: <1e2af89e0903212246v5a5be2acn92acd2aa4707b1f9@mail.gmail.com>
	<1e2af89e0903212308v207c6b92qd0cbadddc2408b98@mail.gmail.com>
Message-ID: <5b8d13220903220131q4c79114t606b15785a130210@mail.gmail.com>

Hi Matthew,

On Sun, Mar 22, 2009 at 3:08 PM, Matthew Brett wrote:

>
> I notice this gives much more helpful memory errors on a 64 bit
> machine with 4GB of memory.

Can you tell me which version of numpy and which platform you are using? I get a different (and even more confusing) error message when the size overflows an int - I get "negative dimension not allowed".

cheers,

David

From matthew.brett at gmail.com  Sun Mar 22 04:40:15 2009
From: matthew.brett at gmail.com (Matthew Brett)
Date: Sun, 22 Mar 2009 01:40:15 -0700
Subject: [Numpy-discussion] Unhelpful errors trying to create very large arrays?
In-Reply-To: <5b8d13220903220131q4c79114t606b15785a130210@mail.gmail.com>
References: <1e2af89e0903212246v5a5be2acn92acd2aa4707b1f9@mail.gmail.com>
	<1e2af89e0903212308v207c6b92qd0cbadddc2408b98@mail.gmail.com>
	<5b8d13220903220131q4c79114t606b15785a130210@mail.gmail.com>
Message-ID: <1e2af89e0903220140x6e3ae2ccw567eb32f55fc1253@mail.gmail.com>

Hi,

On Sun, Mar 22, 2009 at 1:31 AM, David Cournapeau wrote:
> Hi Matthew,
>
> On Sun, Mar 22, 2009 at 3:08 PM, Matthew Brett wrote:
>
>>
>> I notice this gives much more helpful memory errors on a 64 bit
>> machine with 4GB of memory.
>
> Can you tell me which version of numpy and which platform you are
> using? I get a different (and even more confusing) error message when
> the size overflows an int - I get "negative dimension not allowed".

Ah yes, I forgot: '1.3.0.dev6357'; ubuntu 8.04 32 bit.

I get the 'negative dimensions' error in this situation:

In [79]: n = 2500000000

In [80]: np.zeros(n)
---------------------------------------------------------------------------
ValueError                                Traceback (most recent call last)

/home/mb312/tmp/max_speed.py in ()
----> 1
      2
      3
      4
      5

ValueError: negative dimensions are not allowed

I guess that should go on the ticket too,

Thanks a lot,

Matthew

From gregor.thalhammer at gmail.com  Sun Mar 22 06:12:39 2009
From: gregor.thalhammer at gmail.com (Gregor Thalhammer)
Date: Sun, 22 Mar 2009 11:12:39 +0100
Subject: [Numpy-discussion] memoization with ndarray arguments
In-Reply-To: 
References: 
Message-ID: <49C60F17.1010606@googlemail.com>

Paul Northug wrote:
> I would like to 'memoize' the objective, derivative and hessian
> functions, each taking a 1d double ndarray argument X, that are passed
> as arguments to
> scipy.optimize.fmin_ncg.
>
> Each of these 3 functions has calculations in common that are
> expensive to compute and are a function of X. It seems fmin_ncg
> computes these quantities at the same X over the course of the
> optimization.
>
> How should I go about doing this?
>

Exactly for this purpose I was using something like:

cache[tuple(X)] = (subexpression1, subexpression2)

This worked fine for me. In your use case it might be enough to store only the latest result to avoid excessive memory usage, since typically the same X is used for consecutive calls of the objective, derivative and hessian functions.

Gregor

> numpy arrays are not hashable, maybe for a good reason. I tried anyway
> by keeping a dict of hash(tuple(X)), but started having collisions.
> So I switched to md5.new(X).digest() as the hash function and it seems
> to work ok. In a quick search, I saw cPickle.dumps and repr are also
> used as key values.
>

From stefan at sun.ac.za  Sun Mar 22 08:37:28 2009
From: stefan at sun.ac.za (=?ISO-8859-1?Q?St=E9fan_van_der_Walt?=)
Date: Sun, 22 Mar 2009 14:37:28 +0200
Subject: [Numpy-discussion] Doc update for 1.3.0?
In-Reply-To: 
References: <9457e7c80903211814k1b55899fs2b6ed7a8b1c5a2be@mail.gmail.com>
Message-ID: <9457e7c80903220537o11f565e2pa942bc8824052718@mail.gmail.com>

2009/3/22 Pauli Virtanen :
>> There is a small problem, in that when I click on any function it shows
>> that it is set to "No", with the option to change to "Yes" -- even when
>> the current status is yes!
>
> Uh, maybe it's the time of the day, but somehow I fail to understand what
> is the problem you are having. Could you explain again what you see?
Gregor
> numpy arrays are not hashable, maybe for a good reason. I tried anyway > by keeping a dict of hash(tuple(X)), but started having collisions. > So I switched to md5.new(X).digest() as the hash function and it seems > to work ok. In a quick search, I saw cPickle.dumps and repr are also > used as key values. > > I am assuming this is a common problem with functions with numpy array > arguments and was wondering what the best approach is (including not > using memoization). >
From stefan at sun.ac.za Sun Mar 22 08:37:28 2009 From: stefan at sun.ac.za (Stéfan van der Walt) Date: Sun, 22 Mar 2009 14:37:28 +0200 Subject: [Numpy-discussion] Doc update for 1.3.0? In-Reply-To: References: <9457e7c80903211814k1b55899fs2b6ed7a8b1c5a2be@mail.gmail.com> Message-ID: <9457e7c80903220537o11f565e2pa942bc8824052718@mail.gmail.com>
2009/3/22 Pauli Virtanen : >> There is a small problem, in that when I click on any function it shows >> that it is set to "No", with the option to change to "Yes" -- even when >> the current status is yes! > > Uh, maybe it's the time of the day, but somehow I fail to understand what > is the problem you are having. Could you explain again what you see? In the list of patches, I see
numpy.core.fromnumeric.amax -> OK to apply
but when I click on it I see
No (Change to Yes)
Cheers Stéfan
From pav at iki.fi Sun Mar 22 09:29:43 2009 From: pav at iki.fi (Pauli Virtanen) Date: Sun, 22 Mar 2009 13:29:43 +0000 (UTC) Subject: [Numpy-discussion] Doc update for 1.3.0? References: <9457e7c80903211814k1b55899fs2b6ed7a8b1c5a2be@mail.gmail.com> <9457e7c80903220537o11f565e2pa942bc8824052718@mail.gmail.com> Message-ID:
Sun, 22 Mar 2009 14:37:28 +0200, Stéfan van der Walt wrote: > 2009/3/22 Pauli Virtanen : >>> There is a small problem, in that when I click on any function it >>> shows that it is set to "No", with the option to change to "Yes" -- >>> even when the current status is yes! >> >> Uh, maybe it's the time of the day, but somehow I fail to understand >> what is the problem you are having. Could you explain again what you >> see? > > In the list of patches, I see > > numpy.core.fromnumeric.amax -> OK to apply > > but when I click on it I see > > No (Change to Yes) Fixed. -- Pauli Virtanen
From gael.varoquaux at normalesup.org Sun Mar 22 11:03:57 2009 From: gael.varoquaux at normalesup.org (Gael Varoquaux) Date: Sun, 22 Mar 2009 16:03:57 +0100 Subject: [Numpy-discussion] memoization with ndarray arguments In-Reply-To: References: Message-ID: <20090322150357.GC30682@phare.normalesup.org>
On Sat, Mar 21, 2009 at 07:18:56PM +0100, Paul Northug wrote: > I would like to 'memoize' the objective, derivative and hessian > functions, each taking a 1d double ndarray argument X, that are passed > as arguments to > scipy.optimize.fmin_ncg. I have developed a library that does this. It uses heavy, ugly tricks. It is called joblib, http://pypi.python.org/pypi/joblib You can use it, I use it all the time. It is documented and tested (I just uploaded a new version with better documentation). Now I believe that this implementation is not the right approach to solve my problem, and I hope to find some time to do fairly radical changes to joblib. This is one of the reasons why I haven't been advertising it much. However, it does probably what you want, or at least partly. Cheers, Gaël
From jared.subscript at gmail.com Sun Mar 22 18:45:31 2009 From: jared.subscript at gmail.com (Jared MacCleary) Date: Sun, 22 Mar 2009 18:45:31 -0400 Subject: [Numpy-discussion] String arrays from Python to Fortran Message-ID: <6f3cb480903221545x7d4fb656ve3f239c1dc295dc0@mail.gmail.com>
Hi all, No doubt it's a simple fix, but I'm stumped. I've tried several ways to send an array of strings from Python to a Fortran function, but nothing seems to work. I don't know what I'm missing. My Python and Fortran code are very simple. At this point, I'm just trying to get anything to work. In Python I try to create an array of strings, and the Fortran subroutine is supposed to print each element of that array on a separate line. When I try, I get this error:
Traceback (most recent call last):
  File "testp.py", line 13, in <module>
    print_string(a, len(a)) #this is imported from the compiled Fortran code
ValueError: failed to initialize intent(inout) array -- input 'S' not compatible to 'c'
I've googled for the answer but without success. The only encouraging info I've come across was an announcement accompanying the release of a previous version of f2py, declaring that string arrays had finally been fully implemented. So it must be possible. I'm still pretty new to Python and I'm brand new to Fortran. I'd appreciate any advice you can give. My Fortran and Python code are below.
Thanks a lot, Jared

My Fortran code:

    subroutine print_string (a, c)
        implicit none
        character(len=255), dimension(c), intent(inout):: a
        integer, intent(in) :: c
        integer :: i
        do i = 1, size(a)
            print*, a(i)
        end do
    end subroutine print_string

My Python code:

    from test import *
    from numpy import *

    a = "this is the test string."
    a = a.split()
    b = a
    a = char.array(a, itemsize=1, order = 'Fortran')
    print_string(a, len(a)) #this is imported from the compiled Fortran code

-------------- next part -------------- An HTML attachment was scrubbed... URL:
From cournape at gmail.com Mon Mar 23 03:26:55 2009 From: cournape at gmail.com (David Cournapeau) Date: Mon, 23 Mar 2009 16:26:55 +0900 Subject: [Numpy-discussion] Unhelpful errors trying to create very large arrays? In-Reply-To: <1e2af89e0903220140x6e3ae2ccw567eb32f55fc1253@mail.gmail.com> References: <1e2af89e0903212246v5a5be2acn92acd2aa4707b1f9@mail.gmail.com> <1e2af89e0903212308v207c6b92qd0cbadddc2408b98@mail.gmail.com> <5b8d13220903220131q4c79114t606b15785a130210@mail.gmail.com> <1e2af89e0903220140x6e3ae2ccw567eb32f55fc1253@mail.gmail.com> Message-ID: <5b8d13220903230026y426bb317p9817e96a63f9734e@mail.gmail.com>
Hi Matthew, On Sun, Mar 22, 2009 at 5:40 PM, Matthew Brett wrote: > > I get the 'negative dimensions' error in this situation: I think I have fixed both arange and zeros errors in the trunk. The arange error was specific to arange (unchecked overflow in a double -> int cast), but the zeros one was more general (it should fix any 'high level' array creation call like empty, ones, etc....) Tell me if you still have problems, David
From charlie.xia.fdu at gmail.com Mon Mar 23 03:48:48 2009 From: charlie.xia.fdu at gmail.com (charlie) Date: Mon, 23 Mar 2009 00:48:48 -0700 Subject: [Numpy-discussion] Fwd: numpy installation with nonroot python installation In-Reply-To: <11c6cf4e0903230043q4ea395fbpc563d6dc9a541442@mail.gmail.com> References: <11c6cf4e0903230043q4ea395fbpc563d6dc9a541442@mail.gmail.com> Message-ID: <11c6cf4e0903230048s6e1fd59dw1989c04480b97362@mail.gmail.com>
Dear numpyers, I am trying to install numpy 1.3 with my own version of python 2.5. I got stuck with the following error:
building 'numpy.core.multiarray' extension
compiling C sources
C compiler: gcc -pthread -fno-strict-aliasing -DNDEBUG -g -fwrapv -O3 -Wall -Wstrict-prototypes -fPIC
creating build/temp.linux-x86_64-2.5
creating build/temp.linux-x86_64-2.5/numpy
creating build/temp.linux-x86_64-2.5/numpy/core
creating build/temp.linux-x86_64-2.5/numpy/core/src
compile options: '-Ibuild/src.linux-x86_64-2.5/numpy/core/src -Inumpy/core/include -Ibuild/src.linux-x86_64-2.5/numpy/core/include/numpy -Inumpy/core/src -Inumpy/core/include -I/home/cmb-01/lxia/usr/include/python2.5 -c'
gcc: numpy/core/src/multiarraymodule.c
gcc -pthread -shared build/temp.linux-x86_64-2.5/numpy/core/src/multiarraymodule.o -L. -lm -lm -lpython2.5 -o build/lib.linux-x86_64-2.5/numpy/core/multiarray.so
/usr/bin/ld: cannot find -lpython2.5
collect2: ld returned 1 exit status
/usr/bin/ld: cannot find -lpython2.5
collect2: ld returned 1 exit status
error: Command "gcc -pthread -shared build/temp.linux-x86_64-2.5/numpy/core/src/multiarraymodule.o -L. -lm -lm -lpython2.5 -o build/lib.linux-x86_64-2.5/numpy/core/multiarray.so" failed with exit status 1
I guess it is because ld can't find the libpython2.5.so; So I tried the following methods: 1. $export LD_LIBRARY_PATH = $HOME/usr/lib # where my libpython2.5.so is in 2.
edited the site.cfg file so that:
[DEFAULT]
library_dirs = ~/usr/lib
include_dirs = ~/usr/include
search_static_first = false
Both methods don't work. But when I remove the -lpython2.5 flag from the compiling command, the command goes through without problem. But I don't know where to remove this flag in the numpy package. I've run out of options now, and thus I want to get help from you. Thanks for any advice. Charlie -------------- next part -------------- An HTML attachment was scrubbed... URL:
From faltet at pytables.org Mon Mar 23 04:20:17 2009 From: faltet at pytables.org (Francesc Alted) Date: Mon, 23 Mar 2009 09:20:17 +0100 Subject: [Numpy-discussion] memoization with ndarray arguments In-Reply-To: References: Message-ID: <200903230920.18333.faltet@pytables.org>
On Saturday 21 March 2009, Paul Northug wrote: [clip] > numpy arrays are not hashable, maybe for a good reason. Numpy arrays are not hashable because they are mutable. > I tried > anyway by keeping a dict of hash(tuple(X)), but started having > collisions. So I switched to md5.new(X).digest() as the hash function > and it seems to work ok. In a quick search, I saw cPickle.dumps and > repr are also used as key values. Having collisions is not necessarily very bad, unless you have *a lot* of them. I wonder what kind of X you are dealing with that can provoke so many collisions when using hash(tuple(X))? Just curious. > I am assuming this is a common problem with functions with numpy > array arguments and was wondering what the best approach is > (including not using memoization). If md5.new(X).digest() works well for you, then go ahead; it seems fast:
In [14]: X = np.arange(1000.)
In [15]: timeit hash(tuple(X))
1000 loops, best of 3: 504 µs per loop
In [16]: timeit md5.new(X).digest()
10000 loops, best of 3: 40.4 µs per loop
Cheers, -- Francesc Alted
From sturla at molden.no Mon Mar 23 08:25:23 2009 From: sturla at molden.no (Sturla Molden) Date: Mon, 23 Mar 2009 13:25:23 +0100 Subject: [Numpy-discussion] String arrays from Python to Fortran In-Reply-To: <6f3cb480903221545x7d4fb656ve3f239c1dc295dc0@mail.gmail.com> References: <6f3cb480903221545x7d4fb656ve3f239c1dc295dc0@mail.gmail.com> Message-ID: <49C77FB3.3060403@molden.no>
How did you import the function? f2py? What did you put in your .pyf file?
> My Fortran code: > > subroutine print_string (a, c) > implicit none > character(len=255), dimension(c), intent(inout):: a > integer, intent(in) :: c > integer :: i > do i = 1, size(a) > print*, a(i) > end do > > end subroutine print_string > > My Python code: > > from test import * > from numpy import * > > a = "this is the test string." > a = a.split() > > b = a > > a = char.array(a, itemsize=1, order = 'Fortran') > > print_string(a, len(a)) #this is imported from the compiled Fortran code > > ------------------------------------------------------------------------ > > _______________________________________________ > Numpy-discussion mailing list > Numpy-discussion at scipy.org > http://mail.scipy.org/mailman/listinfo/numpy-discussion
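PS: One untested guess, since the error complains that dtype 'S' is not compatible with 'c': try building the array out of single characters instead of fixed-width strings. I don't remember the exact shape and memory order f2py wants here, so treat this only as a direction to poke at:

    import numpy as np
    from test import print_string  # the f2py-wrapped routine from the original message

    words = "this is the test string.".split()
    n = len(words)
    a = np.zeros((n, 255), dtype='c')  # dtype 'c': one character per element
    a[:] = ' '                         # blank-pad, as Fortran strings are
    for i, w in enumerate(words):
        a[i, :len(w)] = list(w)
    print_string(np.asfortranarray(a), n)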
From jens.rantil at telia.com Mon Mar 23 09:36:22 2009 From: jens.rantil at telia.com (Jens Rantil) Date: Mon, 23 Mar 2009 14:36:22 +0100 (CET) Subject: [Numpy-discussion] numpy.ctypeslib.ndpointer and the restype attribute Message-ID: <6183458.298391237815382345.JavaMail.tomcat@pne-ps1-sn2>
Hi all, So I have a C-function in a DLL loaded through ctypes. This particular function returns a pointer to a double. In fact I know that this pointer points to the first element in an array of, say for simplicity, 200 elements. How do I convert this pointer to a NumPy array that uses this data (ie. no copy of data in memory)? I am able to create a numpy array using a copy of the data. I have tried using the 'numpy.ctypeslib.ndpointer' but so far failed. In its documentation it claims it should be possible to use for not only the argtypes attribute, but also restype. I have not found a single example of this on the web, and I wonder how this is done. As I see it, it would have to use the errcheck attribute to return an ndarray and not just restype. My latest trial was:
>>> import ctypes
>>> pointer = DLL.my_func()
>>> ctypes_arr_type = C.POINTER(200 * ctypes.c_double)
>>> ctypes_arr = ctypes_arr_type(pointer)
>>> narray = N.ctypeslib.as_array(ctypes_arr)
however this didn't work. Any hints would be appreciated. Thanks, Jens
From sturla at molden.no Mon Mar 23 10:40:42 2009 From: sturla at molden.no (Sturla Molden) Date: Mon, 23 Mar 2009 15:40:42 +0100 Subject: [Numpy-discussion] numpy.ctypeslib.ndpointer and the restype attribute In-Reply-To: <6183458.298391237815382345.JavaMail.tomcat@pne-ps1-sn2> References: <6183458.298391237815382345.JavaMail.tomcat@pne-ps1-sn2> Message-ID: <49C79F6A.1050204@molden.no>
Jens Rantil wrote: > Hi all, > > So I have a C-function in a DLL loaded through ctypes. This particular > function returns a pointer to a double. In fact I know that this > pointer points to the first element in an array of, say for simplicity, > 200 elements. > > How do I convert this pointer to a NumPy array that uses this data (ie. > no copy of data in memory)? I am able to create a numpy array using a > copy of the data. >

    def fromaddress(address, nbytes, dtype=double):
        class Dummy(object): pass
        d = Dummy()
        d.__array_interface__ = {
            'data' : (address, False),      # False -> buffer is writable
            'typestr' : numpy.dtype(numpy.uint8).str,
            'descr' : numpy.dtype(numpy.uint8).descr,
            'shape' : (nbytes,),
            'strides' : None,
            'version' : 3
            }
        return numpy.asarray(d).view( dtype=dtype )
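For example, usage could look roughly like this (untested; the DLL and function names are made up):

    import ctypes
    import numpy

    mylib = ctypes.CDLL('mylib.dll')  # hypothetical DLL
    mylib.my_func.restype = ctypes.POINTER(ctypes.c_double)

    ptr = mylib.my_func()
    address = ctypes.cast(ptr, ctypes.c_void_p).value  # raw address as an int
    nbytes = 200 * ctypes.sizeof(ctypes.c_double)      # you said 200 doubles
    a = fromaddress(address, nbytes, dtype=numpy.double)
    # 'a' is now a 200-element float64 view of the DLL's buffer, no copy made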
From rpyle at post.harvard.edu Mon Mar 23 14:34:34 2009 From: rpyle at post.harvard.edu (Robert Pyle) Date: Mon, 23 Mar 2009 14:34:34 -0400 Subject: [Numpy-discussion] OS X PPC problem with Numpy 1.3.0b1 In-Reply-To: References: <5b8d13220903181943u32c3604dtbdbbae89ad687c3e@mail.gmail.com> <68E46E50-2CFA-4755-B890-11096A67CE45@post.harvard.edu> <8F39D662-5696-4511-B2D2-27678D88BB41@post.harvard.edu> <66DBF91F-BDD0-45C0-84D3-C3152689781E@post.harvard.edu> Message-ID: <2BF17180-D64C-432B-9113-F67F4F783DF3@post.harvard.edu>
Hi all, This is a continuation of something I started last week, but with a more appropriate subject line. To recap, my machine is a dual G5 running OS X 10.5.6, my python is Python 2.5.2 |EPD Py25 4.1.30101| (r252:60911, Dec 19 2008, 15:28:32) and numpy 1.3.0b1 was installed from the source tarball in the straightforward way with sudo python setup.py install
On Mar 19, 2009, at 3:46 PM, Charles R Harris wrote: > On Mar 19, 2009, at 1:38 PM, Pauli Virtanen wrote: > > Thanks for tracking this! Can you check what your platform gives for: > > > import numpy as np > > info = np.finfo(np.longcomplex) > > print "eps:", info.eps, info.eps.dtype > > print "tiny:", info.tiny, info.tiny.dtype > > print "log10:", np.log10(info.tiny), np.log10(info.tiny/info.eps) > > eps: 1.3817869701e-76 float128 > tiny: -1.08420217274e-19 float128 > log10: nan nan > > The log of a negative number is nan, so part of the problem is the > value of tiny. The size of the values also look suspect to me. On my > machine > > In [8]: finfo(longcomplex).eps > Out[8]: 1.084202172485504434e-19 > > In [9]: finfo(float128).tiny > Out[9]: array(3.3621031431120935063e-4932, dtype=float128) > > So at a minimum eps and tiny are reversed. > > I started to look at the code for this but my eyes rolled up in my > head and I passed out. It could use some improvements... > > Chuck
I have chased this a bit (or perhaps 128 bits) further. The problem seems to be that float128 is screwed up in general. I tracked the test error back to lines 95-107 in /PyModules/numpy-1.3.0b1/build/lib.macosx-10.3-ppc-2.5/numpy/lib/machar.py Here is a short program built from these lines that demonstrates what I believe to be at the root of the test failure.
######################################
#! /usr/bin/env python

import numpy as np
import binascii as b

def t(type="float"):
    max_iterN = 10000
    print "\ntesting %s" % type
    a = np.array([1.0],type)
    one = a
    zero = one - one
    for _ in xrange(max_iterN):
        a = a + a
        temp = a + one
        temp1 = temp - a
        print _+1, b.b2a_hex(temp[0]), temp1
        if any(temp1 - one != zero):
            break
    return

if __name__ == '__main__':
    t(np.float32)
    t(np.float64)
    t(np.float128)
######################################
This tries to find the number of bits in the significand by calculating ((2.0**n)+1.0) for increasing n, and stopping when the sum is indistinguishable from (2.0**n), that is, when the added 1.0 has fallen off the bottom of the significand. My print statement shows the power of 2.0, the hex representation of ((2.0**n)+1.0), and the difference ((2.0**n)+1.0) - (2.0**n), which one expects to be 1.0 up to the point where the added 1.0 is lost. Here are the last few lines printed for float32:
19 49000010 [ 1.]
20 49800008 [ 1.]
21 4a000004 [ 1.]
22 4a800002 [ 1.]
23 4b000001 [ 1.]
24 4b800000 [ 0.]
You can see the added 1.0 marching to the right and off the edge at 24 bits. Similarly, for float64:
48 42f0000000000010 [ 1.]
49 4300000000000008 [ 1.]
50 4310000000000004 [ 1.]
51 4320000000000002 [ 1.]
52 4330000000000001 [ 1.]
53 4340000000000000 [ 0.]
There are 53 bits, just as IEEE 754 would lead us to hope. However, for float128:
48 42f00000000000100000000000000000 [1.0]
49 43000000000000080000000000000000 [1.0]
50 43100000000000040000000000000000 [1.0]
51 43200000000000020000000000000000 [1.0]
52 43300000000000010000000000000000 [1.0]
53 43400000000000003ff0000000000000 [1.0]
54 43500000000000003ff0000000000000 [1.0]
Something weird happens as we pass 53 bits. I think lines 53 and 54 *should* be
53 43400000000000008000000000000000 [1.0]
54 43500000000000004000000000000000 [1.0]
etc., with the added 1.0 continuing to march rightwards to extinction, as before. The calculation eventually terminates with
1022 7fd00000000000003ff0000000000000 [1.0]
1023 7fe00000000000003ff0000000000000 [1.0]
1024 7ff00000000000000000000000000000 [NaN]
(7ff00000000000000000000000000000 == Inf). This totally messes up the remaining parts of machar.py, and leaves us with Infs and Nans that give the logs of negative numbers, etc. that we saw last week. But wait, there's more! I also have an Intel Mac (a MacBook Pro). This passes numpy.test(), but when I look in detail with the above code, I find that the float128 significand has only 64 bits, leading me to suspect that it is really the so-called 80-bit "extended precision". Is this true? If so, should there be such large differences between architectures for the same nominal precision? Is float128 just whatever the underlying C compiler thinks a "long double" is?
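(For anyone who wants to compare another machine, here is a quick check that only assumes numpy; it counts significand bits from below instead of from above:

    import numpy as np

    ld = np.longdouble
    one = ld(1)
    eps = one
    bits = 0
    while one + eps != one:   # halve until the added bit falls off the bottom
        eps = eps / 2
        bits += 1
    print "storage itemsize:", np.dtype(ld).itemsize, "bytes"
    print "significand bits:", bits

I'd expect roughly 53 for plain double, 64 for x87 extended, 113 for true quad; the PPC double-double scheme can give a strange answer here, which is presumably part of what trips up machar.)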
I've spent way too much time on this, so I'm going to bow out here (unless someone can suggest something for me to try that won't take too much time). Bob From charlie.xia.fdu at gmail.com Mon Mar 23 14:52:30 2009 From: charlie.xia.fdu at gmail.com (charlie) Date: Mon, 23 Mar 2009 11:52:30 -0700 Subject: [Numpy-discussion] numpy installation with nonroot python installation In-Reply-To: <11c6cf4e0903230048s6e1fd59dw1989c04480b97362@mail.gmail.com> References: <11c6cf4e0903230043q4ea395fbpc563d6dc9a541442@mail.gmail.com> <11c6cf4e0903230048s6e1fd59dw1989c04480b97362@mail.gmail.com> Message-ID: <11c6cf4e0903231152t74662881o8bee2c0d73af82f9@mail.gmail.com> Alright, I solved this by using numscons. On Mon, Mar 23, 2009 at 12:48 AM, charlie wrote: > Dear numpyers, > > I am trying to install numpy 1.3 with my own version of python 2.5. I got > stuck with following error: > *building 'numpy.core.multiarray' extension > compiling C sources > C compiler: gcc -pthread -fno-strict-aliasing -DNDEBUG -g -fwrapv -O3 -Wall > -Wstrict-prototypes -fPIC > > creating build/temp.linux-x86_64-2.5 > creating build/temp.linux-x86_64-2.5/numpy > creating build/temp.linux-x86_64-2.5/numpy/core > creating build/temp.linux-x86_64-2.5/numpy/core/src > compile options: '-Ibuild/src.linux-x86_64-2.5/numpy/core/src > -Inumpy/core/include -Ibuild/src.linux-x86_64-2.5/numpy/core/include/numpy > -Inumpy/core/src -Inumpy/core/include > -I/home/cmb-01/lxia/usr/include/python2.5 -c' > gcc: numpy/core/src/multiarraymodule.c > gcc -pthread -shared > build/temp.linux-x86_64-2.5/numpy/core/src/multiarraymodule.o -L. -lm -lm > -lpython2.5 -o build/lib.linux-x86_64-2.5/numpy/core/multiarray.so > /usr/bin/ld: cannot find -lpython2.5 > collect2: ld returned 1 exit status > /usr/bin/ld: cannot find -lpython2.5 > collect2: ld returned 1 exit status > error: Command "gcc -pthread -shared > build/temp.linux-x86_64-2.5/numpy/core/src/multiarraymodule.o -L. -lm -lm > -lpython2.5 -o build/lib.linux-x86_64-2.5/numpy/core/multiarray.so" failed > with exit status 1* > > I guess it is because the ld can find the libpython2.5.so; So i tried > following methods: > 1. $export LD_LIBRARY_PATH = $HOME/usr/lib # where my libpython2.5.so is > in > 2. edited the site.cfg file so that: > [DEFAULT] > library_dirs = ~/usr/lib > include_dirs = ~/usr/include > search_static_first = false > > Both methods don't work. But when I remove the -lpython2.5 flags from the > compiling command, the command go through without problem. But I don know > where to remove this flag in the numpy package. I ran out choice now and > thus I want to get help from you. Thanks for any advice. > > Charlie > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From charlesr.harris at gmail.com Mon Mar 23 15:22:29 2009 From: charlesr.harris at gmail.com (Charles R Harris) Date: Mon, 23 Mar 2009 13:22:29 -0600 Subject: [Numpy-discussion] OS X PPC problem with Numpy 1.3.0b1 In-Reply-To: <2BF17180-D64C-432B-9113-F67F4F783DF3@post.harvard.edu> References: <5b8d13220903181943u32c3604dtbdbbae89ad687c3e@mail.gmail.com> <68E46E50-2CFA-4755-B890-11096A67CE45@post.harvard.edu> <8F39D662-5696-4511-B2D2-27678D88BB41@post.harvard.edu> <66DBF91F-BDD0-45C0-84D3-C3152689781E@post.harvard.edu> <2BF17180-D64C-432B-9113-F67F4F783DF3@post.harvard.edu> Message-ID: On Mon, Mar 23, 2009 at 12:34 PM, Robert Pyle wrote: > Hi all, > > This is a continuation of something I started last week, but with a > more appropriate subject line. 
> > To recap, my machine is a dual G5 running OS X 10.5.6, my python is > > Python 2.5.2 |EPD Py25 4.1.30101| (r252:60911, Dec 19 2008, > 15:28:32) > > and numpy 1.3.0b1 was installed from the source tarball in the > straightforward way with > > sudo python setup.py install > > > On Mar 19, 2009, at 3:46 PM, Charles R Harris wrote: > > > On Mar 19, 2009, at 1:38 PM, Pauli Virtanen wrote: > > > > Thanks for tracking this! Can you check what your platform gives for: > > > > > import numpy as np > > > info = np.finfo(np.longcomplex) > > > print "eps:", info.eps, info.eps.dtype > > > print "tiny:", info.tiny, info.tiny.dtype > > > print "log10:", np.log10(info.tiny), np.log10(info.tiny/info.eps) > > > > eps: 1.3817869701e-76 float128 > > tiny: -1.08420217274e-19 float128 > > log10: nan nan > > > > The log of a negative number is nan, so part of the problem is the > > value of tiny. The size of the values also look suspect to me. On my > > machine > > > > In [8]: finfo(longcomplex).eps > > Out[8]: 1.084202172485504434e-19 > > > > In [9]: finfo(float128).tiny > > Out[9]: array(3.3621031431120935063e-4932, dtype=float128) > > > > So at a minimum eps and tiny are reversed. > > > > I started to look at the code for this but my eyes rolled up in my > > head and I passed out. It could use some improvements... > > > > Chuck > > I have chased this a bit (or perhaps 128 bits) further. > > The problem seems to be that float128 is screwed up in general. I > tracked the test error back to lines 95-107 in > > /PyModules/numpy-1.3.0b1/build/lib.macosx-10.3-ppc-2.5/numpy/lib/ > machar.py > > Here is a short program built from these lines that demonstrates what > I believe to be at the root of the test failure. > > ###################################### > #! /usr/bin/env python > > import numpy as np > import binascii as b > > def t(type="float"): > max_iterN = 10000 > print "\ntesting %s" % type > a = np.array([1.0],type) > one = a > zero = one - one > for _ in xrange(max_iterN): > a = a + a > temp = a + one > temp1 = temp - a > print _+1, b.b2a_hex(temp[0]), temp1 > if any(temp1 - one != zero): > break > return > > if __name__ == '__main__': > t(np.float32) > t(np.float64) > t(np.float128) > > ###################################### > > This tries to find the number of bits in the significand by > calculating ((2.0**n)+1.0) for increasing n, and stopping when the sum > is indistinguishable from (2.0**n), that is, when the added 1.0 has > fallen off the bottom of the significand. > > My print statement shows the power of 2.0, the hex representation of > ((2.0**n)+1.0), and the difference ((2.0**n)+1.0) - (2.0**n), which > one expects to be 1.0 up to the point where the added 1.0 is lost. > > Here are the last few lines printed for float32: > > 19 49000010 [ 1.] > 20 49800008 [ 1.] > 21 4a000004 [ 1.] > 22 4a800002 [ 1.] > 23 4b000001 [ 1.] > 24 4b800000 [ 0.] > > You can see the added 1.0 marching to the right and off the edge at 24 > bits. > > Similarly, for float64: > > 48 42f0000000000010 [ 1.] > 49 4300000000000008 [ 1.] > 50 4310000000000004 [ 1.] > 51 4320000000000002 [ 1.] > 52 4330000000000001 [ 1.] > 53 4340000000000000 [ 0.] > > There are 53 bits, just as IEEE 754 would lead us to hope. 
> > However, for float128:
> >
> > 48 42f00000000000100000000000000000 [1.0]
> > 49 43000000000000080000000000000000 [1.0]
> > 50 43100000000000040000000000000000 [1.0]
> > 51 43200000000000020000000000000000 [1.0]
> > 52 43300000000000010000000000000000 [1.0]
> > 53 43400000000000003ff0000000000000 [1.0]
> > 54 43500000000000003ff0000000000000 [1.0]
> >
> > Something weird happens as we pass 53 bits. I think lines 53 and 54 > *should* be >
PPC stores long doubles as two doubles. I don't recall exactly how the two are used, but the result is that the numbers aren't in the form you would expect. Long doubles on the PPC have always been iffy, so it is no surprise that machar fails. The failure on SPARC quad precision bothers me more. I think the easy thing to do for the 1.3 release is to fix the precision test to use a hardwired range of values, I don't think testing the extreme small values is necessary to check the power series expansion. But I have been leaving that fixup to Pauli. Longer term, I think the values in finfo could come from npy_cpu.h and be hardwired in. We only support ieee floats and I don't think it should be difficult to track extended precision (current intel) vs quad precision (SPARC). Although at some point I expect intel will also go to quad precision and then things might get sticky. Hmm..., I wonder what some of the other supported architectures do? Anyhow, PPC is an exception in the way it treats long doubles and I'm not even sure it hasn't changed in some of the more recent models.
> > 53 43400000000000008000000000000000 [1.0]
> > 54 43500000000000004000000000000000 [1.0]
> >
> > etc., with the added 1.0 continuing to march rightwards to extinction, > as before.
> >
> > The calculation eventually terminates with
> >
> > 1022 7fd00000000000003ff0000000000000 [1.0]
> > 1023 7fe00000000000003ff0000000000000 [1.0]
> > 1024 7ff00000000000000000000000000000 [NaN]
> >
> > (7ff00000000000000000000000000000 == Inf).
Interesting.
> > This totally messes up the remaining parts of machar.py, and leaves us > with Infs and Nans that give the logs of negative numbers, etc. that > we saw last week. > > But wait, there's more! I also have an Intel Mac (a MacBook Pro). > This passes numpy.test(), but when I look in detail with the above > code, I find that the float128 significand has only 64 bits, leading > me to suspect that it is really the so-called 80-bit "extended > precision". >
That's right. On 32 bit machines extended precision is stored in 96 bits (3*32) for alignment purposes. On 64 bit machines it is stored in 128 bits (2*64).
> > Is this true? If so, should there be such large differences between > architectures for the same nominal precision? Is float128 just > whatever the underlying C compiler thinks a "long double" is? >
Yes. I've raised the question of using something more explicit than bit length to track extended precision, not least because of the quad precision mix up. But we can't really do anything unless C supports it.
> > I've spent way too much time on this, so I'm going to bow out here > (unless someone can suggest something for me to try that won't take > too much time). >
Thanks for the effort you have made. It helps. Chuck -------------- next part -------------- An HTML attachment was scrubbed...
URL:
From pav at iki.fi Mon Mar 23 15:55:17 2009 From: pav at iki.fi (Pauli Virtanen) Date: Mon, 23 Mar 2009 19:55:17 +0000 (UTC) Subject: [Numpy-discussion] OS X PPC problem with Numpy 1.3.0b1 References: <5b8d13220903181943u32c3604dtbdbbae89ad687c3e@mail.gmail.com> <68E46E50-2CFA-4755-B890-11096A67CE45@post.harvard.edu> <8F39D662-5696-4511-B2D2-27678D88BB41@post.harvard.edu> <66DBF91F-BDD0-45C0-84D3-C3152689781E@post.harvard.edu> <2BF17180-D64C-432B-9113-F67F4F783DF3@post.harvard.edu> Message-ID:
Mon, 23 Mar 2009 13:22:29 -0600, Charles R Harris wrote: [clip] > PPC stores long doubles as two doubles. I don't recall exactly how the > two are used, but the result is that the numbers aren't in the form you > would expect. Long doubles on the PPC have always been iffy, so it is no > surprise that machar fails. The failure on SPARC quad precision bothers > me more. The test fails on SPARC, since we need one term more in the Horner series to reach quad precision accuracy. I'll add that for long doubles. > I think the easy thing to do for the 1.3 release is to fix the precision > test to use a hardwired range of values, I don't think testing the > extreme small values is necessary to check the power series expansion. > But I have been leaving that fixup to Pauli. I'll do just that. The test is overly strict.
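(Back of the envelope, as a sketch of why one more term is needed -- approximating the remainder of the series by its first dropped term; the eps values and test point are only illustrative:

    # remainder after n terms at point x is roughly x**(n+1)
    for name, eps in [("double", 2.2e-16), ("quad", 1.9e-34)]:
        x, n = 1e-3, 1
        while x ** (n + 1) > eps:
            n += 1
        print name, "needs about", n, "terms at x = 1e-3"

The quad case clearly wants extra terms before the remainder drops below its epsilon.)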
-- Pauli Virtanen
From bsouthey at gmail.com Mon Mar 23 16:03:09 2009 From: bsouthey at gmail.com (Bruce Southey) Date: Mon, 23 Mar 2009 15:03:09 -0500 Subject: [Numpy-discussion] OS X PPC problem with Numpy 1.3.0b1 In-Reply-To: References: <5b8d13220903181943u32c3604dtbdbbae89ad687c3e@mail.gmail.com> <68E46E50-2CFA-4755-B890-11096A67CE45@post.harvard.edu> <8F39D662-5696-4511-B2D2-27678D88BB41@post.harvard.edu> <66DBF91F-BDD0-45C0-84D3-C3152689781E@post.harvard.edu> <2BF17180-D64C-432B-9113-F67F4F783DF3@post.harvard.edu> Message-ID: <49C7EAFD.1010405@gmail.com>
Pauli Virtanen wrote: > Mon, 23 Mar 2009 13:22:29 -0600, Charles R Harris wrote: > [clip] > >> PPC stores long doubles as two doubles. I don't recall exactly how the >> two are used, but the result is that the numbers aren't in the form you >> would expect. Long doubles on the PPC have always been iffy, so it is no >> surprise that machar fails. The failure on SPARC quad precision bothers >> me more. >> > > The test fails on SPARC, since we need one term more in the Horner series > to reach quad precision accuracy. I'll add that for long doubles. > > >> I think the easy thing to do for the 1.3 release is to fix the precision >> test to use a hardwired range of values, I don't think testing the >> extreme small values is necessary to check the power series expansion. >> But I have been leaving that fixup to Pauli. >> > > I'll do just that. The test is overly strict. > > I do not know if this is related, but I got a similar error with David's windows 64 bits installer on my 64 bit Vista system. http://mail.scipy.org/pipermail/numpy-discussion/2009-March/041282.html In particular this code crashes:
>>> import numpy as np
>>> info = np.finfo(np.longcomplex)
From the Windows Problem signature: Fault Module Name: umath.pyd Bruce
From cycomanic at gmail.com Mon Mar 23 16:03:48 2009 From: cycomanic at gmail.com (Jochen Schroeder) Date: Tue, 24 Mar 2009 09:03:48 +1300 Subject: [Numpy-discussion] numpy.ctypeslib.ndpointer and the restype attribute In-Reply-To: <49C79F6A.1050204@molden.no> References: <6183458.298391237815382345.JavaMail.tomcat@pne-ps1-sn2> <49C79F6A.1050204@molden.no> Message-ID: <20090323200347.GB4035@jochen.schroeder.phy.auckland.ac.nz>
On 23/03/09 15:40, Sturla Molden wrote: > Jens Rantil wrote: > > Hi all, > > > > So I have a C-function in a DLL loaded through ctypes. This particular > > function returns a pointer to a double. In fact I know that this > > pointer points to the first element in an array of, say for simplicity, > > 200 elements. > > > > How do I convert this pointer to a NumPy array that uses this data (ie. > > no copy of data in memory)? I am able to create a numpy array using a > > copy of the data. > > > def fromaddress(address, nbytes, dtype=double): > > class Dummy(object): pass > > d = Dummy() > > d.__array_interface__ = { > > 'data' : (address, False), > > 'typestr' : numpy.dtype(numpy.uint8).str, > > 'descr' : numpy.dtype(numpy.uint8).descr, > > 'shape' : (nbytes,), > > 'strides' : None, > > 'version' : 3 > > } > > return numpy.asarray(d).view( dtype=dtype ) > Might I suggest that restype be removed from the documentation? It also cost me quite some time trying to get ndpointer to work with restype when I first tried it, until I finally came to the conclusion that an approach like the one above is necessary and that ndpointer does not work with restype. Cheers Jochen
From pav at iki.fi Mon Mar 23 16:17:16 2009 From: pav at iki.fi (Pauli Virtanen) Date: Mon, 23 Mar 2009 20:17:16 +0000 (UTC) Subject: [Numpy-discussion] OS X PPC problem with Numpy 1.3.0b1 References: <5b8d13220903181943u32c3604dtbdbbae89ad687c3e@mail.gmail.com> <68E46E50-2CFA-4755-B890-11096A67CE45@post.harvard.edu> <8F39D662-5696-4511-B2D2-27678D88BB41@post.harvard.edu> <66DBF91F-BDD0-45C0-84D3-C3152689781E@post.harvard.edu> <2BF17180-D64C-432B-9113-F67F4F783DF3@post.harvard.edu> <49C7EAFD.1010405@gmail.com> Message-ID:
Mon, 23 Mar 2009 15:03:09 -0500, Bruce Southey wrote: [clip] > I do not know if this is related, but I got a similar error with David's > windows 64 bits installer on my 64 bit Vista system. > http://mail.scipy.org/pipermail/numpy-discussion/2009-March/041282.html > > In particular this code crashes: > >>> import numpy as np > >>> info = np.finfo(np.longcomplex) Could you narrow that down a bit: do
import numpy as np
z = np.longcomplex(complex(1.,1.))
z + z
z - z
z * z
z / z
z + 2
z - 2
z * 2
z / 2
z**0
z**1
z**2
z**3
z**4
z**4.5
z**(-1)
z**(-2)
z**101
Do you get a crash at some point? -- Pauli Virtanen
From rpyle at post.harvard.edu Mon Mar 23 16:22:30 2009 From: rpyle at post.harvard.edu (Robert Pyle) Date: Mon, 23 Mar 2009 16:22:30 -0400 Subject: [Numpy-discussion] OS X PPC problem with Numpy 1.3.0b1 In-Reply-To: References: <5b8d13220903181943u32c3604dtbdbbae89ad687c3e@mail.gmail.com> <8F39D662-5696-4511-B2D2-27678D88BB41@post.harvard.edu> <66DBF91F-BDD0-45C0-84D3-C3152689781E@post.harvard.edu> <2BF17180-D64C-432B-9113-F67F4F783DF3@post.harvard.edu> Message-ID:
> PPC stores long doubles as two doubles.
I don't recall exactly how > the two are used, but the result is that the numbers aren't in the > form you would expect. Long doubles on the PPC have always been > iffy, so it is no surprise that machar fails. The failure on SPARC > quad precision bothers me more. Ah, now I see. A little more googling and I find that the PPC long double value is just the sum of the two halves, each looking like a double on its own. That brought back a distant memory! The DEC-20 used a similar scheme. Conversion from double to single precision floating point was as simple as adding the two halves. Now this at most changes the least-significant bit of the upper half. Sometime around 1970, I wrote something in DEC-20 assembler that accumulated in double precision, but returned a single-precision result. In order to insure that I understood the double-precision floating-point format, I wrote a trivial Fortran program to test the conversion from double to single precision. The Fortran program set the LSB of the more-significant half seemingly at random, with no apparent relation to the actual value of the less-significant half. More digging with dumps from the Fortran compiler showed that its authors had not understood the double-precision FP format at all. It took quite a few phone calls to DEC before they believed it, but they did fix it about two or three months later. Bob From sturla at molden.no Mon Mar 23 16:26:38 2009 From: sturla at molden.no (Sturla Molden) Date: Mon, 23 Mar 2009 21:26:38 +0100 Subject: [Numpy-discussion] numpy.ctypeslib.ndpointer and the restype attribute In-Reply-To: <49C79F6A.1050204@molden.no> References: <6183458.298391237815382345.JavaMail.tomcat@pne-ps1-sn2> <49C79F6A.1050204@molden.no> Message-ID: <49C7F07E.7010100@molden.no> Sturla Molden wrote: >> def fromaddress(address, nbytes, dtype=double): I guess dtype=float works better... S.M. From charlesr.harris at gmail.com Mon Mar 23 16:42:09 2009 From: charlesr.harris at gmail.com (Charles R Harris) Date: Mon, 23 Mar 2009 14:42:09 -0600 Subject: [Numpy-discussion] OS X PPC problem with Numpy 1.3.0b1 In-Reply-To: References: <5b8d13220903181943u32c3604dtbdbbae89ad687c3e@mail.gmail.com> <8F39D662-5696-4511-B2D2-27678D88BB41@post.harvard.edu> <66DBF91F-BDD0-45C0-84D3-C3152689781E@post.harvard.edu> <2BF17180-D64C-432B-9113-F67F4F783DF3@post.harvard.edu> Message-ID: On Mon, Mar 23, 2009 at 2:22 PM, Robert Pyle wrote: > > PPC stores long doubles as two doubles. I don't recall exactly how > > the two are used, but the result is that the numbers aren't in the > > form you would expect. Long doubles on the PPC have always been > > iffy, so it is no surprise that machar fails. The failure on SPARC > > quad precision bothers me more. > > Ah, now I see. A little more googling and I find that the PPC long > double value is just the sum of the two halves, each looking like a > double on its own. > > That brought back a distant memory! The DEC-20 used a similar > scheme. Conversion from double to single precision floating point was > as simple as adding the two halves. Now this at most changes the > least-significant bit of the upper half. Sometime around 1970, I > wrote something in DEC-20 assembler that accumulated in double > precision, but returned a single-precision result. In order to insure > that I understood the double-precision floating-point format, I wrote > a trivial Fortran program to test the conversion from double to single > precision. 
The Fortran program set the LSB of the more-significant > half seemingly at random, with no apparent relation to the actual > value of the less-significant half. > > More digging with dumps from the Fortran compiler showed that its > authors had not understood the double-precision FP format at all. It > took quite a few phone calls to DEC before they believed it, but they > did fix it about two or three months later.
I wonder if you could get the same service today? Making even one phone call can be a long term project calling for a plate of cheese and fruit and a bottle of wine... Chuck -------------- next part -------------- An HTML attachment was scrubbed... URL:
From dpeterson at enthought.com Mon Mar 23 16:46:40 2009 From: dpeterson at enthought.com (Dave Peterson) Date: Mon, 23 Mar 2009 15:46:40 -0500 Subject: [Numpy-discussion] ANNOUNCE: ETS 3.2.0 Released Message-ID: <49C7F530.5020700@enthought.com>
Hello, I'm pleased to announce that Enthought Tool Suite (ETS) version 3.2.0 has been tagged and released! Source distributions (.tar.gz) have been uploaded to PyPi, and Windows binaries will follow shortly. A full install of ETS can be done using Setuptools via a command like:
easy_install -U "ets[nonets] >= 3.2.0"
NOTE 1: Users of an old ETS release will need to first uninstall prior to installing the new ETS.
NOTE 2: If you get a 'SandboxViolation' error, simply re-run the command again -- it may take multiple invocations to get everything installed. (This error appears to be a long-standing incompatibility between numpy.distutils and setuptools.)
Please see below for a list of what's new in this release.
What Is ETS?
===========
The Enthought Tool Suite (ETS) is a collection of components developed by Enthought and the open-source community, which we use every day to construct custom scientific applications. It includes a wide variety of components, including:
* an extensible application framework
* application building blocks
* 2-D and 3-D graphics libraries
* scientific and math libraries
* developer tools
The cornerstone on which these tools rest is the Traits package, which provides explicit type declarations in Python; its features include initialization, validation, delegation, notification, and visualization of typed attributes. More information on ETS is available from the development home page: http://code.enthought.com/projects/index.php
Changelog
=========
ETS 3.2.0 is a feature-added update to ETS 3.1.0, including numerous bug-fixes. Some of the notable changes include:
Chaco
-----
* Domain limits - Mappers now can declare the "limits" of their valid domain. PanTool and ZoomTool respect these limits. (pwang)
* Adding "hide_grids" parameter to Plot.img_plot() and Plot.contour_plot() so users can override the default behavior of hiding grids. (pwang)
* Refactored examples to declare a Demo object so they can be run with the demo.py example launcher. (vibha)
* Adding chaco.overlays package with some canned SVG overlays. (bhendrix)
* DragZoom now can scale both X and Y axes independently corresponding to the mouse cursor motion along the X and Y axes (similar to the zoom behavior in Matplotlib). (pwang)
* New Examples:
  * world map (bhendrix)
  * more financial plots (pwang)
  * scatter_toggle (pwang)
  * stacked_axis (pwang)
* Fixing the chaco.scales TimeFormatter to use the built-in localtime() instead of the one in the safetime.py module due to Daylight Savings Time issues with timedelta.
(r23231, pwang)
* Improved behavior of ScatterPlot when it doesn't get the type of metadata it expects in its "selections" and "selection_masks" metadata keys (r23121, pwang)
* Setting the .range2d attribute on GridMapper now properly sets the two DataRange1D instances of its sub-mappers. (r23119, pwang)
* ScatterPlot.map_index() now respects the index_only flag (r23060, pwang)
* Fixed occasional traceback/bug in LinePlot that occurred when data was completely outside the visible range (r23059, pwang)
* Implementing is_in() on legends to account for padding and alignment (caused by tools that move the legend) (r23052, bhendrix)
* Legend behaves properly when there are no plots to display (r23012, judah)
* Fixed LogScale in the chaco.scales package to correctly handle the case when the length of the interval is less than a decade (r22907, warren.weckesser)
* Fixed traceback when calling copy_traits() on a DataView (r22894, vibha)
* Scatter plots generated by Plot.plot() now properly use the "auto" coloring feature of Plot. (r22727, pwang)
* Reduced the size of screenshots in the user manual. (r22720, rkern)
Mayavi
------
* 17, 18 March, 2009 (PR):
  * NEW: A simple example to show how one can use TVTK's visual module with mlab. [23250]
  * BUG: The size trait was being overridden and was different from the parent causing a bug with resizing the viewer. [23243]
* 15 March, 2009 (GV):
  * ENH: Add a volume factory to mlab that knows how to set color, vmin and vmax for the volume module [23221].
* 14 March, 2009 (PR):
  * API/TEST: Added a new testing entry point: 'mayavi -t' now runs tests in separate process, for isolation. Added enthought.mayavi.api.test to allow for simple testing from the interpreter [23195]...[23200], [23213], [23214], [23223].
  * BUG: The volume module was directly importing the wx_gradient_editor leading to an import error when no wxPython is available. This has been tested and fixed. Thanks to Christoph Bohme for reporting this issue. [23191]
* 14 March, 2009 (GV):
  * BUG: [mlab]: fix positioning for titles [23194], and opacity for titles and text [23193].
  * ENH: Add the mlab_source attribute on all objects created by mlab, when possible [23201], [23209].
  * ENH: Add a message to help the first-time user, using the new banner feature of the IPython shell view [23208].
* 13 March, 2009 (PR):
  * NEW/API: Adding a powerful TCP/UDP server for scripting mayavi via the network. This is available in enthought.mayavi.tools.server and is fully documented. It uses twisted and currently only works with wxPython. It is completely insecure though since it allows a remote user to do practically anything from mayavi.
* 13 March, 2009 (GV)
  * API: rename mlab.orientationaxes to mlab.orientation_axes [23184]
* 11 March, 2009 (GV)
  * API: Expose 'traverse' in mlab.pipeline [23181]
* 10 March, 2009 (PR)
  * BUG: Fixed a subtle bug that affected the ImagePlaneWidget. This happened because the scalar_type of the output data from the VTKDataSource was not being set correctly. Getting the range of any input scalars also seems to silence warnings from VTK. This should hopefully fix issues with the use of the IPW with multiple scalars. I've added two tests for this, one is an integration test since those errors really show up only when the display is used. The other is a traditional unittest.
[23166]
* 08 March, 2009 (GV)
  * ENH: Raises an error when the user passes to mlab an array with infinite values [23150]
* 07 March, 2009 (PR)
  * BUG: A subtle bug with a really gross error in the GridPlane component, I was using the extents when I should really have been looking at the dimensions. The extract grid filter was also not flushing the data changes downstream leading to errors that are also fixed now. These errors would manifest when you use an ExtractGrid to select a VOI or a sample rate and then used a grid plane down stream causing very weird and incorrect rendering of the grid plane (thanks to conflation of extents and dimensions). This bug was seen at NAL for a while and also reported by Fred with a nice CME. The CME was then converted to a nice unittest by Suyog and then improved. Thanks to them all. [23146]
* 28 February, 2009 (PR)
  * BUG: Fixed some issues reported by Ondrej Certik regarding the use of mlab.options.offscreen, mlab.options.backend = 'test', removed cruft from earlier 'null' backend, fixed bug with incorrect imports, add_dataset set no longer adds one new null engine each time figure=False is passed, added test case for the options.backend test. [23088]
* 23 February, 2009 (PR)
  * ENH: Updating show so that it supports a stop keyword argument that pops up a little UI that lets the user stop the mainloop temporarily and continue using Python [23049]
* 21 February, 2009 (GV)
  * ENH: Add a richer view for the pipeline to the MayaviScene [23035]
  * ENH: Add safeguards to capture wrong triangle array sizes in mlab.triangular_mesh_source. [23037]
* 21 February, 2009 (PR)
  * ENH: Making the transform data filter recordable. [23033]
  * NEW: A simple animator class to make it relatively easy to create animations. [23036] [23039]
* 20 February, 2009 (PR)
  * ENH: Added readers for various image file formats, poly data readers and unstructured grid readers. These include DICOM, GESigna, DEM, MetaImage (mha,mhd) MINC, AVSucd, GAMBIT, Exodus, STL, Points, Particle, PLY, PDB, SLC, OBJ, Facet and BYU files. Also added several tests for most of this functionality along with small data files. These are additions from PR's project staff, Suyog Jain and Sreekanth Ravindran. [23013]
  * ENH: We now change the default so the ImagePlaneWidget does not control the LUT. Also made the IPW recordable. [23011]
* 18 February, 2009 (GV)
  * ENH: Add a preference manager view for editing preferences outside envisage [22998]
* 08 February, 2009 (GV)
  * ENH: Center the glyphs created by barchart on the data points, as mentioned by Rauli Ruohonen [22906]
* 29 January, 2009 (GV)
  * ENH: Make it possible to avoid redraws with mlab by using mlab.gcf().scene.disable_render = True [22869]
* 28 January, 2009 (PR and GV)
  * ENH: Make the mlab.pipeline.user_defined factory function usable to add arbitrary filters on the pipeline. [22867], [22865]
* 11 January, 2009 (GV)
  * ENH: Make mlab.imshow use the ImageActor. Enhance the ImageActor to map scalars to colors when needed. [22816]
Traits
------
* Fixed a bug whereby faulty error handling in the PyProtocols Pyrex speedup code keeps references to tracebacks that have been handled. In so doing, clean up the same code such that it can be used with a modern Pyrex release (a bare raise can no longer be used outside of an except: clause).
* RangeEditor factory now supports a 'logslider' mode: Thanks to Matthew Turk for the patch
* TabularEditor factory now supports editing of all columns: Thanks to Didrik Pinte for the patch
* DateEditor factory in 'custom' style now supports multi-select feature.
* DateEditor and TimeEditor now support the 'readonly' style.
* Fixed a bug in the ArrayEditor factory that was causing multiple trait change events to get fired when the underlying array is changed externally to the editor: Thanks to Matthew Turk for the patch.
* Fixed a circular import error in Color, Font and RGBColor traits
* Fixed a bug in the factory for ArrayViewEditor so it now calls the toolkit backend-specific editor
TraitsBackendWX
---------------
* RangeEditor now supports a 'logslider' mode: Thanks to Matthew Turk for the patch
* TabularEditor now supports editing of all columns: Thanks to Didrik Pinte for the patch
* DateEditor in 'custom' style now supports multi-select feature.
* DateEditor and TimeEditor now support the 'readonly' style.
* Added a trait to the wx pyface workbench View to indicate if the view dock window should be closeable.
* Fixed the DirectoryEditor to popup the correct file dialog (thanks to Luca Fasano and Phil Thompson)
* Fixed a circular import error in Color, Font and RGBColor traits
* Fixed a bug in the ColorEditor that was causing the revert action to not work correctly.
* Fixed a bug that caused a traceback when trying to undock a pyface dock window
* Fixed a bug in the 'livemodal' view that caused the UI to become unresponsive if the 'updated' event was fired on the contained view.
* Fixed bugs in ListEditor (notebook style) that caused a loss of sync between the 'selected' trait and the activated dock window.
TraitsBackendQt
---------------
* RangeEditor now supports a 'logslider' mode: Thanks to Matthew Turk for the patch
* Fixed the DirectoryEditor to popup the correct file dialog (thanks to Luca Fasano and Phil Thompson)
From bsouthey at gmail.com Mon Mar 23 17:32:47 2009 From: bsouthey at gmail.com (Bruce Southey) Date: Mon, 23 Mar 2009 16:32:47 -0500 Subject: [Numpy-discussion] OS X PPC problem with Numpy 1.3.0b1 In-Reply-To: References: <5b8d13220903181943u32c3604dtbdbbae89ad687c3e@mail.gmail.com> <68E46E50-2CFA-4755-B890-11096A67CE45@post.harvard.edu> <8F39D662-5696-4511-B2D2-27678D88BB41@post.harvard.edu> <66DBF91F-BDD0-45C0-84D3-C3152689781E@post.harvard.edu> <2BF17180-D64C-432B-9113-F67F4F783DF3@post.harvard.edu> <49C7EAFD.1010405@gmail.com> Message-ID: <49C7FFFF.5000101@gmail.com>
Pauli Virtanen wrote: > Mon, 23 Mar 2009 15:03:09 -0500, Bruce Southey wrote: > [clip] > >> I do not know if this is related, but I got similar error with David's >> windows 64 bits installer on my 64 bit Vista system. >> http://mail.scipy.org/pipermail/numpy-discussion/2009-March/041282.html >> >> In particular this code crashes: >> >>>>> import numpy as np >>>>> info = np.finfo(np.longcomplex) >>>>> > > Could you narrow that down a bit: do > > import numpy as np > z = np.longcomplex(complex(1.,1.)) > z + z > z - z > z * z > z / z > z + 2 > z - 2 > z * 2 > z / 2 > z**0 > z**1 > z**2 > z**3 > z**4 > z**4.5 > z**(-1) > z**(-2) > z**101 > > Do you get a crash at some point? > > No. I get a problem with using longdouble as that is the dtype that causes the TestPower.test_large_types to crash. Also, np.finfo(np.float128) crashes. I can assign and multiply longdoubles and take the square root but not use the power '**'.
>>> y=np.longdouble(2) >>> y 2.0 >>> y**1 2.0 >>> y**2 crash Bruce From charlesr.harris at gmail.com Mon Mar 23 17:57:15 2009 From: charlesr.harris at gmail.com (Charles R Harris) Date: Mon, 23 Mar 2009 15:57:15 -0600 Subject: [Numpy-discussion] OS X PPC problem with Numpy 1.3.0b1 In-Reply-To: <49C7FFFF.5000101@gmail.com> References: <5b8d13220903181943u32c3604dtbdbbae89ad687c3e@mail.gmail.com> <66DBF91F-BDD0-45C0-84D3-C3152689781E@post.harvard.edu> <2BF17180-D64C-432B-9113-F67F4F783DF3@post.harvard.edu> <49C7EAFD.1010405@gmail.com> <49C7FFFF.5000101@gmail.com> Message-ID: On Mon, Mar 23, 2009 at 3:32 PM, Bruce Southey wrote: > Pauli Virtanen wrote: > > Mon, 23 Mar 2009 15:03:09 -0500, Bruce Southey wrote: > > [clip] > > > >> I do not know if this is related, but I got similar error with David's > >> windows 64 bits installer on my 64 bit Vista system. > >> http://mail.scipy.org/pipermail/numpy-discussion/2009-March/041282.html > >> > >> In particular this code crashes: > >> > >>>>> import numpy as np > >>>>> info = np.finfo(np.longcomplex) > >>>>> > > > > Could you narrow that down a bit: do > > > > import numpy as np > > z = np.longcomplex(complex(1.,1.)) > > z + z > > z - z > > z * z > > z / z > > z + 2 > > z - 2 > > z * 2 > > z / 2 > > z**0 > > z**1 > > z**2 > > z**3 > > z**4 > > z**4.5 > > z**(-1) > > z**(-2) > > z**101 > > > > Do you get a crash at some point? > > > > > No. > > I get a problem with using longdouble as that is the dtype that causes > the TestPower.test_large_types to crash. > Also, np.finfo(np.float128) crashes. I can assign and multiple > longdoubles and take the square root but not use the power '**'. > >>> y=np.longdouble(2) > >>> y > 2.0 > >>> y**1 > 2.0 > >>> y**2 > crash > Do you know if your binary was compiled with MSVC or mingw? I suspect the latter because I don't think MSVC supports float128 (long doubles are doubles). So there might be a library problem... Chuck -------------- next part -------------- An HTML attachment was scrubbed... URL: From charlesr.harris at gmail.com Mon Mar 23 17:59:32 2009 From: charlesr.harris at gmail.com (Charles R Harris) Date: Mon, 23 Mar 2009 15:59:32 -0600 Subject: [Numpy-discussion] OS X PPC problem with Numpy 1.3.0b1 In-Reply-To: References: <5b8d13220903181943u32c3604dtbdbbae89ad687c3e@mail.gmail.com> <2BF17180-D64C-432B-9113-F67F4F783DF3@post.harvard.edu> <49C7EAFD.1010405@gmail.com> <49C7FFFF.5000101@gmail.com> Message-ID: On Mon, Mar 23, 2009 at 3:57 PM, Charles R Harris wrote: > > > On Mon, Mar 23, 2009 at 3:32 PM, Bruce Southey wrote: > >> Pauli Virtanen wrote: >> > Mon, 23 Mar 2009 15:03:09 -0500, Bruce Southey wrote: >> > [clip] >> > >> >> I do not know if this is related, but I got similar error with David's >> >> windows 64 bits installer on my 64 bit Vista system. >> >> >> http://mail.scipy.org/pipermail/numpy-discussion/2009-March/041282.html >> >> >> >> In particular this code crashes: >> >> >> >>>>> import numpy as np >> >>>>> info = np.finfo(np.longcomplex) >> >>>>> >> > >> > Could you narrow that down a bit: do >> > >> > import numpy as np >> > z = np.longcomplex(complex(1.,1.)) >> > z + z >> > z - z >> > z * z >> > z / z >> > z + 2 >> > z - 2 >> > z * 2 >> > z / 2 >> > z**0 >> > z**1 >> > z**2 >> > z**3 >> > z**4 >> > z**4.5 >> > z**(-1) >> > z**(-2) >> > z**101 >> > >> > Do you get a crash at some point? >> > >> > >> No. >> >> I get a problem with using longdouble as that is the dtype that causes >> the TestPower.test_large_types to crash. >> Also, np.finfo(np.float128) crashes. 
I can assign and multiple >> longdoubles and take the square root but not use the power '**'. >> >>> y=np.longdouble(2) >> >>> y >> 2.0 >> >>> y**1 >> 2.0 >> >>> y**2 >> crash >> > > Do you know if your binary was compiled with MSVC or mingw? I suspect the > latter because I don't think MSVC supports float128 (long doubles are > doubles). So there might be a library problem... > > Chuck > On the other hand, I believe python is compiled with MSVC. This might be causing some incompatibilities with a mingw compiled numpy. Chuck -------------- next part -------------- An HTML attachment was scrubbed... URL: From pav at iki.fi Mon Mar 23 18:01:11 2009 From: pav at iki.fi (Pauli Virtanen) Date: Mon, 23 Mar 2009 22:01:11 +0000 (UTC) Subject: [Numpy-discussion] OS X PPC problem with Numpy 1.3.0b1 References: <5b8d13220903181943u32c3604dtbdbbae89ad687c3e@mail.gmail.com> <68E46E50-2CFA-4755-B890-11096A67CE45@post.harvard.edu> <8F39D662-5696-4511-B2D2-27678D88BB41@post.harvard.edu> <66DBF91F-BDD0-45C0-84D3-C3152689781E@post.harvard.edu> <2BF17180-D64C-432B-9113-F67F4F783DF3@post.harvard.edu> Message-ID: Mon, 23 Mar 2009 19:55:17 +0000, Pauli Virtanen wrote: > Mon, 23 Mar 2009 13:22:29 -0600, Charles R Harris wrote: [clip] >> PPC stores long doubles as two doubles. I don't recall exactly how the >> two are used, but the result is that the numbers aren't in the form you >> would expect. Long doubles on the PPC have always been iffy, so it is >> no surprise that machar fails. The failure on SPARC quad precision >> bothers me more. > > The test fails on SPARC, since we need one term more in the Horner > series to reach quad precision accuracy. I'll add that for long doubles. Another reason turned out to be that (1./6) is a double-precision constant, whereas the series of course needs an appropriate precision for each data type. Fixed in r6715, r6716. I also skip the long double test if it seems that finfo(longdouble) is bogus. Backport? -- Pauli Virtanen From charlesr.harris at gmail.com Mon Mar 23 18:18:52 2009 From: charlesr.harris at gmail.com (Charles R Harris) Date: Mon, 23 Mar 2009 16:18:52 -0600 Subject: [Numpy-discussion] OS X PPC problem with Numpy 1.3.0b1 In-Reply-To: References: <5b8d13220903181943u32c3604dtbdbbae89ad687c3e@mail.gmail.com> <8F39D662-5696-4511-B2D2-27678D88BB41@post.harvard.edu> <66DBF91F-BDD0-45C0-84D3-C3152689781E@post.harvard.edu> <2BF17180-D64C-432B-9113-F67F4F783DF3@post.harvard.edu> Message-ID: On Mon, Mar 23, 2009 at 4:01 PM, Pauli Virtanen wrote: > Mon, 23 Mar 2009 19:55:17 +0000, Pauli Virtanen wrote: > > > Mon, 23 Mar 2009 13:22:29 -0600, Charles R Harris wrote: [clip] > >> PPC stores long doubles as two doubles. I don't recall exactly how the > >> two are used, but the result is that the numbers aren't in the form you > >> would expect. Long doubles on the PPC have always been iffy, so it is > >> no surprise that machar fails. The failure on SPARC quad precision > >> bothers me more. > > > > The test fails on SPARC, since we need one term more in the Horner > > series to reach quad precision accuracy. I'll add that for long doubles. > > Another reason turned out to be that (1./6) is a double-precision > constant, whereas the series of course needs an appropriate precision for > each data type. Fixed in r6715, r6716. > Heh, I should have caught that too when I looked it over. > > I also skip the long double test if it seems that finfo(longdouble) is > bogus. > > Backport? > I think so. It is a bug and the fix doesn't look complicated. 
I don't much like all the ifdefs in the middle of the code, but if there is a cleaner way to do it, it can wait. Chuck -------------- next part -------------- An HTML attachment was scrubbed... URL: From pav at iki.fi Mon Mar 23 18:27:35 2009 From: pav at iki.fi (Pauli Virtanen) Date: Mon, 23 Mar 2009 22:27:35 +0000 (UTC) Subject: [Numpy-discussion] OS X PPC problem with Numpy 1.3.0b1 References: <5b8d13220903181943u32c3604dtbdbbae89ad687c3e@mail.gmail.com> <68E46E50-2CFA-4755-B890-11096A67CE45@post.harvard.edu> <8F39D662-5696-4511-B2D2-27678D88BB41@post.harvard.edu> <66DBF91F-BDD0-45C0-84D3-C3152689781E@post.harvard.edu> <2BF17180-D64C-432B-9113-F67F4F783DF3@post.harvard.edu> <49C7EAFD.1010405@gmail.com> <49C7FFFF.5000101@gmail.com> Message-ID: Mon, 23 Mar 2009 16:32:47 -0500, Bruce Southey wrote: [clip: crashes with longdouble on Windows 64] > No. > > I get a problem with using longdouble as that is the dtype that causes > the TestPower.test_large_types to crash. Also, np.finfo(np.float128) > crashes. I can assign and multiple longdoubles and take the square root > but not use the power '**'. > >>> y=np.longdouble(2) > >>> y > 2.0 > >>> y**1 > 2.0 > >>> y**2 > crash Ok, this looks a bit tricky, I have no idea what's going on. Why does it not crash with the exponent 1... -- Pauli Virtanen From pav at iki.fi Mon Mar 23 18:39:57 2009 From: pav at iki.fi (Pauli Virtanen) Date: Mon, 23 Mar 2009 22:39:57 +0000 (UTC) Subject: [Numpy-discussion] OS X PPC problem with Numpy 1.3.0b1 References: <5b8d13220903181943u32c3604dtbdbbae89ad687c3e@mail.gmail.com> <8F39D662-5696-4511-B2D2-27678D88BB41@post.harvard.edu> <66DBF91F-BDD0-45C0-84D3-C3152689781E@post.harvard.edu> <2BF17180-D64C-432B-9113-F67F4F783DF3@post.harvard.edu> Message-ID: Mon, 23 Mar 2009 16:18:52 -0600, Charles R Harris wrote: [clip: #1008 fixes] >> Backport? >> >> > I think so. It is a bug and the fix doesn't look complicated. > > I don't much like all the ifdefs in the middle of the code, but if there > is a cleaner way to do it, it can wait. Done, r6717. Sorry about the long time it took to get this fixed... -- Pauli Virtanen From charlesr.harris at gmail.com Mon Mar 23 18:52:28 2009 From: charlesr.harris at gmail.com (Charles R Harris) Date: Mon, 23 Mar 2009 16:52:28 -0600 Subject: [Numpy-discussion] OS X PPC problem with Numpy 1.3.0b1 In-Reply-To: References: <5b8d13220903181943u32c3604dtbdbbae89ad687c3e@mail.gmail.com> <2BF17180-D64C-432B-9113-F67F4F783DF3@post.harvard.edu> <49C7EAFD.1010405@gmail.com> <49C7FFFF.5000101@gmail.com> Message-ID: On Mon, Mar 23, 2009 at 4:27 PM, Pauli Virtanen wrote: > Mon, 23 Mar 2009 16:32:47 -0500, Bruce Southey wrote: > [clip: crashes with longdouble on Windows 64] > > No. > > > > I get a problem with using longdouble as that is the dtype that causes > > the TestPower.test_large_types to crash. Also, np.finfo(np.float128) > > crashes. I can assign and multiple longdoubles and take the square root > > but not use the power '**'. > > >>> y=np.longdouble(2) > > >>> y > > 2.0 > > >>> y**1 > > 2.0 > > >>> y**2 > > crash > > Ok, this looks a bit tricky, I have no idea what's going on. Why does it > not crash with the exponent 1... > I'd guess because nothing happens, the function simply returns. Chuck -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From pav at iki.fi Mon Mar 23 20:06:51 2009 From: pav at iki.fi (Pauli Virtanen) Date: Tue, 24 Mar 2009 00:06:51 +0000 (UTC) Subject: [Numpy-discussion] OS X PPC problem with Numpy 1.3.0b1 References: <5b8d13220903181943u32c3604dtbdbbae89ad687c3e@mail.gmail.com> <2BF17180-D64C-432B-9113-F67F4F783DF3@post.harvard.edu> <49C7EAFD.1010405@gmail.com> <49C7FFFF.5000101@gmail.com> Message-ID: Mon, 23 Mar 2009 16:52:28 -0600, Charles R Harris wrote: [clip] >> > >>> y=np.longdouble(2) >> > >>> y >> > 2.0 >> > >>> y**1 >> > 2.0 >> > >>> y**2 >> > crash >> >> Ok, this looks a bit tricky, I have no idea what's going on. Why does >> it not crash with the exponent 1... > > I'd guess because nothing happens, the function simply returns. Which function? The code path in question appears to be through @name at _power at scalarmathmodule.c.src:755, and from there it directly seems to go to _basic_longdouble_pow, where it calls npy_pow, which calls system's powl. (Like so on my system, verified with gdb.) I don't see branches testing for exponent 1, so this probably means that the crash occurs inside powl? -- Pauli Virtanen From charlesr.harris at gmail.com Mon Mar 23 20:14:26 2009 From: charlesr.harris at gmail.com (Charles R Harris) Date: Mon, 23 Mar 2009 18:14:26 -0600 Subject: [Numpy-discussion] OS X PPC problem with Numpy 1.3.0b1 In-Reply-To: References: <5b8d13220903181943u32c3604dtbdbbae89ad687c3e@mail.gmail.com> <2BF17180-D64C-432B-9113-F67F4F783DF3@post.harvard.edu> <49C7EAFD.1010405@gmail.com> <49C7FFFF.5000101@gmail.com> Message-ID: On Mon, Mar 23, 2009 at 6:06 PM, Pauli Virtanen wrote: > Mon, 23 Mar 2009 16:52:28 -0600, Charles R Harris wrote: > [clip] > >> > >>> y=np.longdouble(2) > >> > >>> y > >> > 2.0 > >> > >>> y**1 > >> > 2.0 > >> > >>> y**2 > >> > crash > >> > >> Ok, this looks a bit tricky, I have no idea what's going on. Why does > >> it not crash with the exponent 1... > > > > I'd guess because nothing happens, the function simply returns. > > Which function? The code path in question appears to be through > @name at _power at scalarmathmodule.c.src:755, and from there it directly > seems to go to _basic_longdouble_pow, where it calls npy_pow, which calls > system's powl. (Like so on my system, verified with gdb.) > > I don't see branches testing for exponent 1, so this probably means that > the crash occurs inside powl? > Yes, I think so. But powl itself might special case for some exponent values and follow different paths accordingly. The float128 looks like mingw and, since 64 bit support is still in development, what we might be seeing is a bug in either mingw or its linking with MS. I don't know if mingw uses its own library for extended precision, but I'm pretty sure MS doesn't support it yet. Chuck -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From cournape at gmail.com Mon Mar 23 20:53:18 2009 From: cournape at gmail.com (David Cournapeau) Date: Tue, 24 Mar 2009 09:53:18 +0900 Subject: [Numpy-discussion] OS X PPC problem with Numpy 1.3.0b1 In-Reply-To: References: <5b8d13220903181943u32c3604dtbdbbae89ad687c3e@mail.gmail.com> <68E46E50-2CFA-4755-B890-11096A67CE45@post.harvard.edu> <8F39D662-5696-4511-B2D2-27678D88BB41@post.harvard.edu> <66DBF91F-BDD0-45C0-84D3-C3152689781E@post.harvard.edu> <2BF17180-D64C-432B-9113-F67F4F783DF3@post.harvard.edu> Message-ID: <5b8d13220903231753o688dea69hbe20056dfd6a2020@mail.gmail.com> 2009/3/24 Charles R Harris : > > > On Mon, Mar 23, 2009 at 12:34 PM, Robert Pyle > wrote: >> >> Hi all, >> >> This is a continuation of something I started last week, but with a >> more appropriate subject line. >> >> To recap, my machine is a dual G5 running OS X 10.5.6, my python is >> >> ? ?Python 2.5.2 |EPD Py25 4.1.30101| (r252:60911, Dec 19 2008, >> 15:28:32) >> >> and numpy 1.3.0b1 was installed from the source tarball in the >> straightforward way with >> >> ? ?sudo python setup.py install >> >> >> On Mar 19, 2009, at 3:46 PM, Charles R Harris wrote: >> >> > On Mar 19, 2009, at 1:38 PM, Pauli Virtanen wrote: >> > >> > Thanks for tracking this! Can you check what your platform gives for: >> > >> > > import numpy as np >> > > info = np.finfo(np.longcomplex) >> > > print "eps:", info.eps, info.eps.dtype >> > > print "tiny:", info.tiny, info.tiny.dtype >> > > print "log10:", np.log10(info.tiny), np.log10(info.tiny/info.eps) >> > >> > eps: 1.3817869701e-76 float128 >> > tiny: -1.08420217274e-19 float128 >> > log10: nan nan >> > >> > The log of a negative number is nan, so part of the problem is the >> > value of tiny. The size of the values also look suspect to me. On my >> > machine >> > >> > In [8]: finfo(longcomplex).eps >> > Out[8]: 1.084202172485504434e-19 >> > >> > In [9]: finfo(float128).tiny >> > Out[9]: array(3.3621031431120935063e-4932, dtype=float128) >> > >> > So at a minimum eps and tiny are reversed. >> > >> > I started to look at the code for this but my eyes rolled up in my >> > head and I passed out. It could use some improvements... >> > >> > Chuck >> >> I have chased this a bit (or perhaps 128 bits) further. >> >> The problem seems to be that float128 is screwed up in general. ?I >> tracked the test error back to lines 95-107 in >> >> /PyModules/numpy-1.3.0b1/build/lib.macosx-10.3-ppc-2.5/numpy/lib/ >> machar.py >> >> Here is a short program built from these lines that demonstrates what >> I believe to be at the root of the test failure. >> >> ###################################### >> #! /usr/bin/env python >> >> import numpy as np >> import binascii as b >> >> def t(type="float"): >> ? ? max_iterN = 10000 >> ? ? print "\ntesting %s" % type >> ? ? a = np.array([1.0],type) >> ? ? one = a >> ? ? zero = one - one >> ? ? for _ in xrange(max_iterN): >> ? ? ? ? a = a + a >> ? ? ? ? temp = a + one >> ? ? ? ? temp1 = temp - a >> ? ? ? ? print _+1, b.b2a_hex(temp[0]), temp1 >> ? ? ? ? if any(temp1 - one != zero): >> ? ? ? ? ? ? break >> ? ? return >> >> if __name__ == '__main__': >> ? ? t(np.float32) >> ? ? t(np.float64) >> ? ? t(np.float128) >> >> ###################################### >> >> This tries to find the number of bits in the significand by >> calculating ((2.0**n)+1.0) for increasing n, and stopping when the sum >> is indistinguishable from (2.0**n), that is, when the added 1.0 has >> fallen off the bottom of the significand. 
>> >> My print statement shows the power of 2.0, the hex representation of >> ((2.0**n)+1.0), and the difference ((2.0**n)+1.0) - (2.0**n), which >> one expects to be 1.0 up to the point where the added 1.0 is lost. >> >> Here are the last few lines printed for float32: >> >> 19 49000010 [ 1.] >> 20 49800008 [ 1.] >> 21 4a000004 [ 1.] >> 22 4a800002 [ 1.] >> 23 4b000001 [ 1.] >> 24 4b800000 [ 0.] >> >> You can see the added 1.0 marching to the right and off the edge at 24 >> bits. >> >> Similarly, for float64: >> >> 48 42f0000000000010 [ 1.] >> 49 4300000000000008 [ 1.] >> 50 4310000000000004 [ 1.] >> 51 4320000000000002 [ 1.] >> 52 4330000000000001 [ 1.] >> 53 4340000000000000 [ 0.] >> >> There are 53 bits, just as IEEE 754 would lead us to hope. >> >> However, for float128: >> >> 48 42f00000000000100000000000000000 [1.0] >> 49 43000000000000080000000000000000 [1.0] >> 50 43100000000000040000000000000000 [1.0] >> 51 43200000000000020000000000000000 [1.0] >> 52 43300000000000010000000000000000 [1.0] >> 53 43400000000000003ff0000000000000 [1.0] >> 54 43500000000000003ff0000000000000 [1.0] >> >> Something weird happens as we pass 53 bits. ?I think lines 53 and 54 >> *should* be > > PPC stores long doubles as two doubles. I don't recall exactly how the two > are used, but the result is that the numbers aren't in the form you would > expect. Long doubles on the PPC have always been iffy, so it is no surprise > that machar fails. The failure on SPARC quad precision bothers me more. > > I think the easy thing to do for the 1.3 release is to fix the precision > test to use a hardwired range of values, I don't think testing the extreme > small values is necessary to check the power series expansion. But I have > been leaving that fixup to Pauli. > > Longer term, I think the values in finfo could come from npy_cpu.h and be > hardwired in. I don't think it is a good idea: long double support depends on 3 things (CPU, toolchain, OS), so hardwiring them would be a nightmare, since the number of cases could easily go > 100. > Anyhow, PPC is an exception in the way > it treats long doubles and I'm not even sure it hasn't changed in some of > the more recent models. I have not been able to find a lot of information yet, but maybe part of the problem is gcc on Mac OS X - maybe we would need to fix some flags. I have an old ppc minimac at home, I will look at it in more details, cheers, David From cournape at gmail.com Mon Mar 23 20:59:25 2009 From: cournape at gmail.com (David Cournapeau) Date: Tue, 24 Mar 2009 09:59:25 +0900 Subject: [Numpy-discussion] OS X PPC problem with Numpy 1.3.0b1 In-Reply-To: <49C7FFFF.5000101@gmail.com> References: <5b8d13220903181943u32c3604dtbdbbae89ad687c3e@mail.gmail.com> <66DBF91F-BDD0-45C0-84D3-C3152689781E@post.harvard.edu> <2BF17180-D64C-432B-9113-F67F4F783DF3@post.harvard.edu> <49C7EAFD.1010405@gmail.com> <49C7FFFF.5000101@gmail.com> Message-ID: <5b8d13220903231759x470b35ddwd84795920b334f41@mail.gmail.com> On Tue, Mar 24, 2009 at 6:32 AM, Bruce Southey wrote: > I get a problem with using longdouble as that is the dtype that causes > the ?TestPower.test_large_types to crash. Hey, when I said the windows 64 bits support was experimental, I meant it :) > Also, np.finfo(np.float128) crashes. I can assign and multiple > longdoubles and take the square root but not use the power '**'. > ?>>> y=np.longdouble(2) > ?>>> y > 2.0 > ?>>> y**1 > 2.0 > ?>>> y**2 > crash There was a bug in the mingw powl function, but I thought the problem was fixed upstream. I will look at it. 
This shows that numpy lacks long testing, though - the numpy test suite passes 100 % (when it does not crash :) ), but the long double support is very flaky at best on the windows 64 + mingw combination ATM. cheers, David From jens.rantil at telia.com Tue Mar 24 06:10:42 2009 From: jens.rantil at telia.com (Jens Rantil) Date: Tue, 24 Mar 2009 11:10:42 +0100 Subject: [Numpy-discussion] numpy.ctypeslib.ndpointer and the restype attribute In-Reply-To: <49C79F6A.1050204@molden.no> References: <6183458.298391237815382345.JavaMail.tomcat@pne-ps1-sn2> <49C79F6A.1050204@molden.no> Message-ID: <1237889442.6364.5.camel@supraflex> On Mon, 2009-03-23 at 15:40 +0100, Sturla Molden wrote: > def fromaddress(address, nbytes, dtype=double): > class Dummy(object): pass > d = Dummy() > d.__array_interface__ = { > 'data' : (address, False), > 'typestr' : numpy.uint8.str, > 'descr' : numpy.uint8.descr, > 'shape' : (nbytes,), > 'strides' : None, > 'version' : 3 > } > > return numpy.asarray(d).view( dtype=dtype ) Thanks Sturla. However numpy.uint8 seem to be lacking attributes 'str' and 'descr'. I'm using installed Ubuntu package 1:1.1.1-1. Is it too old or is the code broken? Also, could you elaborate why dtype=float would work better? Jens From ndbecker2 at gmail.com Tue Mar 24 09:05:46 2009 From: ndbecker2 at gmail.com (Neal Becker) Date: Tue, 24 Mar 2009 09:05:46 -0400 Subject: [Numpy-discussion] savetxt for complex Message-ID: How does savetxt format a complex vector? How can I control it? From sturla at molden.no Tue Mar 24 09:13:57 2009 From: sturla at molden.no (Sturla Molden) Date: Tue, 24 Mar 2009 14:13:57 +0100 Subject: [Numpy-discussion] numpy.ctypeslib.ndpointer and the restype attribute In-Reply-To: <1237889442.6364.5.camel@supraflex> References: <6183458.298391237815382345.JavaMail.tomcat@pne-ps1-sn2> <49C79F6A.1050204@molden.no> <1237889442.6364.5.camel@supraflex> Message-ID: <49C8DC95.8040705@molden.no> Jens Rantil wrote: > > Thanks Sturla. However numpy.uint8 seem to be lacking attributes 'str' > and 'descr'. I'm using installed Ubuntu package 1:1.1.1-1. Is it too old > or is the code broken? Oops, my fault :) def fromaddress(address, nbytes, dtype=float): class Dummy(object): pass d = Dummy() bytetype = numpy.dtype(numpy.uint8) d.__array_interface__ = { 'data' : (address, False), 'typestr' : bytetype.str, 'descr' : bytetype.descr, 'shape' : (nbytes,), 'strides' : None, 'version' : 3 } return numpy.asarray(d).view(dtype=dtype) You will have to make sure the address is an integer. > Also, could you elaborate why dtype=float would work better? Because there is no such thing as a double type in Python? Sturla Molden From nwagner at iam.uni-stuttgart.de Tue Mar 24 10:14:20 2009 From: nwagner at iam.uni-stuttgart.de (Nils Wagner) Date: Tue, 24 Mar 2009 15:14:20 +0100 Subject: [Numpy-discussion] manipulating lists Message-ID: Hi all, How can I extract the numbers from the following list ['&', '-1.878722E-08,', '3.835992E-11', '1.192970E-03,-5.080192E-06'] It is easy to extract >>> liste[1] '-1.878722E-08,' >>> liste[2] '3.835992E-11' but >>> liste[3] '1.192970E-03,-5.080192E-06' How can I accomplish that ? 
Nils From josef.pktd at gmail.com Tue Mar 24 10:27:18 2009 From: josef.pktd at gmail.com (josef.pktd at gmail.com) Date: Tue, 24 Mar 2009 10:27:18 -0400 Subject: [Numpy-discussion] manipulating lists In-Reply-To: References: Message-ID: <1cd32cbb0903240727i78b92cd2sdb5d5e479a815b0c@mail.gmail.com> On Tue, Mar 24, 2009 at 10:14 AM, Nils Wagner wrote: > Hi all, > > How can I extract the numbers from the following list > > ['&', '-1.878722E-08,', '3.835992E-11', > '1.192970E-03,-5.080192E-06'] > > It is easy to extract > >>>> liste[1] > '-1.878722E-08,' >>>> liste[2] > '3.835992E-11' > > but > >>>> liste[3] > '1.192970E-03,-5.080192E-06' > > How can I accomplish that ? > in python I would do this: >>> ss=['&', '-1.878722E-08,', '3.835992E-11','1.192970E-03,-5.080192E-06'] >>> li = [] >>> for j in ss: for ii in j.split(','): # assumes "," is delimiter try: li.append(float(ii)); except ValueError: pass >>> li [-1.8787219999999999e-008, 3.8359920000000003e-011, 0.00119297, -5.0801919999999999e-006] >>> np.array(li) array([ -1.87872200e-08, 3.83599200e-11, 1.19297000e-03, -5.08019200e-06]) Josef From kfrancoi at gmail.com Tue Mar 24 10:33:47 2009 From: kfrancoi at gmail.com (=?ISO-8859-1?Q?Kevin_Fran=E7oisse?=) Date: Tue, 24 Mar 2009 15:33:47 +0100 Subject: [Numpy-discussion] SWIG and numpy.i Message-ID: <36c2e0ca0903240733x2d3e4d44iaa6afd8d53c3ac69@mail.gmail.com> Hi everyone, I have been using NumPy for a couple of month now, as part of my research project at the university. But now, I have to use a big C library I wrote myself in a python project. So I choose to use SWIG for the interface between both my python script and my C library. To make things more comprehensible, I wrote a small C methods that illustrate my problem: /* matrix.c */ #include #include /* Compute the sum of a vector of reals */ double vecSum(int* vec,int m){ int i; double sum =0.0; for(i=0;i>> import matrix >>> from numpy import * >>> a = arange(10) >>> matrix.vecSum(a,a.shape[0]) Traceback (most recent call last): File "", line 1, in TypeError: in method 'vecSum', argument 1 of type 'int *' How can I tell SWIG that my Integer NumPy array should represent a int* array in C ? Thank you very much, Kevin -------------- next part -------------- An HTML attachment was scrubbed... URL: From nwagner at iam.uni-stuttgart.de Tue Mar 24 10:42:50 2009 From: nwagner at iam.uni-stuttgart.de (Nils Wagner) Date: Tue, 24 Mar 2009 15:42:50 +0100 Subject: [Numpy-discussion] manipulating lists In-Reply-To: <1cd32cbb0903240727i78b92cd2sdb5d5e479a815b0c@mail.gmail.com> References: <1cd32cbb0903240727i78b92cd2sdb5d5e479a815b0c@mail.gmail.com> Message-ID: On Tue, 24 Mar 2009 10:27:18 -0400 josef.pktd at gmail.com wrote: > On Tue, Mar 24, 2009 at 10:14 AM, Nils Wagner > wrote: >> Hi all, >> >> How can I extract the numbers from the following list >> >> ['&', '-1.878722E-08,', '3.835992E-11', >> '1.192970E-03,-5.080192E-06'] >> >> It is easy to extract >> >>>>> liste[1] >> '-1.878722E-08,' >>>>> liste[2] >> '3.835992E-11' >> >> but >> >>>>> liste[3] >> '1.192970E-03,-5.080192E-06' >> >> How can I accomplish that ? 
>> > > in python I would do this: > >>>> ss=['&', '-1.878722E-08,', >>>>'3.835992E-11','1.192970E-03,-5.080192E-06'] >>>> li = [] >>>> for j in ss: > for ii in j.split(','): # assumes "," is delimiter > try: li.append(float(ii)); > except ValueError: pass >>>> li > [-1.8787219999999999e-008, 3.8359920000000003e-011, >0.00119297, > -5.0801919999999999e-006] >>>> np.array(li) > array([ -1.87872200e-08, 3.83599200e-11, > 1.19297000e-03, > -5.08019200e-06]) > > Josef Thank you. Works like a charm. Nils From wfspotz at sandia.gov Tue Mar 24 13:13:35 2009 From: wfspotz at sandia.gov (Bill Spotz) Date: Tue, 24 Mar 2009 13:13:35 -0400 Subject: [Numpy-discussion] SWIG and numpy.i In-Reply-To: <36c2e0ca0903240733x2d3e4d44iaa6afd8d53c3ac69@mail.gmail.com> References: <36c2e0ca0903240733x2d3e4d44iaa6afd8d53c3ac69@mail.gmail.com> Message-ID: <49A4F2A3-1E5A-45F9-9A50-3F8460604D88@sandia.gov> Kevin, You need to declare vecSum() *after* you %include "numpy.i" and use the %apply directive. Based on what you have, I think you can just get rid of the "extern double vecSum(...)". I don't see what purpose it serves. As is, it is telling swig to wrap vecSum() before you have set up your numpy typemaps. On Mar 24, 2009, at 10:33 AM, Kevin Fran?oisse wrote: > Hi everyone, > > I have been using NumPy for a couple of month now, as part of my > research project at the university. But now, I have to use a big C > library I wrote myself in a python project. So I choose to use SWIG > for the interface between both my python script and my C library. To > make things more comprehensible, I wrote a small C methods that > illustrate my problem: > > /* matrix.c */ > > #include > #include > /* Compute the sum of a vector of reals */ > double vecSum(int* vec,int m){ > int i; > double sum =0.0; > > for(i=0;i sum += vec[i]; > } > return sum; > } > > /***/ > > /* matrix.h */ > > double vecSum(int* vec,int m); > > /***/ > > /* matrix.i */ > > %module matrix > %{ > #define SWIG_FILE_WITH_INIT > #include "matrix.h" > %} > > extern double vecSum(int* vec, int m); > > %include "numpy.i" > > %init %{ > import_array(); > %} > > %apply (int* IN_ARRAY1, int DIM1) {(int* vec, int m)}; > %include "matrix.h" > > /***/ > > I'm using a python script to compile my swig interface and my C > files (running Mac OS X 10.5) > > /* matrixSetup.py */ > > from distutils.core import setup, Extension > import numpy > > setup(name='matrix', version='1.0', ext_modules > =[Extension('_matrix', ['matrix.c','matrix.i'], > include_dirs = [numpy.get_include(),'.'])]) > > /***/ > > Everything seems to work fine ! But when I test my wrapped module in > python with an small NumPy array, here what I get : > > >>> import matrix > >>> from numpy import * > >>> a = arange(10) > >>> matrix.vecSum(a,a.shape[0]) > Traceback (most recent call last): > File "", line 1, in > TypeError: in method 'vecSum', argument 1 of type 'int *' > > How can I tell SWIG that my Integer NumPy array should represent a > int* array in C ? > > Thank you very much, > > Kevin > ** Bill Spotz ** ** Sandia National Laboratories Voice: (505)845-0170 ** ** P.O. 
Box 5800 Fax: (505)284-0154 ** ** Albuquerque, NM 87185-0370 Email: wfspotz at sandia.gov ** From dsdale24 at gmail.com Tue Mar 24 13:15:21 2009 From: dsdale24 at gmail.com (Darren Dale) Date: Tue, 24 Mar 2009 13:15:21 -0400 Subject: [Numpy-discussion] test failure in numpy trunk Message-ID: Hello, I just performed an svn update, deleted my old build/ and site-packages/numpy*, reinstalled, and I see a new test failure on a 64 bit linux machine: ====================================================================== FAIL: test_umath.TestComplexFunctions.test_loss_of_precision_longcomplex ---------------------------------------------------------------------- Traceback (most recent call last): File "/usr/lib64/python2.6/site-packages/nose/case.py", line 182, in runTest self.test(*self.arg) File "/usr/lib64/python2.6/site-packages/numpy/testing/decorators.py", line 169, in knownfailer return f(*args, **kwargs) File "/usr/lib64/python2.6/site-packages/numpy/core/tests/test_umath.py", line 557, in test_loss_of_precision_longcomplex self.check_loss_of_precision(np.longcomplex) File "/usr/lib64/python2.6/site-packages/numpy/core/tests/test_umath.py", line 510, in check_loss_of_precision check(x_series, 2*eps) File "/usr/lib64/python2.6/site-packages/numpy/core/tests/test_umath.py", line 497, in check 'arctanh') AssertionError: (135, 3.4039637354191726288e-09, 3.9031278209478159624e-18, 'arctanh') -------------- next part -------------- An HTML attachment was scrubbed... URL: From charlesr.harris at gmail.com Tue Mar 24 13:20:53 2009 From: charlesr.harris at gmail.com (Charles R Harris) Date: Tue, 24 Mar 2009 11:20:53 -0600 Subject: [Numpy-discussion] test failure in numpy trunk In-Reply-To: References: Message-ID: 2009/3/24 Darren Dale > Hello, > > I just performed an svn update, deleted my old build/ and > site-packages/numpy*, reinstalled, and I see a new test failure on a 64 bit > linux machine: > > ====================================================================== > FAIL: test_umath.TestComplexFunctions.test_loss_of_precision_longcomplex > ---------------------------------------------------------------------- > Traceback (most recent call last): > File "/usr/lib64/python2.6/site-packages/nose/case.py", line 182, in > runTest > self.test(*self.arg) > File "/usr/lib64/python2.6/site-packages/numpy/testing/decorators.py", > line 169, in knownfailer > return f(*args, **kwargs) > File "/usr/lib64/python2.6/site-packages/numpy/core/tests/test_umath.py", > line 557, in test_loss_of_precision_longcomplex > self.check_loss_of_precision(np.longcomplex) > File "/usr/lib64/python2.6/site-packages/numpy/core/tests/test_umath.py", > line 510, in check_loss_of_precision > check(x_series, 2*eps) > File "/usr/lib64/python2.6/site-packages/numpy/core/tests/test_umath.py", > line 497, in check > 'arctanh') > AssertionError: (135, 3.4039637354191726288e-09, 3.9031278209478159624e-18, > 'arctanh') > What machine is it? Chuck -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From nwagner at iam.uni-stuttgart.de Tue Mar 24 13:36:56 2009 From: nwagner at iam.uni-stuttgart.de (Nils Wagner) Date: Tue, 24 Mar 2009 18:36:56 +0100 Subject: [Numpy-discussion] test failure in numpy trunk In-Reply-To: References: Message-ID: On Tue, 24 Mar 2009 11:20:53 -0600 Charles R Harris wrote: > 2009/3/24 Darren Dale > >> Hello, >> >> I just performed an svn update, deleted my old build/ >>and >> site-packages/numpy*, reinstalled, and I see a new test >>failure on a 64 bit >> linux machine: >> >> ====================================================================== >> FAIL: >>test_umath.TestComplexFunctions.test_loss_of_precision_longcomplex >> ---------------------------------------------------------------------- >> Traceback (most recent call last): >> File >>"/usr/lib64/python2.6/site-packages/nose/case.py", line >>182, in >> runTest >> self.test(*self.arg) >> File >>"/usr/lib64/python2.6/site-packages/numpy/testing/decorators.py", >> line 169, in knownfailer >> return f(*args, **kwargs) >> File >>"/usr/lib64/python2.6/site-packages/numpy/core/tests/test_umath.py", >> line 557, in test_loss_of_precision_longcomplex >> self.check_loss_of_precision(np.longcomplex) >> File >>"/usr/lib64/python2.6/site-packages/numpy/core/tests/test_umath.py", >> line 510, in check_loss_of_precision >> check(x_series, 2*eps) >> File >>"/usr/lib64/python2.6/site-packages/numpy/core/tests/test_umath.py", >> line 497, in check >> 'arctanh') >> AssertionError: (135, 3.4039637354191726288e-09, >>3.9031278209478159624e-18, >> 'arctanh') >> > > What machine is it? > > Chuck I can reproduce the failure. Linux linux-mogv 2.6.27.19-3.2-default #1 SMP 2009-02-25 15:40:44 +0100 x86_64 x86_64 x86_64 GNU/Linux cat /proc/cpuinfo processor : 0 vendor_id : GenuineIntel cpu family : 6 model : 15 model name : Intel(R) Pentium(R) Dual CPU T3200 @ 2.00GHz stepping : 13 cpu MHz : 1000.000 cache size : 1024 KB physical id : 0 siblings : 2 core id : 0 cpu cores : 2 apicid : 0 initial apicid : 0 fpu : yes fpu_exception : yes cpuid level : 10 wp : yes flags : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx lm constant_tsc arch_perfmon pebs bts rep_good nopl pni monitor ds_cpl est tm2 ssse3 cx16 xtpr lahf_lm bogomips : 3996.80 clflush size : 64 cache_alignment : 64 address sizes : 36 bits physical, 48 bits virtual power management: processor : 1 vendor_id : GenuineIntel cpu family : 6 model : 15 model name : Intel(R) Pentium(R) Dual CPU T3200 @ 2.00GHz stepping : 13 cpu MHz : 1000.000 cache size : 1024 KB physical id : 0 siblings : 2 core id : 1 cpu cores : 2 apicid : 1 initial apicid : 1 fpu : yes fpu_exception : yes cpuid level : 10 wp : yes flags : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx lm constant_tsc arch_perfmon pebs bts rep_good nopl pni monitor ds_cpl est tm2 ssse3 cx16 xtprlahf_lm bogomips : 3996.82 clflush size : 64 cache_alignment : 64 address sizes : 36 bits physical, 48 bits virtual power management: ====================================================================== FAIL: test_umath.TestComplexFunctions.test_loss_of_precision_longcomplex ---------------------------------------------------------------------- Traceback (most recent call last): File "/home/nwagner/local/lib64/python2.6/site-packages/nose-0.10.4-py2.6.egg/nose/case.py", line 182, in runTest self.test(*self.arg) File 
"/home/nwagner/local/lib64/python2.6/site-packages/numpy/testing/decorators.py", line 169, in knownfailer return f(*args, **kwargs) File "/home/nwagner/local/lib64/python2.6/site-packages/numpy/core/tests/test_umath.py", line 557, in test_loss_of_precision_longcomplex self.check_loss_of_precision(np.longcomplex) File "/home/nwagner/local/lib64/python2.6/site-packages/numpy/core/tests/test_umath.py", line 510, in check_loss_of_precision check(x_series, 2*eps) File "/home/nwagner/local/lib64/python2.6/site-packages/numpy/core/tests/test_umath.py", line 497, in check 'arctanh') AssertionError: (135, 3.4039637354191726288e-09, 3.9031278209478159624e-18, 'arctanh') ---------------------------------------------------------------------- Ran 2031 tests in 15.923s FAILED (KNOWNFAIL=1, failures=1) From dsdale24 at gmail.com Tue Mar 24 13:38:45 2009 From: dsdale24 at gmail.com (Darren Dale) Date: Tue, 24 Mar 2009 13:38:45 -0400 Subject: [Numpy-discussion] test failure in numpy trunk In-Reply-To: References: Message-ID: 2009/3/24 Charles R Harris > > > 2009/3/24 Darren Dale > > Hello, >> >> I just performed an svn update, deleted my old build/ and >> site-packages/numpy*, reinstalled, and I see a new test failure on a 64 bit >> linux machine: >> >> ====================================================================== >> FAIL: test_umath.TestComplexFunctions.test_loss_of_precision_longcomplex >> ---------------------------------------------------------------------- >> Traceback (most recent call last): >> File "/usr/lib64/python2.6/site-packages/nose/case.py", line 182, in >> runTest >> self.test(*self.arg) >> File "/usr/lib64/python2.6/site-packages/numpy/testing/decorators.py", >> line 169, in knownfailer >> return f(*args, **kwargs) >> File >> "/usr/lib64/python2.6/site-packages/numpy/core/tests/test_umath.py", line >> 557, in test_loss_of_precision_longcomplex >> self.check_loss_of_precision(np.longcomplex) >> File >> "/usr/lib64/python2.6/site-packages/numpy/core/tests/test_umath.py", line >> 510, in check_loss_of_precision >> check(x_series, 2*eps) >> File >> "/usr/lib64/python2.6/site-packages/numpy/core/tests/test_umath.py", line >> 497, in check >> 'arctanh') >> AssertionError: (135, 3.4039637354191726288e-09, >> 3.9031278209478159624e-18, 'arctanh') >> > > What machine is it? > 64-bit gentoo linux, gcc-4.3.3, python-2.6.1 -------------- next part -------------- An HTML attachment was scrubbed... URL: From delcampo at stats.ox.ac.uk Tue Mar 24 14:13:32 2009 From: delcampo at stats.ox.ac.uk (F. David del Campo Hill) Date: Tue, 24 Mar 2009 18:13:32 -0000 Subject: [Numpy-discussion] Win32 MSI Message-ID: <4E30CAE7A3A7B242AFBBC418520EA0D644EBAD@exchange1.stats.ox.ac.uk> Dear Numpy Forum, I have found the Win64 (Windows x64) Numpy MSI installer in Sourceforge (numpy-1.3.0b1.win-amd64-py2.6.msi), but cannot find the Win32 (Windows i386) one. I have tried unpacking the Win32 EXE installer package (numpy-1.3.0b1-win32-superpack-python2.6.exe) to see if the MSI installer could be found inside, but without luck. Does the package I look for exist, and if so, where could someone point me to where I can download it from? Thank you for your help. Yours, David del Campo "The more corrupt the state, the more numerous the laws." -Gaius Cornelius Tacitus (ca. 56-ca. 
117), Annals, Book III, 27 From martyfuhry at gmail.com Tue Mar 24 14:34:57 2009 From: martyfuhry at gmail.com (Marty Fuhry) Date: Tue, 24 Mar 2009 14:34:57 -0400 Subject: [Numpy-discussion] Summer of Code: Proposal for Implementing date/time types in NumPy Message-ID: Hello, Sorry for any overlap, as I've been referred here from the scipi-dev mailing list. I was reading through the Summer of Code ideas and I'm terribly interested in date/time proposal (http://projects.scipy.org/numpy/browser/trunk/doc/neps/datetime-proposal3.rst). I would love to work on this for a Google Summer of Code project. I'm a sophmore studying Computer Science and Mathematics at Kent State University in Ohio, so this project directly relates to my studies. Is there anyone looking into this proposal yet? Thank you. -Marty Fuhry From cournape at gmail.com Tue Mar 24 15:04:35 2009 From: cournape at gmail.com (David Cournapeau) Date: Wed, 25 Mar 2009 04:04:35 +0900 Subject: [Numpy-discussion] Win32 MSI In-Reply-To: <4E30CAE7A3A7B242AFBBC418520EA0D644EBAD@exchange1.stats.ox.ac.uk> References: <4E30CAE7A3A7B242AFBBC418520EA0D644EBAD@exchange1.stats.ox.ac.uk> Message-ID: <5b8d13220903241204i6a78de96p555fbd48e41d2186@mail.gmail.com> On Wed, Mar 25, 2009 at 3:13 AM, F. David del Campo Hill wrote: > Dear Numpy Forum, > > ? ? ? ?I have found the Win64 (Windows x64) Numpy MSI installer in Sourceforge (numpy-1.3.0b1.win-amd64-py2.6.msi), but cannot find the Win32 (Windows i386) one. I have tried unpacking the Win32 EXE installer package (numpy-1.3.0b1-win32-superpack-python2.6.exe) to see if the MSI installer could be found inside, but without luck. Does the package I look for exist, and if so, where could someone point me to where I can download it from? No, it does not. The problem is that I need to add a way to execute .msi from nsis (nsis is the software I use to build the superpack), and I did not find a way when I tried - but it should be possible. Now, I am not so familiar with msi: what does it bring compared to .exe ? Would an exe installing a .msi solve your problems ? (windows64 has an msi because 64 bits implies SSE2, and as such we don't need to check for CPU wo SSE). cheers, David From pav at iki.fi Tue Mar 24 15:12:01 2009 From: pav at iki.fi (Pauli Virtanen) Date: Tue, 24 Mar 2009 19:12:01 +0000 (UTC) Subject: [Numpy-discussion] test failure in numpy trunk References: Message-ID: Tue, 24 Mar 2009 13:15:21 -0400, Darren Dale wrote: > I just performed an svn update, deleted my old build/ and > site-packages/numpy*, reinstalled, and I see a new test failure on a 64 > bit linux machine: > > ====================================================================== > FAIL: test_umath.TestComplexFunctions.test_loss_of_precision_longcomplex > ---------------------------------------------------------------------- [clip] > "/usr/lib64/python2.6/site-packages/numpy/core/tests/test_umath.py", > line 497, in check > 'arctanh') > AssertionError: (135, 3.4039637354191726288e-09, > 3.9031278209478159624e-18, 'arctanh') I can reproduce this (on another 64-bit machine). 
This time around, it's the real function that is faulty: >>> x = np.longdouble(3e-9) >>> np.arctanh(x+0j).real - x 9.0876776281460559983e-27 >>> np.arctanh(x).real - x 0.0 >>> np.finfo(np.longdouble).eps * x 3.2526065174565132804e-28 So, the system atanhl is ~ 30 relative eps away from the correct answer: >>> from sympy import mpmath >>> mpmath.mp.dps=60 >>> p = mpmath.mpf('3e-9') >>> print (mpmath.atanh(p) - p)*1e27 9.00000000000000016818799564800000095820042512435586643130912 I'll relax the test tolerance to allow for this... -- Pauli Virtanen From lranderson at pppl.gov Tue Mar 24 15:11:05 2009 From: lranderson at pppl.gov (Lewis E. Randerson) Date: Tue, 24 Mar 2009 15:11:05 -0400 Subject: [Numpy-discussion] Defining version_pattern for fcompiler (pathscale) Message-ID: <0FA37068-49EB-4EA6-9300-AF9520714644@pppl.gov> Hi, I am trying to setup a new compiler for numpy and my lack of python pattern matching syntax knowledge is bogging me down. Here is one of my non-working patterns. ================================================= version_pattern = r'Pathscale(TM) Compiler Suite: Version (? P[^\s]*)' ================================================= Here is the string I am trying to get the version from. ======================================================= $ pathf95 --version PathScale(TM) Compiler Suite: Version 3.2 Built on: 2008-06-16 16:45:36 -0700 Thread model: posix GNU gcc version 3.3.1 (PathScale 3.2 driver) Copyright 2000, 2001 Silicon Graphics, Inc. All Rights Reserved. Copyright 2002, 2003, 2004, 2005, 2006 PathScale, Inc. All Rights Reserved. Copyright 2006, 2007 QLogic Corporation. All Rights Reserved. Copyright 2007, 2008 PathScale LLC. All Rights Reserved. See complete copyright, patent and legal notices in the /usr/pppl/pathscale/3.2/share/doc/pathscale-compilers-3.2/LEGAL.pdf file. =========================================================== Any idea what the correct value for "version_pattern" should be". Even better, If you have a working pathscale.py for the above, I'll take that instead. Thanks for any help! --Lew From pav at iki.fi Tue Mar 24 15:32:33 2009 From: pav at iki.fi (Pauli Virtanen) Date: Tue, 24 Mar 2009 19:32:33 +0000 (UTC) Subject: [Numpy-discussion] Defining version_pattern for fcompiler (pathscale) References: <0FA37068-49EB-4EA6-9300-AF9520714644@pppl.gov> Message-ID: Tue, 24 Mar 2009 15:11:05 -0400, Lewis E. Randerson wrote: > Hi, > > I am trying to setup a new compiler for numpy and my lack of python > pattern matching syntax knowledge is bogging me down. > > Here is one of my non-working patterns. > ================================================= > version_pattern = r'Pathscale(TM) Compiler Suite: Version (?P[^\s]*)' > ================================================= Possibly like so: version_pattern = r'PathScale\(TM\) Compiler Suite: Version (?P[^\s]*)' You need the escapes to avoid the first braces to be interpreted as a group. -- Pauli Virtanen From lranderson at pppl.gov Tue Mar 24 15:44:07 2009 From: lranderson at pppl.gov (Lewis E. Randerson) Date: Tue, 24 Mar 2009 15:44:07 -0400 Subject: [Numpy-discussion] Defining version_pattern for fcompiler (pathscale) In-Reply-To: References: <0FA37068-49EB-4EA6-9300-AF9520714644@pppl.gov> Message-ID: Puali, I was wondering why the there seemed to be two uses for parens in the string. I now have the braces in. The issue now I suspect is the stuff after (?P. That is where I am really confused. Any opinions there. 
--Lew On Mar 24, 2009, at 3:32 PM, Pauli Virtanen wrote: > Tue, 24 Mar 2009 15:11:05 -0400, Lewis E. Randerson wrote: > >> Hi, >> >> I am trying to setup a new compiler for numpy and my lack of python >> pattern matching syntax knowledge is bogging me down. >> >> Here is one of my non-working patterns. >> ================================================= >> version_pattern = r'Pathscale(TM) Compiler Suite: Version (? >> P[^\s]*)' >> ================================================= > > Possibly like so: > > version_pattern = r'PathScale\(TM\) Compiler Suite: Version (? > P[^\s]*)' > > You need the escapes to avoid the first braces to be interpreted as > a group. > > -- > Pauli Virtanen > > _______________________________________________ > Numpy-discussion mailing list > Numpy-discussion at scipy.org > http://mail.scipy.org/mailman/listinfo/numpy-discussion ---------------------------------------------------------------------- Lewis E. Randerson DOE Princeton University Plasma Physics Laboratory, Princeton University, James Forrestal Campus 100 Stellarator Road, Princeton, NJ 08543 Work: 609/243-3134, Fax: 609/243-3086, PPPL Web: http://www.pppl.gov From charlesr.harris at gmail.com Tue Mar 24 15:47:21 2009 From: charlesr.harris at gmail.com (Charles R Harris) Date: Tue, 24 Mar 2009 13:47:21 -0600 Subject: [Numpy-discussion] test failure in numpy trunk In-Reply-To: References: Message-ID: On Tue, Mar 24, 2009 at 1:12 PM, Pauli Virtanen wrote: > Tue, 24 Mar 2009 13:15:21 -0400, Darren Dale wrote: > > I just performed an svn update, deleted my old build/ and > > site-packages/numpy*, reinstalled, and I see a new test failure on a 64 > > bit linux machine: > > > > ====================================================================== > > FAIL: test_umath.TestComplexFunctions.test_loss_of_precision_longcomplex > > ---------------------------------------------------------------------- > [clip] > > "/usr/lib64/python2.6/site-packages/numpy/core/tests/test_umath.py", > > line 497, in check > > 'arctanh') > > AssertionError: (135, 3.4039637354191726288e-09, > > 3.9031278209478159624e-18, 'arctanh') > > I can reproduce this (on another 64-bit machine). This time around, it's > the real function that is faulty: > > >>> x = np.longdouble(3e-9) > >>> np.arctanh(x+0j).real - x > 9.0876776281460559983e-27 > >>> np.arctanh(x).real - x > 0.0 > >>> np.finfo(np.longdouble).eps * x > 3.2526065174565132804e-28 > > So, the system atanhl is ~ 30 relative eps away from the correct answer: > I see this also. The compiler is gcc version 4.3.0 20080428 (Red Hat 4.3.0-8) (GCC). Maybe we should ping the compiler folks? I could also open a Fedora bug for this. Chuck -------------- next part -------------- An HTML attachment was scrubbed... URL: From pav at iki.fi Tue Mar 24 15:55:09 2009 From: pav at iki.fi (Pauli Virtanen) Date: Tue, 24 Mar 2009 19:55:09 +0000 (UTC) Subject: [Numpy-discussion] Defining version_pattern for fcompiler (pathscale) References: <0FA37068-49EB-4EA6-9300-AF9520714644@pppl.gov> Message-ID: Tue, 24 Mar 2009 15:44:07 -0400, Lewis E. Randerson wrote: > Puali, > > I was wondering why the there seemed to be two uses for parens in the > string. I now have the braces in. The issue now I suspect is the > stuff after (?P. That is where I am really confused. This is probably best answered by the documentation: http://docs.python.org/library/re.html In short, the (?P<...>) construct defines a named group. 
-- Pauli Virtanen From lranderson at pppl.gov Tue Mar 24 16:00:31 2009 From: lranderson at pppl.gov (Lewis E. Randerson) Date: Tue, 24 Mar 2009 16:00:31 -0400 Subject: [Numpy-discussion] Defining version_pattern for fcompiler (pathscale) In-Reply-To: References: <0FA37068-49EB-4EA6-9300-AF9520714644@pppl.gov> Message-ID: Puali, Thanks. Somehow google failed me when I was looking for a clear explanation. --Lew On Mar 24, 2009, at 3:55 PM, Pauli Virtanen wrote: > Tue, 24 Mar 2009 15:44:07 -0400, Lewis E. Randerson wrote: > >> Puali, >> >> I was wondering why the there seemed to be two uses for parens in the >> string. I now have the braces in. The issue now I suspect is the >> stuff after (?P. That is where I am really confused. > > This is probably best answered by the documentation: > > http://docs.python.org/library/re.html > > In short, the (?P<...>) construct defines a named group. > > -- > Pauli Virtanen > > _______________________________________________ > Numpy-discussion mailing list > Numpy-discussion at scipy.org > http://mail.scipy.org/mailman/listinfo/numpy-discussion ---------------------------------------------------------------------- Lewis E. Randerson DOE Princeton University Plasma Physics Laboratory, Princeton University, James Forrestal Campus 100 Stellarator Road, Princeton, NJ 08543 Work: 609/243-3134, Fax: 609/243-3086, PPPL Web: http://www.pppl.gov From Sul at hcp.med.harvard.edu Tue Mar 24 16:33:47 2009 From: Sul at hcp.med.harvard.edu (Sul, Young L) Date: Tue, 24 Mar 2009 16:33:47 -0400 Subject: [Numpy-discussion] more on that missing directory In-Reply-To: <5b8d13220903201819n7c46f62bgc603fffd8d418666@mail.gmail.com> References: , <5b8d13220903201819n7c46f62bgc603fffd8d418666@mail.gmail.com> Message-ID: Hi, The following is a list of files that have ^Ms in them. I did an svn checkout of the latest stuff today and ran "grep -Ril ^M *" in numpy and scipy scipy seems to have more embedded feeds than numpy. It also seems that whatever 'builds' the files to be compiled in scipy is the thing that is embedding the linefeeds prior to the compile. (I tried cleaning them out, but re-running numscons overwrites the cleaned up files). I have attached to this message a list of the files that have embedded feeds still in them. ________________________________ From: numpy-discussion-bounces at scipy.org [numpy-discussion-bounces at scipy.org] On Behalf Of David Cournapeau [cournape at gmail.com] Sent: Friday, March 20, 2009 9:19 PM To: Discussion of Numerical Python Subject: Re: [Numpy-discussion] more on that missing directory Hi, 2009/3/21 Sul, Young L : > Hi, > > (I?m on a Solaris 10 intel system, and am trying to use the sunperf > libraries) > An immediate problem is that some files seem to have embedded ^Ms in them. I > had to clean and rerun a few times before numpy installed. Could you tell me what those files are ? In numscons or numpy ? Those files should be fixed, neither numpy or numscons should have any CRF types of end of lines. > Now, I am trying to install scipy via numscons. It looked like it was going > to work, but it barfed. From the output it looks like whatever is building > the compile commands forgot to add the cc command at the beginning of the > line (see below. I?ve highlighted the barf). Yes, it is a bug in scons - its way of looking for compilers is buggy on solaris. 
I will look into it later today (I don't have a solaris installation in handy ATM), cheers, David _______________________________________________ Numpy-discussion mailing list Numpy-discussion at scipy.org http://mail.scipy.org/mailman/listinfo/numpy-discussion -------------- next part -------------- A non-text attachment was scrubbed... Name: scipy_lines Type: application/octet-stream Size: 4478 bytes Desc: scipy_lines URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: numpy_lines Type: application/octet-stream Size: 2642 bytes Desc: numpy_lines URL: From dyamins at gmail.com Tue Mar 24 18:09:38 2009 From: dyamins at gmail.com (Dan Yamins) Date: Tue, 24 Mar 2009 18:09:38 -0400 Subject: [Numpy-discussion] Seg fault from numpy.rec.fromarrays Message-ID: <15e4667e0903241509t50cfa9f2t3c2653fd138a8d4@mail.gmail.com> Hi all, I'm having a seg fault error from numpy.rec.fromarrays. I have a python list L = [Col1, Col2] where Col1 and Col2 are python lists of short strings (the max length of Col1 strings is 4 chars and max length of Col2 is 7 chars). The len of Col1 and Col2 is about 11500. Then I attempt >>> A = numpy.rec.fromarrays(L,names = ['Aggregates','__color__']) This should produce a numpy record array with two columns, one called 'Aggregates', the other called '__color__'. In and of it self, this runs. But then when I attempt to look at the contents of A, running the __getitem__ method, say by doing: >>> print A or >>> A.tolist() or >>> A[0] then I get a seg fault error. (Acutally, the segfault only occurs about 80% of the time I run these commands.) However, the __getitem__ method does work to produce attribute arrays from column names , e.g. >>> Ag = A['Aggregates'] or >>> col = A['__color__'] both produce (apparently) completely correct and working numpy arrays. Moreover, If I pickle the object A before looking at it, everything works fine. E.g. if I execute: >>> Hold_A = A.dumps() >>> A = numpy.loads(Hold_A) then A seems to work fine. (Also: pickling the list L = [Col1,Col2] first, before running the numpy.rec.fromarrays method, does not always fix the segfault.) Can someone explain why this might be happening, and how I can fix it (without having to use the pickling hack)? Thanks, Dan -------------- next part -------------- An HTML attachment was scrubbed... URL: From a.schmolck at gmx.net Tue Mar 24 19:08:46 2009 From: a.schmolck at gmx.net (Alexander Schmolck) Date: Wed, 25 Mar 2009 00:08:46 +0100 Subject: [Numpy-discussion] [ANN] mlabwrap 1.0.1 Message-ID: <20090324230846.24270@gmx.net> Mlabwrap allows pythonistas to interface to Matlab(tm) in a very straightforward fashion: >>> from mlabwrap import mlab >>> mlab.eig([[0,1],[1,1]]) array([[-0.61803399], [ 1.61803399]]) More at . Mlabwrap 1.0.1 is just a maintenance release that fixes a few bugs and simplifies installation (no more LD_LIBRARY_PATH hassles). No future (non-bugfix) releases of mlabwrap are currently planned, but if and when I find the time to finish overhauling and extending the API I will make an official release of scikits.mlabwrap, which probably won't be 100% backwards compatible. 'as -- Psssst! Schon vom neuen GMX MultiMessenger geh?rt? Der kann`s mit allen: http://www.gmx.net/de/go/multimessenger01 From pav at iki.fi Tue Mar 24 19:18:10 2009 From: pav at iki.fi (Pauli Virtanen) Date: Tue, 24 Mar 2009 23:18:10 +0000 (UTC) Subject: [Numpy-discussion] Doc update for 1.3.0? 
References: <49C5C7F3.3010609@ar.media.kyoto-u.ac.jp> Message-ID: Sun, 22 Mar 2009 14:09:07 +0900, David Cournapeau wrote: [clip] > You can backport as many docstring changes as possible, since there is > little chance to break anything just from docstring. Merge to trunk is here: http://projects.scipy.org/numpy/changeset/6725 I'll backport it and some other doc fixes to 1.3.x tomorrow (if no objections): git://github.com/pv/numpy-work.git work-1.3.x -- Pauli Virtanen From brennan.williams at visualreservoir.com Tue Mar 24 19:29:34 2009 From: brennan.williams at visualreservoir.com (Brennan Williams) Date: Wed, 25 Mar 2009 12:29:34 +1300 Subject: [Numpy-discussion] trying to speed up the following.... Message-ID: <49C96CDE.4090904@visualreservoir.com> I have an array (porvatt.yarray) of ni*nj*nk values. I want to create two further arrays. activeatt.yarray is of size ni*nj*nk and is a pointer array to an active cell number. If a cell is inactive then its activeatt.yarray value will be 0 ijkatt.yarray is of size nactive, the number of active cells (which I already know). ijkatt.yarray holds the ijk cell number for each active cell. My code looks something like... activeatt.yarray=zeros(ncells,dtype=int) ijkatt.yarray=zeros(nactivecells,dtype=int) iactive=-1 ni=currentgrid.ni nj=currentgrid.nj nk=currentgrid.nk for ijk in range(0,ni*nj*nk): if porvatt.yarray[ijk]>0: iactive+=1 activeatt.yarray[ijk]=iactive ijkatt.yarray[iactive]=ijk I may often have a million+ cells. So the code above is slow. How can I speed it up? TIA Brennan From robert.kern at gmail.com Tue Mar 24 19:39:49 2009 From: robert.kern at gmail.com (Robert Kern) Date: Tue, 24 Mar 2009 18:39:49 -0500 Subject: [Numpy-discussion] trying to speed up the following.... In-Reply-To: <49C96CDE.4090904@visualreservoir.com> References: <49C96CDE.4090904@visualreservoir.com> Message-ID: <3d375d730903241639s2a6cbf42o2eeed2ac376be3ad@mail.gmail.com> On Tue, Mar 24, 2009 at 18:29, Brennan Williams wrote: > I have an array (porvatt.yarray) of ni*nj*nk values. > I want to create two further arrays. > > activeatt.yarray is of size ni*nj*nk and is a pointer array to an active > cell number. If a cell is inactive then its activeatt.yarray value will be 0 > > ijkatt.yarray is of size nactive, the number of active cells (which I > already know). ijkatt.yarray holds the ijk cell number for each active cell. > > > My code looks something like... > > ? ? ? ? ? activeatt.yarray=zeros(ncells,dtype=int) > ? ? ? ? ? ijkatt.yarray=zeros(nactivecells,dtype=int) > > ? ? ? ? ? ?iactive=-1 > ? ? ? ? ? ?ni=currentgrid.ni > ? ? ? ? ? ?nj=currentgrid.nj > ? ? ? ? ? ?nk=currentgrid.nk > ? ? ? ? ? ?for ijk in range(0,ni*nj*nk): > ? ? ? ? ? ? ?if porvatt.yarray[ijk]>0: > ? ? ? ? ? ? ? ?iactive+=1 > ? ? ? ? ? ? ? ?activeatt.yarray[ijk]=iactive > ? ? ? ? ? ? ? ?ijkatt.yarray[iactive]=ijk > > I may often have a million+ cells. > So the code above is slow. > How can I speed it up? mask = (porvatt.yarray.flat > 0) ijkatt.yarray = np.nonzero(mask) # This is not what your code does, but what I think you want. # Where porvatt.yarray is inactive, activeatt.yarray is -1. # 0 might be an active cell. activeatt.yarray = np.empty(ncells, dtype=int) activeatt.yarray.fill(-1) activeatt.yarray[mask] = ijkatt.yarray -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." 
-- Umberto Eco From charlesr.harris at gmail.com Tue Mar 24 20:41:45 2009 From: charlesr.harris at gmail.com (Charles R Harris) Date: Tue, 24 Mar 2009 18:41:45 -0600 Subject: [Numpy-discussion] Seg fault from numpy.rec.fromarrays In-Reply-To: <15e4667e0903241509t50cfa9f2t3c2653fd138a8d4@mail.gmail.com> References: <15e4667e0903241509t50cfa9f2t3c2653fd138a8d4@mail.gmail.com> Message-ID: 2009/3/24 Dan Yamins > Hi all, > > I'm having a seg fault error from numpy.rec.fromarrays. > > I have a python list > L = [Col1, Col2] > where Col1 and Col2 are python lists of short strings (the max length of > Col1 strings is 4 chars and max length of Col2 is 7 chars). The len of Col1 > and Col2 is about 11500. > > Then I attempt > >>> A = numpy.rec.fromarrays(L,names = ['Aggregates','__color__']) > > This should produce a numpy record array with two columns, one called > 'Aggregates', the other called '__color__'. > > In and of it self, this runs. But then when I attempt to look at the > contents of A, running the __getitem__ method, say by doing: > > >>> print A > or > >>> A.tolist() > or > >>> A[0] > > then I get a seg fault error. (Acutally, the segfault only occurs about > 80% of the time I run these commands.) > > However, the __getitem__ method does work to produce attribute arrays from > column names , e.g. > > >>> Ag = A['Aggregates'] > > or > > >>> col = A['__color__'] > > both produce (apparently) completely correct and working numpy arrays. > > Moreover, If I pickle the object A before looking at it, everything works > fine. E.g. if I execute: > > >>> Hold_A = A.dumps() > >>> A = numpy.loads(Hold_A) > > then A seems to work fine. > > (Also: pickling the list L = [Col1,Col2] first, before running the > numpy.rec.fromarrays method, does not always fix the segfault.) > > > Can someone explain why this might be happening, and how I can fix it > (without having to use the pickling hack)? > What architecture/operating system is this? Chuck -------------- next part -------------- An HTML attachment was scrubbed... URL: From dyamins at gmail.com Tue Mar 24 20:47:04 2009 From: dyamins at gmail.com (Dan Yamins) Date: Tue, 24 Mar 2009 20:47:04 -0400 Subject: [Numpy-discussion] Seg fault from numpy.rec.fromarrays In-Reply-To: References: <15e4667e0903241509t50cfa9f2t3c2653fd138a8d4@mail.gmail.com> Message-ID: <15e4667e0903241747g1ae1b2bs86e984ead2cc9a53@mail.gmail.com> > >> Can someone explain why this might be happening, and how I can fix it >> (without having to use the pickling hack)? >> > > What architecture/operating system is this? > Sorry, I should have included this information before. it's OS 10.5.6. the is a 64-bit intel core-2 duo, but the python is the standard OS X 10.5 binary from the python.org website, which is a 32-bit framework build. It's numpy 1.3, which I built on this machine. (The same problem happens with earlier version of numpy as well, I tried the same computation using numpy 1.1 earlier.) Dan -------------- next part -------------- An HTML attachment was scrubbed... URL: From brennan.williams at visualreservoir.com Wed Mar 25 01:09:09 2009 From: brennan.williams at visualreservoir.com (Brennan Williams) Date: Wed, 25 Mar 2009 18:09:09 +1300 Subject: [Numpy-discussion] trying to speed up the following.... 
In-Reply-To: <3d375d730903241639s2a6cbf42o2eeed2ac376be3ad@mail.gmail.com> References: <49C96CDE.4090904@visualreservoir.com> <3d375d730903241639s2a6cbf42o2eeed2ac376be3ad@mail.gmail.com> Message-ID: <49C9BC75.1020107@visualreservoir.com> Robert Kern wrote: > On Tue, Mar 24, 2009 at 18:29, Brennan Williams > wrote: > >> I have an array (porvatt.yarray) of ni*nj*nk values. >> I want to create two further arrays. >> >> activeatt.yarray is of size ni*nj*nk and is a pointer array to an active >> cell number. If a cell is inactive then its activeatt.yarray value will be 0 >> >> ijkatt.yarray is of size nactive, the number of active cells (which I >> already know). ijkatt.yarray holds the ijk cell number for each active cell. >> >> >> My code looks something like... >> >> activeatt.yarray=zeros(ncells,dtype=int) >> ijkatt.yarray=zeros(nactivecells,dtype=int) >> >> iactive=-1 >> ni=currentgrid.ni >> nj=currentgrid.nj >> nk=currentgrid.nk >> for ijk in range(0,ni*nj*nk): >> if porvatt.yarray[ijk]>0: >> iactive+=1 >> activeatt.yarray[ijk]=iactive >> ijkatt.yarray[iactive]=ijk >> >> I may often have a million+ cells. >> So the code above is slow. >> How can I speed it up? >> > > mask = (porvatt.yarray.flat > 0) > ijkatt.yarray = np.nonzero(mask) > > # This is not what your code does, but what I think you want. > # Where porvatt.yarray is inactive, activeatt.yarray is -1. > # 0 might be an active cell. > activeatt.yarray = np.empty(ncells, dtype=int) > activeatt.yarray.fill(-1) > activeatt.yarray[mask] = ijkatt.yarray > > > Thanks. Concise & fast. This is what I've got so far (minor mods from the above).... from numpy import * ... mask=porvatt.yarray>0.0 ijkatt.yarray=nonzero(mask)[0] activeindices=arange(0,ijkatt.yarray.size) activeatt.yarray = empty(ncells, dtype=int) activeatt.yarray.fill(-1) activeatt.yarray[mask] = activeindices I have... ijkatt.yarray=nonzero(mask)[0] because it looks like nonzero returns a tuple of arrays rather than an array. I used activeindices=arange(0,ijkatt.yarray.size) and activeatt.yarray[mask] = activeindices as I have 686000 cells of which 129881 are 'active' so my activeatt.yarray values range from -1 for inactive through 0 for the first active cell up to 129880 for the last active cell. About to test it out by replacing my old for loop. Looks like it will be about 20x faster for 1m cells. Brennan From robert.kern at gmail.com Wed Mar 25 01:44:22 2009 From: robert.kern at gmail.com (Robert Kern) Date: Wed, 25 Mar 2009 00:44:22 -0500 Subject: [Numpy-discussion] trying to speed up the following.... In-Reply-To: <49C9BC75.1020107@visualreservoir.com> References: <49C96CDE.4090904@visualreservoir.com> <3d375d730903241639s2a6cbf42o2eeed2ac376be3ad@mail.gmail.com> <49C9BC75.1020107@visualreservoir.com> Message-ID: <3d375d730903242244o7b37156akf50f222f1db6fe44@mail.gmail.com> On Wed, Mar 25, 2009 at 00:09, Brennan Williams wrote: > Robert Kern wrote: >> On Tue, Mar 24, 2009 at 18:29, Brennan Williams >> wrote: >> >>> I have an array (porvatt.yarray) of ni*nj*nk values. >>> I want to create two further arrays. >>> >>> activeatt.yarray is of size ni*nj*nk and is a pointer array to an active >>> cell number. If a cell is inactive then its activeatt.yarray value will be 0 >>> >>> ijkatt.yarray is of size nactive, the number of active cells (which I >>> already know). ijkatt.yarray holds the ijk cell number for each active cell. >>> >>> >>> My code looks something like... >>> >>> ? ? ? ? ? activeatt.yarray=zeros(ncells,dtype=int) >>> ? ? ? ? ? 
>>>   ijkatt.yarray=zeros(nactivecells,dtype=int)
>>>
>>>   iactive=-1
>>>   ni=currentgrid.ni
>>>   nj=currentgrid.nj
>>>   nk=currentgrid.nk
>>>   for ijk in range(0,ni*nj*nk):
>>>     if porvatt.yarray[ijk]>0:
>>>       iactive+=1
>>>       activeatt.yarray[ijk]=iactive
>>>       ijkatt.yarray[iactive]=ijk
>>>
>>> I may often have a million+ cells.
>>> So the code above is slow.
>>> How can I speed it up?
>>
>> mask = (porvatt.yarray.flat > 0)
>> ijkatt.yarray = np.nonzero(mask)
>>
>> # This is not what your code does, but what I think you want.
>> # Where porvatt.yarray is inactive, activeatt.yarray is -1.
>> # 0 might be an active cell.
>> activeatt.yarray = np.empty(ncells, dtype=int)
>> activeatt.yarray.fill(-1)
>> activeatt.yarray[mask] = ijkatt.yarray
>
> Thanks. Concise & fast. This is what I've got so far (minor mods from
> the above)....
>
> from numpy import *
> ...
> mask=porvatt.yarray>0.0
> ijkatt.yarray=nonzero(mask)[0]
> activeindices=arange(0,ijkatt.yarray.size)
> activeatt.yarray = empty(ncells, dtype=int)
> activeatt.yarray.fill(-1)
> activeatt.yarray[mask] = activeindices
>
> I have...
>
> ijkatt.yarray=nonzero(mask)[0]
>
> because it looks like nonzero returns a tuple of arrays rather than an
> array.

Yes. Apologies.

> I used
>
> activeindices=arange(0,ijkatt.yarray.size)
>
> and
>
> activeatt.yarray[mask] = activeindices

Yes. You are correct.

-- 
Robert Kern

"I have come to believe that the whole world is an enigma, a harmless
enigma that is made terrible by our own mad attempt to interpret it as
though it had an underlying truth."
  -- Umberto Eco

From brennan.williams at visualreservoir.com  Wed Mar 25 04:00:48 2009
From: brennan.williams at visualreservoir.com (Brennan Williams)
Date: Wed, 25 Mar 2009 21:00:48 +1300
Subject: [Numpy-discussion] trying to speed up the following....
In-Reply-To: <3d375d730903242244o7b37156akf50f222f1db6fe44@mail.gmail.com>
References: <49C96CDE.4090904@visualreservoir.com>
	<3d375d730903241639s2a6cbf42o2eeed2ac376be3ad@mail.gmail.com>
	<49C9BC75.1020107@visualreservoir.com>
	<3d375d730903242244o7b37156akf50f222f1db6fe44@mail.gmail.com>
Message-ID: <49C9E4B0.1010607@visualreservoir.com>

Robert Kern wrote:
> On Wed, Mar 25, 2009 at 00:09, Brennan Williams wrote:
>> Robert Kern wrote:
>>> On Tue, Mar 24, 2009 at 18:29, Brennan Williams wrote:
>>>
>>>> I have an array (porvatt.yarray) of ni*nj*nk values.
>>>> I want to create two further arrays.
>>>>
>>>> activeatt.yarray is of size ni*nj*nk and is a pointer array to an active
>>>> cell number. If a cell is inactive then its activeatt.yarray value will be 0
>>>>
>>>> ijkatt.yarray is of size nactive, the number of active cells (which I
>>>> already know). ijkatt.yarray holds the ijk cell number for each active cell.
>>>>
>>>> My code looks something like...
>>>>
>>>>   activeatt.yarray=zeros(ncells,dtype=int)
>>>>   ijkatt.yarray=zeros(nactivecells,dtype=int)
>>>>
>>>>   iactive=-1
>>>>   ni=currentgrid.ni
>>>>   nj=currentgrid.nj
>>>>   nk=currentgrid.nk
>>>>   for ijk in range(0,ni*nj*nk):
>>>>     if porvatt.yarray[ijk]>0:
>>>>       iactive+=1
>>>>       activeatt.yarray[ijk]=iactive
>>>>       ijkatt.yarray[iactive]=ijk
>>>>
>>>> I may often have a million+ cells.
>>>> So the code above is slow.
>>>> How can I speed it up?
>>>
>>> mask = (porvatt.yarray.flat > 0)
>>> ijkatt.yarray = np.nonzero(mask)
>>>
>>> # This is not what your code does, but what I think you want.
>>> # Where porvatt.yarray is inactive, activeatt.yarray is -1.
>>> # 0 might be an active cell.
>>> activeatt.yarray = np.empty(ncells, dtype=int)
>>> activeatt.yarray.fill(-1)
>>> activeatt.yarray[mask] = ijkatt.yarray
>>
>> Thanks. Concise & fast. This is what I've got so far (minor mods from
>> the above)....
>>
>> from numpy import *
>> ...
>> mask=porvatt.yarray>0.0
>> ijkatt.yarray=nonzero(mask)[0]
>> activeindices=arange(0,ijkatt.yarray.size)
>> activeatt.yarray = empty(ncells, dtype=int)
>> activeatt.yarray.fill(-1)
>> activeatt.yarray[mask] = activeindices
>>
>> I have...
>>
>> ijkatt.yarray=nonzero(mask)[0]
>>
>> because it looks like nonzero returns a tuple of arrays rather than an
>> array.
>
> Yes. Apologies.

Apology accepted. Don't do it again.

On a more serious note, it is clear that, as expected, operating on
elements of an array inside a Python for loop is slow for large arrays.
Soon I will be writing an import interface to read corner point grid
geometries and I'm currently looking at vtk unstructured grids etc.
Most of the numpy vectorization is aimed at relatively simply structured
arrays, on the basis that you'll never meet everyone's needs/data
structures. So I presume that if I find I have a bottleneck in my code
which looks specific to my data structures, I should then look at
offloading that to C or Fortran? (assuming I can't find it in numpy or
scipy). I'm already doing this to read in data from Fortran binary files,
although I actually decided to code it in C rather than use Fortran.

>> I used
>>
>> activeindices=arange(0,ijkatt.yarray.size)
>>
>> and
>>
>> activeatt.yarray[mask] = activeindices
>
> Yes. You are correct.

From robert.kern at gmail.com  Wed Mar 25 04:03:45 2009
From: robert.kern at gmail.com (Robert Kern)
Date: Wed, 25 Mar 2009 03:03:45 -0500
Subject: [Numpy-discussion] trying to speed up the following....
In-Reply-To: <49C9E4B0.1010607@visualreservoir.com>
References: <49C96CDE.4090904@visualreservoir.com>
	<3d375d730903241639s2a6cbf42o2eeed2ac376be3ad@mail.gmail.com>
	<49C9BC75.1020107@visualreservoir.com>
	<3d375d730903242244o7b37156akf50f222f1db6fe44@mail.gmail.com>
	<49C9E4B0.1010607@visualreservoir.com>
Message-ID: <3d375d730903250103n444497acxb8a160f0f6b6979@mail.gmail.com>

On Wed, Mar 25, 2009 at 03:00, Brennan Williams wrote:
> On a more serious note, it is clear that, as expected, operating on
> elements of an array inside a Python for loop is slow for large arrays.
> Soon I will be writing an import interface to read corner point grid
> geometries and I'm currently looking at vtk unstructured grids etc.
> Most of the numpy vectorization is aimed at relatively simply structured
> arrays on the basis that you'll never meet everyone's needs/data structures.
> So I presume that if I find I have a bottleneck in my code which looks
> specific to my data structures I should then look at offloading that to
> C or Fortran? (assuming I can't find it in numpy or scipy).

If you're comfortable with those languages, and need the speed, yes.

-- 
Robert Kern

"I have come to believe that the whole world is an enigma, a harmless
enigma that is made terrible by our own mad attempt to interpret it as
though it had an underlying truth."
  -- Umberto Eco
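[For readers skimming this thread: the masking idiom above can be
condensed into a small self-contained sketch. The cell values here are
invented purely for illustration.]

    import numpy as np

    ncells = 10
    porv = np.array([0.0, 1.5, 0.0, 2.0, 0.3, 0.0, 0.0, 4.1, 0.0, 0.9])

    mask = porv > 0.0                    # True for active cells
    ijk = np.nonzero(mask)[0]            # ijk index of each active cell
    active = np.empty(ncells, dtype=int)
    active.fill(-1)                      # -1 marks inactive cells
    active[mask] = np.arange(ijk.size)   # active-cell number per active ijk

    # round trip: active[ijk[k]] == k for every active cell k
    assert (active[ijk] == np.arange(ijk.size)).all()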
From jesper.webmail at gmail.com  Wed Mar 25 07:04:43 2009
From: jesper.webmail at gmail.com (Jesper Larsen)
Date: Wed, 25 Mar 2009 12:04:43 +0100
Subject: [Numpy-discussion] Creating array containing empty lists
Message-ID: 

Hi numpy people,

I have a problem with array broadcasting for object arrays and lists. I
would like to create a numpy array containing empty lists (initially - I
will append to them later):

import numpy as npy
a = npy.empty((2), dtype=npy.object_)

# Works fine:
for i in range(len(a)):
  a[i] = []
print a

# Does not work:
a[:] = []
a[:] = list()

Is it possible to broadcast a list to all elements of a numpy array? Or
will I have to loop through it and set the individual elements?

Best regards,
Jesper

From pav at iki.fi  Wed Mar 25 07:29:14 2009
From: pav at iki.fi (Pauli Virtanen)
Date: Wed, 25 Mar 2009 11:29:14 +0000 (UTC)
Subject: [Numpy-discussion] Creating array containing empty lists
References: 
Message-ID: 

Wed, 25 Mar 2009 12:04:43 +0100, Jesper Larsen wrote:
> I have a problem with array broadcasting for object arrays and lists. I
> would like to create a numpy array containing empty lists (initially -
> I will append to them later):
[clip]
> Is it possible to broadcast a list to all elements of a numpy array? Or
> will I have to loop through it and set the individual elements?

You can use the `fill` method, which will broadcast correctly:

>>> import numpy as np
>>> a = np.empty((5,), dtype=np.object_)
>>> a.fill([])
>>> a
array([[], [], [], [], []], dtype=object)

But note that it's now the *same* list in a[0] and a[1]:

>>> a[0].append('foo')
>>> a
array([['foo'], ['foo'], ['foo'], ['foo'], ['foo']], dtype=object)

You can avoid the loop by vectorizing the list constructor:

>>> a = np.empty((5,), dtype=np.object_)
>>> a.fill([])
>>> a = np.frompyfunc(list,1,1)(a)
>>> a
array([[], [], [], [], []], dtype=object)
>>> a[0].append('foo')
>>> a
array([['foo'], [], [], [], []], dtype=object)

Possibly not any faster or cleaner than the for loop.

-- 
Pauli Virtanen

From kfrancoi at gmail.com  Wed Mar 25 07:39:33 2009
From: kfrancoi at gmail.com (=?ISO-8859-1?Q?Kevin_Fran=E7oisse?=)
Date: Wed, 25 Mar 2009 12:39:33 +0100
Subject: [Numpy-discussion] SWIG and numpy.i
In-Reply-To: <49A4F2A3-1E5A-45F9-9A50-3F8460604D88@sandia.gov>
References: <36c2e0ca0903240733x2d3e4d44iaa6afd8d53c3ac69@mail.gmail.com>
	<49A4F2A3-1E5A-45F9-9A50-3F8460604D88@sandia.gov>
Message-ID: <36c2e0ca0903250439r2b363873qbabe4722b6445b8f@mail.gmail.com>

Thanks Bill, it helps me a lot! My function works fine now.

But I encounter another problem. This time with a NumPy array of 2
dimensions. Here is the function I want to use:

/****************/
double matSum(double** mat, int n, int m){
    int i,j;
    double sum = 0.0;
    for (i=0;i<n;i++){
        for (j=0;j<m;j++){
            sum += mat[i][j];
        }
    }
    return sum;
}
/****************/

I supposed that the typemap to use is the following:

%apply (double* IN_ARRAY2, int DIM1, int DIM2) {(double** mat, int n, int m)};

But it is not working. Of course, my typemap assignment is not compatible
with my function parameters. I tried several ways of using a two
dimensional array but I'm not sure what is the best way to do it?

Thanks

---
Kevin Françoisse
Ph.D. at Machine Learning Group at UCL
Belgium
kevin.francoisse at uclouvain.be

On Tue, Mar 24, 2009 at 6:13 PM, Bill Spotz wrote:

> Kevin,
>
> You need to declare vecSum() *after* you %include "numpy.i" and use the
> %apply directive.  Based on what you have, I think you can just get rid
> of the "extern double vecSum(...)".  I don't see what purpose it serves.
> As is, it is telling swig to wrap vecSum() before you have set up your
> numpy typemaps.
>
> On Mar 24, 2009, at 10:33 AM, Kevin Françoisse wrote:
>
>> Hi everyone,
>>
>> I have been using NumPy for a couple of months now, as part of my
>> research project at the university. But now, I have to use a big C
>> library I wrote myself in a python project. So I chose to use SWIG for
>> the interface between both my python script and my C library.
>> To make things more comprehensible, I wrote a small C method that
>> illustrates my problem:
>>
>> /* matrix.c */
>>
>> #include <stdio.h>
>> #include <stdlib.h>
>> /* Compute the sum of a vector of reals */
>> double vecSum(int* vec,int m){
>>     int i;
>>     double sum = 0.0;
>>
>>     for(i=0;i<m;i++){
>>         sum += vec[i];
>>     }
>>     return sum;
>> }
>>
>> /***/
>>
>> /* matrix.h */
>>
>> double vecSum(int* vec,int m);
>>
>> /***/
>>
>> /* matrix.i */
>>
>> %module matrix
>> %{
>> #define SWIG_FILE_WITH_INIT
>> #include "matrix.h"
>> %}
>>
>> extern double vecSum(int* vec, int m);
>>
>> %include "numpy.i"
>>
>> %init %{
>> import_array();
>> %}
>>
>> %apply (int* IN_ARRAY1, int DIM1) {(int* vec, int m)};
>> %include "matrix.h"
>>
>> /***/
>>
>> I'm using a python script to compile my swig interface and my C files
>> (running Mac OS X 10.5)
>>
>> /* matrixSetup.py */
>>
>> from distutils.core import setup, Extension
>> import numpy
>>
>> setup(name='matrix', version='1.0', ext_modules =[Extension('_matrix',
>>       ['matrix.c','matrix.i'],
>>       include_dirs = [numpy.get_include(),'.'])])
>>
>> /***/
>>
>> Everything seems to work fine! But when I test my wrapped module in
>> python with a small NumPy array, here is what I get:
>>
>> >>> import matrix
>> >>> from numpy import *
>> >>> a = arange(10)
>> >>> matrix.vecSum(a,a.shape[0])
>> Traceback (most recent call last):
>>   File "<stdin>", line 1, in <module>
>> TypeError: in method 'vecSum', argument 1 of type 'int *'
>>
>> How can I tell SWIG that my Integer NumPy array should represent an
>> int* array in C?
>>
>> Thank you very much,
>>
>> Kevin
>
> ** Bill Spotz                                              **
> ** Sandia National Laboratories  Voice: (505)845-0170      **
> ** P.O. Box 5800                 Fax:   (505)284-0154      **
> ** Albuquerque, NM 87185-0370    Email: wfspotz at sandia.gov **

From stefan at sun.ac.za  Wed Mar 25 07:43:03 2009
From: stefan at sun.ac.za (=?ISO-8859-1?Q?St=E9fan_van_der_Walt?=)
Date: Wed, 25 Mar 2009 13:43:03 +0200
Subject: [Numpy-discussion] Creating array containing empty lists
In-Reply-To: 
References: 
Message-ID: <9457e7c80903250443t1e179a47q22b78d7edc3fcd02@mail.gmail.com>

2009/3/25 Jesper Larsen :
> import numpy as npy
> a = npy.empty((2), dtype=npy.object_)
>
> # Works fine:
> for i in range(len(a)):
>   a[i] = []
> print a
>
> # Does not work:
> a[:] = []
> a[:] = list()

Slightly simpler would be:

In [26]: x = np.empty((2,), dtype=object)

In [27]: x[:] = [[]] * len(x)

Stéfan

From delcampo at stats.ox.ac.uk  Wed Mar 25 08:17:18 2009
From: delcampo at stats.ox.ac.uk (F. David del Campo Hill)
Date: Wed, 25 Mar 2009 12:17:18 -0000
Subject: [Numpy-discussion] Win32 MSI
In-Reply-To: <5b8d13220903241204i6a78de96p555fbd48e41d2186@mail.gmail.com>
References: <4E30CAE7A3A7B242AFBBC418520EA0D644EBAD@exchange1.stats.ox.ac.uk>
	<5b8d13220903241204i6a78de96p555fbd48e41d2186@mail.gmail.com>
Message-ID: <4E30CAE7A3A7B242AFBBC418520EA0D644EBEB@exchange1.stats.ox.ac.uk>

Dear David,

Without going into the inherent benefits of the MSI (Microsoft Installer)
architecture over other EXE setup formats, its main advantage is that MSI
packages can be added to Group Policy Objects in Active Directory
(Windows domain controller database); this means that, as long as a piece
of software comes in MSI format, it can be automatically installed on
Windows systems from our central servers without need for our
intervention. On top of that, Microsoft have created an open-source
(no kidding!)
package called WIX (Windows Installer XML; http://wix.sourceforge.net/)
which allows you to create MSI packages for free. It does have
conditional execution, though I have no idea if it can detect different
types of processors.

In my case, I need to install Python and Numpy on 30+ Windows systems; I
have found Python already comes with MSI packages, and would also like
to get a Numpy MSI, otherwise I will have to manually install it on all
the systems. As far as I am concerned, I do not need the win32 superpack
(all my systems are similar), and if there were different MSI packages
for different processors I would not mind; it just has to be MSI.
Sometimes, EXE setup packages are just MSI packages wrapped in an EXE
file; that is why I tried to extract the files from your superpack
(without luck).

Note: I do not work for Microsoft or receive any money from them; I am
just an IT officer one of whose users needs Numpy for teaching. I do not
know what Numpy does or doesn't do, I just need it installed fast.

Thank you for your help.

Yours,

David

-----Original Message-----
From: numpy-discussion-bounces at scipy.org
[mailto:numpy-discussion-bounces at scipy.org] On Behalf Of David Cournapeau
Sent: 24 March 2009 19:05
To: Discussion of Numerical Python
Subject: Re: [Numpy-discussion] Win32 MSI

On Wed, Mar 25, 2009 at 3:13 AM, F. David del Campo Hill wrote:
> Dear Numpy Forum,
>
>        I have found the Win64 (Windows x64) Numpy MSI installer in
> Sourceforge (numpy-1.3.0b1.win-amd64-py2.6.msi), but cannot find the
> Win32 (Windows i386) one. I have tried unpacking the Win32 EXE installer
> package (numpy-1.3.0b1-win32-superpack-python2.6.exe) to see if the MSI
> installer could be found inside, but without luck. Does the package I
> look for exist, and if so, where could someone point me to where I can
> download it from?

No, it does not. The problem is that I need to add a way to execute .msi
from nsis (nsis is the software I use to build the superpack), and I did
not find a way when I tried - but it should be possible.

Now, I am not so familiar with msi: what does it bring compared to .exe ?
Would an exe installing a .msi solve your problems ? (windows64 has an
msi because 64 bits implies SSE2, and as such we don't need to check for
CPUs w/o SSE).

cheers,

David

From david at ar.media.kyoto-u.ac.jp  Wed Mar 25 08:18:59 2009
From: david at ar.media.kyoto-u.ac.jp (David Cournapeau)
Date: Wed, 25 Mar 2009 21:18:59 +0900
Subject: Re: [Numpy-discussion] Win32 MSI
In-Reply-To: <4E30CAE7A3A7B242AFBBC418520EA0D644EBEB@exchange1.stats.ox.ac.uk>
References: <4E30CAE7A3A7B242AFBBC418520EA0D644EBAD@exchange1.stats.ox.ac.uk>
	<5b8d13220903241204i6a78de96p555fbd48e41d2186@mail.gmail.com>
	<4E30CAE7A3A7B242AFBBC418520EA0D644EBEB@exchange1.stats.ox.ac.uk>
Message-ID: <49CA2133.7040806@ar.media.kyoto-u.ac.jp>

Hi David,

F. David del Campo Hill wrote:
> Sometimes, EXE setup packages are just MSI packages wrapped in an EXE
> file, that is why I tried to extract the files from your superpack
> (without luck).

Currently, with the superpack installer, the individual per arch
installers can be extracted easily from the command line. If the
superpack used MSI internally, would that solve your problem ?
I am worried about distributing the per arch msi directly, but someone
who would extract the msi from the superpack installer would be a
power-user/administrator.

cheers,

David

From delcampo at stats.ox.ac.uk  Wed Mar 25 08:50:27 2009
From: delcampo at stats.ox.ac.uk (F. David del Campo Hill)
Date: Wed, 25 Mar 2009 12:50:27 -0000
Subject: Re: [Numpy-discussion] Win32 MSI
In-Reply-To: <49CA2133.7040806@ar.media.kyoto-u.ac.jp>
References: <4E30CAE7A3A7B242AFBBC418520EA0D644EBAD@exchange1.stats.ox.ac.uk>
	<5b8d13220903241204i6a78de96p555fbd48e41d2186@mail.gmail.com>
	<4E30CAE7A3A7B242AFBBC418520EA0D644EBEB@exchange1.stats.ox.ac.uk>
	<49CA2133.7040806@ar.media.kyoto-u.ac.jp>
Message-ID: <4E30CAE7A3A7B242AFBBC418520EA0D644EBEE@exchange1.stats.ox.ac.uk>

Dear David,

I did not have any problems extracting the three EXE installers
(numpy-1.3.0b1-nosse.exe, numpy-1.3.0b1-sse2.exe, numpy-1.3.0b1-sse3.exe)
from the superpack (7zip can do that with a right-click); it was when I
tried to extract the files inside the per-arch installers that I realized
they did not have MSI packages inside, and I could not go any further.
So, yes, if the three architecture installers were MSI files, that would
solve my problems, as I would be able to extract and install them
separately.

Also (and pardon me if this is a stupid question), wouldn't the non-SSE
installer work anywhere (albeit more slowly)?

Thanks,

David

-----Original Message-----
From: numpy-discussion-bounces at scipy.org
[mailto:numpy-discussion-bounces at scipy.org] On Behalf Of David Cournapeau
Sent: 25 March 2009 12:19
To: Discussion of Numerical Python
Subject: Re: [Numpy-discussion] Win32 MSI

Hi David,

F. David del Campo Hill wrote:
> Sometimes, EXE setup packages are just MSI packages wrapped in an EXE
> file, that is why I tried to extract the files from your superpack
> (without luck).

Currently, with the superpack installer, the individual per arch
installers can be extracted easily from the command line. If the
superpack used MSI internally, would that solve your problem ?

I am worried about distributing the per arch msi directly, but someone
who would extract the msi from the superpack installer would be a
power-user/administrator.

cheers,

David

From wfspotz at sandia.gov  Wed Mar 25 09:03:16 2009
From: wfspotz at sandia.gov (Bill Spotz)
Date: Wed, 25 Mar 2009 09:03:16 -0400
Subject: Re: [Numpy-discussion] SWIG and numpy.i
In-Reply-To: <36c2e0ca0903250439r2b363873qbabe4722b6445b8f@mail.gmail.com>
References: <36c2e0ca0903240733x2d3e4d44iaa6afd8d53c3ac69@mail.gmail.com>
	<49A4F2A3-1E5A-45F9-9A50-3F8460604D88@sandia.gov>
	<36c2e0ca0903250439r2b363873qbabe4722b6445b8f@mail.gmail.com>
Message-ID: <81CAFBF8-B131-4910-B985-DE66022FA28D@sandia.gov>

Kevin,

In this instance, the best thing is to write a wrapper function that
calls your matSum() function, and takes a double* rather than a
double**.  You can %ignore the original function and %rename the wrapper
so that the python interface gets the name you want.

On Mar 25, 2009, at 7:39 AM, Kevin Françoisse wrote:

> Thanks Bill, it helps me a lot! My function works fine now.
>
> But I encounter another problem. This time with a NumPy array of 2
> dimensions.
> Here is the function I want to use:
>
> /****************/
> double matSum(double** mat, int n, int m){
>     int i,j;
>     double sum = 0.0;
>     for (i=0;i<n;i++){
>         for (j=0;j<m;j++){
>             sum += mat[i][j];
>         }
>     }
>     return sum;
> }
> /****************/
>
> I supposed that the typemap to use is the following:
>
> %apply (double* IN_ARRAY2, int DIM1, int DIM2) {(double** mat, int
> n, int m)};
>
> But it is not working. Of course, my typemap assignment is not
> compatible with my function parameters. I tried several ways of
> using a two dimensional array but I'm not sure what is the best way
> to do it?
>
> Thanks
>
> ---
> Kevin Françoisse
> Ph.D. at Machine Learning Group at UCL
> Belgium
> kevin.francoisse at uclouvain.be
>
> On Tue, Mar 24, 2009 at 6:13 PM, Bill Spotz wrote:
>
> Kevin,
>
> You need to declare vecSum() *after* you %include "numpy.i" and use
> the %apply directive.  Based on what you have, I think you can just
> get rid of the "extern double vecSum(...)".  I don't see what
> purpose it serves.  As is, it is telling swig to wrap vecSum()
> before you have set up your numpy typemaps.
>
> On Mar 24, 2009, at 10:33 AM, Kevin Françoisse wrote:
>
> Hi everyone,
>
> I have been using NumPy for a couple of months now, as part of my
> research project at the university. But now, I have to use a big C
> library I wrote myself in a python project. So I chose to use SWIG
> for the interface between both my python script and my C library. To
> make things more comprehensible, I wrote a small C method that
> illustrates my problem:
>
> /* matrix.c */
>
> #include <stdio.h>
> #include <stdlib.h>
> /* Compute the sum of a vector of reals */
> double vecSum(int* vec,int m){
>     int i;
>     double sum = 0.0;
>
>     for(i=0;i<m;i++){
>         sum += vec[i];
>     }
>     return sum;
> }
>
> /***/
>
> /* matrix.h */
>
> double vecSum(int* vec,int m);
>
> /***/
>
> /* matrix.i */
>
> %module matrix
> %{
> #define SWIG_FILE_WITH_INIT
> #include "matrix.h"
> %}
>
> extern double vecSum(int* vec, int m);
>
> %include "numpy.i"
>
> %init %{
> import_array();
> %}
>
> %apply (int* IN_ARRAY1, int DIM1) {(int* vec, int m)};
> %include "matrix.h"
>
> /***/
>
> I'm using a python script to compile my swig interface and my C
> files (running Mac OS X 10.5)
>
> /* matrixSetup.py */
>
> from distutils.core import setup, Extension
> import numpy
>
> setup(name='matrix', version='1.0', ext_modules
> =[Extension('_matrix', ['matrix.c','matrix.i'],
>       include_dirs = [numpy.get_include(),'.'])])
>
> /***/
>
> Everything seems to work fine! But when I test my wrapped module in
> python with a small NumPy array, here is what I get:
>
> >>> import matrix
> >>> from numpy import *
> >>> a = arange(10)
> >>> matrix.vecSum(a,a.shape[0])
> Traceback (most recent call last):
>   File "<stdin>", line 1, in <module>
> TypeError: in method 'vecSum', argument 1 of type 'int *'
>
> How can I tell SWIG that my Integer NumPy array should represent an
> int* array in C?
>
> Thank you very much,
>
> Kevin

** Bill Spotz                                              **
** Sandia National Laboratories  Voice: (505)845-0170      **
** P.O. Box 5800                 Fax:   (505)284-0154      **
** Albuquerque, NM 87185-0370    Email: wfspotz at sandia.gov **
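[A Python-side illustration of where this thread is heading: once a
flattened double* wrapper is exposed and, per Bill's %rename suggestion,
published under the name matSum, the numpy.i typemap takes a plain 2-D
array. This is a hedged sketch under those assumptions, not a test
against Kevin's actual module.]

    import numpy as np
    import matrix  # the SWIG-wrapped module from this thread (assumed built)

    a = np.arange(6.0).reshape(2, 3)  # any C-contiguous float64 2-D array
    # with %apply (double* IN_ARRAY2, int DIM1, int DIM2) in effect, the
    # dimension arguments are supplied automatically by the typemap:
    print matrix.matSum(a)            # expected: 15.0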
From bsouthey at gmail.com  Wed Mar 25 09:36:31 2009
From: bsouthey at gmail.com (Bruce Southey)
Date: Wed, 25 Mar 2009 08:36:31 -0500
Subject: Re: [Numpy-discussion] Seg fault from numpy.rec.fromarrays
In-Reply-To: <15e4667e0903241509t50cfa9f2t3c2653fd138a8d4@mail.gmail.com>
References: <15e4667e0903241509t50cfa9f2t3c2653fd138a8d4@mail.gmail.com>
Message-ID: <49CA335F.40501@gmail.com>

Dan Yamins wrote:
> Hi all,
>
> I'm having a seg fault error from numpy.rec.fromarrays.
>
> I have a python list
> L = [Col1, Col2]
> where Col1 and Col2 are python lists of short strings (the max length
> of Col1 strings is 4 chars and max length of Col2 is 7 chars). The
> len of Col1 and Col2 is about 11500.
>
> Then I attempt
> >>> A = numpy.rec.fromarrays(L, names = ['Aggregates','__color__'])

So what happens when you set the dtype here?

Since you have variable lengths of strings, numpy probably has guessed
incorrectly. I would also check that Col1 and Col2 are what you expect,
especially the minimum lengths, and that they really are strings.

Can you provide a small example that exhibits the problem?

Bruce

From dyamins at gmail.com  Wed Mar 25 10:05:18 2009
From: dyamins at gmail.com (Dan Yamins)
Date: Wed, 25 Mar 2009 10:05:18 -0400
Subject: Re: [Numpy-discussion] Seg fault from numpy.rec.fromarrays
In-Reply-To: <49CA335F.40501@gmail.com>
References: <15e4667e0903241509t50cfa9f2t3c2653fd138a8d4@mail.gmail.com>
	<49CA335F.40501@gmail.com>
Message-ID: <15e4667e0903250705oe1ce9afkc57a8945dce86ebd@mail.gmail.com>

> > Then I attempt
> > >>> A = numpy.rec.fromarrays(L, names = ['Aggregates','__color__'])
>
> So what happens when you set the dtype here?
>
> Since you have variable lengths of strings, numpy probably has guessed
> incorrectly. I would also check that Col1 and Col2 are what you expect,
> especially the minimum lengths, and that they really are strings.

These objects do indeed seem to be exactly what I expect. The dtype
appears to be exactly right. I had the same problem when I provided the
correct dtype by hand, and numpy seems to be guessing right when I don't.

> Can you provide a small example that exhibits the problem?

Well, that's the problem. I can't easily. The code that makes the example
that crashes is buried fairly deeply in some other routines. When I try
to produce the proximate problem manually, by creating what should (in
theory) be identical lists at the interpreter prompt, I don't get the
segfault problem.

I've attached a .png picture of a short interpreter session where I show
the result of what is returned by the routines -- the object whose
behavior causes the fault, how its dtype seems right, and how pickling it
solves the problem. If you can open attachments, perhaps looking at this
would be instructive.

I know the problem must have something to do with the way the lists Col1
and Col2 are getting created -- and that somehow, in a manner I don't
understand and that is not readily apparent by inspection, they are not
what I think they are. But I can't find out how they're wrong. I was
wondering if anybody had encountered similar problems that might help me
narrow my search.

I'm also willing to share all the code that leads up to the problem, if
that's the only way to identify the problem, although it would be a
somewhat laborious effort I think to subject you all to :)

Thanks,

Dan

-------------- next part --------------
A non-text attachment was scrubbed...
Name: Picture 4.png
Type: image/png
Size: 20501 bytes
Desc: not available
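[For anyone reproducing this report, a minimal sketch of the call under
discussion, with the dtype spelled out explicitly rather than guessed.
The column contents here are invented; on a healthy build this runs
without incident.]

    import numpy as np

    Col1 = ['abcd', 'ab', 'a'] * 5         # short strings, max 4 chars
    Col2 = ['red', 'blue', '#00ff00'] * 5  # max 7 chars

    A = np.rec.fromarrays([Col1, Col2],
                          dtype=[('Aggregates', 'S4'), ('__color__', 'S7')])
    print A[0]
    print A['Aggregates'][:3]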
From bsouthey at gmail.com  Wed Mar 25 10:32:22 2009
From: bsouthey at gmail.com (Bruce Southey)
Date: Wed, 25 Mar 2009 09:32:22 -0500
Subject: Re: [Numpy-discussion] Seg fault from numpy.rec.fromarrays
In-Reply-To: <15e4667e0903250705oe1ce9afkc57a8945dce86ebd@mail.gmail.com>
References: <15e4667e0903241509t50cfa9f2t3c2653fd138a8d4@mail.gmail.com>
	<49CA335F.40501@gmail.com>
	<15e4667e0903250705oe1ce9afkc57a8945dce86ebd@mail.gmail.com>
Message-ID: <49CA4076.8070500@gmail.com>

Dan Yamins wrote:
> > Then I attempt
> > >>> A = numpy.rec.fromarrays(L, names = ['Aggregates','__color__'])
>
> So what happens when you set the dtype here?
>
> Since you have variable lengths of strings, numpy probably has guessed
> incorrectly. I would also check that Col1 and Col2 are what you expect,
> especially the minimum lengths, and that they really are strings.
>
> These objects do indeed seem to be exactly what I expect. The dtype
> appears to be exactly right. I had the same problem when I provided the
> correct dtype by hand, and numpy seems to be guessing right when I don't.
>
> Can you provide a small example that exhibits the problem?
>
> Well, that's the problem. I can't easily. The code that makes the
> example that crashes is buried fairly deeply in some other routines.
> When I try to produce the proximate problem manually, by creating what
> should (in theory) be identical lists at the interpreter prompt, I
> don't get the segfault problem.

Well, you should be able to save the Col1 and Col2 lists... Hopefully
you can save these as files and read them back, although I would guess
that is essentially what pickle is doing.

> I've attached a .png picture of a short interpreter session where I
> show the result of what is returned by the routines -- the object whose
> behavior causes the fault, how its dtype seems right, and how pickling
> it solves the problem. If you can open attachments, perhaps looking at
> this would be instructive.

In the picture, many of the list elements appear to be empty strings,
but it is not clear whether these really are empty strings. The error
for A[0] may indicate that, especially if other indices, such as those
to individual elements, work.

> I know the problem must have something to do with the way the lists
> Col1 and Col2 are getting created -- and that somehow, in a manner I
> don't understand and that is not readily apparent by inspection, they
> are not what I think they are. But I can't find out how they're wrong.
> I was wondering if anybody had encountered similar problems that might
> help me narrow my search.

I would try:
1) create separate arrays from Col1 and Col2 and check these can be
viewed. Then create a recarray from those.
2) create list L from subsets of Col1 and Col2 until you find which
entries cause it (a rough sketch of this follows below).

> I'm also willing to share all the code that leads up to the problem,
> if that's the only way to identify the problem, although it would be a
> somewhat laborious effort I think to subject you all to :)

An example is always useful. While I do not have a Mac, you can send it
to me on- or off-list.

Bruce
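[A rough sketch of the subset-probing idea in suggestion (2) above.
Since a genuine segfault kills the interpreter, each probe would
realistically be run in a separate process; Col1 and Col2 stand for the
lists from Dan's report.]

    import numpy as np

    def probe(col1, col2):
        A = np.rec.fromarrays([col1, col2], names=['Aggregates', '__color__'])
        A.tolist()  # one of the operations reported to crash

    step = 1000
    for start in range(0, len(Col1), step):
        probe(Col1[start:start+step], Col2[start:start+step])
        print 'rows %d:%d OK' % (start, start + step)
    # the last range printed before the crash brackets the offending rows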
From faltet at pytables.org  Wed Mar 25 12:06:27 2009
From: faltet at pytables.org (Francesc Alted)
Date: Wed, 25 Mar 2009 17:06:27 +0100
Subject: [Numpy-discussion] Summer of Code: Proposal for Implementing
	date/time types in NumPy
In-Reply-To: 
References: 
Message-ID: <200903251706.28633.faltet@pytables.org>

Hello Marty,

A Tuesday 24 March 2009, Marty Fuhry escrigué:
> Hello,
>
> Sorry for any overlap, as I've been referred here from the scipy-dev
> mailing list.
> I was reading through the Summer of Code ideas and I'm terribly
> interested in the date/time proposal
> (http://projects.scipy.org/numpy/browser/trunk/doc/neps/datetime-proposal3.rst).
> I would love to work on this for a Google Summer of Code project. I'm a
> sophomore studying Computer Science and Mathematics at Kent State
> University in Ohio, so this project directly relates to my studies. Is
> there anyone looking into this proposal yet?

To my knowledge, nobody is actively working on this anymore. As a matter
of fact, during the discussions that led to the proposal, many people
showed a real interest in the implementation of date/time types in
NumPy. So it would be great if you could have a stab at this.

Luck!

-- 
Francesc Alted

From pgmdevlist at gmail.com  Wed Mar 25 12:33:47 2009
From: pgmdevlist at gmail.com (Pierre GM)
Date: Wed, 25 Mar 2009 12:33:47 -0400
Subject: Re: [Numpy-discussion] Summer of Code: Proposal for Implementing
	date/time types in NumPy
In-Reply-To: <200903251706.28633.faltet@pytables.org>
References: <200903251706.28633.faltet@pytables.org>
Message-ID: <5B5B8EC8-8D23-42D0-92D2-84427CAF3A3A@gmail.com>

Ciao Marty,

Great idea indeed! However, I'd really like to have an easy way to plug
the suggested dtype w/ the existing Date class from the
scikits.timeseries package (Date is implemented in C, you can find the
sources through the link on http://pytseries.sourceforge.net). I agree
that this particular aspect is not a priority, but it'd be nice to keep
it in a corner of the mind.

In any case, keep me in the loop.

Cheers,
P.

On Mar 25, 2009, at 12:06 PM, Francesc Alted wrote:

> Hello Marty,
>
> A Tuesday 24 March 2009, Marty Fuhry escrigué:
>> Hello,
>>
>> Sorry for any overlap, as I've been referred here from the scipy-dev
>> mailing list.
>> I was reading through the Summer of Code ideas and I'm terribly
>> interested in the date/time proposal
>> (http://projects.scipy.org/numpy/browser/trunk/doc/neps/datetime-proposal3.rst).
>> I would love to work on this for a Google Summer of Code project. I'm
>> a sophomore studying Computer Science and Mathematics at Kent State
>> University in Ohio, so this project directly relates to my studies. Is
>> there anyone looking into this proposal yet?
>
> To my knowledge, nobody is actively working on this anymore. As a
> matter of fact, during the discussions that led to the proposal, many
> people showed a real interest in the implementation of date/time types
> in NumPy. So it would be great if you could have a stab at this.
>
> Luck!
>
> -- 
> Francesc Alted

From robert.kern at gmail.com  Wed Mar 25 15:12:56 2009
From: robert.kern at gmail.com (Robert Kern)
Date: Wed, 25 Mar 2009 14:12:56 -0500
Subject: Re: [Numpy-discussion] Win32 MSI
In-Reply-To: <4E30CAE7A3A7B242AFBBC418520EA0D644EBEB@exchange1.stats.ox.ac.uk>
References: <4E30CAE7A3A7B242AFBBC418520EA0D644EBAD@exchange1.stats.ox.ac.uk>
	<5b8d13220903241204i6a78de96p555fbd48e41d2186@mail.gmail.com>
	<4E30CAE7A3A7B242AFBBC418520EA0D644EBEB@exchange1.stats.ox.ac.uk>
Message-ID: <3d375d730903251212s7fc51635y5fb95475addb50f2@mail.gmail.com>

On Wed, Mar 25, 2009 at 07:17, F. David del Campo Hill wrote:
> Note: I do not work for Microsoft or receive any money from them; I am
> just an IT officer one of whose users needs Numpy for teaching. I do
> not know what Numpy does or doesn't do, I just need it installed fast.

[Disclaimer: I work for Enthought, whose product I am shamelessly about
to flog.]

You may want to consider the Enthought Python Distribution (EPD), which
provides an MSI installer for numpy and a wide variety of other related
scientific Python software. It is free for degree-granting institutions.

http://www.enthought.com/products/epd.php

-- 
Robert Kern

"I have come to believe that the whole world is an enigma, a harmless
enigma that is made terrible by our own mad attempt to interpret it as
though it had an underlying truth."
  -- Umberto Eco

From cournape at gmail.com  Wed Mar 25 15:23:55 2009
From: cournape at gmail.com (David Cournapeau)
Date: Thu, 26 Mar 2009 04:23:55 +0900
Subject: Re: [Numpy-discussion] Win32 MSI
In-Reply-To: <4E30CAE7A3A7B242AFBBC418520EA0D644EBEE@exchange1.stats.ox.ac.uk>
References: <4E30CAE7A3A7B242AFBBC418520EA0D644EBAD@exchange1.stats.ox.ac.uk>
	<5b8d13220903241204i6a78de96p555fbd48e41d2186@mail.gmail.com>
	<4E30CAE7A3A7B242AFBBC418520EA0D644EBEB@exchange1.stats.ox.ac.uk>
	<49CA2133.7040806@ar.media.kyoto-u.ac.jp>
	<4E30CAE7A3A7B242AFBBC418520EA0D644EBEE@exchange1.stats.ox.ac.uk>
Message-ID: <5b8d13220903251223g1845bcd5sae0cffa374e60975@mail.gmail.com>

On Wed, Mar 25, 2009 at 9:50 PM, F. David del Campo Hill wrote:
>
>        Also (and pardon me if this is a stupid question), wouldn't the
> non-SSE installer work anywhere (albeit more slowly)?

Yes, it would - but then people would complain about numpy being slow,
etc... because average users would install the msi, ask the difference
with the superpack, etc...

I may be misguided, but I make a strong distinction between making a msi
available as in the case of win64 installers, and making it possible for
a user to extract the msi from the superpack. The latter inherently
filters out people who don't like to bother with "details" such as CPU
support. The superpack solved almost all the issues we had on windows,
and I would prefer keeping it that way.

cheers,

David

From charlesr.harris at gmail.com  Wed Mar 25 15:56:53 2009
From: charlesr.harris at gmail.com (Charles R Harris)
Date: Wed, 25 Mar 2009 13:56:53 -0600
Subject: Re: [Numpy-discussion] Summer of Code: Proposal for Implementing
	date/time types in NumPy
In-Reply-To: <5B5B8EC8-8D23-42D0-92D2-84427CAF3A3A@gmail.com>
References: <200903251706.28633.faltet@pytables.org>
	<5B5B8EC8-8D23-42D0-92D2-84427CAF3A3A@gmail.com>
Message-ID: 

On Wed, Mar 25, 2009 at 10:33 AM, Pierre GM wrote:

> Ciao Marty,
> Great idea indeed!  However, I'd really like to have an easy way to
> plug the suggested dtype w/ the existing Date class from the
> scikits.timeseries package (Date is implemented in C, you can find the
> sources through the link on http://pytseries.sourceforge.net). I agree
> that this particular aspect is not a priority, but it'd be nice to
> keep it in a corner of the mind.
> In any case, keep me in the loop.
> Cheers,
> P.

Just for clarification, was it the intent that these new data types
should be implemented at the C/cython level? If so, the project probably
requires a knowledge of C and some help digging through the lower levels
of numpy.

Chuck
However, I'd really like to have an easy way to > plug the suggested dtype w/ the existing Date class from the > scikits.timeseries package (Date is implemented in C, you can find the > sources through the link on http://pytseries.sourceforge.net). I agree > that this particular aspect is not a priority, but it'd be nice to > keep it in a corner of the mind. > In any case, keep me in the loop. > Cheers, > P. > Just for clarification, was it the intent that these new data types should be implemented at the C/cython level? If so, the project probably requires a knowlege of C and some help digging through the lower levels of numpy. Chuck -------------- next part -------------- An HTML attachment was scrubbed... URL: From nwagner at iam.uni-stuttgart.de Wed Mar 25 17:04:53 2009 From: nwagner at iam.uni-stuttgart.de (Nils Wagner) Date: Wed, 25 Mar 2009 22:04:53 +0100 Subject: [Numpy-discussion] Fortran source Message-ID: Hi all, How do I compile any legacy C and Fortran code in 64 bit using gcc/gfortran ? Any pointer would be appreciated. Thanks in advance Nils From martyfuhry at gmail.com Wed Mar 25 20:29:00 2009 From: martyfuhry at gmail.com (Marty Fuhry) Date: Wed, 25 Mar 2009 20:29:00 -0400 Subject: [Numpy-discussion] Summer of Code: Proposal for Implementing date/time types in NumPy In-Reply-To: References: <200903251706.28633.faltet@pytables.org> <5B5B8EC8-8D23-42D0-92D2-84427CAF3A3A@gmail.com> Message-ID: Thanks for the input, guys. I'll be looking into the scikits.timeseries package before submitting an application. >was it the intent that these new data types should be implemented at the C/cython level? That's fine with me. I've got plenty of experience in C++, and I've delved into my fair share of C (just for the record). I'm planning on writing up a proposal, so if anyone has further input, I would really appreciate it. 2009/3/25 Charles R Harris : > > > On Wed, Mar 25, 2009 at 10:33 AM, Pierre GM wrote: >> >> Ciao Marty, >> Great idea indeed ! ?However, I'd really like to have an easy way to >> plug the suggested dtype w/ the existing Date class from the >> scikits.timeseries package (Date is implemented in C, you can find the >> sources through the link on http://pytseries.sourceforge.net). I agree >> that this particular aspect is not a priority, but it'd be nice to >> keep it in a corner of the mind. >> In any case, keep me in the loop. >> Cheers, >> P. > > Just for clarification, was it the intent that these new data types should > be implemented at the C/cython level? If so, the project probably requires a > knowlege of C and some help digging through the lower levels of numpy. 
>
> Chuck

From cournape at gmail.com  Thu Mar 26 06:48:17 2009
From: cournape at gmail.com (David Cournapeau)
Date: Thu, 26 Mar 2009 19:48:17 +0900
Subject: Re: [Numpy-discussion] [Announce] Numpy 1.3.0b1
In-Reply-To: 
References: <5b8d13220903181943u32c3604dtbdbbae89ad687c3e@mail.gmail.com>
	<49C285EE.2020109@noaa.gov>
	<5b8d13220903200359y54dd2c17ud80c75aa406d0512@mail.gmail.com>
	<5b8d13220903200403x6098d43ar6d225c57338b33d8@mail.gmail.com>
Message-ID: <5b8d13220903260348j39e155dfrd8e2e02a91330623@mail.gmail.com>

Hi Bruce

On Fri, Mar 20, 2009 at 10:45 PM, Bruce Southey wrote:
> I still have the same problem on my Intel vista 64 system (Intel
> QX6700 CPUZ reports the instruction set as MMX, SSE, SSE2, SSE3,
> SSSE3, EM64T) with McAfee.

The binary is built with every optimization turned off, and no ATLAS
(AFAIK, atlas has not been ported to windows 64), so we are relatively
safe on this side :)

> I am also seeing a crash with Python 2.6.1 when running numpy.test().
> The output below with verbose=2.
>
> Also this code crashes:
> >>> import numpy as np
> >>> info = np.finfo(np.longcomplex)

Would you mind testing this binary:

http://www.ar.media.kyoto-u.ac.jp/members/david/archives/numpy//numpy-1.3.0b1.win-amd64-py2.6.msi

It is built with an updated toolchain + a few patches to mingw I have
not yet submitted upstream,

David

From cournape at gmail.com  Thu Mar 26 06:57:23 2009
From: cournape at gmail.com (David Cournapeau)
Date: Thu, 26 Mar 2009 19:57:23 +0900
Subject: Re: [Numpy-discussion] [Announce] Numpy 1.3.0b1
In-Reply-To: <5b8d13220903260348j39e155dfrd8e2e02a91330623@mail.gmail.com>
References: <5b8d13220903181943u32c3604dtbdbbae89ad687c3e@mail.gmail.com>
	<49C285EE.2020109@noaa.gov>
	<5b8d13220903200359y54dd2c17ud80c75aa406d0512@mail.gmail.com>
	<5b8d13220903200403x6098d43ar6d225c57338b33d8@mail.gmail.com>
	<5b8d13220903260348j39e155dfrd8e2e02a91330623@mail.gmail.com>
Message-ID: <5b8d13220903260357p45f428f4gb1691a210d7a4b3c@mail.gmail.com>

On Thu, Mar 26, 2009 at 7:48 PM, David Cournapeau wrote:

> It is built with an updated toolchain + a few patches to mingw I have
> not yet submitted upstream,

I created a ticket as well to track this issue:

http://projects.scipy.org/numpy/ticket/1068

From jens.rantil at telia.com  Thu Mar 26 07:41:28 2009
From: jens.rantil at telia.com (Jens Rantil)
Date: Thu, 26 Mar 2009 12:41:28 +0100
Subject: [Numpy-discussion] numpy.ctypeslib.ndpointer and the restype
	attribute [patch]
In-Reply-To: <6183458.298391237815382345.JavaMail.tomcat@pne-ps1-sn2>
References: <6183458.298391237815382345.JavaMail.tomcat@pne-ps1-sn2>
Message-ID: <1238067688.6464.29.camel@supraflex>

Hi again,

On Mon, 2009-03-23 at 14:36 +0100, Jens Rantil wrote:
> So I have a C function in a DLL loaded through ctypes. This particular
> function returns a pointer to a double. In fact I know that this
> pointer points to the first element in an array of, say for
> simplicity, 200 elements.
>
> How do I convert this pointer to a NumPy array that uses this data
> (ie. no copy of data in memory)? I am able to create a numpy array
> using a copy of the data.

Just a follow-up on this topic: While constructing a home-made
__array_interface__ attribute does the job of converting from a ctypes
pointer to a NumPy array, it seems like both an undocumented and magic
solution to a common problem.
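[To make that concrete, a minimal self-contained sketch of the home-made
__array_interface__ approach described above; a ctypes buffer stands in
for the pointer a real DLL call would return.]

    import ctypes
    import numpy as np

    # stand-in for memory owned by a DLL; a real double* result works the
    # same way once you know its address and length
    backing = (ctypes.c_double * 4)(2.1, 4.0, 97.0, 6.0)

    class PointerWrapper(object):
        pass

    wrapper = PointerWrapper()
    wrapper.__array_interface__ = {
        'shape': (4,),
        'typestr': '<f8',                            # little-endian float64
        'data': (ctypes.addressof(backing), False),  # (address, read-only)
        'version': 3,
    }

    arr = np.asarray(wrapper)  # shares memory with 'backing', no copy
    arr[0] = -1.0              # the change is visible through 'backing' too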
Therefore, I used the code that Sturla Molden posted and wrote a highly
useful piece of code that enables ctypes DLL functions to return NumPy
arrays on the fly. See it as a replacement for the incorrectly documented
ndpointer/restype functionality if you want. The code is attached to this
mail, including Nose tests.

An example workflow to have a ctypes DLL function return a 4-element
double array would be:

>>> returns_ndarray(dll.my_func, ctypes.c_double, 4)
>>> my_array = dll.my_func()
>>> my_array
array([  2.1,   4. ,  97. ,   6. ])

Notice that 'my_array' will be sharing the memory that the DLL function
returned. Also, I have not done extensive testing of the ndim and shape
parameters.

Wouldn't my code, or a tweak of it, be a nice feature in numpy.ctypeslib?
Is this the wrong channel for proposing things like this?

Thanks,
Jens Rantil
Lund University & Modelon AB, Sweden

-------------- next part --------------
A non-text attachment was scrubbed...
Name: returns_ndarray.py
Type: text/x-python
Size: 5148 bytes
Desc: not available

From sturla at molden.no  Thu Mar 26 09:17:02 2009
From: sturla at molden.no (Sturla Molden)
Date: Thu, 26 Mar 2009 14:17:02 +0100
Subject: Re: [Numpy-discussion] numpy.ctypeslib.ndpointer and the restype
	attribute [patch]
In-Reply-To: <1238067688.6464.29.camel@supraflex>
References: <6183458.298391237815382345.JavaMail.tomcat@pne-ps1-sn2>
	<1238067688.6464.29.camel@supraflex>
Message-ID: <49CB804E.8060402@molden.no>

On 3/26/2009 12:41 PM, Jens Rantil wrote:

> Wouldn't my code, or a tweak of it, be a nice feature in
> numpy.ctypeslib? Is this the wrong channel for proposing things like
> this?

If you look at

http://svn.scipy.org/svn/numpy/trunk/numpy/ctypeslib.py

you will see that it does almost the same. I think it would be better to
work out why ndpointer fails as restype and patch that.

Sturla Molden

From delcampo at stats.ox.ac.uk  Thu Mar 26 09:32:46 2009
From: delcampo at stats.ox.ac.uk (F. David del Campo Hill)
Date: Thu, 26 Mar 2009 13:32:46 -0000
Subject: Re: [Numpy-discussion] Win32 MSI
In-Reply-To: <5b8d13220903251223g1845bcd5sae0cffa374e60975@mail.gmail.com>
References: <4E30CAE7A3A7B242AFBBC418520EA0D644EBAD@exchange1.stats.ox.ac.uk>
	<5b8d13220903241204i6a78de96p555fbd48e41d2186@mail.gmail.com>
	<4E30CAE7A3A7B242AFBBC418520EA0D644EBEB@exchange1.stats.ox.ac.uk>
	<49CA2133.7040806@ar.media.kyoto-u.ac.jp>
	<4E30CAE7A3A7B242AFBBC418520EA0D644EBEE@exchange1.stats.ox.ac.uk>
	<5b8d13220903251223g1845bcd5sae0cffa374e60975@mail.gmail.com>
Message-ID: <4E30CAE7A3A7B242AFBBC418520EA0D644EC63@exchange1.stats.ox.ac.uk>

David,

Yes, I agree that kind of "complexity filter" (hey, I just invented a
phrase!) would probably work, though it should be theoretically possible
to create a conditional execution within a MSI package using WIX.

Now for the critical question: when?

David

-----Original Message-----
From: numpy-discussion-bounces at scipy.org
[mailto:numpy-discussion-bounces at scipy.org] On Behalf Of David Cournapeau
Sent: 25 March 2009 19:24
To: Discussion of Numerical Python
Subject: Re: [Numpy-discussion] Win32 MSI

On Wed, Mar 25, 2009 at 9:50 PM, F. David del Campo Hill wrote:
>
>        Also (and pardon me if this is a stupid question), wouldn't the
> non-SSE installer work anywhere (albeit more slowly)?

Yes, it would - but then people would complain about numpy being slow,
etc... because average users would install the msi, ask the difference
with the superpack, etc...
I may be misguided, but I make a strong distinction between making a msi
available as in the case of win64 installers, and making it possible for
a user to extract the msi from the superpack. The latter inherently
filters out people who don't like to bother with "details" such as CPU
support. The superpack solved almost all the issues we had on windows,
and I would prefer keeping it that way.

cheers,

David

From bsouthey at gmail.com  Thu Mar 26 09:49:11 2009
From: bsouthey at gmail.com (Bruce Southey)
Date: Thu, 26 Mar 2009 08:49:11 -0500
Subject: Re: [Numpy-discussion] [Announce] Numpy 1.3.0b1
In-Reply-To: <5b8d13220903260357p45f428f4gb1691a210d7a4b3c@mail.gmail.com>
References: <5b8d13220903181943u32c3604dtbdbbae89ad687c3e@mail.gmail.com>
	<49C285EE.2020109@noaa.gov>
	<5b8d13220903200359y54dd2c17ud80c75aa406d0512@mail.gmail.com>
	<5b8d13220903200403x6098d43ar6d225c57338b33d8@mail.gmail.com>
	<5b8d13220903260348j39e155dfrd8e2e02a91330623@mail.gmail.com>
	<5b8d13220903260357p45f428f4gb1691a210d7a4b3c@mail.gmail.com>
Message-ID: <49CB87D7.6020403@gmail.com>

David Cournapeau wrote:
> On Thu, Mar 26, 2009 at 7:48 PM, David Cournapeau wrote:
>
>> It is built with an updated toolchain + a few patches to mingw I have
>> not yet submitted upstream,
>
> I created a ticket as well to track this issue:

I added my comments to it.

Is there a way to skip these longdouble/longcomplex tests in Python 2.6
on 64 bit Vista? I understand the experimental nature, and I am more
than willing to help test it as much as I am capable of doing.

Bruce

From david at ar.media.kyoto-u.ac.jp  Thu Mar 26 09:53:16 2009
From: david at ar.media.kyoto-u.ac.jp (David Cournapeau)
Date: Thu, 26 Mar 2009 22:53:16 +0900
Subject: Re: [Numpy-discussion] [Announce] Numpy 1.3.0b1
In-Reply-To: <49CB87D7.6020403@gmail.com>
References: <5b8d13220903181943u32c3604dtbdbbae89ad687c3e@mail.gmail.com>
	<49C285EE.2020109@noaa.gov>
	<5b8d13220903200359y54dd2c17ud80c75aa406d0512@mail.gmail.com>
	<5b8d13220903200403x6098d43ar6d225c57338b33d8@mail.gmail.com>
	<5b8d13220903260348j39e155dfrd8e2e02a91330623@mail.gmail.com>
	<5b8d13220903260357p45f428f4gb1691a210d7a4b3c@mail.gmail.com>
	<49CB87D7.6020403@gmail.com>
Message-ID: <49CB88CC.5030406@ar.media.kyoto-u.ac.jp>

Bruce Southey wrote:
> David Cournapeau wrote:
>> On Thu, Mar 26, 2009 at 7:48 PM, David Cournapeau wrote:
>>
>>> It is built with an updated toolchain + a few patches to mingw I have
>>> not yet submitted upstream,
>>
>> I created a ticket as well to track this issue:
>
> I added my comments to it.
>
> Is there a way to skip these longdouble/longcomplex tests in Python 2.6
> on 64 bit Vista?

I could add a known failure - but then that kind of defeats the purpose
of testing.

Just to be clear, did you use the .msi I have just posted ? Because I
can't reproduce the crash at all on my machine. I do get occasional
crashes right after numpy import when running the test suite, but those
are of a different nature I believe.

cheers,

David
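[For reference, marking a known failure along the lines David mentions
might look roughly like this with the decorators numpy's own test suite
uses. A sketch only; the condition and message are illustrative, not
actual numpy test code.]

    import sys
    from numpy.testing import dec

    on_win64 = sys.platform == 'win32' and sys.maxsize > 2**32

    @dec.knownfailureif(on_win64, "longdouble/longcomplex crash on win64")
    def test_longcomplex_finfo():
        import numpy as np
        np.finfo(np.longcomplex)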
From bsouthey at gmail.com  Thu Mar 26 10:44:37 2009
From: bsouthey at gmail.com (Bruce Southey)
Date: Thu, 26 Mar 2009 09:44:37 -0500
Subject: Re: [Numpy-discussion] [Announce] Numpy 1.3.0b1
In-Reply-To: <49CB88CC.5030406@ar.media.kyoto-u.ac.jp>
References: <5b8d13220903181943u32c3604dtbdbbae89ad687c3e@mail.gmail.com>
	<49C285EE.2020109@noaa.gov>
	<5b8d13220903200359y54dd2c17ud80c75aa406d0512@mail.gmail.com>
	<5b8d13220903200403x6098d43ar6d225c57338b33d8@mail.gmail.com>
	<5b8d13220903260348j39e155dfrd8e2e02a91330623@mail.gmail.com>
	<5b8d13220903260357p45f428f4gb1691a210d7a4b3c@mail.gmail.com>
	<49CB87D7.6020403@gmail.com>
	<49CB88CC.5030406@ar.media.kyoto-u.ac.jp>
Message-ID: 

On Thu, Mar 26, 2009 at 8:53 AM, David Cournapeau wrote:
> Bruce Southey wrote:
>> David Cournapeau wrote:
>>> On Thu, Mar 26, 2009 at 7:48 PM, David Cournapeau wrote:
>>>
>>>> It is built with an updated toolchain + a few patches to mingw I have
>>>> not yet submitted upstream,
>>>
>>> I created a ticket as well to track this issue:
>>
>> I added my comments to it.
>>
>> Is there a way to skip these longdouble/longcomplex tests in Python 2.6
>> on 64 bit Vista?
>
> I could add a known failure - but then that kind of defeats the purpose
> of testing.
>
> Just to be clear, did you use the .msi I have just posted ? Because I
> can't reproduce the crash at all on my machine. I do get occasional
> crashes right after numpy import when running the test suite, but those
> are of a different nature I believe.
>
> cheers,
>
> David

Hi,
Apparently not:

Python 2.6.1 (r261:67517, Dec  4 2008, 16:51:00) [MSC v.1500 32 bit
(Intel)] on win32
Type "copyright", "credits" or "license()" for more information.

    ****************************************************************
    Personal firewall software may warn about the connection IDLE
    makes to its subprocess using this computer's internal loopback
    interface.  This connection is not visible on any external
    interface and no data is sent to or received from the Internet.
    ****************************************************************

IDLE 2.6.1
>>> import numpy

Traceback (most recent call last):
  File "<pyshell#0>", line 1, in <module>
    import numpy
  File "C:\Python26\lib\site-packages\numpy\__init__.py", line 130, in <module>
    import add_newdocs
  File "C:\Python26\lib\site-packages\numpy\add_newdocs.py", line 9, in <module>
    from lib import add_newdoc
  File "C:\Python26\lib\site-packages\numpy\lib\__init__.py", line 4, in <module>
    from type_check import *
  File "C:\Python26\lib\site-packages\numpy\lib\type_check.py", line 8, in <module>
    import numpy.core.numeric as _nx
  File "C:\Python26\lib\site-packages\numpy\core\__init__.py", line 5, in <module>
    import multiarray
ImportError: DLL load failed: %1 is not a valid Win32 application.
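[A quick way to see which interpreter is actually being launched in a
mixed 32/64-bit setup; either line works on Python 2.6.]

    import platform
    import struct

    print platform.architecture()[0]   # '32bit' or '64bit'
    print struct.calcsize('P') * 8     # pointer width in bits: 32 or 64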
From david at ar.media.kyoto-u.ac.jp  Thu Mar 26 10:32:57 2009
From: david at ar.media.kyoto-u.ac.jp (David Cournapeau)
Date: Thu, 26 Mar 2009 23:32:57 +0900
Subject: Re: [Numpy-discussion] [Announce] Numpy 1.3.0b1
In-Reply-To: 
References: <5b8d13220903181943u32c3604dtbdbbae89ad687c3e@mail.gmail.com>
	<5b8d13220903200359y54dd2c17ud80c75aa406d0512@mail.gmail.com>
	<5b8d13220903200403x6098d43ar6d225c57338b33d8@mail.gmail.com>
	<5b8d13220903260348j39e155dfrd8e2e02a91330623@mail.gmail.com>
	<5b8d13220903260357p45f428f4gb1691a210d7a4b3c@mail.gmail.com>
	<49CB87D7.6020403@gmail.com>
	<49CB88CC.5030406@ar.media.kyoto-u.ac.jp>
Message-ID: <49CB9219.1030907@ar.media.kyoto-u.ac.jp>

Bruce Southey wrote:
> Hi,
> Apparently not:
>
> Python 2.6.1 (r261:67517, Dec  4 2008, 16:51:00) [MSC v.1500 32 bit
> (Intel)] on win32

Well, installing 64-bit numpy on a 32-bit Python will not work very
well :)

I am surprised the installation worked at all (I noticed msi installers
were less robust than pure wininst.exe-based installers w.r.t. python
verification/installation, though). I am not sure I can do anything to
guard against this, unfortunately,

cheers,

David

From bsouthey at gmail.com  Thu Mar 26 11:24:20 2009
From: bsouthey at gmail.com (Bruce Southey)
Date: Thu, 26 Mar 2009 10:24:20 -0500
Subject: Re: [Numpy-discussion] [Announce] Numpy 1.3.0b1
In-Reply-To: <49CB9219.1030907@ar.media.kyoto-u.ac.jp>
References: <5b8d13220903181943u32c3604dtbdbbae89ad687c3e@mail.gmail.com>
	<5b8d13220903200359y54dd2c17ud80c75aa406d0512@mail.gmail.com>
	<5b8d13220903200403x6098d43ar6d225c57338b33d8@mail.gmail.com>
	<5b8d13220903260348j39e155dfrd8e2e02a91330623@mail.gmail.com>
	<5b8d13220903260357p45f428f4gb1691a210d7a4b3c@mail.gmail.com>
	<49CB87D7.6020403@gmail.com>
	<49CB88CC.5030406@ar.media.kyoto-u.ac.jp>
	<49CB9219.1030907@ar.media.kyoto-u.ac.jp>
Message-ID: 

On Thu, Mar 26, 2009 at 9:32 AM, David Cournapeau wrote:
> Bruce Southey wrote:
>> Hi,
>> Apparently not:
>>
>> Python 2.6.1 (r261:67517, Dec  4 2008, 16:51:00) [MSC v.1500 32 bit
>> (Intel)] on win32
>
> Well, installing 64-bit numpy on a 32-bit Python will not work very
> well :)
>
> I am surprised the installation worked at all (I noticed msi installers
> were less robust than pure wininst.exe-based installers w.r.t. python
> verification/installation, though). I am not sure I can do anything to
> guard against this, unfortunately,
>
> cheers,
>
> David
Bruce From dekievit at strw.LeidenUniv.nl Thu Mar 26 12:14:16 2009 From: dekievit at strw.LeidenUniv.nl (Sander de Kievit) Date: Thu, 26 Mar 2009 17:14:16 +0100 Subject: [Numpy-discussion] Using loadtxt() twice on same file freezes python Message-ID: <49CBA9D8.9000602@strw.LeidenUniv.nl> Hi, On my PC the following code freezes python: [code] import numpy as np from StringIO import StringIO c = StringIO("0 1\n2 3") np.loadtxt(c) np.loadtxt(c) [/code] Is this intentional behaviour or should I report this as a bug? Regards, Sander From cournape at gmail.com Thu Mar 26 12:58:47 2009 From: cournape at gmail.com (David Cournapeau) Date: Fri, 27 Mar 2009 01:58:47 +0900 Subject: [Numpy-discussion] Using loadtxt() twice on same file freezes python In-Reply-To: <49CBA9D8.9000602@strw.LeidenUniv.nl> References: <49CBA9D8.9000602@strw.LeidenUniv.nl> Message-ID: <5b8d13220903260958g3bcdffc4nf8e2fecbd541c397@mail.gmail.com> On Fri, Mar 27, 2009 at 1:14 AM, Sander de Kievit wrote: > Hi, > > On my PC the following code freezes python: > > [code] > import numpy as np > from StringIO import StringIO > c = StringIO("0 1\n2 3") > np.loadtxt(c) > np.loadtxt(c) > [/code] > > Is this intentional behaviour or should I report this as a bug? Which version of numpy are you using (numpy.version.version), on which OS ? That's a most definitly not expected behavior (you should get an exception the second time because the "stream" is empty - which is exactly what happens on my installation, but your problem may be platform specific). cheers, David From pav at iki.fi Thu Mar 26 13:41:56 2009 From: pav at iki.fi (Pauli Virtanen) Date: Thu, 26 Mar 2009 17:41:56 +0000 (UTC) Subject: [Numpy-discussion] Using loadtxt() twice on same file freezes python References: <49CBA9D8.9000602@strw.LeidenUniv.nl> <5b8d13220903260958g3bcdffc4nf8e2fecbd541c397@mail.gmail.com> Message-ID: Fri, 27 Mar 2009 01:58:47 +0900, David Cournapeau wrote: > On Fri, Mar 27, 2009 at 1:14 AM, Sander de Kievit [clip] >> On my PC the following code freezes python: >> >> [code] >> import numpy as np >> from StringIO import StringIO >> c = StringIO("0 1\n2 3") >> np.loadtxt(c) >> np.loadtxt(c) >> [/code] [clip] I see this too, on Numpy 1.2.1. It gets stuck at {{{ >>> np.loadtxt(c) Traceback (most recent call last): File "", line 1, in File ".../numpy-1.2.1-py2.5-linux-x86_64.egg/numpy/lib/io.py", line 375, in loadtxt first_vals = split_line(first_line) File ".../numpy-1.2.1-py2.5-linux-x86_64.egg/numpy/lib/io.py", line 356, in split_line line = line.split(comments)[0].strip() KeyboardInterrupt >>> np.__version__ '1.2.1' }}} However, on 1.4.0.dev6723, I get an exception: {{{ Traceback (most recent call last): File "", line 1, in File ".../numpy-1.4.0.dev-py2.5-linux-x86_64.egg/numpy/lib/io.py", line 436, in loadtxt raise IOError('End-of-file reached before encountering data.') }}} There haven't been any changes to lib/io.py since branching off 1.3.x, so I believe it should also work OK on 1.3.0. -- Pauli Virtanen From josh8912 at yahoo.com Thu Mar 26 14:08:10 2009 From: josh8912 at yahoo.com (JJ) Date: Thu, 26 Mar 2009 11:08:10 -0700 (PDT) Subject: [Numpy-discussion] numpy int64 arrays and ctypes Message-ID: <92215.4142.qm@web54009.mail.re2.yahoo.com> Hello: I hope someone can give me a tip on how to solve this simple problem. I use Ubuntu 8.10 64 bit and want to pass a numpy integer array to a shared library C program. For some reason, my C program is not able to read the info passed in the integer array, but can read from a passed double array. 
It is probably a simple fix, but I do not know what it is. I have the same problem using Python 2.5 and Python 2.6. In the Python program, I have something like: myInts = array([0,2,5,7]) #(of type 64 bit integer) myDoub = random.rand(100) _myLib = ctypes.cdll['./mylib.so'] myfun = _myLib.myfun p_myInts = ndpointer(myInts.dtype, myInts.ndim, myInts.shape, "C_CONTIGUOUS") p_myDoub = ndpointer(myDoub.dtype, myDoub.ndim, myDoub.shape, "C_CONTIGUOUS") myfun.argtypes = [p_myInts, p_myDoub] myfun.restype = ctypes.c_int result = myfun(myInts, myDoub) Then in the header file I have: int myfun(int *myInts, double *myDoub); Finally, in the C program I have: int myfun(int *myInts, double *myDoub){ int i; for (i=0; i<4; i++){ printf("myInt= %d, myDoub[myInt]= %g\n", myInts[i], myDoub[myInts[i]]); } return 0; } This does not work, the integer array is not passed correctly, but the double array is. Neither %d nor %Int64 works for the printf statement. Any suggestions? John From robert.kern at gmail.com Thu Mar 26 14:40:30 2009 From: robert.kern at gmail.com (Robert Kern) Date: Thu, 26 Mar 2009 13:40:30 -0500 Subject: [Numpy-discussion] numpy int64 arrays and ctypes In-Reply-To: <92215.4142.qm@web54009.mail.re2.yahoo.com> References: <92215.4142.qm@web54009.mail.re2.yahoo.com> Message-ID: <3d375d730903261140m64d3292co9f8a11e18ce11c44@mail.gmail.com> On Thu, Mar 26, 2009 at 13:08, JJ wrote: > > Hello: > I hope someone can give me a tip on how to solve this simple problem. I use Ubuntu 8.10 64 bit and want to pass a numpy integer array to a shared library C program. For some reason, my C program is not able to read the info passed in the integer array, but can read from a passed double array. It is probably a simple fix, but I do not know what it is. I have the same problem using Python 2.5 and Python 2.6. numpy int64 is almost never a C int. C ints are almost always 32 bits on most modern systems. The default numpy integer type always corresponds to a C long on all systems. -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From benparkla at gmail.com Thu Mar 26 15:32:10 2009 From: benparkla at gmail.com (Ben Park) Date: Thu, 26 Mar 2009 12:32:10 -0700 (PDT) Subject: [Numpy-discussion] Where is the instruction of installing numpy with Intel lib MKL? In-Reply-To: <22728276.post@talk.nabble.com> References: <22728276.post@talk.nabble.com> Message-ID: <22729404.post@talk.nabble.com> BTW, this timing on a Core 2 Duo 2.0GHz laptop, with the Enthought Python Distribution, is around 0.2 seconds. Ben Park wrote: > > I have spent many hours trying to do this, to no avail. The numpy > installation I got didn't seem to link to the MKL library. A 1000x1000 > matrix multiplication took 8 seconds. > > import numpy as N > import numpy.random as RN > import time > > m = 1000 > #m = 2000 > #m = 10000 > #m = 30000 > #m = 40000 > > X = N.zeros((m,m),'f') > for i in range(m): > X[i,:] = RN.random((m,)).astype('f') > #X = RN.random((m, m)).astype('f') > > t1 = time.time() > Y = N.dot(X, X) > t2 = time.time() > print 'dt = %f' %(t2-t1) > > The result is about 8 seconds on an Intel Core 2 Duo 2.0GHz laptop. > Certainly not fast. > -- View this message in context: http://www.nabble.com/Where-is-the-instruction-of-installing-numpy-with-Intel-lib-MKL--tp22728276p22729404.html Sent from the Numpy-discussion mailing list archive at Nabble.com.
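Robert's diagnosis is the whole story: on a 64-bit Linux box the default integer array holds 8-byte values while the C side reads 4-byte ints, so every other element looks like garbage. A minimal sketch of one fix (not from the thread; the library and function names are the hypothetical ones from the post) is to make the dtype match the C declaration explicitly:

[code]
import ctypes
import numpy as np
from numpy.ctypeslib import ndpointer

myInts = np.array([0, 2, 5, 7], dtype=np.intc)  # np.intc is C 'int' by definition
myDoub = np.random.rand(100)

_myLib = ctypes.cdll.LoadLibrary('./mylib.so')  # hypothetical library from the post
myfun = _myLib.myfun
myfun.argtypes = [ndpointer(myInts.dtype, flags="C_CONTIGUOUS"),
                  ndpointer(myDoub.dtype, flags="C_CONTIGUOUS")]
myfun.restype = ctypes.c_int
result = myfun(myInts, myDoub)
[/code]

The other direction works too: keep the default int64 array and declare the C parameter as long (or npy_intp). Either side can move, as long as the widths agree.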
From theller at ctypes.org Thu Mar 26 16:12:00 2009 From: theller at ctypes.org (Thomas Heller) Date: Thu, 26 Mar 2009 21:12:00 +0100 Subject: [Numpy-discussion] numpy.ctypeslib.ndpointer and the restype attribute [patch] In-Reply-To: <49CB804E.8060402@molden.no> References: <6183458.298391237815382345.JavaMail.tomcat@pne-ps1-sn2> <1238067688.6464.29.camel@supraflex> <49CB804E.8060402@molden.no> Message-ID: Sturla Molden schrieb: > On 3/26/2009 12:41 PM, Jens Rantil wrote: > >> Wouldn't my code, or a tweak of it, be a nice feature in >> numpy.ctypeslib? Is this the wrong channel for proposing things like >> this? > > If you look at > > http://svn.scipy.org/svn/numpy/trunk/numpy/ctypeslib.py > > you will see that it does almost the same. I think it would be better to > work out why ndpointer fails as restype and patch that. ndpointer(...), which returns an _nptr instance, does not work as restype because neither it is a base class of one of the ctypes base types like ctypes.c_void_p, also it is not callable with one argument. There are two ways to fix this. The first one is to make the _nptr callable with one argument, by implementing a method like this: def __init__(self, anInteger): The foreign function is assumed to return an integer, and the restype is called with this integer. Obviously this will only work on systems where 'sizeof(int) == sizeof(void *)'. See also http://docs.python.org/library/ctypes.html#return-types I consider the 'callable as restype' protocol broken, but backwards compatibility reasons forbid to change that. The other way is to make _nptr a subclass of ctypes.c_void_p, the result that the foreign function call returns will then be an instance of this class. Unfortunately, ctypes will not call __new__() to create this instance; so a custom __new__() implementation cannot return a numpy array and we are left with the _nptr instance. The only way to create and access the numpy array is to construct and return one from a method call on the _nptr instance, or a property on the _nptr instance. Ok, .errcheck could call that method and return the result. -- Thanks, Thomas From dekievit at strw.leidenuniv.nl Thu Mar 26 16:24:30 2009 From: dekievit at strw.leidenuniv.nl (Sander de Kievit) Date: Thu, 26 Mar 2009 21:24:30 +0100 Subject: [Numpy-discussion] Using loadtxt() twice on same file freezes python In-Reply-To: <5b8d13220903260958g3bcdffc4nf8e2fecbd541c397@mail.gmail.com> References: <49CBA9D8.9000602@strw.LeidenUniv.nl> <5b8d13220903260958g3bcdffc4nf8e2fecbd541c397@mail.gmail.com> Message-ID: <49CBE47E.50105@strw.LeidenUniv.nl> David Cournapeau wrote: > On Fri, Mar 27, 2009 at 1:14 AM, Sander de Kievit > wrote: >> Hi, >> >> On my PC the following code freezes python: >> >> [code] >> import numpy as np >> from StringIO import StringIO >> c = StringIO("0 1\n2 3") >> np.loadtxt(c) >> np.loadtxt(c) >> [/code] >> >> Is this intentional behaviour or should I report this as a bug? > > Which version of numpy are you using (numpy.version.version), on which OS ? The specifics for my platform: Fedora release 10 (Cambridge) kernel: 2.6.27.19-170.2.35.fc10.i686 #1 SMP python: 2.5.2 numpy: 1.2.0 Also, if I close the file in between the two calls it works without problem (if I use a real file, that is). > > That's a most definitly not expected behavior (you should get an > exception the second time because the "stream" is empty - which is > exactly what happens on my installation, but your problem may be > platform specific). > > cheers, > > David Thanks for the quick replies! 
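(For anyone who hits the freeze on an affected version, a minimal workaround sketch, not from the thread: rewind the stream between reads, since loadtxt consumes the underlying file object.)

[code]
import numpy as np
from StringIO import StringIO  # Python 2, as in the original report

c = StringIO("0 1\n2 3")
a = np.loadtxt(c)
c.seek(0)          # rewind before reading the same stream a second time
b = np.loadtxt(c)
[/code]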
I'll report the bug. Sander From rmay31 at gmail.com Thu Mar 26 16:29:24 2009 From: rmay31 at gmail.com (Ryan May) Date: Thu, 26 Mar 2009 15:29:24 -0500 Subject: [Numpy-discussion] Using loadtxt() twice on same file freezes python In-Reply-To: <49CBE47E.50105@strw.LeidenUniv.nl> References: <49CBA9D8.9000602@strw.LeidenUniv.nl> <5b8d13220903260958g3bcdffc4nf8e2fecbd541c397@mail.gmail.com> <49CBE47E.50105@strw.LeidenUniv.nl> Message-ID: On Thu, Mar 26, 2009 at 3:24 PM, Sander de Kievit < dekievit at strw.leidenuniv.nl> wrote: > David Cournapeau wrote: > > On Fri, Mar 27, 2009 at 1:14 AM, Sander de Kievit > > wrote: > >> Hi, > >> > >> On my PC the following code freezes python: > >> > >> [code] > >> import numpy as np > >> from StringIO import StringIO > >> c = StringIO("0 1\n2 3") > >> np.loadtxt(c) > >> np.loadtxt(c) > >> [/code] > >> > >> Is this intentional behaviour or should I report this as a bug? > > > > Which version of numpy are you using (numpy.version.version), on which OS > ? > The specifics for my platform: > Fedora release 10 (Cambridge) > kernel: 2.6.27.19-170.2.35.fc10.i686 #1 SMP > python: 2.5.2 > numpy: 1.2.0 > > Also, if I close the file in between the two calls it works without > problem (if I use a real file, that is). > > > > > That's a most definitly not expected behavior (you should get an > > exception the second time because the "stream" is empty - which is > > exactly what happens on my installation, but your problem may be > > platform specific). > > > > cheers, > > > > David > > Thanks for the quick replies! I'll report the bug. Before reporting the bug, can you upgrade to 1.2.1. I seem to remember something about this bug and my gut tells me it got fixed in between 1.2.0 and 1.2.1. Ryan -- Ryan May Graduate Research Assistant School of Meteorology University of Oklahoma Sent from Norman, Oklahoma, United States -------------- next part -------------- An HTML attachment was scrubbed... URL: From pav at iki.fi Thu Mar 26 17:02:28 2009 From: pav at iki.fi (Pauli Virtanen) Date: Thu, 26 Mar 2009 21:02:28 +0000 (UTC) Subject: [Numpy-discussion] Using loadtxt() twice on same file freezes python References: <49CBA9D8.9000602@strw.LeidenUniv.nl> <5b8d13220903260958g3bcdffc4nf8e2fecbd541c397@mail.gmail.com> <49CBE47E.50105@strw.LeidenUniv.nl> Message-ID: Thu, 26 Mar 2009 21:24:30 +0100, Sander de Kievit wrote: > David Cournapeau wrote: >> On Fri, Mar 27, 2009 at 1:14 AM, Sander de Kievit >> wrote: >>> On my PC the following code freezes python: >>> >>> [code] >>> import numpy as np >>> from StringIO import StringIO >>> c = StringIO("0 1\n2 3") >>> np.loadtxt(c) >>> np.loadtxt(c) >>> [/code] >>> >>> Is this intentional behaviour or should I report this as a bug? >> >> Which version of numpy are you using (numpy.version.version), on which >> OS ? > The specifics for my platform: > Fedora release 10 (Cambridge) > kernel: 2.6.27.19-170.2.35.fc10.i686 #1 SMP python: 2.5.2 > numpy: 1.2.0 > > Also, if I close the file in between the two calls it works without > problem (if I use a real file, that is). Could you test this with Numpy SVN version (or the 1.3 beta). It's very likely the bug is this one: http://projects.scipy.org/numpy/ticket/908 and it has already been fixed. -- Pauli Virtanen From lutz.maibaum at gmail.com Thu Mar 26 19:56:13 2009 From: lutz.maibaum at gmail.com (Lutz Maibaum) Date: Thu, 26 Mar 2009 16:56:13 -0700 Subject: [Numpy-discussion] Normalization of ifft Message-ID: Hello, I just started to use python and numpy for some numerical analysis. 
I have a question about the definition of the inverse Fourier transform. The user guide gives the formula (p. 180) x[m] = Sum_k X[k] exp(j 2pi k m / n) where X[k] are the Fourier coefficients, and n is the length of the arrays. The online documentation (http://docs.scipy.org/doc/numpy/reference/routines.fft.html), on the other hand, states that there is an additional factor of 1/n, which is required to make ifft() the inverse of fft(). Is this a misprint in the user guide? Lutz From dwf at cs.toronto.edu Thu Mar 26 20:02:44 2009 From: dwf at cs.toronto.edu (David Warde-Farley) Date: Thu, 26 Mar 2009 20:02:44 -0400 Subject: [Numpy-discussion] Where is the instruction of installing numpy with Intel lib MKL? In-Reply-To: <22729404.post@talk.nabble.com> References: <22728276.post@talk.nabble.com> <22729404.post@talk.nabble.com> Message-ID: <203EB68F-2BA4-4F14-80C5-7A263C5BAECA@cs.toronto.edu> On 26-Mar-09, at 3:32 PM, Ben Park wrote: > > BTW, this timing on a Core 2 Duo 2.0GHz laptop, with the Enthought > Python > Distribution, is around 0.2 seconds. You're going to have to build NumPy yourself to link it against the MKL, I believe. EPD's is probably using something fairly basic. You don't mention what platform you're on. Windows? Linux? Mac OS X? This is pretty important. Some instructions about the MKL are in the comments of site.cfg in the numpy tarball. Also (if you're running Linux): http://www.scipy.org/Installing_SciPy/Linux#head-7ce43956a69ec51c6f2cedd894a4715d5bfff974 David From christian at marquardt.sc Thu Mar 26 20:20:22 2009 From: christian at marquardt.sc (Christian Marquardt) Date: Fri, 27 Mar 2009 01:20:22 +0100 (GMT+01:00) Subject: [Numpy-discussion] numpy v1.2.0 vs 1.2.1 and setup_tools In-Reply-To: <21516370.1231238113016068.JavaMail.root@athene> Message-ID: <9618077.1251238113222077.JavaMail.root@athene> Hello! I ran into the following problem: I have easy_installable packages which list numpy as a dependency. numpy itself is installed in the system's site-packages directory and works fine. When running a python setup.py install of the package with numpy v1.2.0 installed, everything works fine. When running the same command with numpy 1.2.1 installed, it tries to download a numpy tar file from PyPI and to compile and install it again. It looks as if v1.2.1 isn't providing the relevant information to the setup tools, but 1.2.0 did. I don't know about v1.3.0b1 yet - I have difficulties compiling it currently (another email). I'd be more than willing to track this down, but is there anybody who could give me a starting point on where to look? Many thanks, Christian. From christian at marquardt.sc Thu Mar 26 20:25:53 2009 From: christian at marquardt.sc (Christian Marquardt) Date: Fri, 27 Mar 2009 01:25:53 +0100 (GMT+01:00) Subject: [Numpy-discussion] Numpy v1.3.0b1 on Linux w/ Intel compilers - unknown file type Message-ID: <30783290.1281238113553259.JavaMail.root@athene> Hello, I tried to compile and install numpy 1.3.0b1 on a Suse Linux 10.3 with Python 2.5.x and Intel C and Fortran compilers 10.1 as well as the MKL 10.0. The distutils do find the compilers and the MKL (when using similar settings as I used successfully for all previous numpy versions since 1.0.4 or so), but then bail out with the following error: ...>python setup.py install [...]
running config running config_fc unifing config_fc, config, build_clib, build_ext, build commands --fcompiler options running build_clib Found executable /opt/intel/cc/10.1.018/bin/icc Could not locate executable ecc customize IntelCCompiler customize IntelCCompiler using build_clib building 'npymath' library compiling C sources C compiler: icc error: unknown file type '.src' (from 'numpy/core/src/npy_math.c.src') I think the error message does not even come from the compiler... I'm lost... What does it mean, and why are there source files named ...c.src? Many thanks, Christian From charlesr.harris at gmail.com Thu Mar 26 20:49:25 2009 From: charlesr.harris at gmail.com (Charles R Harris) Date: Thu, 26 Mar 2009 18:49:25 -0600 Subject: [Numpy-discussion] Numpy v1.3.0b1 on Linux w/ Intel compilers - unknown file type In-Reply-To: <30783290.1281238113553259.JavaMail.root@athene> References: <30783290.1281238113553259.JavaMail.root@athene> Message-ID: On Thu, Mar 26, 2009 at 6:25 PM, Christian Marquardt wrote: > Hello, > > I tried to compile and install numpy 1.3.0b1 on a Suse Linux 10.3 with > Python > 2.5.x and Intel C and Fortran compilers 10.1 as well as the MKL 10.0. > The > distutils do find the compilers and the MKL (when using similar settings as > I > used successfully for all previous numpy versions since 1.0.4 or so), but > then > bail out with the following error: > > ...>python setup.py install > > [...] > > running config > running config_fc > unifing config_fc, config, build_clib, build_ext, build commands > --fcompiler options > running build_clib > Found executable /opt/intel/cc/10.1.018/bin/icc > Could not locate executable ecc > customize IntelCCompiler > customize IntelCCompiler using build_clib > building 'npymath' library > compiling C sources > C compiler: icc > > error: unknown file type '.src' (from 'numpy/core/src/npy_math.c.src') > > I think the error message does not even come from the compiler... > > I'm lost... What does it mean, and why are there source files named > ...c.src? > That file should be preprocessed to produce npy_math.c which ends up in the build directory.
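For background, a sketch of what such a template looks like (illustrative only, not copied from npy_math.c.src): numpy's conv_template preprocessor expands a repeat block once per listed type, substituting each @name@ variable.

[code]
/**begin repeat
 * #type = float, double, npy_longdouble#
 * #c = f, , l#
 */
/* Expands to squaref, square and squarel, one definition per type. */
static @type@ square@c@(@type@ x)
{
    return x * x;
}
/**end repeat**/
[/code]

The expansion is done by numpy/distutils/conv_template.py before the C compiler ever sees the file, which is why a compiler handed the raw .src file, as in the build_clib failure above, chokes on the unknown extension.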
I don't know what is going on here, but you might first try deleting the build directory just to see what happens. There might be some setup file that is screwy/outdated also. Did you download the beta and do a clean extract? Chuck -------------- next part -------------- An HTML attachment was scrubbed... URL: From michael.s.gilbert at gmail.com Thu Mar 26 21:17:57 2009 From: michael.s.gilbert at gmail.com (Michael Gilbert) Date: Thu, 26 Mar 2009 21:17:57 -0400 Subject: [Numpy-discussion] Normalization of ifft In-Reply-To: References: Message-ID: <20090326211757.2376e3fc.michael.s.gilbert@gmail.com> On Thu, 26 Mar 2009 16:56:13 -0700 Lutz Maibaum wrote: > Hello, > > I just started to use python and numpy for some numerical analysis. I > have a question about the definition of the inverse Fourier transform. > The user guide gives the formula (p. 180) > > x[m] = Sum_k X[k] exp(j 2pi k m / n) > > where X[k] are the Fourier coefficients, and n is the length of the arrays. > > The online documentation > (http://docs.scipy.org/doc/numpy/reference/routines.fft.html), on the > other hand, states that there is an additional factor of 1/n, which is > required to make ifft() the inverse of fft(). Is this a misprint in > the user guide? This documentation is saying that the difference between the equations for the fft and ifft is a factor of 1/n (not the numpy implementations). If you do output = numpy.ifft( numpy.fft( input ) ) and you get output = input, then the normalizations are appropriately weighted. The "correct" normalization (from a mathematician's viewpoint) is actually 1/sqrt(n), so that the fft is the same function as the ifft, but computer implementations tend not to do this since the sqrt takes a lot more operations than plain old 1/n. mike From charlesr.harris at gmail.com Thu Mar 26 21:34:29 2009 From: charlesr.harris at gmail.com (Charles R Harris) Date: Thu, 26 Mar 2009 19:34:29 -0600 Subject: [Numpy-discussion] Normalization of ifft In-Reply-To: <20090326211757.2376e3fc.michael.s.gilbert@gmail.com> References: <20090326211757.2376e3fc.michael.s.gilbert@gmail.com> Message-ID: On Thu, Mar 26, 2009 at 7:17 PM, Michael Gilbert < michael.s.gilbert at gmail.com> wrote: > On Thu, 26 Mar 2009 16:56:13 -0700 Lutz Maibaum wrote: > > > Hello, > > > > I just started to use python and numpy for some numerical analysis. I > > have a question about the definition of the inverse Fourier transform. > > The user guide gives the formula (p. 180) > > > > x[m] = Sum_k X[k] exp(j 2pi k m / n) > > > > where X[k] are the Fourier coefficients, and n is the length of the > arrays. > > > > The online documentation > > (http://docs.scipy.org/doc/numpy/reference/routines.fft.html), on the > > other hand, states that there is an additional factor of 1/n, which is > > required to make ifft() the inverse of fft(). Is this a misprint in > > the user guide? > > This documentation is saying that the difference between the equations > for the fft and ifft is a factor of 1/n (not the numpy implementations). > If you do > > output = numpy.ifft( numpy.fft( input ) ) > > and you get output = input, then the normalizations are appropriately > weighted. > > The "correct" normalization (from a mathematician's viewpoint) is actually > 1/sqrt(n), so that the fft is the same function as the ifft, but > computer implementations tend not to do this since the sqrt takes a lot > more operations than plain old 1/n.
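The round trip and the placement of the 1/n factor are easy to verify directly; a short sketch (not from the thread, any numpy with np.fft):

[code]
import numpy as np

x = np.random.rand(8)
X = np.fft.fft(x)                       # forward transform, no normalization
print np.allclose(np.fft.ifft(X), x)    # True: ifft carries the 1/n factor

# Rebuilding ifft by hand from the unnormalized sum makes the 1/n explicit.
n = len(x)
k = np.arange(n)
manual = np.array([(X * np.exp(2j * np.pi * k * m / n)).sum() for m in range(n)]) / n
print np.allclose(manual, x)            # True
[/code]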
> > mike > _______________________________________________ > Numpy-discussion mailing list > Numpy-discussion at scipy.org > http://mail.scipy.org/mailman/listinfo/numpy-discussion > -------------- next part -------------- An HTML attachment was scrubbed... URL: From lutz.maibaum at gmail.com Thu Mar 26 21:37:32 2009 From: lutz.maibaum at gmail.com (Lutz Maibaum) Date: Thu, 26 Mar 2009 18:37:32 -0700 Subject: [Numpy-discussion] Normalization of ifft In-Reply-To: <20090326211757.2376e3fc.michael.s.gilbert@gmail.com> References: <20090326211757.2376e3fc.michael.s.gilbert@gmail.com> Message-ID: Hi Michael, > This documentation is saying that the difference between the equations > for the fft and ifft is a factor of 1/n (not the numpy implementations). > If you do > > output = numpy.ifft( numpy.fft( input ) ) > > and you get output = input, then the normalizations are appropriately > weighted. What you say is of course correct, but I am wondering if there is a mistake in the user guide (p. 180 of http://numpy.scipy.org/numpybook.pdf): according to the expressions in the user guide, both fft and ifft are not normalized. The implementation of ifft, on the other hand, has the additional 1/n factor, consistent with the online documentation. Lutz From simpson at math.toronto.edu Thu Mar 26 22:02:26 2009 From: simpson at math.toronto.edu (Gideon Simpson) Date: Thu, 26 Mar 2009 22:02:26 -0400 Subject: [Numpy-discussion] Normalization of ifft In-Reply-To: References: Message-ID: <4611501B-F600-412B-AA7C-CBAB8ABFDA33@math.toronto.edu> I thought it was the same as the MATLAB format: http://www.mathworks.com/access/helpdesk/help/techdoc/ref/fft.html On Mar 26, 2009, at 7:56 PM, Lutz Maibaum wrote: > Hello, > > I just started to use python and numpy for some numerical analysis. I > have a question about the definition of the inverse Fourier transform. > The user guide gives the formula (p. 180) > > x[m] = Sum_k X[k] exp(j 2pi k m / n) > > where X[k] are the Fourier coefficients, and n is the length of the > arrays. > > The online documentation > (http://docs.scipy.org/doc/numpy/reference/routines.fft.html), on the > other hand, states that there is an additional factor of 1/n, which is > required to make ifft() the inverse of fft(). Is this a misprint in > the user guide? > > Lutz > _______________________________________________ > Numpy-discussion mailing list > Numpy-discussion at scipy.org > http://mail.scipy.org/mailman/listinfo/numpy-discussion -gideon From lutz.maibaum at gmail.com Thu Mar 26 22:10:31 2009 From: lutz.maibaum at gmail.com (Lutz Maibaum) Date: Thu, 26 Mar 2009 19:10:31 -0700 Subject: [Numpy-discussion] Normalization of ifft In-Reply-To: <4611501B-F600-412B-AA7C-CBAB8ABFDA33@math.toronto.edu> References: <4611501B-F600-412B-AA7C-CBAB8ABFDA33@math.toronto.edu> Message-ID: On Thu, Mar 26, 2009 at 7:02 PM, Gideon Simpson wrote: > I thought it was the same as the MATLAB format: > > http://www.mathworks.com/access/helpdesk/help/techdoc/ref/fft.html I believe this is true for the implementation, but I think the description of ifft in the NumPy User Guide might be incorrect.
Lutz From christian at marquardt.sc Thu Mar 26 22:28:50 2009 From: christian at marquardt.sc (Christian Marquardt) Date: Fri, 27 Mar 2009 03:28:50 +0100 (GMT+01:00) Subject: [Numpy-discussion] Numpy v1.3.0b1 on Linux w/ Intel compilers - unknown file type In-Reply-To: <24246665.1371238120447953.JavaMail.root@athene> Message-ID: <33451475.1391238120930883.JavaMail.root@athene> Hmm. I downloaded the beta tar file and started from the untarred contents plus a patch for the Intel compilers (some changes of the command line arguments for the compiler and a added setup.cfg file specifying the paths to the Intel MKL libraries) which applied cleanly. I then ran python setup.py config --compiler=intel config_fc --fcompiler=intel build_clib --compiler=intel build_ext --compiler=intel install which failed. After playing around a bit, I found that it seems that the build_clib --compiler=intel subcommand which causes the trouble; after disabling it, that is with python setup.py config --compiler=intel config_fc --fcompiler=intel build_ext --compiler=intel install things compile fine - and all but four of the unit tests fail (test_linalg.TestEigh and test_linalg.TestEigvalsh in both test_csingle and test_cdouble - should I be worried?) How are the .src files converted? Many thanks, Christian. ----- "Charles R Harris" wrote: > > > > On Thu, Mar 26, 2009 at 6:25 PM, Christian Marquardt < christian at marquardt.sc > wrote: > Hello, > > I tried to compile and install numpy 1.3.0b1 on a Suse Linux 10.3 with Python > 2.5.x and an Intel C and Fortran compilers 10.1 as well as the MKL 10.0. The > distutils do find the compilers and the MKL (when using similar settings as I > used successfully for all previous numpy versions sonce 1.0.4 or so), but then > bail out with the following error: > > ...>python setup.py install > > [...] > > running config > running config_fc > unifing config_fc, config, build_clib, build_ext, build commands --fcompiler options > running build_clib > Found executable /opt/intel/cc/10.1.018/bin/icc > Could not locate executable ecc > customize IntelCCompiler > customize IntelCCompiler using build_clib > building 'npymath' library > compiling C sources > C compiler: icc > > error: unknown file type '.src' (from 'numpy/core/src/npy_math.c.src') > > I think the error message does not even come from the compiler... > > I'm lost... What does it mean, and why are there source files named ...c.src? > > That file should be preprocessed to produce npy_math.c which ends up in the build directory. I don't know what is going on here, but you might first try deleting the build directory just to see what happens. There might be some setup file that is screwy/outdated also. Did you download the beta and do a clean extract? > > Chuck > > > > _______________________________________________ Numpy-discussion mailing list Numpy-discussion at scipy.org http://mail.scipy.org/mailman/listinfo/numpy-discussion -- Dr. Christian Marquardt Email: christian at marquardt.sc Wilhelm-Leuschner-Str. 27 Tel.: +49 (0) 6151 95 13 776 64293 Darmstadt Mobile: +49 (0) 179 290 84 74 Germany Fax: +49 (0) 6151 95 13 885 -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From christian at marquardt.sc Thu Mar 26 22:45:04 2009 From: christian at marquardt.sc (Christian Marquardt) Date: Fri, 27 Mar 2009 03:45:04 +0100 (GMT+01:00) Subject: [Numpy-discussion] numpy v1.2.0 vs 1.2.1 and setup_tools In-Reply-To: <21297350.1441238121732313.JavaMail.root@athene> Message-ID: <4181977.1461238121904021.JavaMail.root@athene> v1.3.0b1 has the same problem - setup_tools doesn't seem to recognize that numpy is already installed in the system's site-packages directory. Maybe I should add that I'm using virtualenv to generate a test environment which includes the systems site-packages; setup_tools does seem recognize other packages which are only available there (e.g., scipy or netCDF4 if specified as a requirement for the install... strange. It also doesn't seem to work for tables (2.1, so not the newest version)). Any ideas on what might be going on? Thanks a lot, Christian. ----- "Christian Marquardt" wrote: > Hello! > > I ran into the following problem: > > I have easy_installable packages which list numpy as a dependency. > numpy itself is installed in the system's site-packages directory and > works fine. > > When running a python setup.py install of the package with numpy > v1.2.0 installed, everything works fine. When running the same command > with numpy 1.2.1 installed, it tries to download a numpy tar file from > Pypi and to compile and install it again. It looks as if v1.2.1 isn't > providing the relevant information to the setup tools, but 1.2.0 did. > > I don't know about v1.3.0b1 yet - I have difficulties to compile that > currently (another email). I'd be more than willing to track this > down, but is there anybody who could give me a starting point where I > should start to look? > > Many thanks, > > Christian. > _______________________________________________ > Numpy-discussion mailing list > Numpy-discussion at scipy.org > http://mail.scipy.org/mailman/listinfo/numpy-discussion -- Dr. Christian Marquardt Email: christian at marquardt.sc Wilhelm-Leuschner-Str. 27 Tel.: +49 (0) 6151 95 13 776 64293 Darmstadt Mobile: +49 (0) 179 290 84 74 Germany Fax: +49 (0) 6151 95 13 885 From charlesr.harris at gmail.com Thu Mar 26 22:45:22 2009 From: charlesr.harris at gmail.com (Charles R Harris) Date: Thu, 26 Mar 2009 20:45:22 -0600 Subject: [Numpy-discussion] Numpy v1.3.0b1 on Linux w/ Intel compilers - unknown file type In-Reply-To: <33451475.1391238120930883.JavaMail.root@athene> References: <24246665.1371238120447953.JavaMail.root@athene> <33451475.1391238120930883.JavaMail.root@athene> Message-ID: 2009/3/26 Christian Marquardt > Hmm. > > I downloaded the beta tar file and started from the untarred contents plus > a patch for the Intel compilers > (some changes of the command line arguments for the compiler and a added > setup.cfg file specifying the > paths to the Intel MKL libraries) which applied cleanly. I then ran > > python setup.py config --compiler=intel config_fc --fcompiler=intel > build_clib --compiler=intel build_ext --compiler=intel install > > which failed. > > After playing around a bit, I found that it seems that the build_clib > --compiler=intel subcommand which > causes the trouble; after disabling it, that is with > > python setup.py config --compiler=intel config_fc --fcompiler=intel > build_ext --compiler=intel install > > things compile fine - and all but four of the unit tests fail > (test_linalg.TestEigh and test_linalg.TestEigvalsh > in both test_csingle and test_cdouble - should I be worried?) 
> Four unit tests fail, or all fail except four? I assume you meant the former. I'm not sure what the failures mean, can you check if they are really bad or just some numbers a little bit off. I'm guessing these routines are calling into MKL. > > How are the .src files converted? > The *.xxx.src files are templates that are processed by numpy/distutils/conv_template.py to produce *.xxx files. When you have to repeat basically the same code with umpteen different types a bit of automation helps. The actual conversion is controlled by the setup/scons files, I don't remember exactly where. Chuck -------------- next part -------------- An HTML attachment was scrubbed... URL: From christian at marquardt.sc Thu Mar 26 22:49:54 2009 From: christian at marquardt.sc (Christian Marquardt) Date: Fri, 27 Mar 2009 03:49:54 +0100 (GMT+01:00) Subject: [Numpy-discussion] Numpy v1.3.0b1 on Linux w/ Intel compilers - unknown file type In-Reply-To: <33058402.1491238122047172.JavaMail.root@athene> Message-ID: <17630460.1511238122194646.JavaMail.root@athene> Oh sorry - you are right (too late in the night here in Europe). The output is similar in all four cases - it looks like AssertionError: Arrays are not almost equal (mismatch 100.0%) x: array([ 4.60555124+0.j, -2.60555124+0.j], dtype=complex64) y: array([-2.60555124 +1.11022302e-16j, 4.60555124 -1.11022302e-16j], dtype=complex64) Are x and y the expected and actual results? That would just show that there are small rounding errors in the imaginary part, and that MKL returns the results in another order, no? ----- "Charles R Harris" wrote: > > > > 2009/3/26 Christian Marquardt < christian at marquardt.sc > > > > Hmm. > > I downloaded the beta tar file and started from the untarred contents plus a patch for the Intel compilers > (some changes of the command line arguments for the compiler and a added setup.cfg file specifying the > paths to the Intel MKL libraries) which applied cleanly. I then ran > > python setup.py config --compiler=intel config_fc --fcompiler=intel build_clib --compiler=intel build_ext --compiler=intel install > > which failed. > > After playing around a bit, I found that it seems that the build_clib --compiler=intel subcommand which > causes the trouble; after disabling it, that is with > > python setup.py config --compiler=intel config_fc --fcompiler=intel build_ext --compiler=intel install > > things compile fine - and all but four of the unit tests fail (test_linalg.TestEigh and test_linalg.TestEigvalsh > in both test_csingle and test_cdouble - should I be worried?) > > Four unit tests fail, or all fail except four? I assume you meant the former. I'm not sure what the failures mean, can you check if they are really bad or just some numbers a little bit off. I'm guessing these routines are calling into MKL. > > > > > How are the .src files converted? > > The *.xxx.src files are templates that are processed by numpy/distutils/conv_template.py to produce *.xxx files. When you have to repeat basically the same code with umpteen different types a bit of automation helps. The actual conversion is controlled by the setup/scons files, I don't remember exactly where. > > Chuck > > > > _______________________________________________ Numpy-discussion mailing list Numpy-discussion at scipy.org http://mail.scipy.org/mailman/listinfo/numpy-discussion -- Dr. Christian Marquardt Email: christian at marquardt.sc Wilhelm-Leuschner-Str. 
27 Tel.: +49 (0) 6151 95 13 776 64293 Darmstadt Mobile: +49 (0) 179 290 84 74 Germany Fax: +49 (0) 6151 95 13 885 -------------- next part -------------- An HTML attachment was scrubbed... URL: From charlesr.harris at gmail.com Thu Mar 26 22:56:47 2009 From: charlesr.harris at gmail.com (Charles R Harris) Date: Thu, 26 Mar 2009 20:56:47 -0600 Subject: [Numpy-discussion] Changeset 6729 Message-ID: Hi Stephan, You can actually change these functions to use fmax.reduce/fmin.reduce and get about a 50% speedup. Chuck -------------- next part -------------- An HTML attachment was scrubbed... URL: From charlesr.harris at gmail.com Thu Mar 26 23:06:13 2009 From: charlesr.harris at gmail.com (Charles R Harris) Date: Thu, 26 Mar 2009 21:06:13 -0600 Subject: [Numpy-discussion] Numpy v1.3.0b1 on Linux w/ Intel compilers - unknown file type In-Reply-To: <17630460.1511238122194646.JavaMail.root@athene> References: <33058402.1491238122047172.JavaMail.root@athene> <17630460.1511238122194646.JavaMail.root@athene> Message-ID: 2009/3/26 Christian Marquardt > Oh sorry - you are right (too late in the night here in Europe). > > The output is similar in all four cases - it looks like > > AssertionError: > Arrays are not almost equal > > (mismatch 100.0%) > x: array([ 4.60555124+0.j, -2.60555124+0.j], dtype=complex64) > y: array([-2.60555124 +1.11022302e-16j, 4.60555124 -1.11022302e-16j], > dtype=complex64) > > Are x and y the expected and actual results? That would just show that > there > are small rounding errors in the imaginary part, and that MKL returns the > results > in another order, no? > Looks like a sorting error, the eigen values should be sorted. So it looks like a buggy test from here. Having an imaginary part to the eigenvalues returned by a routine that is supposed to process Hermitean matrices doesn't look right, but the errors are in the double precision range, which is pretty good for float32. I think we need a ticket to fix those tests. Chuck -------------- next part -------------- An HTML attachment was scrubbed... URL: From jh at physics.ucf.edu Thu Mar 26 23:35:31 2009 From: jh at physics.ucf.edu (Joe Harrington) Date: Thu, 26 Mar 2009 23:35:31 -0400 Subject: [Numpy-discussion] Normalization of ifft In-Reply-To: (numpy-discussion-request@scipy.org) References: Message-ID: Hi Lutz, > what you say is of course correct, but I am wondering if there is a > mistake in the user guide (p. 180 of > http://numpy.scipy.org/numpybook.pdf): according to the expressions in > the user guide, both fft and ifft are not normalized. The > implementation if ifft, on the other hand, has the additional 1/n > factor, consistent with the online documentation. You are looking at Travis Oliphant's book Guide to NumPy, last updated 2006. The routine docstrings (the "help" text) are now maintained by the community at docs.scipy.org. They get synced into the source repository relatively often. They are also the sources to the routine docs presented in the NumPy Reference Guide, also available at that site. Travis has freed his original book and large parts of it (e.g., the C API docs) are now being incorporated into the actively-maintained manuals at docs.scipy.org. Please go there for the latest docs. You'll find that the fft section gives the 1/n formula when discussing ifft. All, I can see where Lutz got the impression that Guide to Numpy was the doc to read. The descriptions of books on both numpy.scipy.org and docs.scipy.org do give that impression. 
But, Guide is "mature" because its scope was (necessarily) limited. At this point the Reference Guide, while not complete because of its more-ambitious scope, has every docstring in Guide to Numpy, and has substantially more complete and accurate pages for a large number of functions, and much additional text that Guide to Numpy does not have. I haven't checked in detail but much of the rest of Guide to Numpy is now included in the Reference Guide. Would it be ok to put some words on both sites to the effect that the RG is the place to go for routine, class, and module docs, or (possibly) just the place to go, period? I don't want to downplay Travis's contribution; Guide was *very* useful and it lives on in the work descended from it. But, I think the audience that should read it first is relatively limited now. --jh-- From charlesr.harris at gmail.com Fri Mar 27 00:14:16 2009 From: charlesr.harris at gmail.com (Charles R Harris) Date: Thu, 26 Mar 2009 22:14:16 -0600 Subject: [Numpy-discussion] Changeset 6729 In-Reply-To: References: Message-ID: On Thu, Mar 26, 2009 at 8:56 PM, Charles R Harris wrote: > Hi Stephan, > > You can actually change these functions to use fmax.reduce/fmin.reduce and > get about a 50% speedup. > Also, the test is buggy. Chuck -------------- next part -------------- An HTML attachment was scrubbed... URL: From charlesr.harris at gmail.com Fri Mar 27 00:21:26 2009 From: charlesr.harris at gmail.com (Charles R Harris) Date: Thu, 26 Mar 2009 22:21:26 -0600 Subject: [Numpy-discussion] Numpy v1.3.0b1 on Linux w/ Intel compilers - unknown file type In-Reply-To: References: <33058402.1491238122047172.JavaMail.root@athene> <17630460.1511238122194646.JavaMail.root@athene> Message-ID: On Thu, Mar 26, 2009 at 9:06 PM, Charles R Harris wrote: > > > 2009/3/26 Christian Marquardt > >> Oh sorry - you are right (too late in the night here in Europe). >> >> The output is similar in all four cases - it looks like >> >> AssertionError: >> Arrays are not almost equal >> >> (mismatch 100.0%) >> x: array([ 4.60555124+0.j, -2.60555124+0.j], dtype=complex64) >> y: array([-2.60555124 +1.11022302e-16j, 4.60555124 -1.11022302e-16j], >> dtype=complex64) >> >> Are x and y the expected and actual results? That would just show that >> there >> are small rounding errors in the imaginary part, and that MKL returns the >> results >> in another order, no? >> > > Looks like a sorting error, the eigen values should be sorted. So it looks > like a buggy test from here. Having an imaginary part to the eigenvalues > returned by a routine that is supposed to process Hermitean matrices doesn't > look right, but the errors are in the double precision range, which is > pretty good for float32. > > I think we need a ticket to fix those tests. > Can you post the actual error messages? It will make it easier to find where the failure is. Chuck -------------- next part -------------- An HTML attachment was scrubbed... URL: From cournape at gmail.com Fri Mar 27 01:37:42 2009 From: cournape at gmail.com (David Cournapeau) Date: Fri, 27 Mar 2009 14:37:42 +0900 Subject: [Numpy-discussion] Numpy v1.3.0b1 on Linux w/ Intel compilers - unknown file type In-Reply-To: <33451475.1391238120930883.JavaMail.root@athene> References: <24246665.1371238120447953.JavaMail.root@athene> <33451475.1391238120930883.JavaMail.root@athene> Message-ID: <5b8d13220903262237s2a7b6ffblce95618f7743d06d@mail.gmail.com> 2009/3/27 Christian Marquardt : > Hmm. 
> > I downloaded the beta tar file and started from the untarred contents plus a > patch for the Intel compilers > (some changes of the command line arguments for the compiler and a added > setup.cfg file specifying the > paths to the Intel MKL libraries) which applied cleanly. I then ran > > ?? python setup.py config --compiler=intel config_fc --fcompiler=intel > build_clib --compiler=intel build_ext --compiler=intel install > > which failed. > > After playing around a bit, I found that it seems that the build_clib > --compiler=intel subcommand which > causes the trouble; after disabling it, that is with I *guess* that the compiler command line does not work with your changes, and that distutils got confused, and fails somewhere later (or sooner, who knows). Without actually seeing the errors you got, it is difficult to know more - but I would make sure the command line arguments are ok instead of focusing on the .src error, cheers, David From dwf at cs.toronto.edu Fri Mar 27 03:46:14 2009 From: dwf at cs.toronto.edu (David Warde-Farley) Date: Fri, 27 Mar 2009 03:46:14 -0400 Subject: [Numpy-discussion] Moved python install, import errors Message-ID: <4C9C48A7-0F3E-451C-ACDD-7921D57D85B5@cs.toronto.edu> Hi all, I built ATLAS, Python 2.5 and NumPy on the local disk of a cluster node, so that disk access would be faster than over NFS, and then moved it back. I made sure to modify all the relevant paths in __config__.py but when importing I receive this error, which I can't make heads or tails of, since core/ does contain an __init__.py. Has anyone seen anything like this before? Thanks, David In [1]: import numpy --------------------------------------------------------------------------- AttributeError Traceback (most recent call last) /home/dwf/ in () /home/dwf/software/Python-2.5.4/lib/python2.5/site-packages/numpy/ __init__.py in () 128 return loader(*packages, **options) 129 --> 130 import add_newdocs 131 __all__ = ['add_newdocs'] 132 /home/dwf/software/Python-2.5.4/lib/python2.5/site-packages/numpy/ add_newdocs.py in () 7 # core/fromnumeric.py, core/defmatrix.py up-to-date. 8 ----> 9 from lib import add_newdoc 10 11 ############################################################################### /home/dwf/software/Python-2.5.4/lib/python2.5/site-packages/numpy/lib/ __init__.py in () 11 12 import scimath as emath ---> 13 from polynomial import * 14 from machar import * 15 from getlimits import * /home/dwf/software/Python-2.5.4/lib/python2.5/site-packages/numpy/lib/ polynomial.py in () 9 import re 10 import warnings ---> 11 import numpy.core.numeric as NX 12 13 from numpy.core import isscalar, abs AttributeError: 'module' object has no attribute 'core' From pav at iki.fi Fri Mar 27 06:53:16 2009 From: pav at iki.fi (Pauli Virtanen) Date: Fri, 27 Mar 2009 10:53:16 +0000 (UTC) Subject: [Numpy-discussion] Normalization of ifft References: <4611501B-F600-412B-AA7C-CBAB8ABFDA33@math.toronto.edu> Message-ID: Thu, 26 Mar 2009 19:10:31 -0700, Lutz Maibaum wrote: > On Thu, Mar 26, 2009 at 7:02 PM, Gideon Simpson > wrote: >> I thought it was the same as the MATLAB format: >> >> http://www.mathworks.com/access/helpdesk/help/techdoc/index.html?/ access/helpdesk/help/techdoc/ref/fft.html&http://www.google.com/search >> ?client=safari&rls=en-us&q=MATLAB+fft&ie=UTF-8&oe=UTF-8 > > I believe this is true for the implementation, but I think the > description of ifft in the NumPy User Guide might be incorrect. 
Yes, the description of ifft in the "Guide to NumPy" book is probably incorrect: >>> np.fft.ifft([1,0,0,0]) array([ 0.25+0.j, 0.25+0.j, 0.25+0.j, 0.25+0.j]) whereas that of the online reference guide is correct. (To avoid confusion between the different docs, it's probably best to use refer to the ebook by its name.) -- Pauli Virtanen From david at ar.media.kyoto-u.ac.jp Fri Mar 27 06:48:28 2009 From: david at ar.media.kyoto-u.ac.jp (David Cournapeau) Date: Fri, 27 Mar 2009 19:48:28 +0900 Subject: [Numpy-discussion] Is it ok to include GPL scripts in the numpy *repository* ? Message-ID: <49CCAEFC.7050901@ar.media.kyoto-u.ac.jp> Hi, To build the numpy .dmg mac os x installer, I use a script from the adium project, which uses applescript and some mac os x black magic. The script seems to be GPL, as adium itself: http://trac.adiumx.com/browser/trunk/Release For now, I keep the build scripts separately from the svn repository, but it would be more practical if everything was together. As far as I understand, the GPL does not apply to the output of some build scripts, and as such, nothing would be "tainted" by the GPL - is this right ? Would it be problematic to put those in the svn repo ? cheers, David From aisaac at american.edu Fri Mar 27 07:44:07 2009 From: aisaac at american.edu (Alan G Isaac) Date: Fri, 27 Mar 2009 07:44:07 -0400 Subject: [Numpy-discussion] Is it ok to include GPL scripts in the numpy *repository* ? In-Reply-To: <49CCAEFC.7050901@ar.media.kyoto-u.ac.jp> References: <49CCAEFC.7050901@ar.media.kyoto-u.ac.jp> Message-ID: <49CCBC07.6000607@american.edu> On 3/27/2009 6:48 AM David Cournapeau apparently wrote: > To build the numpy .dmg mac os x installer, I use a script from the > adium project, which uses applescript and some mac os x black magic. The > script seems to be GPL, as adium itself: It might be worth a query to see if the author would release just this script under the modified BSD license. http://trac.adiumx.com/wiki/ContactUs Alan Isaac From yves.frederix at gmail.com Fri Mar 27 07:49:53 2009 From: yves.frederix at gmail.com (Yves Frederix) Date: Fri, 27 Mar 2009 12:49:53 +0100 Subject: [Numpy-discussion] Behavior of numpy.random.exponential Message-ID: <62e6eafb0903270449g7b33b3bbge412b2c13308df6e@mail.gmail.com> Hi, I noticed a problem with numpy.random.exponential. Apparently, the samples generated by numpy.random.exponential(scale=scale) follow the distribution f(x)=1/scale*exp(-x/scale) (and not f(x)=scale*exp(-x*scale) as stated by the docstring). The script below illustrates this. -- import numpy as N import pylab as pl print N.__version__ pl.figure() lamda = 2. noise_modulus = N.random.exponential(scale=lamda,\ size=(100000,)) #noise_modulus = -N.log(N.random.uniform(size=(100000,)))/lamda # this works y_hist, x_hist = N.histogram(noise_modulus, bins=51,\ normed=True, new=True) x_pl = N.linspace(0, x_hist.max()) pl.semilogy(x_hist[0:-1], y_hist, label='Empirical, lambda=%s' % lamda) pl.semilogy(x_pl, lamda * N.exp(-x_pl*lamda), ':', \ label='exact, lambda=%s' % lamda) pl.semilogy(x_pl, 1./lamda * N.exp(-x_pl*1./lamda), ':', \ label='exact, lambda=1/%s' % lamda) pl.legend(loc='best') pl.show() -- Could this be a bug? 
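The observed sampling is in fact consistent with how numpy parameterizes the distribution: scale is the mean, i.e. beta = 1/lambda, so samples follow f(x) = exp(-x/scale)/scale, which suggests the docstring wording, not the sampler, is what needs fixing. A quick sanity check (not from the thread):

[code]
import numpy as np

# With scale=2.0 the sample mean should be near 2.0 (beta = 1/lambda = 2),
# not near 0.5 as the docstring's f(x) = scale*exp(-x*scale) would imply.
s = np.random.exponential(scale=2.0, size=1000000)
print s.mean()
[/code]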
I also checked with the latest svn version: In [1]: import numpy; numpy.__version__ Out[1]: '1.4.0.dev6731' Best, YVES From stefan at sun.ac.za Fri Mar 27 08:18:15 2009 From: stefan at sun.ac.za (=?ISO-8859-1?Q?St=E9fan_van_der_Walt?=) Date: Fri, 27 Mar 2009 14:18:15 +0200 Subject: [Numpy-discussion] Is it ok to include GPL scripts in the numpy *repository* ? In-Reply-To: <49CCBC07.6000607@american.edu> References: <49CCAEFC.7050901@ar.media.kyoto-u.ac.jp> <49CCBC07.6000607@american.edu> Message-ID: <9457e7c80903270518q71044da9k142a26135a62e57d@mail.gmail.com> 2009/3/27 Alan G Isaac : > On 3/27/2009 6:48 AM David Cournapeau apparently wrote: >> ? ? To build the numpy .dmg mac os x installer, I use a script from the >> adium project, which uses applescript and some mac os x black magic. The >> script seems to be GPL, as adium itself: > > > It might be worth a query to see if the > author would release just this script > under the modified BSD license. > http://trac.adiumx.com/wiki/ContactUs I don't see the need. This is just a tool, of which the source code, as well as our modifications, are available. We don't link to it, we don't derive anything in NumPy from it and we do not distribute it, so we are not in any disagreement with the GPL. Regards St?fan From stefan at sun.ac.za Fri Mar 27 08:22:27 2009 From: stefan at sun.ac.za (=?ISO-8859-1?Q?St=E9fan_van_der_Walt?=) Date: Fri, 27 Mar 2009 14:22:27 +0200 Subject: [Numpy-discussion] Changeset 6729 In-Reply-To: References: Message-ID: <9457e7c80903270522n1fdb1d9m2179938bb81eca2f@mail.gmail.com> Hi Chuck 2009/3/27 Charles R Harris : > Also, the test is buggy. Could you be a bit more specific? Which test, what is the problem, what would you like to see? Cheers St?fan From christian at marquardt.sc Fri Mar 27 08:42:45 2009 From: christian at marquardt.sc (Christian Marquardt) Date: Fri, 27 Mar 2009 13:42:45 +0100 (GMT+01:00) Subject: [Numpy-discussion] Numpy v1.3.0b1 on Linux w/ Intel compilers - unknown file type In-Reply-To: <17048133.1591238157753233.JavaMail.root@athene> Message-ID: <7827114.1611238157765377.JavaMail.root@athene> Error messages? 
Sure;-) python -c 'import numpy; numpy.test()' Running unit tests for numpy NumPy version 1.3.0b1 NumPy is installed in /opt/apps/lib/python2.5/site-packages/numpy Python version 2.5.2 (r252:60911, Aug 31 2008, 15:16:34) [GCC Intel(R) C++ gcc 4.2 mode] nose version 0.10.4 .......................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................K.........................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................FF..............FF.......................................................................................................................................................................................................................................................................................................................................................................... 
====================================================================== FAIL: test_cdouble (test_linalg.TestEigh) ---------------------------------------------------------------------- Traceback (most recent call last): File "/opt/apps/lib/python2.5/site-packages/numpy/linalg/tests/test_linalg.py", line 221, in test_cdouble self.do(a) File "/opt/apps/lib/python2.5/site-packages/numpy/linalg/tests/test_linalg.py", line 259, in do assert_almost_equal(ev, evalues) File "/opt/apps/lib/python2.5/site-packages/numpy/linalg/tests/test_linalg.py", line 23, in assert_almost_equal old_assert_almost_equal(a, b, decimal=decimal, **kw) File "/opt/apps/lib/python2.5/site-packages/numpy/testing/utils.py", line 215, in assert_almost_equal return assert_array_almost_equal(actual, desired, decimal, err_msg) File "/opt/apps/lib/python2.5/site-packages/numpy/testing/utils.py", line 321, in assert_array_almost_equal header='Arrays are not almost equal') File "/opt/apps/lib/python2.5/site-packages/numpy/testing/utils.py", line 302, in assert_array_compare raise AssertionError(msg) AssertionError: Arrays are not almost equal (mismatch 100.0%) x: array([ 4.60555128, -2.60555128]) y: array([-2.60555128 +1.11022302e-16j, 4.60555128 -1.11022302e-16j]) ====================================================================== FAIL: test_csingle (test_linalg.TestEigh) ---------------------------------------------------------------------- Traceback (most recent call last): File "/opt/apps/lib/python2.5/site-packages/numpy/linalg/tests/test_linalg.py", line 217, in test_csingle self.do(a) File "/opt/apps/lib/python2.5/site-packages/numpy/linalg/tests/test_linalg.py", line 259, in do assert_almost_equal(ev, evalues) File "/opt/apps/lib/python2.5/site-packages/numpy/linalg/tests/test_linalg.py", line 23, in assert_almost_equal old_assert_almost_equal(a, b, decimal=decimal, **kw) File "/opt/apps/lib/python2.5/site-packages/numpy/testing/utils.py", line 215, in assert_almost_equal return assert_array_almost_equal(actual, desired, decimal, err_msg) File "/opt/apps/lib/python2.5/site-packages/numpy/testing/utils.py", line 321, in assert_array_almost_equal header='Arrays are not almost equal') File "/opt/apps/lib/python2.5/site-packages/numpy/testing/utils.py", line 302, in assert_array_compare raise AssertionError(msg) AssertionError: Arrays are not almost equal (mismatch 100.0%) x: array([ 4.60555124, -2.60555124], dtype=float32) y: array([-2.60555124 +1.11022302e-16j, 4.60555124 -1.11022302e-16j], dtype=complex64) ====================================================================== FAIL: test_cdouble (test_linalg.TestEigvalsh) ---------------------------------------------------------------------- Traceback (most recent call last): File "/opt/apps/lib/python2.5/site-packages/numpy/linalg/tests/test_linalg.py", line 221, in test_cdouble self.do(a) File "/opt/apps/lib/python2.5/site-packages/numpy/linalg/tests/test_linalg.py", line 249, in do assert_almost_equal(ev, evalues) File "/opt/apps/lib/python2.5/site-packages/numpy/linalg/tests/test_linalg.py", line 23, in assert_almost_equal old_assert_almost_equal(a, b, decimal=decimal, **kw) File "/opt/apps/lib/python2.5/site-packages/numpy/testing/utils.py", line 215, in assert_almost_equal return assert_array_almost_equal(actual, desired, decimal, err_msg) File "/opt/apps/lib/python2.5/site-packages/numpy/testing/utils.py", line 321, in assert_array_almost_equal header='Arrays are not almost equal') File "/opt/apps/lib/python2.5/site-packages/numpy/testing/utils.py", line 302, in 
assert_array_compare raise AssertionError(msg) AssertionError: Arrays are not almost equal (mismatch 100.0%) x: array([ 4.60555128+0.j, -2.60555128+0.j]) y: array([-2.60555128 +1.11022302e-16j, 4.60555128 -1.11022302e-16j]) ====================================================================== FAIL: test_csingle (test_linalg.TestEigvalsh) ---------------------------------------------------------------------- Traceback (most recent call last): File "/opt/apps/lib/python2.5/site-packages/numpy/linalg/tests/test_linalg.py", line 217, in test_csingle self.do(a) File "/opt/apps/lib/python2.5/site-packages/numpy/linalg/tests/test_linalg.py", line 249, in do assert_almost_equal(ev, evalues) File "/opt/apps/lib/python2.5/site-packages/numpy/linalg/tests/test_linalg.py", line 23, in assert_almost_equal old_assert_almost_equal(a, b, decimal=decimal, **kw) File "/opt/apps/lib/python2.5/site-packages/numpy/testing/utils.py", line 215, in assert_almost_equal return assert_array_almost_equal(actual, desired, decimal, err_msg) File "/opt/apps/lib/python2.5/site-packages/numpy/testing/utils.py", line 321, in assert_array_almost_equal header='Arrays are not almost equal') File "/opt/apps/lib/python2.5/site-packages/numpy/testing/utils.py", line 302, in assert_array_compare raise AssertionError(msg) AssertionError: Arrays are not almost equal (mismatch 100.0%) x: array([ 4.60555124+0.j, -2.60555124+0.j], dtype=complex64) y: array([-2.60555124 +1.11022302e-16j, 4.60555124 -1.11022302e-16j], dtype=complex64) ---------------------------------------------------------------------- Ran 2029 tests in 19.729s FAILED (KNOWNFAIL=1, failures=4) MEDEA /home/marq> ----- "Charles R Harris" wrote: > > > > On Thu, Mar 26, 2009 at 9:06 PM, Charles R Harris < charlesr.harris at gmail.com > wrote: > > > > > 2009/3/26 Christian Marquardt < christian at marquardt.sc > > > Oh sorry - you are right (too late in the night here in Europe). > > > The output is similar in all four cases - it looks like > > AssertionError: > Arrays are not almost equal > > (mismatch 100.0%) > x: array([ 4.60555124+0.j, -2.60555124+0.j], dtype=complex64) > y: array([-2.60555124 +1.11022302e-16j, 4.60555124 -1.11022302e-16j], dtype=complex64) > > Are x and y the expected and actual results? That would just show that there > are small rounding errors in the imaginary part, and that MKL returns the results > in another order, no? > Looks like a sorting error, the eigen values should be sorted. So it looks like a buggy test from here. Having an imaginary part to the eigenvalues returned by a routine that is supposed to process Hermitean matrices doesn't look right, but the errors are in the double precision range, which is pretty good for float32. > > I think we need a ticket to fix those tests. > > Can you post the actual error messages? It will make it easier to find where the failure is. > > Chuck > > > > _______________________________________________ Numpy-discussion mailing list Numpy-discussion at scipy.org http://mail.scipy.org/mailman/listinfo/numpy-discussion -- Dr. Christian Marquardt Email: christian at marquardt.sc Wilhelm-Leuschner-Str. 27 Tel.: +49 (0) 6151 95 13 776 64293 Darmstadt Mobile: +49 (0) 179 290 84 74 Germany Fax: +49 (0) 6151 95 13 885 -------------- next part -------------- An HTML attachment was scrubbed... 
From christian at marquardt.sc Fri Mar 27 08:55:34 2009
From: christian at marquardt.sc (Christian Marquardt)
Date: Fri, 27 Mar 2009 13:55:34 +0100 (GMT+01:00)
Subject: [Numpy-discussion] Numpy v1.3.0b1 on Linux w/ Intel compilers - unknown file type
In-Reply-To: <24841812.1721238158241666.JavaMail.root@athene>
Message-ID: <20415074.1741238158534841.JavaMail.root@athene>

Hi David,

> I *guess* that the compiler command line does not work with your
> changes, and that distutils got confused, and fails somewhere later
> (or sooner, who knows). Without actually seeing the errors you got, it
> is difficult to know more - but I would make sure the command line
> arguments are ok instead of focusing on the .src error,
>
> cheers,
>
> David

I'm not sure if I understand... The compiler options I have changed seem to work (and installation without the "build_clib --compiler=intel" option to setup.py works fine with them). To be sure I've compiled numpy from the distribution tar file without any patches. With

   python setup.py config --compiler=intel \
          config_fc --fcompiler=intel \
          build_ext --compiler=intel build

everything compiles fine (and builds the internal lapack, as I haven't given the MKL paths, and have no other lapack / blas installed). With

   python setup.py config --compiler=intel \
          config_fc --fcompiler=intel \
          build_clib --compiler=intel \
          build_ext --compiler=intel build

the attempt to build fails (complete output is below). The python installation I use was also built with the Intel icc compiler; so it does pick up that compiler by default. Maybe something is going wrong in the implementation of build_clib in the numpy distutils? Where would I search for that in the code?

Many thanks,

Chris.

------------------------------------------------------------

MEDEA /home/marq/src/python/04_science/01_numpy/numpy-1.3.0b1>python setup.py config --compiler=intel config_fc --fcompiler=intel build_clib --compiler=intel build_ext --compiler=intel build
Running from numpy source directory. non-existing path in 'numpy/distutils': 'site.cfg' F2PY Version 2 blas_opt_info: blas_mkl_info: libraries mkl,vml,guide not found in /opt/intel/mkl/10.0.2.018/lib/32 NOT AVAILABLE atlas_blas_threads_info: Setting PTATLAS=ATLAS libraries ptf77blas,ptcblas,atlas not found in /opt/apps/lib libraries ptf77blas,ptcblas,atlas not found in /usr/local/lib libraries ptf77blas,ptcblas,atlas not found in /usr/lib NOT AVAILABLE atlas_blas_info: libraries f77blas,cblas,atlas not found in /opt/apps/lib libraries f77blas,cblas,atlas not found in /usr/local/lib libraries f77blas,cblas,atlas not found in /usr/lib NOT AVAILABLE /home/marq/src/python/04_science/01_numpy/numpy-1.3.0b1/numpy/distutils/system_info.py:1383: UserWarning: Atlas (http://math-atlas.sourceforge.net/) libraries not found. Directories to search for the libraries can be specified in the numpy/distutils/site.cfg file (section [atlas]) or by setting the ATLAS environment variable. warnings.warn(AtlasNotFoundError.__doc__) blas_info: libraries blas not found in /opt/apps/lib libraries blas not found in /usr/local/lib libraries blas not found in /usr/lib NOT AVAILABLE /home/marq/src/python/04_science/01_numpy/numpy-1.3.0b1/numpy/distutils/system_info.py:1392: UserWarning: Blas (http://www.netlib.org/blas/) libraries not found. Directories to search for the libraries can be specified in the numpy/distutils/site.cfg file (section [blas]) or by setting the BLAS environment variable.
warnings.warn(BlasNotFoundError.__doc__) blas_src_info: NOT AVAILABLE /home/marq/src/python/04_science/01_numpy/numpy-1.3.0b1/numpy/distutils/system_info.py:1395: UserWarning: Blas (http://www.netlib.org/blas/) sources not found. Directories to search for the sources can be specified in the numpy/distutils/site.cfg file (section [blas_src]) or by setting the BLAS_SRC environment variable. warnings.warn(BlasSrcNotFoundError.__doc__) NOT AVAILABLE lapack_opt_info: lapack_mkl_info: mkl_info: libraries mkl,vml,guide not found in /opt/intel/mkl/10.0.2.018/lib/32 NOT AVAILABLE NOT AVAILABLE atlas_threads_info: Setting PTATLAS=ATLAS libraries ptf77blas,ptcblas,atlas not found in /opt/apps/lib libraries lapack_atlas not found in /opt/apps/lib libraries ptf77blas,ptcblas,atlas not found in /usr/local/lib libraries lapack_atlas not found in /usr/local/lib libraries ptf77blas,ptcblas,atlas not found in /usr/lib libraries lapack_atlas not found in /usr/lib numpy.distutils.system_info.atlas_threads_info NOT AVAILABLE atlas_info: libraries f77blas,cblas,atlas not found in /opt/apps/lib libraries lapack_atlas not found in /opt/apps/lib libraries f77blas,cblas,atlas not found in /usr/local/lib libraries lapack_atlas not found in /usr/local/lib libraries f77blas,cblas,atlas not found in /usr/lib libraries lapack_atlas not found in /usr/lib numpy.distutils.system_info.atlas_info NOT AVAILABLE /home/marq/src/python/04_science/01_numpy/numpy-1.3.0b1/numpy/distutils/system_info.py:1290: UserWarning: Atlas (http://math-atlas.sourceforge.net/) libraries not found. Directories to search for the libraries can be specified in the numpy/distutils/site.cfg file (section [atlas]) or by setting the ATLAS environment variable. warnings.warn(AtlasNotFoundError.__doc__) lapack_info: libraries lapack not found in /opt/apps/lib libraries lapack not found in /usr/local/lib libraries lapack not found in /usr/lib NOT AVAILABLE /home/marq/src/python/04_science/01_numpy/numpy-1.3.0b1/numpy/distutils/system_info.py:1301: UserWarning: Lapack (http://www.netlib.org/lapack/) libraries not found. Directories to search for the libraries can be specified in the numpy/distutils/site.cfg file (section [lapack]) or by setting the LAPACK environment variable. warnings.warn(LapackNotFoundError.__doc__) lapack_src_info: NOT AVAILABLE /home/marq/src/python/04_science/01_numpy/numpy-1.3.0b1/numpy/distutils/system_info.py:1304: UserWarning: Lapack (http://www.netlib.org/lapack/) sources not found. Directories to search for the sources can be specified in the numpy/distutils/site.cfg file (section [lapack_src]) or by setting the LAPACK_SRC environment variable. 
warnings.warn(LapackSrcNotFoundError.__doc__) NOT AVAILABLE running config running config_fc unifing config_fc, config, build_clib, build_ext, build commands --fcompiler options running build_clib Found executable /opt/intel/cc/10.1.018/bin/icc Could not locate executable ecc customize IntelCCompiler customize IntelCCompiler using build_clib building 'npymath' library compiling C sources C compiler: icc error: unknown file type '.src' (from 'numpy/core/src/npy_math.c.src') MEDEA /home/marq/src/python/04_science/01_numpy/numpy-1.3.0b1>

From josef.pktd at gmail.com Fri Mar 27 09:20:09 2009
From: josef.pktd at gmail.com (josef.pktd at gmail.com)
Date: Fri, 27 Mar 2009 09:20:09 -0400
Subject: [Numpy-discussion] Behavior of numpy.random.exponential
In-Reply-To: <62e6eafb0903270449g7b33b3bbge412b2c13308df6e@mail.gmail.com>
References: <62e6eafb0903270449g7b33b3bbge412b2c13308df6e@mail.gmail.com>
Message-ID: <1cd32cbb0903270620r23855ca3x7fb003065fc31e78@mail.gmail.com>

On Fri, Mar 27, 2009 at 7:49 AM, Yves Frederix wrote:
> Hi,
>
> I noticed a problem with numpy.random.exponential. Apparently, the
> samples generated by numpy.random.exponential(scale=scale) follow the
> distribution f(x)=1/scale*exp(-x/scale) (and not
> f(x)=scale*exp(-x*scale) as stated by the docstring).
>
> The script below illustrates this.
>
> --
> import numpy as N
> import pylab as pl
>
> print N.__version__
>
> pl.figure()
>
> lamda = 2.
>
> noise_modulus = N.random.exponential(scale=lamda,\
>     size=(100000,))
> #noise_modulus = -N.log(N.random.uniform(size=(100000,)))/lamda # this works
>
> y_hist, x_hist = N.histogram(noise_modulus, bins=51,\
>         normed=True, new=True)
> x_pl = N.linspace(0, x_hist.max())
> pl.semilogy(x_hist[0:-1], y_hist, label='Empirical, lambda=%s' % lamda)
> pl.semilogy(x_pl, lamda * N.exp(-x_pl*lamda), ':', \
>         label='exact, lambda=%s' % lamda)
> pl.semilogy(x_pl, 1./lamda * N.exp(-x_pl*1./lamda), ':', \
>         label='exact, lambda=1/%s' % lamda)
>
> pl.legend(loc='best')
> pl.show()
>
> --
>
> Could this be a bug? I also checked with the latest svn version:
>
> In [1]: import numpy; numpy.__version__
> Out[1]: '1.4.0.dev6731'
>
> Best,
> YVES
> _______________________________________________
> Numpy-discussion mailing list
> Numpy-discussion at scipy.org
> http://mail.scipy.org/mailman/listinfo/numpy-discussion
>

I changed this a while ago in the documentation editor, but it hasn't been merged yet to the source docstring

http://docs.scipy.org/numpy/docs/numpy.random.mtrand.RandomState.exponential/

There is also an open ticket for this http://projects.scipy.org/numpy/ticket/987

Can you review the new docstring, so we can mark it as reviewed and close the ticket?
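A quick check of which parameterization the generator actually implements: for a density (1/scale)*exp(-x/scale) the mean equals scale, so with scale=2.0 the sample mean should land near 2, not near 0.5.

    import numpy as np

    scale = 2.0
    x = np.random.exponential(scale=scale, size=100000)
    # prints ~2.0, i.e. the pdf is (1/scale)*exp(-x/scale); under the old
    # docstring's scale*exp(-x*scale) reading the mean would be 1/scale = 0.5
    print x.mean()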
Josef From yves.frederix at gmail.com Fri Mar 27 09:36:10 2009 From: yves.frederix at gmail.com (Yves Frederix) Date: Fri, 27 Mar 2009 14:36:10 +0100 Subject: [Numpy-discussion] Behavior of numpy.random.exponential In-Reply-To: <1cd32cbb0903270620r23855ca3x7fb003065fc31e78@mail.gmail.com> References: <62e6eafb0903270449g7b33b3bbge412b2c13308df6e@mail.gmail.com> <1cd32cbb0903270620r23855ca3x7fb003065fc31e78@mail.gmail.com> Message-ID: <62e6eafb0903270636x1c56f9efgcbfbe1fa0bb840d0@mail.gmail.com> Hi, > I changed this a while ago in the documentation editor, but it hasn't > been merged yet to the source docstring > > http://docs.scipy.org/numpy/docs/numpy.random.mtrand.RandomState.exponential/ > > There is also an open ticket for this http://projects.scipy.org/numpy/ticket/987 > > Can you review the new docstring, so we can mark it as reviewed and > close the ticket? The new docstring looks fine to me. Please go ahead and close it. Regards, YVES From sccolbert at gmail.com Fri Mar 27 10:25:39 2009 From: sccolbert at gmail.com (Chris Colbert) Date: Fri, 27 Mar 2009 10:25:39 -0400 Subject: [Numpy-discussion] Built Lapack, Atlas from source.... now numpy.linalg.eig() hangs at 100% CPU Message-ID: <7f014ea60903270725g61dbaaadj3b6e70582a4f8cd5@mail.gmail.com> Hey Everyone, I built Lapack and Atlas from source last night on a C2D running 32-bit Linux Mint 6. I ran 'make check' and 'make time' on the lapack build, and ran the dynamic LU decomp test on atlas. Both packages checked out fine. Then, I built numpy and scipy against them using the appropriate flags in site.cfg for the parallel thread atlas libraries. This seems to have worked properly as numpy.dot() utilizes both cores at 100% on very large arrays. I have also done id(numpy.dot) and id(numpy.core.multiarray.dot) and verified that the two ids are different. So I believe the build went properly. The problem I am having now is that numpy.linalg.eig (and the eig functions in scipy) hang at 100% CPU and never returns (no matter the array size). Numpy.test() hung as well, I'm assuming for this same reason. I have included the configurations below. Any idea what would cause this? Thanks! Chris Python 2.5.2 (r252:60911, Oct 5 2008, 19:24:49) [GCC 4.3.2] on linux2 Type "help", "copyright", "credits" or "license" for more information. 
>>> import numpy >>> import scipy >>> numpy.show_config() atlas_threads_info: libraries = ['lapack', 'ptf77blas', 'ptcblas', 'atlas'] library_dirs = ['/usr/local/atlas/lib'] language = f77 include_dirs = ['/usr/local/atlas/include'] blas_opt_info: libraries = ['ptf77blas', 'ptcblas', 'atlas'] library_dirs = ['/usr/local/atlas/lib'] define_macros = [('NO_ATLAS_INFO', 2)] language = c include_dirs = ['/usr/local/atlas/include'] atlas_blas_threads_info: libraries = ['ptf77blas', 'ptcblas', 'atlas'] library_dirs = ['/usr/local/atlas/lib'] language = c include_dirs = ['/usr/local/atlas/include'] lapack_opt_info: libraries = ['lapack', 'ptf77blas', 'ptcblas', 'atlas'] library_dirs = ['/usr/local/atlas/lib'] define_macros = [('NO_ATLAS_INFO', 2)] language = f77 include_dirs = ['/usr/local/atlas/include'] lapack_mkl_info: NOT AVAILABLE blas_mkl_info: NOT AVAILABLE mkl_info: NOT AVAILABLE >>> scipy.show_config() umfpack_info: NOT AVAILABLE atlas_threads_info: libraries = ['lapack', 'ptf77blas', 'ptcblas', 'atlas'] library_dirs = ['/usr/local/atlas/lib'] language = f77 include_dirs = ['/usr/local/atlas/include'] blas_opt_info: libraries = ['ptf77blas', 'ptcblas', 'atlas'] library_dirs = ['/usr/local/atlas/lib'] define_macros = [('ATLAS_INFO', '"\\"3.8.3\\""')] language = c include_dirs = ['/usr/local/atlas/include'] atlas_blas_threads_info: libraries = ['ptf77blas', 'ptcblas', 'atlas'] library_dirs = ['/usr/local/atlas/lib'] language = c include_dirs = ['/usr/local/atlas/include'] lapack_opt_info: libraries = ['lapack', 'ptf77blas', 'ptcblas', 'atlas'] library_dirs = ['/usr/local/atlas/lib'] define_macros = [('NO_ATLAS_INFO', 2)] language = f77 include_dirs = ['/usr/local/atlas/include'] lapack_mkl_info: NOT AVAILABLE blas_mkl_info: NOT AVAILABLE mkl_info: NOT AVAILABLE -------------- next part -------------- An HTML attachment was scrubbed... URL: From sccolbert at gmail.com Fri Mar 27 10:27:38 2009 From: sccolbert at gmail.com (Chris Colbert) Date: Fri, 27 Mar 2009 10:27:38 -0400 Subject: [Numpy-discussion] Built Lapack, Atlas from source.... now numpy.linalg.eig() hangs at 100% CPU In-Reply-To: <7f014ea60903270725g61dbaaadj3b6e70582a4f8cd5@mail.gmail.com> References: <7f014ea60903270725g61dbaaadj3b6e70582a4f8cd5@mail.gmail.com> Message-ID: <7f014ea60903270727x4370bec7w5be1abfb57f84271@mail.gmail.com> This is numpy 1.3.0b1 and scipy 0.7.0 by the way. Forgot to mention it. On Fri, Mar 27, 2009 at 10:25 AM, Chris Colbert wrote: > Hey Everyone, > > I built Lapack and Atlas from source last night on a C2D running 32-bit > Linux Mint 6. I ran 'make check' and 'make time' on the lapack build, and > ran the dynamic LU decomp test on atlas. Both packages checked out fine. > > Then, I built numpy and scipy against them using the appropriate flags in > site.cfg for the parallel thread atlas libraries. This seems to have worked > properly as numpy.dot() utilizes both cores at 100% on very large arrays. I > have also done id(numpy.dot) and id(numpy.core.multiarray.dot) and verified > that the two ids are different. > > So I believe the build went properly. The problem I am having now is that > numpy.linalg.eig (and the eig functions in scipy) hang at 100% CPU and never > returns (no matter the array size). Numpy.test() hung as well, I'm assuming > for this same reason. I have included the configurations below. Any idea > what would cause this? > > Thanks! 
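A rough timing probe for the "is the optimized BLAS really linked in" check described above; the matrix size and the timings are only indicative.

    import time
    import numpy as np

    n = 2000
    a = np.random.rand(n, n)
    b = np.random.rand(n, n)
    t0 = time.time()
    np.dot(a, b)  # a threaded ATLAS should load both cores here
    print 'dot of %dx%d took %.2f s' % (n, n, time.time() - t0)
    # around a second or a few suggests an optimized BLAS was picked up;
    # the reference (netlib) BLAS is typically an order of magnitude slower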
> > Chris > > Python 2.5.2 (r252:60911, Oct 5 2008, 19:24:49) > [GCC 4.3.2] on linux2 > Type "help", "copyright", "credits" or "license" for more information. > >>> import numpy > >>> import scipy > >>> numpy.show_config() > atlas_threads_info: > libraries = ['lapack', 'ptf77blas', 'ptcblas', 'atlas'] > library_dirs = ['/usr/local/atlas/lib'] > language = f77 > include_dirs = ['/usr/local/atlas/include'] > > blas_opt_info: > libraries = ['ptf77blas', 'ptcblas', 'atlas'] > library_dirs = ['/usr/local/atlas/lib'] > define_macros = [('NO_ATLAS_INFO', 2)] > language = c > include_dirs = ['/usr/local/atlas/include'] > > atlas_blas_threads_info: > libraries = ['ptf77blas', 'ptcblas', 'atlas'] > library_dirs = ['/usr/local/atlas/lib'] > language = c > include_dirs = ['/usr/local/atlas/include'] > > lapack_opt_info: > libraries = ['lapack', 'ptf77blas', 'ptcblas', 'atlas'] > library_dirs = ['/usr/local/atlas/lib'] > define_macros = [('NO_ATLAS_INFO', 2)] > language = f77 > include_dirs = ['/usr/local/atlas/include'] > > lapack_mkl_info: > NOT AVAILABLE > > blas_mkl_info: > NOT AVAILABLE > > mkl_info: > NOT AVAILABLE > > >>> scipy.show_config() > umfpack_info: > NOT AVAILABLE > > atlas_threads_info: > libraries = ['lapack', 'ptf77blas', 'ptcblas', 'atlas'] > library_dirs = ['/usr/local/atlas/lib'] > language = f77 > include_dirs = ['/usr/local/atlas/include'] > > blas_opt_info: > libraries = ['ptf77blas', 'ptcblas', 'atlas'] > library_dirs = ['/usr/local/atlas/lib'] > define_macros = [('ATLAS_INFO', '"\\"3.8.3\\""')] > language = c > include_dirs = ['/usr/local/atlas/include'] > > atlas_blas_threads_info: > libraries = ['ptf77blas', 'ptcblas', 'atlas'] > library_dirs = ['/usr/local/atlas/lib'] > language = c > include_dirs = ['/usr/local/atlas/include'] > > lapack_opt_info: > libraries = ['lapack', 'ptf77blas', 'ptcblas', 'atlas'] > library_dirs = ['/usr/local/atlas/lib'] > define_macros = [('NO_ATLAS_INFO', 2)] > language = f77 > include_dirs = ['/usr/local/atlas/include'] > > lapack_mkl_info: > NOT AVAILABLE > > blas_mkl_info: > NOT AVAILABLE > > mkl_info: > NOT AVAILABLE > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From david at ar.media.kyoto-u.ac.jp Fri Mar 27 10:12:33 2009 From: david at ar.media.kyoto-u.ac.jp (David Cournapeau) Date: Fri, 27 Mar 2009 23:12:33 +0900 Subject: [Numpy-discussion] Built Lapack, Atlas from source.... now numpy.linalg.eig() hangs at 100% CPU In-Reply-To: <7f014ea60903270725g61dbaaadj3b6e70582a4f8cd5@mail.gmail.com> References: <7f014ea60903270725g61dbaaadj3b6e70582a4f8cd5@mail.gmail.com> Message-ID: <49CCDED1.3090707@ar.media.kyoto-u.ac.jp> Chris Colbert wrote: > Hey Everyone, > > I built Lapack and Atlas from source last night on a C2D running > 32-bit Linux Mint 6. I ran 'make check' and 'make time' on the lapack > build, and ran the dynamic LU decomp test on atlas. Both packages > checked out fine. > > Then, I built numpy and scipy against them using the appropriate flags > in site.cfg for the parallel thread atlas libraries. This seems to > have worked properly as numpy.dot() utilizes both cores at 100% on > very large arrays. I have also done id(numpy.dot) and > id(numpy.core.multiarray.dot) and verified that the two ids are > different. > > So I believe the build went properly. The problem I am having now is > that numpy.linalg.eig (and the eig functions in scipy) hang at 100% > CPU and never returns (no matter the array size). Numpy.test() hung as > well, I'm assuming for this same reason. 
I have included the > configurations below. Any idea what would cause this? What does numpy.test() returns ? This smells like a fortran runtime problem, cheers, David From sccolbert at gmail.com Fri Mar 27 10:31:49 2009 From: sccolbert at gmail.com (Chris Colbert) Date: Fri, 27 Mar 2009 10:31:49 -0400 Subject: [Numpy-discussion] Built Lapack, Atlas from source.... now numpy.linalg.eig() hangs at 100% CPU In-Reply-To: <49CCDED1.3090707@ar.media.kyoto-u.ac.jp> References: <7f014ea60903270725g61dbaaadj3b6e70582a4f8cd5@mail.gmail.com> <49CCDED1.3090707@ar.media.kyoto-u.ac.jp> Message-ID: <7f014ea60903270731p57c2612v80fbf25b905ae4c1@mail.gmail.com> numpy.test() doesn't return (after 2 hours of running at 100% at least). I imagine its hanging on this eig function as well. Chris On Fri, Mar 27, 2009 at 10:12 AM, David Cournapeau < david at ar.media.kyoto-u.ac.jp> wrote: > Chris Colbert wrote: > > Hey Everyone, > > > > I built Lapack and Atlas from source last night on a C2D running > > 32-bit Linux Mint 6. I ran 'make check' and 'make time' on the lapack > > build, and ran the dynamic LU decomp test on atlas. Both packages > > checked out fine. > > > > Then, I built numpy and scipy against them using the appropriate flags > > in site.cfg for the parallel thread atlas libraries. This seems to > > have worked properly as numpy.dot() utilizes both cores at 100% on > > very large arrays. I have also done id(numpy.dot) and > > id(numpy.core.multiarray.dot) and verified that the two ids are > > different. > > > > So I believe the build went properly. The problem I am having now is > > that numpy.linalg.eig (and the eig functions in scipy) hang at 100% > > CPU and never returns (no matter the array size). Numpy.test() hung as > > well, I'm assuming for this same reason. I have included the > > configurations below. Any idea what would cause this? > > What does numpy.test() returns ? This smells like a fortran runtime > problem, > > cheers, > > David > _______________________________________________ > Numpy-discussion mailing list > Numpy-discussion at scipy.org > http://mail.scipy.org/mailman/listinfo/numpy-discussion > -------------- next part -------------- An HTML attachment was scrubbed... URL: From david at ar.media.kyoto-u.ac.jp Fri Mar 27 10:24:08 2009 From: david at ar.media.kyoto-u.ac.jp (David Cournapeau) Date: Fri, 27 Mar 2009 23:24:08 +0900 Subject: [Numpy-discussion] Built Lapack, Atlas from source.... now numpy.linalg.eig() hangs at 100% CPU In-Reply-To: <7f014ea60903270731p57c2612v80fbf25b905ae4c1@mail.gmail.com> References: <7f014ea60903270725g61dbaaadj3b6e70582a4f8cd5@mail.gmail.com> <49CCDED1.3090707@ar.media.kyoto-u.ac.jp> <7f014ea60903270731p57c2612v80fbf25b905ae4c1@mail.gmail.com> Message-ID: <49CCE188.3090203@ar.media.kyoto-u.ac.jp> Chris Colbert wrote: > numpy.test() doesn't return (after 2 hours of running at 100% at > least). I imagine its hanging on this eig function as well. Can you run the following test ? nosetests -v -s test_build.py (in numpy/linalg). If it fails, it almost surely a problem in the way you built numpy and/or atlas. Make sure that everything is built with the same fortran compiler (blas, lapack, atlas and numpy). cheers, David From charlesr.harris at gmail.com Fri Mar 27 11:16:01 2009 From: charlesr.harris at gmail.com (Charles R Harris) Date: Fri, 27 Mar 2009 09:16:01 -0600 Subject: [Numpy-discussion] Built Lapack, Atlas from source.... 
now numpy.linalg.eig() hangs at 100% CPU In-Reply-To: <7f014ea60903270725g61dbaaadj3b6e70582a4f8cd5@mail.gmail.com> References: <7f014ea60903270725g61dbaaadj3b6e70582a4f8cd5@mail.gmail.com> Message-ID: 2009/3/27 Chris Colbert > Hey Everyone, > > I built Lapack and Atlas from source last night on a C2D running 32-bit > Linux Mint 6. I ran 'make check' and 'make time' on the lapack build, and > ran the dynamic LU decomp test on atlas. Both packages checked out fine. > > Then, I built numpy and scipy against them using the appropriate flags in > site.cfg for the parallel thread atlas libraries. This seems to have worked > properly as numpy.dot() utilizes both cores at 100% on very large arrays. I > have also done id(numpy.dot) and id(numpy.core.multiarray.dot) and verified > that the two ids are different. > > So I believe the build went properly. The problem I am having now is that > numpy.linalg.eig (and the eig functions in scipy) hang at 100% CPU and never > returns (no matter the array size). Numpy.test() hung as well, I'm assuming > for this same reason. I have included the configurations below. Any idea > what would cause this? > This is a problem that used to turn up regularly and was related to the atlas build. The atlas version can matter here, but I don't know what the currently recommended atlas version is. Chuck -------------- next part -------------- An HTML attachment was scrubbed... URL: From sccolbert at gmail.com Fri Mar 27 11:18:28 2009 From: sccolbert at gmail.com (Chris Colbert) Date: Fri, 27 Mar 2009 11:18:28 -0400 Subject: [Numpy-discussion] Built Lapack, Atlas from source.... now numpy.linalg.eig() hangs at 100% CPU In-Reply-To: <49CCE188.3090203@ar.media.kyoto-u.ac.jp> References: <7f014ea60903270725g61dbaaadj3b6e70582a4f8cd5@mail.gmail.com> <49CCDED1.3090707@ar.media.kyoto-u.ac.jp> <7f014ea60903270731p57c2612v80fbf25b905ae4c1@mail.gmail.com> <49CCE188.3090203@ar.media.kyoto-u.ac.jp> Message-ID: <7f014ea60903270818u4cb94051r97c89ca3e28d0990@mail.gmail.com> here are the results from that test: test_lapack (test_build.TestF77Mismatch) ... ok ---------------------------------------------------------------------- Ran 1 test in 0.055s OK thanks again for the help, Chris On Fri, Mar 27, 2009 at 10:24 AM, David Cournapeau < david at ar.media.kyoto-u.ac.jp> wrote: > Chris Colbert wrote: > > numpy.test() doesn't return (after 2 hours of running at 100% at > > least). I imagine its hanging on this eig function as well. > > Can you run the following test ? > > nosetests -v -s test_build.py (in numpy/linalg). > > If it fails, it almost surely a problem in the way you built numpy > and/or atlas. Make sure that everything is built with the same fortran > compiler (blas, lapack, atlas and numpy). > > cheers, > > David > _______________________________________________ > Numpy-discussion mailing list > Numpy-discussion at scipy.org > http://mail.scipy.org/mailman/listinfo/numpy-discussion > -------------- next part -------------- An HTML attachment was scrubbed... URL: From sccolbert at gmail.com Fri Mar 27 11:19:07 2009 From: sccolbert at gmail.com (Chris Colbert) Date: Fri, 27 Mar 2009 11:19:07 -0400 Subject: [Numpy-discussion] Built Lapack, Atlas from source.... 
now numpy.linalg.eig() hangs at 100% CPU In-Reply-To: <7f014ea60903270818u4cb94051r97c89ca3e28d0990@mail.gmail.com> References: <7f014ea60903270725g61dbaaadj3b6e70582a4f8cd5@mail.gmail.com> <49CCDED1.3090707@ar.media.kyoto-u.ac.jp> <7f014ea60903270731p57c2612v80fbf25b905ae4c1@mail.gmail.com> <49CCE188.3090203@ar.media.kyoto-u.ac.jp> <7f014ea60903270818u4cb94051r97c89ca3e28d0990@mail.gmail.com> Message-ID: <7f014ea60903270819n35bd333dje5b3000e4ab430d0@mail.gmail.com> I compiled everything with gfortran. I dont even have g77 on my system. On Fri, Mar 27, 2009 at 11:18 AM, Chris Colbert wrote: > here are the results from that test: > > test_lapack (test_build.TestF77Mismatch) ... ok > > ---------------------------------------------------------------------- > Ran 1 test in 0.055s > > OK > > > thanks again for the help, > > Chris > > > On Fri, Mar 27, 2009 at 10:24 AM, David Cournapeau < > david at ar.media.kyoto-u.ac.jp> wrote: > >> Chris Colbert wrote: >> > numpy.test() doesn't return (after 2 hours of running at 100% at >> > least). I imagine its hanging on this eig function as well. >> >> Can you run the following test ? >> >> nosetests -v -s test_build.py (in numpy/linalg). >> >> If it fails, it almost surely a problem in the way you built numpy >> and/or atlas. Make sure that everything is built with the same fortran >> compiler (blas, lapack, atlas and numpy). >> >> cheers, >> >> David >> _______________________________________________ >> Numpy-discussion mailing list >> Numpy-discussion at scipy.org >> http://mail.scipy.org/mailman/listinfo/numpy-discussion >> > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From sccolbert at gmail.com Fri Mar 27 11:20:22 2009 From: sccolbert at gmail.com (Chris Colbert) Date: Fri, 27 Mar 2009 11:20:22 -0400 Subject: [Numpy-discussion] Built Lapack, Atlas from source.... now numpy.linalg.eig() hangs at 100% CPU In-Reply-To: References: <7f014ea60903270725g61dbaaadj3b6e70582a4f8cd5@mail.gmail.com> Message-ID: <7f014ea60903270820q6f4664d7waebed38c11b292c7@mail.gmail.com> I built Atlas 3.8.3 which I assume is the newest release. Chris 2009/3/27 Charles R Harris > > > 2009/3/27 Chris Colbert > >> Hey Everyone, >> >> I built Lapack and Atlas from source last night on a C2D running 32-bit >> Linux Mint 6. I ran 'make check' and 'make time' on the lapack build, and >> ran the dynamic LU decomp test on atlas. Both packages checked out fine. >> >> Then, I built numpy and scipy against them using the appropriate flags in >> site.cfg for the parallel thread atlas libraries. This seems to have worked >> properly as numpy.dot() utilizes both cores at 100% on very large arrays. I >> have also done id(numpy.dot) and id(numpy.core.multiarray.dot) and verified >> that the two ids are different. >> >> So I believe the build went properly. The problem I am having now is that >> numpy.linalg.eig (and the eig functions in scipy) hang at 100% CPU and never >> returns (no matter the array size). Numpy.test() hung as well, I'm assuming >> for this same reason. I have included the configurations below. Any idea >> what would cause this? >> > > This is a problem that used to turn up regularly and was related to the > atlas build. The atlas version can matter here, but I don't know what the > currently recommended atlas version is. 
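The mismatch that TestF77Mismatch guards against can also be checked by hand; a sketch with illustrative paths (libg2c belongs to g77, libgfortran to gfortran, and a module that drags in both was built from mixed pieces):

    # adjust the path to wherever numpy is installed
    ldd /usr/lib/python2.5/site-packages/numpy/linalg/lapack_lite.so | egrep 'g2c|gfortran'
    # same check for ATLAS's LAPACK, if it was built as a shared library
    ldd /usr/local/atlas/lib/liblapack.so | egrep 'g2c|gfortran'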
> > Chuck > > > > _______________________________________________ > Numpy-discussion mailing list > Numpy-discussion at scipy.org > http://mail.scipy.org/mailman/listinfo/numpy-discussion > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From david at ar.media.kyoto-u.ac.jp Fri Mar 27 11:05:29 2009 From: david at ar.media.kyoto-u.ac.jp (David Cournapeau) Date: Sat, 28 Mar 2009 00:05:29 +0900 Subject: [Numpy-discussion] Built Lapack, Atlas from source.... now numpy.linalg.eig() hangs at 100% CPU In-Reply-To: <7f014ea60903270819n35bd333dje5b3000e4ab430d0@mail.gmail.com> References: <7f014ea60903270725g61dbaaadj3b6e70582a4f8cd5@mail.gmail.com> <49CCDED1.3090707@ar.media.kyoto-u.ac.jp> <7f014ea60903270731p57c2612v80fbf25b905ae4c1@mail.gmail.com> <49CCE188.3090203@ar.media.kyoto-u.ac.jp> <7f014ea60903270818u4cb94051r97c89ca3e28d0990@mail.gmail.com> <7f014ea60903270819n35bd333dje5b3000e4ab430d0@mail.gmail.com> Message-ID: <49CCEB39.4020900@ar.media.kyoto-u.ac.jp> Chris Colbert wrote: > I compiled everything with gfortran. I dont even have g77 on my system. Ok. Which version of atlas and lapack are you using ? Lapack 3.2 is known to cause trouble. Atlas 3.8.0 and 3.8.1 had some bugs too, I can't remember exactly which one. cheers, David From sccolbert at gmail.com Fri Mar 27 11:25:38 2009 From: sccolbert at gmail.com (Chris Colbert) Date: Fri, 27 Mar 2009 11:25:38 -0400 Subject: [Numpy-discussion] Built Lapack, Atlas from source.... now numpy.linalg.eig() hangs at 100% CPU In-Reply-To: <49CCEB39.4020900@ar.media.kyoto-u.ac.jp> References: <7f014ea60903270725g61dbaaadj3b6e70582a4f8cd5@mail.gmail.com> <49CCDED1.3090707@ar.media.kyoto-u.ac.jp> <7f014ea60903270731p57c2612v80fbf25b905ae4c1@mail.gmail.com> <49CCE188.3090203@ar.media.kyoto-u.ac.jp> <7f014ea60903270818u4cb94051r97c89ca3e28d0990@mail.gmail.com> <7f014ea60903270819n35bd333dje5b3000e4ab430d0@mail.gmail.com> <49CCEB39.4020900@ar.media.kyoto-u.ac.jp> Message-ID: <7f014ea60903270825v6d15c157h44157839ad43dedc@mail.gmail.com> Atlas 3.8.3 and Lapack 3.1.1 On Fri, Mar 27, 2009 at 11:05 AM, David Cournapeau < david at ar.media.kyoto-u.ac.jp> wrote: > Chris Colbert wrote: > > I compiled everything with gfortran. I dont even have g77 on my system. > > Ok. Which version of atlas and lapack are you using ? Lapack 3.2 is > known to cause trouble. Atlas 3.8.0 and 3.8.1 had some bugs too, I can't > remember exactly which one. > > cheers, > > David > _______________________________________________ > Numpy-discussion mailing list > Numpy-discussion at scipy.org > http://mail.scipy.org/mailman/listinfo/numpy-discussion > -------------- next part -------------- An HTML attachment was scrubbed... URL: From pav at iki.fi Fri Mar 27 11:33:50 2009 From: pav at iki.fi (Pauli Virtanen) Date: Fri, 27 Mar 2009 15:33:50 +0000 (UTC) Subject: [Numpy-discussion] Behavior of numpy.random.exponential References: <62e6eafb0903270449g7b33b3bbge412b2c13308df6e@mail.gmail.com> <1cd32cbb0903270620r23855ca3x7fb003065fc31e78@mail.gmail.com> Message-ID: Fri, 27 Mar 2009 09:20:09 -0400, josef.pktd wrote: [clip: numpy.random.exponential docstring] > I changed this a while ago in the documentation editor, but it hasn't > been merged yet to the source docstring It is merged, but I forgot to regenerate the mtrand.c file. 
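For context: mtrand.c is a generated file, so a docstring merged in the wiki editor only reaches it after regenerating from the mtrand.pyx source, roughly as follows (assuming a Cython-era tree; the exact invocation may differ):

    cd numpy/random/mtrand
    cython mtrand.pyx   # rewrites mtrand.c, which setup.py then compiles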
-- Pauli Virtanen From david at ar.media.kyoto-u.ac.jp Fri Mar 27 11:23:36 2009 From: david at ar.media.kyoto-u.ac.jp (David Cournapeau) Date: Sat, 28 Mar 2009 00:23:36 +0900 Subject: [Numpy-discussion] Built Lapack, Atlas from source.... now numpy.linalg.eig() hangs at 100% CPU In-Reply-To: <7f014ea60903270825v6d15c157h44157839ad43dedc@mail.gmail.com> References: <7f014ea60903270725g61dbaaadj3b6e70582a4f8cd5@mail.gmail.com> <49CCDED1.3090707@ar.media.kyoto-u.ac.jp> <7f014ea60903270731p57c2612v80fbf25b905ae4c1@mail.gmail.com> <49CCE188.3090203@ar.media.kyoto-u.ac.jp> <7f014ea60903270818u4cb94051r97c89ca3e28d0990@mail.gmail.com> <7f014ea60903270819n35bd333dje5b3000e4ab430d0@mail.gmail.com> <49CCEB39.4020900@ar.media.kyoto-u.ac.jp> <7f014ea60903270825v6d15c157h44157839ad43dedc@mail.gmail.com> Message-ID: <49CCEF78.7040600@ar.media.kyoto-u.ac.jp> Chris Colbert wrote: > Atlas 3.8.3 and Lapack 3.1.1 Hm... I am afraid I don't see what may cause this problem. Could you rebuild numpy from scratch and give us the log ? rm -rf build && python setup.py build &> build.log David From sccolbert at gmail.com Fri Mar 27 11:58:08 2009 From: sccolbert at gmail.com (Chris Colbert) Date: Fri, 27 Mar 2009 11:58:08 -0400 Subject: [Numpy-discussion] Built Lapack, Atlas from source.... now numpy.linalg.eig() hangs at 100% CPU In-Reply-To: <7f014ea60903270849v330f69b9o1b732cf004293447@mail.gmail.com> References: <7f014ea60903270725g61dbaaadj3b6e70582a4f8cd5@mail.gmail.com> <49CCDED1.3090707@ar.media.kyoto-u.ac.jp> <7f014ea60903270731p57c2612v80fbf25b905ae4c1@mail.gmail.com> <49CCE188.3090203@ar.media.kyoto-u.ac.jp> <7f014ea60903270818u4cb94051r97c89ca3e28d0990@mail.gmail.com> <7f014ea60903270819n35bd333dje5b3000e4ab430d0@mail.gmail.com> <49CCEB39.4020900@ar.media.kyoto-u.ac.jp> <7f014ea60903270825v6d15c157h44157839ad43dedc@mail.gmail.com> <49CCEF78.7040600@ar.media.kyoto-u.ac.jp> <7f014ea60903270849v330f69b9o1b732cf004293447@mail.gmail.com> Message-ID: <7f014ea60903270858xafbdfb3ud4a20c200c9591e3@mail.gmail.com> David, The log was too big for the list, so I sent it to your email address directly. Chris 2009/3/27 Chris Colbert > David, > > The log is attached. > > Thanks for giving me the bash command. I would have never figured that one > out.... > > Chris > > > On Fri, Mar 27, 2009 at 11:23 AM, David Cournapeau < > david at ar.media.kyoto-u.ac.jp> wrote: > >> Chris Colbert wrote: >> > Atlas 3.8.3 and Lapack 3.1.1 >> >> Hm... I am afraid I don't see what may cause this problem. Could you >> rebuild numpy from scratch and give us the log ? >> >> rm -rf build && python setup.py build &> build.log >> >> David >> _______________________________________________ >> Numpy-discussion mailing list >> Numpy-discussion at scipy.org >> http://mail.scipy.org/mailman/listinfo/numpy-discussion >> > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From david at ar.media.kyoto-u.ac.jp Fri Mar 27 11:56:07 2009 From: david at ar.media.kyoto-u.ac.jp (David Cournapeau) Date: Sat, 28 Mar 2009 00:56:07 +0900 Subject: [Numpy-discussion] Built Lapack, Atlas from source.... 
now numpy.linalg.eig() hangs at 100% CPU In-Reply-To: <7f014ea60903270858xafbdfb3ud4a20c200c9591e3@mail.gmail.com> References: <7f014ea60903270725g61dbaaadj3b6e70582a4f8cd5@mail.gmail.com> <49CCDED1.3090707@ar.media.kyoto-u.ac.jp> <7f014ea60903270731p57c2612v80fbf25b905ae4c1@mail.gmail.com> <49CCE188.3090203@ar.media.kyoto-u.ac.jp> <7f014ea60903270818u4cb94051r97c89ca3e28d0990@mail.gmail.com> <7f014ea60903270819n35bd333dje5b3000e4ab430d0@mail.gmail.com> <49CCEB39.4020900@ar.media.kyoto-u.ac.jp> <7f014ea60903270825v6d15c157h44157839ad43dedc@mail.gmail.com> <49CCEF78.7040600@ar.media.kyoto-u.ac.jp> <7f014ea60903270849v330f69b9o1b732cf004293447@mail.gmail.com> <7f014ea60903270858xafbdfb3ud4a20c200c9591e3@mail.gmail.com> Message-ID: <49CCF717.5020308@ar.media.kyoto-u.ac.jp> Chris Colbert wrote: > David, > > The log was too big for the list, so I sent it to your email address > directly. Hm, never saw this one. In the build log, one can see: ... compile options: '-c' gcc: _configtest.c gcc -pthread _configtest.o -L/usr/local/atlas/lib -llapack -lptf77blas -lptcblas -latlas -o _configtest /usr/bin/ld: _configtest: hidden symbol `__powidf2' in /usr/lib/gcc/i486-linux-gnu/4.3.2/libgcc.a(_powidf2.o) is referenced by DSO /usr/bin/ld: final link failed: Nonrepresentable section on output collect2: ld returned 1 exit status /usr/bin/ld: _configtest: hidden symbol `__powidf2' in /usr/lib/gcc/i486-linux-gnu/4.3.2/libgcc.a(_powidf2.o) is referenced by DSO /usr/bin/ld: final link failed: Nonrepresentable section on output collect2: ld returned 1 exit status failure. This does not look good. It may be a problem in your toolchain (i.e. how your distribution build gcc and co). I am afraid there is not much we can do at this point - you should report the problem to your OS vendor, hoping someone knows about this, cheers, David From sccolbert at gmail.com Fri Mar 27 12:17:56 2009 From: sccolbert at gmail.com (Chris Colbert) Date: Fri, 27 Mar 2009 12:17:56 -0400 Subject: [Numpy-discussion] Built Lapack, Atlas from source.... now numpy.linalg.eig() hangs at 100% CPU In-Reply-To: <49CCF717.5020308@ar.media.kyoto-u.ac.jp> References: <7f014ea60903270725g61dbaaadj3b6e70582a4f8cd5@mail.gmail.com> <49CCE188.3090203@ar.media.kyoto-u.ac.jp> <7f014ea60903270818u4cb94051r97c89ca3e28d0990@mail.gmail.com> <7f014ea60903270819n35bd333dje5b3000e4ab430d0@mail.gmail.com> <49CCEB39.4020900@ar.media.kyoto-u.ac.jp> <7f014ea60903270825v6d15c157h44157839ad43dedc@mail.gmail.com> <49CCEF78.7040600@ar.media.kyoto-u.ac.jp> <7f014ea60903270849v330f69b9o1b732cf004293447@mail.gmail.com> <7f014ea60903270858xafbdfb3ud4a20c200c9591e3@mail.gmail.com> <49CCF717.5020308@ar.media.kyoto-u.ac.jp> Message-ID: <7f014ea60903270917v78035c62o202ede951dcd3403@mail.gmail.com> So you think its a problem with gcc? im using version 4.3.1 shipped with the ubuntu 8.10 distro. Chris On Fri, Mar 27, 2009 at 11:56 AM, David Cournapeau < david at ar.media.kyoto-u.ac.jp> wrote: > Chris Colbert wrote: > > David, > > > > The log was too big for the list, so I sent it to your email address > > directly. > > Hm, never saw this one. In the build log, one can see: > > ... 
> compile options: '-c' > > gcc: _configtest.c > gcc -pthread _configtest.o -L/usr/local/atlas/lib -llapack -lptf77blas > -lptcblas -latlas -o _configtest > /usr/bin/ld: _configtest: hidden symbol `__powidf2' in > /usr/lib/gcc/i486-linux-gnu/4.3.2/libgcc.a(_powidf2.o) is referenced by DSO > /usr/bin/ld: final link failed: Nonrepresentable section on output > collect2: ld returned 1 exit status > /usr/bin/ld: _configtest: hidden symbol `__powidf2' in > /usr/lib/gcc/i486-linux-gnu/4.3.2/libgcc.a(_powidf2.o) is referenced by DSO > /usr/bin/ld: final link failed: Nonrepresentable section on output > collect2: ld returned 1 exit status > failure. > > This does not look good. It may be a problem in your toolchain (i.e. how > your distribution build gcc and co). I am afraid there is not much we can do > at this point - you should report the problem to your OS vendor, hoping > someone knows about this, > > cheers, > > David > > _______________________________________________ > Numpy-discussion mailing list > Numpy-discussion at scipy.org > http://mail.scipy.org/mailman/listinfo/numpy-discussion > -------------- next part -------------- An HTML attachment was scrubbed... URL: From david at ar.media.kyoto-u.ac.jp Fri Mar 27 12:05:20 2009 From: david at ar.media.kyoto-u.ac.jp (David Cournapeau) Date: Sat, 28 Mar 2009 01:05:20 +0900 Subject: [Numpy-discussion] Built Lapack, Atlas from source.... now numpy.linalg.eig() hangs at 100% CPU In-Reply-To: <7f014ea60903270917v78035c62o202ede951dcd3403@mail.gmail.com> References: <7f014ea60903270725g61dbaaadj3b6e70582a4f8cd5@mail.gmail.com> <49CCE188.3090203@ar.media.kyoto-u.ac.jp> <7f014ea60903270818u4cb94051r97c89ca3e28d0990@mail.gmail.com> <7f014ea60903270819n35bd333dje5b3000e4ab430d0@mail.gmail.com> <49CCEB39.4020900@ar.media.kyoto-u.ac.jp> <7f014ea60903270825v6d15c157h44157839ad43dedc@mail.gmail.com> <49CCEF78.7040600@ar.media.kyoto-u.ac.jp> <7f014ea60903270849v330f69b9o1b732cf004293447@mail.gmail.com> <7f014ea60903270858xafbdfb3ud4a20c200c9591e3@mail.gmail.com> <49CCF717.5020308@ar.media.kyoto-u.ac.jp> <7f014ea60903270917v78035c62o202ede951dcd3403@mail.gmail.com> Message-ID: <49CCF940.2050507@ar.media.kyoto-u.ac.jp> Chris Colbert wrote: > So you think its a problem with gcc? That's my guess, yes. > > im using version 4.3.1 shipped with the ubuntu 8.10 distro. I thought you were using mint ? If you are using ubuntu, then it is very strange, because many people build and use numpy on this platform without any trouble. Is your OS 64 bits ? cheers, David From sccolbert at gmail.com Fri Mar 27 12:26:06 2009 From: sccolbert at gmail.com (Chris Colbert) Date: Fri, 27 Mar 2009 12:26:06 -0400 Subject: [Numpy-discussion] Built Lapack, Atlas from source.... now numpy.linalg.eig() hangs at 100% CPU In-Reply-To: <49CCF940.2050507@ar.media.kyoto-u.ac.jp> References: <7f014ea60903270725g61dbaaadj3b6e70582a4f8cd5@mail.gmail.com> <7f014ea60903270819n35bd333dje5b3000e4ab430d0@mail.gmail.com> <49CCEB39.4020900@ar.media.kyoto-u.ac.jp> <7f014ea60903270825v6d15c157h44157839ad43dedc@mail.gmail.com> <49CCEF78.7040600@ar.media.kyoto-u.ac.jp> <7f014ea60903270849v330f69b9o1b732cf004293447@mail.gmail.com> <7f014ea60903270858xafbdfb3ud4a20c200c9591e3@mail.gmail.com> <49CCF717.5020308@ar.media.kyoto-u.ac.jp> <7f014ea60903270917v78035c62o202ede951dcd3403@mail.gmail.com> <49CCF940.2050507@ar.media.kyoto-u.ac.jp> Message-ID: <7f014ea60903270926v3aad3d4cr878dcc49cbe712cd@mail.gmail.com> mint is built from like 98% ubuntu. In this case, Mint 6 is built from ubuntu 8.10. 
Most repository access is through the Ubuntu repositories. gcc falls under this... 32 bit OS Thanks again for your patience! I'm wet behind the ears when it comes to this kind of stuff. Chris On Fri, Mar 27, 2009 at 12:05 PM, David Cournapeau < david at ar.media.kyoto-u.ac.jp> wrote: > Chris Colbert wrote: > > So you think its a problem with gcc? > > That's my guess, yes. > > > > > im using version 4.3.1 shipped with the ubuntu 8.10 distro. > > I thought you were using mint ? If you are using ubuntu, then it is very > strange, because many people build and use numpy on this platform > without any trouble. > > Is your OS 64 bits ? > > cheers, > > David > _______________________________________________ > Numpy-discussion mailing list > Numpy-discussion at scipy.org > http://mail.scipy.org/mailman/listinfo/numpy-discussion > -------------- next part -------------- An HTML attachment was scrubbed... URL: From charlesr.harris at gmail.com Fri Mar 27 12:37:11 2009 From: charlesr.harris at gmail.com (Charles R Harris) Date: Fri, 27 Mar 2009 10:37:11 -0600 Subject: [Numpy-discussion] Numpy v1.3.0b1 on Linux w/ Intel compilers - unknown file type In-Reply-To: <7827114.1611238157765377.JavaMail.root@athene> References: <17048133.1591238157753233.JavaMail.root@athene> <7827114.1611238157765377.JavaMail.root@athene> Message-ID: 2009/3/27 Christian Marquardt > Error messages? Sure;-) > > python -c 'import numpy; numpy.test()' > Running unit tests for numpy > NumPy version 1.3.0b1 > NumPy is installed in /opt/apps/lib/python2.5/site-packages/numpy > Python version 2.5.2 (r252:60911, Aug 31 2008, 15:16:34) [GCC Intel(R) C++ > gcc 4.2 mode] > nose version 0.10.4 > > .......................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................K.........................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................FF..............FF.................................................................................................................................................................................................................
......................................................................................................................................................... > OK, the tests should be fixed in r6773. Chuck -------------- next part -------------- An HTML attachment was scrubbed... URL: From charlesr.harris at gmail.com Fri Mar 27 12:38:31 2009 From: charlesr.harris at gmail.com (Charles R Harris) Date: Fri, 27 Mar 2009 10:38:31 -0600 Subject: [Numpy-discussion] Changeset 6729 In-Reply-To: <9457e7c80903270522n1fdb1d9m2179938bb81eca2f@mail.gmail.com> References: <9457e7c80903270522n1fdb1d9m2179938bb81eca2f@mail.gmail.com> Message-ID: 2009/3/27 St?fan van der Walt > Hi Chuck > > 2009/3/27 Charles R Harris : > > Also, the test is buggy. > > Could you be a bit more specific? Which test, what is the problem, > what would you like to see? > I fixed it. You used assert_equal instead of assert_array_equal which caused the axis test to fail. Chuck -------------- next part -------------- An HTML attachment was scrubbed... URL: From cournape at gmail.com Fri Mar 27 12:43:00 2009 From: cournape at gmail.com (David Cournapeau) Date: Sat, 28 Mar 2009 01:43:00 +0900 Subject: [Numpy-discussion] Built Lapack, Atlas from source.... now numpy.linalg.eig() hangs at 100% CPU In-Reply-To: <7f014ea60903270926v3aad3d4cr878dcc49cbe712cd@mail.gmail.com> References: <7f014ea60903270725g61dbaaadj3b6e70582a4f8cd5@mail.gmail.com> <49CCEB39.4020900@ar.media.kyoto-u.ac.jp> <7f014ea60903270825v6d15c157h44157839ad43dedc@mail.gmail.com> <49CCEF78.7040600@ar.media.kyoto-u.ac.jp> <7f014ea60903270849v330f69b9o1b732cf004293447@mail.gmail.com> <7f014ea60903270858xafbdfb3ud4a20c200c9591e3@mail.gmail.com> <49CCF717.5020308@ar.media.kyoto-u.ac.jp> <7f014ea60903270917v78035c62o202ede951dcd3403@mail.gmail.com> <49CCF940.2050507@ar.media.kyoto-u.ac.jp> <7f014ea60903270926v3aad3d4cr878dcc49cbe712cd@mail.gmail.com> Message-ID: <5b8d13220903270943w5305d33eg831c00575be577a4@mail.gmail.com> 2009/3/28 Chris Colbert : > mint is built from like 98% ubuntu. Ok. The problem is that fortran often falls into the bottom percent as far as support is concerned, since so few people care :) Note that on Ubuntu 8.10, you can just install atlas from the repositories - and 1.3.0 deb will be provided once it is released cheers, David From sccolbert at gmail.com Fri Mar 27 12:47:07 2009 From: sccolbert at gmail.com (Chris Colbert) Date: Fri, 27 Mar 2009 12:47:07 -0400 Subject: [Numpy-discussion] Built Lapack, Atlas from source.... now numpy.linalg.eig() hangs at 100% CPU In-Reply-To: <5b8d13220903270943w5305d33eg831c00575be577a4@mail.gmail.com> References: <7f014ea60903270725g61dbaaadj3b6e70582a4f8cd5@mail.gmail.com> <7f014ea60903270825v6d15c157h44157839ad43dedc@mail.gmail.com> <49CCEF78.7040600@ar.media.kyoto-u.ac.jp> <7f014ea60903270849v330f69b9o1b732cf004293447@mail.gmail.com> <7f014ea60903270858xafbdfb3ud4a20c200c9591e3@mail.gmail.com> <49CCF717.5020308@ar.media.kyoto-u.ac.jp> <7f014ea60903270917v78035c62o202ede951dcd3403@mail.gmail.com> <49CCF940.2050507@ar.media.kyoto-u.ac.jp> <7f014ea60903270926v3aad3d4cr878dcc49cbe712cd@mail.gmail.com> <5b8d13220903270943w5305d33eg831c00575be577a4@mail.gmail.com> Message-ID: <7f014ea60903270947r6f11d0a9r84adf9f10c7d6fac@mail.gmail.com> forgive my ignorance, but wouldn't installing atlas from the repositories defeat the purpose of installing atlas at all, since the build process optimizes it to your own cpu timings? 
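The repository route on Ubuntu 8.10 would be roughly the following; the package names are from memory, so treat them as approximate.

    # gfortran-built ATLAS plus development files; not machine-tuned,
    # but the Fortran runtimes are guaranteed to be consistent
    sudo apt-get install libatlas3gf-base libatlas-base-dev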
Chris On Fri, Mar 27, 2009 at 12:43 PM, David Cournapeau wrote: > 2009/3/28 Chris Colbert : > > mint is built from like 98% ubuntu. > > Ok. The problem is that fortran often falls into the bottom percent as > far as support is concerned, since so few people care :) > > Note that on Ubuntu 8.10, you can just install atlas from the > repositories - and 1.3.0 deb will be provided once it is released > > cheers, > > David > _______________________________________________ > Numpy-discussion mailing list > Numpy-discussion at scipy.org > http://mail.scipy.org/mailman/listinfo/numpy-discussion > -------------- next part -------------- An HTML attachment was scrubbed... URL: From lutz.maibaum at gmail.com Fri Mar 27 12:52:05 2009 From: lutz.maibaum at gmail.com (Lutz Maibaum) Date: Fri, 27 Mar 2009 09:52:05 -0700 Subject: [Numpy-discussion] Normalization of ifft In-Reply-To: References: Message-ID: Hi Joe, > Travis has freed his original book and large parts of it (e.g., > the C API docs) are now being incorporated into the > actively-maintained manuals at docs.scipy.org. ?Please go there for > the latest docs. ?You'll find that the fft section gives the 1/n > formula when discussing ifft. Thanks for the explanation. It sounds like the ebook "Guide to Numpy" is no longer being updated. If that is the case, it might be useful to maintain a list of errata. > I can see where Lutz got the impression that Guide to Numpy was the > doc to read. ?The descriptions of books on both numpy.scipy.org and > docs.scipy.org do give that impression. That is indeed what happened. As someone who had never used Numpy before, I figured the "mature" documentation, while possibly not entirely up to date, would be the best start. > I haven't > checked in detail but much of the rest of Guide to Numpy is now > included in the Reference Guide. ?Would it be ok to put some words on > both sites to the effect that the RG is the place to go for routine, > class, and module docs, or (possibly) just the place to go, period? That is probably a good idea. However, it seems to me that the reference guide might not be the best place to starts if one wants to learn Numpy from scratch. I guess the Numpy User Guide will eventually replace the Guide to Numpy in that role, but it looks rather incomplete for now. Thanks for clearing this up, Lutz From david at ar.media.kyoto-u.ac.jp Fri Mar 27 12:36:55 2009 From: david at ar.media.kyoto-u.ac.jp (David Cournapeau) Date: Sat, 28 Mar 2009 01:36:55 +0900 Subject: [Numpy-discussion] Built Lapack, Atlas from source.... 
now numpy.linalg.eig() hangs at 100% CPU In-Reply-To: <7f014ea60903270947r6f11d0a9r84adf9f10c7d6fac@mail.gmail.com> References: <7f014ea60903270725g61dbaaadj3b6e70582a4f8cd5@mail.gmail.com> <7f014ea60903270825v6d15c157h44157839ad43dedc@mail.gmail.com> <49CCEF78.7040600@ar.media.kyoto-u.ac.jp> <7f014ea60903270849v330f69b9o1b732cf004293447@mail.gmail.com> <7f014ea60903270858xafbdfb3ud4a20c200c9591e3@mail.gmail.com> <49CCF717.5020308@ar.media.kyoto-u.ac.jp> <7f014ea60903270917v78035c62o202ede951dcd3403@mail.gmail.com> <49CCF940.2050507@ar.media.kyoto-u.ac.jp> <7f014ea60903270926v3aad3d4cr878dcc49cbe712cd@mail.gmail.com> <5b8d13220903270943w5305d33eg831c00575be577a4@mail.gmail.com> <7f014ea60903270947r6f11d0a9r84adf9f10c7d6fac@mail.gmail.com> Message-ID: <49CD00A7.1050807@ar.media.kyoto-u.ac.jp> Chris Colbert wrote: > forgive my ignorance, but wouldn't installing atlas from the > repositories defeat the purpose of installing atlas at all, since the > build process optimizes it to your own cpu timings? Yes and no. Yes, it will be slower than a cutom-build atlas, but it will be reasonably faster than blas/lapack. Please also keep in mind that this mostly matters for linear algebra and big matrices. Thinking from another POV: how many 1000x1000 matrices could have you inverted while wasting your time on this already :) cheers, David From sccolbert at gmail.com Fri Mar 27 12:57:39 2009 From: sccolbert at gmail.com (Chris Colbert) Date: Fri, 27 Mar 2009 12:57:39 -0400 Subject: [Numpy-discussion] Built Lapack, Atlas from source.... now numpy.linalg.eig() hangs at 100% CPU In-Reply-To: <49CD00A7.1050807@ar.media.kyoto-u.ac.jp> References: <7f014ea60903270725g61dbaaadj3b6e70582a4f8cd5@mail.gmail.com> <7f014ea60903270849v330f69b9o1b732cf004293447@mail.gmail.com> <7f014ea60903270858xafbdfb3ud4a20c200c9591e3@mail.gmail.com> <49CCF717.5020308@ar.media.kyoto-u.ac.jp> <7f014ea60903270917v78035c62o202ede951dcd3403@mail.gmail.com> <49CCF940.2050507@ar.media.kyoto-u.ac.jp> <7f014ea60903270926v3aad3d4cr878dcc49cbe712cd@mail.gmail.com> <5b8d13220903270943w5305d33eg831c00575be577a4@mail.gmail.com> <7f014ea60903270947r6f11d0a9r84adf9f10c7d6fac@mail.gmail.com> <49CD00A7.1050807@ar.media.kyoto-u.ac.jp> Message-ID: <7f014ea60903270957i7b99d87ak6150dc266eb86a58@mail.gmail.com> this is true. but not nearly as good of a learning experience :) I'm a mechanical engineer, so all of this computer science stuff is really new and interesting to me. So i'm trying my best to get a handle on exactly what is going on behind the scenes. Chris On Fri, Mar 27, 2009 at 12:36 PM, David Cournapeau < david at ar.media.kyoto-u.ac.jp> wrote: > Chris Colbert wrote: > > forgive my ignorance, but wouldn't installing atlas from the > > repositories defeat the purpose of installing atlas at all, since the > > build process optimizes it to your own cpu timings? > > Yes and no. Yes, it will be slower than a cutom-build atlas, but it will > be reasonably faster than blas/lapack. Please also keep in mind that > this mostly matters for linear algebra and big matrices. > > Thinking from another POV: how many 1000x1000 matrices could have you > inverted while wasting your time on this already :) > > cheers, > > David > _______________________________________________ > Numpy-discussion mailing list > Numpy-discussion at scipy.org > http://mail.scipy.org/mailman/listinfo/numpy-discussion > -------------- next part -------------- An HTML attachment was scrubbed... 
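The threaded-only site.cfg linkage described in the message that follows would look roughly like this (section and key names per numpy's site.cfg.example, from memory, so treat the exact spelling as approximate):

    [atlas]
    library_dirs = /usr/local/atlas/lib
    include_dirs = /usr/local/atlas/include
    atlas_libs = lapack, ptf77blas, ptcblas, atlas

Listing only the pt* libraries should be enough, since numpy links whatever is named here; the single-threaded f77blas/cblas then never enter the link line.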
URL: From sccolbert at gmail.com Fri Mar 27 13:09:59 2009 From: sccolbert at gmail.com (Chris Colbert) Date: Fri, 27 Mar 2009 13:09:59 -0400 Subject: [Numpy-discussion] Built Lapack, Atlas from source.... now numpy.linalg.eig() hangs at 100% CPU In-Reply-To: <7f014ea60903270957i7b99d87ak6150dc266eb86a58@mail.gmail.com> References: <7f014ea60903270725g61dbaaadj3b6e70582a4f8cd5@mail.gmail.com> <7f014ea60903270858xafbdfb3ud4a20c200c9591e3@mail.gmail.com> <49CCF717.5020308@ar.media.kyoto-u.ac.jp> <7f014ea60903270917v78035c62o202ede951dcd3403@mail.gmail.com> <49CCF940.2050507@ar.media.kyoto-u.ac.jp> <7f014ea60903270926v3aad3d4cr878dcc49cbe712cd@mail.gmail.com> <5b8d13220903270943w5305d33eg831c00575be577a4@mail.gmail.com> <7f014ea60903270947r6f11d0a9r84adf9f10c7d6fac@mail.gmail.com> <49CD00A7.1050807@ar.media.kyoto-u.ac.jp> <7f014ea60903270957i7b99d87ak6150dc266eb86a58@mail.gmail.com> Message-ID: <7f014ea60903271009x549ead9ev11102d731801e228@mail.gmail.com> some other things I might mention, though I doubt they would have an effect: When i built Atlas, I had to force it to use a 32-bit pointer length (I assume this is correct for a 32-bit OS as gcc.stub_64 wasnt found on my system) in numpy's site.cfg I only linked to the pthread .so's. Should I have also linked to the single threaded counterparts in the section above? (I assumed one would be overridden by the other) Other than those, I followed closely the instructions on scipy.org. Chris On Fri, Mar 27, 2009 at 12:57 PM, Chris Colbert wrote: > this is true. but not nearly as good of a learning experience :) > > I'm a mechanical engineer, so all of this computer science stuff is really > new and interesting to me. So i'm trying my best to get a handle on exactly > what is going on behind the scenes. > > Chris > > > On Fri, Mar 27, 2009 at 12:36 PM, David Cournapeau < > david at ar.media.kyoto-u.ac.jp> wrote: > >> Chris Colbert wrote: >> > forgive my ignorance, but wouldn't installing atlas from the >> > repositories defeat the purpose of installing atlas at all, since the >> > build process optimizes it to your own cpu timings? >> >> Yes and no. Yes, it will be slower than a cutom-build atlas, but it will >> be reasonably faster than blas/lapack. Please also keep in mind that >> this mostly matters for linear algebra and big matrices. >> >> Thinking from another POV: how many 1000x1000 matrices could have you >> inverted while wasting your time on this already :) >> >> cheers, >> >> David >> _______________________________________________ >> Numpy-discussion mailing list >> Numpy-discussion at scipy.org >> http://mail.scipy.org/mailman/listinfo/numpy-discussion >> > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From david at ar.media.kyoto-u.ac.jp Fri Mar 27 15:00:45 2009 From: david at ar.media.kyoto-u.ac.jp (David Cournapeau) Date: Sat, 28 Mar 2009 04:00:45 +0900 Subject: [Numpy-discussion] A few more questions about build doc Message-ID: <49CD225D.1090101@ar.media.kyoto-u.ac.jp> Hi, I spent the whole evening on automating our whole release process on supported platforms. I am almost there, but I have a few relatively minor annoyances related to doc: - Is it ok to build the pdf doc using LANG=C ? If I run sphinx-build without setting LANG=C, I got some weird latex errors at the latex->pdf stage, which I am reluctant to track down :) - I modified doc/source/conf.py such as the reported numpy version is exactly the one used to build the doc. 
Am I right that building the numpy documentation requires numpy to be installed (for autodoc and co), or is this a wrong assumption? I've realized this after the change, but I can of course revert it if that's a problem, cheers, David From robert.kern at gmail.com Fri Mar 27 16:15:32 2009 From: robert.kern at gmail.com (Robert Kern) Date: Fri, 27 Mar 2009 15:15:32 -0500 Subject: [Numpy-discussion] Is it ok to include GPL scripts in the numpy *repository* ? In-Reply-To: <9457e7c80903270518q71044da9k142a26135a62e57d@mail.gmail.com> References: <49CCAEFC.7050901@ar.media.kyoto-u.ac.jp> <49CCBC07.6000607@american.edu> <9457e7c80903270518q71044da9k142a26135a62e57d@mail.gmail.com> Message-ID: <3d375d730903271315p54903f70o204e73a8135efdcf@mail.gmail.com> 2009/3/27 Stéfan van der Walt : > 2009/3/27 Alan G Isaac : >> On 3/27/2009 6:48 AM David Cournapeau apparently wrote: >>> To build the numpy .dmg mac os x installer, I use a script from the >>> adium project, which uses applescript and some mac os x black magic. The >>> script seems to be GPL, as adium itself: >> >> >> It might be worth a query to see if the >> author would release just this script >> under the modified BSD license. >> http://trac.adiumx.com/wiki/ContactUs > > I don't see the need. This is just a tool, of which the source code, > as well as our modifications, are available. We don't link to it, we > don't derive anything in NumPy from it and we do not distribute it, so > we are not in any disagreement with the GPL. I concur. -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From millman at berkeley.edu Fri Mar 27 16:25:52 2009 From: millman at berkeley.edu (Jarrod Millman) Date: Fri, 27 Mar 2009 13:25:52 -0700 Subject: [Numpy-discussion] Is it ok to include GPL scripts in the numpy *repository* ? In-Reply-To: <49CCAEFC.7050901@ar.media.kyoto-u.ac.jp> References: <49CCAEFC.7050901@ar.media.kyoto-u.ac.jp> Message-ID: On Fri, Mar 27, 2009 at 3:48 AM, David Cournapeau wrote: > To build the numpy .dmg mac os x installer, I use a script from the > adium project, which uses applescript and some mac os x black magic. The > script seems to be GPL, as adium itself: Why do you need to use the adium project? I am just curious why the scripts I was using aren't sufficient: http://projects.scipy.org/numpy/browser/trunk/tools/osxbuild Jarrod From cournape at gmail.com Fri Mar 27 16:28:35 2009 From: cournape at gmail.com (David Cournapeau) Date: Sat, 28 Mar 2009 05:28:35 +0900 Subject: [Numpy-discussion] Is it ok to include GPL scripts in the numpy *repository* ? In-Reply-To: References: <49CCAEFC.7050901@ar.media.kyoto-u.ac.jp> Message-ID: <5b8d13220903271328o18fe9a13yb5a93a14a162398c@mail.gmail.com> On Sat, Mar 28, 2009 at 5:25 AM, Jarrod Millman wrote: > On Fri, Mar 27, 2009 at 3:48 AM, David Cournapeau > wrote: >> To build the numpy .dmg mac os x installer, I use a script from the >> adium project, which uses applescript and some mac os x black magic. The >> script seems to be GPL, as adium itself: > > Why do you need to use the adium project? I am just curious why the > scripts I was using aren't sufficient: > http://projects.scipy.org/numpy/browser/trunk/tools/osxbuild For "fancy" things like background images, fixing the windows size, etc... How mac os x does it is undocumented, and the only script I found to do it automatically was from adium.
cheers, David From theller at ctypes.org Fri Mar 27 16:32:49 2009 From: theller at ctypes.org (Thomas Heller) Date: Fri, 27 Mar 2009 21:32:49 +0100 Subject: [Numpy-discussion] numpy.ctypeslib.ndpointer and the restype attribute [patch] In-Reply-To: References: <6183458.298391237815382345.JavaMail.tomcat@pne-ps1-sn2> <1238067688.6464.29.camel@supraflex> <49CB804E.8060402@molden.no> Message-ID: > Sturla Molden schrieb: >> On 3/26/2009 12:41 PM, Jens Rantil wrote: >> >>> Wouldn't my code, or a tweak of it, be a nice feature in >>> numpy.ctypeslib? Is this the wrong channel for proposing things like >>> this? >> >> If you look at >> >> http://svn.scipy.org/svn/numpy/trunk/numpy/ctypeslib.py >> >> you will see that it does almost the same. I think it would be better to >> work out why ndpointer fails as restype and patch that. > Thomas Heller schrieb: > ndpointer(...), which returns an _nptr instance, does not work as restype > because neither it is a base class of one of the ctypes base types like > ctypes.c_void_p, also it is not callable with one argument. > > There are two ways to fix this. The first one is to make the _nptr callable [...] > > The other way is to make _nptr a subclass of ctypes.c_void_p, > the result that the foreign function call returns will then be > an instance of this class. Unfortunately, ctypes will not call > __new__() to create this instance; so a custom __new__() implementation > cannot return a numpy array and we are left with the _nptr instance. > The only way to create and access the numpy array is to construct > and return one from a method call on the _nptr instance, or a property > on the _nptr instance. > Ok, .errcheck could call that method and return the result. > Well, looking into the ctypes sources trying to invent a new protocol for the restype attribute I found out that THERE IS ALREADY a mechanism for it, but I had totally forgotten that it exists. When the .restype attribute of a function is set to a SUBCLASS of a ctypes type (c_void_p for example), an instance of this subclass is created. After that, if this instance has a _check_retval_ method, this method is called and the result of this call is returned. So, it is indeed possible to create a class that can be assigned to .restype, and which can convert the return value of a function to whatever we like. I will prepare a patch for numpy.ctypeslib. -- Thanks, Thomas From bryan at cole.uklinux.net Fri Mar 27 18:38:25 2009 From: bryan at cole.uklinux.net (Bryan Cole) Date: Fri, 27 Mar 2009 22:38:25 +0000 Subject: [Numpy-discussion] array of matrices Message-ID: <1238193504.12867.4.camel@pc2.cole.uklinux.net> I have a number of arrays of shape (N,4,4). I need to perform a vectorised matrix-multiplication between pairs of them I.e. matrix-multiplication rules for the last two dimensions, usual element-wise rule for the 1st dimension (of length N). (How) is this possible with numpy? thanks, BC From robert.kern at gmail.com Fri Mar 27 18:43:25 2009 From: robert.kern at gmail.com (Robert Kern) Date: Fri, 27 Mar 2009 17:43:25 -0500 Subject: [Numpy-discussion] array of matrices In-Reply-To: <1238193504.12867.4.camel@pc2.cole.uklinux.net> References: <1238193504.12867.4.camel@pc2.cole.uklinux.net> Message-ID: <3d375d730903271543m23e3f6dcj39c59cd115dedfa2@mail.gmail.com> On Fri, Mar 27, 2009 at 17:38, Bryan Cole wrote: > I have a number of arrays of shape (N,4,4). I need to perform a > vectorised matrix-multiplication between pairs of them I.e. 
> matrix-multiplication rules for the last two dimensions, usual > element-wise rule for the 1st dimension (of length N). > > (How) is this possible with numpy? dot(a,b) was specifically designed for this use case. -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From pav at iki.fi Fri Mar 27 18:56:23 2009 From: pav at iki.fi (Pauli Virtanen) Date: Fri, 27 Mar 2009 22:56:23 +0000 (UTC) Subject: [Numpy-discussion] A few more questions about build doc References: <49CD225D.1090101@ar.media.kyoto-u.ac.jp> Message-ID: Sat, 28 Mar 2009 04:00:45 +0900, David Cournapeau wrote: [clip] > - Is it ok to build the pdf doc using LANG=C ? If I run sphinx-build > without setting LANG=C, I got some weird latex errors at the latex->pdf > stage, which I am reluctant to track down :) LANG=C should be ok. > - I modified doc/source/conf.py such as the reported numpy version > is exactly the one used to build the doc. Am I right that building the > numpy documentation requires numpy to be installed (for autodoc and co), > or is this a wrong assumption ? I've realized this after the change, but > I can of course revert it if that's a problem, It's the correct assumption. I thought about this too, but decided to leave it alone so that the version number reported in the docs would correspond to the major XX.YY and not the bugfix XX.YY.ZZ releases. The point was that there ought to be no API changes in the .ZZ, so we'd like docs for newer versions (possibly containing updates etc.) be labelled as compatible with all XX.YY. versions. -- Pauli Virtanen From cournape at gmail.com Fri Mar 27 19:04:28 2009 From: cournape at gmail.com (David Cournapeau) Date: Sat, 28 Mar 2009 08:04:28 +0900 Subject: [Numpy-discussion] A few more questions about build doc In-Reply-To: References: <49CD225D.1090101@ar.media.kyoto-u.ac.jp> Message-ID: <5b8d13220903271604q2e6ef4a3hb42081895a717f05@mail.gmail.com> On Sat, Mar 28, 2009 at 7:56 AM, Pauli Virtanen wrote: > Sat, 28 Mar 2009 04:00:45 +0900, David Cournapeau wrote: > [clip] >> ? ? - Is it ok to build the pdf doc using LANG=C ? If I run sphinx-build >> without setting LANG=C, I got some weird latex errors at the latex->pdf >> stage, which I am reluctant to track down :) > > LANG=C should be ok. Ok - it looks like the problem may have not been caused by this, though, but by some weird import stuff (I am pretty happy with the almost 100 % automation, but paver + virtualenv + setuptools interaction for imports can be mind blowing). > > It's the correct assumption. > > I thought about this too, but decided to leave it alone so that the > version number reported in the docs would correspond to the major XX.YY > and not the bugfix XX.YY.ZZ releases. The point was that there ought to > be no API changes in the .ZZ, so we'd like docs for newer versions > (possibly containing updates etc.) be labelled as compatible with all > XX.YY. versions. Ok, I broke this, then - but this can be easily fixed by generating several version numbers in the version.py cheers, David From millman at berkeley.edu Fri Mar 27 19:19:41 2009 From: millman at berkeley.edu (Jarrod Millman) Date: Fri, 27 Mar 2009 16:19:41 -0700 Subject: [Numpy-discussion] SciPy 2009 Conference will be Aug. 18-23 Message-ID: The subject says it all. Over the next few days, we will be updating the conference website with additional information. 
So if you are interested, please keep an eye on: http://conference.scipy.org/ Jarrod From charlesr.harris at gmail.com Fri Mar 27 21:48:10 2009 From: charlesr.harris at gmail.com (Charles R Harris) Date: Fri, 27 Mar 2009 19:48:10 -0600 Subject: [Numpy-discussion] array of matrices In-Reply-To: <3d375d730903271543m23e3f6dcj39c59cd115dedfa2@mail.gmail.com> References: <1238193504.12867.4.camel@pc2.cole.uklinux.net> <3d375d730903271543m23e3f6dcj39c59cd115dedfa2@mail.gmail.com> Message-ID: On Fri, Mar 27, 2009 at 4:43 PM, Robert Kern wrote: > On Fri, Mar 27, 2009 at 17:38, Bryan Cole wrote: > > I have a number of arrays of shape (N,4,4). I need to perform a > > vectorised matrix-multiplication between pairs of them I.e. > > matrix-multiplication rules for the last two dimensions, usual > > element-wise rule for the 1st dimension (of length N). > > > > (How) is this possible with numpy? > > dot(a,b) was specifically designed for this use case. > I think maybe he wants to treat them as stacked matrices. In [13]: a = arange(8).reshape(2,2,2) In [14]: (a[:,:,:,newaxis]*a[:,newaxis,:,:]).sum(-2) Out[14]: array([[[ 2, 3], [ 6, 11]], [[46, 55], [66, 79]]]) In [15]: for i in range(2) : dot(a[i],a[i]) ....: Out[15]: array([[ 2, 3], [ 6, 11]]) Out[15]: array([[46, 55], [66, 79]]) Although it might be easier to keep them in a list. Chuck -------------- next part -------------- An HTML attachment was scrubbed... URL: From sccolbert at gmail.com Fri Mar 27 22:32:38 2009 From: sccolbert at gmail.com (Chris Colbert) Date: Fri, 27 Mar 2009 22:32:38 -0400 Subject: [Numpy-discussion] Built Lapack, Atlas from source.... now numpy.linalg.eig() hangs at 100% CPU In-Reply-To: <7f014ea60903271009x549ead9ev11102d731801e228@mail.gmail.com> References: <7f014ea60903270725g61dbaaadj3b6e70582a4f8cd5@mail.gmail.com> <49CCF717.5020308@ar.media.kyoto-u.ac.jp> <7f014ea60903270917v78035c62o202ede951dcd3403@mail.gmail.com> <49CCF940.2050507@ar.media.kyoto-u.ac.jp> <7f014ea60903270926v3aad3d4cr878dcc49cbe712cd@mail.gmail.com> <5b8d13220903270943w5305d33eg831c00575be577a4@mail.gmail.com> <7f014ea60903270947r6f11d0a9r84adf9f10c7d6fac@mail.gmail.com> <49CD00A7.1050807@ar.media.kyoto-u.ac.jp> <7f014ea60903270957i7b99d87ak6150dc266eb86a58@mail.gmail.com> <7f014ea60903271009x549ead9ev11102d731801e228@mail.gmail.com> Message-ID: <7f014ea60903271932p9b8ece7o2b73e8e5e6e2db5c@mail.gmail.com> Ok, im getting the same error on an install of straight ubuntu 8.10 the guy in this thread got the same error as me, but its not clear how he worked it out: http://www.mail-archive.com/numpy-discussion at scipy.org/msg13565.html from googling here: http://sources.redhat.com/ml/binutils/2004-12/msg00033.html it says that the library was not built correctly. does this mean my atlas .so's (which i built via -> make ptshared) are incorrect? I suppose I could just grab atlas from the repositories, but that would be admitting defeat. Chris On Fri, Mar 27, 2009 at 1:09 PM, Chris Colbert wrote: > some other things I might mention, though I doubt they would have an > effect: > > When i built Atlas, I had to force it to use a 32-bit pointer length (I > assume this is correct for a 32-bit OS as gcc.stub_64 wasnt found on my > system) > > in numpy's site.cfg I only linked to the pthread .so's. Should I have also > linked to the single threaded counterparts in the section above? (I assumed > one would be overridden by the other) > > Other than those, I followed closely the instructions on scipy.org. 
> > Chris > > > On Fri, Mar 27, 2009 at 12:57 PM, Chris Colbert wrote: > >> this is true. but not nearly as good of a learning experience :) >> >> I'm a mechanical engineer, so all of this computer science stuff is really >> new and interesting to me. So i'm trying my best to get a handle on exactly >> what is going on behind the scenes. >> >> Chris >> >> >> On Fri, Mar 27, 2009 at 12:36 PM, David Cournapeau < >> david at ar.media.kyoto-u.ac.jp> wrote: >> >>> Chris Colbert wrote: >>> > forgive my ignorance, but wouldn't installing atlas from the >>> > repositories defeat the purpose of installing atlas at all, since the >>> > build process optimizes it to your own cpu timings? >>> >>> Yes and no. Yes, it will be slower than a cutom-build atlas, but it will >>> be reasonably faster than blas/lapack. Please also keep in mind that >>> this mostly matters for linear algebra and big matrices. >>> >>> Thinking from another POV: how many 1000x1000 matrices could have you >>> inverted while wasting your time on this already :) >>> >>> cheers, >>> >>> David >>> _______________________________________________ >>> Numpy-discussion mailing list >>> Numpy-discussion at scipy.org >>> http://mail.scipy.org/mailman/listinfo/numpy-discussion >>> >> >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From sccolbert at gmail.com Fri Mar 27 23:41:31 2009 From: sccolbert at gmail.com (Chris Colbert) Date: Fri, 27 Mar 2009 23:41:31 -0400 Subject: [Numpy-discussion] Built Lapack, Atlas from source.... now numpy.linalg.eig() hangs at 100% CPU In-Reply-To: <7f014ea60903271932p9b8ece7o2b73e8e5e6e2db5c@mail.gmail.com> References: <7f014ea60903270725g61dbaaadj3b6e70582a4f8cd5@mail.gmail.com> <7f014ea60903270917v78035c62o202ede951dcd3403@mail.gmail.com> <49CCF940.2050507@ar.media.kyoto-u.ac.jp> <7f014ea60903270926v3aad3d4cr878dcc49cbe712cd@mail.gmail.com> <5b8d13220903270943w5305d33eg831c00575be577a4@mail.gmail.com> <7f014ea60903270947r6f11d0a9r84adf9f10c7d6fac@mail.gmail.com> <49CD00A7.1050807@ar.media.kyoto-u.ac.jp> <7f014ea60903270957i7b99d87ak6150dc266eb86a58@mail.gmail.com> <7f014ea60903271009x549ead9ev11102d731801e228@mail.gmail.com> <7f014ea60903271932p9b8ece7o2b73e8e5e6e2db5c@mail.gmail.com> Message-ID: <7f014ea60903272041s7f84be0ap413612b509780a91@mail.gmail.com> Alright, building numpy against atlas from the repositories works, but this atlas only contains the single threaded libraries. So i would like to get my build working completely. I think the problem has to do with how im making the atlas .so's from the .a files. I am simply calling the command 'make ptshared' in the atlas lib directory. The LDFLAGS of that particular makefile is set to '-melf_i386'. I have no idea what this means, the only thing I know is that LDFLAGS has something to do with linking, and from what I read on google, the error I am getting is do to improperly created .so files. I've attached both makefiles to this message, if anyone could take a look and see if something obvious is amiss. Thanks, Chris On Fri, Mar 27, 2009 at 10:32 PM, Chris Colbert wrote: > Ok, im getting the same error on an install of straight ubuntu 8.10 > > the guy in this thread got the same error as me, but its not clear how he > worked it out: > http://www.mail-archive.com/numpy-discussion at scipy.org/msg13565.html > > from googling here: > http://sources.redhat.com/ml/binutils/2004-12/msg00033.html > > it says that the library was not built correctly. 
> > does this mean my atlas .so's (which i built via -> make ptshared) are > incorrect? > > I suppose I could just grab atlas from the repositories, but that would be > admitting defeat. > > Chris > > > On Fri, Mar 27, 2009 at 1:09 PM, Chris Colbert wrote: > >> some other things I might mention, though I doubt they would have an >> effect: >> >> When i built Atlas, I had to force it to use a 32-bit pointer length (I >> assume this is correct for a 32-bit OS as gcc.stub_64 wasnt found on my >> system) >> >> in numpy's site.cfg I only linked to the pthread .so's. Should I have also >> linked to the single threaded counterparts in the section above? (I assumed >> one would be overridden by the other) >> >> Other than those, I followed closely the instructions on scipy.org. >> >> Chris >> >> >> On Fri, Mar 27, 2009 at 12:57 PM, Chris Colbert wrote: >> >>> this is true. but not nearly as good of a learning experience :) >>> >>> I'm a mechanical engineer, so all of this computer science stuff is >>> really new and interesting to me. So i'm trying my best to get a handle on >>> exactly what is going on behind the scenes. >>> >>> Chris >>> >>> >>> On Fri, Mar 27, 2009 at 12:36 PM, David Cournapeau < >>> david at ar.media.kyoto-u.ac.jp> wrote: >>> >>>> Chris Colbert wrote: >>>> > forgive my ignorance, but wouldn't installing atlas from the >>>> > repositories defeat the purpose of installing atlas at all, since the >>>> > build process optimizes it to your own cpu timings? >>>> >>>> Yes and no. Yes, it will be slower than a cutom-build atlas, but it will >>>> be reasonably faster than blas/lapack. Please also keep in mind that >>>> this mostly matters for linear algebra and big matrices. >>>> >>>> Thinking from another POV: how many 1000x1000 matrices could have you >>>> inverted while wasting your time on this already :) >>>> >>>> cheers, >>>> >>>> David >>>> _______________________________________________ >>>> Numpy-discussion mailing list >>>> Numpy-discussion at scipy.org >>>> http://mail.scipy.org/mailman/listinfo/numpy-discussion >>>> >>> >>> >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: Makefile Type: application/octet-stream Size: 4184 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: Make.inc Type: application/octet-stream Size: 6425 bytes Desc: not available URL: From robert.kern at gmail.com Sat Mar 28 03:47:09 2009 From: robert.kern at gmail.com (Robert Kern) Date: Sat, 28 Mar 2009 02:47:09 -0500 Subject: [Numpy-discussion] array of matrices In-Reply-To: References: <1238193504.12867.4.camel@pc2.cole.uklinux.net> <3d375d730903271543m23e3f6dcj39c59cd115dedfa2@mail.gmail.com> Message-ID: <3d375d730903280047h2195a468i108f963453bdb78d@mail.gmail.com> 2009/3/27 Charles R Harris : > > On Fri, Mar 27, 2009 at 4:43 PM, Robert Kern wrote: >> >> On Fri, Mar 27, 2009 at 17:38, Bryan Cole wrote: >> > I have a number of arrays of shape (N,4,4). I need to perform a >> > vectorised matrix-multiplication between pairs of them I.e. >> > matrix-multiplication rules for the last two dimensions, usual >> > element-wise rule for the 1st dimension (of length N). >> > >> > (How) is this possible with numpy? >> >> dot(a,b) was specifically designed for this use case. > > I think maybe he wants to treat them as stacked matrices. Oh, right. Sorry. dot(a, b) works when a is (N, 4, 4) and b is just (4, 4). Never mind. 
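For two different stacks, Chuck's broadcasting trick earlier in the thread generalizes directly. A minimal sketch, assuming illustrative names and shapes that are not from the thread:

import numpy as np

N = 3
a = np.random.rand(N, 4, 4)
b = np.random.rand(N, 4, 4)

# Pair up the stacks so that c[n] == np.dot(a[n], b[n]) for every n:
# give a a trailing axis, give b an axis in front of its rows, multiply,
# then sum over the shared (contracted) axis.
c = (a[:, :, :, np.newaxis] * b[:, np.newaxis, :, :]).sum(axis=-2)

# Sanity check against an explicit loop.
expected = np.array([np.dot(a[n], b[n]) for n in range(N)])
assert np.allclose(c, expected)

Note the intermediate product has shape (N, 4, 4, 4); for small blocks like these that is cheap, but for large matrices the explicit loop over dot() may well win, since each dot() call goes through BLAS.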
-- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From j.reid at mail.cryst.bbk.ac.uk Sat Mar 28 07:01:12 2009 From: j.reid at mail.cryst.bbk.ac.uk (John Reid) Date: Sat, 28 Mar 2009 11:01:12 +0000 Subject: [Numpy-discussion] How to tell whether I am using 32 bit or 64 bit numpy? Message-ID: I imagine I'm using 64 bit numpy as I made a vanilla install from recent source on a 64 bit box but how can I tell for sure? I have some problems creating large arrays. In [29]: a=numpy.empty((1024, 1024, 1024), dtype=int8) works just fine In [30]: a=numpy.empty((1024, 1024, 2048), dtype=int8) gives me the dimensions too large error: ValueError: dimensions too large. In [31]: a=numpy.empty((1024, 1024, 2047), dtype=int8) gives me a memory error: MemoryError: How can I create these large arrays? Do I need to make sure I have a 64 bit python? How do I do that? Thanks in advance, John. From cournape at gmail.com Sat Mar 28 07:07:46 2009 From: cournape at gmail.com (David Cournapeau) Date: Sat, 28 Mar 2009 20:07:46 +0900 Subject: [Numpy-discussion] How to tell whether I am using 32 bit or 64 bit numpy? In-Reply-To: References: Message-ID: <5b8d13220903280407v62712553sb4055ca634138cf0@mail.gmail.com> On Sat, Mar 28, 2009 at 8:01 PM, John Reid wrote: > I imagine I'm using 64 bit numpy as I made a vanilla install from recent > source on a 64 bit box but how can I tell for sure? I have some problems > creating large arrays. from platform import machine print machine() Should give you something like x86_64 for 64 bits intel/amd architecture, David From charlesr.harris at gmail.com Sat Mar 28 07:16:54 2009 From: charlesr.harris at gmail.com (Charles R Harris) Date: Sat, 28 Mar 2009 05:16:54 -0600 Subject: [Numpy-discussion] How to tell whether I am using 32 bit or 64 bit numpy? In-Reply-To: References: Message-ID: On Sat, Mar 28, 2009 at 5:01 AM, John Reid wrote: > I imagine I'm using 64 bit numpy as I made a vanilla install from recent > source on a 64 bit box but how can I tell for sure? I have some problems > creating large arrays. > What platform are you on? I'm guessing Mac. You can check python on unix type systems with $[charris at f9 ~]$ file `which python` /usr/bin/python: ELF 32-bit LSB executable, Intel 80386, version 1 (SYSV), dynamically linked (uses shared libs), for GNU/Linux 2.6.9, stripped Chuck -------------- next part -------------- An HTML attachment was scrubbed... URL: From j.reid at mail.cryst.bbk.ac.uk Sat Mar 28 07:19:59 2009 From: j.reid at mail.cryst.bbk.ac.uk (John Reid) Date: Sat, 28 Mar 2009 11:19:59 +0000 Subject: [Numpy-discussion] How to tell whether I am using 32 bit or 64 bit numpy? In-Reply-To: References: Message-ID: Sorry for noise, it is my mistake. My assumption that the box is 64 bit was wrong :( At least the processors are 64 bit : Intel? Core?2 Duo Processor T9600 but uname -m reports: i686 which as far as I understand means it thinks it is a 32 bit processor. If anyone knows better please let me know. John. From j.reid at mail.cryst.bbk.ac.uk Sat Mar 28 07:23:18 2009 From: j.reid at mail.cryst.bbk.ac.uk (John Reid) Date: Sat, 28 Mar 2009 11:23:18 +0000 Subject: [Numpy-discussion] How to tell whether I am using 32 bit or 64 bit numpy? 
In-Reply-To: <5b8d13220903280407v62712553sb4055ca634138cf0@mail.gmail.com> References: <5b8d13220903280407v62712553sb4055ca634138cf0@mail.gmail.com> Message-ID: David Cournapeau wrote: > from platform import machine > print machine() > > Should give you something like x86_64 for 64 bits intel/amd architecture, In [3]: from platform import machine In [4]: print machine() i686 Now I'm wondering why the OS isn't 64 bit but that's not for discussion here I guess. John. From charlesr.harris at gmail.com Sat Mar 28 07:28:16 2009 From: charlesr.harris at gmail.com (Charles R Harris) Date: Sat, 28 Mar 2009 05:28:16 -0600 Subject: [Numpy-discussion] How to tell whether I am using 32 bit or 64 bit numpy? In-Reply-To: References: <5b8d13220903280407v62712553sb4055ca634138cf0@mail.gmail.com> Message-ID: On Sat, Mar 28, 2009 at 5:23 AM, John Reid wrote: > David Cournapeau wrote: > > from platform import machine > > print machine() > > > > Should give you something like x86_64 for 64 bits intel/amd architecture, > > > In [3]: from platform import machine > > In [4]: print machine() > i686 > > > Now I'm wondering why the OS isn't 64 bit but that's not for discussion > here I guess. > What really matters is if python is 64 bits. Most 64 bit systems also run 32 bit binaries. Chuck -------------- next part -------------- An HTML attachment was scrubbed... URL: From j.reid at mail.cryst.bbk.ac.uk Sat Mar 28 07:21:12 2009 From: j.reid at mail.cryst.bbk.ac.uk (John Reid) Date: Sat, 28 Mar 2009 11:21:12 +0000 Subject: [Numpy-discussion] How to tell whether I am using 32 bit or 64 bit numpy? In-Reply-To: References: Message-ID: Charles R Harris wrote: > What platform are you on? I'm guessing Mac. You can check python on unix > type systems with > > $[charris at f9 ~]$ file `which python` > /usr/bin/python: ELF 32-bit LSB executable, Intel 80386, version 1 > (SYSV), dynamically linked (uses shared libs), for GNU/Linux 2.6.9, stripped I'm on OpenSuse: file `which python` /usr/local/bin/python: ELF 32-bit LSB executable, Intel 80386, version 1 (SYSV), for GNU/Linux 2.6.4, dynamically linked (uses shared libs), for GNU/Linux 2.6.4, not stripped From j.reid at mail.cryst.bbk.ac.uk Sat Mar 28 07:31:11 2009 From: j.reid at mail.cryst.bbk.ac.uk (John Reid) Date: Sat, 28 Mar 2009 11:31:11 +0000 Subject: [Numpy-discussion] How to tell whether I am using 32 bit or 64 bit numpy? In-Reply-To: References: Message-ID: Charles R Harris wrote: > What platform are you on? I'm guessing Mac. You can check python on unix > type systems with > > $[charris at f9 ~]$ file `which python` > /usr/bin/python: ELF 32-bit LSB executable, Intel 80386, version 1 > (SYSV), dynamically linked (uses shared libs), for GNU/Linux 2.6.9, stripped file `which python` /usr/local/bin/python: ELF 32-bit LSB executable, Intel 80386, version 1 (SYSV), for GNU/Linux 2.6.4, dynamically linked (uses shared libs), for GNU/Linux 2.6.4, not stripped From cournape at gmail.com Sat Mar 28 07:38:10 2009 From: cournape at gmail.com (David Cournapeau) Date: Sat, 28 Mar 2009 20:38:10 +0900 Subject: [Numpy-discussion] How to tell whether I am using 32 bit or 64 bit numpy? 
In-Reply-To: References: <5b8d13220903280407v62712553sb4055ca634138cf0@mail.gmail.com> Message-ID: <5b8d13220903280438p56da20b3gffb68dbe9ad9db52@mail.gmail.com> On Sat, Mar 28, 2009 at 8:23 PM, John Reid wrote: > David Cournapeau wrote: >> from platform import machine >> print machine() >> >> Should give you something like x86_64 for 64 bits intel/amd architecture, > > > In [3]: from platform import machine > > In [4]: print machine() > i686 > > > Now I'm wondering why the OS isn't 64 bit but that's not for discussion > here I guess. Generally, at least on linux, you have to choose a difference installation CD (or bootstrap method) depending on whether you want 32 or 64 bits OS when installing. Assuming a 64 bits capable CPU, I think you can't run 64 bits binaries on a 32 bits OS, but the contrary is more common (I don't really know the details - I stopped caring with vmware :) ). cheers, David From j.reid at mail.cryst.bbk.ac.uk Sat Mar 28 07:32:57 2009 From: j.reid at mail.cryst.bbk.ac.uk (John Reid) Date: Sat, 28 Mar 2009 11:32:57 +0000 Subject: [Numpy-discussion] How to tell whether I am using 32 bit or 64 bit numpy? In-Reply-To: References: <5b8d13220903280407v62712553sb4055ca634138cf0@mail.gmail.com> Message-ID: Charles R Harris wrote: > What really matters is if python is 64 bits. Most 64 bit systems also > run 32 bit binaries. Are you saying that even if "uname -m" gives i686, I still might be able to build a 64 bit python and numpy? From cournape at gmail.com Sat Mar 28 07:45:59 2009 From: cournape at gmail.com (David Cournapeau) Date: Sat, 28 Mar 2009 20:45:59 +0900 Subject: [Numpy-discussion] How to tell whether I am using 32 bit or 64 bit numpy? In-Reply-To: References: <5b8d13220903280407v62712553sb4055ca634138cf0@mail.gmail.com> Message-ID: <5b8d13220903280445p6ebf95eh624f5cf94020d2ea@mail.gmail.com> On Sat, Mar 28, 2009 at 8:32 PM, John Reid wrote: > > > Charles R Harris wrote: >> What really matters is if python is 64 bits. Most 64 bit systems also >> run 32 bit binaries. > > Are you saying that even if "uname -m" gives i686, I still might be able > to build a 64 bit python and numpy? I think he meant exactly the contrary ;) AFAIK, you can't run 64 bits binaries on a 32 bits linux, even if your CPU is 64 bits, David From charlesr.harris at gmail.com Sat Mar 28 07:48:54 2009 From: charlesr.harris at gmail.com (Charles R Harris) Date: Sat, 28 Mar 2009 05:48:54 -0600 Subject: [Numpy-discussion] How to tell whether I am using 32 bit or 64 bit numpy? In-Reply-To: References: <5b8d13220903280407v62712553sb4055ca634138cf0@mail.gmail.com> Message-ID: On Sat, Mar 28, 2009 at 5:32 AM, John Reid wrote: > > > Charles R Harris wrote: > > What really matters is if python is 64 bits. Most 64 bit systems also > > run 32 bit binaries. > > Are you saying that even if "uname -m" gives i686, I still might be able > to build a 64 bit python and numpy? > Probably not. You need a 64 bit operating system and it doesn't look like you have that. Did you install Suse yourself? Chuck -------------- next part -------------- An HTML attachment was scrubbed... URL: From j.reid at mail.cryst.bbk.ac.uk Sat Mar 28 08:01:32 2009 From: j.reid at mail.cryst.bbk.ac.uk (John Reid) Date: Sat, 28 Mar 2009 12:01:32 +0000 Subject: [Numpy-discussion] How to tell whether I am using 32 bit or 64 bit numpy? 
In-Reply-To: References: <5b8d13220903280407v62712553sb4055ca634138cf0@mail.gmail.com> Message-ID: Charles R Harris wrote: > > > On Sat, Mar 28, 2009 at 5:32 AM, John Reid > wrote: > > > > Charles R Harris wrote: > > What really matters is if python is 64 bits. Most 64 bit systems also > > run 32 bit binaries. > > Are you saying that even if "uname -m" gives i686, I still might be able > to build a 64 bit python and numpy? > > > Probably not. You need a 64 bit operating system and it doesn't look > like you have that. Did you install Suse yourself? Nope, but I do have root access to the box. I think it is probably a bit late to change it now considering how much has been installed already and the number of other users. Thanks to you and David for your help. John. From dineshbvadhia at hotmail.com Sat Mar 28 09:21:02 2009 From: dineshbvadhia at hotmail.com (Dinesh B Vadhia) Date: Sat, 28 Mar 2009 06:21:02 -0700 Subject: [Numpy-discussion] How to tell whether I am using 32 bit or 64bit numpy? In-Reply-To: <5b8d13220903280407v62712553sb4055ca634138cf0@mail.gmail.com> References: <5b8d13220903280407v62712553sb4055ca634138cf0@mail.gmail.com> Message-ID: Uhmmm! I installed 64-bit Python (2.5x) on a Windows 64-bit Vista machine (yes, strange but true) hoping that the 32-bit Numpy & Scipy libraries would work but they didn't. From: Charles R Harris Sent: Saturday, March 28, 2009 4:28 AM To: Discussion of Numerical Python Subject: Re: [Numpy-discussion] How to tell whether I am using 32 bit or 64bit numpy? On Sat, Mar 28, 2009 at 5:23 AM, John Reid wrote: David Cournapeau wrote: > from platform import machine > print machine() > > Should give you something like x86_64 for 64 bits intel/amd architecture, In [3]: from platform import machine In [4]: print machine() i686 Now I'm wondering why the OS isn't 64 bit but that's not for discussion here I guess. What really matters is if python is 64 bits. Most 64 bit systems also run 32 bit binaries. Chuck -------------- next part -------------- An HTML attachment was scrubbed... URL: From david at ar.media.kyoto-u.ac.jp Sat Mar 28 09:16:02 2009 From: david at ar.media.kyoto-u.ac.jp (David Cournapeau) Date: Sat, 28 Mar 2009 22:16:02 +0900 Subject: [Numpy-discussion] How to tell whether I am using 32 bit or 64bit numpy? In-Reply-To: References: <5b8d13220903280407v62712553sb4055ca634138cf0@mail.gmail.com> Message-ID: <49CE2312.3010805@ar.media.kyoto-u.ac.jp> Dinesh B Vadhia wrote: > Uhmmm! I installed 64-bit Python (2.5x) on a Windows 64-bit Vista > machine (yes, strange but true) hoping that the 32-bit Numpy & Scipy > libraries would work but they didn't. That's a totally different situation: in your case, python and numpy share the same address space in one process (for all purpose, numpy is a dll for python), and you certainly can't mix 32 and 64 bits in the same process. What you can do is running 32 bits numpy/scipy for a 32 bits python on windows 64 bits... ... or helping us making numpy and scipy work on windows 64 bits by testing the experimental 64 bits builds of numpy/scipy for windows :) cheers, David From david at ar.media.kyoto-u.ac.jp Sat Mar 28 09:26:31 2009 From: david at ar.media.kyoto-u.ac.jp (David Cournapeau) Date: Sat, 28 Mar 2009 22:26:31 +0900 Subject: [Numpy-discussion] [Announce] Numpy 1.3.0 rc1 Message-ID: <49CE2587.2050007@ar.media.kyoto-u.ac.jp> Hi, I am pleased to announce the release of the rc1 for numpy 1.3.0. 
You can find source tarballs and installers for both Mac OS X and Windows on
the sourceforge page:

https://sourceforge.net/projects/numpy/

The release notes for the 1.3.0 release are below.

The Numpy developers

=========================
NumPy 1.3.0 Release Notes
=========================

This minor release includes numerous bug fixes, official python 2.6 support,
and several new features such as generalized ufuncs.

Highlights
==========

Python 2.6 support
~~~~~~~~~~~~~~~~~~

Python 2.6 is now supported on all previously supported platforms, including
windows.

http://www.python.org/dev/peps/pep-0361/

Generalized ufuncs
~~~~~~~~~~~~~~~~~~

There is a general need for looping over not only functions on scalars but
also over functions on vectors (or arrays), as explained on
http://scipy.org/scipy/numpy/wiki/GeneralLoopingFunctions. We propose to
realize this concept by generalizing the universal functions (ufuncs), and
provide a C implementation that adds ~500 lines to the numpy code base. In
current (specialized) ufuncs, the elementary function is limited to
element-by-element operations, whereas the generalized version supports
"sub-array" by "sub-array" operations. The Perl vector library PDL provides
a similar functionality and its terms are re-used in the following.

Each generalized ufunc has information associated with it that states what
the "core" dimensionality of the inputs is, as well as the corresponding
dimensionality of the outputs (the element-wise ufuncs have zero core
dimensions). The list of the core dimensions for all arguments is called the
"signature" of a ufunc. For example, the ufunc numpy.add has signature
"(),()->()" defining two scalar inputs and one scalar output.

Another example is (see the GeneralLoopingFunctions page) the function
inner1d(a,b) with a signature of "(i),(i)->()". This applies the inner
product along the last axis of each input, but keeps the remaining indices
intact. For example, where a is of shape (3,5,N) and b is of shape (5,N),
this will return an output of shape (3,5). The underlying elementary
function is called 3*5 times. In the signature, we specify one core
dimension "(i)" for each input and zero core dimensions "()" for the output,
since it takes two 1-d arrays and returns a scalar. By using the same name
"i", we specify that the two corresponding dimensions should be of the same
size (or one of them is of size 1 and will be broadcasted).

The dimensions beyond the core dimensions are called "loop" dimensions. In
the above example, this corresponds to (3,5). The usual numpy "broadcasting"
rules apply, where the signature determines how the dimensions of each
input/output object are split into core and loop dimensions: While an input
array has a smaller dimensionality than the corresponding number of core
dimensions, 1's are pre-pended to its shape. The core dimensions are removed
from all inputs and the remaining dimensions are broadcast, defining the
loop dimensions. The output is given by the loop dimensions plus the output
core dimensions.

Experimental Windows 64 bits support
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Numpy can now be built on windows 64 bits (amd64 only, not IA64), with both
MS compilers and mingw-w64 compilers. This is *highly experimental*: DO NOT
USE FOR PRODUCTION USE. See INSTALL.txt, Windows 64 bits section for more
information on limitations and how to build it by yourself.
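As an aside, the inner1d example above can be spelled in plain numpy, which
may help make the signature semantics concrete. A hedged sketch, using the
shapes from the text (with N = 10):

import numpy as np

a = np.random.rand(3, 5, 10)   # loop dimensions (3, 5), core dimension i = 10
b = np.random.rand(5, 10)      # loop dimensions (5,),  core dimension i = 10

# Signature "(i),(i)->()": contract the core axis, broadcast the loop
# dimensions -- the equivalent plain-numpy spelling is:
result = (a * b).sum(axis=-1)
assert result.shape == (3, 5)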
New features
============

Formatting issues
~~~~~~~~~~~~~~~~~

Float formatting is now handled by numpy instead of the C runtime: this
enables locale independent formatting, more robust fromstring and related
methods. Special values (inf and nan) are also more consistent across
platforms (nan vs IND/NaN, etc...), and more consistent with recent python
formatting work (in 2.6 and later).

Nan handling in max/min
~~~~~~~~~~~~~~~~~~~~~~~

The maximum/minimum ufuncs now reliably propagate nans. If one of the
arguments is a nan, then nan is returned. This affects np.min/np.max,
amin/amax and the array methods max/min. New ufuncs fmax and fmin have been
added to deal with non-propagating nans.

Nan handling in sign
~~~~~~~~~~~~~~~~~~~~

The ufunc sign now returns nan for the sign of a nan.

New ufuncs
~~~~~~~~~~

#. fmax - same as maximum for integer types and non-nan floats. Returns the
   non-nan argument if one argument is nan and returns nan if both arguments
   are nan.
#. fmin - same as minimum for integer types and non-nan floats. Returns the
   non-nan argument if one argument is nan and returns nan if both arguments
   are nan.
#. deg2rad - converts degrees to radians, same as the radians ufunc.
#. rad2deg - converts radians to degrees, same as the degrees ufunc.
#. log2 - base 2 logarithm.
#. exp2 - base 2 exponential.
#. trunc - truncate floats to nearest integer towards zero.
#. logaddexp - add numbers stored as logarithms and return the logarithm of
   the result.
#. logaddexp2 - add numbers stored as base 2 logarithms and return the base 2
   logarithm of the result.

Masked arrays
~~~~~~~~~~~~~

Several new features and bug fixes, including:

* structured arrays should now be fully supported by MaskedArray (r6463,
  r6324, r6305, r6300, r6294...)
* Minor bug fixes (r6356, r6352, r6335, r6299, r6298)
* Improved support for __iter__ (r6326)
* made baseclass, sharedmask and hardmask accessible to the user (but
  read-only)
* doc update

gfortran support on windows
~~~~~~~~~~~~~~~~~~~~~~~~~~~

Gfortran can now be used as a fortran compiler for numpy on windows, even
when the C compiler is Visual Studio (VS 2005 and above; VS 2003 will NOT
work). Gfortran + Visual studio does not work on windows 64 bits (but gcc +
gfortran does). It is unclear whether it will be possible to use gfortran
and visual studio at all on x64.

Arch option for windows binary
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Automatic arch detection can now be bypassed from the command line for the
superpack installer:

    numpy-1.3.0-superpack-win32.exe /arch=nosse

will install a numpy which works on any x86, even if the running computer
supports the SSE instruction set.

Deprecated features
===================

Histogram
~~~~~~~~~

The semantics of histogram have been modified to fix long-standing issues
with outlier handling. The main changes concern

#. the definition of the bin edges, now including the rightmost edge, and
#. the handling of upper outliers, now ignored rather than tallied in the
   rightmost bin.

The previous behavior is still accessible using `new=False`, but this is
deprecated, and will be removed entirely in 1.4.0.

Documentation changes
=====================

A lot of documentation has been added. Both user guide and references can be
built from sphinx.
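As a quick illustration of the nan rules described under "Nan handling"
above -- a sketch of the 1.3 behavior; earlier releases could give
platform-dependent answers:

import numpy as np

a = np.array([1.0, np.nan, 3.0])
b = np.array([2.0, np.nan, np.nan])

np.maximum(a, b)   # -> [ 2.  nan  nan]   nan always propagates
np.fmax(a, b)      # -> [ 2.  nan   3.]   nan only when both inputs are nan
np.sign(np.nan)    # -> nan
np.logaddexp(np.log(0.5), np.log(0.5))   # -> 0.0, i.e. log(0.5 + 0.5)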
New C API
=========

Multiarray API
~~~~~~~~~~~~~~

The following functions have been added to the multiarray C API:

* PyArray_GetEndianness: to get runtime endianness

Ufunc API
~~~~~~~~~

The following functions have been added to the ufunc API:

* PyUFunc_FromFuncAndDataAndSignature: to declare a more general ufunc
  (generalized ufunc).

New defines
~~~~~~~~~~~

New public C defines are available for ARCH specific code through
numpy/npy_cpu.h:

* NPY_CPU_X86: x86 arch (32 bits)
* NPY_CPU_AMD64: amd64 arch (x86_64, NOT Itanium)
* NPY_CPU_PPC: 32 bits ppc
* NPY_CPU_PPC64: 64 bits ppc
* NPY_CPU_SPARC: 32 bits sparc
* NPY_CPU_SPARC64: 64 bits sparc
* NPY_CPU_S390: S390
* NPY_CPU_IA64: ia64
* NPY_CPU_PARISC: PARISC

New macros for CPU endianness have been added as well (see internal changes
below for details):

* NPY_BYTE_ORDER: integer
* NPY_LITTLE_ENDIAN/NPY_BIG_ENDIAN defines

Those provide portable alternatives to the glibc endian.h macros for
platforms without it.

Portable NAN, INFINITY, etc...
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

npy_math.h now makes available several portable macros to get NAN, INFINITY:

* NPY_NAN: equivalent to NAN, which is a GNU extension
* NPY_INFINITY: equivalent to C99 INFINITY
* NPY_PZERO, NPY_NZERO: positive and negative zero respectively

Corresponding single and extended precision macros are available as well.
All references to NAN, or home-grown computation of NAN on the fly, have
been removed for consistency.

Internal changes
================

numpy.core math configuration revamp
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

This should make the porting to new platforms easier, and more robust. In
particular, the configuration stage does not need to execute any code on the
target platform, which is a first step toward cross-compilation.

http://projects.scipy.org/numpy/browser/trunk/doc/neps/math_config_clean.txt

umath refactor
~~~~~~~~~~~~~~

A lot of code cleanup for umath/ufunc code (charris).

Improvements to build warnings
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Numpy can now build with -W -Wall without warnings

http://projects.scipy.org/numpy/browser/trunk/doc/neps/warnfix.txt

Separate core math library
~~~~~~~~~~~~~~~~~~~~~~~~~~

The core math functions (sin, cos, etc... for basic C types) have been put
into a separate library; it acts as a compatibility layer, to support most
C99 maths functions (real only for now). The library includes
platform-specific fixes for various maths functions, so using those versions
should be more robust than using your platform functions directly. The API
for existing functions is exactly the same as the C99 math functions API;
the only difference is the npy prefix (npy_cos vs cos). The core library
will be made available to any extension in 1.4.0.

CPU arch detection
~~~~~~~~~~~~~~~~~~

npy_cpu.h defines numpy-specific CPU defines, such as NPY_CPU_X86, etc...
Those are portable across OS and toolchains, and set up when the header is
parsed, so that they can be safely used even in the case of
cross-compilation (the values are not set when numpy is built), or for
multi-arch binaries (e.g. fat binaries on Mac OS X).

npy_endian.h defines numpy-specific endianness defines, modeled on the glibc
endian.h. NPY_BYTE_ORDER is equivalent to BYTE_ORDER, and one of
NPY_LITTLE_ENDIAN or NPY_BIG_ENDIAN is defined. As for CPU archs, those are
set when the header is parsed by the compiler, and as such can be used for
cross-compilation and multi-arch binaries.
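For anyone who only needs the endianness information from Python rather than
from the C API above, there are runtime counterparts; a hedged aside (these
are standard python/numpy attributes, not new in this release):

import sys
import numpy as np

sys.byteorder               # 'little' or 'big': the interpreter's view
np.little_endian            # True on little-endian builds of numpy
np.dtype('=f8').byteorder   # '=' means native; '<' and '>' force an order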
Checksums ========= 5c6b2f02d0846317c6e7bffa39f6f828 release/installers/numpy-1.3.0rc1.zip 20cdddd69594420b0f8556bbc4a27a5a release/installers/numpy-1.3.0rc1.tar.gz f85231c4a27b39f7cb713ef22926931e release/installers/numpy-1.3.0rc1-py2.5-macosx10.5.dmg b24bb536492502611ea797d9410bb7c2 release/installers/numpy-1.3.0rc1-win32-superpack-python2.5.exe From oliphant at enthought.com Sat Mar 28 10:54:36 2009 From: oliphant at enthought.com (Travis E. Oliphant) Date: Sat, 28 Mar 2009 09:54:36 -0500 Subject: [Numpy-discussion] DVCS at PyCon Message-ID: <49CE3A2C.9000007@enthought.com> FYI from PyCon Here at PyCon, it has been said that Python will be moving towards DVCS and will be using bzr or mecurial, but explicitly *not* git. It would seem that *git* got the "lowest" score in the Developer survey that Brett Cannon did. The reasons seem to be: * git doesn't have good Windows clients * git is not written with Python I think the sample size was pretty small to be making decisions on (especially when most opinions where "un-informed"). I don't know if it matters that NumPy / SciPy use the same DVCS as Python, but it's a data-point. -Travis From rpyle at post.harvard.edu Sat Mar 28 11:10:00 2009 From: rpyle at post.harvard.edu (Robert Pyle) Date: Sat, 28 Mar 2009 11:10:00 -0400 Subject: [Numpy-discussion] [Announce] Numpy 1.3.0 rc1 In-Reply-To: <49CE2587.2050007@ar.media.kyoto-u.ac.jp> References: <49CE2587.2050007@ar.media.kyoto-u.ac.jp> Message-ID: Hi all, On Mar 28, 2009, at 9:26 AM, David Cournapeau wrote: > I am pleased to announce the release of the rc1 for numpy > 1.3.0. You can find source tarballs and installers for both Mac OS X > and Windows on the sourceforge page: > > https://sourceforge.net/projects/numpy/ > I have a PPC Mac, dual G5, running 10.5.6. The Mac OS X installer (numpy-1.3.0rc1-py2.5-macosx10.5.dmg) did not work for me. It said none of my disks were suitable for installation. The last time around, numpy-1.3.0b1-py2.5- macosx10.5.dmg persisted in installing itself into the system python rather than the Enthought distribution that I use, so I installed that version from the source tarball. This time, installing from the source tarball also went smoothly. Testing seems okay: >>> np.test() Running unit tests for numpy NumPy version 1.3.0rc1 NumPy is installed in /Library/Frameworks/Python.framework/Versions/ 4.1.30101/lib/python2.5/site-packages/numpy Python version 2.5.2 |EPD Py25 4.1.30101| (r252:60911, Dec 19 2008, 15:28:32) [GCC 4.0.1 (Apple Computer, Inc. build 5370)] nose version 0.10.3 ........................................................................ ........................................................................ ........................................................................ ........................................................................ ........................................................................ ........................................................................ ........................................................................ ................................................K...K................... ........................................................................ ........................................................................ ........................................................................ ........................................................................ ........................................................................ 
........................................................................ ........................................................................ .......................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................S.............................................................................................................................................................................................................................................................................................................................................................................................................................. ---------------------------------------------------------------------- Ran 2030 tests in 13.930s OK (KNOWNFAIL=2, SKIP=1) Bob From rpyle at post.harvard.edu Sat Mar 28 11:29:57 2009 From: rpyle at post.harvard.edu (Robert Pyle) Date: Sat, 28 Mar 2009 11:29:57 -0400 Subject: [Numpy-discussion] [Announce] Numpy 1.3.0 rc1 In-Reply-To: <49CE2587.2050007@ar.media.kyoto-u.ac.jp> References: <49CE2587.2050007@ar.media.kyoto-u.ac.jp> Message-ID: <9745069E-AB98-4B99-9749-82913B43D0D8@post.harvard.edu> Hi all, On Mar 28, 2009, at 9:26 AM, David Cournapeau wrote: > > I am pleased to announce the release of the rc1 for numpy > 1.3.0. You can find source tarballs and installers for both Mac OS X > and Windows on the sourceforge page: > > https://sourceforge.net/projects/numpy/ > On my Intel Mac (MacBook Pro), the OS X installer refused to recognize my disk as an installation target, just as it did on my dual G5 PPC. Installation from the tarball was successful, and numpy.test() was okay (KNOWNFAIL=1, SKIP=1). Bob From dineshbvadhia at hotmail.com Sat Mar 28 11:33:03 2009 From: dineshbvadhia at hotmail.com (Dinesh B Vadhia) Date: Sat, 28 Mar 2009 08:33:03 -0700 Subject: [Numpy-discussion] How to tell whether I am using 32 bitor 64bit numpy? In-Reply-To: <5b8d13220903280407v62712553sb4055ca634138cf0@mail.gmail.com> <49CE2312.3010805@ar.media.kyoto-u.ac.jp> References: <5b8d13220903280407v62712553sb4055ca634138cf0@mail.gmail.com> <49CE2312.3010805@ar.media.kyoto-u.ac.jp> Message-ID: David 1) 32-bit Numpy/Scipy with 32-bit Python on 64-bit Windows does work. But, it doesn't take advantage of memory > 2gb. 2) Happy to help out with the experimental 64-bit builds of Numpy/Scipy. But, would this be with pre-installed Windows libraries or source files as I'm not setup for dealing with source files? The machine has an Intel Core2 Quad CPU with 8gb ram. Strangely, the 64-bit Python 2.5x Intel version wouldn't install but the AMD version did. Dinesh From: David Cournapeau Sent: Saturday, March 28, 2009 6:16 AM To: Discussion of Numerical Python Subject: Re: [Numpy-discussion] How to tell whether I am using 32 bitor 64bit numpy? Dinesh B Vadhia wrote: > Uhmmm! I installed 64-bit Python (2.5x) on a Windows 64-bit Vista > machine (yes, strange but true) hoping that the 32-bit Numpy & Scipy > libraries would work but they didn't. 
That's a totally different situation: in your case, python and numpy share the same address space in one process (for all purpose, numpy is a dll for python), and you certainly can't mix 32 and 64 bits in the same process. What you can do is running 32 bits numpy/scipy for a 32 bits python on windows 64 bits... ... or helping us making numpy and scipy work on windows 64 bits by testing the experimental 64 bits builds of numpy/scipy for windows :) cheers, David -------------- next part -------------- An HTML attachment was scrubbed... URL: From cournape at gmail.com Sat Mar 28 11:52:38 2009 From: cournape at gmail.com (David Cournapeau) Date: Sun, 29 Mar 2009 00:52:38 +0900 Subject: [Numpy-discussion] DVCS at PyCon In-Reply-To: <49CE3A2C.9000007@enthought.com> References: <49CE3A2C.9000007@enthought.com> Message-ID: <5b8d13220903280852l17c578d7hadddbc40873f8c9f@mail.gmail.com> Hi Travis, On Sat, Mar 28, 2009 at 11:54 PM, Travis E. Oliphant wrote: > > FYI from PyCon > > Here at PyCon, it has been said that Python will be moving towards DVCS > and will be using bzr or mecurial, but explicitly *not* git. ? It would > seem that *git* got the "lowest" score in the Developer survey that > Brett Cannon did. It is interesting how those tools are viewed so differently in different communities. I am too quite doubtful about the validity of those surveys :) > The reasons seem to be: > > ?* git doesn't have good Windows clients Depending on what is meant by good windows client (GUI, IDE integration), it is true, but then neither do bzr or hg have good clients, so I find this statement a bit strange. What is certainly true is that git developers care much less about windows than bzr (and hg ?). For example, I would guess git will never care much about case insensitive fs, etc... (I know bzr developers worked quite a bit on this). > ?* git is not written with Python I can somewhat understand why it matters to python, but does it matter to us ? There are definitely strong arguments against git - but I don't think being written in python is a strong one. The lack of a good window support is a good argument against changing from svn, but very unconvincing compared to other tools. Git has now so much more manpower compared to hg and bzr (many more project use it: the list of visible projects using git is becoming quite impressive) - from a 3rd party POV, I think git is much better set up than bzr and hg. Gnome choosing git could be significant (they made the decision a couple of days ago). > I think the sample size was pretty small to be making decisions on > (especially when most opinions where "un-informed"). Most people just choose the one they first use. Few people know several DVCS. Pauli and me started a page about arguments pro/cons git - it is still very much work in progress: http://projects.scipy.org/numpy/wiki/GitMigrationProposal Since few people are willing to try different systems, we also started a few workflows (compared to svn): http://projects.scipy.org/numpy/wiki/GitWorkflow FWIW, I have spent some time to look at converting svn repo to git, with proper conversion of branches, tags, and other things. I have converted my own scikits to git as a first trial (I have numpy converted as well, but I did not put it anywhere to avoid confusion). This part of the problem would be relatively simple to handle. 
cheers, David From cournape at gmail.com Sat Mar 28 12:04:05 2009 From: cournape at gmail.com (David Cournapeau) Date: Sun, 29 Mar 2009 01:04:05 +0900 Subject: [Numpy-discussion] [Announce] Numpy 1.3.0 rc1 In-Reply-To: References: <49CE2587.2050007@ar.media.kyoto-u.ac.jp> Message-ID: <5b8d13220903280904j6e4fc9f7of642ab2edc4ffde2@mail.gmail.com> Hi Robert, Thanks for the report. On Sun, Mar 29, 2009 at 12:10 AM, Robert Pyle wrote: > Hi all, > > On Mar 28, 2009, at 9:26 AM, David Cournapeau wrote: >> I am pleased to announce the release of the rc1 for numpy >> 1.3.0. You can find source tarballs and installers for both Mac OS X >> and Windows on the sourceforge page: >> >> https://sourceforge.net/projects/numpy/ >> > > I have a PPC Mac, dual G5, running 10.5.6. > > The Mac OS X installer (numpy-1.3.0rc1-py2.5-macosx10.5.dmg) did not > work for me. ?It said none of my disks were suitable for > installation. Hm, strange, I have never encountered this problem. To be sure I understand, you could open/mount the .dmg, but the .pkg refuses to install ? > ?The last time around, numpy-1.3.0b1-py2.5- > macosx10.5.dmg persisted in installing itself into the system python > rather than the Enthought distribution that I use, so I installed that > version from the source tarball. I am afraid there is nothing I can do here - the installer can only work with the system python I believe (or more exactly the python version I built the package against). Maybe people more familiar with bdist_mpkg could prove me wrong ? cheers, David From cournape at gmail.com Sat Mar 28 12:11:36 2009 From: cournape at gmail.com (David Cournapeau) Date: Sun, 29 Mar 2009 01:11:36 +0900 Subject: [Numpy-discussion] How to tell whether I am using 32 bitor 64bit numpy? In-Reply-To: References: <5b8d13220903280407v62712553sb4055ca634138cf0@mail.gmail.com> <49CE2312.3010805@ar.media.kyoto-u.ac.jp> Message-ID: <5b8d13220903280911w14728e6bo426051f049e21e01@mail.gmail.com> 2009/3/29 Dinesh B Vadhia : > David > > 1)? 32-bit Numpy/Scipy with 32-bit Python on 64-bit Windows does work.? But, > it doesn't take advantage of memory > 2gb. Indeed. But running numpy 32 bits in python 64 bits is not possible - and even if it were, I guess it could not handle more than 32 bits pointers either :) > > 2)? Happy to help out with the experimental 64-bit builds of Numpy/Scipy. > But, would this be with pre-installed Windows libraries or source files as > I'm not setup for dealing with source files? Binaries. Building numpy and scipy from sources?on windows 64 bits is still a relatively epic battle I would not recommend on anyone :) >The machine has an Intel Core2 > Quad CPU with 8gb ram. The windows version matters much more than the CPU (server vs xp vs vista). I think we will only distribute binaries for python 2.6, too. > Strangely, the 64-bit Python 2.5x Intel version > wouldn't install but the AMD version did. If by Intel version you mean itanium, then it is no surprise. Itanium and amd64 are totally different CPU, and not compatible with each other. Otherwise, I am not sure what to understand what you mean by 64 bits Intel version. 
David From irving at naml.us Sat Mar 28 12:47:32 2009 From: irving at naml.us (Geoffrey Irving) Date: Sat, 28 Mar 2009 09:47:32 -0700 Subject: [Numpy-discussion] array of matrices In-Reply-To: <3d375d730903280047h2195a468i108f963453bdb78d@mail.gmail.com> References: <1238193504.12867.4.camel@pc2.cole.uklinux.net> <3d375d730903271543m23e3f6dcj39c59cd115dedfa2@mail.gmail.com> <3d375d730903280047h2195a468i108f963453bdb78d@mail.gmail.com> Message-ID: <7f9d599f0903280947p3c30614epb83b9266ae25ed6e@mail.gmail.com> On Sat, Mar 28, 2009 at 12:47 AM, Robert Kern wrote: > 2009/3/27 Charles R Harris : >> >> On Fri, Mar 27, 2009 at 4:43 PM, Robert Kern wrote: >>> >>> On Fri, Mar 27, 2009 at 17:38, Bryan Cole wrote: >>> > I have a number of arrays of shape (N,4,4). I need to perform a >>> > vectorised matrix-multiplication between pairs of them I.e. >>> > matrix-multiplication rules for the last two dimensions, usual >>> > element-wise rule for the 1st dimension (of length N). >>> > >>> > (How) is this possible with numpy? >>> >>> dot(a,b) was specifically designed for this use case. >> >> I think maybe he wants to treat them as stacked matrices. > > Oh, right. Sorry. dot(a, b) works when a is (N, 4, 4) and b is just > (4, 4). Never mind. It'd be great if this operation existed as a primitive. What do you think would be the best way in which to add it? One option would be to add a keyword argument to "dot" giving a set of axes to map over. E.g., dot(a, b, map=0) = array([dot(u,v) for u,v in zip(a,b)]) # but in C "map" isn't a very good name for the argument, though. Geoffrey From dineshbvadhia at hotmail.com Sat Mar 28 13:09:53 2009 From: dineshbvadhia at hotmail.com (Dinesh B Vadhia) Date: Sat, 28 Mar 2009 10:09:53 -0700 Subject: [Numpy-discussion] How to tell whether I am using 32 bitor64bit numpy? In-Reply-To: <5b8d13220903280407v62712553sb4055ca634138cf0@mail.gmail.com><49CE2312.3010805@ar.media.kyoto-u.ac.jp> <5b8d13220903280911w14728e6bo426051f049e21e01@mail.gmail.com> References: <5b8d13220903280407v62712553sb4055ca634138cf0@mail.gmail.com><49CE2312.3010805@ar.media.kyoto-u.ac.jp> <5b8d13220903280911w14728e6bo426051f049e21e01@mail.gmail.com> Message-ID: David: The OS is 64-bit Windows Vista Home Premium, Service Pack 1 with 8gb ram. Machine is used as a desktop development machine (not a server). Dinesh From: David Cournapeau Sent: Saturday, March 28, 2009 9:11 AM To: Discussion of Numerical Python Subject: Re: [Numpy-discussion] How to tell whether I am using 32 bitor64bit numpy? 2009/3/29 Dinesh B Vadhia : > David > > 1) 32-bit Numpy/Scipy with 32-bit Python on 64-bit Windows does work. But, > it doesn't take advantage of memory > 2gb. Indeed. But running numpy 32 bits in python 64 bits is not possible - and even if it were, I guess it could not handle more than 32 bits pointers either :) > > 2) Happy to help out with the experimental 64-bit builds of Numpy/Scipy. > But, would this be with pre-installed Windows libraries or source files as > I'm not setup for dealing with source files? Binaries. Building numpy and scipy from sources on windows 64 bits is still a relatively epic battle I would not recommend on anyone :) >The machine has an Intel Core2 > Quad CPU with 8gb ram. The windows version matters much more than the CPU (server vs xp vs vista). I think we will only distribute binaries for python 2.6, too. > Strangely, the 64-bit Python 2.5x Intel version > wouldn't install but the AMD version did. If by Intel version you mean itanium, then it is no surprise. 
Itanium and amd64 are totally different CPU, and not compatible with
each other. Otherwise, I am not sure what to understand what you mean
by 64 bits Intel version.

David
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 
From rpyle at post.harvard.edu  Sat Mar 28 13:41:29 2009
From: rpyle at post.harvard.edu (Robert Pyle)
Date: Sat, 28 Mar 2009 13:41:29 -0400
Subject: [Numpy-discussion] [Announce] Numpy 1.3.0 rc1
In-Reply-To: <5b8d13220903280904j6e4fc9f7of642ab2edc4ffde2@mail.gmail.com>
References: <49CE2587.2050007@ar.media.kyoto-u.ac.jp>
	<5b8d13220903280904j6e4fc9f7of642ab2edc4ffde2@mail.gmail.com>
Message-ID: <9EC23A11-4979-4DF7-AE6B-7C3BA61E6494@post.harvard.edu>

Hi David,

On Mar 28, 2009, at 12:04 PM, David Cournapeau wrote:

> Hi Robert,
>
> Thanks for the report.
>
> On Sun, Mar 29, 2009 at 12:10 AM, Robert Pyle
> wrote:
>> The Mac OS X installer (numpy-1.3.0rc1-py2.5-macosx10.5.dmg) did not
>> work for me. It said none of my disks were suitable for
>> installation.
>
> Hm, strange, I have never encountered this problem. To be sure I
> understand, you could open/mount the .dmg, but the .pkg refuses to
> install ?

Yes. When it gets to "Select a Destination", I would expect my boot
disk to get the green arrow as the installation target, but it (and
the other three disks) have the exclamation point in the red circle.
Same thing happened on my MacBook Pro (Intel) with its one disk.

As I noted before, however, installation from source went without
problems on both machines.

Bob

From sccolbert at gmail.com  Sat Mar 28 13:47:13 2009
From: sccolbert at gmail.com (Chris Colbert)
Date: Sat, 28 Mar 2009 13:47:13 -0400
Subject: [Numpy-discussion] Built Lapack, Atlas from source.... now
	numpy.linalg.eig() hangs at 100% CPU
In-Reply-To: <7f014ea60903272041s7f84be0ap413612b509780a91@mail.gmail.com>
References: <7f014ea60903270725g61dbaaadj3b6e70582a4f8cd5@mail.gmail.com>
	<49CCF940.2050507@ar.media.kyoto-u.ac.jp>
	<7f014ea60903270926v3aad3d4cr878dcc49cbe712cd@mail.gmail.com>
	<5b8d13220903270943w5305d33eg831c00575be577a4@mail.gmail.com>
	<7f014ea60903270947r6f11d0a9r84adf9f10c7d6fac@mail.gmail.com>
	<49CD00A7.1050807@ar.media.kyoto-u.ac.jp>
	<7f014ea60903270957i7b99d87ak6150dc266eb86a58@mail.gmail.com>
	<7f014ea60903271009x549ead9ev11102d731801e228@mail.gmail.com>
	<7f014ea60903271932p9b8ece7o2b73e8e5e6e2db5c@mail.gmail.com>
	<7f014ea60903272041s7f84be0ap413612b509780a91@mail.gmail.com>
Message-ID: <7f014ea60903281047k55f8c9cflbfbd64b975aa64cc@mail.gmail.com>

going back and looking at this error:

C compiler: gcc -pthread -fno-strict-aliasing -DNDEBUG -g -fwrapv -O2 -Wall
-Wstrict-prototypes -fPIC

compile options: '-c'
gcc: _configtest.c
gcc -pthread _configtest.o -L/usr/local/atlas/lib -llapack -lptf77blas
-lptcblas -latlas -o _configtest
/usr/bin/ld: _configtest: hidden symbol `__powidf2' in
/usr/lib/gcc/i486-linux-gnu/4.3.2/libgcc.a(_powidf2.o) is referenced by DSO
/usr/bin/ld: final link failed: Nonrepresentable section on output
collect2: ld returned 1 exit status
/usr/bin/ld: _configtest: hidden symbol `__powidf2' in
/usr/lib/gcc/i486-linux-gnu/4.3.2/libgcc.a(_powidf2.o) is referenced by DSO
/usr/bin/ld: final link failed: Nonrepresentable section on output
collect2: ld returned 1 exit status
failure.
removing: _configtest.c _configtest.o

Isn't that saying that _configtest.o is referencing something in libgcc
which is not linked to in the compile command?

Is this something I can add to the numpy setup script?

This problem is really beating me down.
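(For what it's worth, that particular hidden-symbol error usually means
the shared library being linked against was itself produced by invoking
ld directly, so the compiler's runtime helpers - __powidf2 among them -
never got pulled in. A rough, untested sketch of the usual workaround is
to rebuild the .so through the gcc driver instead:

    gcc -shared -o libatlas.so -Wl,--whole-archive libatlas.a \
        -Wl,--no-whole-archive -lgfortran -lm

where the library names are only placeholders for whatever the ATLAS
build actually produced, and the archive's objects must have been
compiled with -fPIC; going through gcc lets it add libgcc and friends
automatically.)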
I've gone back and re-made the atlas .so like 15 times linking with libgcc in 15 different ways. all to no avail.... Thanks again for any help! Chris -------------- next part -------------- An HTML attachment was scrubbed... URL: From cournape at gmail.com Sat Mar 28 14:13:29 2009 From: cournape at gmail.com (David Cournapeau) Date: Sun, 29 Mar 2009 03:13:29 +0900 Subject: [Numpy-discussion] Built Lapack, Atlas from source.... now numpy.linalg.eig() hangs at 100% CPU In-Reply-To: <7f014ea60903281047k55f8c9cflbfbd64b975aa64cc@mail.gmail.com> References: <7f014ea60903270725g61dbaaadj3b6e70582a4f8cd5@mail.gmail.com> <7f014ea60903270926v3aad3d4cr878dcc49cbe712cd@mail.gmail.com> <5b8d13220903270943w5305d33eg831c00575be577a4@mail.gmail.com> <7f014ea60903270947r6f11d0a9r84adf9f10c7d6fac@mail.gmail.com> <49CD00A7.1050807@ar.media.kyoto-u.ac.jp> <7f014ea60903270957i7b99d87ak6150dc266eb86a58@mail.gmail.com> <7f014ea60903271009x549ead9ev11102d731801e228@mail.gmail.com> <7f014ea60903271932p9b8ece7o2b73e8e5e6e2db5c@mail.gmail.com> <7f014ea60903272041s7f84be0ap413612b509780a91@mail.gmail.com> <7f014ea60903281047k55f8c9cflbfbd64b975aa64cc@mail.gmail.com> Message-ID: <5b8d13220903281113v33021ef0yd38737cd8e8c1760@mail.gmail.com> 2009/3/29 Chris Colbert : > going back and looking at this error: > > C compiler: gcc -pthread -fno-strict-aliasing -DNDEBUG -g -fwrapv -O2 -Wall > -Wstrict-prototypes -fPIC > > compile options: '-c' > gcc: _configtest.c > gcc -pthread _configtest.o -L/usr/local/atlas/lib -llapack -lptf77blas > -lptcblas -latlas -o _configtest > /usr/bin/ld: _configtest: hidden symbol `__powidf2' in > /usr/lib/gcc/i486-linux-gnu/4.3.2/libgcc.a(_powidf2.o) is referenced by DSO > /usr/bin/ld: final link failed: Nonrepresentable section on output > collect2: ld returned 1 exit status > /usr/bin/ld: _configtest: hidden symbol `__powidf2' in > /usr/lib/gcc/i486-linux-gnu/4.3.2/libgcc.a(_powidf2.o) is referenced by DSO > /usr/bin/ld: final link failed: Nonrepresentable section on output > collect2: ld returned 1 exit status > failure. > removing: _configtest.c _configtest.o > > > isnt that saying that _configtest.o is referencing something in libgcc which > is not lined to in the compile command? > > Is this something I can add to the numpy setup script? > > This problem is really beating me down. I've gone back and re-made the atlas > .so like 15 times linking with libgcc in 15 different ways. all to no > avail.... The way to build shared libraries in atlas does not always work, and some auto-detected settings are often the wrong ones. There is unfortunately not much we can do to help - and understanding the exact problem may be quite difficult if you are not familiar with various build issues. David From robince at gmail.com Sat Mar 28 14:30:17 2009 From: robince at gmail.com (Robin) Date: Sat, 28 Mar 2009 18:30:17 +0000 Subject: [Numpy-discussion] Built Lapack, Atlas from source.... 
now numpy.linalg.eig() hangs at 100% CPU In-Reply-To: <7f014ea60903272041s7f84be0ap413612b509780a91@mail.gmail.com> References: <7f014ea60903270725g61dbaaadj3b6e70582a4f8cd5@mail.gmail.com> <49CCF940.2050507@ar.media.kyoto-u.ac.jp> <7f014ea60903270926v3aad3d4cr878dcc49cbe712cd@mail.gmail.com> <5b8d13220903270943w5305d33eg831c00575be577a4@mail.gmail.com> <7f014ea60903270947r6f11d0a9r84adf9f10c7d6fac@mail.gmail.com> <49CD00A7.1050807@ar.media.kyoto-u.ac.jp> <7f014ea60903270957i7b99d87ak6150dc266eb86a58@mail.gmail.com> <7f014ea60903271009x549ead9ev11102d731801e228@mail.gmail.com> <7f014ea60903271932p9b8ece7o2b73e8e5e6e2db5c@mail.gmail.com> <7f014ea60903272041s7f84be0ap413612b509780a91@mail.gmail.com> Message-ID: 2009/3/28 Chris Colbert : > Alright, building numpy against atlas from the repositories works, but this > atlas only contains the single threaded libraries. So i would like to get my > build working completely. It doesn't help at all with your problem - but I thought I'd point out there are other ways to exploit multicore machines than using threaded ATLAS (if that is your goal). For example, I use single threaded libraries and control parallel execution myself using multiprocessing module (this is easier for simple batch jobs, but might not be appropriate for your case). There is some information about this on the wiki: http://scipy.org/ParallelProgramming Cheers Robin From sccolbert at gmail.com Sat Mar 28 14:35:08 2009 From: sccolbert at gmail.com (Chris Colbert) Date: Sat, 28 Mar 2009 14:35:08 -0400 Subject: [Numpy-discussion] Built Lapack, Atlas from source.... now numpy.linalg.eig() hangs at 100% CPU In-Reply-To: References: <7f014ea60903270725g61dbaaadj3b6e70582a4f8cd5@mail.gmail.com> <7f014ea60903270926v3aad3d4cr878dcc49cbe712cd@mail.gmail.com> <5b8d13220903270943w5305d33eg831c00575be577a4@mail.gmail.com> <7f014ea60903270947r6f11d0a9r84adf9f10c7d6fac@mail.gmail.com> <49CD00A7.1050807@ar.media.kyoto-u.ac.jp> <7f014ea60903270957i7b99d87ak6150dc266eb86a58@mail.gmail.com> <7f014ea60903271009x549ead9ev11102d731801e228@mail.gmail.com> <7f014ea60903271932p9b8ece7o2b73e8e5e6e2db5c@mail.gmail.com> <7f014ea60903272041s7f84be0ap413612b509780a91@mail.gmail.com> Message-ID: <7f014ea60903281135n25cb076dt91358f5ebf49f546@mail.gmail.com> Robin, Thanks. I need to get the backport for multiprocessing on 2.5. But now, it's more of a matter of not wanting to admit defeat.... Cheers, Chris On Sat, Mar 28, 2009 at 2:30 PM, Robin wrote: > 2009/3/28 Chris Colbert : > > Alright, building numpy against atlas from the repositories works, but > this > > atlas only contains the single threaded libraries. So i would like to get > my > > build working completely. > > It doesn't help at all with your problem - but I thought I'd point out > there are other ways to exploit multicore machines than using threaded > ATLAS (if that is your goal). > > For example, I use single threaded libraries and control parallel > execution myself using multiprocessing module (this is easier for > simple batch jobs, but might not be appropriate for your case). > > There is some information about this on the wiki: > http://scipy.org/ParallelProgramming > > Cheers > > Robin > _______________________________________________ > Numpy-discussion mailing list > Numpy-discussion at scipy.org > http://mail.scipy.org/mailman/listinfo/numpy-discussion > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From sccolbert at gmail.com Sat Mar 28 14:42:01 2009 From: sccolbert at gmail.com (Chris Colbert) Date: Sat, 28 Mar 2009 14:42:01 -0400 Subject: [Numpy-discussion] Built Lapack, Atlas from source.... now numpy.linalg.eig() hangs at 100% CPU In-Reply-To: <7f014ea60903281135n25cb076dt91358f5ebf49f546@mail.gmail.com> References: <7f014ea60903270725g61dbaaadj3b6e70582a4f8cd5@mail.gmail.com> <5b8d13220903270943w5305d33eg831c00575be577a4@mail.gmail.com> <7f014ea60903270947r6f11d0a9r84adf9f10c7d6fac@mail.gmail.com> <49CD00A7.1050807@ar.media.kyoto-u.ac.jp> <7f014ea60903270957i7b99d87ak6150dc266eb86a58@mail.gmail.com> <7f014ea60903271009x549ead9ev11102d731801e228@mail.gmail.com> <7f014ea60903271932p9b8ece7o2b73e8e5e6e2db5c@mail.gmail.com> <7f014ea60903272041s7f84be0ap413612b509780a91@mail.gmail.com> <7f014ea60903281135n25cb076dt91358f5ebf49f546@mail.gmail.com> Message-ID: <7f014ea60903281142k5da06a24tbc1eaf18b6845be6@mail.gmail.com> alright, so i solved the linking error by building numpy against the static atlas libraries instead of .so's. But my original problem persists. Some functions work properly, buy numpy.linalg.eig() still hangs. the build log is attached. Chris -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: build.log.zip Type: application/zip Size: 6050 bytes Desc: not available URL: From sccolbert at gmail.com Sat Mar 28 15:02:31 2009 From: sccolbert at gmail.com (Chris Colbert) Date: Sat, 28 Mar 2009 15:02:31 -0400 Subject: [Numpy-discussion] Built Lapack, Atlas from source.... now numpy.linalg.eig() hangs at 100% CPU In-Reply-To: <7f014ea60903281142k5da06a24tbc1eaf18b6845be6@mail.gmail.com> References: <7f014ea60903270725g61dbaaadj3b6e70582a4f8cd5@mail.gmail.com> <7f014ea60903270947r6f11d0a9r84adf9f10c7d6fac@mail.gmail.com> <49CD00A7.1050807@ar.media.kyoto-u.ac.jp> <7f014ea60903270957i7b99d87ak6150dc266eb86a58@mail.gmail.com> <7f014ea60903271009x549ead9ev11102d731801e228@mail.gmail.com> <7f014ea60903271932p9b8ece7o2b73e8e5e6e2db5c@mail.gmail.com> <7f014ea60903272041s7f84be0ap413612b509780a91@mail.gmail.com> <7f014ea60903281135n25cb076dt91358f5ebf49f546@mail.gmail.com> <7f014ea60903281142k5da06a24tbc1eaf18b6845be6@mail.gmail.com> Message-ID: <7f014ea60903281202r44eeee01l7576d7ccf8c557ce@mail.gmail.com> this is really, really, half of numpy.linalg works, the other half doesn't. working functions: cholesky det inv norm solve non-working functions (they all hang at 100% cpu): eig eigh eigvals eigvalsh pinv lstsq svd I must be a total n00b to be the only person running into this problem :) Cheers, Chris On Sat, Mar 28, 2009 at 2:42 PM, Chris Colbert wrote: > alright, > > so i solved the linking error by building numpy against the static atlas > libraries instead of .so's. > > But my original problem persists. Some functions work properly, buy > numpy.linalg.eig() still hangs. > > the build log is attached. > > > Chris > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From charlesr.harris at gmail.com Sat Mar 28 15:30:54 2009 From: charlesr.harris at gmail.com (Charles R Harris) Date: Sat, 28 Mar 2009 13:30:54 -0600 Subject: [Numpy-discussion] Built Lapack, Atlas from source.... 
now numpy.linalg.eig() hangs at 100% CPU In-Reply-To: <7f014ea60903281202r44eeee01l7576d7ccf8c557ce@mail.gmail.com> References: <7f014ea60903270725g61dbaaadj3b6e70582a4f8cd5@mail.gmail.com> <49CD00A7.1050807@ar.media.kyoto-u.ac.jp> <7f014ea60903270957i7b99d87ak6150dc266eb86a58@mail.gmail.com> <7f014ea60903271009x549ead9ev11102d731801e228@mail.gmail.com> <7f014ea60903271932p9b8ece7o2b73e8e5e6e2db5c@mail.gmail.com> <7f014ea60903272041s7f84be0ap413612b509780a91@mail.gmail.com> <7f014ea60903281135n25cb076dt91358f5ebf49f546@mail.gmail.com> <7f014ea60903281142k5da06a24tbc1eaf18b6845be6@mail.gmail.com> <7f014ea60903281202r44eeee01l7576d7ccf8c557ce@mail.gmail.com> Message-ID: 2009/3/28 Chris Colbert > this is really, really, half of numpy.linalg works, the other half doesn't. > > > working functions: > > cholesky > det > inv > norm > solve > > non-working functions (they all hang at 100% cpu): > > eig > eigh > eigvals > eigvalsh > pinv > lstsq > svd > > > > I must be a total n00b to be the only person running into this problem :) > > Cheers, > What does your lapack make.inc file look like? Here is what I used on 64 bit Hardy back when. #################################################################### # LAPACK make include file. # # LAPACK, Version 3.1.1 # # February 2007 # #################################################################### # # See the INSTALL/ directory for more examples. # SHELL = /bin/sh # # The machine (platform) identifier to append to the library names # PLAT = _LINUX # # Modify the FORTRAN and OPTS definitions to refer to the # compiler and desired compiler options for your machine. NOOPT # refers to the compiler options desired when NO OPTIMIZATION is # selected. Define LOADER and LOADOPTS to refer to the loader and # desired load options for your machine. # FORTRAN = gfortran OPTS = -funroll-all-loops -O3 -fPIC DRVOPTS = $(OPTS) NOOPT = -fPIC LOADER = gfortran LOADOPTS = # # Timer for the SECOND and DSECND routines # # Default : SECOND and DSECND will use a call to the EXTERNAL FUNCTION ETIME # TIMER = EXT_ETIME # For RS6K : SECOND and DSECND will use a call to the EXTERNAL FUNCTION ETIME_ # TIMER = EXT_ETIME_ # For gfortran compiler: SECOND and DSECND will use a call to the INTERNAL FUNCTION ETIME TIMER = INT_ETIME # If your Fortran compiler does not provide etime (like Nag Fortran Compiler, etc...) # SECOND and DSECND will use a call to the INTERNAL FUNCTION CPU_TIME # TIMER = INT_CPU_TIME # If neither of this works...you can use the NONE value... In that case, SECOND and DSECND will always return 0 # TIMER = NONE # # The archiver and the flag(s) to use when building archive (library) # If you system has no ranlib, set RANLIB = echo. # ARCH = ar ARCHFLAGS= cr RANLIB = ranlib # # The location of the libraries to which you will link. (The # machine-specific, optimized BLAS library should be used whenever # possible.) # BLASLIB = ../../blas$(PLAT).a LAPACKLIB = lapack$(PLAT).a TMGLIB = tmglib$(PLAT).a EIGSRCLIB = eigsrc$(PLAT).a LINSRCLIB = linsrc$(PLAT).a Chuck -------------- next part -------------- An HTML attachment was scrubbed... URL: From sccolbert at gmail.com Sat Mar 28 15:38:09 2009 From: sccolbert at gmail.com (Chris Colbert) Date: Sat, 28 Mar 2009 15:38:09 -0400 Subject: [Numpy-discussion] Built Lapack, Atlas from source.... 
now numpy.linalg.eig() hangs at 100% CPU In-Reply-To: References: <7f014ea60903270725g61dbaaadj3b6e70582a4f8cd5@mail.gmail.com> <7f014ea60903270957i7b99d87ak6150dc266eb86a58@mail.gmail.com> <7f014ea60903271009x549ead9ev11102d731801e228@mail.gmail.com> <7f014ea60903271932p9b8ece7o2b73e8e5e6e2db5c@mail.gmail.com> <7f014ea60903272041s7f84be0ap413612b509780a91@mail.gmail.com> <7f014ea60903281135n25cb076dt91358f5ebf49f546@mail.gmail.com> <7f014ea60903281142k5da06a24tbc1eaf18b6845be6@mail.gmail.com> <7f014ea60903281202r44eeee01l7576d7ccf8c557ce@mail.gmail.com> Message-ID: <7f014ea60903281238s79251f68v424633adc0b66098@mail.gmail.com> here it is: 32 bit Intrepid #################################################################### # LAPACK make include file. # # LAPACK, Version 3.1.1 # # February 2007 # #################################################################### # SHELL = /bin/sh # # The machine (platform) identifier to append to the library names # PLAT = _LINUX # # Modify the FORTRAN and OPTS definitions to refer to the # compiler and desired compiler options for your machine. NOOPT # refers to the compiler options desired when NO OPTIMIZATION is # selected. Define LOADER and LOADOPTS to refer to the loader and # desired load options for your machine. # FORTRAN = gfortran OPTS = -O2 -fPIC DRVOPTS = $(OPTS) NOOPT = -O0 -fPIC LOADER = gfortran LOADOPTS = # # Timer for the SECOND and DSECND routines # # Default : SECOND and DSECND will use a call to the EXTERNAL FUNCTION ETIME #TIMER = EXT_ETIME # For RS6K : SECOND and DSECND will use a call to the EXTERNAL FUNCTION ETIME_ # TIMER = EXT_ETIME_ # For gfortran compiler: SECOND and DSECND will use a call to the INTERNAL FUNCTION ETIME TIMER = INT_ETIME # If your Fortran compiler does not provide etime (like Nag Fortran Compiler, etc...) # SECOND and DSECND will use a call to the INTERNAL FUNCTION CPU_TIME # TIMER = INT_CPU_TIME # If neither of this works...you can use the NONE value... In that case, SECOND and DSECND will always return 0 # TIMER = NONE # # The archiver and the flag(s) to use when building archive (library) # If you system has no ranlib, set RANLIB = echo. # ARCH = ar ARCHFLAGS= cr RANLIB = ranlib # # The location of the libraries to which you will link. (The # machine-specific, optimized BLAS library should be used whenever # possible.) # BLASLIB = ../../blas$(PLAT).a LAPACKLIB = lapack$(PLAT).a TMGLIB = tmglib$(PLAT).a EIGSRCLIB = eigsrc$(PLAT).a LINSRCLIB = linsrc$(PLAT).a -------------- next part -------------- An HTML attachment was scrubbed... URL: From sccolbert at gmail.com Sat Mar 28 15:40:23 2009 From: sccolbert at gmail.com (Chris Colbert) Date: Sat, 28 Mar 2009 15:40:23 -0400 Subject: [Numpy-discussion] Built Lapack, Atlas from source.... now numpy.linalg.eig() hangs at 100% CPU In-Reply-To: <7f014ea60903281238s79251f68v424633adc0b66098@mail.gmail.com> References: <7f014ea60903270725g61dbaaadj3b6e70582a4f8cd5@mail.gmail.com> <7f014ea60903271009x549ead9ev11102d731801e228@mail.gmail.com> <7f014ea60903271932p9b8ece7o2b73e8e5e6e2db5c@mail.gmail.com> <7f014ea60903272041s7f84be0ap413612b509780a91@mail.gmail.com> <7f014ea60903281135n25cb076dt91358f5ebf49f546@mail.gmail.com> <7f014ea60903281142k5da06a24tbc1eaf18b6845be6@mail.gmail.com> <7f014ea60903281202r44eeee01l7576d7ccf8c557ce@mail.gmail.com> <7f014ea60903281238s79251f68v424633adc0b66098@mail.gmail.com> Message-ID: <7f014ea60903281240i3a1e61a9p3993597dc3b3677@mail.gmail.com> I notice my OPTS and NOOPTS are different than yours. 
(I went off the scipy.org install guide.)

Do you think that's the issue?

Cheers,

Chris
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 
From charlesr.harris at gmail.com  Sat Mar 28 15:52:31 2009
From: charlesr.harris at gmail.com (Charles R Harris)
Date: Sat, 28 Mar 2009 13:52:31 -0600
Subject: [Numpy-discussion] Built Lapack, Atlas from source.... now
	numpy.linalg.eig() hangs at 100% CPU
In-Reply-To: <7f014ea60903281240i3a1e61a9p3993597dc3b3677@mail.gmail.com>
References: <7f014ea60903270725g61dbaaadj3b6e70582a4f8cd5@mail.gmail.com>
	<7f014ea60903271932p9b8ece7o2b73e8e5e6e2db5c@mail.gmail.com>
	<7f014ea60903272041s7f84be0ap413612b509780a91@mail.gmail.com>
	<7f014ea60903281135n25cb076dt91358f5ebf49f546@mail.gmail.com>
	<7f014ea60903281142k5da06a24tbc1eaf18b6845be6@mail.gmail.com>
	<7f014ea60903281202r44eeee01l7576d7ccf8c557ce@mail.gmail.com>
	<7f014ea60903281238s79251f68v424633adc0b66098@mail.gmail.com>
	<7f014ea60903281240i3a1e61a9p3993597dc3b3677@mail.gmail.com>
Message-ID: 

2009/3/28 Chris Colbert 

> I notice my OPTS and NOOPTS are different than yours. (I went off the
> scipy.org install guide.)
>
> Do you think that's the issue?
>

Probably not, but my experience is limited. IIRC, I also had to get the
command line for building ATLAS just right and build LAPACK separately
instead of having ATLAS do it. It took several tries and much poring through
the ATLAS instructions.

Something else to check is if you have another LAPACK/ATLAS sitting around
somewhere.

Chuck
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 
From sccolbert at gmail.com  Sat Mar 28 15:58:08 2009
From: sccolbert at gmail.com (Chris Colbert)
Date: Sat, 28 Mar 2009 15:58:08 -0400
Subject: [Numpy-discussion] Built Lapack, Atlas from source.... now
	numpy.linalg.eig() hangs at 100% CPU
In-Reply-To: 
References: <7f014ea60903270725g61dbaaadj3b6e70582a4f8cd5@mail.gmail.com>
	<7f014ea60903272041s7f84be0ap413612b509780a91@mail.gmail.com>
	<7f014ea60903281135n25cb076dt91358f5ebf49f546@mail.gmail.com>
	<7f014ea60903281142k5da06a24tbc1eaf18b6845be6@mail.gmail.com>
	<7f014ea60903281202r44eeee01l7576d7ccf8c557ce@mail.gmail.com>
	<7f014ea60903281238s79251f68v424633adc0b66098@mail.gmail.com>
	<7f014ea60903281240i3a1e61a9p3993597dc3b3677@mail.gmail.com>
Message-ID: <7f014ea60903281258j5bca7d8fibfbd427f436aa7fe@mail.gmail.com>

I just ran a dummy config on ATLAS and it's giving me different OPTS and
NOOPTS flags than the scipy tutorial, so I'm going to try those and report
back.

Chris

2009/3/28 Charles R Harris 

>
>
> 2009/3/28 Chris Colbert 
>
>> I notice my OPTS and NOOPTS are different than yours. (I went off the
>> scipy.org install guide.)
>>
>> Do you think that's the issue?
>>
>
> Probably not, but my experience is limited. IIRC, I also had to get the
> command line for building ATLAS just right and build LAPACK separately
> instead of having ATLAS do it. It took several tries and much poring through
> the ATLAS instructions.
>
> Something else to check is if you have another LAPACK/ATLAS sitting around
> somewhere.
>
> Chuck
>
>
>
> _______________________________________________
> Numpy-discussion mailing list
> Numpy-discussion at scipy.org
> http://mail.scipy.org/mailman/listinfo/numpy-discussion
>
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 
From charlesr.harris at gmail.com  Sat Mar 28 16:24:40 2009
From: charlesr.harris at gmail.com (Charles R Harris)
Date: Sat, 28 Mar 2009 14:24:40 -0600
Subject: [Numpy-discussion] Built Lapack, Atlas from source....
now numpy.linalg.eig() hangs at 100% CPU In-Reply-To: <7f014ea60903281258j5bca7d8fibfbd427f436aa7fe@mail.gmail.com> References: <7f014ea60903270725g61dbaaadj3b6e70582a4f8cd5@mail.gmail.com> <7f014ea60903281135n25cb076dt91358f5ebf49f546@mail.gmail.com> <7f014ea60903281142k5da06a24tbc1eaf18b6845be6@mail.gmail.com> <7f014ea60903281202r44eeee01l7576d7ccf8c557ce@mail.gmail.com> <7f014ea60903281238s79251f68v424633adc0b66098@mail.gmail.com> <7f014ea60903281240i3a1e61a9p3993597dc3b3677@mail.gmail.com> <7f014ea60903281258j5bca7d8fibfbd427f436aa7fe@mail.gmail.com> Message-ID: 2009/3/28 Chris Colbert > i just ran a dummy config on atlas and its giving me different OPTS and > NOOPTS flags than the scipy tutorial. so im gonna try that and report back > I think that I also had to explicitly specify the bit size flag on the ATLAS command line during various builds, -b32/64 or something like that... Chuck -------------- next part -------------- An HTML attachment was scrubbed... URL: From sccolbert at gmail.com Sat Mar 28 16:27:47 2009 From: sccolbert at gmail.com (Chris Colbert) Date: Sat, 28 Mar 2009 16:27:47 -0400 Subject: [Numpy-discussion] Built Lapack, Atlas from source.... now numpy.linalg.eig() hangs at 100% CPU In-Reply-To: References: <7f014ea60903270725g61dbaaadj3b6e70582a4f8cd5@mail.gmail.com> <7f014ea60903281135n25cb076dt91358f5ebf49f546@mail.gmail.com> <7f014ea60903281142k5da06a24tbc1eaf18b6845be6@mail.gmail.com> <7f014ea60903281202r44eeee01l7576d7ccf8c557ce@mail.gmail.com> <7f014ea60903281238s79251f68v424633adc0b66098@mail.gmail.com> <7f014ea60903281240i3a1e61a9p3993597dc3b3677@mail.gmail.com> <7f014ea60903281258j5bca7d8fibfbd427f436aa7fe@mail.gmail.com> Message-ID: <7f014ea60903281327l56bbe544qf7ae37fb8b54c857@mail.gmail.com> yeah, I set -b 32 on atlas... the bogus atlas config was telling me to set OPTS = O -fPIC -m32 and NOPTS = O -fPIC -m32, this caused the make process of lapack to hang. So i set OPTS = O2 -fPIC -m32 and NOPTS = O0 -fPIC -m32. Which is the same as all of my first attempts except for the presence of -m32. so maybe specifying bit size here is needed too. Chris 2009/3/28 Charles R Harris > > > 2009/3/28 Chris Colbert > >> i just ran a dummy config on atlas and its giving me different OPTS and >> NOOPTS flags than the scipy tutorial. so im gonna try that and report back >> > > I think that I also had to explicitly specify the bit size flag on the > ATLAS command line during various builds, -b32/64 or something like that... > > Chuck > > > > _______________________________________________ > Numpy-discussion mailing list > Numpy-discussion at scipy.org > http://mail.scipy.org/mailman/listinfo/numpy-discussion > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From dwf at cs.toronto.edu Sat Mar 28 16:40:29 2009 From: dwf at cs.toronto.edu (David Warde-Farley) Date: Sat, 28 Mar 2009 16:40:29 -0400 Subject: [Numpy-discussion] Is it ok to include GPL scripts in the numpy *repository* ? In-Reply-To: <49CCBC07.6000607@american.edu> References: <49CCAEFC.7050901@ar.media.kyoto-u.ac.jp> <49CCBC07.6000607@american.edu> Message-ID: <49CE8B3D.9030309@cs.toronto.edu> Alan G Isaac wrote: > On 3/27/2009 6:48 AM David Cournapeau apparently wrote: >> To build the numpy .dmg mac os x installer, I use a script from the >> adium project, which uses applescript and some mac os x black magic. The >> script seems to be GPL, as adium itself: > > > It might be worth a query to see if the > author would release just this script > under the modified BSD license. 
> http://trac.adiumx.com/wiki/ContactUs Just FYI (since this seems to have been resolved), I know that most of the Adium team are sympathetic to BSD licensing as it makes their professional lives less complicated. Many also work on Growl ( http://growl.info ) which is BSD-licensed. The main reason for the GPL in Adium is the dependency (for now) on libpurple, which infects their whole codebase. So, in the (doubbtful) event that some of Adium's code might prove useful to NumPy/SciPy, it probably *is* worth asking the developers about relicensing bits and pieces. David From sccolbert at gmail.com Sat Mar 28 17:17:35 2009 From: sccolbert at gmail.com (Chris Colbert) Date: Sat, 28 Mar 2009 17:17:35 -0400 Subject: [Numpy-discussion] Built Lapack, Atlas from source.... now numpy.linalg.eig() hangs at 100% CPU In-Reply-To: <7f014ea60903281327l56bbe544qf7ae37fb8b54c857@mail.gmail.com> References: <7f014ea60903270725g61dbaaadj3b6e70582a4f8cd5@mail.gmail.com> <7f014ea60903281142k5da06a24tbc1eaf18b6845be6@mail.gmail.com> <7f014ea60903281202r44eeee01l7576d7ccf8c557ce@mail.gmail.com> <7f014ea60903281238s79251f68v424633adc0b66098@mail.gmail.com> <7f014ea60903281240i3a1e61a9p3993597dc3b3677@mail.gmail.com> <7f014ea60903281258j5bca7d8fibfbd427f436aa7fe@mail.gmail.com> <7f014ea60903281327l56bbe544qf7ae37fb8b54c857@mail.gmail.com> Message-ID: <7f014ea60903281417i79e9d90blb85f05d933b00d75@mail.gmail.com> YES! YES! YES! YES! HAHAHAHA! YES! using these flags in make.inc to build lapack 1.3.1 worked: OPTS = O2 -fPIC -m32 NOPTS = O2 -fPIC -m32 then build atlas as normal and build numpy against the static atlas libraries (building against the .so's created by atlas causes a linking error in numpy build.log. Numpy will still work, but who knows what function may be broken). Now, off to build numpy 1.3.0rc1 Thanks for all the help gents! Chris On Sat, Mar 28, 2009 at 4:27 PM, Chris Colbert wrote: > yeah, I set -b 32 on atlas... > > the bogus atlas config was telling me to set OPTS = O -fPIC -m32 and NOPTS > = O -fPIC -m32, this caused the make process of lapack to hang. > > So i set OPTS = O2 -fPIC -m32 and NOPTS = O0 -fPIC -m32. Which is the same > as all of my first attempts except for the presence of -m32. so maybe > specifying bit size here is needed too. > > Chris > > 2009/3/28 Charles R Harris > >> >> >> 2009/3/28 Chris Colbert >> >>> i just ran a dummy config on atlas and its giving me different OPTS and >>> NOOPTS flags than the scipy tutorial. so im gonna try that and report back >>> >> >> I think that I also had to explicitly specify the bit size flag on the >> ATLAS command line during various builds, -b32/64 or something like that... >> >> Chuck >> >> >> >> _______________________________________________ >> Numpy-discussion mailing list >> Numpy-discussion at scipy.org >> http://mail.scipy.org/mailman/listinfo/numpy-discussion >> >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From charlesr.harris at gmail.com Sat Mar 28 17:34:31 2009 From: charlesr.harris at gmail.com (Charles R Harris) Date: Sat, 28 Mar 2009 15:34:31 -0600 Subject: [Numpy-discussion] Built Lapack, Atlas from source.... 
now numpy.linalg.eig() hangs at 100% CPU In-Reply-To: <7f014ea60903281417i79e9d90blb85f05d933b00d75@mail.gmail.com> References: <7f014ea60903270725g61dbaaadj3b6e70582a4f8cd5@mail.gmail.com> <7f014ea60903281202r44eeee01l7576d7ccf8c557ce@mail.gmail.com> <7f014ea60903281238s79251f68v424633adc0b66098@mail.gmail.com> <7f014ea60903281240i3a1e61a9p3993597dc3b3677@mail.gmail.com> <7f014ea60903281258j5bca7d8fibfbd427f436aa7fe@mail.gmail.com> <7f014ea60903281327l56bbe544qf7ae37fb8b54c857@mail.gmail.com> <7f014ea60903281417i79e9d90blb85f05d933b00d75@mail.gmail.com> Message-ID: 2009/3/28 Chris Colbert > YES! YES! YES! YES! HAHAHAHA! YES! > > using these flags in make.inc to build lapack 1.3.1 worked: > > OPTS = O2 -fPIC -m32 > NOPTS = O2 -fPIC -m32 > > then build atlas as normal and build numpy against the static atlas > libraries (building against the .so's created by atlas causes a linking > error in numpy build.log. Numpy will still work, but who knows what > function may be broken). > > Now, off to build numpy 1.3.0rc1 > > Thanks for all the help gents! > You might need to run ldconfig to get the dynamic linking working. Chuck -------------- next part -------------- An HTML attachment was scrubbed... URL: From charlesr.harris at gmail.com Sat Mar 28 17:37:41 2009 From: charlesr.harris at gmail.com (Charles R Harris) Date: Sat, 28 Mar 2009 15:37:41 -0600 Subject: [Numpy-discussion] Built Lapack, Atlas from source.... now numpy.linalg.eig() hangs at 100% CPU In-Reply-To: References: <7f014ea60903270725g61dbaaadj3b6e70582a4f8cd5@mail.gmail.com> <7f014ea60903281238s79251f68v424633adc0b66098@mail.gmail.com> <7f014ea60903281240i3a1e61a9p3993597dc3b3677@mail.gmail.com> <7f014ea60903281258j5bca7d8fibfbd427f436aa7fe@mail.gmail.com> <7f014ea60903281327l56bbe544qf7ae37fb8b54c857@mail.gmail.com> <7f014ea60903281417i79e9d90blb85f05d933b00d75@mail.gmail.com> Message-ID: On Sat, Mar 28, 2009 at 3:34 PM, Charles R Harris wrote: > > > 2009/3/28 Chris Colbert > >> YES! YES! YES! YES! HAHAHAHA! YES! >> >> using these flags in make.inc to build lapack 1.3.1 worked: >> >> OPTS = O2 -fPIC -m32 >> NOPTS = O2 -fPIC -m32 >> >> then build atlas as normal and build numpy against the static atlas >> libraries (building against the .so's created by atlas causes a linking >> error in numpy build.log. Numpy will still work, but who knows what >> function may be broken). >> >> Now, off to build numpy 1.3.0rc1 >> >> Thanks for all the help gents! >> > > You might need to run ldconfig to get the dynamic linking working. > Oh, and the *.so libs don't install automagically, I had to explicitly copy them into the library. Chuck -------------- next part -------------- An HTML attachment was scrubbed... URL: From sccolbert at gmail.com Sat Mar 28 18:09:47 2009 From: sccolbert at gmail.com (Chris Colbert) Date: Sat, 28 Mar 2009 18:09:47 -0400 Subject: [Numpy-discussion] Built Lapack, Atlas from source.... now numpy.linalg.eig() hangs at 100% CPU In-Reply-To: References: <7f014ea60903270725g61dbaaadj3b6e70582a4f8cd5@mail.gmail.com> <7f014ea60903281238s79251f68v424633adc0b66098@mail.gmail.com> <7f014ea60903281240i3a1e61a9p3993597dc3b3677@mail.gmail.com> <7f014ea60903281258j5bca7d8fibfbd427f436aa7fe@mail.gmail.com> <7f014ea60903281327l56bbe544qf7ae37fb8b54c857@mail.gmail.com> <7f014ea60903281417i79e9d90blb85f05d933b00d75@mail.gmail.com> Message-ID: <7f014ea60903281509l4505310cv344430c44c83182f@mail.gmail.com> what does ldconfig do other than refresh the library path? 
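(As far as I understand it, that is essentially all ldconfig does: it
rebuilds the /etc/ld.so.cache consulted by the runtime linker and
recreates the soname symlinks for every directory listed in
/etc/ld.so.conf and /etc/ld.so.conf.d/. A quick sanity check of what the
cache actually contains, assuming the libraries are named like the ATLAS
ones above:

    sudo ldconfig
    ldconfig -p | grep atlas

If nothing shows up there, the loader will not find the libraries at
runtime no matter how numpy was linked.)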
i copied the .so's to /usr/local/atlas/lib and added that path to /etc/ld.so.conf.d/scipy.conf and then did ldconfig this was before building numpy Chris 2009/3/28 Charles R Harris > > > On Sat, Mar 28, 2009 at 3:34 PM, Charles R Harris < > charlesr.harris at gmail.com> wrote: > >> >> >> 2009/3/28 Chris Colbert >> >>> YES! YES! YES! YES! HAHAHAHA! YES! >>> >>> using these flags in make.inc to build lapack 1.3.1 worked: >>> >>> OPTS = O2 -fPIC -m32 >>> NOPTS = O2 -fPIC -m32 >>> >>> then build atlas as normal and build numpy against the static atlas >>> libraries (building against the .so's created by atlas causes a linking >>> error in numpy build.log. Numpy will still work, but who knows what >>> function may be broken). >>> >>> Now, off to build numpy 1.3.0rc1 >>> >>> Thanks for all the help gents! >>> >> >> You might need to run ldconfig to get the dynamic linking working. >> > > Oh, and the *.so libs don't install automagically, I had to explicitly copy > them into the library. > > Chuck > > > > _______________________________________________ > Numpy-discussion mailing list > Numpy-discussion at scipy.org > http://mail.scipy.org/mailman/listinfo/numpy-discussion > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From sccolbert at gmail.com Sat Mar 28 18:30:37 2009 From: sccolbert at gmail.com (Chris Colbert) Date: Sat, 28 Mar 2009 18:30:37 -0400 Subject: [Numpy-discussion] Built Lapack, Atlas from source.... now numpy.linalg.eig() hangs at 100% CPU In-Reply-To: <7f014ea60903281509l4505310cv344430c44c83182f@mail.gmail.com> References: <7f014ea60903270725g61dbaaadj3b6e70582a4f8cd5@mail.gmail.com> <7f014ea60903281240i3a1e61a9p3993597dc3b3677@mail.gmail.com> <7f014ea60903281258j5bca7d8fibfbd427f436aa7fe@mail.gmail.com> <7f014ea60903281327l56bbe544qf7ae37fb8b54c857@mail.gmail.com> <7f014ea60903281417i79e9d90blb85f05d933b00d75@mail.gmail.com> <7f014ea60903281509l4505310cv344430c44c83182f@mail.gmail.com> Message-ID: <7f014ea60903281530h3732ad23m5a0ec5547c4f4bec@mail.gmail.com> aside from a smaller numpy install size, what do i gain from linking against the .so's vs the static libraries? Chris On Sat, Mar 28, 2009 at 6:09 PM, Chris Colbert wrote: > what does ldconfig do other than refresh the library path? > > i copied the .so's to /usr/local/atlas/lib and added that path to > /etc/ld.so.conf.d/scipy.conf and then did ldconfig > > this was before building numpy > > Chris > > 2009/3/28 Charles R Harris > >> >> >> On Sat, Mar 28, 2009 at 3:34 PM, Charles R Harris < >> charlesr.harris at gmail.com> wrote: >> >>> >>> >>> 2009/3/28 Chris Colbert >>> >>>> YES! YES! YES! YES! HAHAHAHA! YES! >>>> >>>> using these flags in make.inc to build lapack 1.3.1 worked: >>>> >>>> OPTS = O2 -fPIC -m32 >>>> NOPTS = O2 -fPIC -m32 >>>> >>>> then build atlas as normal and build numpy against the static atlas >>>> libraries (building against the .so's created by atlas causes a linking >>>> error in numpy build.log. Numpy will still work, but who knows what >>>> function may be broken). >>>> >>>> Now, off to build numpy 1.3.0rc1 >>>> >>>> Thanks for all the help gents! >>>> >>> >>> You might need to run ldconfig to get the dynamic linking working. >>> >> >> Oh, and the *.so libs don't install automagically, I had to explicitly >> copy them into the library. 
>> >> Chuck >> >> >> >> _______________________________________________ >> Numpy-discussion mailing list >> Numpy-discussion at scipy.org >> http://mail.scipy.org/mailman/listinfo/numpy-discussion >> >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From sccolbert at gmail.com Sat Mar 28 18:58:45 2009 From: sccolbert at gmail.com (Chris Colbert) Date: Sat, 28 Mar 2009 18:58:45 -0400 Subject: [Numpy-discussion] Built Lapack, Atlas from source.... now numpy.linalg.eig() hangs at 100% CPU In-Reply-To: <7f014ea60903281530h3732ad23m5a0ec5547c4f4bec@mail.gmail.com> References: <7f014ea60903270725g61dbaaadj3b6e70582a4f8cd5@mail.gmail.com> <7f014ea60903281258j5bca7d8fibfbd427f436aa7fe@mail.gmail.com> <7f014ea60903281327l56bbe544qf7ae37fb8b54c857@mail.gmail.com> <7f014ea60903281417i79e9d90blb85f05d933b00d75@mail.gmail.com> <7f014ea60903281509l4505310cv344430c44c83182f@mail.gmail.com> <7f014ea60903281530h3732ad23m5a0ec5547c4f4bec@mail.gmail.com> Message-ID: <7f014ea60903281558q166c2ec5jad8f5d515bd948e2@mail.gmail.com> just to see if it would work. I compiled against the .so's and just didnt worry about the linking error. Then i installed numpy and ran numpy.test() these are the results: Ran 2030 tests in 9.778s OK (KNOWNFAIL=1, SKIP=11) so i guess that means its ok.. Chris On Sat, Mar 28, 2009 at 6:30 PM, Chris Colbert wrote: > aside from a smaller numpy install size, what do i gain from linking > against the .so's vs the static libraries? > > Chris > > > On Sat, Mar 28, 2009 at 6:09 PM, Chris Colbert wrote: > >> what does ldconfig do other than refresh the library path? >> >> i copied the .so's to /usr/local/atlas/lib and added that path to >> /etc/ld.so.conf.d/scipy.conf and then did ldconfig >> >> this was before building numpy >> >> Chris >> >> 2009/3/28 Charles R Harris >> >>> >>> >>> On Sat, Mar 28, 2009 at 3:34 PM, Charles R Harris < >>> charlesr.harris at gmail.com> wrote: >>> >>>> >>>> >>>> 2009/3/28 Chris Colbert >>>> >>>>> YES! YES! YES! YES! HAHAHAHA! YES! >>>>> >>>>> using these flags in make.inc to build lapack 1.3.1 worked: >>>>> >>>>> OPTS = O2 -fPIC -m32 >>>>> NOPTS = O2 -fPIC -m32 >>>>> >>>>> then build atlas as normal and build numpy against the static atlas >>>>> libraries (building against the .so's created by atlas causes a linking >>>>> error in numpy build.log. Numpy will still work, but who knows what >>>>> function may be broken). >>>>> >>>>> Now, off to build numpy 1.3.0rc1 >>>>> >>>>> Thanks for all the help gents! >>>>> >>>> >>>> You might need to run ldconfig to get the dynamic linking working. >>>> >>> >>> Oh, and the *.so libs don't install automagically, I had to explicitly >>> copy them into the library. >>> >>> Chuck >>> >>> >>> >>> _______________________________________________ >>> Numpy-discussion mailing list >>> Numpy-discussion at scipy.org >>> http://mail.scipy.org/mailman/listinfo/numpy-discussion >>> >>> >> > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From jblaine at mitre.org Sat Mar 28 19:31:55 2009 From: jblaine at mitre.org (Jeff Blaine) Date: Sat, 28 Mar 2009 19:31:55 -0400 Subject: [Numpy-discussion] Failure with 1.3.0b1 under Solaris 10 SPARC In-Reply-To: <49CA85A2.10905@mitre.org> References: <49CA85A2.10905@mitre.org> Message-ID: <49CEB36B.9040309@mitre.org> Same problem with 1.3.0rc1 Jeff Blaine wrote: > Aside from this, the website for NumPy should have a link to the > list subscription address, not a link to the list itself (which > cannot be posted to unless one is a member). > > Python 2.4.2 (#2, Dec 6 2006, 17:18:19) > [GCC 3.3.5] on sunos5 > Type "help", "copyright", "credits" or "license" for more information. > >>> import numpy > Traceback (most recent call last): > File "", line 1, in ? > File > "/afs/.rcf.mitre.org/lang/python/sun4x_510/2.4.2/lib/python2.4/site-packages/numpy/__init__.py", > > line 130, in ? > import add_newdocs > File > "/afs/.rcf.mitre.org/lang/python/sun4x_510/2.4.2/lib/python2.4/site-packages/numpy/add_newdocs.py", > > line 9, in ? > from lib import add_newdoc > File > "/afs/.rcf.mitre.org/lang/python/sun4x_510/2.4.2/lib/python2.4/site-packages/numpy/lib/__init__.py", > > line 4, in ? > from type_check import * > File > "/afs/.rcf.mitre.org/lang/python/sun4x_510/2.4.2/lib/python2.4/site-packages/numpy/lib/type_check.py", > > line 8, in ? > import numpy.core.numeric as _nx > File > "/afs/.rcf.mitre.org/lang/python/sun4x_510/2.4.2/lib/python2.4/site-packages/numpy/core/__init__.py", > > line 5, in ? > import multiarray > ImportError: ld.so.1: python: fatal: relocation error: file > /afs/.rcf.mitre.org/lang/python/sun4x_510/2.4.2/lib/python2.4/site-packages/numpy/core/multiarray.so: > > symbol __builtin_isfinite: referenced symbol not found > >>> > > See build.log attached as well. > > > From charlesr.harris at gmail.com Sat Mar 28 20:35:13 2009 From: charlesr.harris at gmail.com (Charles R Harris) Date: Sat, 28 Mar 2009 18:35:13 -0600 Subject: [Numpy-discussion] Failure with 1.3.0b1 under Solaris 10 SPARC In-Reply-To: <49CEB36B.9040309@mitre.org> References: <49CA85A2.10905@mitre.org> <49CEB36B.9040309@mitre.org> Message-ID: On Sat, Mar 28, 2009 at 5:31 PM, Jeff Blaine wrote: > Same problem with 1.3.0rc1 > > Jeff Blaine wrote: > > Aside from this, the website for NumPy should have a link to the > > list subscription address, not a link to the list itself (which > > cannot be posted to unless one is a member). > > > > Python 2.4.2 (#2, Dec 6 2006, 17:18:19) > > [GCC 3.3.5] on sunos5 > > Type "help", "copyright", "credits" or "license" for more information. > > >>> import numpy > > Traceback (most recent call last): > > File "", line 1, in ? > > File > > "/afs/. > rcf.mitre.org/lang/python/sun4x_510/2.4.2/lib/python2.4/site-packages/numpy/__init__.py > ", > > > > line 130, in ? > > import add_newdocs > > File > > "/afs/. > rcf.mitre.org/lang/python/sun4x_510/2.4.2/lib/python2.4/site-packages/numpy/add_newdocs.py > ", > > > > line 9, in ? > > from lib import add_newdoc > > File > > "/afs/. > rcf.mitre.org/lang/python/sun4x_510/2.4.2/lib/python2.4/site-packages/numpy/lib/__init__.py > ", > > > > line 4, in ? > > from type_check import * > > File > > "/afs/. > rcf.mitre.org/lang/python/sun4x_510/2.4.2/lib/python2.4/site-packages/numpy/lib/type_check.py > ", > > > > line 8, in ? > > import numpy.core.numeric as _nx > > File > > "/afs/. > rcf.mitre.org/lang/python/sun4x_510/2.4.2/lib/python2.4/site-packages/numpy/core/__init__.py > ", > > > > line 5, in ? 
> > import multiarray > > ImportError: ld.so.1: python: fatal: relocation error: file > > /afs/. > rcf.mitre.org/lang/python/sun4x_510/2.4.2/lib/python2.4/site-packages/numpy/core/multiarray.so > : > > > > symbol __builtin_isfinite: referenced symbol not found > > >>> > > > > See build.log attached as well. > > > > Google indicates that this might be a problem with a missing isfinite and gcc 3.3.5. I think we should be detecting this, but... Chuck -------------- next part -------------- An HTML attachment was scrubbed... URL: From charlesr.harris at gmail.com Sat Mar 28 20:45:44 2009 From: charlesr.harris at gmail.com (Charles R Harris) Date: Sat, 28 Mar 2009 18:45:44 -0600 Subject: [Numpy-discussion] Failure with 1.3.0b1 under Solaris 10 SPARC In-Reply-To: References: <49CA85A2.10905@mitre.org> <49CEB36B.9040309@mitre.org> Message-ID: On Sat, Mar 28, 2009 at 6:35 PM, Charles R Harris wrote: > > > On Sat, Mar 28, 2009 at 5:31 PM, Jeff Blaine wrote: > >> Same problem with 1.3.0rc1 >> >> Jeff Blaine wrote: >> > Aside from this, the website for NumPy should have a link to the >> > list subscription address, not a link to the list itself (which >> > cannot be posted to unless one is a member). >> > >> > Python 2.4.2 (#2, Dec 6 2006, 17:18:19) >> > [GCC 3.3.5] on sunos5 >> > Type "help", "copyright", "credits" or "license" for more information. >> > >>> import numpy >> > Traceback (most recent call last): >> > File "", line 1, in ? >> > File >> > "/afs/. >> rcf.mitre.org/lang/python/sun4x_510/2.4.2/lib/python2.4/site-packages/numpy/__init__.py >> ", >> > >> > line 130, in ? >> > import add_newdocs >> > File >> > "/afs/. >> rcf.mitre.org/lang/python/sun4x_510/2.4.2/lib/python2.4/site-packages/numpy/add_newdocs.py >> ", >> > >> > line 9, in ? >> > from lib import add_newdoc >> > File >> > "/afs/. >> rcf.mitre.org/lang/python/sun4x_510/2.4.2/lib/python2.4/site-packages/numpy/lib/__init__.py >> ", >> > >> > line 4, in ? >> > from type_check import * >> > File >> > "/afs/. >> rcf.mitre.org/lang/python/sun4x_510/2.4.2/lib/python2.4/site-packages/numpy/lib/type_check.py >> ", >> > >> > line 8, in ? >> > import numpy.core.numeric as _nx >> > File >> > "/afs/. >> rcf.mitre.org/lang/python/sun4x_510/2.4.2/lib/python2.4/site-packages/numpy/core/__init__.py >> ", >> > >> > line 5, in ? >> > import multiarray >> > ImportError: ld.so.1: python: fatal: relocation error: file >> > /afs/. >> rcf.mitre.org/lang/python/sun4x_510/2.4.2/lib/python2.4/site-packages/numpy/core/multiarray.so >> : >> > >> > symbol __builtin_isfinite: referenced symbol not found >> > >>> >> > >> > See build.log attached as well. >> > >> > > > > Google indicates that this might be a problem with a missing isfinite and > gcc 3.3.5. I think we should be detecting this, but... > What version of glibc do you have? Chuck -------------- next part -------------- An HTML attachment was scrubbed... URL: From aisaac at american.edu Sat Mar 28 21:09:27 2009 From: aisaac at american.edu (Alan G Isaac) Date: Sat, 28 Mar 2009 21:09:27 -0400 Subject: [Numpy-discussion] [Announce] Numpy 1.3.0 rc1 In-Reply-To: <49CE2587.2050007@ar.media.kyoto-u.ac.jp> References: <49CE2587.2050007@ar.media.kyoto-u.ac.jp> Message-ID: <49CECA47.1010004@american.edu> On 3/28/2009 9:26 AM David Cournapeau apparently wrote: > I am pleased to announce the release of the rc1 for numpy > 1.3.0. 
You can find source tarballs and installers for both Mac OS X > and Windows on the sourceforge page: > https://sourceforge.net/projects/numpy/ Was the Python 2.6 Superpack intentionally omitted? Alan Isaac From peridot.faceted at gmail.com Sun Mar 29 00:15:57 2009 From: peridot.faceted at gmail.com (Anne Archibald) Date: Sun, 29 Mar 2009 00:15:57 -0400 Subject: [Numpy-discussion] array of matrices In-Reply-To: <7f9d599f0903280947p3c30614epb83b9266ae25ed6e@mail.gmail.com> References: <1238193504.12867.4.camel@pc2.cole.uklinux.net> <3d375d730903271543m23e3f6dcj39c59cd115dedfa2@mail.gmail.com> <3d375d730903280047h2195a468i108f963453bdb78d@mail.gmail.com> <7f9d599f0903280947p3c30614epb83b9266ae25ed6e@mail.gmail.com> Message-ID: 2009/3/28 Geoffrey Irving : > On Sat, Mar 28, 2009 at 12:47 AM, Robert Kern wrote: >> 2009/3/27 Charles R Harris : >>> >>> On Fri, Mar 27, 2009 at 4:43 PM, Robert Kern wrote: >>>> >>>> On Fri, Mar 27, 2009 at 17:38, Bryan Cole wrote: >>>> > I have a number of arrays of shape (N,4,4). I need to perform a >>>> > vectorised matrix-multiplication between pairs of them I.e. >>>> > matrix-multiplication rules for the last two dimensions, usual >>>> > element-wise rule for the 1st dimension (of length N). >>>> > >>>> > (How) is this possible with numpy? >>>> >>>> dot(a,b) was specifically designed for this use case. >>> >>> I think maybe he wants to treat them as stacked matrices. >> >> Oh, right. Sorry. dot(a, b) works when a is (N, 4, 4) and b is just >> (4, 4). Never mind. > > It'd be great if this operation existed as a primitive. ?What do you > think would be the best way in which to add it? ?One option would be > to add a keyword argument to "dot" giving a set of axes to map over. > E.g., > > ? ?dot(a, b, map=0) = array([dot(u,v) for u,v in zip(a,b)]) # but in C > > "map" isn't a very good name for the argument, though. I think the right long-term solution is to make dot (and some other linear algebra functions) into "generalized ufuncs", so that when you dot two multidimensional objects together, they are treated as arrays of two-dimensional arrays, broadcasting is done on all but the last two dimensions, and then the linear algebra is applied "elementwise". This covers basically all "stacked matrices" uses in a very general way, but would require some redesigning of the linear algebra system - for example, dot() currently works on both two- and one-dimensional arrays, which can't work in such a setting. The infrastructure to support such generalized ufuncs has been added to numpy, but as far as I know no functions yet make use of it. 
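In the meantime, the original (N, 4, 4) case can be vectorized with
plain broadcasting. A minimal sketch - correct but not memory-frugal,
since it materializes an (N, 4, 4, 4) intermediate:

    import numpy as np

    N = 10
    a = np.random.randn(N, 4, 4)
    b = np.random.randn(N, 4, 4)

    # c[n] = dot(a[n], b[n]): broadcast over n, i, j and sum over the
    # contracted axis k
    c = (a[:, :, :, np.newaxis] * b[:, np.newaxis, :, :]).sum(axis=2)

    # the obvious Python-level loop, for comparison
    c2 = np.array([np.dot(u, v) for u, v in zip(a, b)])
    assert np.allclose(c, c2)

At least the loop over N then happens in C rather than in Python.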
Anne > Geoffrey > _______________________________________________ > Numpy-discussion mailing list > Numpy-discussion at scipy.org > http://mail.scipy.org/mailman/listinfo/numpy-discussion > From robert.kern at gmail.com Sun Mar 29 00:32:35 2009 From: robert.kern at gmail.com (Robert Kern) Date: Sat, 28 Mar 2009 23:32:35 -0500 Subject: [Numpy-discussion] array of matrices In-Reply-To: References: <1238193504.12867.4.camel@pc2.cole.uklinux.net> <3d375d730903271543m23e3f6dcj39c59cd115dedfa2@mail.gmail.com> <3d375d730903280047h2195a468i108f963453bdb78d@mail.gmail.com> <7f9d599f0903280947p3c30614epb83b9266ae25ed6e@mail.gmail.com> Message-ID: <3d375d730903282132r431ac3e0i66808b4ae533df4b@mail.gmail.com> On Sat, Mar 28, 2009 at 23:15, Anne Archibald wrote: > 2009/3/28 Geoffrey Irving : >> On Sat, Mar 28, 2009 at 12:47 AM, Robert Kern wrote: >>> 2009/3/27 Charles R Harris : >>>> >>>> On Fri, Mar 27, 2009 at 4:43 PM, Robert Kern wrote: >>>>> >>>>> On Fri, Mar 27, 2009 at 17:38, Bryan Cole wrote: >>>>> > I have a number of arrays of shape (N,4,4). I need to perform a >>>>> > vectorised matrix-multiplication between pairs of them I.e. >>>>> > matrix-multiplication rules for the last two dimensions, usual >>>>> > element-wise rule for the 1st dimension (of length N). >>>>> > >>>>> > (How) is this possible with numpy? >>>>> >>>>> dot(a,b) was specifically designed for this use case. >>>> >>>> I think maybe he wants to treat them as stacked matrices. >>> >>> Oh, right. Sorry. dot(a, b) works when a is (N, 4, 4) and b is just >>> (4, 4). Never mind. >> >> It'd be great if this operation existed as a primitive. ?What do you >> think would be the best way in which to add it? ?One option would be >> to add a keyword argument to "dot" giving a set of axes to map over. >> E.g., >> >> ? ?dot(a, b, map=0) = array([dot(u,v) for u,v in zip(a,b)]) # but in C >> >> "map" isn't a very good name for the argument, though. > > I think the right long-term solution is to make dot (and some other > linear algebra functions) into "generalized ufuncs", so that when you > dot two multidimensional objects together, they are treated as arrays > of two-dimensional arrays, broadcasting is done on all but the last > two dimensions, and then the linear algebra is applied "elementwise". > This covers basically all "stacked matrices" uses in a very general > way, but would require some redesigning of the linear algebra system - > for example, dot() currently works on both two- and one-dimensional > arrays, which can't work in such a setting. > > The infrastructure to support such generalized ufuncs has been added > to numpy, but as far as I know no functions yet make use of it. I don't think there is a way to do it in general with dot(). Some cases are ambiguous. I think you will need separate matrix-matrix, matrix-vector, and vector-vector gufuncs, to coin a term. -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." 
-- Umberto Eco

From cournape at gmail.com  Sun Mar 29 03:27:58 2009
From: cournape at gmail.com (David Cournapeau)
Date: Sun, 29 Mar 2009 16:27:58 +0900
Subject: [Numpy-discussion] [Announce] Numpy 1.3.0 rc1
In-Reply-To: <49CECA47.1010004@american.edu>
References: <49CE2587.2050007@ar.media.kyoto-u.ac.jp>
	<49CECA47.1010004@american.edu>
Message-ID: <5b8d13220903290027s3d785202o202592be7832da74@mail.gmail.com>

On Sun, Mar 29, 2009 at 10:09 AM, Alan G Isaac wrote:
> On 3/28/2009 9:26 AM David Cournapeau apparently wrote:
>> I am pleased to announce the release of the rc1 for numpy
>> 1.3.0. You can find source tarballs and installers for both Mac OS X
>> and Windows on the sourceforge page:
>> https://sourceforge.net/projects/numpy/
>
> Was the Python 2.6 Superpack intentionally omitted?

No, I've added it. The 64-bit binary will come later.

David

From david at ar.media.kyoto-u.ac.jp  Sun Mar 29 04:03:33 2009
From: david at ar.media.kyoto-u.ac.jp (David Cournapeau)
Date: Sun, 29 Mar 2009 17:03:33 +0900
Subject: [Numpy-discussion] [Announce] Numpy 1.3.0 rc1
In-Reply-To: <9EC23A11-4979-4DF7-AE6B-7C3BA61E6494@post.harvard.edu>
References: <49CE2587.2050007@ar.media.kyoto-u.ac.jp>
	<5b8d13220903280904j6e4fc9f7of642ab2edc4ffde2@mail.gmail.com>
	<9EC23A11-4979-4DF7-AE6B-7C3BA61E6494@post.harvard.edu>
Message-ID: <49CF2B55.3090601@ar.media.kyoto-u.ac.jp>

Robert Pyle wrote:
>
> Yes. When it gets to "Select a Destination", I would expect my boot
> disk to get the green arrow as the installation target, but it (and
> the other three disks) have the exclamation point in the red circle.
> Same thing happened on my MacBook Pro (Intel) with its one disk.
>

Now that I think about it, maybe this is due to the lack of a python
interpreter from python.org on your side. Did you install any other
python besides the one included in EPD? If that's the problem, we
should at least mention somewhere that the python from python.org is
needed.

cheers,

David

From christian at marquardt.sc  Sun Mar 29 07:11:28 2009
From: christian at marquardt.sc (Christian Marquardt)
Date: Sun, 29 Mar 2009 12:11:28 +0100 (GMT+01:00)
Subject: [Numpy-discussion] Numpy v1.3.0b1 on Linux w/ Intel compilers -
	unknown file type
In-Reply-To: 
Message-ID: <9458747.2081238325088537.JavaMail.root@athene>

They are, also in v1.3.0rc1.

Many thanks!

Christian

----- "Charles R Harris" wrote:
>
> 2009/3/27 Christian Marquardt < christian at marquardt.sc >
>
> Error messages?
Sure;-)
> > python -c 'import numpy; numpy.test()'
> Running unit tests for numpy
> NumPy version 1.3.0b1
> NumPy is installed in /opt/apps/lib/python2.5/site-packages/numpy
> Python version 2.5.2 (r252:60911, Aug 31 2008, 15:16:34) [GCC Intel(R) C++ gcc 4.2 mode]
> nose version 0.10.4
> ............................................................................................................................................................................................................................................................................................................................................................................K...........................................................................................................................................................................................................................................................................................................................................................................FF..............FF....................................................................................................................................................................................
> OK, the tests should be fixed in r6773.
>
> Chuck
> > > > _______________________________________________ Numpy-discussion mailing list Numpy-discussion at scipy.org http://mail.scipy.org/mailman/listinfo/numpy-discussion
-- Dr. Christian Marquardt Email: christian at marquardt.sc Wilhelm-Leuschner-Str. 27 Tel.: +49 (0) 6151 95 13 776 64293 Darmstadt Mobile: +49 (0) 179 290 84 74 Germany Fax: +49 (0) 6151 95 13 885 -------------- next part -------------- An HTML attachment was scrubbed... URL:
From christian at marquardt.sc Sun Mar 29 07:28:10 2009 From: christian at marquardt.sc (Christian Marquardt) Date: Sun, 29 Mar 2009 12:28:10 +0100 (GMT+01:00) Subject: [Numpy-discussion] Command line args for the Intel compilers (32 bit) In-Reply-To: <948943.2191238325845345.JavaMail.root@athene> Message-ID: <20330822.2211238326090169.JavaMail.root@athene>
Hi, I've been carrying these modifications of the built-in compiler command line arguments for the 32-bit Intel compilers for quite a while now; maybe they are interesting for other people as well... I've been using this with ifort (IFORT) 10.1 20080801 on a Suse Linux 10.3.
Rationale for individual changes:

- numpy-1.3.0rc1/numpy/distutils/fcompiler/intel.py:
  - For pentiumM's, options are changed from '-tpp7 -xB' to '-xN': the compiler documentation says that -tpp and -xB are deprecated and will be removed in future versions.
  - 'linker_so' gets an additional "-xN": if code is compiled with -x, additional "vector math" libraries (libvml*) need to be linked in, or loading the shared objects may fail at runtime; so I added this to the link command. If other '-x' options are used by the numpy distutils, the linker command should be modified accordingly - so this patch probably is not generic.

- numpy-1.3.0rc1/numpy/distutils/intelccompiler.py:
  - A dedicated C++ compiler (icpc) is introduced and used for compiling C++ code, *and* for linking: I have found that C++ extensions require additional runtime libraries that are not linked in with the normal icc command, causing the loading of C++ extensions to fail at runtime. This used to be a problem with scipy in earlier versions, but I think there's currently no C++ in scipy any more (but I could be wrong).

Hope this is useful, Christian. -------------- next part -------------- A non-text attachment was scrubbed... Name: numpy-1.3.0rc1-intel-args.patch Type: text/x-patch Size: 3963 bytes Desc: not available URL:
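(The patch itself was scrubbed from the archive above. Purely as an illustration of the kind of change being described -- the real patch edits numpy/distutils/fcompiler/intel.py in place, and the subclass name here is invented -- the first two points amount to roughly:)

from numpy.distutils.fcompiler.intel import IntelFCompiler

class PatchedIntelFCompiler(IntelFCompiler):
    def get_flags_arch(self):
        # pentiumM: '-xN' replaces the deprecated '-tpp7 -xB'
        return ['-xN']

    def get_flags_linker_so(self):
        # repeat '-xN' at link time so that the libvml* vector math
        # libraries required by '-x' compiled code are linked in
        return IntelFCompiler.get_flags_linker_so(self) + ['-xN']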
From rpyle at post.harvard.edu Sun Mar 29 10:43:08 2009 From: rpyle at post.harvard.edu (Robert Pyle) Date: Sun, 29 Mar 2009 10:43:08 -0400 Subject: [Numpy-discussion] [Announce] Numpy 1.3.0 rc1 In-Reply-To: <49CF2B55.3090601@ar.media.kyoto-u.ac.jp> References: <49CE2587.2050007@ar.media.kyoto-u.ac.jp> <5b8d13220903280904j6e4fc9f7of642ab2edc4ffde2@mail.gmail.com> <9EC23A11-4979-4DF7-AE6B-7C3BA61E6494@post.harvard.edu> <49CF2B55.3090601@ar.media.kyoto-u.ac.jp> Message-ID:
Hi David,
On Mar 29, 2009, at 4:03 AM, David Cournapeau wrote:
> Robert Pyle wrote:
>> Yes. When it gets to "Select a Destination", I would expect my boot
>> disk to get the green arrow as the installation target, but it (and
>> the other three disks) have the exclamation point in the red circle.
>>> Same thing happened on my MacBook Pro (Intel) with its one disk.
>>
>> Now that I think about it, maybe this is due to the lack of a python
>> interpreter from python.org on your side. Did you install any other
>> python besides the one included in EPD ? If that's the problem, we
>> should at least mention somewhere that the python from python.org is
>> needed.
Okay, I just installed 2.6.1 from python.org, and it is now the version that starts when I type "python" to Terminal. I still cannot install numpy-1.3.0rc1 from the OS X installer, numpy-1.3.0rc1-py2.5-macosx10.5.dmg
Yes, you can't install a python 2.5 package on python 2.6. It is almost like installing from sources is actually easier than from an installer on mac os x... I can relatively easily provide a 2.6 installer, though. cheers, David
From rpyle at post.harvard.edu Sun Mar 29 14:36:28 2009 From: rpyle at post.harvard.edu (Robert Pyle) Date: Sun, 29 Mar 2009 14:36:28 -0400 Subject: [Numpy-discussion] [Announce] Numpy 1.3.0 rc1 In-Reply-To: <5b8d13220903290753g224ee96am72c1cd93db88f650@mail.gmail.com> References: <49CE2587.2050007@ar.media.kyoto-u.ac.jp> <5b8d13220903280904j6e4fc9f7of642ab2edc4ffde2@mail.gmail.com> <9EC23A11-4979-4DF7-AE6B-7C3BA61E6494@post.harvard.edu> <49CF2B55.3090601@ar.media.kyoto-u.ac.jp> <5b8d13220903290753g224ee96am72c1cd93db88f650@mail.gmail.com> Message-ID:
On Mar 29, 2009, at 10:53 AM, David Cournapeau wrote:
> On Sun, Mar 29, 2009 at 7:43 AM, Robert Pyle wrote:
>> Hi David,
>>
>> On Mar 29, 2009, at 4:03 AM, David Cournapeau wrote:
>>
>>> Robert Pyle wrote:
>>>> Yes. When it gets to "Select a Destination", I would expect my boot
>>>> disk to get the green arrow as the installation target, but it (and
>>>> the other three disks) have the exclamation point in the red circle.
>>>> Same thing happened on my MacBook Pro (Intel) with its one disk.
>>>
>>> Now that I think about it, maybe this is due to the lack of a python
>>> interpreter from python.org on your side. Did you install any other
>>> python besides the one included in EPD ? If that's the problem, we
>>> should at least mention somewhere that the python from python.org is
>>> needed.
>>
>> Okay, I just installed 2.6.1 from python.org, and it is now the
>> version that starts when I type "python" to Terminal. I still cannot
>> install numpy-1.3.0rc1 from the OS X installer, numpy-1.3.0rc1-py2.5-
>> macosx10.5.dmg
>
> Yes, you can't install a python 2.5 package on python 2.6. It is
> almost like installing from sources is actually easier than from an
> installer on mac os x...
I just installed 2.5.4 from python.org, and the OS X installer still doesn't work. This is on a PPC G5; I haven't tried it on my Intel MacBook Pro. Bob
From drife at ucar.edu Sun Mar 29 16:28:37 2009 From: drife at ucar.edu (Daran L. Rife) Date: Sun, 29 Mar 2009 14:28:37 -0600 (MDT) Subject: [Numpy-discussion] Efficient removal of duplicates: Numpy discussion board Message-ID: <35346.24.56.188.140.1238358517.squirrel@mail.rap.ucar.edu>
Marjolaine,
Solution: unique_index = [i for i, x in enumerate(l) if not i or x != l[i-1]]
Remember that enumerate gives the index,value pairs of the items in any iterable object. Try it for yourself. Here's the output from my IPython session.
In [1]: l = [(1,1), (2,3), (1, 1), (4,5), (2,3), (10,21)]

In [2]: l.sort()

In [3]: l
Out[3]: [(1, 1), (1, 1), (2, 3), (2, 3), (4, 5), (10, 21)]

In [4]: unique_index = [i for i, x in enumerate(l) if not i or x != l[i-1]]

In [5]: unique_index
Out[5]: [0, 2, 4, 5]

BTW, I'm posting my response to the numpy-discussion group so others may benefit. It's best to address your questions to the group, as individuals are not always available to answer your question in a timely manner. And by posting your message to the group, you draw from a large body of very knowledgeable people who will gladly help you.
Daran
--
> I saw your message on the numpy discussion board regarding the solution for the efficient removal of duplicates. I have the same problem but need to return
> the indices of the values as an input with associated z values.
> I was wondering if there was any way to have the method you proposed return the indices of the duplicate (or alternatively unique) values in a.
>
> Here is the piece of code that you suggested at the time (Re: [Numpy-discussion] Efficient removal of duplicates, posted on Tue, 16 Dec 2008 01:10:00 -0800)
>
> ---------------------------------------------
> import numpy as np
>
> a = [(x0,y0), (x1,y1), ...] # A numpy array, but could be a list
> l = a.tolist()
> l.sort()
> unique = [x for i, x in enumerate(l) if not i or x != l[i-1]] # <----
> a_unique = np.asarray(unique)
>
> ---------------------------------------------
>
> Best regards, marjolaine
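(For the index-returning variant asked about here, a NumPy-only sketch; the array values below are made up. A stable sort plus one comparison against the shifted sorted array reproduces the recipe above without a Python loop:)

import numpy as np

a = np.array([10, 21, 10, 5, 21, 5])
order = np.argsort(a, kind='mergesort')   # stable: first occurrences win
s = a[order]
flag = np.concatenate(([True], s[1:] != s[:-1]))  # start of each run
unique_values = s[flag]                   # array([ 5, 10, 21])
unique_index = order[flag]                # array([3, 0, 1]), indices into a

(The same idea works for the (x, y) pairs in the thread if the array is first viewed as a structured array, so that each row compares as a single item.)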
From cournape at gmail.com Mon Mar 30 02:56:50 2009 From: cournape at gmail.com (David Cournapeau) Date: Mon, 30 Mar 2009 15:56:50 +0900 Subject: [Numpy-discussion] [Announce] Numpy 1.3.0 rc1 In-Reply-To: References: <49CE2587.2050007@ar.media.kyoto-u.ac.jp> <5b8d13220903280904j6e4fc9f7of642ab2edc4ffde2@mail.gmail.com> <9EC23A11-4979-4DF7-AE6B-7C3BA61E6494@post.harvard.edu> <49CF2B55.3090601@ar.media.kyoto-u.ac.jp> <5b8d13220903290753g224ee96am72c1cd93db88f650@mail.gmail.com> Message-ID: <5b8d13220903292356r5e954b90lf92c9f170ec01b13@mail.gmail.com>
On Mon, Mar 30, 2009 at 3:36 AM, Robert Pyle wrote:
> I just installed 2.5.4 from python.org, and the OS X installer still
> doesn't work. This is on a PPC G5; I haven't tried it on my Intel
> MacBook Pro.
I think I got it. To build numpy, I use virtualenv to make a "bootstrap" environment, but then the corresponding python path gets embedded in the .mpkg - so unless you have your python interpreter in exactly the same path as my bootstrap (which is very unlikely), it won't run at all. This would also explain why I never saw the problem.
I will prepare a new binary, cheers, David
From david at ar.media.kyoto-u.ac.jp Mon Mar 30 03:41:00 2009 From: david at ar.media.kyoto-u.ac.jp (David Cournapeau) Date: Mon, 30 Mar 2009 16:41:00 +0900 Subject: [Numpy-discussion] [Announce] Numpy 1.3.0 rc1 In-Reply-To: <5b8d13220903292356r5e954b90lf92c9f170ec01b13@mail.gmail.com> References: <49CE2587.2050007@ar.media.kyoto-u.ac.jp> <5b8d13220903280904j6e4fc9f7of642ab2edc4ffde2@mail.gmail.com> <9EC23A11-4979-4DF7-AE6B-7C3BA61E6494@post.harvard.edu> <49CF2B55.3090601@ar.media.kyoto-u.ac.jp> <5b8d13220903290753g224ee96am72c1cd93db88f650@mail.gmail.com> <5b8d13220903292356r5e954b90lf92c9f170ec01b13@mail.gmail.com> Message-ID: <49D0778C.5050706@ar.media.kyoto-u.ac.jp>
David Cournapeau wrote:
> On Mon, Mar 30, 2009 at 3:36 AM, Robert Pyle wrote:
>
>> I just installed 2.5.4 from python.org, and the OS X installer still
>> doesn't work. This is on a PPC G5; I haven't tried it on my Intel
>> MacBook Pro.
>>
Could you try this one ?
http://www.ar.media.kyoto-u.ac.jp/members/david/archives/numpy/numpy-1.3.0rc1-py2.5-macosx10.5.mpkg.tbz2
If it does not work, getting the /var/tmp/install.log would be helpful (the few last lines), cheers, David
From kfrancoi at gmail.com Mon Mar 30 04:40:33 2009 From: kfrancoi at gmail.com (=?ISO-8859-1?Q?Kevin_Fran=E7oisse?=) Date: Mon, 30 Mar 2009 10:40:33 +0200 Subject: [Numpy-discussion] SWIG and numpy.i In-Reply-To: <81CAFBF8-B131-4910-B985-DE66022FA28D@sandia.gov> References: <36c2e0ca0903240733x2d3e4d44iaa6afd8d53c3ac69@mail.gmail.com> <49A4F2A3-1E5A-45F9-9A50-3F8460604D88@sandia.gov> <36c2e0ca0903250439r2b363873qbabe4722b6445b8f@mail.gmail.com> <81CAFBF8-B131-4910-B985-DE66022FA28D@sandia.gov> Message-ID: <36c2e0ca0903300140g633e20c6md2e655aa421040b6@mail.gmail.com>
Hello Bill,
Finally, I just changed my function header to take a double* rather than a double**. It's working fine now. Thank you for all your answers and your help! Swig and numpy.i are really cool when you know how to use it! I also use INPLACE arrays as a way to output 2D arrays from my C function.
Kevin
On Wed, Mar 25, 2009 at 3:03 PM, Bill Spotz wrote:
> Kevin,
> In this instance, the best thing is to write a wrapper function that calls
> your matSum() function, and takes a double* rather than a double**. You
> can %ignore the original function and %rename the wrapper so that the python
> interface gets the name you want.
>
> On Mar 25, 2009, at 7:39 AM, Kevin Françoisse wrote:
>
> Thanks Bill, it helps me a lot ! My function works fine now.
>>
>> But I encounter another problem. This time with a NumPy array of 2
>> dimensions.
>>
>> Here is the function I want to use :
>>
>> /****************/
>> double matSum(double** mat, int n, int m){
>>     int i,j;
>>     double sum = 0.0;
>>     for (i=0;i<n;i++){
>>         for (j=0;j<m;j++){
>>             sum += mat[i][j];
>>         }
>>     }
>>     return sum;
>> }
>> /****************/
>>
>> I supposed that the typemap to use is the following :
>>
>> %apply (double* IN_ARRAY2, int DIM1, int DIM2) {(double** mat, int n, int m)};
>>
>> But it is not working. Of course, my typemap assignment is not
>> compatible with my function parameters. I tried several ways of using a two
>> dimensional array but I'm not sure what is the best way to do it ?
>>
>> Thanks
>>
>> ---
>> Kevin Françoisse
>> Ph.D. at Machine Learning Group at UCL
>> Belgium
>> kevin.francoisse at uclouvain.be
>>
>> On Tue, Mar 24, 2009 at 6:13 PM, Bill Spotz wrote:
>> Kevin,
>> You need to declare vecSum() *after* you %include "numpy.i" and use the
>> %apply directive.
Based on what you have, I think you can just get rid of
>> the "extern double vecSum(...)". I don't see what purpose it serves. As
>> is, it is telling swig to wrap vecSum() before you have set up your numpy
>> typemaps.
>>
>> On Mar 24, 2009, at 10:33 AM, Kevin Françoisse wrote:
>>
>> Hi everyone,
>>
>> I have been using NumPy for a couple of months now, as part of my research
>> project at the university. But now, I have to use a big C library I wrote
>> myself in a python project. So I chose to use SWIG for the interface
>> between both my python script and my C library. To make things more
>> comprehensible, I wrote a small C method that illustrates my problem:
>>
>> /* matrix.c */
>>
>> #include <stdio.h>
>> #include <stdlib.h>
>> /* Compute the sum of a vector of reals */
>> double vecSum(int* vec,int m){
>>     int i;
>>     double sum =0.0;
>>
>>     for(i=0;i<m;i++){
>>         sum += vec[i];
>>     }
>>     return sum;
>> }
>>
>> /***/
>>
>> /* matrix.h */
>>
>> double vecSum(int* vec,int m);
>>
>> /***/
>>
>> /* matrix.i */
>>
>> %module matrix
>> %{
>> #define SWIG_FILE_WITH_INIT
>> #include "matrix.h"
>> %}
>>
>> extern double vecSum(int* vec, int m);
>>
>> %include "numpy.i"
>>
>> %init %{
>> import_array();
>> %}
>>
>> %apply (int* IN_ARRAY1, int DIM1) {(int* vec, int m)};
>> %include "matrix.h"
>>
>> /***/
>>
>> I'm using a python script to compile my swig interface and my C files
>> (running Mac OS X 10.5)
>>
>> /* matrixSetup.py */
>>
>> from distutils.core import setup, Extension
>> import numpy
>>
>> setup(name='matrix', version='1.0', ext_modules =[Extension('_matrix',
>> ['matrix.c','matrix.i'],
>> include_dirs = [numpy.get_include(),'.'])])
>>
>> /***/
>>
>> Everything seems to work fine ! But when I test my wrapped module in
>> python with a small NumPy array, here is what I get :
>>
>> >>> import matrix
>> >>> from numpy import *
>> >>> a = arange(10)
>> >>> matrix.vecSum(a,a.shape[0])
>> Traceback (most recent call last):
>> File "<stdin>", line 1, in <module>
>> TypeError: in method 'vecSum', argument 1 of type 'int *'
>>
>> How can I tell SWIG that my Integer NumPy array should represent an int*
>> array in C ?
>>
>> Thank you very much,
>>
>> Kevin
>>
>> ** Bill Spotz ** ** Sandia National Laboratories Voice: (505)845-0170 ** ** P.O. Box 5800 Fax: (505)284-0154 ** ** Albuquerque, NM 87185-0370 Email: wfspotz at sandia.gov **
>
> ** Bill Spotz ** ** Sandia National Laboratories Voice: (505)845-0170 ** ** P.O. Box 5800 Fax: (505)284-0154 ** ** Albuquerque, NM 87185-0370 Email: wfspotz at sandia.gov **
-------------- next part -------------- An HTML attachment was scrubbed... URL:
From numpy-discussion at maubp.freeserve.co.uk Mon Mar 30 05:59:30 2009 From: numpy-discussion at maubp.freeserve.co.uk (Peter) Date: Mon, 30 Mar 2009 10:59:30 +0100 Subject: [Numpy-discussion] [Announce] Numpy 1.3.0 rc1 In-Reply-To: <49CE2587.2050007@ar.media.kyoto-u.ac.jp> References: <49CE2587.2050007@ar.media.kyoto-u.ac.jp> Message-ID: <320fb6e00903300259l6b423497gcdb6658cb91dba1a@mail.gmail.com>
On Sat, Mar 28, 2009 at 2:26 PM, David Cournapeau wrote:
>
> Hi,
>
> I am pleased to announce the release of the rc1 for numpy
> 1.3.0.
You can find source tarballs and installers for both Mac OS X
> and Windows on the sourceforge page:
>
> https://sourceforge.net/projects/numpy/
For the beta release, I can see both numpy-1.3.0b1-win32-superpack-python2.5.exe and numpy-1.3.0b1-win32-superpack-python2.6.exe
However, for the first release candidate I can only see numpy-1.3.0rc1-win32-superpack-python2.5.exe - no Python 2.6 version.
Is this an oversight, or maybe some caching issue with the sourceforge mirror system? In the meantime I'll give the beta a go on Python 2.6 on my Windows XP machine... Thanks, Peter
From david at ar.media.kyoto-u.ac.jp Mon Mar 30 05:50:39 2009 From: david at ar.media.kyoto-u.ac.jp (David Cournapeau) Date: Mon, 30 Mar 2009 18:50:39 +0900 Subject: [Numpy-discussion] [Announce] Numpy 1.3.0 rc1 In-Reply-To: <320fb6e00903300259l6b423497gcdb6658cb91dba1a@mail.gmail.com> References: <49CE2587.2050007@ar.media.kyoto-u.ac.jp> <320fb6e00903300259l6b423497gcdb6658cb91dba1a@mail.gmail.com> Message-ID: <49D095EF.2020503@ar.media.kyoto-u.ac.jp>
Peter wrote:
> On Sat, Mar 28, 2009 at 2:26 PM, David Cournapeau wrote:
>
>> Hi,
>>
>> I am pleased to announce the release of the rc1 for numpy
>> 1.3.0. You can find source tarballs and installers for both Mac OS X
>> and Windows on the sourceforge page:
>>
>> https://sourceforge.net/projects/numpy/
>>
>
> For the beta release, I can see both
> numpy-1.3.0b1-win32-superpack-python2.5.exe and
> numpy-1.3.0b1-win32-superpack-python2.6.exe
>
> However, for the first release candidate I can only see
> numpy-1.3.0rc1-win32-superpack-python2.5.exe - no Python 2.6 version.
I uploaded it but forgot to update it on the sourceforge download page. This should be fixed, David
From cimrman3 at ntc.zcu.cz Mon Mar 30 06:08:46 2009 From: cimrman3 at ntc.zcu.cz (Robert Cimrman) Date: Mon, 30 Mar 2009 12:08:46 +0200 Subject: [Numpy-discussion] [Announce] Numpy 1.3.0 rc1 In-Reply-To: <49CE2587.2050007@ar.media.kyoto-u.ac.jp> References: <49CE2587.2050007@ar.media.kyoto-u.ac.jp> Message-ID: <49D09A2E.4090409@ntc.zcu.cz>
Hi,
It might be too late (I was off-line last week), but anyway: I have set the milestone for the ticket 1036 [1] to 1.4, but it does not change the existing functionality, brings some new ones, and the tests pass, so I wonder if it could get into the 1.3 release?
cheers, r.
[1] http://projects.scipy.org/numpy/ticket/1036
David Cournapeau wrote:
> Hi,
>
> I am pleased to announce the release of the rc1 for numpy
> 1.3.0. You can find source tarballs and installers for both Mac OS X
> and Windows on the sourceforge page:
>
> https://sourceforge.net/projects/numpy/
>
> The release notes for the 1.3.0 release are below,
>
> The Numpy developers
From jblaine at mitre.org Mon Mar 30 08:53:02 2009 From: jblaine at mitre.org (Jeff Blaine) Date: Mon, 30 Mar 2009 08:53:02 -0400 Subject: [Numpy-discussion] Failure with 1.3.0b1 under Solaris 10 SPARC In-Reply-To: References: <49CA85A2.10905@mitre.org> <49CEB36B.9040309@mitre.org> Message-ID: <49D0C0AE.3030006@mitre.org>
> What version of glibc do you have?
None. Solaris does not use GNU libc.
From wesmckinn at gmail.com Mon Mar 30 09:03:56 2009 From: wesmckinn at gmail.com (Wes McKinney) Date: Mon, 30 Mar 2009 09:03:56 -0400 Subject: [Numpy-discussion] np.savez not multi-processing safe, alternatives?
Message-ID: <6c476c8a0903300603n2a4f9d33scc8ee54deb692e37@mail.gmail.com>
I have a process that stores a number of sets of 3 arrays, which can either be stored as a few .npy files or an .npz file with the same keys in each file (let's say, writing roughly 10,000 npz files, all containing the same keys 'a', 'b', 'c'). If I run multiple processes on the same machine (desirable, since they are heavily database-IO-bound), over a period of hours some of the npz-writes will collide and fail due to the use of tempfile and tempfile.gettempdir() (either one of the .npy subfiles will be locked for writing or will get os.remove'd while the zip file is being written).
So my question -- any recommendations for a way around this, or would it be possible to change the savez function to make it less likely to happen? (I am on Win32)
Thanks, Wes -------------- next part -------------- An HTML attachment was scrubbed... URL:
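(A hedged workaround sketch for the collision described above: savez at the time wrote its temporary .npy members under tempfile.gettempdir(), using the array names as file names, and gettempdir() honors the tempfile.tempdir module global. Pointing that at a per-process directory keeps concurrent processes out of each other's way; the output file name below is made up:)

import os
import tempfile
import numpy as np

# one private temporary directory per process, so the fixed-name
# .npy temporaries written by np.savez can never clash across processes
tempfile.tempdir = tempfile.mkdtemp(prefix='npz-%d-' % os.getpid())

np.savez('result.npz', a=np.arange(3), b=np.zeros(2), c=np.ones(4))

(This removes the shared-directory name clashes; the proper fix, as Pauli notes later in this digest, is already in trunk as bug #852.)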
From jblaine at mitre.org Mon Mar 30 09:37:39 2009 From: jblaine at mitre.org (Jeff Blaine) Date: Mon, 30 Mar 2009 09:37:39 -0400 Subject: [Numpy-discussion] Failure with 1.3.0b1 under Solaris 10 SPARC In-Reply-To: <49D0C0AE.3030006@mitre.org> References: <49CA85A2.10905@mitre.org> <49CEB36B.9040309@mitre.org> <49D0C0AE.3030006@mitre.org> Message-ID: <49D0CB23.1050306@mitre.org>
FWIW, I solved this just now by removing Sun Studio from my PATH before build. It's clear that's a workaround though and the build process failed to determine something properly.
Jeff Blaine wrote:
>> What version of glibc do you have?
>
> None. Solaris does not use GNU libc.
>
> _______________________________________________ > Numpy-discussion mailing list > Numpy-discussion at scipy.org > http://mail.scipy.org/mailman/listinfo/numpy-discussion >
From rpyle at post.harvard.edu Mon Mar 30 10:06:26 2009 From: rpyle at post.harvard.edu (Robert Pyle) Date: Mon, 30 Mar 2009 10:06:26 -0400 Subject: [Numpy-discussion] Numpy 1.3.0 rc1 OS X Installer In-Reply-To: <49D0778C.5050706@ar.media.kyoto-u.ac.jp> References: <49CE2587.2050007@ar.media.kyoto-u.ac.jp> <5b8d13220903280904j6e4fc9f7of642ab2edc4ffde2@mail.gmail.com> <9EC23A11-4979-4DF7-AE6B-7C3BA61E6494@post.harvard.edu> <49CF2B55.3090601@ar.media.kyoto-u.ac.jp> <5b8d13220903290753g224ee96am72c1cd93db88f650@mail.gmail.com> <5b8d13220903292356r5e954b90lf92c9f170ec01b13@mail.gmail.com> <49D0778C.5050706@ar.media.kyoto-u.ac.jp> Message-ID: <2BACA6B8-43A0-421E-859B-E6593B588FFF@post.harvard.edu>
Hi David,
I decided to change the Subject line to be more apropos.
On Mar 30, 2009, at 3:41 AM, David Cournapeau wrote:
> David Cournapeau wrote:
>> On Mon, Mar 30, 2009 at 3:36 AM, Robert Pyle wrote:
>>
>>> I just installed 2.5.4 from python.org, and the OS X installer still
>>> doesn't work. This is on a PPC G5; I haven't tried it on my Intel
>>> MacBook Pro.
>>>
>
> Could you try this one ?
>
> http://www.ar.media.kyoto-u.ac.jp/members/david/archives/numpy/numpy-1.3.0rc1-py2.5-macosx10.5.mpkg.tbz2
This one installs, but only in /Library/Python/2.5/site-packages/, that is, for Apple's system python. This happened when `which python` pointed to either EPD python or python.org's 2.5.4.
> If it does not work, getting the /var/tmp/install.log would be helpful
> (the few last lines),
/var/tmp/ had a bunch of stuff in it, but no file named install.log. Perhaps that's because the installation succeeded?
Bob
From cournape at gmail.com Mon Mar 30 11:22:14 2009 From: cournape at gmail.com (David Cournapeau) Date: Tue, 31 Mar 2009 00:22:14 +0900 Subject: [Numpy-discussion] Numpy 1.3.0 rc1 OS X Installer In-Reply-To: <2BACA6B8-43A0-421E-859B-E6593B588FFF@post.harvard.edu> References: <49CE2587.2050007@ar.media.kyoto-u.ac.jp> <5b8d13220903280904j6e4fc9f7of642ab2edc4ffde2@mail.gmail.com> <9EC23A11-4979-4DF7-AE6B-7C3BA61E6494@post.harvard.edu> <49CF2B55.3090601@ar.media.kyoto-u.ac.jp> <5b8d13220903290753g224ee96am72c1cd93db88f650@mail.gmail.com> <5b8d13220903292356r5e954b90lf92c9f170ec01b13@mail.gmail.com> <49D0778C.5050706@ar.media.kyoto-u.ac.jp> <2BACA6B8-43A0-421E-859B-E6593B588FFF@post.harvard.edu> Message-ID: <5b8d13220903300822m682c7e26jb10975ecf1d9c723@mail.gmail.com>
On Mon, Mar 30, 2009 at 11:06 PM, Robert Pyle wrote:
> Hi David,
>
> I decided to change the Subject line to be more apropos.
>
> On Mar 30, 2009, at 3:41 AM, David Cournapeau wrote:
>
>> David Cournapeau wrote:
>>> On Mon, Mar 30, 2009 at 3:36 AM, Robert Pyle
>>> wrote:
>>>
>>>> I just installed 2.5.4 from python.org, and the OS X installer still
>>>> doesn't work. This is on a PPC G5; I haven't tried it on my Intel
>>>> MacBook Pro.
>>>>
>>
>> Could you try this one ?
>>
>> http://www.ar.media.kyoto-u.ac.jp/members/david/archives/numpy/numpy-1.3.0rc1-py2.5-macosx10.5.mpkg.tbz2
>
> This one installs, but only in /Library/Python/2.5/site-packages/,
> that is, for Apple's system python. This happened when `which python`
> pointed to either EPD python or python.org's 2.5.4.
Yes, what your default python is does not matter: I don't know the details, but it looks like the mac os x installer only looks whether a python binary exists in /System/Library/..., that is the one I used to build the package. You can see this in the Info.plist inside the .mpkg.
>> If it does not work, getting the /var/tmp/install.log would be helpful
>> (the few last lines),
>
>> /var/tmp/ had a bunch of stuff in it, but no file named
>> install.log. Perhaps that's because the installation succeeded?
It is because I mistyped the path... logs are in /var/log/install.log. I tried to find a way to at least print something about missing requirements, but it does not look like there is a lot of documentation out there on the apple installer. cheers, David
From Chris.Barker at noaa.gov Mon Mar 30 11:51:09 2009 From: Chris.Barker at noaa.gov (Chris Barker) Date: Mon, 30 Mar 2009 08:51:09 -0700 Subject: [Numpy-discussion] Numpy 1.3.0 rc1 OS X Installer In-Reply-To: <5b8d13220903300822m682c7e26jb10975ecf1d9c723@mail.gmail.com> References: <49CE2587.2050007@ar.media.kyoto-u.ac.jp> <5b8d13220903280904j6e4fc9f7of642ab2edc4ffde2@mail.gmail.com> <9EC23A11-4979-4DF7-AE6B-7C3BA61E6494@post.harvard.edu> <49CF2B55.3090601@ar.media.kyoto-u.ac.jp> <5b8d13220903290753g224ee96am72c1cd93db88f650@mail.gmail.com> <5b8d13220903292356r5e954b90lf92c9f170ec01b13@mail.gmail.com> <49D0778C.5050706@ar.media.kyoto-u.ac.jp> <2BACA6B8-43A0-421E-859B-E6593B588FFF@post.harvard.edu> <5b8d13220903300822m682c7e26jb10975ecf1d9c723@mail.gmail.com> Message-ID: <49D0EA6D.6050601@noaa.gov>
David Cournapeau wrote:
> On Mon, Mar 30, 2009 at 11:06 PM, Robert Pyle wrote:
>> This one installs, but only in /Library/Python/2.5/site-packages/,
>> that is, for Apple's system python. This happened when `which python`
>> pointed to either EPD python or python.org's 2.5.4.
> Yes, what your default python is does not matter: I don't know the
> details, but it looks like the mac os x installer only looks whether a
> python binary exists in /System/Library/..., that is the one I used to
> build the package. You can see this in the Info.plist inside the
> .mpkg.
Well, this is the big question: what python(s) we should provide binaries for -- I think if you're only going to do one, it should be the python.org build, so that you can support 10.4, and 10.5 and everyone can use it.
There are ways to build an installer that puts it in a place that both can find it -- wxPython does this -- but I'm not so sure that's a good idea.
One of the key questions is how one should think of Apple's Python. They are using it for some system tools, so we really shouldn't break it. If you upgrade the numpy it comes with, there is some chance that you could break something.
Also, Apple has not (and likely won't) upgrade their Python. I know I happened to run into a bug and needed a newer 2.5, so I'd rather have the control.
A few years ago the MacPython community (as represented by the members of the pythonmac list) decided that the python.org build was the one that we should all target for binaries. That consensus has weakened with 10.5, as Apple did provide a Python that is fairly up to date and almost fully functional, but I think it's still a lot easier on everyone if we just stick with the python.org build as the one to target for binaries.
That being said, it shouldn't be hard to build separate binaries for each python -- they would be identical except for where they get installed, and if they are clearly marked for downloading, there shouldn't be too much confusion. -Chris
From cournape at gmail.com Mon Mar 30 12:19:29 2009 From: cournape at gmail.com (David Cournapeau) Date: Tue, 31 Mar 2009 01:19:29 +0900 Subject: [Numpy-discussion] Numpy 1.3.0 rc1 OS X Installer In-Reply-To: <49D0EA6D.6050601@noaa.gov> References: <49CE2587.2050007@ar.media.kyoto-u.ac.jp> <49CF2B55.3090601@ar.media.kyoto-u.ac.jp> <5b8d13220903290753g224ee96am72c1cd93db88f650@mail.gmail.com> <5b8d13220903292356r5e954b90lf92c9f170ec01b13@mail.gmail.com> <49D0778C.5050706@ar.media.kyoto-u.ac.jp> <2BACA6B8-43A0-421E-859B-E6593B588FFF@post.harvard.edu> <5b8d13220903300822m682c7e26jb10975ecf1d9c723@mail.gmail.com> <49D0EA6D.6050601@noaa.gov> Message-ID: <5b8d13220903300919s13702969vd54c70c3a928fc0c@mail.gmail.com>
On Tue, Mar 31, 2009 at 12:51 AM, Chris Barker wrote:
> Well, this is the big question: what python(s) we should provide
> binaries for -- I think if you're only going to do one, it should be the
> python.org build, so that you can support 10.4, and 10.5 and everyone
> can use it.
I don't really care, as long as there is only one. Maintaining binaries for every python out there is too time consuming. Given that mac os x is the easiest platform to build numpy/scipy on, that's not something i am interested in.
> There are ways to build an installer that puts it in a place that both
> can find it -- wxPython does this -- but I'm not so sure that's a good idea.
there is the problem of compatibility. I am not sure whether Apple python and python.org are ABI compatible - even if the version is the same, you can certainly build incompatible python (I don't know if that's the case on mac os).
> Also, Apple has not (and likely won't) upgrade their Python. I know I
happened to run into a bug and needed a newer 2.5, so I'd rather have the control.
That's a rather convincing argument.
I will thus build binaries against python.org binaries (I still have to find a way to guarantee this in the build script, but that should not be too difficult).
> That being said, it shouldn't be hard to build separate binaries for
> each python -- they would be identical except for where they get
> installed, and if they are clearly marked for downloading, there
> shouldn't be too much confusion.
My experience is that every choice presented to the user makes for more problems. And that just takes too much time. I prefer spending time making a few good installers rather than many half-baked ones.
Ideally, we should have something which could install on every python version, but oh well, David
From jh at physics.ucf.edu Mon Mar 30 12:26:39 2009 From: jh at physics.ucf.edu (Joe Harrington) Date: Mon, 30 Mar 2009 12:26:39 -0400 Subject: [Numpy-discussion] JOB: write numpy docs Message-ID:
Last year's Doc Marathon got us off to a great start on documenting NumPy! But, there's still much work to be done, and SciPy after that. It's time to gear up for doing it again. Critical to last year's success was Stefan van der Walt's committed time, but he will be unable to play that role this year. So, I am looking to hire someone to write NumPy docs and help coordinate the doc project and its volunteers.
The job includes working with me, the doc team, doc volunteers, and developers to:
  write and review a lot of docs, mainly those that others don't want to write
  help define milestones
  organize campaigns and volunteer teams to meet them
  research the NumPy and SciPy source codes to help plan:
    the eventual SciPy documentation
    the writing of a good User Manual
  work with the packaging team to meet their release deadlines
  perform other duties as assigned
I am seeking someone to work full time if possible, and at least half time, from mid-April (or soon thereafter) through at least the (northern) summer. Candidates must be experienced NumPy and SciPy programmers; familiarity under the hood is a strong plus. They must also demonstrate their ability to produce excellent docs on the docs.SciPy.org wiki. Having contributed at a high level to an open-source community, especially to SciPy, is a big plus. Ability to take direction, work with and lead a team, and to work for extended periods without direct supervision on a list of assigned tasks are all critical. The applicant must be able to function well in a Linux environment; familiarity with multiple platforms is a plus.
Please reply directly to me by email only. Include the following (PDF or ASCII formats strongly preferred):
  CV
  Statement of interest, qualifications per requirements above, availability, and wage expectations.
  Contact info for at least 3 professional references.
  Links to doc wiki pages for which you wrote the initial draft
  Links to doc wiki pages started by others to which you contributed significantly (edited, reviewed, proofed)
The position is open until filled; candidates with complete applications by April 15 will receive full consideration. This is an open posting. Candidates who have not written any pages on the doc wiki yet have several weeks in which to do so. Pay will be commensurate with experience (up to a point). Relocation is not necessary. Candidates will need to provide their own computer and internet access. The University of Central Florida is an equal opportunity, equal access, affirmative action employer.
--jh-- Prof. Joseph Harrington Department of Physics MAP 414 4000 Central Florida Blvd.
University of Central Florida Orlando, FL 32816-2385 (407) 823-3416 voice (407) 823-5112 fax (407) 823-2325 physics office jh at physics.ucf.edu From sienkiew at stsci.edu Mon Mar 30 12:28:38 2009 From: sienkiew at stsci.edu (Mark Sienkiewicz) Date: Mon, 30 Mar 2009 12:28:38 -0400 Subject: [Numpy-discussion] Numpy 1.3.0 rc1 fails find_duplicates on Solaris In-Reply-To: <49CE2587.2050007@ar.media.kyoto-u.ac.jp> References: <49CE2587.2050007@ar.media.kyoto-u.ac.jp> Message-ID: <49D0F336.7070800@stsci.edu> Numpy 1.3.0 rc1 fails this self-test on Solaris. ====================================================================== FAIL: Test find_duplicates ---------------------------------------------------------------------- Traceback (most recent call last): File "/usr/ra/pyssg/2.5.1/numpy/lib/tests/test_recfunctions.py", line 163, in test_find_duplicates assert_equal(test[0], a[control]) File "/usr/stsci/pyssgdev/2.5.1/numpy/ma/testutils.py", line 121, in assert_equal return assert_array_equal(actual, desired, err_msg) File "/usr/stsci/pyssgdev/2.5.1/numpy/ma/testutils.py", line 193, in assert_array_equal header='Arrays are not equal') File "/usr/stsci/pyssgdev/2.5.1/numpy/ma/testutils.py", line 186, in assert_array_compare verbose=verbose, header=header) File "/usr/stsci/pyssgdev/2.5.1/numpy/testing/utils.py", line 395, in assert_array_compare raise AssertionError(msg) AssertionError: Arrays are not equal (mismatch 50.0%) x: array([(1, (2.0, 'B')), (2, (2.0, 'B')), (2, (2.0, 'B')), (1, (2.0, 'B'))], dtype=[('A', '>i4'), ('B', [('BA', '>f8'), ('BB', '|S1')])]) y: array([(2, (2.0, 'B')), (1, (2.0, 'B')), (2, (2.0, 'B')), (1, (2.0, 'B'))], dtype=[('A', '>i4'), ('B', [('BA', '>f8'), ('BB', '|S1')])]) ---------------------------------------------------------------------- The software I am using: NumPy version 1.3.0rc1 Python version 2.5.1 (r251:54863, Jun 4 2008, 15:48:19) [C] nose version 0.10.4 I think this identifies the compilers it was built with: customize SunFCompiler Found executable /opt/SUNWspro-6u2/bin/f90 Could not locate executable echo ranlib customize SunFCompiler customize SunFCompiler using config C compiler: cc -DNDEBUG -O -xcode=pic32 It passes in Python 2.5.1 on these machines: x86 macintosh, 32 bit Red Hat Enterprise 4 Linux, x86, 32 bit RHE 3, x86, 32 bit RHE 4, x86, 64 bit PowerPC mac, 32 bit (Yes, even the PPC mac.) I see that this is the same problem as http://projects.scipy.org/numpy/ticket/1039 but the data used in the test is different. Mark S. From Chris.Barker at noaa.gov Mon Mar 30 13:10:16 2009 From: Chris.Barker at noaa.gov (Chris Barker) Date: Mon, 30 Mar 2009 10:10:16 -0700 Subject: [Numpy-discussion] Numpy 1.3.0 rc1 OS X Installer In-Reply-To: <5b8d13220903300919s13702969vd54c70c3a928fc0c@mail.gmail.com> References: <49CE2587.2050007@ar.media.kyoto-u.ac.jp> <49CF2B55.3090601@ar.media.kyoto-u.ac.jp> <5b8d13220903290753g224ee96am72c1cd93db88f650@mail.gmail.com> <5b8d13220903292356r5e954b90lf92c9f170ec01b13@mail.gmail.com> <49D0778C.5050706@ar.media.kyoto-u.ac.jp> <2BACA6B8-43A0-421E-859B-E6593B588FFF@post.harvard.edu> <5b8d13220903300822m682c7e26jb10975ecf1d9c723@mail.gmail.com> <49D0EA6D.6050601@noaa.gov> <5b8d13220903300919s13702969vd54c70c3a928fc0c@mail.gmail.com> Message-ID: <49D0FCF8.5040908@noaa.gov> David Cournapeau wrote: > I don't really care, as long as there is only one. Maintaining binaries > for every python out there is too time consuming. Given that mac os X > is the easiest platform to build numpy/scipy on, I assume you meant NOT the easiest? 
;-)
> that's not something i am interested in.
quite understandable.
>> There are ways to build an installer that puts it in a place that both
>> can find it -- wxPython does this -- but I'm not so sure that's a good idea.
>
> there is the problem of compatibility. I am not sure whether Apple
> python and python.org are ABI compatible
In theory, yes, and in practice, it seems to be working for wxPython. However, I agree that it's a bit risky. I'm at the PyCon MacPython sprint as we type -- and apparently Apple's is linked with the 10.5 sdk, whereas python.org's is linked against the 10.3 sdk -- so there could be issues.
>> I will thus build binaries
>> against python.org binaries (I still have to find a way to guarantee
>> this in the build script, but that should not be too difficult).
Hardcoding the path to python should work:
PYTHON=/Library/Frameworks/Python.framework/Versions/2.5/bin/python
> My experience is that every choice presented to the user makes for
> more problems. And that just takes too much time. I prefer spending
> time making a few good installers rather than many half-baked ones.
I agree -- and most packages I use seem to support python.org exclusively for binaries.
> Ideally, we should have something which could install on every python
> version, but oh well,
well, I guess that's the promise of easy_install -- but someone would have to build all the binary eggs... and there were weird issues with universal eggs on the mac that I understand have been fixed in 2.6, but not 2.5. Thanks for all your work on this, -Chris
From cournape at gmail.com Mon Mar 30 13:27:04 2009 From: cournape at gmail.com (David Cournapeau) Date: Tue, 31 Mar 2009 02:27:04 +0900 Subject: [Numpy-discussion] Numpy 1.3.0 rc1 OS X Installer In-Reply-To: <49D0FCF8.5040908@noaa.gov> References: <49CE2587.2050007@ar.media.kyoto-u.ac.jp> <5b8d13220903290753g224ee96am72c1cd93db88f650@mail.gmail.com> <5b8d13220903292356r5e954b90lf92c9f170ec01b13@mail.gmail.com> <49D0778C.5050706@ar.media.kyoto-u.ac.jp> <2BACA6B8-43A0-421E-859B-E6593B588FFF@post.harvard.edu> <5b8d13220903300822m682c7e26jb10975ecf1d9c723@mail.gmail.com> <49D0EA6D.6050601@noaa.gov> <5b8d13220903300919s13702969vd54c70c3a928fc0c@mail.gmail.com> <49D0FCF8.5040908@noaa.gov> Message-ID: <5b8d13220903301027n76634921hf4dc4b1818392ab3@mail.gmail.com>
On Tue, Mar 31, 2009 at 2:10 AM, Chris Barker wrote:
> David Cournapeau wrote:
>> I don't really care, as long as there is only one. Maintaining binaries
>> for every python out there is too time consuming. Given that mac os X
>> is the easiest platform to build numpy/scipy on,
>
> I assume you meant NOT the easiest? ;-)
Actually, no, I meant it :) It has gcc, which is the best supported compiler by numpy and scipy, there is almost no problem with g77, and the optimized blas/lapack is provided by the OS vendor, meaning no ABI issues, weird atlas build errors, etc... It is almost impossible to get the build wrong on mac os x once you get the right fortran compiler.
> In theory, yes, and in practice, it seems to be working for wxPython.
> However, I agree that it's a bit risky. I'm at the PyCon MacPython
> sprint as we type -- and apparently Apple's is linked with the 10.5 sdk,
> whereas python.org's is linked against the 10.3 sdk -- so there could be
> issues.
I am almost certain there are issues in some configurations, in particular x86_64.
I don't know the details, but I have seen mentioned several times this kind of problem: http://osdir.com/ml/python-dev/2009-02/msg00339.html I can see how this could cause trouble.
>>> I will thus build binaries
>>> against python.org binaries (I still have to find a way to guarantee
>>> this in the build script, but that should not be too difficult).
>
> Hardcoding the path to python should work:
>
> PYTHON=/Library/Frameworks/Python.framework/Versions/2.5/bin/python
Well, yes, but you can't really control this in the bdist_mpkg command. Also, my current paver file uses virtualenv to build an isolated numpy - that's what breaks the .mpkg, but I like this approach for building, so I would like to keep it as much as possible.
> well, I guess that's the promise of easy_install -- but someone would
> have to build all the binary eggs... and there were weird issues with
> universal eggs on the mac that I understand have been fixed in 2.6, but
> not 2.5
There are numerous problems with eggs (or more precisely, with "easy" install), which I am just not interested in getting into. In particular, it often breaks the user system - fixing it is easy for developers/"power users", but is a PITA for normal users. As long as easy_install is broken, I don't want to use it. cheers, David
From jsilva at fc.up.pt Mon Mar 30 14:13:17 2009 From: jsilva at fc.up.pt (=?ISO-8859-1?Q?Jo=E3o_Lu=EDs_Silva?=) Date: Mon, 30 Mar 2009 19:13:17 +0100 Subject: [Numpy-discussion] Optical autocorrelation calculated with numpy is slow Message-ID:
Hi,
I wrote a script to calculate the *optical* autocorrelation of an electric field. It's like the autocorrelation, but sums the fields instead of multiplying them. I'm calculating
I(tau) = integral( abs(E(t)+E(t-tau))**2,t=-inf..inf)
with the script appended at the end. It's too slow for my purposes (takes ~5 seconds, and scales ~O(N**2)). numpy's correlate is fast enough, but isn't what I need as it multiplies instead of adds the fields. Could you help me get this script to run faster (without having to write it in another programming language) ?
Thanks, João Silva
#--------------------------------------------------------

import numpy as np
#import matplotlib.pyplot as plt

n = 2**12
n_autocorr = 3*n-2

c = 3E2
w0 = 2.0*np.pi*c/800.0
t_max = 100.0
t = np.linspace(-t_max/2.0,t_max/2.0,n)

E = np.exp(-(t/10.0)**2)*np.exp(1j*w0*t)    #Electric field

dt = t[1]-t[0]
t_autocorr=np.linspace(-dt*n_autocorr/2.0,dt*n_autocorr/2.0,n_autocorr)
E1 = np.zeros(n_autocorr,dtype=E.dtype)
E2 = np.zeros(n_autocorr,dtype=E.dtype)
Ac = np.zeros(n_autocorr,dtype=np.float64)

E2[n-1:n-1+n] = E[:]

for i in range(2*n-2):
    E1[:] = 0.0
    E1[i:i+n] = E[:]

    Ac[i] = np.sum(np.abs(E1+E2)**2)

Ac *= dt

#plt.plot(t_autocorr,Ac)
#plt.show()

#--------------------------------------------------------
From peridot.faceted at gmail.com Mon Mar 30 14:23:12 2009 From: peridot.faceted at gmail.com (Anne Archibald) Date: Mon, 30 Mar 2009 14:23:12 -0400 Subject: [Numpy-discussion] Optical autocorrelation calculated with numpy is slow In-Reply-To: References: Message-ID:
2009/3/30 João Luís Silva :
> Hi,
>
> I wrote a script to calculate the *optical* autocorrelation of an
> electric field. It's like the autocorrelation, but sums the fields
> instead of multiplying them. I'm calculating
>
> I(tau) = integral( abs(E(t)+E(t-tau))**2,t=-inf..inf)
You may be in trouble if there's cancellation, but can't you just rewrite this as E(t)**2+E(t-tau)**2+2*E(t)*E(t-tau)? Then you have two O(n) integrals and one standard autocorrelation...
Anne
> with the script appended at the end. It's too slow for my purposes (takes ~5
> seconds, and scales ~O(N**2)). numpy's correlate is fast enough, but
> isn't what I need as it multiplies instead of adds the fields. Could you
> help me get this script to run faster (without having to write it in
> another programming language) ?
>
> Thanks,
> João Silva
>
> #--------------------------------------------------------
>
> import numpy as np
> #import matplotlib.pyplot as plt
>
> n = 2**12
> n_autocorr = 3*n-2
>
> c = 3E2
> w0 = 2.0*np.pi*c/800.0
> t_max = 100.0
> t = np.linspace(-t_max/2.0,t_max/2.0,n)
>
> E = np.exp(-(t/10.0)**2)*np.exp(1j*w0*t)    #Electric field
>
> dt = t[1]-t[0]
> t_autocorr=np.linspace(-dt*n_autocorr/2.0,dt*n_autocorr/2.0,n_autocorr)
> E1 = np.zeros(n_autocorr,dtype=E.dtype)
> E2 = np.zeros(n_autocorr,dtype=E.dtype)
> Ac = np.zeros(n_autocorr,dtype=np.float64)
>
> E2[n-1:n-1+n] = E[:]
>
> for i in range(2*n-2):
>     E1[:] = 0.0
>     E1[i:i+n] = E[:]
>
>     Ac[i] = np.sum(np.abs(E1+E2)**2)
>
> Ac *= dt
>
> #plt.plot(t_autocorr,Ac)
> #plt.show()
>
> #--------------------------------------------------------
>
> _______________________________________________ > Numpy-discussion mailing list > Numpy-discussion at scipy.org > http://mail.scipy.org/mailman/listinfo/numpy-discussion >
From charlesr.harris at gmail.com Mon Mar 30 14:39:47 2009 From: charlesr.harris at gmail.com (Charles R Harris) Date: Mon, 30 Mar 2009 12:39:47 -0600 Subject: [Numpy-discussion] Optical autocorrelation calculated with numpy is slow In-Reply-To: References: Message-ID:
On Mon, Mar 30, 2009 at 12:23 PM, Anne Archibald wrote:
> 2009/3/30 João Luís Silva :
> > Hi,
> >
> > I wrote a script to calculate the *optical* autocorrelation of an
> > electric field. It's like the autocorrelation, but sums the fields
> > instead of multiplying them. I'm calculating
> >
> > I(tau) = integral( abs(E(t)+E(t-tau))**2,t=-inf..inf)
>
> You may be in trouble if there's cancellation, but can't you just
> rewrite this as E(t)**2+E(t-tau)**2+2*E(t)*E(t-tau)? Then you have two
> O(n) integrals and one standard autocorrelation...
>
That should work. The first two integrals are actually the same, but need to be E(t)*E(t).conj(). The second integral needs twice the real part of E(t)*E(t-tau).conj(). Numpy correlate should really have the conjugate built in, but it doesn't.
Chuck -------------- next part -------------- An HTML attachment was scrubbed... URL:
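(A sketch of the rewrite Anne and Chuck outline; the function name is mine. Expanding |E1+E2|**2 = |E1|**2 + |E2|**2 + 2*Re(E1*conj(E2)), and noting that with the zero padding the first two terms are each the constant total power, the whole O(N**2) loop collapses to a single correlation. Since np.correlate does not conjugate, conj(E) is passed explicitly:)

import numpy as np

def optical_autocorr(E, dt):
    total_power = np.sum(np.abs(E) ** 2)
    cross = np.correlate(E, np.conj(E), mode='full')   # one value per lag
    return dt * (2.0 * total_power + 2.0 * np.real(cross))

(For the n = 2**12 case in the script, this replaces roughly 8000 Python-level passes over the padded arrays with one C-level correlation.)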
From Chris.Barker at noaa.gov Mon Mar 30 14:59:18 2009 From: Chris.Barker at noaa.gov (Chris Barker) Date: Mon, 30 Mar 2009 11:59:18 -0700 Subject: [Numpy-discussion] Numpy 1.3.0 rc1 OS X Installer In-Reply-To: <5b8d13220903301027n76634921hf4dc4b1818392ab3@mail.gmail.com> References: <49CE2587.2050007@ar.media.kyoto-u.ac.jp> <5b8d13220903290753g224ee96am72c1cd93db88f650@mail.gmail.com> <5b8d13220903292356r5e954b90lf92c9f170ec01b13@mail.gmail.com> <49D0778C.5050706@ar.media.kyoto-u.ac.jp> <2BACA6B8-43A0-421E-859B-E6593B588FFF@post.harvard.edu> <5b8d13220903300822m682c7e26jb10975ecf1d9c723@mail.gmail.com> <49D0EA6D.6050601@noaa.gov> <5b8d13220903300919s13702969vd54c70c3a928fc0c@mail.gmail.com> <49D0FCF8.5040908@noaa.gov> <5b8d13220903301027n76634921hf4dc4b1818392ab3@mail.gmail.com> Message-ID: <49D11686.4010305@noaa.gov>
David Cournapeau wrote:
> On Tue, Mar 31, 2009 at 2:10 AM, Chris Barker wrote:
>> I assume you meant NOT the easiest? ;-)
>
> Actually, no, I meant it :) It has gcc, which is the best supported
> compiler by numpy and scipy, there is almost no problem with g77, and
> the optimized blas/lapack is provided by the OS vendor, meaning no ABI
> issues, weird atlas build errors, etc... It is almost impossible to get
> the build wrong on mac os x once you get the right fortran compiler.
I see -- well that's good news. I've found the Universal library requirements to be a pain sometimes, and it probably would be here if Apple wasn't giving us lapack/blas.
> I am almost certain there are issues in some configurations, in
> particular x86_64.
Well, neither Apple nor python.org's builds are 64 bit anyway at this point. There is talk of quad (i386, x86_64, ppc and ppc64) builds in the future, though.
>>> I will thus build binaries
>>> against python.org binaries (I still have to find a way to guarantee
>>> this in the build script, but that should not be too difficult).
>> Hardcoding the path to python should work:
>> PYTHON=/Library/Frameworks/Python.framework/Versions/2.5/bin/python
> Well, yes, but you can't really control this in the bdist_mpkg
> command.
bdist_mpkg should do "the right thing" if it's run with the right python. So you need to make sure you run: /Library/Frameworks/Python.framework/Versions/2.5/bin/bdist_mpkg Rather than whatever one happens to be found on your PATH.
> Also, my current paver file uses virtualenv to build an
> isolated numpy - that's what breaks the .mpkg, but I like this
> approach for building, so I would like to keep it as much as possible.
Well, maybe we need to hack bdist_mpkg to support this, we're pretty sure that it is possible. I want to make sure I understand what you want: Do you want to be able to build numpy in a virtualenv, and then build an mpkg that will install into the user's regular Framework? Do you want to be able to build an mpkg that users can install into the virtualenv of their choice? Both? Of course, easy_install can do that, when it works!
> There are numerous problems with eggs (or more precisely, with "easy"
> install), which I am just not interested in getting into.
me neither --
> In
> particular, it often breaks the user system - fixing it is easy for
> developers/"power users", but is a PITA for normal users. As long as
> easy_install is broken, I don't want to use it.
We were just talking about some of that last night -- we really need an "easy_uninstall" for instance. I'm going to poke into bdist_mpkg a bit right now. By the way, for the libgfortran issue, while statically linking it may be the best option, it wouldn't be too hard to have the mpkg include and install /usr/local/lib/libgfortran.dylib (or whatever). -Chris
From pav at iki.fi Mon Mar 30 15:14:56 2009 From: pav at iki.fi (Pauli Virtanen) Date: Mon, 30 Mar 2009 19:14:56 +0000 (UTC) Subject: [Numpy-discussion] np.savez not multi-processing safe, alternatives? References: <6c476c8a0903300603n2a4f9d33scc8ee54deb692e37@mail.gmail.com> Message-ID:
Mon, 30 Mar 2009 09:03:56 -0400, Wes McKinney wrote:
> I have a process that stores a number of sets of 3 arrays, which can
> either be stored as a few .npy files or an .npz file with the same
> keys in each file (let's say, writing roughly 10,000 npz files, all
> containing the same keys 'a', 'b', 'c'). >
If I run multiple processes on the same machine (desirable, since they are heavily database-IO-bound), over
> a period of hours some of the npz-writes will collide and fail due to
> the use of tempfile and tempfile.gettempdir() (either one of the .npy
> subfiles will be locked for writing or will get os.remove'd while the
> zip file is being written).
This is bug #852, it's fixed in trunk. As a workaround for the present, you may want to grab the `savez` function from http://projects.scipy.org/numpy/browser/trunk/numpy/lib/io.py#L243 and use a copy of it in your code temporarily. The function is fairly small.
-- Pauli Virtanen
From charlesr.harris at gmail.com Mon Mar 30 16:03:17 2009 From: charlesr.harris at gmail.com (Charles R Harris) Date: Mon, 30 Mar 2009 14:03:17 -0600 Subject: [Numpy-discussion] Numpy 1.3.0 rc1 fails find_duplicates on Solaris In-Reply-To: <49D0F336.7070800@stsci.edu> References: <49CE2587.2050007@ar.media.kyoto-u.ac.jp> <49D0F336.7070800@stsci.edu> Message-ID:
On Mon, Mar 30, 2009 at 10:28 AM, Mark Sienkiewicz wrote:
> Numpy 1.3.0 rc1 fails this self-test on Solaris.
>
> ======================================================================
> FAIL: Test find_duplicates
> ----------------------------------------------------------------------
> Traceback (most recent call last):
> File "/usr/ra/pyssg/2.5.1/numpy/lib/tests/test_recfunctions.py", line
> 163, in test_find_duplicates
> assert_equal(test[0], a[control])
> File "/usr/stsci/pyssgdev/2.5.1/numpy/ma/testutils.py", line 121, in
> assert_equal
> return assert_array_equal(actual, desired, err_msg)
> File "/usr/stsci/pyssgdev/2.5.1/numpy/ma/testutils.py", line 193, in
> assert_array_equal
> header='Arrays are not equal')
> File "/usr/stsci/pyssgdev/2.5.1/numpy/ma/testutils.py", line 186, in
> assert_array_compare
> verbose=verbose, header=header)
> File "/usr/stsci/pyssgdev/2.5.1/numpy/testing/utils.py", line 395, in
> assert_array_compare
> raise AssertionError(msg)
> AssertionError:
> Arrays are not equal
>
> (mismatch 50.0%)
> x: array([(1, (2.0, 'B')), (2, (2.0, 'B')), (2, (2.0, 'B')), (1, (2.0,
> 'B'))],
> dtype=[('A', '>i4'), ('B', [('BA', '>f8'), ('BB', '|S1')])])
> y: array([(2, (2.0, 'B')), (1, (2.0, 'B')), (2, (2.0, 'B')), (1, (2.0,
> 'B'))],
> dtype=[('A', '>i4'), ('B', [('BA', '>f8'), ('BB', '|S1')])])
>
> ----------------------------------------------------------------------
>
> The software I am using:
>
> NumPy version 1.3.0rc1
> Python version 2.5.1 (r251:54863, Jun 4 2008, 15:48:19) [C]
> nose version 0.10.4
>
From charlesr.harris at gmail.com Mon Mar 30 16:03:17 2009
From: charlesr.harris at gmail.com (Charles R Harris)
Date: Mon, 30 Mar 2009 14:03:17 -0600
Subject: [Numpy-discussion] Numpy 1.3.0 rc1 fails find_duplicates on Solaris
In-Reply-To: <49D0F336.7070800@stsci.edu>
References: <49CE2587.2050007@ar.media.kyoto-u.ac.jp> <49D0F336.7070800@stsci.edu>
Message-ID: 

On Mon, Mar 30, 2009 at 10:28 AM, Mark Sienkiewicz wrote:

> Numpy 1.3.0 rc1 fails this self-test on Solaris.
>
> ======================================================================
> FAIL: Test find_duplicates
> ----------------------------------------------------------------------
> Traceback (most recent call last):
> File "/usr/ra/pyssg/2.5.1/numpy/lib/tests/test_recfunctions.py", line
> 163, in test_find_duplicates
> assert_equal(test[0], a[control])
> File "/usr/stsci/pyssgdev/2.5.1/numpy/ma/testutils.py", line 121, in
> assert_equal
> return assert_array_equal(actual, desired, err_msg)
> File "/usr/stsci/pyssgdev/2.5.1/numpy/ma/testutils.py", line 193, in
> assert_array_equal
> header='Arrays are not equal')
> File "/usr/stsci/pyssgdev/2.5.1/numpy/ma/testutils.py", line 186, in
> assert_array_compare
> verbose=verbose, header=header)
> File "/usr/stsci/pyssgdev/2.5.1/numpy/testing/utils.py", line 395, in
> assert_array_compare
> raise AssertionError(msg)
> AssertionError:
> Arrays are not equal
>
> (mismatch 50.0%)
> x: array([(1, (2.0, 'B')), (2, (2.0, 'B')), (2, (2.0, 'B')), (1, (2.0,
> 'B'))],
> dtype=[('A', '>i4'), ('B', [('BA', '>f8'), ('BB', '|S1')])])
> y: array([(2, (2.0, 'B')), (1, (2.0, 'B')), (2, (2.0, 'B')), (1, (2.0,
> 'B'))],
> dtype=[('A', '>i4'), ('B', [('BA', '>f8'), ('BB', '|S1')])])
>
> ----------------------------------------------------------------------
>
> The software I am using:
>
> NumPy version 1.3.0rc1
> Python version 2.5.1 (r251:54863, Jun 4 2008, 15:48:19) [C]
> nose version 0.10.4

These are new (two months old) tests. Hmm, they are also marked as known
failures on win32. I wonder why they fail there and not on linux? I think
you should open a ticket for this.

Chuck

From sienkiew at stsci.edu Mon Mar 30 17:04:21 2009
From: sienkiew at stsci.edu (Mark Sienkiewicz)
Date: Mon, 30 Mar 2009 17:04:21 -0400
Subject: [Numpy-discussion] Numpy 1.3.0 rc1 fails find_duplicates on Solaris
In-Reply-To: 
References: <49CE2587.2050007@ar.media.kyoto-u.ac.jp> <49D0F336.7070800@stsci.edu>
Message-ID: <49D133D5.6020707@stsci.edu>

>> ======================================================================
>> FAIL: Test find_duplicates
>> ----------------------------------------------------------------------
>> (...)
>
> These are new (two months old) tests. Hmm, they are also marked as known
> failures on win32. I wonder why they fail there and not on linux? I think
> you should open a ticket for this.

I'm not sure how old the test is, but I see that it has been failing
since Feb 1. (That is the earliest report I have online at the moment.)

The ticket is http://projects.scipy.org/numpy/ticket/1039 . I added
this specific failure mode to the ticket today.

It does not surprise me at all when the trunk is broken on solaris. I'm
mentioning it on the list because I see it is still broken in the
release candidate. I assume somebody would want to either fix the
problem or remove the non-working feature from the release.

Mark S.

From pav at iki.fi Mon Mar 30 17:12:35 2009
From: pav at iki.fi (Pauli Virtanen)
Date: Mon, 30 Mar 2009 21:12:35 +0000 (UTC)
Subject: [Numpy-discussion] Numpy 1.3.0 rc1 fails find_duplicates on Solaris
References: <49CE2587.2050007@ar.media.kyoto-u.ac.jp> <49D0F336.7070800@stsci.edu>
Message-ID: 

Mon, 30 Mar 2009 14:03:17 -0600, Charles R Harris wrote:
> On Mon, Mar 30, 2009 at 10:28 AM, Mark Sienkiewicz
> wrote:
>
>> Numpy 1.3.0 rc1 fails this self-test on Solaris.
[clip]
>> ======================================================================
>> FAIL: Test find_duplicates
>> ----------------------------------------------------------------------
>> assert_equal(test[0], a[control])
>>
>> x: array([(1, (2.0, 'B')), (2, (2.0, 'B')), (2, (2.0, 'B')), (1, (2.0,
>> 'B'))],
>> dtype=[('A', '>i4'), ('B', [('BA', '>f8'), ('BB', '|S1')])])
>> y: array([(2, (2.0, 'B')), (1, (2.0, 'B')), (2, (2.0, 'B')), (1, (2.0,
>> 'B'))],
>> dtype=[('A', '>i4'), ('B', [('BA', '>f8'), ('BB', '|S1')])])

The data seems to be in a different order in the index array and in the
data array returned by `find_duplicates`. Is it intended that
find_duplicates guarantee that the returned indices correspond to the
returned values?

Another question: 'recfunctions' is not imported anywhere in numpy?

(BTW, it might be good not to keep commented-out code such as those
np.knownfail decorators in the repository, unless it's explained why it's
commented out...)

-- Pauli Virtanen

From charlesr.harris at gmail.com Mon Mar 30 17:15:06 2009
From: charlesr.harris at gmail.com (Charles R Harris)
Date: Mon, 30 Mar 2009 15:15:06 -0600
Subject: [Numpy-discussion] Numpy 1.3.0 rc1 fails find_duplicates on Solaris
In-Reply-To: <49D133D5.6020707@stsci.edu>
References: <49CE2587.2050007@ar.media.kyoto-u.ac.jp> <49D0F336.7070800@stsci.edu> <49D133D5.6020707@stsci.edu>
Message-ID: 

On Mon, Mar 30, 2009 at 3:04 PM, Mark Sienkiewicz wrote:

> >> ======================================================================
> >> FAIL: Test find_duplicates
> >> (...)
>
> I'm not sure how old the test is, but I see that it has been failing
> since Feb 1. (That is the earliest report I have online at the moment.)
>
> The ticket is http://projects.scipy.org/numpy/ticket/1039 . I added
> this specific failure mode to the ticket today.
>
> It does not surprise me at all when the trunk is broken on solaris. I'm
> mentioning it on the list because I see it is still broken in the
> release candidate. I assume somebody would want to either fix the
> problem or remove the non-working feature from the release.

I'm guessing that it is the test that needs fixing. And maybe the windows
problem is related.

Chuck

From bsouthey at gmail.com Mon Mar 30 17:16:19 2009
From: bsouthey at gmail.com (Bruce Southey)
Date: Mon, 30 Mar 2009 16:16:19 -0500
Subject: [Numpy-discussion] DVCS at PyCon
In-Reply-To: <5b8d13220903280852l17c578d7hadddbc40873f8c9f@mail.gmail.com>
References: <49CE3A2C.9000007@enthought.com> <5b8d13220903280852l17c578d7hadddbc40873f8c9f@mail.gmail.com>
Message-ID: <49D136A3.80302@gmail.com>

David Cournapeau wrote:
> Hi Travis,
>
> On Sat, Mar 28, 2009 at 11:54 PM, Travis E. Oliphant wrote:
>
>> FYI from PyCon
>>
>> Here at PyCon, it has been said that Python will be moving towards DVCS
>> and will be using bzr or mercurial, but explicitly *not* git. It would
>> seem that *git* got the "lowest" score in the Developer survey that
>> Brett Cannon did.
>
> It is interesting how those tools are viewed so differently in
> different communities. I too am quite doubtful about the validity of
> those surveys :)
>
>> The reasons seem to be:
>>
>> * git doesn't have good Windows clients
>
> Depending on what is meant by a good Windows client (GUI, IDE
> integration), it is true, but then neither do bzr or hg have good
> clients, so I find this statement a bit strange. What is certainly
> true is that git developers care much less about windows than bzr (and
> hg ?). For example, I would guess git will never care much about case
> insensitive fs, etc... (I know bzr developers worked quite a bit on
> this).
>
>> * git is not written with Python
>
> I can somewhat understand why it matters to python, but does it matter to us?
>
> There are definitely strong arguments against git - but I don't think
> being written in python is a strong one. The lack of good Windows
> support is a good argument against changing from svn, but a very
> unconvincing one compared to the other tools. Git has now so much more
> manpower compared to hg and bzr (many more projects use it: the list of
> visible projects using git is becoming quite impressive) - from a 3rd
> party POV, I think git is much better set up than bzr and hg. Gnome
> choosing git could be significant (they made the decision a couple of
> days ago).
>
>> I think the sample size was pretty small to be making decisions on
>> (especially when most opinions were "un-informed").
>
> Most people just choose the one they first use. Few people know
> several DVCS. Pauli and I started a page about arguments pro/cons git
> - it is still very much work in progress:
>
> http://projects.scipy.org/numpy/wiki/GitMigrationProposal
>
> Since few people are willing to try different systems, we also started
> a few workflows (compared to svn):
>
> http://projects.scipy.org/numpy/wiki/GitWorkflow
>
> FWIW, I have spent some time looking at converting the svn repo to git,
> with proper conversion of branches, tags, and other things. I have
> converted my own scikits to git as a first trial (I have numpy
> converted as well, but I did not put it anywhere to avoid confusion).
> This part of the problem would be relatively simple to handle.
>
> cheers,
>
> David

It is now official that Python will switch to Mercurial (Hg):
http://thread.gmane.org/gmane.comp.python.devel/102706

Not that it directly concerns me, but this is rather surprising given:
http://www.python.org/dev/peps/pep-0374/

Hopefully more details will be provided in the near future.

Bruce

From pav at iki.fi Mon Mar 30 17:57:48 2009
From: pav at iki.fi (Pauli Virtanen)
Date: Mon, 30 Mar 2009 21:57:48 +0000 (UTC)
Subject: [Numpy-discussion] Numpy 1.3.0 rc1 fails find_duplicates on Solaris
References: <49CE2587.2050007@ar.media.kyoto-u.ac.jp> <49D0F336.7070800@stsci.edu> <49D133D5.6020707@stsci.edu>
Message-ID: 

Mon, 30 Mar 2009 15:15:06 -0600, Charles R Harris wrote:
> I'm guessing that it is the test that needs fixing. And maybe the windows
> problem is related.

Probably they are both related to the unspecified sort order of the
duplicates; some sort-order-ignoring comparisons were missing in the test.

I think the test is now fixed in trunk:

http://projects.scipy.org/numpy/changeset/6827

-- Pauli Virtanen

From aisaac at american.edu Mon Mar 30 18:38:21 2009
From: aisaac at american.edu (Alan G Isaac)
Date: Mon, 30 Mar 2009 18:38:21 -0400
Subject: [Numpy-discussion] DVCS at PyCon
In-Reply-To: <49D136A3.80302@gmail.com>
References: <49CE3A2C.9000007@enthought.com> <5b8d13220903280852l17c578d7hadddbc40873f8c9f@mail.gmail.com> <49D136A3.80302@gmail.com>
Message-ID: <49D149DD.70005@american.edu>

On 3/30/2009 5:16 PM Bruce Southey apparently wrote:
> It is now official that Python will switch to Mercurial (Hg):
> http://thread.gmane.org/gmane.comp.python.devel/102706
>
> Not that it directly concerns me, but this is rather surprising given:
> http://www.python.org/dev/peps/pep-0374/

http://www.python.org/dev/peps/pep-0374/#chosen-dvcs

;-)
Alan Isaac

From Matthew.Partridge at barclaysglobal.com Mon Mar 30 19:54:45 2009
From: Matthew.Partridge at barclaysglobal.com (Partridge, Matthew BGI SYD)
Date: Tue, 31 Mar 2009 10:54:45 +1100
Subject: [Numpy-discussion] lost with slicing
Message-ID: <5EFCE9D6AE4DD9409FA6BDD0E02FA8D545700F@sydnte2k032.insidelive.net>

I apologise if I'm asking an obvious question or one that has already
been addressed.

I've tried to understand the documentation in the numpy manual on
slicing, but I'm a bit lost. I'm trying to do indexing using both
slices and index lists.
I have a problem when I do something like:

x[0, :, [0,1,2]]

Here are a couple of examples:

>>> a = numpy.arange(6).reshape(2,3)
>>> print a
[[0 1 2]
 [3 4 5]]
>>> print a[:, [0,1,2]]   # example 1 - this works as I expected
[[0 1 2]
 [3 4 5]]
>>> b = numpy.arange(6).reshape(1,2,3)
>>> print b
[[[0 1 2]
  [3 4 5]]]
>>> print b[0, :, [0,1,2]]  # example 2 - this seems to be the transpose
of what I was expecting
[[0 3]
 [1 4]
 [2 5]]
>>> print b[0, [[0],[1]], [[0,1,2]]] # example 3 - this is what I
expected
[[0 1 2]
 [3 4 5]]

Am I doing something wrong? Why do we get different behaviour in
example 2 compared with example 1 or example 3?

(I'm using numpy 1.0.3.1 on python 2.4.1 for windows, but I've tried
some more recent versions of numpy as well.)

mattp

From josef.pktd at gmail.com Mon Mar 30 20:21:22 2009
From: josef.pktd at gmail.com (josef.pktd at gmail.com)
Date: Mon, 30 Mar 2009 20:21:22 -0400
Subject: [Numpy-discussion] lost with slicing
In-Reply-To: <5EFCE9D6AE4DD9409FA6BDD0E02FA8D545700F@sydnte2k032.insidelive.net>
References: <5EFCE9D6AE4DD9409FA6BDD0E02FA8D545700F@sydnte2k032.insidelive.net>
Message-ID: <1cd32cbb0903301721nb45628ci82eb63acb9f26d99@mail.gmail.com>

On Mon, Mar 30, 2009 at 7:54 PM, Partridge, Matthew BGI SYD wrote:
> I apologise if I'm asking an obvious question or one that has already
> been addressed.
> (...)

that's how it works, whether we like it or not.

see thread with title "is it a bug?"
starting march 11

Josef

From Matthew.Partridge at barclaysglobal.com Mon Mar 30 21:29:53 2009
From: Matthew.Partridge at barclaysglobal.com (Partridge, Matthew BGI SYD)
Date: Tue, 31 Mar 2009 12:29:53 +1100
Subject: [Numpy-discussion] lost with slicing
In-Reply-To: <1cd32cbb0903301721nb45628ci82eb63acb9f26d99@mail.gmail.com>
References: <5EFCE9D6AE4DD9409FA6BDD0E02FA8D545700F@sydnte2k032.insidelive.net> <1cd32cbb0903301721nb45628ci82eb63acb9f26d99@mail.gmail.com>
Message-ID: <5EFCE9D6AE4DD9409FA6BDD0E02FA8D545703A@sydnte2k032.insidelive.net>

> > I apologise if I'm asking an obvious question or one that has already
> > been addressed.
> > (...)
>
> that's how it works, whether we like it or not.
>
> see thread with title "is it a bug?" starting march 11
>
> Josef

Thanks Josef,

I've looked over the "is it a bug" thread, and realise that it is very relevant!
But I'm still lost. Robert Kern wrote:

 "It's certainly weird, but it's working as designed. Fancy indexing via
 arrays is a separate subsystem from indexing via slices. Basically,
 fancy indexing decides the outermost shape of the result (e.g. the
 leftmost items in the shape tuple). If there are any sliced axes, they
 are *appended* to the end of that shape tuple."

I see that's the case in example 2, but not in example 1 (above). Josef, I also
see your example doesn't fit this explanation:

 >>> x = np.arange(30).reshape(3,5,2)
 >>> idx = np.array([0,1]); e = x[:,[0,1],0]; e.shape
 (3, 2)
 >>> idx = np.array([0,1]); e = x[:,:2,0]; e.shape
 (3, 2)

Travis Oliphant wrote:

 Referencing my previous post on this topic. In this case, it is
 unambiguous to replace dimensions 1 and 2 with the result of
 broadcasting idx and idx together. Thus the (5,6) dimensions are
 replaced by the (2,) result of indexing, leaving the outer dimensions
 intact; thus (4,2,7) is the result.

I'm unclear on when something is regarded as "unambiguous"; I don't really get how the rules work.

I'm trying to build something where I can do (for "a" having a shape (n1,n2,n3,...)):

a[i1, i2, i3, ...]

where i1, i2, i3 can be
* a single index: eg a[3]
* a slice: eg a[:3]
* a list of keys: eg a[[1,2,3]]
and the interpretation of this should yield:
* no corresponding dimension if a single index is used
* a dimension of length of the slice if a slice is used
* a dimension of length of the list if a list is used

I currently apply the following logic:
* look through the index coordinates that are being applied
* if there are multiple list-of-key indices, then reshape them so that
they will broadcast to agree:
a[[1,2,3], [4,5]] --> a[[[1],[2],[3]], [[4,5]]]
* note if there are any slices. If so, I assume (as per Robert Kern's
remark) that the dimensions corresponding to the slices are going to be
appended to the end. So I make sure that I transpose my result at the
end to correct for this.

When I do all this, I get example 2 behaving like example 3, but
example 1 then doesn't work. I'm not trying to get the discussion list
to do my work for me, but I'm pretty confused as to when dimensions get
swapped and when they don't; when something is "ambiguous" and when it
is "unambiguous".

Any help appreciated,
thanks,
matt

From robert.kern at gmail.com Mon Mar 30 21:34:56 2009
From: robert.kern at gmail.com (Robert Kern)
Date: Mon, 30 Mar 2009 20:34:56 -0500
Subject: [Numpy-discussion] lost with slicing
In-Reply-To: <5EFCE9D6AE4DD9409FA6BDD0E02FA8D545703A@sydnte2k032.insidelive.net>
References: <5EFCE9D6AE4DD9409FA6BDD0E02FA8D545700F@sydnte2k032.insidelive.net> <1cd32cbb0903301721nb45628ci82eb63acb9f26d99@mail.gmail.com> <5EFCE9D6AE4DD9409FA6BDD0E02FA8D545703A@sydnte2k032.insidelive.net>
Message-ID: <3d375d730903301834s428bd203yce08ad2e21e4f374@mail.gmail.com>

On Mon, Mar 30, 2009 at 20:29, Partridge, Matthew BGI SYD wrote:
> Thanks Josef,
>
> I've looked over the "is it a bug" thread, and realise that it is very relevant!
> But I'm still lost. Robert Kern wrote:
>
> "It's certainly weird, but it's working as designed. Fancy indexing via
> arrays is a separate subsystem from indexing via slices. Basically,
> fancy indexing decides the outermost shape of the result (e.g. the
> leftmost items in the shape tuple). If there are any sliced axes, they
> are *appended* to the end of that shape tuple."

I was wrong. Don't listen to me. Travis's explanation is what you need.

> I see that's the case in example 2, but not in example 1 (above).
> Josef, I also see your example doesn't fit this explanation:
>
> >>> x = np.arange(30).reshape(3,5,2)
> >>> idx = np.array([0,1]); e = x[:,[0,1],0]; e.shape
> (3, 2)
> >>> idx = np.array([0,1]); e = x[:,:2,0]; e.shape
> (3, 2)
>
> Travis Oliphant wrote:
>
> Referencing my previous post on this topic. In this case, it is
> unambiguous to replace dimensions 1 and 2 with the result of
> broadcasting idx and idx together. Thus the (5,6) dimensions are
> replaced by the (2,) result of indexing, leaving the outer dimensions
> intact; thus (4,2,7) is the result.
>
> I'm unclear on when something is regarded as "unambiguous"; I don't really get how the rules work.

When a slice is all the way on the left or right, but not in the middle.

-- 
Robert Kern

"I have come to believe that the whole world is an enigma, a harmless
enigma that is made terrible by our own mad attempt to interpret it as
though it had an underlying truth."
-- Umberto Eco

From Matthew.Partridge at barclaysglobal.com Mon Mar 30 21:36:36 2009
From: Matthew.Partridge at barclaysglobal.com (Partridge, Matthew BGI SYD)
Date: Tue, 31 Mar 2009 12:36:36 +1100
Subject: [Numpy-discussion] lost with slicing
In-Reply-To: <5EFCE9D6AE4DD9409FA6BDD0E02FA8D545703A@sydnte2k032.insidelive.net>
References: <5EFCE9D6AE4DD9409FA6BDD0E02FA8D545700F@sydnte2k032.insidelive.net> <1cd32cbb0903301721nb45628ci82eb63acb9f26d99@mail.gmail.com> <5EFCE9D6AE4DD9409FA6BDD0E02FA8D545703A@sydnte2k032.insidelive.net>
Message-ID: <5EFCE9D6AE4DD9409FA6BDD0E02FA8D545703C@sydnte2k032.insidelive.net>

Sorry group. I found Travis Oliphant's earlier 12 March post (that
didn't show up in the same thread), and found the answer to my question.

matt

> (...)
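For reference, the rule Robert states can be checked directly against
Matthew's own examples: the mixed slice-plus-list case comes out as the
transpose of the all-advanced case, so a final .T recovers the expected
layout. An illustrative session (behaviour exactly as reported in this
thread):

>>> import numpy as np
>>> b = np.arange(6).reshape(1, 2, 3)
>>> b[0, :, [0, 1, 2]].shape     # advanced indices split by a slice: broadcast dim goes first
(3, 2)
>>> rows = np.array([[0], [1]])  # all-advanced, explicitly broadcast indices
>>> cols = np.array([[0, 1, 2]])
>>> b[0, rows, cols].shape       # dimensions stay where they were written
(2, 3)
>>> (b[0, :, [0, 1, 2]].T == b[0, rows, cols]).all()
True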
From cournape at gmail.com Mon Mar 30 23:41:02 2009 From: cournape at gmail.com (David Cournapeau) Date: Tue, 31 Mar 2009 12:41:02 +0900 Subject: [Numpy-discussion] DVCS at PyCon In-Reply-To: <49D136A3.80302@gmail.com> References: <49CE3A2C.9000007@enthought.com> <5b8d13220903280852l17c578d7hadddbc40873f8c9f@mail.gmail.com> <49D136A3.80302@gmail.com> Message-ID: <5b8d13220903302041m71cea034pf21b4a61d39eb6f@mail.gmail.com> On Tue, Mar 31, 2009 at 6:16 AM, Bruce Southey wrote: > It is now official that Python will switch to Mercurial (Hg): > http://thread.gmane.org/gmane.comp.python.devel/102706 > > Not that it directly concerns me, but this is rather surprising given: > http://www.python.org/dev/peps/pep-0374/ I don't think it is: as Guido said in his email, someone has to make the decision, and endless discussion go nowhere, because you can always make arguments for one or the other. Since some core developers are strongly against git (Martin Loewis for example), and given that hg is used by several core python developers already, I think it makes sense. cheers, David From cgohlke at uci.edu Mon Mar 30 23:32:37 2009 From: cgohlke at uci.edu (cgohlke at uci.edu) Date: Mon, 30 Mar 2009 20:32:37 -0700 (PDT) Subject: [Numpy-discussion] A module for homogeneous transformation matrices, Euler angles and quaternions In-Reply-To: <463e11f90903041928j7508b2fcu4abbaa65cfe11460@mail.gmail.com> References: <2352c0540903041410j263dbb4dk6d6a2662ae7c4216@mail.gmail.com> <463e11f90903041928j7508b2fcu4abbaa65cfe11460@mail.gmail.com> Message-ID: Hello, I have reimplemented many functions of the transformations.py module in a C extension module. Speed improvements are 5-50 times. -- Christoph On Mar 4, 8:28?pm, Jonathan Taylor wrote: > Looks cool but a lot of this should be done in an extension module to > make it fast. ?Perhaps starting this process off as a separate entity > until stability is acheived. ?I would be tempted to do some of this > using cython. ?I just wrote found that generating a rotation matrix > from euler angles is about 10x faster when done properly with cython. > > J. > > On Wed, Mar 4, 2009 at 5:10 PM, Gareth Elston > > wrote: > > I found a nice module for these transforms at > >http://www.lfd.uci.edu/~gohlke/code/transformations.py.html. I've > > been using an older version for some time and thought it might make a > > good addition to numpy/scipy. I made some simple mods to the older > > version to add a couple of functions I needed and to allow it to be > > used with Python 2.4. > > > The module is pure Python (2.5, with numpy 1.2 imported), includes > > doctests, and is BSD licensed. Here's the first part of the module > > docstring: > > > """Homogeneous Transformation Matrices and Quaternions. > > > A library for calculating 4x4 matrices for translating, rotating, mirroring, > > scaling, shearing, projecting, orthogonalizing, and superimposing arrays of > > homogenous coordinates as well as for converting between rotation matrices, > > Euler angles, and quaternions. > > """ > > > I'd like to see this added to numpy/scipy so I know I've got some > > reading to do (scipy.org/Developer_Zone and the huge scipy-dev > > discussions on Scipy development infrastructure / workflow) to make > > sure it follows the guidelines, but where would people like to see > > this? In numpy? scipy? scikits? elsewhere? > > > I seem to remember that there was a first draft of a guide for > > developers being written. Are there any links available? > > > Thanks, > > Gareth. 
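As a rough illustration of the kind of routine under discussion in the
transformations.py thread (this is not the transformations.py
implementation, just a minimal sketch of one common convention, static
x-y-z Euler angles, under a hypothetical function name):

import numpy as np

def euler_to_matrix(ax, ay, az):
    # 3x3 rotation matrix for static x-y-z Euler angles, in radians.
    # transformations.py supports many conventions; this is only one.
    cx, sx = np.cos(ax), np.sin(ax)
    cy, sy = np.cos(ay), np.sin(ay)
    cz, sz = np.cos(az), np.sin(az)
    Rx = np.array([[1, 0, 0], [0, cx, -sx], [0, sx, cx]])
    Ry = np.array([[cy, 0, sy], [0, 1, 0], [-sy, 0, cy]])
    Rz = np.array([[cz, -sz, 0], [sz, cz, 0], [0, 0, 1]])
    return np.dot(Rz, np.dot(Ry, Rx))

R = euler_to_matrix(0.1, 0.2, 0.3)
assert np.allclose(np.dot(R, R.T), np.eye(3))  # a rotation matrix is orthogonal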
From david at ar.media.kyoto-u.ac.jp Tue Mar 31 01:09:53 2009
From: david at ar.media.kyoto-u.ac.jp (David Cournapeau)
Date: Tue, 31 Mar 2009 14:09:53 +0900
Subject: [Numpy-discussion] Numpy 1.3.0 rc1 OS X Installer
In-Reply-To: <49D11686.4010305@noaa.gov>
References: <49CE2587.2050007@ar.media.kyoto-u.ac.jp> <5b8d13220903290753g224ee96am72c1cd93db88f650@mail.gmail.com> <5b8d13220903292356r5e954b90lf92c9f170ec01b13@mail.gmail.com> <49D0778C.5050706@ar.media.kyoto-u.ac.jp> <2BACA6B8-43A0-421E-859B-E6593B588FFF@post.harvard.edu> <5b8d13220903300822m682c7e26jb10975ecf1d9c723@mail.gmail.com> <49D0EA6D.6050601@noaa.gov> <5b8d13220903300919s13702969vd54c70c3a928fc0c@mail.gmail.com> <49D0FCF8.5040908@noaa.gov> <5b8d13220903301027n76634921hf4dc4b1818392ab3@mail.gmail.com> <49D11686.4010305@noaa.gov>
Message-ID: <49D1A5A1.2060205@ar.media.kyoto-u.ac.jp>

Chris Barker wrote:
>
> I see -- well that's good news. I've found the Universal library
> requirements to be a pain sometimes, and it probably would be here if
> Apple wasn't giving us lapack/blas.

Yes, definitely. I could see a lot of trouble if people had to build a
universal ATLAS :)

> Well, neither Apple nor python.org's builds are 64 bit anyway at this
> point. There is talk of quad (i386, ppc, x86_64, ppc_64) builds in the
> future, though.

Yes, but that's something that should be supported sooner rather than
later.

> bdist_mpkg should do "the right thing" if it's run with the right
> python. So you need to make sure you run:
>
> /Library/Frameworks/Python.framework/Versions/2.5/bin/bdist_mpkg
>
> rather than whatever one happens to be found on your PATH.

Yes, that's the problem: this cannot work directly if I use virtualenv,
since virtualenv works by recreating a 'fake' python somewhere else.

> Well, maybe we need to hack bdist_mpkg to support this; we're pretty
> sure that it is possible.
>
> I want to make sure I understand what you want:
>
> Do you want to be able to build numpy in a virtualenv, and then build an
> mpkg that will install into the user's regular Framework?

Yes - more exactly, there should be a way to guarantee that if I create
a virtualenv from a given python interpreter, I can target a .mpkg to
this python interpreter.

> Do you want to be able to build an mpkg that users can install into the
> virtualenv of their choice?

No - virtualenv is only an artefact of the build process - users should
not care or even know I use virtualenv. I use virtualenv as a fast,
poor-man's 'python chroot'. This way, I can build and install python in
a directory with minimum interaction with the outside environment.
Installing is necessary to build the doc correctly, and I don't want to
mess up my system with setuptools stuff.

> Of course, easy_install can do that, when it works!

Except when it doesn't :)

> We were just talking about some of that last night -- we really need an
> "easy_uninstall", for instance.

yes - but I think it is very difficult to do right with the current
design of easy_install (I have thought a bit about those issues
recently, and I have started writing something to organize my thoughts a
bit better - I can keep you posted if you are interested).
> By the way, for the libgfortran issue, while statically linking it may > be the best option, it wouldn't be too hard to have the mpkg include and > install /usr/local/lib/ligfortran.dylib (or whatever). > I don't think it is a good idea: it would overwrite existing libgfortran.dylib, which would cause a lot of issues because libgfortran and gfortran have to be consistent. I know I would be very pissed if after installing a software, some unrelated software would be broken or worse overwritten. That's exactly what bothers me with easy_install. cheers, David From efiring at hawaii.edu Tue Mar 31 01:51:56 2009 From: efiring at hawaii.edu (Eric Firing) Date: Mon, 30 Mar 2009 19:51:56 -1000 Subject: [Numpy-discussion] DVCS at PyCon In-Reply-To: <5b8d13220903302041m71cea034pf21b4a61d39eb6f@mail.gmail.com> References: <49CE3A2C.9000007@enthought.com> <5b8d13220903280852l17c578d7hadddbc40873f8c9f@mail.gmail.com> <49D136A3.80302@gmail.com> <5b8d13220903302041m71cea034pf21b4a61d39eb6f@mail.gmail.com> Message-ID: <49D1AF7C.3050802@hawaii.edu> David Cournapeau wrote: > On Tue, Mar 31, 2009 at 6:16 AM, Bruce Southey wrote: >> It is now official that Python will switch to Mercurial (Hg): >> http://thread.gmane.org/gmane.comp.python.devel/102706 >> >> Not that it directly concerns me, but this is rather surprising given: >> http://www.python.org/dev/peps/pep-0374/ > > I don't think it is: as Guido said in his email, someone has to make > the decision, and endless discussion go nowhere, because you can > always make arguments for one or the other. Since some core developers > are strongly against git (Martin Loewis for example), and given that > hg is used by several core python developers already, I think it makes > sense. I agree. The PEP does not show overwhelming superiority (or, arguably, even mild superiority) of any alternative; I think the different systems have been tending to converge in their capabilities, and all are serviceable. Mercurial *can* be viewed as easier to learn and use than git, and much faster than bzr. Perhaps of interest to the numpy community is that mercurial is already in use by Sphinx, sage, and cython. Disclosure: I use and like hg. Eric > > cheers, > > David > _______________________________________________ > Numpy-discussion mailing list > Numpy-discussion at scipy.org > http://mail.scipy.org/mailman/listinfo/numpy-discussion From david at ar.media.kyoto-u.ac.jp Tue Mar 31 01:59:09 2009 From: david at ar.media.kyoto-u.ac.jp (David Cournapeau) Date: Tue, 31 Mar 2009 14:59:09 +0900 Subject: [Numpy-discussion] DVCS at PyCon In-Reply-To: <49D1AF7C.3050802@hawaii.edu> References: <49CE3A2C.9000007@enthought.com> <5b8d13220903280852l17c578d7hadddbc40873f8c9f@mail.gmail.com> <49D136A3.80302@gmail.com> <5b8d13220903302041m71cea034pf21b4a61d39eb6f@mail.gmail.com> <49D1AF7C.3050802@hawaii.edu> Message-ID: <49D1B12D.3000300@ar.media.kyoto-u.ac.jp> Eric Firing wrote: > > I agree. The PEP does not show overwhelming superiority (or, arguably, > even mild superiority) of any alternative I think this PEP was poorly written. You can't see any of the advantage/differences of the different systems. Some people even said they don't see the differences with svn. I think the reason partly is that the PEP focused on existing python workflows, but the whole point, at least for me, is to change the general workflow (for reviews, code contributions, etc...). Stephen J. 
Turnbull sums it up nicely: http://mail.python.org/pipermail/python-dev/2009-March/087968.html FWIW, I tend to agree that Hg is less disruptive than git when coming from svn, at least for the simple tasks (I don't know hg enough to have a really informed opinion for more advanced workflows). cheers, David From cycomanic at gmail.com Tue Mar 31 03:54:36 2009 From: cycomanic at gmail.com (Jochen S) Date: Tue, 31 Mar 2009 20:54:36 +1300 Subject: [Numpy-discussion] Optical autocorrelation calculated with numpy is slow In-Reply-To: References: Message-ID: <3b0ecd430903310054g2cca1208m6633db0f81c6b090@mail.gmail.com> On Tue, Mar 31, 2009 at 7:13 AM, Jo?o Lu?s Silva wrote: > Hi, > > I wrote a script to calculate the *optical* autocorrelation of an > electric field. It's like the autocorrelation, but sums the fields > instead of multiplying them. I'm calculating > > I(tau) = integral( abs(E(t)+E(t-tau))**2,t=-inf..inf) > An autocorrelation is just a convolution, which is a multiplication in frequency space. Thus you can do: FT_E = fft(E) FT_ac=FT_E*FT_E.conj() ac = fftshift(ifft(FT_ac)) where E is your field and ac is your autocorrelation. Also what sort of autocorrelation are you talking about. For instance SHG autocorrelation is an intensity autocorrelation thus the first line should be: FT_E = fft(abs(E)**2) HTH Jochen > with script appended at the end. It's too slow for my purposes (takes ~5 > seconds, and scales ~O(N**2)). numpy's correlate is fast enough, but > isn't what I need as it multiplies instead of add the fields. Could you > help me get this script to run faster (without having to write it in > another programming language) ? > > Thanks, > Jo?o Silva > > #-------------------------------------------------------- > > import numpy as np > #import matplotlib.pyplot as plt > > n = 2**12 > n_autocorr = 3*n-2 > > c = 3E2 > w0 = 2.0*np.pi*c/800.0 > t_max = 100.0 > t = np.linspace(-t_max/2.0,t_max/2.0,n) > > E = np.exp(-(t/10.0)**2)*np.exp(1j*w0*t) #Electric field > > dt = t[1]-t[0] > t_autocorr=np.linspace(-dt*n_autocorr/2.0,dt*n_autocorr/2.0,n_autocorr) > E1 = np.zeros(n_autocorr,dtype=E.dtype) > E2 = np.zeros(n_autocorr,dtype=E.dtype) > Ac = np.zeros(n_autocorr,dtype=np.float64) > > E2[n-1:n-1+n] = E[:] > > for i in range(2*n-2): > E1[:] = 0.0 > E1[i:i+n] = E[:] > > Ac[i] = np.sum(np.abs(E1+E2)**2) > > Ac *= dt > > #plt.plot(t_autocorr,Ac) > #plt.show() > > #-------------------------------------------------------- > > _______________________________________________ > Numpy-discussion mailing list > Numpy-discussion at scipy.org > http://mail.scipy.org/mailman/listinfo/numpy-discussion > -------------- next part -------------- An HTML attachment was scrubbed... URL: From cycomanic at gmail.com Tue Mar 31 05:07:25 2009 From: cycomanic at gmail.com (Jochen S) Date: Tue, 31 Mar 2009 22:07:25 +1300 Subject: [Numpy-discussion] Optical autocorrelation calculated with numpy is slow In-Reply-To: <3b0ecd430903310054g2cca1208m6633db0f81c6b090@mail.gmail.com> References: <3b0ecd430903310054g2cca1208m6633db0f81c6b090@mail.gmail.com> Message-ID: <3b0ecd430903310207s1bfb12a1q19813568ac0cd692@mail.gmail.com> On Tue, Mar 31, 2009 at 8:54 PM, Jochen S wrote: > On Tue, Mar 31, 2009 at 7:13 AM, Jo?o Lu?s Silva wrote: > >> Hi, >> > > >> I wrote a script to calculate the *optical* autocorrelation of an >> electric field. It's like the autocorrelation, but sums the fields >> instead of multiplying them. 
I'm calculating
>>
>> I(tau) = integral( abs(E(t)+E(t-tau))**2, t=-inf..inf )
>
> An autocorrelation is just a convolution, which is a multiplication in
> frequency space. Thus you can do:
> FT_E = fft(E)
> FT_ac = FT_E*FT_E.conj()
> ac = fftshift(ifft(FT_ac))
>
> where E is your field and ac is your autocorrelation. Also, what sort of
> autocorrelation are you talking about? For instance, SHG autocorrelation
> is an intensity autocorrelation, so the first line should be:
> FT_E = fft(abs(E)**2)

Sorry, I was reading over your example too quickly earlier; you're
obviously using an intensity autocorrelation, so what you should be
doing is:

FT_E = fft(abs(E)**2)
FT_ac = FT_E*FT_E.conj()
ac = fftshift(ifft(FT_ac))

HTH
Jochen

>> with script appended at the end. It's too slow for my purposes (takes ~5
>> seconds, and scales ~O(N**2)). numpy's correlate is fast enough, but
>> isn't what I need as it multiplies instead of adding the fields. Could you
>> help me get this script to run faster (without having to write it in
>> another programming language)?
>>
>> Thanks,
>> João Silva
>>
>> (...)

From jsilva at fc.up.pt Tue Mar 31 08:21:05 2009
From: jsilva at fc.up.pt (João Luís Silva)
Date: Tue, 31 Mar 2009 13:21:05 +0100
Subject: [Numpy-discussion] Optical autocorrelation calculated with numpy is slow
In-Reply-To: 
References: 
Message-ID: 

Charles R Harris wrote:
> That should work. The first two integrals are actually the same, but
> need to be E(t)*E(t).conj(). The second integral needs twice the real
> part of E(t)*E(t-tau).conj(). Numpy correlate should really have the
> conjugate built in, but it doesn't.
>
> Chuck

It worked, thanks.

João Silva
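Putting Jochen's FFT recipe and Chuck's expansion together, a
self-contained version of the calculation might look like the following.
This is a sketch only: the zero-padding and centering bookkeeping is
illustrative and differs slightly from the loop version's array length.

#--------------------------------------------------------
import numpy as np

n = 2**12
t = np.linspace(-50.0, 50.0, n)
dt = t[1] - t[0]
E = np.exp(-(t/10.0)**2)*np.exp(1j*2.0*np.pi*3E2/800.0*t)

# zero-pad so the circular FFT correlation has no wrap-around
Ep = np.concatenate((E, np.zeros(n, dtype=E.dtype)))
F = np.fft.fft(Ep)
corr = np.fft.fftshift(np.fft.ifft(F*F.conj()))  # sum over t of E(t)*conj(E(t-tau))

# abs(E(t)+E(t-tau))**2 = abs(E(t))**2 + abs(E(t-tau))**2
#                         + 2*Re( E(t)*conj(E(t-tau)) )
Ac = (2.0*np.sum(np.abs(E)**2) + 2.0*corr.real)*dt
#--------------------------------------------------------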
From alexandre.fayolle at logilab.fr Tue Mar 31 09:50:03 2009
From: alexandre.fayolle at logilab.fr (Alexandre Fayolle)
Date: Tue, 31 Mar 2009 15:50:03 +0200
Subject: [Numpy-discussion] array of matrices
In-Reply-To: <1238193504.12867.4.camel@pc2.cole.uklinux.net>
References: <1238193504.12867.4.camel@pc2.cole.uklinux.net>
Message-ID: <200903311550.03388.alexandre.fayolle@logilab.fr>

On Friday 27 March 2009 23:38:25, Bryan Cole wrote:
> I have a number of arrays of shape (N,4,4). I need to perform a
> vectorised matrix-multiplication between pairs of them, i.e.
> matrix-multiplication rules for the last two dimensions, usual
> element-wise rule for the 1st dimension (of length N).
>
> (How) is this possible with numpy?

I think dot will work, though you'll need to work a little bit to get the
answer:

>>> import numpy as np
>>> a = np.array([[1,2], [3,4]], np.float)
>>> aa = np.array([a,a+1,a+2])
>>> bb = np.array((a*5, a*6, a*7, a*8))
>>> np.dot(aa, bb).shape
(3, 2, 4, 2)
>>> for i, a_ in enumerate(aa):
...     for j, b_ in enumerate(bb):
...         print (np.dot(a_, b_) == np.dot(aa, bb)[i,:,j,:]).all()
...
True
True
True
True
True
True
True
True
True
True
True
True

-- 
Alexandre Fayolle LOGILAB, Paris (France)
Python, Zope, Plone, Debian training: http://www.logilab.fr/formations
Custom software development: http://www.logilab.fr/services
Scientific computing: http://www.logilab.fr/science

From sienkiew at stsci.edu Tue Mar 31 10:45:05 2009
From: sienkiew at stsci.edu (Mark Sienkiewicz)
Date: Tue, 31 Mar 2009 10:45:05 -0400
Subject: [Numpy-discussion] Numpy 1.3.0 rc1 fails find_duplicates on Solaris
In-Reply-To: 
References: <49CE2587.2050007@ar.media.kyoto-u.ac.jp> <49D0F336.7070800@stsci.edu> <49D133D5.6020707@stsci.edu>
Message-ID: <49D22C71.1040800@stsci.edu>

Pauli Virtanen wrote:
>
> Probably they are both related to the unspecified sort order of the
> duplicates; some sort-order-ignoring comparisons were missing in the test.
>
> I think the test is now fixed in trunk:
>
> http://projects.scipy.org/numpy/changeset/6827

The test passes in 1.4.0.dev6827. Tested on Solaris 8, Mac OSX 10.4
(Tiger) on x86 and ppc, and both 32 and 64 bit Red Hat Enterprise, all
with Python 2.5.1.

Thanks for fixing this.

Mark S.

From charlesr.harris at gmail.com Tue Mar 31 14:16:03 2009
From: charlesr.harris at gmail.com (Charles R Harris)
Date: Tue, 31 Mar 2009 12:16:03 -0600
Subject: [Numpy-discussion] Windows buildbot
Message-ID: 

Hi David, Stefan,

The windows buildbot is back online but seems to have a configuration
problem. It would be nice to see that build working before the release,
so could you two take a look at the error messages or contact Heller?

Chuck

From bryan at cole.uklinux.net Tue Mar 31 17:10:20 2009
From: bryan at cole.uklinux.net (Bryan Cole)
Date: Tue, 31 Mar 2009 22:10:20 +0100
Subject: [Numpy-discussion] array of matrices
In-Reply-To: <200903311550.03388.alexandre.fayolle@logilab.fr>
References: <1238193504.12867.4.camel@pc2.cole.uklinux.net> <200903311550.03388.alexandre.fayolle@logilab.fr>
Message-ID: <1238533819.18876.2.camel@pc2.cole.uklinux.net>

> I think dot will work, though you'll need to work a little bit to get the
> answer:
>
> (...)
> True
>
Bryan > From Chris.Barker at noaa.gov Tue Mar 31 17:39:09 2009 From: Chris.Barker at noaa.gov (Christopher Barker) Date: Tue, 31 Mar 2009 14:39:09 -0700 Subject: [Numpy-discussion] Numpy 1.3.0 rc1 OS X Installer In-Reply-To: <49D1A5A1.2060205@ar.media.kyoto-u.ac.jp> References: <49CE2587.2050007@ar.media.kyoto-u.ac.jp> <5b8d13220903290753g224ee96am72c1cd93db88f650@mail.gmail.com> <5b8d13220903292356r5e954b90lf92c9f170ec01b13@mail.gmail.com> <49D0778C.5050706@ar.media.kyoto-u.ac.jp> <2BACA6B8-43A0-421E-859B-E6593B588FFF@post.harvard.edu> <5b8d13220903300822m682c7e26jb10975ecf1d9c723@mail.gmail.com> <49D0EA6D.6050601@noaa.gov> <5b8d13220903300919s13702969vd54c70c3a928fc0c@mail.gmail.com> <49D0FCF8.5040908@noaa.gov> <5b8d13220903301027n76634921hf4dc4b1818392ab3@mail.gmail.com> <49D11686.4010305@noaa.gov> <49D1A5A1.2060205@ar.media.kyoto-u.ac.jp> Message-ID: <49D28D7D.9050102@noaa.gov> David Cournapeau wrote: > Chris Barker wrote: >> Well, neither Apple nor python.org's builds are 64 bit anyway at this >> point. There is talk of quad (i386,and ppc_64 i86_64) builds the the >> future, though. >> > Yes, but that's something that has to should be supported sooner rather > than later. It does, but we don't need a binary installer for a python that doesn't have a binary installer. >> Well, maybe we need to hack bdist_mpkg to support this, we're pretty >> sure that it is possible. > Yes - more exactly, there should be a way to guarantee that if I create > a virtual env from a given python interpreter, I can target a .mpkg to > this python interpreter. Hmmm -- I don't know virtualenv enough to know what the virtualenv knows about how it was created... However, I'm not sure you need to do what your saying here. I imagine this workflow: set up a virtualenv for, say numpy x.y.rc-z play around with it, get everything to build, etc. with plain old setup.py build, setup.py install, etc. Once you are happy, run: /Library/Frameworks/Python.framework/Versions/2.5/bin/bdist_mpkg (or the 2.6 equivalent, etc) I THINK you'd get a .mpkg that was all set for the user to install in their Framework python. As long as you don't run the installer, you won't end up with it in your virtualenv. Or is this what you've tried and has failed for you? By the way, if you run bdist_mpkg from a version installed into your virtualenv, you will get an installer that will install into your virtualenv, whit the path hard coded, so really useless. > Installing is necessary to build the doc correctly, and I don't want to > mess my system with setuptools stuff. ah -- maybe that's the issue then -- darn. Are the docs included in the .mpkg? Do they need to be built for that? > I have started writing something to organize my thought a > bit better - I can keep you posted if you are interested). yes, I am. >> By the way, for the libgfortran issue, while statically linking it may >> be the best option, it wouldn't be too hard to have the mpkg include and >> install /usr/local/lib/ligfortran.dylib (or whatever). >> > I don't think it is a good idea: it would overwrite existing > libgfortran.dylib, which would cause a lot of issues because libgfortran > and gfortran have to be consistent. I know I would be very pissed if > after installing a software, some unrelated software would be broken or > worse overwritten. True. In that case we could put the dylib somewhere obscure: /usr/local/lib/scipy1.6/lib/ or even: /Library/Frameworks/Python.framework/Versions/2.5/lib/ But using static linking is probably better. 
Actually, and I betray my ignorance here, but IIUC: - There are a bunch of different scipy extensions that use libgfortran - Many of them are built more-or-less separately - So each of them would get their own copy of the static libgfortran - Just how many separate copies of libgfortran is that? - Enough to care? - How big is libgfortran? This is making me think solving the dynamic linking problem makes sense. Also, would it break anything if the libgfortran installed were properly versioned: libgfortran.a.b.c Isn't that the point of versioned libs? -Chris -- Christopher Barker, Ph.D. Oceanographer Emergency Response Division NOAA/NOS/OR&R (206) 526-6959 voice 7600 Sand Point Way NE (206) 526-6329 fax Seattle, WA 98115 (206) 526-6317 main reception Chris.Barker at noaa.gov From geometrian at gmail.com Tue Mar 31 20:20:35 2009 From: geometrian at gmail.com (Ian Mallett) Date: Tue, 31 Mar 2009 17:20:35 -0700 Subject: [Numpy-discussion] Numpy Positional Array Message-ID: Hello, I'm trying to make an array of size n*n*2. It should be of the form: [[[0,0],[1,0],[2,0],[3,0],[4,0], ... ,[n,0]], [[0,1],[1,1],[2,1],[3,1],[4,1], ... ,[n,1]], [[0,2],[1,2],[2,2],[3,2],[4,2], ... ,[n,2]], [[0,3],[1,3],[2,3],[3,3],[4,3], ... ,[n,3]], [[0,4],[1,4],[2,4],[3,4],[4,4], ... ,[n,4]], ... ... ... ... ... ... ... [[0,n],[1,n],[2,n],[3,n],[4,n], ... ,[n,n]]] Each vec2 represents the x,y position of the vec in the array itself. I'm completely stuck on implementing this setup in numpy. Any pointers? Thanks, Ian -------------- next part -------------- An HTML attachment was scrubbed... URL: From robert.kern at gmail.com Tue Mar 31 20:32:28 2009 From: robert.kern at gmail.com (Robert Kern) Date: Tue, 31 Mar 2009 19:32:28 -0500 Subject: [Numpy-discussion] Numpy Positional Array In-Reply-To: References: Message-ID: <3d375d730903311732x1a756153rc354fd823467d34c@mail.gmail.com> 2009/3/31 Ian Mallett : > Hello, > I'm trying to make an array of size n*n*2.? It should be of the form: > [[[0,0],[1,0],[2,0],[3,0],[4,0], ... ,[n,0]], > ?[[0,1],[1,1],[2,1],[3,1],[4,1], ... ,[n,1]], > ?[[0,2],[1,2],[2,2],[3,2],[4,2], ... ,[n,2]], > ?[[0,3],[1,3],[2,3],[3,3],[4,3], ... ,[n,3]], > ?[[0,4],[1,4],[2,4],[3,4],[4,4], ... ,[n,4]], > ?? ...?? ... ? ...?? ... ? ...?? ...?? ... > ?[[0,n],[1,n],[2,n],[3,n],[4,n], ... ,[n,n]]] > Each vec2 represents the x,y position of the vec in the array itself.? I'm > completely stuck on implementing this setup in numpy.? Any pointers? How do you want to fill in the array? If you are typing it in literally into your code, you would do basically the above, without the ...'s, and wrap it in numpy.array(...). Otherwise, you can create empty arrays with numpy.empty((n,n,2)), or filled in versions using zeros() and ones(). -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From geometrian at gmail.com Tue Mar 31 20:35:55 2009 From: geometrian at gmail.com (Ian Mallett) Date: Tue, 31 Mar 2009 17:35:55 -0700 Subject: [Numpy-discussion] Numpy Positional Array In-Reply-To: <3d375d730903311732x1a756153rc354fd823467d34c@mail.gmail.com> References: <3d375d730903311732x1a756153rc354fd823467d34c@mail.gmail.com> Message-ID: On Tue, Mar 31, 2009 at 5:32 PM, Robert Kern wrote: > How do you want to fill in the array? If you are typing it in > literally into your code, you would do basically the above, without > the ...'s, and wrap it in numpy.array(...). 
I know that, but in some cases, n will be quite large, perhaps 1000 on a
side. I'm trying to generate an array of that form in numpy entirely for
speed and aesthetic reasons.

Ian

From robert.kern at gmail.com Tue Mar 31 20:39:54 2009
From: robert.kern at gmail.com (Robert Kern)
Date: Tue, 31 Mar 2009 19:39:54 -0500
Subject: [Numpy-discussion] Numpy Positional Array
In-Reply-To: 
References: <3d375d730903311732x1a756153rc354fd823467d34c@mail.gmail.com>
Message-ID: <3d375d730903311739q6f7480a1v367c98fe2d009dbd@mail.gmail.com>

2009/3/31 Ian Mallett :
> On Tue, Mar 31, 2009 at 5:32 PM, Robert Kern wrote:
>>
>> How do you want to fill in the array? If you are typing it in
>> literally into your code, you would do basically the above, without
>> the ...'s, and wrap it in numpy.array(...).
>
> I know that, but in some cases, n will be quite large, perhaps 1000 on a
> side. I'm trying to generate an array of that form in numpy entirely for
> speed and aesthetic reasons.

Again: How do you want to fill in the array? What is the process that
generates the data?

-- 
Robert Kern

"I have come to believe that the whole world is an enigma, a harmless
enigma that is made terrible by our own mad attempt to interpret it as
though it had an underlying truth."
-- Umberto Eco

From geometrian at gmail.com Tue Mar 31 20:42:47 2009
From: geometrian at gmail.com (Ian Mallett)
Date: Tue, 31 Mar 2009 17:42:47 -0700
Subject: [Numpy-discussion] Numpy Positional Array
In-Reply-To: <3d375d730903311739q6f7480a1v367c98fe2d009dbd@mail.gmail.com>
References: <3d375d730903311732x1a756153rc354fd823467d34c@mail.gmail.com> <3d375d730903311739q6f7480a1v367c98fe2d009dbd@mail.gmail.com>
Message-ID: 

The array follows a pattern: each array of length 2 represents the x,y
index of that array within the larger array.

From robert.kern at gmail.com Tue Mar 31 20:48:25 2009
From: robert.kern at gmail.com (Robert Kern)
Date: Tue, 31 Mar 2009 19:48:25 -0500
Subject: [Numpy-discussion] Numpy Positional Array
In-Reply-To: 
References: <3d375d730903311732x1a756153rc354fd823467d34c@mail.gmail.com> <3d375d730903311739q6f7480a1v367c98fe2d009dbd@mail.gmail.com>
Message-ID: <3d375d730903311748q27d1e7dbr9797c74615d9eaab@mail.gmail.com>

2009/3/31 Ian Mallett :
> The array follows a pattern: each array of length 2 represents the x,y index
> of that array within the larger array.

Ah, right. Use dstack(mgrid[0:n,0:n]).

-- 
Robert Kern

"I have come to believe that the whole world is an enigma, a harmless
enigma that is made terrible by our own mad attempt to interpret it as
though it had an underlying truth."
-- Umberto Eco

From geometrian at gmail.com Tue Mar 31 20:51:14 2009
From: geometrian at gmail.com (Ian Mallett)
Date: Tue, 31 Mar 2009 17:51:14 -0700
Subject: [Numpy-discussion] Numpy Positional Array
In-Reply-To: <3d375d730903311748q27d1e7dbr9797c74615d9eaab@mail.gmail.com>
References: <3d375d730903311732x1a756153rc354fd823467d34c@mail.gmail.com> <3d375d730903311739q6f7480a1v367c98fe2d009dbd@mail.gmail.com> <3d375d730903311748q27d1e7dbr9797c74615d9eaab@mail.gmail.com>
Message-ID: 

Thanks!
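One detail worth checking in practice: np.dstack(np.mgrid[0:n, 0:n]) fills
position [i, j] with [i, j], while the layout sketched at the top of this
thread has row y, column x holding [x, y], so the last axis may need
reversing. An illustrative session (the next message gives an
ndindex-based equivalent):

>>> import numpy as np
>>> n = 4
>>> pos = np.dstack(np.mgrid[0:n, 0:n])
>>> pos[2, 3]
array([2, 3])
>>> pos_xy = pos[..., ::-1]   # pos_xy[y, x] == [x, y]
>>> pos_xy[2, 3]
array([3, 2])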
From Matthew.Partridge at barclaysglobal.com Tue Mar 31 20:45:22 2009
From: Matthew.Partridge at barclaysglobal.com (Partridge, Matthew BGI SYD)
Date: Wed, 1 Apr 2009 11:45:22 +1100
Subject: [Numpy-discussion] Numpy Positional Array
In-Reply-To: 
References: <3d375d730903311732x1a756153rc354fd823467d34c@mail.gmail.com>
	<3d375d730903311739q6f7480a1v367c98fe2d009dbd@mail.gmail.com>
Message-ID: <5EFCE9D6AE4DD9409FA6BDD0E02FA8D5457104@sydnte2k032.insidelive.net>

> The array follows a pattern: each array of length 2 represents the x,y
> index of that array within the larger array.

Is this what you are after?

>>> numpy.array(list(numpy.ndindex(n,n))).reshape(n,n,2)

From geometrian at gmail.com Tue Mar 31 21:02:19 2009
From: geometrian at gmail.com (Ian Mallett)
Date: Tue, 31 Mar 2009 18:02:19 -0700
Subject: [Numpy-discussion] Numpy Positional Array
In-Reply-To: <5EFCE9D6AE4DD9409FA6BDD0E02FA8D5457104@sydnte2k032.insidelive.net>
References: <3d375d730903311732x1a756153rc354fd823467d34c@mail.gmail.com>
	<3d375d730903311739q6f7480a1v367c98fe2d009dbd@mail.gmail.com>
	<5EFCE9D6AE4DD9409FA6BDD0E02FA8D5457104@sydnte2k032.insidelive.net>
Message-ID: 

Same. Thanks, too.
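The ndindex version gives the same (row, col) ordering as the dstack/mgrid
recipe, but ndindex is a Python-level iterator, so building the list is
likely to be much slower for large n. A minimal sketch with n = 2 to show
the layout:

>>> import numpy
>>> n = 2
>>> numpy.array(list(numpy.ndindex(n, n))).reshape(n, n, 2)
array([[[0, 0],
        [0, 1]],

       [[1, 0],
        [1, 1]]])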
From engelh at deshaw.com Tue Mar 31 21:40:54 2009
From: engelh at deshaw.com (Hans-Andreas Engel)
Date: Wed, 1 Apr 2009 01:40:54 +0000 (UTC)
Subject: [Numpy-discussion] array of matrices
References: <1238193504.12867.4.camel@pc2.cole.uklinux.net>
	<3d375d730903271543m23e3f6dcj39c59cd115dedfa2@mail.gmail.com>
	<3d375d730903280047h2195a468i108f963453bdb78d@mail.gmail.com>
	<7f9d599f0903280947p3c30614epb83b9266ae25ed6e@mail.gmail.com>
	<3d375d730903282132r431ac3e0i66808b4ae533df4b@mail.gmail.com>
Message-ID: 

Robert Kern <robert.kern at gmail.com> writes:

> On Sat, Mar 28, 2009 at 23:15, Anne Archibald wrote:
> > 2009/3/28 Geoffrey Irving:
> >> On Sat, Mar 28, 2009 at 12:47 AM, Robert Kern wrote:
> >>> 2009/3/27 Charles R Harris:
> >>>>
> >>>> On Fri, Mar 27, 2009 at 4:43 PM, Robert Kern wrote:
> >>>>>
> >>>>> On Fri, Mar 27, 2009 at 17:38, Bryan Cole wrote:
> >>>>> > I have a number of arrays of shape (N,4,4). I need to perform a
> >>>>> > vectorised matrix-multiplication between pairs of them, i.e.
> >>>>> > matrix-multiplication rules for the last two dimensions, usual
> >>>>> > element-wise rule for the 1st dimension (of length N).
> >>>>> > (...)
> >>
> >> It'd be great if this operation existed as a primitive. (...)
> >
> > The infrastructure to support such generalized ufuncs has been added
> > to numpy, but as far as I know no functions yet make use of it.
>
> I don't think there is a way to do it in general with dot(). Some
> cases are ambiguous. I think you will need separate matrix-matrix,
> matrix-vector, and vector-vector gufuncs, to coin a term.

By the way, matrix multiplication is one of the test cases for the
generalized ufuncs in numpy 1.3 -- this makes playing around with it easy:

In [1]: N = 10; a = randn(N, 4, 4); b = randn(N, 4, 4)

In [2]: import numpy.core.umath_tests

In [3]: (numpy.core.umath_tests.matrix_multiply(a, b) == [dot(ai, bi) for (ai, bi) in zip(a, b)]).all()
Out[3]: True

Best,
Hans
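For comparison, the same stacked product can also be written with plain
broadcasting -- a sketch only, and it materializes an (N, 4, 4, 4)
intermediate, so the gufunc is preferable for large N:

>>> import numpy
>>> from numpy.random import randn
>>> N = 10
>>> a = randn(N, 4, 4); b = randn(N, 4, 4)
>>> # c[n,i,j] = sum_k a[n,i,k] * b[n,k,j], summing out axis 2 (the k axis)
>>> c = (a[:, :, :, numpy.newaxis] * b[:, numpy.newaxis, :, :]).sum(axis=2)
>>> numpy.allclose(c, [numpy.dot(ai, bi) for ai, bi in zip(a, b)])
True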