From edcjones at comcast.net Mon Aug 1 20:41:59 2005 From: edcjones at comcast.net (Edward C. Jones) Date: Mon Aug 1 20:41:59 2005 Subject: [Numpy-discussion] numarray: Does NA_NewArray return a new reference? Message-ID: <42EEEB43.9000600@comcast.net> Do functions such as NA_NewArray return a new reference? Then I should Py_XDECREF them when I am finished with them? From jdhunter at ace.bsd.uchicago.edu Tue Aug 2 06:33:06 2005 From: jdhunter at ace.bsd.uchicago.edu (John Hunter) Date: Tue Aug 2 06:33:06 2005 Subject: [Numpy-discussion] ANN: matplotlib 0.83.2 Message-ID: <87psswzcsg.fsf@peds-pc311.bsd.uchicago.edu> This is a summary of recent developments in matplotlib since 0.80. For detailed notes, see http://matplotlib.sf.net/whats_new.html, http://matplotlib.sf.net/CHANGELOG and http://matplotlib.sf.net/API_CHANGES == What's New == matplotlib wiki: this was just launched a few days ago and only has two entries to date, but we hope this will grow into a useful site with tutorials, howtos, installation notes, recipes, etc. Please contribute! Thanks to scipy.org and Enthought for hosting. http://www.scipy.org/wikis/topical_software/MatplotlibCookbook CocoaAgg: New CocoaAgg backend for native GUI on OSX, 10.3 and 10.4 compliant, contributed by Charles Moad. TeX support: Now you can (optionally) use TeX to handle all of the text elements in your figure with the rc param text.usetex in the antigrain and postscript backends; see http://www.scipy.org/wikis/topical_software/UsingTex. Thanks to Darren Dale for hard work on the TeX support. Reorganized config files: Made HOME/.matplotlib the new config dir where the matplotlibrc file, the ttf.cache, and the tex.cache live. Your .matplotlibrc file, if you have one, should be renamed to .matplotlib/matplotlibrc. Masked arrays: Support for masked arrays in line plots, pcolor and contours. Thanks Eric Firing and Jeffrey Whitaker. New image resize and interpolation options. See help(imshow) for details, particularly the interpolation, filternorm and filterrad kwargs. New values for the interp kwarg are: 'nearest', 'bilinear', 'bicubic', 'spline16', 'spline36', 'hanning', 'hamming', 'hermite', 'kaiser', 'quadric', 'catrom', 'gaussian', 'bessel', 'mitchell', 'sinc', 'lanczos', 'blackman' Byte images: Much faster image loading for MxNx4 or MxNx3 UInt8 images, which bypasses the memory and CPU intensive integer/floating point conversions. Thanks Nicolas Girard. Fast markers on win32: The marker cache optimization is finally available for win32, after an agg bug was found and fixed (thanks Maxim!). Line marker plots should be considerably faster now on win32. Qt in ipython/pylab: You can now use qt in ipython pylab mode. Thanks Fernando Perez and the Orsay team! Agg wrapper proper: Started work on a proper agg wrapper to expose more general agg functionality in mpl. See examples/agg_test.py. Lots of wrapping remains to be done. Subplot configuration: There is a new toolbar button on GTK*, WX* and TkAgg to launch the subplot configuration tool. GUI neutral widgets: Matplotlib now has cross-GUI widgets (buttons, check buttons, radio buttons and sliders). See examples/widgets/*.py and http://matplotlib.sf.net/screenshots.html#slider_demo. 
This makes it easier to create interactive figures that run across backends. Full screen mode in GTK*: Use 'f' to toggle full screen mode in the GTK backends. Thanks Steve Chaplin. Downloads available from http://matplotlib.sf.net From Pieter.Dumon at intec.UGent.be Tue Aug 2 09:05:56 2005 From: Pieter.Dumon at intec.UGent.be (Pieter Dumon) Date: Tue Aug 2 09:05:56 2005 Subject: [Numpy-discussion] exp Message-ID: <42EF9964.102@intec.UGent.be> Hi, I'm having a problem with exp(): >>>import Numeric >>>a = Numeric.array([-800+0j])*Numeric.ones(10) >>>Numeric.exp(a) results in"OverflowError: math range error" I had expected to get the cmath.exp() result: >>> import cmath >>> cmath.exp(-800+0j) 0j anyone knows what I'm doing wrong ? Pieter From Fernando.Perez at colorado.edu Tue Aug 2 09:54:34 2005 From: Fernando.Perez at colorado.edu (Fernando Perez) Date: Tue Aug 2 09:54:34 2005 Subject: [Numpy-discussion] exp In-Reply-To: <42EF9964.102@intec.UGent.be> References: <42EF9964.102@intec.UGent.be> Message-ID: <42EFA4B3.7090607@colorado.edu> Pieter Dumon wrote: > Hi, > > I'm having a problem with exp(): > > >>>import Numeric > >>>a = Numeric.array([-800+0j])*Numeric.ones(10) > >>>Numeric.exp(a) > > results in"OverflowError: math range error" > > I had expected to get the cmath.exp() result: > > >>> import cmath > >>> cmath.exp(-800+0j) > 0j > > > anyone knows what I'm doing wrong ? Numeric doesn't underflow to zero silently, nor does it offer a way to control the fpu's exception bits directly. I believe that numarray does offer such facilities. For exponentials of real arguments, I've overridden exp() with my own version: def exp_safe(x): """Compute exponentials which safely underflow to zero. Slow but convenient to use. Note that NumArray will introduce proper floating point exception handling with access to the underlying hardware.""" if type(x) is ArrayType: return exp(clip(x,exp_safe_MIN,exp_safe_MAX)) else: return math.exp(x) You can modify this to work with complex inputs. Best, f From perry at stsci.edu Tue Aug 2 11:43:36 2005 From: perry at stsci.edu (Perry Greenfield) Date: Tue Aug 2 11:43:36 2005 Subject: [Numpy-discussion] numarray: Does NA_NewArray return a new reference? In-Reply-To: <42EEEB43.9000600@comcast.net> References: <42EEEB43.9000600@comcast.net> Message-ID: <843f7a32570a268c0855fefba9c8bb28@stsci.edu> On Aug 1, 2005, at 11:40 PM, Edward C. Jones wrote: > Do functions such as NA_NewArray return a new reference? Then I should > Py_XDECREF them when I am finished with them? Yes, yes. Perry From perry at stsci.edu Tue Aug 2 12:54:13 2005 From: perry at stsci.edu (Perry Greenfield) Date: Tue Aug 2 12:54:13 2005 Subject: [Numpy-discussion] numarray: PyArrayObject supports number protocols? In-Reply-To: <42ED9D73.1010400@comcast.net> References: <42ED9D73.1010400@comcast.net> Message-ID: <04f1e4748cb8a90ac8770f50e66f1453@stsci.edu> On Jul 31, 2005, at 11:56 PM, Edward C. Jones wrote: > I just saw in the docs, Section 14.2.4: > > It should be noted that unlike earlier versions of numarray, the > present PyArrayObject structure is a first class python object, with > full support for the number protocols in C. > > Does this mean that I can add two numarrays in C using "PyNumber_Add"? > Todd is away for a few days, and I haven't found the actual support for this in the code, so I'm going to wait until he's back for a definitive answer. 
Perry From jmiller at stsci.edu Thu Aug 4 07:43:28 2005 From: jmiller at stsci.edu (Todd Miller) Date: Thu Aug 4 07:43:28 2005 Subject: [Numpy-discussion] numarray: PyArrayObject supports number protocols? In-Reply-To: <04f1e4748cb8a90ac8770f50e66f1453@stsci.edu> References: <42ED9D73.1010400@comcast.net> <04f1e4748cb8a90ac8770f50e66f1453@stsci.edu> Message-ID: <1123166524.4467.12.camel@halloween.stsci.edu> On Tue, 2005-08-02 at 15:53, Perry Greenfield wrote: > On Jul 31, 2005, at 11:56 PM, Edward C. Jones wrote: > > > I just saw in the docs, Section 14.2.4: > > > > It should be noted that unlike earlier versions of numarray, the > > present PyArrayObject structure is a first class python object, with > > full support for the number protocols in C. > > > > Does this mean that I can add two numarrays in C using "PyNumber_Add"? Yes. It should be noted that the numarray implementation of the number protocol is (still) in Python so there are issues with "atomicity" and the global interpreter lock just as there are with other Python callbacks from C. The documentation is referring to early versions of numarray where the C-API's C-representation of an array was not a Python object at all. Regards, Todd > Todd is away for a few days, and I haven't found the actual support for > this in the code, so I'm going to wait until he's back for a definitive > answer. > > Perry From alex_sch at telus.net Sat Aug 6 14:53:09 2005 From: alex_sch at telus.net (Alex Schultz) Date: Sat Aug 6 14:53:09 2005 Subject: [Numpy-discussion] Optimization Question Message-ID: <42F53139.3040604@telus.net> Hi, I made a nice function that composts two RGBA image arrays, however I'm having issues with it being too slow. Does anybody have any suggestions on how to optimize this (the subfunction 'makenewpixels' is a separation for profiling purposes, and it takes almost as much time as the other parts of the function): def compost(lowerlayer, upperlayer): def makenewpixels(ltable, utable, h, w, upperalphatable, loweralphatable): newpixels = Numeric.zeros((h, w, 4), 'b') newpixels[:,:,3] = Numeric.clip(upperalphatable + loweralphatable, 0, 255).astype('b') newpixels[:,:,0:3] = (utable + ltable).astype('b') return newpixels h = Numeric.size(lowerlayer, 0) w = Numeric.size(lowerlayer, 1) upperalphatable = upperlayer[:,:,3] loweralphatable = lowerlayer[:,:,3] #calculate the upper layer's portion of the composted picture utable = Numeric.swapaxes(Numeric.swapaxes(upperlayer[:,:,0:3], 0, 2) * upperalphatable / 255, 2, 0) #calculate the lower layer's portion of the composted picture ltable = Numeric.swapaxes(Numeric.swapaxes(lowerlayer[:,:,0:3], 0, 2) * (255 - upperalphatable) / 255, 2, 0) return makenewpixels(ltable, utable, h, w, upperalphatable, loweralphatable) Thanks, Alex Schultz From gvwilson at cs.utoronto.ca Tue Aug 9 11:52:34 2005 From: gvwilson at cs.utoronto.ca (Greg Wilson) Date: Tue Aug 9 11:52:34 2005 Subject: [Numpy-discussion] re: software skills course Message-ID: Hi, I'm working with support from the Python Software Foundation to develop an open source course on basic software development skills for people with backgrounds in science and engineering. 
I have a beta version of the course notes ready for review, and would like to pull in people in sci&eng to look it over and give me feedback. If you know anyone who fits this bill (particularly people who might be interested in following along with a trial run of the course this fall), I'd be grateful for pointers. Thanks, Greg Wilson From Sebastien.deMentendeHorne at electrabel.com Wed Aug 10 05:08:57 2005 From: Sebastien.deMentendeHorne at electrabel.com (Sebastien.deMentendeHorne at electrabel.com) Date: Wed Aug 10 05:08:57 2005 Subject: [Numpy-discussion] new __wrap_array__ magic method Message-ID: <6E48F3D185CF644788F55917A0D50A93017CF4FE@seebex02.eib.electrabel.be> Hi, Currently, we have an __array__ magic method that can be used to transform any object that implements it into an array. It think that a more useful magic method would be, for ufuncs, a __wrap_array__ method that would return an array object and a function to use after having applied the ufunc. For instance: class TimeSerie: def __init__(self, data, times): self.data = data # an array self.times = times # anything, could be any metadata def __wrap_array__(self, ufunc): return self.data, lambda data: TimeSerie(data, self.times) t = TimeSerie( arange(100), range(100) ) cos(t) # returns a TimeSerie object equivalent to TimeSerie( cos(arange(100)), range(100) ) This needs probably a change in the ufunc code that would first check if __wrap_array__ is there and if so, use it to get an array as well as a "constructor" to use for returning an object other than an array. Benefits: - easier to wrap array objects with metadata without rewriting all ufunc (see MaskedArray for problematic). - ufunc( list ) -> list and ufunc( tuple ) -> tuple instead of returning always arrays. Do you see an interest of doing so ? Does it need a lot of internal changes to Numeric/numarray/scipy.core ? Best, Sebastien ======================================================= This message is confidential. It may also be privileged or otherwise protected by work product immunity or other legal rules. If you have received it by mistake please let us know by reply and then delete it from your system; you should not copy it or disclose its contents to anyone. All messages sent to and from Electrabel may be monitored to ensure compliance with internal policies and to protect our business. Emails are not secure and cannot be guaranteed to be error free as they can be intercepted, amended, lost or destroyed, or contain viruses. Anyone who communicates with us by email is taken to accept these risks. http://www.electrabel.be/homepage/general/disclaimer_EN.asp ======================================================= From oliphant at ee.byu.edu Wed Aug 10 10:02:11 2005 From: oliphant at ee.byu.edu (Travis Oliphant) Date: Wed Aug 10 10:02:11 2005 Subject: [Numpy-discussion] new __wrap_array__ magic method In-Reply-To: <6E48F3D185CF644788F55917A0D50A93017CF4FE@seebex02.eib.electrabel.be> References: <6E48F3D185CF644788F55917A0D50A93017CF4FE@seebex02.eib.electrabel.be> Message-ID: <42FA32C2.8050308@ee.byu.edu> Sebastien.deMentendeHorne at electrabel.com wrote: >Hi, > >Currently, we have an __array__ magic method that can be used to transform any object that implements it into an array. >It think that a more useful magic method would be, for ufuncs, a __wrap_array__ method that would return an array object and a function to use after having applied the ufunc. 
> > There is an interest in doing this but it is a bit more complicated for ufuncs with more than one input argument and more than one output argument. I have not seen a proposal that really works. Your __wrap_array__ method sounds interesting though. I think, however, that the __wrap_array__ method would need to provide some kind of array_interfaceable object to be really useful. In the new scipy.base, I've been trying to handle some of this, but it is more complicated than it might at first appear. -Travis From oliphant at ee.byu.edu Wed Aug 10 11:53:14 2005 From: oliphant at ee.byu.edu (Travis Oliphant) Date: Wed Aug 10 11:53:14 2005 Subject: [Numpy-discussion] scipy.base (Numeric3) ready for alpha use Message-ID: <42FA4CF4.4050909@ee.byu.edu> For anybody interested in the development of scipy.base. The repository is in a state that can be tested and played with. I'm sure there are bugs, but I've removed the ones I've found. I'd be interested in help in tracking down others. Over the next few weeks, we will be attempting to build scipy using the new scipy.base. This should also help iron out some problems. Some of the notable features that came out of the ufunc adapting process are 1) reductions can now take place over a type different than the array type. Thus, if B is a byte array you can reduce over a long type to avoid modular arithmetic (overflow). 2) reduceat now takes an axis argument 3) copies are not made of large arrays but a buffering-scheme is used for casting and mis-behaved arrays. 4) the size of buffers used and what is meant by "large array" can be adjusted on a per function / module / global basis by setting the variable UFUNC_BUFSIZE in the local / module / global (builtin) scope 5) how errors in ufuncs are handled can be set and over-ridden on a function / module / global basis through the variable UFUNC_ERRMASK 6) you have another option besides ignore, warn, or raise. You can specify a Python function to call when an error occurs through the variable UFUNC_ERRFUNC (right now all errors go to this same function and a string is based indiciating which error has occurred). I should explain this idea of using variables to set information for the ufuncs. It comes out of an idea that Guido mentioned while Perry, Paul, and I met with him back in March. When he was informed of numarray's stack approach to error handling he questioned that design decision. He wondered if the error handling could not be defined on a per module basis. With that idea, it was relatively straightforward to implement a procedure wherein error behavior for ufuncs is determined by looking in the local, then global (module level), and finally builtin scope for a specific variable. This look-up is done at the beginning of the ufunc call. It will obviously add some time to code which loops through repeated look up calls (how much I'm not sure). Perhaps there is a way to ameliorate this, but until we see some performance issues, I'm not inclined to spend too much time on premature optimization. Comments and especially people with an inkling to try very alpha code out (i.e. it could segfault on you) are welcomed. -Travis O. 
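To illustrate the scope lookup described in the message above: the real check is done in C at the start of each ufunc call, but the local -> module (global) -> builtin search order can be sketched in plain Python roughly as follows. This is only an illustration, not the scipy.base implementation; the helper name and the default value are invented, and only UFUNC_ERRMASK is a variable actually named above.

import sys

def _lookup_ufunc_setting(name="UFUNC_ERRMASK", default=0):
    """Find `name` in the caller's local scope, then its module (global)
    scope, then the builtin scope, falling back to `default`."""
    frame = sys._getframe(1)            # frame of the code that called us
    if name in frame.f_locals:
        return frame.f_locals[name]
    if name in frame.f_globals:
        return frame.f_globals[name]
    return frame.f_builtins.get(name, default)

# Example: a module-level setting is seen by every call made in that module.
UFUNC_ERRMASK = 0x7                     # hypothetical mask value
print(_lookup_ufunc_setting())          # -> 7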
From oliphant at ee.byu.edu Wed Aug 10 12:43:16 2005 From: oliphant at ee.byu.edu (Travis Oliphant) Date: Wed Aug 10 12:43:16 2005 Subject: [Numpy-discussion] Checking out scipy.base (Numeric3) Message-ID: <42FA5897.60204@ee.byu.edu> You can check out scipy.base using: cvs -d:pserver:anonymous at cvs.sourceforge.net:/cvsroot/numpy login (press Enter when asked for login) cvs -z3 -d:pserver:anonymous at cvs.sourceforge.net:/cvsroot/numpy co -P /Numeric3\ -Travis / From oliphant at ee.byu.edu Wed Aug 10 14:09:42 2005 From: oliphant at ee.byu.edu (Travis Oliphant) Date: Wed Aug 10 14:09:42 2005 Subject: [Numpy-discussion] Sourceforge pserver CVS access acting strange -> don't get Numeric3 that way. Message-ID: <42FA6CCD.2010803@ee.byu.edu> Anyone contemplating trying out the new scipy.base should not try to use the pserver CVS service from sourceforge for a while. It is not reflecting the current state of the developer CVS tree. I know they recently upgraded developer CVS hardware and perhaps the pserver system is still getting information from the old CVS tree rather than the new developer tree. So, if you get errors on building scipy.base after checking it out from pserver Sourceforge, wait a day or two. I suspect that sourceforge will fix this annoying problem soon. -Travis O. From Sebastien.deMentendeHorne at electrabel.com Thu Aug 11 04:37:17 2005 From: Sebastien.deMentendeHorne at electrabel.com (Sebastien.deMentendeHorne at electrabel.com) Date: Thu Aug 11 04:37:17 2005 Subject: [Numpy-discussion] new __wrap_array__ magic method Message-ID: <6E48F3D185CF644788F55917A0D50A93017CF501@seebex02.eib.electrabel.be> > > >Hi, > > > >Currently, we have an __array__ magic method that can be > used to transform any object that implements it into an array. > >It think that a more useful magic method would be, for > ufuncs, a __wrap_array__ method that would return an array > object and a function to use after having applied the ufunc. > > > > > There is an interest in doing this but it is a bit more > complicated for > ufuncs with more than one input argument and more than one > output argument. > > I have not seen a proposal that really works. Your __wrap_array__ > method sounds interesting though. I think, however, that the > __wrap_array__ method would need to provide some kind of > array_interfaceable object to be really useful. > > In the new scipy.base, I've been trying to handle some of > this, but it > is more complicated than it might at first appear. > If we use the same convention as __add__ and __radd__ & co for binary functions this would make a __wrap_array__(self, ufunc): # all unary operators like __neg__, __abs__, __invert__ __lwrap_array__(self, other, ufunc): # all binary operators called with self on the lhs like __add__, __mul__, __pow__ __rwrap_array__(self, other, ufunc): # all binary operators called with self on the rhs like __radd__, __rmul__, __rpow__ with first a check on __lwrap_array__ and then __rwrap_array__ for binary functions. Those functions should then return the result of the operation (and not anymore an array object and a "constructor" function. 
My previous example would read: class TimeSerie: def __init__(self, data, times): self.data = data # an array self.times = times # anything, could be any metadata def __wrap_array__(self, ufunc): return TimeSerie( ufunc(self.data) , self.times) def __lwrap_array__(self, other, ufunc): return TimeSerie( ufunc(self.data, other) , self.times) def __rwrap_array__(self, other, ufunc): return TimeSerie( ufunc(other, self.data) , self.times) propagating this way the operation to the other object in binary operations. Better attempt ? Seb ======================================================= This message is confidential. It may also be privileged or otherwise protected by work product immunity or other legal rules. If you have received it by mistake please let us know by reply and then delete it from your system; you should not copy it or disclose its contents to anyone. All messages sent to and from Electrabel may be monitored to ensure compliance with internal policies and to protect our business. Emails are not secure and cannot be guaranteed to be error free as they can be intercepted, amended, lost or destroyed, or contain viruses. Anyone who communicates with us by email is taken to accept these risks. http://www.electrabel.be/homepage/general/disclaimer_EN.asp ======================================================= From europa100 at comcast.net Thu Aug 11 11:46:30 2005 From: europa100 at comcast.net (Rob) Date: Thu Aug 11 11:46:30 2005 Subject: [Numpy-discussion] Numeric Python EM Project Message-ID: <42FB9C38.9040508@comcast.net> Hi all, I will be putting this site back up, somewhere, I don't know. It needs to be updated to whatever people think is the best Python numeric type package. I've been out of this for a couple of years. I'm off to buy a second laptop today. Something relatively inexpensive that will just have Windoze on it. This pricey Dell is my *nix laptop. Currently it has OpenBSD only. The motivation of all of this is the dismay of "high tech" EE jobs here in Hillsboro, OR, and a renewed interest in Astrophysics and all of the wonderful philosophic and spiritual questions that naturally come from it. haha. Sincerely, Rob N3FT From greg.ewing at canterbury.ac.nz Thu Aug 11 22:08:10 2005 From: greg.ewing at canterbury.ac.nz (Greg Ewing) Date: Thu Aug 11 22:08:10 2005 Subject: [Numpy-discussion] new __wrap_array__ magic method In-Reply-To: <6E48F3D185CF644788F55917A0D50A93017CF501@seebex02.eib.electrabel.be> References: <6E48F3D185CF644788F55917A0D50A93017CF501@seebex02.eib.electrabel.be> Message-ID: <42FC32F7.7000800@canterbury.ac.nz> Sebastien.deMentendeHorne at electrabel.com wrote: > __wrap_array__(self, ufunc): # all unary operators like __neg__, __abs__, __invert__ > __lwrap_array__(self, other, ufunc): # all binary operators called with self on the lhs like __add__, __mul__, __pow__ > __rwrap_array__(self, other, ufunc): # all binary operators called with self on the rhs like __radd__, __rmul__, __rpow__ The term "wrap_array" doesn't seem very descriptive of what these are for. Wrapping of arrays may come into the implementations of them, but it's not the main point. The point is to apply a ufunc to the object and get an appropriate result. Maybe just __ufunc__, __lufunc__, __rufunc__? -- Greg Ewing, Computer Science Dept, +--------------------------------------+ University of Canterbury, | A citizen of NewZealandCorp, a | Christchurch, New Zealand | wholly-owned subsidiary of USA Inc. 
| greg.ewing at canterbury.ac.nz +--------------------------------------+ From Sebastien.deMentendeHorne at electrabel.com Fri Aug 12 02:03:05 2005 From: Sebastien.deMentendeHorne at electrabel.com (Sebastien.deMentendeHorne at electrabel.com) Date: Fri Aug 12 02:03:05 2005 Subject: [Numpy-discussion] new __wrap_array__ magic method Message-ID: <6E48F3D185CF644788F55917A0D50A93017CF507@seebex02.eib.electrabel.be> > > > __wrap_array__(self, ufunc): # all unary operators like > __neg__, __abs__, __invert__ > > __lwrap_array__(self, other, ufunc): # all binary operators > called with self on the lhs like __add__, __mul__, __pow__ > > __rwrap_array__(self, other, ufunc): # all binary operators > called with self on the rhs like __radd__, __rmul__, __rpow__ > > The term "wrap_array" doesn't seem very descriptive of > what these are for. Wrapping of arrays may come into the > implementations of them, but it's not the main point. > The point is to apply a ufunc to the object and get > an appropriate result. > > Maybe just __ufunc__, __lufunc__, __rufunc__? Definitely ! I came first with a function that was more similar to __array__ and so __wrap_array__ seemed a plausible name for this function. Afterward, I changed a bit the role of this function to propose the alternative way and your names are way more adequate (it may be useful to add the unary/binary word in the function like: __unary_ufunc__ __binary_lufunc__ __binary_rufunc__ as it does not map exactly to __add__, __radd__ and people could mix __ufunc__ and __rufunc__ without understanding the __lufunc__... > > -- > Greg Ewing, Computer Science Dept, > +--------------------------------------+ > University of Canterbury, | A citizen of > NewZealandCorp, a | > Christchurch, New Zealand | wholly-owned subsidiary of > USA Inc. | > greg.ewing at canterbury.ac.nz +--------------------------------------+ > > > ------------------------------------------------------- > SF.Net email is Sponsored by the Better Software Conference & EXPO > September 19-22, 2005 * San Francisco, CA * Development > Lifecycle Practices > Agile & Plan-Driven Development * Managing Projects & Teams * > Testing & QA > Security * Process Improvement & Measurement * > http://www.sqe.com/bsce5sf > _______________________________________________ > Numpy-discussion mailing list > Numpy-discussion at lists.sourceforge.net > https://lists.sourceforge.net/lists/listinfo/numpy-discussion > ======================================================= This message is confidential. It may also be privileged or otherwise protected by work product immunity or other legal rules. If you have received it by mistake please let us know by reply and then delete it from your system; you should not copy it or disclose its contents to anyone. All messages sent to and from Electrabel may be monitored to ensure compliance with internal policies and to protect our business. Emails are not secure and cannot be guaranteed to be error free as they can be intercepted, amended, lost or destroyed, or contain viruses. Anyone who communicates with us by email is taken to accept these risks. 
http://www.electrabel.be/homepage/general/disclaimer_EN.asp ======================================================= From gruben at bigpond.net.au Fri Aug 12 05:45:39 2005 From: gruben at bigpond.net.au (Gary Ruben) Date: Fri Aug 12 05:45:39 2005 Subject: [Numpy-discussion] still no cross product for new array type In-Reply-To: <42E800B0.7050803@noaa.gov> References: <42E800B0.7050803@noaa.gov> Message-ID: <42FC9996.5030507@bigpond.net.au> Just browsing through the new array PEP to see if there's support for 3-vector operations, such as cross product, norm, length functions. Sadly I don't see any. It's something I think is lacking from Numeric and numarray and would like to see implemented. Is it a deliberate choice not to include any? I understand that vectors are sufficiently different animals that you could argue that they shouldn't be supported. I use them enough to think that they should be. Gary Ruben From nadavh at visionsense.com Fri Aug 12 08:32:28 2005 From: nadavh at visionsense.com (Nadav Horesh) Date: Fri Aug 12 08:32:28 2005 Subject: [Numpy-discussion] still no cross product for new array type Message-ID: <07C6A61102C94148B8104D42DE95F7E86DEFF2@exchange2k.envision.co.il> Usually this is within the scope of higher level packages such as ScientificPython. Nadav -----Original Message----- From: numpy-discussion-admin at lists.sourceforge.net on behalf of Gary Ruben Sent: Fri 12-Aug-05 14:44 To: numpy-discussion at lists.sourceforge.net Cc: Subject: [Numpy-discussion] still no cross product for new array type Just browsing through the new array PEP to see if there's support for 3-vector operations, such as cross product, norm, length functions. Sadly I don't see any. It's something I think is lacking from Numeric and numarray and would like to see implemented. Is it a deliberate choice not to include any? I understand that vectors are sufficiently different animals that you could argue that they shouldn't be supported. I use them enough to think that they should be. Gary Ruben ------------------------------------------------------- SF.Net email is Sponsored by the Better Software Conference & EXPO September 19-22, 2005 * San Francisco, CA * Development Lifecycle Practices Agile & Plan-Driven Development * Managing Projects & Teams * Testing & QA Security * Process Improvement & Measurement * http://www.sqe.com/bsce5sf _______________________________________________ Numpy-discussion mailing list Numpy-discussion at lists.sourceforge.net https://lists.sourceforge.net/lists/listinfo/numpy-discussion From gruben at bigpond.net.au Fri Aug 12 17:45:31 2005 From: gruben at bigpond.net.au (Gary Ruben) Date: Fri Aug 12 17:45:31 2005 Subject: [Numpy-discussion] still no cross product for new array type In-Reply-To: <07C6A61102C94148B8104D42DE95F7E86DEFF2@exchange2k.envision.co.il> References: <07C6A61102C94148B8104D42DE95F7E86DEFF2@exchange2k.envision.co.il> Message-ID: <42FD4255.8060900@bigpond.net.au> "Usually" means that that's where they are now, because they haven't been implemented in numpy. Comparing with Matlab/IDL where their status is slightly greater so as to earn operations of their own, I wonder why equivalent status isn't afforded them in numpy. Gary R. Nadav Horesh wrote: > Usually this is within the scope of higher level packages such as ScientificPython. 
> > Nadav > > -----Original Message----- > From: numpy-discussion-admin at lists.sourceforge.net on behalf of Gary Ruben > Sent: Fri 12-Aug-05 14:44 > To: numpy-discussion at lists.sourceforge.net > Cc: > Subject: [Numpy-discussion] still no cross product for new array type > Just browsing through the new array PEP to see if there's support for > 3-vector operations, such as cross product, norm, length functions. > Sadly I don't see any. It's something I think is lacking from Numeric > and numarray and would like to see implemented. Is it a deliberate > choice not to include any? I understand that vectors are sufficiently > different animals that you could argue that they shouldn't be supported. > I use them enough to think that they should be. > > Gary Ruben From greg.ewing at canterbury.ac.nz Sun Aug 14 18:35:19 2005 From: greg.ewing at canterbury.ac.nz (Greg Ewing) Date: Sun Aug 14 18:35:19 2005 Subject: [Numpy-discussion] still no cross product for new array type In-Reply-To: <07C6A61102C94148B8104D42DE95F7E86DEFF2@exchange2k.envision.co.il> References: <07C6A61102C94148B8104D42DE95F7E86DEFF2@exchange2k.envision.co.il> Message-ID: <42FFF5DE.7040604@canterbury.ac.nz> Nadav Horesh wrote: > Gary Ruben : > > > Just browsing through the new array PEP to see if there's support for > > 3-vector operations, such as cross product, norm, length functions. > > Sadly I don't see any. > > Usually this is within the scope of higher level packages such as > ScientificPython. I'd like to see at least cross product included somewhere, since it's difficult to synthesize efficiently from the other operations provided. -- Greg Ewing, Computer Science Dept, +--------------------------------------+ University of Canterbury, | A citizen of NewZealandCorp, a | Christchurch, New Zealand | wholly-owned subsidiary of USA Inc. | greg.ewing at canterbury.ac.nz +--------------------------------------+ From europa100 at comcast.net Mon Aug 15 11:27:55 2005 From: europa100 at comcast.net (rob) Date: Mon Aug 15 11:27:55 2005 Subject: [Numpy-discussion] Numeric Python EM site back up Message-ID: <4300DDC4.9060502@comcast.net> Its at http://home.comcast.net/~europa100 I hope to morph it later with the latest numpy and/or numarray and/or scipy stuff. I would also like a more "glamorous" location for the site. Haha. Rob From sheltraw at unm.edu Mon Aug 15 11:54:28 2005 From: sheltraw at unm.edu (Daniel Sheltraw) Date: Mon Aug 15 11:54:28 2005 Subject: [Numpy-discussion] (no subject) In-Reply-To: <42FC32F7.7000800@canterbury.ac.nz> References: <6E48F3D185CF644788F55917A0D50A93017CF501@seebex02.eib.electrabel.be> <42FC32F7.7000800@canterbury.ac.nz> Message-ID: Hello All Would someone please tell me how to write the magnitude of an array to the first half of a previous declared complex valued array without allocating more memory. This is the sort of thing that is trivial in C but seems arcane in python/numpy. Thanks, Daniel From sheltraw at unm.edu Mon Aug 15 11:54:48 2005 From: sheltraw at unm.edu (Daniel Sheltraw) Date: Mon Aug 15 11:54:48 2005 Subject: [Numpy-discussion] storing magnitude data Message-ID: Hello All Would someone please tell me how to store magnitude data in the first half of a previously allocated complex array (I need to save memory). Memory saving things like this are so simple and intuitive in C but less so in NumPy. 
Thanks, Daniel From tim.hochberg at cox.net Mon Aug 15 12:05:21 2005 From: tim.hochberg at cox.net (Tim Hochberg) Date: Mon Aug 15 12:05:21 2005 Subject: [Numpy-discussion] storing magnitude data In-Reply-To: References: Message-ID: <4300E720.9030408@cox.net> Daniel Sheltraw wrote: > Hello All > > Would someone please tell me how to store magnitude data in the first > half of > a previously allocated complex array (I need to save memory). Memory > saving > things like this are so simple and intuitive in C but less so in NumPy. >>> a = 3*na.arange(10) + 4*na.arange(10)*1j >>> na.absolute(a, a.real) >>> a.real array([ 0., 5., 10., 15., 20., 25., 30., 35., 40., 45.]) > > Thanks, > Daniel > > > ------------------------------------------------------- > SF.Net email is Sponsored by the Better Software Conference & EXPO > September 19-22, 2005 * San Francisco, CA * Development Lifecycle > Practices > Agile & Plan-Driven Development * Managing Projects & Teams * Testing > & QA > Security * Process Improvement & Measurement * http://www.sqe.com/bsce5sf > _______________________________________________ > Numpy-discussion mailing list > Numpy-discussion at lists.sourceforge.net > https://lists.sourceforge.net/lists/listinfo/numpy-discussion > > From Chris.Barker at noaa.gov Mon Aug 15 12:56:34 2005 From: Chris.Barker at noaa.gov (Chris Barker) Date: Mon Aug 15 12:56:34 2005 Subject: [Numpy-discussion] still no cross product for new array type In-Reply-To: <42FD4255.8060900@bigpond.net.au> References: <07C6A61102C94148B8104D42DE95F7E86DEFF2@exchange2k.envision.co.il> <42FD4255.8060900@bigpond.net.au> Message-ID: <4300F321.2010708@noaa.gov> Gary Ruben wrote: > "Usually" means that that's where they are now, because they haven't > been implemented in numpy. Comparing with Matlab/IDL where their status > is slightly greater so as to earn operations of their own, I wonder why > equivalent status isn't afforded them in numpy. Because numpy is NOT matlab or IDL, nor is it trying to be a close of either of them. I've never used IDL, but MATLAB is (or at least was) a MATrix LABoratory. It was conceived, from the beginning, to be a tool to do linear algebra computations. The new array type has nothing to do with linear algebra, nor does Numeric or numarray. they are general purpose array packages. As it happens, a handy way to store matrices is in a 2-d array, so it's natural to build a linear algebra (and vector arithmetic) package on top of NumPy arrays, but it should be higher level package. One of the things I like most about NumPY, as opposed to MATLAB, is that it doesn't assume everything is a matrix! If we could ever get Python to include additional operators, we could have a Numeric array package and linear algebra package all in one.. both with nice notation, that would be nice. In the meantime, check out cvxopt: http://www.ee.ucla.edu/~vandenbe/cvxopt/ If you want a matrix package. The author is talking about using the new array specification in future versions. -Chris -- Christopher Barker, Ph.D. 
Oceanographer NOAA/OR&R/HAZMAT (206) 526-6959 voice 7600 Sand Point Way NE (206) 526-6329 fax Seattle, WA 98115 (206) 526-6317 main reception Chris.Barker at noaa.gov From aisaac at american.edu Mon Aug 15 13:17:14 2005 From: aisaac at american.edu (Alan G Isaac) Date: Mon Aug 15 13:17:14 2005 Subject: [Numpy-discussion] still no cross product for new array type In-Reply-To: <4300F321.2010708@noaa.gov> References: <07C6A61102C94148B8104D42DE95F7E86DEFF2@exchange2k.envision.co.il> <42FD4255.8060900@bigpond.net.au><4300F321.2010708@noaa.gov> Message-ID: On Mon, 15 Aug 2005, Chris Barker apparently wrote: > The new array type has nothing to do with linear algebra, > nor does Numeric or numarray. "Nothing" seems a bit strong. > If we could ever get Python to include additional > operators, we could have a Numeric array package and > linear algebra package all in one. both with nice > notation, that would be nice. Yep. > In the meantime, check out cvxopt: > http://www.ee.ucla.edu/~vandenbe/cvxopt/ > If you want a matrix package. The author is talking about using the new > array specification in future versions. There is also http://www3.sympatico.ca/cjw/PyMatrix/ (I'm not using either at the moment so I cannot compare them.) Cheers, Alan Isaac From perry at stsci.edu Mon Aug 15 14:26:22 2005 From: perry at stsci.edu (Perry Greenfield) Date: Mon Aug 15 14:26:22 2005 Subject: [Numpy-discussion] still no cross product for new array type In-Reply-To: References: <07C6A61102C94148B8104D42DE95F7E86DEFF2@exchange2k.envision.co.il> <42FD4255.8060900@bigpond.net.au><4300F321.2010708@noaa.gov> Message-ID: <1d4d645ffe7abbc96ec44afadaec956a@stsci.edu> On Aug 15, 2005, at 4:18 PM, Alan G Isaac wrote: > On Mon, 15 Aug 2005, Chris Barker apparently wrote: >> If we could ever get Python to include additional >> operators, we could have a Numeric array package and >> linear algebra package all in one. both with nice >> notation, that would be nice. > > Yep. I wonder if we shouldn't use the operator hack recently added to the cookbook to give some sort of capability for this. It doesn't support precedence that people would like, but it maybe useful enough. What do people think? http://aspn.activestate.com/ASPN/Cookbook/Python/Recipe/384122 Perry From rkern at ucsd.edu Mon Aug 15 15:32:36 2005 From: rkern at ucsd.edu (Robert Kern) Date: Mon Aug 15 15:32:36 2005 Subject: [Numpy-discussion] still no cross product for new array type In-Reply-To: <1d4d645ffe7abbc96ec44afadaec956a@stsci.edu> References: <07C6A61102C94148B8104D42DE95F7E86DEFF2@exchange2k.envision.co.il> <42FD4255.8060900@bigpond.net.au><4300F321.2010708@noaa.gov> <1d4d645ffe7abbc96ec44afadaec956a@stsci.edu> Message-ID: <4301179A.8000106@ucsd.edu> Perry Greenfield wrote: > > On Aug 15, 2005, at 4:18 PM, Alan G Isaac wrote: > >> On Mon, 15 Aug 2005, Chris Barker apparently wrote: >> >>> If we could ever get Python to include additional >>> operators, we could have a Numeric array package and >>> linear algebra package all in one. both with nice >>> notation, that would be nice. >> >> Yep. > > I wonder if we shouldn't use the operator hack recently added to the > cookbook to give some sort of capability for this. It doesn't support > precedence that people would like, but it maybe useful enough. What do > people think? > > http://aspn.activestate.com/ASPN/Cookbook/Python/Recipe/384122 If it's placed in a separate module with other such pseudo-operators (|dot|, |land|, |lor|) rather than in Numeric.py itself, I'm +1 on this. It's such a beautiful hack. 
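For reference, the linked recipe works along these lines. This is a minimal, illustrative sketch rather than the recipe's exact code, and the |dot| helper at the end is only a toy example on plain Python lists:

class Infix(object):
    """Wrap a two-argument function so it can be written x |op| y,
    in the spirit of ASPN recipe 384122 linked above."""
    def __init__(self, function):
        self.function = function
    def __ror__(self, left):            # handles   left | op
        return Infix(lambda right: self.function(left, right))
    def __or__(self, right):            # handles   (left | op) | right
        return self.function(right)

dot = Infix(lambda a, b: sum(x * y for x, y in zip(a, b)))
print([1, 2, 3] |dot| [4, 5, 6])        # -> 32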
-- Robert Kern rkern at ucsd.edu "In the fields of hell where the grass grows high Are the graves of dreams allowed to die." -- Richard Harter From cookedm at physics.mcmaster.ca Mon Aug 15 16:39:32 2005 From: cookedm at physics.mcmaster.ca (David M. Cooke) Date: Mon Aug 15 16:39:32 2005 Subject: [Numpy-discussion] still no cross product for new array type In-Reply-To: <42FFF5DE.7040604@canterbury.ac.nz> (Greg Ewing's message of "Mon, 15 Aug 2005 13:54:38 +1200") References: <07C6A61102C94148B8104D42DE95F7E86DEFF2@exchange2k.envision.co.il> <42FFF5DE.7040604@canterbury.ac.nz> Message-ID: Greg Ewing writes: > Nadav Horesh wrote: >> Gary Ruben : >> >> > Just browsing through the new array PEP to see if there's support >> for > 3-vector operations, such as cross product, norm, length >> functions. >> > Sadly I don't see any. >> >> Usually this is within the scope of higher level packages such as >> ScientificPython. > > I'd like to see at least cross product included somewhere, > since it's difficult to synthesize efficiently from the > other operations provided. I agree. I've added a cross_product() funtion to Numeric in CVS. It'll do the cross product along any axes of the arrays passed, assuming that they're of dimensions 2 or 3 (a 2d cross-product returns the z-component of the equivalent 3d one). The test cases look like this: a = Numeric.array([1,2,3]) b = Numeric.array([4,5,6]) assert_eq(Numeric.cross_product(a,b), [-3, 6, -3]) a = Numeric.array([1,2]) b = Numeric.array([4,5]) assert_eq(Numeric.cross_product(a,b), -3) a = Numeric.array([[1,2,3], [4,5,6]]) b = Numeric.array([7, 8, 9]) assert_eq(Numeric.cross_product(a,b), [[-6,12,-6],[-3,6,-3]]) a = Numeric.array([[1,2,3], [4,5,6]]) b = Numeric.array([[10,11,12], [7,8,9]]) assert_eq(Numeric.cross_product(a,b,axis1=0,axis2=0), [-33,-39,-45]) assert_eq(Numeric.cross_product(a,b), [[-9,18,-9], [-3,6,-3]]) and the calling sequence like this: def cross_product(a, b, axis1=-1, axis2=-1): """Return the cross product of two vectors. The cross product is performed over the last axes of a and b by default, and can handle axes with dimensions 2 and 3. For a dimension of 2, the z-component of the equivalent three-dimensional cross product is returned. """ Someone else can make the infix-operator module :-) -- |>|\/|< /--------------------------------------------------------------------------\ |David M. Cooke http://arbutus.physics.mcmaster.ca/dmc/ |cookedm at physics.mcmaster.ca From sheltraw at unm.edu Thu Aug 18 10:52:15 2005 From: sheltraw at unm.edu (Daniel Sheltraw) Date: Thu Aug 18 10:52:15 2005 Subject: [Numpy-discussion] freeing memory Message-ID: Hello All Is their a reliable way to free memory allocated for an array in Numeric? My application uses a lot of memory and I need to free some as it goes along. Thanks, Daniel From oliphant at ee.byu.edu Thu Aug 18 11:35:03 2005 From: oliphant at ee.byu.edu (Travis Oliphant) Date: Thu Aug 18 11:35:03 2005 Subject: [Numpy-discussion] freeing memory In-Reply-To: References: Message-ID: <4304D323.9090808@ee.byu.edu> Daniel Sheltraw wrote: > Hello All > > Is their a reliable way to free memory allocated for an array in Numeric? > My application uses a lot of memory and I need to free some as it goes > along. > Assuming this is all in Python, you just need to delete all names bound to the array. >>> del arr This will remove the memory for arr assuming there are no other arrays referencing arr's memory (i.e. from indexing expresssions arr[3,:] and or name binds). 
-Travis From hsundar at gmail.com Fri Aug 19 16:00:03 2005 From: hsundar at gmail.com (Hari Sundar) Date: Fri Aug 19 16:00:03 2005 Subject: [Numpy-discussion] reading data from file Message-ID: <54a988e205081915594610b660@mail.gmail.com> Hi, I have a small doubt and it is very difficult for me to word this for google. So maybe someone here can help. I use the 'fromfile' function to read data directly into arrays. This works pretty well. I have a special case, where I am reading in 2 files. The first one is a full 3D file, (x,y,z). Each location tells me whether data is available at that location in the other file or not. I call this the mask.The other file contains information which should sit in x*y*z*D space, i.e., I have a vector of length D at any given location (x,y,z) where the mask is not zero. How do I read this effectively ? I don't want to run a loop and test for the mask being true and reading in the vector. Also, I am currently thinking of keeping the information in memory as an array of shape (x,y,z,D), even though the data is sparse. Is there a more efficient way to do this ? thanks, ~Hari From dd55 at cornell.edu Sun Aug 21 09:03:07 2005 From: dd55 at cornell.edu (Darren Dale) Date: Sun Aug 21 09:03:07 2005 Subject: [Numpy-discussion] Question about external libraries and ATLAS Message-ID: <200508211202.24796.dd55@cornell.edu> We are having a discussion on the gentoo-science mailing list about linking Numeric and SciPy to external BLAS, LAPACK and ATLAS libraries. Can anyone tell me, is it necessary to include atlas in the library list at compile time in order to reap the rewards of atlas? Does Numeric use the atlas libraries directly, or indirectly through the blas and lapack libraries? Thanks, Darren From luszczek at cs.utk.edu Sun Aug 21 09:45:07 2005 From: luszczek at cs.utk.edu (Piotr Luszczek) Date: Sun Aug 21 09:45:07 2005 Subject: [Numpy-discussion] Question about external libraries and ATLAS In-Reply-To: <200508211202.24796.dd55@cornell.edu> References: <200508211202.24796.dd55@cornell.edu> Message-ID: <4308AF72.2070702@cs.utk.edu> Darren, my experience with ATLAS and numarray on Gentoo (and other boxes) tells me that if ATLAS is a static library then yes I need it at compile time as the compiler/linker picks up needed symbol and puts them in numarray's dynamic library. On 64-bit boxes (Intel or AMD) the static libraries have to be compiled with `-fPIC' in order to be useful for numarray's dynamic library. But if you compile ATLAS as a dynamic library I'm guessing all these issues go away. The same applies to Goto BLAS which already are distributed as shared libraries alas without source code. I'm sure others will have more to say about that. Piotr Darren Dale wrote: > We are having a discussion on the gentoo-science mailing list about linking > Numeric and SciPy to external BLAS, LAPACK and ATLAS libraries. Can anyone > tell me, is it necessary to include atlas in the library list at compile time > in order to reap the rewards of atlas? Does Numeric use the atlas libraries > directly, or indirectly through the blas and lapack libraries? 
> > Thanks, > Darren > > > ------------------------------------------------------- > SF.Net email is Sponsored by the Better Software Conference & EXPO > September 19-22, 2005 * San Francisco, CA * Development Lifecycle Practices > Agile & Plan-Driven Development * Managing Projects & Teams * Testing & QA > Security * Process Improvement & Measurement * http://www.sqe.com/bsce5sf > _______________________________________________ > Numpy-discussion mailing list > Numpy-discussion at lists.sourceforge.net > https://lists.sourceforge.net/lists/listinfo/numpy-discussion From matthew.brett at gmail.com Sun Aug 21 11:41:08 2005 From: matthew.brett at gmail.com (Matthew Brett) Date: Sun Aug 21 11:41:08 2005 Subject: [Numpy-discussion] Question about external libraries and ATLAS In-Reply-To: <4308AF72.2070702@cs.utk.edu> References: <200508211202.24796.dd55@cornell.edu> <4308AF72.2070702@cs.utk.edu> Message-ID: <1e2af89e05082111407ae72424@mail.gmail.com> Hi, > But if you compile ATLAS as a dynamic library I'm guessing > all these issues go away. You probably saw this, but Clint Whaley is just in the process of trying to optimize the shared library performance of ATLAS: http://sourceforge.net/mailarchive/forum.php?thread_id=8003844&forum_id=426 Best, Matthew From hsundar at gmail.com Mon Aug 22 10:29:17 2005 From: hsundar at gmail.com (Hari Sundar) Date: Mon Aug 22 10:29:17 2005 Subject: [Numpy-discussion] Numeric - fromfile ? Message-ID: <54a988e20508221027535066ca@mail.gmail.com> Hi, Is there a function similar to numarray's fromfile, to read a binary array from a file ? What is the best way to read such a file into a Numeric array. thanks, ~Hari -------------- next part -------------- An HTML attachment was scrubbed... URL: From sheltraw at unm.edu Mon Aug 22 14:39:09 2005 From: sheltraw at unm.edu (Daniel Sheltraw) Date: Mon Aug 22 14:39:09 2005 Subject: [Numpy-discussion] array reshaping? Message-ID: Hello all Question: Lets say I have two arrays dimensioned according to: my_array = zeros((dim1,dim2,dim3)).astype(Complex32) my_array_2 = zeros((dim3,dim2)).astype(Complex32) where dim2 is not equal to dim3. Also Notice that dim2 and dim3 are swapped in my_array_2 relative to my_array. For some value of dim1 I would like to write a chunk of data contained in my_array_2 to the chunk of memory represented by my_array[dim1,:,:]. How can I best do this? Cheers, Daniel From alexandre.fayolle at logilab.fr Tue Aug 23 00:16:10 2005 From: alexandre.fayolle at logilab.fr (Alexandre Fayolle) Date: Tue Aug 23 00:16:10 2005 Subject: [Numpy-discussion] Numeric - fromfile ? In-Reply-To: <54a988e20508221027535066ca@mail.gmail.com> References: <54a988e20508221027535066ca@mail.gmail.com> Message-ID: <20050823071605.GA20610@logilab.fr> On Mon, Aug 22, 2005 at 01:27:55PM -0400, Hari Sundar wrote: > Hi, > > Is there a function similar to numarray's fromfile, to read a binary array > from a file ? What is the best way to read such a file into a Numeric array. It depends on the format of the file. If it's a pickled Numeric array, then the pickle module can read it back. Otherwise, you get to read the file (by opening it first and calling its read() method) and you can use the fromstring() function in numeric. 
However you need to know the type of the array and its shape. -- Alexandre Fayolle LOGILAB, Paris (France). http://www.logilab.com http://www.logilab.fr http://www.logilab.org -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 189 bytes Desc: Digital signature URL: From arnd.baecker at web.de Tue Aug 23 00:43:08 2005 From: arnd.baecker at web.de (Arnd Baecker) Date: Tue Aug 23 00:43:08 2005 Subject: [Numpy-discussion] Numeric - fromfile ? In-Reply-To: <20050823071605.GA20610@logilab.fr> References: <54a988e20508221027535066ca@mail.gmail.com> <20050823071605.GA20610@logilab.fr> Message-ID: On Tue, 23 Aug 2005, Alexandre Fayolle wrote: > On Mon, Aug 22, 2005 at 01:27:55PM -0400, Hari Sundar wrote: > > Hi, > > > > Is there a function similar to numarray's fromfile, to read a binary array > > from a file ? What is the best way to read such a file into a Numeric array. Another option is to use scipy.io, e.g.: In [5]:scipy.io.fread? Type: builtin_function_or_method Base Class: String Form: Namespace: Interactive Docstring: g = numpyio.fread( fid, Num, read_type { mem_type, byteswap}) fid = open file pointer object (i.e. from fid = open('filename') ) Num = number of elements to read of type read_type read_type = a character in 'cb1silfdFD' (PyArray types) describing how to interpret bytes on disk. OPTIONAL mem_type = a character (PyArray type) describing what kind of PyArray to return in g. Default = read_type byteswap = 0 for no byteswapping or a 1 to byteswap (to handle different endianness). Default = 0. I use this a lot and it works very well. For example: from scipy import * # --- Create some arrays: x=arange(10)/10.0 z=zeros( (5,5),"d") z_complex=zeros( (5,7),"D") # --- write them as binary data: fp=file("x.dat","wb") io.fwrite(fp,len(x),x) fp.close() fp=file("z.dat","wb") io.fwrite(fp,z.shape[0]*z.shape[1],z) fp.close() fp=file("z_complex.dat","wb") io.fwrite(fp,z_complex.shape[0]*z_complex.shape[1],z_complex) fp.close() # --- and read them back fp=file("x.dat","rb") x_read=io.fread(fp,10,"d") fp.close() fp=file("z.dat","rb") z_read=reshape(io.fread(fp,5*5,"d"),(5,5)) fp.close() fp=file("z_complex.dat","rb") z_complex_read=reshape(io.fread(fp,5*7,"D"),(5,7)) fp.close() Remark: `"wb"` and `"rb"` is only needed under Windows, normally `"w"` and `"r"` would be enough. Best, Arnd From hsundar at gmail.com Tue Aug 23 05:31:12 2005 From: hsundar at gmail.com (Hari Sundar) Date: Tue Aug 23 05:31:12 2005 Subject: [Numpy-discussion] Fast iteration ... Message-ID: <54a988e205082305306d59d76f@mail.gmail.com> Hi, I am trying to compute the histogram of an ND image. I have been able to use the array operation for a part of it and that makes it quite fast. There is one part of my code that I have not been able to vectorize and is therefore quite slow. I was wondering if there is a faster way to do this, since this is substantially slower than what I would get from C code. # a and b are arrays (ND) sz = product(a.shape) ai = a.flat bi = b.flat for i in range(sz): hist_j[bi[i], ai[i]] += 1 hist_a[ai[i]] += 1 hist_b[bi[i]] += 1 thanks, ~Hari -- ??? ?????????????????? ??? ?? ??? ??? ?? ? ? ???????????????? ???? ???????????????? ??? ??? ?? ? ??? ?? Whence all creation had its origin, He, whether he fashioned it or whether He did not, He, who surveys it all from the highest heaven, He knows - or maybe even He does not know. ~Rg veda -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From Scott.Daniels at Acm.Org Tue Aug 23 11:57:53 2005 From: Scott.Daniels at Acm.Org (Scott David Daniels) Date: Tue Aug 23 11:57:53 2005 Subject: [Numpy-discussion] Re: Numeric - fromfile ? In-Reply-To: References: <54a988e20508221027535066ca@mail.gmail.com> <20050823071605.GA20610@logilab.fr> Message-ID: Arnd Baecker wrote: and bunch of good stuff, and > ... > fp=file("x.dat","wb") > io.fwrite(fp,len(x),x) > fp.close() > ... > # --- and read them back > fp=file("x.dat","rb") > x_read=io.fread(fp,10,"d") > fp.close() > > Remark: `"wb"` and `"rb"` is only needed under Windows, > normally `"w"` and `"r"` would be enough. This is only true if you are talking about Unix/Linux vs. Windows. There are other systems where the 'b' is necessary, and even where it makes no difference it explicitly states your intent. So, I'd always use "wb" or "rb" in these cases. On some filesystems, (VMS comes to mind) a text file is a completely different format from a binary file. --Scott David Daniels Scott.Daniels at Acm.Org From oliphant at ee.byu.edu Wed Aug 24 16:33:33 2005 From: oliphant at ee.byu.edu (Travis Oliphant) Date: Wed Aug 24 16:33:33 2005 Subject: [Numpy-discussion] scipy.base (Numeric3) ready for extension modules to test Message-ID: <430D0374.1040105@ee.byu.edu> scipy.base is in a beta-quality state right now. The Packages are not ported, but the basic object works as far as I can tell. If you have extension modules that you would like to try and compile with the new system, I would appreciate the feedback. Note, this is for Numeric users only. The C-API follows the Numeric convention. There is a COMPATIBILITY file that lists a few of the changes (there are actually very few...), that may need to be made. Only if you used descr->zero, descr->one, or explicit typecode strings in your c-code should you need to make any changes. There are changes you may wish to make to improve your code to take advantage of the extended C-API, but you shouldn't have to, except in a couple of cases. You can check the code base out of anonymous CVS (wait a day) and try it.... -Travis O. From pontifor at yahoo.com Fri Aug 26 10:26:48 2005 From: pontifor at yahoo.com (Stefan Kuzminski) Date: Fri Aug 26 10:26:48 2005 Subject: [Numpy-discussion] is this a bug? Message-ID: <20050826172459.59694.qmail@web50608.mail.yahoo.com> >>> from numarray import * >>> x = ones(22400,Float) >>> print add.reduce(x) 22400.0 >>> print add.reduce(x!=0) -128 >>> print add.reduce((x!=0).astype(Int)) 22400 it seems like the boolean result of the expression ( middle try ) causes a problem? thanks, Stefan Kuzminski ____________________________________________________ Start your day with Yahoo! - make it your home page http://www.yahoo.com/r/hs From jmiller at stsci.edu Fri Aug 26 12:15:32 2005 From: jmiller at stsci.edu (Todd Miller) Date: Fri Aug 26 12:15:32 2005 Subject: [Numpy-discussion] is this a bug? In-Reply-To: <20050826172459.59694.qmail@web50608.mail.yahoo.com> References: <20050826172459.59694.qmail@web50608.mail.yahoo.com> Message-ID: <1125083645.21580.104.camel@halloween.stsci.edu> On Fri, 2005-08-26 at 13:24, Stefan Kuzminski wrote: > >>> from numarray import * > >>> x = ones(22400,Float) > >>> print add.reduce(x) > 22400.0 > >>> print add.reduce(x!=0) > -128 > >>> print add.reduce((x!=0).astype(Int)) > 22400 > > it seems like the boolean result of the expression ( middle try ) > causes a problem? 
This issue has been discussed before and the general consensus was that this (somewhat treacherous) behavior should not change. For array totals (reducing on all axes at once), numarray has a sum() method which by default does do a type promotion to the "max type of kind", so integers -> Int64, floats -> Float64, and complexes -> Complex64 prior to the reduction. Regards, Todd From humufr at yahoo.fr Mon Aug 29 12:17:10 2005 From: humufr at yahoo.fr (Humufr) Date: Mon Aug 29 12:17:10 2005 Subject: [Numpy-discussion] bug in numarray? Message-ID: <43135F26.50509@yahoo.fr> Hi, I think there are a problem with numarray (not sure). I'm trying to correlate two differents file to find the same object in both. To do this I wrote some ugly software and I'm using the readcol2.py to read the file in a numarray, numarray string or list format. The cross_name.py is doing the cross correlation when I'm using the numarray string format. I'm using three parameters at differents columns and I compare all of them with something like: numarray.all(a[i,:] == b[j,:]) I saw that my script is very very slow or to be more precise became to be slow. It's seems ok at the beginning but little by little is slow down by a huge amount. I let it turn all the week end and it found ~40 000 objects (both files are ~200000 lines...) in common in two days. I change the software to use the list in python and in some minutes I'have ~20 000 objects found in common. So I think there are a big problem probably: 1) in my script, perhaps 2) in numarray or 3) in both. I hope to have explain the problem clearly ... N. ps: I print an output for the script cross_name.py to visually see the slow down and that appeard to became slow around the 700 objects in common but it's gradully decline. pps: I join the different file I used. The cross_name.py is the function with the problem. ------------------------------------- #readcol2.py ------------------------------------- def readcol(fname,comments='%',columns=None,delimiter=None,dep=0,arraytype='list'): """ Load ASCII data from fname into an array and return the array. The data must be regular, same number of values in every row fname can be a filename or a file handle. Input: - Fname : the name of the file to read Optionnal input: - comments : a string to indicate the charactor to delimit the domments. the default is the matlab character '%'. - columns : list or tuple ho contains the columns to use. - delimiter : a string to delimit the columns - dep : an integer to indicate from which line you want to begin to use the file (useful to avoid the descriptions lines) - arraytype : a string to indicate which kind of array you want ot have: numeric array (numeric) or character array (numstring) or list (list). 
By default it's the list mode used matfile data is not currently supported, but see Nigel Wade's matfile ftp://ion.le.ac.uk/matfile/matfile.tar.gz Example usage: x,y = transpose(readcol('test.dat')) # data in two columns X = readcol('test.dat') # a matrix of data x = readcol('test.dat') # a single column of data x = readcol('test.dat,'#') # the character use like a comment delimiter is '#' initial function from pylab, improve by myself for my need """ from numarray import array,transpose fh = file(fname) X = [] numCols = None nline = 0 if columns is None: for line in fh: nline += 1 if dep is not None and nline <= dep: continue line = line[:line.find(comments)].strip() if not len(line): continue if arraytype=='numeric': row = [float(val) for val in line.split(delimiter)] else: row = [val.strip() for val in line.split(delimiter)] thisLen = len(row) if numCols is not None and thisLen != numCols: raise ValueError('All rows must have the same number of columns') X.append(row) else: for line in fh: nline +=1 if dep is not None and nline <= dep: continue line = line[:line.find(comments)].strip() if not len(line): continue row = line.split(delimiter) if arraytype=='numeric': row = [float(row[i-1]) for i in columns] elif arraytype=='numstring': row = [row[i-1].strip() for i in columns] else: row = [row[i-1].strip() for i in columns] thisLen = len(row) if numCols is not None and thisLen != numCols: raise ValueError('All rows must have the same number of columns') X.append(row) if arraytype=='numeric': X = array(X) r,c = X.shape if r==1 or c==1: X.shape = max([r,c]), elif arraytype == 'numstring': import numarray.strings # pb si numeric+pylab X = numarray.strings.array(X) r,c = X.shape if r==1 or c==1: X.shape = max([r,c]), return X ---------------------------------------------------------------- #cross_name.py ---------------------------------------------------------------- #/usr/bin/env python ''' Software to cross correlate two files. To use it you had to file a params file who contains the information of the file you want to correlate. The information must have the format: namefile = list of column ; delimiter example: file1 = 1,2,3 ; file2 = 20,19,21 ; , no delimiter = blanck ''' # there are a big problem of efficiency. The software is far to long with big file like SDSS. 
# I had to find where is the problem import sys import numarray import string #read the params file params = {} for line in file(sys.argv[1],'rU'): line = line.strip() # delete the end of line (\n on unix) if not len(line): continue # is line empty do nothing and pass to the next line if line.startswith('#'): continue # test if the line is a comments (# is the character to signal it) tup = line.split('=',1) # split the line, the delimiter is the sign = columns = [int(i) for i in tup[1].strip().split(';')[0].strip().split(',')] # creat a list who contains # the columns we want to use delimiter = tup[1].strip().split(';')[1].strip() # check the delimiter of the data file (generally space or coma) if not len(delimiter): delimiter = None params[tup[0].strip()] = { 'columns' : columns, 'delimiter' : delimiter} # Read the data files (only the columns ask in the params file) debut_data = 1 data = [] for namefile in params.iterkeys(): import readcol2 #import the function to read the files #data.append(readcol2.readcol(namefile,columns=params[namefile]['columns'],comments='#',delimiter=params[namefile]['delimiter'],dep=1,arraytype='character')) params[namefile]['data'] = readcol2.readcol(namefile,columns=params[namefile]['columns'],comments='#',delimiter=params[namefile]['delimiter'],dep=debut_data,arraytype='character') # Read another times the data files to have all the lines! # Question: like it's a dictionnary are we sure that the file are in the same order... Check it!!!!!!!!! if len(params.keys()) == 2: namefile,data,delimiter = [],[],[] for keys in params.iterkeys(): namefile.append(keys) data.append(params[keys]['data']) delim = params[keys]['delimiter'] if delim != None: delimiter.append(params[keys]['delimiter']) else: delimiter.append(' ') #res_a = [] #res_b = [] f1_ini = file(namefile[0]).readlines()[debut_data:] f2_ini = file(namefile[1]).readlines()[debut_data:] #f1_ini = [line for line in file(namefile[0])][debut_data:] #f2_ini = [line for line in file(namefile[1])][debut_data:] f1=open('cross'+namefile[0],'w') f2=open('cross'+namefile[1],'w') f3=open('pastecross'+namefile[0]+namefile[1],'w') b_i = 0 for a_i in range(data[0].shape[0]): for b_i in range(b_i,data[1].shape[0]): if numarray.all(data[0][a_i,:] == data[1][b_i,:]): f1.write(f1_ini[a_i]) f2.write(f2_ini[b_i]) f3.write(f1_ini[a_i].strip()+delimiter[0]+' '+string.replace(f2_ini[b_i],delimiter[1],delimiter[0])) del f2_ini[b_i] break #res_a.append(a_i) #res_b.append(b_i) f1.close() f2.close() f3.close() else: print "too much file: only two allowed for the moment" #save the results in 3 files: 2 with the common objects from each file. # one with a paste of the lines of the 2 initial files. ----------------------------------------------------------------------- #cross_name2.py --------------------------------------------------------------------- #/usr/bin/env python ''' Software to cross correlate two files. To use it you had to file a params file who contains the information of the file you want to correlate. The information must have the format: namefile = list of column ; delimiter example: file1 = 1,2,3 ; file2 = 20,19,21 ; , no delimiter = blanck ''' # there are a big problem of efficiency. The software is far to long with big file like SDSS. 
# I had to find where is the problem import sys import numarray import string #read the params file params = {} for line in file(sys.argv[1],'rU'): line = line.strip() # delete the end of line (\n on unix) if not len(line): continue # is line empty do nothing and pass to the next line if line.startswith('#'): continue # test if the line is a comments (# is the character to signal it) tup = line.split('=',1) # split the line, the delimiter is the sign = columns = [int(i) for i in tup[1].strip().split(';')[0].strip().split(',')] # creat a list who contains # the columns we want to use delimiter = tup[1].strip().split(';')[1].strip() # check the delimiter of the data file (generally space or coma) if not len(delimiter): delimiter = None params[tup[0].strip()] = { 'columns' : columns, 'delimiter' : delimiter} # Read the data files (only the columns ask in the params file) debut_data = 1 data = [] for namefile in params.iterkeys(): import readcol2 #import the function to read the files #data.append(readcol2.readcol(namefile,columns=params[namefile]['columns'],comments='#',delimiter=params[namefile]['delimiter'],dep=1,arraytype='character')) params[namefile]['data'] = readcol2.readcol(namefile,columns=params[namefile]['columns'],comments='#',delimiter=params[namefile]['delimiter'],dep=debut_data,arraytype='list') # Read another times the data files to have all the lines! # Question: like it's a dictionnary are we sure that the file are in the same order... Check it!!!!!!!!! if len(params.keys()) == 2: namefile,data,delimiter = [],[],[] for keys in params.iterkeys(): namefile.append(keys) data.append(params[keys]['data']) delim = params[keys]['delimiter'] if delim != None: delimiter.append(params[keys]['delimiter']) else: delimiter.append(' ') #res_a = [] #res_b = [] f1_ini = file(namefile[0]).readlines()[debut_data:] f2_ini = file(namefile[1]).readlines()[debut_data:] #f1_ini = [line for line in file(namefile[0])][debut_data:] #f2_ini = [line for line in file(namefile[1])][debut_data:] f1=open('cross'+namefile[0],'w') f2=open('cross'+namefile[1],'w') f3=open('pastecross'+namefile[0]+namefile[1],'w') # i=0 # for a_i in range(len(data[0])): # #print data[0][a_i,:] # for b_i in range(len(data[1])): # if data[0][a_i] == data[1][b_i]: # print data[0][a_i],data[1][b_i] # i+=1 # print i # break b_i=0 for a_i in range(len(data[0])): for b_i in range(b_i,len(data[1])): if data[0][a_i] == data[1][b_i]: f1.write(f1_ini[a_i]) f2.write(f2_ini[b_i]) f3.write(f1_ini[a_i].strip()+delimiter[0]+' '+string.replace(f2_ini[b_i],delimiter[1],delimiter[0])) del f2_ini[b_i] break #res_a.append(a_i) #res_b.append(b_i) f1.close() f2.close() f3.close() else: print "too much file: only two allowed for the moment" #save the results in 3 files: 2 with the common objects from each file. # one with a paste of the lines of the 2 initial files. From gruel at astro.ufl.edu Mon Aug 29 14:54:12 2005 From: gruel at astro.ufl.edu (Nicolas Gruel) Date: Mon Aug 29 14:54:12 2005 Subject: [Numpy-discussion] bug in numarray? Message-ID: <43132C5D.70107@astro.ufl.edu> Hi, I think there are a problem with numarray (not sure). I'm trying to correlate two differents file to find the same object in both. To do this I wrote some ugly software and I'm using the readcol2.py to read the file in a numarray, numarray string or list format. The cross_name.py is doing the cross correlation when I'm using the numarray string format. 
I'm using three parameters at differents columns and I compare all of them with something like: numarray.all(a[i,:] == b[j,:]) I saw that my script is very very slow or to be more precise became to be slow. It's seems ok at the beginning but little by little is slow down by a huge amount. I let it turn all the week end and it found ~40 000 objects (both files are ~200000 lines...) in common in two days. I change the software to use the list in python and in some minutes I'have ~20 000 objects found in common. So I think there are a big problem probably: 1) in my script, perhaps 2) in numarray or 3) in both. I hope to have explain the problem clearly ... N. ps: I print an output for the script cross_name.py to visually see the slow down and that appeard to became slow around the 700 objects in common but it's gradully decline. -------------- next part -------------- An embedded and charset-unspecified text was scrubbed... Name: readcol2.py URL: -------------- next part -------------- An embedded and charset-unspecified text was scrubbed... Name: cross_name.py URL: -------------- next part -------------- An embedded and charset-unspecified text was scrubbed... Name: cross_name2.py URL: From NadavH at VisionSense.com Mon Aug 29 22:46:11 2005 From: NadavH at VisionSense.com (Nadav Horesh) Date: Mon Aug 29 22:46:11 2005 Subject: [Numpy-discussion] Matching Nueric3/numarray namig conentions. Message-ID: <4313EFD7.5040301@VisionSense.com> Just started to play with Numeric3, looks as a significant usability improvement but.... Same functions/classes are named differently in numarray and Numeric3, for instance typecodes. I thing that agreeing on the same names for identical functions/classes would make the users life easier for either porting or alternating back ends. I believe that it may help unifying the two projects. Nadav. From oliphant at ee.byu.edu Mon Aug 29 22:59:03 2005 From: oliphant at ee.byu.edu (Travis Oliphant) Date: Mon Aug 29 22:59:03 2005 Subject: [Numpy-discussion] Matching Nueric3/numarray namig conentions. In-Reply-To: <4313EFD7.5040301@VisionSense.com> References: <4313EFD7.5040301@VisionSense.com> Message-ID: <4313F560.1000200@ee.byu.edu> Nadav Horesh wrote: >Just started to play with Numeric3, looks as a significant usability >improvement but.... >Same functions/classes are named differently in numarray and Numeric3, >for instance typecodes. > > This is true for only a few cases. Mostly the names are compatible, but some of the naming conventions needed changing... For example: We have used type for the name of the data type in a numeric array. But, this can be confusing because type refers to the kind of Python object and all arrays are the same kind of python object. In addition, it is natural to use the type= keyword in array constructors, but this then blocks the use of that builtin for the function it is used with. Of course typecode was previously chosen by Numeric, but now the types are not codes (they are really type objects). Thus, I have been calling type (dtype) in the new scipy.base. The alternative is to keep the name type (eliminate the use of typecode, and rename python's type function to pytype within scipy). It could easily be changed if that is a real problem. Because of the signficantly different usage of types in the new system, it is helpful to have a different name (dtype). But, I could be persuaded to use the word type and rename Python's type to pytype. 
-Travis From NadavH at VisionSense.com Tue Aug 30 01:00:28 2005 From: NadavH at VisionSense.com (Nadav Horesh) Date: Tue Aug 30 01:00:28 2005 Subject: [Numpy-discussion] Matching Nueric3/numarray namig conentions. In-Reply-To: <4313F560.1000200@ee.byu.edu> References: <4313EFD7.5040301@VisionSense.com> <4313F560.1000200@ee.byu.edu> Message-ID: <43140FAC.4010802@VisionSense.com> I am not picky about which name to use. It is would be the same for me if Jay Miller would add a support for dtype keyword, and switch Int32 for int32 (or vice versa). In this case you both agree that types should be classes (although Numeric3 types == type is better) and not strings. Once there is an agreement on the functions, methods and keyword (for instance should arange function have a shape keyword), the exact names choice should be an easy issue to overcome. Nadav. Travis Oliphant wrote: > Nadav Horesh wrote: > >> Just started to play with Numeric3, looks as a significant usability >> improvement but.... >> Same functions/classes are named differently in numarray and Numeric3, >> for instance typecodes. >> >> > This is true for only a few cases. Mostly the names are compatible, but > some of the naming conventions needed changing... > For example: > > We have used type for the name of the data type in a numeric array. But, > this can be confusing because type refers to the kind of Python object > and all arrays are the same kind of python object. In addition, it is > natural to use the type= keyword in array constructors, but this then > blocks the use of that builtin for the function it is used with. Of > course typecode was previously chosen by Numeric, but now the types > are not codes (they are really type objects). Thus, I have been > calling type (dtype) in the new scipy.base. The alternative is to > keep the name type (eliminate the use of typecode, and rename python's > type function to pytype within scipy). > > It could easily be changed if that is a real problem. Because of > the signficantly different usage of types in the new system, it is > helpful to have a different name (dtype). But, I could be persuaded > to use the word type and rename Python's type to pytype. > > -Travis > > > > From cjw at sympatico.ca Tue Aug 30 05:12:47 2005 From: cjw at sympatico.ca (Colin J. Williams) Date: Tue Aug 30 05:12:47 2005 Subject: [Numpy-discussion] Matching Nueric3/numarray namig conentions. In-Reply-To: <4313F560.1000200@ee.byu.edu> References: <4313EFD7.5040301@VisionSense.com> <4313F560.1000200@ee.byu.edu> Message-ID: <43144CB8.9000603@sympatico.ca> Travis Oliphant wrote: > Nadav Horesh wrote: > >> Just started to play with Numeric3, looks as a significant usability >> improvement but.... >> Same functions/classes are named differently in numarray and Numeric3, >> for instance typecodes. >> >> > This is true for only a few cases. Mostly the names are compatible, but > some of the naming conventions needed changing... > For example: > > We have used type for the name of the data type in a numeric array. But, > this can be confusing because type refers to the kind of Python object > and all arrays are the same kind of python object. In addition, it is > natural to use the type= keyword in array constructors, but this then > blocks the use of that builtin for the function it is used with. Of > course typecode was previously chosen by Numeric, but now the types > are not codes (they are really type objects). Thus, I have been > calling type (dtype) in the new scipy.base. 
The alternative is to > keep the name type (eliminate the use of typecode, and rename python's > type function to pytype within scipy). > These changes make sense (1) replacing type by dtype (dType?) and (2) replacing typecode by dType instances. It would be good if, as suggested by Nadav, the first change could be made to numarray. > It could easily be changed if that is a real problem. Because of > the signficantly different usage of types in the new system, it is > helpful to have a different name (dtype). But, I could be persuaded > to use the word type and rename Python's type to pytype. This, I suggest, would be a step back. Is there any plan to make Win32 binary version available for testing? Past efforts to compile have failed. Colin W, From cjw at sympatico.ca Tue Aug 30 05:21:55 2005 From: cjw at sympatico.ca (Colin J. Williams) Date: Tue Aug 30 05:21:55 2005 Subject: [Numpy-discussion] Matching Nueric3/numarray namig conentions. In-Reply-To: <4313F560.1000200@ee.byu.edu> References: <4313EFD7.5040301@VisionSense.com> <4313F560.1000200@ee.byu.edu> Message-ID: <43144F20.8090800@sympatico.ca> Travis Oliphant wrote: > Nadav Horesh wrote: > >> Just started to play with Numeric3, looks as a significant usability >> improvement but.... >> Same functions/classes are named differently in numarray and Numeric3, >> for instance typecodes. >> >> > This is true for only a few cases. Mostly the names are compatible, but > some of the naming conventions needed changing... > For example: > > We have used type for the name of the data type in a numeric array. But, > this can be confusing because type refers to the kind of Python object > and all arrays are the same kind of python object. In addition, it is > natural to use the type= keyword in array constructors, but this then > blocks the use of that builtin for the function it is used with. Of > course typecode was previously chosen by Numeric, but now the types > are not codes (they are really type objects). Thus, I have been > calling type (dtype) in the new scipy.base. The alternative is to > keep the name type (eliminate the use of typecode, and rename python's > type function to pytype within scipy). > [error] - this should have read: These changes make sense (1) replacing type by dtype (dType?) and (2) replacing typecode by dType an instance of a Numeric types class. It would be good if, as suggested by Nadav, the first change could be made to numarray. He indicates that the naming of the new Numeric types classes is different from that used by numarray. Is it necessary to change this? > It could easily be changed if that is a real problem. Because of > the signficantly different usage of types in the new system, it is > helpful to have a different name (dtype). But, I could be persuaded > to use the word type and rename Python's type to pytype. This, I suggest, would be a step back. Is there any plan to make Win32 binary version available for testing? Past efforts to compile have failed. 
Colin W, ------------------------------------------------------- SF.Net email is Sponsored by the Better Software Conference & EXPO September 19-22, 2005 * San Francisco, CA * Development Lifecycle Practices Agile & Plan-Driven Development * Managing Projects & Teams * Testing & QA Security * Process Improvement & Measurement * http://www.sqe.com/bsce5sf _______________________________________________ Numpy-discussion mailing list Numpy-discussion at lists.sourceforge.net https://lists.sourceforge.net/lists/listinfo/numpy-discussion From pbtransfert at freesurf.fr Wed Aug 31 00:08:08 2005 From: pbtransfert at freesurf.fr (pbtransfert at freesurf.fr) Date: Wed Aug 31 00:08:08 2005 Subject: [Numpy-discussion] portability issue Message-ID: <45699.192.54.193.37.1125472030.squirrel@jose.freesurf.fr> hi ! i try to transfer a pickle which contains numeric array, from a 64-bits system to a 32-bits system. it seems to fail due to bad (or lack of) conversion... more precisely, here is what i do on the 64-bits system : import Numeric,cPickle a=Numeric.array([1,2,3]) f=open('test.pickle64','w') cPickle.dump(a,f) f.close() and here is what i try to do on the 32-bits system : import Numeric,cPickle f=open('test.pickle64','r') a=cPickle.load(f) f.close() and here is the log of the load : a=cPickle.load(f) File "/usr/lib/python2.3/site-packages/Numeric/Numeric.py", line 539, in array_constructor x.shape = shape ValueError: ('total size of new array must be unchanged', <function array_constructor at 0x40a1002c>, ((3,), 'l', '\x01\x00\x00\x00\x00\x00\x00\x00\x02\x00\x00\x00\x00\x00\x00\x00\x03\x00\x00\x00\x00\x00\x00\x00',True)) Is there something to do to solve this difficulty ? thanks PB From cookedm at physics.mcmaster.ca Wed Aug 31 11:16:19 2005 From: cookedm at physics.mcmaster.ca (David M. Cooke) Date: Wed Aug 31 11:16:19 2005 Subject: [Numpy-discussion] portability issue In-Reply-To: <45699.192.54.193.37.1125472030.squirrel@jose.freesurf.fr> (pbtransfert@freesurf.fr's message of "Wed, 31 Aug 2005 09:07:10 +0200 (CEST)") References: <45699.192.54.193.37.1125472030.squirrel@jose.freesurf.fr> Message-ID: writes: > hi ! > > i try to transfer a pickle which contains numeric array, from a 64-bits > system to a 32-bits system. it seems to fail due to bad (or lack of) > conversion... more precisely, here is what i do on the 64-bits system : > > import Numeric,cPickle > a=Numeric.array([1,2,3]) > f=open('test.pickle64','w') > cPickle.dump(a,f) > f.close() > > and here is what i try to do on the 32-bits system : > > import Numeric,cPickle > f=open('test.pickle64','r') > a=cPickle.load(f) > f.close() > > and here is the log of the load : > > a=cPickle.load(f) > File "/usr/lib/python2.3/site-packages/Numeric/Numeric.py", line 539, in > array_constructor > x.shape = shape > ValueError: ('total size of new array must be unchanged', <function > array_constructor at 0x40a1002c>, ((3,), 'l', > '\x01\x00\x00\x00\x00\x00\x00\x00\x02\x00\x00\x00\x00\x00\x00\x00\x03\x00\x00\x00\x00\x00\x00\x00',True)) > > > Is there something to do to solve this difficulty ? Specify the integer type with the number of bits. Numeric.array([1,2,3]) will create an array with a typecode of 'l' (Numeric.Int), which is the type that can hold Python ints (= C longs). On your 64-bit system, it's a 64-bit integer; on the 32-bit system, it's a 32-bit integer. So, on the 32-bit system, when reading the pickle, it sees an array of type 'l', but there is too much data to fill the array it expects. The solution is to explicitly create your array using a typecode that gives the size of the integer.
Either: a = Numeric.array([1,2,3], Numeric.Int32) or a = Numeric.array([1,2,3], Numeric.Int64) I haven't checked this, but I would think that using Int32 is better if all your numbers will fit in that. Using 64-bit integers would mean the 32-bit machine would have to use 'long long' types to do its math, which would be slower, while using 32-bit integers would mean the 64-bit machine would use 'int', which would still be fast for it. -- |>|\/|< /--------------------------------------------------------------------------\ |David M. Cooke http://arbutus.physics.mcmaster.ca/dmc/ |cookedm at physics.mcmaster.ca
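A minimal sketch of the fix described above, assuming Numeric is installed; the file name test.pickle32 is chosen only for illustration:

import Numeric, cPickle

# write with an explicit fixed-width typecode so the pickle means the
# same thing on 32-bit and 64-bit machines
a = Numeric.array([1, 2, 3], Numeric.Int32)
f = open('test.pickle32', 'wb')
cPickle.dump(a, f)
f.close()

# read it back on either platform
f = open('test.pickle32', 'rb')
b = cPickle.load(f)
f.close()

As noted above, Int32 also keeps the arithmetic fast on both machines, provided the values fit.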
From jmiller at stsci.edu Thu Aug 4 07:43:28 2005 From: jmiller at stsci.edu (Todd Miller) Date: Thu Aug 4 07:43:28 2005 Subject: [Numpy-discussion] numarray: PyArrayObject supports number protocols? In-Reply-To: <04f1e4748cb8a90ac8770f50e66f1453@stsci.edu> References: <42ED9D73.1010400@comcast.net> <04f1e4748cb8a90ac8770f50e66f1453@stsci.edu> Message-ID: <1123166524.4467.12.camel@halloween.stsci.edu> On Tue, 2005-08-02 at 15:53, Perry Greenfield wrote: > On Jul 31, 2005, at 11:56 PM, Edward C. Jones wrote: > > > I just saw in the docs, Section 14.2.4: > > > > It should be noted that unlike earlier versions of numarray, the > > present PyArrayObject structure is a first class python object, with > > full support for the number protocols in C. > > > > Does this mean that I can add two numarrays in C using "PyNumber_Add"? Yes. It should be noted that the numarray implementation of the number protocol is (still) in Python so there are issues with "atomicity" and the global interpreter lock just as there are with other Python callbacks from C. The documentation is referring to early versions of numarray where the C-API's C-representation of an array was not a Python object at all. Regards, Todd > Todd is away for a few days, and I haven't found the actual support for > this in the code, so I'm going to wait until he's back for a definitive > answer. > > Perry From alex_sch at telus.net Sat Aug 6 14:53:09 2005 From: alex_sch at telus.net (Alex Schultz) Date: Sat Aug 6 14:53:09 2005 Subject: [Numpy-discussion] Optimization Question Message-ID: <42F53139.3040604@telus.net> Hi, I made nice function that composts two RGBA image arrays, however I'm having issues with it being too slow.
Does any body have any suggestions on how to optimize this (the subfunction 'maknewpixels' is a separation for profiling purposes, and it takes almost as much time as the other parts of the function): def compost(lowerlayer, upperlayer): def makenewpixels(ltable, utable, h, w, upperalphatable, loweralphatable): newpixels = Numeric.zeros((h, w, 4), 'b') newpixels[:,:,3] = Numeric.clip(upperalphatable + loweralphatable, 0, 255).astype('b') newpixels[:,:,0:3] = (utable + ltable).astype('b') return newpixels h = Numeric.size(lowerlayer, 0) w = Numeric.size(lowerlayer, 1) upperalphatable = upperlayer[:,:,3] loweralphatable = lowerlayer[:,:,3] #calculate the upper layer's portion of the composted picture utable = Numeric.swapaxes(Numeric.swapaxes(upperlayer[:,:,0:3], 0, 2) * upperalphatable / 255, 2, 0) #calculate the lower layer's portion of the composted picture ltable = Numeric.swapaxes(Numeric.swapaxes(lowerlayer[:,:,0:3], 0, 2) * (255 - upperalphatable) / 255, 2, 0) return makenewpixels(ltable, utable, h, w, upperalphatable, loweralphatable) Thanks, Alex Schultz From tp-verify at paypal.com Mon Aug 8 09:11:29 2005 From: tp-verify at paypal.com (Paypal Security Center) Date: Mon Aug 8 09:11:29 2005 Subject: [Numpy-discussion] PayPal Accounts Management Message-ID: <200508081611.j78GB0EW029616@linux4.fastname.no> An HTML attachment was scrubbed... URL: From gvwilson at cs.utoronto.ca Tue Aug 9 11:52:34 2005 From: gvwilson at cs.utoronto.ca (Greg Wilson) Date: Tue Aug 9 11:52:34 2005 Subject: [Numpy-discussion] re: software skills course Message-ID: Hi, I'm working with support from the Python Software Foundation to develop an open source course on basic software development skills for people with backgrounds in science and engineering. I have a beta version of the course notes ready for review, and would like to pull in people in sci&eng to look it over and give me feedback. If you know anyone who fits this bill (particularly people who might be interested in following along with a trial run of the course this fall), I'd be grateful for pointers. Thanks, Greg Wilson From Sebastien.deMentendeHorne at electrabel.com Wed Aug 10 05:08:57 2005 From: Sebastien.deMentendeHorne at electrabel.com (Sebastien.deMentendeHorne at electrabel.com) Date: Wed Aug 10 05:08:57 2005 Subject: [Numpy-discussion] new __wrap_array__ magic method Message-ID: <6E48F3D185CF644788F55917A0D50A93017CF4FE@seebex02.eib.electrabel.be> Hi, Currently, we have an __array__ magic method that can be used to transform any object that implements it into an array. It think that a more useful magic method would be, for ufuncs, a __wrap_array__ method that would return an array object and a function to use after having applied the ufunc. For instance: class TimeSerie: def __init__(self, data, times): self.data = data # an array self.times = times # anything, could be any metadata def __wrap_array__(self, ufunc): return self.data, lambda data: TimeSerie(data, self.times) t = TimeSerie( arange(100), range(100) ) cos(t) # returns a TimeSerie object equivalent to TimeSerie( cos(arange(100)), range(100) ) This needs probably a change in the ufunc code that would first check if __wrap_array__ is there and if so, use it to get an array as well as a "constructor" to use for returning an object other than an array. Benefits: - easier to wrap array objects with metadata without rewriting all ufunc (see MaskedArray for problematic). - ufunc( list ) -> list and ufunc( tuple ) -> tuple instead of returning always arrays. 
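A rough pure-Python sketch of the dispatch the __wrap_array__ proposal above implies, reusing the TimeSerie class from the message; apply_ufunc is a hypothetical helper, not an existing Numeric function, and a real implementation would do this check inside the C ufunc machinery:

import Numeric

def apply_ufunc(ufunc, obj):
    # objects that opt in via __wrap_array__ hand back their raw array plus
    # a constructor; the ufunc runs on the array and the result is re-wrapped
    if hasattr(obj, '__wrap_array__'):
        data, rebuild = obj.__wrap_array__(ufunc)
        return rebuild(ufunc(data))
    # everything else goes through the ufunc unchanged
    return ufunc(obj)

# apply_ufunc(Numeric.cos, TimeSerie(Numeric.arange(100), range(100)))
# returns a TimeSerie whose .data is the cosine of the original data and
# whose .times metadata is carried along untouched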
Do you see an interest of doing so ? Does it need a lot of internal changes to Numeric/numarray/scipy.core ? Best, Sebastien ======================================================= This message is confidential. It may also be privileged or otherwise protected by work product immunity or other legal rules. If you have received it by mistake please let us know by reply and then delete it from your system; you should not copy it or disclose its contents to anyone. All messages sent to and from Electrabel may be monitored to ensure compliance with internal policies and to protect our business. Emails are not secure and cannot be guaranteed to be error free as they can be intercepted, amended, lost or destroyed, or contain viruses. Anyone who communicates with us by email is taken to accept these risks. http://www.electrabel.be/homepage/general/disclaimer_EN.asp ======================================================= From oliphant at ee.byu.edu Wed Aug 10 10:02:11 2005 From: oliphant at ee.byu.edu (Travis Oliphant) Date: Wed Aug 10 10:02:11 2005 Subject: [Numpy-discussion] new __wrap_array__ magic method In-Reply-To: <6E48F3D185CF644788F55917A0D50A93017CF4FE@seebex02.eib.electrabel.be> References: <6E48F3D185CF644788F55917A0D50A93017CF4FE@seebex02.eib.electrabel.be> Message-ID: <42FA32C2.8050308@ee.byu.edu> Sebastien.deMentendeHorne at electrabel.com wrote: >Hi, > >Currently, we have an __array__ magic method that can be used to transform any object that implements it into an array. >It think that a more useful magic method would be, for ufuncs, a __wrap_array__ method that would return an array object and a function to use after having applied the ufunc. > > There is an interest in doing this but it is a bit more complicated for ufuncs with more than one input argument and more than one output argument. I have not seen a proposal that really works. Your __wrap_array__ method sounds interesting though. I think, however, that the __wrap_array__ method would need to provide some kind of array_interfaceable object to be really useful. In the new scipy.base, I've been trying to handle some of this, but it is more complicated than it might at first appear. -Travis From oliphant at ee.byu.edu Wed Aug 10 11:53:14 2005 From: oliphant at ee.byu.edu (Travis Oliphant) Date: Wed Aug 10 11:53:14 2005 Subject: [Numpy-discussion] scipy.base (Numeric3) ready for alpha use Message-ID: <42FA4CF4.4050909@ee.byu.edu> For anybody interested in the development of scipy.base. The repository is in a state that can be tested and played with. I'm sure there are bugs, but I've removed the ones I've found. I'd be interested in help in tracking down others. Over the next few weeks, we will be attempting to build scipy using the new scipy.base. This should also help iron out some problems. Some of the notable features that came out of the ufunc adapting process are 1) reductions can now take place over a type different than the array type. Thus, if B is a byte array you can reduce over a long type to avoid modular arithmetic (overflow). 2) reduceat now takes an axis argument 3) copies are not made of large arrays but a buffering-scheme is used for casting and mis-behaved arrays. 
4) the size of buffers used and what is meant by "large array" can be adjusted on a per function / module / global basis by setting the variable UFUNC_BUFSIZE in the local / module / global (builtin) scope 5) how errors in ufuncs are handled can be set and over-ridden on a function / module / global basis through the variable UFUNC_ERRMASK 6) you have another option besides ignore, warn, or raise. You can specify a Python function to call when an error occurs through the variable UFUNC_ERRFUNC (right now all errors go to this same function and a string is based indiciating which error has occurred). I should explain this idea of using variables to set information for the ufuncs. It comes out of an idea that Guido mentioned while Perry, Paul, and I met with him back in March. When he was informed of numarray's stack approach to error handling he questioned that design decision. He wondered if the error handling could not be defined on a per module basis. With that idea, it was relatively straightforward to implement a procedure wherein error behavior for ufuncs is determined by looking in the local, then global (module level), and finally builtin scope for a specific variable. This look-up is done at the beginning of the ufunc call. It will obviously add some time to code which loops through repeated look up calls (how much I'm not sure). Perhaps there is a way to ameliorate this, but until we see some performance issues, I'm not inclined to spend too much time on premature optimization. Comments and especially people with an inkling to try very alpha code out (i.e. it could segfault on you) are welcomed. -Travis O. From oliphant at ee.byu.edu Wed Aug 10 12:43:16 2005 From: oliphant at ee.byu.edu (Travis Oliphant) Date: Wed Aug 10 12:43:16 2005 Subject: [Numpy-discussion] Checking out scipy.base (Numeric3) Message-ID: <42FA5897.60204@ee.byu.edu> You can check out scipy.base using: cvs -d:pserver:anonymous at cvs.sourceforge.net:/cvsroot/numpy login (press Enter when asked for login) cvs -z3 -d:pserver:anonymous at cvs.sourceforge.net:/cvsroot/numpy co -P /Numeric3\ -Travis / From oliphant at ee.byu.edu Wed Aug 10 14:09:42 2005 From: oliphant at ee.byu.edu (Travis Oliphant) Date: Wed Aug 10 14:09:42 2005 Subject: [Numpy-discussion] Sourceforge pserver CVS access acting strange -> don't get Numeric3 that way. Message-ID: <42FA6CCD.2010803@ee.byu.edu> Anyone contemplating trying out the new scipy.base should not try to use the pserver CVS service from sourceforge for a while. It is not reflecting the current state of the developer CVS tree. I know they recently upgraded developer CVS hardware and perhaps the pserver system is still getting information from the old CVS tree rather than the new developer tree. So, if you get errors on building scipy.base after checking it out from pserver Sourceforge, wait a day or two. I suspect that sourceforge will fix this annoying problem soon. -Travis O. From Sebastien.deMentendeHorne at electrabel.com Thu Aug 11 04:37:17 2005 From: Sebastien.deMentendeHorne at electrabel.com (Sebastien.deMentendeHorne at electrabel.com) Date: Thu Aug 11 04:37:17 2005 Subject: [Numpy-discussion] new __wrap_array__ magic method Message-ID: <6E48F3D185CF644788F55917A0D50A93017CF501@seebex02.eib.electrabel.be> > > >Hi, > > > >Currently, we have an __array__ magic method that can be > used to transform any object that implements it into an array. 
> >It think that a more useful magic method would be, for > ufuncs, a __wrap_array__ method that would return an array > object and a function to use after having applied the ufunc. > > > > > There is an interest in doing this but it is a bit more > complicated for > ufuncs with more than one input argument and more than one > output argument. > > I have not seen a proposal that really works. Your __wrap_array__ > method sounds interesting though. I think, however, that the > __wrap_array__ method would need to provide some kind of > array_interfaceable object to be really useful. > > In the new scipy.base, I've been trying to handle some of > this, but it > is more complicated than it might at first appear. > If we use the same convention as __add__ and __radd__ & co for binary functions this would make a __wrap_array__(self, ufunc): # all unary operators like __neg__, __abs__, __invert__ __lwrap_array__(self, other, ufunc): # all binary operators called with self on the lhs like __add__, __mul__, __pow__ __rwrap_array__(self, other, ufunc): # all binary operators called with self on the rhs like __radd__, __rmul__, __rpow__ with first a check on __lwrap_array__ and then __rwrap_array__ for binary functions. Those functions should then return the result of the operation (and not anymore an array object and a "constructor" function. My previous example would read: class TimeSerie: def __init__(self, data, times): self.data = data # an array self.times = times # anything, could be any metadata def __wrap_array__(self, ufunc): return TimeSerie( ufunc(self.data) , self.times) def __lwrap_array__(self, other, ufunc): return TimeSerie( ufunc(self.data, other) , self.times) def __rwrap_array__(self, other, ufunc): return TimeSerie( ufunc(other, self.data) , self.times) propagating this way the operation to the other object in binary operations. Better attempt ? Seb ======================================================= This message is confidential. It may also be privileged or otherwise protected by work product immunity or other legal rules. If you have received it by mistake please let us know by reply and then delete it from your system; you should not copy it or disclose its contents to anyone. All messages sent to and from Electrabel may be monitored to ensure compliance with internal policies and to protect our business. Emails are not secure and cannot be guaranteed to be error free as they can be intercepted, amended, lost or destroyed, or contain viruses. Anyone who communicates with us by email is taken to accept these risks. http://www.electrabel.be/homepage/general/disclaimer_EN.asp ======================================================= From europa100 at comcast.net Thu Aug 11 11:46:30 2005 From: europa100 at comcast.net (Rob) Date: Thu Aug 11 11:46:30 2005 Subject: [Numpy-discussion] Numeric Python EM Project Message-ID: <42FB9C38.9040508@comcast.net> Hi all, I will be putting this site back up, somewhere, I don't know. It needs to be updated to whatever people think is the best Python numeric type package. I've been out of this for a couple of years. I'm off to buy a second laptop today. Something relatively inexpensive that will just have Windoze on it. This pricey Dell is my *nix laptop. Currently it has OpenBSD only. The motivation of all of this is the dismay of "high tech" EE jobs here in Hillsboro, OR, and a renewed interest in Astrophysics and all of the wonderful philosophic and spiritual questions that naturally come from it. haha. 
Sincerely, Rob N3FT From greg.ewing at canterbury.ac.nz Thu Aug 11 22:08:10 2005 From: greg.ewing at canterbury.ac.nz (Greg Ewing) Date: Thu Aug 11 22:08:10 2005 Subject: [Numpy-discussion] new __wrap_array__ magic method In-Reply-To: <6E48F3D185CF644788F55917A0D50A93017CF501@seebex02.eib.electrabel.be> References: <6E48F3D185CF644788F55917A0D50A93017CF501@seebex02.eib.electrabel.be> Message-ID: <42FC32F7.7000800@canterbury.ac.nz> Sebastien.deMentendeHorne at electrabel.com wrote: > __wrap_array__(self, ufunc): # all unary operators like __neg__, __abs__, __invert__ > __lwrap_array__(self, other, ufunc): # all binary operators called with self on the lhs like __add__, __mul__, __pow__ > __rwrap_array__(self, other, ufunc): # all binary operators called with self on the rhs like __radd__, __rmul__, __rpow__ The term "wrap_array" doesn't seem very descriptive of what these are for. Wrapping of arrays may come into the implementations of them, but it's not the main point. The point is to apply a ufunc to the object and get an appropriate result. Maybe just __ufunc__, __lufunc__, __rufunc__? -- Greg Ewing, Computer Science Dept, +--------------------------------------+ University of Canterbury, | A citizen of NewZealandCorp, a | Christchurch, New Zealand | wholly-owned subsidiary of USA Inc. | greg.ewing at canterbury.ac.nz +--------------------------------------+ From Sebastien.deMentendeHorne at electrabel.com Fri Aug 12 02:03:05 2005 From: Sebastien.deMentendeHorne at electrabel.com (Sebastien.deMentendeHorne at electrabel.com) Date: Fri Aug 12 02:03:05 2005 Subject: [Numpy-discussion] new __wrap_array__ magic method Message-ID: <6E48F3D185CF644788F55917A0D50A93017CF507@seebex02.eib.electrabel.be> > > > __wrap_array__(self, ufunc): # all unary operators like > __neg__, __abs__, __invert__ > > __lwrap_array__(self, other, ufunc): # all binary operators > called with self on the lhs like __add__, __mul__, __pow__ > > __rwrap_array__(self, other, ufunc): # all binary operators > called with self on the rhs like __radd__, __rmul__, __rpow__ > > The term "wrap_array" doesn't seem very descriptive of > what these are for. Wrapping of arrays may come into the > implementations of them, but it's not the main point. > The point is to apply a ufunc to the object and get > an appropriate result. > > Maybe just __ufunc__, __lufunc__, __rufunc__? Definitely ! I came first with a function that was more similar to __array__ and so __wrap_array__ seemed a plausible name for this function. Afterward, I changed a bit the role of this function to propose the alternative way and your names are way more adequate (it may be useful to add the unary/binary word in the function like: __unary_ufunc__ __binary_lufunc__ __binary_rufunc__ as it does not map exactly to __add__, __radd__ and people could mix __ufunc__ and __rufunc__ without understanding the __lufunc__... > > -- > Greg Ewing, Computer Science Dept, > +--------------------------------------+ > University of Canterbury, | A citizen of > NewZealandCorp, a | > Christchurch, New Zealand | wholly-owned subsidiary of > USA Inc. 
| > greg.ewing at canterbury.ac.nz +--------------------------------------+ > > > ------------------------------------------------------- > SF.Net email is Sponsored by the Better Software Conference & EXPO > September 19-22, 2005 * San Francisco, CA * Development > Lifecycle Practices > Agile & Plan-Driven Development * Managing Projects & Teams * > Testing & QA > Security * Process Improvement & Measurement * > http://www.sqe.com/bsce5sf > _______________________________________________ > Numpy-discussion mailing list > Numpy-discussion at lists.sourceforge.net > https://lists.sourceforge.net/lists/listinfo/numpy-discussion > ======================================================= This message is confidential. It may also be privileged or otherwise protected by work product immunity or other legal rules. If you have received it by mistake please let us know by reply and then delete it from your system; you should not copy it or disclose its contents to anyone. All messages sent to and from Electrabel may be monitored to ensure compliance with internal policies and to protect our business. Emails are not secure and cannot be guaranteed to be error free as they can be intercepted, amended, lost or destroyed, or contain viruses. Anyone who communicates with us by email is taken to accept these risks. http://www.electrabel.be/homepage/general/disclaimer_EN.asp ======================================================= From gruben at bigpond.net.au Fri Aug 12 05:45:39 2005 From: gruben at bigpond.net.au (Gary Ruben) Date: Fri Aug 12 05:45:39 2005 Subject: [Numpy-discussion] still no cross product for new array type In-Reply-To: <42E800B0.7050803@noaa.gov> References: <42E800B0.7050803@noaa.gov> Message-ID: <42FC9996.5030507@bigpond.net.au> Just browsing through the new array PEP to see if there's support for 3-vector operations, such as cross product, norm, length functions. Sadly I don't see any. It's something I think is lacking from Numeric and numarray and would like to see implemented. Is it a deliberate choice not to include any? I understand that vectors are sufficiently different animals that you could argue that they shouldn't be supported. I use them enough to think that they should be. Gary Ruben From nadavh at visionsense.com Fri Aug 12 08:32:28 2005 From: nadavh at visionsense.com (Nadav Horesh) Date: Fri Aug 12 08:32:28 2005 Subject: [Numpy-discussion] still no cross product for new array type Message-ID: <07C6A61102C94148B8104D42DE95F7E86DEFF2@exchange2k.envision.co.il> Usually this is within the scope of higher level packages such as ScientificPython. Nadav -----Original Message----- From: numpy-discussion-admin at lists.sourceforge.net on behalf of Gary Ruben Sent: Fri 12-Aug-05 14:44 To: numpy-discussion at lists.sourceforge.net Cc: Subject: [Numpy-discussion] still no cross product for new array type Just browsing through the new array PEP to see if there's support for 3-vector operations, such as cross product, norm, length functions. Sadly I don't see any. It's something I think is lacking from Numeric and numarray and would like to see implemented. Is it a deliberate choice not to include any? I understand that vectors are sufficiently different animals that you could argue that they shouldn't be supported. I use them enough to think that they should be. 
Gary Ruben ------------------------------------------------------- SF.Net email is Sponsored by the Better Software Conference & EXPO September 19-22, 2005 * San Francisco, CA * Development Lifecycle Practices Agile & Plan-Driven Development * Managing Projects & Teams * Testing & QA Security * Process Improvement & Measurement * http://www.sqe.com/bsce5sf _______________________________________________ Numpy-discussion mailing list Numpy-discussion at lists.sourceforge.net https://lists.sourceforge.net/lists/listinfo/numpy-discussion From gruben at bigpond.net.au Fri Aug 12 17:45:31 2005 From: gruben at bigpond.net.au (Gary Ruben) Date: Fri Aug 12 17:45:31 2005 Subject: [Numpy-discussion] still no cross product for new array type In-Reply-To: <07C6A61102C94148B8104D42DE95F7E86DEFF2@exchange2k.envision.co.il> References: <07C6A61102C94148B8104D42DE95F7E86DEFF2@exchange2k.envision.co.il> Message-ID: <42FD4255.8060900@bigpond.net.au> "Usually" means that that's where they are now, because they haven't been implemented in numpy. Comparing with Matlab/IDL where their status is slightly greater so as to earn operations of their own, I wonder why equivalent status isn't afforded them in numpy. Gary R. Nadav Horesh wrote: > Usually this is within the scope of higher level packages such as ScientificPython. > > Nadav > > -----Original Message----- > From: numpy-discussion-admin at lists.sourceforge.net on behalf of Gary Ruben > Sent: Fri 12-Aug-05 14:44 > To: numpy-discussion at lists.sourceforge.net > Cc: > Subject: [Numpy-discussion] still no cross product for new array type > Just browsing through the new array PEP to see if there's support for > 3-vector operations, such as cross product, norm, length functions. > Sadly I don't see any. It's something I think is lacking from Numeric > and numarray and would like to see implemented. Is it a deliberate > choice not to include any? I understand that vectors are sufficiently > different animals that you could argue that they shouldn't be supported. > I use them enough to think that they should be. > > Gary Ruben From greg.ewing at canterbury.ac.nz Sun Aug 14 18:35:19 2005 From: greg.ewing at canterbury.ac.nz (Greg Ewing) Date: Sun Aug 14 18:35:19 2005 Subject: [Numpy-discussion] still no cross product for new array type In-Reply-To: <07C6A61102C94148B8104D42DE95F7E86DEFF2@exchange2k.envision.co.il> References: <07C6A61102C94148B8104D42DE95F7E86DEFF2@exchange2k.envision.co.il> Message-ID: <42FFF5DE.7040604@canterbury.ac.nz> Nadav Horesh wrote: > Gary Ruben : > > > Just browsing through the new array PEP to see if there's support for > > 3-vector operations, such as cross product, norm, length functions. > > Sadly I don't see any. > > Usually this is within the scope of higher level packages such as > ScientificPython. I'd like to see at least cross product included somewhere, since it's difficult to synthesize efficiently from the other operations provided. -- Greg Ewing, Computer Science Dept, +--------------------------------------+ University of Canterbury, | A citizen of NewZealandCorp, a | Christchurch, New Zealand | wholly-owned subsidiary of USA Inc. 
| greg.ewing at canterbury.ac.nz +--------------------------------------+ From europa100 at comcast.net Mon Aug 15 11:27:55 2005 From: europa100 at comcast.net (rob) Date: Mon Aug 15 11:27:55 2005 Subject: [Numpy-discussion] Numeric Python EM site back up Message-ID: <4300DDC4.9060502@comcast.net> Its at http://home.comcast.net/~europa100 I hope to morph it later with the latest numpy and/or numarray and/or scipy stuff. I would also like a more "glamorous" location for the site. Haha. Rob From sheltraw at unm.edu Mon Aug 15 11:54:28 2005 From: sheltraw at unm.edu (Daniel Sheltraw) Date: Mon Aug 15 11:54:28 2005 Subject: [Numpy-discussion] (no subject) In-Reply-To: <42FC32F7.7000800@canterbury.ac.nz> References: <6E48F3D185CF644788F55917A0D50A93017CF501@seebex02.eib.electrabel.be> <42FC32F7.7000800@canterbury.ac.nz> Message-ID: Hello All Would someone please tell me how to write the magnitude of an array to the first half of a previous declared complex valued array without allocating more memory. This is the sort of thing that is trivial in C but seems arcane in python/numpy. Thanks, Daniel From sheltraw at unm.edu Mon Aug 15 11:54:48 2005 From: sheltraw at unm.edu (Daniel Sheltraw) Date: Mon Aug 15 11:54:48 2005 Subject: [Numpy-discussion] storing magnitude data Message-ID: Hello All Would someone please tell me how to store magnitude data in the first half of a previously allocated complex array (I need to save memory). Memory saving things like this are so simple and intuitive in C but less so in NumPy. Thanks, Daniel From tim.hochberg at cox.net Mon Aug 15 12:05:21 2005 From: tim.hochberg at cox.net (Tim Hochberg) Date: Mon Aug 15 12:05:21 2005 Subject: [Numpy-discussion] storing magnitude data In-Reply-To: References: Message-ID: <4300E720.9030408@cox.net> Daniel Sheltraw wrote: > Hello All > > Would someone please tell me how to store magnitude data in the first > half of > a previously allocated complex array (I need to save memory). Memory > saving > things like this are so simple and intuitive in C but less so in NumPy. >>> a = 3*na.arange(10) + 4*na.arange(10)*1j >>> na.absolute(a, a.real) >>> a.real array([ 0., 5., 10., 15., 20., 25., 30., 35., 40., 45.]) > > Thanks, > Daniel > > > ------------------------------------------------------- > SF.Net email is Sponsored by the Better Software Conference & EXPO > September 19-22, 2005 * San Francisco, CA * Development Lifecycle > Practices > Agile & Plan-Driven Development * Managing Projects & Teams * Testing > & QA > Security * Process Improvement & Measurement * http://www.sqe.com/bsce5sf > _______________________________________________ > Numpy-discussion mailing list > Numpy-discussion at lists.sourceforge.net > https://lists.sourceforge.net/lists/listinfo/numpy-discussion > > From Chris.Barker at noaa.gov Mon Aug 15 12:56:34 2005 From: Chris.Barker at noaa.gov (Chris Barker) Date: Mon Aug 15 12:56:34 2005 Subject: [Numpy-discussion] still no cross product for new array type In-Reply-To: <42FD4255.8060900@bigpond.net.au> References: <07C6A61102C94148B8104D42DE95F7E86DEFF2@exchange2k.envision.co.il> <42FD4255.8060900@bigpond.net.au> Message-ID: <4300F321.2010708@noaa.gov> Gary Ruben wrote: > "Usually" means that that's where they are now, because they haven't > been implemented in numpy. Comparing with Matlab/IDL where their status > is slightly greater so as to earn operations of their own, I wonder why > equivalent status isn't afforded them in numpy. 
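A short gloss on Tim Hochberg's in-place answer just above: most numarray ufuncs accept an output array, so the magnitudes can be written straight into the real half of the already-allocated complex buffer and no temporary of the same size is created. The same trick with comments:

import numarray as na

a = 3*na.arange(10) + 4*na.arange(10)*1j   # |a[k]| == 5*k
na.absolute(a, a.real)                     # second argument is the output:
                                           # |a| goes into a.real, in place
print a.real                               # magnitudes 0, 5, 10, ..., 45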
Because numpy is NOT matlab or IDL, nor is it trying to be a close of either of them. I've never used IDL, but MATLAB is (or at least was) a MATrix LABoratory. It was conceived, from the beginning, to be a tool to do linear algebra computations. The new array type has nothing to do with linear algebra, nor does Numeric or numarray. they are general purpose array packages. As it happens, a handy way to store matrices is in a 2-d array, so it's natural to build a linear algebra (and vector arithmetic) package on top of NumPy arrays, but it should be higher level package. One of the things I like most about NumPY, as opposed to MATLAB, is that it doesn't assume everything is a matrix! If we could ever get Python to include additional operators, we could have a Numeric array package and linear algebra package all in one.. both with nice notation, that would be nice. In the meantime, check out cvxopt: http://www.ee.ucla.edu/~vandenbe/cvxopt/ If you want a matrix package. The author is talking about using the new array specification in future versions. -Chris -- Christopher Barker, Ph.D. Oceanographer NOAA/OR&R/HAZMAT (206) 526-6959 voice 7600 Sand Point Way NE (206) 526-6329 fax Seattle, WA 98115 (206) 526-6317 main reception Chris.Barker at noaa.gov From aisaac at american.edu Mon Aug 15 13:17:14 2005 From: aisaac at american.edu (Alan G Isaac) Date: Mon Aug 15 13:17:14 2005 Subject: [Numpy-discussion] still no cross product for new array type In-Reply-To: <4300F321.2010708@noaa.gov> References: <07C6A61102C94148B8104D42DE95F7E86DEFF2@exchange2k.envision.co.il> <42FD4255.8060900@bigpond.net.au><4300F321.2010708@noaa.gov> Message-ID: On Mon, 15 Aug 2005, Chris Barker apparently wrote: > The new array type has nothing to do with linear algebra, > nor does Numeric or numarray. "Nothing" seems a bit strong. > If we could ever get Python to include additional > operators, we could have a Numeric array package and > linear algebra package all in one. both with nice > notation, that would be nice. Yep. > In the meantime, check out cvxopt: > http://www.ee.ucla.edu/~vandenbe/cvxopt/ > If you want a matrix package. The author is talking about using the new > array specification in future versions. There is also http://www3.sympatico.ca/cjw/PyMatrix/ (I'm not using either at the moment so I cannot compare them.) Cheers, Alan Isaac From perry at stsci.edu Mon Aug 15 14:26:22 2005 From: perry at stsci.edu (Perry Greenfield) Date: Mon Aug 15 14:26:22 2005 Subject: [Numpy-discussion] still no cross product for new array type In-Reply-To: References: <07C6A61102C94148B8104D42DE95F7E86DEFF2@exchange2k.envision.co.il> <42FD4255.8060900@bigpond.net.au><4300F321.2010708@noaa.gov> Message-ID: <1d4d645ffe7abbc96ec44afadaec956a@stsci.edu> On Aug 15, 2005, at 4:18 PM, Alan G Isaac wrote: > On Mon, 15 Aug 2005, Chris Barker apparently wrote: >> If we could ever get Python to include additional >> operators, we could have a Numeric array package and >> linear algebra package all in one. both with nice >> notation, that would be nice. > > Yep. I wonder if we shouldn't use the operator hack recently added to the cookbook to give some sort of capability for this. It doesn't support precedence that people would like, but it maybe useful enough. What do people think? 
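For reference, the shape of the cookbook hack Perry means (the recipe link follows below). The |dot| spelling and the lambda are only illustrative, and whether Numeric/numarray arrays yield cleanly to __ror__ for '|' is worth checking before building on this:

class Infix:
    def __init__(self, function):
        self.function = function
    def __ror__(self, other):          # captures the left operand
        return Infix(lambda x, f=self.function, a=other: f(a, x))
    def __or__(self, other):           # applies the function to the right operand
        return self.function(other)

dot = Infix(lambda a, b: sum([x*y for x, y in zip(a, b)]))
print [1, 2, 3] |dot| [4, 5, 6]        # -> 32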
http://aspn.activestate.com/ASPN/Cookbook/Python/Recipe/384122 Perry From rkern at ucsd.edu Mon Aug 15 15:32:36 2005 From: rkern at ucsd.edu (Robert Kern) Date: Mon Aug 15 15:32:36 2005 Subject: [Numpy-discussion] still no cross product for new array type In-Reply-To: <1d4d645ffe7abbc96ec44afadaec956a@stsci.edu> References: <07C6A61102C94148B8104D42DE95F7E86DEFF2@exchange2k.envision.co.il> <42FD4255.8060900@bigpond.net.au><4300F321.2010708@noaa.gov> <1d4d645ffe7abbc96ec44afadaec956a@stsci.edu> Message-ID: <4301179A.8000106@ucsd.edu> Perry Greenfield wrote: > > On Aug 15, 2005, at 4:18 PM, Alan G Isaac wrote: > >> On Mon, 15 Aug 2005, Chris Barker apparently wrote: >> >>> If we could ever get Python to include additional >>> operators, we could have a Numeric array package and >>> linear algebra package all in one. both with nice >>> notation, that would be nice. >> >> Yep. > > I wonder if we shouldn't use the operator hack recently added to the > cookbook to give some sort of capability for this. It doesn't support > precedence that people would like, but it maybe useful enough. What do > people think? > > http://aspn.activestate.com/ASPN/Cookbook/Python/Recipe/384122 If it's placed in a separate module with other such pseudo-operators (|dot|, |land|, |lor|) rather than in Numeric.py itself, I'm +1 on this. It's such a beautiful hack. -- Robert Kern rkern at ucsd.edu "In the fields of hell where the grass grows high Are the graves of dreams allowed to die." -- Richard Harter From cookedm at physics.mcmaster.ca Mon Aug 15 16:39:32 2005 From: cookedm at physics.mcmaster.ca (David M. Cooke) Date: Mon Aug 15 16:39:32 2005 Subject: [Numpy-discussion] still no cross product for new array type In-Reply-To: <42FFF5DE.7040604@canterbury.ac.nz> (Greg Ewing's message of "Mon, 15 Aug 2005 13:54:38 +1200") References: <07C6A61102C94148B8104D42DE95F7E86DEFF2@exchange2k.envision.co.il> <42FFF5DE.7040604@canterbury.ac.nz> Message-ID: Greg Ewing writes: > Nadav Horesh wrote: >> Gary Ruben : >> >> > Just browsing through the new array PEP to see if there's support >> for > 3-vector operations, such as cross product, norm, length >> functions. >> > Sadly I don't see any. >> >> Usually this is within the scope of higher level packages such as >> ScientificPython. > > I'd like to see at least cross product included somewhere, > since it's difficult to synthesize efficiently from the > other operations provided. I agree. I've added a cross_product() funtion to Numeric in CVS. It'll do the cross product along any axes of the arrays passed, assuming that they're of dimensions 2 or 3 (a 2d cross-product returns the z-component of the equivalent 3d one). The test cases look like this: a = Numeric.array([1,2,3]) b = Numeric.array([4,5,6]) assert_eq(Numeric.cross_product(a,b), [-3, 6, -3]) a = Numeric.array([1,2]) b = Numeric.array([4,5]) assert_eq(Numeric.cross_product(a,b), -3) a = Numeric.array([[1,2,3], [4,5,6]]) b = Numeric.array([7, 8, 9]) assert_eq(Numeric.cross_product(a,b), [[-6,12,-6],[-3,6,-3]]) a = Numeric.array([[1,2,3], [4,5,6]]) b = Numeric.array([[10,11,12], [7,8,9]]) assert_eq(Numeric.cross_product(a,b,axis1=0,axis2=0), [-33,-39,-45]) assert_eq(Numeric.cross_product(a,b), [[-9,18,-9], [-3,6,-3]]) and the calling sequence like this: def cross_product(a, b, axis1=-1, axis2=-1): """Return the cross product of two vectors. The cross product is performed over the last axes of a and b by default, and can handle axes with dimensions 2 and 3. 
For a dimension of 2, the z-component of the equivalent three-dimensional cross product is returned. """ Someone else can make the infix-operator module :-) -- |>|\/|< /--------------------------------------------------------------------------\ |David M. Cooke http://arbutus.physics.mcmaster.ca/dmc/ |cookedm at physics.mcmaster.ca From sheltraw at unm.edu Thu Aug 18 10:52:15 2005 From: sheltraw at unm.edu (Daniel Sheltraw) Date: Thu Aug 18 10:52:15 2005 Subject: [Numpy-discussion] freeing memory Message-ID: Hello All Is their a reliable way to free memory allocated for an array in Numeric? My application uses a lot of memory and I need to free some as it goes along. Thanks, Daniel From oliphant at ee.byu.edu Thu Aug 18 11:35:03 2005 From: oliphant at ee.byu.edu (Travis Oliphant) Date: Thu Aug 18 11:35:03 2005 Subject: [Numpy-discussion] freeing memory In-Reply-To: References: Message-ID: <4304D323.9090808@ee.byu.edu> Daniel Sheltraw wrote: > Hello All > > Is their a reliable way to free memory allocated for an array in Numeric? > My application uses a lot of memory and I need to free some as it goes > along. > Assuming this is all in Python, you just need to delete all names bound to the array. >>> del arr This will remove the memory for arr assuming there are no other arrays referencing arr's memory (i.e. from indexing expresssions arr[3,:] and or name binds). -Travis From hsundar at gmail.com Fri Aug 19 16:00:03 2005 From: hsundar at gmail.com (Hari Sundar) Date: Fri Aug 19 16:00:03 2005 Subject: [Numpy-discussion] reading data from file Message-ID: <54a988e205081915594610b660@mail.gmail.com> Hi, I have a small doubt and it is very difficult for me to word this for google. So maybe someone here can help. I use the 'fromfile' function to read data directly into arrays. This works pretty well. I have a special case, where I am reading in 2 files. The first one is a full 3D file, (x,y,z). Each location tells me whether data is available at that location in the other file or not. I call this the mask.The other file contains information which should sit in x*y*z*D space, i.e., I have a vector of length D at any given location (x,y,z) where the mask is not zero. How do I read this effectively ? I don't want to run a loop and test for the mask being true and reading in the vector. Also, I am currently thinking of keeping the information in memory as an array of shape (x,y,z,D), even though the data is sparse. Is there a more efficient way to do this ? thanks, ~Hari From dd55 at cornell.edu Sun Aug 21 09:03:07 2005 From: dd55 at cornell.edu (Darren Dale) Date: Sun Aug 21 09:03:07 2005 Subject: [Numpy-discussion] Question about external libraries and ATLAS Message-ID: <200508211202.24796.dd55@cornell.edu> We are having a discussion on the gentoo-science mailing list about linking Numeric and SciPy to external BLAS, LAPACK and ATLAS libraries. Can anyone tell me, is it necessary to include atlas in the library list at compile time in order to reap the rewards of atlas? Does Numeric use the atlas libraries directly, or indirectly through the blas and lapack libraries? 
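Back on the freeing-memory exchange a couple of messages up: the caveat about other references is easy to trip over with slices, because Numeric slicing returns views that share the original buffer. A small illustration:

import Numeric

big = Numeric.zeros((1000, 1000), Numeric.Float64)   # roughly 8 MB
row = big[0]              # a slice is a view into the same buffer
del big                   # the name is gone...
print row[:5]             # ...but the 8 MB stay alive through 'row'
del row                   # only now can the memory actually be released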
Thanks, Darren From luszczek at cs.utk.edu Sun Aug 21 09:45:07 2005 From: luszczek at cs.utk.edu (Piotr Luszczek) Date: Sun Aug 21 09:45:07 2005 Subject: [Numpy-discussion] Question about external libraries and ATLAS In-Reply-To: <200508211202.24796.dd55@cornell.edu> References: <200508211202.24796.dd55@cornell.edu> Message-ID: <4308AF72.2070702@cs.utk.edu> Darren, my experience with ATLAS and numarray on Gentoo (and other boxes) tells me that if ATLAS is a static library then yes I need it at compile time as the compiler/linker picks up needed symbol and puts them in numarray's dynamic library. On 64-bit boxes (Intel or AMD) the static libraries have to be compiled with `-fPIC' in order to be useful for numarray's dynamic library. But if you compile ATLAS as a dynamic library I'm guessing all these issues go away. The same applies to Goto BLAS which already are distributed as shared libraries alas without source code. I'm sure others will have more to say about that. Piotr Darren Dale wrote: > We are having a discussion on the gentoo-science mailing list about linking > Numeric and SciPy to external BLAS, LAPACK and ATLAS libraries. Can anyone > tell me, is it necessary to include atlas in the library list at compile time > in order to reap the rewards of atlas? Does Numeric use the atlas libraries > directly, or indirectly through the blas and lapack libraries? > > Thanks, > Darren > > > ------------------------------------------------------- > SF.Net email is Sponsored by the Better Software Conference & EXPO > September 19-22, 2005 * San Francisco, CA * Development Lifecycle Practices > Agile & Plan-Driven Development * Managing Projects & Teams * Testing & QA > Security * Process Improvement & Measurement * http://www.sqe.com/bsce5sf > _______________________________________________ > Numpy-discussion mailing list > Numpy-discussion at lists.sourceforge.net > https://lists.sourceforge.net/lists/listinfo/numpy-discussion From matthew.brett at gmail.com Sun Aug 21 11:41:08 2005 From: matthew.brett at gmail.com (Matthew Brett) Date: Sun Aug 21 11:41:08 2005 Subject: [Numpy-discussion] Question about external libraries and ATLAS In-Reply-To: <4308AF72.2070702@cs.utk.edu> References: <200508211202.24796.dd55@cornell.edu> <4308AF72.2070702@cs.utk.edu> Message-ID: <1e2af89e05082111407ae72424@mail.gmail.com> Hi, > But if you compile ATLAS as a dynamic library I'm guessing > all these issues go away. You probably saw this, but Clint Whaley is just in the process of trying to optimize the shared library performace of ATLAS: http://sourceforge.net/mailarchive/forum.php?thread_id=8003844&forum_id=426 Best, Matthew From nvfpb_5861_b97qnpw at hotmail.com Mon Aug 22 05:24:07 2005 From: nvfpb_5861_b97qnpw at hotmail.com (Ilmari Mendez) Date: Mon Aug 22 05:24:07 2005 Subject: [Numpy-discussion] Víagrra Works Very Good Message-ID: An HTML attachment was scrubbed... URL: From hsundar at gmail.com Mon Aug 22 10:29:17 2005 From: hsundar at gmail.com (Hari Sundar) Date: Mon Aug 22 10:29:17 2005 Subject: [Numpy-discussion] Numeric - fromfile ? Message-ID: <54a988e20508221027535066ca@mail.gmail.com> Hi, Is there a function similar to numarray's fromfile, to read a binary array from a file ? What is the best way to read such a file into a Numeric array. thanks, ~Hari -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From sheltraw at unm.edu Mon Aug 22 14:39:09 2005 From: sheltraw at unm.edu (Daniel Sheltraw) Date: Mon Aug 22 14:39:09 2005 Subject: [Numpy-discussion] array reshaping? Message-ID: Hello all Question: Lets say I have two arrays dimensioned according to: my_array = zeros((dim1,dim2,dim3)).astype(Complex32) my_array_2 = zeros((dim3,dim2)).astype(Complex32) where dim2 is not equal to dim3. Also Notice that dim2 and dim3 are swapped in my_array_2 relative to my_array. For some value of dim1 I would like to write a chunk of data contained in my_array_2 to the chunk of memory represented by my_array[dim1,:,:]. How can I best do this? Cheers, Daniel From alexandre.fayolle at logilab.fr Tue Aug 23 00:16:10 2005 From: alexandre.fayolle at logilab.fr (Alexandre Fayolle) Date: Tue Aug 23 00:16:10 2005 Subject: [Numpy-discussion] Numeric - fromfile ? In-Reply-To: <54a988e20508221027535066ca@mail.gmail.com> References: <54a988e20508221027535066ca@mail.gmail.com> Message-ID: <20050823071605.GA20610@logilab.fr> On Mon, Aug 22, 2005 at 01:27:55PM -0400, Hari Sundar wrote: > Hi, > > Is there a function similar to numarray's fromfile, to read a binary array > from a file ? What is the best way to read such a file into a Numeric array. It depends on the format of the file. If it's a pickled Numeric array, then the pickle module can read it back. Otherwise, you get to read the file (by opening it first and calling its read() method) and you can use the fromstring() function in numeric. However you need to know the type of the array and its shape. -- Alexandre Fayolle LOGILAB, Paris (France). http://www.logilab.com http://www.logilab.fr http://www.logilab.org -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 189 bytes Desc: Digital signature URL: From arnd.baecker at web.de Tue Aug 23 00:43:08 2005 From: arnd.baecker at web.de (Arnd Baecker) Date: Tue Aug 23 00:43:08 2005 Subject: [Numpy-discussion] Numeric - fromfile ? In-Reply-To: <20050823071605.GA20610@logilab.fr> References: <54a988e20508221027535066ca@mail.gmail.com> <20050823071605.GA20610@logilab.fr> Message-ID: On Tue, 23 Aug 2005, Alexandre Fayolle wrote: > On Mon, Aug 22, 2005 at 01:27:55PM -0400, Hari Sundar wrote: > > Hi, > > > > Is there a function similar to numarray's fromfile, to read a binary array > > from a file ? What is the best way to read such a file into a Numeric array. Another option is to use scipy.io, e.g.: In [5]:scipy.io.fread? Type: builtin_function_or_method Base Class: String Form: Namespace: Interactive Docstring: g = numpyio.fread( fid, Num, read_type { mem_type, byteswap}) fid = open file pointer object (i.e. from fid = open('filename') ) Num = number of elements to read of type read_type read_type = a character in 'cb1silfdFD' (PyArray types) describing how to interpret bytes on disk. OPTIONAL mem_type = a character (PyArray type) describing what kind of PyArray to return in g. Default = read_type byteswap = 0 for no byteswapping or a 1 to byteswap (to handle different endianness). Default = 0. I use this a lot and it works very well. 
For example: from scipy import * # --- Create some arrays: x=arange(10)/10.0 z=zeros( (5,5),"d") z_complex=zeros( (5,7),"D") # --- write them as binary data: fp=file("x.dat","wb") io.fwrite(fp,len(x),x) fp.close() fp=file("z.dat","wb") io.fwrite(fp,z.shape[0]*z.shape[1],z) fp.close() fp=file("z_complex.dat","wb") io.fwrite(fp,z_complex.shape[0]*z_complex.shape[1],z_complex) fp.close() # --- and read them back fp=file("x.dat","rb") x_read=io.fread(fp,10,"d") fp.close() fp=file("z.dat","rb") z_read=reshape(io.fread(fp,5*5,"d"),(5,5)) fp.close() fp=file("z_complex.dat","rb") z_complex_read=reshape(io.fread(fp,5*7,"D"),(5,7)) fp.close() Remark: `"wb"` and `"rb"` is only needed under Windows, normally `"w"` and `"r"` would be enough. Best, Arnd From hsundar at gmail.com Tue Aug 23 05:31:12 2005 From: hsundar at gmail.com (Hari Sundar) Date: Tue Aug 23 05:31:12 2005 Subject: [Numpy-discussion] Fast iteration ... Message-ID: <54a988e205082305306d59d76f@mail.gmail.com> Hi, I am trying to compute the histogram of an ND image. I have been able to use the array operation for a part of it and that makes it quite fast. There is one part of my code that I have not been able to vectorize and is therefore quite slow. I was wondering if there is a faster way to do this, since this is substantially slower than what I would get from C code. # a and b are arrays (ND) sz = product(a.shape) ai = a.flat bi = b.flat for i in range(sz): hist_j[bi[i], ai[i]] += 1 hist_a[ai[i]] += 1 hist_b[bi[i]] += 1 thanks, ~Hari -- ??? ?????????????????? ??? ?? ??? ??? ?? ? ? ???????????????? ???? ???????????????? ??? ??? ?? ? ??? ?? Whence all creation had its origin, He, whether he fashioned it or whether He did not, He, who surveys it all from the highest heaven, He knows - or maybe even He does not know. ~Rg veda -------------- next part -------------- An HTML attachment was scrubbed... URL: From Scott.Daniels at Acm.Org Tue Aug 23 11:57:53 2005 From: Scott.Daniels at Acm.Org (Scott David Daniels) Date: Tue Aug 23 11:57:53 2005 Subject: [Numpy-discussion] Re: Numeric - fromfile ? In-Reply-To: References: <54a988e20508221027535066ca@mail.gmail.com> <20050823071605.GA20610@logilab.fr> Message-ID: Arnd Baecker wrote: and bunch of good stuff, and > ... > fp=file("x.dat","wb") > io.fwrite(fp,len(x),x) > fp.close() > ... > # --- and read them back > fp=file("x.dat","rb") > x_read=io.fread(fp,10,"d") > fp.close() > > Remark: `"wb"` and `"rb"` is only needed under Windows, > normally `"w"` and `"r"` would be enough. This is only true if you are talking about Unix/Linux vs. Windows. There are other systems where the 'b' is necessary, and even where it makes no difference it explicitly states your intent. So, I'd always use "wb" or "rb" in these cases. On some filesystems, (VMS comes to mind) a text file is a completely different format from a binary file. --Scott David Daniels Scott.Daniels at Acm.Org From oliphant at ee.byu.edu Wed Aug 24 16:33:33 2005 From: oliphant at ee.byu.edu (Travis Oliphant) Date: Wed Aug 24 16:33:33 2005 Subject: [Numpy-discussion] scipy.base (Numeric3) ready for extension modules to test Message-ID: <430D0374.1040105@ee.byu.edu> scipy.base is in a beta-quality state right now. The Packages are not ported, but the basic object works as far as I can tell. If you have extension modules that you would like to try and compile with the new system, I would appreciate the feedback. Note, this is for Numeric users only. The C-API follows the Numeric convention. 
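Picking up Hari Sundar's histogram loop from earlier in this digest: the per-element Python loop can usually be replaced by the old sort/searchsorted idiom, with the joint histogram handled by packing the two labels into one integer code. The bin counts and label arrays below are made up for illustration:

import numarray

def counts(labels, nbins):
    # occurrences of each value 0..nbins-1, without a Python loop:
    # sort once, then take differences of searchsorted at the bin edges
    s = numarray.sort(labels)
    edges = numarray.arange(nbins + 1)
    return numarray.searchsorted(s, edges[1:]) - numarray.searchsorted(s, edges[:-1])

na_bins, nb_bins = 16, 16                    # illustrative bin counts
ai = numarray.arange(1000) % na_bins         # stand-ins for a.flat, b.flat
bi = (7 * numarray.arange(1000)) % nb_bins

hist_a = counts(ai, na_bins)
hist_b = counts(bi, nb_bins)
hist_j = counts(bi * na_bins + ai, na_bins * nb_bins)   # joint, encoded
hist_j.shape = (nb_bins, na_bins)                       # hist_j[b, a]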
There is a COMPATIBILITY file that lists a few of the changes (there are actually very few...), that may need to be made. Only if you used descr->zero, descr->one, or explicit typecode strings in your c-code should you need to make any changes. There are changes you may wish to make to improve your code to take advantage of the extended C-API, but you shouldn't have to, except in a couple of cases. You can check the code base out of anonymous CVS (wait a day) and try it.... -Travis O. From pontifor at yahoo.com Fri Aug 26 10:26:48 2005 From: pontifor at yahoo.com (Stefan Kuzminski) Date: Fri Aug 26 10:26:48 2005 Subject: [Numpy-discussion] is this a bug? Message-ID: <20050826172459.59694.qmail@web50608.mail.yahoo.com> >>> from numarray import * >>> x = ones(22400,Float) >>> print add.reduce(x) 22400.0 >>> print add.reduce(x!=0) -128 >>> print add.reduce((x!=0).astype(Int)) 22400 it seems like the boolean result of the expression ( middle try ) causes a problem? thanks, Stefan Kuzminski ____________________________________________________ Start your day with Yahoo! - make it your home page http://www.yahoo.com/r/hs From jmiller at stsci.edu Fri Aug 26 12:15:32 2005 From: jmiller at stsci.edu (Todd Miller) Date: Fri Aug 26 12:15:32 2005 Subject: [Numpy-discussion] is this a bug? In-Reply-To: <20050826172459.59694.qmail@web50608.mail.yahoo.com> References: <20050826172459.59694.qmail@web50608.mail.yahoo.com> Message-ID: <1125083645.21580.104.camel@halloween.stsci.edu> On Fri, 2005-08-26 at 13:24, Stefan Kuzminski wrote: > >>> from numarray import * > >>> x = ones(22400,Float) > >>> print add.reduce(x) > 22400.0 > >>> print add.reduce(x!=0) > -128 > >>> print add.reduce((x!=0).astype(Int)) > 22400 > > it seems like the boolean result of the expression ( middle try ) > causes a problem? This issue has been discussed before and the general consensus was that this (somewhat treacherous) behavior should not change. For array totals (reducing on all axes at once), numarray has a sum() method which by default does do a type promotion to the "max type of kind", so integers -> Int64, floats -> Float64, and complexes -> Complex64 prior to the reduction. Regards, Todd From humufr at yahoo.fr Mon Aug 29 12:17:10 2005 From: humufr at yahoo.fr (Humufr) Date: Mon Aug 29 12:17:10 2005 Subject: [Numpy-discussion] bug in numarray? Message-ID: <43135F26.50509@yahoo.fr> Hi, I think there are a problem with numarray (not sure). I'm trying to correlate two differents file to find the same object in both. To do this I wrote some ugly software and I'm using the readcol2.py to read the file in a numarray, numarray string or list format. The cross_name.py is doing the cross correlation when I'm using the numarray string format. I'm using three parameters at differents columns and I compare all of them with something like: numarray.all(a[i,:] == b[j,:]) I saw that my script is very very slow or to be more precise became to be slow. It's seems ok at the beginning but little by little is slow down by a huge amount. I let it turn all the week end and it found ~40 000 objects (both files are ~200000 lines...) in common in two days. I change the software to use the list in python and in some minutes I'have ~20 000 objects found in common. So I think there are a big problem probably: 1) in my script, perhaps 2) in numarray or 3) in both. I hope to have explain the problem clearly ... N. 
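A footnote on the -128 in the add.reduce exchange above: the Bool reduction appears to accumulate at the array's own 8-bit precision, so 22400 simply wraps around:

>>> 22400 % 256
128

and 128 read back as a signed 8-bit value is -128. Hence the advice to promote with astype() or use sum(), which reduces at a wider type.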
ps: I print an output for the script cross_name.py to visually see the slow down and that appeard to became slow around the 700 objects in common but it's gradully decline. pps: I join the different file I used. The cross_name.py is the function with the problem. ------------------------------------- #readcol2.py ------------------------------------- def readcol(fname,comments='%',columns=None,delimiter=None,dep=0,arraytype='list'): """ Load ASCII data from fname into an array and return the array. The data must be regular, same number of values in every row fname can be a filename or a file handle. Input: - Fname : the name of the file to read Optionnal input: - comments : a string to indicate the charactor to delimit the domments. the default is the matlab character '%'. - columns : list or tuple ho contains the columns to use. - delimiter : a string to delimit the columns - dep : an integer to indicate from which line you want to begin to use the file (useful to avoid the descriptions lines) - arraytype : a string to indicate which kind of array you want ot have: numeric array (numeric) or character array (numstring) or list (list). By default it's the list mode used matfile data is not currently supported, but see Nigel Wade's matfile ftp://ion.le.ac.uk/matfile/matfile.tar.gz Example usage: x,y = transpose(readcol('test.dat')) # data in two columns X = readcol('test.dat') # a matrix of data x = readcol('test.dat') # a single column of data x = readcol('test.dat,'#') # the character use like a comment delimiter is '#' initial function from pylab, improve by myself for my need """ from numarray import array,transpose fh = file(fname) X = [] numCols = None nline = 0 if columns is None: for line in fh: nline += 1 if dep is not None and nline <= dep: continue line = line[:line.find(comments)].strip() if not len(line): continue if arraytype=='numeric': row = [float(val) for val in line.split(delimiter)] else: row = [val.strip() for val in line.split(delimiter)] thisLen = len(row) if numCols is not None and thisLen != numCols: raise ValueError('All rows must have the same number of columns') X.append(row) else: for line in fh: nline +=1 if dep is not None and nline <= dep: continue line = line[:line.find(comments)].strip() if not len(line): continue row = line.split(delimiter) if arraytype=='numeric': row = [float(row[i-1]) for i in columns] elif arraytype=='numstring': row = [row[i-1].strip() for i in columns] else: row = [row[i-1].strip() for i in columns] thisLen = len(row) if numCols is not None and thisLen != numCols: raise ValueError('All rows must have the same number of columns') X.append(row) if arraytype=='numeric': X = array(X) r,c = X.shape if r==1 or c==1: X.shape = max([r,c]), elif arraytype == 'numstring': import numarray.strings # pb si numeric+pylab X = numarray.strings.array(X) r,c = X.shape if r==1 or c==1: X.shape = max([r,c]), return X ---------------------------------------------------------------- #cross_name.py ---------------------------------------------------------------- #/usr/bin/env python ''' Software to cross correlate two files. To use it you had to file a params file who contains the information of the file you want to correlate. The information must have the format: namefile = list of column ; delimiter example: file1 = 1,2,3 ; file2 = 20,19,21 ; , no delimiter = blanck ''' # there are a big problem of efficiency. The software is far to long with big file like SDSS. 
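# A note on the efficiency problem flagged just above: the matching is done
# pairwise inside a double Python loop, so every candidate pair pays the full
# overhead of building and comparing small numarray objects, and each hit
# adds an O(n) `del f2_ini[b_i]` list deletion on top.  A dictionary keyed on
# the selected columns does the join in one pass over each file.  Sketch
# only, assuming the rows are plain lists of strings (readcol with
# arraytype='list', as in cross_name2.py):
#
#     index = {}
#     for j in range(len(data[1])):
#         index.setdefault(tuple(data[1][j]), []).append(j)
#     for i in range(len(data[0])):
#         for j in index.get(tuple(data[0][i]), []):
#             pass  # write out f1_ini[i] and f2_ini[j] here
#
# Separately, `del f2_ini[b_i]` shifts the raw lines relative to data[1], so
# after the first match b_i may no longer point at the same record in both;
# that is worth checking independently of the speed question.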
# I had to find where is the problem import sys import numarray import string #read the params file params = {} for line in file(sys.argv[1],'rU'): line = line.strip() # delete the end of line (\n on unix) if not len(line): continue # is line empty do nothing and pass to the next line if line.startswith('#'): continue # test if the line is a comments (# is the character to signal it) tup = line.split('=',1) # split the line, the delimiter is the sign = columns = [int(i) for i in tup[1].strip().split(';')[0].strip().split(',')] # creat a list who contains # the columns we want to use delimiter = tup[1].strip().split(';')[1].strip() # check the delimiter of the data file (generally space or coma) if not len(delimiter): delimiter = None params[tup[0].strip()] = { 'columns' : columns, 'delimiter' : delimiter} # Read the data files (only the columns ask in the params file) debut_data = 1 data = [] for namefile in params.iterkeys(): import readcol2 #import the function to read the files #data.append(readcol2.readcol(namefile,columns=params[namefile]['columns'],comments='#',delimiter=params[namefile]['delimiter'],dep=1,arraytype='character')) params[namefile]['data'] = readcol2.readcol(namefile,columns=params[namefile]['columns'],comments='#',delimiter=params[namefile]['delimiter'],dep=debut_data,arraytype='character') # Read another times the data files to have all the lines! # Question: like it's a dictionnary are we sure that the file are in the same order... Check it!!!!!!!!! if len(params.keys()) == 2: namefile,data,delimiter = [],[],[] for keys in params.iterkeys(): namefile.append(keys) data.append(params[keys]['data']) delim = params[keys]['delimiter'] if delim != None: delimiter.append(params[keys]['delimiter']) else: delimiter.append(' ') #res_a = [] #res_b = [] f1_ini = file(namefile[0]).readlines()[debut_data:] f2_ini = file(namefile[1]).readlines()[debut_data:] #f1_ini = [line for line in file(namefile[0])][debut_data:] #f2_ini = [line for line in file(namefile[1])][debut_data:] f1=open('cross'+namefile[0],'w') f2=open('cross'+namefile[1],'w') f3=open('pastecross'+namefile[0]+namefile[1],'w') b_i = 0 for a_i in range(data[0].shape[0]): for b_i in range(b_i,data[1].shape[0]): if numarray.all(data[0][a_i,:] == data[1][b_i,:]): f1.write(f1_ini[a_i]) f2.write(f2_ini[b_i]) f3.write(f1_ini[a_i].strip()+delimiter[0]+' '+string.replace(f2_ini[b_i],delimiter[1],delimiter[0])) del f2_ini[b_i] break #res_a.append(a_i) #res_b.append(b_i) f1.close() f2.close() f3.close() else: print "too much file: only two allowed for the moment" #save the results in 3 files: 2 with the common objects from each file. # one with a paste of the lines of the 2 initial files. ----------------------------------------------------------------------- #cross_name2.py --------------------------------------------------------------------- #/usr/bin/env python ''' Software to cross correlate two files. To use it you had to file a params file who contains the information of the file you want to correlate. The information must have the format: namefile = list of column ; delimiter example: file1 = 1,2,3 ; file2 = 20,19,21 ; , no delimiter = blanck ''' # there are a big problem of efficiency. The software is far to long with big file like SDSS. 
# I had to find where is the problem import sys import numarray import string #read the params file params = {} for line in file(sys.argv[1],'rU'): line = line.strip() # delete the end of line (\n on unix) if not len(line): continue # is line empty do nothing and pass to the next line if line.startswith('#'): continue # test if the line is a comments (# is the character to signal it) tup = line.split('=',1) # split the line, the delimiter is the sign = columns = [int(i) for i in tup[1].strip().split(';')[0].strip().split(',')] # creat a list who contains # the columns we want to use delimiter = tup[1].strip().split(';')[1].strip() # check the delimiter of the data file (generally space or coma) if not len(delimiter): delimiter = None params[tup[0].strip()] = { 'columns' : columns, 'delimiter' : delimiter} # Read the data files (only the columns ask in the params file) debut_data = 1 data = [] for namefile in params.iterkeys(): import readcol2 #import the function to read the files #data.append(readcol2.readcol(namefile,columns=params[namefile]['columns'],comments='#',delimiter=params[namefile]['delimiter'],dep=1,arraytype='character')) params[namefile]['data'] = readcol2.readcol(namefile,columns=params[namefile]['columns'],comments='#',delimiter=params[namefile]['delimiter'],dep=debut_data,arraytype='list') # Read another times the data files to have all the lines! # Question: like it's a dictionnary are we sure that the file are in the same order... Check it!!!!!!!!! if len(params.keys()) == 2: namefile,data,delimiter = [],[],[] for keys in params.iterkeys(): namefile.append(keys) data.append(params[keys]['data']) delim = params[keys]['delimiter'] if delim != None: delimiter.append(params[keys]['delimiter']) else: delimiter.append(' ') #res_a = [] #res_b = [] f1_ini = file(namefile[0]).readlines()[debut_data:] f2_ini = file(namefile[1]).readlines()[debut_data:] #f1_ini = [line for line in file(namefile[0])][debut_data:] #f2_ini = [line for line in file(namefile[1])][debut_data:] f1=open('cross'+namefile[0],'w') f2=open('cross'+namefile[1],'w') f3=open('pastecross'+namefile[0]+namefile[1],'w') # i=0 # for a_i in range(len(data[0])): # #print data[0][a_i,:] # for b_i in range(len(data[1])): # if data[0][a_i] == data[1][b_i]: # print data[0][a_i],data[1][b_i] # i+=1 # print i # break b_i=0 for a_i in range(len(data[0])): for b_i in range(b_i,len(data[1])): if data[0][a_i] == data[1][b_i]: f1.write(f1_ini[a_i]) f2.write(f2_ini[b_i]) f3.write(f1_ini[a_i].strip()+delimiter[0]+' '+string.replace(f2_ini[b_i],delimiter[1],delimiter[0])) del f2_ini[b_i] break #res_a.append(a_i) #res_b.append(b_i) f1.close() f2.close() f3.close() else: print "too much file: only two allowed for the moment" #save the results in 3 files: 2 with the common objects from each file. # one with a paste of the lines of the 2 initial files. From gruel at astro.ufl.edu Mon Aug 29 14:54:12 2005 From: gruel at astro.ufl.edu (Nicolas Gruel) Date: Mon Aug 29 14:54:12 2005 Subject: [Numpy-discussion] bug in numarray? Message-ID: <43132C5D.70107@astro.ufl.edu> Hi, I think there are a problem with numarray (not sure). I'm trying to correlate two differents file to find the same object in both. To do this I wrote some ugly software and I'm using the readcol2.py to read the file in a numarray, numarray string or list format. The cross_name.py is doing the cross correlation when I'm using the numarray string format. 
I'm using three parameters at differents columns and I compare all of them with something like: numarray.all(a[i,:] == b[j,:]) I saw that my script is very very slow or to be more precise became to be slow. It's seems ok at the beginning but little by little is slow down by a huge amount. I let it turn all the week end and it found ~40 000 objects (both files are ~200000 lines...) in common in two days. I change the software to use the list in python and in some minutes I'have ~20 000 objects found in common. So I think there are a big problem probably: 1) in my script, perhaps 2) in numarray or 3) in both. I hope to have explain the problem clearly ... N. ps: I print an output for the script cross_name.py to visually see the slow down and that appeard to became slow around the 700 objects in common but it's gradully decline. -------------- next part -------------- An embedded and charset-unspecified text was scrubbed... Name: readcol2.py URL: -------------- next part -------------- An embedded and charset-unspecified text was scrubbed... Name: cross_name.py URL: -------------- next part -------------- An embedded and charset-unspecified text was scrubbed... Name: cross_name2.py URL: From NadavH at VisionSense.com Mon Aug 29 22:46:11 2005 From: NadavH at VisionSense.com (Nadav Horesh) Date: Mon Aug 29 22:46:11 2005 Subject: [Numpy-discussion] Matching Nueric3/numarray namig conentions. Message-ID: <4313EFD7.5040301@VisionSense.com> Just started to play with Numeric3, looks as a significant usability improvement but.... Same functions/classes are named differently in numarray and Numeric3, for instance typecodes. I thing that agreeing on the same names for identical functions/classes would make the users life easier for either porting or alternating back ends. I believe that it may help unifying the two projects. Nadav. From oliphant at ee.byu.edu Mon Aug 29 22:59:03 2005 From: oliphant at ee.byu.edu (Travis Oliphant) Date: Mon Aug 29 22:59:03 2005 Subject: [Numpy-discussion] Matching Nueric3/numarray namig conentions. In-Reply-To: <4313EFD7.5040301@VisionSense.com> References: <4313EFD7.5040301@VisionSense.com> Message-ID: <4313F560.1000200@ee.byu.edu> Nadav Horesh wrote: >Just started to play with Numeric3, looks as a significant usability >improvement but.... >Same functions/classes are named differently in numarray and Numeric3, >for instance typecodes. > > This is true for only a few cases. Mostly the names are compatible, but some of the naming conventions needed changing... For example: We have used type for the name of the data type in a numeric array. But, this can be confusing because type refers to the kind of Python object and all arrays are the same kind of python object. In addition, it is natural to use the type= keyword in array constructors, but this then blocks the use of that builtin for the function it is used with. Of course typecode was previously chosen by Numeric, but now the types are not codes (they are really type objects). Thus, I have been calling type (dtype) in the new scipy.base. The alternative is to keep the name type (eliminate the use of typecode, and rename python's type function to pytype within scipy). It could easily be changed if that is a real problem. Because of the signficantly different usage of types in the new system, it is helpful to have a different name (dtype). But, I could be persuaded to use the word type and rename Python's type to pytype. 
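A two-line illustration of the name clash behind the type/dtype choice; the function and keyword here are made up for the example:

def array_like(seq, type=None):    # keyword deliberately named 'type'
    # inside this body the builtin type() is shadowed by the keyword, so
    # type(seq) would call whatever the caller passed in, not report
    # the class of seq
    print type

array_like([1, 2, 3], type=float)  # prints <type 'float'>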
-Travis From NadavH at VisionSense.com Tue Aug 30 01:00:28 2005 From: NadavH at VisionSense.com (Nadav Horesh) Date: Tue Aug 30 01:00:28 2005 Subject: [Numpy-discussion] Matching Nueric3/numarray namig conentions. In-Reply-To: <4313F560.1000200@ee.byu.edu> References: <4313EFD7.5040301@VisionSense.com> <4313F560.1000200@ee.byu.edu> Message-ID: <43140FAC.4010802@VisionSense.com> I am not picky about which name to use. It is would be the same for me if Jay Miller would add a support for dtype keyword, and switch Int32 for int32 (or vice versa). In this case you both agree that types should be classes (although Numeric3 types == type is better) and not strings. Once there is an agreement on the functions, methods and keyword (for instance should arange function have a shape keyword), the exact names choice should be an easy issue to overcome. Nadav. Travis Oliphant wrote: > Nadav Horesh wrote: > >> Just started to play with Numeric3, looks as a significant usability >> improvement but.... >> Same functions/classes are named differently in numarray and Numeric3, >> for instance typecodes. >> >> > This is true for only a few cases. Mostly the names are compatible, but > some of the naming conventions needed changing... > For example: > > We have used type for the name of the data type in a numeric array. But, > this can be confusing because type refers to the kind of Python object > and all arrays are the same kind of python object. In addition, it is > natural to use the type= keyword in array constructors, but this then > blocks the use of that builtin for the function it is used with. Of > course typecode was previously chosen by Numeric, but now the types > are not codes (they are really type objects). Thus, I have been > calling type (dtype) in the new scipy.base. The alternative is to > keep the name type (eliminate the use of typecode, and rename python's > type function to pytype within scipy). > > It could easily be changed if that is a real problem. Because of > the signficantly different usage of types in the new system, it is > helpful to have a different name (dtype). But, I could be persuaded > to use the word type and rename Python's type to pytype. > > -Travis > > > > From cjw at sympatico.ca Tue Aug 30 05:12:47 2005 From: cjw at sympatico.ca (Colin J. Williams) Date: Tue Aug 30 05:12:47 2005 Subject: [Numpy-discussion] Matching Nueric3/numarray namig conentions. In-Reply-To: <4313F560.1000200@ee.byu.edu> References: <4313EFD7.5040301@VisionSense.com> <4313F560.1000200@ee.byu.edu> Message-ID: <43144CB8.9000603@sympatico.ca> Travis Oliphant wrote: > Nadav Horesh wrote: > >> Just started to play with Numeric3, looks as a significant usability >> improvement but.... >> Same functions/classes are named differently in numarray and Numeric3, >> for instance typecodes. >> >> > This is true for only a few cases. Mostly the names are compatible, but > some of the naming conventions needed changing... > For example: > > We have used type for the name of the data type in a numeric array. But, > this can be confusing because type refers to the kind of Python object > and all arrays are the same kind of python object. In addition, it is > natural to use the type= keyword in array constructors, but this then > blocks the use of that builtin for the function it is used with. Of > course typecode was previously chosen by Numeric, but now the types > are not codes (they are really type objects). Thus, I have been > calling type (dtype) in the new scipy.base. 
The alternative is to > keep the name type (eliminate the use of typecode, and rename python's > type function to pytype within scipy). > These changes make sense (1) replacing type by dtype (dType?) and (2) replacing typecode by dType instances. It would be good if, as suggested by Nadav, the first change could be made to numarray. > It could easily be changed if that is a real problem. Because of > the signficantly different usage of types in the new system, it is > helpful to have a different name (dtype). But, I could be persuaded > to use the word type and rename Python's type to pytype. This, I suggest, would be a step back. Is there any plan to make Win32 binary version available for testing? Past efforts to compile have failed. Colin W, From cjw at sympatico.ca Tue Aug 30 05:21:55 2005 From: cjw at sympatico.ca (Colin J. Williams) Date: Tue Aug 30 05:21:55 2005 Subject: [Numpy-discussion] Matching Nueric3/numarray namig conentions. In-Reply-To: <4313F560.1000200@ee.byu.edu> References: <4313EFD7.5040301@VisionSense.com> <4313F560.1000200@ee.byu.edu> Message-ID: <43144F20.8090800@sympatico.ca> Travis Oliphant wrote: > Nadav Horesh wrote: > >> Just started to play with Numeric3, looks as a significant usability >> improvement but.... >> Same functions/classes are named differently in numarray and Numeric3, >> for instance typecodes. >> >> > This is true for only a few cases. Mostly the names are compatible, but > some of the naming conventions needed changing... > For example: > > We have used type for the name of the data type in a numeric array. But, > this can be confusing because type refers to the kind of Python object > and all arrays are the same kind of python object. In addition, it is > natural to use the type= keyword in array constructors, but this then > blocks the use of that builtin for the function it is used with. Of > course typecode was previously chosen by Numeric, but now the types > are not codes (they are really type objects). Thus, I have been > calling type (dtype) in the new scipy.base. The alternative is to > keep the name type (eliminate the use of typecode, and rename python's > type function to pytype within scipy). > [error] - this should have read: These changes make sense (1) replacing type by dtype (dType?) and (2) replacing typecode by dType an instance of a Numeric types class. It would be good if, as suggested by Nadav, the first change could be made to numarray. He indicates that the naming of the new Numeric types classes is different from that used by numarray. Is it necessary to change this? > It could easily be changed if that is a real problem. Because of > the signficantly different usage of types in the new system, it is > helpful to have a different name (dtype). But, I could be persuaded > to use the word type and rename Python's type to pytype. This, I suggest, would be a step back. Is there any plan to make Win32 binary version available for testing? Past efforts to compile have failed. 
Colin W, ------------------------------------------------------- SF.Net email is Sponsored by the Better Software Conference & EXPO September 19-22, 2005 * San Francisco, CA * Development Lifecycle Practices Agile & Plan-Driven Development * Managing Projects & Teams * Testing & QA Security * Process Improvement & Measurement * http://www.sqe.com/bsce5sf _______________________________________________ Numpy-discussion mailing list Numpy-discussion at lists.sourceforge.net https://lists.sourceforge.net/lists/listinfo/numpy-discussion From pbtransfert at freesurf.fr Wed Aug 31 00:08:08 2005 From: pbtransfert at freesurf.fr (pbtransfert at freesurf.fr) Date: Wed Aug 31 00:08:08 2005 Subject: [Numpy-discussion] portability issue Message-ID: <45699.192.54.193.37.1125472030.squirrel@jose.freesurf.fr> hi ! i try to transfer a pickle which contains numeric array, from a 64-bits system to a 32-bits system. it seems to fail due to bad (or lack of) conversion... more precisely, here is what i do on the 64-bits system : import Numeric,cPickle a=Numeric.array([1,2,3]) f=open('test.pickle64','w') cPickle.dump(a,f) f.close() and here is what i try to do on the 32-bits system : import Numeric,cPickle f=open('test.pickle64','r') a=cPickle.load(f) f.close() and here is the log of the load : a=cPickle.load(f) File "/usr/lib/python2.3/site-packages/Numeric/Numeric.py", line 539, in array_constructor x.shape = shape ValueError: ('total size of new array must be unchanged', , ((3,), 'l', '\x01\x00\x00\x00\x00\x00\x00\x00\x02\x00\x00\x00\x00\x00\x00\x00\x03\x00\x00\x00\x00\x00\x00\x00',True)) Is there something to do to solve this difficulty ? thanks PB From cookedm at physics.mcmaster.ca Wed Aug 31 11:16:19 2005 From: cookedm at physics.mcmaster.ca (David M. Cooke) Date: Wed Aug 31 11:16:19 2005 Subject: [Numpy-discussion] portability issue In-Reply-To: <45699.192.54.193.37.1125472030.squirrel@jose.freesurf.fr> (pbtransfert@freesurf.fr's message of "Wed, 31 Aug 2005 09:07:10 +0200 (CEST)") References: <45699.192.54.193.37.1125472030.squirrel@jose.freesurf.fr> Message-ID: writes: > hi ! > > i try to transfer a pickle which contains numeric array, from a 64-bits > system to a 32-bits system. it seems to fail due to bad (or lack of) > conversion... more precisely, here is what i do on the 64-bits system : > > import Numeric,cPickle > a=Numeric.array([1,2,3]) > f=open('test.pickle64','w') > cPickle.dump(a,f) > f.close() > > and here is what i try to do on the 32-bits system : > > import Numeric,cPickle > f=open('test.pickle64','r') > a=cPickle.load(f) > f.close() > > and here is the log of the load : > > a=cPickle.load(f) > File "/usr/lib/python2.3/site-packages/Numeric/Numeric.py", line 539, in > array_constructor > x.shape = shape > ValueError: ('total size of new array must be unchanged', array_constructor at 0x40a1002c>, ((3,), 'l', > '\x01\x00\x00\x00\x00\x00\x00\x00\x02\x00\x00\x00\x00\x00\x00\x00\x03\x00\x00\x00\x00\x00\x00\x00',True)) > > > Is there something to do to solve this difficulty ? Specify the integer type with the number of bits. Numeric.array([1,2,3]) will create an array with a typecode of 'l' (Numeric.Int), which is the type that can hold Python ints (= C longs). On your 64-bit system, it's a 64-bit integer; on the 64-bit, it's a 32-bit integer. So, on the 32-bit system, when reading the pickle, it sees an array of type 'l', but there is too much data to fill the array it expects. The solution is to explicitly create your array using a typecode that gives the size of the integer. 
Either: a = Numeric.array([1,2,3], Numeric.Int32) or a = Numeric.array([1,2,3], Numeric.Int64) I haven't checked this, but I would think that using Int32 is better if all your numbers will fit in that. Using 64-bit integers would mean the 32-bit machine would have to use 'long long' types to do its math, which would be slower, while using 32-bit integers would mean the 64-bit machine would use 'int', which would still be fast for it. -- |>|\/|< /--------------------------------------------------------------------------\ |David M. Cooke http://arbutus.physics.mcmaster.ca/dmc/ |cookedm at physics.mcmaster.ca
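Putting David's fix together with Scott Daniels's earlier point about binary file modes, an end-to-end version might look like the following (the file name is arbitrary); written this way the same pickle should load on both the 32-bit and the 64-bit machine:

import Numeric, cPickle

a = Numeric.array([1, 2, 3], Numeric.Int32)   # explicitly sized, not 'l'

f = open('test.pickle', 'wb')                 # 'wb'/'rb': binary mode everywhere
cPickle.dump(a, f)
f.close()

f = open('test.pickle', 'rb')
b = cPickle.load(f)
f.close()
print b                                       # [1 2 3] on either machine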