From chrishbarker at home.net Fri Jun 1 16:58:47 2001
From: chrishbarker at home.net (Chris Barker)
Date: Fri, 01 Jun 2001 13:58:47 -0700
Subject: [Numpy-discussion] Rich Comparisons and the Docs
References: <00101808405300.01041@penguin.visionpro.com>
Message-ID: <3B180207.A16036BF@home.net>

Hi all,

I just got done installing Python 2.1 and NumPy 20 on Linux (which was far more work than it should have been) just so that I could get rich comparisons. I then got a version of the PDF doc, dated March 30, 2001, and it is out of date with regard to rich comparisons. This brings up two questions.

1) Is the doc in CVS or something somewhere so that others (me) could contribute to it?

2) Is there a rich-comparisons way to do "and" and "or"? It's great that:

    logical_and(greater(a,b), less(a,c))

can now be:

    logical_and(a > b, a < c)

but of course, what I would really like is:

    (a > b) and (a < c)

Are there any nice ways to do this?

-Chris

--
Christopher Barker, Ph.D.
ChrisHBarker at home.net
http://members.home.net/barkerlohmann
Oil Spill Modeling / Water Resources Engineering / Coastal and Fluvial Hydrodynamics

From chrishbarker at home.net Fri Jun 1 18:17:51 2001
From: chrishbarker at home.net (Chris Barker)
Date: Fri, 01 Jun 2001 15:17:51 -0700
Subject: [Numpy-discussion] Rich Comparisons and the Docs
Message-ID: <3B18148F.4CF34EB8@home.net>

"Paul F. Dubois" wrote:
> The doc source is not available. I try to keep it up to date. If you would
> tell me the section I overlooked I will fix it.

That would be: Special Topics: Comparisons

> As to logical and and or, they are not available for overload in the rich
> comparisons schema. That's a Python decision.

I figured as much.
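A small illustration of Paul's point, in plain Python rather than Numeric (the Arr class and logical_and helper below are hypothetical stand-ins, not Numeric's implementation): the six comparison operators can be overloaded to return element-wise results, but "and" is evaluated by truth-testing its operands, which no class can redirect element-wise — hence the need for an explicit logical_and.

```python
class Arr:
    """Toy list-backed 'array' with element-wise rich comparisons."""
    def __init__(self, data):
        self.data = list(data)

    def __gt__(self, other):
        # element-wise a > b, returning 0/1 flags like Numeric's rich compare
        return Arr([int(x > y) for x, y in zip(self.data, other.data)])

    def __lt__(self, other):
        return Arr([int(x < y) for x, y in zip(self.data, other.data)])

def logical_and(p, q):
    # what logical_and(a > b, a < c) computes: element-wise truth of both
    return Arr([int(bool(x) and bool(y)) for x, y in zip(p.data, q.data)])

a, b, c = Arr([1, 2, 3]), Arr([0, 2, 1]), Arr([2, 3, 4])

print(logical_and(a > b, a < c).data)   # [1, 0, 1]

# "(a > b) and (a < c)" cannot give an element-wise answer: "and" just
# truth-tests its left operand (a bare object is always true here) and
# returns one of the operands unchanged -- the second one in this case.
print(((a > b) and (a < c)).data)       # [1, 1, 1], i.e. just (a < c)
```

Since the comparison results are 0/1 integer arrays, a bitwise spelling like (a > b) & (a < c) should also work on Numeric integer arrays, but logical_and is the explicit, unambiguous form.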
I just took a look at PEP 207: Rich Comparisons, and interestingly enough, the original version by David Ascher addressed this issue (at least in terms of chaining comparisons, such as a > b > c). In the final PEP, I found this:

    In the current proposal, when A

>>> a = array([1,2,3])
>>> b = array([3,2,1])
>>> c = array([3,3,3])
>>> a < b < c
array([0, 1, 1])

Also, does (A

Hi,
I've been having difficulty pickling arrays with the type PyObject using Numeric. I haven't tried it with MA but I assume the same problem exists.

This script works
=====================================
from cPickle import dump, load
from Numeric import array, PyObject

def pickleIt(obj, fileName):
    fp = open(fileName, 'w')
    dump(obj, fp)
    fp.close()

def loadIt(fileName):
    fp = open(fileName, 'r')
    obj = load(fp)
    fp.close()
    return obj

a = array(['abc', 'def', 'ghi'], PyObject)
pickleIt(a, 'test.pickle')

This script segfaults
====================================
# ... same imports and func defs as above
b = loadIt()
print b
====================================

I first noticed this when trying to pickle arrays constructed from lists of mx.DateTime objects.

Numeric 19.1.0
Python 2.1 final
Linux 2.2.18

Is this a reproducible bug or something unique to my setup?
Tavis

From dubois1 at llnl.gov Tue Jun 5 14:14:41 2001
From: dubois1 at llnl.gov (Paul F. Dubois)
Date: Tue, 5 Jun 2001 11:14:41 -0700
Subject: [Numpy-discussion] segfault when pickling PyObject arrays
In-Reply-To: <01060510341000.01305@lucy>
References: <01060510341000.01305@lucy>
Message-ID: <01060511162000.17252@almanac>

Travis:
Works for me, using either dump or dumps, load or loads. I used Numeric 20.1.0b1 / Python 2.1 / RedHat 6.2.

On Tue, 05 Jun 2001, Tavis Rudd wrote:
> Hi,
> I've been having difficulty pickling arrays with the
> type PyObject using Numeric. I haven't tried it with
> MA but I assume the same problem exists.
> > This script works > ===================================== > from cPickle import dump, load > from Numeric import array, PyObject > > def pickleIt(obj, fileName): > fp = open(fileName, 'w') > dump(obj, fp) > fp.close > > def loadIt(fileName): > fp = open(fileName, 'r') > obj = load(fp) > fp.close() > return obj > > a = array(['abc', 'def', 'ghi'], PyObject) > pickleIt(a, 'test.pickle') > > This script segfaults > ==================================== > # ... same imports and func defs as above > b = loadIt() > print b > > ==================================== > > I first noticed this when trying to pickle arrays constructed > from lists of mx.DateTime objects. > > Numeric 19.1.0 > Python 2.1 final > Linux 2.2.18 > > Is this a reproduceable bug or something unique to my > setup? > Tavis > > _______________________________________________ > Numpy-discussion mailing list > Numpy-discussion at lists.sourceforge.net > http://lists.sourceforge.net/lists/listinfo/numpy-discussion From tavis at calrudd.com Tue Jun 5 14:36:03 2001 From: tavis at calrudd.com (Tavis Rudd) Date: Tue, 5 Jun 2001 11:36:03 -0700 Subject: [Numpy-discussion] segfault when pickling PyObject arrays In-Reply-To: <01060511162000.17252@almanac> References: <01060510341000.01305@lucy> <01060511162000.17252@almanac> Message-ID: <01060511360301.01305@lucy> Paul, I just installed 20.1.0b1 and got the same segfault. I'm using Suse 6.4 Note that if I dump it and load it from a single process it works fine. The error only occurs when I try to load it from a separate process. Tavis On Tuesday 05 June 2001 11:14, Paul F. Dubois wrote: > Travis: > Works for me ....using either dump or dumps, load or > loads I used Numeric 20.1.0b1 / Python 2.1 / RedHat 6.2 > On Tue, 05 Jun 2001, Tavis Rudd wrote: > > Hi, > > I've been having difficultly pickling arrays with the > > type PyObject using Numeric. I haven't tried it with > > MA but I assume the same problem exists. 
> > > > This script works > > ===================================== > > from cPickle import dump, load > > from Numeric import array, PyObject > > > > def pickleIt(obj, fileName): > > fp = open(fileName, 'w') > > dump(obj, fp) > > fp.close > > > > def loadIt(fileName): > > fp = open(fileName, 'r') > > obj = load(fp) > > fp.close() > > return obj > > > > a = array(['abc', 'def', 'ghi'], PyObject) > > pickleIt(a, 'test.pickle') > > > > This script segfaults > > ==================================== > > # ... same imports and func defs as above > > b = loadIt() > > print b > > > > ==================================== > > > > I first noticed this when trying to pickle arrays > > constructed from lists of mx.DateTime objects. > > > > Numeric 19.1.0 > > Python 2.1 final > > Linux 2.2.18 > > > > Is this a reproduceable bug or something unique to my > > setup? > > Tavis > > > > _______________________________________________ > > Numpy-discussion mailing list > > Numpy-discussion at lists.sourceforge.net > > http://lists.sourceforge.net/lists/listinfo/numpy-discu > >ssion > > _______________________________________________ > Numpy-discussion mailing list > Numpy-discussion at lists.sourceforge.net > http://lists.sourceforge.net/lists/listinfo/numpy-discuss >ion From tavis at calrudd.com Tue Jun 5 14:52:57 2001 From: tavis at calrudd.com (Tavis Rudd) Date: Tue, 5 Jun 2001 11:52:57 -0700 Subject: [Numpy-discussion] segfault when pickling PyObject arrays In-Reply-To: <01060511360301.01305@lucy> References: <01060510341000.01305@lucy> <01060511162000.17252@almanac> <01060511360301.01305@lucy> Message-ID: <01060511525702.01305@lucy> oops, there's a typo in that test script I posted it should be b = loadIt('test.pickle') instead of > > > This script segfaults > > > ==================================== > > > # ... 
same imports and func defs as above
> > > b = loadIt()
> > > print b
> > >
> > > ====================================

From vanandel at atd.ucar.edu Wed Jun 6 12:50:50 2001
From: vanandel at atd.ucar.edu (Joe Van Andel)
Date: Wed, 06 Jun 2001 10:50:50 -0600
Subject: [Numpy-discussion] math range error with Masked Array
Message-ID: <3B1E5F6A.59C6558@atd.ucar.edu>

I'm trying to use the MA package for numeric computations. Unfortunately, attempting to construct a masked array sometimes fails:

    masked_values(values, missingValue, savespace=1)
      File "/usr/lib/python2.1/site-packages/MA/MA.py", line 1299, in masked_values
        m = Numeric.less_equal(abs(d-value), atol+rtol*abs(value))
    OverflowError: math range error

The odd thing is that the floating point calculations that produced the input Numeric array didn't cause a math range error, but MA's attempt to find the 'missing' values does cause a range error.

When I switched to Python 2.1, I had to find and fix several overflow problems that didn't cause exceptions under Python 1.5. For example, I had to use a "protected" exponentiation routine to avoid overflow errors:

MIN_EXP = -745
MAX_EXP = 709

def ProtExp(a):
    """ Protected exponentiation calculation.
    Avoid OverflowError on large negative or positive arguments """
    min_a = choose(less(a, MIN_EXP), (a, MIN_EXP))
    return exp(choose(greater(min_a, MAX_EXP), (min_a, MAX_EXP)))

-------------------

I'm concerned that the math exception handling for Python 2.1 under x86 Linux makes it hard to get my work done. Any ideas on how to fix this error in MA? (I already tried masked_values(values, missingValue, rtol=1e-2, atol=1.e-4, savespace=1), which didn't help.)

--
Joe VanAndel
National Center for Atmospheric Research
http://www.atd.ucar.edu/~vanandel/
Internet: vanandel at ucar.edu

From marimont at nxpdata.com Wed Jun 6 13:49:28 2001
From: marimont at nxpdata.com (David H.
Marimont)
Date: Wed, 06 Jun 2001 10:49:28 -0700
Subject: [Numpy-discussion] undefined symbols when importing lapack
Message-ID: <3B1E6D28.3CC8D443@nxpdata.com>

I just compiled and installed Numeric 20.1.0b1 using lapack and blas libraries. When I tried to import lapack_lite (after importing Numeric), I got this error:

    Traceback (most recent call last):
      File "<stdin>", line 1, in ?
    ImportError: /usr/lib/liblapack.so.3: undefined symbol: e_wsfe

I'm using Python 2.1 on RH 7.1.

I've had this problem before and have even seen postings to this list about related problems. But the solutions posted were over my head, so I've never been able to use the python interface to the lapack and blas libraries, which I really need. Does anyone have any advice, preferably pitched to someone who has limited compilation skills (i.e. at the "configure, make, make install" level)?

Thanks.

David Marimont

From cookedm at physics.mcmaster.ca Wed Jun 6 14:06:41 2001
From: cookedm at physics.mcmaster.ca (David M. Cooke)
Date: 06 Jun 2001 14:06:41 -0400
Subject: [Numpy-discussion] undefined symbols when importing lapack
In-Reply-To: <3B1E6D28.3CC8D443@nxpdata.com>
References: <3B1E6D28.3CC8D443@nxpdata.com>
Message-ID:

At some point, "David H. Marimont" wrote:

> I just compiled and installed Numeric 20.1.0b1 using lapack
> and blas libraries. When I tried to import lapack_lite (after
> importing Numeric), I got this error:
>
> Traceback (most recent call last):
>   File "<stdin>", line 1, in ?
> ImportError: /usr/lib/liblapack.so.3: undefined symbol: e_wsfe
>
> I'm using Python 2.1 on RH 7.1.
>
> I've had this problem before and have even seen postings to this list
> about related problems. But the solutions posted were over my head, so I've
> never been able to use the python interface to the lapack and blas libraries,
> which I really need. Does anyone have any advice, preferably pitched to
> someone who has limited compilation skills (i.e. at the "configure,
> make, make install" level)?
>
> Thanks.

You have to compile in the g2c library. For RH 7.1, add the path '/usr/lib/gcc-lib/i386-redhat-linux/2.96/' to library_dirs_list in setup.py, and 'g2c' in libraries_list.

So the appropriate lines in setup.py will look like:

# delete all but the first one in this list if using your own LAPACK/BLAS
sourcelist = ['Src/lapack_litemodule.c',
#             'Src/blas_lite.c',
#             'Src/f2c_lite.c',
#             'Src/zlapack_lite.c',
#             'Src/dlapack_lite.c'
              ]
# set these to use your own BLAS
library_dirs_list = ['/usr/local/lib', '/usr/lib/gcc-lib/i386-redhat-linux/2.96/']
libraries_list = ['lapack', 'blas', 'g2c']

If you're compiling on Debian, I don't think you need to add the path (but you need 'g2c').

You need g2c because lapack and blas were compiled from Fortran using g77, and so they depend on routines that implement some of the Fortran statements.

--
|>|\/|<
|David M. Cooke
|cookedm at mcmaster.ca

From marimont at nxpdata.com Wed Jun 6 14:32:51 2001
From: marimont at nxpdata.com (David H. Marimont)
Date: Wed, 06 Jun 2001 11:32:51 -0700
Subject: [Numpy-discussion] undefined symbols when importing lapack
References: <3B1E6D28.3CC8D443@nxpdata.com>
Message-ID: <3B1E7753.36A250FC@nxpdata.com>

Thanks, David, that worked perfectly -- I can now import lapack_lite without any errors.

Now I need to know how to call lapack functions aside from the ones that come packaged with lapack_lite (dgeev, dgelss, dgelss, dgesv, dgesvd, dgetrf, dsyev, zgelss, zgesv, zgesvd, zgetff, and zheev). I found these via inspect.getmembers(lapack_lite). Where do all the other lapack functions live? And is there some way for me to determine that automatically?

Thanks.

David

"David M. Cooke" wrote:
>
> At some point, "David H. Marimont" wrote:
>
> > I just compiled and installed Numeric 20.1.0b1 using lapack
When I tried to import lapack_lite (after > > importing Numeric), I got this error: > > > > Traceback (most recent call last): > > File "", line 1, in ? > > ImportError: /usr/lib/liblapack.so.3: undefined symbol: e_wsfe > > > > I'm using Python 2.1 on RH 7.1. > > > > I've had this problem before and have even seen postings to this list > > about related problems. But the solutions posted were over my head, so I've > > never been able to use the python interface to the lapack and blas libraries, > > which I really need. Does anyone have any advice, preferably pitched to > > someone who has limited compilation skills (i.e. at the the "configure, > > make, make install" level)? > > > > Thanks. > > You have to compile in the g2c library. For RH 7.1, add the path > '/usr/lib/gcc-lib/i386-redhat-linux/2.96/' to library_dirs_list in > setup.py, and 'g2c' in libraries_list. > > So the appropiate lines in setup.py will look like: > > # delete all but the first one in this list if using your own LAPACK/BLAS > sourcelist = ['Src/lapack_litemodule.c', > # 'Src/blas_lite.c', > # 'Src/f2c_lite.c', > # 'Src/zlapack_lite.c', > # 'Src/dlapack_lite.c' > ] > # set these to use your own BLAS > library_dirs_list = ['/usr/local/lib', '/usr/lib/gcc-lib/i386-redhat-linux/2.96/'] > libraries_list = ['lapack', 'blas', 'g2c'] > > If you're compiling on Debian, I don't think you need to add the path > (but you need 'g2c'). > > You need g2c because lapack and blas were compiled from Fortran using > g77, and so they depend on routines that implement some of the Fortran > statements. > > -- > |>|\/|< > /--------------------------------------------------------------------------\ > |David M. 
Cooke > |cookedm at mcmaster.ca > > _______________________________________________ > Numpy-discussion mailing list > Numpy-discussion at lists.sourceforge.net > http://lists.sourceforge.net/lists/listinfo/numpy-discussion From cookedm at physics.mcmaster.ca Wed Jun 6 14:57:04 2001 From: cookedm at physics.mcmaster.ca (David M. Cooke) Date: 06 Jun 2001 14:57:04 -0400 Subject: [Numpy-discussion] undefined symbols when importing lapack In-Reply-To: <3B1E7753.36A250FC@nxpdata.com> References: <3B1E6D28.3CC8D443@nxpdata.com> <3B1E7753.36A250FC@nxpdata.com> Message-ID: At some point, "David H. Marimont" wrote: > Thanks, David, that worked perfectly -- I can now import lapack_lite > without any errors. > > Now I need to now how to call lapack functions aside from the ones > that come packaged with lapack_lite (dgeev, dgelss, dgelss, dgesv, > dgesvd, dgetrf, dsyev, zgelss, zgesv,z gesvd, zgetff, and zheev). > I found these via inspect.getmembers(lapack_lite). Where do all > the other lapack functions live? And is there some way for me to > determine that automatically? > > Thanks. You want PyLapack. See a previous message at http://www.geocrawler.com/archives/3/1329/2000/4/0/3616954/ -- |>|\/|< /--------------------------------------------------------------------------\ |David M. Cooke |cookedm at mcmaster.ca From karshi.hasanov at utoronto.ca Wed Jun 6 18:46:02 2001 From: karshi.hasanov at utoronto.ca (Karshi Hasanov) Date: Wed, 6 Jun 2001 17:46:02 -0500 Subject: [Numpy-discussion] vtkStructuredPoints Message-ID: <01060617460200.01082@cr322577-a> Hi, I wanna build *.vtk structured data file from an array A[i,j,k] which has a vector attributes. What's the right( or best) way of doing it using python? I do have the VTK User's Guide Book, but it didn't tell me much. Thanks From paul at pfdubois.com Wed Jun 6 18:51:56 2001 From: paul at pfdubois.com (Paul F. 
Dubois) Date: Wed, 6 Jun 2001 15:51:56 -0700 Subject: [Numpy-discussion] 20.1.0b2 Message-ID: 20.1.0b2 is now available in tar.gz, .exe, .zip formats. From Robert.Denham at dnr.qld.gov.au Wed Jun 6 23:58:33 2001 From: Robert.Denham at dnr.qld.gov.au (Robert.Denham at dnr.qld.gov.au) Date: Thu, 7 Jun 2001 13:58:33 +1000 Subject: [Numpy-Discussion] 3d interpolation Message-ID: I have a series of x,y,z irregular data points. I would like to create a 2-d array (surface) from this data, each cell in the array being an interpolated value based on the nearby z values. I was wondering if anyone had any experience or suggestions under python? The interpolation algorithm needn't be elaborate, kriging is definite overkill, I was thinking more along the lines of splines or even inverse distance weighting. Thanks Robert Denham Department of Natural Resources Queensland, Australia ************************************************************************ The information in this e-mail together with any attachments is intended only for the person or entity to which it is addressed and may contain confidential and/or privileged material. Any form of review, disclosure, modification, distribution and/or publication of this e-mail message is prohibited. If you have received this message in error, you are asked to inform the sender as quickly as possible and delete this message and any copies of this message from your computer and/or your computer system network. ************************************************************************ From pearu at cens.ioc.ee Thu Jun 7 04:06:27 2001 From: pearu at cens.ioc.ee (Pearu Peterson) Date: Thu, 7 Jun 2001 10:06:27 +0200 (EET) Subject: [Numpy-discussion] vtkStructuredPoints In-Reply-To: <01060617460200.01082@cr322577-a> Message-ID: On Wed, 6 Jun 2001, Karshi Hasanov wrote: > I wanna build *.vtk structured data file from an array A[i,j,k] which has > a vector attributes. What's the right( or best) way of doing it using python? 
> I do have the VTK User's Guide Book, but it didn't tell me much.
> Thanks

Check out PyVTK: http://cens.ioc.ee/projects/pyvtk/

Using PyVTK you can create the data file as follows:

from pyvtk import *
VtkData(StructuredPoints([n1,n2,n3]), PointData(Vectors(A))).tofile('arr.vtk')

where A is an n1 x n2 x n3 array of 3-sequences.

Pearu

From paul at pfdubois.com Thu Jun 7 12:26:22 2001
From: paul at pfdubois.com (Paul F. Dubois)
Date: Thu, 7 Jun 2001 09:26:22 -0700
Subject: [Numpy-Discussion] 3d interpolation
In-Reply-To:
Message-ID:

Try this and let me know if it works for you. It implements Hardy's multiquadric. Note the caution on the number of input points. This algorithm usually does a really spiffy job. Try the default rsq first.

-- Paul

-----Original Message-----
From: numpy-discussion-admin at lists.sourceforge.net [mailto:numpy-discussion-admin at lists.sourceforge.net]On Behalf Of Robert.Denham at dnr.qld.gov.au
Sent: Wednesday, June 06, 2001 8:59 PM
To: numpy-discussion at lists.sourceforge.net
Subject: [Numpy-Discussion] 3d interpolation

I have a series of x,y,z irregular data points. I would like to create a 2-d array (surface) from this data, each cell in the array being an interpolated value based on the nearby z values. I was wondering if anyone had any experience or suggestions under python? The interpolation algorithm needn't be elaborate; kriging is definite overkill. I was thinking more along the lines of splines or even inverse distance weighting.

Thanks
Robert Denham
Department of Natural Resources
Queensland, Australia

************************************************************************
The information in this e-mail together with any attachments is intended only for the person or entity to which it is addressed and may contain confidential and/or privileged material. Any form of review, disclosure, modification, distribution and/or publication of this e-mail message is prohibited.
If you have received this message in error, you are asked to inform the sender as quickly as possible and delete this message and any copies of this message from your computer and/or your computer system network.
************************************************************************

_______________________________________________
Numpy-discussion mailing list
Numpy-discussion at lists.sourceforge.net
http://lists.sourceforge.net/lists/listinfo/numpy-discussion
-------------- next part --------------
An embedded and charset-unspecified text was scrubbed...
Name: multiquadric.py
URL:

From vanandel at atd.ucar.edu Fri Jun 8 15:58:58 2001
From: vanandel at atd.ucar.edu (Joe Van Andel)
Date: Fri, 08 Jun 2001 13:58:58 -0600
Subject: [Numpy-discussion] math range error with Masked Array
References:
Message-ID: <3B212E82.B6169CA1@atd.ucar.edu>

"Paul F. Dubois" wrote:
>
> Is the array single or double precision? Does it have the spacesaver
> attribute set?

The array is single precision, with the savespace attribute set.

> Is it possible the data had bad values in it that were something other than
> the missing value? What was the missing value?

Yes, I finally found that the array contained some 'NaN' values, because of an error in my earlier calculation. So, at this point, MA works for me. Thanks for your help.

--
Joe VanAndel
National Center for Atmospheric Research
http://www.atd.ucar.edu/~vanandel/
Internet: vanandel at ucar.edu

From hinsen at cnrs-orleans.fr Tue Jun 12 05:54:41 2001
From: hinsen at cnrs-orleans.fr (Konrad Hinsen)
Date: Tue, 12 Jun 2001 11:54:41 +0200
Subject: [Numpy-discussion] underflow array([2.9e-131])**3
In-Reply-To: (message from Jean-Bernard Addor on Thu, 31 May 2001 18:59:54 -0400 (EDT))
References:
Message-ID: <200106120954.LAA05154@chinon.cnrs-orleans.fr>

> Do anyone know how to disable underflow exception errors in Numeric?
The problem is not in Numeric, it is the C library that decides whether underflows should be considered errors or not. The Python interpreter has some workarounds, but Numeric has not. I had the same problems you describe under AIX, where I solved them by linking to a different version of the math library. But I don't think there is a platform-independent solution.

Konrad.

--
-------------------------------------------------------------------------------
Konrad Hinsen                            | E-Mail: hinsen at cnrs-orleans.fr
Centre de Biophysique Moleculaire (CNRS) | Tel.: +33-2.38.25.56.24
Rue Charles Sadron                       | Fax: +33-2.38.63.15.17
45071 Orleans Cedex 2                    | Deutsch/Esperanto/English/
France                                   | Nederlands/Francais
-------------------------------------------------------------------------------

From hinsen at cnrs-orleans.fr Tue Jun 12 05:56:38 2001
From: hinsen at cnrs-orleans.fr (Konrad Hinsen)
Date: Tue, 12 Jun 2001 11:56:38 +0200
Subject: [Numpy-discussion] PS: underflow array([2.9e-131+0j])**3 (why complex give a different result? bug?)
In-Reply-To: (message from Jean-Bernard Addor on Thu, 31 May 2001 19:09:03 -0400 (EDT))
References:
Message-ID: <200106120956.LAA05157@chinon.cnrs-orleans.fr>

> >>> Numeric.array([2.9e-131])**3
> Traceback (most recent call last):
>   File "<stdin>", line 1, in ?
> OverflowError: math range error
> >>> 2.9e-131**3
> 0.0
> >>> Numeric.array([2.9e-131+0j])**3
> array([ 0.+0.j])
>
> Now I have a quick solution for my problem, but I have the impression that
> this is a bug, at least I don't understand the underlying logic. If
> somebody can explain it to me?

The power calculation routines for float and complex are completely different, I suppose the absence of underflow reporting in the latter is just a side effect.

Konrad.
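A user-side workaround in the same spirit as the ProtExp routine earlier in this digest is to clamp the argument before exponentiating, so the C library never sees a value it would flag as a range error. A plain-Python sketch of that clamping idea (math module only, no Numeric; the constants are the approximate argument bounds of exp() for IEEE doubles, taken from ProtExp above):

```python
import math

# Approximate limits for exp() on IEEE doubles: below MIN_EXP the result
# underflows toward 0.0, above MAX_EXP it overflows past the largest double.
MIN_EXP = -745
MAX_EXP = 709

def prot_exp(x):
    # Clamp the argument into the representable range before calling exp().
    # Very negative inputs yield a denormal near 0.0 instead of an error;
    # very positive inputs yield exp(709), the largest safe result.
    return math.exp(min(max(x, MIN_EXP), MAX_EXP))

print(prot_exp(0.0))       # 1.0
print(prot_exp(-1000.0))   # tiny value near 0.0, no range error
```

This trades exactness at the extremes for never raising OverflowError, which is usually the right trade when the true answer is effectively 0 or "too big anyway".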
--
-------------------------------------------------------------------------------
Konrad Hinsen                            | E-Mail: hinsen at cnrs-orleans.fr
Centre de Biophysique Moleculaire (CNRS) | Tel.: +33-2.38.25.56.24
Rue Charles Sadron                       | Fax: +33-2.38.63.15.17
45071 Orleans Cedex 2                    | Deutsch/Esperanto/English/
France                                   | Nederlands/Francais
-------------------------------------------------------------------------------

From vanandel at atd.ucar.edu Tue Jun 12 20:20:10 2001
From: vanandel at atd.ucar.edu (Joe Van Andel)
Date: Tue, 12 Jun 2001 18:20:10 -0600
Subject: [Numpy-discussion] performance comparison of C++ vs Numeric (MA) operations.
Message-ID: <3B26B1BA.37988FBF@atd.ucar.edu>

I was curious about the relative performance of C++ vs Numeric Python, for operations on arrays of roughly 400,000 elements.

I built a simple single precision array multiplication function in C++ that performs an element by element multiply, checking whether each element is "valid" or "missing data". Then, for comparison, I wrote a similar multiplication routine using the Masked Array (MA) package of Numeric Python.

I compiled Numeric Python (20.1.0b2) with '-O3', by modifying setup.py to contain lines like:

OPTIMIZE=['-O3']
ext_modules = . .
    Extension('multiarray', ['Src/multiarraymodule.c'],
              extra_compile_args=OPTIMIZE
              ),
---------------------------------------
On an 800 MHz dual processor Dell Linux box, using gcc 2.95.3:

Software                Performance
------------------------------------------------
Numeric Python           5.0e6 multiplies/second
Numeric Python -O3       6.1e6 multiplies/second
C++                     10.3e6 multiplies/second
C++ -O3                 10.3e6 multiplies/second

(I tried using "plain" Numeric arrays, rather than Masked arrays, and it didn't seem to make much difference.)

Has anyone else benchmarked the relative performance of C/C++ vs Numeric Python? Does anyone know of other optimizations to Numeric Python that could be implemented?
I know a more realistic benchmark would include I/O, which might tend to reduce the apparent difference in performance.

I've attached the benchmark modules, in case someone would like to examine them.

--
Joe VanAndel
National Center for Atmospheric Research
http://www.atd.ucar.edu/~vanandel/
Internet: vanandel at ucar.edu
-------------- next part --------------
import sys
# test harness for Masked array performance
from MA import *
#from Numeric import *
from Perp.util.TimerUtility import TimerUtility

def mult_test(a1, a2):
    res = a1 * a2

if __name__ == '__main__':
    repeat = 100
    gates = 1000
    beams = 370
    if len(sys.argv) > 1:
        repeat = int(sys.argv[1])

    t1 = ones((beams, gates), Float)
    a1 = masked_values(t1, -327.68)
    a2 = masked_values(t1, -327.68)

    i = 0
    tu = TimerUtility(())
    while i < repeat:
        i = i + 1
        res = mult_test(a1, a2)
    elapsed = tu.elapsed()
    print 'completed %d in %f seconds' % (repeat, elapsed)
    cntMultiply = repeat * gates * beams
    print '%8.3g checked multiplies/second' % (cntMultiply / elapsed)
-------------- next part --------------
#include <iostream>
#include <stdlib.h>
#include "PerfTimer.h"

typedef float *FLOAT_PTR;

extern void mult_test(FLOAT_PTR *a1, FLOAT_PTR *a2, FLOAT_PTR *resp,
                      float missingValue);

const int gates = 1000;
const int beams = 370;

int main(int argc, char *argv[])
{
    int repeat = 100;
    const float missingValue = -327.68;
    if (argc > 1)
        repeat = atoi(argv[1]);

    FLOAT_PTR *a1 = new FLOAT_PTR[beams];
    FLOAT_PTR *a2 = new FLOAT_PTR[beams];
    FLOAT_PTR *res = new FLOAT_PTR[beams];

    // allocate storage for 2d variables
    for (int b = 0; b < beams; ++b) {
        a1[b] = new float[gates];
        a2[b] = new float[gates];
        res[b] = new float[gates];
    }

    PerfTimer pt;
    for (int r = 0; r < repeat; ++r) {
        mult_test(a1, a2, res, missingValue);
    }
    double elapsed = pt.Elapsed();
    double cntMultiply = repeat*gates*beams;
    cout << repeat << " repetitions completed" << endl;
    cout << cntMultiply << " checked multiplies" << endl;
    cout << cntMultiply/elapsed << " checked multiplies/second" << endl;
}

void
mult_test(FLOAT_PTR *a1, FLOAT_PTR *a2, FLOAT_PTR *resp, float missingValue)
{
    const float atol = 1.e-8;
    const float rtol = 1.0e-5;
    for (int b = 0; b < beams; ++b) {
        for (int g = 0; g < gates; ++g) {
            if (fabs(a1[b][g] - missingValue) < atol + rtol * fabs(missingValue)) {
                resp[b][g] = missingValue;
                continue;
            }
            else if (fabs(a2[b][g] - missingValue) < atol + rtol * fabs(missingValue)) {
                resp[b][g] = missingValue;
            }
            else {
                resp[b][g] = a1[b][g] * a2[b][g];
            }
        } // for gates
    } // for beams
}
-------------- next part --------------
#include <time.h>

class PerfTimer {
public:
    // constructor, starts timing
    PerfTimer();
    // reset starting time
    void Start();
    // compute elapsed time since last construction or Start()
    double Elapsed();
private:
    struct timespec startTime_;
};
-------------- next part --------------
#include "PerfTimer.h"

PerfTimer::PerfTimer()
{
    Start();
}

void PerfTimer::Start()
{
    clock_gettime(CLOCK_REALTIME, &startTime_);
}

double PerfTimer::Elapsed()
{
    struct timespec stopTime;
    clock_gettime(CLOCK_REALTIME, &stopTime);
    return (stopTime.tv_sec + stopTime.tv_nsec/1.0e9
            - (startTime_.tv_sec + startTime_.tv_nsec/1.0e9));
}
-------------- next part --------------
#CCFLAGS=-O3
CCFLAGS=
CXXFLAGS=${CCFLAGS}

all: arrayperf

arrayperf: arrayperf.cc
	g++ -o arrayperf arrayperf.cc PerfTimer.cc -lm -lrt
-------------- next part --------------
#
import time

class TimerUtility:
    """ Timer/Utility for performance measurement """
    def __init__(self, verbose=1, useElapsed=0):
        """ ctor: can suppress printing by setting **verbose** to 0 """
        # if we're timing elapsed events, including subprocesses,
        # then use time.time()
        if useElapsed:
            self.__timeFunc = time.time
        else:
            self.__timeFunc = time.clock
        self.__startTime = self.__timeFunc()
        self.__lastTime = self.__startTime
        self.__verbose = verbose

    def elapsed(self, msg = 'Elapsed '):
        """ print elapsed time since instance creation or last **elapsed()** call """
        current = self.__timeFunc()
        delta = (current - self.__lastTime)
        if (self.__verbose):
            print '%s : %5.2f' % (msg, float(delta))
        self.__lastTime = current
        return delta

    def rate(self, count, msg = 'rate'):
        """ print elapsed time and rate since instance creation or last **elapsed()** call """
        current = self.__timeFunc()
        delta = (current - self.__lastTime)
        if (self.__verbose):
            print '%s : %5.2f : %6.2f' % (msg, float(delta), float(count)/float(delta))
        self.__lastTime = current
        return delta

    def total(self, msg = 'Total '):
        """ print total time since TimerUtility was created """
        current = self.__timeFunc()
        diff = (current - self.__startTime)
        if (self.__verbose):
            print '%s : %5.2f' % (msg, diff)
        return diff

From paul at pfdubois.com Tue Jun 12 21:08:46 2001
From: paul at pfdubois.com (Paul F. Dubois)
Date: Tue, 12 Jun 2001 18:08:46 -0700
Subject: [Numpy-discussion] performance comparison of C++ vs Numeric (MA) operations.
In-Reply-To: <3B26B1BA.37988FBF@atd.ucar.edu>
Message-ID:

I have a timing benchmark for MA that computes the ratio MA/Numeric for two cases:

1. there is actually no mask
2. there is a mask

For N=50,000 these ratios are usually around 1.3 and 1.8 respectively. It makes sense in the second case that the number might be around 2, since you have to pass through the mask data as well, even if it is only bytes. In short, there is this much overhead to MA. If you got MA/C++ = 1.67, it would indicate that Numpy and C++ are comparable. The tests Jim did when he first wrote it were about 10% worse than C.

Your C++ uses a special value instead of a mask array, which may mean that you traded space for CPU time, and using large arrays like that may cause some page faults (?). Anyway, you're comparing apples and oranges a little. My point is this is probably an MA issue rather than a Numpy issue. However, please note that I did not (yet) do any of the normal profiling and testing that one would do to speed MA up, such as putting key parts in C. This is just not an issue for me right now.
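The mask overhead Paul describes can be sketched in plain Python (a hypothetical stand-in for the MA and C++ routines above, not their actual code): a checked multiply consults a separate mask array for every element, so it does strictly more work per element than the unchecked multiply. The absolute pure-Python timings say nothing about the C-level ratios in the benchmark; the point is only the extra per-element mask test.

```python
import time

def plain_multiply(a, b):
    # unchecked element-wise multiply
    return [x * y for x, y in zip(a, b)]

def masked_multiply(a, b, mask, fill=-327.68):
    # checked multiply in the MA spirit: an extra pass over a separate
    # mask array, writing the fill value wherever an element is masked
    return [fill if m else x * y for x, y, m in zip(a, b, mask)]

n = 370 * 1000  # beams * gates, as in the benchmark above
a = [2.0] * n
b = [3.0] * n
mask = [i % 100 == 0 for i in range(n)]  # every 100th element masked

t0 = time.perf_counter()
r1 = plain_multiply(a, b)
t_plain = time.perf_counter() - t0

t0 = time.perf_counter()
r2 = masked_multiply(a, b, mask)
t_masked = time.perf_counter() - t0

print(r2[0], r2[1])  # first element is masked (fill value), second is 6.0
print('masked/plain time ratio: %.2f' % (t_masked / t_plain))
```

The ratio printed here reflects Python-level overhead, but the shape of the result matches Paul's observation: carrying a mask costs a constant factor per element on top of the arithmetic itself.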
-----Original Message----- From: numpy-discussion-admin at lists.sourceforge.net [mailto:numpy-discussion-admin at lists.sourceforge.net]On Behalf Of Joe Van Andel Sent: Tuesday, June 12, 2001 5:20 PM To: numpy-discussion Subject: [Numpy-discussion] performance comparison of C++ vs Numeric (MA) operations. I was curious about the relative performance of C++ vs Numeric Python, for operations on arrays of roughly 400,000 array elements. I built a simple array single precision multiplication function in C++, that performs an element by element multiply, checking whether each element is "valid" or "missing data". Then, for comparison, I wrote a similar multiplication routine, using the Masked Array (MA) package of Numeric Python. I compiled Numeric Python (20.1.0b2) with '-O3', by modifying setup.py to contain lines like OPTIMIZE=['-O3'] ext_modules = . . Extension('multiarray', ['Src/multiarraymodule.c'], extra_compile_args=OPTIMIZE ), --------------------------------------- On an 800 MHz dual processor Dell Linux box, using gcc 2.95.3, Software Performance ------------------------------------------------ Numeric Python 5.0e6 multiplies/second Numeric Python -O3 6.1e6 multiplies/second C++ 10.3e6 multiplies/second C++ -O3 10.3e6 multiplies/second (I tried using "plain" Numeric arrays, rather than Masked arrays, and it didn't seem to make much difference.) Has anyone else benchmarked the relative performance of C/C++ vs Numeric Python? Does anyone know of other optimizations to Numeric Python that could be implemented? I know a more realistic benchmark would include I/O, which might tend to reduce the apparent difference in performance. I've attached the benchmark modules, in case someone would like to examine them. -- Joe VanAndel National Center for Atmospheric Research http://www.atd.ucar.edu/~vanandel/ Internet: vanandel at ucar.edu From paul at pfdubois.com Tue Jun 12 21:23:13 2001 From: paul at pfdubois.com (Paul F. 
Dubois) Date: Tue, 12 Jun 2001 18:23:13 -0700 Subject: [Numpy-discussion] performance comparison of C++ vs Numeric (MA) operations. In-Reply-To: <3B26B1BA.37988FBF@atd.ucar.edu> Message-ID: PS my test was on double precision, failed to notice that too. -----Original Message----- From: numpy-discussion-admin at lists.sourceforge.net [mailto:numpy-discussion-admin at lists.sourceforge.net]On Behalf Of Joe Van Andel Sent: Tuesday, June 12, 2001 5:20 PM To: numpy-discussion Subject: [Numpy-discussion] performance comparison of C++ vs Numeric (MA) operations. I was curious about the relative performance of C++ vs Numeric Python, for operations on arrays of roughly 400,000 array elements. I built a simple array single precision multiplication function in C++, that performs an element by element multiply, checking whether each element is "valid" or "missing data". Then, for comparison, I wrote a similar multiplication routine, using the Masked Array (MA) package of Numeric Python. I compiled Numeric Python (20.1.0b2) with '-O3', by modifying setup.py to contain lines like OPTIMIZE=['-O3'] ext_modules = . . Extension('multiarray', ['Src/multiarraymodule.c'], extra_compile_args=OPTIMIZE ), --------------------------------------- On an 800 MHz dual processor Dell Linux box, using gcc 2.95.3, Software Performance ------------------------------------------------ Numeric Python 5.0e6 multiplies/second Numeric Python -O3 6.1e6 multiplies/second C++ 10.3e6 multiplies/second C++ -O3 10.3e6 multiplies/second (I tried using "plain" Numeric arrays, rather than Masked arrays, and it didn't seem to make much difference.) Has anyone else benchmarked the relative performance of C/C++ vs Numeric Python? Does anyone know of other optimizations to Numeric Python that could be implemented? I know a more realistic benchmark would include I/O, which might tend to reduce the apparent difference in performance. I've attached the benchmark modules, in case someone would like to examine them. 
-- Joe VanAndel National Center for Atmospheric Research http://www.atd.ucar.edu/~vanandel/ Internet: vanandel at ucar.edu From chrishbarker at home.net Wed Jun 13 15:24:42 2001 From: chrishbarker at home.net (Chris Barker) Date: Wed, 13 Jun 2001 12:24:42 -0700 Subject: [Numpy-discussion] performance comparison of C++ vs Numeric (MA) operations. References: <3B26B1BA.37988FBF@atd.ucar.edu> Message-ID: <3B27BDFA.30857110@home.net> If I read your C++ right (and I may not have, I'm a C++ novice), you allocated the memory for all three arrays, and then performed your loop. In the Python version, the result array is allocated when the multiplication is performed, so you are allocating and freeing the result array each time in the loop. That may slow things down a little. In a real application, you are less likely to be re-doing the same computation over and over again, so the allocation would happen only once. You might try something like this, and see if it is any faster (it is more memory efficient) Note also that there is some overhead in function calls in Python, so you may get some speed up if you inline the call to mult_test. You can decide for yourself if this would still be a fair comparison. 
You might try something like this, and see if it is any faster (it is more memory efficient) (unfortunately, MA doesn't seem to support the third argument to multiply) My version (I don't have TimerUtility, so I used time.clock instead) got these times: Your code: completed 1000 in 99.050000 seconds 3.74e+06 checked multiplies/second My code: alternative completed 1000 in 80.070000 seconds 4.62e+06 checked multiplies/second It did buy you something: here is the code: #!/usr/bin/env python2.1 import sys # test harness for Masked array performance #from MA import * from Numeric import * from time import clock def mult_test(a1, a2): res = a1 * a2 return res if __name__ == '__main__': repeat = 100 gates = 1000 beams = 370 if len(sys.argv) > 1: repeat = int(sys.argv[1]) t1 = ones((beams, gates), Float) a1 = t1 a2 = t1 # a1 = masked_values(t1, -327.68) # a2 = masked_values(t1, -327.68) i = 0 start = clock() while (i < repeat): i = i+1 res = mult_test(a1, a2) elapsed = clock() - start print 'completed %d in %f seconds' % (repeat , elapsed) cntMultiply = repeat*gates*beams print '%8.3g checked multiplies/second' % (cntMultiply/elapsed) print # alternative: res = zeros(a1.shape,Float) i = 0 start = clock() while (i < repeat): i = i+1 multiply(a1, a2, res) elapsed = clock() - start print 'alternative completed %d in %f seconds' % (repeat , elapsed) cntMultiply = repeat*gates*beams print '%8.3g checked multiplies/second' % (cntMultiply/elapsed) print Another note: calling ones with Float as your type gives you a Python float, which is a C double. Use 'f' or Float32 to get a C float. I've found on Intel hardware, doubles are just as fast (the FPU uses doubles anyway), but they do use more memory, so this could make a difference. -Chris -- Christopher Barker, Ph.D. 
ChrisHBarker at home.net --- --- --- http://members.home.net/barkerlohmann ---@@ -----@@ -----@@ ------@@@ ------@@@ ------@@@ Oil Spill Modeling ------ @ ------ @ ------ @ Water Resources Engineering ------- --------- -------- Coastal and Fluvial Hydrodynamics -------------------------------------- ------------------------------------------------------------------------ From paul at pfdubois.com Thu Jun 14 13:03:16 2001 From: paul at pfdubois.com (Paul F. Dubois) Date: Thu, 14 Jun 2001 10:03:16 -0700 Subject: [Numpy-discussion] New testing procedures Message-ID: In CVS there is now a file Test/test.py instead of the previous test_items and test_all.py. test.py uses PyUnit, the new testing framework in Python. Developers should be able to add tests much more easily now. From paul at pfdubois.com Fri Jun 15 19:37:34 2001 From: paul at pfdubois.com (Paul F. Dubois) Date: Fri, 15 Jun 2001 16:37:34 -0700 Subject: [Numpy-discussion] MA improvements Message-ID: Heavy MA users may wish to get the latest from CVS to try. It has numerous improvements as detailed in changes.txt. I am sending out this announcement because I won't be available to work on it for three weeks, so if you find it is broken sync back to June 14 until I can fix it. From hoel at germanlloyd.org Tue Jun 19 10:37:38 2001 From: hoel at germanlloyd.org (Berthold =?iso-8859-1?q?H=F6llmann?=) Date: 19 Jun 2001 16:37:38 +0200 Subject: [Numpy-discussion] speed problem with Numeric sort Message-ID: We have a speed problem with Numeric.sort on large arrays with only a few different values. 
Here is my example -- snip -- >cat numtst.py import Numeric print Numeric.__version__ class timer: def __init__(self): import time self.start = time.time() def stop(self): import time print "%.3f" % (time.time() - self.start) from RandomArray import random from Numeric import sort, Int r=random((71400,)) t = timer() ; p=sort(r) ; t.stop() r=(random((71400,))*70000).astype(Int) t = timer() ; p=sort(r) ; t.stop() r=(random((71400,))*70).astype(Int) t = timer() ; p=sort(r) ; t.stop() r=(random((71400,))*7).astype(Int) t = timer() ; p=sort(r) ; t.stop() 16:27 hoel at seeve:hoel 2>python numtst.py 17.3.0 0.185 0.148 2.053 21.668 -- snip -- So the fewer distinct values the array contains, the longer the sorting takes. Is this also the case with newer versions of Numeric (But this is Python 1.5.2)? Why is sorting of these arrays so slow? Thanks Berthold -- email: hoel at GermanLloyd.org ) tel. : +49 (40) 3 61 49 - 73 74 ( C[_] These opinions might be mine, but never those of my employer. From tim.hochberg at ieee.org Tue Jun 19 11:54:21 2001 From: tim.hochberg at ieee.org (Tim Hochberg) Date: Tue, 19 Jun 2001 08:54:21 -0700 Subject: [Numpy-discussion] speed problem with Numeric sort References: Message-ID: <144801c0f8d8$1b4c8d40$87740918@cx781526b> Hi Berthold, I tested your code on Win95 using Numeric 20.0.0 and got essentially the same pattern of times. So, upgrading your version of Numeric is not likely to help. Curious, I checked out the code for sort and found that it just calls qsort from the C library. I suspect that the problem is related to quicksort having bad worst case behaviour. Not being a computer scientist, I can't tell you under what situations the bad behaviour is triggered, although I know it doesn't like presorted lists. Anyway, if you're using 1D arrays, one workaround would be to use list.sort. Python's sorting routines have, I understand, lots of gimmicks to avoid the problems that quicksort sometimes encounters. 
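The behaviour described above is the classic failure mode of a two-way quicksort on many equal keys; a three-way ("fat pivot") partition, which sets aside keys equal to the pivot in a single pass, is the standard remedy. A plain-Python sketch (illustrative only, not Numeric's or libc's code):

```python
import random

def qsort3(xs):
    """Quicksort with a three-way partition.

    Keys equal to the pivot are grouped in one pass, so arrays
    dominated by a few distinct values no longer trigger the
    near-quadratic behaviour of a plain two-way quicksort.
    """
    if len(xs) <= 1:
        return xs
    pivot = random.choice(xs)
    return (qsort3([x for x in xs if x < pivot])
            + [x for x in xs if x == pivot]
            + qsort3([x for x in xs if x > pivot]))
```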
I tried it and this: r=(random((71400,))*7).astype(Int) l = r.tolist() l.sort() p = array(l) runs about 40 times faster than this: r=(random((71400,))*7).astype(Int) p = sort(r) If you don't need to convert to and from an array, this approach is 60 times faster. Even if you're dealing with a multidimensional array, this approach (in a loop) might be significantly faster assuming you're sorting along the long axis. It makes one wonder if using the python sort rather than qsort for Numeric.sort would be a profitable change. No time to investigate it right now though. Hope that's useful... -tim > We have a speed problem with Numeric.sort on large arrays with only a > few different values. Here is my example > > -- snip -- > > >cat numtst.py > import Numeric > print Numeric.__version__ > class timer: > def __init__(self): > import time > self.start = time.time() > def stop(self): > import time > print "%.3f" % (time.time() - self.start) > > from RandomArray import random > from Numeric import sort, Int > > r=random((71400,)) > t = timer() ; p=sort(r) ; t.stop() > r=(random((71400,))*70000).astype(Int) > t = timer() ; p=sort(r) ; t.stop() > r=(random((71400,))*70).astype(Int) > t = timer() ; p=sort(r) ; t.stop() > r=(random((71400,))*7).astype(Int) > t = timer() ; p=sort(r) ; t.stop() > 16:27 hoel at seeve:hoel 2>python numtst.py > 17.3.0 > 0.185 > 0.148 > 2.053 > 21.668 > > -- snip -- > > So the fewer distinct values the array contains, the longer > the sorting takes. Is this also the case with newer versions of > Numeric (But this is Python 1.5.2)? Why is sorting of these arrays so > slow? > > Thanks > > Berthold > -- > email: hoel at GermanLloyd.org > ) tel. : +49 (40) 3 61 49 - 73 74 > ( > C[_] These opinions might be mine, but never those of my employer. 
> > _______________________________________________ > Numpy-discussion mailing list > Numpy-discussion at lists.sourceforge.net > http://lists.sourceforge.net/lists/listinfo/numpy-discussion From chrishbarker at home.net Tue Jun 19 12:58:41 2001 From: chrishbarker at home.net (Chris Barker) Date: Tue, 19 Jun 2001 09:58:41 -0700 Subject: [Numpy-discussion] speed problem with Numeric sort References: Message-ID: <3B2F84C1.BB1608C6@home.net> Well, this is what I get with Python 2.1, NumPy 20.0.0, Linux on a 450 MHz PIII [cbarker at waves junk]$ python numtst.py 20.0.0 0.438 0.154 0.148 0.139 So whatever it was may have been fixed (or be a strange platform dependence). Note, you do want to be a bit careful about using time.time, as it measures real time, so if you have another process hogging resources, it will not be a fair measure. You can use time.clock instead, although I'm sure it has its issues as well. This is what I get with clock: [cbarker at waves junk]$ python numtst.py 20.0.0 0.440 0.160 0.140 0.140 There's not much going on on my machine right now, so little difference. -Chris -- Christopher Barker, Ph.D. ChrisHBarker at home.net --- --- --- http://members.home.net/barkerlohmann ---@@ -----@@ -----@@ ------@@@ ------@@@ ------@@@ Oil Spill Modeling ------ @ ------ @ ------ @ Water Resources Engineering ------- --------- -------- Coastal and Fluvial Hydrodynamics -------------------------------------- ------------------------------------------------------------------------ From tim.hochberg at ieee.org Tue Jun 19 14:29:25 2001 From: tim.hochberg at ieee.org (Tim Hochberg) Date: Tue, 19 Jun 2001 11:29:25 -0700 Subject: [Numpy-discussion] speed problem with Numeric sort References: <3B2F84C1.BB1608C6@home.net> Message-ID: <14cf01c0f8ed$c4cb9db0$87740918@cx781526b> Chris Barker writes: [He doesn't get a slowdown with Numeric 20.0.0 on Linux] > So whatever it was may have been fixed (or be a strange platform > dependence). 
[SNIP] It must just be the cleverness of your platform's qsort. Numeric delegates sorting to qsort and running qsort on Windows 2000 with Numeric 20.0.0 I get similar values to Berthold's. Also some rather naive calculations for the complexity of quicksort on a list of n items with k distinct values give me O((log(n/k) + (n/k)) * n). That's probably not exactly right, but it matches the timings I get pretty well. -tim From chrishbarker at home.net Tue Jun 19 15:10:48 2001 From: chrishbarker at home.net (Chris Barker) Date: Tue, 19 Jun 2001 12:10:48 -0700 Subject: [Numpy-discussion] Release notes bug? References: <3B2F84C1.BB1608C6@home.net> <14cf01c0f8ed$c4cb9db0$87740918@cx781526b> Message-ID: <3B2FA3B8.E8F7BEBE@home.net> I just took a look at the release notes for version 20 at: http://sourceforge.net/project/shownotes.php?release_id=31875 And it says: Release Name: 20 Notes: Requires Python 2.0 Shouldn't that be 2.1? or 2.0 or greater? -Chris -- Christopher Barker, Ph.D. ChrisHBarker at home.net --- --- --- http://members.home.net/barkerlohmann ---@@ -----@@ -----@@ ------@@@ ------@@@ ------@@@ Oil Spill Modeling ------ @ ------ @ ------ @ Water Resources Engineering ------- --------- -------- Coastal and Fluvial Hydrodynamics -------------------------------------- ------------------------------------------------------------------------ From vanandel at atd.ucar.edu Thu Jun 28 18:41:33 2001 From: vanandel at atd.ucar.edu (Joe Van Andel) Date: Thu, 28 Jun 2001 16:41:33 -0600 Subject: [Numpy-discussion] MA : assignment to slices doesn't work like Numeric. Message-ID: <3B3BB29D.A932FF05@atd.ucar.edu> I retrieved the latest MA from CVS. I've noticed that assigning to a slice doesn't work the same as Numeric. 
Here's a simple test program: -------------------------------- from MA import * #from Numeric import * numBeams,numGates = (5,4) result = ones((numBeams, numGates),'f') * -327.68 print 'result = ', result t1 = ones((numGates,),'f') t2 = 2* ones((numGates,),'f') result[0] = t1 result[1][:] = t2 print 'result = ', result ----------------------------------------- Output using 'MA': result = [[-327.68,-327.68,-327.68,-327.68,] [-327.68,-327.68,-327.68,-327.68,] [-327.68,-327.68,-327.68,-327.68,] [-327.68,-327.68,-327.68,-327.68,] [-327.68,-327.68,-327.68,-327.68,]] result = [[ 1. , 1. , 1. , 1. ,] [-327.68,-327.68,-327.68,-327.68,] [-327.68,-327.68,-327.68,-327.68,] [-327.68,-327.68,-327.68,-327.68,] [-327.68,-327.68,-327.68,-327.68,]] However, if I use Numeric, rather than MA, I get: result = [[-327.68 -327.68 -327.68 -327.68] [-327.68 -327.68 -327.68 -327.68] [-327.68 -327.68 -327.68 -327.68] [-327.68 -327.68 -327.68 -327.68] [-327.68 -327.68 -327.68 -327.68]] result = [[ 1. 1. 1. 1. ] [ 2. 2. 2. 2. ] [-327.68 -327.68 -327.68 -327.68] [-327.68 -327.68 -327.68 -327.68] [-327.68 -327.68 -327.68 -327.68]] So, a[x][:] = my_array doesn't seem to work using 'MA'. -- Joe VanAndel National Center for Atmospheric Research http://www.atd.ucar.edu/~vanandel/ Internet: vanandel at ucar.edu From paul at pfdubois.com Fri Jun 15 19:34:35 2001 From: paul at pfdubois.com (Paul F. Dubois) Date: Fri, 15 Jun 2001 16:34:35 -0700 Subject: [Numpy-discussion] Experimental version of MA.py Message-ID: The version of MA.py checked in today supports more of the Numeric API. If you are a heavy MA user you may wish to check it out. Due to other commitments I will be unavailable for further work and testing on it for about 3 weeks. I attach the file for your convenience. To try it, drop it in the site-packages/MA directory in place of your current one. I also gave MA a new test routine very close to the new one for Numeric. 
-------------- next part -------------- An embedded and charset-unspecified text was scrubbed... Name: MA.py URL: -------------- next part -------------- An embedded and charset-unspecified text was scrubbed... Name: changes.txt URL: From tavis at calrudd.com Tue Jun 5 14:36:03 2001 From: tavis at calrudd.com (Tavis Rudd) Date: Tue, 5 Jun 2001 11:36:03 -0700 Subject: [Numpy-discussion] segfault when pickling PyObject arrays In-Reply-To: <01060511162000.17252@almanac> References: <01060510341000.01305@lucy> <01060511162000.17252@almanac> Message-ID: <01060511360301.01305@lucy> Paul, I just installed 20.1.0b1 and got the same segfault. 
I'm using SuSE 6.4. Note that if I dump it and load it from a single process it works fine. The error only occurs when I try to load it from a separate process. Tavis On Tuesday 05 June 2001 11:14, Paul F. Dubois wrote: > Travis: > Works for me ....using either dump or dumps, load or > loads I used Numeric 20.1.0b1 / Python 2.1 / RedHat 6.2 > On Tue, 05 Jun 2001, Tavis Rudd wrote: > > Hi, > > I've been having difficultly pickling arrays with the > > type PyObject using Numeric. I haven't tried it with > > MA but I assume the same problem exists. > > > > This script works > > ===================================== > > from cPickle import dump, load > > from Numeric import array, PyObject > > > > def pickleIt(obj, fileName): > > fp = open(fileName, 'w') > > dump(obj, fp) > > fp.close > > > > def loadIt(fileName): > > fp = open(fileName, 'r') > > obj = load(fp) > > fp.close() > > return obj > > > > a = array(['abc', 'def', 'ghi'], PyObject) > > pickleIt(a, 'test.pickle') > > > > This script segfaults > > ==================================== > > # ... same imports and func defs as above > > b = loadIt() > > print b > > > > ==================================== > > > > I first noticed this when trying to pickle arrays > > constructed from lists of mx.DateTime objects. > > > > Numeric 19.1.0 > > Python 2.1 final > > Linux 2.2.18 > > > > Is this a reproduceable bug or something unique to my > > setup? 
> > Tavis > > > > _______________________________________________ > > Numpy-discussion mailing list > > Numpy-discussion at lists.sourceforge.net > > http://lists.sourceforge.net/lists/listinfo/numpy-discu > >ssion > > _______________________________________________ > Numpy-discussion mailing list > Numpy-discussion at lists.sourceforge.net > http://lists.sourceforge.net/lists/listinfo/numpy-discuss >ion From tavis at calrudd.com Tue Jun 5 14:52:57 2001 From: tavis at calrudd.com (Tavis Rudd) Date: Tue, 5 Jun 2001 11:52:57 -0700 Subject: [Numpy-discussion] segfault when pickling PyObject arrays In-Reply-To: <01060511360301.01305@lucy> References: <01060510341000.01305@lucy> <01060511162000.17252@almanac> <01060511360301.01305@lucy> Message-ID: <01060511525702.01305@lucy> oops, there's a typo in that test script I posted; it should be b = loadIt('test.pickle') instead of > > > This script segfaults > > > ==================================== > > > # ... same imports and func defs as above > > > b = loadIt() > > > print b > > > > > > ==================================== From vanandel at atd.ucar.edu Wed Jun 6 12:50:50 2001 From: vanandel at atd.ucar.edu (Joe Van Andel) Date: Wed, 06 Jun 2001 10:50:50 -0600 Subject: [Numpy-discussion] math range error with Masked Array Message-ID: <3B1E5F6A.59C6558@atd.ucar.edu> I'm trying to use the MA package for numeric computations. Unfortunately, attempting to construct a masked array sometimes fails: masked_values(values, missingValue, savespace=1) File "/usr/lib/python2.1/site-packages/MA/MA.py", line 1299, in masked_values m = Numeric.less_equal(abs(d-value), atol+rtol*abs(value)) OverflowError: math range error The odd thing is that the floating point calculations that produced the input Numeric array didn't cause a math range error, but MA's attempt to find the 'missing' values does cause a range error. When I switched to Python2.1, I had to find and fix several overflow problems that didn't cause exceptions under Python1.5. 
For example, I had to use a "protected" exponentiation routine to avoid overflow errors: MIN_EXP = -745 MAX_EXP = 709 def ProtExp(a): """ Protected Exponentiation calculation. Avoid Overflow error on large negative or positive arguments """ min_a = choose(less(a, MIN_EXP), (a, MIN_EXP)) return exp(choose(greater(min_a, MAX_EXP), (min_a, MAX_EXP)) ) ------------------- I'm concerned that the math exception handling for Python2.1 under x86 Linux makes it hard to get my work done. Any ideas on how to fix this error in MA? (I already tried masked_values(values, missingValue, rtol=1e-2,atol=1.e-4, savespace=1), which didn't help.) -- Joe VanAndel National Center for Atmospheric Research http://www.atd.ucar.edu/~vanandel/ Internet: vanandel at ucar.edu From marimont at nxpdata.com Wed Jun 6 13:49:28 2001 From: marimont at nxpdata.com (David H. Marimont) Date: Wed, 06 Jun 2001 10:49:28 -0700 Subject: [Numpy-discussion] undefined symbols when importing lapack Message-ID: <3B1E6D28.3CC8D443@nxpdata.com> I just compiled and installed Numeric 20.1.0b1 using lapack and blas libraries. When I tried to import lapack_lite (after importing Numeric), I got this error: Traceback (most recent call last): File "<stdin>", line 1, in ? ImportError: /usr/lib/liblapack.so.3: undefined symbol: e_wsfe I'm using Python 2.1 on RH 7.1. I've had this problem before and have even seen postings to this list about related problems. But the solutions posted were over my head, so I've never been able to use the python interface to the lapack and blas libraries, which I really need. Does anyone have any advice, preferably pitched to someone who has limited compilation skills (i.e. at the "configure, make, make install" level)? Thanks. David Marimont From cookedm at physics.mcmaster.ca Wed Jun 6 14:06:41 2001 From: cookedm at physics.mcmaster.ca (David M. 
Cooke) Date: 06 Jun 2001 14:06:41 -0400 Subject: [Numpy-discussion] undefined symbols when importing lapack In-Reply-To: <3B1E6D28.3CC8D443@nxpdata.com> References: <3B1E6D28.3CC8D443@nxpdata.com> Message-ID: At some point, "David H. Marimont" wrote: > I just compiled and installed Numeric 20.1.0b1 using lapack > and blas libraries. When I tried to import lapack_lite (after > importing Numeric), I got this error: > > Traceback (most recent call last): > File "<stdin>", line 1, in ? > ImportError: /usr/lib/liblapack.so.3: undefined symbol: e_wsfe > > I'm using Python 2.1 on RH 7.1. > > I've had this problem before and have even seen postings to this list > about related problems. But the solutions posted were over my head, so I've > never been able to use the python interface to the lapack and blas libraries, > which I really need. Does anyone have any advice, preferably pitched to > someone who has limited compilation skills (i.e. at the "configure, > make, make install" level)? > > Thanks. You have to compile in the g2c library. For RH 7.1, add the path '/usr/lib/gcc-lib/i386-redhat-linux/2.96/' to library_dirs_list in setup.py, and 'g2c' in libraries_list. So the appropriate lines in setup.py will look like: # delete all but the first one in this list if using your own LAPACK/BLAS sourcelist = ['Src/lapack_litemodule.c', # 'Src/blas_lite.c', # 'Src/f2c_lite.c', # 'Src/zlapack_lite.c', # 'Src/dlapack_lite.c' ] # set these to use your own BLAS library_dirs_list = ['/usr/local/lib', '/usr/lib/gcc-lib/i386-redhat-linux/2.96/'] libraries_list = ['lapack', 'blas', 'g2c'] If you're compiling on Debian, I don't think you need to add the path (but you need 'g2c'). You need g2c because lapack and blas were compiled from Fortran using g77, and so they depend on routines that implement some of the Fortran statements. -- |>|\/|< /--------------------------------------------------------------------------\ |David M. 
Cooke |cookedm at mcmaster.ca From marimont at nxpdata.com Wed Jun 6 14:32:51 2001 From: marimont at nxpdata.com (David H. Marimont) Date: Wed, 06 Jun 2001 11:32:51 -0700 Subject: [Numpy-discussion] undefined symbols when importing lapack References: <3B1E6D28.3CC8D443@nxpdata.com> Message-ID: <3B1E7753.36A250FC@nxpdata.com> Thanks, David, that worked perfectly -- I can now import lapack_lite without any errors. Now I need to know how to call lapack functions aside from the ones that come packaged with lapack_lite (dgeev, dgelss, dgelss, dgesv, dgesvd, dgetrf, dsyev, zgelss, zgesv, zgesvd, zgetrf, and zheev). I found these via inspect.getmembers(lapack_lite). Where do all the other lapack functions live? And is there some way for me to determine that automatically? Thanks. David "David M. Cooke" wrote: > > At some point, "David H. Marimont" wrote: > > > I just compiled and installed Numeric 20.1.0b1 using lapack > > and blas libraries. When I tried to import lapack_lite (after > > importing Numeric), I got this error: > > > > Traceback (most recent call last): > > File "<stdin>", line 1, in ? > > ImportError: /usr/lib/liblapack.so.3: undefined symbol: e_wsfe > > > > I'm using Python 2.1 on RH 7.1. > > > > I've had this problem before and have even seen postings to this list > > about related problems. But the solutions posted were over my head, so I've > > never been able to use the python interface to the lapack and blas libraries, > > which I really need. Does anyone have any advice, preferably pitched to > > someone who has limited compilation skills (i.e. at the "configure, > > make, make install" level)? > > > > Thanks. > > You have to compile in the g2c library. For RH 7.1, add the path > '/usr/lib/gcc-lib/i386-redhat-linux/2.96/' to library_dirs_list in > setup.py, and 'g2c' in libraries_list. 
> > So the appropiate lines in setup.py will look like: > > # delete all but the first one in this list if using your own LAPACK/BLAS > sourcelist = ['Src/lapack_litemodule.c', > # 'Src/blas_lite.c', > # 'Src/f2c_lite.c', > # 'Src/zlapack_lite.c', > # 'Src/dlapack_lite.c' > ] > # set these to use your own BLAS > library_dirs_list = ['/usr/local/lib', '/usr/lib/gcc-lib/i386-redhat-linux/2.96/'] > libraries_list = ['lapack', 'blas', 'g2c'] > > If you're compiling on Debian, I don't think you need to add the path > (but you need 'g2c'). > > You need g2c because lapack and blas were compiled from Fortran using > g77, and so they depend on routines that implement some of the Fortran > statements. > > -- > |>|\/|< > /--------------------------------------------------------------------------\ > |David M. Cooke > |cookedm at mcmaster.ca > > _______________________________________________ > Numpy-discussion mailing list > Numpy-discussion at lists.sourceforge.net > http://lists.sourceforge.net/lists/listinfo/numpy-discussion From cookedm at physics.mcmaster.ca Wed Jun 6 14:57:04 2001 From: cookedm at physics.mcmaster.ca (David M. Cooke) Date: 06 Jun 2001 14:57:04 -0400 Subject: [Numpy-discussion] undefined symbols when importing lapack In-Reply-To: <3B1E7753.36A250FC@nxpdata.com> References: <3B1E6D28.3CC8D443@nxpdata.com> <3B1E7753.36A250FC@nxpdata.com> Message-ID: At some point, "David H. Marimont" wrote: > Thanks, David, that worked perfectly -- I can now import lapack_lite > without any errors. > > Now I need to now how to call lapack functions aside from the ones > that come packaged with lapack_lite (dgeev, dgelss, dgelss, dgesv, > dgesvd, dgetrf, dsyev, zgelss, zgesv,z gesvd, zgetff, and zheev). > I found these via inspect.getmembers(lapack_lite). Where do all > the other lapack functions live? And is there some way for me to > determine that automatically? > > Thanks. You want PyLapack. 
See a previous message at http://www.geocrawler.com/archives/3/1329/2000/4/0/3616954/ -- |>|\/|< /--------------------------------------------------------------------------\ |David M. Cooke |cookedm at mcmaster.ca From karshi.hasanov at utoronto.ca Wed Jun 6 18:46:02 2001 From: karshi.hasanov at utoronto.ca (Karshi Hasanov) Date: Wed, 6 Jun 2001 17:46:02 -0500 Subject: [Numpy-discussion] vtkStructuredPoints Message-ID: <01060617460200.01082@cr322577-a> Hi, I want to build a *.vtk structured data file from an array A[i,j,k] which has vector attributes. What's the right (or best) way of doing it using Python? I do have the VTK User's Guide Book, but it didn't tell me much. Thanks From paul at pfdubois.com Wed Jun 6 18:51:56 2001 From: paul at pfdubois.com (Paul F. Dubois) Date: Wed, 6 Jun 2001 15:51:56 -0700 Subject: [Numpy-discussion] 20.1.0b2 Message-ID: 20.1.0b2 is now available in tar.gz, .exe, .zip formats. From Robert.Denham at dnr.qld.gov.au Wed Jun 6 23:58:33 2001 From: Robert.Denham at dnr.qld.gov.au (Robert.Denham at dnr.qld.gov.au) Date: Thu, 7 Jun 2001 13:58:33 +1000 Subject: [Numpy-Discussion] 3d interpolation Message-ID: I have a series of x,y,z irregular data points. I would like to create a 2-d array (surface) from this data, each cell in the array being an interpolated value based on the nearby z values. I was wondering if anyone had any experience or suggestions under Python? The interpolation algorithm needn't be elaborate; kriging is definitely overkill. I was thinking more along the lines of splines or even inverse distance weighting. Thanks Robert Denham Department of Natural Resources Queensland, Australia ************************************************************************ The information in this e-mail together with any attachments is intended only for the person or entity to which it is addressed and may contain confidential and/or privileged material.
Any form of review, disclosure, modification, distribution and/or publication of this e-mail message is prohibited. If you have received this message in error, you are asked to inform the sender as quickly as possible and delete this message and any copies of this message from your computer and/or your computer system network. ************************************************************************ From pearu at cens.ioc.ee Thu Jun 7 04:06:27 2001 From: pearu at cens.ioc.ee (Pearu Peterson) Date: Thu, 7 Jun 2001 10:06:27 +0200 (EET) Subject: [Numpy-discussion] vtkStructuredPoints In-Reply-To: <01060617460200.01082@cr322577-a> Message-ID: On Wed, 6 Jun 2001, Karshi Hasanov wrote: > I wanna build *.vtk structured data file from an array A[i,j,k] which has > a vector attributes. What's the right( or best) way of doing it using python? > I do have the VTK User's Guide Book, but it didn't tell me much. > Thanks Check out PyVTK: http://cens.ioc.ee/projects/pyvtk/ Using PyVTK you can create the data file as follows:

from pyvtk import *
VtkData(StructuredPoints([n1, n2, n3]),
        PointData(Vectors(A))).tofile('arr.vtk')

where A is an n1 x n2 x n3 array of 3-sequences. Pearu From paul at pfdubois.com Thu Jun 7 12:26:22 2001 From: paul at pfdubois.com (Paul F. Dubois) Date: Thu, 7 Jun 2001 09:26:22 -0700 Subject: [Numpy-Discussion] 3d interpolation In-Reply-To: Message-ID: Try this and let me know if it works for you. It implements Hardy's multiquadric. Note the caution on the number of input points. This algorithm does a really spiffy job usually. Try the default rsq first. -- Paul -----Original Message----- From: numpy-discussion-admin at lists.sourceforge.net [mailto:numpy-discussion-admin at lists.sourceforge.net]On Behalf Of Robert.Denham at dnr.qld.gov.au Sent: Wednesday, June 06, 2001 8:59 PM To: numpy-discussion at lists.sourceforge.net Subject: [Numpy-Discussion] 3d interpolation I have a series of x,y,z irregular data points.
I would like to create a 2-d array (surface) from this data, each cell in the array being an interpolated value based on the nearby z values. I was wondering if anyone had any experience or suggestions under python? The interpolation algorithm needn't be elaborate, kriging is definite overkill, I was thinking more along the lines of splines or even inverse distance weighting. Thanks Robert Denham Department of Natural Resources Queensland, Australia -------------- next part -------------- An embedded and charset-unspecified text was scrubbed... Name: multiquadric.py URL: From vanandel at atd.ucar.edu Fri Jun 8 15:58:58 2001 From: vanandel at atd.ucar.edu (Joe Van Andel) Date: Fri, 08 Jun 2001 13:58:58 -0600 Subject: [Numpy-discussion] math range error with Masked Array References: Message-ID: <3B212E82.B6169CA1@atd.ucar.edu> "Paul F. Dubois" wrote: > > Is the array single or double precision? Does it have the spacesaver > attribute set? The array is single precision, with the spacesaver attribute set. > > Is it possible the data had bad values in it that were something other than > the missing value? What was the missing value?
Yes, I finally found that the array contained some 'NaN' values, because of an error in my earlier calculation. So, at this point, MA works for me. Thanks for your help. -- Joe VanAndel National Center for Atmospheric Research http://www.atd.ucar.edu/~vanandel/ Internet: vanandel at ucar.edu From hinsen at cnrs-orleans.fr Tue Jun 12 05:54:41 2001 From: hinsen at cnrs-orleans.fr (Konrad Hinsen) Date: Tue, 12 Jun 2001 11:54:41 +0200 Subject: [Numpy-discussion] underflow array([2.9e-131])**3 In-Reply-To: (message from Jean-Bernard Addor on Thu, 31 May 2001 18:59:54 -0400 (EDT)) References: Message-ID: <200106120954.LAA05154@chinon.cnrs-orleans.fr> > Do anyone know how to disable underflow exception errors in Numeric? The problem is not in Numeric; it is the C library that decides whether underflows should be considered errors or not. The Python interpreter has some workarounds, but Numeric does not. I had the same problems you describe under AIX, where I solved them by linking to a different version of the math library. But I don't think there is a platform-independent solution. Konrad. -- ------------------------------------------------------------------------------- Konrad Hinsen | E-Mail: hinsen at cnrs-orleans.fr Centre de Biophysique Moleculaire (CNRS) | Tel.: +33-2.38.25.56.24 Rue Charles Sadron | Fax: +33-2.38.63.15.17 45071 Orleans Cedex 2 | Deutsch/Esperanto/English/ France | Nederlands/Francais ------------------------------------------------------------------------------- From hinsen at cnrs-orleans.fr Tue Jun 12 05:56:38 2001 From: hinsen at cnrs-orleans.fr (Konrad Hinsen) Date: Tue, 12 Jun 2001 11:56:38 +0200 Subject: [Numpy-discussion] PS: underflow array([2.9e-131+0j])**3 (why complex give a different result? bug?)
In-Reply-To: (message from Jean-Bernard Addor on Thu, 31 May 2001 19:09:03 -0400 (EDT)) References: Message-ID: <200106120956.LAA05157@chinon.cnrs-orleans.fr> > >>> Numeric.array([2.9e-131])**3 > Traceback (most recent call last): > File "", line 1, in ? > OverflowError: math range error > >>> 2.9e-131**3 > 0.0 > >>> Numeric.array([2.9e-131+0j])**3 > array([ 0.+0.j]) > > Now I have a quick solution for my problem, but I have the impression that > this is a bug, at least I don't understand the underlying logic. If > somebody can explain it to me? The power calculation routines for float and complex are completely different, I suppose the absence of underflow reporting in the latter is just a side effect. Konrad. -- ------------------------------------------------------------------------------- Konrad Hinsen | E-Mail: hinsen at cnrs-orleans.fr Centre de Biophysique Moleculaire (CNRS) | Tel.: +33-2.38.25.56.24 Rue Charles Sadron | Fax: +33-2.38.63.15.17 45071 Orleans Cedex 2 | Deutsch/Esperanto/English/ France | Nederlands/Francais ------------------------------------------------------------------------------- From vanandel at atd.ucar.edu Tue Jun 12 20:20:10 2001 From: vanandel at atd.ucar.edu (Joe Van Andel) Date: Tue, 12 Jun 2001 18:20:10 -0600 Subject: [Numpy-discussion] performance comparison of C++ vs Numeric (MA) operations. Message-ID: <3B26B1BA.37988FBF@atd.ucar.edu> I was curious about the relative performance of C++ vs Numeric Python, for operations on arrays of roughly 400,000 array elements. I built a simple array single precision multiplication function in C++, that performs an element by element multiply, checking whether each element is "valid" or "missing data". Then, for comparision, I wrote a similar multiplication routine, using the Masked Array (MA) package of Numeric Python. I compiled Numeric Python (20.1.0b2) with '-O3', by modifying setup.py to contain lines like OPTIMIZE=['-O3'] ext_modules = . . 
Extension('multiarray', ['Src/multiarraymodule.c'],
          extra_compile_args=OPTIMIZE),

---------------------------------------
On an 800 MHz dual-processor Dell Linux box, using gcc 2.95.3,

Software                 Performance
------------------------------------------------
Numeric Python           5.0e6 multiplies/second
Numeric Python -O3       6.1e6 multiplies/second
C++                     10.3e6 multiplies/second
C++ -O3                 10.3e6 multiplies/second

(I tried using "plain" Numeric arrays, rather than Masked arrays, and it didn't seem to make much difference.) Has anyone else benchmarked the relative performance of C/C++ vs Numeric Python? Does anyone know of other optimizations to Numeric Python that could be implemented? I know a more realistic benchmark would include I/O, which might tend to reduce the apparent difference in performance. I've attached the benchmark modules, in case someone would like to examine them. -- Joe VanAndel National Center for Atmospheric Research http://www.atd.ucar.edu/~vanandel/ Internet: vanandel at ucar.edu
-------------- next part --------------
import sys
# test harness for Masked array performance
from MA import *
#from Numeric import *
from Perp.util.TimerUtility import TimerUtility

def mult_test(a1, a2):
    res = a1 * a2

if __name__ == '__main__':
    repeat = 100
    gates = 1000
    beams = 370
    if len(sys.argv) > 1:
        repeat = int(sys.argv[1])
    t1 = ones((beams, gates), Float)
    a1 = masked_values(t1, -327.68)
    a2 = masked_values(t1, -327.68)
    i = 0
    tu = TimerUtility()
    while (i < repeat):
        i = i+1
        res = mult_test(a1, a2)
    elapsed = tu.elapsed()
    print 'completed %d in %f seconds' % (repeat, elapsed)
    cntMultiply = repeat*gates*beams
    print '%8.3g checked multiplies/second' % (cntMultiply/elapsed)
-------------- next part --------------
#include <iostream.h>
#include <math.h>
#include "PerfTimer.h"

typedef float *FLOAT_PTR;

extern void mult_test(FLOAT_PTR *a1, FLOAT_PTR *a2, FLOAT_PTR *resp,
                      float missingValue);

const int gates = 1000;
const int beams = 370;

int main(int argc, char *argv[])
{
    int repeat = 100;
    const float missingValue = -327.68;
    if (argc > 1)
        repeat = atoi(argv[1]);

    FLOAT_PTR *a1 = new FLOAT_PTR[beams];
    FLOAT_PTR *a2 = new FLOAT_PTR[beams];
    FLOAT_PTR *res = new FLOAT_PTR[beams];
    // allocate storage for 2d variables
    for (int b = 0; b < beams; ++b) {
        a1[b] = new float[gates];
        a2[b] = new float[gates];
        res[b] = new float[gates];
    }
    PerfTimer pt;
    for (int r = 0; r < repeat; ++r) {
        mult_test(a1, a2, res, missingValue);
    }
    double elapsed = pt.Elapsed();
    double cntMultiply = repeat*gates*beams;
    cout << repeat << " repetitions completed" << endl;
    cout << cntMultiply << " checked multiplies" << endl;
    cout << cntMultiply/elapsed << " checked multiplies/second" << endl;
}

void mult_test(FLOAT_PTR *a1, FLOAT_PTR *a2, FLOAT_PTR *resp,
               float missingValue)
{
    const float atol = 1.e-8;
    const float rtol = 1.0e-5;
    for (int b = 0; b < beams; ++b) {
        for (int g = 0; g < gates; ++g) {
            if (fabs(a1[b][g] - missingValue) < atol + rtol * fabs(missingValue)) {
                resp[b][g] = missingValue;
                continue;
            } else if (fabs(a2[b][g] - missingValue) < atol + rtol * fabs(missingValue)) {
                resp[b][g] = missingValue;
            } else {
                resp[b][g] = a1[b][g] * a2[b][g];
            }
        } // for gates
    } // for beams
}
-------------- next part --------------
#include <time.h>

class PerfTimer {
public:
    // constructor, starts timing
    PerfTimer();
    // reset starting time
    void Start();
    // compute elapsed time since last construction or Start()
    double Elapsed();
private:
    struct timespec startTime_;
};
-------------- next part --------------
#include "PerfTimer.h"

PerfTimer::PerfTimer()
{
    Start();
}

void PerfTimer::Start()
{
    clock_gettime(CLOCK_REALTIME, &startTime_);
}

double PerfTimer::Elapsed()
{
    struct timespec stopTime;
    clock_gettime(CLOCK_REALTIME, &stopTime);
    return (stopTime.tv_sec + stopTime.tv_nsec/1.0e9 -
            (startTime_.tv_sec + startTime_.tv_nsec/1.0e9));
}
-------------- next part --------------
#CCFLAGS=-O3
CCFLAGS=
CXXFLAGS=${CCFLAGS}
all: arrayperf
arrayperf: arrayperf.cc
	g++ -o arrayperf arrayperf.cc PerfTimer.cc -lm -lrt
-------------- next part --------------
#
import time

class TimerUtility:
    """ Timer/Utility for performance measurement """
    def __init__(self, verbose=1, useElapsed=0):
        """ ctor: can suppress printing by setting **verbose** to 0 """
        # if we're timing elapsed events, including subprocesses,
        # then use time.time()
        if useElapsed:
            self.__timeFunc = time.time
        else:
            self.__timeFunc = time.clock
        self.__startTime = self.__timeFunc()
        self.__lastTime = self.__startTime
        self.__verbose = verbose

    def elapsed(self, msg='Elapsed '):
        """ print elapsed time since instance creation or last **elapsed()** call """
        current = self.__timeFunc()
        delta = (current - self.__lastTime)
        if (self.__verbose):
            print '%s : %5.2f' % (msg, float(delta))
        self.__lastTime = current
        return delta

    def rate(self, count, msg='rate'):
        """ print elapsed time and rate since instance creation or last **elapsed()** call """
        current = self.__timeFunc()
        delta = (current - self.__lastTime)
        if (self.__verbose):
            print '%s : %5.2f : %6.2f' % (msg, float(delta), float(count)/float(delta))
        self.__lastTime = current
        return delta

    def total(self, msg='Total '):
        """ print total time since TimerUtility was created """
        current = self.__timeFunc()
        diff = (current - self.__startTime)
        if (self.__verbose):
            print '%s : %5.2f' % (msg, diff)
        return diff
From paul at pfdubois.com Tue Jun 12 21:08:46 2001 From: paul at pfdubois.com (Paul F. Dubois) Date: Tue, 12 Jun 2001 18:08:46 -0700 Subject: [Numpy-discussion] performance comparison of C++ vs Numeric (MA) operations. In-Reply-To: <3B26B1BA.37988FBF@atd.ucar.edu> Message-ID: I have a timing benchmark for MA that computes the ratio MA/Numeric for two cases: 1. there is actually no mask 2. there is a mask For N=50,000 these ratios are usually around 1.3 and 1.8 respectively. It makes sense in the second case that the number might be around 2 since you have to pass through the mask data as well, even if it is only bytes. In short, there is this much overhead to MA.
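[Editor's note: the MA/Numeric overhead ratio Paul describes can be sketched in a few lines. Numeric and its MA package no longer build on modern Pythons, so this sketch uses numpy and numpy.ma (their successors) purely as stand-ins; the function name `overhead_ratio` and the sizes chosen are illustrative, not from his benchmark, and the ~1.3-1.8 figures quoted above are from his setup, not reproduced here.]

```python
# Hedged sketch of the MA/Numeric timing ratio for the "no mask" case,
# using numpy/numpy.ma as stand-ins for the obsolete Numeric/MA.
import time
import numpy as np

def overhead_ratio(n=50000, repeat=200):
    """Time elementwise multiply on a plain array vs. a masked array
    with no masked entries, and return masked_time / plain_time."""
    plain = np.ones(n)
    # masked_values with a value absent from the data leaves mask all-False
    masked = np.ma.masked_values(np.ones(n), -327.68)

    def bench(a):
        t0 = time.perf_counter()
        for _ in range(repeat):
            a * a
        return time.perf_counter() - t0

    return bench(masked) / bench(plain)

if __name__ == '__main__':
    print("masked/plain multiply time ratio: %.2f" % overhead_ratio())
```

The ratio you see depends on array size and hardware; the point is only that the mask bookkeeping is a fixed extra pass over byte data, matching Paul's explanation above.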
If you got MA/C++ = 1.67 it would indicate Numpy/C++ comparable. The tests Jim did when he first wrote it were about 10% worse than C. Your C++ uses a special value instead of a mask array which may mean that you traded space for CPU time, and using large arrays like that maybe that causes some page faults (?) Anyway you're comparing apples and oranges a little. Anyway, my point is this is probably an MA issue rather than a Numpy issue. However, please note that I did not (yet) do any of the normal profiling and testing that one would do to speed MA up, such as putting key parts in C. This is just not an issue for me right now. -----Original Message----- From: numpy-discussion-admin at lists.sourceforge.net [mailto:numpy-discussion-admin at lists.sourceforge.net]On Behalf Of Joe Van Andel Sent: Tuesday, June 12, 2001 5:20 PM To: numpy-discussion Subject: [Numpy-discussion] performance comparison of C++ vs Numeric (MA) operations. I was curious about the relative performance of C++ vs Numeric Python, for operations on arrays of roughly 400,000 array elements. I built a simple array single precision multiplication function in C++, that performs an element by element multiply, checking whether each element is "valid" or "missing data". Then, for comparision, I wrote a similar multiplication routine, using the Masked Array (MA) package of Numeric Python. I compiled Numeric Python (20.1.0b2) with '-O3', by modifying setup.py to contain lines like OPTIMIZE=['-O3'] ext_modules = . . 
Extension('multiarray', ['Src/multiarraymodule.c'], extra_compile_args=OPTIMIZE ), --------------------------------------- On an 800 Mhz dual processor Dell Linux box, using gcc 2.95.3, Software Performance ------------------------------------------------ Numeric Python 5.0e6 multiplies/second Numeric Python -03 6.1e6 multiplies/second C++ 10.3e6 multiplies/second C++ -O3 10.3e6 multiplies/second (I tried using "plain" Numeric arrays, rather than Masked arrays, and it didn't seem to make much difference.) Has anyone else benchmarked the relative performance of C/C++ vs Numeric Python? Does anyone know of other optimizations to Numeric Python that could be implemented? I know a more realistic benchmark would include I/O, which might tend to reduce the apparent difference in performance. I've attached the benchmark modules, in case someone would like to examine them. -- Joe VanAndel National Center for Atmospheric Research http://www.atd.ucar.edu/~vanandel/ Internet: vanandel at ucar.edu From paul at pfdubois.com Tue Jun 12 21:23:13 2001 From: paul at pfdubois.com (Paul F. Dubois) Date: Tue, 12 Jun 2001 18:23:13 -0700 Subject: [Numpy-discussion] performance comparison of C++ vs Numeric (MA) operations. In-Reply-To: <3B26B1BA.37988FBF@atd.ucar.edu> Message-ID: PS my test was on double precision, failed to notice that too. -----Original Message----- From: numpy-discussion-admin at lists.sourceforge.net [mailto:numpy-discussion-admin at lists.sourceforge.net]On Behalf Of Joe Van Andel Sent: Tuesday, June 12, 2001 5:20 PM To: numpy-discussion Subject: [Numpy-discussion] performance comparison of C++ vs Numeric (MA) operations. I was curious about the relative performance of C++ vs Numeric Python, for operations on arrays of roughly 400,000 array elements. I built a simple array single precision multiplication function in C++, that performs an element by element multiply, checking whether each element is "valid" or "missing data". 
Then, for comparision, I wrote a similar multiplication routine, using the Masked Array (MA) package of Numeric Python. I compiled Numeric Python (20.1.0b2) with '-O3', by modifying setup.py to contain lines like OPTIMIZE=['-O3'] ext_modules = . . Extension('multiarray', ['Src/multiarraymodule.c'], extra_compile_args=OPTIMIZE ), --------------------------------------- On an 800 Mhz dual processor Dell Linux box, using gcc 2.95.3, Software Performance ------------------------------------------------ Numeric Python 5.0e6 multiplies/second Numeric Python -03 6.1e6 multiplies/second C++ 10.3e6 multiplies/second C++ -O3 10.3e6 multiplies/second (I tried using "plain" Numeric arrays, rather than Masked arrays, and it didn't seem to make much difference.) Has anyone else benchmarked the relative performance of C/C++ vs Numeric Python? Does anyone know of other optimizations to Numeric Python that could be implemented? I know a more realistic benchmark would include I/O, which might tend to reduce the apparent difference in performance. I've attached the benchmark modules, in case someone would like to examine them. -- Joe VanAndel National Center for Atmospheric Research http://www.atd.ucar.edu/~vanandel/ Internet: vanandel at ucar.edu From chrishbarker at home.net Wed Jun 13 15:24:42 2001 From: chrishbarker at home.net (Chris Barker) Date: Wed, 13 Jun 2001 12:24:42 -0700 Subject: [Numpy-discussion] performance comparison of C++ vs Numeric (MA) operations. References: <3B26B1BA.37988FBF@atd.ucar.edu> Message-ID: <3B27BDFA.30857110@home.net> If I read your C++ right (and I may not have, I'm a C++ novice), you allocated the memory for all three arrays, and then performed your loop. In the Python version, the result array is allocated when the multiplication is performed, so you are allocating and freeing the result array each time in the loop. That may slow things down a little.
In a real application, you are less likely to be re-doing the same computation over and over again, so the allocation would happen only once. You might try something like this, and see if it is any faster (it is more memory efficient). Note also that there is some overhead in function calls in Python, so you may get some speed up if you inline the call to mult_test. You can decide for yourself if this would still be a fair comparison. (Unfortunately, MA doesn't seem to support the third argument to multiply.) My version (I don't have TimerUtility, so I used time.clock instead) got these times:

Your code:
completed 1000 in 99.050000 seconds
3.74e+06 checked multiplies/second
My code:
alternative completed 1000 in 80.070000 seconds
4.62e+06 checked multiplies/second

It did buy you something: here is the code:

#!/usr/bin/env python2.1
import sys
# test harness for Masked array performance
#from MA import *
from Numeric import *
from time import clock

def mult_test(a1, a2):
    res = a1 * a2

if __name__ == '__main__':
    repeat = 100
    gates = 1000
    beams = 370
    if len(sys.argv) > 1:
        repeat = int(sys.argv[1])
    t1 = ones((beams, gates), Float)
    a1 = t1
    a2 = t1
    # a1 = masked_values(t1, -327.68)
    # a2 = masked_values(t1, -327.68)
    i = 0
    start = clock()
    while (i < repeat):
        i = i+1
        res = mult_test(a1, a2)
    elapsed = clock() - start
    print 'completed %d in %f seconds' % (repeat, elapsed)
    cntMultiply = repeat*gates*beams
    print '%8.3g checked multiplies/second' % (cntMultiply/elapsed)
    print

    # alternative:
    res = zeros(a1.shape, Float)
    i = 0
    start = clock()
    while (i < repeat):
        i = i+1
        multiply(a1, a2, res)
    elapsed = clock() - start
    print 'alternative completed %d in %f seconds' % (repeat, elapsed)
    cntMultiply = repeat*gates*beams
    print '%8.3g checked multiplies/second' % (cntMultiply/elapsed)
    print

Another note: calling ones with Float as your type gives you a Python float, which is a C double.
Use 'f' or Float32 to get a C float. I've found on Intel hardware, doubles are just as fast (the FPU used doubles anyway), but they do use more memory, so this could make a difference. -Chris -- Christopher Barker, Ph.D. ChrisHBarker at home.net --- --- --- http://members.home.net/barkerlohmann ---@@ -----@@ -----@@ ------@@@ ------@@@ ------@@@ Oil Spill Modeling ------ @ ------ @ ------ @ Water Resources Engineering ------- --------- -------- Coastal and Fluvial Hydrodynamics -------------------------------------- ------------------------------------------------------------------------ From paul at pfdubois.com Thu Jun 14 13:03:16 2001 From: paul at pfdubois.com (Paul F. Dubois) Date: Thu, 14 Jun 2001 10:03:16 -0700 Subject: [Numpy-discussion] New testing procedures Message-ID: In CVS there is now a file Test/test.py instead of the previous test_items and test_all.py. test.py uses PyUnit, the new testing framework in Python. Developers should be able to add tests much more easily now. From paul at pfdubois.com Fri Jun 15 19:37:34 2001 From: paul at pfdubois.com (Paul F. Dubois) Date: Fri, 15 Jun 2001 16:37:34 -0700 Subject: [Numpy-discussion] MA improvements Message-ID: Heavy MA users may wish to get the latest from CVS to try. It has numerous improvements as detailed in changes.txt. I am sending out this announcement because I won't be available to work on it for three weeks, so if you find it is broken sync back to June 14 until I can fix it. From hoel at germanlloyd.org Tue Jun 19 10:37:38 2001 From: hoel at germanlloyd.org (Berthold =?iso-8859-1?q?H=F6llmann?=) Date: 19 Jun 2001 16:37:38 +0200 Subject: [Numpy-discussion] speed problem with Numeric sort Message-ID: We have a speed problem with Numeric.sort on large arrays with only a few different values. 
Here is my example

-- snip --

>cat numtst.py
import Numeric
print Numeric.__version__

class timer:
    def __init__(self):
        import time
        self.start = time.time()
    def stop(self):
        import time
        print "%.3f" % (time.time() - self.start)

from RandomArray import random
from Numeric import sort, Int

r=random((71400,))
t = timer() ; p=sort(r) ; t.stop()
r=(random((71400,))*70000).astype(Int)
t = timer() ; p=sort(r) ; t.stop()
r=(random((71400,))*70).astype(Int)
t = timer() ; p=sort(r) ; t.stop()
r=(random((71400,))*7).astype(Int)
t = timer() ; p=sort(r) ; t.stop()

16:27 hoel at seeve:hoel 2>python numtst.py
17.3.0
0.185
0.148
2.053
21.668

-- snip --

So the fewer distinct values the array contains, the longer the sorting takes. Is this also the case with newer versions of Numeric (but this is Python 1.5.2)? Why is sorting of these arrays so slow? Thanks Berthold -- email: hoel at GermanLloyd.org ) tel. : +49 (40) 3 61 49 - 73 74 ( C[_] These opinions might be mine, but never those of my employer. From tim.hochberg at ieee.org Tue Jun 19 11:54:21 2001 From: tim.hochberg at ieee.org (Tim Hochberg) Date: Tue, 19 Jun 2001 08:54:21 -0700 Subject: [Numpy-discussion] speed problem with Numeric sort References: Message-ID: <144801c0f8d8$1b4c8d40$87740918@cx781526b> Hi Berthold, I tested your code on Win95 using Numeric 20.0.0 and got essentially the same pattern of times. So, upgrading your version of Numeric is not likely to help. Curious, I checked out the code for sort and found that it just calls qsort from the C library. I suspect that the problem is related to quicksort having bad worst-case behaviour. Not being a computer scientist, I can't tell you under what situations the bad behaviour is triggered, although I know it doesn't like presorted lists. Anyway, if you're using 1D arrays, one workaround would be to use list.sort. Python's sorting routine has, I understand, lots of gimmicks to avoid the problems that quicksort sometimes encounters.
I tried it and this:

r = (random((71400,))*7).astype(Int)
l = r.tolist()
l.sort()
p = array(l)

runs about 40 times faster than this:

r = (random((71400,))*7).astype(Int)
p = sort(r)

If you don't need to convert to and from an array, this approach is 60 times faster. Even if you're dealing with a multidimensional array, this approach (in a loop) might be significantly faster assuming you're sorting along the long axis. It makes one wonder if using the python sort rather than qsort for Numeric.sort would be a profitable change. No time to investigate it right now though. Hope that's useful... -tim

> We have a speed problem with Numeric.sort on large arrays with only a
> few different values. Here is my example
>
> -- snip --
>
> >cat numtst.py
> import Numeric
> print Numeric.__version__
> class timer:
>     def __init__(self):
>         import time
>         self.start = time.time()
>     def stop(self):
>         import time
>         print "%.3f" % (time.time() - self.start)
>
> from RandomArray import random
> from Numeric import sort, Int
>
> r=random((71400,))
> t = timer() ; p=sort(r) ; t.stop()
> r=(random((71400,))*70000).astype(Int)
> t = timer() ; p=sort(r) ; t.stop()
> r=(random((71400,))*70).astype(Int)
> t = timer() ; p=sort(r) ; t.stop()
> r=(random((71400,))*7).astype(Int)
> t = timer() ; p=sort(r) ; t.stop()
> 16:27 hoel at seeve:hoel 2>python numtst.py
> 17.3.0
> 0.185
> 0.148
> 2.053
> 21.668
>
> -- snip --
>
> So the less different values are contained in the array the longer
> takes the sorting. Is this also the case with newer versions of
> Numeric (But this is Python 1.5.2)? Why is sorting of these arrays so
> slow?
>
> Thanks
>
> Berthold
> --
> email: hoel at GermanLloyd.org
> ) tel. : +49 (40) 3 61 49 - 73 74
> (
> C[_] These opinions might be mine, but never those of my employer.
From chrishbarker at home.net Tue Jun 19 12:58:41 2001 From: chrishbarker at home.net (Chris Barker) Date: Tue, 19 Jun 2001 09:58:41 -0700 Subject: [Numpy-discussion] speed problem with Numeric sort References: Message-ID: <3B2F84C1.BB1608C6@home.net> Well, this is what I get with Python 2.1, NumPy 20.0.0, Linux on a 450Mhz PIII

[cbarker at waves junk]$ python numtst.py
20.0.0
0.438
0.154
0.148
0.139

So whatever it was may have been fixed (or be a strange platform dependence). Note, you do want to be a bit careful about using time.time, as it measures real time, so if you have another process hogging resources, it will not be a fair measure. You can use time.clock instead, although I'm sure it has its issues as well. This is what I get with clock:

[cbarker at waves junk]$ python numtst.py
20.0.0
0.440
0.160
0.140
0.140

There's not much going on on my machine right now, so little difference. -Chris -- Christopher Barker, Ph.D. ChrisHBarker at home.net --- --- --- http://members.home.net/barkerlohmann ---@@ -----@@ -----@@ ------@@@ ------@@@ ------@@@ Oil Spill Modeling ------ @ ------ @ ------ @ Water Resources Engineering ------- --------- -------- Coastal and Fluvial Hydrodynamics -------------------------------------- ------------------------------------------------------------------------ From tim.hochberg at ieee.org Tue Jun 19 14:29:25 2001 From: tim.hochberg at ieee.org (Tim Hochberg) Date: Tue, 19 Jun 2001 11:29:25 -0700 Subject: [Numpy-discussion] speed problem with Numeric sort References: <3B2F84C1.BB1608C6@home.net> Message-ID: <14cf01c0f8ed$c4cb9db0$87740918@cx781526b> Chris Barker writes: [He doesn't get a slowdown with Numeric 20.0.0 on Linux] > So whatever it was may have been fixed (or be a strange platform > dependence).
[SNIP]

It must just be the cleverness of your platform's qsort. Numeric
delegates sorting to qsort, and running it on Windows 2000 with Numeric
20.0.0 I get values similar to Berthold's.

Also, some rather naive calculations of the complexity of quicksort on a
list of n items with k distinct values give me O((log(n/k) + (n/k)) * n).
That's probably not exactly right, but it matches the timings I get
pretty well.

-tim

From chrishbarker at home.net Tue Jun 19 15:10:48 2001
From: chrishbarker at home.net (Chris Barker)
Date: Tue, 19 Jun 2001 12:10:48 -0700
Subject: [Numpy-discussion] Release notes bug?
References: <3B2F84C1.BB1608C6@home.net> <14cf01c0f8ed$c4cb9db0$87740918@cx781526b>
Message-ID: <3B2FA3B8.E8F7BEBE@home.net>

I just took a look at the release notes for version 20 at:

http://sourceforge.net/project/shownotes.php?release_id=31875

and it says:

    Release Name: 20
    Notes: Requires Python 2.0

Shouldn't that be 2.1, or 2.0 or greater?

-Chris
--
Christopher Barker, Ph.D.
ChrisHBarker at home.net

From vanandel at atd.ucar.edu Thu Jun 28 18:41:33 2001
From: vanandel at atd.ucar.edu (Joe Van Andel)
Date: Thu, 28 Jun 2001 16:41:33 -0600
Subject: [Numpy-discussion] MA: assignment to slices doesn't work like Numeric
Message-ID: <3B3BB29D.A932FF05@atd.ucar.edu>

I retrieved the latest MA from CVS. I've noticed that assigning to a
slice doesn't work the same as in Numeric.
Here's a simple test program:

--------------------------------
from MA import *
#from Numeric import *

numBeams, numGates = (5, 4)
result = ones((numBeams, numGates), 'f') * -327.68
print 'result = ', result

t1 = ones((numGates,), 'f')
t2 = 2 * ones((numGates,), 'f')
result[0] = t1
result[1][:] = t2
print 'result = ', result
-----------------------------------------

Output using 'MA':

result =  [[-327.68,-327.68,-327.68,-327.68,]
 [-327.68,-327.68,-327.68,-327.68,]
 [-327.68,-327.68,-327.68,-327.68,]
 [-327.68,-327.68,-327.68,-327.68,]
 [-327.68,-327.68,-327.68,-327.68,]]
result =  [[ 1. , 1. , 1. , 1. ,]
 [-327.68,-327.68,-327.68,-327.68,]
 [-327.68,-327.68,-327.68,-327.68,]
 [-327.68,-327.68,-327.68,-327.68,]
 [-327.68,-327.68,-327.68,-327.68,]]

However, if I use Numeric rather than MA, I get:

result =  [[-327.68 -327.68 -327.68 -327.68]
 [-327.68 -327.68 -327.68 -327.68]
 [-327.68 -327.68 -327.68 -327.68]
 [-327.68 -327.68 -327.68 -327.68]
 [-327.68 -327.68 -327.68 -327.68]]
result =  [[ 1.      1.      1.      1.    ]
 [ 2.      2.      2.      2.    ]
 [-327.68 -327.68 -327.68 -327.68]
 [-327.68 -327.68 -327.68 -327.68]
 [-327.68 -327.68 -327.68 -327.68]]

So, a[x][:] = my_array doesn't seem to work using 'MA'.

--
Joe VanAndel
National Center for Atmospheric Research
http://www.atd.ucar.edu/~vanandel/
Internet: vanandel at ucar.edu

From paul at pfdubois.com Fri Jun 15 19:34:35 2001
From: paul at pfdubois.com (Paul F. Dubois)
Date: Fri, 15 Jun 2001 16:34:35 -0700
Subject: [Numpy-discussion] Experimental version of MA.py
Message-ID:

The version of MA.py checked in today supports more of the Numeric API.
If you are a heavy MA user, you may wish to check it out. Due to other
commitments I will be unavailable for further work and testing on it
for about 3 weeks.

I attach the file for your convenience. To try it, drop it into the
site-packages/MA directory in place of your current one. I also gave MA
a new test routine, very close to the new one for Numeric.
-------------- next part --------------
An embedded and charset-unspecified text was scrubbed...
Name: MA.py
URL:
-------------- next part --------------
An embedded and charset-unspecified text was scrubbed...
Name: changes.txt
URL:
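[Editorial sketch] Joe's slice-assignment report earlier in this digest boils down to: the old MA returned a copy from result[1], so the chained assignment result[1][:] = t2 wrote into a temporary and was lost. A minimal sketch of the robust workaround, using numpy.ma as a stand-in for the old MA module (an assumption; the original module predates numpy):

```python
import numpy as np
import numpy.ma as ma  # modern stand-in for the old MA module

numBeams, numGates = 5, 4
result = ma.ones((numBeams, numGates), dtype='f') * -327.68
t2 = 2 * np.ones((numGates,), dtype='f')

# Single-step indexing assigns into the original array regardless of
# whether row indexing returns a view or a copy, so it sidesteps the
# chained-indexing pitfall entirely:
result[1, :] = t2

assert bool((result[1] == 2).all())
```

The design lesson is general: `a[x][:] = v` only works when `a[x]` happens to be a view, while `a[x, :] = v` is a single __setitem__ call on `a` itself and works in both cases.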