From europax at home.com Sat Sep 1 11:58:08 2001 From: europax at home.com (Rob) Date: Sat Sep 1 11:58:08 2001 Subject: [Numpy-discussion] another seemingly unNumpyizeable routine Message-ID: <3B912F97.FA2AB9A9@home.com>

I hate to bother you guys, but this is how I learn. I have had some success using the take() and add.reduce() routines all by myself. However this one seems intractable:

for IEdge in range(0,a.HybrdBoundEdgeNum):
    for JEdge in range(0,a.HybrdBoundEdgeNum):
        AVec[IEdge+a.TotInnerEdgeNum] += a.Cdd[IEdge,JEdge]*XVec[JEdge+a.TotInnerEdgeNum]

## below is my attempt at this, but it doesn't work
## h=a.TotInnerEdgeNum
## for IEdge in range(0,a.HybrdBoundEdgeNum):
##     AVec[IEdge+h] = add.reduce(a.Cdd[IEdge,0:a.HybrdBoundEdgeNum] * XVec[h:(h+a.HybrdBoundEdgeNum)])

I think the problem lies in that add.reduce is using the slice 0:a.HybrdBoundEdgeNum for Cdd, but then has to contend with the XVec slice, which is offset by h. Is there any elegant way to deal with this? This routine is part of a sparse matrix multiply routine. If I could find SparsePy, maybe I could eliminate it altogether.

Thanks, Rob.

-- The Numeric Python EM Project www.members.home.net/europax

From europax at home.com Sat Sep 1 19:54:02 2001 From: europax at home.com (Rob) Date: Sat Sep 1 19:54:02 2001 Subject: [Numpy-discussion] another seemingly unNumpyizeable routine References: <3B912F97.FA2AB9A9@home.com> Message-ID: <3B919EF7.EC7E3C8@home.com>

I figured this out. I step JEdge instead of IEdge. Another big speed boost. Rob.

Rob wrote:
>
> I hate to bother you guys, but this is how I learn. I have had some
> success using the take() and add.reduce() routines all by myself.
> However this one seems intractable:
>
> for IEdge in range(0,a.HybrdBoundEdgeNum):
>     for JEdge in range(0,a.HybrdBoundEdgeNum):
>         AVec[IEdge+a.TotInnerEdgeNum] += a.Cdd[IEdge,JEdge]*XVec[JEdge+a.TotInnerEdgeNum]
>
> ## below is my attempt at this, but it doesn't work
> ## h=a.TotInnerEdgeNum
> ## for IEdge in range(0,a.HybrdBoundEdgeNum):
> ##     AVec[IEdge+h] = add.reduce(a.Cdd[IEdge,0:a.HybrdBoundEdgeNum] * XVec[h:(h+a.HybrdBoundEdgeNum)])
>
> I think the problem lies in that add.reduce is using the slice
> 0:a.HybrdBoundEdgeNum for Cdd, but then has to contend with the XVec
> slice, which is offset by h. Is there any elegant way to deal with this?
> This routine is part of a sparse matrix multiply routine. If I could find
> SparsePy, maybe I could eliminate it altogether.
>
> Thanks, Rob.
>
> --
> The Numeric Python EM Project
>
> www.members.home.net/europax
>
> _______________________________________________
> Numpy-discussion mailing list
> Numpy-discussion at lists.sourceforge.net
> https://lists.sourceforge.net/lists/listinfo/numpy-discussion

-- The Numeric Python EM Project www.members.home.net/europax

From europax at home.com Tue Sep 4 18:03:02 2001 From: europax at home.com (Rob) Date: Tue Sep 4 18:03:02 2001 Subject: [Numpy-discussion] Python and Numpy compiled on Athlon optimized gcc3.01 Message-ID: <3B95798C.E69744F4@home.com>

Just for kicks last night I installed gcc 3.01, which claims to have Athlon optimization, i.e. -march=athlon. I recompiled Python and Numpy, and then ran a big simulation. The new compiler ran 200 seconds slower than the old gcc 2.95 with plain -march=pentium. I need to go to the gcc website and see just what optimization they are claiming. Maybe I should have also used -O2. Rob.
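For reference, the sparse-multiply loop in Rob's message at the top of this digest does vectorize; the offset slices are not a problem, since both offsets are fixed before the reduction runs. A sketch in modern NumPy syntax (the original used Numeric; the sizes and values below are invented for the demonstration, with n_inner and n_bound standing in for a.TotInnerEdgeNum and a.HybrdBoundEdgeNum):

```python
import numpy as np

n_inner, n_bound = 3, 4
rng = np.random.default_rng(0)
Cdd = rng.random((n_bound, n_bound))
XVec = rng.random(n_inner + n_bound)

# the original double loop
AVec_loop = np.zeros(n_inner + n_bound)
for IEdge in range(n_bound):
    for JEdge in range(n_bound):
        AVec_loop[IEdge + n_inner] += Cdd[IEdge, JEdge] * XVec[JEdge + n_inner]

# the sliced add.reduce attempt works row by row, offset and all
AVec_red = np.zeros(n_inner + n_bound)
for IEdge in range(n_bound):
    AVec_red[IEdge + n_inner] = np.add.reduce(
        Cdd[IEdge, 0:n_bound] * XVec[n_inner:n_inner + n_bound])

# and the whole block is just one matrix-vector product
AVec_vec = np.zeros(n_inner + n_bound)
AVec_vec[n_inner:] = Cdd @ XVec[n_inner:]

assert np.allclose(AVec_loop, AVec_red)
assert np.allclose(AVec_loop, AVec_vec)
```

The matrix-vector form also removes the remaining Python-level loop over IEdge.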
-- The Numeric Python EM Project www.members.home.net/europax

From myers at tc.cornell.edu Wed Sep 5 11:39:01 2001 From: myers at tc.cornell.edu (Chris Myers) Date: Wed Sep 5 11:39:01 2001 Subject: [Numpy-discussion] coercion for array-like objects for ufuncs Message-ID: <20010905143930.A10691@sowhat.tc.cornell.edu>

I have a Python object that contains a NumPy array (among other things), and I'd like it to behave like an array in certain circumstances. (For example, I can define __getitem__ on the class so that the underlying NumPy array is indexed.) I'd like to be able to act on such an object with a ufunc, but am stymied. For example, if y is an instance of this array-like class of mine, then I get a type error if I try a reduction on it:

>>> Numeric.minimum.reduce(y)
Traceback (most recent call last):
  File "<stdin>", line 1, in ?
TypeError: function not supported for these types, and can't coerce to supported types

Is there a way I can effect this coercion? I guess I'd prefer not to have to inherit from UserArray.
Thanks, Chris

========================================================================== Chris Myers Cornell Theory Center -------------------------------------------------------------------------- 636 Rhodes Hall email: myers at tc.cornell.edu Cornell University phone: (607) 255-5894 / fax: (607) 254-8888 Ithaca, NY 14853 http://www.tc.cornell.edu/~myers ==========================================================================

From tim.hochberg at ieee.org Wed Sep 5 11:54:01 2001 From: tim.hochberg at ieee.org (Tim Hochberg) Date: Wed Sep 5 11:54:01 2001 Subject: [Numpy-discussion] coercion for array-like objects for ufuncs References: <20010905143930.A10691@sowhat.tc.cornell.edu> Message-ID: <139301c1363c$09dc3a50$87740918@cx781526b>

Hi Chris, I believe that all you should have to do is define an __array__ function similar to that in UserArray:

def __array__(self,t=None):
    if t:
        return asarray(self.array,t)
    return asarray(self.array)

Replace self.array with whatever you are calling your contained array.

Regards, -tim

----- Original Message ----- From: "Chris Myers" To: Sent: Wednesday, September 05, 2001 11:39 AM Subject: [Numpy-discussion] coercion for array-like objects for ufuncs

> I have a Python object that contains a NumPy array (among other
> things), and I'd like it to behave like an array in certain
> circumstances. (For example, I can define __getitem__ on the class so
> that the underlying NumPy array is indexed.)
>
> I'd like to be able to act on such an object with a ufunc, but am
> stymied. For example, if y is an instance of this array-like class of
> mine, then I get a type error if I try a reduction on it:
>
> >>> Numeric.minimum.reduce(y)
> Traceback (most recent call last):
>   File "<stdin>", line 1, in ?
> TypeError: function not supported for these types, and can't coerce to supported types
>
> Is there a way I can effect this coercion? I guess I'd prefer not to
> have to inherit from UserArray.
>
> Thanks,
>
> Chris
>
> ==========================================================================
> Chris Myers
> Cornell Theory Center
> --------------------------------------------------------------------------
> 636 Rhodes Hall email: myers at tc.cornell.edu
> Cornell University phone: (607) 255-5894 / fax: (607) 254-8888
> Ithaca, NY 14853 http://www.tc.cornell.edu/~myers
> ==========================================================================
>
> _______________________________________________
> Numpy-discussion mailing list
> Numpy-discussion at lists.sourceforge.net
> https://lists.sourceforge.net/lists/listinfo/numpy-discussion

From oliphant at ee.byu.edu Wed Sep 5 12:40:02 2001 From: oliphant at ee.byu.edu (Travis Oliphant) Date: Wed Sep 5 12:40:02 2001 Subject: [Numpy-discussion] coercion for array-like objects for ufuncs In-Reply-To: <20010905143930.A10691@sowhat.tc.cornell.edu> Message-ID:

> Is there a way I can effect this coercion? I guess I'd prefer not to
> have to inherit from UserArray.

You need to have an __array__ method defined in your class which returns the NumPy array to operate on. -Travis

From europax at home.com Thu Sep 6 07:17:02 2001 From: europax at home.com (Rob) Date: Thu Sep 6 07:17:02 2001 Subject: [Numpy-discussion] ToyFDTDpython on Freshmeat yesterday Message-ID: <3B9784EF.17398549@home.com>

From my web statistics it looks like about 100 people downloaded it. Should give Numpy a pretty good showing as I got it running almost as fast as C. Only slowdown was dumping the E fields into a file every few ticks to generate the movie. I know there must be better ways to dump an array than use indexing- but I was so excited when I finally got it to near C speeds that I just let it be. Rob.
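The __array__ hook Tim and Travis describe in the coercion thread above still exists in modern NumPy, so it can be demonstrated directly. A minimal sketch (the class name and its contents are invented for the illustration; the copy keyword is accepted for compatibility with newer NumPy versions):

```python
import numpy as np

class Wrapped:
    """A container that exposes its NumPy array through the __array__ hook."""
    def __init__(self, data):
        self.array = np.asarray(data)

    def __getitem__(self, index):
        # index into the underlying array, as in Chris's class
        return self.array[index]

    def __array__(self, dtype=None, copy=None):
        # ufuncs call this to coerce the object to a plain array
        if dtype is not None:
            return np.asarray(self.array, dtype)
        return np.asarray(self.array)

y = Wrapped([3.0, 1.0, 2.0])
print(np.minimum.reduce(y))  # the reduction now coerces and succeeds
```

Without the __array__ method, the reduce call fails with the same kind of coercion error Chris reported.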
-- The Numeric Python EM Project www.members.home.net/europax

From chrishbarker at home.net Thu Sep 6 09:36:03 2001 From: chrishbarker at home.net (Chris Barker) Date: Thu Sep 6 09:36:03 2001 Subject: [Numpy-discussion] ToyFDTDpython on Freshmeat yesterday References: <3B9784EF.17398549@home.com> Message-ID: <3B97AAF9.C8129FEB@home.net>

Rob wrote:
> Only slowdown was dumping the E fields into a file every few
> ticks to generate the movie. I know there must be better ways to dump
> an array than use indexing-

If you want to dump the binary data, you can use:

file.write(array.tostring())

If you need an ASCII representation, you can use various forms of:

format_string = '%f '* A.shape[1] + '\n'
for r in range(A.shape[0]):
    file.write(format_string%(tuple(A[r,:])))

You could probably use map to speed it up a little as well. I'd love to see an array-oriented version of fscanf and fprintf to make this faster and easier, but I have yet to get around to writing one.

-Chris

-- Christopher Barker, Ph.D. ChrisHBarker at home.net --- --- --- http://members.home.net/barkerlohmann ---@@ -----@@ -----@@ ------@@@ ------@@@ ------@@@ Oil Spill Modeling ------ @ ------ @ ------ @ Water Resources Engineering ------- --------- -------- Coastal and Fluvial Hydrodynamics -------------------------------------- ------------------------------------------------------------------------

From oliphant at ee.byu.edu Thu Sep 6 10:14:02 2001 From: oliphant at ee.byu.edu (Travis Oliphant) Date: Thu Sep 6 10:14:02 2001 Subject: [Numpy-discussion] ToyFDTDpython on Freshmeat yesterday In-Reply-To: <3B97AAF9.C8129FEB@home.net> Message-ID:

> Rob wrote:
> > Only slowdown was dumping the E fields into a file every few
> > ticks to generate the movie.
I know there must be better ways to dump an array than use indexing-
>
> If you want to dump the binary data, you can use:
>
> file.write(array.tostring())
>
> If you need an ASCII representation, you can use various forms of:
>
> format_string = '%f '* A.shape[1] + '\n'
> for r in range(A.shape[0]):
>     file.write(format_string%(tuple(A[r,:])))
>
> You could probably use map to speed it up a little as well.
>
> I'd love to see an array-oriented version of fscanf and fprintf to make
> this faster and easier, but I have yet to get around to writing one.

Check out the IO section of SciPy. Numpyio along with a flexible ASCII_file reader are there. -Travis

From chrishbarker at home.net Thu Sep 6 15:54:06 2001 From: chrishbarker at home.net (Chris Barker) Date: Thu Sep 6 15:54:06 2001 Subject: [Numpy-discussion] A possible addition to PEP 209 References: <004a01c037ee$964dd040$0200a8c0@home> <39EE3D2C.51A92A71@jps.net> <200010191725.TAA14895@chinon.cnrs-orleans.fr> <39EF63F6.B72C3853@jps.net> Message-ID: <3B980397.63F08731@home.net>

Hi all, I recently started a discussion on c.l.p (http://mail.python.org/pipermail/python-list/2001-September/063502.html), and it brought up an interesting idea. In general, I'd like to see Python become more array-oriented, which PEP 209 and friends will help with. I want this because it provides a natural and efficient way of expressing your program, and because there are large opportunities for performance enhancements. While I make a lot of use of NumPy arrays at the moment, and get substantial performance benefits when I can use the overloaded operators and ufuncs, etc., Python itself still doesn't know that a NumPy array is a homogenous sequence (or might be), because it has no concept of such a thing. If the Python interpreter knew that it was homogenous, there could be a lot of opportunities for performance enhancements. In the above stated thread, I suggested the addition of a "homogenous" flag to sequences.
I haven't gotten an enthusiastic response, but Skip Montanaro suggested:

""" One approach might be to propose that the array object be "brought into the fold" as a builtin object and modifications be made to the virtual machine so that homogeneity can be assumed when operating on arrays. """

PEP 209 does propose that an array object be "brought into the fold" (or does it? would it be a builtin? if not, at least being part of the standard library would be a help), but it makes no mention of any modifications to the virtual machine that would allow optimisations outside of Numeric2 defined functions. Is this something worth adding? I understand that it is one thing for the homogenous sequence (or array) to be acknowledged in the VM, and quite another for actual optimizations to be written, but we need the former before we can get the latter. What do you all think??

-Chris

-- Christopher Barker, Ph.D. ChrisHBarker at home.net --- --- --- http://members.home.net/barkerlohmann ---@@ -----@@ -----@@ ------@@@ ------@@@ ------@@@ Oil Spill Modeling ------ @ ------ @ ------ @ Water Resources Engineering ------- --------- -------- Coastal and Fluvial Hydrodynamics -------------------------------------- ------------------------------------------------------------------------

From paul at pfdubois.com Thu Sep 6 16:55:02 2001 From: paul at pfdubois.com (Paul F. Dubois) Date: Thu Sep 6 16:55:02 2001 Subject: [Numpy-discussion] Numeric 20.2.0 available in CVS or tar form Message-ID: <001201c1372f$4a189480$0301a8c0@plstn1.sfba.home.com>

Numeric 20.2.0 is available at Sourceforge or via cvs. The tag for cvs is r20_2_0. I haven't done the Windows installers yet.

-------------- next part -------------- An HTML attachment was scrubbed...
URL:

From wqhdelphi at 263.net Thu Sep 6 20:10:02 2001 From: wqhdelphi at 263.net (wqh) Date: Thu Sep 6 20:10:02 2001 Subject: [Numpy-discussion] can I use symbolic computation with numpy? Message-ID: <3B983B3A.18232@mta3>

An HTML attachment was scrubbed... URL: -------------- next part --------------

I would like to use symbolic computation with NumPy, but do not know how. If NumPy cannot do this, which package should I use instead? Thanks.

From hinsen at cnrs-orleans.fr Fri Sep 7 06:14:02 2001 From: hinsen at cnrs-orleans.fr (Konrad Hinsen) Date: Fri Sep 7 06:14:02 2001 Subject: [Numpy-discussion] A possible addition to PEP 209 In-Reply-To: <3B980397.63F08731@home.net> (message from Chris Barker on Thu, 06 Sep 2001 16:15:35 -0700) References: <004a01c037ee$964dd040$0200a8c0@home> <39EE3D2C.51A92A71@jps.net> <200010191725.TAA14895@chinon.cnrs-orleans.fr> <39EF63F6.B72C3853@jps.net> <3B980397.63F08731@home.net> Message-ID: <200109071312.f87DCkw11410@chinon.cnrs-orleans.fr>

> benefits when I can use the overloaded operators and ufuncs, etc. Python
> itself still doesn't know that a NumPy array is a homogenous sequence
> (or might be), because it has no concept of such a thing. If the Python
> interpreter knew that it was homogenous, there could be a lot of
> opportunities for performance enhancements.

Such as? Given the overall dynamic nature of Python, I don't see any real opportunities outside array-specific code. What optimizations could be done knowing *only* that all elements of a sequence are of the same type, but without a particular data layout? Konrad.
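Konrad's question can be made concrete with a quick measurement: per-element type dispatch is exactly the overhead that a single array-level operation amortizes. A rough sketch in modern NumPy (array size and repeat count chosen arbitrarily for the demonstration):

```python
import timeit
import numpy as np

a = np.arange(100_000, dtype=float)

# element by element: every multiply re-checks the operand types
per_element = timeit.timeit(lambda: [x * 2.0 for x in a], number=10)

# one array operation: the type is checked once, the loop runs in C
whole_array = timeit.timeit(lambda: a * 2.0, number=10)

print(per_element / whole_array)  # the array op wins by a wide margin
```

The ratio is large precisely because the element-wise version pays the dynamic-dispatch cost 100,000 times per pass, which is the point Chris makes in his reply below.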
-- ------------------------------------------------------------------------------- Konrad Hinsen | E-Mail: hinsen at cnrs-orleans.fr Centre de Biophysique Moleculaire (CNRS) | Tel.: +33-2.38.25.56.24 Rue Charles Sadron | Fax: +33-2.38.63.15.17 45071 Orleans Cedex 2 | Deutsch/Esperanto/English/ France | Nederlands/Francais -------------------------------------------------------------------------------

From chrishbarker at home.net Fri Sep 7 17:01:23 2001 From: chrishbarker at home.net (Chris Barker) Date: Fri Sep 7 17:01:23 2001 Subject: [Numpy-discussion] A possible addition to PEP 209 References: <004a01c037ee$964dd040$0200a8c0@home> <39EE3D2C.51A92A71@jps.net> <200010191725.TAA14895@chinon.cnrs-orleans.fr> <39EF63F6.B72C3853@jps.net> <3B980397.63F08731@home.net> <200109071312.f87DCkw11410@chinon.cnrs-orleans.fr> Message-ID: <3B9964C9.6DA6E7CF@home.net>

Konrad Hinsen wrote:
> Such as? Given the overall dynamic nature of Python, I don't see any
> real opportunities outside array-specific code. What optimizations
> could be done knowing *only* that all elements of a sequence are of
> the same type, but without a particular data layout?

I remember Guido's answer to a FAQ a while ago (the current FAQ has a more terse version) that essentially stated that compiled Python wouldn't be much faster because of Python's dynamic nature, so that the expression x = y*z needs to be evaluated to see what type y is, what type z is, and what it means to multiply them, before the actual multiplication can take place. All that checking takes a whole lot longer than the multiplication itself. NumPy, of course, helps this along, because once it is determined that y and z are both NumPy arrays of the same shape and type, it can multiply all the elements in a C loop, without any further type checking. Where this breaks down, of course, is when you can't express what you want to do as a set of array functions.
Once you learn the tricks these times are fairly rare (which is why NumPy is useful), but there are times when you can't use array-math, and need to loop through the elements of an array and operate on them. Python currently has to type check each of those elements every time something is done on them. In principle, if the Python VM knew that A was an array of Floats, it would know that A[i,j,k] was a Float, and it would not have to do a check. I think it would be easiest to get optimization in sequence-oriented operations, such as map() and list comprehensions:

map(fun, A)

When this is called, the function bound to the "fun" name is passed to map, as is the array bound to the name "A". The Array is known to be homogenous. map could conceivably compile a version of fun that worked on the type of the items in A, and then apply that function to all the elements in A, without type checking, and looping at the speed of C. This is the kind of optimization I am imagining. Kind of an instant U-func. Something similar could be done with list comprehensions. Of course, most things that can be done with list comprehensions and map() can be done with array operators anyway, so the real advantage would be a smarter compiler that could do that kind of thing inside a bunch of nested for loops. There is at least one project heading in that direction:

* Psyco (Armin Rigo) - a run-time specializing virtual machine that sees what sorts of types are input to a function and compiles a type- or value-specific version of that function on-the-fly. I believe Armin is looking at some JIT code generators in addition to or instead of another virtual machine.

Knowing that all the elements of an Array (or any other sequence) are the same type could help here a lot. Once a particular function was compiled with a given set of types, it could be called directly on all the elements of that array (and other arrays) with no type checking.
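The "instant U-func" Chris sketches above is close to what numpy.vectorize offers today, minus the compilation step: it maps an arbitrary scalar Python function over array elements with ufunc-style broadcasting, though its inner loop is still interpreted, so it illustrates the interface rather than the speedup. A minimal sketch (function and values invented for the demonstration):

```python
import numpy as np

def fun(x):
    # any scalar Python function
    return x * x + 1.0

vfun = np.vectorize(fun)        # wrap it as an element-wise "ufunc"
A = np.array([[1.0, 2.0], [3.0, 4.0]])
print(vfun(A))                  # applied element-wise, shape preserved
```

A compiled version of the same idea, specializing fun for the element type of A, is what the Psyco-style projects mentioned below aim at.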
What it comes down to is that while Python's dynamic nature is wonderful, and very powerful and flexible, there are many, many, times that it is not really needed, particularly inside a given small function. The standard answer about Python optimization is that you just need to write those few small functions in C. This is only really practical if they are functions that operate on particular expected inputs: essentially statically typed input (or at least not totally general). Even then, it is a substantial effort, even for those with extensive C experience (unlike me). It would be so much better if a Py2C or a JIT compiler could optimize those functions for you. I know this is a major programming effort, and will, at best, be years away, but I'd like to see Python move in a direction that makes it easier to do, and allows small steps to be done at a time. I think introducing the concept of a homogenous sequence could help a lot of these efforts. Numeric Arrays would be just a single example. Python already has strings, and tuples could be marked as homogenous when created, if they were. So could lists, but since they are mutable, their homogeneity could change at any time, so it might not be useful. I may be wrong about all this, I really don't know a whit about writing compilers or interpreters, or VMs, but I'm throwing around the idea, to see what I can learn, and see if it makes any sense.

-Chris

-- Christopher Barker, Ph.D.
ChrisHBarker at home.net --- --- --- http://members.home.net/barkerlohmann ---@@ -----@@ -----@@ ------@@@ ------@@@ ------@@@ Oil Spill Modeling ------ @ ------ @ ------ @ Water Resources Engineering ------- --------- -------- Coastal and Fluvial Hydrodynamics -------------------------------------- ------------------------------------------------------------------------

From hinsen at cnrs-orleans.fr Sat Sep 8 15:11:02 2001 From: hinsen at cnrs-orleans.fr (Konrad Hinsen) Date: Sat Sep 8 15:11:02 2001 Subject: [Numpy-discussion] A possible addition to PEP 209 In-Reply-To: <3B9964C9.6DA6E7CF@home.net> (message from Chris Barker on Fri, 07 Sep 2001 17:22:33 -0700) References: <004a01c037ee$964dd040$0200a8c0@home> <39EE3D2C.51A92A71@jps.net> <200010191725.TAA14895@chinon.cnrs-orleans.fr> <39EF63F6.B72C3853@jps.net> <3B980397.63F08731@home.net> <200109071312.f87DCkw11410@chinon.cnrs-orleans.fr> <3B9964C9.6DA6E7CF@home.net> Message-ID: <200109082210.f88MAI515833@chinon.cnrs-orleans.fr>

> check each of those elements every time something is done on them. In
> principle, if the Python VM knew that A was an array of Floats, it would
> know that A[i,j,k] was a Float, and it would not have to do a check.

Right, but that would only work for types that are specially treated by the interpreter. Just knowing that all elements are of the same type is not enough. In fact, the VM does not do any check, it just looks up type-specific pointers and calls the relevant functions. The other question is how much effort the Python developers would be willing to spend on this; it looks like a very big job to me, in fact a reimplementation of the interpreter.

> when this is called, the function bound to the "fun" name is passed to
> map, as is the array bound to the name "A". The Array is known to be
> homogenous. map could conceivably compile a version of fun that worked on

Yes, that could be done, provided there is also a means for compiling type-specific versions of a function.
> I know this is a major programming effort, and will, at best, be years
> away, but I'd like to see Python move in a direction that makes it
> easier to do, and allows small steps to be done at a time. I think
> introducing the concept of a homogenous sequence could help a lot of

I'd say that adding this feature is much less work than doing even the slightest bit of optimization. I know I am sounding pessimistic here, but, well, I am... Konrad.

-- ------------------------------------------------------------------------------- Konrad Hinsen | E-Mail: hinsen at cnrs-orleans.fr Centre de Biophysique Moleculaire (CNRS) | Tel.: +33-2.38.25.56.24 Rue Charles Sadron | Fax: +33-2.38.63.15.17 45071 Orleans Cedex 2 | Deutsch/Esperanto/English/ France | Nederlands/Francais -------------------------------------------------------------------------------

From wqhdelphi at 263.net Sat Sep 8 19:36:01 2001 From: wqhdelphi at 263.net (wqh) Date: Sat Sep 8 19:36:01 2001 Subject: [Numpy-discussion] Is there some method to realize qr decomposition? Message-ID: <3B9AD4F2.26917@mta1>

An HTML attachment was scrubbed... URL: -------------- next part --------------

I searched through the docs but cannot find any. What NumPy supports compared with MATLAB is still too little; I think we must work hard to make it better.

From jjl at pobox.com Sun Sep 9 10:49:01 2001 From: jjl at pobox.com (John J. Lee) Date: Sun Sep 9 10:49:01 2001 Subject: [Numpy-discussion] Is there some method to realize qr decomposition? In-Reply-To: <3B9AD4F2.26917@mta1> Message-ID:

Try SciPy http://www.scipy.org/ Lots of wrapped libraries, and other useful stuff. John

From tpitts at accentopto.com Mon Sep 10 10:37:04 2001 From: tpitts at accentopto.com (Todd Alan Pitts, Ph.D.) Date: Mon Sep 10 10:37:04 2001 Subject: [Numpy-discussion] Problem with import ...
In-Reply-To: ; from oliphant@ee.byu.edu on Thu, Sep 06, 2001 at 11:13:51AM -0600 References: <3B97AAF9.C8129FEB@home.net> Message-ID: <20010910113615.A22554@fermi.accentopto.com>

In migrating from python 1.5.2 and Numeric 13 to python 2.1.1 with Numeric-20.1.0 I have encountered a rather pernicious problem. I had previously been using f2py 1.232 to generate modules for the BLAS and a self-consistent subset of LAPACK. Importing the thus created blas and then lapack modules made their FORTRAN routines available to the FORTRAN code that I had written. Everything worked well. However, after I compiled python 2.1.1 and Numeric-20.1.0, importing the blas and lapack modules no longer provides access to their routines. Moreover, I can't even import the lapack module because it requires routines from the blas module (that it can no longer see). I get this behavior on both RedHat 7.1 and 6.1 (e.g. with and without gcc 2.96). I verified that python 2.2a behaves as 2.1.1 in this regard as well. I tried generating blas and lapack libraries and linking them during creation of the shared libraries, but that only worked marginally. I am still getting unresolved symbol errors. Has anyone else seen this problem/difference? -Todd

From karshi.hasanov at utoronto.ca Mon Sep 10 15:47:01 2001 From: karshi.hasanov at utoronto.ca (Karshi Hasanov) Date: Mon Sep 10 15:47:01 2001 Subject: [Numpy-discussion] unwrap_function Message-ID: <20010910224642Z234904-13777+6@bureau8.utcc.utoronto.ca>

Hi all, Does NumPy provide an "unwrap" function like in MATLAB?

From chrishbarker at home.net Mon Sep 10 16:11:02 2001 From: chrishbarker at home.net (Chris Barker) Date: Mon Sep 10 16:11:02 2001 Subject: [Numpy-discussion] unwrap_function References: <20010910224642Z234904-13777+6@bureau8.utcc.utoronto.ca> Message-ID: <3B9D4D8D.E273C637@home.net>

Karshi Hasanov wrote:
> Does NumPy provide an "unwrap" function like in MATLAB?

For those who are wondering, this is unwrap in MATLAB:

UNWRAP Unwrap phase angle.
UNWRAP(P) unwraps radian phases P by changing absolute jumps greater than pi to their 2*pi complement. It unwraps along the first non-singleton dimension of P. P can be a scalar, vector, matrix, or N-D array.
UNWRAP(P,TOL) uses a jump tolerance of TOL rather than the default TOL = pi.
UNWRAP(P,[],DIM) unwraps along dimension DIM using the default tolerance.
UNWRAP(P,TOL,DIM) uses a jump tolerance of TOL.
See also ANGLE, ABS.

And I think the answer is no, but it would be nice...

-chris

-- Christopher Barker, Ph.D. ChrisHBarker at home.net --- --- --- http://members.home.net/barkerlohmann ---@@ -----@@ -----@@ ------@@@ ------@@@ ------@@@ Oil Spill Modeling ------ @ ------ @ ------ @ Water Resources Engineering ------- --------- -------- Coastal and Fluvial Hydrodynamics -------------------------------------- ------------------------------------------------------------------------

From hinsen at cnrs-orleans.fr Tue Sep 11 13:26:02 2001 From: hinsen at cnrs-orleans.fr (Konrad Hinsen) Date: Tue Sep 11 13:26:02 2001 Subject: [Numpy-discussion] Problem with import ... In-Reply-To: <20010910113615.A22554@fermi.accentopto.com> (tpitts@accentopto.com) References: <3B97AAF9.C8129FEB@home.net> <20010910113615.A22554@fermi.accentopto.com> Message-ID: <200109112024.f8BKOxS29853@chinon.cnrs-orleans.fr>

> FORTRAN code that I had written. Everything worked well. However,
> after I compiled python2.1.1 and Numeric-20.1.0 importing the blas and
> lapack modules no longer provides access to their routines. Moreover,

Were you relying on symbols from one dynamic module being available to other dynamic modules? That was never guaranteed in Python, whether or not it works depends on the platform but also on the Python interpreter version. I am not aware of any relevant changes in Python 2.1, but then none of my code relies on this. The only safe way to call C/Fortran code in one dynamic module from another dynamic module is passing the addresses in CObject objects. Konrad.
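Returning to the unwrap thread above: modern NumPy ships numpy.unwrap, and the algorithm is short enough to sketch directly. A minimal one-dimensional version, ignoring the TOL and DIM options of the MATLAB original (function name and sample data invented for the demonstration):

```python
import numpy as np

def unwrap_1d(p):
    """Unwrap radian phases by removing absolute jumps larger than pi."""
    p = np.asarray(p, dtype=float)
    dd = np.diff(p)
    # wrap each first difference into (-pi, pi]
    ddmod = np.mod(dd + np.pi, 2.0 * np.pi) - np.pi
    ddmod[(ddmod == -np.pi) & (dd > 0)] = np.pi
    # the correction is the accumulated gap between wrapped and raw steps
    correction = np.concatenate(([0.0], np.cumsum(ddmod - dd)))
    return p + correction

phases = np.array([0.0, 1.0, 2.0, 3.0, -2.0, -1.0])  # one jump of -5 rad
print(unwrap_1d(phases))  # the jump is replaced by its 2*pi complement
```

For steps smaller than pi the wrapped difference equals the raw difference, so the correction only accumulates at genuine jumps, which is why no explicit tolerance test is needed for the default TOL = pi.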
-- ------------------------------------------------------------------------------- Konrad Hinsen | E-Mail: hinsen at cnrs-orleans.fr Centre de Biophysique Moleculaire (CNRS) | Tel.: +33-2.38.25.56.24 Rue Charles Sadron | Fax: +33-2.38.63.15.17 45071 Orleans Cedex 2 | Deutsch/Esperanto/English/ France | Nederlands/Francais -------------------------------------------------------------------------------

From paul at pfdubois.com Tue Sep 11 14:58:03 2001 From: paul at pfdubois.com (Paul F. Dubois) Date: Tue Sep 11 14:58:03 2001 Subject: [Numpy-discussion] Dynamic loading and shared libraries article Message-ID: <000101c13b0c$aee25780$0201a8c0@plstn1.sfba.home.com>

My column this month has a really great article by David Beazley and two colleagues about how dynamic loading and shared libraries actually work and some things you can do with it. It is Computing in Science and Engineering, Sept/Oct 2001, Scientific Programming Department. I found the article a really big help in understanding this stuff. The magazine's web site is http://computer.org/cise. Paul

-------------- next part -------------- An HTML attachment was scrubbed... URL:

From herb at dolphinsearch.com Tue Sep 11 16:28:01 2001 From: herb at dolphinsearch.com (Herbert L. Roitblat) Date: Tue Sep 11 16:28:01 2001 Subject: [Numpy-discussion] Precision Message-ID: <016701c13b19$656abe00$cb02a8c0@dolphinsearch.com>

I have been wanting to use Float128 numbers on an Alpha system running Linux and Python 2.0. Float128 does not seem to be available; at least Float128 is undefined. Is the Alpha capable of supporting Float128 numbers? If it is, what am I missing to be able to use it? Thanks, Herb

-------------- next part -------------- An HTML attachment was scrubbed...
URL:

From csmith at blakeschool.org Wed Sep 12 14:08:02 2001 From: csmith at blakeschool.org (Christopher Smith) Date: Wed Sep 12 14:08:02 2001 Subject: [Numpy-discussion] A proposal for a round(x,n) enhancement Message-ID:

Hello, I would like to see an addition to the round(x,n) function syntax that would allow one to specify that x should be rounded to the nth significant digit as opposed to the nth digit to the left or right of the 1's digit. I am testing the waters right now on python-list and have not yet submitted a PEP. Some there have suggested that it be called a different function. Someone else has suggested that perhaps it could be added into SciPy or Numeric as a function there. I prefer the name round because it describes what you are doing. Someone suggested that MATLAB has a function called chop that does what I am proposing, and at http://www.math.colostate.edu/~tavener/M561/lab5/lab5.pdf the document says that the "MATLAB function chop(x,n) rounds (not chops!) the number x to n significant digits." If this new function was called "chop" then any previous MATLAB users would know what to expect. (But why call it chop if you are actually rounding?) What I think would be nice is to modify round so it can round to a given number of sig. figs. Here is a def called Round that simulates what I am proposing:

from math import floor,log10
def Round(x,n=0):
    if n%1:
        d=int(10*n)
        return round(x,d-1-int(floor(log10(abs(x)))))
    else:
        return round(x,n)

print Round(1.23456,2)   #returns 1.23
print Round(1.23456,0.2) #returns 1.2

The way that I have enhanced round's syntax is to have it check to see if there is a non-zero decimal part to n. If there is, n is multiplied by 10 and the resulting value is used to compute which digit to round x to. n=0.1 will round to the first significant digit while n=0.4 and n=1.1 will round to the 4th and 11th, respectively.
I don't believe you will run into any numerical issues since even though something like .3 may be represented internally as 0.29999999999999999, multiplying it by 10 gets you back to 3 and this value is what is used in the call to round(). I am open to comments about implementation and function name. /c From csmith at blakeschool.org Sat Sep 15 13:28:01 2001 From: csmith at blakeschool.org (Christopher Smith) Date: Sat Sep 15 13:28:01 2001 Subject: [Numpy-discussion] Re: PEP proposal for round(x,n) enhancement Message-ID: To Chris and others interested, Would you mind giving me an opinion here? I think you made the best argument for not using the name "round" since floating n is already accepted by round and changing how n is interpreted would break anyone's code who passed in n with a non-zero decimal portion. Sigfig or chop is an alternative but I really don't like the name "chop" since it's not descriptive of the rounding that takes place. Sigfig is ok, but I am proposing a round function (below) that offers more than just sigfigs. So...how about "round2", which has precedent in the atan2() function. The two sorts of roundings that can be done with round2() are 1) rounding to a specified number of significant figures or 2) rounding to the nearest "x". The latter is useful if you are making a histogram, for example, where experimental data values may all be distinct but when the deviation of values is considered it makes sense to round the observations to a specified uncertainty, e.g. if two values differ by less than sigma then they could be considered to be part of the same "bin". The script is below. Thanks for any input, /c

####
from math import floor,log10

def round2(x,n=0,sigs4n=1):
    '''Return x rounded to the specified number of significant digits, n,
    as counted from the first non-zero digit.

    If n=0 (the default value for round2) then the magnitude of the
    number will be returned (e.g. round2(12) returns 10.0).

    If n<0 then x will be rounded to the nearest multiple of n which, by
    default, will be rounded to 1 digit (e.g. round2(1.23,-.28) will
    round 1.23 to the nearest multiple of 0.3).

    Regardless of n, if x=0, 0 will be returned.'''
    if x==0:
        return x
    if n<0:
        n=round2(-n,sigs4n)
        return n*int(x/n+.5)
    if n==0:
        return 10.**(int(floor(log10(abs(x)))))
    return round(x,int(n)-1-int(floor(log10(abs(x)))))
####

From soender at cs.utk.edu Sun Sep 16 09:18:02 2001 From: soender at cs.utk.edu (Peter Soendergaard) Date: Sun Sep 16 09:18:02 2001 Subject: [Numpy-discussion] re: gcc 3.0 vs. 2.95.2 Message-ID: Hello, I just browsed over the archives for Numpy-discussion and saw this, and decided to sign up. I work on the ATLAS-project http://math-atlas.sourceforge.net/ and we have had similar problems with gcc3.0. Gcc 3.0 has a completely new backend which produces much slower floating point code on i386 machines. It is most visible on the Athlon, but it also shows on P4 and PIII machines. We haven't yet figured out if there are some optimizations that can make this go away, but if you need performance stick with the old 2.95 release for now. By the way, if you would like to use Atlas in NumPy (I don't know if you do it already) I might be of some help. There are C interfaces to the BLAS bundled with ATLAS, supporting both row-major and column-major storage. Cheers, Peter. From europax at home.com Sun Sep 16 10:33:01 2001 From: europax at home.com (Rob) Date: Sun Sep 16 10:33:01 2001 Subject: [Numpy-discussion] re: gcc 3.0 vs. 2.95.2 References: Message-ID: <3BA4E210.7337C815@home.com> Peter Soendergaard wrote: > > Hello, > > I just browsed over the archives for Numpy-discussion and saw this, and > decided to sign up. > > I work on the ATLAS-project http://math-atlas.sourceforge.net/ and we have > had similar problems with gcc3.0 Gcc 3.0 has a completely new backend > which produces much slower floating point code on i386 machines.
It is > most visible on the Athlon, but it also shows on P4 and PIII > machines. We haven't yet figured out if there are some optimizations that > can make this go away, but if you need performance stick with the old 2.95 > release for now. > > By the way, if you would like to use Atlas in NumPy (I don't know if you > do it already) I might be of some help. There are C interfaces to the BLAS > bundled with ATLAS, supporting both row-major and column-major storage. > > Cheers, > > Peter. > > FROM: Rob > DATE: 09/04/2001 18:02:05 > SUBJECT: [Numpy-discussion] Python and Numpy compiled on Athlon > optimized gcc3.01 > > Just for kicks last night I installed gcc3.01 which claims to have Athlon > optimization, i.e. -march=athlon. I recompiled Python and Numpy, and then > ran a big simulation. The new compiler ran 200 seconds slower than the > old gcc2.95 with plain -march=pentium. I need to go to the gcc website > and see just what optimization they are claiming. Maybe I should have > also used -O2. Rob. > > -- > The Numeric Python EM Project > > http://www.members.home.net/europax > > _______________________________________________ > Numpy-discussion mailing list > Numpy-discussion at lists.sourceforge.net > https://lists.sourceforge.net/lists/listinfo/numpy-discussion Hi Peter, great to know that I'm not alone. Maybe I can build an EM simulator using all integer math :). I think I mentioned this on the list before, but Win2k on my laptop (1Ghz) runs Numpy and Python faster than my 1.2Ghz Athlon DDR machine using FreeBSD :( Also, for reference, I have a 3DNow optimized MP3 encoding program Gogo that encodes mp3's 10x faster on the athlon than Lame does on my FreeBSD system on the laptop. Go figure! -- The Numeric Python EM Project www.members.home.net/europax From paul at pfdubois.com Mon Sep 17 14:32:01 2001 From: paul at pfdubois.com (Paul F.
Dubois) Date: Mon Sep 17 14:32:01 2001 Subject: [Numpy-discussion] RE: Segfault with latest lapack_litemodule using complex matrices. In-Reply-To: <01091601582300.06795@travis> Message-ID: <000001c13fbf$fbf315c0$3d01a8c0@plstn1.sfba.home.com> A. Mirone rode to the rescue and the fix is available as Numeric-20.2.1.tar.gz or in CVS. -----Original Message----- From: Travis Oliphant [mailto:oliphant.travis at ieee.org] Sent: Saturday, September 15, 2001 6:58 PM To: Paul F. Dubois Subject: Segfault with latest lapack_litemodule using complex matrices. Hi Paul, With the latest changes to the lapack_lite module, I now get segfaults when I'm trying to call LinearAlgebra.eigenvectors using a complex-valued matrix. Could you check this on your system? Thanks, -Travis From chrishbarker at home.net Mon Sep 17 16:15:03 2001 From: chrishbarker at home.net (Chris Barker) Date: Mon Sep 17 16:15:03 2001 Subject: [Numpy-discussion] PyArray_Check problem. References: <004a01c037ee$964dd040$0200a8c0@home> <39EE3D2C.51A92A71@jps.net> <200010191725.TAA14895@chinon.cnrs-orleans.fr> Message-ID: <3BA68927.D7B50B21@home.net> Hi all, The MATLAB Digest just put out a little article about array indexing in MATLAB. I thought some of you might find it interesting, and it might give some ideas for NumPy2. Most of what MATLAB has, NumPy has an equivalent, but I would love to see what matlab calls vector indexing, and a more natural way to do masks. I know vector indexing would be a pretty tricky thing to have work, at least with slices being references and all, but it would be a very nice thing!! Perhaps some brilliant person can figure out an elegant and efficient way to do it. Here is where you will find the article: http://www.mathworks.com/company/digest/sept01/matrix.shtml -Chris -- Christopher Barker, Ph.D.
ChrisHBarker at home.net --- --- --- http://members.home.net/barkerlohmann ---@@ -----@@ -----@@ ------@@@ ------@@@ ------@@@ Oil Spill Modeling ------ @ ------ @ ------ @ Water Resources Engineering ------- --------- -------- Coastal and Fluvial Hydrodynamics -------------------------------------- ------------------------------------------------------------------------ From romberg at fsl.noaa.gov Mon Sep 17 16:21:01 2001 From: romberg at fsl.noaa.gov (Mike Romberg) Date: Mon Sep 17 16:21:01 2001 Subject: [Numpy-discussion] Offset 2D arrays Message-ID: <15270.34086.281888.272126@smaug.fsl.noaa.gov> I am attempting to create 2D arrays which are offset copies of a given starting array. For example if I have a 2D array like this:

array([[1, 2, 3],
       [4, 5, 6],
       [7, 8, 9]])

I would like to offset it by some amount in either or both the x and y dimension. Let's say that both the x and y offset would be 1. Then I would like to have an array like this:

array([[5, 6, 0],
       [8, 9, 0],
       [0, 0, 0]])

Here I don't really care about the values which are now zero. The main point is that now I can compare the data values at any given (x,y) point with the values at the adjacent point (over one on each axis). This would be useful for the kinds of calculations we need to do. I just can't come up with a numeric way to do this. Does anyone have any ideas? Thanks a lot, Mike Romberg (romberg at fsl.noaa.gov) From chrishbarker at home.net Mon Sep 17 16:23:01 2001 From: chrishbarker at home.net (Chris Barker) Date: Mon Sep 17 16:23:01 2001 Subject: [Numpy-discussion] Indexing in MATLAB References: <004a01c037ee$964dd040$0200a8c0@home> <39EE3D2C.51A92A71@jps.net> <200010191725.TAA14895@chinon.cnrs-orleans.fr> <3BA68927.D7B50B21@home.net> Message-ID: <3BA68B01.28CF36D9@home.net> OOPS! I replied to an arbitrary message to get the address, and I forgot to change the subject, so here is the same message I just posted, but with an appropriate subject.
Hi all, The MATLAB Digest just put out a little article about array indexing in MATLAB. I thought some of you might find it interesting, and it might give some ideas for NumPy2. Most of what MATLAB has, NumPy has an equivalent, but I would love to see what matlab calls vector indexing, and a more natural way to do masks. I know vector indexing would be a pretty tricky thing to have work, at least with slices being references and all, but it would be a very nice thing!! Perhaps some brilliant person can figure out an elegant and efficient way to do it. Here is where you will find the article: http://www.mathworks.com/company/digest/sept01/matrix.shtml -Chris -- Christopher Barker, Ph.D. ChrisHBarker at home.net --- --- --- http://members.home.net/barkerlohmann ---@@ -----@@ -----@@ ------@@@ ------@@@ ------@@@ Oil Spill Modeling ------ @ ------ @ ------ @ Water Resources Engineering ------- --------- -------- Coastal and Fluvial Hydrodynamics -------------------------------------- ------------------------------------------------------------------------ From nodwell at physics.ubc.ca Mon Sep 17 16:35:03 2001 From: nodwell at physics.ubc.ca (Eric Nodwell) Date: Mon Sep 17 16:35:03 2001 Subject: [Numpy-discussion] Offset 2D arrays In-Reply-To: <15270.34086.281888.272126@smaug.fsl.noaa.gov> References: <15270.34086.281888.272126@smaug.fsl.noaa.gov> Message-ID: <20010917163437.A26450@holler.physics.ubc.ca> Mike, As was pointed out to me when I had a similar query, one way to do this is to define a class which inherits UserArray and refine indexing and slicing. I actually shifted by an offset of one in the opposite direction to what you seem to require. I had intended to generalize to arbitrary offsets, but haven't had the time yet. Anyway, you're welcome to grab my code at http://www.physics.ubc.ca/~mbelab/python/arrayone as a starting point for your class.
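The approach Eric describes — inherit the array class and shift indices in the indexing methods — can be sketched in a few lines. This is plain Python with a hypothetical `OneArray` name; a real version would subclass Numeric's UserArray and would also have to handle slices, which are omitted here:

```python
class OneArray:
    """Sketch of a 1-offset array: index 1 refers to the first element."""
    def __init__(self, data):
        self.data = list(data)

    def __getitem__(self, i):
        # Shift simple integer indices down by one; slice support omitted.
        return self.data[i - 1]

    def __setitem__(self, i, value):
        self.data[i - 1] = value

a = OneArray([10, 20, 30])
# a[1] -> 10, a[3] -> 30
```

Generalizing to an arbitrary offset is just a matter of storing the shift as an attribute instead of hard-coding 1.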
There are still some issues and quirkiness with the code, but they're documented along with work-arounds, and suggestions for fixes have been made on this mailing list. Again, it's a matter of time... regards, Eric On Mon, Sep 17, 2001 at 05:20:06PM -0600, Mike Romberg wrote: > > I am attempting to create 2D arrays which are offset copies of a > given starting array. For example if I have a 2D array like this: > > array([[1, 2, 3], > [4, 5, 6], > [7, 8, 9]]) > > I would like to offset it by some amount in either or both the x and > y dimension. Lets say that both the x and y offset would be 1. Then > I would like to have an array like this: > > > > array([[5, 6, 0], > [8, 9, 0], > [0, 0, 0]]) > > Here I don't really care about the values which are now zero. The > main point is that now I can compare the data values at any given > (x,y) point with the values at the adjacent point (over one on each > axis). This would be useful for the kinds of calculations we need to > do. I just can't come up with a numeric way to do this. Does anyone > have any ideas? > > Thanks alot, > > Mike Romberg (romberg at fsl.noaa.gov) > > _______________________________________________ > Numpy-discussion mailing list > Numpy-discussion at lists.sourceforge.net > https://lists.sourceforge.net/lists/listinfo/numpy-discussion -- ******************************** Eric Nodwell Ph.D. candidate Department of Physics University of British Columbia tel: 604-822-5425 fax: 604-822-5324 nodwell at physics.ubc.ca From nodwell at physics.ubc.ca Mon Sep 17 16:40:01 2001 From: nodwell at physics.ubc.ca (Eric Nodwell) Date: Mon Sep 17 16:40:01 2001 Subject: [Numpy-discussion] Offset 2D arrays In-Reply-To: <20010917163437.A26450@holler.physics.ubc.ca> References: <15270.34086.281888.272126@smaug.fsl.noaa.gov> <20010917163437.A26450@holler.physics.ubc.ca> Message-ID: <20010917163920.B26450@holler.physics.ubc.ca> For "refine indexing and slicing" read "re-define indexing and slicing". 
Oops :) From chrishbarker at home.net Mon Sep 17 16:41:02 2001 From: chrishbarker at home.net (Chris Barker) Date: Mon Sep 17 16:41:02 2001 Subject: [Numpy-discussion] Offset 2D arrays References: <15270.34086.281888.272126@smaug.fsl.noaa.gov> Message-ID: <3BA68F4D.A8ADFB68@home.net> Mike Romberg wrote: > > I am attempting to create 2D arrays which are offset copies of a > given starting array. For example if I have a 2D array like this: > have any ideas? This is not quite as clean as I would like, but this will work:

>>> a = array([[1, 2, 3],
...            [4, 5, 6],
...            [7, 8, 9]])
>>> m,n = a.shape
>>> b = zeros(a.shape)
>>> b[:m-1,:n-1] = a[1:,1:]
>>> b
array([[5, 6, 0],
       [8, 9, 0],
       [0, 0, 0]])
>>>

if b does not have to be the same shape as a, then it is really easy:

>>> b = a[1:,1:]

-Chris -- Christopher Barker, Ph.D. ChrisHBarker at home.net --- --- --- http://members.home.net/barkerlohmann ---@@ -----@@ -----@@ ------@@@ ------@@@ ------@@@ Oil Spill Modeling ------ @ ------ @ ------ @ Water Resources Engineering ------- --------- -------- Coastal and Fluvial Hydrodynamics -------------------------------------- ------------------------------------------------------------------------ From roitblat at hawaii.edu Tue Sep 18 10:57:03 2001 From: roitblat at hawaii.edu (Herbert L. Roitblat) Date: Tue Sep 18 10:57:03 2001 Subject: [Numpy-discussion] Offset 2D arrays References: <15270.34086.281888.272126@smaug.fsl.noaa.gov> <3BA68F4D.A8ADFB68@home.net> Message-ID: <006d01c1406b$4a2b13e0$86f4a8c0@dolphinsearch.com> This will work:

b=zeros((3,3))
b[:2,:2] = b[:2,:2] + a[1:,1:]

You need to know the size of a to use this scheme. ----- Original Message ----- From: "Chris Barker" To: "Mike Romberg" Cc: Sent: Monday, September 17, 2001 2:03 PM Subject: Re: [Numpy-discussion] Offset 2D arrays > Mike Romberg wrote: > > > > I am attempting to create 2D arrays which are offset copies of a > > given starting array. For example if I have a 2D array like this: > > > have any ideas?
> > This is not quite as clean as i would like, but this will work: > > >>> a = array([[1, 2, 3], > ... [4, 5, 6], > ... [7, 8, 9]]) > >>> m,n = a.shape > >>> b[:m-1,:n-1] = a[1:,1:] > >>> b > array([[5, 6, 0], > [8, 9, 0], > [0, 0, 0]]) > >>> > > if b does not have to be the same shape as a, then it is really easy: > > >>> b = a[1:,1:] > > -Chris > > > -- > Christopher Barker, > Ph.D. > ChrisHBarker at home.net --- --- --- > http://members.home.net/barkerlohmann ---@@ -----@@ -----@@ > ------@@@ ------@@@ ------@@@ > Oil Spill Modeling ------ @ ------ @ ------ @ > Water Resources Engineering ------- --------- -------- > Coastal and Fluvial Hydrodynamics -------------------------------------- > ------------------------------------------------------------------------ > > _______________________________________________ > Numpy-discussion mailing list > Numpy-discussion at lists.sourceforge.net > https://lists.sourceforge.net/lists/listinfo/numpy-discussion > From hinsen at cnrs-orleans.fr Wed Sep 19 01:07:02 2001 From: hinsen at cnrs-orleans.fr (Konrad Hinsen) Date: Wed Sep 19 01:07:02 2001 Subject: [Numpy-discussion] Offset 2D arrays In-Reply-To: <006d01c1406b$4a2b13e0$86f4a8c0@dolphinsearch.com> (roitblat@hawaii.edu) References: <15270.34086.281888.272126@smaug.fsl.noaa.gov> <3BA68F4D.A8ADFB68@home.net> <006d01c1406b$4a2b13e0$86f4a8c0@dolphinsearch.com> Message-ID: <200109190805.f8J85K926087@chinon.cnrs-orleans.fr> > This will work: > b=zeros ((3,3)) > b[:2,:2] = b[:2,:2] + a[1:,1:] > > You need to know the size of a to use this scheme. How about this:

b = 0*a
b[:-1, :-1] = a[1:, 1:]

Works for any shape and type of a. Konrad.
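Konrad's two-line idiom is easy to check in isolation (a sketch, with numpy standing in for the Numeric of the thread; the semantics of `0*a` and the slice assignment are the same):

```python
import numpy

a = numpy.array([[1, 2, 3],
                 [4, 5, 6],
                 [7, 8, 9]])

b = 0 * a                # zeros with the same shape and type as a
b[:-1, :-1] = a[1:, 1:]  # shift the data up and left by one

# b is now the offset copy from Mike's example:
# [[5, 6, 0],
#  [8, 9, 0],
#  [0, 0, 0]]
```

Because the shape comes from `a` itself, no explicit size is needed, which is what makes this cleaner than the zeros((3,3)) version.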
-- ------------------------------------------------------------------------------- Konrad Hinsen | E-Mail: hinsen at cnrs-orleans.fr Centre de Biophysique Moleculaire (CNRS) | Tel.: +33-2.38.25.56.24 Rue Charles Sadron | Fax: +33-2.38.63.15.17 45071 Orleans Cedex 2 | Deutsch/Esperanto/English/ France | Nederlands/Francais ------------------------------------------------------------------------------- From roitblat at hawaii.edu Wed Sep 19 10:14:03 2001 From: roitblat at hawaii.edu (Herbert L. Roitblat) Date: Wed Sep 19 10:14:03 2001 Subject: [Numpy-discussion] Offset 2D arrays References: <15270.34086.281888.272126@smaug.fsl.noaa.gov> <3BA68F4D.A8ADFB68@home.net> <006d01c1406b$4a2b13e0$86f4a8c0@dolphinsearch.com> <200109190805.f8J85K926087@chinon.cnrs-orleans.fr> Message-ID: <013901c1412e$64448e00$86f4a8c0@dolphinsearch.com> Konrad's solution is MUCH more elegant. HLR ----- Original Message ----- From: "Konrad Hinsen" To: Cc: ; ; Sent: Tuesday, September 18, 2001 10:05 PM Subject: Re: [Numpy-discussion] Offset 2D arrays > > This will work: > > b=zeros ((3,3)) > > b[:2,:2] = b[:2,:2] + a[1:,1:] > > > > You need to know the size of a to use this scheme. > > How about this: > > b = 0*a > b[:-1, :-1] = a[1:, 1:] > > Works for any shape and type of a. > > Konrad. 
> -- > -------------------------------------------------------------------------- ----- > Konrad Hinsen | E-Mail: hinsen at cnrs-orleans.fr > Centre de Biophysique Moleculaire (CNRS) | Tel.: +33-2.38.25.56.24 > Rue Charles Sadron | Fax: +33-2.38.63.15.17 > 45071 Orleans Cedex 2 | Deutsch/Esperanto/English/ > France | Nederlands/Francais > -------------------------------------------------------------------------- ----- > From romberg at fsl.noaa.gov Wed Sep 19 11:50:02 2001 From: romberg at fsl.noaa.gov (Mike Romberg) Date: Wed Sep 19 11:50:02 2001 Subject: [Numpy-discussion] Offset 2D arrays In-Reply-To: <013901c1412e$64448e00$86f4a8c0@dolphinsearch.com> References: <15270.34086.281888.272126@smaug.fsl.noaa.gov> <3BA68F4D.A8ADFB68@home.net> <006d01c1406b$4a2b13e0$86f4a8c0@dolphinsearch.com> <200109190805.f8J85K926087@chinon.cnrs-orleans.fr> <013901c1412e$64448e00$86f4a8c0@dolphinsearch.com> Message-ID: >>>>> " " == Herbert L Roitblat writes: > Konrad's solution is MUCH more elegant. > Message ----- From: "Konrad Hinsen" [snip] >> How about this: >> >> b = 0*a b[:-1, :-1] = a[1:, 1:] >> I think it looks cleaner as well. I've managed to create a function (with the help of the tips on this list) which can offset a 2d array in either or both the x and y dimensions. I strongly suspect that someone who *gets* python and numeric slicing better than I do can come up with a cleaner approach.

def getindicies(o, l):
    if o > 0:
        s1 = o; e1 = l; s2 = 0; e2 = l - o
    elif o < 0:
        s1 = 0; e1 = l + o; s2 = -o; e2 = l
    else:
        s1 = 0; e1 = l; s2 = 0; e2 = l
    return s1, e1, s2, e2

# return a 2d array whose dimensions match a with the data offset
# controlled by x and y.
def offset(a, x, y):
    sy1, ey1, sy2, ey2 = getindicies(y, a.shape[0])
    sx1, ex1, sx2, ex2 = getindicies(x, a.shape[1])
    b = zeros(a.shape)
    b[sy1:ey1,sx1:ex1] = a[sy2:ey2,sx2:ex2]
    return b

a = array(((1, 2, 3), (4, 5, 6), (7, 8, 9)))

# no offset
print offset(a, 0, 0)
# offset by 1 column in x
print offset(a, 1, 0)
# offset by 1 column (opposite dir) in x
print offset(a, -1, 0)
# offset by 2 columns in x
print offset(a, 2, 0)
# offset by 2 columns in y
print offset(a, 0, 2)

Thanks, Mike Romberg (romberg at fsl.noaa.gov) From europax at home.com Thu Sep 20 17:54:02 2001 From: europax at home.com (Rob) Date: Thu Sep 20 17:54:02 2001 Subject: [Numpy-discussion] expect a new wave of Numpy'ers Message-ID: <3BAA8F4C.5B5C599C@home.com> I just announced a ham radio type FDTD simulation to the ham newsgroups. I couldn't resist trying out a ham type antenna, instead of all the waveguide stuff I've been doing. Rob. -- The Numeric Python EM Project www.members.home.net/europax From harpend at xmission.com Tue Sep 25 15:20:04 2001 From: harpend at xmission.com (Henry Harpending) Date: Tue Sep 25 15:20:04 2001 Subject: [Numpy-discussion] append number to vector Message-ID: <20010925161348.C6064@xmission.com> I often find myself wanting to append a number to a vector. After fumbling experimentation I use

def comma(ar,inint):
    "comma(array,integer) returns array with integer appended"
    t=array([inint,])
    return(concatenate((ar,t),1))

which is used like the comma in apl, i.e. ar <- ar, inint. This seems klutzy to me. Is there a simpler way to do it? If ar were a list, ar.append(inint) works, but no such luck after ar has become an array.
Thanks, Henry Harpending, University of Utah From gvermeul at labs.polycnrs-gre.fr Wed Sep 26 05:18:02 2001 From: gvermeul at labs.polycnrs-gre.fr (Gerard Vermeulen) Date: Wed Sep 26 05:18:02 2001 Subject: [Numpy-discussion] FAST and EASY data plotting for Python, NumPy and Qt Message-ID: <01092614174502.23413@taco.polycnrs-gre.fr> Announcing PyQwt-0.29.91 FAST and EASY data plotting for Python, NumPy and Qt PyQwt is a set of Python bindings for the Qwt C++ class library. The Qwt library extends the Qt framework with widgets for Scientific and Engineering applications. It contains QwtPlot, a 2d plotting widget, and widgets for data input/output such as QwtCounter, QwtKnob, QwtThermo and QwtWheel. PyQwt requires and extends PyQt, a set of Python bindings for Qt. PyQwt requires NumPy. NumPy extends the Python language with new data types that make Python an ideal language for numerical computing and experimentation (like MatLab, but better). The home page of PyQwt is http://gerard.vermeulen.free.fr

NEW in PyQwt-0.29.91:
1. compatible with PyQt-2.5/sip-2.5 and PyQt-2.4/sip-2.4.
2. compatible with NumPy-20.2.0, and lower.
3. *.exe installer for Windows (requires Qt-2.3.0-NC).
4. build instructions for Windows and other versions of Qt.
5. HTML documentation with installation instructions and a reference listing the Python calls to PyQwt that are different from the corresponding C++ calls to Qwt.
6. fixed reference counting bug in the methods with NumPy arrays as arguments.
7. new methods: QwtPlot.axisMargins() QwtPlot.closestCurve() QwtPlot.curveKeys() QwtPlot.markerKeys() QwtPlot.title() QwtPlot.titleFont() QwtScale.labelFormat() QwtScale.map()
8. changed methods: QwtCurve.verifyRange() -- cleaned up interface QwtPlot.adjust() -- is now fully implemented QwtPlot.enableLegend() -- (de)selects all items or a single item
9.
removed methods (incompatible with Python, because unsafe, even in C++): QwtCurve.setRawData() QwtPlot.setCurveRawData() QwtSpline.copyValues() Gerard Vermeulen From chrishbarker at home.net Wed Sep 26 11:35:05 2001 From: chrishbarker at home.net (Chris Barker) Date: Wed Sep 26 11:35:05 2001 Subject: [Numpy-discussion] round() and abs() Ufuncs??? References: <01092614174502.23413@taco.polycnrs-gre.fr> Message-ID: <3BB2254D.EAC7879A@home.net> Hi all, I was recently surprised to find that there are no round() or abs() Ufuncs with Numeric. I'm imagining that they might exist under other names, but if not, I submit my versions for critique (lightly tested) -Chris

from Numeric import *

def Uabs(a):
    """ A Ufunc version of the Python abs() function """
    a = asarray(a)
    if a.typecode() == 'D' or a.typecode() == 'F': # for complex numbers
        return sqrt(a.imag**2 + a.real**2)
    else:
        return where(a < 0, -a, a)

def Uround(a,n=0):
    """ A Ufunc version of the Python round() function.
    It should behave in the same way

    Note: I think this is the right thing to do for negative numbers,
    but not totally sure.
    (Uround(-0.5) = 0, but Uround(-0.5000001) = -1)

    It won't work for complex numbers
    """
    a = asarray(a)
    n = asarray(n)
    return floor((a * 10.**n) + 0.5) / 10.**n

-- Christopher Barker, Ph.D. ChrisHBarker at home.net --- --- --- http://members.home.net/barkerlohmann ---@@ -----@@ -----@@ ------@@@ ------@@@ ------@@@ Oil Spill Modeling ------ @ ------ @ ------ @ Water Resources Engineering ------- --------- -------- Coastal and Fluvial Hydrodynamics -------------------------------------- ------------------------------------------------------------------------ From chrishbarker at home.net Wed Sep 26 12:00:02 2001 From: chrishbarker at home.net (Chris Barker) Date: Wed Sep 26 12:00:02 2001 Subject: [Numpy-discussion] round() and abs() Ufuncs???
References: <01092614174502.23413@taco.polycnrs-gre.fr> <3BB2254D.EAC7879A@home.net> Message-ID: <3BB22B00.E3CADF55@home.net> Chris Barker wrote: > I was recently surprised to find that there are no round() or abs() > Ufuncs with Numeric. I'm imagining that they might exist under other > names, but if not, I submit my versions for critique (lightly tested) OK, Tim Hochberg was nice enough to point out to me that abs() works on NumPy arrays. However, it does not work on other sequences, so maybe we need this:

_builtin_abs = abs  # keep a reference to the builtin before shadowing it
def abs(a):
    return _builtin_abs(asarray(a))

-Chris -- Christopher Barker, Ph.D. ChrisHBarker at home.net --- --- --- http://members.home.net/barkerlohmann ---@@ -----@@ -----@@ ------@@@ ------@@@ ------@@@ Oil Spill Modeling ------ @ ------ @ ------ @ Water Resources Engineering ------- --------- -------- Coastal and Fluvial Hydrodynamics -------------------------------------- ------------------------------------------------------------------------ From kern at caltech.edu Wed Sep 26 12:01:57 2001 From: kern at caltech.edu (Robert Kern) Date: Wed Sep 26 12:01:57 2001 Subject: [Numpy-discussion] round() and abs() Ufuncs??? In-Reply-To: <3BB2254D.EAC7879A@home.net> References: <01092614174502.23413@taco.polycnrs-gre.fr> <3BB2254D.EAC7879A@home.net> Message-ID: <20010926120012.A18328@myrddin.caltech.edu> On Wed, Sep 26, 2001 at 11:58:21AM -0700, Chris Barker wrote: > Hi all, > > I was recently surprised to find that there are no round() or abs() > Ufuncs with Numeric. I'm imagining that they might exist under other > names, around and absolute. -- Robert Kern kern at caltech.edu "In the fields of hell where the grass grows high Are the graves of dreams allowed to die." -- Richard Harter From jjl at pobox.com Wed Sep 26 14:10:02 2001 From: jjl at pobox.com (John J. Lee) Date: Wed Sep 26 14:10:02 2001 Subject: [Numpy-discussion] round() and abs() Ufuncs??? In-Reply-To: <3BB22B00.E3CADF55@home.net> Message-ID: On Wed, 26 Sep 2001, Chris Barker wrote: [...]
> OK, Tim Hochberg was nice enough to point out to me that abs() works on > NumPy arrays. However, it does not work on other sequences, so maybe we > need this: > > def abs(a): > return abs(asarray(a)) Numeric.absolute, as Robert Kern pointed out, maybe that hasn't arrived in your mailbox... John From europax at home.com Thu Sep 27 18:07:03 2001 From: europax at home.com (Rob) Date: Thu Sep 27 18:07:03 2001 Subject: [Numpy-discussion] Can I further Numpy-ize this code? Message-ID: <3BB3CCFC.2367A7C3@home.com> I've been working on this for so long I may be missing the obvious. Here is a snippet of code that I would like to at least get rid of one more indexing operation. The problem is the presence of the one dimensional array constants which are functions of x,y, or z depending on their location: (you may recognize this as some FDTD code) Any ideas? Thanks, Rob.

####################################/
# Update the interior of the mesh:
# all vector H vector components
#

## for az in range(0,nz):
for ay in range(0,ny):
    for ax in range(0,nx):

        dstore[ax,ay,0:nz]=Bx[ax,ay,0:nz]

        Bx[ax,ay,0:nz] = Bx[ax,ay,0:nz] * C1[0:nz] + ( (
            (Ey[ax,ay,1:(nz+1)]-Ey[ax,ay,0:nz] ) / dz -
            (Ez[ax,ay+1,0:nz]-Ez[ax,ay,0:nz]) / dy ) * C2[0:nz] )

        Hx[ax,ay,0:nz]= Hx[ax,ay,0:nz] * C3[ay] + ( (
            Bx[ax,ay,0:nz] * C5[ax] - dstore[ax,ay,0:nz] * C6[ax] ) *
            C4h[ay] )

        dstore[ax,ay,0:nz]=By[ax,ay,0:nz]

        By[ax,ay,0:nz] = By[ax,ay,0:nz] * C1[ax] + ( (
            (Ez[ax+1,ay,0:nz]-Ez[ax,ay,0:nz]) / dx -
            (Ex[ax,ay,1:(nz+1)]-Ex[ax,ay,0:nz]) / dz ) * C2[ax] )

        Hy[ax,ay,0:nz]= Hy[ax,ay,0:nz] * C3[0:nz] + ( (
            By[ax,ay,0:nz] * C5[ay] - dstore[ax,ay,0:nz] * C6[ay] ) *
            C4h[0:nz] )

        dstore[ax,ay,0:nz]=Bz[ax,ay,0:nz]

        Bz[ax,ay,0:nz] = Bz[ax,ay,0:nz] * C1[ay] + ( (
            (Ex[ax,ay+1,0:nz]-Ex[ax,ay,0:nz] ) / dy -
            (Ey[ax+1,ay,0:nz]-Ey[ax,ay,0:nz] ) / dx ) * C2[ay] )

        Hz[ax,ay,0:nz]= Hz[ax,ay,0:nz] * C3[ax] + ( (
            Bz[ax,ay,0:nz] * C5[0:nz] - dstore[ax,ay,0:nz] * C6[0:nz]
            ) * C4h[ax] )

-- The Numeric Python EM Project www.members.home.net/europax
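One way to remove the explicit ax/ay loops in code like the above is to reshape each 1-D coefficient array so that it broadcasts along the matching axis of the 3-D field arrays. The sketch below uses small made-up shapes and numpy standing in for Numeric; only the multiply-by-coefficient pattern is shown, not the full update:

```python
import numpy

nx, ny, nz = 3, 4, 5
Bx = numpy.ones((nx, ny, nz))
C1 = numpy.arange(1.0, nz + 1.0)   # per-z coefficients, shape (nz,)
C5 = numpy.arange(1.0, nx + 1.0)   # per-x coefficients, shape (nx,)

# Shaped (1, 1, nz), C1 multiplies along z for every (ax, ay) at once,
# replacing the looped  Bx[ax,ay,0:nz] * C1[0:nz]  expression.
z_scaled = Bx * C1.reshape(1, 1, nz)

# Shaped (nx, 1, 1), C5 multiplies along x instead, replacing the
# per-element C5[ax] factor in the loop body.
x_scaled = Bx * C5.reshape(nx, 1, 1)
```

The shifted E-field differences can be handled the same way with whole-array slices (e.g. `Ey[:, :, 1:] - Ey[:, :, :-1]`), so the entire interior update reduces to a handful of array expressions.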
From europax at home.com Thu Sep 27 20:12:01 2001 From: europax at home.com (Rob) Date: Thu Sep 27 20:12:01 2001 Subject: [Numpy-discussion] Can I further Numpy-ize this code? References: <3BB3CCFC.2367A7C3@home.com> Message-ID: <3BB3EA1B.FF6FD2C0@home.com> I fixed my code by turning the one dimensional arrays into 3 dimensional ones. This is much faster and totally sliced. But there still must be a way thats even faster. Rob. Rob wrote: > > I've been working on this for so long I may be missing the obvious. > Here is a snippet of code that I would like to at least get rid of one > more indexing operation. The problem is the presence of the one > dimensional array constants which are functions of x,y, or z depending > on their location: (you may recognize this as some FDTD code) Any > ideas? > > Thanks, Rob. > > ####################################/ > # Update the interior of the mesh: > # all vector H vector components > # > > ## for az in range(0,nz): > for ay in range(0,ny): > for ax in range(0,nx): > > dstore[ax,ay,0:nz]=Bx[ax,ay,0:nz] > > Bx[ax,ay,0:nz] = Bx[ax,ay,0:nz] * C1[0:nz] + ( ( > (Ey[ax,ay,1:(nz+1)]-Ey[ax,ay,0:nz] ) / dz - > (Ez[ax,ay+1,0:nz]-Ez[ax,ay,0:nz]) / dy ) * C2[0:nz] ) > > Hx[ax,ay,0:nz]= Hx[ax,ay,0:nz] * C3[ay] + ( ( > Bx[ax,ay,0:nz] * C5[ax] - dstore[ax,ay,0:nz] * C6[ax] ) * > C4h[ay] ) > > dstore[ax,ay,0:nz]=By[ax,ay,0:nz] > > By[ax,ay,0:nz] = By[ax,ay,0:nz] * C1[ax] + ( ( > (Ez[ax+1,ay,0:nz]-Ez[ax,ay,0:nz]) / dx - > (Ex[ax,ay,1:(nz+1)]-Ex[ax,ay,0:nz]) / dz ) * C2[ax] ) > > Hy[ax,ay,0:nz]= Hy[ax,ay,0:nz] * C3[0:nz] + ( ( > By[ax,ay,0:nz] * C5[ay] - dstore[ax,ay,0:nz] * C6[ay] ) * > C4h[0:nz] ) > > dstore[ax,ay,0:nz]=Bz[ax,ay,0:nz] > > Bz[ax,ay,0:nz] = Bz[ax,ay,0:nz] * C1[ay] + ( ( > (Ex[ax,ay+1,0:nz]-Ex[ax,ay,0:nz] ) / dy - > (Ey[ax+1,ay,0:nz]-Ey[ax,ay,0:nz] ) / dx ) * C2[ay] ) > > Hz[ax,ay,0:nz]= Hz[ax,ay,0:nz] * C3[ax] + ( ( > Bz[ax,ay,0:nz] * C5[0:nz] - dstore[ax,ay,0:nz] * C6[0:nz] > ) * C4h[ax] ) > > > -- > The Numeric Python EM 
Project > > www.members.home.net/europax > > _______________________________________________ > Numpy-discussion mailing list > Numpy-discussion at lists.sourceforge.net > https://lists.sourceforge.net/lists/listinfo/numpy-discussion -- The Numeric Python EM Project www.members.home.net/europax From europax at home.com Thu Sep 27 20:48:03 2001 From: europax at home.com (Rob) Date: Thu Sep 27 20:48:03 2001 Subject: [Numpy-discussion] searching for a better 3d field file format Message-ID: <3BB3F2B2.37E1560F@home.com> I've been using the "brick of bytes" format for my FDTD outputs, but unfortunately I can't find any type of 3d viewer for Windows. I am now using the X11/OpenGL based Animabob, which creates a movie of the series of dumped files. I am thinking of trying some other type of file format and viewer. I am wondering what others use to view 3d fields or data? Of course if it used Numpy and Python all the better. Thanks, Rob. -- The Numeric Python EM Project www.members.home.net/europax From europax at home.com Fri Sep 28 07:11:01 2001 From: europax at home.com (Rob) Date: Fri Sep 28 07:11:01 2001 Subject: [Numpy-discussion] searching for a better 3d field file format References: <3BB3F2B2.37E1560F@home.com> <20010928062219.3742.qmail@lisboa.ifm.uni-kiel.de> Message-ID: <3BB48489.1C5C623B@home.com> Hi Janko, yes I would love to have it. Thank you! I once installed Viz5D, but didn't quite know what to do with it. One thing though, the Windows version requires an X11 server, but that's better than no program at all :) I also installed OpenDX, the FreeBSD port version, but every time I try to do anything it core dumps. I also want to look into VRML. Rob. Janko Hauser wrote: > > I would try vis5d, which is a very nice out of the box viewer of > 5D-Data. I have a module which can write vis5d-files directly from > numpy-data. The module is not polished, so if there is interest I
> > http://vis5d.sourceforge.net/ > > http://www.ssec.wisc.edu/~billh/vis5d.html > > HTH, > > __Janko > > Rob writes: > > I've been using the "brick of bytes" format for my FDTD outputs, but > > unfortunately I can' find any type of 3d viewer for Windows. I am now > > using the X11/OpenGL based Animabob, which creates a movie of the series > > of dumped files. > > > > I am thinking of trying some other type of file format and viewer. I am > > wondering what others use to view 3d fields or data? Of course if it > > used Numpy and Python all the better. > > > > Thanks, Rob. > > > > > > -- > > The Numeric Python EM Project > > > > www.members.home.net/europax > > > > _______________________________________________ > > Numpy-discussion mailing list > > Numpy-discussion at lists.sourceforge.net > > https://lists.sourceforge.net/lists/listinfo/numpy-discussion -- The Numeric Python EM Project www.members.home.net/europax From n8gray at caltech.edu Fri Sep 28 16:41:03 2001 From: n8gray at caltech.edu (Nathaniel Gray) Date: Fri Sep 28 16:41:03 2001 Subject: [Numpy-discussion] ANN: gracePlot.py Message-ID: <01092816321207.01517@charter-DHCP-162> __________________________________________________________________ Announcing: gracePlot.py v0.5 An interactive, user-friendly python interface to the Grace plotting package. __________________________________________________________________ * WHAT IS IT? gracePlot.py is a high-level interface to the Grace plotting package available at: http://plasma-gate.weizmann.ac.il/Grace/ The goal of gracePlot is to offer the user an interactive plotting capability similar to that found in commercial packages such as Matlab and Mathematica, including GUI support for modifying plots and a user-friendly, pythonic interactive command-line interface. * WHAT FEATURES DOES IT OFFER? Since this package is in the early stages of development it does not yet provide high-level command-line access to all of Grace's plotting functionality. 
It does, however, offer: * Line Plots (with or without errorbars) * Histograms (with or without errorbars) * Multiple graphs (sets of axes) per plot * Multiple simultaneous plots (grace sessions) * Overlaid graphs, using a 'hold' command similar to Matlab's * Legends, titles, axis labels, and axis limits * Integration with Numerical Python and Scientific Python's Histogram object Note that all advanced features and customizations are available through the Grace UI, so you can compose rough plots in Python and then polish them up in Grace. * HOW DO I USE IT? Here is an example session that creates a plot with two sets of axes, putting a line plot in one and a histogram in the other: Python 2.1.1 (#2, Jul 31 2001, 14:10:42) [GCC 2.96 20000731 (Linux-Mandrake 8.0 2.96-0.48mdk)] on linux2 Type "copyright", "credits" or "license" for more information. >>> from gracePlot import gracePlot >>> p = gracePlot() # A grace session opens >>> p.plot( [1,2,3,4,5], [10, 4, 2, 4, 10], [1, 0.7, 0.5, 1, 2], ... symbols=1 ) # A plot with errorbars & symbols >>> p.title('Funding: Ministry of Silly Walks') >>> p.ylabel('Funding (Pounds\S10\N)') >>> p.multi(2,1) # Multiple plots: 2 rows, 1 column >>> p.xlimit(0, 6) # Set limits of x-axis >>> p.focus(1,0) # Set current graph to row 1, column 0 >>> p.histoPlot( [7, 15, 18, 20, 21], x_min=1, ... dy=[2, 3.5, 4.6, 7.2, 8.8]) # A histogram w/errorbars >>> p.xlabel('Silliness Index') >>> p.ylabel('Applications/yr') >>> p.xlimit(0, 6) # Set limits of x-axis The result of this session can be found at: http://www.idyll.org/~n8gray/code/index.html * WHERE DO I GET IT? 
gracePlot is available here: http://www.idyll.org/~n8gray/code/index.html ___________________________________________________________ Cheers, -n8 -- Nathaniel Gray California Institute of Technology Computation and Neural Systems -- From europax at home.com Fri Sep 28 17:25:01 2001 From: europax at home.com (Rob) Date: Fri Sep 28 17:25:01 2001 Subject: [Numpy-discussion] ANN: gracePlot.py References: <01092816321207.01517@charter-DHCP-162> Message-ID: <3BB5147B.BFFFCE3F@home.com> Can it do any 3d volume rendering? I've heard of grace, but know nothing about it. Rob. Nathaniel Gray wrote: > > __________________________________________________________________ > > Announcing: gracePlot.py v0.5 > > An interactive, user-friendly python interface to the > Grace plotting package. > > __________________________________________________________________ > > * WHAT IS IT? > > gracePlot.py is a high-level interface to the Grace plotting package available > at: http://plasma-gate.weizmann.ac.il/Grace/ The goal of gracePlot is to > offer the user an interactive plotting capability similar to that found in > commercial packages such as Matlab and Mathematica, including GUI support for > modifying plots and a user-friendly, pythonic interactive command-line > interface. > > * WHAT FEATURES DOES IT OFFER? > > Since this package is in the early stages of development it does not yet > provide high-level command-line access to all of Grace's plotting > functionality. 
It does, however, offer: > * Line Plots (with or without errorbars) > * Histograms (with or without errorbars) > * Multiple graphs (sets of axes) per plot > * Multiple simultaneous plots (grace sessions) > * Overlaid graphs, using a 'hold' command similar to Matlab's > * Legends, titles, axis labels, and axis limits > * Integration with Numerical Python and Scientific Python's Histogram > object > > Note that all advanced features and customizations are available through the > Grace UI, so you can compose rough plots in Python and then polish them up in > Grace. > > * HOW DO I USE IT? > > Here is an example session that creates a plot with two sets of axes, putting > a line plot in one and a histogram in the other: > Python 2.1.1 (#2, Jul 31 2001, 14:10:42) > [GCC 2.96 20000731 (Linux-Mandrake 8.0 2.96-0.48mdk)] on linux2 > Type "copyright", "credits" or "license" for more information. > >>> from gracePlot import gracePlot > >>> p = gracePlot() # A grace session opens > >>> p.plot( [1,2,3,4,5], [10, 4, 2, 4, 10], [1, 0.7, 0.5, 1, 2], > ... symbols=1 ) # A plot with errorbars & symbols > >>> p.title('Funding: Ministry of Silly Walks') > >>> p.ylabel('Funding (Pounds\S10\N)') > >>> p.multi(2,1) # Multiple plots: 2 rows, 1 column > >>> p.xlimit(0, 6) # Set limits of x-axis > >>> p.focus(1,0) # Set current graph to row 1, column 0 > >>> p.histoPlot( [7, 15, 18, 20, 21], x_min=1, > ... dy=[2, 3.5, 4.6, 7.2, 8.8]) # A histogram w/errorbars > >>> p.xlabel('Silliness Index') > >>> p.ylabel('Applications/yr') > >>> p.xlimit(0, 6) # Set limits of x-axis > > The result of this session can be found at: > http://www.idyll.org/~n8gray/code/index.html > > * WHERE DO I GET IT? 
> > gracePlot is available here: > http://www.idyll.org/~n8gray/code/index.html > > ___________________________________________________________ > > Cheers, > -n8 > > -- > Nathaniel Gray > > California Institute of Technology > Computation and Neural Systems > -- > > _______________________________________________ > Numpy-discussion mailing list > Numpy-discussion at lists.sourceforge.net > https://lists.sourceforge.net/lists/listinfo/numpy-discussion -- The Numeric Python EM Project www.members.home.net/europax From n8gray at caltech.edu Fri Sep 28 17:32:01 2001 From: n8gray at caltech.edu (Nathaniel Gray) Date: Fri Sep 28 17:32:01 2001 Subject: [Numpy-discussion] ANN: gracePlot.py In-Reply-To: <3BB5147B.BFFFCE3F@home.com> References: <01092816321207.01517@charter-DHCP-162> <3BB5147B.BFFFCE3F@home.com> Message-ID: <0109281722410A.01517@charter-DHCP-162> Nope. Grace is only for 2-d plots AFAIK. See their website for more info. You might want to check out VTK and OpenDX, both of which have entries in the Vaults of Parnassus. I've never tried either one but both look like industrial strength volume visualization packages and both have python bindings. -n8 On Friday 28 September 2001 05:23 pm, Rob wrote: > Can it do any 3d volume rendering? I've heard of grace, but know > nothing about it. Rob. > > Nathaniel Gray wrote: > > __________________________________________________________________ > > > > Announcing: gracePlot.py v0.5 > > > > An interactive, user-friendly python interface to the > > Grace plotting package. > > > > __________________________________________________________________ > > -- Nathaniel Gray California Institute of Technology Computation and Neural Systems -- From pearu at cens.ioc.ee Fri Sep 28 23:53:02 2001 From: pearu at cens.ioc.ee (Pearu Peterson) Date: Fri Sep 28 23:53:02 2001 Subject: [Numpy-discussion] ANN: gracePlot.py In-Reply-To: <0109281722410A.01517@charter-DHCP-162> Message-ID: On Fri, 28 Sep 2001, Nathaniel Gray wrote: > Nope. 
Grace is only for 2-d plots AFAIK. See their website for more info. > You might want to check out VTK and OpenDX, both of which have entries in the Rob, have you tried Mayavi (http://mayavi.sourceforge.net) that is fully 3D capable. And see http://cens.ioc.ee/projects/pyvtk/ that is prototype software to create VTK files from Python objects. Regards, Pearu The following fragment is from Mayavi README.txt: The MayaVi Data Visualizer ^^^^^^^^^^^^^^^^^^^^^^^^^^^^ MayaVi is a free, easy to use scientific data visualizer. It is written in Python and uses the amazing Visualization Toolkit (VTK) for the graphics. It provides a GUI written using Tkinter. MayaVi is free and distributed under the GNU GPL. It is also cross platform and should run on any platform where both Python and VTK are available (which is almost any *nix, Mac OSX or Windows). From europax at home.com Sun Sep 30 09:33:02 2001 From: europax at home.com (Rob) Date: Sun Sep 30 09:33:02 2001 Subject: [Numpy-discussion] ANN: gracePlot.py References: Message-ID: <3BB748D9.42955FE9@home.com> Thanks Pearu for the info. I tried to install the FreeBSD vtk port, but it won't compile. It ran for an hour and then crashed. I did install it on windows so we'll see how that works today. I've been focusing most of my effort to port Animabob to Cygwin. I am very impressed with Cygwin so far as its XFree86 was easy to set up with no XFree86config. I also found the OpenDX port to Cygwin, so I'll play with that a little as well. Rob. Pearu Peterson wrote: > > On Fri, 28 Sep 2001, Nathaniel Gray wrote: > > > Nope. Grace is only for 2-d plots AFAIK. See their website for more info. > > You might want to check out VTK and OpenDX, both of which have entries in the > > Rob, have you tried Mayavi (http://mayavi.sourceforge.net) that is > fully 3D capable. And see http://cens.ioc.ee/projects/pyvtk/ that is > prototype software to create VTK files from Python objects. 
> > Regards, > Pearu > > The following fragment is from Mayavi README.txt: > > The MayaVi Data Visualizer > ^^^^^^^^^^^^^^^^^^^^^^^^^^^^ > > MayaVi is a free, easy to use scientific data visualizer. It is > written in Python and uses the amazing Visualization Toolkit (VTK) for > the graphics. It provides a GUI written using. MayaVi is free and > distributed under the GNU GPL. It is also cross platform and should > run on any platform where both Python and VTK are available (which is > almost any *nix, Mac OSX or Windows). > -- The Numeric Python EM Project www.members.home.net/europax From myers at tc.cornell.edu Wed Sep 5 11:39:01 2001 From: myers at tc.cornell.edu (Chris Myers) Date: Wed Sep 5 11:39:01 2001 Subject: [Numpy-discussion] coercion for array-like objects for ufuncs Message-ID: <20010905143930.A10691@sowhat.tc.cornell.edu> I have a Python object that contains a NumPy array (among other things), and I'd like it to behave like an array in certain circumstances. (For example, I can define __getitem__ on the class so that the underlying NumPy array is indexed.) I'd like to be able to act on such an object with a ufunc, but am stymied. For example, if y is an instance of this array-like class of mine, then I get a type error if I try a reduction on it: >>> Numeric.minimum.reduce(y) Traceback (most recent call last): File "", line 1, in ? TypeError: function not supported for these types, and can't coerce to supported types Is there a way I can effect this coercion? I guess I'd prefer not to have to inherit from UserArray. 
Thanks, Chris ========================================================================== Chris Myers Cornell Theory Center -------------------------------------------------------------------------- 636 Rhodes Hall email: myers at tc.cornell.edu Cornell University phone: (607) 255-5894 / fax: (607) 254-8888 Ithaca, NY 14853 http://www.tc.cornell.edu/~myers ========================================================================== From tim.hochberg at ieee.org Wed Sep 5 11:54:01 2001 From: tim.hochberg at ieee.org (Tim Hochberg) Date: Wed Sep 5 11:54:01 2001 Subject: [Numpy-discussion] coercion for array-like objects for ufuncs References: <20010905143930.A10691@sowhat.tc.cornell.edu> Message-ID: <139301c1363c$09dc3a50$87740918@cx781526b> Hi Chris, I believe that all you should have to do is define an __array__ function similar to that in UserArray; def __array__(self,t=None): if t: return asarray(self.array,t) return asarray(self.array) Replace self.array with whatever you are calling your contained array. Regards, -tim ----- Original Message ----- From: "Chris Myers" To: Sent: Wednesday, September 05, 2001 11:39 AM Subject: [Numpy-discussion] coercion for array-like objects for ufuncs > I have a Python object that contains a NumPy array (among other > things), and I'd like it to behave like an array in certain > circumstances. (For example, I can define __getitem__ on the class so > that the underlying NumPy array is indexed.) > > I'd like to be able to act on such an object with a ufunc, but am > stymied. For example, if y is an instance of this array-like class of > mine, then I get a type error if I try a reduction on it: > > >>> Numeric.minimum.reduce(y) > Traceback (most recent call last): > File "", line 1, in ? > TypeError: function not supported for these types, and can't coerce to supported types > > > Is there a way I can effect this coercion? I guess I'd prefer not to > have to inherit from UserArray. 
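A minimal, runnable sketch of the __array__ approach Tim describes, using modern numpy as a stand-in for the old Numeric module (the Wrapped class and its array attribute are made-up names for illustration):

```python
import numpy as np

class Wrapped:
    """Hypothetical container exposing its data to ufuncs via __array__."""
    def __init__(self, data):
        self.array = np.asarray(data)

    def __getitem__(self, idx):
        # Indexing falls through to the underlying array.
        return self.array[idx]

    def __array__(self, t=None):
        # numpy calls this whenever it needs a plain array.
        if t:
            return np.asarray(self.array, t)
        return np.asarray(self.array)

y = Wrapped([[3, 1], [2, 5]])
# The reduction that raised TypeError on a bare object now works:
print(np.minimum.reduce(y))
```

With __array__ defined, any ufunc (not just this reduction) can coerce the object, so there is no need to inherit from UserArray.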
> > Thanks, > > Chris > > ========================================================================== > Chris Myers > Cornell Theory Center > -------------------------------------------------------------------------- > 636 Rhodes Hall email: myers at tc.cornell.edu > Cornell University phone: (607) 255-5894 / fax: (607) 254-8888 > Ithaca, NY 14853 http://www.tc.cornell.edu/~myers > ========================================================================== > > _______________________________________________ > Numpy-discussion mailing list > Numpy-discussion at lists.sourceforge.net > https://lists.sourceforge.net/lists/listinfo/numpy-discussion From oliphant at ee.byu.edu Wed Sep 5 12:40:02 2001 From: oliphant at ee.byu.edu (Travis Oliphant) Date: Wed Sep 5 12:40:02 2001 Subject: [Numpy-discussion] coercion for array-like objects for ufuncs In-Reply-To: <20010905143930.A10691@sowhat.tc.cornell.edu> Message-ID: > Is there a way I can effect this coercion? I guess I'd prefer not to > have to inherit from UserArray. You need to have an __array__ method defined in your class which returns the NumPy array to operate on. -Travis From europax at home.com Thu Sep 6 07:17:02 2001 From: europax at home.com (Rob) Date: Thu Sep 6 07:17:02 2001 Subject: [Numpy-discussion] ToyFDTDpython on Freshmeat yesterday Message-ID: <3B9784EF.17398549@home.com> From my web statistics it looks like about 100 people downloaded it. Should give Numpy a pretty good showing as I got it running almost as fast as C. Only slowdown was dumping the E fields into a file every few ticks to generate the movie. I know there must be better ways to dump an array than use indexing- but I was so excited when I finally got it to near C speeds that I just let it be. Rob. 
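Two faster ways to dump a field array than per-element indexing — a single binary write and a row-at-a-time ASCII write — sketched with modern numpy, where Numeric's tostring() is spelled tobytes(); the array contents and file names here are made up:

```python
import os
import tempfile
import numpy as np

E = np.arange(12.0).reshape(3, 4)  # stand-in for one E-field snapshot

# Binary dump: one call writes the whole buffer, no Python-level loop.
bin_path = os.path.join(tempfile.gettempdir(), "efield.bin")
with open(bin_path, "wb") as f:
    f.write(E.tobytes())

# ASCII dump: one formatted row per line.
txt_path = os.path.join(tempfile.gettempdir(), "efield.txt")
fmt = "%f " * E.shape[1] + "\n"
with open(txt_path, "w") as f:
    for r in range(E.shape[0]):
        f.write(fmt % tuple(E[r, :]))

# Round-trip the binary dump to confirm nothing was lost:
with open(bin_path, "rb") as f:
    back = np.frombuffer(f.read()).reshape(E.shape)
print(np.allclose(back, E))
```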
-- The Numeric Python EM Project www.members.home.net/europax From chrishbarker at home.net Thu Sep 6 09:36:03 2001 From: chrishbarker at home.net (Chris Barker) Date: Thu Sep 6 09:36:03 2001 Subject: [Numpy-discussion] ToyFDTDpython on Freshmeat yesterday References: <3B9784EF.17398549@home.com> Message-ID: <3B97AAF9.C8129FEB@home.net> Rob wrote: > Only slowdown was dumping the E fields into a file every few > ticks to generate the movie. I know there must be better ways to dump > an array than use indexing- If you want to dump the binary data, you can use: file.write(array.tostring()) If you need an ASCII representation, you can use various forms of: format_string = '%f '* A.shape[1] + '\n' for r in range(A.shape[0]): file.write(format_string%(tuple(A[r,:]))) You could probably use map to speed it up a little as well. I'd love to see an array-oriented version of fscanf and fprintf to make this faster and easier, but I have yet to get around to writing one. -Chris -- Christopher Barker, Ph.D. ChrisHBarker at home.net --- --- --- http://members.home.net/barkerlohmann ---@@ -----@@ -----@@ ------@@@ ------@@@ ------@@@ Oil Spill Modeling ------ @ ------ @ ------ @ Water Resources Engineering ------- --------- -------- Coastal and Fluvial Hydrodynamics -------------------------------------- ------------------------------------------------------------------------ From oliphant at ee.byu.edu Thu Sep 6 10:14:02 2001 From: oliphant at ee.byu.edu (Travis Oliphant) Date: Thu Sep 6 10:14:02 2001 Subject: [Numpy-discussion] ToyFDTDpython on Freshmeat yesterday In-Reply-To: <3B97AAF9.C8129FEB@home.net> Message-ID: > Rob wrote: > > Only slowdown was dumping the E fields into a file every few > > ticks to generate the movie. 
I know there must be better ways to dump > > an array than use indexing- > > If you want to dump the binary data, you can use: > > file.write(array.tostring()) > > If you need an ASCII representation, you can use various forms of: > > format_string = '%f '* A.shape[1] + '\n' > for r in range(A.shape[0]): > file.write(format_string%(tuple(A[r,:]))) > > You could probably use map to speed it up a little as well. > > I'd love to see an array-oriented version of fscanf and fprintf to make > this faster and easier, but I have yet to get around to writing one. Check out the IO section of SciPy. Numpyio along with a flexible ASCII_file reader are there. -Travis From chrishbarker at home.net Thu Sep 6 15:54:06 2001 From: chrishbarker at home.net (Chris Barker) Date: Thu Sep 6 15:54:06 2001 Subject: [Numpy-discussion] A possible addition to PEP 209 References: <004a01c037ee$964dd040$0200a8c0@home> <39EE3D2C.51A92A71@jps.net> <200010191725.TAA14895@chinon.cnrs-orleans.fr> <39EF63F6.B72C3853@jps.net> Message-ID: <3B980397.63F08731@home.net> Hi all, I started a discussion on c.l.p recently (http://mail.python.org/pipermail/python-list/2001-September/063502.html), and it brought up an interesting idea. In general, I'd like to see Python become more array-oriented, which PEP 209 and friends will help with. I want this because it provides a natural and efficient way of expressing your program, and because there are large opportunities for performance enhancements. While I make a lot of use of NumPy arrays at the moment, and get substantial performance benefits when I can use the overloaded operators and ufuncs, etc., Python itself still doesn't know that a NumPy array is a homogenous sequence (or might be), because it has no concept of such a thing. If the Python interpreter knew that it was homogenous, there could be a lot of opportunities for performance enhancements. In the above stated thread, I suggested the addition of a "homogenous" flag to sequences. 
I haven't gotten an enthusiastic response, but Skip Montanaro suggested: """ One approach might be to propose that the array object be "brought into the fold" as a builtin object and modifications be made to the virtual machine so that homogeneity can be assumed when operating on arrays. """ PEP 209 does propose that an array object be "brought into the fold" (or does it? would it be a builtin? if not, at least being part of the standard library would be a help), but it makes no mention of any modifications to the virtual machine that would allow optimisations outside of Numeric2 defined functions. Is this something worth adding? I understand that it is one thing for the homogenous sequence (or array) to be acknowledged in the VM, and quite another for actual optimizations to be written, but we need the former before we can get the latter. What do you all think?? -Chris -- Christopher Barker, Ph.D. ChrisHBarker at home.net --- --- --- http://members.home.net/barkerlohmann ---@@ -----@@ -----@@ ------@@@ ------@@@ ------@@@ Oil Spill Modeling ------ @ ------ @ ------ @ Water Resources Engineering ------- --------- -------- Coastal and Fluvial Hydrodynamics -------------------------------------- ------------------------------------------------------------------------ From paul at pfdubois.com Thu Sep 6 16:55:02 2001 From: paul at pfdubois.com (Paul F. Dubois) Date: Thu Sep 6 16:55:02 2001 Subject: [Numpy-discussion] Numeric 20.2.0 available in CVS or tar form Message-ID: <001201c1372f$4a189480$0301a8c0@plstn1.sfba.home.com> Numeric 20.2.0 is available at Sourceforge or via cvs. The tag for cvs is r20_2_0. I haven't done the Windows installers yet. -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From wqhdelphi at 263.net Thu Sep 6 20:10:02 2001 From: wqhdelphi at 263.net (wqh) Date: Thu Sep 6 20:10:02 2001 Subject: [Numpy-discussion] can I use symbolic computation with numpy? Message-ID: <3B983B3A.18232@mta3> An HTML attachment was scrubbed... URL: -------------- next part -------------- I should use this, but do not know how; if it cannot, then which package should I use? Thanks. From hinsen at cnrs-orleans.fr Fri Sep 7 06:14:02 2001 From: hinsen at cnrs-orleans.fr (Konrad Hinsen) Date: Fri Sep 7 06:14:02 2001 Subject: [Numpy-discussion] A possible addition to PEP 209 In-Reply-To: <3B980397.63F08731@home.net> (message from Chris Barker on Thu, 06 Sep 2001 16:15:35 -0700) References: <004a01c037ee$964dd040$0200a8c0@home> <39EE3D2C.51A92A71@jps.net> <200010191725.TAA14895@chinon.cnrs-orleans.fr> <39EF63F6.B72C3853@jps.net> <3B980397.63F08731@home.net> Message-ID: <200109071312.f87DCkw11410@chinon.cnrs-orleans.fr> > benefits when I can use the overloaded operators and ufuncs, etc. Python > itself still doesn't know that a NumPy array is a homogenous sequence > (or might be), because it has no concept of such a thing. If the Python > interpreter knew that it was homogenous, there could be a lot of > opportunities for performance enhancements. Such as? Given the overall dynamic nature of Python, I don't see any real opportunities outside array-specific code. What optimizations could be done knowing *only* that all elements of a sequence are of the same type, but without a particular data layout? Konrad. 
-- ------------------------------------------------------------------------------- Konrad Hinsen | E-Mail: hinsen at cnrs-orleans.fr Centre de Biophysique Moleculaire (CNRS) | Tel.: +33-2.38.25.56.24 Rue Charles Sadron | Fax: +33-2.38.63.15.17 45071 Orleans Cedex 2 | Deutsch/Esperanto/English/ France | Nederlands/Francais ------------------------------------------------------------------------------- From chrishbarker at home.net Fri Sep 7 17:01:23 2001 From: chrishbarker at home.net (Chris Barker) Date: Fri Sep 7 17:01:23 2001 Subject: [Numpy-discussion] A possible addition to PEP 209 References: <004a01c037ee$964dd040$0200a8c0@home> <39EE3D2C.51A92A71@jps.net> <200010191725.TAA14895@chinon.cnrs-orleans.fr> <39EF63F6.B72C3853@jps.net> <3B980397.63F08731@home.net> <200109071312.f87DCkw11410@chinon.cnrs-orleans.fr> Message-ID: <3B9964C9.6DA6E7CF@home.net> Konrad Hinsen wrote: > Such as? Given the overall dynamic nature of Python, I don't see any > real opportunities outside array-specific code. What optimizations > could be done knowing *only* that all elements of a sequence are of > the same type, but without a particular data layout? I remember Guido's answer to a FAQ a while ago (the current FAQ has a more terse version) that essentially stated that compiled Python wouldn't be much faster because of Python's dynamic nature, so that the expression x = y*z needs to be evaluated to see what type y is, what type z is, and what it means to multiply them, before the actual multiplication can take place. All that checking takes a whole lot longer than the multiplication itself. NumPy, of course, helps this along, because once it is determined that y and z are both NumPy arrays of the same shape and type, it can multiply all the elements in a C loop, without any further type checking. Where this breaks down, of course, is when you can't express what you want to do as a set of array functions. 
Once you learn the tricks these times are fairly rare (which is why NumPy is useful), but there are times when you can't use array-math, and need to loop through the elements of an array and operate on them. Python currently has to type check each of those elements every time something is done on them. In principle, if the Python VM knew that A was an array of Floats, it would know that A[i,j,k] was a Float, and it would not have to do a check. I think it would be easiest to get optimization in sequence-oriented operations, such as map() and list comprehensions: map(fun, A) when this is called, the function bound to the "fun" name is passed to map, as is the array bound to the name "A". The Array is known to be homogenous. map could conceivably compile a version of fun that worked on the type of the items in A, and then apply that function to all the elements in A, without type checking, and looping at the speed of C. This is the kind of optimization I am imagining. Kind of an instant U-func. Something similar could be done with list comprehensions. Of course, most things that can be done with list comprehensions and map() can be done with array operators anyway, so the real advantage would be a smarter compiler that could do that kind of thing inside a bunch of nested for loops. There is at least one project heading in that direction: * Psyco (Armin Rigo) - a run-time specializing virtual machine that sees what sorts of types are input to a function and compiles a type- or value-specific version of that function on-the-fly. I believe Armin is looking at some JIT code generators in addition to or instead of another virtual machine. Knowing that all the elements of an Array (or any other sequence) are the same type could help here a lot. Once a particular function was compiled with a given set of types, it could be called directly on all the elements of that array (and other arrays) with no type checking. 
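The per-element overhead described above is easy to observe. A quick sketch with modern numpy standing in for Numeric (array size and repeat count are arbitrary):

```python
import timeit
import numpy as np

n = 100_000
a = np.arange(n, dtype=float)
b = np.arange(n, dtype=float)

# Per-element loop: each multiply re-dispatches on type in the interpreter.
t_loop = timeit.timeit(lambda: [x * y for x, y in zip(a, b)], number=5)

# Vectorized: one type check up front, then a C loop over the buffers.
t_vec = timeit.timeit(lambda: a * b, number=5)

print(f"loop: {t_loop:.4f}s  vectorized: {t_vec:.4f}s")
```

On any machine the vectorized multiply wins by a wide margin, which is exactly the gap the homogeneity discussion is about.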
What it comes down to is that while Python's dynamic nature is wonderful, and very powerful and flexible, there are many, many times that it is not really needed, particularly inside a given small function. The standard answer about Python optimization is that you just need to write those few small functions in C. This is only really practical if they are functions that operate on particular expected inputs: essentially statically typed input (or at least not totally general). Even then, it is a substantial effort, even for those with extensive C experience (unlike me). It would be so much better if a Py2C or a JIT compiler could optimize those functions for you. I know this is a major programming effort, and will, at best, be years away, but I'd like to see Python move in a direction that makes it easier to do, and allows small steps to be done at a time. I think introducing the concept of a homogenous sequence could help a lot of these efforts. Numeric Arrays would be just a single example. Python already has strings, and tuples could be marked as homogenous when created, if they were. So could lists, but since they are mutable, their homogeneity could change at any time, so it might not be useful. I may be wrong about all this, I really don't know a whit about writing compilers or interpreters, or VMs, but I'm throwing around the idea, to see what I can learn, and see if it makes any sense. -Chris -- Christopher Barker, Ph.D. 
ChrisHBarker at home.net --- --- --- http://members.home.net/barkerlohmann ---@@ -----@@ -----@@ ------@@@ ------@@@ ------@@@ Oil Spill Modeling ------ @ ------ @ ------ @ Water Resources Engineering ------- --------- -------- Coastal and Fluvial Hydrodynamics -------------------------------------- ------------------------------------------------------------------------ From hinsen at cnrs-orleans.fr Sat Sep 8 15:11:02 2001 From: hinsen at cnrs-orleans.fr (Konrad Hinsen) Date: Sat Sep 8 15:11:02 2001 Subject: [Numpy-discussion] A possible addition to PEP 209 In-Reply-To: <3B9964C9.6DA6E7CF@home.net> (message from Chris Barker on Fri, 07 Sep 2001 17:22:33 -0700) References: <004a01c037ee$964dd040$0200a8c0@home> <39EE3D2C.51A92A71@jps.net> <200010191725.TAA14895@chinon.cnrs-orleans.fr> <39EF63F6.B72C3853@jps.net> <3B980397.63F08731@home.net> <200109071312.f87DCkw11410@chinon.cnrs-orleans.fr> <3B9964C9.6DA6E7CF@home.net> Message-ID: <200109082210.f88MAI515833@chinon.cnrs-orleans.fr> > check each of those elements every time something is done on them. In > principle, if the Python VM knew that A was an array of Floats, it would > know that A[i,j,k] was a Float, and it would not have to do a check. I Right, but that would only work for types that are specially treated by the interpreter. Just knowing that all elements are of the same type is not enough. In fact, the VM does not do any check, it just looks up type-specific pointers and calls the relevant functions. The other question is how much effort the Python developers would be willing to spend on this, it looks like a very big job to me, in fact a reimplementation of the interpreter. > when this is called, the function bound to the "fun" name is passed to > map, as is the array bound to the name "A". The Array is known to be > homogeous. map could conceivably compile a version of fun that worked on Yes, that could be done, provided there is also a means for compiling type-specific versions of a function. 
> I know this is a major programming effort, and will, at best, be years > away, but I'd like to see Python move in a direction that makes it > easier to do, and allows small steps to be done at a time. I think > introducing the concept of a homogeneous sequence could help a lot of I'd say that adding this feature is much less work than doing even the slightest bit of optimization. I know I am sounding pessimistic here, but, well, I am... Konrad. -- -- ------------------------------------------------------------------------------- Konrad Hinsen | E-Mail: hinsen at cnrs-orleans.fr Centre de Biophysique Moleculaire (CNRS) | Tel.: +33-2.38.25.56.24 Rue Charles Sadron | Fax: +33-2.38.63.15.17 45071 Orleans Cedex 2 | Deutsch/Esperanto/English/ France | Nederlands/Francais ------------------------------------------------------------------------------- From wqhdelphi at 263.net Sat Sep 8 19:36:01 2001 From: wqhdelphi at 263.net (wqh) Date: Sat Sep 8 19:36:01 2001 Subject: [Numpy-discussion] Is there some method to realize qr decomposition Message-ID: <3B9AD4F2.26917@mta1> An HTML attachment was scrubbed... URL: -------------- next part -------------- I looked through the doc, but cannot find any. What NumPy supports compared with MATLAB is too little; I think we must work hard to make it better. From jjl at pobox.com Sun Sep 9 10:49:01 2001 From: jjl at pobox.com (John J. Lee) Date: Sun Sep 9 10:49:01 2001 Subject: [Numpy-discussion] Is there some method to realize qr decomposition In-Reply-To: <3B9AD4F2.26917@mta1> Message-ID: Try SciPy http://www.scipy.org/ Lots of wrapped libraries, and other useful stuff. John From tpitts at accentopto.com Mon Sep 10 10:37:04 2001 From: tpitts at accentopto.com (Todd Alan Pitts, Ph.D.) Date: Mon Sep 10 10:37:04 2001 Subject: [Numpy-discussion] Problem with import ... 
In-Reply-To: ; from oliphant@ee.byu.edu on Thu, Sep 06, 2001 at 11:13:51AM -0600 References: <3B97AAF9.C8129FEB@home.net> Message-ID: <20010910113615.A22554@fermi.accentopto.com> In migrating from python 1.5.2 and Numeric 13 to python2.1.1 with Numeric-20.1.0 I have encountered a rather pernicious problem. I had previously been using f2py 1.232 to generate modules for the BLAS and a self-consistent subset of LAPACK. Importing the thus created blas and then lapack modules made their FORTRAN routines available to the FORTRAN code that I had written. Everything worked well. However, after I compiled python2.1.1 and Numeric-20.1.0, importing the blas and lapack modules no longer provides access to their routines. Moreover, I can't even import the lapack module because it requires routines from the blas module (that it can no longer see). I get this behavior on both RedHat 7.1 and 6.1 (e.g. with and without gcc 2.96). I verified that python2.2a behaves as 2.1.1 in this regard as well. I tried generating blas and lapack libraries and linking them in during creation of the shared libraries, but that only worked marginally. I am still getting unresolved symbol errors. Has anyone else seen this problem/difference? -Todd From karshi.hasanov at utoronto.ca Mon Sep 10 15:47:01 2001 From: karshi.hasanov at utoronto.ca (Karshi Hasanov) Date: Mon Sep 10 15:47:01 2001 Subject: [Numpy-discussion] unwrap_function Message-ID: <20010910224642Z234904-13777+6@bureau8.utcc.utoronto.ca> Hi all, Does NumPy provide an "unwrap" function like in MatLab? From chrishbarker at home.net Mon Sep 10 16:11:02 2001 From: chrishbarker at home.net (Chris Barker) Date: Mon Sep 10 16:11:02 2001 Subject: [Numpy-discussion] unwrap_function References: <20010910224642Z234904-13777+6@bureau8.utcc.utoronto.ca> Message-ID: <3B9D4D8D.E273C637@home.net> Karshi Hasanov wrote: > Does NumPy provide an "unwrap" function like in MatLab? For those who are wondering, this is unwrap in MATLAB: UNWRAP Unwrap phase angle. 
UNWRAP(P) unwraps radian phases P by changing absolute jumps greater than pi to their 2*pi complement. It unwraps along the first non-singleton dimension of P. P can be a scalar, vector, matrix, or N-D array. UNWRAP(P,TOL) uses a jump tolerance of TOL rather than the default TOL = pi. UNWRAP(P,[],DIM) unwraps along dimension DIM using the default tolerance. UNWRAP(P,TOL,DIM) uses a jump tolerance of TOL. See also ANGLE, ABS. And I think the answer is no, but it would be nice... -chris -- Christopher Barker, Ph.D. ChrisHBarker at home.net --- --- --- http://members.home.net/barkerlohmann ---@@ -----@@ -----@@ ------@@@ ------@@@ ------@@@ Oil Spill Modeling ------ @ ------ @ ------ @ Water Resources Engineering ------- --------- -------- Coastal and Fluvial Hydrodynamics -------------------------------------- ------------------------------------------------------------------------ From hinsen at cnrs-orleans.fr Tue Sep 11 13:26:02 2001 From: hinsen at cnrs-orleans.fr (Konrad Hinsen) Date: Tue Sep 11 13:26:02 2001 Subject: [Numpy-discussion] Problem with import ... In-Reply-To: <20010910113615.A22554@fermi.accentopto.com> (tpitts@accentopto.com) References: <3B97AAF9.C8129FEB@home.net> <20010910113615.A22554@fermi.accentopto.com> Message-ID: <200109112024.f8BKOxS29853@chinon.cnrs-orleans.fr> > FORTRAN code that I had written. Everything worked well. However, > after I compiled python2.1.1 and Numeric-20.1.0 importing the blas and > lapack modules no longer provides access to their routines. Moreover, Were you relying on symbols from one dynamic module being available to other dynamic modules? That was never guaranteed in Python; whether or not it works depends on the platform but also on the Python interpreter version. I am not aware of any relevant changes in Python 2.1, but then none of my code relies on this. The only safe way to call C/Fortran code in one dynamic module from another dynamic module is passing the addresses in CObject objects. Konrad. 
-- ------------------------------------------------------------------------------- Konrad Hinsen | E-Mail: hinsen at cnrs-orleans.fr Centre de Biophysique Moleculaire (CNRS) | Tel.: +33-2.38.25.56.24 Rue Charles Sadron | Fax: +33-2.38.63.15.17 45071 Orleans Cedex 2 | Deutsch/Esperanto/English/ France | Nederlands/Francais ------------------------------------------------------------------------------- From paul at pfdubois.com Tue Sep 11 14:58:03 2001 From: paul at pfdubois.com (Paul F. Dubois) Date: Tue Sep 11 14:58:03 2001 Subject: [Numpy-discussion] Dynamic loading and shared libraries article Message-ID: <000101c13b0c$aee25780$0201a8c0@plstn1.sfba.home.com> My column this month has a really great article by David Beazley and two colleagues about how dynamic loading and shared libraries actually work and some things you can do with it. It is in Computing in Science and Engineering, Sept/Oct 2001, Scientific Programming Department. I found the article a really big help in understanding this stuff. The magazine's web site is http://computer.org/cise. Paul -------------- next part -------------- An HTML attachment was scrubbed... URL: From herb at dolphinsearch.com Tue Sep 11 16:28:01 2001 From: herb at dolphinsearch.com (Herbert L. Roitblat) Date: Tue Sep 11 16:28:01 2001 Subject: [Numpy-discussion] Precision Message-ID: <016701c13b19$656abe00$cb02a8c0@dolphinsearch.com> I have been wanting to use Float128 numbers on an Alpha system running Linux and Python 2.0. Float128 does not seem to be available; at least Float128 is undefined. Is the Alpha capable of supporting Float128 numbers? If it is, what am I missing to be able to use it? Thanks, Herb -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From csmith at blakeschool.org Wed Sep 12 14:08:02 2001 From: csmith at blakeschool.org (Christopher Smith) Date: Wed Sep 12 14:08:02 2001 Subject: [Numpy-discussion] A proposal for a round(x,n) enhancement Message-ID: Hello, I would like to see an addition to the round(x,n) function syntax that would allow one to specify that x should be rounded to the nth significant digit as opposed to the nth digit to the left or right of the 1's digit. I am testing the waters right now on python-list and have not yet submitted a PEP. Some there have suggested that it be called a different function. Someone else has suggested that perhaps it could be added into SciPy or Numeric as a function there. I prefer the name round because it describes what you are doing. Someone suggested that MATLAB has a function called chop that does what I am proposing and at http://www.math.colostate.edu/~tavener/M561/lab5/lab5.pdf the document says that the "MATLAB function chop(x,n) rounds (not chops!) the number x to n significant digits." If this new function were called "chop" then any previous MATLAB users would know what to expect. (But why call it chop if you are actually rounding?) What I think would be nice is to modify round so it can round to a given number of sig. figs. Here is a def called Round that simulates what I am proposing:

from math import floor,log10
def Round(x,n=0):
    if n%1:
        d=int(10*n)
        return round(x,d-1-int(floor(log10(abs(x)))))
    else:
        return round(x,n)

print Round(1.23456,2)   #returns 1.23
print Round(1.23456,0.2) #returns 1.2

The way that I have enhanced round's syntax is to have it check to see if there is a non-zero decimal part to n. If there is, n is multiplied by 10 and the resulting value is used to compute which digit to round x to. n=0.1 will round to the first significant digit while n=0.4 and n=1.1 will round to the 4th and 11th, respectively. 
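To make the rule concrete, here are a few more worked cases; the def above is repeated so the snippet stands alone, and the extra inputs are only further illustrations of the rule just stated:

```python
from math import floor, log10

# same def as above, repeated so this snippet stands alone
def Round(x, n=0):
    if n % 1:
        d = int(10*n)
        return round(x, d-1-int(floor(log10(abs(x)))))
    else:
        return round(x, n)

# integer n behaves exactly like the builtin round():
assert Round(1.23456, 2) == 1.23
# fractional n picks significant digits: 0.1 -> 1 sig. fig.,
# 0.3 -> 3 sig. figs., and the count is independent of magnitude:
assert Round(1.23456, 0.1) == 1.0
assert Round(1.23456, 0.3) == 1.23
assert Round(12345.6, 0.2) == 12000.0
```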
I don't believe you will run into any numerical issues since even though something like .3 may be represented internally as 0.29999999999999999, multiplying it by 10 gets you back to 3, and this value is what is used in the call to round(). I am open to comments about implementation and function name. /c From csmith at blakeschool.org Sat Sep 15 13:28:01 2001 From: csmith at blakeschool.org (Christopher Smith) Date: Sat Sep 15 13:28:01 2001 Subject: [Numpy-discussion] Re: PEP proposal for round(x,n) enhancement Message-ID: To Chris and others interested, Would you mind giving me an opinion here? I think you made the best argument for not using the name "round" since floating n is already accepted by round and changing how n is interpreted would break anyone's code who passed in n with a non-zero decimal portion. Sigfig or chop is an alternative but I really don't like the name "chop" since it's not descriptive of the rounding that takes place. Sigfig is ok, but I am proposing a round function (below) that offers more than just sigfigs. So...how about "round2", which has precedence in the atan2() function. The two sorts of roundings that can be done with round2() are 1) rounding to a specified number of significant figures or 2) rounding to the nearest "x". The latter is useful if you are making a histogram, for example, where experimental data values may all be distinct but when the deviation of values is considered it makes sense to round the observations to a specified uncertainty, e.g. if two values differ by less than sigma then they could be considered to be part of the same "bin". The script is below. Thanks for any input, /c

####
from math import floor,log10
def round2(x,n=0,sigs4n=1):
    '''Return x rounded to the specified number of significant digits,
    n, as counted from the first non-zero digit.

    If n=0 (the default value for round2) then the magnitude of the
    number will be returned (e.g. round2(12) returns 10.0).

    If n<0 then x will be rounded to the nearest multiple of n which,
    by default, will be rounded to 1 digit (e.g. round2(1.23,-.28)
    will round 1.23 to the nearest multiple of 0.3).

    Regardless of n, if x=0, 0 will be returned.'''
    if x==0:
        return x
    if n<0:
        n=round2(-n,sigs4n)
        return n*int(x/n+.5)
    if n==0:
        return 10.**(int(floor(log10(abs(x)))))
    return round(x,int(n)-1-int(floor(log10(abs(x)))))
####

From soender at cs.utk.edu Sun Sep 16 09:18:02 2001 From: soender at cs.utk.edu (Peter Soendergaard) Date: Sun Sep 16 09:18:02 2001 Subject: [Numpy-discussion] re: gcc 3.0 vs. 2.95.2 Message-ID: Hello, I just browsed over the archives for Numpy-discussion and saw this, and decided to sign up. I work on the ATLAS project http://math-atlas.sourceforge.net/ and we have had similar problems with gcc 3.0. Gcc 3.0 has a completely new backend which produces much slower floating point code on i386 machines. It is most visible on the Athlon, but it also shows up on P4 and PIII machines. We haven't yet figured out if there are some optimizations that can make this go away, but if you need performance stick with the old 2.95 release for now. By the way, if you would like to use Atlas in NumPy (I don't know if you do it already) I might be of some help. There are C interfaces to the BLAS bundled with ATLAS, supporting both row-major and column-major storage. Cheers, Peter. From europax at home.com Sun Sep 16 10:33:01 2001 From: europax at home.com (Rob) Date: Sun Sep 16 10:33:01 2001 Subject: [Numpy-discussion] re: gcc 3.0 vs. 2.95.2 References: Message-ID: <3BA4E210.7337C815@home.com> Peter Soendergaard wrote: > > Hello, > > I just browsed over the archives for Numpy-discussion and saw this, and > decided to sign up. > > I work on the ATLAS-project http://math-atlas.sourceforge.net/ and we have > had similar problems with gcc3.0 Gcc 3.0 has a completely new backend > which produces much slower floating point code on i386 machines. 
It is > most visible on the Athlon, but it also also shows on on P4 and PIII > machines. We havn't yet figured out if there are some optimizations that > can make this go away, but if you need performance stick with the old 2.95 > release for now. > > By the way, if you would like to use Atlas in NumPy (I don't know if you > do it already) I might be of some help. There is c-interfaces to the BLAS > bundled with ATLAS, supporting both row-major and column-major storage. > > Cheers, > > Peter. > > FROM: Rob > DATE: 09/04/2001 18:02:05 > SUBJECT: [Numpy-discussion] Python and Numpy compiled on Athlon > optimized gcc3.01 > > Just for kicks last night I installed gcc3.01 which claims to have Athlon > optimization, ie. -march=athlon. I recompiled Python and Numpy, and then > ran a big simulation. The new compiler ran 200 seconds slower than the > old gcc2.95 with plain -march=pentium. I need to go to the gcc website > and see just what optimization they are claiming. Maybe I should have > also used -02. Rob. > > -- > The Numeric Python EM Project > > http://www.members.home.net/europax > > _______________________________________________ > Numpy-discussion mailing list > Numpy-discussion at lists.sourceforge.net > https://lists.sourceforge.net/lists/listinfo/numpy-discussion Hi Peter, great to know that I'm not alone. Maybe I can build an EM simulator using all integer math :). I think I mentioned this on the list before, but Win2k on my laptop (1Ghz) runs Numpy and Python faster than my 1.2Ghz Athlon DDR machine using FreeBSD :( Also, for reference, I have a 3DNow optimized MP3encoding program Gogo that encodes mp3's 10x faster on the athlon than Lame does on my FreeBSD system on the laptop. Go figure! -- The Numeric Python EM Project www.members.home.net/europax From paul at pfdubois.com Mon Sep 17 14:32:01 2001 From: paul at pfdubois.com (Paul F. 
Dubois) Date: Mon Sep 17 14:32:01 2001 Subject: [Numpy-discussion] RE: Segfault with latest lapack_litemodule using complex matrices. In-Reply-To: <01091601582300.06795@travis> Message-ID: <000001c13fbf$fbf315c0$3d01a8c0@plstn1.sfba.home.com> A. Mirone rode to the rescue and the fix is available as Numeric-20.2.1.tar.gz or in CVS. -----Original Message----- From: Travis Oliphant [mailto:oliphant.travis at ieee.org] Sent: Saturday, September 15, 2001 6:58 PM To: Paul F. Dubois Subject: Segfault with latest lapack_litemodule using complex matrices. Hi Paul, With the latest changes to the lapack_lite module, I now get segfaults when I call LinearAlgebra.eigenvectors using a complex-valued matrix. Could you check this on your system? Thanks, -Travis From chrishbarker at home.net Mon Sep 17 16:15:03 2001 From: chrishbarker at home.net (Chris Barker) Date: Mon Sep 17 16:15:03 2001 Subject: [Numpy-discussion] PyArray_Check problem. References: <004a01c037ee$964dd040$0200a8c0@home> <39EE3D2C.51A92A71@jps.net> <200010191725.TAA14895@chinon.cnrs-orleans.fr> Message-ID: <3BA68927.D7B50B21@home.net> Hi all, The MATLAB Digest just put out a little article about array indexing in MATLAB. I thought some of you might find it interesting, and it might give some ideas for NumPy2. Most of what MATLAB has, NumPy has an equivalent for, but I would love to see what MATLAB calls vector indexing, and a more natural way to do masks. I know vector indexing would be a pretty tricky thing to have work, at least with slices being references and all, but it would be a very nice thing!! Perhaps some brilliant person can figure out an elegant and efficient way to do it. Here is where you will find the article: http://www.mathworks.com/company/digest/sept01/matrix.shtml -Chris -- Christopher Barker, Ph.D. 
ChrisHBarker at home.net --- --- --- http://members.home.net/barkerlohmann ---@@ -----@@ -----@@ ------@@@ ------@@@ ------@@@ Oil Spill Modeling ------ @ ------ @ ------ @ Water Resources Engineering ------- --------- -------- Coastal and Fluvial Hydrodynamics -------------------------------------- ------------------------------------------------------------------------ From romberg at fsl.noaa.gov Mon Sep 17 16:21:01 2001 From: romberg at fsl.noaa.gov (Mike Romberg) Date: Mon Sep 17 16:21:01 2001 Subject: [Numpy-discussion] Offset 2D arrays Message-ID: <15270.34086.281888.272126@smaug.fsl.noaa.gov> I am attempting to create 2D arrays which are offset copies of a given starting array. For example if I have a 2D array like this:

array([[1, 2, 3],
       [4, 5, 6],
       [7, 8, 9]])

I would like to offset it by some amount in either or both the x and y dimension. Let's say that both the x and y offset would be 1. Then I would like to have an array like this:

array([[5, 6, 0],
       [8, 9, 0],
       [0, 0, 0]])

Here I don't really care about the values which are now zero. The main point is that now I can compare the data values at any given (x,y) point with the values at the adjacent point (over one on each axis). This would be useful for the kinds of calculations we need to do. I just can't come up with a numeric way to do this. Does anyone have any ideas? Thanks a lot, Mike Romberg (romberg at fsl.noaa.gov) From chrishbarker at home.net Mon Sep 17 16:23:01 2001 From: chrishbarker at home.net (Chris Barker) Date: Mon Sep 17 16:23:01 2001 Subject: [Numpy-discussion] Indexing in MATLAB References: <004a01c037ee$964dd040$0200a8c0@home> <39EE3D2C.51A92A71@jps.net> <200010191725.TAA14895@chinon.cnrs-orleans.fr> <3BA68927.D7B50B21@home.net> Message-ID: <3BA68B01.28CF36D9@home.net> OOPS! I replied to an arbitrary message to get the address, and I forgot to change the subject, so here is the same message I just posted, but with an appropriate subject. 
Hi all, The MATLAB Digest just put out a little article about array indexing in MATLAB. I thought some of you might find it interesting, and it might give some ideas for NumPy2. Most of what MATLAB has, NumPy has an equivalent for, but I would love to see what MATLAB calls vector indexing, and a more natural way to do masks. I know vector indexing would be a pretty tricky thing to have work, at least with slices being references and all, but it would be a very nice thing!! Perhaps some brilliant person can figure out an elegant and efficient way to do it. Here is where you will find the article: http://www.mathworks.com/company/digest/sept01/matrix.shtml -Chris -- Christopher Barker, Ph.D. ChrisHBarker at home.net --- --- --- http://members.home.net/barkerlohmann ---@@ -----@@ -----@@ ------@@@ ------@@@ ------@@@ Oil Spill Modeling ------ @ ------ @ ------ @ Water Resources Engineering ------- --------- -------- Coastal and Fluvial Hydrodynamics -------------------------------------- ------------------------------------------------------------------------ From nodwell at physics.ubc.ca Mon Sep 17 16:35:03 2001 From: nodwell at physics.ubc.ca (Eric Nodwell) Date: Mon Sep 17 16:35:03 2001 Subject: [Numpy-discussion] Offset 2D arrays In-Reply-To: <15270.34086.281888.272126@smaug.fsl.noaa.gov> References: <15270.34086.281888.272126@smaug.fsl.noaa.gov> Message-ID: <20010917163437.A26450@holler.physics.ubc.ca> Mike, As was pointed out to me when I had a similar query, one way to do this is to define a class which inherits UserArray and refine indexing and slicing. I actually shifted by an offset of one in the opposite direction to what you seem to require. I had intended to generalize to arbitrary offsets, but haven't had the time yet. Anyway, you're welcome to grab my code at http://www.physics.ubc.ca/~mbelab/python/arrayone as a starting point for your class. 
There are still some issues and quirkiness with the code, but they're documented along with work-arounds, and suggestions for fixes have been made on this mailing list. Again, it's a matter of time... regards, Eric On Mon, Sep 17, 2001 at 05:20:06PM -0600, Mike Romberg wrote: > > I am attempting to create 2D arrays which are offset copies of a > given starting array. For example if I have a 2D array like this: > > array([[1, 2, 3], > [4, 5, 6], > [7, 8, 9]]) > > I would like to offset it by some amount in either or both the x and > y dimension. Lets say that both the x and y offset would be 1. Then > I would like to have an array like this: > > > > array([[5, 6, 0], > [8, 9, 0], > [0, 0, 0]]) > > Here I don't really care about the values which are now zero. The > main point is that now I can compare the data values at any given > (x,y) point with the values at the adjacent point (over one on each > axis). This would be useful for the kinds of calculations we need to > do. I just can't come up with a numeric way to do this. Does anyone > have any ideas? > > Thanks alot, > > Mike Romberg (romberg at fsl.noaa.gov) > > _______________________________________________ > Numpy-discussion mailing list > Numpy-discussion at lists.sourceforge.net > https://lists.sourceforge.net/lists/listinfo/numpy-discussion -- ******************************** Eric Nodwell Ph.D. candidate Department of Physics University of British Columbia tel: 604-822-5425 fax: 604-822-5324 nodwell at physics.ubc.ca From nodwell at physics.ubc.ca Mon Sep 17 16:40:01 2001 From: nodwell at physics.ubc.ca (Eric Nodwell) Date: Mon Sep 17 16:40:01 2001 Subject: [Numpy-discussion] Offset 2D arrays In-Reply-To: <20010917163437.A26450@holler.physics.ubc.ca> References: <15270.34086.281888.272126@smaug.fsl.noaa.gov> <20010917163437.A26450@holler.physics.ubc.ca> Message-ID: <20010917163920.B26450@holler.physics.ubc.ca> For "refine indexing and slicing" read "re-define indexing and slicing". 
Oops :) From chrishbarker at home.net Mon Sep 17 16:41:02 2001 From: chrishbarker at home.net (Chris Barker) Date: Mon Sep 17 16:41:02 2001 Subject: [Numpy-discussion] Offset 2D arrays References: <15270.34086.281888.272126@smaug.fsl.noaa.gov> Message-ID: <3BA68F4D.A8ADFB68@home.net> Mike Romberg wrote: > > I am attempting to create 2D arrays which are offset copies of a > given starting array. For example if I have a 2D array like this: > have any ideas? This is not quite as clean as I would like, but this will work:

>>> a = array([[1, 2, 3],
...            [4, 5, 6],
...            [7, 8, 9]])
>>> m,n = a.shape
>>> b = zeros(a.shape)
>>> b[:m-1,:n-1] = a[1:,1:]
>>> b
array([[5, 6, 0],
       [8, 9, 0],
       [0, 0, 0]])
>>>

if b does not have to be the same shape as a, then it is really easy:

>>> b = a[1:,1:]

-Chris -- Christopher Barker, Ph.D. ChrisHBarker at home.net --- --- --- http://members.home.net/barkerlohmann ---@@ -----@@ -----@@ ------@@@ ------@@@ ------@@@ Oil Spill Modeling ------ @ ------ @ ------ @ Water Resources Engineering ------- --------- -------- Coastal and Fluvial Hydrodynamics -------------------------------------- ------------------------------------------------------------------------ From roitblat at hawaii.edu Tue Sep 18 10:57:03 2001 From: roitblat at hawaii.edu (Herbert L. Roitblat) Date: Tue Sep 18 10:57:03 2001 Subject: [Numpy-discussion] Offset 2D arrays References: <15270.34086.281888.272126@smaug.fsl.noaa.gov> <3BA68F4D.A8ADFB68@home.net> Message-ID: <006d01c1406b$4a2b13e0$86f4a8c0@dolphinsearch.com> This will work:

b = zeros((3,3))
b[:2,:2] = b[:2,:2] + a[1:,1:]

You need to know the size of a to use this scheme. ----- Original Message ----- From: "Chris Barker" To: "Mike Romberg" Cc: Sent: Monday, September 17, 2001 2:03 PM Subject: Re: [Numpy-discussion] Offset 2D arrays > Mike Romberg wrote: > > > > I am attempting to create 2D arrays which are offset copies of a > > given starting array. For example if I have a 2D array like this: > > > have any ideas? 
> > This is not quite as clean as i would like, but this will work: > > >>> a = array([[1, 2, 3], > ... [4, 5, 6], > ... [7, 8, 9]]) > >>> m,n = a.shape > >>> b[:m-1,:n-1] = a[1:,1:] > >>> b > array([[5, 6, 0], > [8, 9, 0], > [0, 0, 0]]) > >>> > > if b does not have to be the same shape as a, then it is really easy: > > >>> b = a[1:,1:] > > -Chris > > > -- > Christopher Barker, > Ph.D. > ChrisHBarker at home.net --- --- --- > http://members.home.net/barkerlohmann ---@@ -----@@ -----@@ > ------@@@ ------@@@ ------@@@ > Oil Spill Modeling ------ @ ------ @ ------ @ > Water Resources Engineering ------- --------- -------- > Coastal and Fluvial Hydrodynamics -------------------------------------- > ------------------------------------------------------------------------ > > _______________________________________________ > Numpy-discussion mailing list > Numpy-discussion at lists.sourceforge.net > https://lists.sourceforge.net/lists/listinfo/numpy-discussion > From hinsen at cnrs-orleans.fr Wed Sep 19 01:07:02 2001 From: hinsen at cnrs-orleans.fr (Konrad Hinsen) Date: Wed Sep 19 01:07:02 2001 Subject: [Numpy-discussion] Offset 2D arrays In-Reply-To: <006d01c1406b$4a2b13e0$86f4a8c0@dolphinsearch.com> (roitblat@hawaii.edu) References: <15270.34086.281888.272126@smaug.fsl.noaa.gov> <3BA68F4D.A8ADFB68@home.net> <006d01c1406b$4a2b13e0$86f4a8c0@dolphinsearch.com> Message-ID: <200109190805.f8J85K926087@chinon.cnrs-orleans.fr> > This will work: > b=zeros ((3,3)) > b[:2,:2] = b[:2,:2] + a[1:,1:] > > You need to know the size of a to use this scheme. How about this:

b = 0*a
b[:-1, :-1] = a[1:, 1:]

Works for any shape and type of a. Konrad. 
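Spelled out with the sample array from the original message (the import below uses the numpy module name as an assumption for anyone trying it today; with Numeric the same lines work after from Numeric import array):

```python
from numpy import array  # assumption; with Numeric: from Numeric import array

a = array([[1, 2, 3],
           [4, 5, 6],
           [7, 8, 9]])

b = 0*a                  # zero array with a's shape and element type
b[:-1, :-1] = a[1:, 1:]  # each element replaced by its lower-right neighbour

# b == [[5, 6, 0],
#       [8, 9, 0],
#       [0, 0, 0]]
```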
-- ------------------------------------------------------------------------------- Konrad Hinsen | E-Mail: hinsen at cnrs-orleans.fr Centre de Biophysique Moleculaire (CNRS) | Tel.: +33-2.38.25.56.24 Rue Charles Sadron | Fax: +33-2.38.63.15.17 45071 Orleans Cedex 2 | Deutsch/Esperanto/English/ France | Nederlands/Francais ------------------------------------------------------------------------------- From roitblat at hawaii.edu Wed Sep 19 10:14:03 2001 From: roitblat at hawaii.edu (Herbert L. Roitblat) Date: Wed Sep 19 10:14:03 2001 Subject: [Numpy-discussion] Offset 2D arrays References: <15270.34086.281888.272126@smaug.fsl.noaa.gov> <3BA68F4D.A8ADFB68@home.net> <006d01c1406b$4a2b13e0$86f4a8c0@dolphinsearch.com> <200109190805.f8J85K926087@chinon.cnrs-orleans.fr> Message-ID: <013901c1412e$64448e00$86f4a8c0@dolphinsearch.com> Konrad's solution is MUCH more elegant. HLR ----- Original Message ----- From: "Konrad Hinsen" To: Cc: ; ; Sent: Tuesday, September 18, 2001 10:05 PM Subject: Re: [Numpy-discussion] Offset 2D arrays > > This will work: > > b=zeros ((3,3)) > > b[:2,:2] = b[:2,:2] + a[1:,1:] > > > > You need to know the size of a to use this scheme. > > How about this: > > b = 0*a > b[:-1, :-1] = a[1:, 1:] > > Works for any shape and type of a. > > Konrad. 
> -- > -------------------------------------------------------------------------- ----- > Konrad Hinsen | E-Mail: hinsen at cnrs-orleans.fr > Centre de Biophysique Moleculaire (CNRS) | Tel.: +33-2.38.25.56.24 > Rue Charles Sadron | Fax: +33-2.38.63.15.17 > 45071 Orleans Cedex 2 | Deutsch/Esperanto/English/ > France | Nederlands/Francais > -------------------------------------------------------------------------- ----- > From romberg at fsl.noaa.gov Wed Sep 19 11:50:02 2001 From: romberg at fsl.noaa.gov (Mike Romberg) Date: Wed Sep 19 11:50:02 2001 Subject: [Numpy-discussion] Offset 2D arrays In-Reply-To: <013901c1412e$64448e00$86f4a8c0@dolphinsearch.com> References: <15270.34086.281888.272126@smaug.fsl.noaa.gov> <3BA68F4D.A8ADFB68@home.net> <006d01c1406b$4a2b13e0$86f4a8c0@dolphinsearch.com> <200109190805.f8J85K926087@chinon.cnrs-orleans.fr> <013901c1412e$64448e00$86f4a8c0@dolphinsearch.com> Message-ID: >>>>> " " == Herbert L Roitblat writes: > Konrad's solution is MUCH more elegant. > Message ----- From: "Konrad Hinsen" [snip] >> How about this: >> >> b = 0*a b[:-1, :-1] = a[1:, 1:] >> I think it looks cleaner as well. I've managed to create a function (with the help of the tips on this list) which can offset a 2d array in either or both the x and y dimensions. I strongly suspect that someone who *gets* python and numeric slicing better than I, can come up with a cleaner approach.

def getindicies(o, l):
    if o > 0:
        s1 = o; e1 = l; s2 = 0; e2 = l - o
    elif o < 0:
        s1 = 0; e1 = l + o; s2 = -o; e2 = l
    else:
        s1 = 0; e1 = l; s2 = 0; e2 = l
    return s1, e1, s2, e2

# return a 2d array whose dimensions match a with the data offset
# controlled by x and y.
def offset(a, x, y):
    sy1, ey1, sy2, ey2 = getindicies(y, a.shape[0])
    sx1, ex1, sx2, ex2 = getindicies(x, a.shape[1])
    b = zeros(a.shape)
    b[sy1:ey1,sx1:ex1] = a[sy2:ey2,sx2:ex2]
    return b

a = array(((1, 2, 3), (4, 5, 6), (7, 8, 9)))
# no offset
print offset(a, 0, 0)
# offset by 1 column in x
print offset(a, 1, 0)
# offset by 1 column (opposite dir) in x
print offset(a, -1, 0)
# offset by 2 columns in x
print offset(a, 2, 0)
# offset by 2 columns in y
print offset(a, 0, 2)

Thanks, Mike Romberg (romberg at fsl.noaa.gov) From europax at home.com Thu Sep 20 17:54:02 2001 From: europax at home.com (Rob) Date: Thu Sep 20 17:54:02 2001 Subject: [Numpy-discussion] expect a new wave of Numpy'ers Message-ID: <3BAA8F4C.5B5C599C@home.com> I just announced a ham radio type FDTD simulation to the ham newsgroups. I couldn't resist trying out a ham type antenna, instead of all the waveguide stuff I've been doing. Rob. -- The Numeric Python EM Project www.members.home.net/europax From harpend at xmission.com Tue Sep 25 15:20:04 2001 From: harpend at xmission.com (Henry Harpending) Date: Tue Sep 25 15:20:04 2001 Subject: [Numpy-discussion] append number to vector Message-ID: <20010925161348.C6064@xmission.com> I often find myself wanting to append a number to a vector. After fumbling experimentation I use

def comma(ar,inint):
    "comma(array,integer) returns array with integer appended"
    t=array([inint,])
    return(concatenate((ar,t),1))

which is used like the comma in apl, i.e. ar <- ar, inint. This seems klutzy to me. Is there a simpler way to do it? If ar were a list, ar.append(inint) works, but no such luck after ar has become an array. 
Thanks, Henry Harpending, University of Utah From gvermeul at labs.polycnrs-gre.fr Wed Sep 26 05:18:02 2001 From: gvermeul at labs.polycnrs-gre.fr (Gerard Vermeulen) Date: Wed Sep 26 05:18:02 2001 Subject: [Numpy-discussion] FAST and EASY data plotting for Python, NumPy and Qt Message-ID: <01092614174502.23413@taco.polycnrs-gre.fr> Announcing PyQwt-0.29.91 FAST and EASY data plotting for Python, NumPy and Qt PyQwt is a set of Python bindings for the Qwt C++ class library. The Qwt library extend the Qt framework with widgets for Scientific and Engineering applications. It contains QwtPlot, a 2d plotting widget, and widgets for data input/output such as and QwtCounter, QwtKnob, QwtThermo and QwtWheel. PyQwt requires and extends PyQt, a set of Python bindings for Qt. PyQwt requires NumPy. NumPy extends the Python language with new data types that make Python an ideal language for numerical computing and experimentation (like MatLab, but better). The home page of PyQwt is http://gerard.vermeulen.free.fr NEW in PyQwt-0.29.91: 1. compatible with PyQt-2.5/sip-2.5 and PyQt-2.4/sip-2.4. 2. compatible with NumPy-20.2.0, and lower. 3. *.exe installer for Windows (requires Qt-2.3.0-NC). 4. build instructions for Windows and other versions of Qt. 5. HTML documentation with installation instructions and a reference listing the Python calls to PyQwt that are different from the corresponding C++ calls to Qwt. 6. fixed reference counting bug in the methods with NumPy arrays as arguments. 7. new methods: QwtPlot.axisMargins() QwtPlot.closestCurve() QwtPlot.curveKeys() QwtPlot.markerKeys() QwtPlot.title() QwtPlot.titleFont() QwtScale.labelFormat() QwtScale.map() 8. changed methods: QwtCurve.verifyRange() -- cleaned up interface QwtPlot.adjust() -- is now fully implemented QwtPlot.enableLegend() -- (de)selects all items or a single item 9. 
removed methods (incompatible with Python, because unsafe, even in C++):
   QwtCurve.setRawData()
   QwtPlot.setCurveRawData()
   QwtSpline.copyValues()

Gerard Vermeulen

From chrishbarker at home.net Wed Sep 26 11:35:05 2001
From: chrishbarker at home.net (Chris Barker)
Date: Wed Sep 26 11:35:05 2001
Subject: [Numpy-discussion] round() and abs() Ufuncs???
References: <01092614174502.23413@taco.polycnrs-gre.fr>
Message-ID: <3BB2254D.EAC7879A@home.net>

Hi all,

I was recently surprised to find that there are no round() or abs() Ufuncs with Numeric. I'm imagining that they might exist under other names, but if not, I submit my versions for critique (lightly tested)

-Chris

from Numeric import *

def Uabs(a):
    """ A Ufunc version of the Python abs() function """
    a = asarray(a)
    if a.typecode() == 'D' or a.typecode() == 'F':  # for complex numbers
        return sqrt(a.imag**2 + a.real**2)
    else:
        return where(a < 0, -a, a)

def Uround(a, n=0):
    """ A Ufunc version of the Python round() function. It should behave in the same way.

    Note: I think this is the right thing to do for negative numbers, but not totally sure.
    (Uround(-0.5) = 0, but Uround(-0.5000001) = -1)
    It won't work for complex numbers
    """
    a = asarray(a)
    n = asarray(n)
    return floor((a * 10.**n) + 0.5) / 10.**n

--
Christopher Barker, Ph.D.
ChrisHBarker at home.net
http://members.home.net/barkerlohmann
Oil Spill Modeling / Water Resources Engineering / Coastal and Fluvial Hydrodynamics

From chrishbarker at home.net Wed Sep 26 12:00:02 2001
From: chrishbarker at home.net (Chris Barker)
Date: Wed Sep 26 12:00:02 2001
Subject: [Numpy-discussion] round() and abs() Ufuncs???
References: <01092614174502.23413@taco.polycnrs-gre.fr> <3BB2254D.EAC7879A@home.net>
Message-ID: <3BB22B00.E3CADF55@home.net>

Chris Barker wrote:
> I was recently surprised to find that there are no round() or abs()
> Ufuncs with Numeric. I'm imagining that they might exist under other
> names, but if not, I submit my versions for critique (lightly tested)

OK, Tim Hochberg was nice enough to point out to me that abs() works on NumPy arrays. However, it does not work on other sequences, so maybe we need this:

def abs(a):
    return abs(asarray(a))

-Chris

--
Christopher Barker, Ph.D.
ChrisHBarker at home.net
http://members.home.net/barkerlohmann
Oil Spill Modeling / Water Resources Engineering / Coastal and Fluvial Hydrodynamics

From kern at caltech.edu Wed Sep 26 12:01:57 2001
From: kern at caltech.edu (Robert Kern)
Date: Wed Sep 26 12:01:57 2001
Subject: [Numpy-discussion] round() and abs() Ufuncs???
In-Reply-To: <3BB2254D.EAC7879A@home.net>
References: <01092614174502.23413@taco.polycnrs-gre.fr> <3BB2254D.EAC7879A@home.net>
Message-ID: <20010926120012.A18328@myrddin.caltech.edu>

On Wed, Sep 26, 2001 at 11:58:21AM -0700, Chris Barker wrote:
> Hi all,
>
> I was recently surprised to find that there are no round() or abs()
> Ufuncs with Numeric. I'm imagining that they might exist under other
> names,

around and absolute.

--
Robert Kern
kern at caltech.edu

"In the fields of hell where the grass grows high
Are the graves of dreams allowed to die."
-- Richard Harter

From jjl at pobox.com Wed Sep 26 14:10:02 2001
From: jjl at pobox.com (John J. Lee)
Date: Wed Sep 26 14:10:02 2001
Subject: [Numpy-discussion] round() and abs() Ufuncs???
In-Reply-To: <3BB22B00.E3CADF55@home.net>
Message-ID:

On Wed, 26 Sep 2001, Chris Barker wrote:
[...]
> OK, Tim Hochberg was nice enough to point out to me that abs() works on
> NumPy arrays. However, it does not work on other sequences, so maybe we
> need this:
>
> def abs(a):
>     return abs(asarray(a))

Numeric.absolute, as Robert Kern pointed out, maybe that hasn't arrived in your mailbox...

John

From europax at home.com Thu Sep 27 18:07:03 2001
From: europax at home.com (Rob)
Date: Thu Sep 27 18:07:03 2001
Subject: [Numpy-discussion] Can I further Numpy-ize this code?
Message-ID: <3BB3CCFC.2367A7C3@home.com>

I've been working on this for so long I may be missing the obvious. Here is a snippet of code that I would like to at least get rid of one more indexing operation. The problem is the presence of the one dimensional array constants which are functions of x, y, or z depending on their location: (you may recognize this as some FDTD code) Any ideas?

Thanks, Rob.

####################################/
# Update the interior of the mesh:
# all vector H vector components
#
## for az in range(0,nz):
for ay in range(0,ny):
    for ax in range(0,nx):

        dstore[ax,ay,0:nz] = Bx[ax,ay,0:nz]

        Bx[ax,ay,0:nz] = Bx[ax,ay,0:nz] * C1[0:nz] + (
            ((Ey[ax,ay,1:(nz+1)] - Ey[ax,ay,0:nz]) / dz
             - (Ez[ax,ay+1,0:nz] - Ez[ax,ay,0:nz]) / dy) * C2[0:nz])

        Hx[ax,ay,0:nz] = Hx[ax,ay,0:nz] * C3[ay] + (
            (Bx[ax,ay,0:nz] * C5[ax] - dstore[ax,ay,0:nz] * C6[ax]) * C4h[ay])

        dstore[ax,ay,0:nz] = By[ax,ay,0:nz]

        By[ax,ay,0:nz] = By[ax,ay,0:nz] * C1[ax] + (
            ((Ez[ax+1,ay,0:nz] - Ez[ax,ay,0:nz]) / dx
             - (Ex[ax,ay,1:(nz+1)] - Ex[ax,ay,0:nz]) / dz) * C2[ax])

        Hy[ax,ay,0:nz] = Hy[ax,ay,0:nz] * C3[0:nz] + (
            (By[ax,ay,0:nz] * C5[ay] - dstore[ax,ay,0:nz] * C6[ay]) * C4h[0:nz])

        dstore[ax,ay,0:nz] = Bz[ax,ay,0:nz]

        Bz[ax,ay,0:nz] = Bz[ax,ay,0:nz] * C1[ay] + (
            ((Ex[ax,ay+1,0:nz] - Ex[ax,ay,0:nz]) / dy
             - (Ey[ax+1,ay,0:nz] - Ey[ax,ay,0:nz]) / dx) * C2[ay])

        Hz[ax,ay,0:nz] = Hz[ax,ay,0:nz] * C3[ax] + (
            (Bz[ax,ay,0:nz] * C5[0:nz] - dstore[ax,ay,0:nz] * C6[0:nz]) * C4h[ax])

--
The Numeric Python EM Project
www.members.home.net/europax
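[For readers of the archive: the per-component updates above can be collapsed into single sliced statements by reshaping each 1-D coefficient array so it broadcasts along the axis it depends on. A sketch in modern NumPy of just the Bx/Hx pair, checked against the triple loop from the post; the array shapes and the axis each coefficient varies along are assumptions read off the post's indexing:]

```python
import numpy as np

nx, ny, nz = 4, 5, 6
rng = np.random.default_rng(0)

# Field arrays; the E components carry one extra cell along the
# axis that gets differenced.
Bx = rng.random((nx, ny, nz)); Hx = rng.random((nx, ny, nz))
Ey = rng.random((nx, ny, nz + 1))
Ez = rng.random((nx, ny + 1, nz))

# 1-D coefficient arrays: C1, C2 vary along z; C3, C4h along y;
# C5, C6 along x (as in the Bx/Hx lines of the post).
C1, C2 = rng.random(nz), rng.random(nz)
C3, C4h = rng.random(ny), rng.random(ny)
C5, C6 = rng.random(nx), rng.random(nx)
dy = dz = 0.1

# Reference: the triple loop from the post, Bx/Hx pair only.
Bx_ref, Hx_ref = Bx.copy(), Hx.copy()
for ax in range(nx):
    for ay in range(ny):
        dstore = Bx_ref[ax, ay, :].copy()
        Bx_ref[ax, ay, :] = Bx_ref[ax, ay, :] * C1 + (
            (Ey[ax, ay, 1:] - Ey[ax, ay, :-1]) / dz
            - (Ez[ax, ay + 1, :] - Ez[ax, ay, :]) / dy) * C2
        Hx_ref[ax, ay, :] = Hx_ref[ax, ay, :] * C3[ay] + (
            Bx_ref[ax, ay, :] * C5[ax] - dstore * C6[ax]) * C4h[ay]

# Sliced version: reshape the coefficients so they broadcast along
# the right axis of the (nx, ny, nz) field arrays.
along_x = lambda c: c[:, None, None]   # varies with axis 0 (x)
along_y = lambda c: c[None, :, None]   # varies with axis 1 (y)
dstore = Bx.copy()
Bx = Bx * C1 + ((Ey[:, :, 1:] - Ey[:, :, :-1]) / dz
                - (Ez[:, 1:, :] - Ez[:, :-1, :]) / dy) * C2
Hx = Hx * along_y(C3) + (Bx * along_x(C5)
                         - dstore * along_x(C6)) * along_y(C4h)

assert np.allclose(Bx, Bx_ref) and np.allclose(Hx, Hx_ref)
```

[The By/Hy and Bz/Hz updates follow the same pattern with the axes permuted — which is, in effect, the "turn the 1-D arrays into 3-D ones" fix the author describes in the follow-up below.]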
From europax at home.com Thu Sep 27 20:12:01 2001
From: europax at home.com (Rob)
Date: Thu Sep 27 20:12:01 2001
Subject: [Numpy-discussion] Can I further Numpy-ize this code?
References: <3BB3CCFC.2367A7C3@home.com>
Message-ID: <3BB3EA1B.FF6FD2C0@home.com>

I fixed my code by turning the one dimensional arrays into 3 dimensional ones. This is much faster and totally sliced. But there still must be a way that's even faster. Rob.

Rob wrote:
> I've been working on this for so long I may be missing the obvious.
> Here is a snippet of code that I would like to at least get rid of one
> more indexing operation. The problem is the presence of the one
> dimensional array constants which are functions of x,y, or z depending
> on their location: (you may recognize this as some FDTD code) Any
> ideas?
>
> [... triple-loop H-field update code quoted in full; snipped ...]

--
The Numeric Python EM Project
www.members.home.net/europax

From europax at home.com Thu Sep 27 20:48:03 2001
From: europax at home.com (Rob)
Date: Thu Sep 27 20:48:03 2001
Subject: [Numpy-discussion] searching for a better 3d field file format
Message-ID: <3BB3F2B2.37E1560F@home.com>

I've been using the "brick of bytes" format for my FDTD outputs, but unfortunately I can't find any type of 3d viewer for Windows. I am now using the X11/OpenGL based Animabob, which creates a movie of the series of dumped files.

I am thinking of trying some other type of file format and viewer. I am wondering what others use to view 3d fields or data? Of course if it used Numpy and Python all the better.

Thanks, Rob.

--
The Numeric Python EM Project
www.members.home.net/europax

From europax at home.com Fri Sep 28 07:11:01 2001
From: europax at home.com (Rob)
Date: Fri Sep 28 07:11:01 2001
Subject: [Numpy-discussion] searching for a better 3d field file format
References: <3BB3F2B2.37E1560F@home.com> <20010928062219.3742.qmail@lisboa.ifm.uni-kiel.de>
Message-ID: <3BB48489.1C5C623B@home.com>

Hi Janko, yes I would love to have it. Thank you! I once installed Vis5D, but didn't quite know what to do with it. One thing though, the Windows version requires an X11 server, but that's better than no program at all :) I also installed OpenDX, the FreeBSD port version, but every time I try to do anything it core dumps. I also want to look into VRML. Rob.

Janko Hauser wrote:
> I would try vis5d, which is a very nice out of the box viewer of
> 5D-Data. I have a module which can write vis5d-files directly from
> numpy-data. The module is not polished, so if there is interest I
> would send it privately to you.
> http://vis5d.sourceforge.net/
> http://www.ssec.wisc.edu/~billh/vis5d.html
>
> HTH,
> __Janko
>
> Rob writes:
> > [... original post quoted in full; snipped ...]

--
The Numeric Python EM Project
www.members.home.net/europax

From n8gray at caltech.edu Fri Sep 28 16:41:03 2001
From: n8gray at caltech.edu (Nathaniel Gray)
Date: Fri Sep 28 16:41:03 2001
Subject: [Numpy-discussion] ANN: gracePlot.py
Message-ID: <01092816321207.01517@charter-DHCP-162>

__________________________________________________________________

Announcing: gracePlot.py v0.5

An interactive, user-friendly python interface to the Grace plotting package.

__________________________________________________________________

* WHAT IS IT?

gracePlot.py is a high-level interface to the Grace plotting package available at: http://plasma-gate.weizmann.ac.il/Grace/ The goal of gracePlot is to offer the user an interactive plotting capability similar to that found in commercial packages such as Matlab and Mathematica, including GUI support for modifying plots and a user-friendly, pythonic interactive command-line interface.

* WHAT FEATURES DOES IT OFFER?

Since this package is in the early stages of development it does not yet provide high-level command-line access to all of Grace's plotting functionality.
It does, however, offer:

 * Line Plots (with or without errorbars)
 * Histograms (with or without errorbars)
 * Multiple graphs (sets of axes) per plot
 * Multiple simultaneous plots (grace sessions)
 * Overlaid graphs, using a 'hold' command similar to Matlab's
 * Legends, titles, axis labels, and axis limits
 * Integration with Numerical Python and Scientific Python's Histogram object

Note that all advanced features and customizations are available through the Grace UI, so you can compose rough plots in Python and then polish them up in Grace.

* HOW DO I USE IT?

Here is an example session that creates a plot with two sets of axes, putting a line plot in one and a histogram in the other:

Python 2.1.1 (#2, Jul 31 2001, 14:10:42)
[GCC 2.96 20000731 (Linux-Mandrake 8.0 2.96-0.48mdk)] on linux2
Type "copyright", "credits" or "license" for more information.
>>> from gracePlot import gracePlot
>>> p = gracePlot()  # A grace session opens
>>> p.plot( [1,2,3,4,5], [10, 4, 2, 4, 10], [1, 0.7, 0.5, 1, 2],
...         symbols=1 )  # A plot with errorbars & symbols
>>> p.title('Funding: Ministry of Silly Walks')
>>> p.ylabel('Funding (Pounds\S10\N)')
>>> p.multi(2,1)  # Multiple plots: 2 rows, 1 column
>>> p.xlimit(0, 6)  # Set limits of x-axis
>>> p.focus(1,0)  # Set current graph to row 1, column 0
>>> p.histoPlot( [7, 15, 18, 20, 21], x_min=1,
...              dy=[2, 3.5, 4.6, 7.2, 8.8])  # A histogram w/errorbars
>>> p.xlabel('Silliness Index')
>>> p.ylabel('Applications/yr')
>>> p.xlimit(0, 6)  # Set limits of x-axis

The result of this session can be found at:
http://www.idyll.org/~n8gray/code/index.html

* WHERE DO I GET IT?
gracePlot is available here: http://www.idyll.org/~n8gray/code/index.html ___________________________________________________________ Cheers, -n8 -- Nathaniel Gray California Institute of Technology Computation and Neural Systems -- From europax at home.com Fri Sep 28 17:25:01 2001 From: europax at home.com (Rob) Date: Fri Sep 28 17:25:01 2001 Subject: [Numpy-discussion] ANN: gracePlot.py References: <01092816321207.01517@charter-DHCP-162> Message-ID: <3BB5147B.BFFFCE3F@home.com> Can it do any 3d volume rendering? I've heard of grace, but know nothing about it. Rob. Nathaniel Gray wrote: > > __________________________________________________________________ > > Announcing: gracePlot.py v0.5 > > An interactive, user-friendly python interface to the > Grace plotting package. > > __________________________________________________________________ > > * WHAT IS IT? > > gracePlot.py is a high-level interface to the Grace plotting package available > at: http://plasma-gate.weizmann.ac.il/Grace/ The goal of gracePlot is to > offer the user an interactive plotting capability similar to that found in > commercial packages such as Matlab and Mathematica, including GUI support for > modifying plots and a user-friendly, pythonic interactive command-line > interface. > > * WHAT FEATURES DOES IT OFFER? > > Since this package is in the early stages of development it does not yet > provide high-level command-line access to all of Grace's plotting > functionality. 
> [... feature list and example session quoted in full; snipped ...]
> > gracePlot is available here: > http://www.idyll.org/~n8gray/code/index.html > > ___________________________________________________________ > > Cheers, > -n8 > > -- > Nathaniel Gray > > California Institute of Technology > Computation and Neural Systems > -- > > _______________________________________________ > Numpy-discussion mailing list > Numpy-discussion at lists.sourceforge.net > https://lists.sourceforge.net/lists/listinfo/numpy-discussion -- The Numeric Python EM Project www.members.home.net/europax From n8gray at caltech.edu Fri Sep 28 17:32:01 2001 From: n8gray at caltech.edu (Nathaniel Gray) Date: Fri Sep 28 17:32:01 2001 Subject: [Numpy-discussion] ANN: gracePlot.py In-Reply-To: <3BB5147B.BFFFCE3F@home.com> References: <01092816321207.01517@charter-DHCP-162> <3BB5147B.BFFFCE3F@home.com> Message-ID: <0109281722410A.01517@charter-DHCP-162> Nope. Grace is only for 2-d plots AFAIK. See their website for more info. You might want to check out VTK and OpenDX, both of which have entries in the Vaults of Parnassus. I've never tried either one but both look like industrial strength volume visualization packages and both have python bindings. -n8 On Friday 28 September 2001 05:23 pm, Rob wrote: > Can it do any 3d volume rendering? I've heard of grace, but know > nothing about it. Rob. > > Nathaniel Gray wrote: > > __________________________________________________________________ > > > > Announcing: gracePlot.py v0.5 > > > > An interactive, user-friendly python interface to the > > Grace plotting package. > > > > __________________________________________________________________ > > -- Nathaniel Gray California Institute of Technology Computation and Neural Systems -- From pearu at cens.ioc.ee Fri Sep 28 23:53:02 2001 From: pearu at cens.ioc.ee (Pearu Peterson) Date: Fri Sep 28 23:53:02 2001 Subject: [Numpy-discussion] ANN: gracePlot.py In-Reply-To: <0109281722410A.01517@charter-DHCP-162> Message-ID: On Fri, 28 Sep 2001, Nathaniel Gray wrote: > Nope. 
> Grace is only for 2-d plots AFAIK. See their website for more info.
> You might want to check out VTK and OpenDX, both of which have entries in the

Rob, have you tried Mayavi (http://mayavi.sourceforge.net) that is fully 3D capable? And see http://cens.ioc.ee/projects/pyvtk/ that is prototype software to create VTK files from Python objects.

Regards,
Pearu

The following fragment is from Mayavi README.txt:

The MayaVi Data Visualizer
^^^^^^^^^^^^^^^^^^^^^^^^^^^^

MayaVi is a free, easy to use scientific data visualizer. It is written in Python and uses the amazing Visualization Toolkit (VTK) for the graphics. It provides a GUI written using Tkinter. MayaVi is free and distributed under the GNU GPL. It is also cross platform and should run on any platform where both Python and VTK are available (which is almost any *nix, Mac OSX or Windows).

From europax at home.com Sun Sep 30 09:33:02 2001
From: europax at home.com (Rob)
Date: Sun Sep 30 09:33:02 2001
Subject: [Numpy-discussion] ANN: gracePlot.py
References:
Message-ID: <3BB748D9.42955FE9@home.com>

Thanks Pearu for the info. I tried to install the FreeBSD vtk port, but it won't compile. It ran for an hour and then crashed. I did install it on Windows so we'll see how that works today. I've been focusing most of my effort to port Animabob to Cygwin. I am very impressed with Cygwin so far as its XFree86 was easy to set up with no XFree86config. I also found the OpenDX port to Cygwin, so I'll play with that a little as well. Rob.

Pearu Peterson wrote:
> On Fri, 28 Sep 2001, Nathaniel Gray wrote:
> > Nope. Grace is only for 2-d plots AFAIK. See their website for more info.
> > You might want to check out VTK and OpenDX, both of which have entries in the
>
> Rob, have you tried Mayavi (http://mayavi.sourceforge.net) that is
> fully 3D capable. And see http://cens.ioc.ee/projects/pyvtk/ that is
> > Regards,
> Pearu
>
> [... Mayavi README fragment quoted in full; snipped ...]

--
The Numeric Python EM Project
www.members.home.net/europax