From aurag at crm.umontreal.ca Wed Mar 1 09:32:48 2000 From: aurag at crm.umontreal.ca (Hassan Aurag) Date: Wed, 01 Mar 2000 14:32:48 GMT Subject: [Numpy-discussion] Derivatives Message-ID: <20000301.14324800@adam-aurag.penguinpowered.com> Hi, attached is a file called Derivative.py. It computes derivatives and is based on an algorithm found in Numerical Recipes in C. What to do you think about it and has anyone started a "serious" calculus oriented subpackage for Numerical Python in general? I mean: derivatives, partial derivatives, jacobian, hessian implemented fast and precise. On another note, why isn't infinity defined in NumPy? Why is tan(pi/2) a number even if big? Shouldn't it be infinity? From aurag at crm.umontreal.ca Wed Mar 1 09:34:18 2000 From: aurag at crm.umontreal.ca (Hassan Aurag) Date: Wed, 01 Mar 2000 14:34:18 GMT Subject: [Numpy-discussion] Derivative II Message-ID: <20000301.14341800@adam-aurag.penguinpowered.com> Forgot to attach file. Here it is -------------- next part -------------- An embedded and charset-unspecified text was scrubbed... Name: Derivative.py URL: From aurag at CRM.UMontreal.CA Wed Mar 1 10:46:10 2000 From: aurag at CRM.UMontreal.CA (Hassan Aurag) Date: Wed, 1 Mar 2000 10:46:10 -0500 (EST) Subject: [Numpy-discussion] Derivatives Message-ID: <200003011546.KAA22032@newton.CRM.UMontreal.CA> The answer is yes! It is an 1e10 and not 1e-10. At 0.0 you got to pick the interval yourself. You can't just use the starting point x > Date: Wed, 1 Mar 2000 16:06:28 +0100 (MET) > From: Fredrik Stenberg > To: Hassan Aurag > Subject: Re: [Numpy-discussion] Derivatives > MIME-Version: 1.0 > > > Hi, > > > > attached is a file called Derivative.py. > > > > It computes derivatives and is based on an algorithm found in > > Numerical Recipes in C. > > > > What to do you think about it and has anyone started a "serious" > > calculus oriented subpackage for Numerical Python in general? > > > > I mean: derivatives, partial derivatives, jacobian, hessian > > implemented fast and precise. > > > > On another note, why isn't infinity defined in NumPy? > > > > Why is tan(pi/2) a number even if big? Shouldn't it be infinity? > > > > > > > > I tried your algoritm on sin(x) and i got some rather interesting > results. > > ######### EXAMPLE############# > from math import sin > > def f(x): > return sin(x) > > import Derivative > > print Derivative.Diff(f,0.0) > > ########RESULT################ > -2.03844228853e-10 > > It should be approx 1.0 > > > > I found the error i think.. > check row 28 in Derivative > h = random()/1e-10 > should that be h = random()/1e+10?? > > > Fredrik > From KRodgers at ryanaero.com Wed Mar 1 11:32:21 2000 From: KRodgers at ryanaero.com (Rodgers, Kevin) Date: Wed, 1 Mar 2000 08:32:21 -0800 Subject: [Numpy-discussion] Derivatives Message-ID: <0D8C1A50C283D311ABB800508B612E5354B3B8@ryanaero.com> Whenever I see somebody implementing code based on "Numerical Recipies", I feel obligated to send them the following link: http://math.jpl.nasa.gov/nr/nr.html YMMV, as always . . . Kevin Rodgers Northrop Grumman Ryan Aeronautical Center krodgers at ryanaero.com > -----Original Message----- > From: Hassan Aurag [SMTP:aurag at crm.umontreal.ca] > Sent: Wednesday, March 01, 2000 6:33 AM > To: Gmath-devel at lists.sourceforge.net; > numpy-discussion at lists.sourceforge.net > Subject: [Numpy-discussion] Derivatives > > > > Hi, > > attached is a file called Derivative.py. > > It computes derivatives and is based on an algorithm found in > Numerical Recipes in C. 
> > What to do you think about it and has anyone started a "serious" > calculus oriented subpackage for Numerical Python in general? > > I mean: derivatives, partial derivatives, jacobian, hessian > implemented fast and precise. > > On another note, why isn't infinity defined in NumPy? > > Why is tan(pi/2) a number even if big? Shouldn't it be infinity? > > > > > > > _______________________________________________ > Numpy-discussion mailing list > Numpy-discussion at lists.sourceforge.net > http://lists.sourceforge.net/mailman/listinfo/numpy-discussion From zow at pensive.llnl.gov Wed Mar 1 12:49:23 2000 From: zow at pensive.llnl.gov (Zow Terry Brugger) Date: Wed, 01 Mar 2000 09:49:23 -0800 Subject: [Numpy-discussion] Derivatives In-Reply-To: Your message of "Wed, 01 Mar 2000 08:32:21 PST." <0D8C1A50C283D311ABB800508B612E5354B3B8@ryanaero.com> Message-ID: <200003011753.JAA11191@lists.sourceforge.net> -----BEGIN PGP SIGNED MESSAGE----- Hash: SHA1 Content-Type: text/plain; charset=us-ascii > Whenever I see somebody implementing code based on "Numerical Recipies", I > feel obligated to send them the following link: > > http://math.jpl.nasa.gov/nr/nr.html > Funny - that's the first thing I thought of too (just looking at it yesterday). Don't neglect to look at "the NR response" link buried at the very bottom of the page. > YMMV, as always . . . > More precisely, I think the point is that yourself or some other person so qualified should review any algorithm provided by a third party to ensure applicability to your problem domain. > Kevin Rodgers "Zow" Terry Brugger zow at acm.org zow at llnl.gov -----BEGIN PGP SIGNATURE----- Version: PGPfreeware 5.0i for non-commercial use Charset: noconv iQA/AwUBOL1YI6fuGVwXgOQkEQIg7QCgtysc351lOuA3zP41XXaRZdhVoQ4An0S9 UWudL1u0qIvPZumAOVkAtEnA =UXpc -----END PGP SIGNATURE----- From dubois1 at llnl.gov Wed Mar 1 15:53:24 2000 From: dubois1 at llnl.gov (Paul F. Dubois) Date: Wed, 1 Mar 2000 12:53:24 -0800 Subject: [Numpy-discussion] Pyfort 3.1 at Source Forge Message-ID: <00030112582400.15819@almanac> Pyfort, a Python-Fortran connection tool, is now available at pyfortran.sourceforge.net, instead of xfiles.llnl.gov. Documentation is available at the site. Version 3.1 uses distutils for easy extension building. A sample package is in the test subdirectory. Support for the pgf90 compiler on Linux is now available. Note that the limitation to non-explicitly interfaced routines has not been removed yet, but it will handle using pgf90-compiled F90 files in your Fortran. From zow at pensive.llnl.gov Wed Mar 1 16:40:35 2000 From: zow at pensive.llnl.gov (Zow Terry Brugger) Date: Wed, 01 Mar 2000 13:40:35 -0800 Subject: [Numpy-discussion] Derivatives (fwd) Message-ID: <200003012144.NAA21744@lists.sourceforge.net> -----BEGIN PGP SIGNED MESSAGE----- Hash: SHA1 Content-Type: text/plain; charset=us-ascii Hassan, This seemed relevant to pass back to the numeric list - hope you don't mind. My comments follow. - ------- Forwarded Message From aurag at crm.umontreal.ca Wed Mar 1 14:46:42 2000 From: aurag at crm.umontreal.ca (Hassan Aurag) Date: Wed, 01 Mar 2000 19:46:42 +0000 Subject: [Gmath-devel] Re: [Numpy-discussion] Derivatives Message-ID: I have to agree with you on most counts that Numerical Recipes in C=20 is not a full-blown encyclopedia on all subtleties of doing numerical=20 computations. 
However it does its job greatly for a big class of stuff I am interested in: minimization, non-linear systems of equations solving (the Newton routine given there is good, accurate and fast.) There are errors and problems as in most other numerical books. In truth, I don't think there is anything fully correct out there. When trying to make computations you have to do a lot of testing and a lot of thought even if the algorithm seems perfect. That applies to all books, recipes et al. I have just discovered that tan(pi/2) = 1.63317787284e+16 in Numerical Python. And we all know this is bad. It should be infinity, period. We should define a symbol called infinity and put the correct definition of tan, sin, ... for all known angles, then interpolate if needed for the rest, or whatever is used to actually compute those things. Peace - ------- End of Forwarded Message I believe you're actually talking about the math module, not the Numeric module (I'm not aware of tan or pi definitions in Numeric, but I haven't bothered to double check that). Nevertheless, I think it has relevance here as Numeric is all about doing serious number crunching. This problem is caused by the lack of infinite precision in Python. Of course, how is it even possible to perform infinite precision on non-rational numbers? The obvious solution is to allow the routine (tan() in this case) to recognize named constants that have relevance in their domain (pi in this case). This would fix the problem: math.tan(math.pi) = -1.22460635382e-16 but it still doesn't solve your problem, because the named constant would have the mathematical operation performed on it before it's passed into the function, ruining whatever intimate knowledge of the given named constant that routine has. Perhaps you could get the routine to recognize rational math on named constants (the problem with that solution is how do you not burden other routines with the knowledge of how to process that expression). Assuming you had that, even for your example, should the answer be positive or negative infinity? Another obvious solution is just to assume that any floating-point overflow is infinity and any underflow is zero. This obviously won't work, because some asymptotic functions (say 1/x^3) will overflow or underflow at values of x for which the correct answer is not properly infinity or zero. It is interesting to note that Matlab's behaviour is the same as Python's, which would indicate to me that there's not some easy solution to this problem that Guido et al. overlooked. I haven't really researched the problem at all (although now I'm interested), but I'd be interested if anyone has a proposal for (or reference to) how this problem can be solved in a general-purpose programming language at all (as there exists the distinct possibility that it cannot be done in Python without breaking backwards compatibility). Terry -----BEGIN PGP SIGNATURE----- Version: PGPfreeware 5.0i for non-commercial use Charset: noconv iQA/AwUBOL2OUqfuGVwXgOQkEQJTpQCggOuFT2ZVavzMhy+jZgoehnrK5uIAoMzO D5OOdLtBvT97ee7vkckO+0Qt =SmqL -----END PGP SIGNATURE----- From aurag at crm.umontreal.ca Wed Mar 1 22:19:10 2000 From: aurag at crm.umontreal.ca (Hassan Aurag) Date: Thu, 02 Mar 2000 03:19:10 GMT Subject: [Numpy-discussion] Derivatives (fwd) In-Reply-To: <200003012144.NAA21744@lists.sourceforge.net> References: <200003012144.NAA21744@lists.sourceforge.net> Message-ID: <20000302.3191000@adam-aurag.penguinpowered.com> Mathematica does it correctly.
How is another question. But I guess the idea is to pass it through some kind of database of exact solutions then optionally evaluate it. Mathematica is very good at that (symbolic stuff) >>>>>>>>>>>>>>>>>> Original Message <<<<<<<<<<<<<<<<<< On 3/1/00, 4:40:35 PM, "Zow" Terry Brugger wrote regarding [Numpy-discussion] Derivatives (fwd): > -----BEGIN PGP SIGNED MESSAGE----- > Hash: SHA1 > Content-Type: text/plain; charset=us-ascii > Hassan, > This seemed relevant to pass back to the numeric list - hope you don't mind. > My comments follow. > - ------- Forwarded Message > Date: Wed, 01 Mar 2000 19:46:42 +0000 > From: Hassan Aurag > To: zow at llnl.gov, Gmath-devel at lists.sourceforge.net > Subject: Re: [Gmath-devel] Re: [Numpy-discussion] Derivatives > I have to agree with you on most counts that Numerical Recipes in C=20 > is not a full-blown encyclopedia on all subtleties of doing numerical=20 > computations. > However it does its job greatly for a big class of stuff I am=20 > interested in: minimization, non-linear system of equations solving=20 > (The newton routine given there is good, accurate and fast.) > There are errors and problems as in most other numerical books. In=20 > truth, I don't think there is anything fully correct out there. > When trying to make computations you have to do a lot of testing and=20 > a lot of thought even if the algorithm seems perfect. That applies to=20 > all books, recipes et al. > I have just discovered that tan(pi/2) =3D 1.63317787284e+16 in=20 > numerical python. And we all know this is bad. It should be infinity=20 > period. We should define a symbol called infinity and put the correct=20 > definition of tan, sin .... for all known angles then interpolate if=20 > needed for the rest, or whatever is used to actually compute those thing= > s. > Peace > > - ------- End of Forwarded Message > I believe you're actually talking about the math module, not the numeric > module (I'm not aware of tan or pi definitions in numeric, but I haven't > bothered to double check that). Never the less, I think it has relevance here > as Numeric is all about doing serious number crunching. This problem is caused > by the lack of infinite precision in python. Of course, how is it even > possible to perform infinite precision on non-rational numbers? > The obvious solution is to allow the routine (tan() in this case) to recognize > named constants that have relevance in their domain (pi in this case). This > would fix the problem: > math.tan(math.pi) = -1.22460635382e-16 > but it still doesn't solve your problem because the named constant would have > the mathematical operation performed on it before it's passed into the > function, ruining whatever intimate knowledge of the given named constant that > routine has. > Perhaps you could get the routine to recognize rational math on named > constants (the problem with that solution is how do you not burden other > routines with the knowledge of how to process that expression). Assuming you > had that, even for your example, should the answer be positive or negative > infinity? > Another obvious solution is just to assume that any floating point overflow is > infinity and any underflow is zero. This obviously won't work because some > asymptotic functions (say 1/x^3) will overflow or underflow at values for x > for which the correct answer is not properly infinity or zero. 
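[The floating-point side of the point quoted above can be made concrete. The following is a small illustrative sketch only, written for a modern Python (float('inf') and math.isinf postdate the 1.5 era under discussion): math.pi/2 is merely the nearest double to pi/2, so tan() is never handed the exact pole and has no reason to return infinity.]

import math

t = math.tan(math.pi / 2.0)
print(t)          # ~1.633e+16, the value quoted in this thread
print(1.0 / t)    # ~6.12e-17: the rounding gap between math.pi/2 and the true pi/2

# IEEE-754 infinity itself is representable, and later Pythons expose it
# directly; arithmetic overflow also produces it.
inf = float('inf')
print(math.isinf(inf))   # True
print(1e308 * 10.0)      # inf
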
> It is interesting to note that Matlab's behaviour is the same as Python's, > which would indicate to me that there's not some easy solution to this problem > that Guido et. al. overlooked. I haven't really researched the problem at all > (although now I'm interested), but I'd be interested if anyone has a proposal > for (or reference to) how this problem can be solved in a general purpose > programming language at all (as there exists the distinct possibility that it > can not be done in Python without breaking backwards compatibility). > Terry > -----BEGIN PGP SIGNATURE----- > Version: PGPfreeware 5.0i for non-commercial use > Charset: noconv > iQA/AwUBOL2OUqfuGVwXgOQkEQJTpQCggOuFT2ZVavzMhy+jZgoehnrK5uIAoMzO > D5OOdLtBvT97ee7vkckO+0Qt > =SmqL > -----END PGP SIGNATURE----- > _______________________________________________ > Numpy-discussion mailing list > Numpy-discussion at lists.sourceforge.net > http://lists.sourceforge.net/mailman/listinfo/numpy-discussion From Oliphant.Travis at mayo.edu Thu Mar 2 02:18:28 2000 From: Oliphant.Travis at mayo.edu (Travis Oliphant) Date: Thu, 2 Mar 2000 01:18:28 -0600 (CST) Subject: [Numpy-discussion] Survey results Message-ID: I'm still here working away at the code cleanup of Numerical Python. I know some of you may be interested in the results of a survey related to that effort. Here are the results so far: 1: How long have your been using NumPy: 19 response: <= 1 year: 8 1-3 years: 6 4-5 years: 5 2: How important is NumPy to your daily work now? (1-Depend on it, 5-Dabble with it) 1 - 17 2 - 5 3 - 5 4 - 3 Avg: 1.8 3: How would you rate the current functionality of NumPy? (1-Love it, 5-Hate it) 1 - 2 2 - 19 3 - 8 4 - 1 Avg: 2.27 4: How important is it to you for NumPy to get into the Python core? (1-Very Important, 5-Not Important) 1 - 8 2 - 14 3 - 4 3 - 4 Avg: 2.1 5: How much interest do you have in improving NumPy? (1-Unlimited, 5-None) 1 - 10 2 - 13 3 - 7 Avg: 1.9 6: How concerned are you about alterations to the underlying C-structure of NumPy? (1-Very concerned, 5-Don't care)) 1 - 3 2 - 2 3 - 8 4 - 5 5 - 12 Avg: 3.7 7: How important is memory conservation to you? (1-Very important, 5-Not important) 1 - 11 2 - 10 3 - 6 4 - 1 5 - 2 Avg: 2.1 8: How important is it to you that the underlying code to NumPy be easy to understand? (1-Very important, 5-Not important) 1 - 7 2 - 13 3 - 5 4 - 5 Avg: 2.3 9: How important is it to you that NumPy be fast? (1-Very important, 5-Not important) 1 - 22 2 - 7 3 - 1 Avg: 1.3 10: How happy are you with the current coercion rules (including spacesaving arrays)? (1-Happy, 5-Miserable) 1 - 3 2 - 13 3 - 10 4 - 2 Avg: 2.4 11: Should mixed-precision arithmetic cast to the largest memory type (yes) or the least memory type (no)? Yes - 23 No - 6 12: Should object arrays (typecode='O') remain a part of NumPy? (1-Agree, 5-Disagree) 1 - 13 2 - 3 3 - 10 4 - 2 Avg: 2.1 13: Should slices (e.g. x[3:10]) be changed to be copies? Yes - 12 No - 16 So, as you can see the results were pretty clear in some areas and quite controversial in other areas. Have fun interpreting them... Travis From gpk at bell-labs.com Thu Mar 2 09:26:27 2000 From: gpk at bell-labs.com (Greg Kochanski) Date: Thu, 02 Mar 2000 09:26:27 -0500 Subject: [Numpy-discussion] Re: Numpy-discussion digest, Vol 1 #20 - 5 msgs References: <200003012006.MAA16796@lists.sourceforge.net> Message-ID: <38BE7A13.3F133F03@bell-labs.com> Look at Scientific Python (see http://starship.python.net/crew/hinsen/scientific.html ) for some differentiation routines. 
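[The Derivative.py attachment itself was scrubbed from the archive, so as a stand-in here is a minimal, generic central-difference sketch -- not Hassan's algorithm, and the step-size default is only illustrative. It reproduces the effect Fredrik reported earlier in this digest: a sensible small h gives sin'(0) close to 1.0, while the enormous step produced by h = random()/1e-10 gives a meaningless value of order 1e-10.]

from math import sin, cos, sqrt

def diff(f, x, h=None):
    # Central-difference estimate of f'(x); the default step is of order
    # sqrt(machine epsilon), scaled by |x|.
    if h is None:
        h = sqrt(2.2e-16) * max(abs(x), 1.0)    # ~1.5e-8 near x = 0
    return (f(x + h) - f(x - h)) / (2.0 * h)

print(diff(sin, 0.0))          # ~1.0, the expected cos(0)
print(diff(sin, 0.0, h=1e9))   # ~1e-10 garbage: the step the buggy line produced
print(cos(0.0))                # 1.0, for comparison
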
> From: Hassan Aurag > Date: Wed, 01 Mar 2000 14:32:48 GMT > To: Gmath-devel at lists.sourceforge.net, numpy-discussion at lists.sourceforge.net > Subject: [Numpy-discussion] Derivatives > > Hi, > > attached is a file called Derivative.py. > > It computes derivatives and is based on an algorithm found in > Numerical Recipes in C. > > What to do you think about it and has anyone started a "serious" > calculus oriented subpackage for Numerical Python in general? From vanandel at atd.ucar.edu Thu Mar 2 10:21:40 2000 From: vanandel at atd.ucar.edu (Joe Van Andel) Date: Thu, 02 Mar 2000 08:21:40 -0700 Subject: [Numpy-discussion] python FAQTS Message-ID: <38BE8704.5FE50987@atd.ucar.edu> I just started looking at the new Python Knowledge Base, maintained at http://python.faqts.com This looks like a nice simple way of maintaining FAQs and link collections. I've already registered, and started building links to various packages that work with Numeric Python. Since sourceforge.net doesn't seem to offer these collaborative FAQ building tools, I'd propose that we should add to the existing Numeric Python FAQs, and expand the link collection that I've (just) started. Could we add a link to the (Numeric) Python Knowledge Base to Numeric Python's home on sourceforge? -- Joe VanAndel National Center for Atmospheric Research http://www.atd.ucar.edu/~vanandel/ Internet: vanandel at ucar.edu From pauldubois at home.com Thu Mar 2 11:37:57 2000 From: pauldubois at home.com (Paul F. Dubois) Date: Thu, 2 Mar 2000 08:37:57 -0800 Subject: [Numpy-discussion] python FAQTS In-Reply-To: <38BE8704.5FE50987@atd.ucar.edu> Message-ID: Done. Also added link on python.faqts.com to Numerical and Pyfort. > -----Original Message----- > From: numpy-discussion-admin at lists.sourceforge.net > [mailto:numpy-discussion-admin at lists.sourceforge.net]On Behalf Of Joe > Van Andel > Sent: Thursday, March 02, 2000 7:22 AM > To: numpy-discussion at lists.sourceforge.net > Cc: nathan at synop.com > Subject: [Numpy-discussion] python FAQTS > > Could we add a link to the (Numeric) Python Knowledge Base to Numeric > Python's home on > sourceforge? > From nathan at synop.com Thu Mar 2 12:04:24 2000 From: nathan at synop.com (Nathan Wallace) Date: Thu, 02 Mar 2000 12:04:24 -0500 Subject: [Numpy-discussion] python FAQTS References: Message-ID: <38BE9F18.7E840552@synop.com> Please don't hesitate to ask if there are any changes / improvements to FAQTs that could be made to help your project. Cheers, Nathan > Done. Also added link on python.faqts.com to Numerical and Pyfort. From collins at rushe.aero.org Thu Mar 2 14:40:58 2000 From: collins at rushe.aero.org (JEFFERY COLLINS) Date: Thu, 2 Mar 2000 11:40:58 -0800 Subject: [Numpy-discussion] numpy on NT Message-ID: <200003021940.LAA13341@rushe.aero.org> A colleague is attempting to install numpy on an NT machine. How is this done? I tried to help, but the install procedure is apparently different from what I am accustomed on Unix. It appears that the python-numpy-15.2.zip is a precompiled distribution ready for dumping into some directory. Since it doesn't contain a setup.py, I presume that Distutils isn't necessary. Also, what is the accepted way of setting PYTHONPATH on NT? Thanks, Jeff From DavidA at ActiveState.com Thu Mar 2 14:55:15 2000 From: DavidA at ActiveState.com (David Ascher) Date: Thu, 2 Mar 2000 11:55:15 -0800 Subject: [Numpy-discussion] numpy on NT In-Reply-To: <200003021940.LAA13341@rushe.aero.org> Message-ID: > A colleague is attempting to install numpy on an NT machine. 
How is > this done? I tried to help, but the install procedure is apparently > different from what I am accustomed on Unix. > > It appears that the python-numpy-15.2.zip is a precompiled > distribution ready for dumping into some directory. Since it doesn't > contain a setup.py, I presume that Distutils isn't necessary. Indeed. You can just unzip it straight in to your main Python directory (typically C:\Program Files\Python). > Also, what is the accepted way of setting PYTHONPATH on NT? Go to the control panel, click on the System icon, pick the Environment tab, and add an entry (usually in the USER section) for PYTHONPATH. But you only need to do so if you don't want to install NumPy in the main Python directory. --david From pauldubois at home.com Mon Mar 6 13:50:19 2000 From: pauldubois at home.com (Paul F. Dubois) Date: Mon, 6 Mar 2000 10:50:19 -0800 Subject: [Numpy-discussion] Volunteer sought for BLAS/LINPACK restructure In-Reply-To: <001801bf71fa$41f681a0$01f936d1@janus> Message-ID: We are seeking a volunteer developer for Numeric who will remove the current BLAS/LINPACK lite stuff in favor of linking to whatever the native version is on a particular machine. The current default is that you have to work harder to get the good ones than our bad ones; we want to reverse that. We have gotten a lot of complaints about the current situation and while we are aware of the counter arguments the Council of Nummies has reached a consensus to do this. A truly excited volunteer would widen the amount of stuff that the interface can get to. They would work via the CVS tree; see http://numpy.sourceforge.net. Please reply to dubois at users.sourceforge.net. > -----Original Message----- > From: numpy-discussion-admin at lists.sourceforge.net > [mailto:numpy-discussion-admin at lists.sourceforge.net]On Behalf Of James > R. Webb > Sent: Monday, February 07, 2000 10:04 PM > To: numpy-discussion at lists.sourceforge.net > Cc: matrix-sig at python.org > Subject: [Numpy-discussion] Re: [Matrix-SIG] An Experiment in > code-cleanup. > > > There is now a linux native BLAS available through links at > http://www.cs.utk.edu/~ghenry/distrib/ courtesy of the ASCI Option Red > Project. > > There is also ATLAS (http://www.netlib.org/atlas/). > > Either library seems to link into NumPy without a hitch. > > ----- Original Message ----- > From: "Beausoleil, Raymond" > To: > Cc: > Sent: Tuesday, February 08, 2000 2:31 PM > Subject: RE: [Matrix-SIG] An Experiment in code-cleanup. > > > > I've been reading the posts on this topic with considerable > interest. For > a > > moment, I want to emphasize the "code-cleanup" angle more literally than > the > > functionality mods suggested so far. > > > > Several months ago, I hacked my personal copy of the NumPy > distribution so > > that I could use the Intel Math Kernel Library for Windows. The IMKL is > > (1) freely available from Intel at > > http://developer.intel.com/vtune/perflibst/mkl/index.htm; > > (2) basically BLAS and LAPACK, with an FFT or two thrown in for good > > measure; > > (3) optimized for the different x86 processors (e.g., generic > x86, Pentium > > II & III); > > (4) configured to use 1, 2, or 4 processors; and > > (5) configured to use multithreading. > > It is an impressive, fast implementation. I'm sure there are similar > native > > libraries available on other platforms. > > > > Probably due to my inexperience with both Python and NumPy, it took me a > > couple of days to successfully tear out the f2c'd stuff and get the IMKL > > linking correctly. 
The parts I've used work fine, but there are probably > > other features that I haven't tested yet that still aren't up > to snuff. In > > any case, the resulting code wasn't very pretty. > > > > As long as the NumPy code is going to be commented and cleaned > up, I'd be > > glad to help make sure that the process of using a native BLAS/LAPACK > > distribution (which was probably compiled using Fortran storage > and naming > > conventions) is more straightforward. Among the more tedious issues to > > consider are: > > (1) The extent of the support for LAPACK. Do we want to stick > with LAPACK > > Lite? > > (2) The storage format. If we've still got row-ordered matrices > under the > > hood, and we want to use native LAPACK libraries that were > compiled using > > column-major format, then we'll have to be careful to set all > of the flags > > correctly. This isn't going to be a big deal, _unless_ NumPy > will support > > more of LAPACK when a native library is available. Then, of > course, there > > are the special cases: the IMKL has both a C and a Fortran interface to > the > > BLAS. > > (3) Through the judicious use of header files with compiler-dependent > flags, > > we could accommodate the various naming conventions used when > the FORTRAN > > libraries were compiled (e.g., sgetrf_ or SGETRF). > > > > The primary output of this effort would be an expansion of the > "Compilation > > Notes" subsection of Section 15 of the NumPy documentation, and some > header > > files that made the recompilation easier than it is now. > > > > Regards, > > > > Ray > > > > ============================ > > Ray Beausoleil > > Hewlett-Packard Laboratories > > mailto:beausol at hpl.hp.com > > Vox: 425-883-6648 > > Fax: 425-883-2535 > > HP Telnet: 957-4951 > > ============================ > > > > _______________________________________________ > > Matrix-SIG maillist - Matrix-SIG at python.org > > http://www.python.org/mailman/listinfo/matrix-sig > > > > > _______________________________________________ > Numpy-discussion mailing list > Numpy-discussion at lists.sourceforge.net > http://lists.sourceforge.net/mailman/listinfo/numpy-discussion From godzilla at netmeg.net Mon Mar 6 17:16:27 2000 From: godzilla at netmeg.net (Les Schaffer) Date: Mon, 6 Mar 2000 17:16:27 -0500 (EST) Subject: [Numpy-discussion] moving matrix-SIG archives to SourceForge Message-ID: <14532.11835.181972.492971@gargle.gargle.HOWL> Is there any chance of getting the old matrix-SIG archives moved over to SourceForge location and have them made searchable? i wanted to look up stuff on broadcast rules in NumPy and remembered there were posts on it in the old archives, but i dont see any way to search the things. thanks les schaffer From hinsen at dirac.cnrs-orleans.fr Tue Mar 7 15:13:58 2000 From: hinsen at dirac.cnrs-orleans.fr (hinsen at dirac.cnrs-orleans.fr) Date: Tue, 7 Mar 2000 21:13:58 +0100 Subject: [Numpy-discussion] Volunteer sought for BLAS/LINPACK restructure In-Reply-To: References: Message-ID: <200003072013.VAA16809@chinon.cnrs-orleans.fr> > We are seeking a volunteer developer for Numeric who will remove the current > BLAS/LINPACK lite stuff in favor of linking to whatever the native version > is on a particular machine. The current default is that you have to work This should take several volunteers; nobody has access to all machine types! > A truly excited volunteer would widen the amount of stuff that the interface > can get to. 
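[Whether a native, optimized BLAS is actually being picked up can be checked empirically by timing a large matrix product. The sketch below is purely illustrative and uses the modern numpy API (numpy.dot, numpy.show_config), not the 2000-era Numeric / lapack_lite modules being discussed.]

import time
import numpy as np

n = 1000
a = np.random.rand(n, n)
b = np.random.rand(n, n)

t0 = time.time()
c = np.dot(a, b)
elapsed = time.time() - t0

# a naive matrix multiply costs about 2*n**3 floating-point operations
print("%.3f s, ~%.1f GFLOP/s" % (elapsed, 2.0 * n**3 / elapsed / 1e9))

np.show_config()   # reports which BLAS/LAPACK the installation was built against
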
They would work via the CVS tree; see That is not necessary; a full BLAS/LAPACK interface has existed for years, written by Doug Heisterkamp. In fact, the lapack_lite module is simply a subset of it. By some strange coincidence I have worked a bit on this just a few days ago. I have added a compilation/installation script and added thread support (such that LAPACK calls don't block other threads). You can pick it up at ftp://dirac.cnrs-orleans.fr/pub/ as a tar archive and as RPMs for (RedHat) Linux. Konrad. -- ------------------------------------------------------------------------------- Konrad Hinsen | E-Mail: hinsen at cnrs-orleans.fr Centre de Biophysique Moleculaire (CNRS) | Tel.: +33-2.38.25.55.69 Rue Charles Sadron | Fax: +33-2.38.63.15.17 45071 Orleans Cedex 2 | Deutsch/Esperanto/English/ France | Nederlands/Francais ------------------------------------------------------------------------------- From mhagger at blizzard.harvard.edu Tue Mar 7 23:33:15 2000 From: mhagger at blizzard.harvard.edu (Michael Haggerty) Date: 07 Mar 2000 23:33:15 -0500 Subject: [Numpy-discussion] Volunteer sought for BLAS/LINPACK restructure In-Reply-To: "Paul F. Dubois"'s message of "Mon, 6 Mar 2000 10:50:19 -0800" References: Message-ID: Hello, "Paul F. Dubois" writes: > We are seeking a volunteer developer for Numeric who will remove the > current BLAS/LINPACK lite stuff in favor of linking to whatever the > native version is on a particular machine. I don't have much time to help out with the python interface, but I have some (mostly) machine-translated C/C++ header files for LAPACK that might be useful. These files could be SWIGged as a starting point for a python binding. Even better, the perl (*yuck*) script that does the translation could be modified to prototype input vs. output arrays differently (the script determines which arrays are input vs. output from the comment lines from the Fortran source). Then SWIG typemaps could be written that handle input/output correctly and much of the wrapping job would be automated. Of course all this won't help with row vs. column storage format. The header files and a translation script can be obtained from http://monsoon.harvard.edu/~mhagger/download/ Unfortuantely I don't have the same thing for BLAS, mostly because the comments in the BLAS Fortran files are less careful and consistent, making machine interpretation more difficult. Please let me know if you find these headers useful. Yours, Michael -- Michael Haggerty mhagger at blizzard.harvard.edu From godzilla at netmeg.net Thu Mar 9 10:14:21 2000 From: godzilla at netmeg.net (Les Schaffer) Date: Thu, 9 Mar 2000 10:14:21 -0500 (EST) Subject: [Numpy-discussion] old matric-SIG archives Message-ID: <14535.49101.728831.410812@gargle.gargle.HOWL> well, let me try it this way: 1.) Is this list the place where people who have the capability of moving the old matrix-sig archives from python.org to sourceforge hang out? 2.) If you're here and listening, i see that the sourceforge archives are already searchable, so.... if we moved the old matrix-sig archives over to sourceforge, is there more work need done to make __them__ searchable? i volunteer to make this happen. les schaffer -- ____ Les Schaffer ___| --->> Engineering R&D <<--- Theoretical & Applied Mechanics | Designspring, Inc. Center for Radiophysics & Space Research | http://www.designspring.com/ Cornell Univ. 
schaffer at tam.cornell.edu | les at designspring.com From Oliphant.Travis at mayo.edu Thu Mar 9 15:46:24 2000 From: Oliphant.Travis at mayo.edu (Travis Oliphant) Date: Thu, 9 Mar 2000 14:46:24 -0600 (CST) Subject: [Numpy-discussion] Moving archives to Sourceforge. Message-ID: I'm responding, but I don't know the answer to your question. I am not familiar with Mailman, or how to move archives on to sourceforge, or if it is even possible. Sorry for the inadequate help, but I suspect that you are not getting a response because nobody knows. Sincerely, Travis Oliphant From godzilla at netmeg.net Thu Mar 9 16:52:36 2000 From: godzilla at netmeg.net (Les Schaffer) Date: Thu, 9 Mar 2000 16:52:36 -0500 (EST) Subject: [Numpy-discussion] Moving archives to Sourceforge. In-Reply-To: References: Message-ID: <14536.7460.380291.899054@gargle.gargle.HOWL> Travis: > qI'm responding, but I don't know the answer to your question. I am not familiar with Mailman, or how to move archives on to sourceforge, or if it is even possible. i have been in contact with Paul Dubois and i am tracking down the feasibility of making the transfer happen. it amounts to cat'ing the mailbox from the matrix sig archive onto the sourceforge archive, and re-running the archiver on the resulting mailbox (according to barry warsaw). so we need to find out if sourceforge allows us to fiddle with the mailbox in that fashion. les schaffer From Barrett at stsci.edu Mon Mar 13 14:19:51 2000 From: Barrett at stsci.edu (Paul Barrett) Date: Mon, 13 Mar 2000 14:19:51 -0500 (EST) Subject: [Numpy-discussion] Documentation Message-ID: <14541.15379.555688.924323@nem-srvr.stsci.edu> I have a couple of question about the Numpy documentation: 1. Is there a recent version of the Numerical Python manual available anywhere? I can't find it at the SourceForge and I've tried xfiles.llnl.org, but can't get through. (But from what Paul Dubois has said recently about the LLNL site, I shouldn't expect to either.) 2. Have any changes been made to the documentation since about Q1 1999? I think my current version dates from about this period. -- Paul From pbleyer at dgf.uchile.cl Mon Mar 13 16:54:29 2000 From: pbleyer at dgf.uchile.cl (Pablo Bleyer Kocik) Date: Mon, 13 Mar 2000 16:54:29 -0500 Subject: [Numpy-discussion] Documentation References: <14541.15379.555688.924323@nem-srvr.stsci.edu> Message-ID: <38CD6395.AA5FCF73@dgf.uchile.cl> Paul Barrett wrote: > I have a couple of question about the Numpy documentation: > > 1. Is there a recent version of the Numerical Python manual available > anywhere? I can't find it at the SourceForge and I've tried > xfiles.llnl.org, but can't get through. (But from what Paul Dubois > has said recently about the LLNL site, I shouldn't expect to > either.) > > 2. Have any changes been made to the documentation since about Q1 > 1999? I think my current version dates from about this period. > > -- Paul Who is maintaining the Python manual actually? -- Pablo Bleyer Kocik |"And all this science I don't understand pbleyer | It's just my job five days a week @dgf.uchile.cl | A rocket man, a rocket man..." - Elton John from sys import*;from string import*;a=argv;[s,p,q]=filter(lambda x:x[:1]!= '-',a);d='-d'in a;e,n=atol(p,16),atol(q,16);l=(len(q)+1)/2;o,inb=l-d,l-1+d while s:s=stdin.read(inb);s and map(stdout.write,map(lambda i,b=pow(reduce( lambda x,y:(x<<8L)+y,map(ord,s)),e,n):chr(b>>8*i&255),range(o-1,-1,-1))) From pauldubois at home.com Mon Mar 13 14:42:13 2000 From: pauldubois at home.com (Paul F. 
Dubois) Date: Mon, 13 Mar 2000 11:42:13 -0800 Subject: [Numpy-discussion] Documentation In-Reply-To: <38CD6395.AA5FCF73@dgf.uchile.cl> Message-ID: > Who is maintaining the Python manual actually? > Me. From vanandel at atd.ucar.edu Mon Mar 13 15:40:29 2000 From: vanandel at atd.ucar.edu (Joe Van Andel) Date: Mon, 13 Mar 2000 13:40:29 -0700 Subject: [Numpy-discussion] single precision version of interp Message-ID: <38CD523D.603F8B4@atd.ucar.edu> As I've previously mentioned to Paul, I need a single precision version of 'interp()', so I can use it on large single precision arrays, without returning a double precision array. In my own copy of NumPy, I've written such a routine, and added it to 'arrayfns.c '. Naturally, I want to see this functionality built into the official release, so I do not have to apply my own patches to new releases. Can we decide how such single precision needs are accomodated? Should there be a keyword argument on the 'interp()' call, that calls the single precision version? Or should the caller invoke 'interpf()', rather than 'interp()?' I don't much care what the solution looks like, as long as people agree that: 1) we need such functionality in NumPy. 2) we can establish a precedent on how single precision vs double precision methods are invoked. Please let me know your opinions on how this should be resolved. Thanks much! -- Joe VanAndel National Center for Atmospheric Research http://www.atd.ucar.edu/~vanandel/ Internet: vanandel at ucar.edu From jhauser at ifm.uni-kiel.de Mon Mar 13 18:54:22 2000 From: jhauser at ifm.uni-kiel.de (Janko Hauser) Date: Tue, 14 Mar 2000 00:54:22 +0100 (CET) Subject: [Numpy-discussion] Documentation In-Reply-To: References: <38CD6395.AA5FCF73@dgf.uchile.cl> Message-ID: <20000313235423.30614.qmail@lisboa.ifm.uni-kiel.de> I'm currently build a reference for NumPy and some of the other modules in the format of the standard python library reference. At the moment this is more a personal effort to get an overview which functions are there. I do not really write stuff myself, but bring information of various sources together and reformat it. Than I want to test some approaches to extract info for a function from the HTML source at the interactive commandline. I see this not as a replacement for the excellent PDF documentation, which has far more information and many examples. The standard latex documentation package for Python has currently a bug with the index generation. If this is solved I will put a HTML tree online, so it can be examined and criticized. Yust for information, __Janko From jhauser at ifm.uni-kiel.de Mon Mar 13 18:59:54 2000 From: jhauser at ifm.uni-kiel.de (Janko Hauser) Date: Tue, 14 Mar 2000 00:59:54 +0100 (CET) Subject: [Numpy-discussion] single precision version of interp In-Reply-To: <38CD523D.603F8B4@atd.ucar.edu> References: <38CD523D.603F8B4@atd.ucar.edu> Message-ID: <20000313235954.30626.qmail@lisboa.ifm.uni-kiel.de> Shouldn't this be handled in the function and decided by the typecode of the parameters. Or put a typcode keyword parameter in the function signature of interp. To derive functions for different types and put these into the public namespace is not so good IMHO. This could also be handled by a wrapper, which calls different the compiled _functions. 
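[A minimal sketch of that wrapper idea, dispatching on a typecode keyword. It is written against modern numpy (numpy.interp, float32/float64) purely for illustration; it is not the actual Numeric arrayfns API, and the real C patch appears later in this digest.]

import numpy as np

def interp(y, x, z, typecode='d'):
    # Piecewise-linear interpolation of (x, y) evaluated at z, returning a
    # double ('d') or single ('f') precision result; other codes are errors.
    dtypes = {'d': np.float64, 'f': np.float32}
    if typecode not in dtypes:
        raise ValueError("interp: typecode must be 'f' or 'd'")
    dt = dtypes[typecode]
    xd = np.asarray(x, dtype=dt)
    yd = np.asarray(y, dtype=dt)
    zd = np.asarray(z, dtype=dt)
    # np.interp itself still computes in double; only the result is cast here
    return np.interp(zd, xd, yd).astype(dt)

# usage: single-precision output for a large z array
xp = np.linspace(0.0, 1.0, 11)
yp = xp ** 2
print(interp(yp, xp, [0.05, 0.5, 0.95], typecode='f').dtype)   # float32
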
__Janko From aurag at crm.umontreal.ca Mon Mar 13 21:20:06 2000 From: aurag at crm.umontreal.ca (Hassan Aurag) Date: Tue, 14 Mar 2000 02:20:06 GMT Subject: [Numpy-discussion] Documentation In-Reply-To: <20000313235423.30614.qmail@lisboa.ifm.uni-kiel.de> References: <38CD6395.AA5FCF73@dgf.uchile.cl> <20000313235423.30614.qmail@lisboa.ifm.uni-kiel.de> Message-ID: <20000314.2200600@adam-aurag.penguinpowered.com> This is not totally related, but I'd love to see a docbook based doc for numpy. See, I am writing GmatH (http://gmath.sourceforge.net) and among otherb things it provides a nice GUI (I hope) to NumPy. Now I'd love to see a docbook based thinggy that could be added to the app using a gtkhtml widget (duh!). It'd be nice to also have a Quick help file that I could incorporate into the app. >>>>>>>>>>>>>>>>>> Original Message <<<<<<<<<<<<<<<<<< On 3/13/00, 6:54:22 PM, Janko Hauser wrote regarding RE: [Numpy-discussion] Documentation: > I'm currently build a reference for NumPy and some of the other > modules in the format of the standard python library reference. At the > moment this is more a personal effort to get an overview which > functions are there. I do not really write stuff myself, but bring > information of various sources together and reformat it. > Than I want to test some approaches to extract info for a function > from the HTML source at the interactive commandline. > I see this not as a replacement for the excellent PDF documentation, > which has far more information and many examples. > The standard latex documentation package for Python has currently a > bug with the index generation. If this is solved I will put a HTML > tree online, so it can be examined and criticized. > Yust for information, > __Janko > _______________________________________________ > Numpy-discussion mailing list > Numpy-discussion at lists.sourceforge.net > http://lists.sourceforge.net/mailman/listinfo/numpy-discussion From jbaddor at physics.mcgill.ca Wed Mar 22 18:31:07 2000 From: jbaddor at physics.mcgill.ca (Jean-Bernard Addor) Date: Wed, 22 Mar 2000 18:31:07 -0500 (EST) Subject: [Numpy-discussion] how to compute a Gamma function with Numpy? Message-ID: Hey Numpy people! I need to compute the Gamma function (the one related to factorial) with Numpy. It seems no to be included in the module, Am I right? Is any code for computing it available? Where could I find an algorythme to adapt ? Jean-Bernard From cgw at fnal.gov Wed Mar 22 19:07:23 2000 From: cgw at fnal.gov (Charles G Waldman) Date: Wed, 22 Mar 2000 18:07:23 -0600 (CST) Subject: [Numpy-discussion] how to compute a Gamma function with Numpy? In-Reply-To: References: Message-ID: <14553.24635.191392.10424@buffalo.fnal.gov> Jean-Bernard Addor writes: > Hey Numpy people! > > I need to compute the Gamma function (the one related to factorial) with > Numpy. It seems no to be included in the module, Am I right? Yes, but it is present in Travis Oliphant's "cephes" module which is an addon to stock NumPy. See http://oliphant.netpedia.net/packages/included_functions.html for the list of functions, and go to http://oliphant.netpedia.net/ to download the package itself. From jbaddor at physics.mcgill.ca Wed Mar 22 20:48:31 2000 From: jbaddor at physics.mcgill.ca (Jean-Bernard Addor) Date: Wed, 22 Mar 2000 20:48:31 -0500 (EST) Subject: [Numpy-discussion] quadrature.py vs Multipack Message-ID: Hey Numpy people! 
I have to integrate functions like: Int_0^1 (t*(t-1))**-(H/2) dt or Int_-1^1 1/abs(t)**(1-H) dt, with H around .3 I just tried quadrature on the 1st one: it needs a very high order of quadrature to be precise and is in that case slow. Would it work better with Multipack? (I have to upgrade to Python 1.5.2 to try Multipack!) Thank you for your help. Jean-Bernard From peter.chang at nottingham.ac.uk Thu Mar 23 10:37:07 2000 From: peter.chang at nottingham.ac.uk (peter.chang at nottingham.ac.uk) Date: Thu, 23 Mar 2000 15:37:07 +0000 (GMT) Subject: [Numpy-discussion] quadrature.py vs Multipack In-Reply-To: Message-ID: On Wed, 22 Mar 2000, Jean-Bernard Addor wrote: > Hey Numpy people! > > I have to integrate functions like: > > Int_0^1 (t*(t-1))**-(H/2) dt This is a beta function with arguments (1-H/2,1-H/2) and is related to gamma functions. B(x,y) = Gamma(x)Gamma(y)/Gamma(x+y) > or > > Int_-1^1 1/abs(t)**(1-H) dt, with H around .3 This can be done analytically: it equals 2 Int_0^1 t**(H-1) dt = 2 [ t**(H)/H ]_0^1 = 2/H > I just tried quadrature on the 1st one: it needs a very high order of quadrature > to be precise and is in that case slow. HTH Peter From jbaddor at physics.mcgill.ca Thu Mar 23 11:00:44 2000 From: jbaddor at physics.mcgill.ca (Jean-Bernard Addor) Date: Thu, 23 Mar 2000 11:00:44 -0500 (EST) Subject: [Numpy-discussion] Re: quadrature.py vs Multipack In-Reply-To: Message-ID: Hey! I am now able to reply to my own question! Quadpack from Multipack is much quicker and more accurate! And it does not need Python 1.5.2 (on my system it crashes with the new Python, but it is a quick installation). Comparison: >>> quadrature.quad(lambda t: t**(H-1), 0, 1) 2.54866576894 >>> quadpack.quad(lambda t: t**(H-1), 0, 1) (3.33333333333, 4.26325641456e-14) >>> 1/H 3.33333333333 The expected result is 1/H (H was .3). It is possible to improve the precision of the quadrature result, but it becomes very slow and never very precise. Jean-Bernard On Wed, 22 Mar 2000, Jean-Bernard Addor wrote: > Hey Numpy people! > > I have to integrate functions like: > > Int_0^1 (t*(t-1))**-(H/2) dt > > or > > Int_-1^1 1/abs(t)**(1-H) dt, with H around .3 > > I just tried quadrature on the 1st one: it needs a very high order of quadrature > to be precise and is in that case slow. > > Would it work better with Multipack? > > (I have to upgrade to Python 1.5.2 to try Multipack!) > > Thank you for your help. > > Jean-Bernard > > From cgw at fnal.gov Thu Mar 23 16:14:49 2000 From: cgw at fnal.gov (Charles G Waldman) Date: Thu, 23 Mar 2000 15:14:49 -0600 (CST) Subject: [Numpy-discussion] Doc strings for ufuncs Message-ID: <14554.35145.163207.135868@buffalo.fnal.gov> I really, really, really like the Cephes module and a lot of the work Travis Oliphant has been doing on Numeric Python. Nice to see Numeric getting more and more powerful all the time. However, as more and more functions get added to the libraries, their names become more and more obscure, and it sure would be nice to have doc-strings for them. For the stock Numeric ufuncs like "add", the meanings are self-evident, but things like "pdtri" and "incbet" are a bit less obvious. (One of my pet peeves with Python is that there are so many functions and classes with empty doc-strings. Bit by bit, I'm trying to add them in). 
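[A quick way to gauge how widespread the empty-docstring problem is for any given module is sketched below. This is generic illustration code, shown on the standard math module, and is not part of the patch being described.]

import math

def undocumented(module):
    # Return the public callables in `module` that carry no docstring.
    missing = []
    for name in dir(module):
        if name.startswith('_'):
            continue
        obj = getattr(module, name)
        if callable(obj) and not getattr(obj, '__doc__', None):
            missing.append(name)
    return missing

print(undocumented(math))   # [] on a modern CPython; the 2000-era Numeric
                            # ufuncs would all have been listed here
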
However, one little problem with the current Numeric implementation is that ufunc objects don't have support for a doc string, the doc string is hardwired in the type object, and comes up as: "Optimizied FUNCtions make it possible to implement arithmetic with matrices efficiently" This is not only non-helpful, it's misspelled ("Optimizied"?) Well, according to the charter, a well-behaved Nummie will "Fix bugs at will, without prior consultation." But a well-behaved Nummie wil also "Add new features only after consultation with other Nummies" So, I hereby declare doing something about the useless doc-strings to be fixing a bug and not adding a feature ;-) The patch below adds doc strings to all the ufuncs in the Cephes module. They are automatically extracted from the HTML documentation for the Cephes module. (In the process of doing this, I also added some missing items to said HTML documentation). This patch depends on another patch, which I am submitting via the SourceForge, which allows ufunc objects to have doc strings. With these patches, you get this: >>> import cephes >>> print cephes.incbet.__doc__ incbet(a,b,x) returns the incomplete beta integral of the arguments, evaluated from zero to x: gamma(a+b) / (gamma(a)*gamma(b)) * integral(t**(a-1) (1-t)**(b-1), t=0..x). instead of this: >>> print cephes.incbet.__doc__ Optimizied FUNCtions make it possible to implement arithmetic with matrices efficiently Isn't that nicer? "Ni-Ni-Numpy!" Here's the "gendoc.py" script to pull the docstrings out of included_functions.h: -------------- next part -------------- An embedded and charset-unspecified text was scrubbed... Name: gendoc.py URL: -------------- next part -------------- And here are the patches to cephes: -------------- next part -------------- An embedded and charset-unspecified text was scrubbed... Name: cephes-patch URL: -------------- next part -------------- NB: This stuff won't work at all without the corresponding patches to the Numeric core, to be posted shortly to the sourceforge site. From hinsen at cnrs-orleans.fr Fri Mar 24 11:02:37 2000 From: hinsen at cnrs-orleans.fr (Konrad Hinsen) Date: Fri, 24 Mar 2000 17:02:37 +0100 Subject: [Numpy-discussion] Doc strings for ufuncs In-Reply-To: <14554.35145.163207.135868@buffalo.fnal.gov> (message from Charles G Waldman on Thu, 23 Mar 2000 15:14:49 -0600 (CST)) References: <14554.35145.163207.135868@buffalo.fnal.gov> Message-ID: <200003241602.RAA13230@chinon.cnrs-orleans.fr> > I really, really, really like the Cephes module and a lot of the work Me too! > So, I hereby declare doing something about the useless doc-strings to > be fixing a bug and not adding a feature ;-) Great, this is a step in the right direction. I am still hoping for an interactive environment that lets me consult docstrings why I work, but that won't happen before most Python functions actually have docstrings. Konrad. 
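[That kind of interactive docstring lookup is essentially what the built-in help() and the pydoc/inspect machinery of later Pythons came to provide; a tiny sketch, assuming a Python recent enough to have them:]

import math
import inspect

help(math.atan2)                    # interactive, pager-style documentation
print(inspect.getdoc(math.atan2))   # or fetch the cleaned docstring directly
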
-- ------------------------------------------------------------------------------- Konrad Hinsen | E-Mail: hinsen at cnrs-orleans.fr Centre de Biophysique Moleculaire (CNRS) | Tel.: +33-2.38.25.55.69 Rue Charles Sadron | Fax: +33-2.38.63.15.17 45071 Orleans Cedex 2 | Deutsch/Esperanto/English/ France | Nederlands/Francais ------------------------------------------------------------------------------- From DavidA at ActiveState.com Fri Mar 24 11:41:43 2000 From: DavidA at ActiveState.com (David Ascher) Date: Fri, 24 Mar 2000 08:41:43 -0800 Subject: [Numpy-discussion] Doc strings for ufuncs In-Reply-To: <200003241602.RAA13230@chinon.cnrs-orleans.fr> Message-ID: > Great, this is a step in the right direction. I am still hoping for an > interactive environment that lets me consult docstrings why I work, > but that won't happen before most Python functions actually have > docstrings. Did you look at recent versions of IDLE (and Pythonwin, but not on your platforms =)? --david From pauldubois at home.com Fri Mar 24 18:08:33 2000 From: pauldubois at home.com (Paul F. Dubois) Date: Fri, 24 Mar 2000 15:08:33 -0800 Subject: [Numpy-discussion] Doc strings for ufuncs In-Reply-To: Message-ID: The CVS version now has doc strings for all the functions in umath. (C. Waldman, P. Dubois). From bhoel at server.python.net Sat Mar 25 11:26:16 2000 From: bhoel at server.python.net (Berthold=?iso-8859-1?q?_H=F6llmann?=) Date: 25 Mar 2000 17:26:16 +0100 Subject: [Numpy-discussion] Recent CVS NumPy Version with recent CVS distutil and Linux Message-ID: Hello, I tried to install a NumPy version downloaded this morning using distutils also downloaded then. This failed on my Linux box. First the setup.py script in not compatible with the newer distutil API. Second the glibc math.h includes a mathtypes.h which defines a function named gamma. This leads to a problem with ranlibmodule.c. I tried to solve both problems. This is documented with the applied patch. Cheers Berthold -------------- next part -------------- A non-text attachment was scrubbed... Name: NumPyPatch Type: application/octet-stream Size: 3403 bytes Desc: not available URL: -------------- next part -------------- -- bhoel at starship.python.net / http://starship.python.net/crew/bhoel/ It is unlawful to use this email address for unsolicited ads (USC Title 47 Sec.227). I will assess a US$500 charge for reviewing and deleting each unsolicited ad. From jsaenz at wm.lc.ehu.es Tue Mar 28 05:46:17 2000 From: jsaenz at wm.lc.ehu.es (Jon Saenz) Date: Tue, 28 Mar 2000 12:46:17 +0200 (MET DST) Subject: [Numpy-discussion] Release of Pyclimate0.0 Message-ID:

Pyclimate 0.0 - Climate variability analysis using Numeric Python (28-Mar-00) Tuesday, 03/28/2000 Hello, all. We are making the first announce of a pre-alpha release (version 0.0) of our package pyclimate, which presents some tools used for climate variability analysis and which make extensive use of Numerical Python. It is released under the GNU Public License. We call them a pre-alpha release. Even though the routines are quite debugged, they are yet growing and we are thinking in making a stable release shortly after receiving some feedback from users. The package contains: IO functions ------------ -ASCII files (simple, but useful) -ncstruct.py: netCDF structure copier. From a COARDS compliant netCDF file, this module creates a COARDS compliant file, copying the needed attributes, dimensions, auxiliary variables, comments, and so on in one call. Time handling routines ---------------------- * JDTime.py -> Some C/Python functions to convert from date to Scaliger's Julian Day and from Julian Day to date. We are not trying to replace mxDate, but addressing a different problem. In particular, this module contains a routine especially suited to handling monthly time steps for climatological use. * JDTimeHandler.py -> Python module which parses the units attribute of the time variable in a COARDS file and which offsets and scales adequately the time values to read/save date fields. Interface to DCDFLIB.C ---------------------- A C/Python interface to the free DCDFLIB.C library is provided. This library allows direct and inverse computations of parameters for several probability distribution functions like Chi^2, normal, binomial, F, noncentral F, and many many more. EOF analysis ------------ Empirical Orthogonal Function analysis based on the SVD decomposition of the data matrix and related functions to test the reliability/degeneracy of eigenvalues (truncation rules). Monte Carlo test of the stability of eigenvectors to temporal subsampling. SVD decomposition ----------------- SVD decomposition of the correlation matrix of two datasets, functions to compute the expansion coefficients, the squared cumulative covariance fraction and the homogeneous and heterogeneous correlation maps. Monte Carlo test of the stability of singular vectors to temporal subsampling. Multivariate digital filter --------------------------- Multivariate digital filter (high and low pass) based on the Kolmogorov-Zurbenko filter Differential operators on the sphere ------------------------------------ Some classes to compute differential operators (gradient and divergence) on a regular latitude/longitude grid. PREREQUISITES ============= To be able to use it, you will need: 1. Python ;-) 2. netCDF library 3.4 or later 3. Scientific Python, by Konrad Hinsen 4. DCDFLIB.C version 1.1 IF AND ONLY IF you really want to change the C code (JDTime.[hc] and pycdf.[hc]), then, you will also need SWIG. COMPILATION =========== There is no a automatic compilation/installation procedure, but the Makefile is quite straightforward. After manually editing the Makefile for different platforms, the commands make make test -> Runs a (not infalible) regression test make install will do it. SORRY, we don't use it under Windows, only UNIX. Volunteers that generate a Windows installation file would be appreciated, but we will not do it. DOCUMENTATION ============= LaTeX, Postscript and PDF versions of the manual are included in the distribution. However, we are preparing a new set of documentation according to PSA rules. 
AVAILABILITY ============ http://lcdx00.wm.lc.ehu.es/~jsaenz/pyclimate (Europe) http://pyclimate.zubi.net/ (USA) http://starship.python.net/crew/~jsaenz (USA) Any feedback from the users of the package will be really appreciated by the authors. We will try to incorporate new developments, in case we are able to do so. Our time availability is scarce. Enjoy. Jon Saenz, jsaenz at wm.lc.ehu.es Juan Zubillaga, wmpzuesj at lg.ehu.es From godzilla at netmeg.net Tue Mar 28 12:35:39 2000 From: godzilla at netmeg.net (Les Schaffer) Date: Tue, 28 Mar 2000 12:35:39 -0500 (EST) Subject: [Numpy-discussion] del( distutils ) Message-ID: <14560.60779.465582.624964@gargle.gargle.HOWL> I am just trying to compile the latest CVS updates on NumPy, and am getting very cranky about the dependence of build process on distutils. let me say it politely, using distutils at this point is for the birds. maybe for distributing released versions to the general public, where the distutils setup.py script has been tested on many platforms, is a long term solution. but for right now, i miss the Makefile stuff. an example: (gustav)~/system/numpy/Numerical/: python setup.py build running build [snip] /usr/bin/egcc -c -I/usr/include/python1.5 -IInclude -O3 -mpentium -fpic Src/_numpymodule.c Src/arrayobject.c Src/ufuncobject.c Src/ufuncobject.c:783: conflicting types for `PyUFunc_FromFuncAndData' /usr/include/python1.5/ufuncobject.h:100: previous declaration of `PyUFunc_FromFuncAndData' Src/ufuncobject.c: In function `PyUFunc_FromFuncAndData': Src/ufuncobject.c:805: structure has no member named `doc' Src/ufuncobject.c: In function `ufunc_getattr': Src/ufuncobject.c:955: structure has no member named `doc' [blah blah blah] if this was from running make, i could be inside xemacs and click on the damn error and go right to the line in question. now, i gotta horse around, its a waste of time. another example, i posed to c.l.p a week or so ago. where it turned out that distutils doesnt know enough that, for example, arrayobject.h in distribution is newer than the one in /usr/include/python1.5 so breaks the build when API changes have been made. distutils should at leaast get the order of the -I's correct. but we're not a distutils SIG, we're NumPy, right? can we please go back, at least for now, to using -- or at a minimum distributing -- the Makefile's? les schaffer From vanandel at atd.ucar.edu Tue Mar 28 13:00:12 2000 From: vanandel at atd.ucar.edu (Joe Van Andel) Date: Tue, 28 Mar 2000 11:00:12 -0700 Subject: [Numpy-discussion] single precision patch for arrayfnsmodule.c :interp() Message-ID: <38E0F32C.E20B96CD@atd.ucar.edu> As I've mentioned earlier, I'm using interp() with very large datasets, and I can't afford to use double precision arrays. Here's a patch that lets interp() accept an optional typecode argument. Passing 'f' calls the new single precision version, and returns a single precision array. Passing no argument or 'd' uses the previous, double precision version. (No other array types are supported - an error is returned.) I hope this can be added to the CVS version of Numeric. Thanks! 
-- Joe VanAndel National Center for Atmospheric Research http://www.atd.ucar.edu/~vanandel/ Internet: vanandel at ucar.edu -------------- next part -------------- *** arrayfnsmodule.c 1999/04/14 22:58:32 1.1 --- arrayfnsmodule.c 1999/08/12 23:27:28 *************** *** 6,11 **** --- 6,13 ---- #include #include + #define MAX_INTERP_DIMS 6 + static PyObject *ErrorObject; /* Define 2 macros for error handling: *************** *** 34,39 **** --- 36,43 ---- #define A_DIM(a,i) (((PyArrayObject *)a)->dimensions[i]) #define GET_ARR(ap,op,type,dim) \ Py_Try(ap=(PyArrayObject *)PyArray_ContiguousFromObject(op,type,dim,dim)) + #define GET_ARR2(ap,op,type,min,max) \ + Py_Try(ap=(PyArrayObject *)PyArray_ContiguousFromObject(op,type,min,max)) #define ERRSS(s) ((PyObject *)(PyErr_SetString(ErrorObject,s),0)) #define SETERR(s) if(!PyErr_Occurred()) ERRSS(errstr ? errstr : s) #define DECREF_AND_ZERO(p) do{Py_XDECREF(p);p=0;}while(0) *************** *** 571,576 **** --- 575,613 ---- return result ; } + static int + binary_searchf(float dval, float dlist [], int len) + { + /* binary_search accepts three arguments: a numeric value and + * a numeric array and its length. It assumes that the array is sorted in + * increasing order. It returns the index of the array's + * largest element which is <= the value. It will return -1 if + * the value is less than the least element of the array. */ + /* self is not used */ + int bottom , top , middle, result; + + if (dval < dlist [0]) + result = -1 ; + else { + bottom = 0; + top = len - 1; + while (bottom < top) { + middle = (top + bottom) / 2 ; + if (dlist [middle] < dval) + bottom = middle + 1 ; + else if (dlist [middle] > dval) + top = middle - 1 ; + else + return middle ; + } + if (dlist [bottom] > dval) + result = bottom - 1 ; + else + result = bottom ; + } + + return result ; + } static char arr_interp__doc__[] = "" ; *************** *** 597,609 **** Py_DECREF(ay); Py_DECREF(ax); return NULL ;} ! GET_ARR(az,oz,PyArray_DOUBLE,1); lenz = A_SIZE (az); dy = (double *) A_DATA (ay); dx = (double *) A_DATA (ax); dz = (double *) A_DATA (az); ! Py_Try (_interp = (PyArrayObject *) PyArray_FromDims (1, &lenz, ! PyArray_DOUBLE)); dres = (double *) A_DATA (_interp) ; slopes = (double *) malloc ( (leny - 1) * sizeof (double)) ; for (i = 0 ; i < leny - 1; i++) { --- 634,647 ---- Py_DECREF(ay); Py_DECREF(ax); return NULL ;} ! GET_ARR2(az,oz,PyArray_DOUBLE,1,MAX_INTERP_DIMS); lenz = A_SIZE (az); dy = (double *) A_DATA (ay); dx = (double *) A_DATA (ax); dz = (double *) A_DATA (az); ! /* create output array with same size as 'Z' input array */ ! Py_Try (_interp = (PyArrayObject *) PyArray_FromDims ! (A_NDIM(az), az->dimensions, PyArray_DOUBLE)); dres = (double *) A_DATA (_interp) ; slopes = (double *) malloc ( (leny - 1) * sizeof (double)) ; for (i = 0 ; i < leny - 1; i++) { *************** *** 627,632 **** --- 665,724 ---- return PyArray_Return (_interp); } + /* return float, rather than double */ + + static PyObject * + arr_interpf(PyObject *self, PyObject *args) + { + /* interp (y, x, z) treats (x, y) as a piecewise linear function + * whose value is y [0] for x < x [0] and y [len (y) -1] for x > + * x [len (y) -1]. An array of floats the same length as z is + * returned, whose values are ordinates for the corresponding z + * abscissae interpolated into the piecewise linear function. 
*/ + /* self is not used */ + PyObject * oy, * ox, * oz ; + PyArrayObject * ay, * ax, * az , * _interp; + float * dy, * dx, * dz , * dres, * slopes; + int leny, lenz, i, left ; + + Py_Try(PyArg_ParseTuple(args, "OOO", &oy, &ox, &oz)); + GET_ARR(ay,oy,PyArray_FLOAT,1); + GET_ARR(ax,ox,PyArray_FLOAT,1); + if ( (leny = A_SIZE (ay)) != A_SIZE (ax)) { + SETERR ("interp: x and y are not the same length."); + Py_DECREF(ay); + Py_DECREF(ax); + return NULL ;} + GET_ARR2(az,oz,PyArray_FLOAT,1,MAX_INTERP_DIMS); + lenz = A_SIZE (az); + dy = (float *) A_DATA (ay); + dx = (float *) A_DATA (ax); + dz = (float *) A_DATA (az); + /* create output array with same size as 'Z' input array */ + Py_Try (_interp = (PyArrayObject *) PyArray_FromDims + (A_NDIM(az), az->dimensions, PyArray_FLOAT)); + dres = (float *) A_DATA (_interp) ; + slopes = (float *) malloc ( (leny - 1) * sizeof (float)) ; + for (i = 0 ; i < leny - 1; i++) { + slopes [i] = (dy [i + 1] - dy [i]) / (dx [i + 1] - dx [i]) ; + } + for ( i = 0 ; i < lenz ; i ++ ) + { + left = binary_searchf (dz [i], dx, leny) ; + if (left < 0) + dres [i] = dy [0] ; + else if (left >= leny - 1) + dres [i] = dy [leny - 1] ; + else + dres [i] = slopes [left] * (dz [i] - dx [left]) + dy [left]; + } + + free (slopes); + Py_DECREF(ay); + Py_DECREF(ax); + Py_DECREF(az); + return PyArray_Return (_interp); + } static int incr_slot_ (float x, double *bins, int lbins) { int i ; *************** *** 1295,1300 **** --- 1387,1393 ---- {"array_set", arr_array_set, 1, arr_array_set__doc__}, {"index_sort", arr_index_sort, 1, arr_index_sort__doc__}, {"interp", arr_interp, 1, arr_interp__doc__}, + {"interpf", arr_interpf, 1, arr_interp__doc__}, {"digitize", arr_digitize, 1, arr_digitize__doc__}, {"zmin_zmax", arr_zmin_zmax, 1, arr_zmin_zmax__doc__}, {"reverse", arr_reverse, 1, arr_reverse__doc__}, From cgw at fnal.gov Tue Mar 28 13:23:45 2000 From: cgw at fnal.gov (Charles G Waldman) Date: Tue, 28 Mar 2000 12:23:45 -0600 (CST) Subject: [Numpy-discussion] del( distutils ) In-Reply-To: <14560.60779.465582.624964@gargle.gargle.HOWL> References: <14560.60779.465582.624964@gargle.gargle.HOWL> Message-ID: <14560.63665.537485.403965@buffalo.fnal.gov> Les Schaffer writes: > > let me say it politely, using distutils at this point is for the > birds. Hmm, I'd hate to hear the impolite form! ;-) > but for right now, i miss the Makefile stuff. > > an example: > (gustav)~/system/numpy/Numerical/: python setup.py build > /usr/include/python1.5/ufuncobject.h:100: previous declaration of `PyUFunc_FromFuncAndData' > Src/ufuncobject.c: In function `PyUFunc_FromFuncAndData': > Src/ufuncobject.c:805: structure has no member named `doc' > Src/ufuncobject.c: In function `ufunc_getattr': > Src/ufuncobject.c:955: structure has no member named `doc' > [blah blah blah] > > if this was from running make, i could be inside xemacs and click on > the damn error and go right to the line in question. now, i gotta > horse around, its a waste of time. No, you don't have to horse around! Of course you can use xemacs and compile mode. (I wouldn't dream of building anything from an xterm window!) Just do M-x compile and set the compilation command to "python setup.py build", then you can click on the errors. 
Or create a trivial Makefile with these contents: numpy: python setup.py build install: python From cgw at fnal.gov Tue Mar 28 13:25:36 2000 From: cgw at fnal.gov (Charles G Waldman) Date: Tue, 28 Mar 2000 12:25:36 -0600 (CST) Subject: [Numpy-discussion] del( distutils ) In-Reply-To: <14560.63665.537485.403965@buffalo.fnal.gov> References: <14560.60779.465582.624964@gargle.gargle.HOWL> <14560.63665.537485.403965@buffalo.fnal.gov> Message-ID: <14560.63776.153092.975975@buffalo.fnal.gov> Ooops, that message was somehow truncated, the last line should obviously have read: install: python setup.py install From godzilla at netmeg.net Tue Mar 28 13:35:22 2000 From: godzilla at netmeg.net (Les Schaffer) Date: Tue, 28 Mar 2000 13:35:22 -0500 (EST) Subject: [Numpy-discussion] del( distutils ) In-Reply-To: <14560.63665.537485.403965@buffalo.fnal.gov> References: <14560.60779.465582.624964@gargle.gargle.HOWL> <14560.63665.537485.403965@buffalo.fnal.gov> Message-ID: <14560.64363.7709.62933@gargle.gargle.HOWL> > Hmm, I'd hate to hear the impolite form! ;-) isn't it nice when cranky-heads are dealt with politely? ;-) > Just do M-x compile and set the compilation command to "python > setup.py build", then you can click on the errors. ahhhh... hadnt thought of that. thanks.... here is a one line fix to build_ext.py in the distutils distribution that switches the order of the includes, so that -IInclude comes before -I/usr/lib/python1.5 in the build process, so API changes in the .h files don't hose the build before the install. les schaffer (gustav)/usr/lib/python1.5/site-packages/distutils/command/: diff -c build_ext.py~ build_ext.py *** build_ext.py~ Sun Jan 30 13:34:12 2000 --- build_ext.py Tue Mar 28 13:30:03 2000 *************** *** 99,105 **** self.include_dirs = string.split (self.include_dirs, os.pathsep) ! self.include_dirs.insert (0, py_include) if exec_py_include != py_include: self.include_dirs.insert (0, exec_py_include) --- 99,105 ---- self.include_dirs = string.split (self.include_dirs, os.pathsep) ! self.include_dirs.append(py_include) if exec_py_include != py_include: self.include_dirs.insert (0, exec_py_include) From vanandel at atd.ucar.edu Tue Mar 28 14:30:39 2000 From: vanandel at atd.ucar.edu (Joe Van Andel) Date: Tue, 28 Mar 2000 12:30:39 -0700 Subject: [Numpy-discussion] single precision patch for arrayfnsmodule.c :interp() References: <38E0F32C.E20B96CD@atd.ucar.edu> Message-ID: <38E1085F.745A4460@atd.ucar.edu> I'm very sorry, I previously sent the wrong patch. Here's the correct patch. (The other was a previous version of arrayfnsmodule.c, that added an interpf() command. This version adds an optional typecode to allow specifying single precision results from interp) -- Joe VanAndel National Center for Atmospheric Research http://www.atd.ucar.edu/~vanandel/ Internet: vanandel at ucar.edu -------------- next part -------------- Index: arrayfnsmodule.c =================================================================== RCS file: /cvsroot/numpy/Numerical/Src/arrayfnsmodule.c,v retrieving revision 1.1 diff -u -r1.1 arrayfnsmodule.c --- arrayfnsmodule.c 2000/01/20 18:18:28 1.1 +++ arrayfnsmodule.c 2000/03/28 17:55:04 @@ -575,6 +575,96 @@ return result ; } +static int +binary_searchf(float dval, float dlist [], int len) +{ + /* binary_search accepts three arguments: a numeric value and + * a numeric array and its length. It assumes that the array is sorted in + * increasing order. It returns the index of the array's + * largest element which is <= the value. 
It will return -1 if + * the value is less than the least element of the array. */ + /* self is not used */ + int bottom , top , middle, result; + + if (dval < dlist [0]) + result = -1 ; + else { + bottom = 0; + top = len - 1; + while (bottom < top) { + middle = (top + bottom) / 2 ; + if (dlist [middle] < dval) + bottom = middle + 1 ; + else if (dlist [middle] > dval) + top = middle - 1 ; + else + return middle ; + } + if (dlist [bottom] > dval) + result = bottom - 1 ; + else + result = bottom ; + } + + return result ; +} +/* return float, rather than double */ + +static PyObject * +arr_interpf(PyObject *self, PyObject *args) +{ + /* interp (y, x, z) treats (x, y) as a piecewise linear function + * whose value is y [0] for x < x [0] and y [len (y) -1] for x > + * x [len (y) -1]. An array of floats the same length as z is + * returned, whose values are ordinates for the corresponding z + * abscissae interpolated into the piecewise linear function. */ + /* self is not used */ + PyObject * oy, * ox, * oz ; + PyArrayObject * ay, * ax, * az , * _interp; + float * dy, * dx, * dz , * dres, * slopes; + int leny, lenz, i, left ; + + PyObject *tpo = Py_None; /* unused argument, we've already parsed it*/ + + Py_Try(PyArg_ParseTuple(args, "OOO|O", &oy, &ox, &oz, &tpo)); + GET_ARR(ay,oy,PyArray_FLOAT,1); + GET_ARR(ax,ox,PyArray_FLOAT,1); + if ( (leny = A_SIZE (ay)) != A_SIZE (ax)) { + SETERR ("interp: x and y are not the same length."); + Py_DECREF(ay); + Py_DECREF(ax); + return NULL ;} + GET_ARR2(az,oz,PyArray_FLOAT,1,MAX_INTERP_DIMS); + lenz = A_SIZE (az); + dy = (float *) A_DATA (ay); + dx = (float *) A_DATA (ax); + dz = (float *) A_DATA (az); + /* create output array with same size as 'Z' input array */ + Py_Try (_interp = (PyArrayObject *) PyArray_FromDims + (A_NDIM(az), az->dimensions, PyArray_FLOAT)); + dres = (float *) A_DATA (_interp) ; + slopes = (float *) malloc ( (leny - 1) * sizeof (float)) ; + for (i = 0 ; i < leny - 1; i++) { + slopes [i] = (dy [i + 1] - dy [i]) / (dx [i + 1] - dx [i]) ; + } + for ( i = 0 ; i < lenz ; i ++ ) + { + left = binary_searchf (dz [i], dx, leny) ; + if (left < 0) + dres [i] = dy [0] ; + else if (left >= leny - 1) + dres [i] = dy [leny - 1] ; + else + dres [i] = slopes [left] * (dz [i] - dx [left]) + dy [left]; + } + + free (slopes); + Py_DECREF(ay); + Py_DECREF(ax); + Py_DECREF(az); + return PyArray_Return (_interp); +} + static char arr_interp__doc__[] = "" ; @@ -592,8 +682,24 @@ PyArrayObject * ay, * ax, * az , * _interp; double * dy, * dx, * dz , * dres, * slopes; int leny, lenz, i, left ; + PyObject *tpo = Py_None; + char type_char = 'd'; + char *type = &type_char; - Py_Try(PyArg_ParseTuple(args, "OOO", &oy, &ox, &oz)); + Py_Try(PyArg_ParseTuple(args, "OOO|O", &oy, &ox, &oz,&tpo)); + if (tpo != Py_None) { + type = PyString_AsString(tpo); + if (!type) + return NULL; + if(!*type) + type = &type_char; + } + if (*type == 'f' ) { + return arr_interpf(self, args); + } else if (*type != 'd') { + SETERR ("interp: unimplemented typecode."); + return NULL; + } GET_ARR(ay,oy,PyArray_DOUBLE,1); GET_ARR(ax,ox,PyArray_DOUBLE,1); if ( (leny = A_SIZE (ay)) != A_SIZE (ax)) { From pauldubois at home.com Tue Mar 28 14:59:09 2000 From: pauldubois at home.com (Paul F. 
Dubois) Date: Tue, 28 Mar 2000 11:59:09 -0800 Subject: [Numpy-discussion] single precision patch for arrayfnsmodule.c :interp() In-Reply-To: <38E1085F.745A4460@atd.ucar.edu> Message-ID: > -----Original Message----- > From: numpy-discussion-admin at lists.sourceforge.net > [mailto:numpy-discussion-admin at lists.sourceforge.net]On Behalf Of Joe > Van Andel > Sent: Tuesday, March 28, 2000 11:31 AM > To: vanandel at ucar.edu > Cc: numpy-discussion; Paul F. Dubois > Subject: Re: [Numpy-discussion] single precision patch for > arrayfnsmodule.c :interp() > Patch completed. I added a doc string. I used the second patch you sent. From janne at nnets.fi Fri Mar 31 05:04:13 2000 From: janne at nnets.fi (Janne Sinkkonen) Date: 31 Mar 2000 13:04:13 +0300 Subject: [Numpy-discussion] Binary kit for windows Message-ID: Could somebody give me a pointer to a easily installable Windows binary kit for NumPy? Even a bit older version (such as 11) would do fine. I couldn't find any reference to such a kit from sourceforge or the LLNL site. -- Janne
From zow at pensive.llnl.gov Wed Mar 1 16:40:35 2000 From: zow at pensive.llnl.gov (Zow Terry Brugger) Date: Wed, 01 Mar 2000 13:40:35 -0800 Subject: [Numpy-discussion] Derivatives (fwd) Message-ID: <200003012144.NAA21744@lists.sourceforge.net> -----BEGIN PGP SIGNED MESSAGE----- Hash: SHA1 Content-Type: text/plain; charset=us-ascii Hassan, This seemed relevant to pass back to the numeric list - hope you don't mind. My comments follow. - ------- Forwarded Message From aurag at crm.umontreal.ca Wed Mar 1 14:46:42 2000 From: aurag at crm.umontreal.ca (Hassan Aurag) Date: Wed, 01 Mar 2000 19:46:42 +0000 Subject: [Gmath-devel] Re: [Numpy-discussion] Derivatives Message-ID: I have to agree with you on most counts that Numerical Recipes in C is not a full-blown encyclopedia on all subtleties of doing numerical computations. However it does its job greatly for a big class of stuff I am interested in: minimization, non-linear systems of equations solving (the Newton routine given there is good, accurate and fast). There are errors and problems as in most other numerical books. In truth, I don't think there is anything fully correct out there. When trying to make computations you have to do a lot of testing and a lot of thought even if the algorithm seems perfect. That applies to all books, recipes et al. I have just discovered that tan(pi/2) = 1.63317787284e+16 in numerical python. And we all know this is bad. It should be infinity, period. We should define a symbol called infinity and put the correct definition of tan, sin, ... for all known angles, then interpolate if needed for the rest, or whatever is used to actually compute those things. Peace - ------- End of Forwarded Message I believe you're actually talking about the math module, not the numeric module (I'm not aware of tan or pi definitions in numeric, but I haven't bothered to double check that). Nevertheless, I think it has relevance here as Numeric is all about doing serious number crunching. This problem is caused by the lack of infinite precision in python. Of course, how is it even possible to perform infinite precision on non-rational numbers? The obvious solution is to allow the routine (tan() in this case) to recognize named constants that have relevance in their domain (pi in this case). This would fix the problem: math.tan(math.pi) = -1.22460635382e-16 but it still doesn't solve your problem because the named constant would have the mathematical operation performed on it before it's passed into the function, ruining whatever intimate knowledge of the given named constant that routine has. Perhaps you could get the routine to recognize rational math on named constants (the problem with that solution is how do you not burden other routines with the knowledge of how to process that expression). Assuming you had that, even for your example, should the answer be positive or negative infinity? Another obvious solution is just to assume that any floating point overflow is infinity and any underflow is zero. This obviously won't work because some asymptotic functions (say 1/x^3) will overflow or underflow at values for x for which the correct answer is not properly infinity or zero.
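For concreteness, the behaviour under discussion can be reproduced at any Python prompt of this vintage; the exact digits depend on the platform's libm, but the shape of the problem is the same everywhere.

import math
# pi/2 is not representable exactly as a float, so tan() is evaluated a tiny
# distance away from the pole and returns a huge but finite number.
print math.tan(math.pi / 2)    # about 1.6e+16, not infinity
# Likewise tan(pi) comes out tiny but nonzero instead of exactly zero.
print math.tan(math.pi)        # about -1.2e-16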
It is interesting to note that Matlab's behaviour is the same as Python's, which would indicate to me that there's not some easy solution to this problem that Guido et. al. overlooked. I haven't really researched the problem at all (although now I'm interested), but I'd be interested if anyone has a proposal for (or reference to) how this problem can be solved in a general purpose programming language at all (as there exists the distinct possibility that it can not be done in Python without breaking backwards compatibility). Terry -----BEGIN PGP SIGNATURE----- Version: PGPfreeware 5.0i for non-commercial use Charset: noconv iQA/AwUBOL2OUqfuGVwXgOQkEQJTpQCggOuFT2ZVavzMhy+jZgoehnrK5uIAoMzO D5OOdLtBvT97ee7vkckO+0Qt =SmqL -----END PGP SIGNATURE----- From aurag at crm.umontreal.ca Wed Mar 1 22:19:10 2000 From: aurag at crm.umontreal.ca (Hassan Aurag) Date: Thu, 02 Mar 2000 03:19:10 GMT Subject: [Numpy-discussion] Derivatives (fwd) In-Reply-To: <200003012144.NAA21744@lists.sourceforge.net> References: <200003012144.NAA21744@lists.sourceforge.net> Message-ID: <20000302.3191000@adam-aurag.penguinpowered.com> Mathematica does it correctly. How is another question. But I guess the idea is to pass it through some kind of database of exact solutions then optionally evaluate it. Mathematica is very good at that (symbolic stuff) >>>>>>>>>>>>>>>>>> Original Message <<<<<<<<<<<<<<<<<< On 3/1/00, 4:40:35 PM, "Zow" Terry Brugger wrote regarding [Numpy-discussion] Derivatives (fwd): > -----BEGIN PGP SIGNED MESSAGE----- > Hash: SHA1 > Content-Type: text/plain; charset=us-ascii > Hassan, > This seemed relevant to pass back to the numeric list - hope you don't mind. > My comments follow. > - ------- Forwarded Message > Date: Wed, 01 Mar 2000 19:46:42 +0000 > From: Hassan Aurag > To: zow at llnl.gov, Gmath-devel at lists.sourceforge.net > Subject: Re: [Gmath-devel] Re: [Numpy-discussion] Derivatives > I have to agree with you on most counts that Numerical Recipes in C=20 > is not a full-blown encyclopedia on all subtleties of doing numerical=20 > computations. > However it does its job greatly for a big class of stuff I am=20 > interested in: minimization, non-linear system of equations solving=20 > (The newton routine given there is good, accurate and fast.) > There are errors and problems as in most other numerical books. In=20 > truth, I don't think there is anything fully correct out there. > When trying to make computations you have to do a lot of testing and=20 > a lot of thought even if the algorithm seems perfect. That applies to=20 > all books, recipes et al. > I have just discovered that tan(pi/2) =3D 1.63317787284e+16 in=20 > numerical python. And we all know this is bad. It should be infinity=20 > period. We should define a symbol called infinity and put the correct=20 > definition of tan, sin .... for all known angles then interpolate if=20 > needed for the rest, or whatever is used to actually compute those thing= > s. > Peace > > - ------- End of Forwarded Message > I believe you're actually talking about the math module, not the numeric > module (I'm not aware of tan or pi definitions in numeric, but I haven't > bothered to double check that). Never the less, I think it has relevance here > as Numeric is all about doing serious number crunching. This problem is caused > by the lack of infinite precision in python. Of course, how is it even > possible to perform infinite precision on non-rational numbers? 
> The obvious solution is to allow the routine (tan() in this case) to recognize > named constants that have relevance in their domain (pi in this case). This > would fix the problem: > math.tan(math.pi) = -1.22460635382e-16 > but it still doesn't solve your problem because the named constant would have > the mathematical operation performed on it before it's passed into the > function, ruining whatever intimate knowledge of the given named constant that > routine has. > Perhaps you could get the routine to recognize rational math on named > constants (the problem with that solution is how do you not burden other > routines with the knowledge of how to process that expression). Assuming you > had that, even for your example, should the answer be positive or negative > infinity? > Another obvious solution is just to assume that any floating point overflow is > infinity and any underflow is zero. This obviously won't work because some > asymptotic functions (say 1/x^3) will overflow or underflow at values for x > for which the correct answer is not properly infinity or zero. > It is interesting to note that Matlab's behaviour is the same as Python's, > which would indicate to me that there's not some easy solution to this problem > that Guido et. al. overlooked. I haven't really researched the problem at all > (although now I'm interested), but I'd be interested if anyone has a proposal > for (or reference to) how this problem can be solved in a general purpose > programming language at all (as there exists the distinct possibility that it > can not be done in Python without breaking backwards compatibility). > Terry > -----BEGIN PGP SIGNATURE----- > Version: PGPfreeware 5.0i for non-commercial use > Charset: noconv > iQA/AwUBOL2OUqfuGVwXgOQkEQJTpQCggOuFT2ZVavzMhy+jZgoehnrK5uIAoMzO > D5OOdLtBvT97ee7vkckO+0Qt > =SmqL > -----END PGP SIGNATURE----- > _______________________________________________ > Numpy-discussion mailing list > Numpy-discussion at lists.sourceforge.net > http://lists.sourceforge.net/mailman/listinfo/numpy-discussion From Oliphant.Travis at mayo.edu Thu Mar 2 02:18:28 2000 From: Oliphant.Travis at mayo.edu (Travis Oliphant) Date: Thu, 2 Mar 2000 01:18:28 -0600 (CST) Subject: [Numpy-discussion] Survey results Message-ID: I'm still here working away at the code cleanup of Numerical Python. I know some of you may be interested in the results of a survey related to that effort. Here are the results so far: 1: How long have your been using NumPy: 19 response: <= 1 year: 8 1-3 years: 6 4-5 years: 5 2: How important is NumPy to your daily work now? (1-Depend on it, 5-Dabble with it) 1 - 17 2 - 5 3 - 5 4 - 3 Avg: 1.8 3: How would you rate the current functionality of NumPy? (1-Love it, 5-Hate it) 1 - 2 2 - 19 3 - 8 4 - 1 Avg: 2.27 4: How important is it to you for NumPy to get into the Python core? (1-Very Important, 5-Not Important) 1 - 8 2 - 14 3 - 4 3 - 4 Avg: 2.1 5: How much interest do you have in improving NumPy? (1-Unlimited, 5-None) 1 - 10 2 - 13 3 - 7 Avg: 1.9 6: How concerned are you about alterations to the underlying C-structure of NumPy? (1-Very concerned, 5-Don't care)) 1 - 3 2 - 2 3 - 8 4 - 5 5 - 12 Avg: 3.7 7: How important is memory conservation to you? (1-Very important, 5-Not important) 1 - 11 2 - 10 3 - 6 4 - 1 5 - 2 Avg: 2.1 8: How important is it to you that the underlying code to NumPy be easy to understand? (1-Very important, 5-Not important) 1 - 7 2 - 13 3 - 5 4 - 5 Avg: 2.3 9: How important is it to you that NumPy be fast? 
(1-Very important, 5-Not important) 1 - 22 2 - 7 3 - 1 Avg: 1.3 10: How happy are you with the current coercion rules (including spacesaving arrays)? (1-Happy, 5-Miserable) 1 - 3 2 - 13 3 - 10 4 - 2 Avg: 2.4 11: Should mixed-precision arithmetic cast to the largest memory type (yes) or the least memory type (no)? Yes - 23 No - 6 12: Should object arrays (typecode='O') remain a part of NumPy? (1-Agree, 5-Disagree) 1 - 13 2 - 3 3 - 10 4 - 2 Avg: 2.1 13: Should slices (e.g. x[3:10]) be changed to be copies? Yes - 12 No - 16 So, as you can see the results were pretty clear in some areas and quite controversial in other areas. Have fun interpreting them... Travis From gpk at bell-labs.com Thu Mar 2 09:26:27 2000 From: gpk at bell-labs.com (Greg Kochanski) Date: Thu, 02 Mar 2000 09:26:27 -0500 Subject: [Numpy-discussion] Re: Numpy-discussion digest, Vol 1 #20 - 5 msgs References: <200003012006.MAA16796@lists.sourceforge.net> Message-ID: <38BE7A13.3F133F03@bell-labs.com> Look at Scientific Python (see http://starship.python.net/crew/hinsen/scientific.html ) for some differentiation routines. > From: Hassan Aurag > Date: Wed, 01 Mar 2000 14:32:48 GMT > To: Gmath-devel at lists.sourceforge.net, numpy-discussion at lists.sourceforge.net > Subject: [Numpy-discussion] Derivatives > > Hi, > > attached is a file called Derivative.py. > > It computes derivatives and is based on an algorithm found in > Numerical Recipes in C. > > What to do you think about it and has anyone started a "serious" > calculus oriented subpackage for Numerical Python in general? From vanandel at atd.ucar.edu Thu Mar 2 10:21:40 2000 From: vanandel at atd.ucar.edu (Joe Van Andel) Date: Thu, 02 Mar 2000 08:21:40 -0700 Subject: [Numpy-discussion] python FAQTS Message-ID: <38BE8704.5FE50987@atd.ucar.edu> I just started looking at the new Python Knowledge Base, maintained at http://python.faqts.com This looks like a nice simple way of maintaining FAQs and link collections. I've already registered, and started building links to various packages that work with Numeric Python. Since sourceforge.net doesn't seem to offer these collaborative FAQ building tools, I'd propose that we should add to the existing Numeric Python FAQs, and expand the link collection that I've (just) started. Could we add a link to the (Numeric) Python Knowledge Base to Numeric Python's home on sourceforge? -- Joe VanAndel National Center for Atmospheric Research http://www.atd.ucar.edu/~vanandel/ Internet: vanandel at ucar.edu From pauldubois at home.com Thu Mar 2 11:37:57 2000 From: pauldubois at home.com (Paul F. Dubois) Date: Thu, 2 Mar 2000 08:37:57 -0800 Subject: [Numpy-discussion] python FAQTS In-Reply-To: <38BE8704.5FE50987@atd.ucar.edu> Message-ID: Done. Also added link on python.faqts.com to Numerical and Pyfort. > -----Original Message----- > From: numpy-discussion-admin at lists.sourceforge.net > [mailto:numpy-discussion-admin at lists.sourceforge.net]On Behalf Of Joe > Van Andel > Sent: Thursday, March 02, 2000 7:22 AM > To: numpy-discussion at lists.sourceforge.net > Cc: nathan at synop.com > Subject: [Numpy-discussion] python FAQTS > > Could we add a link to the (Numeric) Python Knowledge Base to Numeric > Python's home on > sourceforge? 
> From nathan at synop.com Thu Mar 2 12:04:24 2000 From: nathan at synop.com (Nathan Wallace) Date: Thu, 02 Mar 2000 12:04:24 -0500 Subject: [Numpy-discussion] python FAQTS References: Message-ID: <38BE9F18.7E840552@synop.com> Please don't hesitate to ask if there are any changes / improvements to FAQTs that could be made to help your project. Cheers, Nathan > Done. Also added link on python.faqts.com to Numerical and Pyfort. From collins at rushe.aero.org Thu Mar 2 14:40:58 2000 From: collins at rushe.aero.org (JEFFERY COLLINS) Date: Thu, 2 Mar 2000 11:40:58 -0800 Subject: [Numpy-discussion] numpy on NT Message-ID: <200003021940.LAA13341@rushe.aero.org> A colleague is attempting to install numpy on an NT machine. How is this done? I tried to help, but the install procedure is apparently different from what I am accustomed on Unix. It appears that the python-numpy-15.2.zip is a precompiled distribution ready for dumping into some directory. Since it doesn't contain a setup.py, I presume that Distutils isn't necessary. Also, what is the accepted way of setting PYTHONPATH on NT? Thanks, Jeff From DavidA at ActiveState.com Thu Mar 2 14:55:15 2000 From: DavidA at ActiveState.com (David Ascher) Date: Thu, 2 Mar 2000 11:55:15 -0800 Subject: [Numpy-discussion] numpy on NT In-Reply-To: <200003021940.LAA13341@rushe.aero.org> Message-ID: > A colleague is attempting to install numpy on an NT machine. How is > this done? I tried to help, but the install procedure is apparently > different from what I am accustomed on Unix. > > It appears that the python-numpy-15.2.zip is a precompiled > distribution ready for dumping into some directory. Since it doesn't > contain a setup.py, I presume that Distutils isn't necessary. Indeed. You can just unzip it straight in to your main Python directory (typically C:\Program Files\Python). > Also, what is the accepted way of setting PYTHONPATH on NT? Go to the control panel, click on the System icon, pick the Environment tab, and add an entry (usually in the USER section) for PYTHONPATH. But you only need to do so if you don't want to install NumPy in the main Python directory. --david From pauldubois at home.com Mon Mar 6 13:50:19 2000 From: pauldubois at home.com (Paul F. Dubois) Date: Mon, 6 Mar 2000 10:50:19 -0800 Subject: [Numpy-discussion] Volunteer sought for BLAS/LINPACK restructure In-Reply-To: <001801bf71fa$41f681a0$01f936d1@janus> Message-ID: We are seeking a volunteer developer for Numeric who will remove the current BLAS/LINPACK lite stuff in favor of linking to whatever the native version is on a particular machine. The current default is that you have to work harder to get the good ones than our bad ones; we want to reverse that. We have gotten a lot of complaints about the current situation and while we are aware of the counter arguments the Council of Nummies has reached a consensus to do this. A truly excited volunteer would widen the amount of stuff that the interface can get to. They would work via the CVS tree; see http://numpy.sourceforge.net. Please reply to dubois at users.sourceforge.net. > -----Original Message----- > From: numpy-discussion-admin at lists.sourceforge.net > [mailto:numpy-discussion-admin at lists.sourceforge.net]On Behalf Of James > R. Webb > Sent: Monday, February 07, 2000 10:04 PM > To: numpy-discussion at lists.sourceforge.net > Cc: matrix-sig at python.org > Subject: [Numpy-discussion] Re: [Matrix-SIG] An Experiment in > code-cleanup. 
> > > There is now a linux native BLAS available through links at > http://www.cs.utk.edu/~ghenry/distrib/ courtesy of the ASCI Option Red > Project. > > There is also ATLAS (http://www.netlib.org/atlas/). > > Either library seems to link into NumPy without a hitch. > > ----- Original Message ----- > From: "Beausoleil, Raymond" > To: > Cc: > Sent: Tuesday, February 08, 2000 2:31 PM > Subject: RE: [Matrix-SIG] An Experiment in code-cleanup. > > > > I've been reading the posts on this topic with considerable > interest. For > a > > moment, I want to emphasize the "code-cleanup" angle more literally than > the > > functionality mods suggested so far. > > > > Several months ago, I hacked my personal copy of the NumPy > distribution so > > that I could use the Intel Math Kernel Library for Windows. The IMKL is > > (1) freely available from Intel at > > http://developer.intel.com/vtune/perflibst/mkl/index.htm; > > (2) basically BLAS and LAPACK, with an FFT or two thrown in for good > > measure; > > (3) optimized for the different x86 processors (e.g., generic > x86, Pentium > > II & III); > > (4) configured to use 1, 2, or 4 processors; and > > (5) configured to use multithreading. > > It is an impressive, fast implementation. I'm sure there are similar > native > > libraries available on other platforms. > > > > Probably due to my inexperience with both Python and NumPy, it took me a > > couple of days to successfully tear out the f2c'd stuff and get the IMKL > > linking correctly. The parts I've used work fine, but there are probably > > other features that I haven't tested yet that still aren't up > to snuff. In > > any case, the resulting code wasn't very pretty. > > > > As long as the NumPy code is going to be commented and cleaned > up, I'd be > > glad to help make sure that the process of using a native BLAS/LAPACK > > distribution (which was probably compiled using Fortran storage > and naming > > conventions) is more straightforward. Among the more tedious issues to > > consider are: > > (1) The extent of the support for LAPACK. Do we want to stick > with LAPACK > > Lite? > > (2) The storage format. If we've still got row-ordered matrices > under the > > hood, and we want to use native LAPACK libraries that were > compiled using > > column-major format, then we'll have to be careful to set all > of the flags > > correctly. This isn't going to be a big deal, _unless_ NumPy > will support > > more of LAPACK when a native library is available. Then, of > course, there > > are the special cases: the IMKL has both a C and a Fortran interface to > the > > BLAS. > > (3) Through the judicious use of header files with compiler-dependent > flags, > > we could accommodate the various naming conventions used when > the FORTRAN > > libraries were compiled (e.g., sgetrf_ or SGETRF). > > > > The primary output of this effort would be an expansion of the > "Compilation > > Notes" subsection of Section 15 of the NumPy documentation, and some > header > > files that made the recompilation easier than it is now. 
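The storage-format point in (2) is easy to see with plain Numeric: a row-major (C-order) array occupies memory in exactly the order a column-major (Fortran-order) library would use for its transpose, which is why wrappers can often avoid copies by asking the Fortran routine to work on the transposed problem. A small sketch, independent of any particular LAPACK binding:

import Numeric

a = Numeric.array([[1., 2., 3.],
                   [4., 5., 6.]])            # 2x3, stored row by row
print Numeric.ravel(a)                       # 1. 2. 3. 4. 5. 6.  -- the row-major memory layout
print Numeric.ravel(Numeric.transpose(a))    # 1. 4. 2. 5. 3. 6.  -- the same data traversed
                                             # column-major, i.e. what Fortran would see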
> > > > Regards, > > > > Ray > > > > ============================ > > Ray Beausoleil > > Hewlett-Packard Laboratories > > mailto:beausol at hpl.hp.com > > Vox: 425-883-6648 > > Fax: 425-883-2535 > > HP Telnet: 957-4951 > > ============================ > > > > _______________________________________________ > > Matrix-SIG maillist - Matrix-SIG at python.org > > http://www.python.org/mailman/listinfo/matrix-sig > > > > > _______________________________________________ > Numpy-discussion mailing list > Numpy-discussion at lists.sourceforge.net > http://lists.sourceforge.net/mailman/listinfo/numpy-discussion From godzilla at netmeg.net Mon Mar 6 17:16:27 2000 From: godzilla at netmeg.net (Les Schaffer) Date: Mon, 6 Mar 2000 17:16:27 -0500 (EST) Subject: [Numpy-discussion] moving matrix-SIG archives to SourceForge Message-ID: <14532.11835.181972.492971@gargle.gargle.HOWL> Is there any chance of getting the old matrix-SIG archives moved over to SourceForge location and have them made searchable? i wanted to look up stuff on broadcast rules in NumPy and remembered there were posts on it in the old archives, but i dont see any way to search the things. thanks les schaffer From hinsen at dirac.cnrs-orleans.fr Tue Mar 7 15:13:58 2000 From: hinsen at dirac.cnrs-orleans.fr (hinsen at dirac.cnrs-orleans.fr) Date: Tue, 7 Mar 2000 21:13:58 +0100 Subject: [Numpy-discussion] Volunteer sought for BLAS/LINPACK restructure In-Reply-To: References: Message-ID: <200003072013.VAA16809@chinon.cnrs-orleans.fr> > We are seeking a volunteer developer for Numeric who will remove the current > BLAS/LINPACK lite stuff in favor of linking to whatever the native version > is on a particular machine. The current default is that you have to work This should take several volunteers; nobody has access to all machine types! > A truly excited volunteer would widen the amount of stuff that the interface > can get to. They would work via the CVS tree; see That is not necessary; a full BLAS/LAPACK interface has existed for years, written by Doug Heisterkamp. In fact, the lapack_lite module is simply a subset of it. By some strange coincidence I have worked a bit on this just a few days ago. I have added a compilation/installation script and added thread support (such that LAPACK calls don't block other threads). You can pick it up at ftp://dirac.cnrs-orleans.fr/pub/ as a tar archive and as RPMs for (RedHat) Linux. Konrad. -- ------------------------------------------------------------------------------- Konrad Hinsen | E-Mail: hinsen at cnrs-orleans.fr Centre de Biophysique Moleculaire (CNRS) | Tel.: +33-2.38.25.55.69 Rue Charles Sadron | Fax: +33-2.38.63.15.17 45071 Orleans Cedex 2 | Deutsch/Esperanto/English/ France | Nederlands/Francais ------------------------------------------------------------------------------- From mhagger at blizzard.harvard.edu Tue Mar 7 23:33:15 2000 From: mhagger at blizzard.harvard.edu (Michael Haggerty) Date: 07 Mar 2000 23:33:15 -0500 Subject: [Numpy-discussion] Volunteer sought for BLAS/LINPACK restructure In-Reply-To: "Paul F. Dubois"'s message of "Mon, 6 Mar 2000 10:50:19 -0800" References: Message-ID: Hello, "Paul F. Dubois" writes: > We are seeking a volunteer developer for Numeric who will remove the > current BLAS/LINPACK lite stuff in favor of linking to whatever the > native version is on a particular machine. I don't have much time to help out with the python interface, but I have some (mostly) machine-translated C/C++ header files for LAPACK that might be useful. 
These files could be SWIGged as a starting point for a python binding. Even better, the perl (*yuck*) script that does the translation could be modified to prototype input vs. output arrays differently (the script determines which arrays are input vs. output from the comment lines from the Fortran source). Then SWIG typemaps could be written that handle input/output correctly and much of the wrapping job would be automated. Of course all this won't help with row vs. column storage format. The header files and a translation script can be obtained from http://monsoon.harvard.edu/~mhagger/download/ Unfortuantely I don't have the same thing for BLAS, mostly because the comments in the BLAS Fortran files are less careful and consistent, making machine interpretation more difficult. Please let me know if you find these headers useful. Yours, Michael -- Michael Haggerty mhagger at blizzard.harvard.edu From godzilla at netmeg.net Thu Mar 9 10:14:21 2000 From: godzilla at netmeg.net (Les Schaffer) Date: Thu, 9 Mar 2000 10:14:21 -0500 (EST) Subject: [Numpy-discussion] old matric-SIG archives Message-ID: <14535.49101.728831.410812@gargle.gargle.HOWL> well, let me try it this way: 1.) Is this list the place where people who have the capability of moving the old matrix-sig archives from python.org to sourceforge hang out? 2.) If you're here and listening, i see that the sourceforge archives are already searchable, so.... if we moved the old matrix-sig archives over to sourceforge, is there more work need done to make __them__ searchable? i volunteer to make this happen. les schaffer -- ____ Les Schaffer ___| --->> Engineering R&D <<--- Theoretical & Applied Mechanics | Designspring, Inc. Center for Radiophysics & Space Research | http://www.designspring.com/ Cornell Univ. schaffer at tam.cornell.edu | les at designspring.com From Oliphant.Travis at mayo.edu Thu Mar 9 15:46:24 2000 From: Oliphant.Travis at mayo.edu (Travis Oliphant) Date: Thu, 9 Mar 2000 14:46:24 -0600 (CST) Subject: [Numpy-discussion] Moving archives to Sourceforge. Message-ID: I'm responding, but I don't know the answer to your question. I am not familiar with Mailman, or how to move archives on to sourceforge, or if it is even possible. Sorry for the inadequate help, but I suspect that you are not getting a response because nobody knows. Sincerely, Travis Oliphant From godzilla at netmeg.net Thu Mar 9 16:52:36 2000 From: godzilla at netmeg.net (Les Schaffer) Date: Thu, 9 Mar 2000 16:52:36 -0500 (EST) Subject: [Numpy-discussion] Moving archives to Sourceforge. In-Reply-To: References: Message-ID: <14536.7460.380291.899054@gargle.gargle.HOWL> Travis: > qI'm responding, but I don't know the answer to your question. I am not familiar with Mailman, or how to move archives on to sourceforge, or if it is even possible. i have been in contact with Paul Dubois and i am tracking down the feasibility of making the transfer happen. it amounts to cat'ing the mailbox from the matrix sig archive onto the sourceforge archive, and re-running the archiver on the resulting mailbox (according to barry warsaw). so we need to find out if sourceforge allows us to fiddle with the mailbox in that fashion. les schaffer From Barrett at stsci.edu Mon Mar 13 14:19:51 2000 From: Barrett at stsci.edu (Paul Barrett) Date: Mon, 13 Mar 2000 14:19:51 -0500 (EST) Subject: [Numpy-discussion] Documentation Message-ID: <14541.15379.555688.924323@nem-srvr.stsci.edu> I have a couple of question about the Numpy documentation: 1. 
Is there a recent version of the Numerical Python manual available anywhere? I can't find it at the SourceForge and I've tried xfiles.llnl.org, but can't get through. (But from what Paul Dubois has said recently about the LLNL site, I shouldn't expect to either.) 2. Have any changes been made to the documentation since about Q1 1999? I think my current version dates from about this period. -- Paul From pbleyer at dgf.uchile.cl Mon Mar 13 16:54:29 2000 From: pbleyer at dgf.uchile.cl (Pablo Bleyer Kocik) Date: Mon, 13 Mar 2000 16:54:29 -0500 Subject: [Numpy-discussion] Documentation References: <14541.15379.555688.924323@nem-srvr.stsci.edu> Message-ID: <38CD6395.AA5FCF73@dgf.uchile.cl> Paul Barrett wrote: > I have a couple of question about the Numpy documentation: > > 1. Is there a recent version of the Numerical Python manual available > anywhere? I can't find it at the SourceForge and I've tried > xfiles.llnl.org, but can't get through. (But from what Paul Dubois > has said recently about the LLNL site, I shouldn't expect to > either.) > > 2. Have any changes been made to the documentation since about Q1 > 1999? I think my current version dates from about this period. > > -- Paul Who is maintaining the Python manual actually? -- Pablo Bleyer Kocik |"And all this science I don't understand pbleyer | It's just my job five days a week @dgf.uchile.cl | A rocket man, a rocket man..." - Elton John from sys import*;from string import*;a=argv;[s,p,q]=filter(lambda x:x[:1]!= '-',a);d='-d'in a;e,n=atol(p,16),atol(q,16);l=(len(q)+1)/2;o,inb=l-d,l-1+d while s:s=stdin.read(inb);s and map(stdout.write,map(lambda i,b=pow(reduce( lambda x,y:(x<<8L)+y,map(ord,s)),e,n):chr(b>>8*i&255),range(o-1,-1,-1))) From pauldubois at home.com Mon Mar 13 14:42:13 2000 From: pauldubois at home.com (Paul F. Dubois) Date: Mon, 13 Mar 2000 11:42:13 -0800 Subject: [Numpy-discussion] Documentation In-Reply-To: <38CD6395.AA5FCF73@dgf.uchile.cl> Message-ID: > Who is maintaining the Python manual actually? > Me. From vanandel at atd.ucar.edu Mon Mar 13 15:40:29 2000 From: vanandel at atd.ucar.edu (Joe Van Andel) Date: Mon, 13 Mar 2000 13:40:29 -0700 Subject: [Numpy-discussion] single precision version of interp Message-ID: <38CD523D.603F8B4@atd.ucar.edu> As I've previously mentioned to Paul, I need a single precision version of 'interp()', so I can use it on large single precision arrays, without returning a double precision array. In my own copy of NumPy, I've written such a routine, and added it to 'arrayfns.c '. Naturally, I want to see this functionality built into the official release, so I do not have to apply my own patches to new releases. Can we decide how such single precision needs are accomodated? Should there be a keyword argument on the 'interp()' call, that calls the single precision version? Or should the caller invoke 'interpf()', rather than 'interp()?' I don't much care what the solution looks like, as long as people agree that: 1) we need such functionality in NumPy. 2) we can establish a precedent on how single precision vs double precision methods are invoked. Please let me know your opinions on how this should be resolved. Thanks much! 
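Since the question is how the precision choice should be spelled, here is a plain-Python sketch of interp's documented semantics with a typecode knob bolted on. It is illustrative only: interp_sketch is a made-up name, the real routine is written in C, and neither a typecode argument nor an interpf() exists in a released Numeric at this point.

import Numeric

def interp_sketch(y, x, z, typecode='d'):
    # (x, y) define a piecewise linear function; values of z below x[0]
    # map to y[0] and values above x[-1] map to y[-1], as in arrayfns.interp.
    y = Numeric.asarray(y, 'd')
    x = Numeric.asarray(x, 'd')
    z = Numeric.asarray(z, 'd')
    idx = Numeric.clip(Numeric.searchsorted(x, z) - 1, 0, len(x) - 2)
    slopes = (Numeric.take(y, idx + 1) - Numeric.take(y, idx)) / \
             (Numeric.take(x, idx + 1) - Numeric.take(x, idx))
    result = Numeric.take(y, idx) + slopes * (z - Numeric.take(x, idx))
    result = Numeric.where(Numeric.less(z, x[0]), y[0], result)
    result = Numeric.where(Numeric.greater(z, x[-1]), y[-1], result)
    # the point under discussion: hand back Float32 or Float64 on request
    return result.astype(typecode)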
-- Joe VanAndel National Center for Atmospheric Research http://www.atd.ucar.edu/~vanandel/ Internet: vanandel at ucar.edu From jhauser at ifm.uni-kiel.de Mon Mar 13 18:54:22 2000 From: jhauser at ifm.uni-kiel.de (Janko Hauser) Date: Tue, 14 Mar 2000 00:54:22 +0100 (CET) Subject: [Numpy-discussion] Documentation In-Reply-To: References: <38CD6395.AA5FCF73@dgf.uchile.cl> Message-ID: <20000313235423.30614.qmail@lisboa.ifm.uni-kiel.de> I'm currently build a reference for NumPy and some of the other modules in the format of the standard python library reference. At the moment this is more a personal effort to get an overview which functions are there. I do not really write stuff myself, but bring information of various sources together and reformat it. Than I want to test some approaches to extract info for a function from the HTML source at the interactive commandline. I see this not as a replacement for the excellent PDF documentation, which has far more information and many examples. The standard latex documentation package for Python has currently a bug with the index generation. If this is solved I will put a HTML tree online, so it can be examined and criticized. Yust for information, __Janko From jhauser at ifm.uni-kiel.de Mon Mar 13 18:59:54 2000 From: jhauser at ifm.uni-kiel.de (Janko Hauser) Date: Tue, 14 Mar 2000 00:59:54 +0100 (CET) Subject: [Numpy-discussion] single precision version of interp In-Reply-To: <38CD523D.603F8B4@atd.ucar.edu> References: <38CD523D.603F8B4@atd.ucar.edu> Message-ID: <20000313235954.30626.qmail@lisboa.ifm.uni-kiel.de> Shouldn't this be handled in the function and decided by the typecode of the parameters. Or put a typcode keyword parameter in the function signature of interp. To derive functions for different types and put these into the public namespace is not so good IMHO. This could also be handled by a wrapper, which calls different the compiled _functions. __Janko From aurag at crm.umontreal.ca Mon Mar 13 21:20:06 2000 From: aurag at crm.umontreal.ca (Hassan Aurag) Date: Tue, 14 Mar 2000 02:20:06 GMT Subject: [Numpy-discussion] Documentation In-Reply-To: <20000313235423.30614.qmail@lisboa.ifm.uni-kiel.de> References: <38CD6395.AA5FCF73@dgf.uchile.cl> <20000313235423.30614.qmail@lisboa.ifm.uni-kiel.de> Message-ID: <20000314.2200600@adam-aurag.penguinpowered.com> This is not totally related, but I'd love to see a docbook based doc for numpy. See, I am writing GmatH (http://gmath.sourceforge.net) and among otherb things it provides a nice GUI (I hope) to NumPy. Now I'd love to see a docbook based thinggy that could be added to the app using a gtkhtml widget (duh!). It'd be nice to also have a Quick help file that I could incorporate into the app. >>>>>>>>>>>>>>>>>> Original Message <<<<<<<<<<<<<<<<<< On 3/13/00, 6:54:22 PM, Janko Hauser wrote regarding RE: [Numpy-discussion] Documentation: > I'm currently build a reference for NumPy and some of the other > modules in the format of the standard python library reference. At the > moment this is more a personal effort to get an overview which > functions are there. I do not really write stuff myself, but bring > information of various sources together and reformat it. > Than I want to test some approaches to extract info for a function > from the HTML source at the interactive commandline. > I see this not as a replacement for the excellent PDF documentation, > which has far more information and many examples. 
> The standard latex documentation package for Python has currently a > bug with the index generation. If this is solved I will put a HTML > tree online, so it can be examined and criticized. > Yust for information, > __Janko > _______________________________________________ > Numpy-discussion mailing list > Numpy-discussion at lists.sourceforge.net > http://lists.sourceforge.net/mailman/listinfo/numpy-discussion From jbaddor at physics.mcgill.ca Wed Mar 22 18:31:07 2000 From: jbaddor at physics.mcgill.ca (Jean-Bernard Addor) Date: Wed, 22 Mar 2000 18:31:07 -0500 (EST) Subject: [Numpy-discussion] how to compute a Gamma function with Numpy? Message-ID: Hey Numpy people! I need to compute the Gamma function (the one related to factorial) with Numpy. It seems not to be included in the module, am I right? Is any code for computing it available? Where could I find an algorithm to adapt? Jean-Bernard From cgw at fnal.gov Wed Mar 22 19:07:23 2000 From: cgw at fnal.gov (Charles G Waldman) Date: Wed, 22 Mar 2000 18:07:23 -0600 (CST) Subject: [Numpy-discussion] how to compute a Gamma function with Numpy? In-Reply-To: References: Message-ID: <14553.24635.191392.10424@buffalo.fnal.gov> Jean-Bernard Addor writes: > Hey Numpy people! > > I need to compute the Gamma function (the one related to factorial) with > Numpy. It seems not to be included in the module, am I right? Yes, but it is present in Travis Oliphant's "cephes" module which is an add-on to stock NumPy. See http://oliphant.netpedia.net/packages/included_functions.html for the list of functions, and go to http://oliphant.netpedia.net/ to download the package itself. From jbaddor at physics.mcgill.ca Wed Mar 22 20:48:31 2000 From: jbaddor at physics.mcgill.ca (Jean-Bernard Addor) Date: Wed, 22 Mar 2000 20:48:31 -0500 (EST) Subject: [Numpy-discussion] quadrature.py vs Multipack Message-ID: Hey Numpy people! I have to integrate functions like: Int_0^1 (t*(t-1))**-(H/2) dt or Int_-1^1 1/abs(t)**(1-H) dt, with H around .3 I just tried quadrature on the 1st one: it needs a very high order of quadrature to be precise and is in that case slow. Would it work better with Multipack? (I have to upgrade to python 1.5.2 to try Multipack!) Thank you for your help. Jean-Bernard From peter.chang at nottingham.ac.uk Thu Mar 23 10:37:07 2000 From: peter.chang at nottingham.ac.uk (peter.chang at nottingham.ac.uk) Date: Thu, 23 Mar 2000 15:37:07 +0000 (GMT) Subject: [Numpy-discussion] quadrature.py vs Multipack In-Reply-To: Message-ID: On Wed, 22 Mar 2000, Jean-Bernard Addor wrote: > Hey Numpy people! > > I have to integrate functions like: > > Int_0^1 (t*(t-1))**-(H/2) dt This is a beta function with arguments (1-H/2,1-H/2) and is related to gamma functions. B(x,y) = Gamma(x)Gamma(y)/Gamma(x+y) > or > > Int_-1^1 1/abs(t)**(1-H) dt, with H around .3 This can be done analytically = 2 Int_0^1 t**(H-1) dt = 2 [ t**(H)/H ]_0^1 = 2/H > I just tried quadrature on the 1st one: it needs a very high order of quadrature > to be precise and is in that case slow. HTH Peter From jbaddor at physics.mcgill.ca Thu Mar 23 11:00:44 2000 From: jbaddor at physics.mcgill.ca (Jean-Bernard Addor) Date: Thu, 23 Mar 2000 11:00:44 -0500 (EST) Subject: [Numpy-discussion] Re: quadrature.py vs Multipack In-Reply-To: Message-ID: Hey! I am now able to reply to my question! Quadpack from multipack is much quicker and more accurate!
From jbaddor at physics.mcgill.ca Thu Mar 23 11:00:44 2000 From: jbaddor at physics.mcgill.ca (Jean-Bernard Addor) Date: Thu, 23 Mar 2000 11:00:44 -0500 (EST) Subject: [Numpy-discussion] Re: quadrature.py vs Multipack In-Reply-To: Message-ID: Hey! I am now able to reply to my question! Quadpack from multipack is much quicker and more accurate! And it does not need Python 1.5.2 (on my system it crashes with the new python, but it is a quick installation). Comparison: >>> quadrature.quad(lambda t: t**(H-1), 0, 1) 2.54866576894 >>> quadpack.quad(lambda t: t**(H-1), 0, 1) (3.33333333333, 4.26325641456e-14) >>> 1/H 3.33333333333 The expected result is 1/H (H was .3). It is possible to improve the precision of the quadrature result, but it becomes very slow and never gets very precise. Jean-Bernard On Wed, 22 Mar 2000, Jean-Bernard Addor wrote: > Hey Numpy people! > > I have to integrate functions like: > > Int_0^1 (t*(1-t))**-(H/2) dt > > or > > Int_-1^1 1/abs(t)**(1-H) dt, with H around .3 > > I just tried quadrature on the 1st one: it needs a very high order of quadrature > to be precise and is in that case slow. > > Would it work better with Multipack? > > (I have to upgrade to python 1.5.2 to try Multipack!) > > Thank you for your help. > > Jean-Bernard > > From cgw at fnal.gov Thu Mar 23 16:14:49 2000 From: cgw at fnal.gov (Charles G Waldman) Date: Thu, 23 Mar 2000 15:14:49 -0600 (CST) Subject: [Numpy-discussion] Doc strings for ufuncs Message-ID: <14554.35145.163207.135868@buffalo.fnal.gov> I really, really, really like the Cephes module and a lot of the work Travis Oliphant has been doing on Numeric Python. Nice to see Numeric getting more and more powerful all the time. However, as more and more functions get added to the libraries, their names become more and more obscure, and it sure would be nice to have doc-strings for them. For the stock Numeric ufuncs like "add", the meanings are self-evident, but things like "pdtri" and "incbet" are a bit less obvious. (One of my pet peeves with Python is that there are so many functions and classes with empty doc-strings. Bit by bit, I'm trying to add them in.) However, one little problem with the current Numeric implementation is that ufunc objects don't have support for a doc string; the doc string is hardwired in the type object, and comes up as: "Optimizied FUNCtions make it possible to implement arithmetic with matrices efficiently" This is not only non-helpful, it's misspelled ("Optimizied"?) Well, according to the charter, a well-behaved Nummie will "Fix bugs at will, without prior consultation." But a well-behaved Nummie will also "Add new features only after consultation with other Nummies". So, I hereby declare doing something about the useless doc-strings to be fixing a bug and not adding a feature ;-) The patch below adds doc strings to all the ufuncs in the Cephes module. They are automatically extracted from the HTML documentation for the Cephes module. (In the process of doing this, I also added some missing items to said HTML documentation.) This patch depends on another patch, which I am submitting via SourceForge, which allows ufunc objects to have doc strings. With these patches, you get this: >>> import cephes >>> print cephes.incbet.__doc__ incbet(a,b,x) returns the incomplete beta integral of the arguments, evaluated from zero to x: gamma(a+b) / (gamma(a)*gamma(b)) * integral(t**(a-1) (1-t)**(b-1), t=0..x). instead of this: >>> print cephes.incbet.__doc__ Optimizied FUNCtions make it possible to implement arithmetic with matrices efficiently Isn't that nicer? "Ni-Ni-Numpy!" Here's the "gendoc.py" script to pull the docstrings out of included_functions.h: -------------- next part -------------- An embedded and charset-unspecified text was scrubbed... Name: gendoc.py URL:
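(The gendoc.py attachment itself was scrubbed by the archiver. Purely to illustrate the approach described above, scraping each function's entry out of the HTML documentation and emitting C doc-string constants, here is a rough sketch; the <dt>/<dd> markup and the output format are assumptions made for the sketch, not the actual layout of the real file.)

import re, string, sys

# Assumed markup: each function is documented as
#   <dt>name(args)</dt> <dd>description ...</dd>
entry = re.compile(r'<dt>\s*(\w+)\s*(\([^)]*\))\s*</dt>\s*<dd>(.*?)</dd>',
                   re.DOTALL | re.IGNORECASE)

def strip_tags(text):
    # crude HTML-to-text conversion: drop tags, collapse whitespace
    return string.join(string.split(re.sub(r'<[^>]+>', ' ', text)))

html = open(sys.argv[1]).read()
for name, args, desc in entry.findall(html):
    doc = '%s%s %s' % (name, args, strip_tags(desc))
    # emit a C string constant suitable for pasting into a doc-string table
    print 'static char %s_doc[] = "%s";' % (name, doc.replace('"', '\\"'))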
-------------- next part -------------- And here are the patches to cephes: -------------- next part -------------- An embedded and charset-unspecified text was scrubbed... Name: cephes-patch URL: -------------- next part -------------- NB: This stuff won't work at all without the corresponding patches to the Numeric core, to be posted shortly to the sourceforge site. From hinsen at cnrs-orleans.fr Fri Mar 24 11:02:37 2000 From: hinsen at cnrs-orleans.fr (Konrad Hinsen) Date: Fri, 24 Mar 2000 17:02:37 +0100 Subject: [Numpy-discussion] Doc strings for ufuncs In-Reply-To: <14554.35145.163207.135868@buffalo.fnal.gov> (message from Charles G Waldman on Thu, 23 Mar 2000 15:14:49 -0600 (CST)) References: <14554.35145.163207.135868@buffalo.fnal.gov> Message-ID: <200003241602.RAA13230@chinon.cnrs-orleans.fr> > I really, really, really like the Cephes module and a lot of the work Me too! > So, I hereby declare doing something about the useless doc-strings to > be fixing a bug and not adding a feature ;-) Great, this is a step in the right direction. I am still hoping for an interactive environment that lets me consult docstrings while I work, but that won't happen before most Python functions actually have docstrings. Konrad. -- ------------------------------------------------------------------------------- Konrad Hinsen | E-Mail: hinsen at cnrs-orleans.fr Centre de Biophysique Moleculaire (CNRS) | Tel.: +33-2.38.25.55.69 Rue Charles Sadron | Fax: +33-2.38.63.15.17 45071 Orleans Cedex 2 | Deutsch/Esperanto/English/ France | Nederlands/Francais ------------------------------------------------------------------------------- From DavidA at ActiveState.com Fri Mar 24 11:41:43 2000 From: DavidA at ActiveState.com (David Ascher) Date: Fri, 24 Mar 2000 08:41:43 -0800 Subject: [Numpy-discussion] Doc strings for ufuncs In-Reply-To: <200003241602.RAA13230@chinon.cnrs-orleans.fr> Message-ID: > Great, this is a step in the right direction. I am still hoping for an > interactive environment that lets me consult docstrings while I work, > but that won't happen before most Python functions actually have > docstrings. Did you look at recent versions of IDLE (and Pythonwin, but not on your platforms =)? --david From pauldubois at home.com Fri Mar 24 18:08:33 2000 From: pauldubois at home.com (Paul F. Dubois) Date: Fri, 24 Mar 2000 15:08:33 -0800 Subject: [Numpy-discussion] Doc strings for ufuncs In-Reply-To: Message-ID: The CVS version now has doc strings for all the functions in umath. (C. Waldman, P. Dubois). From bhoel at server.python.net Sat Mar 25 11:26:16 2000 From: bhoel at server.python.net (Berthold Höllmann) Date: 25 Mar 2000 17:26:16 +0100 Subject: [Numpy-discussion] Recent CVS NumPy Version with recent CVS distutil and Linux Message-ID: Hello, I tried to install a NumPy version downloaded this morning, using distutils also downloaded this morning. This failed on my Linux box. First, the setup.py script is not compatible with the newer distutils API. Second, the glibc math.h includes a mathtypes.h which defines a function named gamma. This leads to a problem with ranlibmodule.c. I tried to solve both problems. This is documented with the applied patch. Cheers Berthold -------------- next part -------------- A non-text attachment was scrubbed...
Name: NumPyPatch Type: application/octet-stream Size: 3403 bytes Desc: not available URL: -------------- next part -------------- -- bhoel at starship.python.net / http://starship.python.net/crew/bhoel/ It is unlawful to use this email address for unsolicited ads (USC Title 47 Sec.227). I will assess a US$500 charge for reviewing and deleting each unsolicited ad. From jsaenz at wm.lc.ehu.es Tue Mar 28 05:46:17 2000 From: jsaenz at wm.lc.ehu.es (Jon Saenz) Date: Tue, 28 Mar 2000 12:46:17 +0200 (MET DST) Subject: [Numpy-discussion] Release of Pyclimate0.0 Message-ID:

Pyclimate 0.0 - Climate variability analysis using Numeric Python (28-Mar-00) Tuesday, 03/28/2000 Hello, all. We are making the first announcement of a pre-alpha release (version 0.0) of our package pyclimate, which presents some tools used for climate variability analysis and which makes extensive use of Numerical Python. It is released under the GNU Public License. We call this a pre-alpha release. Even though the routines are fairly well debugged, they are still growing, and we are thinking of making a stable release shortly after receiving some feedback from users. The package contains: IO functions ------------ -ASCII files (simple, but useful) -ncstruct.py: netCDF structure copier. From a COARDS compliant netCDF file, this module creates a COARDS compliant file, copying the needed attributes, dimensions, auxiliary variables, comments, and so on in one call. Time handling routines ---------------------- * JDTime.py -> Some C/Python functions to convert from date to Scaliger's Julian Day and from Julian Day to date. We are not trying to replace mxDate, but addressing a different problem. In particular, this module contains a routine especially suited to handling monthly time steps for climatological use. * JDTimeHandler.py -> Python module which parses the units attribute of the time variable in a COARDS file and which adequately offsets and scales the time values to read/save date fields. Interface to DCDFLIB.C ---------------------- A C/Python interface to the free DCDFLIB.C library is provided. This library allows direct and inverse computations of parameters for several probability distribution functions like Chi^2, normal, binomial, F, noncentral F, and many, many more. EOF analysis ------------ Empirical Orthogonal Function analysis based on the SVD decomposition of the data matrix and related functions to test the reliability/degeneracy of eigenvalues (truncation rules). Monte Carlo test of the stability of eigenvectors to temporal subsampling. SVD decomposition ----------------- SVD decomposition of the correlation matrix of two datasets, functions to compute the expansion coefficients, the squared cumulative covariance fraction and the homogeneous and heterogeneous correlation maps. Monte Carlo test of the stability of singular vectors to temporal subsampling. Multivariate digital filter --------------------------- Multivariate digital filter (high and low pass) based on the Kolmogorov-Zurbenko filter Differential operators on the sphere ------------------------------------ Some classes to compute differential operators (gradient and divergence) on a regular latitude/longitude grid. PREREQUISITES ============= To be able to use it, you will need: 1. Python ;-) 2. netCDF library 3.4 or later 3. Scientific Python, by Konrad Hinsen 4. DCDFLIB.C version 1.1 IF AND ONLY IF you really want to change the C code (JDTime.[hc] and pycdf.[hc]), then you will also need SWIG. COMPILATION =========== There is no automatic compilation/installation procedure, but the Makefile is quite straightforward. After manually editing the Makefile for different platforms, the commands make make test -> Runs a (not infallible) regression test make install will do it. SORRY, we don't use it under Windows, only UNIX. Volunteers to generate a Windows installation file would be appreciated, but we will not do it. DOCUMENTATION ============= LaTeX, Postscript and PDF versions of the manual are included in the distribution. However, we are preparing a new set of documentation according to PSA rules.
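As background for the EOF analysis described above: EOFs are conventionally obtained from the SVD of the anomaly data matrix, which is presumably the construction pyclimate uses internally. The sketch below is just that standard recipe in plain Numeric, not pyclimate's actual API; field is assumed to be an ntime x nspace array.

import Numeric
from LinearAlgebra import singular_value_decomposition

def eof_analysis(field):
    # Remove the time mean at each grid point to form anomalies.
    anomalies = field - Numeric.add.reduce(field) / float(field.shape[0])
    # Rows of vt are the spatial EOF patterns, u scaled by s gives the
    # principal-component time series, and s**2 measures the variance
    # carried by each mode.
    u, s, vt = singular_value_decomposition(anomalies)
    pcs = u * s[Numeric.NewAxis, :]
    varfrac = s**2 / Numeric.add.reduce(s**2)
    return vt, pcs, varfrac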
AVAILABILITY ============ http://lcdx00.wm.lc.ehu.es/~jsaenz/pyclimate (Europe) http://pyclimate.zubi.net/ (USA) http://starship.python.net/crew/~jsaenz (USA) Any feedback from the users of the package will be really appreciated by the authors. We will try to incorporate new developments, in case we are able to do so. Our time availability is scarce. Enjoy. Jon Saenz, jsaenz at wm.lc.ehu.es Juan Zubillaga, wmpzuesj at lg.ehu.es From godzilla at netmeg.net Tue Mar 28 12:35:39 2000 From: godzilla at netmeg.net (Les Schaffer) Date: Tue, 28 Mar 2000 12:35:39 -0500 (EST) Subject: [Numpy-discussion] del( distutils ) Message-ID: <14560.60779.465582.624964@gargle.gargle.HOWL> I am just trying to compile the latest CVS updates on NumPy, and am getting very cranky about the dependence of the build process on distutils. let me say it politely, using distutils at this point is for the birds. maybe for distributing released versions to the general public, where the distutils setup.py script has been tested on many platforms, it is a long term solution. but for right now, i miss the Makefile stuff. an example: (gustav)~/system/numpy/Numerical/: python setup.py build running build [snip] /usr/bin/egcc -c -I/usr/include/python1.5 -IInclude -O3 -mpentium -fpic Src/_numpymodule.c Src/arrayobject.c Src/ufuncobject.c Src/ufuncobject.c:783: conflicting types for `PyUFunc_FromFuncAndData' /usr/include/python1.5/ufuncobject.h:100: previous declaration of `PyUFunc_FromFuncAndData' Src/ufuncobject.c: In function `PyUFunc_FromFuncAndData': Src/ufuncobject.c:805: structure has no member named `doc' Src/ufuncobject.c: In function `ufunc_getattr': Src/ufuncobject.c:955: structure has no member named `doc' [blah blah blah] if this was from running make, i could be inside xemacs and click on the damn error and go right to the line in question. now, i gotta horse around, its a waste of time. another example, which i posed to c.l.p a week or so ago: it turned out that distutils doesn't know enough to notice that, for example, the arrayobject.h in the distribution is newer than the one in /usr/include/python1.5, so it breaks the build when API changes have been made. distutils should at least get the order of the -I's correct. but we're not a distutils SIG, we're NumPy, right? can we please go back, at least for now, to using -- or at a minimum distributing -- the Makefile's? les schaffer From vanandel at atd.ucar.edu Tue Mar 28 13:00:12 2000 From: vanandel at atd.ucar.edu (Joe Van Andel) Date: Tue, 28 Mar 2000 11:00:12 -0700 Subject: [Numpy-discussion] single precision patch for arrayfnsmodule.c :interp() Message-ID: <38E0F32C.E20B96CD@atd.ucar.edu> As I've mentioned earlier, I'm using interp() with very large datasets, and I can't afford to use double precision arrays. Here's a patch that lets interp() accept an optional typecode argument. Passing 'f' calls the new single precision version, and returns a single precision array. Passing no argument or 'd' uses the previous, double precision version. (No other array types are supported - an error is returned.) I hope this can be added to the CVS version of Numeric. Thanks!
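For reference, with such a patch applied the intended call pattern would presumably look like the following sketch (illustrative only, not captured output; the expected typecodes are given as comments):

import Numeric, arrayfns

x = Numeric.arange(0.0, 10.0, 0.1)
y = Numeric.sin(x)
z = Numeric.array([0.05, 1.234, 7.9], Numeric.Float32)

yd = arrayfns.interp(y, x, z)         # as before: double precision ('d') result
yf = arrayfns.interp(y, x, z, 'f')    # with the patch: single precision ('f') result
print yd.typecode(), yf.typecode()    # expected: d f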
-- Joe VanAndel National Center for Atmospheric Research http://www.atd.ucar.edu/~vanandel/ Internet: vanandel at ucar.edu -------------- next part -------------- *** arrayfnsmodule.c 1999/04/14 22:58:32 1.1 --- arrayfnsmodule.c 1999/08/12 23:27:28 *************** *** 6,11 **** --- 6,13 ---- #include #include + #define MAX_INTERP_DIMS 6 + static PyObject *ErrorObject; /* Define 2 macros for error handling: *************** *** 34,39 **** --- 36,43 ---- #define A_DIM(a,i) (((PyArrayObject *)a)->dimensions[i]) #define GET_ARR(ap,op,type,dim) \ Py_Try(ap=(PyArrayObject *)PyArray_ContiguousFromObject(op,type,dim,dim)) + #define GET_ARR2(ap,op,type,min,max) \ + Py_Try(ap=(PyArrayObject *)PyArray_ContiguousFromObject(op,type,min,max)) #define ERRSS(s) ((PyObject *)(PyErr_SetString(ErrorObject,s),0)) #define SETERR(s) if(!PyErr_Occurred()) ERRSS(errstr ? errstr : s) #define DECREF_AND_ZERO(p) do{Py_XDECREF(p);p=0;}while(0) *************** *** 571,576 **** --- 575,613 ---- return result ; } + static int + binary_searchf(float dval, float dlist [], int len) + { + /* binary_search accepts three arguments: a numeric value and + * a numeric array and its length. It assumes that the array is sorted in + * increasing order. It returns the index of the array's + * largest element which is <= the value. It will return -1 if + * the value is less than the least element of the array. */ + /* self is not used */ + int bottom , top , middle, result; + + if (dval < dlist [0]) + result = -1 ; + else { + bottom = 0; + top = len - 1; + while (bottom < top) { + middle = (top + bottom) / 2 ; + if (dlist [middle] < dval) + bottom = middle + 1 ; + else if (dlist [middle] > dval) + top = middle - 1 ; + else + return middle ; + } + if (dlist [bottom] > dval) + result = bottom - 1 ; + else + result = bottom ; + } + + return result ; + } static char arr_interp__doc__[] = "" ; *************** *** 597,609 **** Py_DECREF(ay); Py_DECREF(ax); return NULL ;} ! GET_ARR(az,oz,PyArray_DOUBLE,1); lenz = A_SIZE (az); dy = (double *) A_DATA (ay); dx = (double *) A_DATA (ax); dz = (double *) A_DATA (az); ! Py_Try (_interp = (PyArrayObject *) PyArray_FromDims (1, &lenz, ! PyArray_DOUBLE)); dres = (double *) A_DATA (_interp) ; slopes = (double *) malloc ( (leny - 1) * sizeof (double)) ; for (i = 0 ; i < leny - 1; i++) { --- 634,647 ---- Py_DECREF(ay); Py_DECREF(ax); return NULL ;} ! GET_ARR2(az,oz,PyArray_DOUBLE,1,MAX_INTERP_DIMS); lenz = A_SIZE (az); dy = (double *) A_DATA (ay); dx = (double *) A_DATA (ax); dz = (double *) A_DATA (az); ! /* create output array with same size as 'Z' input array */ ! Py_Try (_interp = (PyArrayObject *) PyArray_FromDims ! (A_NDIM(az), az->dimensions, PyArray_DOUBLE)); dres = (double *) A_DATA (_interp) ; slopes = (double *) malloc ( (leny - 1) * sizeof (double)) ; for (i = 0 ; i < leny - 1; i++) { *************** *** 627,632 **** --- 665,724 ---- return PyArray_Return (_interp); } + /* return float, rather than double */ + + static PyObject * + arr_interpf(PyObject *self, PyObject *args) + { + /* interp (y, x, z) treats (x, y) as a piecewise linear function + * whose value is y [0] for x < x [0] and y [len (y) -1] for x > + * x [len (y) -1]. An array of floats the same length as z is + * returned, whose values are ordinates for the corresponding z + * abscissae interpolated into the piecewise linear function. 
*/ + /* self is not used */ + PyObject * oy, * ox, * oz ; + PyArrayObject * ay, * ax, * az , * _interp; + float * dy, * dx, * dz , * dres, * slopes; + int leny, lenz, i, left ; + + Py_Try(PyArg_ParseTuple(args, "OOO", &oy, &ox, &oz)); + GET_ARR(ay,oy,PyArray_FLOAT,1); + GET_ARR(ax,ox,PyArray_FLOAT,1); + if ( (leny = A_SIZE (ay)) != A_SIZE (ax)) { + SETERR ("interp: x and y are not the same length."); + Py_DECREF(ay); + Py_DECREF(ax); + return NULL ;} + GET_ARR2(az,oz,PyArray_FLOAT,1,MAX_INTERP_DIMS); + lenz = A_SIZE (az); + dy = (float *) A_DATA (ay); + dx = (float *) A_DATA (ax); + dz = (float *) A_DATA (az); + /* create output array with same size as 'Z' input array */ + Py_Try (_interp = (PyArrayObject *) PyArray_FromDims + (A_NDIM(az), az->dimensions, PyArray_FLOAT)); + dres = (float *) A_DATA (_interp) ; + slopes = (float *) malloc ( (leny - 1) * sizeof (float)) ; + for (i = 0 ; i < leny - 1; i++) { + slopes [i] = (dy [i + 1] - dy [i]) / (dx [i + 1] - dx [i]) ; + } + for ( i = 0 ; i < lenz ; i ++ ) + { + left = binary_searchf (dz [i], dx, leny) ; + if (left < 0) + dres [i] = dy [0] ; + else if (left >= leny - 1) + dres [i] = dy [leny - 1] ; + else + dres [i] = slopes [left] * (dz [i] - dx [left]) + dy [left]; + } + + free (slopes); + Py_DECREF(ay); + Py_DECREF(ax); + Py_DECREF(az); + return PyArray_Return (_interp); + } static int incr_slot_ (float x, double *bins, int lbins) { int i ; *************** *** 1295,1300 **** --- 1387,1393 ---- {"array_set", arr_array_set, 1, arr_array_set__doc__}, {"index_sort", arr_index_sort, 1, arr_index_sort__doc__}, {"interp", arr_interp, 1, arr_interp__doc__}, + {"interpf", arr_interpf, 1, arr_interp__doc__}, {"digitize", arr_digitize, 1, arr_digitize__doc__}, {"zmin_zmax", arr_zmin_zmax, 1, arr_zmin_zmax__doc__}, {"reverse", arr_reverse, 1, arr_reverse__doc__}, From cgw at fnal.gov Tue Mar 28 13:23:45 2000 From: cgw at fnal.gov (Charles G Waldman) Date: Tue, 28 Mar 2000 12:23:45 -0600 (CST) Subject: [Numpy-discussion] del( distutils ) In-Reply-To: <14560.60779.465582.624964@gargle.gargle.HOWL> References: <14560.60779.465582.624964@gargle.gargle.HOWL> Message-ID: <14560.63665.537485.403965@buffalo.fnal.gov> Les Schaffer writes: > > let me say it politely, using distutils at this point is for the > birds. Hmm, I'd hate to hear the impolite form! ;-) > but for right now, i miss the Makefile stuff. > > an example: > (gustav)~/system/numpy/Numerical/: python setup.py build > /usr/include/python1.5/ufuncobject.h:100: previous declaration of `PyUFunc_FromFuncAndData' > Src/ufuncobject.c: In function `PyUFunc_FromFuncAndData': > Src/ufuncobject.c:805: structure has no member named `doc' > Src/ufuncobject.c: In function `ufunc_getattr': > Src/ufuncobject.c:955: structure has no member named `doc' > [blah blah blah] > > if this was from running make, i could be inside xemacs and click on > the damn error and go right to the line in question. now, i gotta > horse around, its a waste of time. No, you don't have to horse around! Of course you can use xemacs and compile mode. (I wouldn't dream of building anything from an xterm window!) Just do M-x compile and set the compilation command to "python setup.py build", then you can click on the errors. 
Or create a trivial Makefile with these contents: numpy: python setup.py build install: python From cgw at fnal.gov Tue Mar 28 13:25:36 2000 From: cgw at fnal.gov (Charles G Waldman) Date: Tue, 28 Mar 2000 12:25:36 -0600 (CST) Subject: [Numpy-discussion] del( distutils ) In-Reply-To: <14560.63665.537485.403965@buffalo.fnal.gov> References: <14560.60779.465582.624964@gargle.gargle.HOWL> <14560.63665.537485.403965@buffalo.fnal.gov> Message-ID: <14560.63776.153092.975975@buffalo.fnal.gov> Ooops, that message was somehow truncated, the last line should obviously have read: install: python setup.py install From godzilla at netmeg.net Tue Mar 28 13:35:22 2000 From: godzilla at netmeg.net (Les Schaffer) Date: Tue, 28 Mar 2000 13:35:22 -0500 (EST) Subject: [Numpy-discussion] del( distutils ) In-Reply-To: <14560.63665.537485.403965@buffalo.fnal.gov> References: <14560.60779.465582.624964@gargle.gargle.HOWL> <14560.63665.537485.403965@buffalo.fnal.gov> Message-ID: <14560.64363.7709.62933@gargle.gargle.HOWL> > Hmm, I'd hate to hear the impolite form! ;-) isn't it nice when cranky-heads are dealt with politely? ;-) > Just do M-x compile and set the compilation command to "python > setup.py build", then you can click on the errors. ahhhh... hadnt thought of that. thanks.... here is a one line fix to build_ext.py in the distutils distribution that switches the order of the includes, so that -IInclude comes before -I/usr/lib/python1.5 in the build process, so API changes in the .h files don't hose the build before the install. les schaffer (gustav)/usr/lib/python1.5/site-packages/distutils/command/: diff -c build_ext.py~ build_ext.py *** build_ext.py~ Sun Jan 30 13:34:12 2000 --- build_ext.py Tue Mar 28 13:30:03 2000 *************** *** 99,105 **** self.include_dirs = string.split (self.include_dirs, os.pathsep) ! self.include_dirs.insert (0, py_include) if exec_py_include != py_include: self.include_dirs.insert (0, exec_py_include) --- 99,105 ---- self.include_dirs = string.split (self.include_dirs, os.pathsep) ! self.include_dirs.append(py_include) if exec_py_include != py_include: self.include_dirs.insert (0, exec_py_include) From vanandel at atd.ucar.edu Tue Mar 28 14:30:39 2000 From: vanandel at atd.ucar.edu (Joe Van Andel) Date: Tue, 28 Mar 2000 12:30:39 -0700 Subject: [Numpy-discussion] single precision patch for arrayfnsmodule.c :interp() References: <38E0F32C.E20B96CD@atd.ucar.edu> Message-ID: <38E1085F.745A4460@atd.ucar.edu> I'm very sorry, I previously sent the wrong patch. Here's the correct patch. (The other was a previous version of arrayfnsmodule.c, that added an interpf() command. This version adds an optional typecode to allow specifying single precision results from interp) -- Joe VanAndel National Center for Atmospheric Research http://www.atd.ucar.edu/~vanandel/ Internet: vanandel at ucar.edu -------------- next part -------------- Index: arrayfnsmodule.c =================================================================== RCS file: /cvsroot/numpy/Numerical/Src/arrayfnsmodule.c,v retrieving revision 1.1 diff -u -r1.1 arrayfnsmodule.c --- arrayfnsmodule.c 2000/01/20 18:18:28 1.1 +++ arrayfnsmodule.c 2000/03/28 17:55:04 @@ -575,6 +575,96 @@ return result ; } +static int +binary_searchf(float dval, float dlist [], int len) +{ + /* binary_search accepts three arguments: a numeric value and + * a numeric array and its length. It assumes that the array is sorted in + * increasing order. It returns the index of the array's + * largest element which is <= the value. 
It will return -1 if + * the value is less than the least element of the array. */ + /* self is not used */ + int bottom , top , middle, result; + + if (dval < dlist [0]) + result = -1 ; + else { + bottom = 0; + top = len - 1; + while (bottom < top) { + middle = (top + bottom) / 2 ; + if (dlist [middle] < dval) + bottom = middle + 1 ; + else if (dlist [middle] > dval) + top = middle - 1 ; + else + return middle ; + } + if (dlist [bottom] > dval) + result = bottom - 1 ; + else + result = bottom ; + } + + return result ; +} +/* return float, rather than double */ + +static PyObject * +arr_interpf(PyObject *self, PyObject *args) +{ + /* interp (y, x, z) treats (x, y) as a piecewise linear function + * whose value is y [0] for x < x [0] and y [len (y) -1] for x > + * x [len (y) -1]. An array of floats the same length as z is + * returned, whose values are ordinates for the corresponding z + * abscissae interpolated into the piecewise linear function. */ + /* self is not used */ + PyObject * oy, * ox, * oz ; + PyArrayObject * ay, * ax, * az , * _interp; + float * dy, * dx, * dz , * dres, * slopes; + int leny, lenz, i, left ; + + PyObject *tpo = Py_None; /* unused argument, we've already parsed it*/ + + Py_Try(PyArg_ParseTuple(args, "OOO|O", &oy, &ox, &oz, &tpo)); + GET_ARR(ay,oy,PyArray_FLOAT,1); + GET_ARR(ax,ox,PyArray_FLOAT,1); + if ( (leny = A_SIZE (ay)) != A_SIZE (ax)) { + SETERR ("interp: x and y are not the same length."); + Py_DECREF(ay); + Py_DECREF(ax); + return NULL ;} + GET_ARR2(az,oz,PyArray_FLOAT,1,MAX_INTERP_DIMS); + lenz = A_SIZE (az); + dy = (float *) A_DATA (ay); + dx = (float *) A_DATA (ax); + dz = (float *) A_DATA (az); + /* create output array with same size as 'Z' input array */ + Py_Try (_interp = (PyArrayObject *) PyArray_FromDims + (A_NDIM(az), az->dimensions, PyArray_FLOAT)); + dres = (float *) A_DATA (_interp) ; + slopes = (float *) malloc ( (leny - 1) * sizeof (float)) ; + for (i = 0 ; i < leny - 1; i++) { + slopes [i] = (dy [i + 1] - dy [i]) / (dx [i + 1] - dx [i]) ; + } + for ( i = 0 ; i < lenz ; i ++ ) + { + left = binary_searchf (dz [i], dx, leny) ; + if (left < 0) + dres [i] = dy [0] ; + else if (left >= leny - 1) + dres [i] = dy [leny - 1] ; + else + dres [i] = slopes [left] * (dz [i] - dx [left]) + dy [left]; + } + + free (slopes); + Py_DECREF(ay); + Py_DECREF(ax); + Py_DECREF(az); + return PyArray_Return (_interp); +} + static char arr_interp__doc__[] = "" ; @@ -592,8 +682,24 @@ PyArrayObject * ay, * ax, * az , * _interp; double * dy, * dx, * dz , * dres, * slopes; int leny, lenz, i, left ; + PyObject *tpo = Py_None; + char type_char = 'd'; + char *type = &type_char; - Py_Try(PyArg_ParseTuple(args, "OOO", &oy, &ox, &oz)); + Py_Try(PyArg_ParseTuple(args, "OOO|O", &oy, &ox, &oz,&tpo)); + if (tpo != Py_None) { + type = PyString_AsString(tpo); + if (!type) + return NULL; + if(!*type) + type = &type_char; + } + if (*type == 'f' ) { + return arr_interpf(self, args); + } else if (*type != 'd') { + SETERR ("interp: unimplemented typecode."); + return NULL; + } GET_ARR(ay,oy,PyArray_DOUBLE,1); GET_ARR(ax,ox,PyArray_DOUBLE,1); if ( (leny = A_SIZE (ay)) != A_SIZE (ax)) { From pauldubois at home.com Tue Mar 28 14:59:09 2000 From: pauldubois at home.com (Paul F. 
Dubois) Date: Tue, 28 Mar 2000 11:59:09 -0800 Subject: [Numpy-discussion] single precision patch for arrayfnsmodule.c :interp() In-Reply-To: <38E1085F.745A4460@atd.ucar.edu> Message-ID: > -----Original Message----- > From: numpy-discussion-admin at lists.sourceforge.net > [mailto:numpy-discussion-admin at lists.sourceforge.net]On Behalf Of Joe > Van Andel > Sent: Tuesday, March 28, 2000 11:31 AM > To: vanandel at ucar.edu > Cc: numpy-discussion; Paul F. Dubois > Subject: Re: [Numpy-discussion] single precision patch for > arrayfnsmodule.c :interp() > Patch completed. I added a doc string. I used the second patch you sent. From janne at nnets.fi Fri Mar 31 05:04:13 2000 From: janne at nnets.fi (Janne Sinkkonen) Date: 31 Mar 2000 13:04:13 +0300 Subject: [Numpy-discussion] Binary kit for windows Message-ID: Could somebody give me a pointer to an easily installable Windows binary kit for NumPy? Even a bit older version (such as 11) would do fine. I couldn't find any reference to such a kit on sourceforge or the LLNL site. -- Janne