From jjl at pobox.com Mon Dec 3 12:06:39 2001 From: jjl at pobox.com (John J. Lee) Date: Mon Dec 3 12:06:39 2001 Subject: [Numpy-discussion] limits.py? Message-ID: Is there some equivalent of limits.h for Numeric? And is there a searchable archive for the numpy-discussion list?? IIRC, the standard sourceforge lists aren't searchable (?!), which is bizarre if true. John From oliphant at ee.byu.edu Mon Dec 3 12:36:04 2001 From: oliphant at ee.byu.edu (Travis Oliphant) Date: Mon Dec 3 12:36:04 2001 Subject: [Numpy-discussion] limits.py? In-Reply-To: Message-ID: > > Is there some equivalent of limits.h for Numeric? > > And is there a searchable archive for the numpy-discussion list?? IIRC, > the standard sourceforge lists aren't searchable (?!), which is bizarre > if true. Try limits.py in SciPy (www.scipy.org) -Travis From gvermeul at labs.polycnrs-gre.fr Tue Dec 4 00:11:03 2001 From: gvermeul at labs.polycnrs-gre.fr (Gerard Vermeulen) Date: Tue Dec 4 00:11:03 2001 Subject: [Numpy-discussion] question about Windows installer Message-ID: <01120409101901.08863@taco.polycnrs-gre.fr> Hi, Could somebody tell me if the Numeric executable installer for Windows is made with distutils? Thanks -- Gerard From pearu at cens.ioc.ee Tue Dec 4 06:17:06 2001 From: pearu at cens.ioc.ee (Pearu Peterson) Date: Tue Dec 4 06:17:06 2001 Subject: [Numpy-discussion] ANN: F2PY - Fortran to Python Interface Generator Message-ID: F2PY - Fortran to Python Interface Generator I am pleased to announce the third public release of f2py (2nd Edition) (version 2.3.328): http://cens.ioc.ee/projects/f2py2e/ f2py is a command line tool for binding Python and Fortran codes. It scans Fortran 77/90/95 codes and generates a Python C/API module that makes it possible to call Fortran subroutines from Python. No Fortran or C expertise is required for using this tool. 
Features include:

*** All basic Fortran types are supported:
      integer[ | *1 | *2 | *4 | *8 ], logical[ | *1 | *2 | *4 | *8 ],
      character[ | *(*) | *1 | *2 | *3 | ... ],
      real[ | *4 | *8 | *16 ], double precision, complex[ | *8 | *16 | *32 ]

*** Multi-dimensional arrays of (almost) all basic types.
      Dimension specifications: | : | * | :

*** Supported attributes and statements:
      intent([ in | inout | out | hide | in,out | inout,out ])
      dimension(<dimspec>)
      depend([<names>])
      check([<C-booleanexpr>])
      note(<LaTeX text>)
      optional, required, external
    NEW: intent(c), threadsafe, fortranname

*** Calling Fortran 77/90/95 subroutines and functions. Also Fortran 90/95 module subroutines are supported. Internal initialization of optional arguments.

*** Accessing COMMON blocks from Python.
    NEW: Accessing Fortran 90/95 module data.

*** Call-back functions: calling Python functions from Fortran with very flexible hooks.

*** In Python, arguments of the interfaced functions may be of different type - necessary type conversions are done internally at the C level.

*** Automatically generates documentation (__doc__, LaTeX) for interfaced functions.

*** Automatically generates signature files --- the user has full control over the interface constructions. Automatically detects the signatures of call-back functions, solves argument dependencies, etc.
    NEW: Automatically generates setup_<modulename>.py for building extension modules using tools from distutils and the fortran_support module (from SciPy).

*** Automatically generates a Makefile for compiling Fortran and C codes and linking them to a shared module. Many compilers are supported: gcc, Compaq Fortran, VAST/f90 Fortran, Absoft F77/F90, MIPSpro 7 Compilers, etc. Platforms: Intel/Alpha Linux, HP-UX, IRIX64.

*** Complete User's Guide in various formats (html, ps, pdf, dvi).

*** An f2py users list is available for support, feedback, etc.
    NEW: Installation with distutils.

*** And finally, many bugs were fixed.
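To make the attribute list above concrete, here is the Fibonacci example familiar from the f2py documentation: the Cf2py comment lines attach intent() and depend() attributes to plain Fortran 77 source. (The build command quoted below is the modern numpy.f2py form; the 2001-era f2py2e invocation differed slightly.)

```fortran
C     FIB.F: compute the first N Fibonacci numbers.
C     The Cf2py lines are f2py directives: N is an input argument,
C     A is returned to Python, and A's size depends on N.
      SUBROUTINE FIB(A, N)
      INTEGER N
      REAL*8 A(N)
Cf2py intent(in) n
Cf2py intent(out) a
Cf2py depend(n) a
      INTEGER I
      DO 10 I = 1, N
         IF (I .EQ. 1) THEN
            A(I) = 0.0D0
         ELSEIF (I .EQ. 2) THEN
            A(I) = 1.0D0
         ELSE
            A(I) = A(I-1) + A(I-2)
         ENDIF
   10 CONTINUE
      END
```

Building with `f2py -c -m fib fib.f` gives a module where `fib.fib(8)` returns the eight-element array, with no dimension or pointer bookkeeping on the Python side.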
For more information about f2py, see http://cens.ioc.ee/projects/f2py2e/

LICENSE: f2py is released under the LGPL.

Sincerely,
Pearu Peterson
December 4, 2001

f2py 2.3.328 - The Fortran to Python Interface Generator (04-Dec-01)

From sag at hydrosphere.com Tue Dec 4 09:28:07 2001
From: sag at hydrosphere.com (Sue Giller)
Date: Tue Dec 4 09:28:07 2001
Subject: [Numpy-discussion] Using objects in arrays
Message-ID: <20011204172717369.AAA204@mail.climatedata.com@SUEW2000>

I am trying to use objects in an array, and still be able to use the various extra functions offered by multiarray. I am finding that some of the functions work and some don't. Is it hopeless to try to use objects in an array and expect .reduce and others to work properly?

As a simple example, I have a DataPoint object that consists of a value and flag(s). This object has all the __cmp__, __add__, etc. functions implemented.

I can do MA.average(m), MA.sum(m), MA.add.reduce(m) (they seem to use __add__), but I can't do MA.minimum.reduce(m) or MA.maximum.reduce(m). I can do MA.maximum(m) and MA.minimum(m), but not MA.maximum(m, 0) or MA.minimum(m, 0).

The values returned by MA.argmax(m) make no sense (wrong index?) but are consistent with results from argsort(). MA.argmin(m) gives an error (I have a __neg__ fn in DataPoint):

  File "C:\Python21\MA\MA.py", line 1977, in argmin
    return Numeric.argmin(d, axis)
  File "C:\Python21\Numeric\Numeric.py", line 281, in argmin
    a = -array(a, copy=0)
TypeError: bad operand type for unary -

for example:

print m # 3 valid, 1 masked
print MA.maximum(m)
print MA.argmax(m) # gives index to masked value
print MA.minimum(m)
#print MA.argmin(m) - gives error above
print MA.argsort(m)
print MA.average(m)
print MA.maximum.reduce(m, 0)

[1, ] ,[10, a] ,[None, D] ,-- ,]
[10, a]
3
[None, D]
[2,0,1,3,]
[3.66666666667, aD]
Traceback (most recent call last):
  File "C:\Python21\Pythonwin\pywin\framework\scriptutils.py", line 301, in RunScript
    exec codeObject in __main__.__dict__
  File "C:\Python21\HDP\Data\DataPoint.py", line 136, in ?
    print MA.maximum.reduce(m, 0)
  File "C:\Python21\MA\MA.py", line 1913, in reduce
    t = Numeric.maximum.reduce(filled(target, maximum_fill_value(target)), axis)
TypeError: function not supported for these types, and can't coerce to supported types

From oliphant at ee.byu.edu Tue Dec 4 10:05:02 2001
From: oliphant at ee.byu.edu (Travis Oliphant)
Date: Tue Dec 4 10:05:02 2001
Subject: [Numpy-discussion] Using objects in arrays
In-Reply-To: <20011204172717369.AAA204@mail.climatedata.com@SUEW2000>
Message-ID:

> I am trying to use objects in an array, and still be able to use the
> various extra functions offered by multiarray. I am finding that some
> of the functions work and some don't. Is it hopeless to try to use
> objects in an array and expect .reduce and others to work
> properly?
>
> As a simple example, I have a DataPoint object that consists of a
> value and flag(s). This object has all the __cmp__, __add__, etc.
> functions implemented.
>
> I can do MA.average(m), MA.sum(m), MA.add.reduce(m) (they
> seem to use __add__), but I can't do MA.minimum.reduce(m) or
> MA.maximum.reduce(m).

This is actually very helpful information. I'm not sure how well-tested the object type is with Numeric. I know it has been used, but I'm not sure all of the ufunc methods have been well tested. However, I'm not sure what is caused by MA and what is caused by basic Numeric.

-Travis

From sag at hydrosphere.com Tue Dec 4 10:23:01 2001
From: sag at hydrosphere.com (Sue Giller)
Date: Tue Dec 4 10:23:01 2001
Subject: [Numpy-discussion] Using objects in arrays
In-Reply-To:
References: <20011204172717369.AAA204@mail.climatedata.com@SUEW2000>
Message-ID: <20011204182200900.AAA271@mail.climatedata.com@SUEW2000>

My digging shows that most stuff is passed on to Numeric and then on to multiarray, which is a .pyd. At that point, I gave up trying to figure things out.
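The dispatch behaviour at issue — ufunc methods on object arrays falling back to the objects' own special methods — can be sketched with modern NumPy. Here `DataPoint`, its fields, and its flag-concatenation rule are illustrative guesses at the class described above, not Sue's actual code, and the old MA module is replaced by plain numpy:

```python
import numpy as np

class DataPoint:
    """Toy stand-in for the DataPoint class in the thread: a value plus flags."""
    def __init__(self, value, flag=""):
        self.value = value
        self.flag = flag

    def __add__(self, other):
        # Combine values and concatenate flags.
        return DataPoint(self.value + other.value, self.flag + other.flag)

    def __lt__(self, other):  # modern replacement for the old __cmp__
        return self.value < other.value

m = np.array([DataPoint(1, "a"), DataPoint(10, ""), DataPoint(5, "D")],
             dtype=object)

# add.reduce dispatches element-wise to DataPoint.__add__, so it works:
total = np.add.reduce(m)
print(total.value, total.flag)   # 16 aD

# Order-based operations go through rich comparisons; with __lt__ defined,
# plain Python max() works even where a ufunc method might not:
print(max(m, key=lambda p: p.value).value)   # 10
```

The point is that a reduction only works if every operation it performs maps onto a special method the object actually implements; anything that needs an operation the class lacks (here, for instance, unary negation) fails at the C level.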
On 4 Dec 01, at 11:05, Travis Oliphant wrote:

> However, I'm not sure what is caused by MA and what is caused by basic
> Numeric.
>
> -Travis
>
> _______________________________________________
> Numpy-discussion mailing list
> Numpy-discussion at lists.sourceforge.net
> https://lists.sourceforge.net/lists/listinfo/numpy-discussion

From Olivier.Buchwalder at epfl.ch Thu Dec 6 10:28:05 2001
From: Olivier.Buchwalder at epfl.ch (oli)
Date: Thu Dec 6 10:28:05 2001
Subject: [Numpy-discussion] Problem with MacOsX
Message-ID: <006d01c17e83$c8cb0b50$ddade6c2@PcOli>

Hi,
I have tried to compile Numpy for MacOsX but it doesn't compile the C modules... Has anyone else run into this problem?
Thanks
oli

From jsw at cdc.noaa.gov Thu Dec 6 11:05:03 2001
From: jsw at cdc.noaa.gov (Jeff Whitaker)
Date: Thu Dec 6 11:05:03 2001
Subject: [Numpy-discussion] Problem with MacOsX
In-Reply-To: <006d01c17e83$c8cb0b50$ddade6c2@PcOli>
Message-ID:

Oli: Works fine for me. I've contributed a numeric python package to the fink distribution (fink.sf.net); it saves you the trouble of porting it yourself.

-Jeff

On Thu, 6 Dec 2001, oli wrote:

> Hi,
> I have tried to compile Numpy for MacOsX but it doesn't compile the C
> modules... Has anyone else run into this problem?
> Thanks
> oli
>
> _______________________________________________
> Numpy-discussion mailing list
> Numpy-discussion at lists.sourceforge.net
> https://lists.sourceforge.net/lists/listinfo/numpy-discussion
>

--
Jeffrey S. Whitaker          Phone : (303)497-6313
Meteorologist                FAX : (303)497-6449
NOAA/OAR/CDC R/CDC1          Email : jsw at cdc.noaa.gov
325 Broadway                 Web : www.cdc.noaa.gov/~jsw
Boulder, CO, USA 80303-3328  Office : Skaggs Research Cntr 1D-124

From sag at hydrosphere.com Thu Dec 6 12:30:19 2001
From: sag at hydrosphere.com (Sue Giller)
Date: Thu Dec 6 12:30:19 2001
Subject: [Numpy-discussion] Rounding errors in add.reduce?
Message-ID: <20011206202922697.AAA274@mail.climatedata.com@SUEW2000>

I have been using add.reduce on some arrays with single precision data in them. At some point, the reduction seems to be producing incorrect values, caused I presume by floating point rounding errors. The values are correct if the same array is created as double precision. The odd thing is that I can do a straight summing of those values into a long, single or double variable and get correct answers. Is this a bug or what? If the difference is due to rounding error, I would expect the same errors to show up in the cases of summing the individual values.

The output from the following code shows the following. Note that the add.reduce from the single precision array is different from all the others. It doesn't matter what the rank or size of the array is, just so the sum of values gets to a certain size.

accumulated sums
        float   75151440.0      long    75151440
accumulated from raveled array
        float   75151440.0      double  75151440.0
add.reduce
        single  75150288.0      double  75151440.0

--- code ---
import MA

# this causes same problem if array is 1d of [amax*bmax*cmax] in len
amax = 4
bmax = 31
cmax = 12
farr = MA.zeros((amax, bmax, cmax), 'f')  # single float
darr = MA.zeros((amax, bmax, cmax), 'd')  # double float
sum = 0.0
lsum = 0
value = 50505  # reducing this can cause all values to agree
for a in range(0, amax):
    for b in range(0, bmax):
        for c in range(0, cmax):
            farr[a, b, c] = value
            darr[a, b, c] = value
            sum = sum + value
            lsum = lsum + value
fflat = MA.ravel(farr)
dflat = MA.ravel(darr)
fsum = dsum = 0.0
for value in fflat:
    fsum = fsum + value
for value in dflat:
    dsum = dsum + value
freduce = MA.add.reduce(fflat)
dreduce = MA.add.reduce(dflat)
print "accumulated sums"
print "\tfloat\t", sum, "\tlong ", lsum
print "accumulated from raveled array"
print "\tfloat\t", fsum, "\tdouble", dsum
print "add.reduce"
print "\tsingle\t", freduce, "\tdouble", dreduce

From sag at hydrosphere.com Thu Dec 6 12:57:05 2001
From: sag at
hydrosphere.com (Sue Giller)
Date: Thu Dec 6 12:57:05 2001
Subject: [Numpy-discussion] RE: Rounding errors in add.reduce?
Message-ID: <20011206205644666.AAA217@mail.climatedata.com@SUEW2000>

Well, never mind. I figured out that my test variables silently went to double precision, and that this is a case of precision issues, not rounding error. So, I guess I will have to force my arrays to double precision. Sorry to bother the list.

sue

From jh at oobleck.astro.cornell.edu Thu Dec 6 15:34:02 2001
From: jh at oobleck.astro.cornell.edu (Joe Harrington)
Date: Thu Dec 6 15:34:02 2001
Subject: [Numpy-discussion] Meta: too many numerical libraries doing the same thing?
In-Reply-To: (numpy-discussion-request@lists.sourceforge.net)
References:
Message-ID: <200112062333.fB6NXbn13050@oobleck.astro.cornell.edu>

Sorry for the late hit; I have been away at a conference.

Regarding the issue of SciPy including a Python interpreter and being a complete distribution, I think this is a *poor* idea. Most Linux distributions include Python for OS use. I ran into trouble when I supported a separate Python for science use. There was confusion (path issues, etc.) about which Python people were getting, why the need for a second version, etc. Then, the science version got out of date as I updated my OS and wanted to keep my data analysis version stable. That led to confusion about what features were broken/fixed in what version. If SciPy includes Python, it has to make a commitment to release a new, tested version of the whole package just as soon as the new Python is available. That will complicate the release schedule, as Python releases won't be in sync with the SciPy development cycle. There is also the issue, if OSs start including both mainstream Python and SciPy, of their inherently coming with two different versions of Python, and thereby causing headaches for the OS distributor. The likely result is that the distributor would drop SciPy.
Further, they will have to decide which version to install in /usr/bin (SciPy will lose that one).

If SciPy or any other scientific package includes a Python interpreter, it should have a special name, like "scipy", and not be "python" to the command line. Frankly, I prefer the layered approach, so long as everyone works to make the layers "just work" together. This is quite practical with modern package managers.

--jh--

From eric at enthought.com Thu Dec 6 19:39:01 2001
From: eric at enthought.com (eric)
Date: Thu Dec 6 19:39:01 2001
Subject: [Numpy-discussion] Meta: too many numerical libraries doing the same thing?
References: <200112062333.fB6NXbn13050@oobleck.astro.cornell.edu>
Message-ID: <015201c17ec8$5ebba7c0$777ba8c0@ericlaptop>

Hey Joe,

You're right on many counts. A single monolithic package sacrifices the ability to embed SciPy in other apps. This is unacceptable, as embedding is one of the major powers of Python. On the other hand, a single monolithic (single download) package is exactly what is needed to introduce the full capabilities of Python and SciPy to a large number of novice users. I'm hoping the choice between these two isn't an "either/or" decision.

We've (well, Travis O. mostly) been working to divide SciPy into multiple "levels," and think this provides a solution for a wide range of users. There are 3 levels now, and there is likely to be a 4th "sumo" level added later.

Level 1 is the core routines. Level 2 includes most Numeric algorithms. Level 3 includes graphics and perhaps a few other things. The sumo package would be a large all-inclusive package with Python/Numeric/SciPy/VTK/wxPython(maybe)/PyCrust/MayaVi(maybe) and possibly others. Its form hasn't been defined yet, but it is meant to be a click-install package for the large population of potential users who are looking for a scientific programming/exploration environment.
The goal of SciPy from the beginning has been to make Python an appealing platform to the scientific masses -- not just the computer savvy. I firmly believe that something like the sumo package is the only way that SciPy will ever take hold as a widely used tool. If you require multiple installations, you lose a huge number of potential users.

The other thing to remember is that there are really two major platforms -- Linux and Windows (and many others hopefully supported). Linux users are generally much more savvy and do not mind multiple installs. But the Windows world remains a huge portion of the target audience, and these people generally have a very different set of packaging expectations. If it ain't click/install/run, they'll simply choose a different package.

We've considered releasing maybe 3 different packages, say level 1/2, level 1/2/3, and sumo, so that people can pick the one they wish to use. On the good side, this gives the experts the core algorithms packaged up without much cruft so that they can use it with their current Python, and the novice an easy way to try out all the cool features of Python/SciPy/Visualization. On the bad side, it introduces maintenance headaches and might lead to version issues. Still, I'm really hoping a good compromise can be found here.

> If SciPy or any other scientific package includes a Python
> interpreter, it should have a special name, like "scipy", and not be
> "python" to the command line. Frankly, I prefer the layered approach,
> so long as everyone works to make the layers "just work" together.
> This is quite practical with modern package managers.

I guess the sumo version might have the interpreter renamed scipy -- we'll wait till at least a few planks have been laid before we try and cross that bridge... On the layering side, I think SciPy is moving in the general direction that you're suggesting.
see ya, eric ----- Original Message ----- From: "Joe Harrington" To: Cc: Sent: Thursday, December 06, 2001 6:33 PM Subject: Re: [Numpy-discussion] Meta: too many numerical libraries doing the same thing? > Sorry for the late hit, I have been away at a conference. > > Regarding the issue of SciPy including a Python interpreter and being > a complete distribution, I think this is a *poor* idea. Most Linux > distributions include Python for OS use. I ran into trouble when I > supported a separate Python for science use. There was confusion > (path issues, etc.) about which Python people were getting, why the > need for a second version, etc. Then, the science version got out of > date as I updated my OS and wanted to keep my data analysis version > stable. That led to confusion about what features were broken/fixed > in what version. If SciPy includes Python, it has to make a > commitment to release a new, tested version of the whole package just > as soon as the new Python is available. That will complicate the > release schedule, as Python releases won't be in sync with the SciPy > development cycle. There is also the issue, if OSs start including > both mainstream Python and SciPy, of their inherently coming with two > different versions of Python, and thereby causing headaches for the OS > distributor. The likely result is that the distributor would drop > SciPy. Further, they will have to to decide which version to install > in /usr/bin (SciPy will lose that one). > > If SciPy or any other scientific package includes a Python > interpreter, it should have a special name, like "scipy", and not be > "python" to the command line. Frankly, I prefer the layered approach, > so long as everyone works to make the layers "just work" together. > This is quite practical with modern package managers. 
> > --jh-- > > _______________________________________________ > Numpy-discussion mailing list > Numpy-discussion at lists.sourceforge.net > https://lists.sourceforge.net/lists/listinfo/numpy-discussion > From hinsen at cnrs-orleans.fr Thu Dec 6 23:41:02 2001 From: hinsen at cnrs-orleans.fr (Konrad Hinsen) Date: Thu Dec 6 23:41:02 2001 Subject: [Numpy-discussion] Meta: too many numerical libraries doing the same thing? Message-ID: <200112070740.fB77e2q01664@localhost.localdomain> "eric" writes: > as a widely used tool. If you require multiple installations, you loose a > huge number of potential users. In the early days of NumPy, I had a similar problem because many people didn't succeed in installing it properly. I ended up preparing a specially packaged Python interpreter with NumPy plus my own scientific modules. With increasingly frequent releases of NumPy and Python itself, this became impossible to keep up. And since we have distutils, installation questions became really rare, being mostly about compatibility of my code with this or that version of NumPy (due to the change in the C API export). On the other hand, I work in a community dominated by Unix users, who come in two varieties: Linux people, who just get RPMs, and others, who typically have a competent systems administrator. The Windows world might have different requirements. Konrad. -- ------------------------------------------------------------------------------- Konrad Hinsen | E-Mail: hinsen at cnrs-orleans.fr Centre de Biophysique Moleculaire (CNRS) | Tel.: +33-2.38.25.56.24 Rue Charles Sadron | Fax: +33-2.38.63.15.17 45071 Orleans Cedex 2 | Deutsch/Esperanto/English/ France | Nederlands/Francais ------------------------------------------------------------------------------- From siopis at umich.edu Thu Dec 6 23:54:03 2001 From: siopis at umich.edu (Christos Siopis ) Date: Thu Dec 6 23:54:03 2001 Subject: [Numpy-discussion] Meta: too many numerical libraries doing the same thing? 
Message-ID:

I am reviving the 'too many numerical libraries doing the same thing?' thread, with apologies for the long delay in responding to the list's comments. Thanks to Joe Harrington, Paul Dubois, Konrad Hinsen and Chris Barker, who took the time to respond to my posting. The funny thing is that, as I see it, even though they were mostly trying to provide reasons why such a project has not, and probably will not, be done, their reasons sound to me like reasons why it *should* be done!

In essence, what I am 'proposing' is for a big umbrella organization (NSF, NASA and IEEE come to mind) to sponsor the development of this uber-library for numerical scientific and engineering applications. This would be 'sold' as an infrastructure project: creating the essential functionality that is needed in order to build most kinds of scientific and engineering applications. It would save lots of duplicated effort and improve productivity and quality at government labs, academia and the private sector alike. The end product would have some sort of open-source license (this can be a thorny issue, but I am sure a mutually satisfactory solution can be found). Alternatively (or in addition), it might be better to specify the API and leave the implementation part (open source and/or commercial) to others.

Below I give some reasons, based mostly on the feedback from the list, why I think it is appropriate that this effort be pursued at such a high level (both bureaucratically and talent-wise).

- The task is too complex to be carried out by a single group of people (e.g., by a single government lab):

  * Not only does it involve 'too much work', but the expertise and the priorities of the group won't necessarily reflect those of the scientific and engineering community at large.

  * The resources of a single lab/group do not suffice. Even if they did, project managers primarily care to produce what is needed for their project.
If it has beneficial side-effects for the community, so much the better, but this is not a priority. Alas, since the proposed task is too general to fit the needs of any single research group, no one will probably ever carry it out.

- The end product should be something that the 'average' numerical scientist will feel compelled to use. For instance, if the majority of the community does not feel at home with object-oriented techniques, there should be a layer (if technically possible) that would allow access to the library via more traditional languages (C/Fortran), all the while helping people to make the transition to OO, when appropriate --I can hear Paul saying "but it's *always* appropriate" :)

- There are a number of issues for which it is not presently clear that there is a good answer. Paul brought up some of these problems. For example, is it possible to build the library in layers such that it can be accessed at different levels as object components, as objects, and as 'usual', non-OO functions all at the same time? As Paul said, "Eiffel or C++ versions of some NAG routines typically have methods with one or two arguments while the C or Fortran ones have 15 or more". Does NAG use the same code base for both its non-OO and OO products?

- It is hard to find people who are good scientists, good numerical scientists, good programmers, have good coordination skills and can work in a team. This was presented as a reason why a project such as the one proposed cannot be done. But in fact I think these are all arguments why a widely-publicised and funded project such as this would have more chances to succeed than small-scale, individual efforts. I would also imagine that numerical analysts might feel more compelled to 'publicize' their work to the masses if there were a single, respected, quality resource that provides the context in which they can deposit their work (or a draft thereof, anyway).
- Some companies and managers are hesitant to use open-source products due to requirements for 'fiduciary responsibility', as Paul put it. I think that a product created under the conditions that i described would probably pass this requirement --not in the sense that there will be an entity to sue, but in the sense that it would be a reputable product sponsored by some of the top research organizations or even the government. *************************************************************** / Christos Siopis | Tel : 734-615-1585 \ / Postdoctoral Research Fellow | \ / Department of Astronomy | FAX : 734-763-6317 \ / University of Michigan | \ / Ann Arbor, MI 48109-1090 | E-mail : siopis at umich.edu \ / U.S.A. _____________________| \ / / http://www.astro.lsa.umich.edu/People/siopis.html \ *************************************************************** From paul at pfdubois.com Fri Dec 7 08:54:02 2001 From: paul at pfdubois.com (Paul F. Dubois) Date: Fri Dec 7 08:54:02 2001 Subject: [Numpy-discussion] Meta: too many numerical libraries doing the same thing? In-Reply-To: Message-ID: <000801c17f3f$5af5a600$0b01a8c0@freedom> Chris wrote in part: -----Original Message----- From: numpy-discussion-admin at lists.sourceforge.net [mailto:numpy-discussion-admin at lists.sourceforge.net] On Behalf Of Christos Siopis Sent: Thursday, December 06, 2001 11:53 PM To: numpy-discussion at lists.sourceforge.net Subject: Re: [Numpy-discussion] Meta: too many numerical libraries doing the same thing? In essence, what i am 'proposing' is for a big umbrella organization (NSF, NASA and IEEE come to mind) to sponsor the development of this uber-library for numerical scientific and engineering applications. This would be 'sold' as an infrastructure project: creating the essential functionality that is needed in order to build most kinds of scientific and engineering applications. 
It would save lots of duplication effort and improve productivity and quality at government labs, academia and the private sector alike. The end product would have some sort of open-source license (this can be a thorny issue, but i am sure a mutually satisfactory solution can be found). ----------- Those who do not know history, etc. LLNL, LANL, and Sandia had such a project in the 70s called the SLATEC library for mathematical software. It was pretty successful for the Fortran era. However, the funding agencies are unable to maintain interest in infrastructure very long. If there came a day when the vast majority of scientific programmers shared a platform and a language, there is now the communications infrastructure so that they could do a good open-source library, given someone to lead it with some vision. Linus Mathguy. From jh at oobleck.astro.cornell.edu Fri Dec 7 13:02:02 2001 From: jh at oobleck.astro.cornell.edu (Joe Harrington) Date: Fri Dec 7 13:02:02 2001 Subject: [Numpy-discussion] Re: Meta: too many numerical libraries doing the same thing? In-Reply-To: (numpy-discussion-request@lists.sourceforge.net) References: Message-ID: <200112072101.fB7L1QJ16023@oobleck.astro.cornell.edu> Something that everyone should be aware of is that right now we *may* have an opportunity to get significant support. Kodak has acquired RSI, makers of IDL. Most of the planetary astronomy community uses IDL, as do many geophysicists and medical imaging people. Kodak is dramatically raising prices, and has killed support for Mac OS X. The IDL site license just arranged for the group at NASA Ames is over $200k, making site licenses more expensive than individual licenses were just a few years ago, on a per-license basis. At the Division for Planetary Sciences meeting last week, I was frequently approached by colleagues who said, "Joe, what do I do?", and from the more savvy, "Is Python ready yet?" 
I discussed the possibility of producing an OSS or PD analysis system with a NASA program manager. He sees the money going out of his own programs to Kodak, and is concerned. However, his programs cannot fund such an effort as it is out of his scope. The right program is probably Applied Information Systems Research, but he is checking around NASA HQ to see whether this is the case. He was very positive about the idea. I suspect that a proposal will be more likely to fly this year than next, as there is a sense of concern right now, whereas next year people will already have adjusted. Depending on how my '01 grant proposals turn out, I may be proposing this to NASA in '02. Paul Barrett and I proposed it once before, in 1996 I think, but to the wrong program. Supporting parts of the effort from different sources would be wise. Paul Dubois makes the excellent point that such efforts generally peter out. It would be important to set this up as an OSS project with many contributors, some of whom are paid full-time to design and build the core. Good foundational documents and designs, and community reviews solicited from savvy non-participants, would help ensure that progress continued as sources of funding appeared and disappeared...and that there is enough wider-community support to keep it going until it produces something. NASA's immediate objective will be a complete data-analysis system to replace IDL, in short order, including an IDL-to-python converter program. That shouldn't be hard as IDL is a simple language, and PyDL may have solved much of that problem. So, at this point I'm still assessing what to do and whether/how to do it. Should we put together proposals to the various funding agencies to support SciPy? Should we create something new? What efforts exist in other communities, particularly the numerical analysis community? How much can we rely on savvy users to contribute code, and will that code be any good? 
My feeling is that there is actually a lot of money available for this, but it will require a few people to give up their jobs and pursue it full-time. And there, as they say, is the rub. --jh-- From paul at pfdubois.com Fri Dec 7 15:41:02 2001 From: paul at pfdubois.com (Paul F. Dubois) Date: Fri Dec 7 15:41:02 2001 Subject: [Numpy-discussion] fromstring bug Message-ID: <000001c17f78$3a86cb40$0b01a8c0@freedom> There is a bug report about fromstring. This bug report is correct. The docstring differs from the documentation, which only mentions the two-argument form (string, typecode). Numeric itself uses the latter form. The implementation has the 'count' argument but implemented incorrectly, without allowing for keyword arguments. In order to reconcile this with the least damage to the most people, I have changed the signature to: fromstring(string, typecode='l', count=-1) and allowed keyword arguments. This does not break the arguments as advertised in the documentation but does break any explicitly three-argument call in the old order. I will commit to CVS and this will be in 2.3 unless someone mean yells at me. From phillip_david at xontech.com Sat Dec 8 00:33:02 2001 From: phillip_david at xontech.com (Phillip David) Date: Sat Dec 8 00:33:02 2001 Subject: [Numpy-discussion] Meta: too many numerical libraries doing the same thing? In-Reply-To: <015201c17ec8$5ebba7c0$777ba8c0@ericlaptop> References: <200112062333.fB6NXbn13050@oobleck.astro.cornell.edu> <015201c17ec8$5ebba7c0$777ba8c0@ericlaptop> Message-ID: <200112080831.fB88VLC22346@orngca-mls01.socal.rr.com> On Thursday 06 December 2001 06:39 pm, you wrote: > We've (well Travis O. mostly) been working to divide SciPy into multiple > "levels," and think this provides a solution for a wide range of users. > There are 3 levels now, and there is likely to be a 4th "sumo" level added > later. > > Level 1 is the core routines. Level 2 includes most Numeric algorithms. 
> Level 3 includes graphics and perhaps a few other things. The sumo package > would be a large all inclusive package with > Python/Numeric/SciPy/VTK/wxPython(maybe)/PyCrust/MayaVi(maybe) and possibly > others. I think this sounds GREAT! I've been looking for such a package to provide my company with a migration path from IDL, and such a package just might do it. If you're looking for other suggestions to add to sumo, I'd like to suggest ReportLab (or some sort of PDF generation tool) and as many scientific data formats (such as HDF, netCDF, and about a million others) as are sufficiently mature to be included. I've signed up to do an HDF5 port, and have been given a really nice start by Robert Kern. However, this one's not yet ready for inclusion. Phillip From jonathan.gilligan at vanderbilt.edu Mon Dec 10 11:56:02 2001 From: jonathan.gilligan at vanderbilt.edu (Jonathan M. Gilligan) Date: Mon Dec 10 11:56:02 2001 Subject: [Numpy-discussion] Meta: too many numerical libraries doing the same thing? In-Reply-To: Message-ID: <5.1.0.14.0.20011210133713.04ae3060@g.mail.vanderbilt.edu> To follow up on Paul Dubois's wise comments about history, I would cite a more recent example: Software Carpentry. In 1999, with great fanfare, DOE (LANL) and CodeSourcery announced: >The aim of the Software Carpentry project is to create a new generation of >easy-to-use software engineering tools, and to document both those tools >and the working practices they are meant to support. The Advanced >Computing Laboratory at Los Alamos National Laboratory is providing >$860,000 of funding for Software Carpentry, which is being administered by >Code Sourcery, LLC. All of the project's designs, tools, test suites, and >documentation will be generally available under the terms of an Open >Source license. With announcements to the scientific community and an article in Dr. 
Dobb's Journal, this thing looked like a project aimed at development tools quite similar to what you are envisioning for numerical tools, right down to the python-centricity. If you go to the web site (http://www.software-carpentry.com) today, you will find that a year into the project, the plug was pulled. This does not bode well for big open or free projects financed by the major scientific agencies. Curiously the private sector has done much better in this regard (e.g., VTK, spun out of General Electric, Data Explorer from IBM, etc.). At 01:53 AM 12/7/01, Christos Siopis wrote: >In essence, what i am 'proposing' is for a big umbrella organization (NSF, >NASA and IEEE come to mind) to sponsor the development of this >uber-library for numerical scientific and engineering applications. This >would be 'sold' as an infrastructure project: creating the essential >functionality that is needed in order to build most kinds of scientific >and engineering applications. It would save lots of duplication effort and >improve productivity and quality at government labs, academia and the >private sector alike. The end product would have some sort of open-source >license (this can be a thorny issue, but i am sure a mutually satisfactory >solution can be found). Alternatively (or in addition), it might be better >to specify the API and leave the implementation part (open source and/or >commercial) to others. ============================================================================= Jonathan M. Gilligan jonathan.gilligan at vanderbilt.edu The Robert T. Lagemann Assistant Professor www.vanderbilt.edu/lsp/gilligan of Living State Physics Office: 615 343-6252 Dept. 
of Physics and Astronomy, Box 1807-B Lab (X-Ray) 343-7574 6823 Stevenson Center Fax: 343-7263 Vanderbilt University, Nashville, TN 37235 Dep't Office: 322-2828 From jh at oobleck.astro.cornell.edu Mon Dec 10 12:33:04 2001 From: jh at oobleck.astro.cornell.edu (Joe Harrington) Date: Mon Dec 10 12:33:04 2001 Subject: [Numpy-discussion] Meta: too many numerical libraries doing the same thing? In-Reply-To: (numpy-discussion-request@lists.sourceforge.net) References: Message-ID: <200112102032.fBAKWIs25433@oobleck.astro.cornell.edu> Could you let us know what happened to Software Carpentry? The web site isn't very revealing. Who pulled the plug and why? Were they making expected progress? Are they dead or merely negotiating some changes with this or another sponsor? Thanks, --jh-- From jjl at pobox.com Mon Dec 10 14:49:03 2001 From: jjl at pobox.com (John J. Lee) Date: Mon Dec 10 14:49:03 2001 Subject: [Numpy-discussion] Meta: too many numerical libraries doing the same thing? In-Reply-To: <200112102032.fBAKWIs25433@oobleck.astro.cornell.edu> Message-ID: On Mon, 10 Dec 2001, Joe Harrington wrote: > Could you let us know what happened to Software Carpentry? The web > site isn't very revealing. Who pulled the plug and why? Were they > making expected progress? Are they dead or merely negotiating some > changes with this or another sponsor? Don't know about the official project / contest / whatever it was, but some of the tools proposed have been implemented to some extent. For example, SCcons --> scons (IIRC), and Roundup, the winning bug-tracking system design (Ka-Ping Yee's, I think) is being implemented by someone -- there was an announcement on the python announce list the other day.
John From szummer at ai.mit.edu Mon Dec 10 15:04:02 2001 From: szummer at ai.mit.edu (Martin Szummer) Date: Mon Dec 10 15:04:02 2001 Subject: [Numpy-discussion] sparse matrices; SparsePy Message-ID: <001301c181ce$aa8518e0$27022a12@mit.edu> Hi all, What is the recommended way of doing sparse matrix manipulations in NumPy? I read about Travis Oliphant's SparsePy http://pylab.sourceforge.net/ but on that web page, the links are broken (the tarball file fails). Also, this is version 0.1, and has not been updated in a long time. Are there any others? If anybody has a copy of the SparsePy source, could you make it available to me? (Also, it would be good if somebody could repair Travis Oliphant's page; I tried to notify him, but don't think I had the correct email.) Has anybody tried compiling SparsePy under Windows? It seems tricky, given that it needs a Fortran compiler. I was thinking of trying Cygwin, but not sure if I can get it to link with the ActiveState python compiled with Microsoft tools. Thanks, -- Martin From jonathan.gilligan at vanderbilt.edu Mon Dec 10 15:05:01 2001 From: jonathan.gilligan at vanderbilt.edu (Jonathan M. Gilligan) Date: Mon Dec 10 15:05:01 2001 Subject: [Numpy-discussion] Meta: too many numerical libraries doing the same thing? In-Reply-To: <200112102032.fBAKWIs25433@oobleck.astro.cornell.edu> References: Message-ID: <5.1.0.14.0.20011210162751.04ad9ec0@g.mail.vanderbilt.edu> I don't know too much. From what I have read, it seems that the money dried up with the .com crash. The $800,000 allocated for development never materialized, and the project head left for greener pastures. Some of the projects that won the SC design competition have moved to SourceForge, where they show some life (http://sourceforge.net/projects/roundup/, http://sourceforge.net/projects/scons/). The big thing, from what I know (which is not much), is that the promise of almost a million dollars of federal funding for this project seems to have evaporated.
At 02:32 PM 12/10/01, Joe Harrington wrote: >Could you let us know what happened to Software Carpentry? The web >site isn't very revealing. Who pulled the plug and why? Were they >making expected progress? Are they dead or merely negotiating some >changes with this or another sponsor? From gpk at bell-labs.com Tue Dec 11 12:54:03 2001 From: gpk at bell-labs.com (Greg Kochanski) Date: Tue Dec 11 12:54:03 2001 Subject: [Numpy-discussion] Re: Numpy-discussion digest, Vol 1 #366 - 7 msgs References: Message-ID: <3C16721D.2B361EAA@bell-labs.com> Spacesaver doesn't propagate through matrix multiplies:

>>> import Numeric
>>> x = Numeric.zeros((3,3), Numeric.Float32)
>>> x.savespace(1)
>>> x
array([[ 0., 0., 0.],
       [ 0., 0., 0.],
       [ 0., 0., 0.]],'f')
>>> x.spacesaver()
1
>>> y = Numeric.matrixmultiply(x, x)
>>> y
array([[ 0., 0., 0.],
       [ 0., 0., 0.],
       [ 0., 0., 0.]],'f')
>>> y.spacesaver()
0

From nwagner at mecha.uni-stuttgart.de Thu Dec 13 02:39:02 2001 From: nwagner at mecha.uni-stuttgart.de (Nils Wagner) Date: Thu Dec 13 02:39:02 2001 Subject: [Numpy-discussion] Forth-Order Runge-Kutta Message-ID: <3C189374.BC2658CD@mecha.uni-stuttgart.de> Hi, I am looking for an implementation of the fourth-order Runge-Kutta method in Numpy. Has it already been done by someone? Thanks in advance. Nils From cnetzer at mail.arc.nasa.gov Thu Dec 13 13:12:25 2001 From: cnetzer at mail.arc.nasa.gov (Chad Netzer) Date: Thu Dec 13 13:12:25 2001 Subject: [Numpy-discussion] Forth-Order Runge-Kutta In-Reply-To: References: Message-ID: <200112132111.NAA18760@mail.arc.nasa.gov> > Date: Thu, 13 Dec 2001 12:39:32 +0100 > From: Nils Wagner > Subject: [Numpy-discussion] Forth-Order Runge-Kutta > > I am looking for an implementation of the fourth-order Runge-Kutta > method in Numpy. Here is one I wrote (based on a similar one I had seen in lorentz.py in the PyOpenGL demos, I believe), that I used for chaotic dynamics visualizations.
# A simple, non-stepsize-adaptive, fourth-order Runge-Kutta integration
# estimator method.  (Assumes: from Numeric import array, asarray)
def fourth_order_runge_kutta(self, xyz, dt):
    derivative = self.differentiator
    hdt = 0.5 * dt
    xyz = asarray(xyz)  # Force tuple or list to an array
    k1 = array(derivative(xyz))
    k2 = array(derivative(xyz + k1 * hdt))
    k3 = array(derivative(xyz + k2 * hdt))
    k4 = array(derivative(xyz + k3 * dt))
    new_xyz = xyz + (k1 + k4) * (dt / 6.0) + (k2 + k3) * (dt / 3.0)
    return new_xyz

where self.differentiator is a function that takes an x,y,z coordinate tuple, and returns the derivative as an x,y,z coordinate tuple. I wrote this when I was much more naive about Runge-Kutta and Python Numeric, so don't use it without some looking over. It is at least a good starting point. -- Chad Netzer cnetzer at mail.arc.nasa.gov From Achim.Gaedke at uni-koeln.de Fri Dec 14 02:27:02 2001 From: Achim.Gaedke at uni-koeln.de (Achim Gaedke) Date: Fri Dec 14 02:27:02 2001 Subject: [Numpy-discussion] pygsl source moved to sourceforge Message-ID: <3C19D114.8788B1C0@uni-koeln.de> Hi! I've checked in my source, so it is available at http://pygsl.sourceforge.net/ . A mailing list for discussion, needs, and support has been created; please mail to: mailto:pygsl-discuss at lists.sourceforge.net . pygsl now provides:

module pygsl.sf: with 200 special functions
module pygsl.const: with 23 often used mathematical constants
module pygsl.ieee: access to the ieee-arithmetics layer of gsl
module pygsl.rng: provides several random number generators and different probability densities

It can be used with gsl-0.* and with gsl-1.* (with a small change in setup.py). Version 0.0.3 is tested for python-2.0. The cvs version can be used with python-2.1 and python-2.2. Help is needed and code is accepted with thanks.
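[For readers who want to try Chad's scheme standalone: the following is a minimal, self-contained sketch of the same classical fourth-order step, applied to a made-up test problem (exponential decay, dy/dt = -y, exact solution y0*exp(-t)); the function and step-size choices are illustrative, not from the original post.]

```python
from math import exp

def rk4_step(f, y, t, dt):
    """One classical fourth-order Runge-Kutta step for dy/dt = f(t, y)."""
    k1 = f(t, y)
    k2 = f(t + 0.5 * dt, y + 0.5 * dt * k1)
    k3 = f(t + 0.5 * dt, y + 0.5 * dt * k2)
    k4 = f(t + dt, y + dt * k3)
    # Weighted average of the four slope estimates
    return y + (dt / 6.0) * (k1 + 2.0 * k2 + 2.0 * k3 + k4)

def decay(t, y):
    # Test problem: dy/dt = -y, so y(1) should be close to exp(-1)
    return -y

n_steps = 100
dt = 1.0 / n_steps
y, t = 1.0, 0.0
for _ in range(n_steps):
    y = rk4_step(decay, y, t, dt)
    t += dt
# y is now a fourth-order-accurate estimate of exp(-1)
```

The grouping of terms differs from Chad's (k1+k4)*(dt/6) + (k2+k3)*(dt/3), but the two are algebraically identical.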
Just visit our cvs repository at sourceforge: http://sourceforge.net/cvs/?group_id=34743 Yours, Achim From rob at pythonemproject.com Sun Dec 16 13:05:02 2001 From: rob at pythonemproject.com (Rob) Date: Sun Dec 16 13:05:02 2001 Subject: [Numpy-discussion] my Numpy statements are slower than indexed formulations in some cases Message-ID: <3C1D0BF4.23E89F0@pythonemproject.com> I have a number of these routines in some EM code. I've tried to Numpyize them, but end up with code that runs even slower. Here is the old indexed routine:

-----------------
for JJ in range(0, TotExtMetalEdgeNum):
    McVector += Ccc[0:TotExtMetalEdgeNum, JJ] * VcVector[JJ]
----------------

Here is the Numpy version:

---------------------
McVector = add.reduce(transpose(Ccc[...] * VcVector[...]))
---------------------

I wonder if there is another faster way to do this? Thanks, Rob. -- The Numeric Python EM Project www.pythonemproject.com From ray_drew at yahoo.co.uk Mon Dec 17 02:48:03 2001 From: ray_drew at yahoo.co.uk (Ray Drew) Date: Mon Dec 17 02:48:03 2001 Subject: [Numpy-discussion] Python 2.2 Message-ID: <006201c186e7$da0d32f0$df13000a@RDREWXP> Does anyone know when a release for Python 2.2 on Windows will be available? regards, Ray Drew _________________________________________________________ Do You Yahoo!? Get your free @yahoo.com address at http://mail.yahoo.com From paul at pfdubois.com Mon Dec 17 08:13:04 2001 From: paul at pfdubois.com (Paul F. Dubois) Date: Mon Dec 17 08:13:04 2001 Subject: [Numpy-discussion] Python 2.2 In-Reply-To: <006201c186e7$da0d32f0$df13000a@RDREWXP> Message-ID: <000001c18715$477816c0$0c01a8c0@freedom> I downloaded and installed 2.2c1 and built the Numeric installers. However, when I run them they all fail when the installation begins (after one clicks the final click to install) with an access violation. I removed any previous Numeric and it still happened. Building and installing with setup.py install works ok.
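[A note on Rob's loop above: accumulating Ccc[:, JJ] * VcVector[JJ] over JJ is exactly a matrix-vector product, so it can be collapsed into a single dot call. The sketch below uses modern numpy names and made-up array sizes; in Numeric-era code the equivalent call is Numeric.matrixmultiply (or dot).]

```python
import numpy as np

n = 5  # stand-in for TotExtMetalEdgeNum; made up for illustration
rng = np.random.default_rng(0)
Ccc = rng.standard_normal((n, n))
VcVector = rng.standard_normal(n)

# Indexed formulation, as in Rob's loop
McVector_loop = np.zeros(n)
for JJ in range(n):
    McVector_loop += Ccc[0:n, JJ] * VcVector[JJ]

# Equivalent single call: a matrix-vector product
McVector_dot = np.dot(Ccc, VcVector)

assert np.allclose(McVector_loop, McVector_dot)
```

The dot form avoids both the Python-level loop and the large temporary that Ccc[...] * VcVector[...] followed by transpose/add.reduce creates.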
-----Original Message----- From: numpy-discussion-admin at lists.sourceforge.net [mailto:numpy-discussion-admin at lists.sourceforge.net] On Behalf Of Ray Drew Sent: Monday, December 17, 2001 2:45 AM To: numpy-discussion at lists.sourceforge.net Subject: [Numpy-discussion] Python 2.2 Does anyone know when a release for Python 2.2 on Windows will be available? regards, Ray Drew _________________________________________________________ Do You Yahoo!? Get your free @yahoo.com address at http://mail.yahoo.com _______________________________________________ Numpy-discussion mailing list Numpy-discussion at lists.sourceforge.net https://lists.sourceforge.net/lists/listinfo/numpy-discussion From guido at python.org Mon Dec 17 08:20:01 2001 From: guido at python.org (Guido van Rossum) Date: Mon Dec 17 08:20:01 2001 Subject: [Python-Dev] RE: [Numpy-discussion] Python 2.2 In-Reply-To: Your message of "Mon, 17 Dec 2001 08:09:59 PST." <000001c18715$477816c0$0c01a8c0@freedom> References: <000001c18715$477816c0$0c01a8c0@freedom> Message-ID: <200112171616.LAA04194@cj20424-a.reston1.va.home.com> > I downloaded and installed 2.2c1 and built the Numeric installers. > However, when I run them they all fail when the installation begins > (after one clicks the final click to install) with an access violation. > I removed any previous Numeric and it still happened. > > Building and installing with setup.py install works ok. I don't know anything about your installers, but could it be that you were trying to install without Administrator permissions? That used to crash the previous Python installer too. (The new one doesn't, but it's a commercial product so we can't share it.) 
--Guido van Rossum (home page: http://www.python.org/~guido/) From thomas.heller at ion-tof.com Mon Dec 17 08:51:04 2001 From: thomas.heller at ion-tof.com (Thomas Heller) Date: Mon Dec 17 08:51:04 2001 Subject: [Python-Dev] RE: [Numpy-discussion] Python 2.2 References: <000001c18715$477816c0$0c01a8c0@freedom> Message-ID: <028501c1871a$ec399260$e000a8c0@thomasnotebook> > I downloaded and installed 2.2c1 and built the Numeric installers. > However, when I run them they all fail when the installation begins > (after one clicks the final click to install) with an access violation. > I removed any previous Numeric and it still happened. > > Building and installing with setup.py install works ok. > Are you talking about bdist_wininst installers? I was just able to reproduce a problem with them (after renaming away all my zip.exe programs in the PATH). Thomas From paul at pfdubois.com Mon Dec 17 09:27:03 2001 From: paul at pfdubois.com (Paul F. Dubois) Date: Mon Dec 17 09:27:03 2001 Subject: [Python-Dev] RE: [Numpy-discussion] Python 2.2 In-Reply-To: <200112171616.LAA04194@cj20424-a.reston1.va.home.com> Message-ID: <000201c1871f$a6f514e0$0c01a8c0@freedom> I am on Windows Me, so there is no concept of administrator. -----Original Message----- From: numpy-discussion-admin at lists.sourceforge.net [mailto:numpy-discussion-admin at lists.sourceforge.net] On Behalf Of Guido van Rossum Sent: Monday, December 17, 2001 8:16 AM To: Paul F. Dubois Cc: 'Ray Drew'; numpy-discussion at lists.sourceforge.net; python-dev at python.org Subject: Re: [Python-Dev] RE: [Numpy-discussion] Python 2.2 > I downloaded and installed 2.2c1 and built the Numeric installers. > However, when I run them they all fail when the installation begins > (after one clicks the final click to install) with an access > violation. I removed any previous Numeric and it still happened. > > Building and installing with setup.py install works ok.
I don't know anything about your installers, but could it be that you were trying to install without Administrator permissions? That used to crash the previous Python installer too. (The new one doesn't, but it's a commercial product so we can't share it.) --Guido van Rossum (home page: http://www.python.org/~guido/) _______________________________________________ Numpy-discussion mailing list Numpy-discussion at lists.sourceforge.net https://lists.sourceforge.net/lists/listinfo/numpy-discussion From thomas.heller at ion-tof.com Tue Dec 18 13:27:02 2001 From: thomas.heller at ion-tof.com (Thomas Heller) Date: Tue Dec 18 13:27:02 2001 Subject: Need help with bdist_wininst, was Re: [Python-Dev] RE: [Numpy-discussion] Python 2.2 References: <000001c18715$477816c0$0c01a8c0@freedom> <028501c1871a$ec399260$e000a8c0@thomasnotebook> Message-ID: <09bc01c1880a$8f0a4530$e000a8c0@thomasnotebook> I've just fixed the bdist_wininst command in CVS. There is not much time left before the release of Python 2.2 (how much exactly?). To make sure that the current version works correctly, I would kindly request some testers to build windows installers from some of their package distributions, and check them to make sure they work correctly. The full story is in bug [#483982]: Python 2.2b2 bdist_wininst crashes: http://sourceforge.net/tracker/index.php?func=detail&aid=483982&group_id=5470&atid=105470 To test the new version, you only have to replace the file Lib/distutils/command/bdist_wininst.py in your Python 2.2 distribution with the new one from CVS (rev 1.27): http://cvs.sourceforge.net/cgi-bin/viewcvs.cgi/python/python/dist/src/Lib/distutils/command/bdist_wininst.py Thanks, Thomas From mal at lemburg.com Tue Dec 18 13:48:08 2001 From: mal at lemburg.com (M.-A. 
Lemburg) Date: Tue Dec 18 13:48:08 2001 Subject: Need help with bdist_wininst, was Re: [Python-Dev] RE: [Numpy-discussion] Python 2.2 References: <000001c18715$477816c0$0c01a8c0@freedom> <028501c1871a$ec399260$e000a8c0@thomasnotebook> <09bc01c1880a$8f0a4530$e000a8c0@thomasnotebook> Message-ID: <3C1FB96C.E3EE8DCB@lemburg.com> Thomas Heller wrote: > > I've just fixed the bdist_wininst command in CVS. > There is not much time left before the release of Python 2.2 (how much exactly?). > > To make sure that the current version works correctly, I would > kindly request some testers to build windows installers from some > of their package distributions, and check them to make sure they work > correctly. > > The full story is in bug [#483982]: Python 2.2b2 bdist_wininst crashes: > http://sourceforge.net/tracker/index.php?func=detail&aid=483982&group_id=5470&atid=105470 > > To test the new version, you only have to replace the file > Lib/distutils/command/bdist_wininst.py in your Python 2.2 distribution > with the new one from CVS (rev 1.27): > > http://cvs.sourceforge.net/cgi-bin/viewcvs.cgi/python/python/dist/src/Lib/distutils/command/bdist_wininst.py FYI, I just tested it with egenix-mx-base and egenix-mx-commercial and both seem to work just fine. -- Marc-Andre Lemburg CEO eGenix.com Software GmbH ______________________________________________________________________ Company & Consulting: http://www.egenix.com/ Python Software: http://www.egenix.com/files/python/ From tim at zope.com Tue Dec 18 14:18:02 2001 From: tim at zope.com (Tim Peters) Date: Tue Dec 18 14:18:02 2001 Subject: Need help with bdist_wininst, was Re: [Python-Dev] RE: [Numpy-discussion] Python 2.2 In-Reply-To: <09bc01c1880a$8f0a4530$e000a8c0@thomasnotebook> Message-ID: [Thomas Heller] > I've just fixed the bdist_wininst command in CVS. Thank you! > There is not much time left before the release of Python 2.2 (how > much exactly?). 2.2 final should be released this Friday. 
From barry at zope.com Tue Dec 18 14:20:01 2001 From: barry at zope.com (Barry A. Warsaw) Date: Tue Dec 18 14:20:01 2001 Subject: Need help with bdist_wininst, was Re: [Python-Dev] RE: [Numpy-discussion] Python 2.2 References: <000001c18715$477816c0$0c01a8c0@freedom> <028501c1871a$ec399260$e000a8c0@thomasnotebook> <09bc01c1880a$8f0a4530$e000a8c0@thomasnotebook> Message-ID: <15391.49367.712590.185830@anthem.wooz.org> >>>>> "TH" == Thomas Heller writes: TH> I've just fixed the bdist_wininst command in CVS. There is TH> not much time left before the release of Python 2.2 (how much TH> exactly?). About 3 days. :) -Barry From paul at pfdubois.com Tue Dec 18 16:36:01 2001 From: paul at pfdubois.com (Paul F. Dubois) Date: Tue Dec 18 16:36:01 2001 Subject: [Numpy-discussion] Windows / 2.2 version up Message-ID: <000501c18824$adde1ee0$0c01a8c0@freedom> I built the Windows installers for Python-2.2 and they should appear at sourceforge shortly. From fardal at coral.phys.uvic.ca Tue Dec 18 19:40:01 2001 From: fardal at coral.phys.uvic.ca (Mark Fardal) Date: Tue Dec 18 19:40:01 2001 Subject: [Numpy-discussion] converting IDL to python Message-ID: <200112190433.fBJ4XNT13889@coral.phys.uvic.ca> Hi, earlier this month Joe Harrington posted this: NASA's immediate objective will be a complete data-analysis system to replace IDL, in short order, including an IDL-to-python converter program. That shouldn't be hard as IDL is a simple language, and PyDL may have solved much of that problem. I'm an IDL user, and I'm currently trying to see if I can switch over to python. It strikes me that an IDL-to-python converter program is a VERY ambitious idea. While IDL's syntax is rather similar to Python, and less powerful (so that there is less to translate), there are enough differences between them that conversion is probably not a simple task. 
For example, in IDL:

  arguments are passed by reference
  array storage order is different
  there's a different notion of "truth" than in Python
  a structure type exists, and a notation for slicing arrays of structures
  trailing "degenerate" array dimensions are truncated in a hard-to-predict way
  array subscripting generates a copy, not a reference

I'm sure there are some other big differences too. Then there is major functionality in IDL that is missing in Python. * (Like plotting. :-) It's hard to translate a routine when you have nothing to translate it to. I've looked at PyDL. It seems a nice little crutch, and it makes the learning curve a little shallower for people converting from IDL to Python. It's very far from being a complete translation from one language to the other. I don't know what Joe had in mind for a converter program. One that took care of braindead syntax conversion, while leaving the more difficult issues for a human programmer to translate, might not be too hard and could be pretty useful. I think it would be a bad idea to hold out for something that automatically produced working code. my $.02 (worth only $0.0127 US) Mark Fardal University of Victoria * I'm stuck in Python 1.5 until the New Year, so have yet to see what SciPy has to offer. Maybe SciPy-Sumo is getting close. From rlw at stsci.edu Tue Dec 18 21:31:06 2001 From: rlw at stsci.edu (Rick White) Date: Tue Dec 18 21:31:06 2001 Subject: [Numpy-discussion] converting IDL to python In-Reply-To: <200112190433.fBJ4XNT13889@coral.phys.uvic.ca> Message-ID: On Tue, 18 Dec 2001, Mark Fardal wrote: > earlier this month Joe Harrington posted this: > > NASA's immediate objective will be a complete data-analysis system to > replace IDL, in short order, including an IDL-to-python converter > program. That shouldn't be hard as IDL is a simple language, and PyDL > may have solved much of that problem. > > I'm an IDL user, and I'm currently trying to see if I can switch over > to python.
It strikes me that an IDL-to-python converter program is a
> VERY ambitious idea. While IDL's syntax is rather similar to Python,
> and less powerful (so that there is less to translate), there are
> enough differences between them that conversion is probably not a
> simple task. For example, in IDL:
>
> arguments are passed by reference
> array storage order is different
> there's a different notion of "truth" than in Python
> a structure type exists, and a notation for slicing arrays of structures
> trailing "degenerate" array dimensions are truncated in a hard-to-predict way
> array subscripting generates copy, not reference
>
> I'm sure there are some other big differences too.

I'm thinking about this problem too. For the PyRAF system (pyraf.stsci.edu), I wrote a set of Python programs that parse IRAF CL scripts and automatically translate them to Python. CL scripts have a much less regular syntax than IDL (making them harder to parse), but the CL has a vastly more limited functionality (making it easier to translate the results to Python.) I did have to deal with issues similar to those above. E.g., the CL does pass arguments by reference (sometimes -- I told you it is irregular!) and I figured out how to handle that. The array storage order doesn't strike me as a biggie, it is just necessary to swap the order of indices. The CL also has a different notion of truth (boolean expressions have the value yes or no, which are magic values along the lines of None in Python.) The IDL notion of truth is really pretty similar to Python's when you get down to it. The new numeric array module that we have been working on (numarray) also supports the equivalent of arrays of structures that can be sliced and indexed just like arrays of numbers. We had IDL structures (among other things) in mind when we developed this capability, and think our version is a significant improvement over IDL.
We also have arrays of strings and aim to have all the basic array functionality that is available in IDL. (By the way, I'm a long-time IDL user and am reasonably expert in it.) The hardest part (as you mention) is that IDL has a really large collection of built-in functions which probably will not be readily available in Python. Until someone writes things like the morphological dilation and erosion operations, it won't be possible to translate IDL programs that use them. And there will always be some weird corners of IDL behavior where it just would not be worth the effort to try to replicate exactly what IDL does. That's true in the CL translation too -- but it does not seem to be a big limitation, because users tend to stay away from those weird spots too. My conclusion is that an automatic conversion of (most) IDL to Python probably can be done with a reasonable amount of work. It certainly is not trivial, but I think (based on my experience with the CL) that it is not impossibly difficult either. I'm hoping to take a shot at this sometime during the next 6 months. Rick Richard L. White rlw at stsci.edu http://sundog.stsci.edu/rick/ Space Telescope Science Institute Baltimore, MD From edcjones at erols.com Wed Dec 19 15:50:02 2001 From: edcjones at erols.com (Edward C. Jones) Date: Wed Dec 19 15:50:02 2001 Subject: [Numpy-discussion] float** arrays and Numarray Message-ID: <3C21294A.4090706@erols.com> I use Numeric to transfer images between various image processing systems. Numarray includes most of the image data memory layouts that I have seen except for: Some C types need to be added (as several people have already commented). Some systems define an array as a "pointer to an array of pointers to ...". "Numerical Recipes in C" explains this approach clearly. Could this perhaps be implemented in Numarray as a buffer interface with multiple data segments? 
Thanks, Ed Jones From jmr at engineering.uiowa.edu Wed Dec 19 20:13:02 2001 From: jmr at engineering.uiowa.edu (Joe Reinhardt) Date: Wed Dec 19 20:13:02 2001 Subject: [Numpy-discussion] converting IDL to python Message-ID: <876672h6n7.fsf@phantom.ecn.uiowa.edu> Rick White writes: > The hardest part (as you mention) is that IDL has a really large > collection of built-in functions which probably will not be readily > available in Python. Until someone writes things like the morphological > dilation and erosion operations, it won't be possible to translate IDL > programs that use them. By the way, I do have binary erosion and dilation, along with connected components analysis, flood fill region grow, and several other common 2D/3D image processing functions working in NumPy. I would be glad to share them. I don't know how similar the interface is to IDL, since I have never used IDL. - Joe Reinhardt -- Joseph M. Reinhardt, Ph.D. Department of Biomedical Engineering joe-reinhardt at uiowa.edu University of Iowa, Iowa City, IA 52242 Telephone: 319-335-5634 FAX: 319-335-5631 From eric at enthought.com Thu Dec 20 08:18:03 2001 From: eric at enthought.com (eric) Date: Thu Dec 20 08:18:03 2001 Subject: [Numpy-discussion] converting IDL to python References: <876672h6n7.fsf@phantom.ecn.uiowa.edu> Message-ID: <03ad01c18969$8130e080$777ba8c0@ericlaptop> Rick and Joe, I'm not familiar with these erosion and dilation functions. Are they something that should be in SciPy? We could start an "idl" package that houses functions familiar to this community. Or maybe they should live in another place like "signal" or maybe an image processing package. Thoughts? I've cross-posted this to the scipy-dev at scipy.org list as this is probably the best place to continue such a discussion.
eric ----- Original Message ----- From: "Joe Reinhardt" To: Sent: Wednesday, December 19, 2001 11:12 PM Subject: [Numpy-discussion] converting IDL to python > > Rick White writes: > > The hardest part (as you mention) is that IDL has a really large > > collection of built-in functions which probably will not be readily > > available in Python. Until someone writes things like the morphological > > dilation and erosion operations, it won't be possible to translate IDL > > programs that use them. > > By the way, I do have binary erosion and dilation, along with > connected components analysis, flood fill region grow, and several > other common 2D/3D image processing functions working in NumPy. > > I would be glad to share them. I don't know how similar the > interface is to IDL, since I have never used IDL. > > - Joe Reinhardt > > > > -- > Joseph M. Reinhardt, Ph.D. Department of Biomedical Engineering > joe-reinhardt at uiowa.edu University of Iowa, Iowa City, IA 52242 > Telephone: 319-335-5634 FAX: 319-335-5631 > > _______________________________________________ > Numpy-discussion mailing list > Numpy-discussion at lists.sourceforge.net > https://lists.sourceforge.net/lists/listinfo/numpy-discussion > From jmr at engineering.uiowa.edu Thu Dec 20 08:28:04 2001 From: jmr at engineering.uiowa.edu (Joe Reinhardt) Date: Thu Dec 20 08:28:04 2001 Subject: [Numpy-discussion] converting IDL to python In-Reply-To: <03ad01c18969$8130e080$777ba8c0@ericlaptop> ("eric"'s message of "Thu, 20 Dec 2001 10:17:57 -0500") References: <876672h6n7.fsf@phantom.ecn.uiowa.edu> <03ad01c18969$8130e080$777ba8c0@ericlaptop> Message-ID: "eric" writes: > I'm not familiar with these erosion and dialation functions. Are they > something that should be in SciPy? We could start an "idl" package that > houses functions familiar to this community. Or maybe they should live in > another place like "signal" or maybe an image processing package. > Thoughts? 
Dilation and erosion are image processing functions used to "grow" and "shrink" objects and boundaries. They are part of a group of image operators based on the theory of mathematical morphology. Overall, they are extremely useful for shape/size analysis on images. Yes, they should certainly be part of any image processing toolbox for Python. Here is a link for a commercial mathematical morphology toolbox for Matlab: http://www.mmorph.com - Joe -- Joseph M. Reinhardt, Ph.D. Department of Biomedical Engineering joe-reinhardt at uiowa.edu University of Iowa, Iowa City, IA 52242 Telephone: 319-335-5634 FAX: 319-335-5631 From eric at enthought.com Thu Dec 20 08:36:03 2001 From: eric at enthought.com (eric) Date: Thu Dec 20 08:36:03 2001 Subject: [Numpy-discussion] converting IDL to python References: Message-ID: <03be01c1896c$12f7d670$777ba8c0@ericlaptop> I'm not an IDL user, but have thought some about the same sort of project for either executing Matlab scripts from Python, or perhaps translating them to Python. On the face of it, it seems doable. Of the incompatibilities I've thought about, the magic NARGOUT variable that tells Matlab how many output results are expected from a function seems the most difficult. It might be handled in Python by examining the parse tree, but it isn't a trivial translation. Another option would be embedding Octave, which is a Matlab work-alike. I think it is a little behind the current Matlab release though. There might be some synergy between projects aiming to execute IDL/Matlab code from Python. eric ----- Original Message ----- From: "Rick White" To: "Mark Fardal" Cc: "numpy" Sent: Wednesday, December 19, 2001 12:30 AM Subject: Re: [Numpy-discussion] converting IDL to python > On Tue, 18 Dec 2001, Mark Fardal wrote: > > > earlier this month Joe Harrington posted this: > > > > NASA's immediate objective will be a complete data-analysis system to > > replace IDL, in short order, including an IDL-to-python converter > > program.
That shouldn't be hard as IDL is a simple language, and PyDL > > may have solved much of that problem. > > > > I'm an IDL user, and I'm currently trying to see if I can switch over > > to python. It strikes me that an IDL-to-python converter program is a > > VERY ambitious idea. While IDL's syntax is rather similar to Python, > > and less powerful (so that there is less to translate), there are > > enough differences between them that conversion is probably not a > > simple task. For example, in IDL: > > > > arguments are passed by reference > > array storage order is different > > there's a different notion of "truth" than in Python > > a structure type exists, and a notation for slicing arrays of structures > > trailing "degenerate" array dimensions are truncated in a hard-to-predict way > > array subscripting generates copy, not reference > > > > I'm sure there are some other big differences too. > > I'm thinking about this problem too. For the PyRAF system > (pyraf.stsci.edu), I wrote a set of Python programs that parse IRAF CL > scripts and automatically translate them to Python. CL scripts have a > much less regular syntax than IDL (making them harder to parse), but the > CL has a vastly more limited functionality (making it easier to translate > the results to Python.) I did have to deal with issues similar to those > above. E.g., the CL does pass arguments by reference (sometimes -- I told > you it is irregular!) and I figured out how to handle that. The array > storage order doesn't strike me as a biggie, it is just necessary to swap > the order of indices. The CL also has a different notion of truth > (boolean expressions have the value yes or no, which are magic values > along the lines of None in Python.) The IDL notion of truth is really > pretty similar to Python's when you get down to it.
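One of the differences in Mark's list, "array subscripting generates copy, not reference," is easy to demonstrate: in Numeric (and in modern NumPy, used here for the sketch) a slice is a view into the original data, whereas IDL hands back an independent copy, so a translator has to insert explicit copies to preserve IDL semantics:

```python
import numpy as np

a = np.arange(6)        # [0, 1, 2, 3, 4, 5]
b = a[1:4]              # a *view* in Numeric/NumPy -- IDL would make a copy here
b[0] = 99               # this writes through to a
assert a[1] == 99       # the original array changed

c = a[1:4].copy()       # an explicit copy restores IDL-like behavior
c[0] = -1
assert a[1] == 99       # a is unaffected this time
```

So a faithful IDL-to-Python converter would have to emit `.copy()` (or Numeric's equivalent) on every translated subscript expression, or prove the copy is unnecessary.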
> > The new numeric array module that we have been working on (numarray) also > supports the equivalent of arrays of structures that can be sliced and > indexed just like arrays of numbers. We had IDL structures (among other > things) in mind when we developed this capability, and think our version > is a significant improvement over IDL. We also have arrays of strings > and aim to have all the basic array functionality that is available in > IDL. (By the way, I'm a long-time IDL user and am reasonably expert in it.) > > The hardest part (as you mention) is that IDL has a really large > collection of built-in functions which probably will not be readily > available in Python. Until someone writes things like the morphological > dilation and erosion operations, it won't be possible to translate IDL > programs that use them. And there will always be some weird corners of > IDL behavior where it just would not be worth the effort to try to > replicate exactly what IDL does. That's true in the CL translation too > -- but it does not seem to be a big limitation, because users tend to stay > away from those weird spots too. > > My conclusion is that an automatic conversion of (most) IDL to Python > probably can be done with a reasonable amount of work. It certainly is > not trivial, but I think (based on my experience with the CL) that it is > not impossibly difficult either. I'm hoping to take a shot at this > sometime during the next 6 months. > Rick > > Richard L. 
White rlw at stsci.edu http://sundog.stsci.edu/rick/ > Space Telescope Science Institute > Baltimore, MD > > > > > _______________________________________________ > Numpy-discussion mailing list > Numpy-discussion at lists.sourceforge.net > https://lists.sourceforge.net/lists/listinfo/numpy-discussion > From perry at stsci.edu Thu Dec 20 10:37:01 2001 From: perry at stsci.edu (Perry Greenfield) Date: Thu Dec 20 10:37:01 2001 Subject: [Numpy-discussion] float** arrays and Numarray In-Reply-To: <3C21294A.4090706@erols.com> Message-ID: > I use Numeric to transfer images between various image processing > systems. Numarray includes most of the image data memory layouts that I > have seen except for: > > Some C types need to be added (as several people have already commented). > > Some systems define an array as a "pointer to an array of pointers to > ...". "Numerical Recipes in C" explains this approach clearly. Could > this perhaps be implemented in Numarray as a buffer interface with > multiple data segments? > > Thanks, > Ed Jones That's an interesting question. I don't think that such a representation makes much sense for numarray internally (for example, it makes it difficult to reshape the object without creating new pointer arrays-- or data segments as mentioned above). I doubt that we would take this approach. On the other hand, I don't see why the C-API could not create such a representation (i.e., creating the necessary pointer arrays). I guess the main question is how memory management is done for these pointer arrays since they are not part of the numarray object (one may have to explicitly deallocate them in the C code). The main usefulness of this approach is that it allows a simple indexing notation for multidimensional arrays in C. (On the other hand, perhaps the same could be largely accomplished with C macros.) A drawback is that it can imply significant memory overheads if the shape of the array is skinny in the wrong dimension.
But no, we haven't really thought much about it yet. Perry From perry at stsci.edu Thu Dec 20 10:48:02 2001 From: perry at stsci.edu (Perry Greenfield) Date: Thu Dec 20 10:48:02 2001 Subject: [Numpy-discussion] next step for numarray Message-ID: I just thought I would write a few words about what will be happening with numarray development. At the moment it is quiescent since we have another internal project to finish. Hopefully that will be done in 3 weeks or so; we will start work on numarray again. I've gotten feedback from Guido regarding the safety issue; he made it clear that we should not let the user be able to crash Python, even if they can do that only by bypassing the public interface. Since there may be implications for the C-API we are making that our highest priority (though I don't think it will seriously change the API). Thanks for the feedback so far. We will raise other specific issues as we get closer to dealing with them. Perry From rob at pythonemproject.com Fri Dec 21 23:46:23 2001 From: rob at pythonemproject.com (Rob) Date: Fri Dec 21 23:46:23 2001 Subject: [Numpy-discussion] Re: my Numpy statements are slower than indexed formulations in some cases References: <3C1D0BF4.23E89F0@pythonemproject.com> Message-ID: <3C236208.7545F760@pythonemproject.com> Rob wrote: > > I have a number of these routines in some EM code. I've tried to > Numpyize them, but end up with code that runs even slower. > > Here is the old indexed routine: > > ----------------- > for JJ in range(0,TotExtMetalEdgeNum): > > McVector+=Ccc[0:TotExtMetalEdgeNum,JJ] * VcVector[JJ] > ---------------- > > Here is the Numpy version: > > --------------------- > McVector= add.reduce(transpose(Ccc[...] * VcVector[...])) > --------------------- > > I wonder if there is another faster way to do this? Thanks, Rob. > > -- I did speed things up just a tiny bit by using: add.reduce(Ccc*VcVector,1) instead of add.reduce(transpose(Ccc*VcVector)).
But I'm still running way slower than an indexed array scheme. Rob. > The Numeric Python EM Project > > www.pythonemproject.com -- The Numeric Python EM Project www.pythonemproject.com From rob at pythonemproject.com Sun Dec 23 11:21:31 2001 From: rob at pythonemproject.com (Rob) Date: Sun Dec 23 11:21:31 2001 Subject: [Numpy-discussion] Re: my Numpy statements are slower than indexed formulations in some cases References: <3C1D0BF4.23E89F0@pythonemproject.com> <3C236208.7545F760@pythonemproject.com> Message-ID: <3C24B004.4D976DCD@pythonemproject.com> Problem solved. I had one last for loop in there. It's odd that as I gradually got rid of the for loops, the speed went down, but when I got rid of the last one, bingo! That routine now runs at a 1/10 of its original time. Rob. Rob wrote: > > Rob wrote: > > > > I have a number of these routines in some EM code. I've tried to > > Numpyize them, but end up with code that runs even slower. > > > > Here is the old indexed routine: > > > > ----------------- > > for JJ in range(0,TotExtMetalEdgeNum): > > > > McVector+=Ccc[0:TotExtMetalEdgeNum,JJ] * VcVector[JJ] > > ---------------- > > > > Here is the Numpy version: > > > > --------------------- > > McVector= add.reduce(transpose(Ccc[...] * VcVector[...])) > > --------------------- > > > > I wonder if there is another faster way to do this? Thanks, Rob. > > > > -- > > I did speed things up just a tiny bit by using: > > add.reduce(Ccc*VcVector,1) instead of > add.reduce(transpose(Ccc*VcVector)). > > But I'm still running way slower than an indexed array scheme. Rob.
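For reference, Rob's loop computes sum over JJ of Ccc[:,JJ]*VcVector[JJ], which is exactly a matrix-vector product, so the whole reduction collapses to a single dot. The sketch below uses modern NumPy spellings (`np.dot`, `np.add.reduce`); Numeric of this era exposed the same operation as `dot`/`matrixmultiply`:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 50                                   # stand-in for TotExtMetalEdgeNum
Ccc = rng.standard_normal((n, n))
VcVector = rng.standard_normal(n)

# Original indexed loop: accumulate one scaled column per iteration.
McLoop = np.zeros(n)
for JJ in range(n):
    McLoop += Ccc[:, JJ] * VcVector[JJ]

# The one-liner from the thread: scale each column, then sum along axis 1.
McReduce = np.add.reduce(Ccc * VcVector, 1)

# Both are just a matrix-vector product.
McDot = np.dot(Ccc, VcVector)
```

The `dot` form avoids materializing the n-by-n temporary `Ccc * VcVector`, which is usually why it beats the elementwise-multiply-then-reduce version.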
> > The Numeric Python EM Project > > > > www.pythonemproject.com > > -- > The Numeric Python EM Project > > www.pythonemproject.com > > _______________________________________________ > Numpy-discussion mailing list > Numpy-discussion at lists.sourceforge.net > https://lists.sourceforge.net/lists/listinfo/numpy-discussion -- The Numeric Python EM Project www.pythonemproject.com From rob at pythonemproject.com Mon Dec 24 09:40:02 2001 From: rob at pythonemproject.com (Rob) Date: Mon Dec 24 09:40:02 2001 Subject: [Numpy-discussion] trouble integrating Numpy arrays into a class statement Message-ID: <3C2767CD.3E1E4134@pythonemproject.com> I have this Class: class Cond: x1=0 y1=0 z1=0 x2=0 y2=0 z2=0 DelX=0 DelY=0 DelZ=0 What I want is for Cond to be a Numpy array of 20 values, each with the above members. Thus I could say in a statement: wire=Cond a= wire[10].x1 b= wire[5].z2 etc... How can I go about this? Thanks, Rob. and Merry Christmas!!!!!!! ps. this is for an FEM-MOM grid generator -- The Numeric Python EM Project www.pythonemproject.com From paul at pfdubois.com Mon Dec 24 10:35:02 2001 From: paul at pfdubois.com (Paul F. Dubois) Date: Mon Dec 24 10:35:02 2001 Subject: [Numpy-discussion] trouble integrating Numpy arrays into a class statement In-Reply-To: <3C2767CD.3E1E4134@pythonemproject.com> Message-ID: <000501c18ca9$476445c0$0c01a8c0@freedom> Without asking why in the world you would want to do this: class Cond: x1=0 y1=0 z1=0 x2=0 y2=0 z2=0 DelX=0 DelY=0 DelZ=0 wire = map(lambda dummy: Cond(), range(20)) print wire[10].x1 This makes a list. If you really INSIST on a Numeric array, add import Numeric wire = Numeric.array(wire) But this will be an array of type O. If your real concern is that the size will be huge, you might want to make arrays out of each member instead. You have your choice of making the syntax wire.x1[10] or wire[10].x1, but the latter requires additional technique. Also I doubt that you mean what you said.
If each instance is to have its own values, and the ones above are just the initial values, it is much more proper to do: class Cond: def __init__ (self, **kw): d = { 'x1':0, ... put them all here } d.update(kw) for k, v in d.items(): setattr(self, k, v) That allows you to make instances all these ways: Cond(), Cond(x1=4), Cond(x1=1,y2=1), Cond(x1=0, y1=0, z1=0, x2=0, y2=0, z2=0, DelX=0, DelY=0, DelZ=0) -----Original Message----- From: numpy-discussion-admin at lists.sourceforge.net [mailto:numpy-discussion-admin at lists.sourceforge.net] On Behalf Of Rob Sent: Monday, December 24, 2001 9:37 AM To: numpy-discussion at lists.sourceforge.net Subject: [Numpy-discussion] trouble integrating Numpy arrays into a class statement I have this Class: class Cond: x1=0 y1=0 z1=0 x2=0 y2=0 z2=0 DelX=0 DelY=0 DelZ=0 What I want is for Cond to be a Numpy array of 20 values, each with the above members. Thus I could say in a statement: wire=Cond a= wire[10].x1 b= wire[5].z2 etc... How can I go about this? Thanks, Rob. and Merry Christmas!!!!!!! ps. this is for an FEM-MOM grid generator -- The Numeric Python EM Project www.pythonemproject.com _______________________________________________ Numpy-discussion mailing list Numpy-discussion at lists.sourceforge.net https://lists.sourceforge.net/lists/listinfo/numpy-discussion From paul at pfdubois.com Thu Dec 27 11:21:03 2001 From: paul at pfdubois.com (Paul F. Dubois) Date: Thu Dec 27 11:21:03 2001 Subject: [Numpy-discussion] Numeric 21.0a1 in CVS Message-ID: <000001c18f0b$34ac3dc0$0c01a8c0@freedom> Version 21.0a1 -- in CVS only Please help me test the following. I have tested on Windows / Python 2.2. Implement true division operations for Python 2.2 (Bruce Sherwood, our hero). New functions in Numeric; they work on any sequence a that can be converted to a Numeric array.
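Paul's `__init__` pattern above, written out in full with the nine members filled in from Rob's original class, looks like the following sketch (a list comprehension stands in for the `map(lambda ...)` idiom; both build twenty independent instances):

```python
class Cond:
    def __init__(self, **kw):
        # default values for every member; keyword arguments override them
        d = {'x1': 0, 'y1': 0, 'z1': 0,
             'x2': 0, 'y2': 0, 'z2': 0,
             'DelX': 0, 'DelY': 0, 'DelZ': 0}
        d.update(kw)
        for k, v in d.items():
            setattr(self, k, v)

# One independent instance per wire segment, as Paul suggests:
wire = [Cond() for _ in range(20)]
wire[10].x1 = 4.0

a = wire[10].x1
b = wire[5].z2
```

Each element is its own object, so assigning to `wire[10].x1` does not touch `wire[5]`, which is the pitfall of Rob's original class-attribute version.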
def rank (a): "Get the rank of a (the number of dimensions, not a matrix rank)" def shape (a): "Get the shape of a" def size (a, axis=None): "Get the number of elements in a, or along a certain axis." def average (a, axis=0, weights=None, returned = 0): """average(a, axis=0, weights=None) Computes average along indicated axis. Inputs can be integer or floating types. Return type is floating but depends on spacesaver flags. If weights are given, result is sum(a*weights)/sum(weights) weights if given must have a's shape or be the 1-d with length the size of a in the given axis. If returned, return a tuple: the result and the sum of the weights or count of values. if axis is None, average over the entire array raises ZeroDivisionError if appropriate when result is scalar. """
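The documented weighted-average rule, result = sum(a*weights)/sum(weights) with an optional (result, sum-of-weights) tuple, can be checked with a small sketch. This uses modern `numpy.average`, which kept the same `weights`/`returned` parameters (though its default became `axis=None` rather than the `axis=0` shown above):

```python
import numpy as np

a = np.array([1.0, 2.0, 3.0, 4.0])
w = np.array([4.0, 3.0, 2.0, 1.0])

# The documented rule, computed by hand: (4 + 6 + 6 + 4) / 10 = 2.0
by_hand = np.sum(a * w) / np.sum(w)

# With returned=True we also get the sum of the weights back.
avg, wsum = np.average(a, weights=w, returned=True)
```

The returned weight sum is what lets a caller chain partial averages together later (combine two averages by re-weighting with their weight sums).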
From sag at hydrosphere.com Tue Dec 4 09:28:07 2001 From: sag at hydrosphere.com (Sue Giller) Date: Tue Dec 4 09:28:07 2001 Subject: [Numpy-discussion] Using objects in arrays Message-ID: <20011204172717369.AAA204@mail.climatedata.com@SUEW2000> I am trying to use objects in an array, and still be able to use the various extra functions offered by multiarray. I am finding that some of the functions work and some don't. Is it hopeless to try to use objects in an array and expect .reduce and others to work properly? As a simple example, I have a DataPoint object that consists of a value and flag(s). This object has all the __cmp__, __add_, etc functions implemented. I can do MA.average(m), MA.sum(m), MA.add.reduce(m), (they seem to use __add__) but I can't do MA.minimum.reduce(m) or MA.maximum.reduce(m). I can do MA.maximum(m) and MA.minimum(m), but not MA.maximum(m, 0) or MA.minimum(m, 0) The values returned by MA.argmax(m) make no sense (wrong index?) but are consistent with results from argsort(). MA.argmin(m) gives an error (I have a __neg__ fn in datapoint) File "C:\Python21\MA\MA.py", line 1977, in argmin return Numeric.argmin(d, axis) File "C:\Python21\Numeric\Numeric.py", line 281, in argmin a = -array(a, copy=0) TypeError: bad operand type for unary - for example: print m # 3 valid, 1 masked print MA.maximum(m) print MA.argmax(m) # gives index to masked value print MA.minimum(m) #print MA.argmin(m) - gives error above print MA.argsort(m) print MA.average(m) print MA.maximum.reduce(m, 0) [1, ] ,[10, a] ,[None, D] ,-- ,] [10, a] 3 [None, D] [2,0,1,3,] [3.66666666667, aD] Traceback (most recent call last): File "C:\Python21\Pythonwin\pywin\framework\scriptutils.py", line 301, in RunScript exec codeObject in __main__.__dict__ File "C:\Python21\HDP\Data\DataPoint.py", line 136, in ?
print MA.maximum.reduce(m, 0) File "C:\Python21\MA\MA.py", line 1913, in reduce t = Numeric.maximum.reduce(filled(target, maximum_fill_value(target)), axis) TypeError: function not supported for these types, and can't coerce to supported types From oliphant at ee.byu.edu Tue Dec 4 10:05:02 2001 From: oliphant at ee.byu.edu (Travis Oliphant) Date: Tue Dec 4 10:05:02 2001 Subject: [Numpy-discussion] Using objects in arrays In-Reply-To: <20011204172717369.AAA204@mail.climatedata.com@SUEW2000> Message-ID: > I am trying to use objects in an array, and still be able to use the > various extra functions offered by multiarray. I am finding that some > of the functions work and some don't. Is it hopeless to try to use > objects in an array and expect .reduce and others to work > properly? > > As a simple example, I have a DataPoint object that consists of a > value and flag(s). This object has all the __cmp__, __add_, etc > functions implemented. > > I can do MA.average(m), MA.sum(m), MA.add.reduce(m), (they > seem to use __add__) but I can't do MA.minimum.reduce(m) or > MA.maximum.reduce(m). This is actually, very helpful information. I'm not sure how well-tested the object-type is with Numeric. I know it has been used, but I'm not sure all of the ufunc methods have been well tested. However, I'm not sure what is caused by MA and what is caused by basic Numeric. -Travis From sag at hydrosphere.com Tue Dec 4 10:23:01 2001 From: sag at hydrosphere.com (Sue Giller) Date: Tue Dec 4 10:23:01 2001 Subject: [Numpy-discussion] Using objects in arrays In-Reply-To: References: <20011204172717369.AAA204@mail.climatedata.com@SUEW2000> Message-ID: <20011204182200900.AAA271@mail.climatedata.com@SUEW2000> My digging shows that most stuff is passed on to Numeric and then on to multiarray, which is a .pyd. At that point, I gave up trying to figure things out.
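The shape of Sue's experiment can be sketched with a minimal DataPoint class. This is a hedged illustration in modern NumPy (object-dtype arrays), not the Numeric/MA of the thread: `add.reduce` on an object array simply dispatches to the objects' `__add__`, which is why the addition-based reductions worked while the min/max machinery, which needs more ufunc support for object types, was flakier:

```python
import numpy as np

class DataPoint:
    """A value plus accumulated quality flags (hypothetical stand-in for Sue's class)."""
    def __init__(self, value, flag=''):
        self.value = value
        self.flag = flag
    def __add__(self, other):
        # combining two points sums the values and concatenates the flags
        return DataPoint(self.value + other.value, self.flag + other.flag)

m = np.array([DataPoint(1), DataPoint(10, 'a'), DataPoint(3, 'D')], dtype=object)
total = np.add.reduce(m)   # left-to-right reduction via DataPoint.__add__
```

Since the reduction is just repeated `__add__` calls, any Python-level semantics the class defines (here, flag concatenation) carry through.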
On 4 Dec 01, at 11:05, Travis Oliphant wrote: > However, I'm not sure what is caused by MA and what is caused by basic > Numeric. > > > -Travis > > > > _______________________________________________ > Numpy-discussion mailing list > Numpy-discussion at lists.sourceforge.net > https://lists.sourceforge.net/lists/listinfo/numpy-discussion From Olivier.Buchwalder at epfl.ch Thu Dec 6 10:28:05 2001 From: Olivier.Buchwalder at epfl.ch (oli) Date: Thu Dec 6 10:28:05 2001 Subject: [Numpy-discussion] Problem with MacOsX Message-ID: <006d01c17e83$c8cb0b50$ddade6c2@PcOli> Hi, I have tried to compile Numpy for MacOsX but it doesn't compile the C modules... Has anyone else had this problem? Thanks oli From jsw at cdc.noaa.gov Thu Dec 6 11:05:03 2001 From: jsw at cdc.noaa.gov (Jeff Whitaker) Date: Thu Dec 6 11:05:03 2001 Subject: [Numpy-discussion] Problem with MacOsX In-Reply-To: <006d01c17e83$c8cb0b50$ddade6c2@PcOli> Message-ID: Oli: Works fine for me. I've contributed a numeric python package to the fink distribution (fink.sf.net), it saves you the trouble of porting it yourself. -Jeff On Thu, 6 Dec 2001, oli wrote: > Hi, > I have tried to compile Numpy for MacOsX but it doesn't compile the C > modules... > Has anyone else had this problem? > Thanks > oli > > > _______________________________________________ > Numpy-discussion mailing list > Numpy-discussion at lists.sourceforge.net > https://lists.sourceforge.net/lists/listinfo/numpy-discussion > -- Jeffrey S. Whitaker Phone : (303)497-6313 Meteorologist FAX : (303)497-6449 NOAA/OAR/CDC R/CDC1 Email : jsw at cdc.noaa.gov 325 Broadway Web : www.cdc.noaa.gov/~jsw Boulder, CO, USA 80303-3328 Office : Skaggs Research Cntr 1D-124 From sag at hydrosphere.com Thu Dec 6 12:30:19 2001 From: sag at hydrosphere.com (Sue Giller) Date: Thu Dec 6 12:30:19 2001 Subject: [Numpy-discussion] Rounding errors in add.reduce?
Message-ID: <20011206202922697.AAA274@mail.climatedata.com@SUEW2000> I have been using add.reduce on some arrays with single precision data in them. At some point, the reduction seems to be producing incorrect values, caused I presume by floating point rounding errors. The values are correct if the same array is created as double precision. The odd thing is that I can do a straight summing of those values into a long, single or double variable and get correct answers. Is this a bug or what? If the difference is due to rounding error, I would expect the same errors to show up in the cases of summing the individual values. The output from the code below is as follows. Note that the add.reduce from the single precision array is different from all the others. The rank or size of the array doesn't matter, just that the sum of values reaches a certain size. accumulated sums float 75151440.0 long 75151440 accumulated from raveled array float 75151440.0 double 75151440.0 add.reduce single 75150288.0 double 75151440.0 --- code --- import MA # this causes same problem if array is 1d of [amax*bmax*cmax] in len amax = 4 bmax = 31 cmax = 12 farr = MA.zeros((amax, bmax, cmax), 'f') # single float darr = MA.zeros((amax, bmax, cmax), 'd') # double float sum = 0.0 lsum = 0 value = 50505 # reducing this can cause all values to agree for a in range (0, amax): for b in range (0, bmax): for c in range (0, cmax): farr[a, b, c] = value darr[a, b, c] = value sum = sum + value lsum = lsum + value fflat = MA.ravel(farr) dflat = MA.ravel(darr) fsum = dsum = 0.0 for value in fflat: fsum = fsum + value for value in dflat: dsum = dsum + value freduce = MA.add.reduce(fflat) dreduce = MA.add.reduce(dflat) print "accumulated sums" print "\tfloat\t", sum, "\tlong ", lsum print "accumulated from raveled array" print "\tfloat\t", fsum, "\tdouble", dsum print "add.reduce" print "\tsingle\t", freduce, "\tdouble", dreduce From sag at
hydrosphere.com (Sue Giller) Date: Thu Dec 6 12:57:05 2001 Subject: [Numpy-discussion] RE: Rounding errors in add.reduce? Message-ID: <20011206205644666.AAA217@mail.climatedata.com@SUEW2000> Well, never mind. I figured out that my test variables silently went to double precision, and that this is a case of precision issues, not rounding error. So, I guess I will have to force my arrays to double precision. Sorry to bother the list. sue From jh at oobleck.astro.cornell.edu Thu Dec 6 15:34:02 2001 From: jh at oobleck.astro.cornell.edu (Joe Harrington) Date: Thu Dec 6 15:34:02 2001 Subject: [Numpy-discussion] Meta: too many numerical libraries doing the same thing? In-Reply-To: (numpy-discussion-request@lists.sourceforge.net) References: Message-ID: <200112062333.fB6NXbn13050@oobleck.astro.cornell.edu> Sorry for the late hit, I have been away at a conference. Regarding the issue of SciPy including a Python interpreter and being a complete distribution, I think this is a *poor* idea. Most Linux distributions include Python for OS use. I ran into trouble when I supported a separate Python for science use. There was confusion (path issues, etc.) about which Python people were getting, why the need for a second version, etc. Then, the science version got out of date as I updated my OS and wanted to keep my data analysis version stable. That led to confusion about what features were broken/fixed in what version. If SciPy includes Python, it has to make a commitment to release a new, tested version of the whole package just as soon as the new Python is available. That will complicate the release schedule, as Python releases won't be in sync with the SciPy development cycle. There is also the issue, if OSs start including both mainstream Python and SciPy, of their inherently coming with two different versions of Python, and thereby causing headaches for the OS distributor. The likely result is that the distributor would drop SciPy.
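Sue's numbers are reproducible: accumulating 4*31*12 = 1488 copies of 50505 left to right in single precision drifts once the partial sum passes 2**24, the last point at which float32 represents every integer exactly. The sketch below uses modern NumPy scalar types; note that today's `numpy.sum` uses pairwise summation for float reductions and is therefore much more accurate than this naive left-to-right loop (which is what Numeric's add.reduce did):

```python
import numpy as np

value, count = 50505, 4 * 31 * 12    # same totals as the 4x31x12 array above
exact = value * count                # 75151440, exactly representable in float64

s32 = np.float32(0.0)
s64 = np.float64(0.0)
for _ in range(count):
    # every float32 partial sum is rounded back to a 24-bit significand
    s32 = np.float32(s32 + np.float32(value))
    s64 = s64 + np.float64(value)
```

Summing into a Python float (a C double) is why Sue's hand-rolled loops came out correct: only the float32 accumulator loses the low bits.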
Further, they will have to decide which version to install in /usr/bin (SciPy will lose that one). If SciPy or any other scientific package includes a Python interpreter, it should have a special name, like "scipy", and not be "python" to the command line. Frankly, I prefer the layered approach, so long as everyone works to make the layers "just work" together. This is quite practical with modern package managers. --jh-- From eric at enthought.com Thu Dec 6 19:39:01 2001 From: eric at enthought.com (eric) Date: Thu Dec 6 19:39:01 2001 Subject: [Numpy-discussion] Meta: too many numerical libraries doing the same thing? References: <200112062333.fB6NXbn13050@oobleck.astro.cornell.edu> Message-ID: <015201c17ec8$5ebba7c0$777ba8c0@ericlaptop> Hey Joe, You're right on many counts. A single monolithic package sacrifices the ability to embed SciPy in other apps. This is unacceptable as it is one of the major powers of Python. On the other hand, a single monolithic (single download) package is exactly what is needed to introduce the full capabilities of Python and SciPy to a large number of novice users. I'm hoping the choice between these two isn't an "either/or" decision. We've (well Travis O. mostly) been working to divide SciPy into multiple "levels," and think this provides a solution for a wide range of users. There are 3 levels now, and there is likely to be a 4th "sumo" level added later. Level 1 is the core routines. Level 2 includes most Numeric algorithms. Level 3 includes graphics and perhaps a few other things. The sumo package would be a large all-inclusive package with Python/Numeric/SciPy/VTK/wxPython(maybe)/PyCrust/MayaVi(maybe) and possibly others. Its form hasn't been defined yet, but it is meant to be a click-install package for the large population of potential users who are looking for a scientific programming/exploration environment.
The goal of SciPy from the beginning has been to make Python an appealing platform to the scientific masses -- not just the computer savvy. I firmly believe that something like the sumo package is the only way that SciPy will ever take hold as a widely used tool. If you require multiple installations, you lose a huge number of potential users. The other thing to remember is that there are really two major platforms -- Linux and Windows (and many others hopefully supported). Linux users are generally much more savvy and do not mind multiple installs. But the Windows world remains a huge portion of the target audience, and these people generally have a very different set of packaging expectations. If it ain't click/install/run, they'll simply choose a different package. We've considered releasing maybe 3 different packages, say level 1/2, level 1/2/3, and sumo so that people can pick the one they wish to use. On the good side, this gives the experts the core algorithms packaged up without much cruft so that they can use it with their current Python and the novice an easy way to try out all the cool features of Python/SciPy/Visualization. On the bad side, it introduces maintenance headaches and might lead to version issues. Still, I'm really hoping a good compromise can be found here. > If SciPy or any other scientific package includes a Python > interpreter, it should have a special name, like "scipy", and not be > "python" to the command line. Frankly, I prefer the layered approach, > so long as everyone works to make the layers "just work" together. > This is quite practical with modern package managers. I guess the sumo version might have the interpreter renamed scipy -- we'll wait till at least a few planks have been laid before we try and cross that bridge... On the layering side, I think SciPy is moving in the general direction that you're suggesting.
see ya, eric ----- Original Message ----- From: "Joe Harrington" To: Cc: Sent: Thursday, December 06, 2001 6:33 PM Subject: Re: [Numpy-discussion] Meta: too many numerical libraries doing the same thing? > Sorry for the late hit, I have been away at a conference. > > Regarding the issue of SciPy including a Python interpreter and being > a complete distribution, I think this is a *poor* idea. Most Linux > distributions include Python for OS use. I ran into trouble when I > supported a separate Python for science use. There was confusion > (path issues, etc.) about which Python people were getting, why the > need for a second version, etc. Then, the science version got out of > date as I updated my OS and wanted to keep my data analysis version > stable. That led to confusion about what features were broken/fixed > in what version. If SciPy includes Python, it has to make a > commitment to release a new, tested version of the whole package just > as soon as the new Python is available. That will complicate the > release schedule, as Python releases won't be in sync with the SciPy > development cycle. There is also the issue, if OSs start including > both mainstream Python and SciPy, of their inherently coming with two > different versions of Python, and thereby causing headaches for the OS > distributor. The likely result is that the distributor would drop > SciPy. Further, they will have to to decide which version to install > in /usr/bin (SciPy will lose that one). > > If SciPy or any other scientific package includes a Python > interpreter, it should have a special name, like "scipy", and not be > "python" to the command line. Frankly, I prefer the layered approach, > so long as everyone works to make the layers "just work" together. > This is quite practical with modern package managers. 
> > --jh-- > > _______________________________________________ > Numpy-discussion mailing list > Numpy-discussion at lists.sourceforge.net > https://lists.sourceforge.net/lists/listinfo/numpy-discussion > From hinsen at cnrs-orleans.fr Thu Dec 6 23:41:02 2001 From: hinsen at cnrs-orleans.fr (Konrad Hinsen) Date: Thu Dec 6 23:41:02 2001 Subject: [Numpy-discussion] Meta: too many numerical libraries doing the same thing? Message-ID: <200112070740.fB77e2q01664@localhost.localdomain> "eric" writes: > as a widely used tool. If you require multiple installations, you loose a > huge number of potential users. In the early days of NumPy, I had a similar problem because many people didn't succeed in installing it properly. I ended up preparing a specially packaged Python interpreter with NumPy plus my own scientific modules. With increasingly frequent releases of NumPy and Python itself, this became impossible to keep up. And since we have distutils, installation questions became really rare, being mostly about compatibility of my code with this or that version of NumPy (due to the change in the C API export). On the other hand, I work in a community dominated by Unix users, who come in two varieties: Linux people, who just get RPMs, and others, who typically have a competent systems administrator. The Windows world might have different requirements. Konrad. -- ------------------------------------------------------------------------------- Konrad Hinsen | E-Mail: hinsen at cnrs-orleans.fr Centre de Biophysique Moleculaire (CNRS) | Tel.: +33-2.38.25.56.24 Rue Charles Sadron | Fax: +33-2.38.63.15.17 45071 Orleans Cedex 2 | Deutsch/Esperanto/English/ France | Nederlands/Francais ------------------------------------------------------------------------------- From siopis at umich.edu Thu Dec 6 23:54:03 2001 From: siopis at umich.edu (Christos Siopis ) Date: Thu Dec 6 23:54:03 2001 Subject: [Numpy-discussion] Meta: too many numerical libraries doing the same thing? 
Message-ID:

I am reviving the 'too many numerical libraries doing the same thing?' thread, with apologies for the long delay in responding to the list's comments. Thanks to Joe Harrington, Paul Dubois, Konrad Hinsen and Chris Barker who took the time to respond to my posting. The funny thing is that, as i see it, even though they were mostly trying to provide reasons why such a project has not been, and probably will not be, done, their reasons sound to me like reasons why it *should* be done!

In essence, what i am 'proposing' is for a big umbrella organization (NSF, NASA and IEEE come to mind) to sponsor the development of this uber-library for numerical scientific and engineering applications. This would be 'sold' as an infrastructure project: creating the essential functionality that is needed in order to build most kinds of scientific and engineering applications. It would save lots of duplication of effort and improve productivity and quality at government labs, academia and the private sector alike. The end product would have some sort of open-source license (this can be a thorny issue, but i am sure a mutually satisfactory solution can be found). Alternatively (or in addition), it might be better to specify the API and leave the implementation part (open source and/or commercial) to others.

Below I give some reasons, based mostly on the feedback from the list, why i think it is appropriate that this effort be pursued at such a high level (both bureaucratically and talent-wise).

- The task is too complex to be carried out by a single group of people (e.g., by a single government lab):

  * Not only does it involve 'too much work', but the expertise and the priorities of the group won't necessarily reflect those of the scientific and engineering community at large.

  * The resources of a single lab/group do not suffice. Even if they did, project managers primarily care to produce what is needed for their project.
If it has beneficial side-effects for the community, so much the better, but this is not a priority. Alas, since the proposed task is too general to fit the needs of any single research group, no one will probably ever carry it out.

- The end product should be something that the 'average' numerical scientist will feel compelled to use. For instance, if the majority of the community does not feel at home with object-oriented techniques, there should be a layer (if technically possible) that would allow access to the library via more traditional languages (C/Fortran), all the while helping people to make the transition to OO, when appropriate --i can hear Paul saying "but it's *always* appropriate" :)

- There are a number of issues for which it is not presently clear that there is a good answer. Paul brought up some of these problems. For example, is it possible to build the library in layers such that it can be accessed at different levels as object components, as objects, and as 'usual', non-OO functions all at the same time? As Paul said, "Eiffel or C++ versions of some NAG routines typically have methods with one or two arguments while the C or Fortran ones have 15 or more". Does NAG use the same code-base for both its non-OO and OO products?

- It is hard to find people who are good scientists, good numerical scientists, good programmers, have good coordination skills and can work in a team. This was presented as a reason why a project such as the one proposed cannot be done. But in fact i think these are all arguments why a widely-publicised and funded project such as this would have more chances to succeed than small-scale, individual efforts. I would also imagine that numerical analysts might feel more compelled to 'publicize' their work to the masses if there were a single, respected, quality resource that provides the context in which they can deposit their work (or a draft thereof, anyway).
- Some companies and managers are hesitant to use open-source products due to requirements for 'fiduciary responsibility', as Paul put it. I think that a product created under the conditions that i described would probably pass this requirement --not in the sense that there will be an entity to sue, but in the sense that it would be a reputable product sponsored by some of the top research organizations or even the government. *************************************************************** / Christos Siopis | Tel : 734-615-1585 \ / Postdoctoral Research Fellow | \ / Department of Astronomy | FAX : 734-763-6317 \ / University of Michigan | \ / Ann Arbor, MI 48109-1090 | E-mail : siopis at umich.edu \ / U.S.A. _____________________| \ / / http://www.astro.lsa.umich.edu/People/siopis.html \ *************************************************************** From paul at pfdubois.com Fri Dec 7 08:54:02 2001 From: paul at pfdubois.com (Paul F. Dubois) Date: Fri Dec 7 08:54:02 2001 Subject: [Numpy-discussion] Meta: too many numerical libraries doing the same thing? In-Reply-To: Message-ID: <000801c17f3f$5af5a600$0b01a8c0@freedom> Chris wrote in part: -----Original Message----- From: numpy-discussion-admin at lists.sourceforge.net [mailto:numpy-discussion-admin at lists.sourceforge.net] On Behalf Of Christos Siopis Sent: Thursday, December 06, 2001 11:53 PM To: numpy-discussion at lists.sourceforge.net Subject: Re: [Numpy-discussion] Meta: too many numerical libraries doing the same thing? In essence, what i am 'proposing' is for a big umbrella organization (NSF, NASA and IEEE come to mind) to sponsor the development of this uber-library for numerical scientific and engineering applications. This would be 'sold' as an infrastructure project: creating the essential functionality that is needed in order to build most kinds of scientific and engineering applications. 
It would save lots of duplication effort and improve productivity and quality at government labs, academia and the private sector alike. The end product would have some sort of open-source license (this can be a thorny issue, but i am sure a mutually satisfactory solution can be found). ----------- Those who do not know history, etc. LLNL, LANL, and Sandia had such a project in the 70s called the SLATEC library for mathematical software. It was pretty successful for the Fortran era. However, the funding agencies are unable to maintain interest in infrastructure very long. If there came a day when the vast majority of scientific programmers shared a platform and a language, there is now the communications infrastructure so that they could do a good open-source library, given someone to lead it with some vision. Linus Mathguy. From jh at oobleck.astro.cornell.edu Fri Dec 7 13:02:02 2001 From: jh at oobleck.astro.cornell.edu (Joe Harrington) Date: Fri Dec 7 13:02:02 2001 Subject: [Numpy-discussion] Re: Meta: too many numerical libraries doing the same thing? In-Reply-To: (numpy-discussion-request@lists.sourceforge.net) References: Message-ID: <200112072101.fB7L1QJ16023@oobleck.astro.cornell.edu> Something that everyone should be aware of is that right now we *may* have an opportunity to get significant support. Kodak has acquired RSI, makers of IDL. Most of the planetary astronomy community uses IDL, as do many geophysicists and medical imaging people. Kodak is dramatically raising prices, and has killed support for Mac OS X. The IDL site license just arranged for the group at NASA Ames is over $200k, making site licenses more expensive than individual licenses were just a few years ago, on a per-license basis. At the Division for Planetary Sciences meeting last week, I was frequently approached by colleagues who said, "Joe, what do I do?", and from the more savvy, "Is Python ready yet?" 
I discussed the possibility of producing an OSS or PD analysis system with a NASA program manager. He sees the money going out of his own programs to Kodak, and is concerned. However, his programs cannot fund such an effort as it is out of his scope. The right program is probably Applied Information Systems Research, but he is checking around NASA HQ to see whether this is the case. He was very positive about the idea. I suspect that a proposal will be more likely to fly this year than next, as there is a sense of concern right now, whereas next year people will already have adjusted. Depending on how my '01 grant proposals turn out, I may be proposing this to NASA in '02. Paul Barrett and I proposed it once before, in 1996 I think, but to the wrong program. Supporting parts of the effort from different sources would be wise. Paul Dubois makes the excellent point that such efforts generally peter out. It would be important to set this up as an OSS project with many contributors, some of whom are paid full-time to design and build the core. Good foundational documents and designs, and community reviews solicited from savvy non-participants, would help ensure that progress continued as sources of funding appeared and disappeared...and that there is enough wider-community support to keep it going until it produces something. NASA's immediate objective will be a complete data-analysis system to replace IDL, in short order, including an IDL-to-python converter program. That shouldn't be hard as IDL is a simple language, and PyDL may have solved much of that problem. So, at this point I'm still assessing what to do and whether/how to do it. Should we put together proposals to the various funding agencies to support SciPy? Should we create something new? What efforts exist in other communities, particularly the numerical analysis community? How much can we rely on savvy users to contribute code, and will that code be any good? 
My feeling is that there is actually a lot of money available for this, but it will require a few people to give up their jobs and pursue it full-time. And there, as they say, is the rub.

--jh--

From paul at pfdubois.com Fri Dec 7 15:41:02 2001
From: paul at pfdubois.com (Paul F. Dubois)
Date: Fri Dec 7 15:41:02 2001
Subject: [Numpy-discussion] fromstring bug
Message-ID: <000001c17f78$3a86cb40$0b01a8c0@freedom>

There is a bug report about fromstring. This bug report is correct. The docstring differs from the documentation, which only mentions the two-argument form (string, typecode). Numeric itself uses the latter form. The implementation has the 'count' argument, but it is implemented incorrectly, without allowing for keyword arguments. In order to reconcile this with the least damage to the most people, I have changed the signature to:

fromstring(string, typecode='l', count=-1)

and allowed keyword arguments. This does not break the arguments as advertised in the documentation but does break any explicitly three-argument call in the old order. I will commit to CVS and this will be in 2.3 unless someone mean yells at me.

From phillip_david at xontech.com Sat Dec 8 00:33:02 2001
From: phillip_david at xontech.com (Phillip David)
Date: Sat Dec 8 00:33:02 2001
Subject: [Numpy-discussion] Meta: too many numerical libraries doing the same thing?
In-Reply-To: <015201c17ec8$5ebba7c0$777ba8c0@ericlaptop>
References: <200112062333.fB6NXbn13050@oobleck.astro.cornell.edu> <015201c17ec8$5ebba7c0$777ba8c0@ericlaptop>
Message-ID: <200112080831.fB88VLC22346@orngca-mls01.socal.rr.com>

On Thursday 06 December 2001 06:39 pm, you wrote:
> We've (well Travis O. mostly) been working to divide SciPy into multiple
> "levels," and think this provides a solution for a wide range of users.
> There are 3 levels now, and there is likely to be a 4th "sumo" level added
> later.
>
> Level 1 is the core routines. Level 2 includes most Numeric algorithms.
> Level 3 includes graphics and perhaps a few other things. The sumo package > would be a large all inclusive package with > Python/Numeric/SciPy/VTK/wxPython(maybe)/PyCrust/MayaVi(maybe) and possibly > others. I think this sounds GREAT! I've been looking for such a package to provide my company with a migration path from IDL, and such a package just might do it. If you're looking for other suggestions to add to sumo, I'd like to suggest ReportLab (or some sort of PDF generation tool) and as many scientific data formats (such as HDF, netCDF, and about a million others) as are sufficiently mature to be included. I've signed up to do an HDF5 port, and have been given a really nice start by Robert Kern. However, this one's not yet ready for inclusion. Phillip From jonathan.gilligan at vanderbilt.edu Mon Dec 10 11:56:02 2001 From: jonathan.gilligan at vanderbilt.edu (Jonathan M. Gilligan) Date: Mon Dec 10 11:56:02 2001 Subject: [Numpy-discussion] Meta: too many numerical libraries doing the same thing? In-Reply-To: Message-ID: <5.1.0.14.0.20011210133713.04ae3060@g.mail.vanderbilt.edu> To follow up on Paul Dubois's wise comments about history, I would cite a more recent example: Software Carpentry. In 1999, with great fanfare, DOE (LANL) and CodeSourcery announced: >The aim of the Software Carpentry project is to create a new generation of >easy-to-use software engineering tools, and to document both those tools >and the working practices they are meant to support. The Advanced >Computing Laboratory at Los Alamos National Laboratory is providing >$860,000 of funding for Software Carpentry, which is being administered by >Code Sourcery, LLC. All of the project's designs, tools, test suites, and >documentation will be generally available under the terms of an Open >Source license. With announcements to the scientific community and an article in Dr. 
Dobb's Journal, this thing looked like a project aimed at development tools quite similar to what you are envisioning for numerical tools, right down to the python-centricity. If you go to the web site (http://www.software-carpentry.com) today, you will find that a year into the project, the plug was pulled. This does not bode well for big open or free projects financed by the major scientific agencies. Curiously the private sector has done much better in this regard (e.g., VTK, spun out of General Electric, Data Explorer from IBM, etc.). At 01:53 AM 12/7/01, Christos Siopis wrote: >In essence, what i am 'proposing' is for a big umbrella organization (NSF, >NASA and IEEE come to mind) to sponsor the development of this >uber-library for numerical scientific and engineering applications. This >would be 'sold' as an infrastructure project: creating the essential >functionality that is needed in order to build most kinds of scientific >and engineering applications. It would save lots of duplication effort and >improve productivity and quality at government labs, academia and the >private sector alike. The end product would have some sort of open-source >license (this can be a thorny issue, but i am sure a mutually satisfactory >solution can be found). Alternatively (or in addition), it might be better >to specify the API and leave the implementation part (open source and/or >commercial) to others. ============================================================================= Jonathan M. Gilligan jonathan.gilligan at vanderbilt.edu The Robert T. Lagemann Assistant Professor www.vanderbilt.edu/lsp/gilligan of Living State Physics Office: 615 343-6252 Dept. 
of Physics and Astronomy, Box 1807-B Lab (X-Ray) 343-7574 6823 Stevenson Center Fax: 343-7263 Vanderbilt University, Nashville, TN 37235 Dep't Office: 322-2828

From jh at oobleck.astro.cornell.edu Mon Dec 10 12:33:04 2001
From: jh at oobleck.astro.cornell.edu (Joe Harrington)
Date: Mon Dec 10 12:33:04 2001
Subject: [Numpy-discussion] Meta: too many numerical libraries doing the same thing?
In-Reply-To: (numpy-discussion-request@lists.sourceforge.net)
References: Message-ID: <200112102032.fBAKWIs25433@oobleck.astro.cornell.edu>

Could you let us know what happened to Software Carpentry? The web site isn't very revealing. Who pulled the plug and why? Were they making expected progress? Are they dead or merely negotiating some changes with this or another sponsor?

Thanks,

--jh--

From jjl at pobox.com Mon Dec 10 14:49:03 2001
From: jjl at pobox.com (John J. Lee)
Date: Mon Dec 10 14:49:03 2001
Subject: [Numpy-discussion] Meta: too many numerical libraries doing the same thing?
In-Reply-To: <200112102032.fBAKWIs25433@oobleck.astro.cornell.edu>
Message-ID:

On Mon, 10 Dec 2001, Joe Harrington wrote:
> Could you let us know what happened to Software Carpentry? The web
> site isn't very revealing. Who pulled the plug and why? Were they
> making expected progress? Are they dead or merely negotiating some
> changes with this or another sponsor?

Don't know about the official project / contest / whatever it was, but some of the tools proposed have been implemented to some extent. For example, SCcons --> scons (IIRC), and Roundup, the winning bug-tracking system design (Ka-Ping Yee I think) is being implemented by someone -- there was an announcement on the python announce list the other day.
John

From szummer at ai.mit.edu Mon Dec 10 15:04:02 2001
From: szummer at ai.mit.edu (Martin Szummer)
Date: Mon Dec 10 15:04:02 2001
Subject: [Numpy-discussion] sparse matrices; SparsePy
Message-ID: <001301c181ce$aa8518e0$27022a12@mit.edu>

Hi all,

What is the recommended way of doing sparse matrix manipulations in NumPy? I read about Travis Oliphant's SparsePy http://pylab.sourceforge.net/ but on that web page, the links are broken (the tarball download fails). Also, this is version 0.1, and has not been updated in a long time. Are there any others? If anybody has a copy of the SparsePy source, could you make it available to me? (also it would be good if somebody could repair Travis Oliphant's page; I tried to notify him, but don't think I had the correct email).

Has anybody tried compiling SparsePy under Windows? It seems tricky, given that it needs a Fortran compiler. I was thinking of trying Cygwin, but not sure if I can get it to link with the ActiveState python compiled with Microsoft tools.

Thanks,
-- Martin

From jonathan.gilligan at vanderbilt.edu Mon Dec 10 15:05:01 2001
From: jonathan.gilligan at vanderbilt.edu (Jonathan M. Gilligan)
Date: Mon Dec 10 15:05:01 2001
Subject: [Numpy-discussion] Meta: too many numerical libraries doing the same thing?
In-Reply-To: <200112102032.fBAKWIs25433@oobleck.astro.cornell.edu>
References: Message-ID: <5.1.0.14.0.20011210162751.04ad9ec0@g.mail.vanderbilt.edu>

I don't know too much. From what I have read, it seems that the money dried up with the .com crash. The $800,000 allocated for development never materialized, and the project head left for greener pastures. Some of the projects that won the SC design competition have moved to SourceForge, where they show some life (http://sourceforge.net/projects/roundup/, http://sourceforge.net/projects/scons/). The big thing, from what I know (which is not much), is that the promise of almost a million dollars of federal funding for this project seems to have evaporated.
At 02:32 PM 12/10/01, Joe Harrington wrote:
>Could you let us know what happened to Software Carpentry? The web
>site isn't very revealing. Who pulled the plug and why? Were they
>making expected progress? Are they dead or merely negotiating some
>changes with this or another sponsor?

From gpk at bell-labs.com Tue Dec 11 12:54:03 2001
From: gpk at bell-labs.com (Greg Kochanski)
Date: Tue Dec 11 12:54:03 2001
Subject: [Numpy-discussion] Re: Numpy-discussion digest, Vol 1 #366 - 7 msgs
References: Message-ID: <3C16721D.2B361EAA@bell-labs.com>

Spacesaver doesn't propagate through matrix multiplies:

>>> import Numeric
>>> x = Numeric.zeros((3,3), Numeric.Float32)
>>> x.savespace(1)
>>> x
array([[ 0., 0., 0.],
       [ 0., 0., 0.],
       [ 0., 0., 0.]],'f')
>>> x.spacesaver()
1
>>> y = Numeric.matrixmultiply(x, x)
>>> y
array([[ 0., 0., 0.],
       [ 0., 0., 0.],
       [ 0., 0., 0.]],'f')
>>> y.spacesaver()
0

From nwagner at mecha.uni-stuttgart.de Thu Dec 13 02:39:02 2001
From: nwagner at mecha.uni-stuttgart.de (Nils Wagner)
Date: Thu Dec 13 02:39:02 2001
Subject: [Numpy-discussion] Fourth-Order Runge-Kutta
Message-ID: <3C189374.BC2658CD@mecha.uni-stuttgart.de>

Hi,

I am looking for an implementation of the fourth-order Runge-Kutta method in Numpy. Is it already done by someone? Thanks in advance.

Nils

From cnetzer at mail.arc.nasa.gov Thu Dec 13 13:12:25 2001
From: cnetzer at mail.arc.nasa.gov (Chad Netzer)
Date: Thu Dec 13 13:12:25 2001
Subject: [Numpy-discussion] Fourth-Order Runge-Kutta
In-Reply-To: References: Message-ID: <200112132111.NAA18760@mail.arc.nasa.gov>

> Date: Thu, 13 Dec 2001 12:39:32 +0100
> From: Nils Wagner
> Subject: [Numpy-discussion] Fourth-Order Runge-Kutta
>
> I am looking for an implementation of the fourth-order Runge-Kutta
> method in Numpy.

Here is one I wrote (based on a similar one I had seen in lorentz.py in the PyOpenGL demos, I believe), that I used for chaotic dynamics visualizations.
# A simple, non-stepsize-adaptive fourth-order Runge-Kutta integration
# estimator method.
# (Assumes: from Numeric import array, asarray)
def fourth_order_runge_kutta(self, xyz, dt):
    derivative = self.differentiator
    hdt = 0.5 * dt
    xyz = asarray(xyz)  # Force tuple or list to an array
    k1 = array(derivative(xyz))
    k2 = array(derivative(xyz + k1 * hdt))
    k3 = array(derivative(xyz + k2 * hdt))
    k4 = array(derivative(xyz + k3 * dt))
    new_xyz = xyz + (k1 + k4) * (dt / 6.0) + (k2 + k3) * (dt / 3.0)
    return new_xyz

where self.differentiator is a function that takes an x,y,z coordinate tuple, and returns the derivative as an x,y,z coordinate tuple. I wrote this when I was much more naive about Runge-Kutta and Python Numeric, so don't use it without some looking over. It is at least a good starting point.

-- Chad Netzer cnetzer at mail.arc.nasa.gov

From Achim.Gaedke at uni-koeln.de Fri Dec 14 02:27:02 2001
From: Achim.Gaedke at uni-koeln.de (Achim Gaedke)
Date: Fri Dec 14 02:27:02 2001
Subject: [Numpy-discussion] pygsl source moved to sourceforge
Message-ID: <3C19D114.8788B1C0@uni-koeln.de>

Hi!

I've checked in my source, so it is available at http://pygsl.sourceforge.net/ . A mailing list for discussion and support has been created; please mail to: mailto:pygsl-discuss at lists.sourceforge.net .

pygsl now provides:

module pygsl.sf: 200 special functions
module pygsl.const: 23 often-used mathematical constants
module pygsl.ieee: access to the ieee-arithmetics layer of gsl
module pygsl.rng: several random number generators and different probability densities

It can be used with gsl-0.* and with gsl-1.* (with a small change in setup.py). Version 0.0.3 is tested for python-2.0. The cvs version can be used with python-2.1 and python-2.2. Help is needed and code is accepted with thanks.
Just visit our cvs repository at sourceforge: http://sourceforge.net/cvs/?group_id=34743

Yours, Achim

From rob at pythonemproject.com Sun Dec 16 13:05:02 2001
From: rob at pythonemproject.com (Rob)
Date: Sun Dec 16 13:05:02 2001
Subject: [Numpy-discussion] my Numpy statements are slower than indexed formulations in some cases
Message-ID: <3C1D0BF4.23E89F0@pythonemproject.com>

I have a number of these routines in some EM code. I've tried to Numpyize them, but end up with code that runs even slower. Here is the old indexed routine:

-----------------
for JJ in range(0,TotExtMetalEdgeNum):
    McVector += Ccc[0:TotExtMetalEdgeNum,JJ] * VcVector[JJ]
----------------

Here is the Numpy version:

---------------------
McVector = add.reduce(transpose(Ccc[...] * VcVector[...]))
---------------------

I wonder if there is another faster way to do this? Thanks, Rob.

-- The Numeric Python EM Project www.pythonemproject.com

From ray_drew at yahoo.co.uk Mon Dec 17 02:48:03 2001
From: ray_drew at yahoo.co.uk (Ray Drew)
Date: Mon Dec 17 02:48:03 2001
Subject: [Numpy-discussion] Python 2.2
Message-ID: <006201c186e7$da0d32f0$df13000a@RDREWXP>

Does anyone know when a release for Python 2.2 on Windows will be available?

regards,
Ray Drew

_________________________________________________________
Do You Yahoo!? Get your free @yahoo.com address at http://mail.yahoo.com

From paul at pfdubois.com Mon Dec 17 08:13:04 2001
From: paul at pfdubois.com (Paul F. Dubois)
Date: Mon Dec 17 08:13:04 2001
Subject: [Numpy-discussion] Python 2.2
In-Reply-To: <006201c186e7$da0d32f0$df13000a@RDREWXP>
Message-ID: <000001c18715$477816c0$0c01a8c0@freedom>

I downloaded and installed 2.2c1 and built the Numeric installers. However, when I run them they all fail when the installation begins (after one clicks the final click to install) with an access violation. I removed any previous Numeric and it still happened.

Building and installing with setup.py install works ok.
-----Original Message----- From: numpy-discussion-admin at lists.sourceforge.net [mailto:numpy-discussion-admin at lists.sourceforge.net] On Behalf Of Ray Drew Sent: Monday, December 17, 2001 2:45 AM To: numpy-discussion at lists.sourceforge.net Subject: [Numpy-discussion] Python 2.2 Does anyone know when a release for Python 2.2 on Windows will be available? regards, Ray Drew _________________________________________________________ Do You Yahoo!? Get your free @yahoo.com address at http://mail.yahoo.com _______________________________________________ Numpy-discussion mailing list Numpy-discussion at lists.sourceforge.net https://lists.sourceforge.net/lists/listinfo/numpy-discussion From guido at python.org Mon Dec 17 08:20:01 2001 From: guido at python.org (Guido van Rossum) Date: Mon Dec 17 08:20:01 2001 Subject: [Python-Dev] RE: [Numpy-discussion] Python 2.2 In-Reply-To: Your message of "Mon, 17 Dec 2001 08:09:59 PST." <000001c18715$477816c0$0c01a8c0@freedom> References: <000001c18715$477816c0$0c01a8c0@freedom> Message-ID: <200112171616.LAA04194@cj20424-a.reston1.va.home.com> > I downloaded and installed 2.2c1 and built the Numeric installers. > However, when I run them they all fail when the installation begins > (after one clicks the final click to install) with an access violation. > I removed any previous Numeric and it still happened. > > Building and installing with setup.py install works ok. I don't know anything about your installers, but could it be that you were trying to install without Administrator permissions? That used to crash the previous Python installer too. (The new one doesn't, but it's a commercial product so we can't share it.) 
--Guido van Rossum (home page: http://www.python.org/~guido/) From thomas.heller at ion-tof.com Mon Dec 17 08:51:04 2001 From: thomas.heller at ion-tof.com (Thomas Heller) Date: Mon Dec 17 08:51:04 2001 Subject: [Python-Dev] RE: [Numpy-discussion] Python 2.2 References: <000001c18715$477816c0$0c01a8c0@freedom> Message-ID: <028501c1871a$ec399260$e000a8c0@thomasnotebook> > I downloaded and installed 2.2c1 and built the Numeric installers. > However, when I run them they all fail when the installation begins > (after one clicks the final click to install) with an access violation. > I removed any previous Numeric and it still happened. > > Building and installing with setup.py install works ok. > Are you talking about bdist_wininst installers? I just could reproduce a problem with them (after renaming away all my zip.exe programs in the PATH). Thomas From paul at pfdubois.com Mon Dec 17 09:27:03 2001 From: paul at pfdubois.com (Paul F. Dubois) Date: Mon Dec 17 09:27:03 2001 Subject: [Python-Dev] RE: [Numpy-discussion] Python 2.2 In-Reply-To: <200112171616.LAA04194@cj20424-a.reston1.va.home.com> Message-ID: <000201c1871f$a6f514e0$0c01a8c0@freedom> I am on Windows Me, so there is no concept of administrator. -----Original Message----- From: numpy-discussion-admin at lists.sourceforge.net [mailto:numpy-discussion-admin at lists.sourceforge.net] On Behalf Of Guido van Rossum Sent: Monday, December 17, 2001 8:16 AM To: Paul F. Dubois Cc: 'Ray Drew'; numpy-discussion at lists.sourceforge.net; python-dev at python.org Subject: Re: [Python-Dev] RE: [Numpy-discussion] Python 2.2 > I downloaded and installed 2.2c1 and built the Numeric installers. > However, when I run them they all fail when the installation begins > (after one clicks the final click to install) with an access > violation. I removed any previous Numeric and it still happened. > > Building and installing with setup.py install works ok. 
I don't know anything about your installers, but could it be that you were trying to install without Administrator permissions? That used to crash the previous Python installer too. (The new one doesn't, but it's a commercial product so we can't share it.) --Guido van Rossum (home page: http://www.python.org/~guido/) _______________________________________________ Numpy-discussion mailing list Numpy-discussion at lists.sourceforge.net https://lists.sourceforge.net/lists/listinfo/numpy-discussion From thomas.heller at ion-tof.com Tue Dec 18 13:27:02 2001 From: thomas.heller at ion-tof.com (Thomas Heller) Date: Tue Dec 18 13:27:02 2001 Subject: Need help with bdist_wininst, was Re: [Python-Dev] RE: [Numpy-discussion] Python 2.2 References: <000001c18715$477816c0$0c01a8c0@freedom> <028501c1871a$ec399260$e000a8c0@thomasnotebook> Message-ID: <09bc01c1880a$8f0a4530$e000a8c0@thomasnotebook> I've just fixed the bdist_wininst command in CVS. There is not much time left before the release of Python 2.2 (how much exactly?). To make sure that the current version works correctly, I would kindly request some testers to build windows installers from some of their package distributions, and check them to make sure they work correctly. The full story is in bug [#483982]: Python 2.2b2 bdist_wininst crashes: http://sourceforge.net/tracker/index.php?func=detail&aid=483982&group_id=5470&atid=105470 To test the new version, you only have to replace the file Lib/distutils/command/bdist_wininst.py in your Python 2.2 distribution with the new one from CVS (rev 1.27): http://cvs.sourceforge.net/cgi-bin/viewcvs.cgi/python/python/dist/src/Lib/distutils/command/bdist_wininst.py Thanks, Thomas From mal at lemburg.com Tue Dec 18 13:48:08 2001 From: mal at lemburg.com (M.-A. 
Lemburg) Date: Tue Dec 18 13:48:08 2001 Subject: Need help with bdist_wininst, was Re: [Python-Dev] RE: [Numpy-discussion] Python 2.2 References: <000001c18715$477816c0$0c01a8c0@freedom> <028501c1871a$ec399260$e000a8c0@thomasnotebook> <09bc01c1880a$8f0a4530$e000a8c0@thomasnotebook> Message-ID: <3C1FB96C.E3EE8DCB@lemburg.com> Thomas Heller wrote: > > I've just fixed the bdist_wininst command in CVS. > There is not much time left before the release of Python 2.2 (how much exactly?). > > To make sure that the current version works correctly, I would > kindly request some testers to build windows installers from some > of their package distributions, and check them to make sure they work > correctly. > > The full story is in bug [#483982]: Python 2.2b2 bdist_wininst crashes: > http://sourceforge.net/tracker/index.php?func=detail&aid=483982&group_id=5470&atid=105470 > > To test the new version, you only have to replace the file > Lib/distutils/command/bdist_wininst.py in your Python 2.2 distribution > with the new one from CVS (rev 1.27): > > http://cvs.sourceforge.net/cgi-bin/viewcvs.cgi/python/python/dist/src/Lib/distutils/command/bdist_wininst.py FYI, I just tested it with egenix-mx-base and egenix-mx-commercial and both seem to work just fine. -- Marc-Andre Lemburg CEO eGenix.com Software GmbH ______________________________________________________________________ Company & Consulting: http://www.egenix.com/ Python Software: http://www.egenix.com/files/python/ From tim at zope.com Tue Dec 18 14:18:02 2001 From: tim at zope.com (Tim Peters) Date: Tue Dec 18 14:18:02 2001 Subject: Need help with bdist_wininst, was Re: [Python-Dev] RE: [Numpy-discussion] Python 2.2 In-Reply-To: <09bc01c1880a$8f0a4530$e000a8c0@thomasnotebook> Message-ID: [Thomas Heller] > I've just fixed the bdist_wininst command in CVS. Thank you! > There is not much time left before the release of Python 2.2 (how > much exactly?). 2.2 final should be released this Friday. 
From barry at zope.com Tue Dec 18 14:20:01 2001 From: barry at zope.com (Barry A. Warsaw) Date: Tue Dec 18 14:20:01 2001 Subject: Need help with bdist_wininst, was Re: [Python-Dev] RE: [Numpy-discussion] Python 2.2 References: <000001c18715$477816c0$0c01a8c0@freedom> <028501c1871a$ec399260$e000a8c0@thomasnotebook> <09bc01c1880a$8f0a4530$e000a8c0@thomasnotebook> Message-ID: <15391.49367.712590.185830@anthem.wooz.org> >>>>> "TH" == Thomas Heller writes: TH> I've just fixed the bdist_wininst command in CVS. There is TH> not much time left before the release of Python 2.2 (how much TH> exactly?). About 3 days. :) -Barry From paul at pfdubois.com Tue Dec 18 16:36:01 2001 From: paul at pfdubois.com (Paul F. Dubois) Date: Tue Dec 18 16:36:01 2001 Subject: [Numpy-discussion] Windows / 2.2 version up Message-ID: <000501c18824$adde1ee0$0c01a8c0@freedom> I built the Windows installers for Python-2.2 and they should appear at sourceforge shortly. From fardal at coral.phys.uvic.ca Tue Dec 18 19:40:01 2001 From: fardal at coral.phys.uvic.ca (Mark Fardal) Date: Tue Dec 18 19:40:01 2001 Subject: [Numpy-discussion] converting IDL to python Message-ID: <200112190433.fBJ4XNT13889@coral.phys.uvic.ca> Hi, earlier this month Joe Harrington posted this: NASA's immediate objective will be a complete data-analysis system to replace IDL, in short order, including an IDL-to-python converter program. That shouldn't be hard as IDL is a simple language, and PyDL may have solved much of that problem. I'm an IDL user, and I'm currently trying to see if I can switch over to python. It strikes me that an IDL-to-python converter program is a VERY ambitious idea. While IDL's syntax is rather similar to Python, and less powerful (so that there is less to translate), there are enough differences between them that conversion is probably not a simple task. 
For example, in IDL: arguments are passed by reference array storage order is different there's a different notion of "truth" than in Python a structure type exists, and a notation for slicing arrays of structures trailing "degenerate" array dimensions are truncated in a hard-to-predict way array subscripting generates copy, not reference I'm sure there are some other big differences too. Then there is major functionality in IDL that is missing in Python. * (Like plotting. :-) It's hard to translate a routine when you have nothing to translate it to. I've looked at PyDL. It seems a nice little crutch, and it makes the learning curve a little shallower for people converting from IDL to Python. It's very far from being a complete translation from one language to the other. I don't know what Joe had in mind for a converter program. One that took care of braindead syntax conversion, while leaving the more difficult issues for a human programmer to translate, might not be too hard and could be pretty useful. I think it would be a bad idea to hold out for something that automatically produced working code. my $.02 (worth only $0.0127 US) Mark Fardal University of Victoria * I'm stuck in Python 1.5 until the New Year, so have yet to see what SciPy has to offer. Maybe SciPy-Sumo is getting close. From rlw at stsci.edu Tue Dec 18 21:31:06 2001 From: rlw at stsci.edu (Rick White) Date: Tue Dec 18 21:31:06 2001 Subject: [Numpy-discussion] converting IDL to python In-Reply-To: <200112190433.fBJ4XNT13889@coral.phys.uvic.ca> Message-ID: On Tue, 18 Dec 2001, Mark Fardal wrote: > earlier this month Joe Harrington posted this: > > NASA's immediate objective will be a complete data-analysis system to > replace IDL, in short order, including an IDL-to-python converter > program. That shouldn't be hard as IDL is a simple language, and PyDL > may have solved much of that problem. > > I'm an IDL user, and I'm currently trying to see if I can switch over > to python. 
It strikes me that an IDL-to-python converter program is a > VERY ambitious idea. While IDL's syntax is rather similar to Python, > and less powerful (so that there is less to translate), there are > enough differences between them that conversion is probably not a > simple task. For example, in IDL: > > arguments are passed by reference > array storage order is different > there's a different notion of "truth" than in Python > a structure type exists, and a notation for slicing arrays of structures > trailing "degenerate" array dimensions are truncated in a hard-to-predict way > array subscripting generates copy, not reference > > I'm sure there are some other big differences too. I'm thinking about this problem too. For the PyRAF system (pyraf.stsci.edu), I wrote a set of Python programs that parse IRAF CL scripts and automatically translate them to Python. CL scripts have a much less regular syntax than IDL (making them harder to parse), but the CL has a vastly more limited functionality (making it easier to translate the results to Python.) I did have to deal with issues similar to those above. E.g., the CL does pass arguments by reference (sometimes -- I told you it is irregular!) and I figured out how to handle that. The array storage order doesn't strike me as a biggie, it is just necessary to swap the order of indices. The CL also has a different notion of truth (boolean expressions have the value yes or no, which are magic values along the lines of None in Python.) The IDL notion of truth is really pretty similar to Python's when you get down to it. 
We also have arrays of strings and aim to have all the basic array functionality that is available in IDL. (By the way, I'm a long-time IDL user and am reasonably expert in it.) The hardest part (as you mention) is that IDL has a really large collection of built-in functions which probably will not be readily available in Python. Until someone writes things like the morphological dilation and erosion operations, it won't be possible to translate IDL programs that use them. And there will always be some weird corners of IDL behavior where it just would not be worth the effort to try to replicate exactly what IDL does. That's true in the CL translation too -- but it does not seem to be a big limitation, because users tend to stay away from those weird spots too. My conclusion is that an automatic conversion of (most) IDL to Python probably can be done with a reasonable amount of work. It certainly is not trivial, but I think (based on my experience with the CL) that it is not impossibly difficult either. I'm hoping to take a shot at this sometime during the next 6 months. Rick Richard L. White rlw at stsci.edu http://sundog.stsci.edu/rick/ Space Telescope Science Institute Baltimore, MD From edcjones at erols.com Wed Dec 19 15:50:02 2001 From: edcjones at erols.com (Edward C. Jones) Date: Wed Dec 19 15:50:02 2001 Subject: [Numpy-discussion] float** arrays and Numarray Message-ID: <3C21294A.4090706@erols.com> I use Numeric to transfer images between various image processing systems. Numarray includes most of the image data memory layouts that I have seen except for: Some C types need to be added (as several people have already commented). Some systems define an array as a "pointer to an array of pointers to ...". "Numerical Recipes in C" explains this approach clearly. Could this perhaps be implemented in Numarray as a buffer interface with multiple data segments? 
Thanks, Ed Jones From jmr at engineering.uiowa.edu Wed Dec 19 20:13:02 2001 From: jmr at engineering.uiowa.edu (Joe Reinhardt) Date: Wed Dec 19 20:13:02 2001 Subject: [Numpy-discussion] converting IDL to python Message-ID: <876672h6n7.fsf@phantom.ecn.uiowa.edu> Rick White writes: > The hardest part (as you mention) is that IDL has a really large > collection of built-in functions which probably will not be readily > available in Python. Until someone writes things like the morphological > dilation and erosion operations, it won't be possible to translate IDL > programs that use them. By the way, I do have binary erosion and dilation, along with connected components analysis, flood fill region grow, and several other common 2D/3D image processing functions working in NumPy. I would be glad to share them. I don't know how similar the interface is to IDL, since I have never used IDL. - Joe Reinhardt -- Joseph M. Reinhardt, Ph.D. Department of Biomedical Engineering joe-reinhardt at uiowa.edu University of Iowa, Iowa City, IA 52242 Telephone: 319-335-5634 FAX: 319-335-5631 From eric at enthought.com Thu Dec 20 08:18:03 2001 From: eric at enthought.com (eric) Date: Thu Dec 20 08:18:03 2001 Subject: [Numpy-discussion] converting IDL to python References: <876672h6n7.fsf@phantom.ecn.uiowa.edu> Message-ID: <03ad01c18969$8130e080$777ba8c0@ericlaptop> Rick and Joe, I'm not familiar with these erosion and dilation functions. Are they something that should be in SciPy? We could start an "idl" package that houses functions familiar to this community. Or maybe they should live in another place like "signal" or maybe an image processing package. Thoughts? I've cross-posted this to the scipy-dev at scipy.org list as this is probably the best place to continue such a discussion. 
eric ----- Original Message ----- From: "Joe Reinhardt" To: Sent: Wednesday, December 19, 2001 11:12 PM Subject: [Numpy-discussion] converting IDL to python > > Rick White writes: > > The hardest part (as you mention) is that IDL has a really large > > collection of built-in functions which probably will not be readily > > available in Python. Until someone writes things like the morphological > > dilation and erosion operations, it won't be possible to translate IDL > > programs that use them. > > By the way, I do have binary erosion and dilation, along with > connected components analysis, flood fill region grow, and several > other common 2D/3D image processing functions working in NumPy. > > I would be glad to share them. I don't know how similar the > interface is to IDL, since I have never used IDL. > > - Joe Reinhardt > > > > -- > Joseph M. Reinhardt, Ph.D. Department of Biomedical Engineering > joe-reinhardt at uiowa.edu University of Iowa, Iowa City, IA 52242 > Telephone: 319-335-5634 FAX: 319-335-5631 > > _______________________________________________ > Numpy-discussion mailing list > Numpy-discussion at lists.sourceforge.net > https://lists.sourceforge.net/lists/listinfo/numpy-discussion > From jmr at engineering.uiowa.edu Thu Dec 20 08:28:04 2001 From: jmr at engineering.uiowa.edu (Joe Reinhardt) Date: Thu Dec 20 08:28:04 2001 Subject: [Numpy-discussion] converting IDL to python In-Reply-To: <03ad01c18969$8130e080$777ba8c0@ericlaptop> ("eric"'s message of "Thu, 20 Dec 2001 10:17:57 -0500") References: <876672h6n7.fsf@phantom.ecn.uiowa.edu> <03ad01c18969$8130e080$777ba8c0@ericlaptop> Message-ID: "eric" writes: > I'm not familiar with these erosion and dialation functions. Are they > something that should be in SciPy? We could start an "idl" package that > houses functions familiar to this community. Or maybe they should live in > another place like "signal" or maybe an image processing package. > Thoughts? 
Dilation and erosion are image processing functions used to "grow" and "shrink" objects and boundaries. They are part of a group of image operators based on the theory of mathematical morphology. Overall, they are extremely useful for shape/size analysis on images. Yes, they should certainly be part of any image processing toolbox for Python. Here is a link for a commercial mathematical morphology toolbox for Matlab: http://www.mmorph.com - Joe -- Joseph M. Reinhardt, Ph.D. Department of Biomedical Engineering joe-reinhardt at uiowa.edu University of Iowa, Iowa City, IA 52242 Telephone: 319-335-5634 FAX: 319-335-5631 From eric at enthought.com Thu Dec 20 08:36:03 2001 From: eric at enthought.com (eric) Date: Thu Dec 20 08:36:03 2001 Subject: [Numpy-discussion] converting IDL to python References: Message-ID: <03be01c1896c$12f7d670$777ba8c0@ericlaptop> I'm not an IDL user, but have thought some about the same sort of project for either executing Matlab scripts from Python, or perhaps translating them to Python. On the face of it, it seems doable. Of the incompatibilities I've thought about, the magic NARGOUT variable that tells Matlab how many output results are expected from a function seems the most difficult. It might be handled in Python by examining the parse tree, but it isn't a trivial translation. Another option would be embedding Octave, which is a Matlab work-alike. I think it is a little behind the current Matlab release though. There might be some synergy between projects aiming to execute IDL/Matlab code from Python. eric ----- Original Message ----- From: "Rick White" To: "Mark Fardal" Cc: "numpy" Sent: Wednesday, December 19, 2001 12:30 AM Subject: Re: [Numpy-discussion] converting IDL to python > On Tue, 18 Dec 2001, Mark Fardal wrote: > > > earlier this month Joe Harrington posted this: > > > > NASA's immediate objective will be a complete data-analysis system to > > replace IDL, in short order, including an IDL-to-python converter > > program. 
That shouldn't be hard as IDL is a simple language, and PyDL > > may have solved much of that problem. > > > > I'm an IDL user, and I'm currently trying to see if I can switch over > > to python. It strikes me that an IDL-to-python converter program is a > > VERY ambitious idea. While IDL's syntax is rather similar to Python, > > and less powerful (so that there is less to translate), there are > > enough differences between them that conversion is probably not a > > simple task. For example, in IDL: > > > > arguments are passed by reference > > array storage order is different > > there's a different notion of "truth" than in Python > > a structure type exists, and a notation for slicing arrays of structures > > trailing "degenerate" array dimensions are truncated in a hard-to-predict way > > array subscripting generates copy, not reference > > > > I'm sure there are some other big differences too. > > I'm thinking about this problem too. For the PyRAF system > (pyraf.stsci.edu), I wrote a set of Python programs that parse IRAF CL > scripts and automatically translate them to Python. CL scripts have a > much less regular syntax than IDL (making them harder to parse), but the > CL has a vastly more limited functionality (making it easier to translate > the results to Python.) I did have to deal with issues similar to those > above. E.g., the CL does pass arguments by reference (sometimes -- I told > you it is irregular!) and I figured out how to handle that. The array > storage order doesn't strike me as a biggie, it is just necessary to swap > the order of indices. The CL also has a different notion of truth > (boolean expressions have the value yes or no, which are magic values > along the lines of None in Python.) The IDL notion of truth is really > pretty similar to Python's when you get down to it. 
> > The new numeric array module that we have been working on (numarray) also > supports the equivalent of arrays of structures that can be sliced and > indexed just like arrays of numbers. We had IDL structures (among other > things) in mind when we developed this capability, and think our version > is a significant improvement over IDL. We also have arrays of strings > and aim to have all the basic array functionality that is available in > IDL. (By the way, I'm a long-time IDL user and am reasonably expert in it.) > > The hardest part (as you mention) is that IDL has a really large > collection of built-in functions which probably will not be readily > available in Python. Until someone writes things like the morphological > dilation and erosion operations, it won't be possible to translate IDL > programs that use them. And there will always be some weird corners of > IDL behavior where it just would not be worth the effort to try to > replicate exactly what IDL does. That's true in the CL translation too > -- but it does not seem to be a big limitation, because users tend to stay > away from those weird spots too. > > My conclusion is that an automatic conversion of (most) IDL to Python > probably can be done with a reasonable amount of work. It certainly is > not trivial, but I think (based on my experience with the CL) that it is > not impossibly difficult either. I'm hoping to take a shot at this > sometime during the next 6 months. > Rick > > Richard L. 
White rlw at stsci.edu http://sundog.stsci.edu/rick/ > Space Telescope Science Institute > Baltimore, MD > > > > > _______________________________________________ > Numpy-discussion mailing list > Numpy-discussion at lists.sourceforge.net > https://lists.sourceforge.net/lists/listinfo/numpy-discussion > From perry at stsci.edu Thu Dec 20 10:37:01 2001 From: perry at stsci.edu (Perry Greenfield) Date: Thu Dec 20 10:37:01 2001 Subject: [Numpy-discussion] float** arrays and Numarray In-Reply-To: <3C21294A.4090706@erols.com> Message-ID: > I use Numeric to transfer images between various image processing > systems. Numarray includes most of the image data memory layouts that I > have seen except for: > > Some C types need to be added (as several people have already commented). > > Some systems define an array as a "pointer to an array of pointers to > ...". "Numerical Recipes in C" explains this approach clearly. Could > this perhaps be implemented in Numarray as a buffer interface with > multiple data segments? > > Thanks, > Ed Jones That's an interesting question. I don't think that such a representation makes much sense for numarray internally (for example, it makes it difficult to reshape the object without creating new pointer arrays -- or data segments as mentioned above). I doubt that we would take this approach. On the other hand, I don't see why the C-API could not create such a representation (i.e., creating the necessary pointer arrays). I guess the main question is how memory management is done for these pointer arrays, since they are not part of the numarray object (one may have to explicitly deallocate them in the C code). The main usefulness of this approach is that it allows a simple indexing notation for multidimensional arrays in C. (On the other hand, perhaps the same could be largely accomplished with C macros.) A drawback is that it can imply significant memory overheads if the shape of the array is skinny in the wrong dimension. 
But no, we haven't really thought much about it yet. Perry From perry at stsci.edu Thu Dec 20 10:48:02 2001 From: perry at stsci.edu (Perry Greenfield) Date: Thu Dec 20 10:48:02 2001 Subject: [Numpy-discussion] next step for numarray Message-ID: I just thought I would write a few words about what will be happening with numarray development. At the moment it is quiescent since we have another internal project to finish. Hopefully that will be done in 3 weeks or so; we will start work on numarray again. I've gotten feedback from Guido regarding the safety issue; he made it clear that we should not let the user be able to crash Python, even if they can do that only by bypassing the public interface. Since there may be implications for the C-API we are making that our highest priority (though I don't think it will seriously change the API). Thanks for the feedback so far. We will raise other specific issues as we get closer to dealing with them. Perry From rob at pythonemproject.com Fri Dec 21 23:46:23 2001 From: rob at pythonemproject.com (Rob) Date: Fri Dec 21 23:46:23 2001 Subject: [Numpy-discussion] Re: my Numpy statements are slower than indexed formulations in some cases References: <3C1D0BF4.23E89F0@pythonemproject.com> Message-ID: <3C236208.7545F760@pythonemproject.com> Rob wrote: > > I have a number of these routines in some EM code. I've tried to > Numpyize them, but end up with code that runs even slower. > > Here is the old indexed routine: > > ----------------- > for JJ in range(0,TotExtMetalEdgeNum): > > McVector+=Ccc[0:TotExtMetalEdgeNum,JJ] * VcVector[JJ] > ---------------- > > Here is the Numpy version: > > --------------------- > McVector= add.reduce(transpose(Ccc[...] * VcVector[...])) > --------------------- > > I wonder if there is another faster way to do this? Thanks, Rob. > > -- I did speed things up just a tiny bit by using: add.reduce(Ccc*VcVector,1) instead of add.reduce(transpose(Ccc*VcVector)). 
But I'm still running way slower than an indexed array scheme. Rob. > The Numeric Python EM Project > > www.pythonemproject.com -- The Numeric Python EM Project www.pythonemproject.com From rob at pythonemproject.com Sun Dec 23 11:21:31 2001 From: rob at pythonemproject.com (Rob) Date: Sun Dec 23 11:21:31 2001 Subject: [Numpy-discussion] Re: my Numpy statements are slower than indexed formulations in some cases References: <3C1D0BF4.23E89F0@pythonemproject.com> <3C236208.7545F760@pythonemproject.com> Message-ID: <3C24B004.4D976DCD@pythonemproject.com> Problem solved. I had one last for loop in there. It's odd that as I gradually got rid of the for loops, the speed went down, but when I got rid of the last one, bingo! That routine now runs at 1/10 of its original time. Rob. Rob wrote: > > Rob wrote: > > > > I have a number of these routines in some EM code. I've tried to > > Numpyize them, but end up with code that runs even slower. > > > > Here is the old indexed routine: > > > > ----------------- > > for JJ in range(0,TotExtMetalEdgeNum): > > > > McVector+=Ccc[0:TotExtMetalEdgeNum,JJ] * VcVector[JJ] > > ---------------- > > > > Here is the Numpy version: > > > > --------------------- > > McVector= add.reduce(transpose(Ccc[...] * VcVector[...])) > > --------------------- > > > > I wonder if there is another faster way to do this? Thanks, Rob. > > > > -- > > I did speed things up just a tiny bit by using: > > add.reduce(Ccc*VcVector,1) instead of > add.reduce(transpose(Ccc*VcVector)). > > But I'm still running way slower than an indexed array scheme. Rob. 
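[Editorially, the two formulations in this thread compute the same matrix-vector product. A pure-Python sketch of that equivalence, with tiny illustrative data rather than the real EM arrays (Numeric's add.reduce(Ccc*VcVector, 1) multiplies elementwise and then sums along each row):]

```python
# Small stand-ins for the thread's arrays: Ccc is (n x n), VcVector length n.
Ccc = [[1.0, 2.0, 3.0],
       [4.0, 5.0, 6.0],
       [7.0, 8.0, 9.0]]
VcVector = [1.0, 0.5, 2.0]
n = len(VcVector)

# Indexed form: accumulate column JJ of Ccc scaled by VcVector[JJ].
McVector_loop = [0.0] * n
for JJ in range(n):
    for i in range(n):
        McVector_loop[i] += Ccc[i][JJ] * VcVector[JJ]

# Vectorized form: elementwise product, then a sum along each row --
# what add.reduce(Ccc * VcVector, 1) does in one shot in Numeric.
McVector_vec = [sum(row[j] * VcVector[j] for j in range(n)) for row in Ccc]

assert McVector_loop == McVector_vec  # same result either way
```

[The axis-1 reduction also avoids the transpose entirely, which is consistent with add.reduce(Ccc*VcVector, 1) being slightly faster than the transpose version.]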
> > The Numeric Python EM Project > > > > www.pythonemproject.com > > -- > The Numeric Python EM Project > > www.pythonemproject.com > > _______________________________________________ > Numpy-discussion mailing list > Numpy-discussion at lists.sourceforge.net > https://lists.sourceforge.net/lists/listinfo/numpy-discussion -- The Numeric Python EM Project www.pythonemproject.com From rob at pythonemproject.com Mon Dec 24 09:40:02 2001 From: rob at pythonemproject.com (Rob) Date: Mon Dec 24 09:40:02 2001 Subject: [Numpy-discussion] trouble integrating Numpy arrays into a class statement Message-ID: <3C2767CD.3E1E4134@pythonemproject.com> I have this class: class Cond: x1=0 y1=0 z1=0 x2=0 y2=0 z2=0 DelX=0 DelY=0 DelZ=0 What I want is for Cond to be a Numpy array of 20 values, each with the above members. Thus I could say in a statement: wire=Cond a= wire[10].x1 b= wire[5].z2 etc... How can I go about this? Thanks, Rob. and Merry Christmas!!!!!!! ps. this is for an FEM-MOM grid generator -- The Numeric Python EM Project www.pythonemproject.com From paul at pfdubois.com Mon Dec 24 10:35:02 2001 From: paul at pfdubois.com (Paul F. Dubois) Date: Mon Dec 24 10:35:02 2001 Subject: [Numpy-discussion] trouble integrating Numpy arrays into a class statement In-Reply-To: <3C2767CD.3E1E4134@pythonemproject.com> Message-ID: <000501c18ca9$476445c0$0c01a8c0@freedom> Without asking why in the world you would want to do this: class Cond: x1=0 y1=0 z1=0 x2=0 y2=0 z2=0 DelX=0 DelY=0 DelZ=0 wire = map(lambda dummy: Cond(), range(20)) print wire[10].x1 This makes a list. If you really INSIST on a Numeric array, add import Numeric wire = Numeric.array(wire) But this will be an array of type O. If your real concern is that the size will be huge, you might want to make arrays out of each member instead. You have your choice of making the syntax wire.x1[10] or wire[10].x1, but the latter requires additional technique. Also I doubt that you mean what you said. 
If each instance is to have its own values, and the ones above are just the initial values, it is much more proper to do: class Cond: def __init__ (self, **kw): d = { 'x1':0, ... put them all here } d.update(kw) for k, v in d.items(): setattr(self, k, v) That allows you to make instances all these ways: Cond(), Cond(x1=4), Cond(x1=1,y2=1), Cond(x1=0, y1=0, z1=0, x2=0, y2=0, z2=0, DelX=0, DelY=0, DelZ=0) -----Original Message----- From: numpy-discussion-admin at lists.sourceforge.net [mailto:numpy-discussion-admin at lists.sourceforge.net] On Behalf Of Rob Sent: Monday, December 24, 2001 9:37 AM To: numpy-discussion at lists.sourceforge.net Subject: [Numpy-discussion] trouble integrating Numpy arrays into a class statement I have this class: class Cond: x1=0 y1=0 z1=0 x2=0 y2=0 z2=0 DelX=0 DelY=0 DelZ=0 What I want is for Cond to be a Numpy array of 20 values, each with the above members. Thus I could say in a statement: wire=Cond a= wire[10].x1 b= wire[5].z2 etc... How can I go about this? Thanks, Rob. and Merry Christmas!!!!!!! ps. this is for an FEM-MOM grid generator -- The Numeric Python EM Project www.pythonemproject.com _______________________________________________ Numpy-discussion mailing list Numpy-discussion at lists.sourceforge.net https://lists.sourceforge.net/lists/listinfo/numpy-discussion From paul at pfdubois.com Thu Dec 27 11:21:03 2001 From: paul at pfdubois.com (Paul F. Dubois) Date: Thu Dec 27 11:21:03 2001 Subject: [Numpy-discussion] Numeric 21.0a1 in CVS Message-ID: <000001c18f0b$34ac3dc0$0c01a8c0@freedom> Version 21.0a1 -- in CVS only Please help me test the following. I have tested on Windows / Python 2.2. Implement true division operations for Python 2.2 (Bruce Sherwood, our hero). New functions in Numeric; they work on any sequence a that can be converted to a Numeric array. 
def rank (a): "Get the rank of a (the number of dimensions, not a matrix rank)" def shape (a): "Get the shape of a" def size (a, axis=None): "Get the number of elements in a, or along a certain axis." def average (a, axis=0, weights=None, returned = 0): """average(a, axis=0, weights=None) Computes the average along the indicated axis. Inputs can be integer or floating types. Return type is floating but depends on spacesaver flags. If weights are given, the result is sum(a*weights)/sum(weights). Weights, if given, must have a's shape or be 1-d with length equal to the size of a along the given axis. If returned, return a tuple: the result and the sum of the weights (or count of values). If axis is None, average over the entire array. Raises ZeroDivisionError if appropriate when the result is scalar. """
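[The weighted-average rule documented above can be illustrated with a minimal pure-Python sketch of the flat (axis=None) case. average_flat is a hypothetical helper written here for illustration, not the actual Numeric implementation:]

```python
def average_flat(a, weights=None, returned=False):
    # Mirrors the documented semantics: sum(a*weights)/sum(weights),
    # with the 'returned' flag adding the sum of the weights (or the
    # count of values, when no weights are given) to the result.
    if weights is None:
        weights = [1.0] * len(a)
    wsum = float(sum(weights))
    if wsum == 0.0:
        raise ZeroDivisionError("sum of weights is zero")
    avg = sum(x * w for x, w in zip(a, weights)) / wsum
    return (avg, wsum) if returned else avg

print(average_flat([1, 2, 3], weights=[1, 1, 2]))  # (1 + 2 + 6) / 4 -> 2.25
print(average_flat([1, 2, 3], returned=True))      # average plus count -> (2.0, 3.0)
```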