From rob at hooft.net  Tue Jul  4 03:53:10 2000
From: rob at hooft.net (Rob W. W. Hooft)
Date: Tue, 4 Jul 2000 09:53:10 +0200 (CEST)
Subject: [Numpy-discussion] Slight inconsistency between 'f' shape () array and python scalar
Message-ID: <14689.38886.964153.150197@temoleh.chem.uu.nl>

Hm. I found a bug in one of my programs that was due to the difference in
behavior between an 'f' shape () array and a true python scalar:

  import Numeric
  a=Numeric.zeros((50,50),'f')
  b=[]
  for i in range(50):
      d=a[i,i]
      b.append(d)
  nb=Numeric.array(b)
  print nb.shape    # Expect (50,) but get (50,1)

BTW: Why does "a=Numeric.zeros((50,),'f'); d=a[i]" return a python scalar,
and the above script a shape () array?

Rob

--
=====  rob at hooft.net           http://www.hooft.net/people/rob/  =====
=====  R&D, Nonius BV, Delft   http://www.nonius.nl/             =====
=====  PGPid 0xFA19277D  ==========================  Use Linux!  =====

From dubois1 at llnl.gov  Thu Jul  6 09:59:17 2000
From: dubois1 at llnl.gov (Paul F. Dubois)
Date: Thu, 6 Jul 2000 06:59:17 -0700
Subject: [Numpy-discussion] New setup.py in numpy CVS
In-Reply-To: <20000704103029.A1918@beelzebub>
Message-ID:

I checked in a new version of setup.py for Numerical that corresponds to
Distutils-0.9. This is a modification of the setup_numpy.py that Greg has
in Distutils.

This version REQUIRES the new Distutils. If you install Distutils into
1.6a2, remember (which I didn't at first) to go delete the distutils
directory in the Python library directory. If you don't, you will get a
message while running setup informing you that your Distutils must be
updated.

This CVS version also separates out LAPACK/BLAS, so that using the "lite"
version of the libraries supplied with numpy is now optional. A small
attempt is made to find the library, or the user can edit setup.py to set
the locations if that fails.

I have not tested this on Windows and I would bet it needs help; we
probably won't cut a new Numerical release until this is resolved and
Python 2.0 is out, so that the needed version of Distutils is standard.

Suggestions for improvements would be most welcome.

Paul

From dubois1 at llnl.gov  Thu Jul  6 13:01:41 2000
From: dubois1 at llnl.gov (Paul F. Dubois)
Date: Thu, 6 Jul 2000 10:01:41 -0700
Subject: [Numpy-discussion] Re distutils and numpy
In-Reply-To: <14681.25131.166235.152210@anthem.concentric.net>
References: <20000627205843.A1607@beelzebub> <14681.25131.166235.152210@anthem.concentric.net>
Message-ID: <00070610043200.02062@almanac>

I have made even more changes to Numeric this morning, separating off FFT
and MA as separate packages and adding the package RNG.

I found an error in the previous setup.py; it was installing headers in
include/python1.6/Numerical instead of Numeric. This apparently gets fixed
if you change the name of the package (which I otherwise thought didn't do
anything.)

From gward at python.net  Fri Jul  7 19:13:44 2000
From: gward at python.net (Greg Ward)
Date: Fri, 7 Jul 2000 19:13:44 -0400
Subject: [Numpy-discussion] Re: Re distutils and numpy
In-Reply-To: <00070610043200.02062@almanac>; from dubois1@llnl.gov on Thu, Jul 06, 2000 at 10:01:41AM -0700
References: <20000627205843.A1607@beelzebub> <14681.25131.166235.152210@anthem.concentric.net> <00070610043200.02062@almanac>
Message-ID: <20000707191344.A1249@beelzebub>

On 06 July 2000, Paul F. Dubois said:
> I found an error in the previous setup.py; it was installing headers
> in include/python1.6/Numerical instead of Numeric.
> This apparently gets fixed if you change the name of the package
> (which I otherwise thought didn't do anything.)

That's a feature. If Joe Blow releases an extension that requires the
headers from NumPy, he should just have to specify, "I require the headers
for ___" and have Distutils take care of the -I paths for him. (It doesn't
do this currently, but it could and should!)

Don't tell me I'm the only one who's confused about whether it's "NumPy",
"Numerical Python", or "Numeric Python", and whether the above blank should
be filled in with "Numeric" or "Numerical".

BTW, the distribution name is also used, obviously, to create source and
built distributions. So naming the header file directory after it is not
without precedent. (It does have the subtle side-effect that distribution
names should be valid as part of the filename in C #include statements. I
have no idea what restrictions that imposes... but it's probably just
common sense to stick to [a-zA-Z0-9_-] in distribution names and
filenames.)

Greg
--
Greg Ward - Unix geek                                  gward at python.net
http://starship.python.net/~gward/
"Question authority!"  "Oh yeah?  Says who?"

From pete at shinners.org  Sat Jul  8 03:35:33 2000
From: pete at shinners.org (Pete Shinners)
Date: Sat, 8 Jul 2000 00:35:33 -0700
Subject: [Numpy-discussion] savebit questions and errors
Message-ID: <003601bfe8af$1a094840$0200a8c0@home>

i'm developing with some Numeric stuff and don't have a solid grasp on
what the 'SAVEBIT' stuff is. it's not mentioned in the docs at all.

what's also strange is that the numpy testing "test_all.py" fails in the
final sections when testing the savebit stuff.

can someone give a quick description of what this flag can be used for?

also... i assume someone already knows about the error in the testing? i
ran it on both a MIPS5000 IRIX system and an x86 win98 system and it
errored out consistently on the test at line 506.

From pete at shinners.org  Sat Jul  8 03:55:45 2000
From: pete at shinners.org (Pete Shinners)
Date: Sat, 8 Jul 2000 00:55:45 -0700
Subject: [Numpy-discussion] Optimizing Numpy
Message-ID: <007101bfe8b1$ecabca00$0200a8c0@home>

i've been trying my hand at getting more speed out of numpy, and i've
birthed a little fruit from my efforts. my area of use is specifically
with 2D arrays with image info.

anyways, i've attached an 'arrayobject.c' file that is from the 15.3
release and optimized.

in my test case the code ran at about twice the speed of the original 15.3
release. i went and tested out other uses and found a pretty consistent
20% speedup. (for example, i cranked the mandelbrot demo resolution to
320x200 and removed the 'print' command, and it went from a runtime of 5.5
to 4.5)

i'm not sure how people 'officially' make contributions to the code, but i
hope this is easy enough to merge. i also hope this is accepted (or at
least reviewed) for inclusion in the next release.

optimizing further...
i also plan on a few more optimizations. the least is going to be a 'C'
version of 'arrayrange' and probably 'ones'. the current arrayrange is
pretty slow (slower than the standard python 'range' in all my tests).
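[Editor's note: a rough harness for reproducing that range/arrayrange
comparison -- the sizes and style are illustrative only, not Pete's
actual benchmark:

    import time
    import Numeric

    def bench(fn, repeats=10):
        # average wall-clock seconds per call
        t = time.time()
        for i in range(repeats):
            fn()
        return (time.time() - t) / repeats

    print "range:     ", bench(lambda: range(100000))
    print "arrayrange:", bench(lambda: Numeric.arrayrange(100000))

Per Pete's observation above, the plain python range() could come out
ahead of arrayrange's allocate-and-fill path in this era.]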
the other optimization is a bit more drastic, and i'd like to hear
feedback from more 'numpy experts' before making the change. in the file
'arraytypes.c', with all the arrays of conversion functions, i've found
that the conversion routines are a little too 'elaborate'. these routines
are only ever called from one line, and the two "increment/skip" arguments
are always hardcoded to one.

there are two possible roads to speeding up the conversion of array types.
1-- optimize all the conversion routines so they aren't so generic. this
should be a pretty easy fix and should offer noticeable speed.
2-- do a better job of converting arrays. instead of creating a whole new
array of the new type and simply copying that, create a conversion method
that simply converts the data directly into the destination array. this
would mean using all those conversion routines to their full power. this
would offer more speed than the first option, but is also a lot more work.

well, what do people think? my initial thought is to make a quick python
script to take 'arraytypes.c' and convert all the functions to be a
quicker version.

numpy is amazing, and i'm glad to get a chance to better it!

-------------- next part --------------
A non-text attachment was scrubbed...
Name: arrayobject.zip
Type: application/x-zip-compressed
Size: 16499 bytes
Desc: not available
URL:

From pauldubois at home.com  Sat Jul  8 12:35:36 2000
From: pauldubois at home.com (Paul F. Dubois)
Date: Sat, 8 Jul 2000 09:35:36 -0700
Subject: [Numpy-discussion] savebit questions and errors
In-Reply-To: <003601bfe8af$1a094840$0200a8c0@home>
Message-ID:

Setting the "savespace" property encourages Numeric to keep the results of
calculations at smaller precisions.

The error in the test has been fixed by removing the test for now. It
appears the result of the test is platform-dependent. There is a list of
fixed and open bugs on the project page
http://sourceforge.net/projects/numpy.

> -----Original Message-----
> From: numpy-discussion-admin at lists.sourceforge.net
> [mailto:numpy-discussion-admin at lists.sourceforge.net] On Behalf Of
> Pete Shinners
> Sent: Saturday, July 08, 2000 12:36 AM
> To: numpy-discussion at sourceforge.net
> Subject: [Numpy-discussion] savebit questions and errors
>
> i'm developing with some Numeric stuff and don't have
> a solid grasp on what the 'SAVEBIT' stuff is. it's not
> mentioned in the docs at all.
>
> what's also strange is that the numpy testing "test_all.py"
> fails in the final sections when testing the savebit
> stuff.
>
> can someone give a quick description of what this flag
> can be used for?
>
> also... i assume someone already knows about the error in
> the testing? i ran it on both a MIPS5000 IRIX system and
> an x86 win98 system and it errored out consistently on
> the test at line 506.
>
> _______________________________________________
> Numpy-discussion mailing list
> Numpy-discussion at lists.sourceforge.net
> http://lists.sourceforge.net/mailman/listinfo/numpy-discussion
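[Editor's note: that one sentence is essentially the whole definition of
savespace. As a hedged sketch of what it looks like in practice -- the
exact keyword and method spellings varied between Numeric releases, so
treat the call below as an assumption:

    import Numeric

    # 'savespace=1' marks the array so arithmetic will not silently
    # promote it from 'f' (Float32) up to 'd' (Float64).
    a = Numeric.array([1.0, 2.0, 3.0], 'f', savespace=1)
    b = a * 2.0
    print b.typecode()   # expected to stay 'f' rather than become 'd'
]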
From pauldubois at home.com  Sat Jul  8 12:38:49 2000
From: pauldubois at home.com (Paul F. Dubois)
Date: Sat, 8 Jul 2000 09:38:49 -0700
Subject: [Numpy-discussion] Optimizing Numpy
In-Reply-To: <007101bfe8b1$ecabca00$0200a8c0@home>
Message-ID:

The project page has a patch manager for contributions. Please note that
Travis is in the middle of a substantial reimplementation, so I think
nobody would want to do a lot of optimizing right now.

> -----Original Message-----
> From: numpy-discussion-admin at lists.sourceforge.net
> [mailto:numpy-discussion-admin at lists.sourceforge.net] On Behalf Of
> Pete Shinners
> Sent: Saturday, July 08, 2000 12:56 AM
> To: numpy-discussion at sourceforge.net
> Subject: [Numpy-discussion] Optimizing Numpy
>
> i've been trying my hand at getting more speed out of
> numpy, and i've birthed a little fruit from my efforts. my area
> of use is specifically with 2D arrays with image info.
>
> anyways, i've attached an 'arrayobject.c' file that is
> from the 15.3 release and optimized.
>
> in my test case the code ran at about twice the speed of
> the original 15.3 release. i went and tested out other
> uses and found a pretty consistent 20% speedup.
> (for example, i cranked the mandelbrot demo resolution
> to 320x200 and removed the 'print' command, and it
> went from a runtime of 5.5 to 4.5)
>
> i'm not sure how people 'officially' make contributions
> to the code, but i hope this is easy enough to merge. i
> also hope this is accepted (or at least reviewed) for
> inclusion in the next release.
>
> optimizing further...
> i also plan on a few more optimizations. the least is going
> to be a 'C' version of 'arrayrange' and probably 'ones'. the
> current arrayrange is pretty slow (slower than the standard
> python 'range' in all my tests).
> the other optimization is a bit more drastic, and i'd like
> to hear feedback from more 'numpy experts' before making the
> change. in the file 'arraytypes.c', with all the arrays of conversion
> functions, i've found that the conversion routines are a little
> too 'elaborate'. these routines are only ever called from one line,
> and the two "increment/skip" arguments are always hardcoded to one.
> there are two possible roads to speeding up the conversion of array
> types.
> 1-- optimize all the conversion routines so they aren't so generic.
> this should be a pretty easy fix and should offer noticeable speed.
> 2-- do a better job of converting arrays. instead of creating a
> whole new array of the new type and simply copying that, create
> a conversion method that simply converts the data directly into
> the destination array. this would mean using all those conversion
> routines to their full power. this would offer more speed than
> the first option, but is also a lot more work
>
> well, what do people think? my initial thought is to make a
> quick python script to take 'arraytypes.c' and convert all the
> functions to be a quicker version.
>
> numpy is amazing, and i'm glad to get a chance to better it!
>
>

From gward at python.net  Sat Jul  8 21:23:33 2000
From: gward at python.net (Greg Ward)
Date: Sat, 8 Jul 2000 21:23:33 -0400
Subject: [Numpy-discussion] Re: [Distutils] New setup.py in numpy CVS
In-Reply-To: ; from dubois1@llnl.gov on Thu, Jul 06, 2000 at 06:59:17AM -0700
References: <20000704103029.A1918@beelzebub>
Message-ID: <20000708212333.B4248@beelzebub>

On 06 July 2000, Paul F. Dubois said:
> This version REQUIRES the new Distutils. If you install Distutils into
> 1.6a2, remember (which I didn't at first) to go delete the distutils
> directory in the Python library directory. If you don't, you will get a
> message while running setup informing you that your Distutils must be
> updated.

The "Official Recommendation" is just to rename that directory away: that
way you'll have (eg.) lib/python1.6/distutils-orig and
lib/python1.6/site-packages/distutils, and the site-packages version will
take precedence without having clobbered the standard library too badly.
See the Distutils README.txt.

The alternative was support in the Distutils for replacing/upgrading bits
of the standard library; Guido was, shall we say, non-receptive to that
idea. Oh well.

> This CVS version also separates out LAPACK/BLAS, so that using the "lite"
> version of the libraries supplied with numpy is now optional. A small
> attempt is made to find the library, or the user can edit setup.py to
> set the locations if that fails.
Oh good, does that mean it'll take less than 20 minutes to compile NumPy
on my pokey old 100 MHz Pentium?  ;-)

Greg
--
Greg Ward - Unix bigot                                 gward at python.net
http://starship.python.net/~gward/
Software patents SUCK -- boycott amazon.com!

From turner at blueskystudios.com  Mon Jul 10 10:27:30 2000
From: turner at blueskystudios.com (John A. Turner)
Date: Mon, 10 Jul 2000 10:27:30 -0400 (EDT)
Subject: [Numpy-discussion] savebit questions and errors
In-Reply-To:
References: <003601bfe8af$1a094840$0200a8c0@home>
Message-ID: <14697.56658.361792.192673@denmark.blueskystudios.com>

>>>>> "PFD" == Paul F Dubois :

PFD> The error in the test has been fixed by removing the test for now.

That's going in my quotes file - I realize it's legit in this case, but
you have to admit that, esp. out of context, it's pretty funny...

--
John A. Turner, Senior Research Associate
Blue Sky Studios, One South Rd, Harrison, NY 10528
http://www.blueskystudios.com/    (914) 825-8319

From jonathan.gilligan at vanderbilt.edu  Tue Jul 11 11:27:34 2000
From: jonathan.gilligan at vanderbilt.edu (Jonathan M. Gilligan)
Date: Tue, 11 Jul 2000 10:27:34 -0500
Subject: [Numpy-discussion] Current versions not on CVS (distutils and NumPy)
Message-ID: <4.3.2.7.2.20000711101829.055a71a0@g.mail.vanderbilt.edu>

I tried updating my distutils and numpy from their CVS sites last night to
try building, and found the following problems.

Distutils from :pserver:distutilscvs at cvs.python.org:/projects/cvsroot
has a most recent tag of Distutils-0_8_2. Where can I get CVS access to
Distutils 0.9, or is this a tar-only distribution?

On a related note, the CVS of Numerical Python from SourceForge has a most
recent tag of V15_2. If I get the most recent (untagged) version of NumPy,
there is no lapack_lite_library directory, so "setup.py install" fails.
Does this mean that Numerical Python 15.3 has not yet been checked into
the CVS repository?

It's a bit confusing when the CVS repositories are not up to date with the
most recent releases. Could someone clarify for me, please?

Jonathan Gilligan

From milgram at cgpp.com  Sun Jul 16 15:36:53 2000
From: milgram at cgpp.com (J. Milgram)
Date: Sun, 16 Jul 2000 15:36:53 -0400
Subject: [Numpy-discussion] Polynomial factorization?
Message-ID: <200007161936.PAA19023@cgpp.com>

Hi - I need to do prime factorization of some polynomials with integer
coefficients. It would seem the Berlekamp algorithm would be what I want,
but I also admit to almost total ignorance on the subject. I'm having
trouble even understanding how the algorithm works, so I'm probably not
the best person to be implementing it. But I may give it a try.

Is anyone already working on a Python implementation of this? Or any tips
on how to proceed? Pointers to other freeware solutions? I prefer
something that can be re-written in or modularized into Python, but even
if not, that's no big deal. The alternative is to purchase one of the
symbolic algebra packages like Maple or Mathematica etc.

thanks
Judah

Judah Milgram    milgram at cgpp.com
P.O. Box 8376, Langley Park, MD 20787
(301) 422-4626 (-3047 fax)
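[Editor's note: Berlekamp's algorithm factors polynomials over finite
fields and is well beyond a list post, but as an illustration of the
easiest sliver of the problem -- splitting off linear factors at integer
roots via the rational root theorem -- here is a self-contained sketch;
names and scope are illustrative only:

    def poly_eval(coeffs, x):
        # Horner evaluation; coeffs run from highest degree to constant.
        v = 0
        for c in coeffs:
            v = v * x + c
        return v

    def synthetic_div(coeffs, r):
        # Divide by (x - r); caller guarantees r is a root, so the
        # remainder (the last entry) is zero and gets dropped.
        out = [coeffs[0]]
        for c in coeffs[1:]:
            out.append(c + r * out[-1])
        return out[:-1]

    def integer_root_factors(coeffs):
        # Peel off every linear factor (x - r) with an integer root r.
        roots = []
        while len(coeffs) > 1 and coeffs[-1] == 0:  # x itself divides
            roots.append(0)
            coeffs = coeffs[:-1]
        found = 1
        while found and len(coeffs) > 1:
            found = 0
            a0 = abs(coeffs[-1])    # integer roots must divide this
            for d in range(1, a0 + 1):
                if a0 % d != 0:
                    continue
                for r in (d, -d):
                    if poly_eval(coeffs, r) == 0:
                        roots.append(r)
                        coeffs = synthetic_div(coeffs, r)
                        found = 1
                        break
                if found:
                    break
        return roots, coeffs   # roots, plus the unfactored remainder

    # x^3 - 2x^2 - 5x + 6 = (x - 1)(x + 2)(x - 3)
    print integer_root_factors([1, -2, -5, 6])   # ([1, -2, 3], [1])

Anything irreducible of degree two or more is left in the returned
remainder; a real factorizer (Berlekamp/Zassenhaus, as in the packages
recommended below) is needed for those.]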
From syrus at shiba.ucsd.edu  Mon Jul 17 22:15:27 2000
From: syrus at shiba.ucsd.edu (Syrus Nemat-Nasser)
Date: Mon, 17 Jul 2000 19:15:27 -0700 (PDT)
Subject: [Numpy-discussion] Re: Polynomial factorization?
Message-ID:

Hello Judah,

I accidentally deleted your original message, but found it again in the
Numerical list archives. I believe that there are free software packages
that would do what you need. For example, look at Maxima, which is under
the GPL now (a free'd version of Macsyma). If you are non-commercial, you
can get a free version of MuPad. Also, there are number theory packages
available that can probably do polynomial factorization.

I suggest that you look at the Scientific Applications for Linux page:

  http://SAL.KachinaTech.COM/index.shtml

Note that a lot of the software there can also be run on other platforms,
including both UNIX and Windows. You can even search for "polynomial
factorization" and get a number of hits.

Cheers.

Syrus.

--
-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-
Syrus C Nemat-Nasser, PhD  | Center of Excellence for Advanced Materials
UCSD Department of Physics | UCSD Department of Mechanical
                           |   and Aerospace Engineering

From milgram at cgpp.com  Tue Jul 18 12:33:21 2000
From: milgram at cgpp.com (J. Milgram)
Date: Tue, 18 Jul 2000 12:33:21 -0400
Subject: [Numpy-discussion] Re: Polynomial factorization?
In-Reply-To: Your message of "Mon, 17 Jul 2000 19:15:27 PDT."
Message-ID: <200007181633.MAA25974@cgpp.com>

Hi Syrus,

Wow, a GPL'd Macsyma! Very cool, thanks for the pointer. That does exactly
what I need. Thanks too for the pointer to the Sci Apps for Linux page.

regards - Judah

Judah Milgram    milgram at cgpp.com
College Park Press    http://www.cgpp.com
P.O. Box 143, College Park MD, USA 20741
+001 (301) 422-4626 (422-3047 fax)

From warmerda at home.com  Wed Jul 19 18:50:36 2000
From: warmerda at home.com (Frank Warmerdam)
Date: Wed, 19 Jul 2000 17:50:36 -0500
Subject: [Numpy-discussion] Numpy / OpenEV / GDAL Integration
Message-ID: <397630BC.CACF72A6@home.com>

Folks,

I just wanted to let you all know that I have made a first hack at
integrating Numpy with GDAL (my geospatial raster access library), with
the purpose of integrating with OpenEV (a geospatial data viewing
application). The work is still relatively preliminary, but might be of
interest to some. OpenEV and GDAL are raster orientated, and will only be
of interest for those working with 2D matrices and an interest in treating
them as rasters.

A few notes of interest:

 o OpenEV and GDAL support a variety of data types. Unlike most raster
   packages they can support UnsignedInt8, Int16, Int32, Float32, Float64,
   ComplexInt16, ComplexInt32, ComplexFloat32 and ComplexFloat64 data
   types.

 o OpenEV will scale for display, but will also show real underlying data
   values. It also includes special display modes for complex rasters to
   show phase, magnitude, phase & magnitude, real and imaginary views of
   complex layers.

 o GDAL can be used to save real and complex data to TIFF (and a few
   other less well known formats), as well as loading various raster
   formats common in remote sensing and GIS.

The following is a minimal example of using GDAL and numpy together:

  from Numeric import *
  import gdalnumeric

  x = gdalnumeric.LoadFile( "/u/data/png/guy.png" )
  print x.shape
  x = 255.0 - x
  gdalnumeric.SaveArray( x, "out.tif", "GTiff" )

More information is available at:

  http://www.remotesensing.org/gdal
  http://openev.sourceforge.net/

Finally, a thanks to those who have developed and maintained Numeric
Python. It is a great package, and I look forward to using it more.

Best regards,
---------------------------------------+--------------------------------------
I set the clouds in motion - turn up   | Frank Warmerdam, warmerda at home.com
light and sound - activate the windows | http://members.home.com/warmerda
and watch the world go round - Rush    | Geospatial Programmer for Rent
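[Editor's note: a small sketch built only on the two gdalnumeric calls
shown in the message above (LoadFile / SaveArray) plus plain Numeric --
the file names are illustrative -- stretching a band to 0..255 before
saving it as an 8-bit TIFF:

    import Numeric
    import gdalnumeric

    x = gdalnumeric.LoadFile("input.tif").astype(Numeric.Float32)
    flat = Numeric.ravel(x)
    lo = Numeric.minimum.reduce(flat)
    hi = Numeric.maximum.reduce(flat)
    # assumes hi > lo, i.e. the band is not constant
    scaled = (x - lo) * (255.0 / (hi - lo))
    gdalnumeric.SaveArray(scaled.astype(Numeric.UnsignedInt8),
                          "stretched.tif", "GTiff")
]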
From pete at shinners.org  Wed Jul 26 01:46:33 2000
From: pete at shinners.org (Pete Shinners)
Date: Tue, 25 Jul 2000 22:46:33 -0700
Subject: [Numpy-discussion] Re: [Pysdl-devel] Further development...
References: <00072514054503.00439@azriel>
Message-ID: <004101bff6c4$db70f820$0200a8c0@home>

> Ideas I've tossed around include creating an interface similar to the
> one surface objects have for audio buffers, adding adaptors for PIL and
> PST, creating an overly complex system for 'compiling' surface
> operations for speed, and some other things.
> I don't know. So I'm asking. What is it you want?

interfacing with PIL should be pretty easy with the "fromstring" and
"tostring" style functions that PIL uses. Numpy interfaces with PIL in
this manner. (in fact with my unofficial numpy extension, you can already
use numpy to get the images into PIL (or so i assume, have yet to test
something like that :] ))

i also like the idea of easier integration with "C" extensions, but this
will prove a bigger challenge than breaking up the source, i'd imagine.
(although it will likely require it, since currently all the global
objects are created in the header file (DOH))

well, you asked for it... here's a sampling of things i'd like to see for
pysdl
--------------------------------------------------------------------

first is throwing exceptions instead of returning -value error codes. now
that i'm into python, i love the flexibility of exceptions. instead of
checking every function call for its return code, you just call all the
functions and put the error handling at the end.

a wrapper for some line/polygon filling library. i assume SDL has
something like this already, but every now and then i keep thinking i'll
want this. i'd especially like this if it was done on a library tuned for
SDL, so i can get fast-as-possible filled polygons.

break the sourcecode into smaller files. :] actually i really do want this
one. i would love to hear a discussion of the best possible ways to get
this done. i've done this in a reasonably clean and efficient way. it
isn't thoroughly planned out, but i was pleased with how it turned out. if
it can be used as a starting point for discussion it will have served its
purpose!

numeric python implementation. i've got a crude sample of this going. (the
other) peter and i were able to attain some amazing speed gains. it's not
for everyone (and i think well discussed :] ) but it beats the pants off
trying to drum up a C extension to do basic image operations. anyways,
those speedups we saw were on pretty basic operations. things like a
640x480 radial gradient went from about 20 seconds to under 2 seconds (and
that was still with python doing the color conversions, and a generic
non-optimized numpy-2-sdl transfer). i also have my basic "fire" demo
running under numpy that would be foolhardy without numpy.

cleanup of the "event" handling routines. currently i think it's too much
"C" and too little "python". i've thought out a couple simple changes that
make it much smaller and more graceful than event handling with C-SDL.

some higher-level classes written in python. simple base classes for
things like sprites and eventhandlers (i'm enjoying mine!). the reason
these should be in python to begin with is so they can be easily inherited
and extended.
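[Editor's note: a sketch of the kind of inheritable base class being
described -- every name here is hypothetical, not a real pysdl API:

    class Sprite:
        def __init__(self, image, pos=(0, 0)):
            self.image = image
            self.pos = pos
        def update(self, dt):
            pass                                # subclasses override
        def draw(self, screen):
            screen.blit(self.image, self.pos)   # 'blit' is illustrative

    class Bouncer(Sprite):
        def __init__(self, image, pos, vel):
            Sprite.__init__(self, image, pos)
            self.vel = vel
        def update(self, dt):
            x, y = self.pos
            vx, vy = self.vel
            if not 0 <= y + vy * dt <= 480:     # bounce off top/bottom
                vy = -vy
            self.pos = (x + vx * dt, y + vy * dt)
            self.vel = (vx, vy)

Because the base class lives in python, a game overrides update() without
touching any C.]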
i still haven't got to using the sound routines in pysdl, but it seems
like a much cleaner implementation could be made from what there is in
C-SDL (and what we've currently duplicated for pysdl). instead of the
current "audio.play(mysound)", something more like "mysound.play()". i
realize there's some issues involved, and i haven't really looked at the
audio interfaces (yet), but from what i currently see it seems a little
backwards. [a sketch of this wrapper idea appears in the editor's note at
the end of this message]

heh, this one just occurred to me, but check with PERL and other SDL
language bindings to make sure we're not missing out on any great ideas.

at least the thought of switching to using distutils. i've scanned through
their docs and examples, and it seems powerful. perhaps best to wait for
the next python release, which includes distutils standard. but i would
love to hear anyone's experience if they've maintained any packages with
this.

this will probably just have to wait until someone out there needs
something like this and just goes ahead and creates it. but to help a
pysdl game rely on 3rd-party extensions written in python, there could be
some prebuilt "rexec" modes that can run python code in a
'gaming-oriented' restricted environment. i dunno, it's just something
that pops into my head once in awhile.

i looked into the SDL_net library to see if it was worth including with
pysdl. after checking it all out i saw nothing that wasn't offered by the
standard python "socket" libraries. i'd say this library is worth
ignoring, until someone can bring forth some facts that i overlooked.

finally, i'll end with a simple one. just a global pysdl function to get
the current screen. when writing my game across multiple modules i'm
always having to pass the screen (and other info) around between all the
modules. there may be other pysdl "state" info like this that is useful to
be able to access from any of the modules in my game (i just haven't found
em yet).

i like peter nicolai's idea for the web plugin. but i fear that SDL is not
the base library of choice for this type of project, since SDL has little
control over the display window. (ie, impossible to "embed" into other
windows)

as for other platforms, i have done limited testing on IRIX, which has
worked great for me. (IRIX being another platform that SDL has recently
supported)

> A side note to Pete: I haven't been ignoring you, I've just been
> trying to decide what I think of it before giving you a response

no trouble. i've decided to just start writing stuff and deal with issues
as they come up. i figure with this approach there are two benefits.
first, i'll have real-world experience to make my "suggestions" a lot more
worthwhile. second, there is a top notch pysdl game out there!
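[Editor's note: the object-style sound interface suggested above, as a
minimal sketch -- 'audio.load' and 'audio.play' stand in for whatever the
real module-level calls are; all names here are hypothetical:

    class Sound:
        def __init__(self, audio, filename):
            self.audio = audio
            self.buffer = audio.load(filename)   # hypothetical loader

        def play(self, loops=0):
            # forward to the module-level call the class wraps
            self.audio.play(self.buffer, loops)

    # usage: mysound = Sound(audio, "boom.wav"); mysound.play()
]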