From nick at nickbower.com Wed Sep 6 04:47:26 2000 From: nick at nickbower.com (nick at nickbower.com) Date: Wed, 06 Sep 2000 08:47:26 GMT Subject: [Numpy-discussion] 2 i/o questions and an announcement Message-ID: <20000906.8472600@cricket.>

2 really quick questions and an announcement of sorts:

1) Is NumPy v2.0 going to address binary file I/O at all? Right now, I use the array module with the "fromfile" method, but this then has to be re-cast to a NumPy array, which seems inefficient (although I've no evidence for that), especially for large 50-100 meg data files.

2) Is there a quicker way to get data from PIL to NumPy than:

data = array(Image.open(...).getdata(), typecode=Int)

This takes an eternity for a decent-sized image (1.5 minutes on a P400 for a 1200x1000 image).

Some of you may already know, but I've taken a first crack at a free clone of IDL (as in RSI's Interactive Data Language, which has similarities to MatLab). Cohesive integration of 2D plotting/contouring (Dislin), 8/24-bit imaging (PIL) and array arithmetic (NumPy) has been implemented using standard IDL syntax - http://nickbower.com/computer/pydl

Cheers, nick.

From jsaenz at wm.lc.ehu.es Wed Sep 6 05:40:10 2000 From: jsaenz at wm.lc.ehu.es (Jon Saenz) Date: Wed, 6 Sep 2000 11:40:10 +0200 (MET DST) Subject: [Numpy-discussion] Help about C, Py_INCREF and throwing Exceptions from C Message-ID:

Hello, all. I am finishing a C module which computes a probability density function estimate using a kernel-based approach for unidimensional and multidimensional PDFs. The idea is: Given a Numpy array expdata, with the experimental data in a d-dimensional space (d=1,2,3), we want to make an estimation of the PDF at grid points 'xpoints' in the same d-dimensional space using a bandwidth h:

Python code:
import KPDF
pdf=KPDF.MPDFEpanechnikov(expdata,xpoints,h)

The module is written in C, and it needs to return a Numpy array shaped (xpoints.shape[0],).
So, internally, the function which creates the array reads:

int dims[1];
PyArrayObject *rarray; /* returned array */
PyArrayObject *xpoints;

dims[0] = xpoints->dimensions[0];
rarray = (PyArrayObject *)PyArray_FromDims(1, dims, PyArray_DOUBLE);
if (rarray == NULL)
    return NULL;
/* More code follows */
return PyArray_Return(rarray);

I am assuming that I DO NOT have to call Py_INCREF() explicitly in my function before returning the array to Python, because it is actually called by PyArray_FromDims(). I would like to know whether it is actually that way. I am also assuming that, in case malloc() is unable to allocate memory inside PyArray_FromDims(), it sets the corresponding exception and my only TO-DO in my function is returning NULL to the caller. I would appreciate replies to these questions by gurus. I have Read The Funny Manuals on extending (regular and numerical) Python, but I think these points are not especially clear; at least, I am doubtful after reading them. Thanks in advance.

Jon Saenz. | Tfno: +34 946012470 Depto. Fisica Aplicada II | Fax: +34 944648500 Facultad de Ciencias. \\ Universidad del Pais Vasco \\ Apdo. 644 \\ 48080 - Bilbao \\ SPAIN

From pauldubois at home.com Thu Sep 7 01:05:36 2000 From: pauldubois at home.com (Paul F. Dubois) Date: Wed, 6 Sep 2000 22:05:36 -0700 Subject: [Numpy-discussion] [ANNOUNCE] Documentation, 1.00b1 masked array Message-ID:

I have put a new version of MA into CVS; it is quite beefed up. Worse, I wrote documentation for it. If you need to deal with missing values, please give it a read. The numpy doc is available from the numpy home page, http://numpy.sourceforge.net, in both HTML and PDF. There is a test routine in subdirectory Packages/MA/Test.

From o.hofmann at Smail.Uni-Koeln.de Thu Sep 7 05:22:57 2000 From: o.hofmann at Smail.Uni-Koeln.de (Oliver Hofmann) Date: Thu, 7 Sep 2000 11:22:57 +0200 Subject: [Numpy-discussion] Row/Column delete Message-ID: <20000907112257.B8034@smail.rrz.uni-koeln.de>

'lo everyone!
Been fiddling around with NumPy lately, trying to use it for some simple clustering algorithms. Those require constant merging of rows and columns. Since the arrays are static in size I've been looking at the FAQ and docs for a function that helps with this, but have been unable to find any.

What I am doing right now is: Set up a new matrix with one less row or column (zeros()), then copy all the rows/columns but the one I'd like to delete to the new matrix. However, this requires setting up a second matrix for each delete action. A combined row/column delete requires two copy actions, making it even slower.

Is there a better/faster way to do this? I.e., parse through the old matrix and copy individual positions instead of rows/columns? Any help would be appreciated!

Oliver Hofmann

-- Oliver Hofmann - University of Cologne - Department of Biochemistry o.hofmann at smail.uni-koeln.de - setar at gmx.de - connla at thewell.com "It's too bad she won't live. But then, who does?"

From absd00t at c1186.ae.ge.com Thu Sep 7 08:50:38 2000 From: absd00t at c1186.ae.ge.com (U-E59264-Osman F Buyukisik) Date: Thu, 7 Sep 2000 08:50:38 -0400 (EDT) Subject: [Numpy-discussion] Python-2.01b Message-ID: <14775.36445.869829.852906@c1186.ae.ge.com>

Hello All, I made the mistake of installing the new 2.0 python! Now when I try to re-build numpy, I get errors. ...
building '_numpy' extension
gcc -g -O2 -fpic -IInclude -I/home/absd00t/local/include/python2.0 -c Src/_numpymodule.c -o build/temp.hp-uxB/Src/_numpymodule.o -I/usr/include/Motif1.1 -I/usr/include/X11R4
In file included from Src/_numpymodule.c:4:
Include/arrayobject.h:16: parse error before `Py_FPROTO'
Include/arrayobject.h:18: parse error before `Py_FPROTO'
Include/arrayobject.h:19: parse error before `Py_FPROTO'
Include/arrayobject.h:22: parse error before `PyArray_VectorUnaryFunc'
Include/arrayobject.h:22: warning: no semicolon at end of struct or union
Include/arrayobject.h:24: warning: data definition has no type or storage class
Include/arrayobject.h:25: parse error before `*'
....

Anyone working on porting to version 2.0 yet?? Or back to version 1.5 or 1.6? TIA osman

From jack at oratrix.nl Thu Sep 7 10:18:50 2000 From: jack at oratrix.nl (Jack Jansen) Date: Thu, 07 Sep 2000 16:18:50 +0200 Subject: [Numpy-discussion] Python-2.01b In-Reply-To: Message by U-E59264-Osman F Buyukisik , Thu, 7 Sep 2000 08:50:38 -0400 (EDT) , <14775.36445.869829.852906@c1186.ae.ge.com> Message-ID: <20000907141851.E1686303181@snelboot.oratrix.nl>

> Hello All,
> I made a mistake of installing the new 2.0 python! Now when I try to
> re-build numpy, I get errors.
> [...]
> Include/arrayobject.h:18: parse error before `Py_FPROTO'

I have the fixes on my disk at home, but I haven't gotten around to completely testing NumPy yet. I'll try and get around to it as soon as possible. That is: assuming people think it is a good idea I check these fixes in, let me know, please. (A quick workaround that may work is to add the following lines to Python.h:

#define Py_FPROTO(x) x
#define Py_PROTO(x) x

but I haven't tested this).
-- Jack Jansen | ++++ stop the execution of Mumia Abu-Jamal ++++ Jack.Jansen at oratrix.com | ++++ if you agree copy these lines to your sig ++++ www.oratrix.nl/~jack | see http://www.xs4all.nl/~tank/spg-l/sigaction.htm

From jhauser at ifm.uni-kiel.de Fri Sep 8 08:34:09 2000 From: jhauser at ifm.uni-kiel.de (Janko Hauser) Date: Fri, 8 Sep 2000 14:34:09 +0200 (CEST) Subject: [Numpy-discussion] 2 i/o questions and an announcement In-Reply-To: <20000906.8472600@cricket.> References: <20000906.8472600@cricket.> Message-ID: <20000908123410.7964.qmail@lisboa.ifm.uni-kiel.de>

From my understanding of the code, Numpy2 uses the buffer interface. This would be the basis for a fast exchange of data between PIL and Numpy arrays. Regarding I/O, the pickling of arrays is quite fast. Have you tried this already? HTH, __Janko

From rlw at stsci.edu Fri Sep 8 08:57:50 2000 From: rlw at stsci.edu (Rick White) Date: Fri, 8 Sep 2000 08:57:50 -0400 (EDT) Subject: [Numpy-discussion] 2 i/o questions and an announcement Message-ID: <200009081257.IAA26772@sundog.stsci.edu>

Janko Hauser writes:
> From my understanding of the code, Numpy2 uses the buffer
> interface. This would be the basis for a fast exchange of data between
> PIL and Numpy arrays. Regarding IO the pickling of arrays is quite
> fast. Have you tried this already?

Pickling is fine if all you want to do is create data within Numeric and save it. But it's not useful either for reading data that already exists in some format (the usual case) or for writing data in standard formats so it can be exchanged with other people.
Rick White

From jhauser at ifm.uni-kiel.de Fri Sep 8 09:07:16 2000 From: jhauser at ifm.uni-kiel.de (Janko Hauser) Date: Fri, 8 Sep 2000 15:07:16 +0200 (CEST) Subject: [Numpy-discussion] 2 i/o questions and an announcement In-Reply-To: <200009081257.IAA26772@sundog.stsci.edu> References: <200009081257.IAA26772@sundog.stsci.edu> Message-ID: <20000908130716.8041.qmail@lisboa.ifm.uni-kiel.de>

Rick White writes:
> Pickling is fine if all you want to do is create data within Numeric
> and save it. But it's not useful either for reading data that already
> exists in some format (the usual case) or for writing data in standard
> formats so it can be exchanged with other people.

That's right. If you want to read binary formats for which you know the data layout, the numpyio package from Travis Oliphant is very fast and memory efficient, because it builds up the array during the read/write, so no copy of the data string needs to be in memory. I see no other way to read other formats generally besides specific wrappers, like the netcdf interface. __Janko Hauser

From nick at nickbower.com Sat Sep 9 00:15:51 2000 From: nick at nickbower.com (nick at nickbower.com) Date: Sat, 09 Sep 2000 04:15:51 GMT Subject: [Numpy-discussion] 2 i/o questions and an announcement In-Reply-To: <20000908123410.7964.qmail@lisboa.ifm.uni-kiel.de> References: <20000906.8472600@cricket.> <20000908123410.7964.qmail@lisboa.ifm.uni-kiel.de> Message-ID: <20000909.4155100@cricket.>

> From my understanding of the code, Numpy2 uses the buffer
> interface. This would be the basis for a fast exchange of data between
> PIL and Numpy arrays.

I should have been more specific <:) This problem was related to the current version of NumPy's interaction with PIL.

> Regarding IO the pickling of arrays is quite
> fast. Have you tried this already?

So is array.fromfile (faster I would expect). The issue at hand is not the read speed, but avoiding reading into memory then re-copying it over to NumPy formats.
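The copy-avoidance point above is easy to see with the standard-library array module alone. The following sketch (modern Python shown; the file name, element count, and values are invented for illustration) writes a scratch binary file and then lets fromfile() fill the array's buffer directly from the file:

```python
import array
import os
import tempfile

# Write 1000 native-endian doubles to a scratch file
# (a stand-in for a real binary data file).
values = [float(i) for i in range(1000)]
path = os.path.join(tempfile.mkdtemp(), "data.bin")
with open(path, "wb") as f:
    array.array("d", values).tofile(f)

# fromfile() fills the array's buffer straight from the file,
# with no intermediate Python-level copy of the raw bytes.
data = array.array("d")
with open(path, "rb") as f:
    data.fromfile(f, 1000)
```

The remaining cost Nick describes is the conversion from this stdlib array to a Numeric array, which is exactly the copy the buffer interface in Numpy2 was meant to remove.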
> That's right. If you want to read binary formats, for which you > know the data layout the numpyio package from Travis > Oliphant is very fast and memory efficient, because it > builds up the array during the read/write so no copy > of the data string needs to be in memory. Ah-ha! Sounds exactly what I wanted. I'll check it out. Thanks, Nick. From pauldubois at home.com Mon Sep 11 11:28:39 2000 From: pauldubois at home.com (Paul F. Dubois) Date: Mon, 11 Sep 2000 08:28:39 -0700 Subject: [Numpy-discussion] Reference on numpy home page In-Reply-To: Message-ID: Travis, I added a reference to your multipack home page on http://numpy.sourceforge.net. I encourage you and other nummies to edit that section of the home page to add other references to primary collections of numpy-compatible software, or to add details to my descriptions. To do it, log in via ssh to sourceforge, cd to /home/groups/numpy/htdocs, and edit index.html. I think we can rely on sloth to prevent edit conflicts, there being so few of us. Paul From pearu at ioc.ee Tue Sep 12 12:12:59 2000 From: pearu at ioc.ee (Pearu Peterson) Date: Tue, 12 Sep 2000 19:12:59 +0300 (EETDST) Subject: [Numpy-discussion] ANNOUNCEMENT: Fortran to Python Interface Generator, 2nd Rel. Message-ID: FPIG - Fortran to Python Interface Generator I am pleased to announce the second public release of f2py (version 2.264): http://cens.ioc.ee/projects/f2py2e/ f2py is a command line tool for binding Python and Fortran codes. It scans Fortran 77/90/95 codes and generates a Python C/API module that makes it possible to call Fortran routines from Python. No Fortran or C expertise is required for using this tool. Features include: *** All basic Fortran types are supported: integer[ | *1 | *2 | *4 | *8 ], logical[ | *1 | *2 | *4 | *8 ], character[ | *(*) | *1 | *2 | *3 | ... ] real[ | *4 | *8 | *16 ], double precision, complex[ | *8 | *16 | *32 ] *** Multi-dimensional arrays of (almost) all basic types. 
Dimension specifications: | : | * | :

*** Supported attributes: intent([ in | inout | out | hide | in,out | inout,out ]) dimension() depend([]) check([]) note() optional, required, external

*** Calling Fortran 77/90/95 subroutines and functions. Also Fortran 90/95 module routines. Internal initialization of optional arguments.

*** Accessing COMMON blocks from Python. Accessing Fortran 90/95 module data coming soon.

*** Call-back functions: calling Python functions from Fortran with very flexible hooks.

*** In Python, arguments of the interfaced functions may be of different type - necessary type conversions are done internally at the C level.

*** Automatically generates documentation (__doc__, LaTeX) for interface functions.

*** Automatically generates signature files --- user has full control over the interface constructions. Automatically detects the signatures of call-back functions, solves argument dependencies, etc.

*** Automatically generates a Makefile for compiling Fortran and C codes and linking them to a shared module. Many compilers are supported: gcc, Compaq Fortran, VAST/f90 Fortran, Absoft F77/F90, MIPSpro 7 Compilers, etc. Platforms: Intel/Alpha Linux, HP-UX, IRIX64.

*** Complete User's Guide in various formats (html, ps, pdf, dvi).

*** f2py users list is available for support, feedback, etc.

For more information about f2py, see http://cens.ioc.ee/projects/f2py2e/ f2py is released under the LGPL license.

Sincerely, Pearu Peterson September 12, 2000

f2py 2.264 - The Fortran to Python Interface Generator (12-Sep-00)

From jsaenz at wm.lc.ehu.es Tue Sep 12 11:56:16 2000 From: jsaenz at wm.lc.ehu.es (Jon Saenz) Date: Tue, 12 Sep 2000 17:56:16 +0200 (MET DST) Subject: [Numpy-discussion] Univariate and multivariate density estimation using Python Message-ID:

Hello, all. I am announcing a C extension module which computes univariate and multivariate probability density functions by means of a kernel-based approach. The module includes functions to perform the estimation using the following kernels:

* One-dimensional data: Epanechnikov, Biweight, Triangular
* 2 or 3-dimensional data: Epanechnikov, Multivariate Gaussian

For multivariate data, there is the optional feature of scaling each axis by means of a matrix. This approach allows the definition of Fukunaga-type estimators. The functions in the module are used as follows:

import KPDF
# edata and gdata must be numpy arrays of shapes (N,) and (E,)
# while h is a scalar
# pdf1 is a numpy array which holds the PDF evaluated at points
# gdata with experimental data in edata. pdf1.shape=(E,)
pdf1=KPDF.UPDFEpanechnikov(edata,gdata,h)
# For multivariate estimation, edata and gdata must be numpy arrays of
# shapes (N,2|3) and (E,2|3) while h is a scalar
# pdf2 is a numpy array which holds the PDF evaluated at points
# gdata with experimental data in edata. pdf2.shape=(E,)
pdf2=KPDF.MPDFEpanechnikov(e2data,g2data,h)
# For Fukunaga-type estimators, Sm1 must be a numpy array 2x2(3x3)
# and holds the covariance matrix. sqrtdetS is the square root of the
# determinant
pdf2=KPDF.MPDFEpanechnikov(e2data,g2data,h,Sm1,sqrtdetS)

There is not a lot of documentation in the module, but I have a serious commitment to preparing it soon. It can be downloaded from: http://starship.python.net/crew/jsaenz/KPDF.tar.gz Feedback from interested users will be greatly appreciated. Regards.

Jon Saenz. | Tfno: +34 946012470 Depto. Fisica Aplicada II | Fax: +34 944648500 Facultad de Ciencias.
\\ Universidad del Pais Vasco \\ Apdo. 644 \\ 48080 - Bilbao \\ SPAIN

From dgoodger at atsautomation.com Tue Sep 12 10:33:22 2000 From: dgoodger at atsautomation.com (Goodger, David) Date: Tue, 12 Sep 2000 10:33:22 -0400 Subject: [Numpy-discussion] NumPy built distributions Message-ID:

ATTN: Numeric Python project administrators

There is a lot of interest in NumPy out there. Every time a new version of Numeric or Python comes out, people are interested in obtaining built distributions. Many of us are not able to build from source; without a repository of built distributions I'm afraid that many people will give up.

The MacOS community is lucky that Jack Jansen, who maintains the MacPython distribution, also includes a built NumPy with his installers. Other OSes are not so lucky.

Robin Becker (robin at jessikat.fsnet.co.uk) has built NumPy for Win32 many times and emails copies to those who request them. Unfortunately for him, but fortunately for NumPy, the demand is getting too high for this method of distribution. It would be great if these built distributions were available for all, with minimal overhead to developers/packagers.

Would it be possible to add Robin's installers to the FTP site? How should he (or I, if he's not able) go about this?

David Goodger Systems Administrator, Advanced Systems Automation Tooling Systems Inc., Automation Systems Division direct: (519) 653-4483 ext. 7121 fax: (519) 650-6695 e-mail: dgoodger at atsautomation.com goodger at users.sourceforge.net

From NumPy's Open Discussion forum (http://sourceforge.net/forum/forum.php?forum_id=3847):

By: rgbecker ( Robin Becker ) Numeric-16.0 win32 builds [ reply ] 2000-Sep-09 03:57 I have built and zipped binaries for Numeric-16.0 for Python-1.5.2/1.6b1/2.0b1 I am fed up with sending these binaries to people individually. Is there an FTP location I could use on the sourceforge site?
From DavidA at ActiveState.com Tue Sep 12 16:13:07 2000 From: DavidA at ActiveState.com (David Ascher) Date: Tue, 12 Sep 2000 13:13:07 -0700 (Pacific Daylight Time) Subject: [Numpy-discussion] NumPy built distributions In-Reply-To: Message-ID:

> Would it be possible to add Robin's installers to the FTP site? How should
> he (or I, if he's not able) go about this?

I'm working on it right now. I need to repackage his code a bit since he just shipped me the .pyds and I'd like to make it a drop-in zip file (meaning I just add the .py and the .pth file). --david

From pauldubois at home.com Tue Sep 12 16:17:37 2000 From: pauldubois at home.com (Paul F. Dubois) Date: Tue, 12 Sep 2000 13:17:37 -0700 Subject: [Numpy-discussion] NumPy built distributions In-Reply-To: Message-ID:

I'm willing to add any built distributions that you make. In the longer run I am happy to welcome additional developers who just want to be able to do these releases. Here is what you do:

ftp downloads.sourceforge.net
cd /incoming
Upload your file
Send me email telling me the name of the file.

I will reply when I have done it. I don't know how long they let the files sit in incoming before blowing them away but let's try it and see how things work out. Please name the files so that they indicate the platform if not obvious, Numeric version AND the Python version with which they were built. I suggest something like Numeric-16.0-Python1.6.rpm, etc. The Python versions 1.5.2, 1.6, and 2.0 are incompatible.

> -----Original Message-----
> From: numpy-discussion-admin at lists.sourceforge.net
> [mailto:numpy-discussion-admin at lists.sourceforge.net]On Behalf Of
> Goodger, David
> Sent: Tuesday, September 12, 2000 7:33 AM
> To: numpy-discussion at lists.sourceforge.net
> Cc: robin at jessikat.fsnet.co.uk
> Subject: [Numpy-discussion] NumPy built distributions
>
> ATTN: Numeric Python project administrators
>
> There is a lot of interest in NumPy out there.
Every time a new version of > Numeric or Python comes out, people are interested in obtaining built > distributions. Many of us are not able to build from source; without a > repository of built distributions I'm afraid that many people > will give up. > > The MacOS community is lucky that Jack Jansen, who maintains the MacPython > distribution, also includes a built NumPy with his installers. Other OSes > are not so lucky. > > Robin Becker (robin at jessikat.fsnet.co.uk) has built NumPy for Win32 many > times and emails copies to those who request them. Unfortunately for him, > but fortunately for NumPy, the demand is getting too high for > this method of > distribution. It would be great if these built distributions were > available > for all, with minimal overhead to developers/packagers. > > Would it be possible to add Robin's installers to the FTP site? How should > he (or I, if he's not able) go about this? > > David Goodger > Systems Administrator, Advanced Systems > Automation Tooling Systems Inc., Automation Systems Division > direct: (519) 653-4483 ext. 7121 fax: (519) 650-6695 > e-mail: dgoodger at atsautomation.com > goodger at users.sourceforge.net > > > From NumPy's Open Discussion forum > (http://sourceforge.net/forum/forum.php?forum_id=3847): > > By: rgbecker ( Robin Becker ) > Numeric-16.0 win32 builds [ reply ] > 2000-Sep-09 03:57 > I have built and zipped binaries for Numeric-16.0 for > Python-1.5.2/1.6b1/2.0b1 > I am fed up with sending these binaries to people individually. > Is there an FTP location I could use on the sourceforge site? 
> _______________________________________________ > Numpy-discussion mailing list > Numpy-discussion at lists.sourceforge.net > http://lists.sourceforge.net/mailman/listinfo/numpy-discussion From o.hofmann at smail.uni-koeln.de Tue Sep 12 09:03:28 2000 From: o.hofmann at smail.uni-koeln.de (Oliver Hofmann) Date: Tue, 12 Sep 2000 15:03:28 +0200 Subject: [Numpy-discussion] Row/Column delete In-Reply-To: <200009121202.OAA03292@chinon.cnrs-orleans.fr>; from hinsen@cnrs-orleans.fr on Tue, Sep 12, 2000 at 02:02:49PM +0200 References: <20000907112257.B8034@smail.rrz.uni-koeln.de> <200009121202.OAA03292@chinon.cnrs-orleans.fr> Message-ID: <20000912150327.B19@smail.rrz.uni-koeln.de> Konrad Hinsen (hinsen at cnrs-orleans.fr) wrote: > I haven't done any comparisons, but I suspect that Numeric.take() is > faster for large arrays. Try this: Yes, indeed. Actually, it is so fast that the first time I rewrote the script last week I was quite convinced it had to be buggy. Thanks again everyone, Oliver -- Oliver Hofmann - University of Cologne - Department of Biochemistry o.hofmann at smail.uni-koeln.de - setar at gmx.de - connla at thewell.com "It's too bad she won't live. But then, who does?" From hinsen at dirac.cnrs-orleans.fr Tue Sep 12 08:03:11 2000 From: hinsen at dirac.cnrs-orleans.fr (hinsen at dirac.cnrs-orleans.fr) Date: Tue, 12 Sep 2000 14:03:11 +0200 Subject: [Numpy-discussion] Row/Column delete In-Reply-To: <20000907112257.B8034@smail.rrz.uni-koeln.de> (message from Oliver Hofmann on Thu, 7 Sep 2000 11:22:57 +0200) References: <20000907112257.B8034@smail.rrz.uni-koeln.de> Message-ID: <200009121203.OAA03295@chinon.cnrs-orleans.fr> > What I am doing right now is: Set up a new matrix with one less row > or column (zeros()), then copy all the rows/columns but the one I'd > like to delete to the new matrix. I haven't done any comparisons, but I suspect that Numeric.take() is faster for large arrays. 
Try this:

def delete_row(matrix, row):
    return Numeric.take(matrix, range(row) + range(row+1, matrix.shape[0]))

def delete_column(matrix, column):
    return Numeric.take(matrix, range(column) + range(column+1, matrix.shape[1]), axis = 1)

Konrad.

-- ------------------------------------------------------------------------------- Konrad Hinsen | E-Mail: hinsen at cnrs-orleans.fr Centre de Biophysique Moleculaire (CNRS) | Tel.: +33-2.38.25.56.24 Rue Charles Sadron | Fax: +33-2.38.63.15.17 45071 Orleans Cedex 2 | Deutsch/Esperanto/English/ France | Nederlands/Francais -------------------------------------------------------------------------------

From hinsen at cnrs-orleans.fr Tue Sep 12 08:02:49 2000 From: hinsen at cnrs-orleans.fr (Konrad Hinsen) Date: Tue, 12 Sep 2000 14:02:49 +0200 Subject: [Numpy-discussion] Row/Column delete In-Reply-To: <20000907112257.B8034@smail.rrz.uni-koeln.de> (message from Oliver Hofmann on Thu, 7 Sep 2000 11:22:57 +0200) References: <20000907112257.B8034@smail.rrz.uni-koeln.de> Message-ID: <200009121202.OAA03292@chinon.cnrs-orleans.fr>

> What I am doing right now is: Set up a new matrix with one less row
> or column (zeros()), then copy all the rows/columns but the one I'd
> like to delete to the new matrix.

I haven't done any comparisons, but I suspect that Numeric.take() is faster for large arrays. Try this:

def delete_row(matrix, row):
    return Numeric.take(matrix, range(row) + range(row+1, matrix.shape[0]))

def delete_column(matrix, column):
    return Numeric.take(matrix, range(column) + range(column+1, matrix.shape[1]), axis = 1)

Konrad.
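The take() calls above simply gather every index except the one being dropped. For readers without Numeric at hand, the same index-selection idea can be sketched in plain Python, with nested lists standing in for an array (this is only an illustration of the trick, not the Numeric API):

```python
# Pure-Python illustration of the idea behind Numeric.take():
# build the list of surviving indices, then gather those entries.

def delete_row(matrix, row):
    keep = list(range(row)) + list(range(row + 1, len(matrix)))
    return [matrix[i] for i in keep]

def delete_column(matrix, column):
    ncols = len(matrix[0])
    keep = list(range(column)) + list(range(column + 1, ncols))
    return [[r[j] for j in keep] for r in matrix]

m = [[1, 2, 3],
     [4, 5, 6],
     [7, 8, 9]]
print(delete_row(m, 1))     # -> [[1, 2, 3], [7, 8, 9]]
print(delete_column(m, 0))  # -> [[2, 3], [5, 6], [8, 9]]
```

A combined row-plus-column delete still costs two gathers, matching Oliver's observation, but each gather is a single bulk operation rather than an element-by-element copy loop.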
-- ------------------------------------------------------------------------------- Konrad Hinsen | E-Mail: hinsen at cnrs-orleans.fr Centre de Biophysique Moleculaire (CNRS) | Tel.: +33-2.38.25.56.24 Rue Charles Sadron | Fax: +33-2.38.63.15.17 45071 Orleans Cedex 2 | Deutsch/Esperanto/English/ France | Nederlands/Francais ------------------------------------------------------------------------------- From pauldubois at home.com Wed Sep 13 01:15:04 2000 From: pauldubois at home.com (Paul F. Dubois) Date: Tue, 12 Sep 2000 22:15:04 -0700 Subject: [Numpy-discussion] Sourceforge down time scheduled -- heads up Message-ID: -----Original Message----- From: noreply at sourceforge.net [mailto:noreply at sourceforge.net] Sent: Tuesday, September 12, 2000 6:18 PM To: noreply at sourceforge.net Subject: SourceForge: Important Site News Dear SourceForge User, As the Director of SourceForge, I want to thank you in making SourceForge the most successful Open Source Development Site in the World. We just surpassed 60,000 registered users and 8,800 open source projects. We have a lot of exciting things planned for SourceForge in the coming months. These include faster servers, improved connectivity, mirrored servers, and the addition of more hardware platforms to our compile farm (including Sparc, PowerPC, Alpha, and more). Did I mention additional storage? The new servers that we will be adding to the site will increase the total storage on SourceForge by an additional 6 terabytes. In 10 days we will begin the first phase of our new hardware build out. This phase involves moving the primary site to our new location in Fremont, California. This move will take place on Friday night (Sept 22nd) at 10pm and continue to 8am Saturday morning (Pacific Standard Time). During this time the site will be off-line as we make the physical change. I know many of you use Sourceforge as your primary development environment, so I want to apologize in advance for the inconvenience of this downtime. 
If you have any concerns about this, please feel free to email me. I will write you again as this date nears, with a reminder and an update. Thank you again for using SourceForge.net -- Patrick McGovern Director of SourceForge.net Pat at sourceforge.net --------------------- This email was sent from sourceforge.net. To change your email receipt preferences, please visit the site and edit your account via the "Account Maintenance" link. Direct any questions to admin at sourceforge.net, or reply to this email.

From sanner at scripps.edu Wed Sep 13 19:49:52 2000 From: sanner at scripps.edu (Michel Sanner) Date: Wed, 13 Sep 2000 16:49:52 -0700 Subject: [Numpy-discussion] NumPy built distributions Message-ID: <1000913164952.ZM165594@noah.scripps.edu>

Hello,

We have a download site for Python modules (which I meant to make public... just didn't have time yet). It enables downloading precompiled Python interpreters (my users do not want to have to compile the interpreter) and a large number of Python extensions including Numeric for the following platforms:

sgi IRIX5
sgi IRIX6
sun SunOS5
Dec Alpha Linux

Feel free to give it a shot! http://www.scripps.edu/~sanner/Python and scroll down to Downloads. Any feedback is welcome!

-Michel

-- ----------------------------------------------------------------------- >>>>>>>>>> AREA CODE CHANGE <<<<<<<<< we are now 858 !!!!!!! Michel F. Sanner Ph.D. The Scripps Research Institute Assistant Professor Department of Molecular Biology 10550 North Torrey Pines Road Tel. (858) 784-2341 La Jolla, CA 92037 Fax. (858) 784-2860 sanner at scripps.edu http://www.scripps.edu/sanner -----------------------------------------------------------------------

From pauldubois at home.com Thu Sep 14 08:10:53 2000 From: pauldubois at home.com (Paul F. Dubois) Date: Thu, 14 Sep 2000 05:10:53 -0700 Subject: [Numpy-discussion] Da Blas Message-ID:

Well, I certainly didn't mean to flame anyone in particular, so I'm sorry that Frank was offended.
I suppose I got irritated when I tried to solve one problem and simply created one that was harder to solve. I don't think any of us quite appreciated how bad the situation is. No good deed goes unpunished.

I tried yesterday to help someone internally with an SGI system. They had two new wrinkles on this:

a. They had their own blas but not their own lapack, in /usr/lib. Well, I had allowed for that in my second version, but of course the imprecision of the -L stuff makes it possible to get the wrong one that way.

b. I did get the /usr/lib version of the blas, and boy was it the wrong one, since apparently there are two binary modes on an SGI and it was not -n32 like Python expected. Neither was the lapack_lite one. I had to edit the Makefile. That problem could be avoided using distutils to do the compile.

The solution was to edit setup.py to remove /usr/lib from the search list. The problem with simply using the desired library as a binary instead of a library is that you would load all of it. I do not know how much control you can get with the -l -L paradigm, especially in a portable way.

From pauldubois at home.com Thu Sep 14 09:02:26 2000 From: pauldubois at home.com (Paul F. Dubois) Date: Thu, 14 Sep 2000 06:02:26 -0700 Subject: [Numpy-discussion] Survey available on lapack_lite question Message-ID:

I have started a survey on sourceforge which asks the question: should lapack_lite and the modules that depend upon it be moved to the optional Packages section?

Pros: this would move these modules out of a privileged position into equality with other "application" packages. Their presence in the core is unnecessary. The compilation problems, as yet unsolved, with lapack_lite vs. alternate implementations, break the installation of the core even for people who are not using the facilities.

Cons: It would require extra effort to install the Numerical distribution from source and get back to the current status quo. Some people would be confused at the changes.
Note: If we did it "right", and made these things into packages instead of modules that install into Numeric, changes to scripts would be required, but that is not being proposed here. My vision of this change is that they would still install directly into Numeric, and would be included in all "prebuilt" distributions. From frank at ned.dem.csiro.au Fri Sep 15 07:28:44 2000 From: frank at ned.dem.csiro.au (Frank Horowitz) Date: Fri, 15 Sep 2000 19:28:44 +0800 Subject: [Numpy-discussion] Da Blas Message-ID: At 3:14 PM -0700 14/9/00, wrote: >Well, I certainly didn't mean to flame anyone in particular, so I'm sorry >that Frank was offended. Publicly apologized, and publicly accepted. I'm sorry I lost my cool. Ray Beausoleil and I are having a look at this too. He's coming at the problem from a Windows perspective, and I'm coming at it from a Linux/Unix perspective. It's going to be a little slow going for me at first, since it appears that I'm going to have to get my head around the distutils way of doing things.... Just in case it saves anyone else following a false lead, yes, there is a bug in the (Linux Mandrake 7.1; but probably any unix-ish box) compile line for building the lapack_lite.so shared library (i.e. what is generated on my box is "-L/usr/lib-llapack" which obviously is missing a blank before the "-llapack" substring). However, when I coerced the distutils system to get around that bug (by specifying "/usr/lib " with a trailing blank for the BLASLIBDIR and LAPACKLIBDIR variables in setup.py) the same problem (i.e. an "ImportError: /usr/lib/liblapack.so.3: undefined symbol: e_wsfe" in importing lapack_lite) ultimately manifested itself. So my advice is not to bother tracking that one down (although it probably should be reported as a bug in distutils, since adding that trailing blank algorithmically instead of in a user modifiable configuration string is the "right thing to do", TM.). 
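The missing-blank bug Frank describes is a classic hazard of building a link command by string concatenation; passing each flag as its own argument avoids the whole class of problem. A small sketch of the failure mode (the directory and library names are placeholders, and this is not the actual distutils code):

```python
# Why "-L/usr/lib" + "-llapack" becomes the bogus "-L/usr/lib-llapack":
# gluing flags into one string with no separator fuses them into a
# single, wrong -L path.
lib_dirs = ["/usr/lib"]      # placeholder search path
libs = ["lapack", "blas"]    # placeholder library names

buggy = "-L" + lib_dirs[0] + "-l" + libs[0]
# -> "-L/usr/lib-llapack"

# Safer: keep one argument per flag (as an argv-style list),
# joining only for display. The trailing-blank workaround in
# setup.py just papers over the missing separator.
args = ["-L%s" % d for d in lib_dirs] + ["-l%s" % l for l in libs]
print(" ".join(args))  # -> -L/usr/lib -llapack -lblas
```

Fixing the separator in the tool itself, rather than in a user-editable configuration string, is exactly the "right thing to do" the message above asks for.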
I'm still puzzled by Thomas Breul's report of preloading an f2c library squashing the bug under RedHat 6.2(?). That does not work under Mandrake, although clearly there is some bloody symbol table missing from somewhere. The trouble is to find out where, and then to coerce distutils to deal with that in a portable way... Frank -- -- Frank Horowitz frank at ned.dem.csiro.au CSIRO-Exploration & Mining, PO Box 437, Nedlands, WA 6009, AUSTRALIA Direct: +61 8 9284 8431; FAX: +61 8 9389 1906; Reception: +61 8 9389 8421 From ransom at cfa.harvard.edu Fri Sep 15 11:30:38 2000 From: ransom at cfa.harvard.edu (Scott M. Ransom) Date: Fri, 15 Sep 2000 11:30:38 -0400 Subject: [Numpy-discussion] Da Blas References: Message-ID: <39C2409E.51659120@cfa.harvard.edu> Frank Horowitz wrote: > > However, when I > coerced the distutils system to get around that bug (by specifying > "/usr/lib " with a trailing blank for the BLASLIBDIR and LAPACKLIBDIR > variables in setup.py) the same problem (i.e. an "ImportError: > /usr/lib/liblapack.so.3: undefined symbol: e_wsfe" in importing > lapack_lite) ultimately manifested itself. This problem is easily fixed (at least on linux) by performing the link of lapack_lite.so with g77 instead of gcc (this is required because the lapack and/or blas libraries are based on fortran object files...). For instance the out-of-box link command on my machine (Debian 2.2) is: gcc -shared build/temp.linux2/Src/lapack_litemodule.o -L/usr/local/lib -L/usr/lib -llapack -lblas -o build/lib.linux2/lapack_lite.so Simply change the 'gcc' to 'g77' and everything works nicely. Not sure if this is specific to Linux or not... Scott -- Scott M. Ransom Address: Harvard-Smithsonian CfA Phone: (617) 495-4142 60 Garden St. 
MS 10 email: ransom at cfa.harvard.edu Cambridge, MA 02138 GPG Fingerprint: 06A9 9553 78BE 16DB 407B FFCA 9BFA B6FF FFD3 2989 From frank at ned.dem.csiro.au Fri Sep 15 20:39:39 2000 From: frank at ned.dem.csiro.au (Frank Horowitz) Date: Sat, 16 Sep 2000 08:39:39 +0800 Subject: [Numpy-discussion] Da Blas In-Reply-To: <39C2409E.51659120@cfa.harvard.edu> References: Message-ID: At 11:30 AM -0400 9/15/00, Scott M. Ransom wrote: >Frank Horowitz wrote: >> >> However, when I >> coerced the distutils system to get around that bug (by specifying >> "/usr/lib " with a trailing blank for the BLASLIBDIR and LAPACKLIBDIR >> variables in setup.py) the same problem (i.e. an "ImportError: >> /usr/lib/liblapack.so.3: undefined symbol: e_wsfe" in importing >> lapack_lite) ultimately manifested itself. > >This problem is easily fixed (at least on linux) by performing the link >of lapack_lite.so with g77 instead of gcc (this is required because the >lapack and/or blas libraries are based on fortran object files...). Good on ya, Scott. You nailed it. When I did this by hand, it worked. What I now need to figure out is how to coerce distutils into doing that automagically so others can benefit without this pain! Cheers, Frank Horowitz Dr. Frank Horowitz __ \ CSIRO Exploration & Mining ,~' L_|\ Australian 39 Fairway, PO Box 437, ;-' \ Geodynamics Nedlands, WA 6009 Australia ( \ Cooperative Phone +61 9 284 8431 + ___ / Research Fax +61 9 389 1906 L~~' "\__/ Centre frank at ned.dem.csiro.au W From kern at its.caltech.edu Fri Sep 15 22:10:10 2000 From: kern at its.caltech.edu (Robert Kern) Date: Fri, 15 Sep 2000 19:10:10 -0700 (PDT) Subject: [Numpy-discussion] Da Blas In-Reply-To: Message-ID: On Sat, 16 Sep 2000, Frank Horowitz wrote: > At 11:30 AM -0400 9/15/00, Scott M. 
Ransom wrote: > >Frank Horowitz wrote: > >> > >> However, when I > >> coerced the distutils system to get around that bug (by specifying > >> "/usr/lib " with a trailing blank for the BLASLIBDIR and LAPACKLIBDIR > >> variables in setup.py) the same problem (i.e. an "ImportError: > >> /usr/lib/liblapack.so.3: undefined symbol: e_wsfe" in importing > >> lapack_lite) ultimately manifested itself. > > > >This problem is easily fixed (at least on linux) by performing the link > >of lapack_lite.so with g77 instead of gcc (this is required because the > >lapack and/or blas libraries are based on fortran object files...). > > Good on ya, Scott. You nailed it. When I did this by hand, it worked. What > I now need to figure out is how to coerce distutils into doing that > automagically so others can benefit without this pain! Adding "-lg2c" to the end of the gcc (not g77) link line always did the trick for me with FORTRAN libraries. I'm not sure if g77 does anything more than that compared with gcc for linking. > Cheers, > Frank Horowitz -- Robert Kern kern at caltech.edu "In the fields of hell where the grass grows high Are the graves of dreams allowed to die." -- Richard Harter From pauldubois at home.com Sun Sep 17 00:56:16 2000 From: pauldubois at home.com (Paul F. Dubois) Date: Sat, 16 Sep 2000 21:56:16 -0700 Subject: [Numpy-discussion] Numeric addition: put Message-ID: In the CVS sources there is now a new function "put". "put" is roughly the opposite of "take" put(x, indices, values) is roughly x.flat[j] = values[j] for all j in indices The fine print will appear in the manuals shortly. From DavidA at ActiveState.com Sun Sep 17 01:04:01 2000 From: DavidA at ActiveState.com (David Ascher) Date: Sat, 16 Sep 2000 22:04:01 -0700 (Pacific Daylight Time) Subject: [Numpy-discussion] Numeric addition: put In-Reply-To: Message-ID: > In the CVS sources there is now a new function "put". > "put" is roughly the opposite of "take" yeah! Go, Paul, Go! 
=) --david From Oliphant.Travis at mayo.edu Mon Sep 18 10:06:39 2000 From: Oliphant.Travis at mayo.edu (Travis Oliphant) Date: Mon, 18 Sep 2000 09:06:39 -0500 (CDT) Subject: [Numpy-discussion] Re: New put function In-Reply-To: <200009171917.MAA12610@lists.sourceforge.net> Message-ID: Paul, Great job on the new put function. I can't wait to try it out. This goes a long way towards eliminating one of the few complaints I've heard about Numerical Python when compared to other similar environments. -Travis From jonathan.gilligan at vanderbilt.edu Mon Sep 18 17:34:17 2000 From: jonathan.gilligan at vanderbilt.edu (Jonathan M. Gilligan) Date: Mon, 18 Sep 2000 16:34:17 -0500 Subject: [Numpy-discussion] Building Numpy under Python 1.6, Windows Message-ID: <5.0.0.25.0.20000918161015.00a225a0@g.mail.vanderbilt.edu> I have recently upgraded to Python 1.6 (final). Now I can't figure out how to build Numpy with lapack_lite under windows. There is not a makefile for lapack_lite for MSVC and the instructions in README don't really help. All that setup.py will do is complain that lapack_lite is not installed. Specifically, are there specific compiler flags that I need to use to compile lapack lite? Should I just emulate the unix version and do cl -c *.c lib -out:blas.lib blas_lite.obj lib -out:lapack.lib dlapack_lite.obj f2c_lite.obj zlapack_lite.obj I tried doing this, but when I try to run the NumTut examples, I get the following error: D:\Programming\python\Numerical\Demo>python Python 1.6 (#0, Sep 5 2000, 08:16:13) [MSC 32 bit (Intel)] on win32 Copyright (c) 1995-2000 Corporation for National Research Initiatives. All Rights Reserved. Copyright (c) 1991-1995 Stichting Mathematisch Centrum, Amsterdam. All Rights Reserved. >>> import NumTut Traceback (most recent call last): File "", line 1, in ? File "NumTut\__init__.py", line 14, in ? 
greece = pickle.load(open(os.path.join(_dir, 'greece.pik'), 'rb')) / 256.0 File "D:\Programming\python\Python 1.6\lib\pickle.py", line 855, in load return Unpickler(file).load() File "D:\Programming\python\Python 1.6\lib\pickle.py", line 515, in load dispatch[key](self) File "D:\Programming\python\Python 1.6\lib\pickle.py", line 688, in load_global klass = self.find_class(module, name) File "D:\Programming\python\Python 1.6\lib\pickle.py", line 698, in find_class raise SystemError, \ SystemError: Failed to import class array_constructor from module Numeric >>> FWIW, I am using the versions from :pserver:anonymous at cvs.numpy.sourceforge.net:/cvsroot/numpy, module Numerical, checked out as of Sept. 14. I didn't see any tags defined in the repository corresponding to release 16.x. Did I miss something? Should the releases not be getting tagged in the CVS repository? Thanks for any pointers, Jonathan From jonathan.gilligan at vanderbilt.edu Mon Sep 18 17:54:31 2000 From: jonathan.gilligan at vanderbilt.edu (Jonathan M. Gilligan) Date: Mon, 18 Sep 2000 16:54:31 -0500 Subject: [Numpy-discussion] Problem Solved Re: Building Numpy under Python 1.6, Windows In-Reply-To: <5.0.0.25.0.20000918161015.00a225a0@g.mail.vanderbilt.edu> Message-ID: <5.0.0.25.0.20000918165215.06201a10@g.mail.vanderbilt.edu> I have solved my problem building numpy under Python 1.6, windows. The problem was that Demos/NumTut/greece.pik was improperly checked into the CVS repository as a text file, when it should have been binary. Jonathan From pauldubois at home.com Mon Sep 18 18:29:04 2000 From: pauldubois at home.com (Paul F. Dubois) Date: Mon, 18 Sep 2000 15:29:04 -0700 Subject: [Numpy-discussion] Building Numpy under Python 1.6, Windows In-Reply-To: <5.0.0.25.0.20000918161015.00a225a0@g.mail.vanderbilt.edu> Message-ID: Would you try syncing up to the repository and try again? I redid this stuff over the weekend because so many people are having trouble.
The thing should build in lapack by default and it is in an optional package to boot. I didn't announce this because I wanted some of my fellow developers to suffer first, but they don't seem to be in a suffering mood today. Do cvs update -d -P to get the new structure. We haven't been doing cvs tags. I suppose if I could remember how to do them I would do them when I am the one cutting the release (CVS is not my usual source control system). > -----Original Message----- > From: numpy-discussion-admin at lists.sourceforge.net > [mailto:numpy-discussion-admin at lists.sourceforge.net]On Behalf Of > Jonathan M. Gilligan > Sent: Monday, September 18, 2000 2:34 PM > To: numpy-discussion at lists.sourceforge.net > Cc: Jonathan M. Gilligan > Subject: [Numpy-discussion] Building Numpy under Python 1.6, Windows > > > I have recently upgraded to Python 1.6 (final). Now I can't > figure out how > to build Numpy with lapack_lite under windows. There is not a > makefile for > lapack_lite for MSVC and the instructions in README don't really > help. All > that setup.py will do is complain that lapack_lite is not installed. > From jonathan.gilligan at vanderbilt.edu Tue Sep 19 10:53:55 2000 From: jonathan.gilligan at vanderbilt.edu (Jonathan M. Gilligan) Date: Tue, 19 Sep 2000 09:53:55 -0500 Subject: [Numpy-discussion] Tagging releases under CVS (was RE: [Numpy-discussion] Building Numpy under Python 1.6, Windows) In-Reply-To: References: <5.0.0.25.0.20000918161015.00a225a0@g.mail.vanderbilt.edu> Message-ID: <5.0.0.25.0.20000919094005.044a80c0@g.mail.vanderbilt.edu> To tag the version you currently have checked out, go into the root directory of the module (e.g., Numeric) and AFTER CHECKING IN ALL MODIFICATIONS do cvs tag [-c] <tagname> where <tagname> is the tag name. This will apply to the current revision of each file that you have checked out.
It is important to note that the tag is applied to the repository so it is essential that you check in all modified files and resolve conflicts BEFORE tagging the repository. The -c flag tells cvs to check that all files in the local directory are unmodified and warns you if they are not. The restrictions on the tag name are not well-documented, but if they match the regular expression ^[A-Za-z][A-Za-z_0-9-]+$ they will work (i.e., matching this regex is a sufficient but perhaps not necessary condition). Tags that begin with [0-9] or that contain [ \t.,;:] will not work. If you want to go back and tag old releases without checking them out, then if there is a target date that you can use to identify the version (e.g., a release date), you can run cvs rtag -D <date> <tagname> <module> where <date> is the date. For format, the following excerpt from the Cederqvist manual may help: >A wide variety of date formats are supported by CVS. The most standard >ones are ISO8601 (from the International Standards Organization) and the >Internet e-mail standard (specified in RFC822 as amended by RFC1123). > >ISO8601 dates have many variants but a few examples are: > >1972-09-24 >1972-09-24 20:05 This command will tag all files in <module> at the latest revision on or before <date> with tag <tagname>. I hope this is helpful to you. At 05:29 PM 9/18/2000, Paul F. Dubois wrote: >We haven't been doing cvs tags. I suppose if I could remember how to do them >I would do them when I am the one cutting the release (CVS is not my usual >source control system). Best regards, Jonathan From pauldubois at home.com Tue Sep 19 14:37:31 2000 From: pauldubois at home.com (Paul F.
Dubois) Date: Tue, 19 Sep 2000 11:37:31 -0700 Subject: [Numpy-discussion] [ANNOUNCE] HappyDoc docs added to numpy site Message-ID: Check it out at http://numpy.sourceforge.net/doc From trainor at uic.edu Tue Sep 19 17:42:06 2000 From: trainor at uic.edu (trainor at uic.edu) Date: Tue, 19 Sep 2000 16:42:06 -0500 Subject: [Numpy-discussion] Question: arrays of booleans Message-ID: <39C7DDAD.CDF99EB6@uic.edu> Hello -- I'm a Numpy Newbie. I have a question for the Numpy gurus about comparisons of multiarray objects and arrays of booleans in particular. Back when I was checking out Numpy for an application, I read a note in the Ascher/Dubois/Hinsen/Hugunin/Oliphant manual about comparisons of multiarray objects. It said and still says: "Currently, comparisons of multiarray objects results in exceptions, since reasonable results (arrays of booleans) are not doable without non-trivial changes to the Python core. These changes are planned for Python 1.6, at which point array object comparisons will be updated." ( http://numpy.sourceforge.net/numdoc/HTML/numdoc.html ) Can anyone comment on what's happening with arrays of booleans in Numpy? I don't want to duplicate someone else's effort, but don't mind standing on someone else's shoulders. My application calls for operations on both non-sparse and sparse matrices. douglas From pete at visionart.com Tue Sep 19 18:55:59 2000 From: pete at visionart.com (Pete Shinners) Date: Tue, 19 Sep 2000 15:55:59 -0700 Subject: [Numpy-discussion] can i improve this code? Message-ID: <009c01c0228c$c7686f90$f73f93cd@visionart.com> I have image data stored in a 3D array. the code is pretty simple, but what i don't like about it is needing to transpose the array twice. here's the code...
def map_rgb_array(surf, a): "returns an array of 2d mapped pixels from a 3d array" #image info needed to convert map rgb data loss = surf.get_losses()[:3] shift = surf.get_shifts()[:3] prep = a >> loss << shift return transpose(bitwise_or.reduce(transpose(prep))) once i get to the 'prep' line, my 3d image has all the values for R,G,B mapped to the appropriate bitspace. all i need to do is bitwise_or the values together "finalcolor = R|G|B". this is where my lack of guru-ness with numpy comes in. the only way i could figure to do this was transpose the array and then apply the bitwise_or. only problem is then i need to de-transpose the array to get it back to where i started. is there some form or mix of bitwise_or i can use to make it operate on the 3rd axis instead of the 1st? thanks all, hopefully this can be improved. legibility comes second to speed as a priority :] ... hmm, tinkering time passes ... i did find this, "prep[:,:,0] | prep[:,:,1] | prep[:,:,2]". is something like that going to be best? "bitwise_or.reduce((prep[:,:,0], prep[:,:,1], prep[:,:,2]))" From cgw at fnal.gov Tue Sep 19 20:22:27 2000 From: cgw at fnal.gov (Charles G Waldman) Date: Tue, 19 Sep 2000 19:22:27 -0500 (CDT) Subject: [Numpy-discussion] Question: arrays of booleans In-Reply-To: <39C7DDAD.CDF99EB6@uic.edu> References: <39C7DDAD.CDF99EB6@uic.edu> Message-ID: <14792.835.587030.528703@buffalo.fnal.gov> trainor at uic.edu writes: > Hello -- I'm a Numpy Newbie. I have a question for the Numpy gurus about > comparisons of multiarray objects and arrays of booleans in particular. > > Back when I was checking out Numpy for an application, I read a note in the > Ascher/Dubois/Hinsen/Hugunin/Oliphant manual about comparisons of multiarray > objects. It said and still says: > > "Currently, comparisons of multiarray objects results in exceptions, > since reasonable results (arrays of booleans) are not doable without > non-trivial changes to the Python core.
These changes are planned for > Python 1.6, at which point array object comparisons will be updated." > ( http://numpy.sourceforge.net/numdoc/HTML/numdoc.html ) > > Can anyone comment on what's happening with arrays of booleans in > Numpy? I believe that the passage you quote is referring to David Ascher's "Rich Comparisons" proposal. This is not in Python 1.6 and is not in Python 2.0 and nobody is currently working on it. The section in the manual should be revised. From gball at cfa.harvard.edu Thu Sep 21 11:28:02 2000 From: gball at cfa.harvard.edu (Greg Ball) Date: Thu, 21 Sep 2000 11:28:02 -0400 (EDT) Subject: [Numpy-discussion] rich comparisons In-Reply-To: <200009210641.XAA04111@lists.sourceforge.net> Message-ID: > Can anyone comment on what's happening with arrays of booleans in Numpy? > I don't want to duplicate someone elses effort, but don't mind standing > on someone elses shoulders. My application calls for operations on both > non-sparse and sparce matrices. "Rich comparisons" are really only syntactic sugar. For non-sparse matrices the ufuncs greater(a,b) and less(a,b) will do the job. Equivalents for sparse matrices probably exist. -Greg Ball From pauldubois at home.com Fri Sep 22 14:09:12 2000 From: pauldubois at home.com (Paul F. Dubois) Date: Fri, 22 Sep 2000 11:09:12 -0700 Subject: [Numpy-discussion] Numeric-17.0.tar.gz released Message-ID: I have released the current CVS image as 17.0. The tag is r17_0. From cbarker at jps.net Thu Sep 28 18:31:14 2000 From: cbarker at jps.net (Chris Barker) Date: Thu, 28 Sep 2000 15:31:14 -0700 Subject: [Numpy-discussion] How Can I find out if an array is Contiguous in an extension module? Message-ID: <39D3C6B2.1CB86EFA@jps.net> Hi all, I'm writing an extension module that uses PyArrayObjects, and I want to be able to tell if an array passed in is contiguous. 
I know that I can use PyArray_ContiguousFromObject, and it will just return a reference to the same array if it is contiguous, but I want to know whether or not a new array has been created. I realize that most of the time it doesn't matter, but in this case I am changing the array in place, and need to pass it to another function that is expecting a standard C array, so I need to make sure the user has passed in a contiguous array. I was surprised to not find a "PyArray_Contiguous" function, or something like it. I see that there is a field in PyArrayObject (int flags) that has a bit indicating whether a field is contiguous, but being a newbie to C as well, I'm not sure how to get at it. I'd love a handy utility function, but if one doesn't exist, can someone send me the code I need to check that bit? thanks, -Chris -- Christopher Barker, Ph.D. cbarker at jps.net --- --- --- http://www.jps.net/cbarker -----@@ -----@@ -----@@ ------@@@ ------@@@ ------@@@ Water Resources Engineering ------ @ ------ @ ------ @ Coastal and Fluvial Hydrodynamics ------- --------- -------- ------------------------------------------------------------------------ ------------------------------------------------------------------------ From pauldubois at home.com Thu Sep 28 22:10:05 2000 From: pauldubois at home.com (Paul F. Dubois) Date: Thu, 28 Sep 2000 19:10:05 -0700 Subject: [Numpy-discussion] How Can I find out if an array is Contiguous in an extension module? In-Reply-To: <39D3C6B2.1CB86EFA@jps.net> Message-ID: > I was surprised to not find a "PyArray_Contiguous" function, or > something like it.
The following macro is defined in arrayobject.h: #define PyArray_ISCONTIGUOUS(m) ((m)->flags & CONTIGUOUS) From pete at shinners.org Wed Sep 27 11:27:26 2000 From: pete at shinners.org (Pete Shinners) Date: Wed, 27 Sep 2000 08:27:26 -0700 Subject: [Numpy-discussion] precompiled binary for 17 Message-ID: <007b01c02897$71e241c0$0200a8c0@home> i've compiled my own numeric, and i thought i'd offer it back for other win users. i haven't seen a precompiled binary package for win32 yet, so maybe this can get added to sourceforge? if not, here's a link that'll work for a couple weeks http://www.shinners.org/pete/NumPy17_20.zip this is numeric-17.0, compiled for 2.0beta btw, much thanks and congrats to the numpy team. getting this compiled and installed was WORLDS better than working with the 16.x releases. thanks! (and congrats!) From pauldubois at home.com Fri Sep 29 10:20:27 2000 From: pauldubois at home.com (Paul F. Dubois) Date: Fri, 29 Sep 2000 07:20:27 -0700 Subject: [Numpy-discussion] precompiled binary for 17 In-Reply-To: <007b01c02897$71e241c0$0200a8c0@home> Message-ID: File released. Thank you! In future you can just email me directly. -- PFD > -----Original Message----- > From: numpy-discussion-admin at lists.sourceforge.net > [mailto:numpy-discussion-admin at lists.sourceforge.net]On Behalf Of Pete > Shinners > Sent: Wednesday, September 27, 2000 8:27 AM > To: Numpy Discussion > Subject: [Numpy-discussion] precompiled binary for 17 > > > i've compiled my own numeric, and i thought i'd offer it > back for other win users. i haven't seen a precompiled > binary package for win32 yet, so maybe this can get > added to sourceforge? if not, here's a link that'll > work for a couple weeks > > http://www.shinners.org/pete/NumPy17_20.zip > > this is numeric-17.0, compiled for 2.0beta > > > btw, much thanks and congrats to the numpy team. > getting this compiled and installed was WORLDS > better than working with the 16.x releases. thanks! > (and congrats!)
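The contiguity check Paul points at in the thread above can also be exercised from the Python side. A minimal sketch in modern NumPy — note this uses today's `numpy` spellings purely for illustration; the Numeric of the era exposed the same bit as the `a.iscontiguous()` method, and `ascontiguousarray` plays the role of PyArray_ContiguousFromObject:

```python
import numpy as np

a = np.arange(12).reshape(3, 4)      # a freshly built array is C-contiguous
assert a.flags['C_CONTIGUOUS']

b = a.T                              # a transpose is a strided view...
assert not b.flags['C_CONTIGUOUS']   # ...so it is NOT contiguous

c = np.ascontiguousarray(b)          # copies only when necessary, much like
assert c.flags['C_CONTIGUOUS']       # PyArray_ContiguousFromObject at the C level
assert c is not b                    # here a new array really was created
```

This is exactly the distinction Chris asked about: the flag tells you whether a conversion call would hand back the same array or a fresh copy.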
From o.hofmann at Smail.Uni-Koeln.de Thu Sep 7 05:22:57 2000 From: o.hofmann at Smail.Uni-Koeln.de (Oliver Hofmann) Date: Thu, 7 Sep 2000 11:22:57 +0200 Subject: [Numpy-discussion] Row/Column delete Message-ID: <20000907112257.B8034@smail.rrz.uni-koeln.de> 'lo everyone! Been fiddling around with NumPy lately, trying to use it for some simple clustering algorithms. Those require constant merging of rows and columns. Since the arrays are static in size I've been looking at the FAQ and docs for a function that helps with this, but have been unable to find any. What I am doing right now is: Set up a new matrix with one less row or column (zeros()), then copy all the rows/columns but the one I'd like to delete to the new matrix. However, this requires setting up a second matrix for each delete action. A combined row/column delete requires two copy actions, making it even slower. Is there a better/faster way to do this? I.e., parse through the old matrix and copy individual positions instead of rows/columns? Any help would be appreciated! Oliver Hofmann -- Oliver Hofmann - University of Cologne - Department of Biochemistry o.hofmann at smail.uni-koeln.de - setar at gmx.de - connla at thewell.com "It's too bad she won't live. But then, who does?" From absd00t at c1186.ae.ge.com Thu Sep 7 08:50:38 2000 From: absd00t at c1186.ae.ge.com (U-E59264-Osman F Buyukisik) Date: Thu, 7 Sep 2000 08:50:38 -0400 (EDT) Subject: [Numpy-discussion] Python-2.01b Message-ID: <14775.36445.869829.852906@c1186.ae.ge.com> Hello All, I made a mistake of installing the new 2.0 python! Now when I try to re-build numpy, I get errors. ...
building '_numpy' extension gcc -g -O2 -fpic -IInclude -I/home/absd00t/local/include/python2.0 -c Src/_numpymodule.c -o build/temp.hp-uxB/Src/_numpymodule.o -I/usr/include/Motif1.1 -I/usr/include/X11R4 In file included from Src/_numpymodule.c:4: Include/arrayobject.h:16: parse error before `Py_FPROTO' Include/arrayobject.h:18: parse error before `Py_FPROTO' Include/arrayobject.h:19: parse error before `Py_FPROTO' Include/arrayobject.h:22: parse error before `PyArray_VectorUnaryFunc' Include/arrayobject.h:22: warning: no semicolon at end of struct or union Include/arrayobject.h:24: warning: data definition has no type or storage class Include/arrayobject.h:25: parse error before `*' .... Anyone working on porting to version 2.0 yet?? Or back to version 1.5 or 1.6? TIA osman From jack at oratrix.nl Thu Sep 7 10:18:50 2000 From: jack at oratrix.nl (Jack Jansen) Date: Thu, 07 Sep 2000 16:18:50 +0200 Subject: [Numpy-discussion] Python-2.01b In-Reply-To: Message by U-E59264-Osman F Buyukisik , Thu, 7 Sep 2000 08:50:38 -0400 (EDT) , <14775.36445.869829.852906@c1186.ae.ge.com> Message-ID: <20000907141851.E1686303181@snelboot.oratrix.nl> > Hello All, > I made a mistake of installing the new 2.0 python! Now when I try to > re-build numpy, I get errors. > [...] > Include/arrayobject.h:18: parse error before `Py_FPROTO' I have the fixes on my disk at home, but I haven't gotten around to completely testing NumPy yet. I'll try and get around to it as soon as possible. That is: assuming people think it is a good idea I check these fixes in, let me know, please. (A quick workaround that may work is to add the following lines to Python.h: #define Py_FPROTO(x) x #define Py_PROTO(x) x but I haven't tested this).
-- Jack Jansen | ++++ stop the execution of Mumia Abu-Jamal ++++ Jack.Jansen at oratrix.com | ++++ if you agree copy these lines to your sig ++++ www.oratrix.nl/~jack | see http://www.xs4all.nl/~tank/spg-l/sigaction.htm From jhauser at ifm.uni-kiel.de Fri Sep 8 08:34:09 2000 From: jhauser at ifm.uni-kiel.de (Janko Hauser) Date: Fri, 8 Sep 2000 14:34:09 +0200 (CEST) Subject: [Numpy-discussion] 2 i/o questions and an announcement In-Reply-To: <20000906.8472600@cricket.> References: <20000906.8472600@cricket.> Message-ID: <20000908123410.7964.qmail@lisboa.ifm.uni-kiel.de> From my understanding of the code, Numpy2 uses the buffer interface. This would be the basis for a fast exchange of data between PIL and Numpy arrays. Regarding IO the pickling of arrays is quite fast. Have you tried this already? HTH, __Janko From rlw at stsci.edu Fri Sep 8 08:57:50 2000 From: rlw at stsci.edu (Rick White) Date: Fri, 8 Sep 2000 08:57:50 -0400 (EDT) Subject: [Numpy-discussion] 2 i/o questions and an announcement Message-ID: <200009081257.IAA26772@sundog.stsci.edu> Janko Hauser writes: > From my understanding of the code, Numpy2 uses the buffer > interface. This would be the basis for a fast exchange of data between > PIL and Numpy arrays. Regarding IO the pickling of arrays is quite > fast. Have you tried this already? Pickling is fine if all you want to do is create data within Numeric and save it. But it's not useful either for reading data that already exists in some format (the usual case) or for writing data in standard formats so it can be exchanged with other people.
Rick White From jhauser at ifm.uni-kiel.de Fri Sep 8 09:07:16 2000 From: jhauser at ifm.uni-kiel.de (Janko Hauser) Date: Fri, 8 Sep 2000 15:07:16 +0200 (CEST) Subject: [Numpy-discussion] 2 i/o questions and an announcement In-Reply-To: <200009081257.IAA26772@sundog.stsci.edu> References: <200009081257.IAA26772@sundog.stsci.edu> Message-ID: <20000908130716.8041.qmail@lisboa.ifm.uni-kiel.de> Rick White writes: > Pickling is fine if all you want to do is create data within Numeric > and save it. But it's not useful either for reading data that already > exists in some format (the usual case) or for writing data in standard > formats so it can be exchanged with other people. That's right. If you want to read binary formats for which you know the data layout, the numpyio package from Travis Oliphant is very fast and memory efficient, because it builds up the array during the read/write so no copy of the data string needs to be in memory. I see no other way to read other formats generally besides specific wrappers, like the netcdf interface. __Janko Hauser From nick at nickbower.com Sat Sep 9 00:15:51 2000 From: nick at nickbower.com (nick at nickbower.com) Date: Sat, 09 Sep 2000 04:15:51 GMT Subject: [Numpy-discussion] 2 i/o questions and an announcement In-Reply-To: <20000908123410.7964.qmail@lisboa.ifm.uni-kiel.de> References: <20000906.8472600@cricket.> <20000908123410.7964.qmail@lisboa.ifm.uni-kiel.de> Message-ID: <20000909.4155100@cricket.> > From my understanding of the code, Numpy2 uses the buffer > interface. This would be the basis for a fast exchange of data between > PIL and Numpy arrays. I should have been more specific <:) This problem was related to the current version of NumPy's interaction with PIL. > Regarding IO the pickling of arrays is quite > fast. Have you tried this already? So is array.fromfile (faster I would expect). The issue at hand is not the read speed, but avoiding reading into memory then re-copying it over to NumPy formats.
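The no-copy read nick is after looks roughly like this in present-day NumPy. This is only a sketch using today's `numpy.fromfile`/`tofile` names (the Numeric-era equivalents were fromstring() and the numpyio package mentioned in this thread), and the file path is made up for the example:

```python
import numpy as np
import os
import tempfile

# write some raw float64 values, as a large data file on disk might contain
a = np.arange(1000, dtype=np.float64)
path = os.path.join(tempfile.mkdtemp(), 'data.bin')
a.tofile(path)

# read the bytes straight back into an array: no array-module detour,
# no intermediate string copy to re-cast afterwards
b = np.fromfile(path, dtype=np.float64)
assert b.shape == (1000,)
assert b[0] == 0.0 and b[-1] == 999.0
```

For a file with a header of known size, the same call can follow an ordinary seek() past the header bytes before reading the payload.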
> That's right. If you want to read binary formats, for which you > know the data layout the numpyio package from Travis > Oliphant is very fast and memory efficient, because it > builds up the array during the read/write so no copy > of the data string needs to be in memory. Ah-ha! Sounds exactly what I wanted. I'll check it out. Thanks, Nick. From pauldubois at home.com Mon Sep 11 11:28:39 2000 From: pauldubois at home.com (Paul F. Dubois) Date: Mon, 11 Sep 2000 08:28:39 -0700 Subject: [Numpy-discussion] Reference on numpy home page In-Reply-To: Message-ID: Travis, I added a reference to your multipack home page on http://numpy.sourceforge.net. I encourage you and other nummies to edit that section of the home page to add other references to primary collections of numpy-compatible software, or to add details to my descriptions. To do it, log in via ssh to sourceforge, cd to /home/groups/numpy/htdocs, and edit index.html. I think we can rely on sloth to prevent edit conflicts, there being so few of us. Paul From pearu at ioc.ee Tue Sep 12 12:12:59 2000 From: pearu at ioc.ee (Pearu Peterson) Date: Tue, 12 Sep 2000 19:12:59 +0300 (EETDST) Subject: [Numpy-discussion] ANNOUNCEMENT: Fortran to Python Interface Generator, 2nd Rel. Message-ID: FPIG - Fortran to Python Interface Generator I am pleased to announce the second public release of f2py (version 2.264): http://cens.ioc.ee/projects/f2py2e/ f2py is a command line tool for binding Python and Fortran codes. It scans Fortran 77/90/95 codes and generates a Python C/API module that makes it possible to call Fortran routines from Python. No Fortran or C expertise is required for using this tool. Features include: *** All basic Fortran types are supported: integer[ | *1 | *2 | *4 | *8 ], logical[ | *1 | *2 | *4 | *8 ], character[ | *(*) | *1 | *2 | *3 | ... ] real[ | *4 | *8 | *16 ], double precision, complex[ | *8 | *16 | *32 ] *** Multi-dimensional arrays of (almost) all basic types. 
Dimension specifications: | : | * | : *** Supported attributes: intent([ in | inout | out | hide | in,out | inout,out ]) dimension() depend([]) check([]) note() optional, required, external *** Calling Fortran 77/90/95 subroutines and functions. Also Fortran 90/95 module routines. Internal initialization of optional arguments. *** Accessing COMMON blocks from Python. Accessing Fortran 90/95 module data coming soon. *** Call-back functions: calling Python functions from Fortran with very flexible hooks. *** In Python, arguments of the interfaced functions may be of different types - necessary type conversions are done internally at the C level. *** Automatically generates documentation (__doc__,LaTeX) for interface functions. *** Automatically generates signature files --- user has full control over the interface constructions. Automatically detects the signatures of call-back functions, solves argument dependencies, etc. *** Automatically generates Makefile for compiling Fortran and C codes and linking them to a shared module. Many compilers are supported: gcc, Compaq Fortran, VAST/f90 Fortran, Absoft F77/F90, MIPSpro 7 Compilers, etc. Platforms: Intel/Alpha Linux, HP-UX, IRIX64. *** Complete User's Guide in various formats (html,ps,pdf,dvi). *** f2py users list is available for support, feedback, etc. For more information about f2py, see http://cens.ioc.ee/projects/f2py2e/ f2py is released under the LGPL license. Sincerely, Pearu Peterson September 12, 2000

f2py 2.264 - The Fortran to Python Interface Generator (12-Sep-00) From jsaenz at wm.lc.ehu.es Tue Sep 12 11:56:16 2000 From: jsaenz at wm.lc.ehu.es (Jon Saenz) Date: Tue, 12 Sep 2000 17:56:16 +0200 (MET DST) Subject: [Numpy-discussion] Univariate and multivariate density estimation using Python Message-ID: Hello, all. I am announcing a C extension module which computes univariate and multivariate probability density functions by means of a kernel-based approach. The module includes functions to perform the estimation using the following kernels: * One-dimensional data: Epanechnikov Biweight Triangular * 2 or 3-dimensional data: Epanechnikov Multivariate Gaussian For multivariate data, there is the optional feature of scaling each axis by means of a matrix. This approach allows the definition of Fukunaga-type estimators. The functions in the module are used as follows: import KPDF # edata and gdata must be numpy arrays of shapes (N,) and (E,) # while h is a scalar # pdf1 is a numpy array which holds the PDF evaluated at points # gdata with experimental data in edata. pdf1.shape=(E,) pdf1=KPDF.UPDFEpanechnikov(edata,gdata,h) # For multivariate estimation, edata and gdata must be numpy arrays of # shapes (N,2|3) and (E,2|3) while h is a scalar # pdf2 is a numpy array which holds the PDF evaluated at points # gdata with experimental data in edata. pdf2.shape=(E,) pdf2=KPDF.MPDFEpanechnikov(e2data,g2data,h) # For Fukunaga-type estimators, Sm1 must be a numpy array 2x2(3x3) # and holds the covariance matrix. sqrtdetS is the square root of the # determinant pdf2=KPDF.MPDFEpanechnikov(e2data,g2data,h,Sm1,sqrtdetS) There is not a lot of documentation in the module, but I have a serious commitment to preparing it soon. It can be downloaded from: http://starship.python.net/crew/jsaenz/KPDF.tar.gz Feedback from interested users will be greatly appreciated. Regards. Jon Saenz. | Tfno: +34 946012470 Depto. Fisica Aplicada II | Fax: +34 944648500 Facultad de Ciencias. 
\\ Universidad del Pais Vasco \\ Apdo. 644 \\ 48080 - Bilbao \\ SPAIN From dgoodger at atsautomation.com Tue Sep 12 10:33:22 2000 From: dgoodger at atsautomation.com (Goodger, David) Date: Tue, 12 Sep 2000 10:33:22 -0400 Subject: [Numpy-discussion] NumPy built distributions Message-ID: ATTN: Numeric Python project administrators There is a lot of interest in NumPy out there. Every time a new version of Numeric or Python comes out, people are interested in obtaining built distributions. Many of us are not able to build from source; without a repository of built distributions I'm afraid that many people will give up. The MacOS community is lucky that Jack Jansen, who maintains the MacPython distribution, also includes a built NumPy with his installers. Other OSes are not so lucky. Robin Becker (robin at jessikat.fsnet.co.uk) has built NumPy for Win32 many times and emails copies to those who request them. Unfortunately for him, but fortunately for NumPy, the demand is getting too high for this method of distribution. It would be great if these built distributions were available for all, with minimal overhead to developers/packagers. Would it be possible to add Robin's installers to the FTP site? How should he (or I, if he's not able) go about this? David Goodger Systems Administrator, Advanced Systems Automation Tooling Systems Inc., Automation Systems Division direct: (519) 653-4483 ext. 7121 fax: (519) 650-6695 e-mail: dgoodger at atsautomation.com goodger at users.sourceforge.net From NumPy's Open Discussion forum (http://sourceforge.net/forum/forum.php?forum_id=3847): By: rgbecker ( Robin Becker ) Numeric-16.0 win32 builds 2000-Sep-09 03:57 I have built and zipped binaries for Numeric-16.0 for Python-1.5.2/1.6b1/2.0b1 I am fed up with sending these binaries to people individually. Is there an FTP location I could use on the sourceforge site? 
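[Editorial note: backing up to Jon Saenz's KPDF announcement above — the univariate Epanechnikov estimator it exposes can be sketched in a few lines of array code. This is a from-the-definition sketch in modern numpy, mirroring the UPDFEpanechnikov call signature but not KPDF's actual C implementation.]

```python
import numpy as np

def updf_epanechnikov(edata, gdata, h):
    """Estimate the PDF at grid points gdata (shape (E,)) from samples
    edata (shape (N,)) with bandwidth h, using the Epanechnikov kernel
    K(u) = 0.75 * (1 - u**2) on |u| <= 1, zero elsewhere."""
    u = (gdata[:, None] - edata[None, :]) / h           # (E, N) scaled offsets
    k = np.where(np.abs(u) <= 1.0, 0.75 * (1.0 - u * u), 0.0)
    return k.sum(axis=1) / (len(edata) * h)             # shape (E,)

# One sample at 0.0, evaluated at the sample itself: density is K(0)/h = 0.75.
pdf = updf_epanechnikov(np.array([0.0]), np.array([0.0]), 1.0)
```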
From DavidA at ActiveState.com Tue Sep 12 16:13:07 2000 From: DavidA at ActiveState.com (David Ascher) Date: Tue, 12 Sep 2000 13:13:07 -0700 (Pacific Daylight Time) Subject: [Numpy-discussion] NumPy built distributions In-Reply-To: Message-ID: > Would it be possible to add Robin's installers to the FTP site? How should > he (or I, if he's not able) go about this? I'm working on it right now. I need to repackage his code a bit since he just shipped me the .pyds and I'd like to make it a drop-in zip file (meaning I just add the .py and the .pth file). --david From pauldubois at home.com Tue Sep 12 16:17:37 2000 From: pauldubois at home.com (Paul F. Dubois) Date: Tue, 12 Sep 2000 13:17:37 -0700 Subject: [Numpy-discussion] NumPy built distributions In-Reply-To: Message-ID: I'm willing to add any built distributions that you make. In the longer run I am happy to welcome additional developers who just want to be able to do these releases. Here is what you do: ftp downloads.sourceforge.net cd /incoming Upload your file Send me email telling me the name of the file. I will reply when I have done it. I don't know how long they let the files sit in incoming before blowing them away but let's try it and see how things work out. Please name the files so that they indicate the platform if not obvious, Numeric version AND the Python version with which they were built. I suggest something like Numeric-16.0-Python1.6.rpm, etc. The Python versions 1.5.2, 1.6, and 2.0 are incompatible. > -----Original Message----- > From: numpy-discussion-admin at lists.sourceforge.net > [mailto:numpy-discussion-admin at lists.sourceforge.net]On Behalf Of > Goodger, David > Sent: Tuesday, September 12, 2000 7:33 AM > To: numpy-discussion at lists.sourceforge.net > Cc: robin at jessikat.fsnet.co.uk > Subject: [Numpy-discussion] NumPy built distributions > > > ATTN: Numeric Python project administrators > > There is a lot of interest in NumPy out there. 
Every time a new version of > Numeric or Python comes out, people are interested in obtaining built > distributions. Many of us are not able to build from source; without a > repository of built distributions I'm afraid that many people > will give up. > > The MacOS community is lucky that Jack Jansen, who maintains the MacPython > distribution, also includes a built NumPy with his installers. Other OSes > are not so lucky. > > Robin Becker (robin at jessikat.fsnet.co.uk) has built NumPy for Win32 many > times and emails copies to those who request them. Unfortunately for him, > but fortunately for NumPy, the demand is getting too high for > this method of > distribution. It would be great if these built distributions were > available > for all, with minimal overhead to developers/packagers. > > Would it be possible to add Robin's installers to the FTP site? How should > he (or I, if he's not able) go about this? > > David Goodger > Systems Administrator, Advanced Systems > Automation Tooling Systems Inc., Automation Systems Division > direct: (519) 653-4483 ext. 7121 fax: (519) 650-6695 > e-mail: dgoodger at atsautomation.com > goodger at users.sourceforge.net > > > From NumPy's Open Discussion forum > (http://sourceforge.net/forum/forum.php?forum_id=3847): > > By: rgbecker ( Robin Becker ) > Numeric-16.0 win32 builds [ reply ] > 2000-Sep-09 03:57 > I have built and zipped binaries for Numeric-16.0 for > Python-1.5.2/1.6b1/2.0b1 > I am fed up with sending these binaries to people individually. > Is there an FTP location I could use on the sourceforge site? 
> _______________________________________________ > Numpy-discussion mailing list > Numpy-discussion at lists.sourceforge.net > http://lists.sourceforge.net/mailman/listinfo/numpy-discussion From o.hofmann at smail.uni-koeln.de Tue Sep 12 09:03:28 2000 From: o.hofmann at smail.uni-koeln.de (Oliver Hofmann) Date: Tue, 12 Sep 2000 15:03:28 +0200 Subject: [Numpy-discussion] Row/Column delete In-Reply-To: <200009121202.OAA03292@chinon.cnrs-orleans.fr>; from hinsen@cnrs-orleans.fr on Tue, Sep 12, 2000 at 02:02:49PM +0200 References: <20000907112257.B8034@smail.rrz.uni-koeln.de> <200009121202.OAA03292@chinon.cnrs-orleans.fr> Message-ID: <20000912150327.B19@smail.rrz.uni-koeln.de> Konrad Hinsen (hinsen at cnrs-orleans.fr) wrote: > I haven't done any comparisons, but I suspect that Numeric.take() is > faster for large arrays. Try this: Yes, indeed. Actually, it is so fast that the first time I rewrote the script last week I was quite convinced it had to be buggy. Thanks again everyone, Oliver -- Oliver Hofmann - University of Cologne - Department of Biochemistry o.hofmann at smail.uni-koeln.de - setar at gmx.de - connla at thewell.com "It's too bad she won't live. But then, who does?" From hinsen at dirac.cnrs-orleans.fr Tue Sep 12 08:03:11 2000 From: hinsen at dirac.cnrs-orleans.fr (hinsen at dirac.cnrs-orleans.fr) Date: Tue, 12 Sep 2000 14:03:11 +0200 Subject: [Numpy-discussion] Row/Column delete In-Reply-To: <20000907112257.B8034@smail.rrz.uni-koeln.de> (message from Oliver Hofmann on Thu, 7 Sep 2000 11:22:57 +0200) References: <20000907112257.B8034@smail.rrz.uni-koeln.de> Message-ID: <200009121203.OAA03295@chinon.cnrs-orleans.fr> > What I am doing right now is: Set up a new matrix with one less row > or column (zeros()), then copy all the rows/columns but the one I'd > like to delete to the new matrix. I haven't done any comparisons, but I suspect that Numeric.take() is faster for large arrays. 
Try this: def delete_row(matrix, row): return Numeric.take(matrix, range(row) + range(row+1, matrix.shape[0])) def delete_column(matrix, column): return Numeric.take(matrix, range(column) + range(column+1, matrix.shape[1]), axis = 1) Konrad. -- ------------------------------------------------------------------------------- Konrad Hinsen | E-Mail: hinsen at cnrs-orleans.fr Centre de Biophysique Moleculaire (CNRS) | Tel.: +33-2.38.25.56.24 Rue Charles Sadron | Fax: +33-2.38.63.15.17 45071 Orleans Cedex 2 | Deutsch/Esperanto/English/ France | Nederlands/Francais ------------------------------------------------------------------------------- From hinsen at cnrs-orleans.fr Tue Sep 12 08:02:49 2000 From: hinsen at cnrs-orleans.fr (Konrad Hinsen) Date: Tue, 12 Sep 2000 14:02:49 +0200 Subject: [Numpy-discussion] Row/Column delete In-Reply-To: <20000907112257.B8034@smail.rrz.uni-koeln.de> (message from Oliver Hofmann on Thu, 7 Sep 2000 11:22:57 +0200) References: <20000907112257.B8034@smail.rrz.uni-koeln.de> Message-ID: <200009121202.OAA03292@chinon.cnrs-orleans.fr> > What I am doing right now is: Set up a new matrix with one less row > or column (zeros()), then copy all the rows/columns but the one I'd > like to delete to the new matrix. I haven't done any comparisons, but I suspect that Numeric.take() is faster for large arrays. Try this: def delete_row(matrix, row): return Numeric.take(matrix, range(row) + range(row+1, matrix.shape[0])) def delete_column(matrix, column): return Numeric.take(matrix, range(column) + range(column+1, matrix.shape[1]), axis = 1) Konrad. 
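[Editorial note: Konrad's take-based deletion carries over directly to modern numpy; a runnable version follows. Two portability notes are assumed: numpy's take wants an explicit axis argument, and Python 3 range objects no longer concatenate with +, hence the list() calls.]

```python
import numpy as np

def delete_row(matrix, row):
    keep = list(range(row)) + list(range(row + 1, matrix.shape[0]))
    return np.take(matrix, keep, axis=0)

def delete_column(matrix, column):
    keep = list(range(column)) + list(range(column + 1, matrix.shape[1]))
    return np.take(matrix, keep, axis=1)

m = np.arange(9).reshape(3, 3)    # [[0, 1, 2], [3, 4, 5], [6, 7, 8]]
no_mid_row = delete_row(m, 1)     # rows 0 and 2 survive
no_first_col = delete_column(m, 0)  # columns 1 and 2 survive
```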
-- ------------------------------------------------------------------------------- Konrad Hinsen | E-Mail: hinsen at cnrs-orleans.fr Centre de Biophysique Moleculaire (CNRS) | Tel.: +33-2.38.25.56.24 Rue Charles Sadron | Fax: +33-2.38.63.15.17 45071 Orleans Cedex 2 | Deutsch/Esperanto/English/ France | Nederlands/Francais ------------------------------------------------------------------------------- From pauldubois at home.com Wed Sep 13 01:15:04 2000 From: pauldubois at home.com (Paul F. Dubois) Date: Tue, 12 Sep 2000 22:15:04 -0700 Subject: [Numpy-discussion] Sourceforge down time scheduled -- heads up Message-ID: -----Original Message----- From: noreply at sourceforge.net [mailto:noreply at sourceforge.net] Sent: Tuesday, September 12, 2000 6:18 PM To: noreply at sourceforge.net Subject: SourceForge: Important Site News Dear SourceForge User, As the Director of SourceForge, I want to thank you in making SourceForge the most successful Open Source Development Site in the World. We just surpassed 60,000 registered users and 8,800 open source projects. We have a lot of exciting things planned for SourceForge in the coming months. These include faster servers, improved connectivity, mirrored servers, and the addition of more hardware platforms to our compile farm (including Sparc, PowerPC, Alpha, and more). Did I mention additional storage? The new servers that we will be adding to the site will increase the total storage on SourceForge by an additional 6 terabytes. In 10 days we will begin the first phase of our new hardware build out. This phase involves moving the primary site to our new location in Fremont, California. This move will take place on Friday night (Sept 22nd) at 10pm and continue to 8am Saturday morning (Pacific Standard Time). During this time the site will be off-line as we make the physical change. I know many of you use Sourceforge as your primary development environment, so I want to apologize in advance for the inconvenience of this downtime. 
If you have any concerns about this, please feel free to email me. I will write you again as this date nears, with a reminder and an update. Thank you again for using SourceForge.net -- Patrick McGovern Director of SourceForge.net Pat at sourceforge.net --------------------- This email was sent from sourceforge.net. To change your email receipt preferences, please visit the site and edit your account via the "Account Maintenance" link. Direct any questions to admin at sourceforge.net, or reply to this email. From sanner at scripps.edu Wed Sep 13 19:49:52 2000 From: sanner at scripps.edu (Michel Sanner) Date: Wed, 13 Sep 2000 16:49:52 -0700 Subject: [Numpy-discussion] NumPy built distributions > Message-ID: <1000913164952.ZM165594@noah.scripps.edu> Hello, We have a download site for Python modules (which I meant to make public .. just didn't have time yet). It enables downloading precompiled Python interpreters (my users are not wanting to have to compile the interpreter) and a large number of Python extensions including Numeric for the following platforms: sgi IRIX5 sgi IRIX6 sun SunOS5 Dec Alpha Linux feel free to give it a shot ! http://www.scripps.edu/~sanner/Python and scroll down to Downloads Any feedback is welcome ! -Michel -- ----------------------------------------------------------------------- >>>>>>>>>> AREA CODE CHANGE <<<<<<<<< we are now 858 !!!!!!! Michel F. Sanner Ph.D. The Scripps Research Institute Assistant Professor Department of Molecular Biology 10550 North Torrey Pines Road Tel. (858) 784-2341 La Jolla, CA 92037 Fax. (858) 784-2860 sanner at scripps.edu http://www.scripps.edu/sanner ----------------------------------------------------------------------- From pauldubois at home.com Thu Sep 14 08:10:53 2000 From: pauldubois at home.com (Paul F. Dubois) Date: Thu, 14 Sep 2000 05:10:53 -0700 Subject: [Numpy-discussion] Da Blas Message-ID: Well, I certainly didn't mean to flame anyone in particular, so I'm sorry that Frank was offended. 
I suppose I got irritated when I tried to solve one problem and simply created one that was harder to solve. I don't think any of us quite appreciated how bad the situation is. No good deed goes unpunished. I tried yesterday to help someone internally with an SGI system. They had two new wrinkles on this: a. They had their own blas but not their own lapack, in /usr/lib. Well, I had allowed for that in my second version, but of course the imprecision of the -L stuff makes it possible to get the wrong one that way. b. I did get the /usr/lib version of the blas, and boy was it the wrong one, since apparently there are two binary modes on an SGI and it was not -n32 like Python expected. Neither was the lapack_lite one. I had to edit the Makefile. That problem could be avoided using distutils to do the compile. The solution was to edit setup.py to remove /usr/lib from the search list. The problem with simply using the desired library as a binary instead of a library is that you would load all of it. I do not know how much control you can get with the -l -L paradigm, especially in a portable way. From pauldubois at home.com Thu Sep 14 09:02:26 2000 From: pauldubois at home.com (Paul F. Dubois) Date: Thu, 14 Sep 2000 06:02:26 -0700 Subject: [Numpy-discussion] Survey available on lapack_lite question Message-ID: I have started a survey on sourceforge which asks the question, should lapack_lite and the modules that depend upon it be moved to the optional Packages section. Pros: this would move these modules out of a privileged position into equality with other "application" packages. Their presence in the core is unnecessary. The compilation problems, as yet unsolved, with lapack_lite vs. alternate implementations, break the installation of the core even for people who are not using the facilities. Cons: It would require extra effort to install the Numerical distribution from source and get back to the current status quo. Some people would be confused at the changes. 
Note: If we did it "right", and made these things into packages instead of modules that install into Numeric, changes to scripts would be required, but that is not being proposed here. My vision of this change is that they would still install directly into Numeric, and would be included in all "prebuilt" distributions. From frank at ned.dem.csiro.au Fri Sep 15 07:28:44 2000 From: frank at ned.dem.csiro.au (Frank Horowitz) Date: Fri, 15 Sep 2000 19:28:44 +0800 Subject: [Numpy-discussion] Da Blas Message-ID: At 3:14 PM -0700 14/9/00, wrote: >Well, I certainly didn't mean to flame anyone in particular, so I'm sorry >that Frank was offended. Publicly apologized, and publicly accepted. I'm sorry I lost my cool. Ray Beausoleil and I are having a look at this too. He's coming at the problem from a Windows perspective, and I'm coming at it from a Linux/Unix perspective. It's going to be a little slow going for me at first, since it appears that I'm going to have to get my head around the distutils way of doing things.... Just in case it saves anyone else following a false lead, yes, there is a bug in the (Linux Mandrake 7.1; but probably any unix-ish box) compile line for building the lapack_lite.so shared library (i.e. what is generated on my box is "-L/usr/lib-llapack" which obviously is missing a blank before the "-llapack" substring). However, when I coerced the distutils system to get around that bug (by specifying "/usr/lib " with a trailing blank for the BLASLIBDIR and LAPACKLIBDIR variables in setup.py) the same problem (i.e. an "ImportError: /usr/lib/liblapack.so.3: undefined symbol: e_wsfe" in importing lapack_lite) ultimately manifested itself. So my advice is not to bother tracking that one down (although it probably should be reported as a bug in distutils, since adding that trailing blank algorithmically instead of in a user modifiable configuration string is the "right thing to do", TM.). 
I'm still puzzled by Thomas Breul's report of preloading an f2c library squashing the bug under RedHat 6.2(?). That does not work under Mandrake, although clearly there is some bloody symbol table missing from somewhere. The trouble is to find out where, and then to coerce distutils to deal with that in a portable way... Frank -- -- Frank Horowitz frank at ned.dem.csiro.au CSIRO-Exploration & Mining, PO Box 437, Nedlands, WA 6009, AUSTRALIA Direct: +61 8 9284 8431; FAX: +61 8 9389 1906; Reception: +61 8 9389 8421 From ransom at cfa.harvard.edu Fri Sep 15 11:30:38 2000 From: ransom at cfa.harvard.edu (Scott M. Ransom) Date: Fri, 15 Sep 2000 11:30:38 -0400 Subject: [Numpy-discussion] Da Blas References: Message-ID: <39C2409E.51659120@cfa.harvard.edu> Frank Horowitz wrote: > > However, when I > coerced the distutils system to get around that bug (by specifying > "/usr/lib " with a trailing blank for the BLASLIBDIR and LAPACKLIBDIR > variables in setup.py) the same problem (i.e. an "ImportError: > /usr/lib/liblapack.so.3: undefined symbol: e_wsfe" in importing > lapack_lite) ultimately manifested itself. This problem is easily fixed (at least on linux) by performing the link of lapack_lite.so with g77 instead of gcc (this is required because the lapack and/or blas libraries are based on fortran object files...). For instance the out-of-box link command on my machine (Debian 2.2) is: gcc -shared build/temp.linux2/Src/lapack_litemodule.o -L/usr/local/lib -L/usr/lib -llapack -lblas -o build/lib.linux2/lapack_lite.so Simply change the 'gcc' to 'g77' and everything works nicely. Not sure if this is specific to Linux or not... Scott -- Scott M. Ransom Address: Harvard-Smithsonian CfA Phone: (617) 495-4142 60 Garden St. 
MS 10 email: ransom at cfa.harvard.edu Cambridge, MA 02138 GPG Fingerprint: 06A9 9553 78BE 16DB 407B FFCA 9BFA B6FF FFD3 2989 From frank at ned.dem.csiro.au Fri Sep 15 20:39:39 2000 From: frank at ned.dem.csiro.au (Frank Horowitz) Date: Sat, 16 Sep 2000 08:39:39 +0800 Subject: [Numpy-discussion] Da Blas In-Reply-To: <39C2409E.51659120@cfa.harvard.edu> References: Message-ID: At 11:30 AM -0400 9/15/00, Scott M. Ransom wrote: >Frank Horowitz wrote: >> >> However, when I >> coerced the distutils system to get around that bug (by specifying >> "/usr/lib " with a trailing blank for the BLASLIBDIR and LAPACKLIBDIR >> variables in setup.py) the same problem (i.e. an "ImportError: >> /usr/lib/liblapack.so.3: undefined symbol: e_wsfe" in importing >> lapack_lite) ultimately manifested itself. > >This problem is easily fixed (at least on linux) by performing the link >of lapack_lite.so with g77 instead of gcc (this is required because the >lapack and/or blas libraries are based on fortran object files...). Good on ya, Scott. You nailed it. When I did this by hand, it worked. What I now need to figure out is how to coerce distutils into doing that automagically so others can benefit without this pain! Cheers, Frank Horowitz Dr. Frank Horowitz __ \ CSIRO Exploration & Mining ,~' L_|\ Australian 39 Fairway, PO Box 437, ;-' \ Geodynamics Nedlands, WA 6009 Australia ( \ Cooperative Phone +61 9 284 8431 + ___ / Research Fax +61 9 389 1906 L~~' "\__/ Centre frank at ned.dem.csiro.au W From kern at its.caltech.edu Fri Sep 15 22:10:10 2000 From: kern at its.caltech.edu (Robert Kern) Date: Fri, 15 Sep 2000 19:10:10 -0700 (PDT) Subject: [Numpy-discussion] Da Blas In-Reply-To: Message-ID: On Sat, 16 Sep 2000, Frank Horowitz wrote: > At 11:30 AM -0400 9/15/00, Scott M. 
Ransom wrote: > >Frank Horowitz wrote: > >> > >> However, when I > >> coerced the distutils system to get around that bug (by specifying > >> "/usr/lib " with a trailing blank for the BLASLIBDIR and LAPACKLIBDIR > >> variables in setup.py) the same problem (i.e. an "ImportError: > >> /usr/lib/liblapack.so.3: undefined symbol: e_wsfe" in importing > >> lapack_lite) ultimately manifested itself. > > > >This problem is easily fixed (at least on linux) by performing the link > >of lapack_lite.so with g77 instead of gcc (this is required because the > >lapack and/or blas libraries are based on fortran object files...). > > Good on ya, Scott. You nailed it. When I did this by hand, it worked. What > I now need to figure out is how to coerce distutils into doing that > automagically so others can benefit without this pain! Adding "-lg2c" to the end of the gcc (not g77) link line always did the trick for me with FORTRAN libraries. I'm not sure if g77 does anything more than that compared with gcc for linking. > Cheers, > Frank Horowitz -- Robert Kern kern at caltech.edu "In the fields of hell where the grass grows high Are the graves of dreams allowed to die." -- Richard Harter From pauldubois at home.com Sun Sep 17 00:56:16 2000 From: pauldubois at home.com (Paul F. Dubois) Date: Sat, 16 Sep 2000 21:56:16 -0700 Subject: [Numpy-discussion] Numeric addition: put Message-ID: In the CVS sources there is now a new function "put". "put" is roughly the opposite of "take" put(x, indices, values) is roughly x.flat[j] = values[j] for all j in indices The fine print will appear in the manuals shortly. From DavidA at ActiveState.com Sun Sep 17 01:04:01 2000 From: DavidA at ActiveState.com (David Ascher) Date: Sat, 16 Sep 2000 22:04:01 -0700 (Pacific Daylight Time) Subject: [Numpy-discussion] Numeric addition: put In-Reply-To: Message-ID: > In the CVS sources there is now a new function "put". > "put" is roughly the opposite of "take" yeah! Go, Paul, Go! 
=) --david From Oliphant.Travis at mayo.edu Mon Sep 18 10:06:39 2000 From: Oliphant.Travis at mayo.edu (Travis Oliphant) Date: Mon, 18 Sep 2000 09:06:39 -0500 (CDT) Subject: [Numpy-discussion] Re: New put function In-Reply-To: <200009171917.MAA12610@lists.sourceforge.net> Message-ID: Paul, Great job on the new put function. I can't wait to try it out. This goes a long way towards eliminating one of the few complaints I've heard about Numerical Python when compared to other similar environments. -Travis From jonathan.gilligan at vanderbilt.edu Mon Sep 18 17:34:17 2000 From: jonathan.gilligan at vanderbilt.edu (Jonathan M. Gilligan) Date: Mon, 18 Sep 2000 16:34:17 -0500 Subject: [Numpy-discussion] Building Numpy under Python 1.6, Windows Message-ID: <5.0.0.25.0.20000918161015.00a225a0@g.mail.vanderbilt.edu> I have recently upgraded to Python 1.6 (final). Now I can't figure out how to build Numpy with lapack_lite under windows. There is not a makefile for lapack_lite for MSVC and the instructions in README don't really help. All that setup.py will do is complain that lapack_lite is not installed. Specifically, are there specific compiler flags that I need to use to compile lapack lite? Should I just emulate the unix version and do cl -c *.c lib -out:blas.lib blas_lite.obj lib -out:lapack.lib dlapack_lite.obj f2c_lite.obj zlapack_lite.obj I tried doing this, but when I try to run the NumTut examples, I get the following error: D:\Programming\python\Numerical\Demo>python Python 1.6 (#0, Sep 5 2000, 08:16:13) [MSC 32 bit (Intel)] on win32 Copyright (c) 1995-2000 Corporation for National Research Initiatives. All Rights Reserved. Copyright (c) 1991-1995 Stichting Mathematisch Centrum, Amsterdam. All Rights Reserved. >>> import NumTut Traceback (most recent call last): File "", line 1, in ? File "NumTut\__init__.py", line 14, in ? 
greece = pickle.load(open(os.path.join(_dir, 'greece.pik'), 'rb')) / 256.0 File "D:\Programming\python\Python 1.6\lib\pickle.py", line 855, in load return Unpickler(file).load() File "D:\Programming\python\Python 1.6\lib\pickle.py", line 515, in load dispatch[key](self) File "D:\Programming\python\Python 1.6\lib\pickle.py", line 688, in load_global klass = self.find_class(module, name) File "D:\Programming\python\Python 1.6\lib\pickle.py", line 698, in find_class raise SystemError, \ SystemError: Failed to import class array_constructor from module Numeric >>> FWIW, I am using the versions from :pserver:anonymous at cvs.numpy.sourceforge.net:/cvsroot/numpy, module Numerical, checked out as of Sept. 14. I didn't see any tags defined in the repository corresponding to release 16.x. Did I miss something? Should the releases not be getting tagged in the CVS repository? Thanks for any pointers, Jonathan From jonathan.gilligan at vanderbilt.edu Mon Sep 18 17:54:31 2000 From: jonathan.gilligan at vanderbilt.edu (Jonathan M. Gilligan) Date: Mon, 18 Sep 2000 16:54:31 -0500 Subject: [Numpy-discussion] Problem Solved Re: Building Numpy under Python 1.6, Windows In-Reply-To: <5.0.0.25.0.20000918161015.00a225a0@g.mail.vanderbilt.edu> Message-ID: <5.0.0.25.0.20000918165215.06201a10@g.mail.vanderbilt.edu> I have solved my problem building numpy under Python 1.6, Windows. The problem was that Demos/NumTut/greece.pik was improperly checked into the CVS repository as a text file, when it should have been binary. Jonathan From pauldubois at home.com Mon Sep 18 18:29:04 2000 From: pauldubois at home.com (Paul F. Dubois) Date: Mon, 18 Sep 2000 15:29:04 -0700 Subject: [Numpy-discussion] Building Numpy under Python 1.6, Windows In-Reply-To: <5.0.0.25.0.20000918161015.00a225a0@g.mail.vanderbilt.edu> Message-ID: Would you try syncing up to the repository and try again? I redid this stuff over the weekend because so many people are having trouble. 
The thing should build in lapack by default and it is in an optional package to boot. I didn't announce this because I wanted some of my fellow developers to suffer first, but they don't seem to be in a suffering mood today. Do cvs update -d -P to get the new structure. We haven't been doing cvs tags. I suppose if I could remember how to do them I would do them when I am the one cutting the release (CVS is not my usual source control system). > -----Original Message----- > From: numpy-discussion-admin at lists.sourceforge.net > [mailto:numpy-discussion-admin at lists.sourceforge.net]On Behalf Of > Jonathan M. Gilligan > Sent: Monday, September 18, 2000 2:34 PM > To: numpy-discussion at lists.sourceforge.net > Cc: Jonathan M. Gilligan > Subject: [Numpy-discussion] Building Numpy under Python 1.6, Windows > > > I have recently upgraded to Python 1.6 (final). Now I can't > figure out how > to build Numpy with lapack_lite under windows. There is not a > makefile for > lapack_lite for MSVC and the instructions in README don't really > help. All > that setup.py will do is complain that lapack_lite is not installed. > From jonathan.gilligan at vanderbilt.edu Tue Sep 19 10:53:55 2000 From: jonathan.gilligan at vanderbilt.edu (Jonathan M. Gilligan) Date: Tue, 19 Sep 2000 09:53:55 -0500 Subject: [Numpy-discussion] Tagging releases under CVS (was RE: [Numpy-discussion] Building Numpy under Python 1.6, Windows) In-Reply-To: References: <5.0.0.25.0.20000918161015.00a225a0@g.mail.vanderbilt.edu> Message-ID: <5.0.0.25.0.20000919094005.044a80c0@g.mail.vanderbilt.edu> To tag the version you currently have checked out, go into the root directory of the module (e.g., Numeric) and AFTER CHECKING IN ALL MODIFICATIONS do cvs tag [-c] <tag> where <tag> is the tag name. This will apply <tag> to the current revision of each file that you have checked out. 
It is important to note that the tag is applied to the repository, so it is
essential that you check in all modified files and resolve conflicts BEFORE
tagging the repository. The -c flag tells cvs to check that all files in
the local directory are unmodified and warns you if they are not.

The restrictions on the tag name are not well-documented, but if they match
the regular expression ^[A-Za-z][A-Za-z_0-9-]+$ they will work (i.e.,
matching this regex is a sufficient but perhaps not necessary condition).
Tags that begin with [0-9] or that contain [ \t.,;:] will not work.

If you want to go back and tag old releases without checking them out, then
if there is a target date that you can use to identify the version (e.g., a
release date), you can run

    cvs rtag -D <date> <tag> <module>

where <date> is the date. For the format, the following excerpt from the
Cederqvist manual may help:

>A wide variety of date formats are supported by CVS. The most standard
>ones are ISO8601 (from the International Standards Organization) and the
>Internet e-mail standard (specified in RFC822 as amended by RFC1123).
>
>ISO8601 dates have many variants but a few examples are:
>
>1972-09-24
>1972-09-24 20:05

This command will tag all files in <module> at the latest revision on or
before <date> with tag <tag>.

I hope this is helpful to you.

At 05:29 PM 9/18/2000, Paul F. Dubois wrote:
>We haven't been doing cvs tags. I suppose if I could remember how to do them
>I would do them when I am the one cutting the release (CVS is not my usual
>source control system).

Best regards,

Jonathan

From pauldubois at home.com Tue Sep 19 14:37:31 2000
From: pauldubois at home.com (Paul F. Dubois)
Date: Tue, 19 Sep 2000 11:37:31 -0700
Subject: [Numpy-discussion] [ANNOUNCE] HappyDoc docs added to numpy site
Message-ID:

Check it out at http://numpy.sourceforge.net/doc

From trainor at uic.edu Tue Sep 19 17:42:06 2000
From: trainor at uic.edu (trainor at uic.edu)
Date: Tue, 19 Sep 2000 16:42:06 -0500
Subject: [Numpy-discussion] Question: arrays of booleans
Message-ID: <39C7DDAD.CDF99EB6@uic.edu>

Hello -- I'm a Numpy Newbie. I have a question for the Numpy gurus about
comparisons of multiarray objects and arrays of booleans in particular.

Back when I was checking out Numpy for an application, I read a note in the
Ascher/Dubois/Hinsen/Hugunin/Oliphant manual about comparisons of
multiarray objects. It said and still says:

"Currently, comparisons of multiarray objects results in exceptions,
since reasonable results (arrays of booleans) are not doable without
non-trivial changes to the Python core. These changes are planned for
Python 1.6, at which point array object comparisons will be updated."
( http://numpy.sourceforge.net/numdoc/HTML/numdoc.html )

Can anyone comment on what's happening with arrays of booleans in Numpy? I
don't want to duplicate someone else's effort, but don't mind standing on
someone else's shoulders. My application calls for operations on both
non-sparse and sparse matrices.

douglas

From pete at visionart.com Tue Sep 19 18:55:59 2000
From: pete at visionart.com (Pete Shinners)
Date: Tue, 19 Sep 2000 15:55:59 -0700
Subject: [Numpy-discussion] can i improve this code?
Message-ID: <009c01c0228c$c7686f90$f73f93cd@visionart.com>

I have image data stored in a 3D array. the code is pretty simple, but
what i don't like about it is needing to transpose the array twice.
here's the code...
def map_rgb_array(surf, a):
    "returns an array of 2d mapped pixels from a 3d array"
    # image info needed to convert map rgb data
    loss = surf.get_losses()[:3]
    shift = surf.get_shifts()[:3]
    prep = a >> loss << shift
    return transpose(bitwise_or.reduce(transpose(prep)))

once i get to the 'prep' line, my 3d image has all the values for R,G,B
mapped to the appropriate bitspace. all i need to do is bitwise_or the
values together "finalcolor = R|G|B". this is where my lack of guru-ness
with numpy comes in. the only way i could figure to do this was transpose
the array and then apply the bitwise_or. only problem is then i need to
de-transpose the array to get it back to where i started.

is there some form or mix of bitwise_or i can use to make it operate on
the 3rd axis instead of the 1st? thanks all, hopefully this can be
improved. legibility comes second to speed as a priority :]

... hmm, tinkering time passes ...

i did find this, "prep[:,:,0] | prep[:,:,1] | prep[:,:,2]". is something
like that going to be best?
"bitwise_or.reduce((prep[:,:,0], prep[:,:,1], prep[:,:,2]))"

From cgw at fnal.gov Tue Sep 19 20:22:27 2000
From: cgw at fnal.gov (Charles G Waldman)
Date: Tue, 19 Sep 2000 19:22:27 -0500 (CDT)
Subject: [Numpy-discussion] Question: arrays of booleans
In-Reply-To: <39C7DDAD.CDF99EB6@uic.edu>
References: <39C7DDAD.CDF99EB6@uic.edu>
Message-ID: <14792.835.587030.528703@buffalo.fnal.gov>

trainor at uic.edu writes:
> Hello -- I'm a Numpy Newbie. I have a question for the Numpy gurus about
> comparisons of multiarray objects and arrays of booleans in particular.
>
> Back when I was checking out Numpy for an application, I read a note in the
> Ascher/Dubois/Hinsen/Hugunin/Oliphant manual about comparisons of multiarray
> objects. It said and still says:
>
> "Currently, comparisons of multiarray objects results in exceptions,
> since reasonable results (arrays of booleans) are not doable without
> non-trivial changes to the Python core. These changes are planned for
> Python 1.6, at which point array object comparisons will be updated."
> ( http://numpy.sourceforge.net/numdoc/HTML/numdoc.html )
>
> Can anyone comment on what's happening with arrays of booleans in
> Numpy?

I believe that the passage you quote is referring to David Ascher's "Rich
Comparisons" proposal. This is not in Python 1.6 and is not in Python 2.0
and nobody is currently working on it. The section in the manual should be
revised.

From gball at cfa.harvard.edu Thu Sep 21 11:28:02 2000
From: gball at cfa.harvard.edu (Greg Ball)
Date: Thu, 21 Sep 2000 11:28:02 -0400 (EDT)
Subject: [Numpy-discussion] rich comparisons
In-Reply-To: <200009210641.XAA04111@lists.sourceforge.net>
Message-ID:

> Can anyone comment on what's happening with arrays of booleans in Numpy?
> I don't want to duplicate someone else's effort, but don't mind standing
> on someone else's shoulders. My application calls for operations on both
> non-sparse and sparse matrices.

"Rich comparisons" are really only syntactic sugar. For non-sparse
matrices the ufuncs greater(a,b) and less(a,b) will do the job.
Equivalents for sparse matrices probably exist.

-Greg Ball

From pauldubois at home.com Fri Sep 22 14:09:12 2000
From: pauldubois at home.com (Paul F. Dubois)
Date: Fri, 22 Sep 2000 11:09:12 -0700
Subject: [Numpy-discussion] Numeric-17.0.tar.gz released
Message-ID:

I have released the current CVS image as 17.0. The tag is r17_0.

From cbarker at jps.net Thu Sep 28 18:31:14 2000
From: cbarker at jps.net (Chris Barker)
Date: Thu, 28 Sep 2000 15:31:14 -0700
Subject: [Numpy-discussion] How Can I find out if an array is Contiguous in an extension module?
Message-ID: <39D3C6B2.1CB86EFA@jps.net>

Hi all,

I'm writing an extension module that uses PyArrayObjects, and I want to be
able to tell if an array passed in is contiguous.
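A side note for readers following along in Python rather than C: modern
NumPy exposes the same contiguity information at the Python level. The
names below are today's NumPy API, not Numeric's, so this is a sketch of
the idea rather than what was available in 2000:

```python
import numpy as np

a = np.arange(12).reshape(3, 4)   # freshly created: one contiguous block
b = a[:, ::2]                     # strided view: NOT contiguous

assert a.flags['C_CONTIGUOUS']
assert not b.flags['C_CONTIGUOUS']

# np.ascontiguousarray, like PyArray_ContiguousFromObject, hands back the
# original object when no copy is needed -- so an identity test reveals
# whether a copy was made.
c = np.ascontiguousarray(a)
assert c is a                     # already contiguous: no copy
d = np.ascontiguousarray(b)
assert d is not b and d.flags['C_CONTIGUOUS']
```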
I know that I can use PyArray_ContiguousFromObject, and it will just
return a reference to the same array if it is contiguous, but I want to
know whether or not a new array has been created. I realize that most of
the time it doesn't matter, but in this case I am changing the array in
place, and need to pass it to another function that is expecting a
standard C array, so I need to make sure the user has passed in a
contiguous array.

I was surprised to not find a "PyArray_Contiguous" function, or something
like it. I see that there is a field in PyArrayObject (int flags) that has
a bit indicating whether an array is contiguous, but being a newbie to C
as well, I'm not sure how to get at it. I'd love a handy utility function,
but if one doesn't exist, can someone send me the code I need to check
that bit?

thanks,

-Chris

--
Christopher Barker, Ph.D.
cbarker at jps.net                      --- --- ---
http://www.jps.net/cbarker          -----@@ -----@@ -----@@
                                   ------@@@ ------@@@ ------@@@
Water Resources Engineering       ------   @ ------   @ ------   @
Coastal and Fluvial Hydrodynamics -------      ---------      --------
------------------------------------------------------------------------

From pauldubois at home.com Thu Sep 28 22:10:05 2000
From: pauldubois at home.com (Paul F. Dubois)
Date: Thu, 28 Sep 2000 19:10:05 -0700
Subject: [Numpy-discussion] How Can I find out if an array is Contiguous in an extension module?
In-Reply-To: <39D3C6B2.1CB86EFA@jps.net>
Message-ID:

> I was surprised to not find a "PyArray_Contiguous" function, or
> something like it.
The following macro is defined in arrayobject.h:

#define PyArray_ISCONTIGUOUS(m) ((m)->flags & CONTIGUOUS)

From pete at shinners.org Wed Sep 27 11:27:26 2000
From: pete at shinners.org (Pete Shinners)
Date: Wed, 27 Sep 2000 08:27:26 -0700
Subject: [Numpy-discussion] precompiled binary for 17
Message-ID: <007b01c02897$71e241c0$0200a8c0@home>

i've compiled my own numeric, and i thought i'd offer it back for other
win users. i haven't seen a precompiled binary package for win32 yet, so
maybe this can get added to sourceforge? if not, here's a link that'll
work for a couple weeks

http://www.shinners.org/pete/NumPy17_20.zip

this is numeric-17.0, compiled for 2.0beta

btw, much thanks and congrats to the numpy team. getting this compiled
and installed was WORLDS better than working with the 16.x releases.
thanks! (and congrats!)

From pauldubois at home.com Fri Sep 29 10:20:27 2000
From: pauldubois at home.com (Paul F. Dubois)
Date: Fri, 29 Sep 2000 07:20:27 -0700
Subject: [Numpy-discussion] precompiled binary for 17
In-Reply-To: <007b01c02897$71e241c0$0200a8c0@home>
Message-ID:

File released. Thank you! In future you can just email me directly.

-- PFD

> -----Original Message-----
> From: numpy-discussion-admin at lists.sourceforge.net
> [mailto:numpy-discussion-admin at lists.sourceforge.net]On Behalf Of Pete
> Shinners
> Sent: Wednesday, September 27, 2000 8:27 AM
> To: Numpy Discussion
> Subject: [Numpy-discussion] precompiled binary for 17
>
> i've compiled my own numeric, and i thought i'd offer it
> back for other win users. i haven't seen a precompiled
> binary package for win32 yet, so maybe this can get
> added to sourceforge? if not, here's a link that'll
> work for a couple weeks
>
> http://www.shinners.org/pete/NumPy17_20.zip
>
> this is numeric-17.0, compiled for 2.0beta
>
> btw, much thanks and congrats to the numpy team.
> getting this compiled and installed was WORLDS
> better than working with the 16.x releases. thanks!
> (and congrats!)
>
>
> _______________________________________________
> Numpy-discussion mailing list
> Numpy-discussion at lists.sourceforge.net
> http://lists.sourceforge.net/mailman/listinfo/numpy-discussion
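A follow-up on Pete Shinners' map_rgb_array question above: ufunc reduce
accepts an axis argument, so the double transpose can be avoided by
reducing along the channel axis directly. A sketch in modern NumPy
spelling (Numeric took the axis as a plain second argument, e.g.
bitwise_or.reduce(prep, 2)); the sample values are made up for
illustration:

```python
import numpy as np

# A tiny stand-in for the 3D image array: shape (height, width, 3),
# with each channel value already shifted into its bit position.
prep = np.array([[[0xFF0000, 0x00AA00, 0x000055],
                  [0x120000, 0x003400, 0x000078]]])

# Reduce along the last (channel) axis directly -- no transposes.
mapped = np.bitwise_or.reduce(prep, axis=-1)

# Same result as OR-ing the channel planes by hand:
assert (mapped == (prep[:, :, 0] | prep[:, :, 1] | prep[:, :, 2])).all()
assert mapped[0, 0] == 0xFFAA55
```

This does in one pass what "finalcolor = R|G|B" describes, and it keeps
the (height, width) layout of the original array.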