From falted at openlc.org Wed Nov 6 08:16:08 2002 From: falted at openlc.org (Francesc Alted) Date: Wed Nov 6 08:16:08 2002 Subject: [Numpy-discussion] Segmentation fault when dealing with large rank arrays Message-ID: <20021106161525.GA1254@openlc.org> Hi, I'm having some segmentation faults when dealing with large rank arrays in Numeric. You can see some examples in the following: >>> zeros((1,) * 21) == zeros((1,) * 21) Segmentation fault Of course, this kind of usage is quite unusual, but I'm interested in it because I'm developing tests for a package based on Numerical and wanted to check the largest rank supported. >>> ones((1,) * 41) Segmentation fault In this case, I know that Numeric supports array ranks up to 40, but in my opinion, an error should be raised instead of merely giving a "Segmentation fault". I'm having these problems using Numeric 20.2.1, 21.0 and 22.0 in both Python 2.1.x and 2.2.x. My platform is Intel under Linux Debian 3.0. Thank you, -- Francesc Alted PGP KeyID: 0x61C8C11F Scientific applications developer Public PGP key available: http://www.openlc.org/falted_at_openlc.asc Key fingerprint = 1518 38FE 3A3D 8BE8 24A0 3E5B 1328 32CC 61C8 C11F
From bondpaper at earthlink.net Thu Nov 7 09:33:02 2002 From: bondpaper at earthlink.net (bondpaper) Date: Thu Nov 7 09:33:02 2002 Subject: [Numpy-discussion] Installing Numerical Python Message-ID: <3DCAA4DB.60500@earthlink.net> Hello, I have both Python 1.5 and Python 2.2.1 on a Redhat 7.3 system, and when I try to install Numerical Python (v. 22), I get an error telling me that it cannot find the file /usr/lib/python2.2/config/Makefile. The command I use for the install is: python2 install.py build. The Python 2.2 install comes directly from the RPMs on the python.org web site. Does anyone know how I might resolve this? Thanks. Tom
From falted at openlc.org Thu Nov 7 10:14:05 2002 From: falted at openlc.org (Francesc Alted) Date: Thu Nov 7 10:14:05 2002 Subject: [Numpy-discussion] Installing Numerical Python In-Reply-To: <3DCAA4DB.60500@earthlink.net> References: <3DCAA4DB.60500@earthlink.net> Message-ID: <20021107181314.GB1262@openlc.org> On Thu, Nov 07, 2002 at 10:37:31AM -0700, bondpaper wrote: > I have both Python 1.5 and Python 2.2.1 on a Redhat 7.3 system, and > when I try to install Numerical Python (v. 22), I get an error telling > me that it cannot find the file /usr/lib/python2.2/config/Makefile. The > command I use for the install is: python2 install.py build. The > Python 2.2 install comes directly from the RPMs on the python.org web > site. Does anyone know how I might resolve this? It should be. Maybe you need to install the development version packages of Python. 
For 2.2.1 you can find it at: http://www.python.org/ftp/python/2.2.1/rpms/rh7.3/python2-devel-2.2.1-2.i386.rpm Bye, -- Francesc Alted PGP KeyID: 0x61C8C11F Scientific aplications developer Public PGP key available: http://www.openlc.org/falted_at_openlc.asc Key fingerprint = 1518 38FE 3A3D 8BE8 24A0 3E5B 1328 32CC 61C8 C11F From oliphant at ee.byu.edu Thu Nov 7 10:32:08 2002 From: oliphant at ee.byu.edu (Travis Oliphant) Date: Thu Nov 7 10:32:08 2002 Subject: [Numpy-discussion] Installing Numerical Python In-Reply-To: <3DCAA4DB.60500@earthlink.net> Message-ID: > > Hello, > > I'm have both Python 1.5 and Python2.2.1 on a Redhat 7.3 system, and > when I try to install Numerical Python (v. 22), I get an error telling > me that it cannot find the file /usr/lib/python2.2/config/Makefile. The > command I use for the install is: python2 install.py build. The > Python2.2 install comes directly from the RPMs on the python.org web > site. Does anyone know how I might resolve this? You probably need the python-devel package also. -Travis O. From paul at pfdubois.com Thu Nov 7 17:15:09 2002 From: paul at pfdubois.com (Paul F Dubois) Date: Thu Nov 7 17:15:09 2002 Subject: [Numpy-discussion] note on mail lists Message-ID: <000201c286c4$2708df00$6501a8c0@NICKLEBY> Please note that the numpy-developers list is no longer active. Do not send mail to this address; it will be discarded. Use numpy-discussion for questions and problems and the bug list at Source Forge for actual bugs only. Thanks! From Marc.Poinot at onera.fr Tue Nov 19 00:58:03 2002 From: Marc.Poinot at onera.fr (Marc Poinot) Date: Tue Nov 19 00:58:03 2002 Subject: [Numpy-discussion] Array data ownership Message-ID: <3DD9FCF8.7C175CFF@onera.fr> Hi all, I use the Numpy C API to produce/use some PyArrayObjects. To set the allocated memory zone, I use the PyArray_FromDimsAndData function, which is described to "be used to access global data that will never be freed". That's what I want. Or more exactly, I want "global data that will never be freed by Numpy, until I tell it to do so !" I mean some of my arrays are allocated and used as PyArrayObject data, but I want some of them to be seen by Numpy as its own data. I want it to deallocate the data at delete time. My questions are : [1] Is PyArray_FromDimsAndData the right function or should I use another way ? [2] Can I use the PyArrayObject.flags bit "owns the data area" to set it after the PyArray_FromDimsAndData call ? In the case of "yes", which is the bit rank ? The fourth starting from right ? Any macro already doing this ? Will I break the PyArrayObject consistency ? Marcvs [alias yes I can go into the code... I though OO was reading docs and using interfaces ;] From hinsen at cnrs-orleans.fr Tue Nov 19 03:30:04 2002 From: hinsen at cnrs-orleans.fr (Konrad Hinsen) Date: Tue Nov 19 03:30:04 2002 Subject: [Numpy-discussion] Array data ownership In-Reply-To: <3DD9FCF8.7C175CFF@onera.fr> References: <3DD9FCF8.7C175CFF@onera.fr> Message-ID: Marc Poinot writes: > That's what I want. Or more exactly, I want "global data that will never be > freed by Numpy, until I tell it to do so !" That option does not exist. The options are: 1) NumPy manages the data space of your array. It gets freed when the last array object referencing it is destroyed. 2) NumPy assumes that the data space is already allocated and is not freed as long as any array object might reference it (which, in practice, is until the end of the process). 
PyArray_FromDimsAndData is used for allocating arrays that choose the second option. If I understand you correctly, you want NumPy to create an array object and allocate the data space, but make sure that the data space is not freed before you "allow" it. In that case, just create an ordinary array and keep an additional reference to it. When the data space may be destroyed, you remove the reference. However, there is no guarantee that the data space will be freed immediately, as there might still be other references around. > [2] Can I use the PyArrayObject.flags bit "owns the data area" to set it > after the PyArray_FromDimsAndData call ? In the case of "yes", which Whatever this does, it is not documented. Konrad. -- ------------------------------------------------------------------------------- Konrad Hinsen | E-Mail: hinsen at cnrs-orleans.fr Centre de Biophysique Moleculaire (CNRS) | Tel.: +33-2.38.25.56.24 Rue Charles Sadron | Fax: +33-2.38.63.15.17 45071 Orleans Cedex 2 | Deutsch/Esperanto/English/ France | Nederlands/Francais ------------------------------------------------------------------------------- From Marc.Poinot at onera.fr Tue Nov 19 04:25:02 2002 From: Marc.Poinot at onera.fr (Marc Poinot) Date: Tue Nov 19 04:25:02 2002 Subject: [Numpy-discussion] Array data ownership References: <3DD9FCF8.7C175CFF@onera.fr> Message-ID: <3DDA2D82.1520A69F@onera.fr> Konrad Hinsen wrote: > > If I understand you correctly, you want NumPy to create an array > object and allocate the data space, but make sure that the data space > is not freed before you "allow" it. In that case, just create an > ordinary array and keep an additional reference to it. When the data > space may be destroyed, you remove the reference. However, there is no > guarantee that the data space will be freed immediately, as there > might still be other references around. > No. I want to set the memory zone of the array but once this zone is set, I want numpy to manage it as if it was owner of the memory. I have an external lib which allocates the returned memory zone. I put this memory zone into a PyArrayObject using PyArray_FromDimsAndData in order to avoid memory copy. But I want now this array to be the owner of the allocated zone. I mean I want this zone to be released if the Python object is deleted. The ref count of Python is ok for me, as long as an array is sharing the data, python won't release it. But I want Python to delete the memory zone if the last reference is removed. Marcvs [alias I'll have a try with myarray->flags |= OWN_DATA; and I'll let you know about my experiments...] From hinsen at cnrs-orleans.fr Tue Nov 19 05:29:06 2002 From: hinsen at cnrs-orleans.fr (Konrad Hinsen) Date: Tue Nov 19 05:29:06 2002 Subject: [Numpy-discussion] Array data ownership In-Reply-To: <3DDA2D82.1520A69F@onera.fr> (message from Marc Poinot on Tue, 19 Nov 2002 13:24:34 +0100) References: <3DD9FCF8.7C175CFF@onera.fr> <3DDA2D82.1520A69F@onera.fr> Message-ID: <200211191327.gAJDRWG23603@chinon.cnrs-orleans.fr> > No. I want to set the memory zone of the array but once this zone is > set, I want numpy to manage it as if it was owner of the memory. That is the most frequent case for which there is no clean solution. There ought to be an array constructor that takes a pointer to a deallocation function which is called to free the data space. You can do myarray->flags |= OWN_DATA, then the data space will be freed using the standard free() function. 
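[Editor's note: a minimal C sketch of the trick described above, added for illustration and not part of the original thread. It assumes a Numeric installation whose headers live under Numeric/, that import_array() has been called in the module's init function, and that the buffer really was obtained from the allocator that free() expects; the function name and the externally filled buffer are invented for the example.]

#include "Python.h"
#include "Numeric/arrayobject.h"
#include <stdlib.h>

/* Wrap an externally allocated buffer in a Numeric array, then hand
   ownership of the memory over to Numeric via the OWN_DATA flag. */
static PyObject *wrap_external_buffer(int n)
{
    int dims[1];
    double *buf = (double *)malloc(n * sizeof(double));  /* "external" allocation */
    PyArrayObject *arr;

    if (buf == NULL)
        return PyErr_NoMemory();
    /* ... fill buf from the external library here ... */

    dims[0] = n;
    arr = (PyArrayObject *)PyArray_FromDimsAndData(1, dims, PyArray_DOUBLE,
                                                   (char *)buf);
    if (arr == NULL) {
        free(buf);
        return NULL;
    }
    /* Numeric now regards the buffer as its own: free(arr->data) is called
       when the last reference to the array goes away.  This is only safe if
       buf was allocated with the allocator that matches free(). */
    arr->flags |= OWN_DATA;
    return (PyObject *)arr;
}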
But this is undocumented, and works only if the standard OS memory allocation calls were used to allocate the memory. Konrad. -- ------------------------------------------------------------------------------- Konrad Hinsen | E-Mail: hinsen at cnrs-orleans.fr Centre de Biophysique Moleculaire (CNRS) | Tel.: +33-2.38.25.56.24 Rue Charles Sadron | Fax: +33-2.38.63.15.17 45071 Orleans Cedex 2 | Deutsch/Esperanto/English/ France | Nederlands/Francais -------------------------------------------------------------------------------
From falted at openlc.org Tue Nov 19 11:33:04 2002 From: falted at openlc.org (Francesc Alted) Date: Tue Nov 19 11:33:04 2002 Subject: [Numpy-discussion] [ANN] PyTables 0.2 is out Message-ID: <20021119193209.GE994@openlc.org> Announcing PyTables 0.2 ----------------------- What's new ----------- - Numerical Python arrays supported! - Much improved documentation - Programming API almost stable - Improved navigability across the object tree - Added more unit tests (there are almost 50) - Dropped HDF5_HL dependency (a tailored version is included in sources now) - License changed from LGPL to BSD What is ------- The goal of PyTables is to enable the end user to easily manipulate scientific data tables and Numerical Python objects (new in 0.2!) in a persistent hierarchical structure. The foundation of the underlying hierarchical data organization is the excellent HDF5 library (http://hdf.ncsa.uiuc.edu/HDF5). Right now, PyTables provides support for only a limited subset of the HDF5 functions, but I hope to add the more interesting ones (for PyTables needs) in the near future. Nonetheless, this package is not intended to serve as a complete wrapper for the entire HDF5 API. A table is defined as a collection of records whose values are stored in fixed-length fields. All records have the same structure and all values in each field have the same data type. The terms "fixed-length" and strict "data types" seem to be quite a strange requirement for an interpreted language like Python, but they serve a useful function if the goal is to save very large quantities of data (such as is generated by many scientific applications, for example) in an efficient manner that reduces demand on CPU time and I/O. In order to emulate records (C structs in HDF5) in Python, PyTables implements a special metaclass that detects errors in field assignments as well as range overflows. PyTables also provides a powerful interface to process table data. Quite a bit of effort has been invested to make browsing the hierarchical data structure a pleasant experience. PyTables implements just three (orthogonal) easy-to-use methods for browsing. What is HDF5? ------------- For those people who know nothing about HDF5, it is a general purpose library and file format for storing scientific data made at NCSA. HDF5 can store two primary objects: datasets and groups. A dataset is essentially a multidimensional array of data elements, and a group is a structure for organizing objects in an HDF5 file. Using these two basic constructs, one can create and store almost any kind of scientific data structure, such as images, arrays of vectors, and structured and unstructured grids. You can also mix and match them in HDF5 files according to your needs. How fast is it? --------------- Despite being an alpha version with a lot of room for improvement (it's still CPU bound!), PyTables can read and write tables quite fast. 
But, if you want some (very preliminary) figures (just to know orders of magnitude), in a AMD Athlon at 900 it can currently read from 40000 up to 60000 records/s and write from 5000 up to 13000 records/s. Raw data speed in read mode ranges from 1 MB/s up to 2 MB/s, and it drops to the 200 KB/s - 600 KB/s range for writes. Go to http://pytables.sf.net/bench.html for a somewhat more detailed description of this small (and synthetic) benchmark. Anyway, this is only the beginning (premature optimization is the root of all evils, you know ;-). Platforms --------- I'm using Linux as the main development platform, but PyTables should be easy to compile/install on other UNIX machines. Thanks to Scott Prater, this package has passed all the tests on a UltraSparc platform with Solaris 7. It also compiles and passes all the tests on a SGI Origin2000 with MIPS R12000 processors and running IRIX 6.5. If you are using Windows and you get the library to work, please let me know. An example? ----------- At the bottom of this message there is some code (less that 100 lines and only less than half being real code) that shows basic capabilities of PyTables. Web site -------- Go to the PyTables web site for more details: http://pytables.sf.net/ Final note ---------- This is second alpha release, and probably last alpha, so it is still time if you want to suggest some API addition/change or addition/change of any useful missing capability. Let me know of any bugs, suggestions, gripes, kudos, etc. you may have. -- Francesc Alted falted at openlc.org *-*-*-**-*-*-**-*-*-**-*-*- Small code example *-*-*-**-*-*-**-*-*-**-*-*-* """Small but almost complete example showing the PyTables mode of use. As a result of execution, a 'tutorial1.h5' file is created. You can look at it with whatever HDF5 generic utility, like h5ls, h5dump or h5view. """ import sys from Numeric import * from tables import * #'-**-**-**-**-**-**- user record definition -**-**-**-**-**-**-**-' # Define a user record to characterize some kind of particles class Particle(IsRecord): name = '16s' # 16-character String idnumber = 'Q' # unsigned long long (i.e. 
64-bit integer) TDCcount = 'B' # unsigned byte ADCcount = 'H' # unsigned short integer grid_i = 'i' # integer grid_j = 'i' # integer pressure = 'f' # float (single-precision) energy = 'd' # double (double-precision) print print '-**-**-**-**-**-**- file creation -**-**-**-**-**-**-**-' # The name of our HDF5 filename filename = "tutorial1.h5" print "Creating file:", filename # Open a file in "w"rite mode h5file = openFile(filename, mode = "w", title = "Test file") print print '-**-**-**-**-**-**- group an table creation -**-**-**-**-**-**-**-' # Create a new group under "/" (root) group = h5file.createGroup("/", 'detector', 'Detector information') print "Group '/detector' created" # Create one table on it table = h5file.createTable(group, 'readout', Particle(), "Readout example") print "Table '/detector/readout' created" # Get a shortcut to the record object in table particle = table.record # Fill the table with 10 particles for i in xrange(10): # First, assign the values to the Particle record particle.name = 'Particle: %6d' % (i) particle.TDCcount = i % 256 particle.ADCcount = (i * 256) % (1 << 16) particle.grid_i = i particle.grid_j = 10 - i particle.pressure = float(i*i) particle.energy = float(particle.pressure ** 4) particle.idnumber = i * (2 ** 34) # This exceeds long integer range # Insert a new particle record table.appendAsRecord(particle) # Flush the buffers for table table.flush() print print '-**-**-**-**-**-**- table data reading & selection -**-**-**-**-**-' # Read actual data from table. We are interested in collecting pressure values # on entries where TDCcount field is greater than 3 and pressure less than 50 pressure = [ x.pressure for x in table.readAsRecords() if x.TDCcount > 3 and x.pressure < 50 ] print "Last record read:" print x print "Field pressure elements satisfying the cuts ==>", pressure # Read also the names with the same cuts names = [ x.name for x in table.readAsRecords() if x.TDCcount > 3 and x.pressure < 50 ] print print '-**-**-**-**-**-**- array object creation -**-**-**-**-**-**-**-' print "Creating a new group called '/columns' to hold new arrays" gcolumns = h5file.createGroup(h5file.root, "columns", "Pressure and Name") print "Creating a Numeric array called 'pressure' under '/columns' group" h5file.createArray(gcolumns, 'pressure', array(pressure), "Pressure column selection") print "Creating another Numeric array called 'name' under '/columns' group" h5file.createArray('/columns', 'name', array(names), "Name column selection") # Close the file h5file.close() print "File '"+filename+"' created" From jdhunter at ace.bsd.uchicago.edu Wed Nov 20 14:37:02 2002 From: jdhunter at ace.bsd.uchicago.edu (John Hunter) Date: Wed Nov 20 14:37:02 2002 Subject: [Numpy-discussion] numpy with pygsl Message-ID: If I import pygsl.rng before importing Numeric, I get an abort mother:~> python Python 2.2.2 (#1, Oct 15 2002, 08:14:58) [GCC 3.2] on linux2 Type "help", "copyright", "credits" or "license" for more information. >>> import pygsl.rng >>> import Numeric python: Modules/gcmodule.c:366: delete_garbage: Assertion `((((PyGC_Head *)( op)-1))->gc.gc_refs >= 0)' failed. Abort If I import them in the other order, I have no problems. Numeric 22.0 pygsl version = "0.1a" gsl-1.2 Any ideas? 
Thanks, John Hunter
From j_r_fonseca at yahoo.co.uk Fri Nov 22 05:34:02 2002 From: j_r_fonseca at yahoo.co.uk (José Fonseca) Date: Fri Nov 22 05:34:02 2002 Subject: [Numpy-discussion] Ann: ARPACK bindings to Numeric Python Message-ID: <20021122133308.GA26237@localhost.localdomain> I've made a Numeric Python binding of the ARPACK library. ARPACK is a collection of Fortran77 subroutines designed to solve large scale eigenvalue problems, available at http://www.caam.rice.edu/software/ARPACK/ . These bindings have the following features: - Correspondence for all ARPACK calls - In place operation for all matrices - Easy access to ARPACK debugging control variables - Online help for all calls, with the correct Python/C 0-based indexing (automatically converted from the sources with the aid of a sed script). - Include ports of [unfortunately not all] original examples These bindings weren't generated with any automatic binding generation tool. Even though I initially tried both PyFortran and f2py, both proved to be inappropriate to handle the specificity of the ARPACK API. ARPACK uses a 'reverse communication interface' where basically the API successively returns to the caller, which must take some update steps and re-call the API with most arguments untouched. The intelligent (and silent) argument conversions made by the above tools made it very difficult to implement and debug even the simplest example. Also, for large-scale problems we wouldn't want any kind of array conversion/transposing happening behind the scenes, as that would completely kill performance. The source is available at http://jrfonseca.dyndns.org/work/phd/python/modules/arpack/dist/arpack-1.0.tar.bz2 . The ARPACK library is not included and must be obtained and compiled separately, and setup.py must be modified to reflect your system BLAS/LAPACK library. The bindings are API-centric. Nevertheless a Python wrapper to these calls can easily be made, where all kinds of type conversions and dummy-safe actions can be made. I have done one myself for a sparse matrix eigenvalue determination using UMFPACK for sparse matrix factorization (for which a simple binding - just for double precision matrices - is available at http://jrfonseca.dyndns.org/work/phd/python/modules/umfpack/ ). I hope you find this interesting. Regards, José Fonseca __________________________________________________ Do You Yahoo!? Everything you'll ever need on one web page from News and Sport to Email and Music Charts http://uk.my.yahoo.com
From verveer at embl-heidelberg.de Sun Nov 24 08:41:02 2002 From: verveer at embl-heidelberg.de (verveer at embl-heidelberg.de) Date: Sun Nov 24 08:41:02 2002 Subject: [Numpy-discussion] dimensions of zero length Message-ID: <1038155955.3de100b33cecd@webmail.EMBL-Heidelberg.DE> Hi all, I noticed that in Numeric and in numarray it is possible to create arrays with axes of zero length. For instance: zeros([1, 0]). There seems not to be much that can be done with them. What is the reason for their existence? My real question is: When writing an extension in C, how to deal with such arrays? Should I treat them as empty arrays that do not have any data? Cheers, Peter -- Dr. Peter J. Verveer Cell Biology and Cell Biophysics Programme EMBL Meyerhofstrasse 1 D-69117 Heidelberg Germany Tel. 
: +49 6221 387245 Fax : +49 6221 387242 Email: Peter.Verveer at embl-heidelberg.de
From j_r_fonseca at yahoo.co.uk Sun Nov 24 08:53:04 2002 From: j_r_fonseca at yahoo.co.uk (José Fonseca) Date: Sun Nov 24 08:53:04 2002 Subject: [Numpy-discussion] Ann: ARPACK bindings to Numeric Python In-Reply-To: <20021122133308.GA26237@localhost.localdomain> References: <20021122133308.GA26237@localhost.localdomain> Message-ID: <20021124165236.GA27610@localhost.localdomain> On Fri, Nov 22, 2002 at 01:33:08PM +0000, José Fonseca wrote: > The source is available at > http://jrfonseca.dyndns.org/work/phd/python/modules/arpack/dist/arpack-1.0.tar.bz2 > . The ARPACK library is not included and must be obtained and compiled > separately, and setup.py must be modified to reflect your system > BLAS/LAPACK library. Two header files, arpack.h and arpackmodule.h, were missing from the above package. This has been corrected now. Thanks to Greg Whittier for pointing that out. Some more detailed install instructions were also added. I'm also considering whether or not I should bundle ARPACK in the source package and then use scipy_distutils to compile the Fortran source files. This would make it much easier to install, but I'm personally against statically linked libraries, as they inevitably lead to code duplication in memory. For example, since I haven't managed to build a shared version, the ATLAS and LAPACK libraries both appear triplicated in my programs - from Numeric, ARPACK and UMFPACK. This leads to huge code bloat and misuse of the processor code cache, resulting in lower performance. What is the opinion of the other subscribers regarding this? José Fonseca __________________________________________________ Do You Yahoo!? Everything you'll ever need on one web page from News and Sport to Email and Music Charts http://uk.my.yahoo.com
From hinsen at cnrs-orleans.fr Sun Nov 24 10:01:08 2002 From: hinsen at cnrs-orleans.fr (Konrad Hinsen) Date: Sun Nov 24 10:01:08 2002 Subject: [Numpy-discussion] Ann: ARPACK bindings to Numeric Python In-Reply-To: <20021124165236.GA27610@localhost.localdomain> References: <20021122133308.GA26237@localhost.localdomain> <20021124165236.GA27610@localhost.localdomain> Message-ID: José Fonseca writes: > files. This would make it much easier to install, but I'm personally > against statically linked libraries, as they inevitably lead to code > duplication in memory. For example, since I haven't managed to build a Me too. Konrad.
From hinsen at cnrs-orleans.fr Sun Nov 24 10:04:04 2002 From: hinsen at cnrs-orleans.fr (Konrad Hinsen) Date: Sun Nov 24 10:04:04 2002 Subject: [Numpy-discussion] dimensions of zero length In-Reply-To: <1038155955.3de100b33cecd@webmail.EMBL-Heidelberg.DE> References: <1038155955.3de100b33cecd@webmail.EMBL-Heidelberg.DE> Message-ID: writes: > I noticed that in Numeric and in numarray it is possible to create > arrays with axes of zero length. For instance: zeros([1, 0]). There > seems not to be much that can be done with them. What is the reason for > their existence? They often result as special cases from some operations. Think of them as the array equivalents of empty lists. Creating zero-size arrays explicitly can be useful when suitable starting values for iterations are needed. > My real question is: When writing an extension in C, how to deal with such > arrays? Should I treat them as empty arrays that do not have any data? Exactly. Konrad. 
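[Editor's note: a small illustrative C sketch of the advice above, not taken from the thread. The element count of an array is the product of its dimensions, so an axis of length zero makes that product zero and the processing loop simply never runs; no special case is needed. The function name is invented, import_array() is assumed to have been called in the module's init function, and the header path may differ on your installation.]

#include "Python.h"
#include "Numeric/arrayobject.h"

/* Sum the elements of an arbitrary array after conversion to a contiguous
   array of doubles.  Zero-length arrays fall out naturally: n is 0, the
   loop body never executes, and the sum is 0. */
static PyObject *sum_elements(PyObject *self, PyObject *args)
{
    PyObject *input;
    PyArrayObject *arr;
    double total = 0.0;
    int i, n;

    if (!PyArg_ParseTuple(args, "O", &input))
        return NULL;
    arr = (PyArrayObject *)PyArray_ContiguousFromObject(input, PyArray_DOUBLE,
                                                        1, 0);
    if (arr == NULL)
        return NULL;

    /* Total number of elements is the product of all the dimensions. */
    n = 1;
    for (i = 0; i < arr->nd; i++)
        n *= arr->dimensions[i];

    for (i = 0; i < n; i++)
        total += ((double *)arr->data)[i];

    Py_DECREF(arr);
    return PyFloat_FromDouble(total);
}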
From jmiller at stsci.edu Tue Nov 26 10:45:04 2002 From: jmiller at stsci.edu (Todd Miller) Date: Tue Nov 26 10:45:04 2002 Subject: [Numpy-discussion] ANN: numarray-0.4 released Message-ID: <3DE3C0FE.2010807@stsci.edu> Numarray 0.4 --------------------------------- Numarray is an array processing package designed to efficiently manipulate large multi-dimensional arrays. Numarray is modelled after Numeric and features c-code generated from python template scripts, the capacity to operate directly on arrays in files, and improved type promotions. Version 0.4 is a relatively large update with these features: 1. C-basetypes have been added to NDArray and NumArray to accelerate simple indexing and attribute access *and* improve Numeric compatability. 2. List <-> NumArray transformations have been sped up. 3. There's an ieeespecial module which should make it easier to find and manipulate NANs and INFs. 4. There's now a boxcar function in the Convolve package for doing fast 2D smoothing. Jochen Kupper also contributed a lineshape module which is also part of the Convolve package. 5. Bug fixes for every reported bug between now and July-02. 6. Since I still haven't fixed the add-on Packages packaging, I built windows binaries for all 4 packages so you don't have to build them from source yourself. But... basetypes (and reorganization) aren't free: 1. The "native" aspects of the numarray C-API have changed in backwards incompatible ways. In particular, the NDInfo struct is now completely gone, since it was completely redundant to the new basetypes which are modelled after Numeric's PyArrayObject. If you actually *have* a numarray extension that this breaks, and it bugs you, send it to me and I'll fix it for you. If there's enough response, I'll automate the process of updating extension wrappers. But I expect not. 2. I expect to hear about bugs which can cause numarray/Python to dump core. Of course, I have no clue where they are. So... there may be rapid re-releases to compensate. 3. Old pickles are not directly transferrable to numarray-0.4, but may instead require some copy_reg fuddling because basetypes change the pickle format. If you have old pickles you need to migrate, send me e-mail and I'll help you figure out how to do it. 4. Make *really* sure you delete any old numarray modules you have laying around. These can screw up numarray-0.4 royally. 5. Note for astronomers: PyFITS requires an update to work with numarray-0.4. This should be available shortly, if it is not already. My point is that you may be unable to use both numarray-0.4 and PyFITS today. WHERE ----------- Numarray-0.4 windows executable installers, source code, and manual is here: http://sourceforge.net/project/showfiles.php?group_id=1369 Numarray is hosted by Source Forge in the same project which hosts Numeric: http://sourceforge.net/projects/numpy/ The web page for Numarray information is at: http://stsdas.stsci.edu/numarray/index.html Trackers for Numarray Bugs, Feature Requests, Support, and Patches are at the Source Forge project for NumPy at: http://sourceforge.net/tracker/?group_id=1369 REQUIREMENTS -------------------------- numarray-0.4 requires Python 2.2.0 or greater. AUTHORS, LICENSE ------------------------------ Numarray was written by Perry Greenfield, Rick White, Todd Miller, JC Hsu, Paul Barrett, Phil Hodge at the Space Telescope Science Institute. Thanks go to Jochen Kupper of the University of North Carolina for his work on Numarray and for porting the Numarray manual to TeX format. 
Numarray is made available under a BSD-style License. See LICENSE.txt in the source distribution for details. -- Todd Miller jmiller at stsci.edu
From Marc.Poinot at onera.fr Wed Nov 27 05:16:03 2002 From: Marc.Poinot at onera.fr (Marc Poinot) Date: Wed Nov 27 05:16:03 2002 Subject: [Numpy-discussion] Displaying floats with Python Message-ID: <3DE4C551.DC039B67@onera.fr> I'm not sure this is a problem, but I'm looking for a solution for this and I wonder if one could give a piece of advice: I have a C extension using doubles and floats. I return a float cast to double to Python, from my extension, and when I display it I have some extra digits at the end of the "correct" number. In the extension, dgv is a float (in this example dgv=0.1). PyTuple_SET_ITEM(tp0, i, PyFloat_FromDouble((double)dgv)); I print it in Python: print tuple[0] Which produces: 0.10000000149 I get too many digits, because the print should not try to get more than the 4-byte float holds. It looks like the floatobject.c file is setting a number precision for printing, which is forced to 12. (#define PREC_STR 12) This works if you use a "double", but not a "double" cast from a "float". This problem occurs on both SGI and DEC. With stdio: printf("%.g\n", (float) dgv); printf("%.g\n", (double)dgv); printf("%.12g\n",(float) dgv); printf("%.12g\n",(double)dgv); produces (this is a "CORRECT" behavior for printf, we're printing too many digits): 0.1 0.1 0.10000000149 0.10000000149 Any idea ? How can I tell Python to forget the precision, or set it as global. Marcvs [alias Yes I could only compute with integers, but... ]
From Chris.Barker at noaa.gov Wed Nov 27 11:19:02 2002 From: Chris.Barker at noaa.gov (Chris Barker) Date: Wed Nov 27 11:19:02 2002 Subject: [Numpy-discussion] Displaying floats with Python References: <3DE4C551.DC039B67@onera.fr> Message-ID: <3DE513E2.6E3CA46@noaa.gov> Marc Poinot wrote: > > I'm not sure this is a problem, It's not. > and when I display it I have some extra digits > at the end of the "correct" number. What you are seeing is the best decimal representation of the binary number that is stored in that double. While the extra bits in binary of the double over the float should be zero, that does not mean that the extra decimal digits will be zero also. In this case, you are trying to store the value of 1.0 / 10.0. That value cannot be represented exactly in binary. The value: 0.10000000149 is as close as you can get with a C float. So you are getting the right answer (subject to the limitations of floating point representation and arithmetic), as demonstrated by your example: > printf("%.12g\n",(float) dgv); > 0.10000000149 > produces (this is a "CORRECT" behavior for printf, we're printing > too many digits) It depends what you mean by too many. The above example shows the best decimal value you can get with 12 digits from your float value, which is the same as what Python has in its double. By the way, your four printf examples also demonstrate that you are getting exactly the same results by casting a float to a double within C as when you do it by passing the value to Python (which you should expect. A Python float is a C double, after all) By default, in a print statement, Python displays all the digits that are required to reproduce the number. 
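[Editor's note: a short stand-alone C program, added for illustration and not part of the thread, that reproduces the effect described here: the single-precision value nearest to 0.1, once widened to a double, really is 0.10000000149011612..., so Python is faithfully printing the number it was handed.]

#include <stdio.h>

int main(void)
{
    float  f = 0.1f;        /* nearest single-precision value to 1/10 */
    double d = (double)f;   /* widening to double adds no information */

    printf("%.9g\n", f);    /* 0.100000001         (9 digits round-trip a float) */
    printf("%.17g\n", d);   /* 0.10000000149011612 (the double Python receives)  */
    printf("%.17g\n", 0.1); /* 0.10000000000000001 (the double nearest to 1/10)  */
    return 0;
}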
If you don't want to see all those digits, do what you did in C: >>> d = 0.10000000149 >>> print d 0.10000000149 >>> print "%g"%d 0.1 >>> print "%.12g"%d 0.10000000149 By the way, see: http://www.python.org/doc/current/tut/node14.html For more explaination. -Chris -- Christopher Barker, Ph.D. Oceanographer NOAA/OR&R/HAZMAT (206) 526-6959 voice 7600 Sand Point Way NE (206) 526-6329 fax Seattle, WA 98115 (206) 526-6317 main reception Chris.Barker at noaa.gov From haase at msg.ucsf.edu Wed Nov 27 12:37:04 2002 From: haase at msg.ucsf.edu (Sebastian Haase) Date: Wed Nov 27 12:37:04 2002 Subject: [Numpy-discussion] ANN: numarray-0.4 released References: <3DE3C0FE.2010807@stsci.edu> Message-ID: <01a301c29654$f1d8c610$3b45da80@rodan> That's good news!! Since I just signed up to this list I have some (more general questions) 1) How active is this list ? Now I get like 1-2 emails a day (but 3 months ago or so I got like 20 ...) 2) Are most people here talking about Numeric (NumPy) or numarray ? Who is actively writing/implementing numarray and is there a specific mailing list for that ? 3) I was just starting with some C code to generate numarray lists (a week ago) and now the main data struct (NDarray) just disappeared... is that good news !? (Maybe the question should be: What is a "first class" python object ? ) 4) In NDarray there was a special pointer (void *imag) for complex data. (without much documentation actually..) How are complex array handled in numarray 0.4? Examples would be nice !! ;-) Keep up all the good work. Thanks, Sebastian ----- Original Message ----- From: "Todd Miller" Newsgroups: comp.lang.python.announce,comp.lang.python To: Sent: Tuesday, November 26, 2002 10:44 AM Subject: [Numpy-discussion] ANN: numarray-0.4 released > Numarray 0.4 > --------------------------------- > > Numarray is an array processing package designed to efficiently > manipulate large multi-dimensional arrays. Numarray is modelled after > Numeric and features c-code generated from python template scripts, > the capacity to operate directly on arrays in files, and improved type > promotions. > > Version 0.4 is a relatively large update with these features: > > 1. C-basetypes have been added to NDArray and NumArray to accelerate > simple indexing and attribute access *and* improve Numeric compatability. > > 2. List <-> NumArray transformations have been sped up. > > 3. There's an ieeespecial module which should make it easier to find > and manipulate NANs and INFs. > > 4. There's now a boxcar function in the Convolve package for doing > fast 2D smoothing. Jochen Kupper also contributed a lineshape module > which is also part of the Convolve package. > > 5. Bug fixes for every reported bug between now and July-02. > > 6. Since I still haven't fixed the add-on Packages packaging, I built > windows binaries for all 4 packages so you don't have to build them > from source yourself. > > But... basetypes (and reorganization) aren't free: > > 1. The "native" aspects of the numarray C-API have changed in > backwards incompatible ways. In particular, the NDInfo struct is now > completely gone, since it was completely redundant to the new > basetypes which are modelled after Numeric's PyArrayObject. If you > actually *have* a numarray extension that this breaks, and it bugs > you, send it to me and I'll fix it for you. If there's enough > response, I'll automate the process of updating extension wrappers. > But I expect not. > > 2. I expect to hear about bugs which can cause numarray/Python to dump > core. 
Of course, I have no clue where they are. So... there may be > rapid re-releases to compensate. > > 3. Old pickles are not directly transferrable to numarray-0.4, but may > instead require some copy_reg fuddling because basetypes change the > pickle format. If you have old pickles you need to migrate, send me > e-mail and I'll help you figure out how to do it. > > 4. Make *really* sure you delete any old numarray modules you have > laying around. These can screw up numarray-0.4 royally. > > 5. Note for astronomers: PyFITS requires an update to work with > numarray-0.4. This should be available shortly, if it is not already. > My point is that you may be unable to use both numarray-0.4 and PyFITS > today. > > WHERE > ----------- > > Numarray-0.4 windows executable installers, source code, and manual is > here: > > http://sourceforge.net/project/showfiles.php?group_id=1369 > > Numarray is hosted by Source Forge in the same project which hosts Numeric: > > http://sourceforge.net/projects/numpy/ > > The web page for Numarray information is at: > > http://stsdas.stsci.edu/numarray/index.html > > Trackers for Numarray Bugs, Feature Requests, Support, and Patches are at > the Source Forge project for NumPy at: > > http://sourceforge.net/tracker/?group_id=1369 > > REQUIREMENTS > -------------------------- > > numarray-0.4 requires Python 2.2.0 or greater. > > > AUTHORS, LICENSE > ------------------------------ > > Numarray was written by Perry Greenfield, Rick White, Todd Miller, JC > Hsu, Paul Barrett, Phil Hodge at the Space Telescope Science > Institute. Thanks go to Jochen Kupper of the University of North > Carolina for his work on Numarray and for porting the Numarray manual > to TeX format. > > Numarray is made available under a BSD-style License. See > LICENSE.txt in the source distribution for details. > > -- > Todd Miller jmiller at stsci.edu > > > > ------------------------------------------------------- > This SF.net email is sponsored by: Get the new Palm Tungsten T > handheld. Power & Color in a compact size! > http://ads.sourceforge.net/cgi-bin/redirect.pl?palm0002en > _______________________________________________ > Numpy-discussion mailing list > Numpy-discussion at lists.sourceforge.net > https://lists.sourceforge.net/lists/listinfo/numpy-discussion > From perry at stsci.edu Wed Nov 27 12:48:06 2002 From: perry at stsci.edu (Perry Greenfield) Date: Wed Nov 27 12:48:06 2002 Subject: [Numpy-discussion] ANN: numarray-0.4 released In-Reply-To: <01a301c29654$f1d8c610$3b45da80@rodan> Message-ID: > That's good news!! > Since I just signed up to this list I have some (more general questions) > 1) How active is this list ? Now I get like 1-2 emails a day (but 3 months > ago or so I got like 20 ...) Yes, it has been slower lately (there are sometimes related discussions on the scipy mailing lists that appear to have more traffic lately. > 2) Are most people here talking about Numeric (NumPy) or numarray ? > Who is actively writing/implementing numarray and is there a specific > mailing list for that ? > No specific mailing list for numarray. I'd guess that currently the largest user community for numarray is the astronomical one, primarily because we are distributing software that requires it to the community. Probably not many developers for yet now, but we are starting to look at making scipy compatible with numarray, and settle some remaining interface issues (but I'm going to wait until after Thankgiving before starting that). 
> 3) I was just starting with some C code to generate numarray lists (a week > ago) and now > the main data struct (NDarray) just disappeared... is that > good news !? > (Maybe the question should be: What is a "first class" python > object ? ) Good news. Probably not if you wrote code using it ;-), but we changed it so that numarray would be more compatible with existing Numeric C extensions, and that was the price for doing so. I think it is good news for those that have existing C extensions for whenever they plan to migrate to numarray. Todd should answer detailed questions about the C interface, but he cleverly decided to go on vacation after releasing 0.4 until December 9. > 4) In NDarray there was a special pointer (void *imag) for complex data. > (without much documentation actually..) > How are complex array handled in numarray 0.4? Examples would be nice > !! ;-) > Writing up documentation for C-API issues is a big need and a high priority. >
From rob at pythonemproject.com Thu Nov 28 04:45:01 2002 From: rob at pythonemproject.com (Rob) Date: Thu Nov 28 04:45:01 2002 Subject: [Numpy-discussion] Numpy site to be in IEEE Antennas and Propagation magazine Message-ID: <3DE60E54.AE5835BE@pythonemproject.com> Hi all, I haven't mentioned it for almost a year now, since it never happened :) , but I really am this time supposed to have my site (see sig) in IEEE Antennas and Propagation Society magazine. The Dec 02 issue. They goofed up and gave their apologies as it was originally supposed to be in this year's June issue. Now you guys are giving up Numpy and starting Numarray :) I hope the last Numpy distribution will still be available on the main site, so people can run my programs. Later, I can go in and convert them to Numarray. Rob. -- ----------------------------- The Numeric Python EM Project www.pythonemproject.com
URL: From bondpaper at earthlink.net Thu Nov 7 09:33:02 2002 From: bondpaper at earthlink.net (bondpaper) Date: Thu Nov 7 09:33:02 2002 Subject: [Numpy-discussion] Installing Numerical Python Message-ID: <3DCAA4DB.60500@earthlink.net> Hello, I'm have both Python 1.5 and Python2.2.1 on a Redhat 7.3 system, and when I try to install Numerical Python (v. 22), I get an error telling me that it cannot find the file /usr/lib/python2.2/config/Makefile. The command I use for the install is: python2 install.py build. The Python2.2 install comes directly from the RPMs on the python.org web site. Does anyone know how I might resolve this? Thanks. Tom From falted at openlc.org Thu Nov 7 10:14:05 2002 From: falted at openlc.org (Francesc Alted) Date: Thu Nov 7 10:14:05 2002 Subject: [Numpy-discussion] Installing Numerical Python In-Reply-To: <3DCAA4DB.60500@earthlink.net> References: <3DCAA4DB.60500@earthlink.net> Message-ID: <20021107181314.GB1262@openlc.org> On Thu, Nov 07, 2002 at 10:37:31AM -0700, bondpaper wrote: > I'm have both Python 1.5 and Python2.2.1 on a Redhat 7.3 system, and > when I try to install Numerical Python (v. 22), I get an error telling > me that it cannot find the file /usr/lib/python2.2/config/Makefile. The > command I use for the install is: python2 install.py build. The > Python2.2 install comes directly from the RPMs on the python.org web > site. Does anyone know how I might resolve this? It should be. Maybe you need to install the development version packages of python. For 2.2.1 you can find it at: http://www.python.org/ftp/python/2.2.1/rpms/rh7.3/python2-devel-2.2.1-2.i386.rpm Bye, -- Francesc Alted PGP KeyID: 0x61C8C11F Scientific aplications developer Public PGP key available: http://www.openlc.org/falted_at_openlc.asc Key fingerprint = 1518 38FE 3A3D 8BE8 24A0 3E5B 1328 32CC 61C8 C11F From oliphant at ee.byu.edu Thu Nov 7 10:32:08 2002 From: oliphant at ee.byu.edu (Travis Oliphant) Date: Thu Nov 7 10:32:08 2002 Subject: [Numpy-discussion] Installing Numerical Python In-Reply-To: <3DCAA4DB.60500@earthlink.net> Message-ID: > > Hello, > > I'm have both Python 1.5 and Python2.2.1 on a Redhat 7.3 system, and > when I try to install Numerical Python (v. 22), I get an error telling > me that it cannot find the file /usr/lib/python2.2/config/Makefile. The > command I use for the install is: python2 install.py build. The > Python2.2 install comes directly from the RPMs on the python.org web > site. Does anyone know how I might resolve this? You probably need the python-devel package also. -Travis O. From paul at pfdubois.com Thu Nov 7 17:15:09 2002 From: paul at pfdubois.com (Paul F Dubois) Date: Thu Nov 7 17:15:09 2002 Subject: [Numpy-discussion] note on mail lists Message-ID: <000201c286c4$2708df00$6501a8c0@NICKLEBY> Please note that the numpy-developers list is no longer active. Do not send mail to this address; it will be discarded. Use numpy-discussion for questions and problems and the bug list at Source Forge for actual bugs only. Thanks! From Marc.Poinot at onera.fr Tue Nov 19 00:58:03 2002 From: Marc.Poinot at onera.fr (Marc Poinot) Date: Tue Nov 19 00:58:03 2002 Subject: [Numpy-discussion] Array data ownership Message-ID: <3DD9FCF8.7C175CFF@onera.fr> Hi all, I use the Numpy C API to produce/use some PyArrayObjects. To set the allocated memory zone, I use the PyArray_FromDimsAndData function, which is described to "be used to access global data that will never be freed". That's what I want. 
Or more exactly, I want "global data that will never be freed by Numpy, until I tell it to do so !" I mean some of my arrays are allocated and used as PyArrayObject data, but I want some of them to be seen by Numpy as its own data. I want it to deallocate the data at delete time. My questions are : [1] Is PyArray_FromDimsAndData the right function or should I use another way ? [2] Can I use the PyArrayObject.flags bit "owns the data area" to set it after the PyArray_FromDimsAndData call ? In the case of "yes", which is the bit rank ? The fourth starting from right ? Any macro already doing this ? Will I break the PyArrayObject consistency ? Marcvs [alias yes I can go into the code... I though OO was reading docs and using interfaces ;] From hinsen at cnrs-orleans.fr Tue Nov 19 03:30:04 2002 From: hinsen at cnrs-orleans.fr (Konrad Hinsen) Date: Tue Nov 19 03:30:04 2002 Subject: [Numpy-discussion] Array data ownership In-Reply-To: <3DD9FCF8.7C175CFF@onera.fr> References: <3DD9FCF8.7C175CFF@onera.fr> Message-ID: Marc Poinot writes: > That's what I want. Or more exactly, I want "global data that will never be > freed by Numpy, until I tell it to do so !" That option does not exist. The options are: 1) NumPy manages the data space of your array. It gets freed when the last array object referencing it is destroyed. 2) NumPy assumes that the data space is already allocated and is not freed as long as any array object might reference it (which, in practice, is until the end of the process). PyArray_FromDimsAndData is used for allocating arrays that choose the second option. If I understand you correctly, you want NumPy to create an array object and allocate the data space, but make sure that the data space is not freed before you "allow" it. In that case, just create an ordinary array and keep an additional reference to it. When the data space may be destroyed, you remove the reference. However, there is no guarantee that the data space will be freed immediately, as there might still be other references around. > [2] Can I use the PyArrayObject.flags bit "owns the data area" to set it > after the PyArray_FromDimsAndData call ? In the case of "yes", which Whatever this does, it is not documented. Konrad. -- ------------------------------------------------------------------------------- Konrad Hinsen | E-Mail: hinsen at cnrs-orleans.fr Centre de Biophysique Moleculaire (CNRS) | Tel.: +33-2.38.25.56.24 Rue Charles Sadron | Fax: +33-2.38.63.15.17 45071 Orleans Cedex 2 | Deutsch/Esperanto/English/ France | Nederlands/Francais ------------------------------------------------------------------------------- From Marc.Poinot at onera.fr Tue Nov 19 04:25:02 2002 From: Marc.Poinot at onera.fr (Marc Poinot) Date: Tue Nov 19 04:25:02 2002 Subject: [Numpy-discussion] Array data ownership References: <3DD9FCF8.7C175CFF@onera.fr> Message-ID: <3DDA2D82.1520A69F@onera.fr> Konrad Hinsen wrote: > > If I understand you correctly, you want NumPy to create an array > object and allocate the data space, but make sure that the data space > is not freed before you "allow" it. In that case, just create an > ordinary array and keep an additional reference to it. When the data > space may be destroyed, you remove the reference. However, there is no > guarantee that the data space will be freed immediately, as there > might still be other references around. > No. I want to set the memory zone of the array but once this zone is set, I want numpy to manage it as if it was owner of the memory. 
I have an external lib which allocates the returned memory zone. I put this memory zone into a PyArrayObject using PyArray_FromDimsAndData in order to avoid memory copy. But I want now this array to be the owner of the allocated zone. I mean I want this zone to be released if the Python object is deleted. The ref count of Python is ok for me, as long as an array is sharing the data, python won't release it. But I want Python to delete the memory zone if the last reference is removed. Marcvs [alias I'll have a try with myarray->flags |= OWN_DATA; and I'll let you know about my experiments...] From hinsen at cnrs-orleans.fr Tue Nov 19 05:29:06 2002 From: hinsen at cnrs-orleans.fr (Konrad Hinsen) Date: Tue Nov 19 05:29:06 2002 Subject: [Numpy-discussion] Array data ownership In-Reply-To: <3DDA2D82.1520A69F@onera.fr> (message from Marc Poinot on Tue, 19 Nov 2002 13:24:34 +0100) References: <3DD9FCF8.7C175CFF@onera.fr> <3DDA2D82.1520A69F@onera.fr> Message-ID: <200211191327.gAJDRWG23603@chinon.cnrs-orleans.fr> > No. I want to set the memory zone of the array but once this zone is > set, I want numpy to manage it as if it was owner of the memory. That is the most frequent case for which there is no clean solution. There ought to be an array constructor that takes a pointer to a deallocation function which is called to free the data space. You can do myarray->flags |= OWN_DATA, then the data space will be freed using the standard free() function. But this is undocumented, and works only if the standard OS memory allocation calls were used to allocate the memory. Konrad. -- ------------------------------------------------------------------------------- Konrad Hinsen | E-Mail: hinsen at cnrs-orleans.fr Centre de Biophysique Moleculaire (CNRS) | Tel.: +33-2.38.25.56.24 Rue Charles Sadron | Fax: +33-2.38.63.15.17 45071 Orleans Cedex 2 | Deutsch/Esperanto/English/ France | Nederlands/Francais ------------------------------------------------------------------------------- From falted at openlc.org Tue Nov 19 11:33:04 2002 From: falted at openlc.org (Francesc Alted) Date: Tue Nov 19 11:33:04 2002 Subject: [Numpy-discussion] [ANN] PyTables 0.2 is out Message-ID: <20021119193209.GE994@openlc.org> Announcing PyTables 0.2 ----------------------- What's new ----------- - Numerical Python arrays supported! - Much improved documentation - Programming API almost stable - Improved navegability across the object tree - Added more unit tests (there are almost 50) - Dropped HDF5_HL dependency (a tailored version is included in sources now) - License changed from LGPL to BSD What is ------- The goal of PyTables is to enable the end user to manipulate easily scientific data tables and Numerical Python objects (new in 0.2!) in a persistent hierarchical structure. The foundation of the underlying hierachical data organization is the excellent HDF5 library (http://hdf.ncsa.uiuc.edu/HDF5). Right now, PyTables provides limited support of all the HDF5 functions, but I hope to add the more interesting ones (for PyTables needs) in the near future. Nonetheless, this package is not intended to serve as a complete wrapper for the entire HDF5 API. A table is defined as a collection of records whose values are stored in fixed-length fields. All records have the same structure and all values in each field have the same data type. 
The terms "fixed-length" and strict "data types" seems to be quite a strange requirement for an interpreted language like Python, but they serve a useful function if the goal is to save very large quantities of data (such as is generated by many scientifc applications, for example) in an efficient manner that reduces demand on CPU time and I/O. In order to emulate records (C structs in HDF5) in Python, PyTables implements a special metaclass that detects errors in field assignments as well as range overflows. PyTables also provides a powerful interface to process table data. Quite a bit effort has been invested to make browsing the hierarchical data structure a pleasant experience. PyTables implements just three (orthogonal) easy-to-use methods for browsing. What is HDF5? ------------- For those people who know nothing about HDF5, it is is a general purpose library and file format for storing scientific data made at NCSA. HDF5 can store two primary objects: datasets and groups. A dataset is essentially a multidimensional array of data elements, and a group is a structure for organizing objects in an HDF5 file. Using these two basic constructs, one can create and store almost any kind of scientific data structure, such as images, arrays of vectors, and structured and unstructured grids. You can also mix and match them in HDF5 files according to your needs. How fast is it? --------------- Despite to be an alpha version and that there is lot of room for improvements (it's still CPU bounded!), PyTables can read and write tables quite fast. But, if you want some (very preliminary) figures (just to know orders of magnitude), in a AMD Athlon at 900 it can currently read from 40000 up to 60000 records/s and write from 5000 up to 13000 records/s. Raw data speed in read mode ranges from 1 MB/s up to 2 MB/s, and it drops to the 200 KB/s - 600 KB/s range for writes. Go to http://pytables.sf.net/bench.html for a somewhat more detailed description of this small (and synthetic) benchmark. Anyway, this is only the beginning (premature optimization is the root of all evils, you know ;-). Platforms --------- I'm using Linux as the main development platform, but PyTables should be easy to compile/install on other UNIX machines. Thanks to Scott Prater, this package has passed all the tests on a UltraSparc platform with Solaris 7. It also compiles and passes all the tests on a SGI Origin2000 with MIPS R12000 processors and running IRIX 6.5. If you are using Windows and you get the library to work, please let me know. An example? ----------- At the bottom of this message there is some code (less that 100 lines and only less than half being real code) that shows basic capabilities of PyTables. Web site -------- Go to the PyTables web site for more details: http://pytables.sf.net/ Final note ---------- This is second alpha release, and probably last alpha, so it is still time if you want to suggest some API addition/change or addition/change of any useful missing capability. Let me know of any bugs, suggestions, gripes, kudos, etc. you may have. -- Francesc Alted falted at openlc.org *-*-*-**-*-*-**-*-*-**-*-*- Small code example *-*-*-**-*-*-**-*-*-**-*-*-* """Small but almost complete example showing the PyTables mode of use. As a result of execution, a 'tutorial1.h5' file is created. You can look at it with whatever HDF5 generic utility, like h5ls, h5dump or h5view. 
""" import sys from Numeric import * from tables import * #'-**-**-**-**-**-**- user record definition -**-**-**-**-**-**-**-' # Define a user record to characterize some kind of particles class Particle(IsRecord): name = '16s' # 16-character String idnumber = 'Q' # unsigned long long (i.e. 64-bit integer) TDCcount = 'B' # unsigned byte ADCcount = 'H' # unsigned short integer grid_i = 'i' # integer grid_j = 'i' # integer pressure = 'f' # float (single-precision) energy = 'd' # double (double-precision) print print '-**-**-**-**-**-**- file creation -**-**-**-**-**-**-**-' # The name of our HDF5 filename filename = "tutorial1.h5" print "Creating file:", filename # Open a file in "w"rite mode h5file = openFile(filename, mode = "w", title = "Test file") print print '-**-**-**-**-**-**- group an table creation -**-**-**-**-**-**-**-' # Create a new group under "/" (root) group = h5file.createGroup("/", 'detector', 'Detector information') print "Group '/detector' created" # Create one table on it table = h5file.createTable(group, 'readout', Particle(), "Readout example") print "Table '/detector/readout' created" # Get a shortcut to the record object in table particle = table.record # Fill the table with 10 particles for i in xrange(10): # First, assign the values to the Particle record particle.name = 'Particle: %6d' % (i) particle.TDCcount = i % 256 particle.ADCcount = (i * 256) % (1 << 16) particle.grid_i = i particle.grid_j = 10 - i particle.pressure = float(i*i) particle.energy = float(particle.pressure ** 4) particle.idnumber = i * (2 ** 34) # This exceeds long integer range # Insert a new particle record table.appendAsRecord(particle) # Flush the buffers for table table.flush() print print '-**-**-**-**-**-**- table data reading & selection -**-**-**-**-**-' # Read actual data from table. We are interested in collecting pressure values # on entries where TDCcount field is greater than 3 and pressure less than 50 pressure = [ x.pressure for x in table.readAsRecords() if x.TDCcount > 3 and x.pressure < 50 ] print "Last record read:" print x print "Field pressure elements satisfying the cuts ==>", pressure # Read also the names with the same cuts names = [ x.name for x in table.readAsRecords() if x.TDCcount > 3 and x.pressure < 50 ] print print '-**-**-**-**-**-**- array object creation -**-**-**-**-**-**-**-' print "Creating a new group called '/columns' to hold new arrays" gcolumns = h5file.createGroup(h5file.root, "columns", "Pressure and Name") print "Creating a Numeric array called 'pressure' under '/columns' group" h5file.createArray(gcolumns, 'pressure', array(pressure), "Pressure column selection") print "Creating another Numeric array called 'name' under '/columns' group" h5file.createArray('/columns', 'name', array(names), "Name column selection") # Close the file h5file.close() print "File '"+filename+"' created" From jdhunter at ace.bsd.uchicago.edu Wed Nov 20 14:37:02 2002 From: jdhunter at ace.bsd.uchicago.edu (John Hunter) Date: Wed Nov 20 14:37:02 2002 Subject: [Numpy-discussion] numpy with pygsl Message-ID: If I import pygsl.rng before importing Numeric, I get an abort mother:~> python Python 2.2.2 (#1, Oct 15 2002, 08:14:58) [GCC 3.2] on linux2 Type "help", "copyright", "credits" or "license" for more information. >>> import pygsl.rng >>> import Numeric python: Modules/gcmodule.c:366: delete_garbage: Assertion `((((PyGC_Head *)( op)-1))->gc.gc_refs >= 0)' failed. Abort If I import them in the other order, I have no problems. 
From jdhunter at ace.bsd.uchicago.edu Wed Nov 20 14:37:02 2002 From: jdhunter at ace.bsd.uchicago.edu (John Hunter) Date: Wed Nov 20 14:37:02 2002 Subject: [Numpy-discussion] numpy with pygsl Message-ID:

If I import pygsl.rng before importing Numeric, I get an abort:

mother:~> python
Python 2.2.2 (#1, Oct 15 2002, 08:14:58) [GCC 3.2] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> import pygsl.rng
>>> import Numeric
python: Modules/gcmodule.c:366: delete_garbage: Assertion `((((PyGC_Head *)(op)-1))->gc.gc_refs >= 0)' failed.
Abort

If I import them in the other order, I have no problems.

Numeric 22.0
pygsl version = "0.1a"
gsl-1.2

Any ideas?

Thanks,
John Hunter

From j_r_fonseca at yahoo.co.uk Fri Nov 22 05:34:02 2002 From: j_r_fonseca at yahoo.co.uk (=?iso-8859-15?Q?Jos=E9?= Fonseca) Date: Fri Nov 22 05:34:02 2002 Subject: [Numpy-discussion] Ann: ARPACK bindings to Numeric Python Message-ID: <20021122133308.GA26237@localhost.localdomain>

I've made a Numeric Python binding of the ARPACK library. ARPACK is a collection of Fortran77 subroutines designed to solve large scale eigenvalue problems, available at http://www.caam.rice.edu/software/ARPACK/ .

These bindings have the following features:

- Correspondence for all ARPACK calls
- In-place operation for all matrices
- Easy access to ARPACK debugging control variables
- Online help for all calls, with the correct Python/C 0-based indexing (automatically converted from the sources with the aid of a sed script)
- Ports of [unfortunately not all of] the original examples

These bindings weren't generated with any automatic binding generation tool. Even though I initially tried both PyFortran and f2py, both turned out to be inappropriate for the specificity of the ARPACK API. ARPACK uses a 'reverse communication interface', where the API successively returns control to the caller, which must take some update steps and then re-call the API with most arguments untouched. The intelligent (and silent) argument conversions made by the above tools made it very difficult to implement and debug even the simplest example. Also, for large-scale problems we wouldn't want any kind of array conversion/transposing happening behind the scenes, as that would completely kill performance.

The source is available at http://jrfonseca.dyndns.org/work/phd/python/modules/arpack/dist/arpack-1.0.tar.bz2 . The ARPACK library is not included and must be obtained and compiled separately, and setup.py must be modified to reflect your system BLAS/LAPACK library.

The bindings are API-centric. Nevertheless, a Python wrapper around these calls can easily be made, in which all kinds of type conversions and safety checks can be done. I have done one myself for sparse matrix eigenvalue determination, using UMFPACK for the sparse matrix factorization (for which a simple binding - just for double precision matrices - is available at http://jrfonseca.dyndns.org/work/phd/python/modules/umfpack/ ).

I hope you find this interesting.

Regards,

José Fonseca
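The 'reverse communication interface' is easier to picture with a driver loop in front of you. The sketch below shows only the control-flow pattern; the function names and the workd mapping are placeholders rather than the actual binding API, although the ido codes follow the usual ARPACK convention (+/-1 means "compute y = OP*x", 99 means "done").

# Schematic reverse-communication driver loop.  'driver_step' stands in for an
# ARPACK routine such as dsaupd, and 'apply_operator' is the caller's y = A*x.
def reverse_communication(driver_step, apply_operator, workd):
    ido = 0
    while 1:
        ido = driver_step(ido, workd)      # driver hands control back with a request code
        if ido == 99:                      # iteration finished / converged
            break
        elif ido in (-1, 1):               # driver asks for y = OP * x
            workd['y'][:] = apply_operator(workd['x'])
        else:
            raise ValueError("unexpected ido code: %d" % ido)
    return workd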
From verveer at embl-heidelberg.de Sun Nov 24 08:41:02 2002 From: verveer at embl-heidelberg.de (verveer at embl-heidelberg.de) Date: Sun Nov 24 08:41:02 2002 Subject: [Numpy-discussion] dimensions of zero length Message-ID: <1038155955.3de100b33cecd@webmail.EMBL-Heidelberg.DE>

Hi all,

I noticed that in Numeric and in numarray it is possible to create arrays with axes of zero length. For instance: zeros([1, 0]). There does not seem to be much that can be done with them. What is the reason for their existence?

My real question is: when writing an extension in C, how should I deal with such arrays? Should I treat them as empty arrays that do not have any data?

Cheers, Peter

--
Dr. Peter J. Verveer
Cell Biology and Cell Biophysics Programme
EMBL
Meyerhofstrasse 1
D-69117 Heidelberg
Germany
Tel.: +49 6221 387245
Fax: +49 6221 387242
Email: Peter.Verveer at embl-heidelberg.de

From j_r_fonseca at yahoo.co.uk Sun Nov 24 08:53:04 2002 From: j_r_fonseca at yahoo.co.uk (=?iso-8859-15?Q?Jos=E9?= Fonseca) Date: Sun Nov 24 08:53:04 2002 Subject: [Numpy-discussion] Ann: ARPACK bindings to Numeric Python In-Reply-To: <20021122133308.GA26237@localhost.localdomain> References: <20021122133308.GA26237@localhost.localdomain> Message-ID: <20021124165236.GA27610@localhost.localdomain>

On Fri, Nov 22, 2002 at 01:33:08PM +0000, José Fonseca wrote:
> The source is available at
> http://jrfonseca.dyndns.org/work/phd/python/modules/arpack/dist/arpack-1.0.tar.bz2
> . The ARPACK library is not included and must be obtained and compiled
> separately, and setup.py must be modified to reflect your system
> BLAS/LAPACK library.

Two header files, arpack.h and arpackmodule.h, were missing from the above package. This has been corrected now. Thanks to Greg Whittier for pointing that out. Slightly more detailed install instructions were also added.

I'm also considering whether or not I should bundle ARPACK in the source package and then use scipy_distutils to compile the Fortran source files. This would make it much easier to install, but I'm personally against statically linked libraries, as they inevitably lead to code duplication in memory. For example, until I manage to build a shared version, the ATLAS and LAPACK libraries appear triplicated in my programs - pulled in by Numeric, ARPACK and UMFPACK. This leads to huge code bloat and poor use of the processor's code cache, resulting in lower performance.

What is the opinion of the other subscribers regarding this?

José Fonseca

From hinsen at cnrs-orleans.fr Sun Nov 24 10:01:08 2002 From: hinsen at cnrs-orleans.fr (Konrad Hinsen) Date: Sun Nov 24 10:01:08 2002 Subject: [Numpy-discussion] Ann: ARPACK bindings to Numeric Python In-Reply-To: <20021124165236.GA27610@localhost.localdomain> References: <20021122133308.GA26237@localhost.localdomain> <20021124165236.GA27610@localhost.localdomain> Message-ID:

José Fonseca writes:

> much easier to install, but I'm personally against statically linked
> libraries, as they inevitably lead to code duplication in memory.

Me too.

Konrad.

From hinsen at cnrs-orleans.fr Sun Nov 24 10:04:04 2002 From: hinsen at cnrs-orleans.fr (Konrad Hinsen) Date: Sun Nov 24 10:04:04 2002 Subject: [Numpy-discussion] dimensions of zero length In-Reply-To: <1038155955.3de100b33cecd@webmail.EMBL-Heidelberg.DE> References: <1038155955.3de100b33cecd@webmail.EMBL-Heidelberg.DE> Message-ID:

writes:

> I noticed that in Numeric and in numarray it is possible to create
> arrays with axes of zero length. For instance: zeros([1, 0]). There
> does not seem to be much that can be done with them. What is the
> reason for their existence?

They often result as special cases from some operations. Think of them as the array equivalents of empty lists. Creating zero-size arrays explicitly can be useful when suitable starting values for iterations are needed.

> My real question is: when writing an extension in C, how should I deal
> with such arrays? Should I treat them as empty arrays that do not have
> any data?

Exactly.

Konrad.
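Both points are easy to see at the interactive prompt. A minimal sketch with Numeric (the accumulator at the end is just one example of an empty starting value for an iteration):

# Zero-length axes show up naturally from slicing, and an explicitly created
# zero-size array works as an empty starting value, much like an empty list.
from Numeric import zeros, concatenate, array, Int

a = zeros((3, 4))
print a[2:2].shape            # slicing can yield a zero-length axis: (0, 4)

acc = zeros((0,), Int)        # empty starting value for an accumulation loop
for i in range(3):
    acc = concatenate((acc, array([i, i * i])))
print acc                     # -> [0 0 1 1 2 4]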
From jmiller at stsci.edu Tue Nov 26 10:45:04 2002 From: jmiller at stsci.edu (Todd Miller) Date: Tue Nov 26 10:45:04 2002 Subject: [Numpy-discussion] ANN: numarray-0.4 released Message-ID: <3DE3C0FE.2010807@stsci.edu>

Numarray 0.4
---------------------------------

Numarray is an array processing package designed to efficiently manipulate large multi-dimensional arrays. Numarray is modelled after Numeric and features C code generated from Python template scripts, the capacity to operate directly on arrays in files, and improved type promotions.

Version 0.4 is a relatively large update with these features:

1. C basetypes have been added to NDArray and NumArray to accelerate simple indexing and attribute access *and* improve Numeric compatibility.

2. List <-> NumArray transformations have been sped up.

3. There's an ieeespecial module which should make it easier to find and manipulate NANs and INFs.

4. There's now a boxcar function in the Convolve package for doing fast 2D smoothing. Jochen Kupper also contributed a lineshape module which is also part of the Convolve package.

5. Bug fixes for every bug reported between July-02 and now.

6. Since I still haven't fixed the add-on Packages packaging, I built Windows binaries for all 4 packages so you don't have to build them from source yourself.

But... basetypes (and reorganization) aren't free:

1. The "native" aspects of the numarray C-API have changed in backwards-incompatible ways. In particular, the NDInfo struct is now completely gone, since it was completely redundant to the new basetypes, which are modelled after Numeric's PyArrayObject. If you actually *have* a numarray extension that this breaks, and it bugs you, send it to me and I'll fix it for you. If there's enough response, I'll automate the process of updating extension wrappers. But I expect not.

2. I expect to hear about bugs which can cause numarray/Python to dump core. Of course, I have no clue where they are. So... there may be rapid re-releases to compensate.

3. Old pickles are not directly transferable to numarray-0.4, but may instead require some copy_reg fuddling because basetypes change the pickle format. If you have old pickles you need to migrate, send me e-mail and I'll help you figure out how to do it.

4. Make *really* sure you delete any old numarray modules you have lying around. These can screw up numarray-0.4 royally.

5. Note for astronomers: PyFITS requires an update to work with numarray-0.4. This should be available shortly, if it is not already. My point is that you may be unable to use both numarray-0.4 and PyFITS today.

WHERE
-----------

Numarray-0.4 Windows executable installers, source code, and the manual are here:

http://sourceforge.net/project/showfiles.php?group_id=1369

Numarray is hosted by Source Forge in the same project which hosts Numeric:

http://sourceforge.net/projects/numpy/

The web page for Numarray information is at:

http://stsdas.stsci.edu/numarray/index.html

Trackers for Numarray Bugs, Feature Requests, Support, and Patches are at the Source Forge project for NumPy at:

http://sourceforge.net/tracker/?group_id=1369

REQUIREMENTS
--------------------------

numarray-0.4 requires Python 2.2.0 or greater.

AUTHORS, LICENSE
------------------------------

Numarray was written by Perry Greenfield, Rick White, Todd Miller, JC Hsu, Paul Barrett, and Phil Hodge at the Space Telescope Science Institute. Thanks go to Jochen Kupper of the University of North Carolina for his work on Numarray and for porting the Numarray manual to TeX format.
Numarray is made available under a BSD-style License. See LICENSE.txt in the source distribution for details.

--
Todd Miller jmiller at stsci.edu

From Marc.Poinot at onera.fr Wed Nov 27 05:16:03 2002 From: Marc.Poinot at onera.fr (Marc Poinot) Date: Wed Nov 27 05:16:03 2002 Subject: [Numpy-discussion] Displaying floats with Python Message-ID: <3DE4C551.DC039B67@onera.fr>

I'm not sure this is a problem, but I'm looking for a solution for it and I wonder if someone could give me a piece of advice. I have a C extension using doubles and floats. I return a float cast to double to Python from my extension, and when I display it I get some extra digits at the end of the "correct" number.

In the extension, dgv is a float (in this example dgv=0.1):

PyTuple_SET_ITEM(tp0, i, PyFloat_FromDouble((double)dgv));

I print it in Python:

print tuple[0]

Which produces: 0.10000000149

I get too many digits, because the print should not try to show more precision than the 4-byte float holds. It looks like the floatobject.c file sets a precision for printing, which is forced to 12 (#define PREC_STR 12). This works if you use a "double", but not for a "double" cast from a "float". This problem occurs on both SGI and DEC. With stdio:

printf("%.g\n", (float) dgv);
printf("%.g\n", (double)dgv);
printf("%.12g\n",(float) dgv);
printf("%.12g\n",(double)dgv);

produces (this is "CORRECT" behavior for printf; we're printing too many digits):

0.1
0.1
0.10000000149
0.10000000149

Any idea? How can I tell Python to forget the precision, or set it globally?

Marcvs [alias Yes, I could compute with integers only, but... ]

From Chris.Barker at noaa.gov Wed Nov 27 11:19:02 2002 From: Chris.Barker at noaa.gov (Chris Barker) Date: Wed Nov 27 11:19:02 2002 Subject: [Numpy-discussion] Displaying floats with Python References: <3DE4C551.DC039B67@onera.fr> Message-ID: <3DE513E2.6E3CA46@noaa.gov>

Marc Poinot wrote:
>
> I'm not sure this is a problem,

It's not.

> and when I display it I get some extra digits
> at the end of the "correct" number.

What you are seeing is the best decimal representation of the binary number that is stored in that double. While the extra binary bits of the double beyond those of the float should be zero, that does not mean that the extra decimal digits will be zero as well. In this case, you are trying to store the value of 1.0 / 10.0. That value cannot be represented exactly in binary. The value 0.10000000149 is as close as you can get with a C float, so you are getting the right answer (subject to the limitations of floating point representation and arithmetic), as demonstrated by your example:

> printf("%.12g\n",(float) dgv);
> 0.10000000149
> produces (this is "CORRECT" behavior for printf; we're printing
> too many digits)

It depends what you mean by too many. The above example shows the best decimal value you can get with 12 digits from your float value, which is the same as what Python has in its double.

By the way, your four printf examples also demonstrate that you get exactly the same results when casting a float to a double within C as when you do it while passing the value to Python (which you should expect: a Python float is a C double, after all).

By default, in a print statement, Python displays all the digits that are required to reproduce the number. If you don't want to see all those digits, do what you did in C:

>>> d = 0.10000000149
>>> print d
0.10000000149
>>> print "%g"%d
0.1
>>> print "%.12g"%d
0.10000000149

By the way, see http://www.python.org/doc/current/tut/node14.html for more explanation.

-Chris

--
Christopher Barker, Ph.D.
Oceanographer
NOAA/OR&R/HAZMAT
(206) 526-6959 voice
(206) 526-6329 fax
(206) 526-6317 main reception
7600 Sand Point Way NE
Seattle, WA 98115
Chris.Barker at noaa.gov
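Marc's observation can also be reproduced without any C extension at all: the struct module will round-trip a value through a 4-byte C float. A small sketch (the printed digits assume the usual IEEE-754 single precision and the 12-significant-digit str() of the Python version discussed above):

import struct

# Round-trip 0.1 through a 4-byte C float; the widened double is exactly
# the value Marc sees, and "%g" formatting trims the display back down.
d = struct.unpack('f', struct.pack('f', 0.1))[0]
print d             # -> 0.10000000149
print "%g" % d      # -> 0.1
print "%.7g" % d    # -> 0.1  (about the precision a 32-bit float really carries)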
From haase at msg.ucsf.edu Wed Nov 27 12:37:04 2002 From: haase at msg.ucsf.edu (Sebastian Haase) Date: Wed Nov 27 12:37:04 2002 Subject: [Numpy-discussion] ANN: numarray-0.4 released References: <3DE3C0FE.2010807@stsci.edu> Message-ID: <01a301c29654$f1d8c610$3b45da80@rodan>

That's good news!! Since I just signed up to this list, I have some more general questions:

1) How active is this list? Right now I get maybe 1-2 emails a day (but 3 months ago or so I got more like 20...).

2) Are most people here talking about Numeric (NumPy) or numarray? Who is actively writing/implementing numarray, and is there a specific mailing list for that?

3) I was just starting with some C code to generate numarray lists (a week ago) and now the main data struct (NDarray) has just disappeared... is that good news!? (Maybe the question should be: what is a "first class" Python object?)

4) In NDarray there was a special pointer (void *imag) for complex data (without much documentation, actually). How are complex arrays handled in numarray 0.4? Examples would be nice!! ;-)

Keep up all the good work.

Thanks,
Sebastian

----- Original Message -----
From: "Todd Miller"
Newsgroups: comp.lang.python.announce,comp.lang.python
To:
Sent: Tuesday, November 26, 2002 10:44 AM
Subject: [Numpy-discussion] ANN: numarray-0.4 released

> [...]
From perry at stsci.edu Wed Nov 27 12:48:06 2002 From: perry at stsci.edu (Perry Greenfield) Date: Wed Nov 27 12:48:06 2002 Subject: [Numpy-discussion] ANN: numarray-0.4 released In-Reply-To: <01a301c29654$f1d8c610$3b45da80@rodan> Message-ID:

> That's good news!!
> Since I just signed up to this list, I have some more general questions:
> 1) How active is this list? Right now I get maybe 1-2 emails a day (but
> 3 months ago or so I got more like 20...)

Yes, it has been slower lately (there are sometimes related discussions on the scipy mailing lists, which appear to have more traffic lately).

> 2) Are most people here talking about Numeric (NumPy) or numarray?
> Who is actively writing/implementing numarray, and is there a specific
> mailing list for that?

No specific mailing list for numarray. I'd guess that currently the largest user community for numarray is the astronomical one, primarily because the software we are distributing to the community requires it. Probably not many developers yet, but we are starting to look at making scipy compatible with numarray and at settling some remaining interface issues (but I'm going to wait until after Thanksgiving before starting that).
> 3) I was just starting with some C code to generate numarray lists (a week
> ago) and now the main data struct (NDarray) has just disappeared... is that
> good news!? (Maybe the question should be: what is a "first class" Python
> object?)

Good news. Probably not if you wrote code using it ;-), but we changed it so that numarray would be more compatible with existing Numeric C extensions, and that was the price for doing so. I think it is good news for those that have existing C extensions, for whenever they plan to migrate to numarray. Todd should answer detailed questions about the C interface, but he cleverly decided to go on vacation until December 9 after releasing 0.4.

> 4) In NDarray there was a special pointer (void *imag) for complex data
> (without much documentation, actually). How are complex arrays handled
> in numarray 0.4? Examples would be nice!! ;-)

Writing up documentation for C-API issues is a big need and a high priority.

From rob at pythonemproject.com Thu Nov 28 04:45:01 2002 From: rob at pythonemproject.com (Rob) Date: Thu Nov 28 04:45:01 2002 Subject: [Numpy-discussion] Numpy site to be in IEEE Antennas and Propagation magazine Message-ID: <3DE60E54.AE5835BE@pythonemproject.com>

Hi all,

I haven't mentioned it for almost a year now, since it never happened :), but this time I really am supposed to have my site (see sig) in the IEEE Antennas and Propagation Society magazine - the Dec 02 issue. They goofed up and gave their apologies, as it was originally supposed to be in this year's June issue. Now you guys are giving up Numpy and starting Numarray :) I hope the last Numpy distribution will still be available on the main site, so people can run my programs. Later, I can go in and convert them to Numarray.

Rob.

--
-----------------------------
The Numeric Python EM Project

www.pythonemproject.com