From oliphant at ee.byu.edu Fri Sep 2 16:55:01 2005
From: oliphant at ee.byu.edu (Travis Oliphant)
Date: Fri Sep 2 16:55:01 2005
Subject: [Numpy-discussion] scipy core (Numeric3) win32 binaries to play with
Message-ID: <4318E60D.4040304@ee.byu.edu>

If anybody has just been waiting for a windows binary to try out the new Numeric (scipy.base) you can download this.

from scipy.base import * (replaces from Numeric import *)

The installer is here: http://numeric.scipy.org/files/scipy_core-0.4.0.win32-py2.4.exe

From aisaac at american.edu Fri Sep 2 17:37:04 2005
From: aisaac at american.edu (Alan G Isaac)
Date: Fri Sep 2 17:37:04 2005
Subject: [Numpy-discussion] Re: [SciPy-dev] scipy core (Numeric3) win32 binaries to play with
In-Reply-To: <4318E60D.4040304@ee.byu.edu>
References: <4318E60D.4040304@ee.byu.edu>
Message-ID:

On Fri, 02 Sep 2005, Travis Oliphant apparently wrote:
> http://numeric.scipy.org/files/scipy_core-0.4.0.win32-py2.4.exe

So far so good. Thanks! Alan Isaac

From gnata at obs.univ-lyon1.fr Mon Sep 5 02:02:14 2005
From: gnata at obs.univ-lyon1.fr (Xavier Gnata)
Date: Mon Sep 5 02:02:14 2005
Subject: [Numpy-discussion] Re: [SciPy-user] Re: [SciPy-dev] scipy core (Numeric3) win32 binaries to play with
In-Reply-To:
References: <4318E60D.4040304@ee.byu.edu>
Message-ID: <431C09EE.7010105@obs.univ-lyon1.fr>

Alan G Isaac wrote:
>On Fri, 02 Sep 2005, Travis Oliphant apparently wrote:
>
>>http://numeric.scipy.org/files/scipy_core-0.4.0.win32-py2.4.exe
>>
>
>So far so good.
>
>Thanks!
>Alan Isaac
>
>_______________________________________________
>SciPy-user mailing list
>SciPy-user at scipy.net
>http://www.scipy.net/mailman/listinfo/scipy-user
>
>
Hi,
That's great news! :) Where are the sources corresponding to this windows release (I would like to test that under linux asap)? Is there any beta version documentation?
Thanks.
Xavier. 
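For anyone trying the binary above, the switch from Numeric is essentially the import line. A minimal smoke test, assuming scipy.base exposes the familiar Numeric-style names (arange, transpose, in-place shape assignment) as the announcement indicates:

    # old code:  from Numeric import *
    # new code:  the same names come from scipy.base
    from scipy.base import *

    a = arange(6)          # 0..5, as in Numeric
    a.shape = (2, 3)       # reshape in place
    print a                # Python 2.x print statement
    print a.shape, transpose(a).shape

Scripts that only use the core array functions should mostly run unchanged after this substitution; code relying on Numeric typecode characters or on a.flat may need the small adjustments Travis describes in his later scipy core messages in this archive.
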
From sheltraw at unm.edu Tue Sep 6 11:45:07 2005 From: sheltraw at unm.edu (Daniel Sheltraw) Date: Tue Sep 6 11:45:07 2005 Subject: [Numpy-discussion] Linux to Windows porting question Message-ID: Hello NumPy Listees I am trying to port some code to Windows that works fine under Linux. The offending line is: blk = fromstring(f_fid.read(BLOCK_LEN), num_type).byteswapped().astype(Float32).tostring() The error I get is: ValueError: string size must be a multiple of element size Does anyone have an idea where the problem might be? BLOCK_LEN is specified in bytes and num_type is Int32. Thanks, Daniel From rkern at ucsd.edu Tue Sep 6 11:50:05 2005 From: rkern at ucsd.edu (Robert Kern) Date: Tue Sep 6 11:50:05 2005 Subject: [Numpy-discussion] Linux to Windows porting question In-Reply-To: References: Message-ID: <431DE4A4.50204@ucsd.edu> Daniel Sheltraw wrote: > Hello NumPy Listees > > I am trying to port some code to Windows that works fine under Linux. > The offending line > is: > > blk = fromstring(f_fid.read(BLOCK_LEN), > num_type).byteswapped().astype(Float32).tostring() > > The error I get is: > > ValueError: string size must be a multiple of element size > > Does anyone have an idea where the problem might be? BLOCK_LEN is > specified in bytes > and num_type is Int32. Is f_fid opened in binary mode? f_fid = open(filename, 'rb') It should be. -- Robert Kern rkern at ucsd.edu "In the fields of hell where the grass grows high Are the graves of dreams allowed to die." -- Richard Harter From greg.ewing at canterbury.ac.nz Tue Sep 6 20:10:46 2005 From: greg.ewing at canterbury.ac.nz (Greg Ewing) Date: Tue Sep 6 20:10:46 2005 Subject: [Numpy-discussion] Linux to Windows porting question In-Reply-To: References: Message-ID: <431E59D0.6000401@canterbury.ac.nz> Daniel Sheltraw wrote: > blk = fromstring(f_fid.read(BLOCK_LEN), > num_type).byteswapped().astype(Float32).tostring() > > The error I get is: > > ValueError: string size must be a multiple of element size Did you open the file in binary mode? -- Greg Ewing, Computer Science Dept, +--------------------------------------+ University of Canterbury, | A citizen of NewZealandCorp, a | Christchurch, New Zealand | wholly-owned subsidiary of USA Inc. | greg.ewing at canterbury.ac.nz +--------------------------------------+ From sheltraw at unm.edu Wed Sep 7 13:41:10 2005 From: sheltraw at unm.edu (Daniel Sheltraw) Date: Wed Sep 7 13:41:10 2005 Subject: [Numpy-discussion] Linux to Windows porting question In-Reply-To: <431E59D0.6000401@canterbury.ac.nz> References: <431E59D0.6000401@canterbury.ac.nz> Message-ID: The question was answered yesterday and that was the answer> thanks On Wed, 07 Sep 2005 15:09:04 +1200 Greg Ewing wrote: > Daniel Sheltraw wrote: > >> blk = fromstring(f_fid.read(BLOCK_LEN), >> num_type).byteswapped().astype(Float32).tostring() >> >> The error I get is: >> >> ValueError: string size must be a multiple of >>element size > > Did you open the file in binary mode? > > -- > Greg Ewing, Computer Science Dept, >+--------------------------------------+ > University of Canterbury, | A citizen of >NewZealandCorp, a | > Christchurch, New Zealand | wholly-owned subsidiary >of USA Inc. | > greg.ewing at canterbury.ac.nz > +--------------------------------------+ From security at epaypal.com Wed Sep 7 20:57:04 2005 From: security at epaypal.com (PayPal Security Departament) Date: Wed Sep 7 20:57:04 2005 Subject: [Numpy-discussion] Security Measures - Are You Traveling? 
Message-ID: <1126151015.66200.qmail@paypal.com> An HTML attachment was scrubbed... URL: From phjoost at gmail.com Fri Sep 9 13:02:07 2005 From: phjoost at gmail.com (Joost van Evert) Date: Fri Sep 9 13:02:07 2005 Subject: [Numpy-discussion] [Fwd: compression in storage of Numeric/numarray objects] Message-ID: <1126296820.20258.0.camel@109-80.ipact.nl> -------------- next part -------------- An embedded message was scrubbed... From: unknown sender Subject: no subject Date: no date Size: 38 URL: From joostvanevert at gmail.com Fri Sep 9 12:08:02 2005 From: joostvanevert at gmail.com (Joost van Evert) Date: Fri, 09 Sep 2005 18:08:02 +0200 Subject: compression in storage of Numeric/numarray objects In-Reply-To: References: Message-ID: <1126282082.20664.11.camel@inpc93.et.tudelft.nl> Hi List, is it possible to use compression while storing numarray/Numeric objects? Regards, Joost --=-hPiNO5cct4UJ0DB79Znq-- From jdhunter at ace.bsd.uchicago.edu Fri Sep 9 13:08:12 2005 From: jdhunter at ace.bsd.uchicago.edu (John Hunter) Date: Fri Sep 9 13:08:12 2005 Subject: [Numpy-discussion] [Fwd: compression in storage of Numeric/numarray objects] In-Reply-To: <1126296820.20258.0.camel@109-80.ipact.nl> (Joost van Evert's message of "Fri, 09 Sep 2005 22:13:40 +0200") References: <1126296820.20258.0.camel@109-80.ipact.nl> Message-ID: <87d5ni0zvi.fsf@peds-pc311.bsd.uchicago.edu> >>>>> "Joost" == Joost van Evert writes: Joost> is it possible to use compression while storing Joost> numarray/Numeric objects? Sure In [35]: s = rand(10000) In [36]: file('uncompressed.dat', 'wb').write(s.tostring()) In [37]: ls -l uncompressed.dat -rw-r--r-- 1 jdhunter jdhunter 80000 2005-09-09 15:04 uncompressed.dat In [38]: gzip.open('compressed.dat', 'wb').write(s.tostring()) In [39]: ls -l compressed.dat -rw-r--r-- 1 jdhunter jdhunter 41393 2005-09-09 15:04 compressed.dat Compression ration for more regular data will be better. JDH From phjoost at gmail.com Fri Sep 9 13:29:10 2005 From: phjoost at gmail.com (Joost van Evert) Date: Fri Sep 9 13:29:10 2005 Subject: [Numpy-discussion] [Fwd: compression in storage of Numeric/numarray objects] In-Reply-To: <87d5ni0zvi.fsf@peds-pc311.bsd.uchicago.edu> References: <1126296820.20258.0.camel@109-80.ipact.nl> <87d5ni0zvi.fsf@peds-pc311.bsd.uchicago.edu> Message-ID: <1126298478.20258.11.camel@109-80.ipact.nl> On Fri, 2005-09-09 at 15:06 -0500, John Hunter wrote: > >>>>> "Joost" == Joost van Evert writes: > > Joost> is it possible to use compression while storing > Joost> numarray/Numeric objects? > > > Sure > > In [35]: s = rand(10000) > > In [36]: file('uncompressed.dat', 'wb').write(s.tostring()) > > In [37]: ls -l uncompressed.dat > -rw-r--r-- 1 jdhunter jdhunter 80000 2005-09-09 15:04 uncompressed.dat > > In [38]: gzip.open('compressed.dat', 'wb').write(s.tostring()) > > In [39]: ls -l compressed.dat > -rw-r--r-- 1 jdhunter jdhunter 41393 2005-09-09 15:04 compressed.dat > Thanks, this helps me, but I think not enough, because the arrays I work on are sometimes >1Gb(Correlation matrices). The tostring method would explode the size, and result in a lot of swapping. Ideally the compression also works with memmory mapped arrays. 
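(A rough workaround sketch for the tostring() blow-up described above: compress the array a block of rows at a time, so only one small temporary string exists at any moment. The helper name and chunk size are purely illustrative, and it assumes a numarray/Numeric-style 2-D array whose row slices support tostring().)

    import gzip

    def write_compressed(arr, filename, rows_per_chunk=1024):
        # Stream the array into a gzip file block by block instead of
        # converting the whole >1Gb array to one giant string first.
        out = gzip.open(filename, 'wb')
        for start in range(0, arr.shape[0], rows_per_chunk):
            out.write(arr[start:start + rows_per_chunk].tostring())
        out.close()

    # Reading it back still means decompressing the stream sequentially
    # (gzip files are not seekable), which is why combining compression
    # with memory-mapped access is the genuinely hard part.
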
Greets, Joost From perry at stsci.edu Fri Sep 9 13:53:07 2005 From: perry at stsci.edu (Perry Greenfield) Date: Fri Sep 9 13:53:07 2005 Subject: [Numpy-discussion] [Fwd: compression in storage of Numeric/numarray objects] In-Reply-To: <1126298478.20258.11.camel@109-80.ipact.nl> References: <1126296820.20258.0.camel@109-80.ipact.nl> <87d5ni0zvi.fsf@peds-pc311.bsd.uchicago.edu> <1126298478.20258.11.camel@109-80.ipact.nl> Message-ID: <1410842aabd84fa234c23c0b7090cbec@stsci.edu> On Sep 9, 2005, at 4:41 PM, Joost van Evert wrote: > On Fri, 2005-09-09 at 15:06 -0500, John Hunter wrote: >>>>>>> "Joost" == Joost van Evert writes: >> >> Joost> is it possible to use compression while storing >> Joost> numarray/Numeric objects? >> >> >> Sure >> >> In [35]: s = rand(10000) >> >> In [36]: file('uncompressed.dat', 'wb').write(s.tostring()) >> >> In [37]: ls -l uncompressed.dat >> -rw-r--r-- 1 jdhunter jdhunter 80000 2005-09-09 15:04 >> uncompressed.dat >> >> In [38]: gzip.open('compressed.dat', 'wb').write(s.tostring()) >> >> In [39]: ls -l compressed.dat >> -rw-r--r-- 1 jdhunter jdhunter 41393 2005-09-09 15:04 >> compressed.dat >> > Thanks, this helps me, but I think not enough, because the arrays I > work > on are sometimes >1Gb(Correlation matrices). The tostring method would > explode the size, and result in a lot of swapping. Ideally the > compression also works with memmory mapped arrays. > Well, it seems to me that you are asking for quite a lot if you expect it to work with memory-mapped arrays that are compressed (I'm assuming you mean that individual values are decompressed on the fly as they are needed). This is something that we gave some thought to a few years ago, but it seemed that supporting such capabilities was far too complicated, at least for now. Besides some operations are bound to blow up (e.g., take on a compressed array). But I'm still not sure what you are trying to do and what you would like to see happen underneath. An example would do a lot to explain what your needs are. Thanks, Perry Greenfield From focke at slac.stanford.edu Fri Sep 9 13:56:10 2005 From: focke at slac.stanford.edu (Warren Focke) Date: Fri Sep 9 13:56:10 2005 Subject: [Numpy-discussion] [Fwd: compression in storage of Numeric/numarray objects] In-Reply-To: <1126298478.20258.11.camel@109-80.ipact.nl> References: <1126296820.20258.0.camel@109-80.ipact.nl> <87d5ni0zvi.fsf@peds-pc311.bsd.uchicago.edu> <1126298478.20258.11.camel@109-80.ipact.nl> Message-ID: You may be able to avoid the tostring() overhead by using tofile(): s.tofile(gzip.open('compressed.dat', 'wb')) You are probably SOL on the mmapping, though. w On Fri, 9 Sep 2005, Joost van Evert wrote: > On Fri, 2005-09-09 at 15:06 -0500, John Hunter wrote: > > >>>>> "Joost" == Joost van Evert writes: > > > > Joost> is it possible to use compression while storing > > Joost> numarray/Numeric objects? > > > > > > Sure > > > > In [35]: s = rand(10000) > > > > In [36]: file('uncompressed.dat', 'wb').write(s.tostring()) > > > > In [37]: ls -l uncompressed.dat > > -rw-r--r-- 1 jdhunter jdhunter 80000 2005-09-09 15:04 uncompressed.dat > > > > In [38]: gzip.open('compressed.dat', 'wb').write(s.tostring()) > > > > In [39]: ls -l compressed.dat > > -rw-r--r-- 1 jdhunter jdhunter 41393 2005-09-09 15:04 compressed.dat > > > Thanks, this helps me, but I think not enough, because the arrays I work > on are sometimes >1Gb(Correlation matrices). The tostring method would > explode the size, and result in a lot of swapping. 
Ideally the > compression also works with memmory mapped arrays. > > Greets, > > Joost > > > > ------------------------------------------------------- > SF.Net email is Sponsored by the Better Software Conference & EXPO > September 19-22, 2005 * San Francisco, CA * Development Lifecycle Practices > Agile & Plan-Driven Development * Managing Projects & Teams * Testing & QA > Security * Process Improvement & Measurement * http://www.sqe.com/bsce5sf > _______________________________________________ > Numpy-discussion mailing list > Numpy-discussion at lists.sourceforge.net > https://lists.sourceforge.net/lists/listinfo/numpy-discussion > From skip at pobox.com Sun Sep 11 08:26:23 2005 From: skip at pobox.com (skip at pobox.com) Date: Sun Sep 11 08:26:23 2005 Subject: [Numpy-discussion] [Fwd: compression in storage of Numeric/numarray objects] In-Reply-To: <1126298478.20258.11.camel@109-80.ipact.nl> References: <1126296820.20258.0.camel@109-80.ipact.nl> <87d5ni0zvi.fsf@peds-pc311.bsd.uchicago.edu> <1126298478.20258.11.camel@109-80.ipact.nl> Message-ID: <17188.19552.306290.918560@montanaro.dyndns.org> Joost> is it possible to use compression while storing Joost> numarray/Numeric objects? Try the gzip or bz2 modules. Both have file-like objects that transparently (de)compress data as it is read or written. Joost> Ideally the compression also works with memmory mapped arrays. Dunno, but probably not. You'll have to experiment. Skip From faltet at carabos.com Mon Sep 12 06:02:34 2005 From: faltet at carabos.com (Francesc Altet) Date: Mon Sep 12 06:02:34 2005 Subject: [Numpy-discussion] [Fwd: compression in storage of Numeric/numarray objects] In-Reply-To: <1126298478.20258.11.camel@109-80.ipact.nl> References: <1126296820.20258.0.camel@109-80.ipact.nl> <87d5ni0zvi.fsf@peds-pc311.bsd.uchicago.edu> <1126298478.20258.11.camel@109-80.ipact.nl> Message-ID: <1126530688.3012.35.camel@localhost.localdomain> El dv 09 de 09 del 2005 a les 22:41 +0200, en/na Joost van Evert va escriure: > On Fri, 2005-09-09 at 15:06 -0500, John Hunter wrote: > Thanks, this helps me, but I think not enough, because the arrays I work > on are sometimes >1Gb(Correlation matrices). The tostring method would > explode the size, and result in a lot of swapping. Ideally the > compression also works with memmory mapped arrays. [mode advertising on, be warned ] You may want to use pytables [1]. It supports on-line data compression and access to data on-disk on a similar way than memory-mapped arrays. 
Example of use: In [66]:f=tables.openFile("/tmp/test-zlib.h5","w") In [67]:fzlib=tables.Filters(complevel=1, complib="zlib") # the filter In [68]:chunk=tables.Float64Atom(shape=(50,50)) # the data 'chunk' In [69]:carr=f.createCArray(f.root, "carr",(1000, 1000),chunk,'',fzlib) In [70]:carr[:]=numarray.random_array.random((1000,1000)) In [71]:f.close() In [72]:ls -l /tmp/test-zlib.h5 -rw-r--r-- 1 faltet users 3680721 2005-09-12 14:27 /tmp/test-zlib.h5 Now, you can access the data on disk as if it was in-memory: In [73]:f=tables.openFile("/tmp/test-zlib.h5","r") In [74]:f.root.carr[300,200] Out[74]:0.76497000455856323 In [75]:f.root.carr[300:310:3,900:910:2] Out[75]: array([[ 0.5336495 , 0.55542123, 0.80049258, 0.84423071, 0.47674203], [ 0.93104523, 0.71216697, 0.23955345, 0.89759707, 0.70620197], [ 0.86999339, 0.05541291, 0.55156851, 0.96808773, 0.51768076], [ 0.29315394, 0.03837755, 0.33675179, 0.93591529, 0.99721605]]) Also, access to disk is very fast, even if you compressed your data: In [77]:tzlib=timeit.Timer("carr[300:310:3,900:910:2]","import tables;f=tables.openFile('/tmp/test-zlib.h5');carr=f.root.carr") In [78]:tzlib.repeat(3,100) Out[78]:[0.204339981079101, 0.176630973815917, 0.177133798599243] Compare these times with non-compressed data: In [80]:tnc=timeit.Timer("carr[300:310:3,900:910:2]","import tables;f=tables.openFile('/tmp/test-nocompr.h5');carr=f.root.carr") In [81]:tnc.repeat(3,100) Out[81]:[0.089105129241943, 0.084129095077514, 0.084383964538574219] That means that pytables can access data in the middle of a dataset without decompressing all the dataset, but just the interesting chunks (and you can decide the size of these chunks). You can see how the access times are in the range of milliseconds, irregardingly of the fact that the data is compressed or not. PyTables also does support others compressors apart from zlib, like bzip2 [2] or LZO [3], as well as compression pre-conditioners, like shuffle [4]. Look at the compression ratios for completely random data: In [84]:ls -l /tmp/test*.h5 -rw-r--r-- 1 faltet users 3675874 /tmp/test-bzip2-shuffle.h5 -rw-r--r-- 1 faltet users 3680615 /tmp/test-zlib-shuffle.h5 -rw-r--r-- 1 faltet users 3777749 /tmp/test-lzo-shuffle.h5 -rw-r--r-- 1 faltet users 8025024 /tmp/test-nocompr.h5 LZO is specially interesting if you want fast access to your data (it's very fast decompressing): In [82]:tlzo=timeit.Timer("carr[300:310:3,900:910:2]","import tables;f=tables.openFile('/tmp/test-lzo-shuffle.h5');carr=f.root.carr") In [83]:tlzo.repeat(3,100) Out[83]:[0.12332820892333984, 0.11892890930175781, 0.12009191513061523] So, retrieving compressed data using LZO is just 45% slower than if not using compression. You can see more exhaustive benchmarks and discussion in [5]. [1] http://www.pytables.org [2] http://www.bzip2.org [3] http://www.oberhumer.com/opensource/lzo [4] http://hdf.ncsa.uiuc.edu/HDF5/doc_resource/H5Shuffle_Perf.pdf [5] http://pytables.sourceforge.net/html-doc/usersguide6.html#section6.3 Uh, sorry by the blurb, but benchmarking is a lot of fun. -- >0,0< Francesc Altet http://www.carabos.com/ V V C?rabos Coop. V. Enjoy Data "-" From NadavH at VisionSense.com Wed Sep 14 04:14:20 2005 From: NadavH at VisionSense.com (Nadav Horesh) Date: Wed Sep 14 04:14:20 2005 Subject: [Numpy-discussion] A bug in numarray? 
Message-ID: <4328031D.90106@VisionSense.com> It seems that the tostring method fails on rank 0 arrays: a = N.array(-4) >>> a array(-4) >>> a.tostring() Traceback (most recent call last): File "", line 1, in -toplevel- a.tostring() File "/usr/local/lib/python2.4/site-packages/numarray/generic.py", line 746, in tostring self._strides, self._itemsize) MemoryError >>> N.__version__ '1.4.0' Nadav. From faltet at carabos.com Wed Sep 14 05:38:19 2005 From: faltet at carabos.com (Francesc Altet) Date: Wed Sep 14 05:38:19 2005 Subject: [Numpy-discussion] ANN: PyTables 1.1.1 released Message-ID: <200509141437.35218.faltet@carabos.com> ========================== Announcing PyTables 1.1.1 ========================== This is a maintenance release of PyTables. In it, several optimizations and bug fixes have been made. As some of the fixed bugs were quite important, it's strongly recommended for users to upgrade. Go to the PyTables web site for downloading the beast: http://pytables.sourceforge.net/ or keep reading for more info about the improvements and bugs fixed. Changes more in depth ===================== Improvements: - Optimized the opening of files with a large number of objects. Now, files with table objects open a 50% faster, and files with arrays open more than twice as fast (up to 2000 objects/s on a Pentium 4 at 2GHz). Hence, a file with a combination of both kinds of objects opens between a 50% and 100% faster than in 1.1. - Optimized the creation of ``NestedRecArray`` objects using ``NumArray`` objects as columns, so that filling a table with the ``Table.append()`` method achieves a performance similar to PyTables pre-1.1 releases. Bug fixes: - ``Table.readCoordinates()`` now converts the coords parameter into ``Int64`` indices automatically. - Fixed a bug that prevented appending to tables (though ``Table.append()``) using a list of ``NumArray`` objects. - ``Int32`` attributes are handled correctly in 64-bit platforms now. - Correction for accepting lists of numarrays as input for ``NestedRecArrays``. - Fixed a problem when creating rank 1 multi-dimensional string columns in ``Table`` objects. Closes SF bug #1269023. - Avoid errors when unpickling objects stored in attributes. See the section ``AttributeSet`` in the reference chapter of the User's Manual for more information. Closes SF bug #1254636. - Assignment for ``*Array`` slices has been improved in order to solve some issues with shapes. Closes SF bug #1288792. - The indexation properties were lost in case the table was closed before an index was created. Now, these properties are saved even in this case. Known bugs: - Classes inheriting from ``IsDescription`` subclasses do not inherit columns defined in the super-class. See SF bug #1207732 for more info. - Time datatypes are non-portable between big-endian and little-endian architectures. This is ultimately a consequence of a HDF5 limitation. See SF bug #1234709 for more info. Backward-incompatible changes: - None (that we are aware of). Important note for MacOSX users =============================== UCL compressor works badly on MacOSX platforms. Recent investigation seems to point to a bug in the development tools in MacOSX. Until the problem is isolated and eventually solved, UCL support will not be compiled by default on MacOSX platforms, even if the installer finds it in the system. However, if you still want to get UCL support on MacOSX, you can use the ``--force-ucl`` flag in ``setup.py``. 
Important note for Python 2.4 and Windows users =============================================== If you are willing to use PyTables with Python 2.4 in Windows platforms, you will need to get the HDF5 library compiled for MSVC 7.1, aka .NET 2003. It can be found at: ftp://ftp.ncsa.uiuc.edu/HDF/HDF5/current/bin/windows/5-164-win-net.ZIP Users of Python 2.3 on Windows will have to download the version of HDF5 compiled with MSVC 6.0 available in: ftp://ftp.ncsa.uiuc.edu/HDF/HDF5/current/bin/windows/5-164-win.ZIP What it is ========== **PyTables** is a package for managing hierarchical datasets and designed to efficiently cope with extremely large amounts of data (with support for full 64-bit file addressing). It features an object-oriented interface that, combined with C extensions for the performance-critical parts of the code, makes it a very easy-to-use tool for high performance data storage and retrieval. PyTables runs on top of the HDF5 library and numarray (Numeric is also supported) package for achieving maximum throughput and convenient use. Besides, PyTables I/O for table objects is buffered, implemented in C and carefully tuned so that you can reach much better performance with PyTables than with your own home-grown wrappings to the HDF5 library. PyTables sports indexing capabilities as well, allowing doing selections in tables exceeding one billion of rows in just seconds. Platforms ========= This version has been extensively checked on quite a few platforms, like Linux on Intel32 (Pentium), Win on Intel32 (Pentium), Linux on Intel64 (Itanium2), FreeBSD on AMD64 (Opteron), Linux on PowerPC and MacOSX on PowerPC. For other platforms, chances are that the code can be easily compiled and run without further problems. Please, contact us in case you are experiencing problems. Resources ========= Go to the PyTables web site for more details: http://pytables.sourceforge.net/ About the HDF5 library: http://hdf.ncsa.uiuc.edu/HDF5/ About numarray: http://www.stsci.edu/resources/software_hardware/numarray To know more about the company behind the PyTables development, see: http://www.carabos.com/ Acknowledgments =============== Thanks to various the users who provided feature improvements, patches, bug reports, support and suggestions. See THANKS file in distribution package for a (incomplete) list of contributors. Many thanks also to SourceForge who have helped to make and distribute this package!. And last but not least, a big thanks to THG (http://www.hdfgroup.org/) for sponsoring many of the new features recently introduced in PyTables. Share your experience ===================== Let us know of any bugs, suggestions, gripes, kudos, etc. you may have. ---- **Enjoy data!** -- The PyTables Team From oliphant at ee.byu.edu Wed Sep 14 12:34:31 2005 From: oliphant at ee.byu.edu (Travis Oliphant) Date: Wed Sep 14 12:34:31 2005 Subject: [Numpy-discussion] scipy.base (Numeric3) now on scipy.org svn server Message-ID: <43287B06.3000301@ee.byu.edu> Now that the new scipy.base (that can replace Numeric) is pretty much complete, I'm working on bringing the other parts of Numeric along so that scipy_core can replace Numeric (and numarray in functionality) for all users. I'm now using a branch of scipy_core to do this work. The old Numeric3 CVS directory on sourceforge will start to wither... The branch is at http://svn.scipy.org/svn/scipy_core/branches/newcore I'm thinking about how to structure the new scipy_core. 
Right now under the new scipy_core we have Hierarchy Imports as ==================== base/ --> scipy.base (namespace also available under scipy itself) distutils/ --> scipy.distutils test/ --> scipy.test weave/ --> weave We need to bring over basic linear algebra, statistics, and fft's from Numeric. So where do we put them and how do they import? Items to consider: * the basic functionality will be expanded / replaced by anybody who installs the entire scipy library. * are we going to get f2py to live in scipy_core (I say yes)... * I think scipy_core should install a working basic scipy (i.e. import scipy as Numeric) should work and be an effective replacement for import Numeric). Of course the functionality will be quite a bit less than if full scipy was installed, but basic functions should still work. With that in mind I propose the additions Hiearchy Imports as ========================== corelib/lapack_lite/ --> scipy.lapack_lite corelib/fftpack_lite/ --> scipy.fftpack_lite corelib/random_lite/ --> scipy.random_lite linalg/ --> scipy.linalg fftpack/ --> scipy.fftpack stats/ --> scipy.stats Users would typically use only the functions in scipy.linalg, scipy.fftpack, and scipy.stats. Notice that scipy also has modules names linalg, fftpack, and stats. These would add / replace functionality available in the basic core system. Comments, -Travis O. From pearu at scipy.org Wed Sep 14 12:52:29 2005 From: pearu at scipy.org (Pearu Peterson) Date: Wed Sep 14 12:52:29 2005 Subject: [Numpy-discussion] Re: [SciPy-dev] scipy.base (Numeric3) now on scipy.org svn server In-Reply-To: <43287B06.3000301@ee.byu.edu> References: <43287B06.3000301@ee.byu.edu> Message-ID: On Wed, 14 Sep 2005, Travis Oliphant wrote: > Now that the new scipy.base (that can replace Numeric) is pretty much > complete, I'm working on > bringing the other parts of Numeric along so that scipy_core can replace > Numeric (and numarray in functionality) for all users. > > I'm now using a branch of scipy_core to do this work. The old Numeric3 CVS > directory on sourceforge will start to wither... > > The branch is at > > http://svn.scipy.org/svn/scipy_core/branches/newcore > > I'm thinking about how to structure the new scipy_core. > Right now under the new scipy_core we have > > Hierarchy Imports as > ==================== > base/ --> scipy.base (namespace also available under > scipy itself) > distutils/ --> scipy.distutils > test/ --> scipy.test > weave/ --> weave > > We need to bring over basic linear algebra, statistics, and fft's from > Numeric. So where do we put them and how do they import? I have done some work in this direction but have not commited to repository yet because it needs more testing. Basically, (not commited) scipy.distutils has support to build Fortran or f2c'd versions of various libraries (currently I have tested it on blas) depending on whether Fortran compiler is available or not. > Items to consider: > > * the basic functionality will be expanded / replaced by anybody who > installs the entire scipy library. > > * are we going to get f2py to live in scipy_core (I say yes)... That would simplify many things, so I'd also say yes. On the other hand, I have not decided what to do with f2py2e CVS repository. Suggestions are welcome (though I understand that this might be my personal problem). > * I think scipy_core should install a working basic scipy (i.e. import scipy > as Numeric) should work and be an effective replacement for import Numeric). 
> Of course the functionality will be quite a bit less than if full scipy was > installed, but basic functions should still work. > > With that in mind I propose the additions > > Hiearchy Imports as > ========================== > corelib/lapack_lite/ --> scipy.lapack_lite corelib/fftpack_lite/ --> > scipy.fftpack_lite > corelib/random_lite/ --> scipy.random_lite > linalg/ --> scipy.linalg > fftpack/ --> scipy.fftpack > stats/ --> scipy.stats > > Users would typically use only the functions in scipy.linalg, scipy.fftpack, > and scipy.stats. > > Notice that scipy also has modules names linalg, fftpack, and stats. These > would add / replace functionality available in the basic core system. Since lapack_lite, fftpack_lite can be copied from Numeric then there's no rush for me to commit my scipy.distutils work, I guess. I'll do that when it is more or less stable and then we can gradually apply f2c to various scipy modules that currently have fortran sources which would allow compiling the whole scipy without having fortran compiler around. Pearu From oliphant at ee.byu.edu Thu Sep 15 12:15:39 2005 From: oliphant at ee.byu.edu (Travis Oliphant) Date: Thu Sep 15 12:15:39 2005 Subject: [Numpy-discussion] scipy core now builds from svn Message-ID: <4329C819.6070302@ee.byu.edu> This is to officially announce that the new replacement for Numeric (scipy_core) is available at SVN. Read permission is open to everyone so a simple checkout: svn co http://svn.scipy.org/svn/scipy_core/branches/newcore newcore should get you the distribution that should install with cd newcore python setup.py install I'm in the process of adding the linear algebra routines, fft, random, and dotblas from Numeric. This should be done by the conference. I will make a windows binary release for the SciPy conference, but not before then. There is a script in newcore/scipy/base/convertcode.py that will take code written for Numeric (or numerix) and convert it to code for the new scipy base object. This code is not foolproof, but it takes care of the minor incompatibilities (a few search and replaces are done). The compatibility issues are documented (mostly in the typecode characters and a few method name changes). The one bigger incompatibility is that a.flat does something a little different (a 1-d iterator object). The convert code script changes uses of a.flat that are not indexing or set attribute related to a.ravel() C-code should build for the new system with a change of #include Numeric/arrayobject.h to #include scipy/arrayobject.h --- though you may want to enhance your code to take advantage of the new features (and more extensive C-API). I also still need to add the following ufuncs: isnan, isfinite, signbit, isinf, frexp, and ldexp. This should not take too long. -Travis O. From shuntim.luk at polyu.edu.hk Thu Sep 15 22:50:15 2005 From: shuntim.luk at polyu.edu.hk (LUK ShunTim) Date: Thu Sep 15 22:50:15 2005 Subject: [Numpy-discussion] scipy core now builds from svn In-Reply-To: <4329C819.6070302@ee.byu.edu> References: <4329C819.6070302@ee.byu.edu> Message-ID: <432A5CF4.9050909@polyu.edu.hk> Travis Oliphant wrote: > > This is to officially announce that the new replacement for Numeric > (scipy_core) is available at SVN. Read permission is open to everyone > so a simple checkout: > > svn co http://svn.scipy.org/svn/scipy_core/branches/newcore newcore > should get you the distribution that should install with > I got this time out error when I tried, several times. 
:-( svn: REPORT request failed on '/svn/scipy_core/!svn/vcc/default' svn: REPORT of '/svn/scipy_core/!svn/vcc/default': Could not read status line: Connection timed out (http://svn.scipy.org) Please see if this is an server configuration issue. Regards, ST -- From Fernando.Perez at colorado.edu Fri Sep 16 08:46:43 2005 From: Fernando.Perez at colorado.edu (Fernando Perez) Date: Fri Sep 16 08:46:43 2005 Subject: [Numpy-discussion] scipy core now builds from svn In-Reply-To: <432A5CF4.9050909@polyu.edu.hk> References: <4329C819.6070302@ee.byu.edu> <432A5CF4.9050909@polyu.edu.hk> Message-ID: <432AE87C.6030406@colorado.edu> LUK ShunTim wrote: >> I got this time out error when I tried, several times. :-( >> >> svn: REPORT request failed on '/svn/scipy_core/!svn/vcc/default' >> svn: REPORT of '/svn/scipy_core/!svn/vcc/default': Could not read status >> line: Connection timed out (http://svn.scipy.org) >> >> Please see if this is an server configuration issue. No, it's an issue with your setup, not something on scipy's side. You are behind a proxy blocking REPORT requests. See this for details: http://www.sipfoundry.org/tools/svn-tips.html which says: What does 'REPORT request failed' mean? When I try to check out a subversion repository > svn co http://scm.sipfoundry.org/rep/project/main project I get an error like: svn: REPORT request failed on '/rep/project/!svn/vcc/default' svn: REPORT of '/rep/project/!svn/vcc/default': 400 Bad Request (http://scm.sipfoundry.org) You are behind a web proxy that is not passing the WebDAV methods that subversion uses. You can work around the problem by using SSL to hide what you're doing from the proxy: > svn co https://scm.sipfoundry.org/rep/project/main project Cheers, f From Fernando.Perez at colorado.edu Fri Sep 16 08:50:02 2005 From: Fernando.Perez at colorado.edu (Fernando Perez) Date: Fri Sep 16 08:50:02 2005 Subject: [SciPy-dev] Re: [Numpy-discussion] scipy core now builds from svn In-Reply-To: <432AE87C.6030406@colorado.edu> References: <4329C819.6070302@ee.byu.edu> <432A5CF4.9050909@polyu.edu.hk> <432AE87C.6030406@colorado.edu> Message-ID: <432AE966.90007@colorado.edu> Fernando Perez wrote: > You are behind a web proxy that is not passing the WebDAV methods that > subversion uses. You can work around the problem by using SSL to hide what > you're doing from the proxy: > > > svn co https://scm.sipfoundry.org/rep/project/main project I forgot to add that the https method will NOT work with scipy, which doesn't provide svn/ssl support. You need to fix your proxy config. Cheers, f From shuntim.luk at polyu.edu.hk Fri Sep 16 22:00:04 2005 From: shuntim.luk at polyu.edu.hk (LUK ShunTim) Date: Fri Sep 16 22:00:04 2005 Subject: [Numpy-discussion] scipy core now builds from svn In-Reply-To: <432AE87C.6030406@colorado.edu> References: <4329C819.6070302@ee.byu.edu> <432A5CF4.9050909@polyu.edu.hk> <432AE87C.6030406@colorado.edu> Message-ID: <432BA2B2.7090307@polyu.edu.hk> Fernando Perez wrote: > LUK ShunTim wrote: > > >>> I got this time out error when I tried, several times. :-( >>> >>> svn: REPORT request failed on '/svn/scipy_core/!svn/vcc/default' >>> svn: REPORT of '/svn/scipy_core/!svn/vcc/default': Could not read status >>> line: Connection timed out (http://svn.scipy.org) >>> >>> Please see if this is an server configuration issue. > > > No, it's an issue with your setup, not something on scipy's side. > > You are behind a proxy blocking REPORT requests. 
See this for details: > > http://www.sipfoundry.org/tools/svn-tips.html > > which says: > > What does 'REPORT request failed' mean? > > When I try to check out a subversion repository > > > svn co http://scm.sipfoundry.org/rep/project/main project > > I get an error like: > > svn: REPORT request failed on '/rep/project/!svn/vcc/default' > svn: REPORT of '/rep/project/!svn/vcc/default': 400 Bad Request > (http://scm.sipfoundry.org) > > You are behind a web proxy that is not passing the WebDAV methods that > subversion uses. You can work around the problem by using SSL to hide what > you're doing from the proxy: > > > svn co https://scm.sipfoundry.org/rep/project/main project > > > Cheers, > > f Thanks very much. However no luck. :-( I now got this error > svn: PROPFIND of '/svn/scipy_core/branches/newcore': 405 Method Not Allowed (https://svn.scipy.org) with the suggested workaround. I guess I'll learn a bit about svn and may be have to take it up with our system admin. In the mean time, is CVS still available? BTW, perhaps it might help people like me who are not familiar with svn by putting this tip somethere in the download page. Thanks again, ST -- From Fernando.Perez at colorado.edu Fri Sep 16 22:12:07 2005 From: Fernando.Perez at colorado.edu (Fernando Perez) Date: Fri Sep 16 22:12:07 2005 Subject: [Numpy-discussion] scipy core now builds from svn In-Reply-To: <432BA2B2.7090307@polyu.edu.hk> References: <4329C819.6070302@ee.byu.edu> <432A5CF4.9050909@polyu.edu.hk> <432AE87C.6030406@colorado.edu> <432BA2B2.7090307@polyu.edu.hk> Message-ID: <432BA519.3040705@colorado.edu> LUK ShunTim wrote: > Fernando Perez wrote: > >>LUK ShunTim wrote: > Thanks very much. However no luck. :-( I now got this error > > >>svn: PROPFIND of '/svn/scipy_core/branches/newcore': 405 Method Not Allowed (https://svn.scipy.org) That's what I said in the message immediately afterward, because I hit send too soon: that the https:// approach would NOT work with scipy. You need to have your proxy fixed, I'm afraid. Cheers, f From shashank_84 at rediffmail.com Sat Sep 17 23:34:04 2005 From: shashank_84 at rediffmail.com (shashank karnik) Date: Sat Sep 17 23:34:04 2005 Subject: [Numpy-discussion] How to install Numeric on Zope? Message-ID: <20050918063350.3616.qmail@webmail7.rediffmail.com> Hello everyone Can anyone help me to install Numeric extension or package for my Zope server 2.4 version running on Windows Xp? You see i am a beginner at python and zope both and need to use a product in Zope(GNOWSYS) which requires Python -Numeric and Python-XMLBase packages.. I downloaded the Numeric py package for Windows from this link http://prdownloads.sourceforge.net/numpy/Numeric-23.8.win32-py2.4.exe?download However...i cant figure out how to install it so that my Zope server recognises it. The installer currently just installs the package so that it is recognised by the Python interpreter...but the Zope server is stored in Program Files on my machine..how do i make it understand that the Numeric package has been installed.. I tried putting the numeric folder in this path C:\Program Files\Zope\lib\python\Products But it doesnt work Please help me out If think that this question should be asked on a Zope forum...please let me know! Thank you -------------- next part -------------- An HTML attachment was scrubbed... URL: From steve at shrogers.com Sun Sep 18 10:50:11 2005 From: steve at shrogers.com (Steven H. Rogers) Date: Sun Sep 18 10:50:11 2005 Subject: [Numpy-discussion] How to install Numeric on Zope? 
In-Reply-To: <20050918063350.3616.qmail@webmail7.rediffmail.com> References: <20050918063350.3616.qmail@webmail7.rediffmail.com> Message-ID: <432DA8A2.4040203@shrogers.com> Combining Numeric and Zope is a neat idea and I've speculated about how to do it, but haven't got beyond that. I believe that you'd have to write a new Zope product that called the Numeric package. You might get a more informed response on the Zope, Plone, or SciPy mailing lists. Regards, Steve shashank karnik wrote: > > Hello everyone > > Can anyone help me to install Numeric extension or package for my Zope > server 2.4 version running on Windows Xp? > > You see i am a beginner at python and zope both and need to use a > product in Zope(GNOWSYS) which requires Python -Numeric and > Python-XMLBase packages.. > > I downloaded the Numeric py package for Windows from this link > > http://prdownloads.sourceforge.net/numpy/Numeric-23.8.win32-py2.4.exe?download > > However...i cant figure out how to install it so that my Zope server > recognises it. > > The installer currently just installs the package so that it is > recognised by the Python interpreter...but the Zope server is stored in > Program Files on my machine..how do i make it understand that the > Numeric package has been installed.. > > I tried putting the numeric folder in this path > C:\Program Files\Zope\lib\python\Products > > But it doesnt work > > Please help me out > > If think that this question should be asked on a Zope forum...please > let me know! > > Thank you > > > > > -- Steven H. Rogers, Ph.D., steve at shrogers.com Weblog: http://shrogers.com/weblog "He who refuses to do arithmetic is doomed to talk nonsense." -- John McCarthy From humufr at yahoo.fr Tue Sep 20 08:45:01 2005 From: humufr at yahoo.fr (Humufr) Date: Tue Sep 20 08:45:01 2005 Subject: [Numpy-discussion] numarray speed problem Message-ID: <43302E32.10708@yahoo.fr> Hello, I have a problem with numarray and especially the function numarray.all. I want to compare two files to do this I read the files with a function readcol2 who can put them in a list or numarray format (string or numerical). I'm doing a comparaison on each line of the file. If I'm using the array format and the numarray.all function, that take forever to do the comparaison for 2 big files. If I'm using python list object, it's very fast. I think there are some problem or at least some improvement to do. If I understand correctly the goal of numarray, it has been write to speed up some part of python but here it slow down a lot. An very simple sample to see the effect is at the bottom of this mail. Thanks for numarray, I hope to not bother you. My comments are more to improve numarray than other things. I have been able to find the problem so no I can avoied it. H. def readcol(fname,comments='%',columns=None,delimiter=None,dep=0,arraytype='list'): """ Load ASCII data from fname into an array and return the array. The data must be regular, same number of values in every row fname can be a filename or a file handle. Input: - Fname : the name of the file to read Optionnal input: - comments : a string to indicate the charactor to delimit the domments. the default is the matlab character '%'. - columns : list or tuple ho contains the columns to use. 
- delimiter : a string to delimit the columns - dep : an integer to indicate from which line you want to begin to use the file (useful to avoid the descriptions lines) - arraytype : a string to indicate which kind of array you want ot have: numeric array (numeric) or character array (numstring) or list (list). By default it's the list mode used matfile data is not currently supported, but see Nigel Wade's matfile ftp://ion.le.ac.uk/matfile/matfile.tar.gz Example usage: x,y = transpose(readcol('test.dat')) # data in two columns X = readcol('test.dat') # a matrix of data x = readcol('test.dat') # a single column of data x = readcol('test.dat,'#') # the character use like a comment delimiter is '#' initial function from pylab (J.Hunter). Change by myself for my specific need """ from numarray import array,transpose fh = file(fname) X = [] numCols = None nline = 0 if columns is None: for line in fh: nline += 1 if dep is not None and nline <= dep: continue line = line[:line.find(comments)].strip() if not len(line): continue if arraytype=='numeric': row = [float(val) for val in line.split(delimiter)] else: row = [val.strip() for val in line.split(delimiter)] thisLen = len(row) if numCols is not None and thisLen != numCols: raise ValueError('All rows must have the same number of columns') X.append(row) else: for line in fh: nline +=1 if dep is not None and nline <= dep: continue line = line[:line.find(comments)].strip() if not len(line): continue row = line.split(delimiter) if arraytype=='numeric': row = [float(row[i-1]) for i in columns] elif arraytype=='numstring': row = [row[i-1].strip() for i in columns] else: row = [row[i-1].strip() for i in columns] thisLen = len(row) if numCols is not None and thisLen != numCols: raise ValueError('All rows must have the same number of columns') X.append(row) if arraytype=='numeric': X = array(X) r,c = X.shape if r==1 or c==1: X.shape = max([r,c]), elif arraytype == 'numstring': import numarray.strings # pb if numeric+pylab X = numarray.strings.array(X) r,c = X.shape if r==1 or c==1: X.shape = max([r,c]), return X ------------------------------------------- files_test_creation.py ------------------------------------------- f1 = file('test1.dat','w') for i in range(10000): f1.write(str(i)+' '+str(i+1)+' '+str(i+2)+'\n') f1.close() f2 = file('test2.dat','w') for i in range(10000): f2.write(str(i)+' '+str(i+1)+' '+str(i+2)+'\n') f2.close() ------------------------------------------- numarray_pb_sample.py ------------------------------------------- import numarray data1 = readcol2.readcol('test1.dat',columns=[1,2,3],comments='#',delimiter=' ',dep=1,arraytype='numstring') data2 = readcol2.readcol('test2.dat',columns=[1,2,3],comments='#',delimiter=' ',dep=1,arraytype='numstring') #or in non string array form (same result) ## data1 = readcol2.readcol('test1.dat',columns=[1,2,3],comments='#',delimiter=' ',dep=1,arraytype='numeric') ## data2 = readcol2.readcol('test2.dat',columns=[1,2,3],comments='#',delimiter=' ',dep=1,arraytype='numeric') for a_i in range(data1.shape[0]): for b_i in range(data2.shape[0]): if numarray.all(data1[a_i,:] == data2[b_i,:]): print a_i,b_i ------------------------------------------- python_list_sample.py ------------------------------------------- data1 = readcol2.readcol('test1.dat',columns=[1,2,3],comments='#',delimiter=' ',dep=1,arraytype='list') data2 = readcol2.readcol('test2.dat',columns=[1,2,3],comments='#',delimiter=' ',dep=1,arraytype='list') for a_i in range(len(data1)): for b_i in range(len(data2)): if data1[a_i] == data2[b_i]: 
print a_i,b_i From jmiller at stsci.edu Tue Sep 20 10:17:18 2005 From: jmiller at stsci.edu (Todd Miller) Date: Tue Sep 20 10:17:18 2005 Subject: [Numpy-discussion] numarray speed problem In-Reply-To: <43302E32.10708@yahoo.fr> References: <43302E32.10708@yahoo.fr> Message-ID: <433043CD.1030700@stsci.edu> Hi H, I did some work on this problem based on your previous post but apparently my response never made it to numpy-discussion. In a nutshell, I made numarray 12x faster for a benchmark like your numarray_pb_sample.py by speeding up string comparisons and improving all(). The changes are in numarray CVS but there is no Source Forge release that contains them yet. numarray-1.4.0 is still several weeks away. If you want to try CVS from UNIX/Linux just do: % cvs -d:pserver:anonymous at cvs.sourceforge.net:/cvsroot/numpy login % cvs -z3 -d:pserver:anonymous at cvs.sourceforge.net:/cvsroot/numpy co -P numarray Regards, Todd Humufr wrote: > Hello, > > I have a problem with numarray and especially the function numarray.all. > > I want to compare two files to do this I read the files with a > function readcol2 who can put them in a list or numarray format > (string or numerical). > > I'm doing a comparaison on each line of the file. > If I'm using the array format and the numarray.all function, that take > forever to do the comparaison for 2 big files. If I'm using python > list object, it's very fast. I think there are some problem or at > least some improvement to do. If I understand correctly the goal of > numarray, it has been write to speed up some part of python but here > it slow down a lot. > > An very simple sample to see the effect is at the bottom of this mail. > > Thanks for numarray, I hope to not bother you. My comments are more to > improve numarray than other things. I have been able to find the > problem so no I can avoied it. > > H. > > > > > def > readcol(fname,comments='%',columns=None,delimiter=None,dep=0,arraytype='list'): > > """ > Load ASCII data from fname into an array and return the array. > The data must be regular, same number of values in every row > fname can be a filename or a file handle. > > Input: > > - Fname : the name of the file to read > > Optionnal input: > - comments : a string to indicate the charactor to delimit the > domments. > the default is the matlab character '%'. > - columns : list or tuple ho contains the columns to use. > - delimiter : a string to delimit the columns > > - dep : an integer to indicate from which line you want to begin > > to use the file (useful to avoid the descriptions lines) > > - arraytype : a string to indicate which kind of array you want ot > have: numeric array (numeric) or character array > (numstring) or list (list). By default it's the > > list mode used > > matfile data is not currently supported, but see > Nigel Wade's matfile ftp://ion.le.ac.uk/matfile/matfile.tar.gz > > Example usage: > > x,y = transpose(readcol('test.dat')) # data in two columns > > X = readcol('test.dat') # a matrix of data > > x = readcol('test.dat') # a single column of data > > x = readcol('test.dat,'#') # the character use like a comment > delimiter is '#' > > initial function from pylab (J.Hunter). 
Change by myself for my > specific need > > """ > from numarray import array,transpose > > fh = file(fname) > > X = [] > numCols = None > nline = 0 > if columns is None: > for line in fh: > nline += 1 > if dep is not None and nline <= dep: continue > line = line[:line.find(comments)].strip() > if not len(line): continue > if arraytype=='numeric': > row = [float(val) for val in line.split(delimiter)] > else: > row = [val.strip() for val in line.split(delimiter)] > thisLen = len(row) > if numCols is not None and thisLen != numCols: > raise ValueError('All rows must have the same number of > columns') > X.append(row) > else: > for line in fh: > nline +=1 > if dep is not None and nline <= dep: continue > line = line[:line.find(comments)].strip() > if not len(line): continue > row = line.split(delimiter) > if arraytype=='numeric': > row = [float(row[i-1]) for i in columns] > elif arraytype=='numstring': > row = [row[i-1].strip() for i in columns] > else: > row = [row[i-1].strip() for i in columns] > thisLen = len(row) > if numCols is not None and thisLen != numCols: > raise ValueError('All rows must have the same number of > columns') > X.append(row) > > if arraytype=='numeric': > X = array(X) > r,c = X.shape > if r==1 or c==1: > X.shape = max([r,c]), > elif arraytype == 'numstring': > import numarray.strings # pb if numeric+pylab > X = numarray.strings.array(X) > r,c = X.shape > if r==1 or c==1: > X.shape = max([r,c]), > return X > > > ------------------------------------------- > files_test_creation.py > > ------------------------------------------- > > f1 = file('test1.dat','w') > for i in range(10000): > f1.write(str(i)+' '+str(i+1)+' '+str(i+2)+'\n') > f1.close() > > > f2 = file('test2.dat','w') > for i in range(10000): > f2.write(str(i)+' '+str(i+1)+' '+str(i+2)+'\n') > f2.close() > > ------------------------------------------- > numarray_pb_sample.py > > ------------------------------------------- > > import numarray > data1 = > readcol2.readcol('test1.dat',columns=[1,2,3],comments='#',delimiter=' > ',dep=1,arraytype='numstring') > data2 = > readcol2.readcol('test2.dat',columns=[1,2,3],comments='#',delimiter=' > ',dep=1,arraytype='numstring') > > #or in non string array form (same result) > ## data1 = > readcol2.readcol('test1.dat',columns=[1,2,3],comments='#',delimiter=' > ',dep=1,arraytype='numeric') > ## data2 = > readcol2.readcol('test2.dat',columns=[1,2,3],comments='#',delimiter=' > ',dep=1,arraytype='numeric') > > for a_i in range(data1.shape[0]): > for b_i in range(data2.shape[0]): > if numarray.all(data1[a_i,:] == data2[b_i,:]): > print a_i,b_i > > ------------------------------------------- > python_list_sample.py > > ------------------------------------------- > > data1 = > readcol2.readcol('test1.dat',columns=[1,2,3],comments='#',delimiter=' > ',dep=1,arraytype='list') > data2 = > readcol2.readcol('test2.dat',columns=[1,2,3],comments='#',delimiter=' > ',dep=1,arraytype='list') > > for a_i in range(len(data1)): > for b_i in range(len(data2)): > if data1[a_i] == data2[b_i]: > print a_i,b_i > > > > > > > ------------------------------------------------------- > SF.Net email is sponsored by: > Tame your development challenges with Apache's Geronimo App Server. > Download it for free - -and be entered to win a 42" plasma tv or your > very > own Sony(tm)PSP. 
Click here to play: http://sourceforge.net/geronimo.php > _______________________________________________ > Numpy-discussion mailing list > Numpy-discussion at lists.sourceforge.net > https://lists.sourceforge.net/lists/listinfo/numpy-discussion From humufr at yahoo.fr Tue Sep 20 11:40:15 2005 From: humufr at yahoo.fr (Humufr) Date: Tue Sep 20 11:40:15 2005 Subject: [Numpy-discussion] numarray speed problem In-Reply-To: <433043CD.1030700@stsci.edu> References: <43302E32.10708@yahoo.fr> <433043CD.1030700@stsci.edu> Message-ID: <4330576A.8080201@yahoo.fr> Thank you very much. I saw no answer before. It's why I reduce a lot the sample :) I'll try it now Todd Miller wrote: > Hi H, > > I did some work on this problem based on your previous post but > apparently my response never made it to numpy-discussion. In a > nutshell, I made numarray 12x faster for a benchmark like your > numarray_pb_sample.py by speeding up string comparisons and improving > all(). The changes are in numarray CVS but there is no Source Forge > release that contains them yet. numarray-1.4.0 is still several > weeks away. If you want to try CVS from UNIX/Linux just do: > > % cvs -d:pserver:anonymous at cvs.sourceforge.net:/cvsroot/numpy login > % cvs -z3 -d:pserver:anonymous at cvs.sourceforge.net:/cvsroot/numpy co > -P numarray > > Regards, > Todd > > Humufr wrote: > >> Hello, >> >> I have a problem with numarray and especially the function numarray.all. >> >> I want to compare two files to do this I read the files with a >> function readcol2 who can put them in a list or numarray format >> (string or numerical). >> >> I'm doing a comparaison on each line of the file. >> If I'm using the array format and the numarray.all function, that >> take forever to do the comparaison for 2 big files. If I'm using >> python list object, it's very fast. I think there are some problem or >> at least some improvement to do. If I understand correctly the goal >> of numarray, it has been write to speed up some part of python but >> here it slow down a lot. >> >> An very simple sample to see the effect is at the bottom of this mail. >> >> Thanks for numarray, I hope to not bother you. My comments are more >> to improve numarray than other things. I have been able to find the >> problem so no I can avoied it. >> >> H. >> >> >> >> >> def >> readcol(fname,comments='%',columns=None,delimiter=None,dep=0,arraytype='list'): >> >> """ >> Load ASCII data from fname into an array and return the array. >> The data must be regular, same number of values in every row >> fname can be a filename or a file handle. >> >> Input: >> >> - Fname : the name of the file to read >> >> Optionnal input: >> - comments : a string to indicate the charactor to delimit the >> domments. >> the default is the matlab character '%'. >> - columns : list or tuple ho contains the columns to use. >> - delimiter : a string to delimit the columns >> >> - dep : an integer to indicate from which line you want to begin >> >> to use the file (useful to avoid the descriptions lines) >> >> - arraytype : a string to indicate which kind of array you want ot >> have: numeric array (numeric) or character array >> (numstring) or list (list). 
By default it's the >> >> list mode used >> matfile data is not currently supported, but see >> Nigel Wade's matfile ftp://ion.le.ac.uk/matfile/matfile.tar.gz >> >> Example usage: >> >> x,y = transpose(readcol('test.dat')) # data in two columns >> >> X = readcol('test.dat') # a matrix of data >> >> x = readcol('test.dat') # a single column of data >> >> x = readcol('test.dat,'#') # the character use like a comment >> delimiter is '#' >> >> initial function from pylab (J.Hunter). Change by myself for my >> specific need >> >> """ >> from numarray import array,transpose >> >> fh = file(fname) >> >> X = [] >> numCols = None >> nline = 0 >> if columns is None: >> for line in fh: >> nline += 1 >> if dep is not None and nline <= dep: continue >> line = line[:line.find(comments)].strip() >> if not len(line): continue >> if arraytype=='numeric': >> row = [float(val) for val in line.split(delimiter)] >> else: >> row = [val.strip() for val in line.split(delimiter)] >> thisLen = len(row) >> if numCols is not None and thisLen != numCols: >> raise ValueError('All rows must have the same number >> of columns') >> X.append(row) >> else: >> for line in fh: >> nline +=1 >> if dep is not None and nline <= dep: continue >> line = line[:line.find(comments)].strip() >> if not len(line): continue >> row = line.split(delimiter) >> if arraytype=='numeric': >> row = [float(row[i-1]) for i in columns] >> elif arraytype=='numstring': >> row = [row[i-1].strip() for i in columns] >> else: >> row = [row[i-1].strip() for i in columns] >> thisLen = len(row) >> if numCols is not None and thisLen != numCols: >> raise ValueError('All rows must have the same number >> of columns') >> X.append(row) >> >> if arraytype=='numeric': >> X = array(X) >> r,c = X.shape >> if r==1 or c==1: >> X.shape = max([r,c]), >> elif arraytype == 'numstring': >> import numarray.strings # pb if numeric+pylab >> X = numarray.strings.array(X) >> r,c = X.shape >> if r==1 or c==1: >> X.shape = max([r,c]), >> return X >> >> >> ------------------------------------------- >> files_test_creation.py >> >> ------------------------------------------- >> >> f1 = file('test1.dat','w') >> for i in range(10000): >> f1.write(str(i)+' '+str(i+1)+' '+str(i+2)+'\n') >> f1.close() >> >> >> f2 = file('test2.dat','w') >> for i in range(10000): >> f2.write(str(i)+' '+str(i+1)+' '+str(i+2)+'\n') >> f2.close() >> >> ------------------------------------------- >> numarray_pb_sample.py >> >> ------------------------------------------- >> >> import numarray >> data1 = >> readcol2.readcol('test1.dat',columns=[1,2,3],comments='#',delimiter=' >> ',dep=1,arraytype='numstring') >> data2 = >> readcol2.readcol('test2.dat',columns=[1,2,3],comments='#',delimiter=' >> ',dep=1,arraytype='numstring') >> >> #or in non string array form (same result) >> ## data1 = >> readcol2.readcol('test1.dat',columns=[1,2,3],comments='#',delimiter=' >> ',dep=1,arraytype='numeric') >> ## data2 = >> readcol2.readcol('test2.dat',columns=[1,2,3],comments='#',delimiter=' >> ',dep=1,arraytype='numeric') >> >> for a_i in range(data1.shape[0]): >> for b_i in range(data2.shape[0]): >> if numarray.all(data1[a_i,:] == data2[b_i,:]): >> print a_i,b_i >> >> ------------------------------------------- >> python_list_sample.py >> >> ------------------------------------------- >> >> data1 = >> readcol2.readcol('test1.dat',columns=[1,2,3],comments='#',delimiter=' >> ',dep=1,arraytype='list') >> data2 = >> readcol2.readcol('test2.dat',columns=[1,2,3],comments='#',delimiter=' >> ',dep=1,arraytype='list') >> >> for a_i in 
range(len(data1)): >> for b_i in range(len(data2)): >> if data1[a_i] == data2[b_i]: >> print a_i,b_i >> >> >> >> >> >> >> ------------------------------------------------------- >> SF.Net email is sponsored by: >> Tame your development challenges with Apache's Geronimo App Server. >> Download it for free - -and be entered to win a 42" plasma tv or your >> very >> own Sony(tm)PSP. Click here to play: >> http://sourceforge.net/geronimo.php >> _______________________________________________ >> Numpy-discussion mailing list >> Numpy-discussion at lists.sourceforge.net >> https://lists.sourceforge.net/lists/listinfo/numpy-discussion > > > > From ariciputi at pito.com Thu Sep 22 08:47:01 2005 From: ariciputi at pito.com (Andrea Riciputi) Date: Thu Sep 22 08:47:01 2005 Subject: [Numpy-discussion] Piecewise functions. Message-ID: Hi all, this is probably an already discussed problem, but I've not been able to find a solution even after googling a lot. I've a piecewise defined function: / | f1(x) if x <= a f(x) = | | f2(x) if x > a \ where f1 and f2 are not defined outside the above range. How can I define such a function in Python in order to apply (map) it to an array ranging from values smaller to values bigger than a? Thanks, Andrea. From guim at guim.org Thu Sep 22 08:50:08 2005 From: guim at guim.org (Alexandre Guimond) Date: Thu Sep 22 08:50:08 2005 Subject: [Numpy-discussion] numarray bug in gaussian_filter1d? Message-ID: <67d31e4205092208414dbb0646@mail.gmail.com> Hi. I think I found a bug in gaussian_filter1d. roughly the function creates a 1d kernel and then calls correlate1d. The problem I see is that the kernel should be mirrored prior to calling correlate1d since we want to convolve, not correlate. as a result: >>> import numarray.nd_image >>> numarray.nd_image.gaussian_filter1d( ( 0.0, 1.0, 0.0 ), 1, order = 1, axis = 0, mode = 'constant' ) array([-0.24197145, 0. , 0.24197145]) >>> when it should be [ 0.24197145, 0. , -0.24197145]) (notice the change in the sign of coefficients) Or did I get that wrong? alex. -------------- next part -------------- An HTML attachment was scrubbed... URL: From jmiller at stsci.edu Thu Sep 22 09:43:34 2005 From: jmiller at stsci.edu (Todd Miller) Date: Thu Sep 22 09:43:34 2005 Subject: [Numpy-discussion] A bug in numarray? In-Reply-To: <4328031D.90106@VisionSense.com> References: <4328031D.90106@VisionSense.com> Message-ID: <4332DEB1.5090604@stsci.edu> Thanks Nadav. This is fixed in CVS. Regards, Todd Nadav Horesh wrote: >It seems that the tostring method fails on rank 0 arrays: > >a = N.array(-4) > > >>>>a >>>> >>>> >array(-4) > > >>>>a.tostring() >>>> >>>> > >Traceback (most recent call last): > File "", line 1, in -toplevel- > a.tostring() > File "/usr/local/lib/python2.4/site-packages/numarray/generic.py", >line 746, in tostring > self._strides, self._itemsize) >MemoryError > > >>>>N.__version__ >>>> >>>> >'1.4.0' > > > Nadav. > > > From aisaac at american.edu Thu Sep 22 12:31:36 2005 From: aisaac at american.edu (Alan G Isaac) Date: Thu Sep 22 12:31:36 2005 Subject: [Numpy-discussion] Piecewise functions. In-Reply-To: References: Message-ID: On Thu, 22 Sep 2005, Andrea Riciputi apparently wrote: > this is probably an already discussed problem, but I've not been able > to find a solution even after googling a lot. > I've a piecewise defined function: > / > | f1(x) if x <= a > f(x) = | > | f2(x) if x > a > \ > where f1 and f2 are not defined outside the above range. 
How can I > define such a function in Python in order to apply (map) it to an > array ranging from values smaller to values bigger than a? I suspect I do not understand your question. But perhaps you want this: def f(x): return x<=a and f1(x) or f2(x) fwiw, Alan From verveer at embl-heidelberg.de Thu Sep 22 13:15:00 2005 From: verveer at embl-heidelberg.de (Peter Verveer) Date: Thu Sep 22 13:15:00 2005 Subject: [Numpy-discussion] numarray bug in gaussian_filter1d? In-Reply-To: <67d31e4205092208414dbb0646@mail.gmail.com> References: <67d31e4205092208414dbb0646@mail.gmail.com> Message-ID: <9B004606-EB90-4F4F-88E5-877117DA079E@embl-heidelberg.de> I think you are correct: The result of a gaussian filter of order one should be a derivative operator. Thus the response to an array created with arange() should be close to one (barring edge effects). Currently we have: >>> from numarray import * >>> from numarray.nd_image import gaussian_filter >>> a = arange(10, type = Float64) >>> gaussian_filter(a, 1.0, order = 1) array([-0.36378461, -0.84938238, -0.98502645, -0.99939268, -0.999928 , -0.999928 , -0.99939268, -0.98502645, -0.84938238, -0.36378461]) So the sign is wrong, that can be fixed by mirroring the gaussian kernels. I have done so in CVS. The same holds true for the Sobel and Prewitt filters, they were also defined 'incorrectly' according to this criterion. I also changed those. That may be a bit more controversial since a quick look on the web seemed to indicate that often it is defined the other around. If anybody thinks my changes are no good, please let me know. Cheers, Peter On 22 Sep, 2005, at 17:41, Alexandre Guimond wrote: > Hi. > > I think I found a bug in gaussian_filter1d. > > roughly the function creates a 1d kernel and then calls > correlate1d. The problem I see is that the kernel should be > mirrored prior to calling correlate1d since we want to convolve, > not correlate. > > as a result: > > >>> import numarray.nd_image > >>> numarray.nd_image.gaussian_filter1d( ( 0.0, 1.0, 0.0 ), 1, > order = 1, axis = 0, mode = 'constant' ) > array([-0.24197145, 0. , 0.24197145]) > >>> > > when it should be [ 0.24197145, 0. , -0.24197145]) (notice > the change in the sign of coefficients) > > Or did I get that wrong? > > alex. From ariciputi at pito.com Thu Sep 22 14:26:48 2005 From: ariciputi at pito.com (Andrea Riciputi) Date: Thu Sep 22 14:26:48 2005 Subject: [Numpy-discussion] Piecewise functions. In-Reply-To: References: Message-ID: <6F76B128-5B9B-4E0D-88D9-4F795A5B5B6E@pito.com> I've already tried something like this, but it doesn't work since f1 and f2 return not valid values outside the range over they are defined. Perhaps an example could clarify; suppose that f1(x) = 1./ sqrt(1 - x**2) for x <= 1, and f2(x) = 1./sqrt(x**2 - 1) for x > 1. Your suggestion, as the other I've tried, fails with a "OverflowError: math range error". Any helps? Andrea. On Sep 22, 2005, at 9:33 PM, Alan G Isaac wrote: > On Thu, 22 Sep 2005, Andrea Riciputi apparently wrote: > >> this is probably an already discussed problem, but I've not been able >> to find a solution even after googling a lot. >> > > >> I've a piecewise defined function: >> > > >> / >> | f1(x) if x <= a >> f(x) = | >> | f2(x) if x > a >> \ >> > > >> where f1 and f2 are not defined outside the above range. How can I >> define such a function in Python in order to apply (map) it to an >> array ranging from values smaller to values bigger than a? >> > > I suspect I do not understand your question. 
> But perhaps you want this: > > def f(x): > return x<=a and f1(x) or f2(x) > > fwiw, > Alan > > > > > > ------------------------------------------------------- > SF.Net email is sponsored by: > Tame your development challenges with Apache's Geronimo App Server. > Download it for free - -and be entered to win a 42" plasma tv or > your very > own Sony(tm)PSP. Click here to play: http://sourceforge.net/ > geronimo.php > _______________________________________________ > Numpy-discussion mailing list > Numpy-discussion at lists.sourceforge.net > https://lists.sourceforge.net/lists/listinfo/numpy-discussion > From aisaac at american.edu Thu Sep 22 15:18:10 2005 From: aisaac at american.edu (Alan G Isaac) Date: Thu Sep 22 15:18:10 2005 Subject: [Numpy-discussion] Piecewise functions. In-Reply-To: <6F76B128-5B9B-4E0D-88D9-4F795A5B5B6E@pito.com> References: <6F76B128-5B9B-4E0D-88D9-4F795A5B5B6E@pito.com> Message-ID: On Thu, 22 Sep 2005, Andrea Riciputi apparently wrote: > I've already tried something like this, but it doesn't work since f1 > and f2 return not valid values outside the range over they are > defined. Perhaps an example could clarify; suppose that f1(x) = 1./ > sqrt(1 - x**2) for x <= 1, and f2(x) = 1./sqrt(x**2 - 1) for x > 1. > Your suggestion, as the other I've tried, fails with a > "OverflowError: math range error". If you do it as I suggested, they should not I believe be evaluated outside of their range. So your function must be generating an overflow error within this range. >>> import math >>> import random >>> def f1(x): return math.sqrt(1-x**2) ... >>> def f2(x): return 1./math.sqrt(x**2-1) ... >>> def f(x): return x<=1 and f1(x) or f2(x) ... >>> d = [random.uniform(0,2) for i in range(20)] >>> fd = [f(x) for x in d] Works fine. Cheers, Alan Isaac From greg.ewing at canterbury.ac.nz Thu Sep 22 20:26:10 2005 From: greg.ewing at canterbury.ac.nz (Greg Ewing) Date: Thu Sep 22 20:26:10 2005 Subject: [Numpy-discussion] Piecewise functions. In-Reply-To: <6F76B128-5B9B-4E0D-88D9-4F795A5B5B6E@pito.com> References: <6F76B128-5B9B-4E0D-88D9-4F795A5B5B6E@pito.com> Message-ID: <43337527.6080002@canterbury.ac.nz> Andrea Riciputi wrote: > On Sep 22, 2005, at 9:33 PM, Alan G Isaac wrote: > > >> def f(x): >> return x<=a and f1(x) or f2(x) > > I've already tried something like this, but it doesn't work I wish people would stop suggesting the 'a and b or c' trick, because it DOESN'T WORK except in special circumstances (i.e. when you can be sure that b is never false). What you want is: def f(x): if x <= a: return f1(x) else: return f2(x) -- Greg Ewing, Computer Science Dept, +--------------------------------------+ University of Canterbury, | A citizen of NewZealandCorp, a | Christchurch, New Zealand | wholly-owned subsidiary of USA Inc. | greg.ewing at canterbury.ac.nz +--------------------------------------+ From ariciputi at pito.com Fri Sep 23 00:03:07 2005 From: ariciputi at pito.com (Andrea Riciputi) Date: Fri Sep 23 00:03:07 2005 Subject: [Numpy-discussion] Piecewise functions. In-Reply-To: <43337527.6080002@canterbury.ac.nz> References: <6F76B128-5B9B-4E0D-88D9-4F795A5B5B6E@pito.com> <43337527.6080002@canterbury.ac.nz> Message-ID: On Sep 23, 2005, at 5:23 AM, Greg Ewing wrote: > I wish people would stop suggesting the 'a and b or c' trick, > because it DOESN'T WORK except in special circumstances (i.e. > when you can be sure that b is never false). > > What you want is: > > def f(x): > if x <= a: > return f1(x) > else: > return f2(x) It doesn't work either. 
As I've already explained x is an array containing values both above and below a! What I really need is a way to prevent f1 and f2 from acting on those values of the 'x' array for which the functions are not defined. Any other hints? Andrea. From joostvanevert at gmail.com Fri Sep 23 01:33:43 2005 From: joostvanevert at gmail.com (Joost van Evert) Date: Fri Sep 23 01:33:43 2005 Subject: [Numpy-discussion] array indexing in numarray Message-ID: <1127464280.3803.3.camel@inpc93.et.tudelft.nl> Hi list, I have two questions on numarray indexing: 1) does anyone know an elegant way to assign values to a non-contiguous slice. Like: In [1]:from numarray import * In [2]:a = arange(25).resize((5,5)) In [3]:a Out[3]: array([[ 0, 1, 2, 3, 4], [ 5, 6, 7, 8, 9], [10, 11, 12, 13, 14], [15, 16, 17, 18, 19], [20, 21, 22, 23, 24]]) In [4]:i = [1,3] In [5]:transpose(a[i])[i]=1 ---------------------------------------------------------------- exceptions.ValueError Traceback (most recent call last) ... ValueError: Invalid destination array: partial indices require contiguous non-byteswapped destination 2) And second: why didn't you choose to return all combinations of indices: In [8]:a[i,i] Out[8]:array([ 6, 18]) Where I, in my humble opinion, would prefer to see all possible combinations. e.g. a[[1,3],[1,3]] = a[[1,1,3,3],[1,3,1,3]] Anyone has an idea how to easyly obtain this behaviour? Greets+thanks, Joost From gerard.vermeulen at grenoble.cnrs.fr Fri Sep 23 01:55:46 2005 From: gerard.vermeulen at grenoble.cnrs.fr (Gerard Vermeulen) Date: Fri Sep 23 01:55:46 2005 Subject: [Numpy-discussion] Piecewise functions. In-Reply-To: References: Message-ID: <20050923105452.717227fb.gerard.vermeulen@grenoble.cnrs.fr> On Thu, 22 Sep 2005 17:44:59 +0200 Andrea Riciputi wrote: > Hi all, > this is probably an already discussed problem, but I've not been able > to find a solution even after googling a lot. > > I've a piecewise defined function: > > / > | f1(x) if x <= a > f(x) = | > | f2(x) if x > a > \ > > where f1 and f2 are not defined outside the above range. How can I > define such a function in Python in order to apply (map) it to an > array ranging from values smaller to values bigger than a? > This does maybe what you want: from scipy import * def f(x): """Approximative implementation of: -1/(x-pi) for x < pi 1/(x-pi) for x > pi """ result = zeros(len(x), x.typecode()) i = argmin(abs(x-pi)) # you may have to tweak i here, because it may be off by 1 result[:i+1] = -1/(x[:i+1]-pi) result[i+1:] = 1/(x[i+1:]-pi) return result x = arange(0, 10, 1, Float) print f(x) print abs(1/(x-pi)) Gerard From pjssilva at ime.usp.br Fri Sep 23 06:34:19 2005 From: pjssilva at ime.usp.br (Paulo J. S. Silva) Date: Fri Sep 23 06:34:19 2005 Subject: [Numpy-discussion] Piecewise functions. In-Reply-To: <20050923105452.717227fb.gerard.vermeulen@grenoble.cnrs.fr> References: <20050923105452.717227fb.gerard.vermeulen@grenoble.cnrs.fr> Message-ID: <1127474667.13490.4.camel@localhost.localdomain> Here is a slightly different solution, that is easier to my eyes and that can handle arguments in arbitrary order. It requires numarray (as it uses array indexing). If I understand well it should work woth the new scipy-core (Numeric3), but I haven't it compiled here. Obs: Probably f2 implementation is faster than f1. Best, Paulo ---- from numarray import * def f1(x): """Implementation of: -1/(x-pi) for x < pi 1/(x-pi) for x > pi """ result = zeros(len(x), x.typecode()) # First solution, probably slower, but clear. 
result[x < pi] = -1/(x[x < pi]-pi) result[x > pi] = 1/(x[x > pi]-pi) return result def f2(x): """Second Iplementation of: -1/(x-pi) for x < pi 1/(x-pi) for x > pi """ result = zeros(len(x), x.typecode()) # Second solution, probably faster as where returns only # the correct indexes. small = where(x < pi) big = where(x > pi) result[small] = -1/(x[small]-pi) result[big] = 1/(x[big]-pi) return result x = arange(0, 10, 1, Float) print f1(x) print f2(x) print abs(1/(x-pi)) # It uses the default value for pi. api = array([pi]) print f1(api) print f2(api) From aisaac at american.edu Fri Sep 23 07:58:08 2005 From: aisaac at american.edu (Alan G Isaac) Date: Fri Sep 23 07:58:08 2005 Subject: [Numpy-discussion] Piecewise functions. In-Reply-To: References: <6F76B128-5B9B-4E0D-88D9-4F795A5B5B6E@pito.com> <43337527.6080002@canterbury.ac.nz> Message-ID: On Fri, 23 Sep 2005, Andrea Riciputi apparently wrote: > What I really need is a way to prevent f1 and f2 from > acting on those values of the 'x' array for which the > functions are not defined. The example I posted works with an array, which I called d. If you must feed the array to the function, just move the list comprehension inside of f. Of course, you may find list comprehension too slow for your application. Alan Isaac From ariciputi at pito.com Fri Sep 23 09:58:08 2005 From: ariciputi at pito.com (Andrea Riciputi) Date: Fri Sep 23 09:58:08 2005 Subject: [Numpy-discussion] Piecewise functions. In-Reply-To: <1127474667.13490.4.camel@localhost.localdomain> References: <20050923105452.717227fb.gerard.vermeulen@grenoble.cnrs.fr> <1127474667.13490.4.camel@localhost.localdomain> Message-ID: <3C09D7A9-9548-432A-8D53-B7FB23A62FF8@pito.com> On Sep 23, 2005, at 1:24 PM, Paulo J. S. Silva wrote: > Here is a slightly different solution, that is easier to my eyes and > that can handle arguments in arbitrary order. > > It requires numarray (as it uses array indexing). If I understand well > [snip] Thanks, it is exactly what I'm looking for. Unfortunately, I'm using Numeric, and I won't switch to numarray only for these feature; small arrays are too frequent in my applications. Anyway the method proposed by Gerard Vermeulen works too (thanks a lot Gerard!), and I'll stay with it, at least until Numeric3 won't be ready for the root-mean-square users. ;-) Cheers, Andrea. From service at paypal.com Sat Sep 24 03:41:28 2005 From: service at paypal.com (service at paypal.com) Date: Sat Sep 24 03:41:28 2005 Subject: [Numpy-discussion] Important PayPal! Account Information. Message-ID: An HTML attachment was scrubbed... URL: From oliphant at ee.byu.edu Sat Sep 24 21:52:40 2005 From: oliphant at ee.byu.edu (Travis Oliphant) Date: Sat Sep 24 21:52:40 2005 Subject: [Numpy-discussion] Release of scipy core beta will happen next week. Message-ID: <43362CCF.5020100@ee.byu.edu> At the SciPy 2005 conference I announced that I was hoping to get a beta of the new scipy (core) (aka Numeric3 aka Numeric Next Generation) released by the end of the conference. This did not happen. Some last minute features were suggested by Fernando Perez that I think will be relatively easy to add and make the release that much stronger. Look for the beta announcement next week. For the impatient, the svn server is always available: http://svn.scipy.org/svn/scipy_core/branches/newcore -Travis O. 
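[The following is an illustrative sketch, not part of the original thread: it summarizes the approach suggested in the piecewise-functions discussion above -- evaluate each branch only on the subarray where it is defined, as in Paulo's and Gerard's replies. It assumes numarray is installed; the helper name eval_piecewise, the 0..10 sample range, and the lambdas are made up for the example, while the cut at pi and the -1/(x-pi), 1/(x-pi) branches mirror the thread's own example.]

import math
from numarray import zeros, arange, Float

def eval_piecewise(x, a, f1, f2):
    # Evaluate f1 only where x <= a and f2 only where x > a, so neither
    # branch ever sees values outside its domain.
    result = zeros(len(x), Float)
    low = x <= a
    high = x > a
    result[low] = f1(x[low])
    result[high] = f2(x[high])
    return result

x = arange(0.0, 10.0, 1.0, Float)
print eval_piecewise(x, math.pi,
                     lambda t: -1.0/(t - math.pi),
                     lambda t: 1.0/(t - math.pi))

[Because each branch only receives values from its own domain, the OverflowError that started the thread cannot occur; as Travis notes later in the thread, the new scipy core adds a piecewise() helper that packages the same selection behind a single call.]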
From gerard.vermeulen at grenoble.cnrs.fr Sun Sep 25 10:16:05 2005 From: gerard.vermeulen at grenoble.cnrs.fr (Gerard Vermeulen) Date: Sun Sep 25 10:16:05 2005 Subject: [Numpy-discussion] Release of scipy core beta will happen next week. In-Reply-To: <43362CCF.5020100@ee.byu.edu> References: <43362CCF.5020100@ee.byu.edu> Message-ID: <20050925191415.3a686369.gerard.vermeulen@grenoble.cnrs.fr> On Sat, 24 Sep 2005 22:51:27 -0600 Travis Oliphant wrote: > > At the SciPy 2005 conference I announced that I was hoping to get a beta > of the new scipy (core) (aka Numeric3 aka Numeric Next Generation) > released by the end of the conference. > > This did not happen. Some last minute features were suggested by > Fernando Perez that I think will be relatively easy to add and make the > release that much stronger. > > Look for the beta announcement next week. > > For the impatient, the svn server is always available: > > http://svn.scipy.org/svn/scipy_core/branches/newcore > Hi Travis, when I tried a few months ago to compile one of my C++ Python modules with Numeric3, g++-3.4.3 choked on the line typedef unsigned char bool; in arrayobject.h, because bool is a predefined type in C++. I see the offending line is still in SVN (did not try to build it though). Sorry for sitting on the bug so long; the main reason is that at the time (I suppose it is still the case) Numeric3 does not coexist well with Numeric in the same Python interpreter (I remember import conflicts). If a typical Linux user wants to play with Numeric3, he has either to remove Numeric (and break possible dependencies) or build his own Python for Numeric3. I think that most Linux users are not going to do this and that it will take more than a year before distros make the move. Hence, my lack of motivation for reporting bugs or giving it a real try. Gerard From stephen.walton at csun.edu Sun Sep 25 11:40:25 2005 From: stephen.walton at csun.edu (Stephen Walton) Date: Sun Sep 25 11:40:25 2005 Subject: [Numpy-discussion] python-numeric in FC4: good idea? Message-ID: <4336EEB1.1070105@csun.edu> In my ongoing struggles with FC4, I now find that the Fedora developers have put Numeric 23.7 into the core as python-numeric and built pygtk2 against it, so the latter has a dependency on the former. I think this is a bad idea unless the core developers are going to include ATLAS and build Numeric against it. After all, this community and the Scipy one have gone to great lengths to optimize ATLAS and build Numeric and numarray (and presumably the new scipy core) against it. But I'm probably not going to convince the Redhat folks on my own. So, questions are: can someone more familiar with pygtk2 than me tell me what parts of it depend on Numeric and why? Can we start a campaign to put ATLAS into Fedora Core if Numeric is going to be there too? From rkern at ucsd.edu Sun Sep 25 19:41:04 2005 From: rkern at ucsd.edu (Robert Kern) Date: Sun Sep 25 19:41:04 2005 Subject: [Numpy-discussion] python-numeric in FC4: good idea? In-Reply-To: <4336EEB1.1070105@csun.edu> References: <4336EEB1.1070105@csun.edu> Message-ID: <43375FAC.1080303@ucsd.edu> Stephen Walton wrote: > In my ongoing struggles with FC4, I now find that the Fedora developers > have put Numeric 23.7 into the core as python-numeric and built pygtk2 > against it, so the latter has a dependency on the former. > > I think this is a bad idea unless the core developers are going to > include ATLAS and build Numeric against it. 
After all, this community > and the Scipy one have gone to great lengths to optimize ATLAS and build > Numeric and numarray (and presumably the new scipy core) against it. > But I'm probably not going to convince the Redhat folks on my own. > > So, questions are: can someone more familiar with pygtk2 than me tell > me what parts of it depend on Numeric and why? Can we start a campaign > to put ATLAS into Fedora Core if Numeric is going to be there too? It's for returning a Numeric array from a pixbuf. Strictly speaking, it's optional, but it looks like the package maintainers decided to compile it in for FC4. Can you make an RPM of python-numeric compiled against ATLAS and install it yourself? Or can you install Numeric yourself and make a dummy package to tell RPM that yes, indeed, Numeric is installed? I'm terribly unfamiliar with Fedora and RPM; I've always prefered Debian. -- Robert Kern rkern at ucsd.edu "In the fields of hell where the grass grows high Are the graves of dreams allowed to die." -- Richard Harter From rkern at ucsd.edu Sun Sep 25 20:00:10 2005 From: rkern at ucsd.edu (Robert Kern) Date: Sun Sep 25 20:00:10 2005 Subject: [Numpy-discussion] Release of scipy core beta will happen next week. In-Reply-To: <20050925191415.3a686369.gerard.vermeulen@grenoble.cnrs.fr> References: <43362CCF.5020100@ee.byu.edu> <20050925191415.3a686369.gerard.vermeulen@grenoble.cnrs.fr> Message-ID: <4337640B.2030405@ucsd.edu> Gerard Vermeulen wrote: > On Sat, 24 Sep 2005 22:51:27 -0600 > Travis Oliphant wrote: > >>At the SciPy 2005 conference I announced that I was hoping to get a beta >>of the new scipy (core) (aka Numeric3 aka Numeric Next Generation) >>released by the end of the conference. >> >>This did not happen. Some last minute features were suggested by >>Fernando Perez that I think will be relatively easy to add and make the >>release that much stronger. >> >>Look for the beta announcement next week. >> >>For the impatient, the svn server is always available: >> >>http://svn.scipy.org/svn/scipy_core/branches/newcore > > Hi Travis, > > when I tried a few months ago to compile one of my C++ Python modules with > Numeric3, g++-3.4.3 choked on the line > > typedef unsigned char bool; > > in arrayobject.h, because bool is a predefined type in C++. > > I see the offending line is still in SVN (did not try to build it though). Will this do the trick? #ifndef __cplusplus typedef unsigned char bool; #define false 0 #define true 1 #endif /* __cplusplus */ > Sorry for sitting on the bug so long; the main reason is that at the time > (I suppose it is still the case) Numeric3 does not coexist well with > Numeric in the same Python interpreter (I remember import conflicts). > If a typical Linux user wants to play with Numeric3, he has either to remove > Numeric (and break possible dependencies) or build his own Python for Numeric3. > I think that most Linux users are not going to do this and that it will take more > than a year before distros make the move. Hence, my lack of motivation for > reporting bugs or giving it a real try. scipy_core does not interfere with Numeric anymore. It's installed as scipy (so it *will* interfere with previous versions of scipy). While we're on the subject of bugs (for reference, I'm on OS X 10.4 with Python 2.4.1): * When linking umath.so, distutils is including a bare "-l" that causes the link to fail (gcc can't interpret the argument). I have no idea where it's coming from. 
Just after the Extension object for umath.so is created, the libraries attribute is empty, just like the other Extension objects. * When linking against Accelerate.framework, it can't find cblas.h . I have a patch in scipy.distutils.system_info for that (and also to remove the -framework arguments for compiling; they're solely linker flags). * setup.py looks for an optimized BLAS through blas_info, but getting lapack_info is commented out. Is this deliberate? * Despite comment lines claiming the contrary, scipy.linalg hasn't been converted yet. basic_lite.py tries to import lapack and flapack. * scipy.base.limits is missing (and is used in basic_lite.py). Feature request: * Bundle the include directory in with the package itself and provide a function somewhere that returns the appropriate directory. People writing setup.py scripts for extension modules that use scipy_core can then do the following: from scipy.distutils import get_scipy_include ... someext = Extension('someext', include_dirs=[get_scipy_include(), ...], ...) This would help people who can't write to sys.prefix+'/include/python2.4' and people like me who are trying to package scipy_core into a PythonEgg (which doesn't yet support installing header files to the canonical location). -- Robert Kern rkern at ucsd.edu "In the fields of hell where the grass grows high Are the graves of dreams allowed to die." -- Richard Harter From Fernando.Perez at colorado.edu Sun Sep 25 22:04:01 2005 From: Fernando.Perez at colorado.edu (Fernando Perez) Date: Sun Sep 25 22:04:01 2005 Subject: [Numpy-discussion] python-numeric in FC4: good idea? In-Reply-To: <43375FAC.1080303@ucsd.edu> References: <4336EEB1.1070105@csun.edu> <43375FAC.1080303@ucsd.edu> Message-ID: <4337811E.4020806@colorado.edu> Robert Kern wrote: > Can you make an RPM of python-numeric compiled against ATLAS and install > it yourself? Or can you install Numeric yourself and make a dummy > package to tell RPM that yes, indeed, Numeric is installed? I'm terribly > unfamiliar with Fedora and RPM; I've always prefered Debian. You can. In fact, Numeric builds out of the box with python bdist_rpm, though the package name comes out to be named 'Numeric', but that should not be a problem, since the setup.cfg file reads: [bdist_rpm] provides=python-numeric, python-numeric-devel build_script=rpm_build.sh install_script=rpm_install.sh which means that the python-numeric dependency should be satisfied. If you really need to have the rpm be named python-numeric, this can be done by either writing out the spec file and fixing it by hand via: python setup.py bdist_rpm --spec-only or by changing the 'name' flag in the setup.py by hand to read 'python-config' instead of 'Numeric'. If changing this name for rpms is a common need, we can patch up setup.py to take an optional argument. Cheers, f From Chris.Barker at noaa.gov Sun Sep 25 23:34:11 2005 From: Chris.Barker at noaa.gov (Chris Barker) Date: Sun Sep 25 23:34:11 2005 Subject: [Numpy-discussion] python-numeric in FC4: good idea? In-Reply-To: <43375FAC.1080303@ucsd.edu> References: <4336EEB1.1070105@csun.edu> <43375FAC.1080303@ucsd.edu> Message-ID: <4337962C.8000404@noaa.gov> Robert Kern wrote: > Stephen Walton wrote: >>So, questions are: can someone more familiar with pygtk2 than me tell >>me what parts of it depend on Numeric and why? Can we start a campaign >>to put ATLAS into Fedora Core if Numeric is going to be there too? > > It's for returning a Numeric array from a pixbuf. 
This is a Darn Good example of why we need the new array protocol. Let's make sure to make sure it gets used as widely as possible. -Chris From greg.ewing at canterbury.ac.nz Mon Sep 26 02:20:09 2005 From: greg.ewing at canterbury.ac.nz (Greg Ewing) Date: Mon Sep 26 02:20:09 2005 Subject: [Numpy-discussion] python-numeric in FC4: good idea? In-Reply-To: <4337962C.8000404@noaa.gov> References: <4336EEB1.1070105@csun.edu> <43375FAC.1080303@ucsd.edu> <4337962C.8000404@noaa.gov> Message-ID: <4337BA3C.8080500@canterbury.ac.nz> Chris Barker wrote: > This is a Darn Good example of why we need the new array protocol. I came across another one the other day when working on pygui. I wanted to use glReadPixels to read data into a buffer belonging to an NSBitmapImageRep, but PyOpenGL's glReadPixels can only read data into an existing memory block if it's a Numeric array, and PyObjC doesn't know anything about Numeric... Greg From rkern at ucsd.edu Mon Sep 26 02:44:08 2005 From: rkern at ucsd.edu (Robert Kern) Date: Mon Sep 26 02:44:08 2005 Subject: [Numpy-discussion] python-numeric in FC4: good idea? In-Reply-To: <4337BA3C.8080500@canterbury.ac.nz> References: <4336EEB1.1070105@csun.edu> <43375FAC.1080303@ucsd.edu> <4337962C.8000404@noaa.gov> <4337BA3C.8080500@canterbury.ac.nz> Message-ID: <4337C2D1.1020204@ucsd.edu> Greg Ewing wrote: > Chris Barker wrote: > >> This is a Darn Good example of why we need the new array protocol. > > I came across another one the other day when working on > pygui. I wanted to use glReadPixels to read data into a > buffer belonging to an NSBitmapImageRep, but PyOpenGL's > glReadPixels can only read data into an existing memory > block if it's a Numeric array, and PyObjC doesn't know > anything about Numeric... I've taken the liberty of adding these to a new Wiki page. Let's record these "Oh I wish everyone used the array protocol" moments when we come across them. http://www.scipy.org/wikis/featurerequests/ArrayInterfaceUseCases/ This will probably migrate to the Trac instance for scipy whenever that comes about. -- Robert Kern rkern at ucsd.edu "In the fields of hell where the grass grows high Are the graves of dreams allowed to die." -- Richard Harter From oliphant at ee.byu.edu Mon Sep 26 08:18:22 2005 From: oliphant at ee.byu.edu (Travis Oliphant) Date: Mon Sep 26 08:18:22 2005 Subject: [Numpy-discussion] python-numeric in FC4: good idea? In-Reply-To: <4337C2D1.1020204@ucsd.edu> References: <4336EEB1.1070105@csun.edu> <43375FAC.1080303@ucsd.edu> <4337962C.8000404@noaa.gov> <4337BA3C.8080500@canterbury.ac.nz> <4337C2D1.1020204@ucsd.edu> Message-ID: <433810FA.5040704@ee.byu.edu> Robert Kern wrote: >Greg Ewing wrote: > > >>Chris Barker wrote: >> >> >> >>>This is a Darn Good example of why we need the new array protocol. >>> >>> >>I came across another one the other day when working on >>pygui. I wanted to use glReadPixels to read data into a >>buffer belonging to an NSBitmapImageRep, but PyOpenGL's >>glReadPixels can only read data into an existing memory >>block if it's a Numeric array, and PyObjC doesn't know >>anything about Numeric... >> >> > >I've taken the liberty of adding these to a new Wiki page. Let's record >these "Oh I wish everyone used the array protocol" moments when we come >across them. > >http://www.scipy.org/wikis/featurerequests/ArrayInterfaceUseCases/ > > This is a great idea. It will help me sell an array_protocol_in_Python PEP to the Python Powers. 
-Travis From stephen.walton at csun.edu Mon Sep 26 08:40:01 2005 From: stephen.walton at csun.edu (Stephen Walton) Date: Mon Sep 26 08:40:01 2005 Subject: [Numpy-discussion] python-numeric in FC4: good idea? In-Reply-To: <4337811E.4020806@colorado.edu> References: <4336EEB1.1070105@csun.edu> <43375FAC.1080303@ucsd.edu> <4337811E.4020806@colorado.edu> Message-ID: <433815FE.7040007@csun.edu> Fernando Perez wrote: > Robert Kern wrote: > >> Can you make an RPM of python-numeric compiled against ATLAS and install >> it yourself? > > > You can. In fact, Numeric builds out of the box with > > python bdist_rpm, > > though the package name comes out to be named 'Numeric', but that > should not be a problem, since the setup.cfg file reads: > > [bdist_rpm] > provides=python-numeric, python-numeric-devel > build_script=rpm_build.sh > install_script=rpm_install.sh > > which means that the python-numeric dependency should be satisfied. Yes, it is, and I sort of found that out myself. Yum is much smarter about this than RPM apparently; if I do rpm -U Numeric-23.8-i386.rpm rpm refuses to replace python-numeric-23.7 as distributed with FC4, but "yum install Numeric" when pointed at my local repo works fine. As does "rpm -i --force" of course. There is at least a verbal commitment at bugzilla.redhat.com from Redhat to migrate pygtk2 to the new array object when it comes out. From gerard.vermeulen at grenoble.cnrs.fr Mon Sep 26 08:53:34 2005 From: gerard.vermeulen at grenoble.cnrs.fr (Gerard Vermeulen) Date: Mon Sep 26 08:53:34 2005 Subject: [Numpy-discussion] Release of scipy core beta will happen next week. In-Reply-To: <4337640B.2030405@ucsd.edu> References: <43362CCF.5020100@ee.byu.edu> <20050925191415.3a686369.gerard.vermeulen@grenoble.cnrs.fr> <4337640B.2030405@ucsd.edu> Message-ID: <20050926175152.33bb7b62.gerard.vermeulen@grenoble.cnrs.fr> On Sun, 25 Sep 2005 19:59:23 -0700 Robert Kern wrote: > > typedef unsigned char bool; > > > > in arrayobject.h, because bool is a predefined type in C++. > > > > I see the offending line is still in SVN (did not try to build it though). > > Will this do the trick? > > #ifndef __cplusplus > typedef unsigned char bool; > #define false 0 > #define true 1 > #endif /* __cplusplus */ > It works for my g++, but it may not be a general solution. See footnote 15 under item 2 of section 5.3.3 of http://www.open-std.org/jtc1/sc22/open/n2356/ (ISO C++ draft) saying that sizeof(bool) is not required to be 1. Gerard From oliphant at ee.byu.edu Mon Sep 26 10:44:26 2005 From: oliphant at ee.byu.edu (Travis Oliphant) Date: Mon Sep 26 10:44:26 2005 Subject: [Numpy-discussion] Piecewise functions. In-Reply-To: References: Message-ID: <4338334D.4050709@ee.byu.edu> Andrea Riciputi wrote: > Hi all, > this is probably an already discussed problem, but I've not been able > to find a solution even after googling a lot. > > I've a piecewise defined function: > > / > | f1(x) if x <= a > f(x) = | > | f2(x) if x > a > \ > > where f1 and f2 are not defined outside the above range. How can I > define such a function in Python in order to apply (map) it to an > array ranging from values smaller to values bigger than a? This is not a trivial problem in current versions of Numeric. What are you using, Numeric, numarray? 
In new scipy core (which replaces Numeric) and, I think, in numarray, you could say gta = x>a lea = x<=a y = x.copy() y[gta] = f2(x[gta]) y[lea] = f1(x[lea]) I've also just written a piecewise function for the new scipy core so you could write y = piecewise(x, x<=a, [f1,f2]) -Travis Oliphant From ariciputi at pito.com Tue Sep 27 02:10:32 2005 From: ariciputi at pito.com (Andrea Riciputi) Date: Tue Sep 27 02:10:32 2005 Subject: [Numpy-discussion] Piecewise functions. In-Reply-To: <4338334D.4050709@ee.byu.edu> References: <4338334D.4050709@ee.byu.edu> Message-ID: <96C7A372-9079-4338-85E5-6A3C0751623E@pito.com> On Sep 26, 2005, at 7:43 PM, Travis Oliphant wrote: > This is not a trivial problem in current versions of Numeric. > > What are you using, Numeric, numarray? > > In new scipy core (which replaces Numeric) and, I think, in > numarray, you could say > > gta = x>a > lea = x<=a > y = x.copy() > y[gta] = f2(x[gta]) > y[lea] = f1(x[lea]) > > I've also just written a piecewise function for the new scipy core > so you could write > > y = piecewise(x, x<=a, [f1,f2]) > > -Travis Oliphant Yes I'm using Numeric, and in the short to middle term I'll stay with it. Anyway I'm aware of the effort you are spending in putting together a Numeric replacement, and I'll look forward for a stable release of it. In the meanwhile I'll use the tricks already suggested here. Thanks again for your effort, Andrea. From junkmail at chatsubo.lagged.za.net Tue Sep 27 09:05:45 2005 From: junkmail at chatsubo.lagged.za.net (Neilen Marais) Date: Tue Sep 27 09:05:45 2005 Subject: [Numpy-discussion] Determining the condition number of matrix Message-ID: Hi Does numeric have a facility to estimate the condition number of a matrix? Thanks Neilen -- you know its kind of tragic we live in the new world but we've lost the magic -- Battery 9 (www.battery9.co.za) From luszczek at cs.utk.edu Tue Sep 27 11:54:59 2005 From: luszczek at cs.utk.edu (Piotr Luszczek) Date: Tue Sep 27 11:54:59 2005 Subject: [Numpy-discussion] Determining the condition number of matrix In-Reply-To: References: Message-ID: <433994D7.8050100@cs.utk.edu> Neilen Marais wrote: > Hi > > Does numeric have a facility to estimate the condition number of a matrix? > > Thanks > Neilen The only way I can think of is through SVD: import RandomArray as RA import LinearAlgebra as LA n = 100 a = RA.random((n, n)) vt, s, u = LA.singular_value_decomposition(a) cond2 = s[0] / s[-1] print cond2 The above code computes 2-norm condition number. Since Numeric has only limited binding to LAPACK you should probably look into SciPy that might have bindings to LAPACK's condition number estimators. Piotr From edcjones at comcast.net Wed Sep 28 11:28:34 2005 From: edcjones at comcast.net (Edward C. Jones) Date: Wed Sep 28 11:28:34 2005 Subject: [Numpy-discussion] numarray: Possible hash collision problem Message-ID: <433AE061.2030206@comcast.net> hash(numarray.arange(1000)) == hash(numarray.arange(10000)) The hash value changes each time I enter the Python interpreter. I have always assumed that hashing was deterministic. Is it? From cookedm at physics.mcmaster.ca Wed Sep 28 11:46:40 2005 From: cookedm at physics.mcmaster.ca (David M. Cooke) Date: Wed Sep 28 11:46:40 2005 Subject: [Numpy-discussion] numarray: Possible hash collision problem In-Reply-To: <433AE061.2030206@comcast.net> (Edward C. Jones's message of "Wed, 28 Sep 2005 14:26:41 -0400") References: <433AE061.2030206@comcast.net> Message-ID: "Edward C. 
Jones" writes: > hash(numarray.arange(1000)) == hash(numarray.arange(10000)) > > The hash value changes each time I enter the Python interpreter. I have > always assumed that hashing was deterministic. Is it? Not suprising: I also get this: hash(object()) == hash(object()) Looking through the source, I think the hash for an array is determined by the object base class, and hence is the id() of the array. The code above can be written long hand as a = numarray.arange(1000) ha = hash(a) # in this case, hash(a) == id(a) del a b = numarray.arange(10000) hb = hash(b) # in this case, hash(b) == id(b) del b ha == hb It's those (implicit) del statements that mean that a and b are stored to the same location in memory, and hence have the same id(): there's no other object created in the interpreter between when a is deleted and b is created. Basically, id() of a object is guaranteed to be unique *amongst all active objects*. It is _not_ guaranteed to be different from objects that have been created and destroyed. This will return false: a = numarray.arange(1000) b = numarray.arange(10000) hash(a) == hash(b) as a and b still both exist. Since arrays are mutable, there's no good way to get a content-based hash. -- |>|\/|< /--------------------------------------------------------------------------\ |David M. Cooke http://arbutus.physics.mcmaster.ca/dmc/ |cookedm at physics.mcmaster.ca From oliphant at ee.byu.edu Fri Sep 30 00:03:31 2005 From: oliphant at ee.byu.edu (Travis Oliphant) Date: Fri Sep 30 00:03:31 2005 Subject: [Numpy-discussion] Release of SciPy Core 0.4 (Beta) Message-ID: <433CE24A.6040509@ee.byu.edu> This is to announce the release of SciPy Core 0.4.X (beta) It is available at sourceforge which you can access by going to http://numeric.scipy.org Thanks to all who helped me with this release, especially Robert Kern Pearu Peterson Now, let's start getting this stable.... -Travis Oliphant From lior at fun-fairs.co.uk Thu Sep 1 09:22:42 2005 From: lior at fun-fairs.co.uk (Lior Cawthon) Date: Thu Sep 1 09:22:42 2005 Subject: [Numpy-discussion] Re: l2 80 percent of our city underwater. Message-ID: <002a01c5aefb$67b70880$a891a8c0@mercurial> on, the horse-handlers trotting towards the road leading black horses by plodded no farther than the fire post when he felt sick. He cried out lofty and special being. Lying down at his masters feet without even made the author of a novel which corresponds to the Gospel of Woland from Well, so I pinned the icon on my chest and ran... his head. stop the cancer! dust, chains clanking, and on their platforms men lay sprawled belly up on written all over in charcoal and pencil. 4. findirtctor: Typical Soviet contraction for financial director. learned doctors, then to quacks, and sometimes to fortune-tellers as well. confreres killed four soldiers, and, finally, the dirty traitor Judas - are said to have smothered St Philip, metropolitan of Moscow, with his own lifeless body lay with outstretched arms. The left foot was in a spot of heaving itself upon the earth, as happens only during world catastrophes. qualities, a dreamer and an eccentric. A girl fell in love with him, and he -------------- next part -------------- An HTML attachment was scrubbed... 
From phjoost at gmail.com Fri Sep 9 13:02:07 2005
From: phjoost at gmail.com (Joost van Evert)
Date: Fri Sep 9 13:02:07 2005
Subject: [Numpy-discussion] [Fwd: compression in storage of Numeric/numarray objects]
Message-ID: <1126296820.20258.0.camel@109-80.ipact.nl>

-------------- next part --------------
An embedded message was scrubbed...
From: unknown sender
Subject: no subject
Date: no date
Size: 38
URL:
From joostvanevert at gmail.com Fri Sep 9 12:08:02 2005
From: joostvanevert at gmail.com (Joost van Evert)
Date: Fri, 09 Sep 2005 18:08:02 +0200
Subject: compression in storage of Numeric/numarray objects
In-Reply-To:
References:
Message-ID: <1126282082.20664.11.camel@inpc93.et.tudelft.nl>

Hi List,

is it possible to use compression while storing numarray/Numeric objects?
Regards, Joost --=-hPiNO5cct4UJ0DB79Znq-- From jdhunter at ace.bsd.uchicago.edu Fri Sep 9 13:08:12 2005 From: jdhunter at ace.bsd.uchicago.edu (John Hunter) Date: Fri Sep 9 13:08:12 2005 Subject: [Numpy-discussion] [Fwd: compression in storage of Numeric/numarray objects] In-Reply-To: <1126296820.20258.0.camel@109-80.ipact.nl> (Joost van Evert's message of "Fri, 09 Sep 2005 22:13:40 +0200") References: <1126296820.20258.0.camel@109-80.ipact.nl> Message-ID: <87d5ni0zvi.fsf@peds-pc311.bsd.uchicago.edu> >>>>> "Joost" == Joost van Evert writes: Joost> is it possible to use compression while storing Joost> numarray/Numeric objects? Sure In [35]: s = rand(10000) In [36]: file('uncompressed.dat', 'wb').write(s.tostring()) In [37]: ls -l uncompressed.dat -rw-r--r-- 1 jdhunter jdhunter 80000 2005-09-09 15:04 uncompressed.dat In [38]: gzip.open('compressed.dat', 'wb').write(s.tostring()) In [39]: ls -l compressed.dat -rw-r--r-- 1 jdhunter jdhunter 41393 2005-09-09 15:04 compressed.dat Compression ration for more regular data will be better. JDH From phjoost at gmail.com Fri Sep 9 13:29:10 2005 From: phjoost at gmail.com (Joost van Evert) Date: Fri Sep 9 13:29:10 2005 Subject: [Numpy-discussion] [Fwd: compression in storage of Numeric/numarray objects] In-Reply-To: <87d5ni0zvi.fsf@peds-pc311.bsd.uchicago.edu> References: <1126296820.20258.0.camel@109-80.ipact.nl> <87d5ni0zvi.fsf@peds-pc311.bsd.uchicago.edu> Message-ID: <1126298478.20258.11.camel@109-80.ipact.nl> On Fri, 2005-09-09 at 15:06 -0500, John Hunter wrote: > >>>>> "Joost" == Joost van Evert writes: > > Joost> is it possible to use compression while storing > Joost> numarray/Numeric objects? > > > Sure > > In [35]: s = rand(10000) > > In [36]: file('uncompressed.dat', 'wb').write(s.tostring()) > > In [37]: ls -l uncompressed.dat > -rw-r--r-- 1 jdhunter jdhunter 80000 2005-09-09 15:04 uncompressed.dat > > In [38]: gzip.open('compressed.dat', 'wb').write(s.tostring()) > > In [39]: ls -l compressed.dat > -rw-r--r-- 1 jdhunter jdhunter 41393 2005-09-09 15:04 compressed.dat > Thanks, this helps me, but I think not enough, because the arrays I work on are sometimes >1Gb(Correlation matrices). The tostring method would explode the size, and result in a lot of swapping. Ideally the compression also works with memmory mapped arrays. Greets, Joost From perry at stsci.edu Fri Sep 9 13:53:07 2005 From: perry at stsci.edu (Perry Greenfield) Date: Fri Sep 9 13:53:07 2005 Subject: [Numpy-discussion] [Fwd: compression in storage of Numeric/numarray objects] In-Reply-To: <1126298478.20258.11.camel@109-80.ipact.nl> References: <1126296820.20258.0.camel@109-80.ipact.nl> <87d5ni0zvi.fsf@peds-pc311.bsd.uchicago.edu> <1126298478.20258.11.camel@109-80.ipact.nl> Message-ID: <1410842aabd84fa234c23c0b7090cbec@stsci.edu> On Sep 9, 2005, at 4:41 PM, Joost van Evert wrote: > On Fri, 2005-09-09 at 15:06 -0500, John Hunter wrote: >>>>>>> "Joost" == Joost van Evert writes: >> >> Joost> is it possible to use compression while storing >> Joost> numarray/Numeric objects? 
>> >> >> Sure >> >> In [35]: s = rand(10000) >> >> In [36]: file('uncompressed.dat', 'wb').write(s.tostring()) >> >> In [37]: ls -l uncompressed.dat >> -rw-r--r-- 1 jdhunter jdhunter 80000 2005-09-09 15:04 >> uncompressed.dat >> >> In [38]: gzip.open('compressed.dat', 'wb').write(s.tostring()) >> >> In [39]: ls -l compressed.dat >> -rw-r--r-- 1 jdhunter jdhunter 41393 2005-09-09 15:04 >> compressed.dat >> > Thanks, this helps me, but I think not enough, because the arrays I > work > on are sometimes >1Gb(Correlation matrices). The tostring method would > explode the size, and result in a lot of swapping. Ideally the > compression also works with memmory mapped arrays. > Well, it seems to me that you are asking for quite a lot if you expect it to work with memory-mapped arrays that are compressed (I'm assuming you mean that individual values are decompressed on the fly as they are needed). This is something that we gave some thought to a few years ago, but it seemed that supporting such capabilities was far too complicated, at least for now. Besides some operations are bound to blow up (e.g., take on a compressed array). But I'm still not sure what you are trying to do and what you would like to see happen underneath. An example would do a lot to explain what your needs are. Thanks, Perry Greenfield From focke at slac.stanford.edu Fri Sep 9 13:56:10 2005 From: focke at slac.stanford.edu (Warren Focke) Date: Fri Sep 9 13:56:10 2005 Subject: [Numpy-discussion] [Fwd: compression in storage of Numeric/numarray objects] In-Reply-To: <1126298478.20258.11.camel@109-80.ipact.nl> References: <1126296820.20258.0.camel@109-80.ipact.nl> <87d5ni0zvi.fsf@peds-pc311.bsd.uchicago.edu> <1126298478.20258.11.camel@109-80.ipact.nl> Message-ID: You may be able to avoid the tostring() overhead by using tofile(): s.tofile(gzip.open('compressed.dat', 'wb')) You are probably SOL on the mmapping, though. w On Fri, 9 Sep 2005, Joost van Evert wrote: > On Fri, 2005-09-09 at 15:06 -0500, John Hunter wrote: > > >>>>> "Joost" == Joost van Evert writes: > > > > Joost> is it possible to use compression while storing > > Joost> numarray/Numeric objects? > > > > > > Sure > > > > In [35]: s = rand(10000) > > > > In [36]: file('uncompressed.dat', 'wb').write(s.tostring()) > > > > In [37]: ls -l uncompressed.dat > > -rw-r--r-- 1 jdhunter jdhunter 80000 2005-09-09 15:04 uncompressed.dat > > > > In [38]: gzip.open('compressed.dat', 'wb').write(s.tostring()) > > > > In [39]: ls -l compressed.dat > > -rw-r--r-- 1 jdhunter jdhunter 41393 2005-09-09 15:04 compressed.dat > > > Thanks, this helps me, but I think not enough, because the arrays I work > on are sometimes >1Gb(Correlation matrices). The tostring method would > explode the size, and result in a lot of swapping. Ideally the > compression also works with memmory mapped arrays. 
> > Greets, > > Joost > > > > ------------------------------------------------------- > SF.Net email is Sponsored by the Better Software Conference & EXPO > September 19-22, 2005 * San Francisco, CA * Development Lifecycle Practices > Agile & Plan-Driven Development * Managing Projects & Teams * Testing & QA > Security * Process Improvement & Measurement * http://www.sqe.com/bsce5sf > _______________________________________________ > Numpy-discussion mailing list > Numpy-discussion at lists.sourceforge.net > https://lists.sourceforge.net/lists/listinfo/numpy-discussion > From skip at pobox.com Sun Sep 11 08:26:23 2005 From: skip at pobox.com (skip at pobox.com) Date: Sun Sep 11 08:26:23 2005 Subject: [Numpy-discussion] [Fwd: compression in storage of Numeric/numarray objects] In-Reply-To: <1126298478.20258.11.camel@109-80.ipact.nl> References: <1126296820.20258.0.camel@109-80.ipact.nl> <87d5ni0zvi.fsf@peds-pc311.bsd.uchicago.edu> <1126298478.20258.11.camel@109-80.ipact.nl> Message-ID: <17188.19552.306290.918560@montanaro.dyndns.org> Joost> is it possible to use compression while storing Joost> numarray/Numeric objects? Try the gzip or bz2 modules. Both have file-like objects that transparently (de)compress data as it is read or written. Joost> Ideally the compression also works with memmory mapped arrays. Dunno, but probably not. You'll have to experiment. Skip From faltet at carabos.com Mon Sep 12 06:02:34 2005 From: faltet at carabos.com (Francesc Altet) Date: Mon Sep 12 06:02:34 2005 Subject: [Numpy-discussion] [Fwd: compression in storage of Numeric/numarray objects] In-Reply-To: <1126298478.20258.11.camel@109-80.ipact.nl> References: <1126296820.20258.0.camel@109-80.ipact.nl> <87d5ni0zvi.fsf@peds-pc311.bsd.uchicago.edu> <1126298478.20258.11.camel@109-80.ipact.nl> Message-ID: <1126530688.3012.35.camel@localhost.localdomain> El dv 09 de 09 del 2005 a les 22:41 +0200, en/na Joost van Evert va escriure: > On Fri, 2005-09-09 at 15:06 -0500, John Hunter wrote: > Thanks, this helps me, but I think not enough, because the arrays I work > on are sometimes >1Gb(Correlation matrices). The tostring method would > explode the size, and result in a lot of swapping. Ideally the > compression also works with memmory mapped arrays. [mode advertising on, be warned ] You may want to use pytables [1]. It supports on-line data compression and access to data on-disk on a similar way than memory-mapped arrays. 
Example of use: In [66]:f=tables.openFile("/tmp/test-zlib.h5","w") In [67]:fzlib=tables.Filters(complevel=1, complib="zlib") # the filter In [68]:chunk=tables.Float64Atom(shape=(50,50)) # the data 'chunk' In [69]:carr=f.createCArray(f.root, "carr",(1000, 1000),chunk,'',fzlib) In [70]:carr[:]=numarray.random_array.random((1000,1000)) In [71]:f.close() In [72]:ls -l /tmp/test-zlib.h5 -rw-r--r-- 1 faltet users 3680721 2005-09-12 14:27 /tmp/test-zlib.h5 Now, you can access the data on disk as if it was in-memory: In [73]:f=tables.openFile("/tmp/test-zlib.h5","r") In [74]:f.root.carr[300,200] Out[74]:0.76497000455856323 In [75]:f.root.carr[300:310:3,900:910:2] Out[75]: array([[ 0.5336495 , 0.55542123, 0.80049258, 0.84423071, 0.47674203], [ 0.93104523, 0.71216697, 0.23955345, 0.89759707, 0.70620197], [ 0.86999339, 0.05541291, 0.55156851, 0.96808773, 0.51768076], [ 0.29315394, 0.03837755, 0.33675179, 0.93591529, 0.99721605]]) Also, access to disk is very fast, even if you compressed your data: In [77]:tzlib=timeit.Timer("carr[300:310:3,900:910:2]","import tables;f=tables.openFile('/tmp/test-zlib.h5');carr=f.root.carr") In [78]:tzlib.repeat(3,100) Out[78]:[0.204339981079101, 0.176630973815917, 0.177133798599243] Compare these times with non-compressed data: In [80]:tnc=timeit.Timer("carr[300:310:3,900:910:2]","import tables;f=tables.openFile('/tmp/test-nocompr.h5');carr=f.root.carr") In [81]:tnc.repeat(3,100) Out[81]:[0.089105129241943, 0.084129095077514, 0.084383964538574219] That means that pytables can access data in the middle of a dataset without decompressing all the dataset, but just the interesting chunks (and you can decide the size of these chunks). You can see how the access times are in the range of milliseconds, irregardingly of the fact that the data is compressed or not. PyTables also does support others compressors apart from zlib, like bzip2 [2] or LZO [3], as well as compression pre-conditioners, like shuffle [4]. Look at the compression ratios for completely random data: In [84]:ls -l /tmp/test*.h5 -rw-r--r-- 1 faltet users 3675874 /tmp/test-bzip2-shuffle.h5 -rw-r--r-- 1 faltet users 3680615 /tmp/test-zlib-shuffle.h5 -rw-r--r-- 1 faltet users 3777749 /tmp/test-lzo-shuffle.h5 -rw-r--r-- 1 faltet users 8025024 /tmp/test-nocompr.h5 LZO is specially interesting if you want fast access to your data (it's very fast decompressing): In [82]:tlzo=timeit.Timer("carr[300:310:3,900:910:2]","import tables;f=tables.openFile('/tmp/test-lzo-shuffle.h5');carr=f.root.carr") In [83]:tlzo.repeat(3,100) Out[83]:[0.12332820892333984, 0.11892890930175781, 0.12009191513061523] So, retrieving compressed data using LZO is just 45% slower than if not using compression. You can see more exhaustive benchmarks and discussion in [5]. [1] http://www.pytables.org [2] http://www.bzip2.org [3] http://www.oberhumer.com/opensource/lzo [4] http://hdf.ncsa.uiuc.edu/HDF5/doc_resource/H5Shuffle_Perf.pdf [5] http://pytables.sourceforge.net/html-doc/usersguide6.html#section6.3 Uh, sorry by the blurb, but benchmarking is a lot of fun. -- >0,0< Francesc Altet http://www.carabos.com/ V V C?rabos Coop. V. Enjoy Data "-" From NadavH at VisionSense.com Wed Sep 14 04:14:20 2005 From: NadavH at VisionSense.com (Nadav Horesh) Date: Wed Sep 14 04:14:20 2005 Subject: [Numpy-discussion] A bug in numarray? 
Message-ID: <4328031D.90106@VisionSense.com> It seems that the tostring method fails on rank 0 arrays: a = N.array(-4) >>> a array(-4) >>> a.tostring() Traceback (most recent call last): File "", line 1, in -toplevel- a.tostring() File "/usr/local/lib/python2.4/site-packages/numarray/generic.py", line 746, in tostring self._strides, self._itemsize) MemoryError >>> N.__version__ '1.4.0' Nadav. From faltet at carabos.com Wed Sep 14 05:38:19 2005 From: faltet at carabos.com (Francesc Altet) Date: Wed Sep 14 05:38:19 2005 Subject: [Numpy-discussion] ANN: PyTables 1.1.1 released Message-ID: <200509141437.35218.faltet@carabos.com> ========================== Announcing PyTables 1.1.1 ========================== This is a maintenance release of PyTables. In it, several optimizations and bug fixes have been made. As some of the fixed bugs were quite important, it's strongly recommended for users to upgrade. Go to the PyTables web site for downloading the beast: http://pytables.sourceforge.net/ or keep reading for more info about the improvements and bugs fixed. Changes more in depth ===================== Improvements: - Optimized the opening of files with a large number of objects. Now, files with table objects open a 50% faster, and files with arrays open more than twice as fast (up to 2000 objects/s on a Pentium 4 at 2GHz). Hence, a file with a combination of both kinds of objects opens between a 50% and 100% faster than in 1.1. - Optimized the creation of ``NestedRecArray`` objects using ``NumArray`` objects as columns, so that filling a table with the ``Table.append()`` method achieves a performance similar to PyTables pre-1.1 releases. Bug fixes: - ``Table.readCoordinates()`` now converts the coords parameter into ``Int64`` indices automatically. - Fixed a bug that prevented appending to tables (though ``Table.append()``) using a list of ``NumArray`` objects. - ``Int32`` attributes are handled correctly in 64-bit platforms now. - Correction for accepting lists of numarrays as input for ``NestedRecArrays``. - Fixed a problem when creating rank 1 multi-dimensional string columns in ``Table`` objects. Closes SF bug #1269023. - Avoid errors when unpickling objects stored in attributes. See the section ``AttributeSet`` in the reference chapter of the User's Manual for more information. Closes SF bug #1254636. - Assignment for ``*Array`` slices has been improved in order to solve some issues with shapes. Closes SF bug #1288792. - The indexation properties were lost in case the table was closed before an index was created. Now, these properties are saved even in this case. Known bugs: - Classes inheriting from ``IsDescription`` subclasses do not inherit columns defined in the super-class. See SF bug #1207732 for more info. - Time datatypes are non-portable between big-endian and little-endian architectures. This is ultimately a consequence of a HDF5 limitation. See SF bug #1234709 for more info. Backward-incompatible changes: - None (that we are aware of). Important note for MacOSX users =============================== UCL compressor works badly on MacOSX platforms. Recent investigation seems to point to a bug in the development tools in MacOSX. Until the problem is isolated and eventually solved, UCL support will not be compiled by default on MacOSX platforms, even if the installer finds it in the system. However, if you still want to get UCL support on MacOSX, you can use the ``--force-ucl`` flag in ``setup.py``. 
Important note for Python 2.4 and Windows users =============================================== If you are willing to use PyTables with Python 2.4 in Windows platforms, you will need to get the HDF5 library compiled for MSVC 7.1, aka .NET 2003. It can be found at: ftp://ftp.ncsa.uiuc.edu/HDF/HDF5/current/bin/windows/5-164-win-net.ZIP Users of Python 2.3 on Windows will have to download the version of HDF5 compiled with MSVC 6.0 available in: ftp://ftp.ncsa.uiuc.edu/HDF/HDF5/current/bin/windows/5-164-win.ZIP What it is ========== **PyTables** is a package for managing hierarchical datasets and designed to efficiently cope with extremely large amounts of data (with support for full 64-bit file addressing). It features an object-oriented interface that, combined with C extensions for the performance-critical parts of the code, makes it a very easy-to-use tool for high performance data storage and retrieval. PyTables runs on top of the HDF5 library and numarray (Numeric is also supported) package for achieving maximum throughput and convenient use. Besides, PyTables I/O for table objects is buffered, implemented in C and carefully tuned so that you can reach much better performance with PyTables than with your own home-grown wrappings to the HDF5 library. PyTables sports indexing capabilities as well, allowing doing selections in tables exceeding one billion of rows in just seconds. Platforms ========= This version has been extensively checked on quite a few platforms, like Linux on Intel32 (Pentium), Win on Intel32 (Pentium), Linux on Intel64 (Itanium2), FreeBSD on AMD64 (Opteron), Linux on PowerPC and MacOSX on PowerPC. For other platforms, chances are that the code can be easily compiled and run without further problems. Please, contact us in case you are experiencing problems. Resources ========= Go to the PyTables web site for more details: http://pytables.sourceforge.net/ About the HDF5 library: http://hdf.ncsa.uiuc.edu/HDF5/ About numarray: http://www.stsci.edu/resources/software_hardware/numarray To know more about the company behind the PyTables development, see: http://www.carabos.com/ Acknowledgments =============== Thanks to various the users who provided feature improvements, patches, bug reports, support and suggestions. See THANKS file in distribution package for a (incomplete) list of contributors. Many thanks also to SourceForge who have helped to make and distribute this package!. And last but not least, a big thanks to THG (http://www.hdfgroup.org/) for sponsoring many of the new features recently introduced in PyTables. Share your experience ===================== Let us know of any bugs, suggestions, gripes, kudos, etc. you may have. ---- **Enjoy data!** -- The PyTables Team From oliphant at ee.byu.edu Wed Sep 14 12:34:31 2005 From: oliphant at ee.byu.edu (Travis Oliphant) Date: Wed Sep 14 12:34:31 2005 Subject: [Numpy-discussion] scipy.base (Numeric3) now on scipy.org svn server Message-ID: <43287B06.3000301@ee.byu.edu> Now that the new scipy.base (that can replace Numeric) is pretty much complete, I'm working on bringing the other parts of Numeric along so that scipy_core can replace Numeric (and numarray in functionality) for all users. I'm now using a branch of scipy_core to do this work. The old Numeric3 CVS directory on sourceforge will start to wither... The branch is at http://svn.scipy.org/svn/scipy_core/branches/newcore I'm thinking about how to structure the new scipy_core. 
Right now under the new scipy_core we have Hierarchy Imports as ==================== base/ --> scipy.base (namespace also available under scipy itself) distutils/ --> scipy.distutils test/ --> scipy.test weave/ --> weave We need to bring over basic linear algebra, statistics, and fft's from Numeric. So where do we put them and how do they import? Items to consider: * the basic functionality will be expanded / replaced by anybody who installs the entire scipy library. * are we going to get f2py to live in scipy_core (I say yes)... * I think scipy_core should install a working basic scipy (i.e. import scipy as Numeric) should work and be an effective replacement for import Numeric). Of course the functionality will be quite a bit less than if full scipy was installed, but basic functions should still work. With that in mind I propose the additions Hiearchy Imports as ========================== corelib/lapack_lite/ --> scipy.lapack_lite corelib/fftpack_lite/ --> scipy.fftpack_lite corelib/random_lite/ --> scipy.random_lite linalg/ --> scipy.linalg fftpack/ --> scipy.fftpack stats/ --> scipy.stats Users would typically use only the functions in scipy.linalg, scipy.fftpack, and scipy.stats. Notice that scipy also has modules names linalg, fftpack, and stats. These would add / replace functionality available in the basic core system. Comments, -Travis O. From pearu at scipy.org Wed Sep 14 12:52:29 2005 From: pearu at scipy.org (Pearu Peterson) Date: Wed Sep 14 12:52:29 2005 Subject: [Numpy-discussion] Re: [SciPy-dev] scipy.base (Numeric3) now on scipy.org svn server In-Reply-To: <43287B06.3000301@ee.byu.edu> References: <43287B06.3000301@ee.byu.edu> Message-ID: On Wed, 14 Sep 2005, Travis Oliphant wrote: > Now that the new scipy.base (that can replace Numeric) is pretty much > complete, I'm working on > bringing the other parts of Numeric along so that scipy_core can replace > Numeric (and numarray in functionality) for all users. > > I'm now using a branch of scipy_core to do this work. The old Numeric3 CVS > directory on sourceforge will start to wither... > > The branch is at > > http://svn.scipy.org/svn/scipy_core/branches/newcore > > I'm thinking about how to structure the new scipy_core. > Right now under the new scipy_core we have > > Hierarchy Imports as > ==================== > base/ --> scipy.base (namespace also available under > scipy itself) > distutils/ --> scipy.distutils > test/ --> scipy.test > weave/ --> weave > > We need to bring over basic linear algebra, statistics, and fft's from > Numeric. So where do we put them and how do they import? I have done some work in this direction but have not commited to repository yet because it needs more testing. Basically, (not commited) scipy.distutils has support to build Fortran or f2c'd versions of various libraries (currently I have tested it on blas) depending on whether Fortran compiler is available or not. > Items to consider: > > * the basic functionality will be expanded / replaced by anybody who > installs the entire scipy library. > > * are we going to get f2py to live in scipy_core (I say yes)... That would simplify many things, so I'd also say yes. On the other hand, I have not decided what to do with f2py2e CVS repository. Suggestions are welcome (though I understand that this might be my personal problem). > * I think scipy_core should install a working basic scipy (i.e. import scipy > as Numeric) should work and be an effective replacement for import Numeric). 
> Of course the functionality will be quite a bit less than if full scipy was > installed, but basic functions should still work. > > With that in mind I propose the additions > > Hiearchy Imports as > ========================== > corelib/lapack_lite/ --> scipy.lapack_lite corelib/fftpack_lite/ --> > scipy.fftpack_lite > corelib/random_lite/ --> scipy.random_lite > linalg/ --> scipy.linalg > fftpack/ --> scipy.fftpack > stats/ --> scipy.stats > > Users would typically use only the functions in scipy.linalg, scipy.fftpack, > and scipy.stats. > > Notice that scipy also has modules names linalg, fftpack, and stats. These > would add / replace functionality available in the basic core system. Since lapack_lite, fftpack_lite can be copied from Numeric then there's no rush for me to commit my scipy.distutils work, I guess. I'll do that when it is more or less stable and then we can gradually apply f2c to various scipy modules that currently have fortran sources which would allow compiling the whole scipy without having fortran compiler around. Pearu From oliphant at ee.byu.edu Thu Sep 15 12:15:39 2005 From: oliphant at ee.byu.edu (Travis Oliphant) Date: Thu Sep 15 12:15:39 2005 Subject: [Numpy-discussion] scipy core now builds from svn Message-ID: <4329C819.6070302@ee.byu.edu> This is to officially announce that the new replacement for Numeric (scipy_core) is available at SVN. Read permission is open to everyone so a simple checkout: svn co http://svn.scipy.org/svn/scipy_core/branches/newcore newcore should get you the distribution that should install with cd newcore python setup.py install I'm in the process of adding the linear algebra routines, fft, random, and dotblas from Numeric. This should be done by the conference. I will make a windows binary release for the SciPy conference, but not before then. There is a script in newcore/scipy/base/convertcode.py that will take code written for Numeric (or numerix) and convert it to code for the new scipy base object. This code is not foolproof, but it takes care of the minor incompatibilities (a few search and replaces are done). The compatibility issues are documented (mostly in the typecode characters and a few method name changes). The one bigger incompatibility is that a.flat does something a little different (a 1-d iterator object). The convert code script changes uses of a.flat that are not indexing or set attribute related to a.ravel() C-code should build for the new system with a change of #include Numeric/arrayobject.h to #include scipy/arrayobject.h --- though you may want to enhance your code to take advantage of the new features (and more extensive C-API). I also still need to add the following ufuncs: isnan, isfinite, signbit, isinf, frexp, and ldexp. This should not take too long. -Travis O. From shuntim.luk at polyu.edu.hk Thu Sep 15 22:50:15 2005 From: shuntim.luk at polyu.edu.hk (LUK ShunTim) Date: Thu Sep 15 22:50:15 2005 Subject: [Numpy-discussion] scipy core now builds from svn In-Reply-To: <4329C819.6070302@ee.byu.edu> References: <4329C819.6070302@ee.byu.edu> Message-ID: <432A5CF4.9050909@polyu.edu.hk> Travis Oliphant wrote: > > This is to officially announce that the new replacement for Numeric > (scipy_core) is available at SVN. Read permission is open to everyone > so a simple checkout: > > svn co http://svn.scipy.org/svn/scipy_core/branches/newcore newcore > should get you the distribution that should install with > I got this time out error when I tried, several times. 
:-( svn: REPORT request failed on '/svn/scipy_core/!svn/vcc/default' svn: REPORT of '/svn/scipy_core/!svn/vcc/default': Could not read status line: Connection timed out (http://svn.scipy.org) Please see if this is an server configuration issue. Regards, ST -- From Fernando.Perez at colorado.edu Fri Sep 16 08:46:43 2005 From: Fernando.Perez at colorado.edu (Fernando Perez) Date: Fri Sep 16 08:46:43 2005 Subject: [Numpy-discussion] scipy core now builds from svn In-Reply-To: <432A5CF4.9050909@polyu.edu.hk> References: <4329C819.6070302@ee.byu.edu> <432A5CF4.9050909@polyu.edu.hk> Message-ID: <432AE87C.6030406@colorado.edu> LUK ShunTim wrote: >> I got this time out error when I tried, several times. :-( >> >> svn: REPORT request failed on '/svn/scipy_core/!svn/vcc/default' >> svn: REPORT of '/svn/scipy_core/!svn/vcc/default': Could not read status >> line: Connection timed out (http://svn.scipy.org) >> >> Please see if this is an server configuration issue. No, it's an issue with your setup, not something on scipy's side. You are behind a proxy blocking REPORT requests. See this for details: http://www.sipfoundry.org/tools/svn-tips.html which says: What does 'REPORT request failed' mean? When I try to check out a subversion repository > svn co http://scm.sipfoundry.org/rep/project/main project I get an error like: svn: REPORT request failed on '/rep/project/!svn/vcc/default' svn: REPORT of '/rep/project/!svn/vcc/default': 400 Bad Request (http://scm.sipfoundry.org) You are behind a web proxy that is not passing the WebDAV methods that subversion uses. You can work around the problem by using SSL to hide what you're doing from the proxy: > svn co https://scm.sipfoundry.org/rep/project/main project Cheers, f From Fernando.Perez at colorado.edu Fri Sep 16 08:50:02 2005 From: Fernando.Perez at colorado.edu (Fernando Perez) Date: Fri Sep 16 08:50:02 2005 Subject: [SciPy-dev] Re: [Numpy-discussion] scipy core now builds from svn In-Reply-To: <432AE87C.6030406@colorado.edu> References: <4329C819.6070302@ee.byu.edu> <432A5CF4.9050909@polyu.edu.hk> <432AE87C.6030406@colorado.edu> Message-ID: <432AE966.90007@colorado.edu> Fernando Perez wrote: > You are behind a web proxy that is not passing the WebDAV methods that > subversion uses. You can work around the problem by using SSL to hide what > you're doing from the proxy: > > > svn co https://scm.sipfoundry.org/rep/project/main project I forgot to add that the https method will NOT work with scipy, which doesn't provide svn/ssl support. You need to fix your proxy config. Cheers, f From shuntim.luk at polyu.edu.hk Fri Sep 16 22:00:04 2005 From: shuntim.luk at polyu.edu.hk (LUK ShunTim) Date: Fri Sep 16 22:00:04 2005 Subject: [Numpy-discussion] scipy core now builds from svn In-Reply-To: <432AE87C.6030406@colorado.edu> References: <4329C819.6070302@ee.byu.edu> <432A5CF4.9050909@polyu.edu.hk> <432AE87C.6030406@colorado.edu> Message-ID: <432BA2B2.7090307@polyu.edu.hk> Fernando Perez wrote: > LUK ShunTim wrote: > > >>> I got this time out error when I tried, several times. :-( >>> >>> svn: REPORT request failed on '/svn/scipy_core/!svn/vcc/default' >>> svn: REPORT of '/svn/scipy_core/!svn/vcc/default': Could not read status >>> line: Connection timed out (http://svn.scipy.org) >>> >>> Please see if this is an server configuration issue. > > > No, it's an issue with your setup, not something on scipy's side. > > You are behind a proxy blocking REPORT requests. 
See this for details: > > http://www.sipfoundry.org/tools/svn-tips.html > > which says: > > What does 'REPORT request failed' mean? > > When I try to check out a subversion repository > > > svn co http://scm.sipfoundry.org/rep/project/main project > > I get an error like: > > svn: REPORT request failed on '/rep/project/!svn/vcc/default' > svn: REPORT of '/rep/project/!svn/vcc/default': 400 Bad Request > (http://scm.sipfoundry.org) > > You are behind a web proxy that is not passing the WebDAV methods that > subversion uses. You can work around the problem by using SSL to hide what > you're doing from the proxy: > > > svn co https://scm.sipfoundry.org/rep/project/main project > > > Cheers, > > f Thanks very much. However no luck. :-( I now got this error > svn: PROPFIND of '/svn/scipy_core/branches/newcore': 405 Method Not Allowed (https://svn.scipy.org) with the suggested workaround. I guess I'll learn a bit about svn and may be have to take it up with our system admin. In the mean time, is CVS still available? BTW, perhaps it might help people like me who are not familiar with svn by putting this tip somethere in the download page. Thanks again, ST -- From Fernando.Perez at colorado.edu Fri Sep 16 22:12:07 2005 From: Fernando.Perez at colorado.edu (Fernando Perez) Date: Fri Sep 16 22:12:07 2005 Subject: [Numpy-discussion] scipy core now builds from svn In-Reply-To: <432BA2B2.7090307@polyu.edu.hk> References: <4329C819.6070302@ee.byu.edu> <432A5CF4.9050909@polyu.edu.hk> <432AE87C.6030406@colorado.edu> <432BA2B2.7090307@polyu.edu.hk> Message-ID: <432BA519.3040705@colorado.edu> LUK ShunTim wrote: > Fernando Perez wrote: > >>LUK ShunTim wrote: > Thanks very much. However no luck. :-( I now got this error > > >>svn: PROPFIND of '/svn/scipy_core/branches/newcore': 405 Method Not Allowed (https://svn.scipy.org) That's what I said in the message immediately afterward, because I hit send too soon: that the https:// approach would NOT work with scipy. You need to have your proxy fixed, I'm afraid. Cheers, f From shashank_84 at rediffmail.com Sat Sep 17 23:34:04 2005 From: shashank_84 at rediffmail.com (shashank karnik) Date: Sat Sep 17 23:34:04 2005 Subject: [Numpy-discussion] How to install Numeric on Zope? Message-ID: <20050918063350.3616.qmail@webmail7.rediffmail.com> Hello everyone Can anyone help me to install Numeric extension or package for my Zope server 2.4 version running on Windows Xp? You see i am a beginner at python and zope both and need to use a product in Zope(GNOWSYS) which requires Python -Numeric and Python-XMLBase packages.. I downloaded the Numeric py package for Windows from this link http://prdownloads.sourceforge.net/numpy/Numeric-23.8.win32-py2.4.exe?download However...i cant figure out how to install it so that my Zope server recognises it. The installer currently just installs the package so that it is recognised by the Python interpreter...but the Zope server is stored in Program Files on my machine..how do i make it understand that the Numeric package has been installed.. I tried putting the numeric folder in this path C:\Program Files\Zope\lib\python\Products But it doesnt work Please help me out If think that this question should be asked on a Zope forum...please let me know! Thank you -------------- next part -------------- An HTML attachment was scrubbed... URL: From steve at shrogers.com Sun Sep 18 10:50:11 2005 From: steve at shrogers.com (Steven H. Rogers) Date: Sun Sep 18 10:50:11 2005 Subject: [Numpy-discussion] How to install Numeric on Zope? 
In-Reply-To: <20050918063350.3616.qmail@webmail7.rediffmail.com> References: <20050918063350.3616.qmail@webmail7.rediffmail.com> Message-ID: <432DA8A2.4040203@shrogers.com> Combining Numeric and Zope is a neat idea and I've speculated about how to do it, but haven't got beyond that. I believe that you'd have to write a new Zope product that called the Numeric package. You might get a more informed response on the Zope, Plone, or SciPy mailing lists. Regards, Steve shashank karnik wrote: > > Hello everyone > > Can anyone help me to install Numeric extension or package for my Zope > server 2.4 version running on Windows Xp? > > You see i am a beginner at python and zope both and need to use a > product in Zope(GNOWSYS) which requires Python -Numeric and > Python-XMLBase packages.. > > I downloaded the Numeric py package for Windows from this link > > http://prdownloads.sourceforge.net/numpy/Numeric-23.8.win32-py2.4.exe?download > > However...i cant figure out how to install it so that my Zope server > recognises it. > > The installer currently just installs the package so that it is > recognised by the Python interpreter...but the Zope server is stored in > Program Files on my machine..how do i make it understand that the > Numeric package has been installed.. > > I tried putting the numeric folder in this path > C:\Program Files\Zope\lib\python\Products > > But it doesnt work > > Please help me out > > If think that this question should be asked on a Zope forum...please > let me know! > > Thank you > > > > > -- Steven H. Rogers, Ph.D., steve at shrogers.com Weblog: http://shrogers.com/weblog "He who refuses to do arithmetic is doomed to talk nonsense." -- John McCarthy From humufr at yahoo.fr Tue Sep 20 08:45:01 2005 From: humufr at yahoo.fr (Humufr) Date: Tue Sep 20 08:45:01 2005 Subject: [Numpy-discussion] numarray speed problem Message-ID: <43302E32.10708@yahoo.fr> Hello, I have a problem with numarray and especially the function numarray.all. I want to compare two files to do this I read the files with a function readcol2 who can put them in a list or numarray format (string or numerical). I'm doing a comparaison on each line of the file. If I'm using the array format and the numarray.all function, that take forever to do the comparaison for 2 big files. If I'm using python list object, it's very fast. I think there are some problem or at least some improvement to do. If I understand correctly the goal of numarray, it has been write to speed up some part of python but here it slow down a lot. An very simple sample to see the effect is at the bottom of this mail. Thanks for numarray, I hope to not bother you. My comments are more to improve numarray than other things. I have been able to find the problem so no I can avoied it. H. def readcol(fname,comments='%',columns=None,delimiter=None,dep=0,arraytype='list'): """ Load ASCII data from fname into an array and return the array. The data must be regular, same number of values in every row fname can be a filename or a file handle. Input: - Fname : the name of the file to read Optionnal input: - comments : a string to indicate the charactor to delimit the domments. the default is the matlab character '%'. - columns : list or tuple ho contains the columns to use. 
- delimiter : a string to delimit the columns - dep : an integer to indicate from which line you want to begin to use the file (useful to avoid the descriptions lines) - arraytype : a string to indicate which kind of array you want ot have: numeric array (numeric) or character array (numstring) or list (list). By default it's the list mode used matfile data is not currently supported, but see Nigel Wade's matfile ftp://ion.le.ac.uk/matfile/matfile.tar.gz Example usage: x,y = transpose(readcol('test.dat')) # data in two columns X = readcol('test.dat') # a matrix of data x = readcol('test.dat') # a single column of data x = readcol('test.dat,'#') # the character use like a comment delimiter is '#' initial function from pylab (J.Hunter). Change by myself for my specific need """ from numarray import array,transpose fh = file(fname) X = [] numCols = None nline = 0 if columns is None: for line in fh: nline += 1 if dep is not None and nline <= dep: continue line = line[:line.find(comments)].strip() if not len(line): continue if arraytype=='numeric': row = [float(val) for val in line.split(delimiter)] else: row = [val.strip() for val in line.split(delimiter)] thisLen = len(row) if numCols is not None and thisLen != numCols: raise ValueError('All rows must have the same number of columns') X.append(row) else: for line in fh: nline +=1 if dep is not None and nline <= dep: continue line = line[:line.find(comments)].strip() if not len(line): continue row = line.split(delimiter) if arraytype=='numeric': row = [float(row[i-1]) for i in columns] elif arraytype=='numstring': row = [row[i-1].strip() for i in columns] else: row = [row[i-1].strip() for i in columns] thisLen = len(row) if numCols is not None and thisLen != numCols: raise ValueError('All rows must have the same number of columns') X.append(row) if arraytype=='numeric': X = array(X) r,c = X.shape if r==1 or c==1: X.shape = max([r,c]), elif arraytype == 'numstring': import numarray.strings # pb if numeric+pylab X = numarray.strings.array(X) r,c = X.shape if r==1 or c==1: X.shape = max([r,c]), return X ------------------------------------------- files_test_creation.py ------------------------------------------- f1 = file('test1.dat','w') for i in range(10000): f1.write(str(i)+' '+str(i+1)+' '+str(i+2)+'\n') f1.close() f2 = file('test2.dat','w') for i in range(10000): f2.write(str(i)+' '+str(i+1)+' '+str(i+2)+'\n') f2.close() ------------------------------------------- numarray_pb_sample.py ------------------------------------------- import numarray data1 = readcol2.readcol('test1.dat',columns=[1,2,3],comments='#',delimiter=' ',dep=1,arraytype='numstring') data2 = readcol2.readcol('test2.dat',columns=[1,2,3],comments='#',delimiter=' ',dep=1,arraytype='numstring') #or in non string array form (same result) ## data1 = readcol2.readcol('test1.dat',columns=[1,2,3],comments='#',delimiter=' ',dep=1,arraytype='numeric') ## data2 = readcol2.readcol('test2.dat',columns=[1,2,3],comments='#',delimiter=' ',dep=1,arraytype='numeric') for a_i in range(data1.shape[0]): for b_i in range(data2.shape[0]): if numarray.all(data1[a_i,:] == data2[b_i,:]): print a_i,b_i ------------------------------------------- python_list_sample.py ------------------------------------------- data1 = readcol2.readcol('test1.dat',columns=[1,2,3],comments='#',delimiter=' ',dep=1,arraytype='list') data2 = readcol2.readcol('test2.dat',columns=[1,2,3],comments='#',delimiter=' ',dep=1,arraytype='list') for a_i in range(len(data1)): for b_i in range(len(data2)): if data1[a_i] == data2[b_i]: 
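# note: '==' on two plain Python lists compares them element by element
# and returns a single bool, so each row check here is one cheap built-in call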
print a_i,b_i From jmiller at stsci.edu Tue Sep 20 10:17:18 2005 From: jmiller at stsci.edu (Todd Miller) Date: Tue Sep 20 10:17:18 2005 Subject: [Numpy-discussion] numarray speed problem In-Reply-To: <43302E32.10708@yahoo.fr> References: <43302E32.10708@yahoo.fr> Message-ID: <433043CD.1030700@stsci.edu> Hi H, I did some work on this problem based on your previous post but apparently my response never made it to numpy-discussion. In a nutshell, I made numarray 12x faster for a benchmark like your numarray_pb_sample.py by speeding up string comparisons and improving all(). The changes are in numarray CVS but there is no Source Forge release that contains them yet. numarray-1.4.0 is still several weeks away. If you want to try CVS from UNIX/Linux just do: % cvs -d:pserver:anonymous at cvs.sourceforge.net:/cvsroot/numpy login % cvs -z3 -d:pserver:anonymous at cvs.sourceforge.net:/cvsroot/numpy co -P numarray Regards, Todd Humufr wrote: > Hello, > > I have a problem with numarray and especially the function numarray.all. > > I want to compare two files to do this I read the files with a > function readcol2 who can put them in a list or numarray format > (string or numerical). > > I'm doing a comparaison on each line of the file. > If I'm using the array format and the numarray.all function, that take > forever to do the comparaison for 2 big files. If I'm using python > list object, it's very fast. I think there are some problem or at > least some improvement to do. If I understand correctly the goal of > numarray, it has been write to speed up some part of python but here > it slow down a lot. > > An very simple sample to see the effect is at the bottom of this mail. > > Thanks for numarray, I hope to not bother you. My comments are more to > improve numarray than other things. I have been able to find the > problem so no I can avoied it. > > H. > > > > > def > readcol(fname,comments='%',columns=None,delimiter=None,dep=0,arraytype='list'): > > """ > Load ASCII data from fname into an array and return the array. > The data must be regular, same number of values in every row > fname can be a filename or a file handle. > > Input: > > - Fname : the name of the file to read > > Optionnal input: > - comments : a string to indicate the charactor to delimit the > domments. > the default is the matlab character '%'. > - columns : list or tuple ho contains the columns to use. > - delimiter : a string to delimit the columns > > - dep : an integer to indicate from which line you want to begin > > to use the file (useful to avoid the descriptions lines) > > - arraytype : a string to indicate which kind of array you want ot > have: numeric array (numeric) or character array > (numstring) or list (list). By default it's the > > list mode used > > matfile data is not currently supported, but see > Nigel Wade's matfile ftp://ion.le.ac.uk/matfile/matfile.tar.gz > > Example usage: > > x,y = transpose(readcol('test.dat')) # data in two columns > > X = readcol('test.dat') # a matrix of data > > x = readcol('test.dat') # a single column of data > > x = readcol('test.dat,'#') # the character use like a comment > delimiter is '#' > > initial function from pylab (J.Hunter). 
Change by myself for my > specific need > > """ > from numarray import array,transpose > > fh = file(fname) > > X = [] > numCols = None > nline = 0 > if columns is None: > for line in fh: > nline += 1 > if dep is not None and nline <= dep: continue > line = line[:line.find(comments)].strip() > if not len(line): continue > if arraytype=='numeric': > row = [float(val) for val in line.split(delimiter)] > else: > row = [val.strip() for val in line.split(delimiter)] > thisLen = len(row) > if numCols is not None and thisLen != numCols: > raise ValueError('All rows must have the same number of > columns') > X.append(row) > else: > for line in fh: > nline +=1 > if dep is not None and nline <= dep: continue > line = line[:line.find(comments)].strip() > if not len(line): continue > row = line.split(delimiter) > if arraytype=='numeric': > row = [float(row[i-1]) for i in columns] > elif arraytype=='numstring': > row = [row[i-1].strip() for i in columns] > else: > row = [row[i-1].strip() for i in columns] > thisLen = len(row) > if numCols is not None and thisLen != numCols: > raise ValueError('All rows must have the same number of > columns') > X.append(row) > > if arraytype=='numeric': > X = array(X) > r,c = X.shape > if r==1 or c==1: > X.shape = max([r,c]), > elif arraytype == 'numstring': > import numarray.strings # pb if numeric+pylab > X = numarray.strings.array(X) > r,c = X.shape > if r==1 or c==1: > X.shape = max([r,c]), > return X > > > ------------------------------------------- > files_test_creation.py > > ------------------------------------------- > > f1 = file('test1.dat','w') > for i in range(10000): > f1.write(str(i)+' '+str(i+1)+' '+str(i+2)+'\n') > f1.close() > > > f2 = file('test2.dat','w') > for i in range(10000): > f2.write(str(i)+' '+str(i+1)+' '+str(i+2)+'\n') > f2.close() > > ------------------------------------------- > numarray_pb_sample.py > > ------------------------------------------- > > import numarray > data1 = > readcol2.readcol('test1.dat',columns=[1,2,3],comments='#',delimiter=' > ',dep=1,arraytype='numstring') > data2 = > readcol2.readcol('test2.dat',columns=[1,2,3],comments='#',delimiter=' > ',dep=1,arraytype='numstring') > > #or in non string array form (same result) > ## data1 = > readcol2.readcol('test1.dat',columns=[1,2,3],comments='#',delimiter=' > ',dep=1,arraytype='numeric') > ## data2 = > readcol2.readcol('test2.dat',columns=[1,2,3],comments='#',delimiter=' > ',dep=1,arraytype='numeric') > > for a_i in range(data1.shape[0]): > for b_i in range(data2.shape[0]): > if numarray.all(data1[a_i,:] == data2[b_i,:]): > print a_i,b_i > > ------------------------------------------- > python_list_sample.py > > ------------------------------------------- > > data1 = > readcol2.readcol('test1.dat',columns=[1,2,3],comments='#',delimiter=' > ',dep=1,arraytype='list') > data2 = > readcol2.readcol('test2.dat',columns=[1,2,3],comments='#',delimiter=' > ',dep=1,arraytype='list') > > for a_i in range(len(data1)): > for b_i in range(len(data2)): > if data1[a_i] == data2[b_i]: > print a_i,b_i > > > > > > > ------------------------------------------------------- > SF.Net email is sponsored by: > Tame your development challenges with Apache's Geronimo App Server. > Download it for free - -and be entered to win a 42" plasma tv or your > very > own Sony(tm)PSP. 
Click here to play: http://sourceforge.net/geronimo.php > _______________________________________________ > Numpy-discussion mailing list > Numpy-discussion at lists.sourceforge.net > https://lists.sourceforge.net/lists/listinfo/numpy-discussion From humufr at yahoo.fr Tue Sep 20 11:40:15 2005 From: humufr at yahoo.fr (Humufr) Date: Tue Sep 20 11:40:15 2005 Subject: [Numpy-discussion] numarray speed problem In-Reply-To: <433043CD.1030700@stsci.edu> References: <43302E32.10708@yahoo.fr> <433043CD.1030700@stsci.edu> Message-ID: <4330576A.8080201@yahoo.fr> Thank you very much. I saw no answer before. It's why I reduce a lot the sample :) I'll try it now Todd Miller wrote: > Hi H, > > I did some work on this problem based on your previous post but > apparently my response never made it to numpy-discussion. In a > nutshell, I made numarray 12x faster for a benchmark like your > numarray_pb_sample.py by speeding up string comparisons and improving > all(). The changes are in numarray CVS but there is no Source Forge > release that contains them yet. numarray-1.4.0 is still several > weeks away. If you want to try CVS from UNIX/Linux just do: > > % cvs -d:pserver:anonymous at cvs.sourceforge.net:/cvsroot/numpy login > % cvs -z3 -d:pserver:anonymous at cvs.sourceforge.net:/cvsroot/numpy co > -P numarray > > Regards, > Todd > > Humufr wrote: > >> Hello, >> >> I have a problem with numarray and especially the function numarray.all. >> >> I want to compare two files to do this I read the files with a >> function readcol2 who can put them in a list or numarray format >> (string or numerical). >> >> I'm doing a comparaison on each line of the file. >> If I'm using the array format and the numarray.all function, that >> take forever to do the comparaison for 2 big files. If I'm using >> python list object, it's very fast. I think there are some problem or >> at least some improvement to do. If I understand correctly the goal >> of numarray, it has been write to speed up some part of python but >> here it slow down a lot. >> >> An very simple sample to see the effect is at the bottom of this mail. >> >> Thanks for numarray, I hope to not bother you. My comments are more >> to improve numarray than other things. I have been able to find the >> problem so no I can avoied it. >> >> H. >> >> >> >> >> def >> readcol(fname,comments='%',columns=None,delimiter=None,dep=0,arraytype='list'): >> >> """ >> Load ASCII data from fname into an array and return the array. >> The data must be regular, same number of values in every row >> fname can be a filename or a file handle. >> >> Input: >> >> - Fname : the name of the file to read >> >> Optionnal input: >> - comments : a string to indicate the charactor to delimit the >> domments. >> the default is the matlab character '%'. >> - columns : list or tuple ho contains the columns to use. >> - delimiter : a string to delimit the columns >> >> - dep : an integer to indicate from which line you want to begin >> >> to use the file (useful to avoid the descriptions lines) >> >> - arraytype : a string to indicate which kind of array you want ot >> have: numeric array (numeric) or character array >> (numstring) or list (list). 
By default it's the >> >> list mode used >> matfile data is not currently supported, but see >> Nigel Wade's matfile ftp://ion.le.ac.uk/matfile/matfile.tar.gz >> >> Example usage: >> >> x,y = transpose(readcol('test.dat')) # data in two columns >> >> X = readcol('test.dat') # a matrix of data >> >> x = readcol('test.dat') # a single column of data >> >> x = readcol('test.dat,'#') # the character use like a comment >> delimiter is '#' >> >> initial function from pylab (J.Hunter). Change by myself for my >> specific need >> >> """ >> from numarray import array,transpose >> >> fh = file(fname) >> >> X = [] >> numCols = None >> nline = 0 >> if columns is None: >> for line in fh: >> nline += 1 >> if dep is not None and nline <= dep: continue >> line = line[:line.find(comments)].strip() >> if not len(line): continue >> if arraytype=='numeric': >> row = [float(val) for val in line.split(delimiter)] >> else: >> row = [val.strip() for val in line.split(delimiter)] >> thisLen = len(row) >> if numCols is not None and thisLen != numCols: >> raise ValueError('All rows must have the same number >> of columns') >> X.append(row) >> else: >> for line in fh: >> nline +=1 >> if dep is not None and nline <= dep: continue >> line = line[:line.find(comments)].strip() >> if not len(line): continue >> row = line.split(delimiter) >> if arraytype=='numeric': >> row = [float(row[i-1]) for i in columns] >> elif arraytype=='numstring': >> row = [row[i-1].strip() for i in columns] >> else: >> row = [row[i-1].strip() for i in columns] >> thisLen = len(row) >> if numCols is not None and thisLen != numCols: >> raise ValueError('All rows must have the same number >> of columns') >> X.append(row) >> >> if arraytype=='numeric': >> X = array(X) >> r,c = X.shape >> if r==1 or c==1: >> X.shape = max([r,c]), >> elif arraytype == 'numstring': >> import numarray.strings # pb if numeric+pylab >> X = numarray.strings.array(X) >> r,c = X.shape >> if r==1 or c==1: >> X.shape = max([r,c]), >> return X >> >> >> ------------------------------------------- >> files_test_creation.py >> >> ------------------------------------------- >> >> f1 = file('test1.dat','w') >> for i in range(10000): >> f1.write(str(i)+' '+str(i+1)+' '+str(i+2)+'\n') >> f1.close() >> >> >> f2 = file('test2.dat','w') >> for i in range(10000): >> f2.write(str(i)+' '+str(i+1)+' '+str(i+2)+'\n') >> f2.close() >> >> ------------------------------------------- >> numarray_pb_sample.py >> >> ------------------------------------------- >> >> import numarray >> data1 = >> readcol2.readcol('test1.dat',columns=[1,2,3],comments='#',delimiter=' >> ',dep=1,arraytype='numstring') >> data2 = >> readcol2.readcol('test2.dat',columns=[1,2,3],comments='#',delimiter=' >> ',dep=1,arraytype='numstring') >> >> #or in non string array form (same result) >> ## data1 = >> readcol2.readcol('test1.dat',columns=[1,2,3],comments='#',delimiter=' >> ',dep=1,arraytype='numeric') >> ## data2 = >> readcol2.readcol('test2.dat',columns=[1,2,3],comments='#',delimiter=' >> ',dep=1,arraytype='numeric') >> >> for a_i in range(data1.shape[0]): >> for b_i in range(data2.shape[0]): >> if numarray.all(data1[a_i,:] == data2[b_i,:]): >> print a_i,b_i >> >> ------------------------------------------- >> python_list_sample.py >> >> ------------------------------------------- >> >> data1 = >> readcol2.readcol('test1.dat',columns=[1,2,3],comments='#',delimiter=' >> ',dep=1,arraytype='list') >> data2 = >> readcol2.readcol('test2.dat',columns=[1,2,3],comments='#',delimiter=' >> ',dep=1,arraytype='list') >> >> for a_i in 
range(len(data1)): >> for b_i in range(len(data2)): >> if data1[a_i] == data2[b_i]: >> print a_i,b_i >> >> >> >> >> >> >> ------------------------------------------------------- >> SF.Net email is sponsored by: >> Tame your development challenges with Apache's Geronimo App Server. >> Download it for free - -and be entered to win a 42" plasma tv or your >> very >> own Sony(tm)PSP. Click here to play: >> http://sourceforge.net/geronimo.php >> _______________________________________________ >> Numpy-discussion mailing list >> Numpy-discussion at lists.sourceforge.net >> https://lists.sourceforge.net/lists/listinfo/numpy-discussion > > > > From ariciputi at pito.com Thu Sep 22 08:47:01 2005 From: ariciputi at pito.com (Andrea Riciputi) Date: Thu Sep 22 08:47:01 2005 Subject: [Numpy-discussion] Piecewise functions. Message-ID: Hi all, this is probably an already discussed problem, but I've not been able to find a solution even after googling a lot. I've a piecewise defined function: / | f1(x) if x <= a f(x) = | | f2(x) if x > a \ where f1 and f2 are not defined outside the above range. How can I define such a function in Python in order to apply (map) it to an array ranging from values smaller to values bigger than a? Thanks, Andrea. From guim at guim.org Thu Sep 22 08:50:08 2005 From: guim at guim.org (Alexandre Guimond) Date: Thu Sep 22 08:50:08 2005 Subject: [Numpy-discussion] numarray bug in gaussian_filter1d? Message-ID: <67d31e4205092208414dbb0646@mail.gmail.com> Hi. I think I found a bug in gaussian_filter1d. roughly the function creates a 1d kernel and then calls correlate1d. The problem I see is that the kernel should be mirrored prior to calling correlate1d since we want to convolve, not correlate. as a result: >>> import numarray.nd_image >>> numarray.nd_image.gaussian_filter1d( ( 0.0, 1.0, 0.0 ), 1, order = 1, axis = 0, mode = 'constant' ) array([-0.24197145, 0. , 0.24197145]) >>> when it should be [ 0.24197145, 0. , -0.24197145]) (notice the change in the sign of coefficients) Or did I get that wrong? alex. -------------- next part -------------- An HTML attachment was scrubbed... URL: From jmiller at stsci.edu Thu Sep 22 09:43:34 2005 From: jmiller at stsci.edu (Todd Miller) Date: Thu Sep 22 09:43:34 2005 Subject: [Numpy-discussion] A bug in numarray? In-Reply-To: <4328031D.90106@VisionSense.com> References: <4328031D.90106@VisionSense.com> Message-ID: <4332DEB1.5090604@stsci.edu> Thanks Nadav. This is fixed in CVS. Regards, Todd Nadav Horesh wrote: >It seems that the tostring method fails on rank 0 arrays: > >a = N.array(-4) > > >>>>a >>>> >>>> >array(-4) > > >>>>a.tostring() >>>> >>>> > >Traceback (most recent call last): > File "", line 1, in -toplevel- > a.tostring() > File "/usr/local/lib/python2.4/site-packages/numarray/generic.py", >line 746, in tostring > self._strides, self._itemsize) >MemoryError > > >>>>N.__version__ >>>> >>>> >'1.4.0' > > > Nadav. > > > From aisaac at american.edu Thu Sep 22 12:31:36 2005 From: aisaac at american.edu (Alan G Isaac) Date: Thu Sep 22 12:31:36 2005 Subject: [Numpy-discussion] Piecewise functions. In-Reply-To: References: Message-ID: On Thu, 22 Sep 2005, Andrea Riciputi apparently wrote: > this is probably an already discussed problem, but I've not been able > to find a solution even after googling a lot. > I've a piecewise defined function: > / > | f1(x) if x <= a > f(x) = | > | f2(x) if x > a > \ > where f1 and f2 are not defined outside the above range. 
How can I > define such a function in Python in order to apply (map) it to an > array ranging from values smaller to values bigger than a? I suspect I do not understand your question. But perhaps you want this: def f(x): return x<=a and f1(x) or f2(x) fwiw, Alan From verveer at embl-heidelberg.de Thu Sep 22 13:15:00 2005 From: verveer at embl-heidelberg.de (Peter Verveer) Date: Thu Sep 22 13:15:00 2005 Subject: [Numpy-discussion] numarray bug in gaussian_filter1d? In-Reply-To: <67d31e4205092208414dbb0646@mail.gmail.com> References: <67d31e4205092208414dbb0646@mail.gmail.com> Message-ID: <9B004606-EB90-4F4F-88E5-877117DA079E@embl-heidelberg.de> I think you are correct: The result of a gaussian filter of order one should be a derivative operator. Thus the response to an array created with arange() should be close to one (barring edge effects). Currently we have: >>> from numarray import * >>> from numarray.nd_image import gaussian_filter >>> a = arange(10, type = Float64) >>> gaussian_filter(a, 1.0, order = 1) array([-0.36378461, -0.84938238, -0.98502645, -0.99939268, -0.999928 , -0.999928 , -0.99939268, -0.98502645, -0.84938238, -0.36378461]) So the sign is wrong, that can be fixed by mirroring the gaussian kernels. I have done so in CVS. The same holds true for the Sobel and Prewitt filters, they were also defined 'incorrectly' according to this criterion. I also changed those. That may be a bit more controversial since a quick look on the web seemed to indicate that often it is defined the other around. If anybody thinks my changes are no good, please let me know. Cheers, Peter On 22 Sep, 2005, at 17:41, Alexandre Guimond wrote: > Hi. > > I think I found a bug in gaussian_filter1d. > > roughly the function creates a 1d kernel and then calls > correlate1d. The problem I see is that the kernel should be > mirrored prior to calling correlate1d since we want to convolve, > not correlate. > > as a result: > > >>> import numarray.nd_image > >>> numarray.nd_image.gaussian_filter1d( ( 0.0, 1.0, 0.0 ), 1, > order = 1, axis = 0, mode = 'constant' ) > array([-0.24197145, 0. , 0.24197145]) > >>> > > when it should be [ 0.24197145, 0. , -0.24197145]) (notice > the change in the sign of coefficients) > > Or did I get that wrong? > > alex. From ariciputi at pito.com Thu Sep 22 14:26:48 2005 From: ariciputi at pito.com (Andrea Riciputi) Date: Thu Sep 22 14:26:48 2005 Subject: [Numpy-discussion] Piecewise functions. In-Reply-To: References: Message-ID: <6F76B128-5B9B-4E0D-88D9-4F795A5B5B6E@pito.com> I've already tried something like this, but it doesn't work since f1 and f2 return not valid values outside the range over they are defined. Perhaps an example could clarify; suppose that f1(x) = 1./ sqrt(1 - x**2) for x <= 1, and f2(x) = 1./sqrt(x**2 - 1) for x > 1. Your suggestion, as the other I've tried, fails with a "OverflowError: math range error". Any helps? Andrea. On Sep 22, 2005, at 9:33 PM, Alan G Isaac wrote: > On Thu, 22 Sep 2005, Andrea Riciputi apparently wrote: > >> this is probably an already discussed problem, but I've not been able >> to find a solution even after googling a lot. >> > > >> I've a piecewise defined function: >> > > >> / >> | f1(x) if x <= a >> f(x) = | >> | f2(x) if x > a >> \ >> > > >> where f1 and f2 are not defined outside the above range. How can I >> define such a function in Python in order to apply (map) it to an >> array ranging from values smaller to values bigger than a? >> > > I suspect I do not understand your question. 
> But perhaps you want this: > > def f(x): > return x<=a and f1(x) or f2(x) > > fwiw, > Alan > > > > > > ------------------------------------------------------- > SF.Net email is sponsored by: > Tame your development challenges with Apache's Geronimo App Server. > Download it for free - -and be entered to win a 42" plasma tv or > your very > own Sony(tm)PSP. Click here to play: http://sourceforge.net/ > geronimo.php > _______________________________________________ > Numpy-discussion mailing list > Numpy-discussion at lists.sourceforge.net > https://lists.sourceforge.net/lists/listinfo/numpy-discussion > From aisaac at american.edu Thu Sep 22 15:18:10 2005 From: aisaac at american.edu (Alan G Isaac) Date: Thu Sep 22 15:18:10 2005 Subject: [Numpy-discussion] Piecewise functions. In-Reply-To: <6F76B128-5B9B-4E0D-88D9-4F795A5B5B6E@pito.com> References: <6F76B128-5B9B-4E0D-88D9-4F795A5B5B6E@pito.com> Message-ID: On Thu, 22 Sep 2005, Andrea Riciputi apparently wrote: > I've already tried something like this, but it doesn't work since f1 > and f2 return not valid values outside the range over they are > defined. Perhaps an example could clarify; suppose that f1(x) = 1./ > sqrt(1 - x**2) for x <= 1, and f2(x) = 1./sqrt(x**2 - 1) for x > 1. > Your suggestion, as the other I've tried, fails with a > "OverflowError: math range error". If you do it as I suggested, they should not I believe be evaluated outside of their range. So your function must be generating an overflow error within this range. >>> import math >>> import random >>> def f1(x): return math.sqrt(1-x**2) ... >>> def f2(x): return 1./math.sqrt(x**2-1) ... >>> def f(x): return x<=1 and f1(x) or f2(x) ... >>> d = [random.uniform(0,2) for i in range(20)] >>> fd = [f(x) for x in d] Works fine. Cheers, Alan Isaac From greg.ewing at canterbury.ac.nz Thu Sep 22 20:26:10 2005 From: greg.ewing at canterbury.ac.nz (Greg Ewing) Date: Thu Sep 22 20:26:10 2005 Subject: [Numpy-discussion] Piecewise functions. In-Reply-To: <6F76B128-5B9B-4E0D-88D9-4F795A5B5B6E@pito.com> References: <6F76B128-5B9B-4E0D-88D9-4F795A5B5B6E@pito.com> Message-ID: <43337527.6080002@canterbury.ac.nz> Andrea Riciputi wrote: > On Sep 22, 2005, at 9:33 PM, Alan G Isaac wrote: > > >> def f(x): >> return x<=a and f1(x) or f2(x) > > I've already tried something like this, but it doesn't work I wish people would stop suggesting the 'a and b or c' trick, because it DOESN'T WORK except in special circumstances (i.e. when you can be sure that b is never false). What you want is: def f(x): if x <= a: return f1(x) else: return f2(x) -- Greg Ewing, Computer Science Dept, +--------------------------------------+ University of Canterbury, | A citizen of NewZealandCorp, a | Christchurch, New Zealand | wholly-owned subsidiary of USA Inc. | greg.ewing at canterbury.ac.nz +--------------------------------------+ From ariciputi at pito.com Fri Sep 23 00:03:07 2005 From: ariciputi at pito.com (Andrea Riciputi) Date: Fri Sep 23 00:03:07 2005 Subject: [Numpy-discussion] Piecewise functions. In-Reply-To: <43337527.6080002@canterbury.ac.nz> References: <6F76B128-5B9B-4E0D-88D9-4F795A5B5B6E@pito.com> <43337527.6080002@canterbury.ac.nz> Message-ID: On Sep 23, 2005, at 5:23 AM, Greg Ewing wrote: > I wish people would stop suggesting the 'a and b or c' trick, > because it DOESN'T WORK except in special circumstances (i.e. > when you can be sure that b is never false). > > What you want is: > > def f(x): > if x <= a: > return f1(x) > else: > return f2(x) It doesn't work either. 
As I've already explained x is an array containing values both above and below a! What I really need is a way to prevent f1 and f2 from acting on those values of the 'x' array for which the functions are not defined. Any other hints? Andrea. From joostvanevert at gmail.com Fri Sep 23 01:33:43 2005 From: joostvanevert at gmail.com (Joost van Evert) Date: Fri Sep 23 01:33:43 2005 Subject: [Numpy-discussion] array indexing in numarray Message-ID: <1127464280.3803.3.camel@inpc93.et.tudelft.nl> Hi list, I have two questions on numarray indexing: 1) does anyone know an elegant way to assign values to a non-contiguous slice. Like: In [1]:from numarray import * In [2]:a = arange(25).resize((5,5)) In [3]:a Out[3]: array([[ 0, 1, 2, 3, 4], [ 5, 6, 7, 8, 9], [10, 11, 12, 13, 14], [15, 16, 17, 18, 19], [20, 21, 22, 23, 24]]) In [4]:i = [1,3] In [5]:transpose(a[i])[i]=1 ---------------------------------------------------------------- exceptions.ValueError Traceback (most recent call last) ... ValueError: Invalid destination array: partial indices require contiguous non-byteswapped destination 2) And second: why didn't you choose to return all combinations of indices: In [8]:a[i,i] Out[8]:array([ 6, 18]) Where I, in my humble opinion, would prefer to see all possible combinations. e.g. a[[1,3],[1,3]] = a[[1,1,3,3],[1,3,1,3]] Anyone has an idea how to easyly obtain this behaviour? Greets+thanks, Joost From gerard.vermeulen at grenoble.cnrs.fr Fri Sep 23 01:55:46 2005 From: gerard.vermeulen at grenoble.cnrs.fr (Gerard Vermeulen) Date: Fri Sep 23 01:55:46 2005 Subject: [Numpy-discussion] Piecewise functions. In-Reply-To: References: Message-ID: <20050923105452.717227fb.gerard.vermeulen@grenoble.cnrs.fr> On Thu, 22 Sep 2005 17:44:59 +0200 Andrea Riciputi wrote: > Hi all, > this is probably an already discussed problem, but I've not been able > to find a solution even after googling a lot. > > I've a piecewise defined function: > > / > | f1(x) if x <= a > f(x) = | > | f2(x) if x > a > \ > > where f1 and f2 are not defined outside the above range. How can I > define such a function in Python in order to apply (map) it to an > array ranging from values smaller to values bigger than a? > This does maybe what you want: from scipy import * def f(x): """Approximative implementation of: -1/(x-pi) for x < pi 1/(x-pi) for x > pi """ result = zeros(len(x), x.typecode()) i = argmin(abs(x-pi)) # you may have to tweak i here, because it may be off by 1 result[:i+1] = -1/(x[:i+1]-pi) result[i+1:] = 1/(x[i+1:]-pi) return result x = arange(0, 10, 1, Float) print f(x) print abs(1/(x-pi)) Gerard From pjssilva at ime.usp.br Fri Sep 23 06:34:19 2005 From: pjssilva at ime.usp.br (Paulo J. S. Silva) Date: Fri Sep 23 06:34:19 2005 Subject: [Numpy-discussion] Piecewise functions. In-Reply-To: <20050923105452.717227fb.gerard.vermeulen@grenoble.cnrs.fr> References: <20050923105452.717227fb.gerard.vermeulen@grenoble.cnrs.fr> Message-ID: <1127474667.13490.4.camel@localhost.localdomain> Here is a slightly different solution, that is easier to my eyes and that can handle arguments in arbitrary order. It requires numarray (as it uses array indexing). If I understand well it should work woth the new scipy-core (Numeric3), but I haven't it compiled here. Obs: Probably f2 implementation is faster than f1. Best, Paulo ---- from numarray import * def f1(x): """Implementation of: -1/(x-pi) for x < pi 1/(x-pi) for x > pi """ result = zeros(len(x), x.typecode()) # First solution, probably slower, but clear. 
result[x < pi] = -1/(x[x < pi]-pi) result[x > pi] = 1/(x[x > pi]-pi) return result def f2(x): """Second implementation of: -1/(x-pi) for x < pi 1/(x-pi) for x > pi """ result = zeros(len(x), x.typecode()) # Second solution, probably faster as where returns only # the correct indexes. small = where(x < pi) big = where(x > pi) result[small] = -1/(x[small]-pi) result[big] = 1/(x[big]-pi) return result x = arange(0, 10, 1, Float) print f1(x) print f2(x) print abs(1/(x-pi)) # It uses the default value for pi. api = array([pi]) print f1(api) print f2(api) From aisaac at american.edu Fri Sep 23 07:58:08 2005 From: aisaac at american.edu (Alan G Isaac) Date: Fri Sep 23 07:58:08 2005 Subject: [Numpy-discussion] Piecewise functions. In-Reply-To: References: <6F76B128-5B9B-4E0D-88D9-4F795A5B5B6E@pito.com> <43337527.6080002@canterbury.ac.nz> Message-ID: On Fri, 23 Sep 2005, Andrea Riciputi apparently wrote: > What I really need is a way to prevent f1 and f2 from > acting on those values of the 'x' array for which the > functions are not defined. The example I posted works with an array, which I called d. If you must feed the array to the function, just move the list comprehension inside of f. Of course, you may find list comprehension too slow for your application. Alan Isaac From ariciputi at pito.com Fri Sep 23 09:58:08 2005 From: ariciputi at pito.com (Andrea Riciputi) Date: Fri Sep 23 09:58:08 2005 Subject: [Numpy-discussion] Piecewise functions. In-Reply-To: <1127474667.13490.4.camel@localhost.localdomain> References: <20050923105452.717227fb.gerard.vermeulen@grenoble.cnrs.fr> <1127474667.13490.4.camel@localhost.localdomain> Message-ID: <3C09D7A9-9548-432A-8D53-B7FB23A62FF8@pito.com> On Sep 23, 2005, at 1:24 PM, Paulo J. S. Silva wrote: > Here is a slightly different solution, that is easier to my eyes and > that can handle arguments in arbitrary order. > > It requires numarray (as it uses array indexing). If I understand well > [snip] Thanks, it is exactly what I'm looking for. Unfortunately, I'm using Numeric, and I won't switch to numarray only for this feature; small arrays are too frequent in my applications. Anyway the method proposed by Gerard Vermeulen works too (thanks a lot Gerard!), and I'll stay with it, at least until Numeric3 is ready for the root-mean-square users. ;-) Cheers, Andrea. From oliphant at ee.byu.edu Sat Sep 24 21:52:40 2005 From: oliphant at ee.byu.edu (Travis Oliphant) Date: Sat Sep 24 21:52:40 2005 Subject: [Numpy-discussion] Release of scipy core beta will happen next week. Message-ID: <43362CCF.5020100@ee.byu.edu> At the SciPy 2005 conference I announced that I was hoping to get a beta of the new scipy (core) (aka Numeric3 aka Numeric Next Generation) released by the end of the conference. This did not happen. Some last minute features were suggested by Fernando Perez that I think will be relatively easy to add and make the release that much stronger. Look for the beta announcement next week. For the impatient, the svn server is always available: http://svn.scipy.org/svn/scipy_core/branches/newcore -Travis O.
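
A side note to close out the piecewise-function thread above: in plain Numeric (Andrea's constraint) the same "evaluate each branch only on its own domain" idea can be written with nonzero/take/put instead of numarray-style index arrays; the usual three-argument where(x <= a, f1(x), f2(x)) does not help here, because both branches are evaluated over the whole array before where ever sees them. The sketch below is only an illustration of that approach, not code from the thread: eval_piecewise is a made-up helper name, and the two branch functions are the ones from Andrea's example.

from Numeric import *

def eval_piecewise(x, a, f1, f2):
    # Apply f1 only where x <= a and f2 only where x > a, so that
    # neither branch ever sees a value outside its own domain.
    x = asarray(x, Float)
    low = nonzero(less_equal(x, a))      # indices with x <= a
    high = nonzero(greater(x, a))        # indices with x > a
    y = zeros(x.shape, Float)
    put(y, low, f1(take(x, low)))
    put(y, high, f2(take(x, high)))
    return y

def f1(x): return 1. / sqrt(1. - x**2)   # from the thread; finite for 0 <= x < 1
def f2(x): return 1. / sqrt(x**2 - 1.)   # from the thread; finite for x > 1

# Sample points are kept away from the singular point x = 1.
print eval_piecewise([0.5, 0.9, 1.5, 3.0], 1.0, f1, f2)

Everything used here (nonzero, take, put, asarray) is ordinary Numeric, so no switch to numarray is needed; with numarray the boolean or index arrays could of course be used directly, as in Paulo's version.
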
From gerard.vermeulen at grenoble.cnrs.fr Sun Sep 25 10:16:05 2005 From: gerard.vermeulen at grenoble.cnrs.fr (Gerard Vermeulen) Date: Sun Sep 25 10:16:05 2005 Subject: [Numpy-discussion] Release of scipy core beta will happen next week. In-Reply-To: <43362CCF.5020100@ee.byu.edu> References: <43362CCF.5020100@ee.byu.edu> Message-ID: <20050925191415.3a686369.gerard.vermeulen@grenoble.cnrs.fr> On Sat, 24 Sep 2005 22:51:27 -0600 Travis Oliphant wrote: > > At the SciPy 2005 conference I announced that I was hoping to get a beta > of the new scipy (core) (aka Numeric3 aka Numeric Next Generation) > released by the end of the conference. > > This did not happen. Some last minute features were suggested by > Fernando Perez that I think will be relatively easy to add and make the > release that much stronger. > > Look for the beta announcement next week. > > For the impatient, the svn server is always available: > > http://svn.scipy.org/svn/scipy_core/branches/newcore > Hi Travis, when I tried a few months ago to compile one of my C++ Python modules with Numeric3, g++-3.4.3 choked on the line typedef unsigned char bool; in arrayobject.h, because bool is a predefined type in C++. I see the offending line is still in SVN (did not try to build it though). Sorry for sitting on the bug so long; the main reason is that at the time (I suppose it is still the case) Numeric3 does not coexist well with Numeric in the same Python interpreter (I remember import conflicts). If a typical Linux user wants to play with Numeric3, he has either to remove Numeric (and break possible dependencies) or build his own Python for Numeric3. I think that most Linux users are not going to do this and that it will take more than a year before distros make the move. Hence, my lack of motivation for reporting bugs or giving it a real try. Gerard From stephen.walton at csun.edu Sun Sep 25 11:40:25 2005 From: stephen.walton at csun.edu (Stephen Walton) Date: Sun Sep 25 11:40:25 2005 Subject: [Numpy-discussion] python-numeric in FC4: good idea? Message-ID: <4336EEB1.1070105@csun.edu> In my ongoing struggles with FC4, I now find that the Fedora developers have put Numeric 23.7 into the core as python-numeric and built pygtk2 against it, so the latter has a dependency on the former. I think this is a bad idea unless the core developers are going to include ATLAS and build Numeric against it. After all, this community and the Scipy one have gone to great lengths to optimize ATLAS and build Numeric and numarray (and presumably the new scipy core) against it. But I'm probably not going to convince the Redhat folks on my own. So, questions are: can someone more familiar with pygtk2 than me tell me what parts of it depend on Numeric and why? Can we start a campaign to put ATLAS into Fedora Core if Numeric is going to be there too? From rkern at ucsd.edu Sun Sep 25 19:41:04 2005 From: rkern at ucsd.edu (Robert Kern) Date: Sun Sep 25 19:41:04 2005 Subject: [Numpy-discussion] python-numeric in FC4: good idea? In-Reply-To: <4336EEB1.1070105@csun.edu> References: <4336EEB1.1070105@csun.edu> Message-ID: <43375FAC.1080303@ucsd.edu> Stephen Walton wrote: > In my ongoing struggles with FC4, I now find that the Fedora developers > have put Numeric 23.7 into the core as python-numeric and built pygtk2 > against it, so the latter has a dependency on the former. > > I think this is a bad idea unless the core developers are going to > include ATLAS and build Numeric against it. 
After all, this community > and the Scipy one have gone to great lengths to optimize ATLAS and build > Numeric and numarray (and presumably the new scipy core) against it. > But I'm probably not going to convince the Redhat folks on my own. > > So, questions are: can someone more familiar with pygtk2 than me tell > me what parts of it depend on Numeric and why? Can we start a campaign > to put ATLAS into Fedora Core if Numeric is going to be there too? It's for returning a Numeric array from a pixbuf. Strictly speaking, it's optional, but it looks like the package maintainers decided to compile it in for FC4. Can you make an RPM of python-numeric compiled against ATLAS and install it yourself? Or can you install Numeric yourself and make a dummy package to tell RPM that yes, indeed, Numeric is installed? I'm terribly unfamiliar with Fedora and RPM; I've always prefered Debian. -- Robert Kern rkern at ucsd.edu "In the fields of hell where the grass grows high Are the graves of dreams allowed to die." -- Richard Harter From rkern at ucsd.edu Sun Sep 25 20:00:10 2005 From: rkern at ucsd.edu (Robert Kern) Date: Sun Sep 25 20:00:10 2005 Subject: [Numpy-discussion] Release of scipy core beta will happen next week. In-Reply-To: <20050925191415.3a686369.gerard.vermeulen@grenoble.cnrs.fr> References: <43362CCF.5020100@ee.byu.edu> <20050925191415.3a686369.gerard.vermeulen@grenoble.cnrs.fr> Message-ID: <4337640B.2030405@ucsd.edu> Gerard Vermeulen wrote: > On Sat, 24 Sep 2005 22:51:27 -0600 > Travis Oliphant wrote: > >>At the SciPy 2005 conference I announced that I was hoping to get a beta >>of the new scipy (core) (aka Numeric3 aka Numeric Next Generation) >>released by the end of the conference. >> >>This did not happen. Some last minute features were suggested by >>Fernando Perez that I think will be relatively easy to add and make the >>release that much stronger. >> >>Look for the beta announcement next week. >> >>For the impatient, the svn server is always available: >> >>http://svn.scipy.org/svn/scipy_core/branches/newcore > > Hi Travis, > > when I tried a few months ago to compile one of my C++ Python modules with > Numeric3, g++-3.4.3 choked on the line > > typedef unsigned char bool; > > in arrayobject.h, because bool is a predefined type in C++. > > I see the offending line is still in SVN (did not try to build it though). Will this do the trick? #ifndef __cplusplus typedef unsigned char bool; #define false 0 #define true 1 #endif /* __cplusplus */ > Sorry for sitting on the bug so long; the main reason is that at the time > (I suppose it is still the case) Numeric3 does not coexist well with > Numeric in the same Python interpreter (I remember import conflicts). > If a typical Linux user wants to play with Numeric3, he has either to remove > Numeric (and break possible dependencies) or build his own Python for Numeric3. > I think that most Linux users are not going to do this and that it will take more > than a year before distros make the move. Hence, my lack of motivation for > reporting bugs or giving it a real try. scipy_core does not interfere with Numeric anymore. It's installed as scipy (so it *will* interfere with previous versions of scipy). While we're on the subject of bugs (for reference, I'm on OS X 10.4 with Python 2.4.1): * When linking umath.so, distutils is including a bare "-l" that causes the link to fail (gcc can't interpret the argument). I have no idea where it's coming from. 
Just after the Extension object for umath.so is created, the libraries attribute is empty, just like the other Extension objects. * When linking against Accelerate.framework, it can't find cblas.h . I have a patch in scipy.distutils.system_info for that (and also to remove the -framework arguments for compiling; they're solely linker flags). * setup.py looks for an optimized BLAS through blas_info, but getting lapack_info is commented out. Is this deliberate? * Despite comment lines claiming the contrary, scipy.linalg hasn't been converted yet. basic_lite.py tries to import lapack and flapack. * scipy.base.limits is missing (and is used in basic_lite.py). Feature request: * Bundle the include directory in with the package itself and provide a function somewhere that returns the appropriate directory. People writing setup.py scripts for extension modules that use scipy_core can then do the following: from scipy.distutils import get_scipy_include ... someext = Extension('someext', include_dirs=[get_scipy_include(), ...], ...) This would help people who can't write to sys.prefix+'/include/python2.4' and people like me who are trying to package scipy_core into a PythonEgg (which doesn't yet support installing header files to the canonical location). -- Robert Kern rkern at ucsd.edu "In the fields of hell where the grass grows high Are the graves of dreams allowed to die." -- Richard Harter From Fernando.Perez at colorado.edu Sun Sep 25 22:04:01 2005 From: Fernando.Perez at colorado.edu (Fernando Perez) Date: Sun Sep 25 22:04:01 2005 Subject: [Numpy-discussion] python-numeric in FC4: good idea? In-Reply-To: <43375FAC.1080303@ucsd.edu> References: <4336EEB1.1070105@csun.edu> <43375FAC.1080303@ucsd.edu> Message-ID: <4337811E.4020806@colorado.edu> Robert Kern wrote: > Can you make an RPM of python-numeric compiled against ATLAS and install > it yourself? Or can you install Numeric yourself and make a dummy > package to tell RPM that yes, indeed, Numeric is installed? I'm terribly > unfamiliar with Fedora and RPM; I've always prefered Debian. You can. In fact, Numeric builds out of the box with python bdist_rpm, though the package name comes out to be named 'Numeric', but that should not be a problem, since the setup.cfg file reads: [bdist_rpm] provides=python-numeric, python-numeric-devel build_script=rpm_build.sh install_script=rpm_install.sh which means that the python-numeric dependency should be satisfied. If you really need to have the rpm be named python-numeric, this can be done by either writing out the spec file and fixing it by hand via: python setup.py bdist_rpm --spec-only or by changing the 'name' flag in the setup.py by hand to read 'python-numeric' instead of 'Numeric'. If changing this name for rpms is a common need, we can patch up setup.py to take an optional argument. Cheers, f From Chris.Barker at noaa.gov Sun Sep 25 23:34:11 2005 From: Chris.Barker at noaa.gov (Chris Barker) Date: Sun Sep 25 23:34:11 2005 Subject: [Numpy-discussion] python-numeric in FC4: good idea? In-Reply-To: <43375FAC.1080303@ucsd.edu> References: <4336EEB1.1070105@csun.edu> <43375FAC.1080303@ucsd.edu> Message-ID: <4337962C.8000404@noaa.gov> Robert Kern wrote: > Stephen Walton wrote: >>So, questions are: can someone more familiar with pygtk2 than me tell >>me what parts of it depend on Numeric and why? Can we start a campaign >>to put ATLAS into Fedora Core if Numeric is going to be there too? > > It's for returning a Numeric array from a pixbuf.
This is a Darn Good example of why we need the new array protocol. Let's make sure to make sure it gets used as widely as possible. -Chris From greg.ewing at canterbury.ac.nz Mon Sep 26 02:20:09 2005 From: greg.ewing at canterbury.ac.nz (Greg Ewing) Date: Mon Sep 26 02:20:09 2005 Subject: [Numpy-discussion] python-numeric in FC4: good idea? In-Reply-To: <4337962C.8000404@noaa.gov> References: <4336EEB1.1070105@csun.edu> <43375FAC.1080303@ucsd.edu> <4337962C.8000404@noaa.gov> Message-ID: <4337BA3C.8080500@canterbury.ac.nz> Chris Barker wrote: > This is a Darn Good example of why we need the new array protocol. I came across another one the other day when working on pygui. I wanted to use glReadPixels to read data into a buffer belonging to an NSBitmapImageRep, but PyOpenGL's glReadPixels can only read data into an existing memory block if it's a Numeric array, and PyObjC doesn't know anything about Numeric... Greg From rkern at ucsd.edu Mon Sep 26 02:44:08 2005 From: rkern at ucsd.edu (Robert Kern) Date: Mon Sep 26 02:44:08 2005 Subject: [Numpy-discussion] python-numeric in FC4: good idea? In-Reply-To: <4337BA3C.8080500@canterbury.ac.nz> References: <4336EEB1.1070105@csun.edu> <43375FAC.1080303@ucsd.edu> <4337962C.8000404@noaa.gov> <4337BA3C.8080500@canterbury.ac.nz> Message-ID: <4337C2D1.1020204@ucsd.edu> Greg Ewing wrote: > Chris Barker wrote: > >> This is a Darn Good example of why we need the new array protocol. > > I came across another one the other day when working on > pygui. I wanted to use glReadPixels to read data into a > buffer belonging to an NSBitmapImageRep, but PyOpenGL's > glReadPixels can only read data into an existing memory > block if it's a Numeric array, and PyObjC doesn't know > anything about Numeric... I've taken the liberty of adding these to a new Wiki page. Let's record these "Oh I wish everyone used the array protocol" moments when we come across them. http://www.scipy.org/wikis/featurerequests/ArrayInterfaceUseCases/ This will probably migrate to the Trac instance for scipy whenever that comes about. -- Robert Kern rkern at ucsd.edu "In the fields of hell where the grass grows high Are the graves of dreams allowed to die." -- Richard Harter From oliphant at ee.byu.edu Mon Sep 26 08:18:22 2005 From: oliphant at ee.byu.edu (Travis Oliphant) Date: Mon Sep 26 08:18:22 2005 Subject: [Numpy-discussion] python-numeric in FC4: good idea? In-Reply-To: <4337C2D1.1020204@ucsd.edu> References: <4336EEB1.1070105@csun.edu> <43375FAC.1080303@ucsd.edu> <4337962C.8000404@noaa.gov> <4337BA3C.8080500@canterbury.ac.nz> <4337C2D1.1020204@ucsd.edu> Message-ID: <433810FA.5040704@ee.byu.edu> Robert Kern wrote: >Greg Ewing wrote: > > >>Chris Barker wrote: >> >> >> >>>This is a Darn Good example of why we need the new array protocol. >>> >>> >>I came across another one the other day when working on >>pygui. I wanted to use glReadPixels to read data into a >>buffer belonging to an NSBitmapImageRep, but PyOpenGL's >>glReadPixels can only read data into an existing memory >>block if it's a Numeric array, and PyObjC doesn't know >>anything about Numeric... >> >> > >I've taken the liberty of adding these to a new Wiki page. Let's record >these "Oh I wish everyone used the array protocol" moments when we come >across them. > >http://www.scipy.org/wikis/featurerequests/ArrayInterfaceUseCases/ > > This is a great idea. It will help me sell an array_protocol_in_Python PEP to the Python Powers. 
-Travis From stephen.walton at csun.edu Mon Sep 26 08:40:01 2005 From: stephen.walton at csun.edu (Stephen Walton) Date: Mon Sep 26 08:40:01 2005 Subject: [Numpy-discussion] python-numeric in FC4: good idea? In-Reply-To: <4337811E.4020806@colorado.edu> References: <4336EEB1.1070105@csun.edu> <43375FAC.1080303@ucsd.edu> <4337811E.4020806@colorado.edu> Message-ID: <433815FE.7040007@csun.edu> Fernando Perez wrote: > Robert Kern wrote: > >> Can you make an RPM of python-numeric compiled against ATLAS and install >> it yourself? > > > You can. In fact, Numeric builds out of the box with > > python bdist_rpm, > > though the package name comes out to be named 'Numeric', but that > should not be a problem, since the setup.cfg file reads: > > [bdist_rpm] > provides=python-numeric, python-numeric-devel > build_script=rpm_build.sh > install_script=rpm_install.sh > > which means that the python-numeric dependency should be satisfied. Yes, it is, and I sort of found that out myself. Yum is much smarter about this than RPM apparently; if I do rpm -U Numeric-23.8-i386.rpm rpm refuses to replace python-numeric-23.7 as distributed with FC4, but "yum install Numeric" when pointed at my local repo works fine. As does "rpm -i --force" of course. There is at least a verbal commitment at bugzilla.redhat.com from Redhat to migrate pygtk2 to the new array object when it comes out. From gerard.vermeulen at grenoble.cnrs.fr Mon Sep 26 08:53:34 2005 From: gerard.vermeulen at grenoble.cnrs.fr (Gerard Vermeulen) Date: Mon Sep 26 08:53:34 2005 Subject: [Numpy-discussion] Release of scipy core beta will happen next week. In-Reply-To: <4337640B.2030405@ucsd.edu> References: <43362CCF.5020100@ee.byu.edu> <20050925191415.3a686369.gerard.vermeulen@grenoble.cnrs.fr> <4337640B.2030405@ucsd.edu> Message-ID: <20050926175152.33bb7b62.gerard.vermeulen@grenoble.cnrs.fr> On Sun, 25 Sep 2005 19:59:23 -0700 Robert Kern wrote: > > typedef unsigned char bool; > > > > in arrayobject.h, because bool is a predefined type in C++. > > > > I see the offending line is still in SVN (did not try to build it though). > > Will this do the trick? > > #ifndef __cplusplus > typedef unsigned char bool; > #define false 0 > #define true 1 > #endif /* __cplusplus */ > It works for my g++, but it may not be a general solution. See footnote 15 under item 2 of section 5.3.3 of http://www.open-std.org/jtc1/sc22/open/n2356/ (ISO C++ draft) saying that sizeof(bool) is not required to be 1. Gerard From oliphant at ee.byu.edu Mon Sep 26 10:44:26 2005 From: oliphant at ee.byu.edu (Travis Oliphant) Date: Mon Sep 26 10:44:26 2005 Subject: [Numpy-discussion] Piecewise functions. In-Reply-To: References: Message-ID: <4338334D.4050709@ee.byu.edu> Andrea Riciputi wrote: > Hi all, > this is probably an already discussed problem, but I've not been able > to find a solution even after googling a lot. > > I've a piecewise defined function: > > / > | f1(x) if x <= a > f(x) = | > | f2(x) if x > a > \ > > where f1 and f2 are not defined outside the above range. How can I > define such a function in Python in order to apply (map) it to an > array ranging from values smaller to values bigger than a? This is not a trivial problem in current versions of Numeric. What are you using, Numeric, numarray? 
In new scipy core (which replaces Numeric) and, I think, in numarray, you could say gta = x>a lea = x<=a y = x.copy() y[gta] = f2(x[gta]) y[lea] = f1(x[lea]) I've also just written a piecewise function for the new scipy core so you could write y = piecewise(x, x<=a, [f1,f2]) -Travis Oliphant From ariciputi at pito.com Tue Sep 27 02:10:32 2005 From: ariciputi at pito.com (Andrea Riciputi) Date: Tue Sep 27 02:10:32 2005 Subject: [Numpy-discussion] Piecewise functions. In-Reply-To: <4338334D.4050709@ee.byu.edu> References: <4338334D.4050709@ee.byu.edu> Message-ID: <96C7A372-9079-4338-85E5-6A3C0751623E@pito.com> On Sep 26, 2005, at 7:43 PM, Travis Oliphant wrote: > This is not a trivial problem in current versions of Numeric. > > What are you using, Numeric, numarray? > > In new scipy core (which replaces Numeric) and, I think, in > numarray, you could say > > gta = x>a > lea = x<=a > y = x.copy() > y[gta] = f2(x[gta]) > y[lea] = f1(x[lea]) > > I've also just written a piecewise function for the new scipy core > so you could write > > y = piecewise(x, x<=a, [f1,f2]) > > -Travis Oliphant Yes I'm using Numeric, and in the short to middle term I'll stay with it. Anyway I'm aware of the effort you are spending in putting together a Numeric replacement, and I'll look forward for a stable release of it. In the meanwhile I'll use the tricks already suggested here. Thanks again for your effort, Andrea. From junkmail at chatsubo.lagged.za.net Tue Sep 27 09:05:45 2005 From: junkmail at chatsubo.lagged.za.net (Neilen Marais) Date: Tue Sep 27 09:05:45 2005 Subject: [Numpy-discussion] Determining the condition number of matrix Message-ID: Hi Does numeric have a facility to estimate the condition number of a matrix? Thanks Neilen -- you know its kind of tragic we live in the new world but we've lost the magic -- Battery 9 (www.battery9.co.za) From luszczek at cs.utk.edu Tue Sep 27 11:54:59 2005 From: luszczek at cs.utk.edu (Piotr Luszczek) Date: Tue Sep 27 11:54:59 2005 Subject: [Numpy-discussion] Determining the condition number of matrix In-Reply-To: References: Message-ID: <433994D7.8050100@cs.utk.edu> Neilen Marais wrote: > Hi > > Does numeric have a facility to estimate the condition number of a matrix? > > Thanks > Neilen The only way I can think of is through SVD: import RandomArray as RA import LinearAlgebra as LA n = 100 a = RA.random((n, n)) vt, s, u = LA.singular_value_decomposition(a) cond2 = s[0] / s[-1] print cond2 The above code computes 2-norm condition number. Since Numeric has only limited binding to LAPACK you should probably look into SciPy that might have bindings to LAPACK's condition number estimators. Piotr From edcjones at comcast.net Wed Sep 28 11:28:34 2005 From: edcjones at comcast.net (Edward C. Jones) Date: Wed Sep 28 11:28:34 2005 Subject: [Numpy-discussion] numarray: Possible hash collision problem Message-ID: <433AE061.2030206@comcast.net> hash(numarray.arange(1000)) == hash(numarray.arange(10000)) The hash value changes each time I enter the Python interpreter. I have always assumed that hashing was deterministic. Is it? From cookedm at physics.mcmaster.ca Wed Sep 28 11:46:40 2005 From: cookedm at physics.mcmaster.ca (David M. Cooke) Date: Wed Sep 28 11:46:40 2005 Subject: [Numpy-discussion] numarray: Possible hash collision problem In-Reply-To: <433AE061.2030206@comcast.net> (Edward C. Jones's message of "Wed, 28 Sep 2005 14:26:41 -0400") References: <433AE061.2030206@comcast.net> Message-ID: "Edward C. 
Jones" writes: > hash(numarray.arange(1000)) == hash(numarray.arange(10000)) > > The hash value changes each time I enter the Python interpreter. I have > always assumed that hashing was deterministic. Is it? Not suprising: I also get this: hash(object()) == hash(object()) Looking through the source, I think the hash for an array is determined by the object base class, and hence is the id() of the array. The code above can be written long hand as a = numarray.arange(1000) ha = hash(a) # in this case, hash(a) == id(a) del a b = numarray.arange(10000) hb = hash(b) # in this case, hash(b) == id(b) del b ha == hb It's those (implicit) del statements that mean that a and b are stored to the same location in memory, and hence have the same id(): there's no other object created in the interpreter between when a is deleted and b is created. Basically, id() of a object is guaranteed to be unique *amongst all active objects*. It is _not_ guaranteed to be different from objects that have been created and destroyed. This will return false: a = numarray.arange(1000) b = numarray.arange(10000) hash(a) == hash(b) as a and b still both exist. Since arrays are mutable, there's no good way to get a content-based hash. -- |>|\/|< /--------------------------------------------------------------------------\ |David M. Cooke http://arbutus.physics.mcmaster.ca/dmc/ |cookedm at physics.mcmaster.ca From oliphant at ee.byu.edu Fri Sep 30 00:03:31 2005 From: oliphant at ee.byu.edu (Travis Oliphant) Date: Fri Sep 30 00:03:31 2005 Subject: [Numpy-discussion] Release of SciPy Core 0.4 (Beta) Message-ID: <433CE24A.6040509@ee.byu.edu> This is to announce the release of SciPy Core 0.4.X (beta) It is available at sourceforge which you can access by going to http://numeric.scipy.org Thanks to all who helped me with this release, especially Robert Kern Pearu Peterson Now, let's start getting this stable.... -Travis Oliphant
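
A closing note on the condition-number question above: with plain Numeric, the SVD-based 2-norm condition number from Piotr's reply can at least be sanity-checked against a norm product, at the price of forming an explicit inverse. The sketch below is only a rough cross-check (the frobenius helper is ad hoc, not a Numeric function), and the product is an upper bound on the 2-norm condition number (it can overshoot by up to a factor of n) rather than an estimate of it; proper estimators such as LAPACK's need bindings that Numeric does not expose, which is why Piotr points at SciPy.

import Numeric as N
import RandomArray as RA
import LinearAlgebra as LA

def frobenius(m):
    # Square root of the sum of squared entries (Frobenius norm).
    return N.sqrt(N.sum(N.ravel(m * m)))

n = 100
a = RA.random((n, n))

s = LA.singular_value_decomposition(a)[1]    # singular values, largest first
cond2 = s[0] / s[-1]                         # 2-norm condition number

bound = frobenius(a) * frobenius(LA.inverse(a))
print cond2, bound                           # mathematically bound >= cond2
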