From charlesr.harris at gmail.com Thu Dec 1 00:38:56 2005 From: charlesr.harris at gmail.com (Charles R Harris) Date: Wed, 30 Nov 2005 22:38:56 -0700 Subject: [SciPy-dev] A scalar is no 1 \times 1 matrix ? In-Reply-To: <438D99A7.60306@mecha.uni-stuttgart.de> References: <438D99A7.60306@mecha.uni-stuttgart.de> Message-ID: Looks right to me. On 11/30/05, Nils Wagner wrote: > > >>> linalg.inv(2) > Traceback (most recent call last): > File "", line 1, in ? > File "/usr/lib64/python2.4/site-packages/scipy/linalg/basic.py", line > 183, in inv > raise ValueError, 'expected square matrix' > ValueError: expected square matrix > >>> linalg.inv(mat(2)) > array([[ 0.5]]) > > Is this behaviour wanted ? > > Nils > > _______________________________________________ > Scipy-dev mailing list > Scipy-dev at scipy.net > http://www.scipy.net/mailman/listinfo/scipy-dev From cimrman3 at ntc.zcu.cz Thu Dec 1 02:46:56 2005 From: cimrman3 at ntc.zcu.cz (Robert Cimrman) Date: Thu, 01 Dec 2005 08:46:56 +0100 Subject: [SciPy-dev] Version 1471 should have my changes In-Reply-To: <438E3065.6020000@ieee.org> References: <438C0547.6060902@mecha.uni-stuttgart.de> <438CA732.9080003@ieee.org> <438E1BA6.3040307@csun.edu> <438E2429.9030605@csun.edu> <438E3065.6020000@ieee.org> Message-ID: <438EAA70.80102@ntc.zcu.cz> Travis Oliphant wrote: > Well I've been on a different page with scipy development since the > 0.4.3 tagging. > > My changes were not getting placed on the main tree because of my silly > mistake. > > After some fiddling, I've finally merged changes back to the HEAD > revision of scipy. > > Hopefully I didn't mess something else up. Thanks, the 12-error problem has disappeared now! But a new failure came: ====================================================================== FAIL: check_expon (scipy.stats.morestats.test_morestats.test_anderson) ---------------------------------------------------------------------- Traceback (most recent call last): File "/home/share/software/usr/lib/python2.4/site-packages/scipy/stats/tests/test_morestats.py", line 57, in check_expon assert_array_less(A, crit[-2:]) File "/usr/lib/python2.4/site-packages/scipy/test/testing.py", line 782, in assert_array_less AssertionError: Arrays are not less-ordered (mismatch 50.0%): Array 1: 1.8661555117137354 Array 2: [ 1.587 1.9339999999999999] ---------------------------------------------------------------------- Ran 1377 tests in 146.900s FAILED (failures=1) r. From cimrman3 at ntc.zcu.cz Thu Dec 1 04:56:15 2005 From: cimrman3 at ntc.zcu.cz (Robert Cimrman) Date: Thu, 01 Dec 2005 10:56:15 +0100 Subject: [SciPy-dev] bug and debugging (on gentoo linux) Message-ID: <438EC8BF.9040402@ntc.zcu.cz> Is there any gentooist who could help me set up a Python installation so that scipy compiles with debugging symbols? BTW. I use a local install (python setup.py install --root=/home/share/software + setting PYTHONPATH=$PYTHONPATH:/home/share/software/usr/lib/python2.4/site-packages) instead of the system install.
Reason: the following stopped working (segfault) after 1533 (not sure exactly when) core revision: PyArrayObject *obj = 0; intp plen[1]; plen[0] = len; obj = (PyArrayObject *) PyArray_SimpleNew( 1, plen, PyArray_INT32 ); My first sentence hints that I am not able to provide a better stack trace than this: (gdb) bt #0 0xb7a05ddf in PyArray_DescrFromType () from /home/share/software/usr/lib/python2.4/site-packages/scipy/base/multiarray.so #1 0xb7a17bfe in PyArray_ValidType () from /home/share/software/usr/lib/python2.4/site-packages/scipy/base/multiarray.so #2 0xb6410ce9 in helper_newCArrayObject_i32 (len=271, array=0x8332e18) at graph_wrap.c:788 I have also had to replace PyArray_ContiguousFromObject() (caused segfaults) with PyArray_ContiguousFromAny(). thanks, r. From cimrman3 at ntc.zcu.cz Thu Dec 1 07:07:30 2005 From: cimrman3 at ntc.zcu.cz (Robert Cimrman) Date: Thu, 01 Dec 2005 13:07:30 +0100 Subject: [SciPy-dev] update: bug and debugging (on gentoo linux) In-Reply-To: <438EC8BF.9040402@ntc.zcu.cz> References: <438EC8BF.9040402@ntc.zcu.cz> Message-ID: <438EE782.9090708@ntc.zcu.cz> Robert Cimrman wrote: > Is there any gentooist who could help me set up a Python installation so > that scipy compiles with debugging symbols? I have managed to build python with debugging symbols using a custom ebuild in PORTAGE_OVERLAY, but scipy still builds without them. r. From dd55 at cornell.edu Thu Dec 1 09:24:14 2005 From: dd55 at cornell.edu (Darren Dale) Date: Thu, 1 Dec 2005 09:24:14 -0500 Subject: [SciPy-dev] pickling problem In-Reply-To: <438E1386.90400@ieee.org> References: <200511301504.27962.dd55@cornell.edu> <200511301534.58185.dd55@cornell.edu> <438E1386.90400@ieee.org> Message-ID: <200512010924.14952.dd55@cornell.edu> Hi Travis, On Wednesday 30 November 2005 16:03, Travis Oliphant wrote: > Darren Dale wrote: > >The following gives me the same error: > > > >import pickle > >import scipy > >import Numeric > > > >a = scipy.arange(10) > >pickle.dump(a,file('temp.p','w')) > >b = pickle.load(file('temp.p')) > >c = Numeric.array(b) > > > >Traceback (most recent call last): > > File "pickletest.py", line 8, in ? > > c = Numeric.array(b) > >ValueError: cannot handle misaligned or not writeable arrays. > > Which platform are you on? Do you have an old version of Numeric > getting picked up by accident? I am using gentoo on a 64 bit Athlon, gcc-3.4.4, python-2.4.2. This morning, I removed Numeric, then rebuilt version 24.2 from scratch, and still get the same error. Last night I tested the same scipy/Numeric combination on a 32 bit system (gentoo, same gcc and python), and was not able to reproduce the error there. > I can't seem to reproduce this error. > > This error will only show up in the array_fromstructinterface code which > parses the array_struct interface information. > > What is the result of b.flags on your system? To find out insert print > b.flags right before you call Numeric.array. > > Mine is > > {'WRITEABLE': True, 'UPDATEIFCOPY': False, 'NOTSWAPPED': True, > 'CONTIGUOUS': True, 'FORTRAN': True, 'ALIGNED': True, 'OWNDATA': False} > > In particular you are looking for WRITEABLE and ALIGNED to both be True.
mine is {'WRITEABLE': True, 'UPDATEIFCOPY': False, 'NOTSWAPPED': True, 'CONTIGUOUS': True, 'FORTRAN': True, 'ALIGNED': False, 'OWNDATA': False} Darren From oliphant.travis at ieee.org Thu Dec 1 11:29:32 2005 From: oliphant.travis at ieee.org (Travis Oliphant) Date: Thu, 01 Dec 2005 09:29:32 -0700 Subject: [SciPy-dev] pickling problem In-Reply-To: <200512010924.14952.dd55@cornell.edu> References: <200511301504.27962.dd55@cornell.edu> <200511301534.58185.dd55@cornell.edu> <438E1386.90400@ieee.org> <200512010924.14952.dd55@cornell.edu> Message-ID: <438F24EC.6050504@ieee.org> Darren Dale wrote: >I am using gentoo on a 64 bit Athlon, gcc-3.4.4, python-2.4.2. This morning, I >removed Numeric, then rebuilt version 24.2 from scratch, and still get the >same error. Last night I tested the same scipy/Numeric combination on a 32 >bit system (gentoo, same gcc and python), and was not able to reproduce the >error there. > > Perhaps this is a 64-bit issue with unpickling? > > >>I can't seem to reproduce this error. >> >>This error will only show up in the array_fromstructinterface code which >>parses the array_struct interface information. >> >>What is the result of b.flags on your system? To find out insert print >>b.flags right before you call Numeric.array. >> >>Mine is >> >>{'WRITEABLE': True, 'UPDATEIFCOPY': False, 'NOTSWAPPED': True, >>'CONTIGUOUS': True, 'FORTRAN': True, 'ALIGNED': True, 'OWNDATA': False} >> >>In particular you are looking for WRITEABLE and ALIGNED to both be True. >> >> > >mine is > >{'WRITEABLE': True, 'UPDATEIFCOPY': False, 'NOTSWAPPED': True, 'CONTIGUOUS': >True, 'FORTRAN': True, 'ALIGNED': False, 'OWNDATA': False} > > That's why you are getting the error, right there. Your unpickled array is flagged as misaligned. Numeric doesn't deal with misaligned or not writeable data. That means either that its pointer is not on a boundary for the type or the strides are not. I'm sure this is a 64-bit issue but don't know exactly the cause. Can you give me the result of b.strides b.__array_data__ and b.flags.aligned = True ? -Travis From dd55 at cornell.edu Thu Dec 1 12:15:50 2005 From: dd55 at cornell.edu (Darren Dale) Date: Thu, 1 Dec 2005 12:15:50 -0500 Subject: [SciPy-dev] pickling problem In-Reply-To: <438F24EC.6050504@ieee.org> References: <200511301504.27962.dd55@cornell.edu> <200512010924.14952.dd55@cornell.edu> <438F24EC.6050504@ieee.org> Message-ID: <200512011215.50783.dd55@cornell.edu> On Thursday 01 December 2005 11:29, Travis Oliphant wrote: > >>What is the result of b.flags on your system? To find out insert print > >>b.flags right before you call Numeric.array. > >> > >>Mine is > >> > >>{'WRITEABLE': True, 'UPDATEIFCOPY': False, 'NOTSWAPPED': True, > >>'CONTIGUOUS': True, 'FORTRAN': True, 'ALIGNED': True, 'OWNDATA': False} > >> > >>In particular you are looking for WRITEABLE and ALIGNED to both be True. > > > >mine is > > > >{'WRITEABLE': True, 'UPDATEIFCOPY': False, 'NOTSWAPPED': True, > > 'CONTIGUOUS': True, 'FORTRAN': True, 'ALIGNED': False, 'OWNDATA': False} > > That's why you are getting the error, right there. Your unpickled array > is flagged as misaligned. Numeric doesn't deal with misaligned or not > writeable data. > > That means either that its pointer is not on a boundary for the type or > the strides are not. I'm sure this is a 64-bit issue but don't know > exactly the cause. > > Can you give me the result of > > b.strides > b.__array_data__ > > and > > b.flags.aligned = True ?
b.strides: (8,) b.__array_data__: ('0x5d3f44', False) b.flags.aligned=True Traceback (most recent call last): File "pickletest.py", line 11, in ? b.flags.aligned=True File "/usr/lib64/python2.4/site-packages/scipy/base/_internal.py", line 159, in set_aligned self._arr.setflags(align=val) ValueError: cannot set aligned flag of mis-aligned array to True From oliphant.travis at ieee.org Thu Dec 1 11:32:52 2005 From: oliphant.travis at ieee.org (Travis Oliphant) Date: Thu, 01 Dec 2005 09:32:52 -0700 Subject: [SciPy-dev] pickling problem In-Reply-To: <200512010924.14952.dd55@cornell.edu> References: <200511301504.27962.dd55@cornell.edu> <200511301534.58185.dd55@cornell.edu> <438E1386.90400@ieee.org> <200512010924.14952.dd55@cornell.edu> Message-ID: <438F25B4.9000207@ieee.org> >>I can't seem to reproduce this error. >> >>This error will only show up in the array_fromstructinterface code which >>parses the array_struct interface information. >> >>What is the result of b.flags on your system? To find out insert print >>b.flags right before you call Numeric.array. >> >>Mine is >> >>{'WRITEABLE': True, 'UPDATEIFCOPY': False, 'NOTSWAPPED': True, >>'CONTIGUOUS': True, 'FORTRAN': True, 'ALIGNED': True, 'OWNDATA': False} >> >>In particular you are looking for WRITEABLE and ALIGNED to both be True. >> >> > >mine is > >{'WRITEABLE': True, 'UPDATEIFCOPY': False, 'NOTSWAPPED': True, 'CONTIGUOUS': >True, 'FORTRAN': True, 'ALIGNED': False, 'OWNDATA': False} > >Darren > > > Also, could you send me the result of import scipy.base.numerictypes as sbn sbn._alignment[b.dtype] Thanks, -Travis From oliphant at ee.byu.edu Thu Dec 1 15:00:17 2005 From: oliphant at ee.byu.edu (Travis Oliphant) Date: Thu, 01 Dec 2005 13:00:17 -0700 Subject: [SciPy-dev] Records module Message-ID: <438F5651.5060104@ee.byu.edu> Recent SVN has the changes to the ndarray object to allow getfield to also receive a tuple for the data type. A tuple for the data-type in getfield implies that the field is a sub-array. The returned array has this sub-array tacked on to the end of the shape (no data ever gets moved around for field access, just the way it is viewed...) With this change, I see being able to make record arrays essentially just a map between field names and a (type, offset) pair --- where type may itself be a tuple (actual type, shape). Additionally: It's fine to have a field method, I suppose, but would it be possible to also do it with properties? For each field name create a (dynamic) property with that name. The get property code maps the name to the (type, offset) pair using the stored dictionary and uses getfield to extract that particular field. I'm not sure if dynamic properties are possible, or if some other attribute access would have to be implemented. But it's something to look into. -Travis From chanley at stsci.edu Thu Dec 1 16:45:52 2005 From: chanley at stsci.edu (Christopher Hanley) Date: Thu, 01 Dec 2005 16:45:52 -0500 Subject: [SciPy-dev] Records module In-Reply-To: <438F5651.5060104@ee.byu.edu> References: <438F5651.5060104@ee.byu.edu> Message-ID: <438F6F10.3020503@stsci.edu> Travis Oliphant wrote: > I'm not sure if dynamic properties are possible, or if some other > attribute access would have to be implemented. But it's something to > look into. Hi Travis, About a year ago (summer 2004) on the numpy distribution list there was a lot of discussion of the records interface. I will dig through my notes and put together a summary. 
Chris From oliphant at ee.byu.edu Thu Dec 1 20:10:06 2005 From: oliphant at ee.byu.edu (Travis Oliphant) Date: Thu, 01 Dec 2005 18:10:06 -0700 Subject: [SciPy-dev] Records in scipy core In-Reply-To: <438F6F10.3020503@stsci.edu> References: <438F5651.5060104@ee.byu.edu> <438F6F10.3020503@stsci.edu> Message-ID: <438F9EEE.904@ee.byu.edu> Christopher Hanley wrote: >Hi Travis, > >About a year ago (summer 2004) on the numpy distribution list there was >a lot of discussion of the records interface. I will dig through my >notes and put together a summary. > > Thanks for the pointers. I had forgotten about that discussion. I went back and re-read the thread. Here's a good link for others to re-read (the end of) this thread: http://news.gmane.org/find-root.php?message_id=%3cBD22BAC0.E9EB%25perry%40stsci.edu%3e I think some very good points were made. These points should be addressed from the context of scipy arrays which now support records in a very basic way. Because of this, we can support nested records of records --- but how this is to be presented to the user is still an open question (i.e. how do you build one...) I've finally been converted to believe that the notion of records is very important because it speaks of how to do the basic (typeless, mathless) array object that will go into Python correctly. If we can get the general records type done right, then all the other types are examples of it. Thus, I would like to revive discussion of the record object for inclusion in scipy core. I pretty much agree with the semantics that Perry described in his final email (is this all implemented in numarray, yet?), except I would agree with Francesc Alted that a titles or labels concept should be allowed. I'm more enthusiastic about code than discussion, so I'm hoping for a short-lived discussion followed by actual code. I'm ready to do the implementation this week (I've already borrowed lots of great code from numarray which makes it easier), but feel free to chime in even if you read this later. In my mind, the discussion about the records array is primarily a discussion about the records data-type. The way I'm thinking, the scipy ndarray is a homogeneous collection of the same "thing." The big change in scipy core is that Numeric used to allow only certain data types, but now the ndarray can contain an arbitrary "void" data type. You can also add data-types to scipy core. These data-types are "almost" full members of the scipy data-type community. The "almost" is because the N*N casting matrix is not updated (this would require a re-design of how casting is considered). At some point, I'd like to fix this wart and make it so that data-types can be added at will -- I think if we get the record type right, I'll be able to figure out how to do this. We need to add a "record" data-type to scipy. Then, any array can be of "record" type, and there will be an additional "array scalar" that is what is returned when selecting a single element from the array. So, a record array would simply be an array of "records" plus some extra stuff for dealing with the mapping from field names to actual segments of the array element (we may decide that this mapping is general enough that all scipy arrays should have the capability of assigning names to sub-bytes of its main data-type and means of accessing those sub-bytes in which case the subclass is unnecessary).
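To give a flavor of the sub-byte access I mean, here is a quick sketch (illustrative only -- the exact names and signatures may differ from what is in SVN at any given moment):

import scipy

a = scipy.zeros(5, scipy.Float64)    # five 8-byte elements
# View the low 4 bytes of each element as an Int32 "field".
# Nothing is copied; only the interpretation of the bytes changes.
lo = a.getfield(scipy.Int32, 0)      # getfield(data-type, offset)
hi = a.getfield(scipy.Int32, 4)      # the other half of each element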
Let me explain further: Right now, the machinery is in place in scipy_core to get and set in any ndarray (regardless of its data-type) an arbitrary "field". A "field" in this context is defined as a sub-section of the basic element making up the array. Generically the sub-section is defined by an offset and a data-type or a tuple of a data type and a shape (to allow sub-arrays in a record). What I understand the user to want is the binding of a name to this generic sub-section descriptor. 1) Should we allow that for every scipy ndarray: complex data types have an obvious binding, would anybody want to name the first two bytes of their int32 array? I suggest holding off on this one until a records array is working.... 2) Supposing we don't go with number 1, we need to design a record data type that has this name-binding capability. The recarray class in scipy core SVN essentially just does this. Question: How important is backwards compatibility with the old numarray specification? In particular, I would go with the .fields access described by Perry, and eliminate the .field() approach? Thanks for reading and any comments you can make. -Travis From a.h.jaffe at gmail.com Thu Dec 1 22:33:27 2005 From: a.h.jaffe at gmail.com (Andrew Jaffe) Date: Fri, 02 Dec 2005 03:33:27 +0000 Subject: [SciPy-dev] problem with linalg.cholesky? In-Reply-To: References: Message-ID: <438FC087.8050309@gmail.com> Hi All, A bit more information on the problem: Following a suggestion in the pickling thread going on here, if I add in lines to the cholesky_decomposition function in basic_lite.py to print out the flags of the array, the following happens: input array flags {'WRITEABLE': True, 'UPDATEIFCOPY': False, 'NOTSWAPPED': True, 'CONTIGUOUS': True, 'FORTRAN': False, 'ALIGNED': True, 'OWNDATA': True} cholesky: after _castCopyAndTranspose, flags= {'WRITEABLE': True, 'UPDATEIFCOPY': False, 'NOTSWAPPED': True, 'CONTIGUOUS': False, 'FORTRAN': True, 'ALIGNED': True, 'OWNDATA': True} So, for some reason, _castCopyAndTranspose is somehow making the array non-Contiguous, which is then a problem in the underlying lapack call. What's going on here? Also, an (unrelated?) point: I note that basic_lite.py exists in both scipy/linalg and scipy/basic -- two different files with somewhat different contents, imported in different ways! Which is correct? Should they both exist? (They both give the same error for cholesky.) Is one a leftover from an earlier installation? Thanks, Andrew Andrew Jaffe wrote: > Hi All, > > One more data point: in addition to my problems on OS X, someone else > has the same issue on Win XP... So it's not OS-specific, but it does > occur only for some installations. > > Perhaps it depends on the underlying BLAS/LAPACK library? > > Andrew > > Andrew Jaffe wrote: > >>hi all, >> >>(apologies that a similar question has appeared elsewhere...) >> >>In the newest incarnation of scipy_core, I am having trouble with the >>cholesky(a) routine.
Here is some minimal code reproducing the bug >>(on OS X) >> >>------------------------------------------------------------ >>from scipy import identity, __core_version__, Float64 >>import scipy.linalg as la >>print 'Scipy version: ', __core_version__ >>i = identity(4, Float64) >>print 'identity matrix:' >>print i >> >>print 'about to get cholesky decomposition' >>c = la.cholesky(i) >>print c >>------------------------------------------------------------ >> >>which gives >> >>------------------------------------------------------------ >>Scipy version: 0.6.1 >>identity matrix: >>[[ 1. 0. 0. 0.] >> [ 0. 1. 0. 0.] >> [ 0. 0. 1. 0.] >> [ 0. 0. 0. 1.]] >>about to get cholesky decomposition >>Traceback (most recent call last): >> File "/Users/jaffe/Desktop/bad_cholesky.py", line 13, in ? >> c = la.cholesky(i) >> File "/Library/Frameworks/Python.framework/Versions/2.4/lib/ >>python2.4/site-packages/scipy/linalg/basic_lite.py", line 117, in >>cholesky_decomposition >> results = lapack_routine('L', n, a, m, 0) >>lapack_lite.LapackError: Parameter a is not contiguous in >>lapack_lite.dpotrf >>------------------------------------------------------------ >> >>(The cholesky decomposition in this case should just be the matrix >>itself; the same error occurs with a complex matrix.) >> >> >>Any ideas? Could this have anything to do with _CastCopyAndtranspose >>in Basic_lite.py? (Since there are few other candidates for anything >>that actually changes the matrix.) >> >>Thanks in advance, >> >>A >> >> >>______________________________________________________________________ >>Andrew Jaffe a.jaffe at imperial.ac.uk >>Astrophysics Group +44 207 594-7526 >>Blackett Laboratory, Room 1013 FAX 7541 >>Imperial College, Prince Consort Road >>London SW7 2AZ ENGLAND http://astro.imperial.ac.uk/~jaffe From a.h.jaffe at gmail.com Thu Dec 1 22:40:25 2005 From: a.h.jaffe at gmail.com (Andrew Jaffe) Date: Fri, 02 Dec 2005 03:40:25 +0000 Subject: [SciPy-dev] problem with linalg.cholesky? In-Reply-To: <438FC087.8050309@gmail.com> References: <438FC087.8050309@gmail.com> Message-ID: Andrew Jaffe wrote: > Also, an (unrelated?) point: I note that basic_lite.py exists in both > scipy/linalg and scipy/basic -- two different files with somewhat > different contents, imported in different ways! Which is correct? Should > they both exist? (They both give the same error for cholesky.) Is one a > leftover from an earlier installation? OK, if I completely delete my old installation and reinstall, I find that only the scipy/basic version is there... but the cholesky problem persists! Andrew From cimrman3 at ntc.zcu.cz Fri Dec 2 04:23:23 2005 From: cimrman3 at ntc.zcu.cz (Robert Cimrman) Date: Fri, 02 Dec 2005 10:23:23 +0100 Subject: [SciPy-dev] PyArray_New problem In-Reply-To: <438EE782.9090708@ntc.zcu.cz> References: <438EC8BF.9040402@ntc.zcu.cz> <438EE782.9090708@ntc.zcu.cz> Message-ID: <4390128B.1090200@ntc.zcu.cz> Could someone tell me what I am doing wrong? I would like to use a function like the snippet below to create an array. PyArrayObject *helper_newCArrayObject_i32( int32 len, int32 *array ) { intp plen[1]; PyArrayObject *obj = 0; plen[0] = len; printf( "11111 %d\n", PyArray_INT32 ); /* obj = (PyArrayObject *) PyArray_SimpleNew( 1, plen, PyArray_INT32 ); */ obj = (PyArrayObject *) PyArray_New( &PyArray_Type, 1, plen, PyArray_INT32, NULL, NULL, 0, CARRAY_FLAGS, NULL ); .... } but I get this segfault on the line 'obj = ...'. Program received signal SIGSEGV, Segmentation fault. 
[Switching to Thread 16384 (LWP 19363)] 0xb7a91e50 in PyArray_DescrFromType (type=-1213462880) at arraytypes.inc:11645 11645 return descrs[type]; (gdb) bt #0 0xb7a91e50 in PyArray_DescrFromType (type=-1213462880) at arraytypes.inc:11645 #1 0xb7aa1ee3 in PyArray_ValidType (type=-1213462880) at arrayobject.c:5050 #2 0xb65a38c0 in helper_newCArrayObject_i32 (len=271, array=0x82ed318) at graph_wrap.c:790 #3 0xb65a65df in _wrap_mesh_rawGraph (self=0x0, args=0xb66053bc) at graph_wrap.c:1980 #4 0xb7e7d1ff in PyCFunction_Call (func=0xb65e258c, arg=0xb66053bc, kw=0x0) at methodobject.c:73 #5 0xb7ebf857 in call_function (pp_stack=0xbf9769d8, oparg=5) at ceval.c:3558 ... In [2]:nm.__scipy_version__ Out[2]:'0.4.3.1471' In [3]:nm.__core_version__ Out[3]:'0.7.4.1550' r. From a.h.jaffe at gmail.com Fri Dec 2 07:55:19 2005 From: a.h.jaffe at gmail.com (Andrew Jaffe) Date: Fri, 02 Dec 2005 12:55:19 +0000 Subject: [SciPy-dev] problem with linalg.cholesky? In-Reply-To: <438FC087.8050309@gmail.com> References: <438FC087.8050309@gmail.com> Message-ID: Hi All, OK, drilling down a bit further... The problem lies in the use of scipy.transpose() in _castCopyAndTranspose, since transpose() makes a contiguous, non-Fortran array into a non-contiguous fortran array (since it appears that at present it doesn't copy, just makes a new view). One solution is just to actually copy the array. The other function, _fastCopyAndTranspose(), seems to work fine in this situation, but I'm not sure how and why this differs from _castCopyAndTranspose (in particular, it doesn't seem to set the Fortran flag). [However, the cholesky decomposition only makes sense for a symmetric (or Hermitian) array, so, if we're not changing the type, for real matrices really all we need to do is the cast and copy; for complex matrices we need to transpose (not hermitian conjugate since we really want to change the ordering), but this is equivalent to just taking the complex conjugate of every element, which must be faster than copying. If we are changing the type, I assume the speed difference isn't as important.] Any ideas on what the best change(s) would be? Andrew > A bit more information on the problem: > > Following a suggestion in the pickling thread going on here, ff I add in > lines to the cholesky_decomposition function in basic_lite.py to print > out the flags of the array, the following happens: > > > input array flags {'WRITEABLE': True, 'UPDATEIFCOPY': False, > 'NOTSWAPPED': True, 'CONTIGUOUS': True, 'FORTRAN': False, 'ALIGNED': > True, 'OWNDATA': True} > > cholesky: after _castCopyAndTranspose, flags= {'WRITEABLE': True, > 'UPDATEIFCOPY': False, 'NOTSWAPPED': True, 'CONTIGUOUS': False, > 'FORTRAN': True, 'ALIGNED': True, 'OWNDATA': True} > > So, for some reason, _castCopyAndTranspose is somehow making the array > non-Contiguous, which is then a problem in the underlying lapack call. > > What's going on here? > > Also, an (unrelated?) point: I note that basic_lite.py exists in both > scipy/linalg and scipy/basic -- two different files with somewhat > different contents, imported in different ways! Which is correct? Should > they both exist? (They both give the same error for cholesky.) Is one a > leftover from an earlier installation? > > Thanks, > > Andrew > > > > > Andrew Jaffe wrote: > >>Hi All, >> >>One more data point: in addition to my problems on OS X, someone else >>has the same issue on Win XP... So it's not OS-specific, but it does >>occur only for some installations. 
>> >>Perhaps it depends on the underlying BLAS/LAPACK library? >> >>Andrew >> >>Andrew Jaffe wrote: >> >> >>>hi all, >>> >>>(apologies that a similar question has appeared elsewhere...) >>> >>>In the newest incarnation of scipy_core, I am having trouble with the >>>cholesky(a) routine. Here is some minimal code reproducing the bug >>>(on OS X) >>> >>>------------------------------------------------------------ >> >>>from scipy import identity, __core_version__, Float64 >> >>>import scipy.linalg as la >>>print 'Scipy version: ', __core_version__ >>>i = identity(4, Float64) >>>print 'identity matrix:' >>>print i >>> >>>print 'about to get cholesky decomposition' >>>c = la.cholesky(i) >>>print c >>>------------------------------------------------------------ >>> >>>which gives >>> >>>------------------------------------------------------------ >>>Scipy version: 0.6.1 >>>identity matrix: >>>[[ 1. 0. 0. 0.] >>> [ 0. 1. 0. 0.] >>> [ 0. 0. 1. 0.] >>> [ 0. 0. 0. 1.]] >>>about to get cholesky decomposition >>>Traceback (most recent call last): >>> File "/Users/jaffe/Desktop/bad_cholesky.py", line 13, in ? >>> c = la.cholesky(i) >>> File "/Library/Frameworks/Python.framework/Versions/2.4/lib/ >>>python2.4/site-packages/scipy/linalg/basic_lite.py", line 117, in >>>cholesky_decomposition >>> results = lapack_routine('L', n, a, m, 0) >>>lapack_lite.LapackError: Parameter a is not contiguous in >>>lapack_lite.dpotrf >>>------------------------------------------------------------ >>> >>>(The cholesky decomposition in this case should just be the matrix >>>itself; the same error occurs with a complex matrix.) >>> >>> >>>Any ideas? Could this have anything to do with _CastCopyAndtranspose >>>in Basic_lite.py? (Since there are few other candidates for anything >>>that actually changes the matrix.) >>> >>>Thanks in advance, >>> >>>A >>> >>> >>>______________________________________________________________________ >>>Andrew Jaffe a.jaffe at imperial.ac.uk >>>Astrophysics Group +44 207 594-7526 >>>Blackett Laboratory, Room 1013 FAX 7541 >>>Imperial College, Prince Consort Road >>>London SW7 2AZ ENGLAND http://astro.imperial.ac.uk/~jaffe From perry at stsci.edu Fri Dec 2 08:36:47 2005 From: perry at stsci.edu (Perry Greenfield) Date: Fri, 2 Dec 2005 08:36:47 -0500 Subject: [SciPy-dev] [Numpy-discussion] Records in scipy core In-Reply-To: <438F9EEE.904@ee.byu.edu> Message-ID: Travis Oliphant wrote: > > Thus, I would like to revive discussion of the record object for > inclusion in scipy core. I pretty much agree with the semantics that > Perry described in his final email (is this all implemented in numarray, > yet?), No, it was only talk to date, with plans to implement it, but other things have drawn our work up to now. > Question: How important is backwards compatibility with the old numarray > specification? In particular, I would go with the .fields access > described by Perry, and eliminate the .field() approach? > For us, probably not critical since we have to do some rewriting anyway. (But it would be nice to retain for a while as deprecated). But what about field names that don't map well to attributes? I haven't had a chance to reread the past emails but I seem to recall this was a significant issue. That would imply that .field() would be needed for those cases anyway.
Perry From oliphant.travis at ieee.org Fri Dec 2 10:50:22 2005 From: oliphant.travis at ieee.org (Travis Oliphant) Date: Fri, 02 Dec 2005 08:50:22 -0700 Subject: [SciPy-dev] PyArray_New problem In-Reply-To: <4390128B.1090200@ntc.zcu.cz> References: <438EC8BF.9040402@ntc.zcu.cz> <438EE782.9090708@ntc.zcu.cz> <4390128B.1090200@ntc.zcu.cz> Message-ID: <43906D3E.4010109@ieee.org> Robert Cimrman wrote: >Could someone tell me what I am doing wrong? I would like to use a >function like the snippet below to create an array. > >PyArrayObject *helper_newCArrayObject_i32( int32 len, int32 *array ) { > intp plen[1]; > PyArrayObject *obj = 0; > > plen[0] = len; > printf( "11111 %d\n", PyArray_INT32 ); >/* obj = (PyArrayObject *) PyArray_SimpleNew( 1, plen, PyArray_INT32 ); */ > > obj = (PyArrayObject *) PyArray_New( &PyArray_Type, 1, plen, >PyArray_INT32, NULL, NULL, 0, CARRAY_FLAGS, NULL ); >.... > > First of all, don't pass in CARRAY_FLAGS when the data argument to PyArray_New is NULL. A non-zero flags entry tells the subroutine to create a FORTRAN strides array if no data is passed. Remember: DATA flags are only to describe already available memory. If you create the memory in PyArray_New, then the only thing to decide is FORTRAN or C- contiguous. So, in this routine, you are creating a Fortran array. Perhaps this is causing problems later. Also, you are not showing us the rest of the code. Your traceback is showing PyArray_ValidType being called which is not shown anywhere... -Travis From oliphant.travis at ieee.org Fri Dec 2 10:56:45 2005 From: oliphant.travis at ieee.org (Travis Oliphant) Date: Fri, 02 Dec 2005 08:56:45 -0700 Subject: [SciPy-dev] problem with linalg.cholesky? In-Reply-To: References: <438FC087.8050309@gmail.com> Message-ID: <43906EBD.5010407@ieee.org> Andrew Jaffe wrote: >Hi All, > >OK, drilling down a bit further... The problem lies in the use of >scipy.transpose() in _castCopyAndTranspose, since transpose() makes a >contiguous, non-Fortran array into a non-contiguous fortran array (since >it appears that at present it doesn't copy, just makes a new view). One >solution is just to actually copy the array. The other function, >_fastCopyAndTranspose(), seems to work fine in this situation, but I'm >not sure how and why this differs from _castCopyAndTranspose (in >particular, it doesn't seem to set the Fortran flag). > >[However, the cholesky decomposition only makes sense for a symmetric >(or Hermitian) array, so, if we're not changing the type, for real >matrices really all we need to do is the cast and copy; for complex >matrices we need to transpose (not hermitian conjugate since we really >want to change the ordering), but this is equivalent to just taking the >complex conjugate of every element, which must be faster than copying. >If we are changing the type, I assume the speed difference isn't as >important.] > >Any ideas on what the best change(s) would be? > > > The problem with making changes based on what's been posted so far is that the code works for other platforms. If this is a problem with OS X, is this a problem for just your installation or does everybody else see it? I have not seen enough data to know. One strong possibility is that the error is in the lapack_lite and most people are using the optimized lapack. We should definitely check that direction.
-Travis From cimrman3 at ntc.zcu.cz Fri Dec 2 11:04:40 2005 From: cimrman3 at ntc.zcu.cz (Robert Cimrman) Date: Fri, 02 Dec 2005 17:04:40 +0100 Subject: [SciPy-dev] PyArray_New problem In-Reply-To: <43906D3E.4010109@ieee.org> References: <438EC8BF.9040402@ntc.zcu.cz> <438EE782.9090708@ntc.zcu.cz> <4390128B.1090200@ntc.zcu.cz> <43906D3E.4010109@ieee.org> Message-ID: <43907098.7010305@ntc.zcu.cz> Travis Oliphant wrote: > Robert Cimrman wrote: > > >> Could someone tell me what I am doing wrong? I would like to use a >> function like the snippet below to create an array. >> >> PyArrayObject *helper_newCArrayObject_i32( int32 len, int32 *array >> ) { intp plen[1]; PyArrayObject *obj = 0; >> >> plen[0] = len; printf( "11111 %d\n", PyArray_INT32 ); /* obj = >> (PyArrayObject *) PyArray_SimpleNew( 1, plen, PyArray_INT32 ); */ >> >> obj = (PyArrayObject *) PyArray_New( &PyArray_Type, 1, plen, >> PyArray_INT32, NULL, NULL, 0, CARRAY_FLAGS, NULL ); .... >> >> > > > First of all, don't pass in CARRAY_FLAGS when the data argument to > PyArray_New is NULL. A non-zero flags entry tells the subroutine to > create a FORTRAN strides array if no data is passed. > > Remember: DATA flags are only to describe already available memory. > If you create the memory in PyArray_New, then the only thing to > decide is FORTRAN or C- contiguous. So, in this routine, you are > creating a Fortran array. Perhaps this is causing problems later. I understand this - I first tried with PyArray_SimpleNew(), then with PyArray_New() with zero flags and finally tried to play with the flags to no avail. > Also, you are not showing us the rest of the code. Your traceback is > showing PyArray_ValidType being called which is not shown anywhere... Well, that is what is really strange - I have just traced the execution with gdb and what is happening is that, instead of PyArray_New(), PyArray_ValidType() gets called, and so, obviously, it sees &PyArray_Type as its 'int type', which is -1212938592 in my case -> segfault. I am clueless why this happens. r. From oliphant.travis at ieee.org Fri Dec 2 11:16:36 2005 From: oliphant.travis at ieee.org (Travis Oliphant) Date: Fri, 02 Dec 2005 09:16:36 -0700 Subject: [SciPy-dev] PyArray_New problem In-Reply-To: <43907098.7010305@ntc.zcu.cz> References: <438EC8BF.9040402@ntc.zcu.cz> <438EE782.9090708@ntc.zcu.cz> <4390128B.1090200@ntc.zcu.cz> <43906D3E.4010109@ieee.org> <43907098.7010305@ntc.zcu.cz> Message-ID: <43907364.4030501@ieee.org> Robert Cimrman wrote: >Travis Oliphant wrote: > > >>Robert Cimrman wrote: >> >> >> >> >>>Could someone tell me what I am doing wrong? I would like to use a >>> function like the snippet below to create an array. >>> >>>PyArrayObject *helper_newCArrayObject_i32( int32 len, int32 *array >>>) { intp plen[1]; PyArrayObject *obj = 0; >>> >>>plen[0] = len; printf( "11111 %d\n", PyArray_INT32 ); /* obj = >>>(PyArrayObject *) PyArray_SimpleNew( 1, plen, PyArray_INT32 ); */ >>> >>>obj = (PyArrayObject *) PyArray_New( &PyArray_Type, 1, plen, >>>PyArray_INT32, NULL, NULL, 0, CARRAY_FLAGS, NULL ); .... >>> >>> >>> >>> >>First of all, don't pass in CARRAY_FLAGS when the data argument to >>PyArray_New is NULL. A non-zero flags entry tells the subroutine to >> create a FORTRAN strides array if no data is passed. >> >>Remember: DATA flags are only to describe already available memory. >> If you create the memory in PyArray_New, then the only thing to >>decide is FORTRAN or C- contiguous. So, in this routine, you are >>creating a Fortran array.
Perhaps this is causing problems later. >> >> > >I understand this - I first tried with PyArray_SimpleNew(), then with >PyArray_New() with zero flags and finally tried to play with the flags >to no avail. > > > >>Also, you are not showing us the rest of the code. Your traceback is >> showing PyArray_ValidType being called which is not shown anywhere... >> >> > >Well, that is what is really strange - I have just traced the execution >with gdb and what is happening is that, instead of PyArray_New(), >PyArray_ValidType() gets called, and so, obviously, it sees >&PyArray_Type as its 'int type', which is -1212938592 in my case -> >segfault. I am clueless why this happens. > > > Have you completely re-compiled since upgrading your scipy_core installation? If you haven't, you are using the wrong function-pointer table (the C-API is actually a function-pointer table). If there are changes to the C-API and you don't recompile, this kind of thing happens. -Travis From oliphant.travis at ieee.org Fri Dec 2 11:21:09 2005 From: oliphant.travis at ieee.org (Travis Oliphant) Date: Fri, 02 Dec 2005 09:21:09 -0700 Subject: [SciPy-dev] problem with linalg.cholesky? In-Reply-To: <438FC087.8050309@gmail.com> References: <438FC087.8050309@gmail.com> Message-ID: <43907475.6000303@ieee.org> Andrew Jaffe wrote: >What's going on here? > >Also, an (unrelated?) point: I note that basic_lite.py exists in both >scipy/linalg and scipy/basic -- two different files with somewhat >different contents, imported in different ways! Which is correct? Should >they both exist? (They both give the same error for cholesky.) Is one a > > >leftover from an earlier installation? > > Yes, I think you figured that out, though, right? >Andrew Jaffe wrote: > > >>Hi All, >> >>One more data point: in addition to my problems on OS X, someone else >>has the same issue on Win XP... So it's not OS-specific, but it does >>occur only for some installations. >> >>Perhaps it depends on the underlying BLAS/LAPACK library? >> >> Hmm.. The more I look into this the less I'm inclined to blame the BLAS/LAPACK library. This looks like a problem with flags but I'm not sure the source. I can't reproduce the problem on Windows or Linux (however I'm using a more recent version of scipy). It's possible that this problem is a bug in older versions of scipy. I would recommend upgrading from 0.6.1. As soon as the records module has been fleshed out a bit, there will be another release. In the mean time, checking out from SVN is not difficult. Ask if you need specific help. Best regards, -Travis From strawman at astraw.com Fri Dec 2 11:21:54 2005 From: strawman at astraw.com (Andrew Straw) Date: Fri, 02 Dec 2005 08:21:54 -0800 Subject: [SciPy-dev] [Numpy-discussion] Records in scipy core In-Reply-To: References: Message-ID: <439074A2.2090508@astraw.com> Perry Greenfield wrote: >Travis Oliphant wrote: > > >>Question: How important is backwards compatibility with the old numarray >>specification? In particular, I would go with the .fields access >>described by Perry, and eliminate the .field() approach? >> >> >> >For us, probably not critical since we have to do some rewriting anyway. >(But it would be nice to retain for a while as deprecated). > >But what about field names that don't map well to attributes? >I haven't had a chance to reread the past emails but I seem to >recall this was a significant issue. That would imply that .field() >would be needed for those cases anyway.
> > I haven't read the above thread extensively, but the issue of field names that don't map well to attributes is significant. For example, users of pytables often have columns with names that are not valid Python names. So, regardless of what solution is the most obvious, there should at least be a way to get at such field names. (pytables users are used to doing that.) Cheers! Andrew From cimrman3 at ntc.zcu.cz Fri Dec 2 11:41:12 2005 From: cimrman3 at ntc.zcu.cz (Robert Cimrman) Date: Fri, 02 Dec 2005 17:41:12 +0100 Subject: [SciPy-dev] PyArray_New problem In-Reply-To: <43907364.4030501@ieee.org> References: <438EC8BF.9040402@ntc.zcu.cz> <438EE782.9090708@ntc.zcu.cz> <4390128B.1090200@ntc.zcu.cz> <43906D3E.4010109@ieee.org> <43907098.7010305@ntc.zcu.cz> <43907364.4030501@ieee.org> Message-ID: <43907928.8020402@ntc.zcu.cz> Travis Oliphant wrote: > Robert Cimrman wrote: > > >>Travis Oliphant wrote: >> >> >> >>>Robert Cimrman wrote: >>> >>> >>> >>> >>>>Could someone tell me what I am doing wrong? I would like to use a >>>>function like the snippet below to create an array. >>>> >>>>PyArrayObject *helper_newCArrayObject_i32( int32 len, int32 *array >>>>) { intp plen[1]; PyArrayObject *obj = 0; >>>> >>>>plen[0] = len; printf( "11111 %d\n", PyArray_INT32 ); /* obj = >>>>(PyArrayObject *) PyArray_SimpleNew( 1, plen, PyArray_INT32 ); */ >>>> >>>>obj = (PyArrayObject *) PyArray_New( &PyArray_Type, 1, plen, >>>>PyArray_INT32, NULL, NULL, 0, CARRAY_FLAGS, NULL ); .... >>>> >>>> >>>> >>>> >>> >>>First of all, don't pass in CARRAY_FLAGS when the data argument to >>>PyArray_New is NULL. A non-zero flags entry tells the subroutine to >>>create a FORTRAN strides array if no data is passed. >>> >>>Remember: DATA flags are only to describe already available memory. >>>If you create the memory in PyArray_New, then the only thing to >>>decide is FORTRAN or C- contiguous. So, in this routine, you are >>>creating a Fortran array. Perhaps this is causing problems later. >>> >>> >> >>I understand this - I first tried with PyArray_SimpleNew(), then with >>PyArray_New() with zero flags and finally tried to play with the flags >>to no avail. >> >> >> >> >>>Also, you are not showing us the rest of the code. Your traceback is >>>showing PyArray_ValidType being called which is not shown anywhere... >>> >>> >> >>Well, that is what is really strange - I have just traced the execution >>with gdb and what is happening is that, instead of PyArray_New(), >>PyArray_ValidType() gets called, and so, obviously, it sees >>&PyArray_Type as its 'int type', which is -1212938592 in my case -> >>segfault. I am clueless why this happens. >> >> >> > > Have you completely re-compiled since upgrading your scipy_core > installation? If you haven't, you are using the wrong function-pointer > table (the C-API is actually a function-pointer table). If there are > changes to the C-API and you don't recompile, this kind of thing happens. I use the script below for updates. Now I have manually removed 'build' directories in both core and scipy, and removed also /home/share/software/usr/lib/python2.4/site-packages/scipy dir where the installed package resides. Oh wait - I have an old installation in the python2.3 directory... Did not help... But! Well, thanks! Because of my nonstandard installation dir, I have manually copied the include files into a location that is on my include path... and forgot to update them. Removing the dir and making a symbolic link solved the problem!
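(A quick sanity check that might have saved me the hunt -- verifying which installation actually gets imported:

import scipy
print scipy.__file__            # which site-packages tree is live
print scipy.__core_version__    # should match the revision just built

Stale header files on the include path unfortunately do not show up here; those still have to be checked by hand, as above.)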
So sorry for the hassle, but it might have been illuminating for others :) thank you! r. --- rm -r core/build svn co http://svn.scipy.org/svn/scipy_core/trunk core #svn co http://svn.scipy.org/svn/scipy_core/branches/newcore/ cd core python setup.py install --root=/home/share/software cd .. rm -r scipy/build svn co http://svn.scipy.org/svn/scipy/trunk scipy #svn co http://svn.scipy.org/svn/scipy/branches/newscipy/ cd scipy python setup.py install --root=/home/share/software cd .. From oliphant.travis at ieee.org Fri Dec 2 12:13:03 2005 From: oliphant.travis at ieee.org (Travis Oliphant) Date: Fri, 02 Dec 2005 10:13:03 -0700 Subject: [SciPy-dev] [Numpy-discussion] Records in scipy core In-Reply-To: <43907B7A.7000905@sympatico.ca> References: <438F5651.5060104@ee.byu.edu> <438F6F10.3020503@stsci.edu> <438F9EEE.904@ee.byu.edu> <43907B7A.7000905@sympatico.ca> Message-ID: <4390809F.7050004@ieee.org> > I'm not clear as to what the current design objective is and so I'll > try to recap and perhaps expand my pieces in the referenced discussion > to set out the sort of arrangement I would like to see. I have two objectives: 1) Make the core scipy array object flexible enough to support a very good records sub-class. In other words, I wonder if the core scipy array object could be made flexible enough to be used as a decent record array by itself, without adding much difficulty. In the process, I'm really trying to understand how the data-type of an array should be generally considered. An array object that has this generic perspective on data-type is what should go into Python, I believe. 2) Make a (more) useful records subclass of the ndarray object that is perhaps easier for the end-user to use. Involved with this, of course, is making functions that make it easy to create a records sub-class. > > We are moving towards having a multi-dimensional array which can hold > objects of fixed size and type, the smallest being one byte (although > the void would appear to be a collection of no size objects). Most of > the need, and thus the focus, is on numeric objects, ranging in size > from Int8 to Complex64. > > The Record is a fixed size object containing fields. Each field has a > name, an optional title and data of a fixed type (perhaps including > another record instance and maybe arrays of fixed size?). Right, so the record is really another kind of data-type. The concept of the multidimensional array does not need adjustment, but the concept of what constitutes a data-type may need some fixing up. > > In the example below, AddressRecord and PersonRecord would be > sub-classes of Record where the fields are named and, optionally, > field titles given. The names would be consistent with Python naming > whereas the title could be any Python string. I like the notion of titles and names. I think they are both useful. > > The use of attributes raises the possibility that one could have > nested records. For example, suppose one has an address record: Now, I'm in favor of attribute access. But, nested records are possible without attribute access (it's just extra typing). It's the underlying structure that provides the possibility for nested records (and that's what I'm trying to figure out how to support, generally). If I can support this generally in the basic ndarray object by augmenting the notion of data-type as appropriate, then making a subclass that has the nice syntactic sugar is easy. So, there are two issues really.
1) How to think about the data-type of a general ndarray object in order to support nested records in a straightforward way. 2) What should the syntactic sugar of a record array subclass be... I suppose a third is 3) How much of the syntactic sugar should be applied to all ndarrays? -Travis > I see no need to have the attribute 'field' and would like to avoid > the use of strings to identify a record component. This does require > that fields be named as Python identifiers but is this restriction a > killer? For a record array subclass that may be true. But, as was raised by others in the previous thread, there are problems of "name-space" collision with the methods and attributes of the array that would prevent certain names from being used (and any additions to the methods and attributes of the array would cause incompatibilities with some-people's records). But, at this point, I like the readability of the attribute access approach and could support it. -Travis From chanley at stsci.edu Fri Dec 2 13:25:59 2005 From: chanley at stsci.edu (Christopher Hanley) Date: Fri, 02 Dec 2005 13:25:59 -0500 Subject: [SciPy-dev] stupid namespace question Message-ID: <439091B7.1080203@stsci.edu> Hi, In the past, I have always done imports of the form "from scipy.base import records". However, the following import raises an ImportError: "from scipy import records" My (simple) understanding was that everything in the scipy.base namespace was included in the scipy namespace. Am I missing something obvious? Thanks, Chris From golux at comcast.net Fri Dec 2 14:00:35 2005 From: golux at comcast.net (Stephen Waterbury) Date: Fri, 02 Dec 2005 14:00:35 -0500 Subject: [SciPy-dev] [Numpy-discussion] Records in scipy core In-Reply-To: <4390942F.6090302@ieee.org> References: <4390942F.6090302@ieee.org> Message-ID: <439099D3.2050803@comcast.net> Travis Oliphant wrote: > So, I've been re-thinking the notion of "registering a data-type". It > seems to me that while it's O.K. to have a set of pre-defined data > types, the notion of data-type ought to be flexible enough to allow the > user to define one "on-the-fly". > I'm thinking of ways to do this right now. Any suggestions are welcome. I'm doing that in an application I'm developing. My objects have an attribute called '_schema' that is an instance of Zope InterfaceClass. An object (read "record" ;) is assigned a _schema when it is instantiated, and all information about its attributes (a.k.a. "fields") is contained in the _schema's Properties (my 'Property' subtypes the Zope interfaces 'Attribute' type, and has a host of (meta-)attributes like 'domain', 'range', 'id', 'name', etc. -- which could easily be extended to include things like 'title', but I use another mechanism for display characteristics, called 'DisplayMap', which can be used to specify the order in which you want the object's properties to appear in a grid, what you want their "display names" to be, etc. ... which are also customizable by the end-user.) Let me know if this sounds interesting. Cheers, Steve From golux at comcast.net Fri Dec 2 14:45:15 2005 From: golux at comcast.net (Stephen Waterbury) Date: Fri, 02 Dec 2005 14:45:15 -0500 Subject: [SciPy-dev] [Numpy-discussion] Records in scipy core In-Reply-To: <43909C55.4030409@ieee.org> References: <4390942F.6090302@ieee.org> <439099D3.2050803@comcast.net> <43909C55.4030409@ieee.org> Message-ID: <4390A44B.8030204@comcast.net> Travis Oliphant wrote: > Thanks for the example. This is exactly what I'm trying to do.
> The difficulty is that the notion of data type is strongly bound in the > C implementation of scipy core to a type number describing the array > descriptor. In fact, currently the C-member of the array object that > really defines the type is not even a Python object (it's just a > C-structure). SciPy Core does a much better job of abstracting the > notion of type in the code than Numeric did, but it's still got some of > that type_number specific stuff in it. It's part of what makes it hard > to really abstract the idea of data type and still keep the speed of the > old code-base. IMO, this is a perfect example of why the optimizations needed for numeric arrays are rarely appropriate for general "record" (read "adaptable object") types, and why IMO a generic record type is probably almost always a premature optimization -- most domain objects are more properly modeled in "relational" structures than "nested" structures (although mapping the relational structure into a nested structure for *presentation* is often useful, it's generally best implemented as a view). My 2 cents. Steve From oliphant.travis at ieee.org Fri Dec 2 13:38:23 2005 From: oliphant.travis at ieee.org (Travis Oliphant) Date: Fri, 02 Dec 2005 11:38:23 -0700 Subject: [SciPy-dev] stupid namespace question In-Reply-To: <439091B7.1080203@stsci.edu> References: <439091B7.1080203@stsci.edu> Message-ID: <4390949F.3040902@ieee.org> Christopher Hanley wrote: >Hi, > >In the past, I have always done imports of the form "from scipy.base >import records". However, the following import raises an ImportError: >"from scipy import records" > >My (simple) understanding was that everything in the scipy.base >namespace was included in the scipy namespace. Am I missing something >obvious? > > > No, just that I haven't fully "enabled" the records module since I'm still working on it :-) So, for now you have to get it from the full path name. Eventually it should be accessible like that. I'm currently re-thinking what I was doing with the records array. I think some more things need to be done on the fundamental array object in order to support nested records better. -Travis From oliphant.travis at ieee.org Fri Dec 2 13:36:31 2005 From: oliphant.travis at ieee.org (Travis Oliphant) Date: Fri, 02 Dec 2005 11:36:31 -0700 Subject: [SciPy-dev] [Numpy-discussion] Records in scipy core In-Reply-To: References: Message-ID: <4390942F.6090302@ieee.org> Perry Greenfield wrote: > > >For us, probably not critical since we have to do some rewriting anyway. >(But it would be nice to retain for a while as deprecated). > > Easy enough to do by defining an actual record array (however, see below). I've been retaining backwards compatibility in other ways while not documenting it. For example, you can actually now pass in strings like 'Int32' for types. >But what about field names that don't map well to attributes? >I haven't had a chance to reread the past emails but I seem to >recall this was a significant issue. That would imply that .field() >would be needed for those cases anyway. > > What I'm referring to as the solution here is a slight modification to what Perry described. In other words, all arrays have the attribute .fields. You can set this attribute to a dictionary which will automagically give field names to any array (this dictionary has ordered lists of 'names', (optionally) 'titles', and "(data-descr, [offset])" lists which define the mapping). If offset is not given, then the "next-available" offset is assumed.
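To make that concrete, here is roughly the kind of dictionary I have in mind (the field names and the 'formats' key below are made up for illustration -- none of this is settled):

fields = {
    'names':   ['id', 'position', 'weight'],
    'titles':  [None, None, 'Body Weight'],
    # one (data-descr[, offset]) entry per name; offsets here just pack
    # the fields: Int32 = 4 bytes, (Float64, (3,)) = 24 bytes, Float32 = 4
    'formats': [('Int32', 0),
                (('Float64', (3,)), 4),
                ('Float32', 28)],
}
a.fields = fields    # would bind the names to an array a of 32-byte elements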
The data-descr is either 1) a data-type or 2) a tuple of (data-type, shape). The data-type is either a defined data-type or alias, or an object with a .fields attribute that provides the same dictionary and an .itemsize attribute that computes the total size of the data-type. You can get this attribute which returns a special fields object (written in Python initially like the flags attribute) that can look up field names like a dictionary, or with attribute access for names that are either 1) acceptable or 2) have a user-provided "python-name" associated with them. Thus, .fields['home address'] would always work but .fields.hmaddr would only work if the user had previously made the association hmaddr -> 'home address' for the data type of this array. Thus 'home address' would be a title but hmaddr would be the name. The records module would simply provide functions for making record arrays and a record data type. Driving my thinking is the concept that the notion of a record array is really a description of the data type of the array (not the array itself). Thus, all the fields information should really just be part of the data type itself. Now, I don't really want to create and register a new data type every time somebody has a new record layout. So, I've been re-thinking the notion of "registering a data-type". It seems to me that while it's O.K. to have a set of pre-defined data types, the notion of data-type ought to be flexible enough to allow the user to define one "on-the-fly". I'm thinking of ways to do this right now. Any suggestions are welcome. -Travis From oliphant.travis at ieee.org Fri Dec 2 14:11:17 2005 From: oliphant.travis at ieee.org (Travis Oliphant) Date: Fri, 02 Dec 2005 12:11:17 -0700 Subject: [SciPy-dev] [Numpy-discussion] Records in scipy core In-Reply-To: <439099D3.2050803@comcast.net> References: <4390942F.6090302@ieee.org> <439099D3.2050803@comcast.net> Message-ID: <43909C55.4030409@ieee.org> Stephen Waterbury wrote: >Travis Oliphant wrote: > > >>So, I've been re-thinking the notion of "registering a data-type". It >>seems to me that while it's O.K. to have a set of pre-defined data >>types, the notion of data-type ought to be flexible enough to allow the >>user to define one "on-the-fly". >>I'm thinking of ways to do this right now. Any suggestions are welcome. >> >> > >I'm doing that in an application I'm developing. >My objects have an attribute called '_schema' that is an instance >of Zope InterfaceClass. An object (read "record" ;) is assigned a _schema >when it is instantiated, and all information about its attributes (a.k.a. >"fields") is contained in the _schema's Properties (my 'Property' subtypes >the Zope interfaces 'Attribute' type, and has a host of (meta-)attributes >like 'domain', 'range', 'id', 'name', etc. -- which could easily be >extended to include things like 'title', but I use another mechanism >for display characteristics, called 'DisplayMap', which can be used to >specify the order in which you want the object's properties to appear >in a grid, what you want their "display names" to be, etc. ... which are >also customizable by the end-user. > > Thanks for the example. This is exactly what I'm trying to do. The difficulty is that the notion of data type is strongly bound in the C implementation of scipy core to a type number describing the array descriptor. In fact, currently the C-member of the array object that really defines the type is not even a Python object (it's just a C-structure).
SciPy Core does a much better job of abstracting the notion of type in the code than Numeric did, but it's still got some of that type_number specific stuff in it. It's part of what makes it hard to really abstract the idea of data type and still keep the speed of the old code-base. I think I've got some ideas though. I'm working on them right now. -Travis From oliphant.travis at ieee.org Fri Dec 2 16:24:36 2005 From: oliphant.travis at ieee.org (Travis Oliphant) Date: Fri, 02 Dec 2005 14:24:36 -0700 Subject: [SciPy-dev] Warnings about incompatibilities ahead Message-ID: <4390BB94.5010409@ieee.org> This is a warning regarding changes to the C-API of scipy core I'm going to check in (hopefully by the end of the day). 1) PyArray_Typecode structure is going away --- I always hated it. All C-API interfaces that use it will instead take a PyArray_Descr *. If the C-API actually used the funny fortran member of the structure, a new argument called fortran will be placed in the calling sequence. 2) The itemsize member on the array is going back into the PyArray_Descr where it belongs. Note that PyArray_ITEMSIZE(self) will not be affected; it's just that the member itself moves into the descriptor. 3) There will be a new attribute .descr that returns the PyArray_Descr object (it's now a real object). 4) New C-API calls will be added. These changes are being made to facilitate nested records, but I think they are good changes anyway and now is better than later. I'm the one who has to pay the biggest price by fixing all the code in scipy core to get rid of PyArray_Typecode, references to self->itemsize and to allow for dynamic PyArray_Descr * objects. I noticed people are starting to use the C-API, so this is a warning not to 1) use PyArray_Typecode and 2) use direct access to the itemsize member of an array. Thanks, -Travis From a.h.jaffe at gmail.com Fri Dec 2 16:31:09 2005 From: a.h.jaffe at gmail.com (Andrew Jaffe) Date: Fri, 02 Dec 2005 21:31:09 +0000 Subject: [SciPy-dev] problem with linalg.cholesky? In-Reply-To: <43906EBD.5010407@ieee.org> References: <438FC087.8050309@gmail.com> <43906EBD.5010407@ieee.org> Message-ID: Travis Oliphant wrote: > Andrew Jaffe wrote: >> >>OK, drilling down a bit further... The problem lies in the use of >>scipy.transpose() in _castCopyAndTranspose, since transpose() makes a >>contiguous, non-Fortran array into a non-contiguous fortran array (since >>it appears that at present it doesn't copy, just makes a new view). One >>solution is just to actually copy the array. The other function, >>_fastCopyAndTranspose(), seems to work fine in this situation, but I'm >>not sure how and why this differs from _castCopyAndTranspose (in >>particular, it doesn't seem to set the Fortran flag). >> >>[However, the cholesky decomposition only makes sense for a symmetric >>(or Hermitian) array, so, if we're not changing the type, for real >>matrices really all we need to do is the cast and copy; for complex >>matrices we need to transpose (not hermitian conjugate since we really >>want to change the ordering), but this is equivalent to just taking the >>complex conjugate of every element, which must be faster than copying. >>If we are changing the type, I assume the speed difference isn't as >>important.] >> >>Any ideas on what the best change(s) would be? >> > The problem with making changes based on what's been posted so far is > that the code works for other platforms. If this is a problem with OS > X, is this a problem for just your installation or does everybody else > see it?
> > I have not seen enough data to know. One strong possibility is that > the error is in lapack_lite and most people are using the optimized > lapack. We should definitely check that direction. I still do get the problem with the latest version (rev 1550). I thought I was using the optimized lapack -- it's automatic on OS X, isn't it? Does the fact that the problem is occurring during a lapack_lite call imply something else? If there's no problem there, I guess the questions are: does _castCopyAndTranspose result in a non-contiguous array on all platforms? And does lapack_lite.dpotrf() balk at that, and why? Andrew From oliphant.travis at ieee.org Fri Dec 2 16:43:07 2005 From: oliphant.travis at ieee.org (Travis Oliphant) Date: Fri, 02 Dec 2005 14:43:07 -0700 Subject: [SciPy-dev] problem with linalg.cholesky? In-Reply-To: References: <438FC087.8050309@gmail.com> <43906EBD.5010407@ieee.org> Message-ID: <4390BFEB.6010200@ieee.org> Andrew Jaffe wrote: >I still do get the problem with the latest version (rev 1550). I thought >I was using the optimized lapack -- it's automatic on OS X, isn't it? >Does the fact that the problem is occurring during a lapack_lite call >imply something else? > > O.K. That's good to know. >If there's no problem there, I guess the questions are: does >_castCopyAndTranspose result in a non-contiguous array on all platforms? >And does lapack_lite.dpotrf() balk at that, and why? > > No, it doesn't, that's what has me stumped. It should be returning a contiguous array (because of the .astype method). So, figuring out why the .astype() method is not returning a contiguous array is the key to understanding this, I think. If you could determine what the parameters being passed to the .astype() method are and try to reproduce the problem of not getting a contiguous parameter on output, that would be very helpful. Sorry for your trouble. -Travis From jswhit at fastmail.fm Fri Dec 2 16:58:49 2005 From: jswhit at fastmail.fm (Jeff Whitaker) Date: Fri, 02 Dec 2005 14:58:49 -0700 Subject: [SciPy-dev] problem with linalg.cholesky? In-Reply-To: References: Message-ID: <4390C399.8050307@fastmail.fm> Andrew Jaffe wrote: >hi all, > >(apologies that a similar question has appeared elsewhere...) > >In the newest incarnation of scipy_core, I am having trouble with the >cholesky(a) routine. Here is some minimal code reproducing the bug >(on OS X) > >------------------------------------------------------------ >from scipy import identity, __core_version__, Float64 >import scipy.linalg as la >print 'Scipy version: ', __core_version__ >i = identity(4, Float64) >print 'identity matrix:' >print i > >print 'about to get cholesky decomposition' >c = la.cholesky(i) >print c >------------------------------------------------------------ > >which gives > >------------------------------------------------------------ >Scipy version: 0.6.1 >identity matrix: >[[ 1. 0. 0. 0.] > [ 0. 1. 0. 0.] > [ 0. 0. 1. 0.] > [ 0. 0. 0. 1.]] >about to get cholesky decomposition >Traceback (most recent call last): > File "/Users/jaffe/Desktop/bad_cholesky.py", line 13, in ?
> c = la.cholesky(i) > File "/Library/Frameworks/Python.framework/Versions/2.4/lib/ >python2.4/site-packages/scipy/linalg/basic_lite.py", line 117, in >cholesky_decomposition > results = lapack_routine('L', n, a, m, 0) >lapack_lite.LapackError: Parameter a is not contiguous in >lapack_lite.dpotrf >------------------------------------------------------------ > >(The cholesky decomposition in this case should just be the matrix >itself; the same error occurs with a complex matrix.) > > >Any ideas? Could this have anything to do with _castCopyAndTranspose >in basic_lite.py? (Since there are few other candidates for anything >that actually changes the matrix.) > >Thanks in advance, > >A > > > Another data point - on OS X 10.3.9 with python 2.4.2, scipy_core 0.6.1 and scipy 0.4.3 I get the expected result (no error). -Jeff -- Jeffrey S. Whitaker Phone : (303)497-6313 Meteorologist FAX : (303)497-6449 NOAA/OAR/PSD R/PSD1 Email : Jeffrey.S.Whitaker at noaa.gov 325 Broadway Office : Skaggs Research Cntr 1D-124 Boulder, CO, USA 80303-3328 Web : http://tinyurl.com/5telg From a.h.jaffe at gmail.com Fri Dec 2 16:56:39 2005 From: a.h.jaffe at gmail.com (Andrew Jaffe) Date: Fri, 02 Dec 2005 21:56:39 +0000 Subject: [SciPy-dev] problem with linalg.cholesky? In-Reply-To: <4390BFEB.6010200@ieee.org> References: <438FC087.8050309@gmail.com> <43906EBD.5010407@ieee.org> <4390BFEB.6010200@ieee.org> Message-ID: Travis Oliphant wrote: > Andrew Jaffe wrote: > >>If there's no problem there, I guess the questions are: does >>_castCopyAndTranspose result in a non-contiguous array on all platforms? >>And does lapack_lite.dpotrf() balk at that, and why? >> >> > > No, it doesn't, that's what has me stumped. It should be returning a > contiguous array (because of the .astype method). > > So, figuring out why the .astype() method is not returning a contiguous > array is the key to understanding this, I think.
>> >>If you could determine what the parameters being passed to the .astype() >>method are and try to reproduce the problem of not getting a contiguous >>parameter on output, that would be very helpful. >> >>Sorry for your trouble. >> OK, on my machine, the astype() method only gives a non-contiguous array when the old and new types are different: In [33]: a = identity(4, Float64) In [34]: print transpose(a).astype(Float64).flags {'WRITEABLE': True, 'UPDATEIFCOPY': False, 'NOTSWAPPED': True, 'CONTIGUOUS': False, 'FORTRAN': True, 'ALIGNED': True, 'OWNDATA': True} In [35]: print transpose(a).astype(Float32).flags {'WRITEABLE': True, 'UPDATEIFCOPY': False, 'NOTSWAPPED': True, 'CONTIGUOUS': True, 'FORTRAN': False, 'ALIGNED': True, 'OWNDATA': True} And so, if I change my original code to start with a float32 matrix, the cholesky works fine, since the astype converts it to float64 internally. A From oliphant.travis at ieee.org Fri Dec 2 17:21:03 2005 From: oliphant.travis at ieee.org (Travis Oliphant) Date: Fri, 02 Dec 2005 15:21:03 -0700 Subject: [SciPy-dev] problem with linalg.cholesky? In-Reply-To: References: <438FC087.8050309@gmail.com> <43906EBD.5010407@ieee.org> <4390BFEB.6010200@ieee.org> Message-ID: <4390C8CF.70805@ieee.org> Andrew Jaffe wrote: >OK, on my machine, the astype() method only gives a non-contiguous array >when the old and new types are different: > >In [33]: a = identity(4, Float64) > >In [34]: print transpose(a).astype(Float64).flags >{'WRITEABLE': True, 'UPDATEIFCOPY': False, 'NOTSWAPPED': True, >'CONTIGUOUS': False, 'FORTRAN': True, 'ALIGNED': True, 'OWNDATA': True} > > O.K., this gives me some place to look. I thought this produced a copy. Let me check into it... Found the problem. I can also now reproduce it (I wasn't using a floating-point identity matrix before...my bad). It does produce a copy, but it is in "Fortran order" because of an inadvertent change to the code. Obviously, the way astype is used, it should always return a C-contiguous result. Unfortunately, I can't check in the change because I'm in the middle of major edits. However, the code to change is in src/arraymethods.c in the array_cast function. Change PyArray_NewCopy(self, -1) to PyArray_NewCopy(self, 0). Best, -Travis From oliphant.travis at ieee.org Fri Dec 2 18:22:13 2005 From: oliphant.travis at ieee.org (Travis Oliphant) Date: Fri, 02 Dec 2005 16:22:13 -0700 Subject: [SciPy-dev] problem with linalg.cholesky? In-Reply-To: References: <438FC087.8050309@gmail.com> <43906EBD.5010407@ieee.org> <4390BFEB.6010200@ieee.org> Message-ID: <4390D725.5000708@ieee.org> Andrew Jaffe wrote: >Travis Oliphant wrote: > > >>Andrew Jaffe wrote: >> >> >> Update. Using the magic of SVN branching..., I moved my extensive changes off to a branch and have fixed the bug you found. The new version is in SVN. -Travis From a.h.jaffe at gmail.com Fri Dec 2 18:23:58 2005 From: a.h.jaffe at gmail.com (Andrew Jaffe) Date: Fri, 02 Dec 2005 23:23:58 +0000 Subject: [SciPy-dev] Random number (mtrand) problems Message-ID: Hi All, I've been having with random number generation in the latest iterations of scipy_core: In [1]: from scipy.lib.mtrand import normal Importing test to scipy Importing base to scipy Importing basic to scipy In [2]: n = normal(0,1) In [3]: print n 0.421867235737 In [4]: n1 = normal(0,1,1) Bus error The problem occurs for any size !=None and for 'uniform', too. But it's more complicated than that: usually, I get bus errors.
Sometimes it's slightly different: >>> n1 = normal(0,1,1) >>> print n1 ValueError: Invalid type for array >>> n1 array([ 2.90522444e-310]) (the ValueError seems to be happening in array2string) The difference seems to be related somewhat to whether the name (n1 above) had already been bound to an object, but I can't figure out the general pattern. Hmmmmm. A From oliphant.travis at ieee.org Fri Dec 2 20:54:00 2005 From: oliphant.travis at ieee.org (Travis Oliphant) Date: Fri, 02 Dec 2005 18:54:00 -0700 Subject: [SciPy-dev] [SciPy-user] segfault in scipy.random.standard_normal In-Reply-To: <4390F6CC.8080107@gmail.com> References: <723eb6930512021717p31685f9cuc6a09777122f5df3@mail.gmail.com> <4390F6CC.8080107@gmail.com> Message-ID: <4390FAB8.1040606@ieee.org> Robert Kern wrote: >Chris Fonnesbeck wrote: > > >>Something is squirrely with the size argument in standard_normal: >> >>In [5]: random.standard_normal() >>Out[5]: 0.31727273308342824 >> >>In [6]: random.standard_normal(2) >>Segmentation fault >> >>I'm using a recent svn build from scipy_core. >> >> I've been mucking with the C-API lately and whenever that happens strange errors can occur unless you rebuild everything. The reason for the C-API changes and the big ones coming has been getting the numarray records module working better than I had previously envisioned. These changes will be for the better and should be the last ones for a while. I try to change the version number whenever I alter the C-API, so new version number means new C-API. After the records module is ported (and improved IMHO), the C-API changes will settle down. Then, the only changes I can foresee in the future are additions to ease the porting of numarray extensions like nd_image and PyTables. I'd love it if somebody started thinking about a good way to ease the numarray transition. -Travis From schofield at ftw.at Sat Dec 3 06:47:18 2005 From: schofield at ftw.at (Ed Schofield) Date: Sat, 3 Dec 2005 11:47:18 +0000 Subject: [SciPy-dev] Random number (mtrand) problems In-Reply-To: References: Message-ID: <97421B07-EE9E-4303-B156-31DE8835BBF3@ftw.at> > In [4]: n1 = normal(0,1,1) > Bus error Hmmm ...
I've been getting some bus errors too recently, on both PPC and x86. I got them when trying "import scipy", but they seemed to clear up in both instances when I removed the scipy/ directory and old crud in build/ and reinstalled. Would this fix your problem? -- Ed From a.h.jaffe at gmail.com Sat Dec 3 08:06:57 2005 From: a.h.jaffe at gmail.com (Andrew Jaffe) Date: Sat, 03 Dec 2005 13:06:57 +0000 Subject: [SciPy-dev] Random number (mtrand) problems In-Reply-To: <97421B07-EE9E-4303-B156-31DE8835BBF3@ftw.at> References: <97421B07-EE9E-4303-B156-31DE8835BBF3@ftw.at> Message-ID: Ed Schofield wrote: >>In [4]: n1 = normal(0,1,1) >>Bus error > > > Hmmm ... I've been getting some bus errors too recently, on both PPC > and x86. I got them when trying "import scipy", but they seemed to > clear up in both instances when I removed the scipy/ directory and > old crud in build/ and reinstalled. Would this fix your problem? > > -- Ed Yep, rebuilding fixes the problem! Thanks, A From a.h.jaffe at gmail.com Sat Dec 3 08:08:27 2005 From: a.h.jaffe at gmail.com (Andrew Jaffe) Date: Sat, 03 Dec 2005 13:08:27 +0000 Subject: [SciPy-dev] [SciPy-user] segfault in scipy.random.standard_normal In-Reply-To: <4390FAB8.1040606@ieee.org> References: <723eb6930512021717p31685f9cuc6a09777122f5df3@mail.gmail.com> <4390F6CC.8080107@gmail.com> <4390FAB8.1040606@ieee.org> Message-ID: Travis Oliphant wrote: >>>Something is squirrely with the size argument in standard_normal: >>> >>>In [5]: random.standard_normal() >>>Out[5]: 0.31727273308342824 >>> >>>In [6]: random.standard_normal(2) >>>Segmentation fault >>> >>>I'm using a recent svn build from scipy_core. >>> >>> > > > I've been mucking with the C-API lately and whenever that happens > strange errors can occur unless you rebuild everything. Aha! That fixed my normal() problem, too. Perhaps you can 'touch' the appropriate files when you do a new checkin so that the correct files get rebuilt when we 'svn update' and then rebuild? Or should we always assume that's needed? A From jorgen.stenarson at bostream.nu Sat Dec 3 09:05:45 2005 From: jorgen.stenarson at bostream.nu (=?ISO-8859-1?Q?J=F6rgen_Stenarson?=) Date: Sat, 03 Dec 2005 15:05:45 +0100 Subject: [SciPy-dev] Bug or inconsistent behaviour Message-ID: <4391A639.4060205@bostream.nu> Hi, I stumbled on inconsistent behaviour, perhaps it's a bug, in the array creation function. When calling array with a list of integers and a character I get very different results in scipy, numarray and Numeric. numarray throws TypeError exceptions no matter where the character is in the list. Numeric throws an exception if the character is the first element of the list (because it tries to call len on the other integers), but if it is anywhere else in the list it gets silently converted to an integer using ord if it is a single character, but there is a "TypeError: an integer is required" if it is a string. Scipy always converts such a list to a stringtype array with the integers being converted to strings using str, the size of the stringtype depends on the order of the elements in the list. I'm not sure which behaviour is supposed to be "right". I prefer numarray's as it is most predictable and will throw an error if there is a mix of string and numeric data in a list I try to convert to an array. best regards, Jörgen Stenarson >>> import numarray >>> numarray.__version__ '1.4.1' >>> numarray.array([1,2,3]) array([1, 2, 3]) >>> numarray.array([1,2,3,"a"]) . . . TypeError: Expecting a python numeric type, got something else.
>>> numarray.array(["a",1,2,3]) . . . TypeError: Expecting a python numeric type, got something else. >>> numarray.array([1,"ac",2,3]) . . . TypeError: Expecting a python numeric type, got something else. >>> import Numeric >>> Numeric.__version__ '24.2' >>> Numeric.array([1,2,3,"a"]) array([ 1, 2, 3, 97]) >>> Numeric.array([1,2,3,"va"]) . . . TypeError: an integer is required >>> Numeric.array(["a",1,2,3]) . . . TypeError: len() of unsized object >>> import scipy >>> scipy.__core_version__ '0.7.0.1495' >>> scipy.array([1,2,3,"a"]) array([1, 2, 3, a], dtype=string4) >>> scipy.array(["a",1,2,3]) array([a, 1, 2, 3], dtype=string1) From oliphant.travis at ieee.org Sat Dec 3 15:25:01 2005 From: oliphant.travis at ieee.org (Travis Oliphant) Date: Sat, 03 Dec 2005 13:25:01 -0700 Subject: [SciPy-dev] [SciPy-user] some benchmark data for numarray, Numeric and scipy-newcore In-Reply-To: <20051203131412.74250ba7.gerard.vermeulen@grenoble.cnrs.fr> References: <20051203131412.74250ba7.gerard.vermeulen@grenoble.cnrs.fr> Message-ID: <4391FF1D.9070500@ieee.org> Gerard Vermeulen wrote: >I have benchmarked some array manipulations using the default >Numeric and numarray on my Mandrake-10.2 and a recent snapshot >of the new scipy core. > > Thanks for the benchmarks. There definitely may be additional code optimizations we can do. I would like to figure out why those specific lines you mention are taking longer --- it may be some simple thing. The other thing to keep in mind is the buffer size. SciPy core has a user-settable buffer size that determines when "mis-behaved" arrays (or type-cast-needed arrays) are copied and when they aren't. Playing with that size can make a difference on a per-platform basis. I just picked a default size that seemed reasonable. There are definitely some other optimizations that could be applied. Benchmarks like these can help us find them. If there is really something inherently slow in the ufunc algorithm then that needs to be fixed. -Travis From oliphant.travis at ieee.org Sat Dec 3 17:57:06 2005 From: oliphant.travis at ieee.org (Travis Oliphant) Date: Sat, 03 Dec 2005 15:57:06 -0700 Subject: [SciPy-dev] [SciPy-user] some benchmark data for numarray, Numeric and scipy-newcore In-Reply-To: <723eb6930512031436l74ec2fbfrc92cc4dc70bce5dd@mail.gmail.com> References: <20051203131412.74250ba7.gerard.vermeulen@grenoble.cnrs.fr> <723eb6930512031436l74ec2fbfrc92cc4dc70bce5dd@mail.gmail.com> Message-ID: <439222C2.90902@ieee.org> Chris Fonnesbeck wrote: >On 12/3/05, Gerard Vermeulen wrote: > > >>Conclusion: >>- the overall performance of numarray is 23 % better than scipy-newcore and >> 27 % better than Numeric. >>- numarray is consistently faster than the other packages. >>- scipy newcore is on average somewhat faster than Numeric3, but some operations >> are really slow in comparison with the other packages. In particular the >> statements labeled 2, 3, 4, 6 and 7 take 2 times more time using scipy-newcore >> than using Numeric. >> >> >> > >These results seem a little shocking to me. Has numarray made recent >strides? As recent as a month ago, numarray was a dog, running orders >of magnitude slower for almost everything, unless arrays were *huge*. >What is up? > > > I think that numarray is still faster for very large arrays in certain circumstances (I have test cases that show that scipy core is faster in basic operations). For small arrays, numarray is still slow. Based on what I've seen of your code, you use a lot of small arrays.
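The per-call overhead is easy to see with a quick comparison along these lines (illustrative only -- absolute numbers vary by machine and build):

import timeit
# for tiny arrays the fixed per-call overhead dominates;
# for big arrays the element loop dominates
small = timeit.Timer('a + a', 'from scipy import arange; a = arange(10)')
big = timeit.Timer('a + a', 'from scipy import arange; a = arange(100000)')
print small.timeit(10000) / 10000   # roughly constant cost per call
print big.timeit(100) / 100         # cost scales with the number of elements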
Scipy core is still slower than Numeric on very small arrays too because of the increased ufunc overhead accompanying the added features. I'm hoping that some of this slowness will be alleviated once we give array scalars their own faster math (right now, array scalars go through the entire ufunc machinery for math --- definitely slower than what is possible). I would like to figure out if there are things Numarray is doing for ufuncs on large arrays that scipy core is not doing. I would also like to figure out how numarray does such a quick arange (if the number shown in the benchmark is accurate). Benchmarks can be useful (because I'd like to get rid of unnecessary speed bumps in scipy which may still exist) but given the complexity of these code bases and the many paths through them, they can rarely be used to conclude that "X" is (always) faster than "Y". Best regards, -Travis From stephen.walton at csun.edu Sat Dec 3 21:08:58 2005 From: stephen.walton at csun.edu (Stephen Walton) Date: Sat, 03 Dec 2005 18:08:58 -0800 Subject: [SciPy-dev] problem with linalg.cholesky? In-Reply-To: <438CE004.6000806@ee.byu.edu> References: <438CE004.6000806@ee.byu.edu> Message-ID: <43924FBA.8090402@csun.edu> Travis Oliphant wrote: >Did they build their own version, or use the provided binary (which >links to the version of ATLAS that I have)? I strongly suspect the >BLAS/LAPACK library, but I could be wrong. > Just as one datum, I'm seeing four test failures, including test_cholesky, on a Fedora Core 4 system with Absoft Fortran 9.0 on which I built ATLAS 3.7.11 myself. Another Fedora Core 4 system with the distributed FC4 Extras atlas 3.6.0 binaries runs all tests without failures. Both systems have core version 1557, scipy version 1471. (Incidentally, gcc 4.0.2 appears not to build ATLAS 3.7.11.)
====================================================================== ERROR: check_random (scipy.linalg.decomp.test_decomp.test_cholesky) ---------------------------------------------------------------------- Traceback (most recent call last): File "/usr/lib/python2.4/site-packages/scipy/linalg/tests/test_decomp.py", line 283, in check_random c = cholesky(a) File "/usr/lib/python2.4/site-packages/scipy/linalg/decomp.py", line 334, in cholesky if info>0: raise LinAlgError, "matrix not positive definite" LinAlgError: matrix not positive definite ====================================================================== FAIL: check_double_integral (scipy.integrate.quadpack.test_quadpack.test_quad) ---------------------------------------------------------------------- Traceback (most recent call last): File "/usr/lib/python2.4/site-packages/scipy/integrate/tests/test_quadpack.py", line 95, in check_double_integral 5/6.0 * (b**3.0-a**3.0)) File "/usr/lib/python2.4/site-packages/scipy/integrate/tests/test_quadpack.py", line 11, in assert_quad assert err < errTol, (err, errTol) AssertionError: (23403637477.367432, 1.4999999999999999e-08) ====================================================================== FAIL: check_triple_integral (scipy.integrate.quadpack.test_quadpack.test_quad) ---------------------------------------------------------------------- Traceback (most recent call last): File "/usr/lib/python2.4/site-packages/scipy/integrate/tests/test_quadpack.py", line 106, in check_triple_integral 8/3.0 * (b**4.0 - a**4.0)) File "/usr/lib/python2.4/site-packages/scipy/integrate/tests/test_quadpack.py", line 9, in assert_quad assert abs(value-tabledValue) < err, (value, tabledValue, err) AssertionError: (4.2487097720268533e-21, 40.0, 1.5870895476678149e-21) ====================================================================== FAIL: check_sygv (scipy.lib.lapack.test_lapack.test_flapack_float) ---------------------------------------------------------------------- Traceback (most recent call last): File "/usr/lib/python2.4/site-packages/scipy/lib/lapack/tests/gesv_tests.py", line 15, in check_sygv assert_array_almost_equal(dot(a,v[:,i]),w[i]*dot(b,v[:,i]),self.decimal) File "/usr/lib/python2.4/site-packages/scipy/test/testing.py", line 758, in assert_array_almost_equal assert cond,\ AssertionError: Arrays are not almost equal (mismatch 100.0%): Array 1: [ nan nan nan] Array 2: [ nan nan nan] From charlesr.harris at gmail.com Sat Dec 3 23:43:59 2005 From: charlesr.harris at gmail.com (Charles R Harris) Date: Sat, 3 Dec 2005 21:43:59 -0700 Subject: [SciPy-dev] [SciPy-user] segfault in scipy.random.standard_normal In-Reply-To: <4390FAB8.1040606@ieee.org> References: <723eb6930512021717p31685f9cuc6a09777122f5df3@mail.gmail.com> <4390F6CC.8080107@gmail.com> <4390FAB8.1040606@ieee.org> Message-ID: > After the records module is ported (and improved IMHO), the C-API > changes will settle down. Then, the only changes I can foresee in the > future are additions to ease the porting of numarray extensions like > nd_image and PyTables. > > I'd love it if somebody started thinking about a good way to ease the > numarray transition. I don't have any great ideas in that regard, but I would like to make sure that the sorts I added to numarray come over. There was a faster quicksort, heapsort (guaranteed), and merge sort (stable). I use the merge sort myself for several things, so that is probably the one I am most interested in. > -Travis > > > -------------- next part -------------- An HTML attachment was scrubbed...
URL: From charlesr.harris at gmail.com Sat Dec 3 23:51:25 2005 From: charlesr.harris at gmail.com (Charles R Harris) Date: Sat, 3 Dec 2005 21:51:25 -0700 Subject: [SciPy-dev] [SciPy-user] some benchmark data for numarray, Numeric and scipy-newcore In-Reply-To: <439222C2.90902@ieee.org> References: <20051203131412.74250ba7.gerard.vermeulen@grenoble.cnrs.fr> <723eb6930512031436l74ec2fbfrc92cc4dc70bce5dd@mail.gmail.com> <439222C2.90902@ieee.org> Message-ID: On 12/3/05, Travis Oliphant wrote: > > Chris Fonnesbeck wrote: > > >On 12/3/05, Gerard Vermeulen wrote: > > > > > >>Conclusion: > >>- the overall performance of numarray is 23 % better than scipy-newcore > and > >> 27 % better than Numeric. > >>- numarray is consistently faster than the other packages. > >>- scipy newcore is on average somewhat faster than Numeric3, but some > operations > >> are really slow in comparison with the other packages. In particular the > >> statements labeled 2, 3, 4, 6 and 7 take 2 times more time using > scipy-newcore > >> than using Numeric. > >> > >> > >> > > > >These results seem a little shocking to me. Has numarray made recent > >strides? As recent as a month ago, numarray was a dog, running orders > >of magnitude slower for almost everything, unless arrays were *huge*. > >What is up? Can we see the benchmark numbers? Chuck -------------- next part -------------- An HTML attachment was scrubbed... URL: From strawman at astraw.com Sun Dec 4 05:21:07 2005 From: strawman at astraw.com (Andrew Straw) Date: Sun, 04 Dec 2005 02:21:07 -0800 Subject: [SciPy-dev] bug(?) in scipy.sparse Message-ID: <4392C313.2040109@astraw.com> I've written a test which fails with current scipy. Although I'm no sparse matrix expert, I think this is a valid bug. Please feel free to include this test in test_sparse.py under the scipy license. import scipy.base from scipy.base import zeros, allclose from scipy.sparse import dok_matrix # build sparse matrix from dictionary of keys a=dok_matrix() shape=5,7 # set shape (by tickling dok_matrix's innards) a[shape[0]-1,shape[1]-1]=1.0 a[shape[0]-1,shape[1]-1]=0.0 # set a few elements, but none in the last column a[2,1]=1 a[0,2]=2 a[3,1]=3 a[1,5]=4 a[4,3]=5 a[4,2]=6 # assert that the last column is all zeros assert allclose( a.todense()[:,6], zeros((shape[0],) ) ) # make sure it still works for CSC format csc=a.tocsc() assert allclose( csc.todense()[:,6], zeros((shape[0],) ) ) # fails From schofield at ftw.at Sun Dec 4 11:47:05 2005 From: schofield at ftw.at (Ed Schofield) Date: Sun, 4 Dec 2005 16:47:05 +0000 Subject: [SciPy-dev] bug(?) in scipy.sparse In-Reply-To: <4392C313.2040109@astraw.com> References: <4392C313.2040109@astraw.com> Message-ID: <41AE65B7-C7EB-4BB7-A37A-4F3FB01A0A7C@ftw.at> On 04/12/2005, at 10:21 AM, Andrew Straw wrote: > I've written a test which fails with current scipy. Although I'm no > sparse matrix expert, I think this is a valid bug. Please feel free to > include this test in test_sparse.py under the scipy license. Yes, you're right ... there's something very fishy about the CSC matrix conversion. It seems that it doesn't handle columns of zeros correctly. I'll look into it further and get back to you. Thanks for the catch, and the test!
-- Ed From oliphant.travis at ieee.org Sun Dec 4 17:57:05 2005 From: oliphant.travis at ieee.org (Travis Oliphant) Date: Sun, 04 Dec 2005 15:57:05 -0700 Subject: [SciPy-dev] [SciPy-user] some benchmark data for numarray, Numeric and scipy-newcore In-Reply-To: <20051204232524.654f752a.gerard.vermeulen@grenoble.cnrs.fr> References: <20051203131412.74250ba7.gerard.vermeulen@grenoble.cnrs.fr> <43925D8E.5040304@ieee.org> <20051204232524.654f752a.gerard.vermeulen@grenoble.cnrs.fr> Message-ID: <43937441.4040303@ieee.org> Gerard Vermeulen wrote: >I took a look at the difference between arange in numarray and scipy: >in numarray arange is a Python function which dispatches the real work >to a type dependent C function, whereas in scipy arange does >all calculations in C doubles, which are cast to the requested type. > >This may explain why numarray's arange is 5 times faster than scipy's >arange on my system (don't ask me why David's results for numarray are >so slow). > > I looked into that last night and saw that one. We could very easily add a "fillarray" function to each data type if that optimization is seen as useful. I think something should definitely be done so that a cast is not done every time. The arange function could be made much faster, for sure. The other issue of vector-vector and vector-scalar operations, I'm less convinced about. Do we really need a whole other class of functions in the ufunc machinery? If so, I'm inclined to include them in the math operations for array-scalars, rather than the ufunc machinery. The major slow-down that does have me wondering whether an algorithm change (or optimization) is necessary is in lines 4 and 7. These are mixed-type operations which I think are exercising the BUFFER_LOOP section of the general ufunc code. As the array sizes are larger than the buffer size (default is 80000 bytes and could be changed), no copy is made. In Numeric, a copy-cast is done on the entire array which is the main reason, I think, for its slower performance. In scipy core, currently, the cast is only done on a filled buffer. Right now, there are two things happening which could be optimized: 1) even if an array is not misbehaved it is still copied over into a buffer so that the inner loops are performed on the buffers. Technically, this is not necessary, but otherwise we would have to figure out a different way to signal that the inner loop should be called (right now it's when the buffers are filled). Otherwise it would have to be some combination of filled buffer or the more complicated notion of (single-striding no longer possible for this array). 2) Items are copied over to the buffer one at a time. We should take advantage of contiguous chunks where we can. In short, numarray is doing a better job of handling the memory for the misbehaved cases and we could learn something from that. -Travis From charlesr.harris at gmail.com Sun Dec 4 23:03:19 2005 From: charlesr.harris at gmail.com (Charles R Harris) Date: Sun, 4 Dec 2005 21:03:19 -0700 Subject: [SciPy-dev] [SciPy-user] some benchmark data for numarray, Numeric and scipy-newcore In-Reply-To: <43937441.4040303@ieee.org> References: <20051203131412.74250ba7.gerard.vermeulen@grenoble.cnrs.fr> <43925D8E.5040304@ieee.org> <20051204232524.654f752a.gerard.vermeulen@grenoble.cnrs.fr> <43937441.4040303@ieee.org> Message-ID: > > > In short, numarray is doing a better job of handling the memory for the > misbehaved cases and we could learn something from that. Sounds like cache might be coming into play here.
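The sort of thing I have in mind is a gather loop that works through a cache-sized scratch buffer, roughly like this toy sketch (my own illustration, not taken from the actual ufunc code):

#include <string.h>

#define CHUNK 2048  /* 2048 doubles = 16 KB, small enough to stay in cache */

/* apply inner() to a possibly strided array via a contiguous scratch buffer */
static void chunked_apply(const char *src, long stride, long n,
                          void (*inner)(double *buf, long m))
{
    double buf[CHUNK];
    long done = 0, i, m;
    while (done < n) {
        m = (n - done > CHUNK) ? CHUNK : (n - done);
        if (stride == (long)sizeof(double)) {
            /* contiguous input: one memcpy per chunk, not one copy per item */
            memcpy(buf, src + done * stride, m * sizeof(double));
        } else {
            for (i = 0; i < m; i++)
                buf[i] = *(const double *)(src + (done + i) * stride);
        }
        inner(buf, m);  /* the inner loop always sees a small contiguous buffer */
        done += m;
    }
}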
Using the stack pointer can also help on the Intel architecture: moving data to the stack often seems to give a speedup. As to the cache, I have seen speedups of 2x - 5x just by trying to use chunks small enough ( < 16 KB or so) to fit in cache. Unrolling the innermost loop of the ufunc might also help, by which I don't really mean unrolling, but simply using efficient explicit C code. Of course, I haven't yet looked at how you implemented these things, so I am just tossing out ideas and maybe making a fool of myself. Chuck > -Travis > > _______________________________________________ > Scipy-dev mailing list > Scipy-dev at scipy.net > http://www.scipy.net/mailman/listinfo/scipy-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From arnd.baecker at web.de Mon Dec 5 01:55:12 2005 From: arnd.baecker at web.de (Arnd Baecker) Date: Mon, 5 Dec 2005 07:55:12 +0100 (CET) Subject: [SciPy-dev] [SciPy-user] some benchmark data for numarray, Numeric and scipy-newcore In-Reply-To: <43937441.4040303@ieee.org> References: <20051203131412.74250ba7.gerard.vermeulen@grenoble.cnrs.fr> <43925D8E.5040304@ieee.org> <43937441.4040303@ieee.org> Message-ID: On Sun, 4 Dec 2005, Travis Oliphant wrote: [...] > The other issue of vector-vector and vector-scalar operations, I'm less > convinced about. Do we really need a whole other class of functions in > the ufunc machinery? If so, I'm inclined to include them in the math > operations for array-scalars, rather than the ufunc machinery. It seems that on some machines there are special math libraries available which specifically deal with such operations (haven't tried any of those yet, so take this comment with a bit of care). If there is a way to make use of them from scipy, that *could* give some further speed improvement ... And a (related) question: - installing scipy_core works without ATLAS - full scipy needs ATLAS - if a user installs ATLAS only in the second step, will the dot operation from scipy_core use dotblas, or not? (Also wondering about how eggs will handle such cases ...) Best, Arnd From arnd.baecker at web.de Mon Dec 5 05:21:38 2005 From: arnd.baecker at web.de (Arnd Baecker) Date: Mon, 5 Dec 2005 11:21:38 +0100 (CET) Subject: [SciPy-dev] new core, Itanium/Intel compiler probs Message-ID: Hi, we just got access to a new 64 Bit toy with many CPUs - so scipy has to be installed there ;-). The recommended compiler is the Intel one (icc,ifc). Scipy core builds/installs alright, using python setup.py config --compiler=intel build python setup.py install --prefix=$DESTDIR but a scipy.test(10,10) gives the following failures. Note that during compile *many* warnings pass by (Already compiling python 2.4.2 gives many warnings ..., but that's a different story...) ====================================================================== FAIL: check_nd (scipy.base.index_tricks.test_index_tricks.test_grid) ---------------------------------------------------------------------- Traceback (most recent call last): File "/home/baecker/python/newscipy/lib/python2.4/site-packages/scipy/base/tests/test_index_tricks.py", line 30, in check_nd assert_array_almost_equal(c[0][-1,:],ones(10,'d'),11) File "/home/baecker/python//newscipy/lib/python2.4/site-packages/scipy/test/testing.py", line 758, in assert_array_almost_equal assert cond,\ AssertionError: Arrays are not almost equal (mismatch 100.0%): Array 1: [-0.555555555556 -0.555555555556 -0.555555555556 -0.555555555556 -0.555555555556 -0.555555555556 -0.555555555556 -0.555...
Array 2: [ 1. 1. 1. 1. 1. 1. 1. 1. 1. 1.] ====================================================================== FAIL: check_basic (scipy.base.function_base.test_function_base.test_cumprod) ---------------------------------------------------------------------- Traceback (most recent call last): File "/home/baecker/python/newscipy/lib/python2.4/site-packages/scipy/base/tests/test_function_base.py", line 163, in check_basic 1320, 6600, 26400],ctype)) File "/home/baecker/python//newscipy/lib/python2.4/site-packages/scipy/test/testing.py", line 733, in assert_array_equal assert cond,\ AssertionError: Arrays are not equal (mismatch 57.1428571429%): Array 1: [ 1. 2. 20. 0. 0. 0. 0.] Array 2: [ 1.0000000000000000e+00 2.0000000000000000e+00 2.0000000000000000e+01 2.2000000000000000e+02 1.32000000000000... ====================================================================== FAIL: check_basic (scipy.base.function_base.test_function_base.test_cumsum) ---------------------------------------------------------------------- Traceback (most recent call last): File "/home/baecker/python/newscipy/lib/python2.4/site-packages/scipy/base/tests/test_function_base.py", line 122, in check_basic assert_array_equal(cumsum(a), array([1,3,13,24,30,35,39],ctype)) File "/home/baecker/python//newscipy/lib/python2.4/site-packages/scipy/test/testing.py", line 733, in assert_array_equal assert cond,\ AssertionError: Arrays are not equal (mismatch 57.1428571429%): Array 1: [ 1. 3. 13. 11. 17. 5. 9.] Array 2: [ 1. 3. 13. 24. 30. 35. 39.] ====================================================================== FAIL: Test add, sum, product. ---------------------------------------------------------------------- Traceback (most recent call last): File "/home/baecker/python/newscipy/lib/python2.4/site-packages/scipy/base/tests/test_ma.py", line 163, in check_testAddSumProd self.failUnless (eq(scipy.add.accumulate(x), add.accumulate(x))) AssertionError ---------------------------------------------------------------------- Ran 157 tests in 1.532s FAILED (failures=4) Out[2]: There is more lurking: Trying to find out the version afterwards: In [3]: import scipy In [4]: scipy.__cSegmentation fault (Aha, works reproducibly, also from a fresh shell ... - see the idb backtrace below) Anyway, this is In [2]: scipy.__core_version__ Out[2]: '0.7.4.1561' and I just checked that the very same version works fine on the other 64 Bit machine. What should I do next? I will look into the definitions of the tests which fail. One option might be that I send the full build log with all its warnings... For example, there are build/src/scipy/base/src/arraytypes.inc(5007): warning #1338: arithmetic on pointer to void or function type for(i=0; i<data->dimensions[data->nd-1]; and many more Best, Arnd P.S.: Concerning the segfault with ipython: idb -gdb Linux Application Debugger for Itanium(R)-based applications, Version 9.0-10, Build 20050413 (idb) file /home/baecker/python/bin/ipython Reading symbols from /work/home/baecker/python/bin/python... Warning: bad source correlation found. Further instances ignored. done. Warning: expected to attach to loader, but couldn't: Invalid argument (idb) run Warning: expected to attach to loader, but couldn't: Invalid argument Starting program: /work/home/baecker/python/bin/python Python 2.4.2 (#1, Dec 2 2005, 18:05:36) Type "copyright", "credits" or "license" for more information. IPython 0.6.16.svn -- An enhanced Interactive Python. ? -> Introduction to IPython's features.
%magic -> Information about IPython's 'magic' % functions. help -> Python's own help system. object? -> Details about 'object'. ?object also works, ?? prints more. In [1]: import scipy Importing test to scipy Importing base to scipy Importing basic to scipy In [2]: scipy.__coThread received signal SEGV rl_complete_internal () in /lib/libreadline.so.4.3 (idb) bt #0 0x2000000000cf7c40 in rl_complete_internal () in /lib/libreadline.so.4.3 #1 0x2000000000cf8550 in rl_complete () in /lib/libreadline.so.4.3 #2 0x2000000000ce7950 in _rl_dispatch_subseq () in /lib/libreadline.so.4.3 #3 0x2000000000ce7bc0 in _rl_dispatch () in /lib/libreadline.so.4.3 #4 0x2000000000ce7ec0 in readline_internal_char () in /lib/libreadline.so.4.3 #5 0x2000000000ce8f70 in readline () in /lib/libreadline.so.4.3 #6 0x2000000000cadcb0 in readline_until_enter_or_signal (prompt=0x0, signal=0x0) at readline.c:834 #7 0x2000000000cad8d0 in call_readline (sys_stdin=0x0, sys_stdout=0x0, prompt=0x0) at readline.c:864 #8 0x400000000021dea0 in PyOS_Readline (sys_stdin=0x0, sys_stdout=0x0, prompt=0x0) at Parser/myreadline.c:208 #9 0x4000000000135510 in builtin_raw_input (self=0x0, args=0x0) at Python/bltinmodule.c:1631 #10 0x400000000023bb80 in PyCFunction_Call (func=0x0, arg=0x0, kw=0x0) at Objects/methodobject.c:73 #11 0x40000000001597f0 in call_function (pp_stack=0x0, oparg=0) at Python/ceval.c:3558 #12 0x4000000000149a60 in PyEval_EvalFrame (f=0x8000000000015201) at Python/ceval.c:2163 #13 0x4000000000147b10 in PyEval_EvalCodeEx (co=0x6b6f6f, globals=0x0, locals=0x6000000000049210, args=0x2000000000ac65a8, argcount=2462048, kws=0x4000000000259140, kwcount=0, defs=0xa8c7e0, defcount=1, closure=0x0) at Python/ceval.c:2736 #14 0x400000000015b2b0 in fast_function (func=0x6000000000049210, pp_stack=0x2000000000ac65a8, n=2462048, na=2462016, nk=0) at Python/ceval.c:3651 #15 0x400000000015a260 in call_function (pp_stack=0x6000000000049210, oparg=11298216) at Python/ceval.c:3579 #16 0x4000000000149a60 in PyEval_EvalFrame (f=0x6) at Python/ceval.c:2163 #17 0x4000000000147b10 in PyEval_EvalCodeEx (co=0x4000000000259160, globals=0x4000000000259140, locals=0x6000000000240510, args=0x0, argcount=7249808, kws=0x0, kwcount=0, defs=0x0, defcount=1, closure=0x0) at Python/ceval.c:2736 #18 0x400000000015b2b0 in fast_function (func=0x6000000000240510, pp_stack=0x0, n=7249808, na=0, nk=0) at Python/ceval.c:3651 #19 0x400000000015a260 in call_function (pp_stack=0x6000000000240510, oparg=0) at Python/ceval.c:3579 #20 0x4000000000149a60 in PyEval_EvalFrame (f=0x0) at Python/ceval.c:2163 #21 0x4000000000147b10 in PyEval_EvalCodeEx (co=0x0, globals=0x0, locals=0x0, args=0x2000000000030073, argcount=0, kws=0x200000000067d930, kwcount=0, defs=0x4000000000000001, defcount=1, closure=0x0) at Python/ceval.c:2736 #22 0x400000000015b2b0 in fast_function (func=0x0, pp_stack=0x2000000000030073, n=0, na=6805808, nk=0) at Python/ceval.c:3651 #23 0x400000000015a260 in call_function (pp_stack=0x0, oparg=196723) at Python/ceval.c:3579 #24 0x4000000000149a60 in PyEval_EvalFrame (f=0x1a3301) at Python/ceval.c:2163 #25 0x4000000000147b10 in PyEval_EvalCodeEx (co=0x60000fffffffa5c0, globals=0x0, locals=0x6000000000049210, args=0x6000000000049218, argcount=2462048, kws=0x4000000000259140, kwcount=0, defs=0x60000000000129a0, defcount=2, closure=0x0) at Python/ceval.c:2736 #26 0x400000000015b2b0 in fast_function (func=0x6000000000049210, pp_stack=0x6000000000049218, n=2462048, na=2462016, nk=0) at Python/ceval.c:3651 #27 0x400000000015a260 in call_function 
(pp_stack=0x6000000000049210, oparg=299544) at Python/ceval.c:3579 #28 0x4000000000149a60 in PyEval_EvalFrame (f=0x60000000000472ac) at Python/ceval.c:2163 #29 0x4000000000147b10 in PyEval_EvalCodeEx (co=0x0, globals=0x0, locals=0x60000000000b1240, args=0x6000000000053cd0, argcount=0, kws=0xfff5, kwcount=248200, defs=0x0, defcount=0, closure=0x0) at Python/ceval.c:2736 #30 0x40000000001463a0 in PyEval_EvalCode (co=0x60000000000b1240, globals=0x6000000000053cd0, locals=0xf200000000000000) at Python/ceval.c:484 #31 0x40000000001c6d60 in run_err_node (n=0x60000000000b1240, filename=0x6000000000053cd0 "\360cn", globals=0xf200000000000000, locals=0xfff5, flags=0x200000000003c988) at Python/pythonrun.c:1265 #32 0x40000000001c2010 in PyRun_SimpleFileExFlags (fp=0x10, filename=0x60000fffffffb022 "/home/baecker/python/bin/ipython", closeit=0, flags=0x0) at Python/pythonrun.c:1243 #33 0x40000000001c38d0 in PyRun_AnyFileExFlags (fp=0x10, filename=0x60000fffffffb022 "/home/baecker/python/bin/ipython", closeit=0, flags=0x0) at Python/pythonrun.c:664 #34 0x4000000000013e70 in Py_Main (argc=0, argv=0x0) at Modules/main.c:484 #35 0x4000000000012b70 in main (argc=0, argv=0x0) at Modules/python.c:23 #36 0x200000000044d850 in __libc_start_main () in /lib/tls/libc.so.6.1 #37 0x4000000000012980 in _start () in /work/home/baecker/python/bin/python From pearu at scipy.org Mon Dec 5 04:31:40 2005 From: pearu at scipy.org (Pearu Peterson) Date: Mon, 5 Dec 2005 03:31:40 -0600 (CST) Subject: [SciPy-dev] new core, Itanium/Intel compiler probs In-Reply-To: References: Message-ID: On Mon, 5 Dec 2005, Arnd Baecker wrote: > Hi, > > we just got access to a new 64 Bit toy with many CPUs - so scipy > has to be installed there ;-). > > The recommended compiler is the Intel one (icc,ifc). > Scipy core builds/installs alright, using > python setup.py config --compiler=intel build Could you check that the build was really done with intel compilers? I would have suggested the following command python setup.py config --fcompiler=intel install --prefix=$DESTDIR Also, could you try building scipy_core (and Python if necessary) with gcc? How the tests behave then? Pearu From arnd.baecker at web.de Mon Dec 5 05:56:38 2005 From: arnd.baecker at web.de (Arnd Baecker) Date: Mon, 5 Dec 2005 11:56:38 +0100 (CET) Subject: [SciPy-dev] cephes and icc Message-ID: Hi, concerning new scipy: cephes does not compile with icc: [...] Lib/special/cephes/const.c(92): error: floating-point operation result is out of range double INFINITY = 1.0/0.0; /* 99e999; */ ^ Lib/special/cephes/const.c(97): warning #1418: external definition with no prior declaration double NAN = 1.0/0.0 - 1.0/0.0; ^ Lib/special/cephes/const.c(97): error: floating-point operation result is out of range double NAN = 1.0/0.0 - 1.0/0.0; ^ Lib/special/cephes/const.c(97): error: floating-point operation result is out of range double NAN = 1.0/0.0 - 1.0/0.0; ^ Lib/special/cephes/const.c(102): warning #1418: external definition with no prior declaration double NEGZERO = -0.0; compilation aborted for Lib/special/cephes/const.c (code 2) Is there an easy solution/workaround for this? 
Best, Arnd From arnd.baecker at web.de Mon Dec 5 06:18:31 2005 From: arnd.baecker at web.de (Arnd Baecker) Date: Mon, 5 Dec 2005 12:18:31 +0100 (CET) Subject: [SciPy-dev] cephes and icc In-Reply-To: References: Message-ID: Commenting out the build of special in Lib/setup.py it gets a bit further: build/src/Lib/fftpack/_fftpackmodule.c(22): remark #593: variable "_fftpack_module" was set but never used static PyObject *_fftpack_module; ^ icc: Lib/fftpack/src/drfft.c icc: Command line warning: ignoring option '-O'; no argument required Lib/fftpack/src/drfft.c(61): warning #1419: external declaration in primary source file extern void F_FUNC(dfftf,DFFTF)(int*,double*,double*); ^ Lib/fftpack/src/drfft.c(62): warning #1419: external declaration in primary source file extern void F_FUNC(dfftb,DFFTB)(int*,double*,double*); ^ Lib/fftpack/src/drfft.c(63): warning #1419: external declaration in primary source file extern void F_FUNC(dffti,DFFTI)(int*,double*); ^ Lib/fftpack/src/drfft.c(73): warning #1418: external definition with no prior declaration extern void destroy_drfft_cache(void) { ^ Lib/fftpack/src/drfft.c(87): warning #1418: external definition with no prior declaration extern void drfft(double *inout, ^ icc: Lib/fftpack/src/zfft.c icc: Command line warning: ignoring option '-O'; no argument required Lib/fftpack/src/zfft.c(56): warning #1419: external declaration in primary source file extern void F_FUNC(zfftf,ZFFTF)(int*,double*,double*); ^ Lib/fftpack/src/zfft.c(57): warning #1419: external declaration in primary source file extern void F_FUNC(zfftb,ZFFTB)(int*,double*,double*); ^ Lib/fftpack/src/zfft.c(58): warning #1419: external declaration in primary source file extern void F_FUNC(zffti,ZFFTI)(int*,double*); ^ Lib/fftpack/src/zfft.c(68): warning #1418: external definition with no prior declaration extern void destroy_zfft_cache(void) { ^ Lib/fftpack/src/zfft.c(84): warning #1418: external definition with no prior declaration extern void zfft(complex_double *inout, ^ icc: Lib/fftpack/src/zrfft.c icc: Command line warning: ignoring option '-O'; no argument required Lib/fftpack/src/zrfft.c(9): warning #1419: external declaration in primary source file extern void drfft(double *inout,int n,int direction,int howmany,int normalize); ^ Lib/fftpack/src/zrfft.c(11): warning #1418: external definition with no prior declaration extern void zrfft(complex_double *inout, ^ Traceback (most recent call last): File "setup.py", line 42, in ? 
setup_package() File "setup.py", line 35, in setup_package setup( **config.todict() ) File "/home/baecker/python//newscipy/lib/python2.4/site-packages/scipy/distutils/core.py", line 93, in setup return old_setup(**new_attr) File "/home/baecker/python//lib/python2.4/distutils/core.py", line 149, in setup dist.run_commands() File "/home/baecker/python//lib/python2.4/distutils/dist.py", line 946, in run_commands self.run_command(cmd) File "/home/baecker/python//lib/python2.4/distutils/dist.py", line 966, in run_command cmd_obj.run() File "/home/baecker/python//lib/python2.4/distutils/command/build.py", line 112, in run self.run_command(cmd_name) File "/home/baecker/python//lib/python2.4/distutils/cmd.py", line 333, in run_command self.distribution.run_command(command) File "/home/baecker/python//lib/python2.4/distutils/dist.py", line 966, in run_command cmd_obj.run() File "/home/baecker/python//newscipy/lib/python2.4/site-packages/scipy/distutils/command/build_ext.py", line 107, in run self.build_extensions() File "/home/baecker/python//lib/python2.4/distutils/command/build_ext.py", line 405, in build_extensions self.build_extension(ext) File "/home/baecker/python//newscipy/lib/python2.4/site-packages/scipy/distutils/command/build_ext.py", line 299, in build_extension link = self.fcompiler.link_shared_object AttributeError: 'NoneType' object has no attribute 'link_shared_object' removed Lib/__svn_version__.py removed Lib/__svn_version__.pyc Any advice on this? ((I used: python setup.py config --compiler=intel config_fc --fcompiler=intel build | $ )) Many thanks in advance, Arnd From arnd.baecker at web.de Mon Dec 5 07:34:07 2005 From: arnd.baecker at web.de (Arnd Baecker) Date: Mon, 5 Dec 2005 13:34:07 +0100 (CET) Subject: [SciPy-dev] new core, Itanium/Intel compiler probs In-Reply-To: References: Message-ID: Hi Pearu, On Mon, 5 Dec 2005, Pearu Peterson wrote: > On Mon, 5 Dec 2005, Arnd Baecker wrote: > > > Hi, > > > > we just got access to a new 64 Bit toy with many CPUs - so scipy > > has to be installed there ;-). > > > > The recommended compiler is the Intel one (icc,ifc). > > Scipy core builds/installs alright, using > > python setup.py config --compiler=intel build > > Could you check that the build was really done with intel compilers? Pretty confident - the logfile only contains `icc` calls > I would have suggested the following command > > python setup.py config --fcompiler=intel install --prefix=$DESTDIR To be sure I used that as well. icc is being used: [...] customize IntelFCompiler customize IntelFCompiler using config icc options: '-pthread -fno-strict-aliasing -OPT:Olimit=0 -DNDEBUG -g -O3 -Wall -Wstrict-prototypes -fPIC -fPIC' compile options: '-I/home/baecker/python/include/python2.4 -Iscipy/base/src -I/home/baecker/python/include/python2.4 -c' icc: _configtest.c icc: Command line warning: ignoring option '-O'; no argument required _configtest.c(50): warning #181: argument is incompatible with corresponding format string conversion fprintf(fp,"#define SIZEOF_LONG_DOUBLE %d\n", sizeof(long double)); [...] (Note this is all with `icc -v: Version 9.0`) > Also, could you try building scipy_core (and Python if necessary) with > gcc? How the tests behave then? With gcc version 3.3.3 (SuSE Linux) everything compiles fine (BTW: much less warnings). scipy.test(10,10): Ran 160 tests in 1.219s So this looks fine! I think that the problem is really related to icc. 
Many thanks, Arnd From cimrman3 at ntc.zcu.cz Mon Dec 5 08:55:57 2005 From: cimrman3 at ntc.zcu.cz (Robert Cimrman) Date: Mon, 05 Dec 2005 14:55:57 +0100 Subject: [SciPy-dev] sparse matrix formats Message-ID: <439446ED.6070002@ntc.zcu.cz> Hi all, Is there a python module that can read (sparse) matrices in one of the formats listed at http://math.nist.gov/MatrixMarket/formats.html. I am especially interested in Harwell-Boeing format, which is used at http://www.cise.ufl.edu/research/sparse/matrices/. btw, the C (even in scipy: scipy/Lib/sparse/SuperLU/*readhb.c) or fortran (in UMFPACK package: UMFPACK/UMFPACK/Demo/readhb.f) functions are well available, just the wrappers I cannot find. I am going to swig'em myself because I need it - I posted this question in order to prevent (re)inventing wheels :) r. From arnd.baecker at web.de Mon Dec 5 10:21:22 2005 From: arnd.baecker at web.de (Arnd Baecker) Date: Mon, 5 Dec 2005 16:21:22 +0100 (CET) Subject: [SciPy-dev] new core, Itanium/Intel compiler probs In-Reply-To: References: Message-ID: On Mon, 5 Dec 2005, Arnd Baecker wrote: > There is more lurking: Trying to find out the version afterwards: > In [3]: import scipy > In [4]: scipy.__cSegmentation fault > (Aha, works reproducibly, also from a fresh shell ... - see > the idb backtrace below) Well, this also happens without scipy - so readline or whatever has to be blamed. For the moment we will go without readline support... Best, Arnd From arnd.baecker at web.de Mon Dec 5 10:57:13 2005 From: arnd.baecker at web.de (Arnd Baecker) Date: Mon, 5 Dec 2005 16:57:13 +0100 (CET) Subject: [SciPy-dev] new core, Itanium/Intel compiler probs In-Reply-To: References: Message-ID: More details on some of the failures: > ====================================================================== > FAIL: check_nd (scipy.base.index_tricks.test_index_tricks.test_grid) > ---------------------------------------------------------------------- > Traceback (most recent call last): > File > "/home/baecker/python/newscipy/lib/python2.4/site-packages/scipy/base/tests/test_index_tricks.py", > line 30, in check_nd > assert_array_almost_equal(c[0][-1,:],ones(10,'d'),11) > File > "/home/baecker/python//newscipy/lib/python2.4/site-packages/scipy/test/testing.py", > line 758, in assert_array_almost_equal > assert cond,\ > AssertionError: > Arrays are not almost equal (mismatch 100.0%): > Array 1: [-0.555555555556 -0.555555555556 -0.555555555556 > -0.555555555556 > -0.555555555556 -0.555555555556 -0.555555555556 -0.555... > Array 2: [ 1. 1. 1. 1. 1. 1. 1. 1. 1. 1.] In [1]: from scipy import * In [2]: c = mgrid[-1:1:10j,-2:2:10j] In [3]: c[1][:,-1] Out[3]: array([-0.66666667, -0.22222222, 0.22222222, 0.66666667, 1.11111111, 1.55555556, 2. , 2.44444444, 2.88888889, 3.33333333]) In [4]: 2*ones(10,'d') Out[4]: array([ 2., 2., 2., 2., 2., 2., 2., 2., 2., 2.]) Interestingly, c contains values >2 ?!: In [5]: c Out[5]: array([[[-1. , -1. , -1. , -1. , -1. , -1. , -1. , -1. , -1. , -1. 
], [-0.77777778, -0.77777778, -0.77777778, -0.77777778, -0.77777778, -0.77777778, -0.77777778, -0.77777778, -0.77777778, -0.77777778], [-0.55555556, -0.55555556, -0.55555556, -0.55555556, -0.55555556, -0.55555556, -0.55555556, -0.55555556, -0.55555556, -0.55555556], [-0.33333333, -0.33333333, -0.33333333, -0.33333333, -0.33333333, -0.33333333, -0.33333333, -0.33333333, -0.33333333, -0.33333333], [-0.11111111, -0.11111111, -0.11111111, -0.11111111, -0.11111111, -0.11111111, -0.11111111, -0.11111111, -0.11111111, -0.11111111], [ 0.11111111, 0.11111111, 0.11111111, 0.11111111, 0.11111111, 0.11111111, 0.11111111, 0.11111111, 0.11111111, 0.11111111], [ 0.33333333, 0.33333333, 0.33333333, 0.33333333, 0.33333333, 0.33333333, 0.33333333, 0.33333333, 0.33333333, 0.33333333], [ 0.55555556, 0.55555556, 0.55555556, 0.55555556, 0.55555556, 0.55555556, 0.55555556, 0.55555556, 0.55555556, 0.55555556], [ 0.77777778, 0.77777778, 0.77777778, 0.77777778, 0.77777778, 0.77777778, 0.77777778, 0.77777778, 0.77777778, 0.77777778], [ 1. , 1. , 1. , 1. , 1. , 1. , 1. , 1. , 1. , 1. ]], [[-2. , -1.55555556, -1.11111111, -1.55555556, -1.11111111, -1.55555556, -1.11111111, -1.55555556, -1.11111111, -0.66666667], [-2. , -1.55555556, -1.11111111, -1.11111111, -0.66666667, -1.11111111, -0.66666667, -1.11111111, -0.66666667, -0.22222222], [-2. , -1.55555556, -1.11111111, -0.66666667, -0.22222222, -0.66666667, -0.22222222, -0.66666667, -0.22222222, 0.22222222], [-2. , -1.55555556, -1.11111111, -0.22222222, 0.22222222, -0.22222222, 0.22222222, -0.22222222, 0.22222222, 0.66666667], [-2. , -1.55555556, -1.11111111, 0.22222222, 0.66666667, 0.22222222, 0.66666667, 0.22222222, 0.66666667, 1.11111111], [-2. , -1.55555556, -1.11111111, 0.66666667, 1.11111111, 0.66666667, 1.11111111, 0.66666667, 1.11111111, 1.55555556], [-2. , -1.55555556, -1.11111111, 1.11111111, 1.55555556, 1.11111111, 1.55555556, 1.11111111, 1.55555556, 2. ], [-2. , -1.55555556, -1.11111111, 1.55555556, 2. , 1.55555556, 2. , 1.55555556, 2. , 2.44444444], [-2. , -1.55555556, -1.11111111, 2. , 2.44444444, 2. , 2.44444444, 2. , 2.44444444, 2.88888889], [-2. , -1.55555556, -1.11111111, 2.44444444, 2.88888889, 2.44444444, 2.88888889, 2.44444444, 2.88888889, 3.33333333]]]) So mgrid seems to be working incorrectly here? > ====================================================================== > FAIL: check_basic > (scipy.base.function_base.test_function_base.test_cumprod) > ---------------------------------------------------------------------- > Traceback (most recent call last): > File > "/home/baecker/python/newscipy/lib/python2.4/site-packages/scipy/base/tests/test_function_base.py", > line 163, in check_basic > 1320, 6600, 26400],ctype)) > File > "/home/baecker/python//newscipy/lib/python2.4/site-packages/scipy/test/testing.py", > line 733, in assert_array_equal > assert cond,\ > AssertionError: > Arrays are not equal (mismatch 57.1428571429%): > Array 1: [ 1. 2. 20. 0. 0. 0. 0.] > Array 2: [ 1.0000000000000000e+00 2.0000000000000000e+00 > 2.0000000000000000e+01 > 2.2000000000000000e+02 1.32000000000000... 
In [1]: from scipy import * Importing test to scipy Importing base to scipy Importing basic to scipy In [2]: In [2]: ba = [1,2,10,11,6,5,4] In [3]: for ctype in [float32,float64]: ...: a = array(ba,ctype) ...: print ctype ...: print cumprod(a),array([1, 2, 20, 220,1320, 6600, 26400],ctype) ...: [ 1.00000000e+00 2.00000000e+00 2.00000000e+01 3.58916033e-38 2.15349620e-37 2.40609112e-37 9.62436448e-37] [ 1.00000000e+00 2.00000000e+00 2.00000000e+01 2.20000000e+02 1.32000000e+03 6.60000000e+03 2.64000000e+04] [ 1. 2. 20. 0. 0. 0. 0.] [ 1.00000000e+00 2.00000000e+00 2.00000000e+01 2.20000000e+02 1.32000000e+03 6.60000000e+03 2.64000000e+04] repeated calls give different numbers ... > ====================================================================== > FAIL: check_basic > (scipy.base.function_base.test_function_base.test_cumsum) > ---------------------------------------------------------------------- > Traceback (most recent call last): > File > "/home/baecker/python/newscipy/lib/python2.4/site-packages/scipy/base/tests/test_function_base.py", > line 122, in check_basic > assert_array_equal(cumsum(a), array([1,3,13,24,30,35,39],ctype)) > File > "/home/baecker/python//newscipy/lib/python2.4/site-packages/scipy/test/testing.py", > line 733, in assert_array_equal > assert cond,\ > AssertionError: > Arrays are not equal (mismatch 57.1428571429%): > Array 1: [ 1. 3. 13. 11. 17. 5. 9.] > Array 2: [ 1. 3. 13. 24. 30. 35. 39.] This is a nasty one - I can only trigger it with the following: In [1]: from scipy import * Importing test to scipy Importing base to scipy Importing basic to scipy In [2]: ba = [1,2,10,11,6,5,4] In [3]: for ctype in [uint32,float32,float64]: ...: print ctype ...: a = array(ba,ctype) ...: print cumsum(a), array([1,3,13,24,30,35,39],ctype) ...: [ 1 3 13 24 30 35 39] [ 1 3 13 24 30 35 39] [ 1. 3. 13. 11. 17. 5. 9.] [ 1. 3. 13. 24. 30. 35. 39.] [ 1.00000000e+000 3.00000000e+000 1.30000000e+001 2.68156159e+154 2.68156159e+154 5.00000000e+000 9.00000000e+000] [ 1. 3. 13. 24. 30. 35. 39.] Leaving out either uint32 or float32 does not show the problem ... > ====================================================================== > FAIL: Test add, sum, product. > ---------------------------------------------------------------------- > Traceback (most recent call last): > File > "/home/baecker/python/newscipy/lib/python2.4/site-packages/scipy/base/tests/test_ma.py", > line 163, in check_testAddSumProd > self.failUnless (eq(scipy.add.accumulate(x), add.accumulate(x))) > AssertionError The failing test boils down to: In [1]: import scipy Importing test to scipy Importing base to scipy Importing basic to scipy In [2]: from scipy import pi In [3]: In [3]: x=scipy.array([1.,1.,1.,-2., pi/2.0, 4., 5., -10., 10., 1., 2., 3.]) In [4]: scipy.add.accumulate(x) Out[4]: array([ 1.00000000e+000, 2.00000000e+000, 3.00000000e+000, 2.68156159e+154, 2.68156159e+154, 2.68156159e+154, 2.68156159e+154, 2.68156159e+154, 2.68156159e+154, 2.68156159e+154, 2.68156159e+154, 2.68156159e+154]) In [5]: scipy.base.ma.add.accumulate(x) Out[5]: array([ 1. 2. 3. -2. -0.42920367 4. 9. -10. 0. 1. 3. 6. ]) In [6]: scipy.base.ma.add.accumulate(x) Out[6]: array([ 1. 2. 3. -2. -0.42920367 4. 9. -10. 0. 1. 3. 6. ]) In [7]: scipy.base.ma.add.accumulate(x) Out[7]: array([ 1. 2. 3. -2. -0.42920367 4. 9. -10. 0. 1. 3. 6. ]) In [8]: scipy.base.ma.add.accumulate(x) Out[8]: array([ 1. 2. 3. -2. -0.42920367 4. 9. -10. 0. 1. 3. 6. 
]) In [9]: scipy.base.ma.add.accumulate(x) Out[9]: array([ 1.00000000e+000 2.00000000e+000 3.00000000e+000 7.89614591e+150 7.89614591e+150 4.00000000e+000 9.00000000e+000 8.23900140e+015 8.23900140e+015 1.71130458e+059 1.71130458e+059 1.71130458e+059]) In [10]: scipy.base.ma.add.accumulate(x) Out[10]: array([ 1.00000000e+000 2.00000000e+000 3.00000000e+000 7.89614591e+150 7.89614591e+150 2.08399685e+064 2.08399685e+064 8.23900250e+015 8.23900250e+015 1.71130458e+059 1.71130458e+059 1.71130458e+059]) In [11]: scipy.add.accumulate(x) Out[11]: array([ 1. , 2. , 3. , -2. , -0.42920367, 4. , 9. , -10. , 0. , 1. , 3. , 6. ]) In [12]: scipy.add.accumulate(x) Out[12]: array([ 1. , 2. , 3. , -2. , -0.42920367, 4. , 9. , -10. , 0. , 1. , 3. , 6. ]) In [13]: scipy.add.accumulate(x) Out[13]: array([ 1.00000000e+000, 2.00000000e+000, 3.00000000e+000, 2.68156159e+154, 2.68156159e+154, 4.00000000e+000, 9.00000000e+000, -1.00000000e+001, 0.00000000e+000, 1.00000000e+000, 3.00000000e+000, 6.00000000e+000]) I hope this helps a bit. Best, Arnd From strawman at astraw.com Mon Dec 5 12:01:37 2005 From: strawman at astraw.com (Andrew Straw) Date: Mon, 05 Dec 2005 09:01:37 -0800 Subject: [SciPy-dev] bug(?) in scipy.sparse In-Reply-To: <41AE65B7-C7EB-4BB7-A37A-4F3FB01A0A7C@ftw.at> References: <4392C313.2040109@astraw.com> <41AE65B7-C7EB-4BB7-A37A-4F3FB01A0A7C@ftw.at> Message-ID: <43947271.8020508@astraw.com> This stops my test from failing -- I'm not completely sure it stops the bug because I'm not sure what the col_ptr vector of a CSC matrix expects after the last real data. I assume here it is the number of non-zero elements. Index: Lib/sparse/sparse.py =================================================================== --- Lib/sparse/sparse.py (revision 1471) +++ Lib/sparse/sparse.py (working copy) @@ -1610,8 +1610,8 @@ nzmax = max(nnz, nzmax) data = [0]*nzmax rowind = [0]*nzmax - col_ptr = [0]*(self.shape[1]+1) - current_col = 0 + col_ptr = [nnz]*(self.shape[1]+1) + current_col = -1 k = 0 for key in keys: ikey0 = int(key[0]) @@ -1623,7 +1623,6 @@ data[k] = self[key] rowind[k] = ikey0 k += 1 - col_ptr[-1] = nnz data = array(data) rowind = array(rowind) col_ptr = array(col_pt Ed Schofield wrote: >On 04/12/2005, at 10:21 AM, Andrew Straw wrote: > > > >>I've written a test which fails with current scipy. Although I'm no >>sparse matrix expert, I think this is a valid bug. Please feel free to >>include this test in test_sparse.py under the scipy license. >> >> > >Yes, you're right ... there's something very fishy about the CSC >matrix conversion. It seems that it doesn't handle columns of zeros >correctly. I'll look into it further and get back you. > >Thanks for the catch, and the test! >-- Ed > >_______________________________________________ >Scipy-dev mailing list >Scipy-dev at scipy.net >http://www.scipy.net/mailman/listinfo/scipy-dev > > From strawman at astraw.com Mon Dec 5 12:32:28 2005 From: strawman at astraw.com (Andrew Straw) Date: Mon, 05 Dec 2005 09:32:28 -0800 Subject: [SciPy-dev] bug in scipy_core fromstring() - MemoryError Message-ID: <439479AC.1090401@astraw.com> astraw at hdmg:~$ python Python 2.4.1 (#2, May 6 2005, 11:22:24) [GCC 3.3.6 (Debian 1:3.3.6-2)] on linux2 Type "help", "copyright", "credits" or "license" for more information. >>> import scipy Importing test to scipy Importing base to scipy Importing basic to scipy >>> scipy.fromstring(' ',scipy.UInt8) Traceback (most recent call last): File "", line 1, in ? 
MemoryError >>>

From oliphant.travis at ieee.org Mon Dec 5 12:44:51 2005 From: oliphant.travis at ieee.org (Travis Oliphant) Date: Mon, 05 Dec 2005 10:44:51 -0700 Subject: [SciPy-dev] bug in scipy_core fromstring() - MemoryError In-Reply-To: <439479AC.1090401@astraw.com> References: <439479AC.1090401@astraw.com> Message-ID: <43947C93.4040001@ieee.org>

Andrew Straw wrote:
>astraw at hdmg:~$ python
>Python 2.4.1 (#2, May 6 2005, 11:22:24)
>[GCC 3.3.6 (Debian 1:3.3.6-2)] on linux2
>Type "help", "copyright", "credits" or "license" for more information.
>>>>import scipy
>Importing test to scipy
>Importing base to scipy
>Importing basic to scipy
>>>>scipy.fromstring(' ',scipy.UInt8)
>Traceback (most recent call last):
>  File "", line 1, in ?
>MemoryError

You are on 64-bit right? As coincidence would have it, I just caught this one, while going through the code to finalize the major data-type surgery I'm doing on the fixtype branch.

I'll fix it in the trunk shortly.

-Travis

From strawman at astraw.com Mon Dec 5 12:48:59 2005 From: strawman at astraw.com (Andrew Straw) Date: Mon, 05 Dec 2005 09:48:59 -0800 Subject: [SciPy-dev] bug in scipy_core fromstring() - MemoryError In-Reply-To: <43947C93.4040001@ieee.org> References: <439479AC.1090401@astraw.com> <43947C93.4040001@ieee.org> Message-ID: <43947D8B.6060901@astraw.com>

Travis Oliphant wrote:
>Andrew Straw wrote:
>>MemoryError
>You are on 64-bit right?

Yes.

>As coincidence would have it, I just caught
>this one, while going through the code to finalize the major data-type
>surgery I'm doing on the fixtype branch.
>
>I'll fix it in the trunk shortly.

Great! Thanks again, Andrew

From oliphant.travis at ieee.org Mon Dec 5 13:07:52 2005 From: oliphant.travis at ieee.org (Travis Oliphant) Date: Mon, 05 Dec 2005 11:07:52 -0700 Subject: [SciPy-dev] Big changes to trunk later today Message-ID: <439481F8.9060207@ieee.org>

This is a warning that I will be making the type-handling changes to the trunk later today. There may be some new bugs introduced at that point (I had to go back and think about reference counting for all the type usage since what was once PyArray_Typecode * is now PyArray_Descr * and a regular Python object) --- changes of this size can always introduce new bugs. If you have any changes to commit to the trunk, please do so within the hour (or email me to hold off the merge).

I'm very enthused about the change. Besides the new feature of being able to describe any kind of array data type (e.g. nested records of arrays of records...), several awkward things were simplified in the code (and several C-API calls became macros). This seems to indicate to me that this is the right move.

I've also updated the PEP stored at http://svn.scipy.org/svn/PEP to reflect the new PyArray_Descr * structure that goes along with the change. I think the new structure is complete enough that it should go into Python itself along with the generic multidimensional array.
Best, -Travis From dd55 at cornell.edu Mon Dec 5 17:05:06 2005 From: dd55 at cornell.edu (Darren Dale) Date: Mon, 5 Dec 2005 17:05:06 -0500 Subject: [SciPy-dev] possible bug in interp1d Message-ID: <200512051705.06993.dd55@cornell.edu> Running this short script yields an error message: from scipy import * i = interpolate.interp1d(arange(100), arange(100)) i(arange(10, 90, 100)) exceptions.AttributeError Traceback (most recent call last) /home/darren/ /usr/lib64/python2.4/site-packages/scipy/interpolate/interpolate.py in __call__(self, x_new) 196 out_of_bounds.shape = sec_shape 197 new_out = ones(new_shape)*out_of_bounds --> 198 putmask(y_new, new_out.flat, self.fill_value) 199 y_new.shape = yshape 200 # Rotate the values of y_new back so that they coorespond to the /usr/lib64/python2.4/site-packages/scipy/base/oldnumeric.py in putmask(a, mask, v) 186 """ 187 print dir(a) --> 188 return a.putmask(v, mask) 189 190 def swapaxes(a, axis1, axis2): AttributeError: 'scipy.flatiter' object has no attribute 'putmask' I checked the namespace of a in oldnumeric.putmask: ['__array__', '__class__', '__delattr__', '__delitem__', '__doc__', '__getattribute__', '__getitem__', '__hash__', '__init__', '__iter__', '__len__', '__new__', '__reduce__', '__reduce_ex__', '__repr__', '__setattr__', '__setitem__', '__str__', 'base', 'copy', 'next'] Darren From oliphant.travis at ieee.org Mon Dec 5 17:06:57 2005 From: oliphant.travis at ieee.org (Travis Oliphant) Date: Mon, 05 Dec 2005 15:06:57 -0700 Subject: [SciPy-dev] Data type change completed Message-ID: <4394BA01.20501@ieee.org> I've committed the data-type change discussed at the end of last week to the SVN repository. Now the concept of a data type for an array has been replaced with a "data-descriptor". This data-descriptor is flexible enough to handle an arbitrary record specification with fields that include records and arrays or arrays of records. While nesting may not be the best data-layout for a new design, when memory-mapping an arbitrary fixed-record-length file, this capability allows you to handle even the most obsure record file. While the basic core tests pass for me, there may be lurking problems and so testing of the SVN trunk of scipy core will be appreciated. I've bumped up the version number because the C-API has changed (a few new functions and some functions becoming macros). I'd like to make a release of the new version by the end of the week (as soon as Chris Hanley at STSCI and I get records.py working better), so please test. Recently some intel c-compiler tests were failing on a 64-bit platform. It would be nice to figure out why that is happening as well, but I will probably not have time for that this week. Thanks, -Travis From oliphant.travis at ieee.org Mon Dec 5 23:36:03 2005 From: oliphant.travis at ieee.org (Travis Oliphant) Date: Mon, 05 Dec 2005 21:36:03 -0700 Subject: [SciPy-dev] [SciPy-user] linspace In-Reply-To: References: Message-ID: <43951533.3030001@ieee.org> David M. Cooke wrote: >Alan G Isaac writes: > > > SciPy full now builds using the recent SVN version of scipy (a few missed changes still needed to be made to f2py). I'm getting only 1 error in check_odeint1 as well. So, I think the changes were reasonably successful (especially given how deep the surgery was --- PyArray_Typecode had spread its ugliness everywhere) Feel free to start relying on SVN again. -Travis From cookedm at physics.mcmaster.ca Tue Dec 6 00:43:04 2005 From: cookedm at physics.mcmaster.ca (David M. 
Cooke) Date: Tue, 06 Dec 2005 00:43:04 -0500 Subject: [SciPy-dev] [SciPy-user] linspace In-Reply-To: <43951533.3030001@ieee.org> (Travis Oliphant's message of "Mon, 05 Dec 2005 21:36:03 -0700") References: <43951533.3030001@ieee.org> Message-ID:

Travis Oliphant writes:
> David M. Cooke wrote:
>>Alan G Isaac writes:
>>
> SciPy full now builds using the recent SVN version of scipy (a few
> missed changes still needed to be made to f2py).
>
> I'm getting only 1 error in check_odeint1 as well. So, I think the
> changes were reasonably successful (especially given how deep the
> surgery was --- PyArray_Typecode had spread its ugliness everywhere)
>
> Feel free to start relying on SVN again.
>
> -Travis

Isn't compiling for me:

[00:39:39] [~/stuff/python/scipy/core] cookedm at arbutus$ svn update At revision 1575. [00:40:12] [~/stuff/python/scipy/core] cookedm at arbutus$ rm -rf build/ [00:40:21] [~/stuff/python/scipy/core] cookedm at arbutus$ python setup.py build [snip] building 'scipy.base.multiarray' extension compiling C sources gcc options: '-pthread -fno-strict-aliasing -DNDEBUG -g -O3 -Wall -Wstrict-prototypes -fPIC' creating build/temp.linux-x86_64-2.3/scipy creating build/temp.linux-x86_64-2.3/scipy/base creating build/temp.linux-x86_64-2.3/scipy/base/src compile options: '-Ibuild/src/scipy/base/src -Iscipy/base/include -Ibuild/src/scipy/base -Iscipy/base/src -I/usr/include/python2.3 -c' gcc: scipy/base/src/multiarraymodule.c In file included from scipy/base/src/multiarraymodule.c:44: scipy/base/src/arrayobject.c:3325: error: conflicting types for 'PyArray_NewFromDescr' build/src/scipy/base/__multiarray_api.h:135: error: previous declaration of 'PyArray_NewFromDescr' was here scipy/base/src/arrayobject.c: In function 'PyArray_NewFromDescr': scipy/base/src/arrayobject.c:3343: warning: passing argument 4 of 'PyArray_NewFromDescr' from incompatible pointer type scipy/base/src/arrayobject.c:3343: warning: passing argument 5 of 'PyArray_NewFromDescr' from incompatible pointer type scipy/base/src/arrayobject.c:3399: warning: passing argument 2 of '_array_fill_strides' from incompatible pointer type scipy/base/src/arrayobject.c: In function 'array_new': scipy/base/src/arrayobject.c:3718: warning: passing argument 4 of 'PyArray_NewFromDescr' from incompatible pointer type scipy/base/src/arrayobject.c:3767: warning: passing argument 4 of 'PyArray_ (and more)

I think the declarations in scipy/base/code_generators/generate_array_api.py are out of sync with current reality. I'm looking at making something that'll pull API declarations directly from the files (with some minimal markup) so that this doesn't happen.

-- |>|\/|< /--------------------------------------------------------------------------\ |David M. Cooke http://arbutus.physics.mcmaster.ca/dmc/ |cookedm at physics.mcmaster.ca

From oliphant.travis at ieee.org Tue Dec 6 00:43:30 2005 From: oliphant.travis at ieee.org (Travis Oliphant) Date: Mon, 05 Dec 2005 22:43:30 -0700 Subject: [SciPy-dev] ***[Possible UCE]*** Re: [Numpy-discussion] Data type change completed In-Reply-To: <4394FC7B.6040005@sympatico.ca> References: <4394BA01.20501@ieee.org> <4394FC7B.6040005@sympatico.ca> Message-ID: <43952502.1050609@ieee.org>

Colin J. Williams wrote:
> Travis Oliphant wrote:
>
>> I've committed the data-type change discussed at the end of last week
>> to the SVN repository. Now the concept of a data type for an array
>> has been replaced with a "data-descriptor".
This data-descriptor is
>> flexible enough to handle an arbitrary record specification with
>> fields that include records and arrays or arrays of records. While
>> nesting may not be the best data-layout for a new design, when
>> memory-mapping an arbitrary fixed-record-length file, this capability
>> allows you to handle even the most obsure record file.
>
> Does this mean that the dtype parameter is changed?

obscure?? No, it's not changed. The dtype parameter is still used and it is still called the same thing. It's just that what constitutes a data-type has changed significantly. For example, tuples and dictionaries can now be used to describe a data-type. These definitions are recursive, so that wherever a data-type is used it means anything that can be interpreted as a data-type. And I really mean data-descriptor, but data-type is in such common usage that I still use it.

Tuple:
========
(fixed-size-data-type, shape)
(generic-size-data-type, itemsize)
(base-type-data-type, new-type-data-type)

Examples:

dtype=(int32, (5,5)) --- a 5x5 array of int32 is the description of this item.

dtype=(str, 10) --- a length-10 string

dtype=(int16, {'real':(int8,0), 'imag':(int8,1)}) --- a descriptor that acts like an int16 array mathematically (in ufuncs) but has real and imag fields.

Dictionary (defaults to a dtypechar == 'V')
==========
format1:
{"names" : list of field names,
 "formats" : list of data-types,
 "offsets" : list of start-of-the-field offsets,
 "titles" : extra field names }

format2 (and how it's stored internally):
{key1 : (data-type1, offset1 [, title1]),
 key2 : (data-type2, offset2 [, title2]),
 ...
 keyn : (data-typen, offsetn [, titlen]) }

Other objects not already covered:
=====================
???? Right now, it just passes the tp_dict of the typeobject to the dictionary-conversion routine. I'm open to ideas here and will probably have better ideas once I see what the actual record data-type (not the data-descriptor, but an actual subclass of the scipy.void data type) looks like.

All of these can be used as the dtype parameter wherever it is taken (of course you can't always do something useful with every data-descriptor). When an ndarray has an associated type descriptor with fields (that's where the field information is stored), then those fields can be accessed using string or unicode keys to the getitem call. Thus, you can do something like this:

>>> a = ones((4,3), dtype=(int16, {'real':(int8, 0), 'imag':(int8, 1)}))
>>> a['imag'] = 2
>>> a['real'] = 1
>>> a.tostring()
'\x01\x02\x01\x02\x01\x02\x01\x02\x01\x02\x01\x02\x01\x02\x01\x02\x01\x02\x01\x02\x01\x02\x01\x02'

Note that there are now three distinct but interacting Python objects: 1) the N-dimensional array of a fixed itemsize. 2) a Python object representing one element of the array. 3) the data-descriptor object describing the data-type of the array. These three things were always there under the covers (the PyArray_Descr* has been there since Numeric), and the standard Python types were always filling in for number 2. Now we are just being more explicit about it.

Now, all three things are present and accounted for. I'm really quite happy with the resulting infrastructure. I think it will allow some really neat possibilities. I'm thinking the record array subclass will allow attribute-based look-up and register a nice record type for the actual "element" of the record array.
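To make both spellings concrete, here is a minimal sketch against the current SVN (illustrative only -- the format1 dictionary path in particular should be read as a sketch of the intent, not a tested recipe):

    from scipy import ones, zeros, int8, int16

    # Tuple form: an int16 "parent" type carrying two named int8 fields.
    a = ones((4,3), dtype=(int16, {'real':(int8, 0), 'imag':(int8, 1)}))
    a['real'] = 1
    a['imag'] = 2

    # Dictionary form (format1): the same field layout via parallel lists;
    # internally this is converted to the format2 layout shown above.
    spec = {'names'  : ['real', 'imag'],
            'formats': [int8, int8],
            'offsets': [0, 1]}
    b = zeros((4,3), dtype=spec)
    b['imag'] = 5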
-Travis

From oliphant.travis at ieee.org Tue Dec 6 00:56:30 2005 From: oliphant.travis at ieee.org (Travis Oliphant) Date: Mon, 05 Dec 2005 22:56:30 -0700 Subject: [SciPy-dev] [SciPy-user] linspace In-Reply-To: References: <43951533.3030001@ieee.org> Message-ID: <4395280E.1070408@ieee.org>

David M. Cooke wrote:
>
> Isn't compiling for me:
>
> [snip]
> building 'scipy.base.multiarray' extension
>
> scipy/base/src/arrayobject.c:3325: error: conflicting types for
> 'PyArray_NewFromDescr'
> build/src/scipy/base/__multiarray_api.h:135: error: previous
> declaration of 'PyArray_NewFromDescr' was here

My bad, the new function had the wrong type-signature. There weren't a lot of 64-bit changes but I would like more 64-bit testing to find any more issues like this.

>
> (and more)
>
> I think the declarations in
> scipy/base/code_generators/generate_array_api.py are out of sync with
> current reality.

Not the problem, but your solution would be nice anyway.

-Travis

From arnd.baecker at web.de Tue Dec 6 03:13:47 2005 From: arnd.baecker at web.de (Arnd Baecker) Date: Tue, 6 Dec 2005 09:13:47 +0100 (CET) Subject: [SciPy-dev] more 64 Bit testing (was linspace) In-Reply-To: <4395280E.1070408@ieee.org> References: <43951533.3030001@ieee.org> <4395280E.1070408@ieee.org> Message-ID:

Hi Travis

On Mon, 5 Dec 2005, Travis Oliphant wrote: [...]
> My bad, the new function had the wrong type-signature. There weren't a
> lot of 64-bit changes but I would like more 64-bit testing to find
> any more issues like this.

OK, here we go:

In [3]: scipy.__core_version__ Out[3]: '0.8.0.1577'

====================================================================== ERROR: Test of put ---------------------------------------------------------------------- Traceback (most recent call last): File "/home/abaecker/BUILDS2/Build_95/inst_scipy_newcore/lib/python2.4/site-packages/scipy/base/tests/test_ma.py", line 277, in check_testPut x[[1,4]] = [10,40] File "/home/abaecker/BUILDS2/Build_95//inst_scipy_newcore/lib/python2.4/site-packages/scipy/base/ma.py", line 799, in __setitem__ d[index] = value IndexError: index (1) out of range (0<=index<=4) in dimension 0

====================================================================== ERROR: check_simple (scipy.base.shape_base.test_shape_base.test_apply_along_axis) ---------------------------------------------------------------------- Traceback (most recent call last): File "/home/abaecker/BUILDS2/Build_95/inst_scipy_newcore/lib/python2.4/site-packages/scipy/base/tests/test_shape_base.py", line 13, in check_simple assert_array_equal(apply_along_axis(len,0,a),len(a)*ones(shape(a)[1])) File "/home/abaecker/BUILDS2/Build_95//inst_scipy_newcore/lib/python2.4/site-packages/scipy/base/shape_base.py", line 36, in apply_along_axis outarr[ind] = res IndexError: index (0) out of range (0<=index<=9) in dimension 0

====================================================================== FAIL: check_odeint1 (scipy.integrate.test_integrate.test_odeint) ---------------------------------------------------------------------- Traceback (most recent call last): File "/home/abaecker/BUILDS2/Build_95/inst_scipy_newcore/lib/python2.4/site-packages/scipy/integrate/tests/test_integrate.py", line 51, in check_odeint1 assert res < 1.0e-6 AssertionError

---------------------------------------------------------------------- Ran 1362 tests in 2.103s FAILED (failures=1, errors=2) Out[2]:

So this looks pretty good already ... I will try to get back with more details ...
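(Side note: the "Test of put" error looks like it should reduce to a plain fancy-index assignment, independent of the ma layer -- a guessed minimal reproduction, not verified here:)

    import scipy
    x = scipy.array([1,2,3,4,5])
    x[[1,4]] = [10,40]   # the assignment ma.__setitem__ forwards to d[index]
    print x              # expected: [ 1 10  3  4 40]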
Best, Arnd

From cookedm at physics.mcmaster.ca Tue Dec 6 03:31:20 2005 From: cookedm at physics.mcmaster.ca (David M. Cooke) Date: Tue, 06 Dec 2005 03:31:20 -0500 Subject: [SciPy-dev] [SciPy-user] linspace In-Reply-To: <4395280E.1070408@ieee.org> (Travis Oliphant's message of "Mon, 05 Dec 2005 22:56:30 -0700") References: <43951533.3030001@ieee.org> <4395280E.1070408@ieee.org> Message-ID:

Travis Oliphant writes:
> David M. Cooke wrote:
>
>> Isn't compiling for me:
>>
>> [snip]
>> building 'scipy.base.multiarray' extension
>
>> scipy/base/src/arrayobject.c:3325: error: conflicting types for
>> 'PyArray_NewFromDescr'
>> build/src/scipy/base/__multiarray_api.h:135: error: previous
>> declaration of 'PyArray_NewFromDescr' was here
>
> My bad, the new function had the wrong type-signature. There weren't a
> lot of 64-bit changes but I would like more 64-bit testing to find
> any more issues like this.

Ok, works now (almost). I get two failures for MA functions:

====================================================================== ERROR: Test of put ---------------------------------------------------------------------- Traceback (most recent call last): File "/home/cookedm/usr/lib/python2.4/site-packages/scipy/base/tests/test_ma.py", line 277, in check_testPut x[[1,4]] = [10,40] File "/home/cookedm/usr/lib/python2.4/site-packages/scipy/base/ma.py", line 798, in __setitem__ d[index] = value IndexError: index (1) out of range (0<=index<=4) in dimension 0

====================================================================== ERROR: check_simple (scipy.base.shape_base.test_shape_base.test_apply_along_axis) ---------------------------------------------------------------------- Traceback (most recent call last): File "/home/cookedm/usr/lib/python2.4/site-packages/scipy/base/tests/test_shape_base.py", line 13, in check_simple assert_array_equal(apply_along_axis(len,0,a),len(a)*ones(shape(a)[1])) File "/home/cookedm/usr/lib/python2.4/site-packages/scipy/base/shape_base.py", line 36, in apply_along_axis outarr[ind] = res IndexError: index (0) out of range (0<=index<=9) in dimension 0

---------------------------------------------------------------------- Ran 157 tests in 0.819s

>> (and more)
>>
>> I think the declarations in
>> scipy/base/code_generators/generate_array_api.py are out of sync with
>> current reality.
>
> Not the problem, but your solution would be nice anyway.

Here's what I'm planning: Functions that are part of the API will look like this in the scipy/base/src/ files:

/*OBJECT_API
 A documentation string.
*/
static PyObject *
PyArray_SomeAPIFunction(...and its arguments)
{

generate_array_api.py will scan the source files, and generate the appropriate objectapi_list and multiapi_list that are (right now) assigned by hand in that file. To get the order, api functions will be listed in files array_api_order.txt and multiarray_api_order.txt. (The tag MULTIARRAY_API will be used for the multiarray API.)

So, to add a new function: change the source file (in scipy/base/src/whatever), and add the function name to the appropriate place in the appropriate *_api_order.txt inside scipy/base/code_generators.

Another thing I'm thinking about is using this to compute a hash of the API, so extension modules can check that they're using the same API as scipy. Putting this check into the import_array() macro would mean that it'd be easier to track down those problems where the API's changed, without the extension module being recompiled to match.
It could throw a warning much like Python when it tries to run something compiled for an older C API.

[Depending how much mondo magic you'd want, it'd be possible to store a string representation of the API, and pinpoint precisely _where_ the API changed between the extension module and scipy_core -- but I'll leave that for another day.]

-- |>|\/|< /--------------------------------------------------------------------------\ |David M. Cooke http://arbutus.physics.mcmaster.ca/dmc/ |cookedm at physics.mcmaster.ca

From oliphant.travis at ieee.org Tue Dec 6 03:39:41 2005 From: oliphant.travis at ieee.org (Travis Oliphant) Date: Tue, 06 Dec 2005 01:39:41 -0700 Subject: [SciPy-dev] [SciPy-user] linspace In-Reply-To: References: <43951533.3030001@ieee.org> <4395280E.1070408@ieee.org> Message-ID: <43954E4D.4040102@ieee.org>

David M. Cooke wrote:
>Travis Oliphant writes:
>
>>David M. Cooke wrote:
>>
>>>Isn't compiling for me:
>>>
>>>[snip]
>>>building 'scipy.base.multiarray' extension
>>>
>>>scipy/base/src/arrayobject.c:3325: error: conflicting types for
>>>'PyArray_NewFromDescr'
>>>build/src/scipy/base/__multiarray_api.h:135: error: previous
>>>declaration of 'PyArray_NewFromDescr' was here
>>
>>My bad, the new function had the wrong type-signature. There weren't a
>>lot of 64-bit changes but I would like more 64-bit testing to find
>>any more issues like this.
>
>Ok, works now (almost). I get two failures for MA functions:
>
>======================================================================
>ERROR: Test of put
>----------------------------------------------------------------------
>Traceback (most recent call last):
>  File "/home/cookedm/usr/lib/python2.4/site-packages/scipy/base/tests/test_ma.py", line 277, in check_testPut
>    x[[1,4]] = [10,40]
>  File "/home/cookedm/usr/lib/python2.4/site-packages/scipy/base/ma.py", line 798, in __setitem__
>    d[index] = value
>IndexError: index (1) out of range (0<=index<=4) in dimension 0
>
>======================================================================
>ERROR: check_simple (scipy.base.shape_base.test_shape_base.test_apply_along_axis)
>----------------------------------------------------------------------
>Traceback (most recent call last):
>  File "/home/cookedm/usr/lib/python2.4/site-packages/scipy/base/tests/test_shape_base.py", line 13, in check_simple
>    assert_array_equal(apply_along_axis(len,0,a),len(a)*ones(shape(a)[1]))
>  File "/home/cookedm/usr/lib/python2.4/site-packages/scipy/base/shape_base.py", line 36, in apply_along_axis
>    outarr[ind] = res
>IndexError: index (0) out of range (0<=index<=9) in dimension 0
>
>----------------------------------------------------------------------
>Ran 157 tests in 0.819s

O.K. Thanks... I saw these on Arnd's list as well. I'm not getting them on 32-bit, but I think I know where to look.

>>>(and more)
>>>
>>>I think the declarations in
>>>scipy/base/code_generators/generate_array_api.py are out of sync with
>>>current reality.
>>
>>Not the problem, but your solution would be nice anyway.
>
>Here's what I'm planning: Functions that are part of the API will look
>like this in the scipy/base/src/ files:
>
>/*OBJECT_API
> A documentation string.
>*/
>static PyObject *
>PyArray_SomeAPIFunction(...and its arguments)
>{
>
>generate_array_api.py will scan the source files, and generate the
>appropriate objectapi_list and multiapi_list that are (right now)
>assigned by hand in that file.
To get the order, api functions will be
>listed in files array_api_order.txt and multiarray_api_order.txt.
>(The tag MULTIARRAY_API will be used for the multiarray API.)
>
>So, to add a new function: change the source file (in
>scipy/base/src/whatever), and add the function name to the appropriate
>place in the appropriate *_api_order.txt inside scipy/base/code_generators.
>
>Another thing I'm thinking about is using this to compute a hash of
>the API, so extension modules can check that they're using the same
>API as scipy. Putting this check into the import_array() macro would
>mean that it'd be easier to track down those problems where the API's
>changed, without the extension module being recompiled to match. It
>could throw a warning much like Python when it tries to run something
>compiled for an older C API.
>

This sounds very, very good.

>[Depending how much mondo magic you'd want, it'd be possible to store
>a string representation of the API, and pinpoint precisely _where_ the
>API changed between the extension module and scipy_core -- but I'll
>leave that for another day.]
>

With an extra very if you can pull that off...

-Travis

From arnd.baecker at web.de Tue Dec 6 04:03:24 2005 From: arnd.baecker at web.de (Arnd Baecker) Date: Tue, 6 Dec 2005 10:03:24 +0100 (CET) Subject: [SciPy-dev] more 64 Bit testing (was linspace) In-Reply-To: References: <43951533.3030001@ieee.org> <4395280E.1070408@ieee.org> Message-ID:

For the icc situation (Intel compiler on Itanium) I get (basically the same as yesterday + the new ones) those listed below.

I don't know about the quality of the intel compilers, but the number of warnings (1438 + 3025 remarks), just for the core, is discouraging. At the end of this mail I also attach the result of cat build_log_scipy_new_core.txt | grep warning (only those related to arraytypes.inc and ufuncobject.c).

For full scipy, things get even more problematic with icc as cephes does not compile due to the double NAN = 1.0/0.0 - 1.0/0.0; definitions leading to an error (see at the end for the 3 errors of this type). How should one define NAN when using icc?

Commenting out the build of special in Lib/setup.py, the next problem sits in

compilation aborted for build/src/Lib/interpolate/dfitpackmodule.c (code 2) error: Command "icc -fno-strict-aliasing -OPT:Olimit=0 -DNDEBUG -g -O3 -Wall -Wstrict-prototypes -fPIC -fPIC -Ibuild/src -I/home/baecker/python/newscipy/lib/python2.4/site-packages/scipy/base/include -I/home/baecker/python/include/python2.4 -c build/src/Lib/interpolate/dfitpackmodule.c -o build/temp.linux-ia64-2.4/build/src/Lib/interpolate/dfitpackmodule.o" failed with exit status 2

Here icc does not like those places with:

build/src/Lib/interpolate/dfitpackmodule.c(2976): error: expected a ";" int calc_lwrk1(void) {

In code like

    int calc_lwrk1(void) {
        int u = nxest-kx-1;
        int v = nyest-ky-1;
        int km = MAX(kx,ky)+1;
        int ne = MAX(nxest,nyest);
        int bx = kx*v+ky+1;
        int by = ky*u+kx+1;
        int b1,b2;
        if (bx<=by) {b1=bx;b2=bx+v-ky;}
        else {b1=by;b2=by+u-kx;}
        return u*v*(2+b1+b2)+2*(u+v+km*(m+ne)+ne-kx-ky)+b2+1;
    }

Is this code f2py generated??

Ok, kicking out interpolate in Lib/setup.py ... I finally get to some place where magically g77 is called, despite python setup.py config --fcompiler=intel install --prefix=$DESTDIR I will retry this with additional options for the fortran side.
a) python setup.py config --compiler=intel config_fc --fcompiler=intel b) Maybe setting export F77=ifort export CC=icc export CXX=icc (will report later) Any advice on how to attack these various issues is very welcome! Best, Arnd ====================================================================== ERROR: Test of put ---------------------------------------------------------------------- Traceback (most recent call last): File "/home/baecker/python/newscipy/lib/python2.4/site-packages/scipy/base/tests/test_ma.py", line 277, in check_testPut x[[1,4]] = [10,40] File "/home/baecker/python//newscipy/lib/python2.4/site-packages/scipy/base/ma.py", line 799, in __setitem__ d[index] = value IndexError: index (1) out of range (0<=index<=4) in dimension 0 ====================================================================== ERROR: check_simple (scipy.base.shape_base.test_shape_base.test_apply_along_axis) ---------------------------------------------------------------------- Traceback (most recent call last): File "/home/baecker/python/newscipy/lib/python2.4/site-packages/scipy/base/tests/test_shape_base.py", line 13, in check_simple assert_array_equal(apply_along_axis(len,0,a),len(a)*ones(shape(a)[1])) File "/home/baecker/python//newscipy/lib/python2.4/site-packages/scipy/base/shape_base.py", line 36, in apply_along_axis outarr[ind] = res IndexError: index (0) out of range (0<=index<=9) in dimension 0 ====================================================================== FAIL: check_nd (scipy.base.index_tricks.test_index_tricks.test_grid) ---------------------------------------------------------------------- Traceback (most recent call last): File "/home/baecker/python/newscipy/lib/python2.4/site-packages/scipy/base/tests/test_index_tricks.py", line 30, in check_nd assert_array_almost_equal(c[0][-1,:],ones(10,'d'),11) File "/home/baecker/python//newscipy/lib/python2.4/site-packages/scipy/test/testing.py", line 758, in assert_array_almost_equal assert cond,\ AssertionError: Arrays are not almost equal (mismatch 100.0%): Array 1: [-0.555555555556 -0.555555555556 -0.555555555556 -0.555555555556 -0.555555555556 -0.555555555556 -0.555555555556 -0.555... Array 2: [ 1. 1. 1. 1. 1. 1. 1. 1. 1. 1.] ====================================================================== FAIL: check_basic (scipy.base.function_base.test_function_base.test_cumprod) ---------------------------------------------------------------------- Traceback (most recent call last): File "/home/baecker/python/newscipy/lib/python2.4/site-packages/scipy/base/tests/test_function_base.py", line 163, in check_basic 1320, 6600, 26400],ctype)) File "/home/baecker/python//newscipy/lib/python2.4/site-packages/scipy/test/testing.py", line 733, in assert_array_equal assert cond,\ AssertionError: Arrays are not equal (mismatch 57.1428571429%): Array 1: [ 1. 2. 20. 0. 0. 0. 0.] Array 2: [ 1.0000000000000000e+00 2.0000000000000000e+00 2.0000000000000000e+01 2.2000000000000000e+02 1.32000000000000... 
====================================================================== FAIL: check_basic (scipy.base.function_base.test_function_base.test_cumsum) ---------------------------------------------------------------------- Traceback (most recent call last): File "/home/baecker/python/newscipy/lib/python2.4/site-packages/scipy/base/tests/test_function_base.py", line 122, in check_basic assert_array_equal(cumsum(a), array([1,3,13,24,30,35,39],ctype)) File "/home/baecker/python//newscipy/lib/python2.4/site-packages/scipy/test/testing.py", line 733, in assert_array_equal assert cond,\ AssertionError: Arrays are not equal (mismatch 57.1428571429%): Array 1: [ 1. 3. 13. 11. 17. 5. 9.] Array 2: [ 1. 3. 13. 24. 30. 35. 39.] ====================================================================== FAIL: Test add, sum, product. ---------------------------------------------------------------------- Traceback (most recent call last): File "/home/baecker/python/newscipy/lib/python2.4/site-packages/scipy/base/tests/test_ma.py", line 163, in check_testAddSumProd self.failUnless (eq(scipy.add.accumulate(x), add.accumulate(x))) AssertionError ---------------------------------------------------------------------- Ran 157 tests in 1.590s FAILED (failures=4, errors=2) Out[3]: build/src/scipy/base/src/arraytypes.inc(5007): warning #1338: arithmetic on pointer to void or function type build/src/scipy/base/src/arraytypes.inc(5447): warning #1338: arithmetic on pointer to void or function type build/src/scipy/base/src/arraytypes.inc(5887): warning #1338: arithmetic on pointer to void or function type build/src/scipy/base/src/arraytypes.inc(7151): warning #181: argument is incompatible with corresponding format string conversion build/src/scipy/base/src/arraytypes.inc(7171): warning #181: argument is incompatible with corresponding format string conversion build/src/scipy/base/src/arraytypes.inc(10381): warning #1338: arithmetic on pointer to void or function type scipy/base/src/arrayobject.c(3725): warning #1338: arithmetic on pointer to void or function type build/src/scipy/base/__multiarray_api.c(6): warning #1418: external definition with no prior declaration scipy/base/src/multiarraymodule.c(4391): warning #1418: external definition with no prior declaration icc: Command line warning: ignoring option '-O'; no argument required build/src/scipy/base/src/umathmodule.c(56): warning #1419: external declaration in primary source file scipy/base/src/ufuncobject.c(1164): warning #1338: arithmetic on pointer to void or function type scipy/base/src/ufuncobject.c(1167): warning #1338: arithmetic on pointer to void or function type scipy/base/src/ufuncobject.c(1432): warning #1338: arithmetic on pointer to void or function type scipy/base/src/ufuncobject.c(1483): warning #1338: arithmetic on pointer to void or function type scipy/base/src/ufuncobject.c(1746): warning #1338: arithmetic on pointer to void or function type scipy/base/src/ufuncobject.c(1758): warning #1338: arithmetic on pointer to void or function type scipy/base/src/ufuncobject.c(1811): warning #1338: arithmetic on pointer to void or function type scipy/base/src/ufuncobject.c(1821): warning #1338: arithmetic on pointer to void or function type scipy/base/src/ufuncobject.c(1841): warning #1338: arithmetic on pointer to void or function type scipy/base/src/ufuncobject.c(1907): warning #1338: arithmetic on pointer to void or function type scipy/base/src/ufuncobject.c(1951): warning #1338: arithmetic on pointer to void or function type 
scipy/base/src/ufuncobject.c(1962): warning #1338: arithmetic on pointer to void or function type scipy/base/src/ufuncobject.c(1984): warning #1338: arithmetic on pointer to void or function type scipy/base/src/ufuncobject.c(2051): warning #1338: arithmetic on pointer to void or function type scipy/base/src/ufuncobject.c(2138): warning #1338: arithmetic on pointer to void or function type scipy/base/src/ufuncobject.c(2145): warning #1338: arithmetic on pointer to void or function type scipy/base/src/ufuncobject.c(2195): warning #1338: arithmetic on pointer to void or function type scipy/base/src/ufuncobject.c(2592): warning #1338: arithmetic on pointer to void or function type build/src/scipy/base/__ufunc_api.c(6): warning #1418: external definition with no prior declaration build/src/scipy/base/src/umathmodule.c(8073): warning #1418: external definition with no prior declaration icc: Command line warning: ignoring option '-O'; no argument required scipy/base/src/_compiled_base.c(353): warning #1418: external definition with no prior declaration icc: Command line warning: ignoring option '-O'; no argument required scipy/corelib/fftpack_lite/fftpack.c(1207): warning #1418: external definition with no prior declaration ----------- Cephes: Lib/special/cephes/const.c(92): error: floating-point operation result is out of range double INFINITY = 1.0/0.0; /* 99e999; */ Lib/special/cephes/const.c(97): warning #1418: external definition with no prior declaration double NAN = 1.0/0.0 - 1.0/0.0; Lib/special/cephes/const.c(97): error: floating-point operation result is out of range double NAN = 1.0/0.0 - 1.0/0.0; Lib/special/cephes/const.c(97): error: floating-point operation result is out of range double NAN = 1.0/0.0 - 1.0/0.0; Lib/special/cephes/const.c(102): warning #1418: external definition with no prior declaration double NEGZERO = -0.0; From nwagner at mecha.uni-stuttgart.de Tue Dec 6 04:06:19 2005 From: nwagner at mecha.uni-stuttgart.de (Nils Wagner) Date: Tue, 06 Dec 2005 10:06:19 +0100 Subject: [SciPy-dev] Bug in arrayobject.c Message-ID: <4395548B.7020106@mecha.uni-stuttgart.de> In file included from scipy/base/src/multiarraymodule.c:45: scipy/base/src/arrayobject.c: In function `array_subscript': scipy/base/src/arrayobject.c:1755: error: syntax error before "else" scipy/base/src/arrayobject.c:1759: error: `value' undeclared (first use in this function) scipy/base/src/arrayobject.c:1759: error: (Each undeclared identifier is reported only once scipy/base/src/arrayobject.c:1759: error: for each function it appears in.) 
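(Regarding the cephes NAN/INFINITY errors above: one way around them is to assemble the IEEE-754 bit pattern directly rather than relying on compile-time arithmetic like 1.0/0.0. A minimal sketch of the idea, shown in Python for brevity -- the same bit patterns are what a C union/bit-field definition would encode; assumes IEEE-754 64-bit doubles:)

    import struct
    # '<d' reads the 8-byte string as a little-endian double, so this is
    # independent of the host's byte order.
    NAN      = struct.unpack('<d', '\x00\x00\x00\x00\x00\x00\xf8\x7f')[0]  # quiet NaN
    INFINITY = struct.unpack('<d', '\x00\x00\x00\x00\x00\x00\xf0\x7f')[0]
    assert NAN != NAN         # the defining property of a NaN
    assert INFINITY > 1e308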
From cimrman3 at ntc.zcu.cz Tue Dec 6 05:22:21 2005 From: cimrman3 at ntc.zcu.cz (Robert Cimrman) Date: Tue, 06 Dec 2005 11:22:21 +0100 Subject: [SciPy-dev] Data type change completed In-Reply-To: <4394BA01.20501@ieee.org> References: <4394BA01.20501@ieee.org> Message-ID: <4395665D.4090608@ntc.zcu.cz> a small patch to make full scipy compile: Index: Lib/signal/sigtoolsmodule.c =================================================================== --- Lib/signal/sigtoolsmodule.c (revision 1471) +++ Lib/signal/sigtoolsmodule.c (working copy) @@ -1408,7 +1408,7 @@ temp_ind[k]++; if (!(check && index_out_of_bounds(temp_ind,dims1,ndims)) && \ - memcmp(ip2, ptr, ap2->itemsize)) { + memcmp(ip2, ptr, PyArray_ITEMSIZE( ap2 ))) { memcpy(sort_buffer, ip1, elsize); sort_buffer += elsize; } @@ -1621,7 +1621,7 @@ gen->data = py_arr->data; gen->nd = py_arr->nd; gen->dimensions = py_arr->dimensions; - gen->elsize = py_arr->itemsize; + gen->elsize = PyArray_ITEMSIZE( py_arr ); gen->strides = py_arr->strides; gen->zero = PyArray_Zero(py_arr); return; @@ -1629,7 +1629,7 @@ static void Py_copy_info_vec(Generic_Vector *gen, PyArrayObject *py_arr) { gen->data = py_arr->data; - gen->elsize = py_arr->itemsize; + gen->elsize = PyArray_ITEMSIZE( py_arr ); gen->numels = PyArray_Size((PyObject *)py_arr); gen->zero = PyArray_Zero(py_arr); return; ----------------- and then: In [4]:nm.__scipy_version__ Out[4]:'0.4.3.1471' In [5]:nm.__core_version__ Out[5]:'0.8.0.1581' ====================================================================== FAIL: check_odeint1 (scipy.integrate.test_integrate.test_odeint) ---------------------------------------------------------------------- Traceback (most recent call last): File "/home/share/software/usr/lib/python2.4/site-packages/scipy/integrate/tests/test_integrate.py", line 51, in check_odeint1 assert res < 1.0e-6 AssertionError ---------------------------------------------------------------------- Ran 1377 tests in 151.758s FAILED (failures=1) r. From cookedm at physics.mcmaster.ca Tue Dec 6 06:12:52 2005 From: cookedm at physics.mcmaster.ca (David M. Cooke) Date: Tue, 06 Dec 2005 06:12:52 -0500 Subject: [SciPy-dev] [SciPy-user] linspace In-Reply-To: <43954E4D.4040102@ieee.org> (Travis Oliphant's message of "Tue, 06 Dec 2005 01:39:41 -0700") References: <43951533.3030001@ieee.org> <4395280E.1070408@ieee.org> <43954E4D.4040102@ieee.org> Message-ID: Travis Oliphant writes: > David M. Cooke wrote: >>Travis Oliphant writes: >>>David M. Cooke wrote: >>>>I think the declarations in >>>>scipy/base/code_generators/generate_array_api.py are out of sync with >>>>current reality. >>>> >>>> >>>Not the probelm but your solution would be nice anyway. >>> >>> >> >>Here's what I'm planning: Functions that are part of the API will look >>like this in the scipy/base/src/files: >> >>/*OBJECT_API >> A documentation string. >>*/ >>static PyObject * >>PyArray_SomeAPIFunction(...and it's arguments) >>{ >> >>generate_array_api.py will scan the source files, and generate the >>appropiate objectapi_list and multiapi_list that are (right now) >>assigned by hand in that file. To get the order, api functions will be >>listed in files array_api_order.txt and multiarray_api_order.txt. >>(The tag MULTIARRAY_API will be used for the multiarray API.) >> >>So, to add a new function: change the source file (in >>scipy/base/src/whatever), and add the function name to the appropiate >>place in the appropiate *_api_order.txt inside scipy/base/code_generators. Ok, this is done and checked in. 
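(The scan itself is essentially a comment-driven pass over the sources. A simplified sketch of the idea -- not the actual generate_array_api.py code:)

    import re

    # Collect (tag, function-name) pairs for functions preceded by an
    # /*OBJECT_API ...*/ or /*MULTIARRAY_API ...*/ comment.
    API_RE = re.compile(r'/\*\s*(OBJECT_API|MULTIARRAY_API)'  # the tag
                        r'.*?\*/'                             # rest of the comment
                        r'\s*static\s+[^(]*?'                 # return type
                        r'(PyArray_\w+)\s*\(',                # function name
                        re.DOTALL)

    def scan_api(filename):
        return API_RE.findall(open(filename).read())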
>>Another thing I'm thinking about is using this to compute a hash of >>the API, so extension modules can check that they're using the same >>API as scipy. Putting this check into the import_array() macro would >>mean that it'd be easier to track down those problems where the API's >>changed, without the extension module being recompiled to match. It >>could throw a warning much like Python when it tries to run something >>compiled for an older C API. > > This sounds very, very good. Not done yet, but much easier to add now. I'll add it later. >>[Depending how much mondo magic you'd want, it'd be possible to store >>a string representation of the API, and pinpoint precisely _where_ the >>API changed between the extension module and scipy_core -- but I'll >>leave that for another day.] >> > With an extra very if you can pull that off... Looks good for this; I figure I'll #define a 32-bit constant (determined from a hash of the return type, name, and argument types) for each routine, and check it if the API hashes are not equal. -- |>|\/|< /--------------------------------------------------------------------------\ |David M. Cooke http://arbutus.physics.mcmaster.ca/dmc/ |cookedm at physics.mcmaster.ca From oliphant.travis at ieee.org Tue Dec 6 04:36:30 2005 From: oliphant.travis at ieee.org (Travis Oliphant) Date: Tue, 06 Dec 2005 02:36:30 -0700 Subject: [SciPy-dev] more 64 Bit testing (was linspace) In-Reply-To: References: <43951533.3030001@ieee.org> <4395280E.1070408@ieee.org> Message-ID: <43955B9E.9080406@ieee.org> Arnd Baecker wrote: >For the icc situation (Intel compiler on Itanium) I get >(basically the same as yesterday + the new ones) those listed below. > >I don't know about the quality of the intel compilers, >but the amount of warnings (1438 + 3025 remarks), >just for the core, is discouraging. >At the end of this mail I also attach the result of > cat build_log_scipy_new_core.txt | grep warning >(only those related to arraytypes.inc and ufuncobject.c). > >For full scipy things get even more problematic with icc >as cephes does not compile due to the > double NAN = 1.0/0.0 - 1.0/0.0; >definitions leading to an error (see at the end for the 3 errors of this >type). >How should one define NAN when using icc? > > NAN can be defined as a bit-field. >Commenting out the build of special in Lib/setup.py >the next problem sits in > >compilation aborted for build/src/Lib/interpolate/dfitpackmodule.c (code >2) >error: Command "icc -fno-strict-aliasing -OPT:Olimit=0 -DNDEBUG -g -O3 >-Wall -Wstrict-prototypes -fPIC -fPIC -Ibuild/src >-I/home/baecker/python/newscipy/lib/python2.4/site-packages/scipy/base/include >-I/home/baecker/python/include/python2.4 -c >build/src/Lib/interpolate/dfitpackmodule.c -o >build/temp.linux-ia64-2.4/build/src/Lib/interpolate/dfitpackmodule.o" >failed with exit status 2 > >Here icc does not like thoses places with: > >build/src/Lib/interpolate/dfitpackmodule.c(2976): error: expected a ";" > int calc_lwrk1(void) { > >In code like > int calc_lwrk1(void) { > int u = nxest-kx-1; > int v = nyest-ky-1; > int km = MAX(kx,ky)+1; > int ne = MAX(nxest,nyest); > int bx = kx*v+ky+1; > int by = ky*u+kx+1; > int b1,b2; > if (bx<=by) {b1=bx;b2=bx+v-ky;} > else {b1=by;b2=by+u-kx;} > return u*v*(2+b1+b2)+2*(u+v+km*(m+ne)+ne-kx-ky)+b2+1; > } > >Is this code f2py generated?? > > I don't think so. The f2py-generated code should be clean; >Ok, kicking out interpolate in Lib/setup.py ... 
>I finally get to some place, where magically g77 is >called, despite > python setup.py config --fcompiler=intel install --prefix=$DESTDIR >I will retry this with additional options for the fortran side. > a) python setup.py config --compiler=intel config_fc --fcompiler=intel > b) Maybe setting > export F77=ifort > export CC=icc > export CXX=icc > (will report later) > >Any advice on how to attack these various issues is very welcome! > > I know that there are issues with distutils not allowing you to change certain things. Perhaps better support for icc is needed for scipy-distutils. Try your tests again on the recent code-base. I think I've fixed them.... fingers crossed. Perhaps there are options to icc to make it compile with fewer warnings. The problem with arithmetic on void pointers is a little bothersome. There are lots of void * pointers in scipy_core. They could all be changed to char *, but I think David M. has been doing work to get them changed from char * to void * Alternatively casts could be done in the right places. My suggestion would be to just turn that warning off in icc. The other issues may require some more attention. We may need a better header file to support icc. -Travis From oliphant.travis at ieee.org Tue Dec 6 08:12:15 2005 From: oliphant.travis at ieee.org (Travis Oliphant) Date: Tue, 06 Dec 2005 06:12:15 -0700 Subject: [SciPy-dev] alpha nd_image added Message-ID: <43958E2F.4040307@ieee.org> I just added very alpha nd_image port to the SciPy sandbox. Actually, all I did was fiddle with a numarray compatibility layer until the thing compiled. I have not checked out the resulting functions to see if they work, but it shows promise. Perhaps porting compiled numarray apps won't be too hard. Please feel free to play in the sandbox if you are interested. -Travis From arnd.baecker at web.de Tue Dec 6 08:22:40 2005 From: arnd.baecker at web.de (Arnd Baecker) Date: Tue, 6 Dec 2005 14:22:40 +0100 (CET) Subject: [SciPy-dev] more 64 Bit testing (was linspace) In-Reply-To: <43955B9E.9080406@ieee.org> References: <43951533.3030001@ieee.org> <4395280E.1070408@ieee.org> <43955B9E.9080406@ieee.org> Message-ID: Hi Travis, On Tue, 6 Dec 2005, Travis Oliphant wrote: > I know that there are issues with distutils not allowing you to change > certain things. Perhaps better support for icc is needed for > scipy-distutils. > > Try your tests again on the recent code-base. I think I've fixed > them.... fingers crossed. looks better: ====================================================================== FAIL: check_nd (scipy.base.index_tricks.test_index_tricks.test_grid) ---------------------------------------------------------------------- Traceback (most recent call last): File "/home/baecker/python/newscipy/lib/python2.4/site-packages/scipy/base/tests/test_index_tricks.py", line 30, in check_nd assert_array_almost_equal(c[0][-1,:],ones(10,'d'),11) File "/home/baecker/python//newscipy/lib/python2.4/site-packages/scipy/test/testing.py", line 758, in assert_array_almost_equal assert cond,\ AssertionError: Arrays are not almost equal (mismatch 100.0%): Array 1: [ 1.099621528581e+173 -5.555555555556e-001 7.250435759321e+177 -5.555555555556e-001 -5.555555555556e-001 -5.5555... Array 2: [ 1. 1. 1. 1. 1. 1. 1. 1. 1. 1.] 
====================================================================== FAIL: check_basic (scipy.base.function_base.test_function_base.test_cumprod) ---------------------------------------------------------------------- Traceback (most recent call last): File "/home/baecker/python/newscipy/lib/python2.4/site-packages/scipy/base/tests/test_function_base.py", line 163, in check_basic 1320, 6600, 26400],ctype)) File "/home/baecker/python//newscipy/lib/python2.4/site-packages/scipy/test/testing.py", line 733, in assert_array_equal assert cond,\ AssertionError: Arrays are not equal (mismatch 57.1428571429%): Array 1: [ 1. 2. 20. 0. 0. 0. 0.] Array 2: [ 1.0000000000000000e+00 2.0000000000000000e+00 2.0000000000000000e+01 2.2000000000000000e+02 1.32000000000000... ====================================================================== FAIL: check_basic (scipy.base.function_base.test_function_base.test_cumsum) ---------------------------------------------------------------------- Traceback (most recent call last): File "/home/baecker/python/newscipy/lib/python2.4/site-packages/scipy/base/tests/test_function_base.py", line 122, in check_basic assert_array_equal(cumsum(a), array([1,3,13,24,30,35,39],ctype)) File "/home/baecker/python//newscipy/lib/python2.4/site-packages/scipy/test/testing.py", line 733, in assert_array_equal assert cond,\ AssertionError: Arrays are not equal (mismatch 57.1428571429%): Array 1: [ 1. 3. 13. 11. 17. 5. 9.] Array 2: [ 1. 3. 13. 24. 30. 35. 39.] ---------------------------------------------------------------------- Ran 157 tests in 1.541s Have to run - best, Arnd From cimrman3 at ntc.zcu.cz Tue Dec 6 08:25:59 2005 From: cimrman3 at ntc.zcu.cz (Robert Cimrman) Date: Tue, 06 Dec 2005 14:25:59 +0100 Subject: [SciPy-dev] simple UMFPACK wrapper added Message-ID: <43959167.3040608@ntc.zcu.cz> I just added a simple swig-generated wrapper of UMFPACK sparse linear solver to the SciPy sandbox. It is very small and does not depend on any other package except scipy.base and scipy.sparse. Below you can find its performance compared with the current SuperLU-based implementation (scipy.sparse.solve()) on matrices corresponding to the Poisson equation (e.g. temperature distribution). The test script is included in the sandbox, too. The brave and interested can test it, but first you will have to edit the setup.py (a very basic one, needs scipyfication), since the paths to umfpack and scipy are hardcoded. Awaiting your feedback, r. ************************************************** url: mtxAA file: mtxAA format: triplet reading... ok size : (240, 240) (2682 nnz) umfpack : 0.00 s ||Ax-b|| : 5.25488472447e-17 ||x - x_{exact}|| : 2.10942374679e-15 Use minimum degree ordering on A'+A. sparse.solve : 0.01 s ||Ax-b|| : 1.04615897916e-16 ||x - x_{exact}|| : 4.44366678924e-15 ************************************************** url: mtxBB file: mtxBB format: triplet reading... ok size : (2706, 2706) (34664 nnz) umfpack : 0.12 s ||Ax-b|| : 1.3700207406e-13 ||x - x_{exact}|| : 1.75496026659e-14 Use minimum degree ordering on A'+A. sparse.solve : 1.18 s ||Ax-b|| : 3.5072700769e-13 ||x - x_{exact}|| : 7.46831110403e-14 ************************************************** url: mtxCC file: mtxCC format: triplet reading... ok size : (33718, 33718) (482570 nnz) umfpack : 36.68 s ||Ax-b|| : 3.17661662319e-15 ||x - x_{exact}|| : 4.91723530478e-14 Use minimum degree ordering on A'+A. Can't expand MemType 0: jcol 26067 -> SIGSEGV after about 1 hour (on 2 GB RAM). 
sparse.solve : infinity From schofield at ftw.at Tue Dec 6 13:08:14 2005 From: schofield at ftw.at (Ed Schofield) Date: Tue, 06 Dec 2005 18:08:14 +0000 Subject: [SciPy-dev] bug(?) in scipy.sparse In-Reply-To: <43947271.8020508@astraw.com> References: <4392C313.2040109@astraw.com> <41AE65B7-C7EB-4BB7-A37A-4F3FB01A0A7C@ftw.at> <43947271.8020508@astraw.com> Message-ID: <4395D38E.1020705@ftw.at> Andrew Straw wrote: >This stops my test from failing -- I'm not completely sure it stops the >bug because I'm not sure what the col_ptr vector of a CSC matrix expects >after the last real data. I assume here it is the number of non-zero >elements. > > Yes, that's the convention we're using... Thanks very much for the bug report and the patch! I've committed it, along with your test and a similar fix for dok_matrix.tocsr(). Best wishes, -- Ed From twilson at eduplay.com Tue Dec 6 23:43:47 2005 From: twilson at eduplay.com (Tyler W. Wilson) Date: Wed, 07 Dec 2005 00:43:47 -0400 Subject: [SciPy-dev] Issue with interpolate.interp1d() Message-ID: <43966883.4010900@eduplay.com> I am trying to use the interpolate.interp1d package. But when I get to the point of calling the returned function, I am getting the following error: File "E:\Python24\Lib\site-packages\scipy\interpolate\interpolate.py", line 198, in __call__ putmask(y_new, new_out.flat, self.fill_value) # will insert work here? File "E:\Python24\Lib\site-packages\scipy\base\oldnumeric.py", line 170, in putmask return a.putmask(v, mask) AttributeError: 'scipy.flatiter' object has no attribute 'putmask' Any ideas? Thanks, Tyler From twilson at eduplay.com Tue Dec 6 23:52:13 2005 From: twilson at eduplay.com (Tyler W. Wilson) Date: Wed, 07 Dec 2005 00:52:13 -0400 Subject: [SciPy-dev] possible bug in interp1d In-Reply-To: <200512051705.06993.dd55@cornell.edu> References: <200512051705.06993.dd55@cornell.edu> Message-ID: I have seen the exact same issue and would love to see a fix. Seems like any code that calls into a file called 'oldnumeric' needs some updating. 
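(Back to the col_ptr convention in the scipy.sparse fix above: it is the standard CSC layout, shown here for the 2x2 matrix [[1, 0], [2, 3]] as a plain-Python sketch rather than the actual dok_matrix.tocsc() internals:

    data    = [1.0, 2.0, 3.0]   # nonzero values, stored column by column
    row_ind = [0, 1, 1]         # row index of each stored value
    col_ptr = [0, 2, 3]         # column k occupies data[col_ptr[k]:col_ptr[k+1]]

The entry after the last real data, col_ptr[-1] == 3, is indeed the number of nonzero elements.)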
:-) - Tyler Darren Dale wrote: > Running this short script yields an error message: > > from scipy import * > i = interpolate.interp1d(arange(100), arange(100)) > i(arange(10, 90, 100)) > > exceptions.AttributeError Traceback (most recent > call last) > > /home/darren/ > > /usr/lib64/python2.4/site-packages/scipy/interpolate/interpolate.py in > __call__(self, x_new) > 196 out_of_bounds.shape = sec_shape > 197 new_out = ones(new_shape)*out_of_bounds > --> 198 putmask(y_new, new_out.flat, self.fill_value) > 199 y_new.shape = yshape > 200 # Rotate the values of y_new back so that they coorespond to > the > > /usr/lib64/python2.4/site-packages/scipy/base/oldnumeric.py in putmask(a, > mask, v) > 186 """ > 187 print dir(a) > --> 188 return a.putmask(v, mask) > 189 > 190 def swapaxes(a, axis1, axis2): > > AttributeError: 'scipy.flatiter' object has no attribute 'putmask' > > I checked the namespace of a in oldnumeric.putmask: > > ['__array__', '__class__', '__delattr__', '__delitem__', '__doc__', > '__getattribute__', '__getitem__', '__hash__', '__init__', '__iter__', > '__len__', '__new__', '__reduce__', '__reduce_ex__', '__repr__', > '__setattr__', '__setitem__', '__str__', 'base', 'copy', 'next'] > > > Darren From oliphant.travis at ieee.org Wed Dec 7 03:15:03 2005 From: oliphant.travis at ieee.org (Travis Oliphant) Date: Wed, 07 Dec 2005 01:15:03 -0700 Subject: [SciPy-dev] possible bug in interp1d In-Reply-To: References: <200512051705.06993.dd55@cornell.edu> Message-ID: <43969A07.5000608@ieee.org> Tyler W. Wilson wrote: >I have seen the exact same issue and would love to see a fix. Seems like >any code that calls into a file called 'oldnumeric' needs some updating. :-) > > This should be fixed in SVN. .flat should be changed to .ravel() in this instance. -Travis From twilson at eduplay.com Wed Dec 7 07:57:03 2005 From: twilson at eduplay.com (Tyler W. Wilson) Date: Wed, 07 Dec 2005 08:57:03 -0400 Subject: [SciPy-dev] possible bug in interp1d In-Reply-To: <43969A07.5000608@ieee.org> References: <200512051705.06993.dd55@cornell.edu> <43969A07.5000608@ieee.org> Message-ID: <4396DC1F.3050404@eduplay.com> Well, that fixed the that issue, but now I get the following: File "E:\Python24\Lib\site-packages\scipy\interpolate\interpolate.py", line 180, in __call__ putmask(y_new, new_out.ravel(), self.fill_value) File "E:\Python24\Lib\site-packages\scipy\base\oldnumeric.py", line 170, in putmask return a.putmask(v, mask) TypeError: array cannot be safely cast to required type Thanks, Tyler Travis Oliphant wrote: > Tyler W. Wilson wrote: > > >> I have seen the exact same issue and would love to see a fix. Seems like >> any code that calls into a file called 'oldnumeric' needs some updating. :-) >> >> >> > This should be fixed in SVN. > > .flat should be changed to .ravel() in this instance. > > -Travis > > _______________________________________________ > Scipy-dev mailing list > Scipy-dev at scipy.net > http://www.scipy.net/mailman/listinfo/scipy-dev > From dd55 at cornell.edu Wed Dec 7 09:49:16 2005 From: dd55 at cornell.edu (Darren Dale) Date: Wed, 7 Dec 2005 09:49:16 -0500 Subject: [SciPy-dev] possible bug in interp1d In-Reply-To: <4396DC1F.3050404@eduplay.com> References: <200512051705.06993.dd55@cornell.edu> <43969A07.5000608@ieee.org> <4396DC1F.3050404@eduplay.com> Message-ID: <200512070949.17087.dd55@cornell.edu> On Wednesday 07 December 2005 07:57, Tyler W. 
Wilson wrote: > Well, that fixed the that issue, but now I get the following: > > File "E:\Python24\Lib\site-packages\scipy\interpolate\interpolate.py", > line 180, in __call__ > putmask(y_new, new_out.ravel(), self.fill_value) > File "E:\Python24\Lib\site-packages\scipy\base\oldnumeric.py", line > 170, in putmask > return a.putmask(v, mask) > TypeError: array cannot be safely cast to required type I see this too. Also, I have one consistent failure to report: ====================================================================== FAIL: check_odeint1 (scipy.integrate.test_integrate.test_odeint) ---------------------------------------------------------------------- Traceback (most recent call last): File "/usr/lib64/python2.4/site-packages/scipy/integrate/tests/test_integrate.py", line 51, in check_odeint1 assert res < 1.0e-6 AssertionError From arnd.baecker at web.de Wed Dec 7 10:12:58 2005 From: arnd.baecker at web.de (Arnd Baecker) Date: Wed, 7 Dec 2005 16:12:58 +0100 (CET) Subject: [SciPy-dev] ACML FFT and new scipy Message-ID: Hi, has anyone tried to use the ACML (http://developer.amd.com/acml.aspx) FFT routines from scipy? (It seems that one has to extend fftpack for this) In addition to LAPACK, the ACML offers "fast scalar, vector, and array math transcendental library routines optimized for high performance on AMD Opteron processors." Would it be possible that scipy ufuncs could make use of them? (Also Intel's IMKL provides a Vector Math Library, which seems to give significant speed-up: http://www.intel.com/cd/software/products/asmo-na/eng/perflib/mkl/220052.htm ) Best, Arnd From dd55 at cornell.edu Wed Dec 7 10:21:00 2005 From: dd55 at cornell.edu (Darren Dale) Date: Wed, 7 Dec 2005 10:21:00 -0500 Subject: [SciPy-dev] possible bug in interp1d In-Reply-To: <4396DC1F.3050404@eduplay.com> References: <200512051705.06993.dd55@cornell.edu> <43969A07.5000608@ieee.org> <4396DC1F.3050404@eduplay.com> Message-ID: <200512071021.01030.dd55@cornell.edu> On Wednesday 07 December 2005 07:57, Tyler W. Wilson wrote: > Well, that fixed the that issue, but now I get the following: > > File "E:\Python24\Lib\site-packages\scipy\interpolate\interpolate.py", > line 180, in __call__ > putmask(y_new, new_out.ravel(), self.fill_value) > File "E:\Python24\Lib\site-packages\scipy\base\oldnumeric.py", line > 170, in putmask > return a.putmask(v, mask) > TypeError: array cannot be safely cast to required type reminder of the test script:

    from scipy import *
    a = arange(100)
    b = arange(10,90,100)
    i = interpolate.interp1d(a,a)
    i(b)

I checked the values of a and v in putmask. a has dtype 'l', and v is nan. I don't understand the TypeError, since a[1]=nan doesn't raise one (it just sets a[2] equal to 0). If I change the dtype of a to 'f' and keep b the same, I get the following: /usr/lib64/python2.4/site-packages/scipy/interpolate/interpolate.py in __call__(self, x_new) 150 # would be inserted. 151 # Note: If x_new[n] = x[m], then m is returned by searchsorted. --> 152 x_new_indices = searchsorted(self.x,x_new) 153 # 3. Clip x_new_indices so that they are within the range of 154 # self.x indices and at least 1.
Removes mis-interpolation /usr/lib64/python2.4/site-packages/scipy/base/oldnumeric.py in searchsorted(a, v) 235 a = array(a,copy=False) 236 print a.dtypechar, a, v --> 237 return a.searchsorted(v) 238 239 def resize(a, new_shape): TypeError: array cannot be safely cast to required type From arnd.baecker at web.de Wed Dec 7 12:08:41 2005 From: arnd.baecker at web.de (Arnd Baecker) Date: Wed, 7 Dec 2005 18:08:41 +0100 (CET) Subject: [SciPy-dev] more 64 Bit testing (was linspace) In-Reply-To: <43955B9E.9080406@ieee.org> References: <43951533.3030001@ieee.org> <4395280E.1070408@ieee.org> <43955B9E.9080406@ieee.org> Message-ID: Hi, On Tue, 6 Dec 2005, Travis Oliphant wrote: > Arnd Baecker wrote: > > >For the icc situation (Intel compiler on Itanium) I get > >(basically the same as yesterday + the new ones) those listed below. > > > >I don't know about the quality of the intel compilers, > >but the amount of warnings (1438 + 3025 remarks), > >just for the core, is discouraging. > >At the end of this mail I also attach the result of > > cat build_log_scipy_new_core.txt | grep warning > >(only those related to arraytypes.inc and ufuncobject.c). > > > >For full scipy things get even more problematic with icc > >as cephes does not compile due to the > > double NAN = 1.0/0.0 - 1.0/0.0; > >definitions leading to an error (see at the end for the 3 errors of this > >type). > >How should one define NAN when using icc? > > > NAN can be defined as a bit-field. Do you have a pointer to more information on this - I googled around a bit, but without success. > >Commenting out the build of special in Lib/setup.py > >the next problem sits in > > > >compilation aborted for build/src/Lib/interpolate/dfitpackmodule.c (code > >2) > >error: Command "icc -fno-strict-aliasing -OPT:Olimit=0 -DNDEBUG -g -O3 > >-Wall -Wstrict-prototypes -fPIC -fPIC -Ibuild/src > >-I/home/baecker/python/newscipy/lib/python2.4/site-packages/scipy/base/include > >-I/home/baecker/python/include/python2.4 -c > >build/src/Lib/interpolate/dfitpackmodule.c -o > >build/temp.linux-ia64-2.4/build/src/Lib/interpolate/dfitpackmodule.o" > >failed with exit status 2 > > > >Here icc does not like thoses places with: > > > >build/src/Lib/interpolate/dfitpackmodule.c(2976): error: expected a ";" > > int calc_lwrk1(void) { > > > >In code like > > int calc_lwrk1(void) { > > int u = nxest-kx-1; > > int v = nyest-ky-1; > > int km = MAX(kx,ky)+1; > > int ne = MAX(nxest,nyest); > > int bx = kx*v+ky+1; > > int by = ky*u+kx+1; > > int b1,b2; > > if (bx<=by) {b1=bx;b2=bx+v-ky;} > > else {b1=by;b2=by+u-kx;} > > return u*v*(2+b1+b2)+2*(u+v+km*(m+ne)+ne-kx-ky)+b2+1; > > } > > > >Is this code f2py generated?? > > > I don't think so. The f2py-generated code should be clean; I just checked again: the file dfitpackmodule.c does not exist on check-out and there is a fitpack.pyf and the setup.py in interpolate has the lines config.add_extension('dfitpack', sources=['fitpack.pyf'], libraries=['fitpack'], ) So I do think that it is generated by f2py. Anyway, I can't dig into this further, because my whole build is screwed at the moment - no idea what is going on ... Best, Arnd From jorgen.stenarson at bostream.nu Wed Dec 7 15:16:22 2005 From: jorgen.stenarson at bostream.nu (=?ISO-8859-1?Q?J=F6rgen_Stenarson?=) Date: Wed, 07 Dec 2005 21:16:22 +0100 Subject: [SciPy-dev] Bug in distuils appendpath In-Reply-To: References: Message-ID: <43974316.6030404@bostream.nu> Hi, The tests for appendpath in scipy.distutils.misc_util fail on a windows machine. 
The reason is that the join used to build the expected answer does not normalize the path to contain \ instead of /. There is also a small problem in appendpath itself. Both prefix and path need to have abspath applied, otherwise one of them will begin with c:\prefix and the other one won't. After these two changes the tests pass on my windows machine. Best regards, Jörgen Stenarson -------------- next part -------------- An embedded and charset-unspecified text was scrubbed... Name: patch URL: From oliphant.travis at ieee.org Wed Dec 7 18:11:26 2005 From: oliphant.travis at ieee.org (Travis Oliphant) Date: Wed, 07 Dec 2005 16:11:26 -0700 Subject: [SciPy-dev] simple UMFPACK wrapper added In-Reply-To: <43959167.3040608@ntc.zcu.cz> References: <43959167.3040608@ntc.zcu.cz> Message-ID: <43976C1E.9000201@ieee.org> Robert Cimrman wrote: >I just added a simple swig-generated wrapper of UMFPACK sparse linear >solver to the SciPy sandbox. It is very small and does not depend on any >other package except scipy.base and scipy.sparse. Below you can find its >performance compared with the current SuperLU-based implementation >(scipy.sparse.solve()) on matrices corresponding to the Poisson equation >(e.g. temperature distribution). The test script is included in the >sandbox, too. > >The brave and interested can test it, but first you will have to edit >the setup.py (a very basic one, needs scipyfication), since the paths to >umfpack and scipy are hardcoded. > > This is great Robert. What is keeping us from replacing the SuperLU-based solver entirely? -Travis From byrnes at bu.edu Thu Dec 8 01:32:48 2005 From: byrnes at bu.edu (John Byrnes) Date: Thu, 8 Dec 2005 01:32:48 -0500 Subject: [SciPy-dev] possible bug in interp1d In-Reply-To: <200512071021.01030.dd55@cornell.edu> References: <200512051705.06993.dd55@cornell.edu> <43969A07.5000608@ieee.org> <4396DC1F.3050404@eduplay.com> <200512071021.01030.dd55@cornell.edu> Message-ID: <20051208063248.GA27318@localhost.localdomain> In order to get interp1d to work, I commented out the putmask line in interpolate.py. In my case, I'm dealing with generated data, so there aren't any dropouts. It seems to work. John On Wed, Dec 07, 2005 at 10:21:00AM -0500, Darren Dale wrote: > On Wednesday 07 December 2005 07:57, Tyler W. Wilson wrote: > > Well, that fixed the that issue, but now I get the following: > > > > File "E:\Python24\Lib\site-packages\scipy\interpolate\interpolate.py", > > line 180, in __call__ > > putmask(y_new, new_out.ravel(), self.fill_value) > > File "E:\Python24\Lib\site-packages\scipy\base\oldnumeric.py", line > > 170, in putmask > > return a.putmask(v, mask) > > TypeError: array cannot be safely cast to required type > > reminder of the test script: > > from scipy import * > a = arange(100) > b = arange(10,90,100) > i = interpolate.interp1d(a,a) > i(b) > > I checked the values of a and v in putmask. a has dtype 'l', and v is nan. I > don't understand the TypeError, since a[1]=nan doesn't raise one (it just sets > a[2] equal to 0). > If I change the dtype of a to 'f' and keep b the same, I get the following: > > /usr/lib64/python2.4/site-packages/scipy/interpolate/interpolate.py in > __call__(self, x_new) > 150 # would be inserted. > 151 # Note: If x_new[n] = x[m], then m is returned by > searchsorted. > --> 152 x_new_indices = searchsorted(self.x,x_new) > 153 # 3. Clip x_new_indices so that they are within the range of > 154 # self.x indices and at least 1.
Removes mis-interpolation > > /usr/lib64/python2.4/site-packages/scipy/base/oldnumeric.py in searchsorted(a, > v) > 235 a = array(a,copy=False) > 236 print a.dtypechar, a, v > --> 237 return a.searchsorted(v) > 238 > 239 def resize(a, new_shape): > > TypeError: array cannot be safely cast to required type > > _______________________________________________ > Scipy-dev mailing list > Scipy-dev at scipy.net > http://www.scipy.net/mailman/listinfo/scipy-dev -- When a man sits with a pretty girl for an hour, it seems like a minute. But let him sit on a hot stove for a minute -- and it's longer than any hour. That's relativity. -- Albert Einstein From arnd.baecker at web.de Thu Dec 8 02:48:06 2005 From: arnd.baecker at web.de (Arnd Baecker) Date: Thu, 8 Dec 2005 08:48:06 +0100 (CET) Subject: [SciPy-dev] more 64 Bit testing (was linspace) In-Reply-To: References: <43951533.3030001@ieee.org><4395280E.1070408@ieee.org> Message-ID: Hi, now I managed to have another closer look at the compile problem with dfitpackmodule.c. Some facts ahead:

- this is with intel icc
- dfitpackmodule.c is generated via f2py (it is explicitly written at the start of that file - sorry for not making this clear in the very beginning - I jumped straight to the error ...)
- The error is
    build/src/Lib/interpolate/dfitpackmodule.c(2528): error: expected a ";"
    int calc_lwrk1(void) {
- the code looks alright to me (but after too long exposure to python I would not be able to spot a missing ";" anyway )
- the same file compiles fine with gcc!

So does this mean the code uses something which is specific to gcc? Maybe some (ic)C expert can spot the problem?? (Somehow I am tempted to say that this is a bug in icc ...) Any ideas are appreciated. I attach the corresponding section of the code below (line 2528 is marked by <=====).
Best, Arnd -------------------------------- dfitpackmodule.c: Lines - 2401-2554 /******************************** surfit_smth ********************************/ static char doc_f2py_rout_dfitpack_surfit_smth[] = "\ Function signature:\n\ nx,tx,ny,ty,c,fp,wrk1,ier = surfit_smth(x,y,z,[w,xb,xe,yb,ye,kx,ky,s,nxest,nyest,eps,lwrk2])\n\ Required arguments:\n" " x : input rank-1 array('d') with bounds (m)\n" " y : input rank-1 array('d') with bounds (m)\n" " z : input rank-1 array('d') with bounds (m)\n" "Optional arguments:\n" " w := 1.0 input rank-1 array('d') with bounds (m)\n" " xb := dmin(x,m) input float\n" " xe := dmax(x,m) input float\n" " yb := dmin(y,m) input float\n" " ye := dmax(y,m) input float\n" " kx := 3 input int\n" " ky := 3 input int\n" " s := m input float\n" " nxest := imax(kx+1+sqrt(m/2),2*(kx+1)) input int\n" " nyest := imax(ky+1+sqrt(m/2),2*(ky+1)) input int\n" " eps := 1e-16 input float\n" " lwrk2 := calc_lwrk2() input int\n" "Return objects:\n" " nx : int\n" " tx : rank-1 array('d') with bounds (nmax)\n" " ny : int\n" " ty : rank-1 array('d') with bounds (nmax)\n" " c : rank-1 array('d') with bounds ((nxest-kx-1)*(nyest-ky-1))\n" " fp : float\n" " wrk1 : rank-1 array('d') with bounds (lwrk1)\n" " ier : int"; /* extern void F_FUNC(surfit,SURFIT)(int*,int*,double*,double*,double*,double*,double*,double*,double*,double*,int*,int*,double*,int*,int*,int*,double*,int*,double*,int*,double*,double*,double*,double*,int*,double*,int*,int*,int*,int*); */ static PyObject *f2py_rout_dfitpack_surfit_smth(const PyObject *capi_self, PyObject *capi_args, PyObject *capi_keywds, void (*f2py_func)(int*,int*,double*,double*,double*,double*,double*,double*,double*,double*,int*,int*,double*,int*,int*,int*,double*,int*,double*,int*,double*,double*,double*,double*,int*,double*,int*,int*,int*,int*)) { PyObject * volatile capi_buildvalue = NULL; volatile int f2py_success = 1; /*decl*/ int iopt = 0; int m = 0; double *x = NULL; intp x_Dims[1] = {-1}; const int x_Rank = 1; PyArrayObject *capi_x_tmp = NULL; int capi_x_intent = 0; PyObject *x_capi = Py_None; double *y = NULL; intp y_Dims[1] = {-1}; const int y_Rank = 1; PyArrayObject *capi_y_tmp = NULL; int capi_y_intent = 0; PyObject *y_capi = Py_None; double *z = NULL; intp z_Dims[1] = {-1}; const int z_Rank = 1; PyArrayObject *capi_z_tmp = NULL; int capi_z_intent = 0; PyObject *z_capi = Py_None; double *w = NULL; intp w_Dims[1] = {-1}; const int w_Rank = 1; PyArrayObject *capi_w_tmp = NULL; int capi_w_intent = 0; PyObject *w_capi = Py_None; double xb = 0; PyObject *xb_capi = Py_None; double xe = 0; PyObject *xe_capi = Py_None; double yb = 0; PyObject *yb_capi = Py_None; double ye = 0; PyObject *ye_capi = Py_None; int kx = 0; PyObject *kx_capi = Py_None; int ky = 0; PyObject *ky_capi = Py_None; double s = 0; PyObject *s_capi = Py_None; int nxest = 0; PyObject *nxest_capi = Py_None; int nyest = 0; PyObject *nyest_capi = Py_None; int nmax = 0; double eps = 0; PyObject *eps_capi = Py_None; int nx = 0; double *tx = NULL; intp tx_Dims[1] = {-1}; const int tx_Rank = 1; PyArrayObject *capi_tx_tmp = NULL; int capi_tx_intent = 0; int ny = 0; double *ty = NULL; intp ty_Dims[1] = {-1}; const int ty_Rank = 1; PyArrayObject *capi_ty_tmp = NULL; int capi_ty_intent = 0; double *c = NULL; intp c_Dims[1] = {-1}; const int c_Rank = 1; PyArrayObject *capi_c_tmp = NULL; int capi_c_intent = 0; double fp = 0; double *wrk1 = NULL; intp wrk1_Dims[1] = {-1}; const int wrk1_Rank = 1; PyArrayObject *capi_wrk1_tmp = NULL; int capi_wrk1_intent = 0; int lwrk1 = 0; double *wrk2 = 
NULL; intp wrk2_Dims[1] = {-1}; const int wrk2_Rank = 1; PyArrayObject *capi_wrk2_tmp = NULL; int capi_wrk2_intent = 0; int lwrk2 = 0; PyObject *lwrk2_capi = Py_None; int *iwrk = NULL; intp iwrk_Dims[1] = {-1}; const int iwrk_Rank = 1; PyArrayObject *capi_iwrk_tmp = NULL; int capi_iwrk_intent = 0; int kwrk = 0; int ier = 0; static char *capi_kwlist[] = {"x","y","z","w","xb","xe","yb","ye","kx","ky","s","nxest","nyest","eps","lwrk2",NULL}; /* start usercode multiline (0) */ int calc_lwrk1(void) { <======================= 2528! int u = nxest-kx-1; int v = nyest-ky-1; int km = MAX(kx,ky)+1; int ne = MAX(nxest,nyest); int bx = kx*v+ky+1; int by = ky*u+kx+1; int b1,b2; if (bx<=by) {b1=bx;b2=bx+v-ky;} else {b1=by;b2=by+u-kx;} return u*v*(2+b1+b2)+2*(u+v+km*(m+ne)+ne-kx-ky)+b2+1; } int calc_lwrk2(void) { int u = nxest-kx-1; int v = nyest-ky-1; int bx = kx*v+ky+1; int by = ky*u+kx+1; int b2 = (bx<=by?bx+v-ky:by+u-kx); return u*v*(b2+1)+b2; } /* end multiline (0)*/ /*routdebugenter*/ #ifdef F2PY_REPORT_ATEXIT f2py_start_clock(); #endif From cookedm at physics.mcmaster.ca Thu Dec 8 02:54:01 2005 From: cookedm at physics.mcmaster.ca (David M. Cooke) Date: Thu, 08 Dec 2005 02:54:01 -0500 Subject: [SciPy-dev] more 64 Bit testing In-Reply-To: (Arnd Baecker's message of "Thu, 8 Dec 2005 08:48:06 +0100 (CET)") References: <43951533.3030001@ieee.org> <4395280E.1070408@ieee.org> Message-ID: Arnd Baecker writes: > Hi, > > now I managed to have another closer look at the compile problem > with dfitpackmodule.c. > > Some facts ahead > - this is with intel icc > - dfitpackmodule.c is generated via f2py > (it is explicitely written at the start of that file - > sorry for not making this clear in the very begining - > I jumped straight to the error ...) > - The error is > build/src/Lib/interpolate/dfitpackmodule.c(2528): error: expected a ";" > int calc_lwrk1(void) { > - the code looks alright to me > (but after too long exposure to python I would not > be able to spot a missing ";" anyway ) > - the same file compiles fine with gcc! > > So does this mean the code uses something which is specific to gcc? > Maybe some (ic)C expert can spot the problem?? > (Somehow I am tempted to say that this is a bug in icc ...) I've pruned it down below. It looks to me like icc may not like nested functions. IIRC, nested functions have been supported by gcc for a while, they're in the C99 standard, but not in C89. It's complaining about a missing ";" because adding one before the "{" on that line would make it a valid function declaration. Is there a flag to make icc use the C99 standard? That may help. > Any ideas are appreciated. I attach the corresponding section of > the code below (line 2528 is marked by <=====). 
> > Best, Arnd > > -------------------------------- > dfitpackmodule.c: [snip] > /* extern void > F_FUNC(surfit,SURFIT)(int*,int*,double*,double*,double*,double*,double*,double*,double*,double*,int*,int*,double*,int*,int*,int*,double*,int*,double*,int*,double*,double*,double*,double*,int*,double*,int*,int*,int*,int*); > */ > static PyObject *f2py_rout_dfitpack_surfit_smth(const PyObject *capi_self, > PyObject *capi_args, > PyObject *capi_keywds, > void > (*f2py_func)(int*,int*,double*,double*,double*,double*,double*,double*,double*,double*,int*,int*,double*,int*,int*,int*,double*,int*,double*,int*,double*,double*,double*,double*,int*,double*,int*,int*,int*,int*)) > { > PyObject * volatile capi_buildvalue = NULL; > volatile int f2py_success = 1; > /*decl*/ [snip] > int ier = 0; > static char *capi_kwlist[] = > {"x","y","z","w","xb","xe","yb","ye","kx","ky","s","nxest","nyest","eps","lwrk2",NULL}; > /* start usercode multiline (0) */ > > int calc_lwrk1(void) { <======================= 2528! > int u = nxest-kx-1; > int v = nyest-ky-1; > int km = MAX(kx,ky)+1; > int ne = MAX(nxest,nyest); > int bx = kx*v+ky+1; > int by = ky*u+kx+1; > int b1,b2; > if (bx<=by) {b1=bx;b2=bx+v-ky;} > else {b1=by;b2=by+u-kx;} > return u*v*(2+b1+b2)+2*(u+v+km*(m+ne)+ne-kx-ky)+b2+1; > } > int calc_lwrk2(void) { > int u = nxest-kx-1; > int v = nyest-ky-1; > int bx = kx*v+ky+1; > int by = ky*u+kx+1; > int b2 = (bx<=by?bx+v-ky:by+u-kx); > return u*v*(b2+1)+b2; > } > > /* end multiline (0)*/ > /*routdebugenter*/ > #ifdef F2PY_REPORT_ATEXIT > f2py_start_clock(); > #endif -- |>|\/|< /--------------------------------------------------------------------------\ |David M. Cooke http://arbutus.physics.mcmaster.ca/dmc/ |cookedm at physics.mcmaster.ca From arnd.baecker at web.de Thu Dec 8 03:11:42 2005 From: arnd.baecker at web.de (Arnd Baecker) Date: Thu, 8 Dec 2005 09:11:42 +0100 (CET) Subject: [SciPy-dev] more 64 Bit testing In-Reply-To: References: <43951533.3030001@ieee.org> <4395280E.1070408@ieee.org> Message-ID: Hi David, thanks a lot for having a look at this! On Thu, 8 Dec 2005, David M. Cooke wrote: > Arnd Baecker writes: > > > Hi, > > > > now I managed to have another closer look at the compile problem > > with dfitpackmodule.c. > > > > Some facts ahead > > - this is with intel icc > > - dfitpackmodule.c is generated via f2py > > (it is explicitely written at the start of that file - > > sorry for not making this clear in the very begining - > > I jumped straight to the error ...) > > - The error is > > build/src/Lib/interpolate/dfitpackmodule.c(2528): error: expected a ";" > > int calc_lwrk1(void) { > > - the code looks alright to me > > (but after too long exposure to python I would not > > be able to spot a missing ";" anyway ) > > - the same file compiles fine with gcc! > > > > So does this mean the code uses something which is specific to gcc? > > Maybe some (ic)C expert can spot the problem?? > > (Somehow I am tempted to say that this is a bug in icc ...) > > I've pruned it down below. It looks to me like icc may not like nested > functions. IIRC, nested functions have been supported by gcc for a > while, they're in the C99 standard, but not in C89. It's complaining > about a missing ";" because adding one before the "{" on that line > would make it a valid function declaration. > > Is there a flag to make icc use the C99 standard? That may help. yes there is - from man icc -std=c99 Enable C99 support for C programs but this does not seem to help. What next? 
Many thanks, Arnd From oliphant.travis at ieee.org Thu Dec 8 03:27:27 2005 From: oliphant.travis at ieee.org (Travis Oliphant) Date: Thu, 08 Dec 2005 01:27:27 -0700 Subject: [SciPy-dev] more 64 Bit testing In-Reply-To: References: <43951533.3030001@ieee.org> <4395280E.1070408@ieee.org> Message-ID: <4397EE6F.8000300@ieee.org> Arnd Baecker wrote: >Hi David, > > > >yes there is - from man icc > -std=c99 Enable C99 support for C programs > > > http://cache-www.intel.com/cd/00/00/22/23/222301_222301.pdf This document gives some of the compatibility issues between the icc compiler and gcc. In particular, note that nested functions are not supported. So, it looks like icc can't be used to build the fitpackmodule.c unless f2py is changed to not use nested functions. There are only a few incompatibilities in icc and gcc but that apparently is one of them. I'm not sure how high on the priority list supporting the icc compiler is for most of us. But, we would gladly accept fixes to allow the icc compiler to be used.... Sorry about the trouble. -Travis From cimrman3 at ntc.zcu.cz Thu Dec 8 04:30:02 2005 From: cimrman3 at ntc.zcu.cz (Robert Cimrman) Date: Thu, 08 Dec 2005 10:30:02 +0100 Subject: [SciPy-dev] simple UMFPACK wrapper added In-Reply-To: <43976C1E.9000201@ieee.org> References: <43959167.3040608@ntc.zcu.cz> <43976C1E.9000201@ieee.org> Message-ID: <4397FD1A.2060007@ntc.zcu.cz> Travis Oliphant wrote: > Robert Cimrman wrote: > > >>I just added a simple swig-generated wrapper of UMFPACK sparse linear >>solver to the SciPy sandbox. It is very small and does not depend on any >>other package except scipy.base and scipy.sparse. Below you can find its >>performance compared with the current SuperLU-based implementation >>(scipy.sparse.solve()) on matrices corresponding to the Poisson equation >>(e.g. temperature distribution). The test script is included in the >>sandbox, too. >> >>The brave and interested can test it, but first you will have to edit >>the setup.py (a very basic one, needs scipyfication), since the paths to >>umfpack and scipy are hardcoded. >> >> > > > This is great Robert. > > What is keeping us from replacing the SuperLU-based solver entirely? I would say nothing except some more testing and documenting. :) r. From cookedm at physics.mcmaster.ca Thu Dec 8 04:58:35 2005 From: cookedm at physics.mcmaster.ca (David M. Cooke) Date: Thu, 08 Dec 2005 04:58:35 -0500 Subject: [SciPy-dev] more 64 Bit testing In-Reply-To: <4397EE6F.8000300@ieee.org> (Travis Oliphant's message of "Thu, 08 Dec 2005 01:27:27 -0700") References: <43951533.3030001@ieee.org> <4395280E.1070408@ieee.org> <4397EE6F.8000300@ieee.org> Message-ID: Travis Oliphant writes: > Arnd Baecker wrote: > >>Hi David, >> >> >> >>yes there is - from man icc >> -std=c99 Enable C99 support for C programs >> >> >> > http://cache-www.intel.com/cd/00/00/22/23/222301_222301.pdf > > This document gives some of the compatibility issues between the icc > compiler and gcc. > > In particular, note that nested functions are not supported. So, it > looks like icc can't be used to build the fitpackmodule.c unless f2py is > changed to not use nested functions. The problem isn't f2py generating a nested function per se; it's that fitpack.pyf uses a "usercode" directive inside of a subroutine declaration to insert some definitions for functions that it uses when initializing some variables. I've moved those functions to module-level functions instead. It's revision 1479. 
Works for me with gcc (but then, it worked for me before :-) -- |>|\/|< /--------------------------------------------------------------------------\ |David M. Cooke http://arbutus.physics.mcmaster.ca/dmc/ |cookedm at physics.mcmaster.ca From arnd.baecker at web.de Thu Dec 8 04:59:35 2005 From: arnd.baecker at web.de (Arnd Baecker) Date: Thu, 8 Dec 2005 10:59:35 +0100 (CET) Subject: [SciPy-dev] more 64 Bit testing In-Reply-To: <4397EE6F.8000300@ieee.org> References: <43951533.3030001@ieee.org> <4395280E.1070408@ieee.org> <4397EE6F.8000300@ieee.org> Message-ID: On Thu, 8 Dec 2005, Travis Oliphant wrote: > Arnd Baecker wrote: > > >Hi David, > > > >yes there is - from man icc > > -std=c99 Enable C99 support for C programs > > > > > > > http://cache-www.intel.com/cd/00/00/22/23/222301_222301.pdf > > This document gives some of the compatibility issues between the icc > > compiler and gcc. Thanks - good to know about that! > In particular, note that nested functions are not supported. So, it > looks like icc can't be used to build the fitpackmodule.c unless f2py is > changed to not use nested functions. It seems that nested functions are not in the C99 standard and some people have a strong opinion on them http://lkml.org/lkml/2004/5/12/248 (no I did not google for negative statements, but just wanted to learn about how widespread nested functions are ...;-). I don't know the internals of f2py (way out of my league), but at first glance it might be possible to transfer those nested functions outside of the function and call them with the corresponding parameters - overall this seems more complicated, already on the C side, so I don't even want to speculate on how difficult this would be on the f2py level... > There are only a few incompatibilities in icc and gcc but that > apparently is one of them. I'm not sure how high on the priority list > supporting the icc compiler is for most of us. Fair enough - it *might* have some importance to us as python itself has been reported to be 16% faster with icc http://mail.python.org/pipermail/python-dev/2005-June/054233.html OTOH, a lot of the time spent in our applications is in LAPACK and FFT. Still, when you have access to a machine with 192 CPUs it would be good to get the maximal possible performance... > But, we would gladly > accept fixes to allow the icc compiler to be used.... Presently I see three problems:

a) the NAN stuff for cephes
b) the dfitpackmodule problem above
c) the bunch of test failures

> Sorry about the trouble. No problem at all! - I am only worried about c) as this might indicate something more problematic. Unfortunately, I did not have enough time to look into more details than I posted so far. Best, Arnd From oliphant.travis at ieee.org Thu Dec 8 03:58:54 2005 From: oliphant.travis at ieee.org (Travis Oliphant) Date: Thu, 08 Dec 2005 01:58:54 -0700 Subject: [SciPy-dev] more 64 Bit testing In-Reply-To: <4397EE6F.8000300@ieee.org> References: <43951533.3030001@ieee.org> <4395280E.1070408@ieee.org> <4397EE6F.8000300@ieee.org> Message-ID: <4397F5CE.5050005@ieee.org> Travis Oliphant wrote: >Arnd Baecker wrote: > > > >>Hi David, >> >> >> >>yes there is - from man icc >> -std=c99 Enable C99 support for C programs >> >> >> >> >> >http://cache-www.intel.com/cd/00/00/22/23/222301_222301.pdf > >This document gives some of the compatibility issues between the icc >compiler and gcc. > >In particular, note that nested functions are not supported.
So, it >looks like icc can't be used to build the fitpackmodule.c unless f2py is > > >changed to not use nested functions. > > Actually, it's not f2py here that's using nested functions. It's the "user code" in the actual .pyf file. This should be changed so that a nested function is not used. Either a module-level function should be defined or the calculation written as a big macro. I'd like to figure out why you are getting errors also in the scipy core after compiling with icc. This will require a bit of detective work, however, and may not happen for awhile. -Travis From oliphant.travis at ieee.org Thu Dec 8 06:19:29 2005 From: oliphant.travis at ieee.org (Travis Oliphant) Date: Thu, 08 Dec 2005 04:19:29 -0700 Subject: [SciPy-dev] more 64 Bit testing In-Reply-To: References: <43951533.3030001@ieee.org> <4395280E.1070408@ieee.org> <4397EE6F.8000300@ieee.org> Message-ID: <439816C1.7060804@ieee.org> Arnd Baecker wrote: >It seems that nested functions are not in the C99 standard >and some people have a strong opinion on them >http://lkml.org/lkml/2004/5/12/248 >(no I did not google for negative statements, but just wanted >to learn about how widespread nested functions are ...;-). > > I think we should try to avoid nested functions and I'm glad David removed them. >>There are only a few incompatibilities in icc and gcc but that >> >>Presently I see three problems >>a) the NAN stuff for cephes >> >> I think this one can be fixed with the right configuration settings. Notice in cephes/const.c there is already a bit-field definition of NAN and the only way 1.0/0.0 gets tried is if UNK is defined. So, we need to fix-up the cephes setup.py file or mconf.h to define the right constants. >>b) the dfitpackmodule problem above >> >> Looks like that one is fixed already (thanks David...) >>c) the bunch of test failures >> >> I would like to get to the bottom of these, eventually.... Perhaps its just a matter of not doing arithmetic on void * pointers. The gcc default might be different than the icc default (although I would expect that to give a segfault and not just give numerical errors). Or, there might be some other subtlety that's been over-looked. Ideally, of course, scipy core should compile and work with a wide-variety of compilers. I've just fixed up the records.py module (I think there is some very cool functionality that came out of that endeavor) and presently I'm working on speeding up the ufunc loop in the BUFFERED case (to avoid all the unnecessary copying that is going on and was definitely slowing down the klein test unnecessarily). I have the new algorithm coded but now need to get all the bugs out of it. I'm anxious to see if the new approach improves the scipy times in Gerard's klein test (I also fixed up the LONG_remainder function to not use floating-point unless there is a mismatch in sign --- when C's % and Python's % don't agree). But, all of this will have to wait until after I give the final to my linear algebra students later today... -Travis From dd55 at cornell.edu Thu Dec 8 09:25:32 2005 From: dd55 at cornell.edu (Darren Dale) Date: Thu, 8 Dec 2005 09:25:32 -0500 Subject: [SciPy-dev] bug in array.putmask (was possible bug in interp1d) In-Reply-To: <4396DC1F.3050404@eduplay.com> References: <200512051705.06993.dd55@cornell.edu> <43969A07.5000608@ieee.org> <4396DC1F.3050404@eduplay.com> Message-ID: <200512080925.32940.dd55@cornell.edu> On Wednesday 07 December 2005 07:57, Tyler W. 
Wilson wrote: > Well, that fixed the that issue, but now I get the following: > > File "E:\Python24\Lib\site-packages\scipy\interpolate\interpolate.py", > line 180, in __call__ > putmask(y_new, new_out.ravel(), self.fill_value) > File "E:\Python24\Lib\site-packages\scipy\base\oldnumeric.py", line > 170, in putmask > return a.putmask(v, mask) > TypeError: array cannot be safely cast to required type Line 179 in scipy/Lib/interpolate/interpolate.py can be changed from

    new_out = ones(new_shape)*out_of_bounds

to

    new_out = ones(new_shape, '?')*out_of_bounds

But that raises a new question. I don't understand why this raises a casting error:

    ones(10).putmask(0.01, ones(10))

and this does not:

    ones(10).astype('?')

From schofield at ftw.at Thu Dec 8 10:22:08 2005 From: schofield at ftw.at (Ed Schofield) Date: Thu, 08 Dec 2005 15:22:08 +0000 Subject: [SciPy-dev] simple UMFPACK wrapper added In-Reply-To: <43976C1E.9000201@ieee.org> References: <43959167.3040608@ntc.zcu.cz> <43976C1E.9000201@ieee.org> Message-ID: <43984FA0.2040904@ftw.at> Travis Oliphant wrote: >Robert Cimrman wrote: > > >>I just added a simple swig-generated wrapper of UMFPACK sparse linear >>solver to the SciPy sandbox. It is very small and does not depend on any >>other package except scipy.base and scipy.sparse. Below you can find its >>performance compared with the current SuperLU-based implementation >>(scipy.sparse.solve()) on matrices corresponding to the Poisson equation >>(e.g. temperature distribution). The test script is included in the >>sandbox, too. >> >>The brave and interested can test it, but first you will have to edit >>the setup.py (a very basic one, needs scipyfication), since the paths to >>umfpack and scipy are hardcoded. >> >> > >This is great Robert. > > Yes, well done :) >What is keeping us from replacing the SuperLU-based solver entirely? > > Sounds good to me ... -- Ed From arnd.baecker at web.de Thu Dec 8 12:13:54 2005 From: arnd.baecker at web.de (Arnd Baecker) Date: Thu, 8 Dec 2005 18:13:54 +0100 (CET) Subject: [SciPy-dev] more 64 Bit testing In-Reply-To: References: <43951533.3030001@ieee.org> <4395280E.1070408@ieee.org> <4397EE6F.8000300@ieee.org> Message-ID: Hi David, On Thu, 8 Dec 2005, David M. Cooke wrote: > Travis Oliphant writes: > > > Arnd Baecker wrote: > > > >>Hi David, > >> > >> > >> > >>yes there is - from man icc > >> -std=c99 Enable C99 support for C programs > >> > >> > >> > > http://cache-www.intel.com/cd/00/00/22/23/222301_222301.pdf > > > > This document gives some of the compatibility issues between the icc > > compiler and gcc. > > > > In particular, note that nested functions are not supported. So, it > > looks like icc can't be used to build the fitpackmodule.c unless f2py is > > changed to not use nested functions. > > The problem isn't f2py generating a nested function per se; it's that > fitpack.pyf uses a "usercode" directive inside of a subroutine > declaration to insert some definitions for functions that it uses when > initializing some variables. I've moved those functions to > module-level functions instead. It's revision 1479. Works for me with > gcc (but then, it worked for me before :-) **Many thanks** - It compiles fine now!!
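((Aside on Darren's putmask question above: the safe-cast check runs from the fill value to the target array's dtype, so an integer target plus a float fill is exactly what trips it. A minimal sketch of the reported behaviour, not a claim about what putmask should do:

    from scipy import ones
    a = ones(10)                 # dtype 'l' by default
    a.putmask(0.01, ones(10))    # 0.01 cannot be safely cast to 'l' -> TypeError
    b = ones(10).astype('d')
    b.putmask(0.01, ones(10))    # no complaint expected: float into float array

))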
I can't test it right now, because after all this, the next problem appears when it wants to handle fftpack link = self.fcompiler.link_shared_object AttributeError: 'NoneType' object has no attribute 'link_shared_object' (see at the end for more) ((I used export FC_VENDOR=Intel export ATLAS=/opt/intel/mkl72/lib/64:/opt/intel/mkl72/include/ python setup.py config --compiler=intel config_fc --fcompiler=intel install Maybe I should just use python setup.py config --fcompiler=intel install as Pearu said ... )) Best, Arnd Lib/fftpack/src/zrfft.c(11): warning #1418: external definition with no prior declaration extern void zrfft(complex_double *inout, ^ Traceback (most recent call last): File "setup.py", line 42, in ? setup_package() File "setup.py", line 35, in setup_package setup( **config.todict() ) File "/home/baecker/python//newscipy/lib/python2.4/site-packages/scipy/distutils/core.py", line 93, in setup return old_setup(**new_attr) File "/home/baecker/python//lib/python2.4/distutils/core.py", line 149, in setup dist.run_commands() File "/home/baecker/python//lib/python2.4/distutils/dist.py", line 946, in run_commands self.run_command(cmd) File "/home/baecker/python//lib/python2.4/distutils/dist.py", line 966, in run_command cmd_obj.run() File "/home/baecker/python//lib/python2.4/distutils/command/install.py", line 506, in run self.run_command('build') File "/home/baecker/python//lib/python2.4/distutils/cmd.py", line 333, in run_command self.distribution.run_command(command) File "/home/baecker/python//lib/python2.4/distutils/dist.py", line 966, in run_command cmd_obj.run() File "/home/baecker/python//lib/python2.4/distutils/command/build.py", line 112, in run self.run_command(cmd_name) File "/home/baecker/python//lib/python2.4/distutils/cmd.py", line 333, in run_command self.distribution.run_command(command) File "/home/baecker/python//lib/python2.4/distutils/dist.py", line 966, in run_command cmd_obj.run() File "/home/baecker/python//newscipy/lib/python2.4/site-packages/scipy/distutils/command/build_ext.py", line 107, in run self.build_extensions() File "/home/baecker/python//lib/python2.4/distutils/command/build_ext.py", line 405, in build_extensions self.build_extension(ext) File "/home/baecker/python//newscipy/lib/python2.4/site-packages/scipy/distutils/command/build_ext.py", line 299, in build_extension link = self.fcompiler.link_shared_object AttributeError: 'NoneType' object has no attribute 'link_shared_object' From oliphant.travis at ieee.org Thu Dec 8 12:20:02 2005 From: oliphant.travis at ieee.org (Travis Oliphant) Date: Thu, 08 Dec 2005 10:20:02 -0700 Subject: [SciPy-dev] [SciPy-user] 'object_arrtype' problem In-Reply-To: References: Message-ID: <43986B42.5000904@ieee.org> Ryan Krauss wrote: >I am having a problem very similar to one Chris Fonnesbeck was having >a few days ago. I have a list of user defined objects. I also want >to allow for the passing of a scalar to this particular function, so I >call atleast_1d on the list/scalar and then iterate over it. After I >call atleast_1d, indexing the list returns object_arrtype. One >interesting thing is that I have redefined __repr__ for this object, >and that method seems to be called correctly, but trying to access any >other property of the object causes messages like "*** AttributeError: >'object_arrtype' object has no attribute 'dof'". Here is a pdb >session just before and after atleast_1d is called. Is this a bug or >should I be doing this differently with new scipy (this code worked >fine with old scipy). 
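(A condensed sketch of the symptom, since the pdb session itself is not reproduced above - the class and attribute names are stand-ins following Ryan's description:

    from scipy import atleast_1d

    class Mode:
        def __init__(self, dof):
            self.dof = dof
        def __repr__(self):
            return 'Mode(dof=%d)' % self.dof

    elem = atleast_1d([Mode(1), Mode(2)])[0]  # an 'object_arrtype', not a Mode
    print repr(elem)   # __repr__ gets forwarded, so this looks right
    print elem.dof     # -> AttributeError: 'object_arrtype' object has no attribute 'dof'

)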
> > I think the object scalars need a little more work. There is a valid question whether to have them at all. What these scalars do is unify the methods of scalars and arrays. Without them, object arrays would have different selection semantics than non-object arrays. Perhaps this would be a "good" thing. At any rate, you can always get the underlying object the object-scalar wraps using the toscalar() method. I think a very natural thing-to-do, though, which would clear up these issues and leave the object scalar around is to hand off behavior of the object to the underlying object in all cases. -Travis From arnd.baecker at web.de Thu Dec 8 12:49:57 2005 From: arnd.baecker at web.de (Arnd Baecker) Date: Thu, 8 Dec 2005 18:49:57 +0100 (CET) Subject: [SciPy-dev] more 64 Bit testing In-Reply-To: <439816C1.7060804@ieee.org> References: <43951533.3030001@ieee.org> <4395280E.1070408@ieee.org> <4397EE6F.8000300@ieee.org><439816C1.7060804@ieee.org> Message-ID: On Thu, 8 Dec 2005, Travis Oliphant wrote: > Arnd Baecker wrote: [...] > >>Presently I see three problems > >>a) the NAN stuff for cephes > >> > >> > I think this one can be fixed with the right configuration settings. > Notice in cephes/const.c there is already > a bit-field definition of NAN and the only way 1.0/0.0 gets tried is if > UNK is defined. So, we need to fix-up the cephes setup.py file or > mconf.h to define the right constants. I just added a `defined(__itanium__)` in mconf.h

#elif defined(ns32000) || defined(sun386) || \
 defined(i386) || defined(MIPSEL) || defined(_MIPSEL) || \
 defined(BIT_ZERO_ON_RIGHT) || defined(__alpha__) || defined(__alpha) || \
 defined(sequent) || defined(i386) || defined(__itanium__) || \
 defined(__ns32000__) || defined(__sun386__) || defined(__i386__)
#define IBMPC 1   /* Intel IEEE, low order words come first */
#define BIGENDIAN 0

It seems it got beyond the critical point for cephes ... (let's wait for the tests ... - hmm don't look too good, more on this tomorrow). > Looks like that one is fixed already (thanks David...) Yep - looks fine - many thanx!! > >>c) the bunch of test failures > >> > >> > I would like to get to the bottom of these, eventually.... Perhaps its just a matter of not doing arithmetic on void * pointers. The gcc default might be different than the icc default (although I would expect that to give a segfault and not just give numerical errors). Or, there might be some other subtlety that's been over-looked. > Ideally, of course, scipy core should compile and work with a wide-variety of compilers. Once I get the other compile bits sorted, I will try to look into that tomorrow ... > I've just fixed up the records.py module (I think there is some very cool functionality that came out of that endeavor) and presently I'm working on speeding up the ufunc loop in the BUFFERED case (to avoid all the unnecessary copying that is going on and was definitely slowing down the klein test unnecessarily). I have the new algorithm coded but now need to get all the bugs out of it. I'm anxious to see if the new approach improves the scipy times in Gerard's klein test (I also fixed up the LONG_remainder function to not use floating-point unless there is a mismatch in sign --- when C's % and Python's % don't agree). Great - things are really developing very nicely!!! > But, all of this will have to wait until after I give the final to my linear algebra students later today...
Have fun with that - best, Arnd From arnd.baecker at web.de Thu Dec 8 13:07:51 2005 From: arnd.baecker at web.de (Arnd Baecker) Date: Thu, 8 Dec 2005 19:07:51 +0100 (CET) Subject: [SciPy-dev] more 64 Bit testing In-Reply-To: References: <43951533.3030001@ieee.org> <4395280E.1070408@ieee.org> <4397EE6F.8000300@ieee.org> Message-ID: On Thu, 8 Dec 2005, Arnd Baecker wrote: [...] > next problem appears when it wants to handle fftpack > > link = self.fcompiler.link_shared_object > AttributeError: 'NoneType' object has no attribute 'link_shared_object' > (see at the end for more) > > ((I used > export FC_VENDOR=Intel > export ATLAS=/opt/intel/mkl72/lib/64:/opt/intel/mkl72/include/ > python setup.py config --compiler=intel config_fc --fcompiler=intel install > > Maybe I should just use > python setup.py config --fcompiler=intel install > as Pearu said ... > )) > > Best, Arnd Removing everything (not-counting-the-number-of-times-anymore...) and starting again, this time with

    export FC_VENDOR=Intel
    export F77=ifort
    export CC=icc
    export CXX=icc
    export ATLAS=/opt/intel/mkl72/lib/64:/opt/intel/mkl72/include/
    python setup.py config --compiler=intel install --prefix=$DESTDIR | tee ../build_log_scipy_new_scipy.txt

brought a full scipy which installed - progress! ((including the modification of mconf.h)) scipy.test(10,10) gives Ran 770 tests in 3.146s FAILED (failures=18, errors=55) and I won't bother you today with all the gory details behind the failures and errors. Best, Arnd From oliphant at ee.byu.edu Thu Dec 8 14:01:41 2005 From: oliphant at ee.byu.edu (Travis Oliphant) Date: Thu, 08 Dec 2005 12:01:41 -0700 Subject: [SciPy-dev] [SciPy-user] 'object_arrtype' problem In-Reply-To: References: <43986B42.5000904@ieee.org> Message-ID: <43988315.7090809@ee.byu.edu> Ryan Krauss wrote: >What do you mean by "hand off behavior of the object to the underlying >object in all cases"? > > All arrays return array scalars. This allows us to control the math and the methods of the returned objects. This is very useful for the regular arrays. Object arrays simply inherited that behavior and therefore there is an "object"-array type. (Run type() on one of your returned objects...) In this case, though, the array object contains a pointer to another object (that's the thing that has the methods and attributes you wanted). There is the obvious question as to whether or not having a separate object array-scalar type is even useful, though. It does provide consistency of behavior but I think the expense is too high in this case. Right now, I'm leaning to having OBJECT arrays return the actual object like Numeric did. This will mean, of course that they won't have the array methods and there will be an "except-for object arrays" cluttered throughout documentation, but I think it's a better compromise solution. The change is not difficult and could be done in about 30 minutes. I'd like to hear from people as to what their attitude is: 1) Have object arrays return the actual Python object like Numeric did but different from all other array types which return an "array-scalar" type. 2) Keep the current behavior but soup up the object array type so it "works better". I don't think we could ever make a generic object that can always respond as any other object, though, so there will be minor issues regardless. Let me know.
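(To make the two options concrete - an illustrative sketch only, not code from the tree:

    from scipy import array

    class Thing:
        pass

    a = array([Thing(), Thing()], 'O')   # an object array

    # current behaviour (option 2): indexing hands back an object_arrtype
    # wrapper; the wrapped Python object is recovered via toscalar()
    obj = a[0].toscalar()

    # under option 1, a[0] itself would already be the Thing instance,
    # exactly as in Numeric

)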
-Travis From ryanlists at gmail.com Thu Dec 8 14:13:50 2005 From: ryanlists at gmail.com (Ryan Krauss) Date: Thu, 8 Dec 2005 14:13:50 -0500 Subject: [SciPy-dev] [SciPy-user] 'object_arrtype' problem In-Reply-To: <43988315.7090809@ee.byu.edu> References: <43986B42.5000904@ieee.org> <43988315.7090809@ee.byu.edu> Message-ID: One thing I don't like about Numeric returning the object is that things get a little messy when you aren't sure if you have an array or not. I find myself calling atleast_1d a lot or writing tests using "if shape(object)" which returns an empty tuple for scalars. I guess my thoughts are slightly twisted by everything being an array in Matlab. I want to test if len==1, then it must be a scalar. I guess I would be happiest with clean scalar tests and objects that worked well when iterating or indexing from an object array. Ryan On 12/8/05, Travis Oliphant wrote: > Ryan Krauss wrote: > > >What do you mean by "hand off behavior of the object to the underlying > >object in all cases"? > > > > > > All arrays return array scalars. This allows us to control the math and > the methods of the returned objects. This is very useful for the > regular arrays. Object arrays simply inherited that behavior and > therefore there is an "object"-array type. (Run type() on one of your > returned objects...) In this case, though, the array object contains a > pointer to another object (that's the thing that has the methods and > attributes you wanted). > > There is the obvious question as to whether or not having a separate > object array-scalar type is even useful, though. It does provide > consistency of behavior but I think the expense is too high in this case. > > Right now, I'm leaning to having OBJECT arrays return the actual object > like Numeric did. This will mean, of course that they won't have the > array methods and there will be an "except-for object arrays" cluttered > throughout documentation, but I think it's a better compromise solution. > > The change is not difficult and could be done in about 30 minutes. > > I'd like to hear from people as to what their attitude is: > > 1) Have object arrays return the actual Python object like Numeric did > but different from all other array types which return an "array-scalar" > type. > > 2) Keep the current behavior but soup up the object array type so it > "works better". I don't think we could ever make a generic object that > can always respond as any other object, though, so there will be minor > issues regardless. > > Let me know. > > -Travis > > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.net > http://www.scipy.net/mailman/listinfo/scipy-user > From arnd.baecker at web.de Thu Dec 8 14:20:02 2005 From: arnd.baecker at web.de (Arnd Baecker) Date: Thu, 8 Dec 2005 20:20:02 +0100 (CET) Subject: [SciPy-dev] PostponedException missing Message-ID: Moin, now for something completely different: flinalg.py contains at the beginning:

    try:
        import _flinalg
    except ImportError:
        from scipy.distutils.misc_util import PostponedException
        _flinalg = PostponedException()
        print _flinalg.__doc__
        has_column_major_storage = lambda a:0

However, scipy.distutils.misc_util does not have a `PostponedException`. (Better don't ask me how I managed to get into the except - this is a longer story ;-). Arnd From charlesr.harris at gmail.com Thu Dec 8 16:29:03 2005 From: charlesr.harris at gmail.com (Charles R Harris) Date: Thu, 8 Dec 2005 14:29:03 -0700 Subject: [SciPy-dev] Box-Muller considered slow.
Message-ID: I have done some benchmarking of the ziggurat algorithm vs the Box-Muller algorithm in boost/random for computing gaussian random variables. The ziggurat method was 6x faster. As scipy at this time uses the Box-Muller method, I would like to propose a switch to the ziggurat method. Chuck -------------- next part -------------- An HTML attachment was scrubbed... URL: From cookedm at physics.mcmaster.ca Thu Dec 8 17:25:39 2005 From: cookedm at physics.mcmaster.ca (David M. Cooke) Date: Thu, 08 Dec 2005 17:25:39 -0500 Subject: [SciPy-dev] Box-Muller considered slow. In-Reply-To: (Charles R. Harris's message of "Thu, 8 Dec 2005 14:29:03 -0700") References: Message-ID: Charles R Harris writes: > I have done some benchmarking of the ziggurat algorithm vs the Box-Muller > algorithm in boost/random for computing gaussian random variables. The ziggurat > method was 6x faster. As scipy at this time uses the Box-Muller method, I would > like to propose a switch to the ziggurat method. boost/random looks like it uses a deterministic polar method for gaussian RVs (at least in 1.33). I've had a look at the ziggurat algorithm; it's more complicated, so we'd need some code for it (hint, hint :-) Mind you, that code could be written in Pyrex, not C, as the mtrand module is written as a Pyrex wrapper around C code. For those interested, Marsaglia and Wai Wan Tsang's updated method (2000) is here, http://www.jstatsoft.org/v05/i08/ziggurat.pdf with some comments about a problem using the single random number for rejection in it here: http://www.doornik.com/research/ziggurat.pdf -- |>|\/|< /--------------------------------------------------------------------------\ |David M. Cooke http://arbutus.physics.mcmaster.ca/dmc/ |cookedm at physics.mcmaster.ca From oliphant at ee.byu.edu Thu Dec 8 17:58:51 2005 From: oliphant at ee.byu.edu (Travis Oliphant) Date: Thu, 08 Dec 2005 15:58:51 -0700 Subject: [SciPy-dev] examples In-Reply-To: <4398A51B.1010709@stsci.edu> References: <4398A51B.1010709@stsci.edu> Message-ID: <4398BAAB.1010302@ee.byu.edu> Christopher Hanley wrote: > Hi Travis, > > I've been doing a little playing with your records module today. I > was wondering if you could give me a couple examples of its usage. > In particular, how would you accomplish the following done in numarray? > > r=array([(1,11,'a'),(2,22,'b'),(3,33,'c'),(4,44,'d'),(5,55,'ex'),(6,66,'f'),(7,77,'g')],formats='u1,f4,a1') > > > Thank you for your time and help, There is no fromarrays function yet, so there is no records.array function yet. It should be a straightforward matter to create one (you shouldn't have to distinguish between chararrays and other arrays though). In this case say x is the list of tuples.

    r = ndrecarray(len(x), formats='u1,f4,S1')
    for ind, val in enumerate(x):
        for k in range(len(val)):
            r[ind]['f%d' % (k+1)] = val[k]

should do it (get recent SVN, there was a bug in setitem that prevented this from working before). Then (on my system):

    >>> r
    ndrecarray([[1, 11.0, 'a'],
     [2, 22.0, 'b'],
     [3, 33.0, 'c'],
     [4, 44.0, 'd'],
     [5, 55.0, 'e'],
     [6, 66.0, 'f'],
     [7, 77.0, 'g']], dtype=record6)
    >>> r.f1
    ndrecarray([1, 2, 3, 4, 5, 6, 7], dtype=uint8)
    >>> r.f2
    ndrecarray([ 11., 22., 33., 44., 55., 66., 77.], dtype=float32)
    >>> r.f3
    ndrecarray([a, b, c, d, e, f, g], dtype=string1)

Obviously a fromarray function is needed to make this process easier. This function would need to be a bit more general to handle multidimensional cases and arbitrary field names.
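For the 1-d case it could be little more than a wrapper around the loop above - a rough sketch (the name fromarrays and the auto-generated 'f1', 'f2', ... field names are assumptions, nothing settled):

    def fromarrays(arrays, formats):
        # build an empty record array and copy each input sequence into
        # the correspondingly numbered field, as in the example above
        n = len(arrays[0])
        r = ndrecarray(n, formats=formats)
        for ind in range(n):
            for k in range(len(arrays)):
                r[ind]['f%d' % (k+1)] = arrays[k][ind]
        return r

    # e.g. fromarrays([[1, 2], [11., 22.], ['a', 'b']], 'u1,f4,S1')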
The format string does not allow an easy way to right nested records,
but the general ndrecarray allows it.  The infrastructure for making it
easy to deal with nested records is there, but the user-convenience may
not be.

I would love your help on the fromarray, fromstring, and friends
functions.

-Travis

From oliphant at ee.byu.edu Thu Dec 8 18:06:42 2005
From: oliphant at ee.byu.edu (Travis Oliphant)
Date: Thu, 08 Dec 2005 16:06:42 -0700
Subject: [SciPy-dev] timing comparison
In-Reply-To: <43987DDB.5010505@hawaii.edu>
References: <4395DC4F.8010703@hawaii.edu> <87r78q837e.fsf@peds-pc311.bsd.uchicago.edu> <4396693E.7030804@hawaii.edu> <43969BB1.2080201@ieee.org> <43987DDB.5010505@hawaii.edu>
Message-ID: <4398BC82.9020601@ee.byu.edu>

Eric Firing wrote:

> Travis,
>
> Attached are profile outputs from the mpl contour_demo script, after I
> made the changes from min to amin etc in ma.py.  The overall times are
> similar, but there is a factor of two difference in total time spent
> in calls to the array function.

The array function is more complicated now, so that may be unavoidable.
If the sizes of arrays being created are small, then the increased
overhead may have an impact.  The array-creation code was taken from
Numeric.

One thing that can speed things up is to specify a type rather than have
the code try to determine one.  I suspect that trying to determine a
type from an object can be slower in scipy than in Numeric (there are
more cases to check), so this might be most of the slow-down.  Just a
hunch...

If mpl relies on ma.py, then it is possible that ma.py is slower as
well.  For large arrays the extra overhead that exists should not
matter, but for small arrays it might.

Thanks for the profiles,

-Travis

From oliphant at ee.byu.edu Thu Dec 8 18:07:55 2005
From: oliphant at ee.byu.edu (Travis Oliphant)
Date: Thu, 08 Dec 2005 16:07:55 -0700
Subject: [SciPy-dev] examples
In-Reply-To: <4398BAAB.1010302@ee.byu.edu>
References: <4398A51B.1010709@stsci.edu> <4398BAAB.1010302@ee.byu.edu>
Message-ID: <4398BCCB.4010506@ee.byu.edu>

Travis Oliphant wrote:

>The format string does not allow an easy way to right nested records,
>

Ooops... "write" nested records.

-Travis

From charlesr.harris at gmail.com Thu Dec 8 18:33:07 2005
From: charlesr.harris at gmail.com (Charles R Harris)
Date: Thu, 8 Dec 2005 16:33:07 -0700
Subject: [SciPy-dev] Box-Muller considered slow.
In-Reply-To:
References:
Message-ID:

On 12/8/05, David M. Cooke wrote:
>
> Charles R Harris writes:
>
> > I have done some benchmarking of the ziggurat algorithm vs the
> > Box-Muller algorithm in boost/random for computing Gaussian random
> > variables.  The ziggurat method was 6x faster.  As scipy at this time
> > uses the Box-Muller method, I would like to propose a switch to the
> > ziggurat method.
>
> boost/random looks like it uses a deterministic polar method for
> Gaussian RVs (at least in 1.33).

That's the Box-Muller algorithm.

> I've had a look at the ziggurat algorithm; it's more complicated, so
> we'd need some code for it (hint, hint :-)

I've attached a templated version I use just to get some code out there.
I modified the original ziggurat method a bit to make it more
straightforward.  I admit the code is not scipy compatible as it stands;
in particular, my default uniform doubles are thin and on the open
interval (0,1).  If I get some time I may make a genuine contribution ;)

How does one test the normal distribution?

Chuck
-------------- next part --------------
An HTML attachment was scrubbed...
URL:
-------------- next part --------------
A non-text attachment was scrubbed...
Name: RandomNormal.h
Type: application/octet-stream
Size: 1593 bytes
Desc: not available
URL:
-------------- next part --------------
A non-text attachment was scrubbed...
Name: RandomUniform.h
Type: application/octet-stream
Size: 1254 bytes
Desc: not available
URL:

From robert.kern at gmail.com Thu Dec 8 18:44:28 2005
From: robert.kern at gmail.com (Robert Kern)
Date: Thu, 08 Dec 2005 15:44:28 -0800
Subject: [SciPy-dev] Box-Muller considered slow.
In-Reply-To:
References:
Message-ID: <4398C55C.4040909@gmail.com>

Charles R Harris wrote:

> How does one test the normal distribution?

It should pass the test in scipy.stats for the normal distribution.
That should just be the Kolmogorov-Smirnov test.  The Anderson-Darling
test would be nice because it has greater power in the tails; however,
the tables are distribution-specific for that test, but you may be able
to find appropriate values for the normal distribution somewhere online.
There may be some more normality tests one could apply, but they won't
add much value.

But most important in this case would be the tests outlined by Doornik
in section 3 of the paper David pointed out.  Basically, you generate
normally-distributed values, pass them through the normal CDF, and apply
the usual tests for uniform random numbers.  I haven't read through it
all and tracked down licenses for the code he has in the paper, but it
looks like an ACM TOMS paper, and the ACM TOMS non-commercial use
license would probably be the applicable one.

--
Robert Kern
robert.kern at gmail.com

"In the fields of hell where the grass grows high
 Are the graves of dreams allowed to die."
  -- Richard Harter

From arnd.baecker at web.de Fri Dec 9 07:18:30 2005
From: arnd.baecker at web.de (Arnd Baecker)
Date: Fri, 9 Dec 2005 13:18:30 +0100 (CET)
Subject: [SciPy-dev] scipy distutils questions
Message-ID:

Hi,

yesterday I ran into a bunch of problems related to flapack/clapack
which I tried to work around via distutils.  Some of those attempts did
not work, so I would like to ask if I did something wrong here.
Apologies ahead if this is already mentioned in some documentation - a
pointer would be very much appreciated in that case.

a) The Intel Math Kernel Library provides LAPACK in both double
(libmkl_lapack64.so) and single (libmkl_lapack32.so) precision.
With the following in `site.cfg`

[atlas]
library_dirs = /opt/intel/mkl72/lib/64
include_dirs = /opt/intel/mkl72/include/
atlas_libs = mkl_lapack64,libmkl_lapack32,mkl

[lapack]
library_dirs = /opt/intel/mkl72/lib/64
include_dirs = /opt/intel/mkl72/include/
lapack_libs = mkl_lapack64,libmkl_lapack32

it was not able to add them (without `libmkl_lapack32` the double
precision one worked).  Would that in principle be the correct way?

b) As this did not work, I changed in `Lib/linalg/setup.py`
`skip_single_routines = True`.
This only had an effect (in the sense of rebuilding lapack/flapack/...)
when I touched the corresponding pyf files,

touch *.pyf

((Is this mentioned somewhere in the docs?  If so, I completely missed
that.))  Is it also correct that removing the contents of the `linalg`
directory in the `build` directory does not force a rebuild?

c) It seems that not all s* routines are listed in linalg/setup.py.
I had to add the following in Lib/linalg/setup.py:

skip_names['clapack'].extend(\
    'sgesv sgetrf sgetrs cgesv cgetrf cgetrs ...

skip_names['flapack'].extend(\
    'sgesv sgebal sgbsv sgehrd cgbsv cgebal cgehrd ...
Would it be possible to extract/exclude those automatically?

d) running the usual `python setup.py .... install --prefix=...`
rebuilds the corresponding libraries (e.g. _flapack.so).
((I always removed them before, but that might not be necessary.))
However, the resulting libraries were not moved to the destination
given by prefix.  I had to copy them manually.  Is this to be expected?

If all the above behaviour is really unavoidable (e.g. by more clever
command line options etc.), then I think the following would help a lot:
- ad b): if setup.py is newer than the files which are generated, they
  should be rebuilt.
- ad c): it would be good if changed files get copied over to the
  destination.

In addition: could you please add `sample_site.cfg` again (or/and
site.cfg, with everything commented out).

Because of all this (and previous problems concerning exchanging fft
libraries, presumably for similar reasons) I presently completely remove
the destination tree and the build tree (sometimes I even do a full new
checkout, if everything seems screwed).

One more remark/question: Usually scipy (core) does not require adding a
site.cfg.  However, for full scipy adding a site.cfg is often needed.
It seems that this file has to be placed into the
site-packages/scipy/distutils/ directory - or are there other places?
(For example, couldn't it look in the directory from which the
`python setup.py install` command is issued?)

Best, Arnd

From chanley at stsci.edu Fri Dec 9 08:16:30 2005
From: chanley at stsci.edu (Christopher Hanley)
Date: Fri, 9 Dec 2005 08:16:30 -0500
Subject: [SciPy-dev] examples
In-Reply-To: <4398BAAB.1010302@ee.byu.edu>
Message-ID: <200512091316.CVB56666@stsci.edu>

Thanks, this is what I needed to get started.

Chris

> -----Original Message-----
> From: scipy-dev-bounces at scipy.net [mailto:scipy-dev-bounces at scipy.net] On
> Behalf Of Travis Oliphant
> Sent: Thursday, December 08, 2005 5:59 PM
> To: Christopher Hanley
> Cc: SciPy Developers List
> Subject: Re: [SciPy-dev] examples
>
> There is no fromarrays function yet, so there is no records.array
> function yet.
>
> It should be a straightforward matter to create one (you shouldn't have
> to distinguish between chararrays and other arrays though).
>
> In this case, say x is the list of tuples.
>
> r = ndrecarray(len(x), formats='u1,f4,S1')
>
> for ind, val in enumerate(x):
>     for k in range(len(val)):
>         r[ind]['f%d' % (k+1)] = val[k]
>
> should do it (get recent SVN, there was a bug in setitem that prevented
> this from working before).
>
> Then (on my system):
>
> >>> r
> ndrecarray([[1, 11.0, 'a'], [2, 22.0, 'b'], [3, 33.0, 'c'], [4, 44.0, 'd'],
>             [5, 55.0, 'e'], [6, 66.0, 'f'], [7, 77.0, 'g']], dtype=record6)
>
> >>> r.f1
> ndrecarray([1, 2, 3, 4, 5, 6, 7], dtype=uint8)
> >>> r.f2
> ndrecarray([ 11.,  22.,  33.,  44.,  55.,  66.,  77.], dtype=float32)
> >>> r.f3
> ndrecarray([a, b, c, d, e, f, g], dtype=string1)
>
> Obviously a fromarray function is needed to make this process easier.
> This function would need to be a bit more general to handle
> multidimensional cases and arbitrary field names.
>
> The format string does not allow an easy way to right nested records,
> but the general ndrecarray allows it.  The infrastructure for making it
> easy to deal with nested records is there, but the user-convenience may
> not be.
>
> I would love your help on the fromarray, fromstring, and friends
> functions.
>
> -Travis
>

From pearu at scipy.org Fri Dec 9 07:55:43 2005
From: pearu at scipy.org (Pearu Peterson)
Date: Fri, 9 Dec 2005 06:55:43 -0600 (CST)
Subject: [SciPy-dev] Bug in distutils appendpath
In-Reply-To: <43974316.6030404@bostream.nu>
References: <43974316.6030404@bostream.nu>
Message-ID:

On Wed, 7 Dec 2005, Jörgen Stenarson wrote:

> Hi,
>
> The tests for appendpath in scipy.distutils.misc_util fail on a windows
> machine.  The reason is that the join used to build the expected answer
> does not normalize the path to contain \ instead of /.  There is also a
> small problem in appendpath itself.  Both prefix and path need to have
> abspath applied, otherwise one of them will begin with c:\prefix and the
> other one won't.
>
> After these two changes the tests pass on my windows machine.

Thanks for the patches.  appendpath and its tests are fixed in SVN now.

Pearu

From arnd.baecker at web.de Fri Dec 9 11:44:31 2005
From: arnd.baecker at web.de (Arnd Baecker)
Date: Fri, 9 Dec 2005 17:44:31 +0100 (CET)
Subject: [SciPy-dev] fft segfault, 64 Bit Opteron
Message-ID:

Hi Travis,

some more info on the segfault for fft when running scipy.test(10,10).
Looks similar to the other one, but is in a different place:

 Multi-dimensional Fast Fourier Transform
===================================================
          |    real input     |   complex input
---------------------------------------------------
   size   |  scipy  | Numeric |  scipy  | Numeric
---------------------------------------------------
  100x100

Program received signal SIGSEGV, Segmentation fault.
[Switching to Thread 46912507335168 (LWP 2965)]
DOUBLE_subtract (args=0x1620400, dimensions=0x7fffffed722c, steps=0x16bd550, func=0x2aaab7bce000) at umathmodule.c:1959
1959        *((double *)op)=*((double *)i1) - *((double *)i2);
(gdb) bt
#0 DOUBLE_subtract (args=0x1620400, dimensions=0x7fffffed722c, steps=0x16bd550, func=0x2aaab7bce000) at umathmodule.c:1959
#1 0x00002aaaabd421fd in PyUFunc_GenericFunction (self=0x719520, args=0x2aaab600c830, mps=0x2) at ufuncobject.c:1569
#2 0x00002aaaabd44c69 in ufunc_generic_call (self=0x719520, args=0x2aaab600c830) at ufuncobject.c:2553
#3 0x0000000000417808 in PyObject_CallFunction (callable=0x719520, format=0x67fffffd796) at abstract.c:1756
#4 0x00002aaaabba68f7 in array_subtract (m1=0x67fffffd796, m2=0x7fffffed722c) at arrayobject.c:2261
#5 0x00000000004145a7 in binary_op1 (v=0x159a530, w=0x162f490, op_slot=8) at abstract.c:371
#6 0x000000000041500e in PyNumber_Subtract (v=0x159a530, w=0x162f490) at abstract.c:422
#7 0x0000000000474c9e in PyEval_EvalFrame (f=0xf741c0) at ceval.c:1144
#8 0x000000000047ad2f in PyEval_EvalCodeEx (co=0x2aaaabd233b0, globals=0x7fffffed722c, locals=0x16bd550, args=0xf741c0, argcount=2, kws=0xf736b8, kwcount=0, defs=0x2aaaae93ce28, defcount=1, closure=0x0) at ceval.c:2736
#9 0x00000000004788f7 in PyEval_EvalFrame (f=0xf734b0) at ceval.c:3650
#10 0x000000000047ad2f in PyEval_EvalCodeEx (co=0x2aaaab944e30, globals=0x7fffffed722c, locals=0x16bd550, args=0xf734b0, argcount=2, kws=0x834010, kwcount=0, defs=0x2aaaab967a88, defcount=2, closure=0x0) at ceval.c:2736
#11 0x00000000004788f7 in PyEval_EvalFrame (f=0x833e20) at ceval.c:3650
#12 0x000000000047ad2f in PyEval_EvalCodeEx (co=0x2aaab68f6c00, globals=0x7fffffed722c, locals=0x16bd550, args=0x833e20, argcount=1, kws=0x6f1490, kwcount=0, defs=0x2aaab68f5b28, defcount=1, closure=0x0) at ceval.c:2736
#13 0x00000000004788f7 in PyEval_EvalFrame (f=0x6f12e0) at ceval.c:3650
#14 0x000000000047ad2f in PyEval_EvalCodeEx (co=0x2aaaab95c810, globals=0x7fffffed722c, locals=0x16bd550, args=0x6f12e0, argcount=2, kws=0xb32110, kwcount=0, defs=0x2aaaab9643a8, defcount=1, closure=0x0) at ceval.c:2736
#15 0x00000000004c6099 in function_call (func=0x2aaaab969758, arg=0x2aaab5425128, kw=0xfab4e0) at funcobject.c:548
#16 0x0000000000417700 in PyObject_Call (func=0x1620400, arg=0x7fffffed722c, kw=0x16bd550) at abstract.c:1756
#17 0x00000000004772ea in PyEval_EvalFrame (f=0x894f90) at ceval.c:3835
#18 0x000000000047ad2f in PyEval_EvalCodeEx (co=0x2aaaab95c880, globals=0x7fffffed722c, locals=0x16bd550, args=0x894f90, argcount=2, kws=0x0, kwcount=0, defs=0x0, defcount=0, closure=0x0) at ceval.c:2736
#19 0x00000000004c6099 in function_call (func=0x2aaaab9697d0, arg=0x2aaab54215a8, kw=0x0) at funcobject.c:548
#20 0x0000000000417700 in PyObject_Call (func=0x1620400, arg=0x7fffffed722c, kw=0x16bd550) at abstract.c:1756
#21 0x0000000000420ee0 in instancemethod_call (func=0x1620400, arg=0x2aaab54215a8, kw=0x0) at classobject.c:2447
#22 0x0000000000417700 in PyObject_Call (func=0x1620400, arg=0x7fffffed722c, kw=0x16bd550) at abstract.c:1756
#23 0x00000000004777d9 in PyEval_EvalFrame (f=0x833780) at ceval.c:3766
#24 0x000000000047ad2f in PyEval_EvalCodeEx (co=0x2aaaaab42340, globals=0x7fffffed722c, locals=0x16bd550, args=0x833780, argcount=2, kws=0x0, kwcount=0, defs=0x2aaaab964168, defcount=1, closure=0x0) at ceval.c:2736
#25 0x00000000004c6099 in function_call (func=0x2aaaab96b668, arg=0x2aaab5421488, kw=0x0) at funcobject.c:548
#26 0x0000000000417700 in PyObject_Call (func=0x1620400, arg=0x7fffffed722c, kw=0x16bd550) at abstract.c:1756
#27 0x0000000000420ee0 in instancemethod_call (func=0x1620400, arg=0x2aaab5421488, kw=0x0) at classobject.c:2447
#28 0x0000000000417700 in PyObject_Call (func=0x1620400, arg=0x7fffffed722c, kw=0x16bd550) at abstract.c:1756
#29 0x000000000044fd80 in slot_tp_call (self=0x2aaab68f5d10, args=0x2aaab693ef90, kwds=0x0) at typeobject.c:4536
#30 0x0000000000417700 in PyObject_Call (func=0x1620400, arg=0x7fffffed722c, kw=0x16bd550) at abstract.c:1756
#31 0x00000000004777d9 in PyEval_EvalFrame (f=0x6d3dc0) at ceval.c:3766
#32 0x000000000047ad2f in PyEval_EvalCodeEx (co=0x2aaaab95e2d0, globals=0x7fffffed722c, locals=0x16bd550, args=0x6d3dc0, argcount=2, kws=0xf51060, kwcount=0, defs=0x0, defcount=0, closure=0x0) at ceval.c:2736
#33 0x00000000004c6099 in function_call (func=0x2aaaab96a050, arg=0x2aaab5421680, kw=0xf4d590) at funcobject.c:548
#34 0x0000000000417700 in PyObject_Call (func=0x1620400, arg=0x7fffffed722c, kw=0x16bd550) at abstract.c:1756
#35 0x00000000004772ea in PyEval_EvalFrame (f=0xc00a80) at ceval.c:3835
#36 0x000000000047ad2f in PyEval_EvalCodeEx (co=0x2aaaab95e340, globals=0x7fffffed722c, locals=0x16bd550, args=0xc00a80, argcount=2, kws=0x0, kwcount=0, defs=0x0, defcount=0, closure=0x0) at ceval.c:2736
#37 0x00000000004c6099 in function_call (func=0x2aaaab96a0c8, arg=0x2aaab5421518, kw=0x0) at funcobject.c:548
#38 0x0000000000417700 in PyObject_Call (func=0x1620400, arg=0x7fffffed722c, kw=0x16bd550) at abstract.c:1756

HTH, Arnd

From arnd.baecker at web.de Fri Dec 9 12:12:22 2005
From: arnd.baecker at web.de (Arnd Baecker)
Date: Fri, 9 Dec 2005 18:12:22 +0100 (CET)
Subject: [SciPy-dev] fft segfault, 64 Bit Opteron
In-Reply-To:
References:
Message-ID:

Hi Travis,

one more addition - the build log gives:

In file included from build/src/scipy/base/src/umathmodule.c:8036:
scipy/base/src/ufuncobject.c: In function `PyUFunc_GenericFunction':
scipy/base/src/ufuncobject.c:1569: warning: passing arg 2 of pointer to
function from incompatible pointer type
gcc -pthread -shared
build/temp.linux-x86_64-2.4/build/src/scipy/base/src/umathm

The patch below fixes the compile warning.  However, it does not fix the
segfault I get ... (unless the changed file did not transfer to the
destination directory after building - see my distutils questions ...;-)

Best, Arnd

abaecker at ptphp01:~/BUILDS2/Build_100/core> svn diff
Index: scipy/base/src/ufuncobject.c
===================================================================
--- scipy/base/src/ufuncobject.c        (revision 1616)
+++ scipy/base/src/ufuncobject.c        (working copy)
@@ -1442,7 +1442,7 @@
     int fastmemcpy[MAX_ARGS];
     int *needbuffer=loop->needbuffer;
     intp index=loop->index, size=loop->size;
-    int bufsize;
+    intp bufsize;
     int copysizes[MAX_ARGS];
     void **bufptr = loop->bufptr;
     void **buffer = loop->buffer;

From oliphant.travis at ieee.org Fri Dec 9 13:05:30 2005
From: oliphant.travis at ieee.org (Travis Oliphant)
Date: Fri, 09 Dec 2005 11:05:30 -0700
Subject: [SciPy-dev] fft segfault, 64 Bit Opteron
In-Reply-To:
References:
Message-ID: <4399C76A.4030902@ieee.org>

Arnd Baecker wrote:

>Hi Travis,
>
>one more addition - the build log gives:
>
>In file included from build/src/scipy/base/src/umathmodule.c:8036:
>scipy/base/src/ufuncobject.c: In function `PyUFunc_GenericFunction':
>scipy/base/src/ufuncobject.c:1569: warning: passing arg 2 of pointer to
>function from incompatible pointer type
>gcc -pthread -shared
>build/temp.linux-x86_64-2.4/build/src/scipy/base/src/umathm
>

Thanks much for this testing.

Could you send the entire build log again?  Perhaps there is something
else.  I've made a fix which should work better on 64-bit.

More 64-bit testing needed.

One trick to test the buffered section of code using smaller arrays is
to set the buffer size to something very small (but a multiple of 16 ---
16 is the smallest).  For arrays smaller than the buffer size, arrays
are just copied when a cast is needed.  But, for larger arrays, the
buffered code is exercised.

For example:

scipy.setbufsize(16)
scipy.test(1,1)

At some point we might play with different buffer sizes to see if some
numbers are better than others.
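For instance, a quick comparison could look something like this (an
untested sketch; the sizes are just examples, all multiples of 16, and
adding an integer array to a double array forces the cast that
exercises the buffered code):

import time
import scipy

a = scipy.arange(100000)   # integer array
b = a.astype('d')          # double array, so a + b needs a cast

for size in (16, 160, 1600, 16000):
    scipy.setbufsize(size)
    t0 = time.clock()
    for i in range(100):
        c = a + b
    print size, time.clock() - t0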
-Travis

From jan_braun at gmx.net Fri Dec 9 13:11:02 2005
From: jan_braun at gmx.net (Jan Braun)
Date: Fri, 9 Dec 2005 19:11:02 +0100 (MET)
Subject: [SciPy-dev] scipy core and atlas on IA64
Message-ID: <24002.1134151862@www90.gmx.net>

Hi all,

after many problems with Intel mkl and SGI scs on IA64 we are now
switching back to atlas.  Atlas is compiled with its very own
(self-chosen) options.  So it takes the Intel compilers and needs
libifcore at runtime.  Unfortunately there seems to be no way to get
scipy to find atlas __and__ link libifcore in.  This is very similar to
problems arising from mkl and scs compiles, where everything compiles
fine and ends with missing symbols.  Trying to add the missing libs to
site.cfg results in the equivalent of Step 3 (see below): no
atlas/lapack is found at all.

The question is what to do to get scipy to compile and link additional
libs by modifying site.cfg.  The description of the mentioned problems
occurring with scipy.distutils and site.cfg's follows.  My description
only covers scipy core from svn.

Thanks,
  Jan

-----------------------------------------------------------
Step 1:
  site.cfg: nothing set (empty)
  env variable ATLAS set to the right directory
  in scipy core:
    python setup.py install | tee install.log

Result 1:
  Setting PTATLAS=ATLAS
  FOUND:
    libraries = ['ptf77blas', 'ptcblas', 'atlas']
    library_dirs = ['/work/home/jb672983/gcc-python//lib/atlas']
    language = c
    include_dirs = ['/usr/include']
  everything compiles fine and is installed

Test 1: at the ipython prompt
  In [1]: import scipy
  Importing test to scipy
  Importing base to scipy
  Importing basic to scipy
  Failed to import basic
  lib/python2.4/site-packages/scipy/lib/lapack_lite.so: undefined symbol: for_cpstr

-----------------------------------------------------------
Step 2: relink lapack_lite.so (grep on install.log)
  /usr/bin/g77 -shared [build...]lapack_litemodule.o \
    -L/gcc-python/lib/atlas -L/opt/intel/fc_90/lib -lifcore \
    -llapack -lptf77blas -lptcblas -latlas -lg2c -o [build...]lapack_lite.so
  copy lapack_lite.so to site-packages/scipy/lib

Test 2: at the ipython prompt
  In [1]: import scipy
  Importing test to scipy
  Importing base to scipy
  Importing basic to scipy
  => everything is ok

-----------------------------------------------------------
Step 3: use results from 1. and 2., add the libifcore to site.cfg
  site.cfg now includes:
    [atlas]
    library_dirs = /opt/intel/fc_90/lib:/home/jb672983/gcc-python/lib/atlas
    atlas_libs = ptf77blas, ptcblas, atlas, ifcore

Result 3:
  /work/home/jb672983/src/core-fftw3-atlas/scipy/distutils/system_info.py:1076: UserWarning:
    Atlas (http://math-atlas.sourceforge.net/) libraries not found.
    Directories to search for the libraries can be specified in the
    scipy_distutils/site.cfg file (section [atlas]) or by setting
    the ATLAS environment variable.
  warnings.warn(AtlasNotFoundError.__doc__)

--
10 GB Mailbox, 100 FreeSMS/Monat http://www.gmx.net/de/go/topmail
+++ GMX - die erste Adresse für Mail, Message, More +++

From strawman at astraw.com Fri Dec 9 13:26:31 2005
From: strawman at astraw.com (Andrew Straw)
Date: Fri, 09 Dec 2005 10:26:31 -0800
Subject: [SciPy-dev] fft segfault, 64 Bit Opteron
In-Reply-To: <4399C76A.4030902@ieee.org>
References: <4399C76A.4030902@ieee.org>
Message-ID: <4399CC57.1010900@astraw.com>

FYI, on my Athlon64 (Debian sarge, gcc 3.3, stock atlas), I'm not
getting segfaults using scipy.test(10,10) but only a single failure.
This is an svn checkout from last night, so it may lag slightly, but it
seems to indicate Arnd's segfaults may reflect a gcc vs. icc issue.

In [2]: scipy.__core_version__
Out[2]: '0.8.1.1611'

In [3]: scipy.__scipy_version__
Out[3]: '0.4.3.1479'

======================================================================
FAIL: check_odeint1 (scipy.integrate.test_integrate.test_odeint)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/home/astraw/py24-amd64/lib/python2.4/site-packages/scipy/integrate/tests/test_integrate.py", line 51, in check_odeint1
    assert res < 1.0e-6
AssertionError

----------------------------------------------------------------------
Ran 1238 tests in 60.816s

FAILED (failures=1)

Travis Oliphant wrote:

>Arnd Baecker wrote:
>
>>Hi Travis,
>>
>>one more addition - the build log gives:
>>
>>In file included from build/src/scipy/base/src/umathmodule.c:8036:
>>scipy/base/src/ufuncobject.c: In function `PyUFunc_GenericFunction':
>>scipy/base/src/ufuncobject.c:1569: warning: passing arg 2 of pointer to
>>function from incompatible pointer type
>>gcc -pthread -shared
>>build/temp.linux-x86_64-2.4/build/src/scipy/base/src/umathm
>>
>
>Thanks much for this testing.
>
>Could you send the entire build log again?  Perhaps there is something
>else.
>I've made a fix which should work better on 64-bit.
>
>More 64-bit testing needed.
>
>One trick to test the buffered section of code using smaller arrays is
>to set the buffer size to something very small (but a multiple of 16 ---
>16 is the smallest).  For arrays smaller than the buffer size, arrays
>are just copied when a cast is needed.  But, for larger arrays, the
>buffered code is exercised.
>
>For example:
>
>scipy.setbufsize(16)
>scipy.test(1,1)
>
>At some point we might play with different buffer sizes to see if some
>numbers are better than others.
>
>-Travis
>
>_______________________________________________
>Scipy-dev mailing list
>Scipy-dev at scipy.net
>http://www.scipy.net/mailman/listinfo/scipy-dev
>

From arnd.baecker at web.de Fri Dec 9 13:39:04 2005
From: arnd.baecker at web.de (Arnd Baecker)
Date: Fri, 9 Dec 2005 19:39:04 +0100 (CET)
Subject: [SciPy-dev] fft segfault, 64 Bit Opteron
In-Reply-To: <4399CC57.1010900@astraw.com>
References: <4399C76A.4030902@ieee.org> <4399CC57.1010900@astraw.com>
Message-ID:

On Fri, 9 Dec 2005, Andrew Straw wrote:

> FYI, on my Athlon64 (Debian sarge, gcc 3.3, stock atlas), I'm not
> getting segfaults using scipy.test(10,10) but only a single failure.
> This is an svn checkout from last night, so it may lag slightly, but it
> seems to indicate Arnd's segfaults may reflect a gcc vs. icc issue.

Sorry if I created confusion by trying to install scipy on too many
machines.  Presently we have dumped icc for almost all purposes (apart
from those like ATLAS, which persistently finds icc, even if it is
removed from any paths - rumours say that icc is even found when the
corresponding hard disk is disconnected ... ;-)
((sorry, could not resist - just the signs of 5 days of installation
troubles ...)))

The reported segfault comes from a gcc-only installation (fortunately
there is no icc on that 64 Bit Opteron ...)

Best, Arnd

P.S.: the check_odeint1 is a persistent failure (don't remember in which
version this was introduced - around the beginning of this week?)
From arnd.baecker at web.de Fri Dec 9 14:03:11 2005
From: arnd.baecker at web.de (Arnd Baecker)
Date: Fri, 9 Dec 2005 20:03:11 +0100 (CET)
Subject: [SciPy-dev] fft segfault, 64 Bit Opteron
In-Reply-To: <4399C76A.4030902@ieee.org>
References: <4399C76A.4030902@ieee.org>
Message-ID:

On Fri, 9 Dec 2005, Travis Oliphant wrote:

[...]

> I've made a fix which should work better on 64-bit.

With scipy.__core_version__ = '0.8.1.1617' we got it working - both on
the Opteron and the Itanium!!  In both cases gcc was used.

> More 64-bit testing needed.
>
> One trick to test the buffered section of code using smaller arrays is
> to set the buffer size to something very small (but a multiple of 16 ---
> 16 is the smallest).  For arrays smaller than the buffer size, arrays
> are just copied when a cast is needed.  But, for larger arrays, the
> buffered code is exercised.
>
> For example:
>
> scipy.setbufsize(16)
> scipy.test(1,1)

Works fine for both machines!

Travis, there is something which has been bothering me for a while, and
which clearly shows in Jan's build: the fftw3 performance is very
poor/weird for one-dimensional complex arrays (this is on the Itanium2
with gcc):

 Fast Fourier Transform
=================================================
          |    real input     |   complex input
-------------------------------------------------
   size   |  scipy  | Numeric |  scipy  | Numeric
-------------------------------------------------
    100   |   1.28  |   1.59  |  10.06  |   1.57  (secs for 7000 calls)
   1000   |   1.08  |   3.06  |   9.36  |   3.00  (secs for 2000 calls)
    256   |   2.39  |   3.74  |  19.29  |   3.68  (secs for 10000 calls)
    512   |   3.54  |   8.27  |  26.81  |   8.10  (secs for 10000 calls)
   1024   |   0.57  |   1.44  |   4.29  |   1.41  (secs for 1000 calls)
   2048   |   0.99  |   3.18  |   7.40  |   3.12  (secs for 1000 calls)
   4096   |   0.96  |   3.04  |   6.93  |   2.99  (secs for 500 calls)
   8192   |   2.04  |   7.91  |  14.40  |   7.85  (secs for 500 calls)

 Multi-dimensional Fast Fourier Transform
===================================================
          |    real input     |   complex input
---------------------------------------------------
   size    |  scipy  | Numeric |  scipy  | Numeric
---------------------------------------------------
  100x100  |   0.83  |   2.33  |   0.82  |   2.24  (secs for 100 calls)
 1000x100  |   0.57  |   2.04  |   0.58  |   2.13  (secs for 7 calls)
  256x256  |   0.68  |   1.62  |   0.67  |   1.63  (secs for 10 calls)
  512x512  |   1.85  |   3.28  |   1.71  |   3.41  (secs for 3 calls)

Do you have any idea what could be causing this?

Best, Arnd

From oliphant.travis at ieee.org Fri Dec 9 14:17:36 2005
From: oliphant.travis at ieee.org (Travis Oliphant)
Date: Fri, 09 Dec 2005 12:17:36 -0700
Subject: [SciPy-dev] fft segfault, 64 Bit Opteron
In-Reply-To:
References: <4399C76A.4030902@ieee.org> <4399CC57.1010900@astraw.com>
Message-ID: <4399D850.30106@ieee.org>

Arnd Baecker wrote:

>On
>
>The reported segfault comes from a gcc-only installation
>(fortunately there is no icc on that 64 Bit Opteron ...)
>

Did you try the most recent SVN?

-Travis

From arnd.baecker at web.de Fri Dec 9 14:28:03 2005
From: arnd.baecker at web.de (Arnd Baecker)
Date: Fri, 9 Dec 2005 20:28:03 +0100 (CET)
Subject: [SciPy-dev] fft segfault, 64 Bit Opteron
In-Reply-To: <4399D850.30106@ieee.org>
References: <4399C76A.4030902@ieee.org> <4399CC57.1010900@astraw.com> <4399D850.30106@ieee.org>
Message-ID:

On Fri, 9 Dec 2005, Travis Oliphant wrote:

> Arnd Baecker wrote:
>
> >On
> >
> >The reported segfault comes from a gcc-only installation
> >(fortunately there is no icc on that 64 Bit Opteron ...)
> >
> Did you try the most recent SVN?

by now yes (not at the time of writing the above lines) ...
see my other message.

Arnd

From oliphant.travis at ieee.org Fri Dec 9 16:20:14 2005
From: oliphant.travis at ieee.org (Travis Oliphant)
Date: Fri, 09 Dec 2005 14:20:14 -0700
Subject: [SciPy-dev] fft segfault, 64 Bit Opteron
In-Reply-To: <4399CC57.1010900@astraw.com>
References: <4399C76A.4030902@ieee.org> <4399CC57.1010900@astraw.com>
Message-ID: <4399F50E.6020202@ieee.org>

Andrew Straw wrote:

>FYI, on my Athlon64 (Debian sarge, gcc 3.3, stock atlas), I'm not
>getting segfaults using scipy.test(10,10) but only a single failure.
>This is an svn checkout from last night, so it may lag slightly, but it
>seems to indicate Arnd's segfaults may reflect a gcc vs. icc issue.
>
>In [2]: scipy.__core_version__
>Out[2]: '0.8.1.1611'
>
>In [3]: scipy.__scipy_version__
>Out[3]: '0.4.3.1479'
>
>======================================================================
>FAIL: check_odeint1 (scipy.integrate.test_integrate.test_odeint)
>----------------------------------------------------------------------
>Traceback (most recent call last):
>  File
>"/home/astraw/py24-amd64/lib/python2.4/site-packages/scipy/integrate/tests/test_integrate.py",
>line 51, in check_odeint1
>    assert res < 1.0e-6
>AssertionError
>
>----------------------------------------------------------------------
>Ran 1238 tests in 60.816s
>
>FAILED (failures=1)
>

Finally tracked this one down.  It was a problem with a Fortran array
getting created inappropriately: when I changed the implementation of
the backwards-compatible Numeric C-API for PyArray_FromDims, I made a
mistake so that Fortran arrays were always created.

Now, what surprises me is that it didn't cause more problems.  I suppose
that's because f2py does not use PyArray_FromDims anymore.  But, lots of
other people's code does, I'm sure....

Anyway, it should be fixed, and now all scipy tests pass for me again.

-Travis

From oliphant.travis at ieee.org Fri Dec 9 14:31:34 2005
From: oliphant.travis at ieee.org (Travis Oliphant)
Date: Fri, 09 Dec 2005 12:31:34 -0700
Subject: [SciPy-dev] [SciPy-user] Benchmark data
In-Reply-To: <20051209201510.04c352d6.gerard.vermeulen@grenoble.cnrs.fr>
References: <43995919.2090401@ieee.org> <20051209201510.04c352d6.gerard.vermeulen@grenoble.cnrs.fr>
Message-ID: <4399DB96.3080501@ieee.org>

Gerard Vermeulen wrote:

>On Fri, 09 Dec 2005 03:14:49 -0700
>Travis Oliphant wrote:
>
>>I'd like people to try out scipy core in SVN.  I made improvements to the
>>buffered ufunc section of code that I think will make a big difference
>>in the recently published benchmarks.
>>
>
>Hi Travis,
>
>indeed, it made a big difference (for big arrays scipy is now fastest on some
>statements).
>
>Below are my benchmark results on my DIY python, see
>http://www.scipy.org/mailinglists/mailman?fn=scipy-user/2005-December/006057.html
>

Thanks for these tests.  I'd like to gather more data, but it seems that
the real difference here is the optimizations that are set for your
tests.  On my tests no optimizations were done by the compiler (I'm
using a debug build, and scipy is clearly doing better).

On your system with optimizations set, numarray starts to edge out for
very large arrays.  Apparently, the compiler is able to optimize the
inner loops for numarray better (at least on your machine).  If anybody
has opinions on that, it would be great to hear them.

One idea is that the inner loops for scipy and Numeric use strided
memory access and pointer dereferencing on the inner loop.  Perhaps the
optimizer could do a better job if the dereferencing were done outside
of the inner loop?
Thanks,

-Travis

From oliphant.travis at ieee.org Fri Dec 9 15:01:41 2005
From: oliphant.travis at ieee.org (Travis Oliphant)
Date: Fri, 09 Dec 2005 13:01:41 -0700
Subject: [SciPy-dev] [SciPy-user] Benchmark data
In-Reply-To: <20051209201510.04c352d6.gerard.vermeulen@grenoble.cnrs.fr>
References: <43995919.2090401@ieee.org> <20051209201510.04c352d6.gerard.vermeulen@grenoble.cnrs.fr>
Message-ID: <4399E2A5.3090601@ieee.org>

Gerard Vermeulen wrote:

>On Fri, 09 Dec 2005 03:14:49 -0700
>Travis Oliphant wrote:
>
>>I'd like people to try out scipy core in SVN.  I made improvements to the
>>buffered ufunc section of code that I think will make a big difference
>>in the recently published benchmarks.
>>
>
>Hi Travis,
>
>indeed, it made a big difference (for big arrays scipy is now fastest on some
>statements).
>
>Below are my benchmark results on my DIY python, see
>http://www.scipy.org/mailinglists/mailman?fn=scipy-user/2005-December/006057.html
>
>On my system and for large arrays (>4096), numarray is still fastest, scipy moved
>to second and Numeric is third.
>Numeric is still fastest for small arrays, scipy is second, numarray is third.
>

Numeric will always be faster for small enough arrays, I think, because
it doesn't have the ufunc overhead.  I just don't want it to be a lot
faster.  We can improve the limiting scalar case in scipy_core using
separate scalar math.  It looks like we are doing reasonably well.

>Invoking: python bench.py 12
>Importing test to scipy
>Importing base to scipy
>Importing basic to scipy
>Python 2.4.2 (#1, Dec 4 2005, 08:21:04)
>[GCC 3.4.3 (Mandrakelinux 10.2 3.4.3-7mdk)]
>Optimization flags: -DNDEBUG -O3 -march=i686
>CPU info: getNCPUs=2 has_mmx has_sse has_sse2 is_32bit is_Intel is_Pentium is_PentiumIV
>Numeric-24.2
>numarray-1.5.0
>scipy-core-0.8.1.1617
>benchmark size = 12 (vectors of length 16777216)
>label   Numeric   numarray  scipy.base
> 1      0.4127    0.07423   0.3927
> 2      0.2734    0.2321    0.3234
> 3      0.1975    0.1821    0.2733
> 4      0.8747    0.5371    0.5588
> 5      0.2896    0.2342    0.2737
> 6      0.2066    0.1731    0.2718
> 7      0.8761    0.6286    0.5524
> 8      0.6546    0.4556    0.4533
> 9      9.488     7.566     8.717
> 10     9.506     8.064     8.745
> 11     7.879     6.301     7.305
>TOTAL   30.66     24.45     27.87
>

As mentioned before, it looks like the optimizer is doing something nice
on your system.  One issue is arange, which could definitely be made
faster by having different "fillers" for different types.  I'm still
astonished by the markedly different numbers you seem to get compared
with what others have shown.  Is this all -O3 optimization kicking in?

The other issue is the sin and cosine functions.  They don't have their
own inner loops.  They call a generic inner loop with a function pointer
as data.  Perhaps the optimizer can't do as much with that, or it needs
to be written with an optimizer in mind.

Ultimately, though, I'd like to see some of the inner loops take
advantage of SSE (and equivalent) instructions if the number of
iterations is large enough.

So, yes, I think we could get faster.  But, I'd first like to get more
data from more machines and compiler flags to determine where the
slowness is really coming from.  It might be good, for example, to break
up one of lines 9, 10, and 11 so that at least one sin and cos
calculation is done alone.
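A sketch of the kind of split meant here - timing sin alone, cos alone,
and a combined expression (the vector and the statements are only
illustrative; they are not the actual bench.py code):

import time
import scipy

x = scipy.arange(1000000) * 0.001

def bench(label, func, n=10):
    # time n repetitions of func() and report the total
    t0 = time.clock()
    for i in range(n):
        func()
    print label, time.clock() - t0

bench('sin alone', lambda: scipy.sin(x))
bench('cos alone', lambda: scipy.cos(x))
bench('sin + cos', lambda: scipy.sin(x) + scipy.cos(x))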
Thanks,

-Travis

From oliphant.travis at ieee.org Fri Dec 9 17:23:42 2005
From: oliphant.travis at ieee.org (Travis Oliphant)
Date: Fri, 09 Dec 2005 15:23:42 -0700
Subject: [SciPy-dev] [SciPy-user] Benchmark data
In-Reply-To: <4399F80F.5000801@gmail.com>
References: <43995919.2090401@ieee.org> <20051209201510.04c352d6.gerard.vermeulen@grenoble.cnrs.fr> <4399E2A5.3090601@ieee.org> <4399F80F.5000801@gmail.com>
Message-ID: <439A03EE.2080705@ieee.org>

Robert Kern wrote:

>When we start to seriously look at this, we should consider using liboil
>to implement these optimizations.
>

Hey, great find, that looks just like the kind of thing we'll want at
some point.

One way this could be done is just like the dotblas extension module so
that if liboil is found on your system, an extension module is loaded
that replaces the respective inner-loops of the affected functions.

Alternatively, at compile time, the right functions could be inserted in
the ufunc function table to begin with...

-Travis

From robert.kern at gmail.com Fri Dec 9 18:23:56 2005
From: robert.kern at gmail.com (Robert Kern)
Date: Fri, 09 Dec 2005 15:23:56 -0800
Subject: [SciPy-dev] [SciPy-user] Benchmark data
In-Reply-To: <439A03EE.2080705@ieee.org>
References: <43995919.2090401@ieee.org> <20051209201510.04c352d6.gerard.vermeulen@grenoble.cnrs.fr> <4399E2A5.3090601@ieee.org> <4399F80F.5000801@gmail.com> <439A03EE.2080705@ieee.org>
Message-ID: <439A120C.8060504@gmail.com>

Travis Oliphant wrote:
> Robert Kern wrote:
>
>>When we start to seriously look at this, we should consider using liboil
>>to implement these optimizations.
>
> Hey, great find, that looks just like the kind of thing we'll want at
> some point.
>
> One way this could be done is just like the dotblas extension module so
> that if liboil is found on your system, an extension module is loaded
> that replaces the respective inner-loops of the affected functions.
>
> Alternatively, at compile time, the right functions could be inserted in
> the ufunc function table to begin with...

liboil does this already.  We would simply use the appropriate API
function, say oil_conv_f64_u32().  In import_array(), we would call
oil_init() which will select the appropriate implementations and make
the function pointers point to them.  Sometimes, some of those choices
will be made at compile time, e.g. assembly implementations for the
wrong architecture won't be compiled; others will be made at
initialization.

--
Robert Kern
robert.kern at gmail.com

"In the fields of hell where the grass grows high
 Are the graves of dreams allowed to die."
  -- Richard Harter

From oliphant.travis at ieee.org Fri Dec 9 18:49:21 2005
From: oliphant.travis at ieee.org (Travis Oliphant)
Date: Fri, 09 Dec 2005 16:49:21 -0700
Subject: [SciPy-dev] [SciPy-user] Benchmark data
In-Reply-To: <439A120C.8060504@gmail.com>
References: <43995919.2090401@ieee.org> <20051209201510.04c352d6.gerard.vermeulen@grenoble.cnrs.fr> <4399E2A5.3090601@ieee.org> <4399F80F.5000801@gmail.com> <439A03EE.2080705@ieee.org> <439A120C.8060504@gmail.com>
Message-ID: <439A1801.90808@ieee.org>

Robert Kern wrote:

>Travis Oliphant wrote:
>
>>Robert Kern wrote:
>>
>>>When we start to seriously look at this, we should consider using liboil
>>>to implement these optimizations.
>>>
>>Hey, great find, that looks just like the kind of thing we'll want at
>>some point.
>>
>>One way this could be done is just like the dotblas extension module so
>>that if liboil is found on your system, an extension module is loaded
>>that replaces the respective inner-loops of the affected functions.
>>
>>Alternatively, at compile time, the right functions could be inserted in
>>the ufunc function table to begin with...
>>
>
>liboil does this already.  We would simply use the appropriate API
>function, say oil_conv_f64_u32().  In import_array(), we would call
>oil_init() which will select the appropriate implementations and make
>the function pointers point to them.  Sometimes, some of those choices
>will be made at compile time, e.g. assembly implementations for the
>wrong architecture won't be compiled; others will be made at
>initialization.
>

Right, I'm just wondering aloud if we would require liboil, bundle it
in, or selectively use it at compile time.

It seems like a nice project, regardless.

-Travis

From oliphant.travis at ieee.org Fri Dec 9 18:52:40 2005
From: oliphant.travis at ieee.org (Travis Oliphant)
Date: Fri, 09 Dec 2005 16:52:40 -0700
Subject: [SciPy-dev] [SciPy-user] FFT troubles
In-Reply-To:
References: <4399ED15.5000602@ieee.org> <439A0955.6040207@ieee.org>
Message-ID: <439A18C8.3020402@ieee.org>

Rob Managan wrote:

> I get
>
> running build_clib
> customize UnixCCompiler
> ...
> customize GnuFCompiler using build_ext
> building 'scipy.fftpack._fftpack' extension
> compiling C sources
> gcc options: '-fno-strict-aliasing -Wno-long-double -no-cpp-precomp
> -mno-fused-madd -fno-common -dynamic -DNDEBUG -g -O3 -Wall
> -Wstrict-prototypes'
> compile options: '-Ibuild/src
> -I/Users/managan/Documents/local/lib/python2.4/site-packages/scipy/base/include
> -I/Library/Frameworks/Python.framework/Versions/2.4/include/python2.4 -c'
> /usr/local/bin/g77 -undefined dynamic_lookup -bundle
> build/temp.darwin-7.9.0-Power_Macintosh-2.4/build/src/Lib/fftpack/_fftpackmodule.o
> build/temp.darwin-7.9.0-Power_Macintosh-2.4/Lib/fftpack/src/zfft.o
> build/temp.darwin-7.9.0-Power_Macintosh-2.4/Lib/fftpack/src/drfft.o
> build/temp.darwin-7.9.0-Power_Macintosh-2.4/Lib/fftpack/src/zrfft.o
> build/temp.darwin-7.9.0-Power_Macintosh-2.4/Lib/fftpack/src/zfftnd.o
> build/temp.darwin-7.9.0-Power_Macintosh-2.4/build/src/fortranobject.o
> -L/usr/local/lib/gcc/powerpc-apple-darwin7.9.0/3.4.4
> -Lbuild/temp.darwin-7.9.0-Power_Macintosh-2.4 -ldfftpack -lg2c
> -lcc_dynamic -o
> build/lib.darwin-7.9.0-Power_Macintosh-2.4/scipy/fftpack/_fftpack.so
> running install_lib
>
> I don't see the -lrfftw -lfftw options on the compile line so I
> wonder if I am actually linking against fftw??

Well that's interesting...  That could signal a problem with the fftpack
supplied with full scipy...

You can check to see if your system is picking up fftw using:

python -c "import scipy.distutils.system_info as sds; print sds.get_info('fftw')"

The output of that will show you more.
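To make the result of that check a little more explicit, one could wrap
it like this (a sketch; it relies only on get_info(), shown above, which
returns an empty dict when nothing is found):

import scipy.distutils.system_info as sds

info = sds.get_info('fftw')
if info:
    # the dict lists libraries, library_dirs, define_macros, etc.
    print "fftw found:", info
else:
    print "fftw not found - fftpack will be built against the bundled dfftpack"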
-Travis

From nwagner at mecha.uni-stuttgart.de Sat Dec 10 05:54:42 2005
From: nwagner at mecha.uni-stuttgart.de (Nils Wagner)
Date: Sat, 10 Dec 2005 11:54:42 +0100
Subject: [SciPy-dev] _dotblas.c:71: error: structure has no member named `dotfunc'
Message-ID:

I am using the latest svn versions of core and scipy.

python setup.py install yields:

gcc: scipy/corelib/blasdot/_dotblas.c
scipy/corelib/blasdot/_dotblas.c: In function `dotblas_alterdot':
scipy/corelib/blasdot/_dotblas.c:71: error: structure has no member named `dotfunc'
scipy/corelib/blasdot/_dotblas.c:72: error: structure has no member named `dotfunc'
scipy/corelib/blasdot/_dotblas.c:75: error: structure has no member named `dotfunc'
scipy/corelib/blasdot/_dotblas.c:76: error: structure has no member named `dotfunc'
scipy/corelib/blasdot/_dotblas.c:79: error: structure has no member named `dotfunc'
scipy/corelib/blasdot/_dotblas.c:80: error: structure has no member named `dotfunc'
scipy/corelib/blasdot/_dotblas.c:83: error: structure has no member named `dotfunc'
scipy/corelib/blasdot/_dotblas.c:84: error: structure has no member named `dotfunc'
scipy/corelib/blasdot/_dotblas.c: In function `dotblas_restoredot':
scipy/corelib/blasdot/_dotblas.c:104: error: structure has no member named `dotfunc'
scipy/corelib/blasdot/_dotblas.c:108: error: structure has no member named `dotfunc'
scipy/corelib/blasdot/_dotblas.c:112: error: structure has no member named `dotfunc'
scipy/corelib/blasdot/_dotblas.c:116: error: structure has no member named `dotfunc'
scipy/corelib/blasdot/_dotblas.c: In function `dotblas_alterdot':
scipy/corelib/blasdot/_dotblas.c:71: error: structure has no member named `dotfunc'
scipy/corelib/blasdot/_dotblas.c:72: error: structure has no member named `dotfunc'
scipy/corelib/blasdot/_dotblas.c:75: error: structure has no member named `dotfunc'
scipy/corelib/blasdot/_dotblas.c:76: error: structure has no member named `dotfunc'
scipy/corelib/blasdot/_dotblas.c:79: error: structure has no member named `dotfunc'
scipy/corelib/blasdot/_dotblas.c:80: error: structure has no member named `dotfunc'
scipy/corelib/blasdot/_dotblas.c:83: error: structure has no member named `dotfunc'
scipy/corelib/blasdot/_dotblas.c:84: error: structure has no member named `dotfunc'
scipy/corelib/blasdot/_dotblas.c: In function `dotblas_restoredot':
scipy/corelib/blasdot/_dotblas.c:104: error: structure has no member named `dotfunc'
scipy/corelib/blasdot/_dotblas.c:108: error: structure has no member named `dotfunc'
scipy/corelib/blasdot/_dotblas.c:112: error: structure has no member named `dotfunc'
scipy/corelib/blasdot/_dotblas.c:116: error: structure has no member named `dotfunc'
error: Command "gcc -pthread -fno-strict-aliasing -DNDEBUG -g -O3 -Wall -Wstrict-prototypes -fPIC -DATLAS_INFO="\"3.7.11\"" -Iscipy/corelib/blasdot -I/usr/local/include/atlas -Iscipy/base/include -Ibuild/src/scipy/base -Iscipy/base/src -I/usr/local/include/python2.4 -c scipy/corelib/blasdot/_dotblas.c -o build/temp.linux-i686-2.4/scipy/corelib/blasdot/_dotblas.o" failed with exit status 1
removed scipy/base/__svn_version__.py
removed scipy/base/__svn_version__.pyc
removed scipy/f2py/__svn_version__.py
removed scipy/f2py/__svn_version__.pyc

From nwagner at mecha.uni-stuttgart.de Sat Dec 10 07:06:53 2005
From: nwagner at mecha.uni-stuttgart.de (Nils Wagner)
Date: Sat, 10 Dec 2005 13:06:53 +0100
Subject: [SciPy-dev] Segmentation fault
Message-ID:

scipy.test(1,10) results in a segfault.
Here comes the backtrace:

check_basic (scipy.io.array_import.test_array_import.test_numpyio) ...
Program received signal SIGSEGV, Segmentation fault.
[Switching to Thread 1076175008 (LWP 15562)]
0x00000000 in ?? ()
(gdb) bt
#0 0x00000000 in ?? ()
#1 0x421e4861 in numpyio_tofile (self=0x0, args=0x45ac5b6c) at numpyiomodule.c:324
#2 0x0811eb56 in PyCFunction_Call (func=0x402b91ac, arg=0x45ac5b6c, kw=0x0) at methodobject.c:93
#3 0x080c74ed in PyEval_EvalFrame (f=0x81f0cec) at ceval.c:1499
#4 0x080c8551 in PyEval_EvalFrame (f=0x82ca60c) at ceval.c:3640
#5 0x080c8bb4 in PyEval_EvalCodeEx (co=0x402c7b60, globals=0x402b8b54, locals=0x0, args=0x45903858, argcount=2, kws=0x85dbf18, kwcount=0, defs=0x402ce9f8, defcount=1, closure=0x0) at ceval.c:2736
#6 0x0811de51 in function_call (func=0x402d1224, arg=0x4590384c, kw=0x45ad4d74) at funcobject.c:548
#7 0x0805935e in PyObject_Call (func=0x402d1224, arg=0x4590384c, kw=0x45ad4d74) at abstract.c:1756
#8 0x080c4a4d in PyEval_EvalFrame (f=0x8290b04) at ceval.c:3835
#9 0x080c8bb4 in PyEval_EvalCodeEx (co=0x402c7ba0, globals=0x402b8b54, locals=0x0, args=0x459037d8, argcount=2, kws=0x0, kwcount=0, defs=0x0, defcount=0, closure=0x0) at ceval.c:2736
#10 0x0811dda2 in function_call (func=0x402d125c, arg=0x459037cc, kw=0x0) at funcobject.c:548
#11 0x0805935e in PyObject_Call (func=0x402d125c, arg=0x459037cc, kw=0x0) at abstract.c:1756
#12 0x08064bd4 in instancemethod_call (func=0x3, arg=0x459037cc, kw=0x0) at classobject.c:2447
#13 0x0805935e in PyObject_Call (func=0x45aefd4c, arg=0x45b0774c, kw=0x0) at abstract.c:1756
#14 0x08097c47 in slot_tp_call (self=0x45add74c, args=0x45b0774c, kwds=0x0) at typeobject.c:4536
#15 0x0805935e in PyObject_Call (func=0x45add74c, arg=0x45b0774c, kw=0x0) at abstract.c:1756
#16 0x080c4c5e in PyEval_EvalFrame (f=0x81e6dfc) at ceval.c:3766
#17 0x080c8bb4 in PyEval_EvalCodeEx (co=0x402cc160, globals=0x402b8b54, locals=0x0, args=0x459038f8, argcount=2, kws=0x8616a30, kwcount=0, defs=0x0, defcount=0, closure=0x0) at ceval.c:2736
#18 0x0811de51 in function_call (func=0x402d1614, arg=0x459038ec, kw=0x45ad4cec) at funcobject.c:548
#19 0x0805935e in PyObject_Call (func=0x402d1614, arg=0x459038ec, kw=0x45ad4cec) at abstract.c:1756
#20 0x080c4a4d in PyEval_EvalFrame (f=0x84436bc) at ceval.c:3835
#21 0x080c8bb4 in PyEval_EvalCodeEx (co=0x402cc1a0, globals=0x402b8b54, locals=0x0, args=0x45903838, argcount=2, kws=0x0, kwcount=0, defs=0x0, defcount=0, closure=0x0) at ceval.c:2736
#22 0x0811dda2 in function_call (func=0x402d164c, arg=0x4590382c, kw=0x0) at funcobject.c:548
#23 0x0805935e in PyObject_Call (func=0x402d164c, arg=0x4590382c, kw=0x0) at abstract.c:1756
#24 0x08064bd4 in instancemethod_call (func=0x3, arg=0x4590382c, kw=0x0) at classobject.c:2447
#25 0x0805935e in PyObject_Call (func=0x458fda7c, arg=0x45af554c, kw=0x0) at abstract.c:1756
#26 0x08097c47 in slot_tp_call (self=0x459092cc, args=0x45af554c, kwds=0x0) at typeobject.c:4536
#27 0x0805935e in PyObject_Call (func=0x459092cc, arg=0x45af554c, kw=0x0) at abstract.c:1756
#28 0x080c4c5e in PyEval_EvalFrame (f=0x822257c) at ceval.c:3766
#29 0x080c8551 in PyEval_EvalFrame (f=0x820d4ec) at ceval.c:3640
#30 0x080c8bb4 in PyEval_EvalCodeEx (co=0x402b47e0, globals=0x402b835c, locals=0x0, args=0x82b20b8, argcount=3, kws=0x82b20c4, kwcount=0, defs=0x402d4038, defcount=2, closure=0x0) at ceval.c:2736
#31 0x080c5f67 in PyEval_EvalFrame (f=0x82b1f6c) at ceval.c:3650
#32 0x080c8bb4 in PyEval_EvalCodeEx (co=0x4566cb20, globals=0x4026c824, locals=0x4026c824, args=0x0, argcount=0, kws=0x0, kwcount=0, defs=0x0, defcount=0, closure=0x0) at ceval.c:2736
#33 0x080c8de5 in PyEval_EvalCode (co=0x4566cb20, globals=0x4026c824, locals=0x4026c824) at ceval.c:484
#34 0x080f7c26 in PyRun_InteractiveOneFlags (fp=0x4024d720, filename=0x8124263 "", flags=0xbfffed04) at pythonrun.c:1265
#35 0x080f7e89 in PyRun_InteractiveLoopFlags (fp=0x4024d720, filename=0x8124263 "", flags=0xbfffed04) at pythonrun.c:695
#36 0x080f7fb0 in PyRun_AnyFileExFlags (fp=0x4024d720, filename=0x8124263 "", closeit=0, flags=0xbfffed04) at pythonrun.c:658
#37 0x08055917 in Py_Main (argc=0, argv=0xbfffedd4) at main.c:484
#38 0x08054fc8 in main (argc=1, argv=0xbfffedd4) at python.c:23

Nils

From dd55 at cornell.edu Sat Dec 10 10:33:17 2005
From: dd55 at cornell.edu (Darren Dale)
Date: Sat, 10 Dec 2005 10:33:17 -0500
Subject: [SciPy-dev] unable to build scipy svn version 1481
Message-ID: <200512101033.17968.dd55@cornell.edu>

I just updated from svn, and am unable to build quadpack.

i686-pc-linux-gnu-gcc: Lib/integrate/_quadpackmodule.c
In file included from Lib/integrate/_quadpackmodule.c:6:
Lib/integrate/__quadpack.h: In function `quadpack_qagse':
Lib/integrate/__quadpack.h:141: error: too many arguments to function `dqagse_'
Lib/integrate/__quadpack.h: In function `quadpack_qagie':
Lib/integrate/__quadpack.h:219: error: too many arguments to function `dqagie_'
Lib/integrate/__quadpack.h: In function `quadpack_qagpe':
Lib/integrate/__quadpack.h:314: error: too many arguments to function `dqagpe_'
Lib/integrate/__quadpack.h: In function `quadpack_qawoe':
Lib/integrate/__quadpack.h:420: error: too many arguments to function `dqawoe_'
Lib/integrate/__quadpack.h: In function `quadpack_qawfe':
Lib/integrate/__quadpack.h:521: error: too many arguments to function `dqawfe_'
Lib/integrate/__quadpack.h: In function `quadpack_qawce':
Lib/integrate/__quadpack.h:610: error: too many arguments to function `dqawce_'
Lib/integrate/__quadpack.h: In function `quadpack_qawse':
Lib/integrate/__quadpack.h:690: error: too many arguments to function `dqawse_'
In file included from Lib/integrate/_quadpackmodule.c:6:
Lib/integrate/__quadpack.h: In function `quadpack_qagse':
Lib/integrate/__quadpack.h:141: error: too many arguments to function `dqagse_'
Lib/integrate/__quadpack.h: In function `quadpack_qagie':
Lib/integrate/__quadpack.h:219: error: too many arguments to function `dqagie_'
Lib/integrate/__quadpack.h: In function `quadpack_qagpe':
Lib/integrate/__quadpack.h:314: error: too many arguments to function `dqagpe_'
Lib/integrate/__quadpack.h: In function `quadpack_qawoe':
Lib/integrate/__quadpack.h:420: error: too many arguments to function `dqawoe_'
Lib/integrate/__quadpack.h: In function `quadpack_qawfe':
Lib/integrate/__quadpack.h:521: error: too many arguments to function `dqawfe_'
Lib/integrate/__quadpack.h: In function `quadpack_qawce':
Lib/integrate/__quadpack.h:610: error: too many arguments to function `dqawce_'
Lib/integrate/__quadpack.h: In function `quadpack_qawse':
Lib/integrate/__quadpack.h:690: error: too many arguments to function `dqawse_'
error: Command "i686-pc-linux-gnu-gcc -pthread -fno-strict-aliasing -DNDEBUG -fPIC -I/usr/lib/python2.4/site-packages/scipy/base/include -I/usr/include/python2.4 -c Lib/integrate/_quadpackmodule.c -o build/temp.linux-i686-2.4/Lib/integrate/_quadpackmodule.o" failed with exit status 1

--
Darren S. Dale, Ph.D.
dd55 at cornell.edu

From oliphant.travis at ieee.org Sat Dec 10 12:39:27 2005
From: oliphant.travis at ieee.org (Travis Oliphant)
Date: Sat, 10 Dec 2005 10:39:27 -0700
Subject: [SciPy-dev] [SciPy-user] gcc error
In-Reply-To: <439AC8A8.3050602@gmx.net>
References: <1134055230.43984f3e2e8ee@imp3-g19.free.fr> <200512091131.43983.basvandijk@home.nl> <439ABCE7.10607@gmx.net> <439AC8A8.3050602@gmx.net>
Message-ID: <439B12CF.1060606@ieee.org>

Steve Schmerler wrote:

>Thanx for the *fast* _dotblas.c update :). But compiling scipy I get
>
>#########################################################################################################################################
>
>python scipy/setup.py install
>
>[...]
>

Thanks for the note.  I changed something to get rid of a compiler
warning but changed it badly.  I apologize for the mistake.

The recent tree instability was brought about by my moving the function
pointers out of the type descr structure so that the PyArray_Descr
structure just has a pointer to a function-pointer table.  Type
descriptors are dynamic and can get copied around, and this will save a
little bit of unnecessary jostling.

This has C-API issues for those who used direct access to the function
pointers in the descr table.  Now, you reach them through the member
named 'f'.  Thus, what was descr->cast is now descr->f->cast.

-Travis

From strawman at astraw.com Sat Dec 10 15:40:19 2005
From: strawman at astraw.com (Andrew Straw)
Date: Sat, 10 Dec 2005 12:40:19 -0800
Subject: [SciPy-dev] issues with scipy and eggs
Message-ID: <439B3D33.6080507@astraw.com>

Hi,

Issue 1 - probably a minor bug with scipy distutils
===================================================

Attempting to build scipy (full) using "python setup.py bdist_egg" fails
on my machine with:

/usr/bin/g77 -shared
build/temp.linux-x86_64-2.4/build/src/Lib/fftpack/_fftpackmodule.o
build/temp.linux-x86_64-2.4/Lib/fftpack/src/zfft.o
build/temp.linux-x86_64-2.4/Lib/fftpack/src/drfft.o
build/temp.linux-x86_64-2.4/Lib/fftpack/src/zrfft.o
build/temp.linux-x86_64-2.4/Lib/fftpack/src/zfftnd.o
build/temp.linux-x86_64-2.4/build/src/fortranobject.o
-Lbuild/temp.linux-x86_64-2.4 -ldfftpack -lg2c-pic -o
build/lib.linux-x86_64-2.4/scipy/fftpack/_fftpack.so
/usr/bin/ld: cannot find -ldfftpack
collect2: ld returned 1 exit status
/usr/bin/ld: cannot find -ldfftpack
collect2: ld returned 1 exit status
error: Command "/usr/bin/g77 -shared
build/temp.linux-x86_64-2.4/build/src/Lib/fftpack/_fftpackmodule.o
build/temp.linux-x86_64-2.4/Lib/fftpack/src/zfft.o
build/temp.linux-x86_64-2.4/Lib/fftpack/src/drfft.o
build/temp.linux-x86_64-2.4/Lib/fftpack/src/zrfft.o
build/temp.linux-x86_64-2.4/Lib/fftpack/src/zfftnd.o
build/temp.linux-x86_64-2.4/build/src/fortranobject.o
-Lbuild/temp.linux-x86_64-2.4 -ldfftpack -lg2c-pic -o
build/lib.linux-x86_64-2.4/scipy/fftpack/_fftpack.so" failed with exit
status 1

However, "python setup.py build bdist_egg" works.  Is there something
wrong with scipy distutils that is causing this behavior?  I am happy to
provide more information if requested...

Issue 2 - how to use full scipy from an egg?
============================================

I can't figure out the incantation for using scipy (full) from an egg.
AFAICT this should just work with "import scipy" because both the
scipy_core and scipy eggs are in my easy-install.pth file.  But doing
"import scipy" only gets me scipy core.  Furthermore, Robert Kern
suggested the following, which does apparently find the full scipy egg,
but it doesn't load full scipy.
(You can see below that I have a non-root install, but I don't think this is an issue. At least it hasn't been with other eggs.) In [1]: import pkg_resources In [2]: pkg_resources.require('SciPy') Out[2]: [scipy 0.4.3.1482 (/.sharehome/astraw/py24-amd64/lib/python2.4/site-packages/scipy-0.4.3.1482-py2.4-linux-x86_64.egg)] In [3]: import scipy Importing test to scipy Importing base to scipy Importing basic to scipy In [4]: import scipy.signal --------------------------------------------------------------------------- exceptions.ImportError From robert.kern at gmail.com Sat Dec 10 15:57:45 2005 From: robert.kern at gmail.com (Robert Kern) Date: Sat, 10 Dec 2005 12:57:45 -0800 Subject: [SciPy-dev] issues with scipy and eggs In-Reply-To: <439B3D33.6080507@astraw.com> References: <439B3D33.6080507@astraw.com> Message-ID: <439B4149.6000408@gmail.com> Andrew Straw wrote: > Hi, > > Issue 1 - probably a minor bug with scipy distutils > =================================================== > > Attempting to build scipy (full) using "python setup.py bdist_egg" fails > on my machine with: > > /usr/bin/g77 -shared > build/temp.linux-x86_64-2.4/build/src/Lib/fftpack/_fftpackmodule.o > build/temp.linux-x86_64-2.4/Lib/fftpack/src/zfft.o > build/temp.linux-x86_64-2.4/Lib/fftpack/src/drfft.o > build/temp.linux-x86_64-2.4/Lib/fftpack/src/zrfft.o > build/temp.linux-x86_64-2.4/Lib/fftpack/src/zfftnd.o > build/temp.linux-x86_64-2.4/build/src/fortranobject.o > -Lbuild/temp.linux-x86_64-2.4 -ldfftpack -lg2c-pic -o > build/lib.linux-x86_64-2.4/scipy/fftpack/_fftpack.so > /usr/bin/ld: cannot find -ldfftpack > collect2: ld returned 1 exit status > /usr/bin/ld: cannot find -ldfftpack > collect2: ld returned 1 exit status > error: Command "/usr/bin/g77 -shared > build/temp.linux-x86_64-2.4/build/src/Lib/fftpack/_fftpackmodule.o > build/temp.linux-x86_64-2.4/Lib/fftpack/src/zfft.o > build/temp.linux-x86_64-2.4/Lib/fftpack/src/drfft.o > build/temp.linux-x86_64-2.4/Lib/fftpack/src/zrfft.o > build/temp.linux-x86_64-2.4/Lib/fftpack/src/zfftnd.o > build/temp.linux-x86_64-2.4/build/src/fortranobject.o > -Lbuild/temp.linux-x86_64-2.4 -ldfftpack -lg2c-pic -o > build/lib.linux-x86_64-2.4/scipy/fftpack/_fftpack.so" failed with exit > status 1 > > However, "python setup.py build bdist_egg" works. Is there something > wrong with scipy distutils that is causing this behavior? I am happy to > provide more information if requested... I've found scipy.distutils (and scipy_distutils) to be cranky about computing the dependencies between build commands. I always use a build script, e.g.: python setup.py build_src build_clib build_ext -i --fcompiler=gnu > Issue 2 - how to use full scipy from an egg? > ============================================ > > I can't figure out the incantation for using scipy (full) from an egg. > AFAICT this should just work with "import scipy" because both the > scipy_core and scipy egg are in my easy-install.pth file. But doing > "import scipy" only gets me scipy core. Furthermore, Robert Kern > suggested the following, which does apparently find the full scipy egg, > but it doesn't load full scipy. No, you need to add the following to the scipy/__init__.py in both scipy_core and full scipy: try: __import__('pkg_resources').declare_namespace(__name__) except ImportError: pass That way, pkg_resources knows to keep looking for the other half. Also, if both eggs are already on sys.path, then pkg_resources.require() isn't necessary. 
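To make that concrete, here is roughly how the two installed eggs end up sharing the namespace once the fragment is in place (a sketch only; the egg names are the ones from your traceback, and the subpackage lists are abbreviated):

scipy_core-0.8.2.1625-py2.4-linux-x86_64.egg/
    scipy/
        __init__.py    # the real scipy_core code, plus the fragment above
        base/ ...
scipy-0.4.3.1482-py2.4-linux-x86_64.egg/
    scipy/
        __init__.py    # nothing but the fragment
        signal/ ...
        linalg/ ...

With both __init__.py's declaring the namespace, pkg_resources extends scipy.__path__ to cover the scipy/ directory of each egg, so a quick sanity check is:

>>> import scipy
>>> print scipy.__path__   # should list the scipy/ dir from both eggs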
Additionally, if they weren't, the lone pkg_resources.require('SciPy') wouldn't find scipy_core. -- Robert Kern robert.kern at gmail.com "In the fields of hell where the grass grows high Are the graves of dreams allowed to die." -- Richard Harter From strawman at astraw.com Sat Dec 10 16:51:30 2005 From: strawman at astraw.com (Andrew Straw) Date: Sat, 10 Dec 2005 13:51:30 -0800 Subject: [SciPy-dev] issues with scipy and eggs In-Reply-To: <439B4149.6000408@gmail.com> References: <439B3D33.6080507@astraw.com> <439B4149.6000408@gmail.com> Message-ID: <439B4DE2.80801@astraw.com> Robert Kern wrote: >Andrew Straw wrote: > >>Issue 2 - how to use full scipy from an egg? >>============================================ >> >>I can't figure out the incantation for using scipy (full) from an egg. >>AFAICT this should just work with "import scipy" because both the >>scipy_core and scipy egg are in my easy-install.pth file. But doing >>"import scipy" only gets me scipy core. Furthermore, Robert Kern >>suggested the following, which does apparently find the full scipy egg, >>but it doesn't load full scipy. >> > >No, you need to add the following to the scipy/__init__.py in both scipy_core >and full scipy: > >try: > __import__('pkg_resources').declare_namespace(__name__) >except ImportError: > pass > >That way, pkg_resources knows to keep looking for the other half. > I'm attempting to do this, but it doesn't work. If I add these lines to core/scipy/__init__.py, scipy core on its own works fine. However, after installing an un-modified full scipy, I get the following error: Python 2.4.1 (#2, May 6 2005, 11:22:24) [GCC 3.3.6 (Debian 1:3.3.6-2)] on linux2 Type "help", "copyright", "credits" or "license" for more information. >>> import scipy /.sharehome/astraw/py24-amd64/lib/python2.4/site-packages/scipy_core-0.8.2.1625-py2.4-linux-x86_64.egg/scipy/__init__.py:24: UserWarning: Module scipy was already imported from /.sharehome/astraw/py24-amd64/lib/python2.4/site-packages/scipy_core-0.8.2.1625-py2.4-linux-x86_64.egg/scipy/__init__.pyc, but /.sharehome/astraw/py24-amd64/lib/python2.4/site-packages/scipy-0.4.3.1482-py2.4-linux-x86_64.egg is being added to sys.path tmp_pkg_resources = __import__('pkg_resources') Importing test to scipy Importing base to scipy Importing basic to scipy Traceback (most recent call last): File "", line 1, in ? File "/.sharehome/astraw/py24-amd64/lib/python2.4/site-packages/scipy_core-0.8.2.1625-py2.4-linux-x86_64.egg/scipy/__init__.py", line 70, in ? __doc__ += PackageImport().import_packages() File "/.sharehome/astraw/py24-amd64/lib/python2.4/site-packages/scipy_core-0.8.2.1625-py2.4-linux-x86_64.egg/scipy/_import_tools.py", line 147, in import_packages return self._format_titles(titles) File "/.sharehome/astraw/py24-amd64/lib/python2.4/site-packages/scipy_core-0.8.2.1625-py2.4-linux-x86_64.egg/scipy/_import_tools.py", line 90, in _format_titles max_length = max(lengths) ValueError: max() arg is an empty sequence If I attempt to create an __init__.py in scipy/build/scipy/__init__.py before doing "python setup.py bdist_egg" with those lines, I get the same error. (There doesn't seem to be a scipy/__init__.py in full scipy.) > >Also, if both eggs are already on sys.path, then pkg_resources.require() isn't >necessary. > Yes, I'm just going by your previous email which suggested using pkg_resources.require(). All my other eggs load fine without this, so I figured this might have worked around some scipy funkiness.
>Additionally, if they weren't, the lone pkg_resources.require('SciPy') >wouldn't find scipy_core. > I have no idea what you mean here. Having two packages that use the "scipy" namespace seems to be at the root of the issue here. (Hence the UserWarning above.) Do we need to do something relating to setuptools' namespace packages? Cheers! Andrew From arnd.baecker at web.de Sat Dec 10 17:15:50 2005 From: arnd.baecker at web.de (Arnd Baecker) Date: Sat, 10 Dec 2005 23:15:50 +0100 (CET) Subject: [SciPy-dev] issues with scipy and eggs In-Reply-To: <439B4DE2.80801@astraw.com> References: <439B3D33.6080507@astraw.com> <439B4149.6000408@gmail.com> <439B4DE2.80801@astraw.com> Message-ID: On Sat, 10 Dec 2005, Andrew Straw wrote: [...] > Having two packages that use the "scipy" namespace seems to be at the > root of the issue here. (Hence the UserWarning above.) Do we need to do > something relating to setuptools' namespace packages? Maybe this is related: I tried to install scipy core to one place and full scipy to a different place and add both paths to PYTHONPATH. This does not work and I think this is because (depending on the order) either (scipy core is found) or (scipy full is found but then scipy core is not found) ... I am also wondering about the implications for adding additional libraries (e.g. those from the sandbox) to some other place: imagine scipy core and scipy full are installed as debian/etc. packages and one would like to add something - is this possible (as non-root)? Best, Arnd From robert.kern at gmail.com Sat Dec 10 17:18:43 2005 From: robert.kern at gmail.com (Robert Kern) Date: Sat, 10 Dec 2005 14:18:43 -0800 Subject: [SciPy-dev] issues with scipy and eggs In-Reply-To: References: <439B3D33.6080507@astraw.com> <439B4149.6000408@gmail.com> <439B4DE2.80801@astraw.com> Message-ID: <439B5443.1010406@gmail.com> Arnd Baecker wrote: > On Sat, 10 Dec 2005, Andrew Straw wrote: > > [...] > >>Having two packages that use the "scipy" namespace seems to be at the >>root of the issue here. (Hence the UserWarning above.) Do we need to do >>something relating to setuptools' namespace packages? > > Maybe this is related: I tried to install scipy core to > one place and full scipy to a different place and > add both paths to PYTHONPATH. > This does not work and I think this is because > (depending on the order) either > (scipy core is found) or (scipy full is found > but then scipy core is not found) ... > > I am also wondering about the implications for adding > additional libraries (e.g. those from the sandbox) > to some other place: imagine scipy core and scipy full > are installed as debian/etc. packages and one > would like to add something - is this possible (as non-root)? Not normally, no. Eggs can do this, though. -- Robert Kern robert.kern at gmail.com "In the fields of hell where the grass grows high Are the graves of dreams allowed to die."
-- Richard Harter From robert.kern at gmail.com Sat Dec 10 17:48:57 2005 From: robert.kern at gmail.com (Robert Kern) Date: Sat, 10 Dec 2005 14:48:57 -0800 Subject: [SciPy-dev] issues with scipy and eggs In-Reply-To: <439B4DE2.80801@astraw.com> References: <439B3D33.6080507@astraw.com> <439B4149.6000408@gmail.com> <439B4DE2.80801@astraw.com> Message-ID: <439B5B59.6040109@gmail.com> Andrew Straw wrote: > Robert Kern wrote: >>No, you need to add the following to the scipy/__init__.py in both scipy_core >>and full scipy: >> >>try: >> __import__('pkg_resources').declare_namespace(__name__) >>except ImportError: >> pass >> >>That way, pkg_resources knows to keep looking for the other half. > > I'm attempting to do this, but it doesn't work. If I add these lines to > core/scipy/__init__.py, scipy core on its own works fine. However, > after installing an un-modified full scipy, I get the following error: > > Python 2.4.1 (#2, May 6 2005, 11:22:24) > [GCC 3.3.6 (Debian 1:3.3.6-2)] on linux2 > Type "help", "copyright", "credits" or "license" for more information. > >>>>import scipy > > /.sharehome/astraw/py24-amd64/lib/python2.4/site-packages/scipy_core-0.8.2.1625-py2.4-linux-x86_64.egg/scipy/__init__.py:24: > UserWarning: Module scipy was already imported from > /.sharehome/astraw/py24-amd64/lib/python2.4/site-packages/scipy_core-0.8.2.1625-py2.4-linux-x86_64.egg/scipy/__init__.pyc, > but > /.sharehome/astraw/py24-amd64/lib/python2.4/site-packages/scipy-0.4.3.1482-py2.4-linux-x86_64.egg > is being added to sys.path > tmp_pkg_resources = __import__('pkg_resources') > Importing test to scipy > Importing base to scipy > Importing basic to scipy > Traceback (most recent call last): > File "", line 1, in ? > File > "/.sharehome/astraw/py24-amd64/lib/python2.4/site-packages/scipy_core-0.8.2.1625-py2.4-linux-x86_64.egg/scipy/__init__.py", > line 70, in ? > __doc__ += PackageImport().import_packages() > File > "/.sharehome/astraw/py24-amd64/lib/python2.4/site-packages/scipy_core-0.8.2.1625-py2.4-linux-x86_64.egg/scipy/_import_tools.py", > line 147, in import_packages > return self._format_titles(titles) > File > "/.sharehome/astraw/py24-amd64/lib/python2.4/site-packages/scipy_core-0.8.2.1625-py2.4-linux-x86_64.egg/scipy/_import_tools.py", > line 90, in _format_titles > max_length = max(lengths) > ValueError: max() arg is an empty sequence > > If I attempt to create an __init__.py in scipy/build/scipy/__init__.py > before doing "python setup.py bdist_egg" with those lines, I get the > same error. (There doesn't seem to be a scipy/__init__.py in full scipy.) It's been working for me when I make one in scipy/__init__.py; I try not to directly mess around inside the build directory if I can avoid it. I think that this is a buglet in _import_tools.py. _format_titles(titles) is expecting titles to be non-empty. PackageImport is not expecting scipy_core and the rest of scipy to be in different places. The workaround for now is to comment out the last line in scipy_core's __init__.py . The UserWarning is new. I'll have to figure out why that's happening. >>Also, if both eggs are already on sys.path, then pkg_resources.require() isn't >>necessary. > > Yes, I'm just going by your previous email which suggested using > pkg_resources.require(). All my other eggs load fine without this, so I > figured this might have worked around some scipy funkiness. > >>Additionally, if they weren't, the lone pkg_resources.require('SciPy') >>wouldn't find scipy_core. > > I have no idea what you mean here.
If neither egg had been in your sys.path, then you would have had to call both pkg_resources.require('scipy_core') pkg_resources.require('scipy') to make sure both were found. I think. (Note: the capitalization seems to be fixed, now). > Having two packages that use the "scipy" namespace seems to be at the > root of the issue here. (Hence the UserWarning above.) Do we need to do > something relating to setuptools' namespace packages? Yes, that's what the fragment I gave does. We can't use the standard setuptools namespace package mechanism because scipy_core's __init__.py actually contains code that needs to run. -- Robert Kern robert.kern at gmail.com "In the fields of hell where the grass grows high Are the graves of dreams allowed to die." -- Richard Harter From strawman at astraw.com Sat Dec 10 19:37:08 2005 From: strawman at astraw.com (Andrew Straw) Date: Sat, 10 Dec 2005 16:37:08 -0800 Subject: [SciPy-dev] issues with scipy and eggs In-Reply-To: <439B5B59.6040109@gmail.com> References: <439B3D33.6080507@astraw.com> <439B4149.6000408@gmail.com> <439B4DE2.80801@astraw.com> <439B5B59.6040109@gmail.com> Message-ID: <439B74B4.4070008@astraw.com> Robert Kern wrote: >Andrew Straw wrote: > > >>If I attempt to create an __init__.py in scipy/build/scipy/__init__.py >>before doing "python setup.py bdist_egg" with those lines, I get the >>same error. (There doesn't seem to be a scipy/__init__.py in full scipy.) >> >> > >It's been working for me when I make one in scipy/__init__.py; I try not to >directly mess around inside the build directory if I can avoid it. > > OK, but which "scipy" directory are you talking about? When I do an svn checkout of full scipy, there is no scipy directory. Only after I do some building is there a "scipy" directory -- below "build". >I think that this is a buglet in _import_tools.py. _format_titles(titles) is >expecting titles to be non-empty. PackageImport is not expecting scipy_core and >the rest of scipy to be in different places. > >The workaround for now is to comment out the last line in scipy_core's __init__.py . > > OK, that does seem to eliminate the exception. Thanks for your help on this... I hope we can turn this stuff on by default in scipy -- not to force anyone to use eggs, but just to allow building as eggs without going through these hoops. From robert.kern at gmail.com Sat Dec 10 20:20:37 2005 From: robert.kern at gmail.com (Robert Kern) Date: Sat, 10 Dec 2005 17:20:37 -0800 Subject: [SciPy-dev] issues with scipy and eggs In-Reply-To: <439B74B4.4070008@astraw.com> References: <439B3D33.6080507@astraw.com> <439B4149.6000408@gmail.com> <439B4DE2.80801@astraw.com> <439B5B59.6040109@gmail.com> <439B74B4.4070008@astraw.com> Message-ID: <439B7EE5.7030408@gmail.com> Andrew Straw wrote: > OK, but which "scipy" directory are you talking about? When I do an svn > checkout of full scipy, there is no scipy directory. Only after I do > some building is there a "scipy" directory -- below "build". Sorry. Lib/__init__.py -- Robert Kern robert.kern at gmail.com "In the fields of hell where the grass grows high Are the graves of dreams allowed to die." 
-- Richard Harter From strawman at astraw.com Sat Dec 10 21:12:59 2005 From: strawman at astraw.com (Andrew Straw) Date: Sat, 10 Dec 2005 18:12:59 -0800 Subject: [SciPy-dev] issues with scipy and eggs In-Reply-To: <439B7EE5.7030408@gmail.com> References: <439B3D33.6080507@astraw.com> <439B4149.6000408@gmail.com> <439B4DE2.80801@astraw.com> <439B5B59.6040109@gmail.com> <439B74B4.4070008@astraw.com> <439B7EE5.7030408@gmail.com> Message-ID: <439B8B2B.8020309@astraw.com> OK, with your changes, I have scipy_core-0.8.2.1625-py2.4-linux-x86_64.egg and scipy-0.4.3.1482-py2.4-linux-x86_64.egg. But unfortunately not all is well. Here's my latest traceback. Note that all of scipy works when not installed as an egg, and all my other eggs work. This is with setuptools 0.6a8. astraw at hdmg:~$ python -=-=-=-=-=-=-=-=-=-= python version info -=-=-=-=-=-=-=-=-=-= 2.4.1 /usr PYTHONPATH /home/astraw/py24-amd64/lib/python2.4/site-packages:/home/astraw/py24-amd64/lib/python2.4/site-packages/Numeric:/home/astraw/py24-amd64/lib/python2.4/site-packages/setuptools-0.6a8-py2.4.egg -=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-= Python 2.4.1 (#2, May 6 2005, 11:22:24) [GCC 3.3.6 (Debian 1:3.3.6-2)] on linux2 Type "help", "copyright", "credits" or "license" for more information. >>> import scipy /.sharehome/astraw/py24-amd64/lib/python2.4/site-packages/scipy_core-0.8.2.1625-py2.4-linux-x86_64.egg/scipy/__init__.py:24: UserWarning: Module scipy was already imported from /.sharehome/astraw/py24-amd64/lib/python2.4/site-packages/scipy_core-0.8.2.1625-py2.4-linux-x86_64.egg/scipy/__init__.pyc, but /.sharehome/astraw/py24-amd64/lib/python2.4/site-packages/scipy-0.4.3.1482-py2.4-linux-x86_64.egg is being added to sys.path tmp_pkg_resources = __import__('pkg_resources') Importing test to scipy Importing base to scipy Importing basic to scipy >>> import scipy.signal Traceback (most recent call last): File "", line 1, in ? File "/.sharehome/astraw/py24-amd64/lib/python2.4/site-packages/scipy-0.4.3.1482-py2.4-linux-x86_64.egg/scipy/signal/__init__.py", line 11, in ? from ltisys import * File "/.sharehome/astraw/py24-amd64/lib/python2.4/site-packages/scipy-0.4.3.1482-py2.4-linux-x86_64.egg/scipy/signal/ltisys.py", line 13, in ? import scipy.linalg as linalg File "/.sharehome/astraw/py24-amd64/lib/python2.4/site-packages/scipy-0.4.3.1482-py2.4-linux-x86_64.egg/scipy/linalg/__init__.py", line 10, in ? from basic import * File "/.sharehome/astraw/py24-amd64/lib/python2.4/site-packages/scipy-0.4.3.1482-py2.4-linux-x86_64.egg/scipy/linalg/basic.py", line 18, in ? from scipy.lib.lapack import get_lapack_funcs ImportError: No module named lapack Robert Kern wrote: >Andrew Straw wrote: > > >>OK, but which "scipy" directory are you talking about? When I do an svn >>checkout of full scipy, there is no scipy directory. Only after I do >>some building is there a "scipy" directory -- below "build". >> > >Sorry. 
Lib/__init__.py > > From robert.kern at gmail.com Sat Dec 10 21:43:22 2005 From: robert.kern at gmail.com (Robert Kern) Date: Sat, 10 Dec 2005 18:43:22 -0800 Subject: [SciPy-dev] issues with scipy and eggs In-Reply-To: <439B8B2B.8020309@astraw.com> References: <439B3D33.6080507@astraw.com> <439B4149.6000408@gmail.com> <439B4DE2.80801@astraw.com> <439B5B59.6040109@gmail.com> <439B74B4.4070008@astraw.com> <439B7EE5.7030408@gmail.com> <439B8B2B.8020309@astraw.com> Message-ID: <439B924A.8020802@gmail.com> Andrew Straw wrote: > OK, with your changes, I have > scipy_core-0.8.2.1625-py2.4-linux-x86_64.egg and > scipy-0.4.3.1482-py2.4-linux-x86_64.egg. > > But unfortunately not all is well. Here's my latest traceback. Note that > all of scipy works when not installed as an egg, and all my other eggs > work. This is with setuptools 0.6a8. > > astraw at hdmg:~$ python > -=-=-=-=-=-=-=-=-=-= python version info -=-=-=-=-=-=-=-=-=-= > 2.4.1 /usr > PYTHONPATH > /home/astraw/py24-amd64/lib/python2.4/site-packages:/home/astraw/py24-amd64/lib/python2.4/site-packages/Numeric:/home/astraw/py24-amd64/lib/python2.4/site-packages/setuptools-0.6a8-py2.4.egg > -=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-= > > Python 2.4.1 (#2, May 6 2005, 11:22:24) > [GCC 3.3.6 (Debian 1:3.3.6-2)] on linux2 > Type "help", "copyright", "credits" or "license" for more information. > >>>>import scipy > > /.sharehome/astraw/py24-amd64/lib/python2.4/site-packages/scipy_core-0.8.2.1625-py2.4-linux-x86_64.egg/scipy/__init__.py:24: > UserWarning: Module scipy was already imported from > /.sharehome/astraw/py24-amd64/lib/python2.4/site-packages/scipy_core-0.8.2.1625-py2.4-linux-x86_64.egg/scipy/__init__.pyc, > but > /.sharehome/astraw/py24-amd64/lib/python2.4/site-packages/scipy-0.4.3.1482-py2.4-linux-x86_64.egg > is being added to sys.path > tmp_pkg_resources = __import__('pkg_resources') > Importing test to scipy > Importing base to scipy > Importing basic to scipy > >>>>import scipy.signal > > Traceback (most recent call last): > File "", line 1, in ? > File > "/.sharehome/astraw/py24-amd64/lib/python2.4/site-packages/scipy-0.4.3.1482-py2.4-linux-x86_64.egg/scipy/signal/__init__.py", > line 11, in ? > from ltisys import * > File > "/.sharehome/astraw/py24-amd64/lib/python2.4/site-packages/scipy-0.4.3.1482-py2.4-linux-x86_64.egg/scipy/signal/ltisys.py", > line 13, in ? > import scipy.linalg as linalg > File > "/.sharehome/astraw/py24-amd64/lib/python2.4/site-packages/scipy-0.4.3.1482-py2.4-linux-x86_64.egg/scipy/linalg/__init__.py", > line 10, in ? > from basic import * > File > "/.sharehome/astraw/py24-amd64/lib/python2.4/site-packages/scipy-0.4.3.1482-py2.4-linux-x86_64.egg/scipy/linalg/basic.py", > line 18, in ? > from scipy.lib.lapack import get_lapack_funcs > ImportError: No module named lapack D'oh! Add the same fragment to scipy_core's scipy/lib/__init__.py and full scipy's Lib/lib/__init__.py . -- Robert Kern robert.kern at gmail.com "In the fields of hell where the grass grows high Are the graves of dreams allowed to die." -- Richard Harter From oliphant.travis at ieee.org Sun Dec 11 03:46:16 2005 From: oliphant.travis at ieee.org (Travis Oliphant) Date: Sun, 11 Dec 2005 01:46:16 -0700 Subject: [SciPy-dev] Arange has been sped up Message-ID: <439BE758.10109@ieee.org> Friends, I've just sped up arange using individual filling loops like numarray does -- and added (untested) complex arange functionality. The benchmarks are showing a marked improvement in the arange function for scipy over its previous performance. 
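If anyone wants to check the effect on their own machine, a rough timing loop along these lines will do (just a sketch, not the actual benchmark script):

import timeit
for n in (10, 100, 1000, 10000, 100000):
    t = timeit.Timer("arange(%d)" % n, "from scipy import arange")
    print n, min(t.repeat(3, 1000))   # best-of-3 time for 1000 calls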
Thanks for the testing. Remember this is with a debug-build for scipy on an AMD chip. Python 2.4.2 (#1, Nov 14 2005, 21:26:13) [GCC 3.4.1 (Mandrakelinux 10.1 3.4.1-4mdk)] Optimization flags: -g -Wall -Wstrict-prototypes CPU info: getNCPUs has_3dnow has_3dnowext has_mmx is_32bit is_AMD is_singleCPU Numeric-24.2 numarray-1.5.1 scipy-core-0.8.2.1627 benchmark size = 10 (vectors of length 1048576)
label    Numeric    numarray   scipy.base
1        0.1377     0.0505     0.02734
2        0.1017     0.1063     0.09208
3        0.07226    0.07828    0.0825
4        0.4258     0.2689     0.1533
5        0.1126     0.1079     0.08457
6        0.0719     0.07721    0.08304
7        0.3053     0.3167     0.1547
8        0.1958     0.1964     0.1075
9        2.259      2.829      1.775
10       2.221      2.925      1.811
11       1.725      2.329      1.438
TOTAL    7.628      9.285      5.81
For all but lines 3 and 6, scipy_core is apparently doing better on my system. These are lines calling x % y, and scipy_core implements the remainder the same way that Python does (thus needing extra checking), which I expect accounts for its tiny slowness. -Travis From oliphant.travis at ieee.org Sun Dec 11 03:53:08 2005 From: oliphant.travis at ieee.org (Travis Oliphant) Date: Sun, 11 Dec 2005 01:53:08 -0700 Subject: [SciPy-dev] New release on Monday Message-ID: <439BE8F4.3070302@ieee.org> I'm going to make a new release of scipy core on Monday evening (U.S. Mountain Time) unless I hear from anyone. I'll probably post new Windows binaries of full scipy at that point as well. Now's a good time for any bug-reports or outstanding issues. -Travis From prabhu_r at users.sf.net Sun Dec 11 04:41:51 2005 From: prabhu_r at users.sf.net (Prabhu Ramachandran) Date: Sun, 11 Dec 2005 15:11:51 +0530 Subject: [SciPy-dev] New release on Monday In-Reply-To: <439BE8F4.3070302@ieee.org> References: <439BE8F4.3070302@ieee.org> Message-ID: <17307.62559.831909.772430@monster.iitb.ac.in> >>>>> "Travis" == Travis Oliphant writes: Travis> I'm going to make a new release of scipy core on Monday Travis> evening (U.S. Mountain Time) unless I hear from anyone. Travis> I'll probably post new Windows binaries of full scipy at Travis> that point as well. Travis> Now's a good time for any bug-reports or outstanding Travis> issues. Thanks for the great work! scipy.test(10) passes perfectly on a Debian woody box (i386). There is one small bug in scipy.distutils. Attached is a patch. cheers, prabhu -------------- next part -------------- An embedded and charset-unspecified text was scrubbed... Name: scipy_core.patch URL: From arnd.baecker at web.de Sun Dec 11 05:53:19 2005 From: arnd.baecker at web.de (Arnd Baecker) Date: Sun, 11 Dec 2005 11:53:19 +0100 (CET) Subject: [SciPy-dev] [SciPy-user] Arange has been sped up In-Reply-To: <439BE758.10109@ieee.org> References: <439BE758.10109@ieee.org> Message-ID: Hi, On Sun, 11 Dec 2005, Travis Oliphant wrote: > Friends, > > I've just sped up arange using individual filling loops like numarray > does -- and added (untested) complex arange functionality. > > The benchmarks are showing a marked improvement in the arange function > for scipy over its previous performance. > Thanks for the testing. On my laptop I get for `python test_bench.py`: Python 2.3.5 (#2, Sep 4 2005, 22:01:42) [GCC 3.3.5 (Debian 1:3.3.5-13)] Optimization flags: -DNDEBUG -g -O3 -Wall -Wstrict-prototypes CPU info: getNCPUs has_mmx has_sse is_32bit is_Intel is_Pentium is_PentiumII is_PentiumIII is_i686 is_singleCPU Numeric: 23.8 numarray: 1.1.1 Running pystone...
Pystone(1.1) time for 50000 passes = 2.6 This machine benchmarks at 19230.8 pystones/second scipy: 0.8.2.1623 ==== bench arange ====
size     Numeric   numarray   scipy
10       0.12      1.77       0.18   (secs for 10000 calls)
100      0.20      1.75       0.25   (secs for 10000 calls)
1000     0.44      0.89       0.41   (secs for 5000 calls)
10000    0.77      0.20       0.66   (secs for 1000 calls)
100000   0.79      0.05       0.67   (secs for 100 calls)
scipy: 0.8.3.1630 ==== bench arange ====
size     Numeric   numarray   scipy
10       0.12      1.77       0.18   (secs for 10000 calls)
100      0.20      1.80       0.20   (secs for 10000 calls)
1000     0.44      0.90       0.11   (secs for 5000 calls)
10000    0.78      0.20       0.04   (secs for 1000 calls)
100000   0.83      0.06       0.05   (secs for 100 calls)
So only at small array sizes Numeric is faster (if one can believe the benchmark ;-). The data suggest that the crossover is at around 100. For the larger sizes getting a factor of more than 10 is impressive!! My compliments, Travis! Best, Arnd From arnd.baecker at web.de Sun Dec 11 08:05:02 2005 From: arnd.baecker at web.de (Arnd Baecker) Date: Sun, 11 Dec 2005 14:05:02 +0100 (CET) Subject: [SciPy-dev] New release on Monday In-Reply-To: <17307.62559.831909.772430@monster.iitb.ac.in> References: <439BE8F4.3070302@ieee.org> <17307.62559.831909.772430@monster.iitb.ac.in> Message-ID: On Sun, 11 Dec 2005, Prabhu Ramachandran wrote: > >>>>> "Travis" == Travis Oliphant writes: > > Travis> I'm going to make a new release of scipy core on Monday > Travis> evening (U.S. Mountain Time) unless I hear from anyone. > Travis> I'll probably post new Windows binaries of full scipy at > Travis> that point as well. > > Travis> Now's a good time for any bug-reports or outstanding > Travis> issues. > > Thanks for the great work! scipy.test(10) passes perfectly on a > Debian woody box (i386). no problems as well on the 64 Bit Opteron machine (with gcc 3.4.4, self-built ATLAS ...) Note that performing tests on the 64 Bit Itanium (with gcc) is essentially impossible (or would require a lot of hand work) until the distutils problems concerning ATLAS are resolved (see Jan Braun's mail to scipy-dev) Best, Arnd From pebarrett at gmail.com Sun Dec 11 12:02:35 2005 From: pebarrett at gmail.com (Paul Barrett) Date: Sun, 11 Dec 2005 12:02:35 -0500 Subject: [SciPy-dev] [SciPy-user] Arange has been sped up In-Reply-To: References: <439BE758.10109@ieee.org> Message-ID: <40e64fa20512110902m15017170nd19982a4660aec65@mail.gmail.com> On 12/11/05, Arnd Baecker wrote: > > > On Sun, 11 Dec 2005, Travis Oliphant wrote: > > > Friends, > > > > I've just sped up arange using individual filling loops like numarray > > does -- and added (untested) complex arange functionality. > > > > The benchmarks are showing a marked improvement in the arange function > > for scipy over its previous performance. > > Thanks for the testing. > > So only at small array sizes Numeric is faster (if one can believe > the benchmark ;-). The data suggest that the crossover > is at around 100. > For the larger sizes getting a factor of more than 10 is impressive!! > My compliments, Travis! > The difference between Scipy and Numeric for small arrays may be the amount of stack usage. Numeric stores a lot of information on the stack, which avoids allocating memory. I know from personal experience that memory allocation can have a significant effect on performance for small arrays. Travis may want to confirm whether or not this is correct. -- Paul -- Paul Barrett, PhD Johns Hopkins University Assoc.
Research Scientist Dept of Physics and Astronomy Phone: 410-516-5190 Baltimore, MD 21218 -------------- next part -------------- An HTML attachment was scrubbed... URL: From aisaac at american.edu Sun Dec 11 14:20:07 2005 From: aisaac at american.edu (Alan G Isaac) Date: Sun, 11 Dec 2005 14:20:07 -0500 Subject: [SciPy-dev] linear_least_squares problem Message-ID: In basic_lite, the function linear_least_squares begins with the statements: a = asndarray(a) b = asndarary(b) The first produces an error: File "C:\Python24\lib\site-packages\scipy\basic\basic_lite.py", line 414, in linear_least_squares a = asndarray(a) NameError: global name 'asndarray' is not defined The second is in any case misspelled. Changing both to asarray allows the function to work; is it the right solution? Alan Isaac From aisaac at american.edu Sun Dec 11 14:20:10 2005 From: aisaac at american.edu (Alan G Isaac) Date: Sun, 11 Dec 2005 14:20:10 -0500 Subject: [SciPy-dev] linear_least_squares problem Message-ID: In basic_lite, the function linear_least_squares begins with the statements: a = asndarray(a) b = asndarary(b) These produce an error: File "C:\Python24\lib\site-packages\scipy\basic\basic_lite.py", line 414, in linear_least_squares a = asndarray(a) NameError: global name 'asndarray' is not defined Alan Isaac From schofield at ftw.at Sun Dec 11 16:55:06 2005 From: schofield at ftw.at (Ed Schofield) Date: Sun, 11 Dec 2005 21:55:06 +0000 Subject: [SciPy-dev] New release on Monday In-Reply-To: <439BE8F4.3070302@ieee.org> References: <439BE8F4.3070302@ieee.org> Message-ID: <767A9640-49E7-42B8-BE28-6B4001D93F38@ftw.at> On 11/12/2005, at 8:53 AM, Travis Oliphant wrote: > > I'm going to make a new release of scipy core on Monday evening (U.S. > Mountain Time) > unless I hear from anyone. I'll probably post new Windows binaries of > full scipy > at that point as well. > > Now's a good time for any bug-reports or outstanding issues. > Is there any chance of including type checking for this release for unsafe in-place operations? Several people supported the idea in the thread [In-place operators and casting] a few weeks ago. In summary, the existing behaviour would still be achievable with an explicit cast: >>> my_int_array += my_float_array.astype(int) I was actually bitten by the current behaviour myself last week when I was priming my maximum entropy module for the scipy tree. I spent an hour trying to figure out why the computations were coming out wrong. It turned out that the problem was that when converting my home-made emptyarray() function to scipy_core's empty() I had forgotten to specify doubles and my data were being silently truncated. So I think type-checking wouldn't just help new users; it would also save debugging time for users who know the existing behaviour but, er, forget ;) Well done with your recent optimizations! -- Ed From schofield at ftw.at Sun Dec 11 17:43:10 2005 From: schofield at ftw.at (Ed Schofield) Date: Sun, 11 Dec 2005 22:43:10 +0000 Subject: [SciPy-dev] New release on Monday In-Reply-To: <767A9640-49E7-42B8-BE28-6B4001D93F38@ftw.at> References: <439BE8F4.3070302@ieee.org> <767A9640-49E7-42B8-BE28-6B4001D93F38@ftw.at> Message-ID: On 11/12/2005, at 9:55 PM, Ed Schofield wrote: > > On 11/12/2005, at 8:53 AM, Travis Oliphant wrote: > >> >> I'm going to make a new release of scipy core on Monday evening (U.S. >> Mountain Time) >> unless I hear from anyone. I'll probably post new Windows >> binaries of >> full scipy
>> >> Now's a good time for any bug-reports or outstanding issues. >> > Is there any chance of including type checking for this release for > unsafe in-place operations? Several people supported the idea in > the thread [In-place operators and casting] a few weeks ago. In > summary, the existing behaviour would still be achievable with an > explicit cast: > >>> my_int_array += my_float_array.astype(int) > Actually, I may as well stick my neck out and argue that another (the last remaining?) example of unsafe casting behaviour >>> my_int_array[0] = 1.5 ought to change too eventually. I'd argue for a TypeError here also. The current behaviour would still be achievable with an explicit cast like >>> my_int_array[0] = int(1.5) with a bonus of easier readability. I'd guess that the performance impact of a type check would be small, given the overhead anyway of setting an individual array element from Python. But I'll run some benchmark tests to verify this... -- Ed From strawman at astraw.com Sun Dec 11 17:13:19 2005 From: strawman at astraw.com (Andrew Straw) Date: Sun, 11 Dec 2005 14:13:19 -0800 Subject: [SciPy-dev] issues with scipy and eggs In-Reply-To: <439B924A.8020802@gmail.com> References: <439B3D33.6080507@astraw.com> <439B4149.6000408@gmail.com> <439B4DE2.80801@astraw.com> <439B5B59.6040109@gmail.com> <439B74B4.4070008@astraw.com> <439B7EE5.7030408@gmail.com> <439B8B2B.8020309@astraw.com> <439B924A.8020802@gmail.com> Message-ID: <439CA47F.4000207@astraw.com> Robert Kern wrote: >D'oh! Add the same fragment to scipy_core's scipy/lib/__init__.py and full >scipy's Lib/lib/__init__.py . > > I guess you mean scipy_core's scipy/corelib/__init__.py. OK, yes, I added the snippet to these 2 files and... it works, mostly! scipy.test(10,10) passes with both eggs installed, but apparently only does scipy_core tests (158 tests) and a small fraction of the full scipy tests (it only does 342 instead of all 1380 tests). But, other than that, all seems OK -- the various bits of full scipy do seem installed. Now, thinking about committing these changes to svn, I'm sure we want to allow non-setuptools use of full scipy. But the changes you've made break this because both scipy_core and full scipy would install a scipy/__init__.py in the site-packages directory (from scipy_core: scipy/__init__.py and full scipy: Lib/__init__.py). Since full scipy is installed after scipy_core but scipy_core's version does some magic, this breaks the installation. So, my naive approach would be to copy scipy core's version to full scipy. I've tried this out, and it appears to work for both a non-setuptools install and a setuptools install. If this naive approach is reasonable, the only remaining issue I see is that there would now be two copies of the magic __init__.py that need to stay in sync. This could be done by hand (how often does that file change?), but maybe this is another, in this case technical, argument for merging the svn trees of scipy_core and scipy? I'm enclosing patches that incorporate all these changes in the hopes that someone will apply these so both scipy_core and full scipy build as eggs out-of-the-box. These patches also support the previous non-setuptools usage hopefully unchanged. Cheers! Andrew -------------- next part -------------- A non-text attachment was scrubbed... Name: eggs_scipy_core.patch Type: text/x-patch Size: 1627 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed...
Name: eggs_full_scipy.patch Type: text/x-patch Size: 2624 bytes Desc: not available URL: From oliphant.travis at ieee.org Sun Dec 11 23:37:13 2005 From: oliphant.travis at ieee.org (Travis Oliphant) Date: Sun, 11 Dec 2005 21:37:13 -0700 Subject: [SciPy-dev] New release on Monday In-Reply-To: <767A9640-49E7-42B8-BE28-6B4001D93F38@ftw.at> References: <439BE8F4.3070302@ieee.org> <767A9640-49E7-42B8-BE28-6B4001D93F38@ftw.at> Message-ID: <439CFE79.6050303@ieee.org> Ed Schofield wrote: >On 11/12/2005, at 8:53 AM, Travis Oliphant wrote: > > >Is there any chance of including type checking for this release for >unsafe in-place operations? Several people supported the idea in the >thread [In-place operators and casting] a few weeks ago. In summary, >the existing behaviour would still be achievable with an explicit cast: > >>> my_int_array += my_float_array.astype(int) > > Actually, this would not do the same thing because it would force the entire float array into memory as an integer array and then add it. The current behavior only allocates buffers of bufsize elements for this kind of copying. So, for large-array performance this approach would be worse, which is a big reason I'm not really supportive of a switch. I'm also hesitant to change this because it is the default behavior of numarray, so I'd like to receive more feedback from members of that community who are coming over to scipy_core before doing something different. I think, however, Numeric raises an error in this circumstance. So, I would advise against changing this behavior in the current release, but I don't see this issue as closed. While I would never support changing the data-type of the array when using an in-place operator, I could see the logic in raising an error when the type cannot be cast safely. -Travis From aisaac at american.edu Mon Dec 12 13:16:09 2005 From: aisaac at american.edu (Alan G Isaac) Date: Mon, 12 Dec 2005 13:16:09 -0500 Subject: [SciPy-dev] matrix element access oddity Message-ID: It seems zip cannot get at matrix elements. (Arrays work as expected.) >>> t = [[1],[2],[3]] >>> print zip(*t) [(1, 2, 3)] >>> t2 = scipy.mat(t) >>> print zip(*t2) [(matrix([[1]]), matrix([[2]]), matrix([[3]]))] Is this the desired behavior? Thanks, Alan Isaac From oliphant.travis at ieee.org Mon Dec 12 15:29:10 2005 From: oliphant.travis at ieee.org (Travis Oliphant) Date: Mon, 12 Dec 2005 13:29:10 -0700 Subject: [SciPy-dev] issues with scipy and eggs In-Reply-To: <439B924A.8020802@gmail.com> References: <439B3D33.6080507@astraw.com> <439B4149.6000408@gmail.com> <439B4DE2.80801@astraw.com> <439B5B59.6040109@gmail.com> <439B74B4.4070008@astraw.com> <439B7EE5.7030408@gmail.com> <439B8B2B.8020309@astraw.com> <439B924A.8020802@gmail.com> Message-ID: <439DDD96.8070908@ieee.org> Robert Kern wrote: >Andrew Straw wrote: > > >>OK, with your changes, I have >>scipy_core-0.8.2.1625-py2.4-linux-x86_64.egg and >>scipy-0.4.3.1482-py2.4-linux-x86_64.egg. >> >> >D'oh! Add the same fragment to scipy_core's scipy/lib/__init__.py and full >scipy's Lib/lib/__init__.py . > > Should we just add this fragment to the default installations. Would there be any problems with doing this?
-Travis From robert.kern at gmail.com Mon Dec 12 15:41:14 2005 From: robert.kern at gmail.com (Robert Kern) Date: Mon, 12 Dec 2005 12:41:14 -0800 Subject: [SciPy-dev] issues with scipy and eggs In-Reply-To: <439DDD96.8070908@ieee.org> References: <439B3D33.6080507@astraw.com> <439B4149.6000408@gmail.com> <439B4DE2.80801@astraw.com> <439B5B59.6040109@gmail.com> <439B74B4.4070008@astraw.com> <439B7EE5.7030408@gmail.com> <439B8B2B.8020309@astraw.com> <439B924A.8020802@gmail.com> <439DDD96.8070908@ieee.org> Message-ID: <439DE06A.7010102@gmail.com> Travis Oliphant wrote: > Robert Kern wrote: > >>Andrew Straw wrote: >> >>>OK, with your changes, I have >>>scipy_core-0.8.2.1625-py2.4-linux-x86_64.egg and >>>scipy-0.4.3.1482-py2.4-linux-x86_64.egg. >> >>D'oh! Add the same fragment to scipy_core's scipy/lib/__init__.py and full >>scipy's Lib/lib/__init__.py . > > Should we just add this fragment to the default installations. Would > there be any problems with doing this? Well, there's the problem with the __init__.py's from full scipy overwriting scipy_core's during a regular install. As Andrew suggested, we can simply have synced copies in both packages. To do that *right*, we need some SVN magic (svn:external?). -- Robert Kern robert.kern at gmail.com "In the fields of hell where the grass grows high Are the graves of dreams allowed to die." -- Richard Harter From Fernando.Perez at colorado.edu Mon Dec 12 15:53:27 2005 From: Fernando.Perez at colorado.edu (Fernando Perez) Date: Mon, 12 Dec 2005 13:53:27 -0700 Subject: [SciPy-dev] issues with scipy and eggs In-Reply-To: <439DE06A.7010102@gmail.com> References: <439B3D33.6080507@astraw.com> <439B4149.6000408@gmail.com> <439B4DE2.80801@astraw.com> <439B5B59.6040109@gmail.com> <439B74B4.4070008@astraw.com> <439B7EE5.7030408@gmail.com> <439B8B2B.8020309@astraw.com> <439B924A.8020802@gmail.com> <439DDD96.8070908@ieee.org> <439DE06A.7010102@gmail.com> Message-ID: <439DE347.8000706@colorado.edu> Robert Kern wrote: > Well, there's the problem with the __init__.py's from full scipy overwriting > scipy_core's during a regular install. As Andrew suggested, we can simply have > synced copies in both packages. To do that *right*, we need some SVN magic > (svn:external?). Mmh, I hadn't noticed this because I've been building from source so far. Is this going to be problematic for package managers? From what I've seen, rpm does not like packages which overwrite files in other packages (though there may be directives to tweak that), I don't know what apt-get does in this case. I think that ease of integration into rpm/deb/fink packaging systems should be in our radar, as it does have an impact on how easy it becomes for new users to jump on board. Now that Fedora Extras is growing and is official (and since it carries even ATLAS), and with Ubuntu making such excellent inroads on the desktop, it would be very good to have soon the ability to tell all Fedora/Debian/Ubuntu/Fink users to simply [yum|apt-get] install scipy matplotlib and be done with a basic 'matlab-like' system. This also means that we/Enthought doesn't need to commit any resources to (as has been proposed in the past) keeping yum repos around, since instead the distro's own mechanisms will handle it out of the box. While I know that it doesn't cover 100% of the audience (native OSX and win32, other linux distros, other *nix variants), it does cover a significant enough chunk of it to be, I think, worth catering to. 
Cheers, f From perry at stsci.edu Mon Dec 12 16:02:54 2005 From: perry at stsci.edu (Perry Greenfield) Date: Mon, 12 Dec 2005 16:02:54 -0500 Subject: [SciPy-dev] [Numpy-discussion] Thoughts on making it easier for numarray users to transition to scipy_core In-Reply-To: <439DE22B.9070400@ieee.org> References: <439DE22B.9070400@ieee.org> Message-ID: <367c3c211512e3bac5ff4aabe127721d@stsci.edu> On Dec 12, 2005, at 3:48 PM, Travis Oliphant wrote: > > A few days ago I played with nd_image and was able to make it compile > for scipy_core. In the process, I had some ideas for making the > transition to scipy_core easier for numarray users: > > 1) It would be nice if there was some way to document on the Python > level any needed changes. In the process, we might find things that > need to be added to scipy_core. I would like to have some kind of > program for automatically making most of those needed changes before a > 1.0 release of scipy_core. > > 2) On the C-API side, my experience with nd_image showed me that quite > a few of the numarray C-API calls can be written as macros while some > will need to be additional functions. I think we could easily write a > numcompatmodule.c and associated numcompat.h file so that users > needing the numarray C-API could include the numcompat.h file and then > the import_libnumarray() command would load the numcompatmodule.c with > its compatibility functions. > > In this way, the transition for C-API users could be as easy as > changing the include file to numcompat.h? > -Travis > That certainly would be nice. We are starting to look at migrating some Python code. It may be a little while (a couple months?) before we can start tackling migrating some of the C extension code so we won't be exercising that right away (but maybe someone else can). Perry From ravi at ati.com Mon Dec 12 16:04:30 2005 From: ravi at ati.com (Ravikiran Rajagopal) Date: Mon, 12 Dec 2005 16:04:30 -0500 Subject: [SciPy-dev] issues with scipy and eggs In-Reply-To: <439DE347.8000706@colorado.edu> References: <439B3D33.6080507@astraw.com> <439DE06A.7010102@gmail.com> <439DE347.8000706@colorado.edu> Message-ID: <200512121604.30277.ravi@ati.com> On Monday 12 December 2005 15:53, Fernando Perez wrote: > Robert Kern wrote: > > Well, there's the problem with the __init__.py's from full scipy > > overwriting scipy_core's during a regular install. As Andrew suggested, > > we can simply have synced copies in both packages. To do that *right*, we > > need some SVN magic (svn:external?). > > Mmh, I hadn't noticed this because I've been building from source so far. > Is this going to be problematic for package managers? From what I've > seen, rpm does not like packages which overwrite files in other packages > (though there may be directives to tweak that), I don't know what apt-get > does in this case. Cheapo solution: the proverbial extra level of indirection. Make __init__.py only in scipy_core which checks for existence of scipy_full_init.py at run time and sources it if necessary. As I understand it, we need scipy_core to install scipy_full; so we can remove __init__.py from scipy_full.
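A rough sketch of what I mean (scipy_full_init.py is just the made-up hook name from above):

# scipy/__init__.py, installed by scipy_core only
import os
_full_init = os.path.join(os.path.dirname(__file__), 'scipy_full_init.py')
if os.path.exists(_full_init):
    execfile(_full_init)   # run the full-scipy initialization if it is installed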
Ravi From robert.kern at gmail.com Mon Dec 12 17:03:20 2005 From: robert.kern at gmail.com (Robert Kern) Date: Mon, 12 Dec 2005 14:03:20 -0800 Subject: [SciPy-dev] issues with scipy and eggs In-Reply-To: <439DE347.8000706@colorado.edu> References: <439B3D33.6080507@astraw.com> <439B4149.6000408@gmail.com> <439B4DE2.80801@astraw.com> <439B5B59.6040109@gmail.com> <439B74B4.4070008@astraw.com> <439B7EE5.7030408@gmail.com> <439B8B2B.8020309@astraw.com> <439B924A.8020802@gmail.com> <439DDD96.8070908@ieee.org> <439DE06A.7010102@gmail.com> <439DE347.8000706@colorado.edu> Message-ID: <439DF3A8.1070002@gmail.com> Fernando Perez wrote: > Robert Kern wrote: > >>Well, there's the problem with the __init__.py's from full scipy overwriting >>scipy_core's during a regular install. As Andrew suggested, we can simply have >>synced copies in both packages. To do that *right*, we need some SVN magic >>(svn:external?). > > Mmh, I hadn't noticed this because I've been building from source so far. Is > this going to be problematic for package managers? From what I've seen, rpm > does not like packages which overwrite files in other packages (though there > may be directives to tweak that), I don't know what apt-get does in this case. You're right. setuptools is actually supposed to generate an __init__.py with that fragment when one needs to be there but isn't. The last time I tried, I don't think I got it to work, but I wasn't trying hard. I only cared about getting the packages built and working for me, so manually adding them was the easiest way. If I can get it to work, the solution here would be to delete scipy/Lib/lib/__init__.py and add the fragments to the relevant __init__.py's in scipy_core. -- Robert Kern robert.kern at gmail.com "In the fields of hell where the grass grows high Are the graves of dreams allowed to die." -- Richard Harter From strawman at astraw.com Mon Dec 12 17:04:28 2005 From: strawman at astraw.com (Andrew Straw) Date: Mon, 12 Dec 2005 14:04:28 -0800 Subject: [SciPy-dev] issues with scipy and eggs In-Reply-To: <200512121604.30277.ravi@ati.com> References: <439B3D33.6080507@astraw.com> <439DE06A.7010102@gmail.com> <439DE347.8000706@colorado.edu> <200512121604.30277.ravi@ati.com> Message-ID: <439DF3EC.7080808@astraw.com> Ravikiran Rajagopal wrote: >On Monday 12 December 2005 15:53, Fernando Perez wrote: > > >>Robert Kern wrote: >> >> >>>Well, there's the problem with the __init__.py's from full scipy >>>overwriting scipy_core's during a regular install. As Andrew suggested, >>>we can simply have synced copies in both packages. To do that *right*, we >>>need some SVN magic (svn:external?). >>> >>> >>Mmh, I hadn't noticed this because I've been building from source so far. >> Is this going to be problematic for package managers? From what I've >>seen, rpm does not like packages which overwrite files in other packages >>(though there may be directives to tweak that), I don't know what apt-get >>does in this case. >> >> > > > I agree that the issue of package managers not wanting to overwrite files seems a serious and understandable one, and I agree this is best avoided. Hopefully we can find an alternative. >Cheapo solution: the proverbial extra level of indirection. Make __init__.py >only in scipy_core which checks for existence of scipy_full_init.py at run >time and sources it if necessary. As I understand it, we need scipy_core to >install scipy_full; so we can remove __init__.py from scipy_full. 
> > Unfortunately, I don't think this is a solution -- I think it's essentially what we have now. Basically, we have two conflicting cases: 1) A non-setuptools install where site-packages/scipy/__init__.py is installed by scipy_core but is smart enough to figure out full scipy is installed 2) A setuptools install where site-packages/scipy_core.egg and scipy.egg exist and both need a scipy/__init__.py file to tell pkg_resources to keep looking for the other scipy package. I don't know enough about the setuptools innards to know if approach 2 is the only way to go. (Heck, I was just blindly following Robert's lead to get as far as I did in this...) Robert (or anyone): what if we used "declare_namespace" in full scipy's setup() call -- would that perform the necessary setuptools magic without needing an __init__.py file that would overwrite scipy_core's? Perhaps a question to the distutils list (the de facto setuptools list) will help resolve the issue. Perhaps the issue is even worth a feature request in setuptools if one is needed to support our use cases. From strawman at astraw.com Mon Dec 12 17:07:04 2005 From: strawman at astraw.com (Andrew Straw) Date: Mon, 12 Dec 2005 14:07:04 -0800 Subject: [SciPy-dev] issues with scipy and eggs In-Reply-To: <439DF3A8.1070002@gmail.com> References: <439B3D33.6080507@astraw.com> <439B4149.6000408@gmail.com> <439B4DE2.80801@astraw.com> <439B5B59.6040109@gmail.com> <439B74B4.4070008@astraw.com> <439B7EE5.7030408@gmail.com> <439B8B2B.8020309@astraw.com> <439B924A.8020802@gmail.com> <439DDD96.8070908@ieee.org> <439DE06A.7010102@gmail.com> <439DE347.8000706@colorado.edu> <439DF3A8.1070002@gmail.com> Message-ID: <439DF488.2020400@astraw.com> Robert Kern wrote: >Fernando Perez wrote: > > >>Robert Kern wrote: >> >> >> >>>Well, there's the problem with the __init__.py's from full scipy overwriting >>>scipy_core's during a regular install. As Andrew suggested, we can simply have >>>synced copies in both packages. To do that *right*, we need some SVN magic >>>(svn:external?). >>> >>> >>Mmh, I hadn't noticed this because I've been building from source so far. Is >>this going to be problematic for package managers? From what I've seen, rpm >>does not like packages which overwrite files in other packages (though there >>may be directives to tweak that), I don't know what apt-get does in this case. >> >> > >You're right. setuptools is actually supposed to generate an __init__.py with >that fragment when one needs to be there but isn't. The last time I tried, I >don't think I got it to work, but I wasn't trying hard. I only cared about >getting the packages built and working for me, so manually adding them was the >easiest way. If I can get it to work, the solution here would be to delete >scipy/Lib/lib/__init__.py and add the fragments to the relevant __init__.py's in >scipy_core. > > > Ahh, I just read this email after I sent my last one (where I suggested to use "declare_namespace" in full scipy setup.py setup()). I believe that proposal is essentially the same as what you're proposing -- great. Let's see if we can get it to work... From oliphant.travis at ieee.org Mon Dec 12 15:48:43 2005 From: oliphant.travis at ieee.org (Travis Oliphant) Date: Mon, 12 Dec 2005 13:48:43 -0700 Subject: [SciPy-dev] Thoughts on making it easier for numarray users to transition to scipy_core Message-ID: <439DE22B.9070400@ieee.org> A few days ago I played with nd_image and was able to make it compile for scipy_core. 
In the process, I had some ideas for making the transition to scipy_core easier for numarray users: 1) It would be nice if there was some way to document on the Python level any needed changes. In the process, we might find things that need to be added to scipy_core. I would like to have some kind of program for automatically making most of those needed changes before a 1.0 release of scipy_core. 2) On the C-API side, my experience with nd_image showed me that quite a few of the numarray C-API calls can be written as macros while some will need to be additional functions. I think we could easily write a numcompatmodule.c and associated numcompat.h file so that users needing the numarray C-API could include the numcompat.h file and then the import_libnumarray() command would load the numcompatmodule.c with its compatibility functions. In this way, the transition for C-API users could be as easy as changing the include file to numcompat.h? -Travis From oliphant.travis at ieee.org Mon Dec 12 17:49:11 2005 From: oliphant.travis at ieee.org (Travis Oliphant) Date: Mon, 12 Dec 2005 15:49:11 -0700 Subject: [SciPy-dev] [Numpy-discussion] Thoughts on making it easier for numarray users to transition to scipy_core In-Reply-To: <367c3c211512e3bac5ff4aabe127721d@stsci.edu> References: <439DE22B.9070400@ieee.org> <367c3c211512e3bac5ff4aabe127721d@stsci.edu> Message-ID: <439DFE67.2020600@ieee.org> >That certainly would be nice. We are starting to look at migrating some >Python code. It may be a little while >(a couple months?) before we can start tackling migrating some of the C >extension code so we won't be exercising that right away (but maybe >someone else can). > > I think I will have something basic in place by then. We can add needed API calls as extensions get ported. On a related note, where should the Packages (like nd_image) from numarray go? Should they all go into scipy core, full scipy, or be separately downloadable as scipy_core addons? -Travis From robert.kern at gmail.com Mon Dec 12 18:54:52 2005 From: robert.kern at gmail.com (Robert Kern) Date: Mon, 12 Dec 2005 15:54:52 -0800 Subject: [SciPy-dev] [Numpy-discussion] Thoughts on making it easier for numarray users to transition to scipy_core In-Reply-To: <439DFE67.2020600@ieee.org> References: <439DE22B.9070400@ieee.org> <367c3c211512e3bac5ff4aabe127721d@stsci.edu> <439DFE67.2020600@ieee.org> Message-ID: <439E0DCC.2080907@gmail.com> Travis Oliphant wrote: > On a related note, where should the Packages (like nd_image) from > numarray go? Should they all go into scipy core, full scipy, or be > separately downloadable as scipy_core addons? Let's work on making each subpackage in full scipy a "separately downloadable scipy_core addon." -- Robert Kern robert.kern at gmail.com "In the fields of hell where the grass grows high Are the graves of dreams allowed to die." -- Richard Harter From oliphant.travis at ieee.org Mon Dec 12 18:59:38 2005 From: oliphant.travis at ieee.org (Travis Oliphant) Date: Mon, 12 Dec 2005 16:59:38 -0700 Subject: [SciPy-dev] [Numpy-discussion] Thoughts on making it easier for numarray users to transition to scipy_core In-Reply-To: <439E0DCC.2080907@gmail.com> References: <439DE22B.9070400@ieee.org> <367c3c211512e3bac5ff4aabe127721d@stsci.edu> <439DFE67.2020600@ieee.org> <439E0DCC.2080907@gmail.com> Message-ID: <439E0EEA.8030707@ieee.org> Robert Kern wrote: >Travis Oliphant wrote: > >>On a related note, where should the Packages (like nd_image) from >>numarray go?
Should they all go into scipy core, full scipy, or be >>separately downloadable as scipy_core addons? >> >> > > > This is my preferred approach and I think we are almost there with the current scipy (minus a few inter-dependent addons). -Travis From arnd.baecker at web.de Tue Dec 13 01:56:43 2005 From: arnd.baecker at web.de (Arnd Baecker) Date: Tue, 13 Dec 2005 07:56:43 +0100 (CET) Subject: [SciPy-dev] [Numpy-discussion] Thoughts on making it easier for numarray users to transition to scipy_core In-Reply-To: <439E0EEA.8030707@ieee.org> References: <439DE22B.9070400@ieee.org> <367c3c211512e3bac5ff4aabe127721d@stsci.edu> <439E0EEA.8030707@ieee.org> Message-ID: On Mon, 12 Dec 2005, Travis Oliphant wrote: > Robert Kern wrote: > > >Travis Oliphant wrote: > > > >>On a related note, where should the Packages (like nd_image) from > >>numarray go? Should they all go into scipy core, full scipy, or be > >>separately downloadable as scipy_core addons? > >> > This is my preferred approach and I think we are almost there with the > current scipy (minus a few inter-dependent addons). Does this also mean that a user will be able to install - scipy core to one place - full scipy to some other place and just add both placees to PYTHONPATH? This would be very helpful for parallel installations making use of different FFT/LAPACK etc. Best, Arnd From robert.kern at gmail.com Tue Dec 13 02:07:44 2005 From: robert.kern at gmail.com (Robert Kern) Date: Mon, 12 Dec 2005 23:07:44 -0800 Subject: [SciPy-dev] [Numpy-discussion] Thoughts on making it easier for numarray users to transition to scipy_core In-Reply-To: References: <439DE22B.9070400@ieee.org> <367c3c211512e3bac5ff4aabe127721d@stsci.edu> <439E0EEA.8030707@ieee.org> Message-ID: <439E7340.6060008@gmail.com> Arnd Baecker wrote: > On Mon, 12 Dec 2005, Travis Oliphant wrote: > >>Robert Kern wrote: >> >>>Travis Oliphant wrote: >>> >>>>On a related note, where should the Packages (like nd_image) from >>>>numarray go? Should they all go into scipy core, full scipy, or be >>>>separately downloadable as scipy_core addons? [This bit got snipped. Me:] Let's work on making each subpackage in full scipy a "separately downloadable scipy_core addon." >>This is my preferred approach and I think we are almost there with the >>current scipy (minus a few inter-dependent addons). > > Does this also mean that a user will be able to install > - scipy core to one place > - full scipy to some other place > and just add both placees to PYTHONPATH? > > This would be very helpful for parallel installations making Only if we require a recent setuptools to be installed. Neither scipy_core nor scipy would have to be built as eggs, but pkg_resources would have to be available. I think the .egg-info metadata would also have to be present. Of course, if we do that, then we can also use that metadata to collect docstrings and unit test locations in a robust way. -- Robert Kern robert.kern at gmail.com "In the fields of hell where the grass grows high Are the graves of dreams allowed to die." 
-- Richard Harter From pearu at scipy.org Tue Dec 13 01:35:13 2005 From: pearu at scipy.org (Pearu Peterson) Date: Tue, 13 Dec 2005 00:35:13 -0600 (CST) Subject: [SciPy-dev] issues with scipy and eggs In-Reply-To: <439CA47F.4000207@astraw.com> References: <439B3D33.6080507@astraw.com> <439B4149.6000408@gmail.com> <439B74B4.4070008@astraw.com><439B924A.8020802@gmail.com> <439CA47F.4000207@astraw.com> Message-ID: Hi, I haven't read the whole thread of scipy/eggs issues yet and I don't use eggs myself, so I may miss some important points in the following. But as I understand you guys want to copy core/scipy/__init__.py to scipy/Lib/ and the installation of scipy will overwrite core's __init__.py. If this is the case, then let's think over and find another solution as overwriting __init__.py will certainly cause problems with other packaging tools. Pearu From arnd.baecker at web.de Tue Dec 13 02:41:30 2005 From: arnd.baecker at web.de (Arnd Baecker) Date: Tue, 13 Dec 2005 08:41:30 +0100 (CET) Subject: [SciPy-dev] [Numpy-discussion] Thoughts on making it easier for numarray users to transition to scipy_core In-Reply-To: <439E7340.6060008@gmail.com> References: <439DE22B.9070400@ieee.org> <367c3c211512e3bac5ff4aabe127721d@stsci.edu> <439E7340.6060008@gmail.com> Message-ID: On Mon, 12 Dec 2005, Robert Kern wrote: > Arnd Baecker wrote: > > On Mon, 12 Dec 2005, Travis Oliphant wrote: > > > >>Robert Kern wrote: > >> > >>>Travis Oliphant wrote: > >>> > >>>>On a related note, where should the Packages (like nd_image) from > >>>>numarray go? Should they all go into scipy core, full scipy, or be > >>>>separately downloadable as scipy_core addons? > > [This bit got snipped. Me:] > Let's work on making each subpackage in full scipy a "separately downloadable > scipy_core addon." > > >>This is my preferred approach and I think we are almost there with the > >>current scipy (minus a few inter-dependent addons). > > > > Does this also mean that a user will be able to install > > - scipy core to one place > > - full scipy to some other place > > and just add both placees to PYTHONPATH? > > > > This would be very helpful for parallel installations making > > Only if we require a recent setuptools to be installed. I just had a look at http://peak.telecommunity.com/DevCenter/setuptools They even have what they call `bootstrap module`, ez_setup.py, which by inclusion "will automatically download and install setuptools if the user is building your package from source and doesn't have a suitable version already installed." (Personally, I am not a big fan of software "phoning" home/somewhere, but during installation, one could just check if a sufficiently new setuptools exists, and if not, just print "please run `python ez_setup.py` before installation" print "to get a sufficiently new setuptools" and stop. > Neither scipy_core nor > scipy would have to be built as eggs, but pkg_resources would have to be > available. I think the .egg-info metadata would also have to be present. That sounds really great to me! > Of course, if we do that, then we can also use that metadata to collect > docstrings and unit test locations in a robust way. Sounds like another big benefit - E.g. one could have routines from different sandboxes floating around, or own code which will use the scipy machinery, but is not (yet) in scipy. This could make contributions easier and lead to many scipy-toolboxes! 
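As a rough illustration of the metadata-driven discovery this would
allow, here is a sketch. It assumes only that a setuptools providing
pkg_resources is installed; toolbox naming is hypothetical.

# Sketch: enumerate the distributions pkg_resources can see via their
# .egg-info metadata; the same mechanism could locate docstrings or
# test suites of installed scipy toolboxes.
import pkg_resources

for dist in pkg_resources.working_set:
    print dist.project_name, dist.version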
Best, Arnd

P.S.: With the following lines in `.pydistutils.cfg`

  ##
  [install]
  install_lib = ~/.PythonLibrary/Python$py_version_short/site-packages
  prefix=~/.PythonLibrary/
  ##

a `python ez_setup.py` worked fine as non-root on a debian sarge linux
box with python 2.3.5.

From robert.kern at gmail.com Tue Dec 13 02:49:07 2005
From: robert.kern at gmail.com (Robert Kern)
Date: Mon, 12 Dec 2005 23:49:07 -0800
Subject: [SciPy-dev] issues with scipy and eggs
In-Reply-To:
References: <439B3D33.6080507@astraw.com> <439B4149.6000408@gmail.com> <439B74B4.4070008@astraw.com> <439B924A.8020802@gmail.com> <439CA47F.4000207@astraw.com>
Message-ID: <439E7CF3.7020706@gmail.com>

Pearu Peterson wrote:
> Hi,
>
> I haven't read the whole thread of scipy/eggs issues yet and I don't use
> eggs myself, so I may miss some important points in the following.
>
> But as I understand it, you guys want to copy core/scipy/__init__.py to
> scipy/Lib/, and the installation of scipy will overwrite core's
> __init__.py. If this is the case, then let's think it over and find
> another solution, as overwriting __init__.py will certainly cause
> problems with other packaging tools.

Don't worry. We don't want that. See my last message in response to
Fernando.

--
Robert Kern
robert.kern at gmail.com

"In the fields of hell where the grass grows high
Are the graves of dreams allowed to die."
  -- Richard Harter

From robert.kern at gmail.com Tue Dec 13 02:53:23 2005
From: robert.kern at gmail.com (Robert Kern)
Date: Mon, 12 Dec 2005 23:53:23 -0800
Subject: [SciPy-dev] [Numpy-discussion] Thoughts on making it easier for numarray users to transition to scipy_core
In-Reply-To:
References: <439DE22B.9070400@ieee.org> <367c3c211512e3bac5ff4aabe127721d@stsci.edu> <439E7340.6060008@gmail.com>
Message-ID: <439E7DF3.5000605@gmail.com>

Arnd Baecker wrote:

> I just had a look at http://peak.telecommunity.com/DevCenter/setuptools
> They even have what they call a `bootstrap module`, ez_setup.py, which
> by inclusion "will automatically download and install setuptools if the
> user is building your package from source and doesn't have a suitable
> version already installed."
>
> (Personally, I am not a big fan of software "phoning" home/somewhere,
> but during installation one could just check whether a sufficiently new
> setuptools exists, and if not, just
>   print "please run `python ez_setup.py` before installation"
>   print "to get a sufficiently new setuptools"
> and stop.)

Believe me, I don't like that advice either, and I would veto our use of
the ez_setup.py script altogether. It has proven to be distinctly not
robust.

My recommendation in fact would be to point people to the setuptools
tarball on PyPI and tell them to "python setup.py install" it after
reading the directions and setting up their ~/.pydistutils.cfg
appropriately.

--
Robert Kern
robert.kern at gmail.com

"In the fields of hell where the grass grows high
Are the graves of dreams allowed to die."
  -- Richard Harter

From pearu at scipy.org Tue Dec 13 01:58:28 2005
From: pearu at scipy.org (Pearu Peterson)
Date: Tue, 13 Dec 2005 00:58:28 -0600 (CST)
Subject: [SciPy-dev] issues with scipy and eggs
In-Reply-To: <439E7CF3.7020706@gmail.com>
References: <439B3D33.6080507@astraw.com> <439B4149.6000408@gmail.com> <439CA47F.4000207@astraw.com> <439E7CF3.7020706@gmail.com>
Message-ID:

On Mon, 12 Dec 2005, Robert Kern wrote:

> Pearu Peterson wrote:
>> Hi,
>>
>> I haven't read the whole thread of scipy/eggs issues yet and I don't use
>> eggs myself, so I may miss some important points in the following.
>>
>> But as I understand it, you guys want to copy core/scipy/__init__.py to
>> scipy/Lib/, and the installation of scipy will overwrite core's
>> __init__.py. If this is the case, then let's think it over and find
>> another solution, as overwriting __init__.py will certainly cause
>> problems with other packaging tools.
>
> Don't worry. We don't want that. See my last message in response to
> Fernando.

Ok, thanks, Robert.

In the referred message you say something about deleting
Lib/lib/__init__.py. Is this a typo and you meant Lib/__init__.py?
If so, then never mind. Otherwise I don't understand the solution.

Pearu

From robert.kern at gmail.com Tue Dec 13 03:12:30 2005
From: robert.kern at gmail.com (Robert Kern)
Date: Tue, 13 Dec 2005 00:12:30 -0800
Subject: [SciPy-dev] issues with scipy and eggs
In-Reply-To:
References: <439B3D33.6080507@astraw.com> <439B4149.6000408@gmail.com> <439CA47F.4000207@astraw.com> <439E7CF3.7020706@gmail.com>
Message-ID: <439E826E.7090103@gmail.com>

Pearu Peterson wrote:

> In the referred message you say something about deleting
> Lib/lib/__init__.py. Is this a typo and you meant Lib/__init__.py?
> If so, then never mind. Otherwise I don't understand the solution.

No, it's not a typo. Not only is scipy a namespace package, but
scipy.lib is one as well. Currently, both scipy_core and full scipy
provide __init__.py's (scipy_core/scipy/corelib/__init__.py and
scipy/Lib/lib/__init__.py).

--
Robert Kern
robert.kern at gmail.com

"In the fields of hell where the grass grows high
Are the graves of dreams allowed to die."
  -- Richard Harter

From strawman at astraw.com Tue Dec 13 03:14:15 2005
From: strawman at astraw.com (Andrew Straw)
Date: Tue, 13 Dec 2005 00:14:15 -0800
Subject: [SciPy-dev] [Numpy-discussion] Thoughts on making it easier for numarray users to transition to scipy_core
In-Reply-To:
References: <439DE22B.9070400@ieee.org> <367c3c211512e3bac5ff4aabe127721d@stsci.edu> <439E7340.6060008@gmail.com>
Message-ID: <439E82D7.4070204@astraw.com>

Arnd Baecker wrote:

>I just had a look at http://peak.telecommunity.com/DevCenter/setuptools
>They even have what they call a `bootstrap module`, ez_setup.py, which
>by inclusion "will automatically download and install setuptools if the
>user is building your package from source and doesn't have a suitable
>version already installed."
>
>(Personally, I am not a big fan of software "phoning" home/somewhere,
>but during installation one could just check whether a sufficiently new
>setuptools exists, and if not, just
>  print "please run `python ez_setup.py` before installation"
>  print "to get a sufficiently new setuptools"
>and stop.)
>
It looks like Robert has beaten me to the punch on this email, too, but
just to throw a little water of my own on the fire: setuptools and
python eggs are controversial in some circles (not just the ez_setup.py
issue that Robert pointed out). Although I agree with you that they seem
to offer many really nice things, there have been recent raging
discussions on distutils-sig and debian-python.

One serious issue with eggs is that they are a package system outside
the control/knowledge of package managers such as debian's apt-get.
Since debian figured out package management a long time ago, and since
setuptools evolved independently of debian, largely for use outside of
debian, it's understandable that setuptools offers partially overlapping
and also portions of distinctly non-debian behavior. And it's also
understandable, but possibly avoidable and IMO definitely regrettable,
that some debian developers are actively and even vehemently resisting
setuptools.

I think eggs are great for, among other things, maintaining a personal
(non-root) cache of python packages in your home directory, even on a
debian system. Also, since they do dependency checking of Python
packages based on what's in the packages, it means your Python
development can be with bleeding-edge stuff but still under the control
of a package manager. And then there are OSs without package managers
whatsoever, where I suspect people may be even more enthusiastic... (At
least I think they should be!)

From pearu at scipy.org Tue Dec 13 02:32:45 2005
From: pearu at scipy.org (Pearu Peterson)
Date: Tue, 13 Dec 2005 01:32:45 -0600 (CST)
Subject: [SciPy-dev] issues with scipy and eggs
In-Reply-To: <439E826E.7090103@gmail.com>
References: <439B3D33.6080507@astraw.com> <439B4149.6000408@gmail.com> <439CA47F.4000207@astraw.com> <439E7CF3.7020706@gmail.com> <439E826E.7090103@gmail.com>
Message-ID:

On Tue, 13 Dec 2005, Robert Kern wrote:

> Pearu Peterson wrote:
>
>> In the referred message you say something about deleting
>> Lib/lib/__init__.py. Is this a typo and you meant Lib/__init__.py?
>> If so, then never mind. Otherwise I don't understand the solution.
>
> No, it's not a typo. Not only is scipy a namespace package, but
> scipy.lib is one as well. Currently, both scipy_core and full scipy
> provide __init__.py's (scipy_core/scipy/corelib/__init__.py and
> scipy/Lib/lib/__init__.py).

Ok, I understand now, corelib is installed as scipy.lib as well.
core/scipy/corelib/__init__.py can use the same hooks as
core/scipy/__init__.py to import subpackages.

Thanks,
Pearu

From robert.kern at gmail.com Tue Dec 13 04:58:41 2005
From: robert.kern at gmail.com (Robert Kern)
Date: Tue, 13 Dec 2005 01:58:41 -0800
Subject: [SciPy-dev] [Numpy-discussion] Thoughts on making it easier for numarray users to transition to scipy_core
In-Reply-To: <439E82D7.4070204@astraw.com>
References: <439DE22B.9070400@ieee.org> <367c3c211512e3bac5ff4aabe127721d@stsci.edu> <439E7340.6060008@gmail.com> <439E82D7.4070204@astraw.com>
Message-ID: <439E9B51.3070705@gmail.com>

Andrew Straw wrote:

> One serious issue with eggs is that they are a package system outside
> the control/knowledge of package managers such as debian's apt-get.

This isn't *quite* true, but this perception is certainly the source of
all the brouhaha. Eggs themselves are not a packaging system. In fact,
they were originally not designed to distribute Python packages per se
at all. Rather, they were designed for the distribution and discovery of
plugins and data. Later, they were adapted for distributing Python
packages, and easy_install was written to provide basic package
management for systems that don't have any. It is *easy_install* that is
the package system that tries to cover part of the territory of
apt-get/yum/etc.

But we don't need easy_install to use eggs. Nor do we even have to use
eggs in the form of zip files. Most of the fancy egg features (namespace
packages, discovery of metadata without importing, but not multiple
versions) will work fine as long as the egg metadata is provided
somewhere on sys.path.

At this point, the argument on debian-python and the Distutils-SIG has
settled down with a few predictable holdouts.
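A small sketch of that point (illustrative only; it assumes nothing
beyond the scipy .egg-info metadata being visible on sys.path):

# Sketch: resolve an installed distribution purely from its metadata --
# no zipped egg and no easy_install involved.
import pkg_resources

try:
    dist = pkg_resources.get_distribution('scipy')
    print '%s %s found' % (dist.project_name, dist.version)
except pkg_resources.DistributionNotFound:
    print 'no scipy metadata on sys.path'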
It looks like Debian packages for setuptools-using Python packages will
install the Python package parts regularly and provide the appropriate
metadata.

To provide a concrete example, suppose a Debian user has scipy_core and
the rest of stable scipy provided by proper Debian packages:

  /usr/lib/python2.4/site-packages/
      scipy/
      scipy-core.egg-info/
      scipy.egg-info/

He might also have some sandbox packages installed in /usr/local, too,
because they certainly won't be packaged by Debian:

  /usr/local/lib/python2.4/site-packages/
      scipy/
      scipy-sandbox-delaunay.egg-info/
      scipy-sandbox-maxent.egg-info/

Presuming that I can get the __init__.py fragments distributed
appropriately, everything will simply appear as one scipy package.

--
Robert Kern
robert.kern at gmail.com

"In the fields of hell where the grass grows high
Are the graves of dreams allowed to die."
  -- Richard Harter

From prabhu_r at users.sf.net Tue Dec 13 11:32:06 2005
From: prabhu_r at users.sf.net (Prabhu Ramachandran)
Date: Tue, 13 Dec 2005 22:02:06 +0530
Subject: [SciPy-dev] Segfault with arange
Message-ID: <17310.63366.843157.844993@monster.iitb.ac.in>

Hi,

I accidentally typed this in an interpreter session and got a segfault:

  import scipy
  s = scipy.arange((10,10,10))

  Program received signal SIGSEGV, Segmentation fault.
  [Switching to Thread 1075417856 (LWP 11145)]
  0x0805ba60 in PyObject_IsSubclass ()
  (gdb) bt
  #0 0x0805ba60 in PyObject_IsSubclass ()
  #1 0x0805a547 in PyNumber_TrueDivide ()
  #2 0x4044a47a in _calc_length (start=0x813edbc, stop=0x0,
     step=0x813edb0, next=0xbffff3f8, cmplx=0)
     at scipy/base/src/multiarraymodule.c:4244
  #3 0x4042398d in PyArray_ArangeObj (start=0x813edbc, stop=0x41ac3a04,
     step=0x813edb0, dtype=0x4045a700)
     at scipy/base/src/multiarraymodule.c:4315
  #4 0x4044a7c0 in array_arange (ignored=0x0, args=0x0, kws=0x0)
     at scipy/base/src/multiarraymodule.c:4375
  #5 0x080fdede in PyCFunction_Call ()

HTH.
cheers, prabhu

From oliphant.travis at ieee.org Tue Dec 13 12:49:34 2005
From: oliphant.travis at ieee.org (Travis Oliphant)
Date: Tue, 13 Dec 2005 10:49:34 -0700
Subject: [SciPy-dev] Segfault with arange
In-Reply-To: <17310.63366.843157.844993@monster.iitb.ac.in>
References: <17310.63366.843157.844993@monster.iitb.ac.in>
Message-ID: <439F09AE.1060409@ieee.org>

Prabhu Ramachandran wrote:

>Hi,
>
>I accidentally typed this in an interpreter session and got a segfault:
>
>import scipy
>s = scipy.arange((10,10,10))
>
>Program received signal SIGSEGV, Segmentation fault.
>[Switching to Thread 1075417856 (LWP 11145)]
>0x0805ba60 in PyObject_IsSubclass ()
>
Thanks for finding this. It's fixed now in SVN. The check for NULL was
there (just on the non-dereferenced pointer...doh!)

-Travis

From arnd.baecker at web.de Wed Dec 14 02:24:56 2005
From: arnd.baecker at web.de (Arnd Baecker)
Date: Wed, 14 Dec 2005 08:24:56 +0100 (CET)
Subject: [SciPy-dev] scipy distutils questions
Message-ID:

Hi,

we still have not been able to install scipy properly on an Itanium2
(with gcc) because the self-compiled ATLAS requires an additional
library at link time. See step 3 in Jan's message
http://www.scipy.org/mailinglists/mailman?fn=scipy-dev/2005-December/004405.html
for more details.

Because of this I tried to debug this a bit, but already running
`system_info.py` does not work:

  core> python scipy/distutils/system_info.py
  Traceback (most recent call last):
    File "scipy/distutils/system_info.py", line 112, in ?
      from exec_command import find_executable, exec_command, get_pythonexe
    File "/home/abaecker/BUILDS2/Build_104/core/scipy/distutils/exec_command.py", line 58, in ?
      from log import _global_log as log
    File "/home/abaecker/BUILDS2/Build_104/core/scipy/distutils/log.py", line 7, in ?
      from scipy.distutils.misc_util import red_text, yellow_text, cyan_text
  ImportError: No module named scipy.distutils.misc_util

If I remember correctly, something like this used to work for old scipy
and was very helpful for finding out which libraries are detected. Of
course one can start the install and then do CTRL-C.

Possible solution: as `system_info.py` is buried deep, I would propose
to add a `library_info.py` directly under core which does the same as
`setup.py`, apart from the `#setup( **config.todict() )` line. Another
option is to add a script complete_system_info.py which contains the two
lines

  import scipy.distutils.system_info
  scipy.distutils.system_info.show_all()

(Maybe this is too much on the level of scipy core, because it might
confuse new users about the dependencies, but for debugging this is
really what I need ... ;-)

Concerning the other distutils problems: it would be very nice if we
could get some guidance on how to solve them.

Best, Arnd

From pearu at scipy.org Wed Dec 14 02:29:01 2005
From: pearu at scipy.org (Pearu Peterson)
Date: Wed, 14 Dec 2005 01:29:01 -0600 (CST)
Subject: [SciPy-dev] scipy distutils questions
In-Reply-To:
References:
Message-ID:

On Wed, 14 Dec 2005, Arnd Baecker wrote:

> Because of this I tried to debug this a bit, but already running
> `system_info.py` does not work:
>
>  core> python scipy/distutils/system_info.py
>  Traceback (most recent call last):
>    File "scipy/distutils/system_info.py", line 112, in ?
>      from exec_command import find_executable, exec_command, get_pythonexe
>    File "/home/abaecker/BUILDS2/Build_104/core/scipy/distutils/exec_command.py", line 58, in ?
>      from log import _global_log as log
>    File "/home/abaecker/BUILDS2/Build_104/core/scipy/distutils/log.py", line 7, in ?
>      from scipy.distutils.misc_util import red_text, yellow_text, cyan_text
>  ImportError: No module named scipy.distutils.misc_util

That's strange. scipy/distutils/log.py is there but
scipy/distutils/misc_util.py isn't, and both are python modules... you
can try

  PYTHONPATH=. python scipy/distutils/system_info.py

Anyway, I have fixed system_info.py to detect the atlas version when
running from the scipy core source directory as
python scipy/distutils/system_info.py. Maybe that fix also fixes the
above error.

I'll look into the other scipy.distutils problems as well..

Pearu

From arnd.baecker at web.de Wed Dec 14 04:23:46 2005
From: arnd.baecker at web.de (Arnd Baecker)
Date: Wed, 14 Dec 2005 10:23:46 +0100 (CET)
Subject: [SciPy-dev] scipy distutils questions
In-Reply-To:
References:
Message-ID:

Hi Pearu,

thanks a lot for looking into this!!

On Wed, 14 Dec 2005, Pearu Peterson wrote:

> On Wed, 14 Dec 2005, Arnd Baecker wrote:
>
> > Because of this I tried to debug this a bit, but already running
> > `system_info.py` does not work:
> >
> >  core> python scipy/distutils/system_info.py
> >  Traceback (most recent call last):
> >    ...
> >  ImportError: No module named scipy.distutils.misc_util
>
> That's strange. scipy/distutils/log.py is there but
> scipy/distutils/misc_util.py isn't, and both are python modules... you
> can try
>  PYTHONPATH=. python scipy/distutils/system_info.py
>
> Anyway, I have fixed system_info.py to detect the atlas version when
> running from the scipy core source directory as
> python scipy/distutils/system_info.py. Maybe that fix also fixes the
> above error.
>
> I'll look into the other scipy.distutils problems as well..

I got some further info on this:

The lapack stuff is provided in
  libmkl_lapack64.so and libmkl.so
which are in /opt/intel/mkl72/lib/64
To link correctly, they need
  libifcore.so libguide.so
which are in /opt/intel/fc_90/lib/
and
  libpthread.so
which is in /usr/lib/

We tried the following, but without success:

  [atlas]
  library_dirs = /opt/intel/mkl72/lib/64:/opt/intel/fc_90/lib/
  include_dirs = /opt/intel/mkl72/include/
  atlas_libs = mkl_lapack64,mkl,ifcore

So I think the problem boils down to the question of how one has to
specify additional libraries and their paths which are needed to link
(for example) lapack/atlas. Is there already a way to do this?

Many thanks, Arnd

From pearu at scipy.org Wed Dec 14 04:09:57 2005
From: pearu at scipy.org (Pearu Peterson)
Date: Wed, 14 Dec 2005 03:09:57 -0600 (CST)
Subject: [SciPy-dev] scipy distutils questions
In-Reply-To:
References:
Message-ID:

On Wed, 14 Dec 2005, Arnd Baecker wrote:

> I got some further info on this:
>
> The lapack stuff is provided in
>  libmkl_lapack64.so and libmkl.so
> which are in /opt/intel/mkl72/lib/64
> To link correctly, they need
>  libifcore.so libguide.so
> which are in /opt/intel/fc_90/lib/
> and
>  libpthread.so
> which is in /usr/lib/
>
> We tried the following, but without success:
>
>  [atlas]
>  library_dirs = /opt/intel/mkl72/lib/64:/opt/intel/fc_90/lib/
>  include_dirs = /opt/intel/mkl72/include/
>  atlas_libs = mkl_lapack64,mkl,ifcore

First, mkl is NOT atlas.

mkl: http://www.intel.com/cd/software/products/asmo-na/eng/perflib/mkl/index.htm
atlas: http://math-atlas.sourceforge.net/

system_info assumes that atlas libraries are used according to the
official instructions given in
http://math-atlas.sourceforge.net/errata.html#LINK

mkl and atlas are different libraries of optimized blas/lapack/...
routines. So the problem boils down to adding mkl support to
system_info. Btw, then fftpack, ufuncs, and random could also take
advantage of mkl some day..

Adding mkl support means implementing a

  class intelmkl_info(system_info):
      section = 'intelmkl'
      dir_env_var = 'INTELMKL'
      _lib_names = ['mkl']
      ..

class in system_info.py and adding the required hooks to the lapack_opt
and blas_opt classes.

The link
http://www.intel.com/software/products/mkl/docs/mklqref/index.htm
seems to be broken, but to implement intelmkl support one needs to know
how/where the mkl libraries are installed and how to use them (the order
of libraries, etc). This information should come with mkl; I couldn't
find it on the internet.

Pearu

From pearu at scipy.org Wed Dec 14 04:15:39 2005
From: pearu at scipy.org (Pearu Peterson)
Date: Wed, 14 Dec 2005 03:15:39 -0600 (CST)
Subject: [SciPy-dev] scipy distutils questions
In-Reply-To:
References:
Message-ID:

On Wed, 14 Dec 2005, Pearu Peterson wrote:

> The link
> http://www.intel.com/software/products/mkl/docs/mklqref/index.htm
> seems to be broken, but to implement intelmkl support one needs to know
> how/where the mkl libraries are installed and how to use them (the order
> of libraries, etc). This information should come with mkl; I couldn't
> find it on the internet.

I didn't look quite hard enough; now I found it:
http://www.intel.com/software/products/mkl/docs/mklgs_lnx.htm

Pearu

From nwagner at mecha.uni-stuttgart.de Wed Dec 14 05:18:03 2005
From: nwagner at mecha.uni-stuttgart.de (Nils Wagner)
Date: Wed, 14 Dec 2005 11:18:03 +0100
Subject: [SciPy-dev] scipy distutils questions
In-Reply-To:
References:
Message-ID: <439FF15B.8030006@mecha.uni-stuttgart.de>

Pearu Peterson wrote:

>On Wed, 14 Dec 2005, Arnd Baecker wrote:
>
>>I got some further info on this:
>>
>>The lapack stuff is provided in
>>  libmkl_lapack64.so and libmkl.so
>>which are in /opt/intel/mkl72/lib/64
>>To link correctly, they need
>>  libifcore.so libguide.so
>>which are in /opt/intel/fc_90/lib/
>>and
>>  libpthread.so
>>which is in /usr/lib/
>>
>>We tried the following, but without success:
>>
>>  [atlas]
>>  library_dirs = /opt/intel/mkl72/lib/64:/opt/intel/fc_90/lib/
>>  include_dirs = /opt/intel/mkl72/include/
>>  atlas_libs = mkl_lapack64,mkl,ifcore
>>
>First, mkl is NOT atlas.
>
>mkl: http://www.intel.com/cd/software/products/asmo-na/eng/perflib/mkl/index.htm
>atlas: http://math-atlas.sourceforge.net/
>
>system_info assumes that atlas libraries are used according to the
>official instructions given in
>http://math-atlas.sourceforge.net/errata.html#LINK
>
>mkl and atlas are different libraries of optimized blas/lapack/...
>routines. So the problem boils down to adding mkl support to
>system_info. Btw, then fftpack, ufuncs, and random could also take
>advantage of mkl some day..
>
>Adding mkl support means implementing a
>
>  class intelmkl_info(system_info):
>      section = 'intelmkl'
>      dir_env_var = 'INTELMKL'
>      _lib_names = ['mkl']
>      ..
>
>class in system_info.py and adding the required hooks to the lapack_opt
>and blas_opt classes.
>
>The link
>http://www.intel.com/software/products/mkl/docs/mklqref/index.htm
>seems to be broken, but to implement intelmkl support one needs to know
>how/where the mkl libraries are installed and how to use them (the order
>of libraries, etc). This information should come with mkl; I couldn't
>find it on the internet.
>
>Pearu
>
>_______________________________________________
>Scipy-dev mailing list
>Scipy-dev at scipy.net
>http://www.scipy.net/mailman/listinfo/scipy-dev
>

Hi Pearu,

There is also Goto's BLAS
http://www.tacc.utexas.edu/resources/software/software.php
http://www.tacc.utexas.edu/resources/software/gotoblasfaq.php

Nils

From arnd.baecker at web.de Wed Dec 14 07:48:49 2005
From: arnd.baecker at web.de (Arnd Baecker)
Date: Wed, 14 Dec 2005 13:48:49 +0100 (CET)
Subject: [SciPy-dev] scipy distutils questions
In-Reply-To:
References:
Message-ID:

Hi Pearu,

On Wed, 14 Dec 2005, Pearu Peterson wrote:

> On Wed, 14 Dec 2005, Arnd Baecker wrote:
>
> > Because of this I tried to debug this a bit, but already running
> > `system_info.py` does not work:
> >
> >  core> python scipy/distutils/system_info.py
> >  Traceback (most recent call last):
> >    ...
> >  ImportError: No module named scipy.distutils.misc_util
>
> That's strange. scipy/distutils/log.py is there but
> scipy/distutils/misc_util.py isn't, and both are python modules... you
> can try
>  PYTHONPATH=. python scipy/distutils/system_info.py
>
> Anyway, I have fixed system_info.py to detect the atlas version when
> running from the scipy core source directory as
> python scipy/distutils/system_info.py. Maybe that fix also fixes the
> above error.

Looks good! The next one is:

  blas_opt_info:
  Traceback (most recent call last):
    File "scipy/distutils/system_info.py", line 1564, in ?
      show_all()
    File "scipy/distutils/system_info.py", line 1560, in show_all
      r = c.get_info()
    File "/home/abaecker/BUILDS2/Build_104/core/scipy/distutils/system_info.py", line 336, in get_info
      self.calc_info()
    File "/home/abaecker/BUILDS2/Build_104/core/scipy/distutils/system_info.py", line 1084, in calc_info
      atlas_version = get_atlas_version(**version_info)
    File "/home/abaecker/BUILDS2/Build_104/core/scipy/distutils/system_info.py", line 912, in get_atlas_version
      from core import Extension, setup
    File "/home/abaecker/BUILDS2/Build_104/core/scipy/distutils/core.py", line 13, in ?
      from scipy.distutils.extension import Extension
  ImportError: No module named scipy.distutils.extension

> I'll look into the other scipy.distutils problems as well..

I am just in the process of writing a longer response to your other mail
- I am running through the full installation again, so it will take a
little bit more time ...

Best, Arnd

From pearu at scipy.org Wed Dec 14 07:20:28 2005
From: pearu at scipy.org (Pearu Peterson)
Date: Wed, 14 Dec 2005 06:20:28 -0600 (CST)
Subject: [SciPy-dev] scipy distutils questions
In-Reply-To:
References:
Message-ID:

On Wed, 14 Dec 2005, Pearu Peterson wrote:

> So the problem boils down to adding mkl support to system_info.

I have added mkl support to system_info.
It is tested against the mkl 8.0.1 version:

  pearu at p4:~/svn/core$ python scipy/distutils/system_info.py lapack_opt
  lapack_opt_info:
  lapack_mkl_info:
  mkl_info:
    FOUND:
      libraries = ['mkl_ia32', 'mkl', 'vml', 'guide', 'pthread']
      library_dirs = ['/opt/intel/mkl/8.0.1/lib/32']
      include_dirs = ['/opt/intel/mkl/8.0.1/include']

    FOUND:
      libraries = ['mkl_lapack', 'mkl_ia32', 'mkl', 'vml', 'guide', 'pthread']
      library_dirs = ['/opt/intel/mkl/8.0.1/lib/32']
      include_dirs = ['/opt/intel/mkl/8.0.1/include']

  ( library_dirs = /usr/local/lib:/usr/lib )
    FOUND:
      libraries = ['mkl_lapack', 'mkl_ia32', 'mkl', 'vml', 'guide', 'pthread']
      library_dirs = ['/opt/intel/mkl/8.0.1/lib/32']
      include_dirs = ['/opt/intel/mkl/8.0.1/include']

and all scipy core tests pass ok.

To disable detecting mkl, define environment variable MKL=None.
For mkl 7.x versions one may need to fix library names (8.x does not
have ifcore, for instance) in system_info.py.

Pearu

From arnd.baecker at web.de Wed Dec 14 10:13:55 2005
From: arnd.baecker at web.de (Arnd Baecker)
Date: Wed, 14 Dec 2005 16:13:55 +0100 (CET)
Subject: [SciPy-dev] scipy distutils questions
In-Reply-To:
References:
Message-ID:

Hi Pearu,

On Wed, 14 Dec 2005, Pearu Peterson wrote:

> On Wed, 14 Dec 2005, Pearu Peterson wrote:
>
> > So the problem boils down to adding mkl support to system_info.
>
> I have added mkl support to system_info.

Fantastic! *** Many thanks ***

> It is tested against the mkl 8.0.1 version:
>
>  pearu at p4:~/svn/core$ python scipy/distutils/system_info.py lapack_opt
>  lapack_opt_info:
>  lapack_mkl_info:
>  mkl_info:
>    FOUND:
>      libraries = ['mkl_ia32', 'mkl', 'vml', 'guide', 'pthread']
>      library_dirs = ['/opt/intel/mkl/8.0.1/lib/32']
>      include_dirs = ['/opt/intel/mkl/8.0.1/include']
>  ...
>
> and all scipy core tests pass ok.

Is there a way to convince him to look for the 64 Bit variant under
/opt/intel/mkl72/64 ?? Presently the result is

  #------------------
  python scipy/distutils/system_info.py lapack_opt
  lapack_opt_info:
  lapack_mkl_info:
  mkl_info:
    FOUND:
      libraries = ['mkl_ia32', 'mkl', 'vml', 'guide', 'pthread']
      library_dirs = ['/opt/intel/mkl72/lib/32']
      include_dirs = ['/opt/intel/mkl72/include']

    FOUND:
      libraries = ['mkl_lapack', 'mkl_ia32', 'mkl', 'vml', 'guide', 'pthread']
      library_dirs = ['/opt/intel/mkl72/lib/32']
      include_dirs = ['/opt/intel/mkl72/include']

  ( library_dirs = /home/baecker/python2/lib:/usr/local/lib:/usr/lib )
    FOUND:
      libraries = ['mkl_lapack', 'mkl_ia32', 'mkl', 'vml', 'guide', 'pthread']
      library_dirs = ['/opt/intel/mkl72/lib/32']
      include_dirs = ['/opt/intel/mkl72/include']
  #----------------

I tried

  [mkl_libs] or [mkl_lapack] or [mkl]
  library_dirs = /opt/intel/mkl72/lib/64
  include_dirs = /opt/intel/mkl72/include/

but none worked?

> To disable detecting mkl, define environment variable MKL=None.
> For mkl 7.x versions one may need to fix library names (8.x does not
> have ifcore, for instance) in system_info.py.

For the full scipy I suspect that we will run into the single/double
precision lapack routines problem (libmkl_lapack32.so provides all the
s* and c* routines while libmkl_lapack64.so provides the d* and z*
routines), but one thing after another ... ;-)

Many thanks, Arnd

From fonnesbeck at gmail.com Wed Dec 14 11:13:09 2005
From: fonnesbeck at gmail.com (Chris Fonnesbeck)
Date: Wed, 14 Dec 2005 11:13:09 -0500
Subject: [SciPy-dev] scipy release?
Message-ID: <723eb6930512140813g56b5d4a6w25a038676dc47616@mail.gmail.com>

I know there have been scipy_core releases being cranked out by Travis
recently, but what about releases of the full scipy package? Things like
statistical distributions (not just random number generators) are pretty
important to have around. I am going to be teaching a course to biology
students that will be based on python, and I'm wondering how long we will
have to dive into svn to get builds of the entire scipy.

Thanks,
C.

--
Chris Fonnesbeck
Atlanta, GA

From pearu at scipy.org Wed Dec 14 10:35:00 2005
From: pearu at scipy.org (Pearu Peterson)
Date: Wed, 14 Dec 2005 09:35:00 -0600 (CST)
Subject: [SciPy-dev] scipy distutils questions
In-Reply-To:
References:
Message-ID:

On Wed, 14 Dec 2005, Arnd Baecker wrote:

> Is there a way to convince him to look for the 64 Bit variant under
> /opt/intel/mkl72/64 ??

In system_info.py mkl72/64 is used when cpu.is_Itanium() is true.
What is the output of

  python scipy/distutils/cpuinfo.py

in your case?

> Presently the result is
>
>  #------------------
>  python scipy/distutils/system_info.py lapack_opt
>  lapack_opt_info:
>  lapack_mkl_info:
>  mkl_info:
>    FOUND:
>      libraries = ['mkl_ia32', 'mkl', 'vml', 'guide', 'pthread']
>      library_dirs = ['/opt/intel/mkl72/lib/32']
>      include_dirs = ['/opt/intel/mkl72/include']
>  ...
>  #----------------
>
> I tried
>
>  [mkl_libs] or [mkl_lapack] or [mkl]
>  library_dirs = /opt/intel/mkl72/lib/64
>  include_dirs = /opt/intel/mkl72/include/

Hmm,

  [mkl]
  library_dirs = ..
  include_dirs = ..

should have worked. Btw, though people use site.cfg, I don't use it
myself; it's better to fix/extend system_info.py so that people wouldn't
have to use site.cfg.

> but none worked?
>
>> To disable detecting mkl, define environment variable MKL=None.
>> For mkl 7.x versions one may need to fix library names (8.x does not
>> have ifcore, for instance) in system_info.py.
>
> For the full scipy I suspect that we will run into the single/double
> precision lapack routines problem (libmkl_lapack32.so provides all the
> s* and c* routines while libmkl_lapack64.so provides the d* and z*
> routines), but one thing after another ... ;-)

But doesn't libmkl_lapack include both precision lapack routines? This
seems to be the case with mkl 8.x anyway. Could you send me the link to
the mkl 7.x readme file that explains the contents of mkl?

Thanks,
Pearu

From arnd.baecker at web.de Wed Dec 14 11:55:57 2005
From: arnd.baecker at web.de (Arnd Baecker)
Date: Wed, 14 Dec 2005 17:55:57 +0100 (CET)
Subject: [SciPy-dev] scipy distutils questions
In-Reply-To:
References:
Message-ID:

On Wed, 14 Dec 2005, Pearu Peterson wrote:

> On Wed, 14 Dec 2005, Arnd Baecker wrote:
>
> > Is there a way to convince him to look for the 64 Bit variant under
> > /opt/intel/mkl72/64 ??
>
> In system_info.py mkl72/64 is used when cpu.is_Itanium() is true.
> What is the output of
>  python scipy/distutils/cpuinfo.py
> in your case?

  python scipy/distutils/cpuinfo.py
  CPU information: getNCPUs=64 is_32bit

And:

  cat /proc/cpuinfo
  processor  : 0
  vendor     : GenuineIntel
  arch       : IA-64
  family     : Itanium 2
  model      : 2
  revision   : 1
  archrev    : 0
  features   : branchlong
  cpu number : 0
  cpu regs   : 4
  cpu MHz    : 1500.000000
  itc MHz    : 1500.000000
  BogoMIPS   : 2214.59
  siblings   : 1

(repeating until: processor : 63)

[...]

> > I tried
> >
> >  [mkl_libs] or [mkl_lapack] or [mkl]
> >  library_dirs = /opt/intel/mkl72/lib/64
> >  include_dirs = /opt/intel/mkl72/include/
>
> Hmm,
>
>  [mkl]
>  library_dirs = ..
>  include_dirs = ..
>
> should have worked. Btw, though people use site.cfg, I don't use it
> myself; it's better to fix/extend system_info.py so that people wouldn't
> have to use site.cfg.

Even better!! OTOH, when the system configuration is slightly off from
normal (and that will be the case for the above machine, as in the end
MKL 72, MKL80, ATLAS, scsl, ... will be installed), some special
guidance might be needed...

(What I really dislike about site.cfg is that it has to be put in the
directory scipy/distutils/, which means that once scipy core is
installed, a normal user cannot modify this anymore - unless site.cfg is
searched for in other directories...)

> > but none worked?
> >
> >> To disable detecting mkl, define environment variable MKL=None.
> >> For mkl 7.x versions one may need to fix library names (8.x does not
> >> have ifcore, for instance) in system_info.py.
> >
> > For the full scipy I suspect that we will run into the single/double
> > precision lapack routines problem (libmkl_lapack32.so provides all the
> > s* and c* routines while libmkl_lapack64.so provides the d* and z*
> > routines), but one thing after another ... ;-)
>
> But doesn't libmkl_lapack include both precision lapack routines?

Checking:

  nm /opt/intel/mkl72/lib/64/libmkl_lapack.a | grep gesvd | grep -v sst
    result: does include all 4 variants
  nm /opt/intel/mkl72/lib/64/libmkl_lapack64.so | grep gesvd
    result: only d* and z*
  nm /opt/intel/mkl72/lib/64/libmkl_lapack32.so | grep gesvd
    result: only s* and c*

So you are right, libmkl_lapack.a provides both variants.

> This seems to be the case with mkl 8.x anyway. Could you send me the
> link to the mkl 7.x readme file that explains the contents of mkl?

Does this one
http://nf.apac.edu.au/facilities/software/IntelCompilers/mkl72/mklnotes.htm
help?

Many thanks, Arnd

From arnd.baecker at web.de Wed Dec 14 12:04:27 2005
From: arnd.baecker at web.de (Arnd Baecker)
Date: Wed, 14 Dec 2005 18:04:27 +0100 (CET)
Subject: [SciPy-dev] scipy distutils questions
In-Reply-To:
References:
Message-ID:

On Wed, 14 Dec 2005, Pearu Peterson wrote:

> On Wed, 14 Dec 2005, Arnd Baecker wrote:
>
> > I got some further info on this:
> >
> > The lapack stuff is provided in
> >  libmkl_lapack64.so and libmkl.so
> > which are in /opt/intel/mkl72/lib/64
> > To link correctly, they need
> >  libifcore.so libguide.so
> > which are in /opt/intel/fc_90/lib/
> > and
> >  libpthread.so
> > which is in /usr/lib/
> >
> > We tried the following, but without success:
> >
> >  [atlas]
> >  library_dirs = /opt/intel/mkl72/lib/64:/opt/intel/fc_90/lib/
> >  include_dirs = /opt/intel/mkl72/include/
> >  atlas_libs = mkl_lapack64,mkl,ifcore
>
> First, mkl is NOT atlas.
>
> mkl: http://www.intel.com/cd/software/products/asmo-na/eng/perflib/mkl/index.htm
> atlas: http://math-atlas.sourceforge.net/
>
> system_info assumes that atlas libraries are used according to the
> official instructions given in
> http://math-atlas.sourceforge.net/errata.html#LINK
>
> mkl and atlas are different libraries of optimized blas/lapack/...
> routines. So the problem boils down to adding mkl support to
> system_info. Btw, then fftpack, ufuncs, and random could also take
> advantage of mkl some day..

This would indeed be fantastic!

(Similarly, this also applies to the AMD Core Math Library ACML, which
also provides BLAS/LAPACK, FFT and "Fast scalar, vector, and array math
transcendental library routines optimized for high performance on AMD
Opteron processors.", http://developer.amd.com/acml.aspx )

If one can believe the descriptions there, then this can make quite a
difference ...

Best - and thanks for all your effort - Arnd

From pearu at scipy.org Wed Dec 14 14:32:52 2005
From: pearu at scipy.org (Pearu Peterson)
Date: Wed, 14 Dec 2005 13:32:52 -0600 (CST)
Subject: [SciPy-dev] scipy distutils questions
In-Reply-To:
References:
Message-ID:

On Wed, 14 Dec 2005, Arnd Baecker wrote:

> On Wed, 14 Dec 2005, Pearu Peterson wrote:
>
>> In system_info.py mkl72/64 is used when cpu.is_Itanium() is true.
>> What is the output of
>>  python scipy/distutils/cpuinfo.py
>> in your case?
>
>  python scipy/distutils/cpuinfo.py
>  CPU information: getNCPUs=64 is_32bit
>
> And:
>
>  cat /proc/cpuinfo
>  processor  : 0
>  vendor     : GenuineIntel
>  arch       : IA-64
>  family     : Itanium 2
>  model      : 2
>  revision   : 1
>  archrev    : 0
>  features   : branchlong
>  cpu number : 0
>  cpu regs   : 4
>  cpu MHz    : 1500.000000
>  itc MHz    : 1500.000000
>  BogoMIPS   : 2214.59
>  siblings   : 1

Thanks for the data. I have fixed cpuinfo.py, could you try again?
I would expect the following output:

  getNCPUs=64 is_64bit is_Itanium

>> This seems to be the case with mkl 8.x anyway. Could you send me the
>> link to the mkl 7.x readme file that explains the contents of mkl?
>
> Does this one
> http://nf.apac.edu.au/facilities/software/IntelCompilers/mkl72/mklnotes.htm
> help?

Ok, thanks. Actually .../mkl72/doc/mkluse.htm was helpful; its
instructions about linking are the same as those of mkl 8.x.

Pearu

From arnd.baecker at web.de Wed Dec 14 16:01:32 2005
From: arnd.baecker at web.de (Arnd Baecker)
Date: Wed, 14 Dec 2005 22:01:32 +0100 (CET)
Subject: [SciPy-dev] scipy distutils questions
In-Reply-To:
References:
Message-ID:

On Wed, 14 Dec 2005, Pearu Peterson wrote:

[...]

> Thanks for the data. I have fixed cpuinfo.py, could you try again?
> I would expect the following output:
>  getNCPUs=64 is_64bit is_Itanium

Almost:

  python scipy/distutils/cpuinfo.py
  CPU information: getNCPUs=64 is_32bit is_Itanium

Best, Arnd

From pearu at scipy.org Wed Dec 14 15:09:38 2005
From: pearu at scipy.org (Pearu Peterson)
Date: Wed, 14 Dec 2005 14:09:38 -0600 (CST)
Subject: [SciPy-dev] scipy distutils questions
In-Reply-To:
References:
Message-ID:

On Wed, 14 Dec 2005, Arnd Baecker wrote:

> On Wed, 14 Dec 2005, Pearu Peterson wrote:
>
> [...]
>
>> Thanks for the data. I have fixed cpuinfo.py, could you try again?
>> I would expect the following output:
>>  getNCPUs=64 is_64bit is_Itanium
>
> Almost:
>
>  python scipy/distutils/cpuinfo.py
>  CPU information: getNCPUs=64 is_32bit is_Itanium

There was a typo in the _is_64bit method of the cpuinfo class; it should
now be fixed. If not, could you take a look at cpuinfo.py and fix it for
your platform.

Pearu

From arnd.baecker at web.de Wed Dec 14 16:11:41 2005
From: arnd.baecker at web.de (Arnd Baecker)
Date: Wed, 14 Dec 2005 22:11:41 +0100 (CET)
Subject: [SciPy-dev] scipy distutils questions
In-Reply-To:
References:
Message-ID:

On Wed, 14 Dec 2005, Arnd Baecker wrote:

> On Wed, 14 Dec 2005, Pearu Peterson wrote:
>
> [...]
>
> > Thanks for the data. I have fixed cpuinfo.py, could you try again?
> > I would expect the following output:
> >  getNCPUs=64 is_64bit is_Itanium
>
> Almost:
>
>  python scipy/distutils/cpuinfo.py
>  CPU information: getNCPUs=64 is_32bit is_Itanium

I think that the reason is that _is_64bit from cpuinfo.py fails silently
and therefore is_32bit is True:

  from cpuinfo import *

  In [4]: cpu._is_Itanium()
  Out[4]: True

  In [6]: cpu._is_32bit()
  Out[6]: True

  In [7]: cpu._is_64bit()
  ---------------------------------------------------------------------------
  exceptions.KeyError                    Traceback (most recent call last)

  /work/home/baecker/INSTALL_PYTHON2/CompileDir/CompileNEW/core/scipy/distutils/<ipython console>

  /work/home/baecker/INSTALL_PYTHON2/CompileDir/CompileNEW/core/scipy/distutils/cpuinfo.py in _is_64bit(self)
      231         if self.info[0].get('clflush size','')=='64':
      232             return 1
  --> 233         if self.info[0]['uname_m']=='x86_64':
      234             return 1
      235         if self.info.get('arch','')=='IA-64':

  KeyError: 'uname_m'

Note that

  In [16]: print cpu.info[0]["arch"]
  IA-64

looks ok. And one more:

  In [18]: print cpu.info[0]
  {'vendor': 'GenuineIntel', 'features': 'branchlong', 'family':
  'Itanium 2', 'cpu regs': '4', 'itc MHz': '1500.000000', 'archrev': '0',
  'cpu number': '0', 'BogoMIPS': '2214.59', 'siblings': '1', 'model':
  '2', 'arch': 'IA-64', 'processor': '0', 'cpu MHz': '1500.000000',
  'revision': '1'}

HTH, Arnd

From arnd.baecker at web.de Wed Dec 14 16:15:27 2005
From: arnd.baecker at web.de (Arnd Baecker)
Date: Wed, 14 Dec 2005 22:15:27 +0100 (CET)
Subject: [SciPy-dev] scipy distutils questions
In-Reply-To:
References:
Message-ID:

On Wed, 14 Dec 2005, Pearu Peterson wrote:

> On Wed, 14 Dec 2005, Arnd Baecker wrote:
>
> > On Wed, 14 Dec 2005, Pearu Peterson wrote:
> >
> > [...]
> >
> >> Thanks for the data. I have fixed cpuinfo.py, could you try again?
> >> I would expect the following output:
> >>  getNCPUs=64 is_64bit is_Itanium
> >
> > Almost:
> >
> >  python scipy/distutils/cpuinfo.py
> >  CPU information: getNCPUs=64 is_32bit is_Itanium
>
> There was a typo in the _is_64bit method of the cpuinfo class; it should
> now be fixed. If not, could you take a look at cpuinfo.py and fix it for
> your platform.

Does not yet work:

  U scipy/distutils/cpuinfo.py
  Updated to revision 1658.
  baecker at merkur:~/INSTALL_PYTHON2/CompileDir/CompileNEW/core> python scipy/distutils/cpuinfo.py
  CPU information: getNCPUs=64 is_32bit is_Itanium

See my other mail which should be just arriving now ;-) which should
explain what is failing. If not, I will check tomorrow morning (doing
these things over modem is not 100% fun ;-).

Best, Arnd

From pearu at scipy.org Wed Dec 14 15:15:43 2005
From: pearu at scipy.org (Pearu Peterson)
Date: Wed, 14 Dec 2005 14:15:43 -0600 (CST)
Subject: [SciPy-dev] scipy distutils questions
In-Reply-To:
References:
Message-ID:

On Wed, 14 Dec 2005, Arnd Baecker wrote:

> I think that the reason is that _is_64bit from cpuinfo.py fails
> silently and therefore is_32bit is True:
>
>  In [7]: cpu._is_64bit()
>  ---------------------------------------------------------------------------
>  exceptions.KeyError                    Traceback (most recent call last)
>
>  /work/home/baecker/INSTALL_PYTHON2/CompileDir/CompileNEW/core/scipy/distutils/<ipython console>
>
>  /work/home/baecker/INSTALL_PYTHON2/CompileDir/CompileNEW/core/scipy/distutils/cpuinfo.py in _is_64bit(self)
>      231         if self.info[0].get('clflush size','')=='64':
>      232             return 1
>  --> 233         if self.info[0]['uname_m']=='x86_64':
>      234             return 1
>      235         if self.info.get('arch','')=='IA-64':
>
>  KeyError: 'uname_m'

Ok, that was it. The fix is in SVN now.

Pearu

From arnd.baecker at web.de Wed Dec 14 16:47:25 2005
From: arnd.baecker at web.de (Arnd Baecker)
Date: Wed, 14 Dec 2005 22:47:25 +0100 (CET)
Subject: [SciPy-dev] scipy distutils questions
In-Reply-To:
References:
Message-ID:

On Wed, 14 Dec 2005, Pearu Peterson wrote:

> Ok, that was it. The fix is in SVN now.

  U scipy/distutils/cpuinfo.py
  Updated to revision 1659.
  core> python scipy/distutils/cpuinfo.py
  CPU information: getNCPUs=64 is_64bit is_Itanium

Yep! Many thanks. Doing the install of scipy core with that looks fine:

  Ran 162 tests in 1.415s

!! (only an

  export LD_LIBRARY_PATH=/opt/intel/mkl72/lib/64:$LD_LIBRARY_PATH

was necessary)

I will do the install of full scipy tomorrow and report.

Arnd

From oliphant.travis at ieee.org Wed Dec 14 17:56:26 2005
From: oliphant.travis at ieee.org (Travis Oliphant)
Date: Wed, 14 Dec 2005 15:56:26 -0700
Subject: [SciPy-dev] Changes in SVN scipy_core
Message-ID: <43A0A31A.1000607@ieee.org>

Hi Folks,

It's great to see the configuration stuff getting better in
scipy.distutils. Thanks to Pearu for all his hard work on
scipy.distutils.

I want to let you all know about the recent changes in the SVN version
of scipy_core. As you may recall, I dramatically improved the way the
data in an array can be understood by improving the PyArray_Descr
structure and making it an object. As part of that improvement, it
became clear that the NOTSWAPPED flag was an anomaly and shouldn't be a
flag on the array itself. The byte-order of the data is a property of
the data-type descriptor. This is especially clear when considering
records, which according to the array protocol can have some fields in
one byte order and some in another.
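To make that concrete, here is a sketch of such a record data-type. It
uses the field-list and byte-order syntax quoted elsewhere in this
thread; constructor details in current SVN may differ.

# Sketch: a record array whose fields carry their own byte orders --
# 'x' is stored big-endian ('>'), 'y' little-endian ('<').
from scipy import zeros

a = zeros((3,), dtype=[('x', '>i4'), ('y', '<f8')])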
As a result, I've removed the NOTSWAPPED flag from the array object. The equivalent of arr.flags.notswapped is now arr.dtypedescr.isnative. In C, the macro PyArray_ISNOTSWAPPED(arr) is still available but it checks the data-type descriptor. All C-API and python calls that used to pass a swap paramter along with a data-type descriptor have the swap paramter deleted. These C-API changes mean you will need to fully rebuild (remove the build directory and build) both scipy_core and scipy if you check out the new SVN version. I realize that the pace of development has been rapid --- but I'm on a tight schedule ;-) Hopefully, this wil be the last relatively major change to the code base for a while. All tests pass for me for full scipy after these changes. As we all know, however, there still may be remaining issues. One issue, for example, is that a.copy() returns a copy of the data (with the same data-type descriptor so no change in the byte-order is done). There is a new method a.newbyteorder() which returns the equivalent of a.view(a.dtypedescr.newbyteorder()) which returns a new view of the data using a different byte-order description. Note that the newbyteorder method of a dtypedescr object returns a new copy of the dtypedescr object with the byte-orders swapped if is not given or forces the byteorder to a particular value if arg is given (and not arg=='|' which means ignore). All fields of the dtypedescr object are (recursively) updated with the same rule as well. Look what comes along for the ride because of these changes: a = ones((5,), 'i4') # define a big-endian array a.tostring() '\x01\x00\x00\x00\x01\x00\x00\x00\x01\x00\x00\x00\x01\x00\x00\x00\x01\x00\x00\x00' b.tostring() '\x00\x00\x00\x01\x00\x00\x00\x01\x00\x00\x00\x01\x00\x00\x00\x01\x00\x00\x00\x01' You can use these byte-order flags anywhere a data-type descriptor (dtype parameter) is required. I have not tested all the possibilities, of course, so there may still be outstanding issues. Note, however, that if you work with arrays that are not in native-byte order some operations will generally be slower. Regards, -Travis From oliphant.travis at ieee.org Wed Dec 14 18:16:48 2005 From: oliphant.travis at ieee.org (Travis Oliphant) Date: Wed, 14 Dec 2005 16:16:48 -0700 Subject: [SciPy-dev] records and new NOTSWAPPED changes In-Reply-To: <43A0862F.7040409@stsci.edu> References: <43A0862F.7040409@stsci.edu> Message-ID: <43A0A7E0.3030905@ieee.org> Hey Chris, The recent NOTSWAPPED changes were once again prompted by closely trying to understand records. I need to modify the behavior of the byteorder argument to the records constructors and go back and make sure things are still working. My thinking is that if the byteorder argument is given it will force a particular byteorder on all the descriptors (byte-order can also be indicated on the types themselves however): e.g. '>i4,(2,3)>f8,a10' so formats = 'i4, (2,3)f8, a10' with byteorder = 'big' would be equivalent to formats = '>i4, (2,3)>f8, a10' Sound OK? But, there are still some outstanding issues regarding the record arrays and the new changes. I should be able to work them out in the next few hours. But, just to let you know in case you try things and see the errors on your new tests (thanks for those...) Thanks for your help and encouragement. I really like where we are ending up with this... -Travis From oliphant.travis at ieee.org Thu Dec 15 03:57:37 2005 From: oliphant.travis at ieee.org (Travis E. 
Oliphant) Date: Thu, 15 Dec 2005 01:57:37 -0700 Subject: [SciPy-dev] New named fields in scipy core --- possible uses in color images In-Reply-To: <439CD911.9000407@astraw.com> References: <439CD911.9000407@astraw.com> Message-ID: <43A13001.8090808@ieee.org> I know that many people are not aware of the new named fields that are now an integral part of scipy_core. So, here is a simple example of their use. First of all, named fields can be constructed from any data type, not just records. The only thing the recarray subclass does is to make construction a bit cleaner, perhaps, and to allow attribute access to fields. Consider the following code (works with recent SVN): image = zeros((256,256),dtype=(uint32, [('r','u1'),('g','u1'),('b','u1'),('a','u1')])) This creates an array of zeros (which for math operations can be interpreted as an unsigned 32-bit integer) but which has named fields 'r', 'g', 'b', and 'a'. These fields can be accessed (as unsigned 8-bit integers) using image['r'] image['g'] image['b'] image['a'] or the whole image can be accessed at one time using the image array. I'm not sure, aside from perhaps code readibility, if there is any benefit from this approach over the standard representation as image = zeros((256,256,4), dtype=uint8) but, it's kind of interesting that such things are now possible. Note, however, that one thing the records.py module provides over this simple approach is an "array-scalar" that also can have fields. In our example: image['r'][10,20] would work and return a uint8 scalar, but image[10,20]['r'] would not because image[10,20] is a uint32 scalar (no fields). image[10,20].getfield(*image.dtypedescr.fields['r']) would work though. Have fun... -Travis From arnd.baecker at web.de Thu Dec 15 04:40:39 2005 From: arnd.baecker at web.de (Arnd Baecker) Date: Thu, 15 Dec 2005 10:40:39 +0100 (CET) Subject: [SciPy-dev] scipy distutils questions In-Reply-To: References: Message-ID: Good morning alltogether ;-) thanks to Pearu things look very good now: - scipy core (0.8.6.1668) installs fine on the Itanium 2 and mkl72. All tests pass. - full scipy: installs fine. scipy.test(10,10): **************************************************************** WARNING: clapack module is empty ----------- See scipy/INSTALL.txt for troubleshooting. Notes: * If atlas library is not found by scipy/system_info.py, then scipy uses flapack instead of clapack. **************************************************************** **************************************************************** WARNING: cblas module is empty ----------- See scipy/INSTALL.txt for troubleshooting. Notes: * If atlas library is not found by scipy/system_info.py, then scipy uses fblas instead of cblas. **************************************************************** ====================================================================== FAIL: check_tandg (scipy.special.basic.test_basic.test_cephes) ---------------------------------------------------------------------- Traceback (most recent call last): File "/home/baecker/python2/scipy2/lib/python2.4/site-packages/scipy/special/tests/test_basic.py", line 440, in check_tandg assert_equal(cephes.tandg(45),1.0) File "/home/baecker/python2//scipy2/lib/python2.4/site-packages/scipy/test/testing.py", line 661, in assert_equal assert desired == actual, msg AssertionError: Items are not equal: DESIRED: 1.0 ACTUAL: 1.0000000000000002 ---------------------------------------------------------------------- Ran 1384 tests in 89.783s Compared to .... 
From arnd.baecker at web.de Thu Dec 15 04:40:39 2005 From: arnd.baecker at web.de (Arnd Baecker) Date: Thu, 15 Dec 2005 10:40:39 +0100 (CET) Subject: [SciPy-dev] scipy distutils questions In-Reply-To: References: Message-ID:

Good morning alltogether ;-)

thanks to Pearu things look very good now:

- scipy core (0.8.6.1668) installs fine on the Itanium 2 and mkl72. All tests pass.
- full scipy: installs fine.

scipy.test(10,10):

    ****************************************************************
    WARNING: clapack module is empty
    -----------
    See scipy/INSTALL.txt for troubleshooting.
    Notes:
    * If atlas library is not found by scipy/system_info.py,
      then scipy uses flapack instead of clapack.
    ****************************************************************

    ****************************************************************
    WARNING: cblas module is empty
    -----------
    See scipy/INSTALL.txt for troubleshooting.
    Notes:
    * If atlas library is not found by scipy/system_info.py,
      then scipy uses fblas instead of cblas.
    ****************************************************************

    ======================================================================
    FAIL: check_tandg (scipy.special.basic.test_basic.test_cephes)
    ----------------------------------------------------------------------
    Traceback (most recent call last):
      File "/home/baecker/python2/scipy2/lib/python2.4/site-packages/scipy/special/tests/test_basic.py", line 440, in check_tandg
        assert_equal(cephes.tandg(45),1.0)
      File "/home/baecker/python2//scipy2/lib/python2.4/site-packages/scipy/test/testing.py", line 661, in assert_equal
        assert desired == actual, msg
    AssertionError:
    Items are not equal:
    DESIRED: 1.0
    ACTUAL: 1.0000000000000002

    ----------------------------------------------------------------------
    Ran 1384 tests in 89.783s

Compared to .... tests on a machine with ATLAS

The number of tests (apart from 3 tests from distutils) is the one which I also get on a different machine, so things look very good on this side!

While we are at it:

- I would also like to test the FFT provided by mkl, as it claims to be faster than fftw3 by a factor 1.5 and more in a couple of cases:
  http://www.intel.com/cd/software/products/asmo-na/eng/perflib/mkl/219662.htm
  For MKL 8.0, there is an fftw interface which might make this possible without too many problems.

- ACML: On an Opteron the following worked with site.cfg

    [atlas]
    library_dirs = /opt/cluster/acml/2.7.0/lib
    atlas_libs = acml

    [blas]
    library_dirs = /opt/cluster/acml/2.7.0/lib
    blas_libs = acml

    [lapack]
    library_dirs = /opt/cluster/acml/2.7.0/lib

  Do you also plan a `site.cfg`-less configuration for ACML? (or does this already work?)

- For the SGI Altix we have one more candidate: scsl - Scientific Computing Software Library from SGI (http://www.sgi.com/products/software/scsl.html) which provides BLAS/LAPACK, FFT and more...
  I would be surprised if you had scsl, so I think that one will be our job, right?
  Some questions: Similarly to mkl we would have to provide

    'scsl':scsl_info,
    'lapack_scsl':lapack_scsl_info,
    'blas_scsl':blas_scsl_info,

  add the corresponding routines accordingly, add a default_scsl_dirs and add the corresponding stuff in class lapack_opt_info(system_info): ?

One more general question: In the end it might well happen that ATLAS is faster for one set of problems and MKL is faster for another bunch of problems. This would mean having two different installations of scipy around. Adding ACML and scsl to the mess leads to 4 installations, and allowing for different types of FFTs would lead to a nice tree of installations ...

Do you think it is feasible to make it possible to choose between different (for example) flapack.so from within one scipy installation?

((I am just running a second install for fftw3 to see if it is also slow on this machine, but that's for a separate thread...))

Many thanks,

Arnd

From faltet at carabos.com Thu Dec 15 04:48:07 2005 From: faltet at carabos.com (Francesc Altet) Date: Thu, 15 Dec 2005 10:48:07 +0100 Subject: [SciPy-dev] Changes in SVN scipy_core In-Reply-To: <43A0A31A.1000607@ieee.org> References: <43A0A31A.1000607@ieee.org> Message-ID: <200512151048.08579.faltet@carabos.com>

Travis,

On Wednesday 14 December 2005 23:56, Travis Oliphant wrote:
> I want to let you all know about the recent changes in the SVN version
> of scipy_core. As you may recall, I dramatically improved the way the
> data in an array can be understood by improving the PyArray_Descr
> structure and making it an object. As part of that improvement, it
> became clear that the NOTSWAPPED flag was an anomaly and shouldn't be a
> flag on the array itself. The byte-order of the data is a property of
> the data-type descriptor. This is especially clear when considering
> records which according to the array protocol can have some fields in
> one byte order and some in another.

It is not clear to me whether supporting different byte orders in the same recarray (or plain array) would be necessary. This question arose some months ago in the numpy list, and not even Perry Greenfield was able to come up with an example of a situation where this would be useful. My opinion about this is that this feature complicates the code unnecessarily, potentially making the treatment of non-native byteorder records more costly.
If this is not the case, great, but if it is, I'd very much prefer to assume that the byteorder of every object in scipy_core would be the same. BTW, I've seen in the latest SVN checkout that you are making great progress in porting the recarray object. I'd like to have the code a look and try to port our nestedrecarray as well, and offer it for your consideration. I've seen also that you started a new unit test module for recarray. That's great! I hope to contribute some tests units for it as well. Thanks for your excellent work! -- >0,0< Francesc Altet ? ? http://www.carabos.com/ V V C?rabos Coop. V. ??Enjoy Data "-" From nwagner at mecha.uni-stuttgart.de Thu Dec 15 04:52:02 2005 From: nwagner at mecha.uni-stuttgart.de (Nils Wagner) Date: Thu, 15 Dec 2005 10:52:02 +0100 Subject: [SciPy-dev] scipy distutils questions In-Reply-To: References: Message-ID: <43A13CC2.50409@mecha.uni-stuttgart.de> Arnd Baecker wrote: >Good morning alltogether ;-) > >thanks to Pearu things look very good now: > >- scipy core (0.8.6.1668) installs fine on the Itanium 2 > and mkl72. All tests pass. >- full scipy: installs fine. > > scipy.test(10,10): > > **************************************************************** > WARNING: clapack module is empty > ----------- > See scipy/INSTALL.txt for troubleshooting. > Notes: > * If atlas library is not found by scipy/system_info.py, > then scipy uses flapack instead of clapack. > **************************************************************** > > **************************************************************** > WARNING: cblas module is empty > ----------- > See scipy/INSTALL.txt for troubleshooting. > Notes: > * If atlas library is not found by scipy/system_info.py, > then scipy uses fblas instead of cblas. > **************************************************************** > >====================================================================== >FAIL: check_tandg (scipy.special.basic.test_basic.test_cephes) >---------------------------------------------------------------------- >Traceback (most recent call last): > File >"/home/baecker/python2/scipy2/lib/python2.4/site-packages/scipy/special/tests/test_basic.py", >line 440, in check_tandg > assert_equal(cephes.tandg(45),1.0) > File >"/home/baecker/python2//scipy2/lib/python2.4/site-packages/scipy/test/testing.py", >line 661, in assert_equal > assert desired == actual, msg >AssertionError: >Items are not equal: >DESIRED: 1.0 >ACTUAL: 1.0000000000000002 > >---------------------------------------------------------------------- >Ran 1384 tests in 89.783s > > Compared to .... tests on a machine with ATLAS > The number of tests (apart from 3 tests from distuils) > is the one which I also get on a different machine, > so things look very good on this side! > >While we are at it: >- I would also like to test the FFT provided by mkl, > as it claims to be faster than fftw3 by a factor 1.5 and more > in a couple of cases: > http://www.intel.com/cd/software/products/asmo-na/eng/perflib/mkl/219662.htm > > For MKL 8.0, there is a fftw interface which might make > this possible without too much problems. > >- ACML: On an Opteron the following worked with site.cfg > > [atlas] > library_dirs = /opt/cluster/acml/2.7.0/lib > atlas_libs = acml > > [blas] > library_dirs = /opt/cluster/acml/2.7.0/lib > blas_libs = acml > > [lapack] > library_dirs = /opt/cluster/acml/2.7.0/lib > > Do you also plan a `site.cfg`-less configuration for ACML? > (or does this already work?) 
> >- For the SGI ALtix we have one more candidate: > scsl - Scientific Computing Software Library from SGI > (http://www.sgi.com/products/software/scsl.html) > which provides BLAS/LAPACK, FFT and more... > > I would be surprised, if you had scsl, so I think that one > will be our job, right? > Some questions: Similarly to mkl we would have to provide > 'scsl':scsll_info, > 'lapack_scsl':lapack_scsl_info, > 'blas_scls':blas_scsl_info, > add the corresponding routines accordingly, add a > default_scls_dirs > and add the corresponding stuff in > class lapack_opt_info(system_info): > ? > >One more general question: >In the end it might well happen, that ATLAS is faster for >one set of problems and MKL is faster for another bunch of problems. >This would mean to have two different installations of scipy around. >Adding ACML and scsl to the mess leads to 4 installations >and allowing for different types of FFTs would lead >to a nice tree of installations ... > >Do you think it is feasible to make it possible to >choose between different >(for example) flapack.so from within one scipy installation? > >((I am just running a second install for fftw3 and >see if it is also slow on this machine, but that's >for a separate thread...)) > >Many thanks, > >Arnd > >_______________________________________________ >Scipy-dev mailing list >Scipy-dev at scipy.net >http://www.scipy.net/mailman/listinfo/scipy-dev > Arnd, You might be interested in the following links http://www.tacc.utexas.edu/resources/software/gotoblasfaq.php http://bohnsack.com/writing/libgoto-hpl/ http://www.pathscale.com/building_code/gotoblas.html http://www.tacc.utexas.edu/~kgoto/ Nils From cimrman3 at ntc.zcu.cz Thu Dec 15 05:06:23 2005 From: cimrman3 at ntc.zcu.cz (Robert Cimrman) Date: Thu, 15 Dec 2005 11:06:23 +0100 Subject: [SciPy-dev] scipy distutils questions In-Reply-To: <43A13CC2.50409@mecha.uni-stuttgart.de> References: <43A13CC2.50409@mecha.uni-stuttgart.de> Message-ID: <43A1401F.2060600@ntc.zcu.cz> Nils Wagner wrote: > Arnd, > > You might be interested in the following links > > http://www.tacc.utexas.edu/resources/software/gotoblasfaq.php > http://bohnsack.com/writing/libgoto-hpl/ > http://www.pathscale.com/building_code/gotoblas.html > http://www.tacc.utexas.edu/~kgoto/ Especially 'UMFPACK likes it!' section at http://www.tacc.utexas.edu/~kgoto/ looks interesting! :) r. From arnd.baecker at web.de Thu Dec 15 05:14:54 2005 From: arnd.baecker at web.de (Arnd Baecker) Date: Thu, 15 Dec 2005 11:14:54 +0100 (CET) Subject: [SciPy-dev] scipy distutils questions In-Reply-To: <43A1401F.2060600@ntc.zcu.cz> References: <43A13CC2.50409@mecha.uni-stuttgart.de> <43A1401F.2060600@ntc.zcu.cz> Message-ID: On Thu, 15 Dec 2005, Robert Cimrman wrote: > Nils Wagner wrote: > > Arnd, > > > > You might be interested in the following links > > > > http://www.tacc.utexas.edu/resources/software/gotoblasfaq.php > > http://bohnsack.com/writing/libgoto-hpl/ > > http://www.pathscale.com/building_code/gotoblas.html > > http://www.tacc.utexas.edu/~kgoto/ > > Especially 'UMFPACK likes it!' section at > http://www.tacc.utexas.edu/~kgoto/ looks interesting! :) Indeed very interesting! Nils, have you tried to build a LAPACK usable for scipy on top of goto? 
Best, Arnd P.S.: And I thought goto would be of no use in python http://www.entrian.com/goto/ ;-) From nwagner at mecha.uni-stuttgart.de Thu Dec 15 05:18:34 2005 From: nwagner at mecha.uni-stuttgart.de (Nils Wagner) Date: Thu, 15 Dec 2005 11:18:34 +0100 Subject: [SciPy-dev] scipy distutils questions In-Reply-To: References: <43A13CC2.50409@mecha.uni-stuttgart.de> <43A1401F.2060600@ntc.zcu.cz> Message-ID: <43A142FA.1000400@mecha.uni-stuttgart.de> Arnd Baecker wrote: >On Thu, 15 Dec 2005, Robert Cimrman wrote: > > >>Nils Wagner wrote: >> >>>Arnd, >>> >>>You might be interested in the following links >>> >>>http://www.tacc.utexas.edu/resources/software/gotoblasfaq.php >>>http://bohnsack.com/writing/libgoto-hpl/ >>>http://www.pathscale.com/building_code/gotoblas.html >>>http://www.tacc.utexas.edu/~kgoto/ >>> >>Especially 'UMFPACK likes it!' section at >>http://www.tacc.utexas.edu/~kgoto/ looks interesting! :) >> > >Indeed very interesting! >Nils, have you tried to build a LAPACK usable for scipy >on top of goto? > > Not yet. BTW, this is the answer given by Pearu a few months ago ... http://www.scipy.org/mailinglists/mailman?fn=scipy-user/2004-October/003570.html Nils >Best, Arnd > >P.S.: And I thought goto would be of no use in python > http://www.entrian.com/goto/ ;-) > >_______________________________________________ >Scipy-dev mailing list >Scipy-dev at scipy.net >http://www.scipy.net/mailman/listinfo/scipy-dev > From arnd.baecker at web.de Thu Dec 15 05:25:52 2005 From: arnd.baecker at web.de (Arnd Baecker) Date: Thu, 15 Dec 2005 11:25:52 +0100 (CET) Subject: [SciPy-dev] scipy distutils questions In-Reply-To: <43A142FA.1000400@mecha.uni-stuttgart.de> References: <43A13CC2.50409@mecha.uni-stuttgart.de> <43A1401F.2060600@ntc.zcu.cz> <43A142FA.1000400@mecha.uni-stuttgart.de> Message-ID: On Thu, 15 Dec 2005, Nils Wagner wrote: > Arnd Baecker wrote: [...] > >Indeed very interesting! > >Nils, have you tried to build a LAPACK usable for scipy > >on top of goto? > > > > > Not yet. > BTW, this is the answer given by Pearu a few months ago ... > http://www.scipy.org/mailinglists/mailman?fn=scipy-user/2004-October/003570.html Seems pretty easy - looking forward to your results!! Best, Arnd From oliphant.travis at ieee.org Thu Dec 15 05:58:29 2005 From: oliphant.travis at ieee.org (Travis Oliphant) Date: Thu, 15 Dec 2005 03:58:29 -0700 Subject: [SciPy-dev] Changes in SVN scipy_core In-Reply-To: <200512151048.08579.faltet@carabos.com> References: <43A0A31A.1000607@ieee.org> <200512151048.08579.faltet@carabos.com> Message-ID: <43A14C55.4050803@ieee.org> Francesc Altet wrote: >Travis, > >A Dimecres 14 Desembre 2005 23:56, Travis Oliphant va escriure: > > >>I want to let you all know about the recent changes in the SVN version >>of scipy_core. As you may recall, I dramatically improved the way the >>data in an array can be understood by improving the PyArray_Descr >>structure and making it an object. As part of that improvement, it >>became clear that the NOTSWAPPED flag was an anomaly and shouldn't be a >>flag on the array itself. The byte-order of the data is a property of >>the data-type descriptor. This is especially clear when considering >>records which according to the array protocol can have some fields in >>one byte order and some in another. >> >> > >It is not clear to me whether supporting different byte orders in the >same recarray (or plain array) would be necessary. 
This question arose >some months ago in the numpy list, and not even Perry Greenfield was >able to come up with an example of a situation where this would be useful. > >

It's not clear to me when such beasts would be useful as well. However, supporting them takes no effort at all once you place the idea of byte-swapping in the data-type descriptor where it belongs. Doing this actually cleans up a lot of code because logically, the data-type descriptor is the right place for the concept of machine byte-order.

My general thinking has been that the whole point of record arrays is to be able to support memory-mapped versions of files with arbitrary fixed-length records. Part of this has been to abstract the notion of data-type descriptor to where it's at now.

>My opinion about this is that this feature complicates the code >unnecessarily, potentially making the treatment of non-native byteorder >records more costly. >

I don't see why what I've done would cause this. If you don't specify byteorders in your data-type descriptors you always get native byteorder, which always executes the quickest.

>BTW, I've seen in the latest SVN checkout that you are making great >progress in porting the recarray object. I'd like to have a look at the >code and try to port our nestedrecarray as well, and offer it for your >consideration. I've seen also that you started a new unit test module >for recarray. That's great! I hope to contribute some test units for >it as well. > >

I'm really excited about the progress of records in scipy_core. I think I've been able to get so far so quickly because I abstracted the notion of data-type descriptor, which was always sitting there under-utilized in Numeric. For the first time, the full array_interface __array_descr__ protocol is supported (well, I haven't actually placed the couple of lines in that would consume the interface, but it is exported...) including nested records...

Of course there are probably still some outstanding issues that will crop up as more people test. One issue, for example, is that the recarray class currently returns recarray objects when accessing fields (because that is the default behavior of every subclass). In theory this should be fine, but another approach is to return actual ndarrays and chararrays instead.

Thanks for the feedback. Absent the feedback, I've done a lot of "moving forward" which I hope isn't too awful once people start to understand and utilize what is there...

-Travis
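The "native byteorder executes the quickest" point is easy to demonstrate today with a rough, machine-dependent timing sketch; this uses modern NumPy names (astype, dtype.newbyteorder), which are assumed here rather than taken from the 2005 scipy_core API:

    import timeit
    import numpy as np

    native = np.ones(1_000_000, dtype='=f8')               # native byte-order
    swapped = native.astype(native.dtype.newbyteorder())   # same values, foreign order

    # Reductions over non-native data must byte-swap on the fly,
    # so they generally run slower than the native equivalent.
    t_native = timeit.timeit(native.sum, number=100)
    t_swapped = timeit.timeit(swapped.sum, number=100)
    print("native: %.3fs  swapped: %.3fs" % (t_native, t_swapped))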
From arnd.baecker at web.de Thu Dec 15 09:18:27 2005 From: arnd.baecker at web.de (Arnd Baecker) Date: Thu, 15 Dec 2005 15:18:27 +0100 (CET) Subject: [SciPy-dev] fftw2 and fftw3 Message-ID:

Hi,

now also the results for the fftw2 vs. fftw3 comparison on the Itanium 2 are there. There is the same problem as on the other machines, for example:

fftw3
=====

    Fast Fourier Transform
    =================================================
             |   real input    |  complex input
    -------------------------------------------------
     size    | scipy | Numeric | scipy | Numeric
    -------------------------------------------------
      100    |  1.23 |  1.59   |  9.59 |  1.56    (secs for 7000 calls)
     1000    |  1.01 |  3.08   |  9.38 |  2.98    (secs for 2000 calls)
      256    |  2.32 |  3.71   | 18.95 |  3.65    (secs for 10000 calls)
      512    |  3.39 |  8.25   | 26.69 |  8.08    (secs for 10000 calls)
     1024    |  0.55 |  1.45   |  4.34 |  1.42    (secs for 1000 calls)
     2048    |  0.96 |  3.19   |  7.57 |  3.15    (secs for 1000 calls)
     4096    |  0.90 |  3.05   |  7.13 |  3.01    (secs for 500 calls)
     8192    |  1.97 |  7.91   | 14.83 |  7.86    (secs for 500 calls)
    ....

    Multi-dimensional Fast Fourier Transform
    ===================================================
               |   real input    |  complex input
    ---------------------------------------------------
     size      | scipy | Numeric | scipy | Numeric
    ---------------------------------------------------
     100x100   |  0.80 |  2.31   |  0.78 |  2.30    (secs for 100 calls)
     1000x100  |  0.52 |  2.08   |  0.56 |  2.08    (secs for 7 calls)
     256x256   |  0.67 |  1.59   |  0.64 |  1.59    (secs for 10 calls)
     512x512   |  1.63 |  3.45   |  1.59 |  3.49    (secs for 3 calls)
    .....

fftw2
=====

    Fast Fourier Transform
    =================================================
             |   real input    |  complex input
    -------------------------------------------------
     size    | scipy | Numeric | scipy | Numeric
    -------------------------------------------------
      100    |  1.23 |  1.58   |  1.14 |  1.56    (secs for 7000 calls)
     1000    |  1.02 |  3.03   |  0.80 |  3.00    (secs for 2000 calls)
      256    |  2.26 |  3.72   |  1.86 |  3.63    (secs for 10000 calls)
      512    |  3.07 |  8.24   |  2.35 |  8.09    (secs for 10000 calls)
     1024    |  0.51 |  1.44   |  0.41 |  1.41    (secs for 1000 calls)
     2048    |  0.89 |  3.19   |  0.70 |  3.14    (secs for 1000 calls)
     4096    |  0.83 |  3.05   |  0.68 |  3.02    (secs for 500 calls)
     8192    |  1.64 |  7.86   |  1.41 |  7.82    (secs for 500 calls)
    ....

    Multi-dimensional Fast Fourier Transform
    ===================================================
               |   real input    |  complex input
    ---------------------------------------------------
     size      | scipy | Numeric | scipy | Numeric
    ---------------------------------------------------
     100x100   |  0.41 |  2.28   |  0.41 |  2.28    (secs for 100 calls)
     1000x100  |  0.37 |  2.06   |  0.40 |  2.08    (secs for 7 calls)
     256x256   |  0.38 |  1.60   |  0.37 |  1.61    (secs for 10 calls)
     512x512   |  0.91 |  3.59   |  1.10 |  3.53    (secs for 3 calls)
    .....

So using fftw3 from scipy is drastically slower compared to fftw2, in particular for complex arrays.

In order to be sure that it is not fftw3 by itself which is slow, I installed benchfft http://www.fftw.org/benchfft/

For size 8192, complex vectors, the results read:

    cat fftw2/doitm_nd.double.speed | grep 8192 | grep dc
    fftw2-nd-measure dcif 8192 2483.9 0.000214375 15.039
    fftw2-nd-measure dcib 8192 2442.6 0.000218 15.0292
    fftw2-nd-measure dcof 8192 3083.5 0.0001726875 14.5242
    fftw2-nd-measure dcob 8192 3072.4 0.0001733125 14.7939

    cat fftw3/doit.double.speed | grep 8192 | grep dc
    fftw3 dcif 8192 3141.5 0.0001695 4.27479
    fftw3 dcib 8192 3169.5 0.000168 4.27484
    fftw3 dcof 8192 3297.1 0.0001615 2.31973
    fftw3 dcob 8192 3359.5 0.0001585 2.26557

Remarks
- i: in-place (o: out-of-place)
- c: complex (r: real)
- f: forward (b: backwards fft)
- The 4th column is the ``mflops'' number.

This clearly shows that fftw3 is faster than fftw2 (in particular for "i" - the in-place situation) and not at all a factor of 10 slower! Therefore the results for using fftw3 from python are completely puzzling.

I am at a complete loss as to what is going on, as I don't understand fftpack at all. Could an fft expert have a look or give some guidance on how to explore this further?

Best, Arnd
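The benchmark script itself is not quoted in the thread; a bare-bones sketch of such a timing loop, using numpy's FFT as a stand-in (the original numbers came from scipy's fftpack benchmarks, whose code is assumed rather than shown here), might look like:

    import timeit
    import numpy as np

    # Time repeated 1-D transforms of real and complex input, echoing the
    # "secs for N calls" layout of the tables above.
    for size, calls in [(1000, 2000), (8192, 500)]:
        x_real = np.random.rand(size)
        x_cplx = x_real + 1j * np.random.rand(size)
        t_real = timeit.timeit(lambda: np.fft.fft(x_real), number=calls)
        t_cplx = timeit.timeit(lambda: np.fft.fft(x_cplx), number=calls)
        print("%6d | real: %5.2f s | complex: %5.2f s  (%d calls)"
              % (size, t_real, t_cplx, calls))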
From fishburn at MIT.EDU Thu Dec 15 14:41:36 2005 From: fishburn at MIT.EDU (Matt Fishburn) Date: Thu, 15 Dec 2005 14:41:36 -0500 Subject: [SciPy-dev] fftw2 and fftw3 In-Reply-To: References: Message-ID: <43A1C6F0.3080103@mit.edu>

Arnd Baecker wrote:
> Hi,
>
> now also the results for the fftw2 vs. fftw3 comparison on the
> Itanium 2 are there.
>
> There is the same problem as on the other machines, for example:
> ...

I coded the original FFTW3 patches, but these were my first patches to OSS and I still don't completely understand scipy's build process/tradeoff philosophy. If no one else wants to take a look at the FFTW stuff I guess you're stuck with me. :)

>> I am at a complete loss as to what is going on, as I don't understand
>> fftpack at all. Could an fft expert have a look or give some guidance
>> on how to explore this further?

In a nutshell, this is what I think is going on: FFTW plans out the transform before execution, pushing off as much of the execution as possible into the planning stage. FFTW3 plans have static memory locations for plans, whereas FFTW2 plans do not. This allows FFTW2 plans to be much more flexible than FFTW3 plans. I believe the rationale for the plan/memory decision has to do with various speedups FFTW3 supports, such as SSE, but I'm not positive on this.

This planning difference requires FFTW3 to either cache the entire chunk of array memory, copying the array to the plan memory when the transform is operating on the array, or come up with a new plan each time a transform is performed. There are tradeoffs between caching/copying memory and plan generation time - for small arrays, the FFTW3 code should probably cache the memory, as copying memory is cheaper than coming up with a new plan, while for large arrays (or very large arrays) the overhead from copying the memory may be poor compared to the plan generation time.

Currently, I believe scipy caches the plan and memory for real transforms, but memory and plans for complex transforms are not cached. This may explain the poor performance of complex FFTs versus real FFTs for fftw3, but would not explain why fftw3 is slower than fftw2. I think the fftw3 versus fftw2 issue stems from the planning difference.

> This clearly shows that fftw3 is faster than fftw2 (in particular for
> "i" - the in-place situation) and not at all a factor of 10 slower!

Keep in mind that any fftw3 vs fftw2 data you get off of the fftw website may dance around the planning difference. mflop data may be only for the actual transform, not including plan generation, so it would be possible for only a single transform (not including plan generation or memory copying) using fftw3 to appear faster.

As for scipy, I don't remember what scipy caches for fftw3 versus fftw2. My theories are guesses from what I remember about the code, which I haven't seen for a few months, but I'll try to take a look at it this weekend or next weekend. I do have final exams for college, though, and may not be able to get to this until Christmas break on the 21st.

-Matt Fishburn
who should really be studying for his Game Theory final

From strawman at astraw.com Thu Dec 15 17:22:22 2005 From: strawman at astraw.com (Andrew Straw) Date: Thu, 15 Dec 2005 14:22:22 -0800 Subject: [SciPy-dev] getting scipy.io.mio / numpyio working again Message-ID: <43A1EC9E.2040709@astraw.com>

Hi,

I'm trying to get the following working:

    import scipy
    a={'a':scipy.arange(10)}
    scipy.io.mio.savemat('test.mat',a)
    b = scipy.io.mio.loadmat('test.mat')
    print b

If I apply the following patch:

    Index: mio.py
    ===================================================================
    --- mio.py      (revision 1485)
    +++ mio.py      (working copy)
    @@ -870,7 +870,7 @@
             imagf = var.dtypechar in ['F', 'D']
             fid.fwrite([var.shape[1], var.shape[0], imagf,
                         len(variable)+1],'int')
    -        fid.fwrite(variable+'\x00','char')
    +        fid.fwrite(variable+'\x00','uchar')
             if imagf:
                 fid.fwrite(var.real)
                 fid.fwrite(var.imag)

I can get as far as the following traceback. I don't have time right now to delve deeper.

Traceback (most recent call last):
  File "numpyiotest.py", line 4, in ?
scipy.io.mio.savemat('test.mat',a) File "/usr/lib/python2.3/site-packages/scipy/io/mio.py", line 873, in savemat fid.fwrite(variable+'\x00','uchar') File "/usr/lib/python2.3/site-packages/scipy/io/mio.py", line 221, in write numpyio.fwrite(self,count,data,mtype,bs) numpyio.error: Does not support extended types. From gruben at bigpond.net.au Thu Dec 15 18:44:57 2005 From: gruben at bigpond.net.au (Gary Ruben) Date: Fri, 16 Dec 2005 10:44:57 +1100 Subject: [SciPy-dev] [SciPy-user] New named fields in scipy core --- possible uses in color images In-Reply-To: <43A13001.8090808@ieee.org> References: <439CD911.9000407@astraw.com> <43A13001.8090808@ieee.org> Message-ID: <43A1FFF9.4070105@bigpond.net.au> I think this named record feature will be quite popular. I was expecting that fields would be accessed using a syntax like image.r image.g etc. However, using strings as Travis has done suggests other possibilities for accessing slices such as image['rgb'] or image['r','g','b'] and perhaps image['g','b','r']. Just putting this out to see if it's a good idea, Gary Ruben Travis E. Oliphant wrote: > I know that many people are not aware of the new named fields that are > now an integral part of scipy_core. So, here is a simple example of > their use. > > First of all, named fields can be constructed from any data type, not > just records. The only thing the recarray subclass does is to make > construction a bit cleaner, perhaps, and to allow attribute access to > fields. > > Consider the following code (works with recent SVN): > > image = zeros((256,256),dtype=(uint32, > [('r','u1'),('g','u1'),('b','u1'),('a','u1')])) > > This creates an array of zeros (which for math operations can be > interpreted as an unsigned 32-bit integer) but which has named fields > 'r', 'g', 'b', and 'a'. > > These fields can be accessed (as unsigned 8-bit integers) using > > image['r'] > image['g'] > image['b'] > image['a'] > > or the whole image can be accessed at one time using the image array. > > I'm not sure, aside from perhaps code readibility, if there is any > benefit from this approach over the standard representation as > > image = zeros((256,256,4), dtype=uint8) > > but, it's kind of interesting that such things are now possible. > > Note, however, that one thing the records.py module provides over this > simple approach is an "array-scalar" that also can have fields. > > In our example: > > image['r'][10,20] would work and return a uint8 scalar, but > image[10,20]['r'] would not because image[10,20] is a uint32 scalar (no > fields). > > image[10,20].getfield(*image.dtypedescr.fields['r']) would work though. > > Have fun... > > -Travis > > > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.net > http://www.scipy.net/mailman/listinfo/scipy-user > From dalcinl at gmail.com Thu Dec 15 19:50:45 2005 From: dalcinl at gmail.com (Lisandro Dalcin) Date: Thu, 15 Dec 2005 21:50:45 -0300 Subject: [SciPy-dev] exception in 'flatten()' Message-ID: Using the last release from SourceForge... [dalcinl at trantor dalcinl]$ python Python 2.4.2 (#1, Nov 7 2005, 16:51:05) [GCC 3.3.3 20040412 (Red Hat Linux 3.3.3-7)] on linux2 Type "help", "copyright", "credits" or "license" for more information. >>> from scipy.base import * >>> a = arange(10) >>> a.flatten() Traceback (most recent call last): File "", line 1, in ? SystemError: error return without exception set >>> b = empty(10) >>> b.flatten() Traceback (most recent call last): File "", line 1, in ? 
SystemError: error return without exception set
>>>

--
Lisandro Dalcín
---------------
Centro Internacional de Métodos Computacionales en Ingeniería (CIMEC)
Instituto de Desarrollo Tecnológico para la Industria Química (INTEC)
Consejo Nacional de Investigaciones Científicas y Técnicas (CONICET)
PTLC - Güemes 3450, (3000) Santa Fe, Argentina
Tel/Fax: +54-(0)342-451.1594

From arnd.baecker at web.de Fri Dec 16 03:50:00 2005 From: arnd.baecker at web.de (Arnd Baecker) Date: Fri, 16 Dec 2005 09:50:00 +0100 (CET) Subject: [SciPy-dev] building fftpack separately Message-ID:

Hi,

I have a question on how to build fftpack separately:

- fftpack/NOTES.txt says that one should do::

    python setup_fftpack.py install

  but that file does not exist

- Doing::

    python setup.py install --prefix=....

  works and installs an fftpack directly into site-packages (and not under scipy).

- However, there is no test subdirectory.

Running the tests only works partially, as it only takes the default test-level: fftpack/NOTES.txt says that for testing you should run::

    python tests/test_basic.py 10
    python tests/test_pseudo_diffs.py 10
    python tests/test_helper.py 10

They do run (after omitting the `tests/`), but none of the fft benchmarks.

further fftpack/NOTES.txt says

    >>> import fftpack
    >>> fftpack.test(10)

but this does not work::

    In [1]: import fftpack
    In [2]: fftpack.test(10)
    AttributeError: 'module' object has no attribute 'test'

Best, Arnd

From kamrik at gmail.com Fri Dec 16 04:34:18 2005 From: kamrik at gmail.com (Mark Koudritsky) Date: Fri, 16 Dec 2005 11:34:18 +0200 Subject: [SciPy-dev] MoinMoin demo site online In-Reply-To: References: <4377F2C7.806@astraw.com> Message-ID:

I've set up another demo Moin site to try out different themes.
http://wiswiki.smashhost.com/cgi-bin/scipy_test2.cgi

The default theme there is a modification of sinorca4moin. I've changed it to show minimal wikiness to non-logged-in users (like the Modern CMS theme on Andrew's demo site) and also increased the font size slightly.

There is also a "sections" plugin installed (the news side bar on the main page is made with it).

Didn't yet have the time to work on the logo.

There is a test user set up:
username: TestUser
pwd: test

So you can log in and try out the other themes that are installed there. A user's theme can be changed in the UserPreferences dialog. (I also liked the sand theme)

Is there any estimation about when (if at all) Moin can be installed on scipy.org ?

From robert.kern at gmail.com Fri Dec 16 04:39:30 2005 From: robert.kern at gmail.com (Robert Kern) Date: Fri, 16 Dec 2005 01:39:30 -0800 Subject: [SciPy-dev] MoinMoin demo site online In-Reply-To: References: <4377F2C7.806@astraw.com> Message-ID: <43A28B52.7010103@gmail.com>

Mark Koudritsky wrote:
> Is there any estimation about when (if at all) Moin can be installed
> on scipy.org ?

I've asked Joe to start working on it. I was hoping we would be putting content in it last week, but I don't think that's happened yet. I'll prod him a bit. It might be difficult with the holidays coming up.

--
Robert Kern
robert.kern at gmail.com

"In the fields of hell where the grass grows high
 Are the graves of dreams allowed to die."
-- Richard Harter From pearu at scipy.org Fri Dec 16 04:22:32 2005 From: pearu at scipy.org (Pearu Peterson) Date: Fri, 16 Dec 2005 03:22:32 -0600 (CST) Subject: [SciPy-dev] building fftpack separately In-Reply-To: References: Message-ID: On Fri, 16 Dec 2005, Arnd Baecker wrote: > I have a question on how to build fftpack separately: > > - fftpack/NOTES.txt says that one should do:: > > python setup_fftpack.py install > > but that file does not exist Fixed. > - Doing:: > > python setup.py install --prefix=.... > > works and installs an fftpack directly into site-packages > (and not under scipy). That is expected. > - However, there is no test subdirectory. > > Running the tests only works partially, as it only > takes the default test-level: > > fftpack/NOTES.txt says that for testing you should run:: > > python tests/test_basic.py 10 > python tests/test_pseudo_diffs.py 10 > python tests/test_helper.py 10 > > They do run (after ommitting the `tests/`), but > none of the fft benchmarks. Use python tests/test_basic.py -l 10 python tests/test_pseudo_diffs.py -l 10 python tests/test_helper.py -l 10 instead. > further fftpack/NOTES.txt says > > >>> import fftpack > >>> fftpack.test(10) > > but this does not work:: > > In [1]: import fftpack > In [2]: fftpack.test(10) > AttributeError: 'module' object has no attribute 'test' Fixed. You'll need to update scipy_core as well to have these fixes effective. Thanks for feedback, Pearu From arnd.baecker at web.de Fri Dec 16 05:52:35 2005 From: arnd.baecker at web.de (Arnd Baecker) Date: Fri, 16 Dec 2005 11:52:35 +0100 (CET) Subject: [SciPy-dev] MoinMoin demo site online In-Reply-To: References: <4377F2C7.806@astraw.com> Message-ID: On Fri, 16 Dec 2005, Mark Koudritsky wrote: > I've set up another demo Moin site to try out different themes. > http://wiswiki.smashhost.com/cgi-bin/scipy_test2.cgi > > The default theme there is a modification of sinorca4moin. > I've change it to show minimal wikiness to non logged in users (like > the Moder CMS theme on Andrew's demo site) and also increased the font > size slightly. That looks very nice to me. > There is also "sections" plugin installed, (the news side bar on the > main page is made with it). > > Didn't yet have the time to work on the logo. > > There is a test user set up: > username: TestUser > pwd: test > So you can log in and try out the other themes that are installed > there. User's theme can be changed in the UserPreferences dialog . (I > also liked the sand theme) Maybe the following overview is helpful: http://moinmoin.wikiwikiweb.de/ThemeMarket > Is there any estimation about when (if at all) Moin can be installed > on scipy.org ? The sooner the better - I am hesitant to add contents to scipy.org at the moment because of the planned change... Thanks for setting up the demo site, Arnd From arnd.baecker at web.de Fri Dec 16 05:59:03 2005 From: arnd.baecker at web.de (Arnd Baecker) Date: Fri, 16 Dec 2005 11:59:03 +0100 (CET) Subject: [SciPy-dev] building fftpack separately In-Reply-To: References: Message-ID: Hi Pearu, On Fri, 16 Dec 2005, Pearu Peterson wrote: [...] > Fixed. > > You'll need to update scipy_core as well to have these fixes effective. > > Thanks for feedback, I have to really thank you for the fast solution. This is extremely helpful for testing fftpack! 
Best, Arnd From oliphant.travis at ieee.org Fri Dec 16 07:17:45 2005 From: oliphant.travis at ieee.org (Travis Oliphant) Date: Fri, 16 Dec 2005 05:17:45 -0700 Subject: [SciPy-dev] Moving random.py Message-ID: <43A2B069.9000300@ieee.org> Hey Robert. I temporariliy moved random back under basic for now. I did this because it's really important to me that rand, randn, fft, and ifft are in the scipy namespace. I don't want you to think I'm unsupportive of the general changes you are trying to see, though. While in principle it would be nice to not do any import magic. There are three problems that I'd like to see solved that I think the import magic is trying to accomplish: 1) making some names visible in the scipy namespace (like fft, rand, etc.) 2) having some mechanism for scipy_core to use the tools in full scipy should it be installed (like svd, for example). 3) having some means to speed up importing. I think that there are two views of how scipy should behave that need to be resolved. The first view is where scipy is at right now in that when the user imports scipy the whole kit-and-kaboodle gets imported at the same time. This view was developed back when it was envisoned that scipy alone would be the MATLAB-like replacement. I think it is more clear now that IPython is better for interactive work and Matplotlib (and Enthought's coming tools) are better for plotting, that something else should provide the "environment for computing" besides the simple command import scipy. The second view is that scipy should just be a simple name-space package and not try to do any magic. Users will primarily use scipy for writing code and will import the needed functions from the right places. Most of what scipy is has been done to support interactive work, so that one does not have to do: import scipy import scipy.integrate import scipy.special in order to get access to the sub-packages. Perhaps this should be re-thought and something like this magic moved to IPython. -Travis From gruben at bigpond.net.au Fri Dec 16 08:31:34 2005 From: gruben at bigpond.net.au (Gary Ruben) Date: Sat, 17 Dec 2005 00:31:34 +1100 Subject: [SciPy-dev] Moving random.py In-Reply-To: <43A2B069.9000300@ieee.org> References: <43A2B069.9000300@ieee.org> Message-ID: <43A2C1B6.1070502@bigpond.net.au> I don't know if this is a stupid idea, but if the vote if for the magic to be moved to ipython, perhaps you could create a subpackage whose sole purpose was provide access to all subpackages in one hit a'la from scipy.interactive import * Gary R. Travis Oliphant wrote: > Hey Robert. > > I temporariliy moved random back under basic for now. I did this > because it's really important to me that rand, randn, fft, and ifft are > in the scipy namespace. I don't want you to think I'm unsupportive of > the general changes you are trying to see, though. > > While in principle it would be nice to not do any import magic. There > are three problems that I'd like to see solved that I think the import > magic is trying to accomplish: > > 1) making some names visible in the scipy namespace (like fft, rand, etc.) > 2) having some mechanism for scipy_core to use the tools in full scipy > should it be installed (like svd, for example). > 3) having some means to speed up importing. > > I think that there are two views of how scipy should behave that need to > be resolved. > > The first view is where scipy is at right now in that when the user > imports scipy the whole kit-and-kaboodle gets imported at the same > time. 
This view was developed back when it was envisoned that scipy > alone would be the MATLAB-like replacement. I think it is more clear > now that IPython is better for interactive work and Matplotlib (and > Enthought's coming tools) are better for plotting, that something else > should provide the "environment for computing" besides the simple > command import scipy. > > The second view is that scipy should just be a simple name-space package > and not try to do any magic. Users will primarily use scipy for writing > code and will import the needed functions from the right places. > > Most of what scipy is has been done to support interactive work, so that > one does not have to do: > > import scipy > import scipy.integrate > import scipy.special > > in order to get access to the sub-packages. Perhaps this should be > re-thought and something like this magic moved to IPython. > > -Travis From pearu at scipy.org Fri Dec 16 08:08:01 2005 From: pearu at scipy.org (Pearu Peterson) Date: Fri, 16 Dec 2005 07:08:01 -0600 (CST) Subject: [SciPy-dev] Moving random.py In-Reply-To: <43A2B069.9000300@ieee.org> References: <43A2B069.9000300@ieee.org> Message-ID: On Fri, 16 Dec 2005, Travis Oliphant wrote: > I temporariliy moved random back under basic for now. I did this > because it's really important to me that rand, randn, fft, and ifft are > in the scipy namespace. I don't want you to think I'm unsupportive of > the general changes you are trying to see, though. random.py does not need to be under basic to make rand, randn to appear in scipy namespace. When moving random.py to scipy/, the full scipy needs some fixes too, several modules do 'from scipy.basic.random import ..'. > While in principle it would be nice to not do any import magic. There > are three problems that I'd like to see solved that I think the import > magic is trying to accomplish: > > 1) making some names visible in the scipy namespace (like fft, rand, etc.) see a comment at the end of this message.. > 2) having some mechanism for scipy_core to use the tools in full scipy > should it be installed (like svd, for example). > 3) having some means to speed up importing. The speed of importing full scipy without ppimport hooks has been improved a lot when using Python2.3 and up (and faster computers). I myself do not consider the import speed a big issue anymore. > I think that there are two views of how scipy should behave that need to > be resolved. > > The first view is where scipy is at right now in that when the user > imports scipy the whole kit-and-kaboodle gets imported at the same > time. This view was developed back when it was envisoned that scipy > alone would be the MATLAB-like replacement. I think it is more clear > now that IPython is better for interactive work and Matplotlib (and > Enthought's coming tools) are better for plotting, that something else > should provide the "environment for computing" besides the simple > command import scipy. > > The second view is that scipy should just be a simple name-space package > and not try to do any magic. Users will primarily use scipy for writing > code and will import the needed functions from the right places. > > Most of what scipy is has been done to support interactive work, so that > one does not have to do: > > import scipy > import scipy.integrate > import scipy.special > > in order to get access to the sub-packages. Perhaps this should be > re-thought and something like this magic moved to IPython. 
I just realized that scipy could do something similar to Maple when importing packages that provide global symbols.

For example, when executing

    import scipy.fftpack

in an interactive shell, then code like (untested, unoptimized)

    from scipy.distutils.misc_util import get_frame
    if get_frame(1).f_locals['__name__']=='__main__':
        # expose fft to interactive shell
        exec 'fft = scipy.fftpack.fft' in get_frame(1).f_locals,get_frame(1).f_globals

at the end of fftpack/__init__.py would expose the fft function to the interactive shell (Maple would also show a warning on changing the global namespace if I recall correctly).

Pearu

From strawman at astraw.com Fri Dec 16 14:46:25 2005 From: strawman at astraw.com (Andrew Straw) Date: Fri, 16 Dec 2005 11:46:25 -0800 Subject: [SciPy-dev] MoinMoin demo site online In-Reply-To: References: <4377F2C7.806@astraw.com> Message-ID: <43A31991.1030408@astraw.com>

Wow, I love the theme sinorca4 you chose for default... My only comment is that the text bars ("Search" and "More Actions:") are wider than the left navistrip, but I'm sure this is an easy CSS tweak.

Mark Koudritsky wrote:

>I've set up another demo Moin site to try out different themes.
>http://wiswiki.smashhost.com/cgi-bin/scipy_test2.cgi
>
>The default theme there is a modification of sinorca4moin. I've changed
>it to show minimal wikiness to non-logged-in users (like the Modern CMS
>theme on Andrew's demo site) and also increased the font size slightly.
>
>There is also a "sections" plugin installed (the news side bar on the
>main page is made with it).
>
>Didn't yet have the time to work on the logo.
>
>There is a test user set up:
>username: TestUser
>pwd: test
>So you can log in and try out the other themes that are installed
>there. A user's theme can be changed in the UserPreferences dialog. (I
>also liked the sand theme)
>
>Is there any estimation about when (if at all) Moin can be installed
>on scipy.org ?
>
>_______________________________________________
>Scipy-dev mailing list
>Scipy-dev at scipy.net
>http://www.scipy.net/mailman/listinfo/scipy-dev
>
>

From oliphant.travis at ieee.org Fri Dec 16 15:46:53 2005 From: oliphant.travis at ieee.org (Travis Oliphant) Date: Fri, 16 Dec 2005 13:46:53 -0700 Subject: [SciPy-dev] scipy.org In-Reply-To: <43A32268.6090802@enthought.com> References: <43A31E49.7050906@astraw.com> <43A32268.6090802@enthought.com> Message-ID: <43A327BD.5020101@ieee.org>

> Have you any clue how we're going to bring over our vast collection of
> Plone content? The actual "switch" can't happen until we figure that
> issue out...but you can get started working on it as soon as I turn on
> Moin.

My attitude is that we can transfer the Plone content (what's worth transferring anyway) after we have the site up. Right now, the site lags so far behind what is currently the case that the Plone content that's there is almost worthless. I think starting a new Moin site and adding the useful content back is the way to go.

I'm thrilled that Andrew has taken this bull by the horns and that you are helping him.

Best regards,

-Travis

From stephen.walton at csun.edu Fri Dec 16 19:32:47 2005 From: stephen.walton at csun.edu (Stephen Walton) Date: Fri, 16 Dec 2005 16:32:47 -0800 Subject: [SciPy-dev] exception in 'flatten()' In-Reply-To: References: Message-ID: <43A35CAF.1020900@csun.edu>

Lisandro Dalcin wrote:

>Using the last release from SourceForge...
> >[dalcinl at trantor dalcinl]$ python >Python 2.4.2 (#1, Nov 7 2005, 16:51:05) >[GCC 3.3.3 20040412 (Red Hat Linux 3.3.3-7)] on linux2 >Type "help", "copyright", "credits" or "license" for more information. > > >>>>from scipy.base import * >>>>a = arange(10) >>>>a.flatten() >>>> >>>> >Traceback (most recent call last): > File "", line 1, in ? >SystemError: error return without exception set > > I can't reproduce this here, I'm afraid (Fedora Core 4, gcc4). From Fernando.Perez at colorado.edu Fri Dec 16 19:55:30 2005 From: Fernando.Perez at colorado.edu (Fernando Perez) Date: Fri, 16 Dec 2005 17:55:30 -0700 Subject: [SciPy-dev] Moving random.py In-Reply-To: <43A2C1B6.1070502@bigpond.net.au> References: <43A2B069.9000300@ieee.org> <43A2C1B6.1070502@bigpond.net.au> Message-ID: <43A36202.7090504@colorado.edu> Gary Ruben wrote: > I don't know if this is a stupid idea, but if the vote if for the magic > to be moved to ipython, perhaps you could create a subpackage whose sole > purpose was provide access to all subpackages in one hit a'la > from scipy.interactive import * I think this is a good idea, but I'd implement it just a little differently. The convenience of scipy.* can be great for code that uses a lot of scipy, so you don't have to type import scipy.this, scipy.that, scipy.that_else,... But I tend to agree with R. Kern that the black magic played by scipy's delayed importer tends to (at least it has in the past, though in fairness it's currently pretty much OK) cause problems. I also think that even though scipy starts pretty fast these days, it would still be a good idea to keep its init time to a minimum, especially for software that is run many times with a fresh process (like unit tests, for example). I'd suggest having in scipy's top level a single method import_all, so that one could write: import scipy scipy.import_all() and from then on use: scipy.this.foo() + scipy.that.bar() This simple idiom could be used in most code that needs to access a lot of scipy, and yet it doesn't pollute the user's global namespace. If you really want a full top-level import, we could have an 'all' module: from scipy.all import * the 'all' module would be the one with the heavy-duty imports. I'd name it 'all' instead of 'interactive' because I think this describes better the fact that it contains 'all of scipy' (though as a matter of suggesting policy by naming, interactive is in fact better :) With this in place, it's a trivial matter for any frequent scipy users to add to their ipython profile the above line (along with matploblib imports and anything else they may want). This means it would work without any changes needed on ipython's side. Note that I'm very happy to make any modifications needed to improve the interactive experience, but I like it even better if we can find solutions that don't require ipython changes (so they also benefit users of other interactive systems, including the plain python '>>>' shell). Cheers, f From Fernando.Perez at colorado.edu Fri Dec 16 20:02:47 2005 From: Fernando.Perez at colorado.edu (Fernando Perez) Date: Fri, 16 Dec 2005 18:02:47 -0700 Subject: [SciPy-dev] Moving random.py In-Reply-To: References: <43A2B069.9000300@ieee.org> Message-ID: <43A363B7.1070608@colorado.edu> Pearu Peterson wrote: > I just realized that scipy could do similar to Maple when importing > packages that provide global symbols. 
> > For example, when executing > > import scipy.fftpack > > in interactive shell then code like (untested, unoptimized) > > from scipy.distutils.misc_util import get_frame > if get_frame(1).f_locals['__name__']=='__main__': > # expose fft to interactive shell > exec 'fft = scipy.fftpack.fft' in get_frame(1).f_locals,get_frame(1).f_globals > > at the end of fftpack/__init__.py would expose fft function to interactive > shell (Maple would show also a warning on changing global namespace if I > recall correctly). -1 on this: for one thing, the caller's interactive namespace is not necessarily one frame above in the stack, so this is fairly brittle. It wouldn't work in ipython, for example, as the user's locals/globals are internally managed dicts and not the frame object's f_locals/f_globals. It would also fail on shells that have more levels of indirection, because __name__ != '__main__' until you actually reach the command-line script that invoked the shell. I also dislike this 'automatic promotion' from a simple import statement, too much implicit behavior for my taste. I think that the import_all() solution, along with Gary's idea, provide a cleaner solution to these issues, which is also (a big plus IMHO) actually robust. Cheers, f From oliphant.travis at ieee.org Fri Dec 16 20:27:34 2005 From: oliphant.travis at ieee.org (Travis Oliphant) Date: Fri, 16 Dec 2005 18:27:34 -0700 Subject: [SciPy-dev] exception in 'flatten()' In-Reply-To: <43A35CAF.1020900@csun.edu> References: <43A35CAF.1020900@csun.edu> Message-ID: <43A36986.3040101@ieee.org> Stephen Walton wrote: >Lisandro Dalcin wrote: > > > >>Using the last release from SourceForge... >> >>[dalcinl at trantor dalcinl]$ python >>Python 2.4.2 (#1, Nov 7 2005, 16:51:05) >>[GCC 3.3.3 20040412 (Red Hat Linux 3.3.3-7)] on linux2 >>Type "help", "copyright", "credits" or "license" for more information. >> >> >> >> >>>>>from scipy.base import * >>>> >>>> >>>>>a = arange(10) >>>>>a.flatten() >>>>> >>>>> >>>>> >>>>> >>Traceback (most recent call last): >> File "", line 1, in ? >>SystemError: error return without exception set >> >> >> >> >I can't reproduce this here, I'm afraid (Fedora Core 4, gcc4). > > > It was there but I fixed it in SVN. Sorry I didn't comment. Very silly misplaced return NULL; -Travis From oliphant.travis at ieee.org Fri Dec 16 20:30:45 2005 From: oliphant.travis at ieee.org (Travis Oliphant) Date: Fri, 16 Dec 2005 18:30:45 -0700 Subject: [SciPy-dev] Moving random.py In-Reply-To: <43A36202.7090504@colorado.edu> References: <43A2B069.9000300@ieee.org> <43A2C1B6.1070502@bigpond.net.au> <43A36202.7090504@colorado.edu> Message-ID: <43A36A45.6080107@ieee.org> >I think this is a good idea, but I'd implement it just a little differently. >The convenience of scipy.* can be great for code that uses a lot of scipy, so >you don't have to type > >import scipy.this, scipy.that, scipy.that_else,... > > Right this is the reason for the current behavior. >But I tend to agree with R. Kern that the black magic played by scipy's >delayed importer tends to (at least it has in the past, though in fairness >it's currently pretty much OK) cause problems. > >I also think that even though scipy starts pretty fast these days, it would >still be a good idea to keep its init time to a minimum, especially for >software that is run many times with a fresh process (like unit tests, for >example). 
> >I'd suggest having in scipy's top level a single method import_all, so that >one could write: > >import scipy >scipy.import_all() > >and from then on use: > >scipy.this.foo() + scipy.that.bar() > > I like this approach quite a bit. That way coders could choose packages as desired. And interactive use could be made easy as well. What do you think Pearu? You're probably the best one suited to make it happen. -Travis From oliphant.travis at ieee.org Fri Dec 16 20:36:25 2005 From: oliphant.travis at ieee.org (Travis Oliphant) Date: Fri, 16 Dec 2005 18:36:25 -0700 Subject: [SciPy-dev] Python Memory allocator in SciPy Message-ID: <43A36B99.8040102@ieee.org> I've added back the use of the Python Memory Allocator. It can be turned off using a define in arrayobject.h. I think it might help with allocation of lots of small-dimensional arrays. -Travis From gruben at bigpond.net.au Fri Dec 16 20:56:31 2005 From: gruben at bigpond.net.au (Gary Ruben) Date: Sat, 17 Dec 2005 12:56:31 +1100 Subject: [SciPy-dev] Moving random.py In-Reply-To: <43A36202.7090504@colorado.edu> References: <43A2B069.9000300@ieee.org> <43A2C1B6.1070502@bigpond.net.au> <43A36202.7090504@colorado.edu> Message-ID: <43A3704F.4030605@bigpond.net.au> I like Fernando's approach, plus 'all' is quicker to type than 'interactive' at an interactive prompt :-) Gary R. From nwagner at mecha.uni-stuttgart.de Sat Dec 17 02:32:12 2005 From: nwagner at mecha.uni-stuttgart.de (Nils Wagner) Date: Sat, 17 Dec 2005 08:32:12 +0100 Subject: [SciPy-dev] *** glibc detected *** free(): invalid pointer: 0x403931d8 *** Message-ID: Hi all, I am using the latest svn versions. python setup.py install works fine in core but the same in scipy results in *** glibc detected *** free(): invalid pointer: 0x403931d8 *** Nils From oliphant.travis at ieee.org Sat Dec 17 02:34:40 2005 From: oliphant.travis at ieee.org (Travis Oliphant) Date: Sat, 17 Dec 2005 00:34:40 -0700 Subject: [SciPy-dev] *** glibc detected *** free(): invalid pointer: 0x403931d8 *** In-Reply-To: References: Message-ID: <43A3BF90.9070205@ieee.org> Nils Wagner wrote: >Hi all, > >I am using the latest svn versions. >python setup.py install works fine in core >but the same in scipy results in > *** glibc detected *** free(): invalid pointer: >0x403931d8 *** > > Did you recompile? When did this error occur? From nwagner at mecha.uni-stuttgart.de Sat Dec 17 02:39:15 2005 From: nwagner at mecha.uni-stuttgart.de (Nils Wagner) Date: Sat, 17 Dec 2005 08:39:15 +0100 Subject: [SciPy-dev] *** glibc detected *** free(): invalid pointer: 0x403931d8 *** In-Reply-To: References: Message-ID: On Sat, 17 Dec 2005 08:32:12 +0100 "Nils Wagner" wrote: > Hi all, > > I am using the latest svn versions. 
> python setup.py install works fine in core Sorry, but I missed _configtest.o(.text+0x21): In function `main': /home/nwagner/svn/core/_configtest.c:5: undefined reference to `exp' collect2: ld returned 1 exit status _configtest.o(.text+0x21): In function `main': /home/nwagner/svn/core/_configtest.c:5: undefined reference to `exp' collect2: ld returned 1 exit status Nils > but the same in scipy results in > *** glibc detected *** free(): invalid pointer: > 0x403931d8 *** > > Nils > > _______________________________________________ > Scipy-dev mailing list > Scipy-dev at scipy.net > http://www.scipy.net/mailman/listinfo/scipy-dev From oliphant.travis at ieee.org Sat Dec 17 02:42:28 2005 From: oliphant.travis at ieee.org (Travis Oliphant) Date: Sat, 17 Dec 2005 00:42:28 -0700 Subject: [SciPy-dev] *** glibc detected *** free(): invalid pointer: 0x403931d8 *** In-Reply-To: References: Message-ID: <43A3C164.8060201@ieee.org> Nils Wagner wrote: >Hi all, > >I am using the latest svn versions. >python setup.py install works fine in core >but the same in scipy results in > *** glibc detected *** free(): invalid pointer: >0x403931d8 *** > > > I found a stray use of the free function (instead of _pya_free) which could be the cause of this. I've fixed it in SVN. I'd like some more tests on the use of the Python memory allocator to see if it makes any kind of difference (small array tests would be the most useful---I don't expect any difference on large array tests). Changing the PyArray_USE_PYMEM constant in arrayobject.h switches which allocator is used. -Travis From nwagner at mecha.uni-stuttgart.de Sat Dec 17 02:54:14 2005 From: nwagner at mecha.uni-stuttgart.de (Nils Wagner) Date: Sat, 17 Dec 2005 08:54:14 +0100 Subject: [SciPy-dev] *** glibc detected *** free(): invalid pointer: 0x403931d8 *** In-Reply-To: <43A3C164.8060201@ieee.org> References: <43A3C164.8060201@ieee.org> Message-ID: On Sat, 17 Dec 2005 00:42:28 -0700 Travis Oliphant wrote: > Nils Wagner wrote: > >>Hi all, >> >>I am using the latest svn versions. >>python setup.py install works fine in core >>but the same in scipy results in >> *** glibc detected *** free(): invalid pointer: >>0x403931d8 *** >> >> >> > I found a stray use of the free function (instead of >_pya_free) which > could be the cause of this. I've fixed it in SVN. I'd >like some more > tests on the use of the Python memory allocator to see >if it makes any > kind of difference (small array tests would be the most >useful---I don't > expect any difference on large array tests). Changing >the > PyArray_USE_PYMEM constant in arrayobject.h switches >which allocator is > used. > > -Travis > > _______________________________________________ > Scipy-dev mailing list > Scipy-dev at scipy.net > http://www.scipy.net/mailman/listinfo/scipy-dev here is the problem _configtest.o(.text+0x21): In function `main': /home/nwagner/svn/core/_configtest.c:5: undefined reference to `exp' collect2: ld returned 1 exit status Nils From schofield at ftw.at Sat Dec 17 06:57:37 2005 From: schofield at ftw.at (Ed Schofield) Date: Sat, 17 Dec 2005 11:57:37 +0000 Subject: [SciPy-dev] Links in MoinMoin In-Reply-To: References: <4377F2C7.806@astraw.com> Message-ID: <69CD1CC2-412C-4EA8-8B42-4533FC9AB66D@ftw.at> Hi all, Can we set up MoinMoin to allow spaces in links? I find its default RunTogetherWords hideously ugly. Wikimedia allows flexible formatting in links to fit the surrounding text. 
An example from this week's current events page on Wikipedia is:

The Sixth Ministerial Conference of the World Trade Organization opened ...

where the first link is to an article called "WTO Ministerial Conference of 2005". How civilized! :) Does MoinMoin allow something like this too, or is this a ghastly LimitationWe'dHaveToPutUpWith?

-- Ed
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From kamrik at gmail.com Sat Dec 17 09:21:50 2005 From: kamrik at gmail.com (Mark Koudritsky) Date: Sat, 17 Dec 2005 16:21:50 +0200 Subject: [SciPy-dev] Links in MoinMoin In-Reply-To: <69CD1CC2-412C-4EA8-8B42-4533FC9AB66D@ftw.at> References: <4377F2C7.806@astraw.com> <69CD1CC2-412C-4EA8-8B42-4533FC9AB66D@ftw.at> Message-ID:

Moin supports spaces in links out of the box, almost in the same way as Wikipedia.

Wikipedia style link: [[World Trade Organization]]
The same link in Moin format: ["World Trade Organization"]

CamelCase is just a kind of syntactic sugar to avoid typing the [" and "], as well as a tradition carried over from the first wiki implementations. Most of the modern wiki implementations have some syntax for free text links that include spaces.

CamelCase makes sense for wikis that are mostly social gathering and discussion places, where external appearance is less important than ease of editing. But I definitely agree that on a site that is a public face of a project one shouldn't use CamelCase.

On 12/17/05, Ed Schofield wrote:
> Hi all,
>
> Can we set up MoinMoin to allow spaces in links? I find its default
> RunTogetherWords hideously ugly. Wikimedia allows flexible formatting in
> links to fit the surrounding text. An example from this week's current
> events page on Wikipedia is:
>
>
>
> The Sixth Ministerial Conference of the World Trade Organization opened ...
>
> where the first link is to an article called "WTO Ministerial Conference of
> 2005". How civilized! :) Does MoinMoin allow something like this too, or
> is this a ghastly LimitationWe'dHaveToPutUpWith?
>
>
> -- Ed
> _______________________________________________
> Scipy-dev mailing list
> Scipy-dev at scipy.net
> http://www.scipy.net/mailman/listinfo/scipy-dev
>
>

From kamrik at gmail.com Sat Dec 17 09:37:41 2005 From: kamrik at gmail.com (Mark Koudritsky) Date: Sat, 17 Dec 2005 16:37:41 +0200 Subject: [SciPy-dev] Links in MoinMoin In-Reply-To: <69CD1CC2-412C-4EA8-8B42-4533FC9AB66D@ftw.at> References: <4377F2C7.806@astraw.com> <69CD1CC2-412C-4EA8-8B42-4533FC9AB66D@ftw.at> Message-ID:

(Continued from my previous message) Forgot to mention, there are also named links in Moin. A construct like this:

  [:World Trade Organization:WTO]

would result in text "WTO" which links to a page named "World Trade Organization". That is, it would create html code like this: <a href="...">WTO</a>

From jh at oobleck.astro.cornell.edu Sat Dec 17 10:42:39 2005 From: jh at oobleck.astro.cornell.edu (Joe Harrington) Date: Sat, 17 Dec 2005 10:42:39 -0500 Subject: [SciPy-dev] Scipy.org In-Reply-To: (scipy-dev-request@scipy.net) References: Message-ID: <200512171542.jBHFgdJC007400@oobleck.astro.cornell.edu>

While I agree with Travis on the state of the web site content (so much out of date that it all bears going over before posting on the new site), the random approach is how we got the mess we have now in the first place. It wasn't Plone that did it, it was us (or the lack of us).
We are about (I hope, I dream, if I were religious I'd pray) to blossom into a popular site, with the majority of visitors not being developers and maybe not being more than budding scientists. The organization that developers or experienced users want will not meet the needs of a high-schooler who is using scipy to do a homework assignment for the first time on her school computer. Organic design works when everyone's goals are in line. Site visitors' goals are divergent, so letting the site "grow" in a pure-wiki way will make finding your way around the site like navigating a rose bush: beauty all around, but too painful to get to. We have some web design experts among us. Let's design a site structure that has room to grow, and slot content into it, rather than letting the site design evolve organically. It makes sense (to me) to have some "start here if you're a ..." pages that present an introduction to the appropriate audience and then index into the main site content. How to present these without slowing down the experienced user or developer is the question. It's not hard to think up successful strategies, but they may depend on a particular underlying organization, so we should do a little thinking before we start. I suggest a two-pronged approach: 1. Set up a staging area for the content itself. It should have a flat organization: items go in its top level for discussion and review before being moved into the main site. Items shouldn't stay here long. Doing this lets conversion start before we have a site structure. 2. While (eager!) content authors are moving their stuff over to the Moin staging area and updating for the new scipy, we all have a discussion of site organization, including input from web experts. Then, set up the structure and move over the content. Meanwhile, keep the current plone site up, with a note at the top saying what's going on (new Scipy, numeric and numarray unified, new site in development, discussion on how to organize it HERE, content conversion HERE, nothing new on this site, thanks for your indulgence). Set up the Moin site as new.scipy.org, and hang the staging area off it as new.scipy.org/staging. Be open about our process but don't stick it in newbies' faces until there's something there to stick. There will be a switchover time, likely before the new site is polished up, but it's not now. To begin with, I've made two lists. The first is a list of potential site user types. The second lists categories of content. 
Website Accessors

  New, inexperienced user
  New user experienced with similar software (IDL, Matlab)
  Existing user looking for
    news
    updates/downloads
      software
        main
        add-on
      docs
    place to report problem/track problem reports
    talk with other users
      seek advice
      give suggestion
      help others
      search for past coverage of issue
    resources on a topic
      links elsewhere
      topical page
  Developer
    all of the above
    talk with other developers
    active sources (SVN)
    web posting
  Potential user
  Media
  Donor
  Advocate
  Potential customer/employer

Website Areas

  "What is"
    summary page
    demos on web
    demos you can run
    links
  Getting Started
    install
    intro docs
    intro demos
    community links
  Download
    packages
    docs
  News
  Community
    mailing lists
    events
    presentations
    user area
    past events
  Topical
  Developers
    bug reports
    SVN
  Media kit
  Donating
  Advocacy kits
  Store
  FAQ
  Docs

--jh--

From pearu at scipy.org Sat Dec 17 11:33:40 2005 From: pearu at scipy.org (Pearu Peterson) Date: Sat, 17 Dec 2005 10:33:40 -0600 (CST) Subject: [SciPy-dev] Moving random.py In-Reply-To: <43A36A45.6080107@ieee.org> References: <43A2B069.9000300@ieee.org> <43A2C1B6.1070502@bigpond.net.au> <43A36202.7090504@colorado.edu> <43A36A45.6080107@ieee.org> Message-ID:

On Fri, 16 Dec 2005, Travis Oliphant wrote:

>> But I tend to agree with R. Kern that the black magic played by scipy's
>> delayed importer tends to (at least it has in the past, though in fairness
>> it's currently pretty much OK) cause problems.
>>
>> I also think that even though scipy starts pretty fast these days, it would
>> still be a good idea to keep its init time to a minimum, especially for
>> software that is run many times with a fresh process (like unit tests, for
>> example).

I agree.

>> I'd suggest having in scipy's top level a single method import_all, so that
>> one could write:
>>
>> import scipy
>> scipy.import_all()
>>
>> and from then on use:
>>
>> scipy.this.foo() + scipy.that.bar()
>>
>>
>
> I like this approach quite a bit. That way coders could choose packages
> as desired. And interactive use could be made easy as well.
>
> What do you think Pearu? You're probably the best one suited to make it
> happen.

Instead of

  import scipy
  scipy.import_all()

we could have

  import scipy.import_all

or similar such as `import scipy.all` etc.

Though having a function import_all() would have the advantage of specifying which packages one wishes to see in the scipy namespace: import_all('linalg','integrate') would import only the scipy.linalg and scipy.integrate packages and their dependencies, if there are any. import_all() would import all scipy packages. Hmm, maybe scipy.import_all should read as scipy.import, and scipy.import() would be scipy.import('all'). However, using the Python keyword `import` as a function name would be a showstopper here, and just import_all does not sound right when it could have optional arguments. Suggestions for better names are welcome.

I have another question: what would be in the scipy namespace when one types

  import scipy

? base, test, basic modules. What else?

Pearu

From strawman at astraw.com Sat Dec 17 13:41:15 2005 From: strawman at astraw.com (Andrew Straw) Date: Sat, 17 Dec 2005 10:41:15 -0800 Subject: [SciPy-dev] Scipy.org In-Reply-To: <200512171542.jBHFgdJC007400@oobleck.astro.cornell.edu> References: <200512171542.jBHFgdJC007400@oobleck.astro.cornell.edu> Message-ID: <43A45BCB.5090606@astraw.com>

Joe -- fantastic ideas. I've taken this opportunity to convert http://astraw.com/scipy (aka http://scipy.astraw.com ) from a kick-Moin's-tires site into the "staging area" suggested.
(If I could set up new.scipy.org, I would...) I've also used Mark K's ideas about using sinorca4moin as the default theme. I haven't set up the site structure "for real" yet (with, for example, what Joe suggests) because this decision will be with us for a long time and I want to make sure we get some consensus on this. (There is currently some site structure in the staging area -- I copied it blindly from the current scipy.org when I set up that site and it should be ignored.) When we decide on the "top-level" pages, I will edit the moin configuration file to have those placed in the navigation strip.

Now, for my personal opinions about Joe's overall layout: I would prefer taking a "portal to the universe of Python software for scientific computing" approach, with scipy_core and full scipy being locally hosted and emphasized players in that universe. As such, I'd prefer an organization more like the following.

Home
  sidebar with brief news summary
News
  (e.g. scipy_core and full scipy nearing end of heavy re-structuring)
Getting started
  simple demos using scipy_core and matplotlib, for example
  pointers to individual packages, both scipy and external
Screenshots
  made with matplotlib and scipy_core and/or full scipy
  made with MayaVi
  made with raw VTK
  benchmarks from 64 CPU 64 bit Itanium system
scipy_core
  About
  Installation
  Intro demos
  Download page with links to source and binary releases
  FAQ
  Documentation
  Info about svn, link to Trac
(full) scipy
  About
  Installation
  Intro demos
  Download page with links to source and binary releases
  FAQ
  Documentation
  Info about svn, link to Trac
Cookbook
  we should think about some organization for code recipes
Download
  point to (or include) scipy_core/Download and scipy/Download

Joe Harrington wrote:

>While I agree with Travis on the state of the web site content (so
>much out of date that it all bears going over before posting on the
>new site), the random approach is how we got the mess we have now in
>the first place.  It wasn't Plone that did it, it was us (or the lack
>of us).
>
>We are about (I hope, I dream, if I were religious I'd pray) to
>blossom into a popular site, with the majority of visitors not being
>developers and maybe not being more than budding scientists.  The
>organization that developers or experienced users want will not meet
>the needs of a high-schooler who is using scipy to do a homework
>assignment for the first time on her school computer.  Organic design
>works when everyone's goals are in line.  Site visitors' goals are
>divergent, so letting the site "grow" in a pure-wiki way will make
>finding your way around the site like navigating a rose bush: beauty
>all around, but too painful to get to.  We have some web design
>experts among us.  Let's design a site structure that has room to
>grow, and slot content into it, rather than letting the site design
>evolve organically.
>
>It makes sense (to me) to have some "start here if you're a ..."
>pages that present an introduction to the appropriate audience and
>then index into the main site content.  How to present these without
>slowing down the experienced user or developer is the question.  It's
>not hard to think up successful strategies, but they may depend on a
>particular underlying organization, so we should do a little thinking
>before we start.
>
>I suggest a two-pronged approach:
>
>1. Set up a staging area for the content itself.  It should have a
>   flat organization: items go in its top level for discussion and
>   review before being moved into the main site.
Items shouldn't stay > here long. Doing this lets conversion start before we have a site > structure. > >2. While (eager!) content authors are moving their stuff over to the > Moin staging area and updating for the new scipy, we all have a > discussion of site organization, including input from web experts. > Then, set up the structure and move over the content. > >Meanwhile, keep the current plone site up, with a note at the top >saying what's going on (new Scipy, numeric and numarray unified, new >site in development, discussion on how to organize it HERE, content >conversion HERE, nothing new on this site, thanks for your >indulgence). Set up the Moin site as new.scipy.org, and hang the >staging area off it as new.scipy.org/staging. Be open about our >process but don't stick it in newbies' faces until there's something >there to stick. There will be a switchover time, likely before the >new site is polished up, but it's not now. > >To begin with, I've made two lists. The first is a list of potential >site user types. The second lists categories of content. > >Website Accessors > >New, inexperienced user >New user experienced with similar software (IDL, Matlab) >Existing user looking for > news > updates/downloads > software > main > add-on > docs > place to report problem/track problem reports > talk with other users > seek advice > give suggestion > help others > search for past coverage of issue > resources on a topic > links elsewhere > topical page >Developer > all of the above > talk with other developers > active sources (SVN) > web posting >Potential user >Media >Donor >Advocate >Potential customer/employer > > > >Website Areas > >"What is" > summary page > demos on web > demos you can run > links >Getting Started > install > intro docs > intro demos > community links >Download > packages > docs >News >Community > mailing lists > events > presentations > user area > past events >Topical >Developers > bug reports > SVN >Media kit >Donating >Advocacy kits >Store >FAQ >Docs > >--jh-- > >_______________________________________________ >Scipy-dev mailing list >Scipy-dev at scipy.net >http://www.scipy.net/mailman/listinfo/scipy-dev > From Fernando.Perez at colorado.edu Sat Dec 17 18:33:10 2005 From: Fernando.Perez at colorado.edu (Fernando Perez) Date: Sat, 17 Dec 2005 16:33:10 -0700 Subject: [SciPy-dev] Moving random.py In-Reply-To: References: <43A2B069.9000300@ieee.org> <43A2C1B6.1070502@bigpond.net.au> <43A36202.7090504@colorado.edu> <43A36A45.6080107@ieee.org> Message-ID: <43A4A036.2050704@colorado.edu> Pearu Peterson wrote: >>What do you think Pearu? You're probably the best one suited to make it >>happen. > > > Instead of > > import scipy > scipy.import_all() > > we could have > > import scipy.import_all > > or similar such as `import scipy.all` etc. > > Though having a function import_all() would have an advantage of > specifying packages one wishes to see in scipy namespace: > import_all('linalg','integrate') would import only scipy.linalg and > scipy.integrate packages and their dependencies if there are any. > > import_all() would import all scipy packages. Hmm, may be scipy.import_all > should read as scipy.import and scipy.import() would be > scipy.import('all'). However, using Python keyword `import` as a function > name would be showstopper here, just import_all does not sound right when > it could have optional arguments. Suggestions for better names are > welcome. 
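A minimal sketch of the function-based loader being proposed here might look like the following (hypothetical code, not the implementation that later became scipy.pkgload; it relies only on the builtin __import__, and the default package list is purely illustrative):

    def import_all(*names):
        """Import the named scipy subpackages into scipy's namespace."""
        import scipy
        if not names:
            names = ('linalg', 'integrate', 'fftpack', 'stats')  # illustrative
        loaded = []
        for name in names:
            # __import__ with a nonempty fromlist returns the submodule itself
            module = __import__('scipy.' + name, {}, {}, [name])
            setattr(scipy, name, module)
            loaded.append(name)
        return tuple(loaded)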
What I don't like about having a module that magically, upon import, rebuilds the scipy namespace, is the silent overloading of the default python semantics for the import statement. It's one more thing to explain: 'this magic module does this and that under the hood just from being imported, ...'. I think that a function approach is clearer and cleaner: its docstring can explicitly specify what it does and doesn't.

Thinking of names, I also think it's better to rename it something other than 'import*', so that we distinguish the fact that it's more than an import statement, but rather a namespace loader. Here's a suggested function header (rough cut):

def mod_load(*args):
    """Load one or more modules into scipy's top-level namespace.

    Usage:

    This function is intended to shorten the need to import many of scipy's
    submodules constantly with statements such as

      import scipy.linalg, scipy.fft, scipy.etc...

    Instead, you can say:

      import scipy
      scipy.mod_load('linalg','fft',...)

    or

      scipy.mod_load('all')

    to load all of them in one call.

    If a name which doesn't exist (except for 'all') in scipy's namespace
    is given, an exception [[WHAT? ImportError, probably?]] is raised.

    Inputs:

      - the names (one or more strings) of all the scipy modules one wishes
        to load into the top-level namespace.

    If the single argument 'all' is given, then all of scipy's subpackages
    are imported.

    Outputs:

      The function returns a tuple with all the names of the modules which
      were actually imported.
    """

And speaking of the 'all' module for interactive use, why can't we just populate the __all__ attribute of scipy's __init__.py without performing any actual imports? With this, a regular

  from scipy import *

will work, as everything listed in __all__ will get imported, but the plain 'import scipy' will still be fast, since __all__ is just a list of strings; it doesn't contain the actual module objects. Granted, with this you can't do

  import scipy
  scipy.linalg.foo()

but that's precisely why we have scipy.mod_load(). So in summary, unless I'm missing something, with:

1. mod_load as indicated above
2. __all__ properly populated

it seems to me we'd have all the desired usage cases covered, no?

Cheers,

f

From oliphant.travis at ieee.org Sun Dec 18 00:52:31 2005 From: oliphant.travis at ieee.org (Travis Oliphant) Date: Sat, 17 Dec 2005 22:52:31 -0700 Subject: [SciPy-dev] [SciPy-user] sequence emulation -/-> array access? In-Reply-To: <877ja3nlkj.fsf@uwo.ca> References: <20051216174520.GA49361@cutter.rexx.com><87bqzfnwh5.fsf@uwo.ca> <43A4B443.7040908@ieee.org> <877ja3nlkj.fsf@uwo.ca> Message-ID: <43A4F91F.2080009@ieee.org>

>>The only confusion, perhaps, is that the array function *is not* a __new__
>>method.  The ndarray.__new__ method has arguments that are slightly
>>different.  There are basically two ways to create an array (as an empty
>>array or from a buffer).  The docstring of ndarray has more information in
>>newer versions: help(ndarray).  Also, look at matrix.py, chararray.py,
>>records.py, and memmap.py for examples of how to subclass.
>>
>>Basically, you need to define the __new__ method (the only thing that's
>>necessary, no __init__ is needed).  Then define __mul__ and __add__ the
>>way you want.
>>
>>from scipy import ndarray
>>
>>class mysub(ndarray):
>>    def __new__(self, *args):
>>        pass
>>        #you need to call ndarray.__new__ in here with one of two sets
>>        #of arguments
>>
>>
>
>I guess what you are saying is that it's not generally useful to
>create an array directly with
>
>    arr = ndarray(...params...)
> >and so if I subclass without overriding the __new__ method, I won't
> >be able to conveniently create objects of my new class with
> >
> >    myob = mysub(...param...)
> >
> >I'm a bit confused by why ndarray is designed in a way that makes this
> >necessary, but I should be able to imitate what is done in matrix.py,
> >etc, to do what I want.  I haven't yet installed scipy core but will
> >hopefully find time to play with it soon.
> >
> >

Well, no, you could actually not override the new method and just use the same creation function for your subclass. It's just that the most common array creation function, "array", is *not* the new method. I suppose this is historical more than anything.

Perhaps we should rethink the new method of the ndarray so that in fact

  ndarray(someobject)

has the same behavior as

  array(someobject)

I'm certainly not opposed to that and have been considering it. Sooner is better than later, of course. Ideas welcome.

-Travis

From arnd.baecker at web.de Mon Dec 19 05:36:03 2005 From: arnd.baecker at web.de (Arnd Baecker) Date: Mon, 19 Dec 2005 11:36:03 +0100 (CET) Subject: [SciPy-dev] patch for scsl support Message-ID:

Hi,

the attached patch allows one to detect the scsl (Scientific Computing Software Library, http://www.sgi.com/products/software/scsl.html ) via system_info.py. Here we have used the OpenMP variant scs_mp. This library allows (among other stuff) one to perform linear algebra operations using several CPUs in parallel. On our machine we have scsl, ATLAS and MKL, so

  export ATLAS=None
  export MKL=None

enforces scsl. Build and install work without problems (with gcc) and almost all tests of scipy.test(10,10) pass, apart from

  ERROR: check_nils (scipy.linalg.matfuncs.test_matfuncs.test_logm)
  FAIL: check_tandg (scipy.special.basic.test_basic.test_cephes)
  FAIL: check_nils (scipy.linalg.matfuncs.test_matfuncs.test_signm)

Full output below. Not sure what is going on with those ... In addition one gets during the test:

****************************************************************
WARNING: clapack module is empty
-----------
See scipy/INSTALL.txt for troubleshooting.
Notes:
* If atlas library is not found by scipy/system_info.py, then scipy uses flapack instead of clapack.
****************************************************************

Questions:
- how crucial are these warnings?
- what is the best way to support both scsl and scsl_mp?

I am not sure if we managed to add everything in the right way, so it would be great if someone more experienced with this (Pearu?!) could have a look at the patch.
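For checking which libraries the build actually resolved, something like the following can be run from Python (a sketch; it assumes the get_info entry point of scipy.distutils.system_info, and 'lapack_opt'/'blas_opt' are the usual names for the optimized-library queries):

    # Ask system_info which optimized LAPACK/BLAS it picked up;
    # an empty dict means the corresponding library was not found.
    from scipy.distutils.system_info import get_info

    print get_info('lapack_opt')
    print get_info('blas_opt')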
Many thanks, Arnd and Jan ====================================================================== ERROR: check_nils (scipy.linalg.matfuncs.test_matfuncs.test_logm) ---------------------------------------------------------------------- Traceback (most recent call last): File "/home/baecker/python2/scipy_lintst_scsl/lib/python2.4/site-packages/scipy/linalg/tests/test_matfuncs.py", line 82, in check_nils logm((identity(7)*3.1+0j)-a) File "/home/baecker/python2//scipy_lintst_scsl/lib/python2.4/site-packages/scipy/linalg/matfuncs.py", line 230, in logm temp = mat(solve(E.T,(E-A).T)) File "/home/baecker/python2//scipy_lintst_scsl/lib/python2.4/site-packages/scipy/linalg/basic.py", line 103, in solve a1, b1 = map(asarray_chkfinite,(a,b)) File "/home/baecker/python2//scipy_lintst_scsl/lib/python2.4/site-packages/scipy/base/function_base.py", line 211, in asarray_chkfinite raise ValueError, "array must not contain infs or NaNs" ValueError: array must not contain infs or NaNs ====================================================================== FAIL: check_tandg (scipy.special.basic.test_basic.test_cephes) ---------------------------------------------------------------------- Traceback (most recent call last): File "/home/baecker/python2/scipy_lintst_scsl/lib/python2.4/site-packages/scipy/special/tests/test_basic.py", line 440, in check_tandg assert_equal(cephes.tandg(45),1.0) File "/home/baecker/python2//scipy_lintst_scsl/lib/python2.4/site-packages/scipy/test/testing.py", line 666, in assert_equal assert desired == actual, msg AssertionError: Items are not equal: DESIRED: 1.0 ACTUAL: 1.0000000000000002 ====================================================================== FAIL: check_nils (scipy.linalg.matfuncs.test_matfuncs.test_signm) ---------------------------------------------------------------------- Traceback (most recent call last): File "/home/baecker/python2/scipy_lintst_scsl/lib/python2.4/site-packages/scipy/linalg/tests/test_matfuncs.py", line 44, in check_nils assert_array_almost_equal(r,cr) File "/home/baecker/python2//scipy_lintst_scsl/lib/python2.4/site-packages/scipy/test/testing.py", line 763, in assert_array_almost_equal assert cond,\ AssertionError: Arrays are not almost equal (mismatch 100.0%): Array 1: [[ 5.5434479e+146 -3.0932899e+146j 1.0704570e+147 +2.4698621e+146j 2.1300115e+146 -1.8113002e+147j 7.0783749e+1... Array 2: [[ 11.9493333 -2.2453333 15.3173333 21.6533333 -2.2453333] [ -3.8426667 0.4986667 -4.5906667 -7.1866667 0.498... -------------- next part -------------- An embedded and charset-unspecified text was scrubbed... Name: svn_diff.diff URL: From arnd.baecker at web.de Mon Dec 19 05:55:20 2005 From: arnd.baecker at web.de (Arnd Baecker) Date: Mon, 19 Dec 2005 11:55:20 +0100 (CET) Subject: [SciPy-dev] fftw2 and fftw3 In-Reply-To: <43A1C6F0.3080103@mit.edu> References: <43A1C6F0.3080103@mit.edu> Message-ID: On Thu, 15 Dec 2005, Matt Fishburn wrote: > Arnd Baecker wrote: > > Hi, > > > > now also the results for the fftw2 vs. fftw3 comparison on the > > Itanium 2 are there. > > > > There is the same problem as on the other machines, for example: > > > ... > > I coded the original FFTW3 patches, but these were my first patches to > OSS and I still don't completely understand scipy's build > process/tradeoff philosophy. If no one else wants to take a look at the > FFTW stuff I guess you're stuck with me. 
:) Being "stuck" with the original coder is not the worst thing to happen ;-) I am sure that Pearu can help with the build aspects (though I don't think that it is really a build issue). Note that with recent changes in svn by Pearu it is possible to build fftpack without installing all of full scipy. This is very convenient for testing!

> >> I am at complete loss of what is going on as I don't understand
> >> fftpack at all.
> >> Could an fft expert have a look or give some guidance on
> >> how to explore this further?
>
> In a nutshell, this is what I think is going on: FFTW plans out the
> transform before execution, pushing off as much of the execution as
> possible into the planning state.  FFTW3 plans have static memory
> locations for plans, whereas FFTW2 plans do not.  This allows FFTW2
> plans to be much more flexible than FFTW3 plans.  I believe the
> rationale for the plan/memory decision has to do with various speedups
> FFTW3 supports, such as SSE, but I'm not positive on this.
>
> This planning difference requires FFTW3 to either cache the entire chunk
> of array memory, copying the array to the plan memory when the transform
> is operating on the array, or come up with a new plan each time a
> transform is performed.  There are tradeoffs between caching/copying
> memory and plan generation time - for small arrays, the FFTW3 code
> should probably cache the memory as copying memory is cheaper than
> coming up with a new plan, while for large arrays (or very large arrays)
> the overhead from copying the memory may be poor compared to the plan
> generation time.

I am not quite sure about this - it seems that planning times can become very large for large arrays...

> Currently, I believe scipy caches the plan and memory for real
> transforms, but memory and plans for complex plans is not cached.  This
> may explain the poor performance of complex FFTs versus real FFTs for
> fftw3,

OK, that would be one important improvement. I am not sure if you have read the mail http://www.scipy.org/mailinglists/mailman?fn=scipy-user/2005-December/006170.html on scipy-user, where Darren Dale discusses similar problems, and some more benchmarks on this are given. In particular, don't miss the pics ;-)

http://www.physik.tu-dresden.de/~baecker/tmp/fftw/test_AB.png
http://www.physik.tu-dresden.de/~baecker/tmp/fftw/test_DD.png

So I really think that the complex vs. real difference is the most important thing.

> but would not explain why fftw3 is slower than fftw2.  I think
> the fftw3 versus fftw2 issue stems from the planning difference.

OK, that might be the next step to tackle ...

> > This clearly shows that
> > fftw3 is faster than fftw2 (in particular for "i" - inplace situation)
> > (and not at all a factor of 10 slower!).
>
> Keep in mind that any fftw3 vs fftw2 data you get off of the fftw
> website

Well, these don't help too much anyway, because the benchmarks are for different machines. That's why one has to use benchfft - or, with fftw3, the `tests/bench` program

> may dance around the planning difference.  mflop data may be
> only for the actual transform, not including plan generation, so it
> would be possible for only a single transform (not including plan
> generation or memory copying) using fftw3 to appear faster.
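One crude way to separate plan-generation cost from steady-state transform cost from the Python side is a timing loop like the following (a sketch; it assumes scipy.fftpack is the FFTW-backed interface, that any planning happens on the first call, and that scipy.rand is available in the top-level namespace):

    import time
    import scipy
    import scipy.fftpack

    # complex input, same length as the icf8192 benchmark below
    x = scipy.rand(8192) + 1j*scipy.rand(8192)

    t0 = time.time()
    scipy.fftpack.fft(x)              # first call: plan generation + transform
    print 'first call  :', time.time() - t0

    t0 = time.time()
    for i in range(100):
        scipy.fftpack.fft(x)          # repeated calls: cached plan, if any
    print 'steady state:', (time.time() - t0)/100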
Some hints on this may be gotten by

  cd fftw-3.0.1/tests
  ./bench -s icf8192
  Problem: icf8192, setup: 476.52 ms, time: 202.25 us, ``mflops'': 2632.8
  ./bench -opatient -s icf8192
  Problem: icf8192, setup: 4.29 s, time: 198.00 us, ``mflops'': 2689.3
  ./bench -oexhaustive -s icf8192
  Problem: icf8192, setup: 28.92 s, time: 197.00 us, ``mflops'': 2702.9

(different machine than for the plots at www.ph..., Itanium 2):

> As for scipy, I don't remember what scipy caches for fftw3 versus fftw2.
> My theories are guesses from what I remember about the code, which I
> haven't seen for a few months, but I'll try to take a look at it this
> weekend or next weekend.

That would be fantastic!!

> I do have final exams for college, though, and
> may not be able to get to this until Christmas break on the 21st.

In the long run it would be nice if we could get support for all the FFTs from ACML, IMKL, scsl, ... Looking at the code for fftpack, this would mean adding many more #if's to src/fftpack.h and src/zfft.c - not sure if that will become unreadable ... A colleague of mine just told me that the IMKL FFT has an interface which is pretty similar to the fftw2 one, so this might be a lower hanging fruit ...

Many thanks, looking forward to your findings!!

Best, Arnd

From chanley at stsci.edu Mon Dec 19 11:37:09 2005 From: chanley at stsci.edu (Christopher Hanley) Date: Mon, 19 Dec 2005 11:37:09 -0500 Subject: [SciPy-dev] arrayobject.c modification for Solaris support Message-ID: <43A6E1B5.8040305@stsci.edu>

Hi Travis,

I moved a couple of C++ like variable declarations in arrayobject.c so that I could build scipy_core with the native Solaris compilers. All of my regression tests and scipy_core unittests now pass.

Chris

From strawman at astraw.com Mon Dec 19 12:24:02 2005 From: strawman at astraw.com (Andrew Straw) Date: Mon, 19 Dec 2005 09:24:02 -0800 Subject: [SciPy-dev] Scipy.org In-Reply-To: <43A45BCB.5090606@astraw.com> References: <200512171542.jBHFgdJC007400@oobleck.astro.cornell.edu> <43A45BCB.5090606@astraw.com> Message-ID: <43A6ECB2.1000900@astraw.com>

Good news -- the "official" test site is online: http://new.scipy.org/Wiki

I'm hoping Enthought can install a couple addons to greatly improve the look and feel of the site, but we can start adding content immediately. So, let's hear what people think about the site layout ideas proposed by Joe Harrington and myself.

Cheers!
Andrew

From jh at oobleck.astro.cornell.edu Mon Dec 19 14:15:56 2005 From: jh at oobleck.astro.cornell.edu (Joe Harrington) Date: Mon, 19 Dec 2005 14:15:56 -0500 Subject: [SciPy-dev] Scipy.org In-Reply-To: <43A6ECB2.1000900@astraw.com> (message from Andrew Straw on Mon, 19 Dec 2005 09:24:02 -0800) References: <200512171542.jBHFgdJC007400@oobleck.astro.cornell.edu> <43A45BCB.5090606@astraw.com> <43A6ECB2.1000900@astraw.com> Message-ID: <200512191915.jBJJFunh027898@oobleck.astro.cornell.edu>

Thanks for setting up the new site so quickly! But, I think you've missed my point. http://new.scipy.org/Wiki/KickTheTires:

  Joe Harrington proposed an overall layout like:
  "What is"
    summary page
    demos on web
    demos you can run
    links
  Getting Started
    install
    intro docs
    intro demos
    community links
  ....

That was not a layout proposal! It's just an unordered (and incomplete) list of content types. Much of my concern comes from the fact that our engineering mindset likes to equate the two, but a website is more than an index.
I'd much rather see a front page that is visually friendly, is not too cluttered, and is easily navigable by newbies, yet has sufficient navigation tools to take experienced users quickly to the "real stuff". Obviously, there are choices to be made here; some may be hard. My point is that it's worth a *DESIGN EFFORT* to get it right. Then, the interior site has the traditional tabs-and-sidebars look, and sports a lot of info at once, but still has obvious "GO HERE FIRST" signs for newbies.

I think the next thing is for interested parties to post here some text descriptions of front page concepts. Alternatively, hang a faux front page that you made off the main page of the wiki, with a link like FredSmithsFrontPage.

Here's my (current) idea, very subject to change: It should have an attractive logo/banner, larger than the version that will appear on top of the pages in the site. It should have an attractive graphic made with the package (later, we can put several of these in a random rotation, so that on each visit you get a different one). Size of the image matters here: physically large but quick to download is nice. It should have a short, non-jargony, non-boasting summary paragraph. It has to be relatively narrow, so the right-side material isn't off the edge. It should be on the left to be read first. It should have a moderate number of links for newbies by type of visitor (see previously posted list). These should be down the right side, in a column. We should eventually collect hit statistics and promote the more-visited ones to the top of the list, and replace underused ones. Below that should be some news. Below that should be a content listing (the key interior navigation headers) on the right side. No "notices" junk at the bottom.

For example, a crude text version, all parts up for discussion:

----------------------------------------------------------------------
[big logo:  S C I P Y]
Deutsch, Francais, Espagnol, $()#(%, [more]

Scipy is open-source software for          Introductions and
mathematics, science, and engineering.     demos for new users:
It runs on all popular operating           Researcher/College student
systems, is quick to install, and is       Teacher/Professor
free of charge.  Scipy is easy enough      K-12 Student
that a 12-year-old can use it to plot      Media
up her math homework, but complete         [more]
enough that it is depended upon by some
of the world's forefront scientists and    Latest news:
engineers.  If you need to manipulate      Scipy 2.0 released
numbers on a computer and display the
results, click into the demos and give     Quick Navigation:
Scipy a try!                               Download
                                           Documentation
[splufty graphic]                          Community
                                           [more]
                                           Search [box]
-----------------------------------------------------------------------

The background is black, with the area under the paragraph containing the graphic in bold color on black background. Maybe if you click on the graphic, you get another graphic. Leave that as an Easter egg for the enterprising to find.

Clicking any link on this page takes you to the appropriate page of the traditional tabs-and-sidebars site inside. The demos are tailored to the audience. Each of the demos listed here is a tab under Demos, inside. Likewise, clicking on "news" takes you to the interior news area; Download, Documentation, and Community likewise. Not all interior tabs should be here, just the main ones.

All this should look clean and professional, without looking too "produced".
Language should be English, readable by a 12-year-old (both to accommodate 12-year-olds and to be nice to non-native-English speakers). Once we have translations, we can add them in a list under the banner, in small type, maybe with flags.

Inside, the main page has a description that gives you the background you need to know to use the site. For example:

Scipy is an extension to the popular [Python] language. Its foundation is [scipy_core], which is a small package that gives Python its basic scientific functions and the ability to manipulate arrays of numbers. Scipy_core can be used alone to do many tasks, or as part of full Scipy, which includes some popular application packages. Scipy_core, full Scipy, and their support community are hosted here. Many additional packages that use Scipy are listed, with descriptions and links, under the "Add-ons" heading. The "Introduction" heading has instructions for downloading and installing the right binary package for your computer. Once you have installed Scipy, check out the "Getting Started" area. If you have problems, please ask a question in our community's "Just Starting" forum.

Andrew's already onto the internal site structure. I think we need to make the point that scipy_core can be run separately from full scipy, but I don't think it's beneficial to have two download pages, two FAQs, two doc pages, and so on. It will make for a lot of rereading and clicking back and forth for new users. For the remaining tabs, why not combine Andrew's list and mine, and try to subset a few into one tab. For example, What Is, Getting Started, and Screenshots go under Introduction; Cookbook goes under Docs (maybe call it "Help"?). The listing of additional software (which shouldn't be called "Topical" anymore) should be a top-level navigation item.

I'm very interested to hear from our web design experts ... Janet? Sara? Others?

--jh--

From chris at pseudogreen.org Mon Dec 19 16:08:03 2005 From: chris at pseudogreen.org (Christopher Stawarz) Date: Mon, 19 Dec 2005 16:08:03 -0500 Subject: [SciPy-dev] empty_like/zeros_like don't preserve Fortran-ness Message-ID:

Hi,

Just a small bug to report: When given a Fortran-contiguous array as input, the empty_like and zeros_like functions return a C-contiguous array:

>>> a = array([[1,2,3],[4,5,6]], fortran=True)
>>> isfortran(a)
True
>>> b = empty_like(a)
>>> isfortran(b)
False
>>> c = zeros_like(a)
>>> isfortran(c)
False

I tested this on scipy_core 0.8.4, but I see the same problem in the code in SVN. The problem is that both functions first call asanyarray with default arguments, which includes fortran=False. I'm not sure what the right way to fix this is, but it does seem like asarray and asanyarray should preserve the memory layout of existing arrays.

Thanks,
Chris

From oliphant at ee.byu.edu Tue Dec 20 00:12:13 2005 From: oliphant at ee.byu.edu (Travis Oliphant) Date: Mon, 19 Dec 2005 22:12:13 -0700 Subject: [SciPy-dev] return type issue? In-Reply-To: <9b4d0d9f6c7864b48218df2b95e01109@stsci.edu> References: <20051214180007.CGJ81571@comet.stsci.edu> <9b4d0d9f6c7864b48218df2b95e01109@stsci.edu> Message-ID: <43A792AD.3060003@ee.byu.edu>

Perry Greenfield wrote:

> I'd argue that most people would want to see the field array as the
> type that it is rather than another instance of a recarray.  Most of
> the "fun" of doing this is to have something that can be manipulated
> just like the numeric array that it is (i.e., ufuncs work on it, etc.)
> I'd also argue that it is more pythonic in the sense that the field
> indexing is akin to indexing a list or dictionary.  When you do that
> you get the type it contains, not another list or dictionary.  If I
> were to select several fields, then yes, I would expect to get another
> record array, but not if I select only 1.  We can override this with
> another subclass, but I wonder if this isn't so common a use case as
> to make it automatic whenever the type of the field is a standard
> array type.

I can see this point. You do realize, however, that a recarray still has a type (and if that type is a fixed number type then the ufuncs work). A recarray is a subclass of the ndarray and so it acts like an ndarray in almost every respect. The only difference is that attribute access can be used to get at fields and the __new__ method is a bit different.

So, the question is when should field selection (which can be done on all arrays) return the base-type or the sub-type. I hesitate to enforce returning the base-type on all ndarrays because it seems limiting. But it could easily be done simply by creating a base-type ndarray in the getfield method if the descriptor has no fields.

I guess I'd like to see some real problems that emerge before changing the default behavior by inserting "special-case" code. If we do want to place the special-case code to return ndarray's, or chararrays, or more recarray's, I guess I would put it in the recarray subclass itself.

-Travis

From pearu at scipy.org Tue Dec 20 06:27:28 2005 From: pearu at scipy.org (Pearu Peterson) Date: Tue, 20 Dec 2005 05:27:28 -0600 (CST) Subject: [SciPy-dev] scipy.pkgload (was Re: Moving random.py) In-Reply-To: <43A4A036.2050704@colorado.edu> References: <43A2B069.9000300@ieee.org> <43A2C1B6.1070502@bigpond.net.au> <43A36202.7090504@colorado.edu> <43A36A45.6080107@ieee.org> <43A4A036.2050704@colorado.edu> Message-ID:

Hi,

I have implemented hooks for scipy.pkgload(..) following the idea of Fernando's scipy.mod_load(..) function with a few modifications. Its doc string is given below. I have renamed mod_load to pkgload because we have packages. In addition to importing the packages into the scipy namespace, scipy.pkgload also fills the scipy.__all__ list. Hooks for filling scipy.__doc__ remain to be implemented; that's not difficult though.

Currently scipy.pkgload is a bit more verbose than `import scipy` used to be, but all the messages are sent to sys.stderr, and some of these messages indicate that other parts of scipy need to be fixed, such as overwriting the scipy.test module with the ScipyTest instance method etc.

Also, `import scipy` still does the full import, but after scipy.pkgload is stabilized, the following will happen in the scipy namespace when importing scipy (according to current info.py files):

  import test
  from test import ScipyTest
  import base
  from base import *
  import basic
  from basic import fft, ifft, rand, randn, linalg, fftpack, random

I would be interested to know which of these import statements are important to people and which can be removed. For example, what would be the recommended way to access array facilities:

  from scipy.base import *

or

  from scipy import *

? If the former, then we could remove the `from base import *` statement.

To import full scipy, one needs to execute

  import scipy
  scipy.pkgload()

Regards,
Pearu

Type:             instance
Base Class:       scipy.PackageLoader
String Form:
Namespace:        Interactive
Constructor Docstring:
    Manages loading SciPy packages.
Callable:         Yes
Call def:         scipy.pkgload(self, *packages, **options)
Call docstring:
    Load one or more packages into scipy's top-level namespace.

    Usage:

    This function is intended to shorten the need to import many of
    scipy's subpackages constantly with statements such as

      import scipy.linalg, scipy.fft, scipy.etc...

    Instead, you can say:

      import scipy
      scipy.pkgload('linalg','fft',...)

    or

      scipy.pkgload()

    to load all of them in one call.

    If a name which doesn't exist in scipy's namespace is given, an
    exception [[WHAT? ImportError, probably?]] is raised. [NotImplemented]

    Inputs:

      - the names (one or more strings) of all the scipy modules one
        wishes to load into the top-level namespace.

    Optional keyword inputs:

      - verbose - integer specifying verbosity level [default: 0].
      - force - when True, force reloading loaded packages [default: False].

    If no input arguments are given, then all of scipy's subpackages are
    imported.

    Outputs:

      The function returns a tuple with all the names of the modules
      which were actually imported. [NotImplemented]

EOM

From nwagner at mecha.uni-stuttgart.de Tue Dec 20 07:57:25 2005 From: nwagner at mecha.uni-stuttgart.de (Nils Wagner) Date: Tue, 20 Dec 2005 13:57:25 +0100 Subject: [SciPy-dev] Type handling of matrices Message-ID: <43A7FFB5.509@mecha.uni-stuttgart.de>

Hi all,

Is there something like isspd in scipy -- that is, a test for real symmetric positive definite matrices? A Cholesky factorization can be used for this task: if it exists, the matrix is spd.

Nils

I found:

Type handling
==============
iscomplexobj     -- Test for complex object, scalar result
isrealobj        -- Test for real object, scalar result
iscomplex        -- Test for complex elements, array result
isreal           -- Test for real elements, array result
imag             -- Imaginary part
real             -- Real part
real_if_close    -- Turns complex number with tiny imaginary part to real
isneginf         -- Tests for negative infinity ---|
isposinf         -- Tests for positive infinity    |
isnan            -- Tests for nans                 |---- array results
isinf            -- Tests for infinity             |
isfinite         -- Tests for finite numbers    ---|
isscalar         -- True if argument is a scalar
nan_to_num       -- Replaces NaN's with 0 and infinities with large numbers
cast             -- Dictionary of functions to force cast to each type
common_type      -- Determine the 'minimum common type code' for a group of arrays
mintypecode      -- Return minimal allowed common typecode.

From nwagner at mecha.uni-stuttgart.de Tue Dec 20 10:12:19 2005 From: nwagner at mecha.uni-stuttgart.de (Nils Wagner) Date: Tue, 20 Dec 2005 16:12:19 +0100 Subject: [SciPy-dev] _configtest.c:5: undefined reference to `exp' Message-ID: <43A81F53.8020005@mecha.uni-stuttgart.de>

Hi all,

I have the following problem on a 64 bit machine and '0.8.6.1687'. python setup.py install results in

removing: _configtest.c _configtest.o _configtest
gcc options: '-pthread -fno-strict-aliasing -DNDEBUG -O2 -fmessage-length=0 -Wall -g -fPIC'
compile options: '-Iscipy/base/src -I/usr/include/python2.4 -c'
gcc: _configtest.c
gcc -pthread _configtest.o -o _configtest
_configtest.o(.text+0xd): In function `main':
/usr/local/svn/core/_configtest.c:5: undefined reference to `exp'
collect2: ld returned 1 exit status
_configtest.o(.text+0xd): In function `main':
/usr/local/svn/core/_configtest.c:5: undefined reference to `exp'
collect2: ld returned 1 exit status
failure.
Nils

From schofield at ftw.at Tue Dec 20 10:39:05 2005 From: schofield at ftw.at (Ed Schofield) Date: Tue, 20 Dec 2005 15:39:05 +0000 Subject: [SciPy-dev] scipy.pkgload (was Re: Moving random.py) In-Reply-To: References: <43A2B069.9000300@ieee.org> <43A2C1B6.1070502@bigpond.net.au> <43A36202.7090504@colorado.edu> <43A36A45.6080107@ieee.org> <43A4A036.2050704@colorado.edu> Message-ID: <43A82599.8030200@ftw.at>

Pearu Peterson wrote:

>I have implemented hooks for scipy.pkgload(..) following the idea of
>Fernando's scipy.mod_load(..) functions with few modifications. It's doc
>string is given below.
>
>

Well done!

>... after scipy.pkgload is stabilized,
>importing scipy the following will happen in scipy name space (according
>to current info.py files):
>
>   import test
>   from test import ScipyTest
>   import base
>   from base import *
>   import basic
>   from basic import fft, ifft, rand, randn, linalg, fftpack, random
>
>I would be interested to know which of these import statements are
>important to people and which can be removed.
>

Am I right in thinking that fft, ifft, rand, and randn are functions, while linalg, fftpack, and random are modules? If so, I'd vote for importing the functions but leaving the modules for an explicit pkgload() command.

> For example, what would be
>the recommended way to access array facilities:
>
>   from scipy.base import *
>
>or
>
>   from scipy import *
>
>? If the former, then we could remove `from base import *` statement.
>
>

I think the latter is friendlier ;)

-- Ed

From loredo at astro.cornell.edu Tue Dec 20 12:27:31 2005 From: loredo at astro.cornell.edu (Tom Loredo) Date: Tue, 20 Dec 2005 12:27:31 -0500 Subject: [SciPy-dev] Difficulties on OS X In-Reply-To: References: Message-ID: <1135099651.43a83f03b0c7a@astrosun2.astro.cornell.edu>

Hi folks,

I'm trying to use the current CVS version of scipy with the latest release of scipy_core on OS 10.3.9 with Python 2.4. It appears that the installation is not using Apple's ATLAS equivalent---e.g., it builds lapack_lite, and scipy.test() says:

****************************************************************
WARNING: cblas module is empty
-----------
See scipy/INSTALL.txt for troubleshooting.
Notes:
* If atlas library is not found by scipy/system_info.py, then scipy uses fblas instead of cblas.
****************************************************************

****************************************************************
WARNING: clapack module is empty
-----------
See scipy/INSTALL.txt for troubleshooting.
Notes:
* If atlas library is not found by scipy/system_info.py, then scipy uses flapack instead of clapack.
****************************************************************

There is also this strange behavior, where it complains that fftpack isn't there, yet one can use it:

>>> import scipy
Failed to import fftpack
No module named fftpack
Failed to import signal
No module named fftpack
>>> scipy.fftpack.fft([0., 1., 0., -1., 0.])
array([ 0.        +0.j        ,  1.11803399-1.53884177j,
       -1.11803399+0.36327126j, -1.11803399-0.36327126j,
        1.11803399+1.53884177j])
>>> import scipy.fftpack
Traceback (most recent call last):
  File "", line 1, in ?
  File "/Library/Frameworks/Python.framework/Versions/2.4/lib/python2.4/site-packages/scipy/fftpack/__init__.py", line 10, in ?
    from scipy.basic.fftpack import fftshift, ifftshift, fftfreq
ImportError: No module named fftpack
>>> scipy.fftpack
>>> scipy.signal
Traceback (most recent call last):
  File "", line 1, in ?
AttributeError: 'module' object has no attribute 'signal'
>>> import scipy.signal
Traceback (most recent call last):
  File "", line 1, in ?
  File "/Library/Frameworks/Python.framework/Versions/2.4/lib/python2.4/site-packages/scipy/signal/__init__.py", line 12, in ?
    from signaltools import *
  File "/Library/Frameworks/Python.framework/Versions/2.4/lib/python2.4/site-packages/scipy/signal/signaltools.py", line 9, in ?
    from scipy.fftpack import fft, ifft, ifftshift, fft2, ifft2
  File "/Library/Frameworks/Python.framework/Versions/2.4/lib/python2.4/site-packages/scipy/fftpack/__init__.py", line 10, in ?
    from scipy.basic.fftpack import fftshift, ifftshift, fftfreq
ImportError: No module named fftpack

I don't know how this might affect other things. E.g., when I build matplotlib, it begins by importing scipy so these complaints appear; I don't know mpl well enough to know if the build is affected by the "failed" imports.

There is also this failed test:

FAIL: check_round (scipy.special.basic.test_basic.test_round)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/Library/Frameworks/Python.framework/Versions/2.4/lib/python2.4/site-packages/scipy/special/tests/test_basic.py", line 1793, in check_round
    assert_array_equal(rnd,rndrl)
  File "/Library/Frameworks/Python.framework/Versions/2.4/lib/python2.4/site-packages/scipy/test/testing.py", line 733, in assert_array_equal
    assert cond,\
AssertionError:
Arrays are not equal (mismatch 25.0%):
        Array 1: [10 10 10 11]
        Array 2: [10 10 11 11]

Any ideas on fixes would be greatly appreciated. I'll be teaching parts of scipy to a class in January and I need to have a solid distribution for them to install on their laptops soon.

-Tom

------------------------------------------------- This mail sent through IMP: http://horde.org/imp/

From loredo at astro.cornell.edu Tue Dec 20 16:12:05 2005 From: loredo at astro.cornell.edu (Tom Loredo) Date: Tue, 20 Dec 2005 16:12:05 -0500 (EST) Subject: [SciPy-dev] fftpack issues - not just OS X Message-ID: <200512202112.jBKLC5S19727@laplace.astro.cornell.edu>

Hi again,

Now I'm at work on my FC3 box, and just updated to the last scipy_core release and the current SVN version of scipy. I have the same fftpack weirdness I just reported on OS X, so apparently it isn't an OS X issue. Fedora folks, am I the only one seeing this?

-Tom

From loredo at astro.cornell.edu Tue Dec 20 16:36:09 2005 From: loredo at astro.cornell.edu (Tom Loredo) Date: Tue, 20 Dec 2005 16:36:09 -0500 (EST) Subject: [SciPy-dev] fftpack issues - not just OS X Message-ID: <200512202136.jBKLa9W19737@laplace.astro.cornell.edu>

Paul wrote:

> FWIW, this checkout from December 9 is working fine for me with FC3:

My previous build using a checkout from CVS around Dec 5 also did not have this problem. It's something someone has changed in the last week or two.

-Tom

From josh.p.marshall at gmail.com Tue Dec 20 22:22:55 2005 From: josh.p.marshall at gmail.com (Josh Marshall) Date: Wed, 21 Dec 2005 14:22:55 +1100 Subject: [SciPy-dev] Issues "freezing" scipy using py2app Message-ID:

I have an application that is using PyObjC, matplotlib and scipy to perform some image classification and feature extraction. I am now attempting to use py2app to bundle it up into a single OS X .app.

This has led to a few questions regarding the architecture of scipy, since there are a few problems I have had to work around.

1) The first concerns __init__.py and _import_tools.py.
In get_info_modules() from _import_tools, the presence of info.py files is required. However, when distributing a condensed version, these files are not included while the compiled info.pyc is. I have made a few alterations to _import_tools.py, such that if the info.py-s don't exist, then the .pyc files are tried instead. What I have done is the following:

+        use_pyc = False
         if packages is None:
             info_files = glob(os.path.join(self.parent_path,'*','info.py'))
+            info_files = [f for f in info_files if os.path.exists(f)]
+            if not info_files:
+                use_pyc = True
+                info_files = glob(os.path.join(self.parent_path,'*','info.pyc'))

and then, when we try actually importing them using load_module:

         for info_file in info_files:
             package_name = os.path.basename(os.path.dirname(info_file))
             fullname = self.parent_name +'.'+ package_name

+            if use_pyc:
+                filedescriptor = ('.pyc','rb',2)
+            else:
+                filedescriptor = ('.py','U',1)

             try:
                 info_module = imp.load_module(fullname+'.info',
                                               open(info_file,filedescriptor[1]),
                                               info_file,
                                               filedescriptor)

There must be a simpler way, such as using find_module, but I couldn't get this to work since it requires the parent package to be imported before the subpackages.

2) The second issue was a problem involving get_config_vars being called from ppimport.py. distutils then requires the presence of the Python config file, which doesn't exist in the bundle. This was easily fixed by adding

     if sys.platform[:5]=='linux':
         so_ext = '.so'
+    elif sys.platform=='darwin':
+        so_ext = '.so'

I'm not sure if this is correct, since there can also be .dylibs, but on my system at least all the Python C extensions are .so-s.

3) Finally, I can't distribute my .app as-is, after being generated by py2app. py2app operates by all needed modules from the system site-packages being collected into a "site-packages.zip" within the application bundle. At runtime, some magic with import causes everything to work, and this is where scipy falls down. When trying to import the subpackages using the _import_tools.py magic, it cannot find them within the site-packages.zip. I can fix this by distributing the .app with the site-packages.zip expanded into a directory. However, I was wondering if anyone who knows the import magic better knows if we can fix this.

For reference, I am using current SVN scipy_core and scipy, current CVS (0.85.1) matplotlib, over Python 2.4.1 on OS X 10.4.3. I have a pretty nifty MatplotlibView which inherits from NSImageView, using some code from the CocoaAgg backend. (It only relies on the Agg backend.) Works well for embedding matplotlib into a standalone application, if anyone is interested in it.

Thanks for the great work everyone's putting into scipy and matplotlib. It's completely replaced Matlab for me, and I have shown it can be used both for the prototyping stage and the final application.

Cheers,
Josh

From oliphant.travis at ieee.org Wed Dec 21 01:30:50 2005 From: oliphant.travis at ieee.org (Travis Oliphant) Date: Tue, 20 Dec 2005 23:30:50 -0700 Subject: [SciPy-dev] fftpack issues - not just OS X In-Reply-To: <200512202136.jBKLa9W19737@laplace.astro.cornell.edu> References: <200512202136.jBKLa9W19737@laplace.astro.cornell.edu> Message-ID: <43A8F69A.7080001@ee.byu.edu>

Tom Loredo wrote:

>Paul wrote:
>
>>FWIW, this checkout from December 9 is working fine for me with FC3:
>
>My previous build using a checkout from CVS around Dec 5 also did not
>have this problem.  It's something someone has changed in the
>last week or two.
This sounds like issues with the new package loader scheme that is being introduced. This needs to be fixed urgently.

-Travis

From pearu at scipy.org Wed Dec 21 00:35:50 2005 From: pearu at scipy.org (Pearu Peterson) Date: Tue, 20 Dec 2005 23:35:50 -0600 (CST) Subject: [SciPy-dev] fftpack issues - not just OS X In-Reply-To: <43A8F69A.7080001@ee.byu.edu> References: <200512202136.jBKLa9W19737@laplace.astro.cornell.edu> <43A8F69A.7080001@ee.byu.edu> Message-ID:

On Tue, 20 Dec 2005, Travis Oliphant wrote:

> Tom Loredo wrote:
>
>> Paul wrote:
>>
>>> FWIW, this checkout from December 9 is working fine for me with FC3:
>>
>> My previous build using a checkout from CVS around Dec 5 also did not
>> have this problem.  It's something someone has changed in the
>> last week or two.
>
> This sounds like issues with the new package loader scheme that is being
> introduced.  This needs to be fixed urgently.

I'm not sure on that. Messages like

>>> import scipy
Failed to import fftpack
No module named fftpack
Failed to import signal
No module named fftpack

[that Tom Loredo reported] would not appear in the new package loader scheme.

Pearu

From loredo at astro.cornell.edu Wed Dec 21 02:23:18 2005 From: loredo at astro.cornell.edu (Tom Loredo) Date: Wed, 21 Dec 2005 02:23:18 -0500 Subject: [SciPy-dev] fftpack issues - not just OS X In-Reply-To: References: Message-ID: <1135149798.43a902e6ce812@astrosun2.astro.cornell.edu>

> This sounds like issues with the new package loader scheme that is being
> introduced. This needs to be fixed urgently.

I don't know if this is related, but it seems undesirable to me:

>>> from scipy.stats import norm
Failed to import fftpack
No module named fftpack
Failed to import signal
No module named fftpack

[i.e., importing "norm" this way works as expected; but...]

>>> from scipy.random import normal
Traceback (most recent call last):
  File "", line 1, in ?
ImportError: No module named random
>>> import scipy.stats as stats
>>> stats
>>> import scipy.random as random
Traceback (most recent call last):
  File "", line 1, in ?
ImportError: No module named random

But random is certainly there:

>>> from scipy import *
>>> random
>>> random.normal
>>> from scipy import random
>>>

-Tom

------------------------------------------------- This mail sent through IMP: http://horde.org/imp/

From pearu at scipy.org Wed Dec 21 01:29:09 2005 From: pearu at scipy.org (Pearu Peterson) Date: Wed, 21 Dec 2005 00:29:09 -0600 (CST) Subject: [SciPy-dev] Issues "freezing" scipy using py2app In-Reply-To: References: Message-ID:

On Wed, 21 Dec 2005, Josh Marshall wrote:

> I have an application that is using PyObjC, matplotlib and scipy to
> perform some image classification and feature extraction. I am now
> attempting to use py2app to bundle it up into a single OS X .app.
>
> This has led to a few questions regarding the architecture of scipy,
> since there are a few problems I have had to work around.
>
> 1) The first concerns __init__.py and _import_tools.py. In
> get_info_modules() from _import_tools, the presence of info.py files
> is required. However, when distributing a condensed version, these
> files are not included while the compiled info.pyc is. I have made a
> few alterations to _import_tools.py, such that if the info.py-s don't
> exist, then the .pyc files are tried instead.
What I have done is the > following: > > + use_pyc = False > if packages is None: > info_files = glob(os.path.join > (self.parent_path,'*','info.py')) > + info_files = [f for f in info_files if os.path.exists(f)] > + if not info_files: > + use_pyc =True > + info_files = glob(os.path.join > (self.parent_path,'*','info.pyc')) > > and then, when we try actually importing them using load_module: > > for info_file in info_files: > package_name = os.path.basename(os.path.dirname(info_file)) > fullname = self.parent_name +'.'+ package_name > > if use_pyc: > filedescriptor = ('.pyc','rb',2) > else: > filedescriptor = ('.py','U',1) > > try: > info_module = imp.load_module(fullname+'.info', > open > (info_file,filedescriptor[1]), > info_file, > filedescriptor) Thanks for the patch. I have applied it to new pkgload feature. scipy does not use _import_tools.py anymore. > There must be a simpler way, such as using find_module, but I > couldn't get this to work since it requires the parent package to be > imported before the subpackages. info.py files should be regarded as information files about packages, they are python modules only for convinience. And using imp.load_module is necessary to prevent importing the whole package, at the import time we need only the information from the info.py file. > 2) The second issue was a problem involving get_config_vars being > called from ppimport.py. distutils then requires the presence of the > Python config file, which doesn't exist in the bundle. > > This was easily fixed by adding > > if sys.platform[:5]=='linux': > so_ext = '.so' > + elif sys.platform=='darwin': > + so_ext = '.so' Hmm, what exception do you get when Python config file is not present? ppimport._get_so_ext uses so_ext='.so' when ImportError appears. I think the proper fix would be to catch proper exception. > 3) Finally, I can't distribute my .app as-is, after being generated > by py2app. py2app operates by all needed modules from the system site- > packages being collected into a "site-packages.zip" within the > application bundle. At runtime, some magic with import causes > everything to work, and this is where scipy falls down. When trying > to import the subpackages using the _import_utils.py magic, it cannot > find them within the site-packages.zip. I can fix this by > distributing the .app with the site-packages.zip expanded into a > directory. However, I was wondering if anyone who knows the import > magic better knows if we can fix this. I'd like to learn more about the magic that Python does with .zip files. Are there similar issues when using eggs? 
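For concreteness, here is a small self-contained experiment showing
both sides of that magic (the archive and package names below are made
up for illustration): modules import fine from a zip placed on
sys.path, but filesystem scans such as the info.py glob above cannot
see into the archive, which looks like exactly where the package
loading falls down.

    import glob
    import os
    import sys
    import zipfile

    # Build a throwaway archive containing a single package.
    zf = zipfile.ZipFile('demo.zip', 'w')
    zf.writestr('demo_pkg/__init__.py', 'x = 42\n')
    zf.close()

    # Importing from the zip works once it is on sys.path...
    sys.path.insert(0, 'demo.zip')
    import demo_pkg
    assert demo_pkg.x == 42

    # ...but glob() does not look inside the archive, so a scan for
    # per-package info.py files comes back empty:
    assert glob.glob(os.path.join('demo.zip', '*', 'info.py')) == []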
Are there similar issues when using eggs?

Pearu

From nwagner at mecha.uni-stuttgart.de  Wed Dec 21 02:33:13 2005
From: nwagner at mecha.uni-stuttgart.de (Nils Wagner)
Date: Wed, 21 Dec 2005 08:33:13 +0100
Subject: [SciPy-dev] Problems with latest svn
Message-ID: <43A90539.9050800@mecha.uni-stuttgart.de>

>>> import scipy
[Errno 2] No such file or directory:
'/usr/local/lib/python2.4/site-packages/scipy/scipy/linalg/info.py'
[Errno 2] No such file or directory:
'/usr/local/lib/python2.4/site-packages/scipy/scipy/special/info.py'
>>> scipy.base.__version__
'0.8.6.1688'
>>>

From pearu at scipy.org  Wed Dec 21 01:45:25 2005
From: pearu at scipy.org (Pearu Peterson)
Date: Wed, 21 Dec 2005 00:45:25 -0600 (CST)
Subject: [SciPy-dev] Problems with latest svn
In-Reply-To: <43A90539.9050800@mecha.uni-stuttgart.de>
References: <43A90539.9050800@mecha.uni-stuttgart.de>
Message-ID:

On Wed, 21 Dec 2005, Nils Wagner wrote:

> >>> import scipy
> [Errno 2] No such file or directory:
> '/usr/local/lib/python2.4/site-packages/scipy/scipy/linalg/info.py'
> [Errno 2] No such file or directory:
> '/usr/local/lib/python2.4/site-packages/scipy/scipy/special/info.py'
>>>> scipy.base.__version__
> '0.8.6.1688'

Fixed in scipy svn. Core scipy now uses the SCIPY_IMPORT_VERBOSE
environment variable. If present, its value is used as the verbose
keyword argument to scipy.pkgload.

Pearu

From nwagner at mecha.uni-stuttgart.de  Wed Dec 21 02:53:15 2005
From: nwagner at mecha.uni-stuttgart.de (Nils Wagner)
Date: Wed, 21 Dec 2005 08:53:15 +0100
Subject: [SciPy-dev] Problems with latest svn
In-Reply-To:
References: <43A90539.9050800@mecha.uni-stuttgart.de>
Message-ID: <43A909EB.4070004@mecha.uni-stuttgart.de>

Pearu Peterson wrote:
>On Wed, 21 Dec 2005, Nils Wagner wrote:
>
>
>>>>>import scipy
>>>>>
>>[Errno 2] No such file or directory:
>>'/usr/local/lib/python2.4/site-packages/scipy/scipy/linalg/info.py'
>>[Errno 2] No such file or directory:
>>'/usr/local/lib/python2.4/site-packages/scipy/scipy/special/info.py'
>>
>>>>>scipy.base.__version__
>>>>>
>>'0.8.6.1688'
>>
>
>Fixed in scipy svn. Core scipy now uses the SCIPY_IMPORT_VERBOSE
>environment variable. If present, its value is used as the verbose
>keyword argument to scipy.pkgload.
>
>Pearu
>
>_______________________________________________
>Scipy-dev mailing list
>Scipy-dev at scipy.net
>http://www.scipy.net/mailman/listinfo/scipy-dev
>

Can I ignore this message ?

> removing: _configtest.c _configtest.o _configtest
> gcc options: '-pthread -fno-strict-aliasing -DNDEBUG -g -O3 -Wall -Wstrict-prototypes -fPIC'
> compile options: '-Iscipy/base/src -I/usr/local/include/python2.4 -c'
> gcc: _configtest.c
> gcc -pthread _configtest.o -o _configtest
> _configtest.o(.text+0x21): In function `main':
> /var/tmp/svn/core/_configtest.c:5: undefined reference to `exp'
> collect2: ld returned 1 exit status
> _configtest.o(.text+0x21): In function `main':
> /var/tmp/svn/core/_configtest.c:5: undefined reference to `exp'
> collect2: ld returned 1 exit status
> failure.

Yes, if scipy.base builds ok. These messages come from testing
different variations of math libraries.

Pearu

From nwagner at mecha.uni-stuttgart.de  Wed Dec 21 03:32:54 2005
From: nwagner at mecha.uni-stuttgart.de (Nils Wagner)
Date: Wed, 21 Dec 2005 09:32:54 +0100
Subject: [SciPy-dev] Different number of tests
Message-ID: <43A91336.40405@mecha.uni-stuttgart.de>

Hi all,

I have installed the latest svn versions of core and scipy on two
different machines.

On the first box
scipy.test(1,10)
Ran 1372 tests in 8.086s

FAILED (failures=1)


On the second box
Ran 1230 tests in 7.358s

FAILED (failures=1)


What is the reason for the difference in the number of tests (1372
versus 1230) ?

Nils

From pearu at scipy.org  Wed Dec 21 03:20:54 2005
From: pearu at scipy.org (Pearu Peterson)
Date: Wed, 21 Dec 2005 02:20:54 -0600 (CST)
Subject: [SciPy-dev] Reviewing scipy.test package
Message-ID:

Hi,

We have a scipy.test package that currently contains only two modules:
testing.py and auto_test.py.

testing.py is widely used in scipy test suites.

auto_test.py is rather old and does not work without serious review. It
is tempting to remove it from scipy core svn but it has some interesting
ideas. This raises a general question: Shall we clean scipy up or should
we keep old-interesting-but-not-working code for future ideas?

There is also a name conflict with scipy.test. When importing scipy, the
scipy.test module is overwritten with the ScipyTest.test method. To
resolve this conflict, either the name of the module or the method needs
to be changed. I see two options:
1) rename ScipyTest.test -> ScipyTest.runtest or similar
2) rename scipy.test -> scipy.testing

People are used to running scipy.test(), so I am inclined to apply
option (2). Any objections?

Pearu

From arnd.baecker at web.de  Wed Dec 21 04:31:19 2005
From: arnd.baecker at web.de (Arnd Baecker)
Date: Wed, 21 Dec 2005 10:31:19 +0100 (CET)
Subject: [SciPy-dev] Different number of tests
In-Reply-To: <43A91336.40405@mecha.uni-stuttgart.de>
References: <43A91336.40405@mecha.uni-stuttgart.de>
Message-ID:

On Wed, 21 Dec 2005, Nils Wagner wrote:

> Hi all,
>
> I have installed the latest svn versions of core and scipy on two
> different machines.
>
> On the first box
> scipy.test(1,10)
> Ran 1372 tests in 8.086s
>
> FAILED (failures=1)
>
>
> On the second box
> Ran 1230 tests in 7.358s
>
> FAILED (failures=1)
>
>
> What is the reason for the difference in the number of tests (1372
> versus 1230) ?

if you do scipy.test(1) (or 10) you should see a summary like

  Found 4 tests for scipy.io.array_import
  Found 128 tests for scipy.linalg.fblas
  Found 2 tests for scipy.base.umath
  Found 10 tests for scipy.integrate.quadpack
  Found 92 tests for scipy.stats.stats
  Found 9 tests for scipy.base.twodim_base
  Found 36 tests for scipy.linalg.decomp
  Found 50 tests for scipy.sparse.sparse
  Found 20 tests for scipy.fftpack.pseudo_diffs
  Found 6 tests for scipy.optimize.optimize

Comparing these two lists should reveal where fewer tests are found.

Best, Arnd

> Nils
>
>
> _______________________________________________
> Scipy-dev mailing list
> Scipy-dev at scipy.net
> http://www.scipy.net/mailman/listinfo/scipy-dev
>

From pearu at scipy.org  Wed Dec 21 03:50:23 2005
From: pearu at scipy.org (Pearu Peterson)
Date: Wed, 21 Dec 2005 02:50:23 -0600 (CST)
Subject: [SciPy-dev] new.scipy.org/Wiki and external ReST documents
Message-ID:

Hi,

I have added

http://svn.scipy.org/svn/scipy_core/trunk/scipy/doc/DISTUTILS.txt

to

http://new.scipy.org/Wiki/Docs

but it would be nice if ReST-ed DISTUTILS.txt would be processed by Wiki
engine. Are there any such features possible in Wiki?

If not, what would be a good practice to make ReST documents available to
scipy Wiki?

Thanks,
Pearu

From strawman at astraw.com  Wed Dec 21 05:33:25 2005
From: strawman at astraw.com (Andrew Straw)
Date: Wed, 21 Dec 2005 02:33:25 -0800
Subject: [SciPy-dev] new.scipy.org/Wiki and external ReST documents
In-Reply-To:
References:
Message-ID: <34C8209D-4D9B-414E-AABD-386BED589C11@astraw.com>

Hi,

MoinMoin can do rst, but Enthought has to turn it on. I've added it
to the laundry list of configuration to-dos at

http://new.scipy.org/Wiki/KickTheTires

On Dec 21, 2005, at 12:50 AM, Pearu Peterson wrote:

>
> Hi,
>
> I have added
>
> http://svn.scipy.org/svn/scipy_core/trunk/scipy/doc/DISTUTILS.txt
>
> to
>
> http://new.scipy.org/Wiki/Docs
>
> but it would be nice if ReST-ed DISTUTILS.txt would be processed by
> Wiki
> engine. Are there any such features possible in Wiki?
>
> If not, what would be a good practice to make ReST documents
> available to
> scipy Wiki?
>
> Thanks,
> Pearu
>
> _______________________________________________
> Scipy-dev mailing list
> Scipy-dev at scipy.net
> http://www.scipy.net/mailman/listinfo/scipy-dev
>

From nwagner at mecha.uni-stuttgart.de  Wed Dec 21 05:49:49 2005
From: nwagner at mecha.uni-stuttgart.de (Nils Wagner)
Date: Wed, 21 Dec 2005 11:49:49 +0100
Subject: [SciPy-dev] Different number of tests
In-Reply-To:
References: <43A91336.40405@mecha.uni-stuttgart.de>
Message-ID: <43A9334D.9080106@mecha.uni-stuttgart.de>

Arnd Baecker wrote:
>On Wed, 21 Dec 2005, Nils Wagner wrote:
>
>
>>Hi all,
>>
>>I have installed the latest svn versions of core and scipy on two
>>different machines.
>>
>>On the first box
>>scipy.test(1,10)
>>Ran 1372 tests in 8.086s
>>
>>FAILED (failures=1)
>>
>>
>>On the second box
>>Ran 1230 tests in 7.358s
>>
>>FAILED (failures=1)
>>
>>
>>What is the reason for the difference in the number of tests (1372
>>versus 1230) ?
>>
>
>if you do scipy.test(1) (or 10) you should see a summary like
>
> Found 4 tests for scipy.io.array_import
> Found 128 tests for scipy.linalg.fblas
> Found 2 tests for scipy.base.umath
> Found 10 tests for scipy.integrate.quadpack
> Found 92 tests for scipy.stats.stats
> Found 9 tests for scipy.base.twodim_base
> Found 36 tests for scipy.linalg.decomp
> Found 50 tests for scipy.sparse.sparse
> Found 20 tests for scipy.fftpack.pseudo_diffs
> Found 6 tests for scipy.optimize.optimize
>
>
>Comparing these two lists should reveal where fewer tests
>are found.
>
>Best, Arnd
>
>
>
>>Nils
>>
>>
>>_______________________________________________
>>Scipy-dev mailing list
>>Scipy-dev at scipy.net
>>http://www.scipy.net/mailman/listinfo/scipy-dev
>>
>>
>
>_______________________________________________
>Scipy-dev mailing list
>Scipy-dev at scipy.net
>http://www.scipy.net/mailman/listinfo/scipy-dev
>

The main difference is here:

Found 128 tests for scipy.lib.blas.fblas

which isn't available on the other machine.

From cimrman3 at ntc.zcu.cz  Wed Dec 21 05:53:37 2005
From: cimrman3 at ntc.zcu.cz (Robert Cimrman)
Date: Wed, 21 Dec 2005 11:53:37 +0100
Subject: [SciPy-dev] concatenate with None problem
Message-ID: <43A93431.8080002@ntc.zcu.cz>

In [4]:nm.concatenate( (nm.array( [1,2,3] ), None) )
*** glibc detected *** free(): invalid pointer: 0xb61d6380 ***
(SIGABRT)

In [3]:nm.__scipy_version__
Out[3]:'0.4.3.1492'

In [4]:nm.__core_version__
Out[4]:'0.8.6.1691'

details:
========
*** glibc detected *** free(): invalid pointer: 0xb7a67380 ***

Program received signal SIGABRT, Aborted.
[Switching to Thread 16384 (LWP 11847)]
0xb7ba60b1 in kill () from /lib/libc.so.6
(gdb) bt
#0  0xb7ba60b1 in kill () from /lib/libc.so.6
#1  0xb7d8c1e1 in pthread_kill () from /lib/libpthread.so.0
#2  0xb7d8c55b in raise () from /lib/libpthread.so.0
#3  0xb7ba5e44 in raise () from /lib/libc.so.6
#4  0xb7ba730d in abort () from /lib/libc.so.6
#5  0xb7bd85bc in __fsetlocking () from /lib/libc.so.6
#6  0xb7be2417 in mallopt () from /lib/libc.so.6
#7  0xb7be10df in mallopt () from /lib/libc.so.6
#8  0xb7bdfccf in free () from /lib/libc.so.6
#9  0xb7e28c1b in PyObject_Free (p=0xb7a67380) at obmalloc.c:798
#10 0xb7a4ddfc in arraydescr_dealloc () from /home/share/software/usr/lib/python2.4/site-packages/scipy/base/multiarray.so
#11 0xb7a385e1 in array_dealloc () from /home/share/software/usr/lib/python2.4/site-packages/scipy/base/multiarray.so
#12 0xb7a52041 in PyArray_Concatenate () from /home/share/software/usr/lib/python2.4/site-packages/scipy/base/multiarray.so
#13 0xb7a59ab4 in array_concatenate () from /home/share/software/usr/lib/python2.4/site-packages/scipy/base/multiarray.so
#14 0xb7e24220 in PyCFunction_Call (func=0xb7abba6c, arg=0xb5f33e4c, kw=0x0) at methodobject.c:77
#15 0xb7e66857 in call_function (pp_stack=0xbf81f628, oparg=1) at ceval.c:3558
#16 0xb7e639a6 in PyEval_EvalFrame (f=0x8100894) at ceval.c:2163

From pearu at scipy.org  Wed Dec 21 04:56:52 2005
From: pearu at scipy.org (Pearu Peterson)
Date: Wed, 21 Dec 2005 03:56:52 -0600 (CST)
Subject: [SciPy-dev] new.scipy.org/Wiki and external ReST documents
In-Reply-To: <34C8209D-4D9B-414E-AABD-386BED589C11@astraw.com>
References: <34C8209D-4D9B-414E-AABD-386BED589C11@astraw.com>
Message-ID:

On Wed, 21 Dec 2005, Andrew Straw wrote:

> Hi,
>
> MoinMoin can do rst, but Enthought has to turn it on. I've added it
> to the laundry list of configuration to-dos at
>
> http://new.scipy.org/Wiki/KickTheTires

Ok, thanks.
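Until the Wiki can parse ReST natively, one workable stopgap is to
render such documents to HTML offline with docutils (the same library a
ReST-enabled MoinMoin would use) and link the result from the Wiki. A
minimal sketch, using the DISTUTILS.txt example above (the output file
name is arbitrary):

    from docutils.core import publish_file

    # Convert a reStructuredText source file to standalone HTML.
    publish_file(source_path='DISTUTILS.txt',
                 destination_path='DISTUTILS.html',
                 writer_name='html')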
Pearu

> On Dec 21, 2005, at 12:50 AM, Pearu Peterson wrote:
>
>>
>> Hi,
>>
>> I have added
>>
>> http://svn.scipy.org/svn/scipy_core/trunk/scipy/doc/DISTUTILS.txt
>>
>> to
>>
>> http://new.scipy.org/Wiki/Docs
>>
>> but it would be nice if ReST-ed DISTUTILS.txt would be processed by
>> Wiki
>> engine. Are there any such features possible in Wiki?
>>
>> If not, what would be a good practice to make ReST documents
>> available to
>> scipy Wiki?

From arnd.baecker at web.de  Wed Dec 21 06:27:02 2005
From: arnd.baecker at web.de (Arnd Baecker)
Date: Wed, 21 Dec 2005 12:27:02 +0100 (CET)
Subject: [SciPy-dev] Different number of tests
In-Reply-To: <43A9334D.9080106@mecha.uni-stuttgart.de>
References: <43A91336.40405@mecha.uni-stuttgart.de> <43A9334D.9080106@mecha.uni-stuttgart.de>
Message-ID:

Hi Nils,

On Wed, 21 Dec 2005, Nils Wagner wrote:
> Arnd Baecker wrote:
> >On Wed, 21 Dec 2005, Nils Wagner wrote:
> >>Hi all,
> >>
> >>I have installed the latest svn versions of core and scipy on two
> >>different machines.
> >>
> >>On the first box
> >>scipy.test(1,10)
> >>Ran 1372 tests in 8.086s
> >>
> >>FAILED (failures=1)
> >>
> >>
> >>On the second box
> >>Ran 1230 tests in 7.358s
> >>
> >>FAILED (failures=1)
> >>
> >>
> >>What is the reason for the difference in the number of tests (1372
> >>versus 1230) ?
> >
> >if you do scipy.test(1) (or 10) you should see a summary like
> >
> > Found 4 tests for scipy.io.array_import
> > Found 128 tests for scipy.linalg.fblas
[...]
> >
> >Comparing these two lists should reveal where fewer tests
> >are found.
[...]
> The main difference is here:
> Found 128 tests for scipy.lib.blas.fblas
> which isn't available on the other machine.

Which still leaves a difference of a further 14 tests. So, did your
comparison answer your question well enough?

Best, Arnd

From jh at oobleck.astro.cornell.edu  Wed Dec 21 09:29:46 2005
From: jh at oobleck.astro.cornell.edu (Joe Harrington)
Date: Wed, 21 Dec 2005 09:29:46 -0500
Subject: [SciPy-dev] old web site front page
Message-ID: <200512211429.jBLETkrM003677@oobleck.astro.cornell.edu>

I've recently gotten an inquiry from a new site that wants to install
scipy. They wanted RPM packages and a Yum repo. I explained the
current situation and they suggested posting a more detailed notice on
the front page of the web site advising new users of the situation and
giving them the options. I think this is a good idea. I've pasted
some text below, as a starting point, to replace paragraphs 5 and 6 of
the current front page. Please comment, and also say if you disagree
that a more detailed notice on the scipy.org site is a good thing.

--jh--

SciPy is nearing completion of a major overhaul. The new SciPy
features high performance for both small and large arrays, and rich
syntactic functionality for both. This effort is mostly done. We are
now beating the bugs out of the various platform builds to make stable
binary installs. Documentation is also well underway.

If you are new to SciPy, there are several reasonable paths to take,
depending on your circumstances:

1. If you can wait until the Spring of 2006, the new package, binary
installers, and its basic documentation should be ready by then. This
is the path of least resistance, and we recommend it!

2. If you must run an application that requires either old version of
the array extension to Python (Numeric or Numarray), go ahead and
install that version. They will both be viable for some time to come.

3. If you are experienced with software installs and like getting your
hands dirty, download the tarball for the new version, build it for
your architecture, take notes, subscribe to scipy-dev at scipy.net, and
post your notes once you've read recent threads in a similar vein.
Your experience will help us, and other new users, a great deal.

Where to get the different versions:

The new SciPy core is being distributed from http://numeric.scipy.org/
and http://sourceforge.net/projects/numpy/ (click on SciPy_core). We
strongly recommend subscribing to both scipy-dev at scipy.net and
scipy-user at scipy.net while the software is still in testing, and
updating whenever a new release occurs. A version of full SciPy that
works on newcore is available for anonymous checkout from a
Subversion repository at http://svn.scipy.org/svn/scipy/trunk. File
downloads are available on this site or from Sourceforge.

Old (Numeric-based) SciPy is available from
http://www.scipy.org/download/ and has a Yum repo at
http://www.enthought.com/python/fedora/. Most of the material on this
web site refers to the old, Numeric-based SciPy.

WE NEED VOLUNTEERS!

At this point, if you just do things for yourself and post what you
do, you'll make a big contribution. Initially, that will help us get
the code into release-stable form. Eventually, it will make a
friendly and easy set of installation instructions a reality.

We are also looking for people to write documentation and to help
design and populate the new SciPy web site. The new site is
http://new.scipy.org.

From ndbecker2 at gmail.com  Wed Dec 21 09:41:31 2005
From: ndbecker2 at gmail.com (Neal Becker)
Date: Wed, 21 Dec 2005 09:41:31 -0500
Subject: [SciPy-dev] scipy-0.8.4 SRPM patch available
Message-ID: <200512210941.31535.ndbecker2@gmail.com>

I have patched the scipy SRPM to build scipy-0.8.4 on Fedora4, x86_64.
Also builds on i386. Please update the SPEC file. Here is the SPEC
file I used:

Note: I don't currently have atlas installed. If I did, I guess I
would also need to add something for python setup.py to find it.

%define name scipy_core
%define version 0.8.4
%define release 1
%define python_sitearch %(%{__python} -c 'from distutils import sysconfig; print sysconfig.get_python_lib(1)')

Summary: Core SciPy
Name: %{name}
Version: %{version}
Release: %{release}
Source0: %{name}-%{version}.tar.gz
License: BSD
Group: Development/Libraries
BuildRoot: %{_tmppath}/%{name}-%{version}-%{release}-buildroot
Prefix: %{_prefix}
Vendor: SciPy Developers
Url: http://numeric.scipy.org

%description
UNKNOWN

%prep
%setup

%build
env BLAS=/usr/lib64 LAPACK=/usr/lib64 CFLAGS="$RPM_OPT_FLAGS" python setup.py build
#env CFLAGS="$RPM_OPT_FLAGS" python setup.py build

%install
python setup.py install --root=$RPM_BUILD_ROOT --record=INSTALLED_FILES

%clean
rm -rf $RPM_BUILD_ROOT

%files -f INSTALLED_FILES
#for some reason this was missed!
%{python_sitearch}/scipy/lib/_dotblas.so
%defattr(-,root,root)

From cookedm at physics.mcmaster.ca  Wed Dec 21 11:52:59 2005
From: cookedm at physics.mcmaster.ca (David M. Cooke)
Date: Wed, 21 Dec 2005 08:52:59 -0800
Subject: [SciPy-dev] Reviewing scipy.test package
In-Reply-To:
References:
Message-ID: <64E88103-EE3D-4711-BCB3-DECE05D830F4@physics.mcmaster.ca>

On Dec 21, 2005, at 0:20, Pearu Peterson wrote:

> Hi,
>
> We have a scipy.test package that currently contains only two modules:
> testing.py and auto_test.py.
>
> testing.py is widely used in scipy test suites.
>
> auto_test.py is rather old and does not work without serious review.
It
> is tempting to remove it from scipy core svn but it has some
> interesting
> ideas. This raises a general question:

+1 remove it. As you say, it needs serious review. There are better
ways to do some of it (setuptools for instance).

> Shall we clean scipy up or should we keep
> old-interesting-but-not-working code for future ideas?

How about moving them to a directory where they won't be installed?
contrib/, old/, sandbox/, ...?

I've been thinking we should have a sandbox/ or contrib/ directory in
scipy_core for things like implementations of the array interface that
others can use in their code.

> There is also a name conflict with scipy.test. When importing scipy,
> the scipy.test module is overwritten with the ScipyTest.test method.
> To resolve
> this conflict, either the name of the module or the method needs to be
> changed. I see two options:
> 1) rename ScipyTest.test -> ScipyTest.runtest or similar
> 2) rename scipy.test -> scipy.testing
>
> People are used to running scipy.test(), so I am inclined to apply
> option (2). Any objections?

+1 (I suggested this a few months ago when I added doctest support :-)

-- 
|>|\/|<
/------------------------------------------------------------------\
|David M. Cooke http://arbutus.physics.mcmaster.ca/dmc/
|cookedm at physics.mcmaster.ca

From schofield at ftw.at  Wed Dec 21 12:56:16 2005
From: schofield at ftw.at (Ed Schofield)
Date: Wed, 21 Dec 2005 17:56:16 +0000
Subject: [SciPy-dev] old web site front page
In-Reply-To: <200512211429.jBLETkrM003677@oobleck.astro.cornell.edu>
References: <200512211429.jBLETkrM003677@oobleck.astro.cornell.edu>
Message-ID: <02D5D630-0760-41BD-A210-2AAE562B2147@ftw.at>

On 21/12/2005, at 2:29 PM, Joe Harrington wrote:

> Please comment, and also say if you disagree
> that a more detailed notice on the scipy.org site is a good thing.
>

I agree this is a good idea. Well done for the initiative!

> If you are new to SciPy, there are several reasonable paths to take,
> depending on your circumstances:
>
> 1. If you can wait until the Spring of 2006, the new package, binary
> installers, and its basic documentation should be ready by then. This
> is the path of least resistance, and we recommend it!
>

I think we should be more enthusiastic about recommending that people
migrate now to the new SciPy. There are already binary packages for
Win32 and OS X. Discouraging potential users until an "official"
release is made seems counterproductive -- the more testing it gets in
the wild now, the better.

I'd suggest merging points 1 and 3 like this:

1. SciPy is still in 'beta' release status, but we encourage you to try
it out anyway. Source code and binary installers for a few platforms
are available from http://numeric.scipy.org. The final release, due in
Spring 2006, will be similar, but with more documentation, binaries for
more platforms, and more bug fixes.

2. If you must run an application that requires either old version of
the array extension to Python (Numeric or Numarray), go ahead and
install that version. The latest versions of both Numeric and Numarray
support importing and exporting SciPy Core arrays. They will both be
viable for some time to come.

-- Ed

From kamrik at gmail.com  Wed Dec 21 13:24:07 2005
From: kamrik at gmail.com (Mark Koudritsky)
Date: Wed, 21 Dec 2005 20:24:07 +0200
Subject: [SciPy-dev] old web site front page
In-Reply-To: <200512211429.jBLETkrM003677@oobleck.astro.cornell.edu>
References: <200512211429.jBLETkrM003677@oobleck.astro.cornell.edu>
Message-ID:

I've added the first paragraph of the text proposed in the previous
message to the front page. And the rest to installation instructions.
I didn't know where to put the plea for volunteers.

On 12/21/05, Joe Harrington wrote:
> I've recently gotten an inquiry from a new site that wants to install
> scipy. They wanted RPM packages and a Yum repo. I explained the
> current situation and they suggested posting a more detailed notice on
> the front page of the web site advising new users of the situation and
> giving them the options. I think this is a good idea. I've pasted
> some text below, as a starting point, to replace paragraphs 5 and 6 of
> the current front page. Please comment, and also say if you disagree
> that a more detailed notice on the scipy.org site is a good thing.
>
> --jh--
>
> SciPy is nearing completion of a major overhaul. The new SciPy
> features high performance for both small and large arrays, and rich
> syntactic functionality for both. This effort is mostly done. We are
> now beating the bugs out of the various platform builds to make stable
> binary installs. Documentation is also well underway.
>
> If you are new to SciPy, there are several reasonable paths to take,
> depending on your circumstances:
>
> 1. If you can wait until the Spring of 2006, the new package, binary
> installers, and its basic documentation should be ready by then. This
> is the path of least resistance, and we recommend it!
>
> 2. If you must run an application that requires either old version of
> the array extension to Python (Numeric or Numarray), go ahead and
> install that version. They will both be viable for some time to come.
>
> 3. If you are experienced with software installs and like getting your
> hands dirty, download the tarball for the new version, build it for
> your architecture, take notes, subscribe to scipy-dev at scipy.net, and
> post your notes once you've read recent threads in a similar vein.
> Your experience will help us, and other new users, a great deal.
>
> Where to get the different versions:
>
> The new SciPy core is being distributed from http://numeric.scipy.org/
> and http://sourceforge.net/projects/numpy/ (click on SciPy_core). We
> strongly recommend subscribing to both scipy-dev at scipy.net and
> scipy-user at scipy.net while the software is still in testing, and
> updating whenever a new release occurs. A version of full SciPy that
> works on newcore is available for anonymous checkout from a
> Subversion repository at http://svn.scipy.org/svn/scipy/trunk. File
> downloads are available on this site or from Sourceforge.
>
> Old (Numeric-based) SciPy is available from
> http://www.scipy.org/download/ and has a Yum repo at
> http://www.enthought.com/python/fedora/. Most of the material on this
> web site refers to the old, Numeric-based SciPy.
>
> WE NEED VOLUNTEERS!
>
> At this point, if you just do things for yourself and post what you
> do, you'll make a big contribution. Initially, that will help us get
> the code into release-stable form. Eventually, it will make a
> friendly and easy set of installation instructions a reality.
>
> We are also looking for people to write documentation and to help
> design and populate the new SciPy web site. The new site is
> http://new.scipy.org.
>
> _______________________________________________
> Scipy-dev mailing list
> Scipy-dev at scipy.net
> http://www.scipy.net/mailman/listinfo/scipy-dev
>

From nwagner at mecha.uni-stuttgart.de  Wed Dec 21 13:36:28 2005
From: nwagner at mecha.uni-stuttgart.de (Nils Wagner)
Date: Wed, 21 Dec 2005 19:36:28 +0100
Subject: [SciPy-dev] Guide to SciPy: Core System
Message-ID:

Hi Travis,

When is your Guide to SciPy: Core System
http://www.tramy.us/
available ?

Nils

From aisaac at american.edu  Wed Dec 21 14:08:21 2005
From: aisaac at american.edu (Alan G Isaac)
Date: Wed, 21 Dec 2005 14:08:21 -0500
Subject: [SciPy-dev] Guide to SciPy: Core System
In-Reply-To:
References:
Message-ID:

On Wed, 21 Dec 2005 19:36:28 +0100 Nils Wagner apparently wrote:
> Hi Travis,
> When is your Guide to SciPy: Core System
> http://www.tramy.us/
> available ?

I've already received the first draft and an update.
Lots of material. Very helpful!

Cheers,
Alan Isaac

From jh at oobleck.astro.cornell.edu  Wed Dec 21 14:42:09 2005
From: jh at oobleck.astro.cornell.edu (Joe Harrington)
Date: Wed, 21 Dec 2005 14:42:09 -0500
Subject: [SciPy-dev] old web site front page
In-Reply-To: <02D5D630-0760-41BD-A210-2AAE562B2147@ftw.at> (message from Ed Schofield on Wed, 21 Dec 2005 17:56:16 +0000)
References: <200512211429.jBLETkrM003677@oobleck.astro.cornell.edu> <02D5D630-0760-41BD-A210-2AAE562B2147@ftw.at>
Message-ID: <200512211942.jBLJg9mO004578@oobleck.astro.cornell.edu>

> I think we should be more enthusiastic about recommending that people
> migrate now to the new SciPy.

Ed, I agree that there are people we should be encouraging to try the
beta: current, experienced SciPy users on platforms with binary, full,
newcore builds. Currently there are no full builds on any of the
public servers. There is also no documentation (Travis's book is
still a pre-order), and as we've been discussing the web sites need
serious work. Even the most far-along builds are still beta for these
reasons. I think encouraging anyone who is not already a user or
potential developer would be a mistake, because we'd risk losing them.
There are a lot of such people, and the main-page text has to address
everyone. I've incorporated some of your ideas in the revised text,
below. This is now the full text I propose for the page.

> I've added the first paragraph of the text proposed in the previous
> message to the front page. And the rest to installation instructions.
> I didn't know where to put the plea for volunteers.

I'm proposing this text for the current (Plone) site, since that's
where newbies go. No problem cribbing it for the new site, but
remember that when the new site goes live our recommendations will
(hopefully!) be different from what they are today.

--jh--

WHAT IS SCIPY?

SciPy is Open Source software for mathematics, science, and
engineering. It runs on all popular operating systems, is quick to
install, and is free of charge. SciPy is easy enough that a
12-year-old can use it to plot up her math homework, but complete
enough that it is depended upon by some of the world's forefront
scientists and engineers. If you need to manipulate numbers on a
computer and display the results, we hope you'll give SciPy a try!

SciPy consists of two packages. The "SciPy Core" provides basic
functionality for array mathematics.
"Full SciPy" provides modules for statistics, optimization, integration, linear algebra, Fourier transforms, signal and image processing, genetic algorithms, ODE solvers, special functions, and more. In addition, well over 100 application software packages use SciPy for specific tasks, and the number is growing. CURRENT STATUS: SciPy is nearing completion of a major overhaul. The new SciPy features high performance for both small and large arrays, and rich syntactic functionality for both. This effort is mostly done. We are now beating the bugs out of the various platform builds to make stable binary installs. Documentation is also well underway, as is a web site overhaul. WHAT TO DO NOW: If you are a CURRENT SCIPY USER, we encourage you to give the new version a try. Source code and binary installers for the latest beta-test version (for a few platforms) are available from http://numeric.scipy.org. If a binary package is available for your computer, things should be relatively stable. If you are NEW TO SCIPY, there are several reasonable paths to take, depending on your circumstances: 1. If you can wait until the Spring of 2006, the new package, binary installers, and its basic documentation should be ready by then. This is the path of least resistance! 2. If you want to try the new SciPy anyway, you can try the beta above. Look to the old SciPy documentation, the tutorials for the Numeric and Numarray packages, and the scipy-user at scipy.net mailing list for help. 3. If you must run an application that requires either old version of the array extension to Python (Numeric or Numarray), go ahead and install that version. The latest versions of both Numeric and Numarray support importing and exporting SciPy Core arrays. They will both be viable for some time to come. In any case, if you are VERY EXPERIENCED WITH SOFTWARE INSTALLS and like getting your hands dirty, download the tarball for the new version, build it for your architecture, take notes, subscribe to scipy-dev at scipy.net, and post your notes once you've read recent threads in a similar vein. Your experience will help us, and other new users, a great deal. In addition to the official download site, see http://new.scipy.org/Wiki/Download for SVN access to a version of full SciPy that works on newcore. The official release of the new SciPy is due in Spring 2006. It will be similar to the beta, but with more documentation, binaries for more platforms, and more bug fixes. Old (Numeric-based) SciPy is available from http://www.scipy.org/download/ and has a Yum repo at http://www.enthought.com/python/fedora/. Most of the material on this web site refers to the old, Numeric-based SciPy. We strongly recommend subscribing to both scipy-dev at scipy.net and scipy-user at scipy.net while the software is still in testing, and updating whenever a new release occurs. WE NEED VOLUNTEERS! At this point, if you just do things for yourself and post what you do, you'll make a big contribution. Initially, that will help us get the code into release-stable form. Eventually, it will make a friendly and easy set of installation instructions a reality. We are also looking for people to write documentation and to help design and populate the new SciPy web site. The new site is http://new.scipy.org. SciPy is a community project that is sponsored and supported by Enthought, inc. 
From josh.p.marshall at gmail.com  Thu Dec 22 02:16:58 2005
From: josh.p.marshall at gmail.com (Josh Marshall)
Date: Thu, 22 Dec 2005 18:16:58 +1100
Subject: [SciPy-dev] Issues "freezing" scipy using py2app
In-Reply-To:
References:
Message-ID: <7E40D1B6-F34D-4191-B275-CCB6F646CB83@gmail.com>

On Wed, 21 Dec 2005, Pearu Peterson wrote:
>> I have an application that is using PyObjC, matplotlib and scipy to
>> perform some image classification and feature extraction. I am now
>> attempting to use py2app to bundle it up into a single OS X .app.
>>
>> ...
>
> Thanks for the patch. I have applied it to the new pkgload feature.
> scipy does not use _import_tools.py anymore.

As soon as this is available in SVN I will test it out. Not using
_import_tools.py should make it easier to package the app. I will see
how I go.

>> There must be a simpler way, such as using find_module, but I
>> couldn't get this to work since it requires the parent package to be
>> imported before the subpackages.
>
> info.py files should be regarded as information files about
> packages; they are python modules only for convenience. And using
> imp.load_module is necessary to prevent importing the whole package:
> at import time we need only the information from the info.py file.

I think it is a good idea to hold this information in python modules.
If it is held in a data file, this becomes problematic with regard to
the different ways of distributing scipy, e.g. install from source,
frozen in a zip. This is similar to the problem matplotlib had with
where to put their data files. Keep as is, works well.

>> 2) The second issue was a problem involving get_config_vars being
>> called from ppimport.py. distutils then requires the presence of the
>> Python config file, which doesn't exist in the bundle.
>>
>> This was easily fixed by adding
>>
>>      if sys.platform[:5]=='linux':
>>          so_ext = '.so'
>> +    elif sys.platform=='darwin':
>> +        so_ext = '.so'
>
> Hmm, what exception do you get when the Python config file is not
> present? ppimport._get_so_ext uses so_ext='.so' when ImportError
> appears. I think the proper fix would be to catch the proper exception.

It was a problem related to trying to include the entirety of distutils
in my packaged application. The exception came from distutils. I will
try catching the exception and see if this is a valid fix.

>> 3) Finally, I can't distribute my .app as-is, after being generated
>> by py2app. py2app works by collecting all needed modules from the
>> system site-packages into a "site-packages.zip" within the
>> application bundle. At runtime, some magic with import causes
>> everything to work, and this is where scipy falls down. When trying
>> to import the subpackages using the _import_tools.py magic, it cannot
>> find them within the site-packages.zip. I can fix this by
>> distributing the .app with the site-packages.zip expanded into a
>> directory. However, I was wondering whether anyone who knows the
>> import magic better can say if we can fix this.
>
> I'd like to learn more about the magic that Python does with .zip
> files.
> Are there similar issues when using eggs?

I will look into this after the Christmas break. It would be nice to be
able to package scipy as an egg, and also create a proper Mac installer.
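On the egg front, setuptools' pkg_resources can read data shipped
inside a package without touching __file__, and it works whether the
package is unpacked on disk or zipped inside an egg. A minimal sketch
(the package and file names here are placeholders, not anything scipy
currently ships):

    import pkg_resources

    # Read a data file from inside a package; pkg_resources handles
    # the zipped-egg case transparently.
    data = pkg_resources.resource_string('some_pkg', 'info.txt')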
Merry Christmas to all.

Josh

From arnd.baecker at web.de  Thu Dec 22 04:03:51 2005
From: arnd.baecker at web.de (Arnd Baecker)
Date: Thu, 22 Dec 2005 10:03:51 +0100 (CET)
Subject: [SciPy-dev] old web site front page
In-Reply-To: <200512211942.jBLJg9mO004578@oobleck.astro.cornell.edu>
References: <200512211429.jBLETkrM003677@oobleck.astro.cornell.edu> <200512211942.jBLJg9mO004578@oobleck.astro.cornell.edu>
Message-ID:

The text reads very well! FWIW, just some small comments:

On Wed, 21 Dec 2005, Joe Harrington wrote:

[...]

> In any case, if you are VERY EXPERIENCED WITH SOFTWARE INSTALLS and

The *VERY* seems too strong to me - personally I would not call myself
"very experienced" and still I celebrated the 100th build of scipy new
core (on one machine, where I could automate the stuff - no counters on
the others ;-) Of course I have to admit that I really learned a fair
bit during the last years of going through the various stages of scipy
;-) Therefore I propose to omit the "VERY" to get more testing.

> like getting your hands dirty,

also this is maybe a bit too discouraging? (I think this phrase could
be just left out) ((if one uses gcc (<4.0), g77, a plain ATLAS and a
normal fftw2 on a normal machine things are pretty clean already))

> download the tarball for the new version,

should the tarball really be recommended at this stage, or wouldn't svn
(or just http://new.scipy.org/Wiki/Download as the only pointer) be
sufficient?

Best, Arnd

From kamrik at gmail.com  Thu Dec 22 08:42:45 2005
From: kamrik at gmail.com (Mark Koudritsky)
Date: Thu, 22 Dec 2005 15:42:45 +0200
Subject: [SciPy-dev] Scipy.org
In-Reply-To: <200512191915.jBJJFunh027898@oobleck.astro.cornell.edu>
References: <200512171542.jBHFgdJC007400@oobleck.astro.cornell.edu> <43A45BCB.5090606@astraw.com> <43A6ECB2.1000900@astraw.com> <200512191915.jBJJFunh027898@oobleck.astro.cornell.edu>
Message-ID:

> For the
> remaining tabs, why not combine Andrew's list and mine, and try to
> subset a few into one tab. For example What Is, Getting Started, and
> Screenshots go under Introduction; Cookbook goes under Docs (maybe
> call it "Help"?). The listing of additional software (which
> shouldn't be called "Topical" anymore) should be a top-level
> navigation item.

I've put another layout suggestion which kind of combines the two
posted before
http://new.scipy.org/Wiki/KickTheTires#head-f5563b4dbc8608e9b6e69fc4de501559a77bb981

I think the main page should create an impression that this site
represents a "tool called SciPy" rather than a mix of interrelated
software pieces. The impression of a "mix" is very confusing for
newbies, at least it was for me. When I first found SciPy, it took me a
lot of time to figure out the relation between numeric, numpy, "numeric
python", numarray and SciPy.

On the other hand we do have to show that scipy_core is a separately
usable library to convince the developers of other tools for scientific
computing on Python to switch to it. So I think that scipy_core should
have a top level navigation item too.

From arnd.baecker at web.de  Thu Dec 22 09:09:24 2005
From: arnd.baecker at web.de (Arnd Baecker)
Date: Thu, 22 Dec 2005 15:09:24 +0100 (CET)
Subject: [SciPy-dev] newcore, icc compiled ATLAS problem
Message-ID:

Hi,

I am again trying to install scipy core on an Itanium with ATLAS.
FOUND: libraries = ['lapack', 'ptf77blas', 'ptcblas', 'atlas'] library_dirs = ['/home/baecker/python2/lib/atlas'] language = f77 define_macros = [('ATLAS_INFO', '"\\"3.7.11\\""')] include_dirs = ['/usr/include'] The problem is that ATLAS is compiled with the intel compiler. Why is this so: According to http://math-atlas.sourceforge.net/errata.html#WhatComp on Itanium 2 icc 8.0 is MUCH (sic) faster than gcc 3.3! ====================================================================== ERROR: test_basic (scipy.base.matrix.test_matrix.test_algebra) ---------------------------------------------------------------------- Traceback (most recent call last): File "/home/baecker/python2/scipy_lintst_atlas/lib/python2.4/site-packages/scipy/base/tests/test_matrix.py", line 99, in test_basic from scipy.basic import linalg File "/home/baecker/python2//scipy_lintst_atlas/lib/python2.4/site-packages/scipy/basic/__init__.py", line 6, in ? import linalg File "/home/baecker/python2//scipy_lintst_atlas/lib/python2.4/site-packages/scipy/basic/linalg.py", line 9, in ? import lapack_lite ImportError: /home/baecker/python2/scipy_lintst_atlas/lib/python2.4/site-packages/scipy/basic/lapack_lite.so: undefined symbol: for_cpstr The library `libifcore.so` provides the needed routine nm /opt/intel/fc_90/lib/libifcore.so.6 | grep for_cpstr 00000000000d6900 T for_cpstr (+libcxa is needed by this one ...) So the solution is to get the following compile command /work/home/baecker/python2/bin/g77 -shared build/temp.linux-ia64-2.4/scipy/basic/lapack_lite/lapack_litemodule.o -L/home/baecker/python2/lib/atlas -llapack -lptf77blas -lptcblas -latlas -lg2c -L /opt/intel/fc_90/lib -lifcore -lcxa -o build/lib.linux-ia64-2.4/scipy/basic/lapack_lite.so I.e.: -L/opt/intel/fc_90/lib -lifcore -lcxa has to be added and then all tests of scipy core work fine. How can one achieve this? Best, Arnd From jh at oobleck.astro.cornell.edu Thu Dec 22 10:06:48 2005 From: jh at oobleck.astro.cornell.edu (Joe Harrington) Date: Thu, 22 Dec 2005 10:06:48 -0500 Subject: [SciPy-dev] old web site front page In-Reply-To: (message from Arnd Baecker on Thu, 22 Dec 2005 10:03:51 +0100 (CET)) References: <200512211429.jBLETkrM003677@oobleck.astro.cornell.edu> <200512211942.jBLJg9mO004578@oobleck.astro.cornell.edu> Message-ID: <200512221506.jBMF6m9E006252@oobleck.astro.cornell.edu> You and Ed came up with similar suggestions (on your first two points). Sure, take out the "VERY" and change "like" to "don't mind". But, the more you guys argue that this is not a hard install, the more you convince me that it is! Arnd, you applied FIVE conditions just to make it "pretty clean"! But, the "pretty clean" builds have binary installs already, leaving the "not-so-clean" and "decidedly rotten" ones for trial by newbies. Recall that this message is going to people who have never installed SciPy and never used it! We're not talking to list members here, we're talking to the guy that walked into my office the other day and said, hey, I hear you know something about this free IDL for Python (I cringed). Sure, he's typed "make install" a bunch in the past. But, I wouldn't wish a newcore build on him, let alone a full build. He's got work to do, and he's more likely to walk away from this if it wastes his time than to dig in. Is SVN better than tarball? I'll leave that to a developer to say. We can change it as the story changes. --jh-- Arnd: The text reads very well! FWIW, just some small comments: On Wed, 21 Dec 2005, Joe Harrington wrote: [...] 
> In any case, if you are VERY EXPERIENCED WITH SOFTWARE INSTALLS and

The *VERY* seems too strong to me - personally I would not call myself
"very experienced" and still I celebrated the 100th build of scipy new
core (on one machine, where I could automate the stuff - no counters on
the others ;-) Of course I have to admit that I really learned a fair
bit during the last years of going through the various stages of scipy
;-) Therefore I propose to omit the "VERY" to get more testing.

> like getting your hands dirty,

also this is maybe a bit too discouraging? (I think this phrase could
be just left out) ((if one uses gcc (<4.0), g77, a plain ATLAS and a
normal fftw2 on a normal machine things are pretty clean already))

> download the tarball for the new version,

should the tarball really be recommended at this stage, or wouldn't svn
(or just http://new.scipy.org/Wiki/Download as the only pointer) be
sufficient?

From perry at stsci.edu  Thu Dec 22 10:50:28 2005
From: perry at stsci.edu (Perry Greenfield)
Date: Thu, 22 Dec 2005 10:50:28 -0500
Subject: [SciPy-dev] old web site front page
In-Reply-To: <200512221506.jBMF6m9E006252@oobleck.astro.cornell.edu>
References: <200512211429.jBLETkrM003677@oobleck.astro.cornell.edu> <200512211942.jBLJg9mO004578@oobleck.astro.cornell.edu> <200512221506.jBMF6m9E006252@oobleck.astro.cornell.edu>
Message-ID:

On Dec 22, 2005, at 10:06 AM, Joe Harrington wrote:

> You and Ed came up with similar suggestions (on your first two
> points). Sure, take out the "VERY" and change "like" to "don't mind".
> But, the more you guys argue that this is not a hard install, the more
> you convince me that it is! Arnd, you applied FIVE conditions just to
> make it "pretty clean"! But, the "pretty clean" builds have binary
> installs already, leaving the "not-so-clean" and "decidedly rotten"
> ones for trial by newbies. Recall that this message is going to
> people who have never installed SciPy and never used it! We're not
> talking to list members here, we're talking to the guy that walked
> into my office the other day and said, hey, I hear you know something
> about this free IDL for Python (I cringed). Sure, he's typed "make
> install" a bunch in the past. But, I wouldn't wish a newcore build on
> him, let alone a full build. He's got work to do, and he's more
> likely to walk away from this if it wastes his time than to dig in.
>
> Is SVN better than tarball? I'll leave that to a developer to say.
> We can change it as the story changes.
>
> --jh--
>

I'll second Joe on this issue. My sense is the usual denizens of
scipy, being comparatively sophisticated (despite proclamations to the
contrary :-) in doing installations, are not aware of how important an
issue this is. I have had many people come to me and say they tried
scipy before and that they gave up trying to install it. When I tell
them that numarray is being replaced with scipy the usual reaction is
of great worry because of the installation issue (I try my best to
convince them that the core will be as easy to install and so forth).
You won't hear from most of these people if they try and fail. They
will just give up and associate bad things with the project. For that
reason I think it is important not to oversell scipy_core to the
general community until it has been beaten on for a while,
installation-wise and shown to be a pretty easy install, and
comparatively stable (seeing weekly updates will scare them quite a bit
too).

I know many are eager to make the switch now, but telling the great
masses to do so too soon will be deadly. We should encourage the
experienced developers to do that now, but not the regular users. I
think we are some months from being ready to do that. Some patience is
needed here.

Perry

From arnd.baecker at web.de  Thu Dec 22 13:06:58 2005
From: arnd.baecker at web.de (Arnd Baecker)
Date: Thu, 22 Dec 2005 19:06:58 +0100 (CET)
Subject: [SciPy-dev] tandg too strict?
Message-ID:

Is there a reason why cephes.tandg(45) should precisely equal 1.0?

On an Itanium 2 I get the following failure:

======================================================================
FAIL: check_tandg (scipy.special.basic.test_basic.test_cephes)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/home/baecker/python2/scipy_lintst_atlas_gcc/lib/python2.4/site-packages/scipy/special/tests/test_basic.py", line 440, in check_tandg
    assert_equal(cephes.tandg(45),1.0)
  File "/home/baecker/python2//scipy_lintst_atlas_gcc/lib/python2.4/site-packages/scipy/test/testing.py", line 666, in assert_equal
    assert desired == actual, msg
AssertionError:
Items are not equal:
DESIRED: 1.0
ACTUAL: 1.0000000000000002

Should the assert_equal be changed into an assert_almost_equal?

Best, Arnd

From robert.kern at gmail.com  Thu Dec 22 13:57:42 2005
From: robert.kern at gmail.com (Robert Kern)
Date: Thu, 22 Dec 2005 13:57:42 -0500
Subject: [SciPy-dev] Issues "freezing" scipy using py2app
In-Reply-To:
References:
Message-ID: <43AAF726.1040009@gmail.com>

Pearu Peterson wrote:

> I'd like to learn more about the magic that Python does with .zip files.
> Are there similar issues when using eggs?

Well, setuptools correctly recognizes scipy as "not zip-safe". We use
__file__ and __path__ in a couple of places (we also use
inspect.something_or_other but that's restricted to weave now, and I
believe it is always inspecting code outside of scipy, so it's not a
problem). pkg_resources provides a few ways of getting data from inside
zipped eggs without using __file__. As Bob is working on making py2app
play nicely with eggs, if we were to use pkg_resources, then we would
be set.

From python setup.py bdist_egg on scipy_core (a few days old):

zip_safe flag not set; analyzing archive contents...
scipy.core_version: module references __file__ scipy.base.setup: module references __file__ scipy.distutils.exec_command: module references __file__ scipy.distutils.misc_util: module references __file__ scipy.distutils.system_info: module references __file__ scipy.distutils.command.build_src: module references __file__ scipy.f2py.diagnose: module references __file__ scipy.f2py.f2py2e: module references __file__ scipy.f2py.setup: module references __file__ scipy.test.logging: module references __file__ scipy.test.logging: module MAY be using inspect.stack scipy.test.testing: module references __file__ scipy.weave.blitz_spec: module references __file__ scipy.weave.blitz_tools: module references __file__ scipy.weave.bytecodecompiler: module MAY be using inspect.getsource scipy.weave.bytecodecompiler: module MAY be using inspect.getcomments scipy.weave.bytecodecompiler: module MAY be using inspect.stack scipy.weave.c_spec: module references __file__ scipy.weave.catalog: module references __file__ scipy.weave.catalog: module references __path__ scipy.weave.inline_tools: module references __file__ See http://peak.telecommunity.com/DevCenter/PkgResources#resourcemanager-api for details on how to use pkg_resources for accessing in-package resources. -- Robert Kern robert.kern at gmail.com "In the fields of hell where the grass grows high Are the graves of dreams allowed to die." -- Richard Harter From loredo at astro.cornell.edu Thu Dec 22 15:04:51 2005 From: loredo at astro.cornell.edu (Tom Loredo) Date: Thu, 22 Dec 2005 15:04:51 -0500 Subject: [SciPy-dev] Where is the hyperu source? In-Reply-To: References: Message-ID: <1135281891.43ab06e3634a7@astrosun2.astro.cornell.edu> Hi folks, Where is the source code that calculates scipy.special.hyperu? I've done a "grep -i hyperu" in cephes and other subdirs of special, and can't find it. Thanks, Tom ------------------------------------------------- This mail sent through IMP: http://horde.org/imp/ From robert.kern at gmail.com Thu Dec 22 15:11:26 2005 From: robert.kern at gmail.com (Robert Kern) Date: Thu, 22 Dec 2005 15:11:26 -0500 Subject: [SciPy-dev] Where is the hyperu source? In-Reply-To: <1135281891.43ab06e3634a7@astrosun2.astro.cornell.edu> References: <1135281891.43ab06e3634a7@astrosun2.astro.cornell.edu> Message-ID: <43AB086E.1090104@gmail.com> Tom Loredo wrote: > Hi folks, > > Where is the source code that calculates scipy.special.hyperu? > I've done a "grep -i hyperu" in cephes and other subdirs of > special, and can't find it. [special]$ grep -ri hyperu . Binary file ./_cephes.so matches ./_cephesmodule.c: f = PyUFunc_FromFuncAndData(cephes3_functions, hypU_data, cephes_4_types, 2, 3, 1, PyUFunc_None, "hyperu", hyperu_doc, 0); ... hypU_data contains the appropriate function pointers. [special]$ grep -ri hypu . Binary file ./_cephes.so matches ./_cephesmodule.c:static void * hypU_data[] = { (void *)hypU_wrap, (void *)hypU_wrap, }; ./_cephesmodule.c: f = PyUFunc_FromFuncAndData(cephes3_functions, hypU_data, cephes_4_types, 2, 3, 1, PyUFunc_None, "hyperu", hyperu_doc, 0); ./specfun_wrappers.c:double hypU_wrap(double a, double b, double x) { ... 
Looking at hypU_wrap in specfun_wrappers.c: double hypU_wrap(double a, double b, double x) { double out; int md; /* method code --- not returned */ F_FUNC(chgu,CHGU)(&a, &b, &x, &out, &md); if (out == 1e300) out = INFINITY; return out; } scipy/Lib/special/specfun/specfun.f: SUBROUTINE CHGU(A,B,X,HU,MD) C C ======================================================= C Purpose: Compute the confluent hypergeometric function C U(a,b,x) C Input : a --- Parameter C b --- Parameter C x --- Argument ( x > 0 ) C Output: HU --- U(a,b,x) C MD --- Method code C Routines called: C (1) CHGUS for small x ( MD=1 ) C (2) CHGUL for large x ( MD=2 ) C (3) CHGUBI for integer b ( MD=3 ) C (4) CHGUIT for numerical integration ( MD=4 ) C ======================================================= -- Robert Kern robert.kern at gmail.com "In the fields of hell where the grass grows high Are the graves of dreams allowed to die." -- Richard Harter From jswhit at fastmail.fm Thu Dec 22 19:19:16 2005 From: jswhit at fastmail.fm (Jeff Whitaker) Date: Thu, 22 Dec 2005 17:19:16 -0700 Subject: [SciPy-dev] object array question Message-ID: <43AB4284.1020100@fastmail.fm> Hi: The following script: import scipy.base as SP al = [] for j in range(10): al.append(SP.arange(j+1)) a = SP.array(al,'O') print a print a[5][:] crashes at the last line (using scipy_core 0.8.4) when trying to access the object array. [mac28:~/python/netcdf4] jsw% python test_objectarr_sp.py [array([0]), array([0, 1]), array([0, 1, 2]), array([0, 1, 2, 3]), array([0, 1, 2, 3, 4]), array([0, 1, 2, 3, 4, 5]), array([0, 1, 2, 3, 4, 5, 6]), array([0, 1, 2, 3, 4, 5, 6, 7]), array([0, 1, 2, 3, 4, 5, 6, 7, 8]), array([0, 1, 2, 3, 4, 5, 6, 7, 8, 9])] Traceback (most recent call last): File "test_objectarr_sp.py", line 9, in ? print a[5][:] ValueError: 0-d arrays can't be indexed. In numarray, this worked: import numarray.objects as SP al = [] for j in range(10): al.append(SP.arange(j+1)) a = SP.array(al,typecode='O') print a print a[5][:] [mac28:~/python/netcdf4] jsw% python test_objectarr_sp.py [array([0]) array([0, 1]) array([0, 1, 2]) array([0, 1, 2, 3]) array([0, 1, 2, 3, 4]) array([0, 1, 2, 3, 4, 5]) array([0, 1, 2, 3, 4, 5, 6]) array([0, 1, 2, 3, 4, 5, 6, 7]) array([0, 1, 2, 3, 4, 5, 6, 7, 8]) array([0, 1, 2, 3, 4, 5, 6, 7, 8, 9])] [0 1 2 3 4 5] Any idea why scipy_core behaves this way? How do you access the data in objects stored in a object array? -Jeff -- Jeffrey S. Whitaker Phone : (303)497-6313 Meteorologist FAX : (303)497-6449 NOAA/OAR/PSD R/PSD1 Email : Jeffrey.S.Whitaker at noaa.gov 325 Broadway Office : Skaggs Research Cntr 1D-124 Boulder, CO, USA 80303-3328 Web : http://tinyurl.com/5telg From pebarrett at gmail.com Fri Dec 23 08:46:58 2005 From: pebarrett at gmail.com (Paul Barrett) Date: Fri, 23 Dec 2005 08:46:58 -0500 Subject: [SciPy-dev] Subclassing Traits Message-ID: <40e64fa20512230546t4bf15aadw9e56a86b8105a30f@mail.gmail.com> Could someone provide an example of how to subclass the enthought Traits class? I'd like to add a couple of attributes to the class, e.g. a units attribute. -- Paul -- Paul Barrett, PhD Johns Hopkins University Assoc. Research Scientist Dept of Physics and Astronomy Phone: 410-516-5190 Baltimore, MD 21218 -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From oliphant.travis at ieee.org Sun Dec 25 01:22:02 2005 From: oliphant.travis at ieee.org (Travis Oliphant) Date: Sat, 24 Dec 2005 23:22:02 -0700 Subject: [SciPy-dev] old web site front page In-Reply-To: References: <200512211429.jBLETkrM003677@oobleck.astro.cornell.edu> <200512211942.jBLJg9mO004578@oobleck.astro.cornell.edu> <200512221506.jBMF6m9E006252@oobleck.astro.cornell.edu> Message-ID: <43AE3A8A.6020702@ieee.org> Perry Greenfield wrote: >On Dec 22, 2005, at 10:06 AM, Joe Harrington wrote: > > >I'll second Joe on this issue. My sense is that the usual denizens of >scipy, being comparatively sophisticated (despite proclamations to the >contrary :-) in doing installations, are not aware of how important an >issue this is. I have had many people come to me and say they tried >scipy before and that they gave up trying to install it. When I tell >them that numarray is being replaced with scipy the usual reaction is >one of great worry because of the installation issue (I try my best to >convince them that the core will be as easy to install and so forth). >You won't hear from most of these people if they try and fail. They >will just give up and associate bad things with the project. For that >reason I think it is important not to oversell scipy_core to the >general community until it has been beaten on for a while, >installation-wise, and shown to be a pretty easy install, and >comparatively stable (seeing weekly updates will scare them quite a bit >too). > > We really need to separate scipy_core from full scipy. The lists are full of "install" problems that are mostly "full scipy" problems or problems related to confusion regarding the packages that exist. I think there should be plenty of advertisement for scipy_core. The flux in scipy_core right now is the naming scheme. This should settle down in a matter of weeks. I agree that *a lot* of patience is in order for full scipy. But scipy_core installs should be settling down. I agree that new users should be informed of the situation and allowed to choose their poison, recognizing that if they are the first to try it out on a new platform there may be issues. It is important that people believe scipy_core can be installed without difficulty. I think we've made lots of strides here, but no doubt we will learn more as people try it out with strange configurations. As a data point: I was recently on a totally new system with Intel's MKL BLAS, and python setup.py install went fine after an SVN checkout and linked against the MKL BLAS without difficulty (and I didn't need to edit any configuration files either). So, I was pretty happy with that. Thanks, Pearu.... Right now, it's just the sub-packages in basic (fft, linalg, and random) that are being shuffled a bit so that the name-space collisions with full scipy and the import magic will disappear. I'm *thrilled* to see the web-page receive the overhaul and point people in the right direction. -Travis
From oliphant.travis at ieee.org Sun Dec 25 05:18:45 2005 From: oliphant.travis at ieee.org (Travis Oliphant) Date: Sun, 25 Dec 2005 03:18:45 -0700 Subject: [SciPy-dev] Packaging Changes to Core in SVN Message-ID: <43AE7205.3050807@ieee.org> Hi folks and Merry Christmas, I hope this is the last of the major changes to the scipy core structure.
After playing around with various alternative homes for the fft, linear algebra, and random number routines, I think (with Robert's suggestiong) we've found a permanent home for them under scipy.corefft scipy.corelinalg scipy.corerandom In these packages we will try to import the full scipy versions of any routines (fft and inv in particular) so that full scipy installs can have improved (but possibly difficult to install) versions of these basic routines. I am of a mind to get rid of lib out of scipy_core and move mtrand to scipy. Then move dotblas to base and be done with the lib directory in scipy_core. I'm beginning to see the wisdom in "flat is better than nested" If we can get the any wrinkles out of the new structure, I'd like to make another release after the new year so that the improved packaging can start getting used. Best, -Travis From jonathan.taylor at utoronto.ca Sun Dec 25 18:09:43 2005 From: jonathan.taylor at utoronto.ca (Jonathan Taylor) Date: Sun, 25 Dec 2005 18:09:43 -0500 Subject: [SciPy-dev] Packaging Changes to Core in SVN In-Reply-To: <43AE7205.3050807@ieee.org> References: <43AE7205.3050807@ieee.org> Message-ID: <463e11f90512251509x77a72eb8od5c45d018e9c84d7@mail.gmail.com> I suspect that this is why I am having difficulties with new SVN scipy and scipy_core. After a fresh install: In [1]: import scipy Failed to import signal cannot import name random Failed to import optimize cannot import name random Failed to import cluster No module named basic.random Failed to import fftpack No module named basic.fftpack Failed to import stats cannot import name random I guess we need to import corerandom as random. Also... should it be importing a better random module from scipy main? Happy Holidays, Jon. On 12/25/05, Travis Oliphant wrote: > > Hi folks and Merry Christmas, > > I hope this is the last of the major changes to the scipy core > structure. After playing around with various alternative homes for the > fft, linear algebra, and random number routines, I think (with Robert's > suggestiong) we've found a permanent home for them under > > scipy.corefft > scipy.corelinalg > scipy.corerandom > > In these packages we will try to import the full scipy versions of any > routines (fft and inv in particular) so that full scipy installs can > have improved (but possibly difficult to install) versions of these > basic routines. > > I am of a mind to get rid of lib out of scipy_core and move mtrand to > scipy. Then move dotblas to base and be done with the lib directory in > scipy_core. I'm beginning to see the wisdom in "flat is better than > nested" > > If we can get the any wrinkles out of the new structure, I'd like to > make another release after the new year so that the improved packaging > can start getting used. > > Best, > > -Travis > > _______________________________________________ > Scipy-dev mailing list > Scipy-dev at scipy.net > http://www.scipy.net/mailman/listinfo/scipy-dev > From jonathan.taylor at utoronto.ca Sun Dec 25 18:33:43 2005 From: jonathan.taylor at utoronto.ca (Jonathan Taylor) Date: Sun, 25 Dec 2005 18:33:43 -0500 Subject: [SciPy-dev] What scipy release goes with scipy_core 0.8.4? Message-ID: <463e11f90512251533g5cc40d1aq1edc6b1a5582935c@mail.gmail.com> Since I would like to have something frozen for my actual work. What scipy release goes with scipy_core 0.8.4? Thanks, Jon. 
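(The "try to import the full scipy versions" arrangement Travis describes for the new core* packages presumably amounts to something like the following -- an illustrative sketch with made-up module names, not the actual scipy_core source:)

    # sketch of a hypothetical scipy/corefft/__init__.py
    from fft_lite import fft, ifft            # made-up name for the core versions
    try:
        from scipy.fftpack import fft, ifft   # prefer the full-scipy routines
    except ImportError:
        pass                                  # full scipy not installed; keep core versions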
From jonathan.taylor at utoronto.ca Sun Dec 25 18:36:02 2005 From: jonathan.taylor at utoronto.ca (Jonathan Taylor) Date: Sun, 25 Dec 2005 18:36:02 -0500 Subject: [SciPy-dev] Development method Message-ID: <463e11f90512251536g81a20betecc7e56f0a2b3df5@mail.gmail.com> I am wondering what the best way to go about playing around with the code base for scipy? So far I have been running python setup.py install --prefix=~/scipyfun and then import and see what my changes do. Is this what most of you gals/guys do? Thanks for any advice Jon. From cookedm at physics.mcmaster.ca Sun Dec 25 18:39:18 2005 From: cookedm at physics.mcmaster.ca (David M. Cooke) Date: Sun, 25 Dec 2005 18:39:18 -0500 Subject: [SciPy-dev] Development method In-Reply-To: <463e11f90512251536g81a20betecc7e56f0a2b3df5@mail.gmail.com> References: <463e11f90512251536g81a20betecc7e56f0a2b3df5@mail.gmail.com> Message-ID: <20051225233918.GA29068@arbutus.physics.mcmaster.ca> On Sun, Dec 25, 2005 at 06:36:02PM -0500, Jonathan Taylor wrote: > I am wondering what the best way to go about playing around with the > code base for scipy? > > So far I have been running python setup.py install --prefix=~/scipyfun > > and then import and see what my changes do. Is this what most of you > gals/guys do? That's pretty much what I do, installing into my local python module directory (with --prefix=~/usr). That means I eat my own dogfood :) After doing that, in another window I run python -c 'import scipy; scipy.test()' -- |>|\/|< /--------------------------------------------------------------------------\ |David M. Cooke http://arbutus.physics.mcmaster.ca/dmc/ |cookedm at physics.mcmaster.ca From oliphant.travis at ieee.org Sun Dec 25 22:56:23 2005 From: oliphant.travis at ieee.org (Travis Oliphant) Date: Sun, 25 Dec 2005 20:56:23 -0700 Subject: [SciPy-dev] Packaging Changes to Core in SVN In-Reply-To: <463e11f90512251509x77a72eb8od5c45d018e9c84d7@mail.gmail.com> References: <43AE7205.3050807@ieee.org> <463e11f90512251509x77a72eb8od5c45d018e9c84d7@mail.gmail.com> Message-ID: <43AF69E7.1010307@ieee.org> Jonathan Taylor wrote: >I suspect that this is why I am having difficulties with new SVN scipy >and scipy_core. After a fresh install: > >In [1]: import scipy >Failed to import signal >cannot import name random >Failed to import optimize >cannot import name random >Failed to import cluster >No module named basic.random >Failed to import fftpack >No module named basic.fftpack >Failed to import stats >cannot import name random > >I guess we need to import corerandom as random. > > Full scipy has not been updated with the changes. This is typical of any changes to scipy_core. Scipy SVN does not always track scipy_core svn exactly. They are separate packages and need to be discussed separately. Nonetheless, full scipy will usually track scipy_core rather quickly (since most of the developers use both). This should be easy to fix and ready soon (few hours to a day). -Travis From oliphant.travis at ieee.org Sun Dec 25 22:58:02 2005 From: oliphant.travis at ieee.org (Travis Oliphant) Date: Sun, 25 Dec 2005 20:58:02 -0700 Subject: [SciPy-dev] What scipy release goes with scipy_core 0.8.4? In-Reply-To: <463e11f90512251533g5cc40d1aq1edc6b1a5582935c@mail.gmail.com> References: <463e11f90512251533g5cc40d1aq1edc6b1a5582935c@mail.gmail.com> Message-ID: <43AF6A4A.6050306@ieee.org> Jonathan Taylor wrote: > Since I would like to have something frozen for my actual work. >What scipy release goes with scipy_core 0.8.4? 
> > > There is no scipy release to correspond with scipy_core 0.8.4 because nobody has made one. As package management of scipy_core settles down, another scipy_core release will be made and release of scipy can be made. -Travis From oliphant.travis at ieee.org Mon Dec 26 04:00:20 2005 From: oliphant.travis at ieee.org (Travis Oliphant) Date: Mon, 26 Dec 2005 02:00:20 -0700 Subject: [SciPy-dev] Example of power of new data-type descriptors. Message-ID: <43AFB124.9060507@ieee.org> I'd like more people to know about the new power that is in scipy core due to the general data-type descriptors that can now be used to define numeric arrays. Towards that effort here is a simple example (be sure to use latest SVN -- there were a coupld of minor changes that improve usability made recently). Notice this example does not use a special "record" array subclass. This is just a regular array. I'm kind of intrigued (though not motivated to pursue) the possibility of accessing (or defining) databases directly into scipy_core arrays using the record functionality. # Define a new data-type descriptor >>> import scipy >>> dtype = scipy.dtypedescr({'names': ['name', 'age', 'weight'], 'formats': ['S30', 'i2', 'f4']}) >>> a = scipy.array([('Bill',31,260),('Fred',15,135)], dtype=dtype) # the argument to dtypedescr could have also been placed here as the argument to dtype >>> print a['name'] [Bill Fred] >>> print a['age'] [31 15] >>> print a['weight'] [ 260. 135.] >>> print a[0] ('Bill', 31, 260.0) >>> print a[1] ('Fred', 15, 135.0) It seems to me there are some very interesting possibilities with this new ability. The record array subclass adds an improved scalar type (the record) and attribute access to get at the fields: (e.g. a.name, a.age, and a.weight). But, if you don't need attribute access you can use regular arrays to do a lot of what you might need a record array to accomplish for you. I'd love to see what people come up with using this new facility. The new array PEP for Python basically proposes adding a very simple array object (just the basic PyArrayObject * of Numeric with a bare-bones type-object table) plus this new data-type descriptor object to Python and a very few builtin data-type descriptors (perhaps just object initially). This would basically add the array interface to Python directly and allow people to start using it generally. The PEP is slow going because it is not on my priority list right now because it is not essential to making scipy_core work well. But, I would love to have more people ruminating on the basic ideas which I think are crystallizing. Best wishes for a new year, -Travis Oliphant From pwang at enthought.com Wed Dec 28 10:30:38 2005 From: pwang at enthought.com (Peter Wang) Date: Wed, 28 Dec 2005 09:30:38 -0600 (CST) Subject: [SciPy-dev] Subclassing Traits In-Reply-To: <40e64fa20512230546t4bf15aadw9e56a86b8105a30f@mail.gmail.com> Message-ID: <1135783838.11385@mail.enthought.com> Paul Barrett wrote .. > Could someone provide an example of how to subclass the enthought Traits > class? I'd like to add a couple of attributes to the class, e.g. a units > attribute. Hi Paul, Do you mean subclassing the HasTraits metaclass to create a new derivative metaclass, or just subclassing an instance of HasTraits? For the former, you can look at HasPrivateTraits and HasStrictTraits classes as a pointer. In the latter case, if you just want to add a "units" trait at the top of your object hierarchy, can't you use normal Python inheritance? 
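(For the simple case, a minimal sketch of that plain-inheritance route -- the import path and trait declarations are illustrative and vary across Traits releases, so treat this as a guess rather than verified code for any particular version:)

    from enthought.traits import HasTraits, Trait

    class Quantity(HasTraits):
        # ordinary subclassing: just declare the extra traits you want
        value = Trait(0.0)                # a float-valued trait, default 0.0
        units = Trait('dimensionless')    # the added units attribute

    q = Quantity()
    q.value = 9.81
    q.units = 'm/s**2'
    print q.value, q.units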
Sorry for the lagged response, HTH, Peter From chanley at stsci.edu Wed Dec 28 14:45:38 2005 From: chanley at stsci.edu (Christopher Hanley) Date: Wed, 28 Dec 2005 14:45:38 -0500 Subject: [SciPy-dev] big in records fromfile Message-ID: <43B2EB62.2070907@stsci.edu> Hi Travis, I've been working on the pyfits port and have run across a problem with fromfiles in records.py. Using the testdata.fits file that is part of the numarray distribution I receive the following output during testing with numarray: In [1]: from numarray import records In [2]: from numarray import testdata In [3]: fd = open("testdata.fits") In [4]: fd.seek(2880*2) In [5]: r = records.fromfile(fd, formats='f8,i4,a5', shape=3, byteorder='big') In [6]: r Out[6]: array( [(5.1000000000000005, 61, 'abcde'), (5.2000000000000002, 62, 'fghij'), (5.3000000000000007, 63, 'kl')], formats=['1Float64', '1Int32', '1a5'], shape=3, names=['c1', 'c2', 'c3']) In [7]: r[0] Out[7]: In [8]: print r[0] (5.1000000000000005, 61, 'abcde') However, when using the same file with scipy.base I get the following results: In [1]: import scipy.base In [2]: from scipy.base import records as rec In [3]: fd = open("testdata.fits") In [4]: fd.seek(2880*2) In [5]: r = rec.fromfile(fd, formats='f8,i4,a5', shape=3, byteorder='big') In [6]: r[0] Out[6]: (1.2475423563769078e+190, 1023410176, 'abcde') In [7]: print r[0] (1.2475423563769078e+190, 1023410176, 'abcde') This example also raises another question. How would you want me to create a unit test for this type of operation? Should I check a FITS file into the test directory? Thank you for your time and help, Chris From oliphant at ee.byu.edu Wed Dec 28 14:52:16 2005 From: oliphant at ee.byu.edu (Travis Oliphant) Date: Wed, 28 Dec 2005 12:52:16 -0700 Subject: [SciPy-dev] big in records fromfile In-Reply-To: <43B2EB62.2070907@stsci.edu> References: <43B2EB62.2070907@stsci.edu> Message-ID: <43B2ECF0.7040009@ee.byu.edu> Christopher Hanley wrote: >Hi Travis, > >I've been working on the pyfits port and have run across a problem with >fromfiles in records.py. Using the testdata.fits file that is part of >the numarray distribution I receive the following output during testing >with numarray: > >In [1]: from numarray import records > >In [2]: from numarray import testdata > >In [3]: fd = open("testdata.fits") > >In [4]: fd.seek(2880*2) > >In [5]: r = records.fromfile(fd, formats='f8,i4,a5', shape=3, >byteorder='big') > >In [6]: r >Out[6]: >array( >[(5.1000000000000005, 61, 'abcde'), >(5.2000000000000002, 62, 'fghij'), >(5.3000000000000007, 63, 'kl')], >formats=['1Float64', '1Int32', '1a5'], >shape=3, >names=['c1', 'c2', 'c3']) > >In [7]: r[0] >Out[7]: > >In [8]: print r[0] >(5.1000000000000005, 61, 'abcde') > > >However, when using the same file with scipy.base I get the following >results: > >In [1]: import scipy.base > >In [2]: from scipy.base import records as rec > >In [3]: fd = open("testdata.fits") > >In [4]: fd.seek(2880*2) > >In [5]: r = rec.fromfile(fd, formats='f8,i4,a5', shape=3, byteorder='big') > >In [6]: r[0] >Out[6]: (1.2475423563769078e+190, 1023410176, 'abcde') > >In [7]: print r[0] >(1.2475423563769078e+190, 1023410176, 'abcde') > > Hmm. I thought we fixed that problem. In fact I ran that very test to verify it. I'll have to look into it again. >This example also raises another question. How would you want me to >create a unit test for this type of operation? Should I check a FITS >file into the test directory? > > > I don't see a problem with checking in a FITS file (if it's not too big). 
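(A sketch of what such a regression test could look like, reusing the fromfile call from the report above; the positional access into a record is a guess at the API, not something verified against that revision:)

    from scipy.base import records as rec

    def check_fromfile_bigendian():
        fd = open('testdata.fits')
        fd.seek(2880*2)                     # skip the two 2880-byte FITS header blocks
        r = rec.fromfile(fd, formats='f8,i4,a5', shape=3, byteorder='big')
        assert abs(r[0][0] - 5.1) < 1e-12   # f8 field came back byte-swapped
        assert r[0][1] == 61                # i4 field
        assert r[0][2] == 'abcde'           # a5 field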
However, setup.py needs someway to know it is supposed to be distributed (either the MANIFEST.in file) or adding it to the depends keyword in one of the packages. -Travis From oliphant at ee.byu.edu Wed Dec 28 15:09:40 2005 From: oliphant at ee.byu.edu (Travis Oliphant) Date: Wed, 28 Dec 2005 13:09:40 -0700 Subject: [SciPy-dev] big in records fromfile In-Reply-To: <43B2EB62.2070907@stsci.edu> References: <43B2EB62.2070907@stsci.edu> Message-ID: <43B2F104.1040001@ee.byu.edu> Christopher Hanley wrote: >In [1]: import scipy.base > >In [2]: from scipy.base import records as rec > >In [3]: fd = open("testdata.fits") > >In [4]: fd.seek(2880*2) > >In [5]: r = rec.fromfile(fd, formats='f8,i4,a5', shape=3, byteorder='big') > >In [6]: r[0] >Out[6]: (1.2475423563769078e+190, 1023410176, 'abcde') > >In [7]: print r[0] >(1.2475423563769078e+190, 1023410176, 'abcde') > > Found the problem. I was ignoring byteorder in fromfile. I just needed to pass it along to the recarray constructor and it seems to be fixed. I've checked in the change. The testdata.fits file seems plenty small enough to include directly, so go ahead. -Travis From chanley at stsci.edu Thu Dec 29 11:03:18 2005 From: chanley at stsci.edu (Christopher Hanley) Date: Thu, 29 Dec 2005 11:03:18 -0500 Subject: [SciPy-dev] big in records fromfile In-Reply-To: <43B2F104.1040001@ee.byu.edu> References: <43B2EB62.2070907@stsci.edu> <43B2F104.1040001@ee.byu.edu> Message-ID: <43B408C6.2030806@stsci.edu> Travis Oliphant wrote: > Found the problem. I was ignoring byteorder in fromfile. I just needed > to pass it along to the recarray constructor and it seems to be fixed. > I've checked in the change. > > The testdata.fits file seems plenty small enough to include directly, so > go ahead. > > -Travis Thank you Travis! I've added the fromfile method test to the records unittest. Now back to a scipy_core pyfits version. Chris From chanley at stsci.edu Thu Dec 29 14:30:31 2005 From: chanley at stsci.edu (Christopher Hanley) Date: Thu, 29 Dec 2005 14:30:31 -0500 Subject: [SciPy-dev] chararry array method Message-ID: <43B43957.6080509@stsci.edu> Hi Travis, Should the array method in chararry have a shape parameter? Chris From oliphant at ee.byu.edu Thu Dec 29 16:30:17 2005 From: oliphant at ee.byu.edu (Travis Oliphant) Date: Thu, 29 Dec 2005 14:30:17 -0700 Subject: [SciPy-dev] chararry array method In-Reply-To: <43B43957.6080509@stsci.edu> References: <43B43957.6080509@stsci.edu> Message-ID: <43B45569.4040408@ee.byu.edu> Christopher Hanley wrote: >Hi Travis, > >Should the array method in chararry have a shape parameter? > > I'm not sure. Presumably it would pick up its shape from the input object. At some point, I think I'd like to move the chararray into C and have it be the default return object whenever a string or unicode base-type array is requested. You can already have arrays of strings without the chararray, except comparisons don't work. There are lots of ways to solve that problem, and I'm not sure the chararray is the best way. But, it's functional. -Travis From chanley at stsci.edu Thu Dec 29 16:46:00 2005 From: chanley at stsci.edu (Christopher Hanley) Date: Thu, 29 Dec 2005 16:46:00 -0500 Subject: [SciPy-dev] chararry array method In-Reply-To: <43B45569.4040408@ee.byu.edu> References: <43B43957.6080509@stsci.edu> <43B45569.4040408@ee.byu.edu> Message-ID: <43B45918.7030902@stsci.edu> Travis Oliphant wrote: > > I'm not sure. Presumably it would pick up its shape from the input object. 
> > At some point, I think I'd like to move the chararray into C and have it > be the default return object whenever a string or unicode base-type > array is requested. > > You can already have arrays of strings without the chararray, except > comparisons don't work. There are lots of ways to solve that problem, > and I'm not sure the chararray is the best way. But, it's functional. > > -Travis > I guess I'm trying to understand if the following example will work. In numarray, I can say the following: In [24]: from numarray import strings as chararray In [25]: arr2 = chararray.array('abcdefg'*10,itemsize=10) In [26]: arr2 Out[26]: CharArray(['abcdefgabc', 'defgabcdef', 'gabcdefgab', 'cdefgabcde', 'fgabcdefga', 'bcdefgabcd', 'efgabcdefg']) However, when attempting to do something similar with scipy.base I receive the following: In [27]: arr = sb.chararray.array('abcdefg'*10,itemlen=10) --------------------------------------------------------------------------- exceptions.ValueError Traceback (most recent call last) /data/sparty1/dev/devCode/ /data/sparty1/dev/site-packages/lib/python/scipy/base/chararray.py in array(obj, itemlen, copy, unicode, fortran) 322 return chararray(val.shape, itemlen, unicode, buffer=val, 323 strides=val.strides, --> 324 fortran=fortran) 325 326 def asarray(obj, itemlen=7, unicode=False, fortran=False): /data/sparty1/dev/site-packages/lib/python/scipy/base/chararray.py in __new__(subtype, shape, itemlen, unicode, buffer, offset, strides, fortran) 28 buffer=buffer, 29 offset=offset, strides=strides, ---> 30 fortran=fortran) 31 return self 32 ValueError: need to give a valid shape as the first argument Chris From oliphant at ee.byu.edu Thu Dec 29 17:00:55 2005 From: oliphant at ee.byu.edu (Travis Oliphant) Date: Thu, 29 Dec 2005 15:00:55 -0700 Subject: [SciPy-dev] chararry array method In-Reply-To: <43B45918.7030902@stsci.edu> References: <43B43957.6080509@stsci.edu> <43B45569.4040408@ee.byu.edu> <43B45918.7030902@stsci.edu> Message-ID: <43B45C97.2090103@ee.byu.edu> Christopher Hanley wrote: >Travis Oliphant wrote: > > >>I'm not sure. Presumably it would pick up its shape from the input object. >> >>At some point, I think I'd like to move the chararray into C and have it >>be the default return object whenever a string or unicode base-type >>array is requested. >> >>You can already have arrays of strings without the chararray, except >>comparisons don't work. There are lots of ways to solve that problem, >>and I'm not sure the chararray is the best way. But, it's functional. >> >>-Travis >> >> >> > >I guess I'm trying to understand if the following example will work. In >numarray, I can say the following: > > >In [24]: from numarray import strings as chararray > >In [25]: arr2 = chararray.array('abcdefg'*10,itemsize=10) > >In [26]: arr2 >Out[26]: >CharArray(['abcdefgabc', 'defgabcdef', 'gabcdefgab', 'cdefgabcde', > 'fgabcdefga', 'bcdefgabcd', 'efgabcdefg']) > > So, this is taking a buffer and chopping it into string bits. Currently, the chararray array function does not take a buffer input. I would suggest not using character arrays in pyfits just yet. They are not really necessary, because normal arrays can be of string type. If you really need the functionality of the chararray (string methods or equality testing), then create it after creating the normal array (no data will be copied). I'd like to better understand use cases of special string arrays a little better. I'm not sure I completely understand why numarray split everything into different array types. 
Much more is supported in the basic array type in scipy. -Travis From oliphant at ee.byu.edu Thu Dec 29 17:33:11 2005 From: oliphant at ee.byu.edu (Travis Oliphant) Date: Thu, 29 Dec 2005 15:33:11 -0700 Subject: [SciPy-dev] chararry array method In-Reply-To: <43B45918.7030902@stsci.edu> References: <43B43957.6080509@stsci.edu> <43B45569.4040408@ee.byu.edu> <43B45918.7030902@stsci.edu> Message-ID: <43B46427.9040404@ee.byu.edu> Christopher Hanley wrote: >Travis Oliphant wrote: > > >>I'm not sure. Presumably it would pick up its shape from the input object. >> >>At some point, I think I'd like to move the chararray into C and have it >>be the default return object whenever a string or unicode base-type >>array is requested. >> >>You can already have arrays of strings without the chararray, except >>comparisons don't work. There are lots of ways to solve that problem, >>and I'm not sure the chararray is the best way. But, it's functional. >> >>-Travis >> >> >> > >I guess I'm trying to understand if the following example will work. In >numarray, I can say the following: > > >In [24]: from numarray import strings as chararray > >In [25]: arr2 = chararray.array('abcdefg'*10,itemsize=10) > >In [26]: arr2 >Out[26]: >CharArray(['abcdefgabc', 'defgabcdef', 'gabcdefgab', 'cdefgabcde', > 'fgabcdefga', 'bcdefgabcd', 'efgabcdefg']) > >However, when attempting to do something similar with scipy.base I >receive the following: > >In [27]: arr = sb.chararray.array('abcdefg'*10,itemlen=10) > > O.K. try it now. I've updated chararray.array to accept a string input and split it like this. But... Change itemlen to itemsize. I fixed this for consistency. -Travis From pearu at scipy.org Fri Dec 30 03:38:38 2005 From: pearu at scipy.org (Pearu Peterson) Date: Fri, 30 Dec 2005 02:38:38 -0600 (CST) Subject: [SciPy-dev] big in records fromfile In-Reply-To: <43B2ECF0.7040009@ee.byu.edu> References: <43B2EB62.2070907@stsci.edu> <43B2ECF0.7040009@ee.byu.edu> Message-ID: On Wed, 28 Dec 2005, Travis Oliphant wrote: > I don't see a problem with checking in a FITS file (if it's not too > big). However, setup.py needs someway to know it is supposed to be > distributed (either the MANIFEST.in file) or adding it to the depends > keyword in one of the packages. A proper way to include this data file to distribution would be to use config.add_data_files(join('tests','testdata.fits')) method in scipy/base/setup.py file. However, since there is already config.add_data_dir('tests') that includes all files in tests/ directory to distribution, nothing needs to be done for testdata.fits. You can check this via python setup.py sdist and check the generated dist/scipy_core*.tag.gz file. Regards, Pearu From faltet at carabos.com Fri Dec 30 13:46:04 2005 From: faltet at carabos.com (Francesc Altet) Date: Fri, 30 Dec 2005 19:46:04 +0100 Subject: [SciPy-dev] Example of power of new data-type descriptors. In-Reply-To: <43AFB124.9060507@ieee.org> References: <43AFB124.9060507@ieee.org> Message-ID: <200512301946.05452.faltet@carabos.com> A Dilluns 26 Desembre 2005 10:00, Travis Oliphant va escriure: > I'd like more people to know about the new power that is in scipy core > due to the general data-type descriptors that can now be used to define > numeric arrays. Towards that effort here is a simple example (be sure > to use latest SVN -- there were a coupld of minor changes that improve > usability made recently). Notice this example does not use a special > "record" array subclass. This is just a regular array. 
IMO, this is very good stuff, and it opens the door to supporting homogeneous data, heterogeneous data, and character strings in just one object. That makes the inclusion of such an object in Python a very big improvement, because people will finally have a very effective container for virtually *any* kind of large dataset in an easy way. I'm personally very excited about this new functionality :-) Just a few quirks (using scipy_core 0.9.0.1713) > >>> print a[0] > > ('Bill', 31, 260.0) For me, this prints: In [87]: a[0] Out[87]: ('Bill\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00', 31, 260.0) which looks a bit ugly. However: In [86]: a['name'] Out[86]: array([Bill, Fred], dtype=(string,30)) seems fine. Also, I find the name of the .getfield() method a bit confusing: In [71]: a.getfield? Type: builtin_function_or_method Base Class: String Form: Namespace: Interactive Docstring: m.getfield(dtype, offset) returns a field of the given array as a certain type. A field is a view of the array's data with each itemsize determined by the given type and the offset into the current array. So, whoever generates a heterogeneous generic array may be tempted to call getfield() in order to get an actual field of the array, and will be disappointed. I suggest changing this name to .viewas() or just .as(), and keeping the 'getfield' name for heterogeneous datasets. Cheers, -- >0,0< Francesc Altet http://www.carabos.com/ V V Cárabos Coop. V. Enjoy Data "-"
From faltet at carabos.com Fri Dec 30 13:18:25 2005 From: faltet at carabos.com (Francesc Altet) Date: Fri, 30 Dec 2005 19:18:25 +0100 Subject: [SciPy-dev] Sorting speed Message-ID: <200512301918.25444.faltet@carabos.com> Hi, It seems that scipy_core (0.9.0.1713) is far slower than numarray (1.5.0) when sorting arrays: In [43]: t5=timeit.Timer('a=sc.empty(shape=10000);a.sort()', 'import scipy.base as sc') In [44]: t5.repeat(3,100) Out[44]: [0.40603208541870117, 0.41605615615844727, 0.39800286293029785] In [45]: t4=timeit.Timer('a=na.array(None, shape=10000);a.sort()', 'import numarray as na') In [46]: t4.repeat(3,100) Out[46]: [0.090479135513305664, 0.086208105087280273, 0.086167097091674805] i.e. numarray is roughly 5x faster than scipy_core. As PyTables makes extensive use of this kind of sorting for indexing, it would be nice to see scipy_core reach at least the speed of numarray here. Nevertheless, as this only affects the array package in the core, and there are no plans to migrate to scipy_core anytime soon, this is not a high priority for me. Just wanted to point it out. Thanks, -- >0,0< Francesc Altet http://www.carabos.com/ V V Cárabos Coop. V.
Enjoy Data "-"
From oliphant.travis at ieee.org Fri Dec 30 14:21:24 2005 From: oliphant.travis at ieee.org (Travis Oliphant) Date: Fri, 30 Dec 2005 12:21:24 -0700 Subject: Re: [SciPy-dev] Sorting speed In-Reply-To: <200512301918.25444.faltet@carabos.com> References: <200512301918.25444.faltet@carabos.com> Message-ID: <43B588B4.5040001@ieee.org> Francesc Altet wrote: >Hi, > >It seems that scipy_core (0.9.0.1713) is far slower than numarray >(1.5.0) when sorting arrays: > >In [43]: t5=timeit.Timer('a=sc.empty(shape=10000);a.sort()', 'import >scipy.base as sc') > >In [44]: t5.repeat(3,100) >Out[44]: [0.40603208541870117, 0.41605615615844727, 0.39800286293029785] > >In [45]: t4=timeit.Timer('a=na.array(None, shape=10000);a.sort()', 'import >numarray as na') > >In [46]: t4.repeat(3,100) >Out[46]: [0.090479135513305664, 0.086208105087280273, 0.086167097091674805] > >i.e. numarray is roughly 5x faster than scipy_core. > > There are many sorting algorithms in numarray. I'm not sure which one is being used regularly, but I'd like to see them brought over, though, which shouldn't be too big of a deal and is on the radar. Another opportunity for someone to get their hands dirty.... -Travis
From Fernando.Perez at colorado.edu Fri Dec 30 15:30:21 2005 From: Fernando.Perez at colorado.edu (Fernando Perez) Date: Fri, 30 Dec 2005 13:30:21 -0700 Subject: Re: [SciPy-dev] Sorting speed In-Reply-To: <43B588B4.5040001@ieee.org> References: <200512301918.25444.faltet@carabos.com> <43B588B4.5040001@ieee.org> Message-ID: <43B598DD.20102@colorado.edu> Travis Oliphant wrote: > There are many sorting algorithms in numarray. I'm not sure which one > is being used regularly, but I'd like to see them brought over, though, > which shouldn't be too big of a deal and is on the radar. It may be worth noting that the sort code in Python's core, for somelist.sort(), is, to my understanding, state of the art. Tim Peters a while ago (py2.2, I think) brought the full weight of his considerable algorithmic talent to bear on this problem, as well as some recent academic results. I don't know how this compares to what numarray uses, but it may be worth investigating/copying. If Tim's work is as good as it is claimed to be (and I have much respect for his claims), it might be a good idea to use a stripped version of his code which doesn't have to deal with the generalities of Python lists (and all the per-item typechecking needed there). Just an idea... Best, f
From hgamboa at gmail.com Fri Dec 30 15:44:12 2005 From: hgamboa at gmail.com (Hugo Gamboa) Date: Fri, 30 Dec 2005 20:44:12 +0000 Subject: [SciPy-dev] Some tests for function_base.py Message-ID: <86522b1a0512301244i6df426c2y815a8a9819281c60@mail.gmail.com> Hello, Since I was trying to fix the unwrap function, I decided to add some more tests to function_base.py In the process I've also suggested some corrections to the code: unwrap: was failing by using the old typecode histogram: one variable was clearing the input data histogram: the bin generation was not adequate, creating unbalanced bin counts. I've cross checked with matplotlib code (both are very similar) and corrected the histogram functions. Hope this will be useful. Hugo Gamboa -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed...
Name: function_base.zip Type: application/zip Size: 10749 bytes Desc: not available URL: From oliphant.travis at ieee.org Fri Dec 30 15:21:56 2005 From: oliphant.travis at ieee.org (Travis Oliphant) Date: Fri, 30 Dec 2005 13:21:56 -0700 Subject: [SciPy-dev] Sorting speed In-Reply-To: <200512301918.25444.faltet@carabos.com> References: <200512301918.25444.faltet@carabos.com> Message-ID: <43B596E4.8000706@ieee.org> Francesc Altet wrote: >Hi, > >It seems that scipy_core (0.9.0.1713) is far more slower than numarray >(1.5.0) when sorting arrays: > >In [43]: t5=timeit.Timer('a=sc.empty(shape=10000);a.sort()', 'import >scipy.base as sc') > >In [44]: t5.repeat(3,100) >Out[44]: [0.40603208541870117, 0.41605615615844727, 0.39800286293029785] > >In [45]: t4=timeit.Timer('a=na.array(None, shape=10000);a.sort()', 'import >numarray as na') > >In [46]: t4.repeat(3,100) >Out[46]: [0.090479135513305664, 0.086208105087280273, 0.086167097091674805] > >i.e. numarray is roughly 5x faster than scipy_core. > > Scipy core is still using the old Numeric algorithm for sorting which uses the generic qsort function and the compare function pointer. It looks like numarray is in-lining the sorting algorithm (and the argsort) for each data-type. I suspect this could be the source of the impressive speed-up. What do others think? Also, numarray has at least three different sorting algorithms (quicksort, heapsort, and mergesort). These could certainly be brought over fairly easily. Initially, I think I will try in-lining the quicksort algorithm for each data-type similar to numarray. Thoughts? -Travis From aisaac at american.edu Fri Dec 30 16:46:05 2005 From: aisaac at american.edu (Alan G Isaac) Date: Fri, 30 Dec 2005 16:46:05 -0500 Subject: [SciPy-dev] Sorting speed In-Reply-To: <43B596E4.8000706@ieee.org> References: <200512301918.25444.faltet@carabos.com> <43B596E4.8000706@ieee.org> Message-ID: On Fri, 30 Dec 2005, Francesc Altet apparently wrote: > numarray is roughly 5x faster than scipy_core On my computer, it seems almost all that difference is due to the time difference in initializing the arrays. (I.e., much of it disappears if I put the array creation in the setup statement.) fwiw, Alan From oliphant.travis at ieee.org Fri Dec 30 15:47:03 2005 From: oliphant.travis at ieee.org (Travis Oliphant) Date: Fri, 30 Dec 2005 13:47:03 -0700 Subject: [SciPy-dev] Sorting speed In-Reply-To: <43B598DD.20102@colorado.edu> References: <200512301918.25444.faltet@carabos.com> <43B588B4.5040001@ieee.org> <43B598DD.20102@colorado.edu> Message-ID: <43B59CC7.808@ieee.org> Fernando Perez wrote: >Travis Oliphant wrote: > > > >>There are many sorting algorithms in numarray. I'm not sure which one >>is being used regularly, but I'd like to see them brought over, though, >>which shouldn't be too big of a deal and is on the radar. >> >> > >It may be worth noting that the sort code in python's core, for >somelist.sort(), is to my understanding, state of the art. Tim Peters a while >ago (py2.2, I think) brought the full weight of his considerable algorithmic >talent to bear on this problem, as well as some recent academic results. I >don't know how this compares to what numarray uses, but it may be worth >investigating/copying. > >If Tim's work is as good as it is claimed to be (and I have much respect for >his claims), it might be a good idea to use a stripped version of his code >which doesn't have to deal with the generalities of python lists (and all the >per-item typechecking needed there). > > > Thanks for the suggestions. 
This sounds good. The only possible drawback is that timsort (as he calls it --- a modified mergesort) requires temporary storage on the order of N//2 * sizeof(basic_element). So, I could see it being a choice to consider when more sorting algorithms are added. It's also more complicated to grab his sorting algorithm then the nicely laid out numarray versions :-) Definitely something to think about. Anybody want a fun project :-) -Travis From oliphant.travis at ieee.org Fri Dec 30 15:05:32 2005 From: oliphant.travis at ieee.org (Travis Oliphant) Date: Fri, 30 Dec 2005 13:05:32 -0700 Subject: [SciPy-dev] [SciPy-user] docstrings; corefft, corelinalg In-Reply-To: <43B59159.6010007@gmx.net> References: <43B586F9.1020904@ieee.org> <43B59159.6010007@gmx.net> Message-ID: <43B5930C.2060900@ieee.org> Steve Schmerler wrote: > Available subpackages > --------------------- > > > SciPy: A scientific computing package for Python > ================================================ > > Available subpackages > --------------------- > > >In [20]: >##################################################################################################################################### > >... no infos about scipy subpackages (of course `help(scipy)` has some >infos) > > These subpackage docs were at one time auto-generated. I'm not sure what the status of that is right now. It could be that the info.py file is wrong in the sub-packages. > >2) I think the doc strings of corefft and corelinalg should contain some > words explaining their existence. New scipy users would wonder > > > Good idea. >Is it possible to move all additional functionality of corefft and >corelinalg to linalg and fftpack (e.g corelinalg.eigh which seems to >have no equivalent function in linalg). > > Should be possible. The linalg approach uses f2py blas wrappers. I'm sure there is already a wrapper around the underlying function that handles eigh. Lots of little things to do so dive on in :-) Thanks for the feedback and suggestions. -Travis From oliphant.travis at ieee.org Fri Dec 30 16:47:41 2005 From: oliphant.travis at ieee.org (Travis Oliphant) Date: Fri, 30 Dec 2005 14:47:41 -0700 Subject: [SciPy-dev] Sorting speed In-Reply-To: References: <200512301918.25444.faltet@carabos.com> <43B596E4.8000706@ieee.org> Message-ID: <43B5AAFD.9080904@ieee.org> Alan G Isaac wrote: >On Fri, 30 Dec 2005, Francesc Altet apparently wrote: > > >>numarray is roughly 5x faster than scipy_core >> >> > >On my computer, it seems almost all that difference is due >to the time difference in initializing the arrays. (I.e., >much of it disappears if I put the array creation in the >setup statement.) > > > Could you show your results. This is surprising. -Travis From oliphant.travis at ieee.org Fri Dec 30 15:53:09 2005 From: oliphant.travis at ieee.org (Travis Oliphant) Date: Fri, 30 Dec 2005 13:53:09 -0700 Subject: [SciPy-dev] Some tests for function_base.py In-Reply-To: <86522b1a0512301244i6df426c2y815a8a9819281c60@mail.gmail.com> References: <86522b1a0512301244i6df426c2y815a8a9819281c60@mail.gmail.com> Message-ID: <43B59E35.1040804@ieee.org> Hugo Gamboa wrote: > Hello, > > Since I was trying to fix the unwrap function I decided to add some > more tests to the function_base.py > > In the process I've also suggest some corrections to the code: > > unwrap: was failing by using the old typecode > > histogram: one variable was cleaning the input data > > histogram: the bin generation was not adequate creating unbalanced bin > counts. 
I've cross checked with matplotlib code (both are very > similar) and corrected the histogram functions. > fantastic. Thank you. -Travis From oliphant.travis at ieee.org Fri Dec 30 17:35:44 2005 From: oliphant.travis at ieee.org (Travis Oliphant) Date: Fri, 30 Dec 2005 15:35:44 -0700 Subject: [SciPy-dev] Sorting speed In-Reply-To: References: <200512301918.25444.faltet@carabos.com> <43B596E4.8000706@ieee.org> Message-ID: <43B5B640.60307@ieee.org> Alan G Isaac wrote: >On Fri, 30 Dec 2005, Francesc Altet apparently wrote: > > >>numarray is roughly 5x faster than scipy_core >> >> > >On my computer, it seems almost all that difference is due >to the time difference in initializing the arrays. (I.e., >much of it disappears if I put the array creation in the >setup statement.) > > > Hmm... This is strange to me. Look at my times for array creation... >>> t4 = timeit.Timer('res = na.array(None,shape=10000)','import numarray as na') >>> t5 = timeit.Timer('res = sc.empty(shape=10000)','import scipy as sc') >>> t5.repeat(5,10000) [0.1860051155090332, 0.13622808456420898, 0.12397193908691406, 0.13260793685913086, 0.13042497634887695] >>> t4.repeat(5,10000) [1.9458281993865967, 1.9462640285491943, 1.8813719749450684, 1.8477518558502197, 1.8829448223114014] Which seems to indicate that scipy is about 10x faster at array creation. But, in this instance the sorting is way more time consuming than array creation and so the improved numarray sort-times are holding sway. On my system for pure sorting (with new SVN which does in-place sorting --- but still the old way using generic sort algorithm with compare function pointers) I get that numarray (with it's data-type specific sorting) is about 3x faster in this case: >>> t1 = timeit.Timer('a.sort()', 'import numarray as na; a = na.array(None,shape=10000)') >>> t2 = timeit.Timer('a.sort()', 'import scipy as sc; a = sc.empty(shape=10000)') >>> t1.repeat(3,100) [0.26560115814208984, 0.25392889976501465, 0.25759387016296387] >>> t2.repeat(3,100) [0.66517782211303711, 0.6185309886932373, 0.65760207176208496] It could be that the 5x numbers were due to the copying of information over to a new array. But still, I think 3x leaves room for improvement which no doubt specific (rather than generic) sorting algorithms would provide. -Travis From aisaac at american.edu Fri Dec 30 18:58:10 2005 From: aisaac at american.edu (Alan G Isaac) Date: Fri, 30 Dec 2005 18:58:10 -0500 Subject: [SciPy-dev] Sorting speed In-Reply-To: <43B5AAFD.9080904@ieee.org> References: <200512301918.25444.faltet@carabos.com> <43B596E4.8000706@ieee.org> <43B5AAFD.9080904@ieee.org> Message-ID: > Alan G Isaac wrote: >> On my computer, it seems almost all that difference is due >> to the time difference in initializing the arrays. (I.e., >> much of it disappears if I put the array creation in the >> setup statement.) Sorry; that was overstated. (I compared against Fransesco's results.) Here's the proper comparison. At least, I thought it was revealing ... Am I misinterpreting it? 
Alan Isaac ########### script ################ t1=timeit.Timer('a=sc.empty(shape=10000);a.sort()', 'import scipy.base as sc') print "Scipy with initialization", min(t1.repeat(30,100)) t2=timeit.Timer('a=na.array(None, shape=10000);a.sort()', 'import numarray as na') print "numarray with initialization", min(t2.repeat(30,100)) t3=timeit.Timer('a.sort()', 'import scipy.base as sc;a=sc.empty(shape=10000)') print "Scipy without initialization", min(t3.repeat(30,100)) t4=timeit.Timer('a.sort()', 'import numarray as na;a=na.array(None, shape=10000)') print "numarray without initialization", min(t4.repeat(30,100)) ########### output ################ Scipy with initialization 0.300570603247 numarray with initialization 0.177719793996 Scipy without initialization 0.215004420953 numarray without initialization 0.150071587311
From oliphant.travis at ieee.org Fri Dec 30 20:21:24 2005 From: oliphant.travis at ieee.org (Travis Oliphant) Date: Fri, 30 Dec 2005 18:21:24 -0700 Subject: Re: [SciPy-dev] Sorting speed In-Reply-To: <43B5B640.60307@ieee.org> References: <200512301918.25444.faltet@carabos.com> <43B596E4.8000706@ieee.org> <43B5B640.60307@ieee.org> Message-ID: <43B5DD14.4060802@ieee.org> Travis Oliphant wrote: > > >>> t1 = timeit.Timer('a.sort()', 'import numarray as na; a = >na.array(None,shape=10000)') > >>> t2 = timeit.Timer('a.sort()', 'import scipy as sc; a = >sc.empty(shape=10000)') > >>> t1.repeat(3,100) >[0.26560115814208984, 0.25392889976501465, 0.25759387016296387] > >>> t2.repeat(3,100) >[0.66517782211303711, 0.6185309886932373, 0.65760207176208496] > >It could be that the 5x numbers were due to the copying of information >over to a new array. But still, I think 3x leaves room for improvement >which no doubt specific (rather than generic) sorting algorithms would >provide. > > In the fixsort branch I've added the necessary tools to fix sort and argsort incrementally. This means that there is a generic in-place sort which will be called and works if the compare method is defined. However, there are hooks for adding new sorting functions, and if a type-specific sort is found it is called. I just tested this on the example Francesc gave and the type-specific sorting function matches the speed of numarray. It will not be difficult to add more type-specific sorting functions. -Travis
From oliphant.travis at ieee.org Fri Dec 30 20:28:20 2005 From: oliphant.travis at ieee.org (Travis Oliphant) Date: Fri, 30 Dec 2005 18:28:20 -0700 Subject: Re: [SciPy-dev] Sorting speed In-Reply-To: <43B5B640.60307@ieee.org> References: <200512301918.25444.faltet@carabos.com> <43B596E4.8000706@ieee.org> <43B5B640.60307@ieee.org> Message-ID: <43B5DEB4.3040208@ieee.org> Try this little experiment, though, to see numarray performance on sorting N-d arrays: >>> t1 = timeit.Timer('a = empty(shape=(100,100));a.sort(0)','from scipy import empty') >>> t2 = timeit.Timer('a = array(None,shape=(100,100));a.sort(0)','from numarray import array') >>> t1.repeat(3,100) [0.58475184440612793, 0.63138318061828613, 0.61224198341369629] >>> t2.repeat(3,100) [3.3455381393432617, 3.2438080310821533, 3.2775850296020508] >>> import scipy.base._sort >>> t1.repeat(3,100) [0.26787090301513672, 0.21700596809387207, 0.21253395080566406] The last one changes to a type-specific sort for int32 arrays, leading to about a 3x speed up. This is sorting over the first axis. So, it looks like even without changing the sorting, for Nd arrays sorting is much faster in scipy_core.
If we improve to type-specific sorting, then the sorting is much, much faster in scipy_core. -Travis From oliphant.travis at ieee.org Fri Dec 30 23:06:52 2005 From: oliphant.travis at ieee.org (Travis Oliphant) Date: Fri, 30 Dec 2005 21:06:52 -0700 Subject: [SciPy-dev] Sorting speed In-Reply-To: <43B5DEB4.3040208@ieee.org> References: <200512301918.25444.faltet@carabos.com> <43B596E4.8000706@ieee.org> <43B5B640.60307@ieee.org> <43B5DEB4.3040208@ieee.org> Message-ID: <43B603DC.3070003@ieee.org> I've checked in sorting improvements to scipy_core SVN. I've also added a benchmarks directory for storing little benchmark programs. The attached program is already there. It's output for me is: 1-D length = 10000 Numarray: [0.26125383377075195, 0.26763486862182617, 0.25728297233581543] SciPy: [0.22897696495056152, 0.2154390811920166, 0.2216179370880127] Numeric: [0.3624110221862793, 0.37320113182067871, 0.3526921272277832] 2-D shape = (100,100), last-axis Numarray: [1.013077974319458, 0.93102908134460449, 1.0637660026550293] SciPy: [0.15087985992431641, 0.094407081604003906, 0.093811988830566406] Numeric: [0.29771900177001953, 0.23210597038269043, 0.22986793518066406] 2-D shape = (100,100), first-axis Numarray: [3.3508830070495605, 4.4938950538635254, 3.2825040817260742] SciPy: [0.25642704963684082, 0.18298697471618652, 0.18323087692260742] Numeric: [0.3815300464630127, 0.30514788627624512, 0.31459379196166992] I think scipy is sorting faster now.... -Travis -------------- next part -------------- A non-text attachment was scrubbed... Name: sorting.py Type: text/x-python Size: 1308 bytes Desc: not available URL: From pearu at scipy.org Sat Dec 31 00:31:20 2005 From: pearu at scipy.org (Pearu Peterson) Date: Fri, 30 Dec 2005 23:31:20 -0600 (CST) Subject: [SciPy-dev] [SciPy-user] docstrings; corefft, corelinalg In-Reply-To: <43B5930C.2060900@ieee.org> References: <43B5930C.2060900@ieee.org> Message-ID: On Fri, 30 Dec 2005, Travis Oliphant wrote: > Steve Schmerler wrote: > >> Available subpackages >> --------------------- >> >> >> SciPy: A scientific computing package for Python >> ================================================ >> >> Available subpackages >> --------------------- >> >> >> In [20]: >> ##################################################################################################################################### >> >> ... no infos about scipy subpackages (of course `help(scipy)` has some >> infos) >> >> > These subpackage docs were at one time auto-generated. I'm not sure > what the status of that is right now. It could be that the info.py file > is wrong in the sub-packages. This is my TODO, I'll look into today. Pearu From oliphant.travis at ieee.org Sat Dec 31 02:30:39 2005 From: oliphant.travis at ieee.org (Travis Oliphant) Date: Sat, 31 Dec 2005 00:30:39 -0700 Subject: [SciPy-dev] New release of scipy_core Message-ID: <43B6339F.1090301@ieee.org> I'd like to place a target date of Jan 4 for the new release of scipy_core. With a release of scipy occurring by Jan 6. Please have all changes to scipy_core and scipy in-place by then. We need to get a re-factored scipy_core and scipy out quickly. I think the packaging is finally stabilized and free of too much fanciness. 
-Travis
From oliphant.travis at ieee.org Sat Dec 31 02:39:24 2005 From: oliphant.travis at ieee.org (Travis Oliphant) Date: Sat, 31 Dec 2005 00:39:24 -0700 Subject: [SciPy-dev] Changes allow string -> number casting Message-ID: <43B635AC.80306@ieee.org> I just checked in to SVN the changes needed to allow strings (and unicode) to be converted to numbers: >>> print array(['3.0','4','5','6']).astype(float) array([ 3., 4., 5., 6.]) It is not the fastest-possible approach (it goes through the Python int, long, float, and complex __new__ methods), but it works. -Travis
From faltet at carabos.com Sat Dec 31 06:52:05 2005 From: faltet at carabos.com (Francesc Altet) Date: Sat, 31 Dec 2005 12:52:05 +0100 Subject: Re: [SciPy-dev] Sorting speed In-Reply-To: <43B603DC.3070003@ieee.org> References: <200512301918.25444.faltet@carabos.com> <43B5DEB4.3040208@ieee.org> <43B603DC.3070003@ieee.org> Message-ID: <200512311252.06353.faltet@carabos.com> On Saturday 31 December 2005 05:06, Travis Oliphant wrote: > I've checked in sorting improvements to scipy_core SVN. Wow! My numbers: In [60]: t1 = timeit.Timer('a=array(None,shape=10000);a.sort()','from numarray import array') In [61]: t1.repeat(3,1000) Out[61]: [0.63181209564208984, 0.60971593856811523, 0.61994194984436035] In [62]: t2 = timeit.Timer('a=empty(shape=10000);a.sort()','from scipy.base import empty') In [63]: t2.repeat(3,1000) Out[63]: [0.55068111419677734, 0.52086997032165527, 0.52183318138122559] So, I can reproduce this also: scipy_core is more than 10% faster now :-) But, interestingly enough: In [65]: t3 = timeit.Timer('a=array(None,shape=10000);a.argsort()','from numarray import array') In [66]: t3.repeat(3,1000) Out[66]: [1.2611861228942871, 1.2791898250579834, 1.2521131038665771] In [67]: t4 = timeit.Timer('a=empty(shape=10000);a.argsort()','from scipy.base import empty') In [68]: t4.repeat(3,1000) Out[68]: [0.77948904037475586, 0.76528811454772949, 0.81097602844238281] so, argsort() is more than 60% faster in scipy_core than in numarray. Awesome! And argsort() is what PyTables actually uses to create indexes. Thank you very much not only for your responsiveness, but also for the gorgeous quality of your work. I think I'll have to join the long list of people willing to buy your book, Travis ;-) -- >0,0< Francesc Altet http://www.carabos.com/ V V Cárabos Coop. V. Enjoy Data "-"
From faltet at carabos.com Sat Dec 31 07:15:31 2005 From: faltet at carabos.com (Francesc Altet) Date: Sat, 31 Dec 2005 13:15:31 +0100 Subject: [SciPy-dev] Indexing array performance Message-ID: <200512311315.32461.faltet@carabos.com> Hi, I'm doing some timings on the array indexing feature.
It's quite shocking to me that, even though empty() and arange() are faster in scipy_core than their counterparts in numarray: In [110]: t1 = timeit.Timer('a=empty(shape=10000);a=arange(10000)','from scipy.base import empty, arange') In [111]: t1.repeat(3,10000) Out[111]: [0.74018502235412598, 0.76141095161437988, 0.71947312355041504] In [112]: t2 = timeit.Timer('a=array(None,shape=10000);a=arange(10000)','from numarray import array, arange') In [113]: t2.repeat(3,10000) Out[113]: [2.3724348545074463, 2.4109888076782227, 2.3820669651031494] however, the next code seems to be slower in scipy_core: In [114]: t3 = timeit.Timer('a=empty(shape=10000);a[arange(10000)]','from scipy.base import empty, arange') In [115]: t3.repeat(3,1000) Out[115]: [3.5126161575317383, 3.5309510231018066, 3.5558919906616211] In [116]: t4 = timeit.Timer('a=array(None,shape=10000);a[arange(10000)]','from numarray import array, arange') In [117]: t4.repeat(3,1000) Out[117]: [2.0824751853942871, 2.1258058547973633, 2.0946059226989746] It seems as if the index array feature could be further optimized in scipy_core. -- >0,0< Francesc Altet http://www.carabos.com/ V V Cárabos Coop. V. Enjoy Data "-"
From oliphant.travis at ieee.org Sat Dec 31 14:10:44 2005 From: oliphant.travis at ieee.org (Travis Oliphant) Date: Sat, 31 Dec 2005 12:10:44 -0700 Subject: Re: [SciPy-dev] Indexing array performance In-Reply-To: <200512311315.32461.faltet@carabos.com> References: <200512311315.32461.faltet@carabos.com> Message-ID: <43B6D7B4.4040103@ieee.org> Francesc Altet wrote: >Hi, > > >In [114]: t3 = timeit.Timer('a=empty(shape=10000);a[arange(10000)]','from >scipy.base import empty, arange') > >In [115]: t3.repeat(3,1000) >Out[115]: [3.5126161575317383, 3.5309510231018066, 3.5558919906616211] > >In [116]: t4 = timeit.Timer('a=array(None,shape=10000);a[arange(10000)]','from >numarray import array, arange') > >In [117]: t4.repeat(3,1000) >Out[117]: [2.0824751853942871, 2.1258058547973633, 2.0946059226989746] > > >It seems as if the index array feature could be further optimized in >scipy_core. > > Thank you very much for your timings. It is important to get things as fast as we can, especially with all the good code in numarray to borrow from. I am really committed to making scipy_core as fast as it can be. I believe you are right about the indexing. I will look more closely at this. The indexing code is still "first-cut" and has not received any optimization attention. It would be good to look at 2- and 3-D timings, however, to see if the speed up here is a 1-d optimization that scipy_core is not doing.
Best regards, -Travis From oliphant.travis at ieee.org Sat Dec 31 14:42:30 2005 From: oliphant.travis at ieee.org (Travis Oliphant) Date: Sat, 31 Dec 2005 12:42:30 -0700 Subject: [SciPy-dev] Indexing array performance In-Reply-To: <200512311315.32461.faltet@carabos.com> References: <200512311315.32461.faltet@carabos.com> Message-ID: <43B6DF26.5000702@ieee.org> Francesc Altet wrote: >however, the next code seems to be slower in scipy_core: > >In [114]: t3 = timeit.Timer('a=empty(shape=10000);a[arange(10000)]','from >scipy.base import empty, arange') > >In [115]: t3.repeat(3,1000) >Out[115]: [3.5126161575317383, 3.5309510231018066, 3.5558919906616211] > >In [116]: t4 = timeit.Timer('a=array(None,shape=10000);a[arange(10000)]','from >numarray import array, arange') > >In [117]: t4.repeat(3,1000) >Out[117]: [2.0824751853942871, 2.1258058547973633, 2.0946059226989746] > >It seems like if the index array feature can be further optimized in >scipy_core. > > > I just did some simple tests. It looks like 2-d indexing is quite a bit faster in scipy_core. Try these: t4 = timeit.Timer('a=array(None,shape=(1000,1000));a[arange(1000),arange(1000)]','from numarray import array, arange') t3 = timeit.Timer('a=empty(shape=(1000,1000));a[arange(1000),arange(1000)]','from scipy.base import empty, arange') My results: >>> t3.repeat(3,100) [0.18409419059753418, 0.19265508651733398, 0.18711185455322266] >>> t4.repeat(3,100) [4.0139532089233398, 3.9884538650512695, 4.405332088470459] However, as you noticed 1-d indexing is a bit slower. However if you use flat indexing (which is special-cased) it is faster: Thus: t4 = timeit.Timer('a=array(None,shape=10000);a[arange(10000)]','from numarray import array, arange') t3 = timeit.Timer('a=empty(shape=10000);a.flat[arange(10000)]','from scipy.base import empty, arange') Gives: >>> t3.repeat(3,100) [0.18227100372314453, 0.16614699363708496, 0.16269397735595703] >>> t4.repeat(3,100) [0.40496301651000977, 0.34369301795959473, 0.3347930908203125] Thus, I think it might be wise to use the flattened indexing code when the array is already 1-d. I could add this in automatically. -Travis From oliphant.travis at ieee.org Sat Dec 31 16:34:48 2005 From: oliphant.travis at ieee.org (Travis Oliphant) Date: Sat, 31 Dec 2005 14:34:48 -0700 Subject: [SciPy-dev] Indexing array performance In-Reply-To: <200512311315.32461.faltet@carabos.com> References: <200512311315.32461.faltet@carabos.com> Message-ID: <43B6F978.7030904@ieee.org> Francesc Altet wrote: >Hi, > >I'm doing some timings on array indexing feature. 
-Travis

From oliphant.travis at ieee.org Sat Dec 31 16:34:48 2005
From: oliphant.travis at ieee.org (Travis Oliphant)
Date: Sat, 31 Dec 2005 14:34:48 -0700
Subject: [SciPy-dev] Indexing array performance
In-Reply-To: <200512311315.32461.faltet@carabos.com>
References: <200512311315.32461.faltet@carabos.com>
Message-ID: <43B6F978.7030904@ieee.org>

Francesc Altet wrote:

>Hi,
>
>I'm doing some timings on the array-indexing feature.  It's quite
>shocking to me that, while empty() and arange() are faster in
>scipy_core than their counterparts in numarray:
>
>In [110]: t1 = timeit.Timer('a=empty(shape=10000);a=arange(10000)','from
>scipy.base import empty, arange')
>
>In [111]: t1.repeat(3,10000)
>Out[111]: [0.74018502235412598, 0.76141095161437988, 0.71947312355041504]
>
>In [112]: t2 = timeit.Timer('a=array(None,shape=10000);a=arange(10000)','from
>numarray import array, arange')
>
>In [113]: t2.repeat(3,10000)
>Out[113]: [2.3724348545074463, 2.4109888076782227, 2.3820669651031494]
>
>however, the following code seems to be slower in scipy_core:
>
>In [114]: t3 = timeit.Timer('a=empty(shape=10000);a[arange(10000)]','from
>scipy.base import empty, arange')
>
>In [115]: t3.repeat(3,1000)
>Out[115]: [3.5126161575317383, 3.5309510231018066, 3.5558919906616211]
>
>In [116]: t4 = timeit.Timer('a=array(None,shape=10000);a[arange(10000)]','from
>numarray import array, arange')
>
>In [117]: t4.repeat(3,1000)
>Out[117]: [2.0824751853942871, 2.1258058547973633, 2.0946059226989746]
>
I added a special case for 1-d indexing that goes through the same
code as a.flat would.  The result seems to show a nice speed-up for
your test case.

This is not to say that the indexing code could not be made faster,
but that would require more study.  Right now, the multidimensional
indexing code is fairly clean, as it uses the abstraction of an
iterator (which also makes it hard to figure out how to make it
faster).  I've been curious to see how fast the result is.

The results on 2-d indexing are encouraging.  They show the scipy_core
code to be faster than the 2-d indexing of numarray (as far as I can
tell).  Of course these things can usually be made better, so I'm
hesitant to say we've arrived.

-Travis

From strawman at astraw.com Sat Dec 31 19:10:19 2005
From: strawman at astraw.com (Andrew Straw)
Date: Sat, 31 Dec 2005 16:10:19 -0800
Subject: [SciPy-dev] scipy.pkgload (was Re: Moving random.py)
In-Reply-To:
References: <43A2B069.9000300@ieee.org> <43A2C1B6.1070502@bigpond.net.au> <43A36202.7090504@colorado.edu> <43A36A45.6080107@ieee.org> <43A4A036.2050704@colorado.edu>
Message-ID: <43B71DEB.3050002@astraw.com>

Pearu Peterson wrote:

>Hi,
>
>I have implemented hooks for scipy.pkgload(..) following the idea of
>Fernando's scipy.mod_load(..) functions, with a few modifications.
>Its doc string is given below.
>
Hi Pearu,

This is great.  I've made a few changes to allow the new system to
work with setuptools.  Basically, the new patches don't assume that
the package __path__ variable contains only a single directory, but
rather search all the directories in __path__ for subpackages and
modules.  This allows scipy_core and full scipy to both exist as .eggs
without any other changes.

I hope you can commit this patch or something similar before the
release this coming week.
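In outline, the subpackage search now does something like this (a
sketch of the idea only, not the actual patch; the helper name is made
up):

import os

def find_subpackages(package_path):
    # Scan every directory on a package's __path__ (there can be
    # several once setuptools namespace packages are involved), not
    # just the first entry.
    found = []
    for d in package_path:
        if not os.path.isdir(d):
            continue
        for name in os.listdir(d):
            if os.path.isfile(os.path.join(d, name, '__init__.py')):
                found.append(name)
    return found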
Cheers!
Andrew

-------------- next part --------------
A non-text attachment was scrubbed...
Name: namespace_packages.patch
Type: text/x-patch
Size: 3911 bytes
Desc: not available
URL:

From oliphant.travis at ieee.org Sat Dec 31 20:08:09 2005
From: oliphant.travis at ieee.org (Travis Oliphant)
Date: Sat, 31 Dec 2005 18:08:09 -0700
Subject: [SciPy-dev] Renaming scipy_core ???
Message-ID: <43B72B79.8040807@ieee.org>

Fernando brings up an interesting point that might be worth some
discussion.  It might be too late to have this discussion, but with a
more stable release coming up, it might not be.

Here is what he says:

> My main point is this: I worry a little that using 'scipy' as a
> single name for both scipy_core and scipy_full is causing more
> problems than it solves, and may be an unnecessary long-term
> headache.  There is the issue of confusion of installation amongst
> newbies, as well as technical problems with packaging because of
> file collisions, package-manager issues, etc.
>
> Finally, there is the problem that when reading code, with something
> like:
>
>     import scipy
>     scipy.foo.bar()
>
> there is no way to tell whether this uses scipy_core or _full.
>
> I wish I had expressed this earlier, but these thoughts really took
> a while to clarify in my mind, as I didn't realize at the beginning
> that this could be a problem.  But as time goes on, I am more and
> more convinced that we may be setting ourselves up for an
> unnecessary long-term maintenance and user-explanation headache.
>
> I wonder if we wouldn't be better off with the scipy_core package
> being named something else altogether, let's say 'ndarray' (or
> whatever: I suck at coming up with good names).  That package would
> contain ndarray.{f2py,distutils,...} and it would obviously be a
> dependency for all scipy users.  In ndarray, we'd define the __all__
> attribute to explicitly list the publicly exported functions,
> classes and packages, and scipy would be assumed to do:
>
>     from ndarray import *
>     __all__ = ndarray.__all__ + ['whatever else scipy declares']
>
> So scipy would be known to be a strict superset of the core, as it
> is today.

I would like to hear some more opinions about this.  The one that is
most convincing to me is that you can't tell whether something comes
from full scipy or from scipy_core just by looking at the import
statement.  That's an interesting concern.

I know this would require quite a bit of re-factoring, so I'm not sure
it is worth it.  It might help with the image of scipy_core.  We could
call the scipy_core package scicore, or numcore, if that is the only
concern.

The biggest concern is that right now a lot of the sub-package
management system is built into scipy_core from the get-go.  That
would have to be re-factored, and I'm not sure how easy that would be.

I'm ambivalent at this point, and mostly concerned with "moving
forward" and having people view scipy_core as relatively stable....

-Travis

From Fernando.Perez at colorado.edu Sat Dec 31 20:12:33 2005
From: Fernando.Perez at colorado.edu (Fernando Perez)
Date: Sat, 31 Dec 2005 18:12:33 -0700
Subject: [SciPy-dev] Renaming scipy_core ???
In-Reply-To: <43B72B79.8040807@ieee.org>
References: <43B72B79.8040807@ieee.org>
Message-ID: <43B72C81.9020601@colorado.edu>

Travis Oliphant wrote:

> Fernando brings up an interesting point that might be worth some
> discussion.  It might be too late to have this discussion, but with
> a more stable release coming up, it might not be.

Thanks for considering this :)

> Here is what he says:

[...]

> I'm ambivalent at this point, and mostly concerned with "moving
> forward" and having people view scipy_core as relatively stable....

I realize that I should have said this before, my bad.  I hesitated
for quite a while, afraid of sticking my foot in my mouth (I've been
pretty good at that lately :)  But if we end up deciding that it's the
right technical decision, I think the pain may be worth enduring: it's
paid once, as opposed to carrying the problem forward for a long time
to come.
Because what _is_ certain is that once 1.0 comes out, we should really
clamp things down and move forward only with higher-level
algorithmic/library enhancements as well as internal optimizations,
but on top of a stable API.

Happy new year to all!

f

From robert.kern at gmail.com Sat Dec 31 20:45:36 2005
From: robert.kern at gmail.com (Robert Kern)
Date: Sat, 31 Dec 2005 19:45:36 -0600
Subject: [SciPy-dev] Renaming scipy_core ???
In-Reply-To: <43B72B79.8040807@ieee.org>
References: <43B72B79.8040807@ieee.org>
Message-ID: <3d375d730512311745l4114c2d1q7de121ae5ef6a14b@mail.gmail.com>

On 12/31/05, Travis Oliphant wrote:
>
> Fernando brings up an interesting point that might be worth some
> discussion.  It might be too late to have this discussion, but with
> a more stable release coming up, it might not be.
>
> Here is what he says:
>
> > My main point is this: I worry a little that using 'scipy' as a
> > single name for both scipy_core and scipy_full is causing more
> > problems than it solves, and may be an unnecessary long-term
> > headache.  There is the issue of confusion of installation amongst
> > newbies, as well as technical problems with packaging because of
> > file collisions, package-manager issues, etc.
>
> I would like to hear some more opinions about this.  The one that is
> most convincing to me is that you can't tell whether something comes
> from full scipy or from scipy_core just by looking at the import
> statement.

I, too, think that scipy_core and full scipy providing the Python
package scipy probably causes more problems than it solves.  One of
your big concerns, the mistaken impression that the "behemoth" version
of scipy is replacing Numeric, is pretty much entirely a product of
this choice.  Some problems with packaging simply go away if we rename
the package provided by scipy_core to something else.

> We could call the scipy_core package scicore, or numcore, if that is
> the only concern.

+1 for scicore.  It meshes with my preferred name for Fernando's
"Scipy Kits" concept, scikits.

--
Robert Kern
robert.kern at gmail.com

From robert.kern at gmail.com Sat Dec 31 21:06:06 2005
From: robert.kern at gmail.com (Robert Kern)
Date: Sat, 31 Dec 2005 20:06:06 -0600
Subject: [SciPy-dev] Sorting speed
In-Reply-To: <43B598DD.20102@colorado.edu>
References: <200512301918.25444.faltet@carabos.com> <43B588B4.5040001@ieee.org> <43B598DD.20102@colorado.edu>
Message-ID: <3d375d730512311806pda6b648i9959dbf771bdbd20@mail.gmail.com>

On 12/30/05, Fernando Perez wrote:
> Travis Oliphant wrote:
>
> > There are many sorting algorithms in numarray.  I'm not sure which
> > one is being used regularly, but I'd like to see them brought
> > over, though, which shouldn't be too big of a deal and is on the
> > radar.
>
> It may be worth noting that the sort code in python's core, for
> somelist.sort(), is to my understanding state of the art.  Tim
> Peters a while ago (py2.2, I think) brought the full weight of his
> considerable algorithmic talent to bear on this problem, as well as
> some recent academic results.  I don't know how this compares to
> what numarray uses, but it may be worth investigating/copying.
>
> If Tim's work is as good as it is claimed to be (and I have much
> respect for his claims), it might be a good idea to use a stripped
> version of his code which doesn't have to deal with the generalities
> of python lists (and all the per-item typechecking needed there).

My (limited) understanding is that timsort is specifically optimized
for sorting Python lists as they are generally used.  Thus, comparison
operations are minimized, because comparisons are potentially
expensive for general Python objects whereas they are relatively cheap
for numeric arrays.  Also, timsort scans for runs, so sorting a list
that is entirely sorted except for the last item is quite fast.  Since
one can append to lists, this is very useful for lists, but one cannot
append to arrays, so it would be less useful to us.  The benefits of
adopting timsort are probably less than they were for Python lists.
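The run-scanning effect is easy to see with plain lists, by the way.
Something like this would show it (a quick sketch; I haven't run these
exact timers):

import timeit

# A list that is sorted except for one appended element, versus a
# shuffled copy of the same data; timsort's run detection makes the
# first case nearly linear.
t_runs = timeit.Timer('x = base + [0]; x.sort()',
                      'base = range(10000)')
t_rand = timeit.Timer('x = base[:]; x.sort()',
                      'import random; base = range(10000); '
                      'random.shuffle(base)')
print t_runs.repeat(3, 100)
print t_rand.repeat(3, 100)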
--
Robert Kern
robert.kern at gmail.com

From oliphant.travis at ieee.org Sat Dec 31 22:07:53 2005
From: oliphant.travis at ieee.org (Travis Oliphant)
Date: Sat, 31 Dec 2005 20:07:53 -0700
Subject: [SciPy-dev] Lexsort
In-Reply-To:
References: <43B719C2.3050403@ieee.org>
Message-ID: <43B74789.1000806@ieee.org>

Charles R Harris wrote:

> Travis,
>
> On 12/31/05, *Travis Oliphant* wrote:
>
>     Chuck,
>
>     I've read up on lexicographic sorting and I finally get it (it's
>     the kind of sorting we do in spreadsheets all the time... first
>     on column D, then on column B, right).  In the case of lexsort,
>     it would appear that the ordering of the keys determines which
>     array gets sorted on first.
>
> Yep.  The one thing to be careful about is the key ordering.  For
> instance, to sort a bunch of fixed-length words alphabetically, the
> words need to be sorted on the last letter first, then the second to
> last, and so on.  I agonized a bit about building this in, but
> decided to just sort on the keys in the order they appeared in the
> list, as that seemed most intuitive, and let the sorter beware.

Ahh.  Yeah, that is a bit confusing; a good example could help guide
the user.

I just added a merge sort for STRING arrays, so that now lexsort can
be used to sort a record array just like spreadsheets do.
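For instance, something along these lines (a sketch only; it assumes
lexsort is importable from scipy.base, takes a sequence of key arrays,
and returns the sorting indices):

from scipy.base import array, lexsort

# Sort by column a, breaking ties with column b.  The keys are applied
# in the order given, so the dominant key goes last.
a = array([1, 0, 1, 0])
b = array([3, 2, 1, 0])
ind = lexsort((b, a))
# a[ind] is [0 0 1 1] and b[ind] is [0 2 1 3]

The record array itself is then reordered with a single index
selection on ind.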
I thought about a general-purpose sorting function for void-type
arrays for a while, then realized that memory copies would be a big
problem and decided against it.  I think the index sort you have in
lexsort, followed by index selection, will provide the capability and
will often be faster than trying to sort the actual record elements.

> Oh my, bit fields, too?

Well, no, bit-fields are below the resolution of the current type
descriptors.  While the general-purpose sort might be an interesting
idea, I'm inclined to leave it alone at least until someone has an
actual need for it (and not a hypothetical need).

>     Right now, void types can't be sorted, which means that records
>     can't be sorted.  A lex sort would certainly fix that problem,
>     as the user could specify the desired keys themselves.  It just
>     seems like it could be done more elegantly.
>
> Are you thinking of an extended comparison function, perhaps?  I
> think that a lexsort followed by selection would be faster, as
> integers, not large records, will be exchanged.

I came to this same conclusion and agree with you.  That's why I left
the general-purpose VOID sorter alone for now.

Right now, you can sort VOID arrays (using the builtin qsort) and the
VOID compare.  We could think about a better VOID compare function
than just comparing the raw bytes string-like, perhaps using any
defined fields.  But, for now, I'm just going to sit on it.

> Of course, there is always the memory problem.  I am not sure of the
> best way to sort large memmapped data files, but I think it is a
> different sort of sort.  A merge sort with an extra memmapped file
> might be the way to go.  Or the merge sort with lots of swap.  In
> either case, I think it will be important to access the records
> sequentially, which I guess selection doesn't do.  A special
> comparison function could be useful in that case because record
> access time might be the limiting factor.  Hmm... or swapping
> records instead of indices.  In that case, you could use a callback
> function for swapping.  I suspect a little experimentation would be
> in order.

I'm going to leave this alone for a while.  If you'd like to continue
to play with it, feel free :-)

> I'm happy to see someone else use the code.  I suspect it has been
> kind of hidden up to now.  Maybe I should have finished documenting
> it ;)

Feel free to add to the docstring in multiarraymodule.c (doc_lexsort).

You know, I think it would be nice to have some means of editing
docstrings without re-compiling.  I wonder if there is some way to
write a function that would dynamically add to the docstrings of
builtins...
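For anything defined in Python it is nearly a one-liner (a sketch; the
C-defined builtins are the hard part, since assigning to their __doc__
raises from the Python side):

def append_docstring(func, text):
    # Works for functions defined in Python; objects implemented in C
    # reject assignment to __doc__, so those would need support on
    # the C side.
    func.__doc__ = (func.__doc__ or '') + '\n\n' + text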
Thanks for all your sorting expertise....

-Travis

From oliphant.travis at ieee.org Sat Dec 31 21:31:25 2005
From: oliphant.travis at ieee.org (Travis Oliphant)
Date: Sat, 31 Dec 2005 19:31:25 -0700
Subject: [SciPy-dev] Renaming scipy_core ???
In-Reply-To: <3d375d730512311745l4114c2d1q7de121ae5ef6a14b@mail.gmail.com>
References: <43B72B79.8040807@ieee.org> <3d375d730512311745l4114c2d1q7de121ae5ef6a14b@mail.gmail.com>
Message-ID: <43B73EFD.5070406@ieee.org>

Robert Kern wrote:

>I, too, think that scipy_core and full scipy providing the Python
>package scipy probably causes more problems than it solves.  One of
>your big concerns, the mistaken impression that the "behemoth" version
>of scipy is replacing Numeric, is pretty much entirely a product of
>this choice.  Some problems with packaging simply go away if we rename
>the package provided by scipy_core to something else.
>
I'm actually quite happy to do this.  Now, better than later.

I need Pearu to chime in on how difficult this might be, because
distutils is part of scicore now.  So, scipy.distutils would become
scicore.distutils and so forth.

It's the packaging stuff that has me concerned a bit.  I would be
happy to move forward with scicore.

-Travis

From oliphant.travis at ieee.org Sat Dec 31 21:29:18 2005
From: oliphant.travis at ieee.org (Travis Oliphant)
Date: Sat, 31 Dec 2005 19:29:18 -0700
Subject: [SciPy-dev] Sorting speed
In-Reply-To: <3d375d730512311806pda6b648i9959dbf771bdbd20@mail.gmail.com>
References: <200512301918.25444.faltet@carabos.com> <43B588B4.5040001@ieee.org> <43B598DD.20102@colorado.edu> <3d375d730512311806pda6b648i9959dbf771bdbd20@mail.gmail.com>
Message-ID: <43B73E7E.1040002@ieee.org>

Robert Kern wrote:

>My (limited) understanding is that timsort is specifically optimized
>for sorting Python lists as they are generally used.  Thus, comparison
>operations are minimized, because comparisons are potentially
>expensive for general Python objects whereas they are relatively cheap
>for numeric arrays.  Also, timsort scans for runs, so sorting a list
>that is entirely sorted except for the last item is quite fast.  Since
>one can append to lists, this is very useful for lists, but one cannot
>append to arrays, so it would be less useful to us.  The benefits of
>adopting timsort are probably less than they were for Python lists.
>
That was my impression as well after looking at the code for a bit.
I've moved on.

-Travis

From strawman at astraw.com Sat Dec 31 23:12:07 2005
From: strawman at astraw.com (Andrew Straw)
Date: Sat, 31 Dec 2005 20:12:07 -0800
Subject: [SciPy-dev] Renaming scipy_core ???
In-Reply-To: <43B73EFD.5070406@ieee.org>
References: <43B72B79.8040807@ieee.org> <3d375d730512311745l4114c2d1q7de121ae5ef6a14b@mail.gmail.com> <43B73EFD.5070406@ieee.org>
Message-ID: <43B75697.1090306@astraw.com>

Travis Oliphant wrote:

>Robert Kern wrote:
>
>>I, too, think that scipy_core and full scipy providing the Python
>>package scipy probably causes more problems than it solves.  One of
>>your big concerns, the mistaken impression that the "behemoth" version
>>of scipy is replacing Numeric, is pretty much entirely a product of
>>this choice.  Some problems with packaging simply go away if we rename
>>the package provided by scipy_core to something else.
>>
>I'm actually quite happy to do this.  Now, better than later.
>
>I need Pearu to chime in on how difficult this might be, because
>distutils is part of scicore now.  So, scipy.distutils would become
>scicore.distutils and so forth.
>
>It's the packaging stuff that has me concerned a bit.  I would be
>happy to move forward with scicore.
>
I'm not sure I see the compelling argument for change, but if we do
it: could scicore use standard Python distutils, and move
scipy.distutils (and scipy.f2py and scipy.weave) into full scipy?  I
think this might lower the energy barrier for entry into scicore.