From cimrman3 at ntc.zcu.cz Tue Nov 1 03:09:23 2005 From: cimrman3 at ntc.zcu.cz (Robert Cimrman) Date: Tue, 01 Nov 2005 09:09:23 +0100 Subject: [SciPy-dev] array logical ops error? In-Reply-To: References: <4365F457.4060202@ntc.zcu.cz> Message-ID: <436722B3.7030003@ntc.zcu.cz> David M. Cooke wrote: > On Oct 31, 2005, at 07:37, Alan G Isaac wrote: > > >>On Mon, 31 Oct 2005, Robert Cimrman apparently wrote: >> >> >>>is this the expected behaviour? >>>IMHO (b * c) == (b and c), (b + c) == (b or c) should hold... >>> >> >>I expected the Boolean operations to yield >>element-by-element comparisons. What are they?? In >>contrast, the + and * operators give the expected results. > > > Use & and | instead of 'and' and 'or'. Thanks, this works as expected. Is there a place where such things are documented? I suspect many people would expect 'and' and '&' to have the same behaviour. Of course, looking at the numeric operations special methods should reveal the problem, but who would do it ;) r. From kamrik at gmail.com Tue Nov 1 03:18:52 2005 From: kamrik at gmail.com (Mark Koudritsky) Date: Tue, 1 Nov 2005 10:18:52 +0200 Subject: [SciPy-dev] scipy.org and numarray In-Reply-To: <200510312138.49269.dd55@cornell.edu> References: <200510312138.49269.dd55@cornell.edu> Message-ID: Speaking of outdated content on the site, here is another one: http://www.scipy.org/documentation/FAQ.html#contrib It is a question in FAQ: "I'd like to contribute code/time/other to SciPy. How can I get involved?" A citation from there: "If you're bored, have a look at the list and start plugging the holes. Note: Hot-list not available yet (we just haven't had time to stick it on the web site)." Having this for long on a web site of an open source project is just like deliberately sending potential contributors away. Generally, the scipy web site gives an impression of a project with very little activity. This also turns potential users away. By the way what are the plans for the site? 
Was it finally decided that it will be moved to Trac? If yes, when? I think I would be interested in helping with the site organization. On 11/1/05, Darren Dale wrote: > I just wanted to suggest that the following page on www.scipy.org is now out > of date. > > http://www.scipy.org/documentation/numarraydiscuss.html > > Darren > > _______________________________________________ > Scipy-dev mailing list > Scipy-dev at scipy.net > http://www.scipy.net/mailman/listinfo/scipy-dev > From cimrman3 at ntc.zcu.cz Tue Nov 1 03:25:23 2005 From: cimrman3 at ntc.zcu.cz (Robert Cimrman) Date: Tue, 01 Nov 2005 09:25:23 +0100 Subject: [SciPy-dev] Adding to an array with __radd__() In-Reply-To: <4366B1CD.6050805@ee.byu.edu> References: <4366139C.2060007@ftw.at> <43665799.9010801@ee.byu.edu> <436662A3.4020101@ftw.at> <4366B1CD.6050805@ee.byu.edu> Message-ID: <43672673.3000006@ntc.zcu.cz> Travis Oliphant wrote: > Ed Schofield wrote: > > I think this is a result of the fact that arrays are now new style > numbers and the fact that array(B) created a perfectly nice object array. > > I've committed a change that makes special cases this case so the > reflected operands will work if they are defined for something that > becomes an object array. Is it possible, in principle, to always call the sparse matrix operator method when it enters an operation with a dense array? (a .. dense array, b sparse -- a + b calls b.__radd__, b + a calls b.__add__) The reason I would support this is that, IMHO, the higher level object (a sparse matrix) knows well how to handle the lower level object (a dense array) in numeric operations, but not vice versa -- there is already a number of sparse matrix formats, so the dense array object definitely cannot understand them all. r. 
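The delegation Robert asks for can be sketched in plain Python. The `Dense` and `Sparse` classes below are hypothetical stand-ins, not the actual scipy types: the dense operand returns NotImplemented for objects it does not understand, so Python falls back to the sparse operand's `__radd__`, and the higher-level object ends up in control for both `a + b` and `b + a`.

```python
# Hypothetical sketch of reflected-operand delegation (not the scipy classes).
class Dense:
    def __init__(self, data):
        self.data = data

    def __add__(self, other):
        if isinstance(other, Dense):
            return Dense([x + y for x, y in zip(self.data, other.data)])
        # Unknown right operand: defer, so Python tries other.__radd__(self).
        return NotImplemented

class Sparse:
    def __init__(self, items):
        self.items = items  # {index: value}

    def __add__(self, other):    # sparse + dense
        return self._add_dense(other)

    def __radd__(self, other):   # dense + sparse, reached via NotImplemented
        return self._add_dense(other)

    def _add_dense(self, dense):
        out = list(dense.data)
        for i, v in self.items.items():
            out[i] += v
        return Dense(out)

a = Dense([1.0, 2.0, 3.0])
b = Sparse({1: 10.0})
print((a + b).data)  # [1.0, 12.0, 3.0]
print((b + a).data)  # [1.0, 12.0, 3.0]
```

Both orderings give the same result because the dense object never tries to interpret the sparse one itself.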
From arnd.baecker at web.de Tue Nov 1 05:06:20 2005 From: arnd.baecker at web.de (Arnd Baecker) Date: Tue, 1 Nov 2005 11:06:20 +0100 (CET) Subject: [SciPy-dev] PyArray_CanCastSafely(exact,inexact) on 64-bit In-Reply-To: <43669E11.7000605@ee.byu.edu> References: <436575FF.8080201@ee.byu.edu> <43669E11.7000605@ee.byu.edu> Message-ID: On Mon, 31 Oct 2005, Travis Oliphant wrote: > Arnd Baecker wrote: > > >That is great news! I also get this on my 64 Bit machine! > > > Wonderful... > > >Just in case it has fallen through the cracks: > >Concerning the check_integer problem: > > > > def check_integer(self): > > from scipy import stats > > a = stats.randint.rvs(1,20,size=(3,4)) > > fname = tempfile.mktemp('.dat') > > io.write_array(fname,a) > > b = io.read_array(fname,atype=N.Int) > > assert_array_equal(a,b) > > os.remove(fname) > > > >Executing this line by line shows the error for > > b = io.read_array(fname,atype=N.Int) > > > >Doing > > b = io.read_array(fname) > >reads in the array, but it gives floats. > > > >However, > > b = io.read_array(fname,atype=N.Int32) > >works. > > > >If this is the intended behaviour (also on 32Bit), > >the unit test should be changed accordingly... > > > > > What is the type of 'a' (i.e. stats.randint.rvs(1,20,size=(3,4)) ) > on your platform? > > This does look like a problem of type mismatch on 64-bit that we can > handle much better now. > > It looks like randint is returning 32-bit numbers, but N.Int is 'l' > which on your 64-bit platform is a 64-bit integer. This would > definitely be the problem. In [1]: import scipy In [2]: print scipy.__core_version__, scipy.__scipy_version__ 0.4.3.1408 0.4.2_1409 In [3]: a=scipy.stats.randint.rvs(1,20,size=(3,4)) In [4]: a.dtypechar Out[4]: 'l' In [5]: a.dtypestr Out[5]: ' I've changed the test... Hmm - can't see any difference to previous versions? 
Best, Arnd From schofield at ftw.at Tue Nov 1 06:39:49 2005 From: schofield at ftw.at (Ed Schofield) Date: Tue, 1 Nov 2005 12:39:49 +0100 (CET) Subject: [SciPy-dev] Adding to an array with __radd__() In-Reply-To: <43672673.3000006@ntc.zcu.cz> References: <4366139C.2060007@ftw.at> <43665799.9010801@ee.byu.edu> <43672673.3000006@ntc.zcu.cz> Message-ID: Robert Cimrman wrote: > Travis Oliphant wrote: > > > > I think this is a result of the fact that arrays are now new style > > numbers and the fact that array(B) created a perfectly nice object array. > > > > I've committed a change that makes special cases this case so the > > reflected operands will work if they are defined for something that > > becomes an object array. > > Is it possible, in principle, to call always the sparse matrix operator > method when it enters an operation with a dense array? > (a .. dense array, b sparse -- a + b calls b.__radd__, b + a calls > b.__add__) > > The reason I would support this is, that IMHO the higher level object (a > sparse matrix) knows well how to handle the lower level object (a dense > array) in numeric operations, but not vice-versa -- there is already a > number of sparse matrix formats, so the dense array object definitely > cannot understand them all. Travis's patch has this effect. Python calls the left operand's __op__ method, but a dense array on the left now yields control to the right operand's __rop__ if it can't interpret the right operand as a normal (non-Object) array. So the tests that were previously commented in test_sparse.py now pass for CSC and CSR matrices. There is one remaining problem with DOK matrices: it never gets this far, first raising a ValueError while trying to interpret it as a sequence. I committed a patch for more graceful handling of such objects that claim to be sequences but don't allow integer indexing. I then reverted this, thinking we should instead fix dok_matrix so PySequence_Check() doesn't return true. 
But I'm not sure if this is possible. It seems that since Python 2.2 PySequence_Check() returns true for dictionaries, even though they don't allow integer indexing. Isn't this a blatant violation of the sequence protocol? (cf http://docs.python.org/ref/sequence-types.html) Is there any way to override PySequence_Check() to return false for a subclassed dict like dok_matrix? If not I suggest we apply my patch after all -- and complain to the Python developers ;) -- Ed From schofield at ftw.at Tue Nov 1 07:02:24 2005 From: schofield at ftw.at (Ed Schofield) Date: Tue, 1 Nov 2005 13:02:24 +0100 (CET) Subject: [SciPy-dev] array logical ops error? In-Reply-To: <436653C2.5030003@ee.byu.edu> References: <4365F457.4060202@ntc.zcu.cz> <436653C2.5030003@ee.byu.edu> Message-ID: Travis Oliphant wrote: > > Robert Cimrman apparently wrote: > > > >>is this the expected behaviour? > >>IMHO (b * c) == (b and c), (b + c) == (b or c) should hold... > > > >I expected the Boolean operations to yield > >element-by-element comparisons. What are they?? In > >contrast, the + and * operators give the expected results. > > > > This is a Python deal. It would be nice if b and c did the same thing > as b * c, but Python does not allow overloading of the "and" and "or" > operators (A PEP to say it should would be possible). > > Thus, "b and c" evaluates the truth of b and the truth of c as a whole > (not elementwise), and there is no way to over-ride this. Hmmm. Could we somehow get "b and c" to return a simple boolean rather than an array of booleans? This would be more consistent. 
At the moment it's totally mad: "b and c" is different to "c and b" ;) -- Ed From cimrman3 at ntc.zcu.cz Tue Nov 1 07:07:28 2005 From: cimrman3 at ntc.zcu.cz (Robert Cimrman) Date: Tue, 01 Nov 2005 13:07:28 +0100 Subject: [SciPy-dev] Adding to an array with __radd__() In-Reply-To: References: <4366139C.2060007@ftw.at> <43665799.9010801@ee.byu.edu> <43672673.3000006@ntc.zcu.cz> Message-ID: <43675A7F.9040609@ntc.zcu.cz> Ed Schofield wrote: > > Robert Cimrman wrote: > >>Travis Oliphant wrote: >> >>>I think this is a result of the fact that arrays are now new style >>>numbers and the fact that array(B) created a perfectly nice object array. >>> >>>I've committed a change that makes special cases this case so the >>>reflected operands will work if they are defined for something that >>>becomes an object array. >> >>Is it possible, in principle, to call always the sparse matrix operator >>method when it enters an operation with a dense array? >>(a .. dense array, b sparse -- a + b calls b.__radd__, b + a calls >>b.__add__) >> >>The reason I would support this is, that IMHO the higher level object (a >>sparse matrix) knows well how to handle the lower level object (a dense >>array) in numeric operations, but not vice-versa -- there is already a >>number of sparse matrix formats, so the dense array object definitely >>cannot understand them all. > > > Travis's patch has this effect. Python calls the left operand's __op__ > method, but a dense array on the left now yields control to the right > operand's __rop__ if it can't interpret the right operand as a normal > (non-Object) array. So the tests that were previously commented in > test_sparse.py now pass for CSC and CSR matrices. Cool, Travis obviously works faster than I am able to write e-mails... :-) > There is one remaining problem with DOK matrices: it never gets this far, > first raising a ValueError while trying to interpret it as a sequence. 
I > committed a patch for more graceful handling of such objects that claim to > be sequences but don't allow integer indexing. I then reverted this, > thinking we should instead fix dok_matrix so PySequence_Check() doesn't > return true. But I'm not sure if this is possible. It seems that since > Python 2.2 PySequence_Check() returns true for dictionaries, even though > they don't allow integer indexing. Isn't this a blatant violation of the > sequence protocol? (cf http://docs.python.org/ref/sequence-types.html) > Is there any way to override PySequence_Check() to return false for a > subclassed dict like dok_matrix? If not I suggest we apply my patch after > all -- and complain to the Python developers ;) I have just tested with: PyObject *isSequence( PyObject *input ) { if (PySequence_Check( input )) { return( PyBool_FromLong( 1 ) ); } else { return( PyBool_FromLong( 0 ) ); } } print isSequence( [] ) print isSequence( (1,) ) print isSequence( {} ) print isSequence( scipy.sparse.dok_matrix( scipy.array( [[1,2,3]] ) ) ) and got (Python 2.4.2): True True False True ... so the problem is not in PySequence_Check(). The DOK matrix inherits not only from dict, but also from spmatrix. Could this cause such a behaviour?? r. From aisaac at american.edu Tue Nov 1 07:48:40 2005 From: aisaac at american.edu (Alan G Isaac) Date: Tue, 1 Nov 2005 07:48:40 -0500 Subject: [SciPy-dev] array logical ops error? In-Reply-To: References: <4365F457.4060202@ntc.zcu.cz><436653C2.5030003@ee.byu.edu> Message-ID: On Tue, 1 Nov 2005, Ed Schofield apparently wrote: > Hmmm. Could we somehow get "b and c" to return a simple > boolean rather than an array of booleans? This would be > more consistent. At the moment it's totally mad: "b and > c" is different to "c and b" ;) You're just getting back c in the first case and b in the second case, as "expected" (once reminded of this Python behavior). You can look at bool(b and c) and get the right result. 
fwiw, Alan Isaac From perry at stsci.edu Tue Nov 1 10:26:31 2005 From: perry at stsci.edu (Perry Greenfield) Date: Tue, 1 Nov 2005 10:26:31 -0500 Subject: [SciPy-dev] array logical ops error? In-Reply-To: <436722B3.7030003@ntc.zcu.cz> References: <4365F457.4060202@ntc.zcu.cz> <436722B3.7030003@ntc.zcu.cz> Message-ID: <582a8331dbf526abd7c81a40d05a8b39@stsci.edu> On Nov 1, 2005, at 3:09 AM, Robert Cimrman wrote: > David M. Cooke wrote: >> On Oct 31, 2005, at 07:37, Alan G Isaac wrote: >> >> >>> On Mon, 31 Oct 2005, Robert Cimrman apparently wrote: >>> >>> >>>> is this the expected behaviour? >>>> IMHO (b * c) == (b and c), (b + c) == (b or c) should hold... >>>> >>> >>> I expected the Boolean operations to yield >>> element-by-element comparisons. What are they?? In >>> contrast, the + and * operators give the expected results. >> >> >> Use & and | instead of 'and' and 'or'. > > Thanks, this works as expected. Is there a place where such things are > documented? I suspect many people would expect 'and' and '&' to have > the > same behaviour. Of course, looking at the numeric operations special > methods should reveal the problem, but who would do it ;) > This is what I usually tell people. But do note doing it this way can bite people on occasion. This works if the arguments are boolean arrays or values. But if they aren't the operations are bitwise and thus may result in surprising results. 
Perry Greenfield From schofield at ftw.at Tue Nov 1 10:37:38 2005 From: schofield at ftw.at (Ed Schofield) Date: Tue, 01 Nov 2005 16:37:38 +0100 Subject: [SciPy-dev] Adding to an array with __radd__() In-Reply-To: <43675A7F.9040609@ntc.zcu.cz> References: <4366139C.2060007@ftw.at> <43665799.9010801@ee.byu.edu> <43672673.3000006@ntc.zcu.cz> <43675A7F.9040609@ntc.zcu.cz> Message-ID: <43678BC2.30203@ftw.at> Robert Cimrman wrote: >I have just tested with: > >PyObject *isSequence( PyObject *input ) { > > if (PySequence_Check( input )) { > return( PyBool_FromLong( 1 ) ); > } else { > return( PyBool_FromLong( 0 ) ); > } >} > >print isSequence( [] ) >print isSequence( (1,) ) >print isSequence( {} ) >print isSequence( scipy.sparse.dok_matrix( scipy.array( [[1,2,3]] ) ) ) > >and got (Python 2.4.2): > >True >True >False >True > >... so the problem is not in PySequence_Check(). The DOK matrix inherits >not only from dict, but also from spmatrix. Could this cause such a >behaviour?? > > Ah, well done. So a dict doesn't define PySequence_Check(). But, according to my tests, neither does an instance of the spmatrix base class. Instead it seems that any class that inherits from a dict does define PySequence_Check(): class E(dict): pass d = {} e = E() print isSequence(d) print isSequence(e) gives False True Very strange. The same is true with a class derived from UserDict. Can someone explain this behaviour? Meanwhile it seems that any Python class instance that defines the __getitem__ method has PySequence_Check() true by default. Another cruel trick! Can this be overridden without using C? I've found this (somewhat old) comment by GvR, admitting that the sequence protocol is poorly defined: http://mail.python.org/pipermail/python-checkins/2001-September/021227.html Perhaps another PEP is in order? Meanwhile I'll reapply my patch to handle sequences more cautiously... 
-- Ed From perry at stsci.edu Tue Nov 1 10:39:59 2005 From: perry at stsci.edu (Perry Greenfield) Date: Tue, 1 Nov 2005 10:39:59 -0500 Subject: [SciPy-dev] Arrays as truth values? In-Reply-To: <436722B3.7030003@ntc.zcu.cz> References: <4365F457.4060202@ntc.zcu.cz> <436722B3.7030003@ntc.zcu.cz> Message-ID: <8e88429e2ddec19acf77cb091e0ebd34@stsci.edu> The recent question about use of "and" and "or" highlighted an issue I think is worth at least a little discussion (I don't recall it being discussed for scipy_core, but maybe it already has been). numarray doesn't permit using arrays as truth values since we figured that it wasn't very clear what people expected to happen (this case is a good illustration). I'd like to make the argument that scipy_core also should not permit arrays to be used as truth values (i.e., one should get an exception if one tries to use it that way). I realize that this will break some old code, but since incompatible changes are being made, this is the time to make this sort of change. If left in, it is going to bite people, often quietly. Perry From oliphant at ee.byu.edu Tue Nov 1 10:52:59 2005 From: oliphant at ee.byu.edu (Travis Oliphant) Date: Tue, 01 Nov 2005 08:52:59 -0700 Subject: [SciPy-dev] array logical ops error? In-Reply-To: <436722B3.7030003@ntc.zcu.cz> References: <4365F457.4060202@ntc.zcu.cz> <436722B3.7030003@ntc.zcu.cz> Message-ID: <43678F5B.3050905@ee.byu.edu> Robert Cimrman wrote: >Thanks, this works as expected. Is there a place where such things are >documented? I suspect many people would expect 'and' and '&' to have the >same behaviour. Of course, looking at the numeric operations special >methods should reveal the problem, but who would do it ;) > > Yep. There's a nice warning about that one in my book :-) -Travis From oliphant at ee.byu.edu Tue Nov 1 11:06:33 2005 From: oliphant at ee.byu.edu (Travis Oliphant) Date: Tue, 01 Nov 2005 09:06:33 -0700 Subject: [SciPy-dev] Arrays as truth values? 
In-Reply-To: <8e88429e2ddec19acf77cb091e0ebd34@stsci.edu> References: <4365F457.4060202@ntc.zcu.cz> <436722B3.7030003@ntc.zcu.cz> <8e88429e2ddec19acf77cb091e0ebd34@stsci.edu> Message-ID: <43679289.6000404@ee.byu.edu> Perry Greenfield wrote: >The recent question about use of "and" and "or" highlighted an issue I >think is worth at least a little discussion (I don't recall it being >discussed for scipy_core, but maybe it already has been). > > > It's a good discussion to have. I don't recall talking about it. >numarray doesn't permit using arrays as truth values since we figured >that it wasn't very clear what people expected to happen (this case is >a good illustration). I'd like to make the argument that scipy_core >also should not permit arrays to be used as truth values (i.e., one should get >an exception if one tries to use it that way). I realize that this will >break some old code, but since incompatible changes are being made, >this is the time to make this sort of change. If left in, it is going >to bite people, often quietly. > > > I agree it can bite people, but I'm concerned that arrays not having a truth value is an odd thing in Python --- you have to implement it by raising an error when __nonzero__ is called, right? All other objects in Python have truth values (including its built-in array). My attitude is that it's just better to teach people the proper use of truth values than to break form with the rest of Python. I would definitely like to hear more opinions though. It would be very easy to simply raise an error when __nonzero__ is called. -Travis From cimrman3 at ntc.zcu.cz Tue Nov 1 11:27:11 2005 From: cimrman3 at ntc.zcu.cz (Robert Cimrman) Date: Tue, 01 Nov 2005 17:27:11 +0100 Subject: [SciPy-dev] Arrays as truth values? 
In-Reply-To: <43679289.6000404@ee.byu.edu> References: <4365F457.4060202@ntc.zcu.cz> <436722B3.7030003@ntc.zcu.cz> <8e88429e2ddec19acf77cb091e0ebd34@stsci.edu> <43679289.6000404@ee.byu.edu> Message-ID: <4367975F.9000809@ntc.zcu.cz> >>numarray doesn't permit using arrays as truth values since we figured >>that it wasn't very clear what people expected to happen (this case is >>a good illustration). I'd like to make the argument that scipy_core >>also should not permit arrays to be used as truth values (i.e., one should get >>an exception if one tries to use it that way). I realize that this will >>break some old code, but since incompatible changes are being made, >>this is the time to make this sort of change. If left in, it is going >>to bite people, often quietly. >> >> >> > > I agree it can bite people, but I'm concerned that arrays not having a > truth value is an odd thing in Python --- you have to implement it by > raising an error when __nonzero__ is called, right? > > All other objects in Python have truth values (including its built-in > array). My attitude is that it's just better to teach people the proper > use of truth values than to break form with the rest of Python. > > I would definitely like to hear more opinions though. It would be > very easy to simply raise an error when __nonzero__ is called. Speaking about 'and' only, my problem with the current implementation of it is that it _looks_ like it works as '*' in some cases - 'b and c' returns an array whose length is that of b and c (if the lengths are equal, that is). I would not be against 'b and c' giving a single True or False... But this also breaks the Python semantics of 'and'. The same holds for other logical ops, of course. So I don't know :-) - I can live with the current state. r. 
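The asymmetry discussed in this thread is ordinary Python semantics: `x and y` evaluates the truth of `x` as a whole and then returns one of the two operands unchanged, so it can never be elementwise and is not symmetric. A small illustration with plain lists (no scipy needed):

```python
# 'and' returns one of its operands based on the truth of the whole object,
# which is why 'b and c' differs from 'c and b'.
b = [True, False, True]
c = [True, True, False]

print(b and c)  # b is a non-empty (truthy) list, so 'and' returns c
print(c and b)  # c is truthy, so 'and' returns b

# Elementwise logic must be requested explicitly, e.g. with '&', the
# operator that arrays overload via __and__; spelled out here for lists:
both = [x & y for x, y in zip(b, c)]
print(both)  # [True, False, False]
```

Since `and` cannot be overloaded, the only choices for arrays are the value returned by `__nonzero__` (a single truth value for the whole array) or raising an exception there, as Perry proposes.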
From chanley at stsci.edu Tue Nov 1 12:52:28 2005 From: chanley at stsci.edu (Christopher Hanley) Date: Tue, 01 Nov 2005 12:52:28 -0500 Subject: [SciPy-dev] adding new attribute to data type objects Message-ID: <4367AB5C.1010109@stsci.edu> Hi Travis, In numarray a user is able to say the following in order to get the size of a data type in bytes: numarray.Int32.bytes This is a useful feature that is used in multiple locations within the Records implementation. I cannot find an equally simple functionality within newcore. There are obvious work arounds that I could use but I fear that they aren't as efficient as having a simple attribute on the data type object. Is there some easy way of adding a similar feature to scipy_core? Thanks, Chris From jmiller at stsci.edu Tue Nov 1 13:48:20 2005 From: jmiller at stsci.edu (Todd Miller) Date: Tue, 01 Nov 2005 13:48:20 -0500 Subject: [SciPy-dev] newcore string array bug? Message-ID: <4367B874.8040003@stsci.edu> With a --with-pydebug configured Python-1.4.2 on i386-Linux I just saw: >>> scipy.array(["this"]*10000) Fatal Python error: UNREF invalid object Abort (core dumped) From oliphant at ee.byu.edu Tue Nov 1 13:52:52 2005 From: oliphant at ee.byu.edu (Travis Oliphant) Date: Tue, 01 Nov 2005 11:52:52 -0700 Subject: [SciPy-dev] [Numpy-discussion] Performance of the array protocol In-Reply-To: <4367AEC8.4010707@noaa.gov> References: <1130846099.2968.39.camel@localhost.localdomain> <43679557.70508@ee.byu.edu> <4367AEC8.4010707@noaa.gov> Message-ID: <4367B984.2060303@ee.byu.edu> Chris Barker wrote: > Travis Oliphant wrote: > >> So, I guess the answer to your question is that for small arrays it >> is an intrinsic limitation of the use of Python attributes in the >> array protocol. > > > IIRC, in the early discussion of the array protocol, we had talked > about defining a C struct, and a set of utilities to query that > struct. Now, I guess it uses Python attributes. 
Do I recall > incorrectly, or has there been a design change, or is this a prototype > implementation? I guess I'd like to see an array protocol that is as > fast as fromstring(), even for small arrays, though it's probably not > a big deal. Also, when writing C code to work with an array, it might > be easier, as well as faster, to not have to query Python attribute You are correct that it would be good to have a C-protocol that did not require attribute lookups (the source of any speed difference with fromstring). Not much progress has been made on a C-version of the protocol, though. I don't know how to do it without adding something to Python itself. At SciPy 2005, I outlined my vision for how we could proceed in that direction. There is a PEP in a subversion repository at http://svn.scipy.org/svn/PEP that explains my view. Basically, I think we should push for a simple (C-based) Nd array object that is nothing more than the current C-structure of the N-d array. Then, arrays would inherit from this base class but all of Python would be able to understand and query it's C-structure. If we could also get an array interface into the typeobject table, it would be a simple thing to populate this structure even with objects that didn't inherit from the base object. I am still interested in other ideas for how to implement the array interface in C, without adding something to the type-object table (we could push for that, but it might take more political effort). 
-Travis From strawman at astraw.com Tue Nov 1 14:04:38 2005 From: strawman at astraw.com (Andrew Straw) Date: Tue, 01 Nov 2005 11:04:38 -0800 Subject: [SciPy-dev] float exception, fblas cgemv - newscipy In-Reply-To: References: Message-ID: <4367BC46.1040205@astraw.com> Hi Arnd, Make sure you're not being bitten by this bug in glibc < 2.3.3: http://sources.redhat.com/bugzilla/show_bug.cgi?id=10 http://bugs.debian.org/cgi-bin/bugreport.cgi?bug=279294 Arnd Baecker wrote: > Hi, > > on another debian sarge based 32Bit machine (where I made a couple of > links so that ATLAS is found) I get the following > > check_default_beta_y (scipy.linalg.fblas.test_fblas.test_cgemv) ... zsh: > 8527 floating point exception > > The shortest example to reproduce this is > > gdb > file /usr/bin/python > run > > from scipy import * > from scipy.basic.random import normal > a = normal(0.,1.,(3,3)) > x = arange(shape(a)[0]) > print a > print x > linalg.fblas.cgemv(1,a,x) > > Giving > > >>>>print a > > [[-0.70685647 0.65494695 0.9179007 ] > [-0.30862233 0.58137533 2.01614468] > [ 0.18585164 -0.8134587 1.23613689]] > >>>>print x > > [0 1 2] > >>>>linalg.fblas.cgemv(1,a,x) > > > Program received signal SIGFPE, Arithmetic exception. > [Switching to Thread 16384 (LWP 26791)] > 0x404167a0 in ATL_cgemvC_a1_x1_bXi0_y1 () from /usr/lib/sse2/libatlas.so.3 > > Does anyone else see this? > Any ideas what to do with this? 
> > Best, Arnd > > > System details: > > ipython > import scipy > scipy.__core_version__ > scipy.__scipy_version__ > scipy.__scipy_config__.show() > > gives > > > Python 2.3.5 (#2, Sep 4 2005, 22:01:42) > > In [1]:import scipy > Importing io to scipy > Importing fftpack to scipy > Importing special to scipy > Importing cluster to scipy > Importing sparse to scipy > Importing utils to scipy > Importing interpolate to scipy > Importing integrate to scipy > Importing signal to scipy > Importing optimize to scipy > Importing linalg to scipy > Importing stats to scipy > In [2]:scipy.__core_version__ > Out[2]:'0.4.3.1376' > In [3]:scipy.__scipy_version__ > Out[3]:'0.4.2_1400' > In [4]:scipy.__scipy_config__.show() > dfftw_info: > NOT AVAILABLE > > blas_opt_info: > libraries = ['f77blas', 'cblas', 'atlas'] > library_dirs = ['/usr/lib/sse2'] > define_macros = [('ATLAS_INFO', '"\\"3.6.0\\""')] > language = c > > djbfft_info: > NOT AVAILABLE > > atlas_blas_threads_info: > NOT AVAILABLE > > lapack_opt_info: > libraries = ['lapack', 'f77blas', 'cblas', 'atlas'] > library_dirs = ['/usr/lib/atlas', '/usr/lib/'] > define_macros = [('ATLAS_INFO', '"\\"3.6.0\\""')] > language = f77 > > atlas_info: > libraries = ['lapack', 'f77blas', 'cblas', 'atlas'] > library_dirs = ['/usr/lib/atlas', '/usr/lib/'] > language = f77 > define_macros = [('ATLAS_INFO', '"\\"3.6.0\\""')] > > fftw_info: > NOT AVAILABLE > > atlas_blas_info: > libraries = ['f77blas', 'cblas', 'atlas'] > library_dirs = ['/usr/lib/sse2'] > language = c > define_macros = [('ATLAS_INFO', '"\\"3.6.0\\""')] > > atlas_threads_info: > NOT AVAILABLE > > > > > Atlas: > > ls -l /usr/lib/atlas > total 17168 > lrwxrwxrwx 1 root root 12 Oct 25 00:32 libblas.so -> libblas.so.3 > lrwxrwxrwx 1 root root 14 Mar 23 2005 libblas.so.2 -> > libblas.so.2.3 > -rw-r--r-- 1 root root 3051888 Mar 4 2005 libblas.so.2.3 > lrwxrwxrwx 1 root root 14 Nov 18 2004 libblas.so.3 -> > libblas.so.3.0 > -rw-r--r-- 1 root root 3443424 Oct 27 2004 libblas.so.3.0 
> lrwxrwxrwx 1 root root 14 Oct 25 00:32 liblapack.so -> > liblapack.so.3 > lrwxrwxrwx 1 root root 16 Mar 23 2005 liblapack.so.2 -> > liblapack.so.2.3 > -rw-r--r-- 1 root root 5500688 Mar 4 2005 liblapack.so.2.3 > lrwxrwxrwx 1 root root 16 Nov 18 2004 liblapack.so.3 -> > liblapack.so.3.0 > -rw-r--r-- 1 root root 5537840 Oct 27 2004 liblapack.so.3.0 > drwxr-xr-x 2 root root 4096 Nov 18 2004 sse2 > > > ls -l /usr/lib/ | grep atlas > drwxr-xr-x 3 root root 4096 Oct 25 00:32 atlas > lrwxrwxrwx 1 root root 16 Mar 22 2004 libatlas.so -> > sse2/libatlas.so > lrwxrwxrwx 1 root root 15 Mar 23 2005 libatlas.so.2 -> > libatlas.so.2.3 > -rw-r--r-- 1 root root 2835272 Mar 4 2005 libatlas.so.2.3 > lrwxrwxrwx 1 root root 15 Nov 18 2004 libatlas.so.3 -> > libatlas.so.3.0 > -rw-r--r-- 1 root root 3234288 Oct 27 2004 libatlas.so.3.0 > lrwxrwxrwx 1 root root 34 Mar 22 2004 liblapack.so -> > /usr/lib/atlas/sse2/liblapack.so.3 > lrwxrwxrwx 1 root root 25 Mar 22 2004 liblapack_atlas.so -> > sse2/liblapack_atlas.so.3 > lrwxrwxrwx 1 root root 22 Mar 23 2005 liblapack_atlas.so.2 -> > liblapack_atlas.so.2.3 > -rw-r--r-- 1 root root 60568 Mar 4 2005 liblapack_atlas.so.2.3 > lrwxrwxrwx 1 root root 22 Nov 18 2004 liblapack_atlas.so.3 -> > liblapack_atlas.so.3.0 > -rw-r--r-- 1 root root 131224 Oct 27 2004 liblapack_atlas.so.3.0 > > _______________________________________________ > Scipy-dev mailing list > Scipy-dev at scipy.net > http://www.scipy.net/mailman/listinfo/scipy-dev From oliphant at ee.byu.edu Tue Nov 1 15:23:12 2005 From: oliphant at ee.byu.edu (Travis Oliphant) Date: Tue, 01 Nov 2005 13:23:12 -0700 Subject: [SciPy-dev] adding new attribute to data type objects In-Reply-To: <4367AB5C.1010109@stsci.edu> References: <4367AB5C.1010109@stsci.edu> Message-ID: <4367CEB0.9040700@ee.byu.edu> Christopher Hanley wrote: >Hi Travis, > >In numarray a user is able to say the following in order to get the size >of a data type in bytes: > >numarray.Int32.bytes > >This is a useful feature that is 
used in multiple locations within the >Records implementation. I cannot find an equally simple functionality >within newcore. There are obvious work arounds that I could use but I >fear that they aren't as efficient as having a simple attribute on the >data type object. Is there some easy way of adding a similar feature to >scipy_core? > > Yes, I've thought about this. The problem is I don't know exactly how to add a class attribute to a built in type object. I suppose, this could be done using metaclasses. But, I think it would just be easier to build a bytes dictionary so that scipy.bytes[int32] gave the information you wanted. Of course this example is rather pointless because the number of bytes is just the number of bits divided by 8. The information is there right now in the typeinfo dictionary in the multiarray module (it's just not well exposed). -Travis From oliphant at ee.byu.edu Tue Nov 1 15:26:58 2005 From: oliphant at ee.byu.edu (Travis Oliphant) Date: Tue, 01 Nov 2005 13:26:58 -0700 Subject: [SciPy-dev] newcore string array bug? In-Reply-To: <4367B874.8040003@stsci.edu> References: <4367B874.8040003@stsci.edu> Message-ID: <4367CF92.80909@ee.byu.edu> Todd Miller wrote: >With a --with-pydebug configured Python-1.4.2 on i386-Linux I just saw: > > >>> scipy.array(["this"]*10000) >Fatal Python error: UNREF invalid object >Abort (core dumped) > > I can't reproduce this. But, check to see if the problem is in creating or printing the array. I do see problems printing large string arrays right now. -Travis From jmiller at stsci.edu Tue Nov 1 15:48:01 2005 From: jmiller at stsci.edu (Todd Miller) Date: Tue, 01 Nov 2005 15:48:01 -0500 Subject: [SciPy-dev] newcore string array bug? 
In-Reply-To: <4367CF92.80909@ee.byu.edu> References: <4367B874.8040003@stsci.edu> <4367CF92.80909@ee.byu.edu> Message-ID: <4367D481.9090309@stsci.edu> Travis Oliphant wrote: >Todd Miller wrote: > > > >>With a --with-pydebug configured Python-1.4.2 on i386-Linux I just saw: >> >> >> >>>>>scipy.array(["this"]*10000) >>>>> >>>>> >>Fatal Python error: UNREF invalid object >>Abort (core dumped) >> >> >> >> >I can't reproduce this. But, check to see if the problem is in creating >or printing the array. > > I was able to create an array and extract a few elements w/o problems. Trying to print it dumps core. >I do see problems printing large string arrays right now. > > Todd From cookedm at physics.mcmaster.ca Tue Nov 1 17:05:58 2005 From: cookedm at physics.mcmaster.ca (David M. Cooke) Date: Tue, 01 Nov 2005 17:05:58 -0500 Subject: [SciPy-dev] updated Blitz++ to 0.9 Message-ID: I've updated Blitz++ for weave to 0.9. It only has a configuration file for g++ right now, but it's easy to add other compilers if someone wishes. To do so, 1) get the Blitz++ 0.9 source from http://sourceforge.net/project/showfiles.php?group_id=63961 2) unpack, then mkdir build; chdir build; ../configure 3) if this picks up your C++ compiler correctly, you'll find a directory named after your compiler in blitz/ (for instance, configuring with g++ makes a directory blitz/gnu) that has a file bzconfig.h in it 4) copy that directory to weave's blitz source directory. If you're a developer, commit it, else send it to the list :-) Blitz++ will pick the right configuration file depending on the compiler used. It looks like there's nothing special to support gcc 3.3 or 3.4 vs. 4.0, which is good. -- |>|\/|< /--------------------------------------------------------------------------\ |David M.
Cooke http://arbutus.physics.mcmaster.ca/dmc/ |cookedm at physics.mcmaster.ca From Fernando.Perez at colorado.edu Tue Nov 1 17:14:08 2005 From: Fernando.Perez at colorado.edu (Fernando Perez) Date: Tue, 01 Nov 2005 15:14:08 -0700 Subject: [SciPy-dev] updated Blitz++ to 0.9 In-Reply-To: References: Message-ID: <4367E8B0.2050907@colorado.edu> David M. Cooke wrote: > I've updated Blitz++ for weave to 0.9. It only has a configuration > file for g++ right now, but it's easy to add other compilers if > someone one wishs. To do so, Did you commit this? Just this morning I was doing some ubuntu work (gcc 4.0.2) and had to do the manual overcopy of the weave/blitz/ directory (running on old scipy, not the new one). > It looks like there's nothing special to support gcc 3.3 or 3.4 vs. > 4.0, which is good. Yes, Julian Cummings has been working a fair bit to accomodate gcc 4.0.x's idiosyncracies regarding the C++ standard. I can only thank him for that... Cheers, f From cookedm at physics.mcmaster.ca Tue Nov 1 17:19:24 2005 From: cookedm at physics.mcmaster.ca (David M. Cooke) Date: Tue, 01 Nov 2005 17:19:24 -0500 Subject: [SciPy-dev] updated Blitz++ to 0.9 In-Reply-To: <4367E8B0.2050907@colorado.edu> (Fernando Perez's message of "Tue, 01 Nov 2005 15:14:08 -0700") References: <4367E8B0.2050907@colorado.edu> Message-ID: Fernando Perez writes: > David M. Cooke wrote: >> I've updated Blitz++ for weave to 0.9. It only has a configuration >> file for g++ right now, but it's easy to add other compilers if >> someone one wishs. To do so, > > Did you commit this? Just this morning I was doing some ubuntu work (gcc > 4.0.2) and had to do the manual overcopy of the weave/blitz/ directory > (running on old scipy, not the new one). Yep, in newcore: revision 1413 and 1414. I had to do it in two chunks: svn.scipy.org was timing out on me. No reason it shouldn't work with the old scipy. >> It looks like there's nothing special to support gcc 3.3 or 3.4 vs. >> 4.0, which is good. 
> > Yes, Julian Cummings has been working a fair bit to accomodate gcc 4.0.x's > idiosyncracies regarding the C++ standard. I can only thank him for that... -- |>|\/|< /--------------------------------------------------------------------------\ |David M. Cooke http://arbutus.physics.mcmaster.ca/dmc/ |cookedm at physics.mcmaster.ca From oliphant at ee.byu.edu Tue Nov 1 17:31:57 2005 From: oliphant at ee.byu.edu (Travis Oliphant) Date: Tue, 01 Nov 2005 15:31:57 -0700 Subject: [SciPy-dev] newcore string array bug? In-Reply-To: <4367D481.9090309@stsci.edu> References: <4367B874.8040003@stsci.edu> <4367CF92.80909@ee.byu.edu> <4367D481.9090309@stsci.edu> Message-ID: <4367ECDD.20801@ee.byu.edu> Todd Miller wrote: >I was able to create an array and extract a few elements w/o problems. >Trying to print it dumps core. > > I don't get a core dump, but I do get an error in concatenate. What version are you running? On which platform? What do you get when you run a = array(["this"]*10000) b=a[:3] c=a[-3:] concatenate((b,c)) -Travis From Fernando.Perez at colorado.edu Tue Nov 1 17:44:23 2005 From: Fernando.Perez at colorado.edu (Fernando Perez) Date: Tue, 01 Nov 2005 15:44:23 -0700 Subject: [SciPy-dev] updated Blitz++ to 0.9 In-Reply-To: References: <4367E8B0.2050907@colorado.edu> Message-ID: <4367EFC7.2080702@colorado.edu> David M. Cooke wrote: > Fernando Perez writes: > > >>David M. Cooke wrote: >> >>>I've updated Blitz++ for weave to 0.9. It only has a configuration >>>file for g++ right now, but it's easy to add other compilers if >>>someone one wishs. To do so, >> >>Did you commit this? Just this morning I was doing some ubuntu work (gcc >>4.0.2) and had to do the manual overcopy of the weave/blitz/ directory >>(running on old scipy, not the new one). > > > Yep, in newcore: revision 1413 and 1414. I had to do it in two chunks: > svn.scipy.org was timing out on me. No reason it shouldn't work with > the old scipy. Great, many thanks.
I'd been keeping Ryan Krauss' email about this around so it wouldn't fall through the cracks. Cheers, f From oliphant at ee.byu.edu Tue Nov 1 17:49:26 2005 From: oliphant at ee.byu.edu (Travis Oliphant) Date: Tue, 01 Nov 2005 15:49:26 -0700 Subject: [SciPy-dev] newcore string array bug? In-Reply-To: <4367D481.9090309@stsci.edu> References: <4367B874.8040003@stsci.edu> <4367CF92.80909@ee.byu.edu> <4367D481.9090309@stsci.edu> Message-ID: <4367F0F6.1070501@ee.byu.edu> I just fixed concatenate to work with Flexible arrays (version 1415). This resolved any printing problems that I was having with printing string arrays. -Travis From oliphant at ee.byu.edu Tue Nov 1 18:13:54 2005 From: oliphant at ee.byu.edu (Travis Oliphant) Date: Tue, 01 Nov 2005 16:13:54 -0700 Subject: [SciPy-dev] adding new attribute to data type objects In-Reply-To: <4367AB5C.1010109@stsci.edu> References: <4367AB5C.1010109@stsci.edu> Message-ID: <4367F6B2.4020204@ee.byu.edu> Christopher Hanley wrote: >Hi Travis, > >In numarray a user is able to say the following in order to get the size >of a data type in bytes: > > Chris (and everyone else): I've just added an nbytes dictionary to scipy core. You can now say scipy.nbytes[] and get the number of bytes thus scipy.nbytes['d'] scipy.nbytes[scipy.float_] scipy.nbytes[float] scipy.nbytes[scipy.float64] scipy.nbytes[scipy.Float64] all return the same thing: 64 There is also a new nbytes attribute of ndarray's that does the itemsize * size multiplication for you to give the number of bytes in the array. 
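For readers following along with a current NumPy, the same bookkeeping can be sketched as below. The names here are today's (numpy, dtype.itemsize, ndarray.nbytes), not the 2005 scipy_core spellings, and note that the dictionary announced above initially returned bits (64) before it was switched to bytes later in the thread:

```python
import numpy as np

# Several spellings of the same type all normalize to one dtype,
# whose itemsize is reported in bytes.
nbytes = {key: np.dtype(key).itemsize for key in ('d', float, np.float64)}
assert set(nbytes.values()) == {8}

# The array attribute is exactly the itemsize * size product described above.
a = np.zeros((3, 4), dtype=np.float64)
assert a.nbytes == a.itemsize * a.size == 96
```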
-Travis From oliphant at ee.byu.edu Tue Nov 1 19:48:54 2005 From: oliphant at ee.byu.edu (Travis Oliphant) Date: Tue, 01 Nov 2005 17:48:54 -0700 Subject: [SciPy-dev] [Numpy-discussion] Performance of the array protocol In-Reply-To: <43680252.50009@noaa.gov> References: <1130846099.2968.39.camel@localhost.localdomain> <43679557.70508@ee.byu.edu> <4367AEC8.4010707@noaa.gov> <4367D279.1010700@ee.byu.edu> <43680252.50009@noaa.gov> Message-ID: <43680CF6.4010000@ee.byu.edu> Chris Barker wrote: > Travis Oliphant wrote: > >> like. We could push to get something like this in Python core, I >> think, so this Array_Interface header was available to everybody. > > > That would be great. Until then, it would still be a tiny header that > others could easily include with their code. David version number > would help keep things sorted out. > I've placed an updated array interface description that includes this struct-based access on http://numeric.scipy.org Somebody will add support for this in scipy soon too. -Travis From jmiller at stsci.edu Wed Nov 2 09:48:47 2005 From: jmiller at stsci.edu (Todd Miller) Date: Wed, 02 Nov 2005 09:48:47 -0500 Subject: [SciPy-dev] newcore string array bug? In-Reply-To: <4367F0F6.1070501@ee.byu.edu> References: <4367B874.8040003@stsci.edu> <4367CF92.80909@ee.byu.edu> <4367D481.9090309@stsci.edu> <4367F0F6.1070501@ee.byu.edu> Message-ID: <4368D1CF.60408@stsci.edu> Travis Oliphant wrote: >I just fixed concatenate to work with Flexible arrays (version 1415). >This resolved any printing problems that I was having with printing >string arrays. > > That fixed the problem for me as well. 
Todd From chanley at stsci.edu Wed Nov 2 09:58:46 2005 From: chanley at stsci.edu (Christopher Hanley) Date: Wed, 02 Nov 2005 09:58:46 -0500 Subject: [SciPy-dev] adding new attribute to data type objects In-Reply-To: <4367F6B2.4020204@ee.byu.edu> References: <4367AB5C.1010109@stsci.edu> <4367F6B2.4020204@ee.byu.edu> Message-ID: <4368D426.4050506@stsci.edu> Travis Oliphant wrote: > Chris (and everyone else): > > I've just added an nbytes dictionary to scipy core. > > You can now say > > scipy.nbytes[] > > and get the number of bytes > > thus > > scipy.nbytes['d'] > scipy.nbytes[scipy.float_] > scipy.nbytes[float] > scipy.nbytes[scipy.float64] > scipy.nbytes[scipy.Float64] > > all return the same thing: 64 > > There is also a new nbytes attribute of ndarray's that does the itemsize > * size multiplication for you to give the number of bytes in the array. > > -Travis Thank you Travis, I like this solution much more than going through the multiarray module. Chris From arnd.baecker at web.de Wed Nov 2 12:27:02 2005 From: arnd.baecker at web.de (Arnd Baecker) Date: Wed, 2 Nov 2005 18:27:02 +0100 (CET) Subject: [SciPy-dev] fftw3 with newcore Message-ID: Hi, just a quick question: is fftw3 already supported with newcore? I have the impression that it is not detected properly. system_info.py for the "old" scipy_distutils does some effort to figure out about fftw3: class fftw_info(system_info): #variables to override section = 'fftw' dir_env_var = 'FFTW' notfounderror = FFTWNotFoundError ver_info = [ { 'name':'fftw3', 'libs':['fftw3'], 'includes':['fftw3.h'], 'macros':[('SCIPY_FFTW3_H',None)]}, { 'name':'fftw2', 'libs':['rfftw', 'fftw'], 'includes':['fftw.h','rfftw.h'], 'macros':[('SCIPY_FFTW_H',None)]}] which is not present in the newcore system_info It would be nice if newscipy could support fftw3 as well, as fftw3 performs by approx. a factor of 3 faster than fftw2 on the opteron! 
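The ver_info list quoted above encodes an ordered preference: try fftw3 first, fall back to fftw2. A minimal sketch of that lookup, with a hypothetical `have` predicate standing in for system_info's real library search on disk:

```python
# Ordered candidate list, as in the old scipy_distutils fftw_info class.
ver_info = [
    {'name': 'fftw3', 'libs': ['fftw3'], 'macros': [('SCIPY_FFTW3_H', None)]},
    {'name': 'fftw2', 'libs': ['rfftw', 'fftw'], 'macros': [('SCIPY_FFTW_H', None)]},
]

def detect(candidates, have):
    # Return the first candidate whose libraries are all available.
    for info in candidates:
        if all(have(lib) for lib in info['libs']):
            return info['name']
    return None

# fftw3 wins when installed; otherwise fall back to fftw2; else nothing.
assert detect(ver_info, lambda lib: lib == 'fftw3') == 'fftw3'
assert detect(ver_info, lambda lib: lib in ('rfftw', 'fftw')) == 'fftw2'
assert detect(ver_info, lambda lib: False) is None
```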
BTW: did anyone of you ever compare the speed of the wrapped fftw with the mflops as given by the benchmarks on http://www.fftw.org/speed/ ? Best, Arnd From chanley at stsci.edu Wed Nov 2 13:46:50 2005 From: chanley at stsci.edu (Christopher Hanley) Date: Wed, 02 Nov 2005 13:46:50 -0500 Subject: [SciPy-dev] adding new attribute to data type objects In-Reply-To: <4368D426.4050506@stsci.edu> References: <4367AB5C.1010109@stsci.edu> <4367F6B2.4020204@ee.byu.edu> <4368D426.4050506@stsci.edu> Message-ID: <4369099A.8000701@stsci.edu> Christopher Hanley wrote: >>scipy.nbytes['d'] >>scipy.nbytes[scipy.float_] >>scipy.nbytes[float] >>scipy.nbytes[scipy.float64] >>scipy.nbytes[scipy.Float64] >> >>all return the same thing: 64 >> >>There is also a new nbytes attribute of ndarray's that does the itemsize >>* size multiplication for you to give the number of bytes in the array. >> >>-Travis > > > Thank you Travis, > > I like this solution much more than going through the multiarray module. > > Chris By the way, I've modified the nbytes dictionary to return the value in bytes instead of bits. From chanley at stsci.edu Wed Nov 2 14:05:18 2005 From: chanley at stsci.edu (Christopher Hanley) Date: Wed, 02 Nov 2005 14:05:18 -0500 Subject: [SciPy-dev] Can't build newcore revision 1420 Message-ID: <43690DEE.5090005@stsci.edu> In file included from scipy/base/src/multiarraymodule.c:44: scipy/base/src/arrayobject.c: In function `array_frominterface': scipy/base/src/arrayobject.c:5388: `num' undeclared (first use in this function) scipy/base/src/arrayobject.c:5388: (Each undeclared identifier is reported only once scipy/base/src/arrayobject.c:5388: for each function it appears in.) 
In file included from scipy/base/src/multiarraymodule.c:44: scipy/base/src/arrayobject.c: In function `array_frominterface': scipy/base/src/arrayobject.c:5388: `num' undeclared (first use in this function) scipy/base/src/arrayobject.c:5388: (Each undeclared identifier is reported only once scipy/base/src/arrayobject.c:5388: for each function it appears in.) error: Command "gcc -fno-strict-aliasing -DNDEBUG -g -O3 -Wall -Wstrict-prototypes -fPIC -Ibuild/src/scipy/base/src -Iscipy/base/include -Ibuild/src/scipy/base -Iscipy/base/src -I/usr/stsci/pyssgx/Python-2.4.1/include/python2.4 -c scipy/base/src/multiarraymodule.c -o build/temp.linux-i686-2.4/scipy/base/src/multiarraymodule.o" failed with exit status 1 removed scipy/base/__svn_version__.py removed scipy/base/__svn_version__.pyc removed scipy/f2py/__svn_version__.py removed scipy/f2py/__svn_version__.pyc From cookedm at physics.mcmaster.ca Wed Nov 2 14:32:33 2005 From: cookedm at physics.mcmaster.ca (David M. Cooke) Date: Wed, 02 Nov 2005 14:32:33 -0500 Subject: [SciPy-dev] Can't build newcore revision 1420 In-Reply-To: <43690DEE.5090005@stsci.edu> (Christopher Hanley's message of "Wed, 02 Nov 2005 14:05:18 -0500") References: <43690DEE.5090005@stsci.edu> Message-ID: Christopher Hanley writes: > In file included from scipy/base/src/multiarraymodule.c:44: > scipy/base/src/arrayobject.c: In function `array_frominterface': > scipy/base/src/arrayobject.c:5388: `num' undeclared (first use in this > function) > scipy/base/src/arrayobject.c:5388: (Each undeclared identifier is > reported only once > scipy/base/src/arrayobject.c:5388: for each function it appears in.) 
> In file included from scipy/base/src/multiarraymodule.c:44: > scipy/base/src/arrayobject.c: In function `array_frominterface': > scipy/base/src/arrayobject.c:5388: `num' undeclared (first use in this > function) > scipy/base/src/arrayobject.c:5388: (Each undeclared identifier is > reported only once > scipy/base/src/arrayobject.c:5388: for each function it appears in.) > error: Command "gcc -fno-strict-aliasing -DNDEBUG -g -O3 -Wall > -Wstrict-prototypes -fPIC -Ibuild/src/scipy/base/src > -Iscipy/base/include -Ibuild/src/scipy/base -Iscipy/base/src > -I/usr/stsci/pyssgx/Python-2.4.1/include/python2.4 -c > scipy/base/src/multiarraymodule.c -o > build/temp.linux-i686-2.4/scipy/base/src/multiarraymodule.o" failed with > exit status 1 > removed scipy/base/__svn_version__.py > removed scipy/base/__svn_version__.pyc > removed scipy/f2py/__svn_version__.py > removed scipy/f2py/__svn_version__.pyc Fixed. -- |>|\/|< /--------------------------------------------------------------------------\ |David M. Cooke http://arbutus.physics.mcmaster.ca/dmc/ |cookedm at physics.mcmaster.ca From schofield at ftw.at Wed Nov 2 17:10:13 2005 From: schofield at ftw.at (Ed Schofield) Date: Wed, 02 Nov 2005 23:10:13 +0100 Subject: [SciPy-dev] Arrays as truth values? In-Reply-To: <4367975F.9000809@ntc.zcu.cz> References: <4365F457.4060202@ntc.zcu.cz> <436722B3.7030003@ntc.zcu.cz> <8e88429e2ddec19acf77cb091e0ebd34@stsci.edu> <43679289.6000404@ee.byu.edu> <4367975F.9000809@ntc.zcu.cz> Message-ID: <43693945.5080602@ftw.at> Robert Cimrman wrote: >>>numarray doesn't permit using arrays as truth values since we figured >>>that it wasn't very clear what people expected to happen (this case is >>>a good illustration). I'd like to make the argument that scipy_core >>>also not permit arrays to be used as truth values (i.e., one should get >>>an exception if one tries to use it that way). 
I realize that this will >>>break some old code, but since incompatible changes are being made, >>>this is the time to make this sort of change. If left in, it is going >>>to bite people, often quietly. >>> >>> >>I agree it can bite people, but I'm concerned that arrays not having a >>truth value is an odd thing in Python --- you have to implement it by >>raising an error when __nonzero__ is called right? >> >>All other objects in Python have truth values (including its built-in >>array). My attitude is that its just better to teach people the proper >>use of truth values, then to break form with the rest of Python. >> >>I'm would definitely like to hear more opinions though. It would be >>very easy to simply raise and error when __nonzero__ is called. >> >> > >Speaking about 'and' only, my problem with the current implementation of >it is that it _looks_ like working as '*' in some cases - 'b and c' >returns an array whose length is that of b and c (if the lengths are >equal, that is). I would not be against 'b and c' giving a single True >or False... But this also breaks the Python semantics of 'and'. The same >holds for other logical ops, of course. > >So I don't know :-) - I can live with the current state. > > I'm slightly in favour of raising an exception like in nummaray. But I'd like to point out an inconsistency between the current truth values of ndarrays and those of other Python objects: >>> l = [False, False] >>> print bool(l) >>> s = set(l) >>> print bool(s) >>> import array >>> a = array.array('b',l) >>> print bool(a) >>> import scipy >>> nd1 = scipy.array(l, 'b') >>> nd2 = scipy.array(l, '?') >>> print bool(nd1) >>> print bool(nd2) gives: True True True False False If we do adopt Python's strange idiom with logical operators we should probably make arrays' truth values consistent with Python objects too. 
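For reference, the exception-raising behaviour numarray used here is what NumPy ultimately adopted for multi-element arrays, which sidesteps the inconsistency shown above; a sketch against a current NumPy:

```python
import numpy as np

a = np.array([False, False])

# A multi-element array refuses to collapse to a single truth value.
try:
    bool(a)
    raised = False
except ValueError:
    raised = True
assert raised

# Element-wise operators and explicit reductions stay unambiguous.
assert not a.any()   # "or" across the elements
assert not a.all()   # "and" across the elements
assert (a | np.array([True, False])).tolist() == [True, False]
```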
-- Ed From oliphant at ee.byu.edu Wed Nov 2 17:19:20 2005 From: oliphant at ee.byu.edu (Travis Oliphant) Date: Wed, 02 Nov 2005 15:19:20 -0700 Subject: [SciPy-dev] Feedback and help on what to do with flexible arrays and arithmetic operations Message-ID: <43693B68.9080004@ee.byu.edu> Currently arrays of Flexible type (string, unicode, and void) are not supported by universal functions, and therefore arithmetic operations on arrays of these types are not supported. However, it seems that it would be useful to support certain arithmetic operations for at least the string and unicode types. I can see at least three general ways to do this with various consequences: 1) Create a separate string array class similar to numarray that inherits from the ndarray but adds support for the arithmetic opertions. a) Support unicode arrays with the same string array class b) Have a separate unicode array class. This is the easiest for me. Is having a separate string array class a problem? 2) Adapt ufuncs so that support can be added for the flexible arrays on a ufunc by ufunc basis as appropriate. In principle this could be done and would provide a more complete solution. 3) Place a check in array_add and friends in arrayobject.c for the flexible types and call a separate function. a) Fix this specifically in code b) Allow a general mechanism that allows users to add their own funcion for array operations on a specific type -- This could get messy with coercion issues. I think I like option 2 the best, I suppose, but it would take a bit more work than option 1. -Travis From oliphant at ee.byu.edu Wed Nov 2 18:26:28 2005 From: oliphant at ee.byu.edu (Travis Oliphant) Date: Wed, 02 Nov 2005 16:26:28 -0700 Subject: [SciPy-dev] Thoughts on string arrays Message-ID: <43694B24.3000806@ee.byu.edu> After some more thought, I've decided that I like the numarray approach better to define a separate string array type (as a subclass of the ndarray). 
My main reason for thinking this is that then the string array can have the Python string methods which would be applied to every string in the array. I lean toward having a character array class that supports both strings and unicode arrays. This subclass could be written in Python to begin with and then moved to C as needed. -Travis From Fernando.Perez at colorado.edu Thu Nov 3 00:23:33 2005 From: Fernando.Perez at colorado.edu (Fernando Perez) Date: Wed, 02 Nov 2005 22:23:33 -0700 Subject: [SciPy-dev] Building newcore with the intel compilers on Itanium2... Message-ID: <43699ED5.20905@colorado.edu> Hi all, Andrew Docherty (who recently posted) and I have been pounding on the problem of getting newcore to build/link correctly with the intel compilers on an Itanium2 architecture. While things are _almost_ working, the link steps are not being done correctly, and this leads to failed imports (unresolved symbols). Once we figure it all out, I'll write up and commit some notes and probably apply a few small patches here and there, but right now I'm needing a bit of help. The gist of the problem seems to be to convince distutils to link with icc, and NOT with gcc. I'm building with an environment where CC=icc and with the following build command: python setup.py config_fc --fcompiler=intele build While this picks up the correct fortran Itanium2 compiler (ifort), and the C sources are actually also compiled with icc, the actual link steps are still being done with gcc, for some reason. From the build log: gcc -pthread -shared build/temp.linux-ia64-2.3/scipy/base/src/multiarraymodule.o -o build\ /lib.linux-ia64-2.3/scipy/base/multiarray.so While the build completes successfully, this leads to an import error: phillips[~]> python -c 'import scipy' Traceback (most recent call last): File "", line 1, in ? File "/home/phillips/student/fperez/usr/local/lib/python2.3/site-packages/scipy/__init__.py", line 30, in ? 
from scipy.base import * File "/home/phillips/student/fperez/usr/local/lib/python2.3/site-packages/scipy/base/__init__.py", line 5, in ? import multiarray ImportError: /home/phillips/student/fperez/usr/local/lib/python2.3/site-packages/scipy/base/multiarray.so: undefined symbol: ?1__serial_memmove That undefined symbol is part of the intel libraries, and it would get picked up if the link step was done with icc, I think (or at least one could ask it to link against the needed library). Or is it in fact correct to link using gcc (since that's what python itself was built with) and should we just try to get gcc to pick up the intel libraries in the final link step? I'll try to get that to work as well, in teh meantime... Any help would be much appreciated at this point. Cheers, f From Fernando.Perez at colorado.edu Thu Nov 3 00:39:29 2005 From: Fernando.Perez at colorado.edu (Fernando Perez) Date: Wed, 02 Nov 2005 22:39:29 -0700 Subject: [SciPy-dev] Building newcore with the intel compilers on Itanium2... In-Reply-To: <43699ED5.20905@colorado.edu> References: <43699ED5.20905@colorado.edu> Message-ID: <4369A291.9040407@colorado.edu> Fernando Perez wrote: > Or is it in fact correct to link using gcc (since that's what python itself > was built with) and should we just try to get gcc to pick up the intel > libraries in the final link step? I'll try to get that to work as well, in > teh meantime... OK, just to provide more info: I've confirmed that linking with gcc, even if you explicitly ask for the library that contains that symbol, doesn't work. However, if I manually rerun the link steps as: icc -pthread -shared build/temp.linux-ia64-2.3/scipy/base/src/multiarraymodule.o -o build/lib.linux-ia64-2.3/scipy/base/multiarray.so and then run the 'python setup.py install' step with this rebuilt multiarray.so object, the problem goes away. So the question really is: how do we get scipy.distutils to use icc as the LINKER and not to use gcc at all? Thanks for any pointers... 
Cheers, f From rkern at ucsd.edu Thu Nov 3 00:53:52 2005 From: rkern at ucsd.edu (Robert Kern) Date: Wed, 02 Nov 2005 21:53:52 -0800 Subject: [SciPy-dev] Building newcore with the intel compilers on Itanium2... In-Reply-To: <4369A291.9040407@colorado.edu> References: <43699ED5.20905@colorado.edu> <4369A291.9040407@colorado.edu> Message-ID: <4369A5F0.4090109@ucsd.edu> Fernando Perez wrote: > Fernando Perez wrote: > >>Or is it in fact correct to link using gcc (since that's what python itself >>was built with) and should we just try to get gcc to pick up the intel >>libraries in the final link step? I'll try to get that to work as well, in >>teh meantime...
> > OK, just to provide more info: I've confirmed that linking with gcc, even if > you explicitly ask for the library that contains that symbol, doesn't work. > However, if I manually rerun the link steps as: > > icc -pthread -shared > build/temp.linux-ia64-2.3/scipy/base/src/multiarraymodule.o -o > build/lib.linux-ia64-2.3/scipy/base/multiarray.so > > and then run the 'python setup.py install' step with this rebuilt > multiarray.so object, the problem goes away. > > So the question really is: how do we get scipy.distutils to use icc as the > LINKER and not to use gcc at all? Thanks for any pointers... How was the interpreter built? What linker do other extensions get linked with when using plain-old-distutils? -- Robert Kern rkern at ucsd.edu "In the fields of hell where the grass grows high Are the graves of dreams allowed to die." -- Richard Harter From Fernando.Perez at colorado.edu Thu Nov 3 01:05:25 2005 From: Fernando.Perez at colorado.edu (Fernando Perez) Date: Wed, 02 Nov 2005 23:05:25 -0700 Subject: [SciPy-dev] Building newcore with the intel compilers on Itanium2... In-Reply-To: <4369A5F0.4090109@ucsd.edu> References: <43699ED5.20905@colorado.edu> <4369A291.9040407@colorado.edu> <4369A5F0.4090109@ucsd.edu> Message-ID: <4369A8A5.4090409@colorado.edu> Robert Kern wrote: > Fernando Perez wrote: > >>So the question really is: how do we get scipy.distutils to use icc as the >>LINKER and not to use gcc at all? Thanks for any pointers... > > > How was the interpreter built? What linker do other extensions get > linked with when using plain-old-distutils? You mean this? >>Fernando Perez wrote: >>>Or is it in fact correct to link using gcc (since that's what python itself >>>was built with) The version of python we're using (2.3) was built with gcc, so plain-old-distutils will link with gcc as well. However, as we've been able to establish, if you compile the extension modules using icc, you must ALSO link them with icc, not with gcc. 
Basically we need a mechanism to fully override the C compiler/linker choices much like there is one to specify the fortran compiler. I got pretty lost in the maze that distutils/scipy.distutils is, but I'll be happy to give it another go if someone can point me in the right direction. Cheers, f From pearu at scipy.org Thu Nov 3 00:07:31 2005 From: pearu at scipy.org (Pearu Peterson) Date: Wed, 2 Nov 2005 23:07:31 -0600 (CST) Subject: [SciPy-dev] Building newcore with the intel compilers on Itanium2... In-Reply-To: <4369A291.9040407@colorado.edu> References: <43699ED5.20905@colorado.edu> <4369A291.9040407@colorado.edu> Message-ID: On Wed, 2 Nov 2005, Fernando Perez wrote: > Fernando Perez wrote: > >> Or is it in fact correct to link using gcc (since that's what python itself >> was built with) and should we just try to get gcc to pick up the intel >> libraries in the final link step? I'll try to get that to work as well, in >> teh meantime... > > OK, just to provide more info: I've confirmed that linking with gcc, even if > you explicitly ask for the library that contains that symbol, doesn't work. > However, if I manually rerun the link steps as: > > icc -pthread -shared > build/temp.linux-ia64-2.3/scipy/base/src/multiarraymodule.o -o > build/lib.linux-ia64-2.3/scipy/base/multiarray.so > > and then run the 'python setup.py install' step with this rebuilt > multiarray.so object, the problem goes away. > > So the question really is: how do we get scipy.distutils to use icc as the > LINKER and not to use gcc at all? Thanks for any pointers... This is not really a scipy.distutils issue. A simple answer to your question would be building python with icc. Standard distutils has limited support switching C compilers, see `setup.py build_ext --help`. It is also possible to implement support for your own c compiler in scipy.distutils but first I would suggest following the simple answer. 
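The mechanism asked for above — letting a CC environment variable override the linker recorded when Python itself was built — amounts to something like this hypothetical helper (pure illustration, not actual distutils or scipy.distutils code):

```python
import os

def pick_linker(default=('gcc', '-pthread', '-shared')):
    # Honor CC for the link step if set; otherwise fall back to the
    # command line Python's own build recorded.
    cc = os.environ.get('CC')
    if cc:
        return [cc, '-shared']
    return list(default)

# With CC=icc the shared-object link uses icc, resolving intel runtime symbols.
os.environ['CC'] = 'icc'
assert pick_linker() == ['icc', '-shared']
os.environ.pop('CC', None)
assert pick_linker() == ['gcc', '-pthread', '-shared']
```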
Pearu From Fernando.Perez at colorado.edu Thu Nov 3 01:23:23 2005 From: Fernando.Perez at colorado.edu (Fernando Perez) Date: Wed, 02 Nov 2005 23:23:23 -0700 Subject: [SciPy-dev] Building newcore with the intel compilers on Itanium2... In-Reply-To: References: <43699ED5.20905@colorado.edu> <4369A291.9040407@colorado.edu> Message-ID: <4369ACDB.1090109@colorado.edu> Pearu Peterson wrote: > > On Wed, 2 Nov 2005, Fernando Perez wrote: >>So the question really is: how do we get scipy.distutils to use icc as the >>LINKER and not to use gcc at all? Thanks for any pointers... > > > This is not really a scipy.distutils issue. A simple answer to your > question would be building python with icc. > Standard distutils has limited support switching C compilers, see > `setup.py build_ext --help`. It is also possible to implement support for > your own c compiler in scipy.distutils but first I would suggest following > the simple answer. OK, thanks for the info. I'd like to understand something, though. Once you homebrew this icc-based python, don't you end up forced to maintain off-distro ALL python packages you want to use? This solution would require building a parallel python, losing most of the benefits of easy package handling in a distro, etc. As we've seen, all that's needed is to fix the link calls. If that is done, one could use the base platform python for everything, and only build numerically critical code with icc. I really don't look forward to the idea that using icc for scipy will force people to build wxpython, pygtk, pyvtk and other similarly pleasant-to-build packages ALL from source, and maintain a fully separate tree. For now, a more viable solution (for me) seems to be just to use gcc and lose some of the benefits of icc on this architecture. But I think it would be great to be able to offer users the ability to use the C compiler of their choice for scipy. 
From looking at the code, I wonder if it's just a matter of overriding the link() method in ccompiler, to honor the CC env. variable (as a start). I may have a go at it, at least out of curiosity... Just as an FYI, with the following two manual link steps: phillips[newcore]> icc -pthread -shared build/temp.linux-ia64-2.3/scipy/base/src/multiarraymodule.o -o build/lib.linux-ia64-2.3/scipy/base/multiarray.so phillips[newcore]> icc -pthread -shared build/temp.linux-ia64-2.3/build/src/scipy/base/src/umathmodule.o -o build/lib.linux-ia64-2.3/scipy/base/umath.so I can get scipy newcore to pass all tests on this Itanium2 box: In [4]: scipy.test(10,10) ... ---------------------------------------------------------------------- Ran 140 tests in 2.025s OK One more 64-bit architecture where things look good, thanks to Travis and Arnd's relentless work! Unfortunately for now I won't be able to test full scipy here, as the number of similar manual link steps for the full one is enormous. Anyway, many thanks for clarifying the issue. Cheers, f From Fernando.Perez at colorado.edu Thu Nov 3 01:42:03 2005 From: Fernando.Perez at colorado.edu (Fernando Perez) Date: Wed, 02 Nov 2005 23:42:03 -0700 Subject: [SciPy-dev] Building newcore with the intel compilers on Itanium2... In-Reply-To: <4369ACDB.1090109@colorado.edu> References: <43699ED5.20905@colorado.edu> <4369A291.9040407@colorado.edu> <4369ACDB.1090109@colorado.edu> Message-ID: <4369B13B.5090007@colorado.edu> Fernando Perez wrote: > From looking at the code, I wonder if it's just a matter of overriding the > link() method in ccompiler, to honor the CC env. variable (as a start). I may > have a go at it, at least out of curiosity... OK, just for informational purposes... 
I overrode the link() method in unixccompiler.py in a similar manner to the others, and just added this hack on top (I just copied the existing method from std.distutils):

def UnixCCompile_link(self, target_desc, objects, output_filename,
                      output_dir=None, libraries=None, library_dirs=None,
                      runtime_library_dirs=None, export_symbols=None,
                      debug=0, extra_preargs=None, extra_postargs=None,
                      build_temp=None, target_lang=None):
    print '* fperez'
    self.linker_so[:] = ['icc','-shared']

With this, it all works fine (meaning, a straight install passes all tests without needing any manual hacking). Now, I understand _perfectly_ that this is not the kind of hack that can go into something permanent, and I don't pretend to understand distutils even a .1% of how much Pearu knows about it. But at least this shows that it's feasible to somehow allow different linker selections for shared library building. Could we perhaps add a --linker option to setup.py? If given, it would simply be used to build the compiler.linker_so call list. This is not the same as implementing the full compiler machinery that exists for Fortran compilers, but perhaps this is a case where 'good enough beats perfect'. It would allow users to link using icc and keep their regular python, with all the attending benefits of that. Since icc is explicitly known to be strongly binary compatible with gcc (quoting the icc manpage):

gcc* Interoperability
C language object files created with the Intel C++ Compiler are binary compatible with the GNU* gcc* compiler and glibc, the GNU C language library. This binary compatibility results in the following key features:
o C objects generated by icc are interoperable with C objects generated by gcc.
o C++ objects generated by icpc using the -cxxlib-gcc option are interoperable with C++ objects generated by g++. This means that third-party C++ libraries built by gcc++ will work with C++ objects generated by the Intel C++ Compiler 8.0.
o C++ objects generated by icpc without using the -cxxlib-gcc option are not guaranteed to be interoperable with C++ objects generated by g++.
o Preprocessor macros predefined by gcc are also predefined by icc.

The Intel C++ Compiler 8.0 has made significant improvements towards interoperability and compatibility with the GNU gcc compiler. See the Intel C++ Compiler User's Guide for more information.

this might be a reasonable compromise solution. If those who know distutils better than me approve, I'm even willing to do the implementation work (for a simple approach, not for building a full machinery like the FCompiler system). I figure I should at least do something useful :) Cheers, f From rkern at ucsd.edu Thu Nov 3 02:00:33 2005 From: rkern at ucsd.edu (Robert Kern) Date: Wed, 02 Nov 2005 23:00:33 -0800 Subject: [SciPy-dev] Building newcore with the intel compilers on Itanium2... In-Reply-To: <4369B13B.5090007@colorado.edu> References: <43699ED5.20905@colorado.edu> <4369A291.9040407@colorado.edu> <4369ACDB.1090109@colorado.edu> <4369B13B.5090007@colorado.edu> Message-ID: <4369B591.8010704@ucsd.edu> Fernando Perez wrote: > If those who know distutils > better than me approve, I'm even willing to do the implementation work (for a > simple approach, not for building a full machinery like the FCompiler system). > I figure I should at least do something useful :) Probably the best thing to do would be to make an ICCCompiler which is a subclass of UnixCCompiler. Be sure to register it with the dictionary distutils.ccompiler.compiler_class . Then the user can set the compiler with --compiler=icc . -- Robert Kern rkern at ucsd.edu "In the fields of hell where the grass grows high Are the graves of dreams allowed to die." -- Richard Harter From pearu at scipy.org Thu Nov 3 02:29:37 2005 From: pearu at scipy.org (Pearu Peterson) Date: Thu, 3 Nov 2005 01:29:37 -0600 (CST) Subject: [SciPy-dev] Building newcore with the intel compilers on Itanium2...
In-Reply-To: <4369B591.8010704@ucsd.edu> References: <43699ED5.20905@colorado.edu> <4369A291.9040407@colorado.edu> <4369ACDB.1090109@colorado.edu> <4369B13B.5090007@colorado.edu> <4369B591.8010704@ucsd.edu> Message-ID: On Wed, 2 Nov 2005, Robert Kern wrote: > Fernando Perez wrote: >> If those who know distutils >> better than me approve, I'm even willing to do the implementation work (for a >> simple approach, not for building a full machinery like the FCompiler system). >> I figure I should at least do something useful :) > > Probably the best thing to do would be to make an ICCCompiler which is a > subclass of UnixCCompiler. Be sure to register it with the dictionary > distutils.ccompiler.compiler_class . Then the user can set the compiler > with --compiler=icc . Right. I am creating the ICCCompiler at the moment and I'll let you know when you could test it. Pearu From nwagner at mecha.uni-stuttgart.de Thu Nov 3 02:35:30 2005 From: nwagner at mecha.uni-stuttgart.de (Nils Wagner) Date: Thu, 03 Nov 2005 08:35:30 +0100 Subject: [SciPy-dev] Temporary failure in name resolution (http://svn.scipy.org) Message-ID: <4369BDC2.4090501@mecha.uni-stuttgart.de> svn: PROPFIND request failed on '/svn/scipy/branches/newscipy' svn: PROPFIND of '/svn/scipy/branches/newscipy': Could not resolve hostname `svn.scipy.org': Temporary failure in name resolution (http://svn.scipy.org) From Fernando.Perez at colorado.edu Thu Nov 3 04:00:45 2005 From: Fernando.Perez at colorado.edu (Fernando Perez) Date: Thu, 03 Nov 2005 02:00:45 -0700 Subject: [SciPy-dev] Building newcore with the intel compilers on Itanium2... 
In-Reply-To: References: <43699ED5.20905@colorado.edu> <4369A291.9040407@colorado.edu> <4369ACDB.1090109@colorado.edu> <4369B13B.5090007@colorado.edu> <4369B591.8010704@ucsd.edu> Message-ID: <4369D1BD.4000906@colorado.edu> Pearu Peterson wrote: > > On Wed, 2 Nov 2005, Robert Kern wrote: >>Probably the best thing to do would be to make an ICCCompiler which is a >>subclass of UnixCCompiler. Be sure to register it with the dictionary >>distutils.ccompiler.compiler_class . Then the user can set the compiler >>with --compiler=icc . > > > Right. I am creating the ICCCompiler at the moment and I'll let you know > when you could test it. Great! Many thanks. In the meantime, I tried building the full scipy with icc, but cephes/const.c is giving me an error:

Lib/special/cephes/const.c(92): error: floating-point operation result is out of range
double INFINITY = 1.0/0.0; /* 99e999; */
^
Lib/special/cephes/const.c(97): error: floating-point operation result is out of range
double NAN = 1.0/0.0 - 1.0/0.0;
^
compilation aborted for Lib/special/cephes/const.c (code 2)

Since I'm not terribly familiar with icc, I'll wait for help from Andrew locally tomorrow. It may well be a simple command-line switch issue or some other stupid error on my part (though the manpage doesn't give me any pointers yet). Need to sleep now. And thanks for doing the implementation work on ICCCompiler, I was honestly willing to give it a go :) Cheers, f From pearu at scipy.org Thu Nov 3 03:02:22 2005 From: pearu at scipy.org (Pearu Peterson) Date: Thu, 3 Nov 2005 02:02:22 -0600 (CST) Subject: [SciPy-dev] Building newcore with the intel compilers on Itanium2... In-Reply-To: References: <43699ED5.20905@colorado.edu> <4369A291.9040407@colorado.edu> <4369ACDB.1090109@colorado.edu> <4369B13B.5090007@colorado.edu> <4369B591.8010704@ucsd.edu> Message-ID: I have committed Intel C compiler (untested) support to newcore. It consists of an intelccompiler.py file and a few lines in ccompiler.py.
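The overall shape of such support, following Robert Kern's suggestion of subclassing UnixCCompiler and registering it, might look roughly like this. A toy sketch only: the dict here stands in for distutils.ccompiler.compiler_class (which maps a --compiler=NAME choice to a (module, class, description) tuple), and the class details are illustrative rather than the actual intelccompiler.py code:

```python
# Stand-in for distutils.ccompiler.compiler_class.
compiler_class = {
    'unix': ('unixccompiler', 'UnixCCompiler',
             'standard UNIX-style compiler'),
}

class UnixCCompiler:
    compiler_type = 'unix'
    def __init__(self):
        self.linker_so = ['cc', '-shared']

class IntelItaniumCCompiler(UnixCCompiler):
    """Illustrative Intel compiler class selected via --compiler=intele."""
    compiler_type = 'intele'
    cc_exe = 'icc'
    def __init__(self):
        super().__init__()
        # Use icc both for compiling and for linking shared objects,
        # which is the whole point of the exercise in this thread.
        self.linker_so = [self.cc_exe, '-shared']

# Registering makes the new name available to the build commands.
compiler_class['intele'] = ('intelccompiler', 'IntelItaniumCCompiler',
                            'Intel C compiler for Itanium')
print(compiler_class['intele'][1])  # IntelItaniumCCompiler
```

Once registered, the existing --compiler machinery does the rest; no per-command hacks like the overridden link() method earlier in the thread are needed.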
So, to build newcore with the intele compiler, use

python setup.py config --compiler=intele build_ext --compiler=intele build

To build newscipy, use

python setup.py config --compiler=intele --fcompiler=intele \
build_ext --compiler=intele --fcompiler=intele \
build_clib --compiler=intele --fcompiler=intele build

To shorten these command lines, cc_config needs to be implemented similar to fc_config in scipy/distutils/command/config_compiler.py Pearu From arnd.baecker at web.de Thu Nov 3 05:11:06 2005 From: arnd.baecker at web.de (Arnd Baecker) Date: Thu, 3 Nov 2005 11:11:06 +0100 (CET) Subject: [SciPy-dev] Building newcore with the intel compilers on Itanium2... In-Reply-To: <4369ACDB.1090109@colorado.edu> References: <43699ED5.20905@colorado.edu> <4369A291.9040407@colorado.edu> <4369ACDB.1090109@colorado.edu> Message-ID: Hi Fernando, On Wed, 2 Nov 2005, Fernando Perez wrote: > For now, a more viable solution (for me) seems to be just to use gcc and lose > some of the benefits of icc on this architecture. But I think it would be > great to be able to offer users the ability to use the C compiler of their > choice for scipy. Just out of curiosity: how big are the benefits of icc? (The computing center here in Dresden is just installing the first parts of 1500 Itanium2s and so we will very much benefit from your pioneering work on Itanium ...). Do you also have the Intel Math Kernel Library http://www.intel.com/cd/software/products/asmo-na/eng/perflib/mkl/index.htm ? It would be very interesting to see how much performance increase one can get in practice when using their Linear Algebra, FFT, vector math, ... compared to ATLAS, fftw, etc.).
For example, for fft it seems to be really good: http://www.intel.com/cd/software/products/asmo-na/eng/perflib/mkl/219662.htm And also for atlas (especially multiprocessor support): http://www.intel.com/cd/software/products/asmo-na/eng/perflib/mkl/219823.htm ((No - I don't have stock options with them ;-)) > I can get scipy newcore to pass all tests on this Itanium2 box: > > In [4]: scipy.test(10,10) > ... > ---------------------------------------------------------------------- > Ran 140 tests in 2.025s > > OK > > One more 64-bit architecture where things look good, thanks to Travis and > Arnd's relentless work! Excellent! ((Remark: even if all the unit tests pass, it does not mean that code which used Numeric directly works with newcore - we just had such a case yesterday, where it should have worked ... so far we have not yet managed to isolate the problem.)) Best, Arnd From nwagner at mecha.uni-stuttgart.de Thu Nov 3 05:16:43 2005 From: nwagner at mecha.uni-stuttgart.de (Nils Wagner) Date: Thu, 03 Nov 2005 11:16:43 +0100 Subject: [SciPy-dev] AttributeError: __array_offset__ Message-ID: <4369E38B.5040101@mecha.uni-stuttgart.de> ====================================================================== ERROR: check_djbfft (scipy.fftpack.basic.test_basic.test_fft) ---------------------------------------------------------------------- Traceback (most recent call last): File "/usr/local/lib/python2.4/site-packages/scipy/fftpack/tests/test_basic.py", line 133, in check_djbfft assert_array_almost_equal(y,y2) File "/usr/local/lib/python2.4/site-packages/scipy/test/testing.py", line 718, in assert_array_almost_equal y = asarray(y) File "/usr/local/lib/python2.4/site-packages/scipy/base/numeric.py", line 71, in asarray return array(a, dtype, copy=False, fortran=fortran) AttributeError: __array_offset__ '0.4.3.1426' From cimrman3 at ntc.zcu.cz Thu Nov 3 06:07:32 2005 From: cimrman3 at ntc.zcu.cz (Robert Cimrman) Date: Thu, 03 Nov 2005 12:07:32 +0100 Subject: [SciPy-dev]
logical array TypeError Message-ID: <4369EF74.1030901@ntc.zcu.cz> In [2]:a = scipy.rand(10 ) In [3]:a Out[3]: array([ 0.70999964, 0.33431269, 0.64182382, 0.5041205 , 0.58857909, 0.83947773, 0.4660244 , 0.72767477, 0.73517386, 0.66661387]) In [4]:b = a > 0.2 In [5]:b Out[5]:array([True, True, True, True, True, True, True, True, True, True], dtype=bool) In [6]:b.dtype() --------------------------------------------------------------------------- exceptions.TypeError Traceback (most recent call last) TypeError: function takes exactly 1 argument (0 given) In [7]:a.dtype() Out[7]:0.0 From oliphant at ee.byu.edu Thu Nov 3 10:54:27 2005 From: oliphant at ee.byu.edu (Travis Oliphant) Date: Thu, 03 Nov 2005 08:54:27 -0700 Subject: [SciPy-dev] logical array TypeError In-Reply-To: <4369EF74.1030901@ntc.zcu.cz> References: <4369EF74.1030901@ntc.zcu.cz> Message-ID: <436A32B3.6040707@ee.byu.edu> Robert Cimrman wrote: >In [2]:a = scipy.rand(10 ) >In [3]:a >Out[3]: >array([ 0.70999964, 0.33431269, 0.64182382, 0.5041205 , 0.58857909, > 0.83947773, 0.4660244 , 0.72767477, 0.73517386, 0.66661387]) > >In [4]:b = a > 0.2 >In [5]:b >Out[5]:array([True, True, True, True, True, True, True, True, True, >True], dtype=bool) > >In [6]:b.dtype() >--------------------------------------------------------------------------- >exceptions.TypeError Traceback (most >recent call last) > > b.dtype (no parenthesis) is what you are looking for. Attributes are different from methods. b.dtype() is trying to call the bool_ array scalar type object (which is the b.dtype attribute) thus it needs an argument. 
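The attribute-versus-call distinction is easy to demonstrate with a plain Python toy (FakeArray is invented for illustration; it only mimics the behaviour described above, not the scipy API):

```python
class FakeArray:
    """Toy object whose .dtype attribute holds a scalar *type* object."""
    def __init__(self, scalar_type):
        self.dtype = scalar_type

b = FakeArray(bool)
print(b.dtype)     # the type object itself: <class 'bool'>
print(b.dtype(1))  # appending () *calls* the type object: True
```

In other words, b.dtype is data (a type object that happens to be callable); b.dtype() is a constructor call on that data, which is why it complains about arguments rather than returning the dtype.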
-Travis From oliphant at ee.byu.edu Thu Nov 3 11:07:09 2005 From: oliphant at ee.byu.edu (Travis Oliphant) Date: Thu, 03 Nov 2005 09:07:09 -0700 Subject: [SciPy-dev] AttributeError: __array_offset__ In-Reply-To: <4369E38B.5040101@mecha.uni-stuttgart.de> References: <4369E38B.5040101@mecha.uni-stuttgart.de> Message-ID: <436A35AD.6090400@ee.byu.edu> Nils Wagner wrote: >====================================================================== >ERROR: check_djbfft (scipy.fftpack.basic.test_basic.test_fft) >---------------------------------------------------------------------- >Traceback (most recent call last): > File >"/usr/local/lib/python2.4/site-packages/scipy/fftpack/tests/test_basic.py", >line 133, in check_djbfft > assert_array_almost_equal(y,y2) > File "/usr/local/lib/python2.4/site-packages/scipy/test/testing.py", >line 718, in assert_array_almost_equal > y = asarray(y) > File "/usr/local/lib/python2.4/site-packages/scipy/base/numeric.py", >line 71, in asarray > return array(a, dtype, copy=False, fortran=fortran) >AttributeError: __array_offset__ > > > > Hmm. I see where this error could come from (I was not clearing the error if the __array_offset__ was not present in the array protocol consumer code). What I don't see is why this function was calling the array protocol --- oh, wait I guess we are really testing against the old Numeric and FFT functions (if they are present). I did not realize that. I suppose that is not a bad idea. That would definitely require the array interface to convert the Numeric arrays to scipy ones. At any rate, I've fixed the problem.
-Travis From cimrman3 at ntc.zcu.cz Thu Nov 3 11:16:26 2005 From: cimrman3 at ntc.zcu.cz (Robert Cimrman) Date: Thu, 03 Nov 2005 17:16:26 +0100 Subject: [SciPy-dev] logical array TypeError In-Reply-To: <436A32B3.6040707@ee.byu.edu> References: <4369EF74.1030901@ntc.zcu.cz> <436A32B3.6040707@ee.byu.edu> Message-ID: <436A37DA.9060409@ntc.zcu.cz> Travis Oliphant wrote: > Robert Cimrman wrote: > >>In [4]:b = a > 0.2 >>In [5]:b >>Out[5]:array([True, True, True, True, True, True, True, True, True, >>True], dtype=bool) >> >>In [6]:b.dtype() >>--------------------------------------------------------------------------- >>exceptions.TypeError Traceback (most >>recent call last) >> >> > > b.dtype (no parenthesis) is what you are looking for. > > Attributes are different from methods. > > b.dtype() is trying to call the bool_ array scalar type object (which is > the b.dtype attribute) thus it needs an argument. Thanks! I was fooled by the fact that a.dtype() did not raise an exception for a non-bool array. r. From Fernando.Perez at colorado.edu Thu Nov 3 11:42:01 2005 From: Fernando.Perez at colorado.edu (Fernando Perez) Date: Thu, 03 Nov 2005 09:42:01 -0700 Subject: [SciPy-dev] Building newcore with the intel compilers on Itanium2... In-Reply-To: References: <43699ED5.20905@colorado.edu> <4369A291.9040407@colorado.edu> <4369ACDB.1090109@colorado.edu> Message-ID: <436A3DD9.8010300@colorado.edu> Hey Arnd, Arnd Baecker wrote: > Just out of curiosity: how big are the benefits of icc? > (The computing center here in Dresden is just > installing the first parts of 1500 ItaniumR2 > and so we will very much benifit from your pioneering work on > Itanium ...). > Do you also have the Intel Math Kernel Library > http://www.intel.com/cd/software/products/asmo-na/eng/perflib/mkl/index.htm > ? > It would be very interesting to see how much performance > increase one can get in practice when using their > Linear Algebra, FFT, vector math, ... > compared to ATLAS, fftw, etc.). 
> For example, for fft it seems to be really good: > http://www.intel.com/cd/software/products/asmo-na/eng/perflib/mkl/219662.htm > And also for atlas (especially multiprocesssor support): > http://www.intel.com/cd/software/products/asmo-na/eng/perflib/mkl/219823.htm > > ((No - I don't have stock options with them ;-)) Well, I was really just helping Andrew (a colleague and part of my 'convert or exterminate the matlab users' crusade ;), so I'll let him answer in more detail. On my own codes, a while back I did some tests with icc/ifort and didn't notice improvements that were worthwhile enough to deal with the hassles. Going with gcc is just the path of least resistance, especially back in the intel 7.0 days which had far less gcc compatibility than they do now. > ((Remark: even if all the unit tests pass, it does > not mean that code which used Numeric directly works with newcore > - we just had such a case yesterday, where it should have worked ... > so far we have not yet managed to isolate the problem.)) Good point. I haven't converted real code yet, so far just testing builds and so forth. Cheers, f From Fernando.Perez at colorado.edu Thu Nov 3 13:45:02 2005 From: Fernando.Perez at colorado.edu (Fernando Perez) Date: Thu, 03 Nov 2005 11:45:02 -0700 Subject: [SciPy-dev] A stray pointer somwehere... Message-ID: <436A5AAE.3010006@colorado.edu> Hi all, I can consistently reproduce a segfault by doing the following at an interactive prompt (only newcore is installed, not full newscipy, current svn - rev 1429): import scipy scipy.test() scipy.test(10,10) scipy.test(10,10) scipy.test(10,10) scipy.test(10,10) scipy.test(10,10) Quit the interpreter -> segfault. This is on the Itanium2 boxes, icc build. Note that the segfault only occurs upon quitting python, and it only happens if I run the test(10,10) _many times_ in a row. Just running it once or twice doesn't seem to produce the problem. 
I know this can be very hard to track, I just leave it here for now to have it reported. I'm working on the icc build still... Cheers, f From oliphant at ee.byu.edu Thu Nov 3 15:23:35 2005 From: oliphant at ee.byu.edu (Travis Oliphant) Date: Thu, 03 Nov 2005 13:23:35 -0700 Subject: [SciPy-dev] A stray pointer somwehere... In-Reply-To: <436A5AAE.3010006@colorado.edu> References: <436A5AAE.3010006@colorado.edu> Message-ID: <436A71C7.1080904@ee.byu.edu> Fernando Perez wrote: >Hi all, > >I can consistently reproduce a segfault by doing the following at an >interactive prompt (only newcore is installed, not full newscipy, current svn >- rev 1429): > >import scipy >scipy.test() >scipy.test(10,10) >scipy.test(10,10) >scipy.test(10,10) >scipy.test(10,10) >scipy.test(10,10) > >Quit the interpreter -> segfault. > >This is on the Itanium2 boxes, icc build. > > Can anybody else reproduce this or is it just on 64-bit boxes. I can't get the problem to show up on 32-bit box. I suspect a reference count issue or an uninitialized PyObject variable. It might help if where the segfault is occurring is tracked down (since you are running test(10,10) you should be able to see which tests are failing. Is the segfault happening in the same place all the time? -Travis From chanley at stsci.edu Thu Nov 3 15:33:09 2005 From: chanley at stsci.edu (Christopher Hanley) Date: Thu, 03 Nov 2005 15:33:09 -0500 Subject: [SciPy-dev] A stray pointer somwehere... In-Reply-To: <436A71C7.1080904@ee.byu.edu> References: <436A5AAE.3010006@colorado.edu> <436A71C7.1080904@ee.byu.edu> Message-ID: <436A7405.6010004@stsci.edu> Travis Oliphant wrote: > Can anybody else reproduce this or is it just on 64-bit boxes. I can't > get the problem to show up on 32-bit box. > > I suspect a reference count issue or an uninitialized PyObject variable. > It might help if where the segfault is occurring is tracked down (since > you are running test(10,10) you should be able to see which tests are > failing.
Is the segfault happening in the same place all the time? > > -Travis > I can reproduce this error on my 32-bit Redhat box. It occurs in the 17th iteration of scipy.test(10,10) with the failure occurring at:

check_default_1 (scipy.base.type_check.test_type_check.test_mintypecode) ... ok
check_default_2 (scipy.base.type_check.test_type_check.test_mintypecode) ... ok
check_default_3 (scipy.base.type_check.test_type_check.test_mintypecode) ... ok
check_complex_bad (scipy.base.type_check.test_type_check.test_nan_to_num) ... ok
check_complex_bad2 (scipy.base.type_check.test_type_check.test_nan_to_num) ... Segmentation fault (core dumped)

From Fernando.Perez at colorado.edu Thu Nov 3 15:38:45 2005 From: Fernando.Perez at colorado.edu (Fernando Perez) Date: Thu, 03 Nov 2005 13:38:45 -0700 Subject: [SciPy-dev] A stray pointer somwehere... In-Reply-To: <436A7405.6010004@stsci.edu> References: <436A5AAE.3010006@colorado.edu> <436A71C7.1080904@ee.byu.edu> <436A7405.6010004@stsci.edu> Message-ID: <436A7555.30809@colorado.edu> Christopher Hanley wrote: > > Travis Oliphant wrote: > > >>Can anybody else reproduce this or is it just on 64-bit boxes. I can't >>get the problem to show up on 32-bit box. >> >>I suspect a reference count issue or an uninitialized PyObject variable. >>It might help if where the segfault is occurring is tracked down (since >>you are running test(10,10) you should be able to see which tests are >>failing. Is the segfault happening in the same place all the time? >> >>-Travis >> > > > I can reproduce this error on my 32-bit Redhat box. It occurs in the 17th > iteration of scipy.test(10,10) with the failure occurring at: Actually in my case it's different: all the tests pass _always_, it segfaults only when I exit the python shell (tested both with ipython and with the default interpreter): ... run several times test(10,10) check_nd (scipy.base.index_tricks.test_index_tricks.test_grid) ...
ok check_1 (scipy.distutils.misc_util.test_misc_util.test_appendpath) ... ok check_2 (scipy.distutils.misc_util.test_misc_util.test_appendpath) ... ok check_3 (scipy.distutils.misc_util.test_misc_util.test_appendpath) ... ok ---------------------------------------------------------------------- Ran 140 tests in 0.291s OK Out[6]: In [7]: ### EXIT HERE via Ctrl-D Segmentation fault So the tests actually pass, but they corrupt something inside the python interpreter's state, so it segfaults on exit, while cleaning up. Cheers, f From chanley at stsci.edu Thu Nov 3 15:51:04 2005 From: chanley at stsci.edu (Christopher Hanley) Date: Thu, 03 Nov 2005 15:51:04 -0500 Subject: [SciPy-dev] A stray pointer somwehere... In-Reply-To: <436A7555.30809@colorado.edu> References: <436A5AAE.3010006@colorado.edu> <436A71C7.1080904@ee.byu.edu> <436A7405.6010004@stsci.edu> <436A7555.30809@colorado.edu> Message-ID: <436A7838.9010503@stsci.edu> Fernando Perez wrote: > Actually in my case it's different: all the tests pass _always_, it segfaults > only when I exit the python shell (tested both with ipython and with the > default interpreter): > > ... run several times test(10,10) > > check_nd (scipy.base.index_tricks.test_index_tricks.test_grid) ... ok > check_1 (scipy.distutils.misc_util.test_misc_util.test_appendpath) ... ok > check_2 (scipy.distutils.misc_util.test_misc_util.test_appendpath) ... ok > check_3 (scipy.distutils.misc_util.test_misc_util.test_appendpath) ... ok > > ---------------------------------------------------------------------- > Ran 140 tests in 0.291s > > OK > Out[6]: > > In [7]: ### EXIT HERE via Ctrl-D > > Segmentation fault > > > So the tests actually pass, but they corrupt something inside the python > interpreter's state, so it segfaults on exit, while cleaning up. > > Cheers, > > f Sorry, missed the part about segfault on exit only. Still I'm surprised I could get it to segfault by just looping the scipy.test(10,10) command. 
I have only tested this in the ipython shell. Chris From Fernando.Perez at colorado.edu Thu Nov 3 16:03:42 2005 From: Fernando.Perez at colorado.edu (Fernando Perez) Date: Thu, 03 Nov 2005 14:03:42 -0700 Subject: [SciPy-dev] A stray pointer somwehere... In-Reply-To: <436A7838.9010503@stsci.edu> References: <436A5AAE.3010006@colorado.edu> <436A71C7.1080904@ee.byu.edu> <436A7405.6010004@stsci.edu> <436A7555.30809@colorado.edu> <436A7838.9010503@stsci.edu> Message-ID: <436A7B2E.1070704@colorado.edu> Christopher Hanley wrote: > Sorry, missed the part about segfault on exit only. Still I'm surprised > I could get it to segfault by just looping the scipy.test(10,10) > command. I have only tested this in the ipython shell. Actually it was _my_ bad: I hadn't run the tests enough times. I now scripted it, and it reproducibly happens here: ---------------------------------------------------------------------- Ran 140 tests in 0.273s OK *************************************************************************** ****************************** TEST PASS # 18 ****************************** Found 2 tests for scipy.base.umath Found 24 tests for scipy.base.function_base Found 4 tests for scipy.base.getlimits Found 9 tests for scipy.base.twodim_base Found 3 tests for scipy.base.matrix Found 44 tests for scipy.base.shape_base Found 42 tests for scipy.base.type_check Found 3 tests for scipy.basic.helper Found 3 tests for scipy.distutils.misc_util Found 4 tests for scipy.base.index_tricks Found 0 tests for __main__ Segmentation fault If I increase the verbosity: ---------------------------------------------------------------------- Ran 140 tests in 0.284s OK *************************************************************************** ****************************** TEST PASS # 18 ****************************** /home/phillips/student/fperez/usr/local/lib/python2.3/site-packages/scipy/base/tests/test_scimath.py [...] 
check_default_1 (scipy.base.type_check.test_type_check.test_mintypecode) ... ok
check_default_2 (scipy.base.type_check.test_type_check.test_mintypecode) ... ok
check_default_3 (scipy.base.type_check.test_type_check.test_mintypecode) ... ok
check_complex_bad (scipy.base.type_check.test_type_check.test_nan_to_num) ... ok
check_complex_bad2 (scipy.base.type_check.test_type_check.test_nan_to_num) ... Segmentation fault

Same point you see it at. So yes, the problem is reproducible and it always happens at the same pass.

Here's the crash script for testing:

phillips[~]> cat crash_scipy.py
#!/usr/bin/env python
import scipy

level = 10
verb = 10

for i in range(1,100):
    print '*'*75
    print '*'*30,'TEST PASS #',i,'*'*30
    scipy.test(level,verb)
####

Cheers, f From Fernando.Perez at colorado.edu Thu Nov 3 16:48:25 2005 From: Fernando.Perez at colorado.edu (Fernando Perez) Date: Thu, 03 Nov 2005 14:48:25 -0700 Subject: [SciPy-dev] Building newcore with the intel compilers on Itanium2... In-Reply-To: References: <43699ED5.20905@colorado.edu> <4369A291.9040407@colorado.edu> <4369ACDB.1090109@colorado.edu> <4369B13B.5090007@colorado.edu> <4369B591.8010704@ucsd.edu> Message-ID: <436A85A9.7000704@colorado.edu> Pearu Peterson wrote: > I have committed Intel C compiler (untested) support to newcore. It > consists of an intelccompiler.py file and a few lines in ccompiler.py. > > So, to build newcore with the intele compiler, use > > python setup.py config --compiler=intele build_ext --compiler=intele build OK, many thanks for doing this, I really appreciate it. It didn't quite work out of the box, but I've committed the fixes (r.
1430) and I can now successfully build/install with the following little script:

phillips[scipy_core]> cat make_icc_scicore
#!/bin/sh
# Build core scipy with the Intel compilers for Itanium
rm -rf build/
rm -rf ~/usr/local/lib/python2.3/site-packages/scipy
python setup.py \
config --compiler=intele \
config_fc --fcompiler=intele \
build_ext --compiler=intele \
build | tee icc_build_scicore.log
python setup.py install --prefix=~/usr/local

> To build newscipy, use
>
> python setup.py config --compiler=intele --fcompiler=intele \
> build_ext --compiler=intele --fcompiler=intele \
> build_clib --compiler=intele --fcompiler=intele build

I have an fft-related problem here, but I'll look into it further (I'm pretty sure it's on my side at this point). Will report when success comes.

> To shorten these command lines, cc_config needs to be implemented similar
> to fc_config in scipy/distutils/command/config_compiler.py

OK. I was wondering if it makes sense/is feasible to have the possibility of specifying compiler choices in site.cfg. It would seem nice to me for people to be able to keep a site.cfg around with all their local settings to build off of. That way they can just do a checkout, drop their site.cfg file and hit

python setup.py install

That would be nice, I think. Any reason why it can't be done? Also, should we patch distutils to search for site.cfg in the original setup() caller's directory? Right now, you have to drop it in checkoutdir/scipy/distutils. I tend to prefer, for usability reasons, that user configuration only happens in the top-level dir. Regards, f From oliphant at ee.byu.edu Thu Nov 3 17:36:24 2005 From: oliphant at ee.byu.edu (Travis Oliphant) Date: Thu, 03 Nov 2005 15:36:24 -0700 Subject: [SciPy-dev] A stray pointer somwehere...
In-Reply-To: <436A7B2E.1070704@colorado.edu> References: <436A5AAE.3010006@colorado.edu> <436A71C7.1080904@ee.byu.edu> <436A7405.6010004@stsci.edu> <436A7555.30809@colorado.edu> <436A7838.9010503@stsci.edu> <436A7B2E.1070704@colorado.edu> Message-ID: <436A90E8.5030101@ee.byu.edu> Fernando Perez wrote: >Christopher Hanley wrote: > > > >>Sorry, missed the part about segfault on exit only. Still I'm surprised >>I could get it to segfault by just looping the scipy.test(10,10) >>command. I have only tested this in the ipython shell. >> >> > >Actually it was _my_ bad: I hadn't run the tests enough times. I now scripted >it, and it reproducibly happens here: > >---------------------------------------------------------------------- >Ran 140 tests in 0.273s > >OK >*************************************************************************** >****************************** TEST PASS # 18 ****************************** > Found 2 tests for scipy.base.umath > Found 24 tests for scipy.base.function_base > Found 4 tests for scipy.base.getlimits > Found 9 tests for scipy.base.twodim_base > Found 3 tests for scipy.base.matrix > Found 44 tests for scipy.base.shape_base > Found 42 tests for scipy.base.type_check > Found 3 tests for scipy.basic.helper > Found 3 tests for scipy.distutils.misc_util > Found 4 tests for scipy.base.index_tricks > Found 0 tests for __main__ >Segmentation fault > > >If I increase the verbosity: > >---------------------------------------------------------------------- >Ran 140 tests in 0.284s > >OK >*************************************************************************** >****************************** TEST PASS # 18 ****************************** >/home/phillips/student/fperez/usr/local/lib/python2.3/site-packages/scipy/base/tests/test_scimath.py > >[...] > >check_default_1 (scipy.base.type_check.test_type_check.test_mintypecode) ... ok >check_default_2 (scipy.base.type_check.test_type_check.test_mintypecode) ... 
ok >check_default_3 (scipy.base.type_check.test_type_check.test_mintypecode) ... ok >check_complex_bad (scipy.base.type_check.test_type_check.test_nan_to_num) ... ok >check_complex_bad2 (scipy.base.type_check.test_type_check.test_nan_to_num) ... >Segmentation fault > > >Same point you see it at. So yes, the problem is reproducible and it always >happens at the same pass. > >Here's the crash script for testing: > > Thanks very much for this. I've tracked it to the reference count on the array scalars going down by one on each test loop. So, the array scalars object is not being INCREF'd somewhere where it should be. I should now be able to find it. -Travis From oliphant at ee.byu.edu Thu Nov 3 21:12:08 2005 From: oliphant at ee.byu.edu (Travis Oliphant) Date: Thu, 03 Nov 2005 19:12:08 -0700 Subject: [SciPy-dev] A stray pointer somwehere... In-Reply-To: <436A7B2E.1070704@colorado.edu> References: <436A5AAE.3010006@colorado.edu> <436A71C7.1080904@ee.byu.edu> <436A7405.6010004@stsci.edu> <436A7555.30809@colorado.edu> <436A7838.9010503@stsci.edu> <436A7B2E.1070704@colorado.edu> Message-ID: <436AC378.3080409@ee.byu.edu> Fernando Perez wrote: >Here's the crash script for testing: > >phillips[~]> cat crash_scipy.py >#!/usr/bin/env python >import scipy > >level = 10 >verb = 10 > >for i in range(1,100): > print '*'*75 > print '*'*30,'TEST PASS #',i,'*'*30 > scipy.test(level,verb) > > > Not a bad script for testing long term effects. I've found the problem. There were two: a scalar array object was not being incref'd when retrieved from a.dtype. Also, and more relevant to this issue was that .put for OBJECT arrays was not holding on to a reference to the objects placed in the array (nor was it DECREF'ing the current contents). This bug is in Numeric as well, but was never discovered -- the put and putmask code came directly from Numeric.
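The ownership rule behind the .put fix can be modeled in a few lines of plain Python, with an explicit counter standing in for the CPython Py_INCREF/Py_DECREF calls (a simplified sketch for illustration, not the actual scipy core code):

```python
class Obj:
    """Stand-in for a PyObject with an explicit reference count."""
    def __init__(self):
        self.refcount = 1   # the caller's own reference

def put(array, index, item):
    # A correct object-array "put": take a reference to the stored
    # item *and* drop the reference to whatever the slot held before.
    # The bug described in the thread skipped both steps.
    item.refcount += 1            # models Py_INCREF(item)
    old = array[index]
    if old is not None:
        old.refcount -= 1         # models Py_DECREF(old)
    array[index] = item

a, b = Obj(), Obj()
cell = [None]
put(cell, 0, a)   # a now held by caller and array: refcount 2
put(cell, 0, b)   # b rises to 2, a falls back to 1
print(a.refcount, b.refcount)  # 1 2
```

The invariant: every reference the array holds was explicitly taken, and every reference it gives up is explicitly released. Skipping either step produces exactly the kind of slow refcount drift that only showed up after many test-loop iterations.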
-Travis From Fernando.Perez at colorado.edu Thu Nov 3 22:31:21 2005 From: Fernando.Perez at colorado.edu (Fernando Perez) Date: Thu, 03 Nov 2005 20:31:21 -0700 Subject: [SciPy-dev] A stray pointer somewhere... In-Reply-To: <436AC378.3080409@ee.byu.edu> References: <436A5AAE.3010006@colorado.edu> <436A71C7.1080904@ee.byu.edu> <436A7405.6010004@stsci.edu> <436A7555.30809@colorado.edu> <436A7838.9010503@stsci.edu> <436A7B2E.1070704@colorado.edu> <436AC378.3080409@ee.byu.edu> Message-ID: <436AD609.7070504@colorado.edu> Travis Oliphant wrote: > Not a bad script for testing long term effects. I've found the > problem. There were two: a scalar array object was not being incref'd > when retrieved from a.dtype. Also, and more relevant to this issue > was that .put for OBJECT arrays was not holding on to a reference to the > objects placed in the array (nor was it DECREF'ing the current contents). > > This bug is in Numeric as well, but was never discovered -- the put and > putmask code came directly from Numeric. I'm glad to report that your fix checks here as well (on the itanium2 box): ---------------------------------------------------------------------- Ran 134 tests in 0.285s OK The tests were run 50 times. I've attached a slightly fancier version of the test script, in case you want to keep it around and occasionally run it. You can set the verbosity at the command line if something out of the ordinary happens. Thanks for tracking this down so fast! Cheers, f -------------- next part -------------- A non-text attachment was scrubbed... Name: scipy_test_spin.py Type: application/x-python Size: 799 bytes Desc: not available URL: From stephen.walton at csun.edu Thu Nov 3 23:41:54 2005 From: stephen.walton at csun.edu (Stephen Walton) Date: Thu, 03 Nov 2005 20:41:54 -0800 Subject: [SciPy-dev] No scipy tests? Message-ID: <436AE692.1090504@csun.edu> I just synced up my copy of newcore and newscipy to versions 1433 and 1418, respectively.
Now scipy.test() only runs the 134 tests which are in scipy.base. From oliphant at ee.byu.edu Fri Nov 4 01:51:11 2005 From: oliphant at ee.byu.edu (Travis Oliphant) Date: Thu, 03 Nov 2005 23:51:11 -0700 Subject: [SciPy-dev] Final release of Numeric is actually 24.1 Message-ID: <436B04DF.9050005@ee.byu.edu> I made a final release of Numeric (24.1). This is "really" the last release of Numeric ;-) I did it because of David Cooke's excellent array_protocol enhancement of __array_struct__. I wanted to make sure the final Numeric fully supported the array protocol including the "faster version." I also tracked down a bug today in scipy core that was inherited from Numeric. A part of me wanted to not fix the bug in Numeric so that people who need a stable platform will move to scipy core :-) But the better part of me won, and I fixed the problem and made a new Numeric release. I do hope people are encouraged to move toward scipy core, however. It is stabilizing quite rapidly. All of scipy now builds with the new scipy core, and all of (full) scipy's 1300 tests pass for both 32-bit and 64-bit systems. I will be making a release of scipy_core (probably called version 0.6 this weekend). It is still missing finished records.py, ma.py, and chararray.py modules (being worked on). When these are available I will make release of scipy core version 1.0 Best regards, -Travis From nwagner at mecha.uni-stuttgart.de Fri Nov 4 02:32:36 2005 From: nwagner at mecha.uni-stuttgart.de (Nils Wagner) Date: Fri, 04 Nov 2005 08:32:36 +0100 Subject: [SciPy-dev] Final release of Numeric is actually 24.1 In-Reply-To: <436B04DF.9050005@ee.byu.edu> References: <436B04DF.9050005@ee.byu.edu> Message-ID: <436B0E94.1040708@mecha.uni-stuttgart.de> Travis Oliphant wrote: >I made a final release of Numeric (24.1). This is "really" the last >release of Numeric ;-) > >I did it because of David Cooke's excellent array_protocol enhancement >of __array_struct__. 
I wanted to make sure the final Numeric fully >supported the array protocol including the "faster version." > >I also tracked down a bug today in scipy core that was inherited from >Numeric. A part of me wanted to not fix the bug in Numeric so that >people who need a stable platform will move to scipy core :-) But the >better part of me won, and I fixed the problem and made a new Numeric >release. > >I do hope people are encouraged to move toward scipy core, however. It >is stabilizing quite rapidly. All of scipy now builds with the new >scipy core, and all of (full) scipy's 1300 tests pass for both 32-bit >and 64-bit systems. I will be making a release of scipy_core (probably >called version 0.6 this weekend). It is still missing finished >records.py, ma.py, and chararray.py modules (being worked on). When >these are available I will make release of scipy core version 1.0 > >Best regards, > >-Travis > > >_______________________________________________ >Scipy-dev mailing list >Scipy-dev at scipy.net >http://www.scipy.net/mailman/listinfo/scipy-dev > Hi Travis, I cannot confirm that all tests pass (at least when ATLAS is n o t available). 
I still get these nasty failure/error messages ====================================================================== ERROR: check_nils (scipy.linalg.matfuncs.test_matfuncs.test_logm) ---------------------------------------------------------------------- Traceback (most recent call last): File "/usr/local/lib/python2.4/site-packages/scipy/linalg/tests/test_matfuncs.py", line 82, in check_nils logm((identity(7)*3.1+0j)-a) File "/usr/local/lib/python2.4/site-packages/scipy/linalg/matfuncs.py", line 232, in logm errest = norm(expm(F)-A,1) / norm(A,1) File "/usr/local/lib/python2.4/site-packages/scipy/linalg/basic.py", line 255, in norm x = asarray_chkfinite(x) File "/usr/local/lib/python2.4/site-packages/scipy/base/function_base.py", line 211, in asarray_chkfinite raise ValueError, "array must not contain infs or NaNs" ValueError: array must not contain infs or NaNs ====================================================================== FAIL: check_nils (scipy.linalg.matfuncs.test_matfuncs.test_signm) ---------------------------------------------------------------------- Traceback (most recent call last): File "/usr/local/lib/python2.4/site-packages/scipy/linalg/tests/test_matfuncs.py", line 44, in check_nils assert_array_almost_equal(r,cr) File "/usr/local/lib/python2.4/site-packages/scipy/test/testing.py", line 735, in assert_array_almost_equal assert cond,\ AssertionError: Arrays are not almost equal (mismatch 100.0%): Array 1: [[ 1.2104571e+01 -2.3056788e-02j -1.0034348e+00 -1.8445430e-01j 1.5239715e+01 +1.1528394e-02j 2.1808571e+01 -2.3... Array 2: [[ 11.9493333 -2.2453333 15.3173333 21.6533333 -2.2453333] [ -3.8426667 0.4986667 -4.5906667 -7.1866667 0.498... ---------------------------------------------------------------------- Ran 1319 tests in 7.417s FAILED (failures=1, errors=1) And how about this message >>import scipy ... 
Importing signal to scipy Failed to import signal cannot import name comb Nils From oliphant at ee.byu.edu Fri Nov 4 02:39:12 2005 From: oliphant at ee.byu.edu (Travis Oliphant) Date: Fri, 04 Nov 2005 00:39:12 -0700 Subject: [SciPy-dev] Final release of Numeric is actually 24.1 In-Reply-To: <436B0E94.1040708@mecha.uni-stuttgart.de> References: <436B04DF.9050005@ee.byu.edu> <436B0E94.1040708@mecha.uni-stuttgart.de> Message-ID: <436B1020.2050607@ee.byu.edu> Nils Wagner wrote: >Hi Travis, > >I cannot confirm that all tests pass (at least when ATLAS is n o t >available). > > Perhaps you could help us track these problems down. I think you are familiar enough with the system to at least localise the problem, and write a simple script that shows the failure. That's what I would have to do to fix it. From there you could investigate further and determine what is causing the error. Thanks for any help, -Travis From nwagner at mecha.uni-stuttgart.de Fri Nov 4 03:03:25 2005 From: nwagner at mecha.uni-stuttgart.de (Nils Wagner) Date: Fri, 04 Nov 2005 09:03:25 +0100 Subject: [SciPy-dev] Final release of Numeric is actually 24.1 In-Reply-To: <436B1020.2050607@ee.byu.edu> References: <436B04DF.9050005@ee.byu.edu> <436B0E94.1040708@mecha.uni-stuttgart.de> <436B1020.2050607@ee.byu.edu> Message-ID: <436B15CD.5070205@mecha.uni-stuttgart.de> Travis Oliphant wrote: >Nils Wagner wrote: > > >>Hi Travis, >> >>I cannot confirm that all tests pass (at least when ATLAS is n o t >>available). >> >> >> >Perhaps you could help us track these problems down. I think you are >familiar enough with the system to at least localise the problem, and >write a simple script that shows the failure. That's what I would have >to do to fix it. > > From there you could investigate further and determine what is causing >the error. 
> >Thanks for any help, > >-Travis > >_______________________________________________ >Scipy-dev mailing list >Scipy-dev at scipy.net >http://www.scipy.net/mailman/listinfo/scipy-dev > I guess the problem with signm is related to the algorithm available in matfuncs.py. However it works with old scipy. In case of logm I have no idea. Nils -------------- next part -------------- A non-text attachment was scrubbed... Name: test_signm.py Type: text/x-python Size: 884 bytes Desc: not available URL: From joe at enthought.com Fri Nov 4 05:03:27 2005 From: joe at enthought.com (Joe Cooper) Date: Fri, 04 Nov 2005 04:03:27 -0600 Subject: [SciPy-dev] scipy.test() 15 failures Message-ID: <436B31EF.1010604@enthought.com> Hi all, This has been discussed in the past, with the reported solution being a change of Atlas build, but I'm having no luck making that work for me. Here's the error: ERROR: check_sh_legendre (scipy.special.basic.test_basic.test_sh_legendre) ---------------------------------------------------------------------- Traceback (most recent call last): File "/usr/lib/python2.3/site-packages/scipy/special/tests/test_basic.py", line 1806, in check_sh_legendre Ps1 = sh_legendre(1) File "/usr/lib/python2.3/site-packages/scipy/special/orthogonal.py", line 593, in sh_legendre x,w,mu0 = ps_roots(n,mu=1) File "/usr/lib/python2.3/site-packages/scipy/special/orthogonal.py", line 584, in ps_roots return js_roots(n,1.0,1.0,mu=mu) File "/usr/lib/python2.3/site-packages/scipy/special/orthogonal.py", line 205, in js_roots val = gen_roots_and_weights(n,an_Js,sbn_Js,mu0) File "/usr/lib/python2.3/site-packages/scipy/special/orthogonal.py", line 121, in gen_roots_and_weights eig = get_eig_func() File "/usr/lib/python2.3/site-packages/scipy/special/orthogonal.py", line 91, in get_eig_func eig = scipy.linalg.eig AttributeError: 'module' object has no attribute 'eig' ---------------------------------------------------------------------- Ran 865 tests in 1.131s FAILED (errors=15) Previous 
discussion of seemingly this exact error from November of 2004: http://www.scipy.net/pipermail/scipy-user/2004-November/003642.html I would like to use a scipy snapshot of scipy from a couple of months ago (because it is what is in the upcoming Windows Enthon release), but I just upgraded scipy_core to the very latest Subversion checkout from about an hour ago, which I was under the impression was where the problem occurs. I've tried setting environment variable PTATLAS="None" with no impact (against both a threaded and non-threaded Atlas). I've tried building a threaded and non-threaded Atlas, and I've tried using an atlas binary from SciPy.org (which actually did worse and segfaulted--but perhaps I tried the wrong one for my architecture). Finally, I've tried not using Atlas, and just using the standard blas and lapack, which results in a different set of test failures--I believe Nils or someone has already reported the problems with standard blas and lapack, so I won't worry over them for now. Anyway, in all Atlas-built cases, except the binaries from SciPy.org, I get the same errors=15 and the same eig error at the end of the tests. Platform is Red Hat Enterprise Linux 3.5 with Python 2.3.5, Numeric 23.8, a recent F2PY, and two different scipy_core packages (one from the same time as the SciPy snapshot, and another from today). The same SciPy snapshot is working fine with the same version of Atlas (4.6.0) on Windows, so I'm out of ideas. Anyone have a clue they can lend me? 
From nwagner at mecha.uni-stuttgart.de Fri Nov 4 07:04:49 2005 From: nwagner at mecha.uni-stuttgart.de (Nils Wagner) Date: Fri, 04 Nov 2005 13:04:49 +0100 Subject: [SciPy-dev] Final release of Numeric is actually 24.1 In-Reply-To: <436B1020.2050607@ee.byu.edu> References: <436B04DF.9050005@ee.byu.edu> <436B0E94.1040708@mecha.uni-stuttgart.de> <436B1020.2050607@ee.byu.edu> Message-ID: <436B4E61.9060806@mecha.uni-stuttgart.de> Travis Oliphant wrote: >Nils Wagner wrote: > > >>Hi Travis, >> >>I cannot confirm that all tests pass (at least when ATLAS is n o t >>available). >> >> >> >Perhaps you could help us track these problems down. I think you are >familiar enough with the system to at least localise the problem, and >write a simple script that shows the failure. That's what I would have >to do to fix it. > > From there you could investigate further and determine what is causing >the error. > >Thanks for any help, > >-Travis > >_______________________________________________ >Scipy-dev mailing list >Scipy-dev at scipy.net >http://www.scipy.net/mailman/listinfo/scipy-dev > Hi Travis, BTW, what is the difference between A = asarray(A) and A=mat(asarray(A)) ? In matfuncs.py def logm(A,disp=1): """Matrix logarithm, inverse of expm.""" # Compute using general funm but then use better error estimator and # make one step in improving estimate using a rotation matrix. A = mat(asarray(A)) All other matrix functions use A=asarray(A). 
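The difference Nils asks about: mat() wraps the result in a matrix class whose * operator is the matrix product, while on a plain asarray() result * multiplies elementwise. A sketch in modern NumPy terms (np.matrix/np.asmatrix are the descendants of scipy core's mat, and are nowadays discouraged in favour of plain arrays with the @ operator):

```python
import numpy as np

A = [[1, 2], [3, 4]]

a = np.asarray(A)    # plain ndarray: * multiplies elementwise
m = np.asmatrix(A)   # matrix subclass: * is the matrix product

print(a * a)   # [[ 1  4]  [ 9 16]]
print(m * m)   # [[ 7 10]  [15 22]]
```

So mat(asarray(A)) in logm presumably buys matrix semantics for the multiplications that follow, at the cost of returning a matrix rather than a plain array.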
Nils From Norbert.Nemec.list at gmx.de Fri Nov 4 11:00:04 2005 From: Norbert.Nemec.list at gmx.de (Norbert Nemec) Date: Fri, 04 Nov 2005 17:00:04 +0100 Subject: [SciPy-dev] Dangerous Matrix +/- Scalar behavior Message-ID: <436B8584.1090404@gmx.de> Hi there, just got bitten by the same bug as often before: In physics, it is not unusual to abbreviate the notation E*eye(n) - H for a scalar value E and a n*n-matrix H by simply writing E - H With the current definition of the matrix class, however, this will be interpreted as elementwise subtraction and lead to a hard-to-find error. Since elementwise addition/subtraction of a scalar value to a matrix appears extremely rarely in linear algebra, wouldn't it be an idea to disallow this operation on matrices (which usually indicates an error) and raise an exception instead? Greetings, Norbert From stephen.walton at csun.edu Fri Nov 4 11:48:56 2005 From: stephen.walton at csun.edu (Stephen Walton) Date: Fri, 04 Nov 2005 08:48:56 -0800 Subject: [SciPy-dev] No scipy tests? In-Reply-To: <436AE692.1090504@csun.edu> References: <436AE692.1090504@csun.edu> Message-ID: <436B90F8.5010902@csun.edu> Stephen Walton wrote: >I just synced up my copy of newcore and newscipy to versions 1433 and >1418, respectively. Now scipy.test() only runs the 134 tests which are >in scipy.base. > > Problem fixed at SVN 1436 and 1419. From Fernando.Perez at colorado.edu Fri Nov 4 14:01:45 2005 From: Fernando.Perez at colorado.edu (Fernando Perez) Date: Fri, 04 Nov 2005 12:01:45 -0700 Subject: [SciPy-dev] 'O' type arrays and containers as values... 
Message-ID: <436BB019.4020902@colorado.edu> Hi all, If I understand things correctly, the following should work fine: In [3]: import scipy In [4]: scipy.__core_version__ Out[4]: '0.4.3.1433' In [5]: a=scipy.empty((2,2),'O') In [6]: a[0,0] = (1,2,3) --------------------------------------------------------------------------- exceptions.ValueError Traceback (most recent call last) /home/phillips/student/fperez/ ValueError: number of elements in destination must be integer multiple of number of elements in source Why can't I assign a container (it fails for tuples, lists and arrays) as the _value_ of an 'O' array? Note that strings work fine: In [7]: a[0,0] = '(1,2,3)' Am I missing something here? It seems to me that 'O' type arrays should be able to hold as values arbitrary python objects, no? Cheers, f From oliphant at ee.byu.edu Fri Nov 4 16:00:51 2005 From: oliphant at ee.byu.edu (Travis Oliphant) Date: Fri, 04 Nov 2005 14:00:51 -0700 Subject: [SciPy-dev] 'O' type arrays and containers as values... In-Reply-To: <436BB019.4020902@colorado.edu> References: <436BB019.4020902@colorado.edu> Message-ID: <436BCC03.3030803@ee.byu.edu> Fernando Perez wrote: >Hi all, > >If I understand things correctly, the following should work fine: > >In [3]: import scipy > >In [4]: scipy.__core_version__ >Out[4]: '0.4.3.1433' > >In [5]: a=scipy.empty((2,2),'O') > >In [6]: a[0,0] = (1,2,3) > > The problem here is the ambiguity of the left hand side and how assignment is generally done. There is no special-check for this case. All that happens is that (1,2,3) gets converted to an object array and the elements copied over. So, you are trying to do the equivalent of a[0,0] = array((1,2,3),'O') But, the right-hand side is converted to a length-3 object array --- with container types it's ambiguous as to what you really want for the object array. Then of course the length-3 array cannot be copied into the result. 
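The length-3 copy failure above is the crux. For reference, modern NumPy behaves the way this thread ends up deciding: a single-element index into an object array stores the right-hand side object itself, whatever it is. A quick sketch:

```python
import numpy as np

a = np.empty((2, 2), dtype=object)
a[0, 0] = (1, 2, 3)   # single-element index: the tuple itself is stored
a[0, 1] = [4, 5]      # same for a list
print(a[0, 0])        # (1, 2, 3)
print(type(a[0, 1]))  # <class 'list'>
```

Only slice or whole-array assignment still treats a sequence on the right as something to broadcast.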
Now, one solution is to special-case the PyArrayObject assignment and check for single-index assignment and just copy whatever the value is directly over. Of course the more special-cases, the slower all code gets. This has always been a problem. You would get a similar error in Numeric. Suggestions welcome. -Travis From oliphant at ee.byu.edu Fri Nov 4 16:22:40 2005 From: oliphant at ee.byu.edu (Travis Oliphant) Date: Fri, 04 Nov 2005 14:22:40 -0700 Subject: [SciPy-dev] [SciPy-user] MemoryError in scipy_core In-Reply-To: <723eb6930511040612j68b9e8a4m1b792198ee785c04@mail.gmail.com> References: <723eb6930511040612j68b9e8a4m1b792198ee785c04@mail.gmail.com> Message-ID: <436BD120.4060302@ee.byu.edu> Chris Fonnesbeck wrote: >In the course of moving PyMC from Numeric to scipy_core, I am running >into some pretty serious memory issues. For those of you unfamiliar >with PyMC, it is simply a Bayesian simulation module that estimates >model parameters by iteratively sampling from the joint posterior >distribution of the model, and saving each sample to an array. Under >Numeric, I could safely run several hundred thousand iterations of >pretty complex models (i.e. lots of parameters) without trouble. Under >scipy_core, PyMC hogs most of the system resources (you really can't do >anything else while it's running), and crashes after just over 10K >iterations, under a pretty simple model. Here is the end of the >output: > > I just ran scipy's testing suite through valgrind. I found a couple of small items. I don't think they are related to what is happening here. I'll keep looking. I'm still interested to know what version of scipy core you are trying. -Travis From Fernando.Perez at colorado.edu Fri Nov 4 16:26:50 2005 From: Fernando.Perez at colorado.edu (Fernando Perez) Date: Fri, 04 Nov 2005 14:26:50 -0700 Subject: [SciPy-dev] 'O' type arrays and containers as values...
In-Reply-To: <436BCC03.3030803@ee.byu.edu> References: <436BB019.4020902@colorado.edu> <436BCC03.3030803@ee.byu.edu> Message-ID: <436BD21A.1030505@colorado.edu> Travis Oliphant wrote: >>In [5]: a=scipy.empty((2,2),'O') >> >>In [6]: a[0,0] = (1,2,3) >> >> > > The problem here is the ambiguity of the left hand side and how > assignment is generally done. There is no special-check for this > case. All that happens is that (1,2,3) gets converted to an object > array and the elements copied over. So, you are trying to do the > equivalent of > > a[0,0] = array((1,2,3),'O') > > But, the right-hand side is converted to a length-3 object array --- > with container types it's ambiguous as to what you really want for the > object array. But I'd argue that if the LHS is of type 'O', even that should work, no? Basically my way of looking at this is: type 'O' arrays hold opaque pointers to arbitrary Python objects. In a sense they are almost like a dictionary with tuple keys, but with the usual semantics of scipy arrays. Since the index given is a single-element index, then the object on the RHS should be assigned to that 'pointer' in the table, no? > Then of course the length-3 array cannot be copied into the result. > Now, one solution is to special-case the PyArrayObject assignment and > check for single-index assignment and just copy whatever the value is > directly over. Of course the more special-cases, the slower all code > gets. I realize that this is unpleasant, but as they stand now, the 'O' arrays are rather strange (as I just found out). They can hold many python objects as their values, even some containers (lists are OK), but not tuples, lists or other arrays. I think that kind of special-casing should be avoided.
Since single-element assignment for the 'O' type _is_ special (the arrays can contain inhomogeneous entries already): In [11]: a = scipy.array(['hi Travis',3.14,scipy.cos],'O') In [12]: a Out[12]: array([hi Travis, 3.14, ], dtype=object) then I think that this should work. Because it gets worse: In [20]: a = scipy.array([[0,(1,2,3)],[2,3]],'O') In [21]: a.shape Out[21]: (2, 2) In [22]: a[0,1] Out[22]: (1, 2, 3) In [23]: a[0,1] = (1,2,3) --------------------------------------------------------------------------- exceptions.ValueError Traceback (most recent call last) /home/phillips/student/fperez/ ValueError: number of elements in destination must be integer multiple of number of elements in source This shows that an 'O' array can in fact _contain_ tuples if given at construction time, but you can't assign to them later. IMHO this qualifies as a wart in capital letters :) Can you somehow just special-case the = operator for 'O' at the 'top level', so that it doesn't do anything at all to the RHS? As I see it, the RHS should just be held unmodified, in essence taking the address of the underlying PyObject pointer and little more... But I don't really know that code in detail, so I may be missing something here. I realize that the fix may be a bit painful, and I'm willing to look at ways of finding the least-nasty solution, but I honestly think this needs fixing. It's too bizarre of a special behavior to live in the Python world... > This has always been a problem. You would get a similar error in Numeric. I actually saw it in Numeric, but preferred to report for scipy core. I want you to keep your word that 24.1 is the last of Numeric :) Best, f From oliphant at ee.byu.edu Fri Nov 4 17:12:22 2005 From: oliphant at ee.byu.edu (Travis Oliphant) Date: Fri, 04 Nov 2005 15:12:22 -0700 Subject: [SciPy-dev] 'O' type arrays and containers as values... 
In-Reply-To: <436BD21A.1030505@colorado.edu> References: <436BB019.4020902@colorado.edu> <436BCC03.3030803@ee.byu.edu> <436BD21A.1030505@colorado.edu> Message-ID: <436BDCC6.4030805@ee.byu.edu> Fernando Perez wrote: >Buit I'd argue that if the LHS is of type 'O', even that should work, no? >Basically my way of looking at this is: type 'O' arrays hold opaque pointers >to arbitrary Python objects. In a sense they are almost like a dictionary >with tuple keys, but with the usual semantics of scipy arrays. > > I just added the special-case check so that now single-item assignment works as expected for object arrays. It's in SVN. It's still ambiguous as to what to do with initialization from container types. For example, array([[1,2,3],[3,4,5]],'O') produces an 2x3 array of integer objects. Or did you want a 1x2 array of list objects? Not sure how to fix that one without a special-purpose object array creation function that fixes the shape. -Travis From Fernando.Perez at colorado.edu Fri Nov 4 17:25:23 2005 From: Fernando.Perez at colorado.edu (Fernando Perez) Date: Fri, 04 Nov 2005 15:25:23 -0700 Subject: [SciPy-dev] 'O' type arrays and containers as values... In-Reply-To: <436BDCC6.4030805@ee.byu.edu> References: <436BB019.4020902@colorado.edu> <436BCC03.3030803@ee.byu.edu> <436BD21A.1030505@colorado.edu> <436BDCC6.4030805@ee.byu.edu> Message-ID: <436BDFD3.5090409@colorado.edu> Travis Oliphant wrote: > I just added the special-case check so that now single-item assignment > works as expected > for object arrays. It's in SVN. Many thanks, Travis. You made the right choice :) > It's still ambiguous as to what to do with initialization from container > types. > > For example, > > array([[1,2,3],[3,4,5]],'O') produces an 2x3 array of integer objects. > Or did you want a 1x2 array of list objects? > > Not sure how to fix that one without a special-purpose object array > creation function that fixes the shape. Yes, this one is nasty. 
I'd say leave it, because it will match most closely the behavior of the default constructor, and honestly I can't think of a better approach. But I think fixing the assignment problem was a real wart, not an inherent ambiguity as in this case. Best, f From oliphant at ee.byu.edu Fri Nov 4 19:27:52 2005 From: oliphant at ee.byu.edu (Travis Oliphant) Date: Fri, 04 Nov 2005 17:27:52 -0700 Subject: [SciPy-dev] Valgrind primer Message-ID: <436BFC88.7090704@ee.byu.edu> I've been playing with valgrind to detect memory leaks and other memory errors in scipy. It's a pretty nice tool. It often comes with most Linux distributions (on my system urpmi valgrind installed it for me). To use it to debug extension modules, you should have a Python compiled with debugging symbols (-g) option. This will add debugging symbols to extension modules as well. There is a file Misc/valgrind-python.supp in the standard Python distribution that is needed to suppress some useless Python memory warnings. Then you can generate some useful output to the file testmem.pid using something like valgrind --tool=memcheck --leak-check=yes --error-limit=no -v --log-file=testmem --suppressions=valgrind-python.supp --show-reachable=yes --num-callers=7 python leaktest.py where leaktest.py is a file that exercises your code: i.e import scipy scipy.test(1,10) Currently, my version of valgrind is issuing an illegal instruction in eigenvalue computation when I run scipy.test(10,10) under valgrind. That's probably O.K. because it would take a long time to finish anyway. Everything runs a lot slower under valgrind, but you get some useful information when it's done. Feel free to post questionable items from your valgrind-generated log files to this list (be sure they involve the PyArray_ or PyUFunc_ C-API though. You may get errors from the Python C-API too, but we can't fix those if needed here.). 
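valgrind reports leaks at the C allocator level; a rough pure-Python complement is to watch the count of gc-tracked objects across repeated calls to the code under test. A sketch (the leaky/clean functions here are artificial examples, not scipy code):

```python
import gc

def object_growth(func, n=10):
    """Call func n times; return the net growth in gc-tracked objects.

    Growth proportional to n is a leak signal worth taking to valgrind
    for a real C-level diagnosis.
    """
    gc.collect()
    before = len(gc.get_objects())
    for _ in range(n):
        func()
    gc.collect()
    return len(gc.get_objects()) - before

cache = []

def leaky():
    cache.append([0] * 100)   # every call retains another list

def clean():
    sum([0] * 100)            # temporary list is freed immediately

print(object_growth(leaky))   # grows by roughly n
print(object_growth(clean))   # stays near zero
```

This will not catch leaks of untracked objects (plain C allocations, most scalars), which is why valgrind remains the real tool.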
From strawman at astraw.com Fri Nov 4 21:01:22 2005 From: strawman at astraw.com (Andrew Straw) Date: Fri, 04 Nov 2005 18:01:22 -0800 Subject: [SciPy-dev] bugs in scipy / Numeric Message-ID: <436C1272.6030306@astraw.com> The following code apparently exhibits 2 bugs on my linux (Debian sarge) AMD64 system. I'm using scipy/newcore scipy/newscipy from SVN as of a few minutes ago and Numeric from CVS ditto (from :pserver:anonymous at cvs.sourceforge.net:/cvsroot/numpy/Numerical is this right?). Numeric 24.1 was used in the initial version of this test, but I updated to CVS just in case something had changed to fix the bug. Nothing did. = system info = $ python Python 2.3.5 (#2, Nov 3 2005, 02:44:38) [GCC 3.3.5 (Debian 1:3.3.5-13)] on linux2 Type "help", "copyright", "credits" or "license" for more information. >>> $ uname -a Linux hdmg 2.6.14 #1 SMP Tue Nov 1 23:42:27 PST 2005 x86_64 GNU/Linux = The program = import scipy import Numeric print 'scipy.__scipy_version__',scipy.__scipy_version__ print 'Numeric.__version__',Numeric.__version__ print a=[scipy.array(5000,scipy.Float32),scipy.array(6000,scipy.Float32)] b=Numeric.array(a) print 'list of scipy scalars -> Numeric array' print a print b print a=[Numeric.array(5000,scipy.Float32),Numeric.array(6000,scipy.Float32)] b=scipy.array(a) print 'list of Numeric scalars -> scipy array' print a print b = The output = $ python test2.py Importing io to scipy Importing fftpack to scipy Importing special to scipy Importing cluster to scipy Importing sparse to scipy Importing utils to scipy Importing interpolate to scipy Importing lib to scipy Importing integrate to scipy Importing signal to scipy Importing optimize to scipy Importing linalg to scipy Importing stats to scipy scipy.__scipy_version__ 0.4.2_1420 Numeric.__version__ 24.1 list of scipy scalars -> Numeric array [array(5000.0, dtype=float32), array(6000.0, dtype=float32)] [?
p] list of Numeric scalars -> scipy array [5000.0, 6000.0] Traceback (most recent call last): File "test2.py", line 21, in ? print b File "/home/astraw/py23-amd64/lib/python2.3/site-packages/scipy/base/numeric.py", line 238, in array_str return array2string(a, max_line_width, precision, suppress_small, ' ', "") File "/home/astraw/py23-amd64/lib/python2.3/site-packages/scipy/base/arrayprint.py", line 194, in array2string elif reduce(product, a.shape) == 0: TypeError: a float is required From oliphant at ee.byu.edu Fri Nov 4 23:34:07 2005 From: oliphant at ee.byu.edu (Travis Oliphant) Date: Fri, 04 Nov 2005 21:34:07 -0700 Subject: [SciPy-dev] Recompile your extension modules if you check out latest newcore Message-ID: <436C363F.2040509@ee.byu.edu> If you check out the latest newcore, you will need to recompile your extension modules built against newcore. I removed a few cluttering C-API functions (that changes the pointer list and so the old pointers your extension modules are really using won't work). -Travis From pearu at scipy.org Sat Nov 5 14:04:10 2005 From: pearu at scipy.org (Pearu Peterson) Date: Sat, 5 Nov 2005 13:04:10 -0600 (CST) Subject: [SciPy-dev] Building newscipy against non-atlas blas/lapack libraries Message-ID: Hi, When building newscipy against non-atlas blas/lapack libraries it is crucial that you build blas/lapack libraries properly. See the relevant sections in http://www.scipy.org/documentation/buildnewscipy.txt http://www.scipy.org/documentation/buildatlas4scipy.txt When using the GNU compiler, it is important that the lapack library is compiled with optimisation flag -O2 instead of -fno-f2c -O3 that is default OPTS in INSTALL/make.inc.LINUX. If you do not fix OPTS variable in make.inc, scipy tests will fail as has been reported in the list. And if you use system lapack libraries that are compiled with `-fno-f2c -O3`, the tests will also fail. So, please build lapack library with proper optimisation flags when using it in scipy.
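Pearu's recipe amounts to editing OPTS in make.inc before building the reference LAPACK. A sketch of the steps (the paths and the sed one-liner are illustrative; the buildatlas4scipy.txt notes linked above are authoritative):

```shell
# unpack netlib LAPACK, then build it with -O2 as recommended above
cd ~/src/lapack/LAPACK
cp INSTALL/make.inc.LINUX make.inc

# replace the default "OPTS = -fno-f2c -O3" with plain -O2
sed -i 's/^OPTS *=.*/OPTS     = -O2/' make.inc

make lapacklib                       # produces lapack_LINUX.a
export LAPACK=$PWD/lapack_LINUX.a    # point the scipy build at it
```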
Pearu From stephen.walton at csun.edu Sun Nov 6 10:49:15 2005 From: stephen.walton at csun.edu (Stephen Walton) Date: Sun, 06 Nov 2005 07:49:15 -0800 Subject: [SciPy-dev] bugs in scipy / Numeric In-Reply-To: <436C1272.6030306@astraw.com> References: <436C1272.6030306@astraw.com> Message-ID: <436E25FB.4040408@csun.edu> Andrew Straw wrote: >The follwing code apparently exhibits 2 bugs on my linux (Debian sarge) >AMD64 system. I'm using scipy/newcore scipy/newscipy from SVN as of a >few minutes ago and Numeric from CVS ditto (from >:pserver:anonymous at cvs.sourceforge.net:/cvsroot/numpy/Numerical is this >right?). Numeric 24.1 was used in the initial version of this test, but >I updated to CVS just in case something had changed to fix the bug. > > I get different output on my system (Fedora Core 4): scipy.__scipy_version__ 0.4.2_1422 Numeric.__version__ 24.1 Traceback (most recent call last): File "andrew.py", line 10, in ? b=Numeric.array(a) ValueError: Invalid type for array Notice that in my case the conversion of a list of Scipy scalars to a Numeric array fails. From nwagner at mecha.uni-stuttgart.de Sun Nov 6 13:13:34 2005 From: nwagner at mecha.uni-stuttgart.de (Nils Wagner) Date: Sun, 06 Nov 2005 19:13:34 +0100 Subject: [SciPy-dev] Building newscipy against non-atlas blas/lapack libraries In-Reply-To: References: Message-ID: On Sat, 5 Nov 2005 13:04:10 -0600 (CST) Pearu Peterson wrote: > > Hi, > > When building newscipy against non-atlas blas/lapack >libraries it is > crusial that you build blas/lapack libraries properly. >See the relevant > sections in > > http://www.scipy.org/documentation/buildnewscipy.txt > http://www.scipy.org/documentation/buildatlas4scipy.txt > > When using the GNU compiler, it is important that the >lapack library is > compiled with optimisaton flag > > -O2 > > instead of > > -fno-f2c -O3 > > that is default OPTS in INSTALL/make.inc.LINUX. 
> > If you do not fix OPTS variable in make.inc, scipy >tests will fail as has > been reported in the list. And if you use system lapack >libraries that > are compiled with `-fno-f2c -O3`, the tests will also >fail. > > So, please build lapack library with proper optimisation >flags when using > it in scipy. > > Pearu > > _______________________________________________ > Scipy-dev mailing list > Scipy-dev at scipy.net > http://www.scipy.net/mailman/listinfo/scipy-dev Hi Pearu, Which tests will exactly fail ? I have used export BLAS_SRC=~/src/blas export LAPACK_SRC=~/src/lapack mkdir $BLAS_SRC cd $BLAS_SRC wget http://www.netlib.org/blas/blas.tgz tar xzf blas.tgz cd $LAPACK_SRC/.. wget http://www.netlib.org/lapack/lapack.tgz tar xzf lapack.tgz export BLAS=None export LAPACK=None export ATLAS=None I have already reported the bugs with respect to this approach. Nils From pearu at scipy.org Sun Nov 6 13:39:43 2005 From: pearu at scipy.org (Pearu Peterson) Date: Sun, 6 Nov 2005 12:39:43 -0600 (CST) Subject: [SciPy-dev] Building newscipy against non-atlas blas/lapack libraries In-Reply-To: References: Message-ID: On Sun, 6 Nov 2005, Nils Wagner wrote: > Hi Pearu, > > Which tests will exactly fail ? > > I have used > > export BLAS_SRC=~/src/blas > export LAPACK_SRC=~/src/lapack > mkdir $BLAS_SRC > cd $BLAS_SRC > wget http://www.netlib.org/blas/blas.tgz > tar xzf blas.tgz > > cd $LAPACK_SRC/.. > wget http://www.netlib.org/lapack/lapack.tgz > tar xzf lapack.tgz > > export BLAS=None > export LAPACK=None > export ATLAS=None > > I have already reported the bugs with respect to this approach. Using the LAPACK_SRC approach will not work, as some LAPACK source files need to be compiled without optimization flags, but currently scipy.distutils does not support switching off optimization for certain files. Using `config_fc --noopt` would switch off optimization for all Fortran sources.
So, you need to build at least the LAPACK library manually, see `Building the LAPACK library` section in http://www.scipy.org/documentation/buildatlas4scipy.txt Pearu From nwagner at mecha.uni-stuttgart.de Mon Nov 7 03:40:48 2005 From: nwagner at mecha.uni-stuttgart.de (Nils Wagner) Date: Mon, 07 Nov 2005 09:40:48 +0100 Subject: [SciPy-dev] Building newscipy against non-atlas blas/lapack libraries In-Reply-To: References: Message-ID: <436F1310.100@mecha.uni-stuttgart.de> Pearu Peterson wrote: >On Sun, 6 Nov 2005, Nils Wagner wrote: > > >>Hi Pearu, >> >>Which tests will exactly fail ? >> >>I have used >> >> export BLAS_SRC=~/src/blas >> export LAPACK_SRC=~/src/lapack >> mkdir $BLAS_SRC >> cd $BLAS_SRC >> wget http://www.netlib.org/blas/blas.tgz >> tar xzf blas.tgz >> >> cd $LAPACK_SRC/.. >> wget http://www.netlib.org/lapack/lapack.tgz >> tar xzf lapack.tgz >> >> export BLAS=None >> export LAPACK=None >> export ATLAS=None >> >>I have already reported the bugs with respect to this approach. >> > >Using LAPACK_SRC approuch will not work as some LAPACK source files need >to be compiled without optimization flags but currently scipy.distutils >does not support switching off optimization for certain files. Using >`config_fc --noopt` would switch off optimization for all Fortran sources. > >So, you need to build at least LAPACK library manually, see `Building >the LAPACK library` section in > > http://www.scipy.org/documentation/buildatlas4scipy.txt > >Pearu > >_______________________________________________ >Scipy-dev mailing list >Scipy-dev at scipy.net >http://www.scipy.net/mailman/listinfo/scipy-dev > Pearu, Following your advice, I have built the LAPACK library from scratch using OPTS = -O2. /home/nwagner> echo $LAPACK /var/tmp/LAPACK/lapack_LINUX.a /home/nwagner> echo $LAPACK_SRC LAPACK_SRC: Undefined variable.
/home/nwagner> echo $BLAS_SRC /var/tmp/src/blas Now scipy.test(10,10) yields one error ====================================================================== FAIL: check_nils (scipy.linalg.matfuncs.test_matfuncs.test_signm) ---------------------------------------------------------------------- Traceback (most recent call last): File "/usr/local/lib/python2.4/site-packages/scipy/linalg/tests/test_matfuncs.py", line 44, in check_nils assert_array_almost_equal(r,cr) File "/usr/local/lib/python2.4/site-packages/scipy/test/testing.py", line 735, in assert_array_almost_equal assert cond,\ AssertionError: Arrays are not almost equal (mismatch 100.0%): Array 1: [[ -5.2848319e-01 +3.8543597e+00j -6.2440787e+01 +6.0410538e+00j 2.7508638e+01 +1.3623836e+01j 5.9339443e+00 +3.4... Array 2: [[ 11.9493333 -2.2453333 15.3173333 21.6533333 -2.2453333] [ -3.8426667 0.4986667 -4.5906667 -7.1866667 0.498... ---------------------------------------------------------------------- Ran 1331 tests in 179.443s FAILED (failures=1) 0.4.3.1440 0.4.2_1422 The repeated computation of logm(A) yields different results, but for what reason ? See the imaginary part -9.91687203e+00j in the last loop. >>> A array([[ 3.1, 0. ], [ 0. , 3.1]]) >>> linalg.logm(A) array([[ 1.13140211e+000, 1.49230383e-269], [ 0.00000000e+000, 1.13140211e+000]]) >>> linalg.logm(A) array([[ 1.13140211e+000, 1.33902098e-270], [ 0.00000000e+000, 1.13140211e+000]]) >>> linalg.logm(A) array([[ 1.13140211e+000, 1.62403951e-270], [ 0.00000000e+000, 1.13140211e+000]]) >>> linalg.logm(A) array([[ 1.13140211e+000, -2.19984796e-269], [ 0.00000000e+000, 1.13140211e+000]]) >>> linalg.logm(A) array([[ 1.13140211 +0.00000000e+00j, 0. -9.91687203e+00j], [ 0. +0.00000000e+00j, 1.13140211 +0.00000000e+00j]]) Any pointer how to resolve these problems ? Newscipy should work without ATLAS. 
Nils From pearu at scipy.org Mon Nov 7 02:58:17 2005 From: pearu at scipy.org (Pearu Peterson) Date: Mon, 7 Nov 2005 01:58:17 -0600 (CST) Subject: [SciPy-dev] Building newscipy against non-atlas blas/lapack libraries In-Reply-To: <436F1310.100@mecha.uni-stuttgart.de> References: <436F1310.100@mecha.uni-stuttgart.de> Message-ID: On Mon, 7 Nov 2005, Nils Wagner wrote: >> Using LAPACK_SRC approuch will not work as some LAPACK source files need >> to be compiled without optimization flags but currently scipy.distutils >> does not support switching off optimization for certain files. Using >> `config_fc --noopt` would switch off optimization for all Fortran sources. >> >> So, you need to build at least LAPACK library manually, see `Building >> the LAPACK library` section in >> >> http://www.scipy.org/documentation/buildatlas4scipy.txt >> > Following your advice, I have build the LAPACK library from scratch > using OPTS = -O2. > > /home/nwagner> echo $LAPACK > /var/tmp/LAPACK/lapack_LINUX.a > /home/nwagner> echo $LAPACK_SRC > LAPACK_SRC: Undefined variable. > /home/nwagner> echo $BLAS_SRC > /var/tmp/src/blas > > Now scipy.test(10,10) yields one error > > ====================================================================== > FAIL: check_nils (scipy.linalg.matfuncs.test_matfuncs.test_signm) > ---------------------------------------------------------------------- > Traceback (most recent call last): > File > "/usr/local/lib/python2.4/site-packages/scipy/linalg/tests/test_matfuncs.py", > line 44, in check_nils > assert_array_almost_equal(r,cr) > File "/usr/local/lib/python2.4/site-packages/scipy/test/testing.py", > line 735, in assert_array_almost_equal > assert cond,\ > AssertionError: > Arrays are not almost equal (mismatch 100.0%): > Array 1: [[ -5.2848319e-01 +3.8543597e+00j -6.2440787e+01 > +6.0410538e+00j > 2.7508638e+01 +1.3623836e+01j 5.9339443e+00 +3.4... 
> Array 2: [[ 11.9493333 -2.2453333 15.3173333 21.6533333 > -2.2453333] > [ -3.8426667 0.4986667 -4.5906667 -7.1866667 0.498... > > > ---------------------------------------------------------------------- > Ran 1331 tests in 179.443s > > FAILED (failures=1) > > 0.4.3.1440 > 0.4.2_1422 > > > The repeated computation of logm(A) yields different results, but for > what reason ? > See the imaginary part -9.91687203e+00j in the last loop. > >>>> A > array([[ 3.1, 0. ], > [ 0. , 3.1]]) > >>>> linalg.logm(A) > array([[ 1.13140211e+000, 1.49230383e-269], > [ 0.00000000e+000, 1.13140211e+000]]) >>>> linalg.logm(A) > array([[ 1.13140211e+000, 1.33902098e-270], > [ 0.00000000e+000, 1.13140211e+000]]) >>>> linalg.logm(A) > array([[ 1.13140211e+000, 1.62403951e-270], > [ 0.00000000e+000, 1.13140211e+000]]) >>>> linalg.logm(A) > array([[ 1.13140211e+000, -2.19984796e-269], > [ 0.00000000e+000, 1.13140211e+000]]) >>>> linalg.logm(A) > array([[ 1.13140211 +0.00000000e+00j, 0. -9.91687203e+00j], > [ 0. +0.00000000e+00j, 1.13140211 +0.00000000e+00j]]) > > Any pointer how to resolve these problems ? > > Newscipy should work without ATLAS. Agreed. But I cannot reproduce this problem with or without ATLAS. Try building scipy without optimization: rm -rf build python setup.py config_fc --noopt install python -c 'import scipy;scipy.linalg.test()' And then also try building BLAS libraries manually. Also note that I have removed Numeric from my system (moving Numeric to Numeric.hide would be enough). Try the same. 
Pearu From nwagner at mecha.uni-stuttgart.de Mon Nov 7 04:55:01 2005 From: nwagner at mecha.uni-stuttgart.de (Nils Wagner) Date: Mon, 07 Nov 2005 10:55:01 +0100 Subject: [SciPy-dev] Building newscipy against non-atlas blas/lapack libraries In-Reply-To: References: <436F1310.100@mecha.uni-stuttgart.de> Message-ID: <436F2475.1060708@mecha.uni-stuttgart.de> Pearu Peterson wrote: >On Mon, 7 Nov 2005, Nils Wagner wrote: > > >>>Using LAPACK_SRC approuch will not work as some LAPACK source files need >>>to be compiled without optimization flags but currently scipy.distutils >>>does not support switching off optimization for certain files. Using >>>`config_fc --noopt` would switch off optimization for all Fortran sources. >>> >>>So, you need to build at least LAPACK library manually, see `Building >>>the LAPACK library` section in >>> >>> http://www.scipy.org/documentation/buildatlas4scipy.txt >>> >>> >>Following your advice, I have build the LAPACK library from scratch >>using OPTS = -O2. >> >>/home/nwagner> echo $LAPACK >>/var/tmp/LAPACK/lapack_LINUX.a >>/home/nwagner> echo $LAPACK_SRC >>LAPACK_SRC: Undefined variable. >>/home/nwagner> echo $BLAS_SRC >>/var/tmp/src/blas >> >>Now scipy.test(10,10) yields one error >> >>====================================================================== >>FAIL: check_nils (scipy.linalg.matfuncs.test_matfuncs.test_signm) >>---------------------------------------------------------------------- >>Traceback (most recent call last): >> File >>"/usr/local/lib/python2.4/site-packages/scipy/linalg/tests/test_matfuncs.py", >>line 44, in check_nils >> assert_array_almost_equal(r,cr) >> File "/usr/local/lib/python2.4/site-packages/scipy/test/testing.py", >>line 735, in assert_array_almost_equal >> assert cond,\ >>AssertionError: >>Arrays are not almost equal (mismatch 100.0%): >> Array 1: [[ -5.2848319e-01 +3.8543597e+00j -6.2440787e+01 >>+6.0410538e+00j >> 2.7508638e+01 +1.3623836e+01j 5.9339443e+00 +3.4... 
>> Array 2: [[ 11.9493333 -2.2453333 15.3173333 21.6533333 >>-2.2453333] >>[ -3.8426667 0.4986667 -4.5906667 -7.1866667 0.498... >> >> >>---------------------------------------------------------------------- >>Ran 1331 tests in 179.443s >> >>FAILED (failures=1) >> >>0.4.3.1440 >>0.4.2_1422 >> >> >>The repeated computation of logm(A) yields different results, but for >>what reason ? >>See the imaginary part -9.91687203e+00j in the last loop. >> >> >>>>>A >>>>> >>array([[ 3.1, 0. ], >> [ 0. , 3.1]]) >> >> >>>>>linalg.logm(A) >>>>> >>array([[ 1.13140211e+000, 1.49230383e-269], >> [ 0.00000000e+000, 1.13140211e+000]]) >> >>>>>linalg.logm(A) >>>>> >>array([[ 1.13140211e+000, 1.33902098e-270], >> [ 0.00000000e+000, 1.13140211e+000]]) >> >>>>>linalg.logm(A) >>>>> >>array([[ 1.13140211e+000, 1.62403951e-270], >> [ 0.00000000e+000, 1.13140211e+000]]) >> >>>>>linalg.logm(A) >>>>> >>array([[ 1.13140211e+000, -2.19984796e-269], >> [ 0.00000000e+000, 1.13140211e+000]]) >> >>>>>linalg.logm(A) >>>>> >>array([[ 1.13140211 +0.00000000e+00j, 0. -9.91687203e+00j], >> [ 0. +0.00000000e+00j, 1.13140211 +0.00000000e+00j]]) >> >>Any pointer how to resolve these problems ? >> >>Newscipy should work without ATLAS. >> > >Agreed. But I cannot reproduce this problem with or without ATLAS. >Try building scipy without optimization: > > rm -rf build > python setup.py config_fc --noopt install > python -c 'import scipy;scipy.linalg.test()' > >And then also try building BLAS libraries manually. > >Also note that I have removed Numeric from my system (moving >Numeric to Numeric.hide would be enough). Try the same. > >Pearu > >_______________________________________________ >Scipy-dev mailing list >Scipy-dev at scipy.net >http://www.scipy.net/mailman/listinfo/scipy-dev > Hi Pearu, I have also built a BLAS library from scratch.
/home/nwagner> echo $BLAS /var/tmp/src/lapack/LAPACK/blas_LINUX.a /home/nwagner> echo $LAPACK /var/tmp/src/lapack/LAPACK/lapack_LINUX.a If I import scipy now >>> import scipy Traceback (most recent call last): File "", line 1, in ? File "/usr/local/lib/python2.4/site-packages/scipy/__init__.py", line 32, in ? from scipy.basic.fft import fft, ifft File "/usr/local/lib/python2.4/site-packages/scipy/basic/fft.py", line 22, in ? from fft_lite import * File "/usr/local/lib/python2.4/site-packages/scipy/basic/fft_lite.py", line 23, in ? import scipy.lib.fftpack_lite as fftpack File "/usr/local/lib/python2.4/site-packages/scipy/lib/__init__.py", line 9, in ? import blas File "/usr/local/lib/python2.4/site-packages/scipy/lib/blas/__init__.py", line 9, in ? import fblas ImportError: /usr/local/lib/python2.4/site-packages/scipy/lib/blas/fblas.so: undefined symbol: srotmg_ How can I fix this problem ? Nils From pearu at scipy.org Mon Nov 7 04:07:34 2005 From: pearu at scipy.org (Pearu Peterson) Date: Mon, 7 Nov 2005 03:07:34 -0600 (CST) Subject: [SciPy-dev] Building newscipy against non-atlas blas/lapack libraries In-Reply-To: <436F2475.1060708@mecha.uni-stuttgart.de> References: <436F1310.100@mecha.uni-stuttgart.de> <436F2475.1060708@mecha.uni-stuttgart.de> Message-ID: On Mon, 7 Nov 2005, Nils Wagner wrote: > I have also build a BLAS library from scractch. > > /home/nwagner> echo $BLAS > /var/tmp/src/lapack/LAPACK/blas_LINUX.a > /home/nwagner> echo $LAPACK > /var/tmp/src/lapack/LAPACK/lapack_LINUX.a > > > If I import scipy now > >>>> import scipy > Traceback (most recent call last): > File "", line 1, in ? > File "/usr/local/lib/python2.4/site-packages/scipy/__init__.py", line 32, in ? > from scipy.basic.fft import fft, ifft > File "/usr/local/lib/python2.4/site-packages/scipy/basic/fft.py", line 22, in ? > from fft_lite import * > File "/usr/local/lib/python2.4/site-packages/scipy/basic/fft_lite.py", line 23, in ? 
> import scipy.lib.fftpack_lite as fftpack > File "/usr/local/lib/python2.4/site-packages/scipy/lib/__init__.py", line 9, in ? > import blas > File "/usr/local/lib/python2.4/site-packages/scipy/lib/blas/__init__.py", line 9, in ? > import fblas > ImportError: /usr/local/lib/python2.4/site-packages/scipy/lib/blas/fblas.so: undefined symbol: srotmg_ > > How can I fix this problem ? Don't use the blas sources shipped with the LAPACK libraries. They are incomplete. Just follow the instructions given in the link I gave. Pearu From nwagner at mecha.uni-stuttgart.de Mon Nov 7 05:09:13 2005 From: nwagner at mecha.uni-stuttgart.de (Nils Wagner) Date: Mon, 07 Nov 2005 11:09:13 +0100 Subject: [SciPy-dev] libblas.so.3: cannot open shared object file: No such file or directory Message-ID: <436F27C9.8070507@mecha.uni-stuttgart.de> Hi Pearu, I am still trying to build newscipy without ATLAS... svn/newscipy> /usr/local/bin/python setup.py build Traceback (most recent call last): File "setup.py", line 42, in ? setup_package() File "setup.py", line 7, in setup_package from scipy.distutils.core import setup File "/usr/local/lib/python2.4/site-packages/scipy/__init__.py", line 32, in ? from scipy.basic.fft import fft, ifft File "/usr/local/lib/python2.4/site-packages/scipy/basic/fft.py", line 22, in ? from fft_lite import * File "/usr/local/lib/python2.4/site-packages/scipy/basic/fft_lite.py", line 23, in ? import scipy.lib.fftpack_lite as fftpack File "/usr/local/lib/python2.4/site-packages/scipy/lib/__init__.py", line 9, in ? import blas File "/usr/local/lib/python2.4/site-packages/scipy/lib/blas/__init__.py", line 9, in ? import fblas ImportError: libblas.so.3: cannot open shared object file: No such file or directory How can I build libblas.so from libblas.a ?
Nils From pearu at scipy.org Mon Nov 7 04:29:07 2005 From: pearu at scipy.org (Pearu Peterson) Date: Mon, 7 Nov 2005 03:29:07 -0600 (CST) Subject: [SciPy-dev] libblas.so.3: cannot open shared object file: No such file or directory In-Reply-To: <436F27C9.8070507@mecha.uni-stuttgart.de> References: <436F27C9.8070507@mecha.uni-stuttgart.de> Message-ID: On Mon, 7 Nov 2005, Nils Wagner wrote: > I still try to build newscipy without ATLAS... > > svn/newscipy> /usr/local/bin/python setup.py build > Traceback (most recent call last): > File "setup.py", line 42, in ? > setup_package() > File "setup.py", line 7, in setup_package > from scipy.distutils.core import setup > File "/usr/local/lib/python2.4/site-packages/scipy/__init__.py", line > 32, in ? > from scipy.basic.fft import fft, ifft > File "/usr/local/lib/python2.4/site-packages/scipy/basic/fft.py", line > 22, in ? > from fft_lite import * > File "/usr/local/lib/python2.4/site-packages/scipy/basic/fft_lite.py", > line 23, in ? > import scipy.lib.fftpack_lite as fftpack > File "/usr/local/lib/python2.4/site-packages/scipy/lib/__init__.py", > line 9, in ? > import blas > File > "/usr/local/lib/python2.4/site-packages/scipy/lib/blas/__init__.py", > line 9, in ? > import fblas > ImportError: libblas.so.3: cannot open shared object file: No such file > or directory > > How can I build libblas.so from libblas.a ? You don't need libblas.so. You obviously have failed to follow the instructions from http://www.scipy.org/documentation/buildatlas4scipy.txt about building blas and lapack libraries and setting proper environment variables so that the scipy build scripts can find them. Remove everything that you have built or installed and try again.
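For reference, the complete netlib BLAS (which does contain srotmg) can be built from source along these lines. This is only a sketch; the compiler, flags, and directory name are assumptions in the style of the build notes of the era, not copied from them:

```shell
# Build the full reference BLAS from netlib (includes srotmg etc.).
wget http://www.netlib.org/blas/blas.tgz
tar xzf blas.tgz                        # unpacks the Fortran sources
cd BLAS                                 # directory name may differ
g77 -fno-second-underscore -O2 -c *.f   # compile every routine
ar r libfblas.a *.o
ranlib libfblas.a
export BLAS=$PWD/libfblas.a             # point the scipy build at the archive
```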
Pearu From nwagner at mecha.uni-stuttgart.de Mon Nov 7 06:49:25 2005 From: nwagner at mecha.uni-stuttgart.de (Nils Wagner) Date: Mon, 07 Nov 2005 12:49:25 +0100 Subject: [SciPy-dev] libblas.so.3: cannot open shared object file: No such file or directory In-Reply-To: References: <436F27C9.8070507@mecha.uni-stuttgart.de> Message-ID: <436F3F45.1050503@mecha.uni-stuttgart.de> Pearu Peterson wrote: >On Mon, 7 Nov 2005, Nils Wagner wrote: > > >>I still try to build newscipy without ATLAS... >> >>svn/newscipy> /usr/local/bin/python setup.py build >>Traceback (most recent call last): >> File "setup.py", line 42, in ? >> setup_package() >> File "setup.py", line 7, in setup_package >> from scipy.distutils.core import setup >> File "/usr/local/lib/python2.4/site-packages/scipy/__init__.py", line >>32, in ? >> from scipy.basic.fft import fft, ifft >> File "/usr/local/lib/python2.4/site-packages/scipy/basic/fft.py", line >>22, in ? >> from fft_lite import * >> File "/usr/local/lib/python2.4/site-packages/scipy/basic/fft_lite.py", >>line 23, in ? >> import scipy.lib.fftpack_lite as fftpack >> File "/usr/local/lib/python2.4/site-packages/scipy/lib/__init__.py", >>line 9, in ? >> import blas >> File >>"/usr/local/lib/python2.4/site-packages/scipy/lib/blas/__init__.py", >>line 9, in ? >> import fblas >>ImportError: libblas.so.3: cannot open shared object file: No such file >>or directory >> >>How can I build libblas.so from libblas.a ? >> > >You don't need libblas.so. You obviously have failed to follow >instructions from > > http://www.scipy.org/documentation/buildatlas4scipy.txt > >about building blas and lapack libraries and setting proper environment >variables so that scipy build scripts can find them. > >Remove everything that you have build or installed and try again. 
> >Pearu > >_______________________________________________ >Scipy-dev mailing list >Scipy-dev at scipy.net >http://www.scipy.net/mailman/listinfo/scipy-dev > Hi Pearu, I have removed everything and installed it again. Now python -c 'import scipy;scipy.linalg.test()' results in ====================================================================== ERROR: check_nils (scipy.linalg.matfuncs.test_matfuncs.test_logm) ---------------------------------------------------------------------- Traceback (most recent call last): File "/usr/local/lib/python2.4/site-packages/scipy/linalg/tests/test_matfuncs.py", line 82, in check_nils logm((identity(7)*3.1+0j)-a) File "/usr/local/lib/python2.4/site-packages/scipy/linalg/matfuncs.py", line 232, in logm errest = norm(expm(F)-A,1) / norm(A,1) File "/usr/local/lib/python2.4/site-packages/scipy/linalg/basic.py", line 255, in norm x = asarray_chkfinite(x) File "/usr/local/lib/python2.4/site-packages/scipy/base/function_base.py", line 211, in asarray_chkfinite raise ValueError, "array must not contain infs or NaNs" ValueError: array must not contain infs or NaNs ====================================================================== FAIL: check_nils (scipy.linalg.matfuncs.test_matfuncs.test_signm) ---------------------------------------------------------------------- Traceback (most recent call last): File "/usr/local/lib/python2.4/site-packages/scipy/linalg/tests/test_matfuncs.py", line 44, in check_nils assert_array_almost_equal(r,cr) File "/usr/local/lib/python2.4/site-packages/scipy/test/testing.py", line 735, in assert_array_almost_equal assert cond,\ AssertionError: Arrays are not almost equal (mismatch 100.0%): Array 1: [[ 1.2104571e+01 -2.3056788e-02j -1.0034348e+00 -1.8445430e-01j 1.5239715e+01 +1.1528394e-02j 2.1808571e+01 -2.3... Array 2: [[ 11.9493333 -2.2453333 15.3173333 21.6533333 -2.2453333] [ -3.8426667 0.4986667 -4.5906667 -7.1866667 0.498... 
---------------------------------------------------------------------- Ran 230 tests in 0.665s FAILED (failures=1, errors=1) It looks like a nightmare. :'( Is valgrind an option to detect the reason for the failure/error ? Nils From steve at shrogers.com Mon Nov 7 08:20:37 2005 From: steve at shrogers.com (Steven H. Rogers) Date: Mon, 07 Nov 2005 06:20:37 -0700 Subject: [SciPy-dev] Arrays as truth values? In-Reply-To: <43679289.6000404@ee.byu.edu> References: <4365F457.4060202@ntc.zcu.cz> <436722B3.7030003@ntc.zcu.cz> <8e88429e2ddec19acf77cb091e0ebd34@stsci.edu> <43679289.6000404@ee.byu.edu> Message-ID: <436F54A5.8070909@shrogers.com> Concur that allowing arrays as truth values is the pythonic thing to do. Please do it as long as it's documented. Regards, Steve Travis Oliphant wrote: > ... > > I agree it can bite people, but I'm concerned that arrays not having a > truth value is an odd thing in Python --- you have to implement it by > raising an error when __nonzero__ is called right? > > All other objects in Python have truth values (including its built-in > array). My attitude is that its just better to teach people the proper > use of truth values, then to break form with the rest of Python. > > I'm would definitely like to hear more opinions though. It would be > very easy to simply raise and error when __nonzero__ is called. > > -Travis > > _______________________________________________ > Scipy-dev mailing list > Scipy-dev at scipy.net > http://www.scipy.net/mailman/listinfo/scipy-dev > > -- Steven H. Rogers, Ph.D., steve at shrogers.com Weblog: http://shrogers.com/weblog "He who refuses to do arithmetic is doomed to talk nonsense." 
-- John McCarthy From oliphant at ee.byu.edu Mon Nov 7 12:41:51 2005 From: oliphant at ee.byu.edu (Travis Oliphant) Date: Mon, 07 Nov 2005 10:41:51 -0700 Subject: [SciPy-dev] [Numpy-discussion] bus error in check_kurtosis In-Reply-To: References: <436F8AA1.7090806@ee.byu.edu> Message-ID: <436F91DF.6010405@ee.byu.edu> Rob Managan wrote: >> Rob Managan wrote: >> >>> With todays svn versions (newcore 1440, newscipy 1423) >>> >>> scipy.test(level=1,verbosity=2) dies with a bus error on check_kurtosis >>> >>> check_basic (scipy.stats.stats.test_stats.test_mean) ... ok >>> check_ravel (scipy.stats.stats.test_stats.test_mean) ... ok >>> check_basic (scipy.stats.stats.test_stats.test_median) ... ok >>> check_basic (scipy.stats.stats.test_stats.test_mode) ... ok >>> check_kurtosis (scipy.stats.stats.test_stats.test_moments)Bus error >>> >> Did you rebuild full scipy? (i.e. remove the build directory and >> build it again). At least you need to recompile all the extension >> modules. >> >> -Travis > > > I guess I thought a "pyhton setup.py install" would rebuild enough > things. Apparently not. Removing the build directory fixed that problem. The problem is that I altered the C-API in newcore (trimmed out some useless calls). This should happen rarely (especially once a release is made), but when it does, the table of function pointers is altered (the C-API function names used in scipy extension modules actually dereference this table). Thus, old extension-module binaries that are not recompiled will be pointing to the wrong function, and a fatal error is very likely (wrong number of arguments passed, for example). Sorry for the trouble.
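The failure mode described here can be modelled in a few lines of Python; this is a toy sketch with invented names, not the real scipy C-API:

```python
# Toy model: the core exports its C-API as an ordered table of function
# pointers, and each extension bakes in the table index of every function
# it uses when it is compiled.  Removing an entry shifts all later slots,
# so a stale index calls the wrong function (or falls off the table).
def add_one(x): return x + 1
def times_two(x): return x * 2

api_v1 = [add_one, times_two]   # old newcore layout
slot = 1                        # extension built against v1: "times_two is slot 1"
assert api_v1[slot](3) == 6     # old binary + old core: fine

api_v2 = [times_two]            # newcore after the cluttering entry was removed
assert api_v2[0](3) == 6        # a recompiled extension picks up the new slot
try:
    api_v2[slot](3)             # a stale binary still uses slot 1
except IndexError:
    pass                        # in C this is a wild jump, hence the bus error
```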
-Travis From oliphant at ee.byu.edu Mon Nov 7 14:09:16 2005 From: oliphant at ee.byu.edu (Travis Oliphant) Date: Mon, 07 Nov 2005 12:09:16 -0700 Subject: [SciPy-dev] Numeric 24.1 tar ball and CVS were not consistent Message-ID: <436FA65C.3010202@ee.byu.edu> I just updated the CVS of Numeric with changes that went into the Numeric 24.1 release but were (unfortunately) not committed to CVS. These changes are likely responsible for the test failures that people have experienced when testing scipy with CVS versions of Numeric. Sorry about that --- too many things to think about apparently... -Travis From strawman at astraw.com Mon Nov 7 15:09:50 2005 From: strawman at astraw.com (Andrew Straw) Date: Mon, 07 Nov 2005 12:09:50 -0800 Subject: [SciPy-dev] [Numpy-discussion] Numeric 24.1 tar ball and CVS were not consistent In-Reply-To: <436FA65C.3010202@ee.byu.edu> References: <436FA65C.3010202@ee.byu.edu> Message-ID: <436FB48E.1030705@astraw.com> Hi Travis, The bug I reported[1] was originally detected with the Numeric 24.1 tar ball, but then I "upgraded" to CVS just to be sure that it hadn't been fixed. [1]: http://www.scipy.net/pipermail/scipy-dev/2005-November/003883.html So, I don't think the issue has been fixed. Let me know if you need more detailed logs, test output, whatever. Cheers! Andrew Travis Oliphant wrote: > > I just updated the CVS of Numeric with changes that went into the > Numeric 24.1 release but were (unfortunately) not committed to CVS. > These changes are likely responsible for the test failures that people > have experienced when testing scipy with CVS versions of Numeric. > > Sorry about that --- too many things to think about apparently... > > -Travis > > > > > ------------------------------------------------------- > SF.Net email is sponsored by: > Tame your development challenges with Apache's Geronimo App Server. > Download > it for free - -and be entered to win a 42" plasma tv or your very own > Sony(tm)PSP.
Click here to play: http://sourceforge.net/geronimo.php > _______________________________________________ > Numpy-discussion mailing list > Numpy-discussion at lists.sourceforge.net > https://lists.sourceforge.net/lists/listinfo/numpy-discussion From perry at stsci.edu Mon Nov 7 16:32:19 2005 From: perry at stsci.edu (Perry Greenfield) Date: Mon, 7 Nov 2005 16:32:19 -0500 Subject: [SciPy-dev] Arrays as truth values? In-Reply-To: <436F54A5.8070909@shrogers.com> References: <4365F457.4060202@ntc.zcu.cz> <436722B3.7030003@ntc.zcu.cz> <8e88429e2ddec19acf77cb091e0ebd34@stsci.edu> <43679289.6000404@ee.byu.edu> <436F54A5.8070909@shrogers.com> Message-ID: <96f3afab6f31d86aaf8906ce07f8801b@stsci.edu> On Nov 7, 2005, at 8:20 AM, Steven H. Rogers wrote: > Concur that allowing arrays as truth values is the pythonic thing to > do. > Please do it as long as it's documented. > > Regards, > Steve > > Travis Oliphant wrote: >> ... >> >> I agree it can bite people, but I'm concerned that arrays not having a >> truth value is an odd thing in Python --- you have to implement it by >> raising an error when __nonzero__ is called right? >> >> All other objects in Python have truth values (including its built-in >> array). My attitude is that its just better to teach people the >> proper >> use of truth values, then to break form with the rest of Python. >> >> I'm would definitely like to hear more opinions though. It would be >> very easy to simply raise and error when __nonzero__ is called. >> >> -Travis I guess I'd like to challenge the pythonicity of always having truth values. The situation here may be unique (so far, I'm not aware of any other similar cases). When Python allowed rich comparisons, it meant such comparisons didn't have to return simple boolean values. It was implemented almost entirely to satisfy Numeric users. 
This led to many users believing that the results of such comparisons could now be used with "and" and "or" since, as far as they were concerned, the results were booleans (arrays of them, of course). My argument is that because of this mismatch we can be *sure* that many users will try to use arrays this way, regardless of how many warnings you put in the documentation. And the vast majority of the time, they will not get what they intended. And for many of these cases, the fact that there was a mistake may not surface right away (they are going to get as a result an array, usually of the right shape, if not the right type). For me this is a very big drawback. Let's look at the other side. Is it obvious which arrays are false? The pythonic thing would be to make them false when they are empty. And sure, they can be. But that is a pretty rare case of usage. Who is going to use that case very often? Besides, with rank-0 values, people would think the pythonic thing is to treat them as false, but they aren't empty. But some might think they should be false if all values are 0 (the current Numeric behavior). This case would at least be used practically, but it is at odds with how lists and other sequences behave. Essentially, my argument is that this case presents many likely surprises for users, and when there isn't one obvious way to do it, it shouldn't be done. I'd argue that not allowing arrays as truth values is more Pythonic. (Existing arrays don't support rich comparisons, and they are false only when empty, as would be expected.) Does anyone else have examples that use rich comparisons to produce sequences in other libraries that are treated as truth values? Perry From oliphant at ee.byu.edu Mon Nov 7 16:54:44 2005 From: oliphant at ee.byu.edu (Travis Oliphant) Date: Mon, 07 Nov 2005 14:54:44 -0700 Subject: [SciPy-dev] Arrays as truth values?
In-Reply-To: <96f3afab6f31d86aaf8906ce07f8801b@stsci.edu> References: <4365F457.4060202@ntc.zcu.cz> <436722B3.7030003@ntc.zcu.cz> <8e88429e2ddec19acf77cb091e0ebd34@stsci.edu> <43679289.6000404@ee.byu.edu> <436F54A5.8070909@shrogers.com> <96f3afab6f31d86aaf8906ce07f8801b@stsci.edu> Message-ID: <436FCD24.4030305@ee.byu.edu> Perry Greenfield wrote: >Essentially, my argument is that this case presents many likely >surprises for users and when there isn't one obvious way to do it, it >shouldn't be done. I'd argue that not allowing arrays as truth values >it is more Pythonic. (Existing arrays don't support rich comparisons, >and they are false only when empty, as would be expected). > > Right now, I'm leaning towards raising an error. A big part of the reason is the idea that "explicit" is better than "implicit." I'm persuaded by Perry's argument that a casual user is going to think that "and" and "&" have the same behavior for arrays. The experienced user is going to understand the difference, anyway. So ultimately, I don't really see an impressive use case for returning a truth value for arrays, but I do see a "new-user" issue if an error is not raised. So, right now in SVN, arrays as truth values raise errors unless the array contains only one element (in which case it is unambiguous). -Travis From schofield at ftw.at Mon Nov 7 18:16:33 2005 From: schofield at ftw.at (Ed Schofield) Date: Tue, 8 Nov 2005 00:16:33 +0100 (CET) Subject: [SciPy-dev] Infinite loop in machar.py Message-ID: Hi Pearu, I'm getting an infinite loop in machar.py when running newcore's long double unit test: check_singleton (scipy.base.getlimits.test_getlimits.test_longdouble) on a G4 PPC. I think the problem is with the "while 1" loop around line 123 under the comment "Determine negep and epsneg" and that the break condition is never true. Compilation and all the level-1 tests are fine. I'm using OS X 10.4 with the default gcc 4.0.0 and AltiVec libraries.
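For context, the negep/epsneg step in question performs a MACHAR-style halving search. A minimal sketch of the idea, using ordinary Python doubles (where it terminates); this is not the actual machar.py code, whose loop is more general and relies on the break condition that reportedly never fires on the 128-bit long double:

```python
# Find epsneg: the smallest x of the form 2**-n with 1.0 - x != 1.0.
one = 1.0
epsneg = 1.0
while one - epsneg / 2.0 != one:
    epsneg /= 2.0
```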
One of the Apple docs (http://images.apple.com/macosx/pdf/MacOSX_UNIX_TB.pdf) states: "In Tiger [10.4], long double always defaults to 16 bytes (128-bit head-tail), compared with 8 bytes in Mac OS X Panther [10.3] and earlier versions," and that this is true for both 32-bit and 64-bit processors. Can I do anything else to shed more light on this problem? Cheers, Ed From rkern at ucsd.edu Mon Nov 7 19:10:25 2005 From: rkern at ucsd.edu (Robert Kern) Date: Mon, 07 Nov 2005 16:10:25 -0800 Subject: [SciPy-dev] Infinite loop in machar.py In-Reply-To: References: Message-ID: <436FECF1.2080405@ucsd.edu> Ed Schofield wrote: > I'm using OS X 10.4 with the default gcc 4.0.0 and AltiVec libraries. > > One of the Apple docs > (http://images.apple.com/macosx/pdf/MacOSX_UNIX_TB.pdf) states: > > "In Tiger [10.4], long double always defaults to 16 bytes (128-bit > head-tail), compared with 8 bytes in Mac OS X Panther [10.3] and earlier > versions," > > and that this is true for both 32-bit and 64-bit processors. > > Can I do anything else to shed more light on this problem? I can't recommend using gcc 4.0 at this time. I've never had much luck with it on Tiger for scipy or anything else, really. With gcc 3.3 on Tiger on a G4, sizeof(long double) == 8, and test_longdouble passes. You can switch to gcc 3.3 with the following command: $ sudo gcc_select 3.3 To change back: $ sudo gcc_select 4.0 -- Robert Kern rkern at ucsd.edu "In the fields of hell where the grass grows high Are the graves of dreams allowed to die." -- Richard Harter From steve at shrogers.com Mon Nov 7 22:01:49 2005 From: steve at shrogers.com (Steven H. Rogers) Date: Mon, 07 Nov 2005 20:01:49 -0700 Subject: [SciPy-dev] Arrays as truth values?
In-Reply-To: <436FCD24.4030305@ee.byu.edu> References: <4365F457.4060202@ntc.zcu.cz> <436722B3.7030003@ntc.zcu.cz> <8e88429e2ddec19acf77cb091e0ebd34@stsci.edu> <43679289.6000404@ee.byu.edu> <436F54A5.8070909@shrogers.com> <96f3afab6f31d86aaf8906ce07f8801b@stsci.edu> <436FCD24.4030305@ee.byu.edu> Message-ID: <4370151D.3090609@shrogers.com> OK. I can't think of a really good use case for using an array as a truth value. I would argue though, that it would make sense for an array of zeros to be False and an array with any non-zero values to be True. Travis Oliphant wrote: > Perry Greenfield wrote: > > >>Essentially, my argument is that this case presents many likely >>surprises for users and when there isn't one obvious way to do it, it >>shouldn't be done. I'd argue that not allowing arrays as truth values >>it is more Pythonic. (Existing arrays don't support rich comparisons, >>and they are false only when empty, as would be expected). >> >> > > Right now, I'm leaning towards raising an error. A big explanation for > that is the idea that "explicit" is better than "implicit." I'm > persuaded by Perry's argument that a casual user is going to think that > "and" and "&" have the same behavior for arrays. The experienced user > is going to understand the difference, anyway. > > So ultimately, I don't really see an impressive use case for returning a > truth value for arrays, but I do see a "new-user" issue if an error is > not raised. > > So, right now in SVN, arrays as truth values raise erros unless the > array contains only one element (in which case it is unambiguous). > > -Travis > > _______________________________________________ > Scipy-dev mailing list > Scipy-dev at scipy.net > http://www.scipy.net/mailman/listinfo/scipy-dev > > -- Steven H. Rogers, Ph.D., steve at shrogers.com Weblog: http://shrogers.com/weblog "He who refuses to do arithmetic is doomed to talk nonsense." 
-- John McCarthy From oliphant at ee.byu.edu Mon Nov 7 22:27:55 2005 From: oliphant at ee.byu.edu (Travis Oliphant) Date: Mon, 07 Nov 2005 20:27:55 -0700 Subject: [SciPy-dev] Arrays as truth values? In-Reply-To: <4370151D.3090609@shrogers.com> References: <4365F457.4060202@ntc.zcu.cz> <436722B3.7030003@ntc.zcu.cz> <8e88429e2ddec19acf77cb091e0ebd34@stsci.edu> <43679289.6000404@ee.byu.edu> <436F54A5.8070909@shrogers.com> <96f3afab6f31d86aaf8906ce07f8801b@stsci.edu> <436FCD24.4030305@ee.byu.edu> <4370151D.3090609@shrogers.com> Message-ID: <43701B3B.1010204@ee.byu.edu> Steven H. Rogers wrote: >OK. I can't think of a really good use case for using an array as a truth >value. I would argue though, that it would make sense for an array of zeros >to be False and an array with any non-zero values to be True. > > I agree this makes sense. That's why it used to be the default behavior. But you can already get that behavior with any(a). There will be many though, I'm afraid, who think b or a ought to return element-wise like b | a does. This is not possible in Python. Raising an error will at least alert them to the problem which might otherwise give them misleading results. -Travis From aisaac at american.edu Tue Nov 8 07:31:27 2005 From: aisaac at american.edu (Alan G Isaac) Date: Tue, 8 Nov 2005 07:31:27 -0500 Subject: [SciPy-dev] Arrays as truth values? In-Reply-To: <43701B3B.1010204@ee.byu.edu> References: <4365F457.4060202@ntc.zcu.cz><436722B3.7030003@ntc.zcu.cz> <8e88429e2ddec19acf77cb091e0ebd34@stsci.edu><43679289.6000404@ee.byu.edu> <436F54A5.8070909@shrogers.com><96f3afab6f31d86aaf8906ce07f8801b@stsci.edu> <436FCD24.4030305@ee.byu.edu><4370151D.3090609@shrogers.com><43701B3B.1010204@ee.byu.edu> Message-ID: On Mon, 07 Nov 2005, Travis Oliphant apparently wrote: > There will be many though, I'm afraid, who think b or > a ought to return element-wise like b | a does. This is > not possible in Python. 
Raising an error will at least > alert them to the problem which might otherwise give them > misleading results. User perspective: I find this persuasive, even though I am unhappy that it means a standard Python behavior will disappear. What's more, if the SciPy community should ultimately change its mind, it looks easy to back out of raising an error while hard to back out of accepting the standard behavior. So it seems the right way to go now as long as the ultimate outcome is in doubt. Cheers, Alan Isaac From schofield at ftw.at Tue Nov 8 09:21:08 2005 From: schofield at ftw.at (Ed Schofield) Date: Tue, 8 Nov 2005 15:21:08 +0100 (CET) Subject: [SciPy-dev] Arrays as truth values? In-Reply-To: <43701B3B.1010204@ee.byu.edu> References: <4365F457.4060202@ntc.zcu.cz> <436722B3.7030003@ntc.zcu.cz><43679289.6000404@ee.byu.edu> <96f3afab6f31d86aaf8906ce07f8801b@stsci.edu> <436FCD24.4030305@ee.byu.edu> <4370151D.3090609@shrogers.com> <43701B3B.1010204@ee.byu.edu> Message-ID: On Mon, 7 Nov 2005, Travis Oliphant wrote: > Steven H. Rogers wrote: > > >OK. I can't think of a really good use case for using an array as a truth > >value. I would argue though, that it would make sense for an array of zeros > >to be False and an array with any non-zero values to be True. > > > > > I agree this makes sense. That's why it used to be the default > behavior. But you can already get that behavior with any(a). > > There will be many though, I'm afraid, who think b or a ought to return > element-wise like b | a does. This is not possible in Python. Raising > an error will at least alert them to the problem which might otherwise > give them misleading results. I agree with this reasoning, but I'd like to illustrate a drawback to the new behaviour: >>> a1 = array([1,2,3]) >>> a2 = a1 >>> if a1 == a2: ... print "equal" ... Traceback (most recent call last): File "", line 1, in ? ValueError: The truth value of an array with more than one element is ambiguous. 
Use a.any() or a.all() Using == and != to compare arrays was simple and (I think) unambiguous before. It would be nice to allow these comparisons again, while raising an exception for the general case. Perhaps we could modify arrays' __eq__ and __ne__ methods to call .any() and .all() for us, returning a single truth value, rather than returning an array of truth values as it does currently? We still have scipy.equal() and scipy.not_equal() for elementwise comparisons. This might actually cause less code breakage (like for me ;), since using == and != in conditional expressions would work as before. It would also have the bonus of bringing SciPy's behaviour closer to that of Python's builtin objects and existing 1-d array module: >>> l1 = [1,2,3] >>> l2 = [1,2,3] >>> l1 == l2 True >>> import array >>> b1 = array.array('d',[1,2,3]) >>> b2 = b1 >>> b1 == b2 True -- Ed From rkern at ucsd.edu Tue Nov 8 09:34:30 2005 From: rkern at ucsd.edu (Robert Kern) Date: Tue, 08 Nov 2005 06:34:30 -0800 Subject: [SciPy-dev] Arrays as truth values? In-Reply-To: References: <4365F457.4060202@ntc.zcu.cz> <436722B3.7030003@ntc.zcu.cz><43679289.6000404@ee.byu.edu> <96f3afab6f31d86aaf8906ce07f8801b@stsci.edu> <436FCD24.4030305@ee.byu.edu> <4370151D.3090609@shrogers.com> <43701B3B.1010204@ee.byu.edu> Message-ID: <4370B776.8020509@ucsd.edu> Ed Schofield wrote: > I agree with this reasoning, but I'd like to illustrate a drawback to the > new behaviour: > >>>>a1 = array([1,2,3]) >>>>a2 = a1 >>>>if a1 == a2: > > ... print "equal" > ... > Traceback (most recent call last): > File "", line 1, in ? > ValueError: The truth value of an array with more than one element is > ambiguous. Use a.any() or a.all() It's not a drawback. This is exactly the reason we're making the change. > Using == and != to compare arrays was simple and (I think) unambiguous > before. It would be nice to allow these comparisons again, while raising > an exception for the general case.
Perhaps we could modify arrays' __eq__ > and __neq__ methods to call .any() and .all() for us, returning a single > truth value, rather than returning an array of truth values as it does > currently? No. Rich comparisons were added to the language precisely *for Numeric* so we could return arrays instead of just True or False. We're not going to regress to the Python 2.0 era. In any case, you can't have those methods "call .any() and .all() for us;" there really is an ambiguity. We don't know beforehand which one you want called. And in the face of ambiguity, we're refusing the temptation to guess. > We still have scipy.equal() and scipy.not_equal() for > elementwise comparisons. > > This might actually cause less code breakage (like for me ;), since using > == and != in conditional expressions would work as before. I think you may need to reexamine your code that uses that construction. The old behavior was implicitly the same as any(a1 == a2) not all(a1 == a2). It's likely that you wanted the latter not the former. > It would also > have the bonus of bringing SciPy's behaviour closer to that of Python's > builtin objects and existing 1-d array module: Any API compatibility with the stdlib's array module is entirely coincidental. It's not a goal and never will be. -- Robert Kern rkern at ucsd.edu "In the fields of hell where the grass grows high Are the graves of dreams allowed to die." -- Richard Harter From nwagner at mecha.uni-stuttgart.de Tue Nov 8 09:35:06 2005 From: nwagner at mecha.uni-stuttgart.de (Nils Wagner) Date: Tue, 08 Nov 2005 15:35:06 +0100 Subject: [SciPy-dev] Multiple constraints in fmin_cobyla Message-ID: <4370B79A.2000005@mecha.uni-stuttgart.de> Hi all, How can I apply multiple constraints (e.g. all design variables x \in \mathds{R}^n should be positive) in fmin_cobyla ? x_opt=optimize.fmin_cobyla(func, x, cons, args=(), consargs=(),maxfun=1200) def cons(x): ????? 
Nils From rkern at ucsd.edu Tue Nov 8 09:40:48 2005 From: rkern at ucsd.edu (Robert Kern) Date: Tue, 08 Nov 2005 06:40:48 -0800 Subject: [SciPy-dev] Multiple constraints in fmin_cobyla In-Reply-To: <4370B79A.2000005@mecha.uni-stuttgart.de> References: <4370B79A.2000005@mecha.uni-stuttgart.de> Message-ID: <4370B8F0.7020200@ucsd.edu> Nils Wagner wrote: > Hi all, > > How can I apply multiple constraints (e.g. all design variables x \in > \mathds{R}^n should be positive) in fmin_cobyla ? > > x_opt=optimize.fmin_cobyla(func, x, cons, args=(), consargs=(),maxfun=1200) > > def cons(x): > > ????? The documentation is pretty clear: cons -- a list of functions that all must be >=0 (a single function if only 1 constraint) -- Robert Kern rkern at ucsd.edu "In the fields of hell where the grass grows high Are the graves of dreams allowed to die." -- Richard Harter From nwagner at mecha.uni-stuttgart.de Tue Nov 8 10:04:24 2005 From: nwagner at mecha.uni-stuttgart.de (Nils Wagner) Date: Tue, 08 Nov 2005 16:04:24 +0100 Subject: [SciPy-dev] Multiple constraints in fmin_cobyla In-Reply-To: <4370B8F0.7020200@ucsd.edu> References: <4370B79A.2000005@mecha.uni-stuttgart.de> <4370B8F0.7020200@ucsd.edu> Message-ID: <4370BE78.9020908@mecha.uni-stuttgart.de> Robert Kern wrote: >Nils Wagner wrote: > >>Hi all, >> >>How can I apply multiple constraints (e.g. all design variables x \in >>\mathds{R}^n should be positive) in fmin_cobyla ? >> >>x_opt=optimize.fmin_cobyla(func, x, cons, args=(), consargs=(),maxfun=1200) >> >>def cons(x): >> >> ????? >> > >The documentation is pretty clear: > > cons -- a list of functions that all must be >=0 (a single function > if only 1 constraint) > > I am aware of the help function :-) Anyway, how do I define a l i s t of functions ? 
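As a sketch of one answer to that question (plain Python, independent of SciPy): such a list of per-component constraint functions can be generated in a loop rather than typed out, but lambdas created in a loop need the default-argument idiom (`i=i`) so that each closure captures its own index instead of sharing the loop variable:

```python
# Build N positivity constraints f_i(x) = x[i] without writing each by hand.
N = 5

# Pitfall: a bare "lambda x: x[i]" closes over the variable i itself,
# so after the loop every function reads the last index.
wrong = [lambda x: x[i] for i in range(N)]

# Fix: "i=i" evaluates i at definition time, giving each lambda its own copy.
right = [lambda x, i=i: x[i] for i in range(N)]

x = list(range(N))             # sample point with x[i] == i
print([f(x) for f in wrong])   # [4, 4, 4, 4, 4] -- all see the final i
print([f(x) for f in right])   # [0, 1, 2, 3, 4] -- one index each
```

The `right` list is the kind of object that can be passed as the `cons` argument of `fmin_cobyla`, which expects a sequence of functions.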
From rkern at ucsd.edu Tue Nov 8 10:10:31 2005 From: rkern at ucsd.edu (Robert Kern) Date: Tue, 08 Nov 2005 07:10:31 -0800 Subject: [SciPy-dev] Multiple constraints in fmin_cobyla In-Reply-To: <4370BE78.9020908@mecha.uni-stuttgart.de> References: <4370B79A.2000005@mecha.uni-stuttgart.de> <4370B8F0.7020200@ucsd.edu> <4370BE78.9020908@mecha.uni-stuttgart.de> Message-ID: <4370BFE7.6020001@ucsd.edu> Nils Wagner wrote: > Robert Kern wrote: > >>Nils Wagner wrote: >> >> >>>Hi all, >>> >>>How can I apply multiple constraints (e.g. all design variables x \in >>>\mathds{R}^n should be positive) in fmin_cobyla ? >>> >>>x_opt=optimize.fmin_cobyla(func, x, cons, args=(), consargs=(),maxfun=1200) >>> >>>def cons(x): >>> >>> ????? >>> >> >>The documentation is pretty clear: >> >> cons -- a list of functions that all must be >=0 (a single function >> if only 1 constraint) > > I am aware of the help function :-) > Anyway, how do I define a l i s t of functions ? It's a regular Python list which contains functions. I can't make it any clearer than that. This is pretty fundamental stuff.

def cons0(x):
    return x[0]

def cons1(x):
    return x[1]

x_opt = optimize.fmin_cobyla(func, x, [cons0, cons1])

-- Robert Kern rkern at ucsd.edu "In the fields of hell where the grass grows high Are the graves of dreams allowed to die." -- Richard Harter From nwagner at mecha.uni-stuttgart.de Tue Nov 8 10:14:40 2005 From: nwagner at mecha.uni-stuttgart.de (Nils Wagner) Date: Tue, 08 Nov 2005 16:14:40 +0100 Subject: [SciPy-dev] Multiple constraints in fmin_cobyla In-Reply-To: <4370BFE7.6020001@ucsd.edu> References: <4370B79A.2000005@mecha.uni-stuttgart.de> <4370B8F0.7020200@ucsd.edu> <4370BE78.9020908@mecha.uni-stuttgart.de> <4370BFE7.6020001@ucsd.edu> Message-ID: <4370C0E0.5090505@mecha.uni-stuttgart.de> Robert Kern wrote: >Nils Wagner wrote: > >>Robert Kern wrote: >> >> >>>Nils Wagner wrote: >>> >>> >>> >>>>Hi all, >>>> >>>>How can I apply multiple constraints (e.g.
all design variables x \in >>>>\mathds{R}^n should be positive) in fmin_cobyla ? >>>> >>>>x_opt=optimize.fmin_cobyla(func, x, cons, args=(), consargs=(),maxfun=1200) >>>> >>>>def cons(x): >>>> >>>> ????? >>>> >>>> >>>The documentation is pretty clear: >>> >>> cons -- a list of functions that all must be >=0 (a single function >>> if only 1 constraint) >>> >>I am aware of the help function :-) >>Anyway, how do I define a l i s t of functions ? >> > >It's a regular Python list which contains functions. I can't make it any >clearer than that. This is pretty fundamental stuff. > >def cons0(x): > return x[0] > >def cons1(x): > return x[1] > >x_opt = optimize.fmin_cobyla(func, x, [cons0, cons1]) > > Thank you for the note. Now assume that we have 10^3 constraints. Is there any better way than typing def cons0(x): return x[0] . . . def cons999(x): return x[999] Nils From rkern at ucsd.edu Tue Nov 8 10:24:37 2005 From: rkern at ucsd.edu (Robert Kern) Date: Tue, 08 Nov 2005 07:24:37 -0800 Subject: [SciPy-dev] Multiple constraints in fmin_cobyla In-Reply-To: <4370C0E0.5090505@mecha.uni-stuttgart.de> References: <4370B79A.2000005@mecha.uni-stuttgart.de> <4370B8F0.7020200@ucsd.edu> <4370BE78.9020908@mecha.uni-stuttgart.de> <4370BFE7.6020001@ucsd.edu> <4370C0E0.5090505@mecha.uni-stuttgart.de> Message-ID: <4370C335.6080700@ucsd.edu> Nils Wagner wrote: > Thank you for the note. > Now assume that we have 10^3 constraints. Is there any better way than > typing > def cons0(x): > return x[0] > . > . > . > def cons999(x): > return x[999] If the dimensionality of your problem is 1000, and you only have positivity constraints, then you probably shouldn't be using 1000 separate constraints. 
def cons(x):
    if (x <= 0.0).any():
        return -dot(x, x)
    else:
        return dot(x, x)

But if you must have separate constraints:

cons = []
for i in xrange(1000):
    cons.append(lambda x, i=i: x[i])

-- Robert Kern rkern at ucsd.edu "In the fields of hell where the grass grows high Are the graves of dreams allowed to die." -- Richard Harter From schofield at ftw.at Tue Nov 8 10:29:28 2005 From: schofield at ftw.at (Ed Schofield) Date: Tue, 8 Nov 2005 16:29:28 +0100 (CET) Subject: [SciPy-dev] Arrays as truth values? In-Reply-To: <4370B776.8020509@ucsd.edu> References: <4365F457.4060202@ntc.zcu.cz> <436722B3.7030003@ntc.zcu.cz><43679289.6000404@ee.byu.edu> <436FCD24.4030305@ee.byu.edu> <4370151D.3090609@shrogers.com> <43701B3B.1010204@ee.byu.edu> <4370B776.8020509@ucsd.edu> Message-ID: On Tue, 8 Nov 2005, Robert Kern wrote: > Ed Schofield wrote: > >>In any case, you can't have those methods "call .any() and .all() for >>us;" there really is an ambiguity. We don't know beforehand which one >>you want called. And in the face of ambiguity, we're refusing the >>temptation to guess. > > We've seen that there's an ambiguity in the case of logical operations on > arrays like "a and b", "a or b". But I don't see any ambiguity in the > case of == and !=. Can two arrays be considered 'equal' if any of their > elements differ? ;) You're right that my code that now raises an exception was silently wrong before. That's a definite step forward. > Any API compatibility with the stdlib's array module is entirely > coincidental. It's not a goal and never will be. Yes, it is a goal.
The recent conversion of typecodes for better compatibility with 'array' and 'struct' is an example. When there's no good argument to differ we might as well be consistent with the standard library. -- Ed From perry at stsci.edu Tue Nov 8 10:35:42 2005 From: perry at stsci.edu (Perry Greenfield) Date: Tue, 8 Nov 2005 10:35:42 -0500 Subject: [SciPy-dev] Arrays as truth values? In-Reply-To: References: <4365F457.4060202@ntc.zcu.cz> <436722B3.7030003@ntc.zcu.cz><43679289.6000404@ee.byu.edu> <96f3afab6f31d86aaf8906ce07f8801b@stsci.edu> <436FCD24.4030305@ee.byu.edu> <4370151D.3090609@shrogers.com> <43701B3B.1010204@ee.byu.edu> Message-ID: <3ab4ecdb87819f2bb8eb861a520ad7f9@stsci.edu> On Nov 8, 2005, at 9:21 AM, Ed Schofield wrote: > > I agree with this reasoning, but I'd like to illustrate a drawback to > the > new behaviour: > >>>> a1 = array([1,2,3]) >>>> a2 = a1 >>>> if a1 == a2: > ... print "equal" > ... What about: >>> a1 = array([[1,2,3],[1,2,3]]) >>> a2 = array([1,2,3]) >>> a1 == a2 These two arrays are not the same shape but because of broadcasting will show to be equal. Is this what you intended? Some might, some might not. Robert has already pointed out that lots of people want == to result in an array of booleans (most I'd argue) rather than a single boolean value. And if you wanted to use the current Numeric behavior, then >>> array([0,0]) == array([0,1]) will not do what you wish it since there is at least one equal element, it is treated as true. (again reiterating Robert's point.) Your example illustrates exactly why allowing this behavior is dangerous. Two different people looking at this may expect two different results. Perry From schofield at ftw.at Tue Nov 8 10:43:04 2005 From: schofield at ftw.at (Ed Schofield) Date: Tue, 8 Nov 2005 16:43:04 +0100 (CET) Subject: [SciPy-dev] Arrays as truth values? 
In-Reply-To: <3ab4ecdb87819f2bb8eb861a520ad7f9@stsci.edu> References: <4365F457.4060202@ntc.zcu.cz> <436722B3.7030003@ntc.zcu.cz><43679289.6000404@ee.byu.edu> <436FCD24.4030305@ee.byu.edu> <4370151D.3090609@shrogers.com> <43701B3B.1010204@ee.byu.edu> <3ab4ecdb87819f2bb8eb861a520ad7f9@stsci.edu> Message-ID: On Tue, 8 Nov 2005, Perry Greenfield wrote: > What about: > > >>> a1 = array([[1,2,3],[1,2,3]]) > >>> a2 = array([1,2,3]) > >>> a1 == a2 > > These two arrays are not the same shape but because of broadcasting > will show to be equal. Is this what you intended? Some might, some > might not. Ah, that's a good argument. Okay, I'm sold ;) -- Ed From rkern at ucsd.edu Tue Nov 8 10:46:14 2005 From: rkern at ucsd.edu (Robert Kern) Date: Tue, 08 Nov 2005 07:46:14 -0800 Subject: [SciPy-dev] Arrays as truth values? In-Reply-To: References: <4365F457.4060202@ntc.zcu.cz> <436722B3.7030003@ntc.zcu.cz><43679289.6000404@ee.byu.edu> <436FCD24.4030305@ee.byu.edu> <4370151D.3090609@shrogers.com> <43701B3B.1010204@ee.byu.edu> <4370B776.8020509@ucsd.edu> Message-ID: <4370C846.9090805@ucsd.edu> Ed Schofield wrote: > > On Tue, 8 Nov 2005, Robert Kern wrote: > >>In any case, you can't have those methods "call .any() and .all() for >>us;" there really is an ambiguity. We don't know beforehand which one >>you want called. And in the face of ambiguity, we're refusing the >>temptation to guess. > > We've seen that there's an ambiguity in the case of logical operations on > arrays like "a and b", "a or b". But I don't see any ambiguity in the > case of == and !=. Can two arrays be considered 'equal' if any of their > elements differ? ;) == as an operation doesn't test the equality of the arrays as a whole. It returns an array with the results of the == comparison element-by-element. We don't want it to return a single truth value. If you need more semantics on top of that, then use .any() or .all() or whatever else. We lobbied extensively for Python to allow ==, !=, <, >, etc. 
to be able to return non-Boolean values. We got that capability in Python 2.1. We're not going to change that decision now years after the fact. If you have a concern about code breakage, regressing now would break a huge amount of code, much more than disabling .__nonzero__(). > You're right that my code that now raises an exception was silently wrong > before. That's a definite step forward. I hoped that would have been dispositive. >>Any API compatibility with the stdlib's array module is entirely >>coincidental. It's not a goal and never will be. > > Yes, it is a goal. The recent conversion of typecodes for better > compatibility with 'array' and 'struct' is an example. When there's no > good argument to differ we might as well be consistent with the standard > library. Numeric exists because the API of stdlib's array module was inadequate for our purposes. We've rationalized the type characters primarily to match Python as a whole, not array specifically. Conventions about data are a different thing than object behavior. -- Robert Kern rkern at ucsd.edu "In the fields of hell where the grass grows high Are the graves of dreams allowed to die." -- Richard Harter From cookedm at physics.mcmaster.ca Tue Nov 8 12:10:26 2005 From: cookedm at physics.mcmaster.ca (David M. Cooke) Date: Tue, 08 Nov 2005 12:10:26 -0500 Subject: [SciPy-dev] Multiple constraints in fmin_cobyla In-Reply-To: <4370C0E0.5090505@mecha.uni-stuttgart.de> (Nils Wagner's message of "Tue, 08 Nov 2005 16:14:40 +0100") References: <4370B79A.2000005@mecha.uni-stuttgart.de> <4370B8F0.7020200@ucsd.edu> <4370BE78.9020908@mecha.uni-stuttgart.de> <4370BFE7.6020001@ucsd.edu> <4370C0E0.5090505@mecha.uni-stuttgart.de> Message-ID: Nils Wagner writes: > Robert Kern wrote: >>Nils Wagner wrote: >> >>>Robert Kern wrote: >>> >>> >>>>Nils Wagner wrote: >>>> >>>> >>>> >>>>>Hi all, >>>>> >>>>>How can I apply multiple constraints (e.g. all design variables x \in >>>>>\mathds{R}^n should be positive) in fmin_cobyla ? 
>>>>> >>>>>x_opt=optimize.fmin_cobyla(func, x, cons, args=(), consargs=(),maxfun=1200) >>>>> >>>>>def cons(x): >>>>> >>>>> ????? >>>>> >>>>> >>>>The documentation is pretty clear: >>>> >>>> cons -- a list of functions that all must be >=0 (a single function >>>> if only 1 constraint) >>>> >>>I am aware of the help function :-) >>>Anyway, how do I define a l i s t of functions ? >>> >> >>It's a regular Python list which contains functions. I can't make it any >>clearer than that. This is pretty fundamental stuff. >> >>def cons0(x): >> return x[0] >> >>def cons1(x): >> return x[1] >> >>x_opt = optimize.fmin_cobyla(func, x, [cons0, cons1]) >> >> > Thank you for the note. > Now assume that we have 10^3 constraints. Is there any better way than > typing > def cons0(x): > return x[0] > . > . > . > def cons999(x): > return x[999] I've gone and redone the code (revision 1426 in the newscipy branch) so that it requires a generic sequence instead: something that you can do len() of, and that you can iterate over [*]. So you could pass an instance of a class like this (untested):

class Constraint:
    def __init__(self, constraintList):
        self.constraintList = constraintList
    def __len__(self):
        return len(self.constraintList)
    def __getitem__(self, i):
        def c(x):
            # some parameterized constraint
            return self.constraintList[i] * x[i]**2
        return c

[*] yes, that means dictionaries can be passed. Don't do that: the sequence of constraints has to remain in the same order. -- |>|\/|< /--------------------------------------------------------------------------\ |David M. Cooke http://arbutus.physics.mcmaster.ca/dmc/ |cookedm at physics.mcmaster.ca From oliphant at ee.byu.edu Tue Nov 8 13:00:19 2005 From: oliphant at ee.byu.edu (Travis Oliphant) Date: Tue, 08 Nov 2005 11:00:19 -0700 Subject: [SciPy-dev] 64-bit testing needed Message-ID: <4370E7B3.3050205@ee.byu.edu> I just made some changes to the slice parsing code to allow numbers larger than a platform int.
This only affects 64-bit platforms where sizeof(intp) != sizeof(int). I need someone to test the new SVN to make sure I changed all the right places. The changed subroutines are in arrayobject.c parse_index parse_subindex slice_GetIndices slice_coerce_index I'm especially interested in warnings on compile. -Travis From nwagner at mecha.uni-stuttgart.de Tue Nov 8 13:30:07 2005 From: nwagner at mecha.uni-stuttgart.de (Nils Wagner) Date: Tue, 08 Nov 2005 19:30:07 +0100 Subject: [SciPy-dev] Multiple constraints in fmin_cobyla In-Reply-To: References: <4370B79A.2000005@mecha.uni-stuttgart.de> <4370B8F0.7020200@ucsd.edu> <4370BE78.9020908@mecha.uni-stuttgart.de> <4370BFE7.6020001@ucsd.edu> <4370C0E0.5090505@mecha.uni-stuttgart.de> Message-ID: On Tue, 08 Nov 2005 12:10:26 -0500 cookedm at physics.mcmaster.ca (David M. Cooke) wrote: > Nils Wagner writes: > >> Robert Kern wrote: >>>Nils Wagner wrote: >>> >>>>Robert Kern wrote: >>>> >>>> >>>>>Nils Wagner wrote: >>>>> >>>>> >>>>> >>>>>>Hi all, >>>>>> >>>>>>How can I apply multiple constraints (e.g. all design >>>>>>variables x \in >>>>>>\mathds{R}^n should be positive) in fmin_cobyla ? >>>>>> >>>>>>x_opt=optimize.fmin_cobyla(func, x, cons, args=(), >>>>>>consargs=(),maxfun=1200) >>>>>> >>>>>>def cons(x): >>>>>> >>>>>> ????? >>>>>> >>>>>> >>>>>The documentation is pretty clear: >>>>> >>>>> cons -- a list of functions that all must be >=0 (a >>>>>single function >>>>> if only 1 constraint) >>>>> >>>>I am aware of the help function :-) >>>>Anyway, how do I define a l i s t of functions ? >>>> >>> >>>It's a regular Python list which contains functions. I >>>can't make it any >>>clearer than that. This is pretty fundamental stuff. >>> >>>def cons0(x): >>> return x[0] >>> >>>def cons1(x): >>> return x[1] >>> >>>x_opt = optimize.fmin_cobyla(func, x, [cons0, cons1]) >>> >>> >> Thank you for the note. >> Now assume that we have 10^3 constraints. Is there any >>better way than >> typing >> def cons0(x): >> return x[0] >> . >> . >> . 
>> def cons999(x): >> return x[999] > > I've gone and redone the code (revision 1426 in the >newscipy branch) > so that it requires a generic sequence instead: >something that you can > do len() of, and that you can iterate over [*]. So you >could pass an > instance of a class like this (untested): > > class Contraint: > def __init__(self, constraintList): > self.constraintList = constraintList > def __len__(self): > return len(self.constraintList) > def __getitem__(self, i): > def c(x): > # some parameterized constraint > return self.constraintList[i] * x[i]**2 > return c > > [*] yes, that means dictionaries can be passed. Don't do >that: the > sequence of contraints has to remain in the same order. > > -- > |>|\/|< > /--------------------------------------------------------------------------\ > |David M. Cooke > http://arbutus.physics.mcmaster.ca/dmc/ > |cookedm at physics.mcmaster.ca > > _______________________________________________ > Scipy-dev mailing list > Scipy-dev at scipy.net > http://www.scipy.net/mailman/listinfo/scipy-dev ====================================================================== ERROR: limited-memory bound-constrained BFGS algorithm ---------------------------------------------------------------------- Traceback (most recent call last): File "/usr/local/lib/python2.4/site-packages/scipy/optimize/tests/test_optimize.py", line 114, in check_l_bfgs_b args=(), maxfun=self.maxiter) File "/usr/local/lib/python2.4/site-packages/scipy/optimize/lbfgsb.py", line 207, in fmin_l_bfgs_b return x, f[0], d ValueError: 0-d arrays can't be indexed. 
====================================================================== ERROR: limited-memory bound-constrained BFGS algorithm ---------------------------------------------------------------------- Traceback (most recent call last): File "/usr/local/lib/python2.4/site-packages/scipy/optimize/tests/test_optimize.py", line 114, in check_l_bfgs_b args=(), maxfun=self.maxiter) File "/usr/local/lib/python2.4/site-packages/scipy/optimize/lbfgsb.py", line 207, in fmin_l_bfgs_b return x, f[0], d ValueError: 0-d arrays can't be indexed. ---------------------------------------------------------------------- Ran 1328 tests in 6.467s FAILED (errors=2) From nwagner at mecha.uni-stuttgart.de Tue Nov 8 14:02:51 2005 From: nwagner at mecha.uni-stuttgart.de (Nils Wagner) Date: Tue, 08 Nov 2005 20:02:51 +0100 Subject: [SciPy-dev] ERROR: limited-memory bound-constrained BFGS algorithm Message-ID: ====================================================================== ERROR: limited-memory bound-constrained BFGS algorithm ---------------------------------------------------------------------- Traceback (most recent call last): File "/usr/local/lib/python2.4/site-packages/scipy/optimize/tests/test_optimize.py", line 114, in check_l_bfgs_b args=(), maxfun=self.maxiter) File "/usr/local/lib/python2.4/site-packages/scipy/optimize/lbfgsb.py", line 207, in fmin_l_bfgs_b return x, f[0], d ValueError: 0-d arrays can't be indexed. From schofield at ftw.at Tue Nov 8 14:06:17 2005 From: schofield at ftw.at (Ed Schofield) Date: Tue, 8 Nov 2005 20:06:17 +0100 (CET) Subject: [SciPy-dev] Infinite loop in machar.py In-Reply-To: References: Message-ID: > Hi Pearu, > > I'm getting an infinite loop in machar.py when running newcore's long > double unit test: > > check_singleton (scipy.base.getlimits.test_getlimits.test_longdouble) > > on a G4 PPC. 
Here's a little more detective work: >>> a = scipy.ones(3, 'g') >>> a array([ 0., 0., 0.], dtype=float128) >>> b = a*10 >>> b array([ 0., 0., 0.], dtype=float128) >>> a == b array([False, False, False], dtype=bool) >>> a == a array([True, True, True], dtype=bool) >>> 9*a - b == 0 array([False, False, False], dtype=bool) >>> 10*a - b == 0 array([True, True, True], dtype=bool) >>> 10*sum(a) - sum(b) == 0 True >>> 9*sum(a) - sum(b) == 0 False >>> a = arange(1, 11) >>> a.prod() == array(3628800, 'g') True >>> a.prod() == array(3628801, 'g') False >>> c = array(a, 'g') >>> c.prod() == 3628800 True So it appears that simple arithmetic operations on long doubles work partially, but don't display. Pearu, do you think there really is a bug with long doubles in gcc 4.0 on PPC, like Robert suggests? If so, I could add his workaround to the documentation. But better would be for us just to disable the float128 type for this architecture (G4) and compiler, since all other tests pass (well, all those one expects to pass ;) -- Ed From oliphant at ee.byu.edu Tue Nov 8 14:17:04 2005 From: oliphant at ee.byu.edu (Travis Oliphant) Date: Tue, 08 Nov 2005 12:17:04 -0700 Subject: [SciPy-dev] Infinite loop in machar.py In-Reply-To: References: Message-ID: <4370F9B0.3070600@ee.byu.edu> Ed Schofield wrote: > > >>Hi Pearu, >> >>I'm getting an infinite loop in machar.py when running newcore's long >>double unit test: >> >>check_singleton (scipy.base.getlimits.test_getlimits.test_longdouble) >> >>on a G4 PPC. >> >> > > >Here's a little more detective work: > > > >>>>a = scipy.ones(3, 'g') >>>>a >>>> >>>> >array([ 0., 0., 0.], dtype=float128) > > >>>>b = a*10 >>>>b >>>> >>>> >array([ 0., 0., 0.], dtype=float128) > > I had to write special printing code for long doubles. I noticed that this did not seem to work on the MAC. I think the printf is broken for long doubles with gcc 4.0. 
It would be possible to fix the printing code separately, if math is working -Travis From schofield at ftw.at Tue Nov 8 14:17:39 2005 From: schofield at ftw.at (Ed Schofield) Date: Tue, 8 Nov 2005 20:17:39 +0100 (CET) Subject: [SciPy-dev] ERROR: limited-memory bound-constrained BFGS algorithm In-Reply-To: References: Message-ID: On Tue, 8 Nov 2005, Nils Wagner wrote: > ====================================================================== > ERROR: limited-memory bound-constrained BFGS algorithm > ---------------------------------------------------------------------- > Traceback (most recent call last): > File > "/usr/local/lib/python2.4/site-packages/scipy/optimize/tests/test_optimize.py", > line 114, in check_l_bfgs_b > args=(), maxfun=self.maxiter) > File > "/usr/local/lib/python2.4/site-packages/scipy/optimize/lbfgsb.py", > line 207, in fmin_l_bfgs_b > return x, f[0], d > ValueError: 0-d arrays can't be indexed. Yes, I know this is broken. I added some new unit tests today, partly to highlight this problem. It hasn't ever worked with the new scipy core. I'm working on it :) -- Ed From cookedm at physics.mcmaster.ca Tue Nov 8 15:10:46 2005 From: cookedm at physics.mcmaster.ca (David M. Cooke) Date: Tue, 8 Nov 2005 15:10:46 -0500 Subject: [SciPy-dev] 64-bit testing needed In-Reply-To: <4370E7B3.3050205@ee.byu.edu> References: <4370E7B3.3050205@ee.byu.edu> Message-ID: <015FD147-CA13-4AA7-ACD4-598ED33AA42D@physics.mcmaster.ca> On Nov 8, 2005, at 13:00 , Travis Oliphant wrote: > I just made some changes to the slice parsing code to allow numbers > larger than a platform int. This only affects 64-bit platforms where > sizeof(intp) != sizeof(int). > > I need someone to test the new SVN to make sure I changed all the > right > places. The changed subroutines are in arrayobject.c > > parse_index > parse_subindex > slice_GetIndices > slice_coerce_index > > I'm especially interested in warnings on compile. 
gcc options: '-pthread -fno-strict-aliasing -DNDEBUG -g -O3 -Wall -Wstrict-prototypes -fPIC' compile options: '-Ibuild/src/scipy/base/src -Iscipy/base/include -Ibuild/src/scipy/base -Iscipy/base/src -I/usr/include/python2.4 -c' gcc: scipy/base/src/multiarraymodule.c In file included from scipy/base/src/multiarraymodule.c:44: scipy/base/src/arrayobject.c: In function 'parse_subindex': scipy/base/src/arrayobject.c:1368: warning: passing argument 4 of 'slice_GetIndices' from incompatible pointer type After fixing this (revision 1447; 'stop' before this warning was declared int, not intp), everything seems to work fine on an Athlon 64 running Linux. -- |>|\/|< /------------------------------------------------------------------\ |David M. Cooke http://arbutus.physics.mcmaster.ca/dmc/ |cookedm at physics.mcmaster.ca From Fernando.Perez at colorado.edu Tue Nov 8 15:14:51 2005 From: Fernando.Perez at colorado.edu (Fernando Perez) Date: Tue, 08 Nov 2005 13:14:51 -0700 Subject: [SciPy-dev] 64-bit testing needed In-Reply-To: <015FD147-CA13-4AA7-ACD4-598ED33AA42D@physics.mcmaster.ca> References: <4370E7B3.3050205@ee.byu.edu> <015FD147-CA13-4AA7-ACD4-598ED33AA42D@physics.mcmaster.ca> Message-ID: <4371073B.2050809@colorado.edu> David M. Cooke wrote: > On Nov 8, 2005, at 13:00 , Travis Oliphant wrote: > > >>I just made some changes to the slice parsing code to allow numbers >>larger than a platform int. This only affects 64-bit platforms where >>sizeof(intp) != sizeof(int). >> >>I need someone to test the new SVN to make sure I changed all the >>right >>places. The changed subroutines are in arrayobject.c >> >>parse_index >>parse_subindex >>slice_GetIndices >>slice_coerce_index >> >>I'm especially interested in warnings on compile. Since I'm not sure exactly what you are looking for, I'm including the build log for scipy_newcore with the Intel Itanium compiler. All tests pass with the 'spinner' running them 50 times.
Cheers, f -------------- next part -------------- A non-text attachment was scrubbed... Name: icc_build_scicore.log.gz Type: application/x-gzip Size: 3571 bytes Desc: not available URL: From oliphant at ee.byu.edu Tue Nov 8 18:18:49 2005 From: oliphant at ee.byu.edu (Travis Oliphant) Date: Tue, 08 Nov 2005 16:18:49 -0700 Subject: [SciPy-dev] [SciPy-user] random number sampling performance in newcore In-Reply-To: <723eb6930511080820x3f78d336uc33bd78982420986@mail.gmail.com> References: <723eb6930511080820x3f78d336uc33bd78982420986@mail.gmail.com> Message-ID: <43713259.9020006@ee.byu.edu> Chris Fonnesbeck wrote: >I was surprised to find that the random number sampler in newcore is >significantly slower than RandomArray in Numeric (at least for >binomial sampling). A quick comparison showed the >scipy.basic.random.binomial sampler to be over 8 times slower than >RandomArray.binomial. I was surprised, since the newcore stuff is >Pyrex-based (isnt it?). Am I the only one observing such differences, >or am I using the wrong method? If not, is the performance expected to >improve significantly. > > I see about 4x slower as well for this particular distribution. I'm not sure if this is due to the Pyrex interface (hand-wrapped extension modules can be faster), or to the use of a different random-number generator. But, I'm getting that the random number generator itself is about 5x faster. So, perhaps there is something going on with the binomial generator or interface. >>> t1 = Timer('a = RandomArray.random((500,))','import RandomArray') >>> t1.timeit(100) 0.010856151580810547 >>> t2 = Timer('a = scipy.random.random((500,))','import scipy') >>> t2.timeit(100) 0.0023119449615478516 From rng7 at cornell.edu Tue Nov 8 19:55:08 2005 From: rng7 at cornell.edu (Ryan Gutenkunst) Date: Tue, 8 Nov 2005 19:55:08 -0500 Subject: [SciPy-dev] Arrays as truth values? 
In-Reply-To: <3ab4ecdb87819f2bb8eb861a520ad7f9@stsci.edu> References: <4365F457.4060202@ntc.zcu.cz> <436722B3.7030003@ntc.zcu.cz><43679289.6000404@ee.byu.edu> <96f3afab6f31d86aaf8906ce07f8801b@stsci.edu> <436FCD24.4030305@ee.byu.edu> <4370151D.3090609@shrogers.com> <43701B3B.1010204@ee.byu.edu> <3ab4ecdb87819f2bb8eb861a520ad7f9@stsci.edu> Message-ID: <0e1b19648ccd04b1f42243b437860139@cornell.edu> On Nov 8, 2005, at 10:35 AM, Perry Greenfield wrote: > Robert has already pointed out that lots of people want == to result in > an array of booleans (most I'd argue) rather than a single boolean > value. I'm coming late to this discussion, but I'd like to mention that a similar issue with the old scipy burned me just today. Guido's PEP 8 Style Guide suggests: - For sequences, (strings, lists, tuples), use the fact that empty sequences are false, so "if not seq" or "if seq" is preferable to "if len(seq)" or "if not len(seq)". However, an old scipy array containing any number of zeros is 'False', as illustrated below. (I haven't tried this on new scipy.) >>> scipy.__version__ '0.3.3_303.4601' >>> if [0]: ... print 'hello' ... hello >>> if scipy.array([0, 0]): ... print 'hello' ... >>> It took me quite a while to track down a bug in my code caused by this behavior. I'd at least call this difference from other sequences a wart, if not a bug. Perhaps something to consider for new scipy... Cheers, Ryan -- Ryan Gutenkunst | Cornell Dept. of Physics | "It is not the mountain | we conquer but ourselves." Clark 535 / (607)255-6068 | -- Sir Edmund Hillary AIM: JepettoRNG | http://www.physics.cornell.edu/~rgutenkunst/ From oliphant at ee.byu.edu Tue Nov 8 20:05:06 2005 From: oliphant at ee.byu.edu (Travis Oliphant) Date: Tue, 08 Nov 2005 18:05:06 -0700 Subject: [SciPy-dev] Arrays as truth values? 
In-Reply-To: <0e1b19648ccd04b1f42243b437860139@cornell.edu> References: <4365F457.4060202@ntc.zcu.cz> <436722B3.7030003@ntc.zcu.cz><43679289.6000404@ee.byu.edu> <96f3afab6f31d86aaf8906ce07f8801b@stsci.edu> <436FCD24.4030305@ee.byu.edu> <4370151D.3090609@shrogers.com> <43701B3B.1010204@ee.byu.edu> <3ab4ecdb87819f2bb8eb861a520ad7f9@stsci.edu> <0e1b19648ccd04b1f42243b437860139@cornell.edu> Message-ID: <43714B42.8070505@ee.byu.edu> Ryan Gutenkunst wrote: >On Nov 8, 2005, at 10:35 AM, Perry Greenfield wrote: > > >>Robert has already pointed out that lots of people want == to result in >>an array of booleans (most I'd argue) rather than a single boolean >>value. >> >> > >I'm coming late to this discussion, but I'd like to mention that a >similar issue with the old scipy burned me just today. > >Guido's PEP 8 Style Guide suggests: > - For sequences, (strings, lists, tuples), use the fact that empty > sequences are false, so "if not seq" or "if seq" is preferable > to "if len(seq)" or "if not len(seq)". > >However, an old scipy array containing any number of zeros is 'False', >as illustrated below. (I haven't tried this on new scipy.) > > >>> scipy.__version__ >'0.3.3_303.4601' > >>> if [0]: >... print 'hello' >... >hello > >>> if scipy.array([0, 0]): >... print 'hello' >... > >>> > > So, do we want empty arrays to return false and not return an error? This is definitely a possibility. -Travis From jonathan.taylor at utoronto.ca Tue Nov 8 23:41:12 2005 From: jonathan.taylor at utoronto.ca (Jonathan Taylor) Date: Wed, 09 Nov 2005 04:41:12 +0000 Subject: [SciPy-dev] What code base to use? Message-ID: Hi, I am very pleased to see scipy consolidating the other numerical python modules. I am hoping to check out the current development. I am a little confused which subversion directory to take a look at? What are the differences between the current branches? Thanks for any help. Jon. 
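[Editor's note: for readers following the arrays-as-truth-values thread above, the ambiguity disappears once the reduction is spelled out explicitly. A minimal sketch, using the modern numpy API (the descendant of newcore's scipy.base); the method names here are from today's numpy, not necessarily from the 2005 code under discussion:]

```python
import numpy as np  # stand-in for 2005-era scipy.base

a = np.array([0, 0])

# Element-wise comparison yields a boolean array, not a single bool:
mask = (a == 0)          # array([ True,  True])

# Explicit reductions state the intent unambiguously:
assert not a.any()       # no element is nonzero
assert (a == 0).all()    # every element is zero

# A PEP 8-style emptiness test, made explicit for arrays:
assert a.size != 0       # a is non-empty even though all-zero
```

With these spellings it no longer matters whether `if array:` raises, tests emptiness, or tests all-nonzero.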
From rkern at ucsd.edu Tue Nov 8 23:52:49 2005 From: rkern at ucsd.edu (Robert Kern) Date: Tue, 08 Nov 2005 20:52:49 -0800 Subject: [SciPy-dev] What code base to use? In-Reply-To: References: Message-ID: <437180A1.60509@ucsd.edu> Jonathan Taylor wrote: > Hi, > > I am very pleased to see scipy consolidating the other numerical python > modules. I am hoping to check out the current development. I am a > little confused which subversion directory to take a look at? What are > the differences between the current branches? If you want to follow current development, then you need to track the following branches http://svn.scipy.org/svn/scipy_core/branches/newcore/ http://svn.scipy.org/svn/scipy/branches/newscipy/ Install them in that order. All of the other branches are carryovers from the old CVS. You can safely ignore them. Scipy dev's: what do you think about moving the current trunks into branches (or tags) and making the "new" branches the trunks ASAP? -- Robert Kern rkern at ucsd.edu "In the fields of hell where the grass grows high Are the graves of dreams allowed to die." -- Richard Harter From oliphant at ee.byu.edu Tue Nov 8 23:56:03 2005 From: oliphant at ee.byu.edu (Travis Oliphant) Date: Tue, 08 Nov 2005 21:56:03 -0700 Subject: [SciPy-dev] What code base to use? In-Reply-To: References: Message-ID: <43718163.8020200@ee.byu.edu> Jonathan Taylor wrote: >Hi, > >I am very pleased to see scipy consolidating the other numerical python >modules. I am hoping to check out the current development. I am a >little confused which subversion directory to take a look at? What are >the differences between the current branches? > > > The newcore branch of scipy_core is where development of the Numerical Python replacement is taking place. svn co http://svn.scipy.org/svn/scipy_core/branches/newcore The newscipy branch of scipy is where the version of scipy ported to work with the replacement to Numeric is being developed. 
svn co http://svn.scipy.org/svn/scipy/branches/newscipy Both branches should build. -Travis From jonathan.taylor at utoronto.ca Wed Nov 9 00:07:39 2005 From: jonathan.taylor at utoronto.ca (Jonathan Taylor) Date: Wed, 09 Nov 2005 05:07:39 +0000 Subject: [SciPy-dev] What code base to use? In-Reply-To: <43718163.8020200@ee.byu.edu> Message-ID: Thanks to you both. I just didn't realize that there was a scipy_core dir before. I found the scipy one and was wondering where the array code was :) Everything seems to be working fine. Cheers, J. On 11/9/2005, "Travis Oliphant" wrote: >Jonathan Taylor wrote: > >>Hi, >> >>I am very pleased to see scipy consolidating the other numerical python >>modules. I am hoping to check out the current development. I am a >>little confused which subversion directory to take a look at? What are >>the differences between the current branches? >> >> >> >The newcore branch of scipy_core is where development of the Numerical >Python replacement is taking place. > >svn co http://svn.scipy.org/svn/scipy_core/branches/newcore > >The newscipy branch of scipy is where the version of scipy ported to >work with the replacement to Numeric is being developed. > >svn co http://svn.scipy.org/svn/scipy/branches/newscipy > >Both branches should build. > > > >-Travis > >_______________________________________________ >Scipy-dev mailing list >Scipy-dev at scipy.net >http://www.scipy.net/mailman/listinfo/scipy-dev > From Fernando.Perez at colorado.edu Wed Nov 9 00:23:15 2005 From: Fernando.Perez at colorado.edu (Fernando Perez) Date: Tue, 08 Nov 2005 22:23:15 -0700 Subject: [SciPy-dev] What code base to use? In-Reply-To: <437180A1.60509@ucsd.edu> References: <437180A1.60509@ucsd.edu> Message-ID: <437187C3.2090904@colorado.edu> Robert Kern wrote: > Scipy dev's: what do you think about moving the current trunks into > branches (or tags) and making the "new" branches the trunks ASAP? +1 It would probably be a good idea to keep a static tarball of old-scipy around.
There's one from just before the cvs2svn transition at: http://ipython.scipy.org/tmp/scipy_cvs_2005-07-29.tgz I don't know if there were any commits after that (I think not, but I don't know for sure). Cheers, f From rkern at ucsd.edu Wed Nov 9 00:28:41 2005 From: rkern at ucsd.edu (Robert Kern) Date: Tue, 08 Nov 2005 21:28:41 -0800 Subject: [SciPy-dev] What code base to use? In-Reply-To: <437187C3.2090904@colorado.edu> References: <437180A1.60509@ucsd.edu> <437187C3.2090904@colorado.edu> Message-ID: <43718909.1090508@ucsd.edu> Fernando Perez wrote: > Robert Kern wrote: > >>Scipy dev's: what do you think about moving the current trunks into >>branches (or tags) and making the "new" branches the trunks ASAP? > > +1 > > It would probably be a good idea to keep a static tarball of old-scipy around. > There's one from just before the cvs2svn transition at: > > http://ipython.scipy.org/tmp/scipy_cvs_2005-07-29.tgz > > I don't know if there were any commits after that (I think not, but I don't > know for sure). I know I made one fix in the kde.py code. -- Robert Kern rkern at ucsd.edu "In the fields of hell where the grass grows high Are the graves of dreams allowed to die." -- Richard Harter From oliphant at ee.byu.edu Wed Nov 9 00:49:32 2005 From: oliphant at ee.byu.edu (Travis Oliphant) Date: Tue, 08 Nov 2005 22:49:32 -0700 Subject: [SciPy-dev] What code base to use? In-Reply-To: <437180A1.60509@ucsd.edu> References: <437180A1.60509@ucsd.edu> Message-ID: <43718DEC.30301@ee.byu.edu> Robert Kern wrote: >Scipy dev's: what do you think about moving the current trunks into >branches (or tags) and making the "new" branches the trunks ASAP? > > > I think that is a great idea. I'm not sure exactly what that means for everybody's tree and if they will have to check-out again. I know there a few people still working on some parts and haven't checked in what they have. I would suggest that people check in their code as soon as possible. 
-Travis From rkern at ucsd.edu Wed Nov 9 02:00:19 2005 From: rkern at ucsd.edu (Robert Kern) Date: Tue, 08 Nov 2005 23:00:19 -0800 Subject: [SciPy-dev] What code base to use? In-Reply-To: <43718DEC.30301@ee.byu.edu> References: <437180A1.60509@ucsd.edu> <43718DEC.30301@ee.byu.edu> Message-ID: <43719E83.6010903@ucsd.edu> Travis Oliphant wrote: > Robert Kern wrote: > >>Scipy dev's: what do you think about moving the current trunks into >>branches (or tags) and making the "new" branches the trunks ASAP? > > I think that is a great idea. > > I'm not sure exactly what that means for everybody's tree and if they > will have to check-out again. I know there a few people still working > on some parts and haven't checked in what they have. > > I would suggest that people check in their code as soon as possible. I'm pretty sure we'll have to check out again (if nothing else, that's probably the most obvious, least complicated method). We're going to have to pay the cost eventually, so we might as well do it now. I suggest laying down a firm timetable, e.g. declaring a code freeze beginning at such-and-such hour on Thursday. I'll also take the opportunity to plug SVK (http://svk.elixus.org/) while I'm at it. SVK is a form of distributed RCS that uses SVN's database underneath. It's also why my checkin messages look funny. I use SVK to mirror the main repository and make a local branch. I can do all of my work locally and make checkins to the local branch without needing to talk to the main repository. 
When you switch the directories around, I'll have to check out the new trunks as new mirrors $ svk mirror http://svn.scipy.org/svn/scipy_core/trunk //mirror/scipy_core $ svk sync --skipto HEAD //mirror/scipy_core $ svk cp //mirror/scipy_core //local/scipy_core but then I can switch my working copy while keeping all of my uncommitted changes $ cd ~/svk-projects/scipy_core $ svk switch //local/scipy_core One downside to SVK for working with scipy is that it doesn't use the .svn/ directories, so I have to forge __svn_version__.py files in order to build. Natch. -- Robert Kern rkern at ucsd.edu "In the fields of hell where the grass grows high Are the graves of dreams allowed to die." -- Richard Harter From arnd.baecker at web.de Wed Nov 9 02:27:36 2005 From: arnd.baecker at web.de (Arnd Baecker) Date: Wed, 9 Nov 2005 08:27:36 +0100 (CET) Subject: [SciPy-dev] 64-bit testing needed In-Reply-To: <4370E7B3.3050205@ee.byu.edu> References: <4370E7B3.3050205@ee.byu.edu> Message-ID: On Tue, 8 Nov 2005, Travis Oliphant wrote: > I just made some changes to the slice parsing code to allow numbers > larger than a platform int. This only affects 64-bit platforms where > sizeof(intp) != sizeof(int). > > I need someone to test the new SVN to make sure I changed all the right > places. The changed subroutines are in arrayobject.c > > parse_index > parse_subindex > slice_GetIndices > slice_coerce_index > > I'm especially interested in warnings on compile. There are no pointer ones - so this looks good.
scipy.__core_version__: '0.4.3.1450' scipy.__scipy_version__: '0.4.2_1429' scipy.test(10,10) gives ====================================================================== FAIL: limited-memory bound-constrained BFGS algorithm ---------------------------------------------------------------------- Traceback (most recent call last): File "/home/abaecker/BUILDS2/Build_68/inst_scipy_newcore/lib/python2.4/site-packages/scipy/optimize/tests/test_optimize.py", line 120, in check_l_bfgs_b assert err < 1e-6 AssertionError ====================================================================== FAIL: limited-memory bound-constrained BFGS algorithm ---------------------------------------------------------------------- Traceback (most recent call last): File "/home/abaecker/BUILDS2/Build_68/inst_scipy_newcore/lib/python2.4/site-packages/scipy/optimize/tests/test_optimize.py", line 120, in check_l_bfgs_b assert err < 1e-6 AssertionError ---------------------------------------------------------------------- Ran 1346 tests in 43.527s You asked for warnings - well there are many, in various parts. Somehow it would be nice to get rid of those, as they distract from real problems, but it would be a big task. Maybe new code should be required to be free of warnings... Anyway, here is an incomplete overview: - the usual _configtest.c ones ... - scipy/corelib/mtrand/mtrand.c: In function `__pyx_tp_new_6mtrand_RandomState': scipy/corelib/mtrand/mtrand.c:4908: warning: unused variable `p' - Constructing wrapper function "dvode"... 
warning: callstatement is defined without callprotoargument y,t,istate = dvode(f,jac,y,t,tout,rtol,atol,itask,istate,rwork - many of the type: warning: 'ntry' might be used uninitialized in this function in various fortran sources - gcc: Lib/special/cephes/simpsn.c Lib/special/cephes/simpsn.c:25: warning: 'simcon' defined but not used - gcc: Lib/special/cephes/sincos.c Lib/special/cephes/sincos.c:232: warning: conflicting types for built-in function 'sincos' - gcc: Lib/special/cephes/euclid.c Lib/special/cephes/euclid.c:55: warning: 'radd' defined but not used Lib/special/cephes/euclid.c:90: warning: 'rsub' defined but not used Lib/special/cephes/euclid.c:124: warning: 'rmul' defined but not used Lib/special/cephes/euclid.c:157: warning: 'rdiv' defined but not used - SuperLU: many warnings (parentheses, implicit declarations, ...) - Lib/integrate/__quadpack.h: In function `quad_function': Lib/integrate/__quadpack.h:60: warning: unused variable `nb' Lib/integrate/_quadpackmodule.c: At top level: Lib/integrate/_quadpackmodule.c:17: warning: function declaration isn't a prototype - compile options: '-I/home/abaecker/BUILDS2/Build_68/inst_scipy_newcore/lib/python2.4/site-packages/scipy/base/include -I/scr/python/include/python2.4 -c' gcc: Lib/signal/Z_bspline_util.c In file included from /scr/python/include/python2.4/Python.h:8, from Lib/signal/Z_bspline_util.c:6: /scr/python/include/python2.4/pyconfig.h:835:1: warning: "_POSIX_C_SOURCE" redefined In file included from /usr/include/math.h:27, from Lib/signal/Z_bspline_util.c:1: /usr/include/features.h:190:1: warning: this is the location of the previous definition Best, Arnd From bgoli at sun.ac.za Wed Nov 9 02:42:14 2005 From: bgoli at sun.ac.za (Brett Olivier) Date: Wed, 9 Nov 2005 09:42:14 +0200 Subject: [SciPy-dev] Problem building newcore on windows with MinGW Message-ID: <200511090942.14576.bgoli@sun.ac.za> Hi I'm trying to build yesterdays newcore on Windows (ATLAS 3.7.11, MinGW gcc 3.4.4) and distutils does not 
seem to recognise the --compiler=mingw32 switch anymore: ++++ c:\work\newcore> python setup.py build --compiler=mingw32 Running from scipy core source directory. Assuming default configuration (scipy\distutils\command/{setup_command,setup}.py was not found) Appending scipy.distutils.command configuration to scipy.distutils Assuming default configuration (scipy\distutils\fcompiler/{setup_fcompiler,setup}.py was not found) Appending scipy.distutils.fcompiler configuration to scipy.distutils Appending scipy.distutils configuration to scipy Appending scipy.weave configuration to scipy Assuming default configuration (scipy\test/{setup_test,setup}.py was not found) Appending scipy.test configuration to scipy . . . Generating build\src\scipy\base\config.h No module named msvccompiler in scipy.distutils, trying from distutils.. error: Python was built with version 6 of Visual Studio, and extensions need to be built with the same version of the compiler, but it isn't installed. Where would I need to start looking to fix this? thanks. Brett -- Brett G. Olivier Triple-J Group for Molecular Cell Physiology Stellenbosch University From arnd.baecker at web.de Wed Nov 9 04:56:03 2005 From: arnd.baecker at web.de (Arnd Baecker) Date: Wed, 9 Nov 2005 10:56:03 +0100 (CET) Subject: [SciPy-dev] newscipy, band matrix routines Message-ID: Hi, a while ago I created f2py wrappers for - dsbev - dsbevd - dgbtrf - dgbtrs - dgbrfs which deal with eigenvalue/vector computation, LU decomposition, ... for band matrices. The present version of this, including examples/tests and a little bit of documentation is at http://www.physik.tu-dresden.de/~baecker/python/Band2.zip I would like to get this into newscipy but we need some assistance: - should the wrappers be added to generic_flapack.pyf ? (+adding the new functions to setup_linalg.py so that the corresponding single precision wrappers can be skipped) - Are the only scipy specific tricks the use of and to support the different types? 
- For testing/development purposes, is there a way to put this into a separate sandbox or is it just easier to use all the tools which are provided by Lib/linalg/? - it would be great if an expert (Pearu - are you listening? ;-) has a look at the wrapper (either already now, or when the above steps are done) - see also the questions below. Any further guidance/comments/suggestions are welcome! Best, Arnd Some more technical questions Concerning dsbevx: a) abstol parameter: from dsbev.f: "Eigenvalues will be computed most accurately when ABSTOL is set to twice the underflow threshold 2*DLAMCH('S'), not zero." Maybe the easiest (best?) is to wrap DLAMCH as well (easy) and let the user provide the value, if necessary? ((Even nicer would be to compute this value internally - is this possible with f2py? If not, I am not sure if adding this is worth the effort...)) b) Presently I suppress the array q Q (output) DOUBLE PRECISION array, dimension (LDQ, N) If JOBZ = 'V', the N-by-N orthogonal matrix used in the reduction to tridiagonal form. If JOBZ = 'N', the array Q is not referenced. Should one better return this as well and therefore replace double precision dimension(ldq,ldq),intent(hide),depend(ldq) :: q by double precision dimension(ldq,ldq),intent(out),depend(ldq) :: q ? c) Should one make vl,vu,il,iu optional as well? Background: dsbevx allows to compute range: 0 = 'A': all eigenvalues will be found; 1 = 'V': all eigenvalues in the half-open interval (vl,vu] will be found; 2 = 'I': the il-th through iu-th eigenvalues will be found. so we may have that either vl,vu or il,iu or neither of the two pairs are used. If one makes all vl,vu,il,iu optional, one could set the defaults to il=0 iu=10 vl=0.0 vu=20.0 # this is of course absolutely # dependent on the problem, # i.e. in general nonsense...
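[Editor's note: the routines discussed above all take their input in LAPACK's banded storage, which is the usual stumbling block for users of such wrappers. A short illustrative sketch of packing a symmetric matrix into upper band storage, ab[u + i - j, j] = a[i, j], the layout dsbev/dsbevx expect; the helper name to_upper_band is invented for illustration, and numpy stands in for the 2005-era scipy.base:]

```python
import numpy as np

def to_upper_band(a, u):
    """Pack symmetric matrix a into LAPACK upper band storage:
    ab[u + i - j, j] = a[i, j] for max(0, j - u) <= i <= j."""
    n = a.shape[0]
    ab = np.zeros((u + 1, n))
    for j in range(n):
        for i in range(max(0, j - u), j + 1):
            ab[u + i - j, j] = a[i, j]
    return ab

# Tridiagonal symmetric matrix: bandwidth u = 1
a = (np.diag([2.0, 2.0, 2.0])
     + np.diag([-1.0, -1.0], 1)
     + np.diag([-1.0, -1.0], -1))
ab = to_upper_band(a, 1)
# Row 0 holds the superdiagonal (left-padded), row 1 the diagonal.
```

A wrapped dsbev would then consume ab directly; later SciPy releases exposed this same layout through scipy.linalg.eig_banded.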
From strawman at astraw.com Wed Nov 9 05:24:36 2005 From: strawman at astraw.com (Andrew Straw) Date: Wed, 09 Nov 2005 02:24:36 -0800 Subject: [SciPy-dev] 64-bit testing needed In-Reply-To: <4370E7B3.3050205@ee.byu.edu> References: <4370E7B3.3050205@ee.byu.edu> Message-ID: <4371CE64.1090706@astraw.com> Travis Oliphant wrote: > I just made some changes to the slice parsing code to allow numbers > larger than a platform int. This only affects 64-bit platforms where > sizeof(intp) != sizeof(int). > > I need someone to test the new SVN to make sure I changed all the right > places. The changed subroutines are in arrayobject.c > > parse_index > parse_subindex > slice_GetIndices > slice_coerce_index > > I'm especially interested in warnings on compile. > > -Travis I'm getting a segfault on my AMD64 debian sarge system in scipy/fftpack/tests/test_basic.py. I haven't delved into it, and I don't have fftw installed, FWIW. newcore v. 1452 newscipy v. 1429 $ gdb ~/py24-amd64-dbg/bin/python GNU gdb 6.3-debian Copyright 2004 Free Software Foundation, Inc. GDB is free software, covered by the GNU General Public License, and you are welcome to change it and/or distribute copies of it under certain conditions. Type "show copying" to see the conditions. There is absolutely no warranty for GDB. Type "show warranty" for details. This GDB was configured as "x86_64-linux"...Using host libthread_db library "/lib/libthread_db.so.1". (gdb) run Starting program: /mnt/s2/home/astraw/py24-amd64-dbg/bin/python [Thread debugging using libthread_db enabled] [New Thread 46912507521968 (LWP 21586)] Python 2.4.2 (#1, Nov 9 2005, 01:00:47) [GCC 3.3.5 (Debian 1:3.3.5-13)] on linux2 Type "help", "copyright", "credits" or "license" for more information. 
>>> import scipy Importing io to scipy Importing fftpack to scipy Importing special to scipy Importing cluster to scipy Importing sparse to scipy Importing utils to scipy Importing interpolate to scipy Importing lib to scipy Importing integrate to scipy Importing signal to scipy Importing optimize to scipy Importing linalg to scipy Importing stats to scipy [190578 refs] >>> scipy.test(10,10) [snip: lots of tests passed] bench_random (scipy.fftpack.basic.test_basic.test_fft) Fast Fourier Transform ================================================= | real input | complex input ------------------------------------------------- size | scipy | Numeric | scipy | Numeric ------------------------------------------------- 100 | 0.18 Program received signal SIGSEGV, Segmentation fault. [Switching to Thread 46912507521968 (LWP 21586)] 0x00002aaab2b06e35 in array_getattr (self=0x2aaab29cca48, name=0x2aaaabb51671 "__array_struct__") at Src/arrayobject.c:2206 2206 inter->shape[i] = self->dimensions[i]; (gdb) bt #0 0x00002aaab2b06e35 in array_getattr (self=0x2aaab29cca48, name=0x2aaaabb51671 "__array_struct__") at Src/arrayobject.c:2206 #1 0x00000000004401bb in PyObject_GetAttrString (v=0x2aaab29cca48, name=0x2aaaabb51671 "__array_struct__") at Objects/object.c:1012 #2 0x00002aaaabb3c221 in array_fromstructinterface (input=0x2aaab29cca48, intype=0x7fffffb0e620, flags=64) at arrayobject.c:5377 #3 0x00002aaaabb3d199 in array_fromobject (op=0x2aaab29cca48, typecode=0x7fffffb0e620, min_depth=0, max_depth=0, flags=64) at arrayobject.c:5606 #4 0x00002aaaabb3d587 in PyArray_FromAny (op=0x2aaab29cca48, typecode=0x7fffffb0e620, min_depth=0, max_depth=0, requires=64) at arrayobject.c:5758 #5 0x00002aaaabb4e038 in _array_fromobject (ignored=0x0, args=0x2aaab2906260, kws=0x103ce08) at scipy/base/src/multiarraymodule.c:2959 #6 0x00000000004dd3a0 in PyCFunction_Call (func=0x2aaaab6f7670, arg=0x2aaab2906260, kw=0x103ce08) at Objects/methodobject.c:77 #7 0x0000000000418982 in PyObject_Call 
(func=0x2aaaab6f7670, arg=0x2aaab2906260, kw=0x103ce08) at Objects/abstract.c:1756 #8 0x000000000048a83e in do_call (func=0x2aaaab6f7670, pp_stack=0x7fffffb0ece0, na=2, nk=2) at Python/ceval.c:3766 #9 0x000000000048a087 in call_function (pp_stack=0x7fffffb0ece0, oparg=514) at Python/ceval.c:3581 #10 0x0000000000485bb6 in PyEval_EvalFrame (f=0x1023958) at Python/ceval.c:2163 [snip: rest of backtrace] (gdb) up 10 #10 0x0000000000485bb6 in PyEval_EvalFrame (f=0x1023958) at Python/ceval.c:2163 2163 x = call_function(&sp, oparg); (gdb) pystack /home/astraw/py24-amd64-dbg/lib/python2.4/site-packages/scipy/base/numeric.py (66): asarray /home/astraw/py24-amd64-dbg/lib/python2.4/site-packages/scipy/test/testing.py (716): assert_array_almost_equal /home/astraw/py24-amd64-dbg/lib/python2.4/site-packages/scipy/fftpack/tests/test_basic.py (137): bench_random /home/astraw/py24-amd64-dbg/lib/python2.4/unittest.py (245): run /home/astraw/py24-amd64-dbg/lib/python2.4/unittest.py (280): __call__ /home/astraw/py24-amd64-dbg/lib/python2.4/site-packages/scipy/test/testing.py (159): __call__ /home/astraw/py24-amd64-dbg/lib/python2.4/unittest.py (420): run /home/astraw/py24-amd64-dbg/lib/python2.4/unittest.py (427): __call__ /home/astraw/py24-amd64-dbg/lib/python2.4/unittest.py (692): run /home/astraw/py24-amd64-dbg/lib/python2.4/site-packages/scipy/test/testing.py (348): test (1): ? (gdb) From nwagner at mecha.uni-stuttgart.de Wed Nov 9 05:36:41 2005 From: nwagner at mecha.uni-stuttgart.de (Nils Wagner) Date: Wed, 09 Nov 2005 11:36:41 +0100 Subject: [SciPy-dev] newscipy, band matrix routines In-Reply-To: References: Message-ID: <4371D139.2010603@mecha.uni-stuttgart.de> Arnd Baecker wrote: >Hi, > >a while ago I created f2py wrappers for >- dsbev >- dsbevd >- dgbtrf >- dgbtrs >- dgbrfs >which deal with eigenvalue/vector computation, >LU decomposition, ... for band matrices. 
> >The present version of this, including examples/tests and >a little bit of documentation is at > http://www.physik.tu-dresden.de/~baecker/python/Band2.zip > >I would like to get this into newscipy but we need some assistance: >- should the wrappers be added to generic_flapack.pyf ? > > (+adding the new functions to setup_linalg.py so that the > corresponding single precision wrappers can be skipped) > >- Are the only scipy specific tricks the use of > and > to support the different types? >- For testing/development purposes, is there > a way to put this into a separate sandbox > or is it just easier to use all the tools which > are provided by Lib/linalg/? >- it would be great if an expert (Pearu - are you listening? ;-) > has a look at the wrapper (either already now, or > when the above steps are done) - see also the questions below. > >Any further guidance/comments/suggestions are welcome! > >Best, Arnd > > >Some more technical questions > >Concerning dsbevx: > a) abstol parameter: > from dsbev.f: > "Eigenvalues will be computed most accurately when ABSTOL is > set to twice the underflow threshold 2*DLAMCH('S'), not zero." > > Maybe the easiest (best?) is to wrap DLAMCH as well (easy) > and let the user provide the value, if necessary? > ((Even nicer would be to compute this value internally - > is this possible with f2py ? If not, I am not sure if adding this > is worth the effort...)) > > b) Presently I suppress the array q > Q (output) DOUBLE PRECISION array, dimension (LDQ, N) > If JOBZ = 'V', the N-by-N orthogonal matrix used in the > reduction to tridiagonal form. > If JOBZ = 'N', the array Q is not referenced. > Should one better return this as well and therefore replace > double precision dimension(ldq,ldq),intent(hide),depend(ldq) :: q > by > double precision dimension(ldq,ldq),intent(out),depend(ldq) :: q > ? > > c) Should one make vl,vu,il,iu, optional as well ?
> Background: dsbevx allows to compute > range: 0 = 'A': all eigenvalues will be found; > 1 = 'V': all eigenvalues in the half-open interval (vl,vu] > will be found; > 2 = 'I': the il-th through iu-th eigenvalues will be found. > so we may have that either vl,vu or il,iu or neither of > the two pairs are used. > > If one makes all vl,vu,il,iu, optional, one could set > set the defaults to > il=0 iu=10 > vl=0.0 vu=20.0 # this of course absolutely > # dependent on the problem, > # i.e in general nonsense... > > >_______________________________________________ >Scipy-dev mailing list >Scipy-dev at scipy.net >http://www.scipy.net/mailman/listinfo/scipy-dev > There is also some progress in improving LAPACK in this context. http://www.cs.berkeley.edu/~demmel/Sca-LAPACK-Proposal.pdf Nils From arnd.baecker at web.de Wed Nov 9 07:37:26 2005 From: arnd.baecker at web.de (Arnd Baecker) Date: Wed, 9 Nov 2005 13:37:26 +0100 (CET) Subject: [SciPy-dev] 64-bit testing needed In-Reply-To: <4371CE64.1090706@astraw.com> References: <4370E7B3.3050205@ee.byu.edu> <4371CE64.1090706@astraw.com> Message-ID: Hi Andrew, On Wed, 9 Nov 2005, Andrew Straw wrote: > I'm getting a segfault on my AMD64 debian sarge system in > scipy/fftpack/tests/test_basic.py. I haven't delved into it, and I don't > have fftw installed, FWIW. > > newcore v. 1452 > newscipy v. 1429 [...] > [snip: lots of tests passed] > > bench_random (scipy.fftpack.basic.test_basic.test_fft) > Fast Fourier Transform > ================================================= > | real input | complex input > ------------------------------------------------- > size | scipy | Numeric | scipy | Numeric > ------------------------------------------------- > 100 | 0.18 > Program received signal SIGSEGV, Segmentation fault. 
> [Switching to Thread 46912507521968 (LWP 21586)] > 0x00002aaab2b06e35 in array_getattr (self=0x2aaab29cca48, > name=0x2aaaabb51671 "__array_struct__") at Src/arrayobject.c:2206 > 2206 inter->shape[i] = self->dimensions[i]; I am not sure, but it looks as if the scipy test works, but it segfaults when calling Numeric. One guess is that your version of Numeric does not yet support the new array protocol and this causes the problem? Best, Arnd From nwagner at mecha.uni-stuttgart.de Wed Nov 9 07:48:53 2005 From: nwagner at mecha.uni-stuttgart.de (Nils Wagner) Date: Wed, 09 Nov 2005 13:48:53 +0100 Subject: [SciPy-dev] newscipy on 64 bit Message-ID: <4371F035.1010605@mecha.uni-stuttgart.de> Hi all, I am trying to install newscipy on a 64 bit system. python setup.py install in ~/newscipy failed with Traceback (most recent call last): File "setup.py", line 42, in ? setup_package() File "setup.py", line 7, in setup_package from scipy.distutils.core import setup ImportError: No module named scipy.distutils.core How can I resolve this problem ? Nils From rkern at ucsd.edu Wed Nov 9 07:50:58 2005 From: rkern at ucsd.edu (Robert Kern) Date: Wed, 09 Nov 2005 04:50:58 -0800 Subject: [SciPy-dev] newscipy on 64 bit In-Reply-To: <4371F035.1010605@mecha.uni-stuttgart.de> References: <4371F035.1010605@mecha.uni-stuttgart.de> Message-ID: <4371F0B2.7090103@ucsd.edu> Nils Wagner wrote: > Hi all, > > I am trying to install newscipy on a 64 bit system. > > python setup.py install in ~/newscipy failed with > > Traceback (most recent call last): > File "setup.py", line 42, in ? > setup_package() > File "setup.py", line 7, in setup_package > from scipy.distutils.core import setup > ImportError: No module named scipy.distutils.core > > How can I resolve this problem ? Do you have an old version of scipy installed? -- Robert Kern rkern at ucsd.edu "In the fields of hell where the grass grows high Are the graves of dreams allowed to die." 
-- Richard Harter From nwagner at mecha.uni-stuttgart.de Wed Nov 9 07:54:38 2005 From: nwagner at mecha.uni-stuttgart.de (Nils Wagner) Date: Wed, 09 Nov 2005 13:54:38 +0100 Subject: [SciPy-dev] newcore on 64 bit systems Message-ID: <4371F18E.4050408@mecha.uni-stuttgart.de> I have also a problem with python setup.py install in newcore gcc options: '-pthread -fno-strict-aliasing -DNDEBUG -O2 -fmessage-length=0 -Wall -g -fPIC' compile options: '-DNO_ATLAS_INFO=1 -Iscipy/base/include -Ibuild/src/scipy/base -Iscipy/base/src -I/usr/include/python2.4 -c' /usr/bin/g77 -shared build/temp.linux-x86_64-2.4/scipy/corelib/lapack_lite/lapack_litemodule.o -L/usr/local/builds/src/blas -Lbuild/temp.linux-x86_64-2.4 -lflapack_src -lfblas -lg2c -o build/lib.linux-x86_64-2.4/scipy/lib/lapack_lite.so /usr/lib64/gcc-lib/x86_64-suse-linux/3.3.5/../../../../x86_64-suse-linux/bin/ld: /usr/local/builds/src/blas/libfblas.a(dgemm.o): relocation R_X86_64_32 against `a local symbol' can not be used when making a shared object; recompile with -fPIC /usr/local/builds/src/blas/libfblas.a: could not read symbols: Bad value collect2: ld returned 1 exit status /usr/lib64/gcc-lib/x86_64-suse-linux/3.3.5/../../../../x86_64-suse-linux/bin/ld: /usr/local/builds/src/blas/libfblas.a(dgemm.o): relocation R_X86_64_32 against `a local symbol' can not be used when making a shared object; recompile with -fPIC /usr/local/builds/src/blas/libfblas.a: could not read symbols: Bad value collect2: ld returned 1 exit status error: Command "/usr/bin/g77 -shared build/temp.linux-x86_64-2.4/scipy/corelib/lapack_lite/lapack_litemodule.o -L/usr/local/builds/src/blas -Lbuild/temp.linux-x86_64-2.4 -lflapack_src -lfblas -lg2c -o build/lib.linux-x86_64-2.4/scipy/lib/lapack_lite.so" failed with exit status 1 removed scipy/base/__svn_version__.py removed scipy/base/__svn_version__.pyc removed scipy/f2py/__svn_version__.py removed scipy/f2py/__svn_version__.pyc How can I fix these problems ? 
I strictly followed the instructions given at http://www.scipy.org/documentation/buildatlas4scipy.txt Are there special instructions how to build newcore/newscipy on 64 bit machines ? Nils From nwagner at mecha.uni-stuttgart.de Wed Nov 9 07:55:24 2005 From: nwagner at mecha.uni-stuttgart.de (Nils Wagner) Date: Wed, 09 Nov 2005 13:55:24 +0100 Subject: [SciPy-dev] newscipy on 64 bit In-Reply-To: <4371F0B2.7090103@ucsd.edu> References: <4371F035.1010605@mecha.uni-stuttgart.de> <4371F0B2.7090103@ucsd.edu> Message-ID: <4371F1BC.8080208@mecha.uni-stuttgart.de> Robert Kern wrote: >Nils Wagner wrote: > >>Hi all, >> >>I am trying to install newscipy on a 64 bit system. >> >>python setup.py install in ~/newscipy failed with >> >>Traceback (most recent call last): >> File "setup.py", line 42, in ? >> setup_package() >> File "setup.py", line 7, in setup_package >> from scipy.distutils.core import setup >>ImportError: No module named scipy.distutils.core >> >>How can I resolve this problem ? >> > >Do you have an old version of scipy installed? > > No I just started from scratch. 
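When an import error like the `scipy.distutils.core` one above appears, a stale installation shadowing the fresh build is the usual suspect. A small hypothetical helper (name and approach are mine, not from the thread) to check which file Python actually imports a package from:

```python
def module_location(name):
    # Return the file Python would import `name` from, or None if the
    # import fails; useful for spotting an old install shadowing a new one.
    try:
        mod = __import__(name)
    except ImportError:
        return None
    return getattr(mod, "__file__", None)

# Demonstrated with a stdlib module; on the real system run it with "scipy"
# and compare the reported path with the site-packages you just installed to.
print(module_location("json"))
```

If the reported path points at an old site-packages directory, removing or renaming that directory before rebuilding usually resolves the conflict.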
From arnd.baecker at web.de Wed Nov 9 08:13:56 2005 From: arnd.baecker at web.de (Arnd Baecker) Date: Wed, 9 Nov 2005 14:13:56 +0100 (CET) Subject: [SciPy-dev] newcore on 64 bit systems In-Reply-To: <4371F18E.4050408@mecha.uni-stuttgart.de> References: <4371F18E.4050408@mecha.uni-stuttgart.de> Message-ID: Hi Nils, On Wed, 9 Nov 2005, Nils Wagner wrote: > I have also a problem with python setup.py install in newcore > > gcc options: '-pthread -fno-strict-aliasing -DNDEBUG -O2 > -fmessage-length=0 -Wall -g -fPIC' > compile options: '-DNO_ATLAS_INFO=1 -Iscipy/base/include > -Ibuild/src/scipy/base -Iscipy/base/src -I/usr/include/python2.4 -c' > /usr/bin/g77 -shared > build/temp.linux-x86_64-2.4/scipy/corelib/lapack_lite/lapack_litemodule.o > -L/usr/local/builds/src/blas -Lbuild/temp.linux-x86_64-2.4 -lflapack_src > -lfblas -lg2c -o build/lib.linux-x86_64-2.4/scipy/lib/lapack_lite.so > /usr/lib64/gcc-lib/x86_64-suse-linux/3.3.5/../../../../x86_64-suse-linux/bin/ld: > /usr/local/builds/src/blas/libfblas.a(dgemm.o): relocation R_X86_64_32 > against `a local symbol' can not be used when making a shared object; > recompile with -fPIC > /usr/local/builds/src/blas/libfblas.a: could not read symbols: Bad value > collect2: ld returned 1 exit status > /usr/lib64/gcc-lib/x86_64-suse-linux/3.3.5/../../../../x86_64-suse-linux/bin/ld: > /usr/local/builds/src/blas/libfblas.a(dgemm.o): relocation R_X86_64_32 > against `a local symbol' can not be used when making a shared object; > recompile with -fPIC > /usr/local/builds/src/blas/libfblas.a: could not read symbols: Bad value > collect2: ld returned 1 exit status > error: Command "/usr/bin/g77 -shared > build/temp.linux-x86_64-2.4/scipy/corelib/lapack_lite/lapack_litemodule.o > -L/usr/local/builds/src/blas -Lbuild/temp.linux-x86_64-2.4 -lflapack_src > -lfblas -lg2c -o build/lib.linux-x86_64-2.4/scipy/lib/lapack_lite.so" > failed with exit status 1 > removed scipy/base/__svn_version__.py > removed scipy/base/__svn_version__.pyc > 
removed scipy/f2py/__svn_version__.py > removed scipy/f2py/__svn_version__.pyc > > How can I fix these problems ? > > I strictly followed the instructions given at > http://www.scipy.org/documentation/buildatlas4scipy.txt > > Are there special instructions how to build newcore/newscipy on 64 bit > machines ? This is a known problem on 64 Bit machines. http://www.scipy.org/mailinglists/mailman?fn=scipy-user/2005-February/004066.html http://www.scipy.net/pipermail/scipy-user/2005-September/005265.html Below I post my complete notes on the installation of **old** scipy on 64 Bit. The only difference is that you want to use newscipy/newcore. Everything else should be the same (apart from fftw3, it seems that only fftw2 is supported in newscipy at the moment. Also no guarantee on performance for that one). Note that the whole text is written in ReSt, so it should be no problem to use the text or fragments for the scipy wiki ... HTH, Arnd Get the sources =============== Create directories:: mkdir -p INSTALL_PYTHON/Sources cd INSTALL_PYTHON/Sources `gcc `_:: wget ftp://ftp.gwdg.de/pub/misc/gcc/releases/gcc-3.4.4/gcc-3.4.4.tar.bz2 python and docu:: wget ftp://ftp.python.org/pub/python/2.4.2/Python-2.4.2.tar.bz2 wget http://www.python.org/ftp/python/doc/2.4.2/html-2.4.2.tar.bz2 Ipython via svn:: svn co http://ipython.scipy.org/svn/ipython/ipython/trunk ipython zip -9r ipython ipython rm -rf ipython Numeric:: (press Enter for CVS password) cvs -d:pserver:anonymous at cvs.sourceforge.net:/cvsroot/numpy login cvs -z3 -d:pserver:anonymous at cvs.sourceforge.net:/cvsroot/numpy co -P Numerical zip -9r Numerical Numerical rm -rf Numerical f2py2e:: (CVS password: guest) cvs -d :pserver:anonymous at cens.ioc.ee:/home/cvs login cvs -z6 -d :pserver:anonymous at cens.ioc.ee:/home/cvs checkout f2py2e zip -9r f2py2e f2py2e rm -rf f2py2e `fftw `_:: wget http://www.fftw.org/fftw-3.0.1.tar.gz `ATLAS `_:: wget http://cesnet.dl.sourceforge.net/sourceforge/math-atlas/atlas3.6.0.tar.bz2 **OLD** scipy
svn:: svn co http://svn.scipy.org/svn/scipy/trunk scipy cd scipy svn co http://svn.scipy.org/svn/scipy_core/trunk/ scipy_core cd .. zip -9r scipy scipy rm -rf scipy Directory ``Sources`` ===================== After this we have:: 2032724 atlas3.7.11.tar.bz2 520527 f2py2e.zip 1946361 fftw-3.0.1.tar.gz 27565872 gcc-3.4.4.tar.bz2 1395537 html-2.4.2.tar.bz2 953884 ipython.zip 796402 Numerical.zip 7853169 Python-2.4.2.tar.bz2 16783889 scipy.zip Environment variables ====================== :: export PHOME=/scr/python/ export INFODIR=${PHOME}/info:$INFODIR export INFOPATH=${PHOME}/info:$INFOPATH export MANPATH=${PHOME}/man:$MANPATH export LD_LIBRARY_PATH=${PHOME}/lib64:${PHOME}/lib:$LD_LIBRARY_PATH export LD_RUN_PATH=${PHOME}/lib64:${PHOME}/lib:$LD_RUN_PATH export PYTHONDOCS=${PHOME}/docs/Python-Docs-2.4.2/ export PATH=${PHOME}/bin:${PATH} export PAGER="less -R" export CFLAGS=-fPIC For the installation:: cd .. mkdir CompileDir cd CompileDir (this should be on the same level as ``Sources``) gcc === :: tar xjf ../Sources/gcc-3.4.4.tar.bz2 mkdir gcc-build cd gcc-build ../gcc-3.4.4/configure --prefix=${PHOME} --enable-shared --enable-threads=posix --enable-__cxa_atexit --enable-clocale=gnu --enable-languages=c,c++,f77,objc make make install cd ${PHOME}/bin/ ln -s gcc ${PHOME}/bin/cc cd - Python ====== :: tar xjf ../Sources/Python-2.4.2.tar.bz2 cd Python-2.4.2 ./configure --prefix=${PHOME} make make install make test 255 tests OK. 35 tests skipped: test_aepack test_al test_applesingle test_bsddb185 test_bsddb3 test_cd test_cl test_codecmaps_cn test_codecmaps_hk test_codecmaps_jp test_codecmaps_kr test_codecmaps_tw test_curses test_dl test_gl test_imageop test_imgfile test_linuxaudiodev test_macfs test_macostools test_nis test_normalization test_ossaudiodev test_pep277 test_plistlib test_rgbimg test_scriptpackages test_socket_ssl test_socketserver test_sunaudiodev test_timeout test_urllib2net test_urllibnet test_winreg test_winsound Those skips are all expected on linux2. cd .. 
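The environment variables exported above are easy to get subtly wrong (a missing prefix in LD_LIBRARY_PATH is a classic cause of link-time surprises later). A small hypothetical checker — variable names follow the notes, the checks themselves are only illustrative:

```python
def check_prefix_env(env):
    # Sanity-check the PHOME-based environment sketched in the notes:
    # PHOME must be set, and PATH / LD_LIBRARY_PATH should mention it.
    phome = env.get("PHOME", "").rstrip("/")
    problems = []
    if not phome:
        problems.append("PHOME is not set")
    for var in ("PATH", "LD_LIBRARY_PATH"):
        if phome and phome not in env.get(var, ""):
            problems.append("%s does not mention PHOME" % var)
    return problems

# A deliberately incomplete environment for demonstration
# (on a live system pass os.environ instead):
env = {"PHOME": "/scr/python/", "PATH": "/scr/python/bin:/usr/bin"}
print(check_prefix_env(env))
```

Running this before starting the long gcc/Python builds catches the most common omission early.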
IPython ======= :: unzip ../Sources/ipython.zip cd ipython python setup.py install cd .. rehash # default Farben ipython ..., no question on CTRL-D:: xemacs -nw /scr/python/lib/python2.4/site-packages/IPython/UserConfig/ipythonrc #colors Linux colors LightBG confirm_exit 0 ATLAS and BLAS/LAPACK ===================== See: http://www.scipy.org/documentation/buildatlas4scipy.txt BLAS ---- :: export BLAS_SRC=${HOME}/INSTALL_PYTHON/CompileDir/BLAS export LAPACK_SRC=${HOME}/INSTALL_PYTHON/CompileDir/LAPACK mkdir -p $BLAS_SRC cd $BLAS_SRC wget http://www.netlib.org/blas/blas.tgz tar xzf blas.tgz g77 -fno-second-underscore -O2 -fPIC -c *.f ar r libfblas.a *.o ranlib libfblas.a rm -rf *.o LAPACK ------ :: mkdir -p $LAPACK_SRC cd $LAPACK_SRC/.. wget http://www.netlib.org/lapack/lapack.tgz tar xzf lapack.tgz cd $LAPACK_SRC cp INSTALL/make.inc.LINUX make.inc # on LINUX Edit ``make.inc``:: OPTS = -funroll-all-loops -O3 -m64 -fno-second-underscore -fPIC NOOPT = -m64 -fno-second-underscore -fPIC and do:: make lapacklib cd .. ATLAS ----- Just follow:: export BLAS=$BLAS_SRC/libfblas.a tar xjf ../Sources/atlas3.7.11.tar.bz2 cd ATLAS make xconfig ./xconfig -F f "-fomit-frame-pointer -O -fno-second-underscore -fPIC" -F c " -fomit-frame-pointer -O -mfpmath=387 -m64 -fPIC" -F m "-fomit-frame-pointer -O -mfpmath=387 -m64 -fPIC" -b $BLAS Settings: - 64 Bit - Posix threads - don't stop - use express setup? [y]: - Enter f77 compiler [g77]: Enter F77 Flags [-fomit-frame-pointer -O -m64]: - Tune the Level 1 BLAS? 
[y]: Gives for the compile commands:: F77 = /scr/python3//bin/g77 -fomit-frame-pointer -O -fno-second-underscore -fPIC CC = /scr/python3//bin/gcc -fomit-frame-pointer -O -mfpmath=387 -m64 -fPIC MCC = /scr/python3//bin/gcc -fomit-frame-pointer -O -mfpmath=387 -m64 -fPIC Have a couple of cups of coffee while doing:: make install arch=Linux_HAMMER64SSE3_2 make sanity_test arch=Linux_HAMMER64SSE3_2 make ptsanity_test arch=Linux_HAMMER64SSE3_2 Make optimized LAPACK library:: cd lib//Linux_HAMMER64SSE3_2/ mkdir tmp; cd tmp ar x ../liblapack.a cp ../liblapack.a ../liblapack.a_SAVE cp ../../../../LAPACK/lapack_LINUX.a ../liblapack.a ar r ../liblapack.a *.o cd ..; rm -rf tmp ls -lh liblapack.a -rw-r--r-- 1 abaecker users 11M 2005-10-04 17:32 liblapack.a cp *.a $PHOME/lib64 cd ../../ cp include/{cblas.h,clapack.h} $PHOME/include cd .. fftw ==== :: tar xzf ../Sources/fftw-3.0.1.tar.gz cd fftw-3.0.1/ ./configure CFLAGS=-fPIC --prefix=$PHOME make make install Attempt with optimization:: ./configure CFLAGS="-fPIC -O3 -fomit-frame-pointer -fno-schedule-insns -fstrict-aliasing -mpreferred-stack-boundary=4" --prefix=$PHOME Numerical ========= :: unzip ../Sources/Numerical.zip cd Numerical emacs customize.py if 1: use_system_lapack = 1 lapack_library_dirs = ['/scr/python//lib64/'] lapack_libraries = ['lapack', 'cblas', 'f77blas', 'atlas', 'g2c'] if 1: use_dotblas = 1 dotblas_include_dirs = ['/scr/python3/include/atlas'] dotblas_library_dirs = lapack_library_dirs dotblas_libraries = ['cblas', 'atlas', 'g2c'] python setup.py build | tee ../Numeric_install_log.txt python setup.py install | tee ../Numeric_install_log.txt cd .. scipy ===== f2py ---- :: unzip ../Sources/f2py2e.zip cd f2py2e make install cd .. 
scipy itself ------------ :: unzip ../Sources/scipy.zip cd scipy export ATLAS=/scr/python/lib64:/scr/python/include cd scipy_core/scipy_distutils put the following into site.cfg:: [x11] library_dirs = /usr/X11R6/lib64 include_dirs = /usr/X11R6/include Then install first ``scipy_distutils``:: python setup.py install cd ../.. Get ``scipy_systeminfo.txt`` for report on installed packages:: python scipy_core/scipy_distutils/system_info.py > ../scipy_systeminfo.txt Install the beast:: python setup.py build | tee ../scipy_install_log.txt python setup.py install Get ``xplt`` working:: emacs /scr/python/lib/python2.4/site-packages/scipy/xplt/slice3.py routine _construct3 change mask = find_mask (below, _node_edges3 [itype]) to mask = find_mask (below.astype("b"), _node_edges3 [itype].astype("b")) Test:: cd # Start python and run as tests: # (level 10 will take a while ...) import scipy scipy.test(1,verbosity=10) scipy.test(10,verbosity=10) Only error:: ====================================================================== FAIL: check_round (scipy.special.basic.test_basic.test_round) ---------------------------------------------------------------------- Traceback (most recent call last): File "/scr/python/lib/python2.4/site-packages/scipy/special/tests/test_basic.py", line 1789, in check_round assert_array_equal(rnd,rndrl) File "/scr/python/lib/python2.4/site-packages/scipy_test/testing.py", line 715, in assert_array_equal assert cond,\ AssertionError: Arrays are not equal (mismatch 25.0%): Array 1: [10 10 11 11] Array 2: [10 10 10 11] ---------------------------------------------------------------------- Ran 1279 tests in 70.833s From nwagner at mecha.uni-stuttgart.de Wed Nov 9 09:00:53 2005 From: nwagner at mecha.uni-stuttgart.de (Nils Wagner) Date: Wed, 09 Nov 2005 15:00:53 +0100 Subject: [SciPy-dev] segfault for scipy.test(1,10) on 64 bit Message-ID: <43720115.9050500@mecha.uni-stuttgart.de> check_djbfft (scipy.fftpack.basic.test_basic.test_fft) Program received signal 
SIGSEGV, Segmentation fault. [Switching to Thread 46912509469440 (LWP 31268)] 0x00002aaab12707e1 in array_getattr (self=0x2aaab0d543a0, name=) at arrayobject.c:2207 2207 inter->strides[i] = self->dimensions[i]; (gdb) bt #0 0x00002aaab12707e1 in array_getattr (self=0x2aaab0d543a0, name=) at arrayobject.c:2207 #1 0x00002aaaaac1dc90 in PyObject_GetAttrString () from /usr/lib64/libpython2.4.so.1.0 #2 0x00002aaaabb94ae8 in PyArray_FromAny (op=0x2aaab0d543a0, typecode=, min_depth=0, max_depth=0, requires=64) at arrayobject.c:5377 #3 0x00002aaaabb9ecab in _array_fromobject (ignored=, args=, kws=) at multiarraymodule.c:2959 #4 0x00002aaaaac1c17e in PyCFunction_Call () from /usr/lib64/libpython2.4.so.1.0 #5 0x00002aaaaabf7f20 in PyObject_Call () from /usr/lib64/libpython2.4.so.1.0 #6 0x00002aaaaac4ebaa in PyEval_EvalFrame () from /usr/lib64/libpython2.4.so.1.0 #7 0x00002aaaaac51705 in PyEval_EvalCodeEx () from /usr/lib64/libpython2.4.so.1.0 #8 0x00002aaaaac4f789 in PyEval_EvalFrame () from /usr/lib64/libpython2.4.so.1.0 #9 0x00002aaaaac51705 in PyEval_EvalCodeEx () from /usr/lib64/libpython2.4.so.1.0 #10 0x00002aaaaac4f789 in PyEval_EvalFrame () from /usr/lib64/libpython2.4.so.1.0 #11 0x00002aaaaac510af in PyEval_EvalFrame () from /usr/lib64/libpython2.4.so.1.0 #12 0x00002aaaaac51705 in PyEval_EvalCodeEx () from /usr/lib64/libpython2.4.so.1.0 #13 0x00002aaaaac0d485 in function_call () from /usr/lib64/libpython2.4.so.1.0 #14 0x00002aaaaabf7f20 in PyObject_Call () from /usr/lib64/libpython2.4.so.1.0 #15 0x00002aaaaac0073d in instancemethod_call () from /usr/lib64/libpython2.4.so.1.0 #16 0x00002aaaaabf7f20 in PyObject_Call () from /usr/lib64/libpython2.4.so.1.0 #17 0x00002aaaaac4ebaa in PyEval_EvalFrame () from /usr/lib64/libpython2.4.so.1.0 #18 0x00002aaaaac51705 in PyEval_EvalCodeEx () from /usr/lib64/libpython2.4.so.1.0 #19 0x00002aaaaac0d485 in function_call () from /usr/lib64/libpython2.4.so.1.0 #20 0x00002aaaaabf7f20 in PyObject_Call () from 
/usr/lib64/libpython2.4.so.1.0 ---Type to continue, or q to quit--- #21 0x00002aaaaac0073d in instancemethod_call () from /usr/lib64/libpython2.4.so.1.0 #22 0x00002aaaaabf7f20 in PyObject_Call () from /usr/lib64/libpython2.4.so.1.0 #23 0x00002aaaaac2df14 in slot_tp_call () from /usr/lib64/libpython2.4.so.1.0 #24 0x00002aaaaabf7f20 in PyObject_Call () from /usr/lib64/libpython2.4.so.1.0 #25 0x00002aaaaac4ebaa in PyEval_EvalFrame () from /usr/lib64/libpython2.4.so.1.0 #26 0x00002aaaaac51705 in PyEval_EvalCodeEx () from /usr/lib64/libpython2.4.so.1.0 #27 0x00002aaaaac0d485 in function_call () from /usr/lib64/libpython2.4.so.1.0 #28 0x00002aaaaabf7f20 in PyObject_Call () from /usr/lib64/libpython2.4.so.1.0 #29 0x00002aaaaac0073d in instancemethod_call () from /usr/lib64/libpython2.4.so.1.0 #30 0x00002aaaaabf7f20 in PyObject_Call () from /usr/lib64/libpython2.4.so.1.0 #31 0x00002aaaaac2df14 in slot_tp_call () from /usr/lib64/libpython2.4.so.1.0 #32 0x00002aaaaabf7f20 in PyObject_Call () from /usr/lib64/libpython2.4.so.1.0 #33 0x00002aaaaac4ebaa in PyEval_EvalFrame () from /usr/lib64/libpython2.4.so.1.0 #34 0x00002aaaaac510af in PyEval_EvalFrame () from /usr/lib64/libpython2.4.so.1.0 #35 0x00002aaaaac51705 in PyEval_EvalCodeEx () from /usr/lib64/libpython2.4.so.1.0 #36 0x00002aaaaac4f789 in PyEval_EvalFrame () from /usr/lib64/libpython2.4.so.1.0 #37 0x00002aaaaac51705 in PyEval_EvalCodeEx () from /usr/lib64/libpython2.4.so.1.0 #38 0x00002aaaaac51992 in PyEval_EvalCode () from /usr/lib64/libpython2.4.so.1.0 #39 0x00002aaaaac6aeb9 in run_node () from /usr/lib64/libpython2.4.so.1.0 #40 0x00002aaaaac6c67f in PyRun_InteractiveOneFlags () from /usr/lib64/libpython2.4.so.1.0 #41 0x00002aaaaac6c7ee in PyRun_InteractiveLoopFlags () from /usr/lib64/libpython2.4.so.1.0 #42 0x00002aaaaac6c8ec in PyRun_AnyFileExFlags () from /usr/lib64/libpython2.4.so.1.0 #43 0x00002aaaaac72023 in Py_Main () from /usr/lib64/libpython2.4.so.1.0 ---Type to continue, or q to quit--- #44 
0x00000000004008d9 in main () From nwagner at mecha.uni-stuttgart.de Wed Nov 9 09:09:07 2005 From: nwagner at mecha.uni-stuttgart.de (Nils Wagner) Date: Wed, 09 Nov 2005 15:09:07 +0100 Subject: [SciPy-dev] Useful build instructions for newcore/newscipy on 64 bit systems Message-ID: <43720303.4050506@mecha.uni-stuttgart.de> Hi all, It would be very helpful, if some information would be available at a c e n t r a l place e.g. http://www.scipy.org/documentation/buildatlas4scipy.txt (instead of scanning mailing lists ;-) ) on how to install newcore/newscipy on 64 bit systems. I have used g77 -funroll-all-loops -O3 -m64 -fno-second-underscore -fPIC -c *.f ar r libfblas.a *.o ranlib libfblas.a to build the BLAS library. The modified entries in make.inc for LAPACK are OPTS = -funroll-all-loops -O3 -m64 -fno-second-underscore -fPIC NOOPT = -m64 -fno-second-underscore -fPIC Some information on ATLAS should be added as well. So far I have only used BLAS/LAPACK. Nils From arnd.baecker at web.de Wed Nov 9 09:23:38 2005 From: arnd.baecker at web.de (Arnd Baecker) Date: Wed, 9 Nov 2005 15:23:38 +0100 (CET) Subject: [SciPy-dev] Useful build instructions for newcore/newscipy on 64 bit systems In-Reply-To: <43720303.4050506@mecha.uni-stuttgart.de> References: <43720303.4050506@mecha.uni-stuttgart.de> Message-ID: On Wed, 9 Nov 2005, Nils Wagner wrote: > Hi all, > > It would be very helpful, if some information would be available at a c > e n t r a l place > e.g. http://www.scipy.org/documentation/buildatlas4scipy.txt > (instead of scanning mailing lists ;-) ) At the end of the page you are referring to there is a button "log in to add comments". > on how to install newcore/newscipy on 64 bit systems. > > I have used > > g77 -funroll-all-loops -O3 -m64 -fno-second-underscore -fPIC -c *.f > ar r libfblas.a *.o > ranlib libfblas.a > > to build the BLAS library. 
> > The modified entries in make.inc for LAPACK are > > OPTS = -funroll-all-loops -O3 -m64 -fno-second-underscore -fPIC > NOOPT = -m64 -fno-second-underscore -fPIC > > Some information on ATLAS should be added as well. So far I have only > used BLAS/LAPACK. Indeed, it would be good if you add the above information. ATLAS should work just the way it is described in my notes. Arnd From nwagner at mecha.uni-stuttgart.de Wed Nov 9 09:33:26 2005 From: nwagner at mecha.uni-stuttgart.de (Nils Wagner) Date: Wed, 09 Nov 2005 15:33:26 +0100 Subject: [SciPy-dev] Useful build instructions for newcore/newscipy on 64 bit systems In-Reply-To: References: <43720303.4050506@mecha.uni-stuttgart.de> Message-ID: <437208B6.8080503@mecha.uni-stuttgart.de> Arnd Baecker wrote: >On Wed, 9 Nov 2005, Nils Wagner wrote: > > >>Hi all, >> >>It would be very helpful, if some information would be available at a c >>e n t r a l place >>e.g. http://www.scipy.org/documentation/buildatlas4scipy.txt >>(instead of scanning mailing lists ;-) ) >> > >At the end of the page you are referring to >there is a button "log in to add comments". > > >>on how to install newcore/newscipy on 64 bit systems. >> >>I have used >> >>g77 -funroll-all-loops -O3 -m64 -fno-second-underscore -fPIC -c *.f >>ar r libfblas.a *.o >>ranlib libfblas.a >> >>to build the BLAS library. >> >>The modified entries in make.inc for LAPACK are >> >>OPTS = -funroll-all-loops -O3 -m64 -fno-second-underscore -fPIC >>NOOPT = -m64 -fno-second-underscore -fPIC >> >>Some information on ATLAS should be added as well. So far I have only >>used BLAS/LAPACK. >> > >Indeed, it would be good if you add the above information. >ATLAS should work just the way it is described in my notes. 
> >Arnd > > Done :-) >_______________________________________________ >Scipy-dev mailing list >Scipy-dev at scipy.net >http://www.scipy.net/mailman/listinfo/scipy-dev > From cimrman3 at ntc.zcu.cz Wed Nov 9 09:41:20 2005 From: cimrman3 at ntc.zcu.cz (Robert Cimrman) Date: Wed, 09 Nov 2005 15:41:20 +0100 Subject: [SciPy-dev] swig vs. f2py Message-ID: <43720A90.4040504@ntc.zcu.cz> Is it ok to use swig with scipy? I assume the answer is yes, because of the *.i files in both newcore and newscipy, but asking never hurts... :) cheers, r. From perry at stsci.edu Wed Nov 9 11:23:07 2005 From: perry at stsci.edu (Perry Greenfield) Date: Wed, 9 Nov 2005 11:23:07 -0500 Subject: [SciPy-dev] Arrays as truth values? In-Reply-To: <43714B42.8070505@ee.byu.edu> References: <4365F457.4060202@ntc.zcu.cz> <436722B3.7030003@ntc.zcu.cz><43679289.6000404@ee.byu.edu> <96f3afab6f31d86aaf8906ce07f8801b@stsci.edu> <436FCD24.4030305@ee.byu.edu> <4370151D.3090609@shrogers.com> <43701B3B.1010204@ee.byu.edu> <3ab4ecdb87819f2bb8eb861a520ad7f9@stsci.edu> <0e1b19648ccd04b1f42243b437860139@cornell.edu> <43714B42.8070505@ee.byu.edu> Message-ID: <2f850f0e4f66768befe7cae17c874534@stsci.edu>
>> >>>>> scipy.__version__ >> '0.3.3_303.4601' >>>>> if [0]: >> ... print 'hello' >> ... >> hello >>>>> if scipy.array([0, 0]): >> ... print 'hello' >> ... >>>>> >> >> > So, do we want empty arrays to return false and not return an error? > This is definitely a possibility. I'm not sure what is being suggested here. One interpretation is that empty arrays are false and non-empty arrays are true. That doesn't really solve the problems raised before regarding misinterpretations of these logical expressions. The other interpretation is that empty arrays are false and anything else raises an exception. This would be really bizarre IMHO (writing a logical test assuming that it is always false and having to trap it if it isn't?). I don't think any decision is going to remain entirely Pythonic since we are seeing colliding issues that will cause some sort of conflict with established principles one way or another. Look at it this way, we haven't met the above expectation for empty sequences for a long time. How much of a problem has that been compared to the other problems? Another alternative is to ask WWGD. Since he's still around we can ask him ;-). I tend to think practicality beats purity on this issue though. Perry From strawman at astraw.com Wed Nov 9 13:33:01 2005 From: strawman at astraw.com (Andrew Straw) Date: Wed, 09 Nov 2005 10:33:01 -0800 Subject: [SciPy-dev] Numeric CVS __array_struct__ interface is broken on 64 bit platforms Message-ID: <437240DD.9060203@astraw.com> Hi, A couple bugs have been reported (including mine last night) which indicate a problem with the following bit of code in Numerical/Src/arrayobject.c (near line 2200) on 64 bit platforms. I think it'll be a lot faster for someone else to fix it, so I'll leave it at this for now. 
if (strcmp(name, "__array_struct__") == 0) { PyArrayInterface *inter; inter = (PyArrayInterface *)malloc(sizeof(PyArrayInterface)); inter->version = 2; inter->nd = self->nd; if ((inter->nd == 0) || (sizeof(int) == sizeof(Py_intptr_t))) { inter->shape = (Py_intptr_t *)self->dimensions; inter->strides = (Py_intptr_t *)self->strides; } else { int i; for (i=0; i < inter->nd; i++) { inter->shape[i] = self->dimensions[i]; inter->strides[i] = self->strides[i]; } } From oliphant at ee.byu.edu Wed Nov 9 14:02:01 2005 From: oliphant at ee.byu.edu (Travis Oliphant) Date: Wed, 09 Nov 2005 12:02:01 -0700 Subject: [SciPy-dev] Problem building newcore on windows with MinGW In-Reply-To: <200511090942.14576.bgoli@sun.ac.za> References: <200511090942.14576.bgoli@sun.ac.za> Message-ID: <437247A9.1000700@ee.byu.edu> Brett Olivier wrote: >Hi > >I'm trying to build yesterday's newcore on Windows (ATLAS 3.7.11, MinGW gcc >3.4.4) and distutils does not seem to recognise the --compiler=mingw32 switch >anymore: > >++++ > > You need to use the --compiler=mingw32 switch for the config command as well: python setup.py config --compiler=mingw32 build --compiler=mingw32 There may be a way we can fix this in the setup.py script (so that one compiler switch gets applied to all the commands, but it hasn't been done yet). -Travis From oliphant at ee.byu.edu Wed Nov 9 14:11:46 2005 From: oliphant at ee.byu.edu (Travis Oliphant) Date: Wed, 09 Nov 2005 12:11:46 -0700 Subject: [SciPy-dev] segfault for scipy.test(1,10) on 64 bit In-Reply-To: <43720115.9050500@mecha.uni-stuttgart.de> References: <43720115.9050500@mecha.uni-stuttgart.de> Message-ID: <437249F2.50303@ee.byu.edu> Nils Wagner wrote: >check_djbfft (scipy.fftpack.basic.test_basic.test_fft) >Program received signal SIGSEGV, Segmentation fault.
>[Switching to Thread 46912509469440 (LWP 31268)]
>0x00002aaab12707e1 in array_getattr (self=0x2aaab0d543a0, name=<optimized out>) at arrayobject.c:2207
>2207            inter->strides[i] = self->dimensions[i];
>

This is the problem somebody else showed too. It is a problem with the __array_struct__ array interface on 64-bit systems. Argh! I guess I'll have to make another Numeric release to fix it.

-Travis

From oliphant at ee.byu.edu Wed Nov 9 14:16:48 2005 From: oliphant at ee.byu.edu (Travis Oliphant) Date: Wed, 09 Nov 2005 12:16:48 -0700 Subject: [SciPy-dev] Numeric CVS __array_struct__ interface is broken on 64 bit platforms In-Reply-To: <437240DD.9060203@astraw.com> References: <437240DD.9060203@astraw.com> Message-ID: <43724B20.60005@ee.byu.edu> Andrew Straw wrote:

>Hi,
>
>A couple bugs have been reported (including mine last night) which
>indicate a problem with the following bit of code in
>Numerical/Src/arrayobject.c (near line 2200) on 64 bit platforms. I
>think it'll be a lot faster for someone else to fix it, so I'll leave it
>at this for now.
>
>    if (strcmp(name, "__array_struct__") == 0) {
>        PyArrayInterface *inter;
>        inter = (PyArrayInterface *)malloc(sizeof(PyArrayInterface));
>        inter->version = 2;
>        inter->nd = self->nd;
>        if ((inter->nd == 0) || (sizeof(int) == sizeof(Py_intptr_t))) {
>            inter->shape = (Py_intptr_t *)self->dimensions;
>            inter->strides = (Py_intptr_t *)self->strides;
>        }
>        else {
>            int i;
>            for (i=0; i < self->nd; i++) {
>                inter->shape[i] = self->dimensions[i];
>                inter->strides[i] = self->strides[i];
>            }
>        }
>

Yes, this is a bug. The else condition is stupid, as inter->shape and inter->strides don't have any memory allocated. Only on a 64-bit system does the problem show up.
-Travis

From oliphant at ee.byu.edu Wed Nov 9 14:35:34 2005 From: oliphant at ee.byu.edu (Travis Oliphant) Date: Wed, 09 Nov 2005 12:35:34 -0700 Subject: [SciPy-dev] Numeric CVS __array_struct__ interface is broken on 64 bit platforms In-Reply-To: <437240DD.9060203@astraw.com> References: <437240DD.9060203@astraw.com> Message-ID: <43724F86.4040709@ee.byu.edu> Andrew Straw wrote:

>Hi,
>
>A couple bugs have been reported (including mine last night) which
>indicate a problem with the following bit of code in
>Numerical/Src/arrayobject.c (near line 2200) on 64 bit platforms. I
>think it'll be a lot faster for someone else to fix it, so I'll leave it
>at this for now.
>

Try it out of CVS now. I think I've fixed it.

-Travis

From nwagner at mecha.uni-stuttgart.de Wed Nov 9 14:42:31 2005 From: nwagner at mecha.uni-stuttgart.de (Nils Wagner) Date: Wed, 09 Nov 2005 20:42:31 +0100 Subject: [SciPy-dev] Numeric CVS __array_struct__ interface is broken on 64 bit platforms In-Reply-To: <43724F86.4040709@ee.byu.edu> References: <437240DD.9060203@astraw.com> <43724F86.4040709@ee.byu.edu> Message-ID: On Wed, 09 Nov 2005 12:35:34 -0700 Travis Oliphant wrote:

> Andrew Straw wrote:
>
>>Hi,
>>
>>A couple bugs have been reported (including mine last night) which
>>indicate a problem with the following bit of code in
>>Numerical/Src/arrayobject.c (near line 2200) on 64 bit platforms. I
>>think it'll be a lot faster for someone else to fix it, so I'll leave it
>>at this for now.
>>
> Try it out of CVS now. I think I've fixed it.
>

cvs -d:pserver:anonymous@cvs.sourceforge.net:/cvsroot/numpy co -P Numerical
cvs [checkout aborted]: connect to cvs.sourceforge.net(66.35.250.207):2401 failed: Connection refused

How can I fix this cvs problem?
> -Travis > > _______________________________________________ > Scipy-dev mailing list > Scipy-dev at scipy.net > http://www.scipy.net/mailman/listinfo/scipy-dev From oliphant at ee.byu.edu Wed Nov 9 14:50:42 2005 From: oliphant at ee.byu.edu (Travis Oliphant) Date: Wed, 09 Nov 2005 12:50:42 -0700 Subject: [SciPy-dev] Numeric CVS __array_struct__ interface is broken on 64 bit platforms In-Reply-To: References: <437240DD.9060203@astraw.com> <43724F86.4040709@ee.byu.edu> Message-ID: <43725312.4030609@ee.byu.edu> Nils Wagner wrote: >On Wed, 09 Nov 2005 12:35:34 -0700 > Travis Oliphant wrote: > > >>Andrew Straw wrote: >> >> >> >>>Hi, >>> >>>A couple bugs have been reported (including mine last >>>night) which >>>indicate a problem with the following bit of code in >>>Numerical/Src/arrayobject.c (near line 2200) on 64 bit >>>platforms. I >>>think it'll be a lot faster for someone else to fix it, >>>so I'll leave it >>>at this for now. >>> >>> >>> >>> >>Try it out of CVS now. I think I've fixed it. >> >> >> >cvs >-d:pserver:anonymous at cvs.sourceforge.net:/cvsroot/numpy co >-P Numerical >cvs [checkout aborted]: connect to >cvs.sourceforge.net(66.35.250.207):2401 failed: Connection >refused > > I don't know, must be a sourceforge issue. Note, however, that anonymous check-out from source forge usually lags. I'm attaching a patch against Numeric 24.1 that fixes the problem (and a couple more as well). -Travis -------------- next part -------------- An embedded and charset-unspecified text was scrubbed... Name: patch.txt URL: From strawman at astraw.com Wed Nov 9 16:17:03 2005 From: strawman at astraw.com (Andrew Straw) Date: Wed, 09 Nov 2005 13:17:03 -0800 Subject: [SciPy-dev] Numeric CVS __array_struct__ interface is broken on 64 bit platforms In-Reply-To: <43725312.4030609@ee.byu.edu> References: <437240DD.9060203@astraw.com> <43724F86.4040709@ee.byu.edu> <43725312.4030609@ee.byu.edu> Message-ID: <4372674F.7010008@astraw.com> Thanks, Travis. 
That works, and now scipy.test(10,10) passes with 2 failures in check_l_bfgs_b. Thanks again, I'm very excited about the new scipy!

======================================================================
FAIL: limited-memory bound-constrained BFGS algorithm
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/home/astraw/py24-amd64-dbg/lib/python2.4/site-packages/scipy/optimize/tests/test_optimize.py", line 120, in check_l_bfgs_b
    assert err < 1e-6
AssertionError

======================================================================
FAIL: limited-memory bound-constrained BFGS algorithm
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/home/astraw/py24-amd64-dbg/lib/python2.4/site-packages/scipy/optimize/tests/test_optimize.py", line 120, in check_l_bfgs_b
    assert err < 1e-6
AssertionError

----------------------------------------------------------------------
Ran 1346 tests in 127.339s

FAILED (failures=2)
[2132778 refs]

Travis Oliphant wrote:

> Nils Wagner wrote:
>
>> On Wed, 09 Nov 2005 12:35:34 -0700 Travis Oliphant wrote:
>>
>>> Andrew Straw wrote:
>>>
>>>> Hi,
>>>>
>>>> A couple bugs have been reported (including mine last night) which
>>>> indicate a problem with the following bit of code in
>>>> Numerical/Src/arrayobject.c (near line 2200) on 64 bit platforms. I
>>>> think it'll be a lot faster for someone else to fix it, so I'll leave it
>>>> at this for now.
>>>>
>>> Try it out of CVS now. I think I've fixed it.
>>>
>> cvs -d:pserver:anonymous@cvs.sourceforge.net:/cvsroot/numpy co -P Numerical
>> cvs [checkout aborted]: connect to cvs.sourceforge.net(66.35.250.207):2401 failed: Connection refused
>>
> I don't know; it must be a SourceForge issue. Note, however, that
> anonymous check-out from SourceForge usually lags.
> I'm attaching a patch against Numeric 24.1 that fixes the problem (and a
> couple more as well).
>
> -Travis
>
> ------------------------------------------------------------------------
>
> 15c15
> < /* $Id: arrayobject.c,v 1.103 2005/11/04 02:23:22 teoliphant Exp $ */
> ---
>> /* $Id: arrayobject.c,v 1.107 2005/11/09 19:43:39 teoliphant Exp $ */
>
> 2125a2126,2127
>>     PyArrayInterface *inter=ptr;
>>
> 2126a2129,2131
>>     if (inter->nd != 0 && (sizeof(int) != sizeof(Py_intptr_t))) {
>>         free(inter->shape);
>>     }
>
> 2204a2210,2211
>>     inter->shape = (Py_intptr_t *)malloc(self->nd*2*sizeof(Py_intptr_t));
>>     inter->strides = inter->shape + (self->nd);
>
> 2207c2214
> <     inter->strides[i] = self->dimensions[i];
> ---
>>     inter->strides[i] = self->strides[i];
>
> 2492a2500,2502
>> static PyArray_Descr *_array_descr_fromstr(char *, int *);
>>
>>
> 2498a2509
>>     PyArray_Descr* descr;
>
> 2507a2519,2548
>>
>>     if ((ip=PyObject_GetAttrString(op, "__array_typestr__"))!=NULL) {
>>         int swap=0;
>>         descr=NULL;
>>         if (PyString_Check(ip)) {
>>             descr = _array_descr_fromstr(PyString_AS_STRING(ip), &swap);
>>         }
>>         Py_DECREF(ip);
>>         if (descr) {
>>             return max(minimum_type, descr->type_num);
>>         }
>>     }
>>     else PyErr_Clear();
>>
>>     if ((ip=PyObject_GetAttrString(op, "__array_struct__")) != NULL) {
>>         PyArrayInterface *inter;
>>         char buf[40];
>>         int swap=0;
>>         descr=NULL;
>>         if (PyCObject_Check(ip)) {
>>             inter=(PyArrayInterface *)PyCObject_AsVoidPtr(ip);
>>             if (inter->version == 2) {
>>                 snprintf(buf, 40, "|%c%d", inter->typekind, inter->itemsize);
>>                 descr = _array_descr_fromstr(buf, &swap);
>>             }
>>         }
>>         Py_DECREF(ip);
>>         if (descr) return max(minimum_type, descr->type_num);
>>     }
>>     else PyErr_Clear();
>
> 2511,2521c2552,2557
> <     if(!ip) {
> <         /* the original code seems to make no provision for the __array__ */
> <         /* call to fail. I do this is a fallback, and the */
> <         /* actual call to __array__ does get checked elsewhere. */
> <         /* maybe ok? Probably the Python error flag is on at this point*/
> <         /* -- Dubois 3/2002 */
> <         return (int)PyArray_OBJECT;
> <     }
> <     result = max(minimum_type, (int)((PyArrayObject *)ip)->descr->type_num);
> <     Py_DECREF(ip);
> <     return result;
> ---
>>     if(ip && PyArray_Check(ip)) {
>>         result = max(minimum_type, (int)((PyArrayObject *)ip)->descr->type_num);
>>         Py_DECREF(ip);
>>         return result;
>>     }
>>     else Py_XDECREF(ip);
>
> 2523a2560
>>
> 3006a3044,3049
>>     if (!PyArray_Check(op)) {
>>         Py_DECREF(op);
>>         PyErr_SetString(PyExc_TypeError,
>>                         "No array interface and __array__"\
>>                         " method not returning Numeric array.");
>>         return NULL;
> 3007a3051
>>     }
>>
>
>> ------------------------------------------------------------------------
>>
>> _______________________________________________
>> Scipy-dev mailing list
>> Scipy-dev at scipy.net
>> http://www.scipy.net/mailman/listinfo/scipy-dev

From jonathan.taylor at utoronto.ca Wed Nov 9 21:18:52 2005 From: jonathan.taylor at utoronto.ca (Jonathan Taylor) Date: Thu, 10 Nov 2005 02:18:52 +0000 Subject: [SciPy-dev] Some Feedback Message-ID: Just wanted to give some feedback. I am very impressed with the new scipy. It really feels like a numerical computing environment instead of just a bunch of modules. I looked at it a year ago and I didn't get this feeling at all, so I stuck with R. I am going to be able to port some of my research code to Python now and finally write in a "sane" object-oriented language. On another note, and in the hopes of not offending anyone, I should mention that when I visited the scipy web site I thought it was dead. Luckily I tracked down the mailing lists. I was happy to see the lists have been on fire and the project is very much alive. It seems likely that others jumped to the same conclusion at first and moved on. Is there a plan to rework the web site into something that represents the active nature of the project?
My suggestion would be Trac, as its wiki nature makes it easy for anyone to contribute, and its link to the Subversion repository (via timeline and browse source) shows off the activity of development. There is also a plug-in that allows LaTeX equations to be rendered, which would be very useful given scipy's mathematical nature. I'd be interested in helping with the website and/or writing various introductory tutorials to encourage more people to check it out. Thanks again, Jon.

From oliphant at ee.byu.edu Wed Nov 9 21:23:20 2005 From: oliphant at ee.byu.edu (Travis Oliphant) Date: Wed, 09 Nov 2005 19:23:20 -0700 Subject: [SciPy-dev] Freeze Thursday at 5:00pm until further notice Message-ID: <4372AF18.4030208@ee.byu.edu> Thursday evening I'm going to move the contents of newscipy and newcore to the trunk positions on the scipy svn tree and copy the current trunk contents to branches/oldscipy and branches/oldcore. To prepare for this, please make all commits by Thursday at 5:00pm. I will also tag the contents of the tree at that point and make a release of both packages (and a final-final Numeric release). The scipy_core version number will be 0.6. Please respond with comments or concerns. -Travis

From oliphant at ee.byu.edu Wed Nov 9 21:27:53 2005 From: oliphant at ee.byu.edu (Travis Oliphant) Date: Wed, 09 Nov 2005 19:27:53 -0700 Subject: [SciPy-dev] Some Feedback In-Reply-To: References: Message-ID: <4372B029.5020000@ee.byu.edu> Jonathan Taylor wrote:

>On another note, and in the hopes of not offending anyone, I should
>mention that when I visited the scipy web site I thought it was dead.
>Luckily I tracked down the mailing lists. I was happy to see the lists
>have been on fire and the project is very much alive. It seems likely
>that others jumped to the same conclusion at first and moved on.
>

The scipy website is a problem. Currently it's a Plone site that is theoretically supposed to be modifiable by the community.
There are two problems with it:

1) Most people have not absorbed the Zen of Plone yet, and so modifications to the site don't happen as they could/should.

2) Editing the site becomes painfully slow as Plone starts to use up lots of memory.

Result: the scipy site is more static than it should be.

Any help in the web-site area would be very useful.

-Travis

From jonathan.taylor at utoronto.ca Wed Nov 9 21:33:03 2005 From: jonathan.taylor at utoronto.ca (Jonathan Taylor) Date: Thu, 10 Nov 2005 02:33:03 +0000 Subject: [SciPy-dev] Some Feedback In-Reply-To: <4372B029.5020000@ee.byu.edu> Message-ID:

>Any help in the web-site area would be very useful.

I do see the advantages of a Plone site, but I guess (as you point out) they are only advantages if people take advantage of them. Are you and others open to changing to a different format such as Trac? Jon.

From rkern at ucsd.edu Wed Nov 9 21:56:24 2005 From: rkern at ucsd.edu (Robert Kern) Date: Wed, 09 Nov 2005 18:56:24 -0800 Subject: [SciPy-dev] Some Feedback In-Reply-To: <4372B029.5020000@ee.byu.edu> References: <4372B029.5020000@ee.byu.edu> Message-ID: <4372B6D8.6050906@ucsd.edu> Travis Oliphant wrote:

> The scipy website is a problem. Currently it's a Plone site that is
> theoretically supposed to be modifiable by the community. There are
> two problems with it:
>
> 1) Most people have not absorbed the Zen of Plone yet, and so
> modifications to the site don't happen as they could/should.
>
> 2) Editing the site becomes painfully slow as Plone starts to use up
> lots of memory.
>
> Result: the scipy site is more static than it should be.
>
> Any help in the web-site area would be very useful.

I think there's a reasonable amount of consensus that we should have a Trac instance for scipy. Personally, I would go a bit farther to suggest that the Trac instance should be the public face of the project. That is, http://www.scipy.org/ should give the Trac Welcome page for the project.
We've never really used Plone's CMS capabilities to their full extent. Most of the content on the site is perfectly suitable for going into a Wiki. I can see two major roadblocks to switching to Trac at this time: 1) By default, Trac doesn't give a way for individuals to register an account for themselves. Consequently, an administrator either has to manually add people so they can edit the Wiki, or we allow Wiki editing without any kind of login (== Wikispam). There's a plugin to add self-registration, but it has to be installed and tested, which leads me to: 2) Joe Cooper's schedule. He's a busy man, and even transferring some of the admin's duties to someone else is going to take some time. -- Robert Kern rkern at ucsd.edu "In the fields of hell where the grass grows high Are the graves of dreams allowed to die." -- Richard Harter From oliphant at ee.byu.edu Wed Nov 9 21:57:40 2005 From: oliphant at ee.byu.edu (Travis Oliphant) Date: Wed, 09 Nov 2005 19:57:40 -0700 Subject: [SciPy-dev] Memory leaks Message-ID: <4372B724.2030800@ee.byu.edu> I'm interested in memory-leak tests. I've used valgrind, and I've gone over all the C-code with specific eyeballing for INCREF-DECREF errors. I've closed many problems. I may definitely have missed some. If any body can see continuing memory leak issues please chime in. -Travis From oliphant at ee.byu.edu Wed Nov 9 22:02:31 2005 From: oliphant at ee.byu.edu (Travis Oliphant) Date: Wed, 09 Nov 2005 20:02:31 -0700 Subject: [SciPy-dev] Some Feedback In-Reply-To: <4372B6D8.6050906@ucsd.edu> References: <4372B029.5020000@ee.byu.edu> <4372B6D8.6050906@ucsd.edu> Message-ID: <4372B847.6080808@ee.byu.edu> Robert Kern wrote: >Tr > >I think there's a reasonable amount of consensus that we should have a >Trac instance for scipy. Personally, I would go a bit farther to suggest >that the Trac instance should be the public face of the project. That >is, http://www.scipy.org/ should give the Trac Welcome page for the >project. 
We've never really used Plone's CMS capabilities to their full >extent. Most of the content on the site is perfectly suitable for going >into a Wiki. > > There is a Trac instance for scipy core already. http://projects.scipy.org/scipy/scipy_core I'd like to see one for full scipy, too. >I can see two major roadblocks to switching to Trac at this time: > >1) By default, Trac doesn't give a way for individuals to register an >account for themselves. Consequently, an administrator either has to >manually add people so they can edit the Wiki, or we allow Wiki editing >without any kind of login (== Wikispam). There's a plugin to add >self-registration, but it has to be installed and tested, which leads me to: > > > I think Trac is a good option. I don't see too much of a drawback to having to register people manually given that nobody is really participating in the old www.scipy.org site, anyway. -Travis From rkern at ucsd.edu Wed Nov 9 22:12:43 2005 From: rkern at ucsd.edu (Robert Kern) Date: Wed, 09 Nov 2005 19:12:43 -0800 Subject: [SciPy-dev] Some Feedback In-Reply-To: <4372B847.6080808@ee.byu.edu> References: <4372B029.5020000@ee.byu.edu> <4372B6D8.6050906@ucsd.edu> <4372B847.6080808@ee.byu.edu> Message-ID: <4372BAAB.2030700@ucsd.edu> Travis Oliphant wrote: > Robert Kern wrote: >>I can see two major roadblocks to switching to Trac at this time: >> >>1) By default, Trac doesn't give a way for individuals to register an >>account for themselves. Consequently, an administrator either has to >>manually add people so they can edit the Wiki, or we allow Wiki editing >>without any kind of login (== Wikispam). There's a plugin to add >>self-registration, but it has to be installed and tested, which leads me to: > > I think Trac is a good option. I don't see too much of a drawback to > having to register people manually given that nobody is really > participating in the old www.scipy.org site, anyway. 
It also means that people can't submit bug reports without having Joe make an account for them. Or installing/testing the self-registration plugin. -- Robert Kern rkern at ucsd.edu "In the fields of hell where the grass grows high Are the graves of dreams allowed to die." -- Richard Harter From rkern at ucsd.edu Wed Nov 9 22:23:15 2005 From: rkern at ucsd.edu (Robert Kern) Date: Wed, 09 Nov 2005 19:23:15 -0800 Subject: [SciPy-dev] Some Feedback In-Reply-To: <4372BAAB.2030700@ucsd.edu> References: <4372B029.5020000@ee.byu.edu> <4372B6D8.6050906@ucsd.edu> <4372B847.6080808@ee.byu.edu> <4372BAAB.2030700@ucsd.edu> Message-ID: <4372BD23.2000608@ucsd.edu> Robert Kern wrote: > Travis Oliphant wrote: >>I think Trac is a good option. I don't see too much of a drawback to >>having to register people manually given that nobody is really >>participating in the old www.scipy.org site, anyway. > > It also means that people can't submit bug reports without having Joe > make an account for them. Or installing/testing the self-registration > plugin. That said, even a restricted Trac instance (or a spammable one) would be an enormous help at this point. -- Robert Kern rkern at ucsd.edu "In the fields of hell where the grass grows high Are the graves of dreams allowed to die." -- Richard Harter From jonathan.taylor at utoronto.ca Wed Nov 9 22:30:36 2005 From: jonathan.taylor at utoronto.ca (Jonathan Taylor) Date: Thu, 10 Nov 2005 03:30:36 +0000 Subject: [SciPy-dev] Some Feedback In-Reply-To: <4372BAAB.2030700@ucsd.edu> Message-ID: On 11/10/2005, "Robert Kern" wrote: >Travis Oliphant wrote: >> Robert Kern wrote: > >>>I can see two major roadblocks to switching to Trac at this time: >>> >>>1) By default, Trac doesn't give a way for individuals to register an >>>account for themselves. Consequently, an administrator either has to >>>manually add people so they can edit the Wiki, or we allow Wiki editing >>>without any kind of login (== Wikispam). 
There's a plugin to add >>>self-registration, but it has to be installed and tested, which leads me to: >> >> I think Trac is a good option. I don't see too much of a drawback to >> having to register people manually given that nobody is really >> participating in the old www.scipy.org site, anyway. > >It also means that people can't submit bug reports without having Joe >make an account for them. Or installing/testing the self-registration >plugin. > I think that (for the time being at least) it might be enough to allow public access to wiki and bug reports. Or maybe we could restrict wiki editing but allow public bug reports? Jon. From cookedm at physics.mcmaster.ca Wed Nov 9 22:39:46 2005 From: cookedm at physics.mcmaster.ca (David M. Cooke) Date: Wed, 9 Nov 2005 22:39:46 -0500 Subject: [SciPy-dev] Some Feedback In-Reply-To: <4372B847.6080808@ee.byu.edu> References: <4372B029.5020000@ee.byu.edu> <4372B6D8.6050906@ucsd.edu> <4372B847.6080808@ee.byu.edu> Message-ID: <9BA04615-3716-435E-890A-937EF8295575@physics.mcmaster.ca> On Nov 9, 2005, at 22:02 , Travis Oliphant wrote: > I think Trac is a good option. I don't see too much of a drawback to > having to register people manually given that nobody is really > participating in the old www.scipy.org site, anyway. +1 -- |>|\/|< /------------------------------------------------------------------\ |David M. Cooke http://arbutus.physics.mcmaster.ca/dmc/ |cookedm at physics.mcmaster.ca From jonathan.taylor at utoronto.ca Wed Nov 9 22:42:18 2005 From: jonathan.taylor at utoronto.ca (Jonathan Taylor) Date: Thu, 10 Nov 2005 03:42:18 +0000 Subject: [SciPy-dev] Some Feedback In-Reply-To: <4372BD23.2000608@ucsd.edu> Message-ID: >That said, even a restricted Trac instance (or a spammable one) would be >an enormous help at this point. I agree. One more thing. Should scipy_core repos be put into a directory of the scipy repository? 
Presumably, at some points we may want to be able to svn mv files between the two anyways and this of course makes more sense to have them both available through the same trac interface. Maybe I am missing the original reasoning for the seperation though. Jon. From jh at oobleck.astro.cornell.edu Wed Nov 9 22:52:56 2005 From: jh at oobleck.astro.cornell.edu (Joe Harrington) Date: Wed, 9 Nov 2005 22:52:56 -0500 Subject: [SciPy-dev] Some Feedback Message-ID: <200511100352.jAA3quT1013377@oobleck.astro.cornell.edu> Hi Jon, There is in fact a plan to rework the web site. It's an initiative of the ASP project. I have to admit some frustration that I post here semi-regularly discussing these initiatives, few people sign up for them (thanks to those who have), and then people comment here that they wish something was being done about things other than the software itself. So, please go to http://www.scipy.org/wikis/accessible_scipy/AccessibleSciPy sign up to do some web work, and let's get this thing moving! I hope others will do likewise. The roadmap for ASP is linked at the top of the ASP page, and is also here: http://www.scipy.org/mailinglists/mailman?fn=scipy-dev/2004-October/002412.html Thanks, --jh-- From oliphant at ee.byu.edu Wed Nov 9 22:58:06 2005 From: oliphant at ee.byu.edu (Travis Oliphant) Date: Wed, 09 Nov 2005 20:58:06 -0700 Subject: [SciPy-dev] SciPy Website (Was Feedback) In-Reply-To: <200511100352.jAA3quT1013377@oobleck.astro.cornell.edu> References: <200511100352.jAA3quT1013377@oobleck.astro.cornell.edu> Message-ID: <4372C54E.6070506@ee.byu.edu> Joe Harrington wrote: >Hi Jon, > >There is in fact a plan to rework the web site. It's an initiative of >the ASP project. I have to admit some frustration that I post here >semi-regularly discussing these initiatives, few people sign up for >them (thanks to those who have), and then people comment here that >they wish something was being done about things other than the >software itself. 
So, please go to > > Thanks for pointing out your efforts. We should all take better note. We should not overlook the problem of Plone, however. I tried to edit a page today under plone (the numarray discussion page). It took too long, and then when I was done, the system just hung and did not save my edits. I wonder if others have experienced this and simply given up. -Travis From jh at oobleck.astro.cornell.edu Wed Nov 9 23:07:34 2005 From: jh at oobleck.astro.cornell.edu (Joe Harrington) Date: Wed, 9 Nov 2005 23:07:34 -0500 Subject: [SciPy-dev] Some Feedback In-Reply-To: <200511100352.jAA3quT1013377@oobleck.astro.cornell.edu> (message from Joe Harrington on Wed, 9 Nov 2005 22:52:56 -0500) References: <200511100352.jAA3quT1013377@oobleck.astro.cornell.edu> Message-ID: <200511100407.jAA47YW0013419@oobleck.astro.cornell.edu> What's wrong with the zwiki that's already on the site? Perhaps it needs to be made the top-level page, though it seems a little dangerous for *any* wiki to be the top-level page on the site. Given that we hope the site to be the major portal for the software package of choice in scientific computing, I should think we would *want* Plone's capability for doing review-before-commit and the like. The problem is not Plone, it is the lack of interest. Even the non-wiki Plone stuff is very straightforward (in under an hour with Meloni's book, I was out of normal-user usage and into installing and configuring extension modules). Why do we believe that switching to Trac or anything else will make volunteers appear and do work? Why not just start *doing* the work? This isn't a technical problem. 
--jh-- From Fernando.Perez at colorado.edu Wed Nov 9 23:34:41 2005 From: Fernando.Perez at colorado.edu (Fernando Perez) Date: Wed, 09 Nov 2005 21:34:41 -0700 Subject: [SciPy-dev] Some Feedback In-Reply-To: <200511100407.jAA47YW0013419@oobleck.astro.cornell.edu> References: <200511100352.jAA3quT1013377@oobleck.astro.cornell.edu> <200511100407.jAA47YW0013419@oobleck.astro.cornell.edu> Message-ID: <4372CDE1.8060902@colorado.edu> Joe Harrington wrote: > The problem is not Plone, it is the lack of interest. Even the > non-wiki Plone stuff is very straightforward (in under an hour with > Meloni's book, I was out of normal-user usage and into installing and > configuring extension modules). Why do we believe that switching to > Trac or anything else will make volunteers appear and do work? Why > not just start *doing* the work? This isn't a technical problem. In fact to _some_ extent plone is the problem. I fully recognize what you say about people doing the work, but plone has serious memory and scalability problems. I've discussed this at length with Joe Cooper (the one who has to get out of bed at 4 am to reboot the server when plone throws a fit), and I know they've just about had it with plone. While I've never had it eat up an edit like Travis just suffered, I've seen it take a frighteningly long time to accept a minor edit to a wiki page. It may be possible to address the Trac login issue with the plugin Robert found. Eric just mentioned (off-list) they may try to look into it, other priorities permitting, so hang tight. I've been using their trac system a fair bit for ipython, and apart from the login issue, I'm extremely happy. So perhaps we may have a solution for the technical side of this problem soon, let's just give them a bit of time. As for the social one, I can only wish you the best of success. My hands are already pretty full with other things which I hope are considered contributions, but this is certainly an area where the community can help. 
Cheers, f From rkern at ucsd.edu Thu Nov 10 00:05:13 2005 From: rkern at ucsd.edu (Robert Kern) Date: Wed, 09 Nov 2005 21:05:13 -0800 Subject: [SciPy-dev] Some Feedback In-Reply-To: <200511100407.jAA47YW0013419@oobleck.astro.cornell.edu> References: <200511100352.jAA3quT1013377@oobleck.astro.cornell.edu> <200511100407.jAA47YW0013419@oobleck.astro.cornell.edu> Message-ID: <4372D509.40009@ucsd.edu> Joe Harrington wrote: > What's wrong with the zwiki that's already on the site? Perhaps it > needs to be made the top-level page, though it seems a little > dangerous for *any* wiki to be the top-level page on the site. Okay, I'd be just as happy with a staticy front page and the majority of the content in the Trac instance. I think it may even be possible to restrict editing of the Trac Wiki Welcome page. > Given > that we hope the site to be the major portal for the software package > of choice in scientific computing, I should think we would *want* > Plone's capability for doing review-before-commit and the like. > > The problem is not Plone, it is the lack of interest. Even the > non-wiki Plone stuff is very straightforward (in under an hour with > Meloni's book, I was out of normal-user usage and into installing and > configuring extension modules). Why do we believe that switching to > Trac or anything else will make volunteers appear and do work? Why > not just start *doing* the work? This isn't a technical problem. As Travis pointed out there is a real technical problem with Plone/Zope/Zwiki as it is deployed at www.scipy.org. It is usually unusably slow. The Zope process grows monotonically and, IIRC, needs to be restarted every week or so in order to avoid bringing down the server. The evidence suggests that Plone just isn't suited to the kind and amount of content that our community is producing. Plone is a great CMS, but it's not so great when you don't have a lot of content to manage. 
Plone was selected for the website years ago on the assumption that people were going to register and stake out little homepages of their own to post their stuff. Years later, we don't have many members doing that but a fair amount of people posting to the Wiki. The review-by-Wiki process is, I think, tenable. Or at least as tenable as the Plone review process which we've never used to any real extent. I don't think that switching to Trac is going to make volunteers appear and work. I do know for a fact that switching to Trac is going to allow me to volunteer my time to work on the website without feeling like I'm pulling my teeth out. Verrrry sloooowly. I'm fairly confident that Joe Cooper, who has to maintain the website, would be happier dealing with a Trac instance than Plone. -- Robert Kern rkern at ucsd.edu "In the fields of hell where the grass grows high Are the graves of dreams allowed to die." -- Richard Harter From oliphant at ee.byu.edu Thu Nov 10 00:55:47 2005 From: oliphant at ee.byu.edu (Travis Oliphant) Date: Wed, 09 Nov 2005 22:55:47 -0700 Subject: [SciPy-dev] Some Feedback In-Reply-To: <200511100407.jAA47YW0013419@oobleck.astro.cornell.edu> References: <200511100352.jAA3quT1013377@oobleck.astro.cornell.edu> <200511100407.jAA47YW0013419@oobleck.astro.cornell.edu> Message-ID: <4372E0E3.5000806@ee.byu.edu> Joe Harrington wrote: > >The problem is not Plone, it is the lack of interest. Even the >non-wiki Plone stuff is very straightforward (in under an hour with >Meloni's book, I was out of normal-user usage and into installing and >configuring extension modules). Why do we believe that switching to >Trac or anything else will make volunteers appear and do work? Why >not just start *doing* the work? This isn't a technical problem. > > I agree in principle that the tool does not make the work appear. However, a poorly-functioning tool can make things very difficult. My experience with Plone has not been especially pleasant as I described before. 
I would like to see something, for example, that lets people use an email interface to get things to the web. My main problem with Plone is how slow it is to commit changes. After I edit something, I want those changes to be immediately visible. Waiting for 30 seconds to 3 minutes for Plone to commit is not conducive to getting results. I realize that this is due to memory management, and that Plone must be restarted regularly to maintain performance. This is problematic. -Travis From prabhu_r at users.sf.net Thu Nov 10 01:37:40 2005 From: prabhu_r at users.sf.net (Prabhu Ramachandran) Date: Thu, 10 Nov 2005 12:07:40 +0530 Subject: [SciPy-dev] Some Feedback In-Reply-To: <4372BAAB.2030700@ucsd.edu> References: <4372B029.5020000@ee.byu.edu> <4372B6D8.6050906@ucsd.edu> <4372B847.6080808@ee.byu.edu> <4372BAAB.2030700@ucsd.edu> Message-ID: <17266.60084.852695.448161@vulcan.linux.in> >>>>> "Robert" == Robert Kern writes: >> I think Trac is a good option. I don't see too much of a >> drawback to having to register people manually given that >> nobody is really participating in the old www.scipy.org site, >> anyway. Robert> It also means that people can't submit bug reports without Robert> having Joe make an account for them. Or installing/testing Robert> the self-registration plugin. That is inaccurate. It is possible to allow for anonymous ticket creation/modification (bug reports) whilst disabling anonymous wiki edits/creates[1]. So, this is a non-issue, really. The only concern is one of wiki philosophy and the wiki not being editable by everyone.
cheers, prabhu [1] http://mail.enthought.com/pipermail/envisage-dev/2005-September/000712.html From prabhu_r at users.sf.net Thu Nov 10 01:52:59 2005 From: prabhu_r at users.sf.net (Prabhu Ramachandran) Date: Thu, 10 Nov 2005 12:22:59 +0530 Subject: [SciPy-dev] Some Feedback In-Reply-To: <4372CDE1.8060902@colorado.edu> References: <200511100352.jAA3quT1013377@oobleck.astro.cornell.edu> <200511100407.jAA47YW0013419@oobleck.astro.cornell.edu> <4372CDE1.8060902@colorado.edu> Message-ID: <17266.61003.489184.105335@vulcan.linux.in> >>>>> "Fernando" == Fernando Perez writes: Fernando> Joe Harrington wrote: >> The problem is not Plone, it is the lack of interest. Even the >> non-wiki Plone stuff is very straightforward (in under an hour >> with Meloni's book, I was out of normal-user usage and into >> installing and configuring extension modules). Why do we >> believe that switching to Trac or anything else will make >> volunteers appear and do work? Why not just start *doing* the >> work? This isn't a technical problem. [...] Fernando> As for the social one, I can only wish you the best of Fernando> success. My hands are already pretty full with other Fernando> things which I hope are considered contributions, but Fernando> this is certainly an area where the community can help. Joe, I know I fall squarely into one of the lists of people who promised but never came up with any docs. It has not been easy. Over time, the number of projects I handle increases and the amount of available time only reduces. I've almost completely fallen behind on newscipy/newcore and am only barely keeping up with discussions on the m/l. Sorry. I just hope I can do some justice by spreading the good word about Python/SciPy. Hopefully, more folks will get excited about the possibilities and contribute. 
cheers, prabhu From nwagner at mecha.uni-stuttgart.de Thu Nov 10 03:08:12 2005 From: nwagner at mecha.uni-stuttgart.de (Nils Wagner) Date: Thu, 10 Nov 2005 09:08:12 +0100 Subject: [SciPy-dev] Freeze Thursday at 5:00pm until further notice In-Reply-To: <4372AF18.4030208@ee.byu.edu> References: <4372AF18.4030208@ee.byu.edu> Message-ID: <4372FFEC.3070508@mecha.uni-stuttgart.de> Travis Oliphant wrote: >Thursday evening I'm going to move the contents of newscipy and newcore >to the trunk positions on the scipy svn tree and copy the current trunk >contents to branches/oldscipy branches/oldcore > >To prepare for this. Please make all commits by Thursday at 5:00pm. > >I will also tag the contents of the tree at that point and make a >release of both packages (and a final-final Numeric release). > >The scipy_core version number will be 0.6 > >Please respond with comments or concerns. > >-Travis > >_______________________________________________ >Scipy-dev mailing list >Scipy-dev at scipy.net >http://www.scipy.net/mailman/listinfo/scipy-dev > Hi Travis, The segfault is still there check_djbfft (scipy.fftpack.basic.test_basic.test_fft) Program received signal SIGSEGV, Segmentation fault. [Switching to Thread 46912509469440 (LWP 4397)] 0x00002aaab66117d1 in array_getattr (self=0x2aaab60e9760, name=<optimized out>) at arrayobject.c:2206 warning: Source file is more recent than executable.
2206 inter->strides = (Py_intptr_t *)self->strides; (gdb) bt #0 0x00002aaab66117d1 in array_getattr (self=0x2aaab60e9760, name=<optimized out>) at arrayobject.c:2206 #1 0x00002aaaaac1dc90 in PyObject_GetAttrString () from /usr/lib64/libpython2.4.so.1.0 #2 0x00002aaaabb54b68 in PyArray_FromAny (op=0x2aaab60e9760, typecode=<optimized out>, min_depth=0, max_depth=0, requires=64) at arrayobject.c:5377 #3 0x00002aaaabb5ed5b in _array_fromobject (ignored=<optimized out>, args=<optimized out>, kws=<optimized out>) at multiarraymodule.c:2959 >>> print scipy.__core_version__ 0.4.3.1456 >>> print scipy.__scipy_version__ 0.4.2_1429 Nils From nwagner at mecha.uni-stuttgart.de Thu Nov 10 03:14:56 2005 From: nwagner at mecha.uni-stuttgart.de (Nils Wagner) Date: Thu, 10 Nov 2005 09:14:56 +0100 Subject: [SciPy-dev] Freeze Thursday at 5:00pm until further notice In-Reply-To: <4372FFEC.3070508@mecha.uni-stuttgart.de> References: <4372AF18.4030208@ee.byu.edu> <4372FFEC.3070508@mecha.uni-stuttgart.de> Message-ID: <43730180.6060701@mecha.uni-stuttgart.de> Nils Wagner wrote: >Travis Oliphant wrote: > >>Thursday evening I'm going to move the contents of newscipy and newcore >>to the trunk positions on the scipy svn tree and copy the current trunk >>contents to branches/oldscipy branches/oldcore >> >>To prepare for this. Please make all commits by Thursday at 5:00pm. >> >>I will also tag the contents of the tree at that point and make a >>release of both packages (and a final-final Numeric release). >> >>The scipy_core version number will be 0.6 >> >>Please respond with comments or concerns. >> >>-Travis >> >>_______________________________________________ >>Scipy-dev mailing list >>Scipy-dev at scipy.net >>http://www.scipy.net/mailman/listinfo/scipy-dev >> >> > >Hi Travis, > >The segfault is still there > >check_djbfft (scipy.fftpack.basic.test_basic.test_fft) >Program received signal SIGSEGV, Segmentation fault.
>[Switching to Thread 46912509469440 (LWP 4397)] >0x00002aaab66117d1 in array_getattr (self=0x2aaab60e9760, name=<optimized out>) at arrayobject.c:2206 >warning: Source file is more recent than executable. > >2206 inter->strides = (Py_intptr_t *)self->strides; >(gdb) bt >#0 0x00002aaab66117d1 in array_getattr (self=0x2aaab60e9760, >name=<optimized out>) at arrayobject.c:2206 >#1 0x00002aaaaac1dc90 in PyObject_GetAttrString () from >/usr/lib64/libpython2.4.so.1.0 >#2 0x00002aaaabb54b68 in PyArray_FromAny (op=0x2aaab60e9760, >typecode=<optimized out>, min_depth=0, max_depth=0, > requires=64) at arrayobject.c:5377 >#3 0x00002aaaabb5ed5b in _array_fromobject (ignored=<optimized out>, args=<optimized out>, > kws=<optimized out>) at multiarraymodule.c:2959 > > >>>>print scipy.__core_version__ >>>> >0.4.3.1456 > >>>>print scipy.__scipy_version__ >>>> >0.4.2_1429 > >Nils > >_______________________________________________ >Scipy-dev mailing list >Scipy-dev at scipy.net >http://www.scipy.net/mailman/listinfo/scipy-dev > Oops - I forgot python setup.py install. Sorry for the noise !
Only two failures ====================================================================== FAIL: limited-memory bound-constrained BFGS algorithm ---------------------------------------------------------------------- Traceback (most recent call last): File "/usr/lib64/python2.4/site-packages/scipy/optimize/tests/test_optimize.py", line 120, in check_l_bfgs_b assert err < 1e-6 AssertionError ====================================================================== FAIL: limited-memory bound-constrained BFGS algorithm ---------------------------------------------------------------------- Traceback (most recent call last): File "/usr/lib64/python2.4/site-packages/scipy/optimize/tests/test_optimize.py", line 120, in check_l_bfgs_b assert err < 1e-6 AssertionError ---------------------------------------------------------------------- Ran 1348 tests in 3.008s FAILED (failures=2) Nils From pearu at scipy.org Thu Nov 10 02:41:56 2005 From: pearu at scipy.org (Pearu Peterson) Date: Thu, 10 Nov 2005 01:41:56 -0600 (CST) Subject: [SciPy-dev] swig vs. f2py In-Reply-To: <43720A90.4040504@ntc.zcu.cz> References: <43720A90.4040504@ntc.zcu.cz> Message-ID: On Wed, 9 Nov 2005, Robert Cimrman wrote: > Is it ok to use swig with scipy? I assume the answer is yes, because of > the *.i files in both newcore and newscipy, but asking never hurts... :) It is ok and is encouraged. However, the generated sources should be committed to the repository as well. This is because we cannot assume that swig or pyrex or another such tool is installed. And with scipy.distutils we could add an option to the setup.py file so that the swig-generated sources will be regenerated in the source tree when .i files are changed. With f2py things are a bit different. f2py is a pure python package and included in newcore. Pearu From nwagner at mecha.uni-stuttgart.de Thu Nov 10 03:46:38 2005 From: nwagner at mecha.uni-stuttgart.de (Nils Wagner) Date: Thu, 10 Nov 2005 09:46:38 +0100 Subject: [SciPy-dev] swig vs.
f2py In-Reply-To: References: <43720A90.4040504@ntc.zcu.cz> Message-ID: <437308EE.7080404@mecha.uni-stuttgart.de> Pearu Peterson wrote: >On Wed, 9 Nov 2005, Robert Cimrman wrote: > > >>Is it ok to use swig with scipy? I assume the answer is yes, because of >>the *.i files in both newcore and newscipy, but asking never hurts... :) >> > >It is ok and is encouraged. > >However, the generated sources should be committed to the repository as >well. This is because we cannot assume that swig or pyrex or another such >tool is installed. And with scipy.distutils we could add an option to >the setup.py file so that the swig-generated sources will be regenerated in >the source tree when .i files are changed. > >With f2py things are a bit different. f2py is a pure python package and >included in newcore. > > BTW, is there a difference between f2py in newcore and f2py available via cvs -z6 -d :pserver:anonymous at cens.ioc.ee:/home/cvs checkout f2py2e Nils >Pearu > >_______________________________________________ >Scipy-dev mailing list >Scipy-dev at scipy.net >http://www.scipy.net/mailman/listinfo/scipy-dev > From pearu at scipy.org Thu Nov 10 03:03:07 2005 From: pearu at scipy.org (Pearu Peterson) Date: Thu, 10 Nov 2005 02:03:07 -0600 (CST) Subject: [SciPy-dev] swig vs. f2py In-Reply-To: <437308EE.7080404@mecha.uni-stuttgart.de> References: <43720A90.4040504@ntc.zcu.cz> <437308EE.7080404@mecha.uni-stuttgart.de> Message-ID: On Thu, 10 Nov 2005, Nils Wagner wrote: > BTW, is there a difference between f2py in newcore and f2py available via > cvs -z6 -d :pserver:anonymous at cens.ioc.ee:/home/cvs checkout f2py2e Yes. scipy.f2py will be developed further in newcore and it does not support building against Numeric or numarray arrays, only newcore arrays. f2py2e has Numeric and numarray support, and very limited newcore support. I'll keep it around only for those who cannot upgrade Numeric to newcore for whatever reasons. I can fix some bugs but I'll not develop f2py2e further.
Pearu From arnd.baecker at web.de Thu Nov 10 05:40:07 2005 From: arnd.baecker at web.de (Arnd Baecker) Date: Thu, 10 Nov 2005 11:40:07 +0100 (CET) Subject: [SciPy-dev] newscipy, fftw3 - solution Message-ID: Hi, I had a closer look at why fftw3 was not detected with newscipy (whereas this works with one of the latest old scipy ...). The solution is simple: One has to take all the routines fftw_*info from "old" scipy/distutils/system_info.py and put them into system_info.py of newcore. Then fftw3 is detected (and wins over fftw2). (For me this only worked when the FFTW environment variable was set to the directory containing e.g. libfftw3.a.) It would be great if someone with write-access could do the corresponding changes so that fftw3 is supported out of the box. Many thanx, Arnd From schofield at ftw.at Thu Nov 10 05:47:16 2005 From: schofield at ftw.at (Ed Schofield) Date: Thu, 10 Nov 2005 11:47:16 +0100 Subject: [SciPy-dev] Freeze Thursday at 5:00pm until further notice In-Reply-To: <4372AF18.4030208@ee.byu.edu> References: <4372AF18.4030208@ee.byu.edu> Message-ID: <43732534.7030204@ftw.at> Travis Oliphant wrote: >Thursday evening I'm going to move the contents of newscipy and newcore >to the trunk positions on the scipy svn tree and copy the current trunk >contents to branches/oldscipy branches/oldcore > >To prepare for this. Please make all commits by Thursday at 5:00pm. > > 5pm in which time zone? SciPy never sleeps!
-- Ed From cimrman3 at ntc.zcu.cz Thu Nov 10 06:06:14 2005 From: cimrman3 at ntc.zcu.cz (Robert Cimrman) Date: Thu, 10 Nov 2005 12:06:14 +0100 Subject: [SciPy-dev] Freeze Thursday at 5:00pm until further notice In-Reply-To: <43732534.7030204@ftw.at> References: <4372AF18.4030208@ee.byu.edu> <43732534.7030204@ftw.at> Message-ID: <437329A6.1050806@ntc.zcu.cz> Ed Schofield wrote: > Travis Oliphant wrote: > > >>Thursday evening I'm going to move the contents of newscipy and newcore >>to the trunk positions on the scipy svn tree and copy the current trunk >>contents to branches/oldscipy branches/oldcore >> >>To prepare for this. Please make all commits by Thursday at 5:00pm. >> >> > > 5pm in which time zone? SciPy never sleeps! ... where the sun never sets. :-) r. From stefan at sun.ac.za Thu Nov 10 06:20:20 2005 From: stefan at sun.ac.za (Stefan van der Walt) Date: Thu, 10 Nov 2005 13:20:20 +0200 Subject: [SciPy-dev] Freeze Thursday at 5:00pm until further notice In-Reply-To: <437329A6.1050806@ntc.zcu.cz> References: <4372AF18.4030208@ee.byu.edu> <43732534.7030204@ftw.at> <437329A6.1050806@ntc.zcu.cz> Message-ID: <20051110112020.GA10784@sun.ac.za> On Thu, Nov 10, 2005 at 12:06:14PM +0100, Robert Cimrman wrote: > >>To prepare for this. Please make all commits by Thursday at 5:00pm. > >> > > 5pm in which time zone? SciPy never sleeps! > > ... where the sun never sets. :-) I assume that is GMT, along with the saying "the sun never sets on the British empire?" (whereupon some can't help but say that God doesn't trust the British in the dark ;) See http://www.friesian.com/british.htm Cheers Stéfan From dalcinl at gmail.com Thu Nov 10 08:01:13 2005 From: dalcinl at gmail.com (Lisandro Dalcin) Date: Thu, 10 Nov 2005 10:01:13 -0300 Subject: [SciPy-dev] swig vs. f2py In-Reply-To: References: <43720A90.4040504@ntc.zcu.cz> Message-ID: On 11/10/05, Pearu Peterson wrote: > However, the generated sources should be committed to the repository as > well.
This is because we cannot assume that swig or pyrex or another such > tool is installed. And with scipy.distutils we could add an option to > the setup.py file so that the swig-generated sources will be regenerated in > the source tree when .i files are changed. > I use for that: $ python setup.py build_src --inplace > > _______________________________________________ > Scipy-dev mailing list > Scipy-dev at scipy.net > http://www.scipy.net/mailman/listinfo/scipy-dev > -- Lisandro Dalcín --------------- Centro Internacional de Métodos Computacionales en Ingeniería (CIMEC) Instituto de Desarrollo Tecnológico para la Industria Química (INTEC) Consejo Nacional de Investigaciones Científicas y Técnicas (CONICET) PTLC - Güemes 3450, (3000) Santa Fe, Argentina Tel/Fax: +54-(0)342-451.1594 From bgoli at sun.ac.za Thu Nov 10 08:28:26 2005 From: bgoli at sun.ac.za (Brett Olivier) Date: Thu, 10 Nov 2005 15:28:26 +0200 Subject: [SciPy-dev] Problem building newcore on windows with MinGW In-Reply-To: <437247A9.1000700@ee.byu.edu> References: <200511090942.14576.bgoli@sun.ac.za> <437247A9.1000700@ee.byu.edu> Message-ID: <200511101528.27085.bgoli@sun.ac.za> Thanks, works like a charm, the following is the result of scipy.test(10,10): ====================================================================== FAIL: limited-memory bound-constrained BFGS algorithm ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\Python23\lib\site-packages\scipy\optimize\tests\test_optimize.py", line 120, in check_l_bfgs_b assert err < 1e-6 AssertionError ====================================================================== FAIL: check_round (scipy.special.basic.test_basic.test_round) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\Python23\lib\site-packages\scipy\special\tests\test_basic.py", line 1793, in check_round assert_array_equal(rnd,rndrl) File
"c:\python23\Lib\site-packages\scipy\test\testing.py", line 710, in assert_array_equal assert cond,\ AssertionError: Arrays are not equal (mismatch 25.0%): Array 1: [10 10 10 11] Array 2: [10 10 11 11] ====================================================================== FAIL: check_1 (scipy.distutils.misc_util.test_misc_util.test_appendpath) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\Python23\lib\site-packages\scipy\distutils\tests\test_misc_util.py", line 10, in check_1 assert_equal(appendpath('/prefix','name'),join('/prefix','name')) File "c:\python23\Lib\site-packages\scipy\test\testing.py", line 638, in assert_equal assert desired == actual, msg AssertionError: Items are not equal: DESIRED: '/prefix\\name' ACTUAL: '\\prefix\\name' ====================================================================== FAIL: check_2 (scipy.distutils.misc_util.test_misc_util.test_appendpath) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\Python23\lib\site-packages\scipy\distutils\tests\test_misc_util.py", line 20, in check_2 join('/prefix','sub','name')) File "c:\python23\Lib\site-packages\scipy\test\testing.py", line 638, in assert_equal assert desired == actual, msg AssertionError: Items are not equal: DESIRED: '/prefix\\sub\\name' ACTUAL: '\\prefix\\sub\\prefix\\name' ====================================================================== FAIL: check_3 (scipy.distutils.misc_util.test_misc_util.test_appendpath) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\Python23\lib\site-packages\scipy\distutils\tests\test_misc_util.py", line 24, in check_3 join('/prefix','sub','sup','name')) File "c:\python23\Lib\site-packages\scipy\test\testing.py", line 638, in assert_equal assert desired == actual, msg AssertionError: Items are not equal: DESIRED: '/prefix\\sub\\sup\\name'
ACTUAL: '\\prefix\\sub\\prefix\\sup\\name' ====================================================================== FAIL: limited-memory bound-constrained BFGS algorithm ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\Python23\lib\site-packages\scipy\optimize\tests\test_optimize.py", line 120, in check_l_bfgs_b assert err < 1e-6 AssertionError ---------------------------------------------------------------------- Ran 1366 tests in 80.636s FAILED (failures=6) Out[5]: On Wednesday 09 November 2005 21:02, Travis Oliphant wrote: > Brett Olivier wrote: > >Hi > > > >I'm trying to build yesterday's newcore on Windows (ATLAS 3.7.11, MinGW gcc > >3.4.4) and distutils does not seem to recognise the --compiler=mingw32 > > switch anymore: > > > >++++ > > You need to use the --compiler=mingw32 switch for the config command as > well: > > python setup.py config --compiler=mingw32 build --compiler=mingw32 > > There may be a way we can fix this in the setup.py script (so that one > compiler switch gets applied to all the commands, but it hasn't been > done yet). > > -Travis > > _______________________________________________ > Scipy-dev mailing list > Scipy-dev at scipy.net > http://www.scipy.net/mailman/listinfo/scipy-dev -- Brett G. Olivier Postdoctoral Fellow Triple-J Group for Molecular Cell Physiology Stellenbosch University From chanley at stsci.edu Thu Nov 10 08:51:40 2005 From: chanley at stsci.edu (Christopher Hanley) Date: Thu, 10 Nov 2005 08:51:40 -0500 Subject: [SciPy-dev] scipy_core revision 1456 segfaults on Solaris Message-ID: <4373506C.6040908@stsci.edu> Greetings, Revision 1456 of scipy_core causes a segfault running under Solaris in "Test count" while running scipy.test(10,10). However, this problem does NOT occur on my Redhat Enterprise 3 workstation. Test of pickling ... ok Test of put ... ok Test of take, transpose, inner, outer products ... ok Test various functions such as sin, cos. ...
ok Test countSegmentation fault (core dumped) Chris From dd55 at cornell.edu Thu Nov 10 09:28:04 2005 From: dd55 at cornell.edu (Darren Dale) Date: Thu, 10 Nov 2005 09:28:04 -0500 Subject: [SciPy-dev] newcore atlas info on Gentoo In-Reply-To: <43566FB3.4090108@csun.edu> References: <200510181341.12351.dd55@cornell.edu> <200510191136.31418.dd55@cornell.edu> <43566FB3.4090108@csun.edu> Message-ID: <200511100928.04217.dd55@cornell.edu> On Wednesday 19 October 2005 12:09 pm, Stephen Walton wrote: > Darren Dale wrote: > >I was thinking that it would be good for scipy to use a site.cfg by > > default. > > site.cfg does work. I created a site.cfg in newcore/scipy/distutils > containing > > [lapack_src] > src_dirs=/home/swalton/src/LAPACK/SRC > > which is where, in fact, my LAPACK sources are. "python system_info.py" > in this directory then correctly finds the sources. Well, in my case, editing site.cfg alone does not work, because my fortran blas libraries are named "blas" instead of "f77blas". system_info.py has "f77blas" hardcoded in several places, which I have to change in order to build scipy. I think system_info.py should be set up to respect the library list in site.cfg. 
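[Editor's note: a hedged sketch of the kind of site.cfg override Darren is describing. The section names below follow the old scipy.distutils conventions as commonly documented, but the exact keys, the paths, and the library names are illustrative assumptions, and, as the message notes, system_info.py at the time still hardcoded "f77blas", so it would need patching before honoring such a list.]

```ini
# Hypothetical site.cfg fragment: point system_info at BLAS/LAPACK
# libraries named "blas"/"lapack" instead of the ATLAS-style names.
# Section names, keys, and paths are illustrative, not verified
# against the 2005 scipy.distutils sources.
[blas]
blas_libs = blas, cblas
library_dirs = /usr/lib

[lapack]
lapack_libs = lapack
library_dirs = /usr/lib
```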
Darren From schofield at ftw.at Thu Nov 10 10:10:40 2005 From: schofield at ftw.at (Ed Schofield) Date: Thu, 10 Nov 2005 16:10:40 +0100 Subject: [SciPy-dev] f2py and character arrays In-Reply-To: References: Message-ID: <437362F0.8090800@ftw.at> Ed Schofield wrote: >On Tue, 8 Nov 2005, Nils Wagner wrote: > > >>====================================================================== >>ERROR: limited-memory bound-constrained BFGS algorithm >>---------------------------------------------------------------------- >>Traceback (most recent call last): >> File >>"/usr/local/lib/python2.4/site-packages/scipy/optimize/tests/test_optimize.py", >>line 114, in check_l_bfgs_b >> args=(), maxfun=self.maxiter) >> File >>"/usr/local/lib/python2.4/site-packages/scipy/optimize/lbfgsb.py", >>line 207, in fmin_l_bfgs_b >> return x, f[0], d >>ValueError: 0-d arrays can't be indexed. >> >> >Yes, I know this is broken. I added some new unit tests today, partly to >highlight this problem. It hasn't ever worked with the new scipy core. >I'm working on it :) > > I'd like to call for help here. It would be nice to fix this so these tests pass before the code freeze / new release today. As Travis described (http://www.scipy.net/pipermail/scipy-dev/2005-October/003685.html), f2py probably needs changing to support the new format for character arrays (which the lbfgsb.py module uses). I don't understand f2py well enough to fix it myself. It seems that lbfgsb.py passes the "task" character array to the Fortran subroutine "setulb" in the file optimize/lbfgsb-0.9/routines.f without problem and the Fortran code then manipulates this character array correctly. The problem is that only the first character of "task" is modified when control returns to the Python code. Pearu? 
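[Editor's note: the "0-d arrays can't be indexed" failure in the traceback above boils down to code treating a zero-dimensional array as a length-1 sequence. A minimal sketch of the distinction, written against modern NumPy for illustration; the 2005 scipy_core differed in details such as the exact exception raised.]

```python
import numpy as np

f = np.array(3.5)    # 0-d array: wraps a scalar, has no axes to index
g = np.array([3.5])  # 1-d array of length one: this one IS indexable

try:
    f[0]             # indexing a 0-d array raises (IndexError in modern NumPy)
except IndexError:
    value = f.item() # the usual fix: extract the Python scalar instead

print(value, g[0])
```

So a return statement like `return x, f[0], d` only works when `f` is guaranteed to be at least one-dimensional; a 0-d result needs a scalar extraction such as `f.item()` instead.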
:) -- Ed From chanley at stsci.edu Thu Nov 10 10:39:24 2005 From: chanley at stsci.edu (Christopher Hanley) Date: Thu, 10 Nov 2005 10:39:24 -0500 Subject: [SciPy-dev] scipy_core revision 1456 segfaults on Solaris In-Reply-To: <4373506C.6040908@stsci.edu> References: <4373506C.6040908@stsci.edu> Message-ID: <437369AC.7050406@stsci.edu> Christopher Hanley wrote: > Greetings, > > Revision 1456 of scipy_core causes a segfault running under Solaris in > "Test count" while running scipy.test(10,10). However, this problem > does NOT occur on my Redhat Enterprise 3 workstation. > > Test of pickling ... ok > Test of put ... ok > Test of take, transpose, inner, outer products ... ok > Test various functions such as sin, cos. ... ok > Test countSegmentation fault (core dumped) > So far, I have localized the failure to line "self.failUnless (eq(0, array(1,mask=[1])))" in def check_xtestCount(self) of test_ma.py. Chris From schofield at ftw.at Thu Nov 10 11:31:16 2005 From: schofield at ftw.at (Ed Schofield) Date: Thu, 10 Nov 2005 17:31:16 +0100 Subject: [SciPy-dev] Freeze Thursday at 5:00pm until further notice In-Reply-To: <20051110112020.GA10784@sun.ac.za> References: <4372AF18.4030208@ee.byu.edu> <43732534.7030204@ftw.at> <437329A6.1050806@ntc.zcu.cz> <20051110112020.GA10784@sun.ac.za> Message-ID: <437375D4.40605@ftw.at> Stefan van der Walt wrote: >On Thu, Nov 10, 2005 at 12:06:14PM +0100, Robert Cimrman wrote: > > >>>>To prepare for this. Please make all commits by Thursday at 5:00pm. >>>> >>>> >>>> >>>5pm in which time zone? SciPy never sleeps! >>> >>> >>... where the sun never sets. :-) >> >> > >I assume that is GMT, along with the saying "the sun never sets on the >British empire?" (whereupon some can't help but say that God doesn't >trust the British in the dark ;) > > I assumed he meant Antarctica. But doesn't it have seven different time zones?? Now I'm really confused ... 
-- Ed From cimrman3 at ntc.zcu.cz Thu Nov 10 11:39:43 2005 From: cimrman3 at ntc.zcu.cz (Robert Cimrman) Date: Thu, 10 Nov 2005 17:39:43 +0100 Subject: [SciPy-dev] Freeze Thursday at 5:00pm until further notice In-Reply-To: <437375D4.40605@ftw.at> References: <4372AF18.4030208@ee.byu.edu> <43732534.7030204@ftw.at> <437329A6.1050806@ntc.zcu.cz> <20051110112020.GA10784@sun.ac.za> <437375D4.40605@ftw.at> Message-ID: <437377CF.4030102@ntc.zcu.cz> Ed Schofield wrote: > Stefan van der Walt wrote: > > >>On Thu, Nov 10, 2005 at 12:06:14PM +0100, Robert Cimrman wrote: >> >> >> >>>>>To prepare for this. Please make all commits by Thursday at 5:00pm. >>>>> >>>>> >>>>> >>>> >>>>5pm in which time zone? SciPy never sleeps! >>>> >>>> >>> >>>... where the sun never sets. :-) >>> >>> >> >>I assume that is GMT, along with the saying "the sun never sets on the >>British empire?" (whereupon some can't help but say that God doesn't >>trust the British in the dark ;) >> >> > > I assumed he meant Antarctica. But doesn't it have seven different time > zones?? Now I'm really confused ... Oh, sorry, my post was not _the answer_ to the original question! It was just a joke after your 'SciPy never sleeps!'. It did paraphrase the saying about the British empire, though... ;-) r. From swisher at enthought.com Thu Nov 10 12:34:43 2005 From: swisher at enthought.com (Janet M. Swisher) Date: Thu, 10 Nov 2005 11:34:43 -0600 Subject: [SciPy-dev] Some Feedback In-Reply-To: <4372D509.40009@ucsd.edu> References: <200511100352.jAA3quT1013377@oobleck.astro.cornell.edu> <200511100407.jAA47YW0013419@oobleck.astro.cornell.edu> <4372D509.40009@ucsd.edu> Message-ID: <437384B3.3060308@enthought.com> Robert Kern wrote: >The evidence suggests that Plone just isn't suited to the kind and >amount of content that our community is producing. Plone is a great CMS, >but it's not so great when you don't have a lot of content to manage. 
>Plone was selected for the website years ago on the assumption that >people were going to register and stake out little homepages of their >own to post their stuff. Years later, we don't have many members doing >that but a fair amount of people posting to the Wiki. The review-by-Wiki >process is, I think, tenable. Or at least as tenable as the Plone review >process which we've never used to any real extent. > > I think there is a process/access issue introduced by Plone, or by the way we've implemented it. Yes, site members can create content in their home folders. However, I don't think most members realize that's possible, or that such content would be accessible to other members. Also, people like to collaborate, not just work in their own silos. They do contribute to the existing wikis, because that is the only "public" area that they're allowed to change. The non-wiki public areas are locked down except for commenting (and I think not all pages allow comments). Only a small handful of users currently have access to change the public pages, so pages grow stale when the original owners don't have time to maintain them. Plone seems to be oriented toward PR-type content management, where you publish content that remains static unless you replace it with totally new content. (Plone doesn't really manage versions, because, in the intended workflow, you don't really need that.) It's possible to use Plone in a more collaborative mode, as is done at oooauthors.org (now also hosted by Enthought). However, the content being developed there isn't even really published there -- it's officially published on documentation.openoffice.org -- so the community doesn't worry too much about its "public face". For the scipy.org site, we haven't figured out (partly for lack of trying) how to balance the publishing mode and the collaborative mode. How do you balance preserving "blessed" content and organization against making it easy for people to contribute? 
I think it's possible to do that within Plone; it has a great deal of flexibility that we haven't taken advantage of. However, given Plone's apparent performance drawbacks, switching to Trac is a reasonable alternative. Enthought now uses Trac internally for other projects, so we (especially Joe) would get the benefit of consistency. -- Janet Swisher --- Senior Technical Writer Enthought, Inc. http://www.enthought.com From strawman at astraw.com Thu Nov 10 13:51:37 2005 From: strawman at astraw.com (Andrew Straw) Date: Thu, 10 Nov 2005 10:51:37 -0800 Subject: [SciPy-dev] Some Feedback In-Reply-To: <437384B3.3060308@enthought.com> References: <200511100352.jAA3quT1013377@oobleck.astro.cornell.edu> <200511100407.jAA47YW0013419@oobleck.astro.cornell.edu> <4372D509.40009@ucsd.edu> <437384B3.3060308@enthought.com> Message-ID: <437396B9.5080301@astraw.com> In addition to Trac, we could consider MoinMoin [1] as the "public face" of scipy.org. Let me declare immediately that I'm no Trac expert, so if Trac can do all of these things, great, and please ignore this suggestion. Reading the descriptions of Trac on this thread as well as my limited experience as a user is all that I'm going on here. [1] http://moinmoin.wikiwikiweb.de/ As the administrator of several MoinMoin sites, I can vouch that it's easy to setup, easy to maintain, and easy to contribute to. MoinMoin has some protection against Wiki Spam [2], an apparent shortcoming mentioned here in considering Trac for the public face. Don't get me wrong -- I think Trac is super for viewing an svn repository, submitting bug reports, and so on, but perhaps we could use it just for that role and use MoinMoin in the public face. [2] http://moinmoin.wikiwikiweb.de/AntiSpamFeatures Another point in MoinMoin's favor as the public face is that if we want scipy.org to be a starting point into the universe of scientific computing in Python (which I think there's general consensus for), a Trac wiki may give the wrong message. 
My impression of a Trac instance is one wiki==one project. I think a MoinMoin wiki may make it easier to create the distinction between scipy.org-the-website and scipy-the-Python-package. Regardless of wiki choice, I agree with Robert Kern's suggestion that review-by-Wiki is tenable, at least until proven otherwise. If it came to trouble, MoinMoin (I don't know about Trac) has a sophisticated access control list model which can be tuned to a per-page granularity. Janet M. Swisher wrote: > Robert Kern wrote: > > >>The evidence suggests that Plone just isn't suited to the kind and >>amount of content that our community is producing. Plone is a great CMS, >>but it's not so great when you don't have a lot of content to manage. >>Plone was selected for the website years ago on the assumption that >>people were going to register and stake out little homepages of their >>own to post their stuff. Years later, we don't have many members doing >>that but a fair amount of people posting to the Wiki. The review-by-Wiki >>process is, I think, tenable. Or at least as tenable as the Plone review >>process which we've never used to any real extent. >> >> > > I think there is a process/access issue introduced by Plone, or by the > way we've implemented it. Yes, site members can create content in their > home folders. However, I don't think most members realize that's > possible, or that such content would be accessible to other members. > Also, people like to collaborate, not just work in their own silos. They > do contribute to the existing wikis, because that is the only "public" > area that they're allowed to change. The non-wiki public areas are > locked down except for commenting (and I think not all pages allow > comments). Only a small handful of users currently have access to change > the public pages, so pages grow stale when the original owners don't > have time to maintain them. 
> > Plone seems to be oriented toward PR-type content management, where you > publish content that remains static unless you replace it with totally > new content. (Plone doesn't really manage versions, because, in the > intended workflow, you don't really need that.) It's possible to use > Plone in a more collaborative mode, as is done at oooauthors.org (now > also hosted by Enthought). However, the content being developed there > isn't even really published there -- it's officially published on > documentation.openoffice.org -- so the community doesn't worry too much > about its "public face". > > For the scipy.org site, we haven't figured out (partly for lack of > trying) how to balance the publishing mode and the collaborative mode. > How do you balance preserving "blessed" content and organization against > making it easy for people to contribute? I think it's possible to do > that within Plone; it has a great deal of flexibility that we haven't > taken advantage of. However, given Plone's apparent performance > drawbacks, switching to Trac is a reasonable alternative. Enthought now > uses Trac internally for other projects, so we (especially Joe) would > get the benefit of consistency. > From oliphant at ee.byu.edu Thu Nov 10 14:16:20 2005 From: oliphant at ee.byu.edu (Travis Oliphant) Date: Thu, 10 Nov 2005 12:16:20 -0700 Subject: [SciPy-dev] scipy_core revision 1456 segfaults on Solaris In-Reply-To: <4373506C.6040908@stsci.edu> References: <4373506C.6040908@stsci.edu> Message-ID: <43739C84.70009@ee.byu.edu> Christopher Hanley wrote: >Greetings, > >Revision 1456 of scipy_core causes a segfault running under Solaris in >"Test count" while running scipy.test(10,10). However, this problem >does NOT occur on my Redhat Enterprise 3 workstation. > >Test of pickling ... ok >Test of put ... ok >Test of take, transpose, inner, outer products ... ok >Test various functions such as sin, cos. ... 
ok >Test count Segmentation fault (core dumped) > > > Hi Chris, Your tests are always very valuable. I want to verify that you rebuilt the source tree before installing the new version: i.e. rm -fr build/ python setup.py install Also, just for good measure you should remove the old scipy directory where it was previously installed, and any include/python/scipy directories that may be lingering from old installs. I just want to make sure this is a real problem. I'll track it down if it is. -Travis From jonathan.taylor at utoronto.ca Thu Nov 10 14:20:24 2005 From: jonathan.taylor at utoronto.ca (Jonathan Taylor) Date: Thu, 10 Nov 2005 19:20:24 +0000 Subject: [SciPy-dev] Some Feedback In-Reply-To: <437396B9.5080301@astraw.com> Message-ID: Do you think that while scipy is in development mode, it may be enough to just link to other sites from a main trac page? Later on, we could move trac behind a static/moinmoin/etc front page. Jon. On 11/10/2005, "Andrew Straw" wrote: >In addition to Trac, we could consider MoinMoin [1] as the "public face" >of scipy.org. Let me declare immediately that I'm no Trac expert, so if >Trac can do all of these things, great, and please ignore this >suggestion. Reading the descriptions of Trac on this thread as well as >my limited experience as a user is all that I'm going on here. > >[1] http://moinmoin.wikiwikiweb.de/ > >As the administrator of several MoinMoin sites, I can vouch that it's >easy to set up, easy to maintain, and easy to contribute to. MoinMoin >has some protection against Wiki Spam [2], an apparent shortcoming >mentioned here in considering Trac for the public face. Don't get me >wrong -- I think Trac is super for viewing an svn repository, submitting >bug reports, and so on, but perhaps we could use it just for that role >and use MoinMoin in the public face. 
> >[2] http://moinmoin.wikiwikiweb.de/AntiSpamFeatures > >Another point in MoinMoin's favor as the public face is that if we want >scipy.org to be a starting point into the universe of scientific >computing in Python (which I think there's general consensus for), a >Trac wiki may give the wrong message. My impression of a Trac instance >is one wiki==one project. I think a MoinMoin wiki may make it easier to >create the distinction between scipy.org-the-website and >scipy-the-Python-package. > >Regardless of wiki choice, I agree with Robert Kern's suggestion that >review-by-Wiki is tenable, at least until proven otherwise. If it came >to trouble, MoinMoin (I don't know about Trac) has a sophisticated >access control list model which can be tuned to a per-page granularity. > >Janet M. Swisher wrote: >> Robert Kern wrote: >> >> >>>The evidence suggests that Plone just isn't suited to the kind and >>>amount of content that our community is producing. Plone is a great CMS, >>>but it's not so great when you don't have a lot of content to manage. >>>Plone was selected for the website years ago on the assumption that >>>people were going to register and stake out little homepages of their >>>own to post their stuff. Years later, we don't have many members doing >>>that but a fair amount of people posting to the Wiki. The review-by-Wiki >>>process is, I think, tenable. Or at least as tenable as the Plone review >>>process which we've never used to any real extent. >>> >>> >> >> I think there is a process/access issue introduced by Plone, or by the >> way we've implemented it. Yes, site members can create content in their >> home folders. However, I don't think most members realize that's >> possible, or that such content would be accessible to other members. >> Also, people like to collaborate, not just work in their own silos. They >> do contribute to the existing wikis, because that is the only "public" >> area that they're allowed to change. 
The non-wiki public areas are >> locked down except for commenting (and I think not all pages allow >> comments). Only a small handful of users currently have access to change >> the public pages, so pages grow stale when the original owners don't >> have time to maintain them. >> >> Plone seems to be oriented toward PR-type content management, where you >> publish content that remains static unless you replace it with totally >> new content. (Plone doesn't really manage versions, because, in the >> intended workflow, you don't really need that.) It's possible to use >> Plone in a more collaborative mode, as is done at oooauthors.org (now >> also hosted by Enthought). However, the content being developed there >> isn't even really published there -- it's officially published on >> documentation.openoffice.org -- so the community doesn't worry too much >> about its "public face". >> >> For the scipy.org site, we haven't figured out (partly for lack of >> trying) how to balance the publishing mode and the collaborative mode. >> How do you balance preserving "blessed" content and organization against >> making it easy for people to contribute? I think it's possible to do >> that within Plone; it has a great deal of flexibility that we haven't >> taken advantage of. However, given Plone's apparent performance >> drawbacks, switching to Trac is a reasonable alternative. Enthought now >> uses Trac internally for other projects, so we (especially Joe) would >> get the benefit of consistency. 
>> > >_______________________________________________ >Scipy-dev mailing list >Scipy-dev at scipy.net >http://www.scipy.net/mailman/listinfo/scipy-dev > From jonathan.taylor at utoronto.ca Thu Nov 10 14:26:03 2005 From: jonathan.taylor at utoronto.ca (Jonathan Taylor) Date: Thu, 10 Nov 2005 19:26:03 +0000 Subject: [SciPy-dev] Freeze Thursday at 5:00pm until further notice In-Reply-To: <4372AF18.4030208@ee.byu.edu> Message-ID: Hi Travis, If we go to trac, it would probably be useful to have scipy and scipy_core in the same repository. If so, then 5:00 might be the best time to put both in the same repository. We would also get the benefit of being able to svn mv and svn cp files from scipy_core to scipy and vice versa. Maybe I am not understanding a good reason for their separation though? Regards, Jon. On 11/10/2005, "Travis Oliphant" wrote: > >Thursday evening I'm going to move the contents of newscipy and newcore >to the trunk positions on the scipy svn tree and copy the current trunk >contents to branches/oldscipy branches/oldcore > >To prepare for this. Please make all commits by Thursday at 5:00pm. > >I will also tag the contents of the tree at that point and make a >release of both packages (and a final-final Numeric release). > >The scipy_core version number will be 0.6 > >Please respond with comments or concerns. > >-Travis > >_______________________________________________ >Scipy-dev mailing list >Scipy-dev at scipy.net >http://www.scipy.net/mailman/listinfo/scipy-dev > From pearu at scipy.org Thu Nov 10 13:45:29 2005 From: pearu at scipy.org (Pearu Peterson) Date: Thu, 10 Nov 2005 12:45:29 -0600 (CST) Subject: [SciPy-dev] Freeze Thursday at 5:00pm until further notice In-Reply-To: References: Message-ID: On Thu, 10 Nov 2005, Jonathan Taylor wrote: > If we go to trac, it would probably be useful to have scipy and > scipy_core in the same repository. If so, then 5:00 might be the best > time to put both in the same repository. 
We would also get the benefit > of being able to svn mv and svn cp files from scipy_core to scipy and > vice versa. Maybe I am not understanding a good reason for their > separation though? The separation is due to the fact that newcore is a replacement for Numeric and not all people need full scipy. Ideally, when checking out scipy, one should also get scipy_core; and when checking out scipy_core, one should get only scipy_core. This was easy to handle with CVS but, as I have understood, not so with SVN. In addition, one needs to have scipy_core installed before building scipy. I think this is the main reason for keeping scipy_core and scipy separate. On the other hand, we could move the contents of the scipy/Lib/ directory to scipy_core/scipy and have separate setup.py files for scipy and scipy_core. But that would mean that people interested only in scipy_core would get the full scipy sources from SVN (unless there are some SVN tricks to prevent that). Pearu From jonathan.taylor at utoronto.ca Thu Nov 10 14:56:13 2005 From: jonathan.taylor at utoronto.ca (Jonathan Taylor) Date: Thu, 10 Nov 2005 19:56:13 +0000 Subject: [SciPy-dev] Freeze Thursday at 5:00pm until further notice In-Reply-To: Message-ID: What about just putting scipy_core into a new directory? I think that is nearly equivalent, functionally, to having them in separate repositories. Maybe I am wrong? Jon. On 11/10/2005, "Pearu Peterson" wrote: > > >On Thu, 10 Nov 2005, Jonathan Taylor wrote: > >> If we go to trac, it would probably be useful to have scipy and >> scipy_core in the same repository. If so, then 5:00 might be the best >> time to put both in the same repository. We would also get the benefit >> of being able to svn mv and svn cp files from scipy_core to scipy and >> vice versa. Maybe I am not understanding a good reason for their >> separation though? > >The separation is due to the fact that newcore is a replacement for Numeric >and not all people need full scipy. 
> >Ideally, when checking out scipy, one should get also scipy_core: >And when checking out scipy_core, one should get only scipy_core. >This was easy to handle with CVS but as I have understood not so with SVN. > >In addtion, one needs to have scipy_core installed before building >scipy. I think this is the main reason for keeping scipy_core and >scipy separate. > >On the other hand, we could move the contents of scipy/Lib/ directory >to scipy_core/scipy and have separate setup.py files for scipy and >scipy_core. But that would mean that only-scipy_core interested people >would get full scipy sources from SVN (unless there are some SVN tricks >to prevent that). > >Pearu > >_______________________________________________ >Scipy-dev mailing list >Scipy-dev at scipy.net >http://www.scipy.net/mailman/listinfo/scipy-dev > From chanley at stsci.edu Thu Nov 10 15:00:46 2005 From: chanley at stsci.edu (Christopher Hanley) Date: Thu, 10 Nov 2005 15:00:46 -0500 Subject: [SciPy-dev] scipy_core revision 1456 segfaults on Solaris In-Reply-To: <43739C84.70009@ee.byu.edu> References: <4373506C.6040908@stsci.edu> <43739C84.70009@ee.byu.edu> Message-ID: <4373A6EE.9090306@stsci.edu> Travis Oliphant wrote: > Your tests are always very valuable. I want to verify that you rebuit > the source tree before installing the new version: i.e. > > rm -fr build/ > python setup.py install > I always rebuild the source tree before installing a new version. These are the first commands in my regression testing system. > Also, just for good measure you should remove the old scipy directory > where it was previously installed, and any include/python/scipy > directories that may be lingering from old installs These are also removed by my regression tests. Below is the output from the test script you sent me: 1. 2. 3. 
Bus error (core dumped) Chris From oliphant at ee.byu.edu Thu Nov 10 15:04:12 2005 From: oliphant at ee.byu.edu (Travis Oliphant) Date: Thu, 10 Nov 2005 13:04:12 -0700 Subject: [SciPy-dev] scipy_core revision 1456 segfaults on Solaris In-Reply-To: <4373A6EE.9090306@stsci.edu> References: <4373506C.6040908@stsci.edu> <43739C84.70009@ee.byu.edu> <4373A6EE.9090306@stsci.edu> Message-ID: <4373A7BC.9070409@ee.byu.edu> Christopher Hanley wrote: >Travis Oliphant wrote: > > >>Your tests are always very valuable. I want to verify that you rebuit >>the source tree before installing the new version: i.e. >> >>rm -fr build/ >>python setup.py install >> >> >> >I always rebuild the source tree before installing a new version. These >are the first commands in my regression testing system. > > > >>Also, just for good measure you should remove the old scipy directory >>where it was previously installed, and any include/python/scipy >>directories that may be lingering from old installs >> >> > >These are also removed by my regression tests. > > >Below is the output from the test script you sent me: > >1. >2. >3. >Bus error (core dumped) > > Great. This will help. -Travis From fonnesbeck at gmail.com Thu Nov 10 15:15:42 2005 From: fonnesbeck at gmail.com (Chris Fonnesbeck) Date: Thu, 10 Nov 2005 15:15:42 -0500 Subject: [SciPy-dev] Arrays as truth values? In-Reply-To: <43701B3B.1010204@ee.byu.edu> References: <4365F457.4060202@ntc.zcu.cz> <436722B3.7030003@ntc.zcu.cz> <8e88429e2ddec19acf77cb091e0ebd34@stsci.edu> <43679289.6000404@ee.byu.edu> <436F54A5.8070909@shrogers.com> <96f3afab6f31d86aaf8906ce07f8801b@stsci.edu> <436FCD24.4030305@ee.byu.edu> <4370151D.3090609@shrogers.com> <43701B3B.1010204@ee.byu.edu> Message-ID: <723eb6930511101215x7be5fe4fv44497f9b7e13d736@mail.gmail.com> On 11/7/05, Travis Oliphant wrote: > Steven H. Rogers wrote: > > >OK. I can't think of a really good use case for using an array as a truth > >value. 
I would argue though, that it would make sense for an array of zeros > >to be False and an array with any non-zero values to be True. > > > > > I agree this makes sense. That's why it used to be the default > behavior. But you can already get that behavior with any(a). > > There will be many though, I'm afraid, who think b or a ought to return > element-wise like b | a does. This is not possible in Python. Raising > an error will at least alert them to the problem which might otherwise > give them misleading results. > I'm not quite sure how any() is supposed to work; does it just return true if one or more element evaluates to true? In my current code, I have the following: if sum(lower>=median or median>=upper): which returns a ValueError. What is the best way to detect elements in one array that are less than the corresponding element in the other without constructing a list comprehension? Thanks for the clarification, -- Chris Fonnesbeck Atlanta, GA From fonnesbeck at gmail.com Thu Nov 10 15:18:50 2005 From: fonnesbeck at gmail.com (Chris Fonnesbeck) Date: Thu, 10 Nov 2005 15:18:50 -0500 Subject: [SciPy-dev] Arrays as truth values? In-Reply-To: <723eb6930511101215x7be5fe4fv44497f9b7e13d736@mail.gmail.com> References: <4365F457.4060202@ntc.zcu.cz> <436722B3.7030003@ntc.zcu.cz> <8e88429e2ddec19acf77cb091e0ebd34@stsci.edu> <43679289.6000404@ee.byu.edu> <436F54A5.8070909@shrogers.com> <96f3afab6f31d86aaf8906ce07f8801b@stsci.edu> <436FCD24.4030305@ee.byu.edu> <4370151D.3090609@shrogers.com> <43701B3B.1010204@ee.byu.edu> <723eb6930511101215x7be5fe4fv44497f9b7e13d736@mail.gmail.com> Message-ID: <723eb6930511101218k6de92da7q5aa5725c96337b4b@mail.gmail.com> On 11/10/05, Chris Fonnesbeck wrote: > On 11/7/05, Travis Oliphant wrote: > > Steven H. Rogers wrote: > > > > >OK. I can't think of a really good use case for using an array as a truth > > >value. 
I would argue though, that it would make sense for an array of zeros > > >to be False and an array with any non-zero values to be True. > > > > > > > > I agree this makes sense. That's why it used to be the default > > behavior. But you can already get that behavior with any(a). > > > > There will be many though, I'm afraid, who think b or a ought to return > > element-wise like b | a does. This is not possible in Python. Raising > > an error will at least alert them to the problem which might otherwise > > give them misleading results. > > > > I'm not quite sure how any() is supposed to work; does it just return > true if one or more element evaluates to true? > > In my current code, I have the following: > > if sum(lower>=median or median>=upper): > > which returns a ValueError. What is the best way to detect elements in > one array that are less than the corresponding element in the other > without constructing a list comprehension? > I think I see the problem. Under scipy_core, this now needs to be: if sum(lower>=median) or sum(median>=upper): -- Chris Fonnesbeck Atlanta, GA From oliphant at ee.byu.edu Thu Nov 10 15:35:17 2005 From: oliphant at ee.byu.edu (Travis Oliphant) Date: Thu, 10 Nov 2005 13:35:17 -0700 Subject: [SciPy-dev] Arrays as truth values? In-Reply-To: <723eb6930511101215x7be5fe4fv44497f9b7e13d736@mail.gmail.com> References: <4365F457.4060202@ntc.zcu.cz> <436722B3.7030003@ntc.zcu.cz> <8e88429e2ddec19acf77cb091e0ebd34@stsci.edu> <43679289.6000404@ee.byu.edu> <436F54A5.8070909@shrogers.com> <96f3afab6f31d86aaf8906ce07f8801b@stsci.edu> <436FCD24.4030305@ee.byu.edu> <4370151D.3090609@shrogers.com> <43701B3B.1010204@ee.byu.edu> <723eb6930511101215x7be5fe4fv44497f9b7e13d736@mail.gmail.com> Message-ID: <4373AF05.4090404@ee.byu.edu> >I'm not quite sure how any() is supposed to work; does it just return >true if one or more element evaluates to true? > > > Yes. Exactly. 
There is an axis argument, but for just testing truth of anything in the array you don't want to use it. >In my current code, I have the following: > >if sum(lower>=median or median>=upper): > >which returns a ValueError. What is the best way to detect elements in >one array that are less than the corresponding element in the other >without constructing a list comprehension? > > The ValueError was just recently added because it is ambiguous as to what you meant by this. I would say if any(lower>=median) or any(median>=upper): -Travis From oliphant at ee.byu.edu Thu Nov 10 15:38:37 2005 From: oliphant at ee.byu.edu (Travis Oliphant) Date: Thu, 10 Nov 2005 13:38:37 -0700 Subject: [SciPy-dev] scipy_core revision 1456 segfaults on Solaris In-Reply-To: <4373A6EE.9090306@stsci.edu> References: <4373506C.6040908@stsci.edu> <43739C84.70009@ee.byu.edu> <4373A6EE.9090306@stsci.edu> Message-ID: <4373AFCD.6090203@ee.byu.edu> Christopher Hanley wrote: >Travis Oliphant wrote: > > >>Your tests are always very valuable. I want to verify that you rebuilt >>the source tree before installing the new version: i.e. >> >>rm -fr build/ >>python setup.py install >> >> >> >I always rebuild the source tree before installing a new version. These >are the first commands in my regression testing system. > > > >>Also, just for good measure you should remove the old scipy directory >>where it was previously installed, and any include/python/scipy >>directories that may be lingering from old installs >> >> > >These are also removed by my regression tests. > > >Below is the output from the test script you sent me: > >1. >2. >3. >Bus error (core dumped) > > > > O.K. so it's in the allclose code of masked arrays. Could you look through that code and test each line separately. Just placing print statements in your copy of scipy/base/ma.py can help. Alternatively copy the allclose code to a test script and spread out the code so only one thing gets done at a time.... 
If you can isolate the problem to a single extension-code call, it can really help me. -Travis From jh at oobleck.astro.cornell.edu Thu Nov 10 16:02:40 2005 From: jh at oobleck.astro.cornell.edu (Joe Harrington) Date: Thu, 10 Nov 2005 16:02:40 -0500 Subject: [SciPy-dev] Some Feedback In-Reply-To: <4372D509.40009@ucsd.edu> (message from Robert Kern on Wed, 09 Nov 2005 21:05:13 -0800) References: <200511100352.jAA3quT1013377@oobleck.astro.cornell.edu> <200511100407.jAA47YW0013419@oobleck.astro.cornell.edu> <4372D509.40009@ucsd.edu> Message-ID: <200511102102.jAAL2eWq018954@oobleck.astro.cornell.edu> Ok, in light of what's been said, if the technical issues of Plone really are chasing people away, I'm not opposed to a switch, especially if it makes Joe C.'s life easier. I'm a little sad to see the possibility of using Plone's many modules disappear. Maybe we don't need ecommerce or photo galleries now, but maybe later. Let's do a quick list of requirements. I started one here... http://www.scipy.org/wikis/accessible_scipy/AccessibleSciPy (no surprises there...) If you don't want to wait the 90 seconds (ok, it is tedious), post here and someone will update the wiki. --jh-- From jonathan.taylor at utoronto.ca Thu Nov 10 16:19:51 2005 From: jonathan.taylor at utoronto.ca (Jonathan Taylor) Date: Thu, 10 Nov 2005 21:19:51 +0000 Subject: [SciPy-dev] Some Feedback In-Reply-To: <200511102102.jAAL2eWq018954@oobleck.astro.cornell.edu> Message-ID: You can put me down for crafting and maintaining a "Getting Started with Scipy" page. Jon. On 11/10/2005, "Joe Harrington" wrote: >Ok, in light of what's been said, if the technical issues of Plone >really are chasing people away, I'm not opposed to a switch, >especially if it makes Joe C.'s life easier. > >I'm a little sad to see the possibility of using Plone's many modules >disappear. Maybe we don't need ecommerce or photo galleries now, but >maybe later. > >Let's do a quick list of requirements. I started one here... 
> >http://www.scipy.org/wikis/accessible_scipy/AccessibleSciPy > >(no surprises there...) If you don't want to wait the 90 seconds (ok, >it is tedious), post here and someone will update the wiki. > >--jh-- > >_______________________________________________ >Scipy-dev mailing list >Scipy-dev at scipy.net >http://www.scipy.net/mailman/listinfo/scipy-dev > From oliphant at ee.byu.edu Thu Nov 10 14:31:46 2005 From: oliphant at ee.byu.edu (Travis Oliphant) Date: Thu, 10 Nov 2005 12:31:46 -0700 Subject: [SciPy-dev] Freeze Thursday at 5:00pm until further notice In-Reply-To: References: Message-ID: <4373A022.8080809@ee.byu.edu> Jonathan Taylor wrote: >Hi Travis, > >If we go to trac, it would probabally be useful to have scipy and >scipy_core in the same repository. If so, then 5:00 might be the best >time to put both in the same repository. We would also get the benefit >of being able to svn mv ad svn cp files from scipy_core to scipy and >vice versa. Maybe I am not understanding a good reason for their >seperation though? > > They are in separate repositories because scipy core is the replacement for Numeric only. Several people don't like the monolithic nature of scipy. We want to alleviate their concerns by having scipy core as a separate project. I think we should have two trac pages. One for the core, and one for the rest. So, I don't think we will be joining them under the same repository any time soon. 
-Travis From Fernando.Perez at colorado.edu Thu Nov 10 18:49:24 2005 From: Fernando.Perez at colorado.edu (Fernando Perez) Date: Thu, 10 Nov 2005 16:49:24 -0700 Subject: [SciPy-dev] Some Feedback In-Reply-To: <437396B9.5080301@astraw.com> References: <200511100352.jAA3quT1013377@oobleck.astro.cornell.edu> <200511100407.jAA47YW0013419@oobleck.astro.cornell.edu> <4372D509.40009@ucsd.edu> <437384B3.3060308@enthought.com> <437396B9.5080301@astraw.com> Message-ID: <4373DC84.4030600@colorado.edu> Andrew Straw wrote: > In addition to Trac, we could consider MoinMoin [1] as the "public face" > of scipy.org. Let me declare immediately that I'm no Trac expert, so if > Trac can do all of these things, great, and please ignore this > suggestion. Reading the descriptions of Trac on this thread as well as > my limited experience as a user is all that I'm going on here. I kind of like Andrew's approach on this one, as in a sense it provides at the website level an equivalence with the mailing lists: Moin site <-> scipy-user list Trac site <-> scipy-dev list It can also mean that the -user site will have a bit more emphasis on polish and information for newcomers, while the Trac one is much more technically oriented, and it can be labeled a 'hard hat area'. As long as both sites' front pages prominently indicate their purpose and a link to the other for those who end up in the wrong place by accident, this could be a nice solution. While it seems like more work (two systems), in fact it may actually be less in the long run (as long as moin is very easy to maintain). Two well-separated systems may be less work to manage than one hybrid trying to do two things. In addition, this would allow each SVN repo (core/full, which Travis wants to keep separate) to maintain its own Trac instance without confusing the users, who can gather all the info they want from the unified Moin front. 
Cheers, f From oliphant at ee.byu.edu Thu Nov 10 19:29:04 2005 From: oliphant at ee.byu.edu (Travis Oliphant) Date: Thu, 10 Nov 2005 17:29:04 -0700 Subject: [SciPy-dev] SciPy SVN altered so that newscipy and newcore are now the trunk. Message-ID: <4373E5D0.5050305@ee.byu.edu> I've finished altering the subversion repository of SciPy so that the new development is taking place on the trunk of both scipy and scipy_core. The old versions are under branches named oldcore and oldscipy. Get the new repositor(y,ies) using: *Core*: svn co http://svn.scipy.org/svn/scipy_core/trunk core *Full SciPy*: svn co http://svn.scipy.org/svn/scipy/trunk scipy Doing both will place two directories named core and scipy in your current directory containing the current state of both repositories. python setup.py install should work in each directory. The Freeze is now over. I want to track down the bug that Christopher Hanley noted and another f2py-related bug before making a release, which I expect to happen by the weekend. -Travis From chanley at stsci.edu Thu Nov 10 20:23:19 2005 From: chanley at stsci.edu (Christopher Hanley) Date: Thu, 10 Nov 2005 20:23:19 -0500 Subject: [SciPy-dev] scipy_core revision 1456 segfaults on Solaris In-Reply-To: <4373AFCD.6090203@ee.byu.edu> Message-ID: <200511110123.CTN08480@stsci.edu> > > > O.K. so it's in the allclose code of masked arrays. > > Could you look through that code and test each line separately. Just > placing print statements in your copy of scipy/base/ma.py can help. > > Alternatively copy the allclose code to a test script and spread out the > code so only one thing gets done at a time.... > > If you can isolate the problem to a single extension-code call, it can > really help me. > > -Travis > O.K. Travis. I will track down this problem in the morning. 
Chris From byrnes at bu.edu Thu Nov 10 20:38:32 2005 From: byrnes at bu.edu (John Byrnes) Date: Thu, 10 Nov 2005 20:38:32 -0500 Subject: [SciPy-dev] pilutils bug Message-ID: <20051111013832.GB20658@localhost.localdomain> Hello all, I've discovered a bug in the pilutils fromimage function. On line 93 of Lib/utils/pilutil.py, arr.shape expects a tuple, apparently it is not receiving one. I fixed it by casting the new shape to a tuple. I'm sure there is a better way of fixing it, but this is what worked for me. Patch is attached. Best regards, John Byrnes -------------- next part -------------- Index: Lib/utils/pilutil.py =================================================================== --- Lib/utils/pilutil.py (revision 1431) +++ Lib/utils/pilutil.py (working copy) @@ -78,7 +78,7 @@ shape = list(im.size) shape.reverse() if mode == 'P': - arr.shape = shape + arr.shape = tuple(shape) if im.palette.rawmode != 'RGB': print "Warning: Image has invalid palette." return arr @@ -90,7 +90,7 @@ shape += [3] elif mode in ['CMYK','RGBA']: shape += [4] - arr.shape = shape + arr.shape = tuple(shape) if adjust: arr = (arr != 0) return arr From brendansimons at yahoo.ca Thu Nov 10 21:37:30 2005 From: brendansimons at yahoo.ca (Brendan Simons) Date: Thu, 10 Nov 2005 21:37:30 -0500 Subject: [SciPy-dev] Some Feedback - Best place for large recipe? In-Reply-To: References: Message-ID: <867347BC-4525-47D7-B2FC-26AECBC6EAC3@yahoo.ca> I checked out the accessibleSciPy wiki, and it's the first time I've seen any reference to the Scipy cookbook. http://www.scipy.org/wikis/accessible_scipy/ScipyCookbook I have a couple of modules which define a fairly handy framework for fitting linear and non-linear 2D shapes to data with uncertainty (plus concrete classes polynomials, sinusoids and circles). I don't think the code is good enough yet to include in Scipy, but it passes all my test suites, and since curve fitting is so common, others might be interested. 
I thought I might add it to the Scipy Cookbook, but it isn't clear exactly how recipes should be added to the wiki. Plus if the whole thing is switching to Trac, it might be better to wait. What's the best way then to add a scipy recipe? Brendan -- Brendan Simons __________________________________________________ Do You Yahoo!? Tired of spam? Yahoo! Mail has the best spam protection around http://mail.yahoo.com From oliphant at ee.byu.edu Thu Nov 10 21:42:06 2005 From: oliphant at ee.byu.edu (Travis Oliphant) Date: Thu, 10 Nov 2005 19:42:06 -0700 Subject: [SciPy-dev] pilutils bug In-Reply-To: <20051111013832.GB20658@localhost.localdomain> References: <20051111013832.GB20658@localhost.localdomain> Message-ID: <437404FE.4090809@ee.byu.edu> John Byrnes wrote: >Hello all, > >I've discovered a bug in the pilutils fromimage function. > >On line 93 of Lib/utils/pilutil.py, arr.shape expects a tuple, apparently >it is not receiving one. I fixed it by casting the new shape to a tuple. I'm >sure there is a better way of fixing it, but this is what worked for me. > > Thanks for the bug report. I changed arr.shape setting to accept any sequence. -Travis From cookedm at physics.mcmaster.ca Thu Nov 10 22:21:53 2005 From: cookedm at physics.mcmaster.ca (David M. Cooke) Date: Thu, 10 Nov 2005 22:21:53 -0500 Subject: [SciPy-dev] [Numpy-discussion] SciPy SVN altered so that newscipy and newcore are now the trunk. In-Reply-To: <4373E5D0.5050305@ee.byu.edu> (Travis Oliphant's message of "Thu, 10 Nov 2005 17:29:04 -0700") References: <4373E5D0.5050305@ee.byu.edu> Message-ID: Travis Oliphant writes: > I've finished altering the subversion repository of SciPy so that the > new development is taking place on the trunk of both scipy and scipy_core. > > The old versions are under branches named oldcore and oldscipy. 
> > Get the new repositor(y,ies) using: > > *Core*: > svn co http://svn.scipy.org/svn/scipy_core/trunk core > > *Full SciPy*: > svn co http://svn.scipy.org/svn/scipy/trunk scipy > > Doing both will place two directories named core and scipy in your > current directory containing the current state of both repositories. Alternatively, instead of doing a full checkout, you can switch your working copies: Within your (old) newcore directory: svn sw http://svn.scipy.org/svn/scipy_core/trunk and same for full scipy. -- |>|\/|< /--------------------------------------------------------------------------\ |David M. Cooke http://arbutus.physics.mcmaster.ca/dmc/ |cookedm at physics.mcmaster.ca From byrnes at bu.edu Fri Nov 11 14:23:02 2005 From: byrnes at bu.edu (John Byrnes) Date: Fri, 11 Nov 2005 14:23:02 -0500 Subject: [SciPy-dev] toimage broken Message-ID: <20051111192302.GA19207@localhost.localdomain> Hello all, The following script throws an error ----Code---- import scipy as sp a = sp.zeros((10,10)) sp.utils.toimage(a) --------- ----Error---- AttributeError: 'scipy.ndarray' object has no attribute 'typecode' --------- Presumably this is a problem in scipy_core? Regards, John -- Imbalance of power corrupts and monopoly of power corrupts absolutely. -- Genji -------------- next part -------------- A non-text attachment was scrubbed... 
Name: signature.asc Type: application/pgp-signature Size: 189 bytes Desc: Digital signature URL: From stephen.walton at csun.edu Fri Nov 11 15:16:14 2005 From: stephen.walton at csun.edu (Stephen Walton) Date: Fri, 11 Nov 2005 12:16:14 -0800 Subject: [SciPy-dev] newcore atlas info on Gentoo In-Reply-To: <200511100928.04217.dd55@cornell.edu> References: <200510181341.12351.dd55@cornell.edu> <200510191136.31418.dd55@cornell.edu> <43566FB3.4090108@csun.edu> <200511100928.04217.dd55@cornell.edu> Message-ID: <4374FC0E.1090700@csun.edu> Darren Dale wrote: >Well, in my case, editing site.cfg alone does not work, because my fortran >blas libraries are named "blas" instead of "f77blas". system_info.py has >"f77blas" hardcoded in several places... > Only in the parts related to ATLAS, because the ATLAS-generated BLAS libraries are named this. I deliberately moved my libf77blas.a and libcblas.a to a place where distutils doesn't look, and did a "touch /usr/lib/libblas.a" to produce a bogus BLAS library. "python system_info.py" then produces, in part:

blas_info:
  ( library_dirs = /usr/local/lib:/opt/lib:/usr/lib )
  ( paths: /usr/lib/libblas.a )
  ( library_dirs = /usr/local/lib:/opt/lib:/usr/lib )
  FOUND:
    libraries = ['blas']
    library_dirs = ['/usr/lib']
    language = f77

Is your libblas.a in a different directory than the three listed as "library_dirs" up there? If so, you can change that too in site.cfg.
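[For readers hitting the same build problem: a site.cfg along these lines is what the thread is describing. The [atlas] section and the atlas_libs key follow the system_info conventions quoted later in the thread; the [blas] section and all paths are illustrative assumptions only — adjust them to wherever your libraries actually live.]

```ini
; Hypothetical example only -- point library_dirs at your own libraries.
[blas]
library_dirs = /usr/lib
blas_libs = blas

[atlas]
library_dirs = /usr/lib/blas/atlas
; For overriding the names of the atlas libraries:
atlas_libs = lapack, blas, cblas, atlas
```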
From dd55 at cornell.edu Fri Nov 11 16:08:49 2005 From: dd55 at cornell.edu (Darren Dale) Date: Fri, 11 Nov 2005 16:08:49 -0500 Subject: [SciPy-dev] newcore atlas info on Gentoo In-Reply-To: <4374FC0E.1090700@csun.edu> References: <200510181341.12351.dd55@cornell.edu> <200511100928.04217.dd55@cornell.edu> <4374FC0E.1090700@csun.edu> Message-ID: <200511111608.49972.dd55@cornell.edu> On Friday 11 November 2005 03:16 pm, Stephen Walton wrote: > Darren Dale wrote: > >Well, in my case, editing site.cfg alone does not work, because my fortran > >blas libraries are named "blas" instead of "f77blas". system_info.py has > >"f77blas" hardcoded in several places... > > Only in the parts related to ATLAS, because the ATLAS-generated BLAS > libraries are named this. The gentoo-science group has set up the math library ebuilds to create symlinks in /usr/lib, which can be changed to point to different external libraries (the blas libraries created by the blas-atlas ebuild are located at /usr/lib/blas/atlas/libblas.*). The thinking is that one could easily switch from using ATLAS to ACML, for example. site.cfg offers an opportunity to override the names of the ATLAS libraries, but maybe this is misleading:

# For overriding the names of the atlas libraries:
atlas_libs = lapack, blas, cblas, atlas

Darren From oliphant at ee.byu.edu Fri Nov 11 16:29:13 2005 From: oliphant at ee.byu.edu (Travis Oliphant) Date: Fri, 11 Nov 2005 14:29:13 -0700 Subject: [SciPy-dev] toimage broken In-Reply-To: <20051111192302.GA19207@localhost.localdomain> References: <20051111192302.GA19207@localhost.localdomain> Message-ID: <43750D29.2020409@ee.byu.edu> John Byrnes wrote: >Hello all, > >The following script throws an error > >----Code---- >import scipy as sp >a = sp.zeros((10,10)) >sp.utils.toimage(a) >--------- > >----Error---- >AttributeError: 'scipy.ndarray' object has no attribute 'typecode' >--------- > >Presumably this is a problem in scipy_core? > > > No, it's a problem in scipy.
This file was not completely converted. Thanks for the test. -Travis From jonathan.taylor at utoronto.ca Sat Nov 12 02:00:10 2005 From: jonathan.taylor at utoronto.ca (Jonathan Taylor) Date: Sat, 12 Nov 2005 07:00:10 +0000 Subject: [SciPy-dev] Default type behaviour of array Message-ID: Hi, I have had quite a bit of success moving some of my R scripts to scipy. Today I was creating a matrix of zeros and assigning some elements uniformly distributed values from -1 to 1. I couldn't figure out for quite a bit why my matrix remained zeroed. So I guess that is because zeros returns an int array by default. I would have expected a float array by default. Maybe there is a good reason for this though. Also, maybe it should complain when you put floats into an integer array instead of just rounding the elements. That is, maybe you should have to explicitly round the elements? Just some thoughts. Jon. From Fernando.Perez at colorado.edu Sat Nov 12 02:21:44 2005 From: Fernando.Perez at colorado.edu (Fernando Perez) Date: Sat, 12 Nov 2005 00:21:44 -0700 Subject: [SciPy-dev] Default type behaviour of array In-Reply-To: References: Message-ID: <43759808.2010409@colorado.edu> Jonathan Taylor wrote: > Hi, > > I have had quite a bit of success moving some of my R scripts to scipy. > Today I was creating a matrix of zeros and assigning some elements > uniformly distributed values from -1 to 1. I couldn't figure out for quite > a bit why my matrix remained zeroed. So I guess that is because zeros > returns an int array by default. I would have expected a float array by > default. Maybe there is a good reason for this though. > > Also, maybe it should complain when you put floats into an integer array > instead of just rounding the elements. That is, maybe you should have > to explicitly round the elements? I think that, while perhaps a bit surprising at first, this is one of those cases where you just have to 'learn to use the library'.
The point is that you should think of scipy arrays as C-style, strongly typed variables. Hopefully this C snippet will illustrate the issue:

abdul[~]> cat trunc.c
#include <stdio.h>

int main(void)
{
    int n;
    float x;

    x = 3.14;
    n = x;

    printf("The float x = %g\n", x);
    printf("The int n = x = %d\n", n);

    return(0);
}
abdul[~]> ./trunc
The float x = 3.14
The int n = x = 3

Once you've declared your arrays as 'int' type, assignment will silently truncate. In fact, this is often used as a feature in C (for table interpolation codes, for example). I would not want a typecheck made on every assignment, as every layer of safety added to arrays carries a performance price. Scipy arrays need to balance safety with speed, and in many cases it's a valid choice to err on the side of speed, at least I think so. I recently lobbied for a change to the assignment semantics for the 'O' type arrays, but those are an altogether special type already, and I think I managed to make a half-decent case that the existing behavior was just too bizarre to really be of any use. But in this case, I'd rather keep the 'C-like' behavior so that we don't need more if statements in the internals. At least that's my take on it, perhaps Travis or others will view it differently. Cheers, f From schofield at ftw.at Sat Nov 12 07:17:43 2005 From: schofield at ftw.at (Ed Schofield) Date: Sat, 12 Nov 2005 13:17:43 +0100 (CET) Subject: [SciPy-dev] Default type behaviour of array In-Reply-To: <43759808.2010409@colorado.edu> References: <43759808.2010409@colorado.edu> Message-ID: On Sat, 12 Nov 2005, Fernando Perez wrote: > Jonathan Taylor wrote: > > Hi, > > > > I have had quite a bit of success moving some of my R scripts to scipy. > > Today I was creating a matrix of zeros and assigning some elements > > uniformly distributed values from -1 to 1. I couldn't figure out for quite > > a bit why my matrix remained zeroed. So I guess that is because zeros > > returns an int array by default.
I would have expected a float array by > > default. Maybe there is a good reason for this though. > > > > Also, maybe it should complain when you put floats into an integer array > > instead of just rounding the elements. That is, maybe you should have > > to explicity round the elements? > > I think that, while perhaps a bit surprising at first, this is one of those > cases where you just have to 'learn to use the library'. I also think that C-like casting is good for arrays. We could think about how to avoid such usability problems by making more extensive use of 'matrix' objects. Perhaps we could make these behave more like the matrices users of other scientific environments (R / S, Octave / Matlab ...) would expect. I've run some tests with a small change to make the default data type for matrices 'float', and all tests pass as before. (See diff below). It seems matrix objects are hardly used: two files in linalg/, one unused import in signal/ltisys.py, and otherwise not at all. Currently matrix objects redefine the * operator to the inner product; making their default data type 'float' would, I think, have a similar usability benefit. Making matrices use floats by default wouldn't solve the specific problem Jonathan described, because ones() and zeros() return arrays, not matrices. But we could think about changing the casting behaviour of matrices to do the safe thing: so if A is an int matrix and B is a float matrix, 'A += B' could convert A to a float matrix, and similarly for other types, e.g. float += complex. Then we'd have a nice distinction between arrays, which are efficient and have C-like casting, and matrices, which are less efficient but safe and intuitive. If you agree I'd be happy to work on this ... 
-- Ed

Index: matrix.py
===================================================================
--- matrix.py (revision 1474)
+++ matrix.py (working copy)
@@ -55,10 +55,12 @@
             if (dtype2 is dtype) and (not copy):
                 return data
             return data.astype(dtype)
-
-        if dtype is None:
-            if isinstance(data, N.ndarray):
+        elif isinstance(data, N.ndarray):
+            if dtype is None:
                 dtype = data.dtype
+        else:
+            if dtype is None:
+                dtype = float
 
         intype = N.obj2dtype(dtype)
         if isinstance(data, types.StringType):

From aisaac at american.edu Sat Nov 12 08:06:06 2005 From: aisaac at american.edu (Alan G Isaac) Date: Sat, 12 Nov 2005 08:06:06 -0500 Subject: [SciPy-dev] Default type behaviour of array In-Reply-To: <43759808.2010409@colorado.edu> References: <43759808.2010409@colorado.edu> Message-ID: > Jonathan Taylor wrote: >> Also, maybe it should complain when you put floats into >> an integer array instead of just rounding the elements. >> That is, maybe you should have to explicitly round the >> elements? On Sat, 12 Nov 2005, Fernando Perez apparently wrote: > I think that, while perhaps a bit surprising at first, > this is one of those cases where you just have to 'learn > to use the library'. The point is that you should think > of scipy arrays as C-style, strongly typed variables. > interpolation codes, for example). ... I would not want > a typecheck made on every assignment, as every layer of > safety added to arrays carries a performance price. It seems reasonable to be able to "turn on" type checking as part of a debugging facility, however. Especially when silent truncation is likely to surprise some users (as in this case).
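[The silent truncation discussed in this thread is easy to reproduce. The sketch below uses the modern numpy spelling; note that today's numpy.zeros defaults to float64, so the integer dtype is requested explicitly, whereas the scipy_core of the day returned an int array by default.]

```python
import numpy as np

# Request an integer array explicitly (scipy_core's zeros() returned
# an int array by default at the time of this thread).
a = np.zeros(3, dtype=int)

# Assigning floats into an int array truncates toward zero, silently,
# just like assigning a float to an int variable in C.
a[0] = 3.14
a[1] = -2.9

print(a)  # fractional parts are silently dropped
```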
Cheers, Alan Isaac From kamrik at gmail.com Sat Nov 12 10:54:15 2005 From: kamrik at gmail.com (Mark Koudritsky) Date: Sat, 12 Nov 2005 17:54:15 +0200 Subject: [SciPy-dev] Some Feedback In-Reply-To: <4373DC84.4030600@colorado.edu> References: <200511100352.jAA3quT1013377@oobleck.astro.cornell.edu> <200511100407.jAA47YW0013419@oobleck.astro.cornell.edu> <4372D509.40009@ucsd.edu> <437384B3.3060308@enthought.com> <437396B9.5080301@astraw.com> <4373DC84.4030600@colorado.edu> Message-ID: We don't have to see Trac+Moin as 2 separate systems, which, kind of, double each other. I would prefer to see it like this: Trac is a project management system with bug tracking system etc. But the built in wiki engine is not as mature and feature rich as Moin. So we can use Trac and use the Moin instead of Trac's native wiki engine (as a single composite system) The Ubuntu Linux project does something similar, they use Plone + Moin + Malone[1] [1] Malone: https://launchpad.net/malone is an interesting bug tracking *service* A couple of words in favour of Moin: 1) It has a feature of "subscribing to pages". Whenever a page you subscribed to is changed, you get an email (which also contains the added text). It's almost like there is a mailing list attached to every page in the wiki. This can greatly accelerate things. 2) There is a simple patch for Moin which adds a LaTeX-like math markup which is displayed as MathML. Here is an example of a page with math on a student wiki I maintain for our institute (requires Firefox or MS Explorer+MathPlayer plugin for MathML rendering) http://www.weizmann.ac.il/student-wiki/Physics/Courses/IntroductoryAlgebra/GroupRepresentations (MathML has the drawback of poor support by some browsers, but it seems to improve with time.) On 11/11/05, Fernando Perez wrote: > Andrew Straw wrote: > > In addition to Trac, we could consider MoinMoin [1] as the "public face" > > of scipy.org. 
Let me declare immediately that I'm no Trac expert, so if > > Trac can do all of these things, great, and please ignore this > > suggestion. Reading the descriptions of Trac on this thread as well as > > my limited experience as a user is all that I'm going on here. > > I kind of like Andrew's approach on this one, as in a sense it provides at the > website level an equivalence with the mailing lists: > > Moin site <-> scipy-user list > Trac site <-> scipy-dev list > > It can also mean that the -user site will have a bit more emphasis on polish > and information for newcomers, while the Trac one is much more technically > oriented, and it can be labeled a 'hard hat area'. As long as both sites' > front pages prominently indicate their purpose and a link to the other for > those who end up in the wrong place by accident, this could be a nice solution. > > While it seems like more work (two systems), in fact it may actually be less > in the long run (as long as moin is very easy to maintain). Two > well-separated systems may be less work to manage than one hybrid trying to do > two things. > > In addition, this would allow each SVN repo (core/full, which Travis wants to > keep separate) to maintain its own Trac instance without confusing the users, > who can gather all the info they want from the unified Moin front. > > Cheers, > > f > > _______________________________________________ > Scipy-dev mailing list > Scipy-dev at scipy.net > http://www.scipy.net/mailman/listinfo/scipy-dev > From jonathan.taylor at utoronto.ca Sat Nov 12 19:11:24 2005 From: jonathan.taylor at utoronto.ca (Jonathan Taylor) Date: Sun, 13 Nov 2005 00:11:24 +0000 Subject: [SciPy-dev] Multivariate normal support gone? Message-ID: I see there used to be a multivariate_normal rv. I can't seem to find it. Is it gone now? Jon. 
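[The function is in fact still available, as Robert Kern's reply further down in this thread shows. A short usage sketch of the interface described in that docstring, written here with the equivalent modern numpy.random spelling:]

```python
import numpy as np

mean = np.zeros(2)   # 1-D array: one entry per dimension
cov = np.eye(2)      # square covariance matrix matching mean's length

# First form: a single draw, returned as a 1-D array of length 2.
x = np.random.multivariate_normal(mean, cov)
assert x.shape == (2,)

# Second form: a batch of draws with the requested leading shape;
# the last axis holds each individual multivariate normal draw.
samples = np.random.multivariate_normal(mean, cov, (3, 4))
assert samples.shape == (3, 4, 2)
```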
From strawman at astraw.com Sat Nov 12 20:31:00 2005 From: strawman at astraw.com (Andrew Straw) Date: Sat, 12 Nov 2005 17:31:00 -0800 Subject: [SciPy-dev] patch for arraymethod.c and new segfault in masked arrays Message-ID: <43769754.10707@astraw.com> Here's a patch that reverts some of the changes made in revision 1471. I was getting segfaults because arraymethods.c's _ARET() macro assumes its input is PyArrayObject*, whereas in these cases it's simply PyObject*. But now I'm getting segfaults in a new spot: masked arrays. I'm enclosing the patch and the gdb session for the masked array segfault. I haven't pursued the masked array segfault any further. (In case it matters, this is on a linux AMD64 system.) -------------- next part -------------- An embedded and charset-unspecified text was scrubbed... Name: gdb_session.txt URL: -------------- next part -------------- An embedded and charset-unspecified text was scrubbed... Name: pyobject_patch URL: From Fernando.Perez at colorado.edu Sat Nov 12 21:09:54 2005 From: Fernando.Perez at colorado.edu (Fernando Perez) Date: Sat, 12 Nov 2005 19:09:54 -0700 Subject: [SciPy-dev] patch for arraymethod.c and new segfault in masked arrays In-Reply-To: <43769754.10707@astraw.com> References: <43769754.10707@astraw.com> Message-ID: <4376A072.1080600@colorado.edu> Andrew Straw wrote: > Here's a patch that reverts some of the changes made in revision 1471. I > was getting segfaults because arraymethods.c's _ARET() macro assumes its > input is PyArrayObject*, whereas in these cases it's simply PyObject*. I've committed it because at least your patch does allow a current build to complete the scipy-full test suite: ---------------------------------------------------------------------- Ran 1366 tests in 106.441s OK Now, I don't know that part of the code well, so it may be that a different solution is better in the long run. 
But just minutes before your email came in, I updated svn and tried to build scipy on my home box for the first time, and was getting segfaults on the test suite. I hadn't looked into the problem yet, so you saved me a lot of work :) At least I can confirm that your patch allows the test suite to complete, so even if in the long run Travis wants to provide a different solution, this allows those running up-to-date SVN to continue using it without these segfaults. Cheers, f From rkern at ucsd.edu Sat Nov 12 22:23:40 2005 From: rkern at ucsd.edu (Robert Kern) Date: Sat, 12 Nov 2005 19:23:40 -0800 Subject: [SciPy-dev] Multivariate normal support gone? In-Reply-To: References: Message-ID: <4376B1BC.8000505@ucsd.edu> Jonathan Taylor wrote: > I see there used to be a multivariate_normal rv. I can't seem to find > it. Is it gone now? In [1]: from scipy import random In [2]: random.multivariate_normal? Type: builtin_function_or_method Base Class: String Form: Namespace: Interactive Docstring: Return an array containing multivariate normally distributed random numbers with specified mean and covariance. multivariate_normal(mean, cov) -> random values multivariate_normal(mean, cov, [m, n, ...]) -> random values mean must be a 1 dimensional array. cov must be a square two dimensional array with the same number of rows and columns as mean has elements. The first form returns a single 1-D array containing a multivariate normal. The second form returns an array of shape (m, n, ..., cov.shape[0]). In this case, output[i,j,...,:] is a 1-D array containing a multivariate normal. -- Robert Kern rkern at ucsd.edu "In the fields of hell where the grass grows high Are the graves of dreams allowed to die." 
-- Richard Harter From pearu at scipy.org Sun Nov 13 01:17:59 2005 From: pearu at scipy.org (Pearu Peterson) Date: Sun, 13 Nov 2005 00:17:59 -0600 (CST) Subject: [SciPy-dev] Building scipy with GNU Fortran 95 (gcc 4) Message-ID: Hi, The support for building scipy with GNU Fortran 95 (gcc 4) compiler is now added to SVN. To build scipy using gfortran, use python setup.py config_fc --fcompiler=gnu95 build Pearu From rkern at ucsd.edu Sun Nov 13 02:37:25 2005 From: rkern at ucsd.edu (Robert Kern) Date: Sat, 12 Nov 2005 23:37:25 -0800 Subject: [SciPy-dev] Building scipy with GNU Fortran 95 (gcc 4) In-Reply-To: References: Message-ID: <4376ED35.3030308@ucsd.edu> Pearu Peterson wrote: > Hi, > > The support for building scipy with GNU Fortran 95 (gcc 4) compiler is now > added to SVN. To build scipy using gfortran, use > > python setup.py config_fc --fcompiler=gnu95 build We need to be careful with names here. There is gfortran[1], which is officially part of the GNU Compiler Collection as of gcc 4. There is also g95[2], which isn't[3]. I would prefer that the --fcompiler name be gfortran to help avoid confusion. [1] http://gcc.gnu.org/fortran/ [2] http://www.g95.org/ [3] http://gcc.gnu.org/wiki/TheOtherGCCBasedFortranCompiler -- Robert Kern rkern at ucsd.edu "In the fields of hell where the grass grows high Are the graves of dreams allowed to die." -- Richard Harter From pearu at scipy.org Sun Nov 13 02:37:21 2005 From: pearu at scipy.org (Pearu Peterson) Date: Sun, 13 Nov 2005 01:37:21 -0600 (CST) Subject: [SciPy-dev] Building scipy with GNU Fortran 95 (gcc 4) In-Reply-To: <4376ED35.3030308@ucsd.edu> References: <4376ED35.3030308@ucsd.edu> Message-ID: On Sat, 12 Nov 2005, Robert Kern wrote: > Pearu Peterson wrote: >> Hi, >> >> The support for building scipy with GNU Fortran 95 (gcc 4) compiler is now >> added to SVN. To build scipy using gfortran, use >> >> python setup.py config_fc --fcompiler=gnu95 build > > We need to be careful with names here. 
There is gfortran[1], which is > officially part of the GNU Compiler Collection as of gcc 4. > > There is also g95[2], which isn't[3]. > > I would prefer that the --fcompiler name be gfortran to help avoid > confusion. > > [1] http://gcc.gnu.org/fortran/ > [2] http://www.g95.org/ > [3] http://gcc.gnu.org/wiki/TheOtherGCCBasedFortranCompiler Yes, for g95[2] we already have --fcompiler=g95. I chose --fcompiler=gnu95 to be similar to --fcompiler=gnu as they are closely related within the gcc project. As I understand from the gfortran man pages, gfortran will replace g77 in the future, and so, in the future --fcompiler=gnu should look for gfortran as well. For now we need to keep g77 and gfortran support separate as g77-3.4 is the recommended Fortran compiler by the gcc developers. So, I would still prefer the gnu95 name over gfortran. Pearu From Fernando.Perez at colorado.edu Sun Nov 13 04:24:37 2005 From: Fernando.Perez at colorado.edu (Fernando Perez) Date: Sun, 13 Nov 2005 02:24:37 -0700 Subject: [SciPy-dev] Some Feedback In-Reply-To: References: <200511100352.jAA3quT1013377@oobleck.astro.cornell.edu> <200511100407.jAA47YW0013419@oobleck.astro.cornell.edu> <4372D509.40009@ucsd.edu> <437384B3.3060308@enthought.com> <437396B9.5080301@astraw.com> <4373DC84.4030600@colorado.edu> Message-ID: <43770655.8020404@colorado.edu> Mark Koudritsky wrote: > We don't have to see Trac+Moin as 2 separate systems, which, kind of, > double each other. I would prefer to see it like this: > Trac is a project management system with bug tracking system etc. But > the built in wiki engine is not as mature and feature rich as Moin. So > we can use Trac and use the Moin instead of Trac's native wiki engine > (as a single composite system) Well, does this mean losing the Trac wiki integration? One interesting feature of Trac is that in any of its wiki pages, you can say for example: ticket:NN and this automatically creates a proper Wiki link to Ticket #NN. The same thing happens for milestones, reports, etc.
This is just to point out that the Trac wiki is very tightly integrated with the specific development/subversion features of Trac. In Trac, the wiki is an integral part of the system, which I find very nice. It may not be as flashy as other wikis, but it stays out of the way and lets you get work done quickly. Note that, due to the Trac design restriction of one SVN repo per Trac instance (see http://projects.edgewall.com/trac/wiki/TracFaq#can-i-manage-multiple-projects-from-a-single-installation-of-trac for details), at the very least, we will have to deal with Trac-scipycore and Trac-scipyfull as two separate Trac instances. This leaves us with:

Facts:

- trac has a one svn repo/one trac instance model.

- trac by default ties logins to the svn developers with commit access. We want non-developers with logins. Robert suggested a plugin which may take care of this issue.

- We ABSOLUTELY must disable anonymous wiki edits. The ipython Trac is hardly advertised anywhere (one link in the ipython main page, that's all) and today Robert warned me of finding Wiki spam in it. Fortunately the problem was caught after the scumbags had made a single new page, and I was able to simply delete it right away. But I had to immediately disable anonymous edits, which means that right now only those with ipython commit rights can make wiki changes. The Enthought Trac was similarly defaced a few months ago, and I don't think that one is advertised at all. These people WILL find any open wiki and fill it with their junk.

My opinion:

- we may benefit from having a separate, more cleanly organized wiki for user-facing support. This is where things like the TopicalSoftware pages, cookbooks, documentation, etc. would live.

As a datapoint, I personally would like to have such a separation for ipython as well, from the experience of using Trac over the last few months. Here's one possibility that came to my mind: we could make a mock scipy-web SVN repo for this purpose only.
This would allow the creation of a Trac instance for the website, ASP project, documentation, etc, whose user accounts would be separate from those of the developer repos. While still requiring (if we block anonymous write access and the Trac login plugin doesn't work) admin intervention for new user creation, at least there's a clear separation between the users with write access here and the developers with write access to the core/full repositories. Now, it may well be that the Plone setup can be fixed to have more reasonable performance and the full CMS power it brings may prove useful, I don't know. But I'm finding Trac to be lightweight enough to be really easy to use, and with the TracNav and TOC plugins to make it easy to add navigational aids to a site, it doesn't have to produce the wiki-scatter websites which are so common (I installed these two plugins for the ipython Trac today, they are trivial to add and use). As a way to test this, I am going to move the ipython main user site to precisely such a Trac environment. Due to a permissions issue with Apache for which I need help from root it didn't happen today, but I am going to give it a try. I'll be happy to report back if I see any glaring problem with this idea in actual use (though obviously ipython sees much less use than scipy, so I may well miss something). Cheers, f From Fernando.Perez at colorado.edu Sun Nov 13 04:28:15 2005 From: Fernando.Perez at colorado.edu (Fernando Perez) Date: Sun, 13 Nov 2005 02:28:15 -0700 Subject: [SciPy-dev] Default type behaviour of array In-Reply-To: References: <43759808.2010409@colorado.edu> Message-ID: <4377072F.3030503@colorado.edu> Alan G Isaac wrote: > On Sat, 12 Nov 2005, Fernando Perez apparently wrote: > >>I think that, while perhaps a bit surprising at first, >>this is one of those cases where you just have to 'learn >>to use the library'. The point is that you should think >>of scipy arrays as C-style, strongly typed variables. 
>>interpolation codes, for example). ... I would not want >>a typecheck made on every assignment, as every layer of >>safety added to arrays carries a performance price. > > > It seems reasonable to be able to "turn on" type checking as > part of a debugging facility, however. Especially when > silent truncation is likely to surprise some users (as in > this case). Yes, a 'debug mode' would certainly be nice. Both f2py (--debug-capi) and blitz++ (the BZDEBUG macro) have debug modes which, while typically very verbose and/or slow (blitz grinds to a crawl in this mode), can be absolute life-savers when you really need them. I can think of a couple of ways of doing this, but it would require writing code in the internals which I'm not exactly volunteering for right now :) Cheers, f From kamrik at gmail.com Sun Nov 13 08:00:13 2005 From: kamrik at gmail.com (Mark Koudritsky) Date: Sun, 13 Nov 2005 15:00:13 +0200 Subject: [SciPy-dev] Some Feedback In-Reply-To: <43770655.8020404@colorado.edu> References: <200511100352.jAA3quT1013377@oobleck.astro.cornell.edu> <200511100407.jAA47YW0013419@oobleck.astro.cornell.edu> <4372D509.40009@ucsd.edu> <437384B3.3060308@enthought.com> <437396B9.5080301@astraw.com> <4373DC84.4030600@colorado.edu> <43770655.8020404@colorado.edu> Message-ID: Fernando Perez wrote: > Well, does this mean losing the Trac wiki integration? One interesting > feature of Trac is that in any of its wiki pages, you can say for example: > > ticket:NN > > and this automatically creates a proper Wiki link to Ticket #NN. The same > thing happens for milestones, reports, etc. I agree that losing this integration would be a pity. I'm not well familiar with Trac. Exactly how tight is this integration? The ticket:NN feature can be easily duplicated in Moin using Moin's "InterWiki links". Moin has an editable dictionary of short tags and corresponding URLs. A syntax like tag:PageName is translated to URL+PageName and converted to a link.
For example WikiPedia:Kinase would point to wikipedia article on Kinase. So we can simply define the tag "ticket" to point to the correct URL in Trac so that URL+NN would open the ticket NN. Same goes for milestones reports etc. For the more sophisticated stuff, Moin has macros which are pretty easy to write. I also played with extending the core syntax of Moin it's also not that difficult. On 11/13/05, Fernando Perez wrote: > Mark Koudritsky wrote: > > We don't have to see Trac+Moin as 2 separate systems, which, kind of, > > double each other. I would prefer to see it like this: > > Trac is a project management system with bug tracking system etc. But > > the built in wiki engine is not as mature and feature rich as Moin. So > > we can use Trac and use the Moin instead of Trac's native wiki engine > > (as a single composite system) > > Well, does this mean losing the Trac wiki integration? One interesting > feature of Trac is that in any of its wiki pages, you can say for example: > > ticket:NN > > and this automatically creates a proper Wiki link to Ticket #NN. The same > thing happens for milestones, reports, etc. > > This is just to point out that the Trac wiki is very tightly integrated with > the specific development/subversion features of Trac. In Trac, the wiki is an > integral part of the system, which I find very nice. It may not be as flashy > as other wikis, but it stays out of the way and lets you get work done quickly. > > Note that, due to the Trac design restriction of one SVN repo per Trac > instance (see > http://projects.edgewall.com/trac/wiki/TracFaq#can-i-manage-multiple-projects-from-a-single-installation-of-trac) > for details), at the very least, we will have to deal with Trac-scipycore and > Trac-scpiyfull as two separate Trac instances. > > This leaves us with: > > Facts: > > - trac has a one svn repo/one trac instance model. > > - trac by default ties logins to the svn developers with commit access. We > want non-developers with logins. 
Robert suggested a plugin which may take > care of this issue. > > - We ABSOLUTELY must disable anonymous wiki edits. The ipython Trac is hardly > advertised anywhere (one link in the ipython main page, that's all) and today > Robert warned me of finding Wiki spam in it. Fortunately the problem was > caught after the scumbags had made a single new page, and I was able to simply > delete it right away. But I had to immediately disable anonymous edits, which > means that right now only those with ipython commit rights can make wiki > changes. The Enthought Trac was similarly defaced a few months ago, and I > don't think that one is advertised at all. These people WILL find any open > wiki and fill it with their junk. > > > My opinion: > > - we may benefit from having a separate, more cleanly organized wiki for > user-facing support. This is where things like the TopicalSoftware pages, > cookbooks, documentation, etc. would live. > > > As a datapoint, I personally would like to have such a separation for ipython > as well, from the experience of using Trac over the last few months. > > > Here's one possibility that came to my mind: we could make a mock scpipy-web > SVN repo for this purpose only. This would allow the creation of a Trac > instance for the website, ASP project, documentation, etc, whose user accounts > would be separate from those of the developer repos. While still requiring > (if we block anonymous write access and the Trac login plugin doesn't work) > admin intervention for new user creation, at least there's a clear separation > between the users with write access here and the developers with write access > to the core/full repositories. > > Now, it may well be that the Plone setup can be fixed to have more reasonable > performance and the full CMS power it brings may prove useful, I don't know. 
> But I'm finding Trac to be lightweight enough to be really easy to use, and > with the TracNav and TOC plugins to make it easy to add navigational aids to a > site, it doesn't have to produce the wiki-scatter websites which are so common > (I installed these two plugins for the ipython Trac today, they are trivial to > add and use). > > As a way to test this, I am going to move the ipython main user site to > precisely such a Trac environment. Due to a permissions issue with Apache for > which I need help from root it didn't happen today, but I am going to give it > a try. I'll be happy to report back if I see any glaring problem with this > idea in actual use (though obviously ipython sees much less use than scipy, so > I may well miss something). > > Cheers, > > f > > _______________________________________________ > Scipy-dev mailing list > Scipy-dev at scipy.net > http://www.scipy.net/mailman/listinfo/scipy-dev > From jonathan.taylor at utoronto.ca Sun Nov 13 14:02:07 2005 From: jonathan.taylor at utoronto.ca (Jonathan Taylor) Date: Sun, 13 Nov 2005 19:02:07 +0000 Subject: [SciPy-dev] Some Feedback In-Reply-To: <43770655.8020404@colorado.edu> Message-ID: <4IXsxgRb.1131908525.9504100.jtaylor@localhost> Hi, I agree. I think trying to integrate three separate wikis will be a lot of work and very confusing to new users. >This leaves us with: > >Facts: > >- trac has a one svn repo/one trac instance model. Yes. I understand there is a desire to have two svn repositories to assure the projects are separate. I still feel that this could be achieved by a single repository with a directory structure such as:

/
/scipy/
/scipy/trunk/
/scipy/branches/
/scipy_core/
/scipy_core/trunk/
/scipy_core/branches/

which gives the same logical separation of repositories but with the added advantages of simplicity and the ability to easily svn mv stuff from one "virtual repos" to another. Thus we could have one trac instance.
>- trac by default ties logins to the svn developers with commit access. We >want non-developers with logins. Robert suggested a plugin which may take >care of this issue. I think you are mistaken. By default your userbase is just an htpasswd file that you can easily add anyone to without giving them subversion access. From jonathan.taylor at utoronto.ca Sun Nov 13 14:09:18 2005 From: jonathan.taylor at utoronto.ca (Jonathan Taylor) Date: Sun, 13 Nov 2005 19:09:18 +0000 Subject: [SciPy-dev] Freeze Thursday at 5:00pm until further notice In-Reply-To: <4373A022.8080809@ee.byu.edu> Message-ID: Hi Travis, >They are in separate repositories because scipy core is the replacement >for Numeric only. Several people don't like the monolithic nature of >scipy. We want to alleviate their concerns by having scipy core as a >separate project. > >I think we should have two trac pages. One for the core, and one for >the rest. > >So, I don't think we will be joining them under the same repository any >time soon. I posted this same text in the Some Feedback thread. I hope you do not mind me asking for clarification as to why the below does not accomplish the same goal as two repositories. Thanks for your patience. --QUOTE-- Yes. I understand there is a desire to have two svn repositories to assure the projects are separate. I still feel that this could be achieved by a single repository with a directory structure such as:

/
/scipy/
/scipy/trunk/
/scipy/branches/
/scipy_core/
/scipy_core/trunk/
/scipy_core/branches/

which gives the same logical separation of repositories but with the added advantages of simplicity and the ability to easily svn mv stuff from one "virtual repos" to another. Thus we could have one trac instance. --/QUOTE-- Jon.
From Fernando.Perez at colorado.edu Sun Nov 13 15:17:18 2005 From: Fernando.Perez at colorado.edu (Fernando Perez) Date: Sun, 13 Nov 2005 13:17:18 -0700 Subject: [SciPy-dev] Some Feedback In-Reply-To: <4IXsxgRb.1131908525.9504100.jtaylor@localhost> References: <4IXsxgRb.1131908525.9504100.jtaylor@localhost> Message-ID: <43779F4E.5050303@colorado.edu> Jonathan Taylor wrote: >>- trac by default ties logins to the svn developers with commit access. We >>want non-developers with logins. Robert suggested a plugin which may take >>care of this issue. > > > I think you are mistaken. By default your userbase is just an htpasswd > file that you can easily add anyone to without giving them subversion > access. I am certainly no trac expert, and so far I'd just been using it as it was set up in the virtualmin environment that scipy offers, so feel free to correct me. But from poking around with the ipython 'virtual root' account in the virtualmin host, I don't see a way to create new users for trac who are NOT svn users. This may be a restriction imposed by the virtualmin environment, and there may be a workaround. If you know of one, by all means let me know, as this would make it easier to take care of this issue without the multiplication of wikis which you (understandably) are worried about. Even if the webmin interface that the project leads have access to doesn't offer this, we could always do it at the command line. But even via that channel I haven't been able to find a mechanism to create, say, users called ipwiki.user1 ipwiki.user2 etc, who would NOT be part of the svn user list. I tried making such a user with the webmin interface but WITHOUT SVN access, and even after adding the user to the trac-admin permissions, I still can't log in as that user into the Trac wiki. In summary: not being by a long shot a trac expert, I'm pretty lost here, in particular because I don't have full apache access on this box, only what the virtualmin environment gives me.
I'm trying to understand this not only for ipython, but because the access I have is the same Travis has for the scipy virtual host. If you know the magic incantation to set up all of this reasonably (and in particular, without requiring real root access so that we don't become a pest on Joe Cooper's back), I'm willing to continue trying until we sort this out. Best, f From Fernando.Perez at colorado.edu Sun Nov 13 15:26:59 2005 From: Fernando.Perez at colorado.edu (Fernando Perez) Date: Sun, 13 Nov 2005 13:26:59 -0700 Subject: [SciPy-dev] Some Feedback In-Reply-To: References: <200511100352.jAA3quT1013377@oobleck.astro.cornell.edu> <200511100407.jAA47YW0013419@oobleck.astro.cornell.edu> <4372D509.40009@ucsd.edu> <437384B3.3060308@enthought.com> <437396B9.5080301@astraw.com> <4373DC84.4030600@colorado.edu> <43770655.8020404@colorado.edu> Message-ID: <4377A193.2040403@colorado.edu> Mark Koudritsky wrote: > Fernando Perez wrote: > >>Well, does this mean losing the Trac wiki integration? One interesting >>feature of Trac is that in any of its wiki pages, you can say for example: >> >>ticket:NN >> >>and this automatically creates a proper Wiki link to Ticket #NN. The same >>thing happens for milestones, reports, etc. > > > I agree that losing this integration would be a pity. > I'm not very familiar with Trac. Exactly how tight is this integration? Fairly tight, since Trac was written specifically as a web front end for SVN. A quick look at this page, in particular the Trac links section, will give you an idea: http://projects.scipy.org/ipython/ipython/wiki/WikiFormatting The top navigation bar in Trac has direct access to the tickets, reports, source code, etc. I have no religious preference on this matter at all, I just want the simplest solution that works. Trac claims to be a 'one-stop shop' for svn/wiki/bugs handling, and in my experience so far, does a pretty decent job at that.
Keep in mind that we have a few constraints here (this applies to me with ipython and I think equally well to Travis with scipy):

- every minute that goes into dealing with wikis/systems is a minute NOT spent coding on the project itself. Simplicity of administration is an important criterion, on par or even above richness of features.

- we don't have real root access to these systems: Enthought provides a virtual host which works very well for most things, but is not true root. Anything that requires _real_ root access automatically loses a lot of points, because it means bugging Joe Cooper, the Enthought sysadmin, who is already stretched as thin as you can imagine. So solutions that we can implement with the 'virtual root' access we have will probably be favored.

- I'm sure Moin is nice and everything, but the fact is that it's not installed at this point. It would be one more thing to ask of Joe Cooper to install/maintain, and for us to learn to manage. Note that many packages out there don't work well (or at all) in the virtualmin environment. This would be one more thing to test, debug, etc for Joe.

Personally, if I can figure out the user permissions problem, I'm willing to concede to Jonathan that the multi-wiki thing may be more confusing than is necessary, and will gladly test-drive things with the ipython trac to be used as a user wiki as well. Cheers, f From jonathan.taylor at utoronto.ca Sun Nov 13 15:31:53 2005 From: jonathan.taylor at utoronto.ca (Jonathan Taylor) Date: Sun, 13 Nov 2005 20:31:53 +0000 Subject: [SciPy-dev] Some Feedback In-Reply-To: <43779F4E.5050303@colorado.edu> Message-ID: Hey there, I have not used that setup before but I can describe how Joe could set up a scipy trac with wiki editing disconnected from svn access. The initial setup for a root user would be:

1. Install the trac module... maybe via apt-get install trac

2. Create the scipy instance: trac-admin /var/trac/scipy initenv

3. Install mod python.

4.
In the apache config load the mod python module:

    LoadModule python_module modules/mod_python.so

5. In the apache config set up a place for the trac:

    AuthType Basic
    AuthName "scipy"
    AuthUserFile /var/trac/scipy/.htaccess
    Require valid-user

6. Create the .htaccess file:

    touch /var/trac/scipy/.htaccess

7. Then assuming your user name is perez, Joe could then leave adding users up to you by:

    chown perez /var/trac/scipy/.htaccess

At this point you can easily add users using your own user account via:

    htpasswd /var/trac/scipy/.htaccess username password

Cheers, Jon. On 11/13/2005, "Fernando Perez" wrote: >Jonathan Taylor wrote: > >>>- trac by default ties logins to the svn developers with commit access. We >>>want non-developers with logins. Robert suggested a plugin which may take >>>care of this issue. >> >> >> I think you are mistaken. By default your userbase is just an htpasswd >> file that you can easily add anyone to without giving them subversion >> access. >I am certainly no trac expert, and so far I'd just been using it as it was >set up in the virtualmin environment that scipy offers, so feel free to correct >me. But from poking around with the ipython 'virtual root' account in the >virtualmin host, I don't see a way to create new users for trac who are NOT >svn users. This may be a restriction imposed by the virtualmin environment, >and there may be a workaround. If you know of one, by all means let me know, >as this would make it easier to take care of this issue without the >multiplication of wikis which you (understandably) are worried about. > >Even if the webmin interface that the project leads have access to doesn't >offer this, we could always do it at the command line. But even via that >channel I haven't been able to find a mechanism to create, say, users called > >ipwiki.user1 >ipwiki.user2 > >etc, who would NOT be part of the svn user list.
> >I tried making such a user with the webmin interface but WITHOUT SVN access, >and even after adding the user to the trac-admin permissions, it still can't >login as that user into the Trac wiki. > >In summary: not being by a long shot a trac expert, I'm pretty lost here, in >particular because I don't have full apache access on this box, only what the >virtualmin environment gives me. I'm trying to understand this not only for >ipython, but because the access I have is the same Travis has for the scipy >virtual host. > >If you know the magic incantation to set up all of this reasonably (and in >particular, without requiring real root access so that we don't become a pest >on Joe Cooper's back), I'm willing to continue trying until we sort this out. > >Best, > >f > >_______________________________________________ >Scipy-dev mailing list >Scipy-dev at scipy.net >http://www.scipy.net/mailman/listinfo/scipy-dev > From Fernando.Perez at colorado.edu Sun Nov 13 15:36:58 2005 From: Fernando.Perez at colorado.edu (Fernando Perez) Date: Sun, 13 Nov 2005 13:36:58 -0700 Subject: [SciPy-dev] Some Feedback In-Reply-To: References: Message-ID: <4377A3EA.9070203@colorado.edu> Jonathan Taylor wrote: > Hey there, > > I have not used that setup before but I can describe how Joe could set up > a scipy trac with wiki editing disconnected from svn access. > > The initial setup for a root user would be: [...] Thanks for the pointers, Jon. I'll leave it to Joe to answer whether these instructions will work with the peculiarities introduced by the virtualmin system, which may or may not play well with what you describe. I simply don't know, but he's mentioned to me how some packages (mailman, for example) were very hard to configure in the virtualmin environment due to assumptions they make about running in the 'real host' instead of this virtual one. 
Cheers, f From strawman at astraw.com Sun Nov 13 17:07:07 2005 From: strawman at astraw.com (Andrew Straw) Date: Sun, 13 Nov 2005 14:07:07 -0800 Subject: [SciPy-dev] the public face of scipy.org (was Some Feedback) In-Reply-To: <4377A3EA.9070203@colorado.edu> References: <4377A3EA.9070203@colorado.edu> Message-ID: <4377B90B.80200@astraw.com> I'm convinced Trac is the correct "hardhat area" for the development of the scipy packages, but after reading the ongoing discussion, I have a few points that I think are in Moin's favor as the "public face" of scipy.org: 1) Once Moin is set up, the administration of who can edit pages and so on can be handled entirely within the Moin wiki through the use of Access Control Lists (ACLs). Users can login to create a new username out of the box and then edit pages where allowed (in fact, you can set it up to allow non-logged-in users to edit pages, too), but privileged users must be listed in a special page for listing ACL group members. So, for example, if we're worried about our front page being defaced (despite Moin's anti-spam features), we could create an ACL group for those allowed to edit the front page. See http://moinmoin.wikiwikiweb.de/HelpOnAccessControlLists for more information. 2a) I know Joe Cooper's time resources are scare, but having installed many instances of Moin and struggled with Trac on and off for over a year, I put Moin's ease-of-installation far ahead of Trac's. (Of course, Trac is already installed...) The reason is precisely because Trac integrates with svn, and to build it , at least, seems to require svn/apache/python/clearsilver/swig to all be the right versions. Moin, AFAIK is pure Python. 2b) I believe it's likely that installing Moin will take less time than figuring out the svn/login issue with Trac. (If we used Moin for the public face, we wouldn't need non-svn-commiters editing the Trac wiki, do we? Anonymous non-logged-in users can still create new tickets, I think.) 
3) It would be really nice to be able to include images in wiki content, for tutorials, for example. Does Trac let you do that, including uploading the image? (I haven't seen any on the Trac sites I've looked at. The docs http://projects.scipy.org/scipy/scipy_core/wiki/WikiFormatting suggest that the image must already be published somewhere on the web.) With Moin, it's "inline:MyFigureName.png", which then allows you to upload the image as an attachment to the page. 4) I don't see the benefits of having "Ticket:1432" on the public face of scipy.org. With the public face of scipy.org, I think we're aiming to promote the-universe-of-scientific-python (and not scipy-the-package). I don't think we want to scare people off who are comparison shopping against our proprietary competitors by listing open tickets on the front page. Obviously I'm not saying we would use "Ticket:1432" on the front page if we used Trac, it's just that I don't see this feature as an advantage in the context of the front page. 5) In the pretty-front-page-to-compete-with-proprietary-software department: Moin has lots of themes, some of which look pretty nice already, and modifying them or making new ones is easy. See http://moinmoin.wikiwikiweb.de/ThemeMarket . I haven't seen any Trac instances look different from the default, and it looks pretty intimidating for the casual browser. ("Roadmap", "Browse Source", "View Tickets" are all things I'd venture we don't want on our public face.) I hereby volunteer myself to set up Moin for the public face of scipy.org, although whether it's possible to give me login permissions to do so is another question... I'm also happy to help someone (Joe Cooper?) do the same.
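[Andrew's point 1 about ACLs can be made concrete. The following is a hypothetical wikiconfig.py fragment; the option name and per-page directive follow the HelpOnAccessControlLists page he links, but the exact spellings should be treated as assumptions, not a verified Moin 1.3 config:]

```python
# Hypothetical MoinMoin wikiconfig.py fragment -- a sketch, not a tested
# configuration.  Option names are assumed from HelpOnAccessControlLists.

class Config:
    # In a real deployment this class subclasses MoinMoin's DefaultConfig.
    sitename = u'SciPy'

    # Everyone may read; only users listed on the EditorGroup wiki page
    # may write, delete, or revert pages.
    acl_rights_default = u'EditorGroup:read,write,delete,revert All:read'

# The per-page equivalent is a processing instruction on the first line
# of a protected page (e.g. the front page):
#
#   #acl EditorGroup:read,write,delete,revert All:read
```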
From rkern at ucsd.edu Sun Nov 13 17:22:52 2005 From: rkern at ucsd.edu (Robert Kern) Date: Sun, 13 Nov 2005 14:22:52 -0800 Subject: [SciPy-dev] the public face of scipy.org (was Some Feedback) In-Reply-To: <4377B90B.80200@astraw.com> References: <4377A3EA.9070203@colorado.edu> <4377B90B.80200@astraw.com> Message-ID: <4377BCBC.3050500@ucsd.edu> Andrew Straw wrote: > I hereby volunteer myself to set up Moin for the public face scipy.org, > although whether it's possible to give me login permissions to do so is > another question... I'm also happy to help someone (Joe Cooper?) do the > same. Hooray for volunteers! It's greatly appreciated. In the meantime, it might be helpful if you were to set up a mock version that we could gawk at. If you could work out configuring Moin appropriately (modulo the differences between your machine and the virtualmin setup) and installing/writing whatever plugins/tweaks/hacks you deem appropriate, that would go a long way towards convincing us that it's a good approach. We generally run things by _fait accompli_ around here. -- Robert Kern rkern at ucsd.edu "In the fields of hell where the grass grows high Are the graves of dreams allowed to die." -- Richard Harter From Fernando.Perez at colorado.edu Sun Nov 13 19:27:35 2005 From: Fernando.Perez at colorado.edu (Fernando Perez) Date: Sun, 13 Nov 2005 17:27:35 -0700 Subject: [SciPy-dev] the public face of scipy.org (was Some Feedback) In-Reply-To: <4377B90B.80200@astraw.com> References: <4377A3EA.9070203@colorado.edu> <4377B90B.80200@astraw.com> Message-ID: <4377D9F7.90806@colorado.edu> Andrew Straw wrote: > 2b) I believe it's likely that installing Moin will take less time than > figuring out the svn/login issue with Trac. (If we used Moin for the > public face, we wouldn't need non-svn-commiters editing the Trac wiki, > do we? Anonymous non-logged-in users can still create new tickets, I think.) Correct on both. 
It is trivial (I did it yesterday) to close the Trac wiki to anonymous edits and yet admit anonymous ticket submissions. In fact, changing Trac permissions is pretty straightforward, except for the issue that we don't know how to decouple, in the virtualmin setup, the svn user list from the Trac one. > 3) It would be really nice to be able to include images in wiki content, > for tutorials, for example. Does Trac let you do that, including > uploading the image? (I haven't seen any on any Trac sites I've seen. > The docs http://projects.scipy.org/scipy/scipy_core/wiki/WikiFormatting > suggest that the image must already be published somewhere on the web.) > With Moin, it's "inline:MyFigureName.png", which then allows you to > upload the image as an attachment to the page. To inline images uploaded as attachments, you need the Image macro, available here: http://projects.edgewall.com/trac/wiki/MacroBazaar I installed two of these macros (not the image one) yesterday, and they're very easy to set up. > 4) I don't see the benefits of having "Ticket:1432" on the public face > of scipy.org. With the public face of scipy.org, I think we're aiming > to promote the-universe-of-scientific-python (and not > scipy-the-package). I don't think we want to scare people off who are > comparison shopping against our proprietary competitors by listing open > tickets on the front page. Obviously I'm not saying we would use > "Ticket:1432" on the front page if we used Trac, it's just that I don't > see this feature as an advantage in the context of the front page. The special Trac links are handy, but as Mark mentioned yesterday, they can apparently be replicated in Moin easily. 
If you are going to hack on a Trac-friendly moin setup, it would be nice to use the approach Mark mentioned to provide transparent support for the basic Trac wiki links, as explained in section 'Trac Links' of (this is the standard Trac document, nothing IPython-specific): http://projects.scipy.org/ipython/ipython/wiki/WikiFormatting > 5) In the pretty-front-page-to-compete-with-proprietary-software > department: Moin has lots of themes, some of which look pretty nice > already, and modifying them or making new ones is easy. See > http://moinmoin.wikiwikiweb.de/ThemeMarket . I haven't seen any Trac > instances look different from the default, and it looks pretty > intimidating for the casual browser. ("Roadmap","Browse Source", "View > Tickets" are all things I'd venture we don't want on our public face.) Well, trac does have a templates/ directory for CSS:

ipython at scipy[templates]$ d /home/ipython/trac/ipython/templates
total 40
-rw-rw-r-- 1 ipython   75 Aug 10 00:27 README
-rw-rw-r-- 1 ipython  197 Nov 12 22:25 site_css.cs
-rw-rw-r-- 1 ipython  155 Aug 10 00:27 site_footer.cs
-rw-rw-r-- 1 ipython  145 Aug 10 00:27 site_header.cs
-rw-rw-r-- 1 ipython 1406 Nov  1 05:45 tracnav.css
ipython at scipy[templates]$ cat site_css.cs
@import url(/site/tracnav.css);

But I haven't found (and I looked yesterday for a while) sites with ready-to-roll 'themes' for Trac. The top and bottom logos are trivial to change from the trac.ini file, but more serious visual configurability doesn't appear to be Trac's strength. The docs don't even mention the word 'themes' nor anything similar, as far as I can see. > I hereby volunteer myself to set up Moin for the public face of scipy.org, > although whether it's possible to give me login permissions to do so is > another question... I'm also happy to help someone (Joe Cooper?) do the > same. Great!
As I said, I was willing to try any of these solutions on the ipython site as well, so if you need a sandbox to play on with the virtualmin system (once install details are sorted out with Joe), let me know and I'll give you the necessary access. I intend to use whatever solution we settle on (Trac/Moin/...) for the ipython site as well, so I'm doubly interested on seeing this resolved. Since I haven't been too successful with the login issue with Trac, I'm all for Plan B, esp. if it comes with capable builtin working hands ;) Cheers, f From strawman at astraw.com Sun Nov 13 21:13:27 2005 From: strawman at astraw.com (Andrew Straw) Date: Sun, 13 Nov 2005 18:13:27 -0800 Subject: [SciPy-dev] MoinMoin demo site online Message-ID: <4377F2C7.806@astraw.com> OK, I've set up a demo site: http://astraw.com/scipy I've only converted a couple of pages, and I've made one new one to detail my findings. The new one, which may be most informative is: http://astraw.com/scipy/moin.fcg/ScipyPublicFaceTest The converted pages are the home page and http://astraw.com/scipy/moin.fcg/documentation http://astraw.com/scipy/moin.fcg/documentation/FAQ I spent a decent amount of time configuring the access control lists to be realistic, so if you'd like to be added to the "EditorGroup", let me know. You won't be able to edit pages (or see the edit buttons) unless you login. Robert Kern wrote: >Andrew Straw wrote: > > > >>I hereby volunteer myself to set up Moin for the public face scipy.org, >>although whether it's possible to give me login permissions to do so is >>another question... I'm also happy to help someone (Joe Cooper?) do the >>same. >> >> > >Hooray for volunteers! It's greatly appreciated. > >In the meantime, it might be helpful if you were to set up a mock >version that we could gawk at. 
If you could work out configuring Moin >appropriately (modulo the differences between your machine and the >virtualmin setup) and installing/writing whatever plugins/tweaks/hacks >you deem appropriate, that would go a long way towards convincing us >that it's a good approach. We generally run things by _fait accompli_ >around here. > > > From oliphant at ee.byu.edu Sun Nov 13 22:51:29 2005 From: oliphant at ee.byu.edu (Travis Oliphant) Date: Sun, 13 Nov 2005 20:51:29 -0700 Subject: [SciPy-dev] Binaries for scipy created Message-ID: <437809C1.5020909@ee.byu.edu> I've used the scipy sourceforge site to place binaries for a "release" of full scipy (built on the new core). The version is 0.4.3. There are rpm and Windows binaries, as well as a full tarball. I know there are people out there who would like to try scipy but don't want to wrestle with the install. The rpms and/or windows binaries might help. This is the first time I've made binaries for other people to use. Hopefully they work fine, but errors may be reported. -Travis From oliphant at ee.byu.edu Sun Nov 13 23:29:09 2005 From: oliphant at ee.byu.edu (Travis Oliphant) Date: Sun, 13 Nov 2005 21:29:09 -0700 Subject: [SciPy-dev] [SciPy-user] scipy.linalg.eig() returns transposed eigenvector matrix In-Reply-To: <200511132059.28334.dickrp@wckn.com> References: <200511132059.28334.dickrp@wckn.com> Message-ID: <43781295.50003@ee.byu.edu> Robert Dick wrote: >scipy.linalg.eig() returns transposed eigenvector matrix > >Results with old Numeric: > > >>>>import LinearAlgebra as la >>>>from Numeric import * >>>>la.eigenvectors(array([[1.0, 1.0], [1.0, 1.0]])) >>>> >>>> >(array([ 2., 0.]), array([[ 0.70710678, 0.70710678], > [-0.70710678, 0.70710678]])) > >Results with svn current SciPy linked against AMD ACML BLAS/LAPACK.
> > >>>>import scipy.linalg as la >>>>from scipy import * >>>>la.eig(array([[1.0, 1.0], [1.0, 1.0]])) >>>> >>>> >(array([ 2.+0.j, 0.+0.j]), array([[ 0.70710678, -0.70710678], > [ 0.70710678, 0.70710678]])) > > >>>>la.eig(array([[1.0, 1.0], [1.0, 1.0]]))[1].transpose() >>>> >>>> >array([[ 0.70710678, 0.70710678], > [-0.70710678, 0.70710678]]) > >Can somebody else reproduce this? > > I can confirm this. Notice that scipy_core basic eigenvalues work fine. I'm surprised this isn't being picked up by a test, though.

import scipy.basic.linalg as sbl
sbl.eig([[1.0,1.0],[1.0,1.0]])
(array([ 2., 0.]), array([[ 0.70710678, 0.70710678], [-0.70710678, 0.70710678]]))

import scipy.linalg as sl
sl.eig([[1.0,1.0],[1.0,1.0]])
(array([ 2.+0.j, 0.+0.j]), array([[ 0.70710678, -0.70710678], [ 0.70710678, 0.70710678]]))

It may be an issue with f2py and the new FORTRAN style arrays that can be created, because both functions return exactly the same data. It's just that the hand-written routine in scipy_core (which is correct) has the CONTIGUOUS flag set, while the advanced routine generated automatically by f2py has the FORTRAN flag set. -Travis From oliphant at ee.byu.edu Sun Nov 13 23:58:52 2005 From: oliphant at ee.byu.edu (Travis Oliphant) Date: Sun, 13 Nov 2005 21:58:52 -0700 Subject: [SciPy-dev] [SciPy-user] efficiently importing ascii data In-Reply-To: <200511132331.28415.dd55@cornell.edu> References: <200511101436.58074.dd55@cornell.edu> <200511131059.37107.dd55@cornell.edu> <4378089F.8000902@ee.byu.edu> <200511132331.28415.dd55@cornell.edu> Message-ID: <4378198C.7070400@ee.byu.edu> Darren Dale wrote: >On Sunday 13 November 2005 10:46 pm, you wrote: > > >>Darren Dale wrote: >> >> >>>I am considering using scipy's fromfile function, which gives a big speed >>>boost over io.read_array, but I don't understand what this docstring is >>>trying to tell me: >>> >>> WARNING: This function should be used sparingly, as it is not >>> a robust method of persistence.
But it can be useful to >>> read in simply-formatted or binary data quickly. >>> >>> >>It's simply trying to advertise that fromfile and tofile are very raw >>functions. They should work fine as far as they go, but there may be >>easier solutions. I don't expect the capability of these to increase. >>But, for example, something like a TableIO could take advantage of them. >> >> > >I was wondering if the fromstring function could be expanded to include ascii >strings. > Hmm. Interesting concept. Do you mean in the same way that fromfile works, a single type and a simple separator? That would be consistent wouldn't it... The other thing that needs to be done is that string (and unicode) arrays need to convert to the numeric types, easily. This would let you read in a string array and convert to numeric types quite cleanly. Right now, this doesn't work simply because the wrong generic functions are getting called in the conversion routines. This can and should be changed, however. The required code to change is in arraytypes.inc.src. I could see using PyInt_FromString, PyLong_FromString, PyFloat_FromString, and calling the complex python function for constructing complex numbers from a string. Code would need to be written to fully support Long double and complex long double conversion, but initially they could just punt and use the double conversions. Alternatively, sscanf could be called when available for the type, and the other approaches used when it isn't. Anybody want a nice simple project to get themselves up to speed with the new code base :-) -Travis From arnd.baecker at web.de Mon Nov 14 02:10:58 2005 From: arnd.baecker at web.de (Arnd Baecker) Date: Mon, 14 Nov 2005 08:10:58 +0100 (CET) Subject: [SciPy-dev] MoinMoin demo site online In-Reply-To: <4377F2C7.806@astraw.com> References: <4377F2C7.806@astraw.com> Message-ID: On Sun, 13 Nov 2005, Andrew Straw wrote: > OK, I've set up a demo site: > > http://astraw.com/scipy I like it! 
Another point which I find very good is that restructured text (ReSt, http://docutils.sourceforge.net/rst.html) can be used as well. Best, Arnd From prabhu_r at users.sf.net Mon Nov 14 02:29:54 2005 From: prabhu_r at users.sf.net (Prabhu Ramachandran) Date: Mon, 14 Nov 2005 12:59:54 +0530 Subject: [SciPy-dev] the public face of scipy.org (was Some Feedback) In-Reply-To: <4377B90B.80200@astraw.com> References: <4377A3EA.9070203@colorado.edu> <4377B90B.80200@astraw.com> Message-ID: <17272.15602.110899.956045@vulcan.linux.in> >>>>> "Andrew" == Andrew Straw writes: Andrew> I'm convinced Trac is the correct "hardhat area" for the Andrew> development of the scipy packages, but after reading the Andrew> ongoing discussion, I have a few points that I think are Andrew> in Moin's favor as the "public face" of scipy.org: Andrew> 2b) I believe it's likely that installing Moin will take Andrew> less time than Andrew> figuring out the svn/login issue with Trac. (If we used Andrew> Moin for the public face, we wouldn't need Andrew> non-svn-commiters editing the Trac wiki, do we? Anonymous Andrew> non-logged-in users can still create new tickets, I Andrew> think.) While I agree that MoinMoin is easy to setup and admin, my only issue with MoinMoin is that upgrading versions usually requires running a bunch of upgrade scripts. My memories of the upgrade from 1.0 to 1.3.5 are not too pleasant. That said, the upgrade scripts did work and overall it has worked well for the mayavi wiki. All of the wiki data is spread over several files and this is sometimes a little disconcerting from the admin/backup perspective. Andrew> 3) It would be really nice to be able to include images in Andrew> wiki content, Andrew> for tutorials, for example. Does Trac let you do that, Andrew> including uploading the image? (I haven't seen any on any Andrew> Trac sites I've seen. 
The docs Andrew> http://projects.scipy.org/scipy/scipy_core/wiki/WikiFormatting Yes, inlined images are definitely doable with trac. See here: http://www.enthought.com/enthought/wiki/TVTK cheers, prabhu From oliphant at ee.byu.edu Mon Nov 14 02:55:58 2005 From: oliphant at ee.byu.edu (Travis Oliphant) Date: Mon, 14 Nov 2005 00:55:58 -0700 Subject: [SciPy-dev] [SciPy-user] eig() segfaults on SuSE 9.3 with ACML, Numeric's eigenvectors works In-Reply-To: References: <200511132359.43082.dickrp@wckn.com> Message-ID: <4378430E.6050207@ee.byu.edu> Arnd Baecker wrote: >Program received signal SIGSEGV, Segmentation fault. >[Switching to Thread 46912507335168 (LWP 31397)] >0x00002aaaab3a2cdb in memmove () from /lib64/tls/libc.so.6 >(gdb) bt >#0 0x00002aaaab3a2cdb in memmove () from /lib64/tls/libc.so.6 >#1 0x00002aaaab96246c in PyArray_CopyInto (dest=0x2aaab55609e0, src=0x8) >at arrayobject.c:658 >#2 0x00002aaaab96735b in array_imag_set (self=0x2aaab55608f0, >val=0x83000d690097dd63) at arrayobject.c:4260 > > Yes, thanks for the trace back. I can't believe that one got through the tests. It's a pretty obvious error: (adding to the strides vector instead of the data). I guess it just emphasizes the need for more tests. -Travis From nwagner at mecha.uni-stuttgart.de Mon Nov 14 04:31:20 2005 From: nwagner at mecha.uni-stuttgart.de (Nils Wagner) Date: Mon, 14 Nov 2005 10:31:20 +0100 Subject: [SciPy-dev] [SciPy-user] eig() segfaults on SuSE 9.3 with ACML, Numeric's eigenvectors works In-Reply-To: <4378430E.6050207@ee.byu.edu> References: <200511132359.43082.dickrp@wckn.com> <4378430E.6050207@ee.byu.edu> Message-ID: <43785968.7010807@mecha.uni-stuttgart.de> Travis Oliphant wrote: >Arnd Baecker wrote: > > >>Program received signal SIGSEGV, Segmentation fault. 
>>[Switching to Thread 46912507335168 (LWP 31397)] >>0x00002aaaab3a2cdb in memmove () from /lib64/tls/libc.so.6 >>(gdb) bt >>#0 0x00002aaaab3a2cdb in memmove () from /lib64/tls/libc.so.6 >>#1 0x00002aaaab96246c in PyArray_CopyInto (dest=0x2aaab55609e0, src=0x8) >>at arrayobject.c:658 >>#2 0x00002aaaab96735b in array_imag_set (self=0x2aaab55608f0, >>val=0x83000d690097dd63) at arrayobject.c:4260 >> >> >> >Yes, thanks for the trace back. I can't believe that one got through >the tests. It's a pretty obvious error: (adding to the strides vector >instead of the data). > >I guess it just emphasizes the need for more tests. > >-Travis > >_______________________________________________ >Scipy-dev mailing list >Scipy-dev at scipy.net >http://www.scipy.net/mailman/listinfo/scipy-dev > I cannot reproduce the segfault. Is it fixed in svn? I am using 0.6.2.1483 0.4.2_1442 Nils From pearu at scipy.org Mon Nov 14 03:48:54 2005 From: pearu at scipy.org (Pearu Peterson) Date: Mon, 14 Nov 2005 02:48:54 -0600 (CST) Subject: [SciPy-dev] [SciPy-user] scipy.linalg.eig() returns transposed eigenvector matrix In-Reply-To: <43781295.50003@ee.byu.edu> References: <200511132059.28334.dickrp@wckn.com> <43781295.50003@ee.byu.edu> Message-ID: On Sun, 13 Nov 2005, Travis Oliphant wrote: > Robert Dick wrote: > >> scipy.linalg.eig() returns transposed eigenvector matrix This is a matter of definition. scipy.linalg.eig and scipy.basic.linalg.eig return correct results according to their documentation. Just scipy.linalg.eig assumes that eigenvectors are returned column-wise, i.e. a * vr[:,i] = w[i] * vr[:,i] holds. While scipy.basic.linalg.eig, which is copied from Numeric, assumes that eigenvectors are returned row-wise, i.e. a * vr[i] = w[i] * vr[i] holds. Note that matlab returns eigenvectors column-wise, similar to scipy.linalg.eig, and that is how the matrix of eigenvectors is also usually defined.
I think it is unfortunate that Numeric eig returns eigenvectors row-wise, it is not a mathematically convenient definition that it uses. Pearu From arnd.baecker at web.de Mon Nov 14 05:58:13 2005 From: arnd.baecker at web.de (Arnd Baecker) Date: Mon, 14 Nov 2005 11:58:13 +0100 (CET) Subject: [SciPy-dev] [SciPy-user] eig() segfaults on SuSE 9.3 with ACML, Numeric's eigenvectors works In-Reply-To: <4378430E.6050207@ee.byu.edu> References: <200511132359.43082.dickrp@wckn.com> <4378430E.6050207@ee.byu.edu> Message-ID: On Mon, 14 Nov 2005, Travis Oliphant wrote: > Arnd Baecker wrote: > > >Program received signal SIGSEGV, Segmentation fault. > >[Switching to Thread 46912507335168 (LWP 31397)] > >0x00002aaaab3a2cdb in memmove () from /lib64/tls/libc.so.6 > >(gdb) bt > >#0 0x00002aaaab3a2cdb in memmove () from /lib64/tls/libc.so.6 > >#1 0x00002aaaab96246c in PyArray_CopyInto (dest=0x2aaab55609e0, src=0x8) > >at arrayobject.c:658 > >#2 0x00002aaaab96735b in array_imag_set (self=0x2aaab55608f0, > >val=0x83000d690097dd63) at arrayobject.c:4260 > > > > > Yes, thanks for the trace back. I can't believe that one got through > the tests. It's a pretty obvious error: (adding to the strides vector > instead of the data). Great that you found it so quickly! The example works fine now with In [3]: scipy.__core_version__ Out[3]: '0.6.2.1483' > I guess it just emphasizes the need for more tests. Looks like - so maybe you better wait for the 1.0 a bit longer (so that it appears as rock-solid as possible from the very beginning), but encourage more testing in terms of converting bigger projects to newcore ... Best, Arnd P.S.: thanx to the one who has put fftw3 support into newscipy! 
From dd55 at cornell.edu Mon Nov 14 06:36:31 2005 From: dd55 at cornell.edu (Darren Dale) Date: Mon, 14 Nov 2005 06:36:31 -0500 Subject: [SciPy-dev] [SciPy-user] efficiently importing ascii data In-Reply-To: <4378198C.7070400@ee.byu.edu> References: <200511101436.58074.dd55@cornell.edu> <200511132331.28415.dd55@cornell.edu> <4378198C.7070400@ee.byu.edu> Message-ID: <200511140636.31387.dd55@cornell.edu> On Sunday 13 November 2005 11:58 pm, you wrote: > Darren Dale wrote: > >On Sunday 13 November 2005 10:46 pm, you wrote: > >>Darren Dale wrote: > >>>I am considering using scipy's fromfile function, which gives a big > >>> speed boost over io.read_array, but I don't understand what this > >>> docstring is trying to tell me: > >>> > >>> WARNING: This function should be used sparingly, as it is not > >>> a robust method of persistence. But it can be useful to > >>> read in simply-formatted or binary data quickly. > >> > >>It's simply trying to advertise that fromfile and tofile are very raw > >>functions. They should work fine as far as they go, but there may be > >>easier solutions. I don't expect the capability of these to increase. > >>But, for example, something like a TableIO could take advantage of them. > > > >I was wondering if the fromstring function could be expanded to include > > ascii strings. > > Hmm. Interesting concept. Do you mean in the same way that fromfile > works, a single type and a simple separator? That would be consistent > wouldn't it... Yes, a single type and a single separator is what I had in mind. > The other thing that needs to be done is that string (and unicode) > arrays need to convert to the numeric types, easily. This would let you > read in a string array and convert to numeric types quite cleanly. For the longest time, I thought the this ability must exist, but that I was just overlooking it somehow. > Right now, this doesn't work simply because the wrong generic functions > are getting called in the conversion routines. 
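For reference, the "single type and a simple separator" behaviour being discussed here is what later numpy releases expose as the `sep` argument of `fromstring`, and string arrays did eventually learn to convert to numeric types via `astype`; a hedged sketch with modern numpy, not the 2005 code base:

```python
import numpy as np

# parse simply-formatted ascii text: one dtype, one separator
a = np.fromstring("1.0 2.5 3.75", dtype=float, sep=" ")
assert np.allclose(a, [1.0, 2.5, 3.75])

# the related wish: converting a string array to a numeric type
s = np.array(["1", "2", "3"])
b = s.astype(np.int64)
assert b.tolist() == [1, 2, 3]
```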
This can and should be > changed, however. The required code to change is in arraytypes.inc.src. > > I could see using PyInt_FromString, PyLong_FromString, > PyFloat_FromString, and calling the complex python function for > constructing complex numbers from a string. Code would need to be > written to fully support Long double and complex long double conversion, > but initially they could just punt and use the double conversions. > > Alternatively, sscanf could be called when available for the type, and > the other approaches used when it isn't. > > Anybody want a nice simple project to get themselves up to speed with > the new code base :-) I'll look into it (after hours, just started a new job). I haven't worked much with C or wrapping C, and I need to learn sometime. Darren From nwagner at mecha.uni-stuttgart.de Mon Nov 14 06:52:42 2005 From: nwagner at mecha.uni-stuttgart.de (Nils Wagner) Date: Mon, 14 Nov 2005 12:52:42 +0100 Subject: [SciPy-dev] site.cfg and acml Message-ID: <43787A8A.4030602@mecha.uni-stuttgart.de> Hi all, I have installed AMD's fast math libraries. /opt/acml2.7.0/gnu32/lib consists -rw-r--r-- 1 root root 12610040 2005-08-16 15:00 libacml.a -rwxr-xr-x 1 root root 8083960 2005-08-16 15:00 libacml.so Where can I find the file site.cfg ? How do I add the information concerning the location of acml to site.cfg ? I guess that /opt/acml2.7.0/gnu32/lib is not a standard path for libraries as opposed to /usr/local/lib /usr/lib etc. 
[blas] # for overriding the names of the atlas libraries blas_libs = acml language = f77 [lapack] lapack_libs = acml language = f77 Nils From oliphant at ee.byu.edu Mon Nov 14 05:59:23 2005 From: oliphant at ee.byu.edu (Travis Oliphant) Date: Mon, 14 Nov 2005 03:59:23 -0700 Subject: [SciPy-dev] [SciPy-user] scipy.linalg.eig() returns transposed eigenvector matrix In-Reply-To: References: <200511132059.28334.dickrp@wckn.com> <43781295.50003@ee.byu.edu> Message-ID: <43786E0B.7000603@ee.byu.edu> Pearu Peterson wrote: >On Sun, 13 Nov 2005, Travis Oliphant wrote: > > > >>Robert Dick wrote: >> >> >> >>>scipy.linalg.eig() returns transposed eigenvector matrix >>> >>> > >This is a matter of definition. scipy.linalg.eig and >scipy.basic.linalg.eig return correct results according to their >documentation. Just scipy.linalg.eig assumes that eigenvectors are >returned column-wise, i.e. > > a * vr[:,i] = w[i] * vr[:,i] > >holds. While scipy.basic.linalg.eig, that is copied from Numeric, assumes >that eigenvectors are returned row-wise, i.e > > > a * vr[i] = w[i] * vr[i] > >holds. > > Thanks for the clarification, Pearu. I'm glad things are actually working as advertised. -Travis From oliphant at ee.byu.edu Mon Nov 14 06:37:08 2005 From: oliphant at ee.byu.edu (Travis Oliphant) Date: Mon, 14 Nov 2005 04:37:08 -0700 Subject: [SciPy-dev] New scipy core requires rebuild of extension modules Message-ID: <437876E4.30309@ee.byu.edu> Version 1484 (current SVN) of scipy core requires a complete rebuild because of changes to the C-API. Sorry about that. I've changed the version number to 0.7 to reflect the need to rebuild extension modules. I'm almost done with the C-API documentation and so hopefully shouldn't have to change things much more. The new call is multi = PyArray_MultIterNew(int n, ...) which returns a PyArrayMultiIterObject * for easier access to array broadcasting. Simply call the function with your Python Objects as arguments. 
In return you get an object that can be used like: i=PyArray_MultiIter_SIZE(multi); while(i--) { /* deal with PyArray_MultiIter_DATA(multi, n) which is a pointer to the data to use */ PyArray_MultiIter_NEXT(multi); } -Travis From arnd.baecker at web.de Mon Nov 14 08:24:49 2005 From: arnd.baecker at web.de (Arnd Baecker) Date: Mon, 14 Nov 2005 14:24:49 +0100 (CET) Subject: [SciPy-dev] site.cfg and acml In-Reply-To: <43787A8A.4030602@mecha.uni-stuttgart.de> References: <43787A8A.4030602@mecha.uni-stuttgart.de> Message-ID: On Mon, 14 Nov 2005, Nils Wagner wrote: > Hi all, > > I have installed AMD's fast math libraries. > > /opt/acml2.7.0/gnu32/lib consists > -rw-r--r-- 1 root root 12610040 2005-08-16 15:00 libacml.a > -rwxr-xr-x 1 root root 8083960 2005-08-16 15:00 libacml.so > > Where can I find the file site.cfg ? You will have to create it before installation in the directory core/scipy/distutils Old scipy used to have a file `sample_site.cfg` in that directory. I think it would be helpful if such a file is added to new scipy as well. > How do I add the information concerning the location of acml to site.cfg ? > > I guess that /opt/acml2.7.0/gnu32/lib is not a standard path for > libraries as opposed to /usr/local/lib /usr/lib etc. Yep, it should be sufficient to add library_dirs = /opt/acml2.7.0/gnu32/lib Maybe also include_dirs= ... 
Arnd > [blas] > # for overriding the names of the atlas libraries > blas_libs = acml > language = f77 > > [lapack] > lapack_libs = acml > language = f77 > > > Nils > > _______________________________________________ > Scipy-dev mailing list > Scipy-dev at scipy.net > http://www.scipy.net/mailman/listinfo/scipy-dev > From nwagner at mecha.uni-stuttgart.de Mon Nov 14 08:45:26 2005 From: nwagner at mecha.uni-stuttgart.de (Nils Wagner) Date: Mon, 14 Nov 2005 14:45:26 +0100 Subject: [SciPy-dev] site.cfg and acml In-Reply-To: References: <43787A8A.4030602@mecha.uni-stuttgart.de> Message-ID: <437894F6.9050202@mecha.uni-stuttgart.de> Arnd Baecker wrote: >On Mon, 14 Nov 2005, Nils Wagner wrote: > > >>Hi all, >> >>I have installed AMD's fast math libraries. >> >>/opt/acml2.7.0/gnu32/lib consists >>-rw-r--r-- 1 root root 12610040 2005-08-16 15:00 libacml.a >>-rwxr-xr-x 1 root root 8083960 2005-08-16 15:00 libacml.so >> >>Where can I find the file site.cfg ? >> > >You will have to create it before installation in the directory >core/scipy/distutils > >Old scipy used to have a file `sample_site.cfg` in that directory. > >I think it would be helpful if such a file is added to new scipy >as well. > > >>How do I add the information concerning the location of acml to site.cfg ? >> >>I guess that /opt/acml2.7.0/gnu32/lib is not a standard path for >>libraries as opposed to /usr/local/lib /usr/lib etc. >> > >Yep, it should be sufficient to add > library_dirs = /opt/acml2.7.0/gnu32/lib >Maybe also > include_dirs= ... 
> >Arnd > > > >>[blas] >># for overriding the names of the atlas libraries >>blas_libs = acml >>language = f77 >> >>[lapack] >>lapack_libs = acml >>language = f77 >> >> >> Nils >> >>_______________________________________________ >>Scipy-dev mailing list >>Scipy-dev at scipy.net >>http://www.scipy.net/mailman/listinfo/scipy-dev >> >> > >_______________________________________________ >Scipy-dev mailing list >Scipy-dev at scipy.net >http://www.scipy.net/mailman/listinfo/scipy-dev > Hi Arnd, Thank you for your note. How about priority if site.cfg is available in core/scipy/distutils ? Does it mean that acml will be used instead of ATLAS ? Nils From arnd.baecker at web.de Mon Nov 14 08:55:15 2005 From: arnd.baecker at web.de (Arnd Baecker) Date: Mon, 14 Nov 2005 14:55:15 +0100 (CET) Subject: [SciPy-dev] site.cfg and acml In-Reply-To: <437894F6.9050202@mecha.uni-stuttgart.de> References: <43787A8A.4030602@mecha.uni-stuttgart.de> <437894F6.9050202@mecha.uni-stuttgart.de> Message-ID: On Mon, 14 Nov 2005, Nils Wagner wrote: [...] > Hi Arnd, > > Thank you for your note. How about priority > if site.cfg is available in core/scipy/distutils ? > Does it mean that acml will be used instead of ATLAS ? That's how it should work. To be sure you can run `python core/scipy/distutils/system_info.py`, which provides very helpful output. Arnd From nwagner at mecha.uni-stuttgart.de Mon Nov 14 09:16:37 2005 From: nwagner at mecha.uni-stuttgart.de (Nils Wagner) Date: Mon, 14 Nov 2005 15:16:37 +0100 Subject: [SciPy-dev] site.cfg and acml In-Reply-To: References: <43787A8A.4030602@mecha.uni-stuttgart.de> <437894F6.9050202@mecha.uni-stuttgart.de> Message-ID: <43789C45.5080101@mecha.uni-stuttgart.de> Arnd Baecker wrote: >On Mon, 14 Nov 2005, Nils Wagner wrote: >[...] > >>Hi Arnd, >> >>Thank you for your note. How about priority >>if site.cfg is available in core/scipy/distutils ? >>Does it mean that acml will be used instead of ATLAS ? >> > >That's how it should work. 
>To be sure you can run `python core/scipy/distutils/system_info.py`, >which provides very helpful output. > >Arnd > >_______________________________________________ >Scipy-dev mailing list >Scipy-dev at scipy.net >http://www.scipy.net/mailman/listinfo/scipy-dev > >>> import scipy Importing test to scipy Importing base to scipy Importing basic to scipy Failed to import basic libacml.so: cannot open shared object file: No such file or directory Importing io to scipy Importing special to scipy Importing fftpack to scipy Failed to import fftpack libacml.so: cannot open shared object file: No such file or directory Importing utils to scipy Importing cluster to scipy Failed to import cluster libacml.so: cannot open shared object file: No such file or directory Importing sparse to scipy Importing interpolate to scipy Importing lib to scipy Importing signal to scipy Failed to import signal cannot import name random Importing integrate to scipy Importing optimize to scipy Failed to import optimize cannot import name random Importing linalg to scipy Importing stats to scipy Failed to import stats cannot import name random Importing basic to scipy Failed to import basic libacml.so: cannot open shared object file: No such file or directory This is my site.cfg # # AMD fast math libraries # [DEFAULT] #library_dirs = /usr/local/lib:/opt/acml2.7.0/gnu32/lib:/usr/lib library_dirs = /opt/acml2.7.0/gnu32/lib [blas] # # for overriding the names of the atlas libraries # blas_libs = acml language = f77 [lapack] lapack_libs = acml language = f77 How can I resolve the problem ? Nils From arnd.baecker at web.de Mon Nov 14 10:51:06 2005 From: arnd.baecker at web.de (Arnd Baecker) Date: Mon, 14 Nov 2005 16:51:06 +0100 (CET) Subject: [SciPy-dev] site.cfg and acml In-Reply-To: <43789C45.5080101@mecha.uni-stuttgart.de> References: <43787A8A.4030602@mecha.uni-stuttgart.de> <43789C45.5080101@mecha.uni-stuttgart.de> Message-ID: On Mon, 14 Nov 2005, Nils Wagner wrote: [...] 
> >>> import scipy > Importing test to scipy > Importing base to scipy > Importing basic to scipy > Failed to import basic > libacml.so: cannot open shared object file: No such file or directory > Importing io to scipy > Importing special to scipy > Importing fftpack to scipy > Failed to import fftpack > libacml.so: cannot open shared object file: No such file or directory > Importing utils to scipy > Importing cluster to scipy > Failed to import cluster > libacml.so: cannot open shared object file: No such file or directory > Importing sparse to scipy > Importing interpolate to scipy > Importing lib to scipy > Importing signal to scipy > Failed to import signal > cannot import name random > Importing integrate to scipy > Importing optimize to scipy > Failed to import optimize > cannot import name random > Importing linalg to scipy > Importing stats to scipy > Failed to import stats > cannot import name random > Importing basic to scipy > Failed to import basic > libacml.so: cannot open shared object file: No such file or directory > > This is my site.cfg > > # > # AMD fast math libraries > # > [DEFAULT] > #library_dirs = /usr/local/lib:/opt/acml2.7.0/gnu32/lib:/usr/lib > library_dirs = /opt/acml2.7.0/gnu32/lib > > [blas] > # > # for overriding the names of the atlas libraries > # > blas_libs = acml > language = f77 > > [lapack] > lapack_libs = acml > language = f77 > > How can I resolve the problem ? Have you tried to set LD_LIBRARY_PATH (maybe also LD_RUN_PATH) to /opt/acml2.7.0/gnu32/lib ? Arnd From nwagner at mecha.uni-stuttgart.de Mon Nov 14 11:11:00 2005 From: nwagner at mecha.uni-stuttgart.de (Nils Wagner) Date: Mon, 14 Nov 2005 17:11:00 +0100 Subject: [SciPy-dev] site.cfg and acml In-Reply-To: References: <43787A8A.4030602@mecha.uni-stuttgart.de> <43789C45.5080101@mecha.uni-stuttgart.de> Message-ID: <4378B714.8050307@mecha.uni-stuttgart.de> Arnd Baecker wrote: >On Mon, 14 Nov 2005, Nils Wagner wrote: > >[...] 
> > >>>>>import scipy >>>>> >>Importing test to scipy >>Importing base to scipy >>Importing basic to scipy >>Failed to import basic >>libacml.so: cannot open shared object file: No such file or directory >>Importing io to scipy >>Importing special to scipy >>Importing fftpack to scipy >>Failed to import fftpack >>libacml.so: cannot open shared object file: No such file or directory >>Importing utils to scipy >>Importing cluster to scipy >>Failed to import cluster >>libacml.so: cannot open shared object file: No such file or directory >>Importing sparse to scipy >>Importing interpolate to scipy >>Importing lib to scipy >>Importing signal to scipy >>Failed to import signal >>cannot import name random >>Importing integrate to scipy >>Importing optimize to scipy >>Failed to import optimize >>cannot import name random >>Importing linalg to scipy >>Importing stats to scipy >>Failed to import stats >>cannot import name random >>Importing basic to scipy >>Failed to import basic >>libacml.so: cannot open shared object file: No such file or directory >> >>This is my site.cfg >> >># >># AMD fast math libraries >># >>[DEFAULT] >>#library_dirs = /usr/local/lib:/opt/acml2.7.0/gnu32/lib:/usr/lib >>library_dirs = /opt/acml2.7.0/gnu32/lib >> >>[blas] >># >># for overriding the names of the atlas libraries >># >>blas_libs = acml >>language = f77 >> >>[lapack] >>lapack_libs = acml >>language = f77 >> >>How can I resolve the problem ? >> > >Have you tried to set > LD_LIBRARY_PATH >(maybe also LD_RUN_PATH) to /opt/acml2.7.0/gnu32/lib ? > >Arnd > >_______________________________________________ >Scipy-dev mailing list >Scipy-dev at scipy.net >http://www.scipy.net/mailman/listinfo/scipy-dev > Yes, but it doesn't work. 
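For readers hitting the same "libacml.so: cannot open shared object file" failure, a sketch of the environment setup Arnd suggests (the paths are the ones from this thread; whether this resolves a given install is system-dependent):

```shell
# Make libacml.so findable at run time; -L/library_dirs at build time
# only affects linking, not the dynamic loader.
export LD_LIBRARY_PATH=/opt/acml2.7.0/gnu32/lib:$LD_LIBRARY_PATH

# Alternatively, set LD_RUN_PATH *before rebuilding* so the search path
# is baked into the extension modules at link time.
export LD_RUN_PATH=/opt/acml2.7.0/gnu32/lib

# Sanity check that the library is actually where site.cfg says it is.
ls -l /opt/acml2.7.0/gnu32/lib/libacml.so
```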
From Fernando.Perez at colorado.edu Mon Nov 14 13:56:18 2005 From: Fernando.Perez at colorado.edu (Fernando Perez) Date: Mon, 14 Nov 2005 11:56:18 -0700 Subject: [SciPy-dev] [SciPy-user] scipy.linalg.eig() returns transposed eigenvector matrix In-Reply-To: <43786E0B.7000603@ee.byu.edu> References: <200511132059.28334.dickrp@wckn.com> <43781295.50003@ee.byu.edu> <43786E0B.7000603@ee.byu.edu> Message-ID: <4378DDD2.2030506@colorado.edu> Travis Oliphant wrote: > Pearu Peterson wrote: >>This is a matter of definition. scipy.linalg.eig and >>scipy.basic.linalg.eig return correct results according to their >>documentation. Just scipy.linalg.eig assumes that eigenvectors are >>returned column-wise, i.e. >> >> a * vr[:,i] = w[i] * vr[:,i] >> >>holds. While scipy.basic.linalg.eig, that is copied from Numeric, assumes >>that eigenvectors are returned row-wise, i.e >> >> >> a * vr[i] = w[i] * vr[i] >> >>holds. >> >> > > Thanks for the clarification, Pearu. I'm glad things are actually > working as advertised. If I may suggest, I think these two should be unified, though. It will be seriously disconcerting for new users to find that scipy.basic.linalg.eig = transpose(scipy.linalg.eig) Just my 2c. Cheers, f From oliphant at ee.byu.edu Mon Nov 14 14:03:19 2005 From: oliphant at ee.byu.edu (Travis Oliphant) Date: Mon, 14 Nov 2005 12:03:19 -0700 Subject: [SciPy-dev] [SciPy-user] scipy.linalg.eig() returns transposed eigenvector matrix In-Reply-To: <4378DDD2.2030506@colorado.edu> References: <200511132059.28334.dickrp@wckn.com> <43781295.50003@ee.byu.edu> <43786E0B.7000603@ee.byu.edu> <4378DDD2.2030506@colorado.edu> Message-ID: <4378DF77.5090700@ee.byu.edu> Fernando Perez wrote: >Travis Oliphant wrote: > > >>Pearu Peterson wrote: >> >> > > > >>>This is a matter of definition. scipy.linalg.eig and >>>scipy.basic.linalg.eig return correct results according to their >>>documentation. Just scipy.linalg.eig assumes that eigenvectors are >>>returned column-wise, i.e. 
>>> >>> a * vr[:,i] = w[i] * vr[:,i] >>> >>>holds. While scipy.basic.linalg.eig, that is copied from Numeric, assumes >>>that eigenvectors are returned row-wise, i.e >>> >>> >>> a * vr[i] = w[i] * vr[i] >>> >>>holds. >>> >>> >>> >>> >>Thanks for the clarification, Pearu. I'm glad things are actually >>working as advertised. >> >> > >If I may suggest, I think these two should be unified, though. It will be >seriously disconcerting for new users to find that > > If the two are to be unified, I think scipy.basic should change. But, that leads to a problem because of compatibility with Numeric. So, what to do? We could change scipy.basic.linalg.eig and keep scipy.basic.linalg.eigenvectors as the old Numeric behavior. -Travis From aisaac at american.edu Mon Nov 14 14:13:54 2005 From: aisaac at american.edu (Alan G Isaac) Date: Mon, 14 Nov 2005 14:13:54 -0500 Subject: [SciPy-dev] [SciPy-user] scipy.linalg.eig() returns transposed eigenvector matrix In-Reply-To: <4378DF77.5090700@ee.byu.edu> References: <200511132059.28334.dickrp@wckn.com> <43781295.50003@ee.byu.edu> <43786E0B.7000603@ee.byu.edu><4378DDD2.2030506@colorado.edu><4378DF77.5090700@ee.byu.edu> Message-ID: On Mon, 14 Nov 2005, Travis Oliphant apparently wrote: > We could change scipy.basic.linalg.eig and keep > scipy.basic.linalg.eigenvectors as the old Numeric > behavior. scipy.deprecated.linalg.eig ? fwiw, Alan Isaac From Fernando.Perez at colorado.edu Mon Nov 14 14:18:12 2005 From: Fernando.Perez at colorado.edu (Fernando Perez) Date: Mon, 14 Nov 2005 12:18:12 -0700 Subject: [SciPy-dev] [SciPy-user] scipy.linalg.eig() returns transposed eigenvector matrix In-Reply-To: <4378DF77.5090700@ee.byu.edu> References: <200511132059.28334.dickrp@wckn.com> <43781295.50003@ee.byu.edu> <43786E0B.7000603@ee.byu.edu> <4378DDD2.2030506@colorado.edu> <4378DF77.5090700@ee.byu.edu> Message-ID: <4378E2F4.7050001@colorado.edu> Travis Oliphant wrote: >>If I may suggest, I think these two should be unified, though. 
It will be >>seriously disconcerting for new users to find that >> >> > > If the two are to be unified, I think scipy.basic should change. But, > that leads to a problem because of compatibility with Numeric. > > So, what to do? This one seems worth fixing to me, and I'd rather not clutter the API with too many similarly-named functions (people will always end up using the wrong one by accident). How about this: let's introduce a CompatibilityWarning call using the Python warnings framework. All functions whose API has changed will trigger such a warning (just noise on screen) when used. I see a few advantages to this: - people can turn it off using the warnings controls, if they really know they don't need it. Easy instructions on how to do this should be given, of course. - it will reduce the number of 'support calls' on the list due to broken code as people start transitioning. At least one hopes so :) - after 1.0 (or a little later?) is out, we can easily clear all this code out. Since these are real function calls, we'd have to fully purge them out at least from the python sources (no #ifdef in python without nasty bytecode hacks). Their counterpart in C extensions could be ifdef'd out. If you like this approach, it may make you more comfortable in introducing backwards-compatibility-breaking changes you feel are necessary. With nice, explicit messages in the warnings, the library becomes self-documenting to help users transition. I know the line between backwards-compatibility and clean evolution is a hard one to walk; this may be a tool to help in that. Does this sound reasonable? 
Cheers, f From nwagner at mecha.uni-stuttgart.de Mon Nov 14 15:10:38 2005 From: nwagner at mecha.uni-stuttgart.de (Nils Wagner) Date: Mon, 14 Nov 2005 21:10:38 +0100 Subject: [SciPy-dev] integrate.odeint Message-ID: Hi all, What is the meaning of lsoda-- at current t (=r1), mxstep (=i1) steps taken on this call before reaching tout in above message, i1 = 500 in above message, r1 = 0.2460384696241E-01 Excess work done on this call (perhaps wrong Dfun type). Run with full_output = 1 to get quantitative information. Nils -------------- next part -------------- A non-text attachment was scrubbed... Name: zen.py Type: text/x-python Size: 1982 bytes Desc: not available URL: From dickrp at wckn.com Tue Nov 15 01:46:19 2005 From: dickrp at wckn.com (Robert Dick) Date: Tue, 15 Nov 2005 00:46:19 -0600 Subject: [SciPy-dev] [SciPy-user] scipy.linalg.eig() returns transposed eigenvector matrix Message-ID: <200511150046.19597.dickrp@wckn.com> Fernando Perez wrote: >Travis Oliphant wrote: >>Pearu Peterson wrote: >>>This is a matter of definition. scipy.linalg.eig and >>>scipy.basic.linalg.eig return correct results according to their >>>documentation. Just scipy.linalg.eig assumes that eigenvectors are >>>returned column-wise, i.e. >>Thanks for the clarification, Pearu. I'm glad things are actually >>working as advertised. >If I may suggest, I think these two should be unified, though. It will be >seriously disconcerting for new users to find that > If the two are to be unified, I think scipy.basic should change. But, > that leads to a problem because of compatibility with Numeric. > > So, what to do? > > We could change scipy.basic.linalg.eig and keep > scipy.basic.linalg.eigenvectors as the old Numeric behavior. If you need to decide which one to change, identify the common case. Is it more common to access all dimensions of one eigenvector or access one dimension of many eigenvectors? 
The common case should be the easiest to express, i.e., if one wants the first eigenvector, should la.eig(m)[1][0] or la.eig(m)[1][:, 0] be used? The first convention (Numeric-style) maintains compatibility with Numeric and conforms with the documentation in "Guide to SciPy: Core System": The second element of the return tuple contains the eigenvectors in the rows (x[i] is the ith eigenvector). The second convention (current scipy.eig) is more MATLAB-like. Other comments: It might be good to change "Building New SciPy" to "Building SciPy" and change "Building SciPy" to "Building Old Versions of SciPy" at http://www.scipy.org/documentation/ When I decided to try SciPy I read the first, and apparently default, document. It indicates that Numeric is necessary for SciPy. As a consequence, I wasted a lot of time rewriting significant portions of Numeric in order to get it to build with ACML. Only after later getting SciPy from svn did I realize that Numeric is no longer required. The downloads page might best direct folks to the svn repository. Svn is easy but, when trying a new package, I generally go with tarball releases under the assumption that they are more stable. However, scipy's tarball is a year old. I would have gone straight to svn if the page suggested it and that would have resulted in a better first impression of SciPy. In case this sounds too negative, I should state that SciPy has great potential and I intend to keep using it and sending bug reports (or checking in, after establishing history). 
Best Regards, -Robert Dick- From pearu at scipy.org Tue Nov 15 01:41:14 2005 From: pearu at scipy.org (Pearu Peterson) Date: Tue, 15 Nov 2005 00:41:14 -0600 (CST) Subject: [SciPy-dev] [SciPy-user] scipy.linalg.eig() returns transposed eigenvector matrix In-Reply-To: <200511150046.19597.dickrp@wckn.com> References: <200511150046.19597.dickrp@wckn.com> Message-ID: On Tue, 15 Nov 2005, Robert Dick wrote: > Fernando Perez wrote: >> Travis Oliphant wrote: >>> Pearu Peterson wrote: >>>> This is a matter of definition. scipy.linalg.eig and >>>> scipy.basic.linalg.eig return correct results according to their >>>> documentation. Just scipy.linalg.eig assumes that eigenvectors are >>>> returned column-wise, i.e. > >>> Thanks for the clarification, Pearu. I'm glad things are actually >>> working as advertised. > >> If I may suggest, I think these two should be unified, though. It will be >> seriously disconcerting for new users to find that > >> If the two are to be unified, I think scipy.basic should change. But, >> that leads to a problem because of compatibility with Numeric. >> >> So, what to do? >> >> We could change scipy.basic.linalg.eig and keep >> scipy.basic.linalg.eigenvectors as the old Numeric behavior. > > If you need to decide which one to change, identify the common case. Is it > more common to access all dimensions of one eigenvector or access one > dimension of many eigenvectors? The common case should be the easiest to > express, i.e., if one wants the first eigenvector, should > la.eig(m)[1][0] > or > la.eig(m)[1][:, 0] > be used? I am not convinced that getting eigenvectors one-by-one is the most common case of la.eig usage. Sometimes one needs to operate with the whole matrix of eigenvectors and then the mathematically "correct" representation of the eigenmatrix would be more convenient. 
> The first convention (Numeric-style) maintains compatibility with Numeric and > conforms with the documentation in "Guide to SciPy: Core System": > The second element of the return tuple contains the eigenvectors in the > rows (x[i] is the ith eigenvector). > The second convention (current scipy.eig) is more MATLAB-like. Documentation can always be fixed. Numeric compatibility is the biggest problem now. I have had long time in my mind the following idea: Introduce, say, scipy.Numeric package that is fully compatible with Numeric and having array with typecode, etc methods and completely Numeric behaviour such that the replacement Numeric->scipy.Numeric would be the only change that one needs to apply to Numeric-based codes. Or even simpler, install scipy.Numeric as standalone Numeric. But introducing scipy.Numeric may require lots of work and I am not sure that the effort is worth it. Pearu From nwagner at mecha.uni-stuttgart.de Tue Nov 15 02:59:57 2005 From: nwagner at mecha.uni-stuttgart.de (Nils Wagner) Date: Tue, 15 Nov 2005 08:59:57 +0100 Subject: [SciPy-dev] scipy.test(1,10) Message-ID: <4379957D.5040205@mecha.uni-stuttgart.de> Hi all, If I import scipy and run scipy.test(1,10) two times scipy.test(1,10) Ran 1348 tests in 8.735s OK scipy.test(1,10) Ran 1351 tests in 8.304s OK There is a difference in the number of tests, but for what reason ? 
Nils Core/Scipy version: 0.7.0.1488 0.4.2_1442 From pearu at scipy.org Tue Nov 15 02:11:09 2005 From: pearu at scipy.org (Pearu Peterson) Date: Tue, 15 Nov 2005 01:11:09 -0600 (CST) Subject: [SciPy-dev] scipy.test(1,10) In-Reply-To: <4379957D.5040205@mecha.uni-stuttgart.de> References: <4379957D.5040205@mecha.uni-stuttgart.de> Message-ID: On Tue, 15 Nov 2005, Nils Wagner wrote: > Hi all, > > If I import scipy and run scipy.test(1,10) two times > > scipy.test(1,10) > Ran 1348 tests in 8.735s > > OK > scipy.test(1,10) > Ran 1351 tests in 8.304s > > OK > > There is a difference in the number of tests, but for what reason ? The additional three tests come from scipy.distutils. This is due to how scipy testing hooks work and is normal behaviour (tests are looked in imported modules and in the first run scipy.distutils has not been imported and so its tests are not found). Pearu From nwagner at mecha.uni-stuttgart.de Tue Nov 15 03:49:58 2005 From: nwagner at mecha.uni-stuttgart.de (Nils Wagner) Date: Tue, 15 Nov 2005 09:49:58 +0100 Subject: [SciPy-dev] integrate.odeint Message-ID: <4379A136.9010107@mecha.uni-stuttgart.de> Hi all, Is it a bug or a feature that the initial conditions z(0) are overwritten with z(T) ? test_odeint.py for details ... Nils -------------- next part -------------- A non-text attachment was scrubbed... Name: test_odeint.py Type: text/x-python Size: 670 bytes Desc: not available URL: From schofield at ftw.at Tue Nov 15 06:41:52 2005 From: schofield at ftw.at (Ed Schofield) Date: Tue, 15 Nov 2005 12:41:52 +0100 Subject: [SciPy-dev] scipy.linalg.eig() returns transposed eigenvector matrix In-Reply-To: <4378DF77.5090700@ee.byu.edu> References: <200511132059.28334.dickrp@wckn.com> <43781295.50003@ee.byu.edu> <43786E0B.7000603@ee.byu.edu> <4378DDD2.2030506@colorado.edu> <4378DF77.5090700@ee.byu.edu> Message-ID: <4379C980.3050906@ftw.at> Travis Oliphant wrote: >Fernando Perez wrote: > > >>If I may suggest, I think these two should be unified, though. 
It will be >>seriously disconcerting for new users to find that >> >> >If the two are to be unified, I think scipy.basic should change. But, >that leads to a problem because of compatibility with Numeric. > >So, what to do? > > Is there actually a use for the row-wise definition? If so, how about calling it roweig()? This would be an easy change for the conversion script to handle or, if it's well documented, for people to make themselves along with changing import statements, etc. This could be done with or without a warning call like Fernando suggested. -- Ed From aisaac at american.edu Tue Nov 15 09:20:22 2005 From: aisaac at american.edu (Alan G Isaac) Date: Tue, 15 Nov 2005 09:20:22 -0500 Subject: [SciPy-dev] [SciPy-user] scipy.linalg.eig() returns transposed eigenvector matrix In-Reply-To: References: <200511150046.19597.dickrp@wckn.com> Message-ID: On Tue, 15 Nov 2005, Pearu Peterson apparently wrote: > I am not convinced that getting eigenvectors one-by-one is > the most common case of la.eig usage. Sometimes one needs > to operate with the whole matrix of eigenvectors and then > the mathematically "correct" representation of the > eigenmatrix would be more convinient. Even though you put "correct" in quotes, and even though I do not have a strong position on this, I'd like to caution against the "correct"ness argument (assuming I have understood it). Row vectors are perfectly valid left-eigenvectors. http://en.wikipedia.org/wiki/Eigenvector We are concerned with the case where left and right eigenvectors have the same (normalized) elements, so we are just trying to pick a convention. Like you I expect my eigenvectors to be columns, and I think about them as right eigenvectors. But others have different expectations apparently. The problem in this case is perhaps choosing a transparent notation. 
fwiw, Alan Isaac From pearu at scipy.org Tue Nov 15 08:47:38 2005 From: pearu at scipy.org (Pearu Peterson) Date: Tue, 15 Nov 2005 07:47:38 -0600 (CST) Subject: [SciPy-dev] [SciPy-user] scipy.linalg.eig() returns transposed eigenvector matrix In-Reply-To: References: <200511150046.19597.dickrp@wckn.com> Message-ID: On Tue, 15 Nov 2005, Alan G Isaac wrote: > On Tue, 15 Nov 2005, Pearu Peterson apparently wrote: >> I am not convinced that getting eigenvectors one-by-one is >> the most common case of la.eig usage. Sometimes one needs >> to operate with the whole matrix of eigenvectors and then >> the mathematically "correct" representation of the >> eigenmatrix would be more convinient. > > Even though you put "correct" in quotes, > and even though I do not have a strong position on this, > I'd like to caution against the "correct"ness argument > (assuming I have understood it). I used "correct" solely because it is a matter of definition, but I think the column-wise view is also widely used. The reason why Numeric returns eigenvectors row-wise is that the default underlying eigensolver function is in C. In scipy.linalg.eig the eigensolver can be either in C or Fortran, and the data storage order of the returned eigenmatrix is chosen optimally, so there is little optimization argument for returning vectors row-wise. We can therefore choose the convention that is most convenient in mathematical notation. > Row vectors are perfectly valid left-eigenvectors. > http://en.wikipedia.org/wiki/Eigenvector > We are concerned with the case where left and right > eigenvectors have the same (normalized) elements, > so we are just trying to pick a convention. Sure. To also get left-eigenvectors, call scipy.linalg.eig with the left=True argument. See scipy.linalg.eig.__doc__ for more information. > Like you I expect my eigenvectors to be columns, > and I think about them as right eigenvectors. > But others have different expectations apparently.
> The problem in this case is perhaps choosing a transparent > notation. I think if eig is properly documented and users will read the documentation, there will be no problem. Pearu From nwagner at mecha.uni-stuttgart.de Tue Nov 15 14:42:08 2005 From: nwagner at mecha.uni-stuttgart.de (Nils Wagner) Date: Tue, 15 Nov 2005 20:42:08 +0100 Subject: [SciPy-dev] tests for scipy.integrate Message-ID: Hi all, Is it planned to add some tests to scipy.integrate in the near future ? Nils From jswhit at fastmail.fm Tue Nov 15 18:09:46 2005 From: jswhit at fastmail.fm (Jeff Whitaker) Date: Tue, 15 Nov 2005 16:09:46 -0700 Subject: [SciPy-dev] scipy_core 0.6.0 segfaults on macos x Message-ID: <437A6ABA.2030206@fastmail.fm> Running scipy.test(10,10) with python 2.4.2 gives Found 0 tests for __main__ check_reduce_complex (scipy.base.umath.test_umath.test_maximum) ... ok check_reduce_complex (scipy.base.umath.test_umath.test_minimum) ... ok check_basic (scipy.base.function_base.test_function_base.test_all) ... ok check_nd (scipy.base.function_base.test_function_base.test_all) ... ok check_basic (scipy.base.function_base.test_function_base.test_amax) ... Segmentation fault on both 10.3.9 and 10.4.2. -Jeff -- Jeffrey S. Whitaker Phone : (303)497-6313 Meteorologist FAX : (303)497-6449 NOAA/OAR/PSD R/PSD1 Email : Jeffrey.S.Whitaker at noaa.gov 325 Broadway Office : Skaggs Research Cntr 1D-124 Boulder, CO, USA 80303-3328 Web : http://tinyurl.com/5telg From oliphant at ee.byu.edu Tue Nov 15 18:15:51 2005 From: oliphant at ee.byu.edu (Travis Oliphant) Date: Tue, 15 Nov 2005 16:15:51 -0700 Subject: [SciPy-dev] scipy_core 0.6.0 segfaults on macos x In-Reply-To: <437A6ABA.2030206@fastmail.fm> References: <437A6ABA.2030206@fastmail.fm> Message-ID: <437A6C27.6010608@ee.byu.edu> Jeff Whitaker wrote: >Running scipy.test(10,10) with python 2.4.2 gives > > Found 0 tests for __main__ >check_reduce_complex (scipy.base.umath.test_umath.test_maximum) ... 
ok >check_reduce_complex (scipy.base.umath.test_umath.test_minimum) ... ok >check_basic (scipy.base.function_base.test_function_base.test_all) ... ok >check_nd (scipy.base.function_base.test_function_base.test_all) ... ok >check_basic (scipy.base.function_base.test_function_base.test_amax) ... >Segmentation fault > > > This is a known problem in 0.6.0 and the reason for the 0.6.1 release Sorry for the trouble. -Travis From dickrp at wckn.com Tue Nov 15 18:39:04 2005 From: dickrp at wckn.com (Robert Dick) Date: Tue, 15 Nov 2005 17:39:04 -0600 Subject: [SciPy-dev] [SciPy-user] scipy.linalg.eig() returns transposed eigenvector matrix Message-ID: <200511151739.04845.dickrp@wckn.com> >> If you need to decide which one to change, identify the common case. Is it >> more common to access all dimensions of one eigenvector or access one >> dimension of many eigenvectors? The common case should be the easiest to >> express, i.e., if one wants the first eigenvector, should >> la.eig(m)[1][0] >> or >> la.eig(m)[1][:, 0] >> be used? Pearu Peterson: > I am not convinced that getting eigenvectors one-by-one is the most common > case of la.eig usage. Sometimes one needs to operate with the whole matrix > of eigenvectors and then the mathematically "correct" representation of > the eigenmatrix would be more convinient. Then let's go with the eig() convention (an eigenvector in each column). It's not very dangerous because the function names are different. Anybody implementing a backward-compatible scipy.Numeric will just need to transpose the results of eig() in an eigenvectors() wrapper. The only real problem now is the conflict between the book and the code. -Robert Dick- From oliphant at ee.byu.edu Wed Nov 16 00:19:26 2005 From: oliphant at ee.byu.edu (Travis Oliphant) Date: Tue, 15 Nov 2005 22:19:26 -0700 Subject: [SciPy-dev] Use of Python memory allocator -- Wait... 
Message-ID: <437AC15E.1030302@ee.byu.edu> Developers, I know some mention was made in the past of using the Python memory allocator for scipy. Let's not for now. I just spent way too long tracking down a "memory leak" problem for a user that amounted to the fact that the Python memory allocator never releases memory to the OS. Code that was working with Numeric without growing memory would grow memory with scipy core until it bogged down the entire machine. After closing several memory leaks in scipy core itself, the real problem was tracked finally to the fact that the new array scalars defined in scipy core (in fact all of the new types) all used the Python memory allocator for their creation. Apparently this code was using a lot of intermediate objects. The Python memory allocator was not re-using apparently freed memory and was instead consuming more and more system memory as time went on. Apparently this is a problem with the Python memory allocator and is known about. I think some patches have been discussed, but I'm not sure what the current state of any solution is. For scipy core, the solution is to simply not use the Python memory allocator. I changed all the objects to use the system allocator (malloc and free), instead. This solved the problem the user was having. So, any use of the Python memory allocator should be avoided for the time being. In addition, the array scalars should probably do something like Python does with its own scalars. -Travis From jonathan.taylor at utoronto.ca Wed Nov 16 01:19:55 2005 From: jonathan.taylor at utoronto.ca (Jonathan Taylor) Date: Wed, 16 Nov 2005 06:19:55 +0000 Subject: [SciPy-dev] Nifty matrix functions that others may find helpful Message-ID: I have been looking for the indices function for a while without knowing that was the name I was looking for. I am glad I finally found it. Following are two special cases (inspired by R) that I find immensely useful for getting row and column indices out of a matrix. 
In particular I use them often to extract the upper triangle from a matrix via: upper_triangle = m[row(m)>col(m)] Maybe there is something better in scipy for this anyways that I am missing? Cheers. Jon.

def col(m):
    """col(m) returns a matrix of the same size of m where each element
    contains an integer denoting which column it is in. For example,

    >>> m = eye(3)
    >>> m
    array([[1, 0, 0],
           [0, 1, 0],
           [0, 0, 1]])
    >>> col(m)
    array([[0, 1, 2],
           [0, 1, 2],
           [0, 1, 2]])
    """
    assert len(m.shape) == 2, "should be a matrix"
    return indices(m.shape)[1]

def row(m):
    """row(m) returns a matrix of the same size of m where each element
    contains an integer denoting which row it is in. For example,

    >>> m = eye(3)
    >>> m
    array([[1, 0, 0],
           [0, 1, 0],
           [0, 0, 1]])
    >>> row(m)
    array([[0, 0, 0],
           [1, 1, 1],
           [2, 2, 2]])
    """
    assert len(m.shape) == 2, "should be a matrix"
    return indices(m.shape)[0]

From Fernando.Perez at colorado.edu Wed Nov 16 01:57:27 2005 From: Fernando.Perez at colorado.edu (Fernando Perez) Date: Tue, 15 Nov 2005 23:57:27 -0700 Subject: [SciPy-dev] Nifty matrix functions that others may find helpful In-Reply-To: References: Message-ID: <437AD857.2010003@colorado.edu> Jonathan Taylor wrote: > I have been looking for the indices function for a while without knowing > that was the name I was looking for. I am glad I finally found it. > Following are two special cases (inspired by R) that I find immensely > useful for getting row and column indices out of a matrix. In > particular I use them often to extract the upper triangle from a matrix > via: [...] I was going to suggest dropping it into the plone site temporarily until the move to a Moin setup is completed (if it is going to happen), but the plone site is timing out. I guess it's a good confirmation that we need something else :) At any rate, I think that a community cookbook is the right approach for this kind of thing, and a wiki a good system to host it.
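[Editorially inserted sketch.] Jonathan's row() and col() helpers are thin wrappers around indices(); comparing the two index grids directly gives the same triangle-extraction idiom. A self-contained check in present-day numpy (later numpy also grew triu_indices/tril_indices for exactly this job):

```python
import numpy as np

m = np.arange(9).reshape(3, 3)

# indices() returns both index grids at once; row(m) and col(m) above
# are just its two components.
rows, cols = np.indices(m.shape)

strict_upper = m[rows < cols]   # elements strictly above the diagonal
strict_lower = m[rows > cols]   # elements strictly below it, as in m[row(m) > col(m)]

assert strict_upper.tolist() == [1, 2, 5]
assert strict_lower.tolist() == [3, 6, 7]
```

Boolean-mask indexing returns the selected elements in row-major order, which is usually what one wants when flattening a triangle.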
John has had good success with the matplotlib cookbook hosted at scipy.org, and people regularly contribute recipes to it. Hopefully once we have a more functional wiki than today, this will be one of the first things to go up. It will also help all of us see examples of how to transition old Numeric/SciPy code to the new system. Cheers, f From jonathan.taylor at utoronto.ca Wed Nov 16 01:59:43 2005 From: jonathan.taylor at utoronto.ca (Jonathan Taylor) Date: Wed, 16 Nov 2005 06:59:43 +0000 Subject: [SciPy-dev] the public face of scipy.org (was Some Feedback) In-Reply-To: <17272.15602.110899.956045@vulcan.linux.in> Message-ID: And here is a latex macro thing I am using on my trac instance. J. From nwagner at mecha.uni-stuttgart.de Wed Nov 16 02:58:31 2005 From: nwagner at mecha.uni-stuttgart.de (Nils Wagner) Date: Wed, 16 Nov 2005 08:58:31 +0100 Subject: [SciPy-dev] segfault in 0.7.0.1491 Message-ID: <437AE6A7.8010208@mecha.uni-stuttgart.de> scipy.test(1,10) results in check_definition (scipy.fftpack.helper.test_helper.test_fftfreq) ... ok check_definition (scipy.fftpack.helper.test_helper.test_fftshift) ... ok check_inverse (scipy.fftpack.helper.test_helper.test_fftshift) ... ok check_definition (scipy.fftpack.helper.test_helper.test_rfftfreq) ... ok check_cblas (scipy.lib.blas.test_blas.test_blas) ... ok check_fblas (scipy.lib.blas.test_blas.test_blas) ... ok check_axpy (scipy.lib.blas.test_blas.test_cblas1_simple) Program received signal SIGSEGV, Segmentation fault. [Switching to Thread 1076102528 (LWP 26517)] 0x4018bdaf in malloc_consolidate () from /lib/tls/libc.so.6 (gdb) bt #0 0x4018bdaf in malloc_consolidate () from /lib/tls/libc.so.6 #1 0x00000028 in ?? () #2 0x4023d568 in main_arena () from /lib/tls/libc.so.6 #3 0x000857d7 in ?? 
() #4 0x4023d554 in main_arena () from /lib/tls/libc.so.6 #5 0x4023d548 in main_arena () from /lib/tls/libc.so.6 #6 0x4023d520 in __malloc_initialize_hook () from /lib/tls/libc.so.6 #7 0x4023d520 in __malloc_initialize_hook () from /lib/tls/libc.so.6 #8 0x00000000 in ?? () #9 0xbfffc84c in ?? () #10 0x4018db5e in _int_malloc () from /lib/tls/libc.so.6 Previous frame inner to this frame (corrupt stack?) Nils From oliphant at ee.byu.edu Wed Nov 16 03:31:48 2005 From: oliphant at ee.byu.edu (Travis Oliphant) Date: Wed, 16 Nov 2005 01:31:48 -0700 Subject: [SciPy-dev] segfault in 0.7.0.1491 In-Reply-To: <437AE6A7.8010208@mecha.uni-stuttgart.de> References: <437AE6A7.8010208@mecha.uni-stuttgart.de> Message-ID: <437AEE74.9010403@ee.byu.edu> Nils Wagner wrote: >scipy.test(1,10) results in > > > I accidentally committed some changes to f2py/rules.py that I was playing with last night but didn't mean to commit. I restored the file. Try version 1493. Be sure to recompile scipy.. -Travis From nwagner at mecha.uni-stuttgart.de Wed Nov 16 06:32:07 2005 From: nwagner at mecha.uni-stuttgart.de (Nils Wagner) Date: Wed, 16 Nov 2005 12:32:07 +0100 Subject: [SciPy-dev] segfault in 0.7.0.1491 In-Reply-To: <437AEE74.9010403@ee.byu.edu> References: <437AE6A7.8010208@mecha.uni-stuttgart.de> <437AEE74.9010403@ee.byu.edu> Message-ID: <437B18B7.8060201@mecha.uni-stuttgart.de> Travis Oliphant wrote: >Nils Wagner wrote: > > >>scipy.test(1,10) results in >> >> >> >> >I accidentally committed some changes to f2py/rules.py that I was >playing with last night but didn't mean to commit. > >I restored the file. Try version 1493. Be sure to recompile scipy.. > >-Travis > >_______________________________________________ >Scipy-dev mailing list >Scipy-dev at scipy.net >http://www.scipy.net/mailman/listinfo/scipy-dev > Hi Travis, Thank you for the bug fixing. Tests for integrate.odeint - how about that ? Or is it planned to replace odeint by sundials ? 
Nils From schofield at ftw.at Wed Nov 16 10:28:11 2005 From: schofield at ftw.at (Ed Schofield) Date: Wed, 16 Nov 2005 16:28:11 +0100 Subject: [SciPy-dev] tests for scipy.integrate In-Reply-To: <437B18B7.8060201@mecha.uni-stuttgart.de> References: <437AE6A7.8010208@mecha.uni-stuttgart.de> <437AEE74.9010403@ee.byu.edu> <437B18B7.8060201@mecha.uni-stuttgart.de> Message-ID: <437B500B.70103@ftw.at> Nils Wagner wrote: >Tests for integrate.odeint - how about that ? >Or is it planned to replace odeint by sundials ? > > I think tests for scipy.integrate are a good idea and would be very welcome! I think it's a question of manpower. If you could write some tests and post them here I'd happily commit them. We could then adapt them later if / when sundials is wrapped... -- Ed From nwagner at mecha.uni-stuttgart.de Wed Nov 16 11:04:52 2005 From: nwagner at mecha.uni-stuttgart.de (Nils Wagner) Date: Wed, 16 Nov 2005 17:04:52 +0100 Subject: [SciPy-dev] tests for scipy.integrate In-Reply-To: <437B500B.70103@ftw.at> References: <437AE6A7.8010208@mecha.uni-stuttgart.de> <437AEE74.9010403@ee.byu.edu> <437B18B7.8060201@mecha.uni-stuttgart.de> <437B500B.70103@ftw.at> Message-ID: <437B58A4.4090307@mecha.uni-stuttgart.de> Ed Schofield wrote: >Nils Wagner wrote: > > >>Tests for integrate.odeint - how about that ? >>Or is it planned to replace odeint by sundials ? >> >> >> > >I think tests for scipy.integrate are a good idea and would be very >welcome! I think it's a question of manpower. If you could write some >tests and post them here I'd happily commit them. > >We could then adapt them later if / when sundials is wrapped... > >-- Ed > > >_______________________________________________ >Scipy-dev mailing list >Scipy-dev at scipy.net >http://www.scipy.net/mailman/listinfo/scipy-de > Hi Ed, I am not familiar with writing tests. However I have attached a small example, where a closed form solution is available. It's a second order system. Please let me know if it is useful. 
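[Editorially inserted sketch.] The scrubbed attachment is lost, but a second-order system with a closed-form solution is easy to reconstruct in spirit: integrate y'' = -y over one full period and compare against the exact solution cos t. This is a hypothetical reconstruction — a plain-Python RK4 step stands in for odeint so the sketch is self-contained:

```python
import math

def rk4_step(f, t, y, h):
    """One classical Runge-Kutta step for the system y' = f(t, y)."""
    k1 = f(t, y)
    k2 = f(t + h/2, [yi + h/2*ki for yi, ki in zip(y, k1)])
    k3 = f(t + h/2, [yi + h/2*ki for yi, ki in zip(y, k2)])
    k4 = f(t + h, [yi + h*ki for yi, ki in zip(y, k3)])
    return [yi + h/6*(a + 2*b + 2*c + d)
            for yi, a, b, c, d in zip(y, k1, k2, k3, k4)]

def rhs(t, y):
    # y'' = -y rewritten as a first-order system: y0' = y1, y1' = -y0
    return [y[1], -y[0]]

n = 1000
h = 2 * math.pi / n
t, y = 0.0, [1.0, 0.0]
for _ in range(n):
    y = rk4_step(rhs, t, y, h)
    t += h

# After one full period the exact solution returns to (1, 0).
assert abs(y[0] - 1.0) < 1e-6 and abs(y[1]) < 1e-6
```

A test in this style also catches the initial-condition question from earlier in the thread: the starting state [1.0, 0.0] is never mutated, only the returned state is advanced.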
Nils -------------- next part -------------- A non-text attachment was scrubbed... Name: test_odeint1.py Type: text/x-python Size: 639 bytes Desc: not available URL: From schofield at ftw.at Wed Nov 16 11:08:45 2005 From: schofield at ftw.at (Ed Schofield) Date: Wed, 16 Nov 2005 17:08:45 +0100 Subject: [SciPy-dev] Segfault with newcore Message-ID: <437B598D.4020504@ftw.at> The following gives a segfault >>> a = array([1.,2.]) >>> a['BLAH'] = 1 I admit that it's not clear why anyone sane would want to try this ;) Would anyone object to my adding this to a new test_crazy_input.py module? Here's the backtrace: Program received signal SIGSEGV, Segmentation fault. [Switching to Thread -1208756544 (LWP 6850)] 0x0808bb27 in tupleitem (a=0xb6411e14, i=0) at Objects/tupleobject.c:313 313 Py_INCREF(a->ob_item[i]); (gdb) bt #0 0x0808bb27 in tupleitem (a=0xb6411e14, i=0) at Objects/tupleobject.c:313 #1 0x0805bb27 in PySequence_GetItem (s=0xb6411e14, i=0) at Objects/abstract.c:1202 #2 0xb7d2d4b5 in discover_depth (s=0xb6411e14, max=40, stop_at_string=0) at arrayobject.c:4549 #3 0xb7d2d4e5 in discover_depth (s=0xb6a4202c, max=41, stop_at_string=0) at arrayobject.c:4551 #4 0xb7d2e210 in Array_FromSequence (s=0xb6a4202c, typecode=0xbfe4f9ec, min_depth=0, max_depth=0) at arrayobject.c:4878 #5 0xb7d304ef in array_fromobject (op=0xb6a4202c, typecode=0xbfe4f9ec, min_depth=0, max_depth=0, flags=16) at arrayobject.c:5639 #6 0xb7d307e8 in PyArray_FromAny (op=0xb6a4202c, typecode=0xbfe4f9ec, min_depth=0, max_depth=0, requires=16) at arrayobject.c:5772 #7 0xb7d33a91 in _convert_obj (obj=0xb6a4202c, iter=0xbfe4fa34) at arrayobject.c:6683 #8 0xb7d35b2c in PyArray_MapIterNew (indexobj=0xb7ea80f4) at arrayobject.c:7189 #9 0xb7d23391 in array_ass_sub (self=0x82a99e8, index=0xb61f88a0, op=0x8154db0) at arrayobject.c:1803 #10 0x0805965a in PyObject_SetItem (o=0x82a99e8, key=0xb61f88a0, value=0x8154db0) at Objects/abstract.c:123 ... 
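[Editorially inserted note.] For the record, later NumPy releases reject this input cleanly rather than crashing. The exact exception type has varied between versions, so the sketch below catches broadly:

```python
import numpy as np

a = np.array([1.0, 2.0])

# A field name is only meaningful on structured arrays; on a plain
# float array the assignment now raises instead of segfaulting.
try:
    a['BLAH'] = 1
    rejected = False
except (IndexError, KeyError, ValueError, TypeError):
    rejected = True

assert rejected
assert a.tolist() == [1.0, 2.0]  # the array is left untouched
```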
-- Ed From jswhit at fastmail.fm Wed Nov 16 13:28:50 2005 From: jswhit at fastmail.fm (Jeff Whitaker) Date: Wed, 16 Nov 2005 11:28:50 -0700 Subject: [SciPy-dev] scipy_core 0.6.0 segfaults on macos x In-Reply-To: <437A6C27.6010608@ee.byu.edu> References: <437A6ABA.2030206@fastmail.fm> <437A6C27.6010608@ee.byu.edu> Message-ID: <437B7A62.5080000@fastmail.fm> Travis Oliphant wrote: >Jeff Whitaker wrote: > > > >>Running scipy.test(10,10) with python 2.4.2 gives >> >> Found 0 tests for __main__ >>check_reduce_complex (scipy.base.umath.test_umath.test_maximum) ... ok >>check_reduce_complex (scipy.base.umath.test_umath.test_minimum) ... ok >>check_basic (scipy.base.function_base.test_function_base.test_all) ... ok >>check_nd (scipy.base.function_base.test_function_base.test_all) ... ok >>check_basic (scipy.base.function_base.test_function_base.test_amax) ... >>Segmentation fault >> >> >> >> >> >This is a known problem in 0.6.0 and the reason for the 0.6.1 release > >Sorry for the trouble. > >-Travis > > Thanks Travis - I hadn't noticed 0.6.1. I've added fink packages (http://fink.sf.net) for scipy-core 0.6.1 and scipy-0.4.3. -Jeff -- Jeffrey S. Whitaker Phone : (303)497-6313 Meteorologist FAX : (303)497-6449 NOAA/OAR/PSD R/PSD1 Email : Jeffrey.S.Whitaker at noaa.gov 325 Broadway Office : Skaggs Research Cntr 1D-124 Boulder, CO, USA 80303-3328 Web : http://tinyurl.com/5telg From mforbes at alum.MIT.EDU Wed Nov 16 13:50:40 2005 From: mforbes at alum.MIT.EDU (Michael Forbes) Date: Wed, 16 Nov 2005 13:50:40 -0500 (EST) Subject: [SciPy-dev] Mailing-list webpage broken. Message-ID: Hi, It is not clear to me where this message should be directed so I shall send it here. There are some problems with the mailinglist page: http://www.scipy.org/mailinglists/ In particular, the links to all list sign-up pages (except user) is messed up. 
For example, the developer sign-up links to: http://www.scipy.org/mailinglists/map?rmurl=http://scipy.net/mailman/listinfo/scipy-dev rather than http://scipy.net/mailman/listinfo/scipy-dev which fails. More importantly, the mailing-list search fails in a similar way, but I do not know what the correct URL should be or the query structure. As a minor improvement, a link directing comments to the appropriate "webmaster" would be useful to reduce clutter in this forum. Thanks, Michael. From oliphant at ee.byu.edu Wed Nov 16 14:21:31 2005 From: oliphant at ee.byu.edu (Travis Oliphant) Date: Wed, 16 Nov 2005 12:21:31 -0700 Subject: [SciPy-dev] Problems with writing files and windows binary of scipy core on 2.4 Message-ID: <437B86BB.40904@ee.byu.edu> Well, the old mingw32 - Python 2.4 problem has raised its head again. The scipy core binaries I released for Python 2.4 (and probably the full scipy ones as well), segfault when using the tofile method. Apparently this is the same problem that existed with old scipy and Python 2.4 The solution apparently is to link multiarraymodule.o against msvcr71.a when compiling with mingw32 under Python 2.4. Is there an elegant way to do this, or is a hack the only thing that we can do. I just did it manually to fix the problem with the binaries. -Travis From robert.kern at gmail.com Wed Nov 16 14:40:16 2005 From: robert.kern at gmail.com (Robert Kern) Date: Wed, 16 Nov 2005 11:40:16 -0800 Subject: [SciPy-dev] Problems with writing files and windows binary of scipy core on 2.4 In-Reply-To: <437B86BB.40904@ee.byu.edu> References: <437B86BB.40904@ee.byu.edu> Message-ID: <437B8B20.4090300@gmail.com> Travis Oliphant wrote: > Well, the old mingw32 - Python 2.4 problem has raised its head again. > > The scipy core binaries I released for Python 2.4 (and probably the full > scipy ones as well), segfault when using the tofile method. 
> > Apparently this is the same problem that existed with old scipy and > Python 2.4 > > The solution apparently is to link multiarraymodule.o against msvcr71.a > when compiling with mingw32 under Python 2.4. Is there an elegant way > to do this, or is a hack the only thing that we can do. I just did it > manually to fix the problem with the binaries. See the "begging for binaries" thread. I think the approach that finally worked was this (having fixed the unfortunate typo I originally wrote): Find the gcc specs file. If gcc.exe is %mingwpath%\bin\gcc.exe, and the version is %mingwversion%, then the specs file should be $mingwpath%\lib\gcc\%mingwversion%\specs . Change "-lmsvcrt" to "-lmsvcr71". Now, edit scipy_distutils/mingw32ccompiler.py at around line 102 or so: """ # no additional libraries needed self.dll_libraries=[] return """ to """ # no additional libraries needed self.dll_libraries=['msvcr71'] return """ -- Robert Kern robert.kern at gmail.com "In the fields of hell where the grass grows high Are the graves of dreams allowed to die." -- Richard Harter From oliphant at ee.byu.edu Wed Nov 16 14:45:19 2005 From: oliphant at ee.byu.edu (Travis Oliphant) Date: Wed, 16 Nov 2005 12:45:19 -0700 Subject: [SciPy-dev] Problems with writing files and windows binary of scipy core on 2.4 In-Reply-To: <437B8B20.4090300@gmail.com> References: <437B86BB.40904@ee.byu.edu> <437B8B20.4090300@gmail.com> Message-ID: <437B8C4F.4080305@ee.byu.edu> Robert Kern wrote: >Travis Oliphant wrote: > > >>Well, the old mingw32 - Python 2.4 problem has raised its head again. >> >>The scipy core binaries I released for Python 2.4 (and probably the full >>scipy ones as well), segfault when using the tofile method. >> >>Apparently this is the same problem that existed with old scipy and >>Python 2.4 >> >>The solution apparently is to link multiarraymodule.o against msvcr71.a >>when compiling with mingw32 under Python 2.4. 
Is there an elegant way >>to do this, or is a hack the only thing that we can do. I just did it >>manually to fix the problem with the binaries. >> >> > >See the "begging for binaries" thread. I think the approach that finally >worked was this (having fixed the unfortunate typo I originally wrote): > >Find the gcc specs file. If gcc.exe is %mingwpath%\bin\gcc.exe, and the >version is %mingwversion%, then the specs file should be >$mingwpath%\lib\gcc\%mingwversion%\specs . Change "-lmsvcrt" to >"-lmsvcr71". > >Now, edit scipy_distutils/mingw32ccompiler.py at around line 102 or so: > >""" > # no additional libraries needed > self.dll_libraries=[] > return >""" > >to > >""" > # no additional libraries needed > self.dll_libraries=['msvcr71'] > return >""" > > > The only problem is that on Python2.3 you should not link against msvcr71, otherwise you get segfaults. It all depends what Python was linked against I suppose. I can see how to fix this for the mingw32ccompiler.py file, but what about the gcc specs file -- will it matter? -Travis From arnd.baecker at web.de Wed Nov 16 15:02:25 2005 From: arnd.baecker at web.de (Arnd Baecker) Date: Wed, 16 Nov 2005 21:02:25 +0100 (CET) Subject: [SciPy-dev] tests for scipy.integrate In-Reply-To: <437B58A4.4090307@mecha.uni-stuttgart.de> References: <437AE6A7.8010208@mecha.uni-stuttgart.de> <437AEE74.9010403@ee.byu.edu> <437B18B7.8060201@mecha.uni-stuttgart.de> <437B500B.70103@ftw.at> <437B58A4.4090307@mecha.uni-stuttgart.de> Message-ID: On Wed, 16 Nov 2005, Nils Wagner wrote: [...] > I am not familiar with writing tests. However I have attached a small > example, Hi Nils, I also have no experience in writing unittests, but it seems pretty straightforward. For example looking at scipy/Lib/linalg/tests/test_basic.py suggests that one has to a) create a subdirectory called `tests` b) add a test file `test_.py`, e.g. 
`test_simple.py` which contains

#################################################
import scipy.base as Numeric
from scipy.base import arange, add, array, dot, zeros, identity
import sys
from scipy.test.testing import *
set_package_path()
# adapt the following ones accordingly:
from linalg import solve,inv,det,lstsq, toeplitz, hankel, tri, triu, tril
from linalg import pinv, pinv2, solve_banded
restore_path()
import unittest

class test_routine1(ScipyTestCase):
    """ Collection of tests for routine1"""

    def check_trivial(self):
        """do some test giving a scalar"""
        result=
        known_result=
        assert_almost_equal(result,known_result)

    def check_simple(self):
        """do some test giving an array"""
        result=
        known_result=
        assert_array_almost_equal(result,known_result)

class test_routine2(ScipyTestCase):
    """ Collection of tests for routine2"""

    def check_trivial(self):
        """do some test giving a scalar"""
        result=
        known_result=
        assert_almost_equal(result,known_result)

    def check_simple(self):
        """do some test giving an array"""
        result=
        known_result=
        assert_array_almost_equal(result,known_result)

    def check_extensive(self,level=10):
        """do some more heavy test giving an array"""
        result=
        known_result=
        assert_array_almost_equal(result,known_result)

if __name__ == "__main__":
    ScipyTest().run()
####################

Note that the above is untested - good luck ;-) For other assert_xxx see scipy.test.testing. Best, Arnd

From arnd.baecker at web.de Wed Nov 16 15:04:54 2005 From: arnd.baecker at web.de (Arnd Baecker) Date: Wed, 16 Nov 2005 21:04:54 +0100 (CET) Subject: [SciPy-dev] how to help scipy development? Message-ID: Hi, Nils' question on how to write unit tests for odeint raises an important point: How to encourage contributions to scipy? My impression is that somehow the gap between "end-users" and developers should be made smaller.
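[Editorially inserted sketch.] Arnd's template targets the 2005-era scipy test framework (ScipyTestCase, set_package_path). For comparison, here is a minimal, runnable version of the same idea — compare a crude integrator against a closed-form solution — written against plain unittest plus numpy.testing; the class name, tolerances, and forward-Euler stand-in are illustrative choices, not anything from the original attachment:

```python
import math
import unittest
from numpy.testing import assert_allclose


class TestExpDecay(unittest.TestCase):
    """Compare a crude integrator against a closed-form solution."""

    def test_closed_form(self):
        # y' = -y, y(0) = 1 has the exact solution exp(-t); a small
        # forward-Euler step should match it to ~0.1% at t = 1.
        t, y, h = 0.0, 1.0, 1e-4
        for _ in range(10_000):
            y += h * (-y)
            t += h
        assert_allclose(y, math.exp(-1.0), rtol=1e-3)


suite = unittest.defaultTestLoader.loadTestsFromTestCase(TestExpDecay)
result = unittest.TextTestRunner(verbosity=0).run(suite)
assert result.wasSuccessful()
```

Dropping such a file into a package's tests/ directory and running it with `python -m unittest` is the modern equivalent of the ScipyTest().run() hook above.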
For this it might be helpful to have (detailed) instructions on:

- which information to provide for bug reports (a script which generates all the info would be great: scipy.__buginfo(fname="bug.txt") A lot of this is already available via from f2py2e import diagnose diagnose.run() )
- how to provide useful information when segfaults occur (invocation of gdb ...)
- how to add documentation to routines and how to submit these
- how to write unittests
- how to add additional wrappers to lapack (etc.)
- how to provide patches
- how to add new tools to scipy (see core/scipy/doc/DISTUTILS.txt and scipy/DEVELOPERS.txt)

Some of these points might seem trivial to expert developers, but can be a significant barrier for newcomers. In order to not completely bankrupt my opinion/suggestion budget ;-) I have added some remarks in the other thread on how one could set up unittests (hope I got that right ...). Best, Arnd

From jh at oobleck.astro.cornell.edu Wed Nov 16 15:09:37 2005 From: jh at oobleck.astro.cornell.edu (Joe Harrington) Date: Wed, 16 Nov 2005 15:09:37 -0500 Subject: [SciPy-dev] the public face of scipy.org (was Some Feedback) In-Reply-To: (scipy-dev-request@scipy.net) References: Message-ID: <200511162009.jAGK9bPE019065@oobleck.astro.cornell.edu> I just did some tests on my own plone site (http://oobleck.astro.cornell.edu/astro). I get very different behavior from that on scipy.org. First, I don't have a growing binary. The size stays perfectly constant indefinitely. Second, it takes like 2 seconds for an edit to occur on the main pages (I don't have zwiki installed, so I couldn't do a perfect test). I suggest that the memory and speed problems may be fixed in later versions of plone than that on scipy.org. That said, my load is low and, since this is still a test site, traffic is basically nonexistent (though I'm a little concerned that the URL above will make the site "public" before it's ready!)
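[Editorially inserted sketch.] The first item on Arnd's list is easy to prototype; `buginfo` below is a hypothetical stand-in for the proposed scipy.__buginfo helper, using only the standard library plus an optional numpy import:

```python
import platform
import sys


def buginfo():
    """Gather the basic environment details a bug report needs.
    (Illustrative stand-in for the proposed scipy.__buginfo helper.)"""
    info = {
        "python": sys.version.split()[0],
        "executable": sys.executable,
        "platform": platform.platform(),
    }
    try:
        import numpy
        info["numpy"] = numpy.__version__
    except ImportError:
        info["numpy"] = "not installed"
    return info


for key, value in sorted(buginfo().items()):
    print(f"{key}: {value}")
```

Pasting this report into a bug mail covers most of what f2py's diagnose.run() collects for the interpreter side.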
Here's what I'm running: Plone-2.0.5 Zope-2.7.3-0 I've suggested to Joe C. that he might look into this (Joe, what versions are you running?), though it may not be worth it if a switch is to be made anyway. I don't know how painful the plone upgrade process is, but certainly no more painful than rebuilding the entire site, which is what we contemplate anyway with a switch to another system. My point is mainly that a switch for these reasons alone isn't indicated. We might still switch for other reasons. --jh-- From robert.kern at gmail.com Wed Nov 16 15:18:18 2005 From: robert.kern at gmail.com (Robert Kern) Date: Wed, 16 Nov 2005 12:18:18 -0800 Subject: [SciPy-dev] the public face of scipy.org (was Some Feedback) In-Reply-To: <200511162009.jAGK9bPE019065@oobleck.astro.cornell.edu> References: <200511162009.jAGK9bPE019065@oobleck.astro.cornell.edu> Message-ID: <437B940A.1000909@gmail.com> Joe Harrington wrote: > I don't know how painful the plone upgrade process is, but certainly > no more painful than rebuilding the entire site, which is what we > contemplate anyway with a switch to another system. > > My point is mainly that a switch for these reasons alone isn't > indicated. We might still switch for other reasons. Upgrading to a newer version of Plone and administering it properly *is* more effort than switching to Trac or Trac+Moin. Currently, the www.scipy.org site is the odd one out of all the sites that Joe needs to administer. -- Robert Kern robert.kern at gmail.com "In the fields of hell where the grass grows high Are the graves of dreams allowed to die." 
-- Richard Harter From robert.kern at gmail.com Wed Nov 16 15:44:12 2005 From: robert.kern at gmail.com (Robert Kern) Date: Wed, 16 Nov 2005 12:44:12 -0800 Subject: [SciPy-dev] Problems with writing files and windows binary of scipy core on 2.4 In-Reply-To: <437B8C4F.4080305@ee.byu.edu> References: <437B86BB.40904@ee.byu.edu> <437B8B20.4090300@gmail.com> <437B8C4F.4080305@ee.byu.edu> Message-ID: <437B9A1C.80907@gmail.com> Travis Oliphant wrote: > The only problem is that on Python2.3 you should not link against > msvcr71, otherwise you get segfaults. It all depends what Python was > linked against I suppose. I can see how to fix this for the > mingw32ccompiler.py file, but what about the gcc specs file -- will it > matter? WRT mingw32ccompily.py, I think we should simply avoid overriding self.dll_libraries at all. I believe that distutils' defaults are set appropriately in 2.3 and 2.4 if the specs file is correct. Fixing the specs file seemed to be instrumental in the "begging for binaries" thread. One might be able to futz around with gcc command line options to override that part of the specs file, but I haven't delved through that part of the gcc manual in a looong time. -- Robert Kern robert.kern at gmail.com "In the fields of hell where the grass grows high Are the graves of dreams allowed to die." -- Richard Harter From nwagner at mecha.uni-stuttgart.de Wed Nov 16 15:45:51 2005 From: nwagner at mecha.uni-stuttgart.de (Nils Wagner) Date: Wed, 16 Nov 2005 21:45:51 +0100 Subject: [SciPy-dev] segmentation fault 0.7.0.1496 Message-ID: Test of other odd features ... ok Test of pickling ... ok Test of put Program received signal SIGSEGV, Segmentation fault. 
[Switching to Thread 1076175008 (LWP 26716)] 0x403d17f9 in PyArray_MapIterNew (indexobj=0x45bc836c) at arrayobject.c:6648 6648 if (PyString_Check(obj) || PyUnicode_Check(obj)) { (gdb) bt #0 0x403d17f9 in PyArray_MapIterNew (indexobj=0x45bc836c) at arrayobject.c:6648 #1 0x403d3735 in array_ass_sub (self=0x85970a8, index=0x45bc836c, op=0x81ec9f0) at arrayobject.c:1803 #2 0x0805d74e in PyObject_SetItem (o=0x85970a8, key=0x45bc836c, value=0x81ec9f0) at abstract.c:123 #3 0x080c41fd in PyEval_EvalFrame (f=0x861910c) at ceval.c:1474 #4 0x080c8bb4 in PyEval_EvalCodeEx (co=0x41eb7820, globals=0x40454acc, locals=0x0, args=0x45ade330, argcount=3, kws=0x0, kwcount=0, defs=0x0, defcount=0, closure=0x0) at ceval.c:2736 #5 0x0811dda2 in function_call (func=0x41ec84fc, arg=0x45ade324, kw=0x0) at funcobject.c:548 #6 0x0805935e in PyObject_Call (func=0x41ec84fc, arg=0x45ade324, kw=0x0) at abstract.c:1756 #7 0x08064bd4 in instancemethod_call (func=0x813b700, arg=0x45ade324, kw=0x0) at classobject.c:2447 #8 0x0805935e in PyObject_Call (func=0x45ab77ac, arg=0x4593e48c, kw=0x0) at abstract.c:1756 #9 0x080989e0 in call_method (o=0x45c0b1cc, name=0x81255aa "__setitem__", nameobj=0x815d784, format=0x81287ef "(OO)") at typeobject.c:923 #10 0x08098ffc in slot_mp_ass_subscript (self=0x45c0b1cc, key=0x45bc836c, value=0x45aba66c) at typeobject.c:4233 #11 0x0805d74e in PyObject_SetItem (o=0x45c0b1cc, key=0x45bc836c, value=0x45aba66c) at abstract.c:123 From jonathan.taylor at utoronto.ca Wed Nov 16 15:57:03 2005 From: jonathan.taylor at utoronto.ca (Jonathan Taylor) Date: Wed, 16 Nov 2005 20:57:03 +0000 Subject: [SciPy-dev] the public face of scipy.org (was Some Feedback) In-Reply-To: <437B940A.1000909@gmail.com> Message-ID: I'd also suggest that there is now a familiarity with trac amongst the python community. People feel comfortable with it since it is so widely disseminated. As well, being able to see the current checkins in the timeline will show off the projects activity. 
I hope these features will potentially attract both more developers and more wiki contributors. Jon. On 11/16/2005, "Robert Kern" wrote: >Joe Harrington wrote: > >> I don't know how painful the plone upgrade process is, but certainly >> no more painful than rebuilding the entire site, which is what we >> contemplate anyway with a switch to another system. >> >> My point is mainly that a switch for these reasons alone isn't >> indicated. We might still switch for other reasons. > >Upgrading to a newer version of Plone and administering it properly *is* >more effort than switching to Trac or Trac+Moin. Currently, the >www.scipy.org site is the odd one out of all the sites that Joe needs to >administer. > >-- >Robert Kern >robert.kern at gmail.com > >"In the fields of hell where the grass grows high > Are the graves of dreams allowed to die." > -- Richard Harter > >_______________________________________________ >Scipy-dev mailing list >Scipy-dev at scipy.net >http://www.scipy.net/mailman/listinfo/scipy-dev > From oliphant at ee.byu.edu Wed Nov 16 16:18:32 2005 From: oliphant at ee.byu.edu (Travis Oliphant) Date: Wed, 16 Nov 2005 14:18:32 -0700 Subject: [SciPy-dev] Fwd: [SciPy-user] MemoryError in scipy_core In-Reply-To: <723eb6930511160616j109fe1cfqb6c21fc165964b33@mail.gmail.com> References: <723eb6930511040612j68b9e8a4m1b792198ee785c04@mail.gmail.com> <723eb6930511151258i3ae3e9f4ldeefc5ff369c4c@mail.gmail.com> <437A747C.3040807@ee.byu.edu> <723eb6930511151605w3dd6989eg838e0bd17e239b41@mail.gmail.com> <437A907B.6010406@ee.byu.edu> <723eb6930511151839n2d6f75c4o4dc52dd6e20169c5@mail.gmail.com> <437AB77E.6070100@ee.byu.edu> <723eb6930511152103s3e92e8fdh70abbabeeb9f2dc6@mail.gmail.com> <437AC669.8090504@ee.byu.edu> <723eb6930511160526h44034b27w31ee0b82b8eeb8b9@mail.gmail.com> <723eb6930511160616j109fe1cfqb6c21fc165964b33@mail.gmail.com> Message-ID: <437BA228.40205@ee.byu.edu> Chris Fonnesbeck wrote: >>On 11/16/05, Travis Oliphant wrote: >> >> >>>>Good news: the leak 
appears to be fixed. cbm.py is chugging along at >>>>constant memory usage (pre-burn-in, of course). Only took 41 messages >>>>to sort it out! >>>> >>>> >>>> >>>> >>>Fantastic. Your code is actually a pretty good test because apparently >>>you are using a lot of scalars. The performance of the array scalars >>>will be improved as time goes on. I'd be interested in hearing how your >>>code performs relative to the Numeric version because you have a lot of >>>small arrays in your code base --- and a lot of scalars. >>> >>> >>> > >Travis, > >I have profiled the Numeric-based and scipy-based versions of my code. >The Numeric version is just under twice as fast. The slowest bits >appear to be in the numeric.py and oldnumeric.py modules, although the >f2py stuff appears a little slower also. Here are the two profiles: > > > This seems about right given your heavy use of small arrays. While in C, the scipy core array creation is a bit more complicated, the indexing is more complicated, and the ufuncs require an attribute look up. These three things I suspect account for most of the slowdown although I'd be thrilled to find other low-hanging fruit optimizations. There are still optimizations possible (for example defining math on the array scalars rather than going through ufuncs would probably be a big gain). The other optimization possible is to consider a different way to get the special computation flags rather than an attribute look up in the locals, globals, and builtin dictionaries. Or, perhaps allow someway to bypass the standard lookup approach and fix the behavior globally. Profiling can help in the optimization process. 
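A minimal sketch of the kind of profiling pass Travis suggests, using the standard-library profiler. The workload here is a made-up stand-in for "many calls on lots of small arrays", not code from this thread:

```python
import cProfile
import io
import pstats

def asarray_like(x):
    # stand-in for a small, frequently called helper such as asarray
    return list(x)

def workload():
    # many calls on small sequences, mimicking the profiles below
    total = 0
    for _ in range(1000):
        total += sum(asarray_like(range(10)))
    return total

profiler = cProfile.Profile()
profiler.enable()
result = workload()
profiler.disable()

# rank by cumulative time to spot hotspots like asarray in the output below
buf = io.StringIO()
pstats.Stats(profiler, stream=buf).sort_stats("cumulative").print_stats(5)
print(result)  # 1000 iterations * sum(range(10)) == 45000
```

Sorting by cumulative time is what makes a cheap-but-ubiquitous helper stand out, which is exactly the asarray pattern discussed here.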
-Travis Original profiles for scipy developers: >scipy code: > > 1399359 function calls in 58.010 CPU seconds > > Ordered by: standard name > > ncalls tottime percall cumtime percall filename:lineno(function) > 57005 0.400 0.000 0.400 0.000 :0(append) > 55 0.350 0.006 0.350 0.006 :0(array) > 60008 1.610 0.000 1.610 0.000 :0(concatenate) > 8000 0.060 0.000 0.060 0.000 :0(copy) > 15 0.000 0.000 0.000 0.000 :0(isinstance) > 165103 1.350 0.000 1.350 0.000 :0(len) > 15000 0.220 0.000 0.220 0.000 :0(normal) > 7544 0.100 0.000 0.100 0.000 :0(random_sample) > 3 0.000 0.000 0.000 0.000 :0(range) > 65010 0.470 0.000 0.470 0.000 :0(ravel) > 60023 2.840 0.000 2.840 0.000 :0(reduce) > 80010 0.720 0.000 0.720 0.000 :0(reshape) > 7238 0.100 0.000 0.100 0.000 :0(round) > 1 0.000 0.000 0.000 0.000 :0(setprofile) > 9 0.020 0.002 0.020 0.002 :0(sort) > 60015 1.370 0.000 1.370 0.000 :0(sum) > 51 0.000 0.000 0.000 0.000 :0(time) > 3 0.000 0.000 0.000 0.000 :0(transpose) > 10020 0.220 0.000 0.220 0.000 :0(values) > 15006 0.440 0.000 0.440 0.000 :0(zip) > 1 0.000 0.000 58.010 58.010 :1(?) 
> 30003 3.410 0.000 28.380 0.001 MCMC.py:1176(poisson_like) > 2 0.000 0.000 0.000 0.000 MCMC.py:1567(parameter) > 1 0.000 0.000 0.000 0.000 MCMC.py:1588(node) > 1 0.010 0.010 1.680 1.680 MCMC.py:1593(summary) > 1 0.000 0.000 0.050 0.050 MCMC.py:1989(calculate_dic) > 1 0.000 0.000 0.000 0.000 MCMC.py:2055(__init__) > 10000 1.750 0.000 35.700 0.004 MCMC.py:2061(test) > 1 0.860 0.860 58.010 58.010 MCMC.py:2086(sample) > 1 0.000 0.000 0.000 0.000 MCMC.py:2197(__init__) > 15002 2.380 0.000 49.710 0.003 MCMC.py:2217(calculate_likelihood) > 1 0.000 0.000 0.000 0.000 MCMC.py:279(make_indices) > 3 0.000 0.000 0.000 0.000 MCMC.py:335(__init__) > 36009 0.280 0.000 0.280 0.000 MCMC.py:352(get_value) > 8739 0.090 0.000 0.090 0.000 MCMC.py:357(set_value) > 12000 0.640 0.000 0.920 0.000 MCMC.py:362(tally) > 39 0.090 0.002 0.430 0.011 MCMC.py:384(get_trace) > 4 0.000 0.000 0.050 0.012 MCMC.py:396(quantiles) > 15 0.000 0.000 0.190 0.013 MCMC.py:444(mean) > 8 1.170 0.146 1.370 0.171 MCMC.py:452(var) > 4 0.000 0.000 0.710 0.178 MCMC.py:462(mcerror) > 4 0.020 0.005 0.070 0.017 MCMC.py:470(hpd) > 15000 0.210 0.000 0.430 0.000 MCMC.py:557(normal_deviate) > 2 0.000 0.000 0.000 0.000 MCMC.py:592(__init__) > 10000 0.710 0.000 1.700 0.000 MCMC.py:628(sample_candidate) > 10000 0.710 0.000 38.470 0.004 MCMC.py:645(propose) > 18 0.000 0.000 0.000 0.000 MCMC.py:676(tune) > 1 0.000 0.000 0.000 0.000 MCMC.py:737(__init__) > 7238 0.110 0.000 0.210 0.000 MCMC.py:756(set_value) > 1 0.000 0.000 0.000 0.000 MCMC.py:775(__init__) > 15002 3.550 0.000 18.850 0.001 MCMC.py:820(uniform_like) > 1 0.000 0.000 0.000 0.000 Matplot.py:36(__init__) > 15 0.000 0.000 0.010 0.001 function_base.py:127(average) > 280046 14.090 0.000 14.090 0.000 numeric.py:67(asarray) > 65008 1.670 0.000 3.120 0.000 oldnumeric.py:131(reshape) > 3 0.000 0.000 0.000 0.000 oldnumeric.py:179(transpose) > 9 0.000 0.000 0.020 0.002 oldnumeric.py:187(sort) > 60008 10.490 0.000 21.470 0.000 oldnumeric.py:220(resize) > 65010 1.500 0.000 3.300 
0.000 oldnumeric.py:258(ravel) > 45008 0.600 0.000 1.270 0.000 oldnumeric.py:270(shape) > 60015 1.380 0.000 3.510 0.000 oldnumeric.py:289(sum) > 9 0.000 0.000 0.000 0.000 oldnumeric.py:348(rank) > 0 0.000 0.000 profile:0(profiler) > 1 0.000 0.000 58.010 58.010 >profile:0(sampler=DisasterSampler(); >sampler.sample(iterations=5000,burn=1000,verbose=False,plot=False)) > 45005 2.020 0.000 13.600 0.000 shape_base.py:115(atleast_1d) > >Numeric code: > > 1236421 function calls in 34.990 CPU seconds > > Ordered by: standard name > > ncalls tottime percall cumtime percall filename:lineno(function) > 12000 0.060 0.000 0.060 0.000 :0(append) > 22787 0.270 0.000 0.270 0.000 :0(apply) > 45062 4.630 0.000 4.630 0.000 :0(array) > 60009 1.050 0.000 1.050 0.000 :0(concatenate) > 4000 0.010 0.000 0.010 0.000 :0(copy) > 22802 0.260 0.000 0.260 0.000 :0(isinstance) > 277933 2.460 0.000 2.460 0.000 :0(len) > 3 0.000 0.000 0.000 0.000 :0(range) > 105038 1.930 0.000 1.930 0.000 :0(reduce) > 130020 3.320 0.000 3.320 0.000 :0(reshape) > 7440 0.100 0.000 0.100 0.000 :0(round) > 1 0.000 0.000 0.000 0.000 :0(setprofile) > 9 0.010 0.001 0.010 0.001 :0(sort) > 51 0.000 0.000 0.000 0.000 :0(time) > 3 0.000 0.000 0.000 0.000 :0(transpose) > 10011 0.180 0.000 0.180 0.000 :0(values) > 15006 0.200 0.000 0.200 0.000 :0(zip) > 1 0.000 0.000 34.990 34.990 :1(?) 
> 30004 2.850 0.000 15.410 0.001 MCMC.py:1179(poisson_like) > 2 0.000 0.000 0.000 0.000 MCMC.py:1579(parameter) > 1 0.000 0.000 0.000 0.000 MCMC.py:1600(node) > 1 0.000 0.000 0.660 0.660 MCMC.py:1605(summary) > 1 0.000 0.000 0.020 0.020 MCMC.py:2001(calculate_dic) > 1 0.000 0.000 0.000 0.000 MCMC.py:2067(__init__) > 10000 0.650 0.000 19.840 0.002 MCMC.py:2073(test) > 1 0.590 0.590 34.990 34.990 MCMC.py:2098(sample) > 1 0.000 0.000 0.000 0.000 MCMC.py:2210(__init__) > 15002 1.380 0.000 27.670 0.002 MCMC.py:2230(calculate_likelihood) > 1 0.000 0.000 0.000 0.000 MCMC.py:271(make_indices) > 3 0.000 0.000 0.000 0.000 MCMC.py:327(__init__) > 40000 0.360 0.000 0.360 0.000 MCMC.py:344(get_value) > 9014 0.060 0.000 0.060 0.000 MCMC.py:349(set_value) > 12000 0.640 0.000 0.870 0.000 MCMC.py:354(tally) > 39 0.080 0.002 0.380 0.010 MCMC.py:376(get_trace) > 4 0.000 0.000 0.050 0.012 MCMC.py:388(quantiles) > 15 0.000 0.000 0.140 0.009 MCMC.py:436(mean) > 8 0.200 0.025 0.430 0.054 MCMC.py:444(var) > 4 0.010 0.003 0.280 0.070 MCMC.py:454(mcerror) > 4 0.000 0.000 0.030 0.008 MCMC.py:462(hpd) > 15000 0.250 0.000 1.710 0.000 MCMC.py:544(normal_deviate) > 2 0.000 0.000 0.000 0.000 MCMC.py:579(__init__) > 10000 0.560 0.000 2.790 0.000 MCMC.py:622(sample_candidate) > 10000 0.560 0.000 23.590 0.002 MCMC.py:639(propose) > 18 0.010 0.001 0.010 0.001 MCMC.py:670(tune) > 1 0.000 0.000 0.000 0.000 MCMC.py:731(__init__) > 7440 0.150 0.000 0.250 0.000 MCMC.py:757(set_value) > 1 0.000 0.000 0.000 0.000 MCMC.py:776(__init__) > 15002 2.090 0.000 10.770 0.001 MCMC.py:821(uniform_like) > 1 0.000 0.000 0.000 0.000 Matplot.py:36(__init__) > 60009 1.210 0.000 2.260 0.000 Numeric.py:231(concatenate) > 3 0.000 0.000 0.000 0.000 Numeric.py:243(transpose) > 9 0.000 0.000 0.010 0.001 Numeric.py:252(sort) > 60009 4.260 0.000 13.350 0.000 Numeric.py:415(resize) > 65011 0.830 0.000 2.840 0.000 Numeric.py:583(ravel) > 60016 2.030 0.000 3.020 0.000 Numeric.py:634(sum) > 9 0.000 0.000 0.000 0.000 
Numeric.py:738(rank) > 45018 0.400 0.000 0.400 0.000 Numeric.py:744(shape) > 15 0.000 0.000 0.000 0.000 Numeric.py:762(average) > 22787 0.760 0.000 1.490 0.000 >RandomArray.py:37(_build_random_array) > 7787 0.120 0.000 0.610 0.000 RandomArray.py:54(random) > 15000 0.270 0.000 1.270 0.000 RandomArray.py:87(standard_normal) > 15000 0.190 0.000 1.460 0.000 RandomArray.py:93(normal) > 0 0.000 0.000 profile:0(profiler) > 1 0.000 0.000 34.990 34.990 >profile:0(sampler=DisasterSampler(); >sampler.sample(iterations=5000,burn=1000,verbose=False,plot=False)) > > >-- >Chris Fonnesbeck >Atlanta, GA > > From pearu at scipy.org Wed Nov 16 15:21:33 2005 From: pearu at scipy.org (Pearu Peterson) Date: Wed, 16 Nov 2005 14:21:33 -0600 (CST) Subject: [SciPy-dev] how to help scipy development? In-Reply-To: References: Message-ID: On Wed, 16 Nov 2005, Arnd Baecker wrote: > Nils' question on how to write unit tests for odeint > raises an important point: > How to encourage contributions to scipy? > > My impression is that somehow the gap between "end-users" > and developers should be made smaller. > For this it might be helpful to have (detailed) > instructions on: > - which information to provide for bug reports > (a script which generates all the info would be great: > scipy.__buginfo(fname="bug.txt") > A lot of this is already available via > from f2py2e import diagnose > diagnose.run() > ) > - how to provide useful information when segfaults occur > (invocation of gdb ...) > - how to add documentation to routines and how to submit these > - how to write unittests > - how to add additional wrappers to lapack (etc.) > - how to provide patches > - how to add new tools to scipy > (see core/scipy/doc/DISTUTILS.txt > and scipy/DEVELOPERS.txt) > > Some of these points might seem trivial to expert developers, > but can be a significant barrier for newcomers. I can write docs for most of the above howtos. In fact, asking such well-defined questions is a very good start to write docs. 
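The info-gathering script Arnd proposes could start as small as this. Entirely illustrative: the buginfo name and the fields collected are assumptions, not an existing scipy API:

```python
import platform
import sys

def buginfo():
    # hypothetical helper in the spirit of the proposed scipy.__buginfo():
    # gather environment details worth pasting into a bug report
    return {
        "python": sys.version.split()[0],
        "platform": platform.platform(),
        "executable": sys.executable,
    }

for key, value in sorted(buginfo().items()):
    print("%s: %s" % (key, value))
```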
However, the open question is still where to put these howtos so that users will find them easily and also so that it would be easy to maintain them. core/scipy/doc/ would be one place to put rest-docs. > In order to not completely bankrupt my opinion/suggestion budget ;-) > I have added some remarks in the other thread on how one could > set up unittests (hope I got that right ...). Thanks! I have few notes on your remarks though, I'll make them in the other thread. Pearu From oliphant at ee.byu.edu Wed Nov 16 16:25:41 2005 From: oliphant at ee.byu.edu (Travis Oliphant) Date: Wed, 16 Nov 2005 14:25:41 -0700 Subject: [SciPy-dev] Fwd: [SciPy-user] MemoryError in scipy_core In-Reply-To: <723eb6930511160616j109fe1cfqb6c21fc165964b33@mail.gmail.com> References: <723eb6930511040612j68b9e8a4m1b792198ee785c04@mail.gmail.com> <723eb6930511151258i3ae3e9f4ldeefc5ff369c4c@mail.gmail.com> <437A747C.3040807@ee.byu.edu> <723eb6930511151605w3dd6989eg838e0bd17e239b41@mail.gmail.com> <437A907B.6010406@ee.byu.edu> <723eb6930511151839n2d6f75c4o4dc52dd6e20169c5@mail.gmail.com> <437AB77E.6070100@ee.byu.edu> <723eb6930511152103s3e92e8fdh70abbabeeb9f2dc6@mail.gmail.com> <437AC669.8090504@ee.byu.edu> <723eb6930511160526h44034b27w31ee0b82b8eeb8b9@mail.gmail.com> <723eb6930511160616j109fe1cfqb6c21fc165964b33@mail.gmail.com> Message-ID: <437BA3D5.9040603@ee.byu.edu> Chris Fonnesbeck wrote: >>On 11/16/05, Travis Oliphant wrote: >> >> >>>>Good news: the leak appears to be fixed. cbm.py is chugging along at >>>>constant memory usage (pre-burn-in, of course). Only took 41 messages >>>>to sort it out! >>>> >>>> >>>> >>>> >>>Fantastic. Your code is actually a pretty good test because apparently >>>you are using a lot of scalars. The performance of the array scalars >>>will be improved as time goes on. I'd be interested in hearing how your >>>code performs relative to the Numeric version because you have a lot of >>>small arrays in your code base --- and a lot of scalars. 
>>> >>> >>> > >Travis, > >I have profiled the Numeric-based and scipy-based versions of my code. >The Numeric version is just under twice as fast. The slowest bits >appear to be in the numeric.py and oldnumeric.py modules, although the >f2py stuff appears a little slower also. Here are the two profiles: > > > One thing that can help immediately is that everything calling oldnumeric in your code should be replaced with the appropriate method if possible. Your code is spending quite a bit of time in asarray which is simply how the old function calls are implemented. Replacing these calls for objects you know are already arrays with the appropriate method can probably eliminate most of these asarray calls and therefore speed things up. If you don't know whether the object is an array or not, then you, of course, still need to use the function. The other thing to also consider is that resize should be used to actually change the number of elements in the array. If all you are doing is reshaping the array but keeping the same number of elements, then reshape is more appropriate. -Travis From pearu at scipy.org Wed Nov 16 15:37:05 2005 From: pearu at scipy.org (Pearu Peterson) Date: Wed, 16 Nov 2005 14:37:05 -0600 (CST) Subject: [SciPy-dev] tests for scipy.integrate In-Reply-To: References: <437AE6A7.8010208@mecha.uni-stuttgart.de> <437AEE74.9010403@ee.byu.edu> <437B18B7.8060201@mecha.uni-stuttgart.de> <437B500B.70103@ftw.at> Message-ID: On Wed, 16 Nov 2005, Arnd Baecker wrote: >> I am not familiar with writing tests. However I have attached a small >> example, > > I also have no experience in writing unittests, but it > seems pretty straightforward. > For example looking at scipy/Lib/linalg/tests/test_basic.py > suggests that one has to > a) create a subdirectory called `tests` > b) add a test file `test_.py`, e.g. `test_simple.py` > which contains Right, the existing unittests are all good starters for learning how to write tests. 
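A complete test module in that pattern might look roughly like this. A sketch only: the class name and the closed-form check (trapezoid rule on a known integral, in the spirit of the odeint test discussed above) are invented for illustration:

```python
import unittest

class TestSimple(unittest.TestCase):
    # compare a numerical result against a known exact answer,
    # with an explicit tolerance, as discussed for odeint

    def test_trapezoid_linear(self):
        # trapezoid rule is exact for f(x) = x on [0, 1]; integral is 0.5
        xs = [i / 100.0 for i in range(101)]
        est = sum((xs[i] + xs[i + 1]) / 2.0 * 0.01 for i in range(100))
        self.assertTrue(abs(est - 0.5) < 1e-6)

if __name__ == "__main__":
    suite = unittest.TestLoader().loadTestsFromTestCase(TestSimple)
    unittest.TextTestRunner(verbosity=2).run(suite)
```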
> #################################################
> import scipy.base as Numeric

Hey, you can forget Numeric now:) The rest of the example is fine.

Thanks,
Pearu

From cookedm at physics.mcmaster.ca Wed Nov 16 17:04:56 2005
From: cookedm at physics.mcmaster.ca (David M. Cooke)
Date: Wed, 16 Nov 2005 17:04:56 -0500
Subject: [SciPy-dev] tests for scipy.integrate
In-Reply-To: (Arnd Baecker's message of "Wed, 16 Nov 2005 21:02:25 +0100 (CET)")
References: <437AE6A7.8010208@mecha.uni-stuttgart.de> <437AEE74.9010403@ee.byu.edu> <437B18B7.8060201@mecha.uni-stuttgart.de> <437B500B.70103@ftw.at> <437B58A4.4090307@mecha.uni-stuttgart.de>
Message-ID: 

Arnd Baecker writes:
> On Wed, 16 Nov 2005, Nils Wagner wrote:
>
> [...]
>
>> I am not familiar with writing tests. However I have attached a small
>> example,
>
> Hi Nils,
>
> I also have no experience in writing unittests, but it
> seems pretty straightforward.
> For example looking at scipy/Lib/linalg/tests/test_basic.py
> suggests that one has to
> a) create a subdirectory called `tests`
> b) add a test file `test_.py`, e.g. `test_simple.py`
> which contains

You can also do doctests now. Here's a snippet from test_polynomial.py:

"""
>>> import scipy.base as nx
>>> from scipy.base.polynomial import poly1d
>>> p = poly1d([1.,2,3])
>>> p
poly1d([ 1.,  2.,  3.])
>>> print p
   2
1 x + 2 x + 3
>>> q = poly1d([3.,2,1])
>>> q
poly1d([ 3.,  2.,  1.])
>>> print q
   2
3 x + 2 x + 1
"""
from scipy.test.testing import *
import doctest
def test_suite(level=1):
    return doctest.DocTestSuite()

if __name__ == "__main__":
    ScipyTest().run()

-- 
|>|\/|<
/--------------------------------------------------------------------------\
|David M.
Cooke http://arbutus.physics.mcmaster.ca/dmc/
|cookedm at physics.mcmaster.ca

From oliphant at ee.byu.edu Wed Nov 16 17:07:25 2005
From: oliphant at ee.byu.edu (Travis Oliphant)
Date: Wed, 16 Nov 2005 15:07:25 -0700
Subject: [SciPy-dev] Building scipy core on windows
In-Reply-To: 
References: <437B0112.5050700@xs4all.nl> <437B0AAC.8010705@ee.byu.edu> <437B2171.4060207@xs4all.nl> <437B9670.1050104@ee.byu.edu>
Message-ID: <437BAD9D.3080409@ee.byu.edu>

Travis Brady wrote:
> Hi,
>
> I'd actually like to try compiling for 2.4 too, is this possible w/o
> Visual Studio (I am using Python 2.4.1 compiled with VS)?
> Maybe with MingW?

Yes, that's what I use. Get MSYS and MinGW. Download a binary version of ATLAS if you want fast linear algebra. If you don't care, then don't worry about that part --- the code should still build.

Then, check out the latest SVN tree (Tortoise SVN is an excellent windows SVN client that makes it easy). The URL is http://svn.scipy.org/svn/scipy_core/trunk

You should be able to go into the directory where you placed the tree and type

python setup.py config --compiler=mingw32 build --compiler=mingw32 install

or

python setup.py config --compiler=mingw32 build --compiler=mingw32 bdist_wininst

to get an installable executable.

Alternatively, to avoid all the --compiler=xxxxx noise you can create (or modify if you already have one) a distutils configuration file for your version of Python. The file name is \Lib\distutils\distutils.cfg and the contents should contain

[build]
compiler = mingw32

[config]
compiler = mingw32

On my system C:\Python24\Lib\distutils\distutils.cfg is where it is located.
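The distutils.cfg shown above is plain INI syntax; a quick way to check that a candidate file parses as intended is to feed it to the standard-library configparser, which reads the same format (a sketch, not part of the build procedure):

```python
import configparser

# the exact contents Travis gives for \Lib\distutils\distutils.cfg
cfg_text = """\
[build]
compiler = mingw32

[config]
compiler = mingw32
"""

parser = configparser.ConfigParser()
parser.read_string(cfg_text)

# both sections hand the same compiler name to distutils
print(parser.get("build", "compiler"))   # mingw32
print(parser.get("config", "compiler"))  # mingw32
```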
-Travis From schofield at ftw.at Wed Nov 16 18:18:58 2005 From: schofield at ftw.at (Ed Schofield) Date: Thu, 17 Nov 2005 00:18:58 +0100 (CET) Subject: [SciPy-dev] tests for scipy.integrate In-Reply-To: <437B58A4.4090307@mecha.uni-stuttgart.de> References: <437AE6A7.8010208@mecha.uni-stuttgart.de> <437AEE74.9010403@ee.byu.edu> <437B18B7.8060201@mecha.uni-stuttgart.de> <437B500B.70103@ftw.at> <437B58A4.4090307@mecha.uni-stuttgart.de> Message-ID: On Wed, 16 Nov 2005, Nils Wagner wrote: > Hi Ed, > > I am not familiar with writing tests. However I have attached a small > example, > where a closed form solution is available. It's a second order system. > > Please let me know if it is useful. Yes, thank you! I've committed it. I chose 1e-6 fairly arbitrarily for the residual (because it was larger than on my machine). Scipy.integrate now has a single test. Keep them coming! :) If you have any more time to construct other tests (and that would be great), you could examine the new file test_integrate.py (or the other test_*.py files) and try modifying it. If you want to set a breakpoint in the file you could try my primitive way: insert import pdb pdb.set_trace() at the relevant place. Then re-build, re-install, and re-run the test. You shouldn't need to remove the build dir to pick up changes in unit tests. If you post a 'diff' of your new test_integrate.py file (or any others) to this mailing list I'm sure we'll snap it up :) -- Ed From kamrik at gmail.com Thu Nov 17 03:39:38 2005 From: kamrik at gmail.com (Mark Koudritsky) Date: Thu, 17 Nov 2005 10:39:38 +0200 Subject: [SciPy-dev] MoinMoin demo site online In-Reply-To: References: <4377F2C7.806@astraw.com> Message-ID: I also like the site. Some thoughts: * ACL config: I think some AdminGroup should be added with admin right. Since, as far as I understand, only people with admin right can put ACLs on pages which didn't have ACL before. 
* It might be useful to give revert right to all known users; this way any effects of vandalism can be eliminated significantly faster (the wiki effect). Reverts are also tracked in the change list, so one can revert a revert. From this point of view reverts are no more dangerous than edits. Are there any potential dangers to revert right that I've missed?
* Minor, but: the page names on the navi_bar should probably begin with capital letters.
* I would also add FAQ and something like "Developers area" or "Development" to the navi_bar.
* Maybe a link like "Contribute" or "Getting involved" can be useful on the navibar to attract more volunteers.
* The skins. I personally like the sidebar themes in Moin because then the text field is narrower and easier to read. Python Wiki http://wiki.python.org/moin/ uses Moin with side bar at the left. Here http://www.weizmann.ac.il/student-wiki/FrontPage is an example of the right side bar theme (one of the themes in default Moin install).

Crossposted here and at http://astraw.com/scipy/moin.fcg/ScipyPublicFaceTest

On 11/14/05, Arnd Baecker wrote:
> On Sun, 13 Nov 2005, Andrew Straw wrote:
>
> > OK, I've set up a demo site:
> >
> > http://astraw.com/scipy
>
> I like it!
>
> Another point which I find very good is that
> restructured text (ReSt, http://docutils.sourceforge.net/rst.html)
> can be used as well.
>
> Best, Arnd
>
> _______________________________________________
> Scipy-dev mailing list
> Scipy-dev at scipy.net
> http://www.scipy.net/mailman/listinfo/scipy-dev
>

From nwagner at mecha.uni-stuttgart.de Thu Nov 17 05:29:29 2005
From: nwagner at mecha.uni-stuttgart.de (Nils Wagner)
Date: Thu, 17 Nov 2005 11:29:29 +0100
Subject: [SciPy-dev] I/O features in scipy
Message-ID: <437C5B89.4070309@mecha.uni-stuttgart.de>

Hi all,

I am curious about the interest of users to have an interface to binary/ascii files generated by NASTRAN or ANSYS (two well-known finite element packages).
Scipy is able to read/write matrices in MatrixMarket format and Matlab files. BTW, which MATLAB versions are currently supported by io.loadmat io.savemat ? There exists some projects which might be useful in this context. NASTRAN: http://savannah.nongnu.org/cgi-bin/viewcvs/tops/tops/usr/extra/op4tools/ ANSYS: http://evgenii.rudnyi.ru/soft/readAnsys/ Any comment would be appreciated. Nils From arnd.baecker at web.de Thu Nov 17 09:49:35 2005 From: arnd.baecker at web.de (Arnd Baecker) Date: Thu, 17 Nov 2005 15:49:35 +0100 (CET) Subject: [SciPy-dev] how to help scipy development? Message-ID: - sorry for answering out of thread, but my bzip2 compressed mailbox with the stored message gives a nice CRC error - Pearu wrote: > However, the open question is still where to put these howtos so that > users will find them easily and also so that it would be easy to > maintain them. core/scipy/doc/ would be one place to put rest-docs. Another alternative would be to put them in a wiki, so that additions can be made easily (hopefully also by others ...). This might be more visible/accessible than having it in core/scipy/doc/? OTOH, texts in core/scipy/doc/ can be accessed even when one is off-line. I am not sure how feasible it is to make a copy of the ReSt from the wiki into core/scipy/doc/ for each release/change/... Wiki display and edit of texts which reside in a svn repository would be one solution - does such thing exist (eg. via trac) ? Best, Arnd From schofield at ftw.at Thu Nov 17 10:34:01 2005 From: schofield at ftw.at (Ed Schofield) Date: Thu, 17 Nov 2005 16:34:01 +0100 Subject: [SciPy-dev] Segfault with newcore In-Reply-To: <437B598D.4020504@ftw.at> References: <437B598D.4020504@ftw.at> Message-ID: <437CA2E9.30408@ftw.at> Ed Schofield wrote: >The following gives a segfault > > > >>>>a = array([1.,2.]) >>>>a['BLAH'] = 1 >>>> >>>> Hi Travis, Well done for fixing this. 
More segfaults from the crazy input department:

Case (a)
>>> a = scipy.array([1,2,3,4])
>>> b = xrange(3)
>>> a[b]
Segmentation fault.

Case (b)
>>> a = scipy.array([1,2,3,4])
>>> b = scipy.sparse.dok_matrix((5,5))
>>> a[b]
Segmentation fault.

Case (c)
>>> a = scipy.array([1,2,3,4])
>>> b = array.array('d',[1,2])
>>> a[b]
Segmentation fault.

Case (c) causes a segfault in a different part of the code to the other two. I'm not sure what we should aim to return in these cases, but we probably want to return

IndexError: index must be either an int or a sequence

for the first two but make case (c) return the same result as a[[1,2]]. I haven't tested a[numarray], a[newnumeric], or a[pre_24.0_numeric], but these might die too...

Another, more trivial, observation:

Case (d)
>>> a = scipy.arange(10)
>>> a[0.9]
0
>>> a[-1.9]
9

So float indices are rounded towards zero. Should we raise the same IndexError here? This would be consistent with the wording of the exception :) It might be safer, at least whenever int(floatindex) != floatindex. Or perhaps there's a use case for this that I'm missing ...

-- Ed

---------------------------
Backtraces:

(a) and (c)
[Switching to Thread -1208559936 (LWP 6393)]
[Switching to Thread -1208822080 (LWP 21200)]
0xb7d24350 in PyArray_MapIterReset (mit=0x83a3df8) at arrayobject.c:6791
6791        copyswap = mit->iters[0]->ao->descr->copyswap;
(gdb) bt
#0  0xb7d64350 in PyArray_MapIterReset (mit=0x8210c60) at arrayobject.c:6791
#1  0xb7d52a13 in PyArray_GetMap (mit=0x8210c60) at arrayobject.c:1587
#2  0xb7d532df in array_subscript (self=0x8206700, op=0xb7f2db90) at arrayobject.c:1722
#3  0xb7d537c3 in array_subscript_nice (self=0x8206700, op=0xb7f2db90) at arrayobject.c:1841
#4  0x080594f4 in PyObject_GetItem (o=0x8206700, key=0xb7f2db90) at Objects/abstract.c:94

(b)
Program received signal SIGSEGV, Segmentation fault.
[Switching to Thread -1208465728 (LWP 21285)] 0x0808e582 in PyType_IsSubtype (a=0x3ff00000, b=0xb7d92820) at Objects/typeobject.c:821 821 if (!(a->tp_flags & Py_TPFLAGS_HAVE_CLASS)) (gdb) bt #0 0x0808e582 in PyType_IsSubtype (a=0x3ff00000, b=0xb7d92820) at Objects/typeobject.c:821 #1 0xb7d776e0 in array_fromobject (op=0x83ade88, typecode=0xbfb97dbc, min_depth=0, max_depth=0, flags=16) at arrayobject.c:5615 #2 0xb7d77b0c in PyArray_FromAny (op=0x83ade88, typecode=0xbfb97dbc, min_depth=0, max_depth=0, requires=16) at arrayobject.c:5772 #3 0xb7d7af4d in _convert_obj (obj=0x83ade88, iter=0xbfb97e04) at arrayobject.c:6697 #4 0xb7d7d02e in PyArray_MapIterNew (indexobj=0xb6214ca0) at arrayobject.c:7207 #5 0xb7d6a29b in array_subscript (self=0x83a45c0, op=0xb6214ca0) at arrayobject.c:1718 #6 0xb7d6a7c3 in array_subscript_nice (self=0x83a45c0, op=0xb6214ca0) at arrayobject.c:1841 From jswhit at fastmail.fm Thu Nov 17 11:59:25 2005 From: jswhit at fastmail.fm (Jeff Whitaker) Date: Thu, 17 Nov 2005 09:59:25 -0700 Subject: [SciPy-dev] scipy 0.4.3 won't build with gcc 4.0.1 (Xcode 2.2) on Macos X Message-ID: <437CB6ED.50105@fastmail.fm> gcc 4.0.1 doesn't like nested functions on MacOS X. 
building 'scipy.interpolate.dfitpack' extension compiling C sources gcc options: '-fno-strict-aliasing -Wno-long-double -no-cpp-precomp -mno-fused-madd -DNDEBUG -g -O3 -Wall -Wstrict-prototypes -I/sw/ include' creating build/temp.darwin-8.3.0-Power_Macintosh-2.4/build/src/Lib/interpolate compile options: '-Ibuild/src -I/sw/lib/python2.4/site-packages/scipy/base/include -I/sw/include/python2.4 -c' gcc: build/src/Lib/interpolate/dfitpackmodule.c build/src/Lib/interpolate/dfitpackmodule.c: In function 'f2py_rout_dfitpack_surfit_smth': build/src/Lib/interpolate/dfitpackmodule.c:2528: error: nested functions are not supported on MacOSX build/src/Lib/interpolate/dfitpackmodule.c:2540: error: nested functions are not supported on MacOSX build/src/Lib/interpolate/dfitpackmodule.c: In function 'f2py_rout_dfitpack_surfit_lsq': build/src/Lib/interpolate/dfitpackmodule.c:2976: error: nested functions are not supported on MacOSX build/src/Lib/interpolate/dfitpackmodule.c:2988: error: nested functions are not supported on MacOSX build/src/Lib/interpolate/dfitpackmodule.c: In function 'f2py_rout_dfitpack_surfit_smth': build/src/Lib/interpolate/dfitpackmodule.c:2528: error: nested functions are not supported on MacOSX build/src/Lib/interpolate/dfitpackmodule.c:2540: error: nested functions are not supported on MacOSX build/src/Lib/interpolate/dfitpackmodule.c: In function 'f2py_rout_dfitpack_surfit_lsq': build/src/Lib/interpolate/dfitpackmodule.c:2976: error: nested functions are not supported on MacOSX build/src/Lib/interpolate/dfitpackmodule.c:2988: error: nested functions are not supported on MacOSX error: Command "gcc -fno-strict-aliasing -Wno-long-double -no-cpp-precomp -mno-fused-madd -DNDEBUG -g -O3 -Wall -Wstrict-prototypes -I/sw/include -Ibuild/src -I/sw/lib/python2.4/site-packages/scipy/base/include -I/sw/include/python2.4 -c build/src/Lib/interpolate/ dfitpackmodule.c -o build/temp.darwin-8.3.0-Power_Macintosh-2.4/build/src/Lib/interpolate/dfitpackmodule.o" failed with 
exit status 1

And the first two "nested functions" are as follows:

2528>
    int calc_lwrk1(void) {
      int u = nxest-kx-1;
      int v = nyest-ky-1;
      int km = MAX(kx,ky)+1;
      int ne = MAX(nxest,nyest);
      int bx = kx*v+ky+1;
      int by = ky*u+kx+1;
      int b1,b2;
      if (bx<=by) {b1=bx;b2=bx+v-ky;}
      else {b1=by;b2=by+u-kx;}
      return u*v*(2+b1+b2)+2*(u+v+km*(m+ne)+ne-kx-ky)+b2+1;
    }

2540>
    int calc_lwrk2(void) {
      int u = nxest-kx-1;
      int v = nyest-ky-1;
      int bx = kx*v+ky+1;
      int by = ky*u+kx+1;
      int b2 = (bx<=by?bx+v-ky:by+u-kx);
      return u*v*(b2+1)+b2;
    }

These are indeed nested within another function. Unfortunately, unless the new gcc has a "sloppy-code" flag or something similar, I can't think of a workaround. It works fine with Xcode 2.1, which uses gcc 4.0.0. A thread concerning this behavior has been posted at http://lists.apple.com/archives/Xcode-users/2005/Nov/msg00267.html

-Jeff

-- 
Jeffrey S. Whitaker         Phone  : (303)497-6313
Meteorologist               FAX    : (303)497-6449
NOAA/OAR/PSD R/PSD1         Email  : Jeffrey.S.Whitaker at noaa.gov
325 Broadway                Office : Skaggs Research Cntr 1D-124
Boulder, CO, USA 80303-3328 Web    : http://tinyurl.com/5telg

From kamrik at gmail.com Thu Nov 17 12:17:55 2005
From: kamrik at gmail.com (Mark Koudritsky)
Date: Thu, 17 Nov 2005 19:17:55 +0200
Subject: [SciPy-dev] how to help scipy development?
In-Reply-To: 
References: 
Message-ID: 

On 11/17/05, Arnd Baecker wrote:
> I am not sure how feasible it is to make a copy of
> the ReSt from the wiki into core/scipy/doc/
> for each release/change/...

In the case of Moin wiki a page URL with "?action=print" gives the page without all the side bars, footers and headers - just plain HTML of the page. So, I think, it can be easily done with something like

wget http://www.scipy.org/Wiki/Contribute/?action=print
mv .... ... core/scipy/doc/Contribute.html

somewhere in the script for building the release. Similar strategy can be used for many doc files in the release.
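The release-script step Mark describes could be sketched like this. The helper names and the FAQ page are illustrative; only the ?action=print convention and the core/scipy/doc destination come from his message:

```python
import posixpath

WIKI_BASE = "http://www.scipy.org/Wiki"  # base URL from the message above

def print_url(page):
    # Moin convention described above: ?action=print returns bare page HTML
    return "%s/%s?action=print" % (WIKI_BASE, page)

def doc_target(page):
    # illustrative destination inside the source tree
    return posixpath.join("core/scipy/doc", page + ".html")

# emit the fetch commands a build script might run
for page in ("Contribute", "FAQ"):
    print("wget -O %s '%s'" % (doc_target(page), print_url(page)))
```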
> - sorry for answering out of thread, but my bzip2 compressed > mailbox with the stored message gives a nice CRC error - > > Pearu wrote: > > > However, the open question is still where to put these howtos so that > > users will find them easily and also so that it would be easy to > > maintain them. core/scipy/doc/ would be one place to put rest-docs. > > Another alternative would be to put them in a wiki, > so that additions can be made easily (hopefully also by others ...). > This might be more visible/accessible than > having it in core/scipy/doc/? > > OTOH, texts in core/scipy/doc/ can be accessed even when > one is off-line. > I am not sure how feasible it is to make a copy of > the ReSt from the wiki into core/scipy/doc/ > for each release/change/... > > Wiki display and edit of texts which reside > in a svn repository would be one solution - > does such thing exist (eg. via trac) ? > > Best, Arnd > > _______________________________________________ > Scipy-dev mailing list > Scipy-dev at scipy.net > http://www.scipy.net/mailman/listinfo/scipy-dev > From schofield at ftw.at Thu Nov 17 12:50:58 2005 From: schofield at ftw.at (Ed Schofield) Date: Thu, 17 Nov 2005 18:50:58 +0100 Subject: [SciPy-dev] "item in array" currently unsupported Message-ID: <437CC302.3090005@ftw.at> Hi all, The ability to use the test "item in array" seems to be a casualty from the recent change to newcore to raise an exception when interpreting arrays as truth values. 
The old Numeric would allow this:

>>> a = arange(100)
>>> a.shape = (10,10)
>>> a
array([[ 0,  1,  2,  3,  4,  5,  6,  7,  8,  9],
       [10, 11, 12, 13, 14, 15, 16, 17, 18, 19],
       [20, 21, 22, 23, 24, 25, 26, 27, 28, 29],
       [30, 31, 32, 33, 34, 35, 36, 37, 38, 39],
       [40, 41, 42, 43, 44, 45, 46, 47, 48, 49],
       [50, 51, 52, 53, 54, 55, 56, 57, 58, 59],
       [60, 61, 62, 63, 64, 65, 66, 67, 68, 69],
       [70, 71, 72, 73, 74, 75, 76, 77, 78, 79],
       [80, 81, 82, 83, 84, 85, 86, 87, 88, 89],
       [90, 91, 92, 93, 94, 95, 96, 97, 98, 99]])
>>> 0 in a
True
>>> -1 in a
False
>>> 100 in a
False
>>> 99 in a
True

newcore and numarray (1.4.1) raise an exception, requiring the use of any() or all(). But the exception is bogus in this case: there's no way to use any() or all() here. Could we override __contains__ to do something more useful? I see two possibilities for the desired behaviour. One would be for "item in array" to return an array of truth values. But this doesn't seem feasible, since "item in obj" seems to return only the truth value of whatever __contains__() returns. (Another PEP candidate?) The other is for __contains__ to do something like this:

def __contains__(self, item):
    if self.ndim > 1:
        return item in self.ravel()
    ...

This is possible in numarray (using ravel(self)). But this wouldn't work in the current scipy core, which raises an exception for "0 in a[0]" even if a[0] is a rank-1 array:

>>> 56 in a[0]
Traceback (most recent call last):
  File "", line 1, in ?
ValueError: The truth value of an array with more than one element is ambiguous. Use a.any() or a.all()

This last point seems to be a simple bug. A simple test case is:

>>> 0 in arange(10)

which throws the same exception. -- Ed From arnd.baecker at web.de Thu Nov 17 12:59:46 2005 From: arnd.baecker at web.de (Arnd Baecker) Date: Thu, 17 Nov 2005 18:59:46 +0100 (CET) Subject: [SciPy-dev] how to help scipy development?
In-Reply-To: References: Message-ID: On Thu, 17 Nov 2005, Mark Koudritsky wrote: > On 11/17/05, Arnd Baecker wrote: > > I am not sure how feasible it is to make a copy of > > the ReSt from the wiki into core/scipy/doc/ > > for each release/change/... > > In the case of Moin wiki a page URL with "?action=print" gives the > page without all the side bars footers and headers - just plain HTML > of tpage. > So, I think, it can be easily done with something like > > wget http://www.scipy.org/Wiki/Contribute/?action=print > mv .... ... core/scipy/doc/Contribute.html > > somewhere in the script for building the release. > Similar strategy can be used for many doc files in the release. That's nice! And with http://astraw.com/scipy/moin.fcg/HelpOnParsers/ReStructuredText?action=raw one gets the raw text of a page. See http://astraw.com/scipy/moin.fcg/HelpOnActions for more actions. Concerning restructured text, which I like a lot, one has to embed the page with {{{ #!rst }}} At least on http://astraw.com/scipy/moin.fcg/ArndBaecker it did not work with #!rst at the beginning only. Best, Arnd From schofield at ftw.at Thu Nov 17 13:19:50 2005 From: schofield at ftw.at (Ed Schofield) Date: Thu, 17 Nov 2005 19:19:50 +0100 Subject: [SciPy-dev] Arrays as dtypes Message-ID: <437CC9C6.8030002@ftw.at> Hi all, Do we need to allow arrays as dtypes? I've just found a bug in the test suite for the matrix module: assert all(array(conjugate(transpose(B)), mB.H)) which should be assert all(array(conjugate(transpose(B)) == mB.H)) This was silently passing before without running the desired test. This used the dtype of mB.H to create the array instead of comparing the elements properly. I'd like to make a case for removing the ability to use arrays as dtypes to avoid little bugs like these. The meaning of the explicit notation: array(a, b.dtype) rather than array(a, b) is also clearer to a reader and not difficult to type. 
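For illustration, with modern NumPy standing in for the 2005 scipy.base namespace, the corrected comparison and the explicit dtype spelling look like this (the matrix values are made-up examples):

```python
import numpy as np  # modern NumPy in place of the 2005 scipy.base namespace

B = np.array([[1 + 2j, 3 - 1j],
              [0 + 1j, 2 + 0j]])
H = np.conjugate(np.transpose(B))   # the Hermitian transpose, as mB.H would be

# The corrected test: compare elementwise, then reduce to one truth value.
assert np.all(np.array(H == B.conj().T))

# The explicit, unambiguous spelling for "an array with B's dtype":
C = np.array([1, 2, 3], dtype=B.dtype)
assert C.dtype == B.dtype           # complex dtype, inherited explicitly
```

The buggy form `array(X, Y)` passed `Y` where a dtype was expected, so no element comparison ever happened; spelling out `Y.dtype` (or comparing with `==` first) makes the intent unmistakable.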
-- Ed From oliphant at ee.byu.edu Thu Nov 17 14:07:58 2005 From: oliphant at ee.byu.edu (Travis Oliphant) Date: Thu, 17 Nov 2005 12:07:58 -0700 Subject: [SciPy-dev] "item in array" currently unsupported In-Reply-To: <437CC302.3090005@ftw.at> References: <437CC302.3090005@ftw.at> Message-ID: <437CD50E.2010505@ee.byu.edu> Ed Schofield wrote: >Hi all, > >The ability to use the test "item in array" seems to be a casualty from >the recent change to newcore to raise an exception when interpreting >arrays as truth values. The old Numeric would allow this: > > You are right. The contains function was using PyObject_RichCompareBool which relies on the "truth" value of the result. This is easily changed and I will do it. -Travis From oliphant at ee.byu.edu Thu Nov 17 15:23:46 2005 From: oliphant at ee.byu.edu (Travis Oliphant) Date: Thu, 17 Nov 2005 13:23:46 -0700 Subject: [SciPy-dev] Arrays as dtypes In-Reply-To: <437CC9C6.8030002@ftw.at> References: <437CC9C6.8030002@ftw.at> Message-ID: <437CE6D2.6080200@ee.byu.edu> Ed Schofield wrote: >Hi all, > >Do we need to allow arrays as dtypes? > > Since we removed the use of integers as types, it makes sense to remove the use of arrays and array scalars as types also. I just committed that change to SVN. Initial tests pass. If any problems arise please report them. -Travis From oliphant at ee.byu.edu Thu Nov 17 15:25:12 2005 From: oliphant at ee.byu.edu (Travis Oliphant) Date: Thu, 17 Nov 2005 13:25:12 -0700 Subject: [SciPy-dev] Segfault with newcore In-Reply-To: <437CA2E9.30408@ftw.at> References: <437B598D.4020504@ftw.at> <437CA2E9.30408@ftw.at> Message-ID: <437CE728.2020902@ee.byu.edu> Ed Schofield wrote: >Ed Schofield wrote: > > > >>The following gives a segfault >> >> >> Thanks for catching these. The problems are connected. My fix for strings was too specific. In fact, there was a more subtle problem in that case. I've committed changes and hopefully these errors are gone.
All non-tuple sequences can now be used as index arrays --- if they can be cast properly. -Travis From schofield at ftw.at Thu Nov 17 15:37:37 2005 From: schofield at ftw.at (Ed Schofield) Date: Thu, 17 Nov 2005 21:37:37 +0100 Subject: [SciPy-dev] Segfault with newcore In-Reply-To: <437CE728.2020902@ee.byu.edu> References: <437B598D.4020504@ftw.at> <437CA2E9.30408@ftw.at> <437CE728.2020902@ee.byu.edu> Message-ID: <437CEA11.8080700@ftw.at> Travis Oliphant wrote: >Thanks for catching these. > You're welcome. Well done for fixing them so quickly! It's inspiring! :) -- Ed From robert.kern at gmail.com Fri Nov 18 02:36:30 2005 From: robert.kern at gmail.com (Robert Kern) Date: Thu, 17 Nov 2005 23:36:30 -0800 Subject: [SciPy-dev] scipy 0.4.3 won't build with gcc 4.0.1 (Xcode 2.2) on Macos X In-Reply-To: <437CB6ED.50105@fastmail.fm> References: <437CB6ED.50105@fastmail.fm> Message-ID: <437D847E.9090706@gmail.com> Jeff Whitaker wrote: > gcc 4.0.1 doesn't like nested functions on MacOS X. There are enough things that gcc 4 doesn't like that I always recommend OS X 10.4 users to use gcc 3.3 when building scipy. Or anything else, really. Use this command to switch: $ sudo gcc_select 3.3 -- Robert Kern robert.kern at gmail.com "In the fields of hell where the grass grows high Are the graves of dreams allowed to die." -- Richard Harter From kamrik at gmail.com Fri Nov 18 02:37:08 2005 From: kamrik at gmail.com (Mark Koudritsky) Date: Fri, 18 Nov 2005 09:37:08 +0200 Subject: [SciPy-dev] how to help scipy development? In-Reply-To: References: Message-ID: On 11/17/05, Arnd Baecker wrote: > Concerning restructured text, which I like a lot, > one has to embed the page with > {{{ > #!rst > > }}} > > At least on http://astraw.com/scipy/moin.fcg/ArndBaecker > it did not work with #!rst at the beginning only. When one wants to apply a parser to the whole page, the instruction should be: #FORMAT parser-name at the beginning. 
In this case #FORMAT rst The {{{#!parser-name }}} syntax is for applying a parser to part of the page, which is useful for parsers like python (which displays a piece of python code with syntax highlighting and line numbering) On 11/17/05, Arnd Baecker wrote: > On Thu, 17 Nov 2005, Mark Koudritsky wrote: > > > On 11/17/05, Arnd Baecker wrote: > > > I am not sure how feasible it is to make a copy of > > > the ReSt from the wiki into core/scipy/doc/ > > > for each release/change/... > > > > In the case of Moin wiki a page URL with "?action=print" gives the > > page without all the side bars footers and headers - just plain HTML > > of tpage. > > So, I think, it can be easily done with something like > > > > wget http://www.scipy.org/Wiki/Contribute/?action=print > > mv .... ... core/scipy/doc/Contribute.html > > > > somewhere in the script for building the release. > > Similar strategy can be used for many doc files in the release. > > That's nice! And with > > http://astraw.com/scipy/moin.fcg/HelpOnParsers/ReStructuredText?action=raw > > one gets the raw text of a page. > See > http://astraw.com/scipy/moin.fcg/HelpOnActions > for more actions. > > Concerning restructured text, which I like a lot, > one has to embed the page with > {{{ > #!rst > > }}} > > At least on http://astraw.com/scipy/moin.fcg/ArndBaecker > it did not work with #!rst at the beginning only. > > Best, > > Arnd > > _______________________________________________ > Scipy-dev mailing list > Scipy-dev at scipy.net > http://www.scipy.net/mailman/listinfo/scipy-dev > From arnd.baecker at web.de Fri Nov 18 03:49:59 2005 From: arnd.baecker at web.de (Arnd Baecker) Date: Fri, 18 Nov 2005 09:49:59 +0100 (CET) Subject: [SciPy-dev] newcore and gcc4/gfortran Message-ID: Hi, just for the fun of it I tried to install newcore using gcc4. More precisely: 64 Bit opteron, gcc version 4.0.2 20050901 (prerelease) (SUSE Linux). Newcore works fine! 
However for newscipy, built with python setup.py config_fc --fcompiler=gnu95 build python setup.py install --prefix=$DESTDIR , the following import scipy scipy.test(10,10) gives: check_bdtrik (scipy.special.basic.test_basic.test_cephes) Program received signal SIGSEGV, Segmentation fault. [Switching to Thread 46912509653888 (LWP 21533)] 0x00007fffffbcb0e0 in ?? () (gdb) bt #0 0x00007fffffbcb0e0 in ?? () #1 0x00007fffffbcb0b0 in ?? () #2 0x0000000000000000 in ?? () #3 0x00007fffffbcb0b0 in ?? () #4 0x00007fffffbcb0a8 in ?? () #5 0x00007fffffbcb0d8 in ?? () #6 0x00002aaab1240d41 in dinvr_ () from /home/abaecker/BUILDS2/Build_83_gcc4/inst_scipy_newcore/lib64/python2.4/site-packages/scipy/special/_cephes.so #7 0x00002aaab123b4cc in cdfbin_ () from /home/abaecker/BUILDS2/Build_83_gcc4/inst_scipy_newcore/lib64/python2.4/site-packages/scipy/special/_cephes.so #8 0x00002aaab1208600 in cdfbin2_wrap (p=1, xn=3, pr=0.5) at cdf_wrappers.c:98 #9 0x00002aaab120ba99 in PyUFunc_ddd_d (args=, dimensions=, steps=, func=0x2aaab1208580) at ufunc_extras.c:303 #10 0x00002aaaabf6c574 in PyUFunc_GenericFunction (self=0x721880, args=, mps=0x7fffffbcb840) at ufuncobject.c:1338 #11 0x00002aaaabf6c65c in ufunc_generic_call (self=0x721880, args=0x2aaab5c825f0) at ufuncobject.c:2396 #12 0x00002aaaaabfa760 in PyObject_Call () from /usr/lib64/libpython2.4.so.1.0 #13 0x00002aaaaac5380d in PyEval_EvalFrame () from /usr/lib64/libpython2.4.so.1.0 Not sure if it is worth to start digging into gcc4/gfortran related problems. OTOH, gcc4/gfortran is the default for many recent distributions .... Best, Arnd From oliphant at ee.byu.edu Fri Nov 18 04:46:38 2005 From: oliphant at ee.byu.edu (Travis Oliphant) Date: Fri, 18 Nov 2005 02:46:38 -0700 Subject: [SciPy-dev] newcore and gcc4/gfortran In-Reply-To: References: Message-ID: <437DA2FE.60505@ee.byu.edu> Arnd Baecker wrote: >Hi, > >just for the fun of it I tried to install newcore using gcc4. 
>More precisely: >64 Bit opteron, gcc version 4.0.2 20050901 (prerelease) (SUSE Linux). >Newcore works fine! >However for newscipy, built with > python setup.py config_fc --fcompiler=gnu95 build > python setup.py install --prefix=$DESTDIR >, the following > import scipy > scipy.test(10,10) >gives: > >check_bdtrik (scipy.special.basic.test_basic.test_cephes) >Program received signal SIGSEGV, Segmentation fault. >[Switching to Thread 46912509653888 (LWP 21533)] >0x00007fffffbcb0e0 in ?? () >(gdb) bt >#0 0x00007fffffbcb0e0 in ?? () >#1 0x00007fffffbcb0b0 in ?? () >#2 0x0000000000000000 in ?? () >#3 0x00007fffffbcb0b0 in ?? () >#4 0x00007fffffbcb0a8 in ?? () >#5 0x00007fffffbcb0d8 in ?? () >#6 0x00002aaab1240d41 in dinvr_ () > from >/home/abaecker/BUILDS2/Build_83_gcc4/inst_scipy_newcore/lib64/python2.4/site-packages/scipy/special/_cephes.so >#7 0x00002aaab123b4cc in cdfbin_ () > from >/home/abaecker/BUILDS2/Build_83_gcc4/inst_scipy_newcore/lib64/python2.4/site-packages/scipy/special/_cephes.so >#8 0x00002aaab1208600 in cdfbin2_wrap (p=1, xn=3, pr=0.5) at >cdf_wrappers.c:98 >#9 0x00002aaab120ba99 in PyUFunc_ddd_d (args=, >dimensions=, > steps=, func=0x2aaab1208580) at >ufunc_extras.c:303 > > cdfbin_ is a Fortran-compiled subroutine so this looks like a compiler problem. It doesn't look like the fortran compiler in gcc is really ready for primetime. At some point it may be possible to test the compiler on these fortran files in scipy and decide if it will work during configuration. I'm not going to start debuggin the gcc fortran compiler at this point, though. -Travis From arnd.baecker at web.de Fri Nov 18 09:35:20 2005 From: arnd.baecker at web.de (Arnd Baecker) Date: Fri, 18 Nov 2005 15:35:20 +0100 (CET) Subject: [SciPy-dev] newcore and gcc4/gfortran In-Reply-To: <437DA2FE.60505@ee.byu.edu> References: <437DA2FE.60505@ee.byu.edu> Message-ID: On Fri, 18 Nov 2005, Travis Oliphant wrote: [...] 
> cdfbin_ is a Fortran-compiled subroutine so this looks like a compiler > problem. It doesn't look like the fortran compiler in gcc is really > ready for primetime. > > At some point it may be possible to test the compiler on these fortran > files in scipy and decide if it will work during configuration. I'm not > going to start debuggin the gcc fortran compiler at this point, though. Which is perfectly understandable! Just for the record - I also tried the gcc4/g77 variant, which gets beyond the above point, but segfaults at another place: check_choose (scipy.special.basic.test_basic.test_choose) ... Program received signal SIGSEGV, Segmentation fault. [Switching to Thread 46912509653888 (LWP 29558)] 0x00002aaaabdec67b in PyArray_Choose (ip=, op=) at multiarraymodule.c:1422 1422 for(i=0; i, op=) at multiarraymodule.c:1422 #1 0x00002aaaabdec8b2 in array_choose (self=0xd31730, args=0x2aaab673e090) at arraymethods.c:646 #2 0x00002aaaaac5496a in PyEval_EvalFrame () from /usr/lib64/libpython2.4.so.1.0 #3 0x00002aaaaac53b97 in PyEval_EvalFrame () from /usr/lib64/libpython2.4.so.1.0 #4 0x00002aaaaac53b97 in PyEval_EvalFrame () from /usr/lib64/libpython2.4.so.1.0 #5 0x00002aaaaac55404 in PyEval_EvalCodeEx () from /usr/lib64/libpython2.4.so.1.0 #6 0x00002aaaaac0e9af in PyFunction_SetClosure () from /usr/lib64/libpython2.4.so.1.0 #7 0x00002aaaaabfa760 in PyObject_Call () from /usr/lib64/libpython2.4.so.1.0 #8 0x00002aaaaac532e2 in PyEval_EvalFrame () from /usr/lib64/libpython2.4.so.1.0 #9 0x00002aaaaac55404 in PyEval_EvalCodeEx () from /usr/lib64/libpython2.4.so.1.0 #10 0x00002aaaaac0e9af in PyFunction_SetClosure () from /usr/lib64/libpython2.4.so.1.0 #11 0x00002aaaaabfa760 in PyObject_Call () from /usr/lib64/libpython2.4.so.1.0 #12 0x00002aaaaac02131 in PyMethod_Fini () from /usr/lib64/libpython2.4.so.1.0 So much for that... 
Best, Arnd From oliphant at ee.byu.edu Fri Nov 18 12:08:26 2005 From: oliphant at ee.byu.edu (Travis Oliphant) Date: Fri, 18 Nov 2005 10:08:26 -0700 Subject: [SciPy-dev] newcore and gcc4/gfortran In-Reply-To: References: <437DA2FE.60505@ee.byu.edu> Message-ID: <437E0A8A.1060608@ee.byu.edu> Arnd Baecker wrote: >On Fri, 18 Nov 2005, Travis Oliphant wrote: > >[...] > > > >>cdfbin_ is a Fortran-compiled subroutine so this looks like a compiler >>problem. It doesn't look like the fortran compiler in gcc is really >>ready for primetime. >> >>At some point it may be possible to test the compiler on these fortran >>files in scipy and decide if it will work during configuration. I'm not >>going to start debuggin the gcc fortran compiler at this point, though. >> >> > >Which is perfectly understandable! >Just for the record - I also tried the gcc4/g77 variant, which gets beyond >the above point, but segfaults at another place: > >check_choose (scipy.special.basic.test_basic.test_choose) ... >Program received signal SIGSEGV, Segmentation fault. >[Switching to Thread 46912509653888 (LWP 29558)] >0x00002aaaabdec67b in PyArray_Choose (ip=, > op=) at multiarraymodule.c:1422 >1422 for(i=0; i(gdb) bt >#0 0x00002aaaabdec67b in PyArray_Choose (ip=, > op=) at multiarraymodule.c:1422 > > Wait... This is probably a real bug. I was refactoring code in that area last night and may have left the SVN tree in a bad state. Which svn version did you test? -Travis From arnd.baecker at web.de Fri Nov 18 13:04:01 2005 From: arnd.baecker at web.de (Arnd Baecker) Date: Fri, 18 Nov 2005 19:04:01 +0100 (CET) Subject: [SciPy-dev] newcore and gcc4/gfortran In-Reply-To: <437E0A8A.1060608@ee.byu.edu> References: <437E0A8A.1060608@ee.byu.edu> Message-ID: On Fri, 18 Nov 2005, Travis Oliphant wrote: > Arnd Baecker wrote: > > >On Fri, 18 Nov 2005, Travis Oliphant wrote: > > > >[...] > > > > > > > >>cdfbin_ is a Fortran-compiled subroutine so this looks like a compiler > >>problem. 
It doesn't look like the fortran compiler in gcc is really > >>ready for primetime. > >> > >>At some point it may be possible to test the compiler on these fortran > >>files in scipy and decide if it will work during configuration. I'm not > >>going to start debuggin the gcc fortran compiler at this point, though. > >> > >> > > > >Which is perfectly understandable! > >Just for the record - I also tried the gcc4/g77 variant, which gets beyond > >the above point, but segfaults at another place: > > > >check_choose (scipy.special.basic.test_basic.test_choose) ... > >Program received signal SIGSEGV, Segmentation fault. > >[Switching to Thread 46912509653888 (LWP 29558)] > >0x00002aaaabdec67b in PyArray_Choose (ip=, > > op=) at multiarraymodule.c:1422 > >1422 for(i=0; i >(gdb) bt > >#0 0x00002aaaabdec67b in PyArray_Choose (ip=, > > op=) at multiarraymodule.c:1422 > > > > > Wait... This is probably a real bug. I was refactoring code in that > area last night and may have left the SVN tree in a bad state. Which > svn version did you test? >>> scipy.__core_version__ '0.7.1.1506' >>> scipy.__scipy_version__ '0.4.2_1443' Arnd From oliphant at ee.byu.edu Fri Nov 18 13:22:20 2005 From: oliphant at ee.byu.edu (Travis Oliphant) Date: Fri, 18 Nov 2005 11:22:20 -0700 Subject: [SciPy-dev] newcore and gcc4/gfortran In-Reply-To: References: <437E0A8A.1060608@ee.byu.edu> Message-ID: <437E1BDC.50701@ee.byu.edu> Arnd Baecker wrote: > > >>> >>> >>Wait... This is probably a real bug. I was refactoring code in that >>area last night and may have left the SVN tree in a bad state. Which >>svn version did you test? >> >> I can't reproduce the problem on my system. Can anybody else see this problem? 
-Travis From schofield at ftw.at Fri Nov 18 14:12:31 2005 From: schofield at ftw.at (Ed Schofield) Date: Fri, 18 Nov 2005 20:12:31 +0100 Subject: [SciPy-dev] a.resize() segfaults Message-ID: <437E279F.1090604@ftw.at> I've found another segfault:

>>> a = arange(10)
>>> a
array([[0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 0]])
>>> a.resize(1,11)
Program received signal SIGSEGV, Segmentation fault.
[Switching to Thread -1208457536 (LWP 19305)]
0xb7d713e6 in array_resize (self=0x83a4230, args=0xb62363ac) at arraymethods.c:601
601         Py_DECREF(ret);
(gdb) bt
#0  0xb7d713e6 in array_resize (self=0x83a4230, args=0xb62363ec) at arraymethods.c:601
#1  0x08108e5b in PyCFunction_Call (func=0xb646ee8c, arg=0xb62363ec, kw=0x0) at Objects/methodobject.c:73

I have more 'luck' reproducing this after displaying the array, as here:

>>> a = arange(10)
>>> a.resize((1,11))
>>> a.resize((5,6))
>>> a.resize((1,11))
>>> a
array([[0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 0]])
>>> a.resize((5,15))
Segmentation fault

I first thought displaying the array increases the chance that the stray pointer lands in a different memory page. But I've now realised that it's probably the result of 'a' being cached by the interpreter as the '_' variable, so the warning in the docstring that "Array must own its own memory and not be referenced by other arrays" doesn't hold. I know the bomb came with a warning, but I'd still prefer an exception to a segfault when I ignore it and light a match anyway. Is it feasible for the resize() function to test these conditions itself?
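For reference, current NumPy performs exactly this kind of ownership check inside ndarray.resize() and raises an exception instead of segfaulting. A sketch, with modern NumPy standing in for the 2005 scipy core (variable names are illustrative):

```python
import numpy as np  # current NumPy; in 2005 this was the new scipy core

a = np.arange(10)
b = a                       # a second reference to the same array object
raised = False
try:
    a.resize((5, 15))       # in-place resize requires sole ownership
except ValueError:
    raised = True           # refused cleanly rather than segfaulting

# With no extra references, in-place resize works and zero-fills
# the newly added elements:
c = np.arange(10)
c.resize((1, 11))
```

This matches the behaviour Ed asks for: the condition in the docstring is tested at call time, and violating it costs an exception, not a crash.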
-- Ed From oliphant at ee.byu.edu Fri Nov 18 14:28:03 2005 From: oliphant at ee.byu.edu (Travis Oliphant) Date: Fri, 18 Nov 2005 12:28:03 -0700 Subject: [SciPy-dev] a.resize() segfaults In-Reply-To: <437E279F.1090604@ftw.at> References: <437E279F.1090604@ftw.at> Message-ID: <437E2B43.7050007@ee.byu.edu> Ed Schofield wrote: >I've found another segfault: > > > >>>>a = arange(10) >>>>a >>>> >>>> >array([[0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 0]]) > > >>>>a.resize(1,11) >>>> >>>> > > > Thanks for another catch. The PyArray_Resize function was throwing an exception (because the object had references) as it should. However, the method call was not doing the right thing when an exception was thrown. Segfaults are never acceptable behavior. -Travis From guyer at nist.gov Fri Nov 18 15:14:45 2005 From: guyer at nist.gov (Jonathan Guyer) Date: Fri, 18 Nov 2005 15:14:45 -0500 Subject: [SciPy-dev] Trouble building scipy_core from svn on Debian machine Message-ID: <73463abe1ef76f6e0ebc8d0e4e824ed8@nist.gov> In the course of doing my long promised benchmarking of scipy.sparse against pysparse, I'm attempting to build scipy_core on a Debian Sarge machine, and I get an error in _configtest: Inconsistency detected by ld.so: dl-version.c: 230: _dl_check_map_versions: Assertion `needed != ((void *)0)' failed! googling this error message produces two seemingly irrelevant hits: and . I know that folks are building SciPy on Debian machines. Does anybody know what this error comes from? Full build log: ------------------------------------------ 301 benson% python setup.py build Running from scipy core source directory. 
Assuming default configuration (scipy/distutils/command/{setup_command,setup}.py was not found) Appending scipy.distutils.command configuration to scipy.distutils Assuming default configuration (scipy/distutils/fcompiler/{setup_fcompiler,setup}.py was not found) Appending scipy.distutils.fcompiler configuration to scipy.distutils Appending scipy.distutils configuration to scipy Appending scipy.weave configuration to scipy Assuming default configuration (scipy/test/{setup_test,setup}.py was not found) Appending scipy.test configuration to scipy No module named __svn_version__ Creating scipy/f2py/__svn_version__.py (version='1511') F2PY Version 2_1511 Appending scipy.f2py configuration to scipy Creating scipy/base/__svn_version__.py (version='1511') Appending scipy.base configuration to scipy blas_opt_info: atlas_blas_threads_info: Setting PTATLAS=ATLAS NOT AVAILABLE atlas_blas_info: FOUND: libraries = ['f77blas', 'cblas', 'atlas'] library_dirs = ['/usr/lib'] language = c include_dirs = ['/usr/include'] /u/home8/guyer/SciPy/core/scipy/distutils/system_info.py:897: FutureWarning: hex()/oct() of negative int will return a signed string in Python 2.4 and up magic = hex(hash(`config`)) running build_src building extension "atlas_version" sources creating build creating build/src adding 'build/src/atlas_version_0x81dd6ae9.c' to sources. 
running build_ext customize UnixCCompiler customize UnixCCompiler using build_ext building 'atlas_version' extension compiling C sources gcc options: '-pthread -fno-strict-aliasing -DNDEBUG -g -O3 -Wall -Wstrict-prototypes -fPIC' creating build/temp.linux-i686-2.3 creating build/temp.linux-i686-2.3/build creating build/temp.linux-i686-2.3/build/src compile options: '-I/usr/include -Iscipy/base/include -I/usr/include/python2.3 -c' gcc: build/src/atlas_version_0x81dd6ae9.c gcc -pthread -shared build/temp.linux-i686-2.3/build/src/atlas_version_0x81dd6ae9.o -L/usr/lib -lf77blas -lcblas -latlas -o build/temp.linux-i686-2.3/atlas_version.so FOUND: libraries = ['f77blas', 'cblas', 'atlas'] library_dirs = ['/usr/lib'] language = c define_macros = [('ATLAS_INFO', '"\\"3.6.0\\""')] include_dirs = ['/usr/include'] lapack_opt_info: atlas_threads_info: Setting PTATLAS=ATLAS scipy.distutils.system_info.atlas_threads_info NOT AVAILABLE atlas_info: scipy.distutils.system_info.atlas_info FOUND: libraries = ['lapack', 'f77blas', 'cblas', 'atlas'] library_dirs = ['/usr/lib/atlas', '/usr/lib'] language = f77 include_dirs = ['/usr/include'] running build_src building extension "atlas_version" sources adding 'build/src/atlas_version_0x6be84111.c' to sources. 
running build_ext customize UnixCCompiler customize UnixCCompiler using build_ext Could not locate executable gfortran Could not locate executable f95 customize GnuFCompiler customize GnuFCompiler customize GnuFCompiler using build_ext building 'atlas_version' extension compiling C sources gcc options: '-pthread -fno-strict-aliasing -DNDEBUG -g -O3 -Wall -Wstrict-prototypes -fPIC' compile options: '-I/usr/include -Iscipy/base/include -I/usr/include/python2.3 -c' gcc: build/src/atlas_version_0x6be84111.c /usr/bin/g77 -shared build/temp.linux-i686-2.3/build/src/atlas_version_0x6be84111.o -L/usr/lib/atlas -L/usr/lib -llapack -lf77blas -lcblas -latlas -lg2c-pic -o build/temp.linux-i686-2.3/atlas_version.so FOUND: libraries = ['lapack', 'f77blas', 'cblas', 'atlas'] library_dirs = ['/usr/lib/atlas', '/usr/lib'] language = f77 define_macros = [('ATLAS_INFO', '"\\"3.6.0\\""')] include_dirs = ['/usr/include'] Appending scipy.lib configuration to scipy Appending scipy.basic configuration to scipy Appending scipy configuration to Inheriting attribute 'version' from '?' scipy_core version 0.7.1.1511 running build running config_fc running build_src building extension "scipy.distutils.__config__" sources creating build/src/scipy creating build/src/scipy/distutils creating build/src/scipy/distutils/scipy creating build/src/scipy/distutils/scipy/distutils adding 'build/src/scipy/distutils/scipy/distutils/__config__.py' to sources. 
building extension "scipy.base.multiarray" sources creating build/src/scipy/base Generating build/src/scipy/base/config.h customize GnuFCompiler customize GnuFCompiler customize GnuFCompiler using config gcc options: '-pthread -fno-strict-aliasing -DNDEBUG -g -O3 -Wall -Wstrict-prototypes -fPIC' compile options: '-I/usr/include/python2.3 -Iscipy/base/src -I/usr/include/python2.3 -c' gcc: _configtest.c gcc -pthread _configtest.o -o _configtest _configtest Inconsistency detected by ld.so: dl-version.c: 230: _dl_check_map_versions: Assertion `needed != ((void *)0)' failed! failure. removing: _configtest.c _configtest.o _configtest Traceback (most recent call last): File "setup.py", line 38, in ? setup_package() File "setup.py", line 31, in setup_package setup( **config.todict() ) File "/u/home8/guyer/SciPy/core/scipy/distutils/core.py", line 91, in setup return old_setup(**new_attr) File "/usr/lib/python2.3/distutils/core.py", line 149, in setup dist.run_commands() File "/usr/lib/python2.3/distutils/dist.py", line 907, in run_commands self.run_command(cmd) File "/usr/lib/python2.3/distutils/dist.py", line 927, in run_command cmd_obj.run() File "/usr/lib/python2.3/distutils/command/build.py", line 107, in run self.run_command(cmd_name) File "/usr/lib/python2.3/distutils/cmd.py", line 333, in run_command self.distribution.run_command(command) File "/usr/lib/python2.3/distutils/dist.py", line 927, in run_command cmd_obj.run() File "/u/home8/guyer/SciPy/core/scipy/distutils/command/build_src.py", line 86, in run self.build_sources() File "/u/home8/guyer/SciPy/core/scipy/distutils/command/build_src.py", line 99, in build_sources self.build_extension_sources(ext) File "/u/home8/guyer/SciPy/core/scipy/distutils/command/build_src.py", line 143, in build_extension_sources sources = self.generate_sources(sources, ext) File "/u/home8/guyer/SciPy/core/scipy/distutils/command/build_src.py", line 199, in generate_sources source = func(extension, build_dir) File 
"scipy/base/setup.py", line 36, in generate_config_h raise "ERROR: Failed to test configuration" ERROR: Failed to test configuration removed scipy/base/__svn_version__.py removed scipy/base/__svn_version__.pyc removed scipy/f2py/__svn_version__.py removed scipy/f2py/__svn_version__.pyc -- Jonathan E. Guyer, PhD Metallurgy Division National Institute of Standards and Technology From daishi at egcrc.net Fri Nov 18 19:14:15 2005 From: daishi at egcrc.net (daishi at egcrc.net) Date: Fri, 18 Nov 2005 16:14:15 -0800 Subject: [SciPy-dev] small fix for Lib/signal/signaltools.py Message-ID: <6d3af4f37bacaaecfc56df9eeb7da84a@egcrc.net> The following adds some imports that are used by the functions in the module.

Index: Lib/signal/signaltools.py
===================================================================
--- Lib/signal/signaltools.py (revision 1446)
+++ Lib/signal/signaltools.py (working copy)
@@ -14,7 +14,7 @@
 from scipy.stats import mean
 import scipy.base as Numeric
 from scipy.base import array, arange, where, sqrt, rank, zeros, NewAxis, \
-    argmax, product
+    argmax, product, cos, pi
 from scipy.utils import factorial
 _modedict = {'valid':0, 'same':1, 'full':2}

From oliphant at ee.byu.edu Fri Nov 18 23:19:03 2005 From: oliphant at ee.byu.edu (Travis Oliphant) Date: Fri, 18 Nov 2005 21:19:03 -0700 Subject: [SciPy-dev] [matplotlib-devel] Preliminary patch for the (new) scipy In-Reply-To: <18edad1d51ce1377a1b6d28f091f883b@egcrc.net> References: <18edad1d51ce1377a1b6d28f091f883b@egcrc.net> Message-ID: <437EA7B7.500@ee.byu.edu> daishi at egcrc.net wrote: > I've submitted the following: > > http://sourceforge.net/tracker/index.php? > func=detail&aid=1360855&group_id=80706&atid=560722 > > This patch allows one to use matplotlib with (just) > the new scipy.
> > The matplotlib built with this patch and just > the new scipy passes the examples/backend_driver.py > test with errors only occurring on those tests which > explicitly include numarray, and a few others which > I believe are unrelated to the patch. This is fantastic. I'm absolutely thrilled by this development. Thank you... Thank you... -Travis From pearu at scipy.org Sat Nov 19 11:26:37 2005 From: pearu at scipy.org (Pearu Peterson) Date: Sat, 19 Nov 2005 10:26:37 -0600 (CST) Subject: [SciPy-dev] [SciPy-user] calculating factorial In-Reply-To: <437F3A6D.7040103@gmail.com> References: <20051119101334.GJ9916@alpha> <437F3A6D.7040103@gmail.com> Message-ID: With the revision 1516 of scipy core arange seems to have ref.counting problem:

>>> from scipy import *
>>> prod(arange(2,101,dtype=object))
Illegal instruction

>>> from scipy import *
>>> a=prod(arange(2,101,dtype=object))
>>> a
Illegal instruction

>>> from scipy import *
>>> a=arange(2,101,dtype=object)
>>> b=prod(a)
>>> b
9.3326215443944102e+157
>>>

Anyone can reproduce these segfaults? Pearu From dd55 at cornell.edu Sat Nov 19 13:19:43 2005 From: dd55 at cornell.edu (Darren Dale) Date: Sat, 19 Nov 2005 13:19:43 -0500 Subject: [SciPy-dev] [SciPy-user] efficiently importing ascii data In-Reply-To: <200511140636.31387.dd55@cornell.edu> References: <200511101436.58074.dd55@cornell.edu> <4378198C.7070400@ee.byu.edu> <200511140636.31387.dd55@cornell.edu> Message-ID: <200511191319.44057.dd55@cornell.edu> Hi Travis, All, > > The other thing that needs to be done is that string (and unicode) > > arrays need to convert to the numeric types, easily. This would let you > > read in a string array and convert to numeric types quite cleanly. > > Right now, this doesn't work simply because the wrong generic functions > > are getting called in the conversion routines. This can and should be > > changed, however. The required code to change is in arraytypes.inc.src.
> >
> > I could see using PyInt_FromString, PyLong_FromString,
> > PyFloat_FromString, and calling the complex python function for
> > constructing complex numbers from a string. Code would need to be
> > written to fully support Long double and complex long double conversion,
> > but initially they could just punt and use the double conversions.
> >
> > Alternatively, sscanf could be called when available for the type, and
> > the other approaches used when it isn't.
> >
> > Anybody want a nice simple project to get themselves up to speed with
> > the new code base :-)

> I'll look into it (after hours, just started a new job). I haven't worked
> much with C or wrapping C, and I need to learn sometime.

Now that I have studied the source, I'm not sure I am up to this. Between trying to learn C and wading through the (what appears to me) extremely abstracted source code, I might as well be deciphering Linear A.

There wouldn't be, by chance, a developers' guide to hacking scipy, would there?

Darren

From dd55 at cornell.edu Sat Nov 19 13:22:40 2005 From: dd55 at cornell.edu (Darren Dale) Date: Sat, 19 Nov 2005 13:22:40 -0500 Subject: [SciPy-dev] [SciPy-user] calculating factorial In-Reply-To: References: <20051119101334.GJ9916@alpha> <437F3A6D.7040103@gmail.com> Message-ID: <200511191322.40415.dd55@cornell.edu>

On Saturday 19 November 2005 11:26 am, Pearu Peterson wrote:
> With the revision 1516 of scipy core arange seems to have ref.counting
> problem:
> >>> from scipy import *
> >>> prod(arange(2,101,dtype=object))
> Illegal instruction
> >>> from scipy import *
> >>> a=prod(arange(2,101,dtype=object))
> >>> a
> Illegal instruction
> >>> from scipy import *
> >>> a=arange(2,101,dtype=object)
> >>> b=prod(a)
> >>> b
> 9.3326215443944102e+157
> Anyone can reproduce these segfaults?

I can. Same version: 1516.
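The object-dtype idiom in Pearu's session is what makes prod() return an exact factorial once the reference-counting bug is fixed; a minimal sketch, using present-day numpy as a stand-in for the scipy core of the time:

```python
import math
import numpy as np  # stand-in for the 2005-era scipy core discussed above

# An object-dtype arange holds ordinary Python ints, so the reduction
# multiplies arbitrary-precision integers instead of C doubles.
exact = np.prod(np.arange(2, 101, dtype=object))    # 100! as an exact integer
approx = np.prod(np.arange(2, 101, dtype=float))    # ~9.3326e+157, as in the session

print(exact == math.factorial(100))  # True
```

The float value 9.3326215443944102e+157 shown in the session is just the rounded double-precision version of the same product.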
From ravi at ati.com Sat Nov 19 13:38:57 2005 From: ravi at ati.com (Ravikiran Rajagopal) Date: Sat, 19 Nov 2005 13:38:57 -0500 Subject: [SciPy-dev] newcore and gcc4/gfortran (only 3 tests fail now) Message-ID:

Hi,

A little more progress. I can now compile and run tests on gcc4/gfortran without any segfaults. This is on Fedora Core 4, with a gcc 4.0.2 branch SVN checkout from Nov 9 along with a few patches from Redhat. I can make the source/binary RPMs available to anyone - I don't have a website to host them though. I think Arnd's build is messed up because of his installed g77. He needs the following patch before he can actually get a true compiled version:

Index: scipy/distutils/fcompiler/gnu.py
===================================================================
--- scipy/distutils/fcompiler/gnu.py (revision 1513)
+++ scipy/distutils/fcompiler/gnu.py (working copy)
@@ -230,7 +230,26 @@
     module_dir_switch = '-M'
     module_include_switch = '-I'

+    def get_libraries(self):
+        opt = []
+        d = self.get_libgcc_dir()
+        if d is not None:
+            g2c = 'gfortran-pic'
+            f = self.static_lib_format % (g2c, self.static_lib_extension)
+            if not os.path.isfile(os.path.join(d,f)):
+                g2c = 'gfortran'
+        else:
+            g2c = 'gfortran'
+        if sys.platform=='win32':
+            opt.append('gcc')
+        if g2c is not None:
+            opt.append(g2c)
+        if sys.platform == 'darwin':
+            opt.append('cc_dynamic')
+        return opt
+
+
 if __name__ == '__main__':
     from distutils import log
     log.set_verbosity(2)

Note that this patch handles only gfortran. I don't have g95 available to test. Could someone commit this and make sure the proper g95 library is used for that case?

> Which is perfectly understandable!
> Just for the record - I also tried the gcc4/g77 variant, which gets beyond
> the above point, but segfaults at another place:

There are now only 3 errors with core 0.7.1.1513 and scipy 0.4.2_1446:

======================================================================
ERROR: check_dot (scipy.lib.blas.test_blas.test_fblas1_simple)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/usr/lib/python2.4/site-packages/scipy/lib/blas/tests/test_blas.py", line 75, in check_dot
    assert_almost_equal(f([3j,-4,3-4j],[2,3,1]),-9+2j)
SystemError: NULL result without error in PyObject_Call

======================================================================
ERROR: check_dot (scipy.linalg.blas.test_blas.test_fblas1_simple)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/usr/lib/python2.4/site-packages/scipy/linalg/tests/test_blas.py", line 75, in check_dot
    assert_almost_equal(f([3j,-4,3-4j],[2,3,1]),-9+2j)
SystemError: NULL result without error in PyObject_Call

======================================================================
ERROR: line-search Newton conjugate gradient optimization routine
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/usr/lib/python2.4/site-packages/scipy/optimize/tests/test_optimize.py", line 103, in check_ncg
    full_output=False, disp=False, retall=False)
  File "/usr/lib/python2.4/site-packages/scipy/optimize/optimize.py", line 969, in fmin_ncg
    alphak, fc, gc, old_fval = line_search_BFGS(f,xk,pk,gfk,old_fval)
  File "/usr/lib/python2.4/site-packages/scipy/optimize/optimize.py", line 556, in line_search_BFGS
    phi_a2 = apply(f,(xk+alpha2*pk,)+args)
  File "/usr/lib/python2.4/site-packages/scipy/optimize/optimize.py", line 133, in function_wrapper
    return function(x, *args)
  File "/usr/lib/python2.4/site-packages/scipy/optimize/tests/test_optimize.py", line 30, in func
    raise RuntimeError, "too many iterations in optimization routine"
RuntimeError: too many iterations in optimization routine

----------------------------------------------------------------------
Ran 1370 tests in 114.800s

FAILED (errors=3)

I do not think any of these failures are related to gfortran, but I could be horribly wrong. How can I help debug these failures? I vaguely recall looking at check_dot a few months ago while debugging a segfault, but not much else.

Regards, Ravi

-------------- next part -------------- An HTML attachment was scrubbed... URL:

From pearu at scipy.org Sat Nov 19 13:11:15 2005 From: pearu at scipy.org (Pearu Peterson) Date: Sat, 19 Nov 2005 12:11:15 -0600 (CST) Subject: [SciPy-dev] newcore and gcc4/gfortran (only 3 tests fail now) In-Reply-To: References: Message-ID:

On Sat, 19 Nov 2005, Ravikiran Rajagopal wrote:

> Hi,
> A little more progress. I can now compile and run tests on
> gcc4/gfortran without any segfaults. This is on Fedora Core 4, with gcc
> 4.0.2 branch SVN checkout from Nov 9 along with a few patches from
> Redhat. I can make the source/binary RPMs available to anyone - I don't
> have a website to host it though. I think Arnd's build is messed up
> because of his installed g77.
> He needs the following patch before he can
> actually get a true compiled version:
>
> Index: scipy/distutils/fcompiler/gnu.py
> ===================================================================
> --- scipy/distutils/fcompiler/gnu.py (revision 1513)
> +++ scipy/distutils/fcompiler/gnu.py (working copy)
> @@ -230,7 +230,26 @@
>      module_dir_switch = '-M'
>      module_include_switch = '-I'
>
> +    def get_libraries(self):
> +        opt = []
> +        d = self.get_libgcc_dir()
> +        if d is not None:
> +            g2c = 'gfortran-pic'
> +            f = self.static_lib_format % (g2c, self.static_lib_extension)
> +            if not os.path.isfile(os.path.join(d,f)):
> +                g2c = 'gfortran'
> +        else:
> +            g2c = 'gfortran'
> +        if sys.platform=='win32':
> +            opt.append('gcc')
> +        if g2c is not None:
> +            opt.append(g2c)
> +        if sys.platform == 'darwin':
> +            opt.append('cc_dynamic')
> +        return opt
> +
> +

Thanks for the patch. A variant of it is in SVN now.

> Note that this patch handles only gfortran. I don't have g95 available
> to test. Could someone commit this and make sure the proper g95 library
> is used for that case?

For g95 one should modify scipy/distutils/fcompiler/g95.py

Pearu

From jonathan.taylor at utoronto.ca Sat Nov 19 16:20:50 2005 From: jonathan.taylor at utoronto.ca (Jonathan Taylor) Date: Sat, 19 Nov 2005 21:20:50 +0000 Subject: [SciPy-dev] [SciPy-user] efficiently importing ascii data In-Reply-To: <200511191319.44057.dd55@cornell.edu> Message-ID:

If I were you I would pick up K&R's C programming language. It is short and concise. Try to forget everything you know about Python when you learn C.

http://www.amazon.com/gp/product/0131103628/002-0238679-1176829?v=glance&n=283155&n=507846&s=books&v=glance

Once you understand some basic C you will probably want to read Guido's Python extension documentation.

http://docs.python.org/ext/ext.html

It will probably be a lot easier to understand what is going on after that. The sample chapter of Travis's book also gives away a few implementation details.
See http://www.tramy.us/scipybooksample.pdf Cheers. Jon. On 11/19/2005, "Darren Dale" wrote: >Hi Travis, All, > >> > The other thing that needs to be done is that string (and unicode) >> > arrays need to convert to the numeric types, easily. This would let you >> > read in a string array and convert to numeric types quite cleanly. >> > Right now, this doesn't work simply because the wrong generic functions >> > are getting called in the conversion routines. This can and should be >> > changed, however. The required code to change is in arraytypes.inc.src. >> > >> > I could see using PyInt_FromString, PyLong_FromString, >> > PyFloat_FromString, and calling the complex python function for >> > constructing complex numbers from a string. Code would need to be >> > written to fully support Long double and complex long double conversion, >> > but initially they could just punt and use the double conversions. >> > >> > Alternatively, sscanf could be called when available for the type, and >> > the other approaches used when it isn't. >> > >> > Anybody want a nice simple project to get themselves up to speed with >> > the new code base :-) >> >> I'll look into it (after hours, just started a new job). I haven't worked >> much with C or wrapping C, and I need to learn sometime. > >Now that I have studied the source, I'm not sure I am up to this. Between >trying to learn C and wade through the (what appears to me) extremely >abstracted source code, I might as well deciphering Linear A. > >There wouldn't be, by chance, a developers guide to hacking scipy, would >there? 
> >Darren > >_______________________________________________ >Scipy-dev mailing list >Scipy-dev at scipy.net >http://www.scipy.net/mailman/listinfo/scipy-dev > From strawman at astraw.com Sat Nov 19 18:14:09 2005 From: strawman at astraw.com (Andrew Straw) Date: Sat, 19 Nov 2005 15:14:09 -0800 Subject: [SciPy-dev] minor bug in scipy svn Message-ID: <437FB1C1.8070808@astraw.com> =================================================================== --- Lib/utils/ppimport.py (revision 1447) +++ Lib/utils/ppimport.py (working copy) @@ -284,7 +284,7 @@ self.__dict__['_ppimport_module'] = module # XXX: Should we check the existence of module.test? Warn? - from scipy_test.testing import ScipyTest + from scipy.test.testing import ScipyTest module.test = ScipyTest(module).test return module Also, where is the Trac instance for full scipy? From robert.kern at gmail.com Sat Nov 19 19:07:33 2005 From: robert.kern at gmail.com (Robert Kern) Date: Sat, 19 Nov 2005 16:07:33 -0800 Subject: [SciPy-dev] minor bug in scipy svn In-Reply-To: <437FB1C1.8070808@astraw.com> References: <437FB1C1.8070808@astraw.com> Message-ID: <437FBE45.8040908@gmail.com> Andrew Straw wrote: > Also, where is the Trac instance for full scipy? I don't believe it exists, yet. -- Robert Kern robert.kern at gmail.com "In the fields of hell where the grass grows high Are the graves of dreams allowed to die." 
-- Richard Harter From oliphant at ee.byu.edu Sat Nov 19 23:53:51 2005 From: oliphant at ee.byu.edu (Travis Oliphant) Date: Sat, 19 Nov 2005 21:53:51 -0700 Subject: [SciPy-dev] [SciPy-user] efficiently importing ascii data In-Reply-To: <200511191319.44057.dd55@cornell.edu> References: <200511101436.58074.dd55@cornell.edu> <4378198C.7070400@ee.byu.edu> <200511140636.31387.dd55@cornell.edu> <200511191319.44057.dd55@cornell.edu> Message-ID: <4380015F.3000807@ee.byu.edu> Darren Dale wrote: >Hi Travis, All, > > > >>>The other thing that needs to be done is that string (and unicode) >>>arrays need to convert to the numeric types, easily. This would let you >>>read in a string array and convert to numeric types quite cleanly. >>>Right now, this doesn't work simply because the wrong generic functions >>>are getting called in the conversion routines. This can and should be >>>changed, however. The required code to change is in arraytypes.inc.src. >>> >>>I could see using PyInt_FromString, PyLong_FromString, >>>PyFloat_FromString, and calling the complex python function for >>>constructing complex numbers from a string. Code would need to be >>>written to fully support Long double and complex long double conversion, >>>but initially they could just punt and use the double conversions. >>> >>>Alternatively, sscanf could be called when available for the type, and >>>the other approaches used when it isn't. >>> >>>Anybody want a nice simple project to get themselves up to speed with >>>the new code base :-) >>> >>> >>I'll look into it (after hours, just started a new job). I haven't worked >>much with C or wrapping C, and I need to learn sometime. >> >> > >Now that I have studied the source, I'm not sure I am up to this. Between >trying to learn C and wade through the (what appears to me) extremely >abstracted source code, I might as well deciphering Linear A. > > If you don't already know C, then the extra code-generation comments could be more confusing. 
The code generation itself is very simple. It's just a Python program that takes a .src file and constructs actual c-files by repeating code between tags, substituting for names defined in the /**begin repeat**/ header. For example:

/**begin repeat
#name=mine,yours,ours#
#num=1,2,3#
**/
this gets repeated as @name@ and @num@
/**end repeat**/

would produce a file with the contents:

this gets repeated as mine and 1
this gets repeated as yours and 2
this gets repeated as ours and 3

This is a simple way to produce lots of similar code very quickly and maintain it more easily.

-Travis

From oliphant at ee.byu.edu Sun Nov 20 01:58:17 2005 From: oliphant at ee.byu.edu (Travis Oliphant) Date: Sat, 19 Nov 2005 23:58:17 -0700 Subject: [SciPy-dev] [SciPy-user] calculating factorial In-Reply-To: References: <20051119101334.GJ9916@alpha> <437F3A6D.7040103@gmail.com> Message-ID: <43801E89.2090004@ee.byu.edu>

Pearu Peterson wrote:

>With the revision 1516 of scipy core arange seems to have ref.counting
>problem:
>
>>>>from scipy import *
>>>>prod(arange(2,101,dtype=object))
>Illegal instruction
>
>>>>from scipy import *
>>>>a=prod(arange(2,101,dtype=object))
>>>>a
>Illegal instruction
>
>>>>from scipy import *
>>>>a=arange(2,101,dtype=object)
>>>>b=prod(a)
>>>>b
>9.3326215443944102e+157
>
>Anyone can reproduce these segfaults?

Yes, I can. We need continued testing of the object arrays. These can be tricky to get right. Several object-array-based bugs are in Numeric, in fact, which led to some not using them.

-Travis

From oliphant at ee.byu.edu Sun Nov 20 04:03:05 2005 From: oliphant at ee.byu.edu (Travis Oliphant) Date: Sun, 20 Nov 2005 02:03:05 -0700 Subject: [SciPy-dev] Memory mapped files in scipy core Message-ID: <43803BC9.6050103@ee.byu.edu>

I would appreciate understanding typical use cases for memory-mapped files. I am not sure I understand why certain choices were made for numarray's memmap and memmap slice classes.
They seem like a lot of "extra" stuff and I'm not sure what all that stuff is for. Rather than just copy these over, I would like to understand what people typically want to do with memory-mapped files to see if scipy core doesn't already provide that.

For example, right now I can open a file, use mmap to obtain a memory map object and then pass that object into frombuffer in scipy_core to get an ndarray whose memory maps a file on disk. Now, this ndarray can be sliced and indexed and manipulated all the while referring to the file on disk (well, technically, I suppose, the memory-mapped object would need to be flushed to synchronize).

Now, I could see wanting to make the process of opening the file, getting the mmap object and setting its buffer to the array object a little easier. Thus, a simple memmap class would be a useful construct -- I could even see it inheriting from the ndarray directly and adding a few methods. I guess I just don't see why one would care about a memory-mapped slice object, when the mmaparray sub-class would be perfectly useful.

On a related, but orthogonal note: My understanding is that using memory-mapped files for *very* large files will require modification to the mmap module in Python --- something I think we should push. One part of that process would be to add the C-struct array interface to the mmap module and the buffer object -- perhaps this is how we get the array interface into Python quickly. Then, if we could make a base-type mmap that did not use the buffer interface or the sequence interface (similar to the bigndarray in scipy_core) and therefore by-passed the problems with Python in those areas, then the current mmap object could inherit from the base class and provide current functionality while still exposing the array interface for access to >2GB files on 64-bit systems.

Who would like to take up the ball for modifying mmap in Python in this fashion?
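The file -> mmap -> frombuffer route described above can be sketched in a few lines; numpy serves here as a stand-in for the scipy core of the time, and the scratch file is created just for the example:

```python
import mmap
import tempfile
import numpy as np  # stand-in for the scipy core ndarray discussed above

# Write ten float64 values to a scratch file.
tmp = tempfile.NamedTemporaryFile(suffix=".dat", delete=False)
np.arange(10, dtype=np.float64).tofile(tmp.name)
tmp.close()

# Map the file and hand the mapping to frombuffer: the resulting
# ndarray refers to the file on disk, and no copy is made.
f = open(tmp.name, "r+b")
mm = mmap.mmap(f.fileno(), 0)
a = np.frombuffer(mm, dtype=np.float64)

print(a[3], a.sum())  # 3.0 45.0
```

Slices of `a` are further views onto the same mapping; a wrapper class would mainly save the open/mmap/frombuffer boilerplate, which is the "little easier" step mentioned above.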
-Travis

From oliphant at ee.byu.edu Sun Nov 20 03:14:51 2005 From: oliphant at ee.byu.edu (Travis Oliphant) Date: Sun, 20 Nov 2005 01:14:51 -0700 Subject: [SciPy-dev] [SciPy-user] calculating factorial In-Reply-To: References: <20051119101334.GJ9916@alpha> <437F3A6D.7040103@gmail.com> Message-ID: <4380307B.3020906@ee.byu.edu>

Pearu Peterson wrote:

>With the revision 1516 of scipy core arange seems to have ref.counting
>problem:

Actually, this was a problem in every version. The reduction functions were never properly handling reference counts for object arrays. This should be fixed now in svn.

-Travis

From strawman at astraw.com Sun Nov 20 04:55:40 2005 From: strawman at astraw.com (Andrew Straw) Date: Sun, 20 Nov 2005 01:55:40 -0800 Subject: [SciPy-dev] dtype name coverage Message-ID: <4380481C.1050108@astraw.com>

Hi,

I was going through some old numstar code, trying to get it to work with scipy core, and I was initially frustrated by the lack of "UInt8" in scipy, because it's the only name for this dtype shared between numarray and Numeric. To investigate the situation further, I wrote a little script (attached), which I think should be used to add a few more names to the scipy namespace in the name of backwards compatibility with numstar. Here's the output on my machine:

               scipy  Numeric  numarray
UInt8            -       +        +
UnsignedInt8     +       +        -
uint8            +       -        +
UInt16           -       +        +
UnsignedInt16    +       +        -
uint16           +       -        +
UInt32           -       +        +
UnsignedInt32    -       +        -
uint32           +       -        +
UInt64           -       -        +
UnsignedInt64    -       -        -
uint64           +       -        +
uint             +       -        -
uint_            -       -        -
Int8             +       +        +
int8             +       -        +
Int16            +       +        +
int16            +       -        +
Int32            +       +        +
int32            +       -        +
Int64            -       +        +
int64            +       -        +
int_             +       -        -
Float32          +       +        +
float32          +       -        +
Float64          +       +        +
float64          +       -        +
Float128         +       -        -
float128         +       -        -
Float            +       +        +
float            -       -        -
float_           +       -        -
complex64        +       -        +
Complex64        +       +        +
complex128       +       -        +
Complex128       +       -        -
complex256       +       -        -
Complex256       -       -        -
Complex_         -       -        -
complex          -       -        -
complex_         +       -        -

-------------- next part -------------- A non-text attachment was scrubbed...
Name: dtypes.py Type: text/x-python Size: 1252 bytes Desc: not available URL: From dd55 at cornell.edu Sun Nov 20 09:49:06 2005 From: dd55 at cornell.edu (Darren Dale) Date: Sun, 20 Nov 2005 09:49:06 -0500 Subject: [SciPy-dev] newcore atlas info on Gentoo In-Reply-To: <200511111608.49972.dd55@cornell.edu> References: <200510181341.12351.dd55@cornell.edu> <4374FC0E.1090700@csun.edu> <200511111608.49972.dd55@cornell.edu> Message-ID: <200511200949.06502.dd55@cornell.edu> On Friday 11 November 2005 4:08 pm, Darren Dale wrote: > On Friday 11 November 2005 03:16 pm, Stephen Walton wrote: > > Darren Dale wrote: > > >Well, in my case, editing site.cfg alone does not work, because my > > > fortran blas libraries are named "blas" instead of "f77blas". > > > system_info.py has "f77blas" hardcoded in several places... > > > > Only in the parts related to ATLAS, because the ATLAS-generated BLAS > > libraries are named this. > > The gentoo-science group has set up the math library ebuilds to create > symlinks in /usr/lib, which can be changed to point to different external > libraries (the blas libraries created by the blas-atlas ebuild are located > at /usr/lib/blas/atlas/libblas.*). The thinking is that one could easily > switch from using ATLAS to ACML, for example. 
> site.cfg offers an opportunity to override the names of the ATLAS
> libraries, but maybe this is misleading:
>
> # For overriding the names of the atlas libraries:
> atlas_libs = lapack, blas, cblas, atlas

Just to follow up on this, here is my site.cfg:

____________________________________
[DEFAULT]
library_dirs = /usr/lib
include_dirs = /usr/include:/usr/include/atlas

[atlas]
atlas_libs = lapack, blas, cblas, atlas

[fftw]
fftw_libs = dfftw, drfftw, sfftw, srfftw
fftw_opt_libs = dfftw_threads, drfftw_threads, sfftw_threads, srfftw_threads
____________________________________

I get the following messages when I build scipy (not scipy_core) on gentoo:

Atlas (http://math-atlas.sourceforge.net/) libraries not found. Directories to search for the libraries can be specified in the scipy_distutils/site.cfg file (section [atlas]) or by setting the ATLAS environment variable.
warnings.warn(AtlasNotFoundError.__doc__)

And when I test scipy, I get warnings like

****************************************************************
WARNING: clapack module is empty
-----------
See scipy/INSTALL.txt for troubleshooting.
Notes:
* If atlas library is not found by scipy/system_info.py, then scipy uses flapack instead of clapack.
****************************************************************

Now, if I edit system_info.py and replace every 'f77blas' with 'blas', this problem goes away.

Darren

From ravi at ati.com Sun Nov 20 11:51:52 2005 From: ravi at ati.com (Ravikiran Rajagopal) Date: Sun, 20 Nov 2005 11:51:52 -0500 Subject: [SciPy-dev] newcore and gcc4/gfortran (almost there) Message-ID:

Hi all,

> Thanks for the patch. It (variant of) is in SVN now.

Now, one more test passes - check_dot in scipy.linalg.test_blas. This took some figuring out; I had my ATLAS compiled with -ff2c but not SciPy. This caused a mismatch between the fortran code in linalg/src/fblaswrap.f and the ATLAS library. After the addition of -ff2c to the gfortran flag list, it succeeds.
However, the code in lib/fblas/fblaswrap.f.src is a lot hairier and I got lost trying to figure that one out. Why are two wrappers (identical in intent as far as I can tell) built for ATLAS libraries (one in lib and one in linalg)? Could they not be unified? If so, I vote for the one in linalg since it works for me (and presumably for everyone else too). If the two can be merged, how should I go about doing it?

In that case, the only failure is the non-convergence of the line search Newton conjugate gradient algorithm. I will take a look at it next weekend; I have spent far too much time on this already this weekend.

Regards, Ravi

-------------- next part -------------- A non-text attachment was scrubbed... Name: winmail.dat Type: application/ms-tnef Size: 3093 bytes Desc: not available URL:

From prabhu_r at users.sf.net Sun Nov 20 13:23:27 2005 From: prabhu_r at users.sf.net (Prabhu Ramachandran) Date: Sun, 20 Nov 2005 23:53:27 +0530 Subject: [SciPy-dev] scipy.distutils compatibility patch Message-ID: <17280.48927.912594.997543@vulcan.linux.in>

Hi,

While trying to get TVTK working with scipy I ran into some problems with scipy.distutils and compatibility with older code. Attached is a patch (I don't have checkin access) that adds `default_config_dict` to scipy.distutils.misc_util. Adding this lets my simple setup.py scripts work with either scipy_distutils or scipy.distutils.

I also found that `python setup.py build_ext --inplace` does not work correctly: it tries to build the extension module in the installed distutils directory (e.g. /usr/local/lib/python2.3/site-packages/scipy/distutils).

cheers, prabhu

-------------- next part -------------- An embedded and charset-unspecified text was scrubbed...
Name: patch URL:

From prabhu_r at users.sf.net Sun Nov 20 13:35:25 2005 From: prabhu_r at users.sf.net (Prabhu Ramachandran) Date: Mon, 21 Nov 2005 00:05:25 +0530 Subject: [SciPy-dev] Problem with ravel for non-contiguous 1-d arrays Message-ID: <17280.49645.331554.192657@vulcan.linux.in>

Hi,

Here is a simple example that works with numeric/numarray but fails with scipy.

>>> import scipy
>>> data = scipy.array([[0,0,0,10], [1,0,0,20], [0,1,0,20], [0,0,1,30]], 'f')
>>> p = data[:,:3]
>>> z = scipy.ravel(p)
>>> z.flags.contiguous
True
# So far so good...
>>> t = data[:,-1]
>>> t.flags.contiguous
False
>>> z = scipy.ravel(t)
>>> z.flags.contiguous
False

As you can see, the 1D array is still non-contiguous when ravelled. scipy.flatten does the job correctly, but it always makes a copy (which is something I don't want). Numeric/numarray work correctly.

cheers, prabhu

From ryanlists at gmail.com Sun Nov 20 13:51:16 2005 From: ryanlists at gmail.com (Ryan Krauss) Date: Sun, 20 Nov 2005 13:51:16 -0500 Subject: [SciPy-dev] Binaries for scipy created In-Reply-To: <437809C1.5020909@ee.byu.edu> References: <437809C1.5020909@ee.byu.edu> Message-ID:

Travis,

Thanks for making the binaries available for windows. I really appreciate it. I am having a problem, however: running scipy.test() causes python to crash. This is the output I see before the crash:

C:\Documents and Settings\Ryan Krauss>python
Python 2.4.2 (#67, Sep 28 2005, 12:41:11) [MSC v.1310 32 bit (Intel)] on win32
Type "help", "copyright", "credits" or "license" for more information.
>>> import scipy Importing io to scipy Importing fftpack to scipy Importing special to scipy Importing utils to scipy Importing cluster to scipy Importing sparse to scipy Importing interpolate to scipy Importing lib to scipy Importing integrate to scipy Importing signal to scipy Importing optimize to scipy Importing linalg to scipy Importing stats to scipy >>> scipy.test() Found 14 tests for scipy.linalg.blas Found 339 tests for scipy.special.basic Found 92 tests for scipy.stats Found 70 tests for scipy.stats.distributions Found 12 tests for scipy.io.mmio Found 4 tests for scipy.linalg.lapack Found 2 tests for scipy.base.umath Found 24 tests for scipy.base.function_base Found 128 tests for scipy.linalg.fblas Found 17 tests for scipy.base.ma Found 7 tests for scipy.linalg.matfuncs Found 6 tests for scipy.optimize Found 92 tests for scipy.stats.stats Found 3 tests for scipy.base.getlimits Found 9 tests for scipy.base.twodim_base Found 36 tests for scipy.linalg.decomp Found 1 tests for scipy.optimize.zeros Found 49 tests for scipy.sparse.sparse Found 42 tests for scipy.lib.lapack Found 4 tests for scipy.io.array_import Found 18 tests for scipy.fftpack.basic Found 20 tests for scipy.fftpack.pseudo_diffs Found 6 tests for scipy.optimize.optimize Found 49 tests for scipy.sparse Found 4 tests for scipy.fftpack.helper Found 3 tests for scipy.base.matrix Found 3 tests for scipy.signal.signaltools Found 44 tests for scipy.base.shape_base Found 1 tests for scipy.optimize.cobyla Found 41 tests for scipy.linalg.basic Found 42 tests for scipy.base.type_check Found 10 tests for scipy.stats.morestats Found 3 tests for scipy.basic.helper Found 5 tests for scipy.interpolate.fitpack Found 4 tests for scipy.base.index_tricks Found 0 tests for __main__ ................................................................................ ................................................................................ 
................................................................................ .Gegenbauer, a = -0.145613464768 ................................................................................ .....F.......................................................................... ................................................................................ ...............................................................................c axpy:n=4 ..caxpy:n=3 ....ccopy:n=4 ..ccopy:n=3 .............cscal:n=4 ....cswap:n=4 ..cswap:n=3 .....daxpy:n=4 ..daxpy:n=3 ....dcopy:n=4 ..dcopy:n=3 .............dscal:n=4 ....dswap:n=4 ..dswap:n=3 .....saxpy:n=4 ..saxpy:n=3 ....scopy:n=4 ..scopy:n=3 .............sscal:n=4 ....sswap:n=4 ..sswap:n=3 .....zaxpy:n=4 ..zaxpy:n=3 ....zcopy:n=4 ..zcopy:n=3 .............zscal:n=4 ....zswap:n=4 ..zswap:n=3 ....................Result may be inaccurate, approximate err = 1.46322737003e-0 08 ...Result may be inaccurate, approximate err = 1.55302685732e-011 ................................................................................ .....................................................................TESTING CON VERGENCE zero should be 1 function f2 cc.bisect : 1.0000000000001952 cc.ridder : 1.0000000000004661 cc.brenth : 0.9999999999999997 cc.brentq : 0.9999999999999577 function f3 cc.bisect : 1.0000000000001952 cc.ridder : 1.0000000000000000 cc.brenth : 1.0000000000000009 cc.brentq : 1.0000000000000011 function f4 cc.bisect : 1.0000000000001952 cc.ridder : 1.0000000000001454 cc.brenth : 0.9999999999993339 cc.brentq : 0.9999999999993339 function f5 cc.bisect : 1.0000000000001952 cc.ridder : 1.0000000000004574 cc.brenth : 0.9999999999991444 cc.brentq : 0.9999999999991444 function f6 cc.bisect : 1.0000000000001952 cc.ridder : 1.0000000000004343 cc.brenth : 1.0000000000012241 cc.brentq : 1.0000000000001927 ................................................................................ ............ Anything I can do to get you more information? 
Also, does scipy or scipy_core include ATLAS and BLAS and all that? I have whatever came with the scipy for python 2.3 binary, but have remove all references to Python23 from my path. This is a fresh install of Python 2.4.2 with only scipy and scipy_core installed: scipy-0.4.3.win32-py2.4.exe scipy_core-0.6.1.win32-py2.4.exe Thanks, Ryan On 11/13/05, Travis Oliphant wrote: > I've used the scipy sourceforge site to place binaries for a "release" > of full scipy (built on the new core). The version is 0.4.3. There is > an rpm and windows binaries, as well as a full tar ball. > > I know there are people out there who would like to try scipy but don't > want to wrestle with the install. The rpms and/or windows binaries > might help. This is the first time, I've made binaries for other people > to use. Hopefully they work fine, but errors may be reported. > > -Travis > > _______________________________________________ > Scipy-dev mailing list > Scipy-dev at scipy.net > http://www.scipy.net/mailman/listinfo/scipy-dev > From oliphant at ee.byu.edu Sun Nov 20 15:49:25 2005 From: oliphant at ee.byu.edu (Travis Oliphant) Date: Sun, 20 Nov 2005 13:49:25 -0700 Subject: [SciPy-dev] Binaries for scipy created In-Reply-To: References: <437809C1.5020909@ee.byu.edu> Message-ID: <4380E155.4010903@ee.byu.edu> Ryan Krauss wrote: >Travis, > >Thanks for making the binaries available for windows. I really appreciate it. > > This is probably the same problem with linking against the right libraries I spoke of before. The binaries were made before that worked. As long as you don't use the io library you should be O.K. (well --- besides the other bugs that have been found and fixed since I made the binaries...) You could verify the problem by removing the io test (just remove the test directory: site-packages/scipy/Lib/io/tests) and try again. 
-Travis From dd55 at cornell.edu Sun Nov 20 19:22:39 2005 From: dd55 at cornell.edu (Darren Dale) Date: Sun, 20 Nov 2005 19:22:39 -0500 Subject: [SciPy-dev] [matplotlib-devel] Preliminary patch for the (new) scipy In-Reply-To: <200511201857.27590.dd55@cornell.edu> References: <18edad1d51ce1377a1b6d28f091f883b@egcrc.net> <4380FCED.3060902@gmail.com> <200511201857.27590.dd55@cornell.edu> Message-ID: <200511201922.40076.dd55@cornell.edu> On Sunday 20 November 2005 6:57 pm, Darren Dale wrote: > On Sunday 20 November 2005 5:47 pm, Robert Kern wrote: > > Darren Dale wrote: > > > Will scipy_core ever include something like Numeric's lapack_lite? > > > > scipy_core does not depend on ATLAS. It already has lapack_lite. > > > > [svk-projects]$ ls scipy_core/scipy/corelib/lapack_lite > > blas_lite.c dlapack_lite.c f2c_lite.c > > zlapack_lite.c dlamch.c f2c.h > > lapack_litemodule.c Sorry, I've obviously overlooked something, but maybe I have also discovered a bug. I've just removed atlas/blas/lapack, updated scipy_core from svn, removed my build and site-packages/scipy directories, and tried to build scipy_core. 
Here is a warning message I get toward the end of the build and install processes: building 'scipy.lib._dotblas' extension compiling C sources i686-pc-linux-gnu-gcc options: '-pthread -fno-strict-aliasing -DNDEBUG -fPIC' creating build/temp.linux-i686-2.4/scipy/corelib creating build/temp.linux-i686-2.4/scipy/corelib/blasdot compile options: '-DNO_ATLAS_INFO=1 -Iscipy/corelib/blasdot -Iscipy/base/include -Ibuild/src/scipy/base -Iscipy/base/src -I/usr/include/python2.4 -c' i686-pc-linux-gnu-gcc: scipy/corelib/blasdot/_dotblas.c /usr/bin/g77 -shared build/temp.linux-i686-2.4/scipy/corelib/blasdot/_dotblas.o -L/usr/lib -lblas -lg2c -o build/lib.linux-i686-2.4/scipy/lib/_dotblas.so /usr/lib/gcc/i686-pc-linux-gnu/3.4.4/../../../../i686-pc-linux-gnu/bin/ld: cannot find -lblas collect2: ld returned 1 exit status /usr/lib/gcc/i686-pc-linux-gnu/3.4.4/../../../../i686-pc-linux-gnu/bin/ld: cannot find -lblas collect2: ld returned 1 exit status error: Command "/usr/bin/g77 -shared build/temp.linux-i686-2.4/scipy/corelib/blasdot/_dotblas.o -L/usr/lib -lblas -lg2c -o build/lib.linux-i686-2.4/scipy/lib/_dotblas.so" failed with exit status 1 and here I try to import scipy: In [1]: from scipy import * --------------------------------------------------------------------------- exceptions.ImportError Traceback (most recent call last) /home/darren/ ImportError: No module named scipy Darren From opoku at ece.cmu.edu Sun Nov 20 19:28:23 2005 From: opoku at ece.cmu.edu (Osei Poku) Date: Sun, 20 Nov 2005 19:28:23 -0500 Subject: [SciPy-dev] patch for rv_discrete Message-ID: <20051121002823.GD28083@ece.cmu.edu> Hi, I have attached a patch to fix this problem. http://www.scipy.net/roundup/scipy/issue242 I simply add the missing function. I think this should be fine. 
This patch is for scipy/Lib/stats/distributions.py Osei -------------- next part -------------- Index: distributions.py =================================================================== --- distributions.py (revision 1448) +++ distributions.py (working copy) @@ -2941,6 +2941,12 @@ except KeyError: return 0.0 +def _drv_pmf(self, xk, *args): + try: + return self.P[xk] + except KeyError: + return 0.0 + def _drv_cdf(self, xk, *args): indx = argmax((self.xk>xk))-1 return self.F[self.xk[indx]] From prabhu_r at users.sf.net Sun Nov 20 23:24:35 2005 From: prabhu_r at users.sf.net (Prabhu Ramachandran) Date: Mon, 21 Nov 2005 09:54:35 +0530 Subject: [SciPy-dev] Wierd error with scipy and VTK Message-ID: <17281.19459.796758.632299@vulcan.linux.in> Hi, If I merely import scipy (newcore built last night from SVN) and run any trivial VTK script, I get a "Floating point exception" and a crash (this is under Debian Sarge on an i386). I ran the script under gdb and everything works! I've attached the script, which is basically an example lifted straight from the VTK examples and just added an "import scipy" at the beginning and get the crash. I haven't had the chance to look at this at any depth but maybe someone else has seen this before. cheers, prabhu -------------- next part -------------- An embedded and charset-unspecified text was scrubbed... Name: Cone.py URL: From pearu at scipy.org Mon Nov 21 00:38:12 2005 From: pearu at scipy.org (Pearu Peterson) Date: Sun, 20 Nov 2005 23:38:12 -0600 (CST) Subject: [SciPy-dev] newcore and gcc4/gfortran (almost there) In-Reply-To: References: Message-ID: On Sun, 20 Nov 2005, Ravikiran Rajagopal wrote: > Now, one more test passes - check_dot in scipy.linalg.test_blas. This > took some figuring out; I had my ATLAS compiled with -ff2c but not > SciPy. This caused a mismatch between the fortran code in > linalg/src/fblaswrap.f and the ATLAS library. After the addition of > -ff2c to the gfortran flag list, it succeeds. 
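The behaviour Osei's `_drv_pmf` patch adds can be exercised standalone. `TinyDiscrete` below is a hypothetical stand-in for `rv_discrete`'s internals, not scipy API; it assumes, as the patch does, that `self.P` is a dict mapping support points to probabilities:

```python
# Standalone sketch of what the _drv_pmf patch does: a discrete
# distribution keeps a dict P mapping support points to probabilities,
# and the pmf returns P[xk] for known points and 0.0 everywhere else.
# TinyDiscrete is a hypothetical stand-in, not part of scipy.
class TinyDiscrete:
    def __init__(self, xk, pk):
        self.P = dict(zip(xk, pk))   # support point -> probability

    def pmf(self, xk):
        try:
            return self.P[xk]
        except KeyError:
            return 0.0               # outside the support

d = TinyDiscrete([1, 2, 3], [0.2, 0.3, 0.5])
print(d.pmf(2))  # 0.3
print(d.pmf(7))  # 0.0
```

The try/except mirrors the existing `_drv_cdf` style in distributions.py, so points outside the support silently get probability zero rather than raising.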
Scipy assumes that ATLAS as well as LAPACK libraries are compiled without -ff2c. > However, the code in lib/fblas/fblaswrap.f.src is a lot hairier and I > got lost trying to figure that one out. See `pydoc scipy.distutils.from_template`, which describes the syntax of Fortran .src files. > Why are two wrappers (identical in intent as far as I can tell) built > for ATLAS libraries (one in lib and one in linalg)? Could they not be > unified? If so, I vote for the one in linalg since it works for me (and > presumably for everyone else too). If the two can be merged, how should > I go about doing it? scipy.lib.lapack/blas will replace lapack/blas wrappers in scipy.linalg. There are only a few wrappers missing in scipy.lib.lapack to complete the move of lapack wrappers. When they are implemented, lapack/blas wrappers will be removed from scipy.linalg. The fact that linalg works better for you than lib.lapack may be due to more complete tests in lib.lapack. Wrappers in lib.lapack are in much better condition than the ones in linalg in terms of unittests and documentation. Pearu From strawman at astraw.com Mon Nov 21 02:17:29 2005 From: strawman at astraw.com (Andrew Straw) Date: Sun, 20 Nov 2005 23:17:29 -0800 Subject: [SciPy-dev] Wierd error with scipy and VTK In-Reply-To: <17281.19459.796758.632299@vulcan.linux.in> References: <17281.19459.796758.632299@vulcan.linux.in> Message-ID: <43817489.1090203@astraw.com> GNU libc version 2.3.2 has a bug [1] "feclearexcept() error on CPUs with SSE" (fixed in 2.3.3) which has been submitted to Debian [2] but not yet fixed in sarge (and possibly other Debian releases). I have a patch[3] which fixes it.
[1] http://sources.redhat.com/bugzilla/show_bug.cgi?id=10 [2] http://bugs.debian.org/cgi-bin/bugreport.cgi?bug=279294 [3] http://www.its.caltech.edu/~astraw/coding.html#id3 Prabhu Ramachandran wrote: >Hi, > >If I merely import scipy (newcore built last night from SVN) and run >any trivial VTK script, I get a "Floating point exception" and a crash >(this is under Debian Sarge on an i386). I ran the script under gdb >and everything works! I've attached the script, which is basically an >example lifted straight from the VTK examples and just added an >"import scipy" at the beginning and get the crash. > From arnd.baecker at web.de Mon Nov 21 03:16:54 2005 From: arnd.baecker at web.de (Arnd Baecker) Date: Mon, 21 Nov 2005 09:16:54 +0100 (CET) Subject: [SciPy-dev] newcore and gcc4/gfortran (only 3 tests fail now) In-Reply-To: References: Message-ID: Hi, things look very good on my machine: - gcc -v gcc version 4.0.2 20050901 (prerelease) (SUSE Linux) - g77 -v gcc version 3.3.5 20050117 (prerelease) (SUSE Linux) Building with python setup.py install --prefix=$DESTDIR and ATLAS variable set to a place with ATLAS 3.7.11, all tests of scipy core and scipy pass. ((Doing the build with python setup.py config_fc --fcompiler=gnu95 build fails - as expected )) Best, Arnd From pearu at scipy.org Mon Nov 21 03:30:58 2005 From: pearu at scipy.org (Pearu Peterson) Date: Mon, 21 Nov 2005 02:30:58 -0600 (CST) Subject: [SciPy-dev] newcore and gcc4/gfortran (only 3 tests fail now) In-Reply-To: References: Message-ID: On Mon, 21 Nov 2005, Arnd Baecker wrote: > ((Doing the build with > python setup.py config_fc --fcompiler=gnu95 build > fails - as expected )) Hmm, why would that be expected? What errors do you get? Here python setup.py config_fc --fcompiler=gnu95 build works fine (I just get a segfault in one of cephes tests when building with gfortran). 
I have gcc (GCC) 4.0.3 20051023 (prerelease) (Debian 4.0.2-3) GNU Fortran 95 (GCC 4.0.3 20051023 (prerelease) (Debian 4.0.2-3)) Pearu From pearu at scipy.org Mon Nov 21 04:38:28 2005 From: pearu at scipy.org (Pearu Peterson) Date: Mon, 21 Nov 2005 03:38:28 -0600 (CST) Subject: [SciPy-dev] scipy.distutils compatibility patch In-Reply-To: <17280.48927.912594.997543@vulcan.linux.in> References: <17280.48927.912594.997543@vulcan.linux.in> Message-ID: On Sun, 20 Nov 2005, Prabhu Ramachandran wrote: > Hi, > > While trying to get TVTK working with scipy I ran into some problems > with scipy.distutils and compatibility with older code. Attached is a > patch (I don't have checkin access) that adds `default_config_dict` to > scipy.distutils.misc_util. Adding this lets my simple setup.py > scripts work with either scipy_distutils or scipy.distutils. Thanks for the patch. It is in SVN now with a small deprecation warning. > I also found that `python setup.py build_ext --inplace` does not work > correctly and it tries to build the extension module in the installed > distutils directory (for > e.g. /usr/local/lib/python2.3/site-packages/scipy/distutils). Yes, I am aware of possible --inplace problems. When building scipy, one should not build with --inplace even if it would work correctly --- if something goes wrong then it is very difficult to remove all the generated files inside the scipy source tree. So, when it comes to building scipy with the --inplace switch, I cannot give full support for solving build problems. `rm -rf build` is a *must* before rebuilding scipy and starting to look into possible bugs. However, if you could provide a simple example where --inplace does not work as expected, I'll look into it and fix any related bugs in scipy.distutils.
Pearu From arnd.baecker at web.de Mon Nov 21 05:49:37 2005 From: arnd.baecker at web.de (Arnd Baecker) Date: Mon, 21 Nov 2005 11:49:37 +0100 (CET) Subject: [SciPy-dev] newcore and gcc4/gfortran (only 3 tests fail now) In-Reply-To: References: Message-ID: On Mon, 21 Nov 2005, Pearu Peterson wrote: > On Mon, 21 Nov 2005, Arnd Baecker wrote: > > > ((Doing the build with > > python setup.py config_fc --fcompiler=gnu95 build > > fails - as expected )) > > Hmm, why would that be expected? Well, somehow I remember that gfortran was not considered to be ready for prime time - can't confirm this as scipy.org seems to be down at the moment. In addition, I might be running a version which is not young enough ... > What errors do you get? Here > > python setup.py config_fc --fcompiler=gnu95 build > > works fine (I just get a segfault in one of cephes tests when building > with gfortran). OK, I should have been more precise: the build works fine. But already on import I get: Python 2.4.1 (#1, Sep 12 2005, 23:33:18) [GCC 4.0.2 20050901 (prerelease) (SUSE Linux)] on linux2 Type "help", "copyright", "credits" or "license" for more information. 
>>> import scipy Importing test to scipy Importing base to scipy Importing basic to scipy Failed to import basic /home/abaecker/BUILDS2/Build_87_gcc4/inst_scipy_newcore/lib64/python2.4/site-packages/scipy/lib/blas/fblas.so: undefined symbol: e_wsfe Importing io to scipy Importing special to scipy Importing fftpack to scipy Failed to import fftpack /home/abaecker/BUILDS2/Build_87_gcc4/inst_scipy_newcore/lib64/python2.4/site-packages/scipy/lib/blas/fblas.so: undefined symbol: e_wsfe Importing utils to scipy Failed to import utils cannot import name PostponedException Importing cluster to scipy Failed to import cluster cannot import name PostponedException Importing sparse to scipy Importing interpolate to scipy Importing lib to scipy Failed to import lib /home/abaecker/BUILDS2/Build_87_gcc4/inst_scipy_newcore/lib64/python2.4/site-packages/scipy/lib/blas/fblas.so: undefined symbol: e_wsfe Importing signal to scipy Failed to import signal cannot import name random Importing integrate to scipy Importing optimize to scipy Failed to import optimize cannot import name random Importing linalg to scipy Failed to import linalg cannot import name PostponedException Importing stats to scipy Failed to import stats cannot import name PostponedException Importing basic to scipy Failed to import basic /home/abaecker/BUILDS2/Build_87_gcc4/inst_scipy_newcore/lib64/python2.4/site-packages/scipy/lib/blas/fblas.so: undefined symbol: e_wsfe Running scipy.test(10,10) gives check_bdtrik (scipy.special.basic.test_basic.test_cephes) Segmentation fault (the same one you get?)
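The "undefined symbol: e_wsfe" failures above are raised by the dynamic loader, so they can be reproduced outside scipy's import machinery by dlopen-ing the extension directly with ctypes. A small sketch of that diagnostic (the fblas.so path in the comment is site-specific, shown only as an illustration):

```python
import ctypes

def load_error(path):
    """Try to dlopen a shared object; return the loader's error message
    (e.g. '... undefined symbol: e_wsfe') or None if it loads cleanly."""
    try:
        ctypes.CDLL(path)
        return None
    except OSError as exc:
        return str(exc)

# e.g. load_error("<prefix>/site-packages/scipy/lib/blas/fblas.so")
# would surface the same "undefined symbol" message the import shows.
print(load_error("/no/such/library.so") is None)  # False
```

A non-None result on a freshly built fblas.so would point at a link-time problem (here, a missing -lg2c) rather than anything in scipy's Python code.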
> I have > > gcc (GCC) 4.0.3 20051023 (prerelease) (Debian 4.0.2-3) > GNU Fortran 95 (GCC 4.0.3 20051023 (prerelease) (Debian 4.0.2-3)) > This is definitely newer than what I get for gcc -v/ gfortran -v (both give the same): gcc version 4.0.2 20050901 (prerelease) (SUSE Linux) Best, Arnd From pearu at scipy.org Mon Nov 21 04:57:11 2005 From: pearu at scipy.org (Pearu Peterson) Date: Mon, 21 Nov 2005 03:57:11 -0600 (CST) Subject: [SciPy-dev] newcore and gcc4/gfortran (only 3 tests fail now) In-Reply-To: References: Message-ID: On Mon, 21 Nov 2005, Arnd Baecker wrote: > On Mon, 21 Nov 2005, Pearu Peterson wrote: >> On Mon, 21 Nov 2005, Arnd Baecker wrote: >>> ((Doing the build with >>> python setup.py config_fc --fcompiler=gnu95 build >>> fails - as expected )) >> Hmm, why would that be expected? > Well, somehow I remember that gfortran was not considered > to be ready for prime time - can't confirm this as scipy.org > seems to be down at the moment. In addition, I might > be running a version which is not young enough ... I hear that gfortran is causing problems on OS X systems; here, on a Debian Linux box, it behaves better. >> What errors do you get? Here >> >> python setup.py config_fc --fcompiler=gnu95 build >> >> works fine (I just get a segfault in one of cephes tests when building >> with gfortran). > OK, I should have been more precise: the build works fine. > But already on import I get: > Python 2.4.1 (#1, Sep 12 2005, 23:33:18) > [GCC 4.0.2 20050901 (prerelease) (SUSE Linux)] on linux2 > Type "help", "copyright", "credits" or "license" for more information. >>>> import scipy > Importing test to scipy > Importing base to scipy > Importing basic to scipy > Failed to import basic > /home/abaecker/BUILDS2/Build_87_gcc4/inst_scipy_newcore/lib64/python2.4/site-packages/scipy/lib/blas/fblas.so: > undefined symbol: e_wsfe I suspect that your blas/lapack libraries are built with g77 3.x. Try adding -lg2c to the linkage command or upgrade gfortran.
> > Running scipy.test(10,10) gives > > check_bdtrik (scipy.special.basic.test_basic.test_cephes) > Segmentation fault > (the same one you get?) Yep. I get it also on a 32-bit system. Pearu From arnd.baecker at web.de Mon Nov 21 06:16:29 2005 From: arnd.baecker at web.de (Arnd Baecker) Date: Mon, 21 Nov 2005 12:16:29 +0100 (CET) Subject: [SciPy-dev] newcore and gcc4/gfortran (only 3 tests fail now) In-Reply-To: References: Message-ID: On Mon, 21 Nov 2005, Pearu Peterson wrote: [...] > I suspect that your blas/lapack libraries are built with g77 3.x. Yep! > Try adding -lg2c to linkage command Just to be sure: when building blas/lapack or when building scipy? For the latter: What is the best way to do this? Can one add a flag to the build command python setup.py config_fc --fcompiler=gnu95 build ? > or upgrade gfortran. I will leave this aside for the moment as I am not root on that machine which means to build gcc/gfortran in a separate place (not a big deal, but I have to finish some other stuff before ...) Best, Arnd From pearu at scipy.org Mon Nov 21 05:27:04 2005 From: pearu at scipy.org (Pearu Peterson) Date: Mon, 21 Nov 2005 04:27:04 -0600 (CST) Subject: [SciPy-dev] newcore and gcc4/gfortran (only 3 tests fail now) In-Reply-To: References: Message-ID: On Mon, 21 Nov 2005, Arnd Baecker wrote: >> Try adding -lg2c to linkage command > > Just to be sure: when building blas/lapack > or when building scipy? When building scipy OR build blas/lapack with gfortran. > For the latter: What is the best way to do this? > Can one add a flag to the build command > python setup.py config_fc --fcompiler=gnu95 build > ? Yes, see python setup.py build_ext --help Use -l (and -L). Pearu From travis at enthought.com Mon Nov 21 09:45:01 2005 From: travis at enthought.com (Travis N. 
Vaught) Date: Mon, 21 Nov 2005 08:45:01 -0600 Subject: [SciPy-dev] OT: Enthought IT Admin Job Posting Message-ID: <4381DD6D.9030202@enthought.com> All, I'm posting this here in the hopes that someone who is passionate about python/numeric/scipy will "answer the call": `Enthought, Inc. `__ (Austin, TX, USA) ==================================================================== **Job Description**: Position: IT Administrator --------------------------- Enthought is looking for an exceptional IT Administrator to manage the IT infrastructure for its Austin, TX offices. This person will have a passion for supporting software development tools and environments. The target server platforms include Linux (RedHat and Fedora), Windows XP, and Solaris. Workstations are a mix of Windows and Linux. We're looking for an Admin who focuses on supporting software developers. Desired Skills and Capabilities: * B.S. in Computer Science or other related field (preferably not MIS) * 5+ Years Experience in Enterprise IT Administration Role * Ability to program or script in a programming language or shell - (Python, Perl and small C programs) * Ability to solve problems quickly and completely * Strong inter-personal and communication skills as well as a team player in a group of highly talented software developers * Willingness to pitch in wherever needed * Interested in making developers' lives easier Duties: * Perform installation, patching and other server maintenance to ensure security and stability of server infrastructure. * Maintain core infrastructure technologies such as Firewall, VPN, apache web server, mail server, NIS, NFS, DNS, DHCP, SSH, network-wide backups, RAID and Samba * Identify routine tasks and automate through shell/Python scripting * Perform on-call duties and out of hours maintenance as necessary * Configure Nortel phone system * Build RPMs for RH/FC Linux * Using development tools on Linux such as gcc/g++, make, autoconf, etc.
* Working with Python and building & installing Python packages using distutils * Support developer tools such as SVN repositories, bug trackers, Software Project Management utilities Company: -------- Enthought is a scientific computing company located in downtown Austin, Texas. Founded in 2001, it has grown nearly 100% per year both in staff and revenue to become a stable and profitable technology company. This growth has been possible because of Enthought's talented team and because of our commitment to developing quality software. We strive to combine advanced algorithm development with modern software practices, such as component based architectures, application scripting, and intuitive user interface design. We take a holistic approach to software development, in which architects and developers team with technical writers, human factors specialists, and project managers (always highly technical individuals) to develop a complete solution for our customers. Much of our work is based on the Python programming language, and we are actively engaged in open source development (www.scipy.org). We're lucky enough to work on interesting problems and are looking for talented people to join us. Some of our current efforts are in the areas of geophysics, electromagnetics, fluid dynamics, micro-rheology, CAD, 2-D and 3-D visualization, and others. All of these tools are developed as plug-ins into our Envisage "Scientific IDE" framework. * **Contact**: Travis N. Vaught, CEO * **E-mail contact**: jobs at enthought.com * **Web**: http://www.enthought.com/careers.htm -- ........................ Travis N. Vaught CEO Enthought, Inc. http://www.enthought.com ........................
From jmiller at stsci.edu Mon Nov 21 10:12:42 2005 From: jmiller at stsci.edu (Todd Miller) Date: Mon, 21 Nov 2005 10:12:42 -0500 Subject: [SciPy-dev] [Numpy-discussion] Memory mapped files in scipy core In-Reply-To: <43803BC9.6050103@ee.byu.edu> References: <43803BC9.6050103@ee.byu.edu> Message-ID: <4381E3EA.9040500@stsci.edu> Travis Oliphant wrote: > > I would appreciate understanding typical use cases for memory-mapped > files. I am not sure I understand why certain choices were made for > numarray's memmap and memmap slice classes. They seem like a lot of > "extra" stuff and I'm not sure what all that stuff is for. > > Rather than just copy these over, I would like to understand what > people typically want to do with memory-mapped files to see if scipy > core doesn't already provide that. > > For example, right now I can open a file, use mmap to obtain a memory > map object and then pass that object into frombuffer in scipy_core to > get an ndarray whose memory maps a file on disk. > Now, this ndarray can be sliced and indexed and manipulated all the > while referring to the file on disk (well technically, I suppose, the > memory-mapped object would need to be flushed to synchronize). > Now, I could see wanting to make the process of opening the file, > getting the mmap object and setting its buffer to the array object a > little easier. Thus, a simple memmap class would be a useful > construct -- I could even see it inheriting from the ndarray directly > and adding a few methods. I guess I just don't see why one would > care about a memory-mapped slice object, when the mmaparray sub-class > would be perfectly useful. > There are a few extra capabilities which are supported in numarray's memmap: 1. slice insertion 2. slice deletion 3. memmap based array resizing Each of these things implicitly changes the layout of the underlying file. Whether or not these features get used or justify the complexity is another matter.
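The open-mmap-frombuffer recipe Travis describes in the quoted text can be sketched as follows. It is written with the modern numpy module name; treating numpy's `frombuffer` as equivalent to scipy_core's is an assumption here:

```python
# Sketch of the mmap + frombuffer pattern: the array is a view of the
# file's pages, so writes land in the file once the map is flushed.
# Uses numpy's names; scipy_core's frombuffer is assumed equivalent.
import mmap
import os
import tempfile

import numpy as np

fd, path = tempfile.mkstemp()
os.write(fd, b"\x00" * 80)          # room for 10 float64 values
os.close(fd)

with open(path, "r+b") as f:
    m = mmap.mmap(f.fileno(), 0)    # writable map of the whole file
    a = np.frombuffer(m, dtype=np.float64)  # no copy: a views the file
    a[3] = 2.5
    m.flush()                       # synchronize with the file on disk
    del a                           # release the buffer before closing
    m.close()

with open(path, "rb") as f:
    back = np.frombuffer(f.read(), dtype=np.float64)
print(back[3])  # 2.5
os.remove(path)
```

Note the `del a` before `m.close()`: because the array is a live view into the map, the map cannot be closed while the array still holds its buffer.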
Because of 32-bit address space exhaustion and swap file issues, memory mapping was a disappointment at STSCI so I'm doubtful we used these features ourselves. Regards, Todd > On a related, but orthogonal note: > > My understanding is that using memory-mapped files for *very* large > files will require modification to the mmap module in Python --- > something I think we should push. One part of that process would be > to add the C-struct array interface to the mmap module and the buffer > object -- perhaps this is how we get the array interface into Python > quickly. Then, if we could make a base-type mmap that did not use > the buffer interface or the sequence interface (similar to the > bigndarray in scipy_core) and therefore by-passed the problems with > Python in those areas, then the current mmap object could inherit from > the base class and provide current functionality while still exposing > the array interface for access to >2GB files on 64-bit systems. > > Who would like to take up the ball for modifying mmap in Python in > this fashion? > > -Travis > > > > ------------------------------------------------------- > This SF.Net email is sponsored by the JBoss Inc. Get Certified Today > Register for a JBoss Training Course. Free Certification Exam > for All Training Attendees Through End of 2005. For more info visit: > http://ads.osdn.com/?ad_id=7628&alloc_id=16845&op=click > _______________________________________________ > Numpy-discussion mailing list > Numpy-discussion at lists.sourceforge.net > https://lists.sourceforge.net/lists/listinfo/numpy-discussion From perry at stsci.edu Mon Nov 21 10:44:10 2005 From: perry at stsci.edu (Perry Greenfield) Date: Mon, 21 Nov 2005 10:44:10 -0500 Subject: [SciPy-dev] [Numpy-discussion] Memory mapped files in scipy core In-Reply-To: <4381E3EA.9040500@stsci.edu> Message-ID: > Each of these things implicitly changes the layout of the underlying > file. 
Whether or not these features get used or justify the complexity > is another matter. Because of 32-bit address space exhaustion and swap > file issues, memory mapping was a disappointment at STSCI so I'm > doubtful we used these features ourselves. > > Regards, > Todd > Let me add that I believe this situation of memory mapping being not terribly useful now is (or should be) temporary. Basically what has happened is that the size of file that is interesting to memory map is larger than the current 2GB limit. When this limit is removed, I believe that there will be renewed interest in using memory mapping with data sets that are much larger than physical memory. We already occasionally encounter such data sets now but can't use this as a means of accessing the data. Perry From oliphant at ee.byu.edu Mon Nov 21 13:12:04 2005 From: oliphant at ee.byu.edu (Travis Oliphant) Date: Mon, 21 Nov 2005 11:12:04 -0700 Subject: [SciPy-dev] [matplotlib-devel] Preliminary patch for the (new) scipy In-Reply-To: <200511201922.40076.dd55@cornell.edu> References: <18edad1d51ce1377a1b6d28f091f883b@egcrc.net> <4380FCED.3060902@gmail.com> <200511201857.27590.dd55@cornell.edu> <200511201922.40076.dd55@cornell.edu> Message-ID: <43820DF4.5080700@ee.byu.edu> Darren Dale wrote: >On Sunday 20 November 2005 6:57 pm, Darren Dale wrote: > > >>On Sunday 20 November 2005 5:47 pm, Robert Kern wrote: >> >> >>>Darren Dale wrote: >>> >>> >>>>Will scipy_core ever include something like Numeric's lapack_lite? >>>> >>>> >>>scipy_core does not depend on ATLAS. It already has lapack_lite. >>> >>>[svk-projects]$ ls scipy_core/scipy/corelib/lapack_lite >>>blas_lite.c dlapack_lite.c f2c_lite.c >>>zlapack_lite.c dlamch.c f2c.h >>>lapack_litemodule.c >>> >>> > >Sorry, I've obviously overlooked something, but maybe I have also discovered a >bug. I've just removed atlas/blas/lapack, updated scipy_core from svn, >removed my build and site-packages/scipy directories, and tried to build >scipy_core. 
Here is a warning message I get toward the end of the build and >install processes: > > This is definitely a configuration issue. For some reason, the system is picking up BLAS information and therefore trying to compile _dotblas for you. However, it is failing because the libblas file is not found. Can you show us the output of the start of the build (where SYSTEM INFO things are printed). Or just attach a complete record of the build. This might help track down what the configuration issue is. -Travis From dd55 at cornell.edu Mon Nov 21 13:48:09 2005 From: dd55 at cornell.edu (Darren Dale) Date: Mon, 21 Nov 2005 13:48:09 -0500 Subject: [SciPy-dev] [matplotlib-devel] Preliminary patch for the (new) scipy In-Reply-To: <43820DF4.5080700@ee.byu.edu> References: <18edad1d51ce1377a1b6d28f091f883b@egcrc.net> <200511201922.40076.dd55@cornell.edu> <43820DF4.5080700@ee.byu.edu> Message-ID: <200511211348.09879.dd55@cornell.edu> On Monday 21 November 2005 1:12 pm, Travis Oliphant wrote: > Darren Dale wrote: > >Sorry, I've obviously overlooked something, but maybe I have also > > discovered a bug. I've just removed atlas/blas/lapack, updated scipy_core > > from svn, removed my build and site-packages/scipy directories, and tried > > to build scipy_core. Here is a warning message I get toward the end of > > the build and install processes: > > This is definitely a configuration issue. For some reason, the system > is picking up BLAS information and therefore trying to compile _dotblas > for you. However, it is failing because the libblas file is not found. > > Can you show us the output of the start of the build (where SYSTEM INFO > things are printed). Or just attach a complete record of the build. > This might help track down what the configuration issue is. This was due to an oversight on my end. I used gentoo's package manager to completely remove atlas/blas/lapack, and afterwards verified that every instance of /usr/lib/libatlas.* had been removed. 
I did not check that libblas.* and liblapack.* had been removed, but I should have. Unfortunately, a few broken soft links remained, which scipy found and tried to build against. After removing those links, I was able to build scipy_core. How embarrassing. I'm sorry for the noise. Darren From oliphant at ee.byu.edu Mon Nov 21 14:00:11 2005 From: oliphant at ee.byu.edu (Travis Oliphant) Date: Mon, 21 Nov 2005 12:00:11 -0700 Subject: [SciPy-dev] [matplotlib-devel] Preliminary patch for the (new) scipy In-Reply-To: <200511211348.09879.dd55@cornell.edu> References: <18edad1d51ce1377a1b6d28f091f883b@egcrc.net> <200511201922.40076.dd55@cornell.edu> <43820DF4.5080700@ee.byu.edu> <200511211348.09879.dd55@cornell.edu> Message-ID: <4382193B.4070807@ee.byu.edu> Darren Dale wrote: >On Monday 21 November 2005 1:12 pm, Travis Oliphant wrote: > > >>Darren Dale wrote: >> >> >>>Sorry, I've obviously overlooked something, but maybe I have also >>>discovered a bug. I've just removed atlas/blas/lapack, updated scipy_core >>>from svn, removed my build and site-packages/scipy directories, and tried >>>to build scipy_core. Here is a warning message I get toward the end of >>>the build and install processes: >>> >>> >>This is definitely a configuration issue. For some reason, the system >>is picking up BLAS information and therefore trying to compile _dotblas >>for you. However, it is failing because the libblas file is not found. >> >>Can you show us the output of the start of the build (where SYSTEM INFO >>things are printed). Or just attach a complete record of the build. >>This might help track down what the configuration issue is. >> >> > >This was due to an oversight on my end. I used gentoo's package manager to >completely remove atlas/blas/lapack, and afterwards verified that every >instance of /usr/lib/libatlas.* had been removed. I did not check that >libblas.* and liblapack.* had been removed, but I should have. 
Unfortunately, >a few broken soft links remained, which scipy found and tried to build >against. After removing those links, I was able to build scipy_core. How >embarrassing. I'm sorry for the noise. > > Not a problem. In fact, the system_info functions could be made more robust by actually trying to link a simple program against what it finds to make sure they are actually valid. So, your example is not completely irrelevant. -Travis From pebarrett at gmail.com Mon Nov 21 14:59:21 2005 From: pebarrett at gmail.com (Paul Barrett) Date: Mon, 21 Nov 2005 14:59:21 -0500 Subject: [SciPy-dev] Memory mapped files in scipy core In-Reply-To: <43803BC9.6050103@ee.byu.edu> References: <43803BC9.6050103@ee.byu.edu> Message-ID: <40e64fa20511211159s3f65e0fbl2a1e87106f870405@mail.gmail.com> On 11/20/05, Travis Oliphant wrote: > > > On a related, but orthogonal note: > > My understanding is that using memory-mapped files for *very* large > files will require modification to the mmap module in Python --- > something I think we should push. One part of that process would be to > add the C-struct array interface to the mmap module and the buffer > object -- perhaps this is how we get the array interface into Python > quickly. Then, if we could make a base-type mmap that did not use the > buffer interface or the sequence interface (similar to the bigndarray in > scipy_core) and therefore by-passed the problems with Python in those > areas, then the current mmap object could inherit from the base class > and provide current functionality while still exposing the array > interface for access to >2GB files on 64-bit systems. Is it really necessary to embed the C-struct array interface in mmap? I thought that Python array interface was the standard mechanism for interfacing with array objects. Or do I misunderstand you. -- Paul Paul Barrett, PhD Johns Hopkins University Assoc. 
Research Scientist Dept of Physics and Astronomy Phone: 410-516-5190 Baltimore, MD 21218 -------------- next part -------------- An HTML attachment was scrubbed... URL: From oliphant at ee.byu.edu Mon Nov 21 15:12:07 2005 From: oliphant at ee.byu.edu (Travis Oliphant) Date: Mon, 21 Nov 2005 13:12:07 -0700 Subject: [SciPy-dev] Memory mapped files in scipy core In-Reply-To: <40e64fa20511211159s3f65e0fbl2a1e87106f870405@mail.gmail.com> References: <43803BC9.6050103@ee.byu.edu> <40e64fa20511211159s3f65e0fbl2a1e87106f870405@mail.gmail.com> Message-ID: <43822A17.8060800@ee.byu.edu> Paul Barrett wrote: > On 11/20/05, *Travis Oliphant* > wrote: > > > On a related, but orthogonal note: > > My understanding is that using memory-mapped files for *very* large > files will require modification to the mmap module in Python --- > something I think we should push. One part of that process would > be to > add the C-struct array interface to the mmap module and the buffer > object -- perhaps this is how we get the array interface into Python > quickly. Then, if we could make a base-type mmap that did not > use the > buffer interface or the sequence interface (similar to the > bigndarray in > scipy_core) and therefore by-passed the problems with Python in those > areas, then the current mmap object could inherit from the base class > and provide current functionality while still exposing the array > interface for access to >2GB files on 64-bit systems. > > > Is it really necessary to embed the C-struct array interface in mmap? > I thought that Python array interface was the standard mechanism for > interfacing with array objects. Or do I misunderstand you. > The C-struct interface is faster and a bit easier to code in C. Currently the C-struct interface is how numarray/ Numeric and scipy core are all talking to each other. One can use either the C-struct interface or the Python interface for implementing the array interface. I suspect we would want both on the memory-mapped object. 
The most important element of the proposal is elimination of the buffer and sequence protocols on a new base-class mmap object. Then users could use this object to memory-map very large files. -Travis From arnd.baecker at web.de Tue Nov 22 01:57:01 2005 From: arnd.baecker at web.de (Arnd Baecker) Date: Tue, 22 Nov 2005 07:57:01 +0100 (CET) Subject: [SciPy-dev] statistic tests failures Message-ID: Moin, From time to time one of the statistics tests will fail. Would it be reasonable/possible to automatically run those tests a couple of times to reduce the chance of failing?

======================================================================
FAIL: check_normal (scipy.stats.morestats.test_morestats.test_anderson)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/home/abaecker/BUILDS2/Build_81/inst_scipy_newcore/lib/python2.4/site-packages/scipy/stats/tests/test_morestats.py", line 51, in check_normal
    assert_array_less(A, crit[-2:])
  File "/home/abaecker/BUILDS2/Build_81//inst_scipy_newcore/lib/python2.4/site-packages/scipy/test/testing.py", line 782, in assert_array_less
    assert cond,\
AssertionError: Arrays are not less-ordered (mismatch 50.0%):
  Array 1: 0.94024324379482493
  Array 2: [ 0.858 1.0210000000000001]

Not that this point is really worrying me a lot, but it might confuse newcomers ... Best, Arnd From strawman at astraw.com Tue Nov 22 02:20:12 2005 From: strawman at astraw.com (Andrew Straw) Date: Mon, 21 Nov 2005 23:20:12 -0800 Subject: [SciPy-dev] patch: allow arrayobject.h to compile in c++ compiler Message-ID: <4382C6AC.9090303@astraw.com> Bool is defined in C++ compilers, so if arrayobject.h defines Bool, the compiler issues an error: "redeclaration of C++ built-in type `int'". And, in an apparent shortcoming of Trac, I can't seem to submit a patch. Is that really so?
Here's a patch which allows C++ source to be compiled against scipy/arrayobject.h:

Index: scipy/base/include/scipy/arrayobject.h
===================================================================
--- scipy/base/include/scipy/arrayobject.h (revision 1517)
+++ scipy/base/include/scipy/arrayobject.h (working copy)
@@ -103,7 +103,9 @@
 # define ULONGLONG_SUFFIX(x) (x##UL)
 #endif
 
+#ifndef Bool
 typedef unsigned char Bool;
+#endif
 #ifndef FALSE
 #define FALSE 0
 #endif

From Fernando.Perez at colorado.edu Tue Nov 22 02:27:53 2005 From: Fernando.Perez at colorado.edu (Fernando Perez) Date: Tue, 22 Nov 2005 00:27:53 -0700 Subject: [SciPy-dev] patch: allow arrayobject.h to compile in c++ compiler In-Reply-To: <4382C6AC.9090303@astraw.com> References: <4382C6AC.9090303@astraw.com> Message-ID: <4382C879.8060605@colorado.edu> Andrew Straw wrote: > Bool is defined in C++ compilers, so if arrayobject.h defines Bool, the > compiler issues an error: "redeclaration of C++ built-in type `int'". > > And, in an apparent shortcoming of Trac, I can't seem to submit a > patch. Is that really so? well, at least the trac instance ipython uses (which should be the same running in scipy) does have an 'Attach file' button in all ticket pages. You should be able to attach patches via that button. It is true that it doesn't seem to have a _specific_ patch tracker though, only generic attachments. > Here's a patch which allows C++ source to be compiled against > scipy/arrayobject.h: In any case, I just applied your patch. I didn't test a C++ build, but I fail to see how that 2-line #ifdef could possibly cause any harm, so I went ahead with the commit.
Hopefully I didn't screw up (last time I committed a patch without careful testing Pearu had to clean up my mess :) Cheers, f From oliphant at ee.byu.edu Tue Nov 22 03:03:45 2005 From: oliphant at ee.byu.edu (Travis Oliphant) Date: Tue, 22 Nov 2005 01:03:45 -0700 Subject: [SciPy-dev] patch: allow arrayobject.h to compile in c++ compiler In-Reply-To: <4382C6AC.9090303@astraw.com> References: <4382C6AC.9090303@astraw.com> Message-ID: <4382D0E1.7020907@ee.byu.edu> Andrew Straw wrote: >Bool is defined in C++ compilers, so if arrayobject.h defines Bool, the >compiler issues an error: "redeclaration of C++ built-in type `int'". > > > I thought C++ defined bool. This was recently changed from bool to Bool precisely for C++ compilers. Now you tell me that C++ compilers define Bool as well? The problem with the patch is what is sizeof(Bool) on those C++ compilers? Is it 1? -Travis From arnd.baecker at web.de Tue Nov 22 03:06:33 2005 From: arnd.baecker at web.de (Arnd Baecker) Date: Tue, 22 Nov 2005 09:06:33 +0100 (CET) Subject: [SciPy-dev] newcore and gcc4/gfortran (only 3 tests fail now) In-Reply-To: References: Message-ID: On Mon, 21 Nov 2005, Pearu Peterson wrote: > On Mon, 21 Nov 2005, Arnd Baecker wrote: > > >> Try adding -lg2c to linkage command > > > > Just to be sure: when building blas/lapack > > or when building scipy? > > When building scipy OR build blas/lapack with gfortran. > > > For the latter: What is the best way to do this? > > Can one add a flag to the build command > > python setup.py config_fc --fcompiler=gnu95 build > > ? > > Yes, see > > python setup.py build_ext --help > > Use -l (and -L). Many thanks - python setup.py config_fc --fcompiler=gnu95 -lg2c build worked fine for me. And with >>> scipy.__core_version__ '0.7.1.1526' >>> scipy.__scipy_version__ '0.4.2_1456' the check_bdtrik (scipy.special.basic.test_basic.test_cephes) Segmentation fault is gone! 
Best, Arnd From jh at oobleck.astro.cornell.edu Tue Nov 22 09:10:36 2005 From: jh at oobleck.astro.cornell.edu (Joe Harrington) Date: Tue, 22 Nov 2005 09:10:36 -0500 Subject: [SciPy-dev] Memory mapped files in scipy core In-Reply-To: (scipy-dev-request@scipy.net) References: Message-ID: <200511221410.jAMEAab8021880@oobleck.astro.cornell.edu> I'm not familiar with the mmap interface, but these insertion tricks sound like they solve a particularly unpleasant problem of IDL that I hit a lot. It's considered nice in CS to allocate space on the fly, since it keeps your allocation with the code that uses it. It's particularly useful if you have an unknown amount of data coming in. To that end, I have a routine, concat, that allows you to tack on an array to any side of an existing array: x = concat(2, x, y) puts x next to y in the second dimension. The arrays may have any dimension and shape. If they're different shapes, concat fills any void space in with a pad value. If I'm reading in a dataset of some thousands of images, I can just put that in a loop and then ask the final array how big it is, rather than "peeking" at some ancillary data to find out how much space to pre-allocate in x. The pre-allocation line (which has to be outside the loop in IDL, or has to be protected by an if) is unnecessary. That's just one use. There are many more, and options I'm not discussing. There are two problems with a non-mmapped approach to implementing concat. First, each call to concat allocates a new, larger array of the necessary kind and copies in the existing data. So obviously, it gets slow very soon, copying all the data that has been read so far over and over again. Also, it's not possible to make an array that contains more than half the RAM. Having the ability to insert data would be an immense help here. You'd just insert onto the end, or the side (which is a series of interior insertions under the hood). No extra copies happen, and both problems are solved. 
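For readers who have not used IDL, the behaviour Joe describes can be sketched against modern numpy. The function below is an illustrative reconstruction from his description (including the 1-based axis argument and the pad-value behaviour); it is not IDL code or a shipped scipy routine:

```python
import numpy as np

def concat(axis, x, y, pad=0):
    """Join x and y along `axis` (1-based, as in IDL), padding any
    mismatched dimensions with `pad`.  Sketch only."""
    x, y = np.asarray(x), np.asarray(y)
    ndim = max(x.ndim, y.ndim, axis)
    # promote both operands to a common rank
    x = x.reshape(x.shape + (1,) * (ndim - x.ndim))
    y = y.reshape(y.shape + (1,) * (ndim - y.ndim))
    ax = axis - 1
    # every non-joined dimension grows to the larger of the two sizes
    full = [max(sx, sy) for sx, sy in zip(x.shape, y.shape)]

    def grow(a):
        shape = list(full)
        shape[ax] = a.shape[ax]
        out = np.full(shape, pad, dtype=np.result_type(x, y))
        out[tuple(slice(0, s) for s in a.shape)] = a
        return out

    return np.concatenate([grow(x), grow(y)], axis=ax)

x = [[1, 2], [3, 4]]          # shape (2, 2)
y = [[5], [6], [7]]           # shape (3, 1)
print(concat(2, x, y))        # void space is filled with the pad value
# [[1 2 5]
#  [3 4 6]
#  [0 0 7]]
```

Note that, exactly as the paragraph above says, each call allocates a fresh output array and copies both operands into it, which is the repeated-copy cost that an insert-in-place memory-mapped array would avoid.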
Concat solves a bunch of problems in IDL that Python may not have, such as allowing x to be undefined in the initial loop iteration (IDL's array syntax does not let you concatenate an array with an undefined object). So, I would make extensive use of this capability, and I think it might become the default way to read in large datasets, particularly if they are of variable size, or if data elements might be examined and discarded during the reading process (so that even if you knew the total number of elements, you would not know the final space you'd need, you'd have to overallocate, and you'd need to reshape at the end of the loop in order not to have a bunch of empty space in your array). Once I had an mmapped array, I suppose use would be pretty generic. I often need to take a cut or slice through all the data. For example, say I have a 3D stack of images. Yesterday I looped over X and Y, extracted all the Z values at a given X,Y, and did sigma rejection to find bad pixels. Then I poked in the median value for each outlier. I don't have any particular operations I can think of that I only do on large images. I have hit the 2GB limit recently, but fortunately only have 2 32-bit machines left. The 2GB limit does put a damper on things! You'll see a lot more call for mmapped files in the coming years. For the reasons given above, I think that will be true even if RAM does grow with dataset size. --jh-- From ravi at ati.com Tue Nov 22 10:19:53 2005 From: ravi at ati.com (Ravikiran Rajagopal) Date: Tue, 22 Nov 2005 10:19:53 -0500 Subject: [SciPy-dev] newcore and gcc4/gfortran (SUCCESS) In-Reply-To: References: Message-ID: <200511221019.53912.ravi@ati.com> On Tuesday 22 November 2005 03:06, Arnd Baecker wrote: > Many thanks - > python setup.py config_fc --fcompiler=gnu95 -lg2c build > worked fine for me.
> > And with > > >>> scipy.__core_version__ '0.7.1.1526' > >>> scipy.__scipy_version__ '0.4.2_1456' > > the check_bdtrik (scipy.special.basic.test_basic.test_cephes) > Segmentation fault is gone! Well, I am happy to report that I can do one better. With gcc4/gfortran combo from SVN yesterday, along with the usual Redhat/Fedora patches (see https://bugzilla.redhat.com/bugzilla/show_bug.cgi?id=173771 for the complete discussion regarding SciPy), I can now compile ATLAS and SciPy with no problems and all tests pass with the following patch:

Index: Lib/lib/blas/fblaswrap.f.src
===================================================================
--- Lib/lib/blas/fblaswrap.f.src (revision 1452)
+++ Lib/lib/blas/fblaswrap.f.src (working copy)
@@ -1,7 +1,7 @@
 c
       subroutine wdot (r, n, x, incx, y, incy)
       external dot
-      complex dot, r
+      dot, r
       integer n
       x (*)
       integer incx

For Fedora Core 4 users who would like to play with this, please use gcc-4.0.2-7 currently available in FC4 updates-testing repository. I am using SciPy core 1526 and scipy 1452. Question: Scipy runs only 1370 tests on my machine. Is that the right number or am I missing some? Another question for Arnd: Why do you need g2c when you build with gfortran? Regards, Ravi From arnd.baecker at web.de Tue Nov 22 16:41:15 2005 From: arnd.baecker at web.de (Arnd Baecker) Date: Tue, 22 Nov 2005 16:41:15 +0100 (CET) Subject: [SciPy-dev] newcore and gcc4/gfortran (SUCCESS) In-Reply-To: <200511221019.53912.ravi@ati.com> References: <200511221019.53912.ravi@ati.com> Message-ID: On Tue, 22 Nov 2005, Ravikiran Rajagopal wrote: [...] > Question: Scipy runs only 1370 tests on my machine. Is that the right number > or am I missing some? Same number here. > Another question for Arnd: Why do you need g2c when you build with gfortran?
The reason is that my blas/lapack libraries are built with g77 3.x - Pearu had the explanation and solution: http://www.scipy.org/mailinglists/mailman?fn=scipy-dev/2005-November/004172.html Best, Arnd From strawman at astraw.com Tue Nov 22 11:49:06 2005 From: strawman at astraw.com (Andrew Straw) Date: Tue, 22 Nov 2005 08:49:06 -0800 Subject: [SciPy-dev] patch: allow arrayobject.h to compile in c++ compiler In-Reply-To: <4382D0E1.7020907@ee.byu.edu> References: <4382C6AC.9090303@astraw.com> <4382D0E1.7020907@ee.byu.edu> Message-ID: <43834C02.5090403@astraw.com> Sorry, my mistake! It turns out that OpenProducer (part of OpenSceneGraph) must do "typedef int Bool", and so this, combined with arrayobject.h's "typedef unsigned char Bool" results in the "redeclaration of int" compiler error. So, you're right, Bool is not defined in C++, and it's probably safest to remove my patch. But would it make sense to rename Bool to ScipyBool or something less likely to be defined in other source? Travis Oliphant wrote: >Andrew Straw wrote: > > > >>Bool is defined in C++ compilers, so if arrayobject.h defines Bool, the >>compiler issues an error: "redeclaration of C++ built-in type `int'". >> >> >> >> >I thought C++ defined bool. This was recently changed from bool to >Bool precisely for C++ compilers. Now you tell me that C++ compilers >define Bool as well? > >The problem with the patch is what is sizeof(Bool) on those C++ >compilers? Is it 1? > >-Travis > >_______________________________________________ >Scipy-dev mailing list >Scipy-dev at scipy.net >http://www.scipy.net/mailman/listinfo/scipy-dev > > From cookedm at physics.mcmaster.ca Tue Nov 22 12:18:47 2005 From: cookedm at physics.mcmaster.ca (David M.
Cooke) Date: Tue, 22 Nov 2005 12:18:47 -0500 Subject: [SciPy-dev] patch: allow arrayobject.h to compile in c++ compiler In-Reply-To: <43834C02.5090403@astraw.com> References: <4382C6AC.9090303@astraw.com> <4382D0E1.7020907@ee.byu.edu> <43834C02.5090403@astraw.com> Message-ID: <034A49BA-14B9-4022-B20A-E8EC998DFD49@physics.mcmaster.ca> On Nov 22, 2005, at 11:49 , Andrew Straw wrote: > Sorry, my mistake! It turns out that OpenProducer (part of > OpenSceneGraph) must do "typedef int Bool", and so this, combined with > arrayobject.h's "typedef unsigned char Bool" results in the > "redeclaration of int" compiler error. So, you're right, Bool is not > defined in C++, and it's probably safest to remove my patch. But > would it > make sense to rename Bool to ScipyBool or something less likely to be > defined in other source? This is the reason for PY_ARRAY_TYPES_PREFIX. If you do this:

#define PY_ARRAY_TYPES_PREFIX Scipy
#include "scipy/arrayobject.h"

then all the types that scipy defines will have Scipy prefixed (so ScipyBool, Scipybyte, Scipyuint, etc.). > Travis Oliphant wrote: >> Andrew Straw wrote: >>> Bool is defined in C++ compilers, so if arrayobject.h defines >>> Bool, the >>> compiler issues an error: "redeclaration of C++ built-in type >>> `int'". >>> >> I thought C++ defined bool. This was recently changed from bool to >> Bool precisely for C++ compilers. Now you tell me that C++ compilers >> define Bool as well? >> >> The problem with the patch is what is sizeof(Bool) on those C++ >> compilers? Is it 1? -- |>|\/|< /------------------------------------------------------------------\ |David M.
Cooke http://arbutus.physics.mcmaster.ca/dmc/ |cookedm at physics.mcmaster.ca From Fernando.Perez at colorado.edu Tue Nov 22 13:47:04 2005 From: Fernando.Perez at colorado.edu (Fernando Perez) Date: Tue, 22 Nov 2005 11:47:04 -0700 Subject: [SciPy-dev] patch: allow arrayobject.h to compile in c++ compiler In-Reply-To: <43834C02.5090403@astraw.com> References: <4382C6AC.9090303@astraw.com> <4382D0E1.7020907@ee.byu.edu> <43834C02.5090403@astraw.com> Message-ID: <438367A8.3020904@colorado.edu> Andrew Straw wrote: > Sorry, my mistake! It turns out that OpenProducer (part of > OpenSceneGraph) must do "typedef int Bool", and so this, combined with > arrayobject.h's "typedef unsigned char Bool" results in the > "redeclaration of int" compiler error. So, you're right, Bool is not > defined in C++, and it's probably safest to remove my patch. But would it > make sense to rename Bool to ScipyBool or something less likely to be > defined in other source? OK, my bad again. I'll stop trying to be useful with low-level parts of the code I'm not truly intimately familiar with. Two tries, two mistakes, one too many. I'll make it up by working on numerical code I can actually do something useful with. I'll try to contribute something for fitting, the included polyfit is nice but numerically less uniform in error and less stable (due to condition issues) than a Chebyshev-based fit. There's also the lingering issue of numerically stable evaluation of polyval()... I'll find something to pay for my mistakes with :) Apologies... Cheers, f From cookedm at physics.mcmaster.ca Tue Nov 22 15:04:48 2005 From: cookedm at physics.mcmaster.ca (David M.
Cooke) Date: Tue, 22 Nov 2005 15:04:48 -0500 Subject: [SciPy-dev] patch: allow arrayobject.h to compile in c++ compiler In-Reply-To: <438367A8.3020904@colorado.edu> References: <4382C6AC.9090303@astraw.com> <4382D0E1.7020907@ee.byu.edu> <43834C02.5090403@astraw.com> <438367A8.3020904@colorado.edu> Message-ID: On Nov 22, 2005, at 13:47 , Fernando Perez wrote: > Andrew Straw wrote: >> Sorry, my mistake! It turns out that OpenProducer (part of >> OpenSceneGraph) must do "typedef int Bool", and so this, combined >> with >> arrayobject.h's "typedef unsigned char Bool" results in the >> "redeclaration of int" compiler error. So, you're right, Bool is not >> defined in C++, and it's probably safest to remove my patch. But >> would it >> make sense to rename Bool to ScipyBool or something less likely to be >> defined in other source? > > OK, my bad again. I'll stop trying to be useful with low-level > parts of the > code I'm not truly intimately familiar with. Two tries, two > mistakes, one too > many. Heh, don't worry about it. I'm not a fan of using "generic" names for types, and I think we should look at putting them under a namespace (using a prefix). I added the PY_ARRAY_TYPES_PREFIX to make it easy for a user to work around what I perceive as our deficiencies. -- |>|\/|< /------------------------------------------------------------------\ |David M.
Cooke http://arbutus.physics.mcmaster.ca/dmc/ |cookedm at physics.mcmaster.ca From guyer at nist.gov Tue Nov 22 15:05:10 2005 From: guyer at nist.gov (Jonathan Guyer) Date: Tue, 22 Nov 2005 15:05:10 -0500 Subject: [SciPy-dev] sparse matrix support status In-Reply-To: References: <433CE24A.6040509@ee.byu.edu> <433D5BAD.8030801@ntc.zcu.cz> <433D690D.7060008@ee.byu.edu> <43468A59.9000503@ntc.zcu.cz> <4346B65D.3070604@colorado.edu> <4346C618.2040403@colorado.edu> Message-ID: <1b53cd125533019bfb2bb9accae79754@nist.gov> On Oct 7, 2005, at 4:00 PM, I wrote: > regardless, pysparse is BSD > licensed, so it's perfectly legal to use his code to improve > scipy.sparse (assuming that rigorous benchmarking determines that there > are, in fact, improvements to be made). We'll do some tests and, if a > merge is warranted, we'll run it by Roman out of courtesy. I've finally done some benchmarking of scipy.sparse and PySparse and posted my results and comments on plone[*]: Bottom line is that, with a couple of exceptions, PySparse is both faster and less memory intensive than SciPy. I don't know whether anything can be lifted from PySparse to improve SciPy's implementation. [*] Not that this comes as a surprise to anybody here, but the SciPy plone wiki is scary slow. I also couldn't get it to accept reST, even though I've seen other people using it there. -- Jonathan E. 
Guyer, PhD Metallurgy Division National Institute of Standards and Technology From nwagner at mecha.uni-stuttgart.de Tue Nov 22 15:16:23 2005 From: nwagner at mecha.uni-stuttgart.de (Nils Wagner) Date: Tue, 22 Nov 2005 21:16:23 +0100 Subject: [SciPy-dev] sparse matrix support status In-Reply-To: <1b53cd125533019bfb2bb9accae79754@nist.gov> References: <433CE24A.6040509@ee.byu.edu> <433D5BAD.8030801@ntc.zcu.cz> <433D690D.7060008@ee.byu.edu> <43468A59.9000503@ntc.zcu.cz> <4346B65D.3070604@colorado.edu> <4346C618.2040403@colorado.edu> <1b53cd125533019bfb2bb9accae79754@nist.gov> Message-ID: On Tue, 22 Nov 2005 15:05:10 -0500 Jonathan Guyer wrote: > > On Oct 7, 2005, at 4:00 PM, I wrote: > >> regardless, pysparse is BSD >> licensed, so it's perfectly legal to use his code to >>improve >> scipy.sparse (assuming that rigorous benchmarking >>determines that there >> are, in fact, improvements to be made). We'll do some >>tests and, if a >> merge is warranted, we'll run it by Roman out of >>courtesy. > > I've finally done some benchmarking of scipy.sparse and >PySparse and > posted my results and comments on plone[*]: > > > > Bottom line is that, with a couple of exceptions, >PySparse is both > faster and less memory intensive than SciPy. I don't >know whether > anything can be lifted from PySparse to improve SciPy's >implementation. > > [*] Not that this comes as a surprise to anybody here, >but the SciPy > plone wiki is scary slow. I also couldn't get it to >accept reST, even > though I've seen other people using it there. > -- > Jonathan E. Guyer, PhD > Metallurgy Division > National Institute of Standards and Technology > > > _______________________________________________ > Scipy-dev mailing list > Scipy-dev at scipy.net > http://www.scipy.net/mailman/listinfo/scipy-dev Jonathan, I had a short look on your comments. Actually, you can import matrices given in matrixmarket format. 
Help on function mmread in module scipy.io.mmio:

mmread(source)
    Reads the contents of a Matrix Market file 'filename' into a matrix.

    Inputs:
        source - Matrix Market filename (extensions .mtx, .mtz.gz)
                 or open file object.

    Outputs:
        a - sparse or full matrix

Are you aware of a sparse eigensolver for scipy? Nils From oliphant at ee.byu.edu Tue Nov 22 15:40:16 2005 From: oliphant at ee.byu.edu (Travis Oliphant) Date: Tue, 22 Nov 2005 13:40:16 -0700 Subject: [SciPy-dev] Memory mapped files in scipy core In-Reply-To: <200511221410.jAMEAab8021880@oobleck.astro.cornell.edu> References: <200511221410.jAMEAab8021880@oobleck.astro.cornell.edu> Message-ID: <43838230.2050400@ee.byu.edu> Joe Harrington wrote: >I'm not familiar with the mmap interface, but these insertion tricks >sound like they solve a particularly unpleasant problem of IDL that I >hit a lot. > >It's considered nice in CS to allocate space on the fly, since it >keeps your allocation with the code that uses it. It's particularly >useful if you have an unknown amount of data coming in. To that end, >I have a routine, concat, that allows you to tack on an array to any >side of an existing array: > >x = concat(2, x, y) > >puts x next to y in the second dimension. The arrays may have any >dimension and shape. If they're different shapes, concat fills any >void space in with a pad value. If I'm reading in a dataset of some >thousands of images, I can just put that in a loop and then ask the >final array how big it is, rather than "peeking" at some ancillary >data to find out how much space to pre-allocate in x. The >pre-allocation line (which has to be outside the loop in IDL, or has >to be protected by an if) is unnecessary. > > This looks like a good routine to put in scipy_core if you don't mind contributing it. I can see that resizing memory mapped arrays and inserting into them would be useful. I'm not familiar enough with the memory-mapped file to understand how inserting into a memory-mapped file would work.
It looks like numarray's implementation actually copies data into RAM-based buffers on insertion and re-sizing, which would seem to negate the benefits you speak of. -Travis From oliphant at ee.byu.edu Tue Nov 22 15:52:38 2005 From: oliphant at ee.byu.edu (Travis Oliphant) Date: Tue, 22 Nov 2005 13:52:38 -0700 Subject: [SciPy-dev] Memory mapped files in scipy core In-Reply-To: <200511221410.jAMEAab8021880@oobleck.astro.cornell.edu> References: <200511221410.jAMEAab8021880@oobleck.astro.cornell.edu> Message-ID: <43838516.1070006@ee.byu.edu> Joe Harrington wrote: >I have hit the 2GB limit recently, but fortunately only have 2 32-bit >machines left. The 2GB limit does put a damper on things! You'll see >a lot more call for mmapped files in the coming years. For the >reasons given above, I think that will be true even if RAM does grow >with dataset size. > > The 2GB limit is of course an issue with Python's memory-mapped code. The Python memory map code definitely needs some attention in order to be more useful. Just to clarify, there is now a memmap call in scipy_core that creates a (sub-class) of the ndarray using a memory-mapped file for the buffer. You can manipulate it just like an array object and I could see adding a simple resize function as well (provided nobody else was using its buffer).
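The memmap call described here survives in modern numpy as np.memmap; a minimal usage sketch (current numpy API, a descendant of the scipy_core call being discussed, not the 2005 spelling):

```python
import os
import tempfile

import numpy as np

# The file itself serves as the array's buffer: slicing and
# assignment work as on any ndarray, and writes go to disk.
path = os.path.join(tempfile.mkdtemp(), "data.bin")

m = np.memmap(path, dtype=np.float64, mode="w+", shape=(4, 4))
m[:] = 0.0
m[1, 2] = 3.5      # assignment goes through to the file
m.flush()
del m              # drop the map; the data lives on in the file

m2 = np.memmap(path, dtype=np.float64, mode="r", shape=(4, 4))
print(m2[1, 2])    # -> 3.5
```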
-Travis From oliphant at ee.byu.edu Tue Nov 22 16:09:32 2005 From: oliphant at ee.byu.edu (Travis Oliphant) Date: Tue, 22 Nov 2005 14:09:32 -0700 Subject: [SciPy-dev] sparse matrix support status In-Reply-To: <1b53cd125533019bfb2bb9accae79754@nist.gov> References: <433CE24A.6040509@ee.byu.edu> <433D5BAD.8030801@ntc.zcu.cz> <433D690D.7060008@ee.byu.edu> <43468A59.9000503@ntc.zcu.cz> <4346B65D.3070604@colorado.edu> <4346C618.2040403@colorado.edu> <1b53cd125533019bfb2bb9accae79754@nist.gov> Message-ID: <4383890C.30206@ee.byu.edu> Jonathan Guyer wrote: >On Oct 7, 2005, at 4:00 PM, I wrote: > > > >> regardless, pysparse is BSD >>licensed, so it's perfectly legal to use his code to improve >>scipy.sparse (assuming that rigorous benchmarking determines that there >>are, in fact, improvements to be made). We'll do some tests and, if a >>merge is warranted, we'll run it by Roman out of courtesy. >> >> > >I've finally done some benchmarking of scipy.sparse and PySparse and >posted my results and comments on plone[*]: > > > >Bottom line is that, with a couple of exceptions, PySparse is both >faster and less memory intensive than SciPy. I don't know whether >anything can be lifted from PySparse to improve SciPy's implementation. > > > This is very, very good information. Thank you for going through the effort. Yes, I think definitely we can improve the sparse matrix support in SciPy. For one, the ll_mat matrix can be lifted from PySparse. It is definitely useful for building arrays. In addition we can look into why the lu decomposition is faster for PySparse when presumably they are both using the same code underneath. I think those two issues would solve the major problems your graphs point out. Right now, pysparse is in the sandbox of svn scipy so that it can be used for improvements on the sparse matrix in scipy. 
-Travis From guyer at nist.gov Tue Nov 22 16:53:23 2005 From: guyer at nist.gov (Jonathan Guyer) Date: Tue, 22 Nov 2005 16:53:23 -0500 Subject: [SciPy-dev] sparse matrix support status In-Reply-To: References: <433CE24A.6040509@ee.byu.edu> <433D5BAD.8030801@ntc.zcu.cz> <433D690D.7060008@ee.byu.edu> <43468A59.9000503@ntc.zcu.cz> <4346B65D.3070604@colorado.edu> <4346C618.2040403@colorado.edu> <1b53cd125533019bfb2bb9accae79754@nist.gov> Message-ID: On Nov 22, 2005, at 3:16 PM, Nils Wagner wrote: > Actually, you can import matrices given in matrixmarket > format. > > Help on function mmread in module scipy.io.mmio: I wasn't aware of that, thanks. > Are you aware of a sparse eigensolver for scipy ? Nope. -- Jonathan E. Guyer, PhD Metallurgy Division National Institute of Standards and Technology From guyer at nist.gov Tue Nov 22 16:56:52 2005 From: guyer at nist.gov (Jonathan Guyer) Date: Tue, 22 Nov 2005 16:56:52 -0500 Subject: [SciPy-dev] sparse matrix support status In-Reply-To: <4383890C.30206@ee.byu.edu> References: <433CE24A.6040509@ee.byu.edu> <433D5BAD.8030801@ntc.zcu.cz> <433D690D.7060008@ee.byu.edu> <43468A59.9000503@ntc.zcu.cz> <4346B65D.3070604@colorado.edu> <4346C618.2040403@colorado.edu> <1b53cd125533019bfb2bb9accae79754@nist.gov> <4383890C.30206@ee.byu.edu> Message-ID: On Nov 22, 2005, at 4:09 PM, Travis Oliphant wrote: > This is very, very good information. Thank you for going through the > effort. Happy to help. > I think those two issues would solve the major problems your graphs > point out. Absolutely. Once we figure out what the deal is with LU, it's probably worth making it optional. You may not want (or may not have) 10x the memory. Better to get a slow answer than no answer at all. > Right now, pysparse is in the sandbox of svn scipy so that > it can be used for improvements on the sparse matrix in scipy. Great. -- Jonathan E. 
Guyer, PhD Metallurgy Division National Institute of Standards and Technology From schofield at ftw.at Tue Nov 22 18:09:09 2005 From: schofield at ftw.at (Ed Schofield) Date: Tue, 22 Nov 2005 23:09:09 +0000 Subject: [SciPy-dev] sparse matrix support status In-Reply-To: <1b53cd125533019bfb2bb9accae79754@nist.gov> References: <433CE24A.6040509@ee.byu.edu> <433D5BAD.8030801@ntc.zcu.cz> <433D690D.7060008@ee.byu.edu> <43468A59.9000503@ntc.zcu.cz> <4346B65D.3070604@colorado.edu> <4346C618.2040403@colorado.edu> <1b53cd125533019bfb2bb9accae79754@nist.gov> Message-ID: <6D67C710-0C86-4C28-91D6-184AEDBEE82C@ftw.at> Hi Jonathan, Thanks very much for the benchmarks and detailed comments. Your conclusions also ring true for me, especially that matrix construction is currently slow and that we need better documentation. I have only one question for now. Could you please explain the comment > dok_matrix.setdiag() appears to change the matrix shape if you're not > careful, rendering it pretty useless for building the matrices for this > problem in more detail? The code for setdiag() is:

def setdiag(self, values, k=0):
    N = len(values)
    for n in range(N):
        self[n, n+k] = values[n]
    return

which grows the matrix only if it's currently smaller than len(values) x len(values). What's the problem this behaviour caused? Would you prefer to fix the size of a dok_matrix in advance and have a run-time check on the len(values)? Or was the problem with the second argument? -- Ed On 22/11/2005, at 8:05 PM, Jonathan Guyer wrote: > > On Oct 7, 2005, at 4:00 PM, I wrote: > >> regardless, pysparse is BSD >> licensed, so it's perfectly legal to use his code to improve >> scipy.sparse (assuming that rigorous benchmarking determines that >> there >> are, in fact, improvements to be made). We'll do some tests and, if a >> merge is warranted, we'll run it by Roman out of courtesy.
> > I've finally done some benchmarking of scipy.sparse and PySparse and > posted my results and comments on plone[*]: > > > > Bottom line is that, with a couple of exceptions, PySparse is both > faster and less memory intensive than SciPy. I don't know whether > anything can be lifted from PySparse to improve SciPy's > implementation. > > [*] Not that this comes as a surprise to anybody here, but the SciPy > plone wiki is scary slow. I also couldn't get it to accept reST, even > though I've seen other people using it there. > -- > Jonathan E. Guyer, PhD > Metallurgy Division > National Institute of Standards and Technology > > > _______________________________________________ > Scipy-dev mailing list > Scipy-dev at scipy.net > http://www.scipy.net/mailman/listinfo/scipy-dev -------------- next part -------------- An HTML attachment was scrubbed... URL: From prabhu_r at users.sf.net Wed Nov 23 00:04:08 2005 From: prabhu_r at users.sf.net (Prabhu Ramachandran) Date: Wed, 23 Nov 2005 10:34:08 +0530 Subject: [SciPy-dev] Wierd error with scipy and VTK In-Reply-To: <43817489.1090203@astraw.com> References: <17281.19459.796758.632299@vulcan.linux.in> <43817489.1090203@astraw.com> Message-ID: <17283.63560.184735.231513@enthought.cfl.aero.iitm.ernet.in> >>>>> "Andrew" == Andrew Straw writes: Andrew> GNU libc version 2.3.2 has a bug Andrew> [1] Andrew> "feclearexcept() error on CPUs with SSE" (fixed in 2.3.3) Andrew> which has been submitted to Debian Andrew> [2] Andrew> but not yet fixed in sarge (and possibly other Debian Andrew> releases). I have a patch[3] which fixes it. Excellent, thanks for this! I just tested on a non-SSE machine and my code works. So, the good news is that TVTK[1] is now (given that the ravel bug is fixed, thanks Travis) compatible with scipy_core. 
cheers, prabhu [1] http://www.enthought.com/enthought/wiki/TVTK From prabhu_r at users.sf.net Wed Nov 23 00:15:02 2005 From: prabhu_r at users.sf.net (Prabhu Ramachandran) Date: Wed, 23 Nov 2005 10:45:02 +0530 Subject: [SciPy-dev] scipy.distutils compatibility patch In-Reply-To: References: <17280.48927.912594.997543@vulcan.linux.in> Message-ID: <17283.64214.137275.786161@enthought.cfl.aero.iitm.ernet.in> >>>>> "Pearu" == Pearu Peterson writes: Pearu> On Sun, 20 Nov 2005, Prabhu Ramachandran wrote: >> While trying to get TVTK working with scipy I ran into some >> problems with scipy.distutils and compatibilty with older code. >> Attached is a patch (I don't have checkin access) that adds >> `default_config_dict` to scipy.distutils.misc_util. Adding >> this lets my simple setup.py scripts work with either >> scipy_distutils or scipy.distutils. Pearu> Thanks for the patch. It is in SVN now with small Pearu> depreciation warning. Thanks! Just a minor nit, the spelling used in this context is usually "deprecated"[1] as against "depreciated"[2]. :-) >> I also found that `python setup.py build_ext --inplace` does >> not work correctly and it tries to build the extension module >> in the installed distutils directory (for >> e.g. /usr/local/lib/python2.3/site-packages/scipy/distutils). Pearu> Yes, I am aware of possible --inplace problems. When Pearu> building scipy, one should not build with --inplace even if Pearu> it would work correctly --- if something gets wrong then it Pearu> is very difficult to remove all the generated files inside Pearu> scipy source tree. So, what comes to building scipy with Pearu> --inplace switch, I cannot give full support on solving Pearu> building problems. `rm -rf build` is a *must* before Pearu> rebuilding scipy and starting to look into possible bugs. Pearu> However, if you could provide a simple example where Pearu> --inplace does not work as expected, I'll look into it and Pearu> fix any related bugs in scipy.distutils. 
OK, just to clarify, I am not building scipy_core/scipy with --inplace and am doing a normal build. I just added support for scipy arrays to TVTK and we use scipy_distutils there. The enthought tree is best used with an inplace build. So if you have the enthought tree handy: svn co http://svn.enthought.com/svn/enthought/trunk enthought Then if you build an extension module inplace inside that tree like so: cd enthought/src/lib/enthought/tvtk python setup.py build_ext --inplace Then I run into the problem I reported with scipy.distutils trying to build the extension module array_ext.so into site-packages/scipy/distutils/ HTH. cheers, prabhu [1] deprecate v 1: express strong disapproval of; deplore 2: belittle; "The teacher should not deprecate his student's efforts" [syn: {depreciate}] [2] depreciate v 1: belittle; "The teacher should not deprecate his student's efforts" [syn: {deprecate}] 2: lower the value of something; "The Fed depreciated the dollar once again" [ant: {appreciate}] 3: lose in value; "The dollar depreciated again" [syn: {undervalue}, {devaluate}, {devalue}] [ant: {appreciate}] From cimrman3 at ntc.zcu.cz Wed Nov 23 02:53:56 2005 From: cimrman3 at ntc.zcu.cz (Robert Cimrman) Date: Wed, 23 Nov 2005 08:53:56 +0100 Subject: [SciPy-dev] advanced memory allocator? Message-ID: <43842014.8010508@ntc.zcu.cz> Hi all, Following recent discussions here about malloc vs. python allocator and about hunting memory related problems like buffer overruns and memory leaks, I think a better memory allocator could help. Years ago I looked at the allocator of PETSc and, inspired by it, wrote a much simpler but usable version, which can be used to replace malloc/free. All it does is: - for each data allocation it records the line, function, file and dir where it occurred, together with allocated size, in a header prepended to the data buffer - record current/max.
memory usage - all currently dynamically allocated memory can be printed or freed (brutal version of garbage collection :) - memory can be checked for integrity, thanks to a 'magic cookie' value in the header - many buffer overruns can be detected this way - all you pay for this is a little extra space for each allocation call Then any python extension module using the above can be at any moment examined and its allocated memory checked, printed, freed etc. Of course, valgrind gives you much more, but this can be used all the time since it does not cause any performance problems... There is one problem though: it is tied to 32-bit systems, so a 64-bit expert would have to take a look... I will upload it into the sandbox, if some interest arises. cheers, r. From cimrman3 at ntc.zcu.cz Wed Nov 23 03:10:15 2005 From: cimrman3 at ntc.zcu.cz (Robert Cimrman) Date: Wed, 23 Nov 2005 09:10:15 +0100 Subject: [SciPy-dev] sparse matrix support status In-Reply-To: <1b53cd125533019bfb2bb9accae79754@nist.gov> References: <433CE24A.6040509@ee.byu.edu> <433D5BAD.8030801@ntc.zcu.cz> <433D690D.7060008@ee.byu.edu> <43468A59.9000503@ntc.zcu.cz> <4346B65D.3070604@colorado.edu> <4346C618.2040403@colorado.edu> <1b53cd125533019bfb2bb9accae79754@nist.gov> Message-ID: <438423E7.5090606@ntc.zcu.cz> Jonathan Guyer wrote: > I've finally done some benchmarking of scipy.sparse and PySparse and > posted my results and comments on plone[*]: > > > > Bottom line is that, with a couple of exceptions, PySparse is both > faster and less memory intensive than SciPy. I don't know whether > anything can be lifted from PySparse to improve SciPy's implementation. Thanks for your 'great sparse shoot-out'! Just a little note: the link in the section discussing update_add_at() does not work for me (PAGE NOT FOUND). r. 
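[Editor's note] The bookkeeping Robert describes above — a per-allocation header recording origin and size, a 'magic cookie' for integrity checks, and current/peak counters — can be seen in miniature. The sketch below transliterates that C scheme into Python purely for illustration (the class and all names are invented here; his actual code is a C-level malloc/free replacement, which this does not attempt to be):

```python
MAGIC = 0xC00CA11E  # the 'magic cookie' checked on every free

class DebugAllocator:
    """Toy transliteration of the debug-malloc scheme: each allocation
    carries a header with where it came from, its size, and a magic
    cookie; current/peak usage is tracked and live blocks can be listed
    (or all freed -- the 'brutal garbage collection')."""

    def __init__(self):
        self.live = {}                  # handle -> header dict
        self.current = self.peak = 0
        self._next = 0

    def malloc(self, size, where):
        self._next += 1
        self.live[self._next] = dict(where=where, size=size,
                                     magic=MAGIC, buf=bytearray(size))
        self.current += size
        self.peak = max(self.peak, self.current)
        return self._next

    def free(self, handle):
        h = self.live.pop(handle)
        # a trampled cookie means a buffer overrun or a foreign pointer
        assert h["magic"] == MAGIC, "corrupt or foreign block"
        self.current -= h["size"]

    def report(self):
        """List all currently allocated blocks with their origin."""
        return [(h["where"], h["size"]) for h in self.live.values()]

alloc = DebugAllocator()
a = alloc.malloc(100, "module.c:42")
b = alloc.malloc(50, "module.c:57")
alloc.free(a)
print(alloc.report())              # [('module.c:57', 50)]
print(alloc.current, alloc.peak)   # 50 150
```

In the C version the header sits physically in front of the returned buffer (and must be padded to the platform's strictest alignment), so an overrun of one block typically tramples the next header's magic cookie — which is how many buffer overruns get caught.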
From strawman at astraw.com Wed Nov 23 14:15:22 2005 From: strawman at astraw.com (Andrew Straw) Date: Wed, 23 Nov 2005 11:15:22 -0800 Subject: [SciPy-dev] patch: allow arrayobject.h to compile in c++ compiler In-Reply-To: <034A49BA-14B9-4022-B20A-E8EC998DFD49@physics.mcmaster.ca> References: <4382C6AC.9090303@astraw.com> <4382D0E1.7020907@ee.byu.edu> <43834C02.5090403@astraw.com> <034A49BA-14B9-4022-B20A-E8EC998DFD49@physics.mcmaster.ca> Message-ID: <4384BFCA.9080501@astraw.com> David M. Cooke wrote: >On Nov 22, 2005, at 11:49 , Andrew Straw wrote: > > > >>But would it make sense to rename Bool to ScipyBool or something less likely to >>defined in other source? >> >> > >This is the reason for PY_ARRAY_TYPES_PREFIX. If you do this: > >#define PY_ARRAY_TYPES_PREFIX Scipy >#include "scipy/arrayobject.h" > >then all the types that scipy defines will have Scipy prefixed (so >ScipyBool, Scipybyte, Scipyuint, etc.). > > I haven't jumped into the source on this, but your suggestion doesn't entirely seem to work. 
If I do as you suggest and include the "#define PY_ARRAY_TYPES_PREFIX Scipy" before including arrayobject.h, I don't get the redefinition of type int error (and thus my extension builds), but I get the following warning: building 'fsee.FlySimWrap' extension gcc options: '-pthread -fno-strict-aliasing -DNDEBUG -g -O3 -Wall -Wstrict-prototypes -fPIC' compile options: '-Isrc -I/home/astraw/py24-amd64/lib/python2.4/site-packages/scipy/base/include -I/usr/include/python2.4 -c' gcc: src/CXX/cxxextensions.c gcc: src/FlySim/skybox.cpp gcc: src/FlySim/FlySimWrap.cpp In file included from src/FlySim/FlySimWrap.cpp:16: /home/astraw/py24-amd64/lib/python2.4/site-packages/scipy/base/include/scipy/arrayobject.h:25:1: warning: "Bool" redefined In file included from /usr/include/Producer/Types:37, from /usr/include/Producer/Math:18, from /usr/include/Producer/Camera:24, from src/FlySim/FlySim.hpp:10, from src/FlySim/FlySimWrap.hpp:9, from src/FlySim/FlySimWrap.cpp:1: /usr/include/X11/Xlib.h:96:1: warning: this is the location of the previous definition From guyer at nist.gov Wed Nov 23 14:26:11 2005 From: guyer at nist.gov (Jonathan Guyer) Date: Wed, 23 Nov 2005 14:26:11 -0500 Subject: [SciPy-dev] sparse matrix support status In-Reply-To: <6D67C710-0C86-4C28-91D6-184AEDBEE82C@ftw.at> References: <433CE24A.6040509@ee.byu.edu> <433D5BAD.8030801@ntc.zcu.cz> <433D690D.7060008@ee.byu.edu> <43468A59.9000503@ntc.zcu.cz> <4346B65D.3070604@colorado.edu> <4346C618.2040403@colorado.edu> <1b53cd125533019bfb2bb9accae79754@nist.gov> <6D67C710-0C86-4C28-91D6-184AEDBEE82C@ftw.at> Message-ID: On Nov 22, 2005, at 6:09 PM, Ed Schofield wrote:
> I have only one question for now.  Could you please explain the comment
>
> > dok_matrix.setdiag() appears to change the matrix shape if you're not
> > careful, rendering it pretty useless for building the matrices for this
> > problem
>
> in more detail?  The code for setdiag() is:
>
>     def setdiag(self, values, k=0):
>         N = len(values)
>         for n in range(N):
>             self[n, n+k] = values[n]
>         return
>
> which grows the matrix only if it's currently smaller than len(values)
> x len(values).  What's the problem this behaviour caused?  Would you
> prefer to fix the size of a dok_matrix in advance and have a run-time
> check on the len(values)?  Or was the problem with the second
> argument?
Well, I guess I find the result of:
>>> B = scipy.sparse.dok_matrix((3,3))
>>> a = scipy.ones((3,))
>>> B.setdiag(a,0)
>>> print B.todense()
[[ 1. 0. 0.]
 [ 0. 1. 0.]
 [ 0. 0. 1.]]
>>> B.setdiag(a,1)
>>> print B.todense()
[[ 1. 1. 0. 0.]
 [ 0. 1. 1. 0.]
 [ 0. 0. 1. 1.]]
to be surprising. I was expecting
[[ 1. 1. 0.]
 [ 0. 1. 1.]
 [ 0. 0. 1.]]
This may not be a reasonable expectation, and I'm not sure where I came by it, since neither PySparse nor Numeric seems to support sticking anything into a diagonal, much less something that's too long. Still, I think if I created an NxM matrix, then I probably want an NxM matrix. -- Jonathan E. Guyer, PhD Metallurgy Division National Institute of Standards and Technology From jh at oobleck.astro.cornell.edu Wed Nov 23 14:36:42 2005 From: jh at oobleck.astro.cornell.edu (Joe Harrington) Date: Wed, 23 Nov 2005 14:36:42 -0500 Subject: [SciPy-dev] Memory mapped files in scipy core In-Reply-To: <43838230.2050400@ee.byu.edu> (message from Travis Oliphant on Tue, 22 Nov 2005 13:40:16 -0700) References: <200511221410.jAMEAab8021880@oobleck.astro.cornell.edu> <43838230.2050400@ee.byu.edu> Message-ID: <200511231936.jANJagpB028493@oobleck.astro.cornell.edu> > This looks like a good routine to put in scipy_core if you don't mind > contributing it. Concat is an IDL routine. Now that you've unified the community, I'll be using scipy for my next project, but I'm not a heavy-duty scipy user yet, so I haven't started converting my library. So, I've attached concat below, in case you or someone else finds it important enough to convert and include.
It's pretty straightforward, but I know you're very busy. The idl2py converter might be able to do it. I'm happy to answer questions about it. I'm also happy if you don't want to deal with it, I'm just sending it since you were curious. --jh-- ;+ ; NAME: ; CONCAT ; ; PURPOSE: ; This function joins arrays along the specified dimension, ; expanding the dimensions to fit. Set kill0 or kill1 to ignore ; argument a0 or a1, respectively, and return the other. Unless ; noreset is set, the kill* keyword is then set to 0 so ; that it isn't set on future passes in loops. The idea of kill0 ; and kill1 is that expressions like a = concat(2, a, b) can be ; used in loops to collect values from b (see examples). This ; would be unnecessary if delvar worked reasonably, as one could ; delvar, a before the start of the loop to ensure the routine ; was reentrant. Optional arguments get passed to make_array. ; ; CATEGORY: ; Array manipulation. ; ; CALLING SEQUENCE: ; ; Result = CONCAT(A0, A1) ; ; INPUTS: ; A0: The first array to join. ; A1: The second array to join. Its contents will start at ; a non-zero Dim index, unless A0 is undefined. ; Dim: [1-7] The dimension in which to join A0 and A1. ; ; KEYWORDS: ; KILL0: If set, return A1, but first set KILL0 to 0 if ; NORESET is not set. ; KILL1: If set, return A0, but first set KILL1 to 0 if ; NORESET is not set. ; NORESET: If set, do not set KILL0 or KILL1 to zero. ; _EXTRA: Additional arguments (such as /NOZERO or ; VALUE=) are passed to MAKE_ARRAY. ; ; OUTPUTS: ; This function returns the concatenated array. ; ; SIDE EFFECTS: ; The KILL0 and KILL1 keywords are changed to zero in the caller ; if they are set (only affects one of these per call, KILL0 has ; priority). ; ; RESTRICTIONS: ; The concatenation is not done "in place", so it is generally ; faster and uses less memory to pre-allocate arrays and copy ; into them rather than using this routine, though computer ; scientists may consider that less pretty. 
; ; EXAMPLE: ; a0 = [1, 2, 3] ; a1 = [[4, 5], [6, 7]] ; print, concat(1, a0, a1) ; 1 2 3 4 5 ; 0 0 0 6 7 ; print, concat(1, a0, a1, value=17) ; 1 2 3 4 5 ; 17 17 17 6 7 ; print, concat(3, a0, a1, value=17) ; 1 2 3 ; 17 17 17 ; ; 4 5 17 ; 6 7 17 ; help, concat(3, a0, a1, value=17) ; INT = Array[3, 2, 2] ; ; To collect values calculated in a loop: ; for a1 = 0, 6, 1 do $ ; a0 = concat(1, a0, a1) ; print, a0 ; 0 1 2 3 4 5 6 ; To re-use a variable inside a function (since delvar does not ; work there): ; a0 = [100, 10000] ; kill0 = 1 ; for a1 = 0, 6, 1 do $ ; a0 = concat(1, a0, a1, kill0 = kill0) ; print, a0 ; 0 1 2 3 4 5 6 ; print, kill0 ; 0 ; ; MODIFICATION HISTORY: ; Written by: Joseph Harrington, Cornell. before 2001 Aug 22 ; jh at oobleck.astro.cornell.edu ; ; 28oct05 jh Changed sz from intarr to lonarr, added ; header, changed array parens to brackets. ;- function concat, dim, a0, a1, $ kill0=kill0, kill1=kill1, noreset=noreset, $ _extra=e if (not keyword_defined(a0)) or keyword_set(kill0) then begin if not keyword_set(noreset) then begin kill0 = 0 endif return, a1 endif if (not keyword_defined(a1)) or keyword_set(kill1) then begin if not keyword_set(noreset) then begin kill1 = 0 endif return, a0 endif if dim gt 7 or dim lt 1 then begin message, 'dim: '+whiteout(string(dim))+' must be 1-7' endif ; make array of sizes (don't care about dimensionality or total size) sz = lonarr(8,2) sz[*, *] = 1L for j = 0, 1, 1 do begin if j eq 0 then begin s = size(a0) endif else begin s = size(a1) endelse ; deal with scalars if s[0] eq 0 then begin s = [1, 1, s[1], 1] endif for i = 1, s[0], 1 do begin sz[i-1, j] = s[i] endfor sz[7, j] = s[i] endfor ; find size and type of final array, and make it typlam = max(sz[7,*]) dimlam = colfunc(sz[0:6, *], 'max') dimlam[dim-1] = total(sz[dim-1, *]) if keyword_set(e) then begin lam = make_array(dimension=dimlam, type=typlam, _extra=e) endif else begin lam = make_array(dimension=dimlam, type=typlam) endelse ; insert first array lam[0, 0, 0, 
0, 0, 0, 0] = a0 ; find place and insert second array ld = [0, 0, 0, 0, 0, 0, 0] ld[dim-1] = sz[dim-1, 0] ; there must be a better way... lam[ld[0], ld[1], ld[2], ld[3], ld[4], ld[5], ld[6]] = a1 return, lam end From ravi at ati.com Wed Nov 23 15:20:27 2005 From: ravi at ati.com (Ravikiran Rajagopal) Date: Wed, 23 Nov 2005 15:20:27 -0500 Subject: [SciPy-dev] Weirdness with matplotlib and new scipy Message-ID: <200511231520.27267.ravi@ati.com> After application of Daishi Harada's patch on 0.85, I tried to use it with SciPy core SVN from yesterday and get rather strange results: x = scipy.array( [16.5]*10 ) y = scipy.array( [19.5]*10 ) w = scipy.arange(10) # This plots a line at 16.0, not 16.5! pylab.plot( w, x ) # However this works perfectly with a blue box 0-9 x 16.5x19.5 f = pylab.figure() a = f.add_subplot( 1, 1, 1 ) a.fill( scipy.concatenate((w,w[::-1])), scipy.concatenate((x,y[::-1])) ) f.canvas.draw() Further, the legend boxes are too large, usually obscuring most of the picture with a polygonal patch (such as the one produced by the fill above) or at least a half without any polygonal patches. How can I help debug these issues? 
Regards, Ravi From guyer at nist.gov Wed Nov 23 15:44:27 2005 From: guyer at nist.gov (Jonathan Guyer) Date: Wed, 23 Nov 2005 15:44:27 -0500 Subject: [SciPy-dev] sparse matrix support status In-Reply-To: <438423E7.5090606@ntc.zcu.cz> References: <433CE24A.6040509@ee.byu.edu> <433D5BAD.8030801@ntc.zcu.cz> <433D690D.7060008@ee.byu.edu> <43468A59.9000503@ntc.zcu.cz> <4346B65D.3070604@colorado.edu> <4346C618.2040403@colorado.edu> <1b53cd125533019bfb2bb9accae79754@nist.gov> <438423E7.5090606@ntc.zcu.cz> Message-ID: <107e6e3bd9a15f11d060afd93ebbed26@nist.gov> On Nov 23, 2005, at 3:10 AM, Robert Cimrman wrote: > Jonathan Guyer wrote: >> I've finally done some benchmarking of scipy.sparse and PySparse and >> posted my results and comments on plone[*]: >> >> >> >> Bottom line is that, with a couple of exceptions, PySparse is both >> faster and less memory intensive than SciPy. I don't know whether >> anything can be lifted from PySparse to improve SciPy's >> implementation. > > Thanks for your 'great sparse shoot-out'! Just a little note: the link > in the section discussing update_add_at() does not work for me (PAGE > NOT > FOUND). Thanks. Sloppy markup (the trailing period got included) combined with an inexplicably missing '?'. I've checked all the links now (and for anybody who checked shortly after my announcement, I've fixed my previously sloppy markup such that the majority of the links now appear at all). -- Jonathan E.
Guyer, PhD Metallurgy Division National Institute of Standards and Technology From charlesr.harris at gmail.com Wed Nov 23 16:58:45 2005 From: charlesr.harris at gmail.com (Charles R Harris) Date: Wed, 23 Nov 2005 14:58:45 -0700 Subject: [SciPy-dev] patch: allow arrayobject.h to compile in c++ compiler In-Reply-To: <438367A8.3020904@colorado.edu> References: <4382C6AC.9090303@astraw.com> <4382D0E1.7020907@ee.byu.edu> <43834C02.5090403@astraw.com> <438367A8.3020904@colorado.edu> Message-ID: > > I'll make it up by working on numerical code I can actually do something > useful with. I'll try to contribute something for fitting, the included > polyfit is nice but numerically less uniform in error and stable (due to > condition issues) than a Chebyshev-based fit. There's also the lingering > issue of numerically stable evaluation of polyval()... I'll find > something to > pay for my mistakes with :) > > Apologies... > > Cheers, I've attached the Python routines I use for Chebychev stuff. It is also possible to solve ODEs over an interval by sampling at the Chebychev points and treating the derivative as a matrix. Of course, boundary conditions need to be added so that the resulting matrix isn't singular. Chuck -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... 
Name: Chebychev.py Type: application/octet-stream Size: 20212 bytes Desc: not available URL: From Fernando.Perez at colorado.edu Wed Nov 23 17:05:17 2005 From: Fernando.Perez at colorado.edu (Fernando Perez) Date: Wed, 23 Nov 2005 15:05:17 -0700 Subject: [SciPy-dev] patch: allow arrayobject.h to compile in c++ compiler In-Reply-To: References: <4382C6AC.9090303@astraw.com> <4382D0E1.7020907@ee.byu.edu> <43834C02.5090403@astraw.com> <438367A8.3020904@colorado.edu> Message-ID: <4384E79D.9050405@colorado.edu> Charles R Harris wrote: >>I'll make it up by working on numerical code I can actually do something >>useful with. I'll try to contribute something for fitting, the included >>polyfit is nice but numerically less uniform in error and stable (due to >>condition issues) than a Chebyshev-based fit. There's also the lingering >>issue of numerically stable evaluation of polyval()... I'll find >>something to >>pay for my mistakes with :) >> >>Apologies... >> >>Cheers, > > > > > I've attached the Python routines I use for Chebychev stuff. It is also > possible to solve ODEs over an interval by sampling at the Chebychev points > and treating the derivative as a matrix. Of course, boundary conditions need > to be added so that the resulting matrix isn't singular. Great, thanks. Thanksgiving food to go along with the turkey :) Cheers, f From cookedm at physics.mcmaster.ca Wed Nov 23 18:01:03 2005 From: cookedm at physics.mcmaster.ca (David M. Cooke) Date: Wed, 23 Nov 2005 18:01:03 -0500 Subject: [SciPy-dev] patch: allow arrayobject.h to compile in c++ compiler In-Reply-To: <4384BFCA.9080501@astraw.com> (Andrew Straw's message of "Wed, 23 Nov 2005 11:15:22 -0800") References: <4382C6AC.9090303@astraw.com> <4382D0E1.7020907@ee.byu.edu> <43834C02.5090403@astraw.com> <034A49BA-14B9-4022-B20A-E8EC998DFD49@physics.mcmaster.ca> <4384BFCA.9080501@astraw.com> Message-ID: Andrew Straw writes: > David M. 
Cooke wrote: > >>On Nov 22, 2005, at 11:49 , Andrew Straw wrote: >> >> >> >>>But would it make sense to rename Bool to ScipyBool or something less likely to >>>defined in other source? >>> >>> >> >>This is the reason for PY_ARRAY_TYPES_PREFIX. If you do this: >> >>#define PY_ARRAY_TYPES_PREFIX Scipy >>#include "scipy/arrayobject.h" >> >>then all the types that scipy defines will have Scipy prefixed (so >>ScipyBool, Scipybyte, Scipyuint, etc.). >> >> > > I haven't jumped into the source on this, but your suggestion doesn't > entirely seem to work. If I do as you suggest and include the "#define > PY_ARRAY_TYPES_PREFIX Scipy" before including arrayobject.h, I don't get > the redefinition of type int error (and thus my extension builds), but I > get the following warning: > > building 'fsee.FlySimWrap' extension > gcc options: '-pthread -fno-strict-aliasing -DNDEBUG -g -O3 -Wall > -Wstrict-prototypes -fPIC' > compile options: '-Isrc > -I/home/astraw/py24-amd64/lib/python2.4/site-packages/scipy/base/include > -I/usr/include/python2.4 -c' > gcc: src/CXX/cxxextensions.c > gcc: src/FlySim/skybox.cpp > gcc: src/FlySim/FlySimWrap.cpp > In file included from src/FlySim/FlySimWrap.cpp:16: > /home/astraw/py24-amd64/lib/python2.4/site-packages/scipy/base/include/scipy/arrayobject.h:25:1: > warning: "Bool" redefined > In file included from /usr/include/Producer/Types:37, > from /usr/include/Producer/Math:18, > from /usr/include/Producer/Camera:24, > from src/FlySim/FlySim.hpp:10, > from src/FlySim/FlySimWrap.hpp:9, > from src/FlySim/FlySimWrap.cpp:1: > /usr/include/X11/Xlib.h:96:1: warning: this is the location of the > previous definition What's the order of the includes? If you include arrayobject.h before the other includes, it should work. In general, this would be a pain to fix. arrayobject.h would have to save the old definition and restore it. -- |>|\/|< /--------------------------------------------------------------------------\ |David M. 
Cooke http://arbutus.physics.mcmaster.ca/dmc/ |cookedm at physics.mcmaster.ca From daishi at egcrc.net Wed Nov 23 19:51:07 2005 From: daishi at egcrc.net (daishi at egcrc.net) Date: Wed, 23 Nov 2005 16:51:07 -0800 Subject: [SciPy-dev] [matplotlib-devel] Weirdness with matplotlib and new scipy In-Reply-To: <200511231520.27267.ravi@ati.com> References: <200511231520.27267.ravi@ati.com> Message-ID: On Nov 23, 2005, at 12:20 PM, Ravikiran Rajagopal wrote: > After application of Daishi Harada's patch on 0.85, I tried to use it > with > SciPy core SVN from yesterday and get rather strange results: I'm sorry you're having troubles with the patch. Unfortunately, I can't seem to recreate your problem. I realize "works for me" isn't a particularly useful response, but I'm afraid that's the best I can do for now - and I'll be away again for Thanksgiving until next week. FWIW, I'm using the CVS matplotlib with the wx backend, and the new scipy w/atlas. d From schofield at ftw.at Wed Nov 23 20:26:29 2005 From: schofield at ftw.at (Ed Schofield) Date: Thu, 24 Nov 2005 01:26:29 +0000 Subject: [SciPy-dev] sparse matrix support status In-Reply-To: References: <433CE24A.6040509@ee.byu.edu> <433D5BAD.8030801@ntc.zcu.cz> <433D690D.7060008@ee.byu.edu> <43468A59.9000503@ntc.zcu.cz> <4346B65D.3070604@colorado.edu> <4346C618.2040403@colorado.edu> <1b53cd125533019bfb2bb9accae79754@nist.gov> <6D67C710-0C86-4C28-91D6-184AEDBEE82C@ftw.at> Message-ID: <52997AD9-D5F6-4A93-8B9A-F6987FB5CBFC@ftw.at> On 23/11/2005, at 7:26 PM, Jonathan Guyer wrote: > > Well, I guess I find the result of: > >>>> B = scipy.sparse.dok_matrix((3,3)) >>>> a = scipy.ones((3,)) >>>> B.setdiag(a,0) >>>> print B.todense() > [[ 1. 0. 0.] > [ 0. 1. 0.] > [ 0. 0. 1.]] >>>> B.setdiag(a,1) >>>> print B.todense() > [[ 1. 1. 0. 0.] > [ 0. 1. 1. 0.] > [ 0. 0. 1. 1.]] > > to be surprising. > > I was expecting > > [[ 1. 1. 0.] > [ 0. 1. 1.] > [ 0. 0. 1.]] > I agree this is more useful behaviour. 
I've changed this in SVN and added an explicit resize() method for dok_matrices. Thanks again for the feedback. -- Ed From guyer at nist.gov Wed Nov 23 21:21:50 2005 From: guyer at nist.gov (Jonathan Guyer) Date: Wed, 23 Nov 2005 21:21:50 -0500 Subject: [SciPy-dev] sparse matrix support status In-Reply-To: <52997AD9-D5F6-4A93-8B9A-F6987FB5CBFC@ftw.at> References: <433CE24A.6040509@ee.byu.edu> <433D5BAD.8030801@ntc.zcu.cz> <433D690D.7060008@ee.byu.edu> <43468A59.9000503@ntc.zcu.cz> <4346B65D.3070604@colorado.edu> <4346C618.2040403@colorado.edu> <1b53cd125533019bfb2bb9accae79754@nist.gov> <6D67C710-0C86-4C28-91D6-184AEDBEE82C@ftw.at> <52997AD9-D5F6-4A93-8B9A-F6987FB5CBFC@ftw.at> Message-ID: <2ddc150ccb4705af458136edca87530d@nist.gov> On Nov 23, 2005, at 8:26 PM, Ed Schofield wrote: > I agree this is more useful behaviour. I've changed this in SVN and > added an explicit resize() method for dok_matrices. > > Thanks again for the feedback. Thanks for the speedy turnaround. If others object to this change, I'd be OK with an error when len(values) won't fit on the specified diagonal. I just don't think the matrix shape should change. From schofield at ftw.at Wed Nov 23 21:39:29 2005 From: schofield at ftw.at (Ed Schofield) Date: Thu, 24 Nov 2005 03:39:29 +0100 (CET) Subject: [SciPy-dev] In-place operators and casting Message-ID: Hi all, I've been discussing the behaviour of matrix objects with Travis offline after I made a rather ugly patch. The problem I was trying to solve was the one described by Jonathan Taylor in the [Default type behaviour of array] thread, essentially that: >>> c = zeros(10) >>> c += rand(10) >>> c array([0, 0, 0, 0, 0, 0, 0, 0, 0, 0]) wasn't what he wanted, and he had to spend time figuring out what was wrong. My idea was to turn matrices into something more user-friendly than arrays for users migrating from Matlab, R, etc. by redefining matrices' in-place operators like += to have the same upcasting behaviour as the regular operators like +. 
Then this would be possible: >>> b = matrix(zeros(10)) >>> b matrix([[0, 0, 0, 0, 0, 0, 0, 0, 0, 0]]) >>> b += rand(10) >>> b matrix([[ 0.80751041, 0.61973329, 0.70726955, 0.94220288, 0.41340826, 0.39087675, 0.81454443, 0.25357685, 0.06850165, 0.19652445]]) Travis says that he doesn't think it makes sense to in-place cast an array (or matrix) to a different type, and that a floatzeros() function could be sufficient to avoid the problem above. But I think this only solves one instance of a more general usability problem with casting and in-place operators. I see two requirements for an intuitive += operator (and other = friends) without any nasty surprises. First, 'a += b' should give the same result as 'a = a + b', just more efficiently if possible, and this shouldn't eat up the data of unsuspecting users. This isn't true at the moment. Second, 'a += b' shouldn't change 'a' to a different object. This is true at the moment: >>> a = ones(3) >>> id(a) 140648904 >>> a += ones(3, complex) >>> id(a) 140648904 Another interpretation of this second point is that the type of an array shouldn't change once we've declared it. I think this is what Travis is reluctant to sacrifice for the sake of the first point. If a cast from b.dtype to a.dtype can lose information (like in these examples) I don't think it's possible for a += b to satisfy both these requirements. The meaning of "a += b" is ambiguous: does the user want a safe or unsafe cast? I propose instead that we raise an exception: >>> a = zeros(5) >>> a += rand(5) Traceback (most recent call last): File "", line 1, in ? TypeError: array cannot be safely cast to required type We currently have similar examples of type-checking: >>> array(rand(5), int) Traceback (most recent call last): File "", line 1, in ? TypeError: array cannot be safely cast to required type >>> a[:] = rand(5) Traceback (most recent call last): File "", line 1, in ? 
TypeError: array cannot be safely cast to required type This would allow Travis to remove another red warning from his book and should save users some grief if they haven't read it (or know it and still make the mistake). In some cases the user will know that the operation will result in a potentially unsafe cast and will want to proceed anyway. For these cases I'd argue that a more explicit notation is no bad thing. Two options are: >>> a += cast[int](rand(5)) >>> a += rand(5).astype(int) Another might be the 'FORCECAST' flag, but I'm not sure whether this is still supported. Comments? -- Ed From n_habili at hotmail.com Thu Nov 24 02:04:51 2005 From: n_habili at hotmail.com (Nariman Habili) Date: Thu, 24 Nov 2005 07:04:51 +0000 Subject: [SciPy-dev] C API Message-ID: Hi there, I would like to know how to extract a value (eg int) from a PyObject that contains an ndarray. I'm using boost.python to expose classes to Python. I'm passing an ndarray to C++ by reference. However, since boost.python hasn't been updated yet to scipy core, I'm using the C API. I've tried PyArray_GETITEM, however it returns a PyObject, and I'm not sure how to extract an int or double etc from it. I need the C types so I can interface python with other parts of my code. Would boost python be updated to scipy core soon? Thanks for your help. From robert.kern at gmail.com Thu Nov 24 02:17:52 2005 From: robert.kern at gmail.com (Robert Kern) Date: Wed, 23 Nov 2005 23:17:52 -0800 Subject: [SciPy-dev] C API In-Reply-To: References: Message-ID: <43856920.4040109@gmail.com> Nariman Habili wrote: > Hi there, > > I would like to know how to extract a value (eg int) from a PyObject that > contains an ndarray. I'm using boost.python to expose classes to Python. I'm > passing an ndarray to C++ by reference. However, since boost.python hasn't > been updated yet to scipy core, I'm using the C API. 
> I've tried PyArray_GETITEM, however it returns a PyObject, and I'm not sure > how to extract an int or double etc from it. I need the C types so I can > interface python with other parts of my code. I'm not sure what you mean by "a PyObject that contains an ndarray." If, on the other hand, you mean "a PyObject that *is* an ndarray," then it's pretty easy. Read the file scipy/doc/CAPI.txt, specifically, the section "Passing Data Type information to C-code". > Would boost python be updated to scipy core soon? You'll have to ask the boost.python developers about that. It shouldn't be hard. The old Numeric C API still works. You just have to change #include "Numeric/arrayobject.h" to #include "scipy/arrayobject.h" They will probably want to do more, though, because the new API has extended the old one in interesting ways. -- Robert Kern robert.kern at gmail.com "In the fields of hell where the grass grows high Are the graves of dreams allowed to die." -- Richard Harter From oliphant at ee.byu.edu Thu Nov 24 03:53:11 2005 From: oliphant at ee.byu.edu (Travis Oliphant) Date: Thu, 24 Nov 2005 01:53:11 -0700 Subject: [SciPy-dev] C API In-Reply-To: References: Message-ID: <43857F77.1050705@ee.byu.edu> Nariman Habili wrote: >Hi there, > >I would like to know how to extract a value (eg int) from a PyObject that >contains an ndarray. I'm using boost.python to expose classes to Python. I'm >passing an ndarray to C++ by reference. However, since boost.python hasn't >been updated yet to scipy core, I'm using the C API. >I've tried PyArray_GETITEM, however it returns a PyObject, and I'm not sure >how to extract an int or double etc from it. I need the C types so I can >interface python with other parts of my code. > > PyArray_DATA(obj) gives you a pointer to the first element of the ndarray. This assumes that obj is actually an ndarray object, of course. The data flag bits in PyArray_FLAGS(obj) are critical to dealing with the data correctly. 
It is possible that the data is not in machine byte order, not aligned properly (which can cause BUS errors on some platforms if you try certain operations), and not writeable. Of course in C you can do whatever you want, but if you don't respect the flags, you may get gibberish and even segfault your machine. If PyArray_CHKFLAGS(obj, BEHAVED_FLAGS) is true then you don't have to worry about the above paragraph, otherwise you do How to get elements beyond the first depends on PyArray_STRIDES(obj) and PyArray_DIMS(obj) If PyArray_CHKFLAGS(obj, CARRAY_FLAGS) is true then access to elements beyond the first is simple as the data region is a simple contiguous chunk of memory with each element placed right after the next in C-style contiguous fashion (last index varies the fastest). Otherwise you need to understand the concept of striding in order to access elements beyond the first. There is also the Array iterator which you can construct and use the PyArray_ITER_GOTO() macro to jump to a particular element in the array (regardless of the flags). If the array is not BEHAVED, then you need to be careful about how you access it, but the dataptr field of the array iterator structure will point to the right place. So, is this all clear as mud? If you were more clear about which element of the array you wanted and how the array was created, I could be more directly helpful instead of exposing you to all the possibilities. -Travis From cimrman3 at ntc.zcu.cz Thu Nov 24 06:03:39 2005 From: cimrman3 at ntc.zcu.cz (Robert Cimrman) Date: Thu, 24 Nov 2005 12:03:39 +0100 Subject: [SciPy-dev] In-place operators and casting In-Reply-To: References: Message-ID: <43859E0B.1030003@ntc.zcu.cz> Ed Schofield wrote: > Hi all, > > I've been discussing the behaviour of matrix objects with Travis offline > after I made a rather ugly patch. 
The problem I was trying to solve was > the one described by Jonathan Taylor in the [Default type behaviour of > array] thread, essentially that: > > > >>>>c = zeros(10) >>>>c += rand(10) >>>>c > > array([0, 0, 0, 0, 0, 0, 0, 0, 0, 0]) > > wasn't what he wanted, and he had to spend time figuring out what was > wrong. My idea was to turn matrices into something more user-friendly > than arrays for users migrating from Matlab, R, etc. by redefining > matrices' in-place operators like += to have the same upcasting behaviour > as the regular operators like +. Then this would be possible: > > > >>>>b = matrix(zeros(10)) >>>>b > > matrix([[0, 0, 0, 0, 0, 0, 0, 0, 0, 0]]) > >>>>b += rand(10) >>>>b > > matrix([[ 0.80751041, 0.61973329, 0.70726955, 0.94220288, 0.41340826, > 0.39087675, 0.81454443, 0.25357685, 0.06850165, 0.19652445]]) > > > Travis says that he doesn't think it makes sense to in-place cast an array > (or matrix) to a different type, and that a floatzeros() function could be > sufficient to avoid the problem above. But I think this only solves one > instance of a more general usability problem with casting and in-place > operators. > > I see two requirements for an intuitive += operator (and other = > friends) without any nasty surprises. First, 'a += b' should give the > same result as 'a = a + b', just more efficiently if possible, and this > shouldn't eat up the data of unsuspecting users. This isn't true at the > moment. Second, 'a += b' shouldn't change 'a' to a different object. > This is true at the moment: > > >>>>a = ones(3) >>>>id(a) > > 140648904 > >>>>a += ones(3, complex) >>>>id(a) > > 140648904 > > Another interpretation of this second point is that the type of an array > shouldn't change once we've declared it. I think this is what Travis is > reluctant to sacrifice for the sake of the first point. 
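[Editor's note: the rule proposed above — keep the target's type, and raise an exception rather than silently perform an information-losing cast — can be mocked up in a few lines of plain Python. This is only a sketch; `can_cast_safely` and `iadd` are invented names, not scipy API.]

```python
def can_cast_safely(src, dst):
    # float -> int loses information; int -> float (at these sizes) does not.
    order = {int: 0, float: 1, complex: 2}
    return order[src] <= order[dst]

def iadd(a, a_type, b, b_type):
    """'a += b' under the proposed rule: keep a's type or raise."""
    if not can_cast_safely(b_type, a_type):
        raise TypeError("array cannot be safely cast to required type")
    return [a_type(x + y) for x, y in zip(a, b)]

# Safe direction: adding ints into a float "array" upcasts each element.
assert iadd([1.0, 2.0], float, [1, 1], int) == [2.0, 3.0]

# Unsafe direction: adding floats into an int "array" raises instead of
# silently truncating, which is the surprise the thread is about.
try:
    iadd([0, 0], int, [0.5, 0.5], float)
    raised = False
except TypeError:
    raised = True
assert raised
```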
> > If a cast from b.dtype to a.dtype can lose information (like in these > examples) I don't think it's possible for a += b to satisfy both these > requirements. The meaning of "a += b" is ambiguous: does the user want a > safe or unsafe cast? I propose instead that we raise an exception: > > >>>>a = zeros(5) >>>>a += rand(5) > > Traceback (most recent call last): > File "", line 1, in ? > TypeError: array cannot be safely cast to required type > > We currently have similar examples of type-checking: > > >>>>array(rand(5), int) > > Traceback (most recent call last): > File "", line 1, in ? > TypeError: array cannot be safely cast to required type > > >>>>a[:] = rand(5) > > Traceback (most recent call last): > File "", line 1, in ? > TypeError: array cannot be safely cast to required type > > This would allow Travis to remove another red warning from his book and > should save users some grief if they haven't read it (or know it and > still make the mistake). > > In some cases the user will know that the operation will result in a > potentially unsafe cast and will want to proceed anyway. For these cases > I'd argue that a more explicit notation is no bad thing. Two options > are: > > >>>>a += cast[int](rand(5)) >>>>a += rand(5).astype(int) > > > Another might be the 'FORCECAST' flag, but I'm not sure whether this is > still supported. > > Comments? I am not friend of silent upcasting, since often (my :-)) extension modules require a certain data type. I would prefer throwing the exception with the possibility of overriding via .astype(). r. From aisaac at american.edu Thu Nov 24 09:02:52 2005 From: aisaac at american.edu (Alan G Isaac) Date: Thu, 24 Nov 2005 09:02:52 -0500 Subject: [SciPy-dev] In-place operators and casting In-Reply-To: References: Message-ID: On Thu, 24 Nov 2005, Ed Schofield apparently wrote: >>> c = zeros(10) >>> c += rand(10) >>> c array([0, 0, 0, 0, 0, 0, 0, 0, 0, 0]) > Comments? 
User perspective: Having that result differ from the result of c=array([ci+ri for ci,ri in zip(c,rand(10))]) certainly feels weird. I doubt anyone expects or wants the current behavior except those who have already been bitten, figured it out, and then got used to it. That said, there seems to be a conflict between a "user friendly" approach and the possible fixes. I think the "user friendly" approach would be a flag that sets the default return type of scipy functions (like zeros) to float. Anyway, in this case I favor an exception. Otherwise users will be constantly surprised.

Cheers, Alan Isaac

From pebarrett at gmail.com Thu Nov 24 10:17:00 2005 From: pebarrett at gmail.com (Paul Barrett) Date: Thu, 24 Nov 2005 10:17:00 -0500 Subject: [SciPy-dev] In-place operators and casting In-Reply-To: References: Message-ID: <40e64fa20511240717n51ff619fj11ae1b45a2a0c067@mail.gmail.com>

On 11/23/05, Ed Schofield wrote:
> wasn't what he wanted, and he had to spend time figuring out what was
> wrong. My idea was to turn matrices into something more user-friendly
> than arrays for users migrating from Matlab, R, etc. by redefining
> matrices' in-place operators like += to have the same upcasting behaviour
> as the regular operators like +. Then this would be possible:
>
> >>> b = matrix(zeros(10))
> >>> b
> matrix([[0, 0, 0, 0, 0, 0, 0, 0, 0, 0]])
> >>> b += rand(10)
> >>> b
> matrix([[ 0.80751041, 0.61973329, 0.70726955, 0.94220288, 0.41340826,
> 0.39087675, 0.81454443, 0.25357685, 0.06850165, 0.19652445]])

I've never been on the right side of these discussions, so take my opinion with a grain of salt. As you noted, a += b is really just shorthand for a = a + b, so upcasting makes sense to me. Either way the user really needs to know what he is doing. In one case he gets the wrong answer, makes several changes to the code because he thinks he has made a mistake, and then realizes it is the lack of upcasting.
He was hoping to do a quick and dirty calculation and instead has wasted time with a quirk of the implementation. The other case is that the user gets the right answer, but it takes too long to execute, so he realizes that he needs to optimize the code, which he does by making some small changes. I have personally experienced such situations and find the first case the most irritating, so I vote for upcasting. We might also want to consider how long it took Guido to add in-place operators to Python, since he did not consider them necessary - at least in terms of code efficiency.

-- Paul

--
Paul Barrett, PhD          Johns Hopkins University
Assoc. Research Scientist  Dept of Physics and Astronomy
Phone: 410-516-5190        Baltimore, MD 21218

-------------- next part -------------- An HTML attachment was scrubbed... URL:

From Fernando.Perez at colorado.edu Thu Nov 24 13:42:36 2005 From: Fernando.Perez at colorado.edu (Fernando Perez) Date: Thu, 24 Nov 2005 11:42:36 -0700 Subject: [SciPy-dev] In-place operators and casting In-Reply-To: <40e64fa20511240717n51ff619fj11ae1b45a2a0c067@mail.gmail.com> References: <40e64fa20511240717n51ff619fj11ae1b45a2a0c067@mail.gmail.com> Message-ID: <4386099C.8030903@colorado.edu>

Paul Barrett wrote:
> As you noted, a += b is really just a short hand for a = a + b, so upcasting
> makes sense to me.

I'm afraid it's a bit more subtle than that in python, due to the fundamental distinction between mutable and immutable objects. Hopefully this example will clarify things:

In [1]: objs = [['hi'],9]

In [2]: for o in objs:
   ...:     id1 = id(o)
   ...:     o += o
   ...:     id2 = id(o)
   ...:     if id1==id2:
   ...:         print 'Object <%s> was modified in-place' % o
   ...:     else:
   ...:         print 'Object <%s> was replaced by a new one' % o
   ...:
Object <['hi', 'hi']> was modified in-place
Object <18> was replaced by a new one

For a list, a+=a is NOT shorthand for a = a+a, because the operation keeps the same python object.
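[Editor's note: what selects between the two behaviours Fernando shows is whether the type defines __iadd__ — if it does, += can mutate the object in place; if not, Python falls back to __add__ and rebinds the name to the result. A minimal sketch with invented class names:]

```python
class InPlace(object):
    """Defines __iadd__, so += mutates the existing object."""
    def __init__(self, v):
        self.v = v
    def __iadd__(self, other):
        self.v += other.v
        return self                          # the same object comes back

class Rebinding(object):
    """No __iadd__: += falls back to __add__ and rebinds the name."""
    def __init__(self, v):
        self.v = v
    def __add__(self, other):
        return Rebinding(self.v + other.v)   # a brand-new object

a = InPlace(1)
before = id(a)
a += InPlace(2)
assert id(a) == before and a.v == 3          # modified in place, like a list

b = Rebinding(1)
before = id(b)
b += Rebinding(2)
assert id(b) != before and b.v == 3          # replaced by a new object, like an int
```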
For integers, on the other hand, it is: a += a under the hood does the equivalent of 'tmp = a+a;a = tmp', creating a new object which is rebound to the name 'a'. As far as I understand (please correct me if I'm wrong), the basic python behavior for builtin types is, for any operator with an in-place version: a = expr <==> a = a expr iff a is an immutable object. If a is mutable, the operation may be implemented truly in-place, without object rebindings and by shrinking/growing/reallocating the object as needed. If we are to follow the default semantics of the language for in-place operations, then I suppose that we shouldn't allow object ID changes, and raise exceptions in cases where an object re-allocation would be required to satisfy the in-place operation. As convenient as I find the in-place operators, I tend to think that due to the mutable/immutable subtleties of python, they have too much gotcha potential in the language. I almost wonder if Python wouldn't be better off without them... Cheers, f From a.schmolck at gmx.net Thu Nov 24 14:54:39 2005 From: a.schmolck at gmx.net (Alexander Schmolck) Date: Thu, 24 Nov 2005 19:54:39 +0000 Subject: [SciPy-dev] some scipy.io.mio problems Message-ID: Hi, I've recently patched mio.py in my ubuntu install of scipy with svn repository code because I needed to read v6 matlab files. I think I've found some problems that are still present in the latest svn version (just checked) Firstly line 557 (in _parse_mimatrix) if type == mxCHAR_CLASS: should read if dclass == mxCHAR_CLASS I also suspect line 624 should be s/KeyError/(KeyError,AttributeError)/, i.e. try: res = res + _unit_imag[imag.dtypechar] * imag except (KeyError,AttributeError): res = res + 1j*imag Finally loadmat searches sys.path for the mat file. It's not obvious to me why that would be a good idea, but shouldn't it at least return the *first* match? I.e. 
shouldn't that be (in line 735):

    full_name = test_name; break

as opposed to just

    full_name = test_name

Finally, as a mere suggestion, I'd find it convenient if mat_struct were either made a bit more featureful or a part of the official interface for which the user is free to supply a drop-in replacement (with a __repr__ method, for example). For example, I currently use my somewhat baroque `EasyStruct' class (which includes __repr__ and most of the features of a dict) to get a more convenient container for the loaded mat file as follows:

    scipy.io.mio.mat_struct = EasyStruct
    res = EasyStruct(**loadmat(bla))

'as

----8<----Code for EasyStruct----8<------

class EasyStruct(object):
    r"""
    Examples:
    >>> brian = EasyStruct(name="Brian", age=30)
    >>> brian.name
    'Brian'
    >>> brian.age
    30
    >>> brian.life = "short"
    >>> brian
    EasyStruct(age=30, life='short', name='Brian')
    >>> del brian.life
    >>> brian == EasyStruct(name="Brian", age=30)
    True
    >>> brian != EasyStruct(name="Jesus", age=30)
    True
    >>> len(brian)
    2

    Call the object to create a clone:
    >>> brian() is not brian and brian(name="Jesus") == EasyStruct(name="Jesus", age=30)
    True

    Conversion to/from dict:
    >>> EasyStruct(**dict(brian)) == brian
    True

    Evil Stuff:
    >>> brian['name', 'age']
    ('Brian', 30)
    >>> brian['name', 'age'] = 'BRIAN', 'XXX'
    >>> brian
    EasyStruct(age='XXX', name='BRIAN')
    """
    def __init__(self, **kwargs):
        self.__dict__.update(kwargs)
    def __call__(self, **kwargs):
        import copy
        res = copy.copy(self)
        res.__init__(**kwargs)
        return res
    def __eq__(self, other):
        return self.__dict__ == other.__dict__
    def __ne__(self, other):
        return not self.__eq__(other)
    def __len__(self):
        return len([k for k in self.__dict__.iterkeys()
                    if not (k.startswith('__') or k.endswith('__'))])
    # FIXME rather perverse
    def __getitem__(self, nameOrNames):
        if isString(nameOrNames):
            return self.__dict__[nameOrNames]
        else:
            return tuple([self.__dict__[n] for n in nameOrNames])
    # FIXME rather perverse
    def __setitem__(self, nameOrNames, valueOrValues):
        if isString(nameOrNames):
            self.__dict__[nameOrNames] = valueOrValues
        else:
            for (n, v) in zip(nameOrNames, valueOrValues):
                self.__dict__[n] = v
    def __contains__(self, key):
        return key in self.__dict__ and not (key.startswith('__') or key.endswith('__'))
    def __iter__(self):
        for (k, v) in self.__dict__.iteritems():
            if not (k.startswith('__') or k.endswith('__')):
                yield k, v
    def __repr__(self):
        return mkRepr(self, **vars(self))

def isString(obj):
    return isinstance(obj, basestring)

def mkRepr(instance, *argls, **kwargs):
    r"""Convenience function to implement ``__repr__``.

    `kwargs` values are ``repr``-ed. Special behaviour for ``instance=None``:
    just the arguments are formatted.

    Example:
    >>> class Thing:
    ...     def __init__(self, color, shape, taste=None):
    ...         self.color, self.shape, self.taste = color, shape, taste
    ...     def __repr__(self):
    ...         return mkRepr(self, self.color, self.shape, taste=self.taste)
    ...
    >>> maggot = Thing('white', 'cylindrical', 'chicken')
    >>> maggot
    Thing('white', 'cylindrical', taste='chicken')
    >>> Thing('Color # 132942430-214809804-412430988081-241234', 'unkown',
    ...       taste=maggot)
    Thing('Color # 132942430-214809804-412430988081-241234', 'unkown', taste=Thing('white', 'cylindrical', taste='chicken'))
    """
    width = 79
    maxIndent = 15
    minIndent = 2
    args = map(repr, argls) + ["%s=%r" % (k, v) for (k, v) in ipsort(kwargs.items())]
    if instance is not None:
        start = "%s(" % instance.__class__.__name__
        args[-1] += ")"
    else:
        start = ""
    if len(start) <= maxIndent and len(start) + len(args[0]) <= width and \
       max(map(len, args)) <= width:  # XXX mag of last condition bit arbitrary
        indent = len(start)
        args[0] = start + args[0]
        if sum(map(len, args)) + 2*(len(args) - 1) <= width:
            return ", ".join(args)
    else:
        indent = minIndent
        args[0] = start + "\n" + " " * indent + args[0]
    return (",\n" + " " * indent).join(args)

From n_habili at hotmail.com Fri Nov 25 01:47:36 2005 From: n_habili at hotmail.com (Nariman Habili) Date: Fri, 25 Nov 2005 06:47:36 +0000 Subject: [SciPy-dev] C API
In-Reply-To: <43857F77.1050705@ee.byu.edu> Message-ID: Travis and Robert, thanks very much for your help. I now have a better understanding of how to access ndarray elements from C++. The application is mainly for image processing. I've been using boost.python to pass numeric arrays from Python to C++ by reference, eg void FeatureDetector(numeric::array &input_image, numeric::array &feature_image); Therefore I need to access every element of the input numeric array (in most cases representing images), calculate the image features (eg, corner points) in C++ and then output the modified image, again via reference. With the information you provided, I can now access the ndarray elements. Travis, I have purchased a copy of your "Guide to SciPy: Core System", some sections seem to be missing, for example section 13.2, "Using the Array iterator in C". I would appreciate it if you could explain to me how I can use the array iterator. I'm currently using boost.python to expose C++ classes and methods to Python. I've also used swig in the past. I would like to know how I can expose C++ classes and methods to Python using just the C API. Any examples or references would be appreciated. Sorry if this question sounds obvious; I'm new to the C API. Nariman >From: Travis Oliphant >Reply-To: SciPy Developers List >To: SciPy Developers List >Subject: Re: [SciPy-dev] C API >Date: Thu, 24 Nov 2005 01:53:11 -0700 > >Nariman Habili wrote: > > >Hi there, > > > >I would like to know how to extract a value (eg int) from a PyObject that > >contains an ndarray. I'm using boost.python to expose classes to Python. >I'm > >passing an ndarray to C++ by reference. However, since boost.python >hasn't > >been updated yet to scipy core, I'm using the C API. > >I've tried PyArray_GETITEM, however it returns a PyObject, and I'm not >sure > >how to extract an int or double etc from it. I need the C types so I can > >interface python with other parts of my code. 
> > > > > >PyArray_DATA(obj) gives you a pointer to the first element of the ndarray. >This assumes that obj is actually an ndarray object, of course. > >The data flag bits in PyArray_FLAGS(obj) are critical to dealing with >the data correctly. > >It is possible that the data is not in machine byte order, not aligned >properly (which can cause >BUS errors on some platforms if you try certain operations), and not >writeable. Of course in C >you can do whatever you want, but if you don't respect the flags, you >may get gibberish and >even segfault your machine. > >If PyArray_CHKFLAGS(obj, BEHAVED_FLAGS) is true then you don't have to >worry about the above paragraph, otherwise you do > >How to get elements beyond the first depends on PyArray_STRIDES(obj) and >PyArray_DIMS(obj) > >If PyArray_CHKFLAGS(obj, CARRAY_FLAGS) is true then access to elements >beyond the first is simple >as the data region is a simple contiguous chunk of memory with each >element placed right after the next in C-style contiguous fashion (last >index varies the fastest). > >Otherwise you need to understand the concept of striding in order to >access elements beyond the first. > >There is also the Array iterator which you can construct and use the >PyArray_ITER_GOTO() macro to jump to a particular element in the array >(regardless of the flags). If the array is not BEHAVED, then you need >to be careful about how you access it, but the dataptr field of the >array iterator structure will point to the right place. > >So, is this all clear as mud? If you were more clear about which >element of the array you wanted and how the array was created, I could >be more directly helpful instead of exposing you to all the possibilities. 
> >-Travis > >_______________________________________________ >Scipy-dev mailing list >Scipy-dev at scipy.net >http://www.scipy.net/mailman/listinfo/scipy-dev From charlesr.harris at gmail.com Fri Nov 25 18:58:32 2005 From: charlesr.harris at gmail.com (Charles R Harris) Date: Fri, 25 Nov 2005 16:58:32 -0700 Subject: [SciPy-dev] In-place operators and casting In-Reply-To: <40e64fa20511240717n51ff619fj11ae1b45a2a0c067@mail.gmail.com> References: <40e64fa20511240717n51ff619fj11ae1b45a2a0c067@mail.gmail.com> Message-ID: On 11/24/05, Paul Barrett wrote: > > On 11/23/05, Ed Schofield wrote: > > > > > > > > > As you noted, a += b is really just a short hand for a = a + b, so > upcasting makes sense to me. Either way the user really > I never think of it having that meaning. I think of it as shorthand saying reuse the same memory used by the variable a so that the operation is efficient and parsimonious. The variable a, for instance, could reside in a register, which is where these notations came from in c; the old DEC machines had an asm instruction for ++a and a += 3 is also a natural expression when a is in a register. So in the case cited, I tend to think of a as residing in a register and a += b as short hand for a corresponding machine instruction saying to add b to the register value. Anyway, Matlab pretty much sidesteps these problems by making everything double. Using actual integers in Matlab is unnatural. Because scipy allows far more control over types, and because I see a += b as an expression about memory usage, I vote not to promote the result. If you get the wrong answer, tough. However, it might be nice to throw an error in these cases as using different types is just about morally equivalent to using arrays of different dimensions. Chuck -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From ravi at ati.com Sat Nov 26 14:24:20 2005 From: ravi at ati.com (Ravikiran Rajagopal) Date: Sat, 26 Nov 2005 14:24:20 -0500 Subject: [SciPy-dev] [matplotlib-devel] Weirdness with matplotlib and new scipy In-Reply-To: References: <200511231520.27267.ravi@ati.com> Message-ID: <200511261424.20526.ravi@ati.com> On Wednesday 23 November 2005 19:51, daishi at egcrc.net wrote: > Unfortunately, I can't seem to recreate your problem. The problem is that I have both Numeric[1] and the new scipy installed. When both are installed, the Numeric headers are picked up by default during scipy compilation. By forcing them to pick up the scipy headers, the problem was resolved. But it is a rather strange coincidence that the error results in truncated numbers rather than something drastic. Ravi [1] Reason: It will be a while before all my Numeric-based libraries are ported to the new scipy; porting is trivial, but regression testing takes time, especially given that the new scipy C API is not quite stable yet and that I haven't yet ported boost.python.numeric to scipy. From stephen.walton at csun.edu Mon Nov 28 11:34:45 2005 From: stephen.walton at csun.edu (Stephen Walton) Date: Mon, 28 Nov 2005 08:34:45 -0800 Subject: [SciPy-dev] config_fc and bdist_rpm Message-ID: <438B31A5.1090106@csun.edu> It appears that it is still the case that config_fc is ignored when one does bdist_rpm instead of build for building Scipy_core and Scipy. That is, on my system g77 is always used for the build, even when I give the command python setup.py config_fc --fcompiler=absoft bdist_rpm Is there an easy fix? 
From pearu at scipy.org Mon Nov 28 13:21:23 2005 From: pearu at scipy.org (Pearu Peterson) Date: Mon, 28 Nov 2005 12:21:23 -0600 (CST) Subject: [SciPy-dev] config_fc and bdist_rpm In-Reply-To: <438B31A5.1090106@csun.edu> References: <438B31A5.1090106@csun.edu> Message-ID: On Mon, 28 Nov 2005, Stephen Walton wrote: > It appears that it is still the case that config_fc is ignored when one > does bdist_rpm instead of build for building Scipy_core and Scipy. That > is, on my system g77 is always used for the build, even when I give the > command > > python setup.py config_fc --fcompiler=absoft bdist_rpm > > Is there an easy fix? bdist_rpm runs setup.py file with its own options. If someone could figure out how to alter bdist_rpm options without modifying distutils then that would be great. Otherwise we need to patch bdist_rpm in scipy.distutils. Pearu From a.h.jaffe at gmail.com Mon Nov 28 14:37:53 2005 From: a.h.jaffe at gmail.com (Andrew Jaffe) Date: Mon, 28 Nov 2005 19:37:53 +0000 Subject: [SciPy-dev] problem with linalg.cholesky? Message-ID: hi all, (apologies that a similar question has appeared elsewhere...) In the newest incarnation of scipy_core, I am having trouble with the cholesky(a) routine. Here is some minimal code reproducing the bug (on OS X) ------------------------------------------------------------ from scipy import identity, __core_version__, Float64 import scipy.linalg as la print 'Scipy version: ', __core_version__ i = identity(4, Float64) print 'identity matrix:' print i print 'about to get cholesky decomposition' c = la.cholesky(i) print c ------------------------------------------------------------ which gives ------------------------------------------------------------ Scipy version: 0.6.1 identity matrix: [[ 1. 0. 0. 0.] [ 0. 1. 0. 0.] [ 0. 0. 1. 0.] [ 0. 0. 0. 1.]] about to get cholesky decomposition Traceback (most recent call last): File "/Users/jaffe/Desktop/bad_cholesky.py", line 13, in ? 
c = la.cholesky(i) File "/Library/Frameworks/Python.framework/Versions/2.4/lib/ python2.4/site-packages/scipy/linalg/basic_lite.py", line 117, in cholesky_decomposition results = lapack_routine('L', n, a, m, 0) lapack_lite.LapackError: Parameter a is not contiguous in lapack_lite.dpotrf ------------------------------------------------------------ (The cholesky decomposition in this case should just be the matrix itself; the same error occurs with a complex matrix.) Any ideas? Could this have anything to do with _CastCopyAndtranspose in Basic_lite.py? (Since there are few other candidates for anything that actually changes the matrix.) Thanks in advance, A ______________________________________________________________________ Andrew Jaffe a.jaffe at imperial.ac.uk Astrophysics Group +44 207 594-7526 Blackett Laboratory, Room 1013 FAX 7541 Imperial College, Prince Consort Road London SW7 2AZ ENGLAND http://astro.imperial.ac.uk/~jaffe From nwagner at mecha.uni-stuttgart.de Mon Nov 28 15:25:08 2005 From: nwagner at mecha.uni-stuttgart.de (Nils Wagner) Date: Mon, 28 Nov 2005 21:25:08 +0100 Subject: [SciPy-dev] problem with linalg.cholesky? In-Reply-To: References: Message-ID: On Mon, 28 Nov 2005 19:37:53 +0000 Andrew Jaffe wrote: > hi all, > > (apologies that a similar question has appeared >elsewhere...) > > In the newest incarnation of scipy_core, I am having >trouble with the > cholesky(a) routine. 
Here is some minimal code >reproducing the bug > (on OS X) > > ------------------------------------------------------------ > from scipy import identity, __core_version__, Float64 > import scipy.linalg as la > print 'Scipy version: ', __core_version__ > i = identity(4, Float64) > print 'identity matrix:' > print i > > print 'about to get cholesky decomposition' > c = la.cholesky(i) > print c > ------------------------------------------------------------ > > which gives > > ------------------------------------------------------------ > Scipy version: 0.6.1 > identity matrix: > [[ 1. 0. 0. 0.] > [ 0. 1. 0. 0.] > [ 0. 0. 1. 0.] > [ 0. 0. 0. 1.]] > about to get cholesky decomposition > Traceback (most recent call last): > File "/Users/jaffe/Desktop/bad_cholesky.py", line 13, >in ? > c = la.cholesky(i) > File >"/Library/Frameworks/Python.framework/Versions/2.4/lib/ > python2.4/site-packages/scipy/linalg/basic_lite.py", >line 117, in > cholesky_decomposition > results = lapack_routine('L', n, a, m, 0) > lapack_lite.LapackError: Parameter a is not contiguous >in > lapack_lite.dpotrf > ------------------------------------------------------------ > > (The cholesky decomposition in this case should just be >the matrix > itself; the same error occurs with a complex matrix.) > > > Any ideas? Could this have anything to do with >_CastCopyAndtranspose > in Basic_lite.py? (Since there are few other candidates >for anything > that actually changes the matrix.) 
> > Thanks in advance, > > A > > > ______________________________________________________________________ > Andrew Jaffe > a.jaffe at imperial.ac.uk > Astrophysics Group > +44 207 594-7526 > Blackett Laboratory, Room 1013 FAX > 7541 > Imperial College, Prince Consort Road > London SW7 2AZ ENGLAND > http://astro.imperial.ac.uk/~jaffe > > > _______________________________________________ > Scipy-dev mailing list > Scipy-dev at scipy.net > http://www.scipy.net/mailman/listinfo/scipy-dev Scipy version: 0.7.1.1530 identity matrix: [[ 1. 0. 0. 0.] [ 0. 1. 0. 0.] [ 0. 0. 1. 0.] [ 0. 0. 0. 1.]] about to get cholesky decomposition [[ 1. 0. 0. 0.] [ 0. 1. 0. 0.] [ 0. 0. 1. 0.] [ 0. 0. 0. 1.]] Nils From a.h.jaffe at gmail.com Mon Nov 28 16:31:05 2005 From: a.h.jaffe at gmail.com (Andrew Jaffe) Date: Mon, 28 Nov 2005 21:31:05 +0000 Subject: [SciPy-dev] problem with linalg.cholesky? In-Reply-To: References: Message-ID: Nils Wagner wrote: > On Mon, 28 Nov 2005 19:37:53 +0000 > Andrew Jaffe wrote: >> >>In the newest incarnation of scipy_core, I am having >>trouble with the cholesky(a) routine. Here is some minimal code >>reproducing the bug (on OS X) >> >>------------------------------------------------------------ >>from scipy import identity, __core_version__, Float64 >>import scipy.linalg as la >>print 'Scipy version: ', __core_version__ >>i = identity(4, Float64) >>print 'identity matrix:' >>print i >> >>print 'about to get cholesky decomposition' >>c = la.cholesky(i) >>print c >>------------------------------------------------------------ >> >>which gives >> >>------------------------------------------------------------ >>Scipy version: 0.6.1 >>identity matrix: >>[[ 1. 0. 0. 0.] >> [ 0. 1. 0. 0.] >> [ 0. 0. 1. 0.] >> [ 0. 0. 0. 1.]] >>about to get cholesky decomposition >>Traceback (most recent call last): >> File "/Users/jaffe/Desktop/bad_cholesky.py", line 13, in ? 
>> c = la.cholesky(i) >> File >>"/Library/Frameworks/Python.framework/Versions/2.4/lib/ >>python2.4/site-packages/scipy/linalg/basic_lite.py", >>line 117, in >>cholesky_decomposition >> results = lapack_routine('L', n, a, m, 0) >>lapack_lite.LapackError: Parameter a is not contiguous >>in lapack_lite.dpotrf >>------------------------------------------------------------ >> >>(The cholesky decomposition in this case should just be >>the matrix itself; the same error occurs with a complex matrix.) But Nils W got: > > Scipy version: 0.7.1.1530 > identity matrix: > [[ 1. 0. 0. 0.] > [ 0. 1. 0. 0.] > [ 0. 0. 1. 0.] > [ 0. 0. 0. 1.]] > about to get cholesky decomposition > [[ 1. 0. 0. 0.] > [ 0. 1. 0. 0.] > [ 0. 0. 1. 0.] > [ 0. 0. 0. 1.]] Hmmmm, I *still* get the error with 0.7.1.1534. So it must be specific to OS X... Anyone else out there see this? Andrew From oliphant.travis at ieee.org Mon Nov 28 21:07:41 2005 From: oliphant.travis at ieee.org (Travis Oliphant) Date: Mon, 28 Nov 2005 19:07:41 -0700 Subject: [SciPy-dev] Real culprit of previous memory problems with array scalars Message-ID: <438BB7ED.4020804@ieee.org> For those that don't follow the python-dev list, here is a summary of the real problem with the array scalars that was causing the Python memory manager to eat up memory. The problem was in the array scalars that dual-inherited from both a Python type and the generic, base type. Even though the generic, base-type was listed first, PyType_Ready filled the tp_free function with the Python Integer free function: int_free (which never released the previously allocated memory). It worked correctly when the generic base type tp_free pointer was changed to "free" but curiously not when it was "PyObject_Del" Thus, changing the memory allocator really just changed how the pointer table got filled in from inheritance. Therefore, the Python memory allocator is not at fault. 
It might be useful to go back to it as it allocates pools of memory at a
time (which might make array scalar allocation faster). We just have to
make sure the correct tp_free is obtained on inheritance (we can easily
over-ride what PyType_Ready does and force tp_free to be whatever is
wanted).

-Travis

From nwagner at mecha.uni-stuttgart.de  Tue Nov 29 02:37:43 2005
From: nwagner at mecha.uni-stuttgart.de (Nils Wagner)
Date: Tue, 29 Nov 2005 08:37:43 +0100
Subject: [SciPy-dev] 12 errors using 0.7.3.1538 0.4.2_1461
Message-ID: <438C0547.6060902@mecha.uni-stuttgart.de>

======================================================================
ERROR: Check whether adding a dense matrix to a sparse matrix works
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/usr/local/lib/python2.4/site-packages/scipy/sparse/tests/test_sparse.py", line 148, in check_add_dense
    sum1 = self.dat + self.datsp
  File "/usr/local/lib/python2.4/site-packages/scipy/sparse/sparse.py", line 502, in __radd__
    raise NotImplementedError, 'adding a scalar to a CSC matrix is ' \
NotImplementedError: adding a scalar to a CSC matrix is not yet supported
======================================================================
ERROR: check_matmat (scipy.sparse.sparse.test_sparse.test_csc)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/usr/local/lib/python2.4/site-packages/scipy/sparse/tests/test_sparse.py", line 84, in check_matmat
    assert_array_almost_equal((a*bsp).todense(), dot(a,b))
AttributeError: 'scipy.ndarray' object has no attribute 'todense'
======================================================================
ERROR: Check whether adding a dense matrix to a sparse matrix works
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/usr/local/lib/python2.4/site-packages/scipy/sparse/tests/test_sparse.py", line 148, in check_add_dense
    sum1 = self.dat + self.datsp
  File "/usr/local/lib/python2.4/site-packages/scipy/sparse/sparse.py", line 173, in __radd__
    return csc.__radd__(other)
  File "/usr/local/lib/python2.4/site-packages/scipy/sparse/sparse.py", line 502, in __radd__
    raise NotImplementedError, 'adding a scalar to a CSC matrix is ' \
NotImplementedError: adding a scalar to a CSC matrix is not yet supported
======================================================================
ERROR: check_matmat (scipy.sparse.sparse.test_sparse.test_csr)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/usr/local/lib/python2.4/site-packages/scipy/sparse/tests/test_sparse.py", line 84, in check_matmat
    assert_array_almost_equal((a*bsp).todense(), dot(a,b))
AttributeError: 'scipy.ndarray' object has no attribute 'todense'
======================================================================
ERROR: Check whether adding a dense matrix to a sparse matrix works
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/usr/local/lib/python2.4/site-packages/scipy/sparse/tests/test_sparse.py", line 149, in check_add_dense
    assert_array_equal(sum1, 2*self.dat)
  File "/usr/local/lib/python2.4/site-packages/scipy/test/testing.py", line 724, in assert_array_equal
    reduced = ravel(equal(x,y))
  File "/usr/local/lib/python2.4/site-packages/scipy/sparse/sparse.py", line 146, in __cmp__
    raise TypeError, "comparison of sparse matrices not implemented"
TypeError: comparison of sparse matrices not implemented
======================================================================
ERROR: check_matmat (scipy.sparse.sparse.test_sparse.test_dok)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/usr/local/lib/python2.4/site-packages/scipy/sparse/tests/test_sparse.py", line 84, in check_matmat
    assert_array_almost_equal((a*bsp).todense(), dot(a,b))
AttributeError: 'scipy.ndarray' object has no attribute 'todense'
======================================================================
ERROR: Check whether adding a dense matrix to a sparse matrix works
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/usr/local/lib/python2.4/site-packages/scipy/sparse/tests/test_sparse.py", line 148, in check_add_dense
    sum1 = self.dat + self.datsp
  File "/usr/local/lib/python2.4/site-packages/scipy/sparse/sparse.py", line 502, in __radd__
    raise NotImplementedError, 'adding a scalar to a CSC matrix is ' \
NotImplementedError: adding a scalar to a CSC matrix is not yet supported
======================================================================
ERROR: check_matmat (scipy.sparse.test_sparse.test_csc)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/usr/local/lib/python2.4/site-packages/scipy/sparse/tests/test_sparse.py", line 84, in check_matmat
    assert_array_almost_equal((a*bsp).todense(), dot(a,b))
AttributeError: 'scipy.ndarray' object has no attribute 'todense'
======================================================================
ERROR: Check whether adding a dense matrix to a sparse matrix works
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/usr/local/lib/python2.4/site-packages/scipy/sparse/tests/test_sparse.py", line 148, in check_add_dense
    sum1 = self.dat + self.datsp
  File "/usr/local/lib/python2.4/site-packages/scipy/sparse/sparse.py", line 173, in __radd__
    return csc.__radd__(other)
  File "/usr/local/lib/python2.4/site-packages/scipy/sparse/sparse.py", line 502, in __radd__
    raise NotImplementedError, 'adding a scalar to a CSC matrix is ' \
NotImplementedError: adding a scalar to a CSC matrix is not yet supported
======================================================================
ERROR: check_matmat (scipy.sparse.test_sparse.test_csr)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/usr/local/lib/python2.4/site-packages/scipy/sparse/tests/test_sparse.py", line 84, in check_matmat
    assert_array_almost_equal((a*bsp).todense(), dot(a,b))
AttributeError: 'scipy.ndarray' object has no attribute 'todense'
======================================================================
ERROR: Check whether adding a dense matrix to a sparse matrix works
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/usr/local/lib/python2.4/site-packages/scipy/sparse/tests/test_sparse.py", line 149, in check_add_dense
    assert_array_equal(sum1, 2*self.dat)
  File "/usr/local/lib/python2.4/site-packages/scipy/test/testing.py", line 724, in assert_array_equal
    reduced = ravel(equal(x,y))
  File "/usr/local/lib/python2.4/site-packages/scipy/sparse/sparse.py", line 146, in __cmp__
    raise TypeError, "comparison of sparse matrices not implemented"
TypeError: comparison of sparse matrices not implemented
======================================================================
ERROR: check_matmat (scipy.sparse.test_sparse.test_dok)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/usr/local/lib/python2.4/site-packages/scipy/sparse/tests/test_sparse.py", line 84, in check_matmat
    assert_array_almost_equal((a*bsp).todense(), dot(a,b))
AttributeError: 'scipy.ndarray' object has no attribute 'todense'
----------------------------------------------------------------------
Ran 1355 tests in 6.372s

FAILED (errors=12)

From schofield at ftw.at  Tue Nov 29 02:48:46 2005
From: schofield at ftw.at (Ed Schofield)
Date: Tue, 29 Nov 2005 07:48:46 +0000
Subject: [SciPy-dev] Maximum entropy module
Message-ID: <3BECC3A1-3FCE-45A8-A0B4-C0F0B384D130@ftw.at>

Hi all,

I've been working on getting my maximum entropy module into shape for scipy.
Unless there are any objections I'll upload it into the sandbox in the next few days. It's quite well documented, but I'd be happy to answer any questions on [SciPy-user] or [SciPy-dev] about how to use it and interpret the results. I'd also welcome any feedback on improving it further. -- Ed From robert.kern at gmail.com Tue Nov 29 03:06:07 2005 From: robert.kern at gmail.com (Robert Kern) Date: Tue, 29 Nov 2005 00:06:07 -0800 Subject: [SciPy-dev] Maximum entropy module In-Reply-To: <3BECC3A1-3FCE-45A8-A0B4-C0F0B384D130@ftw.at> References: <3BECC3A1-3FCE-45A8-A0B4-C0F0B384D130@ftw.at> Message-ID: <438C0BEF.2030504@gmail.com> Ed Schofield wrote: > Hi all, > > I've been working on getting my maximum entropy module into shape for > scipy. Unless there are any objections I'll upload it into the > sandbox in the next few days. As long as it's licensed appropriately, we shouldn't have any problems with it in the sandbox. > It's quite well documented, but I'd be happy to answer any questions > on [SciPy-user] or [SciPy-dev] about how to use it and interpret the > results. I'd also welcome any feedback on improving it further. Maximum entropy what? I know of quite a few "maximum entropy methods" that don't have much to do with each other algorithmically. -- Robert Kern robert.kern at gmail.com "In the fields of hell where the grass grows high Are the graves of dreams allowed to die." -- Richard Harter From nwagner at mecha.uni-stuttgart.de Tue Nov 29 03:03:54 2005 From: nwagner at mecha.uni-stuttgart.de (Nils Wagner) Date: Tue, 29 Nov 2005 09:03:54 +0100 Subject: [SciPy-dev] Constrained optimization in scipy Message-ID: <438C0B6A.9060201@mecha.uni-stuttgart.de> Hi all, Afaik there are three multivariate constrained optimizers available in scipy. Can I use them in case of equality constraints, e.g. 
x^T B x - 1 = 0

fmin_tnc and fmin_l_bfgs_b use bound constraints x_l < x < x_u, while
fmin_cobyla uses inequality constraints x >= 0.

Nils

From nwagner at mecha.uni-stuttgart.de  Tue Nov 29 03:53:10 2005
From: nwagner at mecha.uni-stuttgart.de (Nils Wagner)
Date: Tue, 29 Nov 2005 09:53:10 +0100
Subject: [SciPy-dev] Optimization
Message-ID: <438C16F6.9060206@mecha.uni-stuttgart.de>

Hi all,

How can I find the minimizer of trace[(Y^T A Y) (Y^T Y)^{-1}], where Y is
full-rank n \times p and A is a given symmetric n \times n matrix.

p > 1 yields

  File "/usr/lib/python2.4/site-packages/scipy/optimize/optimize.py", line 179, in fmin
    raise ValueError, "Initial guess must be a scalar or rank-1 sequence."
ValueError: Initial guess must be a scalar or rank-1 sequence.

p=1

Traceback (most recent call last):
  File "<stdin>", line 1, in ?
  File "/usr/lib/python2.4/site-packages/scipy/linalg/basic.py", line 183, in inv
    raise ValueError, 'expected square matrix'
ValueError: expected square matrix

linalg.inv(mat(2)) gives array([[ 0.5]]).
linalg.inv(2) should also return 0.5.

Nils

From schofield at ftw.at  Tue Nov 29 04:33:36 2005
From: schofield at ftw.at (Ed Schofield)
Date: Tue, 29 Nov 2005 09:33:36 +0000
Subject: [SciPy-dev] Maximum entropy module
In-Reply-To: <438C0BEF.2030504@gmail.com>
References: <3BECC3A1-3FCE-45A8-A0B4-C0F0B384D130@ftw.at> <438C0BEF.2030504@gmail.com>
Message-ID: <117F3FEF-025D-46B6-8CD9-1446E25BD727@ftw.at>

On 29/11/2005, at 8:06 AM, Robert Kern wrote:

> Ed Schofield wrote:
>> Hi all,
>>
>> I've been working on getting my maximum entropy module into shape for
>> scipy. Unless there are any objections I'll upload it into the
>> sandbox in the next few days.
>
> As long as it's licensed appropriately, we shouldn't have any problems
> with it in the sandbox.

Yes, it has a BSD license.

>
>> It's quite well documented, but I'd be happy to answer any questions
>> on [SciPy-user] or [SciPy-dev] about how to use it and interpret the
>> results.
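On Nils's constrained-optimization question above: COBYLA only understands inequality constraints g(x) >= 0, but an equality h(x) = 0 can be imposed as the pair h(x) >= 0 and -h(x) >= 0. A toy sketch of that workaround (the matrix B, the objective, and the starting point are illustrative choices, not from the thread):

```python
# Sketch: enforce the equality constraint x^T B x - 1 = 0 with fmin_cobyla
# by splitting it into two inequalities, h(x) >= 0 and -h(x) >= 0.
import numpy as np
from scipy.optimize import fmin_cobyla

B = np.eye(2)                                     # assumed constraint matrix
objective = lambda x: (x[0] - 2.0)**2 + (x[1] - 1.0)**2

h = lambda x: x @ B @ x - 1.0                     # want h(x) == 0
constraints = [h, lambda x: -h(x)]                # together they force h == 0

# Start from a feasible point on the unit circle.
x = fmin_cobyla(objective, [1.0, 0.0], constraints, rhoend=1e-10)
```

The minimiser should end up close to (2, 1) normalised onto the circle, with the equality satisfied to roughly COBYLA's constraint tolerance.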
>> I'd also welcome any feedback on improving it further.
>
> Maximum entropy what? I know of quite a few "maximum entropy methods"
> that don't have much to do with each other algorithmically.

It's the generic framework described by Jaynes or Csiszar. The code finds
the probability distribution that maximises entropy subject to constraints
on the expectations of given statistics. All "maximum entropy" applications
I know of just apply this method of inference in different domains; the
core parameter estimation code should be applicable to all of them.

-- Ed

-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From schofield at ftw.at  Tue Nov 29 06:14:00 2005
From: schofield at ftw.at (Ed Schofield)
Date: Tue, 29 Nov 2005 11:14:00 +0000
Subject: [SciPy-dev] 12 errors using 0.7.3.1538 0.4.2_1461
In-Reply-To: <438C0547.6060902@mecha.uni-stuttgart.de>
References: <438C0547.6060902@mecha.uni-stuttgart.de>
Message-ID: <5A8604C1-3FCD-4049-830A-48CD81A24C43@ftw.at>

On 29/11/2005, at 7:37 AM, Nils Wagner wrote:

> ======================================================================
> ERROR: Check whether adding a dense matrix to a sparse matrix works
> ----------------------------------------------------------------------

I think that core revision 1537 broke this, rather than the recent
sparse matrix patches...

-- Ed

From oliphant.travis at ieee.org  Tue Nov 29 14:08:34 2005
From: oliphant.travis at ieee.org (Travis Oliphant)
Date: Tue, 29 Nov 2005 12:08:34 -0700
Subject: [SciPy-dev] 12 errors using 0.7.3.1538 0.4.2_1461
In-Reply-To: <438C0547.6060902@mecha.uni-stuttgart.de>
References: <438C0547.6060902@mecha.uni-stuttgart.de>
Message-ID: <438CA732.9080003@ieee.org>

Nils Wagner wrote:

There have been reports of sparse matrix errors after the recent change
to scipy_core to allow other object array math to work correctly.

I'm not getting these errors.
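The framework Ed describes can be made concrete with Jaynes' classic die example (a sketch of the underlying idea, not of Ed's module's API): among all distributions on {1,...,6} with a prescribed mean, the entropy maximiser is an exponential family p_i proportional to exp(lambda * i), so a single root-find recovers the whole distribution.

```python
# Maximum entropy on {1,...,6} subject to E[X] = 4.5 (Jaynes' dice problem).
# The solution has the form p_i = exp(lam * i) / Z; solve for lam numerically.
import numpy as np
from scipy.optimize import brentq

x = np.arange(1, 7)
target_mean = 4.5

def mean_given(lam):
    w = np.exp(lam * x)
    p = w / w.sum()
    return p @ x

# The mean is monotone in lam, so a bracketing root-finder suffices.
lam = brentq(lambda l: mean_given(l) - target_mean, -5.0, 5.0)
w = np.exp(lam * x)
p = w / w.sum()          # the maximum entropy distribution
```

Since the target mean exceeds 3.5, lam comes out positive and the probabilities increase monotonically from face 1 to face 6.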
Please make sure you've got an updated version of Lib/sparse/sparse.py

In particular, make sure that the spmatrix class defines the
__array_priority__ attribute. The hand-off to the rop methods only
occurs if the object has this attribute.

-Travis

From nwagner at mecha.uni-stuttgart.de  Tue Nov 29 14:35:48 2005
From: nwagner at mecha.uni-stuttgart.de (Nils Wagner)
Date: Tue, 29 Nov 2005 20:35:48 +0100
Subject: [SciPy-dev] 12 errors using 0.7.3.1538 0.4.2_1461
In-Reply-To: <438CA732.9080003@ieee.org>
References: <438C0547.6060902@mecha.uni-stuttgart.de> <438CA732.9080003@ieee.org>
Message-ID:

On Tue, 29 Nov 2005 12:08:34 -0700 Travis Oliphant wrote:
> Nils Wagner wrote:
>
> There have been reports of sparse matrix errors after the recent change
> to scipy_core to allow other object array math to work correctly.
>
> I'm not getting these errors.
>
> Please make sure you've got an updated version of Lib/sparse/sparse.py
>
> In particular, make sure that the spmatrix class defines the
> __array_priority__ attribute. The hand-off to the rop methods only
> occurs if the object has this attribute.
>
> -Travis
>
> _______________________________________________
> Scipy-dev mailing list
> Scipy-dev at scipy.net
> http://www.scipy.net/mailman/listinfo/scipy-dev

Hi Travis,

I am using the latest svn versions of core/scipy. The errors are still
there. Can someone reproduce these errors ?
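The hand-off Travis describes survives in today's numpy and is easy to see with a stand-in class (FakeSparse and the priority value 10.1 are illustrative; scipy's spmatrix is the real user of this mechanism):

```python
import numpy as np

class FakeSparse:
    # Any value above ndarray's default priority (0.0) triggers the hand-off:
    # ndarray's binary ops return NotImplemented, so Python falls back to the
    # reflected method of the right-hand operand.
    __array_priority__ = 10.1

    def __add__(self, other):
        return "FakeSparse.__add__"

    def __radd__(self, other):
        return "FakeSparse.__radd__"

a = np.arange(3)
left = FakeSparse() + a     # ordinary dispatch: __add__ runs
right = a + FakeSparse()    # the ndarray defers: __radd__ runs
```

Without the attribute, the ndarray would instead try to coerce the right-hand operand into an array, which is exactly the failure mode the sparse tests were hitting.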
scipy.test(1,10) yields

======================================================================
ERROR: Check whether adding a dense matrix to a sparse matrix works
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/usr/local/lib/python2.4/site-packages/scipy/sparse/tests/test_sparse.py", line 148, in check_add_dense
    sum1 = self.dat + self.datsp
  File "/usr/local/lib/python2.4/site-packages/scipy/sparse/sparse.py", line 502, in __radd__
    raise NotImplementedError, 'adding a scalar to a CSC matrix is ' \
NotImplementedError: adding a scalar to a CSC matrix is not yet supported
======================================================================
ERROR: check_matmat (scipy.sparse.sparse.test_sparse.test_csc)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/usr/local/lib/python2.4/site-packages/scipy/sparse/tests/test_sparse.py", line 84, in check_matmat
    assert_array_almost_equal((a*bsp).todense(), dot(a,b))
AttributeError: 'scipy.ndarray' object has no attribute 'todense'
======================================================================
ERROR: Check whether adding a dense matrix to a sparse matrix works
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/usr/local/lib/python2.4/site-packages/scipy/sparse/tests/test_sparse.py", line 148, in check_add_dense
    sum1 = self.dat + self.datsp
  File "/usr/local/lib/python2.4/site-packages/scipy/sparse/sparse.py", line 173, in __radd__
    return csc.__radd__(other)
  File "/usr/local/lib/python2.4/site-packages/scipy/sparse/sparse.py", line 502, in __radd__
    raise NotImplementedError, 'adding a scalar to a CSC matrix is ' \
NotImplementedError: adding a scalar to a CSC matrix is not yet supported
======================================================================
ERROR: check_matmat (scipy.sparse.sparse.test_sparse.test_csr)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/usr/local/lib/python2.4/site-packages/scipy/sparse/tests/test_sparse.py", line 84, in check_matmat
    assert_array_almost_equal((a*bsp).todense(), dot(a,b))
AttributeError: 'scipy.ndarray' object has no attribute 'todense'
======================================================================
ERROR: Check whether adding a dense matrix to a sparse matrix works
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/usr/local/lib/python2.4/site-packages/scipy/sparse/tests/test_sparse.py", line 149, in check_add_dense
    assert_array_equal(sum1, 2*self.dat)
  File "/usr/local/lib/python2.4/site-packages/scipy/test/testing.py", line 724, in assert_array_equal
    reduced = ravel(equal(x,y))
  File "/usr/local/lib/python2.4/site-packages/scipy/sparse/sparse.py", line 146, in __cmp__
    raise TypeError, "comparison of sparse matrices not implemented"
TypeError: comparison of sparse matrices not implemented
======================================================================
ERROR: check_matmat (scipy.sparse.sparse.test_sparse.test_dok)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/usr/local/lib/python2.4/site-packages/scipy/sparse/tests/test_sparse.py", line 84, in check_matmat
    assert_array_almost_equal((a*bsp).todense(), dot(a,b))
AttributeError: 'scipy.ndarray' object has no attribute 'todense'
======================================================================
ERROR: Check whether adding a dense matrix to a sparse matrix works
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/usr/local/lib/python2.4/site-packages/scipy/sparse/tests/test_sparse.py", line 148, in check_add_dense
    sum1 = self.dat + self.datsp
  File "/usr/local/lib/python2.4/site-packages/scipy/sparse/sparse.py", line 502, in __radd__
    raise NotImplementedError, 'adding a scalar to a CSC matrix is ' \
NotImplementedError: adding a scalar to a CSC matrix is not yet supported
======================================================================
ERROR: check_matmat (scipy.sparse.test_sparse.test_csc)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/usr/local/lib/python2.4/site-packages/scipy/sparse/tests/test_sparse.py", line 84, in check_matmat
    assert_array_almost_equal((a*bsp).todense(), dot(a,b))
AttributeError: 'scipy.ndarray' object has no attribute 'todense'
======================================================================
ERROR: Check whether adding a dense matrix to a sparse matrix works
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/usr/local/lib/python2.4/site-packages/scipy/sparse/tests/test_sparse.py", line 148, in check_add_dense
    sum1 = self.dat + self.datsp
  File "/usr/local/lib/python2.4/site-packages/scipy/sparse/sparse.py", line 173, in __radd__
    return csc.__radd__(other)
  File "/usr/local/lib/python2.4/site-packages/scipy/sparse/sparse.py", line 502, in __radd__
    raise NotImplementedError, 'adding a scalar to a CSC matrix is ' \
NotImplementedError: adding a scalar to a CSC matrix is not yet supported
======================================================================
ERROR: check_matmat (scipy.sparse.test_sparse.test_csr)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/usr/local/lib/python2.4/site-packages/scipy/sparse/tests/test_sparse.py", line 84, in check_matmat
    assert_array_almost_equal((a*bsp).todense(), dot(a,b))
AttributeError: 'scipy.ndarray' object has no attribute 'todense'
======================================================================
ERROR: Check whether adding a dense matrix to a sparse matrix works
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/usr/local/lib/python2.4/site-packages/scipy/sparse/tests/test_sparse.py", line 149, in check_add_dense
    assert_array_equal(sum1, 2*self.dat)
  File "/usr/local/lib/python2.4/site-packages/scipy/test/testing.py", line 724, in assert_array_equal
    reduced = ravel(equal(x,y))
  File "/usr/local/lib/python2.4/site-packages/scipy/sparse/sparse.py", line 146, in __cmp__
    raise TypeError, "comparison of sparse matrices not implemented"
TypeError: comparison of sparse matrices not implemented
======================================================================
ERROR: check_matmat (scipy.sparse.test_sparse.test_dok)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/usr/local/lib/python2.4/site-packages/scipy/sparse/tests/test_sparse.py", line 84, in check_matmat
    assert_array_almost_equal((a*bsp).todense(), dot(a,b))
AttributeError: 'scipy.ndarray' object has no attribute 'todense'
----------------------------------------------------------------------
Ran 1352 tests in 4.438s

FAILED (errors=12)

Nils

From schofield at ftw.at  Tue Nov 29 15:33:08 2005
From: schofield at ftw.at (Ed Schofield)
Date: Tue, 29 Nov 2005 20:33:08 +0000
Subject: [SciPy-dev] Standard deviations
Message-ID: <438CBB04.4090004@ftw.at>

Hi all,

I have three questions related to standard deviations and variances in
scipy.

First, can someone explain the behaviour of array.std() without any
arguments?

>>> a = arange(30).reshape(3,10)
>>> a
array([[ 0,  1,  2,  3,  4,  5,  6,  7,  8,  9],
       [10, 11, 12, 13, 14, 15, 16, 17, 18, 19],
       [20, 21, 22, 23, 24, 25, 26, 27, 28, 29]])
>>> a.std()
array([ 2.99856287,  2.85723522,  2.74647109,  2.67007684,  2.63104804,
        2.63104804,  2.67007684,  2.74647109,  2.85723522,  2.99856287])

I don't understand what these numbers represent. The correct standard
deviations of the column vectors are given by:

>>> a.std(0)
array([ 10.,  10.,  10.,  10.,  10.,  10.,  10.,  10.,  10.,  10.])

and the standard deviations of the row vectors are:

>>> a.std(1)
array([ 3.02765035,  3.02765035,  3.02765035])

I would have expected a.std() to give the same output as

>>> a.ravel().std()
8.8034084308295046

which is what a.mean() does.

Second, I'd like to point out that some of the functions in Lib/stats/
have a different convention to scipy core about whether operations are
performed row-wise or column-wise, and ask whether anyone would object
to my changing the stats functions to operate column-wise. At the moment
we get this:

>>> average(a)
array([ 10.,  11.,  12.,  13.,  14.,  15.,  16.,  17.,  18.,  19.])

which is column-wise, but

>>> std(a)
array([ 3.02765035,  3.02765035,  3.02765035])

which is row-wise. I presume the default behaviour of std() and friends
is just a historical relic. If so we'd be wise to get this straight
well before a 1.0 release.

Third, I'd like to request that we add an array.var() method to scipy
core to compute an array's sample variance.

At the moment it seems that there is no way to compute the sample
variance of an array of numbers without installing the full scipy.
Users needing to do this will either have to roll their own function in
Python, like this:

def var(A):
    m = len(A)
    return average((a-means)**2) * (m/(m-1.))

or square the output of std(). Both are less efficient than a native
array.var() would be, requiring extra memory copying and, in the second
case, squaring the result of a square root operation, which also
introduces numerical imprecision.

The extra code required is minimal. There's an example patch below,
which works fine except that it inherits the weirdness of std(). Comments?
:)

-- Ed

---------------------
Index: base/src/multiarraymodule.c
===================================================================
--- base/src/multiarraymodule.c (revision 1534)
+++ base/src/multiarraymodule.c (working copy)
@@ -460,7 +460,61 @@
         return obj1;
 }

+static PyObject *
+PyArray_Var(PyArrayObject *self, int axis, int rtype)
+{
+    PyObject *obj1=NULL, *obj2=NULL, *new=NULL;
+    PyObject *ret=NULL, *newshape=NULL;
+    int i, n;
+    intp val;
+    if ((new = _check_axis(self, &axis, 0))==NULL) return NULL;
+
+    /* Compute and reshape mean */
+    obj1 = PyArray_EnsureArray(PyArray_Mean((PyAO *)new, axis, rtype));
+    if (obj1 == NULL) {Py_DECREF(new); return NULL;}
+    n = PyArray_NDIM(new);
+    newshape = PyTuple_New(n);
+    if (newshape == NULL) {Py_DECREF(obj1); Py_DECREF(new); return NULL;}
+    for (i=0; i

References: <438CBB04.4090004@ftw.at>
Message-ID: <438CBEF8.3080603@ee.byu.edu>

Ed Schofield wrote:

>Hi all,
>
>I have three questions related to standard deviations and variances in
>scipy.
>
>First, can someone explain the behaviour of array.std() without any
>arguments?
>
> >>> a = arange(30).reshape(3,10)
> >>> a
>array([[ 0,  1,  2,  3,  4,  5,  6,  7,  8,  9],
>       [10, 11, 12, 13, 14, 15, 16, 17, 18, 19],
>       [20, 21, 22, 23, 24, 25, 26, 27, 28, 29]])
> >>> a.std()
>array([ 2.99856287,  2.85723522,  2.74647109,  2.67007684,  2.63104804,
>        2.63104804,  2.67007684,  2.74647109,  2.85723522,  2.99856287])
>
>I don't understand what these numbers represent. The correct standard
>deviations of the column vectors are given by:
>
> >>> a.std(0)
>array([ 10.,  10.,  10.,  10.,  10.,  10.,  10.,  10.,  10.,  10.])
>
>and the standard deviations of the row vectors are:
>
> >>> a.std(1)
>array([ 3.02765035,  3.02765035,  3.02765035])
>
>I would have expected a.std() to give the same output as
> >>> a.ravel().std()
>8.8034084308295046
>
>which is what a.mean() does.
>

This is a bug. Thanks for finding it. I'll look into it.
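For reference, the behaviour Ed expects is what numpy eventually settled on: with no axis argument the reduction runs over the flattened array, and a var() method exists. A sketch in today's terms (note the 2005 std() divided by N-1, giving the 8.803 above; the modern default divisor is N, i.e. ddof=0):

```python
import numpy as np

a = np.arange(30).reshape(3, 10)

# No axis argument: reduce over all elements, same as a.ravel().std().
assert np.isclose(a.std(), a.ravel().std())

# Per-axis reductions: axis=0 is column-wise, axis=1 is row-wise.
assert a.std(axis=0).shape == (10,)
assert a.std(axis=1).shape == (3,)

def var(A):
    """Population variance over all elements (ddof=0), like a.var()."""
    A = np.asarray(A, dtype=float)
    return ((A - A.mean()) ** 2).mean()

assert np.isclose(var(a), a.var())
assert np.isclose(a.var(), a.std() ** 2)
```

Ed's sample-variance convention corresponds to passing ddof=1 to the modern methods.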
> > >Second, I'd like to point out that some of the functions in Lib/stats/ >have a different convention to scipy core about whether operations are >performed row-wise or column-wise, and whether anyone would object to my >changing the stats functions to operate column-wise. At the moment we >get this: > > >>> average(a) >array([ 10., 11., 12., 13., 14., 15., 16., 17., 18., 19.]) > >which is column-wise, but > > >>> std(a) >array([ 3.02765035, 3.02765035, 3.02765035]) > >which is row-wise. I presume the default behaviour of std() and friends >is just a historical relic. If so we'd be wise to get this straight >well before a 1.0 release. > > Good catch. It would be nice to have things as consistent as possible. Feel free to make consistency changes --- especially in stats.py which is still messy. >Third, I'd like to request that we add an array.var() method to scipy >core to compute an array's sample variance. > >At the moment it seems that there is no way to compute the sample >variance of an array of numbers without installing the full scipy. >Users needing to do this will either have to roll their own function in >Python, like this: > >def var(A): > m = len(A) > return average((a-means)**2) * (m/(m-1.)) > >or square the output of std(). Both are less efficient than a native >array.var() would be, requiring extra memory copying and, in the second >case, squaring the result of a square root operation, which also >introduces numerical imprecision. > >The extra code required is minimal. There's an example patch below, >which works fine except that it inherits the weirdness of std(). > > I'm O.K. with this. Anybody else see a problem? 
-Travis From pebarrett at gmail.com Tue Nov 29 16:48:14 2005 From: pebarrett at gmail.com (Paul Barrett) Date: Tue, 29 Nov 2005 16:48:14 -0500 Subject: [SciPy-dev] Standard deviations In-Reply-To: <438CBEF8.3080603@ee.byu.edu> References: <438CBB04.4090004@ftw.at> <438CBEF8.3080603@ee.byu.edu> Message-ID: <40e64fa20511291348y67806cdcm45e62c27bba0b7bd@mail.gmail.com> I'd like to see more explicit method names. At first sight, 'a.var' and ' a.std' don't mean much to me, whereas 'a.variance' and 'a.standard_dev' do. -- Paul On 11/29/05, Travis Oliphant wrote: > > Ed Schofield wrote: > > >Hi all, > > > >I have three questions related to standard deviations and variances in > >scipy. > > > >First, can someone explain the behaviour of array.std() without any > >arguments? > > > > >>> a = arange(30).reshape(3,10) > > >>> a > >array([[ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9], > > [10, 11, 12, 13, 14, 15, 16, 17, 18, 19], > > [20, 21, 22, 23, 24, 25, 26, 27, 28, 29]]) > > >>> a.std() > >array([ 2.99856287, 2.85723522, 2.74647109, 2.67007684, 2.63104804, > > 2.63104804, 2.67007684, 2.74647109, 2.85723522, 2.99856287]) > > > >I don't understand what these numbers represent. The correct standard > >deviations of the column vectors are given by: > > > > >>> a.std(0) > >array([ 10., 10., 10., 10., 10., 10., 10., 10., 10., 10.]) > > > >and the standard deviations of the row vectors are: > > > > >>> a.std(1) > >array([ 3.02765035, 3.02765035, 3.02765035]) > > > >I would have expected a.std() to give the same output as > > >>> a.ravel().std() > >8.8034084308295046 > > > >which is what a.mean() does. > > > > > > This is a bug. Thanks for finding it. I'll look into it. > > > > > > >Second, I'd like to point out that some of the functions in Lib/stats/ > >have a different convention to scipy core about whether operations are > >performed row-wise or column-wise, and whether anyone would object to my > >changing the stats functions to operate column-wise. 
At the moment we > >get this: > > > > >>> average(a) > >array([ 10., 11., 12., 13., 14., 15., 16., 17., 18., 19.]) > > > >which is column-wise, but > > > > >>> std(a) > >array([ 3.02765035, 3.02765035, 3.02765035]) > > > >which is row-wise. I presume the default behaviour of std() and friends > >is just a historical relic. If so we'd be wise to get this straight > >well before a 1.0 release. > > > > > Good catch. It would be nice to have things as consistent as possible. > Feel free to make consistency changes --- especially in stats.py which > is still messy. > > >Third, I'd like to request that we add an array.var() method to scipy > >core to compute an array's sample variance. > > > >At the moment it seems that there is no way to compute the sample > >variance of an array of numbers without installing the full scipy. > >Users needing to do this will either have to roll their own function in > >Python, like this: > > > >def var(A): > > m = len(A) > > return average((a-means)**2) * (m/(m-1.)) > > > >or square the output of std(). Both are less efficient than a native > >array.var() would be, requiring extra memory copying and, in the second > >case, squaring the result of a square root operation, which also > >introduces numerical imprecision. > > > >The extra code required is minimal. There's an example patch below, > >which works fine except that it inherits the weirdness of std(). > > > > > I'm O.K. with this. Anybody else see a problem? > > -Travis > > _______________________________________________ > Scipy-dev mailing list > Scipy-dev at scipy.net > http://www.scipy.net/mailman/listinfo/scipy-dev > -- Paul Barrett, PhD Johns Hopkins University Assoc. Research Scientist Dept of Physics and Astronomy Phone: 410-516-5190 Baltimore, MD 21218 -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From a.h.jaffe at gmail.com Tue Nov 29 17:02:11 2005 From: a.h.jaffe at gmail.com (Andrew Jaffe) Date: Tue, 29 Nov 2005 22:02:11 +0000 Subject: [SciPy-dev] problem with linalg.cholesky? In-Reply-To: References: Message-ID: Hi All, One more data point: in addition to my problems on OS X, someone else has the same issue on Win XP... So it's not OS-specific, but it does occur only for some installations. Perhaps it depends on the underlying BLAS/LAPACK library? Andrew Andrew Jaffe wrote: > hi all, > > (apologies that a similar question has appeared elsewhere...) > > In the newest incarnation of scipy_core, I am having trouble with the > cholesky(a) routine. Here is some minimal code reproducing the bug > (on OS X) > > ------------------------------------------------------------ > from scipy import identity, __core_version__, Float64 > import scipy.linalg as la > print 'Scipy version: ', __core_version__ > i = identity(4, Float64) > print 'identity matrix:' > print i > > print 'about to get cholesky decomposition' > c = la.cholesky(i) > print c > ------------------------------------------------------------ > > which gives > > ------------------------------------------------------------ > Scipy version: 0.6.1 > identity matrix: > [[ 1. 0. 0. 0.] > [ 0. 1. 0. 0.] > [ 0. 0. 1. 0.] > [ 0. 0. 0. 1.]] > about to get cholesky decomposition > Traceback (most recent call last): > File "/Users/jaffe/Desktop/bad_cholesky.py", line 13, in ? > c = la.cholesky(i) > File "/Library/Frameworks/Python.framework/Versions/2.4/lib/ > python2.4/site-packages/scipy/linalg/basic_lite.py", line 117, in > cholesky_decomposition > results = lapack_routine('L', n, a, m, 0) > lapack_lite.LapackError: Parameter a is not contiguous in > lapack_lite.dpotrf > ------------------------------------------------------------ > > (The cholesky decomposition in this case should just be the matrix > itself; the same error occurs with a complex matrix.) > > > Any ideas? 
Could this have anything to do with _CastCopyAndtranspose > in Basic_lite.py? (Since there are few other candidates for anything > that actually changes the matrix.) > > Thanks in advance, > > A > > > ______________________________________________________________________ > Andrew Jaffe a.jaffe at imperial.ac.uk > Astrophysics Group +44 207 594-7526 > Blackett Laboratory, Room 1013 FAX 7541 > Imperial College, Prince Consort Road > London SW7 2AZ ENGLAND http://astro.imperial.ac.uk/~jaffe From oliphant at ee.byu.edu Tue Nov 29 18:11:00 2005 From: oliphant at ee.byu.edu (Travis Oliphant) Date: Tue, 29 Nov 2005 16:11:00 -0700 Subject: [SciPy-dev] problem with linalg.cholesky? In-Reply-To: References: Message-ID: <438CE004.6000806@ee.byu.edu> Andrew Jaffe wrote: >Hi All, > >One more data point: in addition to my problems on OS X, someone else >has the same issue on Win XP... So it's not OS-specific, but it does >occur only for some installations. > > Did they build their own version, or use the provided binary (which links to the version of ATLAS that I have)? I strongly suspect the BLAS/LAPACK library, but I could be wrong. I'd like to figure out what the problem here is so anybody else with the problem should chime in and give their experience. -Travis From fonnesbeck at gmail.com Tue Nov 29 22:29:33 2005 From: fonnesbeck at gmail.com (Chris Fonnesbeck) Date: Tue, 29 Nov 2005 22:29:33 -0500 Subject: [SciPy-dev] Standard deviations In-Reply-To: <40e64fa20511291348y67806cdcm45e62c27bba0b7bd@mail.gmail.com> References: <438CBB04.4090004@ftw.at> <438CBEF8.3080603@ee.byu.edu> <40e64fa20511291348y67806cdcm45e62c27bba0b7bd@mail.gmail.com> Message-ID: <723eb6930511291929r647fdda3m62783ac643e276eb@mail.gmail.com> On 11/29/05, Paul Barrett wrote: > I'd like to see more explicit method names. At first sight, 'a.var' and > 'a.std' don't mean much to me, whereas 'a.variance' and 'a.standard_dev' do. > I think it is meant to follow Matlab conventions, so they are familiar to some. 
-- Chris Fonnesbeck Atlanta, GA From oliphant.travis at ieee.org Mon Nov 28 23:25:01 2005 From: oliphant.travis at ieee.org (Travis Oliphant) Date: Mon, 28 Nov 2005 21:25:01 -0700 Subject: [SciPy-dev] C API In-Reply-To: References: Message-ID: <438BD81D.3080503@ieee.org> Nariman Habili wrote: >Travis, I have purchased a copy of your "Guide to SciPy: Core System", some >sections seem to be missing, for example section 13.2, "Using the Array >iterator in C". I would appreciate it if you could explain to me how I can >use the array iterator. > > The iterator object makes it easy to walk through an entire ndarray or move to a particular location. It is general purpose and so possibly slower on 2-d images than one of the other macros I'll explain next. Suppose array is your PyArrayObject * variable.

PyArrayIterObject *iter;

iter = (PyArrayIterObject *)PyArray_IterNew(array);
while (iter->index < iter->size) {
    /* iter->dataptr points to the current element of the array --
       recast it to the appropriate type if you want */
    PyArray_ITER_NEXT(iter);
}

You can also use PyArray_ITER_GOTO(iter, ind) where ind is an array of intp integers giving the index into the array (available in the latest SVN version of scipy core). If you just want to access a particular i,j location in a 2-d array, then use

ptr = (<type> *)PyArray_GETPTR2(obj, i, j);

This could be used in a dual for-loop to process the data in an image: for (i=0; i

>>> linalg.inv(2)
Traceback (most recent call last):
  File "<stdin>", line 1, in ?
  File "/usr/lib64/python2.4/site-packages/scipy/linalg/basic.py", line 183, in inv
    raise ValueError, 'expected square matrix'
ValueError: expected square matrix
>>> linalg.inv(mat(2))
array([[ 0.5]])

Is this behaviour wanted ?
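The distinction in Nils's report (a bare scalar versus a 1x1 matrix) can be reproduced today; a sketch with current NumPy, using numpy.linalg in place of the old scipy.linalg:

```python
import numpy as np

# A bare scalar becomes a 0-d array, which inv() rejects; promoting it
# to a 1x1 matrix makes it a valid square input, like mat(2) above.
try:
    np.linalg.inv(np.array(2.0))            # 0-d input is refused
    scalar_ok = True
except np.linalg.LinAlgError:
    scalar_ok = False

inv_11 = np.linalg.inv(np.atleast_2d(2.0))  # 1x1 matrix inverts fine
```

So the behaviour in the traceback stuck: the scalar raises, and the 1x1 matrix inverts to [[ 0.5]].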
Nils From bart.vandereycken at cs.kuleuven.be Wed Nov 30 08:03:13 2005 From: bart.vandereycken at cs.kuleuven.be (Bart Vandereycken) Date: Wed, 30 Nov 2005 14:03:13 +0100 Subject: [SciPy-dev] f2py bugs with fortran 90 Message-ID: <438DA311.6000502@cs.kuleuven.be> Hi all, while processing a Fortran 90 module I discovered some problems/bugs. 1) Comments aren't always ignored, e.g. this subroutine is correctly parsed only without the comment
-----------------
SUBROUTINE TEST (a, b, c & ! Optional arguments
                 x, y, z)
-----------------
2) INTENT(IN OUT) is the same as INTENT(INOUT) but crackfortran.py gives
-----------------
File "....../f2py/crackfortran.py", line 2543, in vars2fortran
  lst = true_intent_list(vars[a])
File "...../f2py/crackfortran.py", line 2451, in true_intent_list
  exec('c = isintent_%s(var)' % intent)
File "<string>", line 1
  c = isintent_in out(var)
                  ^
SyntaxError: invalid syntax
-----------------
3) PUBLIC or PRIVATE objects within a module are not supported, e.g.
-----------------
MODULE Foo
  PRIVATE
  PUBLIC :: bar
  PRIVATE :: foobar
CONTAINS
  SUBROUTINE bar
  ...
  END SUBROUTINE
  SUBROUTINE foobar
  ...
  END SUBROUTINE
END MODULE Foo
-----------------
gives
-----------------
{'attrspec': ['public']} In: foo.f90:foo vars2fortran: No typespec for argument "bar".
{'attrspec': ['private']} In: foo.f90:foo vars2fortran: No typespec for argument "foobar".
-----------------
Thanks in advance, -- Bart Vandereycken Scientific Computing Research Group, K.U.Leuven www.cs.kuleuven.be/~bartvde -- Disclaimer: http://www.kuleuven.be/cwis/email_disclaimer.htm From pebarrett at gmail.com Wed Nov 30 08:28:01 2005 From: pebarrett at gmail.com (Paul Barrett) Date: Wed, 30 Nov 2005 08:28:01 -0500 Subject: [SciPy-dev] Standard deviations In-Reply-To: <723eb6930511291929r647fdda3m62783ac643e276eb@mail.gmail.com> References: <438CBB04.4090004@ftw.at> <438CBEF8.3080603@ee.byu.edu> <40e64fa20511291348y67806cdcm45e62c27bba0b7bd@mail.gmail.com> <723eb6930511291929r647fdda3m62783ac643e276eb@mail.gmail.com> Message-ID: <40e64fa20511300528h6ab5c4c4r3c06ee749b339e2b@mail.gmail.com> On 11/29/05, Chris Fonnesbeck wrote: > > On 11/29/05, Paul Barrett wrote: > > I'd like to see more explicit method names. At first sight, 'a.var' and > > 'a.std' don't mean much to me, whereas 'a.variance' and 'a.standard_dev' do. > > > > I think it is meant to follow Matlab conventions, so they are familiar to > some. > First, I believe the "Zen of Python" says "Explicit is better than implicit." And second, scipy isn't matlab, unless we start charging money for it. :-) -- Paul -- Paul Barrett, PhD Johns Hopkins University Assoc. Research Scientist Dept of Physics and Astronomy Phone: 410-516-5190 Baltimore, MD 21218 From dalcinl at gmail.com Wed Nov 30 12:48:16 2005 From: dalcinl at gmail.com (Lisandro Dalcin) Date: Wed, 30 Nov 2005 14:48:16 -0300 Subject: [SciPy-dev] segfault Message-ID: I am using scipy_core-0.6.1 downloaded from SourceForge. I got the following segfault: [dalcinl at trantor dalcinl]$ python Python 2.4.2 (#1, Nov 7 2005, 16:51:05) [GCC 3.3.3 20040412 (Red Hat Linux 3.3.3-7)] on linux2 Type "help", "copyright", "credits" or "license" for more information.
>>> from scipy.base import *
>>> a = array(1)
>>> a.dtype = float
Segmentation fault
[dalcinl at trantor dalcinl]$

-- Lisandro Dalcín --------------- Centro Internacional de Métodos Computacionales en Ingeniería (CIMEC) Instituto de Desarrollo Tecnológico para la Industria Química (INTEC) Consejo Nacional de Investigaciones Científicas y Técnicas (CONICET) PTLC - Güemes 3450, (3000) Santa Fe, Argentina Tel/Fax: +54-(0)342-451.1594 From oliphant.travis at ieee.org Wed Nov 30 13:41:37 2005 From: oliphant.travis at ieee.org (Travis Oliphant) Date: Wed, 30 Nov 2005 11:41:37 -0700 Subject: [SciPy-dev] segfault In-Reply-To: References: Message-ID: <438DF261.2010708@ieee.org> Lisandro Dalcin wrote: >I am using scipy_core-0.6.1 downloaded from SourceForge. I got the >following segfault: > >[dalcinl at trantor dalcinl]$ python >Python 2.4.2 (#1, Nov 7 2005, 16:51:05) >[GCC 3.3.3 20040412 (Red Hat Linux 3.3.3-7)] on linux2 >Type "help", "copyright", "credits" or "license" for more information. > > >>>>from scipy.base import * >>>>a = array(1) >>>>a.dtype = float >>>> >>>> >Segmentation fault >[dalcinl at trantor dalcinl]$ > > > Thanks for the bug-report. This is now fixed in SVN. The problem was that 0-dimensional arrays were not being handled correctly when the itemsize was different. This now raises an error. -Travis From schofield at ftw.at Wed Nov 30 13:49:39 2005 From: schofield at ftw.at (Ed Schofield) Date: Wed, 30 Nov 2005 18:49:39 +0000 Subject: [SciPy-dev] segfault In-Reply-To: References: Message-ID: <438DF443.9020500@ftw.at> Lisandro Dalcin wrote: >I am using scipy_core-0.6.1 downloaded from SourceForge. I got the >following segfault: > >[dalcinl at trantor dalcinl]$ python > > >>>>from scipy.base import * >>>>a = array(1) >>>>a.dtype = float >>>> >>>> >Segmentation fault > > This is also true for my SVN revision (r1534). Thanks for the report!
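For reference, the operation that crashed (rebinding .dtype) reinterprets the array's raw bytes, which is different from converting the value. A sketch with current NumPy, not the 2005 scipy.base code:

```python
import numpy as np

a = np.array(1)             # 0-d integer array, as in the report
b = a.astype(np.float64)    # value-preserving conversion: b == 1.0
# Rebinding a.dtype in place would reinterpret the stored bytes under
# the new type. That is the path that segfaulted on 0-d arrays when
# the itemsizes differed, and that now raises an error instead;
# astype() is the safe conversion.
```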
-- Ed From schofield at ftw.at Wed Nov 30 14:29:22 2005 From: schofield at ftw.at (Ed Schofield) Date: Wed, 30 Nov 2005 19:29:22 +0000 Subject: [SciPy-dev] Three cheers for Travis! Message-ID: <438DFD92.6040003@ftw.at> I'd like to thank Travis for his tireless work answering feature requests, fixing bugs, and generally doing a superb job. Hip hip hooray! SciPy is becoming very good, very fast. I have another couple of 0-dim array bugs to report (as a reward :D) >>> b = array(2) >>> b.argmax() 0 >>> b.argmin() Segmentation fault From fonnesbeck at gmail.com Wed Nov 30 14:34:50 2005 From: fonnesbeck at gmail.com (Chris Fonnesbeck) Date: Wed, 30 Nov 2005 14:34:50 -0500 Subject: [SciPy-dev] Three cheers for Travis! In-Reply-To: <438DFD92.6040003@ftw.at> References: <438DFD92.6040003@ftw.at> Message-ID: <723eb6930511301134s203e98e6t650c6d1c7880f922@mail.gmail.com> On 11/30/05, Ed Schofield wrote: > > > I'd like to thank Travis for his tireless work answering feature > requests, fixing bugs, and generally doing a superb job. Hip hip > hooray! SciPy is becoming very good, very fast. > I will second this motion. We all appreciate your work, Travis. -- Chris Fonnesbeck Atlanta, GA From oliphant.travis at ieee.org Wed Nov 30 14:38:29 2005 From: oliphant.travis at ieee.org (Travis Oliphant) Date: Wed, 30 Nov 2005 12:38:29 -0700 Subject: [SciPy-dev] Three cheers for Travis! In-Reply-To: <438DFD92.6040003@ftw.at> References: <438DFD92.6040003@ftw.at> Message-ID: <438DFFB5.7030202@ieee.org> Ed Schofield wrote: >I have another couple of 0-dim array bugs to report (as a reward :D) > > >>> b = array(2) > >>> b.argmax() >0 > >>> b.argmin() >Segmentation fault > > > Thanks. There may be a few more like these lurking. The problem is that in the C-code, whenever the generic object interface is used (like PyNumber_Subtract in this case) an array-scalar may get returned. 
Then, if that returned value is handed off to a subroutine that expects an array (like PyArray_ArgMax in this case), boom the problem occurs. obj = PyArray_EnsureArray(obj) can always be used in this case to get an actual ndarray object (it steals the reference for obj and so can be used as a wrapper around the generic Python C-API command). Trying-to-teach-everyone-to-fish, -Travis From oliphant.travis at ieee.org Wed Nov 30 14:41:51 2005 From: oliphant.travis at ieee.org (Travis Oliphant) Date: Wed, 30 Nov 2005 12:41:51 -0700 Subject: [SciPy-dev] Three cheers for Travis! In-Reply-To: <438DFD92.6040003@ftw.at> References: <438DFD92.6040003@ftw.at> Message-ID: <438E007F.5010200@ieee.org> Ed Schofield wrote: >I'd like to thank Travis for his tireless work answering feature >requests, fixing bugs, and generally doing a superb job. Hip hip >hooray! SciPy is becoming very good, very fast. > > >I have another couple of 0-dim array bugs to report (as a reward :D) > > >>> b = array(2) > >>> b.argmax() >0 > >>> b.argmin() >Segmentation fault > > > I guess the broader question here is should argmax even work for 0-d arrays since they can't be indexed? -Travis From schofield at ftw.at Wed Nov 30 14:43:19 2005 From: schofield at ftw.at (Ed Schofield) Date: Wed, 30 Nov 2005 19:43:19 +0000 Subject: [SciPy-dev] Three cheers for Travis! In-Reply-To: <438DFD92.6040003@ftw.at> References: <438DFD92.6040003@ftw.at> Message-ID: <438E00D7.6020802@ftw.at> Ed Schofield wrote: >I'd like to thank Travis for his tireless work answering feature >requests, fixing bugs, and generally doing a superb job. Hip hip >hooray! SciPy is becoming very good, very fast. > > >I have another couple of 0-dim array bugs to report (as a reward :D) > > >>> b = array(2) > >>> b.argmax() >0 > >>> b.argmin() >Segmentation fault > Oops, I guess that's only one bug. Too bad ... 
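On whether argmax should work for 0-d arrays: they can't be indexed with an integer, but indexing with an empty tuple is well defined, and that is how current NumPy resolves it (a sketch of today's behaviour, not 2005's):

```python
import numpy as np

b = np.array(2)      # 0-d array from the report above
i = b.argmax()       # works: the flattened view has exactly one element

# Integer indexing is an error on a 0-d array, but the empty tuple
# pulls out the single element:
try:
    b[0]
    int_index_ok = True
except IndexError:
    int_index_ok = False

value = b[()]
```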
-- Ed From oliphant.travis at ieee.org Wed Nov 30 15:01:48 2005 From: oliphant.travis at ieee.org (Travis Oliphant) Date: Wed, 30 Nov 2005 13:01:48 -0700 Subject: [SciPy-dev] Three cheers for Travis! In-Reply-To: <723eb6930511301134s203e98e6t650c6d1c7880f922@mail.gmail.com> References: <438DFD92.6040003@ftw.at> <723eb6930511301134s203e98e6t650c6d1c7880f922@mail.gmail.com> Message-ID: <438E052C.3070400@ieee.org> Chris Fonnesbeck wrote: >On 11/30/05, Ed Schofield wrote: > > >>I'd like to thank Travis for his tireless work answering feature >>requests, fixing bugs, and generally doing a superb job. Hip hip >>hooray! SciPy is becoming very good, very fast. >> >> >> > >I will second this motion. We all appreciate your work, Travis. > > > *Blush* Thank you for the kudos. I do appreciate them. I'm spending a lot of time right now getting scipy_core to the point that it can be adopted without hesitation by lots of people. At the start of the year, I won't have this kind of time, so I'm hoping that we can get as many wrinkles ironed out as possible. There are still some things pending. I just added support for character arrays that needs some testing. The ndchararray is a subclass of the ndarray that soups up the string and unicode support (it allows comparison functions element by element and adapts all the methods of strings and unicode objects to work element-by-element). It makes use of the broadcast iterator that has been exposed to Python so that broadcasting is supported rather like ufuncs. I'm really pleased with some of the abstractions that we've been able to extract from old Numeric and make more universally accessible. I'm anxious for people to start to use them and see if they work as expected and/or can be made better. I'm working on the records.py module right now and adding a few things to make that a bit easier. 
Other tasks still on the back-burner:

* improving coercion of strings to other types
* giving array scalars their own optimized math with supported error-control flags instead of going through the slower ufunc machinery

-Travis From dd55 at cornell.edu Wed Nov 30 15:04:27 2005 From: dd55 at cornell.edu (Darren Dale) Date: Wed, 30 Nov 2005 15:04:27 -0500 Subject: [SciPy-dev] pickling problem Message-ID: <200511301504.27962.dd55@cornell.edu> I'm having some trouble with unpickled scipy arrays. The issue shows up when I try to plot in matplotlib, but I think it is a scipy problem.

import pickle
import scipy
import pylab

a = scipy.arange(10)
pickle.dump(a,file('temp.p','w'))
b = pickle.load(file('temp.p'))
print type(b), type(scipy.array(b))
pylab.plot(scipy.array(b)) # no error here
pylab.plot(b) # error here

Traceback (most recent call last):
  File "pickletest.py", line 8, in ?
    pylab.plot(b)
  File "/usr/lib64/python2.4/site-packages/matplotlib/pylab.py", line 2055, in plot
    ret = gca().plot(*args, **kwargs)
  File "/usr/lib64/python2.4/site-packages/matplotlib/axes.py", line 2636, in plot
    for line in self._get_lines(*args, **kwargs):
  File "/usr/lib64/python2.4/site-packages/matplotlib/axes.py", line 267, in _grab_next_args
    yield self._plot_1_arg(remaining[0], **kwargs)
  File "/usr/lib64/python2.4/site-packages/matplotlib/axes.py", line 189, in _plot_1_arg
    markerfacecolor=color,
  File "/usr/lib64/python2.4/site-packages/matplotlib/lines.py", line 209, in __init__
    self.set_data(xdata, ydata)
  File "/usr/lib64/python2.4/site-packages/matplotlib/lines.py", line 265, in set_data
    y = ma.ravel(y)
  File "/usr/lib64/python2.4/site-packages/Numeric/MA/MA.py", line 1810, in ravel
    d = Numeric.ravel(filled(a))
  File "/usr/lib64/python2.4/site-packages/Numeric/MA/MA.py", line 222, in filled
    return Numeric.array(a)
ValueError: cannot handle misaligned or not writeable arrays.
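The round trip itself can be checked in isolation. The failure above means the unpickled array reported itself as misaligned or not writeable, so those are the two flags to inspect; a sketch of the diagnostic with current NumPy standing in for the old scipy core (not Darren's exact setup):

```python
import pickle
import numpy as np

a = np.arange(10)
b = pickle.loads(pickle.dumps(a))   # round trip through pickle

# Both flags should survive the round trip, and the data should match.
writeable = b.flags['WRITEABLE']
aligned = b.flags['ALIGNED']
same = bool((a == b).all())
```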
Darren From oliphant.travis at ieee.org Wed Nov 30 15:12:52 2005 From: oliphant.travis at ieee.org (Travis Oliphant) Date: Wed, 30 Nov 2005 13:12:52 -0700 Subject: [SciPy-dev] pickling problem In-Reply-To: <200511301504.27962.dd55@cornell.edu> References: <200511301504.27962.dd55@cornell.edu> Message-ID: <438E07C4.6050606@ieee.org> Darren Dale wrote: >I'm having some trouble with unpickled scipy arrays. The issue shows up when I >try to plot in matplotlib, but I think it is a scipy problem. > > Can you reproduce the problem just using Numeric and scipy and pickle? What version of Numeric are you using? This looks like an array protocol issue. But, for me, not using pylab, I can create the scipy array, pickle it, unpickle it and convert it to a Numeric array just fine. -Travis From dd55 at cornell.edu Wed Nov 30 15:34:57 2005 From: dd55 at cornell.edu (Darren Dale) Date: Wed, 30 Nov 2005 15:34:57 -0500 Subject: [SciPy-dev] pickling problem In-Reply-To: <438E07C4.6050606@ieee.org> References: <200511301504.27962.dd55@cornell.edu> <438E07C4.6050606@ieee.org> Message-ID: <200511301534.58185.dd55@cornell.edu> On Wednesday 30 November 2005 03:12 pm, Travis Oliphant wrote: > Darren Dale wrote: > >I'm having some trouble with unpickled scipy arrays. The issue shows up > > when I try to plot in matplotlib, but I think it is a scipy problem. > > Can you reproduce the problem just using Numeric and scipy and pickle? I just ran the same script again with Numeric, and was not able to reproduce the problem. > What version of Numeric are you using? 24.2 > This looks like an array protocol issue. But, for me, not using pylab, > I can create the scipy array, pickle it, unpickle it and convert it to a > Numeric array just fine. The following gives me the same error: import pickle import scipy import Numeric a = scipy.arange(10) pickle.dump(a,file('temp.p','w')) b = pickle.load(file('temp.p')) c = Numeric.array(b) Traceback (most recent call last): File "pickletest.py", line 8, in ? 
c = Numeric.array(b)
ValueError: cannot handle misaligned or not writeable arrays.

I have an up-to-date scipy, (every day I update svn, rm -Rf build, rm -R site-packages/scipy, and install from scratch)

In [4]: scipy.base.__version__
Out[4]: '0.7.4.1544'

In [5]: scipy.__scipy_version__
Out[5]: '0.4.2_1466'

Darren From stephen.walton at csun.edu Wed Nov 30 15:43:46 2005 From: stephen.walton at csun.edu (Stephen Walton) Date: Wed, 30 Nov 2005 12:43:46 -0800 Subject: [SciPy-dev] newcore and gcc4/gfortran (SUCCESS) In-Reply-To: <200511221019.53912.ravi@ati.com> References: <200511221019.53912.ravi@ati.com> Message-ID: <438E0F02.2000007@csun.edu> Ravikiran Rajagopal wrote: >Well, I am happy to report that I can do one better. With gcc4/gfortran combo >from SVN yesterday, along with the usual Redhat/Fedora patches (see >https://bugzilla.redhat.com/bugzilla/show_bug.cgi?id=173771 for the complete >discussion regarding SciPy), I can now compile ATLAS and SciPy with no >problems and all tests pass with the following patch:
>
>Index: Lib/lib/blas/fblaswrap.f.src
>===================================================================
>--- Lib/lib/blas/fblaswrap.f.src (revision 1452)
>+++ Lib/lib/blas/fblaswrap.f.src (working copy)
>@@ -1,7 +1,7 @@
> c
> subroutine wdot (r, n, x, incx, y, incy)
> external dot
>- complex dot, r
>+ dot, r
> integer n
> x (*)
> integer incx
>
I have checked the above fix into scipy, along with a couple of other small patches, making revision 1467. Fedora users might be interested to know that gcc 4.0.2, which includes the gfortran patches needed to compile scipy correctly, was released as a Fedora Core 4 update yesterday. I appreciate the effort Ravi put into making scipy work with gfortran.
From oliphant.travis at ieee.org Wed Nov 30 16:03:02 2005 From: oliphant.travis at ieee.org (Travis Oliphant) Date: Wed, 30 Nov 2005 14:03:02 -0700 Subject: [SciPy-dev] pickling problem In-Reply-To: <200511301534.58185.dd55@cornell.edu> References: <200511301504.27962.dd55@cornell.edu> <438E07C4.6050606@ieee.org> <200511301534.58185.dd55@cornell.edu> Message-ID: <438E1386.90400@ieee.org> Darren Dale wrote: >On Wednesday 30 November 2005 03:12 pm, Travis Oliphant wrote: > > >>Darren Dale wrote: >> >> >>>I'm having some trouble with unpickled scipy arrays. The issue shows up >>>when I try to plot in matplotlib, but I think it is a scipy problem. >>> >>> >>Can you reproduce the problem just using Numeric and scipy and pickle? >> >> > >I just ran the same script again with Numeric, and was not able to reproduce >the problem. > > > >>What version of Numeric are you using? >> >> > >24.2 > > > >>This looks like an array protocol issue. But, for me, not using pylab, >>I can create the scipy array, pickle it, unpickle it and convert it to a >>Numeric array just fine. >> >> > >The following gives me the same error: > >import pickle >import scipy >import Numeric > >a = scipy.arange(10) >pickle.dump(a,file('temp.p','w')) >b = pickle.load(file('temp.p')) >c = Numeric.array(b) > >Traceback (most recent call last): > File "pickletest.py", line 8, in ? > c = Numeric.array(b) >ValueError: cannot handle misaligned or not writeable arrays. > > Which platform are you on? Do you have an old version of Numeric getting picked up by accident? I can't seem to reproduce this error. This error will only show up in the array_fromstructinterface code which parses the array_struct interface information. What is the result of b.flags on your system? To find out insert print b.flags right before you call Numeric.array. 
Mine is {'WRITEABLE': True, 'UPDATEIFCOPY': False, 'NOTSWAPPED': True, 'CONTIGUOUS': True, 'FORTRAN': True, 'ALIGNED': True, 'OWNDATA': False} In particular you are looking for WRITEABLE and ALIGNED to both be True. -Travis From stephen.walton at csun.edu Wed Nov 30 16:37:42 2005 From: stephen.walton at csun.edu (Stephen Walton) Date: Wed, 30 Nov 2005 13:37:42 -0800 Subject: [SciPy-dev] 12 errors using 0.7.3.1538 0.4.2_1461 In-Reply-To: References: <438C0547.6060902@mecha.uni-stuttgart.de> <438CA732.9080003@ieee.org> Message-ID: <438E1BA6.3040307@csun.edu> I'm afraid that I'm getting a very similar set of errors to Nils. I have BLAS, LAPACK, and ATLAS 3.7.11 locally compiled using Absoft Fortran 95, and Scipy also compiled with the same compiler, on Fedora Core 4. On another FC4 system, which is using gfortran 4.0.2 as the compiler and ATLAS 3.6.0 distributed by Fedora Extras, I'm getting the same 12 errors as Nils. Output from the Absoft based system below. Both are scipy core version 0.7.4.1545 and scipy version 0.4.2_1443 ====================================================================== ERROR: check_random (scipy.linalg.decomp.test_decomp.test_cholesky) ---------------------------------------------------------------------- Traceback (most recent call last): File "/usr/lib/python2.4/site-packages/scipy/linalg/tests/test_decomp.py", line 283, in check_random c = cholesky(a) File "/usr/lib/python2.4/site-packages/scipy/linalg/decomp.py", line 334, in cholesky if info>0: raise LinAlgError, "matrix not positive definite" LinAlgError: matrix not positive definite ====================================================================== ERROR: Check whether adding a dense matrix to a sparse matrix works ---------------------------------------------------------------------- Traceback (most recent call last): File "/usr/lib/python2.4/site-packages/scipy/sparse/tests/test_sparse.py", line 148, in check_add_dense sum1 = self.dat + self.datsp File 
"/usr/lib/python2.4/site-packages/scipy/sparse/sparse.py", line 502, in __radd__ raise NotImplementedError, 'adding a scalar to a CSC matrix is ' \ NotImplementedError: adding a scalar to a CSC matrix is not yet supported ====================================================================== ERROR: check_matmat (scipy.sparse.sparse.test_sparse.test_csc) ---------------------------------------------------------------------- Traceback (most recent call last): File "/usr/lib/python2.4/site-packages/scipy/sparse/tests/test_sparse.py", line 84, in check_matmat assert_array_almost_equal((a*bsp).todense(), dot(a,b)) AttributeError: 'scipy.ndarray' object has no attribute 'todense' ====================================================================== ERROR: Check whether adding a dense matrix to a sparse matrix works ---------------------------------------------------------------------- Traceback (most recent call last): File "/usr/lib/python2.4/site-packages/scipy/sparse/tests/test_sparse.py", line 148, in check_add_dense sum1 = self.dat + self.datsp File "/usr/lib/python2.4/site-packages/scipy/sparse/sparse.py", line 173, in __radd__ return csc.__radd__(other) File "/usr/lib/python2.4/site-packages/scipy/sparse/sparse.py", line 502, in __radd__ raise NotImplementedError, 'adding a scalar to a CSC matrix is ' \ NotImplementedError: adding a scalar to a CSC matrix is not yet supported ====================================================================== ERROR: check_matmat (scipy.sparse.sparse.test_sparse.test_csr) ---------------------------------------------------------------------- Traceback (most recent call last): File "/usr/lib/python2.4/site-packages/scipy/sparse/tests/test_sparse.py", line 84, in check_matmat assert_array_almost_equal((a*bsp).todense(), dot(a,b)) AttributeError: 'scipy.ndarray' object has no attribute 'todense' ====================================================================== ERROR: Check whether adding a dense matrix to a sparse matrix 
works ---------------------------------------------------------------------- Traceback (most recent call last): File "/usr/lib/python2.4/site-packages/scipy/sparse/tests/test_sparse.py", line 149, in check_add_dense assert_array_equal(sum1, 2*self.dat) File "/usr/lib/python2.4/site-packages/scipy/test/testing.py", line 724, in assert_array_equal reduced = ravel(equal(x,y)) File "/usr/lib/python2.4/site-packages/scipy/sparse/sparse.py", line 146, in __cmp__ raise TypeError, "comparison of sparse matrices not implemented" TypeError: comparison of sparse matrices not implemented ====================================================================== ERROR: check_matmat (scipy.sparse.sparse.test_sparse.test_dok) ---------------------------------------------------------------------- Traceback (most recent call last): File "/usr/lib/python2.4/site-packages/scipy/sparse/tests/test_sparse.py", line 84, in check_matmat assert_array_almost_equal((a*bsp).todense(), dot(a,b)) AttributeError: 'scipy.ndarray' object has no attribute 'todense' ====================================================================== ERROR: line-search Newton conjugate gradient optimization routine ---------------------------------------------------------------------- Traceback (most recent call last): File "/usr/lib/python2.4/site-packages/scipy/optimize/tests/test_optimize.py", line 103, in check_ncg full_output=False, disp=False, retall=False) File "/usr/lib/python2.4/site-packages/scipy/optimize/optimize.py", line 969, in fmin_ncg alphak, fc, gc, old_fval = line_search_BFGS(f,xk,pk,gfk,old_fval) File "/usr/lib/python2.4/site-packages/scipy/optimize/optimize.py", line 556, in line_search_BFGS phi_a2 = apply(f,(xk+alpha2*pk,)+args) File "/usr/lib/python2.4/site-packages/scipy/optimize/optimize.py", line 133, in function_wrapper return function(x, *args) File "/usr/lib/python2.4/site-packages/scipy/optimize/tests/test_optimize.py", line 30, in func raise RuntimeError, "too many iterations in 
optimization routine" RuntimeError: too many iterations in optimization routine ====================================================================== ERROR: Check whether adding a dense matrix to a sparse matrix works ---------------------------------------------------------------------- Traceback (most recent call last): File "/usr/lib/python2.4/site-packages/scipy/sparse/tests/test_sparse.py", line 148, in check_add_dense sum1 = self.dat + self.datsp File "/usr/lib/python2.4/site-packages/scipy/sparse/sparse.py", line 502, in __radd__ raise NotImplementedError, 'adding a scalar to a CSC matrix is ' \ NotImplementedError: adding a scalar to a CSC matrix is not yet supported ====================================================================== ERROR: check_matmat (scipy.sparse.test_sparse.test_csc) ---------------------------------------------------------------------- Traceback (most recent call last): File "/usr/lib/python2.4/site-packages/scipy/sparse/tests/test_sparse.py", line 84, in check_matmat assert_array_almost_equal((a*bsp).todense(), dot(a,b)) AttributeError: 'scipy.ndarray' object has no attribute 'todense' ====================================================================== ERROR: Check whether adding a dense matrix to a sparse matrix works ---------------------------------------------------------------------- Traceback (most recent call last): File "/usr/lib/python2.4/site-packages/scipy/sparse/tests/test_sparse.py", line 148, in check_add_dense sum1 = self.dat + self.datsp File "/usr/lib/python2.4/site-packages/scipy/sparse/sparse.py", line 173, in __radd__ return csc.__radd__(other) File "/usr/lib/python2.4/site-packages/scipy/sparse/sparse.py", line 502, in __radd__ raise NotImplementedError, 'adding a scalar to a CSC matrix is ' \ NotImplementedError: adding a scalar to a CSC matrix is not yet supported ====================================================================== ERROR: check_matmat (scipy.sparse.test_sparse.test_csr) 
---------------------------------------------------------------------- Traceback (most recent call last): File "/usr/lib/python2.4/site-packages/scipy/sparse/tests/test_sparse.py", line 84, in check_matmat assert_array_almost_equal((a*bsp).todense(), dot(a,b)) AttributeError: 'scipy.ndarray' object has no attribute 'todense' ====================================================================== ERROR: Check whether adding a dense matrix to a sparse matrix works ---------------------------------------------------------------------- Traceback (most recent call last): File "/usr/lib/python2.4/site-packages/scipy/sparse/tests/test_sparse.py", line 149, in check_add_dense assert_array_equal(sum1, 2*self.dat) File "/usr/lib/python2.4/site-packages/scipy/test/testing.py", line 724, in assert_array_equal reduced = ravel(equal(x,y)) File "/usr/lib/python2.4/site-packages/scipy/sparse/sparse.py", line 146, in __cmp__ raise TypeError, "comparison of sparse matrices not implemented" TypeError: comparison of sparse matrices not implemented ====================================================================== ERROR: check_matmat (scipy.sparse.test_sparse.test_dok) ---------------------------------------------------------------------- Traceback (most recent call last): File "/usr/lib/python2.4/site-packages/scipy/sparse/tests/test_sparse.py", line 84, in check_matmat assert_array_almost_equal((a*bsp).todense(), dot(a,b)) AttributeError: 'scipy.ndarray' object has no attribute 'todense' ====================================================================== FAIL: check_double_integral (scipy.integrate.quadpack.test_quadpack.test_quad) ---------------------------------------------------------------------- Traceback (most recent call last): File "/usr/lib/python2.4/site-packages/scipy/integrate/tests/test_quadpack.py", line 95, in check_double_integral 5/6.0 * (b**3.0-a**3.0)) File "/usr/lib/python2.4/site-packages/scipy/integrate/tests/test_quadpack.py", line 11, in assert_quad 
assert err < errTol, (err, errTol) AssertionError: (23403637477.367432, 1.4999999999999999e-08) ====================================================================== FAIL: check_triple_integral (scipy.integrate.quadpack.test_quadpack.test_quad) ---------------------------------------------------------------------- Traceback (most recent call last): File "/usr/lib/python2.4/site-packages/scipy/integrate/tests/test_quadpack.py", line 106, in check_triple_integral 8/3.0 * (b**4.0 - a**4.0)) File "/usr/lib/python2.4/site-packages/scipy/integrate/tests/test_quadpack.py", line 9, in assert_quad assert abs(value-tabledValue) < err, (value, tabledValue, err) AssertionError: (4.2487097720268533e-21, 40.0, 1.5870895476678149e-21) ====================================================================== FAIL: check_expon (scipy.stats.morestats.test_morestats.test_anderson) ---------------------------------------------------------------------- Traceback (most recent call last): File "/usr/lib/python2.4/site-packages/scipy/stats/tests/test_morestats.py", line 57, in check_expon assert_array_less(A, crit[-2:]) File "/usr/lib/python2.4/site-packages/scipy/test/testing.py", line 782, in assert_array_less assert cond,\ AssertionError: Arrays are not less-ordered (mismatch 50.0%): Array 1: 1.603235478998613 Array 2: [ 1.587 1.9339999999999999] ====================================================================== FAIL: check_sygv (scipy.lib.lapack.test_lapack.test_flapack_float) ---------------------------------------------------------------------- Traceback (most recent call last): File "/usr/lib/python2.4/site-packages/scipy/lib/lapack/tests/gesv_tests.py", line 15, in check_sygv assert_array_almost_equal(dot(a,v[:,i]),w[i]*dot(b,v[:,i]),self.decimal) File "/usr/lib/python2.4/site-packages/scipy/test/testing.py", line 758, in assert_array_almost_equal assert cond,\ AssertionError: Arrays are not almost equal (mismatch 100.0%): Array 1: [ nan nan nan] Array 2: [ nan nan nan] 
---------------------------------------------------------------------- Ran 1223 tests in 13.048s FAILED (failures=4, errors=14) From stephen.walton at csun.edu Wed Nov 30 17:14:01 2005 From: stephen.walton at csun.edu (Stephen Walton) Date: Wed, 30 Nov 2005 14:14:01 -0800 Subject: [SciPy-dev] 12 errors using 0.7.3.1538 0.4.2_1461 In-Reply-To: <438E1BA6.3040307@csun.edu> References: <438C0547.6060902@mecha.uni-stuttgart.de> <438CA732.9080003@ieee.org> <438E1BA6.3040307@csun.edu> Message-ID: <438E2429.9030605@csun.edu> Stephen Walton wrote: >I'm afraid that I'm getting a very similar set of errors to Nils. I >have BLAS, LAPACK, and ATLAS 3.7.11 locally compiled using Absoft >Fortran 95, and Scipy also compiled with the same compiler, on Fedora >Core 4. On another FC4 system, which is using gfortran 4.0.2 as the >compiler and ATLAS 3.6.0 distributed by Fedora Extras, I'm getting the >same 12 errors as Nils. Output from the Absoft based system below. >Both are scipy core version 0.7.4.1545 and scipy version 0.4.2_1443 > > Correction: scipy version is 1467, not 1443. It turns out there's a minor bug in how scipy gets built: if the build gets interrupted for some reason an old version of Lib/__svn_version__.py gets left lying around, and is not deleted the next time a build is started. From oliphant.travis at ieee.org Wed Nov 30 17:35:28 2005 From: oliphant.travis at ieee.org (Travis Oliphant) Date: Wed, 30 Nov 2005 15:35:28 -0700 Subject: [SciPy-dev] 12 errors using 0.7.3.1538 0.4.2_1461 In-Reply-To: <438E2429.9030605@csun.edu> References: <438C0547.6060902@mecha.uni-stuttgart.de> <438CA732.9080003@ieee.org> <438E1BA6.3040307@csun.edu> <438E2429.9030605@csun.edu> Message-ID: <438E2930.9000705@ieee.org> Stephen Walton wrote: >Stephen Walton wrote: > > > >>I'm afraid that I'm getting a very similar set of errors to Nils. I >>have BLAS, LAPACK, and ATLAS 3.7.11 locally compiled using Absoft >>Fortran 95, and Scipy also compiled with the same compiler, on Fedora >>Core 4. 
>>On another FC4 system, which is using gfortran 4.0.2 as the
>>compiler and ATLAS 3.6.0 distributed by Fedora Extras, I'm getting the
>>same 12 errors as Nils. Output from the Absoft based system below.
>>Both are scipy core version 0.7.4.1545 and scipy version 0.4.2_1443
>>
>
>Correction: scipy version is 1467, not 1443. It turns out there's a
>minor bug in how scipy gets built: if the build gets interrupted for
>some reason, an old version of Lib/__svn_version__.py gets left lying
>around and is not deleted the next time a build is started.
>

O.K. I just figured out the problem. It turns out my tree is still
updating the 0.4.3 tag for scipy. A silly mistake on my part. Let me
merge the changes from 0.4.3 back to the main trunk and see where we
are at then.

-Travis

From oliphant.travis at ieee.org  Wed Nov 30 16:50:27 2005
From: oliphant.travis at ieee.org (Travis Oliphant)
Date: Wed, 30 Nov 2005 14:50:27 -0700
Subject: [SciPy-dev] 12 errors using 0.7.3.1538 0.4.2_1461
In-Reply-To: <438E1BA6.3040307@csun.edu>
References: <438C0547.6060902@mecha.uni-stuttgart.de>
	<438CA732.9080003@ieee.org> <438E1BA6.3040307@csun.edu>
Message-ID: <438E1EA3.7070605@ieee.org>

Stephen Walton wrote:

>I'm afraid that I'm getting a very similar set of errors to Nils. I
>have BLAS, LAPACK, and ATLAS 3.7.11 locally compiled using Absoft
>Fortran 95, and Scipy also compiled with the same compiler, on Fedora
>Core 4. On another FC4 system, which is using gfortran 4.0.2 as the
>compiler and ATLAS 3.6.0 distributed by Fedora Extras, I'm getting the
>same 12 errors as Nils. Output from the Absoft based system below.
>Both are scipy core version 0.7.4.1545 and scipy version 0.4.2_1443
>

Latest scipy is version 1467

-Travis

From oliphant.travis at ieee.org  Wed Nov 30 17:24:28 2005
From: oliphant.travis at ieee.org (Travis Oliphant)
Date: Wed, 30 Nov 2005 15:24:28 -0700
Subject: [SciPy-dev] 12 errors using 0.7.3.1538 0.4.2_1461
In-Reply-To: <438E2429.9030605@csun.edu>
References: <438C0547.6060902@mecha.uni-stuttgart.de>
	<438CA732.9080003@ieee.org> <438E1BA6.3040307@csun.edu>
	<438E2429.9030605@csun.edu>
Message-ID: <438E269C.9030502@ieee.org>

Stephen Walton wrote:

>Stephen Walton wrote:
>
>>I'm afraid that I'm getting a very similar set of errors to Nils. I
>>have BLAS, LAPACK, and ATLAS 3.7.11 locally compiled using Absoft
>>Fortran 95, and Scipy also compiled with the same compiler, on Fedora
>>Core 4. On another FC4 system, which is using gfortran 4.0.2 as the
>>compiler and ATLAS 3.6.0 distributed by Fedora Extras, I'm getting the
>>same 12 errors as Nils. Output from the Absoft based system below.
>>Both are scipy core version 0.7.4.1545 and scipy version 0.4.2_1443
>>
>
>Correction: scipy version is 1467, not 1443. It turns out there's a
>minor bug in how scipy gets built: if the build gets interrupted for
>some reason, an old version of Lib/__svn_version__.py gets left lying
>around and is not deleted the next time a build is started.
>

Perhaps this bug is causing your Lib/sparse/sparse.py to not get
updated as well.

Some of the errors being reported are due to changes in scipy core.
The sparse.py file received an appropriate fix. Somehow this is not
getting propagated to your installed version of scipy (even though
it's been checked in to svn).

Please check

/scipy/sparse/sparse.py

and compare with

/Lib/sparse/sparse.py

In particular, make sure the spmatrix class defines the
__array_priority__ attribute.
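[The attribute Travis mentions is the deferral hook that scipy core (and NumPy after it) uses to decide which operand handles a mixed binary operation: if the non-array operand advertises a higher __array_priority__ and defines the reflected method, the dense array's method returns NotImplemented and Python falls back to the higher-priority object. A minimal sketch with a toy stand-in for spmatrix -- the class name and priority value here are illustrative, and this is checked against a modern NumPy rather than 2005 scipy core:]

```python
import numpy as np

class SparseLike:
    # Any value above ndarray's default priority of 0.0 tells the
    # array machinery to defer mixed operations to this class.
    __array_priority__ = 10.1

    def __add__(self, other):
        # b + a: Python calls this directly.
        return ("sparse_add", other)

    def __radd__(self, other):
        # a + b: ndarray.__add__ sees the higher priority, returns
        # NotImplemented, and Python falls back to this method.
        return ("sparse_radd", other)

a = np.arange(3)
b = SparseLike()

assert (a + b)[0] == "sparse_radd"   # dense + sparse: sparse handles it
assert (b + a)[0] == "sparse_add"    # sparse + dense: sparse handles it
```

[Without the attribute, a + b instead treats the sparse object as generic array-like data, which is consistent with the wrong results the sparse tests were reporting.]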
-Travis

From oliphant.travis at ieee.org  Wed Nov 30 18:06:13 2005
From: oliphant.travis at ieee.org (Travis Oliphant)
Date: Wed, 30 Nov 2005 16:06:13 -0700
Subject: [SciPy-dev] Version 1471 should have my changes
In-Reply-To: <438E2429.9030605@csun.edu>
References: <438C0547.6060902@mecha.uni-stuttgart.de>
	<438CA732.9080003@ieee.org> <438E1BA6.3040307@csun.edu>
	<438E2429.9030605@csun.edu>
Message-ID: <438E3065.6020000@ieee.org>

Well, I've been on a different page with scipy development since the
0.4.3 tagging. My changes were not getting placed on the main tree
because of my silly mistake. After some fiddling, I've finally merged
the changes back to the HEAD revision of scipy. Hopefully I didn't
mess something else up.

-Travis

From stephen.walton at csun.edu  Wed Nov 30 19:25:31 2005
From: stephen.walton at csun.edu (Stephen Walton)
Date: Wed, 30 Nov 2005 16:25:31 -0800
Subject: [SciPy-dev] 12 errors using 0.7.3.1538 0.4.2_1461
In-Reply-To: <438E269C.9030502@ieee.org>
References: <438C0547.6060902@mecha.uni-stuttgart.de>
	<438CA732.9080003@ieee.org> <438E1BA6.3040307@csun.edu>
	<438E2429.9030605@csun.edu> <438E269C.9030502@ieee.org>
Message-ID: <438E42FB.2090808@csun.edu>

Travis Oliphant wrote:

>Some of the errors being reported are due to changes in scipy core.
>The sparse.py file received an appropriate fix. Somehow this is not
>getting propagated to your installed version of scipy (even though it's
>been checked in to svn).
>
>Please check
>
>/scipy/sparse/sparse.py
>
>and compare with
>
>/Lib/sparse/sparse.py
>

These two files were identical, and neither included the
__array_priority__ attribute, even though "svn up" reported earlier
today that I was at revision 1467. So on one machine I manually
deleted Lib/sparse/sparse.py and did "svn up." On another machine I
just did "svn up." Both now are at revision 1471 of scipy, and
sparse.py contains the __array_priority__ attribute. With this
resolved, all of the sparse matrix test failures are gone.
There are a few others, which I will investigate a bit after the next
update.

Steve

From schofield at ftw.at  Wed Nov 30 20:12:18 2005
From: schofield at ftw.at (Ed Schofield)
Date: Thu, 1 Dec 2005 01:12:18 +0000
Subject: [SciPy-dev] Three cheers for Travis!
In-Reply-To: <438E007F.5010200@ieee.org>
References: <438DFD92.6040003@ftw.at> <438E007F.5010200@ieee.org>
Message-ID:

On 30/11/2005, at 7:41 PM, Travis Oliphant wrote:

>> I have another couple of 0-dim array bugs to report (as a reward :D)
>>
>>>>> b = array(2)
>>>>> b.argmax()
>> 0
>>>>> b.argmin()
>> Segmentation fault
>>
>
> I guess the broader question here is should argmax even work for 0-d
> arrays since they can't be indexed?

Yes, it's a crazy thing to do. I guess an exception would be the best
result.

-- Ed
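[For the record, the behavior settled differently from the exception Ed suggests here: in the NumPy that scipy core became, argmax/argmin on a 0-d array no longer crash and simply return 0, the only position a size-1 flattened view has, even though the array itself still cannot be indexed. A quick check -- against a modern NumPy, not the 2005 scipy core under discussion:]

```python
import numpy as np

b = np.array(2)   # 0-d array: b.ndim == 0, b.size == 1

# The segfault was fixed; both reductions return position 0.
assert b.argmax() == 0
assert b.argmin() == 0

# Indexing a 0-d array still fails, which was the root of the question.
try:
    b[0]
except IndexError as e:
    print("IndexError:", e)
```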