From eric at enthought.com Sat Feb 1 05:32:39 2003
From: eric at enthought.com (eric jones)
Date: Sat, 1 Feb 2003 04:32:39 -0600
Subject: [SciPy-dev] cow: 'Connection reset by peer' timeout problem?
In-Reply-To:
Message-ID: <000001c2c9dd$42365e90$8901a8c0@ERICDESKTOP>

Hey Simon,

I don't remember ever seeing this, but it has been about a year since I used cow heavily. At the time, the jobs I ran lasted about 1 minute each, so I didn't run into the 4 minute time out you are seeing.

I can't think of a technical reason why 4 minutes is a magic number from the Python code standpoint. There is a timeout value I believe, but it wouldn't cause the error you are seeing.

Could it have something to do with ssh timing out and disconnecting?

eric

----------------------------------------------
eric jones                515 Congress Ave
www.enthought.com         Suite 1614
512 536-1057              Austin, Tx 78701

> -----Original Message-----
> From: scipy-dev-admin at scipy.net [mailto:scipy-dev-admin at scipy.net] On Behalf Of Simon Saubern
> Sent: Wednesday, January 29, 2003 7:34 PM
> To: scipy-dev at scipy.net
> Subject: [SciPy-dev] cow: 'Connection reset by peer' timeout problem?
>
> I'm re-posting this message here as I didn't get any replies on the scipy-users list:
>
> I've been using cow to try out some distributed calculations. Everything works fine if I use a subset of my data, but when I use the full set I get "error: (10054, 'Connection reset by peer')" messages on the master unit (see below for full output).
>
> I can operate on larger and larger subsets until I get to the point where if the slaves take more than about 4 minutes to complete a task, the above error appears at the master.
>
> That is, connections are established (confirmed using netstat), processing occurs on the slaves and keeps going, but the master times out after about 4 min.
>
> Is this a 'keep alive' problem? If so, how can I extend the time out period?
>
> The setup:
> 10 x slave + master, all Win2K SP-3
> Python 2.2.2
> latest scipy binary for Win
>
>     cowname['data'] = data   # a list 35000 long
>     lendata = range(7000)    # just use a subset
>     bessy = None
>     while not bessy:
>         bessy = cowname.loop_code('do something;do something;calc=function(data[x])',
>                                   loop_var='x', inputs={'x': lendata},
>                                   returns=['calc'])
>     # bessy gets processed here
>
> 'data' is quite large and takes a while to transfer over the network. But by doing it once and looping over the index, I minimize network movements. The 'python' process on each slave uses about 85MB.
>
> Increasing 'lendata' eventually causes the 'Connection reset by peer' message to appear.
>
> Any pointers welcomed.
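
One avenue for the keep-alive question raised above: TCP keepalive can be switched on per socket from Python. The snippet below is a generic illustration, not cow's actual API; the host and port are placeholders, and the probe interval itself is an OS-level setting (on Win2K it is a registry parameter), not something this one-line option controls.

    import socket

    # Hedged sketch: request OS-level TCP keepalive probes on an otherwise
    # idle connection, so long-running jobs are less likely to be dropped.
    # 'slave-host' and 10000 are placeholder values.
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    s.setsockopt(socket.SOL_SOCKET, socket.SO_KEEPALIVE, 1)
    s.connect(('slave-host', 10000))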
>
> ---------------error output
>
>   File "C:\PROGRA~1\Python22\Lib\site-packages\scipy\cow\cow.py", line 823, in loop_code
>     return self.loop_send_recv(package,loop_data,loop_var)
>   File "C:\PROGRA~1\Python22\Lib\site-packages\scipy\cow\cow.py", line 847, in loop_send_recv
>     results = self._send_recv(package,addendums)
>   File "C:\PROGRA~1\Python22\Lib\site-packages\scipy\cow\cow.py", line 345, in _send_recv
>     self.last_results = self._recv()
>   File "C:\PROGRA~1\Python22\Lib\site-packages\scipy\cow\cow.py", line 303, in _recv
>     results.append(worker.recv())
>   File "C:\PROGRA~1\Python22\Lib\site-packages\scipy\cow\sync_cluster.py", line 404, in recv
>     package = self.channel.read()
>   File "C:\PROGRA~1\Python22\Lib\site-packages\scipy\cow\sync_cluster.py", line 164, in read
>     x = self.rfile.read()
>   File "c:\Program Files\Python22\lib\socket.py", line 228, in read
>     new = self._sock.recv(k)
> error: (10054, 'Connection reset by peer')
> >>>
> ------------
> --
>
> Cheers,
>
> Simon
> _______________________________________________
> Scipy-dev mailing list
> Scipy-dev at scipy.net
> http://www.scipy.net/mailman/listinfo/scipy-dev

From simon.saubern at molsci.csiro.au Sun Feb 2 19:17:33 2003
From: simon.saubern at molsci.csiro.au (Simon Saubern)
Date: Mon, 3 Feb 2003 11:17:33 +1100
Subject: [SciPy-dev] Re: cow: 'Connection reset by peer' timeout problem?
Message-ID:

Thanks for replying Eric, but I thought that the ssh connection didn't work under W2K? At least that was my impression from reading the cow code.

I've actually put an extra loop into my code to package the data into smaller chunks and keep the reply time to under 4 min. This seems to work for calculations that take up to 4h. I can get to 10h if I make the packages small enough to take about 1.5 min. The smaller the packages, the better, but then the amount of network traffic increases at the same time and the proportion of time that the slaves spend actually doing calculations decreases.

Some of the calculations that I've run take 18-27h, which requires me to log in from home in the wee hours to restart the calculations.

Are there any other distributed computing environments out there for Python that run under W2K? PyMPI would require me to convince all my colleagues to install linux on their desktop machines - which just isn't going to happen (yet).

Cheers,

Simon

>From: "eric jones"
>To:
>Subject: RE: [SciPy-dev] cow: 'Connection reset by peer' timeout problem?
>Date: Sat, 1 Feb 2003 04:32:39 -0600
>Reply-To: scipy-dev at scipy.net
>
>Hey Simon,
>
>I don't remember ever seeing this, but it has been about a year since I
>used cow heavily. At the time, the jobs I ran lasted about 1 minute
>each, so I didn't run into the 4 minute time out you are seeing.
>
>I can't think of a technical reason why 4 minutes is a magic number from
>the Python code standpoint. There is a timeout value I believe, but it
>wouldn't cause the error you are seeing.
>
>Could it have something to do with ssh timing out and disconnecting?
>
>eric
>
>----------------------------------------------
>eric jones 515 Congress Ave
>www.enthought.com Suite 1614
>512 536-1057 Austin, Tx 78701

From w.f.alexander at ieee.org Tue Feb 4 03:32:20 2003
From: w.f.alexander at ieee.org (Bill Alexander)
Date: 04 Feb 2003 00:32:20 -0800
Subject: [SciPy-dev] Fix to FFTW Compilation Error in latest CVS
Message-ID: <1044347540.1394.105.camel@turtle>

All -

I believe I have fixed a compile-time error under Unix/Linux in the fftpack module of scipy.
I found this problem in some of the older tgz's (source and binary forms), as well as the last three days' CVS versions. I found no published reference to this error or solution, so I hope this isn't just additional noise on the list.

Problem:

>>> import scipy
Traceback (most recent call last):
  File "<stdin>", line 1, in ?
  File "/usr/lib/python2.2/site-packages/scipy/__init__.py", line 49, in ?
    import special, io, linalg, stats, fftpack
  File "/usr/lib/python2.2/site-packages/scipy/fftpack/__init__.py", line 55, in ?
    from pseudo_diffs import *
  File "/usr/lib/python2.2/site-packages/scipy/fftpack/pseudo_diffs.py", line 12, in ?
    import convolve
ImportError: /usr/lib/python2.2/site-packages/scipy/fftpack/convolve.so: undefined symbol: fftw_lookup

Diagnosis:

Analysis reveals that the FFTW libraries are being linked in the wrong order, causing the fftw_lookup object to be obscured and lost. Looking into the scipy/scipy_distutils/system_info.py script reveals that in every position except one, the *rfftw libraries properly precede the *fftw library (all the precisions are in there, with rfftw and fftw being the default case). This default case is in the wrong order, with fftw in front of rfftw.

Doesn't it just suck that we still have these kinds of problems? You'd think our compilers could figure it out by now. :)

Solution:

The fix is trivial - just reorder the libraries in the list (it looks like this is already OK for all the other precisions in the script, just not this particular one). Roughly line #324 of scipy/scipy_distutils/system_info.py: you want to use

    libs = ['rfftw','fftw']

instead of

    libs = ['fftw','rfftw']

A diff patch is included below. This is against the 03 February 2003 CVS (not that the date really matters - it looks like the same file as the Oct 2002 Sun build that's in the download area).

Now that I've got it running, I can't wait to throw my Matlab support agreement into a ditch. OK, maybe that's cruel and a bit optimistic, but I sure hope so anyway. Also, this was built on a mostly stock Red Hat 8.0 system, in case that matters to anyone.

Cheers,
- Bill Alexander

diff -u scipy/scipy_distutils/system_info.py scipy_fixed/scipy_distutils/system_info.py
--- scipy/scipy_distutils/system_info.py  2002-10-14 16:04:20.000000000 -0700
+++ scipy_fixed/scipy_distutils/system_info.py  2003-02-03 23:31:03.000000000 -0800
@@ -321,7 +321,7 @@
 class fftw_info(system_info):
     section = 'fftw'
     dir_env_var = 'FFTW'
-    libs = ['fftw','rfftw']
+    libs = ['rfftw','fftw']
     includes = ['fftw.h','rfftw.h']
     macros = [('SCIPY_FFTW_H',None)]

From pearu at cens.ioc.ee Tue Feb 4 03:54:06 2003
From: pearu at cens.ioc.ee (Pearu Peterson)
Date: Tue, 4 Feb 2003 10:54:06 +0200 (EET)
Subject: [SciPy-dev] Fix to FFTW Compilation Error in latest CVS
In-Reply-To: <1044347540.1394.105.camel@turtle>
Message-ID:

On 4 Feb 2003, Bill Alexander wrote:

> Solution:
>
> The fix is trivial - just reorder the libraries in the list (it looks
> like this is already OK for all the other precisions in the script, just
> not this particular one):

Thanks for the fix. It is applied to scipy CVS.

Pearu

From datafeed at SoftHome.net Wed Feb 5 14:12:35 2003
From: datafeed at SoftHome.net (M. Evans)
Date: Wed, 5 Feb 2003 12:12:35 -0700
Subject: [SciPy-dev] Blitz++ and Digital Mars C++
Message-ID: <82319068.20030205121235@SoftHome.net>

http://www.digitalmars.com/drn-bin/wwwnews?c%2B%2B/2079

From scipy-dev at scipy.net Wed Feb 5 15:47:18 2003
From: scipy-dev at scipy.net (Neal D. Becker)
Date: Wed, 5 Feb 2003 15:47:18 -0500
Subject: [SciPy-dev] complex-valued remez
Message-ID: <200302051547.18294."Neal D. Becker" <>>

I see that some remez algorithm was posted here some time back. I wonder if either this code or that in the sigtools could be used to apply the remez algorithm to complex-valued signals?

From Chuck.Harris at sdl.usu.edu Wed Feb 5 16:08:22 2003
From: Chuck.Harris at sdl.usu.edu (Chuck Harris)
Date: Wed, 5 Feb 2003 14:08:22 -0700
Subject: [SciPy-dev] complex-valued remez
Message-ID:

Depends...

I used the code I posted to design complex-hermitean filters, i.e., the filter coefficients were complex but the transfer function was real and arbitrary from zero to the sampling frequency. The Remez code in sigtools will design filters whose transfer functions are symmetric or antisymmetric about the Nyquist frequency.

The Remez code I posted can be made considerably more efficient for the special case of filter design. It was made for the general case.

Chuck

> -----Original Message-----
> From:
> Sent: Wednesday, February 05, 2003 1:47 PM
> To: scipy-dev at scipy.net
> Subject: [SciPy-dev] complex-valued remez
>
> I see that some remez algorithm was posted here some time back. I wonder if either this code or that in the sigtools could be used to apply the remez algorithm to complex-valued signals?
>
> _______________________________________________
> Scipy-dev mailing list
> Scipy-dev at scipy.net
> http://www.scipy.net/mailman/listinfo/scipy-dev

From oliphant at ee.byu.edu Wed Feb 5 16:17:16 2003
From: oliphant at ee.byu.edu (Travis Oliphant)
Date: Wed, 5 Feb 2003 14:17:16 -0700 (MST)
Subject: [SciPy-dev] complex-valued remez
In-Reply-To:
Message-ID:

> Depends...
>
> I used the code I posted to design complex-hermitean filters, i.e., the
> filter coefficients were complex but the transfer function was real and
> arbitrary from zero to the sampling frequency. The Remez code in sigtools
> will design filters whose transfer functions are symmetric or
> antisymmetric about the Nyquist frequency.
>
> The Remez code I posted can be made considerably more efficient for the
> special case of filter design. It was made for the general case.
>
> Chuck

I was going back to check on this code, and noticed that it is posted in base64 encoding.

I'm not sure how to decode the file.

I would like to incorporate it into signal as a separate function.

Can you resend it in ascii format or tell me how to decode it?

Thanks,

-Travis

From oliphant at ee.byu.edu Wed Feb 5 16:31:29 2003
From: oliphant at ee.byu.edu (Travis Oliphant)
Date: Wed, 5 Feb 2003 14:31:29 -0700 (MST)
Subject: [SciPy-dev] complex-valued remez
In-Reply-To:
Message-ID:

> > I used the code I posted to design complex-hermitean filters, i.e., the
> > filter coefficients were complex but the transfer function was real and
> > arbitrary from zero to the sampling frequency. The Remez code in sigtools
> > will design filters whose transfer functions are symmetric or
> > antisymmetric about the Nyquist frequency.
> >
> > The Remez code I posted can be made considerably more efficient for the
> > special case of filter design. It was made for the general case.
> >
> > Chuck
>
> I was going back to check on this code, and noticed that it is posted in
> base64 encoding.
>
> I'm not sure how to decode the file.

I found out that using uudeview on Linux will decode the file, so I now have it in text form. You may ignore the question.

Thanks,

-Travis O.
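
For anyone else stuck on the same decoding problem, the Python standard library can do what uudeview did here. A minimal sketch, with placeholder file names:

    import base64

    # Minimal sketch: recover the original text from a saved base64 body.
    # 'remez.b64' and 'remez.py' are placeholder names.
    encoded = open('remez.b64').read()
    open('remez.py', 'w').write(base64.decodestring(encoded))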
From Chuck.Harris at sdl.usu.edu Wed Feb 5 16:46:02 2003
From: Chuck.Harris at sdl.usu.edu (Chuck Harris)
Date: Wed, 5 Feb 2003 14:46:02 -0700
Subject: [SciPy-dev] complex-valued remez
Message-ID:

> -----Original Message-----
> From: Travis Oliphant [mailto:oliphant at ee.byu.edu]
> Sent: Wednesday, February 05, 2003 2:17 PM
> To: scipy-dev at scipy.net
> Subject: RE: [SciPy-dev] complex-valued remez
>
> > Depends...
> >
> > I used the code I posted to design complex-hermitean filters, i.e., the
> > filter coefficients were complex but the transfer function was real and
> > arbitrary from zero to the sampling frequency. The Remez code in sigtools
> > will design filters whose transfer functions are symmetric or
> > antisymmetric about the Nyquist frequency.
> >
> > The Remez code I posted can be made considerably more efficient for the
> > special case of filter design. It was made for the general case.
> >
> > Chuck
>
> I was going back to check on this code, and noticed that it is posted in
> base64 encoding.
>
> I'm not sure how to decode the file.
>
> I would like to incorporate it into signal as a separate function.
>
> Can you resend it in ascii format or tell me how to decode it?
>
> Thanks,
>
> -Travis

Oops, I thought I just sent the (Python) text file as text.

This particular version should probably go into the optimization directory. The general setting is the space of real continuous functions on a compact subset of the real line in the sup norm. Given a point in this space and a special sort of finite dimensional subspace (Chebychev system), it finds the unique closest point in the subspace. Without the restriction to Chebychev systems the closest points are not necessarily unique. Anyhow, it's a type of optimization.

For the signals tools, the best version is probably an adaptation of the current algorithm with either complex barycentric interpolation instead of the current real version, or a combination of this with the fft. I've given some thought to these, but haven't actually coded them up beyond bits and pieces. I can work these over a bit and send them in if you think they would be useful.

Chuck

From nwagner at mecha.uni-stuttgart.de Fri Feb 7 09:24:52 2003
From: nwagner at mecha.uni-stuttgart.de (Nils Wagner)
Date: Fri, 07 Feb 2003 15:24:52 +0100
Subject: [SciPy-dev] Bug in signaltools.py
Message-ID: <3E43C1B4.2D4B3A01@mecha.uni-stuttgart.de>

Hi,

I guess there is a bug in signaltools.py

>>> import scipy
Traceback (most recent call last):
  File "<stdin>", line 1, in ?
  File "/usr/local/lib/python2.1/site-packages/scipy/__init__.py", line 64, in ?
    import optimize, integrate, signal, special, interpolate, cow, \
  File "/usr/local/lib/python2.1/site-packages/scipy/signal/__init__.py", line 70, in ?
    from signaltools import *
  File "/usr/local/lib/python2.1/site-packages/scipy/signal/signaltools.py", line 190
    order = numels//2
                   ^
SyntaxError: invalid syntax
>>>

Any idea ?

Nils

From otttr440 at student.liu.se Fri Feb 7 10:50:14 2003
From: otttr440 at student.liu.se (Otto Tronarp)
Date: 07 Feb 2003 16:50:14 +0100
Subject: [SciPy-dev] Bug in signaltools.py
In-Reply-To: <3E43C1B4.2D4B3A01@mecha.uni-stuttgart.de>
References: <3E43C1B4.2D4B3A01@mecha.uni-stuttgart.de>
Message-ID: <1044633014.1006.7.camel@mathcore2>

On Fri, 2003-02-07 at 15:24, Nils Wagner wrote:
> Hi,
>
> I guess there is a bug in signaltools.py
[snip]
> File "/usr/local/lib/python2.1/site-packages/scipy/signal/signaltools.py", line 190
>     order = numels//2
>                    ^
> SyntaxError: invalid syntax
> >>>
>
> Any idea ?
>
> Nils

I don't know if the // is a typo or if it is intentional; if it is intentional, you need to upgrade your python to a version that uses // for integer division. (2.2.1 works for me)

Regards,
Otto

> _______________________________________________
> Scipy-dev mailing list
> Scipy-dev at scipy.net
> http://www.scipy.net/mailman/listinfo/scipy-dev
--
Otto Tronarp

From pearu at cens.ioc.ee Fri Feb 7 16:20:06 2003
From: pearu at cens.ioc.ee (Pearu Peterson)
Date: Fri, 7 Feb 2003 23:20:06 +0200 (EET)
Subject: [SciPy-dev] Bug in signaltools.py
In-Reply-To: <1044633014.1006.7.camel@mathcore2>
Message-ID:

On 7 Feb 2003, Otto Tronarp wrote:

> On Fri, 2003-02-07 at 15:24, Nils Wagner wrote:
> > Hi,
> >
> > I guess there is a bug in signaltools.py
> [snip]
> > File "/usr/local/lib/python2.1/site-packages/scipy/signal/signaltools.py", line 190
> >     order = numels//2
> >                    ^
> > SyntaxError: invalid syntax
> > >>>
> >
> > Any idea ?
> >
> > Nils
>
> I don't know if the // is a typo or if it is intentional; if it is
> intentional, you need to upgrade your python to a version that uses //
> for integer division. (2.2.1 works for me)

Scipy should work with Python 2.1 as well (until decided otherwise). So, I have fixed this bug in CVS.

I believe it is safe to use int(n/m) for future n//m, even when n/m will return true division.

Pearu

From joseoomartin at hotmail.com Sat Feb 8 19:23:10 2003
From: joseoomartin at hotmail.com (Jose Martin)
Date: Sun, 09 Feb 2003 00:23:10 +0000
Subject: [SciPy-dev] hi, there, i want to include a gabor filter routines to scipy
Message-ID:

Hi, I am Jose Antonio Martin, a researcher on computer vision. I want to include into the scipy signals package routines for calculating and performing gabor filters, which are filters based on fourier spaces and act like the neurones of the visual cortex of monkeys, if that is possible. I currently have the source code in C++ and I want to do the wrapper for Python under scipy, but I need some help to do it. I have the gcc compiler for Windows and the Borland C++ compiler. If there is somebody to help in this task, please let me know.

Thanks...

From a.schmolck at gmx.net Sun Feb 9 19:13:28 2003
From: a.schmolck at gmx.net (Alexander Schmolck)
Date: 10 Feb 2003 00:13:28 +0000
Subject: [SciPy-dev] small bug(s) in scipy.linalg
Message-ID:

I think scipy.linalg.basic has a few issues with 1D vectors vs row and column vectors (respective shapes: (n,), (1,n), (n,1)). It would seem clear to me, for example, that `norm` ought to produce exactly the same with a 1D vector as with a row vector or column vector. Currently it doesn't, and the very common case of the 2 norm for a vector is, I think, handled inefficiently.

I've created a "fixed" `norm` function according to these criteria, but before I submit a properly tested patch, I wanted to make sure that this behavior is indeed regarded as buggy. Here is my modified and completely untested version, just to give you the picture:

UNTESTED CODE

def norm(x, ord=2):
    """matrix and vector norm.

    Inputs:

      x -- a rank-1 (vector) or rank-2 (matrix) array
      ord -- the order of norm.

    Comments:

      For vectors ord can be any real number including Inf or -Inf.
      ord = Inf, computes the maximum of the magnitudes
      ord = -Inf, computes minimum of the magnitudes
      ord is finite, computes sum(abs(x)**ord)**(1.0/ord)

      For matrices ord can only be + or - 1, 2, Inf.

      ord = 2 computes the largest singular value
      ord = -2 computes the smallest singular value
      ord = 1 computes the largest row sum
      ord = -1 computes the smallest row sum
      ord = Inf computes the largest column sum
      ord = -Inf computes the smallest column sum
    """
    x = asarray(x)
    nd = len(x.shape)
    Inf = scipy_base.Inf
    if nd > 2:
        raise ValueError, "Improper number of dimensions to norm."
    if nd == 2:
        # a 'real' matrix, i.e. not a column or row vector (Nx1 or 1xN)
        if min(x.shape) != 1:
            if ord == 2:
                return scipy_base.amax(decomp.svd(x)[1])
            elif ord == -2:
                return scipy_base.amin(decomp.svd(x)[1])
            elif ord == 1:
                return scipy_base.amax(scipy_base.sum(abs(x)))
            elif ord == Inf:
                return scipy_base.amax(scipy_base.sum(abs(x),axis=1))
            elif ord == -1:
                return scipy_base.amin(scipy_base.sum(abs(x)))
            elif ord == -Inf:
                return scipy_base.amin(scipy_base.sum(abs(x),axis=1))
            else:
                raise ValueError, "Invalid norm order for matrices."
        # Nx1 or 1xN: reduce to a 1D vector
        else:
            x = ravel(x)
    # a vector
    if ord == Inf:
        return scipy_base.amax(abs(x))
    elif ord == -Inf:
        return scipy_base.amin(abs(x))
    elif ord == 2:
        return scipy_base.sqrt(dot(x, x))
    else:
        return scipy_base.sum(abs(x)**ord)**(1.0/ord)

From pearu at cens.ioc.ee Mon Feb 10 05:27:39 2003
From: pearu at cens.ioc.ee (Pearu Peterson)
Date: Mon, 10 Feb 2003 12:27:39 +0200 (EET)
Subject: [SciPy-dev] small bug(s) in scipy.linalg
In-Reply-To:
Message-ID:

On 10 Feb 2003, Alexander Schmolck wrote:

> I think scipy.linalg.basic has a few issues with 1D vectors vs row and column
> vectors (respective shapes: (n,), (1,n), (n,1)). It would seem clear to me,
> for example, that `norm` ought to produce exactly the same with a 1D vector as
> with a row vector or column vector. Currently it doesn't,

-1 for considering it as a bug.

For generality, I'd leave it to users to decide whether (1,n) or (n,1) shaped matrices should be regarded as vectors.

It is simple to call norm(ravel(x)) (even if x is (n,) shaped). However, if norm() automatically mapped x -> ravel(x) for a one row/column matrix x, as suggested, then this would make writing generic algorithms harder (check shapes, split program flow, etc.).

How about introducing a convenience function

    def vnorm(x):
        return norm(ravel(x))

?

> and the very common case of the 2 norm for a vector is, I think,
> handled inefficiently.

+1 for fixing this.

Pearu

From a.schmolck at gmx.net Mon Feb 10 10:25:59 2003
From: a.schmolck at gmx.net (Alexander Schmolck)
Date: 10 Feb 2003 15:25:59 +0000
Subject: [SciPy-dev] small bug(s) in scipy.linalg
In-Reply-To:
References:
Message-ID:

Pearu Peterson writes:

> On 10 Feb 2003, Alexander Schmolck wrote:
>
> > I think scipy.linalg.basic has a few issues with 1D vectors vs row and column
> > vectors (respective shapes: (n,), (1,n), (n,1)). It would seem clear to me,
> > for example, that `norm` ought to produce exactly the same with a 1D vector as
> > with a row vector or column vector. Currently it doesn't,
>
> -1 for considering it as a bug.
>
> For generality, I'd leave it to users to decide whether (1,n) or
> (n,1) shaped matrices should be regarded as vectors.

At first I didn't understand what you meant, but then I realized my mistake: matrix norm and vector norm are *not* the same for row and column vectors for e.g. infinity norms.
One reason why I didn't realize the obvious difference earlier is that matlab indeed treats vectors specially. So although the (matrix) infinity norm of a row vector and a column vector should be different, they aren't -- matlab's `norm` will in both cases compute the same result (the vector norm), although matrix and vector infinity norm should only be the same for column vectors. It thus appears to me that there is no way to calculate the matrix infinity norm for 1xN matrices in matlab, without doing it by hand.

This seems fairly undesirable, but having a neutrally named `norm` function that gives different results for column and row vectors also seems rather undesirable. Hence I'd propose that there should be separate functions, vector_norm and matrix_norm (or something shorter like 'vnorm' and 'mnorm'), where vector_norm only accepts arrays with shapes (n,), (1,n) and (n,1) and produces the same results for all of them, whereas matrix_norm only accepts arrays with shape (n,m) (mathematica seems to do something like this, BTW). Or is there any reason I'm missing why one can write more general code if `norm` sometimes computes the vector norm and sometimes the matrix norm, depending on the shape of its first argument?

One more thing: in the docstring for `norm`, scipy defines the matrix infinity norm as "the largest column sum" and the 1 norm as "the largest row sum". Isn't that backwards (i.e. ``row_sum = sum(matrix,1)``)? Matlab and linear algebra books (e.g. Horn & Johnson, p.295) seem to have it like this at least -- so although matlab's infinity norm of a matrix yields the same answer, the doc defines it as "the largest *row* sum".

alex

From Chuck.Harris at sdl.usu.edu Mon Feb 10 11:35:28 2003
From: Chuck.Harris at sdl.usu.edu (Chuck Harris)
Date: Mon, 10 Feb 2003 09:35:28 -0700
Subject: [SciPy-dev] small bug(s) in scipy.linalg
Message-ID: <1885D1238A4FFE40B113CEB26F87872B118351@cobra.usurf.usu.edu>

What I would like to see is the reduce method take a list of axes to run over. The n x m L1 matrix norm would then look like maximum.reduce(a,axis=[-1,-2]). I actually went so far as to start coding up my own version of Numeric to make this easy, but all that free time eventually went away. I hope it makes it into Numarray (hint,hint).

Chuck

> -----Original Message-----
> From: Pearu Peterson [mailto:pearu at cens.ioc.ee]
> Sent: Monday, February 10, 2003 3:28 AM
> To: scipy-dev at scipy.net
> Subject: Re: [SciPy-dev] small bug(s) in scipy.linalg
>
> On 10 Feb 2003, Alexander Schmolck wrote:
>
> > I think scipy.linalg.basic has a few issues with 1D vectors vs row and column
> > vectors (respective shapes: (n,), (1,n), (n,1)). It would seem clear to me,
> > for example, that `norm` ought to produce exactly the same with a 1D vector as
> > with a row vector or column vector. Currently it doesn't,
>
> -1 for considering it as a bug.
>
> For generality, I'd leave it to users to decide whether (1,n) or
> (n,1) shaped matrices should be regarded as vectors.
>
> It is simple to call norm(ravel(x)) (even if x is (n,) shaped).
> However, if norm() automatically mapped x -> ravel(x) for a one
> row/column matrix x, as suggested, then this would make writing generic
> algorithms harder (check shapes, split program flow, etc.).
>
> How about introducing a convenience function
>
>     def vnorm(x):
>         return norm(ravel(x))
>
> ?
>
> > and the very common case of the 2 norm for a vector is, I think,
> > handled inefficiently.
>
> +1 for fixing this.
>
> Pearu
>
> _______________________________________________
> Scipy-dev mailing list
> Scipy-dev at scipy.net
> http://www.scipy.net/mailman/listinfo/scipy-dev

From Chuck.Harris at sdl.usu.edu Mon Feb 10 11:59:48 2003
From: Chuck.Harris at sdl.usu.edu (Chuck Harris)
Date: Mon, 10 Feb 2003 09:59:48 -0700
Subject: [SciPy-dev] fft
Message-ID: <1885D1238A4FFE40B113CEB26F87872B118352@cobra.usurf.usu.edu>

I was playing around with the fft programs in fftpack and fftw and noticed that the real transforms return the coefficients in different formats. If rk,ik are the real and imaginary components of the k'th Fourier coefficient, then

fftpack: [r0,r1,i1,.....,rm]
    where rm is absent in odd length transforms

fftw: [r0,r1,...,i2,i1]
    where again rm is absent in odd length transforms

Besides the incompatibility of the forms, neither is of much use in Numeric, where the coefficients would preferably be exposed as an array of complex. Of course they do have the virtue that the number of (real) components is exactly equal to the number of points transformed. However, I think the best solution would be to return a complex array [r0+i0j,r1+i1j,...]. In this case i0==0 and when the number of samples is even im==0 also. This means that the length of the returned complex array is n//2 + 1, which may seem a bit inconvenient, but the result is then easy to use in Numeric for such things as convolution, amplitude, and phase.

By the way, what happened to the other fft package that was being discussed?

Chuck

From pearu at cens.ioc.ee Mon Feb 10 16:05:36 2003
From: pearu at cens.ioc.ee (Pearu Peterson)
Date: Mon, 10 Feb 2003 23:05:36 +0200 (EET)
Subject: [SciPy-dev] fft
In-Reply-To: <1885D1238A4FFE40B113CEB26F87872B118352@cobra.usurf.usu.edu>
Message-ID:

On Mon, 10 Feb 2003, Chuck Harris wrote:

> I was playing around with the fft programs in fftpack and fftw and
> noticed that the real transforms return the coefficients in different
> formats. If rk,ik are the real and imaginary components of the k'th
> Fourier coefficient, then
>
> fftpack: [r0,r1,i1,.....,rm]
>     where rm is absent in odd length transforms
>
> fftw: [r0,r1,...,i2,i1]
>     where again rm is absent in odd length transforms

There is no need to use the fftw package anymore. Current scipy.fftpack uses fftw libraries if available.

Btw, scipy.fftpack uses the same convention for real transforms that is used in fftpack from netlib.

> Besides the incompatibility of the forms, neither is of much use
> in Numeric, where the coefficients would preferably be exposed as an
> array of complex. Of course they do have the virtue that the number
> of (real) components is exactly equal to the number of points
> transformed.

There are better reasons to keep real and imaginary parts separate than the above. For instance, in pseudo-spectral methods for integrating PDEs it is essential to minimize floating point operations that, in general, are the main source of numerical noise causing possible stability problems for integrators. Provided that the functions being integrated are real valued, finding their derivatives using an FFT method that returns real/imaginary parts separately takes half as many floating point operations as an FFT that returns a generic array of complex numbers.

> However, I think the best solution would be to return
> a complex array [r0+i0j,r1+i1j,...]. In this case i0==0 and when
> the number of samples is even im==0 also.
> This means that the length of the returned complex array is n//2 + 1,
> which may seem a bit inconvenient, but the result is then easy to use
> in Numeric for such things as convolution, amplitude, and phase.

You can safely use the scipy.fftpack.fft function if a complex result is needed; just take the first half of the resulting array. Note that scipy.fftpack.fft takes into account whether the input array is real, and then a more efficient algorithm is used for calculating the FT.

> By the way, what happened to the other fft package that was being
> discussed?

What other fft package?

Pearu

From datafeed at SoftHome.net Mon Feb 10 17:32:01 2003
From: datafeed at SoftHome.net (M. Evans)
Date: Mon, 10 Feb 2003 15:32:01 -0700
Subject: [SciPy-dev] Fitz 2D cross platform graphics
Message-ID: <7912116232.20030210153201@SoftHome.net>

From the author of libart. Information sketchy, but prototyping is underway, and public source release is "close" as of December last year. So it's a bit more than vaporware, but not too much more. LGPL as I understand it - too bad!

A major emphasis seems to be elimination of X11 resolution dependencies and yes, they are using native Quartz on the Mac. I will alert him about Scipy and Chaco. He is using Python already.

http://wiki.ghostscript.com/cgi-bin/fitz?Fitz
http://www.levien.com/

Mark

From Chuck.Harris at sdl.usu.edu Mon Feb 10 17:54:29 2003
From: Chuck.Harris at sdl.usu.edu (Chuck Harris)
Date: Mon, 10 Feb 2003 15:54:29 -0700
Subject: [SciPy-dev] fft
Message-ID: <1885D1238A4FFE40B113CEB26F87872B066720@cobra.usurf.usu.edu>

> -----Original Message-----
> From: Pearu Peterson [mailto:pearu at cens.ioc.ee]
> Sent: Monday, February 10, 2003 2:06 PM
> To: scipy-dev at scipy.net
> Subject: Re: [SciPy-dev] fft
>
> On Mon, 10 Feb 2003, Chuck Harris wrote:
>
> > I was playing around with the fft programs in fftpack and fftw and
> > noticed that the real transforms return the coefficients in different
> > formats. If rk,ik are the real and imaginary components of the k'th
> > Fourier coefficient, then
> >
> > fftpack: [r0,r1,i1,.....,rm]
> >     where rm is absent in odd length transforms
> >
> > fftw: [r0,r1,...,i2,i1]
> >     where again rm is absent in odd length transforms
>
> There is no need to use the fftw package anymore. Current scipy.fftpack
> uses fftw libraries if available.
>
> Btw, scipy.fftpack uses the same convention for real transforms
> that is used in fftpack from netlib.

The main virtue of libraries is that someone else already wrote them :o) I suspect that they chose this format so they could do in-place transforms, back in the days when memory mattered.

> > Besides the incompatibility of the forms, neither is of much use
> > in Numeric, where the coefficients would preferably be exposed as an
> > array of complex. Of course they do have the virtue that the number
> > of (real) components is exactly equal to the number of points
> > transformed.
>
> There are better reasons to keep real and imaginary parts separate
> than the above. For instance, in pseudo-spectral methods for integrating
> PDEs it is essential to minimize floating point operations that, in
> general, are the main source of numerical noise causing
> possible stability problems for integrators.

coef.real, coef.imag. Same algorithm, just with rearranged output.

> > However, I think the best solution would be to return
> > a complex array [r0+i0j,r1+i1j,...]. In this case i0==0 and when
> > the number of samples is even im==0 also.
> > This means that the length of the returned complex array is n//2 + 1,
> > which may seem a bit inconvenient, but the result is then easy to use
> > in Numeric for such things as convolution, amplitude, and phase.
>
> You can safely use the scipy.fftpack.fft function if a complex result is
> needed; just take the first half of the resulting array. Note that
> scipy.fftpack.fft takes into account whether the input array is real, and
> then a more efficient algorithm is used for calculating the FT.

Sure, I wrote my first split-radix algorithm 32 years ago in Algol 68 (blush), so I'm not unfamiliar with these things. The virtue of the real transforms is that they *are* more efficient; that is why it would be nice to have the output in a compatible format.

> > By the way, what happened to the other fft package that was being
> > discussed?
>
> What other fft package?

I must be confused, but weren't you folks benchmarking another package?

Chuck

From pearu at cens.ioc.ee Mon Feb 10 19:55:59 2003
From: pearu at cens.ioc.ee (Pearu Peterson)
Date: Tue, 11 Feb 2003 02:55:59 +0200 (EET)
Subject: [SciPy-dev] fft
In-Reply-To: <1885D1238A4FFE40B113CEB26F87872B066720@cobra.usurf.usu.edu>
Message-ID:

On Mon, 10 Feb 2003, Chuck Harris wrote:

> > > By the way, what happened to the other fft package that was being
> > > discussed?
> >
> > What other fft package?
>
> I must be confused, but weren't you folks benchmarking another package?

Indeed, there was a discussion about replacing scipy.fftpack with fftpack2 about half a year ago, and it is completed by now. The current scipy.fftpack is fftpack2, and it differs from the original scipy.fftpack in that it wraps the djbfft and fftw libraries when available; otherwise the underlying implementation of the fft algorithm is fftpack from netlib.

The fact that scipy.fftpack uses the netlib/fftpack format for the real fft is almost a coincidence (there were certain license issues with the GPL'd fftw so that scipy could not use it as a default implementation of fft). I guess if scipy.fftpack had originally been based on the fftw library, maybe the format would be different now.

Personally, I don't have any preferences on the fft output format; we just have to choose one and stick to it. And the choice was made too long ago to change it again.

Pearu

From datafeed at SoftHome.net Tue Feb 11 16:58:32 2003
From: datafeed at SoftHome.net (M. Evans)
Date: Tue, 11 Feb 2003 14:58:32 -0700
Subject: [SciPy-dev] FLTK Cross-platform GUI toolkit
Message-ID: <12611832754.20030211145832@SoftHome.net>

Scipy pays much attention to wxWindows and none to FLTK, which is probably a better back end, in that it does its own drawing AND has tight integration with OpenGL. In fact it can run inside an OpenGL window (like GLUT) but is not constrained to do so, running equally well in Win32 or X or Quartz.

http://www.FLTK.org

Mark

From datafeed at SoftHome.net Tue Feb 11 19:50:32 2003
From: datafeed at SoftHome.net (M. Evans)
Date: Tue, 11 Feb 2003 17:50:32 -0700
Subject: [SciPy-dev] Lush
Message-ID: <2722152894.20030211175032@SoftHome.net>

Sounds like weave; maybe some good ideas in here.

http://lush.sourceforge.net/

Mark

From datafeed at SoftHome.net Wed Feb 12 15:48:37 2003
From: datafeed at SoftHome.net (M. Evans)
Date: Wed, 12 Feb 2003 13:48:37 -0700
Subject: [SciPy-dev] Chaco for drawing cross platform widgets?
Message-ID: <16711958835.20030212134837@SoftHome.net>

Chaco presently targets other widget toolkits and focuses on plotting.
Has any consideration been given to using Chaco itself to create a cross-platform widget toolkit? Or is that not feasible? I'm not asking "are you going to do it" but rather "does anything in the design of Chaco prevent it from being done by someone else."

Tor (part of the Fitz team) is interested in using a DisplayPDF-like system to create a fully anti-aliased widget toolkit with alternative language bindings.

http://www.df.lth.se/~mazirian/wiki.cgi/AeKit

Mark

From fperez at pizero.colorado.edu Fri Feb 14 22:51:21 2003
From: fperez at pizero.colorado.edu (Fernando Perez)
Date: Fri, 14 Feb 2003 20:51:21 -0700 (MST)
Subject: [SciPy-dev] Bug in Mplot.py
Message-ID:

Hi all,

In [16]: xplt.imagesc_cb(K,palette=array([0,1]))
---------------------------------------------------------------------------
NameError                        Traceback (most recent call last)

?

/usr/lib/python2.2/site-packages/scipy/xplt/Mplot.py in imagesc_cb(z, cmin, cmax, xryr, _style, zlabel, font, fontsize, color, palette)
    928         cmin = float(cmin)
    929
--> 930     change_palette(palette)
    931
    932     byteimage = gist.bytscl(z,cmin=cmin,cmax=cmax)

/usr/lib/python2.2/site-packages/scipy/xplt/Mplot.py in change_palette(pal)
    603     else:
    604         data = Numeric.asarray(pal)
--> 605         write_palette('/tmp/_temp.gp',data)
    606         gist.palette('/tmp/_temp.gp')
    607

/usr/lib/python2.2/site-packages/scipy/xplt/Mplot.py in write_palette(tofile, pal)
    549     palsize = pal.shape
    550     if len(palsize) == 1:
--> 551         pal = multiply.outer(pal,ones((3,),pal.typecode()))
    552         palsize = pal.shape
    553     if not (palsize[1] == 3 or palsize[0] == 3):

NameError: global name 'multiply' is not defined

It seems that this code was written thinking of a from Numeric import * and later was changed without testing. I added 'multiply,ones' to the from Numeric import ... line and it fixed the crashes I saw, but there may be other problems lurking. A pychecker run on all the scipy codes would probably be a good idea...

Cheers,

f.

From oliphant.travis at ieee.org Tue Feb 18 00:11:12 2003
From: oliphant.travis at ieee.org (Travis Oliphant)
Date: 17 Feb 2003 22:11:12 -0700
Subject: [SciPy-dev] Bug in Mplot.py
In-Reply-To:
References:
Message-ID: <1045545073.9842.1.camel@travis.local.net>

> In [16]: xplt.imagesc_cb(K,palette=array([0,1]))
> ---------------------------------------------------------------------------
> NameError                        Traceback (most recent call last)
>
> ?
>
> /usr/lib/python2.2/site-packages/scipy/xplt/Mplot.py in imagesc_cb(z, cmin, cmax, xryr, _style, zlabel, font, fontsize, color, palette)
>     928         cmin = float(cmin)
>     929
> --> 930     change_palette(palette)
>     931
>     932     byteimage = gist.bytscl(z,cmin=cmin,cmax=cmax)
>
> /usr/lib/python2.2/site-packages/scipy/xplt/Mplot.py in change_palette(pal)
>     603     else:
>     604         data = Numeric.asarray(pal)
> --> 605         write_palette('/tmp/_temp.gp',data)
>     606         gist.palette('/tmp/_temp.gp')
>     607
>
> /usr/lib/python2.2/site-packages/scipy/xplt/Mplot.py in write_palette(tofile, pal)
>     549     palsize = pal.shape
>     550     if len(palsize) == 1:
> --> 551         pal = multiply.outer(pal,ones((3,),pal.typecode()))
>     552         palsize = pal.shape
>     553     if not (palsize[1] == 3 or palsize[0] == 3):
>
> NameError: global name 'multiply' is not defined

Thanks for this help.

> It seems that this code was written thinking of a from Numeric import * and
> later was changed without testing. I added 'multiply,ones' to the from Numeric
> import ... line and it fixed the crashes I saw, but there may be other
> problems lurking. A pychecker run on all the scipy codes would probably be a
> good idea...
Yes, I agree, I've tried this in the past but pychecker returns so many spurious errors that it can be very time consuming to sort through the valid ones.

If anyone knows more about pychecker and could turn down the error reporting a bit and would like to post the errors it finds, that would be great.

-Travis O.

From baecker at physik.tu-dresden.de Tue Feb 18 07:37:36 2003
From: baecker at physik.tu-dresden.de (baecker at physik.tu-dresden.de)
Date: Tue, 18 Feb 2003 13:37:36 +0100 (CET)
Subject: [SciPy-dev] segmentation fault
Message-ID:

Hi,

with the current CVS version I get for scipy.test(10) after a while

"[...]
function f6
cc.bisect : 0.480
cc.ridder : 0.560
cc.brenth : 0.520
cc.brentq : 0.530
..zsh: 4347 segmentation fault  python
"

(Several lines before it says "TESTING SPEED".)

I managed to find the routine which produces the above output (namely: scipy.optimize.test(), apart from the segmentation fault ..., so that one must occur afterwards.)

My question is: How can I find out which of the routines being tested leads to the segmentation fault? For example, is there a way to make the testing more verbose in telling me what it is just about to test? (I tried to figure out which packages are included in the unit test, but got lost quickly ...)

((In any case the reason for this fault has to do with my installation/setup, as on a different machine scipy just installs fine ...))

Many thanks,

Arnd

From ymoisan at smnetcom.com Wed Feb 19 14:10:46 2003
From: ymoisan at smnetcom.com (Yves Moisan)
Date: Wed, 19 Feb 2003 14:10:46 -0500
Subject: [SciPy-dev] Unable to install SciPy 0.2.0
Message-ID:

Hi,

I have downloaded SciPy and can't install it on my Win 2K system. I have Python 2.2 in C:\Python22 but I want to stuff it with a Python 2.1 that comes with Zope. I don't know if it is the potential conflict between the two Python versions (or the fact that I have no Python environment variables?), but the 2nd window of the installer (see attached) does not let me specify a Python implementation.

What do I do ?

Thanx,

Yves Moisan

-------------- next part --------------
A non-text attachment was scrubbed...
Name: SciPy-0.2.0.gif
Type: image/gif
Size: 15950 bytes
Desc: not available
URL:

From DavidA at ActiveState.com Wed Feb 19 14:18:35 2003
From: DavidA at ActiveState.com (David Ascher)
Date: Wed, 19 Feb 2003 11:18:35 -0800
Subject: [SciPy-dev] Unable to install SciPy 0.2.0
References:
Message-ID: <3E53D88B.9000402@ActiveState.com>

Yves Moisan wrote:

>Hi,
>
>I have downloaded SciPy and can't install it on my Win 2K system. I have
>Python 2.2 in C:\Python22 but I want to stuff it with a Python 2.1 that
>comes with Zope. I don't know if it is the potential conflict between the
>two Python versions (or the fact that I have no Python environment
>variables?), but the 2nd window of the installer (see attached) does not let
>me specify a Python implementation.
>
>What do I do ?

Extension modules are dependent on particular installations. In other words, you can't install an extension module built for Python 2.2 on a Python 2.1 system.

--david

From nwagner at mecha.uni-stuttgart.de Fri Feb 21 12:22:23 2003
From: nwagner at mecha.uni-stuttgart.de (Nils Wagner)
Date: Fri, 21 Feb 2003 18:22:23 +0100
Subject: [SciPy-dev] [Fwd: [SciPy-user] ValueError: I/O operation on closed file]
Message-ID: <3E56604F.ABD0EBDC@mecha.uni-stuttgart.de>

-------------- next part --------------
An embedded message was scrubbed...
From: Nils Wagner
Subject: [SciPy-user] ValueError: I/O operation on closed file
Date: Thu, 20 Feb 2003 14:52:43 +0100
Size: 3125
URL:

From pearu at cens.ioc.ee Tue Feb 25 07:29:15 2003
From: pearu at cens.ioc.ee (Pearu Peterson)
Date: Tue, 25 Feb 2003 14:29:15 +0200 (EET)
Subject: [SciPy-dev] Prebuilt ATLAS libraries available
Message-ID:

Hi!

An initial list of prebuilt ATLAS libraries for various platforms (currently only for unices, though) with various C compiler combinations is available at

http://www.scipy.org/site_content/downloads/atlas_binaries

This list will be used for regression testing of scipy but it can be useful for those users who choose not to build ATLAS libraries themselves. And the list will probably grow in the future..

Regards,
Pearu

From eric at enthought.com Tue Feb 25 12:58:05 2003
From: eric at enthought.com (eric jones)
Date: Tue, 25 Feb 2003 11:58:05 -0600
Subject: [SciPy-dev] Prebuilt ATLAS libraries available
In-Reply-To:
Message-ID: <001101c2dcf7$7566a810$8901a8c0@ERICDESKTOP>

Very Cool. This will hopefully ease the build process for a number of people.

Thanks,
Eric

----------------------------------------------
eric jones                515 Congress Ave
www.enthought.com         Suite 1614
512 536-1057              Austin, Tx 78701

> -----Original Message-----
> From: scipy-dev-admin at scipy.net [mailto:scipy-dev-admin at scipy.net] On Behalf Of Pearu Peterson
> Sent: Tuesday, February 25, 2003 6:29 AM
> To: scipy-dev at scipy.org
> Subject: [SciPy-dev] Prebuilt ATLAS libraries available
>
> Hi!
>
> An initial list of prebuilt ATLAS libraries for various platforms
> (currently only for unices, though) with various C compiler combinations
> is available at
>
> http://www.scipy.org/site_content/downloads/atlas_binaries
>
> This list will be used for regression testing of scipy but it can be
> useful for those users who choose not to build ATLAS libraries themselves.
> And the list will probably grow in the future..
>
> Regards,
> Pearu
>
> _______________________________________________
> Scipy-dev mailing list
> Scipy-dev at scipy.net
> http://www.scipy.net/mailman/listinfo/scipy-dev

From a.schmolck at gmx.net Tue Feb 25 14:32:08 2003
From: a.schmolck at gmx.net (Alexander Schmolck)
Date: 25 Feb 2003 19:32:08 +0000
Subject: [SciPy-dev] Prebuilt ATLAS libraries available
In-Reply-To:
References:
Message-ID:

Pearu Peterson writes:

> Hi!
>
> An initial list of prebuilt ATLAS libraries for various platforms
> (currently only for unices, though) with various C compiler combinations
> is available at
>
> http://www.scipy.org/site_content/downloads/atlas_binaries
>
> This list will be used for regression testing of scipy but it can be
> useful for those users who choose not to build ATLAS libraries themselves.
> And the list will probably grow in the future..

Wouldn't getting them into http://www.netlib.org/atlas/archives also be a good idea?

alex

From tjlahey at eon.uwaterloo.ca Wed Feb 26 15:49:45 2003
From: tjlahey at eon.uwaterloo.ca (tjlahey)
Date: Wed, 26 Feb 2003 15:49:45 -0500 (EST)
Subject: [SciPy-dev] Error compiling on Sun Solaris
Message-ID: <200302262049.h1QKnjR29792@eon.uwaterloo.ca>

Hi,

I've just tried compiling from CVS, and I ran into the following problem. The Sun compiler uses CC to compile C++, and cc to compile C code. Scipy uses cc to compile all the C and C++ code. There only appears to be one C++ file, vq_wrap.cpp, and the cc compiler chokes on it. Hand compiling with CC works and restarting the compile process appears to work.

Cheers,

Tim Lahey

From prabhu at aero.iitm.ernet.in Fri Feb 28 03:02:48 2003
From: prabhu at aero.iitm.ernet.in (Prabhu Ramachandran)
Date: Fri, 28 Feb 2003 13:32:48 +0530
Subject: [SciPy-dev] [ANN] imv.py: interactive 3D plots with MayaVi.
Message-ID: <15967.6056.108550.473693@monster.linux.in>

hi,

imv.py is a module that lets you sample surfaces and arrays from the Python interpreter via convenient one line functions. It requires MayaVi (version 1.2) to be installed and running as a Python module, i.e. the binary installs will not be able to use this.

imv provides three useful functions:

surf -- Creates a surface given regularly spaced values of x, y and the corresponding z as arrays. Also works if z is a function.

view -- Allows one to view a 2D Numeric array. Note that the view will be set to the way we normally think of matrices, with 0, 0 at the top left of the screen. Works best for smaller arrays (size < 512x512).

viewi -- Allows one to view a 2D Numeric array as an image. This works best for very large arrays (like 1024x1024 or larger arrays).

The code is well documented and has an illustrative example in the function 'main' at the end of the file. To see the example simply run imv.py. The module is available here:

http://www.ae.iitm.ac.in/~prabhu/software/mayavi.html
or
http://www.aero.iitm.ernet.in/~prabhu/software/mayavi.html

Additionally, the module has some code that lets you easily convert Numeric Arrays into vtkDataArrays. So if you are interested in doing that you might want to take a look at the code.

Have fun!

prabhu

p.s. Sorry about the cross-posting.

From weimin.shi at intel.com Fri Feb 28 13:54:54 2003
From: weimin.shi at intel.com (Shi, Weimin)
Date: Fri, 28 Feb 2003 10:54:54 -0800
Subject: [SciPy-dev] Question on GA lab in Python
Message-ID:

Hi, Eric,

I am a new user of Python and am trying to use the GA to do some design experiments, but could not even get the example in ga to work. Could you please tell me where I can get help? (It is on WinXP.)

Thanks,

Weimin Shi
Platform Tech Operation, DPG, Intel Corp.
(503) 712-2790
(503) 712-2823 (Fax)

-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From oliphant.travis at ieee.org Fri Feb 28 15:21:17 2003
From: oliphant.travis at ieee.org (Travis Oliphant)
Date: 28 Feb 2003 13:21:17 -0700
Subject: [SciPy-dev] Problem in linalg functions with overwrite check
Message-ID: <1046463678.7699.15.camel@travis.local.net>

I've found a potential problem in the linear algebra functions related to the overwrite_a parameter.

Many routines alter the overwrite_a parameter after conversion of the input object to an array. The check is not very robust, however, when the input is an object that uses an array as its main core (like a Matrix object).

For example, the code looks something like this:
    a1 = asarray(a)   # convert a to an array object
    overwrite_a = overwrite_a or (a1 is not a)

So, if a was not already an array, then overwrite_a is modified to overwrite the a1 array (which is presumed to be a copy anyway).

The problem with this strategy arises when a is an object that has an __array__ method. The asarray function will call it to obtain the object a as an array. Often, this call does not return a new copy of the data but just a reference to the array forming the underlying storage of a. So, while a and a1 are different objects and so a1 is not a, they use the same storage space, so overwriting a1 changes the object a, leading to strange, unexpected behavior.

I got caught while passing Matrix objects to the linalg.det routine and getting different answers every time the function ran (because the underlying matrix was being changed every time).

I'm asking for suggestions on how to fix this.

We could, for example, try to call the __array__ method on our own and then, if it succeeds, never alter the overwrite_a parameter. Or, more elegantly perhaps, we could try to write some function share_storage(a1,a) --- maybe partly in C --- that tries to determine whether or not a1 and a actually share the same storage.

Comments welcome.

-Travis Oliphant

From pearu at cens.ioc.ee Fri Feb 28 16:18:01 2003
From: pearu at cens.ioc.ee (Pearu Peterson)
Date: Fri, 28 Feb 2003 23:18:01 +0200 (EET)
Subject: [SciPy-dev] Problem in linalg functions with overwrite check
In-Reply-To: <1046463678.7699.15.camel@travis.local.net>
Message-ID:

On 28 Feb 2003, Travis Oliphant wrote:

> I've found a potential problem in the linear algebra functions related
> to the overwrite_a parameter.

I agree that what follows is a problem.

> Many routines alter the overwrite_a parameter after conversion of the
> input object to an array. The check is not very robust, however, when the
> input is an object that uses an array as its main core (like a Matrix
> object).
>
> For example, the code looks something like this:
>
>     a1 = asarray(a)   # convert a to an array object
>     overwrite_a = overwrite_a or (a1 is not a)
>
> So, if a was not already an array, then overwrite_a is modified to
> overwrite the a1 array (which is presumed to be a copy anyway).
>
> The problem with this strategy arises when a is an object that has an
> __array__ method. The asarray function will call it to obtain the
> object a as an array. Often, this call does not return a new copy of
> the data but just a reference to the array forming the underlying
> storage of a. So, while a and a1 are different objects and so a1 is not
> a, they use the same storage space, so overwriting a1 changes the object
> a, leading to strange, unexpected behavior.
>
> I got caught while passing Matrix objects to the linalg.det routine and
> getting different answers every time the function ran (because the
> underlying matrix was being changed every time).
>
> I'm asking for suggestions on how to fix this.
>
> We could, for example, try to call the __array__ method on our own and
> then, if it succeeds, never alter the overwrite_a parameter. Or, more
> elegantly perhaps, we could try to write some function
> share_storage(a1,a) --- maybe partly in C --- that tries to determine
> whether or not a1 and a actually share the same storage.

Correct me if I am wrong, but share_storage(a1,a) will always try to call a.__array__ and that will return exactly a1 if successful.
Consequently, share_storage(a1,a) returns True whenever a has an __array__ method (assuming that subsequent calls to a.__array__ will return identical objects). So, share_storage(a1,a) boils down to checking hasattr(a,'__array__').

This analysis suggests the following fix:

    if not hasattr(a,'__array__'):
        overwrite_a = overwrite_a or (a1 is not a)

or an equivalent, but more efficient, one (saves a few expensive attribute lookups):

    overwrite_a = overwrite_a \
                  or (a1 is not a and not hasattr(a,'__array__'))

Pearu

From eric at enthought.com Fri Feb 28 18:17:59 2003
From: eric at enthought.com (eric jones)
Date: Fri, 28 Feb 2003 17:17:59 -0600
Subject: [SciPy-dev] RE: [Scipy-cvs] world/scipy/scipy_distutils mingw32_support.py,1.9,1.10
In-Reply-To: <20030228233317.599BF3EB0C@www.scipy.com>
Message-ID: <000001c2df7f$a5210340$8901a8c0@ERICDESKTOP>

Looks like this fixed things. The vq build is dying on my desktop which has mingw gcc 3.2 on it. It is the dllwrap call that is failing. It needs

    dllwrap.exe --driver-name g++ ...

specified so that it links in the correct C++ libraries. How was this working before? Maybe 2.95.3 didn't have the problem??

Anyway, I'll build the vq version by hand for the moment.

Thanks for the fix,
eric

----------------------------------------------
eric jones                515 Congress Ave
www.enthought.com         Suite 1614
512 536-1057              Austin, Tx 78701

> -----Original Message-----
> From: scipy-cvs-admin at scipy.org [mailto:scipy-cvs-admin at scipy.org] On Behalf Of pearu at scipy.org
> Sent: Friday, February 28, 2003 5:33 PM
> To: scipy-cvs at scipy.org
> Subject: [Scipy-cvs] world/scipy/scipy_distutils mingw32_support.py,1.9,1.10
>
> Update of /home/cvsroot/world/scipy/scipy_distutils
> In directory scipy.org:/tmp/cvs-serv23974
>
> Modified Files:
>     mingw32_support.py
> Log Message:
> Enabled use_gcc,g77 again in mingw32_support.py as calling them in
> build_flib may be too late. Not sure if this was the cause of building
> failures on win32..
>
> Index: mingw32_support.py
> ===================================================================
> RCS file: /home/cvsroot/world/scipy/scipy_distutils/mingw32_support.py,v
> retrieving revision 1.9
> retrieving revision 1.10
> diff -C2 -d -r1.9 -r1.10
> *** mingw32_support.py  27 Feb 2003 16:21:00 -0000  1.9
> --- mingw32_support.py  28 Feb 2003 23:33:15 -0000  1.10
> ***************
> *** 68,72 ****
>   #        raise DistutilsPlatformError, msg
>
> ! if 0:
>       # See build_flib.finalize_options method in build_flib.py
>       # where set_windows_compiler is called with proper
> --- 68,72 ----
>   #        raise DistutilsPlatformError, msg
>
> ! if 1:
>       # See build_flib.finalize_options method in build_flib.py
>       # where set_windows_compiler is called with proper
>
> _______________________________________________
> Scipy-cvs mailing list
> Scipy-cvs at scipy.org
> http://scipy.net/mailman/listinfo/scipy-cvs

From eric at enthought.com Fri Feb 28 18:24:32 2003
From: eric at enthought.com (eric jones)
Date: Fri, 28 Feb 2003 17:24:32 -0600
Subject: [SciPy-dev] RE: [Scipy-cvs] world/scipy/scipy_distutils mingw32_support.py,1.9,1.10
In-Reply-To: <000001c2df7f$a5210340$8901a8c0@ERICDESKTOP>
Message-ID: <000101c2df80$8f88a460$8901a8c0@ERICDESKTOP>

Just checked on a machine with 2.95.3. The build worked fine. So it is only the newest mingw that has the build hiccup with C++ modules.
eric

----------------------------------------------
eric jones                515 Congress Ave
www.enthought.com         Suite 1614
512 536-1057              Austin, Tx 78701

> -----Original Message-----
> From: scipy-dev-admin at scipy.net [mailto:scipy-dev-admin at scipy.net] On Behalf Of eric jones
> Sent: Friday, February 28, 2003 5:18 PM
> To: scipy-dev at scipy.org
> Subject: [SciPy-dev] RE: [Scipy-cvs] world/scipy/scipy_distutils mingw32_support.py,1.9,1.10
>
> Looks like this fixed things. The vq build is dying on my desktop which
> has mingw gcc 3.2 on it. It is the dllwrap call that is failing. It needs
>
>     dllwrap.exe --driver-name g++ ...
>
> specified so that it links in the correct C++ libraries. How was this
> working before? Maybe 2.95.3 didn't have the problem??
>
> Anyway, I'll build the vq version by hand for the moment.
>
> Thanks for the fix,
> eric
>
> ----------------------------------------------
> eric jones 515 Congress Ave
> www.enthought.com Suite 1614
> 512 536-1057 Austin, Tx 78701
>
> > -----Original Message-----
> > From: scipy-cvs-admin at scipy.org [mailto:scipy-cvs-admin at scipy.org] On Behalf Of pearu at scipy.org
> > Sent: Friday, February 28, 2003 5:33 PM
> > To: scipy-cvs at scipy.org
> > Subject: [Scipy-cvs] world/scipy/scipy_distutils mingw32_support.py,1.9,1.10
> >
> > Update of /home/cvsroot/world/scipy/scipy_distutils
> > In directory scipy.org:/tmp/cvs-serv23974
> >
> > Modified Files:
> >     mingw32_support.py
> > Log Message:
> > Enabled use_gcc,g77 again in mingw32_support.py as calling them in
> > build_flib may be too late. Not sure if this was the cause of building
> > failures on win32..
> >
> > Index: mingw32_support.py
> > ===================================================================
> > RCS file: /home/cvsroot/world/scipy/scipy_distutils/mingw32_support.py,v
> > retrieving revision 1.9
> > retrieving revision 1.10
> > diff -C2 -d -r1.9 -r1.10
> > *** mingw32_support.py  27 Feb 2003 16:21:00 -0000  1.9
> > --- mingw32_support.py  28 Feb 2003 23:33:15 -0000  1.10
> > ***************
> > *** 68,72 ****
> >   #        raise DistutilsPlatformError, msg
> >
> > ! if 0:
> >       # See build_flib.finalize_options method in build_flib.py
> >       # where set_windows_compiler is called with proper
> > --- 68,72 ----
> >   #        raise DistutilsPlatformError, msg
> >
> > ! if 1:
> >       # See build_flib.finalize_options method in build_flib.py
> >       # where set_windows_compiler is called with proper
> >
> > _______________________________________________
> > Scipy-cvs mailing list
> > Scipy-cvs at scipy.org
> > http://scipy.net/mailman/listinfo/scipy-cvs
>
> _______________________________________________
> Scipy-dev mailing list
> Scipy-dev at scipy.net
> http://www.scipy.net/mailman/listinfo/scipy-dev
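
On the dllwrap problem above: one way to get the '--driver-name g++' behaviour without patching each build by hand is to point distutils at a differently configured compiler. The class below is a sketch, not the actual scipy_distutils fix; the extra '-mdll -static' flags are assumptions carried over from the default mingw32 link line.

    from distutils.cygwinccompiler import Mingw32CCompiler

    # Hedged sketch: a mingw32 compiler variant whose shared-object link
    # step drives dllwrap through g++ so the C++ runtime gets linked in.
    class Mingw32CxxLinker(Mingw32CCompiler):
        def __init__(self, verbose=0, dry_run=0, force=0):
            Mingw32CCompiler.__init__(self, verbose, dry_run, force)
            self.set_executables(
                linker_so='dllwrap --driver-name g++ -mdll -static')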