From robert.kern at gmail.com Sat Jul 1 00:54:24 2006 From: robert.kern at gmail.com (Robert Kern) Date: Fri, 30 Jun 2006 23:54:24 -0500 Subject: [SciPy-user] Mail lists for Trac tickets and SVN checkins Message-ID: <44A60000.9070602@gmail.com> We now have mailing lists set up to receive notifications of changes to Trac tickets and SVN checkins for both NumPy and SciPy. We do not have Gmane gateways for them, yet. http://www.scipy.org/Mailing_Lists -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From f.braennstroem at gmx.de Sat Jul 1 03:38:59 2006 From: f.braennstroem at gmx.de (Fabian Braennstroem) Date: Sat, 1 Jul 2006 09:38:59 +0200 Subject: [SciPy-user] Modelica and Python References: <20060626182311.44915.qmail@web60016.mail.yahoo.com> Message-ID: Hi Srijit, * Srijit Kumar Bhadra wrote: > Hello, > What are the possible options to use numpy and scipy with Modelica? That sounds interesting to me too ... did you find out anything? Greetings! Fabian From schofield at ftw.at Sat Jul 1 07:05:50 2006 From: schofield at ftw.at (Ed Schofield) Date: Sat, 1 Jul 2006 13:05:50 +0200 Subject: [SciPy-user] Struggling to make use of sparse In-Reply-To: <98AA9E629A145D49A1A268FA6DBA70B424902C@mmihserver01.MMIH01.local> References: <98AA9E629A145D49A1A268FA6DBA70B424902C@mmihserver01.MMIH01.local> Message-ID: <9894611B-A63B-4D02-9615-BCF9EB77CE96@ftw.at> Hi William, On 30/06/2006, at 2:40 PM, William Hunter wrote: > Old Matlab user here, I need some help on using 'sparse'. Not a lot > of documentation on it, and I suck at programming, so there you go... > > > I have a (sparse) array [K] and vector {F}. I need to solve for > {U}. 
If these were 'normal' matrices, I would do the following: > >>> import numpy as N > >>> U = N.linalg.solve(K,F) > > I know how to get the matrices (both K and F) in sparse format with > 'lil_matrix', but I get an error if I try the following: > >>> import scipy.spare as SS > >>> U = SS.sparse.solve(K,F) > > What am I doing wrong? Somebody who's done FEA type stuff will be > able to answer. Try making F a dense 1d array; then try >>> from scipy import linsolve >>> U = linsolve.spsolve(K, F) I'll try to write some tutorials on the sparse and maxentropy modules in the next few weeks ... -- Ed -------------- next part -------------- An HTML attachment was scrubbed... URL: From William.Hunter at mmhgroup.com Mon Jul 3 05:05:04 2006 From: William.Hunter at mmhgroup.com (William Hunter) Date: Mon, 3 Jul 2006 11:05:04 +0200 Subject: [SciPy-user] MATLAB code faster than Python! Message-ID: <98AA9E629A145D49A1A268FA6DBA70B4249033@mmihserver01.MMIH01.local> Yes, it's true. Using MATLAB's and IPython's profilers there is at least some agreement of where the code is consuming time. What I find strange is the type of operation that's causing this. Below is the "culprit line" (MATLAB): K(edof,edof) = K(edof,edof) + x(ely,elx)^penal*Kel; In Python I wrote this as: K[ix_(edof,edof)] += x[ely,elx]**penal*Kel Both lines are nested inside two loops. How is it possible that MATLAB is faster (by a factor of at least 2.25), or am I naive to think it will be otherwise?! The profiler also reports a function 0:(dgesv) that takes up a lot of time, and I'm certain it is connected to my "culprit line" in some way... Can it be that loops are slow in Python, if so, what else can I do? Unfortunately I HAVE to use a loop as it updates an array, i.e., it's not possible to make use of a static one. Any suggestions? William -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From robert.kern at gmail.com Mon Jul 3 17:45:42 2006 From: robert.kern at gmail.com (Robert Kern) Date: Mon, 03 Jul 2006 16:45:42 -0500 Subject: [SciPy-user] MATLAB code faster than Python! In-Reply-To: <98AA9E629A145D49A1A268FA6DBA70B4249033@mmihserver01.MMIH01.local> References: <98AA9E629A145D49A1A268FA6DBA70B4249033@mmihserver01.MMIH01.local> Message-ID: <44A99006.1080805@gmail.com> William Hunter wrote: > Yes, it's true. Using MATLAB's and IPython's profilers there is > at least some agreement > of where the code is consuming time. What I find strange is the type of > operation that's causing this. > > Below is the "culprit line" (MATLAB): > K(edof,edof) = K(edof,edof) + x(ely,elx)^penal*Kel; > > In Python I wrote this as: > K[ix_(edof,edof)] += x[ely,elx]**penal*Kel > > Both lines are nested inside two loops. How is it possible that > MATLAB is faster (by a factor > of at least 2.25), or am I naive to think it will be otherwise?! You should not use profiled times to compare implementations. You should only use them to identify hotspots in your code. Profiling involves instrumenting your code, and that affects the timing. Generally, relationships between times of functions A and B called in the same profile run will be the same as if there were no profiling. However, A and A' (the same as A but in a different language using a different profiling harness) won't be. Without knowing any of the context (the size of K, what edof, el[xy], penal, and Kel are, how many times you are executing this line, etc.), I have no idea whether Matlab is actually faster or how to improve the Python code. > The profiler also reports a function 0:(dgesv) that takes up a lot of time, > and I'm certain it is connected > to my "culprit line" in some way... dgesv is a LAPACK routine that solves a general system of linear equations via LU factorization. It is called by linalg.solve(). If the size of the input is substantial, it is reasonable that this might take up a chunk of time. 
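(Robert's advice — benchmark the isolated statement rather than compare profiler output across languages — can be sketched with `timeit`. The sizes of K, Kel and edof below are invented for illustration, since the thread never states the real dimensions.)

```python
import timeit
import numpy as np

# Invented sizes -- the thread never gives the real dimensions of K or edof.
n, m = 100, 8
K = np.zeros((n, n))
Kel = np.ones((m, m))       # hypothetical element stiffness matrix
edof = np.arange(m)         # hypothetical element DOF indices
penal = 3.0
x_el = 0.5                  # stands in for x[ely, elx]

def culprit():
    # The "culprit line" under discussion, isolated from its surrounding loops.
    K[np.ix_(edof, edof)] += x_el ** penal * Kel

t = timeit.timeit(culprit, number=1000)
print("1000 calls took %.4f s" % t)
```

Timing the equivalent isolated statement in MATLAB (tic/toc around a comparable loop) then yields numbers that can be compared directly, without either profiler's overhead.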
> Can it be that loops are slow in Python, if so, what else can I do? > Unfortunately I HAVE to use a loop as it updates an array, i.e., it's > not possible to make use of a static one. Well, we can't really do anything for you. You haven't shown us enough code to make sense of your results much less to offer specific suggestions. However, in general terms, you should break out the pieces of code that you think are problematic into separate benchmarks. -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From elcorto at gmx.net Mon Jul 3 19:48:22 2006 From: elcorto at gmx.net (Steve Schmerler) Date: Tue, 04 Jul 2006 01:48:22 +0200 Subject: [SciPy-user] ipython help() doesn't work Message-ID: <44A9ACC6.5010001@gmx.net> After installing ipython 0.7.2 from debian testing I get the following error when saying help(), while ? works. help() works in the normal python shell. 
In [7]: help(sys) --------------------------------------------------------------------------- exceptions.TypeError Traceback (most recent call last) /usr/lib/python2.3/site-packages/ /usr/lib/python2.3/site.py in __call__(self, *args, **kwds) 317 def __call__(self, *args, **kwds): 318 import pydoc --> 319 return pydoc.help(*args, **kwds) 320 321 __builtin__.help = _Helper() /usr/lib/python2.3/pydoc.py in __call__(self, request) 1546 def __call__(self, request=None): 1547 if request is not None: -> 1548 self.help(request) 1549 else: 1550 self.intro() /usr/lib/python2.3/pydoc.py in help(self, request) 1582 elif request: doc(request, 'Help on %s:') 1583 elif isinstance(request, Helper): self() -> 1584 else: doc(request, 'Help on %s:') 1585 self.output.write('\n') 1586 /usr/lib/python2.3/pydoc.py in doc(thing, title, forceload) 1373 elif module and module is not object: 1374 desc += ' in module ' + module.__name__ -> 1375 pager(title % desc + '\n\n' + text.document(object, name)) 1376 except (ImportError, ErrorDuringImport), value: 1377 print value /usr/lib/python2.3/pydoc.py in document(self, object, name, *args) 281 # by lacking a __name__ attribute) and an instance. 
282 try: --> 283 if inspect.ismodule(object): return self.docmodule(*args) 284 if inspect.isclass(object): return self.docclass(*args) 285 if inspect.isroutine(object): return self.docroutine(*args) /usr/lib/python2.3/pydoc.py in docmodule(self, object, name, mod) 961 funcs = [] 962 for key, value in inspect.getmembers(object, inspect.isroutine): --> 963 if inspect.isbuiltin(value) or inspect.getmodule(value) is object: 964 if visiblename(key): 965 funcs.append((key, value)) /usr/lib/python2.3/inspect.py in getmodule(object) 380 for module in sys.modules.values(): 381 if hasattr(module, '__file__'): --> 382 modulesbyfile[getabsfile(module)] = module.__name__ 383 if file in modulesbyfile: 384 return sys.modules.get(modulesbyfile[file]) /usr/lib/python2.3/inspect.py in getabsfile(object) 361 The idea is for each object to have a unique origin, so this routine 362 normalizes the result as much as possible.""" --> 363 return os.path.normcase( 364 os.path.abspath(getsourcefile(object) or getfile(object))) 365 /usr/lib/python2.3/inspect.py in getsourcefile(object) 346 def getsourcefile(object): 347 """Return the Python source file an object was defined in, if it exists.""" --> 348 filename = getfile(object) 349 if string.lower(filename[-4:]) in ['.pyc', '.pyo']: 350 filename = filename[:-4] + '.py' /usr/lib/python2.3/inspect.py in getfile(object) 326 if iscode(object): 327 return object.co_filename --> 328 raise TypeError('arg is not a module, class, method, ' 329 'function, traceback, frame, or code object') 330 TypeError: arg is not a module, class, method, function, traceback, frame, or code object cheers, steve -- Random number generation is the art of producing pure gibberish as quickly as possible. 
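(The TypeError at the bottom of that traceback comes from `inspect.getfile()`, which rejects any object that is not a module, class, method, function, traceback, frame, or code object. A minimal reproduction of just that final failure, in modern Python syntax and independent of the pydoc bug itself:)

```python
import inspect

class Plain(object):
    pass

obj = Plain()
try:
    # A plain instance is none of the kinds getfile() accepts,
    # so this raises the same TypeError seen at the bottom of the traceback.
    inspect.getfile(obj)
    raised = False
except TypeError as exc:
    raised = True
    print("TypeError:", exc)

# Real modules work fine:
print(inspect.getfile(inspect))
```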
From fperez.net at gmail.com Mon Jul 3 20:58:04 2006 From: fperez.net at gmail.com (Fernando Perez) Date: Mon, 3 Jul 2006 18:58:04 -0600 Subject: [SciPy-user] ipython help() doesn't work In-Reply-To: <44A9ACC6.5010001@gmx.net> References: <44A9ACC6.5010001@gmx.net> Message-ID: On 7/3/06, Steve Schmerler wrote: > After installing ipython 0.7.2 from debian testing I get the following > error when saying help(), while ? works. > > help() works in the normal python shell. > > In [7]: help(sys) > --------------------------------------------------------------------------- > exceptions.TypeError Traceback (most > recent call last) > > /usr/lib/python2.3/site-packages/ > > /usr/lib/python2.3/site.py in __call__(self, *args, **kwds) > 317 def __call__(self, *args, **kwds): > 318 import pydoc > --> 319 return pydoc.help(*args, **kwds) > 320 > 321 __builtin__.help = _Helper() Unfortunately this is due to a bug in python2.3 itself, not in ipython. Python 2.4 works fine (meaning, ipython running under python2.4). I thought we'd been able to put in workarounds in ipython to protect against this bug, but I obviously missed some cases (I did fix others). If you can upgrade to 2.4, that would be the easiest fix. I'll try to add some code to work around the python bug for the next release, since there's little chance of the python 2.3 series ever fixing this. Cheers, f From elcorto at gmx.net Tue Jul 4 02:19:00 2006 From: elcorto at gmx.net (Steve Schmerler) Date: Tue, 04 Jul 2006 08:19:00 +0200 Subject: [SciPy-user] ipython help() doesn't work In-Reply-To: References: <44A9ACC6.5010001@gmx.net> Message-ID: <44AA0854.3060201@gmx.net> Fernando Perez wrote: > On 7/3/06, Steve Schmerler wrote: >> After installing ipython 0.7.2 from debian testing I get the following >> error when saying help(), while ? works. >> >> help() works in the normal python shell. 
>> >> In [7]: help(sys) >> --------------------------------------------------------------------------- >> exceptions.TypeError Traceback (most >> recent call last) >> >> /usr/lib/python2.3/site-packages/ >> >> /usr/lib/python2.3/site.py in __call__(self, *args, **kwds) >> 317 def __call__(self, *args, **kwds): >> 318 import pydoc >> --> 319 return pydoc.help(*args, **kwds) >> 320 >> 321 __builtin__.help = _Helper() > > Unfortunately this is due to a bug in python2.3 itself, not in > ipython. Python 2.4 works fine (meaning, ipython running under > python2.4). I thought we'd been able to put in workarounds in ipython > to protect against this bug, but I obviously missed some cases (I did > fix others). > > If you can upgrade to 2.4, that would be the easiest fix. Sure. But (at least when I checked last) not all 2.3-packages I use are already there as python2.4- and I planned to wait until 2.4 becomes Debian's default version (which I hope to happen relatively soon for testing :) ) > I'll try to > add some code to work around the python bug for the next release, > since there's little chance of the python 2.3 series ever fixing this. > Thanks! Until then I'll use 0.7.1.fix1, which worked. cheers, steve -- Random number generation is the art of producing pure gibberish as quickly as possible. From nwagner at iam.uni-stuttgart.de Wed Jul 5 03:35:49 2006 From: nwagner at iam.uni-stuttgart.de (Nils Wagner) Date: Wed, 05 Jul 2006 09:35:49 +0200 Subject: [SciPy-user] meshgrid Message-ID: <44AB6BD5.2050900@iam.uni-stuttgart.de> Hi all, Is it intended that the shape of the input vectors xcoor, ycoor is modified ? 
from scipy import *
xcoor=linspace(-2,2,10)
ycoor=linspace(-1,1,10)
print shape(xcoor)
print shape(ycoor)
X1,Y1 = meshgrid(xcoor,ycoor)
print shape(xcoor)
print shape(ycoor)

Output

(10,)
(10,)
(1, 10)
(10, 1)

Nils From nwagner at iam.uni-stuttgart.de Wed Jul 5 04:18:06 2006 From: nwagner at iam.uni-stuttgart.de (Nils Wagner) Date: Wed, 05 Jul 2006 10:18:06 +0200 Subject: [SciPy-user] What happened to outerproduct ? Message-ID: <44AB75BE.3080302@iam.uni-stuttgart.de> I saw a comment on outerproduct http://aspn.activestate.com/ASPN/Mail/Message/scipy-user/3178098 Does that mean that outerproduct no longer exists or is it temporarily unavailable ? Nils From wbaxter at gmail.com Wed Jul 5 04:24:26 2006 From: wbaxter at gmail.com (Bill Baxter) Date: Wed, 5 Jul 2006 17:24:26 +0900 Subject: [SciPy-user] meshgrid In-Reply-To: <44AB6BD5.2050900@iam.uni-stuttgart.de> References: <44AB6BD5.2050900@iam.uni-stuttgart.de> Message-ID: Is that new? I don't have a meshgrid function in my scipy 0.4.9. >>> from scipy import * >>> meshgrid Traceback (most recent call last): File "<stdin>", line 1, in ? NameError: name 'meshgrid' is not defined >>> import scipy >>> scipy.__version__ '0.4.9' mgrid works though: numpy.mgrid[-2:2:10j, -1:1:10j]. --bb On 7/5/06, Nils Wagner wrote: > Hi all, > > Is it intended that the shape of the input vectors xcoor, ycoor is > modified ? > > from scipy import * > xcoor=linspace(-2,2,10) > ycoor=linspace(-1,1,10) > print shape(xcoor) > print shape(ycoor) > X1,Y1 = meshgrid(xcoor,ycoor) > print shape(xcoor) > print shape(ycoor) > > > Output > > (10,) > (10,) > (1, 10) > (10, 1) > > Nils > > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From nwagner at iam.uni-stuttgart.de Wed Jul 5 04:31:59 2006 From: nwagner at iam.uni-stuttgart.de (Nils Wagner) Date: Wed, 05 Jul 2006 10:31:59 +0200 Subject: [SciPy-user] meshgrid In-Reply-To: References: <44AB6BD5.2050900@iam.uni-stuttgart.de> Message-ID: <44AB78FF.3010905@iam.uni-stuttgart.de> Bill Baxter wrote: > Is that new? I don't have a meshgrid function in my scipy 0.4.9. > > >>> from scipy import * > >>> meshgrid > Traceback (most recent call last): > File "", line 1, in ? > NameError: name 'meshgrid' is not defined > >>> import scipy > >>> scipy.__version__ > '0.4.9' > > mgrid works though: > numpy.mgrid[-2:2:10j, -1:1:10j]. > > --bb > > On 7/5/06, *Nils Wagner* > wrote: > > Hi all, > > Is it intended that the shape of the input vectors xcoor, ycoor is > modified ? > > from scipy import * > xcoor=linspace(-2,2,10) > ycoor=linspace(-1,1,10) > print shape(xcoor) > print shape(ycoor) > X1,Y1 = meshgrid(xcoor,ycoor) > print shape(xcoor) > print shape(ycoor) > > > Output > > (10,) > (10,) > (1, 10) > (10, 1) > > Nils > > ------------------------------------------------------------------------ > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > Sorry, it belongs to numpy (see numpy.lib.function_base below), but here I get >>> from scipy import * >>> meshgrid >>> scipy.__version__ '0.5.0.2034' Help on function meshgrid in module numpy.lib.function_base: meshgrid(x, y) For vectors x, y with lengths Nx=len(x) and Ny=len(y), return X, Y where X and Y are (Ny, Nx) shaped arrays with the elements of x and y repeated to fill the matrix EG,

[X, Y] = meshgrid([1,2,3], [4,5,6,7])

X = 1 2 3
    1 2 3
    1 2 3
    1 2 3

Y = 4 4 4
    5 5 5
    6 6 6
    7 7 7

From nwagner at iam.uni-stuttgart.de Wed Jul 5 05:21:19 2006 From: nwagner at iam.uni-stuttgart.de (Nils Wagner) Date: Wed, 05 Jul 2006 11:21:19 +0200 Subject: [SciPy-user] nested for loops Message-ID: 
<44AB848F.9070906@iam.uni-stuttgart.de> Hi all, How can I improve the computation of U in the attached script ? I mean how can I circumvent the nested loops ? An example would be appreciated. Nils -------------- next part -------------- A non-text attachment was scrubbed... Name: test_mesh.py Type: text/x-python Size: 1078 bytes Desc: not available URL: From nwagner at iam.uni-stuttgart.de Wed Jul 5 05:01:37 2006 From: nwagner at iam.uni-stuttgart.de (Nils Wagner) Date: Wed, 05 Jul 2006 11:01:37 +0200 Subject: [SciPy-user] nested for loops Message-ID: <44AB7FF1.8060201@iam.uni-stuttgart.de> Hi all, How can I improve the computation of U in the attached script ? An example would be appreciated. Nils -------------- next part -------------- A non-text attachment was scrubbed... Name: test_mesh.py Type: text/x-python Size: 790 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: mode.dat Type: application/ms-tnef Size: 250 bytes Desc: not available URL: From ckkart at hoc.net Wed Jul 5 07:33:55 2006 From: ckkart at hoc.net (Christian Kristukat) Date: Wed, 05 Jul 2006 20:33:55 +0900 Subject: [SciPy-user] nested for loops In-Reply-To: <44AB7FF1.8060201@iam.uni-stuttgart.de> References: <44AB7FF1.8060201@iam.uni-stuttgart.de> Message-ID: <44ABA3A3.1050600@hoc.net> Nils Wagner wrote: > Hi all, > > How can I improve the computation of U in the attached script ? > An example would be appreciated. > Make use of the numpy array operations and the broadcasting feature. See the attached file. The computation is about 60 times faster here. Christian -------------- next part -------------- A non-text attachment was scrubbed... 
Name: mesh2.py Type: text/x-python Size: 782 bytes Desc: not available URL: From nwagner at iam.uni-stuttgart.de Wed Jul 5 07:46:30 2006 From: nwagner at iam.uni-stuttgart.de (Nils Wagner) Date: Wed, 05 Jul 2006 13:46:30 +0200 Subject: [SciPy-user] nested for loops In-Reply-To: <44ABA3A3.1050600@hoc.net> References: <44AB7FF1.8060201@iam.uni-stuttgart.de> <44ABA3A3.1050600@hoc.net> Message-ID: <44ABA696.6090705@iam.uni-stuttgart.de> Christian Kristukat wrote: > Nils Wagner wrote: > >> Hi all, >> >> How can I improve the computation of U in the attached script ? >> An example would be appreciated. >> >> > > Make use of the numpy array operations and the broadcasting feature. See the > attached file. The computation is about 60 times faster here. > > Christian > > > > ------------------------------------------------------------------------ > > from scipy import * > import time > > from pylab import contourf, show, subplot, figure, title, colorbar > xcoor=linspace(-1,1,30) > ycoor=linspace(-1,1,30) > print shape(xcoor) > print shape(ycoor) > X1,Y1 = meshgrid(xcoor,ycoor) > print shape(xcoor) > print shape(ycoor) > xcoor=linspace(-1,1,30) > ycoor=linspace(-1,1,30) > U = zeros((len(xcoor),len(ycoor)),float) > file = open('mode.dat') > v = io.read_array(file) > N = len(v) > > start = time.time() > z_ik = xcoor[:,newaxis]+1j*ycoor[newaxis,:] > U = zeros(z_ik.shape) > > for n in arange(1,N+1): > U = U + sin(2.*n*angle(z_ik)/3.)*special.jv(2.*n/3.,sqrt(19.739209)*abs(z_ik))*v[n-1] > print time.time()-start > # > # How can I improve the computation of the mode shape U = ? > # > X1,Y1 = meshgrid(xcoor,ycoor) > figure() > CS=contourf(X1,Y1,U,20) > cbar=colorbar(CS) > show() > > > ------------------------------------------------------------------------ > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > Great ! Thank you very much for your prompt reply. 
Nils From nwagner at iam.uni-stuttgart.de Wed Jul 5 08:15:32 2006 From: nwagner at iam.uni-stuttgart.de (Nils Wagner) Date: Wed, 05 Jul 2006 14:15:32 +0200 Subject: [SciPy-user] nested for loops In-Reply-To: <44ABA3A3.1050600@hoc.net> References: <44AB7FF1.8060201@iam.uni-stuttgart.de> <44ABA3A3.1050600@hoc.net> Message-ID: <44ABAD64.7070403@iam.uni-stuttgart.de> Christian Kristukat wrote: > Nils Wagner wrote: > >> Hi all, >> >> How can I improve the computation of U in the attached script ? >> An example would be appreciated. >> >> > > Make use of the numpy array operations and the broadcasting feature. See the > attached file. The computation is about 60 times faster here. > > Christian > > > > ------------------------------------------------------------------------ > > from scipy import * > import time > > from pylab import contourf, show, subplot, figure, title, colorbar > xcoor=linspace(-1,1,30) > ycoor=linspace(-1,1,30) > print shape(xcoor) > print shape(ycoor) > X1,Y1 = meshgrid(xcoor,ycoor) > print shape(xcoor) > print shape(ycoor) > xcoor=linspace(-1,1,30) > ycoor=linspace(-1,1,30) > U = zeros((len(xcoor),len(ycoor)),float) > file = open('mode.dat') > v = io.read_array(file) > N = len(v) > > start = time.time() > z_ik = xcoor[:,newaxis]+1j*ycoor[newaxis,:] > U = zeros(z_ik.shape) > > for n in arange(1,N+1): > U = U + sin(2.*n*angle(z_ik)/3.)*special.jv(2.*n/3.,sqrt(19.739209)*abs(z_ik))*v[n-1] > print time.time()-start > # > # How can I improve the computation of the mode shape U = ? > # > X1,Y1 = meshgrid(xcoor,ycoor) > figure() > CS=contourf(X1,Y1,U,20) > cbar=colorbar(CS) > show() > > > ------------------------------------------------------------------------ > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > If the number of evenly spaced samples differ in the x and y direction e.g. 
numx = 30 numy =20 xcoor=linspace(-1,1,numx) ycoor=linspace(-1,1,numy) z_ik = xcoor[:,newaxis]+1j*ycoor[newaxis,:] one should use CS=contourf(X1,Y1,U.transpose(),20) instead of CS=contourf(X1,Y1,U,20) Is that correct ? Nils From ckkart at hoc.net Wed Jul 5 08:23:47 2006 From: ckkart at hoc.net (Christian Kristukat) Date: Wed, 05 Jul 2006 21:23:47 +0900 Subject: [SciPy-user] nested for loops In-Reply-To: <44ABAD64.7070403@iam.uni-stuttgart.de> References: <44AB7FF1.8060201@iam.uni-stuttgart.de> <44ABA3A3.1050600@hoc.net> <44ABAD64.7070403@iam.uni-stuttgart.de> Message-ID: <44ABAF53.30906@hoc.net> Nils Wagner wrote: > Christian Kristukat wrote: >> Nils Wagner wrote: >> >>> Hi all, >>> >>> How can I improve the computation of U in the attached script ? >>> An example would be appreciated. >>> >>> >> Make use of the numpy array operations and the broadcasting feature. See the >> attached file. The computation is about 60 times faster here. >> >> Christian >> >> >> >> ------------------------------------------------------------------------ >> >> from scipy import * >> import time >> >> from pylab import contourf, show, subplot, figure, title, colorbar >> xcoor=linspace(-1,1,30) >> ycoor=linspace(-1,1,30) >> print shape(xcoor) >> print shape(ycoor) >> X1,Y1 = meshgrid(xcoor,ycoor) >> print shape(xcoor) >> print shape(ycoor) >> xcoor=linspace(-1,1,30) >> ycoor=linspace(-1,1,30) >> U = zeros((len(xcoor),len(ycoor)),float) >> file = open('mode.dat') >> v = io.read_array(file) >> N = len(v) >> >> start = time.time() >> z_ik = xcoor[:,newaxis]+1j*ycoor[newaxis,:] >> U = zeros(z_ik.shape) >> >> for n in arange(1,N+1): >> U = U + sin(2.*n*angle(z_ik)/3.)*special.jv(2.*n/3.,sqrt(19.739209)*abs(z_ik))*v[n-1] >> print time.time()-start >> # >> # How can I improve the computation of the mode shape U = ? 
>> # >> X1,Y1 = meshgrid(xcoor,ycoor) >> figure() >> CS=contourf(X1,Y1,U,20) >> cbar=colorbar(CS) >> show() >> >> >> ------------------------------------------------------------------------ >> >> _______________________________________________ >> SciPy-user mailing list >> SciPy-user at scipy.org >> http://projects.scipy.org/mailman/listinfo/scipy-user >> > > If the number of evenly spaced samples differ in the x and y direction > e.g. > > numx = 30 > numy =20 > xcoor=linspace(-1,1,numx) > ycoor=linspace(-1,1,numy) > z_ik = xcoor[:,newaxis]+1j*ycoor[newaxis,:] > > one should use > > CS=contourf(X1,Y1,U.transpose(),20) > > instead of > > CS=contourf(X1,Y1,U,20) > > Is that correct ? True, in the notation above xcoor is a column vector. You could swap [:,newaxis] and [newaxis,:] above, or transpose U as you suggested. Christian From William.Hunter at mmhgroup.com Wed Jul 5 05:00:32 2006 From: William.Hunter at mmhgroup.com (William Hunter) Date: Wed, 5 Jul 2006 11:00:32 +0200 Subject: [SciPy-user] MATLAB faster than Python (more code included) In-Reply-To: <44A99006.1080805@gmail.com> Message-ID: <98AA9E629A145D49A1A268FA6DBA70B4249041@mmihserver01.MMIH01.local> Robert; Thanks for your reply and advice, I found it through Google - you should've given me a STFW here... Thank you anyway. I've attached MIF.py (and MIF.m for comparison). The function is part of a topology optimization routine (top.py) originally written in MATLAB. The number of times MIF is called is a function of the convergence criterion. For a sloppy convergence criterion and a very rough domain of 20x20 (i.e.,(nelx)x(nely)) elements, MIF is called 37 times. The finer the domain (one would typically want 200x200 for example), the more calls to MIF. The main script (top.py) takes 34 sec to complete for 20x20 elements, of which MIF's share is 15 sec. If you run MIF() on its own you'll see that it is slow compared to the MATLAB one. 
William Hunter PS: You don't have to be courteous in your replies (I'm already very grateful for getting a reply and free advice), I can handle it :) -----Original Message----- From: Robert Kern [mailto:robert.kern at gmail.com] Sent: 03 July 2006 23:46 To: SciPy Users List Subject: Re: [SciPy-user] MATLAB code faster than Python! William Hunter wrote: > Yes, it's true. Using MATLAB's and IPython's profilers there is > at least some agreement of where the code is consuming time. What I > find strange is the type of operation that's causing this. > > Below is the "culprit line" (MATLAB): > K(edof,edof) = K(edof,edof) + x(ely,elx)^penal*Kel; > > In Python I wrote this as: > K[ix_(edof,edof)] += x[ely,elx]**penal*Kel > > Both lines are nested inside two loops. How is it possible that > MATLAB is faster (by a factor of at least 2.25), or am I naive to > think it will be otherwise?! You should not use profiled times to compare implementations. You should only use them to identify hotspots in your code. Profiling involves instrumenting your code, and that affects the timing. Generally, relationships between times of functions A and B called in the same profile run will be the same as if there were no profiling. However, A and A' (the same as A but in a different language using a different profiling harness) won't be. Without knowing any of the context (the size of K, what edof, el[xy], penal, and Kel are, how many times you are executing this line, etc.), I have no idea whether Matlab is actually faster or how to improve the Python code. > The profiler also reports a function 0:(dgesv) that takes up a lot of time, > and I'm certain it is connected to my "culprit line" in some way... dgesv is a LAPACK routine that solves a general system of linear equations via LU factorization. It is called by linalg.solve(). If the size of the input is substantial, it is reasonable that this might take up a chunk of time. > Can it be that loops are slow in Python, if so, what else can I do? 
> Unfortunately I HAVE to use a loop as it updates an array, i.e., it's > not possible to make use of a static one. Well, we can't really do anything for you. You haven't shown us enough code to make sense of your results much less to offer specific suggestions. However, in general terms, you should break out the pieces of code that you think are problematic into separate benchmarks. -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco -------------- next part -------------- A non-text attachment was scrubbed... Name: MIF.m Type: application/octet-stream Size: 463 bytes Desc: MIF.m URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: MIF.py Type: application/octet-stream Size: 842 bytes Desc: MIF.py URL: From massimo.sandal at unibo.it Wed Jul 5 10:54:26 2006 From: massimo.sandal at unibo.it (massimo sandal) Date: Wed, 05 Jul 2006 16:54:26 +0200 Subject: [SciPy-user] MATLAB faster than Python In-Reply-To: <98AA9E629A145D49A1A268FA6DBA70B4249041@mmihserver01.MMIH01.local> References: <98AA9E629A145D49A1A268FA6DBA70B4249041@mmihserver01.MMIH01.local> Message-ID: <44ABD2A2.7070806@unibo.it> William Hunter wrote: > Robert; > > I found it through Google - you > should've given me a STFW here... Thank you anyway. [...] > PS: You don't have to be courteous in your replies (I'm already very > grateful for getting a reply and free advice), I can handle it :) I'm sorry for the off-topic but I just wanna say that if all users on a given software ML were like you the world would be a much, much better place. Kudos. m. -- Massimo Sandal University of Bologna Department of Biochemistry "G.Moruzzi" snail mail: Via Irnerio 48, 40126 Bologna, Italy email: massimo.sandal at unibo.it tel: +39-051-2094388 fax: +39-051-2094387 -------------- next part -------------- A non-text attachment was scrubbed... 
Name: massimo.sandal.vcf Type: text/x-vcard Size: 274 bytes Desc: not available URL: From stefan at sun.ac.za Wed Jul 5 11:29:10 2006 From: stefan at sun.ac.za (Stefan van der Walt) Date: Wed, 5 Jul 2006 17:29:10 +0200 Subject: [SciPy-user] meshgrid In-Reply-To: <44AB6BD5.2050900@iam.uni-stuttgart.de> References: <44AB6BD5.2050900@iam.uni-stuttgart.de> Message-ID: <20060705152909.GC14060@mentat.za.net> Nils, These bug reports are especially useful in the form of tickets. See http://projects.scipy.org/scipy/numpy/ticket/169 Regards Stéfan On Wed, Jul 05, 2006 at 09:35:49AM +0200, Nils Wagner wrote: > Hi all, > > Is it intended that the shape of the input vectors xcoor, ycoor is > modified ? > > from scipy import * > xcoor=linspace(-2,2,10) > ycoor=linspace(-1,1,10) > print shape(xcoor) > print shape(ycoor) > X1,Y1 = meshgrid(xcoor,ycoor) > print shape(xcoor) > print shape(ycoor) > > > Output > > (10,) > (10,) > (1, 10) > (10, 1) > > Nils From m.oliver at iu-bremen.de Wed Jul 5 16:09:17 2006 From: m.oliver at iu-bremen.de (Marcel Oliver) Date: Wed, 5 Jul 2006 16:09:17 -0400 Subject: [SciPy-user] Compile Scipy on Solaris Message-ID: <17580.7277.913640.918916@localhost.localdomain> Has anybody successfully installed scipy on Solaris (SunOS 5.9)? I have tried forcing both Sun compilers and GNU compilers for the Fortran and C bits, but do not get a working environment. When forcing GNU compilers, I get >>> from numpy import * >>> from scipy import * import linsolve.umfpack -> failed: ld.so.1: python: fatal: >>> libg2c.so.0: open failed: No such file or directory When forcing Sun compilers with python setup.py config --compiler=sun install --prefix=$HOME/python the setup script seems to take the gcc for C anyway, but uses Sun's f90 for Fortran, and I get >>> from numpy import * >>> from scipy import * import linsolve.umfpack -> failed: ld.so.1: python: fatal: relocation error: file /export/opt/SUNWspro10/lib/v8plus/libfsu.so.1: symbol omp_set_dynamic: referenced symbol not found (followed by various tracebacks). I am using numpy-0.9.8 and scipy-0.4.9. Any hints would be appreciated. 
--Marcel From bryan at cole.uklinux.net Wed Jul 5 16:32:36 2006 From: bryan at cole.uklinux.net (Bryan Cole) Date: Wed, 05 Jul 2006 21:32:36 +0100 Subject: [SciPy-user] MATLAB faster than Python (more code included) In-Reply-To: <98AA9E629A145D49A1A268FA6DBA70B4249041@mmihserver01.MMIH01.local> References: <44A99006.1080805@gmail.com> <98AA9E629A145D49A1A268FA6DBA70B4249041@mmihserver01.MMIH01.local> Message-ID: <1152131556.18296.9.camel@pc1.cole.uklinux.net> On Wed, 2006-07-05 at 11:00 +0200, William Hunter wrote: > Robert; > > Thanks for your reply and advise, I found through Google - you > should've given me a STFW here... Thank you anyway. > > I've attached MIF.py (and MIF.m for comparison). > > The function is part of a topology optimization routine (top.py) > originally written in MATLAB. The number of times MIF is called is a > function of the convergence criteria. For a sloppy convergence criteria > and a very rough domain of 20x20 (i.e.,(nelx)x(nely)) elements, MIF is > called 37 times. The finer the domain (one would typically want 200x200 > for example), the more calls to MIF. > > The main script (top.py) takes 34 sec to complete for 20x20 elements, of > which MIF's share is 15 sec. > > If you run MIF() on its own you'll see that it is slow compared to the > MATLAB one. 
Well, I get a factor >2 speedup in the MIF.py script by vectorising the inner loops:

from numpy import mgrid, sum

def MIF2(nelx=60,nely=20,rmin=1.5,x=zeros((20,60),\
    float)+0.5,dc=randn(20,60)):
    dcNew = zeros_like(dc)
    round_rmin = int(round(rmin))
    for i in xrange(nelx):
        k_min, k_max = maximum(i-round_rmin,0),\
            minimum(i+round_rmin,nelx)
        for j in xrange(nely):
            l_min, l_max = maximum(j-round_rmin,0),\
                minimum(j+round_rmin,nely)
            k,l = mgrid[k_min:k_max,l_min:l_max]
            fac = rmin-sqrt((i-k)**2+(j-l)**2)
            result = maximum(0,fac)*x[l,k]*dc[l,k]
            Sum = sum(maximum(0,fac).flat)
            dcNew[j,i] = sum(result.flat)/(Sum*x[j,i])
    return dcNew

However, I would have thought this equally possible in the matlab version. N.B. I've changed the local variable 'sum' to 'Sum', as I use the numpy.sum function.

BC

From bryan at cole.uklinux.net Wed Jul 5 17:01:38 2006 From: bryan at cole.uklinux.net (Bryan Cole) Date: Wed, 05 Jul 2006 22:01:38 +0100 Subject: [SciPy-user] MATLAB faster than Python (more code included) In-Reply-To: <1152131556.18296.9.camel@pc1.cole.uklinux.net> References: <44A99006.1080805@gmail.com> <98AA9E629A145D49A1A268FA6DBA70B4249041@mmihserver01.MMIH01.local> <1152131556.18296.9.camel@pc1.cole.uklinux.net> Message-ID: <1152133298.18296.13.camel@pc1.cole.uklinux.net>

>
> Well, I get a factor >2 speedup in the MIF.py script by vectorising the
> inner loops:
>
> from numpy import mgrid, sum
> def MIF2(nelx=60,nely=20,rmin=1.5,x=zeros((20,60),\
>     float)+0.5,dc=randn(20,60)):
>     dcNew = zeros_like(dc)
>     round_rmin = int(round(rmin))
>     for i in xrange(nelx):
>         k_min, k_max = maximum(i-round_rmin,0),\
>             minimum(i+round_rmin,nelx)
>         for j in xrange(nely):
>             l_min, l_max = maximum(j-round_rmin,0),\
>                 minimum(j+round_rmin,nely)
>             k,l = mgrid[k_min:k_max,l_min:l_max]
>             fac = rmin-sqrt((i-k)**2+(j-l)**2)
>             result = maximum(0,fac)*x[l,k]*dc[l,k]
>             Sum = sum(maximum(0,fac).flat)
>             dcNew[j,i] = sum(result.flat)/(Sum*x[j,i])
>     return dcNew
>
> However, I would have thought this equally
possible in the matlab > version. N.B. I've changed the local variable 'sum' to 'Sum', as I use > the numpy.sum function.

and faster still (now 3.4x faster than the original):

def MIF3(nelx=60,nely=20,rmin=1.5,x=zeros((20,60),\
    float)+0.5,dc=randn(20,60)):
    dcNew = zeros_like(dc)
    round_rmin = int(round(rmin))
    K,L = indices((nelx,nely))
    for i in xrange(nelx):
        k_min, k_max = maximum(i-round_rmin,0),\
            minimum(i+round_rmin,nelx)
        for j in xrange(nely):
            l_min, l_max = maximum(j-round_rmin,0),\
                minimum(j+round_rmin,nely)
            k=K[k_min:k_max,l_min:l_max]
            l=L[k_min:k_max,l_min:l_max]
            fac = rmin-sqrt((i-k)**2+(j-l)**2)
            result = maximum(0,fac)*x[l,k]*dc[l,k]
            Sum = sum(maximum(0,fac).flat)
            dcNew[j,i] = sum(result.flat)/(Sum*x[j,i])
    return dcNew

Of course doing more vectorisation reduces the read-ability and increases memory overhead.

>
> BC

From drnlmuller+scipy at gmail.com Wed Jul 5 17:57:28 2006 From: drnlmuller+scipy at gmail.com (Neil Muller) Date: Wed, 5 Jul 2006 23:57:28 +0200 Subject: [SciPy-user] MATLAB faster than Python (more code included) In-Reply-To: <1152133298.18296.13.camel@pc1.cole.uklinux.net> References: <44A99006.1080805@gmail.com> <98AA9E629A145D49A1A268FA6DBA70B4249041@mmihserver01.MMIH01.local> <1152131556.18296.9.camel@pc1.cole.uklinux.net> <1152133298.18296.13.camel@pc1.cole.uklinux.net> Message-ID: On 7/5/06, Bryan Cole wrote:

> def MIF3(nelx=60,nely=20,rmin=1.5,x=zeros((20,60),\
>     float)+0.5,dc=randn(20,60)):
>     dcNew = zeros_like(dc)
>     round_rmin = int(round(rmin))
>     K,L = indices((nelx,nely))
>     for i in xrange(nelx):
>         k_min, k_max = maximum(i-round_rmin,0),\
>             minimum(i+round_rmin,nelx)
>         for j in xrange(nely):
>             l_min, l_max = maximum(j-round_rmin,0),\
>                 minimum(j+round_rmin,nely)
>             k=K[k_min:k_max,l_min:l_max]
>             l=L[k_min:k_max,l_min:l_max]
>             fac = rmin-sqrt((i-k)**2+(j-l)**2)
>             result = maximum(0,fac)*x[l,k]*dc[l,k]
>             Sum = sum(maximum(0,fac).flat)
>             dcNew[j,i] = sum(result.flat)/(Sum*x[j,i])
>     return dcNew
>
> Of course doing more vectorisation
reduces the read-ability and > increases memory overhead. It's also possible to gain a bit by moving the calculation of x[l,k]*dc[l,k] outside the loop using numpy.multiply (about 10% here), again at the expense of extra storage (this is a considerably more impressive speedup when applied directly to the non-vectorised version of the code) In theory, there should be a benefit to consolidating the maximum(0,fac) calls, but the gains aren't particularly significant (<5%) here. -- Neil Muller From bhendrix at enthought.com Wed Jul 5 19:58:31 2006 From: bhendrix at enthought.com (Bryce Hendrix) Date: Wed, 05 Jul 2006 18:58:31 -0500 Subject: [SciPy-user] ANN: Python Enthought Edition Version 1.0.0.beta3 Released Message-ID: <44AC5227.9030902@enthought.com> Enthought is pleased to announce the release of Python Enthought Edition Version 1.0.0.beta3 (http://code.enthought.com/enthon/) -- a python distribution for Windows. 1.0.0.beta3 Release Notes: -------------------- Version 1.0.0.beta3 of Python Enthought Edition is the first version based on Python 2.4.3 and includes updates to nearly every package. This is the third and (hopefully) last beta release. This release includes version 1.0.8 of the Enthought Tool Suite (ETS) Package and bug fixes-- you can look at the release notes for this ETS version here: http://svn.enthought.com/downloads/enthought/changelog-release.1.0.8.html About Python Enthought Edition: ------------------------------- Python 2.4.3, Enthought Edition is a kitchen-sink-included Python distribution for Windows including the following packages out of the box: Numeric SciPy IPython Enthought Tool Suite wxPython PIL mingw f2py MayaVi Scientific Python VTK and many more... More information is available about all Open Source code written and released by Enthought, Inc. at http://code.enthought.com -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From jkominek at cs.cmu.edu Wed Jul 5 20:14:55 2006 From: jkominek at cs.cmu.edu (John Kominek) Date: Wed, 05 Jul 2006 20:14:55 -0400 Subject: [SciPy-user] ANN: Python Enthought Edition Version 1.0.0.beta3 Released In-Reply-To: <44AC5227.9030902@enthought.com> References: <44AC5227.9030902@enthought.com> Message-ID: <44AC55FF.5090503@cs.cmu.edu> Could you clarify something that newcomers to the world scientific python (such as myself) stumble over. When you list Numeric in the list of included packages, are you referring to the now-obsoleted Numerical python 24.2 dating back to 2001, or the replacement NumPy that is at the core of SciPy? Perhaps it is there for legacy reasons, but I thought the idea is that NumPy is to replace Numeric, not co-exist with it. Anyway, this release is great news. I look forward to trying it out. john > Enthought is pleased to announce the release of Python Enthought > Edition Version 1.0.0.beta3 (http://code.enthought.com/enthon/) -- a > python distribution for Windows. > > 1.0.0.beta3 Release Notes: > -------------------- > Version 1.0.0.beta3 of Python Enthought Edition is the first version > based on Python 2.4.3 and includes updates to nearly every package. > This is the third and (hopefully) last beta release. > > This release includes version 1.0.8 of the Enthought Tool Suite (ETS) > Package and bug fixes-- you can look at the release notes for this ETS > version here: > > http://svn.enthought.com/downloads/enthought/changelog-release.1.0.8.html > > > > About Python Enthought Edition: > ------------------------------- > Python 2.4.3, Enthought Edition is a kitchen-sink-included Python > distribution for Windows including the following packages out of the box: > > Numeric > SciPy > IPython > Enthought Tool Suite > wxPython > PIL > mingw > f2py > MayaVi > Scientific Python > VTK > and many more... > > More information is available about all Open Source code written and > released by Enthought, Inc. 
at http://code.enthought.com > > >------------------------------------------------------------------------ > >_______________________________________________ >SciPy-user mailing list >SciPy-user at scipy.org >http://projects.scipy.org/mailman/listinfo/scipy-user > > From wbaxter at gmail.com Wed Jul 5 21:12:15 2006 From: wbaxter at gmail.com (Bill Baxter) Date: Thu, 6 Jul 2006 10:12:15 +0900 Subject: [SciPy-user] ANN: Python Enthought Edition Version 1.0.0.beta3 Released In-Reply-To: <44AC55FF.5090503@cs.cmu.edu> References: <44AC5227.9030902@enthought.com> <44AC55FF.5090503@cs.cmu.edu> Message-ID: According to http://code.enthought.com/enthon/, it contains not only Numeric 24.2, but also Numarray 1.5.1, AND numpy 0.9.9. I'm guessing they list only Numeric in the announcement because it's still the most well-known and widely deployed numerical package for Python. --bb On 7/6/06, John Kominek wrote: > > > Could you clarify something that newcomers to the world scientific > python (such as myself) stumble over. When you list Numeric in the list > of included packages, are you referring to the now-obsoleted Numerical > python 24.2 dating back to 2001, or the replacement NumPy that is at the > core of SciPy? Perhaps it is there for legacy reasons, but I thought the > idea is that NumPy is to replace Numeric, not co-exist with it. > > Anyway, this release is great news. I look forward to trying it out. > > john > > > > > > Enthought is pleased to announce the release of Python Enthought > > Edition Version 1.0.0.beta3 (http://code.enthought.com/enthon/) -- a > > python distribution for Windows. > > > > 1.0.0.beta3 Release Notes: > > -------------------- > > Version 1.0.0.beta3 of Python Enthought Edition is the first version > > based on Python 2.4.3 and includes updates to nearly every package. > > This is the third and (hopefully) last beta release.
> > > > This release includes version 1.0.8 of the Enthought Tool Suite (ETS) > > Package and bug fixes-- you can look at the release notes for this ETS > > version here: > > > > > http://svn.enthought.com/downloads/enthought/changelog-release.1.0.8.html > > > > > > > > About Python Enthought Edition: > > ------------------------------- > > Python 2.4.3, Enthought Edition is a kitchen-sink-included Python > > distribution for Windows including the following packages out of the > box: > > > > Numeric > > SciPy > > IPython > > Enthought Tool Suite > > wxPython > > PIL > > mingw > > f2py > > MayaVi > > Scientific Python > > VTK > > and many more... > > > > More information is available about all Open Source code written and > > released by Enthought, Inc. at http://code.enthought.com > > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From robert.kern at gmail.com Wed Jul 5 21:25:05 2006 From: robert.kern at gmail.com (Robert Kern) Date: Wed, 05 Jul 2006 20:25:05 -0500 Subject: [SciPy-user] ANN: Python Enthought Edition Version 1.0.0.beta3 Released In-Reply-To: References: <44AC5227.9030902@enthought.com> <44AC55FF.5090503@cs.cmu.edu> Message-ID: <44AC6671.5010606@gmail.com> Bill Baxter wrote: > According to http://code.enthought.com/enthon/, it contains not only > Numarray 24.2, but also Numarray 1.5.1, AND numpy 0.9.9. I'm guessing > they list only Numeric in the announcement because it's still the most > well-known and widely deployed numerical package for Python. Actually, it's due to old boilerplate that needs to be updated. -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." 
-- Umberto Eco From bhendrix at enthought.com Wed Jul 5 21:53:39 2006 From: bhendrix at enthought.com (bryce hendrix) Date: Wed, 05 Jul 2006 20:53:39 -0500 Subject: [SciPy-user] ANN: Python Enthought Edition Version 1.0.0.beta3 Released In-Reply-To: <44AC6671.5010606@gmail.com> References: <44AC5227.9030902@enthought.com> <44AC55FF.5090503@cs.cmu.edu> <44AC6671.5010606@gmail.com> Message-ID: <44AC6D23.2050501@enthought.com> Shhhh, Robert, its because Numeric is so well known! :) The web site has a complete list of included packages. Scipy is listed as version 0.5.0 and numpy as 0.9.9 because thats what the version file says, although they were pulled from svn last Friday. These may be updated for the final Enthon 1.0.0 release, but its probably a 50% chance it will be the versions included in this release. Bryce Robert Kern wrote: > Bill Baxter wrote: > >> According to http://code.enthought.com/enthon/, it contains not only >> Numarray 24.2, but also Numarray 1.5.1, AND numpy 0.9.9. I'm guessing >> they list only Numeric in the announcement because it's still the most >> well-known and widely deployed numerical package for Python. >> > > Actually, it's due to old boilerplate that needs to be updated. > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From oliphant.travis at ieee.org Wed Jul 5 23:21:40 2006 From: oliphant.travis at ieee.org (Travis Oliphant) Date: Wed, 05 Jul 2006 21:21:40 -0600 Subject: [SciPy-user] What happened to outerproduct ? In-Reply-To: <44AB75BE.3080302@iam.uni-stuttgart.de> References: <44AB75BE.3080302@iam.uni-stuttgart.de> Message-ID: <44AC81C4.4090504@ieee.org> Nils Wagner wrote: > I saw a comment on outerproduct > http://aspn.activestate.com/ASPN/Mail/Message/scipy-user/3178098 > Does that mean that outerproduct exists no longer or is it temporarily > not available ? 
> Its name is "outer" -Travis From strawman at astraw.com Thu Jul 6 01:11:36 2006 From: strawman at astraw.com (Andrew Straw) Date: Wed, 05 Jul 2006 22:11:36 -0700 Subject: [SciPy-user] ANN: Python Enthought Edition Version 1.0.0.beta3 Released In-Reply-To: <44AC6D23.2050501@enthought.com> References: <44AC5227.9030902@enthought.com> <44AC55FF.5090503@cs.cmu.edu> <44AC6671.5010606@gmail.com> <44AC6D23.2050501@enthought.com> Message-ID: <44AC9B88.4090102@astraw.com> bryce hendrix wrote: > Shhhh, Robert, its because Numeric is so well known! :) > > The web site has a complete list of included packages. Scipy is listed > as version 0.5.0 and numpy as 0.9.9 because thats what the version > file says, although they were pulled from svn last Friday. These may > be updated for the final Enthon 1.0.0 release, but its probably a 50% > chance it will be the versions included in this release. FWIW, I just filed a ticket on the versioning scheme: http://projects.scipy.org/scipy/numpy/ticket/170#preview From oliphant.travis at ieee.org Thu Jul 6 06:53:57 2006 From: oliphant.travis at ieee.org (Travis Oliphant) Date: Thu, 06 Jul 2006 04:53:57 -0600 Subject: [SciPy-user] [Fwd: SciPy doc] Message-ID: <44ACEBC5.2000704@ieee.org> -------- Original Message -------- Subject: SciPy doc Date: Thu, 6 Jul 2006 13:16:37 +0300 (EEST) From: Matti Vuorinen To: oliphant.travis at ieee.org CC: vuorinen at utu.fi Hello, I am not sure whether I am writing to the right person about downloading problems. If not, would you please forward this to the right person? A couple of days ago I downloaded the Enthought Ed. 1.0.0 of Python 2.4.3. It seems to me that I cannot open the scipy document file. I have been using SciPy (for the first time about 3 years ago), and want to learn the new features of this wonderful free software.
Best wishes, Matti Vuorinen From nwagner at iam.uni-stuttgart.de Thu Jul 6 07:22:47 2006 From: nwagner at iam.uni-stuttgart.de (Nils Wagner) Date: Thu, 06 Jul 2006 13:22:47 +0200 Subject: [SciPy-user] strange behaviour of angle(z) Message-ID: <44ACF287.3050103@iam.uni-stuttgart.de> Hi all, I am surprised by the behaviour of angle(z) I didn't expect negative return values. >>> angle(1.0+0j) 0.0 >>> angle(0.0+1j) 1.5707963267948966 >>> angle(-1.0+0j) 3.1415926535897931 but >>> angle(0.0-1j) -1.5707963267948966 instead of 3*pi/2 Is there any reason for negative return values ? Nils From nwagner at iam.uni-stuttgart.de Thu Jul 6 07:49:02 2006 From: nwagner at iam.uni-stuttgart.de (Nils Wagner) Date: Thu, 06 Jul 2006 13:49:02 +0200 Subject: [SciPy-user] strange behaviour of angle(z) In-Reply-To: <44ACF287.3050103@iam.uni-stuttgart.de> References: <44ACF287.3050103@iam.uni-stuttgart.de> Message-ID: <44ACF8AE.3080405@iam.uni-stuttgart.de> Nils Wagner wrote: > Hi all, > > I am surprised by the behaviour of angle(z) > I didn't expect negative return values. > > > >>> angle(1.0+0j) > 0.0 > >>> angle(0.0+1j) > 1.5707963267948966 > >>> angle(-1.0+0j) > 3.1415926535897931 > > but > > >>> angle(0.0-1j) > -1.5707963267948966 > > instead of > > 3*pi/2 > > Is there any reason for negative return values ? > > Nils > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > Just now I discovered that Matlab has a function phase. PHASE computes the phase of a complex vector PHI=phase(G) G is a complex-valued row vector and PHI is returned as its phase (in radians), with an effort made to keep it c o n t i n u e s over the \pi-borders. Is there such a function in scipy ? If not it would be a nice enhancement. 
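[Editor's note: a continuous phase in the spirit of Matlab's phase() can be built from angle() plus unwrapping. The sketch below uses numpy.unwrap in modern numpy/Python spelling, with a made-up phase ramp purely for illustration:]

```python
import numpy as np

# Synthetic example: a phase ramp that crosses the pi border several times.
t = np.linspace(0.0, 4.0 * np.pi, 9)
z = np.exp(1j * t)

wrapped = np.angle(z)      # values jump back into (-pi, pi]
phi = np.unwrap(wrapped)   # 2*pi jumps removed -> continuous phase

print(phi[-1])  # close to 4*pi
```

unwrap only needs the wrapped angles to be sampled finely enough that true phase steps stay below pi, which is the same caveat that applies to Matlab's phase().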
Nils From jelle.feringa at ezct.net Thu Jul 6 08:05:02 2006 From: jelle.feringa at ezct.net (Jelle Feringa / EZCT Architecture & Design Research) Date: Thu, 6 Jul 2006 14:05:02 +0200 Subject: [SciPy-user] interpolate.UnivariateSpline In-Reply-To: <44ACF8AE.3080405@iam.uni-stuttgart.de> Message-ID: <000001c6a0f4$6a003260$1001a8c0@JELLE> Hi, I'm running into some troubles when using the scipy.interpolate.UnivariateSpline function. Likely this is a compilation problem; thing is that the function works just fine on my laptop, which has a Pentium-M processor, and UnivariateSpline fails on my workstation, which is an AMD Athlon XP processor... I just upgraded the scipy / numpy binaries to the enthon 1.0.0b3 version, and it's still the same issue. Also I went through the scipy 0.4.8 / 0.4.9 releases, again, same thing. Any suggestion appreciated! Cheers, Jelle. In [20]: aa Out[20]: array([ 0. , 0.25, 0.5 , 0.75, 0.85, 0.9 , 0.95, 1. ]) In [21]: bb Out[21]: array([ 0. , 0.33333333, 1. , 3. , 5.666667 , 9. , 19. , 100. ]) In [22]: p = interp.UnivariateSpline In [23]: p = interp.UnivariateSpline(aa,bb) **boom** & python exits... //same thing happens when using lists... From jf.moulin at gmail.com Thu Jul 6 08:26:47 2006 From: jf.moulin at gmail.com (Jean-Francois Moulin) Date: Thu, 6 Jul 2006 14:26:47 +0200 Subject: [SciPy-user] mpfit questions Message-ID: Hi all, I am trying to get mpfit to work and I have some questions: 0) cutting and pasting the doc example which goes like ... m = mpfit('myfunct', p0, functkw=fa) ... I get a "module not callable" error I changed the call to m = mpfit.mpfit('myfunct', p0, functkw=fa) and it seems to work (until I reach the problem below).
1) I get this message when trying to call mpfit File "c:\Python24\lib\site-packages\mpfit.py", line 2249, in __init__ self.maxlog = log(self.maxnum) NameError: global name 'log' is not defined (I did import Numeric and mpfit before my call) 2) how to have it working with numpy/scipy (rather than Numeric) -I tried putting a from scipy import * clause in mpfit.py and remove all refs to numeric but I get the same message as above message. Thanks for any input JF From nwagner at iam.uni-stuttgart.de Thu Jul 6 08:30:20 2006 From: nwagner at iam.uni-stuttgart.de (Nils Wagner) Date: Thu, 06 Jul 2006 14:30:20 +0200 Subject: [SciPy-user] interpolate.Un In-Reply-To: <000001c6a0f4$6a003260$1001a8c0@JELLE> References: <000001c6a0f4$6a003260$1001a8c0@JELLE> Message-ID: <44AD025C.5020005@iam.uni-stuttgart.de> Jelle Feringa / EZCT Architecture & Design Research wrote: > Hi, > > I'm running into some troubles when using the > scipy.interpolate.UnivariateSpline function. Likely this is a compilation > problem; thing is that the function works just fine on my laptop, which has > a Pentium-M processor, and UnivariateSpline fails on my workstation, which > is an AMD Athlon XP processor... > I just upgraded the scipy / numpy binaries to the enthon 1.0.0b3 version, > and its still the same issue. Also I went through the scipy 0.4.8 / 0.4.9 > releases, again, same thing. Any suggestion appreciated! > > Cheers, > > Jelle. > > > > In [20]: aa > Out[20]: array([ 0. , 0.25, 0.5 , 0.75, 0.85, 0.9 , 0.95, 1. ]) > > In [21]: bb > Out[21]: > array([ 0. , 0.33333333, 1. , 3. , > 5.666667 , 9. , 19. , 100. ]) > > In [22]: p = interp.UnivariateSpline > > In [23]: p = interp.UnivariateSpline(aa,bb) > > > **boom** & python exits... > > //same thing happens when using lists... 
> > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > >>> numpy.__version__ '0.9.9.2744' >>> import scipy >>> scipy.__version__ '0.5.0.2044' >>> aa=array([ 0. , 0.25, 0.5 , 0.75, 0.85, 0.9 , 0.95, 1. ]) >>> bb=array([ 0. , 0.33333333, 1. , 3. , ... 5.666667 , 9. , 19. , 100. ]) >>> p = interpolate.UnivariateSpline(aa,bb) >>> p No problem here processor : 0 vendor_id : AuthenticAMD cpu family : 15 model : 12 model name : AMD Athlon(tm) 64 Processor 3400+ stepping : 0 cpu MHz : 2403.179 cache size : 512 KB fpu : yes fpu_exception : yes cpuid level : 1 wp : yes flags : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 pni syscall nx mmxext lm 3dnowext 3dnow bogomips : 4702.20 TLB size : 1024 4K pages clflush size : 64 cache_alignment : 64 address sizes : 40 bits physical, 48 bits virtual power management: ts fid vid ttp From jelle.feringa at ezct.net Thu Jul 6 08:18:52 2006 From: jelle.feringa at ezct.net (Jelle Feringa / EZCT Architecture & Design Research) Date: Thu, 6 Jul 2006 14:18:52 +0200 Subject: [SciPy-user] enthon beta 1.0.0b3 Message-ID: <000101c6a0f6$5885f1d0$1001a8c0@JELLE> When import pylab using the enthon 1.0.0b3 scipy/numpy package, the following error occurs: ImportError: cannot import name Int8 I think this error already showed up on this list? -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From jelle.feringa at ezct.net Thu Jul 6 08:29:47 2006 From: jelle.feringa at ezct.net (Jelle Feringa / EZCT Architecture & Design Research) Date: Thu, 6 Jul 2006 14:29:47 +0200 Subject: [SciPy-user] interpolate.UnivariateSpline In-Reply-To: <44AD025C.5020005@iam.uni-stuttgart.de> Message-ID: <000c01c6a0f7$def62f40$1001a8c0@JELLE> In [18]: np.__version__ Out[18]: '0.9.9.2706' In [19]: sp.__version__ Out[19]: '0.5.0.2033' Hi Nils, Thanks for your response, seems my version isnt that far from yours... I'm running the win32 binaries, I'm assuming your not? Thanks for your help, Jelle. By the way, it would be great to be able the 'cubic', 'spline', or 'nearest' options in interpolate.interp1d, that would be most useful... -----Original Message----- From: scipy-user-bounces at scipy.org [mailto:scipy-user-bounces at scipy.org] On Behalf Of Nils Wagner Sent: Thursday, July 06, 2006 2:30 PM To: SciPy Users List Subject: Re: [SciPy-user] interpolate.Un Jelle Feringa / EZCT Architecture & Design Research wrote: > Hi, > > I'm running into some troubles when using the > scipy.interpolate.UnivariateSpline function. Likely this is a compilation > problem; thing is that the function works just fine on my laptop, which has > a Pentium-M processor, and UnivariateSpline fails on my workstation, which > is an AMD Athlon XP processor... > I just upgraded the scipy / numpy binaries to the enthon 1.0.0b3 version, > and its still the same issue. Also I went through the scipy 0.4.8 / 0.4.9 > releases, again, same thing. Any suggestion appreciated! > > Cheers, > > Jelle. > > > > In [20]: aa > Out[20]: array([ 0. , 0.25, 0.5 , 0.75, 0.85, 0.9 , 0.95, 1. ]) > > In [21]: bb > Out[21]: > array([ 0. , 0.33333333, 1. , 3. , > 5.666667 , 9. , 19. , 100. ]) > > In [22]: p = interp.UnivariateSpline > > In [23]: p = interp.UnivariateSpline(aa,bb) > > > **boom** & python exits... > > //same thing happens when using lists... 
> > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > >>> numpy.__version__ '0.9.9.2744' >>> import scipy >>> scipy.__version__ '0.5.0.2044' >>> aa=array([ 0. , 0.25, 0.5 , 0.75, 0.85, 0.9 , 0.95, 1. ]) >>> bb=array([ 0. , 0.33333333, 1. , 3. , ... 5.666667 , 9. , 19. , 100. ]) >>> p = interpolate.UnivariateSpline(aa,bb) >>> p No problem here processor : 0 vendor_id : AuthenticAMD cpu family : 15 model : 12 model name : AMD Athlon(tm) 64 Processor 3400+ stepping : 0 cpu MHz : 2403.179 cache size : 512 KB fpu : yes fpu_exception : yes cpuid level : 1 wp : yes flags : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 pni syscall nx mmxext lm 3dnowext 3dnow bogomips : 4702.20 TLB size : 1024 4K pages clflush size : 64 cache_alignment : 64 address sizes : 40 bits physical, 48 bits virtual power management: ts fid vid ttp _______________________________________________ SciPy-user mailing list SciPy-user at scipy.org http://projects.scipy.org/mailman/listinfo/scipy-user From elcorto at gmx.net Thu Jul 6 08:58:48 2006 From: elcorto at gmx.net (Steve Schmerler) Date: Thu, 06 Jul 2006 14:58:48 +0200 Subject: [SciPy-user] mpfit questions In-Reply-To: References: Message-ID: <44AD0908.9070707@gmx.net> Jean-Francois Moulin wrote: > Hi all, > > I am trying to get mpfit to work and I have some questions: > > 0) cutting and pasting the doc example which goes like > ... > m = mpfit('myfunct', p0, functkw=fa) > ... > I get a "module not callable" error > > I changed the call to m = mpfit.mpfit('myfunct', p0, functkw=fa) > and it seems to work (until I reach the prblem below). 
> > 1) I get this message when trying to call mpfit > File "c:\Python24\lib\site-packages\mpfit.py", line 2249, in __init__ > self.maxlog = log(self.maxnum) > NameError: global name 'log' is not defined > (I did import Numeric and mpfit before my call) > I think the line should be self.maxlog = Numeric.log(self.maxnum) > 2) how to have it working with numpy/scipy (rather than Numeric) > -I tried putting a from scipy import * clause in mpfit.py and remove > all refs to numeric but I get the same message as above message. > This will be a bit of work. At first you have to change the import statememts in mpfit.py: ##import Numeric import numpy # you don't have to replace Numeric with numpy everywhere Numeric = numpy Numeric.Float = numpy.float64 Numeric.Int = numpy.int64 But then you will probably run into problems like Traceback (most recent call last): File "./trans.py", line 173, in ? x, dy = fit_transient(tt, yy, method = method, x_start = x_start, ret_all = 1, red_data = red_data) File "/home/schmerler/sim/transient_fitting.py", line 89, in fit_transient x = lmFit(array([t, y]), func = model, args = (dy,), limits = bounds, ig = x_start) File "/home/schmerler/Python/lmfit.py", line 198, in lmFit fit = mpfit.mpfit(residualFunction, parinfo = parameter_infos, quiet = 1, ftol = 1e-15, xtol = 1e-15, gtol = 1e-15, maxiter = niter) File "/home/schmerler/Python/mpfit.py", line 1062, in __init__ functkw=functkw, ifree=ifree, xall=self.params) File "/home/schmerler/Python/mpfit.py", line 1563, in fdjac2 mask = mask or (ulimited and (x > ulimit-h)) ValueError: The truth value of an array with more than one element is ambiguous. Use a.any() or a.all() Since I'm not using mpfit ATM, I didn't try to adapt the mpfit code to numpy ... Have you tried optimize.leastsq? cheers, steve -- Random number generation is the art of producing pure gibberish as quickly as possible. 
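[Editor's note: for problems that do not need mpfit's parameter bounds, the optimize.leastsq route Steve mentions looks roughly like this — a minimal sketch in modern numpy/scipy spelling, with a hypothetical straight-line model and synthetic, noise-free data invented for illustration:]

```python
import numpy as np
from scipy.optimize import leastsq

def residuals(p, x, y):
    """Residuals of a straight-line model y = a*x + b (illustrative only)."""
    a, b = p
    return y - (a * x + b)

# Synthetic data generated from a=3, b=1.
x = np.linspace(0.0, 1.0, 50)
y = 3.0 * x + 1.0

# leastsq returns the solution vector and an integer status flag.
p_fit, ier = leastsq(residuals, [1.0, 0.0], args=(x, y))

print(p_fit)  # close to [3.0, 1.0]
```

leastsq itself offers no bound constraints, which is why a use case like JF's (bounded fitting parameters) still points back to mpfit.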
From oliphant.travis at ieee.org Thu Jul 6 09:05:36 2006 From: oliphant.travis at ieee.org (Travis Oliphant) Date: Thu, 06 Jul 2006 07:05:36 -0600 Subject: [SciPy-user] mpfit questions In-Reply-To: <44AD0908.9070707@gmx.net> References: <44AD0908.9070707@gmx.net> Message-ID: <44AD0AA0.1050207@ieee.org> Steve Schmerler wrote: > I think the line should be > > This will be a bit of work. At first you have to change the import > statememts in mpfit.py: > > ##import Numeric > import numpy > # you don't have to replace Numeric with numpy everywhere > Numeric = numpy > Numeric.Float = numpy.float64 > Numeric.Int = numpy.int64 > Actually, it's easier to do import numpy.oldnumeric as Numeric which will keep the Float and Int names (and several other things) backward compatible. -Travis From Peter.Bienstman at ugent.be Thu Jul 6 09:18:38 2006 From: Peter.Bienstman at ugent.be (Peter Bienstman) Date: Thu, 6 Jul 2006 15:18:38 +0200 Subject: [SciPy-user] optimize.brute Message-ID: <200607061518.38803.Peter.Bienstman@ugent.be> Hi, I'm trying to get optimize.brute to work: import scipy.optimize def calc(x,y): return x+y print scipy.optimize.brute(calc, ((.050,.100,.0025), (.050,.100,.0025))) which gives File "/usr/lib/python2.4/site-packages/scipy/optimize/optimize.py", line 1558, in brute Jout = vecfunc(*grid) File "/usr/lib/python2.4/site-packages/numpy/lib/function_base.py", line 620, in __call__ raise ValueError, "mismatch between python function inputs"\ ValueError: mismatch between python function inputs and received arguments I'm quite sure that at some point in past I was able to use this syntax, but I haven't been able to trace back which versions of scipy and numpy I used for that. Can anyone point me in the correct direction? Thanks! Peter -------------- next part -------------- A non-text attachment was scrubbed... 
Name: not available Type: application/pgp-signature Size: 307 bytes Desc: not available URL: From jf.moulin at gmail.com Thu Jul 6 09:40:36 2006 From: jf.moulin at gmail.com (JF Moulin) Date: Thu, 6 Jul 2006 13:40:36 +0000 (UTC) Subject: [SciPy-user] mpfit questions References: <44AD0908.9070707@gmx.net> <44AD0AA0.1050207@ieee.org> Message-ID: Thanks for your fast input! FWIW, I want to use bounds for my fitting params; that is why I need mpfit... JF From vincefn at users.sourceforge.net Thu Jul 6 09:42:24 2006 From: vincefn at users.sourceforge.net (Favre-Nicolin Vincent) Date: Thu, 6 Jul 2006 15:42:24 +0200 Subject: [SciPy-user] strange behaviour of angle(z) In-Reply-To: <44ACF8AE.3080405@iam.uni-stuttgart.de> References: <44ACF287.3050103@iam.uni-stuttgart.de> <44ACF8AE.3080405@iam.uni-stuttgart.de> Message-ID: <200607061542.25071.vincefn@users.sourceforge.net> On Thursday 06 July 2006 13:49, Nils Wagner wrote: > Just now I discovered that Matlab has a function phase. > > PHASE computes the phase of a complex vector > PHI=phase(G) > G is a complex-valued row vector and PHI is returned as its phase (in > radians), > with an effort made to keep it c o n t i n u o u s over the \pi-borders. > > Is there such a function in scipy ? If not it would be a nice enhancement. If all you want is angles in [0;2pi] rather than [-pi;pi], it's easy to write ...: from scipy import * z=2*rand(10)-1+1j*(2*rand((10))-1) a=angle(z) a+=2*pi*(angle(z)<0) The [-pi;pi] range comes from the use of atan2. Or you can use modulo, i.e. a=angle(z)%(2*pi) (strange, I thought that scipy (maybe it was Numeric) modulo worked as C/C++ modulo and returned within ]-z;z[ instead of [0;z[ - I'm glad to see it's back to standard python behaviour) -- Vincent Favre-Nicolin Université
Joseph Fourier http://v.favrenicolin.free.fr ObjCryst & Fox : http://objcryst.sourceforge.net From jf.moulin at gmail.com Thu Jul 6 09:51:14 2006 From: jf.moulin at gmail.com (JF Moulin) Date: Thu, 6 Jul 2006 13:51:14 +0000 (UTC) Subject: [SciPy-user] mpfit questions References: <44AD0908.9070707@gmx.net> <44AD0AA0.1050207@ieee.org> Message-ID: Me again ok, I did what Travis suggested and added a Numeric clause in front of the calls to log and sqrt that gave problems before, so that now in mpfit we have: ... self.maxlog = Numeric.log(self.maxnum) self.minlog = Numeric.log(self.minnum) self.rdwarf = Numeric.sqrt(self.minnum*1.5) * 10 self.rgiant = Numeric.sqrt(self.maxnum) * 0.1 ... but still when I call mpfit I got the message: pythonw -u "testmpfit.py" Traceback (most recent call last): File "testmpfit.py", line 11, in ? m = mpfit.mpfit('myfunct', p0, functkw=fa) File "c:\Python24\lib\site-packages\mpfit.py", line 852, in __init__ self.damp = damp File "c:\Python24\lib\site-packages\mpfit.py", line 2249, in __init__ self.maxlog = Numeric.log(self.maxnum) NameError: global name 'log' is not defined Exit code: 1 ???? what the heck does that mean??? Thanks in advance for your help... From bhendrix at enthought.com Thu Jul 6 11:50:11 2006 From: bhendrix at enthought.com (Bryce Hendrix) Date: Thu, 06 Jul 2006 10:50:11 -0500 Subject: [SciPy-user] enthon beta 1.0.0b3 In-Reply-To: <000101c6a0f6$5885f1d0$1001a8c0@JELLE> References: <000101c6a0f6$5885f1d0$1001a8c0@JELLE> Message-ID: <44AD3133.8020006@enthought.com> Can you offer a hint as to what I should search for? Google isn't being all that helpful this morning. When I try to import matplotlib.pylab I get an error about _ctype not being in the dll, which means matplotlib was built incorrectly (well, it means other things, but its best I don't get started on that rant...). In the past when I've had problems building matplotlib, I just install the binary from their website and use it instead. 
Can you try that when you get a chance & let me know if it solves your problem? Thanks, Bryce Jelle Feringa / EZCT Architecture & Design Research wrote: > > > > When import pylab using the enthon 1.0.0b3 scipy/numpy package, the > following error occurs: > > ImportError: cannot import name Int8 > > > > I think this error already showed up on this list? > > ------------------------------------------------------------------------ > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > -------------- next part -------------- An HTML attachment was scrubbed... URL: From sransom at nrao.edu Thu Jul 6 11:53:30 2006 From: sransom at nrao.edu (Scott Ransom) Date: Thu, 6 Jul 2006 11:53:30 -0400 Subject: [SciPy-user] mpfit questions In-Reply-To: <44AD0908.9070707@gmx.net> References: <44AD0908.9070707@gmx.net> Message-ID: <20060706155330.GB9357@ssh.cv.nrao.edu> Running numpy/lib/convertcode.py on mpfit.py worked fine for me. I don't think I needed to make any other changes. Scott On Thu, Jul 06, 2006 at 02:58:48PM +0200, Steve Schmerler wrote: > Jean-Francois Moulin wrote: > > Hi all, > > > > I am trying to get mpfit to work and I have some questions: > > > > 0) cutting and pasting the doc example which goes like > > ... > > m = mpfit('myfunct', p0, functkw=fa) > > ... > > I get a "module not callable" error > > > > I changed the call to m = mpfit.mpfit('myfunct', p0, functkw=fa) > > and it seems to work (until I reach the prblem below). 
> > > > 1) I get this message when trying to call mpfit > > File "c:\Python24\lib\site-packages\mpfit.py", line 2249, in __init__ > > self.maxlog = log(self.maxnum) > > NameError: global name 'log' is not defined > > (I did import Numeric and mpfit before my call) > > > > I think the line should be > > self.maxlog = Numeric.log(self.maxnum) > > > > 2) how to have it working with numpy/scipy (rather than Numeric) > > -I tried putting a from scipy import * clause in mpfit.py and remove > > all refs to numeric but I get the same message as above message. > > > > This will be a bit of work. At first you have to change the import > statememts in mpfit.py: > > ##import Numeric > import numpy > # you don't have to replace Numeric with numpy everywhere > Numeric = numpy > Numeric.Float = numpy.float64 > Numeric.Int = numpy.int64 > > But then you will probably run into problems like > > Traceback (most recent call last): > File "./trans.py", line 173, in ? > x, dy = fit_transient(tt, yy, method = method, x_start = x_start, > ret_all = 1, red_data = red_data) > File "/home/schmerler/sim/transient_fitting.py", line 89, in > fit_transient > x = lmFit(array([t, y]), func = model, args = (dy,), limits = > bounds, ig = x_start) > File "/home/schmerler/Python/lmfit.py", line 198, in lmFit > fit = mpfit.mpfit(residualFunction, parinfo = parameter_infos, > quiet = 1, ftol = 1e-15, xtol = 1e-15, gtol = 1e-15, maxiter = niter) > File "/home/schmerler/Python/mpfit.py", line 1062, in __init__ > functkw=functkw, ifree=ifree, xall=self.params) > File "/home/schmerler/Python/mpfit.py", line 1563, in fdjac2 > mask = mask or (ulimited and (x > ulimit-h)) > ValueError: The truth value of an array with more than one element is > ambiguous. Use a.any() or a.all() > > Since I'm not using mpfit ATM, I didn't try to adapt the mpfit code to > numpy ... > > Have you tried optimize.leastsq? 
> > cheers, > steve > > -- > Random number generation is the art of producing pure gibberish as > quickly as possible. > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user -- -- Scott M. Ransom Address: NRAO Phone: (434) 296-0320 520 Edgemont Rd. email: sransom at nrao.edu Charlottesville, VA 22903 USA GPG Fingerprint: 06A9 9553 78BE 16DB 407B FFCA 9BFA B6FF FFD3 2989 From elcorto at gmx.net Thu Jul 6 11:58:21 2006 From: elcorto at gmx.net (Steve Schmerler) Date: Thu, 06 Jul 2006 17:58:21 +0200 Subject: [SciPy-user] mpfit questions In-Reply-To: <44AD0AA0.1050207@ieee.org> References: <44AD0908.9070707@gmx.net> <44AD0AA0.1050207@ieee.org> Message-ID: <44AD331D.9020801@gmx.net> Travis Oliphant wrote: > Steve Schmerler wrote: >> I think the line should be >> >> This will be a bit of work. At first you have to change the import >> statememts in mpfit.py: >> >> ##import Numeric >> import numpy >> # you don't have to replace Numeric with numpy everywhere >> Numeric = numpy >> Numeric.Float = numpy.float64 >> Numeric.Int = numpy.int64 >> > > Actually, it's easier to do > > import numpy.oldnumeric as Numeric > > which will keep the Float and Int names (and several other things) > backward compatible. > Ah... cool. But I guess it won't be there forever so in the long run one should completely switch to numpy. -- Random number generation is the art of producing pure gibberish as quickly as possible. From cookedm at physics.mcmaster.ca Thu Jul 6 12:03:37 2006 From: cookedm at physics.mcmaster.ca (David M. 
Cooke) Date: Thu, 6 Jul 2006 12:03:37 -0400 Subject: [SciPy-user] strange behaviour of angle(z) In-Reply-To: <44ACF8AE.3080405@iam.uni-stuttgart.de> References: <44ACF287.3050103@iam.uni-stuttgart.de> <44ACF8AE.3080405@iam.uni-stuttgart.de> Message-ID: <20060706120337.7e5b7de7@arbutus.physics.mcmaster.ca> On Thu, 06 Jul 2006 13:49:02 +0200 Nils Wagner wrote: > Nils Wagner wrote: > > Hi all, > > > > I am surprised by the behaviour of angle(z) > > I didn't expect negative return values. > > > > > > >>> angle(1.0+0j) > > 0.0 > > >>> angle(0.0+1j) > > 1.5707963267948966 > > >>> angle(-1.0+0j) > > 3.1415926535897931 > > > > but > > > > >>> angle(0.0-1j) > > -1.5707963267948966 > > > > instead of > > > > 3*pi/2 > > > > Is there any reason for negative return values ? convention? (-pi,pi] instead of [0,2*pi). They're equivalent. Although admittedly changing from one to the other is a bit of a pain. (Hmm, and the range is actually [-float(pi), float(pi)], where float(pi) is the floating-point approximation to pi -- a closed interval.) > > SciPy-user mailing list > > SciPy-user at scipy.org > > http://projects.scipy.org/mailman/listinfo/scipy-user > > > Just now I discovered that Matlab has a function phase. > > PHASE computes the phase of a complex vector > PHI=phase(G) > G is a complex-valued row vector and PHI is returned as its phase (in > radians), > with an effort made to keep it c o n t i n u e s over the \pi-borders. > > Is there such a function in scipy ? If not it would be a nice enhancement. *scratches head* How does it keep it continuous over \pi-borders? (Well, I suppose [0,2*pi) is continuous across pi, but it's still discontinuous across 0 and 2*pi.) -- |>|\/|< /--------------------------------------------------------------------------\ |David M. 
Cooke http://arbutus.physics.mcmaster.ca/dmc/ |cookedm at physics.mcmaster.ca From oliphant at ee.byu.edu Thu Jul 6 12:16:26 2006 From: oliphant at ee.byu.edu (Travis Oliphant) Date: Thu, 06 Jul 2006 10:16:26 -0600 Subject: [SciPy-user] strange behaviour of angle(z) In-Reply-To: <44ACF8AE.3080405@iam.uni-stuttgart.de> References: <44ACF287.3050103@iam.uni-stuttgart.de> <44ACF8AE.3080405@iam.uni-stuttgart.de> Message-ID: <44AD375A.1000700@ee.byu.edu> Nils Wagner wrote: >Nils Wagner wrote: > > >>Hi all, >> >>I am surprised by the behaviour of angle(z) >>I didn't expect negative return values. >> >> >> >>> angle(1.0+0j) >>0.0 >> >>> angle(0.0+1j) >>1.5707963267948966 >> >>> angle(-1.0+0j) >>3.1415926535897931 >> >>but >> >> >>> angle(0.0-1j) >>-1.5707963267948966 >> >>instead of >> >>3*pi/2 >> >>Is there any reason for negative return values ? >> >> Where you place the branch-cut for a multi-valued function is just a convention. There is an 'unwrap' function in numpy. -Travis From elcorto at gmx.net Thu Jul 6 12:23:05 2006 From: elcorto at gmx.net (Steve Schmerler) Date: Thu, 06 Jul 2006 18:23:05 +0200 Subject: [SciPy-user] mpfit questions In-Reply-To: <20060706155330.GB9357@ssh.cv.nrao.edu> References: <44AD0908.9070707@gmx.net> <20060706155330.GB9357@ssh.cv.nrao.edu> Message-ID: <44AD38E9.5060605@gmx.net> Scott Ransom wrote: > Running numpy/lib/convertcode.py on mpfit.py worked fine for me. > I don't think I needed to make any other changes. > That's probably the best way. I was just about to ask how I do In [35]: import Numeric as N In [36]: X=N.array([1,0,1,0]) In [37]: Y=N.array([1,0,0,0]) In [38]: X or Y Out[38]: array([1, 0, 1, 0]) in numpy, since In [39]: x=numpy.array([1,0,1,0]) In [40]: y=numpy.array([1,0,0,0]) In [41]: x or y --------------------------------------------------------------------------- exceptions.ValueError Traceback (most recent call last) /home/schmerler/ ValueError: The truth value of an array with more than one element is ambiguous. 
Use a.any() or a.all()

is the problem if I try to run mpfit with numpy. BTW, since many people seem to use it, would it be a problem to include mpfit in scipy, or are there licensing issues? cheers, steve -- Random number generation is the art of producing pure gibberish as quickly as possible. From cookedm at physics.mcmaster.ca Thu Jul 6 12:35:26 2006 From: cookedm at physics.mcmaster.ca (David M. Cooke) Date: Thu, 6 Jul 2006 12:35:26 -0400 Subject: [SciPy-user] mpfit questions In-Reply-To: <44AD38E9.5060605@gmx.net> References: <44AD0908.9070707@gmx.net> <20060706155330.GB9357@ssh.cv.nrao.edu> <44AD38E9.5060605@gmx.net> Message-ID: <20060706123526.26696dd9@arbutus.physics.mcmaster.ca> On Thu, 06 Jul 2006 18:23:05 +0200 Steve Schmerler wrote:
>
> BTW, since many people seem to use it, would it be a problem to include
> mpfit in scipy, or are there licensing issues?

I presume you mean mpfit from here: http://cars9.uchicago.edu/software/python/mpfit.html There's no licence on that code. For inclusion in Scipy it has to have a BSD-compatible licence.

--
|>|\/|<
/--------------------------------------------------------------------------\
|David M. Cooke http://arbutus.physics.mcmaster.ca/dmc/
|cookedm at physics.mcmaster.ca

From mike.s.duffy at gmail.com Thu Jul 6 14:04:37 2006 From: mike.s.duffy at gmail.com (Mike Duffy) Date: Thu, 6 Jul 2006 14:04:37 -0400 Subject: [SciPy-user] FFT not producing correct result Message-ID: <3a3da060607061104g7323ebd4t62c5a86fe5b92f28@mail.gmail.com> I have been trying to use fft/ifft for a program, but the program has been failing. So, I tried testing the fft function alone and found that it does not work as I would expect it to. The test I have been using is to feed it a standard Gaussian, since the FFT of this should be just another Gaussian. But, this is not what I get at all. I've been using pylab to visualize the function before and after the transform. Here is my test code:

def main():
    sigma = 25.
    x = arange(0, 100, 1, 'f')
    x0 = 50.
    g = exp(-(x - x0)**2 / (2 * sigma)) / sqrt(2 * pi)
    for i in xrange(1):
        testFFT(x, g)

def testFFT(x, f):
    pylab.plot(x, f)
    pylab.title('Initial gaussian')
    pylab.show()
    pylab.clf()
    f = fft(f)
    pylab.plot(x, abs(f))
    pylab.title('After first FFT')
    pylab.show()
    pylab.clf()
    f = ifft(f)
    pylab.plot(x, abs(f))
    pylab.title('After first iFFT')
    pylab.show()
    pylab.clf()

It seems that the result is a Gaussian, but with the first half of the array swapped with the second. I tried using this silly fudging function

def swapaxis(a):
    n = len(a)
    tmp = a.copy()
    tmp[:n/2] = a[n/2:]
    tmp[n/2:] = a[:n/2]
    return tmp

on the result of the fft and, indeed, I got a Gaussian as expected. But, this does not seem legitimate. Can anyone help? Thanks in advance. -- Michael S. Duffy University of Florida -------------- next part -------------- An HTML attachment was scrubbed... URL: From vincefn at users.sourceforge.net Thu Jul 6 14:23:16 2006 From: vincefn at users.sourceforge.net (Vincent Favre-Nicolin) Date: Thu, 6 Jul 2006 20:23:16 +0200 Subject: [SciPy-user] FFT not producing correct result In-Reply-To: <529E0C005F46104BA9DB3CB93F397975326A6C@TOKYO.intra.cea.fr> References: <529E0C005F46104BA9DB3CB93F397975326A6C@TOKYO.intra.cea.fr> Message-ID: <200607062023.16790.vincefn@users.sourceforge.net> On Thursday 06 July 2006 20:07, Mike Duffy wrote:
> I have been trying to use fft/ifft for a program, but the program has been
> failing. So, I tried testing the fft function alone and found that it does
> not work as I would expect it to. The test I have been using is to feed it a
> standard Gaussian, since the FFT of this should be just another Gaussian.
> But, this is not what I get at all. I've been using pylab to visualize the
> function before and after the transform. Here is my test code:

That's normal - as written in the help of fft, the first element in the array is the zero-frequency term, so this is where your gaussian is.
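A quick numerical check makes that layout explicit. This is a sketch with numpy's fft module (array length and Gaussian centre follow Mike's example; the exact width is my choice):

```python
import numpy as np

x = np.arange(100.0)
g = np.exp(-(x - 50.0) ** 2 / 50.0)   # Gaussian centred in the middle of the array

G = np.fft.fft(g)
print(np.argmax(np.abs(G)))           # 0: the zero-frequency term comes first

Gs = np.fft.fftshift(G)               # move zero frequency to the centre for plotting
print(np.argmax(np.abs(Gs)))          # 50: the peak now sits mid-array

# The shift is only cosmetic: undo it with ifftshift before inverting.
assert np.allclose(np.fft.ifft(np.fft.ifftshift(Gs)), g)
```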
You should look at fftpack.fftshift if you want to shift the zero-frequency term to the center of the array. -- Vincent Favre-Nicolin Université Joseph Fourier http://v.favrenicolin.free.fr ObjCryst & Fox : http://objcryst.sourceforge.net From mike.s.duffy at gmail.com Thu Jul 6 14:59:15 2006 From: mike.s.duffy at gmail.com (Mike Duffy) Date: Thu, 6 Jul 2006 14:59:15 -0400 Subject: [SciPy-user] FFT not producing correct result In-Reply-To: <200607062023.16790.vincefn@users.sourceforge.net> References: <529E0C005F46104BA9DB3CB93F397975326A6C@TOKYO.intra.cea.fr> <200607062023.16790.vincefn@users.sourceforge.net> Message-ID: <3a3da060607061159v487cffb6w4919f81905bac650@mail.gmail.com> On 7/6/06, Vincent Favre-Nicolin wrote:
>
> You should look at fftpack.fftshift if you want to shift the
> zero-frequency
> term to the center of the array.
>

Ok, thanks. That helps some. I used the fftshift and got the expected result (I'm guessing my swapaxis function only worked because the gaussian is a special case?). But, it seems that I don't need the ifftshift? I wish I understood this better. Essentially, the algorithm I have been trying to execute (unsuccessfully) is this:

f = some_wavefunction()
for i in xrange(N):
    g = fft(f)
    g = foo(g)
    f = ifft(g)
    f = bar(f)

Should it be written this way:

f = some_wavefunction()
for i in xrange(N):
    g = fftshift(fft(f))
    g = foo(g)
    f = ifftshift(ifft(g))  # is this ifftshift necessary??
    f = bar(f)

-- Michael S. Duffy University of Florida -------------- next part -------------- An HTML attachment was scrubbed... URL: From yannick.copin at laposte.net Thu Jul 6 17:11:10 2006 From: yannick.copin at laposte.net (Yannick Copin) Date: Thu, 06 Jul 2006 23:11:10 +0200 Subject: [SciPy-user] Flux conservative rebinning Message-ID: <44AD7C6E.7090606@laposte.net> Hi, I'm looking for a "flux-conservative" rebinning scheme, i.e. resampling an input vector while conserving the total "flux" (i.e. sum).
As for now, I'm aware of basically two techniques, none of them suitable:

1. boxcar-filtering and sub-sampling (e.g. http://article.gmane.org/gmane.comp.python.numeric.general/888)
2. interpolation (e.g. http://www.scipy.org/Cookbook/Rebinning)

For example (attachment), given

y = array([0,2,1,3], dtype='d')

with a total sum of 6.0, boxcar-filtering and sub-sampling (x2) would be:

yy = convolve(y, ones(2), mode='valid')[::2]

giving yy = [ 2. 4.] sum = 6.0. The problem is that I can reduce the size only by an integer factor (e.g. I cannot rebin y on 3 points), and furthermore it looks like a waste of CPU to compute a convolution on the full input vector and *then* sub-sample. Using recipe #3 of http://www.scipy.org/Cookbook/Rebinning allows "any dimension sizes" rebinning, but unfortunately does not conserve the flux:

yy = congrid(y, (3,), method='linear', centre=True)

gives yy = [ 0.33333337 1.5 2.66666698] sum = 4.5. Finally, what I would have liked is yy = [ 0.66666667 2. 3.33333333] sum = 6.0, which is yy = dot(mat, y) with

mat = array([[1.,1./3.,0.,0.],[0.,2./3.,2./3.,0.],[0.,0.,1./3.,1.]])

Any way to efficiently construct the 'mat' array? Cheers.

PS: BTW, method='neighbour' in congrid of example #3 of http://www.scipy.org/Cookbook/Rebinning does not work out of the box and raises:

NameError: name 'xi' is not defined

Is the code still incomplete? -- .~. Yannick COPIN (o:>* Doctus cum libro /V\ ---===<<<### NOT IN THE OFFICE ###>>>===--- // \\ Institut de physique nucleaire de Lyon (IN2P3 - France) /( )\ http://snovae.in2p3.fr/ycopin/ ^`~'^ -------------- next part -------------- A non-text attachment was scrubbed...
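The 'mat' array above can be built generically from bin-overlap lengths: treat each of the n input bins as a unit interval, lay m equal output bins over the same range, and let entry (j, i) be the overlap of output bin j with input bin i. A sketch (the function name and the plain double loop are mine; only numpy assumed):

```python
import numpy as np

def rebin_matrix(n, m):
    """(m x n) matrix M such that M @ y resamples y from n to m bins.
    M[j, i] is the length of overlap between output bin j (of width
    n/m in input-bin units) and input bin i."""
    edges = np.linspace(0.0, n, m + 1)   # output bin edges, input units
    M = np.zeros((m, n))
    for j in range(m):
        for i in range(n):
            # overlap of [edges[j], edges[j+1]] with [i, i+1]
            overlap = min(edges[j + 1], i + 1) - max(edges[j], i)
            M[j, i] = max(overlap, 0.0)
    return M

y = np.array([0.0, 2.0, 1.0, 3.0])
M = rebin_matrix(4, 3)
yy = M @ y
```

Each column of M sums to 1, so every input bin is fully redistributed and sum(M @ y) == sum(y) for any n and m; for n=4, m=3 this reproduces the 'mat' and the yy = [0.6667, 2.0, 3.3333] given above.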
Name: rebin.py Type: text/x-python Size: 5915 bytes Desc: not available URL: From hgk at et.uni-magdeburg.de Fri Jul 7 06:30:30 2006 From: hgk at et.uni-magdeburg.de (Hans Georg Krauthäuser) Date: Fri, 07 Jul 2006 12:30:30 +0200 Subject: [SciPy-user] optimize.leastsq crashs my interpreter Message-ID: Hi all, I've tried to use optimize.leastsq from a fresh binary install of numpy and scipy with python 2.4 on XP. A call to optimize.leastsq results in the crash of the interpreter without any traceback. XP tells me that the problem occurs inside _minpack.pyd. I can reproduce the error with the following script on two different computers

-------------------8<------------------------
import scipy
scipy.pkgload('optimize')

x=scipy.array([1., 2., 3.])
y=scipy.array([4., 5., 6.])

def res(p, y, x):
    err=y-peval(x,p)
    print p, y, x, err
    return err

def peval(x,p):
    return p[0]*x+p[1]

m_0=0.1
b_0=0.0

p0=scipy.array([m_0,b_0])
plsq=scipy.optimize.leastsq(res, p0, args=(y,x))

print plsq
--------------------->8---------------------------

If I comment out line 2 (pkgload) the script runs perfectly on an old python installation (2.3) with old scipy and numpy. It would be nice if someone could try to reproduce the error. Please tell me also if something is wrong with my script. Regards Hans Georg From nwagner at iam.uni-stuttgart.de Fri Jul 7 06:50:36 2006 From: nwagner at iam.uni-stuttgart.de (Nils Wagner) Date: Fri, 07 Jul 2006 12:50:36 +0200 Subject: [SciPy-user] optimize.leastsq crashs my interpreter In-Reply-To: References: Message-ID: <44AE3C7C.6020508@iam.uni-stuttgart.de> Hans Georg Krauthäuser wrote:
> Hi all,
>
> I've tried to use optimize.leastsq from a fresh binary install of numpy
> and scipy with python 2.4 on XP. A call to optimize.leastsq results in
> the crash of the interpreter without any traceback. XP tells me that the
> problem occurs inside _minpack.pyd.
> > I can reproduce the error with the following script on two different > computers > -------------------8<------------------------ > import scipy > scipy.pkgload('optimize') > > x=scipy.array([1., 2., 3.]) > y=scipy.array([4., 5., 6.]) > > def res(p, y, x): > err=y-peval(x,p) > print p, y, x, err > return err > > def peval(x,p): > return p[0]*x+p[1] > > m_0=0.1 > b_0=0.0 > > p0=scipy.array([m_0,b_0]) > plsq=scipy.optimize.leastsq(res, p0, args=(y,x)) > > print plsq > --------------------->8--------------------------- > > If I comment out line 2 (pkgload) the script runs perfectly on an old > python installation (2.3) with old scipy and numpy. > > It would be nice if someone could try to reproduce the error. > > Please tell me also if something is wrong with my script. > > Regards > Hans Georg > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > Your script works fine. I am on Linux. [ 0.1 0. ] [ 4. 5. 6.] [ 1. 2. 3.] [ 3.9 4.8 5.7] [ 0.1 0. ] [ 4. 5. 6.] [ 1. 2. 3.] [ 3.9 4.8 5.7] [ 0.1 0. ] [ 4. 5. 6.] [ 1. 2. 3.] [ 3.9 4.8 5.7] [ 0.1 0. ] [ 4. 5. 6.] [ 1. 2. 3.] [ 3.9 4.8 5.7] [ 1.00000000e-01 1.49011612e-08] [ 4. 5. 6.] [ 1. 2. 3.] [ 3.89999999 4.79999999 5.69999999] [ 0.99999992 3.00000018] [ 4. 5. 6.] [ 1. 2. 3.] [ -9.83561659e-08 -1.78918595e-08 6.25724468e-08] [ 0.99999993 3.00000018] [ 4. 5. 6.] [ 1. 2. 3.] [ -1.13257325e-07 -4.76941793e-08 1.78689668e-08] [ 0.99999992 3.00000022] [ 4. 5. 6.] [ 1. 2. 3.] [ -1.43059652e-07 -6.25953458e-08 1.78689596e-08] [ 1. 3.] [ 4. 5. 6.] [ 1. 2. 3.] [ -1.77635684e-15 -1.77635684e-15 -2.66453526e-15] [ 1.00000001 3. ] [ 4. 5. 6.] [ 1. 2. 3.] [ -1.49011630e-08 -2.98023242e-08 -4.47034862e-08] [ 1. 3.00000004] [ 4. 5. 6.] [ 1. 2. 3.] [ -4.47034854e-08 -4.47034854e-08 -4.47034862e-08] [ 1. 3.] [ 4. 5. 6.] [ 1. 2. 3.] [ 0. 0. 0.] 
(array([ 1., 3.]), 2) >>> scipy.__version__ '0.5.0.2051' Nils From hgk at et.uni-magdeburg.de Fri Jul 7 07:09:52 2006 From: hgk at et.uni-magdeburg.de (=?ISO-8859-1?Q?Hans_Georg_Krauth=E4user?=) Date: Fri, 07 Jul 2006 13:09:52 +0200 Subject: [SciPy-user] optimize.leastsq crashs my interpreter In-Reply-To: <44AE3C7C.6020508@iam.uni-stuttgart.de> References: <44AE3C7C.6020508@iam.uni-stuttgart.de> Message-ID: Nils Wagner schrieb: > Hans Georg Krauth?user wrote: >> Hi all, >> >> I've tried to use optimize.leastsq from a fresh binary install of numpy >> and scipy with python 2.4 on XP. A call to optimize.leastsq results in >> the crash of the interpreter without any traceback. XP tells me that the >> problem occurs inside _minpack.pyd. >> >> I can reproduce the error with the following script on two different >> computers >> -------------------8<------------------------ >> import scipy >> scipy.pkgload('optimize') >> >> x=scipy.array([1., 2., 3.]) >> y=scipy.array([4., 5., 6.]) >> >> def res(p, y, x): >> err=y-peval(x,p) >> print p, y, x, err >> return err >> >> def peval(x,p): >> return p[0]*x+p[1] >> >> m_0=0.1 >> b_0=0.0 >> >> p0=scipy.array([m_0,b_0]) >> plsq=scipy.optimize.leastsq(res, p0, args=(y,x)) >> >> print plsq >> --------------------->8--------------------------- >> >> If I comment out line 2 (pkgload) the script runs perfectly on an old >> python installation (2.3) with old scipy and numpy. >> >> It would be nice if someone could try to reproduce the error. >> >> Please tell me also if something is wrong with my script. >> >> Regards >> Hans Georg >> >> _______________________________________________ >> SciPy-user mailing list >> SciPy-user at scipy.org >> http://projects.scipy.org/mailman/listinfo/scipy-user >> > > > Your script works fine. I am on Linux. > > [ 0.1 0. ] [ 4. 5. 6.] [ 1. 2. 3.] [ 3.9 4.8 5.7] > [ 0.1 0. ] [ 4. 5. 6.] [ 1. 2. 3.] [ 3.9 4.8 5.7] > [ 0.1 0. ] [ 4. 5. 6.] [ 1. 2. 3.] [ 3.9 4.8 5.7] > [ 0.1 0. ] [ 4. 5. 6.] [ 1. 2. 3.] 
[ 3.9 4.8 5.7] > [ 1.00000000e-01 1.49011612e-08] [ 4. 5. 6.] [ 1. 2. 3.] [ > 3.89999999 4.79999999 5.69999999] > [ 0.99999992 3.00000018] [ 4. 5. 6.] [ 1. 2. 3.] [ -9.83561659e-08 > -1.78918595e-08 6.25724468e-08] > [ 0.99999993 3.00000018] [ 4. 5. 6.] [ 1. 2. 3.] [ -1.13257325e-07 > -4.76941793e-08 1.78689668e-08] > [ 0.99999992 3.00000022] [ 4. 5. 6.] [ 1. 2. 3.] [ -1.43059652e-07 > -6.25953458e-08 1.78689596e-08] > [ 1. 3.] [ 4. 5. 6.] [ 1. 2. 3.] [ -1.77635684e-15 > -1.77635684e-15 -2.66453526e-15] > [ 1.00000001 3. ] [ 4. 5. 6.] [ 1. 2. 3.] [ -1.49011630e-08 > -2.98023242e-08 -4.47034862e-08] > [ 1. 3.00000004] [ 4. 5. 6.] [ 1. 2. 3.] [ -4.47034854e-08 > -4.47034854e-08 -4.47034862e-08] > [ 1. 3.] [ 4. 5. 6.] [ 1. 2. 3.] [ 0. 0. 0.] > (array([ 1., 3.]), 2) > >>> scipy.__version__ > '0.5.0.2051' > > Nils Thanks Nils. So, it it a problem with the win-distribution or just with my installation? Someone out there, who can try it on a windows box? Regards Hans Georg From jelle.feringa at ezct.net Fri Jul 7 07:28:42 2006 From: jelle.feringa at ezct.net (Jelle Feringa / EZCT Architecture & Design Research) Date: Fri, 7 Jul 2006 13:28:42 +0200 Subject: [SciPy-user] optimize.leastsq crashs my interpreter In-Reply-To: Message-ID: <003301c6a1b8$80ca16e0$1001a8c0@JELLE> Someone out there, who can try it on a windows box? Sure Hans, I've tried your problem with the scipy/numpy versions compiled for the enthon 1.0.0b3 edition, raises the same problem you've described. I'm sorry to say so, but I cannot avoid but to get the feeling that right now scipy is in a pretty poor state on windows... Is it reasonable to assume scipy's win32 binaries are of lesser quality, compared to Linux? By the way, I ran the tests in the scipy.interpolate /test directory, which pass just fine, but interpolate.UnivariateSpline is crashing seriously however... From hgk at et.uni-magdeburg.de Fri Jul 7 08:14:02 2006 From: hgk at et.uni-magdeburg.de (Dr. 
Hans Georg Krauthaeuser) Date: Fri, 07 Jul 2006 14:14:02 +0200 Subject: [SciPy-user] optimize.leastsq crashs my interpreter In-Reply-To: <003301c6a1b8$80ca16e0$1001a8c0@JELLE> References: <003301c6a1b8$80ca16e0$1001a8c0@JELLE> Message-ID: Jelle Feringa / EZCT Architecture & Design Research wrote: > Someone out there, who can try it on a windows box? > > Sure Hans, > I've tried your problem with the scipy/numpy versions compiled for > the enthon 1.0.0b3 edition, raises the same problem you've described. > > I'm sorry to say so, but I cannot avoid but to get the feeling that > right now scipy is in a pretty poor state on windows... > Is it reasonable to assume scipy's win32 binaries are of lesser > quality, compared to Linux? > > By the way, I ran the tests in the scipy.interpolate /test > directory, which pass just fine, but interpolate.UnivariateSpline is > crashing seriously however... > > Thank you Jelle for testing. Regarding the state of scipy on windows: I assume most of the core developers are on linux. We simply can not assume that they buy (and run) a windows box just to provide binary distributions to us. Let's ask how WE can contribute: I noticed that no test case for leastsq is present at the moment. So it is MY task to provide it (I will do that later) . So next time the distributor (the person who prepares the binary package) will stumble over the error. Of course, I would very much appreciate if someone could provide a corrected binary distribution. Regards Hans Georg From schofield at ftw.at Fri Jul 7 08:15:25 2006 From: schofield at ftw.at (Ed Schofield) Date: Fri, 07 Jul 2006 14:15:25 +0200 Subject: [SciPy-user] optimize.leastsq crashs my interpreter In-Reply-To: <003301c6a1b8$80ca16e0$1001a8c0@JELLE> References: <003301c6a1b8$80ca16e0$1001a8c0@JELLE> Message-ID: <44AE505D.8050905@ftw.at> Jelle Feringa / EZCT Architecture & Design Research wrote: > Someone out there, who can try it on a windows box? 
> > Sure Hans,
> I've tried your problem with the scipy/numpy versions compiled for
> the enthon 1.0.0b3 edition, raises the same problem you've described.
>
> I'm sorry to say so, but I cannot avoid but to get the feeling that
> right now scipy is in a pretty poor state on windows...
> Is it reasonable to assume scipy's win32 binaries are of lesser
> quality, compared to Linux?

Well, I don't know of any SciPy developers who use win32, so it does get less testing. The compiler toolchains available for win32 are probably also of a poorer quality than for Linux, Solaris etc. But the binaries are built from a near-identical code base on all platforms, and indeed your code works fine on the latest 'official' win32 release of scipy (0.4.9). I suggest you file this bug against the Enthon distribution. Keep in mind that it's still marked 'beta' ... -- Ed From jelle.feringa at ezct.net Fri Jul 7 08:42:48 2006 From: jelle.feringa at ezct.net (Jelle Feringa / EZCT Architecture & Design Research) Date: Fri, 7 Jul 2006 14:42:48 +0200 Subject: [SciPy-user] optimize.leastsq crashs my interpreter In-Reply-To: Message-ID: <003a01c6a1c2$dab59b20$1001a8c0@JELLE> Thank you Jelle for testing. You're welcome, thanks for sharing your code. It's rather useful to read these small snippets, it's the best documentation one can find ;') Regarding the state of scipy on windows: I assume most of the core developers are on linux. So do I. We simply can not assume that they buy (and run) a windows box just to provide binary distributions to us. Well linux_hardware == win32_hardware? I think there's this specific license that would make linux developers reluctant to run win32 :) Then again, the optimizing compiler that has been used to compile python is free //though I heard rumours that now it's not anymore?! I'm speaking about the compiler setup found here: http://www.vrplumber.com/programming/mstoolkit/ which is a fairly ok setup... not of any use to Fortran code that is...
Let's ask how WE can contribute: I noticed that no test case for leastsq is present at the moment. So it is MY task to provide it (I will do that later) . So next time the distributor (the person who prepares the binary package) will stumble over the error. True! Completely agreed Hans! From gruben at bigpond.net.au Fri Jul 7 09:28:10 2006 From: gruben at bigpond.net.au (Gary Ruben) Date: Fri, 07 Jul 2006 23:28:10 +1000 Subject: [SciPy-user] enthon beta 1.0.0b3 In-Reply-To: <44AD3133.8020006@enthought.com> References: <000101c6a0f6$5885f1d0$1001a8c0@JELLE> <44AD3133.8020006@enthought.com> Message-ID: <44AE616A.10509@bigpond.net.au> Actually I tried enthon 1.0.0b3 today and first got the _ctype error. Then I tried with matplotlib 0.87.3 and got the traceback, namely ImportError: cannot import name Int8 Gary R. Bryce Hendrix wrote: > Can you offer a hint as to what I should search for? Google isn't being > all that helpful this morning. > > When I try to import matplotlib.pylab I get an error about _ctype not > being in the dll, which means matplotlib was built incorrectly (well, it > means other things, but its best I don't get started on that rant...). > In the past when I've had problems building matplotlib, I just install > the binary from their website and use it instead. Can you try that when > you get a chance & let me know if it solves your problem? > > Thanks, > Bryce > > Jelle Feringa / EZCT Architecture & Design Research wrote: >> >> >> >> When import pylab using the enthon 1.0.0b3 scipy/numpy package, the >> following error occurs: >> >> ImportError: cannot import name Int8 >> >> >> >> I think this error already showed up on this list? 
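Back on the leastsq thread: the regression test Hans volunteers to add could be only a few lines. A sketch against current scipy naming (the data follow his crash script; the expected parameters match the fit Nils posted, slope 1 and intercept 3):

```python
import numpy as np
from scipy.optimize import leastsq

def test_leastsq_linear_fit():
    # Same setup as the crash script: fit y = m*x + b to exact data.
    x = np.array([1.0, 2.0, 3.0])
    y = np.array([4.0, 5.0, 6.0])   # exactly y = 1*x + 3

    def residuals(p, y, x):
        return y - (p[0] * x + p[1])

    p, ier = leastsq(residuals, np.array([0.1, 0.0]), args=(y, x))
    assert ier in (1, 2, 3, 4), "leastsq did not converge"
    assert np.allclose(p, [1.0, 3.0])

test_leastsq_linear_fit()
```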
From hetland at tamu.edu Fri Jul 7 09:38:42 2006 From: hetland at tamu.edu (Rob Hetland) Date: Fri, 7 Jul 2006 09:38:42 -0400 Subject: [SciPy-user] Triangulation of L-shaped domains In-Reply-To: <44A5521C.6060206@gmail.com> References: <44A4E961.8000208@iam.uni-stuttgart.de> <87wtaysrof.fsf@peds-pc311.bsd.uchicago.edu> <44A52CA8.6060303@iam.uni-stuttgart.de> <44A5521C.6060206@gmail.com> Message-ID: <2239236B-934B-435C-A715-DE66EC6B74FA@tamu.edu> I have a feeling that there *might* be a way to tease out which triangles are inside the set of points you give, and which are outside. However, I have to say, that I thought about this for a while, and then gave up. More generally: Is there a good utility for finding points inside an arbitrary polygon? I, for one, could really use such a utility. -Rob On Jun 30, 2006, at 12:32 PM, Robert Kern wrote: > Nils Wagner wrote: >> John Hunter wrote: >>>>>>>> "Nils" == Nils Wagner writes: >>>>>>>> >>> Nils> Hi all, I have installed delaunay from the sandbox. >>> How can >>> Nils> I triangulate L-shaped domains ? >>> >>> Nils> My first try is somewhat unsatisfactory (delaunay.png) ? >>> Nils> How can I remove the unwanted triangles down to the >>> right ? >>> >>> delaunay assumes a convex shape, which your domain is not. I think >>> you'll need a more sophisticated mesh algorithm. >> >> A short note in the docstring would be very helpful. >> **delaunay assumes a convex shape** > > Actually, it doesn't "assume" any shape at all. It computes a Delaunay > triangulation of a set of points irrespective of any boundary edges > that you > might have wanted. That's what an (unqualified) Delaunay > triangulation is. > > Docstrings are not the place for tutorials on basic concepts. > Google works quite > well. > > -- > Robert Kern > > "I have come to believe that the whole world is an enigma, a > harmless enigma > that is made terrible by our own mad attempt to interpret it as > though it had > an underlying truth." 
> -- Umberto Eco > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user ---- Rob Hetland, Assistant Professor Dept. of Oceanography, Texas A&M University http://pong.tamue.edu/~rob phone: 979-458-0096, fax: 979-845-6331 -------------- next part -------------- An HTML attachment was scrubbed... URL: From jdhunter at ace.bsd.uchicago.edu Fri Jul 7 09:37:29 2006 From: jdhunter at ace.bsd.uchicago.edu (John Hunter) Date: Fri, 07 Jul 2006 08:37:29 -0500 Subject: [SciPy-user] Triangulation of L-shaped domains In-Reply-To: <2239236B-934B-435C-A715-DE66EC6B74FA@tamu.edu> (Rob Hetland's message of "Fri, 7 Jul 2006 09:38:42 -0400") References: <44A4E961.8000208@iam.uni-stuttgart.de> <87wtaysrof.fsf@peds-pc311.bsd.uchicago.edu> <44A52CA8.6060303@iam.uni-stuttgart.de> <44A5521C.6060206@gmail.com> <2239236B-934B-435C-A715-DE66EC6B74FA@tamu.edu> Message-ID: <87odw1sfhi.fsf@peds-pc311.bsd.uchicago.edu> >>>>> "Rob" == Rob Hetland writes: Rob> I have a feeling that there *might* be a way to tease out Rob> which triangles are inside the set of points you give, and Rob> which are outside. However, I have to say, that I thought Rob> about this for a while, and then gave up. Rob> More generally: Is there a good utility for finding points Rob> inside an arbitrary polygon? I, for one, could really use Rob> such a utility. Here is something I wrote for matplotlib -- someone more clever than I might be able to do it w/o the loop over the vertices by making better use of numpy .... 
import matplotlib.numerix as nx

def inside_poly(points, verts):
    """
    points is a sequence of x,y points
    verts is a sequence of x,y vertices of a polygon

    return value is a sequence of indices into points for the
    points that are inside the polygon
    """
    xys = nx.asarray(points)
    Nxy = xys.shape[0]
    Nv = len(verts)

    def angle(x1, y1, x2, y2):
        twopi = 2*nx.pi
        theta1 = nx.arctan2(y1, x1)
        theta2 = nx.arctan2(y2, x2)
        dtheta = theta2-theta1
        d = dtheta%twopi
        d = nx.where(nx.less(d, 0), twopi + d, d)
        return nx.where(nx.greater(d, nx.pi), d-twopi, d)

    angles = nx.zeros((Nxy,), nx.Float)
    x1 = nx.zeros((Nxy,), nx.Float)
    y1 = nx.zeros((Nxy,), nx.Float)
    x2 = nx.zeros((Nxy,), nx.Float)
    y2 = nx.zeros((Nxy,), nx.Float)
    x = xys[:,0]
    y = xys[:,1]

    for i in range(Nv):
        thisx, thisy = verts[i]
        x1 = thisx - x
        y1 = thisy - y
        thisx, thisy = verts[(i+1)%Nv]
        x2 = thisx - x
        y2 = thisy - y
        a = angle(x1, y1, x2, y2)
        angles += a

    return nx.nonzero(nx.greater_equal(nx.absolute(angles), nx.pi))

From nwagner at iam.uni-stuttgart.de Fri Jul 7 09:47:19 2006 From: nwagner at iam.uni-stuttgart.de (Nils Wagner) Date: Fri, 07 Jul 2006 15:47:19 +0200 Subject: [SciPy-user] Triangulation of L-shaped domains In-Reply-To: <2239236B-934B-435C-A715-DE66EC6B74FA@tamu.edu> References: <44A4E961.8000208@iam.uni-stuttgart.de> <87wtaysrof.fsf@peds-pc311.bsd.uchicago.edu> <44A52CA8.6060303@iam.uni-stuttgart.de> <44A5521C.6060206@gmail.com> <2239236B-934B-435C-A715-DE66EC6B74FA@tamu.edu> Message-ID: <44AE65E7.3090502@iam.uni-stuttgart.de> Rob Hetland wrote: > > I have a feeling that there *might* be a way to tease out which > triangles are inside the set of points you give, and which are > outside. However, I have to say, that I thought about this for a > while, and then gave up. > > More generally: Is there a good utility for finding points inside an > arbitrary polygon? I, for one, could really use such a utility.
> http://en.wikipedia.org/wiki/Point_in_polygon http://www.ariel.com.au/a/python-point-int-poly.html BTW Matlab has such a function http://www.mathworks.com/access/helpdesk/help/techdoc/ref/inpolygon.html It would be a nice enhancement for numpy/scipy. Nils > -Rob > > On Jun 30, 2006, at 12:32 PM, Robert Kern wrote: > >> Nils Wagner wrote: >>> John Hunter wrote: >>>>>>>>> "Nils" == Nils Wagner >>>>>>>> > writes: >>>>>>>>> >>>> Nils> Hi all, I have installed delaunay from the sandbox. How can >>>> Nils> I triangulate L-shaped domains ? >>>> >>>> Nils> My first try is somewhat unsatisfactory (delaunay.png) ? >>>> Nils> How can I remove the unwanted triangles down to the right ? >>>> >>>> delaunay assumes a convex shape, which your domain is not. I think >>>> you'll need a more sophisticated mesh algorithm. >>> >>> A short note in the docstring would be very helpful. >>> **delaunay assumes a convex shape** >> >> Actually, it doesn't "assume" any shape at all. It computes a Delaunay >> triangulation of a set of points irrespective of any boundary edges >> that you >> might have wanted. That's what an (unqualified) Delaunay >> triangulation is. >> >> Docstrings are not the place for tutorials on basic concepts. Google >> works quite >> well. >> >> -- >> Robert Kern >> >> "I have come to believe that the whole world is an enigma, a harmless >> enigma >> that is made terrible by our own mad attempt to interpret it as >> though it had >> an underlying truth." >> -- Umberto Eco >> _______________________________________________ >> SciPy-user mailing list >> SciPy-user at scipy.org >> http://projects.scipy.org/mailman/listinfo/scipy-user > > ---- > Rob Hetland, Assistant Professor > Dept. 
of Oceanography, Texas A&M University > http://pong.tamue.edu/~rob > phone: 979-458-0096, fax: 979-845-6331 > > > ------------------------------------------------------------------------ > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > From wbaxter at gmail.com Fri Jul 7 09:52:29 2006 From: wbaxter at gmail.com (Bill Baxter) Date: Fri, 7 Jul 2006 22:52:29 +0900 Subject: [SciPy-user] Triangulation of L-shaped domains In-Reply-To: <2239236B-934B-435C-A715-DE66EC6B74FA@tamu.edu> References: <44A4E961.8000208@iam.uni-stuttgart.de> <87wtaysrof.fsf@peds-pc311.bsd.uchicago.edu> <44A52CA8.6060303@iam.uni-stuttgart.de> <44A5521C.6060206@gmail.com> <2239236B-934B-435C-A715-DE66EC6B74FA@tamu.edu> Message-ID: On 7/7/06, Rob Hetland wrote: > > > I have a feeling that there *might* be a way to tease out which triangles > are inside the set of points you give, and which are outside. However, I > have to say, that I thought about this for a while, and then gave up. > > More generally: Is there a good utility for finding points inside an > arbitrary polygon? I, for one, could really use such a utility. > The easiest thing is to do a ray-cast to infinity and count how many edges of the polygon you intersect. If you cross an odd number then the point was inside, if even then it was outside. So to implement it basically you only need a 2D ray-edge intersection routine, and the simplest thing is to just check the ray against every edge of the polygon one by one. If you have a lot of edges you can speed up the search in a variety of ways, like by using a spatial data structure for your edges, e.g. a quadtree, k-d tree, bsp tree, etc. --bb -------------- next part -------------- An HTML attachment was scrubbed...
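Bill's even-odd ray cast is compact enough to sketch in pure Python. The following is an illustrative implementation written for this digest (the function name and conventions are invented, and points lying exactly on an edge may land on either side):

```python
def point_in_poly(x, y, verts):
    """Even-odd ray cast: shoot a ray from (x, y) toward +infinity in x
    and count how many polygon edges it crosses; odd means inside."""
    inside = False
    n = len(verts)
    for i in range(n):
        x1, y1 = verts[i]
        x2, y2 = verts[(i + 1) % n]
        # only edges that straddle the horizontal line through y can be hit
        if (y1 > y) != (y2 > y):
            # x coordinate where the edge crosses that horizontal line
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x < x_cross:
                inside = not inside
    return inside

# works on non-convex shapes such as the L-shaped domain in this thread
L = [(0, 0), (2, 0), (2, 1), (1, 1), (1, 2), (0, 2)]
print(point_in_poly(0.5, 1.5, L))   # True  (inside the L)
print(point_in_poly(1.5, 1.5, L))   # False (in the notch)
```

The straddle test `(y1 > y) != (y2 > y)` also guards the division, since it only passes when `y1 != y2`. Handling degenerate hits robustly is what the simulation-of-simplicity trick mentioned elsewhere in the thread addresses.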
URL: From hgk at et.uni-magdeburg.de Fri Jul 7 09:53:52 2006 From: hgk at et.uni-magdeburg.de (=?ISO-8859-1?Q?Hans_Georg_Krauth=E4user?=) Date: Fri, 07 Jul 2006 15:53:52 +0200 Subject: [SciPy-user] optimize.leastsq crashs my interpreter In-Reply-To: <44AE505D.8050905@ftw.at> References: <003301c6a1b8$80ca16e0$1001a8c0@JELLE> <44AE505D.8050905@ftw.at> Message-ID: Ed Schofield schrieb: > Jelle Feringa / EZCT Architecture & Design Research wrote: >> Someone out there, who can try it on a windows box? >> >> Sure Hans, >> I've tried your problem with the scipy/numpy versions compiled for >> the enthon 1.0.0b3 edition, raises the same problem you've described. >> >> I'm sorry to say so, but I cannot avoid but to get the feeling that >> right now scipy is in a pretty poor state on windows... >> Is it reasonable to assume scipy's win32 binaries are of lesser >> quality, compared to Linux? >> > > Well, I don't know of any SciPy developers who use win32, so it does get > less testing. The compiler toolchains available for win32 are probably > also of a poorer quality than for Linux, Solaris etc. But the binaries > are built from a near-identical code base on all platforms, and indeed > your code works fine on the latest 'official' win32 release of scipy > (0.4.9). Did I get that right? My code works for you on XP with scipy 0.4.9? I *have* 0.4.9 (from official binary distribution) and the interpreter crashes. I'm confused... Hans Georg From jelle.feringa at ezct.net Fri Jul 7 09:58:02 2006 From: jelle.feringa at ezct.net (Jelle Feringa / EZCT Architecture & Design Research) Date: Fri, 7 Jul 2006 15:58:02 +0200 Subject: [SciPy-user] enthon beta 1.0.0b3 In-Reply-To: <44AE616A.10509@bigpond.net.au> Message-ID: <005f01c6a1cd$5eb9a470$1001a8c0@JELLE> Hi Gary, Having the same issue here... 
-----Original Message----- From: scipy-user-bounces at scipy.org [mailto:scipy-user-bounces at scipy.org] On Behalf Of Gary Ruben Sent: Friday, July 07, 2006 3:28 PM To: SciPy Users List Subject: Re: [SciPy-user] enthon beta 1.0.0b3 Actually I tried enthon 1.0.0b3 today and first got the _ctype error. Then I tried with matplotlib 0.87.3 and got the traceback, namely ImportError: cannot import name Int8 Gary R. From jelle.feringa at ezct.net Fri Jul 7 10:00:26 2006 From: jelle.feringa at ezct.net (Jelle Feringa / EZCT Architecture & Design Research) Date: Fri, 7 Jul 2006 16:00:26 +0200 Subject: [SciPy-user] optimize.leastsq crashs my interpreter In-Reply-To: Message-ID: <006001c6a1cd$b334afe0$1001a8c0@JELLE> What CPU are you running Hans? I had scipy 0.4.9 run fine on my laptop which is a Pentium M, and crashed at my AMD Athlon XP -------- Did I get that right? My code works for you on XP with scipy 0.4.9? I *have* 0.4.9 (from official binary distribution) and the interpreter crashes. I'm confused... Hans Georg From bhendrix at enthought.com Fri Jul 7 10:13:10 2006 From: bhendrix at enthought.com (bryce hendrix) Date: Fri, 07 Jul 2006 09:13:10 -0500 Subject: [SciPy-user] enthon beta 1.0.0b3 In-Reply-To: <44AE616A.10509@bigpond.net.au> References: <000101c6a0f6$5885f1d0$1001a8c0@JELLE> <44AD3133.8020006@enthought.com> <44AE616A.10509@bigpond.net.au> Message-ID: <44AE6BF6.40103@enthought.com> Robert Kern told me there was a checkin in matplotlib's svn (rev 2478) which addresses the problem you are seeing. We'll be releasing a new build of Enthon with the necessary matplotlib revision next week, but if you can't wait, you can build matplotlib from its svn repository. Bryce Gary Ruben wrote: > Actually I tried enthon 1.0.0b3 today and first got the _ctype error. > Then I tried with matplotlib 0.87.3 and got the traceback, namely > ImportError: cannot import name Int8 > > Gary R. > > Bryce Hendrix wrote: > >> Can you offer a hint as to what I should search for? 
Google isn't being >> all that helpful this morning. >> >> When I try to import matplotlib.pylab I get an error about _ctype not >> being in the dll, which means matplotlib was built incorrectly (well, it >> means other things, but its best I don't get started on that rant...). >> In the past when I've had problems building matplotlib, I just install >> the binary from their website and use it instead. Can you try that when >> you get a chance & let me know if it solves your problem? >> >> Thanks, >> Bryce >> >> Jelle Feringa / EZCT Architecture & Design Research wrote: >> >>> >>> >>> When import pylab using the enthon 1.0.0b3 scipy/numpy package, the >>> following error occurs: >>> >>> ImportError: cannot import name Int8 >>> >>> >>> >>> I think this error already showed up on this list? >>> > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From jdhunter at ace.bsd.uchicago.edu Fri Jul 7 10:08:55 2006 From: jdhunter at ace.bsd.uchicago.edu (John Hunter) Date: Fri, 07 Jul 2006 09:08:55 -0500 Subject: [SciPy-user] enthon beta 1.0.0b3 In-Reply-To: <44AE6BF6.40103@enthought.com> (bryce hendrix's message of "Fri, 07 Jul 2006 09:13:10 -0500") References: <000101c6a0f6$5885f1d0$1001a8c0@JELLE> <44AD3133.8020006@enthought.com> <44AE616A.10509@bigpond.net.au> <44AE6BF6.40103@enthought.com> Message-ID: <87zmfl33t4.fsf@peds-pc311.bsd.uchicago.edu> >>>>> "bryce" == bryce hendrix writes: bryce> Robert Kern told me there was a checkin in matplotlib's svn bryce> (rev 2478) which addresses the problem you are bryce> seeing. We'll be releasing a new build of Enthon with the bryce> necessary matplotlib revision next week, but if you can't bryce> wait, you can build matplotlib from its svn repository. 
Hey Bryce, If you'd like to stick with official releases for enthon, we can probably get out a matplotlib0.87.4 bugfix release for you. JDH From jelle.feringa at ezct.net Fri Jul 7 10:22:14 2006 From: jelle.feringa at ezct.net (Jelle Feringa / EZCT Architecture & Design Research) Date: Fri, 7 Jul 2006 16:22:14 +0200 Subject: [SciPy-user] enthon beta 1.0.0b3 In-Reply-To: <44AE6BF6.40103@enthought.com> Message-ID: <006a01c6a1d0$c019e5b0$1001a8c0@JELLE> Thanks a lot Bryce, will do. Have you got an idea on how scipy -0.4.9- runs more stable on a intel than a amd processor? Have other users been confronted with this problem as well? -jelle -------------- next part -------------- An HTML attachment was scrubbed... URL: From bhendrix at enthought.com Fri Jul 7 10:30:46 2006 From: bhendrix at enthought.com (bryce hendrix) Date: Fri, 07 Jul 2006 09:30:46 -0500 Subject: [SciPy-user] enthon beta 1.0.0b3 In-Reply-To: <87zmfl33t4.fsf@peds-pc311.bsd.uchicago.edu> References: <000101c6a0f6$5885f1d0$1001a8c0@JELLE> <44AD3133.8020006@enthought.com> <44AE616A.10509@bigpond.net.au> <44AE6BF6.40103@enthought.com> <87zmfl33t4.fsf@peds-pc311.bsd.uchicago.edu> Message-ID: <44AE7016.4010707@enthought.com> John Hunter wrote: > Hey Bryce, > > If you'd like to stick with official releases for enthon, we can > probably get out a matplotlib0.87.4 bugfix release for you. > > JDH > John, That would be great, I was going to drop a line asking about this very topic this morning. 
Bryce From gruben at bigpond.net.au Fri Jul 7 10:37:44 2006 From: gruben at bigpond.net.au (Gary Ruben) Date: Sat, 08 Jul 2006 00:37:44 +1000 Subject: [SciPy-user] enthon beta 1.0.0b3 In-Reply-To: <87zmfl33t4.fsf@peds-pc311.bsd.uchicago.edu> References: <000101c6a0f6$5885f1d0$1001a8c0@JELLE> <44AD3133.8020006@enthought.com> <44AE616A.10509@bigpond.net.au> <44AE6BF6.40103@enthought.com> <87zmfl33t4.fsf@peds-pc311.bsd.uchicago.edu> Message-ID: <44AE71B8.8060906@bigpond.net.au> Hi John and Bryce, Do you think this would be a good time to go with the numerix change to numpy in matplotlibrc? A couple of unrelated things for Bryce (in case he reads this). I guess it was noticed that the Enthon menu documentation link is broken; at least it was on my test install. Finally, I couldn't think of a good reason for the mayavi2 script not to be named mayavi2.pyw Gary R. John Hunter wrote: >>>>>> "bryce" == bryce hendrix writes: > > bryce> Robert Kern told me there was a checkin in matplotlib's svn > bryce> (rev 2478) which addresses the problem you are > bryce> seeing. We'll be releasing a new build of Enthon with the > bryce> necessary matplotlib revision next week, but if you can't > bryce> wait, you can build matplotlib from its svn repository. > > Hey Bryce, > > If you'd like to stick with official releases for enthon, we can > probably get out a matplotlib0.87.4 bugfix release for you. > > JDH From bhendrix at enthought.com Fri Jul 7 10:39:06 2006 From: bhendrix at enthought.com (bryce hendrix) Date: Fri, 07 Jul 2006 09:39:06 -0500 Subject: [SciPy-user] enthon beta 1.0.0b3 In-Reply-To: <006a01c6a1d0$c019e5b0$1001a8c0@JELLE> References: <006a01c6a1d0$c019e5b0$1001a8c0@JELLE> Message-ID: <44AE720A.8030001@enthought.com> Jelle Feringa / EZCT Architecture & Design Research wrote: > > Thanks a lot Bryce, will do. > > Have you got an idea on how scipy -0.4.9- runs more stable on a intel than a amd processor? > Have other users been confronted with this problem as well? 
> > -jelle > > > No idea. I honestly don't use scipy that often. I do know ndimage has had problems with x86_64 :) I'd be interested in hearing peoples experience with Win64, as thats the one platform we have done zero testing on. Bryce -------------- next part -------------- An HTML attachment was scrubbed... URL: From schofield at ftw.at Fri Jul 7 11:01:46 2006 From: schofield at ftw.at (Ed Schofield) Date: Fri, 07 Jul 2006 17:01:46 +0200 Subject: [SciPy-user] optimize.leastsq crashs my interpreter In-Reply-To: References: <003301c6a1b8$80ca16e0$1001a8c0@JELLE> <44AE505D.8050905@ftw.at> Message-ID: <44AE775A.30004@ftw.at> Hans Georg Krauth?user wrote: > Ed Schofield schrieb: > >> Jelle Feringa / EZCT Architecture & Design Research wrote: >> >>> Someone out there, who can try it on a windows box? >>> >>> Sure Hans, >>> I've tried your problem with the scipy/numpy versions compiled for >>> the enthon 1.0.0b3 edition, raises the same problem you've described. >>> >>> I'm sorry to say so, but I cannot avoid but to get the feeling that >>> right now scipy is in a pretty poor state on windows... >>> Is it reasonable to assume scipy's win32 binaries are of lesser >>> quality, compared to Linux? >>> >>> >> Well, I don't know of any SciPy developers who use win32, so it does get >> less testing. The compiler toolchains available for win32 are probably >> also of a poorer quality than for Linux, Solaris etc. But the binaries >> are built from a near-identical code base on all platforms, and indeed >> your code works fine on the latest 'official' win32 release of scipy >> (0.4.9). >> > > Did I get that right? My code works for you on XP with scipy 0.4.9? > Yes, it works fine on my Pentium 4 with XP SP2, Python 2.4, NumPy 0.9.8, SciPy 0.4.9. What's your setup? > I *have* 0.4.9 (from official binary distribution) and the interpreter > crashes. > > I'm confused... > Hmmm ... it could be an ATLAS problem. What's your processor? 
I built the SciPy 0.4.9 binaries against Pearu's ATLAS binaries for Pentium 2, thinking that this would give maximum compatibility ... Or perhaps it's something else. Could someone with this problem please post a backtrace? -- Ed From stefan at sun.ac.za Fri Jul 7 11:02:04 2006 From: stefan at sun.ac.za (Stefan van der Walt) Date: Fri, 7 Jul 2006 17:02:04 +0200 Subject: [SciPy-user] Triangulation of L-shaped domains In-Reply-To: References: <44A4E961.8000208@iam.uni-stuttgart.de> <87wtaysrof.fsf@peds-pc311.bsd.uchicago.edu> <44A52CA8.6060303@iam.uni-stuttgart.de> <44A5521C.6060206@gmail.com> <2239236B-934B-435C-A715-DE66EC6B74FA@tamu.edu> Message-ID: <20060707150204.GB32102@mentat.za.net> On Fri, Jul 07, 2006 at 10:52:29PM +0900, Bill Baxter wrote: > > On 7/7/06, Rob Hetland wrote: > > > I have a feeling that there *might* be a way to tease out which triangles > are inside the set of points you give, and which are outside. However, I > have to say, that I thought about this for a while, and then gave up. > > More generally: Is there a good utility for finding points inside an > arbitrary polygon? I, for one, could really use such a utility. > > > The easiest thing is to do a ray-cast to infinity and count how many edges of > the polygon you intersect. If you cross an odd number then the point was > inside, if even then it was outside. So to implement it basically you only > need a 2D ray-edge intersection routine, and the simplest thing is to just > check the ray against every edge of the polygon one by one. If you have a lot > of edges you can speed up the search in a variety of ways, like by using a > spatial data strucure for your edges, e.g. a quadtree, k-d tree, bsp tree, etc. A very simple implementation (not heavily tested) of this method is attached. Again, unfortunately, not vectorised. Cheers St?fan -------------- next part -------------- A non-text attachment was scrubbed... 
Name: poly.py Type: text/x-python Size: 2116 bytes Desc: not available URL: From stefan at sun.ac.za Fri Jul 7 11:31:51 2006 From: stefan at sun.ac.za (Stefan van der Walt) Date: Fri, 7 Jul 2006 17:31:51 +0200 Subject: [SciPy-user] enthon beta 1.0.0b3 In-Reply-To: <44AE720A.8030001@enthought.com> References: <006a01c6a1d0$c019e5b0$1001a8c0@JELLE> <44AE720A.8030001@enthought.com> Message-ID: <20060707153151.GA32666@mentat.za.net> On Fri, Jul 07, 2006 at 09:39:06AM -0500, bryce hendrix wrote: > Jelle Feringa / EZCT Architecture & Design Research wrote: > > > Thanks a lot Bryce, will do. > > Have you got an idea on how scipy -0.4.9- runs more stable on a intel than a amd processor? > > Have other users been confronted with this problem as well? > > > > -jelle > > > > No idea. I honestly don't use scipy that often. I do know ndimage has had > problems with x86_64 :) I'd be interested in hearing peoples experience with > Win64, as thats the one platform we have done zero testing on. Everyone keeps talking about problems with ndimage and 64-bit platforms. Can anyone provide specific examples or failures? I just ran some of my code which uses a lot of ndimage functionality on a 64-bit machine without any problems (after re-enabling ndimage in __init__.py). When I run the test suite, I see: Ran 397 tests in 0.574s FAILED (errors=3) and all of these are in morphology.py (test_distance_*). Furthermore, these don't even look difficult to fix: raise RuntimeError, 'indices must of Int32 type' Would anyone object if I enabled ndimage for 64-bit machines? We could then file and handle tickets as necessary. - St?fan From nmarais at sun.ac.za Fri Jul 7 11:48:09 2006 From: nmarais at sun.ac.za (Neilen Marais) Date: Fri, 07 Jul 2006 17:48:09 +0200 Subject: [SciPy-user] Sparse matrix usage, documentation. References: <4465C4F4.50905@ftw.at> Message-ID: Hi Ed Sorry for the late reply, I seem to have missed the activity in this thread somehow. 
On Sat, 13 May 2006 13:37:24 +0200, Ed Schofield wrote: >> > I've just added some basic documentation to SVN, accessible from the > prompt with help(sparse). I'd be very grateful if you could contribute > any more examples. Do you think it should take the form of a wiki page, > or as an html or pdf file in the source tree? Thanks, that helped quite a bit. I'd like to contribute some documentation. I'd prefer to use the wiki, since it's less effort for me, but I'm willing to go with whatever you start. Just let me know where to look. Regards Neilen -- you know its kind of tragic we live in the new world but we've lost the magic -- Battery 9 (www.battery9.co.za) From chiaracaronna at hotmail.com Fri Jul 7 12:13:07 2006 From: chiaracaronna at hotmail.com (Chiara Caronna) Date: Fri, 07 Jul 2006 16:13:07 +0000 Subject: [SciPy-user] estimating errors with optimize.leastsq Message-ID: Hello everybody, I wrote a fitting program using optimize.leastsq, and I have some questions: 1) Is it possible to do a weighted fit? (I mean using also the errorbars on the data) 2) Is there any way to get the estimated errors on the fitting parameter? Maybe optimize.leastsq is not the right choice? Does anyone has some good hints? Thank you! From bhendrix at enthought.com Fri Jul 7 12:21:58 2006 From: bhendrix at enthought.com (Bryce Hendrix) Date: Fri, 07 Jul 2006 11:21:58 -0500 Subject: [SciPy-user] enthon beta 1.0.0b3 In-Reply-To: <44AE71B8.8060906@bigpond.net.au> References: <000101c6a0f6$5885f1d0$1001a8c0@JELLE> <44AD3133.8020006@enthought.com> <44AE616A.10509@bigpond.net.au> <44AE6BF6.40103@enthought.com> <87zmfl33t4.fsf@peds-pc311.bsd.uchicago.edu> <44AE71B8.8060906@bigpond.net.au> Message-ID: <44AE8A26.3040902@enthought.com> Gary, I tried building matplotlib with numpy yesterday, but ran into a link error. I'll forward the error on to John. I'd certainly like to switch over to numpy as it gets us a little closer to not having to support 3 numeric libraries. 
Thanks for the tip about the docs, I'm working on updating them right now. Bryce Gary Ruben wrote: > Hi John and Bryce, > Do you think this would be a good time to go with the numerix change to > numpy in matplotlibrc? > > A couple of unrelated things for Bryce (in case he reads this). I guess > it was noticed that the Enthon menu documentation link is broken; at > least it was on my test install. Finally, I couldn't think of a good > reason for the mayavi2 script not to be named mayavi2.pyw > > Gary R. > From oliphant.travis at ieee.org Fri Jul 7 13:54:04 2006 From: oliphant.travis at ieee.org (Travis Oliphant) Date: Fri, 07 Jul 2006 11:54:04 -0600 Subject: [SciPy-user] enthon beta 1.0.0b3 In-Reply-To: <20060707153151.GA32666@mentat.za.net> References: <006a01c6a1d0$c019e5b0$1001a8c0@JELLE> <44AE720A.8030001@enthought.com> <20060707153151.GA32666@mentat.za.net> Message-ID: <44AE9FBC.5030606@ieee.org> Stefan van der Walt wrote: > On Fri, Jul 07, 2006 at 09:39:06AM -0500, bryce hendrix wrote: > >> Jelle Feringa / EZCT Architecture & Design Research wrote: >> >> >> Thanks a lot Bryce, will do. >> >> Have you got an idea on how scipy -0.4.9- runs more stable on a intel than a amd processor? >> >> Have other users been confronted with this problem as well? >> >> >> >> -jelle >> >> >> >> No idea. I honestly don't use scipy that often. I do know ndimage has had >> problems with x86_64 :) I'd be interested in hearing peoples experience with >> Win64, as thats the one platform we have done zero testing on. >> > > > and all of these are in morphology.py (test_distance_*). Furthermore, > these don't even look difficult to fix: > > raise RuntimeError, 'indices must of Int32 type' > > Would anyone object if I enabled ndimage for 64-bit machines? We > could then file and handle tickets as necessary. > I say go ahead and re-enable it. I'd like to get the problems fixed. 
-Travis From robert.kern at gmail.com Fri Jul 7 14:20:56 2006 From: robert.kern at gmail.com (Robert Kern) Date: Fri, 07 Jul 2006 13:20:56 -0500 Subject: [SciPy-user] Triangulation of L-shaped domains In-Reply-To: <2239236B-934B-435C-A715-DE66EC6B74FA@tamu.edu> References: <44A4E961.8000208@iam.uni-stuttgart.de> <87wtaysrof.fsf@peds-pc311.bsd.uchicago.edu> <44A52CA8.6060303@iam.uni-stuttgart.de> <44A5521C.6060206@gmail.com> <2239236B-934B-435C-A715-DE66EC6B74FA@tamu.edu> Message-ID: <44AEA608.4050101@gmail.com> Rob Hetland wrote: > > I have a feeling that there *might* be a way to tease out which > triangles are inside the set of points you give, and which are outside. > However, I have to say, that I thought about this for a while, and then > gave up. It wouldn't necessarily work. The Delaunay triangulation performed by scipy.sandbox.delaunay is not constrained to require the edges of your non-convex boundary. But supposing you did have a constrained Delaunay triangulation, you could pick a triangle that is known to be outside and then push its neighbors onto a stack if the edge that joins them is not a boundary edge. Pop a triangle from the stack and repeat. (More or less; there are some details you need to fill in). You will have to put multiple "seed" triangles to initialize the stack if there are multiple "outside" regions (including holes!). > More generally: Is there a good utility for finding points inside an > arbitrary polygon? I, for one, could really use such a utility. Raycast with simulation of simplicity to handle degeneracy is probably your best bet. http://www.ecse.rpi.edu/Homepages/wrf/Research/Short_Notes/pnpoly.html John's "add up the angles" approach is not really a good one. I frequently find it referred to in the literature as "the worst thing you could possibly do." 
:-) -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From jelle.feringa at ezct.net Fri Jul 7 14:36:17 2006 From: jelle.feringa at ezct.net (Jelle Feringa / EZCT Architecture & Design Research) Date: Fri, 7 Jul 2006 20:36:17 +0200 Subject: [SciPy-user] Triangulation of L-shaped domains In-Reply-To: <44AEA608.4050101@gmail.com> Message-ID: <009d01c6a1f4$3e61d950$1001a8c0@JELLE> If you're really set on making this work, you should probably have a look at the wonderful project of wrapping cgal to python: http://cgal-python.gforge.inria.fr/ A very, very exciting project in terms of computational geometry! Cgal has a constrained version of the Delaunay triangulation, not sure if it's as of yet wrapped in cgal-python... The project is moving rapidly though... -jelle -----Original Message----- From: scipy-user-bounces at scipy.org [mailto:scipy-user-bounces at scipy.org] On Behalf Of Robert Kern Sent: Friday, July 07, 2006 8:21 PM To: SciPy Users List Subject: Re: [SciPy-user] Triangulation of L-shaped domains Rob Hetland wrote: > > I have a feeling that there *might* be a way to tease out which > triangles are inside the set of points you give, and which are outside. > However, I have to say, that I thought about this for a while, and then > gave up. It wouldn't necessarily work. The Delaunay triangulation performed by scipy.sandbox.delaunay is not constrained to require the edges of your non-convex boundary. But supposing you did have a constrained Delaunay triangulation, you could pick a triangle that is known to be outside and then push its neighbors onto a stack if the edge that joins them is not a boundary edge. Pop a triangle from the stack and repeat. (More or less; there are some details you need to fill in).
You will have to put multiple "seed" triangles to initialize the stack if there are multiple "outside" regions (including holes!). > More generally: Is there a good utility for finding points inside an > arbitrary polygon? I, for one, could really use such a utility. Raycast with simulation of simplicity to handle degeneracy is probably your best bet. http://www.ecse.rpi.edu/Homepages/wrf/Research/Short_Notes/pnpoly.html John's "add up the angles" approach is not really a good one. I frequently find it referred to in the literature as "the worst thing you could possibly do." :-) -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco _______________________________________________ SciPy-user mailing list SciPy-user at scipy.org http://projects.scipy.org/mailman/listinfo/scipy-user From gael.varoquaux at normalesup.org Fri Jul 7 14:46:13 2006 From: gael.varoquaux at normalesup.org (=?iso-8859-1?Q?Ga=EBl?= Varoquaux) Date: Fri, 7 Jul 2006 20:46:13 +0200 Subject: [SciPy-user] Integration of ODE with a random force Message-ID: <20060707184613.GK24881@clipper.ens.fr> I have been using scipy to do some simple models to try to understand a physical problem and have been extremely pleased with it: it allows me to do more physics and less IT! I would now like to add the heating effects due to random fluctuations in my model. A way of doing this would be, with a constant time step integrator, to calculate at every step a random force: my heating is due to spontaneous emission of photons, for each time step I have a probability to have an emission of one photon, and if a photon is emitted I can express the force generated. With a constant time step integrator this would be easy to implement. With an adaptive time step integrator, if I have access to the duration of the time step it would also be not too hard to implement.
I am wondering how to implement this with scipy's integrator. I guess my e-mail is not too clear but I welcome any advice or questions. Gaël From hasslerjc at adelphia.net Fri Jul 7 16:06:32 2006 From: hasslerjc at adelphia.net (John Hassler) Date: Fri, 7 Jul 2006 20:06:32 +0000 (UTC) Subject: [SciPy-user] optimize.leastsq crashs my interpreter References: <003301c6a1b8$80ca16e0$1001a8c0@JELLE> <44AE505D.8050905@ftw.at> <44AE775A.30004@ftw.at> Message-ID: Ed Schofield ftw.at> writes: > > Hans Georg Krauthäuser wrote: > > Ed Schofield schrieb: > > > >> Jelle Feringa / EZCT Architecture & Design Research wrote: > >> > >> > > > > Did I get that right? My code works for you on XP with scipy 0.4.9? > > > > Yes, it works fine on my Pentium 4 with XP SP2, Python 2.4, NumPy 0.9.8, > SciPy 0.4.9. What's your setup? > > > I *have* 0.4.9 (from official binary distribution) and the interpreter > > crashes. > > > > I'm confused... > > > > Hmmm ... it could be an ATLAS problem. What's your processor? I built > the SciPy 0.4.9 binaries against Pearu's ATLAS binaries for Pentium 2, > thinking that this would give maximum compatibility ... > > Or perhaps it's something else. Could someone with this problem please > post a backtrace? > This computer is an AMD Athlon 1600+ running Windows XP. Scipy version 0.3.2 with Numeric version 23.5 works. The latest Scipy works on this same computer under Debian Etch. (I can't check the version at the moment -- I just "upgraded" from Windows ME to XP, which overwrote the MBR. I'll have to fix it before I can boot into Linux again.) All of the versions of scipy using numpy crash with XP whenever I access any of the functions in "optimize" or "integrate" which (I assume) call the Fortran libraries. In the current version, running scipy.test() gives an "unhandled exception." Debug shows a pointer to: 020CA9C3 xorps xmm6,xmm6 I suspect that there is an inconsistency between the Athlon and the P4.
A Google search finds lots of comments about xorps and various problems. However, I haven't done any PC assembly since the 8086, so I'm WAY out of my depth. Some other information: >>> scipy.__version__ '0.4.9' >>> scipy.__numpy_version__ '0.9.8' >>> scipy.show_numpy_config() atlas_threads_info: NOT AVAILABLE blas_opt_info: libraries = ['f77blas', 'cblas', 'atlas'] library_dirs = ['C:\\Libraries\\ATLAS_3.6.0_WIN_P4'] define_macros = [('ATLAS_INFO', '"\\"3.6.0\\""')] language = c include_dirs = ['C:\\Libraries\\ATLAS_3.6.0_WIN_P4'] plus similar stuff. Probably the important thing is the "ATLAS ... P4" line. John Hassler From bapeters at terastat.com Fri Jul 7 17:36:45 2006 From: bapeters at terastat.com (Bruce Peterson) Date: Fri, 07 Jul 2006 14:36:45 -0700 Subject: [SciPy-user] SciPy and Py2EXE Message-ID: <7.0.1.0.2.20060707143122.02728958@terastat.com> Has anyone tried to make a distributable with py2exe and the latest release of SciPy? In my code I use from scipy import interpolate and get the following errors from py2exe: The following modules appear to be missing ['Pyrex', 'Pyrex.Compiler', '_curses', '_imaging_gif', 'fcompiler.FCompiler', 'fcompiler.show_fcompilers', 'lib.add_newdoc', 'mxDateTime.__version__', 'numarray', 'numpy.Float', 'numpy.Int', 'numpy.alltrue', 'numpy.arange', 'numpy.array', 'numpy.atleast_1d', 'numpy.clip', 'numpy.concatenate', 'numpy.cos', 'numpy.greater', 'numpy.less', 'numpy.logical_or', 'numpy.matrixmultiply', 'numpy.ones', 'numpy.pi', 'numpy.putmask', 'numpy.rank', 'numpy.ravel', 'numpy.searchsorted', 'numpy.shape', 'numpy.sin', 'numpy.sometrue', 'numpy.sqrt', 'numpy.swapaxes', 'numpy.take', 'numpy.transpose', 'numpy.zeros', 'pkg_resources', 'pre', 'pylab', 'setuptools', 'setuptools.command', 'setuptools.command.egg_info', 'numpy.core.conjugate', 'numpy.core.equal', 'numpy.core.less', 'numpy.core.less_equal'] Some of these are not SciPy (e.g. mxDateTime), but most appear to be SciPy related.
The py2exe wiki discusses only older versions of SciPy. Thanks Bruce Peterson 425 466 7344 From gsmith at alumni.uwaterloo.ca Fri Jul 7 13:47:12 2006 From: gsmith at alumni.uwaterloo.ca (Greg Smith) Date: Fri, 7 Jul 2006 17:47:12 +0000 (UTC) Subject: [SciPy-user] (no subject) References: <4487A586.2000707@mspacek.mm.st> Message-ID: Martin Spacek mspacek.mm.st> writes: > [mspacek]|73> arange(10, 13, 0.1)[[0, 10, 20]] == arange(10,13) > <73> array([True, False, False], dtype=bool) > > In this case, 10 == 10.0, but 11 != 11.0 and 12 != 12.0 > > I guess comparing ints to floats is a touchy thing, and depends on the > system's C libraries. Maybe minuscule errors accumulate when building up > a float array using arange(). Anyways, rounding and then converting the Comparing floats to floats (or ints) for equality is always a touchy thing. The minuscule error is there to begin with, since 0.1 cannot be exactly represented (unlike, say, 0.3056640625, which can). So you won't actually get 11.0; you'll get 10.0 + 10*x, where x is the representation of 0.1. (I get a number which is 11.0 - 3.553e-15.) When generating ranges with an imprecise step, it is better to do something like this: arange(100,130)*0.1 Now you get exactly 11.0 and 12.0. This is because multiplying 0.1 by 120 gives exactly 12.0, but adding 0.1 to 10.0 20 times doesn't. It's always better to use abs(a-b)<1e-10 or something similar; values found by different but mathematically equivalent calculations are not necessarily equal.
(I mean using also the errorbars on > the data) Knowing the variance of the data, you can weight the data when calculating the residual like this: r = \sum_i w_i(data_i-model_i)^2 where the weights w_i are just the inverse of the variance: w_i = 1/\sigma_i^2 > 2) Is there any way to get the estimated errors on the fitting parameter? > Maybe optimize.leastsq is not the right choice? Does anyone have some good > hints? Everything you need is in here: http://www.boulder.nist.gov/mcsd/Staff/JRogers/papers/odr_vcv.dvi I haven't yet found time/will to dig into it, but I'm definitely interested in a good error estimation routine. Christian From robert.kern at gmail.com Fri Jul 7 20:25:12 2006 From: robert.kern at gmail.com (Robert Kern) Date: Fri, 07 Jul 2006 19:25:12 -0500 Subject: [SciPy-user] estimating errors with optimize.leastsq In-Reply-To: <44AEFA66.6010207@hoc.net> References: <44AEFA66.6010207@hoc.net> Message-ID: <44AEFB68.4090803@gmail.com> Christian Kristukat wrote: > Chiara Caronna wrote: >> 2) Is there any way to get the estimated errors on the fitting parameter? >> Maybe optimize.leastsq is not the right choice? Does anyone have some good >> hints? > > Everything you need is in here: > http://www.boulder.nist.gov/mcsd/Staff/JRogers/papers/odr_vcv.dvi > I haven't yet found time/will to dig into it, but I'm definitely interested in a > good error estimation routine. The implementation of the ideas in that paper is in ODRPACK by the same author. It is wrapped as scipy.sandbox.odr . The docstrings are fairly thorough, I think, but please let me know if something needs to be clarified. -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth."
-- Umberto Eco From stefan at sun.ac.za Fri Jul 7 20:54:14 2006 From: stefan at sun.ac.za (Stefan van der Walt) Date: Sat, 8 Jul 2006 02:54:14 +0200 Subject: [SciPy-user] Triangulation of L-shaped domains In-Reply-To: <44AEA608.4050101@gmail.com> References: <44A4E961.8000208@iam.uni-stuttgart.de> <87wtaysrof.fsf@peds-pc311.bsd.uchicago.edu> <44A52CA8.6060303@iam.uni-stuttgart.de> <44A5521C.6060206@gmail.com> <2239236B-934B-435C-A715-DE66EC6B74FA@tamu.edu> <44AEA608.4050101@gmail.com> Message-ID: <20060708005413.GG7117@mentat.za.net> On Fri, Jul 07, 2006 at 01:20:56PM -0500, Robert Kern wrote: > Raycast with simulation of simplicity to handle degeneracy is probably your best > bet. > > http://www.ecse.rpi.edu/Homepages/wrf/Research/Short_Notes/pnpoly.html Beautiful. I wrote a quick implementation in Python. Please feel free to suggest improvements. Regards Stéfan -------------- next part -------------- A non-text attachment was scrubbed... Name: pnpoly.py Type: text/x-python Size: 1339 bytes Desc: not available URL: From tim.leslie at gmail.com Sat Jul 8 01:56:18 2006 From: tim.leslie at gmail.com (Tim Leslie) Date: Sat, 8 Jul 2006 15:56:18 +1000 Subject: [SciPy-user] ndimage on 64 bit (was enthon beta 1.0.0b3) Message-ID: On 7/8/06, Travis Oliphant wrote: > I say go ahead and re-enable it. I'd like to get the problems fixed. I did a clean install of the latest svn and ndimage segfaulted on me when running the tests. I'm keen to see ndimage working, so if anyone would like to give pointers on how to go about fixing this, I'm happy to help out. These tests were done on a dual athlon 3800+ with latest svn versions of numpy and scipy and Python 2.4.3 (#2, Apr 27 2006, 14:43:32) [GCC 4.0.3 (Ubuntu 4.0.3-1ubuntu5)] on linux2 >>> numpy.__version__ '0.9.9.2786' >>> scipy.__version__ '0.5.0.2056' running scipy.test(10, 10) in gdb gives the following: histogram 1 ... ok histogram 2 Program received signal SIGSEGV, Segmentation fault.
[Switching to Thread 47207542961872 (LWP 15035)] 0x00002aef624e4afa in NI_Histogram (input=0x749800, labels=0xd5fa80, min_label=1, max_label=1, indices=0xeb98d0, n_results=1, histograms=0xeb2d60, min=0, max=4, nbins=47205985550341) at Lib/ndimage/src/ni_measure.c:752 752 ph[jj][kk] = 0; (gdb) bt #0 0x00002aef624e4afa in NI_Histogram (input=0x749800, labels=0xd5fa80, min_label=1, max_label=1, indices=0xeb98d0, n_results=1, histograms=0xeb2d60, min=0, max=4, nbins=47205985550341) at Lib/ndimage/src/ni_measure.c:752 #1 0x00002aef624d6e82 in Py_Histogram (obj=, args=) at Lib/ndimage/src/nd_image.c:1103 #2 0x00000000004779c1 in PyEval_EvalFrame () #3 0x000000000047830f in PyEval_EvalCodeEx () #4 0x00000000004768ab in PyEval_EvalFrame () #5 0x00000000004769c6 in PyEval_EvalFrame () #6 0x000000000047830f in PyEval_EvalCodeEx () #7 0x00000000004c013a in PyFunction_SetClosure () #8 0x0000000000414490 in PyObject_Call () #9 0x0000000000475732 in PyEval_EvalFrame () #10 0x000000000047830f in PyEval_EvalCodeEx () #11 0x00000000004c013a in PyFunction_SetClosure () #12 0x0000000000414490 in PyObject_Call () #13 0x000000000041afe7 in PyMethod_New () #14 0x0000000000414490 in PyObject_Call () #15 0x0000000000475cf5 in PyEval_EvalFrame () #16 0x000000000047830f in PyEval_EvalCodeEx () #17 0x00000000004c013a in PyFunction_SetClosure () #18 0x0000000000414490 in PyObject_Call () #19 0x000000000041afe7 in PyMethod_New () #20 0x0000000000414490 in PyObject_Call () #21 0x0000000000449b66 in _PyType_Lookup () #22 0x0000000000414490 in PyObject_Call () #23 0x0000000000475cf5 in PyEval_EvalFrame () #24 0x000000000047830f in PyEval_EvalCodeEx () #25 0x00000000004c013a in PyFunction_SetClosure () #26 0x0000000000414490 in PyObject_Call () #27 0x0000000000475732 in PyEval_EvalFrame () #28 0x000000000047830f in PyEval_EvalCodeEx () #29 0x00000000004c013a in PyFunction_SetClosure () #30 0x0000000000414490 in PyObject_Call () #31 0x000000000041afe7 in PyMethod_New () #32 0x0000000000414490 in 
PyObject_Call () #33 0x0000000000449b66 in _PyType_Lookup () ---Type to continue, or q to quit--- #34 0x0000000000414490 in PyObject_Call () #35 0x0000000000475cf5 in PyEval_EvalFrame () #36 0x00000000004769c6 in PyEval_EvalFrame () #37 0x000000000047830f in PyEval_EvalCodeEx () #38 0x00000000004768ab in PyEval_EvalFrame () #39 0x000000000047830f in PyEval_EvalCodeEx () #40 0x00000000004768ab in PyEval_EvalFrame () #41 0x000000000047830f in PyEval_EvalCodeEx () #42 0x0000000000478422 in PyEval_EvalCode () #43 0x000000000049bd60 in PyRun_InteractiveOneFlags () #44 0x000000000049be64 in PyRun_InteractiveLoopFlags () #45 0x000000000049c55a in PyRun_AnyFileExFlags () #46 0x0000000000410a80 in Py_Main () #47 0x00002aef5cb210c4 in __libc_start_main () from /lib/libc.so.6 #48 0x000000000040ffba in _start () (gdb) Cheers, Tim > > -Travis > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > From listservs at mac.com Sat Jul 8 12:23:34 2006 From: listservs at mac.com (listservs at mac.com) Date: Sat, 8 Jul 2006 12:23:34 -0400 Subject: [SciPy-user] scipy svn build errors (fftpack) Message-ID: <9B18E2C2-AA85-4CC3-AC81-FDEE9AD48B69@mac.com> -----BEGIN PGP SIGNED MESSAGE----- Hash: SHA1 Recent svn builds of scipy on OSX are failing. 
The errors occur with fftpack: creating build/lib.darwin-8.7.0-Power_Macintosh-2.4/scipy/fftpack /usr/local/bin/g77 -undefined dynamic_lookup -bundle build/ temp.darwin-8.7.0-Power_Macintosh-2.4/build/src.darwin-8.7.0- Power_Macintosh-2.4/Lib/fftpack/_fftpackmodule.o build/ temp.darwin-8.7.0-Power_Macintosh-2.4/Lib/fftpack/src/zfft.o build/ temp.darwin-8.7.0-Power_Macintosh-2.4/Lib/fftpack/src/drfft.o build/ temp.darwin-8.7.0-Power_Macintosh-2.4/Lib/fftpack/src/zrfft.o build/ temp.darwin-8.7.0-Power_Macintosh-2.4/Lib/fftpack/src/zfftnd.o build/ temp.darwin-8.7.0-Power_Macintosh-2.4/build/src.darwin-8.7.0- Power_Macintosh-2.4/fortranobject.o -L/usr/local/lib -L/usr/local/lib/ gcc/powerpc-apple-darwin6.8/3.4.2 -L../staticlibs -Lbuild/ temp.darwin-8.7.0-Power_Macintosh-2.4 -ldfftpack -lfftw3 -lg2c - lcc_dynamic -o build/lib.darwin-8.7.0-Power_Macintosh-2.4/scipy/ fftpack/_fftpack.so /usr/bin/ld: can't locate file for: -ldfftpack collect2: ld returned 1 exit status /usr/bin/ld: can't locate file for: -ldfftpack collect2: ld returned 1 exit status error: Command "/usr/local/bin/g77 -undefined dynamic_lookup -bundle build/temp.darwin-8.7.0-Power_Macintosh-2.4/build/src.darwin-8.7.0- Power_Macintosh-2.4/Lib/fftpack/_fftpackmodule.o build/ temp.darwin-8.7.0-Power_Macintosh-2.4/Lib/fftpack/src/zfft.o build/ temp.darwin-8.7.0-Power_Macintosh-2.4/Lib/fftpack/src/drfft.o build/ temp.darwin-8.7.0-Power_Macintosh-2.4/Lib/fftpack/src/zrfft.o build/ temp.darwin-8.7.0-Power_Macintosh-2.4/Lib/fftpack/src/zfftnd.o build/ temp.darwin-8.7.0-Power_Macintosh-2.4/build/src.darwin-8.7.0- Power_Macintosh-2.4/fortranobject.o -L/usr/local/lib -L/usr/local/lib/ gcc/powerpc-apple-darwin6.8/3.4.2 -L../staticlibs -Lbuild/ temp.darwin-8.7.0-Power_Macintosh-2.4 -ldfftpack -lfftw3 -lg2c - lcc_dynamic -o build/lib.darwin-8.7.0-Power_Macintosh-2.4/scipy/ fftpack/_fftpack.so" failed with exit status 1 These builds worked fine until a week or so ago. 
I have fftw3: fftw3_info: libraries fftw3 not found in /Library/Frameworks/Python.framework/ Versions/2.4/lib FOUND: libraries = ['fftw3'] library_dirs = ['/usr/local/lib'] define_macros = [('SCIPY_FFTW3_H', None)] include_dirs = ['/usr/local/include'] Thanks, C. - -- Christopher Fonnesbeck + Atlanta, GA + fonnesbeck at mac.com + Contact me on AOL IM using email address -----BEGIN PGP SIGNATURE----- Version: GnuPG v1.4.3 (Darwin) iD8DBQFEr9wIkeka2iCbE4wRAq2jAJ0SOgvX/KUb+w3G298tdQCq0Id4fACgoNKo 9fkjGUSSrXnw8FwkT5rv8ig= =YVzB -----END PGP SIGNATURE----- From ckkart at hoc.net Sat Jul 8 22:43:28 2006 From: ckkart at hoc.net (Christian Kristukat) Date: Sun, 09 Jul 2006 11:43:28 +0900 Subject: [SciPy-user] estimating errors with optimize.leastsq In-Reply-To: <44AEFB68.4090803@gmail.com> References: <44AEFA66.6010207@hoc.net> <44AEFB68.4090803@gmail.com> Message-ID: <44B06D50.100@hoc.net> Robert Kern wrote: > Christian Kristukat wrote: >> Chiara Caronna wrote: >>> 2) Is there any way to get the estimated errors on the fitting parameter? >>> Maybe optimize.leastsq is not the right choice? Does anyone have some good >>> hints? >> Everything you need is in here: >> http://www.boulder.nist.gov/mcsd/Staff/JRogers/papers/odr_vcv.dvi >> I haven't yet found time/will to dig into it, but I'm definitely interested in a >> good error estimation routine. > > The implementation of the ideas in that paper is in ODRPACK by the same author. > It is wrapped as scipy.sandbox.odr . The docstrings are fairly thorough, I > think, but please let me know if something needs to be clarified. > Great! I just tried to build from svn with sandbox.odr enabled. Upon importing odr I get the following error: >>> from scipy.sandbox import odr Traceback (most recent call last): File "", line 1, in ? File "/usr/local/lib/python2.4/site-packages/scipy/sandbox/odr/__init__.py", line 55, in ? import odrpack File "/usr/local/lib/python2.4/site-packages/scipy/sandbox/odr/odrpack.py", line 113, in ?
from scipy.sandbox.odr import __odrpack ImportError: cannot import name __odrpack Looking at site-packages/scipy/sandbox/odr it looks like the extension module has not been built. Btw. is it intended that numpy distutils (0.9.8) installs everything to /usr/local/lib/python instead of /usr/lib/python? I don't like that very much. Can I change the location with some configuration parameter? Regards, Christian From robert.kern at gmail.com Sat Jul 8 18:55:33 2006 From: robert.kern at gmail.com (Robert Kern) Date: Sat, 08 Jul 2006 17:55:33 -0500 Subject: [SciPy-user] scipy svn build errors (fftpack) In-Reply-To: <9B18E2C2-AA85-4CC3-AC81-FDEE9AD48B69@mac.com> References: <9B18E2C2-AA85-4CC3-AC81-FDEE9AD48B69@mac.com> Message-ID: <44B037E5.60707@gmail.com> listservs at mac.com wrote: > -----BEGIN PGP SIGNED MESSAGE----- > Hash: SHA1 > > Recent svn builds of scipy on OSX are failing. The errors occur with > fftpack: > > creating build/lib.darwin-8.7.0-Power_Macintosh-2.4/scipy/fftpack > /usr/local/bin/g77 -undefined dynamic_lookup -bundle build/ > temp.darwin-8.7.0-Power_Macintosh-2.4/build/src.darwin-8.7.0- > Power_Macintosh-2.4/Lib/fftpack/_fftpackmodule.o build/ > temp.darwin-8.7.0-Power_Macintosh-2.4/Lib/fftpack/src/zfft.o build/ > temp.darwin-8.7.0-Power_Macintosh-2.4/Lib/fftpack/src/drfft.o build/ > temp.darwin-8.7.0-Power_Macintosh-2.4/Lib/fftpack/src/zrfft.o build/ > temp.darwin-8.7.0-Power_Macintosh-2.4/Lib/fftpack/src/zfftnd.o build/ > temp.darwin-8.7.0-Power_Macintosh-2.4/build/src.darwin-8.7.0- > Power_Macintosh-2.4/fortranobject.o -L/usr/local/lib -L/usr/local/lib/ > gcc/powerpc-apple-darwin6.8/3.4.2 -L../staticlibs -Lbuild/ > temp.darwin-8.7.0-Power_Macintosh-2.4 -ldfftpack -lfftw3 -lg2c - > lcc_dynamic -o build/lib.darwin-8.7.0-Power_Macintosh-2.4/scipy/ > fftpack/_fftpack.so > /usr/bin/ld: can't locate file for: -ldfftpack > collect2: ld returned 1 exit status What command line are you using to build?
You will probably need to explicitly specify build_clib in addition to build_ext. -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From wbaxter at gmail.com Mon Jul 10 02:50:47 2006 From: wbaxter at gmail.com (Bill Baxter) Date: Mon, 10 Jul 2006 15:50:47 +0900 Subject: [SciPy-user] Compiling SciPy on Windows Message-ID: Hi everyone, I'm following the instructions on building SciPy for Windows here: http://www.scipy.org/Installing_SciPy/Windows I've managed to get ATLAS and Numpy built, and now I'm up to scipy. Using the standalone MinGW compiler I get an error about unresolved symbol __EH_FRAME_BEGIN__. I tried using the msvc.net 2003 compiler too (despite warnings on the above web page that it wouldn't work). After fixing a couple of compilation errors in Lib/special/cephes, I got this: No module named msvccompiler in numpy.distutils, trying from distutils.. customize MSVCCompiler Traceback (most recent call last): File "setup.py", line 50, in ? 
setup_package() File "setup.py", line 42, in setup_package configuration=configuration ) File "C:\Python24\Lib\site-packages\numpy\distutils\core.py", line 174, in setup return old_setup(**new_attr) File "C:\Python24\lib\distutils\core.py", line 149, in setup dist.run_commands() File "C:\Python24\lib\distutils\dist.py", line 946, in run_commands self.run_command(cmd) File "C:\Python24\lib\distutils\dist.py", line 966, in run_command cmd_obj.run() File "c:\python24\lib\distutils\command\build.py", line 112, in run self.run_command(cmd_name) File "C:\Python24\lib\distutils\cmd.py", line 333, in run_command self.distribution.run_command(command) File "C:\Python24\lib\distutils\dist.py", line 966, in run_command cmd_obj.run() File "C:\Python24\Lib\site-packages\numpy\distutils\command\build_ext.py", line 89, in run self.compiler.customize(self.distribution,need_cxx=need_cxx_compiler) File "C:\Python24\Lib\site-packages\numpy\distutils\ccompiler.py", line 200, in CCompiler_customize self.compiler_so.remove('-Wstrict-prototypes') AttributeError: MSVCCompiler instance has no attribute 'compiler_so' Ok, I didn't really expect MSVC to work, just gave it a try. But can anyone shed light on the __EH_FRAME_BEGIN__ error with MinGW? I just installed this mingw: C:\usr\pkg\scipysvn>gcc --version gcc (GCC) 3.4.2 (mingw-special) And just downloaded scipy from SVN. Here's the full text of the error with MinGW: compile options: '-Ibuild\src.win32-2.4-Ic:\Python24\lib\site-packages\numpy\co re\include -Ic:\Python24\include -Ic:\Python24\PC -c' C:\mingw\bin\g77.exe -shared build\temp.win32- 2.4\Release\build\src.win32-2.4\li b\fftpack\_fftpackmodule.o build\temp.win32- 2.4\Release\lib\fftpack\src\zfft.o b uild\temp.win32-2.4\Release\lib\fftpack\src\drfft.o build\temp.win32- 2.4\Release \lib\fftpack\src\zrfft.o build\temp.win32- 2.4\Release\lib\fftpack\src\zfftnd.o b uild\temp.win32-2.4\Release\build\src.win32-2.4\fortranobject.o-LC:/mingw/bin/. 
./lib/gcc/mingw32/3.4.2 -Lc:\Python24\libs -Lc:\Python24\PCBuild -Lbuild\temp.wi n32-2.4 -ldfftpack -lpython24 -lgcc -lg2c -o build\lib.win32- 2.4\scipy\fftpack\_ fftpack.pyd C:/mingw/bin/../lib/gcc/mingw32/3.4.2/libgcc.a(__main.o)(.text+0x4f): undefined reference to `__EH_FRAME_BEGIN__' C:/mingw/bin/../lib/gcc/mingw32/3.4.2/libgcc.a(__main.o)(.text+0x73): undefined reference to `__EH_FRAME_BEGIN__' collect2: ld returned 1 exit status error: Command "C:\mingw\bin\g77.exe -shared build\temp.win32- 2.4\Release\build\ src.win32-2.4\lib\fftpack\_fftpackmodule.o build\temp.win32- 2.4\Release\lib\fftp ack\src\zfft.o build\temp.win32-2.4\Release\lib\fftpack\src\drfft.obuild\temp.w in32-2.4\Release\lib\fftpack\src\zrfft.o build\temp.win32- 2.4\Release\lib\fftpac k\src\zfftnd.o build\temp.win32- 2.4\Release\build\src.win32-2.4\fortranobject.o -LC:/mingw/bin/../lib/gcc/mingw32/3.4.2 -Lc:\Python24\libs -Lc:\Python24\PCBuild -Lbuild\temp.win32-2.4 -ldfftpack -lpython24 -lgcc -lg2c -o build\lib.win32-2.4 \scipy\fftpack\_fftpack.pyd" failed with exit status 1 Thanks for any help on this. --bb -------------- next part -------------- An HTML attachment was scrubbed... URL: From kschutte at csail.mit.edu Mon Jul 10 17:03:35 2006 From: kschutte at csail.mit.edu (Ken Schutte) Date: Mon, 10 Jul 2006 17:03:35 -0400 Subject: [SciPy-user] scipy svn build errors (fftpack) In-Reply-To: <9B18E2C2-AA85-4CC3-AC81-FDEE9AD48B69@mac.com> References: <9B18E2C2-AA85-4CC3-AC81-FDEE9AD48B69@mac.com> Message-ID: <44B2C0A7.2010204@csail.mit.edu> listservs at mac.com wrote: > > Recent svn builds of scipy on OSX are failing. The errors occur with > fftpack: > I was also about to post about an error building scipy svn due to fftpack. Although, I'm not sure if it's a related problem (that looked like an ld error, mine's on compiling). 
I am on Fedora Core 4, and getting the following error messages: ------------------------------------------------ building 'dfftpack' library compiling Fortran sources Fortran f77 compiler: /usr/bin/g77 -g -Wall -fno-second-underscore -fPIC -O2 -funroll-loops -march=pentium3 -mmmx -msse2 -m sse -fomit-frame-pointer -malign-double creating build/temp.linux-i686-2.4 creating build/temp.linux-i686-2.4/Lib creating build/temp.linux-i686-2.4/Lib/fftpack creating build/temp.linux-i686-2.4/Lib/fftpack/dfftpack compile options: '-c' g77:f77: Lib/fftpack/dfftpack/zffti1.f Lib/fftpack/dfftpack/zffti1.f: In subroutine `zffti1': Lib/fftpack/dfftpack/zffti1.f:10: warning: `ntry' might be used uninitialized in this function g77:f77: Lib/fftpack/dfftpack/dcost.f g77:f77: Lib/fftpack/dfftpack/dsint.f g77:f77: Lib/fftpack/dfftpack/zfftf.f g77:f77: Lib/fftpack/dfftpack/dsinti.f g77:f77: Lib/fftpack/dfftpack/dcosti.f g77:f77: Lib/fftpack/dfftpack/dcosqi.f g77:f77: Lib/fftpack/dfftpack/dsinqi.f g77:f77: Lib/fftpack/dfftpack/dcosqf.f g77:f77: Lib/fftpack/dfftpack/dcosqb.f g77:f77: Lib/fftpack/dfftpack/dsinqf.f g77:f77: Lib/fftpack/dfftpack/dfftb.f g77:f77: Lib/fftpack/dfftpack/zffti.f g77:f77: Lib/fftpack/dfftpack/zfftb.f g77:f77: Lib/fftpack/dfftpack/dfftf.f g77:f77: Lib/fftpack/dfftpack/dfftb1.f g77:f77: Lib/fftpack/dfftpack/dffti.f g77:f77: Lib/fftpack/dfftpack/zfftf1.f /tmp/ccCnF9WU.s: Assembler messages: /tmp/ccCnF9WU.s:599: Error: suffix or operands invalid for `movd' /tmp/ccCnF9WU.s:2982: Error: suffix or operands invalid for `movd' /tmp/ccCnF9WU.s: Assembler messages: /tmp/ccCnF9WU.s:599: Error: suffix or operands invalid for `movd' /tmp/ccCnF9WU.s:2982: Error: suffix or operands invalid for `movd' error: Command "/usr/bin/g77 -g -Wall -fno-second-underscore -fPIC -O2 -funroll-loops -march=pentium3 -mmmx -msse2 -msse -f omit-frame-pointer -malign-double -c -c Lib/fftpack/dfftpack/zfftf1.f -o build/temp.linux-i686-2.4/Lib/fftpack/dfftpack/zff tf1.o" failed with exit status 1 
------------------------------------ Someone noted the same thing on FC5 about a month ago on SciPy-dev: http://www.scipy.net/pipermail/scipy-dev/2006-June/005885.html Any suggestions would be appreciated. Thanks, Ken From kschutte at csail.mit.edu Tue Jul 11 00:32:30 2006 From: kschutte at csail.mit.edu (Ken Schutte) Date: Tue, 11 Jul 2006 00:32:30 -0400 Subject: [SciPy-user] scipy svn build errors (fftpack) In-Reply-To: <44B2C0A7.2010204@csail.mit.edu> References: <9B18E2C2-AA85-4CC3-AC81-FDEE9AD48B69@mac.com> <44B2C0A7.2010204@csail.mit.edu> Message-ID: <44B329DE.4060605@csail.mit.edu> > listservs at mac.com wrote: > >>Recent svn builds of scipy on OSX are failing. The errors occur with >>fftpack: >> > > I was also about to post about an error building scipy svn due to > fftpack. > Someone noted the same thing on FC5 about a month ago on SciPy-dev: > http://www.scipy.net/pipermail/scipy-dev/2006-June/005885.html Just to follow up - this problem was answered on SciPy-dev. I had to comment out this line: if cpu.has_sse2(): opt.append('-msse2') in this file: /usr/lib/python2.4/site-packages/numpy/distutils/fcompiler/gnu.py From hgk at et.uni-magdeburg.de Tue Jul 11 02:40:51 2006 From: hgk at et.uni-magdeburg.de (Dr. Hans Georg Krauthaeuser) Date: Tue, 11 Jul 2006 08:40:51 +0200 Subject: [SciPy-user] optimize.leastsq crashs my interpreter In-Reply-To: <44AE775A.30004@ftw.at> References: <003301c6a1b8$80ca16e0$1001a8c0@JELLE> <44AE505D.8050905@ftw.at> <44AE775A.30004@ftw.at> Message-ID: Ed Schofield wrote: >>... > > Hmmm ... it could be an ATLAS problem. What's your processor? I built > the SciPy 0.4.9 binaries against Pearu's ATLAS binaries for Pentium 2, > thinking that this would give maximum compatibility ... > > Or perhaps it's something else. Could someone with this problem please > post a backtrace? Ed, in the meantime I tried the script on several computers. 
All of them are running XP, Python 2.4.3, and the newest 'officially binary distributed' numpy and scipy. Summary: it crashes the interpreter on Athlons; Pentiums (M and 4) are OK. What do you mean by 'backtrace'? How do I make one? Have a nice day Hans Georg From nwagner at iam.uni-stuttgart.de Tue Jul 11 04:31:03 2006 From: nwagner at iam.uni-stuttgart.de (Nils Wagner) Date: Tue, 11 Jul 2006 10:31:03 +0200 Subject: [SciPy-user] optimize.leastsq crashs my interpreter In-Reply-To: References: <003301c6a1b8$80ca16e0$1001a8c0@JELLE> <44AE505D.8050905@ftw.at> <44AE775A.30004@ftw.at> Message-ID: <44B361C7.3060906@iam.uni-stuttgart.de> Dr. Hans Georg Krauthaeuser wrote: > Ed Schofield wrote: > >>> ... >>> >> Hmmm ... it could be an ATLAS problem. What's your processor? I built >> the SciPy 0.4.9 binaries against Pearu's ATLAS binaries for Pentium 2, >> thinking that this would give maximum compatibility ... >> >> Or perhaps it's something else. Could someone with this problem please >> post a backtrace? >> > > Ed, > > in the meantime I tried the script on several computers. All of them are > running XP, Python 2.4.3, and the newest 'officially binary distributed' > numpy and scipy. Summary: it crashes the interpreter on Athlons; Pentiums > (M and 4) are OK. > > What do you mean by 'backtrace'? How do I make one? > > Have a nice day > Hans Georg > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > On Linux you may use gdb >gdb python GNU gdb 6.3 Copyright 2004 Free Software Foundation, Inc. GDB is free software, covered by the GNU General Public License, and you are welcome to change it and/or distribute copies of it under certain conditions. Type "show copying" to see the conditions. There is absolutely no warranty for GDB. Type "show warranty" for details. This GDB was configured as "i586-suse-linux"...Using host libthread_db library "/lib/libthread_db.so.1".
(gdb) run Starting program: /usr/bin/python [Thread debugging using libthread_db enabled] [New Thread 16384 (LWP 25701)] Python 2.4 (#1, Mar 22 2005, 21:42:42) [GCC 3.3.5 20050117 (prerelease) (SUSE Linux)] on linux2 Type "help", "copyright", "credits" or "license" for more information >>> import scipy >>> scipy.test(1,10) If you get a segfault you can use bt for a backtrace. (gdb) help bt Print backtrace of all stack frames, or innermost COUNT frames. With a negative argument, print outermost -COUNT frames. Use of the 'full' qualifier also prints the values of the local variables. Nils From jelle.feringa at ezct.net Tue Jul 11 05:07:03 2006 From: jelle.feringa at ezct.net (Jelle Feringa / EZCT Architecture & Design Research) Date: Tue, 11 Jul 2006 11:07:03 +0200 Subject: [SciPy-user] optimize.leastsq crashs my interpreter In-Reply-To: Message-ID: <005301c6a4c9$60b0a950$1001a8c0@JELLE> Hi Ed, I can confirm that the same holds true for interpolate.UnivariateSpline. Runs smoothly on a Pentium M, fails on an Athlon. So likely these different processors require different builds? -jelle -------------------------- Ed, in the meantime I tried the script on several computers. All of them are running XP, Python 2.4.3, and the newest 'officially binary distributed' numpy and scipy. Summary: it crashes the interpreter on Athlons; Pentiums (M and 4) are OK. What do you mean by 'backtrace'? How do I make one? Have a nice day Hans Georg From ckkart at hoc.net Tue Jul 11 05:42:41 2006 From: ckkart at hoc.net (Christian Kristukat) Date: Tue, 11 Jul 2006 18:42:41 +0900 Subject: [SciPy-user] estimating errors with optimize.leastsq In-Reply-To: <44AEFB68.4090803@gmail.com> References: <44AEFA66.6010207@hoc.net> <44AEFB68.4090803@gmail.com> Message-ID: <44B37291.2000504@hoc.net> Robert Kern wrote: > Christian Kristukat wrote: >> Chiara Caronna wrote: >>> 2) Is there any way to get the estimated errors on the fitting parameter? >>> Maybe optimize.leastsq is not the right choice?
Does anyone have some good >>> hints? >> Everything you need is in here: >> http://www.boulder.nist.gov/mcsd/Staff/JRogers/papers/odr_vcv.dvi >> I haven't yet found time/will to dig into it, but I'm definitely interested in a >> good error estimation routine. > > The implementation of the ideas in that paper is in ODRPACK by the same author. > It is wrapped as scipy.sandbox.odr . The docstrings are fairly thorough, I > think, but please let me know if something needs to be clarified. > Great! I just tried to build from svn with sandbox.odr enabled. Upon importing odr I get the following error: >>> from scipy.sandbox import odr Traceback (most recent call last): File "", line 1, in ? File "/usr/local/lib/python2.4/site-packages/scipy/sandbox/odr/__init__.py", line 55, in ? import odrpack File "/usr/local/lib/python2.4/site-packages/scipy/sandbox/odr/odrpack.py", line 113, in ? from scipy.sandbox.odr import __odrpack ImportError: cannot import name __odrpack Looking at site-packages/scipy/sandbox/odr it looks like the extension module has not been built. However, I have been able to build the odr module alone. I had to comment out line 48 in setup_odr.py (I've no ATLAS), and in the last line a .todict() was missing. After that it was really easy to switch to odr, and the results, especially the confidence intervals, look very nice. I noticed that odr is more demanding of a good initial guess than leastsq, and sometimes it seems to take a dead-end road. But I'll have to play some more with it. It just occurred to me that for data which has no noise on the x-values, odr might not be advantageous compared to an ordinary least squares fit of the y-values. Is that assumption right? Anyway, thanks for wrapping odr! Is it OK to include odr in my GPLed package as long as odr is not part of the official scipy distribution?
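For reference, the standard recipe behind such parameter error bars can be sketched with plain NumPy: for an unweighted least-squares fit, the parameter covariance is approximately s^2 (J^T J)^-1, where J is the Jacobian of the model with respect to the parameters and s^2 the residual variance. The straight-line fit below is a generic illustration of that formula (synthetic data, made-up noise level), not the ODRPACK algorithm:

```python
import numpy as np

# Synthetic data: y = 2x + 1 plus Gaussian noise (illustrative values only)
rng = np.random.RandomState(0)
x = np.linspace(0.0, 10.0, 50)
y = 2.0 * x + 1.0 + 0.3 * rng.randn(50)

# Linear model y = a*x + b; the Jacobian w.r.t. (a, b) is J = [x, 1]
J = np.column_stack([x, np.ones_like(x)])
beta, _, _, _ = np.linalg.lstsq(J, y, rcond=None)

# Residual variance s^2 and the parameter covariance s^2 * (J^T J)^-1
resid = y - J @ beta
s2 = resid @ resid / (len(x) - len(beta))
cov = s2 * np.linalg.inv(J.T @ J)
stderr = np.sqrt(np.diag(cov))  # 1-sigma errors on a and b
```

optimize.leastsq can return the unscaled (J^T J)^-1 as cov_x when called with full_output=1; multiplying it by s^2 gives the same covariance estimate for nonlinear models.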
Regards, Christian From schofield at ftw.at Tue Jul 11 06:37:25 2006 From: schofield at ftw.at (Ed Schofield) Date: Tue, 11 Jul 2006 12:37:25 +0200 Subject: [SciPy-user] optimize.leastsq crashs my interpreter In-Reply-To: References: <003301c6a1b8$80ca16e0$1001a8c0@JELLE> <44AE505D.8050905@ftw.at> <44AE775A.30004@ftw.at> Message-ID: <44B37F65.1090508@ftw.at> John Hassler wrote: > Ed Schofield ftw.at> writes: > > >> Hmmm ... it could be an ATLAS problem. What's your processor? I built >> the SciPy 0.4.9 binaries against Pearu's ATLAS binaries for Pentium 2, >> thinking that this would give maximum compatibility ... >> >> Or perhaps it's something else. Could someone with this problem please >> post a backtrace? >> > > > This computer is an AMD Athlon 1600+ running Windows XP. > > > > All of the versions of scipy using numpy crash with XP whenever I access > any of the functions in "optimize" or "integrate" which (I assume) call the > Fortran libraries. > > In the current version, running scipy.test() gives an "unhandled > exception." Debug shows a pointer to: > 020CA9C3 xorps xmm6,xmm6 > > > > Some other information: > >>>> scipy.__version__ '0.4.9' >>>> > > >>>> scipy.__numpy_version__ '0.9.8' >>>> > > >>>> scipy.show_numpy_config() >>>> > > atlas_threads_info: NOT AVAILABLE > > blas_opt_info: libraries = ['f77blas', 'cblas', 'atlas'] library_dirs = > ['C:\\Libraries\\ATLAS_3.6.0_WIN_P4'] define_macros = [('ATLAS_INFO', > '"\\"3.6.0\\""')] language = c include_dirs = > ['C:\\Libraries\\ATLAS_3.6.0_WIN_P4'] > > plus similar stuff. Probably the important thing is the "ATLAS ... P4" > line. > Thanks, John -- this is helpful. (Hans too, thanks for your testing). This looks like a problem with the NumPy build. Travis, could you have compiled the Win32 binaries accidentally against the P4/SSE2 ATLAS library? 
-- Ed From hetland at tamu.edu Tue Jul 11 09:47:59 2006 From: hetland at tamu.edu (Rob Hetland) Date: Tue, 11 Jul 2006 09:47:59 -0400 Subject: [SciPy-user] Triangulation of L-shaped domains In-Reply-To: <20060707150204.GB32102@mentat.za.net> References: <44A4E961.8000208@iam.uni-stuttgart.de> <87wtaysrof.fsf@peds-pc311.bsd.uchicago.edu> <44A52CA8.6060303@iam.uni-stuttgart.de> <44A5521C.6060206@gmail.com> <2239236B-934B-435C-A715-DE66EC6B74FA@tamu.edu> <20060707150204.GB32102@mentat.za.net> Message-ID: <44D2E650-9A8D-4B4E-92DB-7B3E8CC3C5ED@tamu.edu> > > A very simple implementation (not heavily tested) of this method is > attached. Again, unfortunately, not vectorised. > > Cheers > Stéfan > [I sent this message a while ago, but it appears to have been lost... so again:] I modified Stéfan's code (he may not recognize it anymore -- now it's ugly), so that the area and centroid methods are vectorized (using numpy arrays). I also simplified the inside method (it was called contains, but I think inside is more common). Finally, I renamed the entire class from Poly to Polygon to prevent confusion with polynomial classes. Everything seems to work, but I have not had a chance to test it extensively. The inside method is slow when there are many points. The code available here: http://www.ecse.rpi.edu/Homepages/wrf/Research/Short_Notes/pnpoly.html seems to work great. Unfortunately, I am not very good at wrapping C into Python, and I am also not good at putting compiled code into distutils. If anyone else has the skills and is so motivated, could we make a small class (with a compiled inside method) and throw it in the scipy sandbox? -Rob ---- Rob Hetland, Assistant Professor Dept. of Oceanography, Texas A&M University http://pong.tamue.edu/~rob phone: 979-458-0096, fax: 979-845-6331 -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... 
Name: polygon.py Type: text/x-python-script Size: 4904 bytes Desc: not available URL: -------------- next part -------------- An HTML attachment was scrubbed... URL: From robert.kern at gmail.com Tue Jul 11 10:13:17 2006 From: robert.kern at gmail.com (Robert Kern) Date: Tue, 11 Jul 2006 09:13:17 -0500 Subject: [SciPy-user] estimating errors with optimize.leastsq In-Reply-To: <44B37291.2000504@hoc.net> References: <44AEFA66.6010207@hoc.net> <44AEFB68.4090803@gmail.com> <44B37291.2000504@hoc.net> Message-ID: <44B3B1FD.90501@gmail.com> Christian Kristukat wrote: > Robert Kern wrote: >> Christian Kristukat wrote: >>> Chiara Caronna wrote: >>>> 2) Is there any way to get the estimated errors on the fitting parameter? >>>> Maybe optimize.leastsq is not the right choice? Does anyone has some good >>>> hints? >>> Everything you need is in here: >>> http://www.boulder.nist.gov/mcsd/Staff/JRogers/papers/odr_vcv.dvi >>> I haven't yet found time/will to dig into it, but I'm definitely interested in a >>> good error estimation routine. >> The implementation of the ideas in that paper is in ODRPACK by the same author. >> It is wrapped as scipy.sandbox.odr . The docstrings are fairly thorough, I >> think, but please let me know if something needs to be clarified. > > great! I just to tried to build from svn with sandbox.odr enabled. Upon > importing odr I get the following error: > >>>> from scipy.sandbox import odr > Traceback (most recent call last): > File "", line 1, in ? > File "/usr/local/lib/python2.4/site-packages/scipy/sandbox/odr/__init__.py", > line 55, in ? > import odrpack > File "/usr/local/lib/python2.4/site-packages/scipy/sandbox/odr/odrpack.py", > line 113, in ? > from scipy.sandbox.odr import __odrpack > ImportError: cannot import name __odrpack > > Looking at site-packages/scipy/sandbox/odr it looks like that the extension > module has not been built. > > However I have been able to build the odr module alone. 
I had to comment out > line 48 in setup_odr.py (I've no atlas) and in the last line a .todict()) was > missing. D'oh! I could have sworn I had that working. Oh well. It's fixed now. > After that it was really easy to switch to odr and the results, especially the > confidential intervals look very nice. I noticed that odr is more demanding on a > good initial guess than leastsq and sometimes it seems to takes a dead-end > road. But I'll have to play some more with it. > It just came into my mind that for data which has no noise on the x-values, odr > might not be advantageous compared to an ordinary least squares fit of the > y-values. Is that assumption right? One thing you have to watch out for is that if you don't specify the weights on the X-values, then they will be implicitly set to 1 (in whatever units the data are in), so you'll be solving an ODR problem whether it's appropriate or not. So the question isn't really "is ODR better or worse technique than OLS?" but rather "do I have an ODR problem or an OLS problem?" Of course, ODRPACK handles OLS problems, too. Just do .set_job(fit_type=2) (no, I'm not really happy with that interface, either, but there it is). > Anyway, thanks for wrapping odr! Is it ok to include odr in my GPLed package as > long as odr is not part of the officail scipy distribution? You could include all of scipy into your GPLed package, too. The BSD license is GPL-compatible. Well, now that I think about it, that's not entirely true. We've been a little lax about wrapping one or two routines where the authors have requested citations in publications, which, if enforced, is not a GPL-compatible restriction. But there are no such problems with odr; ODRPACK is US government public domain code, and my wrappers are given under the scipy license. The license requirements for the code in the sandbox are no different from the "official" scipy distribution. 
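[The set_job(fit_type=2) switch described above can be sketched as follows, using the scipy.odr module that the sandbox wrapper later became; the linear model and synthetic data here are invented for illustration:]

```python
import numpy as np
from scipy.odr import ODR, Model, RealData

def linear(beta, x):
    # A simple linear model: beta[0] * x + beta[1]
    return beta[0] * x + beta[1]

rng = np.random.default_rng(0)
x = np.linspace(0.0, 10.0, 50)
y = 2.0 * x + 1.0 + rng.normal(scale=0.1, size=x.size)

# sx/sy are the standard deviations of the measurement errors; leaving
# sx at a nonzero value means an ODR problem is solved.
data = RealData(x, y, sx=0.1, sy=0.1)
model = Model(linear)

# Full orthogonal distance regression (the default, fit_type=0) ...
odr_fit = ODR(data, model, beta0=[1.0, 0.0])
out_odr = odr_fit.run()

# ... versus ordinary least squares on the y-values only (fit_type=2).
ols_fit = ODR(data, model, beta0=[1.0, 0.0])
ols_fit.set_job(fit_type=2)
out_ols = ols_fit.run()

print(out_odr.beta, out_odr.sd_beta)   # estimates and standard errors
print(out_ols.beta, out_ols.sd_beta)
```

[With noise only on y, the two fits should agree closely; fit_type=2 skips the orthogonal-distance machinery entirely.]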
-- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From david.huard at gmail.com Tue Jul 11 10:37:23 2006 From: david.huard at gmail.com (David Huard) Date: Tue, 11 Jul 2006 10:37:23 -0400 Subject: [SciPy-user] estimating errors with optimize.leastsq In-Reply-To: <44B37291.2000504@hoc.net> References: <44AEFA66.6010207@hoc.net> <44AEFB68.4090803@gmail.com> <44B37291.2000504@hoc.net> Message-ID: <91cf711d0607110737r7ab4d37axe410eb910f66e6fa@mail.gmail.com> 2006/7/11, Christian Kristukat : > > It just came into my mind that for data which has no noise on the > x-values, odr > might not be advantageous compared to an ordinary least squares fit of the > y-values. Is that assumption right? Yes. A least square fit is perfectly ok if the input values are known exactly. In fact, even if the x have errors on them, its not so bad as long as the corresponding errors on the output are small compared to measurement errors on y. That is, if y = f(x), and you have an error dx, if f(x+dx) - y is small compared to errors on y, you should be ok with a least square fit. There is a pretty good chapter in a book by Zellner on regression with input errors. However, if you're serious about error estimation and uncertainty analysis, you should definitely try to compute the entire parameter distribution instead of looking only for the best fitting parameters. You can get nasty surprises by using only the "best" parameters. For one or two parameters, you can generally do that by brute force, for more parameters, I'd suggest Monte Carlo sampling (see PyMC). (mmm ...I got carried along here... sorry, that's my thesis subject.) David -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From hetland at tamu.edu Tue Jul 11 12:23:35 2006 From: hetland at tamu.edu (Rob Hetland) Date: Tue, 11 Jul 2006 12:23:35 -0400 Subject: [SciPy-user] sandbox.netcdf does not install correctly. Message-ID: <63EA6A99-4A9E-4FFC-9BE8-AD3ECFB55523@tamu.edu> The netcdf class in the scipy sandbox does not install correctly. I have installed this utility by hand, moving the required files over to the site-packages directory, and the package does work (after the import statements are fixed); it's just that the installation doesn't move the right files over. I am very bad at distutils, so I will simply describe what needs to be changed: The import statements including Scientific_netcdf need to be changed to: from _netcdf import * from _netcdf import _C_API netcdf.py needs to be transferred to the correct directory. It would most likely be best to put things in a netcdf directory, with an __init__.py (absent now, as far as I can tell). Other than these small installation issues, it seems to work great! -Rob ---- Rob Hetland, Assistant Professor Dept. of Oceanography, Texas A&M University http://pong.tamue.edu/~rob phone: 979-458-0096, fax: 979-845-6331 -------------- next part -------------- An HTML attachment was scrubbed... URL: From davidgrant at gmail.com Tue Jul 11 14:18:22 2006 From: davidgrant at gmail.com (David Grant) Date: Tue, 11 Jul 2006 11:18:22 -0700 Subject: [SciPy-user] matplotlib IndexFormatter missing Message-ID: Anyone know what happened to matplotlib.ticker.IndexFormatter? Is there a replacement for it? -- David Grant Please Note my new email address: davidgrant at gmail.com -------------- next part -------------- An HTML attachment was scrubbed... URL: From davidgrant at gmail.com Tue Jul 11 17:17:34 2006 From: davidgrant at gmail.com (David Grant) Date: Tue, 11 Jul 2006 14:17:34 -0700 Subject: [SciPy-user] Eigenvalues and eigenvectors Message-ID: What function should I use to get some eigenvalues and eigenvectors? 
There are so many and it's difficult to pin down what they are all for. -- David Grant Please Note my new email address: davidgrant at gmail.com -------------- next part -------------- An HTML attachment was scrubbed... URL: From alex.j.ross at gmail.com Tue Jul 11 17:17:58 2006 From: alex.j.ross at gmail.com (Alexander Ross) Date: Tue, 11 Jul 2006 14:17:58 -0700 Subject: [SciPy-user] Feature Request. Message-ID: Is it possible to extend the ndarray methods sort and argsort to add the keyword arguments that the builtin sorted has? I am especially interested in the `key` argument. - Alexander Ross From oliphant at ee.byu.edu Tue Jul 11 17:27:09 2006 From: oliphant at ee.byu.edu (Travis Oliphant) Date: Tue, 11 Jul 2006 15:27:09 -0600 Subject: [SciPy-user] Plea for help with cobyla test Message-ID: <44B417AD.2080907@ee.byu.edu> Nils has reported that the cobyla test passes on 64-bit system. It is currently failing on 32-bit systems and I'm not sure why. Could anybody verify that the cobyla test passes on a 64-bit system? Is there anyone who has access to both 32 and 64-bit systems who would be willing to help me figure out the problem? I'm not happy that it used to work and now is "slightly" off. I've learned not to ignore little errors like this as it may show a problem with numpy (or f2py). -Travis From kwgoodman at gmail.com Tue Jul 11 17:40:18 2006 From: kwgoodman at gmail.com (Keith Goodman) Date: Tue, 11 Jul 2006 14:40:18 -0700 Subject: [SciPy-user] Eigenvalues and eigenvectors In-Reply-To: References: Message-ID: On 7/11/06, David Grant wrote: > What function should I use to get some eigenvalues and eigenvectors? There > are so many and it's difficult to pin down what they are all for. Only two of the functions return both eigenvalues and eigenvectors: eig --- Eigenvalues and vectors of a square matrix eigh --- Eigenvalues and eigenvectors of a Hermitian matrix I use eigh since my matrices are real and symmetric. 
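[The eig/eigh distinction can be sketched with a tiny example; the symmetric matrix below is made up for illustration, and numpy.linalg is assumed:]

```python
import numpy as np

# A small real symmetric matrix, invented for illustration.
a = np.array([[2.0, 1.0],
              [1.0, 2.0]])

# eigh exploits the symmetry: eigenvalues are guaranteed real and come
# back sorted in ascending order (here approximately [1., 3.]).
w, v = np.linalg.eigh(a)

# eig works on any square matrix, but in general may return complex
# results and makes no ordering promise.
w2, v2 = np.linalg.eig(a)

# Both satisfy the defining property A V = V diag(w).
print(np.allclose(a @ v, v * w))  # True
```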
(Should vectors be changed to eigenvectors in the doc string for eig? Then it would match the doc string for eigh.) Wait, I'm using numpy. In scipy I only see eig (except for banded stuff). From davidgrant at gmail.com Tue Jul 11 17:51:04 2006 From: davidgrant at gmail.com (David Grant) Date: Tue, 11 Jul 2006 14:51:04 -0700 Subject: [SciPy-user] Eigenvalues and eigenvectors In-Reply-To: References: Message-ID: On 7/11/06, Keith Goodman wrote: > > On 7/11/06, David Grant wrote: > > What function should I use to get some eigenvalues and eigenvectors? > There > > are so many and it's difficult to pin down what they are all for. > > Only two of the functions return both eigenvalues and eigenvectors: > > eig --- Eigenvalues and vectors of a square matrix > eigh --- Eigenvalues and eigenvectors of a Hermitian matrix > > I use eigh since my matrices are real and symmetric. > > (Should vectors be changed to eigenvectors in the doc string for eig? > Then it would match the doc string for eigh.) > > Wait, I'm using numpy. In scipy I only see eig (except for banded stuff). Well I'm using numpy too because my scipy is still compiling right now. :-) I see they are in numpy.linalg.linalg. Weird package name. Well it looks like both eig and eigh completely lock up for me.... I compiled numeric without lapack. I'll try re-compiling with it. (I'm using gentoo by the way). -- David Grant Please Note my new email address: davidgrant at gmail.com -------------- next part -------------- An HTML attachment was scrubbed... URL: From kwgoodman at gmail.com Tue Jul 11 18:06:20 2006 From: kwgoodman at gmail.com (Keith Goodman) Date: Tue, 11 Jul 2006 15:06:20 -0700 Subject: [SciPy-user] Eigenvalues and eigenvectors In-Reply-To: References: Message-ID: On 7/11/06, David Grant wrote: > I see they are in numpy.linalg.linalg. Weird package name. Well it looks > like both eig and eigh completely lock up for me.... I compiled numeric > without lapack. I'll try re-compiling with it. 
(I'm using gentoo by the > way). How about atlas? That should speed things up. From davidgrant at gmail.com Tue Jul 11 18:12:29 2006 From: davidgrant at gmail.com (David Grant) Date: Tue, 11 Jul 2006 15:12:29 -0700 Subject: [SciPy-user] matplotlib IndexFormatter missing (re-send) Message-ID: Anyone know what happened to matplotlib.ticker.IndexFormatter? Is there a replacement for it? -- David Grant Please Note my new email address: davidgrant at gmail.com -------------- next part -------------- An HTML attachment was scrubbed... URL: From davidgrant at gmail.com Tue Jul 11 18:11:23 2006 From: davidgrant at gmail.com (David Grant) Date: Tue, 11 Jul 2006 15:11:23 -0700 Subject: [SciPy-user] Eigenvalues and eigenvectors In-Reply-To: References: Message-ID: On 7/11/06, Keith Goodman wrote: > > On 7/11/06, David Grant wrote: > > > I see they are in numpy.linalg.linalg. Weird package name. Well it > looks > > like both eig and eigh completely lock up for me.... I compiled numeric > > without lapack. I'll try re-compiling with it. (I'm using gentoo by the > > way). > > How about atlas? That should speed things up. > Well I was only testing the function with a little 5x5 matrix... so I'm pretty sure it was a hard-lockup. Python was taking up all CPU cycles. Anyways I recompiled with lapack-atlas and it seems to work now. Although I'm not getting the answers I was expecting... probably a bug in my code. We'll see. Dave -- David Grant Please Note my new email address: davidgrant at gmail.com -------------- next part -------------- An HTML attachment was scrubbed... URL: From robert.kern at gmail.com Tue Jul 11 18:31:38 2006 From: robert.kern at gmail.com (Robert Kern) Date: Tue, 11 Jul 2006 17:31:38 -0500 Subject: [SciPy-user] matplotlib IndexFormatter missing (re-send) In-Reply-To: References: Message-ID: <44B426CA.5010504@gmail.com> David Grant wrote: > Anyone know what happened to matplotlib.ticker.IndexFormatter? Is there > a replacement for it? 
You'll probably want to ask on matplotlib-users, not here. https://lists.sourceforge.net/lists/listinfo/matplotlib-users -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From alex.j.ross at gmail.com Tue Jul 11 19:18:27 2006 From: alex.j.ross at gmail.com (Alexander Ross) Date: Tue, 11 Jul 2006 16:18:27 -0700 Subject: [SciPy-user] atleast_1d Message-ID: <45A152A8-A001-41C1-9B60-F6205C5577B1@gmail.com> Why isn't atleast_1d implemented similar to this? def atleast_1d(*arys): res = [] for ary in arys: if len(ary.shape) == 0: ary.shape = (1,) res.append(ary) if len(res) == 1: return res[0] else: return res I need to pass a masked array to this function and have the returned arrays retain the mask. Currently, atleast_1d gives a warning, and returns a filled ndarray. This implementation would solve this problem. - Alex Ross From wbaxter at gmail.com Tue Jul 11 20:34:49 2006 From: wbaxter at gmail.com (Bill Baxter) Date: Wed, 12 Jul 2006 09:34:49 +0900 Subject: [SciPy-user] Eigenvalues and eigenvectors In-Reply-To: References: Message-ID: On 7/12/06, David Grant wrote: > > What function should I use to get some eigenvalues and eigenvectors? There > are so many and it's difficult to pin down what they are all for. > This probably isn't what you meant, but... last I checked there was no function in Scipy that would let you get just *some* eigenvalues and eigenvectors. eig() gets you *all* eigenvals/vecs whether you want them or not. Which is really a bad idea if you're trying to do some sort of eigenvector-based compression/dimension-reduction on a 5000 dimensional dataset. SciPy really needs a wrapper for ARPACK. http://www.caam.rice.edu/software/ARPACK/download.html f2py is supposedly easy to use, so I guess I'm going to have to figure it out eventually. 
Currently, though, I've hit a dead-end trying to compile SciPy on Windows. --bb -------------- next part -------------- An HTML attachment was scrubbed... URL: From ckkart at hoc.net Tue Jul 11 20:50:38 2006 From: ckkart at hoc.net (Christian Kristukat) Date: Wed, 12 Jul 2006 09:50:38 +0900 Subject: [SciPy-user] estimating errors with optimize.leastsq In-Reply-To: <44B3B1FD.90501@gmail.com> References: <44AEFA66.6010207@hoc.net> <44AEFB68.4090803@gmail.com> <44B37291.2000504@hoc.net> <44B3B1FD.90501@gmail.com> Message-ID: <44B4475E.90900@hoc.net> Robert Kern wrote: > Christian Kristukat wrote: >> Robert Kern wrote: >>> Christian Kristukat wrote: >>>> Chiara Caronna wrote: >>>>> 2) Is there any way to get the estimated errors on the fitting parameter? >>>>> Maybe optimize.leastsq is not the right choice? Does anyone has some good >>>>> hints? >>>> Everything you need is in here: >>>> http://www.boulder.nist.gov/mcsd/Staff/JRogers/papers/odr_vcv.dvi >>>> I haven't yet found time/will to dig into it, but I'm definitely interested in a >>>> good error estimation routine. >>> The implementation of the ideas in that paper is in ODRPACK by the same author. >>> It is wrapped as scipy.sandbox.odr . The docstrings are fairly thorough, I >>> think, but please let me know if something needs to be clarified. >> great! I just to tried to build from svn with sandbox.odr enabled. Upon >> importing odr I get the following error: >> >>>>> from scipy.sandbox import odr >> Traceback (most recent call last): >> File "", line 1, in ? >> File "/usr/local/lib/python2.4/site-packages/scipy/sandbox/odr/__init__.py", >> line 55, in ? >> import odrpack >> File "/usr/local/lib/python2.4/site-packages/scipy/sandbox/odr/odrpack.py", >> line 113, in ? >> from scipy.sandbox.odr import __odrpack >> ImportError: cannot import name __odrpack >> >> Looking at site-packages/scipy/sandbox/odr it looks like that the extension >> module has not been built. 
>> >> However I have been able to build the odr module alone. I had to comment out >> line 48 in setup_odr.py (I've no atlas) and in the last line a .todict()) was >> missing. > > D'oh! I could have sworn I had that working. Oh well. It's fixed now. It still gives me an error, obviously beacuse I have no ATLAS installed. Lib/sandbox/odr/setup.py:27: UserWarning: Atlas (http://math-atlas.sourceforge.net/) libraries not found. Directories to search for the libraries can be specified in the numpy/distutils/site.cfg file (section [atlas]) or by setting the ATLAS environment variable. warnings.warn(AtlasNotFoundError.__doc__) Traceback (most recent call last): File "setup.py", line 50, in ? setup_package() File "setup.py", line 42, in setup_package configuration=configuration ) File "/usr/local/lib/python2.4/site-packages/numpy/distutils/core.py", line 140, in setup config = configuration() File "setup.py", line 14, in configuration config.add_subpackage('Lib') File "/usr/local/lib/python2.4/site-packages/numpy/distutils/misc_util.py", line 740, in add_subpackage caller_level = 2) File "/usr/local/lib/python2.4/site-packages/numpy/distutils/misc_util.py", line 723, in get_subpackage caller_level = caller_level + 1) File "/usr/local/lib/python2.4/site-packages/numpy/distutils/misc_util.py", line 670, in _get_configuration_from_setup_py config = setup_module.configuration(*args) File "./Lib/setup.py", line 17, in configuration config.add_subpackage('sandbox') File "/usr/local/lib/python2.4/site-packages/numpy/distutils/misc_util.py", line 740, in add_subpackage caller_level = 2) File "/usr/local/lib/python2.4/site-packages/numpy/distutils/misc_util.py", line 723, in get_subpackage caller_level = caller_level + 1) File "/usr/local/lib/python2.4/site-packages/numpy/distutils/misc_util.py", line 670, in _get_configuration_from_setup_py config = setup_module.configuration(*args) File "Lib/sandbox/setup.py", line 47, in configuration config.add_subpackage('odr') File 
"/usr/local/lib/python2.4/site-packages/numpy/distutils/misc_util.py", line 740, in add_subpackage caller_level = 2) File "/usr/local/lib/python2.4/site-packages/numpy/distutils/misc_util.py", line 723, in get_subpackage caller_level = caller_level + 1) File "/usr/local/lib/python2.4/site-packages/numpy/distutils/misc_util.py", line 670, in _get_configuration_from_setup_py config = setup_module.configuration(*args) File "Lib/sandbox/odr/setup.py", line 46, in configuration library_dirs=atlas_info['library_dirs'], KeyError: 'library_dirs' Christian From davidgrant at gmail.com Tue Jul 11 20:57:27 2006 From: davidgrant at gmail.com (David Grant) Date: Tue, 11 Jul 2006 17:57:27 -0700 Subject: [SciPy-user] Eigenvalues and eigenvectors In-Reply-To: References: Message-ID: On 7/11/06, Bill Baxter wrote: > > On 7/12/06, David Grant wrote: > > > > What function should I use to get some eigenvalues and eigenvectors? > > There are so many and it's difficult to pin down what they are all for. > > > > This probably isn't what you meant, but... > last I checked there was no function in Scipy that would let you get just > *some* eigenvalues and eigenvectors. eig() gets you *all* eigenvals/vecs > whether you want them or not. Which is really a bad idea if you're trying > to do some sort of eigenvector-based compression/dimension-reduction on a > 5000 dimensional dataset. > > SciPy really needs a wrapper for ARPACK. > http://www.caam.rice.edu/software/ARPACK/download.html > > f2py is supposedly easy to use, so I guess I'm going to have to figure it > out eventually. Currently, though, I've hit a dead-end trying to compile > SciPy on Windows. > Funny you mention it, I could have really used a wrapper for ARPACK last year... I actually started to work on doing it but it was over my head. -- David Grant Please Note my new email address: davidgrant at gmail.com -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From wbaxter at gmail.com Tue Jul 11 21:08:46 2006 From: wbaxter at gmail.com (Bill Baxter) Date: Wed, 12 Jul 2006 10:08:46 +0900 Subject: [SciPy-user] Eigenvalues and eigenvectors In-Reply-To: References: Message-ID: On 7/12/06, David Grant wrote: > > > SciPy really needs a wrapper for ARPACK. > > http://www.caam.rice.edu/software/ARPACK/download.html > > > > f2py is supposedly easy to use, so I guess I'm going to have to figure > > it out eventually. Currently, though, I've hit a dead-end trying to compile > > SciPy on Windows. > > > > Funny you mention it, I could have really used a wrapper for ARPACK last > year... I actually started to work on doing it but it was over my head. > That doesn't bode well for my chances then, seeing as how I can't even get SciPy to compile. :-) What part was over your head? Was it the f2py part or figuring out how to pass ARPACK the data? I think there have been a few examples of wrapping code in various ways added recently either in the distro or in the Wiki. I was hoping those would make it clear. --bb -------------- next part -------------- An HTML attachment was scrubbed... URL: From ckkart at hoc.net Tue Jul 11 21:54:05 2006 From: ckkart at hoc.net (Christian Kristukat) Date: Wed, 12 Jul 2006 10:54:05 +0900 Subject: [SciPy-user] estimating errors with optimize.leastsq In-Reply-To: <44B4475E.90900@hoc.net> References: <44AEFA66.6010207@hoc.net> <44AEFB68.4090803@gmail.com> <44B37291.2000504@hoc.net> <44B3B1FD.90501@gmail.com> <44B4475E.90900@hoc.net> Message-ID: <44B4563D.6070403@hoc.net> Christian Kristukat wrote: > Robert Kern wrote: >> Christian Kristukat wrote: >>> Robert Kern wrote: >>>> Christian Kristukat wrote: >>>>> Chiara Caronna wrote: >>>>>> 2) Is there any way to get the estimated errors on the fitting parameter? >>>>>> Maybe optimize.leastsq is not the right choice? Does anyone has some good >>>>>> hints? 
>>>>> Everything you need is in here: >>>>> http://www.boulder.nist.gov/mcsd/Staff/JRogers/papers/odr_vcv.dvi >>>>> I haven't yet found time/will to dig into it, but I'm definitely interested in a >>>>> good error estimation routine. >>>> The implementation of the ideas in that paper is in ODRPACK by the same author. >>>> It is wrapped as scipy.sandbox.odr . The docstrings are fairly thorough, I >>>> think, but please let me know if something needs to be clarified. >>> great! I just to tried to build from svn with sandbox.odr enabled. Upon >>> importing odr I get the following error: >>> >>>>>> from scipy.sandbox import odr >>> Traceback (most recent call last): >>> File "", line 1, in ? >>> File "/usr/local/lib/python2.4/site-packages/scipy/sandbox/odr/__init__.py", >>> line 55, in ? >>> import odrpack >>> File "/usr/local/lib/python2.4/site-packages/scipy/sandbox/odr/odrpack.py", >>> line 113, in ? >>> from scipy.sandbox.odr import __odrpack >>> ImportError: cannot import name __odrpack >>> >>> Looking at site-packages/scipy/sandbox/odr it looks like that the extension >>> module has not been built. >>> >>> However I have been able to build the odr module alone. I had to comment out >>> line 48 in setup_odr.py (I've no atlas) and in the last line a .todict()) was >>> missing. >> D'oh! I could have sworn I had that working. Oh well. It's fixed now. 
> In addition I had to make some changes to odrpack.py to get it to work with numpy from svn: --- odrpack.py 2006-07-12 10:49:05.000000000 +0900 +++ odrpack_mod.py 2006-07-12 10:38:44.000000000 +0900 @@ -109,7 +109,7 @@ Robert Kern robert.kern at gmail.com """ - +import numpy from scipy.sandbox.odr import __odrpack from types import NoneType @@ -740,7 +740,7 @@ x_s = list(self.data.x.shape) - if type(self.data.y) is numpy.ArrayType: + if type(self.data.y) is numpy.ndarray: y_s = list(self.data.y.shape) if self.model.implicit: raise odr_error, "an implicit model cannot use response data" @@ -853,12 +853,12 @@ lwork = (18 + 11*p + p*p + m + m*m + 4*n*q + 2*n*m + 2*n*q*p + 5*q + q*(p+m) + ldwe*ld2we*q) - if type(self.work) is numpy.ArrayType and self.work.shape == (lwork,)\ - and self.work.dtype == numpy.Float: + if type(self.work) is numpy.ndarray and self.work.shape == (lwork,)\ + and self.work.dtype == numpy.float64: # the existing array is fine return else: - self.work = numpy.zeros((lwork,), numpy.Float) + self.work = numpy.zeros((lwork,), numpy.float64) def set_job(self, fit_type=None, deriv=None, var_calc=None, del_init=None, restart=None): Regards, Christian From davidgrant at gmail.com Wed Jul 12 01:32:33 2006 From: davidgrant at gmail.com (David Grant) Date: Tue, 11 Jul 2006 22:32:33 -0700 Subject: [SciPy-user] Eigenvalues and eigenvectors In-Reply-To: References: Message-ID: On 7/11/06, Bill Baxter wrote: > > > On 7/12/06, David Grant wrote: > > > > > SciPy really needs a wrapper for ARPACK. > > > http://www.caam.rice.edu/software/ARPACK/download.html > > > > > > f2py is supposedly easy to use, so I guess I'm going to have to figure > > > it out eventually. Currently, though, I've hit a dead-end trying to compile > > > SciPy on Windows. > > > > > > > Funny you mention it, I could have really used a wrapper for ARPACK last > > year... I actually started to work on doing it but it was over my head. 
> > > > > That doesn't bode well for my chances then, seeing as how I can't even get > SciPy to compile. :-) What part was over your head? Was it the f2py part > or figuring out how to pass ARPACK the data? I think there have been a few > examples of wrapping code in various ways added recently either in the > distro or in the Wiki. I was hoping those would make it clear. > > Well I can't remember exactly. I was looking in to it on work time and so I couldn't spend too long on it. Maybe I'll look into it again some time (it's an interesting project) but I'm not motivated enough by it personally right now. -- David Grant Please Note my new email address: davidgrant at gmail.com -------------- next part -------------- An HTML attachment was scrubbed... URL: From a.u.r.e.l.i.a.n at gmx.net Wed Jul 12 02:54:02 2006 From: a.u.r.e.l.i.a.n at gmx.net (Johannes Loehnert) Date: Wed, 12 Jul 2006 08:54:02 +0200 Subject: [SciPy-user] Eigenvalues and eigenvectors In-Reply-To: References: Message-ID: <200607120854.02658.a.u.r.e.l.i.a.n@gmx.net> Hi, > This probably isn't what you meant, but... > last I checked there was no function in Scipy that would let you get just > *some* eigenvalues and eigenvectors. eig() gets you *all* eigenvals/vecs > whether you want them or not. Which is really a bad idea if you're trying > to do some sort of eigenvector-based compression/dimension-reduction on a > 5000 dimensional dataset. scipy.linalg.eig_banded can be used to obtain a subset of eigenvectors (by giving limits for the eigenvalues or by specifying an index range). However it is only applicable to symmetric and hermitian matrices (preferably band-limited, of course). Internally it wraps the lapack routines dsbevx / zhbevx. I do not know if it is already contained in the latest scipy release, maybe you need to build from svn. 
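[A sketch of the index-range selection described above, using a standard symmetric tridiagonal test matrix invented for illustration:]

```python
import numpy as np
from scipy.linalg import eig_banded

n = 100
# Symmetric tridiagonal matrix (2 on the diagonal, -1 off it) stored in
# the upper band form eig_banded expects: a_band[u + i - j, j] = a[i, j].
a_band = np.zeros((2, n))
a_band[0, 1:] = -1.0   # superdiagonal
a_band[1, :] = 2.0     # main diagonal

# Only the two smallest eigenpairs (index range 0..1) instead of all n.
w, v = eig_banded(a_band, select='i', select_range=(0, 1))

# This matrix has known eigenvalues 2 - 2*cos(k*pi/(n+1)), k = 1..n.
exact = 2.0 - 2.0 * np.cos(np.arange(1, 3) * np.pi / (n + 1))
print(np.allclose(w, exact))  # True
```

[select='v' with select_range=(lo, hi) would instead return every eigenpair whose eigenvalue falls in that interval.]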
Johannes From nwagner at iam.uni-stuttgart.de Wed Jul 12 03:03:43 2006 From: nwagner at iam.uni-stuttgart.de (Nils Wagner) Date: Wed, 12 Jul 2006 09:03:43 +0200 Subject: [SciPy-user] Plea for help with cobyla test In-Reply-To: <44B417AD.2080907@ee.byu.edu> References: <44B417AD.2080907@ee.byu.edu> Message-ID: <44B49ECF.2040002@iam.uni-stuttgart.de> Travis Oliphant wrote: > Nils has reported that the cobyla test passes on 64-bit system. It is > currently failing on 32-bit systems and I'm not sure why. Could anybody > verify that the cobyla test passes on a 64-bit system? > > Is there anyone who has access to both 32 and 64-bit systems who would > be willing to help me figure out the problem? I'm not happy that it > used to work and now is "slightly" off. I've learned not to ignore > little errors like this as it may show a problem with numpy (or f2py). > > -Travis > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > Travis, I have access to both 32 and 64-bit systems. Please let me know what I can do to figure out the problem. Maybe you can provide me with a short test script. Anyway I am still confused by the constraints in the test example

con1 = lambda x: x[0]**2 + x[1]**2 - 25
con2 = lambda x: -con1(x)

The first constraint is a circle with radius 5 centered at (x=y=0). x^2+y^2 \ge 25 So feasible points lie outside that circle. The second constraint is again a circle with radius 5 centered at (x=y=0) -x^2-y^2+25 \ge 0 or x^2+y^2 \le 25 Hence feasible points lie inside the circle. Both constraints are fulfilled if x^2+y^2=25. AFAIK it's difficult to check equalities wrt floating point numbers. Am I missing something ? cons -- a sequence of functions that all must be >=0 (a single function if only 1 constraint) Nils P.S.
There is another issue wrt to 32 and 64-bit systems --> http://projects.scipy.org/scipy/scipy/ticket/223 From nwagner at iam.uni-stuttgart.de Wed Jul 12 03:11:40 2006 From: nwagner at iam.uni-stuttgart.de (Nils Wagner) Date: Wed, 12 Jul 2006 09:11:40 +0200 Subject: [SciPy-user] Eigenvalues and eigenvectors In-Reply-To: <200607120854.02658.a.u.r.e.l.i.a.n@gmx.net> References: <200607120854.02658.a.u.r.e.l.i.a.n@gmx.net> Message-ID: <44B4A0AC.9080604@iam.uni-stuttgart.de> Johannes Loehnert wrote: > Hi, > > >> This probably isn't what you meant, but... >> last I checked there was no function in Scipy that would let you get just >> *some* eigenvalues and eigenvectors. eig() gets you *all* eigenvals/vecs >> whether you want them or not. Which is really a bad idea if you're trying >> to do some sort of eigenvector-based compression/dimension-reduction on a >> 5000 dimensional dataset. >> > > scipy.linalg.eig_banded can be used to obtain a subset of eigenvectors (by > giving limits for the eigenvalues or by specifying an index range). However > it is only applicable to symmetric and hermitian matrices (preferably > band-limited, of course). Internally it wraps the lapack routines dsbevx / > zhbevx. > > I do not know if it is already contained in the latest scipy release, maybe > you need to build from svn. > > Johannes > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > And it's only applicable to standard eigenvalue problems A x = \lambda x. How about generalized eigenvalue problems A x = \lambda B x with B spd (symmetric positive definite) ? Nils From William.Hunter at mmhgroup.com Wed Jul 12 03:23:44 2006 From: William.Hunter at mmhgroup.com (William Hunter) Date: Wed, 12 Jul 2006 09:23:44 +0200 Subject: [SciPy-user] MATLAB code faster than Python - my second last reply. 
Message-ID: <98AA9E629A145D49A1A268FA6DBA70B424904C@mmihserver01.MMIH01.local> I've been away for a couple of days. I'll start implementing the improvements on the original code tonight and also do the same with the MATLAB code, so that we can compare apples... Being a mechanical engineer I generally need to see gears and smoke to understand most things, I'm learning a whole lot very quickly here so thanks to everyone who helped. My last reply will include feedback with regards to the speed difference between the MATLAB and Python scripts. Also, can anyone perhaps venture a guess (ballpark-ish) if the code (MIF) in question were done in C and then wrapped? I've never done this before but know it's possible. WH From a.u.r.e.l.i.a.n at gmx.net Wed Jul 12 03:29:17 2006 From: a.u.r.e.l.i.a.n at gmx.net (Johannes Loehnert) Date: Wed, 12 Jul 2006 09:29:17 +0200 Subject: [SciPy-user] Eigenvalues and eigenvectors In-Reply-To: <44B4A0AC.9080604@iam.uni-stuttgart.de> References: <200607120854.02658.a.u.r.e.l.i.a.n@gmx.net> <44B4A0AC.9080604@iam.uni-stuttgart.de> Message-ID: <200607120929.17742.a.u.r.e.l.i.a.n@gmx.net> Hi, > > And it's only applicable to standard eigenvalue problems > A x = \lambda x. > > How about generalized eigenvalue problems > A x = \lambda B x with B spd (symmetric positive definite) ? Well, having a quick look into the lapack docs, I found the following routines: file dsygvx.f dsygvx.f plus dependencies prec double for Computes selected eigenvalues, and optionally, the eigenvectors of , a generalized symmetric-definite generalized eigenproblem, , Ax= lambda Bx, ABx= lambda x, or BAx= lambda x. gams d4b1 file dspevx.f dspevx.f plus dependencies prec double for Computes selected eigenvalues and eigenvectors of a , symmetric matrix in packed storage. 
gams d4a1 file dspgvx.f dspgvx.f plus dependencies prec double for Computes selected eigenvalues, and optionally, eigenvectors of , a generalized symmetric-definite generalized eigenproblem, Ax= lambda , Bx, ABx= lambda x, or BAx= lambda x, where A and B are in packed , storage. gams d4b1 (I presume there are corresponding complex versions, too.) However, these are not included in scipy right now. Johannes From nwagner at iam.uni-stuttgart.de Wed Jul 12 07:37:29 2006 From: nwagner at iam.uni-stuttgart.de (Nils Wagner) Date: Wed, 12 Jul 2006 13:37:29 +0200 Subject: [SciPy-user] Eigenvalues and eigenvectors In-Reply-To: References: Message-ID: <44B4DEF9.9020303@iam.uni-stuttgart.de> Bill Baxter wrote: > On 7/12/06, *David Grant* > wrote: > > What function should I use to get some eigenvalues and > eigenvectors? There are so many and it's difficult to pin down > what they are all for. > > > This probably isn't what you meant, but... > last I checked there was no function in Scipy that would let you get > just *some* eigenvalues and eigenvectors. eig() gets you *all* > eigenvals/vecs whether you want them or not. Which is really a bad > idea if you're trying to do some sort of eigenvector-based > compression/dimension-reduction on a 5000 dimensional dataset. > > SciPy really needs a wrapper for ARPACK. > http://www.caam.rice.edu/software/ARPACK/download.html > +1 but with my programming skills it will take too much time. Nils > f2py is supposedly easy to use, so I guess I'm going to have to figure > it out eventually. Currently, though, I've hit a dead-end trying to > compile SciPy on Windows. 
> > --bb > > ------------------------------------------------------------------------ > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > From marcelomgarcia at gmail.com Wed Jul 12 11:45:08 2006 From: marcelomgarcia at gmail.com (Marcelo Maia Garcia) Date: Wed, 12 Jul 2006 12:45:08 -0300 Subject: [SciPy-user] Plea for help with cobyla test In-Reply-To: <44B417AD.2080907@ee.byu.edu> References: <44B417AD.2080907@ee.byu.edu> Message-ID: On 7/11/06, Travis Oliphant wrote: > > > Nils has reported that the cobyla test passes on 64-bit system. It is > currently failing on 32-bit systems and I'm not sure why. Could anybody > verify that the cobyla test passes on a 64-bit system? > > Is there anyone who has access to both 32 and 64-bit systems who would > be willing to help me figure out the problem? I'm not happy that it > used to work and now is "slightly" off. I've learned not to ignore > little errors like this as it may show a problem with numpy (or f2py). > > -Travis > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > Hi Travis et al. I can help if you explain the tests that you want to do. I have 32-bit (Xeon) and 64-bit (Itanium2) systems. Marcelo Garcia -------------- next part -------------- An HTML attachment was scrubbed... URL: From nwagner at iam.uni-stuttgart.de Wed Jul 12 11:48:13 2006 From: nwagner at iam.uni-stuttgart.de (Nils Wagner) Date: Wed, 12 Jul 2006 17:48:13 +0200 Subject: [SciPy-user] Plea for help with cobyla test In-Reply-To: References: <44B417AD.2080907@ee.byu.edu> Message-ID: <44B519BD.7080504@iam.uni-stuttgart.de> Marcelo Maia Garcia wrote: > > > On 7/11/06, *Travis Oliphant* > wrote: > > > Nils has reported that the cobyla test passes on 64-bit system. It is > currently failing on 32-bit systems and I'm not sure why.
Could > anybody > verify that the cobyla test passes on a 64-bit system? > > Is there anyone who has access to both 32 and 64-bit systems who > would > be willing to help me figure out the problem? I'm not happy that it > used to work and now is "slightly" off. I've learned not to ignore > little errors like this as it may show a problem with numpy (or > f2py). > > -Travis > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > > > > Hi Travis et al. > > I can help if you explain the tests that you want to do. I have 32-bit > (Xeon) and 64-bit (Itanium2) systems. > > Marcelo Garcia > > ------------------------------------------------------------------------ > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > Did you check if scipy.test(1,10) passed without any failure on your systems ? Nils From eugene.druker at gmail.com Wed Jul 12 12:24:34 2006 From: eugene.druker at gmail.com (Eugene Druker) Date: Wed, 12 Jul 2006 16:24:34 +0000 (UTC) Subject: [SciPy-user] ANN: build grids from scattered 2D data Message-ID: Dear SciPy users, I'd like to announce the release of version 0.5 of the build_grid module: http://projects.scipy.org/scipy/scipy/changeset/2003 The module builds regular 2D grids from scattered 2D data. The extension module is written in C, with no dependencies but standard python.

Features:
-------------------
- Input:
  - xyz-file of text lines with x,y,z values, or
  - lists of x,y,z data.
  - grid boundaries are set from (xmin,ymin), square cell size and grid sizes in nodes.
  - acceptable error level - absolute or relative. Default is 0.0 - but it is not an 'interpolation' because of moving values to grid nodes.
- Output:
  - list of node values to python caller, or
  - file of (x y z) triples (text)
- Method (empirical):
  - move values to nodes, then hierarchically smooth and fill up nodes
- Tests:
  - on Windows XP with MinGW 2 and VC++ 6, on ubuntu with gcc 4.
  - speed is about 10**5 nodes/second (with 1.6 GHz)

More detailed documentation can be found in README.txt and examples are in test_build_grid.py. Special thanks to Travis Oliphant for help and posting. Thanks for your attention, Eugene Druker eugene.druker at gmail.com From oliphant.travis at ieee.org Wed Jul 12 16:16:21 2006 From: oliphant.travis at ieee.org (Travis Oliphant) Date: Wed, 12 Jul 2006 14:16:21 -0600 Subject: [SciPy-user] Plea for help with cobyla test In-Reply-To: <44B49ECF.2040002@iam.uni-stuttgart.de> References: <44B417AD.2080907@ee.byu.edu> <44B49ECF.2040002@iam.uni-stuttgart.de> Message-ID: <44B55895.70101@ieee.org> Nils Wagner wrote: > Travis Oliphant wrote: > >> Nils has reported that the cobyla test passes on 64-bit system. It is >> currently failing on 32-bit systems and I'm not sure why. Could anybody >> verify that the cobyla test passes on a 64-bit system? >> >> Is there anyone who has access to both 32 and 64-bit systems who would >> be willing to help me figure out the problem? I'm not happy that it >> used to work and now is "slightly" off. I've learned not to ignore >> little errors like this as it may show a problem with numpy (or f2py). >> >> -Travis >> >> _______________________________________________ >> SciPy-user mailing list >> SciPy-user at scipy.org >> http://projects.scipy.org/mailman/listinfo/scipy-user >> >> > > Travis, > > I have access to both 32 and 64-bit systems. Please let me know what I > can do to figure out the problem. > Maybe you can provide me with a short test script.
> > Anyway I am still confused by the constraints in the test example > > con1 = lambda x: x[0]**2 + x[1]**2 - 25 > con2 = lambda x: -con1(x) > > > The first constraint is a circle with radius 5 centered at (x=y=0). > x^2+y^2 \ge 25 > > So feasible points lie outside that circle. > > The second constraint is again a circle with radius 5 centered at (x=y=0) > > -x^2-y^2+25 \ge 0 or x^2+y^2 \le 25 > > Hence feasible points lie inside the circle. > > Both constraints are fulfilled if x^2+y^2=25. AFAIK it's difficult to > check equalities wrt floating point numbers. > > Am I missing something ? > No, it's just that the constraints are never doing equality testing individually (only together). So, how well this can do will depend on the specifics of cobyla.f (which I'm not familiar with). But, this is largely irrelevant to our problem because what we are trying to track down is why the test stopped working some time ago. One approach is to figure out at which point the test stopped working (but this can be very time consuming as we must install all the versions). If there are people out there where the cobyla test *does not* fail, then that could be a start. The hope is that if it's working on 64-bit systems we can figure out what is different on 32-bit systems by printing some of the internal values being calculated and comparing them on the two systems. -Travis From fullung at gmail.com Wed Jul 12 16:33:34 2006 From: fullung at gmail.com (Albert Strasheim) Date: Wed, 12 Jul 2006 22:33:34 +0200 Subject: [SciPy-user] Eigenvalues and eigenvectors In-Reply-To: <44B4DEF9.9020303@iam.uni-stuttgart.de> Message-ID: <00ea01c6a5f2$721b7ab0$0100000a@dsp.sun.ac.za> Hello all Nils Wagner wrote: > Bill Baxter wrote: > > On 7/12/06, *David Grant* > > wrote: > > > > What function should I use to get some eigenvalues and > > eigenvectors? There are so many and it's difficult to pin down > > what they are all for. > > > > > > This probably isn't what you meant, but...
> > last I checked there was no function in Scipy that would let you get > > just *some* eigenvalues and eigenvectors. eig() gets you *all* > > eigenvals/vecs whether you want them or not. Which is really a bad > > idea if you're trying to do some sort of eigenvector-based > > compression/dimension-reduction on a 5000 dimensional dataset. > > > > SciPy really needs a wrapper for ARPACK. > > http://www.caam.rice.edu/software/ARPACK/download.html > > > +1 Definitely another +1. MATLAB's eigs (sparse eig) also uses some ARPACK functions. I'd like to tackle this problem, but I'll probably only get to it at the end of August (if someone doesn't beat me to it, hopefully). Regards, Albert From strawman at astraw.com Wed Jul 12 16:52:17 2006 From: strawman at astraw.com (Andrew Straw) Date: Wed, 12 Jul 2006 13:52:17 -0700 Subject: [SciPy-user] Plea for help with cobyla test In-Reply-To: <44B55895.70101@ieee.org> References: <44B417AD.2080907@ee.byu.edu> <44B49ECF.2040002@iam.uni-stuttgart.de> <44B55895.70101@ieee.org> Message-ID: <44B56101.3010103@astraw.com> Travis Oliphant wrote: > >If there are people out there where the cobyla test *does not* fail, >then that could be a start. > >The hope is that if it's working on 64-bit systems we can figure out >what is different on 32-bit systems by printing some of the internal >values being calculated and comparing them on the two systems. > > Travis, it looks like it's working on my 64 bit side, but not on my 32 bit linux32 chroot in the same system. Both are Ubuntu 6.06 (Dapper) with stock gcc et al. I don't think I'm using ATLAS on either.
64-bit mode ------------------------------------------- Ran 1566 tests in 55.891s OK >>> scipy.__version__ '0.5.0.2085' >>> numpy.__version__ '0.9.9.2811' 32-bit linux32 chroot --------------------------------- ====================================================================== FAIL: check_simple (scipy.optimize.tests.test_cobyla.test_cobyla) ---------------------------------------------------------------------- Traceback (most recent call last): File "/home/astraw/py2.4-linux-i686/lib/python2.4/site-packages/scipy/optimize/tests/test_cobyla.py", line 20, in check_simple assert_almost_equal(x, [x0, x1], decimal=5) File "/home/astraw/py2.4-linux-i686/lib/python2.4/site-packages/numpy/testing/utils.py", line 152, in assert_almost_equal return assert_array_almost_equal(actual, desired, decimal, err_msg) File "/home/astraw/py2.4-linux-i686/lib/python2.4/site-packages/numpy/testing/utils.py", line 228, in assert_array_almost_equal header='Arrays are not almost equal') File "/home/astraw/py2.4-linux-i686/lib/python2.4/site-packages/numpy/testing/utils.py", line 213, in assert_array_compare assert cond, msg AssertionError: Arrays are not almost equal (mismatch 100.0%) x: array([ 4.957975 , 0.64690335]) y: array([ 4.95535625, 0.66666667]) ---------------------------------------------------------------------- Ran 1566 tests in 62.534s FAILED (failures=1) >>> scipy.__version__ '0.5.0.2085' >>> numpy.__version__ '0.9.9.2811' From cookedm at physics.mcmaster.ca Wed Jul 12 16:55:46 2006 From: cookedm at physics.mcmaster.ca (David M. Cooke) Date: Wed, 12 Jul 2006 16:55:46 -0400 Subject: [SciPy-user] Plea for help with cobyla test In-Reply-To: <44B417AD.2080907@ee.byu.edu> References: <44B417AD.2080907@ee.byu.edu> Message-ID: <20060712165546.2aba0736@arbutus.physics.mcmaster.ca> On Tue, 11 Jul 2006 15:27:09 -0600 Travis Oliphant wrote: > > Nils has reported that the cobyla test passes on 64-bit system. It is > currently failing on 32-bit systems and I'm not sure why. 
Could anybody > verify that the cobyla test passes on a 64-bit system? Works for me on my 64-bit system. -- |>|\/|< /--------------------------------------------------------------------------\ |David M. Cooke http://arbutus.physics.mcmaster.ca/dmc/ |cookedm at physics.mcmaster.ca From wbaxter at gmail.com Wed Jul 12 20:17:19 2006 From: wbaxter at gmail.com (Bill Baxter) Date: Thu, 13 Jul 2006 09:17:19 +0900 Subject: [SciPy-user] Eigenvalues and eigenvectors In-Reply-To: <00ea01c6a5f2$721b7ab0$0100000a@dsp.sun.ac.za> References: <44B4DEF9.9020303@iam.uni-stuttgart.de> <00ea01c6a5f2$721b7ab0$0100000a@dsp.sun.ac.za> Message-ID: On 7/13/06, Albert Strasheim wrote: > > Hello all > > Nils Wagner wrote: > > Bill Baxter wrote: > > > On 7/12/06, *David Grant* > > > wrote: > > > > > > > > > SciPy really needs a wrapper for ARPACK. > > > http://www.caam.rice.edu/software/ARPACK/download.html > > > > > +1 > > Definitely another +1. > > MATLAB's eigs (sparse eig) also uses some ARPACK functions. I'd like to > tackle this problem, but I'll probably only get to it at the end of August (if > someone doesn't beat me to it, hopefully). I threw out the name ARPACK there like I'm some kind of ARPACK expert, but the truth is that all I know about it is what you just said -- it's what Matlab uses for eigs, and I want an eigs replacement too. :-) My time frame for actually needing it is not immediate either. But I'm trying to inch my way towards being able to make contributions like that. Working my way through the prerequisites like getting my own numpy and scipy to compile, etc. --bb -------------- next part -------------- An HTML attachment was scrubbed...
URL: From tara at physics.usyd.edu.au Wed Jul 12 21:25:24 2006 From: tara at physics.usyd.edu.au (Tara Murphy) Date: Thu, 13 Jul 2006 11:25:24 +1000 (EST) Subject: [SciPy-user] Problem with AstroLib - coords-0.2 Message-ID: Hi, I've just started playing around with coords-0.2 and am having a problem reproducing the example shown on the wiki: http://www.scipy.org/AstroLibCoordsSnapshot I am using Python2.4 and I've pasted my session below. When I try ob.dd() and ob.j2000() the answers are not the same: >>> import coords as C >>> ob=C.Position('12:34:45.34 -23:42:32.6') >>> ob.hmsdms() '12:34:45.340 -23:42:32.600' >>> ob.dd() (188.68891666666667, -23.709055555555555) >>> ob.j2000() (-171.31108333333333, -23.709055555555555) Is this something I'm misinterpreting, or is it a bug? thanks, Tara ----- Dr. Tara Murphy ARC Postdoctoral Fellow School of IT | School of Physics Room 448 | Room 565 University of Sydney | University of Sydney P: +61 2 9351 4723 | P: +61 2 9351 3041 E: tm at it.usyd.edu.au | E: tara at physics.usyd.edu.au http://www.it.usyd.edu.au/~info1903 From davidgrant at gmail.com Thu Jul 13 03:16:16 2006 From: davidgrant at gmail.com (David Grant) Date: Thu, 13 Jul 2006 00:16:16 -0700 Subject: [SciPy-user] Eigenvalues and eigenvectors In-Reply-To: <200607120854.02658.a.u.r.e.l.i.a.n@gmx.net> References: <200607120854.02658.a.u.r.e.l.i.a.n@gmx.net> Message-ID: On 7/11/06, Johannes Loehnert wrote: > > Hi, > > > This probably isn't what you meant, but... > > last I checked there was no function in Scipy that would let you get > just > > *some* eigenvalues and eigenvectors. eig() gets you *all* > eigenvals/vecs > > whether you want them or not. Which is really a bad idea if you're > trying > > to do some sort of eigenvector-based compression/dimension-reduction on > a > > 5000 dimensional dataset. > > scipy.linalg.eig_banded can be used to obtain a subset of eigenvectors (by > giving limits for the eigenvalues or by specifying an index range). 
> However > it is only applicable to symmetric and hermitian matrices (preferably > band-limited, of course). Internally it wraps the lapack routines dsbevx / > zhbevx. > > I do not know if it is already contained in the latest scipy release, > maybe > you need to build from svn. > Thanks. It doesn't look like it is in scipy-0.4.9 -- David Grant Please Note my new email address: davidgrant at gmail.com -------------- next part -------------- An HTML attachment was scrubbed... URL: From nwagner at iam.uni-stuttgart.de Thu Jul 13 03:20:27 2006 From: nwagner at iam.uni-stuttgart.de (Nils Wagner) Date: Thu, 13 Jul 2006 09:20:27 +0200 Subject: [SciPy-user] Eigenvalues and eigenvectors In-Reply-To: References: <200607120854.02658.a.u.r.e.l.i.a.n@gmx.net> Message-ID: <44B5F43B.6050605@iam.uni-stuttgart.de> David Grant wrote: > > > On 7/11/06, *Johannes Loehnert* > wrote: > > Hi, > > > This probably isn't what you meant, but... > > last I checked there was no function in Scipy that would let you > get just > > *some* eigenvalues and eigenvectors. eig() gets you *all* > eigenvals/vecs > > whether you want them or not. Which is really a bad idea if > you're trying > > to do some sort of eigenvector-based > compression/dimension-reduction on a > > 5000 dimensional dataset. > > scipy.linalg.eig_banded can be used to obtain a subset of > eigenvectors (by > giving limits for the eigenvalues or by specifying an index > range). However > it is only applicable to symmetric and hermitian matrices (preferably > band-limited, of course). Internally it wraps the lapack routines > dsbevx / > zhbevx. > > I do not know if it is already contained in the latest scipy > release, maybe > you need to build from svn. > > > Thanks. 
It doesn't look like it is in scipy-0.4.9 > > -- > David Grant > Please Note my new email address: davidgrant at gmail.com > > ------------------------------------------------------------------------ > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > http://projects.scipy.org/scipy/scipy/changeset/2030 Nils From stefan at sun.ac.za Thu Jul 13 06:11:39 2006 From: stefan at sun.ac.za (Stefan van der Walt) Date: Thu, 13 Jul 2006 12:11:39 +0200 Subject: [SciPy-user] Problem with AstroLib - coords-0.2 In-Reply-To: References: Message-ID: <20060713101139.GG23554@mentat.za.net> Hi Tara On Thu, Jul 13, 2006 at 11:25:24AM +1000, Tara Murphy wrote: > I've just started playing around with coords-0.2 and am having a problem > reproducing the example shown on the wiki: > > http://www.scipy.org/AstroLibCoordsSnapshot > > I am using Python2.4 and I've pasted my session below. When I try ob.dd() > and ob.j2000() the answers are not the same: > > >>> import coords as C > >>> ob=C.Position('12:34:45.34 -23:42:32.6') > >>> ob.hmsdms() > '12:34:45.340 -23:42:32.600' > >>> ob.dd() > (188.68891666666667, -23.709055555555555) > >>> ob.j2000() > (-171.31108333333333, -23.709055555555555) > > > Is this something I'm misinterpreting, or is it a bug? I see the same results on my machine, so I don't think you are doing anything wrong. However, this package isn't part of scipy, and your question looks to be of a field-specific nature, so it will be better answered by the authors of coords themselves (some of them might be on this list). 
Best way to draw their attention is probably to file a ticket at http://projects.scipy.org/astropy/astrolib/wiki/WikiStart Regards St?fan From listservs at mac.com Thu Jul 13 16:50:13 2006 From: listservs at mac.com (listservs at mac.com) Date: Thu, 13 Jul 2006 16:50:13 -0400 Subject: [SciPy-user] fftpack import error on OSX Message-ID: <9993B916-3D43-4461-879C-81406CDDF27C@mac.com> -----BEGIN PGP SIGNED MESSAGE----- Hash: SHA1 I am able to get scipy built now on OSX, but importing causes the following error. Usually this error is associated with trying to build with gcc 4.0 rather than 3.3, but I used 3.3 throughout. Any ideas? In [1]: from scipy import * - ------------------------------------------------------------------------ - --- exceptions.ImportError Traceback (most recent call last) /Users/chris/ /Library/Frameworks/Python.framework/Versions/2.4/lib/python2.4/site- packages/scipy/fftpack/__init__.py 8 from fftpack_version import fftpack_version as __version__ 9 - ---> 10 from basic import * global basic = undefined 11 from pseudo_diffs import * 12 from helper import * /Library/Frameworks/Python.framework/Versions/2.4/lib/python2.4/site- packages/scipy/fftpack/basic.py 11 from numpy import asarray, zeros, swapaxes, integer, array 12 import numpy - ---> 13 import _fftpack as fftpack global _fftpack = undefined global as = undefined fftpack = undefined 14 15 import atexit ImportError: Failure linking new module: /Library/Frameworks/ Python.framework/Versions/2.4/lib/python2.4/site-packages/scipy/ fftpack/_fftpack.so: Symbol not found: _fprintf$LDBLStub Referenced from: /Library/Frameworks/Python.framework/Versions/2.4/ lib/python2.4/site-packages/scipy/fftpack/_fftpack.so Expected in: dynamic lookup - -- Christopher Fonnesbeck + Atlanta, GA + fonnesbeck at mac.com + Contact me on AOL IM using email address -----BEGIN PGP SIGNATURE----- Version: GnuPG v1.4.3 (Darwin) iD8DBQFEtrIGkeka2iCbE4wRAqS8AJ41LE2fxN62oZMJx2pSh8T5yj3fJgCeLY1x FYMvDGINgN0P7c6qX+5zBRA= 
=0jPk -----END PGP SIGNATURE----- From bhendrix at enthought.com Thu Jul 13 18:48:45 2006 From: bhendrix at enthought.com (Bryce Hendrix) Date: Thu, 13 Jul 2006 17:48:45 -0500 Subject: [SciPy-user] ANN: Python Enthought Edition Version 1.0.0.beta4 Released Message-ID: <44B6CDCD.7010206@enthought.com> Enthought is pleased to announce the release of Python Enthought Edition Version 1.0.0.beta4 (http://code.enthought.com/enthon/) -- a python distribution for Windows.

1.0.0.beta4 Release Notes:
--------------------
There are two known issues:
* No documentation is included due to problems with the chm. Instead, all documentation for this beta is available on the web at http://code.enthought.com/enthon/docs. The official 1.0.0 will include a chm containing all of our docs again.
* IPython may cause problems when starting the first time if a previous version of IPython was run. If you see "WARNING: could not import user config", follow the directions which follow the warning.

Unless something terrible is discovered between now and the next release, we intend on releasing 1.0.0 on July 25th. This release includes version 1.0.9 of the Enthought Tool Suite (ETS) Package and bug fixes -- you can look at the release notes for this ETS version here: http://svn.enthought.com/downloads/enthought/changelog-release.1.0.9.html

About Python Enthought Edition:
-------------------------------
Python 2.4.3, Enthought Edition is a kitchen-sink-included Python distribution for Windows including the following packages out of the box:

Numpy
SciPy
IPython
Enthought Tool Suite
wxPython
PIL
mingw
f2py
MayaVi
Scientific Python
VTK
and many more...

More information is available about all Open Source code written and released by Enthought, Inc. at http://code.enthought.com -------------- next part -------------- An HTML attachment was scrubbed...
URL: From mrmaple at gmail.com Thu Jul 13 20:47:04 2006 From: mrmaple at gmail.com (James Carroll) Date: Thu, 13 Jul 2006 20:47:04 -0400 Subject: [SciPy-user] ANN: Python Enthought Edition Version 1.0.0.beta4 Released In-Reply-To: <44B6CDCD.7010206@enthought.com> References: <44B6CDCD.7010206@enthought.com> Message-ID: Hi Bryce! That's looking like a nice fresh list of python modules... My only question is why wxPython 2.6.1 ? 2.6.3.2 has been the stable release for several months. Thanks, -Jim On 7/13/06, Bryce Hendrix wrote: > > > > Enthought is pleased to announce the release of Python Enthought Edition > Version 1.0.0.beta4 (http://code.enthought.com/enthon/) -- > a python distribution for Windows. > > 1.0.0.beta4 Release Notes: > -------------------- > There are two known issues: > * No documentation is included due to problems with the chm. Instead, all > documentation for this beta is available on the web at > http://code.enthought.com/enthon/docs. The official 1.0.0 > will include a chm containing all of our docs again. > * IPython may cause problems when starting the first time if a previous > version of IPython was ran. If you see "WARNING: could not import user > config", either follow the directions which follow the warning. > > Unless something terrible is discovered between now and the next release, > we intend on releasing 1.0.0 on July 25th. > > This release includes version 1.0.9 of the Enthought Tool Suite (ETS) > Package and bug fixes-- you can look at the release notes for this ETS > version here: > > http://svn.enthought.com/downloads/enthought/changelog-release.1.0.9.html > > > > About Python Enthought Edition: > ------------------------------- > Python 2.4.3, Enthought Edition is a kitchen-sink-included Python > distribution for Windows including the following packages out of the box: > > Numpy > SciPy > IPython > Enthought Tool Suite > wxPython > PIL > mingw > f2py > MayaVi > Scientific Python > VTK > and many more... 
> > More information is available about all Open Source code written and > released by Enthought, Inc. at http://code.enthought.com > > > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > > > From bhendrix at enthought.com Thu Jul 13 22:10:43 2006 From: bhendrix at enthought.com (bryce hendrix) Date: Thu, 13 Jul 2006 21:10:43 -0500 Subject: [SciPy-user] ANN: Python Enthought Edition Version 1.0.0.beta4 Released In-Reply-To: References: <44B6CDCD.7010206@enthought.com> Message-ID: <44B6FD23.4070400@enthought.com> I had some problems building it, none of which I remember right now... wx is generally one of those packages I dread rebuilding and testing, so I avoid it if it cause me any problems. I will re-evaluate it for Enthon 1.1.0, which is due in 6-8 weeks. Bryce James Carroll wrote: > Hi Bryce! That's looking like a nice fresh list of python modules... > > My only question is why wxPython 2.6.1 ? 2.6.3.2 has been the stable > release for several months. > > Thanks, > -Jim > > On 7/13/06, Bryce Hendrix wrote: > >> >> Enthought is pleased to announce the release of Python Enthought Edition >> Version 1.0.0.beta4 (http://code.enthought.com/enthon/) -- >> a python distribution for Windows. >> >> 1.0.0.beta4 Release Notes: >> -------------------- >> There are two known issues: >> * No documentation is included due to problems with the chm. Instead, all >> documentation for this beta is available on the web at >> http://code.enthought.com/enthon/docs. The official 1.0.0 >> will include a chm containing all of our docs again. >> * IPython may cause problems when starting the first time if a previous >> version of IPython was ran. If you see "WARNING: could not import user >> config", either follow the directions which follow the warning. >> >> Unless something terrible is discovered between now and the next release, >> we intend on releasing 1.0.0 on July 25th. 
>> >> This release includes version 1.0.9 of the Enthought Tool Suite (ETS) >> Package and bug fixes-- you can look at the release notes for this ETS >> version here: >> >> http://svn.enthought.com/downloads/enthought/changelog-release.1.0.9.html >> >> >> >> About Python Enthought Edition: >> ------------------------------- >> Python 2.4.3, Enthought Edition is a kitchen-sink-included Python >> distribution for Windows including the following packages out of the box: >> >> Numpy >> SciPy >> IPython >> Enthought Tool Suite >> wxPython >> PIL >> mingw >> f2py >> MayaVi >> Scientific Python >> VTK >> and many more... >> >> More information is available about all Open Source code written and >> released by Enthought, Inc. at http://code.enthought.com >> >> >> >> _______________________________________________ >> SciPy-user mailing list >> SciPy-user at scipy.org >> http://projects.scipy.org/mailman/listinfo/scipy-user >> >> >> >> > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From pbdr at cmp.uea.ac.uk Fri Jul 14 10:43:55 2006 From: pbdr at cmp.uea.ac.uk (Pierre Barbier de Reuille) Date: Fri, 14 Jul 2006 15:43:55 +0100 Subject: [SciPy-user] Adding support for 16 bits images to scipy.fromimage Message-ID: <44B7ADAB.6030902@cmp.uea.ac.uk> Hi, I just noticed that scipy.fromimage was unable to handle 16 bits images. However, here is a patch to apply to scipy/misc/pilutil.py to correct that: 74a75,76 > if mode == 'I;16': > type = numpy.uint16 Pierre From travis at enthought.com Fri Jul 14 13:19:11 2006 From: travis at enthought.com (Travis N. 
Vaught) Date: Fri, 14 Jul 2006 12:19:11 -0500 Subject: [SciPy-user] ANN: SciPy 2006 Schedule/Early Registration Reminder Message-ID: <44B7D20F.8060701@enthought.com> Greetings, The SciPy 2006 Conference (http://www.scipy.org/SciPy2006) is August 17-18 this year. The deadline for early registration is *today*, July 14, 2006. The registration price will increase from $100 to $150 after today. You can register online at https://www.enthought.com/scipy06 . We invite everyone attending the conference to also attend the Coding Sprints on Monday-Tuesday , August 14-15 and also the Tutorials Wednesday, August 16. There is no additional charge for these sessions. A *tentative* schedule of talks has now been posted. http://www.scipy.org/SciPy2006/Schedule We look forward to seeing you at CalTech in August! Best, Travis From wegwerp at gmail.com Fri Jul 14 15:06:45 2006 From: wegwerp at gmail.com (weg werp) Date: Fri, 14 Jul 2006 21:06:45 +0200 Subject: [SciPy-user] optimize.leastsq crashs my interpreter In-Reply-To: <44B361C7.3060906@iam.uni-stuttgart.de> References: <003301c6a1b8$80ca16e0$1001a8c0@JELLE> <44AE505D.8050905@ftw.at> <44AE775A.30004@ftw.at> <44B361C7.3060906@iam.uni-stuttgart.de> Message-ID: <6f54c160607141206y1c4f548g6da0012ef76f1c80@mail.gmail.com> Hi group, I can confirm some problems with an Athlon 1700+ processor. I just downloaded and installed the latest enthought version (enthon-python2.4-1.0.0.beta4) If you then do: import numpy numpy.test(10,10) you get: ...lots of test results... check_matmat (numpy.core.tests.test_numeric.test_dot) ... ok check_matscalar (numpy.core.tests.test_numeric.test_dot) ... ok check_matvec (numpy.core.tests.test_numeric.test_dot) The example at the beginning of this thread also crashes. 
Note that I reported similar problems with older precompiled versions, but then it crashed at different locations: http://projects.scipy.org/pipermail/scipy-user/2006-April/007751.html http://projects.scipy.org/pipermail/scipy-user/2006-May/008110.html So I guess there are some persistent problems with the standard compiled versions on Athlon processors. Cheers, Bas From schofield at ftw.at Fri Jul 14 17:19:37 2006 From: schofield at ftw.at (Ed Schofield) Date: Fri, 14 Jul 2006 23:19:37 +0200 Subject: [SciPy-user] optimize.leastsq crashs my interpreter In-Reply-To: <6f54c160607141206y1c4f548g6da0012ef76f1c80@mail.gmail.com> References: <003301c6a1b8$80ca16e0$1001a8c0@JELLE> <44AE505D.8050905@ftw.at> <44AE775A.30004@ftw.at> <44B361C7.3060906@iam.uni-stuttgart.de> <6f54c160607141206y1c4f548g6da0012ef76f1c80@mail.gmail.com> Message-ID: On 14/07/2006, at 9:06 PM, weg werp wrote: > Hi group, > > I can confirm some problems with an Athlon 1700+ processor. I just > downloaded and installed the latest enthought version > (enthon-python2.4-1.0.0.beta4) > > If you then do: > import numpy > numpy.test(10,10) > > you get: > ...lots of test results... > check_matmat (numpy.core.tests.test_numeric.test_dot) ... ok > check_matscalar (numpy.core.tests.test_numeric.test_dot) ... ok > check_matvec (numpy.core.tests.test_numeric.test_dot) > Thanks to all who have posted information about this. It looks like NumPy is built against an ATLAS library that uses SSE2 instructions, and that Athlons don't support these. I'm working with Travis to iron this out with the next NumPy release (probably 1.0beta1). In the meantime, if you need to use NumPy / SciPy on an Athlon, you'll need to build your own NumPy binary using the instructions at http://new.scipy.org/Wiki/Installing_SciPy. Building SciPy too shouldn't be necessary, but I can't be sure. To simplify the build, I'd recommend using Pearu's ATLAS libraries built for Pentium 2. 
They were available on old.scipy.org, but it seems this is no longer accessible. I'll upload my local copy to the wiki in the next few days. -- Ed From strawman at astraw.com Fri Jul 14 17:55:02 2006 From: strawman at astraw.com (Andrew Straw) Date: Fri, 14 Jul 2006 14:55:02 -0700 Subject: [SciPy-user] optimize.leastsq crashs my interpreter In-Reply-To: References: <003301c6a1b8$80ca16e0$1001a8c0@JELLE> <44AE505D.8050905@ftw.at> <44AE775A.30004@ftw.at> <44B361C7.3060906@iam.uni-stuttgart.de> <6f54c160607141206y1c4f548g6da0012ef76f1c80@mail.gmail.com> Message-ID: <44B812B6.9000008@astraw.com> Ed Schofield wrote: >Thanks to all who have posted information about this. It looks like >NumPy is built against an ATLAS library that uses SSE2 instructions, >and that Athlons don't support these. > > For reference, a list of CPUs that supports SSE2 is available at http://en.wikipedia.org/wiki/SSE2 Note that Athlon 64 does support SSE2 (and SSE3, aka PNI, in Rev E. and later). --Andrew From cookedm at physics.mcmaster.ca Fri Jul 14 18:26:22 2006 From: cookedm at physics.mcmaster.ca (David M. Cooke) Date: Fri, 14 Jul 2006 18:26:22 -0400 Subject: [SciPy-user] Adding support for 16 bits images to scipy.fromimage In-Reply-To: <44B7ADAB.6030902@cmp.uea.ac.uk> References: <44B7ADAB.6030902@cmp.uea.ac.uk> Message-ID: <747BF52E-6283-4C7E-8B82-BFEC309F56CA@physics.mcmaster.ca> On Jul 14, 2006, at 10:43 , Pierre Barbier de Reuille wrote: > Hi, > > I just noticed that scipy.fromimage was unable to handle 16 bits > images. > However, here is a patch to apply to scipy/misc/pilutil.py to > correct that: > > 74a75,76 >> if mode == 'I;16': >> type = numpy.uint16 applied. -- |>|\/|< /------------------------------------------------------------------\ |David M. 
Cooke http://arbutus.physics.mcmaster.ca/dmc/ |cookedm at physics.mcmaster.ca From demarchi at duke.edu Fri Jul 14 20:52:02 2006 From: demarchi at duke.edu (demarchi at duke.edu) Date: Fri, 14 Jul 2006 20:52:02 -0400 (EDT) Subject: [SciPy-user] fsolve help Message-ID: Folks: I'm trying to use fsolve and my program crashes(!) when I execute this: #!/usr/bin/env python import math from scipy.optimize import fsolve def func2(x): out = [x[0]+2*x[1]+2*x[2]-1] out.append(x[0]+x[2] - 2*x[1]) out.append(x[3]+x[4]-1) out.append(1/5-2/5*x[2]+2/5*x[0]*x[3]-x[0]) out.append(1/5-x[1]/5-x[2]/5+2/5*x[1]*x[4]+x[1]/5-x[1]) out.append(1/5-2/5*x[1]+2*x[2]/5+x[2]*x[3]/5) return out x02 = fsolve(func2, [0.25,0.25,0.5,0.5,0.5]) print x02 I just did it in maple, and the roots are x0=1/8 x1=3/16 x2=1/4 x3=.5 x4=.5 Moreover, the final equation isn't really needed (and I tried rerunning the program without it). If I do this, the program doesn't crash, but the answer doesn't converge (and isn't close -- "The iteration is not making good progress..." message shows up). Am I doing something wrong in python, or is the solver weak, or...? IF it is the latter, is there another library that I could be using? Thanks for any help. Scott From hasslerjc at adelphia.net Fri Jul 14 21:24:40 2006 From: hasslerjc at adelphia.net (John Hassler) Date: Fri, 14 Jul 2006 21:24:40 -0400 Subject: [SciPy-user] fsolve help In-Reply-To: References: Message-ID: <44B843D8.3080202@adelphia.net> Put a decimal point in each quotient to force floating division ... otherwise, you get integers (i.e., 1/2 = 0, but 1./2 = 0.5). If I change your function to: def func2(x): out = [x[0]+2.*x[1]+2.*x[2]-1.] out.append(x[0]+x[2] - 2.*x[1]) out.append(x[3]+x[4]-1.) out.append(1./5-2./5*x[2]+2./5*x[0]*x[3]-x[0]) out.append(1./5-x[1]/5.-x[2]/5.+2/5.*x[1]*x[4]+x[1]/5.-x[1]) #out.append(1/5-2/5*x[1]+2*x[2]/5+x[2]*x[3]/5) return out I get: "D:\Python\test.py" [ 0.125 0.1875 0.25 0.5 0.5 ] which is correct. 
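A quick sanity check of John's floated residual function, independent of fsolve: plugging the Maple roots into it should give zeros to within round-off. This is a sketch in Python 3 syntax; the tolerance is arbitrary.

```python
# Residuals of the floated five-equation system at the Maple solution
# x0 = 1/8, x1 = 3/16, x2 = 1/4, x3 = x4 = 1/2.
def func2(x):
    out = [x[0] + 2.*x[1] + 2.*x[2] - 1.]
    out.append(x[0] + x[2] - 2.*x[1])
    out.append(x[3] + x[4] - 1.)
    out.append(1./5 - 2./5*x[2] + 2./5*x[0]*x[3] - x[0])
    out.append(1./5 - x[1]/5. - x[2]/5. + 2./5*x[1]*x[4] + x[1]/5. - x[1])
    return out

roots = [1/8, 3/16, 1/4, 0.5, 0.5]
residuals = func2(roots)
print(all(abs(r) < 1e-12 for r in residuals))  # -> True
```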
john demarchi at duke.edu wrote: > Folks: > > I'm trying to use fsolve and my program crashes(!) when I execute this: > > #!/usr/bin/env python > import math > from scipy.optimize import fsolve > > def func2(x): > out = [x[0]+2*x[1]+2*x[2]-1] > out.append(x[0]+x[2] - 2*x[1]) > out.append(x[3]+x[4]-1) > out.append(1/5-2/5*x[2]+2/5*x[0]*x[3]-x[0]) > out.append(1/5-x[1]/5-x[2]/5+2/5*x[1]*x[4]+x[1]/5-x[1]) > out.append(1/5-2/5*x[1]+2*x[2]/5+x[2]*x[3]/5) > return out > > x02 = fsolve(func2, [0.25,0.25,0.5,0.5,0.5]) > print x02 > > I just did it in maple, and the roots are > x0=1/8 > x1=3/16 > x2=1/4 > x3=.5 > x4=.5 > > Moreover, the final equation isn't really needed (and I tried rerunning > the program without it). If I do this, the program doesn't crash, but the > answer doesn't converge (and isn't close -- "The iteration is not making > good progress..." message shows up). > > Am I doing something wrong in python, or is the solver weak, or...? IF it > is the latter, is there another library that I could be using? > > Thanks for any help. > Scott > From ckkart at hoc.net Fri Jul 14 22:04:11 2006 From: ckkart at hoc.net (Christian Kristukat) Date: Sat, 15 Jul 2006 11:04:11 +0900 Subject: [SciPy-user] fsolve help In-Reply-To: <44B843D8.3080202@adelphia.net> References: <44B843D8.3080202@adelphia.net> Message-ID: <44B84D1B.9040205@hoc.net> John Hassler wrote: > Put a decimal point in each quotient to force floating division ... > otherwise, you get integers > (i.e., 1/2 = 0, but 1./2 = 0.5). > If I change your function to: > > def func2(x): > out = [x[0]+2.*x[1]+2.*x[2]-1.] > out.append(x[0]+x[2] - 2.*x[1]) > out.append(x[3]+x[4]-1.) > out.append(1./5-2./5*x[2]+2./5*x[0]*x[3]-x[0]) > out.append(1./5-x[1]/5.-x[2]/5.+2/5.*x[1]*x[4]+x[1]/5.-x[1]) > #out.append(1/5-2/5*x[1]+2*x[2]/5+x[2]*x[3]/5) > return out Or even easier, put a from __future__ import division at the beginning of your script. Then division defaults to floating point operations. 
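The pitfall behind this whole thread can be reproduced on its own, with no solver involved. A minimal sketch (Python 3 behaves as if `from __future__ import division` were always in effect):

```python
# Python 2 evaluated 1/5 with integer division, which is why the
# original func2 silently returned 0 for all the 1/5 terms.
print(1 // 5)   # floor division: 0 in both Python 2 and 3
print(1 / 5)    # true division: 0.2 in Python 3, or Python 2 with the import
print(1. / 5)   # forcing a float operand, as John suggests, also gives 0.2
```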
Christian From davidgrant at gmail.com Fri Jul 14 23:51:19 2006 From: davidgrant at gmail.com (David Grant) Date: Fri, 14 Jul 2006 20:51:19 -0700 Subject: [SciPy-user] fsolve help In-Reply-To: <44B84D1B.9040205@hoc.net> References: <44B843D8.3080202@adelphia.net> <44B84D1B.9040205@hoc.net> Message-ID: On 7/14/06, Christian Kristukat wrote: > > Or even easier, put a > > from __future__ import division > > at the beginning of your script. Then division default to floating point operations. What exactly does that do? What is __future__ all about? From demarchi at duke.edu Sat Jul 15 01:18:43 2006 From: demarchi at duke.edu (sd) Date: Sat, 15 Jul 2006 05:18:43 +0000 (UTC) Subject: [SciPy-user] fsolve help References: <44B843D8.3080202@adelphia.net> <44B84D1B.9040205@hoc.net> Message-ID: OK -- if I convert to float it works, sort of. I have to comment out (as you did) one of the constraints. If I leave them all in, it crashes python. And, of course, w/ all the constraints (uncomment the last one), it should work (or at least it does on paper, and in maple). thanks! sd From cookedm at physics.mcmaster.ca Sat Jul 15 01:42:28 2006 From: cookedm at physics.mcmaster.ca (David M. Cooke) Date: Sat, 15 Jul 2006 01:42:28 -0400 Subject: [SciPy-user] fsolve help In-Reply-To: References: <44B843D8.3080202@adelphia.net> <44B84D1B.9040205@hoc.net> Message-ID: <20060715054228.GA3983@arbutus.physics.mcmaster.ca> On Fri, Jul 14, 2006 at 08:51:19PM -0700, David Grant wrote: > On 7/14/06, Christian Kristukat wrote: > > > > Or even easier, put a > > > > from __future__ import division > > > > at the beginning of your script. Then division default to floating point operations. > > What exactly does that do? What is __future__ all about? It's the Python developers' way of saying "sometime in the future, this option will be the default. But for now, you can be future-proof.." 
See PEP 236 (http://www.python.org/dev/peps/pep-0236/) for __future__, and PEP 238 (http://www.python.org/dev/peps/pep-0238/) for division. In this case, this statement at the top of your module means that within that module, the '/' does "true" division, e.g., 1/2 == 0.5. To get integer division (the previous behaviour), do 1//2 == 0. "true" division will be the default in Python 3.0; you don't have to worry about the 2.x series (although, for numerical computation, it's *much* more convienent). Previous uses of 'from __future__ import ' have included 'nested_scopes', which you can still find in odd corners of scipy, and 'generators'. There's also the fun 'from __future__ import braces' to try to use { } for blocks instead of indentation. (Try it in the interpreter.) -- |>|\/|< /--------------------------------------------------------------------------\ |David M. Cooke http://arbutus.physics.mcmaster.ca/dmc/ |cookedm at physics.mcmaster.ca From demarchi at duke.edu Sat Jul 15 01:44:15 2006 From: demarchi at duke.edu (sd) Date: Sat, 15 Jul 2006 05:44:15 +0000 (UTC) Subject: [SciPy-user] fsolve help References: <44B843D8.3080202@adelphia.net> <44B84D1B.9040205@hoc.net> Message-ID: Some info: I edited minpack.py (the module called by fsolve) and commented out the check_func() function -- it was checking for n equations for n unknowns, and crashing when I had n+1 equations. The solver works without it (i.e., _minpack._hybrd). sd From hasslerjc at adelphia.net Sat Jul 15 08:57:26 2006 From: hasslerjc at adelphia.net (John Hassler) Date: Sat, 15 Jul 2006 08:57:26 -0400 Subject: [SciPy-user] fsolve help In-Reply-To: References: <44B843D8.3080202@adelphia.net> <44B84D1B.9040205@hoc.net> Message-ID: <44B8E636.40501@adelphia.net> Actually, they're two different problems. Assuming that a solution exists, having the same number of equations as unknowns allows you to find the "exact" solution (that is, there will be a solution which exactly fits the equations as written). 
If you have more equations than unknowns (overdetermined system), there can be an "exact" solution only for infinite precision arithmetic. Otherwise, the best you can do is to _minimize_ (in some sense) the residuals. This is not the same thing as finding zeros, although there is some overlap among the various methods (see, eg., "Numerical Recipes," which explains this pretty well). For example, imagine linear equations. If you have two (non-parallel) lines, they cross at a single point (two eqn., two unknowns), and there is no ambiguity in calculating this point in finite precision arithmetic. If you use three lines to define a single point (overdetermined), then you need infinite precision arithmetic to calculate the intersection. If your numbers are not exactly representable in binary, then this isn't possible with a computer, and the best you can do is to calculate some form of minimum residual (closest approach). I'm not familiar with Maple, but Mathcad, for example, would allow you to switch between zero-finding and minimization with a trivial change in your program. Perhaps Maple does it automatically. In another lifetime, I was a ChE professor, and taught this stuff. john sd wrote: > OK -- if I convert to float it works, sort of. I have to comment out (as you > did) one of the constraints. If I leave them all in, it crashes python. > > And, of course, w/ all the constraints (uncomment the last one), it should work > (or at least it does on paper, and in maple). > > thanks! 
> > sd > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > > From demarchi at duke.edu Sat Jul 15 12:00:59 2006 From: demarchi at duke.edu (demarchi) Date: Sat, 15 Jul 2006 16:00:59 +0000 (UTC) Subject: [SciPy-user] fsolve help References: <44B843D8.3080202@adelphia.net> <44B84D1B.9040205@hoc.net> <44B8E636.40501@adelphia.net> Message-ID: John Hassler adelphia.net> writes: > > Actually, they're two different problems. Assuming that a solution > exists, having the same number of equations as unknowns allows you to > find the "exact" solution (that is, there will be a solution which > exactly fits the equations as written). If you have more equations than > unknowns (overdetermined system), there can be an "exact" solution only > for infinite precision arithmetic. In general, you're sorta right. The correct statement for a linear system (Ax=b) is that the rank of A is the same as the number of unknowns. If I have more equations than unknowns but some of them are superfluous, I'm still good to go if the rank is right. And generally w/ constraints, you can imagine I have superfluous constraints. E.g., let's say my system has an exact solution that is the vector 1. If I add the constraint that one of the variables is between 0 and 2, that won't upset the apple cart. Last, in the example code I sent out, there's an exact solution that I find if I disable check_func... 
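The overdetermined-but-consistent case being discussed can be sketched with a tiny hypothetical linear system: three equations in two unknowns whose lines all pass through one point, solved in the least-squares sense with numpy (the numbers are illustrative, not from the thread):

```python
import numpy as np

# Three lines through the point (1, 1): an overdetermined 3x2 system
# where the third equation is superfluous, so the rank is still full.
# lstsq minimizes ||Ax - b||_2 rather than looking for an exact root.
A = np.array([[1.0,  1.0],    # x + y = 2
              [1.0, -1.0],    # x - y = 0
              [2.0,  1.0]])   # 2x + y = 3
b = np.array([2.0, 0.0, 3.0])

x, residuals, rank, _ = np.linalg.lstsq(A, b, rcond=None)
print(x)     # -> approximately [1. 1.]
print(rank)  # -> 2, i.e. full column rank
```

Because the system is consistent, the minimum residual is zero to round-off; with a genuinely inconsistent third equation, lstsq would instead return the closest-approach point John describes.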
thanks sd From hgk at et.uni-magdeburg.de Mon Jul 17 04:00:31 2006 From: hgk at et.uni-magdeburg.de (=?ISO-8859-1?Q?Hans_Georg_Krauth=E4user?=) Date: Mon, 17 Jul 2006 10:00:31 +0200 Subject: [SciPy-user] optimize.leastsq crashs my interpreter In-Reply-To: References: <003301c6a1b8$80ca16e0$1001a8c0@JELLE> <44AE505D.8050905@ftw.at> <44AE775A.30004@ftw.at> <44B361C7.3060906@iam.uni-stuttgart.de> <6f54c160607141206y1c4f548g6da0012ef76f1c80@mail.gmail.com> Message-ID: Ed Schofield schrieb: > On 14/07/2006, at 9:06 PM, weg werp wrote: > >> Hi group, >> >> I can confirm some problems with an Athlon 1700+ processor. I just >> downloaded and installed the latest enthought version >> (enthon-python2.4-1.0.0.beta4) >> >> If you then do: >> import numpy >> numpy.test(10,10) >> >> you get: >> ...lots of test results... >> check_matmat (numpy.core.tests.test_numeric.test_dot) ... ok >> check_matscalar (numpy.core.tests.test_numeric.test_dot) ... ok >> check_matvec (numpy.core.tests.test_numeric.test_dot) >> > > > Thanks to all who have posted information about this. It looks like > NumPy is built against an ATLAS library that uses SSE2 instructions, > and that Athlons don't support these. > > I'm working with Travis to iron this out with the next NumPy release > (probably 1.0beta1). In the meantime, if you need to use NumPy / > SciPy on an Athlon, you'll need to build your own NumPy binary using > the instructions at http://new.scipy.org/Wiki/Installing_SciPy. > Building SciPy too shouldn't be necessary, but I can't be sure. To > simplify the build, I'd recommend using Pearu's ATLAS libraries built > for Pentium 2. They were available on old.scipy.org, but it seems > this is no longer accessible. I'll upload my local copy to the wiki > in the next few days. > > -- Ed Meanwhile, I compiled numpy and scipy (latest stable, not svn) on a Athlon (Duron) against the ATLAS library with SSE1. It is working for me, but I can't guarantee that it will work for someone else. 
But, if someone is interested in the windows installers, I will send it by email. I didn't try whether the official scipy works with my self-made numpy, but I can try it if it helps. Regards Hans Georg From jf.moulin at gmail.com Mon Jul 17 08:03:40 2006 From: jf.moulin at gmail.com (JF Moulin) Date: Mon, 17 Jul 2006 12:03:40 +0000 (UTC) Subject: [SciPy-user] mpfit crashes (still)... Message-ID: Hi! I am coming back with my mpfit problem! -I downloaded mpfit, applied the numeric to numpy conversion (numpy/lib/convertcode.py) and then I still get this very odd message: File "c:\Python24\lib\site-packages\mpfit.py", line 2249, in __init__ self.maxlog = log(self.maxnum) NameError: global name 'log' is not defined this behaviour is maintained when I use numpy.log or Numeric.log instead of just log()... Any idea? This is driving me mad and I really need to be able to fit with constraints on the parameters (hence mpfit...) Thanks in advance for your input JF From robert.kern at gmail.com Mon Jul 17 10:48:40 2006 From: robert.kern at gmail.com (Robert Kern) Date: Mon, 17 Jul 2006 09:48:40 -0500 Subject: [SciPy-user] mpfit crashes (still)... In-Reply-To: References: Message-ID: <44BBA348.3090301@gmail.com> JF Moulin wrote: > Hi! > > I am coming back with my mpfit problem! > > -I downloaded mpfit, applied the numeric to numpy conversion > (numpy/lib/convertcode.py) > and then I still get this very odd message: > > File "c:\Python24\lib\site-packages\mpfit.py", line 2249, in __init__ > self.maxlog = log(self.maxnum) > NameError: global name 'log' is not defined > > this behaviour is maintained when I use numpy.log or Numeric.log instead of just > log()... 
-- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From schofield at ftw.at Mon Jul 17 11:06:26 2006 From: schofield at ftw.at (Ed Schofield) Date: Mon, 17 Jul 2006 17:06:26 +0200 Subject: [SciPy-user] optimize.leastsq crashs my interpreter In-Reply-To: References: <003301c6a1b8$80ca16e0$1001a8c0@JELLE> <44AE505D.8050905@ftw.at> <44AE775A.30004@ftw.at> <44B361C7.3060906@iam.uni-stuttgart.de> <6f54c160607141206y1c4f548g6da0012ef76f1c80@mail.gmail.com> Message-ID: <44BBA772.204@ftw.at> Ed Schofield wrote: > On 14/07/2006, at 9:06 PM, weg werp wrote: > > >> I can confirm some problems with an Athlon 1700+ processor. >> >> >> > Thanks to all who have posted information about this. It looks like > NumPy is built against an ATLAS library that uses SSE2 instructions, > and that Athlons don't support these. > > I'm working with Travis to iron this out with the next NumPy release > (probably 1.0beta1). In the meantime, if you need to use NumPy / > SciPy on an Athlon, you'll need to build your own NumPy binary using > the instructions at http://new.scipy.org/Wiki/Installing_SciPy. > Building SciPy too shouldn't be necessary, but I can't be sure. To > simplify the build, I'd recommend using Pearu's ATLAS libraries built > for Pentium 2. They were available on old.scipy.org, but it seems > this is no longer accessible. I'll upload my local copy to the wiki > in the next few days. > Done -- I've now uploaded the Win32 ATLAS libraries to the new wiki and added links to the page http://www.scipy.org/Installing_SciPy/Windows. 
-- Ed From haase at msg.ucsf.edu Mon Jul 17 12:34:19 2006 From: haase at msg.ucsf.edu (Sebastian Haase) Date: Mon, 17 Jul 2006 09:34:19 -0700 Subject: [SciPy-user] SciPy COW seems to work - but no info() Message-ID: <200607170934.19859.haase@msg.ucsf.edu> Hi, I found a very informative (even if from 2004) online video stream + powerpoint at https://www.nanohub.org/resources/?id=99 (you need to create an account and answer lot's of stupid questions [why is this ?] ) Anyway: I heard about COW ! A simple SciPy module to execute Python in parallel distributed across multiple machines. It is still around in the new SciPy - in the sandbox. I compiled it and it seems to do the simple tests given in the doc strings. Thanks !! Great work !! The only exception seem to be the cluster.info() command which is supposed to list each cluster node together with it's CPU type and so on, e.g.: # >>> cluster.info() # MACHINE CPU GHZ MB TOTAL MB FREE LOAD # bull 2xP3 0.5 960.0 930.0 0.00 # bull 2xP3 0.5 960.0 930.0 0.00 [ BTW: how do you make sure that a two CPU node will make use of both CPUs ? Do you just have to list it twice with two different port numbers ? ] The traceback: print c.info() File "<..>cow/cow.py", line 470, in info results = self.info_list() File "<..>cow/cow.py", line 538, in info_list import numpy.distutils.proc as numpy_proc ImportError: No module named proc What is numpy_proc supposed to be ? Thanks, Sebastian Haase From kwgoodman at gmail.com Mon Jul 17 12:45:08 2006 From: kwgoodman at gmail.com (Keith Goodman) Date: Mon, 17 Jul 2006 09:45:08 -0700 Subject: [SciPy-user] SciPy COW seems to work - but no info() In-Reply-To: <200607170934.19859.haase@msg.ucsf.edu> References: <200607170934.19859.haase@msg.ucsf.edu> Message-ID: On 7/17/06, Sebastian Haase wrote: > Anyway: I heard about COW ! A simple SciPy module to execute Python in > parallel distributed across multiple machines. > It is still around in the new SciPy - in the sandbox. 
I compiled it and it > seems to do the simple tests given in the doc strings. > Thanks !! Great work !! Holy cow. That sounds exciting. If you take any notes please dump them on the wiki for slow people like me. From robert.kern at gmail.com Mon Jul 17 12:47:24 2006 From: robert.kern at gmail.com (Robert Kern) Date: Mon, 17 Jul 2006 11:47:24 -0500 Subject: [SciPy-user] SciPy COW seems to work - but no info() In-Reply-To: <200607170934.19859.haase@msg.ucsf.edu> References: <200607170934.19859.haase@msg.ucsf.edu> Message-ID: <44BBBF1C.8030902@gmail.com> Sebastian Haase wrote: > Hi, > I found a very informative (even if from 2004) online video stream + > powerpoint at > https://www.nanohub.org/resources/?id=99 > (you need to create an account and answer lot's of stupid questions [why is > this ?] ) Don't ask us. Ask the nanohub.org people. > Anyway: I heard about COW ! A simple SciPy module to execute Python in > parallel distributed across multiple machines. > It is still around in the new SciPy - in the sandbox. I compiled it and it > seems to do the simple tests given in the doc strings. > Thanks !! Great work !! > > The only exception seem to be the > cluster.info() > command which is supposed to list each cluster node together with it's CPU > type and so on, e.g.: > # >>> cluster.info() > # MACHINE CPU GHZ MB TOTAL MB FREE LOAD > # bull 2xP3 0.5 960.0 930.0 0.00 > # bull 2xP3 0.5 960.0 930.0 0.00 > > [ > BTW: how do you make sure that a two CPU node will make use of both CPUs ? > Do you just have to list it twice with two different port numbers ? > ] Probably. > The traceback: > print c.info() > File "<..>cow/cow.py", line 470, in info > results = self.info_list() > File "<..>cow/cow.py", line 538, in info_list > import numpy.distutils.proc as numpy_proc > ImportError: No module named proc > > What is numpy_proc supposed to be ? The version of cow in the sandbox is autoconverted to use the new numpy. Essentially no manual work has been done on cow at all. 
The proc module is in scipy_distutils. If you would like to help get cow working, you should grab the proc module from an old distribution of scipy and refactor cow to use a version of it distributed with cow itself. Submit some patches to the Trac, and we will apply them. -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From oliphant.travis at ieee.org Mon Jul 17 17:30:09 2006 From: oliphant.travis at ieee.org (Travis Oliphant) Date: Mon, 17 Jul 2006 15:30:09 -0600 Subject: [SciPy-user] Arithmetic Errors In-Reply-To: <20060717205116.GA11043@redwoodscientific.com> References: <20060717205116.GA11043@redwoodscientific.com> Message-ID: <44BC0161.8070007@ieee.org> John Lawless wrote: > How do I instruct scipy to give me an error, or better yet compute > the right answer, as opposed to silently ignoring an integer overflow? > Two examples are: > > >>>> from scipy import * >>>> array((3000))*array((1000000)) >>>> > -1294967296 > >>>> sum(3000*ones(1000000)) >>>> > -1294967296 > > You have to use object arrays. array(3000,'O')*array(1000000,'O') or use array scalars with the error set as over='raise' seterr(over='raise') int32(3000)*int32(1000000) For arrays, we don't check for over-flow as this is a time-consuming procedure that slows down all the calculations. There is a dtype= keyword argument to the sum method of arrays which allows you to change the data-type over which the sum proceeds. 
This could also be used to obtain the result: 3000*ones(1000000).sum(dtype=int64) -Travis From aisaac at american.edu Mon Jul 17 20:10:45 2006 From: aisaac at american.edu (Alan G Isaac) Date: Mon, 17 Jul 2006 20:10:45 -0400 Subject: [SciPy-user] Arithmetic Errors In-Reply-To: <44BC0161.8070007@ieee.org> References: <20060717205116.GA11043@redwoodscientific.com><44BC0161.8070007@ieee.org> Message-ID: On Mon, 17 Jul 2006, Travis Oliphant apparently wrote: > seterr(over='raise') > int32(3000)*int32(1000000) Traceback (most recent call last): File "", line 1, in ? File "c:\temp.py", line 18, in ? int32(3000)*int32(1000000) FloatingPointError: overflow encountered in long_scalars That's numpy version 0.9.8. fwiw, Alan Isaac From oliphant.travis at ieee.org Mon Jul 17 20:10:36 2006 From: oliphant.travis at ieee.org (Travis Oliphant) Date: Mon, 17 Jul 2006 18:10:36 -0600 Subject: [SciPy-user] Arithmetic Errors In-Reply-To: <20060717235553.GA14362@redwoodscientific.com> References: <20060717205116.GA11043@redwoodscientific.com> <44BC0161.8070007@ieee.org> <20060717235553.GA14362@redwoodscientific.com> Message-ID: <44BC26FC.8050408@ieee.org> John Lawless wrote: > Travis, > > Thanks! > > 1). I haven't found any documentation on dtype='O'. (I purchased your > trelgol book but it hasn't arrived yet.) Does 'O' guarantee no > wrong answers? > The object data-type uses Python objects instead of low-level C-types for the calculations. So, it gives the same calculations that Python would do (but of course it's much slower). > 2). My actual code was more complex than the example I posted. It was > giving correct answers until I increased the dataset size. Then, > luckily, the result became obviously wrong. I can go through a > code and try to coerce everything to double but, when debugging a > large code, how can one ever be sure that all types are coerced > correctly if no errors are generated? > NumPy uses c data-types for calculations. 
It is therefore, *much* faster, but you have to take precautions about overflowing on integer operations. > 3). AFAIK, checking for overflows should take no CPU time whatsoever > unless an exception is actually generated. This is true for floating point operations, but you were doing integer multiplication. There is no support for hardware multiply overflow in NumPy (is there even such a thing?). Python checks for overflow on integer arithmetic by doing some additional calculations. It would be possible to add slower, integer over-flow checking ufuncs to NumPy if this was desired and you could replace the standard non-checking functions pretty easily. -Travis From oliphant.travis at ieee.org Mon Jul 17 20:19:01 2006 From: oliphant.travis at ieee.org (Travis E. Oliphant) Date: Mon, 17 Jul 2006 18:19:01 -0600 Subject: [SciPy-user] Cobyla test fixed Message-ID: <44BC28F5.8000200@ieee.org> I think we tracked down the problem in cobyla. Apparently, a certain operation deep in the bowels of the Fortran code was causing a number to be very-small negative on 32-bit platforms but 0.0d0 on 64-bit platforms. Different behavior occurred based on whether or not the number was less than zero or not. On 32-bit platforms the different behavior ensued on 64-bit platforms it did not. I changed the code in cobyla/trstlp.f so that it checks for a number that is less than eps currently eps is hard-coded to -2.2e-16 but this is a hack. But, the hack works and allows all scipy tests to pass on 32-bit systems (and hopefully 64-bit systems as well). There are a lot more print statements in the code now, but they are all embedded in a test for iprint to be equal to 3 so they don't run by default (the tests will cause the code to run very slightly slower, however). Thanks to those with 64-bit systems who helped in debugging. 
-Travis From davidgrant at gmail.com Mon Jul 17 20:22:29 2006 From: davidgrant at gmail.com (David Grant) Date: Mon, 17 Jul 2006 17:22:29 -0700 Subject: [SciPy-user] fsolve help In-Reply-To: <20060715054228.GA3983@arbutus.physics.mcmaster.ca> References: <44B843D8.3080202@adelphia.net> <44B84D1B.9040205@hoc.net> <20060715054228.GA3983@arbutus.physics.mcmaster.ca> Message-ID: On 7/14/06, David M. Cooke wrote: > > > What exactly does that do? What is __future__ all about? > > There's also the fun 'from __future__ import braces' to try to use { } > for blocks instead of indentation. (Try it in the interpreter.) This doesn't work for some reason. >>> from __future__ import braces File "", line 1 SyntaxError: not a chance >>> from __future__ import division >>> 5/6 0.83333333333333337 -- David Grant -------------- next part -------------- An HTML attachment was scrubbed... URL: From cookedm at physics.mcmaster.ca Mon Jul 17 20:25:15 2006 From: cookedm at physics.mcmaster.ca (David M. Cooke) Date: Mon, 17 Jul 2006 20:25:15 -0400 Subject: [SciPy-user] fsolve help In-Reply-To: References: <44B843D8.3080202@adelphia.net> <44B84D1B.9040205@hoc.net> <20060715054228.GA3983@arbutus.physics.mcmaster.ca> Message-ID: <20060717202515.5225a91e@arbutus.physics.mcmaster.ca> On Mon, 17 Jul 2006 17:22:29 -0700 "David Grant" wrote: > On 7/14/06, David M. Cooke wrote: > > > > > What exactly does that do? What is __future__ all about? > > > > There's also the fun 'from __future__ import braces' to try to use { } > > for blocks instead of indentation. (Try it in the interpreter.) > > This doesn't work for some reason. > > >>> from __future__ import braces > File "", line 1 > SyntaxError: not a chance Exactly :-) (It's a joke; braces are a commonly requested feature by newbies that ain't gonna happen.) sorry for playing you as the straight man -- |>|\/|< /--------------------------------------------------------------------------\ |David M. 
Cooke http://arbutus.physics.mcmaster.ca/dmc/ |cookedm at physics.mcmaster.ca From myeates at jpl.nasa.gov Mon Jul 17 21:21:07 2006 From: myeates at jpl.nasa.gov (Mathew Yeates) Date: Mon, 17 Jul 2006 18:21:07 -0700 Subject: [SciPy-user] annoying import message Message-ID: <44BC3783.4040904@jpl.nasa.gov> Hi all when I do >>> from scipy.io import savemat I get import linsolve.umfpack -> failed: No module named sparse Kind of weird since sparse is definitely there under scipy. (I tried adding scipy to my PYTHONPATH and now I get import linsolve.umfpack -> failed: cannot import name isspmatrix_csc) What am I doing wrong? This is scipy 0.4.9 Mathew From davidgrant at gmail.com Mon Jul 17 22:10:21 2006 From: davidgrant at gmail.com (David Grant) Date: Mon, 17 Jul 2006 19:10:21 -0700 Subject: [SciPy-user] fsolve help In-Reply-To: <20060717202515.5225a91e@arbutus.physics.mcmaster.ca> References: <44B843D8.3080202@adelphia.net> <44B84D1B.9040205@hoc.net> <20060715054228.GA3983@arbutus.physics.mcmaster.ca> <20060717202515.5225a91e@arbutus.physics.mcmaster.ca> Message-ID: On 7/17/06, David M. Cooke wrote: > > On Mon, 17 Jul 2006 17:22:29 -0700 > "David Grant" wrote: > > > On 7/14/06, David M. Cooke wrote: > > > > > > > What exactly does that do? What is __future__ all about? > > > > > > There's also the fun 'from __future__ import braces' to try to use { } > > > for blocks instead of indentation. (Try it in the interpreter.) > > > > This doesn't work for some reason. > > > > >>> from __future__ import braces > > File "", line 1 > > SyntaxError: not a chance > > Exactly :-) (It's a joke; braces are a commonly requested feature by > newbies that ain't gonna happen.) > > sorry for playing you as the straight man > > LOL. Can't believe I fell for that. Looks like I'll have to play that joke on somebody else now to even out the score. Definitely not a feature I want to see in python. -- David Grant -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From gael.varoquaux at normalesup.org Tue Jul 18 02:19:36 2006 From: gael.varoquaux at normalesup.org (=?utf-8?Q?Ga=EBl?= Varoquaux) Date: Tue, 18 Jul 2006 08:19:36 +0200 Subject: [SciPy-user] Arithmetic Errors In-Reply-To: <20060718020158.GA16246@redwoodscientific.com> References: <20060717205116.GA11043@redwoodscientific.com> <44BC0161.8070007@ieee.org> <20060717235553.GA14362@redwoodscientific.com> <44BC26FC.8050408@ieee.org> <20060718020158.GA16246@redwoodscientific.com> Message-ID: <20060718061936.GA5066@clipper.ens.fr> On Mon, Jul 17, 2006 at 07:01:58PM -0700, John Lawless wrote: > 2). As a quick-fix alternative to replacing all integer ufuncs, would > it be easier to make numpy produce doubles by default (rather than > any lesser type)? That way, the user would only get wrong > results if he went out of his way to specify explicitly an > inadequate dtype. +1 on that. It seems that integer-related problems are the number one problem that people not used to scipy encounter. It's a big change and it will probably be disruptive, but I think it would help new users a lot. -- Gaël From hgk at et.uni-magdeburg.de Tue Jul 18 02:36:50 2006 From: hgk at et.uni-magdeburg.de (Dr. Hans Georg Krauthaeuser) Date: Tue, 18 Jul 2006 08:36:50 +0200 Subject: [SciPy-user] optimize.leastsq crashs my interpreter In-Reply-To: <44BBA772.204@ftw.at> References: <003301c6a1b8$80ca16e0$1001a8c0@JELLE> <44AE505D.8050905@ftw.at> <44AE775A.30004@ftw.at> <44B361C7.3060906@iam.uni-stuttgart.de> <6f54c160607141206y1c4f548g6da0012ef76f1c80@mail.gmail.com> <44BBA772.204@ftw.at> Message-ID: Ed Schofield wrote: > Ed Schofield wrote: > > Done -- I've now uploaded the Win32 ATLAS libraries to the new wiki and > added links to the page http://www.scipy.org/Installing_SciPy/Windows. > > -- Ed Ed, just out of curiosity: Is there any special reason that you use ATLAS 3.6? The wiki suggests the use of the latest unstable ATLAS (3.7.x).
Hans Georg From jf.moulin at gmail.com Tue Jul 18 02:49:45 2006 From: jf.moulin at gmail.com (JF Moulin) Date: Tue, 18 Jul 2006 06:49:45 +0000 (UTC) Subject: [SciPy-user] mpfit crashes (still)... References: <44BBA348.3090301@gmail.com> Message-ID: Robert Kern gmail.com> writes: > It certainly won't give you that same NameError when you make those changes. > Could you copy-and-paste the error you get after you have made those changes? > Here we go.... this is the script that calls mpfit for testing: import mpfit from scipy import * import numpy as Numeric def F(x,p): y = ( p[0] + p[1]*x + p[2]*x**2 + p[3]*sqrt(x) +p[4]*log(x)) return y def myfunct(p,fjac=None,x=None,y=None,err=None): model=F(x,p) return([0,(y-model)/err]) x = arange(100.0) p = [5.7, 2.2, 500., 1.5, 2000.] p0 = [5, 2, 250., 1.5, 2000.] y = F(x,p) print y err=array(ones(len(y))) fa = {'x':x, 'y':y, 'err':err} m = mpfit.mpfit('myfunct', p0, functkw=fa) print 'status = ', m.status if (m.status <= 0): print 'error message = ', m.errmsg print 'parameters = ', m.params This is the first output pythonw -u "testmpfit.py" [ -1.#INF0000e+000 5.09400000e+002 3.39851568e+003 6.71212265e+003 1.07900887e+004 1.57389299e+004 2.16060932e+004 2.84168889e+004 ....SNIP.... 4.61736029e+006 4.71388330e+006 4.81140608e+006 4.90992866e+006] Traceback (most recent call last): File "testmpfit.py", line 24, in ? 
m = mpfit.mpfit('myfunct', p0, functkw=fa) File "c:\Python24\lib\site-packages\mpfit.py", line 852, in __init__ self.nfev = 0 File "c:\Python24\lib\site-packages\mpfit.py", line 2249, in __init__ self.maxgam = 171.624376956302725 NameError: global name 'log' is not defined Exit code: 1 This is the badly behaving code in mpfit: class machar: def __init__(self, double=1): if (double == 0): self.machep = 1.19209e-007 self.maxnum = 3.40282e+038 self.minnum = 1.17549e-038 self.maxgam = 171.624376956302725 else: self.machep = 2.2204460e-016 self.maxnum = 1.7976931e+308 self.minnum = 2.2250739e-308 self.maxgam = 171.624376956302725 self.maxlog = log(self.maxnum) self.minlog = log(self.minnum) self.rdwarf = sqrt(self.minnum*1.5) * 10 self.rgiant = sqrt(self.maxnum) * 0.1 Now let us add Numeric. : beginning of Mpfit contains import numpy as Numeric import types from scipy import * and then... self.maxlog = Numeric.log(self.maxnum) self.minlog = Numeric.log(self.minnum) self.rdwarf = Numeric.sqrt(self.minnum*1.5) * 10 self.rgiant = Numeric.sqrt(self.maxnum) * 0.1 this gives the following output: Traceback (most recent call last): File "testmpfit.py", line 24, in ? m = mpfit.mpfit('myfunct', p0, functkw=fa) File "c:\Python24\lib\site-packages\mpfit.py", line 852, in __init__ self.nfev = 0 File "c:\Python24\lib\site-packages\mpfit.py", line 2249, in __init__ self.maxgam = 171.624376956302725 NameError: global name 'log' is not defined Exit code: 1 .... I am for sure as puzzled as you are.... Thanks for looking.... JF From schofield at ftw.at Tue Jul 18 05:07:23 2006 From: schofield at ftw.at (Ed Schofield) Date: Tue, 18 Jul 2006 11:07:23 +0200 Subject: [SciPy-user] optimize.leastsq crashs my interpreter In-Reply-To: References: <003301c6a1b8$80ca16e0$1001a8c0@JELLE> <44AE505D.8050905@ftw.at> <44AE775A.30004@ftw.at> <44B361C7.3060906@iam.uni-stuttgart.de> <6f54c160607141206y1c4f548g6da0012ef76f1c80@mail.gmail.com> <44BBA772.204@ftw.at> Message-ID: <44BCA4CB.8010408@ftw.at> Dr. 
Hans Georg Krauthaeuser wrote: > Ed Schofield wrote: > >> Ed Schofield wrote: >> >> Done -- I've now uploaded the Win32 ATLAS libraries to the new wiki and >> added links to the page http://www.scipy.org/Installing_SciPy/Windows. >> >> -- Ed >> > Ed, > > just out of curiosity: Is there any special reason that you use ATLAS > 3.6. The wiki suggest the use of the latest unstable ATLAS (3.7.x). > No reason -- I just uploaded the version that was on the old website ... -- Ed From elcorto at gmx.net Tue Jul 18 07:38:57 2006 From: elcorto at gmx.net (Steve Schmerler) Date: Tue, 18 Jul 2006 13:38:57 +0200 Subject: [SciPy-user] mpfit crashes (still)... In-Reply-To: References: <44BBA348.3090301@gmail.com> Message-ID: <44BCC851.2080705@gmx.net> JF Moulin wrote: > Robert Kern gmail.com> writes: > >> It certainly won't give you that same NameError when you make those changes. >> Could you copy-and-paste the error you get after you have made those changes? >> > Here we go.... > > this is the script that calls mpfit for testing: > > > import mpfit > from scipy import * > import numpy as Numeric > > def F(x,p): > y = ( p[0] + p[1]*x + p[2]*x**2 + p[3]*sqrt(x) +p[4]*log(x)) > return y > > def myfunct(p,fjac=None,x=None,y=None,err=None): > model=F(x,p) > return([0,(y-model)/err]) > > > x = arange(100.0) > p = [5.7, 2.2, 500., 1.5, 2000.] > p0 = [5, 2, 250., 1.5, 2000.] > > y = F(x,p) > print y > > err=array(ones(len(y))) > > fa = {'x':x, 'y':y, 'err':err} > m = mpfit.mpfit('myfunct', p0, functkw=fa) > print 'status = ', m.status > if (m.status <= 0): print 'error message = ', m.errmsg > print 'parameters = ', m.params > > > > This is the first output > > pythonw -u "testmpfit.py" > [ -1.#INF0000e+000 5.09400000e+002 3.39851568e+003 6.71212265e+003 > 1.07900887e+004 1.57389299e+004 2.16060932e+004 2.84168889e+004 > ....SNIP.... > 4.61736029e+006 4.71388330e+006 4.81140608e+006 4.90992866e+006] > Traceback (most recent call last): > File "testmpfit.py", line 24, in ? 
> m = mpfit.mpfit('myfunct', p0, functkw=fa) > File "c:\Python24\lib\site-packages\mpfit.py", line 852, in __init__ > self.nfev = 0 > File "c:\Python24\lib\site-packages\mpfit.py", line 2249, in __init__ > self.maxgam = 171.624376956302725 > NameError: global name 'log' is not defined > Exit code: 1 > > > This is the badly behaving code in mpfit: > class machar: > def __init__(self, double=1): > if (double == 0): > self.machep = 1.19209e-007 > self.maxnum = 3.40282e+038 > self.minnum = 1.17549e-038 > self.maxgam = 171.624376956302725 > else: > self.machep = 2.2204460e-016 > self.maxnum = 1.7976931e+308 > self.minnum = 2.2250739e-308 > self.maxgam = 171.624376956302725 > > self.maxlog = log(self.maxnum) > self.minlog = log(self.minnum) > self.rdwarf = sqrt(self.minnum*1.5) * 10 > self.rgiant = sqrt(self.maxnum) * 0.1 > > > Now let us add Numeric. : > > beginning of Mpfit contains > > import numpy as Numeric > import types > from scipy import * > > and then... > self.maxlog = Numeric.log(self.maxnum) > self.minlog = Numeric.log(self.minnum) > self.rdwarf = Numeric.sqrt(self.minnum*1.5) * 10 > self.rgiant = Numeric.sqrt(self.maxnum) * 0.1 > > this gives the following output: > Traceback (most recent call last): > File "testmpfit.py", line 24, in ? > m = mpfit.mpfit('myfunct', p0, functkw=fa) > File "c:\Python24\lib\site-packages\mpfit.py", line 852, in __init__ > self.nfev = 0 > File "c:\Python24\lib\site-packages\mpfit.py", line 2249, in __init__ > self.maxgam = 171.624376956302725 > NameError: global name 'log' is not defined > Exit code: 1 > > > .... I am for sure as puzzled as you are.... > Thanks for looking.... > > JF > That's crazy. However, if I convert my mpfit.py with convertcode.py, the only change in the code is to replace import Numeric by import numpy.oldnumeric as Numeric (as Travis mentioned earlier), and this seems to work.
However, I still have this boolean array comparison issue (works with Numeric arrays, not numpy, but this should be solvable fairly easily ...). cheers, steve -- Random number generation is the art of producing pure gibberish as quickly as possible. From jf.moulin at gmail.com Tue Jul 18 08:00:02 2006 From: jf.moulin at gmail.com (JF Moulin) Date: Tue, 18 Jul 2006 12:00:02 +0000 (UTC) Subject: [SciPy-user] mpfit crashes (still)... References: <44BBA348.3090301@gmail.com> <44BCC851.2080705@gmx.net> Message-ID: Well... I did it all over again and no progress whatsoever... in desperation I tried this crazy move: I replaced Numeric.log() by foo.log() in the previous lines and tried... believe it or not... same error message! :-(((( What the heck does python play here?? JF From jf.moulin at gmail.com Tue Jul 18 08:07:00 2006 From: jf.moulin at gmail.com (JF Moulin) Date: Tue, 18 Jul 2006 12:07:00 +0000 (UTC) Subject: [SciPy-user] mpfit crashes (still)... References: <44BBA348.3090301@gmail.com> <44BCC851.2080705@gmx.net> Message-ID: yes... even if I ***delete*** the lines where log is used the error message is the same... always. Seems the problem is not with mpfit??! Any clue? JF From elcorto at gmx.net Tue Jul 18 08:54:16 2006 From: elcorto at gmx.net (Steve Schmerler) Date: Tue, 18 Jul 2006 14:54:16 +0200 Subject: [SciPy-user] mpfit crashes (still)... In-Reply-To: References: <44BBA348.3090301@gmail.com> <44BCC851.2080705@gmx.net> Message-ID: <44BCD9F8.6090201@gmx.net> JF Moulin wrote: > yes... > > even if I ***delete*** the lines where log is used the error message is the > same... always. Seems the problem is not with mpfit??! Any clue? > > JF > Could it be that you accidentally placed the mpfit.py that you're changing into a path which isn't on your PYTHONPATH? cheers, steve -- Random number generation is the art of producing pure gibberish as quickly as possible.
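[Editor's note: the Numeric-to-numpy migration discussed in this thread can be sketched as follows. This is a hedged illustration, not mpfit's actual code: the try/except fallback is an assumption for installations where numpy.oldnumeric is unavailable, and the constants are copied from the quoted machar class.]

```python
# Sketch of the compatibility import that convertcode.py produces.
# Old code keeps calling Numeric.log()/Numeric.sqrt() unchanged; only
# the import line changes. The ImportError fallback is an assumption
# for environments where numpy.oldnumeric has been removed.
try:
    import numpy.oldnumeric as Numeric  # Numeric-compatible aliases
except ImportError:
    import numpy as Numeric  # names like log and sqrt resolve here too

maxnum = 1.7976931e+308   # same constants the quoted machar class uses
minnum = 2.2250739e-308
maxlog = Numeric.log(maxnum)
rdwarf = Numeric.sqrt(minnum * 1.5) * 10
```

Aliasing the module this way keeps the qualified Numeric.log() calls working; bare log() calls, as in the first quoted version of machar, still need some import that brings log into scope.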
From jf.moulin at gmail.com Tue Jul 18 09:21:42 2006 From: jf.moulin at gmail.com (JF Moulin) Date: Tue, 18 Jul 2006 13:21:42 +0000 (UTC) Subject: [SciPy-user] mpfit crashes (still)... References: <44BBA348.3090301@gmail.com> <44BCC851.2080705@gmx.net> <44BCD9F8.6090201@gmx.net> Message-ID: Steve Schmerler gmx.net> writes: > > JF Moulin wrote: > > yes... > > > > even if I ***delete*** the lines where log is used the error message is the > > same... always. Seems the problem is not with mpfit??! Any clue? > > > > JF > > > > Could it be that you accidentally placed the mpfit.py that you're > changing into a path which isn't on your PYTHONPATH? > > cheers, > steve > I've been thinking in the same direction! But I put it in C:\Python24\Lib\site-packages and run my test from within the Scite editor, which sends the error back and points to the mpfit file that I edit also from there.... (I checked the same behaviour via python testmpfit.py from the command line...) From jf.moulin at gmail.com Tue Jul 18 09:58:12 2006 From: jf.moulin at gmail.com (JF Moulin) Date: Tue, 18 Jul 2006 13:58:12 +0000 (UTC) Subject: [SciPy-user] mpfit crashes (still)... References: <44BBA348.3090301@gmail.com> <44BCC851.2080705@gmx.net> <44BCD9F8.6090201@gmx.net> Message-ID: JF Moulin gmail.com> writes: > > Steve Schmerler gmx.net> writes: > > > > > JF Moulin wrote: > > > yes... > > > > > > even if I ***delete*** the lines where log is used the error message is the > > > same... always. Seems the problem is not with mpfit??! Any clue? > > > > > > JF > > > > > > > Could it be that you accidentally placed the mpfit.py that you're > > changing into a path which isn't on your PYTHONPATH? > > > > cheers, > > steve > > > > I'have been thinking in the same direction! but I put it in > C:\Python24\Lib\site-packages and run my test from within the Scite editor, > which sends the error back and points to the mpfit file that I edit also from > there....
> (I checked the same behaviour via python testmpfit.py from the command line...) > arghl!!! yesssss.... there was an old copy of mpfit in the dir from where I was calling... now I got different problems though: I cannot import oldnumeric! Is this something I should install separately from numpy??? Thanks for helping a stupid newbie... JF From robert.kern at gmail.com Tue Jul 18 12:30:31 2006 From: robert.kern at gmail.com (Robert Kern) Date: Tue, 18 Jul 2006 11:30:31 -0500 Subject: [SciPy-user] mpfit crashes (still)... In-Reply-To: References: <44BBA348.3090301@gmail.com> <44BCC851.2080705@gmx.net> <44BCD9F8.6090201@gmx.net> Message-ID: <44BD0CA7.3060405@gmail.com> JF Moulin wrote: > yesssss.... there was an old copy of mpfit in the dir from where I was calling... > now I got different problems though: I cannot import oldnumeric! > Is this something I should install separately from numpy??? Thanks for helping a > stupid newbie... It's a relatively recent change. All (or almost all) of the Numeric compatibility aliases have been moved to numpy.oldnumeric. If you want to go that route (and you probably do, since you will have to eventually), grab a recent SVN checkout of numpy or wait for the 1.0beta which is scheduled for Thursday. -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." 
-- Umberto Eco From williams at astro.ox.ac.uk Tue Jul 18 12:49:28 2006 From: williams at astro.ox.ac.uk (Michael Williams) Date: Tue, 18 Jul 2006 17:49:28 +0100 Subject: [SciPy-user] [Numpy-discussion] updated Ubuntu Dapper packages for numpy, matplotlib, and scipy online In-Reply-To: <4496D1AC.8030100@astraw.com> References: <4496D1AC.8030100@astraw.com> Message-ID: <20060718164928.GB21578@astro.ox.ac.uk> Hi Andrew (and others), On Mon, Jun 19, 2006 at 09:32:44AM -0700, Andrew Straw wrote: >I have updated the apt repository I maintain for Ubuntu's Dapper, which >now includes: > >numpy >matplotlib >scipy > >Each package is from a recent SVN checkout and should thus be regarded >as "bleeding edge". The repository has a new URL: >http://debs.astraw.com/dapper/ I intend to keep this repository online >for an extended duration. If you want to put this repository in your >sources list, you need to add the following lines to >/etc/apt/sources.list:: > deb http://debs.astraw.com/ dapper/ > deb-src http://debs.astraw.com/ dapper/ I am unable to access these repositories (which sound very useful, and for which I am grateful to Andrew!). apt-get update gives "Failed to fetch http://debs.astraw.com/dapper/Release.gpg Could not resolve ?debs.astraw.com?" I am also unable to access the repositories listed on the website: deb http://sefton.astraw.com/ubuntu/ dapper/ deb-src http://sefton.astraw.com/ubuntu/ dapper/ for the same reason. Does anyone know where they've gone and if they're coming back? 
Cheers, -- Michael Williams Rudolph Peierls Centre for Theoretical Physics University of Oxford From strawman at astraw.com Tue Jul 18 14:24:17 2006 From: strawman at astraw.com (Andrew Straw) Date: Tue, 18 Jul 2006 11:24:17 -0700 Subject: [SciPy-user] [Numpy-discussion] updated Ubuntu Dapper packages for numpy, matplotlib, and scipy online In-Reply-To: <20060718164928.GB21578@astro.ox.ac.uk> References: <4496D1AC.8030100@astraw.com> <20060718164928.GB21578@astro.ox.ac.uk> Message-ID: <44BD2751.3040904@astraw.com> Michael Williams wrote: >Hi Andrew (and others), > >On Mon, Jun 19, 2006 at 09:32:44AM -0700, Andrew Straw wrote: > > >>I have updated the apt repository I maintain for Ubuntu's Dapper, which >>now includes: >> >>numpy >>matplotlib >>scipy >> >>Each package is from a recent SVN checkout and should thus be regarded >>as "bleeding edge". The repository has a new URL: >>http://debs.astraw.com/dapper/ I intend to keep this repository online >>for an extended duration. If you want to put this repository in your >>sources list, you need to add the following lines to >>/etc/apt/sources.list:: >>deb http://debs.astraw.com/ dapper/ >>deb-src http://debs.astraw.com/ dapper/ >> >> > >I am unable to access these repositories (which sound very useful, and >for which I am grateful to Andrew!). apt-get update gives > > "Failed to fetch http://debs.astraw.com/dapper/Release.gpg Could not > resolve ?debs.astraw.com?" > > > Hmm, that looks like DNS error. My repository is still up and online... debs.astraw.com is actually a "CNAME" record, which aliases another domain name. The canonical domain name uses a dynamic DNS to point to a dynamically assigned IP addresses. Your system (probably in your DNS name resolution) must be having some issue with some of that. Unfortunately, you can't just plug in the IP address, because I'm using Apache virtual hosting to serve a website at "debs.astraw.com" differently from other websites. 
Anyhow, here's what I get, hopefully it will help you fix your DNS issue. $ host debs.astraw.com debs.astraw.com is an alias for astraw-office.kicks-ass.net. astraw-office.kicks-ass.net has address 131.215.28.162 debs.astraw.com is an alias for astraw-office.kicks-ass.net. debs.astraw.com is an alias for astraw-office.kicks-ass.net. >I am also unable to access the repositories listed on the website: > > deb http://sefton.astraw.com/ubuntu/ dapper/ > deb-src http://sefton.astraw.com/ubuntu/ dapper/ > > Eek, I replaced that with the new location and some more info. >for the same reason. Does anyone know where they've gone and if they're >coming back? > > > I'm planning on keeping them around for a while -- at least until numpy, scipy, and matplotlib get integrated into my flavor-of-the-year Debian/Ubuntu release in a sufficiently up-to-date version. From Pierre.Barbier_de_Reuille at inria.fr Tue Jul 18 15:34:35 2006 From: Pierre.Barbier_de_Reuille at inria.fr (Pierre Barbier de Reuille) Date: Tue, 18 Jul 2006 20:34:35 +0100 Subject: [SciPy-user] Problem with weave Message-ID: <44BD37CB.6040106@inria.fr> Hello, I noticed that weave.inline does not work with g++ ... the error was pretty simple: some names already in a namespace were "over-qualified", so here is a patch. I also noticed a few errors in the example. Like in the binary search, it makes more sense to use bisect_left as it gives the right result (instead of displaying something inconsistent). Also added a (necessary for my version of g++) indexed_ref::operator=(const indexed_ref&) in scxx/sequence.h Well, there are a few other things. The patch is to be applied at the root of the weave library. Pierre -------------- next part -------------- An embedded and charset-unspecified text was scrubbed...
Name: weave.patch URL: From williams at astro.ox.ac.uk Tue Jul 18 16:36:12 2006 From: williams at astro.ox.ac.uk (Michael Williams) Date: Tue, 18 Jul 2006 21:36:12 +0100 Subject: [SciPy-user] [Numpy-discussion] updated Ubuntu Dapper packages for numpy, matplotlib, and scipy online In-Reply-To: <20060718164928.GB21578@astro.ox.ac.uk> References: <4496D1AC.8030100@astraw.com> <20060718164928.GB21578@astro.ox.ac.uk> Message-ID: <20060718203612.GA25724@astro.ox.ac.uk> On Tue, Jul 18, 2006 at 11:45:32AM -0700, Andrew Straw wrote: >>I am unable to access these repositories (which sound very useful, and >>for which I am grateful to Andrew!). > >Hmm, that looks like DNS error. My repository is still up and online... It is indeed a DNS error. I should have tested from a different machine before posting. Thanks for the quick reply. >>for the same reason. Does anyone know where they've gone and if >>they're coming back? >> >I'm planning on keeping them around for a while -- at least until >numpy, scipy, and matplotlib get integrated into my flavor-of-the-year >Debian/Ubuntu release in a sufficiently up-to-date version. Great. I've installed them now, and they seem to be working fine. Thanks very much! Cheers, -- Mike From davidgrant at gmail.com Tue Jul 18 20:23:20 2006 From: davidgrant at gmail.com (David Grant) Date: Tue, 18 Jul 2006 17:23:20 -0700 Subject: [SciPy-user] Would like to simplify my 3 where statements Message-ID: What I want to do is find out which row indices in the jth column of a matrix are 1, excluding A[i,j]. I came up with this: where(A[:,j])[0][where(where(A[:,j])[0]!=i)] It is really ugly, but it is already faster than my previous implementation using a for loop and no longer the bottleneck in my code, but I'd like something that looks more readable.
Thanks, -- David Grant From davidgrant at gmail.com Wed Jul 19 00:44:03 2006 From: davidgrant at gmail.com (David Grant) Date: Tue, 18 Jul 2006 21:44:03 -0700 Subject: [SciPy-user] Would like to simplify my 3 where statements (re-send) Message-ID: What I want to do is find out which row indices in the jth column of a matrix are 1 except for A[i,j]. I came up with this: where(A[:,j])[0][where(where(A[:,j])[0]!=i)] It is really ugly, but it is already faster than my previous implementation using a for loop and no longer the bottleneck in my code, but I'd like something that looks more readable. Thanks, David Grant p.s. Not sure why, but I sent this message much, much earlier today but it didn't seem to go through -------------- next part -------------- An HTML attachment was scrubbed... URL: From davidgrant at gmail.com Wed Jul 19 01:14:51 2006 From: davidgrant at gmail.com (David Grant) Date: Tue, 18 Jul 2006 22:14:51 -0700 Subject: [SciPy-user] sorry (mailing list is a bit slow?) Message-ID: Sorry for the re-send everybody (and for this message). It looks like my first email and the re-send hit the archives already but I have yet to receive either of them through the mailing list. Did anyone receive either of those messages? Please reply in private, do not reply to the list. Thanks and sorry again. -- David Grant -------------- next part -------------- An HTML attachment was scrubbed... URL: From robert.kern at gmail.com Wed Jul 19 01:28:47 2006 From: robert.kern at gmail.com (Robert Kern) Date: Wed, 19 Jul 2006 00:28:47 -0500 Subject: [SciPy-user] sorry (mailing list is a bit slow?) In-Reply-To: References: Message-ID: <44BDC30F.7000808@gmail.com> David Grant wrote: > Sorry for the re-send everybody (and for this message). It looks like my > first email and the re-send hit the archives already but I have yet to > receive either of them through the mailing list. Did anyone receive > either of those messages? Please reply in private, do not reply to the > list. 
Thanks and sorry again. [replying to the list since it seems relevant] I got both of your messages. I know that I don't get any of my messages that I send to the list. I think it's a GMail thing since my "Receive your own posts to the list?" setting on my Mailman preferences page is set to "Yes," and I didn't used to see this behavior before I started using GMail. You may want to check your own Mailman and GMail preference pages. -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From davidgrant at gmail.com Wed Jul 19 01:37:23 2006 From: davidgrant at gmail.com (David Grant) Date: Tue, 18 Jul 2006 22:37:23 -0700 Subject: [SciPy-user] sorry (mailing list is a bit slow?) In-Reply-To: <44BDC30F.7000808@gmail.com> References: <44BDC30F.7000808@gmail.com> Message-ID: On 7/18/06, Robert Kern wrote: > > David Grant wrote: > > Sorry for the re-send everybody (and for this message). It looks like my > > first email and the re-send hit the archives already but I have yet to > > receive either of them through the mailing list. Did anyone receive > > either of those messages? Please reply in private, do not reply to the > > list. Thanks and sorry again. > > [replaying to the list since it seems relevant] > > I got both of your messages. I know that I don't get any of my messages > that I > send to the list. I think it's a GMail thing since my "Receive your own > posts to > the list?" setting on my Mailman preferences page is set to "Yes," and I > didn't > used to see this behavior before I started using GMail. You may want to > check > your own Mailman and GMail preference pages. Interesting... thanks for this info. I just switched to GMail last week... and my mailman settings are like yours. I guess what would solve it would be if I could label outgoing messages to scipy-user with my sci-python label...
then it would show up under that label. There is no way to do this though, unless I go through my Sent mail folder every once in a while and tag stuff. Thanks, Dave -- David Grant -------------- next part -------------- An HTML attachment was scrubbed... URL: From fperez.net at gmail.com Wed Jul 19 02:53:38 2006 From: fperez.net at gmail.com (Fernando Perez) Date: Wed, 19 Jul 2006 00:53:38 -0600 Subject: [SciPy-user] sorry (mailing list is a bit slow?) In-Reply-To: References: <44BDC30F.7000808@gmail.com> Message-ID: On 7/18/06, David Grant wrote: > > > > On 7/18/06, Robert Kern wrote: > > David Grant wrote: > > > Sorry for the re-send everybody (and for this message). It looks like my > > > first email and the re-send hit the archives already but I have yet to > > > receive either of them through the mailing list. Did anyone receive > > > either of those messages? Please reply in private, do not reply to the > > > list. Thanks and sorry again. > > > > [replaying to the list since it seems relevant] > > > > I got both of your messages. I know that I don't get any of my messages > that I > > send to the list. I think it's a GMail thing since my "Receive your own > posts to > > the list?" setting on my Mailman preferences page is set to "Yes," and I > didn't > > used to see this behavior before I started using GMail. You may want to > check > > your own Mailman and GMail preference pages. > > > Interesting... thanks for this info. I just switched to GMail last week... > and my mailman settings are like yours. I guess what would solve it would be > if I could label outgoing messages to scipy-user with my sci-python label... > then it would show up under that label. There is no way to do this though, > unless I go through my Sent mail folder every once in a while and tag stuff. Yes, it seems gmail does NOT show your own messages in your inbox as new, though they do appear once someone replies and you get the whole thread. 
It also threw me for a loop, and I find it mildly annoying, but there doesn't seem to be a config option I can find. I don't know if sourceforge really /is/ blocking gmail; I think quite a few of us have just tripped on this (though it could be that SF is also doing something nasty). Cheers, f From adiril at mynet.com Wed Jul 19 04:02:42 2006 From: adiril at mynet.com (adiril) Date: Wed, 19 Jul 2006 11:02:42 +0300 (EEST) Subject: [SciPy-user] Ynt: SciPy-user Digest, Vol 35, Issue 31 Message-ID: <1133.85.98.6.139.1153296162.mynet@webmail38.mynet.com> -------------- next part -------------- An HTML attachment was scrubbed... URL: From marco.turchi at gmail.com Wed Jul 19 04:57:12 2006 From: marco.turchi at gmail.com (marco turchi) Date: Wed, 19 Jul 2006 10:57:12 +0200 Subject: [SciPy-user] sparse matrix coo_matrix Message-ID: <79a042480607190157j8c17406o2e9b5c609c35400e@mail.gmail.com> Hi, I'm a new user of SciPy. I have to store some float data in a sparse matrix. Every time, I have to add a new entry to the matrix. So I try to build the matrix this way: values = [1, 3, 2] indices = [[0,0],[1,1],[2,1]] matrix = coo_matrix(values, indices) but I obtain the following error: invalid input format Traceback (most recent call last): File "", line 1, in ? File "/opt/STools/python/2.4.1/lib/python2.4/site-packages/scipy/sparse/sparse.py", line 1715, in __init__ raise AssertionError AssertionError I do not understand where the error is. Then, suppose that I'm able to create the matrix, how can I add a new entry to that matrix?? thanks a lot Marco -------------- next part -------------- An HTML attachment was scrubbed...
URL: From schofield at ftw.at Wed Jul 19 07:59:39 2006 From: schofield at ftw.at (Ed Schofield) Date: Wed, 19 Jul 2006 13:59:39 +0200 Subject: [SciPy-user] sparse matrix usage, documentation In-Reply-To: <98AA9E629A145D49A1A268FA6DBA70B4249076@mmihserver01.MMIH01.local> References: <98AA9E629A145D49A1A268FA6DBA70B4249076@mmihserver01.MMIH01.local> Message-ID: <44BE1EAB.8090704@ftw.at> William Hunter wrote: > Ed; > > I searched scipy ML archive and Googled as usual, but nothing pops up. > If you think other users might find this of interest I'll post to the > user's list. > Okay -- I'll send this reply to the list. > You're likely to be one of only three who can answer this, if you don't > mind. This is from info.py in SVN, as you'll know... > > > I have a matrix defined as A = sparse.lil_matrix((1000, 1000)) > > What exactly does the A.T bit do (I see there is also A.H, A.I and A.A)? > And why does it only work with sparse matrices? > A.T is a shortcut for A.transpose(). It also works with dense (numpy) matrices, and, in the latest SVN versions of NumPy, with dense arrays. A.H computes the Hermitian; it's a shortcut for A.transpose().conj(). A.I is available for dense (square) matrices, where it computes the inverse. It isn't currently supported for sparse matrices, but we could do this if people want. > Just point me in a direction, I'll go read. > > ALSO: I'm willing to write some newbie-to-sparse documentation (pdf's or > something else), as I'm new to this and in all likelihood other new > users will be able to benefit. > Thanks a lot! I also saw that Neilen Marais offered this about a week ago too, but I've been miserably slow in replying; I should have jumped at the opportunity. I think the Wiki's the easiest way to get a tutorial up; we can then repackage the content in a different format later if we want. I've created a basic stub at http://www.scipy.org/SciPy_Tutorial with the content from sparse/info.py. Your help here would be great!
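The attribute shortcuts Ed lists can be seen in a few lines. This is a hedged, illustrative sketch against present-day scipy.sparse and numpy (the 2006 module layout differed slightly), not code from the original thread:

```python
import numpy as np
from scipy.sparse import lil_matrix

A = lil_matrix((3, 3), dtype=complex)
A[0, 1] = 1 + 2j
A[2, 0] = 3.0

# A.T is just shorthand for A.transpose()
assert (A.T.toarray() == A.transpose().toarray()).all()

# The Hermitian transpose, written out as transpose().conj()
H = A.transpose().conj()
assert H[1, 0] == 1 - 2j  # conjugate of A[0, 1]

# For dense square matrices, np.matrix also offers .I, the inverse
M = np.matrix([[2.0, 0.0], [0.0, 4.0]])
assert (M.I * M == np.eye(2)).all()
```

(np.matrix and its .I attribute are discouraged in modern NumPy in favour of plain arrays with np.linalg.inv, but they illustrate what A.I means here.)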
-- Ed From schofield at ftw.at Wed Jul 19 10:37:48 2006 From: schofield at ftw.at (Ed Schofield) Date: Wed, 19 Jul 2006 16:37:48 +0200 Subject: [SciPy-user] sparse matrix coo_matrix In-Reply-To: <79a042480607190157j8c17406o2e9b5c609c35400e@mail.gmail.com> References: <79a042480607190157j8c17406o2e9b5c609c35400e@mail.gmail.com> Message-ID: <44BE43BC.3050806@ftw.at> marco turchi wrote: > Hi, > I'm a new user of SciPy. I have to store some float data in a sparse > matrix. Every time, I have to add a new entry to the matrix. > > So I try to build the matrix this way: > values = [1, 3, 2] > indices = [[0,0],[1,1],[2,1]] > matrix = coo_matrix(values, indices) > > but I obtain the following error: > invalid input format > Traceback (most recent call last): > File "", line 1, in ? > File > "/opt/STools/python/2.4.1/lib/python2.4/site-packages/scipy/sparse/sparse.py", > line 1715, in __init__ > raise AssertionError > AssertionError > I do not understand where the error is. Which version of SciPy are you using? With recent versions, you can use >>> ij = numpy.transpose(indices) >>> coo_matrix((values, ij)) These semantics are described in the docstring, available with >>> help(sparse) or >>> help(coo_matrix) > Then, suppose that I'm able to create the matrix, how can I add a new > entry to that matrix? With coo_matrix, you can't.
It's probably easiest to use lil_matrix instead for sparse matrix construction: >>> m = lil_matrix((3, 2)) >>> m[0,0] = 1 >>> m[1,1] = 3 >>> m[2,1] = 2 I suppose I (or someone) should add this info to www.scipy.org/SciPy_Tutorial ;) -- Ed From marco.turchi at gmail.com Wed Jul 19 10:52:21 2006 From: marco.turchi at gmail.com (marco turchi) Date: Wed, 19 Jul 2006 16:52:21 +0200 Subject: [SciPy-user] sparse matrix coo_matrix In-Reply-To: <44BE43BC.3050806@ftw.at> References: <79a042480607190157j8c17406o2e9b5c609c35400e@mail.gmail.com> <44BE43BC.3050806@ftw.at> Message-ID: <79a042480607190752m76e5ebcbp384de949b3c7640a@mail.gmail.com> > > > Which version of SciPy are you using? With recent versions, you can use > >>> ij = numpy.transpose(indices) > >>> coo_matrix((values, ij)) > > These semantics are described in the docstring, available with > >>> help(sparse) > or > >>> help(coo_matrix) I have this version: 0.4.6, I guess that it is old... but I cannot install the new one... :-( > Then, suppose that I'm able to create the matrix, how can I add a new > entry to that matrix? > > With coo_matrix, you can't. It's probably easiest to use lil_matrix > instead for sparse matrix construction: > >>> m = lil_matrix((3, 2)) > >>> m[0,0] = 1 > >>> m[1,1] = 3 > >>> m[2,1] = 2 This version doesn't have lil_matrix. Anyway, I think I've solved the problem: I create two arrays, values and positions, and every time I add a new element to each array using the concatenate command; at the end I construct the coo_matrix.... I hope this solution will work.... I suppose I (or someone) should add this info to > www.scipy.org/SciPy_Tutorial ;) maybe... :-) thanks Marco -- Ed > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From robert.kern at gmail.com Wed Jul 19 12:18:51 2006 From: robert.kern at gmail.com (Robert Kern) Date: Wed, 19 Jul 2006 11:18:51 -0500 Subject: [SciPy-user] sorry (mailing list is a bit slow?)
In-Reply-To: References: <44BDC30F.7000808@gmail.com> Message-ID: <44BE5B6B.8040404@gmail.com> Fernando Perez wrote: > I don't know if sourceforge really /is/ blocking gmail, I think quite > a few of us have just tripped on this (though it could be that SF is > also doing something nasty). Sourceforge blocking mail going to a scipy.org list would be a nasty thing indeed. ;-) -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From ryanlists at gmail.com Wed Jul 19 16:16:26 2006 From: ryanlists at gmail.com (Ryan Krauss) Date: Wed, 19 Jul 2006 16:16:26 -0400 Subject: [SciPy-user] sorry (mailing list is a bit slow?) In-Reply-To: References: <44BDC30F.7000808@gmail.com> Message-ID: For what it's worth, I have a gmail account set up just for my email lists and I really like it. I have a label and filter for each list so that any mail to scipy-user gets labeled SciPy and is taken out of my inbox automatically. Once I do that, email I send to a list shows up in the SciPy label view. Ryan On 7/19/06, David Grant wrote: > > > > On 7/18/06, Robert Kern wrote: > > David Grant wrote: > > > Sorry for the re-send everybody (and for this message). It looks like my > > > first email and the re-send hit the archives already but I have yet to > > > receive either of them through the mailing list. Did anyone receive > > > either of those messages? Please reply in private, do not reply to the > > > list. Thanks and sorry again. > > > > [replaying to the list since it seems relevant] > > > > I got both of your messages. I know that I don't get any of my messages > that I > > send to the list. I think it's a GMail thing since my "Receive your own > posts to > > the list?" setting on my Mailman preferences page is set to "Yes," and I > didn't > > used to see this behavior before I started using GMail. 
You may want to > check > > your own Mailman and GMail preference pages. > > > Interesting... thanks for this info. I just switched to GMail last week... > and my mailman settings are like yours. I guess what would solve it would be > if I could label outgoing messages to scipy-user with my sci-python label... > then it would show up under that label. There is no way to do this though, > unless I go through my Sent mail folder every once in a while and tag stuff. > > Thanks, > Dave > > > -- > David Grant > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > > > From davidgrant at gmail.com Wed Jul 19 18:55:05 2006 From: davidgrant at gmail.com (David Grant) Date: Wed, 19 Jul 2006 15:55:05 -0700 Subject: [SciPy-user] sorry (mailing list is a bit slow?) In-Reply-To: References: <44BDC30F.7000808@gmail.com> Message-ID: I also have a filter and label for scipy. How do you get your sent messages to get labelled SciPy? Dave On 7/19/06, Ryan Krauss wrote: > > For what it's worth, I have a gmail account set up just for my email > lists and I really like it. I have a label and filter for each list > so that any mail to scipy-user gets labeled SciPy and is taken out of > my inbox automatically. Once I do that, email I send to a list shows > up in the SciPy label view. > > Ryan > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From simon at arrowtheory.com Wed Jul 19 21:12:20 2006 From: simon at arrowtheory.com (Simon Burton) Date: Thu, 20 Jul 2006 11:12:20 +1000 Subject: [SciPy-user] build problem: turn off swig ? Message-ID: <20060720111220.72e84203.simon@arrowtheory.com> ... building extension "scipy.linsolve.umfpack.__umfpack" sources creating build/src.linux-i686-2.4/scipy/linsolve creating build/src.linux-i686-2.4/scipy/linsolve/umfpack adding 'Lib/linsolve/umfpack/umfpack.i' to sources.
creating build/src.linux-i686-2.4/Lib/linsolve creating build/src.linux-i686-2.4/Lib/linsolve/umfpack swig: Lib/linsolve/umfpack/umfpack.i swig -python -I/usr/include -o build/src.linux-i686-2.4/Lib/linsolve/umfpack/_umfpack_wrap.c -outdir build/src.linux-i686-2.4/Lib/linsolve/umfpack Lib/linsolve/umfpack/umfpack.i :1: Error: Unable to find 'swig.swg' :3: Error: Unable to find 'python.swg' Lib/linsolve/umfpack/umfpack.i:188: Error: Unable to find 'umfpack.h' Lib/linsolve/umfpack/umfpack.i:189: Error: Unable to find 'umfpack_solve.h' Lib/linsolve/umfpack/umfpack.i:190: Error: Unable to find 'umfpack_defaults.h' Lib/linsolve/umfpack/umfpack.i:191: Error: Unable to find 'umfpack_triplet_to_col.h' Lib/linsolve/umfpack/umfpack.i:192: Error: Unable to find 'umfpack_col_to_triplet.h' Lib/linsolve/umfpack/umfpack.i:193: Error: Unable to find 'umfpack_transpose.h' Lib/linsolve/umfpack/umfpack.i:194: Error: Unable to find 'umfpack_scale.h' Lib/linsolve/umfpack/umfpack.i:196: Error: Unable to find 'umfpack_report_symbolic.h' Lib/linsolve/umfpack/umfpack.i:197: Error: Unable to find 'umfpack_report_numeric.h' Lib/linsolve/umfpack/umfpack.i:198: Error: Unable to find 'umfpack_report_info.h' Lib/linsolve/umfpack/umfpack.i:199: Error: Unable to find 'umfpack_report_control.h' Lib/linsolve/umfpack/umfpack.i:211: Error: Unable to find 'umfpack_symbolic.h' Lib/linsolve/umfpack/umfpack.i:212: Error: Unable to find 'umfpack_numeric.h' Lib/linsolve/umfpack/umfpack.i:221: Error: Unable to find 'umfpack_free_symbolic.h' Lib/linsolve/umfpack/umfpack.i:222: Error: Unable to find 'umfpack_free_numeric.h' error: command 'swig' failed with exit status 1 Do I actually need swig ? Simon. -- Simon Burton, B.Sc. Licensed PO Box 8066 ANU Canberra 2601 Australia Ph. 61 2 6249 6940 http://arrowtheory.com From john at nnytech.net Wed Jul 19 22:29:10 2006 From: john at nnytech.net (John Byrnes) Date: Thu, 20 Jul 2006 02:29:10 +0000 Subject: [SciPy-user] build problem: turn off swig ? 
In-Reply-To: <20060720111220.72e84203.simon@arrowtheory.com> References: <20060720111220.72e84203.simon@arrowtheory.com> Message-ID: <200607200229.19917.john@nnytech.net> It's some error related to wrapping UMFPACK. I've gotten it before and couldn't figure out how to fix it. You can build scipy without UMFPACK by setting the $UMFPACK variable to None. In bash: $> export UMFPACK="None" then building as normal. Regards, John On Thursday 20 July 2006 01:12, Simon Burton wrote: > ... > building extension "scipy.linsolve.umfpack.__umfpack" sources > creating build/src.linux-i686-2.4/scipy/linsolve > creating build/src.linux-i686-2.4/scipy/linsolve/umfpack > adding 'Lib/linsolve/umfpack/umfpack.i' to sources. > creating build/src.linux-i686-2.4/Lib/linsolve > creating build/src.linux-i686-2.4/Lib/linsolve/umfpack > swig: Lib/linsolve/umfpack/umfpack.i > swig -python -I/usr/include -o > build/src.linux-i686-2.4/Lib/linsolve/umfpack/_umfpack_wrap.c -outdir > build/src.linux-i686-2.4/Lib/linsolve/umfpack > Lib/linsolve/umfpack/umfpack.i > > :1: Error: Unable to find 'swig.swg' > :3: Error: Unable to find 'python.swg' > > Lib/linsolve/umfpack/umfpack.i:188: Error: Unable to find 'umfpack.h' > Lib/linsolve/umfpack/umfpack.i:189: Error: Unable to find 'umfpack_solve.h' > Lib/linsolve/umfpack/umfpack.i:190: Error: Unable to find > 'umfpack_defaults.h' Lib/linsolve/umfpack/umfpack.i:191: Error: Unable to > find 'umfpack_triplet_to_col.h' Lib/linsolve/umfpack/umfpack.i:192: Error: > Unable to find 'umfpack_col_to_triplet.h' > Lib/linsolve/umfpack/umfpack.i:193: Error: Unable to find > 'umfpack_transpose.h' Lib/linsolve/umfpack/umfpack.i:194: Error: Unable to > find 'umfpack_scale.h' Lib/linsolve/umfpack/umfpack.i:196: Error: Unable to > find 'umfpack_report_symbolic.h' Lib/linsolve/umfpack/umfpack.i:197: Error: > Unable to find 'umfpack_report_numeric.h' > Lib/linsolve/umfpack/umfpack.i:198: Error: Unable to find > 'umfpack_report_info.h'
Lib/linsolve/umfpack/umfpack.i:199: Error: Unable > to find 'umfpack_report_control.h' Lib/linsolve/umfpack/umfpack.i:211: > Error: Unable to find 'umfpack_symbolic.h' > Lib/linsolve/umfpack/umfpack.i:212: Error: Unable to find > 'umfpack_numeric.h' Lib/linsolve/umfpack/umfpack.i:221: Error: Unable to > find 'umfpack_free_symbolic.h' Lib/linsolve/umfpack/umfpack.i:222: Error: > Unable to find 'umfpack_free_numeric.h' error: command 'swig' failed with > exit status 1 > > Do I actually need swig ? > > Simon. -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: application/pgp-signature Size: 191 bytes Desc: not available URL: From simon at arrowtheory.com Wed Jul 19 23:56:57 2006 From: simon at arrowtheory.com (Simon Burton) Date: Thu, 20 Jul 2006 13:56:57 +1000 Subject: [SciPy-user] build problem: turn off swig ? In-Reply-To: <200607200229.19917.john@nnytech.net> References: <20060720111220.72e84203.simon@arrowtheory.com> <200607200229.19917.john@nnytech.net> Message-ID: <20060720135657.4bbf8463.simon@arrowtheory.com> On Thu, 20 Jul 2006 02:29:10 +0000 John Byrnes wrote: > It's some error related to wrapping UMFPACK. I've gotten it before and > couldn't figure out how to fix it. You can build scipy without UMFPACK > by setting the $UMFPACK variable to None. > > In bash: > $> export UMFPACK="None" > OK, that worked but actually it turned out to be a bad swig installation. Simon. -- Simon Burton, B.Sc. Licensed PO Box 8066 ANU Canberra 2601 Australia Ph.
61 2 6249 6940 http://arrowtheory.com From fullung at gmail.com Thu Jul 20 09:21:13 2006 From: fullung at gmail.com (Albert Strasheim) Date: Thu, 20 Jul 2006 15:21:13 +0200 Subject: [SciPy-user] Problem with weave In-Reply-To: <44BD37CB.6040106@inria.fr> Message-ID: <039501c6abff$5f51bbc0$0100000a@dsp.sun.ac.za> Hey Pierre Some of the problems you mentioned were already reported here: http://projects.scipy.org/scipy/scipy/ticket/232 and fixed here: http://projects.scipy.org/scipy/scipy/changeset/2098 Please attach a patch against the latest SciPy source to a new ticket: http://projects.scipy.org/scipy/scipy/newticket so that we don't lose track of the other changes you mentioned. Regards, Albert > -----Original Message----- > From: scipy-user-bounces at scipy.org [mailto:scipy-user-bounces at scipy.org] > On Behalf Of Pierre Barbier de Reuille > Sent: 18 July 2006 21:35 > To: SciPy Users List > Subject: [SciPy-user] Problem with weave > > Hello, > > I noticed that weave.inline does not work with g++ ... the error was > pretty simple: some name already in a namespace were "over qualified", > so here is a patch. I also noticed a few errors in the example. > Like in the binary search, it makes more sens to use bisect_left as it > gives the right result (instead of displaying something inconsistant). > > Also added a (necessary for my version of g++) > indexed_ref::operator=(const indexed_ref&) > in scxx/sequence.h > > Well, there are a few other things. > > The patch is to be applied at the root of the weave library. > > Pierre From Pierre.Barbier_de_Reuille at inria.fr Thu Jul 20 11:15:37 2006 From: Pierre.Barbier_de_Reuille at inria.fr (Pierre Barbier de Reuille) Date: Thu, 20 Jul 2006 16:15:37 +0100 Subject: [SciPy-user] Problem with weave In-Reply-To: <039501c6abff$5f51bbc0$0100000a@dsp.sun.ac.za> References: <039501c6abff$5f51bbc0$0100000a@dsp.sun.ac.za> Message-ID: <44BF9E19.7030706@inria.fr> Hi ! Sorry about that ... 
I update scipy regularly, but because of some strange network configuration, I cannot do so at work ... I will update both my scipy and my patch this evening and submit a new ticket ! Either way, thanks for telling me the right procedure to submit patches ! Pierre Albert Strasheim wrote: > Hey Pierre > > Some of the problems you mentioned were already reported here: > > http://projects.scipy.org/scipy/scipy/ticket/232 > > and fixed here: > > http://projects.scipy.org/scipy/scipy/changeset/2098 > > Please attach a patch against the latest SciPy source to a new ticket: > > http://projects.scipy.org/scipy/scipy/newticket > > so that we don't lose track of the other changes you mentioned. > > Regards, > > Albert > > >> -----Original Message----- >> From: scipy-user-bounces at scipy.org [mailto:scipy-user-bounces at scipy.org] >> On Behalf Of Pierre Barbier de Reuille >> Sent: 18 July 2006 21:35 >> To: SciPy Users List >> Subject: [SciPy-user] Problem with weave >> >> Hello, >> >> I noticed that weave.inline does not work with g++ ... the error was >> pretty simple: some name already in a namespace were "over qualified", >> so here is a patch. I also noticed a few errors in the example. >> Like in the binary search, it makes more sens to use bisect_left as it >> gives the right result (instead of displaying something inconsistant). >> >> Also added a (necessary for my version of g++) >> indexed_ref::operator=(const indexed_ref&) >> in scxx/sequence.h >> >> Well, there are a few other things. >> >> The patch is to be applied at the root of the weave library. 
>> >> Pierre >> > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > From hetland at tamu.edu Thu Jul 20 12:36:01 2006 From: hetland at tamu.edu (Rob Hetland) Date: Thu, 20 Jul 2006 12:36:01 -0400 Subject: [SciPy-user] numpy netcdf reader Message-ID: Konrad & scipy-users, I have made some changes to the scipy.sandbox.netcdf package so that it installs correctly. (I am a total hack at distutils, but it seems to do the right thing..). This could also be installed as a stand alone package, since it does not actually depend on scipy for anything. I have tested this module on my Mac, a Redhat distribution, and a Win box (using cygwin netcdf libraries), and all seem to work fine. One issue I have is that I can't seem to write a single character. In one of the programs I use, a variable called 'spherical' is expected to have a value of 'T' or 'F'. I am unable to write a value to this variable no matter what I try. Similar zero-dimensional assignments work for floating point numbers, just not for characters. Any advice? -Rob ---- Rob Hetland, Assistant Professor Dept. of Oceanography, Texas A&M University http://pong.tamu.edu/~rob phone: 979-458-0096, fax: 979-845-6331 -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: netcdf.tar.gz Type: application/x-gzip Size: 17778 bytes Desc: not available URL: -------------- next part -------------- An HTML attachment was scrubbed... URL: From nvf at MIT.EDU Thu Jul 20 18:27:53 2006 From: nvf at MIT.EDU (Nick Fotopoulos) Date: Thu, 20 Jul 2006 18:27:53 -0400 Subject: [SciPy-user] Ticket #14 commit? Message-ID: <87BD153A-7B37-4C91-8ECC-313C8567AA3A@mit.edu> Dear devs, A few months ago, I posted an update for scipy.io.loadmat which allows the reading of v7 mat-files.
I've received several inquiries and a few patches that add support for more Matlab constructs, which prompt me to ask if someone could check the changes into SVN for me and close the ticket. I checked yesterday and the svn diff is fairly small, so I imagine it's not a big job. I could also provide test mat-files with loadmat sample output if that's of interest. Thanks, Nick -------------- next part -------------- A non-text attachment was scrubbed... Name: smime.p7s Type: application/pkcs7-signature Size: 2802 bytes Desc: not available URL: From stefan at sun.ac.za Thu Jul 20 19:19:15 2006 From: stefan at sun.ac.za (Stefan van der Walt) Date: Fri, 21 Jul 2006 01:19:15 +0200 Subject: [SciPy-user] Ticket #14 commit? In-Reply-To: <87BD153A-7B37-4C91-8ECC-313C8567AA3A@mit.edu> References: <87BD153A-7B37-4C91-8ECC-313C8567AA3A@mit.edu> Message-ID: <20060720231915.GD1046@mentat.za.net> Hi Nick On Thu, Jul 20, 2006 at 06:27:53PM -0400, Nick Fotopoulos wrote: > A few months ago, I posted an update for scipy.io.loadmat which > allows the reading of v7 mat-files.
I've received several inquiries > and a few patches that add support for more Matlab constructs, which > prompt me to ask if someone could check the changes into SVN for me > and close the ticket. I checked yesterday and the svn diff is fairly > small, so I imagine it's not a big job. Can you do a diff against the latest SVN? I assume you have a version that incorporates the changes suggested by Brian Blais and Brice Thurin (otherwise just attach it, I'll do a diff and merge it). > I could also provide test mat-files with loadmat sample output if > that's of interest. We should write unit tests to make sure that the current behaviour is maintained, and that the new behaviour isn't broken in the future. Minimal examples will help a great deal. Cheers Stéfan From michael.sorich at gmail.com Thu Jul 20 20:17:06 2006 From: michael.sorich at gmail.com (Michael Sorich) Date: Fri, 21 Jul 2006 09:47:06 +0930 Subject: [SciPy-user] Would like to simplify my 3 where statements In-Reply-To: References: Message-ID: <16761e100607201717t1c610949xd24f579ef6f69013@mail.gmail.com> Can you give a specific example of how this would work? The code really is ugly and it is not clear to me what exactly it does. On 7/19/06, David Grant wrote: > What I want to do is find out which row indices in the jth column of a > matrix are 1 except for A[i,j]. I came up with this: > > where(A[:,j])[0][where(where(A[:,j])[0]!=i)] > > It is really ugly, but it is already faster than my previous > implementation using a for loop and no longer the bottleneck in my > code, but I'd like something that looks more readable.
> > Thanks, > -- > David Grant > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > From davidgrant at gmail.com Thu Jul 20 20:47:03 2006 From: davidgrant at gmail.com (David Grant) Date: Thu, 20 Jul 2006 17:47:03 -0700 Subject: [SciPy-user] Would like to simplify my 3 where statements In-Reply-To: <16761e100607201717t1c610949xd24f579ef6f69013@mail.gmail.com> References: <16761e100607201717t1c610949xd24f579ef6f69013@mail.gmail.com> Message-ID: On 7/20/06, Michael Sorich wrote: > Can you give a specific example of how this would work? The code > really is ugly and it is not clear to me what exactly it does. OK, here's a quick example: import numpy n=5 i=3 j=4 A=numpy.random.randint(0,2,(n,n)) #make random graph A=A-numpy.diag(numpy.diag(A)) #remove diagonal A=numpy.triu(A)+numpy.transpose(numpy.triu(A)) #make symmetric Now say A is the adjacency matrix for a graph and I want to know which nodes are neighbours of node j, but I want to exclude node i from consideration. So if A is: array([[0, 1, 0, 1, 0], [1, 0, 1, 0, 1], [1, 0, 0, 0, 0], [0, 0, 1, 0, 1], [1, 1, 0, 1, 0]]) the neighbours are array([1]) for i=3, j=4. One way to do it is to do: ans=where(A[:,j]==1)[0] if A[i,j] == 1: ans -= 1 which is probably faster than my method. I don't really care so much about speed though. I'm just trying to figure out different ways of doing this using numpy. Dave From Li.Cheng at nicta.com.au Thu Jul 20 22:42:23 2006 From: Li.Cheng at nicta.com.au (Li Cheng) Date: Fri, 21 Jul 2006 12:42:23 +1000 Subject: [SciPy-user] symbol not found: sparsetools.so Message-ID: <26D5456A-1737-40B9-A52F-69CFFDED8EC6@nicta.com.au> Python 2.4.2 (#1, Jul 6 2006, 17:43:12) [GCC 4.0.1 (Apple Computer, Inc. build 5250)] on darwin Type "help", "copyright", "credits" or "license" for more information.
>>> import scipy import linsolve.umfpack -> failed: Failure linking new module: /sw/lib/python2.4/site-packages/scipy/sparse/sparsetools.so: Symbol not found: ___dso_handle Referenced from: /sw/lib/python2.4/site-packages/scipy/sparse/sparsetools.so Expected in: dynamic lookup Is there a way to disable the "sparse" package since I don't need it? (I'm using svn revision 2115) thanks, Li From michael.sorich at gmail.com Thu Jul 20 23:56:12 2006 From: michael.sorich at gmail.com (Michael Sorich) Date: Fri, 21 Jul 2006 13:26:12 +0930 Subject: [SciPy-user] Would like to simplify my 3 where statements In-Reply-To: References: <16761e100607201717t1c610949xd24f579ef6f69013@mail.gmail.com> Message-ID: <16761e100607202056v538ffe55q114332001031c0c9@mail.gmail.com> If I run the script below I get [0,2]. Is this what you want? In any case the code you wrote looks fairly simple now. An alternative function to where is nonzero; however, this will not make any difference. from numpy import * i=3 j=4 A = array([[0, 1, 0, 1, 0], [1, 0, 1, 0, 1], [1, 0, 0, 0, 0], [0, 0, 1, 0, 1], [1, 1, 0, 1, 0]]) ans=where(A[:,j]==1)[0] #ans=[1,3] if A[i,j] == 1: ans -= 1 print ans >> [0 2] On 7/21/06, David Grant wrote: > On 7/20/06, Michael Sorich wrote: > > Can you give a specific example of how this would work? The code > > really is ugly and it is not clear to me what exactly it does. > > ok here's a quick example: > > import numpy > n=5 > i=3 > j=4 > A=numpy.random.randint(0,2,(n,n)) #make random graph > A=A-diag(diag(A)) #remove diagonal > A=triu(A)+transpose(triu(A)) #make symmetric > > Now say A is the adjacency matrix for a graph and I want to know which > nodes are neighbours of node j, but I want to exclude node i from > consideration. So if A is: > > array([[0, 1, 0, 1, 0], > [1, 0, 1, 0, 1], > [1, 0, 0, 0, 0], > [0, 0, 1, 0, 1], > [1, 1, 0, 1, 0]]) > > the neighbours are array([1]) for i=3, j=4.
> > One way to do it is to do: > > ans=where(A[:,j]==1)[0] > if A[i,j] == 1: > ans -= 1 > > which is probably faster than my method. I don't really care so much > about speed though. I'm just trying to figure out different ways of > doing this using numpy. > > Dave > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > From oliphant.travis at ieee.org Fri Jul 21 01:31:24 2006 From: oliphant.travis at ieee.org (Travis Oliphant) Date: Thu, 20 Jul 2006 23:31:24 -0600 Subject: [SciPy-user] symbol not found: sparsetools.so In-Reply-To: <26D5456A-1737-40B9-A52F-69CFFDED8EC6@nicta.com.au> References: <26D5456A-1737-40B9-A52F-69CFFDED8EC6@nicta.com.au> Message-ID: <44C066AC.6030408@ieee.org> Li Cheng wrote: > Python 2.4.2 (#1, Jul 6 2006, 17:43:12) > [GCC 4.0.1 (Apple Computer, Inc. build 5250)] on darwin > Type "help", "copyright", "credits" or "license" for more information. > >>> import scipy > import linsolve.umfpack -> failed: Failure linking new module: /sw/ > lib/python2.4/site-packages/scipy/sparse/sparsetools.so: Symbol not > found: ___dso_handle > Referenced from: /sw/lib/python2.4/site-packages/scipy/sparse/ > sparsetools.so > Expected in: dynamic lookup > > > Is there a way to disable "sparse" package since I don't need it ? > > Sure. 
Just comment out the config.add_subpackage('sparse') in Lib/setup.py -Travis From William.Hunter at mmhgroup.com Fri Jul 21 03:47:57 2006 From: William.Hunter at mmhgroup.com (William Hunter) Date: Fri, 21 Jul 2006 09:47:57 +0200 Subject: [SciPy-user] sparse & linsolve: AttributeError: rowind not found Message-ID: <98AA9E629A145D49A1A268FA6DBA70B4249082@mmihserver01.MMIH01.local> I'm having trouble ONLY with the last line (xsp3) of the following: from numpy import arange, ones from numpy.random import rand from scipy import linalg, sparse, linsolve Asp = sparse.lil_matrix((50000,50000)) Asp.setdiag(ones(50000)) Asp[20,100:250] = 10*rand(150) Asp[200:250,30] = 10*rand(50) b = arange(0,50000) xsp1 = linsolve.solve(Asp,b) xsp2 = linsolve.solve(Asp.tocsc(),b) xsp3 = linsolve.spsolve(Asp.tocsr(),b) I get this error for xsp3: AttributeError: rowind not found. However, if I type xsp4 = linsolve.spsolve(Asp.tocsr().T,b) it finds a solution, but obviously not the correct one. Also, the solution for xsp4 is faster by about 1.5 times on my computer than for xsp1 and xsp2. My questions are therefore: Why won't it solve the system if I convert from LIL to CSR, but it solves (incorrectly) if I use the transpose of Asp.tocsr()? I can only guess it has something to do with the structure of the CSR format. So... What am I doing wrong or am I not understanding something here? This is part of a (simple) example I want to put on Scipy_Tutorial wiki...
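A present-day reader can reproduce this construction at a smaller size. The sketch below is hedged: it assumes current SciPy, where the sparse solver lives at scipy.sparse.linalg.spsolve rather than the old scipy.linsolve, and it sets the off-diagonal entries in loops rather than by slice assignment:

```python
import numpy as np
from scipy.sparse import lil_matrix
from scipy.sparse.linalg import spsolve

rng = np.random.default_rng(0)
n = 2000

# Identity diagonal plus a few random off-diagonal entries, as in the post
Asp = lil_matrix((n, n))
Asp.setdiag(np.ones(n))
for k, v in zip(range(100, 250), 10 * rng.random(150)):
    Asp[20, k] = v      # a block of entries along row 20
for k, v in zip(range(200, 250), 10 * rng.random(50)):
    Asp[k, 30] = v      # a block of entries down column 30

b = np.arange(n, dtype=float)

# Convert LIL (good for incremental construction) to CSR before solving
U = spsolve(Asp.tocsr(), b)

# Sanity check: A @ U should reproduce b
assert np.allclose(Asp.tocsr() @ U, b)
```

The LIL-build-then-convert-then-spsolve pattern is the one the thread converges on; the rowind AttributeError reported above was a quirk of the SciPy code base of that era.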
Regards, William From Pierre.Barbier_de_Reuille at inria.fr Fri Jul 21 04:14:35 2006 From: Pierre.Barbier_de_Reuille at inria.fr (Pierre Barbier de Reuille) Date: Fri, 21 Jul 2006 09:14:35 +0100 Subject: [SciPy-user] Problem with weave In-Reply-To: <039501c6abff$5f51bbc0$0100000a@dsp.sun.ac.za> References: <039501c6abff$5f51bbc0$0100000a@dsp.sun.ac.za> Message-ID: <44C08CEB.60503@inria.fr> Albert Strasheim wrote: > Hey Pierre > > Some of the problems you mentioned were already reported here: > > http://projects.scipy.org/scipy/scipy/ticket/232 > > and fixed here: > > http://projects.scipy.org/scipy/scipy/changeset/2098 > > Please attach a patch against the latest SciPy source to a new ticket: > > http://projects.scipy.org/scipy/scipy/newticket > > so that we don't lose track of the other changes you mentioned. > > Regards, > > Albert > > Hi ! I tried to create a new ticket but I get the error: "TICKET_CREATE privileges are required to perform this operation" In any case, I have created a user called "PierreBdR", so what am I supposed to do? Thank you, Pierre >> -----Original Message----- >> From: scipy-user-bounces at scipy.org [mailto:scipy-user-bounces at scipy.org] >> On Behalf Of Pierre Barbier de Reuille >> Sent: 18 July 2006 21:35 >> To: SciPy Users List >> Subject: [SciPy-user] Problem with weave >> >> Hello, >> >> I noticed that weave.inline does not work with g++ ... the error was >> pretty simple: some name already in a namespace were "over qualified", >> so here is a patch. I also noticed a few errors in the example. >> Like in the binary search, it makes more sens to use bisect_left as it >> gives the right result (instead of displaying something inconsistant). >> >> Also added a (necessary for my version of g++) >> indexed_ref::operator=(const indexed_ref&) >> in scxx/sequence.h >> >> Well, there are a few other things. >> >> The patch is to be applied at the root of the weave library.
>> >> Pierre >> > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > From matthew.brett at gmail.com Fri Jul 21 05:40:42 2006 From: matthew.brett at gmail.com (Matthew Brett) Date: Fri, 21 Jul 2006 10:40:42 +0100 Subject: [SciPy-user] Ticket #14 commit? In-Reply-To: <20060720231915.GD1046@mentat.za.net> References: <87BD153A-7B37-4C91-8ECC-313C8567AA3A@mit.edu> <20060720231915.GD1046@mentat.za.net> Message-ID: <1e2af89e0607210240s80e0453v21b8fe2f212c5e@mail.gmail.com> Hi, > We should write unit tests to make sure that the current behaviour is > maintained, and that the new behaviour isn't broken in the future. > Minimal examples will help a great deal. The loadmat module may get a great deal of use from us matlab-switching types. I had considered making unit tests of reads from real matlab (4), 5, 6, 7, mat files of various complexities. I can offer lots of matlab versions and enthusiasm - is anyone out there interested in helping me with this?
Matthew From Pierre.Barbier_de_Reuille at inria.fr Fri Jul 21 07:01:06 2006 From: Pierre.Barbier_de_Reuille at inria.fr (Pierre Barbier de Reuille) Date: Fri, 21 Jul 2006 12:01:06 +0100 Subject: [SciPy-user] Problem with weave In-Reply-To: <44C08CEB.60503@inria.fr> References: <039501c6abff$5f51bbc0$0100000a@dsp.sun.ac.za> <44C08CEB.60503@inria.fr> Message-ID: <44C0B3F2.2050500@inria.fr> Pierre Barbier de Reuille wrote: > Albert Strasheim wrote: > >> Hey Pierre >> >> Some of the problems you mentioned were already reported here: >> >> http://projects.scipy.org/scipy/scipy/ticket/232 >> >> and fixed here: >> >> http://projects.scipy.org/scipy/scipy/changeset/2098 >> >> Please attach a patch against the latest SciPy source to a new ticket: >> >> http://projects.scipy.org/scipy/scipy/newticket >> >> so that we don't lose track of the other changes you mentioned. >> >> Regards, >> >> Albert >> >> >> > Hi ! > > I tried to create a new ticket but I get the error : > > "TICKET_CREATE privileges are required to perform this operation" > > In case, I just created a user called "PierreBdR", so how am I suppose > to do ? > > Thank you, > > Pierre Ok, I created a ticket, but I didn't find out how to attach a file :/ So I attach the patch here and if someone could tell me how to do so ... Thanks, Pierre -------------- next part -------------- An embedded and charset-unspecified text was scrubbed... Name: weave.patch URL: From davidgrant at gmail.com Fri Jul 21 10:47:05 2006 From: davidgrant at gmail.com (David Grant) Date: Fri, 21 Jul 2006 07:47:05 -0700 Subject: [SciPy-user] Would like to simplify my 3 where statements In-Reply-To: <16761e100607202056v538ffe55q114332001031c0c9@mail.gmail.com> References: <16761e100607201717t1c610949xd24f579ef6f69013@mail.gmail.com> <16761e100607202056v538ffe55q114332001031c0c9@mail.gmail.com> Message-ID: On 7/20/06, Michael Sorich wrote: > > If I run the script below I get [0,2]. Is this what you want? 
In any > case the code you wrote looks fairly simple now. An alternative > function to where is nonzero, however this will not make any > difference. > > from numpy import * > i=3 > j=4 > A = array([[0, 1, 0, 1, 0], > [1, 0, 1, 0, 1], > [1, 0, 0, 0, 0], > [0, 0, 1, 0, 1], > [1, 1, 0, 1, 0]]) > ans=where(A[:,j]==1)[0] #ans=[1,3] > if A[i,j] == 1: > ans -= 1 > print ans > >> [0 2] > > Oops, I screwed up. It shouldn't be ans -= 1. It should be something like ans.remove(i). I was getting confused because previously my algorithm was just finding the length of ans, then subtracting 1 if A[i,j] == 1. Now I want the actual indices. Sorry for the confusion.. I wrote this algorithm a while ago and have moved on to other code...although I am still interested in different ways to do this. So what's the quickest way to do a remove type operation on a numpy array? -- David Grant -------------- next part -------------- An HTML attachment was scrubbed... URL: From nvf at MIT.EDU Fri Jul 21 13:25:19 2006 From: nvf at MIT.EDU (Nick Fotopoulos) Date: Fri, 21 Jul 2006 13:25:19 -0400 Subject: [SciPy-user] Ticket #14 commit? Message-ID: On Friday, July 21 at 10:40AM, Matthew Brett wrote: > The loadmat module may get a great deal of use from us > matlab-switching types. I had considered making unit tests of reads > from real matlab (4), 5, 6, 7, mat files of various complexities. I > can offer lots of matlab versions and enthusiasm - is anyone out there > interested in helping me with this? Stefan van der Walt and I are working on getting the updated loadmat committed along with some unittests. I will contact you off-list to talk specifics. Thanks for volunteering.
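On the "remove"-type question above: for a NumPy index array the usual stand-in for `list.remove` is a boolean mask. A minimal sketch using the matrix from the quoted script:

```python
import numpy as np

A = np.array([[0, 1, 0, 1, 0],
              [1, 0, 1, 0, 1],
              [1, 0, 0, 0, 0],
              [0, 0, 1, 0, 1],
              [1, 1, 0, 1, 0]])
i, j = 3, 4
ans = np.where(A[:, j] == 1)[0]  # rows with a 1 in column j -> array([1, 3])
ans = ans[ans != i]              # mask out index i, the array analogue of ans.remove(i)
print(ans)                       # [1]
```

The mask builds a new array rather than modifying in place, which is the natural fit for NumPy's fixed-size arrays.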
Take care, Nick From nicolas.chopin at bristol.ac.uk Sat Jul 22 11:27:36 2006 From: nicolas.chopin at bristol.ac.uk (Nicolas Chopin) Date: Sat, 22 Jul 2006 16:27:36 +0100 Subject: [SciPy-user] troubles installing Scipy on SUSE 10.1 Message-ID: <44C243E8.6050804@bristol.ac.uk> Dear all, I have troubles installing Scipy on my SUSE 10.1 box (a standard i686 DELL laptop). I spent a few hours trying to find a solution on the internet, to no avail. I assumed at first that BLAS and LAPACK were properly installed, but it seems the corresponding SUSE packages are incomplete. (Maybe you can confirm this?). Then I tried to install everything from source, following the detailed tutorial of Steve Baum: http://pong.tamu.edu/tiki/tiki-view_blog_post.php?blogId=6&postId=97 but I get complicate error messages which I don't understand. For instance, when I try to import scipy: >>> from numpy import * >>> from scipy import * import linsolve.umfpack -> failed: liblapack.so.3: cannot open shared object file: No such file or directory Traceback (most recent call last): File "", line 1, in ? File "/usr/local/lib/python2.4/site-packages/scipy/linalg/__init__.py", line 8, in ? from basic import * File "/usr/local/lib/python2.4/site-packages/scipy/linalg/basic.py", line 17, in ? from flinalg import get_flinalg_funcs File "/usr/local/lib/python2.4/site-packages/scipy/linalg/flinalg.py", line 15, in ? from numpy.distutils.misc_util import PostponedException ImportError: cannot import name PostponedException So I understand that liblapack.so.3 is missing, but how do I get it? In the tutorial, there is only a reference to liblapack.a I had also a few error messages during the compilation of scipy, indicating python could not find various libraries, including BLAS and LAPACK. I did install BLAS and LAPACK however, since I followed the tutorial. 
I am not a developer, just a scientist who would like to get rid of Matlab; so I am quite confused by all these error messages, and I don't have a clue of what to do to fix the problem. On various forums, people mention some problems with too recent versions of gcc or g77. Could that be one explanation? Thank you very much in advance. If a solution is found, I'd be happy to post it on my web site, to help people who may run in similar difficulties. And thank you for working on Scipy, which seems a very interesting piece of software I die to try out! Cheers -- ________________________________________________________ Dr. Nicolas Chopin tel: +44 117 928 9127 School of Mathematics fax: +44 117 928 7999 University of Bristol Bristol BS8 1TW, UK http://www.stats.bris.ac.uk/~manxac/ From nwagner at iam.uni-stuttgart.de Sat Jul 22 11:38:24 2006 From: nwagner at iam.uni-stuttgart.de (Nils Wagner) Date: Sat, 22 Jul 2006 17:38:24 +0200 Subject: [SciPy-user] troubles installing Scipy on SUSE 10.1 In-Reply-To: <44C243E8.6050804@bristol.ac.uk> References: <44C243E8.6050804@bristol.ac.uk> Message-ID: On Sat, 22 Jul 2006 16:27:36 +0100 Nicolas Chopin wrote: > Dear all, > > I have troubles installing Scipy on my SUSE 10.1 box (a >standard i686 > DELL laptop). > I spent a few hours trying to find a solution on the >internet, to no avail. > > I assumed at first that BLAS and LAPACK were properly >installed, but it > seems the corresponding SUSE packages are incomplete. > (Maybe you can confirm this?). > Please deinstall the SUSE packages blas/lapack. They are incomplete. I suggest that you build your blas/lapack and (complete) ATLAS library. Another issue is the fortran compiler that comes with SUSE. I still recommend g77. Nils > Then I tried to install everything from source, >following the detailed > tutorial of Steve Baum: > http://pong.tamu.edu/tiki/tiki-view_blog_post.php?blogId=6&postId=97 > but I get complicate error messages which I don't >understand. 
> >For instance, when I try to import scipy: > >>>> from numpy import * >>>> from scipy import * > import linsolve.umfpack -> failed: liblapack.so.3: >cannot open shared > object file: No such file or directory > Traceback (most recent call last): > File "", line 1, in ? > File > "/usr/local/lib/python2.4/site-packages/scipy/linalg/__init__.py", >line > 8, in ? > from basic import * > File >"/usr/local/lib/python2.4/site-packages/scipy/linalg/basic.py", > line 17, in ? > from flinalg import get_flinalg_funcs > File >"/usr/local/lib/python2.4/site-packages/scipy/linalg/flinalg.py", > line 15, in ? > from numpy.distutils.misc_util import >PostponedException > ImportError: cannot import name PostponedException > > So I understand that liblapack.so.3 is missing, but how >do I get it? In > the tutorial, there is only a reference to liblapack.a > > I had also a few error messages during the compilation >of scipy, > indicating python could not find various libraries, >including BLAS and > LAPACK. I did install BLAS and LAPACK however, since I >followed the > tutorial. > > > I am not a developer, just a scientist who would like to >get rid of > Matlab; so I am quite confused by all these error >messages, and I don't > have a clue of what to do to fix the problem. > On various forums, people mention some problems with too >recent versions > of gcc or g77. Could that be one explanation? > > Thank you very much in advance. If a solution is found, >I'd be happy to > post it on my web site, to help people who may run in >similar difficulties. > And thank you for working on Scipy, which seems a very >interesting piece > of software I die to try out! > > Cheers > -- > ________________________________________________________ > Dr. 
Nicolas Chopin tel: +44 117 928 9127 > School of Mathematics fax: +44 117 928 7999 > University of Bristol > Bristol BS8 1TW, UK > http://www.stats.bris.ac.uk/~manxac/ > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user From ckkart at hoc.net Sat Jul 22 20:47:22 2006 From: ckkart at hoc.net (Christian Kristukat) Date: Sun, 23 Jul 2006 00:47:22 +0000 (UTC) Subject: [SciPy-user] troubles installing Scipy on SUSE 10.1 References: <44C243E8.6050804@bristol.ac.uk> Message-ID: Nicolas Chopin bristol.ac.uk> writes: > > Dear all, > > I have troubles installing Scipy on my SUSE 10.1 box (a standard i686 > DELL laptop). > I spent a few hours trying to find a solution on the internet, to no avail. > > I assumed at first that BLAS and LAPACK were properly installed, but it > seems the corresponding SUSE packages are incomplete. > (Maybe you can confirm this?). Yes, that's true. > Then I tried to install everything from source, following the detailed > tutorial of Steve Baum: > http://pong.tamu.edu/tiki/tiki-view_blog_post.php?blogId=6&postId=97 > but I get complicate error messages which I don't understand. > > For instance, when I try to import scipy: > > >>> from numpy import * > >>> from scipy import * > import linsolve.umfpack -> failed: liblapack.so.3: cannot open shared > object file: No such file or directory > Traceback (most recent call last): > File "", line 1, in ? > File > "/usr/local/lib/python2.4/site-packages/scipy/linalg/__init__.py", line > 8, in ? > from basic import * > File "/usr/local/lib/python2.4/site-packages/scipy/linalg/basic.py", > line 17, in ? > from flinalg import get_flinalg_funcs > File "/usr/local/lib/python2.4/site-packages/scipy/linalg/flinalg.py", > line 15, in ? 
> from numpy.distutils.misc_util import PostponedException > ImportError: cannot import name PostponedException For me that looks like some incompatibility between the versions of scipy and numpy you're trying to install. But that's only a guess. > So I understand that liblapack.so.3 is missing, but how do I get it? In > the tutorial, there is only a reference to liblapack.a Make sure that scipy really grabs your home-made libs. > I had also a few error messages during the compilation of scipy, > indicating python could not find various libraries, including BLAS and > LAPACK. I did install BLAS and LAPACK however, since I followed the > tutorial. That's normal, but at some point it should say that it will use lapack/blas libs at /usr/local/lib/libfblas.a /usr/local/lib/libflapack.a if you exactly followed the instructions on that wiki. > I am not a developer, just a scientist who would like to get rid of > Matlab; so I am quite confused by all these error messages, and I don't > have a clue of what to do to fix the problem. > On various forums, people mention some problems with too recent versions > of gcc or g77. Could that be one explanation? The gfortran on SuSE10.0 was indeed still buggy, but on SuSE10.1 (gfortran 4.1) up to now I did not encounter any problems when using gfortran instead of g77. If you like I can send you numpy/scipy P4 rpms built from last week's svn on SuSE10.1. Regards, Christian From ckkart at hoc.net Sat Jul 22 23:09:55 2006 From: ckkart at hoc.net (Christian Kristukat) Date: Sun, 23 Jul 2006 03:09:55 +0000 (UTC) Subject: [SciPy-user] cygwin build problems Message-ID: Hi, I installed the windows numpy1.0b binary and tried to build scipy from svn using cygwin as described on scipy.org (patch - with corrections - applied). There seems to be a problem with the include path, e.g. files like stdio.h/stdlib.h etc. are not found. Adding --include-dirs=/usr/include when running 'setup.py config' didn't help.
This is the error message: compile options: '-DNO_TIMER=1 -DUSE_VENDOR_BLAS=1 -c' gcc -mno-cygwin -O2 -Wall -Wstrict-prototypes -DNO_TIMER=1 -DUSE_VENDOR_BLAS=1 -c Lib\linsolve\SuperLU\SRC\zgstrf.c -o build\temp.win32-2.4\lib\linsolve\superlu\src\zgstrf.o In file included from Lib\linsolve\SuperLU\SRC\zsp_defs.h:28, from Lib\linsolve\SuperLU\SRC\zgstrf.c:22: Lib\linsolve\SuperLU\SRC\util.h:4:19: stdio.h: No such file or directory Lib\linsolve\SuperLU\SRC\util.h:5:20: stdlib.h: No such file or directory Lib\linsolve\SuperLU\SRC\util.h:6:20: string.h: No such file or directory Lib\linsolve\SuperLU\SRC\util.h:10:20: assert.h: No such file or directory In file included from Lib\linsolve\SuperLU\SRC\zsp_defs.h:28, from Lib\linsolve\SuperLU\SRC\zgstrf.c:22: Lib\linsolve\SuperLU\SRC\util.h:226: warning: parameter names (without types) in function declaration Lib\linsolve\SuperLU\SRC\zgstrf.c: In function `zgstrf': Lib\linsolve\SuperLU\SRC\zgstrf.c:185: warning: 'iperm_r' might be used uninitialized in this function error: Command "gcc -mno-cygwin -O2 -Wall -Wstrict-prototypes -DNO_TIMER=1 -DUSE_VENDOR_BLAS=1 -c Lib\linsolve\SuperLU\SRC\zgstrf.c -o build\temp.win32-2.4\lib\linsolve\superlu\src\zgstrf.o" failed with exit status 1 Christian From nicolas.chopin at bristol.ac.uk Sun Jul 23 17:08:38 2006 From: nicolas.chopin at bristol.ac.uk (Nicolas Chopin) Date: Sun, 23 Jul 2006 22:08:38 +0100 Subject: [SciPy-user] troubles installing Scipy on SUSE 10.1 Message-ID: <44C3E556.6040203@bris.ac.uk> Dear Christian and others, > If you like I can send you numpy/scipy P4 rpms built from last weeks svn on > SuSE10.1. I'd be very interested to have your rpm's; thanks a lot for proposing this. > > For me that looks like some incompatibility between the versions of scipy > and numpy you're trying to install. But that's only a guess. > > I followed the tutorials, and downloaded numpy and scipy from svn, so they should be compatible versions. 
On the other hand, I still have python-numeric in my rpm database, so they may be some conflict. I'd like to remove this package, but too many other packages depend on it! I have a similar problem with blas/lapack: I've removed the rpm's, but other rpms (such as R-base) require them... Is it possible to use two versions of the same library, one installed from a rpm, and another one from source? I don't understand why Suse packagers are not a bit more careful; what's the point of packaging incomplete versions of standard libraries??? Anyway, thanks again for your help and suggestions. Best wishes -- ________________________________________________________ Dr. Nicolas Chopin tel: +44 117 928 9127 School of Mathematics fax: +44 117 928 7999 University of Bristol Bristol BS8 1TW, UK http://www.stats.bris.ac.uk/~manxac/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From lists.steve at arachnedesign.net Sun Jul 23 18:01:10 2006 From: lists.steve at arachnedesign.net (Steve Lianoglou) Date: Sun, 23 Jul 2006 18:01:10 -0400 Subject: [SciPy-user] troubles installing Scipy on SUSE 10.1 In-Reply-To: <44C3E556.6040203@bris.ac.uk> References: <44C3E556.6040203@bris.ac.uk> Message-ID: Hi Nicolas, > Is it possible to use two versions of the same library, one > installed from a rpm, and another one from source? While I'm also a relative newbie to scipy/numpy, I don't know how much help I can offer overall, but I can at least tell you that this is possible. You can have two different version of the same library working on the same machine as long as you are careful as to where you put *your* stuff and where the system (rpm's, in this case) put their stuff. I'm not familiar with Suse ... are the system-installed blas/lapack in /usr, or /usr/local? Generally, /usr/local is meant to be left for you (the administrator) to put your own stuff into .. and many packages, when compiled from source, default to /usr/local/ as their prefix. 
The "official" system stuff usually stays in /usr ... they'll both end up w/ similar directory structure, but you'll know that /usr/local is yours. (By same structure, you'll see that as you install/compile your own stuff into /usr/local .. you'll also get directories like libexec, share, bin, lib ... which are probably already in /usr) You can also use other directories for your own stuff (like /opt, /opt/local ... or anything really, /usr/local/mine ...) Anyhow ... then when you want to compile numpy/scipy .. you can point it to your blas/lapack libraries (in /usr/local ... or /opt/local, or whatever) to use (as well as any other software you'll compile). In this way, you won't have to remove the system installed ones .. the R-package will be happy, and your scipy install (may) be happy, too. Hope that helped, -steve From ckkart at hoc.net Mon Jul 24 00:03:29 2006 From: ckkart at hoc.net (Christian Kristukat) Date: Mon, 24 Jul 2006 13:03:29 +0900 Subject: [SciPy-user] troubles installing Scipy on SUSE 10.1 In-Reply-To: <44C3E556.6040203@bris.ac.uk> References: <44C3E556.6040203@bris.ac.uk> Message-ID: <44C44691.2020400@hoc.net> Nicolas Chopin wrote: > Dear Christian and others, > >> If you like I can send you numpy/scipy P4 rpms built from last week's svn on >> SuSE10.1. > I'd be very interested to have your rpm's; thanks a lot for proposing this. I put them here: http://rosa.physik.tu-berlin.de/~semmel/suse10.1/ built with lapack/blas I just noticed that I disabled the following scipy sub-packages: cluster, maxentropy, montecarlo, signal, sparse, stats, ndimage though I don't remember why. I hope you won't need them.
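To make the two-prefix arrangement described above concrete for the numpy/scipy build itself: once hand-built libfblas.a/libflapack.a (the names from the Baum tutorial cited earlier in this thread) sit under /usr/local/lib, a site.cfg next to numpy's setup.py can point the build at them. A sketch only; the section and key names here are the ones numpy.distutils has conventionally read, and they have varied between versions, so check the site.cfg.example shipped with your numpy:

```ini
# site.cfg -- placed beside numpy's setup.py before building
[blas]
library_dirs = /usr/local/lib
blas_libs = fblas

[lapack]
library_dirs = /usr/local/lib
lapack_libs = flapack
```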
Christian From William.Hunter at mmhgroup.com Mon Jul 24 03:29:12 2006 From: William.Hunter at mmhgroup.com (William Hunter) Date: Mon, 24 Jul 2006 09:29:12 +0200 Subject: [SciPy-user] sparse & linsolve: AttributeError: rowind not found - RESEND Message-ID: My apologies, typo in my previous mail (thanks Stéfan): Lines xsp1 and xsp2 below: Was linsolve.solve, should be linsolve.spsolve. I'm having trouble ONLY with the last line (xsp3) of the following: from numpy import arange, ones from scipy import linalg, sparse Asp = sparse.lil_matrix((50000,50000)) Asp.setdiag(ones(50000)) Asp[20,100:250] = 10*rand(150) Asp[200:250,30] = 10*rand(50) b = arange(0,50000) xsp1 = linsolve.spsolve(Asp,b) xsp2 = linsolve.spsolve(Asp.tocsc(),b) xsp3 = linsolve.spsolve(Asp.tocsr(),b) I get this error for xsp3: AttributeError: rowind not found. However, if I type xsp4 = linsolve.spsolve(Asp.tocsr().T,b) it finds a solution, but obviously not the correct one. Also, the solution for xsp4 is faster by about 1,5 times on my computer than for xsp1 and xsp2. My questions are therefore: Why won't it solve the system if I convert from LIL to CSR, but it solves (incorrectly) if I use the transpose of Asp.tocsr()? I can only guess it has something to do with the structure of the CSR format. So... What am I doing wrong or am I not understanding something here? This is part of a (simple) example I want to put on Scipy_Tutorial wiki... Regards, William -------------- next part -------------- An HTML attachment was scrubbed...
URL: From cimrman3 at ntc.zcu.cz Mon Jul 24 04:23:38 2006 From: cimrman3 at ntc.zcu.cz (Robert Cimrman) Date: Mon, 24 Jul 2006 10:23:38 +0200 Subject: [SciPy-user] sparse & linsolve: AttributeError: rowind not found - RESEND In-Reply-To: References: Message-ID: <44C4838A.8000408@ntc.zcu.cz> William Hunter wrote: > I'm having trouble ONLY with the last line (xsp3) of the following: > > > > from numpy import arange, ones > > from scipy import linalg, sparse > > Asp = sparse.lil_matrix((50000,50000)) > > Asp.setdiag(ones(50000)) > > Asp[20,100:250] = 10*rand(150) > > Asp[200:250,30] = 10*rand(50) > > b = arange(0,50000) > > xsp1 = linsolve.spsolve(Asp,b) > > xsp2 = linsolve.spsolve(Asp.tocsc(),b) > > xsp3 = linsolve.spsolve(Asp.tocsr(),b) > > > > I get this error for xsp3: AttributeError: rowind not found. I cannot replicate this, with In [26]:scipy.__version__ Out[26]:'0.5.0.2111' In [28]:numpy.__version__ Out[28]:'0.9.9.2844' what sparse linear solver are you using? I have umfpack installed, and it seems ok... ...hmm, now I see that you must use the default sparse solver - there was a bug, which I fixed just now in SVN (rev. 2117) I have also modified a little bit your test script (the linsolve and rand imports were missing): from time import time from numpy import arange, ones, allclose from numpy.random import rand from scipy import linalg, sparse, linsolve Asp = sparse.lil_matrix((50000,50000)) Asp.setdiag(ones(50000)) Asp[20,100:250] = 10*rand(150) Asp[200:250,30] = 10*rand(50) b = arange(0,50000) linsolve.use_solver( {'useUmfpack' : False} ) t = time(); xsp1 = linsolve.spsolve(Asp,b); print time() - t t = time(); xsp2 = linsolve.spsolve(Asp.tocsc(),b); print time() - t t = time(); xsp3 = linsolve.spsolve(Asp.tocsr(),b); print time() - t print allclose( xsp1, xsp2 ) print allclose( xsp1, xsp3 ) linsolve.use_solver( {'useUmfpack' : True} ) # If umfpack is not installed, the default solver will still be used.
t = time(); xsp1 = linsolve.spsolve(Asp,b); print time() - t t = time(); xsp2 = linsolve.spsolve(Asp.tocsc(),b); print time() - t t = time(); xsp3 = linsolve.spsolve(Asp.tocsr(),b); print time() - t print allclose( xsp1, xsp2 ) print allclose( xsp1, xsp3 ) cheers, r. From massimo.sandal at unibo.it Mon Jul 24 12:16:18 2006 From: massimo.sandal at unibo.it (massimo sandal) Date: Mon, 24 Jul 2006 18:16:18 +0200 Subject: [SciPy-user] matplotlib/wxmpl/numpy conflict on windows xp: runtime error, C-API ? Message-ID: <44C4F252.9050709@unibo.it> Hi, I'm trying to install a matplotlib+wxmpl app I wrote on Linux on a Windows XP machine. It crashes starting with the following error: ----- C:\Documents and Settings\Principale\Desktop\Python\Hooke>python hooke.py Traceback (most recent call last): File "hooke.py", line 12, in ? import wxmpl File "C:\Programmi\Python23\Lib\site-packages\wxmpl.py", line 26, in ? from matplotlib.axes import PolarAxes, _process_plot_var_args File "C:\PROGRA~1\Python23\Lib\site-packages\matplotlib\axes.py", line 23, in ? from contour import ContourSet File "C:\PROGRA~1\Python23\Lib\site-packages\matplotlib\contour.py", line 18, in ? import _contour File "C:\PROGRA~1\Python23\Lib\site-packages\matplotlib\_contour.py", line 17, in ? from matplotlib._ns_cntr import * RuntimeError: module compiled against version 90709 of C-API but this version of numpy is 1000000 ----- I installed the latest sourceforge stable (not CVS/SVN) versions of matplotlib, numpy and wxmpl as of today. I'm quite puzzled, I guess I understand what the error is (different gcc versions?), but how can I get around it? Do I need to recompile numpy/matplotlib? m. -- Massimo Sandal University of Bologna Department of Biochemistry "G.Moruzzi" snail mail: Via Irnerio 48, 40126 Bologna, Italy email: massimo.sandal at unibo.it tel: +39-051-2094388 fax: +39-051-2094387 -------------- next part -------------- A non-text attachment was scrubbed... 
Name: massimo.sandal.vcf Type: text/x-vcard Size: 274 bytes Desc: not available URL: From bhendrix at enthought.com Mon Jul 24 16:31:25 2006 From: bhendrix at enthought.com (Bryce Hendrix) Date: Mon, 24 Jul 2006 15:31:25 -0500 Subject: [SciPy-user] Live CD for SciPy 2006 Conference Message-ID: <44C52E1D.6000208@enthought.com> I will be creating a Live CD for this years conference and I'm taking suggestions on which flavor and version of Linux to provide for x86 and PowerPC. Please email me off list with your suggestions and I'll make one for the top vote getter. I'll take votes until Friday, the only stipulation is that there has to be an already existing Live CD for the distribution and it be free (as in beer). Bryce From oliphant.travis at ieee.org Mon Jul 24 17:27:57 2006 From: oliphant.travis at ieee.org (Travis Oliphant) Date: Mon, 24 Jul 2006 15:27:57 -0600 Subject: [SciPy-user] matplotlib/wxmpl/numpy conflict on windows xp: runtime error, C-API ? In-Reply-To: <44C4F252.9050709@unibo.it> References: <44C4F252.9050709@unibo.it> Message-ID: <44C53B5D.9040703@ieee.org> massimo sandal wrote: > Hi, > I'm trying to install a matplotlib+wxmpl app I wrote on Linux on a > Windows XP machine. > > It crashes starting with the following error: > ----- > C:\Documents and Settings\Principale\Desktop\Python\Hooke>python hooke.py > Traceback (most recent call last): > File "hooke.py", line 12, in ? > import wxmpl > File "C:\Programmi\Python23\Lib\site-packages\wxmpl.py", line 26, in ? > from matplotlib.axes import PolarAxes, _process_plot_var_args > File "C:\PROGRA~1\Python23\Lib\site-packages\matplotlib\axes.py", > line 23, in > ? > from contour import ContourSet > File "C:\PROGRA~1\Python23\Lib\site-packages\matplotlib\contour.py", > line 18, > in ? > import _contour > File > "C:\PROGRA~1\Python23\Lib\site-packages\matplotlib\_contour.py", line 17, > in ? 
> from matplotlib._ns_cntr import * > RuntimeError: module compiled against version 90709 of C-API but this > version of numpy is 1000000 > ----- > > I installed the latest sourceforge stable (not CVS/SVN) versions of > matplotlib, numpy and wxmpl as of today. > > I'm quite puzzled, I guess I understand what the error is (different > gcc versions?), but how can I get around it? Do I need to recompile > numpy/matplotlib? Short answer: You need to recompile matplotlib. matplotlib is released separately from numpy. You have to re-compile matplotlib whenever a new version of NumPy comes out. -Travis From hurak at control.felk.cvut.cz Tue Jul 25 04:32:22 2006 From: hurak at control.felk.cvut.cz (Zdeněk Hurák) Date: Tue, 25 Jul 2006 10:32:22 +0200 Subject: [SciPy-user] Uninstalling Scipy/Numpy/Matplotlib Message-ID: I used to install these manually (python setup.py install), but now I would like to keep my Gentoo system better managed and therefore use only the Gentoo packages of Scipy/Numpy/Matplotlib. How can I get rid of the versions that I installed manually? Thanks, Zdenek Hurak From elcorto at gmx.net Tue Jul 25 06:40:46 2006 From: elcorto at gmx.net (Steve Schmerler) Date: Tue, 25 Jul 2006 12:40:46 +0200 Subject: [SciPy-user] Uninstalling Scipy/Numpy/Matplotlib In-Reply-To: References: Message-ID: <44C5F52E.6090004@gmx.net> Zdeněk Hurák wrote: > I used to install these manually (python setup.py install), but now I would > like to keep my Gentoo system better managed and therefore use only the > Gentoo packages of Scipy/Numpy/Matplotlib. How can I get rid of the > versions that I installed manually? > > Thanks, > Zdenek Hurak > Just rm -r your .../lib/python2.x/site-packages/scipy .../lib/python2.x/site-packages/numpy .../lib/python2.x/site-packages/matplotlib .../lib/python2.x/site-packages/pylab* and $HOME/.matplotlib Use locate (after updatedb) to check if you caught everything.
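Complementary to the locate check above, asking the interpreter directly shows which copy of a package wins the import; run this before and after the rm -r step, and the printed path should switch from the manual prefix to the distribution's tree:

```python
# Print the version and the file numpy is imported from; the path tells
# you which of the two installed copies the interpreter actually uses.
import numpy
print(numpy.__version__, numpy.__file__)
```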
cheers, steve -- Random number generation is the art of producing pure gibberish as quickly as possible. From kbreger at science.uva.nl Tue Jul 25 06:56:15 2006 From: kbreger at science.uva.nl (Kfir Breger) Date: Tue, 25 Jul 2006 12:56:15 +0200 (CEST) Subject: [SciPy-user] optimize.leastsq Message-ID: <50396.145.18.18.141.1153824975.squirrel@webmail.science.uva.nl> I am working on a camera calibration tool in python. In order to achieve that I need to use non-linear least-squares fitting. I have a series of 3d points M and their counterpart 2d points m. I also have a first estimation of the projection/rotation matrix H, so I'm trying to solve MH = m. My fit function calculates m_project (the value of the 2d point as projected by H) and then returns the square of m - m_project; it is clear to see that minimizing this function will give the best H. The problem is I can't get it to work. I get a value error about something being nested too deep. Anyone got an idea? Is it at all possible to use optimize.leastsq in this fashion? Greetings Kfir Breger From williamhunter at ananzi.co.za Tue Jul 25 10:00:30 2006 From: williamhunter at ananzi.co.za (William Hunter) Date: Tue, 25 Jul 2006 16:00:30 +0200 Subject: [SciPy-user] sparse and linsolve: rowind not found Message-ID: Robert; Thanks for your reply, I'll have to get the latest SVN version then. Apparently a bit of a pain if you're using xp like me, I hear. We've been having some trouble at work with our mail (they've "upgraded" our Windows Server), so I haven't been getting my mails from the ML... If anybody has a GMail invitation, I'll be much obliged :-) As an aside: I'm using Enthon 2.4-1.0.0.beta4 for now: scipy.__version__ = 0.5.0.2033 numpy.__version__ = 0.9.9.2706 Regards, WH Sign up for Ananzi Mail - and you and a partner stand the chance to win a breakaway for two at the stunning boutique hotel, Riviera on Vaal Hotel & Country Club!
http://www.swiftsms.co.za/swiftT/track.asp?e=*em*&cid=113&u=8&tid=1115 From williamhunter at ananzi.co.za Tue Jul 25 10:01:55 2006 From: williamhunter at ananzi.co.za (William Hunter) Date: Tue, 25 Jul 2006 16:01:55 +0200 Subject: [SciPy-user] sparse: slicing/indexing problems Message-ID: Dear Scipy-Users; I'm struggling (again/still) with sparse matrices, but now it has to do with slicing: 1) The following (inserting a sub-matrix into a bigger matrix at arbitrary positions and then updating it) works very well for dense arrays: from numpy import allclose, ix_, ones, zeros Ad = zeros((5,5),float) Adsub = 5*ones((3,3),float) indeks = [3,0,4] Ad[ix_(indeks,indeks)] += Adsub The last line above will typically be inside a loop. 2) Doing this with sparse matrices is not 'possible', although I'm sure it can be done, but I've tried everything I know (slicing as per normal, over one axis only, etc.) This is the best I can do, but I don't like it: from scipy import sparse As = sparse.lil_matrix((5,5)) col = 0 for posr in indeks: row = 0 for posc in indeks: As[posr,posc] += Adsub[row,col] row += 1 col += 1 # We want a True here... print allclose(Ad,As.todense()) Any other suggestions, i.e., how do I get rid of the for loops? I'm using Enthon 2.4-1.0.0.beta4 (but my brand new Linux box is coming!!!) Regards, WH
From fredantispam at free.fr Tue Jul 25 12:32:51 2006 From: fredantispam at free.fr (fred) Date: Tue, 25 Jul 2006 18:32:51 +0200 Subject: [SciPy-user] scipy 0.4.9 & typecode Message-ID: <44C647B3.6060401@free.fr> Hi all, Since I have upgraded scipy from 0.3.2 to 0.4.9, typecode(), used as der = zeros(x.shape,x.typecode()) does not work anymore: In [6]: x=array([1,2,3,1,2,4,5]) In [7]: der = zeros(x.shape,x.typecode()) --------------------------------------------------------------------------- exceptions.AttributeError Traceback (most recent call last) AttributeError: 'numpy.ndarray' object has no attribute 'typecode' Any suggestion? Cheers, -- Fred. From robert.kern at gmail.com Tue Jul 25 12:42:58 2006 From: robert.kern at gmail.com (Robert Kern) Date: Tue, 25 Jul 2006 11:42:58 -0500 Subject: [SciPy-user] scipy 0.4.9 & typecode In-Reply-To: <44C647B3.6060401@free.fr> References: <44C647B3.6060401@free.fr> Message-ID: <44C64A12.3040504@gmail.com> fred wrote: > Hi all, > > Since I have upgraded scipy from 0.3.2 to 0.4.9, typecode(), used as > > der = zeros(x.shape,x.typecode()) > > does not work anymore : Please read the file COMPATIBILITY that comes with numpy. While typecodes are still accepted by most functions that used to take them, they have been replaced by a much more flexible mechanism. Consequently, the methods that return typecodes have been deprecated. You want zeros(x.shape, x.dtype) -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From travis at enthought.com Tue Jul 25 12:43:45 2006 From: travis at enthought.com (Travis N.
Vaught) Date: Tue, 25 Jul 2006 11:43:45 -0500 Subject: [SciPy-user] scipy 0.4.9 & typecode In-Reply-To: <44C647B3.6060401@free.fr> References: <44C647B3.6060401@free.fr> Message-ID: <44C64A41.6040302@enthought.com> fred wrote: > Hi all, > > Since I have upgraded scipy from 0.3.2 to 0.4.9, typecode(), used as > > der = zeros(x.shape,x.typecode()) > > does not work anymore : > > In [6]: x=array([1,2,3,1,2,4,5]) > > In [7]: der = zeros(x.shape,x.typecode()) > --------------------------------------------------------------------------- > exceptions.AttributeError Traceback (most > recent call last) > > > > AttributeError: 'numpy.ndarray' object has no attribute 'typecode' > > Any suggestion ? > > There are notes about the changes (which are Numeric-to-numpy in this case) here: http://www.scipy.org/Converting_from_Numeric > Cheers, > > -- ........................ Travis N. Vaught CEO Enthought, Inc. http://www.enthought.com ........................ From strawman at astraw.com Tue Jul 25 12:54:01 2006 From: strawman at astraw.com (Andrew Straw) Date: Tue, 25 Jul 2006 09:54:01 -0700 Subject: [SciPy-user] optimize.leastsq In-Reply-To: <50396.145.18.18.141.1153824975.squirrel@webmail.science.uva.nl> References: <50396.145.18.18.141.1153824975.squirrel@webmail.science.uva.nl> Message-ID: <44C64CA9.6020900@astraw.com> Dear Kfir, I attach a file which implements the DLT method of calibration. (See http://www.miba.auc.dk/~lasse/publications/HTML/pilot/cam_cal/camcal.html ) numpy.linalg.lstsq() is used in a key part of this code. There are some tools that we use internally in this, but I'm pretty sure you can figure out what's going on. If you get around to implementing anything more fancy, I'd be very interested to hear about it. Kfir Breger wrote: > I am working on a camera calibration tool in python. In order to achive > taht i need to use a non linear least square fitting. I have a series of > 3d points M and their counterpart 2d points m.
I also have a first > estimation of the projection/rotation matrix H. so im trying to solve MH > = m. my fit fnction calculatse m_project (the value of the 2d point as > projected by H) then returns the square of m = m_project, it is clear to > see that minimizing this function will give the best H. > The problem is i cant get it to work. I get a value error about > something being nested to deep. any1 got an idea? is it at all posible > to use optimize.leastsq in this fashion? > > Greatings > Kfir Breger > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > -------------- next part -------------- A non-text attachment was scrubbed... Name: find_cam_matrix.py Type: text/x-python Size: 3440 bytes Desc: not available URL: From nicolas.chopin at bristol.ac.uk Tue Jul 25 15:38:36 2006 From: nicolas.chopin at bristol.ac.uk (Nicolas Chopin) Date: Tue, 25 Jul 2006 20:38:36 +0100 Subject: [SciPy-user] troubles installing Scipy on SUSE 10.1 Message-ID: <44C6733C.3020001@bris.ac.uk> Dear Scipy users, I am sorry, but things are getting weirder and weirder... When I install Christian's rpm's, I get this: > python > Python 2.4.2 (#1, May 2 2006, 08:13:46) > [GCC 4.1.0 (SUSE Linux)] on linux2 > Type "help", "copyright", "credits" or "license" for more information. > >>> from numpy import * > >>> from scipy import * > import linsolve.umfpack -> failed: cannot import name ArrayType > Traceback (most recent call last): > File "", line 1, in ? > File > "/usr/local/lib/python2.4/site-packages/scipy/ndimage/__init__.py", > line 34, in ? > from filters import * > File > "/usr/local/lib/python2.4/site-packages/scipy/ndimage/filters.py", > line 34, in ? 
> import _nd_image > RuntimeError: module compiled against version 90709 of C-API but this > version of numpy is 1000000 Which suggests some incompatibility between numpy and scipy; but why would it work for Christian and not for me? Also, following the explanations of Steve re: libraries (thanks a lot btw), I now realise that when I did everything manually, again following the tutorial on scipy web site, the compilation of scipy seemed to run smoothly, and blas/lapack libraries were found at the correct place, so that was not the issue: > blas_src_info: > FOUND: > sources = ['./blas/caxpy.f', './blas/csscal.f', './blas/dnrm2.f', > './b ... > ... rk.f', './blas/zgemm.f', './blas/zsymm.f', './blas/ztrsm.f'] > language = f77 etc. So the only error messages I got (again, before trying to use Christian's rpms), was when I tried to import scipy: > import linsolve.umfpack -> failed: liblapack.so.3: cannot open shared > > object file: No such file or directory > > Traceback (most recent call last): > > File "", line 1, in ? > > File > > "/usr/local/lib/python2.4/site-packages/scipy/linalg/__init__.py", line > > 8, in ? > > from basic import * > > File "/usr/local/lib/python2.4/site-packages/scipy/linalg/basic.py", > > line 17, in ? > > from flinalg import get_flinalg_funcs > > File "/usr/local/lib/python2.4/site-packages/scipy/linalg/flinalg.py", > > line 15, in ? > > from numpy.distutils.misc_util import PostponedException > > ImportError: cannot import name PostponedException > What's wrong with my computer??? Did I hit some weird bug that affects only a few people, or am I doing something stupid??? My only explanation I could find is that I've still python-numeric package, so they may be some conflict. But that does not make sense, because when I import numpy, it's the correct one that python use (i.e. in /usr/lib/local/, installed manually be me, or by Christian rpm). Anyway, thanks for your help. 
If this turns out to be too difficult, I'll consider switching to Kubuntu, for which the installation of scipy seems to be easy. Best wishes -- ________________________________________________________ Dr. Nicolas Chopin tel: +44 117 928 9127 School of Mathematics fax: +44 117 928 7999 University of Bristol Bristol BS8 1TW, UK http://www.stats.bris.ac.uk/~manxac/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From oliphant at ee.byu.edu Tue Jul 25 15:57:15 2006 From: oliphant at ee.byu.edu (Travis Oliphant) Date: Tue, 25 Jul 2006 13:57:15 -0600 Subject: [SciPy-user] troubles installing Scipy on SUSE 10.1 In-Reply-To: <44C6733C.3020001@bris.ac.uk> References: <44C6733C.3020001@bris.ac.uk> Message-ID: <44C6779B.50505@ee.byu.edu> Nicolas Chopin wrote: > Dear Scipy users, > I am sorry, but things are getting weirder and weirder... > > When I install Christian's rpm's, I get this: > >> python >> Python 2.4.2 (#1, May 2 2006, 08:13:46) >> [GCC 4.1.0 (SUSE Linux)] on linux2 >> Type "help", "copyright", "credits" or "license" for more information. >> >>> from numpy import * >> >>> from scipy import * >> import linsolve.umfpack -> failed: cannot import name ArrayType >> Traceback (most recent call last): >> File "", line 1, in ? >> File >> "/usr/local/lib/python2.4/site-packages/scipy/ndimage/__init__.py", >> line 34, in ? >> from filters import * >> File >> "/usr/local/lib/python2.4/site-packages/scipy/ndimage/filters.py", >> line 34, in ? >> import _nd_image >> RuntimeError: module compiled against version 90709 of C-API but this >> version of numpy is 1000000 > > > > Which suggests some incompatibility between numpy and scipy; but why > would it work for Christian and not for me? For binary installs, the version number of scipy and numpy have to be compatible. Right now, there is not a binary-released version of scipy that is compatible with numpy 1.0 beta. This should change in a few days. 
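When chasing a C-API mismatch like the RuntimeError quoted above, it often helps to first confirm which copy of numpy the interpreter is actually importing, since stale installs in a second site-packages directory are a common cause. A generic diagnostic sketch (not from the original thread; it assumes a modern Python with importlib, and works for any package name):

```python
import importlib.util

# Locate the module the interpreter would import, without importing it.
# Substitute any package suspected of having stale copies for "numpy".
spec = importlib.util.find_spec("numpy")
if spec is None:
    print("numpy is not importable from this interpreter")
else:
    # spec.origin is the file that would actually be loaded; if it points
    # into an unexpected site-packages directory, that install wins.
    print("numpy would load from:", spec.origin)
```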
> So the only error messages I got (again, before trying to use > Christian's rpms), was when I tried to import scipy: > >>import linsolve.umfpack -> failed: liblapack.so.3: cannot open shared >>> object file: No such file or directory >>> Traceback (most recent call last): >>> File "", line 1, in ? >>> File >>> "/usr/local/lib/python2.4/site-packages/scipy/linalg/__init__.py", line >>> 8, in ? >>> from basic import * >>> File "/usr/local/lib/python2.4/site-packages/scipy/linalg/basic.py", >>> line 17, in ? >>> from flinalg import get_flinalg_funcs >>> File "/usr/local/lib/python2.4/site-packages/scipy/linalg/flinalg.py", >>> line 15, in ? >>> from numpy.distutils.misc_util import PostponedException >>> ImportError: cannot import name PostponedException >> >> You've got an older version of scipy. This command is not in current SVN version of scipy. So, it looks like the problem is still that you are not installing a version of scipy that is compatible with NumPy 1.0 beta (right now, you have to check out the SVN version of scipy to get one that is). -Travis From cookedm at physics.mcmaster.ca Tue Jul 25 16:43:42 2006 From: cookedm at physics.mcmaster.ca (David M. Cooke) Date: Tue, 25 Jul 2006 16:43:42 -0400 Subject: [SciPy-user] optimize.leastsq In-Reply-To: <50396.145.18.18.141.1153824975.squirrel@webmail.science.uva.nl> References: <50396.145.18.18.141.1153824975.squirrel@webmail.science.uva.nl> Message-ID: <20060725164342.53eb1db3@arbutus.physics.mcmaster.ca> On Tue, 25 Jul 2006 12:56:15 +0200 (CEST) "Kfir Breger" wrote: > I am working on a camera calibration tool in python. In order to achive > taht i need to use a non linear least square fitting. I have a series of > 3d points M and their counterpart 2d points m. I also have a first > estimation of the projection/rotation matrix H. so im trying to solve MH > = m. 
my fit fnction calculatse m_project (the value of the 2d point as > projected by H) then returns the square of m = m_project, it is clear to > see that minimizing this function will give the best H. > The problem is i cant get it to work. I get a value error about > something being nested to deep. any1 got an idea? is it at all posible > to use optimize.leastsq in this fashion? Can you post the exact error message, and the code you're using? -- |>|\/|< /--------------------------------------------------------------------------\ |David M. Cooke http://arbutus.physics.mcmaster.ca/dmc/ |cookedm at physics.mcmaster.ca From fredantispam at free.fr Tue Jul 25 16:10:56 2006 From: fredantispam at free.fr (fred) Date: Tue, 25 Jul 2006 22:10:56 +0200 Subject: [SciPy-user] scipy 0.4.9 & typecode In-Reply-To: <44C64A41.6040302@enthought.com> References: <44C647B3.6060401@free.fr> <44C64A41.6040302@enthought.com> Message-ID: <44C67AD0.6080607@free.fr> Travis N. Vaught wrote: > There are notes about the changes (which are Numeric-to-numpy in this > case) here: > > http://www.scipy.org/Converting_from_Numeric Sorry. I had a look at the scipy user list, but not at the webpage. Thanks all. -- Fred. From paustin at eos.ubc.ca Tue Jul 25 17:57:45 2006 From: paustin at eos.ubc.ca (Philip Austin) Date: Tue, 25 Jul 2006 14:57:45 -0700 Subject: [SciPy-user] wanted: working site.cfg for fftw3 In-Reply-To: <44C6779B.50505@ee.byu.edu> References: <44C6733C.3020001@bris.ac.uk> <44C6779B.50505@ee.byu.edu> Message-ID: <17606.37849.806246.477257@eos.ubc.ca> I'd appreciate a hand with the site.cfg syntax needed to get numpy/scipy to recognize fftw3. I'm working with the current scipy/numpy snapshots on Fedora Core 4 (PIII, Python 2.4.3).
fftw-3.1.2 is built with: ./configure --prefix=/home/phil/lib/fftw3_thrush make make install which creates the libraries ~/install/numpy-trunk phil at thrush% ls /home/phil/lib/fftw3_thrush/lib libfftw3.a libfftw3.la pkgconfig I've edited /home/phil/install/numpy-trunk/numpy/distutils/site.cfg to add an entry under [fftw] (here an excerpt of the full site.cfg I've uploaded to: http://clouds.eos.ubc.ca/~phil/numpy/site_cfg.txt): __________________________________ [atlas] library_dirs = /home/phil/install/ATLAS/lib/Linux_P4SSE2 ...snip... [fftw] library_dirs = /home/phil/lib/fftw3_thrush/lib fftw_libs = fftw3 fftw_opt_libs = fftw_threaded, rfftw_threaded ____________________________________ During the install, Atlas/blas/lapack are all found, but there's no sign that numpy knows about fftw3 in the output to python setup.py install, which is in the file http://clouds.eos.ubc.ca/~phil/numpy/numpy_install.txt If I start python and look at numpy.show_config() I get __________________________________ ~ phil at thrush% ~/usrThrushFc4/bin/python Python 2.4.3 (#1, Jul 25 2006, 10:08:38) [GCC 4.0.2 20051125 (Red Hat 4.0.2-8)] on linux2 Type "help", "copyright", "credits" or "license" for more information. 
>>> import numpy >>> numpy.show_config() atlas_threads_info: NOT AVAILABLE blas_opt_info: libraries = ['f77blas', 'cblas', 'atlas'] library_dirs = ['/home/phil/install/ATLAS/lib/Linux_P4SSE2'] define_macros = [('ATLAS_INFO', '"\\"3.6.0\\""')] language = c atlas_blas_threads_info: NOT AVAILABLE lapack_opt_info: libraries = ['lapack', 'f77blas', 'cblas', 'atlas'] library_dirs = ['/home/phil/install/ATLAS/lib/Linux_P4SSE2'] define_macros = [('ATLAS_INFO', '"\\"3.6.0\\""')] language = f77 atlas_info: libraries = ['lapack', 'f77blas', 'cblas', 'atlas'] library_dirs = ['/home/phil/install/ATLAS/lib/Linux_P4SSE2'] language = f77 Lapack_Mkl_info: NOT AVAILABLE blas_mkl_info: NOT AVAILABLE atlas_blas_info: libraries = ['f77blas', 'cblas', 'atlas'] library_dirs = ['/home/phil/install/ATLAS/lib/Linux_P4SSE2'] language = c mkl_info: NOT AVAILABLE ___________________________________________ and when I build scipy, not suprisingly, I get (http://clouds.eos.ubc.ca/~phil/numpy/scipy_install.txt) fft_opt_info: fftw3_info: fftw3 not found NOT AVAILABLE and from python: >>> scipy.show_config() lapack_opt_info: libraries = ['lapack', 'f77blas', 'cblas', 'atlas'] library_dirs = ['/home/phil/install/ATLAS/lib/Linux_P4SSE2'] define_macros = [('ATLAS_INFO', '"\\"3.6.0\\""')] language = f77 [snip] atlas_blas_info: libraries = ['f77blas', 'cblas', 'atlas'] library_dirs = ['/home/phil/install/ATLAS/lib/Linux_P4SSE2'] language = c [snip] fftw3_info: NOT AVAILABLE blas_mkl_info: NOT AVAILABLE blas_opt_info: libraries = ['f77blas', 'cblas', 'atlas'] library_dirs = ['/home/phil/install/ATLAS/lib/Linux_P4SSE2'] define_macros = [('ATLAS_INFO', '"\\"3.6.0\\""')] language = c atlas_info: libraries = ['lapack', 'f77blas', 'cblas', 'atlas'] library_dirs = ['/home/phil/install/ATLAS/lib/Linux_P4SSE2'] language = f77 Has anyone succeeded in getting fftw3 working? 
Thanks, Phil From schofield at ftw.at Tue Jul 25 19:00:06 2006 From: schofield at ftw.at (Ed Schofield) Date: Wed, 26 Jul 2006 01:00:06 +0200 Subject: [SciPy-user] sparse: slicing/indexing problems In-Reply-To: References: Message-ID: On 25/07/2006, at 4:01 PM, William Hunter wrote: > Dear Scipy-Users; > > I'm struggling (again/still) with sparse matrices, but now > it has to do > with slicing: > > 1) The following (inserting a sub-matrix into a bigger > matrix at arbitrary > positions and then updating it) works very well for dense > arrays: > > > from numpy allclose, import ix_, ones, zeros > Ad = zeros((5,5),float) > Adsub = 5*ones((3,3),float) > indeks = [3,0,4] > Ad[ix_(indeks,indeks)] += Adsub > > > The last line above will typically be inside a loop. > > 2) Doing this with sparse matrices is not 'possible', > although I'm sure it > can be done, but I've tried everything I know (slicing as > per normal, > over one axis only, etc.) This is the best I can do, but I > don't like it: > > from scipy import sparse > As = sparse.lil_matrix(5,5) > col = 0 > for posr in indeks: > row = 0 > for posc in indeks: > As[posr,posc] += Adsub[row,col] > row += 1 > col += 1 > # We want a True here... > print allclose(Ad,As.todense()) > > > Any other suggestions, i.e., how do I get rid of the > 's? I don't think you could gain much here by eliminating the loops. The reason is that += will need to check each element anyway and take one of four different code paths, depending on whether it's a zero being converted to a nonzero, or a nonzero to a zero, etc. I'd probably write it like this: As = sparse.lil_matrix((5,5)) for row, posr in enumerate(indeks): for col, posc in enumerate(indeks): As[posr,posc] += Adsub[row,col] This seems to work fine (I think your row and column increments were swapped). 
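Ed's enumerate-based loop can be packaged as a small self-contained check. This is a sketch using only numpy and scipy.sparse, with `Adsub` and `indeks` reusing the names from William's original post:

```python
import numpy as np
from scipy import sparse

Ad = np.zeros((5, 5), float)        # dense reference result
Adsub = 5 * np.ones((3, 3), float)  # sub-matrix to scatter-add
indeks = [3, 0, 4]                  # arbitrary row/column positions

# Dense version: a single fancy-indexed in-place add.
Ad[np.ix_(indeks, indeks)] += Adsub

# Sparse version: element-wise loop, with enumerate() supplying the
# sub-matrix indices so the row and column counters cannot get swapped.
As = sparse.lil_matrix((5, 5))
for row, posr in enumerate(indeks):
    for col, posc in enumerate(indeks):
        As[posr, posc] += Adsub[row, col]

print(np.allclose(Ad, As.todense()))  # -> True
```

The lil (list-of-lists) format is the right choice here precisely because it supports cheap element-wise assignment; for the subsequent solve one would typically convert with `As.tocsr()`.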
Using a single for loop and fancy indexing should work too (saving a line of code), but doesn't at the moment, since support for fancy indexing with in-place operations is incomplete (ticket #226). I'll add this soon, but probably only after the 0.5.0 release. > I'm using Enthon 2.4-1.0.0.beta4 (but my brand new Linux > box is coming!!!) Oooh, goody! -- Ed From ckkart at hoc.net Tue Jul 25 20:43:16 2006 From: ckkart at hoc.net (Christian Kristukat) Date: Wed, 26 Jul 2006 09:43:16 +0900 Subject: [SciPy-user] troubles installing Scipy on SUSE 10.1 In-Reply-To: <44C6779B.50505@ee.byu.edu> References: <44C6733C.3020001@bris.ac.uk> <44C6779B.50505@ee.byu.edu> Message-ID: <44C6BAA4.5080008@hoc.net> Travis Oliphant wrote: > Nicolas Chopin wrote: > >> Dear Scipy users, >> I am sorry, but things are getting weirder and weirder... >> >> When I install Christian's rpm's, I get this: >> >>> python >>> Python 2.4.2 (#1, May 2 2006, 08:13:46) >>> [GCC 4.1.0 (SUSE Linux)] on linux2 >>> Type "help", "copyright", "credits" or "license" for more information. >>>>>> from numpy import * >>>>>> from scipy import * >>> import linsolve.umfpack -> failed: cannot import name ArrayType >>> Traceback (most recent call last): >>> File "", line 1, in ? >>> File >>> "/usr/local/lib/python2.4/site-packages/scipy/ndimage/__init__.py", >>> line 34, in ? >>> from filters import * >>> File >>> "/usr/local/lib/python2.4/site-packages/scipy/ndimage/filters.py", >>> line 34, in ? >>> import _nd_image >>> RuntimeError: module compiled against version 90709 of C-API but this >>> version of numpy is 1000000 >> >> >> Which suggests some incompatibility between numpy and scipy; but why >> would it work for Christian and not for me? > > > For binary installs, the version number of scipy and numpy have to be > compatible. Right now, there is not a binary-released version of scipy > that is compatible with numpy 1.0 beta. This should change in a few days. But as Nicolas said, those rpms are working here. 
I built them from last week's SVN. I guess you still had some old installations of numpy/scipy lying around. To be absolutely sure that you do not have multiple versions installed, uninstall the rpms and remove any scipy/numpy directory from /usr/lib/python/2.4/site-packages and /usr/local/lib/python2.X/site-packages. Then try the rpms again. Christian From ckkart at hoc.net Tue Jul 25 20:52:06 2006 From: ckkart at hoc.net (Christian Kristukat) Date: Wed, 26 Jul 2006 09:52:06 +0900 Subject: [SciPy-user] odr Message-ID: <44C6BCB6.9060405@hoc.net> Hi, I'd like to vote for moving the odr module from the sandbox to the optimize package. I've been testing it for the last two weeks and I'm really pleased. The only thing to do is to update odrpack.py to current numpy, which is done by the patch I sent last week. What do you think? Christian From kbreger at science.uva.nl Wed Jul 26 04:24:30 2006 From: kbreger at science.uva.nl (Kfir Breger) Date: Wed, 26 Jul 2006 10:24:30 +0200 (CEST) Subject: [SciPy-user] troubles installing Scipy on SUSE 10.1 In-Reply-To: <44C6BAA4.5080008@hoc.net> References: <44C6733C.3020001@bris.ac.uk> <44C6779B.50505@ee.byu.edu> <44C6BAA4.5080008@hoc.net> Message-ID: <52016.132.229.87.52.1153902270.squirrel@webmail.science.uva.nl> I might shed some light here: I got the same error. I was trying to build scipy from source on my OS X 10.4 and I got this error. Turns out that I was using gcc 4.0.1 to build. Now numpy can be built with 4.0.1, but at least on the Mac scipy can't. Maybe the rpm was made with gcc 4.0.1. I would suggest you get the source from the site and try compiling it after running gcc_select 3.3. Give it a spin, it might work. Cheers Kfir > Travis Oliphant wrote: >> Nicolas Chopin wrote: >>> Dear Scipy users, >>> I am sorry, but things are getting weirder and weirder...
>>> >>> When I install Christian's rpm's, I get this: >>> >>>> python >>>> Python 2.4.2 (#1, May 2 2006, 08:13:46) >>>> [GCC 4.1.0 (SUSE Linux)] on linux2 >>>> Type "help", "copyright", "credits" or "license" for more information. >>>>>>> from numpy import * >>>>>>> from scipy import * >>>> import linsolve.umfpack -> failed: cannot import name ArrayType >>>> Traceback (most recent call last): >>>> File "", line 1, in ? >>>> File >>>> "/usr/local/lib/python2.4/site-packages/scipy/ndimage/__init__.py", >>>> line 34, in ? >>>> from filters import * >>>> File >>>> "/usr/local/lib/python2.4/site-packages/scipy/ndimage/filters.py", >>>> line 34, in ? >>>> import _nd_image >>>> RuntimeError: module compiled against version 90709 of C-API but this >>>> version of numpy is 1000000 >>> >>> >>> Which suggests some incompatibility between numpy and scipy; but why >>> would it work for Christian and not for me? >> >> >> For binary installs, the version number of scipy and numpy have to be >> compatible. Right now, there is not a binary-released version of scipy >> that is compatible with numpy 1.0 beta. This should change in a few >> days. > > But as Nicolas said, those rpms are working here. I built them from last > weeks > svn. I guess you had still some old installations of numpy/scipy lying > around. > To be absolutely sure that you do not have installed multiple versions, > deinstall the rpms and remove any scipy/numpy directory from > /usr/lib/python/2.4/site-packages and > /usr/local/lib/python2.X/site-packages. > Then try the rpms again. 
> > Christian > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > From willemjagter at gmail.com Wed Jul 26 04:28:44 2006 From: willemjagter at gmail.com (William Hunter) Date: Wed, 26 Jul 2006 10:28:44 +0200 Subject: [SciPy-user] sparse slicing and Gmail Message-ID: <8b3894bc0607260128o1a9e2a92lf2f73019c73fd8c@mail.gmail.com> SciPy Users; sparse: Thanks (Ed) for help on the slicing bit. I'm now in a position to add my simple example to SciPy_Tutorial Gmail: Fernando Perez was out of the blocks first, thanks Fernando. Also to Steve Lianoglou, David Cooke, Stéfan v/d Walt and Bill Baxter - didn't expect such a response, all fans of Gmail obviously. I like it already. By the way: Willem Jagter is a direct translation of William Hunter into Afrikaans, for those who wondered... Regards, WH -------------- next part -------------- An HTML attachment was scrubbed... URL: From a.h.jaffe at gmail.com Wed Jul 26 04:28:33 2006 From: a.h.jaffe at gmail.com (Andrew Jaffe) Date: Wed, 26 Jul 2006 09:28:33 +0100 Subject: [SciPy-user] FAIL: check_dot (scipy.linalg.tests.test_blas.test_fblas1_simple) Message-ID: Hi All- I get the following error with scipy.__version__ = '0.5.0.2121' on PPC OSX 2.4.3 Universal Build with numpy 1.0.2889 (note that the latter has a known longdouble problem, but I don't think that's related). Indeed the fblas complex [c,z]dot[c,u] functions do seem to fail. (In fact, zdotc([1,-4,3],[2,3,1]) hangs the interpreter...)
Andrew ====================================================================== FAIL: check_dot (scipy.linalg.tests.test_blas.test_fblas1_simple) ---------------------------------------------------------------------- Traceback (most recent call last): File "/Library/Frameworks/Python.framework/Versions/2.4/lib/python2.4/site-packages/scipy-0.5.0.2121-py2.4-macosx-10.3-fat.egg/scipy/linalg/tests/test_blas.py", line 75, in check_dot assert_almost_equal(f([3j,-4,3-4j],[2,3,1]),-9+2j) File "/Library/Frameworks/Python.framework/Versions/2.4/lib/python2.4/site-packages/numpy-1.0.2889-py2.4-macosx-10.3-fat.egg/numpy/testing/utils.py", line 156, in assert_almost_equal assert round(abs(desired - actual),decimal) == 0, msg AssertionError: Items are not equal: ACTUAL: (-1.9980511665344238+2.1130658899732593e-36j) DESIRED: (-9+2j) ====================================================================== FAIL: check_dot (scipy.lib.tests.test_blas.test_fblas1_simple) ---------------------------------------------------------------------- Traceback (most recent call last): File "/Library/Frameworks/Python.framework/Versions/2.4/lib/python2.4/site-packages/scipy-0.5.0.2121-py2.4-macosx-10.3-fat.egg/scipy/lib/blas/tests/test_blas.py", line 76, in check_dot assert_almost_equal(f([3j,-4,3-4j],[2,3,1]),-9+2j) File "/Library/Frameworks/Python.framework/Versions/2.4/lib/python2.4/site-packages/numpy-1.0.2889-py2.4-macosx-10.3-fat.egg/numpy/testing/utils.py", line 156, in assert_almost_equal assert round(abs(desired - actual),decimal) == 0, msg AssertionError: Items are not equal: ACTUAL: (-1.9980511665344238+2.1996782422872647e-36j) DESIRED: (-9+2j) From saba.ahsan at telematix.com.pk Wed Jul 26 06:32:30 2006 From: saba.ahsan at telematix.com.pk (Saba Ahsan) Date: Wed, 26 Jul 2006 15:32:30 +0500 Subject: [SciPy-user] Error Importing SciPy Windows XP Python2.4 Message-ID: <002801c6b09e$cf41b8b0$1c00a8c0@TXPWKS09> Hi, I have used scipy-0.4.9.win32-py2.4 for installing python on my windows xp 
machine. I tried importing scipy and I get the following error. Traceback (most recent call last): File "", line 1, in ? from scipy import * File "D:\Python24\lib\site-packages\scipy\__init__.py", line 33, in ? del lib NameError: name 'lib' is not defined I have searched all mailing lists but could not find anyone else who had the same problem. Can somebody please tell me where I went wrong. Kind Regards Saba -------------- next part -------------- An HTML attachment was scrubbed... URL: From willemjagter at gmail.com Wed Jul 26 10:09:36 2006 From: willemjagter at gmail.com (William Hunter) Date: Wed, 26 Jul 2006 16:09:36 +0200 Subject: [SciPy-user] sparse: New Example 1 Message-ID: <8b3894bc0607260709j9d0caefv8b4af0871241fbe0@mail.gmail.com> Users; I've added a new sparse example to SciPy_Tutorial. It is a simple one so I moved the previous one down and renamed it. I'd appreciate it if you could check through it for any possible wrong use of programming terms (like module, command, attribute, function, etc.) I'll add some more in the coming weeks. -- Regards, WH From schofield at ftw.at Wed Jul 26 11:14:47 2006 From: schofield at ftw.at (Ed Schofield) Date: Wed, 26 Jul 2006 17:14:47 +0200 Subject: [SciPy-user] ANN: SciPy 0.5.0 released Message-ID: <44C786E7.50209@ftw.at> =========================== SciPy 0.5.0 Scientific tools for Python =========================== I'm pleased to announce the release of SciPy 0.5.0. The main change in this version is support for NumPy 1.0b1. There are also bug fixes and minor enhancements to several modules, including ndimage, optimize, sparse, stats, and weave. It is available for download from http://www.scipy.org/Download as a source tarball for Linux/Solaris/OS X/BSD/Windows (64-bit and 32-bit) and as an executable installer for Win32. More information on SciPy is available at http://www.scipy.org/ =========================== SciPy is an Open Source library of scientific tools for Python. 
It contains a variety of high-level science and engineering modules, including modules for statistics, optimization, integration, linear algebra, Fourier transforms, signal and image processing, genetic algorithms, ODE solvers, special functions, and more. From schofield at ftw.at Wed Jul 26 11:23:53 2006 From: schofield at ftw.at (Ed Schofield) Date: Wed, 26 Jul 2006 17:23:53 +0200 Subject: [SciPy-user] sparse: New Example 1 In-Reply-To: <8b3894bc0607260709j9d0caefv8b4af0871241fbe0@mail.gmail.com> References: <8b3894bc0607260709j9d0caefv8b4af0871241fbe0@mail.gmail.com> Message-ID: <44C78909.7050005@ftw.at> William Hunter wrote: > Users; > > I've added a new sparse example to SciPy_Tutorial. It is a simple one so I moved > the previous one down and renamed it. I'd appreciate it if you could > check through > it for any possible wrong use of programming terms (like module, command, > attribute, function, etc.) > > I'll add some more in the coming weeks. > Great! Thanks a lot! -- Ed From davidgrant at gmail.com Wed Jul 26 11:38:43 2006 From: davidgrant at gmail.com (David Grant) Date: Wed, 26 Jul 2006 08:38:43 -0700 Subject: [SciPy-user] sparse: New Example 1 In-Reply-To: <8b3894bc0607260709j9d0caefv8b4af0871241fbe0@mail.gmail.com> References: <8b3894bc0607260709j9d0caefv8b4af0871241fbe0@mail.gmail.com> Message-ID: Are you talking about the tutorial here: http://www.scipy.org/SciPy_Tutorial ? Why is it called SciPy_Tutorial by the way when it is just a sparse matrix tutorial? In Example 1, when trying to solve the Ax=b where A is sparse, I get the following error: In [16]: xsp=linsolve.spsolve(Asp,b) data-ftype: d compared to data d Calling _superlu.dgssv Use minimum degree ordering on A'+A. David On 7/26/06, William Hunter wrote: > Users; > > I've added a new sparse example to SciPy_Tutorial. It is a simple one so I moved > the previous one down and renamed it. 
I'd appreciate it if you could > check through > it for any possible wrong use of programming terms (like module, command, > attribute, function, etc.) > > I'll add some more in the coming weeks. > > -- > Regards, > WH > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > -- David Grant From robert.kern at gmail.com Wed Jul 26 11:52:39 2006 From: robert.kern at gmail.com (Robert Kern) Date: Wed, 26 Jul 2006 10:52:39 -0500 Subject: [SciPy-user] Error Importing SciPy Windows XP Python2.4 In-Reply-To: <002801c6b09e$cf41b8b0$1c00a8c0@TXPWKS09> References: <002801c6b09e$cf41b8b0$1c00a8c0@TXPWKS09> Message-ID: <44C78FC7.6080604@gmail.com> Saba Ahsan wrote: > Hi, > I have used scipy-0.4.9.win32-py2.4 for installing python on my windows > xp machine. I tried importing scipy and I get the following error. > > Traceback (most recent call last): > File "", line 1, in ? > from scipy import * > File "D:\Python24\lib\site-packages\scipy\__init__.py", line 33, in ? > del lib > NameError: name 'lib' is not defined > > I have searched all mailing lists but could not find anyone else who had > the same problem. Can somebody please tell me where I went wrong. What is your numpy version? That statement was there to remove the 'lib' module from the namespace which used to be imported with "from numpy import *". It was interfering with importing scipy.lib. However, recent versions of numpy do not export that name, and the del statement no longer exists in scipy/__init__.py. You can just remove it (and probably the adjacent del statement, too). -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth."
-- Umberto Eco From schofield at ftw.at Wed Jul 26 12:08:55 2006 From: schofield at ftw.at (Ed Schofield) Date: Wed, 26 Jul 2006 18:08:55 +0200 Subject: [SciPy-user] sparse: New Example 1 In-Reply-To: References: <8b3894bc0607260709j9d0caefv8b4af0871241fbe0@mail.gmail.com> Message-ID: <44C79397.40008@ftw.at> David Grant wrote: > Are you talking about the tutorial here: http://www.scipy.org/SciPy_Tutorial ? > > Why is it called SciPy_Tutorial by the way when it is just a sparse > matrix tutorial? > Because it's a tutorial for the whole of SciPy -- the other sections just haven't been written yet ;) -- Ed From fredantispam at free.fr Wed Jul 26 13:04:58 2006 From: fredantispam at free.fr (fred) Date: Wed, 26 Jul 2006 19:04:58 +0200 Subject: [SciPy-user] scipy 0.4.9/numpy 0.9.8 & jn_zeros() Message-ID: <44C7A0BA.6020708@free.fr> Hi all, Since the new release, I get this error message: Python 2.4.3 (#1, Jul 25 2006, 18:55:45) [GCC 3.3.5 (Debian 1:3.3.5-13)] on linux2 Type "help", "copyright", "credits" or "license" for more information. >>> from scipy.special.basic import jn_zeros >>> print jn_zeros(0,1) Floating exception What am I doing wrong? Cheers, -- Fred. From oliphant.travis at ieee.org Wed Jul 26 14:38:43 2006 From: oliphant.travis at ieee.org (Travis Oliphant) Date: Wed, 26 Jul 2006 12:38:43 -0600 Subject: [SciPy-user] We need a system for scipy modularity Message-ID: <44C7B6B3.7020107@ieee.org> We need to think about a system for allowing people to install scipy modules with more modularity. In other words, we should have a standard means for installing parts of scipy. Right now, this is possible simply by running the setup.py script from the sub-package directory.
In other words you can install ndimage separately by going to (or checking out) scipy/Lib/ndimage and then running python setup.py install This places the package ndimage in the site-packages directory of your installation so that access to ndimage is obtained using import ndimage While this is functional it has a couple of problems: 1) Name-space duplication --- i.e. is it a good thing to have users of ndimage that have it installed in a top-level name-space while other users of the very same package have it installed as scipy.ndimage It seems like this will create confusion as people begin to look for both names. 2) No automated system in place --- no distributions of separate packages. I am not an egg-expert (I barely know what they do). I hear rumors that they solve all our problems. But, I have yet to see anything substantial. So, I'm looking for ideas as to what to do. There is a demand for "modularity" but not very much in the way of an explanation of what that "modularity" might look like. -Travis From davidgrant at gmail.com Wed Jul 26 14:47:31 2006 From: davidgrant at gmail.com (David Grant) Date: Wed, 26 Jul 2006 11:47:31 -0700 Subject: [SciPy-user] sparse: New Example 1 In-Reply-To: <44C79397.40008@ftw.at> References: <8b3894bc0607260709j9d0caefv8b4af0871241fbe0@mail.gmail.com> <44C79397.40008@ftw.at> Message-ID: On 7/26/06, Ed Schofield wrote: > > David Grant wrote: > > Are you talking about the tutorial here: > http://www.scipy.org/SciPy_Tutorial ? > > > > Why is it called SciPy_Tutorial by the way when it is just a sparse > > matrix tutorial? > > > > Because it's a tutorial for the whole of SciPy -- the other sections > just haven't been written yet ;) > > There are already a few tutorials out there right? Could any of these be incorporated?
http://www.tau.ac.il/~kineret/amit/scipy_tutorial/ http://www.rexx.com/~dkuhlman/scipy_course_01.html http://scipy.mit.edu/ (there is apparently something here but it won't load) -- David Grant -------------- next part -------------- An HTML attachment was scrubbed... URL: From robert.kern at gmail.com Wed Jul 26 14:56:33 2006 From: robert.kern at gmail.com (Robert Kern) Date: Wed, 26 Jul 2006 13:56:33 -0500 Subject: [SciPy-user] We need a system for scipy modularity In-Reply-To: <44C7B6B3.7020107@ieee.org> References: <44C7B6B3.7020107@ieee.org> Message-ID: <44C7BAE1.60106@gmail.com> Travis Oliphant wrote: > I am not an egg-expert (I barely know what they do). I hear rumors that > they solve all our problems. But, I have yet to see anything substantial. No, they don't solve all of our problems. What they do is provide tools that will be useful in building a solution. The Python community is aligning behind them for solving similar problems. > So, I'm looking for ideas as to what to do. There is a demand for > "modularity" but not very much in the way of an explanation of what that > "modularity" might look like. I'm playing around with a scheme to build only some subpackages. It's similar to what David Cooke did with scipy.sandbox, only the main "build all eggs" script will have to generate the list of subpackages to build on-the-fly. The first scheme that I tried did not work. I had a tiny module that simply provided a place to store some global state. The main script had a dictionary mapping subpackages ('linalg', 'stats', etc.) to some metainformation about the subpackage. The name of the subpackage was set to a global variable in the tiny module. Lib/setup.py would import that module and use that information to only build the one subpackage. The main script would iterate over the subpackages and do one setup() for each. Unfortunately, that didn't work very well. I think I need to clean out the build directory between each setup(). 
Possibly they also need to be in separate processes. I'm going to try that tonight, and if it shows promise, I will make a branch to work on making this feature robust. -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From wbaxter at gmail.com Wed Jul 26 14:59:09 2006 From: wbaxter at gmail.com (Bill Baxter) Date: Thu, 27 Jul 2006 03:59:09 +0900 Subject: [SciPy-user] We need a system for scipy modularity In-Reply-To: <44C7B6B3.7020107@ieee.org> References: <44C7B6B3.7020107@ieee.org> Message-ID: Another issue, which I think is related, is the stuff in sandbox. It would be nice if the answer to "how do I use from the sandbox" were not "change setup.py and rebuild scipy." (at least that's what it says is required here: http://www.scipy.org/Cookbook/Matplotlib/Gridding_irregularly_spaced_data). --bb On 7/27/06, Travis Oliphant wrote: > > We need to think about a system for allowing people to install scipy > modules with more modularity. In other-words, we should have a standard > means for installing parts of scipy. > > Right now, this is possible simply by running the setup.py script from > the sub-package directory. In other words you can install ndimage > separately by going to (or checking out) > > scipy/Lib/ndimage > > and then running > > python setup.py install > > This places the package ndimage in the site-packages directory of your > installation so that access to ndimage is obtained using > > import ndimage > > While this is functional it has a couple of problems: > > 1) Name-space duplication --- i.e. is it a good thing to have users of > nidimage that have it installed > in a top-level name-space while other users of the very same package > have it installed as scipy.ndimage > > It seems like this will create confusion as people begin to look for > both names. 
> > 2) No automated system in place --- no distributions of separate packages. > > I am not an egg-expert (I barely know what they do). I hear rumors that > they solve all our problems. But, I have yet to see anything substantial. > > So, I'm looking for ideas as to what to do. There is a demand for > "modularity" but not very much in the way of an explanation of what that > "modularity" might look like. > > -Travis > > > > > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > From hasslerjc at adelphia.net Wed Jul 26 15:08:44 2006 From: hasslerjc at adelphia.net (John Hassler) Date: Wed, 26 Jul 2006 15:08:44 -0400 Subject: [SciPy-user] ANN: SciPy 0.5.0 released In-Reply-To: <44C786E7.50209@ftw.at> References: <44C786E7.50209@ftw.at> Message-ID: <44C7BDBC.3030101@adelphia.net> This still has the same "Athlon problem." I don't remember if this build was supposed to address that or not ... probably not. john Ed Schofield wrote: > =========================== > SciPy 0.5.0 > Scientific tools for Python > =========================== > > I'm pleased to announce the release of SciPy 0.5.0. The main change in this version is support for NumPy 1.0b1. There are also bug fixes and minor enhancements to several modules, including ndimage, optimize, sparse, stats, and weave. > > From oliphant.travis at ieee.org Wed Jul 26 15:30:24 2006 From: oliphant.travis at ieee.org (Travis Oliphant) Date: Wed, 26 Jul 2006 13:30:24 -0600 Subject: [SciPy-user] We need a system for scipy modularity In-Reply-To: <44C7BAE1.60106@gmail.com> References: <44C7B6B3.7020107@ieee.org> <44C7BAE1.60106@gmail.com> Message-ID: <44C7C2D0.8040606@ieee.org> >> So, I'm looking for ideas as to what to do. There is a demand for >> "modularity" but not very much in the way of an explanation of what that >> "modularity" might look like. >> > > I'm playing around with a scheme to build only some subpackages. 
It's similar to > what David Cooke did with scipy.sandbox, only the main "build all eggs" script > will have to generate the list of subpackages to build on-the-fly. > > Unfortunately, that didn't work very well. I think I need to clean out the build > directory between each setup(). Possibly they also need to be in separate > processes. I'm going to try that tonight, and if it shows promise, I will make a > branch to work on making this feature robust. > Great news. I'm glad to hear you are working on it. I'll look forward to hearing how it goes. -Travis From oliphant.travis at ieee.org Wed Jul 26 15:32:56 2006 From: oliphant.travis at ieee.org (Travis Oliphant) Date: Wed, 26 Jul 2006 13:32:56 -0600 Subject: [SciPy-user] We need a system for scipy modularity In-Reply-To: References: <44C7B6B3.7020107@ieee.org> Message-ID: <44C7C368.1070409@ieee.org> Bill Baxter wrote: > Another issue, which I think is related, is the stuff in sandbox. > It would be nice if the answer to "how do I use from the > sandbox" were not "change setup.py and rebuild scipy." (at least > that's what it says is required here: > http://www.scipy.org/Cookbook/Matplotlib/Gridding_irregularly_spaced_data). > > You can make a file named "enabled_packages.txt" in the Lib/sandbox directory with packages listed on each line. For example mine is $ cat enabled_packages.txt delaunay xplt netcdf odr constants buildgrid image stsci You still have to run python setup.py install again (i.e. "rebuild" scipy). Of course scipy doesn't get rebuilt (unless you deleted the build directory) just the new stuff gets added and installed. 
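The enabled_packages.txt file described above is plain text with one package name per line. A minimal sketch of generating it programmatically (the directory and package names below are only examples; in a real SVN checkout the file lives in scipy/Lib/sandbox):

```python
import os

# Example only: in a real checkout this directory would be scipy/Lib/sandbox.
sandbox_dir = "."
enabled = ["delaunay", "xplt", "netcdf"]  # sandbox packages to build

path = os.path.join(sandbox_dir, "enabled_packages.txt")
with open(path, "w") as f:
    f.write("\n".join(enabled) + "\n")

# Lib/sandbox/setup.py reads this file and adds the listed packages to
# the build; rerun "python setup.py install" afterwards to pick them up.
print(open(path).read().splitlines())  # -> ['delaunay', 'xplt', 'netcdf']
```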
-Travis From oliphant.travis at ieee.org Wed Jul 26 15:34:20 2006 From: oliphant.travis at ieee.org (Travis Oliphant) Date: Wed, 26 Jul 2006 13:34:20 -0600 Subject: [SciPy-user] odr In-Reply-To: <44C6BCB6.9060405@hoc.net> References: <44C6BCB6.9060405@hoc.net> Message-ID: <44C7C3BC.4050004@ieee.org> Christian Kristukat wrote: > Hi, > I'd like to vote for moving the odr module from the sandbox to the optimize > package. I've been testing it the last two weeks and I'm really pleased. The > only thing to do is to update odrpack.py to current numpy, which is done by the > patch I sent last week. > What do you think? > Robert Kern needs to sign off on this idea since it is his sub-module. -Travis From jonas at mwl.mit.edu Wed Jul 26 15:41:43 2006 From: jonas at mwl.mit.edu (Eric Jonas) Date: Wed, 26 Jul 2006 15:41:43 -0400 Subject: [SciPy-user] We need a system for scipy modularity In-Reply-To: <44C7B6B3.7020107@ieee.org> References: <44C7B6B3.7020107@ieee.org> Message-ID: <1153942903.22173.92.camel@convolution.mit.edu> I volunteer to help modularize scipy.stats and friends if it's any effort and once we know how (I have to agree with Travis re: eggs and barely understanding what they do). > So, I'm looking for ideas as to what to do. There is a demand for > "modularity" but not very much in the way of an explanation of what that > "modularity" might look like. As we factor chunks of scipy off into more modular pieces, would it be possible to also suggest/develop standards for documentation? I guess that's the other side of modularity -- "uniformity of interface". I've had a real burning desire to hack on the distributions in the stats module (hey, I have no life) and to write up some documentation and examples, but I'm hesitant to do these things in a way that will be unloved and unmaintained. There are also areas (again picking on the distributions) where there is some very (too?) 
clever coding going on that it took us a long time to figure out when trying to debug an error. It would be great if an attempt to modularize thus came with suggestions on how to 1. document code internals and 2. write/locate/save external/tutorial docs. ...Eric From vincefn at users.sourceforge.net Wed Jul 26 15:53:57 2006 From: vincefn at users.sourceforge.net (Vincent Favre-Nicolin) Date: Wed, 26 Jul 2006 21:53:57 +0200 Subject: [SciPy-user] We need a system for scipy modularity In-Reply-To: <529E0C005F46104BA9DB3CB93F3979753DFC86@TOKYO.intra.cea.fr> References: <44C7B6B3.7020107@ieee.org> <529E0C005F46104BA9DB3CB93F3979753DFC86@TOKYO.intra.cea.fr> Message-ID: <200607262153.57433.vincefn@users.sourceforge.net> On Wednesday 26 July 2006 21:33, Travis Oliphant wrote: > You can make a file named "enabled_packages.txt" in the Lib/sandbox > directory with packages listed on each line. Maybe the packages in sandbox should also be included in the scipy distribution? They're not in 0.5.0, so only those using svn can test them. Vincent -- Vincent Favre-Nicolin Université Joseph Fourier http://v.favrenicolin.free.fr ObjCryst & Fox : http://objcryst.sourceforge.net From fperez.net at gmail.com Wed Jul 26 16:18:53 2006 From: fperez.net at gmail.com (Fernando Perez) Date: Wed, 26 Jul 2006 14:18:53 -0600 Subject: [SciPy-user] We need a system for scipy modularity In-Reply-To: <44C7B6B3.7020107@ieee.org> References: <44C7B6B3.7020107@ieee.org> Message-ID: On 7/26/06, Travis Oliphant wrote: > > We need to think about a system for allowing people to install scipy > modules with more modularity. In other-words, we should have a standard > means for installing parts of scipy. I'm not too sure I like this idea. Maybe I'm just low on my meds and my ocd is really flaring, but I tend to much prefer the notion that import scipy.foo for any given version of scipy, is known to work, independent of how scipy was actually built. Having pieces of the 'core' scipy namespace be optional sounds to me like a recipe for long-term maintenance headaches. Will the ubuntu (or suse, or whatever) scipy package have the optional pieces? Will it have some of them? None? How can you get the optional pieces added? Will they make a sub-package per piece?
I really don't like this, and I still think that the idea of a separate, /purely modular/ namespace (the 'scikits' thingie) is more reasonable in the long term. That one would consist of packages which obviously depend on numpy and likely on parts of scipy, and which are meant, from day 1, to be separately installed. One can then easily think of a 'sumo' package which ships them all (likely implemented in linux distros as a meta-package which depends on all the available components). Since I'm too swamped to write the code for this, I know this suggestion is worth only 1e-20, likely below double precision threshold, but I still think it's worth at least saying it. I am, however, willing to make a concrete offer: if you (a collective you) like the idea, and esp. if Robert likes it and would rather hold off on implementing something inside of the scipy namespace, I /will/ implement this during the scipy sprint days. I am way too busy between now and then, but I could do this on Monday afternoon (with anyone else who wants to help). This should be no more than a few hours worth of work, and I'd propose deciding what of the sandbox is meant to be officially part of the core, and move the rest into the toolkits. It's only 2 1/2 weeks away. Cheers, f From cookedm at physics.mcmaster.ca Wed Jul 26 17:11:49 2006 From: cookedm at physics.mcmaster.ca (David M. Cooke) Date: Wed, 26 Jul 2006 17:11:49 -0400 Subject: [SciPy-user] We need a system for scipy modularity In-Reply-To: References: <44C7B6B3.7020107@ieee.org> Message-ID: <20060726171149.7c86f1ef@arbutus.physics.mcmaster.ca> On Wed, 26 Jul 2006 14:18:53 -0600 "Fernando Perez" wrote: > On 7/26/06, Travis Oliphant wrote: > > > > We need to think about a system for allowing people to install scipy > > modules with more modularity. In other-words, we should have a standard > > means for installing parts of scipy. > > I'm not too sure I like this idea. 
Maybe I'm just low on my meds and > my ocd is really flaring, but I tend to much prefer the notion that > > import scipy.foo > > for any given version of scipy, is known to work, independent of how > scipy was actually built. Having pieces of the 'core' scipy namespace > be optional sounds to me like a recipe for long-term maintenance > headaches. Will the ubuntu (or suse, or whatever) scipy package have > the optional pieces? Will it have some of them? None? How can you > get the optional pieces added? Will they make a sub-package per > piece? > > I really don't like this, and I still think that the idea of a > separate, /purely modular/ namespace (the 'scikits' thingie) is more > reasonable in the long term. That one would consist of packages which > obviously depend on numpy and likely on parts of scipy, and which are > meant, from day 1, to be separately installed. One can then easily > think of a 'sumo' package which ships them all (likely implemented in > linux distros as a meta-package which depends on all the available > components). I'd really suggest looking into setuptools's namespace packages. For example, I just tried doing it with numexpr, and I could make a 'scikits.numexpr' package. Now, someone else can make a 'scikits.foo' package, and it will install right next to numexpr, and both 'import scikits.numexpr' and 'import scikits.foo' will work. If we get scikits going, then some stuff from scipy could probably move to there (some of the less-maintained sandbox projects, for instance). -- |>|\/|< /--------------------------------------------------------------------------\ |David M. 
Cooke http://arbutus.physics.mcmaster.ca/dmc/ |cookedm at physics.mcmaster.ca From robert.kern at gmail.com Wed Jul 26 18:11:57 2006 From: robert.kern at gmail.com (Robert Kern) Date: Wed, 26 Jul 2006 17:11:57 -0500 Subject: [SciPy-user] We need a system for scipy modularity In-Reply-To: References: <44C7B6B3.7020107@ieee.org> Message-ID: <44C7E8AD.8060507@gmail.com> Fernando Perez wrote: > On 7/26/06, Travis Oliphant wrote: >> We need to think about a system for allowing people to install scipy >> modules with more modularity. In other-words, we should have a standard >> means for installing parts of scipy. > > I'm not too sure I like this idea. Maybe I'm just low on my meds and > my ocd is really flaring, but I tend to much prefer the notion that > > import scipy.foo > > for any given version of scipy, is known to work, independent of how > scipy was actually built. Having pieces of the 'core' scipy namespace > be optional sounds to me like a recipe for long-term maintenance > headaches. Will the ubuntu (or suse, or whatever) scipy package have > the optional pieces? Will it have some of them? None? How can you > get the optional pieces added? Will they make a sub-package per > piece? They'll either have a Debian package per piece or one Debian package containing all of scipy. They've done both in the past for similar projects. The mechanism I'm proposing isn't a way to simply "turn off" the subpackages that you don't want. It's a way to break up scipy into separately installable subpackages. If you want to write a lightweight program that really only needs scipy.nd_image, you could simply require "scipy.nd_image". If you want to make sure that all of scipy is available, by all means require "scipy". > I really don't like this, and I still think that the idea of a > separate, /purely modular/ namespace (the 'scikits' thingie) is more > reasonable in the long term. 
That one would consist of packages which > obviously depend on numpy and likely on parts of scipy, and which are > meant, from day 1, to be separately installed. One can then easily > think of a 'sumo' package which ships them all (likely implemented in > linux distros as a meta-package which depends on all the available > components). The thing is, I don't understand why scikits is significantly different from scipy, here. The only thing different is that scipy.__init__ isn't empty. That's okay; that's easy to deal with. > Since I'm too swamped to write the code for this, I know this > suggestion is worth only 1e-20, likely below double precision > threshold, but I still think it's worth at least saying it. > > I am, however, willing to make a concrete offer: if you (a collective > you) like the idea, and esp. if Robert likes it and would rather hold > off on implementing something inside of the scipy namespace, I /will/ > implement this during the scipy sprint days. I am way too busy > between now and then, but I could do this on Monday afternoon (with > anyone else who wants to help). This should be no more than a few > hours worth of work, and I'd propose deciding what of the sandbox is > meant to be officially part of the core, and move the rest into the > toolkits. It's only 2 1/2 weeks away. The current problem is largely the building, not the namespace manipulation. That's already a solved problem. We don't need to roll our own, here. -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth."
-- Umberto Eco From fperez.net at gmail.com Wed Jul 26 18:29:13 2006 From: fperez.net at gmail.com (Fernando Perez) Date: Wed, 26 Jul 2006 16:29:13 -0600 Subject: [SciPy-user] We need a system for scipy modularity In-Reply-To: <44C7E8AD.8060507@gmail.com> References: <44C7B6B3.7020107@ieee.org> <44C7E8AD.8060507@gmail.com> Message-ID: On 7/26/06, Robert Kern wrote: > Fernando Perez wrote: > The mechanism I'm proposing isn't a way to simply "turn off" the subpackages > that you don't want. It's a way to break up scipy into separately installable > subpackages. If you want to write a lightweight program that really only needs > scipy.nd_image, you could simply require "scipy.nd_image". If you want to make > sure that all of scipy is available, by all means require "scipy". Would this do dependency analysis? Declaring that you require scipy.nd_image would pull in the parts of scipy that nd_image needs and only those? I'm just curious at this point. > The thing is, I don't understand is why scikits is significantly different from > scipy, here. The only thing different is that scipy.__init__ isn't empty. That's > okay; that's easy to deal with. To me it's just an organizational issue: making scipy a monolithic, but not too big core which is guaranteed to have a certain list of things always in it, and moving all the ancillary stuff out into the toolkits. I also think that would help with maintenance, build times, etc. Basically I view scipy as numpy++ (or numpy+fortran, if you will), and would leave the modularity to toolkits with scikit as a namespace, so that even third-parties could provide packages whose setup.py install goes under site-packages/scikit (yes, issues of --prefix need sorting out, perhaps the setuptools namespace package concept is the proper solution; I'm not familiar with it). > The current problem is largely the building, not the namespace manipulation. > That's already a solved problem. We don't need to roll our own, here. 
I don't think I was proposing rolling anything new on our own; I think it's more a question of policy than one of implementation. One approach is to make scipy itself modular, which I tend not to like a whole lot. Another (the one I propose) is to keep scipy monolithic, but small-ish (like numpy is, but obviously bigger than numpy), and have all modularity in the notion of toolkits (even if this means moving some things in today's sandbox out to the toolkits). There are probably actual building implementation details that apply to either approach, and in that case your current solution would probably work for either. But the policy decision is independent of the mechanism. I think I've said enough on this matter, and I won't be writing any code for this right now. If you want, I'm willing to work (alone or with you) on my approach during the sprint days. If you roll something else before then, we'll all use it and I'll be OK with whatever it is. If people prefer the 'modular scipy' approach, then that's fine. Cheers, f From saba.ahsan at telematix.com.pk Thu Jul 27 00:29:06 2006 From: saba.ahsan at telematix.com.pk (Saba Ahsan) Date: Thu, 27 Jul 2006 09:29:06 +0500 Subject: [SciPy-user] Error Importing SciPy Windows XP Python2.4 References: <002801c6b09e$cf41b8b0$1c00a8c0@TXPWKS09> <44C78FC7.6080604@gmail.com> Message-ID: <005e01c6b135$34a290d0$1c00a8c0@TXPWKS09> My numpy version is 1.0b1. I tried using numpy with matplotlib-0.87.4 as well and kept getting this error: RuntimeError: module compiled against version 90709 of C-API but this version of numpy is 1000000 I believe matplotlib provides its own numpy as well and I probably have two versions right now. But how do I solve this problem?
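A first step in diagnosing a C-API mismatch like the RuntimeError above is to confirm which numpy the interpreter actually imports; a minimal sketch (it only reports the installed copy — the actual fix, as explained later in the thread, is recompiling the extension against that numpy):

```python
# Show the version and on-disk location of the numpy that wins on
# sys.path; an extension built against a different release will report
# a mismatching C-API version stamp at import time.
import numpy

print(numpy.__version__)  # the installed release
print(numpy.__file__)     # which installed copy is being imported
```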
----- Original Message ----- From: "Robert Kern" To: "SciPy Users List" Sent: Wednesday, July 26, 2006 8:52 PM Subject: Re: [SciPy-user] Error Importing SciPy Windows XP Python2.4 > Saba Ahsan wrote: >> Hi, >> I have used scipy-0.4.9.win32-py2.4 for installing python on my windows >> xp machine. I tried importing scipy and I get the following error. >> >> Traceback (most recent call last): >> File "", line 1, in ? >> from scipy import * >> File "D:\Python24\lib\site-packages\scipy\__init__.py", line 33, in ? >> del lib >> NameError: name 'lib' is not defined >> >> I have searched all mailing lists but could not find anyone else who had >> the same problem. Can somebody please tell me where I went wrong. > > What is your numpy version? That statement was there to remove the 'lib' > module > from the namespace which used to be imported with "from numpy import *". > It was > interfering with importing scipy.lib. However, recent versions for numpy > do no > export that name, and the del statement no longer exists in > scipy/__init__.py . > You can just remove it (and probably the adjacent del statement, too). > > -- > Robert Kern > > "I have come to believe that the whole world is an enigma, a harmless > enigma > that is made terrible by our own mad attempt to interpret it as though it > had > an underlying truth." > -- Umberto Eco > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > > > From willemjagter at gmail.com Thu Jul 27 02:49:31 2006 From: willemjagter at gmail.com (William Hunter) Date: Thu, 27 Jul 2006 08:49:31 +0200 Subject: [SciPy-user] Example 1 - Error Message-ID: <8b3894bc0607262349q4066cfe0lb509c3b7db5157fe@mail.gmail.com> David Grant wrote: >Why is it called SciPy_Tutorial by the way when it is just a sparse >matrix tutorial? 
>In Example 1, when trying to solve the Ax=b where A is sparse, I get >the following error: > >In [16]: xsp=linsolve.spsolve(Asp,b) >data-ftype: d compared to data d >Calling _superlu.dgssv >Use minimum degree ordering on A'+A. David; I get the same output on my machine. I didn't know that's an error. Is it? If I print xsp it gives me an array which is equal to b, as one expects. The answer is certainly correct (it seems), or are you referring to the output "data-ftype...on A'+A"? I'd like some clarity on this, or else it's been a case of "ignorance is bliss" for me... -- Regards, WH From oliphant at ee.byu.edu Thu Jul 27 12:24:25 2006 From: oliphant at ee.byu.edu (Travis Oliphant) Date: Thu, 27 Jul 2006 10:24:25 -0600 Subject: [SciPy-user] Error Importing SciPy Windows XP Python2.4 In-Reply-To: <005e01c6b135$34a290d0$1c00a8c0@TXPWKS09> References: <002801c6b09e$cf41b8b0$1c00a8c0@TXPWKS09> <44C78FC7.6080604@gmail.com> <005e01c6b135$34a290d0$1c00a8c0@TXPWKS09> Message-ID: <44C8E8B9.8090706@ee.byu.edu> Saba Ahsan wrote: >My numpy version is 1.0b1. I tried using numpy with matplotlib-0.87.4. as >well and kept getting this error: > > RuntimeError: module compiled against version 90709 of C-API but >this version of > numpy is 1000000 > >I believe matplotlib provides its own numpy as well and I probably have two >versions right now. But how do I solve this problem. > > No, matplotlib doesn't provide numpy. It's just been compiled against a very specific version of the C-API. That's why you get this error. You need to re-compile matplotlib against the new C-API. The C-API should not change during the beta-release period so a re-compile will not be necessary for each beta release.
-Travis From davidgrant at gmail.com Thu Jul 27 13:38:39 2006 From: davidgrant at gmail.com (David Grant) Date: Thu, 27 Jul 2006 10:38:39 -0700 Subject: [SciPy-user] Example 1 - Error In-Reply-To: <8b3894bc0607262349q4066cfe0lb509c3b7db5157fe@mail.gmail.com> References: <8b3894bc0607262349q4066cfe0lb509c3b7db5157fe@mail.gmail.com> Message-ID: On 7/26/06, William Hunter wrote: > > David Grant wrote: > >Why is it called SciPy_Tutorial by the way when it is just a sparse > >matrix tutorial? > > > >In Example 1, when trying to solve the Ax=b where A is sparse, I get > >the following error: > > > >In [16]: xsp=linsolve.spsolve(Asp,b) > >data-ftype: d compared to data d > >Calling _superlu.dgssv > >Use minimum degree ordering on A'+A. > > David; > > I get the same output on my machine. I didn't know that's an error. Is it? > > If I print xsp it gives me an array which is equal to b, as one > expects. The answer is certainly correct (it seems), or are you > refering to the output "data-ftype...on A'+A"? > > I'd like some clarity on this, or else it's been a case of "ignorance > is bliss" for me... > Looks like I jumped the gun and assumed there was an error because of the output. Usually when I write code I ensure that there is no output if everything went 100% ok. Maybe that is the case here and I just don't know what that output means. Dave -- David Grant -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From gael.varoquaux at normalesup.org Thu Jul 27 14:52:42 2006 From: gael.varoquaux at normalesup.org (Gael Varoquaux) Date: Thu, 27 Jul 2006 20:52:42 +0200 Subject: [SciPy-user] Example 1 - Error In-Reply-To: References: <8b3894bc0607262349q4066cfe0lb509c3b7db5157fe@mail.gmail.com> Message-ID: <20060727185242.GA22569@clipper.ens.fr> I get another error in example 1 : In [20]: time xsp3 = linsolve.spsolve(Asp.tocsr(),b) --------------------------------------------------------------------------- exceptions.AttributeError Traceback (most recent call last) /home/varoquau/ /usr/lib/python2.4/site-packages/IPython/iplib.py in ipmagic(self, arg_s) 857 else: 858 magic_args = self.var_expand(magic_args) --> 859 return fn(magic_args) 860 861 def ipalias(self,arg_s): /usr/lib/python2.4/site-packages/IPython/Magic.py in magic_time(self, parameter_s) 1584 else: 1585 st = clk() -> 1586 exec code in glob 1587 end = clk() 1588 out = None /home/varoquau/ /usr/lib/python2.4/site-packages/scipy/linsolve/linsolve.py in spsolve(A, b, permc_spec) 66 else: 67 mat, csc = _toCS_superLU( A ) ---> 68 ftype, lastel, data, index0, index1 = \ 69 mat.ftype, mat.nnz, mat.data, mat.rowind, mat.indptr 70 gssv = eval('_superlu.' + ftype + 'gssv') /usr/lib/python2.4/site-packages/scipy/sparse/sparse.py in __getattr__(self, attr) 235 return self.getnnz() 236 else: --> 237 raise AttributeError, attr + " not found" 238 239 def transpose(self): AttributeError: rowind not found Maybe it is because my version of linsolve is not recent enough, but it fails with enthon 1.0.0 beta4. That's not too pretty to see when you are doing the tutorial to decide whether you are going to switch from Matlab to scipy (as a colleague of mine was doing when he found the error).
-- Gaël From fredantispam at free.fr Thu Jul 27 15:39:57 2006 From: fredantispam at free.fr (fred) Date: Thu, 27 Jul 2006 21:39:57 +0200 Subject: [SciPy-user] scipy 0.4.9/numpy 0.9.8 & jn_zeros() In-Reply-To: <44C7A0BA.6020708@free.fr> References: <44C7A0BA.6020708@free.fr> Message-ID: <44C9168D.3030003@free.fr> fred a écrit : > Hi all, > > Since the new release, I get this error message : > > Python 2.4.3 (#1, Jul 25 2006, 18:55:45) > [GCC 3.3.5 (Debian 1:3.3.5-13)] on linux2 > Type "help", "copyright", "credits" or "license" for more information. > >>>>from scipy.special.basic import jn_zeros >>>>print jn_zeros(0,1) > > Floating exception > > What am I doing wrong ? Nobody has any suggestion ? :-( PS : In [3]: from scipy.special import * In [4]: help(lpmn) Help on function lpmn in module scipy.special.basic: lpmn(m, n, z) Associated Legendre functions of the second kind, Pmn(z) and its derivative, Pmn'(z) of order m and degree n. Returns two arrays of size (m+1,n+1) containing Pmn(z) and Pmn'(z) for all orders from 0..m and degrees from 0..n. z can be complex. In [5]: help(lqmn) Help on function lqmn in module scipy.special.basic: lqmn(m, n, z) Associated Legendre functions of the second kind, Qmn(z) and its derivative, Qmn'(z) of order m and degree n. Returns two arrays of size (m+1,n+1) containing Qmn(z) and Qmn'(z) for all orders from 0..m and degrees from 0..n. z can be complex. lpmn should be associated Legendre functions of the first kind, no ? Cheers, -- Fred. From cookedm at physics.mcmaster.ca Thu Jul 27 16:00:04 2006 From: cookedm at physics.mcmaster.ca (David M.
Cooke) Date: Thu, 27 Jul 2006 16:00:04 -0400 Subject: [SciPy-user] scipy 0.4.9/numpy 0.9.8 & jn_zeros() In-Reply-To: <44C9168D.3030003@free.fr> References: <44C7A0BA.6020708@free.fr> <44C9168D.3030003@free.fr> Message-ID: <20060727160004.22990bcc@arbutus.physics.mcmaster.ca> On Thu, 27 Jul 2006 21:39:57 +0200 fred wrote: > fred a écrit : > > Hi all, > > > > Since the new release, I get this error message : > > > > Python 2.4.3 (#1, Jul 25 2006, 18:55:45) > > [GCC 3.3.5 (Debian 1:3.3.5-13)] on linux2 > > Type "help", "copyright", "credits" or "license" for more information. > > > >>>>from scipy.special.basic import jn_zeros > >>>>print jn_zeros(0,1) > > > > Floating exception > > > > What am I doing wrong ? > Nobody has any suggestion ? > :-( It works for me with the latest svn version. Try the newly-released 0.5.0. > > lpmn should be associated Legendre functions of the first kind, no ? Yep; fixed in svn. -- |>|\/|< /--------------------------------------------------------------------------\ |David M. Cooke http://arbutus.physics.mcmaster.ca/dmc/ |cookedm at physics.mcmaster.ca From prabhu at aero.iitb.ac.in Thu Jul 27 16:26:53 2006 From: prabhu at aero.iitb.ac.in (Prabhu Ramachandran) Date: Fri, 28 Jul 2006 01:56:53 +0530 Subject: [SciPy-user] scipy 0.4.9/numpy 0.9.8 & jn_zeros() In-Reply-To: <44C9168D.3030003@free.fr> References: <44C7A0BA.6020708@free.fr> <44C9168D.3030003@free.fr> Message-ID: <17609.8589.168511.499003@prpc.aero.iitb.ac.in> >>>>> "fred" == fred writes: fred> fred a écrit : >> Hi all, >> >> Since the new release, I get this error message : >> >> Python 2.4.3 (#1, Jul 25 2006, 18:55:45) [GCC 3.3.5 (Debian >> 1:3.3.5-13)] on linux2 Type "help", "copyright", "credits" or >> "license" for more information. >> >>>>> from scipy.special.basic import jn_zeros print jn_zeros(0,1) >> Floating exception >> >> What am I doing wrong ? fred> Nobody has any suggestion ?
:-( See if these articles help: http://article.gmane.org/gmane.comp.python.scientific.user/7661/ http://article.gmane.org/gmane.comp.python.scientific.devel/2810 http://article.gmane.org/gmane.comp.python.scientific.devel/3160/ cheers, prabhu From fredantispam at free.fr Thu Jul 27 17:24:44 2006 From: fredantispam at free.fr (fred) Date: Thu, 27 Jul 2006 23:24:44 +0200 Subject: [SciPy-user] scipy 0.4.9/numpy 0.9.8 & jn_zeros() In-Reply-To: <17609.8589.168511.499003@prpc.aero.iitb.ac.in> References: <44C7A0BA.6020708@free.fr> <44C9168D.3030003@free.fr> <17609.8589.168511.499003@prpc.aero.iitb.ac.in> Message-ID: <44C92F1C.2040708@free.fr> Prabhu Ramachandran a écrit : >>>>>>"fred" == fred writes: > > > fred> fred a écrit : > >> Hi all, > >> > >> Since the new release, I get this error message : > >> > >> Python 2.4.3 (#1, Jul 25 2006, 18:55:45) [GCC 3.3.5 (Debian > >> 1:3.3.5-13)] on linux2 Type "help", "copyright", "credits" or > >> "license" for more information. > >> > >>>>> from scipy.special.basic import jn_zeros print jn_zeros(0,1) > >> Floating exception > >> > >> What am I doing wrong ? > fred> Nobody has any suggestion ? :-( > > See if these articles help: > > http://article.gmane.org/gmane.comp.python.scientific.user/7661/ > > http://article.gmane.org/gmane.comp.python.scientific.devel/2810 > > http://article.gmane.org/gmane.comp.python.scientific.devel/3160/ Hmm, works on my x86_64 sarge. I will see it later, knowing that it works on one of my sarge distros... I have another problem for now... -- Fred. From fredantispam at free.fr Thu Jul 27 17:35:34 2006 From: fredantispam at free.fr (fred) Date: Thu, 27 Jul 2006 23:35:34 +0200 Subject: [SciPy-user] jn & lpmn or sph_jn... Message-ID: <44C931A6.5060608@free.fr> Hi, I like how jn works because I can pass an array as arg (I use jn on 3D arrays).
This is not the case for sph_jn & lpmn :-( I tried to bypass it, with something like def foo(m,n): x = arange(0,Lx,dx) y = arange(0,Ly,dy) z = arange(0,Lz,dz) tab = zeros((len(x), len(y), len(z)), dtype='f') for i in range(0,len(x)): for j in range(0,len(y)): for k in range(0,len(z)): tab[i,j,k] = cos(theta(x[i],y[j],z[k])) # tab[i,j,k] = Ymn_theta_p(x[i],y[j],z[k],m,n)*sph_jn(n,r(x[i],y[j],z[k]))[n][0]*rho(x[i],y[j],z[k]) return (tab) but knowing my arrays have more than a hundred thousand cells, it is _very_ slow. I guess sph_jn & lpmn are written like this because they return arrays (0 to n order, derivative, etc). I wish to have Legendre & spherical Bessel functions which could accept arrays as args. I only need the last order (n) and the derivative (which could be called by another func) for the same order. How could I write this ? Or is there another way to do the same thing ? Ex. : In [1]: from scipy.special import * In [2]: x=arange(0,1,0.1) In [3]: print jn(1,x) [ 0. 0.04993753 0.09950083 0.14831882 0.19602658 0.24226846 0.28670099 0.32899574 0.36884205 0.40594955] In [4]: print sph_jn(1,x) --------------------------------------------------------------------------- exceptions.ValueError Traceback (most recent call last) /home/fred/ /usr/local/lib/python2.4/site-packages/scipy/special/basic.py in sph_jn(n, z) 212 """ 213 if not (isscalar(n) and isscalar(z)): --> 214 raise ValueError, "arguments must be scalars." 215 if (n!= floor(n)) or (n<0): 216 raise ValueError, "n must be a non-negative integer." ValueError: arguments must be scalars. Cheers, -- Fred.
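One way to get the array-in, array-out behaviour fred asks for, without touching Fortran, is numpy.vectorize: it wraps a scalar-only routine so that it broadcasts over arrays. Note it still loops in Python internally, so it is a convenience rather than a speedup. A minimal sketch, using j0(x) = sin(x)/x as a self-contained stand-in for scalar-only routines like sph_jn (the helper below is hypothetical, not part of scipy):

```python
import math
import numpy as np

def scalar_sph_j0(x):
    # spherical Bessel function j0(x) = sin(x)/x, deliberately scalar-only,
    # standing in for routines that raise "arguments must be scalars"
    if x == 0.0:
        return 1.0
    return math.sin(x) / x

# vectorize wraps the scalar function so it accepts and returns arrays
vec_sph_j0 = np.vectorize(scalar_sph_j0)

x = np.arange(0.0, 1.0, 0.1)
print(vec_sph_j0(x))
```

For fred's triple nested loop this would at least collapse the bookkeeping into one call per field, even if the per-element cost stays Python-level; the f2py route David Huard describes below is the one that buys real speed.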
From saba.ahsan at telematix.com.pk Fri Jul 28 01:05:57 2006 From: saba.ahsan at telematix.com.pk (Saba Ahsan) Date: Fri, 28 Jul 2006 10:05:57 +0500 Subject: [SciPy-user] Error Importing SciPy Windows XP Python2.4 References: <002801c6b09e$cf41b8b0$1c00a8c0@TXPWKS09> <44C78FC7.6080604@gmail.com><005e01c6b135$34a290d0$1c00a8c0@TXPWKS09> <44C8E8B9.8090706@ee.byu.edu> Message-ID: <000701c6b203$837903f0$1f00a8c0@TXPWKS09> Ok, it's working now. Thanks a lot. :) ----- Original Message ----- From: "Travis Oliphant" To: "SciPy Users List" Sent: Thursday, July 27, 2006 9:24 PM Subject: Re: [SciPy-user] Error Importing SciPy Windows XP Python2.4 > Saba Ahsan wrote: > >>My numpy version is 1.0b1. I tried using numpy with matplotlib-0.87.4. as >>well and kept getting this error: >> >> RuntimeError: module compiled against version 90709 of C-API >> but >>this version of >> numpy is 1000000 >> >>I believe matplotlib provides its own numpy as well and I probably have >>two >>versions right now. But how do I solve this problem. >> >> > > No, matplotlib doesn't provide numpy. It's just been compiled against a > very specific version of the C-API. That's why you get this error. You > need to re-compile matplotlib against the new C-API. > > The C-API should not change during the beta-release period so a > re-compile will not be necessary for each beta release.
> > -Travis > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > > > From willemjagter at gmail.com Fri Jul 28 03:01:45 2006 From: willemjagter at gmail.com (William Hunter) Date: Fri, 28 Jul 2006 09:01:45 +0200 Subject: [SciPy-user] Example 1 - Error In-Reply-To: <20060727185242.GA22569@clipper.ens.fr> References: <8b3894bc0607262349q4066cfe0lb509c3b7db5157fe@mail.gmail.com> <20060727185242.GA22569@clipper.ens.fr> Message-ID: <8b3894bc0607280001w1c3c0a69gd6f1b1dc3d4b804@mail.gmail.com> Gaël; About "AttributeError: rowind not found" I got the same in the beginning, and like your colleague I also thought that this scipy business is overrated... You need to fix linsolve.py, which you can do by hand (i.e., cut from the website and paste into your local linsolve.py file). If you go to http://projects.scipy.org/scipy/scipy/timeline and scroll down to Changeset [2117] by rc and click on it, you'll see the part of the file that changed. If you correct your file you won't (shouldn't) get that error. How in the world are you supposed to know this, right? Stéfan pointed this very issue out to me as I was under the impression that one will need to get the whole SVN version of scipy, but it's not necessary. I also got a mail from Robert (rc) explaining that he made the change, but at that stage I was still new to all this, and I didn't really know what was going on or what it meant. I think this is one of the biggest threats to potential Matlab(r) to SciPy converts or people like myself who don't have a programming background. (I'll write another mail with my thoughts on this and how one may address it). I think I should probably add this to the Scipy_Tutorial wiki, although I did mention that you need the latest version of linsolve.py in the tutorial. In all fairness, for new users (like myself!)
one assumes that one has the latest version, but you know what they say about assumptions... I hope this helps. -- Regards, WH On 27/07/06, Gael Varoquaux wrote: > I get another error in the example 1 : > > In [20]: time xsp3 = linsolve.spsolve(Asp.tocsr(),b) > --------------------------------------------------------------------------- > exceptions.AttributeError Traceback (most > recent call last) > > /home/varoquau/ > > /usr/lib/python2.4/site-packages/IPython/iplib.py in ipmagic(self, arg_s) > 857 else: > 858 magic_args = self.var_expand(magic_args) > --> 859 return fn(magic_args) > 860 > 861 def ipalias(self,arg_s): > > /usr/lib/python2.4/site-packages/IPython/Magic.py in magic_time(self, > parameter_s) > 1584 else: > 1585 st = clk() > -> 1586 exec code in glob > 1587 end = clk() > 1588 out = None > > /home/varoquau/ > > /usr/lib/python2.4/site-packages/scipy/linsolve/linsolve.py in > spsolve(A, b, permc_spec) > 66 else: > 67 mat, csc = _toCS_superLU( A ) > ---> 68 ftype, lastel, data, index0, index1 = \ > 69 mat.ftype, mat.nnz, mat.data, mat.rowind, > mat.indptr > 70 gssv = eval('_superlu.' + ftype + 'gssv') > > /usr/lib/python2.4/site-packages/scipy/sparse/sparse.py in > __getattr__(self, attr) > 235 return self.getnnz() > 236 else: > --> 237 raise AttributeError, attr + " not found" > 238 > 239 def transpose(self): > > AttributeError: rowind not found > > Maybe it is because my version of linsolve is not recent enough, but > it fails with enthon 1.0.0 beta4. That not to pretty to see when you are > doing the tutorial to see weather you are going to switch from Matlab to > scipy (as a colleague of mine was doing when he found the error). 
> > -- > Gaël > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user From gael.varoquaux at normalesup.org Fri Jul 28 03:11:58 2006 From: gael.varoquaux at normalesup.org (Gael Varoquaux) Date: Fri, 28 Jul 2006 09:11:58 +0200 Subject: [SciPy-user] Example 1 - Error In-Reply-To: <8b3894bc0607280001w1c3c0a69gd6f1b1dc3d4b804@mail.gmail.com> References: <8b3894bc0607262349q4066cfe0lb509c3b7db5157fe@mail.gmail.com> <20060727185242.GA22569@clipper.ens.fr> <8b3894bc0607280001w1c3c0a69gd6f1b1dc3d4b804@mail.gmail.com> Message-ID: <20060728071158.GA28287@clipper.ens.fr> On Fri, Jul 28, 2006 at 09:01:45AM +0200, William Hunter wrote: > I got the same in the beginning, and like your colleague I also thought that > this scipy business is overrated... Well I think scipy is _great_, I just think a tutorial should "just work". But then, scipy is evolving so quickly. I added a small note about this. If you want to complete it to detail the way to fix this, it might be a good idea. Anyway, this makes the wiki go forward and slowly but surely we will be getting good documentation. -- Gaël From david.huard at gmail.com Fri Jul 28 08:51:39 2006 From: david.huard at gmail.com (David Huard) Date: Fri, 28 Jul 2006 08:51:39 -0400 Subject: [SciPy-user] jn & lpmn or sph_jn... In-Reply-To: <44C931A6.5060608@free.fr> References: <44C931A6.5060608@free.fr> Message-ID: <91cf711d0607280551k295bebccw9f22a9d09944c97f@mail.gmail.com> I hope someone else will find a better answer to your question, but here is something you could try if you are really hungry for speed. Make a copy of the function sphj in scipy/lib/special/specfun/specfun.f and rename it; tweak it so it takes a vector argument, loops over all inputs in the vector, and returns a vector of answers. (I'd like to help, but my knowledge of fortran is pretty limited.)
Make a copy of the sphj function interface in scipy/lib/special/specfun.pyf and rename it; tweak it so it accepts arrays as input and returns an array, something like subroutine sphj_1(n,x,nx, nm, sj,dj) ! in :specfun:specfun.f integer intent(in), check(n>=1) :: n double precision intent(in), dimension(nx), depend(nx) :: x integer intent(in) :: nx integer intent(out) :: nm double precision intent(out),dimension(nx),depend(nx) :: sj double precision intent(out),dimension(nx),depend(nx) :: dj end subroutine sphj_1 Recompile scipy. You could also modify sphj in basic.py to do the loop there, but I doubt you'd gain much in terms of speedup. I had a similar problem once, and after coding the loop inside the f code, I got a speedup of about 6x compared to the python loop. Good luck. David 2006/7/27, fred : > > Hi, > > I like how jn works because I can pass an array as arg (I use jn on 3D arrays). > > This is not the case for sph_jn & lpmn :-( > > I tried to bypass it, with something like > > def foo(m,n): > x = arange(0,Lx,dx) > y = arange(0,Ly,dy) > z = arange(0,Lz,dz) > tab = zeros((len(x), len(y), len(z)), dtype='f') > for i in range(0,len(x)): > for j in range(0,len(y)): > for k in range(0,len(z)): > tab[i,j,k] = cos(theta(x[i],y[j],z[k])) > # tab[i,j,k] = > > Ymn_theta_p(x[i],y[j],z[k],m,n)*sph_jn(n,r(x[i],y[j],z[k]))[n][0]*rho(x[i],y[j],z[k]) > return (tab) > > but knowing my arrays have more than a hundred thousand cells, it is > _very_ slow. > > I guess sph_jn & lpmn are written like this because they return arrays > (0 to n order, derivative, etc). > > I wish to have Legendre & spherical Bessel functions which could accept > arrays as args. > I only need the last order (n) and the derivative (which could be called > by another func) for the same order. > > How could I write this ? > > Or is there another way to do the same thing ? > > Ex. : > > In [1]: from scipy.special import * > > In [2]: x=arange(0,1,0.1) > > In [3]: print jn(1,x) > [ 0.
0.04993753 0.09950083 0.14831882 0.19602658 0.24226846 > 0.28670099 0.32899574 0.36884205 0.40594955] > > In [4]: print sph_jn(1,x) > > --------------------------------------------------------------------------- exceptions.ValueError Traceback (most > recent call last) > > /home/fred/ > > /usr/local/lib/python2.4/site-packages/scipy/special/basic.py in > sph_jn(n, z) > 212 """ > 213 if not (isscalar(n) and isscalar(z)): > --> 214 raise ValueError, "arguments must be scalars." > 215 if (n!= floor(n)) or (n<0): > 216 raise ValueError, "n must be a non-negative integer." > > ValueError: arguments must be scalars. > > > Cheers, > > -- > Fred. > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > -------------- next part -------------- An HTML attachment was scrubbed... URL: From gael.varoquaux at normalesup.org Sat Jul 29 13:19:02 2006 From: gael.varoquaux at normalesup.org (Gael Varoquaux) Date: Sat, 29 Jul 2006 19:19:02 +0200 Subject: [SciPy-user] concatenate, r_ ... Message-ID: <20060729171902.GC26701@clipper.ens.fr> I am trying to complete the tentative numpy tutorial and trying to write the part about matrix reshaping. Doing so I came to wonder what the difference between r_ and concatenate was, and which one was preferred. -- Gaël From schofield at ftw.at Sat Jul 29 17:15:58 2006 From: schofield at ftw.at (Ed Schofield) Date: Sat, 29 Jul 2006 23:15:58 +0200 Subject: [SciPy-user] Athlon problems In-Reply-To: <44C7BDBC.3030101@adelphia.net> References: <44C786E7.50209@ftw.at> <44C7BDBC.3030101@adelphia.net> Message-ID: <20060729211558.GA9828@ftw.at> +++ John Hassler [2006.07.26 15:08:44 -0400]: > This still has the same "Athlon problem." I don't remember if this > build was supposed to address that or not ... probably not. > john Yes, it was supposed to :( Both NumPy 1.0b1 and SciPy 0.5.0 are now built against Pentium2 ATLAS libraries without SSE instructions.
So I'm stumped. I don't have one of these Athlons to use for testing. Could you try building your own ATLAS library and compiling NumPy and SciPy against it? There are instructions at http://www.scipy.org/Installing_SciPy. You could post questions here if you run into any problems. -- Ed From gnchen at cortechs.net Sat Jul 29 21:40:09 2006 From: gnchen at cortechs.net (Gennan Chen) Date: Sat, 29 Jul 2006 18:40:09 -0700 Subject: [SciPy-user] Athlon problems In-Reply-To: <20060729211558.GA9828@ftw.at> References: <44C786E7.50209@ftw.at> <44C7BDBC.3030101@adelphia.net> <20060729211558.GA9828@ftw.at> Message-ID: <37539DEF-928C-4D63-9A1C-2FE84F628680@cortechs.net> I did build them on a dual dual-core Opteron 285 with FC5 i386. Compiling works fine. Numpy passed all tests. But not scipy. However, I did not compile against ATLAS_SSE or MKL yet. Gen On Jul 29, 2006, at 2:15 PM, Ed Schofield wrote: > +++ John Hassler [2006.07.26 15:08:44 -0400]: >> This still has the same "Athlon problem." I don't remember if this >> build was supposed to address that or not ... probably not. >> john > > Yes, it was supposed to :( Both NumPy 1.0b1 and SciPy 0.5.0 are > now built > against Pentium2 ATLAS libraries without SSE instructions. So I'm > stumped. > > I don't have one of these Athlons to use for testing. Could you try > building your own ATLAS library and compiling NumPy and SciPy > against it? > There are instructions at http://www.scipy.org/Installing_SciPy. > You could > post questions here if you run into any problems. > > -- Ed > > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > -------------- next part -------------- An HTML attachment was scrubbed...
URL: From hasslerjc at adelphia.net Sat Jul 29 22:08:58 2006 From: hasslerjc at adelphia.net (John Hassler) Date: Sat, 29 Jul 2006 22:08:58 -0400 Subject: [SciPy-user] Athlon problems In-Reply-To: <37539DEF-928C-4D63-9A1C-2FE84F628680@cortechs.net> References: <44C786E7.50209@ftw.at> <44C7BDBC.3030101@adelphia.net> <20060729211558.GA9828@ftw.at> <37539DEF-928C-4D63-9A1C-2FE84F628680@cortechs.net> Message-ID: <44CC14BA.7000907@adelphia.net> An HTML attachment was scrubbed... URL: From farmerje at uchicago.edu Sat Jul 29 22:35:29 2006 From: farmerje at uchicago.edu (Jesse Farmer) Date: Sat, 29 Jul 2006 19:35:29 -0700 Subject: [SciPy-user] SVD on huge sparse matrices Message-ID: <7100DB9E-D18F-49AE-A09D-D9159F30B704@uchicago.edu> Hello, I searched the archives as best I could and found no conclusive answer to this question. For a piece of software I am working on I'm going to need to calculate the SVD of a huge sparse matrix. The matrix might be somewhere in the neighborhood of 10e6-by-10e6. Obviously it would be absurd to store this as a 2d array, but all the implementations of SVD I've seen require it. In particular, scipy.linalg.svd seems to require it. Is there anything that can be done? Thanks. -- Jesse E.I. Farmer email: farmerje at uchicago.edu AIM: farmerje Phone: 773-319-9355 From wbaxter at gmail.com Sun Jul 30 08:42:23 2006 From: wbaxter at gmail.com (Bill Baxter) Date: Sun, 30 Jul 2006 21:42:23 +0900 Subject: [SciPy-user] concatenate, r_ ... In-Reply-To: <20060729171902.GC26701@clipper.ens.fr> References: <20060729171902.GC26701@clipper.ens.fr> Message-ID: Hi Gael, Did you see the summary (and proposal for changes) I sent to the numpy list last week? On 7/30/06, Gael Varoquaux wrote: > I am trying to complete the tentative numpy tutorial and trying to > write the part about matrix reshaping. > > Doing so I came to wonder what the difference between r_ and > concatenate was, and which one was prefered. 
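To make Gael's question above concrete: for a plain 1-d join, r_ and concatenate produce the same result, but r_ is an indexer object that additionally accepts bare scalars and slice-style ranges mixed into the sequence. A minimal sketch (behaviour as in current NumPy):

```python
import numpy as np

a = np.array([1, 2, 3])
b = np.array([4, 5])

# concatenate joins a sequence of arrays along an existing axis
c1 = np.concatenate((a, b))

# r_ gives the same result for a simple 1-d join...
c2 = np.r_[a, b]

# ...but also lets you interleave scalars and slice-style ranges
c3 = np.r_[0, a, 10:13]

print(c1)  # [1 2 3 4 5]
print(c2)  # [1 2 3 4 5]
print(c3)  # [ 0  1  2  3 10 11 12]
```

So concatenate is the general building block, while r_ is a convenience wrapper that reads more like MATLAB's bracket notation.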
From gael.varoquaux at normalesup.org Sun Jul 30 09:40:50 2006 From: gael.varoquaux at normalesup.org (Gael Varoquaux) Date: Sun, 30 Jul 2006 15:40:50 +0200 Subject: [SciPy-user] concatenate, r_ ... In-Reply-To: References: <20060729171902.GC26701@clipper.ens.fr> Message-ID: <20060730134050.GB29890@clipper.ens.fr> On Sun, Jul 30, 2006 at 09:42:23PM +0900, Bill Baxter wrote: > Did you see the summary (and proposal for changes) I sent to the numpy > list last week? No, I hadn't. I don't read the numpy list. Maybe I should. It did make my ideas clearer as far as the current status. It does raise questions as far as the future status. By the way, you seem puzzled by the behaviour of column_stack. I think it fits well with the docstring. -- Gaël From wbaxter at gmail.com Sun Jul 30 10:31:16 2006 From: wbaxter at gmail.com (Bill Baxter) Date: Sun, 30 Jul 2006 23:31:16 +0900 Subject: [SciPy-user] concatenate, r_ ... In-Reply-To: <20060730134050.GB29890@clipper.ens.fr> References: <20060729171902.GC26701@clipper.ens.fr> <20060730134050.GB29890@clipper.ens.fr> Message-ID: > By the way, you seem puzzled by the behaviour of column_stack. I think > it fits well with the docstring. What was unexpected to me was its behavior when handling inputs that are not 1-d. The docstring doesn't say what will happen in that case. But my expectation is that it should associate. I.e.: column_stack(( a,b,c )) should be the same as: column_stack(( column_stack(( a,b )),c )) But it's not.
column_stack((a,b,c)) is the same as: column_stack(( column_stack(( a,b )).transpose() ,c )) --bb From bart.vandereycken at cs.kuleuven.be Sun Jul 30 12:11:38 2006 From: bart.vandereycken at cs.kuleuven.be (Bart Vandereycken) Date: Sun, 30 Jul 2006 18:11:38 +0200 Subject: [SciPy-user] SVD on huge sparse matrices In-Reply-To: <7100DB9E-D18F-49AE-A09D-D9159F30B704@uchicago.edu> References: <7100DB9E-D18F-49AE-A09D-D9159F30B704@uchicago.edu> Message-ID: Jesse Farmer wrote: > Hello, > > I searched the archives as best I could and found no conclusive > answer to this question. For a piece of software I am working on I'm > going to need to calculate the SVD of a huge sparse matrix. The > matrix might be somewhere in the neighborhood of 10e6-by-10e6. > Obviously it would be absurd to store this as a 2d array, but all the > implementations of SVD I've seen require it. In particular, > scipy.linalg.svd seems to require it. > > Is there anything that can be done? Thanks. I don't think storage is the biggest problem but the computation of the svd itself. Maybe you could look at an iterative solver like PROPACK http://soi.stanford.edu/~rmunk/PROPACK/index.html, it only requires matrix-vector products. Like ARPACK, PROPACK would be a _very nice_ addition to scipy, and like ARPACK, it will not be trivial to wrap it. Bart From gael.varoquaux at normalesup.org Mon Jul 31 06:00:58 2006 From: gael.varoquaux at normalesup.org (Gael Varoquaux) Date: Mon, 31 Jul 2006 12:00:58 +0200 Subject: [SciPy-user] Making notes out of a python script Message-ID: <20060731100058.GB4279@clipper.ens.fr> Hello list, I would like to share with you a script I wrote for my work that I find extremely useful (more than I originally thought it would). Briefly this script allows for litterate programming with python, generating a pdf output mixing pretty-printed code, output from the script, figures, and special comments interpreted as rst.
It is on the wiki, with an example: http://scipy.org/GaelVaroquaux have a look and please comment. -- Ga?l From doug-scipy at sadahome.ca Mon Jul 31 06:11:37 2006 From: doug-scipy at sadahome.ca (Doug Latornell) Date: Mon, 31 Jul 2006 12:11:37 +0200 Subject: [SciPy-user] Making notes out of a python script In-Reply-To: <20060731100058.GB4279@clipper.ens.fr> References: <20060731100058.GB4279@clipper.ens.fr> Message-ID: <6279c0a40607310311i3bd9f0d9lfae6857239843177@mail.gmail.com> This looks really interesting, Gael. I was thinking about literate programming the other night... One thing I noticed immediately is that your ## "literate comment" convention conflicts with the comment-region prefix in emacs (also ##). I think that would cause emacs users who have commented out a block of code to find it in the text of the document produced by your script. Just an observation... I will try to experiment with your script soon. Doug On 7/31/06, Gael Varoquaux wrote: > > Hello list, > > I would like to share with you a script I wrote for my work that I find > extremely useful (more than I originaly thought it would). > > Briefly this script allows for litterate programming with python, > generating a pdf output mixing pretty-printed code, ouput from the > script, figures, and special comments interpreted as rst. > > It is on the wiki, with an example: http://scipy.org/GaelVaroquaux have > a look and please comment. > > -- > Ga?l > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From cimrman3 at ntc.zcu.cz Mon Jul 31 06:33:41 2006 From: cimrman3 at ntc.zcu.cz (Robert Cimrman) Date: Mon, 31 Jul 2006 12:33:41 +0200 Subject: [SciPy-user] Making notes out of a python script In-Reply-To: <20060731100058.GB4279@clipper.ens.fr> References: <20060731100058.GB4279@clipper.ens.fr> Message-ID: <44CDDC85.6040000@ntc.zcu.cz> Gael Varoquaux wrote: > Hello list, > > I would like to share with you a script I wrote for my work that I find > extremely useful (more than I originally thought it would). > > Briefly this script allows for litterate programming with python, > generating a pdf output mixing pretty-printed code, output from the > script, figures, and special comments interpreted as rst. > > It is on the wiki, with an example: http://scipy.org/GaelVaroquaux have > a look and please comment. > Hi Gael, it looks great! On the wiki, you write about looking for some better name - what about pylitterate? BTW Doug's comment on emacs using '##' as region comment has just bitten me - your script tried to pdflatex the comment and there was a function name with '_'... r. From gael.varoquaux at normalesup.org Mon Jul 31 07:48:48 2006 From: gael.varoquaux at normalesup.org (Gael Varoquaux) Date: Mon, 31 Jul 2006 13:48:48 +0200 Subject: [SciPy-user] Making notes out of a python script In-Reply-To: <44CDDC85.6040000@ntc.zcu.cz> References: <20060731100058.GB4279@clipper.ens.fr> <44CDDC85.6040000@ntc.zcu.cz> Message-ID: <20060731114848.GA23013@clipper.ens.fr> On Mon, Jul 31, 2006 at 12:33:41PM +0200, Robert Cimrman wrote: > BTW Doug's comment on emacs using '##' as region comment has just bitten > me - your script tried to pdflatex the comment and there was a function > name with '_'... Yes, indeed, this is a problem. I am open for suggestions to replace this. "#r " seems not too good an idea: one could have code like "r = 1" that gets commented and ends up in a litterate comment. Does "#% " seem unlikely enough ? Other ideas ? By the way, I found a bug !
If you have a string like this for instance: foo = """ ## This is not a litterate comment """ The line beginning with "## " will be interpreted as a litterate comment. I really shouldn't be doing the splitting of the input code with a home-made function. Does anybody know how to use the python compiler to do this ? I like the name pylitterate. What do others think about it ? -- Gaël From david.huard at gmail.com Mon Jul 31 08:19:30 2006 From: david.huard at gmail.com (David Huard) Date: Mon, 31 Jul 2006 08:19:30 -0400 Subject: [SciPy-user] Making notes out of a python script In-Reply-To: <20060731114848.GA23013@clipper.ens.fr> References: <20060731100058.GB4279@clipper.ens.fr> <44CDDC85.6040000@ntc.zcu.cz> <20060731114848.GA23013@clipper.ens.fr> Message-ID: <91cf711d0607310519v67df5926s6ed41f20c17317f1@mail.gmail.com> > > I like the name pylitterate. What do others think about it ? +1 Nice. I'm impressed. This will make nice-looking tutorials. Kudos. David -------------- next part -------------- An HTML attachment was scrubbed... URL: From dd55 at cornell.edu Mon Jul 31 08:40:53 2006 From: dd55 at cornell.edu (Darren Dale) Date: Mon, 31 Jul 2006 08:40:53 -0400 Subject: [SciPy-user] Making notes out of a python script In-Reply-To: <20060731114848.GA23013@clipper.ens.fr> References: <20060731100058.GB4279@clipper.ens.fr> <44CDDC85.6040000@ntc.zcu.cz> <20060731114848.GA23013@clipper.ens.fr> Message-ID: <200607310840.53101.dd55@cornell.edu> On Monday 31 July 2006 07:48, Gael Varoquaux wrote: > I like the name pylitterate. What do others think about it ? I think pyliterate might be a better choice.
Darren From nwagner at iam.uni-stuttgart.de Mon Jul 31 09:12:10 2006 From: nwagner at iam.uni-stuttgart.de (Nils Wagner) Date: Mon, 31 Jul 2006 15:12:10 +0200 Subject: [SciPy-user] machine epsilon Message-ID: <44CE01AA.4050503@iam.uni-stuttgart.de> Hi all, I found a function to compute machine epsilon http://svn.scipy.org/svn/scipy/trunk/Lib/sandbox/pysparse/tests/test_superlu.py but is there a built-in function which one can use ? Nils From evert.rol at gmail.com Mon Jul 31 09:23:58 2006 From: evert.rol at gmail.com (Evert Rol) Date: Mon, 31 Jul 2006 14:23:58 +0100 Subject: [SciPy-user] Making notes out of a python script In-Reply-To: <20060731114848.GA23013@clipper.ens.fr> References: <20060731100058.GB4279@clipper.ens.fr> <44CDDC85.6040000@ntc.zcu.cz> <20060731114848.GA23013@clipper.ens.fr> Message-ID: <68c840280607310623m46ec13b0sbd887d6734ef08b5@mail.gmail.com> > > > BTW Doug's comment on emacs using '##' as region comment has just bitten > > me - your script tried to pdflatex the comment and there was a function > > name with '_'... > > Yes, indeed, this is a problem. I am open for suggestions to replace this. > "#r " seems not too good an idea: one could have code like "r = 1" that > gets commented and ends up in a litterate comment. Does "#% " seem > unlikely enough ? Other ideas ? Given the syntactic meaning of these 'comments', isn't a doc-string started with a special symbol better? Eg, """! Bifurcation diagram of a mapping ============================================================================ We are interested in the long term behavior of a sequence created by the iteration of a map. The logistic map ------------------- """ With a small chance of mixing up proper doc-strings with these ones, when one uses pydoc or help(). Otherwise, #% seems ok (shell/Python comment + LaTeX comment). I don't think there's any Python command (line) starting with '%', so these can easily be distinguished from real comments. I like the name pylitterate.
What do others think about it ? pyliterate then, as Darren suggested. Unless I missed the 'litter' joke ;-) Evert -------------- next part -------------- An HTML attachment was scrubbed... URL: From cimrman3 at ntc.zcu.cz Mon Jul 31 09:35:06 2006 From: cimrman3 at ntc.zcu.cz (Robert Cimrman) Date: Mon, 31 Jul 2006 15:35:06 +0200 Subject: [SciPy-user] Making notes out of a python script In-Reply-To: <68c840280607310623m46ec13b0sbd887d6734ef08b5@mail.gmail.com> References: <20060731100058.GB4279@clipper.ens.fr> <44CDDC85.6040000@ntc.zcu.cz> <20060731114848.GA23013@clipper.ens.fr> <68c840280607310623m46ec13b0sbd887d6734ef08b5@mail.gmail.com> Message-ID: <44CE070A.2040702@ntc.zcu.cz> Evert Rol wrote: >> I like the name pylitterate. What do others think about it ? > pyliterate then, as Darren suggested. Unless I missed the 'litter' joke ;-) and now that speaking (writing?) French does not 'litter' one's English! :-) r. From a.u.r.e.l.i.a.n at gmx.net Mon Jul 31 09:40:30 2006 From: a.u.r.e.l.i.a.n at gmx.net (Johannes Loehnert) Date: Mon, 31 Jul 2006 15:40:30 +0200 Subject: [SciPy-user] Making notes out of a python script In-Reply-To: <20060731114848.GA23013@clipper.ens.fr> References: <20060731100058.GB4279@clipper.ens.fr> <44CDDC85.6040000@ntc.zcu.cz> <20060731114848.GA23013@clipper.ens.fr> Message-ID: <200607311540.30599.a.u.r.e.l.i.a.n@gmx.net> Hi, > Yes, indeed, this is a problem. I am open for suggestions to replace this. > "#r " seems not too good an idea: one could have code like "r = 1" that > gets commented and ends up in a litterate comment. Does "#% " seem > unlikely enough ? Other ideas ? How about "#|"? This looks quite nice for multiline comments, like a vertical bar at the left. > By the way, I found a bug ! If you have a string like this for > instance: > foo = """ > ## This is not a litterate comment > """ > The line beginning with "## " will be interpreted as a litterate > comment.
I really shouldn't be doing the splitting of the input code > with home made function. Anybody knows how to use the python compiler to > do this ? A multiline string at the top of the file leads to lots of errors, too. Maybe this should be treated as ReST as well. > I like the name pylitterate. What do others think about it ? hm, from "pylitterate" I could not tell what the script does. imo a "speaking" name would be better. But the others seem to like it.. Johannes From fullung at gmail.com Mon Jul 31 09:41:30 2006 From: fullung at gmail.com (Albert Strasheim) Date: Mon, 31 Jul 2006 15:41:30 +0200 Subject: [SciPy-user] machine epsilon In-Reply-To: <44CE01AA.4050503@iam.uni-stuttgart.de> Message-ID: <01ac01c6b4a7$07a3a0e0$0100a8c0@dsp.sun.ac.za> Hey Nils > -----Original Message----- > From: scipy-user-bounces at scipy.org [mailto:scipy-user-bounces at scipy.org] > On Behalf Of Nils Wagner > Sent: 31 July 2006 15:12 > To: SciPy Users List > Subject: [SciPy-user] machine epsilon > > Hi all, > > I found a function to compute machine epsilon > > http://svn.scipy.org/svn/scipy/trunk/Lib/sandbox/pysparse/tests/test_super > lu.py > > but is there a built-in function which one can use ? > > Nils I think you might be looking for NumPy's finfo function: In [38]: N.finfo(N.double) Out[38]: In [37]: dir(N.finfo(N.double)) Out[37]: [... 'dtype', 'eps', 'epsneg', 'iexp', 'machar', 'machep', 'max', 'maxexp', 'min', 'minexp', 'negep', 'nexp', 'nmant', 'precision', 'resolution', 'tiny'] In [39]: N.finfo(N.double).eps Out[39]: array(2.2204460492503131e-016) etc. 
Regards, Albert From willemjagter at gmail.com Mon Jul 31 09:52:21 2006 From: willemjagter at gmail.com (William Hunter) Date: Mon, 31 Jul 2006 15:52:21 +0200 Subject: [SciPy-user] Making notes out of a python script In-Reply-To: <200607310840.53101.dd55@cornell.edu> References: <20060731100058.GB4279@clipper.ens.fr> <44CDDC85.6040000@ntc.zcu.cz> <20060731114848.GA23013@clipper.ens.fr> <200607310840.53101.dd55@cornell.edu> Message-ID: <8b3894bc0607310652k61406192k4a3e0bbc756b3715@mail.gmail.com> I concur. William On 31/07/06, Darren Dale wrote: > On Monday 31 July 2006 07:48, Gael Varoquaux wrote: > > I like the name pylitterate. What do others think about it ? > > I think pyliterate might be a better choice. > > Darren > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user From yann.ledu at noos.fr Mon Jul 31 10:32:48 2006 From: yann.ledu at noos.fr (Yann Le Du) Date: Mon, 31 Jul 2006 16:32:48 +0200 (CEST) Subject: [SciPy-user] Making notes out of a python script Message-ID: Hi, This program is great ! Now, you can't yet insert LaTeX commands in the comments, can you ? In the TODO list you mention LaTeX in the code, does that mean you plan to handle LaTeX both in ## comments and in the # comments inside the python code itself ? I mean could this example code work one day with your PyLiterate : ======================== ## We now compute \sigma sigma = alpha * 2 # Remember that \alpha is computed before ======================== and produce nice pdf output with \sigma and \alpha properly printed out ?
Cheers, -- Yann Le Du http://yledu.free.fr From gael.varoquaux at normalesup.org Mon Jul 31 11:25:21 2006 From: gael.varoquaux at normalesup.org (Gael Varoquaux) Date: Mon, 31 Jul 2006 17:25:21 +0200 Subject: [SciPy-user] Making notes out of a python script In-Reply-To: References: Message-ID: <20060731152520.GD7083@clipper.ens.fr> On Mon, Jul 31, 2006 at 04:32:48PM +0200, Yann Le Du wrote: > This program is great ! Now, you can't yet insert LaTeX commands in the > comments, can you ? No, > In the TODO list you mention LaTeX in the code, does > that mean you plan to handle LaTeX both in ## comments and in the # > comments inside the python code itself ? Only in the literate comments (currently ##) > I mean could this example code work one day with your PyLiterate : > ======================== > ## We now compute \sigma > sigma = alpha * 2 # Remember that \alpha is computed before > ======================== > and produce nice pdf output with \sigma and \alpha properly printed out ? I was more thinking of having something like #%ltx We now compute $\sigma$ But we could make it an option (off by default) to have $foo$ in comments be passed to LaTeX in math mode. I would however like to stress that in the long run this script could and should be usable with different output, and it would be nice if it could run only with a python install (say on enthon, under windows). I would love to have it use reportlabs for an output engine, and it turns out that a reportlabs backend for rst is being developed. But your ideas are interesting and I will have a look at implementing them. I am very glad of all this positive feedback. It will definitely encourage me to work more on this script and add features.
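As an aside on the triple-quoted-string bug discussed earlier in the thread: Python's standard tokenize module can pick out real comments without being fooled by "## " inside a string literal, since the tokenizer knows where strings begin and end. A minimal sketch (the function name is hypothetical, not taken from the script):

```python
import io
import tokenize

def literate_comments(source):
    # Walk the token stream; '## ' inside a string literal is part of
    # a STRING token, so only genuine COMMENT tokens are collected.
    comments = []
    tokens = tokenize.generate_tokens(io.StringIO(source).readline)
    for tok in tokens:
        if tok.type == tokenize.COMMENT and tok.string.startswith("## "):
            comments.append(tok.string[3:])
    return comments
```

This avoids hand-rolled line splitting entirely: a `## ` line inside a `"""..."""` block is never reported as a literate comment.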
-- Gaël From robert.kern at gmail.com Mon Jul 31 14:04:44 2006 From: robert.kern at gmail.com (Robert Kern) Date: Mon, 31 Jul 2006 13:04:44 -0500 Subject: [SciPy-user] odr In-Reply-To: <44C6BCB6.9060405@hoc.net> References: <44C6BCB6.9060405@hoc.net> Message-ID: <44CE463C.9000302@gmail.com> Christian Kristukat wrote: > Hi, > I'd like to vote for moving the odr module from the sandbox to the optimize > package. I've been testing it the last two weeks and I'm really pleased. The > only thing to do is to update odrpack.py to current numpy, which is done by the > patch I sent last week. > What do you think? Every time it's come up in the past, we've kind of hemmed and hawed about it not being an f2py wrapper. I think it's relatively clear by now that I'm never going to get around to doing that. If no one objects, I will clean it up to be placed as scipy.odr during the sprints at SciPy'06. -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From gael.varoquaux at normalesup.org Mon Jul 31 14:40:17 2006 From: gael.varoquaux at normalesup.org (Gael Varoquaux) Date: Mon, 31 Jul 2006 20:40:17 +0200 Subject: [SciPy-user] Making notes out of a python script In-Reply-To: <200607311540.30599.a.u.r.e.l.i.a.n@gmx.net> References: <20060731100058.GB4279@clipper.ens.fr> <44CDDC85.6040000@ntc.zcu.cz> <20060731114848.GA23013@clipper.ens.fr> <200607311540.30599.a.u.r.e.l.i.a.n@gmx.net> Message-ID: <20060731184013.GE7083@clipper.ens.fr> On Mon, Jul 31, 2006 at 03:40:30PM +0200, Johannes Loehnert wrote: > How about "#|"? This looks quite nice for multiline comments, like a vertical > bar at the left. The average computer user does not know where the "|" is on a keyboard. I am currently hesitating between "#%" and "#!", probably leaning on the "#!" side. What would everybody think about this? > > I like the name pylitterate.
What do others think about it ? > hm, from "pylitterate" I could not tell what the script does. imo a > "speaking" name would be better. But the others seem to like it.. All right, I see your point: most people do not know what literate programming is. Then how about "pyreport" ? -- Gaël From grante at visi.com Mon Jul 31 14:46:22 2006 From: grante at visi.com (Grant Edwards) Date: Mon, 31 Jul 2006 18:46:22 +0000 (UTC) Subject: [SciPy-user] Making notes out of a python script References: <20060731100058.GB4279@clipper.ens.fr> <44CDDC85.6040000@ntc.zcu.cz> <20060731114848.GA23013@clipper.ens.fr> <200607311540.30599.a.u.r.e.l.i.a.n@gmx.net> <20060731184013.GE7083@clipper.ens.fr> Message-ID: On 2006-07-31, Gael Varoquaux wrote: > On Mon, Jul 31, 2006 at 03:40:30PM +0200, Johannes Loehnert wrote: >> How about "#|"? This looks quite nice for multiline comments, like a vertical >> bar at the left. > > The average computer user does not know where the "|" is on a > keyboard. The average computer user doesn't write programs and has never even heard of Python. Therefore we should abandon both Python in particular and programming in general. That would render your project rather moot. ;) > I am currently hesitating between "#%" and "#!", probably > leaning on the "#!" side. What would everybody think about this. #! is the Unix magic number for interpreted scripts and will be found as the first two bytes in many Python programs. As long as your system isn't confused by this overloading, it's fine. -- Grant Edwards   grante at visi.com   Yow! Now, let's SEND OUT for QUICHE!! From gael.varoquaux at normalesup.org Mon Jul 31 15:18:14 2006 From: gael.varoquaux at normalesup.org (Gael Varoquaux) Date: Mon, 31 Jul 2006 21:18:14 +0200 Subject: [SciPy-user] Making notes out of a python script In-Reply-To: References: <20060731100058.GB4279@clipper.ens.fr> <44CDDC85.6040000@ntc.zcu.cz> <20060731114848.GA23013@clipper.ens.fr> <200607311540.30599.a.u.r.e.l.i.a.n@gmx.net> <20060731184013.GE7083@clipper.ens.fr> Message-ID: <20060731191813.GG7083@clipper.ens.fr> On Mon, Jul 31, 2006 at 06:46:22PM +0000, Grant Edwards wrote: > The average computer user doesn't write programs and has never even > heard of Python. Therefore we should abandon both Python in > particular and programming in general. That would render your > project rather moot. ;) OK, my ultimate target is my average colleague, who I think should start being interested in python (and actually some are !). > > I am currently hesitating between "#%" and "#!", probably > leaning on the "#!" side. What would everybody think about this. > #! is the Unix magic number for interpreted scripts and will be > found as the first two bytes in many Python programs. As long > as your system isn't confused by this overloading, it's fine. Good point ! I will probably exclude the first line if it begins with "#!/" if I go for this.
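That check is simple enough to sketch; a minimal version (the function name is hypothetical, not from the script itself) might be:

```python
def strip_shebang(lines):
    # A "#!/usr/bin/env python"-style first line is an interpreter
    # directive, not a "#!" literate comment, so drop it up front.
    if lines and lines[0].startswith("#!/"):
        return lines[1:]
    return lines
```

Since real `#!` literate comments would be followed by a space rather than a "/", only the Unix shebang line is excluded.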
-- Gaël From davidgrant at gmail.com Mon Jul 31 15:27:17 2006 From: davidgrant at gmail.com (David Grant) Date: Mon, 31 Jul 2006 12:27:17 -0700 Subject: [SciPy-user] [Numpy-discussion] numpy vs numarray In-Reply-To: <44CE4604.8070402@ieee.org> References: <44CE3EF5.9030508@ieee.org> <44CE4312.8030803@noaa.gov> <44CE4604.8070402@ieee.org> Message-ID: On 7/31/06, Tim Hochberg wrote: > > David L Goldsmith wrote: > > All I can say is, if someone that confused about basic facts is being > > cited as an authority and teaching a podcast class, I'm glad I have > > someone on-site at my work who actually knows what they're talking about > > and not relying on the Net for my numpy education. > > > The numpy == Numeric confusion is understandable. Numeric Python (AKA > Numeric) was typically referred to as NumPy even though the name of the > module was actually Numeric. That's what I thought too... So, in a sense numarray was a replacement > for (the old) NumPy, and numpy is a replacement for both the old NumPy > and numarray. Yes, maybe that's what he meant. Anyways, I think anyone can be forgiven for getting a bit confused over the names. I for one, got confused many times over which package was the newer one out of numeric and numarray. I always used numeric so I didn't really care that much about the difference. I do think that if you put your lectures on the web though, you do have some greater responsibility to get your facts right, or at least maintain an errata page or some such thing. At least I sure hope so! > > -- David Grant http://www.davidgrant.ca -------------- next part -------------- An HTML attachment was scrubbed...
URL: From davidgrant at gmail.com Mon Jul 31 18:38:06 2006 From: davidgrant at gmail.com (David Grant) Date: Mon, 31 Jul 2006 15:38:06 -0700 Subject: [SciPy-user] sets in python and/or numpy Message-ID: I find myself needing the set operations provided by python 2.4 such as intersection, difference, or even just the advantages of the data structure itself, like the fact that I can try adding something to it and if it's already there, it won't get added again. Will my decision to use the python 'set' data structure come back to haunt me later by being too slow? Is there anything equivalent in scipy or numpy that I can use? I find myself going between numpy arrays and sets a lot because I sometimes need to treat it like an array to use some of the array functions. Sorry for cross-posting to scipy and numpy... is that a bad idea? -- David Grant http://www.davidgrant.ca -------------- next part -------------- An HTML attachment was scrubbed... URL: From d.ramanah at uq.edu.au Mon Jul 31 19:50:08 2006 From: d.ramanah at uq.edu.au (Dwishen Ramanah) Date: Tue, 1 Aug 2006 09:50:08 +1000 Subject: [SciPy-user] sparse solver for large non-square A matrix Message-ID: Dear list, I have to solve the system of equations, Ax = b . A is large and non-square. In matlab one will do the following : A= spalloc(neqns,npts,nzeros); .. B= zeros(neqns,1) .. x= full(a \ b) Is there a similar way to solve this using scipy/numpy? I have worked through the recent scipy.sparse tutorial. a = csc_matrix((neqns, npts)) .. b = zeros((neqns,1),dtype=float) .. phi = linalg.solve(a,b) error : ValueError: expected square matrix Can this library be used for a non-square matrix? Regards Dwishen
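For a non-square A, MATLAB's backslash computes a least-squares solution, which solve() cannot do; one hedged sketch is to densify the sparse matrix and call numpy's lstsq (only reasonable for moderately sized systems, since it gives up the sparsity):

```python
import numpy as np
from scipy.sparse import csc_matrix

# A small over-determined (non-square) system A x = b.
A = csc_matrix(np.array([[1., 0.],
                         [1., 1.],
                         [0., 2.]]))
b = np.array([1., 2., 2.])

# linalg.solve() requires a square matrix; the analogue of
# MATLAB's  a \ b  for a rectangular A is least squares.
x, residues, rank, sv = np.linalg.lstsq(A.toarray(), b, rcond=None)
```

For a genuinely large sparse system this densification is wasteful; an iterative least-squares method that works directly on the sparse matrix-vector product would be the sparse counterpart, but converting to dense is the simplest route with the tools shown in this thread.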