From stefan at sun.ac.za Sun Mar 1 05:05:24 2009 From: stefan at sun.ac.za (=?ISO-8859-1?Q?St=E9fan_van_der_Walt?=) Date: Sun, 1 Mar 2009 12:05:24 +0200 Subject: [SciPy-user] Numpy/Scipy rfft transformations do not match? In-Reply-To: <49AA14D4.1040100@caltech.edu> References: <49AA14D4.1040100@caltech.edu> Message-ID: <9457e7c80903010205w3bf76126ub5a44bed5058ceb0@mail.gmail.com> Hi Michael 2009/3/1 Michael Cohen : > I have been trying to switch from numpy.fft.rfft calls to > scipy.fftpack.rfft calls in order to make use of fftw3, but I find that > the array sizes are different. NumPy's rfft returns a complex result, whereas scipy unpacks the real and imaginary components. I wonder if this makes much sense, and whether we shouldn't modify SciPy to yield the same output as NumPy? Cheers Stéfan From mcohen at caltech.edu Sun Mar 1 12:30:39 2009 From: mcohen at caltech.edu (Michael Cohen) Date: Sun, 01 Mar 2009 09:30:39 -0800 Subject: [SciPy-user] Numpy/Scipy rfft transformations do not match? In-Reply-To: <9457e7c80903010205w3bf76126ub5a44bed5058ceb0@mail.gmail.com> References: <49AA14D4.1040100@caltech.edu> <9457e7c80903010205w3bf76126ub5a44bed5058ceb0@mail.gmail.com> Message-ID: <49AAC63F.9080703@caltech.edu> Stéfan van der Walt wrote: > NumPy's rfft returns a complex result, whereas scipy unpacks the real > and imaginary components. I wonder if this makes much sense, and > whether we shouldn't modify SciPy to yield the same output as NumPy? Ah, that makes sense. In order to switch to the scipy version, the speed improvement has to also take into account the fact that I have to recast the result into the numpy form then. Right now, my scipy takes twice as long as it is. How do I make sure it's using fftw3, and is there a way to make numpy use this library? Cheers, Michael From stefan at sun.ac.za Sun Mar 1 12:47:45 2009 From: stefan at sun.ac.za (=?ISO-8859-1?Q?St=E9fan_van_der_Walt?=) Date: Sun, 1 Mar 2009 19:47:45 +0200 Subject: [SciPy-user] Numpy/Scipy rfft transformations do not match? In-Reply-To: <49AAC63F.9080703@caltech.edu> References: <49AA14D4.1040100@caltech.edu> <9457e7c80903010205w3bf76126ub5a44bed5058ceb0@mail.gmail.com> <49AAC63F.9080703@caltech.edu> Message-ID: <9457e7c80903010947x4f454db4hc7bf07cfaa960599@mail.gmail.com> 2009/3/1 Michael Cohen : > Right now, my scipy takes twice as long as it is. How do I make sure > it's using fftw3, and is there a way to make numpy use this library? SciPy no longer builds against fftw, but uses fftpack. David, is it possible to still use fftw at all? Cheers Stéfan From cournape at gmail.com Sun Mar 1 12:48:56 2009 From: cournape at gmail.com (David Cournapeau) Date: Mon, 2 Mar 2009 02:48:56 +0900 Subject: [SciPy-user] Numpy/Scipy rfft transformations do not match? In-Reply-To: <9457e7c80903010947x4f454db4hc7bf07cfaa960599@mail.gmail.com> References: <49AA14D4.1040100@caltech.edu> <9457e7c80903010205w3bf76126ub5a44bed5058ceb0@mail.gmail.com> <49AAC63F.9080703@caltech.edu> Message-ID: <5b8d13220903010948w5cd5a798y735eb1ca6c908702@mail.gmail.com> On Mon, Mar 2, 2009 at 2:47 AM, Stéfan van der Walt wrote: > > David, is it possible to still use fftw at all? No, not with scipy 0.7 and later. David From mcohen at caltech.edu Sun Mar 1 13:08:35 2009 From: mcohen at caltech.edu (Michael Cohen) Date: Sun, 01 Mar 2009 10:08:35 -0800 Subject: [SciPy-user] Numpy/Scipy rfft transformations do not match?
In-Reply-To: <5b8d13220903010948w5cd5a798y735eb1ca6c908702@mail.gmail.com> References: <49AA14D4.1040100@caltech.edu> <9457e7c80903010205w3bf76126ub5a44bed5058ceb0@mail.gmail.com> <49AAC63F.9080703@caltech.edu> <9457e7c80903010947x4f454db4hc7bf07cfaa960599@mail.gmail.com> <5b8d13220903010948w5cd5a798y735eb1ca6c908702@mail.gmail.com> Message-ID: <49AACF23.6030102@caltech.edu> David Cournapeau wrote: > On Mon, Mar 2, 2009 at 2:47 AM, St?fan van der Walt wrote: > >> David, is it possible to still use fftw at all? > > No, not with scipy 0.7 and later. I'm working with scipy 0.6 right now. Partly because of the hassle of shifting and partly because I don't administer the cluster and they are too busy to shift. That's partly why I want to check whether their implementation of scipy is using fftw or not. Michael From cournape at gmail.com Sun Mar 1 13:32:11 2009 From: cournape at gmail.com (David Cournapeau) Date: Mon, 2 Mar 2009 03:32:11 +0900 Subject: [SciPy-user] Numpy/Scipy rfft transformations do not match? In-Reply-To: <49AACF23.6030102@caltech.edu> References: <49AA14D4.1040100@caltech.edu> <9457e7c80903010205w3bf76126ub5a44bed5058ceb0@mail.gmail.com> <49AAC63F.9080703@caltech.edu> <9457e7c80903010947x4f454db4hc7bf07cfaa960599@mail.gmail.com> <5b8d13220903010948w5cd5a798y735eb1ca6c908702@mail.gmail.com> <49AACF23.6030102@caltech.edu> Message-ID: <5b8d13220903011032q7b9c55au391daf7560cef916@mail.gmail.com> On Mon, Mar 2, 2009 at 3:08 AM, Michael Cohen wrote: > David Cournapeau wrote: >> On Mon, Mar 2, 2009 at 2:47 AM, St?fan van der Walt wrote: >> >>> David, is it possible to still use fftw at all? >> >> No, not with scipy 0.7 and later. > > I'm working with scipy 0.6 right now. > Partly because of the hassle of shifting and partly because I don't > administer the cluster and they are too busy to shift. ?That's partly > why I want to check whether their implementation of scipy is using fftw > or not. import scipy scipy.show_config() should give you that information, David From cycomanic at gmail.com Sun Mar 1 15:29:10 2009 From: cycomanic at gmail.com (Jochen Schroeder) Date: Mon, 2 Mar 2009 09:29:10 +1300 Subject: [SciPy-user] Numpy/Scipy rfft transformations do not match? In-Reply-To: <49AAC63F.9080703@caltech.edu> References: <49AA14D4.1040100@caltech.edu> <9457e7c80903010205w3bf76126ub5a44bed5058ceb0@mail.gmail.com> <49AAC63F.9080703@caltech.edu> Message-ID: <20090301202909.GA4219@jochen.schroeder.phy.auckland.ac.nz> On 01/03/09 09:30, Michael Cohen wrote: > St?fan van der Walt wrote: > > NumPy's rfft returns a complex result, whether scipy unpacks the real > > and imaginary components. I wonder if this makes much sense, and > > whether we shouldn't modify SciPy to yield the same output as NumPy? > > Ah, that makes sense. > In order to switch to the scipy version, the speed improvement has to > also take into account the fact that I have to recast the result into > the numpy form then. > Right now, my scipy takes twice as long as it is. How do I make sure > it's using fftw3, and is there a way to make numpy use this library? Hi Michael, the fftw3 implementation was never that fast on scipy and is not used in 0.7 anymore. I've written a wrapper around fftw3 using ctypes. The interface follows the C-implementation of fftw3 so it's less "pythonic" (i.e. you execute plans instead of doing y=fft(x)) however it's quite a bit faster than both numpy and scipy on my machines. 
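A minimal sketch of the packing difference discussed above (the shapes follow the NumPy and scipy.fftpack conventions; the repack helper below is illustrative only and assumes an even-length real signal):

import numpy as np
from scipy import fftpack

x = np.random.rand(8)            # even-length real signal
a = np.fft.rfft(x)               # complex, n//2 + 1 values
b = fftpack.rfft(x)              # real, n values: [y0, Re y1, Im y1, ..., Re y(n/2)]

def repack(packed):
    # rebuild numpy-style complex output from the fftpack packing
    # (sketch only: assumes an even-length input signal)
    out = np.empty(len(packed) // 2 + 1, dtype=complex)
    out[0] = packed[0]                                  # DC term, purely real
    out[1:-1] = packed[1:-1:2] + 1j * packed[2:-1:2]    # interleaved re/im pairs
    out[-1] = packed[-1]                                # Nyquist term, purely real
    return out

print(np.allclose(repack(b), a))   # True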
You can find it here: http://developer.berlios.de/projects/pyfftw/ Cheers Jochen From cimrman3 at ntc.zcu.cz Mon Mar 2 09:14:00 2009 From: cimrman3 at ntc.zcu.cz (Robert Cimrman) Date: Mon, 02 Mar 2009 15:14:00 +0100 Subject: [SciPy-user] ANN: SfePy 2009.1 Message-ID: <49ABE9A8.2040002@ntc.zcu.cz> I am pleased to announce the release of SfePy 2009.1. SfePy (simple finite elements in Python) is a finite element analysis software based primarily on Numpy and SciPy. Mailing lists, issue tracking, git repository: http://sfepy.org Home page: http://sfepy.kme.zcu.cz Major improvements: - new solvers: - simple backtracking steepest descent optimization solver - PETSc Krylov solvers via petsc4py, sequential mode - LOBPCG eigenvalue solver (SciPy implementation) - new mesh readers: - mesh3d (hermes3d) - AVS UCD ascii mesh - Hypermesh ascii mesh - homogenization framework: - unified approach to resolve data dependencies: HomogenizationEngine class - switched DVCS from mercurial to git Applications: - phononic materials: - dispersion analysis, phase velocity computation for phononic materials - caching of coefficients to speed up parametric runs - schroedinger.py: - fixed DFT iterations, iteration plot saving - basic smearing around Fermi limit For more information on this release, see http://sfepy.googlecode.com/svn/web/releases/2009.1_RELEASE_NOTES.txt Best regards, Robert Cimrman From lists_ravi at lavabit.com Mon Mar 2 10:41:01 2009 From: lists_ravi at lavabit.com (Ravi) Date: Mon, 2 Mar 2009 10:41:01 -0500 Subject: [SciPy-user] reverse an array In-Reply-To: References: Message-ID: <200903021041.01196.lists_ravi@lavabit.com> On Friday 27 February 2009 22:02:30 Williamson, Ross wrote: > Is there an easy way to reverse an array without converting it to a list? Two ways: x = array([0,1,2,3,4]) y1 = flipud( x ) # See also fliplr y2 = x[ ::-1 ] Regards, Ravi From Ross.Williamson at usap.gov Mon Mar 2 15:23:21 2009 From: Ross.Williamson at usap.gov (Williamson, Ross) Date: Tue, 3 Mar 2009 09:23:21 +1300 Subject: [SciPy-user] bounds with optimize.leastsq Message-ID: Hi everyone Is there a way to set bounds on variables when using optimize.leastsq. I've found optimize.fmin_l_bfgs_b but that unfortunately does not work when approx_grad=True. I get an error: /home/rw247/lib/python/scipy/optimize/optimize.pyc in approx_fprime(xk, f, epsilon, *args) 615 for k in range(len(xk)): 616 ei[k] = epsilon --> 617 grad[k] = (f(*((xk+ei,)+args)) - f0)/epsilon 618 ei[k] = 0.0 619 return grad ValueError: setting an array element with a sequence. Is there another function I should be looking at? Cheers Ross From dwf at cs.toronto.edu Mon Mar 2 18:04:23 2009 From: dwf at cs.toronto.edu (David Warde-Farley) Date: Mon, 2 Mar 2009 18:04:23 -0500 Subject: [SciPy-user] LiberMate was Re: Automating Matlab In-Reply-To: <49A67E0C.4010002@ru.nl> References: <4984F58C.5070605@gmail.com> <49A02E1A.5050703@gmail.com> <49A67E0C.4010002@ru.nl> Message-ID: On 26-Feb-09, at 6:33 AM, Stef Mientki wrote: > * graphs are nicer, but axis are uglier It looks like the problem is mostly font size and perhaps margins. There's not a lot that can be done about this, since it will depend on your window size as well. You could try some heuristic to decrease the font size for subplots, I suppose. 
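Concerning the earlier question about bounds with optimize.leastsq: fmin_l_bfgs_b expects a scalar objective rather than a vector of residuals, which is what triggers the "setting an array element with a sequence" error when approx_grad=True. A minimal sketch of the usual workaround, with made-up data and a made-up two-parameter model purely for illustration:

import numpy as np
from scipy import optimize

x = np.linspace(1.0, 50.0, 100)                     # made-up data
y = 2.0 + 3.0 / x + 0.05 * np.random.randn(x.size)
err = 0.05 * np.ones_like(x)

def residuals(p, x, y, err):
    # vector of residuals, as optimize.leastsq wants
    return (y - (p[0] + p[1] / x)) / err

def chi2(p, x, y, err):
    # scalar sum of squares, as the bfgs-family routines want
    return np.sum(residuals(p, x, y, err) ** 2)

p0 = [1.0, 1.0]
bounds = [(0.0, 10.0), (0.0, 10.0)]                 # box constraints per parameter
pbest, fval, info = optimize.fmin_l_bfgs_b(chi2, p0, args=(x, y, err),
                                           approx_grad=True, bounds=bounds)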
David From Ross.Williamson at usap.gov Tue Mar 3 04:15:08 2009 From: Ross.Williamson at usap.gov (Williamson, Ross) Date: Tue, 3 Mar 2009 22:15:08 +1300 Subject: [SciPy-user] Cannot get optimize.fmin_bfgs to work Message-ID: Hi All I'm trying to get optimize.fmin_bfgs to work with no luck - I have an x,y array of numbers (about 100) to which I'm trying to fit a curve to. Both the auto fprime calculator and my own Jacbian do not work: ValueError: setting an array element with a sequence. I'm presuming I'm doing something really silly but I cannot find a simple example on the web (The one using the Rosenbrock function does not help). The reason for doing this is that I want to progress to the bounded version of this function as leastsq works (but does not allow for bounded data). Please find code below: from numpy import zeros from scipy import optimize def fitfunc(p, x): a_white = p[0] a_pink = p[1] a_red = p[2] a_phot = p[3] fk = p[4] # photon_noise roll-off ~ 11.4 Hz mean 2008 season alpha2 = p[5] #result = wlevel*(1.+(f_knee/x)^alpha) result = a_white + a_pink * x ** (-1.0) + a_red * x ** (-1.0 * alpha2) + a_phot / (1. + x ** 2 / fk ** 2) return result def fitgrad(p,x,y,err): #Calculate Jacobian J = zeros(len(p),float) J[0] = 1.0 J[1] = 1.0/x J[2] = x ** (-1.0 * alpha2) J[3] = 1.0/(1. + x ** 2 / fk ** 2) J[4] = (2 * a_phot * x**2)/(fk**3 * ((x**2/a_phot**2) + 1)**2) J[5] = a_red * x **(-1.0* alpha2) * log(x) return J def errorfunc(p,x,y,err): err_r = (y-fitfunc(p,x))/err print err_r return err_r def test(x,y,noise): pinit = [0.00125645, 0.00062822510, 0.00062822510,0.00062822510, 11.0, 2.0] out = optimize.leastsq(errorfunc,pinit,args=(x,y,noise), full_output=1) out2 = optimize.fmin_bfgs(errorfunc,pinit,fprime=fitgrad,args=(x,y,noise)) print out[0] print out[1] return out From bastian.weber at gmx-topmail.de Tue Mar 3 05:05:08 2009 From: bastian.weber at gmx-topmail.de (Bastian Weber) Date: Tue, 03 Mar 2009 11:05:08 +0100 Subject: [SciPy-user] Cannot get optimize.fmin_bfgs to work In-Reply-To: References: Message-ID: <49AD00D4.4060103@gmx-topmail.de> Hi Ross, as I am a scipy beginner I doubt that I can help but nevertheless I will try. > ValueError: setting an array element with a sequence. I recently had a similar error in my own script, when a function returned an array where it should have returned a scalar value only. The return value was broadcasted to an array by a multplication - normal behaviour which I simply did not consider at the beginning. I found the reason by examining the traceback. Thereby I found it really useful to see the call arguments for each function and to invoke the debugger right where the exception happened. (To achieve this, I use something like import IPython.ultraTB import sys sys.excepthook = IPython.ultraTB.FormattedTB(mode='Verbose', color_scheme='Linux', call_pdb=1) in almost every script I write) If you dont get it workin: Do you have considered to reformulate your optimization problem to one with unbounded parameters by introducing something like a penalty term in errfunc or by mapping parameters into a finite interval (via arctan for example)? As I said: maybe I am completely off the track. Regards, Bastian. From jeremy at jeremysanders.net Tue Mar 3 06:08:23 2009 From: jeremy at jeremysanders.net (Jeremy Sanders) Date: Tue, 03 Mar 2009 11:08:23 +0000 Subject: [SciPy-user] Change in behaviour of interpolate.interp1d Message-ID: Hi - I just got hit by the change in behaviour of interpolate.interp1d as Fedora have just pushed scipy 0.7.0. 
The Scipy release notes seem to be inaccurate to me, as they say: The shape of return values from ``scipy.interpolate.interp1d`` used to be incorrect, if interpolated data had more than 2 dimensions and the axis keyword was set to a non-default value. This has been fixed. Users of ``scipy.interpolate.interp1d`` may need to revise their code if it relies on the incorrect behavior. However I'm only using 1D interpolation. The function used to return a 1D array as a result. Now it returns a 0D array: In [1]: import scipy.interpolate.interpolate In [2]: import numpy In [3]: f = scipy.interpolate.interpolate.interp1d( numpy.array([1,2,3,4]), numpy.array([1,2,3,4]), kind='cubic') In [4]: f(3.1) Out[4]: array(3.1000000000000019) Maybe the release notes should be changed? Jeremy From josef.pktd at gmail.com Tue Mar 3 06:48:24 2009 From: josef.pktd at gmail.com (josef.pktd at gmail.com) Date: Tue, 3 Mar 2009 06:48:24 -0500 Subject: [SciPy-user] Cannot get optimize.fmin_bfgs to work In-Reply-To: References: Message-ID: <1cd32cbb0903030348r574f601chafb053314e15e849@mail.gmail.com> On Tue, Mar 3, 2009 at 4:15 AM, Williamson, Ross wrote: > Hi All > > I'm trying to get optimize.fmin_bfgs to work with no luck - I have an x,y array of numbers (about 100) to which I'm trying to fit a curve to. Both the auto fprime calculator and my own Jacbian do not work: > > ValueError: setting an array element with a sequence. > > I'm presuming I'm doing something really silly but I cannot find a simple example on the web (The one using the Rosenbrock function does not help). > > The reason for doing this is that I want to progress to the bounded version of this function as leastsq works (but does not allow for bounded data). > > Please find code below: > > from numpy import zeros > from scipy import optimize > def fitfunc(p, x): > > ? a_white = p[0] > ? a_pink = p[1] > ? a_red = p[2] > ? a_phot = p[3] > ? fk = p[4] ?# photon_noise roll-off ~ 11.4 Hz mean 2008 season > ? alpha2 = p[5] > > ? #result = wlevel*(1.+(f_knee/x)^alpha) > ? result = a_white + a_pink * x ** (-1.0) + a_red * x ** (-1.0 * alpha2) + a_phot / (1. + x ** 2 / fk ** 2) > > ? return result > > def fitgrad(p,x,y,err): > ? #Calculate Jacobian > ? J = zeros(len(p),float) > ? J[0] = 1.0 > ? J[1] = 1.0/x > ? J[2] = x ** (-1.0 * alpha2) > ? J[3] = 1.0/(1. + x ** 2 / fk ** 2) > ? J[4] = (2 * a_phot * x**2)/(fk**3 * ((x**2/a_phot**2) + 1)**2) > ? J[5] = a_red * x **(-1.0* alpha2) * log(x) > > ? return J > > def errorfunc(p,x,y,err): > ? ?err_r = (y-fitfunc(p,x))/err > ? ?print err_r > ? ?return err_r > > def test(x,y,noise): > > ? ?pinit = [0.00125645, 0.00062822510, 0.00062822510,0.00062822510, 11.0, 2.0] > > ? ?out = optimize.leastsq(errorfunc,pinit,args=(x,y,noise), full_output=1) > ? ?out2 = optimize.fmin_bfgs(errorfunc,pinit,fprime=fitgrad,args=(x,y,noise)) fmin_bfgs and other optimization functions expect errorfunc to be the objective function, which returns a single value e.g. define func = np.sum(errorfunc**2) and then try optimize.fmin_bfgs(func,pinit,fprime=fitgrad,args=(x,y,noise)) this is different from leastsq, where taking the square and summing is done inside of leastsq Josef > ? ?print out[0] > ? ?print out[1] > > ? 
?return out > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > From david at ar.media.kyoto-u.ac.jp Tue Mar 3 09:01:58 2009 From: david at ar.media.kyoto-u.ac.jp (David Cournapeau) Date: Tue, 03 Mar 2009 23:01:58 +0900 Subject: [SciPy-user] SVN and TRAC migrations starting NOW Message-ID: <49AD3856.3090407@ar.media.kyoto-u.ac.jp> Dear Numpy and Scipy developers, We are now starting the svn and trac migrations to new servers: - The svn repositories of both numpy and scipy are now unavailable, and should be available around 16:00 UTC (3rd March 2009). You will then be able to update/commit again. - Trac for numpy and scipy are also unavailable. We will send an email when everything will be backed up, The Scipy website administrators From pwang at enthought.com Tue Mar 3 14:05:50 2009 From: pwang at enthought.com (Peter Wang) Date: Tue, 3 Mar 2009 13:05:50 -0600 Subject: [SciPy-user] SVN and Trac servers are back up In-Reply-To: <49AD3856.3090407@ar.media.kyoto-u.ac.jp> References: <49AD3856.3090407@ar.media.kyoto-u.ac.jp> Message-ID: <67278550-7BBB-4499-B578-CC05702533ED@enthought.com> Hi everyone, We have moved the scipy and numpy Trac and SVN servers to a new machine. We have also moved the scikits SVN repository, but not its Trac (scipy.org/scipy/scikits). The SVN repositories for wavelets, mpi4py, and other projects that are hosted on scipy have not been moved yet, and will be temporarily unavailable until we get them moved over. Please poke around (gently!) and let us know if you experience any broken links, incorrect redirects, and the like. A few things to note: - The URLs for the trac pages have been simplified to: http://projects.scipy.org/numpy http://projects.scipy.org/scipy You should be seemlessly redirected to these sites if you try to access any of the old URLs (which were of the form /scipy/scipy/ or / scipy/numpy/). - The mailman archives and listinfo pages should now redirect to mail.scipy.org/mailman/ and mail.scipy.org/pipermail/. Again, this should be seemless, so if you experience any difficulties please let us know. Thanks, Peter, Stefan, and David From josef.pktd at gmail.com Tue Mar 3 14:15:59 2009 From: josef.pktd at gmail.com (josef.pktd at gmail.com) Date: Tue, 3 Mar 2009 14:15:59 -0500 Subject: [SciPy-user] SVN and Trac servers are back up In-Reply-To: <67278550-7BBB-4499-B578-CC05702533ED@enthought.com> References: <49AD3856.3090407@ar.media.kyoto-u.ac.jp> <67278550-7BBB-4499-B578-CC05702533ED@enthought.com> Message-ID: <1cd32cbb0903031115g673b9271q480d0dc3b767ffe8@mail.gmail.com> On Tue, Mar 3, 2009 at 2:05 PM, Peter Wang wrote: > Hi everyone, > > We have moved the scipy and numpy Trac and SVN servers to a new > machine. ?We have also moved the scikits SVN repository, but not its > Trac (scipy.org/scipy/scikits). ?The SVN repositories for wavelets, > mpi4py, and other projects that are hosted on scipy have not been > moved yet, and will be temporarily unavailable until we get them moved > over. > > Please poke around (gently!) and let us know if you experience any > broken links, incorrect redirects, and the like. ?A few things to note: > > ?- The URLs for the trac pages have been simplified to: > ? ? ?http://projects.scipy.org/numpy > ? ? ?http://projects.scipy.org/scipy I get a 404 Not Found on both of them Josef > ? 
?You should be seemlessly redirected to these sites if you try to > access any of the old URLs (which were of the form /scipy/scipy/ or / > scipy/numpy/). > > ?- The mailman archives and listinfo pages should now redirect to > mail.scipy.org/mailman/ and mail.scipy.org/pipermail/. ?Again, this > should be seemless, so if you experience any difficulties please let > us know. > > > Thanks, > Peter, Stefan, and David > > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > From david at ar.media.kyoto-u.ac.jp Tue Mar 3 14:03:38 2009 From: david at ar.media.kyoto-u.ac.jp (David Cournapeau) Date: Wed, 04 Mar 2009 04:03:38 +0900 Subject: [SciPy-user] SVN and Trac servers are back up In-Reply-To: <1cd32cbb0903031115g673b9271q480d0dc3b767ffe8@mail.gmail.com> References: <49AD3856.3090407@ar.media.kyoto-u.ac.jp> <67278550-7BBB-4499-B578-CC05702533ED@enthought.com> <1cd32cbb0903031115g673b9271q480d0dc3b767ffe8@mail.gmail.com> Message-ID: <49AD7F0A.9020506@ar.media.kyoto-u.ac.jp> josef.pktd at gmail.com wrote: > On Tue, Mar 3, 2009 at 2:05 PM, Peter Wang wrote: > >> Hi everyone, >> >> We have moved the scipy and numpy Trac and SVN servers to a new >> machine. We have also moved the scikits SVN repository, but not its >> Trac (scipy.org/scipy/scikits). The SVN repositories for wavelets, >> mpi4py, and other projects that are hosted on scipy have not been >> moved yet, and will be temporarily unavailable until we get them moved >> over. >> >> Please poke around (gently!) and let us know if you experience any >> broken links, incorrect redirects, and the like. A few things to note: >> >> - The URLs for the trac pages have been simplified to: >> http://projects.scipy.org/numpy >> http://projects.scipy.org/scipy >> > > I get a 404 Not Found on both of them You may have to force a "true" refresh on your browser to make it work, David From pav at iki.fi Tue Mar 3 14:20:12 2009 From: pav at iki.fi (Pauli Virtanen) Date: Tue, 3 Mar 2009 19:20:12 +0000 (UTC) Subject: [SciPy-user] Change in behaviour of interpolate.interp1d References: Message-ID: Tue, 03 Mar 2009 11:08:23 +0000, Jeremy Sanders wrote: > Hi - I just got hit by the change in behaviour of interpolate.interp1d > as Fedora have just pushed scipy 0.7.0. [clip] > In [3]: f = scipy.interpolate.interpolate.interp1d( > numpy.array([1,2,3,4]), numpy.array([1,2,3,4]), kind='cubic') > > In [4]: f(3.1) > Out[4]: array(3.1000000000000019) > > Maybe the release notes should be changed? Yes, sorry. I missed this change. Probably needs to be mentioned in the release notes. -- Pauli Virtanen From pwang at enthought.com Tue Mar 3 14:32:51 2009 From: pwang at enthought.com (Peter Wang) Date: Tue, 3 Mar 2009 13:32:51 -0600 Subject: [SciPy-user] SVN and Trac servers are back up In-Reply-To: <1cd32cbb0903031115g673b9271q480d0dc3b767ffe8@mail.gmail.com> References: <49AD3856.3090407@ar.media.kyoto-u.ac.jp> <67278550-7BBB-4499-B578-CC05702533ED@enthought.com> <1cd32cbb0903031115g673b9271q480d0dc3b767ffe8@mail.gmail.com> Message-ID: <7DEFBA85-EFC9-4646-8E0C-ED10314E4FA8@enthought.com> On Mar 3, 2009, at 1:15 PM, josef.pktd at gmail.com wrote: > On Tue, Mar 3, 2009 at 2:05 PM, Peter Wang > wrote: >> - The URLs for the trac pages have been simplified to: >> http://projects.scipy.org/numpy >> http://projects.scipy.org/scipy > > I get a 404 Not Found on both of them > > Josef Can you verify that the DNS updates have reached your DNS server? 
The hostname projects.scipy.org should resolve to 216.62.213.249. -Peter From josef.pktd at gmail.com Tue Mar 3 15:17:29 2009 From: josef.pktd at gmail.com (josef.pktd at gmail.com) Date: Tue, 3 Mar 2009 15:17:29 -0500 Subject: [SciPy-user] SVN and Trac servers are back up In-Reply-To: <7DEFBA85-EFC9-4646-8E0C-ED10314E4FA8@enthought.com> References: <49AD3856.3090407@ar.media.kyoto-u.ac.jp> <67278550-7BBB-4499-B578-CC05702533ED@enthought.com> <1cd32cbb0903031115g673b9271q480d0dc3b767ffe8@mail.gmail.com> <7DEFBA85-EFC9-4646-8E0C-ED10314E4FA8@enthought.com> Message-ID: <1cd32cbb0903031217t43499a27t7c80d92250dcae25@mail.gmail.com> On Tue, Mar 3, 2009 at 2:32 PM, Peter Wang wrote: > On Mar 3, 2009, at 1:15 PM, josef.pktd at gmail.com wrote: > >> On Tue, Mar 3, 2009 at 2:05 PM, Peter Wang >> wrote: >>> ?- The URLs for the trac pages have been simplified to: >>> ? ? ?http://projects.scipy.org/numpy >>> ? ? ?http://projects.scipy.org/scipy >> >> I get a 404 Not Found on both of them >> >> Josef > > Can you verify that the DNS updates have reached your DNS server? ?The > hostname projects.scipy.org should resolve to 216.62.213.249. Not on my computer >ping projects.scipy.org Pinging projects.scipy.org [216.62.213.231] with 32 bytes of data: http://216.62.213.249/numpy in browser responds with Not Found The requested URL /numpy was not found on this server. Apache/2.2.3 (CentOS) Server at 216.62.213.249 Port 80 So, I think I will have to wait. Josef > > -Peter > - Show quoted text - > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > From stefan at sun.ac.za Tue Mar 3 15:34:47 2009 From: stefan at sun.ac.za (=?ISO-8859-1?Q?St=E9fan_van_der_Walt?=) Date: Tue, 3 Mar 2009 22:34:47 +0200 Subject: [SciPy-user] SVN and Trac servers are back up In-Reply-To: <1cd32cbb0903031217t43499a27t7c80d92250dcae25@mail.gmail.com> References: <49AD3856.3090407@ar.media.kyoto-u.ac.jp> <67278550-7BBB-4499-B578-CC05702533ED@enthought.com> <1cd32cbb0903031115g673b9271q480d0dc3b767ffe8@mail.gmail.com> <7DEFBA85-EFC9-4646-8E0C-ED10314E4FA8@enthought.com> <1cd32cbb0903031217t43499a27t7c80d92250dcae25@mail.gmail.com> Message-ID: <9457e7c80903031234v6fc18b68t5b5e6e4ba80625ff@mail.gmail.com> 2009/3/3 : > http://216.62.213.249/numpy ? in browser responds with > > Not Found Since that machine acts as a virtual host, this won't work. There are some issues with DNS (I can't resolve projects.scipy.org correctly either), but the situation will be rectified soon. Regards St?fan From cournape at gmail.com Tue Mar 3 15:38:36 2009 From: cournape at gmail.com (David Cournapeau) Date: Wed, 4 Mar 2009 05:38:36 +0900 Subject: [SciPy-user] SVN and Trac servers are back up In-Reply-To: <9457e7c80903031234v6fc18b68t5b5e6e4ba80625ff@mail.gmail.com> References: <49AD3856.3090407@ar.media.kyoto-u.ac.jp> <67278550-7BBB-4499-B578-CC05702533ED@enthought.com> <1cd32cbb0903031115g673b9271q480d0dc3b767ffe8@mail.gmail.com> <7DEFBA85-EFC9-4646-8E0C-ED10314E4FA8@enthought.com> <1cd32cbb0903031217t43499a27t7c80d92250dcae25@mail.gmail.com> <9457e7c80903031234v6fc18b68t5b5e6e4ba80625ff@mail.gmail.com> Message-ID: <5b8d13220903031238s3bffe963k229280539683566f@mail.gmail.com> On Wed, Mar 4, 2009 at 5:34 AM, St?fan van der Walt wrote: > 2009/3/3 ?: >> http://216.62.213.249/numpy ? in browser responds with >> >> Not Found > > Since that machine acts as a virtual host, this won't work. 
> > There are some issues with DNS (I can't resolve projects.scipy.org > correctly either), but the situation will be rectified soon. What works is to set up the actual ip + name in /etc/hosts (C:\Windows\system32\drivers\etc on windows). But you better remember removing it later :) cheers, David From gyromagnetic at gmail.com Tue Mar 3 15:40:11 2009 From: gyromagnetic at gmail.com (gyro) Date: Tue, 3 Mar 2009 13:40:11 -0700 Subject: [SciPy-user] Additional output from scipy.integrate.ode[int] Message-ID: Hi, I was wondering if it is possible to specify additional outputs that are calculated in the function specifying the rhs of the odes. As a trivial example (adapting the example in the documentation): ========== from scipy.integrate import ode y0, t0 = [1.0j, 2.0], 0 def f(t, y, arg1): extra = y[0]*t - y[1] rhs = [1j*arg1*y[0] + y[1], -arg1*y[1]**2] return (rhs,extra) def jac(t, y, arg1): return [[1j*arg1, 1], [0, -arg1*2*y[1]]] r = ode(f, jac).set_integrator('zvode', method='bdf', with_jacobian=True) r.set_initial_value(y0, t0).set_f_params(2.0).set_jac_params(2.0) t1 = 10 dt = 1 while r.successful() and r.t < t1: r.integrate(r.t+dt) print r.t, r.y, r.extra ========== I realize that I can calculate the 'extra' after I have the complete solution, but it would be convenient to have the ability to get this information directly from the solver. If this capability is not currently available, is it of interest to have it implemented? Thanks. -gyro From pav at iki.fi Tue Mar 3 15:54:51 2009 From: pav at iki.fi (Pauli Virtanen) Date: Tue, 3 Mar 2009 20:54:51 +0000 (UTC) Subject: [SciPy-user] Additional output from scipy.integrate.ode[int] References: Message-ID: Tue, 03 Mar 2009 13:40:11 -0700, gyro wrote: > I was wondering if it is possible to specify additional outputs that are > calculated in the function specifying the rhs of the odes. > > As a trivial example (adapting the example in the documentation): > [clip] > def f(t, y, arg1): > extra = y[0]*t - y[1] > rhs = [1j*arg1*y[0] + y[1], -arg1*y[1]**2] > return (rhs,extra) [clip] > while r.successful() and r.t < t1: > r.integrate(r.t+dt) > print r.t, r.y, r.extra [clip] > I realize that I can calculate the 'extra' after I have the complete > solution, but it would be convenient to have the ability to get this > information directly from the solver. > > If this capability is not currently available, is it of interest to have > it implemented? I'm not sure if there's interest to add this feature, since you can do def f(t, y, arg1): r.extra = y[0]*t - y[1] rhs = [1j*arg1*y[0] + y[1], -arg1*y[1]**2] return rhs Note that `r` is available in `f`, since it's in the outer scope. -- Pauli Virtanen From millman at berkeley.edu Tue Mar 3 16:24:29 2009 From: millman at berkeley.edu (Jarrod Millman) Date: Tue, 3 Mar 2009 13:24:29 -0800 Subject: [SciPy-user] SVN and Trac servers are back up In-Reply-To: <9457e7c80903031234v6fc18b68t5b5e6e4ba80625ff@mail.gmail.com> References: <49AD3856.3090407@ar.media.kyoto-u.ac.jp> <67278550-7BBB-4499-B578-CC05702533ED@enthought.com> <1cd32cbb0903031115g673b9271q480d0dc3b767ffe8@mail.gmail.com> <7DEFBA85-EFC9-4646-8E0C-ED10314E4FA8@enthought.com> <1cd32cbb0903031217t43499a27t7c80d92250dcae25@mail.gmail.com> <9457e7c80903031234v6fc18b68t5b5e6e4ba80625ff@mail.gmail.com> Message-ID: On Tue, Mar 3, 2009 at 12:34 PM, Stéfan van der Walt wrote: > There are some issues with DNS (I can't resolve projects.scipy.org > correctly either), but the situation will be rectified soon. The DNS issues should be resolved for now.
Please let us know if you experience any more problems due to the recent change. Jarrod From bnuttall at uky.edu Tue Mar 3 16:43:16 2009 From: bnuttall at uky.edu (Nuttall, Brandon C) Date: Tue, 3 Mar 2009 16:43:16 -0500 Subject: [SciPy-user] Stepwise Discriminant Analysis In-Reply-To: <1cd32cbb0903030348r574f601chafb053314e15e849@mail.gmail.com> References: <1cd32cbb0903030348r574f601chafb053314e15e849@mail.gmail.com> Message-ID: Hello, Is a stepwise discriminant analysis routine available in Python? Can someone point me in the right direction? Thanks. Brandon Nuttall Kentucky Geological Survey www.uky.edu/kgs From josef.pktd at gmail.com Tue Mar 3 18:49:57 2009 From: josef.pktd at gmail.com (josef.pktd at gmail.com) Date: Tue, 3 Mar 2009 18:49:57 -0500 Subject: [SciPy-user] Stepwise Discriminant Analysis In-Reply-To: References: <1cd32cbb0903030348r574f601chafb053314e15e849@mail.gmail.com> Message-ID: <1cd32cbb0903031549xacadfa0y4191950460a61d5@mail.gmail.com> On Tue, Mar 3, 2009 at 4:43 PM, Nuttall, Brandon C wrote: > Hello, > > Is a stepwise discriminant analysis routine available in Python? Can someone point me in the right direction? > I don't know of any stepwise implementation and searching with google I find mostly articles with the message: "Stepwise Descriptive or Predictive Discriminant Analysis: Don't Even Think about Using It!" is the title of: http://www.eric.ed.gov/ERICWebPortal/custom/portlets/recordDetails/detailmini.jsp?_nfpb=true&_&ERICExtSearch_SearchValue_0=ED438321&ERICExtSearch_SearchType_0=no&accno=ED438321 or see http://epm.sagepub.com/cgi/content/refs/55/4/525 There are several python packages on machine learning and classifier analysis that might do something similar to what you want. e.g. http://www.pymvpa.org/classifiers.html#basic-supervised-learning-methods or logistic regression in nipy neuroimaging stats.models FDANode in http://mdp-toolkit.sourceforge.net/tutorial.html#node-list and there are several others. However, I don't know how easy it is, to use just the discriminant analysis without the overhead of learning an entire package, and for just a linear discriminant analysis they might be overkill. Josef From gyromagnetic at gmail.com Tue Mar 3 20:57:16 2009 From: gyromagnetic at gmail.com (gyro) Date: Tue, 3 Mar 2009 17:57:16 -0800 (PST) Subject: [SciPy-user] Additional output from scipy.integrate.ode[int] In-Reply-To: References: Message-ID: <4e6e7b3f-ae97-41f5-9638-c55ee5b37e68@u18g2000pro.googlegroups.com> On Mar 3, 1:54?pm, Pauli Virtanen wrote: > Tue, 03 Mar 2009 13:40:11 -0700, gyro wrote: > > > I was wondering if it is possible to specify additional outputs that are > > calculated in the function specifying the rhs of the odes. > > > As an trivial example (adapting the example in the documentation): > > [clip] > > def f(t, y, arg1): > > ? ? extra = y[0]*t - y[1] > > ? ? rhs = [1j*arg1*y[0] + y[1], -arg1*y[1]**2] > > ? ? return (rhs,extra) > [clip| > > while r.successful() and r.t < t1: > > ? ? r.integrate(r.t+dt) > > ? ? print r.t, r.y, r.extra > [clip] > > I realize that I can calculate the 'extra' after I have the complete > > solution, but it would be convenient to have the ability to get this > > information directly from the solver. > > > If this capability is not currently available, is it of interest to have > > it implemented? > > I'm not sure if there's interest to add this feature, since you can do > > def f(t, y, arg1): > ? ? r.extra = y[0]*t - y[1] > ? ? rhs = [1j*arg1*y[0] + y[1], -arg1*y[1]**2] > ? ? 
return rhs > > Note that `r` is available in `f`, since it's in the outer scope. > > -- > Pauli Virtanen > Excellent, Pauli! This is a great solution to my problem. -g From josef.pktd at gmail.com Wed Mar 4 00:49:30 2009 From: josef.pktd at gmail.com (josef.pktd at gmail.com) Date: Wed, 4 Mar 2009 00:49:30 -0500 Subject: [SciPy-user] Stepwise Discriminant Analysis In-Reply-To: <1cd32cbb0903031549xacadfa0y4191950460a61d5@mail.gmail.com> References: <1cd32cbb0903030348r574f601chafb053314e15e849@mail.gmail.com> <1cd32cbb0903031549xacadfa0y4191950460a61d5@mail.gmail.com> Message-ID: <1cd32cbb0903032149o4a7725d8x23c408551423099@mail.gmail.com> On Tue, Mar 3, 2009 at 6:49 PM, wrote: > On Tue, Mar 3, 2009 at 4:43 PM, Nuttall, Brandon C wrote: >> Hello, >> >> Is a stepwise discriminant analysis routine available in Python? Can someone point me in the right direction? >> > > I don't know of any stepwise implementation and searching with google > I find mostly articles with the message: > > "Stepwise Descriptive or Predictive Discriminant Analysis: Don't Even > Think about Using It!" > > is the title of: > http://www.eric.ed.gov/ERICWebPortal/custom/portlets/recordDetails/detailmini.jsp?_nfpb=true&_&ERICExtSearch_SearchValue_0=ED438321&ERICExtSearch_SearchType_0=no&accno=ED438321 > > or see http://epm.sagepub.com/cgi/content/refs/55/4/525 > > There are several python packages on machine learning and classifier > analysis that might do something similar to what you want. > e.g. http://www.pymvpa.org/classifiers.html#basic-supervised-learning-methods > or logistic regression in nipy neuroimaging stats.models > FDANode in http://mdp-toolkit.sourceforge.net/tutorial.html#node-list > and there are several others. > > However, I don't know how easy it is, to use just the discriminant > analysis without the overhead of learning an entire package, and for > just a linear discriminant analysis they might be overkill. > > Josef > Just a clarification: Statisticians are mostly critical of many stepwise statistical methods, among other reasons, because the usually reported statistics are not correct if explanatory variables are selected, and because if the selection of included explanatory variables is based on in-sample fit, then some variables might capture idiosyncratic features of the sample which is not useful for out-of-sample prediction. It also creates problems for interpretation of the results of the statistical analysis. On the other hand, machine learning literature doesn't care much about the interpretation of the estimated parameters, compared to for example social sciences. So they can throw everything in the pot and use cross validation to keep or select those features that help in prediction. An interesting comparison is in http://anyall.org/blog/2008/12/statistics-vs-machine-learning-fight/ and some of the links there. Since I wanted to have something similar for another purpose, I wrote a version of the linear probability model, i.e. regress a binary variable on some continuous variables, where regressors are sequentially included if they improve out-of-sample prediction. This is not directly a stepwise discriminant analysis but close to it and since it uses out-of-sample prediction similar to cross-validation, it avoids the problem of overfitting, it doesn't have any test statistics. However, a statistician would recommend to use a logit or probit model instead. This was quickly written and only minimally tested but it seems to work pretty well for the generated sample. 
(I guess this went a bit off-topic.) Josef -------------- next part -------------- import numpy as np from numpy import linalg def gendata(nreg, nobs, noise_sig): '''generate sample data This function only generates independent regressors and not regressors with collinearity. Returns ------- y : array (nobs, 1) of binary data xdata : array (nobs, nreg) of continuous regressor data ''' nobsf = float(nobs) xdata = np.hstack((np.ones((nobs,1)), np.random.randn(nobs,nreg))) beta = np.array([[1, 1, 2, -1, 0.1, 0]+[0.05]*(nreg-5)]).T noise = noise_sig * np.random.randn(nobs,1) ylatent = np.dot(xdata,beta) + noise y = (ylatent > 0).astype(int) return xdata, y def estimate(xsample, ysample, xout, yout, full=0): '''estimate linear regression on sample data returns percent of correct predictions for out-of-sample data ''' betaest = linalg.lstsq(xsample,ysample)[0] yout_latent_est = np.dot(xout,betaest) yout_est = (yout_latent_est > 0.5).astype(int) ycorrect = sum((yout_est - yout) == 0) nyoutf = float(yout.shape[0]) if not full: return ycorrect/nyoutf else: yover = sum((yout_est - yout) > 0) yunder = sum((yout_est - yout) < 0) return betaest, ycorrect/nyoutf*100, yover/nyoutf*100, yunder/nyoutf*100 def stepregress(xdata, y, psample=0.8): '''regress binary random variable on a array of explanatory variables Regressors are added sequentially, one at a time, by best improvement in out-of-sample prediction. If no additional regressor improves the prediction then the iteration stops ''' nobs = y.shape[0] nreg = xdata.shape[1] nsample = np.floor(nobs * psample) nout = nobs - nsample xsample = xdata[:nsample,:] ysample = y[:nsample,:] xout = xdata[nsample:,:] yout = y[nsample:,:] idxincl = [] idxremain = range(nreg) bestcorr = 0 oldbest = 0 while len(idxremain)>0: bestidx = 0 print 'remaining', idxremain for xidx in idxremain: idxuse = idxincl + [xidx] correctpercent = estimate(xsample[:,idxuse], ysample, xout[:,idxuse], yout) print idxuse, correctpercent if correctpercent > bestcorr: bestidx = xidx bestcorr = correctpercent if bestcorr > oldbest: idxincl.append(bestidx) idxremain.remove(bestidx) oldbest = bestcorr print 'added: ', bestidx, 'new correct prediction', bestcorr else: print 'no improvement found' break return idxincl, idxremain if __name__ == '__main__': nreg = 10 #I tried 5, 10 and 20 nobs = 100 #I tried 100, 500 and 1000 noise_sig = 0.1 #standard deviation of noise of latent y variable psample = 0.8 #fraction of data to include in estimation xdata, y = gendata(nreg, nobs, noise_sig) idxincl, idxremain = stepregress(xdata, y, psample=psample) print '\nSummary' print 'nobs=%d, nreg=%d, noise_std=%4.2f, psample=%3.2f'% (nobs, nreg, noise_sig, psample) print 'included regressors', idxincl print 'excluded regressors', idxremain #prediction, classification on the full data set with regressors selected # by stepregress: betaest, ycorrect, yover, yunder = estimate(xdata[:,idxincl], y, xdata[:,idxincl], y, full=1) print 'percent label 1 in data', sum(y)/float(nobs)*100 print 'ycorrect, yover, yunder (%) =', ycorrect, yover, yunder print 'estimated parameters for regressors', idxincl print betaest.T From fperez.net at gmail.com Wed Mar 4 08:51:16 2009 From: fperez.net at gmail.com (Fernando Perez) Date: Wed, 4 Mar 2009 05:51:16 -0800 Subject: [SciPy-user] ANN: python for scientific computing at SIAM CSE 09 Message-ID: Hi all, sorry for the spam, but in case any of you are coming to the SIAM Conference on Computational Science and Engineering (CSE09) in Miami: http://www.siam.org/meetings/cse09/ you 
might be interested in stopping by the Python sessions on Thursday: http://meetings.siam.org/sess/dsp_programsess.cfm?SESSIONCODE=8044 http://meetings.siam.org/sess/dsp_programsess.cfm?SESSIONCODE=8045 http://meetings.siam.org/sess/dsp_programsess.cfm?SESSIONCODE=8046 Think of it as the East Coast March mini-edition of Scipy'09 ;) Cheers, f From Dharhas.Pothina at twdb.state.tx.us Wed Mar 4 11:15:10 2009 From: Dharhas.Pothina at twdb.state.tx.us (Dharhas Pothina) Date: Wed, 04 Mar 2009 10:15:10 -0600 Subject: [SciPy-user] scikits.timeseries : how would I plot (or calculate) monthly statistics. Message-ID: <49AE54AE.63BA.009B.0@twdb.state.tx.us> Hi, This a question for those of you who use scikits.timeseries . Given 10-15 years of timeseries data how would I plot monthly statistics like max, min, mean, std deviation etc for a year. ie. on the x axis would be the 12 months (Jan,Feb...Dec) and y axis would have the stats. I know how to read in my timeseries and convert it to a monthly frequency but I'm not sure the best way to proceed from here. I guess one way would be to reshape the monthly timeseries array into 12 columns and then calculate the stats on each column but I was wondering if there was a more appropriate method to do this? On another note: shouldn't there be a link on this page (http://scikits.appspot.com/timeseries) to the timeseries home page http://pytseries.sourceforge.net/ . The link on the appspot page (http://pypi.python.org/pypi/scikits.timeseries) doesn't work. thanks - dharhas From timmichelsen at gmx-topmail.de Wed Mar 4 15:20:10 2009 From: timmichelsen at gmx-topmail.de (Tim Michelsen) Date: Wed, 04 Mar 2009 21:20:10 +0100 Subject: [SciPy-user] scikits.timeseries : how would I plot (or calculate) monthly statistics. In-Reply-To: <49AE54AE.63BA.009B.0@twdb.state.tx.us> References: <49AE54AE.63BA.009B.0@twdb.state.tx.us> Message-ID: Hello > Given 10-15 years of timeseries data how would I plot monthly > statistics like max, min, mean, std deviation etc for a year. Something like: import scikits.timeseries as ts import numpy as np start_data = ts.Date(freq='M', year=1990, month=1) data = np.random.uniform(0, 20, 120) ts_monthly = ts.time_series(data, freq='M', start_date=start_data) aser = ts_monthly.convert('A', func=np.ma.std) => now plot aser? Regards, Timmie P.S.: regarding time series, I have sth. for you . Please pass me your PM. From pgmdevlist at gmail.com Wed Mar 4 15:38:46 2009 From: pgmdevlist at gmail.com (Pierre GM) Date: Wed, 4 Mar 2009 15:38:46 -0500 Subject: [SciPy-user] scikits.timeseries : how would I plot (or calculate) monthly statistics. In-Reply-To: <49AE54AE.63BA.009B.0@twdb.state.tx.us> References: <49AE54AE.63BA.009B.0@twdb.state.tx.us> Message-ID: <34C4E9B1-B92A-4B41-A1CE-0EF74C7FAF3B@gmail.com> On Mar 4, 2009, at 11:15 AM, Dharhas Pothina wrote: > > Given 10-15 years of timeseries data What frequency ? Monthly ? Daily ? > > I know how to read in my timeseries and convert it to a monthly > frequency but I'm not sure the best way to proceed from here. I > guess one way would be to reshape the monthly timeseries array into > 12 columns and then calculate the stats on each column but I was > wondering if there was a more appropriate method to do this? * If you have a monhtly frequency: just use .convert('A') and compute the stats for each column. * If you have a daily frequency: well, I'm afraid you didn't give enough information: what do you want for each month, what is the expected shape of the output ? 
Statistics far all the years (eg, the mean for January irrespectively of the year), with an output of shape (12,) ? For each year, with an output of shape (n, 12) and n the nb of years? The generic idea is indeed to first convert to monthly. You end up with a (12*n, 31) series that you have to reshape. However, because you won't need the dates, I strongly advise you to work only on the .series part for the reshaping. > On another note: shouldn't there be a link on this page (http://scikits.appspot.com/timeseries > ) to the timeseries home page http://pytseries.sourceforge.net/ . > The link on the appspot page (http://pypi.python.org/pypi/scikits.timeseries > ) doesn't work. Thanks for reporting. We were already aware of the problem, that shall be fixed when timeseries will be officially released (and then, we'll be able to put it on pypi). From Dharhas.Pothina at twdb.state.tx.us Wed Mar 4 16:26:36 2009 From: Dharhas.Pothina at twdb.state.tx.us (Dharhas Pothina) Date: Wed, 04 Mar 2009 15:26:36 -0600 Subject: [SciPy-user] scikits.timeseries : how would I plot (orcalculate) monthly statistics. In-Reply-To: <34C4E9B1-B92A-4B41-A1CE-0EF74C7FAF3B@gmail.com> References: <49AE54AE.63BA.009B.0@twdb.state.tx.us> <34C4E9B1-B92A-4B41-A1CE-0EF74C7FAF3B@gmail.com> Message-ID: <49AE9DAC.63BA.009B.0@twdb.state.tx.us> Original data is on an approximately two week frequency. I'll be reading them in as daily values and converting to monthly. I eventually want to calculate plot monthly values in a particular year vs mean monthly values for all years (ie vs mean for each month irrespective of year) I'll try playing around with the what you and Tim have mentioned. but I'm a bit confused about how some of this works. > If you have a monthly frequency: just use .convert('A') and compute > the stats for each column. Wouldn't this convert it to an Annual frequency, ie one value per year? > The generic idea is indeed to first convert to monthly. You end up > with a (12*n, 31) series that you have to reshape I get the 12*n , ie 12 months * number of years. Where does the 31 come from. Does the series still contain the daily values after conversion to monthly? thanks, - dharhas >>> Pierre GM 3/4/2009 2:38 PM >>> On Mar 4, 2009, at 11:15 AM, Dharhas Pothina wrote: > > Given 10-15 years of timeseries data What frequency ? Monthly ? Daily ? > > I know how to read in my timeseries and convert it to a monthly > frequency but I'm not sure the best way to proceed from here. I > guess one way would be to reshape the monthly timeseries array into > 12 columns and then calculate the stats on each column but I was > wondering if there was a more appropriate method to do this? * If you have a monhtly frequency: just use .convert('A') and compute the stats for each column. * If you have a daily frequency: well, I'm afraid you didn't give enough information: what do you want for each month, what is the expected shape of the output ? Statistics far all the years (eg, the mean for January irrespectively of the year), with an output of shape (12,) ? For each year, with an output of shape (n, 12) and n the nb of years? The generic idea is indeed to first convert to monthly. You end up with a (12*n, 31) series that you have to reshape. However, because you won't need the dates, I strongly advise you to work only on the .series part for the reshaping. > On another note: shouldn't there be a link on this page (http://scikits.appspot.com/timeseries > ) to the timeseries home page http://pytseries.sourceforge.net/ . 
> The link on the appspot page (http://pypi.python.org/pypi/scikits.timeseries > ) doesn't work. Thanks for reporting. We were already aware of the problem, that shall be fixed when timeseries will be officially released (and then, we'll be able to put it on pypi). _______________________________________________ SciPy-user mailing list SciPy-user at scipy.org http://projects.scipy.org/mailman/listinfo/scipy-user From timmichelsen at gmx-topmail.de Wed Mar 4 16:43:35 2009 From: timmichelsen at gmx-topmail.de (Timmie) Date: Wed, 4 Mar 2009 21:43:35 +0000 (UTC) Subject: [SciPy-user] =?utf-8?q?scikits=2Etimeseries_=3A_how_would_I=09plo?= =?utf-8?q?t=09=28orcalculate=29monthly=09statistics=2E?= References: <49AE54AE.63BA.009B.0@twdb.state.tx.us> <34C4E9B1-B92A-4B41-A1CE-0EF74C7FAF3B@gmail.com> <49AE9DAC.63BA.009B.0@twdb.state.tx.us> Message-ID: > Original data is on an approximately two week frequency. I'll be reading them in as daily values and > converting to monthly. It would be helpful if you could create an example maybe using random data. > > If you have a monthly frequency: just use .convert('A') and compute > > the stats for each column. > > Wouldn't this convert it to an Annual frequency, ie one value per year? Yes, I though this is what you wanted. If not maybe you want to take a look at: scikits.timeseries.extrax.convert_to_annual(series) in http://bazaar.launchpad.net/~vcs-imports/scipy/timeseries/annotate/head%3A/scikits/timeseries//extras.py This will align the years. Then you can create an average over the years. Greetings, Timmie From pgmdevlist at gmail.com Wed Mar 4 16:46:31 2009 From: pgmdevlist at gmail.com (Pierre GM) Date: Wed, 4 Mar 2009 16:46:31 -0500 Subject: [SciPy-user] scikits.timeseries : how would I plot (orcalculate) monthly statistics. In-Reply-To: <49AE9DAC.63BA.009B.0@twdb.state.tx.us> References: <49AE54AE.63BA.009B.0@twdb.state.tx.us> <34C4E9B1-B92A-4B41-A1CE-0EF74C7FAF3B@gmail.com> <49AE9DAC.63BA.009B.0@twdb.state.tx.us> Message-ID: <9717FF09-ACFB-4848-ACEB-A705809F0FA9@gmail.com> Dharhas, On Mar 4, 2009, at 4:26 PM, Dharhas Pothina wrote: > Original data is on an approximately two week frequency. I'll be > reading them in as daily values and converting to monthly. OK, you could try 'W', but 'D' is also OK. > I eventually want to calculate plot monthly values in a particular > year vs mean monthly values for all years (ie vs mean for each month > irrespective of year) Then, that's quite easy: First, compute the means per month: >>> mseries = series.convert('M', func=ma.mean) You end up with a (n*12) series. Convert it to annual, but w/o precising a function >>> aseries = mseries.convert('A') aseries is a (n, 12) series: years in rows, months in columns. aseries[i,j] is the mean over month j for year i. If you want the monthly values over all the years, just take the mean along axis=0 > > >> If you have a monthly frequency: just use .convert('A') and compute >> the stats for each column. > > Wouldn't this convert it to an Annual frequency, ie one value per > year? It does, but because you don't precise a func, you end up with a 2D array http://pytseries.sourceforge.net/generated/scikits.timeseries.TimeSeries.convert.html#scikits.timeseries.TimeSeries.convert >> The generic idea is indeed to first convert to monthly. You end up >> with a (12*n, 31) series that you have to reshape > > I get the 12*n , ie 12 months * number of years. Where does the 31 > come from. Does the series still contain the daily values after > conversion to monthly? 
I was talking about a 'D'aily series to convert to monthly, without using a 'func' parameter. You then end up with a 2D series, with 12*n rows and 31 columns (because there's at most 31 days per month). Note that the last column (day 31) would be masked for Feb, Apr, Jun, Sep, Nov. From timmichelsen at gmx-topmail.de Wed Mar 4 16:46:00 2009 From: timmichelsen at gmx-topmail.de (Timmie) Date: Wed, 4 Mar 2009 21:46:00 +0000 (UTC) Subject: [SciPy-user] server migration not complete Message-ID: Dear scipy administrators, it seems that http://www.scipy.org/scipy/scikits/browser/trunk/ is not up to date with the svn repository. Kind regards, Timmie From cournape at gmail.com Thu Mar 5 03:39:16 2009 From: cournape at gmail.com (David Cournapeau) Date: Thu, 5 Mar 2009 17:39:16 +0900 Subject: [SciPy-user] server migration not complete In-Reply-To: References: Message-ID: <5b8d13220903050039w164ec169y21f0c98941606d36@mail.gmail.com> On Thu, Mar 5, 2009 at 6:46 AM, Timmie wrote: > Dear scipy administrators, > it seems that http://www.scipy.org/scipy/scikits/browser/trunk/ > is not up to date with the svn repository. Yes, the scikits trac has not been updated yet. The svn repository is safe to use, though. We hope to fix it by the end of the week, sorry for the inconvenience, David From timmichelsen at gmx-topmail.de Thu Mar 5 09:10:13 2009 From: timmichelsen at gmx-topmail.de (Timmie) Date: Thu, 5 Mar 2009 14:10:13 +0000 (UTC) Subject: [SciPy-user] server migration not complete References: <5b8d13220903050039w164ec169y21f0c98941606d36@mail.gmail.com> Message-ID: > sorry for the inconvenience, No inconvenience. Just a note. I am very thankful for your efforts! From timmichelsen at gmx-topmail.de Thu Mar 5 09:15:25 2009 From: timmichelsen at gmx-topmail.de (Timmie) Date: Thu, 5 Mar 2009 14:15:25 +0000 (UTC) Subject: [SciPy-user] scikits.timeseries: how to exclude year from plots? Message-ID: Hello, when plotting a timeseries from a long-term year [1] on, for example, monthly frequency, the data is independent of the year. How can I exclude the year figure from being plotted on the x-axis? For example: how do I suppress the year number 2009 in the following picture: http://pytseries.sourceforge.net/_images/zoom3.png [1] long-term year: average created from 10 years of data. Think of average temperature per month for the last decade. Thanks in advance for your hints, Timmie From timmichelsen at gmx-topmail.de Thu Mar 5 09:49:16 2009 From: timmichelsen at gmx-topmail.de (Timmie) Date: Thu, 5 Mar 2009 14:49:16 +0000 (UTC) Subject: [SciPy-user] autolabels for barcharts Message-ID: Hello, I have a question regarding autolabels for bar charts. It seems that the pie charts have already incorporated such functionality [1]. Is there any reason why this isn't built into the bar chart function [2]?
The function I am referring to is: def autolabel(rects): # attach some text labels for rect in rects: #~ print rect height = rect.get_height() print height ax.text(rect.get_x()+rect.get_width()/2., 1.05*height, '%d'%int(height), ha='center', va='bottom') Thanks in advance for the clarification, Timmie [1] http://matplotlib.sourceforge.net/examples/pylab_examples/pie_demo.html?highlight=autolabel [2] http://matplotlib.sourceforge.net/examples/api/barchart_demo.html?highlight=autolabel From timmichelsen at gmx-topmail.de Thu Mar 5 09:51:15 2009 From: timmichelsen at gmx-topmail.de (Timmie) Date: Thu, 5 Mar 2009 14:51:15 +0000 (UTC) Subject: [SciPy-user] autolabels for barcharts References: Message-ID: Sorry for the noise. I meant to send it to the matplotlib list. Please ignore this. From pgmdevlist at gmail.com Thu Mar 5 12:03:37 2009 From: pgmdevlist at gmail.com (Pierre GM) Date: Thu, 5 Mar 2009 12:03:37 -0500 Subject: [SciPy-user] scikits.timeseries: how to exclude year from plots? In-Reply-To: References: Message-ID: On Mar 5, 2009, at 9:15 AM, Timmie wrote: > Hello, > when plotting a timeseries from a long-term year [1] on, for > example, monthly > frequency, the data is independent of the year. > > How can I exclude the year figure from being plotted on the x-axis? Timmie, I'm afraid it is not currently possible w/ timeseries. I'll try to see what we can do. Meanwhile, if you don't need the interactive ticks that tsfigure provides, don't use it: just plot your (12,) monthly or (365,) daily series using the standard commands of matplotlib, and set the ticks yourself. That should be straightforward w/ a monthly series, that might be a bit trickier w/ a daily series (ticks should be placed at 0, 31, 60, 91, 121...) From wnbell at gmail.com Thu Mar 5 14:26:49 2009 From: wnbell at gmail.com (Nathan Bell) Date: Thu, 5 Mar 2009 14:26:49 -0500 Subject: [SciPy-user] ANN: PyAMG v1.0 (Algebraic Multigrid Solvers in Python) Message-ID: We are pleased to announce the first release of PyAMG: Algebraic Multigrid Solvers in Python. With a user-friendly interface and efficient implementation, PyAMG addresses the growing demand for scalable solvers by non-experts. PyAMG features implementations of * Ruge-Stuben (RS) or Classical AMG * AMG based on Smoothed Aggregation (SA) and experimental support for * Adaptive Smoothed Aggregation (αSA) * Compatible Relaxation (CR) along with many tunable options for coarsening, interpolation, relaxation, prolongator smoothing. Our goal with the PyAMG project is to provide a framework for existing AMG methods and to allow for quick testing and prototyping of additional functionality and algorithms. Specifically, our objectives are: * ease of use o interface is accessible to non-experts o extensive documentation and references * speed o solves problems with millions of unknowns efficiently o core multigrid algorithms are implemented in C++ o sparse matrix support is provided by scipy.sparse * readability o source code is organized into intuitive components * extensibility o core components can be reused to implement additional techniques o new features are easily integrated * experimentation o facilitates rapid prototyping and analysis of multigrid methods * portability o tested on several platforms (Linux, Windows, Mac) o relies only on Python, NumPy, and SciPy PyAMG relies extensively on the NumPy and SciPy libraries for scientific programming with Python. We thank the NumPy/SciPy community for their support and continued efforts.
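As a quick illustration of the interface the announcement describes, a minimal solve might look like the sketch below; it is based on the project's own examples and assumes the gallery helper and the Ruge-Stuben entry point shipped with this release:

import numpy as np
from pyamg import ruge_stuben_solver
from pyamg.gallery import poisson

A = poisson((500, 500), format='csr')   # 2D Poisson problem, 250000 unknowns
b = np.random.rand(A.shape[0])

ml = ruge_stuben_solver(A)              # build the multigrid hierarchy
x = ml.solve(b, tol=1e-8)               # solve A x = b

print(np.linalg.norm(b - A * x))        # residual norm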
For more information see http://www.pyamg.org PyAMG developers: Nathan Bell (http://graphics.cs.uiuc.edu/~wnbell/) Luke Olson (http://www.cs.uiuc.edu/homes/lukeo/) Jacob Schroder (www.cse.uiuc.edu/~jschrod3/) -- Nathan Bell wnbell at gmail.com http://graphics.cs.uiuc.edu/~wnbell/ From Dharhas.Pothina at twdb.state.tx.us Thu Mar 5 14:49:28 2009 From: Dharhas.Pothina at twdb.state.tx.us (Dharhas Pothina) Date: Thu, 05 Mar 2009 13:49:28 -0600 Subject: [SciPy-user] scikits.timeseries : how would Iplot (orcalculate) monthly statistics. In-Reply-To: <9717FF09-ACFB-4848-ACEB-A705809F0FA9@gmail.com> References: <49AE54AE.63BA.009B.0@twdb.state.tx.us> <34C4E9B1-B92A-4B41-A1CE-0EF74C7FAF3B@gmail.com> <49AE9DAC.63BA.009B.0@twdb.state.tx.us> <9717FF09-ACFB-4848-ACEB-A705809F0FA9@gmail.com> Message-ID: <49AFD868.63BA.009B.0@twdb.state.tx.us> Thank you. This worked great and saved me a lot of time. I'm slowly getting the hang of this package. Sorry for the obvious question but I assume that ma.mean() etc automatically excludes masked values from the calculation. - d >>> Pierre GM 3/4/2009 3:46 PM >>> Dharhas, On Mar 4, 2009, at 4:26 PM, Dharhas Pothina wrote: > Original data is on an approximately two week frequency. I'll be > reading them in as daily values and converting to monthly. OK, you could try 'W', but 'D' is also OK. > I eventually want to calculate plot monthly values in a particular > year vs mean monthly values for all years (ie vs mean for each month > irrespective of year) Then, that's quite easy: First, compute the means per month: >>> mseries = series.convert('M', func=ma.mean) You end up with a (n*12) series. Convert it to annual, but w/o precising a function >>> aseries = mseries.convert('A') aseries is a (n, 12) series: years in rows, months in columns. aseries[i,j] is the mean over month j for year i. If you want the monthly values over all the years, just take the mean along axis=0 > > >> If you have a monthly frequency: just use .convert('A') and compute >> the stats for each column. > > Wouldn't this convert it to an Annual frequency, ie one value per > year? It does, but because you don't precise a func, you end up with a 2D array http://pytseries.sourceforge.net/generated/scikits.timeseries.TimeSeries.convert.html#scikits.timeseries.TimeSeries.convert >> The generic idea is indeed to first convert to monthly. You end up >> with a (12*n, 31) series that you have to reshape > > I get the 12*n , ie 12 months * number of years. Where does the 31 > come from. Does the series still contain the daily values after > conversion to monthly? I was talking about a 'D'aily series to convert to monthly, without using a 'func' parameter. You then end up with a 2D series, with 12*n rows and 31 columns (because there's at most 31 days per month). Note that the last column (day 31) would be masked for Feb, Apr, Jun, Sep, Nov. _______________________________________________ SciPy-user mailing list SciPy-user at scipy.org http://projects.scipy.org/mailman/listinfo/scipy-user From cimrman3 at ntc.zcu.cz Fri Mar 6 04:56:14 2009 From: cimrman3 at ntc.zcu.cz (Robert Cimrman) Date: Fri, 06 Mar 2009 10:56:14 +0100 Subject: [SciPy-user] ANN: PyAMG v1.0 (Algebraic Multigrid Solvers in Python) In-Reply-To: References: Message-ID: <49B0F33E.2020609@ntc.zcu.cz> Hi Nathan, thank you for PyAMG - it's great to have multigrid solvers in Python. Best regards, r. Nathan Bell wrote: > We are pleased to announce the first release of PyAMG: Algebraic > Multigrid Solvers in Python. 
With a user-friendly interface and > efficient implementation, PyAMG addresses the growing demand for > scalable solvers by non-experts. PyAMG features implementations of > * Ruge-Stuben (RS) or Classical AMG > * AMG based on Smoothed Aggregation (SA) > and experimental support for > * Adaptive Smoothed Aggregation (=E1SA) > * Compatible Relaxation (CR) > along with many tunable options for coarsening, interpolation, > relaxation, prolongator smoothing. > > > Our goal with the PyAMG project is to provide a framework for existing > AMG methods and to allow for quick testing and prototyping of > additional functionality and algorithms. Specifically, our objectives > are: > * ease of use > o interface is accessible to non-experts > o extensive documentation and references > * speed > o solves problems with millions of unknowns efficiently > o core multigrid algorithms are implemented in C++ > o sparse matrix support is provided by scipy.sparse > * readability > o source code is organized into intuitive components > * extensibility > o core components can be reused to implement additional techniques > o new features are easy integrated > * experimentation > o facilitates rapid prototyping and analysis of multigrid methods > * portability > o tested on several platforms (Linux, Windows, Mac) > o relies only on Python, NumPy, and SciPy > > > PyAMG relies extensively on the NumPy and SciPy libraries for > scientific programming with Python. We thank the NumPy/SciPy > community for their support and continued efforts. > > > For more information see > http://www.pyamg.org > > > PyAMG developers: > Nathan Bell (http://graphics.cs.uiuc.edu/~wnbell/) > Luke Olson (http://www.cs.uiuc.edu/homes/lukeo/) > Jacob Schroder (www.cse.uiuc.edu/~jschrod3/) From fperez.net at gmail.com Fri Mar 6 10:24:31 2009 From: fperez.net at gmail.com (Fernando Perez) Date: Fri, 6 Mar 2009 07:24:31 -0800 Subject: [SciPy-user] ANN: PyAMG v1.0 (Algebraic Multigrid Solvers in Python) In-Reply-To: <49B0F33E.2020609@ntc.zcu.cz> References: <49B0F33E.2020609@ntc.zcu.cz> Message-ID: On Fri, Mar 6, 2009 at 1:56 AM, Robert Cimrman wrote: > Hi Nathan, > > thank you for PyAMG - it's great to have multigrid solvers in Python. Seconded, many thanks! This is great to have... We just finished the SIAM CSE09 python panel (I'll post a little writeup soon and will get the slides from the speakers), it would have been great to know you were working on this to have you give a talk here :) Next time... When you feel the time is right, it would be worth listing it in the canonical 'yellow pages' for scipy: http://www.scipy.org/Topical_Software Cheers, f From pgmdevlist at gmail.com Fri Mar 6 11:36:36 2009 From: pgmdevlist at gmail.com (Pierre GM) Date: Fri, 6 Mar 2009 11:36:36 -0500 Subject: [SciPy-user] scikits.timeseries : how would Iplot (orcalculate) monthly statistics. In-Reply-To: <49AFD868.63BA.009B.0@twdb.state.tx.us> References: <49AE54AE.63BA.009B.0@twdb.state.tx.us> <34C4E9B1-B92A-4B41-A1CE-0EF74C7FAF3B@gmail.com> <49AE9DAC.63BA.009B.0@twdb.state.tx.us> <9717FF09-ACFB-4848-ACEB-A705809F0FA9@gmail.com> <49AFD868.63BA.009B.0@twdb.state.tx.us> Message-ID: <298D30C7-EF7A-4167-95A4-FF4CE8F04583@gmail.com> On Mar 5, 2009, at 2:49 PM, Dharhas Pothina wrote: > > Thank you. This worked great and saved me a lot of time. Good to hear! > I'm slowly getting the hang of this package. Sorry for the obvious > question but I assume that ma.mean() etc automatically excludes > masked values from the calculation. Indeed. 
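A quick illustration, with made-up numbers (only the masked-value
behaviour matters here):

import numpy.ma as ma

x = ma.array([1., 2., 3., 4.], mask=[0, 0, 1, 0])  # third value masked
print ma.mean(x)   # 2.333... = (1 + 2 + 4)/3, the masked entry is ignored
print x.mean()     # same result through the method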
Most of the time, the np and ma versions of functions behave similarly, but it is recommended to stick to the ma versions. Here, ma.mean computes the mean by discarding masked values. From stefan at sun.ac.za Fri Mar 6 11:42:07 2009 From: stefan at sun.ac.za (=?ISO-8859-1?Q?St=E9fan_van_der_Walt?=) Date: Fri, 6 Mar 2009 18:42:07 +0200 Subject: [SciPy-user] server migration not complete In-Reply-To: References: Message-ID: <9457e7c80903060842x7f6c0112s60d2c68fb80f4313@mail.gmail.com> Hi Timmie 2009/3/4 Timmie : > Dear scipy administrators, > it seems that http://www.scipy.org/scipy/scikits/browser/trunk/ > is not uptodate with the svn repository. I've migrated the trac, so please let me know if you see anything out of place. Cheers St?fan From gaedol at gmail.com Fri Mar 6 12:03:10 2009 From: gaedol at gmail.com (Marco) Date: Fri, 6 Mar 2009 18:03:10 +0100 Subject: [SciPy-user] Cycling the arrays Message-ID: Hi all, just a quick question: what is the fastest way to cycle inside an N dimensional array? I just want to visit each and every element in the N dimensional array, then do things. I can do it with N 'for' cycles, obviously. Is there a more compact and maybe faster way to do it? Thanks in advance, marco From warren.weckesser at gmail.com Fri Mar 6 12:16:01 2009 From: warren.weckesser at gmail.com (Warren Weckesser) Date: Fri, 6 Mar 2009 11:16:01 -0600 Subject: [SciPy-user] Cycling the arrays In-Reply-To: References: Message-ID: <114880320903060916n7a323279r2a09478a0120f041@mail.gmail.com> Marco, The 'flat' attribute will give you an iterator over the elements of the array. For example, -------------------------------------------------------------------- In [1]: from numpy.random import rand In [2]: a = rand(2,3,4) In [3]: a Out[3]: array([[[ 0.14858246, 0.75368248, 0.40294597, 0.42646085], [ 0.88369801, 0.00360736, 0.58419198, 0.13083728], [ 0.61364837, 0.43983396, 0.90157461, 0.21592489]], [[ 0.41765192, 0.08989583, 0.15821354, 0.53446492], [ 0.96593351, 0.21903464, 0.16993252, 0.74305159], [ 0.41083145, 0.40778255, 0.41558961, 0.59216869]]]) In [4]: for x in a.flat: ...: print x, x**2 ...: ...: 0.148582455636 0.0220767461228 0.75368248355 0.56803728601 0.402945971634 0.162365456056 0.426460853634 0.181868859683 0.883698012759 0.780922177755 0.00360735939407 1.3013041798e-05 0.584191982439 0.341280272346 0.130837280824 0.0171183940535 0.613648370127 0.376564322159 0.439833959795 0.193453912189 0.901574608475 0.812836774646 0.215924889889 0.0466235580736 0.41765191591 0.174433122864 0.0898958274532 0.0080812597935 0.158213541673 0.0250315247686 0.53446492479 0.285652755831 0.965933507608 0.93302754112 0.21903464047 0.047976173726 0.169932518795 0.0288770609439 0.743051591064 0.552125666983 0.410831453714 0.168782483361 0.407782547576 0.166286606108 0.415589605938 0.172714720563 0.592168688173 0.350663755252 In [5]: ------------------------------------------------------------ On Fri, Mar 6, 2009 at 11:03 AM, Marco wrote: > Hi all, > > just a quick question: what is the fastest way to cycle inside an N > dimensional array? > I just want to visit each and every element in the N dimensional > array, then do things. > > I can do it with N 'for' cycles, obviously. > Is there a more compact and maybe faster way to do it? > > Thanks in advance, > > marco > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From sturla at molden.no Fri Mar 6 12:45:42 2009 From: sturla at molden.no (Sturla Molden) Date: Fri, 6 Mar 2009 18:45:42 +0100 (CET) Subject: [SciPy-user] Cycling the arrays In-Reply-To: References: Message-ID: > just a quick question: what is the fastest way to cycle inside an N > dimensional array? > I just want to visit each and every element in the N dimensional > array, then do things. > > I can do it with N 'for' cycles, obviously. > Is there a more compact and maybe faster way to do it? It depends on what you want to do. You have to be more specific than this. Generally, 'for' loops may be evil but not always. An example of a for loop that may not be evil is: for i in range(20): do_a_lot_of_work( array[i,:,:] ) Evil for loops are those that are (1a) very long or (1b) executed repetitively, and (2) each iteration do a relatively small amount of work. These are candidates for vectorization or migration to a lower level language (Cython, C or Fortran). Sturla Molden From Dharhas.Pothina at twdb.state.tx.us Fri Mar 6 15:26:33 2009 From: Dharhas.Pothina at twdb.state.tx.us (Dharhas Pothina) Date: Fri, 06 Mar 2009 14:26:33 -0600 Subject: [SciPy-user] scikits.timeseries fill between two curves Message-ID: <49B13299.63BA.009B.0@twdb.state.tx.us> Hi, I was wondering if there was a way to shade the region between two curves on a scikits.timeseries plot. I see that matplotlib has a fill_between() function and I was wondering if there was a way to use that in a tsplot. Thanks - dharhas From sbucking at gmail.com Fri Mar 6 15:59:28 2009 From: sbucking at gmail.com (Scott Bucking) Date: Fri, 6 Mar 2009 15:59:28 -0500 Subject: [SciPy-user] implementation of multi-objective genetic algorithm in scipy Message-ID: <3114a7250903061259p5642c92bic91cbede5a9a2fc6@mail.gmail.com> Hi, Anyone familiar with a multi-objective genetic algorithm (MOGA) written in python? I saw a note on the scipy website that a GA existed (under the optimize package). Does this kit include a MOGA? I can't find any reference to a GA in the documentation or source code. Cheers, Scott From pgmdevlist at gmail.com Fri Mar 6 16:11:36 2009 From: pgmdevlist at gmail.com (Pierre GM) Date: Fri, 6 Mar 2009 16:11:36 -0500 Subject: [SciPy-user] scikits.timeseries fill between two curves In-Reply-To: <49B13299.63BA.009B.0@twdb.state.tx.us> References: <49B13299.63BA.009B.0@twdb.state.tx.us> Message-ID: <41EEC95B-E4BC-4AF8-A5EB-53D0E224324D@gmail.com> Dharhas, Not directly, but there is some workaround. After all, a TimeSeriesPlot is only a subclass of SubPlot, so all its methods are available. Keep in mind that the dates are just an array of integers used for the xaxis. You can access it with the `xdata` attribute. For example, the following code works: from scikits.timeseries.lib import moving_funcs as mf import scikits.timeseries.lib.plotlib as tpl coseries = ts.time_series(np.cos(np.arange(36)), start_date=ts.Date('M', '2001-01')) snseries = ts.time_series(np.cos(np.arange(36)+3), start_date=ts.Date('M', '2001-01')) tpl.tsplot(series=coseries) fsp = tpl.gca() fsp.tsplot(snseries) fsp.fill_between(fsp.xdata, coseries, snseries, color='grey') On Mar 6, 2009, at 3:26 PM, Dharhas Pothina wrote: > Hi, > > I was wondering if there was a way to shade the region between two > curves on a scikits.timeseries plot. I see that matplotlib has a > fill_between() function and I was wondering if there was a way to > use that in a tsplot. 
> > Thanks > > - dharhas > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user -------------- next part -------------- An HTML attachment was scrubbed... URL: From josef.pktd at gmail.com Fri Mar 6 16:24:42 2009 From: josef.pktd at gmail.com (josef.pktd at gmail.com) Date: Fri, 6 Mar 2009 16:24:42 -0500 Subject: [SciPy-user] implementation of multi-objective genetic algorithm in scipy In-Reply-To: <3114a7250903061259p5642c92bic91cbede5a9a2fc6@mail.gmail.com> References: <3114a7250903061259p5642c92bic91cbede5a9a2fc6@mail.gmail.com> Message-ID: <1cd32cbb0903061324tf5dbfa4y5cb2dc8c646deccf@mail.gmail.com> On Fri, Mar 6, 2009 at 3:59 PM, Scott Bucking wrote: > Hi, > > Anyone familiar with a multi-objective genetic algorithm (MOGA) > written in python? ?I saw a note on the scipy website that a GA > existed (under the optimize package). ?Does this kit include a MOGA? > > I can't find any reference to a GA in the documentation or source code. > > Cheers, scipy.optimize has simulated annealing `anneal` there is a genetic algorithm that has been moved to the `learn` scikits, but I never used it, and it hasn't been modified in 3 years. http://scipy.org/scipy/scikits/browser/trunk/learn/scikits/learn/machine/ga I don't think there is a MOGA. There are several other machine learning packages for python, but I don't know if any of them contains more. If your optimization is computationally not very expensive, then you could trace the pareto frontier (of the convex hull of the feasible payoff set) by varying linear weight parameters of the objective functions. Josef From pgmdevlist at gmail.com Fri Mar 6 16:50:38 2009 From: pgmdevlist at gmail.com (Pierre GM) Date: Fri, 6 Mar 2009 16:50:38 -0500 Subject: [SciPy-user] Scikits or not scikits... Message-ID: All, What are the requirements for a scikits, not so much in terms of implementation than in terms of contents ? Over the last couple of years, I have developed a collection of utilities to manipulate environmental time series (such as computing potential evapotranspiration, plotting flow-duration curves, adding climate information and so forth) that I'd like to share w/ the community. These utilities are based on out scikits.timeseries package, and were developed using the scikits structure. There's a preversion here: http://bazaar.launchpad.net/~pierregm/scipy/climpy/files (The whole package has since been reorganized w/ docs and adapted to the latest version of timeseries). Bzr oesn't play nice w/ Eclipse and I'm considering dropping it. I was wondering whether it'd be OK to make these modules an official part of the scikits or not. Waiting for your comments, P. From robert.kern at gmail.com Fri Mar 6 16:52:21 2009 From: robert.kern at gmail.com (Robert Kern) Date: Fri, 6 Mar 2009 15:52:21 -0600 Subject: [SciPy-user] Scikits or not scikits... In-Reply-To: References: Message-ID: <3d375d730903061352w537b811dhb11462ca07e521bc@mail.gmail.com> On Fri, Mar 6, 2009 at 15:50, Pierre GM wrote: > All, > What are the requirements for a scikits, not so much in terms of > implementation than in terms of contents ? > Over the last couple of years, I have developed a collection of > utilities to manipulate environmental time series (such as computing > potential evapotranspiration, plotting flow-duration curves, adding > climate information and so forth) that I'd like to share w/ the > community. 
These utilities are based on out scikits.timeseries > package, and were developed using the scikits structure. > There's a preversion ?here: > http://bazaar.launchpad.net/~pierregm/scipy/climpy/files > (The whole package has since been reorganized w/ docs and adapted to > the latest version of timeseries). > > Bzr oesn't play nice w/ Eclipse and I'm considering dropping it. I was > wondering whether it'd be OK to make these modules an official part of > the scikits or not. Absolutely. -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From wnbell at gmail.com Fri Mar 6 16:55:42 2009 From: wnbell at gmail.com (Nathan Bell) Date: Fri, 6 Mar 2009 16:55:42 -0500 Subject: [SciPy-user] ANN: PyAMG v1.0 (Algebraic Multigrid Solvers in Python) In-Reply-To: References: <49B0F33E.2020609@ntc.zcu.cz> Message-ID: On Fri, Mar 6, 2009 at 10:24 AM, Fernando Perez wrote: > Seconded, many thanks! This is great to have... We just finished the > SIAM CSE09 python panel (I'll post a little writeup soon and will get > the slides from the speakers), it would have been great to know you > were working on this to have you give a talk here :) ?Next time... Absolutely! > When you feel the time is right, it would be worth listing it in the > canonical 'yellow pages' for scipy: > > http://www.scipy.org/Topical_Software > Done. For lack of a better category I put it under "Science: basic tools". -- Nathan Bell wnbell at gmail.com http://graphics.cs.uiuc.edu/~wnbell/ From matthieu.brucher at gmail.com Fri Mar 6 17:21:45 2009 From: matthieu.brucher at gmail.com (Matthieu Brucher) Date: Fri, 6 Mar 2009 23:21:45 +0100 Subject: [SciPy-user] Scikits or not scikits... In-Reply-To: References: Message-ID: > Bzr oesn't play nice w/ Eclipse and I'm considering dropping it. I was > wondering whether it'd be OK to make these modules an official part of > the scikits or not. I'm using Bazaar with Eclipse in my everyday work (Linux), and I don't have the feeling that they do not play nice. I have a Bazaar + XML plugin that runs fine, safe from the fact that it doesn't tell me that the local branch must be merged before a new push. Using it for 6 months now, I don't regret it. Matthieu -- Information System Engineer, Ph.D. Website: http://matthieu-brucher.developpez.com/ Blogs: http://matt.eifelle.com and http://blog.developpez.com/?blog=92 LinkedIn: http://www.linkedin.com/in/matthieubrucher From mcsefl at gmail.com Fri Mar 6 17:52:22 2009 From: mcsefl at gmail.com (mcsefl) Date: Fri, 6 Mar 2009 17:52:22 -0500 Subject: [SciPy-user] histogram q's Message-ID: <9fddf64a0903061452h7936511y458cdaf8586be445@mail.gmail.com> Hi Everyone, Is there a function in scipy.stats (I can't seem to find it...) that will calculate the Freedman-Diaconis bin size for a data set you want to histogram? Thanks, Frank -------------- next part -------------- An HTML attachment was scrubbed... URL: From mcsefl at gmail.com Fri Mar 6 18:20:13 2009 From: mcsefl at gmail.com (mcsefl) Date: Fri, 6 Mar 2009 18:20:13 -0500 Subject: [SciPy-user] histogram q's Message-ID: <9fddf64a0903061520y1e9fae8cg6474f0961dd337f9@mail.gmail.com> Hi all, Sorry to bother on the F-D bin size question. It was easier to code than I thought because of the scoreatpercentile function. 
Even if scipy.stats doesn't have it, here is my implementation: from scipy.stats import scoreatpercentile from numpy import sort def FDbinSize(X): """Calculates the Freedman-Diaconis bin size for a data set for use in making a histogram Arguments: X: 1D Data set Returns: h: F-D bin size """ # First Calculate the interquartile range X = sort(X) upperQuartile = scoreatpercentile(X,.75) lowerQuartile = scoreatpercentile(X,.25) IQR = upperQuartile - lowerQuartile # Find the F-D bin size h = 2.*IQR/len(X)**(1./3.) return h -Frank -------------- next part -------------- An HTML attachment was scrubbed... URL: From avi at sicortex.com Sat Mar 7 11:54:20 2009 From: avi at sicortex.com (Avi Purkayastha) Date: Sat, 7 Mar 2009 10:54:20 -0600 Subject: [SciPy-user] help on fortran compiler Message-ID: Hi, I am trying to build scipy v0.7.0 on SiCortex platform (MIPS64 architecture on linux). However the problem is that the script does not recognize the pathscale fortran compiler I am passing.. The build step was % python setup.py build --fcompiler=pathf95 After the build process proceeds for a while, I get.. : don't know how to compile Fortran code on platform 'posix' with 'pathf95' compiler. Supported compilers are: compaq,none,absoft,nag,gnu,sun,lahey,intelem,gnu95,intelv,g95,intele,int elev,pg,intel,mips,hpux,vast,ibm) building 'dfftpack' library error: library dfftpack has Fortran sources but no Fortran compiler found How do I force the script to accept pathf95 since I have no other fortran compilers? Both BLAS and LAPACK have been built with pathscale and that is recognized in the build step. : ATLAS version 3.7.32 built by root on Fri Jul 25 11:31:20 EDT 2008: UNAME : Linux sf1-m0n0.scsystem 2.6.18.8-sc-lustre-perfmon #1 SMP Fri Jul 4 18:40:20 Local time zone must be set--see zic m mips64 SiCortex ICE9A V1.0 FPU V0.1 SiCortex SC-1000 GNU/Linux INSTFLG : -1 0 -a 1 ARCHDEFS : -DATL_OS_Linux -DATL_ARCH_MIPSICE9 -DATL_USE64BITS F2CDEFS : -DAdd__ -DF77_INTEGER=int -DStringSunStyle CACHEEDGE: 131072 F77 : pathf95, version F77FLAGS : -mabi=64 SMC : gcc, version gcc (GCC) 4.1.2 (Gentoo 4.1.2) SMCFLAGS : -O2 -mips64 -march=5kf -mtune=5kf -fno-schedule-insns - fschedule-insns2 -fno-peephole -fno-peephole2 -mabi=64 SKC : gcc, version gcc (GCC) 4.1.2 (Gentoo 4.1.2) SKCFLAGS : -O2 -mips64 -march=5kf -mtune=5kf -fno-schedule-insns - fschedule-insns2 -fno-peephole -fno-peephole2 -mabi=64 success! : Here are some other informations.. 1) (sc1-m3n6:~/builds/scipy-0.7.0) avi% uname -a Linux sc1-m3n6.scsystem 2.6.18.8-sc-lustre-perfmon #1 SMP Thu Sep 25 13:19:45 EDT 2008 mips64 SiCortex ICE9B V1.0 FPU V0.1 SiCortex SC-1000 GNU/Linux 2) (sc1-m3n6:~/builds/scipy-0.7.0) avi% python -c 'import os,sys;print os.name,sys.platform' posix linux2 II) I have a second issue once the fortran issue is resolved: we do not have dfftpack or umfpack built on our system. Does scipy build them, if not found in our library stack, so can I force the script to ignore dfftpack and umfpack? Thanks for any help and suggestions. -- Avi -------------- next part -------------- An HTML attachment was scrubbed... URL: From cournape at gmail.com Sat Mar 7 11:58:38 2009 From: cournape at gmail.com (David Cournapeau) Date: Sun, 8 Mar 2009 01:58:38 +0900 Subject: [SciPy-user] help on fortran compiler In-Reply-To: References: Message-ID: <5b8d13220903070858o3a0633f2xb683281ccb6ab2b5@mail.gmail.com> On Sun, Mar 8, 2009 at 1:54 AM, Avi Purkayastha wrote: : > don't know how to compile Fortran code on platform 'posix' with 'pathf95' > compiler. 
Supported compilers are: > compaq,none,absoft,nag,gnu,sun,lahey,intelem,gnu95,intelv,g95,intele,intelev,pg,intel,mips,hpux,vast,ibm) > building 'dfftpack' library > error: library dfftpack has Fortran sources but no Fortran compiler found > How do I force the script to accept pathf95 since I have no other fortran > compilers? You have to add your own fortran compiler tool (look at numpy sources, in numpy?distutils?fcompiler for examples). > II) I have a second issue once the fortran issue is resolved: we do not have > dfftpack or umfpack built on our system. Does scipy build them, if not found > in our library stack, so can I force the script to ignore dfftpack and > umfpack? dfftpack is included in scipy sources, so this is no problem. Umfpack is not mandatory. Only BLAS and LAPACK are mandatory for scipy. cheers, David From elmickerino at hotmail.com Sat Mar 7 19:18:58 2009 From: elmickerino at hotmail.com (ElMickerino) Date: Sat, 7 Mar 2009 16:18:58 -0800 (PST) Subject: [SciPy-user] curve_fit failure when supplying uncertainties Message-ID: <22380378.post@talk.nabble.com> I think I've identified a problem with curve_fit which occurs when one attempts to fit normally distributed data with a gaussian. From the documentation of curve_fit, it appears that 'sigma' should be the uncertainties on the y-values of the data; however, the following example (see attached code) should make it clear that there's a problem with this. My best guess is that the sigma are actually weights (=1.0/sigma). Can anyone confirm or deny this? Seems like from the name it should be uncertainties but from the behavior of the code it appears otherwise. Also, I was wondering if there's a way to supply asymmetric errors to curve_fit (or for that matter, to leastsqr or any wrapper thereof). Thanks very much, Michael http://www.nabble.com/file/p22380378/testFit.py testFit.py -- View this message in context: http://www.nabble.com/curve_fit-failure-when-supplying-uncertainties-tp22380378p22380378.html Sent from the Scipy-User mailing list archive at Nabble.com. From josef.pktd at gmail.com Sat Mar 7 20:20:07 2009 From: josef.pktd at gmail.com (josef.pktd at gmail.com) Date: Sat, 7 Mar 2009 20:20:07 -0500 Subject: [SciPy-user] curve_fit failure when supplying uncertainties In-Reply-To: <22380378.post@talk.nabble.com> References: <22380378.post@talk.nabble.com> Message-ID: <1cd32cbb0903071720v1c3547c5ub72fc9f671f5ce50@mail.gmail.com> On Sat, Mar 7, 2009 at 7:18 PM, ElMickerino wrote: > > I think I've identified a problem with curve_fit which occurs when one > attempts to fit normally distributed data with a gaussian. ?From the > documentation of curve_fit, it appears that 'sigma' should be the > uncertainties on the y-values of the data; however, the following example > (see attached code) should make it clear that there's a problem with this. > > My best guess is that the sigma are actually weights (=1.0/sigma). ?Can > anyone confirm or deny this? ?Seems like from the name it should be > uncertainties but from the behavior of the code it appears otherwise. > > Also, I was wondering if there's a way to supply asymmetric errors to > curve_fit (or for that matter, to leastsqr or any wrapper thereof). > > Thanks very much, > Michael > > http://www.nabble.com/file/p22380378/testFit.py testFit.py > -- > View this message in context: http://www.nabble.com/curve_fit-failure-when-supplying-uncertainties-tp22380378p22380378.html > Sent from the Scipy-User mailing list archive at Nabble.com. 
> > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user > sigma is the vector of standard deviations for the observations. The weights in the least squares objective function are 1/sigma. Essentially it is a heteroscedastic non-linear generalized least squares estimation. In your case err contains zeros, which means the variance is zero and the weight 1/sigma for these observations would be infinite. curve_fit cannot handle zero standard deviation. Infinite standard deviation works because then the weight becomes zero. If you force your err to be strictly positive, then this works, e.g. >>> curve_fit(gaus, centers, data, sigma=1e-6+err) (array([ 1.91520894e+03, 1.68957830e-01, 6.19128687e-01]), array([[ 3.03590409e-09, 5.11571694e-18, -6.69243949e-18], [ 5.11571694e-18, 4.43479563e-16, 6.18126633e-17], [ -6.69243949e-18, 6.18126633e-17, 3.02641036e-17]])) I haven't looked closely at the rest of your file, so I'm not sure about the interpretation of your err Josef From elmickerino at hotmail.com Sat Mar 7 20:43:32 2009 From: elmickerino at hotmail.com (ElMickerino) Date: Sat, 7 Mar 2009 17:43:32 -0800 (PST) Subject: [SciPy-user] curve_fit failure when supplying uncertainties In-Reply-To: <1cd32cbb0903071720v1c3547c5ub72fc9f671f5ce50@mail.gmail.com> References: <22380378.post@talk.nabble.com> <1cd32cbb0903071720v1c3547c5ub72fc9f671f5ce50@mail.gmail.com> Message-ID: <22394056.post@talk.nabble.com> Great, thanks so much for your help. I seem to recall that empty bins require special handling when fitting histograms, so this makes sense. I was using variance of the bin content, which are assumed to poisson distributed (ie. if bin 'i' has content N_i, then var(bin_i) = sqrt(N_i) ). This works fine for large N, but I believe that asymmetric errors must be used for small bin content. Is there a convenient way of supplying asymmetric errors to a least square fit in scipy? Thanks again, Michael josef.pktd wrote: > > On Sat, Mar 7, 2009 at 7:18 PM, ElMickerino > wrote: >> >> I think I've identified a problem with curve_fit which occurs when one >> attempts to fit normally distributed data with a gaussian. ?From the >> documentation of curve_fit, it appears that 'sigma' should be the >> uncertainties on the y-values of the data; however, the following example >> (see attached code) should make it clear that there's a problem with >> this. >> >> My best guess is that the sigma are actually weights (=1.0/sigma). ?Can >> anyone confirm or deny this? ?Seems like from the name it should be >> uncertainties but from the behavior of the code it appears otherwise. >> >> Also, I was wondering if there's a way to supply asymmetric errors to >> curve_fit (or for that matter, to leastsqr or any wrapper thereof). >> >> Thanks very much, >> Michael >> >> http://www.nabble.com/file/p22380378/testFit.py testFit.py >> -- >> View this message in context: >> http://www.nabble.com/curve_fit-failure-when-supplying-uncertainties-tp22380378p22380378.html >> Sent from the Scipy-User mailing list archive at Nabble.com. >> >> _______________________________________________ >> SciPy-user mailing list >> SciPy-user at scipy.org >> http://mail.scipy.org/mailman/listinfo/scipy-user >> > > sigma is the vector of standard deviations for the observations. The > weights in the least squares objective function are 1/sigma. > Essentially it is a heteroscedastic non-linear generalized least > squares estimation. 
> > In your case err contains zeros, which means the variance is zero and > the weight 1/sigma for these observations would be infinite. curve_fit > cannot handle zero standard deviation. Infinite standard deviation > works because then the weight becomes zero. > > If you force your err to be strictly positive, then this works, e.g. > >>>> curve_fit(gaus, centers, data, sigma=1e-6+err) > (array([ 1.91520894e+03, 1.68957830e-01, 6.19128687e-01]), > array([[ 3.03590409e-09, 5.11571694e-18, -6.69243949e-18], > [ 5.11571694e-18, 4.43479563e-16, 6.18126633e-17], > [ -6.69243949e-18, 6.18126633e-17, 3.02641036e-17]])) > > > I haven't looked closely at the rest of your file, so I'm not sure > about the interpretation of your err > > Josef > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user > > -- View this message in context: http://www.nabble.com/curve_fit-failure-when-supplying-uncertainties-tp22380378p22394056.html Sent from the Scipy-User mailing list archive at Nabble.com. From peridot.faceted at gmail.com Sat Mar 7 20:52:05 2009 From: peridot.faceted at gmail.com (Anne Archibald) Date: Sat, 7 Mar 2009 20:52:05 -0500 Subject: [SciPy-user] curve_fit failure when supplying uncertainties In-Reply-To: <22394056.post@talk.nabble.com> References: <22380378.post@talk.nabble.com> <1cd32cbb0903071720v1c3547c5ub72fc9f671f5ce50@mail.gmail.com> <22394056.post@talk.nabble.com> Message-ID: 2009/3/7 ElMickerino : > > Great, thanks so much for your help. ?I seem to recall that empty bins > require special handling when fitting histograms, so this makes sense. ?I > was using variance of the bin content, which are assumed to poisson > distributed (ie. if bin 'i' has content N_i, then var(bin_i) = sqrt(N_i) ). > This works fine for large N, but I believe that asymmetric errors must be > used for small bin content. > > Is there a convenient way of supplying asymmetric errors to a least square > fit in scipy? If you get into small-number statistics like this, least-squares fitting will no longer give you the right answer. If the issue is just a few bins with low values you might be able to get away with discarding them from the fit, but that's very iffy statistically. You can do a maximum-likelihood fit directly, by constructing the likelihood (-log probability of your data) using scipy.stats.poisson. Moreover, if you're doing this to a histogram, you may want to think about avoiding the binning entirely and fitting the unbinned data directly, possibly going for a maximum probability on the K-S test (also in scipy, though you may want to look at how it works and write your own for fitting purposes). However you do it I strongly advise doing a round-trip: take the best-fit model, generate some fake data sets, and fit them using the same code. Not only does this check that your fitting code does what you think it does, it also gives you an estimate of the errors on the fitted parameters. Anne > Thanks again, > Michael > > > josef.pktd wrote: >> >> On Sat, Mar 7, 2009 at 7:18 PM, ElMickerino >> wrote: >>> >>> I think I've identified a problem with curve_fit which occurs when one >>> attempts to fit normally distributed data with a gaussian. ?From the >>> documentation of curve_fit, it appears that 'sigma' should be the >>> uncertainties on the y-values of the data; however, the following example >>> (see attached code) should make it clear that there's a problem with >>> this. 
>>> >>> My best guess is that the sigma are actually weights (=1.0/sigma). ?Can >>> anyone confirm or deny this? ?Seems like from the name it should be >>> uncertainties but from the behavior of the code it appears otherwise. >>> >>> Also, I was wondering if there's a way to supply asymmetric errors to >>> curve_fit (or for that matter, to leastsqr or any wrapper thereof). >>> >>> Thanks very much, >>> Michael >>> >>> http://www.nabble.com/file/p22380378/testFit.py testFit.py >>> -- >>> View this message in context: >>> http://www.nabble.com/curve_fit-failure-when-supplying-uncertainties-tp22380378p22380378.html >>> Sent from the Scipy-User mailing list archive at Nabble.com. >>> >>> _______________________________________________ >>> SciPy-user mailing list >>> SciPy-user at scipy.org >>> http://mail.scipy.org/mailman/listinfo/scipy-user >>> >> >> sigma is the vector of standard deviations for the observations. The >> weights in the least squares objective function are 1/sigma. >> Essentially it is a heteroscedastic non-linear generalized least >> squares estimation. >> >> In your case err contains zeros, which means the variance is zero and >> the weight 1/sigma for these observations would be infinite. curve_fit >> cannot handle zero standard deviation. Infinite standard deviation >> works because then the weight becomes zero. >> >> If you force your err to be strictly positive, then this works, e.g. >> >>>>> curve_fit(gaus, centers, data, sigma=1e-6+err) >> (array([ ?1.91520894e+03, ? 1.68957830e-01, ? 6.19128687e-01]), >> array([[ ?3.03590409e-09, ? 5.11571694e-18, ?-6.69243949e-18], >> ? ? ? ?[ ?5.11571694e-18, ? 4.43479563e-16, ? 6.18126633e-17], >> ? ? ? ?[ -6.69243949e-18, ? 6.18126633e-17, ? 3.02641036e-17]])) >> >> >> I haven't looked closely at the rest of your file, so I'm not sure >> about the interpretation of your err >> >> Josef >> _______________________________________________ >> SciPy-user mailing list >> SciPy-user at scipy.org >> http://mail.scipy.org/mailman/listinfo/scipy-user >> >> > > -- > View this message in context: http://www.nabble.com/curve_fit-failure-when-supplying-uncertainties-tp22380378p22394056.html > Sent from the Scipy-User mailing list archive at Nabble.com. > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user > From josef.pktd at gmail.com Sat Mar 7 22:22:15 2009 From: josef.pktd at gmail.com (josef.pktd at gmail.com) Date: Sat, 7 Mar 2009 22:22:15 -0500 Subject: [SciPy-user] curve_fit failure when supplying uncertainties In-Reply-To: References: <22380378.post@talk.nabble.com> <1cd32cbb0903071720v1c3547c5ub72fc9f671f5ce50@mail.gmail.com> <22394056.post@talk.nabble.com> Message-ID: <1cd32cbb0903071922x3bb67269p223399a69d5d816b@mail.gmail.com> On Sat, Mar 7, 2009 at 8:52 PM, Anne Archibald wrote: > 2009/3/7 ElMickerino : >> >> Great, thanks so much for your help. ?I seem to recall that empty bins >> require special handling when fitting histograms, so this makes sense. ?I >> was using variance of the bin content, which are assumed to poisson >> distributed (ie. if bin 'i' has content N_i, then var(bin_i) = sqrt(N_i) ). >> This works fine for large N, but I believe that asymmetric errors must be >> used for small bin content. >> >> Is there a convenient way of supplying asymmetric errors to a least square >> fit in scipy? > > If you get into small-number statistics like this, least-squares > fitting will no longer give you the right answer. 
If the issue is just > a few bins with low values you might be able to get away with > discarding them from the fit, but that's very iffy statistically. You > can do a maximum-likelihood fit directly, by constructing the > likelihood (-log probability of your data) using scipy.stats.poisson. > Moreover, if you're doing this to a histogram, you may want to think > about avoiding the binning entirely and fitting the unbinned data > directly, possibly going for a maximum probability on the K-S test > (also in scipy, though you may want to look at how it works and write > your own for fitting purposes). > > However you do it I strongly advise doing a round-trip: take the > best-fit model, generate some fake data sets, and fit them using the > same code. Not only does this check that your fitting code does what > you think it does, it also gives you an estimate of the errors on the > fitted parameters. > > Anne Depends on what you want to estimate. If you want a known parametric distribution, then you are better of using maximum likelihood, or moment estimator or other estimators of the parameters. For unbounded support, the fit method in stats distribution works quite well (maximum likelihood estimator). If you want a non-parametric distribution, e.g. by fitting some arbitrary function or polynomial to the histogram, then the recent discussion by mcsfl: "histogram q's" with reference to Freedman-Diaconis might be helpful. It gives the optimal binsize for a quadratic estimation loss, the normalization is sample size * binsize, which is the same as your integral, Freedman-Diaconis give the variance in terms of the true distribution. (This is based on quick reading and no guarantee that I didn't misread things, and until recently I didn't even know that there is a theory behind histograms.) Hx = data/float(nGen)/widths[0] #in analogy to Freedman-Diaconis theoretical variance, but using histogram values: sig=np.sqrt(Hx*(1-widths[0]*Hx)/float(nGen)/widths[0]) curve_fit(gaus, centers, data/float(nGen)/widths[0], sigma=1e-9+sig) an idea is to drop empty bins, to avoid 0 variance problem >>> idx = (data/float(nGen)/widths[0]>0) >>> curve_fit(gaus, centers[idx], (data/float(nGen)/widths[0])[idx], sigma=sig[idx]) (array([ 0.99628941, -0.01978185, 1.00475741]), array([[ 4.97224664e-12, 9.23881182e-15, 1.06746682e-13], [ 9.23881182e-15, 5.26682373e-12, 1.33888239e-13], [ 1.06746682e-13, 1.33888239e-13, 2.72832368e-12]])) which is close to the moments of the sample >>> y.mean() -0.017628277951167625 >>> y.std() 1.0093475298725469 An alternative non-parametric approach is to use kernel density estimate in stats.kde Josef From rgiuly at gmail.com Sun Mar 8 03:22:18 2009 From: rgiuly at gmail.com (Rick Giuly) Date: Sat, 07 Mar 2009 23:22:18 -0800 Subject: [SciPy-user] maximum_filter1d size parameter Message-ID: <49B3722A.5090508@gmail.com> Hello All, It seems I'm not understanding the "size" parameter for maximum_filter1d. The documentation says size is the "length along which to calculate 1D maximum" Starting with this array: >>> a array([[1, 2, 3], [4, 1, 1], [1, 6, 7]]) Because there are 3 rows in the matrix, it seems to me that with size set to 3, the result of the filter would be: array([[4, 6, 7], [4, 6, 7], [4, 6, 7]]) But the result is this: >>> maximum_filter1d(a, 3, axis=0) array([[4, 2, 3], [4, 6, 7], [4, 6, 7]]) I don't see why. Any help is appreciated. 
-rick From josef.pktd at gmail.com Sun Mar 8 08:29:23 2009 From: josef.pktd at gmail.com (josef.pktd at gmail.com) Date: Sun, 8 Mar 2009 07:29:23 -0500 Subject: [SciPy-user] maximum_filter1d size parameter In-Reply-To: <49B3722A.5090508@gmail.com> References: <49B3722A.5090508@gmail.com> Message-ID: <1cd32cbb0903080529w47214706l8475fe5f19877026@mail.gmail.com> On Sun, Mar 8, 2009 at 2:22 AM, Rick Giuly wrote: > Hello All, > > It seems I'm not understanding the "size" parameter for > maximum_filter1d. The documentation says size is the "length along which > to calculate 1D maximum" > > Starting with this array: > ?>>> a > array([[1, 2, 3], > ? ? ? ?[4, 1, 1], > ? ? ? ?[1, 6, 7]]) > > > Because there are 3 rows in the matrix, it seems to me that with size > set to 3, the result of the filter would be: > array([[4, 6, 7], > ? ? ? ?[4, 6, 7], > ? ? ? ?[4, 6, 7]]) > > > But the result is this: > > ?>>> maximum_filter1d(a, 3, axis=0) > array([[4, 2, 3], > ? ? ? ?[4, 6, 7], > ? ? ? ?[4, 6, 7]]) > > > I don't see why. > > Any help is appreciated. > > -rick The filter is centered at the current point (keyword order=0), so at the boundary it uses only the boundary point and the point next to it. There are different modes (keyword mode), but none of them seem to deliver what you want. I tried different mode and order combinations, but it seems changing the order at the boundaries to use the filter in one direction only is not possible. Josef From rpg.314 at gmail.com Sun Mar 8 13:03:32 2009 From: rpg.314 at gmail.com (Rohit Garg) Date: Sun, 8 Mar 2009 22:33:32 +0530 Subject: [SciPy-user] numpy aligned memory Message-ID: <4d5dd8c20903081003m3322b5c4j21521f88eb647ae3@mail.gmail.com> Hi all, I saw this http://www.mail-archive.com/numpy-discussion at scipy.org/msg04005.html while googling for numpy memory alignment. I wish to know if anything on that account has come to pass yet? On linux 64 bit platform, can I assume anything beyond the glibc alignment as of now? -- Rohit Garg http://rpg-314.blogspot.com/ Senior Undergraduate Department of Physics Indian Institute of Technology Bombay From pav at iki.fi Sun Mar 8 13:27:55 2009 From: pav at iki.fi (Pauli Virtanen) Date: Sun, 8 Mar 2009 17:27:55 +0000 (UTC) Subject: [SciPy-user] numpy aligned memory References: <4d5dd8c20903081003m3322b5c4j21521f88eb647ae3@mail.gmail.com> Message-ID: Sun, 08 Mar 2009 22:33:32 +0530, Rohit Garg wrote: > I saw this > > http://www.mail-archive.com/numpy-discussion at scipy.org/msg04005.html > > while googling for numpy memory alignment. I wish to know if anything on > that account has come to pass yet? On linux 64 bit platform, can I > assume anything beyond the glibc alignment as of now? You cannot assume even glibc alignment, unless the array has the ALIGNED flag set. (This has actually caused a few bugs also in Numpy & Scipy, cf. eg. #551.) For example unpickling can produce arrays that are not aligned at itemsize boundaries, if the original array did not have its ALIGNED flag set. 
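If your code does rely on a particular alignment, it is safer to check
the buffer explicitly than to assume it. A throwaway helper, only as a
sketch (the 16-byte boundary is just an example -- use whatever your
SIMD code needs):

import numpy as np

def is_aligned(arr, boundary=16):
    # address of the first element, taken from the array interface
    address = arr.__array_interface__['data'][0]
    return bool(arr.flags.aligned) and address % boundary == 0

a = np.zeros(1000)
print a.flags.aligned, is_aligned(a)   # often True, but not guaranteed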
-- Pauli Virtanen From cournape at gmail.com Sun Mar 8 14:12:20 2009 From: cournape at gmail.com (David Cournapeau) Date: Mon, 9 Mar 2009 03:12:20 +0900 Subject: [SciPy-user] numpy aligned memory In-Reply-To: References: <4d5dd8c20903081003m3322b5c4j21521f88eb647ae3@mail.gmail.com> Message-ID: <5b8d13220903081112m3702155fw8d131c393dd896c9@mail.gmail.com> On Mon, Mar 9, 2009 at 2:27 AM, Pauli Virtanen wrote: > Sun, 08 Mar 2009 22:33:32 +0530, Rohit Garg wrote: >> I saw this >> >> http://www.mail-archive.com/numpy-discussion at scipy.org/msg04005.html >> >> while googling for numpy memory alignment. I wish to know if anything on >> that account has come to pass yet? On linux 64 bit platform, can I >> assume anything beyond the glibc alignment as of now? > > You cannot assume even glibc alignment, unless the array has the ALIGNED > flag set. (This has actually caused a few bugs also in Numpy & Scipy, cf. > eg. #551.) For example unpickling can produce arrays that are not aligned > at itemsize boundaries, if the original array did not have its ALIGNED > flag set. You can assume your malloc guaranteed alignment from the memory allocation functions, though. Even if/when aligned allocator will be available in numpy, only newly allocated buffers will be concerned. cheers, David From python-ml at nn7.de Sun Mar 8 16:44:46 2009 From: python-ml at nn7.de (Soeren Sonnenburg) Date: Sun, 08 Mar 2009 21:44:46 +0100 Subject: [SciPy-user] [RFC] running r, octave from python Message-ID: <1236545086.23095.26.camel@localhost> Dear all, a Shogun 0.7.1 is out and available at http://www.shogun-toolbox.org which contains one new feature that might be of interest to python-scipy/numpy users. The eierlegendewollmilchsau interface. In case you don't know what this term stands for use google images :-) It is one file that will interface shogun to octave,r,python,matlab. It provides commands to run code in foreign languages: Example: from elwms import elwms import numpy x=numpy.array([[1,2,3],[4,5,6]],dtype=numpy.float64) y=numpy.array([[7,8,9],[0,1,2]],dtype=numpy.float64) elwms('run_octave','octavecode', 'disp("hi")') a,b,c=elwms('run_octave','x', x, 'y', y, 'octavecode', 'class(x), disp(x),results=list(x+y,1,{"a"})') res1=elwms('run_octave','x', x, 'y', y, 'octavecode', 'disp(x); disp(y); results=x+y+rand(2,3)\n') res2=elwms('run_octave','A', ['test','bla','foo'], 'octavecode', ''' disp(A); disp("hi"); results={"a","b","c"} ''') This would pass around matrices x and y do some processing and return results. So you could use your old matlab scripts passing around strings cells, or whatever (u)int8/16/32, single/double matrix type. See http://www.shogun-toolbox.org/doc/elwmsinterface.html . Don't even try to run octave from python from octave etc nested. Neither octave, R nor python-numpy nor libshogun supports this :-) Soeren From rgiuly at gmail.com Sun Mar 8 19:25:20 2009 From: rgiuly at gmail.com (Rick Giuly) Date: Sun, 08 Mar 2009 16:25:20 -0700 Subject: [SciPy-user] maximum_filter1d size parameter In-Reply-To: <1cd32cbb0903080529w47214706l8475fe5f19877026@mail.gmail.com> References: <49B3722A.5090508@gmail.com> <1cd32cbb0903080529w47214706l8475fe5f19877026@mail.gmail.com> Message-ID: <49B453E0.8060502@gmail.com> Thanks Josef, that explains it. What I will probably do is use a size twice as large as the number of rows. -Rick josef.pktd at gmail.com wrote: > On Sun, Mar 8, 2009 at 2:22 AM, Rick Giuly wrote: >> Hello All, >> >> It seems I'm not understanding the "size" parameter for >> maximum_filter1d. 
The documentation says size is the "length along which >> to calculate 1D maximum" >> >> Starting with this array: >> >>> a >> array([[1, 2, 3], >> [4, 1, 1], >> [1, 6, 7]]) >> >> >> Because there are 3 rows in the matrix, it seems to me that with size >> set to 3, the result of the filter would be: >> array([[4, 6, 7], >> [4, 6, 7], >> [4, 6, 7]]) >> >> >> But the result is this: >> >> >>> maximum_filter1d(a, 3, axis=0) >> array([[4, 2, 3], >> [4, 6, 7], >> [4, 6, 7]]) >> >> >> I don't see why. >> >> Any help is appreciated. >> >> -rick > > The filter is centered at the current point (keyword order=0), so at > the boundary it uses only the boundary point and the point next to it. > There are different modes (keyword mode), but none of them seem to > deliver what you want. > > I tried different mode and order combinations, but it seems changing > the order at the boundaries to use the filter in one direction only is > not possible. > > Josef > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user From josef.pktd at gmail.com Sun Mar 8 20:04:47 2009 From: josef.pktd at gmail.com (josef.pktd at gmail.com) Date: Sun, 8 Mar 2009 19:04:47 -0500 Subject: [SciPy-user] maximum_filter1d size parameter In-Reply-To: <49B453E0.8060502@gmail.com> References: <49B3722A.5090508@gmail.com> <1cd32cbb0903080529w47214706l8475fe5f19877026@mail.gmail.com> <49B453E0.8060502@gmail.com> Message-ID: <1cd32cbb0903081704i146f419fg3ae9bc4c9e64fed4@mail.gmail.com> On Sun, Mar 8, 2009 at 6:25 PM, Rick Giuly wrote: > Thanks Josef, that explains it. What I will probably do is use a size > twice as large as the number of rows. > > -Rick But then you are just replacing the elements of the array by the column max. If that is your intention, then you could replace it directly: a[:,:] = a.max(0) Josef From zachary.pincus at yale.edu Sun Mar 8 20:14:14 2009 From: zachary.pincus at yale.edu (Zachary Pincus) Date: Sun, 8 Mar 2009 20:14:14 -0400 Subject: [SciPy-user] maximum_filter1d size parameter In-Reply-To: <49B453E0.8060502@gmail.com> References: <49B3722A.5090508@gmail.com> <1cd32cbb0903080529w47214706l8475fe5f19877026@mail.gmail.com> <49B453E0.8060502@gmail.com> Message-ID: > Thanks Josef, that explains it. What I will probably do is use a size > twice as large as the number of rows. If your intention is simply to calculate the maximum along the entire axis (which is what filtering with such a large kernel will do), you could use a.max(axis=0). This combined with some newaxis-es should give an array that broadcasts just like 'a' does. Or if you really need an output array the exact same shape as 'a', but with the values filled in according to the max along the entire axis, you could probably do something bizarre with the strides like: m = a.max(axis=0) m_2d = numpy.ndarray(a.shape, a.dtype, m, strides = (a.itemsize, 0)) or m = a.max(axis=1) m_2d = numpy.ndarray(a.shape, a.dtype, m, strides = (0, a.itemsize)) Zach > -Rick > > josef.pktd at gmail.com wrote: >> On Sun, Mar 8, 2009 at 2:22 AM, Rick Giuly wrote: >>> Hello All, >>> >>> It seems I'm not understanding the "size" parameter for >>> maximum_filter1d. 
The documentation says size is the "length along >>> which >>> to calculate 1D maximum" >>> >>> Starting with this array: >>>>>> a >>> array([[1, 2, 3], >>> [4, 1, 1], >>> [1, 6, 7]]) >>> >>> >>> Because there are 3 rows in the matrix, it seems to me that with >>> size >>> set to 3, the result of the filter would be: >>> array([[4, 6, 7], >>> [4, 6, 7], >>> [4, 6, 7]]) >>> >>> >>> But the result is this: >>> >>>>>> maximum_filter1d(a, 3, axis=0) >>> array([[4, 2, 3], >>> [4, 6, 7], >>> [4, 6, 7]]) >>> >>> >>> I don't see why. >>> >>> Any help is appreciated. >>> >>> -rick >> >> The filter is centered at the current point (keyword order=0), so at >> the boundary it uses only the boundary point and the point next to >> it. >> There are different modes (keyword mode), but none of them seem to >> deliver what you want. >> >> I tried different mode and order combinations, but it seems changing >> the order at the boundaries to use the filter in one direction only >> is >> not possible. >> >> Josef >> _______________________________________________ >> SciPy-user mailing list >> SciPy-user at scipy.org >> http://mail.scipy.org/mailman/listinfo/scipy-user > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user From rgiuly at gmail.com Sun Mar 8 21:12:01 2009 From: rgiuly at gmail.com (Rick Giuly) Date: Sun, 08 Mar 2009 18:12:01 -0700 Subject: [SciPy-user] maximum_filter1d size parameter In-Reply-To: <1cd32cbb0903081704i146f419fg3ae9bc4c9e64fed4@mail.gmail.com> References: <49B3722A.5090508@gmail.com> <1cd32cbb0903080529w47214706l8475fe5f19877026@mail.gmail.com> <49B453E0.8060502@gmail.com> <1cd32cbb0903081704i146f419fg3ae9bc4c9e64fed4@mail.gmail.com> Message-ID: <49B46CE1.8060406@gmail.com> Zach and Josef, thank you both for the suggestions. I believe a.max(0) is what I really wanted. What I'm trying to do is make a "maximum image" from two images (The new image should have the maximum from the two input images at every pixel). If I stack two 2D images (forming a 3D array) I figure I can use max(0) to get what I want. If there is a better way to do this, please let me know... Regards -Rick josef.pktd at gmail.com wrote: > On Sun, Mar 8, 2009 at 6:25 PM, Rick Giuly wrote: >> Thanks Josef, that explains it. What I will probably do is use a size >> twice as large as the number of rows. >> >> -Rick > > But then you are just replacing the elements of the array by the > column max. If that is your intention, then you could replace it > directly: a[:,:] = a.max(0) > > Josef > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user From josef.pktd at gmail.com Sun Mar 8 21:27:38 2009 From: josef.pktd at gmail.com (josef.pktd at gmail.com) Date: Sun, 8 Mar 2009 20:27:38 -0500 Subject: [SciPy-user] maximum_filter1d size parameter In-Reply-To: <49B46CE1.8060406@gmail.com> References: <49B3722A.5090508@gmail.com> <1cd32cbb0903080529w47214706l8475fe5f19877026@mail.gmail.com> <49B453E0.8060502@gmail.com> <1cd32cbb0903081704i146f419fg3ae9bc4c9e64fed4@mail.gmail.com> <49B46CE1.8060406@gmail.com> Message-ID: <1cd32cbb0903081827o4921b5acj8c3c65151414f2f4@mail.gmail.com> On Sun, Mar 8, 2009 at 8:12 PM, Rick Giuly wrote: > > > Zach and Josef, thank you both for the suggestions. I believe a.max(0) > is what I really wanted. 
> > What I'm trying to do is make a "maximum image" from two images (The new > image should have the maximum from the two input images at every pixel). > If I stack two 2D images (forming a 3D array) I figure I can use max(0) > to get what I want. If there is a better way to do this, please let me > know... > > Regards > > -Rick I usually use numpy.max but in the help file there is also numpy.maximum(x1, x2[, out]) Element-wise maximum of array elements. Compare two arrays and returns a new array containing the element-wise maxima this would save you the stack operation, and thinking about axis argument. Josef From ondrej at certik.cz Mon Mar 9 00:43:42 2009 From: ondrej at certik.cz (Ondrej Certik) Date: Sun, 8 Mar 2009 21:43:42 -0700 Subject: [SciPy-user] ANN: PyAMG v1.0 (Algebraic Multigrid Solvers in Python) In-Reply-To: References: <49B0F33E.2020609@ntc.zcu.cz> Message-ID: <85b5c3130903082143u6adf5de5pc6ce7abeb01542d8@mail.gmail.com> On Fri, Mar 6, 2009 at 8:24 AM, Fernando Perez wrote: > On Fri, Mar 6, 2009 at 1:56 AM, Robert Cimrman wrote: >> Hi Nathan, >> >> thank you for PyAMG - it's great to have multigrid solvers in Python. > > Seconded, many thanks! This is great to have... We just finished the Thanks for this. Fernando told me at the conference that you released it, but I haven't found time until now to read about it. > SIAM CSE09 python panel (I'll post a little writeup soon and will get > the slides from the speakers), it would have been great to know you Please post it, so that I can link to it from my blog. :) Ondrej From cimrman3 at ntc.zcu.cz Mon Mar 9 05:26:36 2009 From: cimrman3 at ntc.zcu.cz (Robert Cimrman) Date: Mon, 09 Mar 2009 10:26:36 +0100 Subject: [SciPy-user] ANN: PyAMG v1.0 (Algebraic Multigrid Solvers in Python) In-Reply-To: References: <49B0F33E.2020609@ntc.zcu.cz> Message-ID: <49B4E0CC.7060102@ntc.zcu.cz> Fernando Perez wrote: > On Fri, Mar 6, 2009 at 1:56 AM, Robert Cimrman wrote: >> Hi Nathan, >> >> thank you for PyAMG - it's great to have multigrid solvers in Python. > > Seconded, many thanks! This is great to have... We just finished the > SIAM CSE09 python panel (I'll post a little writeup soon and will get > the slides from the speakers), it would have been great to know you > were working on this to have you give a talk here :) Next time... > > When you feel the time is right, it would be worth listing it in the > canonical 'yellow pages' for scipy: > > http://www.scipy.org/Topical_Software Ah, Topical_Software page - can I add a finite element codes section? I know some based on scipy :) r. From jtoudahl.nielsen at gmail.com Mon Mar 9 09:04:05 2009 From: jtoudahl.nielsen at gmail.com (Jakob Nielsen) Date: Mon, 9 Mar 2009 14:04:05 +0100 Subject: [SciPy-user] fitpack spline fitting of periodic data Message-ID: <25a1352e0903090604j7f78f55cj57921e1c9d312a5d@mail.gmail.com> Dear scipy users I am trying to use the per<>0 option in the fitpack routine splrep of /scipy/interpolate/fitpack.py The below test script gives an error: "ValueError: Error on input data": ar=linspace(0,2*pi,30) vals=sin(ar) ar[-1]=2*pi knots=arange(30,360,30)*pi/180.0 tck=splrep(ar,vals,t=knots,per=1) There is no errors using per=0, there is also errors using no t=None and per=1. Am I missing something? I though the first and last point of the x-values should encode the periodicity. hope you can help me, best regards, JTN -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From Dharhas.Pothina at twdb.state.tx.us Mon Mar 9 10:31:42 2009 From: Dharhas.Pothina at twdb.state.tx.us (Dharhas Pothina) Date: Mon, 09 Mar 2009 09:31:42 -0500 Subject: [SciPy-user] scikits.timeseries fill between two curves In-Reply-To: <41EEC95B-E4BC-4AF8-A5EB-53D0E224324D@gmail.com> References: <49B13299.63BA.009B.0@twdb.state.tx.us> <41EEC95B-E4BC-4AF8-A5EB-53D0E224324D@gmail.com> Message-ID: <49B4E1FE.63BA.009B.0@twdb.state.tx.us> Thank you. I had to upgrade matplotlib since fill_between isn't part of 0.98.3 which is what I had installed. - dharhas >>> Pierre GM 3/6/2009 3:11 PM >>> Dharhas, Not directly, but there is some workaround. After all, a TimeSeriesPlot is only a subclass of SubPlot, so all its methods are available. Keep in mind that the dates are just an array of integers used for the xaxis. You can access it with the `xdata` attribute. For example, the following code works: from scikits.timeseries.lib import moving_funcs as mf import scikits.timeseries.lib.plotlib as tpl coseries = ts.time_series(np.cos(np.arange(36)), start_date=ts.Date('M', '2001-01')) snseries = ts.time_series(np.cos(np.arange(36)+3), start_date=ts.Date('M', '2001-01')) tpl.tsplot(series=coseries) fsp = tpl.gca() fsp.tsplot(snseries) fsp.fill_between(fsp.xdata, coseries, snseries, color='grey') On Mar 6, 2009, at 3:26 PM, Dharhas Pothina wrote: > Hi, > > I was wondering if there was a way to shade the region between two > curves on a scikits.timeseries plot. I see that matplotlib has a > fill_between() function and I was wondering if there was a way to > use that in a tsplot. > > Thanks > > - dharhas > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user From avi at sicortex.com Mon Mar 9 11:40:13 2009 From: avi at sicortex.com (Avi Purkayastha) Date: Mon, 9 Mar 2009 10:40:13 -0500 Subject: [SciPy-user] help on fortran compiler In-Reply-To: <5b8d13220903070858o3a0633f2xb683281ccb6ab2b5@mail.gmail.com> References: <5b8d13220903070858o3a0633f2xb683281ccb6ab2b5@mail.gmail.com> Message-ID: Hi David, > > You have to add your own fortran compiler tool (look at numpy sources, > in numpy?distutils?fcompiler for examples). thanks for the hint.. but I am still running into yet unresolved issues.. I created a pathscale.py as the other compilers and edited __init__.py to include the compiler name. However when I run .. % python setup.py config_fc I get the pathscale compiler recognized but then bypasses and continues and subsequently picks the gfortran, which I don't want.. The log with a bit more verbosity.. % python setup.py -v config_fc Running os.system('( /opt/pathscale/ice9_native_3.2n_B_sicortex/bin/ pathf95 -v ; echo $? > /tmp/tmpoXv0XE/VCZd1T ) > /tmp/tmpoXv0XE/ yNOdmp 2>&1') _update_environment(...) customize PGroupFCompiler : with gfortran.. Running os.system('( /.root0/usr/bin/gfortran --version ; echo $? > / tmp/tmpoXv0XE/xX3wVg ) > /tmp/tmpoXv0XE/tQhxpb 2>&1') _update_environment(...) exec_command(['/.root0/usr/bin/gfortran', '-Wall', '-ffixed-form', '- fno-second-underscore', '-fPIC', '-O3', '-funroll-loops', '-print- libgcc-file-name'],) Retaining cwd: /.root0/net/home/avi/builds/numpy-1.2.1 _preserve_environment([]) Seems like with pathscale it is finding the compiler but skipping the exec_command. Any suggestions as why that might be happening? Are there other files to edit except under numpy/distutils/fcompiler? I am also attaching the pathscale.py file, in case.. 
Thanks for any suggestions. -- Avi -------------- next part -------------- A non-text attachment was scrubbed... Name: pathscale2.py Type: text/x-python-script Size: 979 bytes Desc: not available URL: -------------- next part -------------- From sturla at molden.no Mon Mar 9 11:50:51 2009 From: sturla at molden.no (Sturla Molden) Date: Mon, 09 Mar 2009 16:50:51 +0100 Subject: [SciPy-user] numpy aligned memory In-Reply-To: <4d5dd8c20903081003m3322b5c4j21521f88eb647ae3@mail.gmail.com> References: <4d5dd8c20903081003m3322b5c4j21521f88eb647ae3@mail.gmail.com> Message-ID: <49B53ADB.9010809@molden.no> On 3/8/2009 6:03 PM, Rohit Garg wrote: > http://www.mail-archive.com/numpy-discussion at scipy.org/msg04005.html > > while googling for numpy memory alignment. I wish to know if anything > on that account has come to pass yet? On linux 64 bit platform, can I > assume anything beyond the glibc alignment as of now? If you are willing to waste a few bytes, there is nothing that prevents you from ensuring arbitrary alignment manually. You just allocate more space than you need (16 bytes for 16 bytes alignment), and return a view to a properly aligned segment. Something like this: import numpy as np def aligned_zeros(shape, boundary=16, dtype=float, order='C'): N = np.prod(shape) d = np.dtype(dtype) tmp = np.zeros(N * d.itemsize + boundary, dtype=np.uint8) address = tmp.__array_interface__['data'][0] offset = (boundary - address % boundary) % boundary return tmp[offset:offset+N]\ .view(dtype=d)\ .reshape(shape, order=order) We had questions regarding this for an FFTW interface as well (how to use fftw_malloc instead of malloc). It also affect all coding using SIMD extensions on x86 (MMX, SSE, SSE2). I don't use PPC so I don't know what altivec needs. In any case, should this be in the cookbook? Or even in numpy? It seems a bit redundant to answer this question over and over again. Sturla Molden From david.huard at gmail.com Mon Mar 9 12:31:47 2009 From: david.huard at gmail.com (David Huard) Date: Mon, 9 Mar 2009 12:31:47 -0400 Subject: [SciPy-user] Scikits or not scikits... In-Reply-To: References: Message-ID: <91cf711d0903090931n2709aabeo742f31c959cc33d2@mail.gmail.com> Pierre, This is useful stuff, I think it's good to have it as a scikit. David On Fri, Mar 6, 2009 at 6:21 PM, Matthieu Brucher wrote: > > Bzr oesn't play nice w/ Eclipse and I'm considering dropping it. I was > > wondering whether it'd be OK to make these modules an official part of > > the scikits or not. > > I'm using Bazaar with Eclipse in my everyday work (Linux), and I don't > have the feeling that they do not play nice. I have a Bazaar + XML > plugin that runs fine, safe from the fact that it doesn't tell me that > the local branch must be merged before a new push. Using it for 6 > months now, I don't regret it. > > Matthieu > -- > Information System Engineer, Ph.D. > Website: http://matthieu-brucher.developpez.com/ > Blogs: http://matt.eifelle.com and http://blog.developpez.com/?blog=92 > LinkedIn: http://www.linkedin.com/in/matthieubrucher > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user > -------------- next part -------------- An HTML attachment was scrubbed... URL: From pgmdevlist at gmail.com Mon Mar 9 13:04:19 2009 From: pgmdevlist at gmail.com (Pierre GM) Date: Mon, 9 Mar 2009 13:04:19 -0400 Subject: [SciPy-user] Scikits or not scikits... 
In-Reply-To: <91cf711d0903090931n2709aabeo742f31c959cc33d2@mail.gmail.com> References: <91cf711d0903090931n2709aabeo742f31c959cc33d2@mail.gmail.com> Message-ID: <0F48DC89-C90E-448D-B177-6D85A6B6E347@gmail.com> On Mar 9, 2009, at 12:31 PM, David Huard wrote: > Pierre, > > This is useful stuff, I think it's good to have it as a scikit. Great! Thanks for the support. The package is already on SVN. I'm finalizing putting the docs on sourceforge.net ? la scikits.timeseries. A first official version will come up shortly after scikits.timeseries is officially released (late March, early April, depending on numpy 1.3). From jtoudahl.nielsen at gmail.com Mon Mar 9 13:13:23 2009 From: jtoudahl.nielsen at gmail.com (Jakob Nielsen) Date: Mon, 9 Mar 2009 18:13:23 +0100 Subject: [SciPy-user] fitpack spline fitting of periodic data In-Reply-To: <25a1352e0903090604j7f78f55cj57921e1c9d312a5d@mail.gmail.com> References: <25a1352e0903090604j7f78f55cj57921e1c9d312a5d@mail.gmail.com> Message-ID: <25a1352e0903091013t28aa550fm3519a4005f688d41@mail.gmail.com> hi, I've tried it on my pc, where it work fine without errors, I am using 0-5.1 on my linux where I get errors together with python2.4. I am using one of the newest python enthought editions on my pc. I dont want change to python2.5 on the linux, would you advice me to upgrade scipy on myu linux system or could you point me to any bugfix for scipy 0-5.1? best regards, JTN 2009/3/9 Jakob Nielsen > Dear scipy users I am trying to use the per<>0 option in the fitpack > routine splrep of /scipy/interpolate/fitpack.py > > The below test script gives an error: "ValueError: Error on input > data": > > ar=linspace(0,2*pi,30) > vals=sin(ar) > ar[-1]=2*pi > knots=arange(30,360,30)*pi/180.0 > tck=splrep(ar,vals,t=knots,per=1) > > There is no errors using per=0, there is also errors using no t=None and > per=1. > > Am I missing something? I though the first and last point of the x-values > should encode the periodicity. > > hope you can help me, > > best regards, JTN > -------------- next part -------------- An HTML attachment was scrubbed... URL: From Chris.Barker at noaa.gov Mon Mar 9 13:33:39 2009 From: Chris.Barker at noaa.gov (Christopher Barker) Date: Mon, 09 Mar 2009 10:33:39 -0700 Subject: [SciPy-user] PyCon, anyone? Message-ID: <49B552F3.7020007@noaa.gov> Hey folks, I'm trying to get an idea of how many folks from the numpy/scipy/mpl community will be at PyCon this year. If enough of us, maybe a sprint is in order, but in any case, it might be nice to get together. Please send me a note off-list (to keep the clutter down) if you are going. I may compile a list and post that, so let me know if it's OK to post your name. -Chris -- Christopher Barker, Ph.D. Oceanographer Emergency Response Division NOAA/NOS/OR&R (206) 526-6959 voice 7600 Sand Point Way NE (206) 526-6329 fax Seattle, WA 98115 (206) 526-6317 main reception Chris.Barker at noaa.gov From fperez.net at gmail.com Mon Mar 9 14:09:44 2009 From: fperez.net at gmail.com (Fernando Perez) Date: Mon, 9 Mar 2009 11:09:44 -0700 Subject: [SciPy-user] ANN: PyAMG v1.0 (Algebraic Multigrid Solvers in Python) In-Reply-To: <49B4E0CC.7060102@ntc.zcu.cz> References: <49B0F33E.2020609@ntc.zcu.cz> <49B4E0CC.7060102@ntc.zcu.cz> Message-ID: On Mon, Mar 9, 2009 at 2:26 AM, Robert Cimrman wrote: > Ah, Topical_Software page - can I add a finite element codes section? I > know some based on scipy :) Of course! 
You don't need to ask for permission, it's a wiki :) Cheers, f From peter.skomoroch at gmail.com Mon Mar 9 18:53:26 2009 From: peter.skomoroch at gmail.com (Peter Skomoroch) Date: Mon, 9 Mar 2009 18:53:26 -0400 Subject: [SciPy-user] sparse version of stats.pearsonr ? Message-ID: Before I re-invent the wheel, is there an existing version of stats.pearsonr(x,y) that will work with scipy.sparse vectors? -Pete -- Peter N. Skomoroch 617.285.8348 http://www.datawrangling.com http://delicious.com/pskomoroch http://twitter.com/peteskomoroch -------------- next part -------------- An HTML attachment was scrubbed... URL: From peter.skomoroch at gmail.com Mon Mar 9 20:18:16 2009 From: peter.skomoroch at gmail.com (Peter Skomoroch) Date: Mon, 9 Mar 2009 20:18:16 -0400 Subject: [SciPy-user] sparse version of stats.pearsonr ? In-Reply-To: References: Message-ID: Here is what I have based on pearsonr in scipy.stats: def sparse_vector_dot(x, y): '''Calculates the dot product for two sparse vectors''' return (x.T*y).data[0] def sparse_pearsonr(x, y): """Calculates a Pearson correlation coefficient and the p-value for testing non-correlation using two sparse vectors as inputs. Parameters ---------- x : 1D sparse array y : 1D sparse array the same length as x Returns ------- (Pearson's correlation coefficient, 2-tailed p-value) References ---------- http://www.statsoft.com/textbook/glosp.html#Pearson%20Correlation""" # we form a third sparse vector z where the nonzero entries of z # are the union of the nonzero entries in x and y z = x + y n = z.getnnz() #length of x mx = x.data.mean() my = y.data.mean() # we only want to subtract the mean for non-zero values... # so we copy & access the sparse vector components directly: xm, ym = x, y xm.data, ym.data = x.data-mx, y.data-my r_num = n*(sparse_vector_dot(xm,ym)) r_den = n*sqrt(sparse_vector_dot(xm,xm)*sparse_vector_dot(ym,ym)) r = (r_num / r_den) # Presumably, if r > 1, then it is only some small artifact of floating # point arithmetic. r = min(r, 1.0) df = n-2 # Use a small floating point value to prevent divide-by-zero nonsense # fixme: TINY is probably not the right value and this is probably not # the way to be robust. The scheme used in spearmanr is probably better. TINY = 1.0e-20 t = r*sqrt(df/((1.0-r+TINY)*(1.0+r+TINY))) prob = betai(0.5*df,0.5,df/(df+t*t)) return r,prob On Mon, Mar 9, 2009 at 6:53 PM, Peter Skomoroch wrote: > Before I re-invent the wheel, is there an existing version of > stats.pearsonr(x,y) that will work with scipy.sparse vectors? > > -Pete > > -- > Peter N. Skomoroch > 617.285.8348 > http://www.datawrangling.com > http://delicious.com/pskomoroch > http://twitter.com/peteskomoroch > -- Peter N. Skomoroch 617.285.8348 http://www.datawrangling.com http://delicious.com/pskomoroch http://twitter.com/peteskomoroch -------------- next part -------------- An HTML attachment was scrubbed... URL: From josef.pktd at gmail.com Mon Mar 9 20:39:36 2009 From: josef.pktd at gmail.com (josef.pktd at gmail.com) Date: Mon, 9 Mar 2009 19:39:36 -0500 Subject: [SciPy-user] sparse version of stats.pearsonr ? In-Reply-To: References: Message-ID: <1cd32cbb0903091739m1b03385cy2dfe4f53d4ee4caa@mail.gmail.com> On Mon, Mar 9, 2009 at 5:53 PM, Peter Skomoroch wrote: > Before I re-invent the wheel, is there an existing version of > stats.pearsonr(x,y) that will work with scipy.sparse vectors? > > -Pete Pearson correlation coefficient is just the regular correlation, numpy.corrcoef plus the t-statistic for the test that the correlation coefficient is zero. 
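A small dense sketch of that point, with made-up data, showing that stats.pearsonr is nothing more than np.corrcoef plus the usual t-test on n-2 degrees of freedom:

import numpy as np
from scipy import stats

x = np.array([1., 2., 3., 4., 5., 6.])
y = np.array([2., 1., 4., 3., 7., 5.])

r = np.corrcoef(x, y)[0, 1]            # Pearson correlation coefficient
df = len(x) - 2
t = r * np.sqrt(df / (1. - r**2))      # t-statistic for H0: rho = 0
p = 2 * stats.t.sf(abs(t), df)         # two-sided p-value

print r, p
print stats.pearsonr(x, y)             # should agree up to rounding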
I'm not familiar enough with the sparse package to know how the details work, but in my first try, `mean` seems strange to me >>> B <4x4 sparse matrix of type '' with 4 stored elements in Compressed Sparse Row format> >>> B.todense() matrix([[3, 0, 1, 0], [0, 2, 0, 0], [0, 0, 0, 0], [0, 0, 0, 1]]) >>> B.todense().mean(axis=0) matrix([[ 0.75, 0.5 , 0.25, 0.25]]) >>> B.mean(axis=0) matrix([[0, 0, 0, 0]]) >>> B.todense().mean(axis=1) matrix([[ 1. ], [ 0.5 ], [ 0. ], [ 0.25]]) >>> B.mean(axis=1) matrix([[1], [0], [0], [0]]) >>> B.sum(axis=1) matrix([[4], [2], [0], [1]]) >>> B.sum(axis=0) matrix([[3, 2, 1, 1]]) Here is my version of sparse corrcoef and cov, that takes also zero points into account, i.e. it is the same as using np.cov(sparsematrix.todense()) but (I hope) it avoids any dense calculation on the original matrix: import numpy as np from scipy import sparse, stats # example from doc strings, help I = np.array([0,0,1,3,1,0,0]) J = np.array([0,2,1,3,1,0,0]) V = np.array([1,1,1,1,1,1,1]) B = sparse.coo_matrix((V,(I,J)),shape=(4,4)).tocsr() def covcsr(x): '''return covariance matrix, assumes column variable''' meanx = x.sum(axis=0)/float(x.shape[0]) return ((x.T*x)/x.shape[0] - meanx.T*meanx) def corrcoefcsr(x): covx = covcsr(x) stdx = np.sqrt(np.diag(covx))[np.newaxis,:] return covx/(stdx.T * stdx) B1 = B[:,:2] B1d = B1.todense() print 'sparse cov:\n', covcsr(B1) print 'np.cov:\n', np.cov(B1d, rowvar=0, bias=1) print 'sparse corrcoef:\n', corrcoefcsr(B1) print 'np.corrcoef:\n', np.corrcoef(B1d, rowvar=0, bias=1) print 'stats.pearsonr:', stats.pearsonr(B1d[:,0],B1d[:,1]) Josef From fperez.net at gmail.com Mon Mar 9 20:46:28 2009 From: fperez.net at gmail.com (Fernando Perez) Date: Mon, 9 Mar 2009 17:46:28 -0700 Subject: [SciPy-user] ANN: python for scientific computing at SIAM CSE 09 In-Reply-To: References: Message-ID: Hi folks, On Wed, Mar 4, 2009 at 6:51 AM, Fernando Perez wrote: > Hi all, > > sorry for the spam, but in case any of you are coming to the SIAM > Conference on Computational Science and Engineering (CSE09) in Miami: > > http://www.siam.org/meetings/cse09/ A little trip report: http://fdoperez.blogspot.com/2009/03/python-at-siam-cse09-meeting.html and the slides I have so far, for those who may be interested (I'll continue to add more as I get them): https://cirl.berkeley.edu/fperez/py4science/2009_siam_cse/ Thanks to all the speakers! Cheers, f From pav at iki.fi Mon Mar 9 21:09:10 2009 From: pav at iki.fi (Pauli Virtanen) Date: Tue, 10 Mar 2009 01:09:10 +0000 (UTC) Subject: [SciPy-user] sparse version of stats.pearsonr ? References: <1cd32cbb0903091739m1b03385cy2dfe4f53d4ee4caa@mail.gmail.com> Message-ID: Mon, 09 Mar 2009 19:39:36 -0500, josef.pktd wrote: [clip] > I'm not familiar enough with the sparse package to know how the details > work, but in my first try, `mean` seems strange to me > >>>> B > <4x4 sparse matrix of type '' > with 4 stored elements in Compressed Sparse Row format> >>>> B.todense() > matrix([[3, 0, 1, 0], > [0, 2, 0, 0], > [0, 0, 0, 0], > [0, 0, 0, 1]]) >>>> B.todense().mean(axis=0) > matrix([[ 0.75, 0.5 , 0.25, 0.25]]) >>>> B.mean(axis=0) > matrix([[0, 0, 0, 0]]) Sparse probably uses an integer accumulator for these functions, whereas normal numpy arrays use a float accumulator. Probably a bug: http://projects.scipy.org/scipy/ticket/884 -- Pauli Virtanen From wnbell at gmail.com Mon Mar 9 23:29:57 2009 From: wnbell at gmail.com (Nathan Bell) Date: Mon, 9 Mar 2009 23:29:57 -0400 Subject: [SciPy-user] sparse version of stats.pearsonr ? 
In-Reply-To: References: <1cd32cbb0903091739m1b03385cy2dfe4f53d4ee4caa@mail.gmail.com> Message-ID: On Mon, Mar 9, 2009 at 9:09 PM, Pauli Virtanen wrote: > > Sparse probably uses an integer accumulator for these functions, whereas > normal numpy arrays use a float accumulator. > > Probably a bug: http://projects.scipy.org/scipy/ticket/884 > Yep, this is something we'll address in SciPy 0.8: http://projects.scipy.org/scipy/ticket/658 As a work around, you could do >>> B * ones((B.shape[1],1), dtype='float64') and scipy.sparse will upcast B's data array accordingly. This is somewhat more efficient than >>> B.astype('float64').sum(axis=0) -- Nathan Bell wnbell at gmail.com http://graphics.cs.uiuc.edu/~wnbell/ From josef.pktd at gmail.com Mon Mar 9 23:54:17 2009 From: josef.pktd at gmail.com (josef.pktd at gmail.com) Date: Mon, 9 Mar 2009 22:54:17 -0500 Subject: [SciPy-user] sparse version of stats.pearsonr ? In-Reply-To: References: Message-ID: <1cd32cbb0903092054g560130a5x3bf68db497334864@mail.gmail.com> On Mon, Mar 9, 2009 at 7:18 PM, Peter Skomoroch wrote: > Here is what I have based on pearsonr in scipy.stats: > > def sparse_vector_dot(x, y): > ??? '''Calculates the dot product for two sparse vectors''' > ??? return (x.T*y).data[0] > > def sparse_pearsonr(x, y): > ??? """Calculates a Pearson correlation coefficient and the p-value for > testing > ??? non-correlation using two sparse vectors as inputs. > > ??? Parameters > ??? ---------- > ??? x : 1D sparse array > ??? y : 1D sparse array the same length as x > > ??? Returns > ??? ------- > ??? (Pearson's correlation coefficient, > ???? 2-tailed p-value) > > ??? References > ??? ---------- > ??? http://www.statsoft.com/textbook/glosp.html#Pearson%20Correlation""" > > ??? # we form a third sparse vector z where the nonzero entries of z > ??? # are the union of the nonzero entries in x and y > ??? z = x + y > ??? n = z.getnnz() #length of x > ??? mx = x.data.mean() > ??? my = y.data.mean() > ??? # we only want to subtract the mean for non-zero values... > ??? # so we copy & access the sparse vector components directly: > ??? xm, ym = x, y > ??? xm.data, ym.data = x.data-mx, y.data-my > ??? r_num = n*(sparse_vector_dot(xm,ym)) > ??? r_den = n*sqrt(sparse_vector_dot(xm,xm)*sparse_vector_dot(ym,ym)) > ??? r = (r_num / r_den) > > ??? # Presumably, if r > 1, then it is only some small artifact of floating > ??? # point arithmetic. > ??? r = min(r, 1.0) > ??? df = n-2 > > ??? # Use a small floating point value to prevent divide-by-zero nonsense > ??? # fixme: TINY is probably not the right value and this is probably not > ??? # the way to be robust. The scheme used in spearmanr is probably better. > ??? TINY = 1.0e-20 > ??? t = r*sqrt(df/((1.0-r+TINY)*(1.0+r+TINY))) > ??? prob = betai(0.5*df,0.5,df/(df+t*t)) > ??? return r,prob > - Show quoted text - > > On Mon, Mar 9, 2009 at 6:53 PM, Peter Skomoroch > wrote: >> >> Before I re-invent the wheel, is there an existing version of >> stats.pearsonr(x,y) that will work with scipy.sparse vectors? >> >> -Pete I don't know what the interpretation of zero values in the sparse matrix would be in this case, but to me it looks like you are using inconsistent sample size n (where at least one of two is non-zero) while the mean is based on a smaller n (non-zero for one vector). 
I also thought that, how the zeros in one vector are treated, depends on the zeros in the second vector, but maybe not because you subtract the mean only from the non-zero values x.data-mx, y.data-my and zeros in the dot product shouldn't matter. According to my interpretation, you are replacing the zero elements in the joint vectors (size n) by the means of each vector. Under this interpretation the sample size and the degrees of freedom might be correct. I don't know if this is the desired behavior. I updated my version of the dense equivalent correlation a bit: import numpy as np from scipy import sparse, stats I = np.array([0,3,1,0,4,5,5]) J = np.array([0,3,1,2,3,0,1]) V = np.array([4,5,7,9,3,1,2]) B = sparse.coo_matrix((V,(I,J)),shape=(6,4)).tocsr() def cov_csr(x, axis=0): '''return covariance matrix, assumes column variable return type ndarray''' meanx = x.sum(axis=axis)/float(x.shape[axis]) if axis == 0: return np.array((x.T*x)/x.shape[axis] - meanx.T*meanx) else: return np.array((x*x.T)/x.shape[axis] - meanx*meanx.T) def corrcoef_csr(x, axis=0): '''correlation matrix, return type ndarray''' covx = cov_csr(x, axis=axis) stdx = np.sqrt(np.diag(covx))[np.newaxis,:] return covx/(stdx.T * stdx) def pearsonr_csr(x, axis=0): '''returns correlation coefficient or matrix, t-statistic and p-value for null hypothesis of zero correlation (of pairwise correlation if more than 2 variables, columns)''' r = corrcoef_csr(x, axis=axis) TINY = 1e-15 df = x.shape[axis]-2 t = r*np.sqrt(df/((1.0-r+TINY)*(1.0+r+TINY))) if t.shape[axis] == 2: t = t[0,1] r = r[0,1] else: t += np.diag(np.inf*np.ones(t.shape[0])) return r, t, stats.t.sf(np.abs(t),df)*2 B1 = B[:,:2] B1d = B1.todense() print 'sparse cov:\n', cov_csr(B1) print 'np.cov:\n', np.cov(B1d, rowvar=0, bias=1) print 'sparse corrcoef:\n', corrcoef_csr(B1) print 'np.corrcoef:\n', np.corrcoef(B1d, rowvar=0, bias=1) r, t, p = pearsonr_csr(B1) print 'sparse pearson r\n', r print 'sparse t-test:t-stat\n', t print 'sparse t-test:pvalue\n', p rs, ps =stats.pearsonr(B1d[:,0],B1d[:,1]) print 'stats.pearsonr:', rs, ps r0, t0, p0 = pearsonr_csr(B) print 'sparse pearson r\n', r0 print 'sparse t-test:t-stat\n', t0 print 'sparse t-test:pvalue\n', p0 r1, t1, p1 = pearsonr_csr(B.T, axis=1) ##print 'sparse pearson r\n', r1 ##print 'sparse t-test:t-stat\n', t1 ##print 'sparse t-test:pvalue\n', p1 from numpy.testing import assert_equal, assert_almost_equal assert_almost_equal(cov_csr(B1), np.cov(B1d, rowvar=0, bias=1), decimal=14) assert_almost_equal(corrcoef_csr(B1), np.corrcoef(B1d, rowvar=0, bias=1), decimal=14) assert_almost_equal(r, rs, decimal=14) assert_almost_equal(p, ps, decimal=14) #test axis assert_equal(r0, r1) assert_equal(t0, t1) assert_equal(p0, p1) assert_equal(cov_csr(B), cov_csr(B.T, axis=1)) assert_equal(corrcoef_csr(B), corrcoef_csr(B.T, axis=1)) Josef From yosefmel at post.tau.ac.il Tue Mar 10 04:09:37 2009 From: yosefmel at post.tau.ac.il (Yosef Meller) Date: Tue, 10 Mar 2009 10:09:37 +0200 Subject: [SciPy-user] Cycling the arrays In-Reply-To: References: Message-ID: <200903101009.38119.yosefmel@post.tau.ac.il> On Friday 06 March 2009 19:03:10 Marco wrote: > just a quick question: what is the fastest way to cycle inside an N > dimensional array? > I just want to visit each and every element in the N dimensional > array, then do things. > > I can do it with N 'for' cycles, obviously. > Is there a more compact and maybe faster way to do it? 
You can also use ndindex: for indexes in ndindex(arr.shape): print arr[indexes] there's also ndenumerate that gives you the value as well as the indexes. From cimrman3 at ntc.zcu.cz Tue Mar 10 05:54:42 2009 From: cimrman3 at ntc.zcu.cz (Robert Cimrman) Date: Tue, 10 Mar 2009 10:54:42 +0100 Subject: [SciPy-user] ANN: PyAMG v1.0 (Algebraic Multigrid Solvers in Python) In-Reply-To: References: <49B0F33E.2020609@ntc.zcu.cz> <49B4E0CC.7060102@ntc.zcu.cz> Message-ID: <49B638E2.6060301@ntc.zcu.cz> Fernando Perez wrote: > On Mon, Mar 9, 2009 at 2:26 AM, Robert Cimrman wrote: > >> Ah, Topical_Software page - can I add a finite element codes section? I >> know some based on scipy :) > > Of course! You don't need to ask for permission, it's a wiki :) But it is a very visible one. :) I have added a new section (Partial differential equation (PDE) solvers) and listed there FiPy and SfePy. I might have overlooked other PDE solvers listed on the Topical Software page, so, please, fill yourselves in. r. From nadavh at visionsense.com Tue Mar 10 07:21:19 2009 From: nadavh at visionsense.com (Nadav Horesh) Date: Tue, 10 Mar 2009 13:21:19 +0200 Subject: [SciPy-user] Bug in ndimage package Message-ID: <710F2847B0018641891D9A216027636029C475@ex3.envision.co.il> Line 61 in scipy/ndimage/fourier.py: output = numpy.zeros(input.shape, dtype = numpy.Complex64) should be complex128 instead of Complex64 Nadav. From ondrej at certik.cz Tue Mar 10 10:22:43 2009 From: ondrej at certik.cz (Ondrej Certik) Date: Tue, 10 Mar 2009 07:22:43 -0700 Subject: [SciPy-user] ANN: PyAMG v1.0 (Algebraic Multigrid Solvers in Python) In-Reply-To: <49B638E2.6060301@ntc.zcu.cz> References: <49B0F33E.2020609@ntc.zcu.cz> <49B4E0CC.7060102@ntc.zcu.cz> <49B638E2.6060301@ntc.zcu.cz> Message-ID: <85b5c3130903100722n77e2e0e6x864d4990cf2928ba@mail.gmail.com> On Tue, Mar 10, 2009 at 2:54 AM, Robert Cimrman wrote: > Fernando Perez wrote: >> On Mon, Mar 9, 2009 at 2:26 AM, Robert Cimrman wrote: >> >>> Ah, Topical_Software page - can I add a finite element codes section? I >>> know some based on scipy :) >> >> Of course! ?You don't need to ask for permission, it's a wiki :) > > But it is a very visible one. :) > > I have added a new section (Partial differential equation (PDE) solvers) > and listed there FiPy and SfePy. I might have overlooked other PDE > solvers listed on the Topical Software page, so, please, fill yourselves in. I think our hermes hp-FEM solver counts there too, as it can be used from Python and the sage notebook (at least the 2d one). http://hpfem.org/ so I added it in. Ondrej From stefan at sun.ac.za Tue Mar 10 10:26:40 2009 From: stefan at sun.ac.za (=?ISO-8859-1?Q?St=E9fan_van_der_Walt?=) Date: Tue, 10 Mar 2009 16:26:40 +0200 Subject: [SciPy-user] Bug in ndimage package In-Reply-To: <710F2847B0018641891D9A216027636029C475@ex3.envision.co.il> References: <710F2847B0018641891D9A216027636029C475@ex3.envision.co.il> Message-ID: <9457e7c80903100726y43b26c74qd905148787164e8b@mail.gmail.com> 2009/3/10 Nadav Horesh : > > Line 61 in scipy/ndimage/fourier.py: > > ? ? ? ? ? ? output = numpy.zeros(input.shape, dtype = numpy.Complex64) > > should be complex128 instead of Complex64 Thanks, fixed in r5615. This should have been caught by our unit tests. Do you have a code snippet that we can use as a regression test? 
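A sketch of such a regression test, under the assumption that the allocation at line 61 is the real-input branch used by fourier_shift (if it is one of the other fourier_* routines instead, the same idea applies with that function):

import numpy as np
from scipy import ndimage

def test_fourier_shift_real_input():
    # with numpy.Complex64 at line 61 this raised AttributeError;
    # a real-valued input should come back as a complex128 array
    a = np.ones((8, 8), dtype=np.float64)
    out = ndimage.fourier_shift(a, 1)
    assert out.dtype == np.complex128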
Cheers St?fan From giorgio.luciano at inwind.it Tue Mar 10 10:39:29 2009 From: giorgio.luciano at inwind.it (giorgio.luciano at inwind.it) Date: Tue, 10 Mar 2009 15:39:29 +0100 Subject: [SciPy-user] data analysis, chemomentrics, python and open software Message-ID: Dear All We?ve decided to start a project for developing a small, fast, efficient and easy software for univariate and multivariate software analysis applied to chemical data IDEA : The idea almost came since while teaching in short courses or to ?lab? people, it seems that the approach of a ?compact software? is more fruitful. Instead of ?wasting? time on teaching how to slice matrix, adding path, etc. (which are very easy one but for anyone that has been teaching matlab/octave/r know what I am talking about) people can just copy and paste (from excel), set parameters or conveniently leave as default and then run the analysis. Eventually mark ?outlier? and rerun the analysis. HOW : So for not reinventing the wheel what do we choose ? We decided to develop the software using Python and Qt. The choice seemed the most suitable since python can behave like a glue importing already existent and tested R and Octave routines and also can rely on solid and stable libraries like scipy and numeric. Qt also is a mature library that can rely on very good RAD and a big community of developers (Nokia bought Qt, just to give an idea) In this way we should be able to rapidly write GUI for every platform. Everything will be kept open to ensure also quick response to bug, issues and feedback for everyone that will like to contribute. We would like also to kept the architecture open in order to eventually port the software to custom hardware and to keep the possibility to tailor the software on user?s need WHAT WE OFFER AND WHO WE NEED: ? We have a repository on google CODE and also the domain www.chemometrics.it will became the ?house? of the sofware For now we are three main developer but we need all the help possible especially for the GUI part (bridge between QT and python)and people interested in increasing the python "libraries" for chemometrics routines So we search for other developer to join us and also tester. Of course sponsors are welcome ;) From arnar.flatberg at gmail.com Tue Mar 10 11:40:26 2009 From: arnar.flatberg at gmail.com (Arnar Flatberg) Date: Tue, 10 Mar 2009 16:40:26 +0100 Subject: [SciPy-user] data analysis, chemomentrics, python and open software In-Reply-To: References: Message-ID: <5d3194020903100840g6a49a077u26140df93200f195@mail.gmail.com> Hi Giorgio, I might be interested in contributing to such a project. Depending on the time available and how the project develops of course. I have some experience with both chemometrics and GUI development in python (mostly GTK, some QT). My experience does require me to issue a warning though: GUI + interactive graphics takes a lot of resources to develop, test, maintain and deploy. Expecially the last two, and it is even more stressing when "real" science is also a part of your job. On that thought, have you considered using some Enthought tools to relieve some of these issues? (QT still an option). Arnar On Tue, Mar 10, 2009 at 3:39 PM, giorgio.luciano at inwind.it < giorgio.luciano at inwind.it> wrote: > Dear All > > We?ve decided to start a project for developing a small, fast, efficient > and easy software for univariate and multivariate software analysis applied > to chemical data > IDEA : The idea almost came since while teaching in short courses or to > ?lab? 
people, it seems that the > approach of a ?compact software? is more fruitful. Instead of ?wasting? > time on teaching how to slice matrix, adding path, etc. (which are very > easy one but for anyone that has been teaching matlab/octave/r know what I > am talking about) people can just copy and paste (from excel), set > parameters or conveniently leave as default and then run the analysis. > Eventually mark ?outlier? and rerun the analysis. > HOW : So for not reinventing the wheel what do we choose ? We decided to > develop the software using Python and Qt. The choice seemed the most > suitable since python can behave like a glue importing already existent and > tested R and Octave routines and also can rely on solid and stable > libraries like scipy and numeric. Qt also is a mature library that can rely > on very good RAD and a big community of developers (Nokia bought Qt, just to > give an idea) In this way we should be able to rapidly write GUI for every > platform. > Everything will be kept open to ensure also quick response to bug, issues > and feedback for everyone that will like to contribute. We would like also > to kept the architecture open in order to eventually port the software to > custom hardware and to keep the possibility to tailor the software on user?s > need > > WHAT WE OFFER AND WHO WE NEED: ? > We have a repository on google CODE and also the domain > www.chemometrics.it will became the ?house? of the sofware > For now we are three main developer but we need all the help possible > especially for the GUI part (bridge between QT and python)and people > interested in increasing the python "libraries" for chemometrics routines > So we search for other developer to join us and also tester. Of course > sponsors are welcome ;) > > > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user > -------------- next part -------------- An HTML attachment was scrubbed... URL: From aisaac at american.edu Tue Mar 10 11:40:58 2009 From: aisaac at american.edu (Alan G Isaac) Date: Tue, 10 Mar 2009 11:40:58 -0400 Subject: [SciPy-user] data analysis, chemomentrics, python and open software In-Reply-To: References: Message-ID: <49B68A0A.6050609@american.edu> A couple reactions. 1. What is the license? On this list, you will arouse much more interest if it is BSD. (See the NumPy, SciPy, and Matplotlib licenses.) 2. Reinventing the wheel ... it seems like much of what you are discussing at the algorithmic level may be in SciPy or might be fruitfully added to it. Much of what you have in mind seems to imply general purpose (rather than specifically chemometric) code. 3. Decoupling the GUI: you did not make it clear, but the current commitment to Qt (which seems fine given their new license) should be kept entirely separate from the real work (i.e., the numerical algorithms). Best wishes, Alan Isaac PS Where is your Google Code repository? 
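One way to read Alan's third point in code: keep the number-crunching importable with no GUI toolkit anywhere in sight, so a Qt front end (or a plain script, or a test suite) is just a thin caller. A rough sketch, with an invented pca() helper and random data purely for illustration:

import numpy as np

def pca(X, ncomp=2):
    # plain-numpy PCA: scores and loadings of the mean-centred data
    Xc = X - X.mean(axis=0)
    U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
    return U[:, :ncomp] * s[:ncomp], Vt[:ncomp]

# any GUI only collects the input and displays the output
if __name__ == '__main__':
    X = np.random.rand(20, 5)
    scores, loadings = pca(X)
    print scores.shape, loadings.shape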
From stefan at sun.ac.za Tue Mar 10 11:45:11 2009 From: stefan at sun.ac.za (=?ISO-8859-1?Q?St=E9fan_van_der_Walt?=) Date: Tue, 10 Mar 2009 17:45:11 +0200 Subject: [SciPy-user] fitpack spline fitting of periodic data In-Reply-To: <25a1352e0903091013t28aa550fm3519a4005f688d41@mail.gmail.com> References: <25a1352e0903090604j7f78f55cj57921e1c9d312a5d@mail.gmail.com> <25a1352e0903091013t28aa550fm3519a4005f688d41@mail.gmail.com> Message-ID: <9457e7c80903100845l6f3c5062v48f7ce29ebbfadca@mail.gmail.com> Hi Jakob 2009/3/9 Jakob Nielsen : > hi, I've tried it on my pc, where it work fine without errors, I am using > 0-5.1 on my linux where I get errors together with python2.4.?I am using one > of the newest python enthought editions on my pc. I dont want change to > python2.5 on the linux, would you advice me to upgrade scipy on myu linux > system or could you point me to any bugfix for scipy 0-5.1? Your code snippet seems to run fine on the latest NumPy and SciPy. You don't have to upgrade to 2.5, but you have to install a newer numpy and scipy. I think the latest EPD should provide a newer version of SciPy? You can also look at http://www.pythonxy.com. Under Linux, it depends on the distribution you run, but there it would be very easy to compile from source. Regards St?fan From aisaac at american.edu Tue Mar 10 11:46:24 2009 From: aisaac at american.edu (Alan G Isaac) Date: Tue, 10 Mar 2009 11:46:24 -0400 Subject: [SciPy-user] GSoC 2009 Message-ID: <49B68B50.2070501@american.edu> Shouldn't SciPy have some projects under the MathSci heading? http://wiki.python.org/moin/SummerOfCode/2009 Alan Isaac From egranstedt at gmail.com Tue Mar 10 11:57:06 2009 From: egranstedt at gmail.com (Erik Granstedt) Date: Tue, 10 Mar 2009 11:57:06 -0400 Subject: [SciPy-user] scipy.io.read_array: NaN in data file Message-ID: <2a65fcc80903100857w3bce49b0m3b174af3404ab620@mail.gmail.com> Hello, I am using scipy.io.read_array to read in values from data files to arrays. The data files occasionally contain "NaN"s, and I would like the returned array to also contain "NaN"s. I've tried calling read_array with: scipy.io.read_array(file('read_array_test.dat','r'),missing=float('NaN')) but this still seems to convert the "NaN"s to 0.0 Is there a way to get it to return "NaN"s in the array instead of converting them to 0.0 ? Thanks, -Erik From fredmfp at gmail.com Tue Mar 10 11:59:30 2009 From: fredmfp at gmail.com (fred) Date: Tue, 10 Mar 2009 16:59:30 +0100 Subject: [SciPy-user] scipy.io.read_array: NaN in data file In-Reply-To: <2a65fcc80903100857w3bce49b0m3b174af3404ab620@mail.gmail.com> References: <2a65fcc80903100857w3bce49b0m3b174af3404ab620@mail.gmail.com> Message-ID: <49B68E62.2090405@gmail.com> Erik Granstedt a ?crit : > > Is there a way to get it to return "NaN"s in the array instead of > converting them to 0.0 ? I use fread() and fwrite() from scipy.io.numpyio without problem. My 2 cts. Cheers, -- Fred From millman at berkeley.edu Tue Mar 10 12:11:16 2009 From: millman at berkeley.edu (Jarrod Millman) Date: Tue, 10 Mar 2009 09:11:16 -0700 Subject: [SciPy-user] GSoC 2009 In-Reply-To: <49B68B50.2070501@american.edu> References: <49B68B50.2070501@american.edu> Message-ID: On Tue, Mar 10, 2009 at 8:46 AM, Alan G Isaac wrote: > Shouldn't SciPy have some projects under the MathSci heading? > ? 
http://wiki.python.org/moin/SummerOfCode/2009 I added a link to the list of ideas of NumPy/SciPy summer of code project ideas: http://projects.scipy.org/scipy/wiki/SummerofCodeIdeas From pgmdevlist at gmail.com Tue Mar 10 12:14:40 2009 From: pgmdevlist at gmail.com (Pierre GM) Date: Tue, 10 Mar 2009 12:14:40 -0400 Subject: [SciPy-user] scipy.io.read_array: NaN in data file In-Reply-To: <2a65fcc80903100857w3bce49b0m3b174af3404ab620@mail.gmail.com> References: <2a65fcc80903100857w3bce49b0m3b174af3404ab620@mail.gmail.com> Message-ID: <54952AF1-4C32-4579-80AE-B53F609B9F85@gmail.com> With the SVN version of Numpy: >>> import numpy as np >>> import StringIO >>> a = np.genfromtxtx(StringIO.StringIO("1, NaN"), delimiter=",") If you want to output a MaskedArray: >>> a = np.genfromtxt(StringIO.StringIO("1, NaN"), delimiter=",", missing="NaN", usemask=True) >>> isinstance(a, np.ma.MaskedArray) True On Mar 10, 2009, at 11:57 AM, Erik Granstedt wrote: > Hello, > > I am using scipy.io.read_array to read in values from data files to > arrays. The data files occasionally contain "NaN"s, and I would like > the returned array to also contain "NaN"s. I've tried calling > read_array with: > > scipy > .io.read_array(file('read_array_test.dat','r'),missing=float('NaN')) > > but this still seems to convert the "NaN"s to 0.0 > > Is there a way to get it to return "NaN"s in the array instead of > converting them to 0.0 ? > > Thanks, > > -Erik > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user From Dharhas.Pothina at twdb.state.tx.us Tue Mar 10 12:22:38 2009 From: Dharhas.Pothina at twdb.state.tx.us (Dharhas Pothina) Date: Tue, 10 Mar 2009 11:22:38 -0500 Subject: [SciPy-user] scipy.io.read_array: NaN in data file In-Reply-To: <54952AF1-4C32-4579-80AE-B53F609B9F85@gmail.com> References: <2a65fcc80903100857w3bce49b0m3b174af3404ab620@mail.gmail.com> <54952AF1-4C32-4579-80AE-B53F609B9F85@gmail.com> Message-ID: <49B64D7E.63BA.009B.0@twdb.state.tx.us> so does np.genfromtxtx also deal with missing values in a file? i.e something like: 1999,1,22,42 1999,2,18,23 1999,3,,22 1999,4,12, 1999,5,,, 1999,6,12,34 I've worked out how to do this using np.loadtxt by defining conversions for each column buts its pretty cumbersome and looks like spagetti in the code. - dharhas >>> Pierre GM 3/10/2009 11:14 AM >>> With the SVN version of Numpy: >>> import numpy as np >>> import StringIO >>> a = np.genfromtxtx(StringIO.StringIO("1, NaN"), delimiter=",") If you want to output a MaskedArray: >>> a = np.genfromtxt(StringIO.StringIO("1, NaN"), delimiter=",", missing="NaN", usemask=True) >>> isinstance(a, np.ma.MaskedArray) True On Mar 10, 2009, at 11:57 AM, Erik Granstedt wrote: > Hello, > > I am using scipy.io.read_array to read in values from data files to > arrays. The data files occasionally contain "NaN"s, and I would like > the returned array to also contain "NaN"s. I've tried calling > read_array with: > > scipy > .io.read_array(file('read_array_test.dat','r'),missing=float('NaN')) > > but this still seems to convert the "NaN"s to 0.0 > > Is there a way to get it to return "NaN"s in the array instead of > converting them to 0.0 ? 
> > Thanks, > > -Erik > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user _______________________________________________ SciPy-user mailing list SciPy-user at scipy.org http://mail.scipy.org/mailman/listinfo/scipy-user From pgmdevlist at gmail.com Tue Mar 10 12:32:35 2009 From: pgmdevlist at gmail.com (Pierre GM) Date: Tue, 10 Mar 2009 12:32:35 -0400 Subject: [SciPy-user] scipy.io.read_array: NaN in data file In-Reply-To: <49B64D7E.63BA.009B.0@twdb.state.tx.us> References: <2a65fcc80903100857w3bce49b0m3b174af3404ab620@mail.gmail.com> <54952AF1-4C32-4579-80AE-B53F609B9F85@gmail.com> <49B64D7E.63BA.009B.0@twdb.state.tx.us> Message-ID: <40CEAD00-656E-44B9-A27E-D48DD878F7B0@gmail.com> On Mar 10, 2009, at 12:22 PM, Dharhas Pothina wrote: > > so does np.genfromtxtx also deal with missing values in a file? Yep: >>> data = StringIO.StringIO("""# 1999,4,12, 1999,5,,, 1999,6,12,34 """) >>> a = np.genfromtxt(data, delimiter=",", usemask=True) >>> a masked_array(data = [[1999.0 4.0 12.0 --] [1999.0 5.0 -- --] [1999.0 6.0 12.0 34.0]], mask = [[False False False True] [False False True True] [False False False False]], fill_value = 1e+20) Looks like the first 2 columns are YYYY and MM: you can use scikits.timeseries.tsfromtxt for that, with a special converter to transform the first 2 columns into a datearray: >>> dconv=lambda y,m: Date('M', year=y, month=m) > i.e something like: > 1999,1,22,42 > 1999,2,18,23 > 1999,3,,22 > 1999,4,12, > 1999,5,,, > 1999,6,12,34 > > I've worked out how to do this using np.loadtxt by defining > conversions for each column buts its pretty cumbersome and looks > like spagetti in the code. > > - dharhas > >>>> Pierre GM 3/10/2009 11:14 AM >>> > With the SVN version of Numpy: > >>>> import numpy as np >>>> import StringIO > >>>> a = np.genfromtxtx(StringIO.StringIO("1, NaN"), delimiter=",") > > If you want to output a MaskedArray: >>>> a = np.genfromtxt(StringIO.StringIO("1, NaN"), delimiter=",", > missing="NaN", usemask=True) >>>> isinstance(a, np.ma.MaskedArray) > True > > On Mar 10, 2009, at 11:57 AM, Erik Granstedt wrote: > >> Hello, >> >> I am using scipy.io.read_array to read in values from data files to >> arrays. The data files occasionally contain "NaN"s, and I would like >> the returned array to also contain "NaN"s. I've tried calling >> read_array with: >> >> scipy >> .io.read_array(file('read_array_test.dat','r'),missing=float('NaN')) >> >> but this still seems to convert the "NaN"s to 0.0 >> >> Is there a way to get it to return "NaN"s in the array instead of >> converting them to 0.0 ? 
>> >> Thanks, >> >> -Erik >> _______________________________________________ >> SciPy-user mailing list >> SciPy-user at scipy.org >> http://mail.scipy.org/mailman/listinfo/scipy-user > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user From Dharhas.Pothina at twdb.state.tx.us Tue Mar 10 12:44:29 2009 From: Dharhas.Pothina at twdb.state.tx.us (Dharhas Pothina) Date: Tue, 10 Mar 2009 11:44:29 -0500 Subject: [SciPy-user] scipy.io.read_array: NaN in data file In-Reply-To: <40CEAD00-656E-44B9-A27E-D48DD878F7B0@gmail.com> References: <2a65fcc80903100857w3bce49b0m3b174af3404ab620@mail.gmail.com> <54952AF1-4C32-4579-80AE-B53F609B9F85@gmail.com> <49B64D7E.63BA.009B.0@twdb.state.tx.us> <40CEAD00-656E-44B9-A27E-D48DD878F7B0@gmail.com> Message-ID: <49B6529D.63BA.009B.0@twdb.state.tx.us> >> so does np.genfromtxtx also deal with missing values in a file? >Yep: sweet. This is going to be very useful. >>> data = StringIO.StringIO("""# 1999,4,12, 1999,5,,, 1999,6,12,34 """) > Looks like the first 2 columns are YYYY and MM: you can use > scikits.timeseries.tsfromtxt for that, with a special converter to > transform the first 2 columns into a datearray: > dconv=lambda y,m: Date('M', year=y, month=m) This was just an example I made up. But most of the files I'm reading are in the format : columns that define date followed by columns of various data Could you run me through the commands to go from the file containing the data to the timeseries masking missing data in the process? ie. can StringIO read from a file or do I need to load the data first and then call StringIO and then call tsfromtxt() to reread the file? thanks , - dharhas From avi at sicortex.com Tue Mar 10 12:48:14 2009 From: avi at sicortex.com (Avi Purkayastha) Date: Tue, 10 Mar 2009 11:48:14 -0500 Subject: [SciPy-user] build_ext fails to pick up fortran compiler Message-ID: <22A21E9F-C805-4ABC-BFD6-D36034981C59@sicortex.com> Hi, When building scipy with a new compiler (pathscale) added to the numpy*distutils*fcompiler list, the build log shows that the build process picks up the fortran compiler for parts of the build where needed except for build_ext. Some parts of the log is listed below for explanation. : customize PathScaleFCompiler customize PathScaleFCompiler using build_clib building 'dfftpack' library compiling Fortran sources Fortran f77 compiler: /opt/pathscale/ice9_native_3.2n_B_sicortex/bin/pathf95 -fixedform -O3 Fortran f90 compiler: /opt/pathscale/ice9_native_3.2n_B_sicortex/bin/pathf95 -O3 Fortran fix compiler: /opt/pathscale/ice9_native_3.2n_B_sicortex/bin/pathf95 -fixedform -O3 creating build/temp.linux-mips64-2.4 creating build/temp.linux-mips64-2.4/scipy creating build/temp.linux-mips64-2.4/scipy/fftpack creating build/temp.linux-mips64-2.4/scipy/fftpack/src creating build/temp.linux-mips64-2.4/scipy/fftpack/src/dfftpack compile options: '-c' pathf95:f77: scipy/fftpack/src/dfftpack/dcosqi.f pathf95:f77: scipy/fftpack/src/dfftpack/dcosqf.f pathf95:f77: scipy/fftpack/src/dfftpack/zfftf.f : and the build continues with success on picking up and building with the pathscale compiler on other pieces that need the fortran compiler until.. 
: running build_ext customize UnixCCompiler customize UnixCCompiler using build_ext resetting extension 'scipy.integrate._odepack' language from 'c' to 'f77'. resetting extension 'scipy.integrate.vode' language from 'c' to 'f77'. resetting extension 'scipy.lib.blas.fblas' language from 'c' to 'f77'. resetting extension 'scipy.odr.__odrpack' language from 'c' to 'f77'. extending extension 'scipy.sparse.linalg.dsolve._zsuperlu' defined_macros with [('USE_VENDOR_BLAS', 1)] extending extension 'scipy.sparse.linalg.dsolve._dsuperlu' defined_macros with [('USE_VENDOR_BLAS', 1)] extending extension 'scipy.sparse.linalg.dsolve._csuperlu' defined_macros with [('USE_VENDOR_BLAS', 1)] extending extension 'scipy.sparse.linalg.dsolve._ssuperlu' defined_macros with [('USE_VENDOR_BLAS', 1)] customize UnixCCompiler customize UnixCCompiler using build_ext customize PathScaleFCompiler warning: build_ext: f77_compiler=pathscale is not available. : scgcc -shared build/temp.linux- mips64-2.4/scipy/cluster/src/hierarchy_wrap.o build/temp.linux- mips64-2.4/scipy/cluster/src/h ierarchy.o -Lbuild/temp.linux-mips64-2.4 -o build/lib.linux- mips64-2.4/scipy/cluster/_hierarchy_wrap.so building 'scipy.fftpack._fftpack' extension warning: build_ext: extension 'scipy.fftpack._fftpack' has Fortran libraries but no Fortran linker found, using default linker compiling C sources : and ultimately failure because of this reason.. scgcc -shared build/temp.linux- mips64-2.4/scipy/interpolate/src/_fitpackmodule.o -Lbuild/temp.linux- mips64-2.4 -lfitpack -o build/lib.linux-mips64-2.4/scipy/interpolate/_fitpack.so building 'scipy.interpolate.dfitpack' extension error: extension 'scipy.interpolate.dfitpack' has Fortran sources but no Fortran compiler found Any suggestions on why build_ext is failing on picking up the fortran compiler or any work-arounds for this? Thanks Avi -------------- next part -------------- An HTML attachment was scrubbed... URL: From pgmdevlist at gmail.com Tue Mar 10 12:54:52 2009 From: pgmdevlist at gmail.com (Pierre GM) Date: Tue, 10 Mar 2009 12:54:52 -0400 Subject: [SciPy-user] scipy.io.read_array: NaN in data file In-Reply-To: <49B6529D.63BA.009B.0@twdb.state.tx.us> References: <2a65fcc80903100857w3bce49b0m3b174af3404ab620@mail.gmail.com> <54952AF1-4C32-4579-80AE-B53F609B9F85@gmail.com> <49B64D7E.63BA.009B.0@twdb.state.tx.us> <40CEAD00-656E-44B9-A27E-D48DD878F7B0@gmail.com> <49B6529D.63BA.009B.0@twdb.state.tx.us> Message-ID: <004A3842-DAEB-4E2B-B541-82FE1E37D6CC@gmail.com> On Mar 10, 2009, at 12:44 PM, Dharhas Pothina wrote: > >>> so does np.genfromtxtx also deal with missing values in a file? >> Yep: > > sweet. This is going to be very useful. That was the whole aim of the game ;) > > This was just an example I made up. But most of the files I'm > reading are in the format : > > columns that define date followed by columns of various data > > Could you run me through the commands to go from the file containing > the data to the timeseries masking missing data in the process? > > ie. can StringIO read from a file or do I need to load the data > first and then call StringIO and then call tsfromtxt() to reread the > file? ts.tsfromtxt is just a tailored version of np.genfromtxt. The input can be a filename ("data.txt"), a file (gzip version supported), or a string content (a la StringIO). Just use datecols to precise what column should be interpreted as date, your delimiter, any specific string representing a missing data (eg, "NaN". By default, '' is recognized), any additional converter... 
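A sketch of the kind of call being described, for a file laid out like the year/month example earlier in the thread; the keyword names (datecols, dateconverter) follow the examples in this thread and should be checked against the docstring of your installed scikits.timeseries:

import scikits.timeseries as ts

# data.txt holds lines like "1999,1,22,42", with occasional empty fields
dconv = lambda y, m: ts.Date('M', year=int(y), month=int(m))
series = ts.tsfromtxt('data.txt', delimiter=',', datecols=(0, 1),
                      dateconverter=dconv)
print series.dates
print series        # empty fields come back as masked values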
Just check the docstrings of ts.tsfromtxt and np.genfromtxt for more info, and let us know how we can improve them. From ckkart at hoc.net Tue Mar 10 12:55:40 2009 From: ckkart at hoc.net (Christian K.) Date: Tue, 10 Mar 2009 16:55:40 +0000 (UTC) Subject: [SciPy-user] 2d trapz Message-ID: Hi, is there any method to do efficient 2d numeric integration in scipy/numpy? Thanks for any hint, Christian From Dharhas.Pothina at twdb.state.tx.us Tue Mar 10 13:01:22 2009 From: Dharhas.Pothina at twdb.state.tx.us (Dharhas Pothina) Date: Tue, 10 Mar 2009 12:01:22 -0500 Subject: [SciPy-user] scipy.io.read_array: NaN in data file In-Reply-To: <004A3842-DAEB-4E2B-B541-82FE1E37D6CC@gmail.com> References: <2a65fcc80903100857w3bce49b0m3b174af3404ab620@mail.gmail.com> <54952AF1-4C32-4579-80AE-B53F609B9F85@gmail.com> <49B64D7E.63BA.009B.0@twdb.state.tx.us> <40CEAD00-656E-44B9-A27E-D48DD878F7B0@gmail.com> <49B6529D.63BA.009B.0@twdb.state.tx.us> <004A3842-DAEB-4E2B-B541-82FE1E37D6CC@gmail.com> Message-ID: <49B65692.63BA.009B.0@twdb.state.tx.us> So this would need svn versions of numpy & the timeseries scikit? What is the roadmap for release versions? my only other concern would be whether tsfromtxt would choke if duplicate dates were present in the data file. I still haven't found a good way in python to remove duplicate dates in general. thanks, - dharhas >>> Pierre GM 3/10/2009 11:54 AM >>> On Mar 10, 2009, at 12:44 PM, Dharhas Pothina wrote: > >>> so does np.genfromtxtx also deal with missing values in a file? >> Yep: > > sweet. This is going to be very useful. That was the whole aim of the game ;) > > This was just an example I made up. But most of the files I'm > reading are in the format : > > columns that define date followed by columns of various data > > Could you run me through the commands to go from the file containing > the data to the timeseries masking missing data in the process? > > ie. can StringIO read from a file or do I need to load the data > first and then call StringIO and then call tsfromtxt() to reread the > file? ts.tsfromtxt is just a tailored version of np.genfromtxt. The input can be a filename ("data.txt"), a file (gzip version supported), or a string content (a la StringIO). Just use datecols to precise what column should be interpreted as date, your delimiter, any specific string representing a missing data (eg, "NaN". By default, '' is recognized), any additional converter... Just check the docstrings of ts.tsfromtxt and np.genfromtxt for more info, and let us know how we can improve them. _______________________________________________ SciPy-user mailing list SciPy-user at scipy.org http://mail.scipy.org/mailman/listinfo/scipy-user From josef.pktd at gmail.com Tue Mar 10 13:10:08 2009 From: josef.pktd at gmail.com (josef.pktd at gmail.com) Date: Tue, 10 Mar 2009 12:10:08 -0500 Subject: [SciPy-user] 2d trapz In-Reply-To: References: Message-ID: <1cd32cbb0903101010p3dd7f682lae969f53a58aeff2@mail.gmail.com> On Tue, Mar 10, 2009 at 11:55 AM, Christian K. wrote: > Hi, > > is there any method to do efficient 2d numeric integration in scipy/numpy? > > Thanks for any hint, Christian > According to the help file scipy.integrate.dblquad is the only numerical integrator for 2d functions. There might be indirect ways ? 
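One such indirect route is to apply a 1D rule along each axis in turn; a sketch using scipy.integrate.simps on a made-up separable integrand, so the result can be checked against the analytic value:

import numpy as np
from scipy.integrate import simps

x = np.linspace(0., 1., 51)
y = np.linspace(0., 2., 81)
f = np.exp(-x[:, None]) * np.sin(y[None, :])   # f sampled on the grid

# integrate over y (axis=1) first, then over x
val = simps(simps(f, y, axis=1), x, axis=0)
print val
print (1 - np.exp(-1)) * (1 - np.cos(2))       # analytic value for comparison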
Josef Josef From pgmdevlist at gmail.com Tue Mar 10 13:11:41 2009 From: pgmdevlist at gmail.com (Pierre GM) Date: Tue, 10 Mar 2009 13:11:41 -0400 Subject: [SciPy-user] scipy.io.read_array: NaN in data file In-Reply-To: <49B65692.63BA.009B.0@twdb.state.tx.us> References: <2a65fcc80903100857w3bce49b0m3b174af3404ab620@mail.gmail.com> <54952AF1-4C32-4579-80AE-B53F609B9F85@gmail.com> <49B64D7E.63BA.009B.0@twdb.state.tx.us> <40CEAD00-656E-44B9-A27E-D48DD878F7B0@gmail.com> <49B6529D.63BA.009B.0@twdb.state.tx.us> <004A3842-DAEB-4E2B-B541-82FE1E37D6CC@gmail.com> <49B65692.63BA.009B.0@twdb.state.tx.us> Message-ID: <53109E8A-83C5-4706-A796-0B15FBE0D701@gmail.com> On Mar 10, 2009, at 1:01 PM, Dharhas Pothina wrote: > > So this would need svn versions of numpy & the timeseries scikit? > What is the roadmap for release versions? numpy 1.3 should be released on 04/01. scikits.timeseries 1. should be released shortly afterwards. > my only other concern would be whether tsfromtxt would choke if > duplicate dates were present in the data file. A TimeSeries object support duplicated dates, so no problem on this side: you'll have duplicated dates in your resulting series. > I still haven't found a good way in python to remove duplicate dates > in general. Well, because there's no standard way to do that: when you have duplicated dates, should you take the first one? The last one ? Take some kind of average of the values ? From nwagner at iam.uni-stuttgart.de Tue Mar 10 13:18:55 2009 From: nwagner at iam.uni-stuttgart.de (Nils Wagner) Date: Tue, 10 Mar 2009 18:18:55 +0100 Subject: [SciPy-user] GSoC 2009 In-Reply-To: <49B68B50.2070501@american.edu> References: <49B68B50.2070501@american.edu> Message-ID: On Tue, 10 Mar 2009 11:46:24 -0400 Alan G Isaac wrote: > Shouldn't SciPy have some projects under the MathSci >heading? > http://wiki.python.org/moin/SummerOfCode/2009 > > Alan Isaac > Just two ideas for scipy's ODE solvers: Event handling would be nice. Add DDE solvers to scipy.integrate and one idea for scipy.linalg Solvers for matrix equations, e.g. Sylvester, Riccati, Lyapunov Nils From nwagner at iam.uni-stuttgart.de Tue Mar 10 13:22:00 2009 From: nwagner at iam.uni-stuttgart.de (Nils Wagner) Date: Tue, 10 Mar 2009 18:22:00 +0100 Subject: [SciPy-user] 2d trapz In-Reply-To: References: Message-ID: On Tue, 10 Mar 2009 16:55:40 +0000 (UTC) Christian K. wrote: > Hi, > > is there any method to do efficient 2d numeric >integration in scipy/numpy? > > Thanks for any hint, Christian > > > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user >>> from scipy.integrate import dblquad >>> help (dblquad) Nils From Dharhas.Pothina at twdb.state.tx.us Tue Mar 10 13:40:16 2009 From: Dharhas.Pothina at twdb.state.tx.us (Dharhas Pothina) Date: Tue, 10 Mar 2009 12:40:16 -0500 Subject: [SciPy-user] scipy.io.read_array: NaN in data file In-Reply-To: <53109E8A-83C5-4706-A796-0B15FBE0D701@gmail.com> References: <2a65fcc80903100857w3bce49b0m3b174af3404ab620@mail.gmail.com> <54952AF1-4C32-4579-80AE-B53F609B9F85@gmail.com> <49B64D7E.63BA.009B.0@twdb.state.tx.us> <40CEAD00-656E-44B9-A27E-D48DD878F7B0@gmail.com> <49B6529D.63BA.009B.0@twdb.state.tx.us> <004A3842-DAEB-4E2B-B541-82FE1E37D6CC@gmail.com> <49B65692.63BA.009B.0@twdb.state.tx.us> <53109E8A-83C5-4706-A796-0B15FBE0D701@gmail.com> Message-ID: <49B65FB0.63BA.009B.0@twdb.state.tx.us> >> I still haven't found a good way in python to remove duplicate dates >> in general. 
> Well, because there's no standard way to do that: when you have > duplicated dates, should you take the first one? The last one ? Take > some kind of average of the values ? Assuming I choose one of the three options above. Most likely the first. How would I proceed then? - dharhas From Chris.Barker at noaa.gov Tue Mar 10 14:32:32 2009 From: Chris.Barker at noaa.gov (Christopher Barker) Date: Tue, 10 Mar 2009 11:32:32 -0700 Subject: [SciPy-user] Google summer of Code 2009 In-Reply-To: <49B67625.2010706@stsci.edu> References: <49B57EE3.9050307@creativetrax.com> <49B65E34.3080706@stsci.edu> <88e473830903100615m195a1510j3a809673d72da615@mail.gmail.com> <49B67625.2010706@stsci.edu> Message-ID: <49B6B240.4030306@noaa.gov> Michael Droettboom wrote: > The PSF will do the work of applying to Google -- we can encourage > prospective students and mentors to apply through the PSF. hmmm -- I wonder if that is best -- it would put MPL projects in competition with all other python projects. My first thought is that a SciPy application would be best -- with SciPy, numpy, MPL, Sage, Cython, etc, it's plenty big, but would have a bit more focus. As an example, wxPython has been a mentoring organization for the last few years. Not that I'm volunteering to put together the application.... -Chris -- Christopher Barker, Ph.D. Oceanographer Emergency Response Division NOAA/NOS/OR&R (206) 526-6959 voice 7600 Sand Point Way NE (206) 526-6329 fax Seattle, WA 98115 (206) 526-6317 main reception Chris.Barker at noaa.gov From millman at berkeley.edu Tue Mar 10 15:07:32 2009 From: millman at berkeley.edu (Jarrod Millman) Date: Tue, 10 Mar 2009 12:07:32 -0700 Subject: [SciPy-user] Google summer of Code 2009 In-Reply-To: <49B6B240.4030306@noaa.gov> References: <49B57EE3.9050307@creativetrax.com> <49B65E34.3080706@stsci.edu> <88e473830903100615m195a1510j3a809673d72da615@mail.gmail.com> <49B67625.2010706@stsci.edu> <49B6B240.4030306@noaa.gov> Message-ID: On Tue, Mar 10, 2009 at 11:32 AM, Christopher Barker wrote: > hmmm -- I wonder if that is best -- it would put MPL projects in > competition with all other python projects. > > My first thought is that a SciPy application would be best -- with > SciPy, numpy, MPL, Sage, Cython, etc, it's plenty big, but would have a > bit more focus. I spoke with the SoC coordinator about this last year and was told they would prefer us to stay under the PSF umbrella. This year they plan to sponsor fewer mentoring organizations, I believe (so less chance we would get accepted). Finally, the deadline for submitting an application to be a mentoring organization is Friday (March 13) at 12 noon PDT: http://code.google.com/opensource/gsoc/2009/faqs.html#0_1_mentoring_orgs_52990812492_14255507054617844 From silva at lma.cnrs-mrs.fr Tue Mar 10 15:14:00 2009 From: silva at lma.cnrs-mrs.fr (Fabrice Silva) Date: Tue, 10 Mar 2009 20:14:00 +0100 Subject: [SciPy-user] 2d trapz In-Reply-To: References: Message-ID: <1236712440.2629.52.camel@localhost.localdomain> Le mardi 10 mars 2009, Nils Wagner a ?crit : > >>> from scipy.integrate import dblquad > >>> help (dblquad) I do not agree. *quad integrators are kind of auto-adaptative integrators, whereas trapz (and simps) handles fixed sample values and performs an approximated integration which error is linked to the step size (in the case of regularly spaced sampled). In my humble opinion, these are two really different kind of integrators... -- Fabrice Silva From ckkart at hoc.net Tue Mar 10 15:44:56 2009 From: ckkart at hoc.net (Christian K.) 
Date: Tue, 10 Mar 2009 20:44:56 +0100 Subject: [SciPy-user] 2d trapz In-Reply-To: <1236712440.2629.52.camel@localhost.localdomain> References: <1236712440.2629.52.camel@localhost.localdomain> Message-ID: Fabrice Silva schrieb: > Le mardi 10 mars 2009, Nils Wagner a ?crit : >> >>> from scipy.integrate import dblquad >> >>> help (dblquad) > > I do not agree. *quad integrators are kind of auto-adaptative > integrators, whereas trapz (and simps) handles fixed sample values and > performs an approximated integration which error is linked to the step > size (in the case of regularly spaced sampled). In my humble opinion, > these are two really different kind of integrators... I could not have said that better :) Thanks for jumping in. I found some implementations in maple and mathematica but of course I would prefer to reuse someone's code rather than implementing it on my own. http://www.mathlab.cornell.edu/local_maple/mvc/local/lib/num_int_2d http://math.fullerton.edu/mathews/n2003/SimpsonsRule2DMod.html Christian From ckkart at hoc.net Tue Mar 10 15:54:40 2009 From: ckkart at hoc.net (Christian K.) Date: Tue, 10 Mar 2009 20:54:40 +0100 Subject: [SciPy-user] 2d trapz In-Reply-To: References: Message-ID: Nils Wagner schrieb: > On Tue, 10 Mar 2009 16:55:40 +0000 (UTC) > Christian K. wrote: >> Hi, >> >> is there any method to do efficient 2d numeric >>integration in scipy/numpy? >> > >>>> from scipy.integrate import dblquad >>>> help (dblquad) > I notice that I made not clear that by 'numerical' I meant integration of fixed sample data. So dblquad is not what I was looking for. Christian From pav at iki.fi Tue Mar 10 16:30:28 2009 From: pav at iki.fi (Pauli Virtanen) Date: Tue, 10 Mar 2009 20:30:28 +0000 (UTC) Subject: [SciPy-user] 2d trapz References: Message-ID: Tue, 10 Mar 2009 20:54:40 +0100, Christian K. wrote: [clip] > I notice that I made not clear that by 'numerical' I meant integration > of fixed sample data. So dblquad is not what I was looking for. Use trapz twice? import scipy as sp import numpy as np x = np.linspace(-10, 10, 200) y = np.linspace(-10, 10, 80) f = np.exp(-x[:,np.newaxis]**2 - y[np.newaxis,:]**2/7)/(np.sqrt(7)*np.pi) print sp.trapz(sp.trapz(f, y[np.newaxis,:], axis=1), x, axis=0) # -> 0.99999990566 *** There's a bug in Numpy <= 1.2.1 that requires that `f` and `y` to have same number of dimensions, so that's why the newaxis. -- Pauli Virtanen From contact at pythonxy.com Tue Mar 10 17:19:23 2009 From: contact at pythonxy.com (Pierre Raybaut) Date: Tue, 10 Mar 2009 22:19:23 +0100 Subject: [SciPy-user] Pydee v0.3.0 Message-ID: <49B6D95B.2030006@pythonxy.com> Hi all, Two months ago, I made an announcement regarding a little open-source project of mine, PyQtShell -- that is a module providing embeddable console widgets for your PyQt applications (interactive Python shell, workspace, working directory browser, editor, ...) as well as "Pydee", a PYthon Development EnvironmEnt based on these widgets (which could become an interesting alternative to IDLE for example). Pydee features have been greatly enhanced these last weeks, and a lot of bugs were fixed thanks to Christopher Brown (thank you again Christopher for your bug reports/feature requests which are always very detailed and constructive). I recently (a few minutes ago actually..) added an interesting feature in Pydee v0.3.0: matplotlib integration (i.e. matplotlib figures can be docked inside Pydee which is quite convenient). 
See this screenshot for example: http://source.pythonxy.com/PyQtShell/screenshots/ss3.png Other screenshots and informations: http://pypi.python.org/pypi/PyQtShell/ http://code.google.com/p/pyqtshell/ As some of you may have noticed, Pydee is intended to be a mini-MATLAB environment -- that being said, it still at an early stage of development. Cheers, Pierre From ckkart at hoc.net Tue Mar 10 18:10:03 2009 From: ckkart at hoc.net (Christian K.) Date: Tue, 10 Mar 2009 23:10:03 +0100 Subject: [SciPy-user] 2d trapz In-Reply-To: References: Message-ID: Pauli Virtanen schrieb: > Tue, 10 Mar 2009 20:54:40 +0100, Christian K. wrote: > [clip] >> I notice that I made not clear that by 'numerical' I meant integration >> of fixed sample data. So dblquad is not what I was looking for. > > Use trapz twice? > I doubt that this is exact. But at the moment (late in the evening) this is just my intuition and I might be wrong. I'll give it a try. Christian From contact at pythonxy.com Wed Mar 11 03:07:42 2009 From: contact at pythonxy.com (Pierre Raybaut) Date: Wed, 11 Mar 2009 08:07:42 +0100 Subject: [SciPy-user] Pydee v0.3.0 Message-ID: <49B7633E.50606@pythonxy.com> ----------------------------------------------------- Edit: Apparently, there's a bug (a what?!) in Pydee v0.3.0 You won't be able to install it and test it until v0.3.1 (I'll release it in 10-12 hours) Sorry for that ----------------------------------------------------- Hi all, Two months ago, I made an announcement regarding a little open-source project of mine, PyQtShell -- that is a module providing embeddable console widgets for your PyQt applications (interactive Python shell, workspace, working directory browser, editor, ...) as well as "Pydee", a PYthon Development EnvironmEnt based on these widgets (which could become an interesting alternative to IDLE for example). Pydee features have been greatly enhanced these last weeks, and a lot of bugs were fixed thanks to Christopher Brown (thank you again Christopher for your bug reports/feature requests which are always very detailed and constructive). I recently (a few minutes ago actually..) added an interesting feature in Pydee v0.3.0: matplotlib integration (i.e. matplotlib figures can be docked inside Pydee which is quite convenient). See this screenshot for example: http://source.pythonxy.com/PyQtShell/screenshots/ss3.png Other screenshots and informations: http://pypi.python.org/pypi/PyQtShell/ http://code.google.com/p/pyqtshell/ As some of you may have noticed, Pydee is intended to be a mini-MATLAB environment -- that being said, it still at an early stage of development. Cheers, Pierre From timmichelsen at gmx-topmail.de Wed Mar 11 05:33:07 2009 From: timmichelsen at gmx-topmail.de (Timmie) Date: Wed, 11 Mar 2009 09:33:07 +0000 (UTC) Subject: [SciPy-user] =?utf-8?q?scipy=2Eio=2Eread=5Farray=3A_NaN_in_data_f?= =?utf-8?q?ile?= References: <2a65fcc80903100857w3bce49b0m3b174af3404ab620@mail.gmail.com> <54952AF1-4C32-4579-80AE-B53F609B9F85@gmail.com> <49B64D7E.63BA.009B.0@twdb.state.tx.us> <40CEAD00-656E-44B9-A27E-D48DD878F7B0@gmail.com> <49B6529D.63BA.009B.0@twdb.state.tx.us> <004A3842-DAEB-4E2B-B541-82FE1E37D6CC@gmail.com> <49B65692.63BA.009B.0@twdb.state.tx.us> <53109E8A-83C5-4706-A796-0B15FBE0D701@gmail.com> <49B65FB0.63BA.009B.0@twdb.state.tx.us> Message-ID: Hello! > > Well, because there's no standard way to do that: when you have > > duplicated dates, should you take the first one? The last one ? Take > > some kind of average of the values ? 
> > Assuming I choose one of the three options above. Most likely the first. How would I proceed then? I haven't solved that problem either. But maybe the code from interpolate modules of scikit have some checkers on consecutive values? Please tell what ideas you have. Thanks, Timmie From timmichelsen at gmx-topmail.de Wed Mar 11 05:35:19 2009 From: timmichelsen at gmx-topmail.de (Timmie) Date: Wed, 11 Mar 2009 09:35:19 +0000 (UTC) Subject: [SciPy-user] =?utf-8?q?scipy=2Eio=2Eread=5Farray=3A_NaN_in_data_f?= =?utf-8?q?ile?= References: <2a65fcc80903100857w3bce49b0m3b174af3404ab620@mail.gmail.com> <54952AF1-4C32-4579-80AE-B53F609B9F85@gmail.com> <49B64D7E.63BA.009B.0@twdb.state.tx.us> <40CEAD00-656E-44B9-A27E-D48DD878F7B0@gmail.com> <49B6529D.63BA.009B.0@twdb.state.tx.us> <004A3842-DAEB-4E2B-B541-82FE1E37D6CC@gmail.com> <49B65692.63BA.009B.0@twdb.state.tx.us> <53109E8A-83C5-4706-A796-0B15FBE0D701@gmail.com> Message-ID: > Well, because there's no standard way to do that: when you have > duplicated dates, should you take the first one? The last one ? Take > some kind of average of the values ? Sometimes, there are inherent faults in the data set. Therefore, a automatic treatment may introduce further errors. It's only possible when this errors are occuring somewhat systematically. From nwagner at iam.uni-stuttgart.de Wed Mar 11 06:58:41 2009 From: nwagner at iam.uni-stuttgart.de (Nils Wagner) Date: Wed, 11 Mar 2009 11:58:41 +0100 Subject: [SciPy-user] Skyline matrix support in scipy.sparse Message-ID: Hi all, Is it possible to import matrices given in skyline format ? Reference: http://en.wikipedia.org/wiki/Skyline_matrix From Dharhas.Pothina at twdb.state.tx.us Wed Mar 11 09:13:26 2009 From: Dharhas.Pothina at twdb.state.tx.us (Dharhas Pothina) Date: Wed, 11 Mar 2009 08:13:26 -0500 Subject: [SciPy-user] scipy.io.read_array: NaN in data file In-Reply-To: References: <2a65fcc80903100857w3bce49b0m3b174af3404ab620@mail.gmail.com> <54952AF1-4C32-4579-80AE-B53F609B9F85@gmail.com> <49B64D7E.63BA.009B.0@twdb.state.tx.us> <40CEAD00-656E-44B9-A27E-D48DD878F7B0@gmail.com> <49B6529D.63BA.009B.0@twdb.state.tx.us> <004A3842-DAEB-4E2B-B541-82FE1E37D6CC@gmail.com> <49B65692.63BA.009B.0@twdb.state.tx.us> <53109E8A-83C5-4706-A796-0B15FBE0D701@gmail.com> Message-ID: <49B772A6.63BA.009B.0@twdb.state.tx.us> In this particular case we know the cause: It is either : a) Overlapping files have been appended. ie file1 contains data from Jan1 to Feb1 and file2 contains data from jan1 to March1. The overlap region has identical data. b) The data comes from sequential deployments and there is an small overlap at the beginning of the second file. ie file1 has data from Jan1 to Feb1 and file2 contains data from Feb1 to March1. There may be a few data points overlap. These are junk because the equipment was set up in the lab and took measurements in the air until it was swapped with the installed instrument in the water. In both these cases it is appropriate to take the first value. In the second case we really should be stripping the bad data before appending but this is a work in progress. Right now we are developing a semi-automated QA/QC procedure to clean up data before posting it on the web. We presently use a mix of awk and shell scripts but I'm trying to convert everything to python to make it easier to use, more maintainable, have nicer plots than gnuplot and to develop a gui application to help us do this. 
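For a rough idea of the take-the-first-value step in plain NumPy, a minimal sketch (the date and value arrays below are invented, and the dates are assumed to be already sorted chronologically):

import numpy as np

# stand-in data: dates as YYYYMMDD integers, with one duplicated date
dates = np.array([20090101, 20090102, 20090102, 20090103])
values = np.array([1.0, 2.0, 99.0, 3.0])

# keep an entry only when its date differs from the previous one,
# i.e. keep the first occurrence of every duplicated date
keep = np.concatenate(([True], dates[1:] != dates[:-1]))
dates, values = dates[keep], values[keep]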
- dharhas >>> Timmie 3/11/2009 4:35 AM >>> > Well, because there's no standard way to do that: when you have > duplicated dates, should you take the first one? The last one ? Take > some kind of average of the values ? Sometimes, there are inherent faults in the data set. Therefore, a automatic treatment may introduce further errors. It's only possible when this errors are occuring somewhat systematically. _______________________________________________ SciPy-user mailing list SciPy-user at scipy.org http://mail.scipy.org/mailman/listinfo/scipy-user From wnbell at gmail.com Wed Mar 11 09:28:44 2009 From: wnbell at gmail.com (Nathan Bell) Date: Wed, 11 Mar 2009 09:28:44 -0400 Subject: [SciPy-user] Skyline matrix support in scipy.sparse In-Reply-To: References: Message-ID: On Wed, Mar 11, 2009 at 6:58 AM, Nils Wagner wrote: > Hi all, > > Is it possible to import matrices given in skyline format > ? > > Reference: > > http://en.wikipedia.org/wiki/Skyline_matrix Formats like CSR and CSC (which scipy.sparse supports) are more general than skyline, so it's just a matter of converting the input to one of these formats. Does your skyline matrix come from a file, or some other source? -- Nathan Bell wnbell at gmail.com http://graphics.cs.uiuc.edu/~wnbell/ From nwagner at iam.uni-stuttgart.de Wed Mar 11 09:42:06 2009 From: nwagner at iam.uni-stuttgart.de (Nils Wagner) Date: Wed, 11 Mar 2009 14:42:06 +0100 Subject: [SciPy-user] Skyline matrix support in scipy.sparse In-Reply-To: References: Message-ID: On Wed, 11 Mar 2009 09:28:44 -0400 Nathan Bell wrote: > On Wed, Mar 11, 2009 at 6:58 AM, Nils Wagner > wrote: >> Hi all, >> >> Is it possible to import matrices given in skyline >>format >> ? >> >> Reference: >> >> http://en.wikipedia.org/wiki/Skyline_matrix > >Formats like CSR and CSC (which scipy.sparse supports) >are more > general than skyline, so it's just a matter of >converting the input to > one of these formats. > > Does your skyline matrix come from a file, or some other >source? > > -- > Nathan Bell wnbell at gmail.com > http://graphics.cs.uiuc.edu/~wnbell/ > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user Hi Nathan, The matrix at hand is a NASTRAN matrix given in the so-called output4 format. ASCII (FORMATTED) and binary (UNFORMATTED) format is available depending on the NASTRAN statement ASSIGN OUTPUT4='test.op4', UNKNOWN, FORMATTED, UNIT=21 Anyway, the contents of the op4 file looks like skyline format. I can send you an example matrix off-list. Cheers, Nils From pgmdevlist at gmail.com Wed Mar 11 12:26:46 2009 From: pgmdevlist at gmail.com (Pierre GM) Date: Wed, 11 Mar 2009 12:26:46 -0400 Subject: [SciPy-user] scipy.io.read_array: NaN in data file In-Reply-To: <49B772A6.63BA.009B.0@twdb.state.tx.us> References: <2a65fcc80903100857w3bce49b0m3b174af3404ab620@mail.gmail.com> <54952AF1-4C32-4579-80AE-B53F609B9F85@gmail.com> <49B64D7E.63BA.009B.0@twdb.state.tx.us> <40CEAD00-656E-44B9-A27E-D48DD878F7B0@gmail.com> <49B6529D.63BA.009B.0@twdb.state.tx.us> <004A3842-DAEB-4E2B-B541-82FE1E37D6CC@gmail.com> <49B65692.63BA.009B.0@twdb.state.tx.us> <53109E8A-83C5-4706-A796-0B15FBE0D701@gmail.com> <49B772A6.63BA.009B.0@twdb.state.tx.us> Message-ID: <6DB3CD07-670D-453A-BD7B-5D34F675F82D@gmail.com> Dharhas, To find duplicates, you can use the following function (on SVN r2111). find_duplicated_dates will give you a dictionary, you can then use the values to decide what you want to do. 
remove_duplicated_dates will strip the series to keep only the first occurrence of duplicated dates.

def find_duplicated_dates(series):
    """
    Return a dictionary (duplicated dates <> indices) for the input series.

    The indices are given as a tuple of ndarrays, a la :meth:`nonzero`.

    Parameters
    ----------
    series : TimeSeries, DateArray
        A valid :class:`TimeSeries` or :class:`DateArray` object.

    Examples
    --------
    >>> series = time_series(np.arange(10), dates=[2000, 2001, 2002, 2003, 2003, 2003, 2004, 2005, 2005, 2006], freq='A')
    >>> test = find_duplicated_dates(series)
    {: (array([3, 4, 5]),), : (array([7, 8]),)}
    """
    dates = getattr(series, '_dates', series)
    steps = dates.get_steps()
    duplicated_dates = tuple(set(dates[steps==0]))
    indices = {}
    for d in duplicated_dates:
        indices[d] = (dates==d).nonzero()
    return indices

def remove_duplicated_dates(series):
    """
    Remove the entries of `series` corresponding to duplicated dates.

    The series is first sorted in chronological order.
    Only the first occurence of a date is then kept, the others are discarded.

    Parameters
    ----------
    series : TimeSeries
        Time series to process
    """
    dates = getattr(series, '_dates', series)
    steps = np.concatenate(([1,], dates.get_steps()))
    if not dates.is_chronological():
        series = series.copy()
        series.sort_chronologically()
        dates = series._dates
    return series[steps.nonzero()]

On Mar 11, 2009, at 9:13 AM, Dharhas Pothina wrote:
>
> In this particular case we know the cause:
>
> It is either :
>
> a) Overlapping files have been appended. ie file1 contains data from
> Jan1 to Feb1 and file2 contains data from jan1 to March1. The
> overlap region has identical data.
>
> b) The data comes from sequential deployments and there is an small
> overlap at the beginning of the second file. ie file1 has data from
> Jan1 to Feb1 and file2 contains data from Feb1 to March1. There may
> be a few data points overlap. These are junk because the equipment
> was set up in the lab and took measurements in the air until it was
> swapped with the installed instrument in the water.
>
> In both these cases it is appropriate to take the first value. In
> the second case we really should be stripping the bad data before
> appending but this is a work in progress. Right now we are
> developing a semi-automated QA/QC procedure to clean up data before
> posting it on the web. We presently use a mix of awk and shell
> scripts but I'm trying to convert everything to python to make it
> easier to use, more maintainable, have nicer plots than gnuplot and
> to develop a gui application to help us do this.
>
> - dharhas
>
>>>> Timmie 3/11/2009 4:35 AM >>>
>> Well, because there's no standard way to do that: when you have
>> duplicated dates, should you take the first one? The last one ? Take
>> some kind of average of the values ?
> Sometimes, there are inherent faults in the data set. Therefore, a
> automatic
> treatment may introduce further errors.
> It's only possible when this errors are occuring somewhat
> systematically.
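As a rough usage sketch for the two functions quoted above (assuming a scikits.timeseries install plus the functions themselves pasted into the session; the series is the one from the docstring example):

import numpy as np
import scikits.timeseries as ts

series = ts.time_series(np.arange(10), dates=[2000, 2001, 2002, 2003, 2003, 2003, 2004, 2005, 2005, 2006], freq='A')
dups = find_duplicated_dates(series)     # dict mapping each duplicated date to its indices
clean = remove_duplicated_dates(series)  # keeps only the first occurrence of each date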
> > > > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user From Chris.Barker at noaa.gov Wed Mar 11 12:50:35 2009 From: Chris.Barker at noaa.gov (Christopher Barker) Date: Wed, 11 Mar 2009 09:50:35 -0700 Subject: [SciPy-user] Google summer of Code 2009 In-Reply-To: References: <49B57EE3.9050307@creativetrax.com> <49B65E34.3080706@stsci.edu> <88e473830903100615m195a1510j3a809673d72da615@mail.gmail.com> <49B67625.2010706@stsci.edu> <49B6B240.4030306@noaa.gov> Message-ID: <49B7EBDB.4090705@noaa.gov> Jarrod Millman wrote: > I spoke with the SoC coordinator about this last year and was told > they would prefer us to stay under the PSF umbrella. Fair enough -- I'll shut up now. -Chris -- Christopher Barker, Ph.D. Oceanographer Emergency Response Division NOAA/NOS/OR&R (206) 526-6959 voice 7600 Sand Point Way NE (206) 526-6329 fax Seattle, WA 98115 (206) 526-6317 main reception Chris.Barker at noaa.gov From Dharhas.Pothina at twdb.state.tx.us Wed Mar 11 13:01:40 2009 From: Dharhas.Pothina at twdb.state.tx.us (Dharhas Pothina) Date: Wed, 11 Mar 2009 12:01:40 -0500 Subject: [SciPy-user] scipy.io.read_array: NaN in data file In-Reply-To: <6DB3CD07-670D-453A-BD7B-5D34F675F82D@gmail.com> References: <2a65fcc80903100857w3bce49b0m3b174af3404ab620@mail.gmail.com> <54952AF1-4C32-4579-80AE-B53F609B9F85@gmail.com> <49B64D7E.63BA.009B.0@twdb.state.tx.us> <40CEAD00-656E-44B9-A27E-D48DD878F7B0@gmail.com> <49B6529D.63BA.009B.0@twdb.state.tx.us> <004A3842-DAEB-4E2B-B541-82FE1E37D6CC@gmail.com> <49B65692.63BA.009B.0@twdb.state.tx.us> <53109E8A-83C5-4706-A796-0B15FBE0D701@gmail.com> <49B772A6.63BA.009B.0@twdb.state.tx.us> <6DB3CD07-670D-453A-BD7B-5D34F675F82D@gmail.com> Message-ID: <49B7A824.63BA.009B.0@twdb.state.tx.us> Great to hear. Once I'm done with my present project I'll see if I can install and play around with the SVN version. - d >>> Pierre GM 3/11/2009 11:26 AM >>> Dharhas, To find duplicates, you can use the following function (on SVN r2111). find_duplicated_dates will give you a dictionary, you can then use the values to decide what you want to do. remove_duplicated_dates will strip the series to keep only the first occurrence of duplicated dates. def find_duplicated_dates(series): """ Return a dictionary (duplicated dates <> indices) for the input series. The indices are given as a tuple of ndarrays, a la :meth:`nonzero`. Parameters ---------- series : TimeSeries, DateArray A valid :class:`TimeSeries` or :class:`DateArray` object. Examples -------- >>> series = time_series(np.arange(10), dates=[2000, 2001, 2002, 2003, 2003, 2003, 2004, 2005, 2005, 2006], freq='A') >>> test = find_duplicated_dates(series) {: (array([3, 4, 5]),), : (array([7, 8]),)} """ dates = getattr(series, '_dates', series) steps = dates.get_steps() duplicated_dates = tuple(set(dates[steps==0])) indices = {} for d in duplicated_dates: indices[d] = (dates==d).nonzero() return indices def remove_duplicated_dates(series): """ Remove the entries of `series` corresponding to duplicated dates. The series is first sorted in chronological order. Only the first occurence of a date is then kept, the others are discarded. 
Parameters ---------- series : TimeSeries Time series to process """ dates = getattr(series, '_dates', series) steps = np.concatenate(([1,], dates.get_steps())) if not dates.is_chronological(): series = series.copy() series.sort_chronologically() dates = series._dates return series[steps.nonzero()] On Mar 11, 2009, at 9:13 AM, Dharhas Pothina wrote: > > In this particular case we know the cause: > > It is either : > > a) Overlapping files have been appended. ie file1 contains data from > Jan1 to Feb1 and file2 contains data from jan1 to March1. The > overlap region has identical data. > > b) The data comes from sequential deployments and there is an small > overlap at the beginning of the second file. ie file1 has data from > Jan1 to Feb1 and file2 contains data from Feb1 to March1. There may > be a few data points overlap. These are junk because the equipment > was set up in the lab and took measurements in the air until it was > swapped with the installed instrument in the water. > > In both these cases it is appropriate to take the first value. In > the second case we really should be stripping the bad data before > appending but this is a work in progress. Right now we are > developing a semi-automated QA/QC procedure to clean up data before > posting it on the web. We presently use a mix of awk and shell > scripts but I'm trying to convert everything to python to make it > easier to use, more maintainable, have nicer plots than gnuplot and > to develop a gui application to help us do this. > > - dharhas > >>>> Timmie 3/11/2009 4:35 AM >>> >> Well, because there's no standard way to do that: when you have >> duplicated dates, should you take the first one? The last one ? Take >> some kind of average of the values ? > Sometimes, there are inherent faults in the data set. Therefore, a > automatic > treatment may introduce further errors. > It's only possible when this errors are occuring somewhat > systematically. > > > > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user _______________________________________________ SciPy-user mailing list SciPy-user at scipy.org http://mail.scipy.org/mailman/listinfo/scipy-user From helstreak at hotmail.com Wed Mar 11 17:43:02 2009 From: helstreak at hotmail.com (Joseph Johnson) Date: Wed, 11 Mar 2009 16:43:02 -0500 Subject: [SciPy-user] Install newest version of scipy to Ubuntu 8.04 Message-ID: Hi, I was wondering if there is an apt source for Ubuntu 8.04? I did find: https://edge.launchpad.net/~scipy/+archive/ppa but its for 8.10 Thanks for the help. _________________________________________________________________ Windows Live?: Life without walls. http://windowslive.com/explore?ocid=TXT_TAGLM_WL_allup_1a_explore_032009 -------------- next part -------------- An HTML attachment was scrubbed... URL: From mcohen at caltech.edu Wed Mar 11 19:42:35 2009 From: mcohen at caltech.edu (Michael Cohen) Date: Wed, 11 Mar 2009 16:42:35 -0700 Subject: [SciPy-user] write_array deprecated? In-Reply-To: References: Message-ID: <49B84C6B.10703@caltech.edu> Hi all, Is write-array deprecated wth the new version? 
I get the warning: /usr/local/python-2.5.4/lib/python2.5/site-packages/numpy/lib/utils.py:110: DeprecationWarning: write_array is deprecated warnings.warn(str1, DeprecationWarning) I am using write_array from scipy.io. It doesn't tell me what it is being deprecated in favour of. Cheers, Mike From david at ar.media.kyoto-u.ac.jp Thu Mar 12 00:06:28 2009 From: david at ar.media.kyoto-u.ac.jp (David Cournapeau) Date: Thu, 12 Mar 2009 13:06:28 +0900 Subject: [SciPy-user] Install newest version of scipy to Ubuntu 8.04 In-Reply-To: References: Message-ID: <49B88A44.6090907@ar.media.kyoto-u.ac.jp> Joseph Johnson wrote: > Hi, > > I was wondering if there is an apt source for Ubuntu 8.04? > > I did find: > https://edge.launchpad.net/~scipy/+archive/ppa > but its for 8.10 No, only Ubuntu 8.10 is supported. Supporting both 8.04 and 8.10 is too time consuming, David From strawman at astraw.com Thu Mar 12 01:59:40 2009 From: strawman at astraw.com (Andrew Straw) Date: Wed, 11 Mar 2009 22:59:40 -0700 Subject: [SciPy-user] Install newest version of scipy to Ubuntu 8.04 In-Reply-To: <49B88A44.6090907@ar.media.kyoto-u.ac.jp> References: <49B88A44.6090907@ar.media.kyoto-u.ac.jp> Message-ID: <49B8A4CC.7070602@astraw.com> David Cournapeau wrote: > Joseph Johnson wrote: >> Hi, >> >> I was wondering if there is an apt source for Ubuntu 8.04? >> >> I did find: >> https://edge.launchpad.net/~scipy/+archive/ppa >> but its for 8.10 > > No, only Ubuntu 8.10 is supported. Supporting both 8.04 and 8.10 is too > time consuming, That being said, you can try my packages: http://debs.astraw.com/hardy From nwagner at iam.uni-stuttgart.de Thu Mar 12 03:52:16 2009 From: nwagner at iam.uni-stuttgart.de (Nils Wagner) Date: Thu, 12 Mar 2009 08:52:16 +0100 Subject: [SciPy-user] write_array deprecated? In-Reply-To: <49B84C6B.10703@caltech.edu> References: <49B84C6B.10703@caltech.edu> Message-ID: On Wed, 11 Mar 2009 16:42:35 -0700 Michael Cohen wrote: > Hi all, > Is write-array deprecated wth the new version? > > I get the warning: > /usr/local/python-2.5.4/lib/python2.5/site-packages/numpy/lib/utils.py:110: > DeprecationWarning: write_array is deprecated > warnings.warn(str1, DeprecationWarning) > > I am using write_array from scipy.io. > It doesn't tell me what it is being deprecated in favour >of. > > Cheers, > Mike > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user Use savetxt >>> from numpy import savetxt Nils From Ross.Williamson at usap.gov Thu Mar 12 06:00:55 2009 From: Ross.Williamson at usap.gov (Williamson, Ross) Date: Thu, 12 Mar 2009 23:00:55 +1300 Subject: [SciPy-user] pytools Message-ID: Hi there As anyone tried to install pytools? I need the pytools.nmpfit module and it just does not exist. I've downloaded the latest is version (7) using: easy_install pytools and nothing - Spent 6 hours trying to find this - so much for bloody easy_install Any ideas? Ross From perry at stsci.edu Thu Mar 12 08:55:30 2009 From: perry at stsci.edu (Perry Greenfield) Date: Thu, 12 Mar 2009 08:55:30 -0400 Subject: [SciPy-user] pytools In-Reply-To: References: Message-ID: <34B5D44E-8E5A-4320-A07B-BA204F2A339C@stsci.edu> Sounds like you are trying to use the pytools package in stsci_python. That isn't part of scipy. Where did you learn about it? Perry Greenfield On Mar 12, 2009, at 6:00 AM, Williamson, Ross wrote: > Hi there > > As anyone tried to install pytools? I need the pytools.nmpfit module > and it just does not exist. 
I've downloaded the latest is version > (7) using: > > easy_install pytools and nothing - Spent 6 hours trying to find this > - so much for bloody easy_install > > Any ideas? > > Ross > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user From timmichelsen at gmx-topmail.de Thu Mar 12 09:13:12 2009 From: timmichelsen at gmx-topmail.de (Timmie) Date: Thu, 12 Mar 2009 13:13:12 +0000 (UTC) Subject: [SciPy-user] Article(s) on Test Driven Development for Science Message-ID: Hello, from many blogs, in the book "Expert Python Programming" and in the code I read about test driven development. It's said to help preventing future breakage or failures of the code. Is see on major diffuculty when adopting this for science scripts: In Science we not only have to control the program flow but also to validate the output. I think such a validation needs to be included in tests. I may change something in my code and still pass tests from the software side but the result data may be totally wrong. Are there already concepts for such testing? The tests I have seen so far play mostely with generated random data. Are there presentations or articles about this? Regards, Timmie From strawman at astraw.com Thu Mar 12 11:05:14 2009 From: strawman at astraw.com (Andrew Straw) Date: Thu, 12 Mar 2009 08:05:14 -0700 Subject: [SciPy-user] numpy aligned memory In-Reply-To: <49B53ADB.9010809@molden.no> References: <4d5dd8c20903081003m3322b5c4j21521f88eb647ae3@mail.gmail.com> <49B53ADB.9010809@molden.no> Message-ID: <49B924AA.9030509@astraw.com> Sturla Molden wrote: > On 3/8/2009 6:03 PM, Rohit Garg wrote: > > >> http://www.mail-archive.com/numpy-discussion at scipy.org/msg04005.html >> >> while googling for numpy memory alignment. I wish to know if anything >> on that account has come to pass yet? On linux 64 bit platform, can I >> assume anything beyond the glibc alignment as of now? >> > > If you are willing to waste a few bytes, there is nothing that prevents > you from ensuring arbitrary alignment manually. You just allocate more > space than you need (16 bytes for 16 bytes alignment), and return a view > to a properly aligned segment. Something like this: > > import numpy as np > > def aligned_zeros(shape, boundary=16, dtype=float, order='C'): > N = np.prod(shape) > d = np.dtype(dtype) > tmp = np.zeros(N * d.itemsize + boundary, dtype=np.uint8) > address = tmp.__array_interface__['data'][0] > offset = (boundary - address % boundary) % boundary > return tmp[offset:offset+N]\ > .view(dtype=d)\ > .reshape(shape, order=order) > > We had questions regarding this for an FFTW interface as well (how to > use fftw_malloc instead of malloc). It also affect all coding using SIMD > extensions on x86 (MMX, SSE, SSE2). I don't use PPC so I don't know what > altivec needs. In any case, should this be in the cookbook? Or even in > numpy? It seems a bit redundant to answer this question over and over again. > > > Sturla Molden Sturla, I just tried your example, and I discovered that for a 2D array, it did not align rows on boundaries -- just the first element of the first row. My understanding is that for image processing with SIMD this is desired. For example, Intel IPP allocates images such that each image row is 32-byte aligned. (I just checked that Framewave does _not_ do this, so maybe times have changed or maybe Framewave just isn't optimized in this regard.) So, what's your take on having each row aligned? 
Is this also useful for FFTW, for example? If so, we should perhaps come up with a better routine for the cookbook. -Andrew From sturla at molden.no Thu Mar 12 11:45:53 2009 From: sturla at molden.no (Sturla Molden) Date: Thu, 12 Mar 2009 16:45:53 +0100 Subject: [SciPy-user] numpy aligned memory In-Reply-To: <49B924AA.9030509@astraw.com> References: <4d5dd8c20903081003m3322b5c4j21521f88eb647ae3@mail.gmail.com> <49B53ADB.9010809@molden.no> <49B924AA.9030509@astraw.com> Message-ID: <49B92E31.9060608@molden.no> On 3/12/2009 4:05 PM, Andrew Straw wrote: > Sturla, I just tried your example, and I discovered that for a 2D array, > it did not align rows on boundaries -- just the first element of the > first row. The whole buffer is aligned, but not every row. The first element on row n is contiguous with the last element on row n-1 (with C ordering). > My understanding is that for image processing with SIMD this > is desired. For example, Intel IPP allocates images such that each image > row is 32-byte aligned. (I just checked that Framewave does _not_ do > this, so maybe times have changed or maybe Framewave just isn't > optimized in this regard.) > > So, what's your take on having each row aligned? In that case we must pad each row (or column with Fortran ordering). Sure it can be done. It gets a bit more complex, and wastes a bit more memory, but ok I can fix this. Regards, Sturla Molden From rpg.314 at gmail.com Thu Mar 12 12:02:08 2009 From: rpg.314 at gmail.com (Rohit Garg) Date: Thu, 12 Mar 2009 21:32:08 +0530 Subject: [SciPy-user] read/write_array deprecated, replacements? Message-ID: <4d5dd8c20903120902q5c00dc8g7b27d6aed28000f7@mail.gmail.com> I just ran some of my scripts and now I am told that the read/write_array are deprected. What are their replacements for i/o of arrays in text format. No binary files please. -- Rohit Garg http://rpg-314.blogspot.com/ Senior Undergraduate Department of Physics Indian Institute of Technology Bombay From pgmdevlist at gmail.com Thu Mar 12 12:12:26 2009 From: pgmdevlist at gmail.com (Pierre GM) Date: Thu, 12 Mar 2009 12:12:26 -0400 Subject: [SciPy-user] read/write_array deprecated, replacements? In-Reply-To: <4d5dd8c20903120902q5c00dc8g7b27d6aed28000f7@mail.gmail.com> References: <4d5dd8c20903120902q5c00dc8g7b27d6aed28000f7@mail.gmail.com> Message-ID: <23E569EB-CA67-47B0-84AE-60DF8EDD56C0@gmail.com> On Mar 12, 2009, at 12:02 PM, Rohit Garg wrote: > I just ran some of my scripts and now I am told that the > read/write_array are deprected. What are their replacements for i/o of > arrays in text format. No binary files please. You can try np.genfromtxt and others from numpy (v1.3). From josef.pktd at gmail.com Thu Mar 12 13:02:16 2009 From: josef.pktd at gmail.com (josef.pktd at gmail.com) Date: Thu, 12 Mar 2009 13:02:16 -0400 Subject: [SciPy-user] Article(s) on Test Driven Development for Science In-Reply-To: References: Message-ID: <1cd32cbb0903121002y18020879gd4038cfdff125d26@mail.gmail.com> On Thu, Mar 12, 2009 at 9:13 AM, Timmie wrote: > Hello, > from many blogs, in the book "Expert Python Programming" and in the code I read > about test driven development. > It's said to help preventing future breakage or failures of the code. > > Is see on major diffuculty when adopting this for science scripts: > > In Science we not only have to control the program flow but also to validate the > output. > I think such a validation needs to be included in tests. 
> > I may change something in my code and still pass tests from the software side > but the result data may be totally wrong. > > Are there already concepts for such testing? > > The tests I have seen so far play mostely with generated random data. > > Are there presentations or articles about this? > > Regards, > Timmie > I don't think validating the results needs much of a special discussion besides the regular testing tools, and it would be very field specific. For example in stats, and similar in my other work, I use four types of (unit) tests: * validating special cases, where I know what the right results are supposed to be. This is usually my first step to get the basic mistakes fixed. * comparison with other implementation: often I have several implementation available to calculate the same results, e.g. in stats.distributions, numerical integration versus explicit formula, or unoptimized version of a function with loops and simple structure, second version with optimized matrix algebra. * comparison with validated numbers: e.g. comparing with results from R, from publications or certified examples as the ones from NIST, * using theoretical properties: the random tests in stats are based on the statistical properties of the statistic, distribution or estimator, either from the definition or, for example, from the law of large numbers. If I can simulate a large enough sample or run a Monte Carlo with enough replications, I can test that the computed results correspond to the theoretical results. To simplify the actual tests, I also use regression tests after verifying the results, but regression tests don't validate the results if they were wrong in the first place. For big models, I have often found nothing better than visual inspection and relying on intuition whether it looks correct. I try to verify the individual pieces with unit tests, but whether everything works correctly together, I don't have formally tested. So my impression is that, validating the results in tests should just be part of the regular testing strategy, which should not be restricted to tests that verify whether the function runs and the result has the correct shape and type. Josef From sturla at molden.no Thu Mar 12 14:15:32 2009 From: sturla at molden.no (Sturla Molden) Date: Thu, 12 Mar 2009 19:15:32 +0100 Subject: [SciPy-user] numpy aligned memory In-Reply-To: <49B924AA.9030509@astraw.com> References: <4d5dd8c20903081003m3322b5c4j21521f88eb647ae3@mail.gmail.com> <49B53ADB.9010809@molden.no> <49B924AA.9030509@astraw.com> Message-ID: <49B95144.2090304@molden.no> On 3/12/2009 4:05 PM, Andrew Straw wrote: > So, what's your take on having each row aligned? Is this also useful for > FFTW, for example? If so, we should perhaps come up with a better > routine for the cookbook. Ok, so here is how it could be done. It fails for a reason I'll attribute to a bug in NumPy. 
import numpy as np def _nextpow(b,isize): i = 1 while b**i < isize: i += 1 return b**i def aligned_zeros(shape, boundary=16, dtype=float, order='C', imagealign=True): if (not imagealign) or (not hasattr(shape,'__len__')): N = np.prod(shape) d = np.dtype(dtype) tmp = np.zeros(N * d.itemsize + boundary, dtype=np.uint8) address = tmp.__array_interface__['data'][0] offset = (boundary - address % boundary) % boundary return tmp[offset:offset+N*d.itemsize]\ .view(dtype=d)\ .reshape(shape, order=order) else: if order == 'C': ndim0 = shape[-1] dim0 = -1 else: ndim0 = shape[0] dim0 = 0 d = np.dtype(dtype) bshape = [i for i in shape] padding = boundary + _nextpow(boundary, d.itemsize) - d.itemsize bshape[dim0] = ndim0*d.itemsize + padding print bshape tmp = np.zeros(bshape, dtype=np.uint8, order=order) address = tmp.__array_interface__['data'][0] offset = (boundary - address % boundary) % boundary aligned_slice = slice(offset, offset + ndim0*d.itemsize) if tmp.flags['C_CONTIGUOUS']: tmp = tmp[..., aligned_slice] print tmp.shape else: tmp = tmp[aligned_slice, ...] print tmp.shape return tmp.view(dtype=dtype) # this will often fail, # probably a bug in numpy So lets reproduce the NumPy issue: >>> a = zeros((10,52), dtype=uint8) >>> b = a[:, 3:8*2+3] >>> b.shape (10, 16) >>> b.view(dtype=float) Traceback (most recent call last): File "", line 1, in b.view(dtype=float) ValueError: new type not compatible with array. However: >>> a = zeros((10,16), dtype=uint8) >>> a.view(dtype=float) array([[ 0., 0.], [ 0., 0.], [ 0., 0.], [ 0., 0.], [ 0., 0.], [ 0., 0.], [ 0., 0.], [ 0., 0.], [ 0., 0.], [ 0., 0.]]) Until we find a way to overcome this, it will be difficult to align rows to particular byte boundaries. It fails even if we make sure the padding is a multiple of the item size: padding = (boundary + _nextpow(boundary, d.itemsize) \ - d.itemsize) * d.itemsize Very annoying.. Using allocators in libraries (e.g. FFTW) would not help either, as NumPy would fail in the same way. Maybe we can force NumPy to do the right thing by hard-coding an array descriptor? We can do this in Cython though, as it supports pointers and double indirection. But it would be like using C. Sturla Molden From cycomanic at gmail.com Thu Mar 12 16:29:59 2009 From: cycomanic at gmail.com (Jochen Schroeder) Date: Fri, 13 Mar 2009 09:29:59 +1300 Subject: [SciPy-user] numpy aligned memory In-Reply-To: <49B924AA.9030509@astraw.com> References: <4d5dd8c20903081003m3322b5c4j21521f88eb647ae3@mail.gmail.com> <49B53ADB.9010809@molden.no> <49B924AA.9030509@astraw.com> Message-ID: <20090312202958.GA23265@jochen.schroeder.phy.auckland.ac.nz> On 12/03/09 08:05, Andrew Straw wrote: > Sturla Molden wrote: > > On 3/8/2009 6:03 PM, Rohit Garg wrote: > > > > > >> http://www.mail-archive.com/numpy-discussion at scipy.org/msg04005.html > >> > >> while googling for numpy memory alignment. I wish to know if anything > >> on that account has come to pass yet? On linux 64 bit platform, can I > >> assume anything beyond the glibc alignment as of now? > >> > > > > If you are willing to waste a few bytes, there is nothing that prevents > > you from ensuring arbitrary alignment manually. You just allocate more > > space than you need (16 bytes for 16 bytes alignment), and return a view > > to a properly aligned segment. 
Something like this: > > > > import numpy as np > > > > def aligned_zeros(shape, boundary=16, dtype=float, order='C'): > > N = np.prod(shape) > > d = np.dtype(dtype) > > tmp = np.zeros(N * d.itemsize + boundary, dtype=np.uint8) > > address = tmp.__array_interface__['data'][0] > > offset = (boundary - address % boundary) % boundary > > return tmp[offset:offset+N]\ > > .view(dtype=d)\ > > .reshape(shape, order=order) > > > > We had questions regarding this for an FFTW interface as well (how to > > use fftw_malloc instead of malloc). It also affect all coding using SIMD > > extensions on x86 (MMX, SSE, SSE2). I don't use PPC so I don't know what > > altivec needs. In any case, should this be in the cookbook? Or even in > > numpy? It seems a bit redundant to answer this question over and over again. > > > > > > Sturla Molden > Sturla, I just tried your example, and I discovered that for a 2D array, > it did not align rows on boundaries -- just the first element of the > first row. My understanding is that for image processing with SIMD this > is desired. For example, Intel IPP allocates images such that each image > row is 32-byte aligned. (I just checked that Framewave does _not_ do > this, so maybe times have changed or maybe Framewave just isn't > optimized in this regard.) > > So, what's your take on having each row aligned? Is this also useful for > FFTW, for example? If so, we should perhaps come up with a better > routine for the cookbook. I don't think fftw_malloc is doing that either (I don't know how it could as it doesn't have any information about the shape of the allocated data), so I doubt fftw will profit from this. That's not saying some other things might profit from this. Cheers Jochen From stefan at sun.ac.za Thu Mar 12 17:15:32 2009 From: stefan at sun.ac.za (=?ISO-8859-1?Q?St=E9fan_van_der_Walt?=) Date: Thu, 12 Mar 2009 23:15:32 +0200 Subject: [SciPy-user] numpy aligned memory In-Reply-To: <49B95144.2090304@molden.no> References: <4d5dd8c20903081003m3322b5c4j21521f88eb647ae3@mail.gmail.com> <49B53ADB.9010809@molden.no> <49B924AA.9030509@astraw.com> <49B95144.2090304@molden.no> Message-ID: <9457e7c80903121415i5bd1341fw4385a132cf74af8b@mail.gmail.com> Hi Sturla, Andrew 2009/3/12 Sturla Molden : > On 3/12/2009 4:05 PM, Andrew Straw wrote: >> So, what's your take on having each row aligned? Is this also useful for >> FFTW, for example? If so, we should perhaps come up with a better >> routine for the cookbook. > > Ok, so here is how it could be done. It fails for a reason ?I'll > attribute to a bug in NumPy. I must be missing some subtlety. Why not just allocate a block of memory, and create an array with the strides adjusted appropriately? Regards St?fan From timmichelsen at gmx-topmail.de Thu Mar 12 17:17:51 2009 From: timmichelsen at gmx-topmail.de (Tim Michelsen) Date: Thu, 12 Mar 2009 22:17:51 +0100 Subject: [SciPy-user] Article(s) on Test Driven Development for Science In-Reply-To: <1cd32cbb0903121002y18020879gd4038cfdff125d26@mail.gmail.com> References: <1cd32cbb0903121002y18020879gd4038cfdff125d26@mail.gmail.com> Message-ID: > I don't think validating the results needs much of a special > discussion besides the regular testing tools, and it would be very > field specific. .... Thank you very much for your extensive answer! Comming from a science and not a programmer background, I am still happy that my code actually produces output ;-) Therefore, using tests is still not a habit for me. But as the code base grows, I think I cannot affort not to have. 
It's simply impossible to monitor all code manually. I hope that the following speech will be made available as PDF: Building tests for large, untested codebases by C. Titus Brown Covering large codebases with automated tests is a challenging and frustrating task that can be made much easier by approaching the problem systematically and choosing good tools. I will discuss a number of approaches and freely available tools that can help people "tame" pre-existing codebases with automated tests, and show the evolution of tests for a complex codebase under these approaches and tools. http://us.pycon.org/2009/conference/keynotes/ Kind regards, Timmie From sturla at molden.no Thu Mar 12 17:18:36 2009 From: sturla at molden.no (Sturla Molden) Date: Thu, 12 Mar 2009 22:18:36 +0100 (CET) Subject: [SciPy-user] numpy aligned memory In-Reply-To: <20090312202958.GA23265@jochen.schroeder.phy.auckland.ac.nz> References: <4d5dd8c20903081003m3322b5c4j21521f88eb647ae3@mail.gmail.com> <49B53ADB.9010809@molden.no> <49B924AA.9030509@astraw.com> <20090312202958.GA23265@jochen.schroeder.phy.auckland.ac.nz> Message-ID: <7ffef50a41c3e805ff0f8ce4ba096755.squirrel@webmail.uio.no> > On 12/03/09 08:05, Andrew Straw wrote: > I don't think fftw_malloc is doing that either (I don't know how it > could as it doesn't have any information about the shape of the > allocated data), so I doubt fftw will profit from this. That's not > saying some other things might profit from this. Here's a version that makes sure the fastest varying version stays aligned if the keyword argument 'imagealign' is True. After a lot of trial and error I got it working. The trick is aligning the very first element, and adding the right amount of padding to each row. The length of each padded row has to be myltiple of the itemsize and the alignment boundary. It seems to work. If you find bugs, please correct. Regards, Sturla Molden -------------- next part -------------- A non-text attachment was scrubbed... Name: alignedarray.py Type: text/x-python Size: 3079 bytes Desc: not available URL: From sturla at molden.no Thu Mar 12 17:19:47 2009 From: sturla at molden.no (Sturla Molden) Date: Thu, 12 Mar 2009 22:19:47 +0100 (CET) Subject: [SciPy-user] numpy aligned memory In-Reply-To: <9457e7c80903121415i5bd1341fw4385a132cf74af8b@mail.gmail.com> References: <4d5dd8c20903081003m3322b5c4j21521f88eb647ae3@mail.gmail.com> <49B53ADB.9010809@molden.no> <49B924AA.9030509@astraw.com> <49B95144.2090304@molden.no> <9457e7c80903121415i5bd1341fw4385a132cf74af8b@mail.gmail.com> Message-ID: <6fdc54088a359bc345f2a373528de38f.squirrel@webmail.uio.no> > I must be missing some subtlety. Why not just allocate a block of > memory, and create an array with the strides adjusted appropriately? Please disregard the noise. I was not thinking clearly. Sturla From sturla at molden.no Thu Mar 12 17:29:47 2009 From: sturla at molden.no (Sturla Molden) Date: Thu, 12 Mar 2009 22:29:47 +0100 (CET) Subject: [SciPy-user] numpy aligned memory In-Reply-To: <7ffef50a41c3e805ff0f8ce4ba096755.squirrel@webmail.uio.no> References: <4d5dd8c20903081003m3322b5c4j21521f88eb647ae3@mail.gmail.com> <49B53ADB.9010809@molden.no> <49B924AA.9030509@astraw.com> <20090312202958.GA23265@jochen.schroeder.phy.auckland.ac.nz> <7ffef50a41c3e805ff0f8ce4ba096755.squirrel@webmail.uio.no> Message-ID: <796a07323c631684a02446e0f3976d1c.squirrel@webmail.uio.no> > It seems to work. If you find bugs, please correct. 
Speaking of which, the first if-else block should probably contain this, so it does not fail on a length 1 tupe as shape. else: if len(shape) == 1: return aligned_empty(shape, boundary=boundary, dtype=dtype, order=order, imagealign=imagealign) Sturla Molden -------------- next part -------------- A non-text attachment was scrubbed... Name: alignedarray.py Type: text/x-python Size: 3248 bytes Desc: not available URL: From mhearne at usgs.gov Thu Mar 12 17:35:42 2009 From: mhearne at usgs.gov (Michael Hearne) Date: Thu, 12 Mar 2009 15:35:42 -0600 Subject: [SciPy-user] storing numpy arrays Message-ID: <49B9802E.9050308@usgs.gov> I have a MySQL database, in which I would like to store a numpy array as a BLOB. I'm trying to find out the easiest/most efficient way to store/extract this data. I looked at scipy.io.write_array, which looked somewhat promising, until I saw the notice that this method is deprecated in favor of numpy.savetxt. I could potentially use scipy.io.savemat/loadmat, but that would require writing to a binary (.mat file), then reading it back in again just to stuff it into the database. I could also use pickle, but I found a bug report somewhere that pickle may do strange things to the array in the process or reading or writing. Does anyone have any suggestions? Thanks, Mike From conor.robinson at gmail.com Thu Mar 12 17:39:15 2009 From: conor.robinson at gmail.com (Conor Robinson) Date: Thu, 12 Mar 2009 14:39:15 -0700 Subject: [SciPy-user] storing numpy arrays In-Reply-To: <49B9802E.9050308@usgs.gov> References: <49B9802E.9050308@usgs.gov> Message-ID: Look into pytables. On Thu, Mar 12, 2009 at 2:35 PM, Michael Hearne wrote: > I have a MySQL database, in which I would like to store a numpy array as > a BLOB. I'm trying to find out the easiest/most efficient way to > store/extract this data. > > I looked at scipy.io.write_array, which looked somewhat promising, until > I saw the notice that this method is deprecated in favor of numpy.savetxt. > > I could potentially use scipy.io.savemat/loadmat, but that would require > writing to a binary (.mat file), then reading it back in again just to > stuff it into the database. > > I could also use pickle, but I found a bug report somewhere that pickle > may do strange things to the array in the process or reading or writing. > > Does anyone have any suggestions? > > Thanks, > > Mike > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user > -- Conor Robinson | Emergent Analytics, Inc. Cell: 530.318.7354 Office: 949.873.5150 Fax: 714.776.0534 --- This email and any files transmitted with it are confidential and intended solely for the use of the individual or entity to whom they are addressed. If you have received this email in error please notify the system manager. This message contains confidential information and is intended only for the individual named. If you are not the named addressee you should not disseminate, distribute or copy this e-mail. Please notify the sender immediately by e-mail if you have received this e-mail by mistake and delete this e-mail from your system. If you are not the intended recipient you are notified that disclosing, copying, distributing or taking any action in reliance on the contents of this information is strictly prohibited. -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From sturla at molden.no Thu Mar 12 17:46:28 2009 From: sturla at molden.no (Sturla Molden) Date: Thu, 12 Mar 2009 22:46:28 +0100 (CET) Subject: [SciPy-user] numpy aligned memory In-Reply-To: <796a07323c631684a02446e0f3976d1c.squirrel@webmail.uio.no> References: <4d5dd8c20903081003m3322b5c4j21521f88eb647ae3@mail.gmail.com> <49B53ADB.9010809@molden.no> <49B924AA.9030509@astraw.com> <20090312202958.GA23265@jochen.schroeder.phy.auckland.ac.nz> <7ffef50a41c3e805ff0f8ce4ba096755.squirrel@webmail.uio.no> <796a07323c631684a02446e0f3976d1c.squirrel@webmail.uio.no> Message-ID: <3a375d3ecb2efc0bcc7ac1fbdcce7816.squirrel@webmail.uio.no> > else: > if len(shape) == 1: > return aligned_empty(shape, boundary=boundary, > dtype=dtype, order=order, imagealign=imagealign) This obviously should be > else: > if len(shape[0]) == 1: > return aligned_empty(shape, boundary=boundary, > dtype=dtype, order=order, imagealign=imagealign) From sturla at molden.no Thu Mar 12 17:49:14 2009 From: sturla at molden.no (Sturla Molden) Date: Thu, 12 Mar 2009 22:49:14 +0100 (CET) Subject: [SciPy-user] numpy aligned memory In-Reply-To: <3a375d3ecb2efc0bcc7ac1fbdcce7816.squirrel@webmail.uio.no> References: <4d5dd8c20903081003m3322b5c4j21521f88eb647ae3@mail.gmail.com> <49B53ADB.9010809@molden.no> <49B924AA.9030509@astraw.com> <20090312202958.GA23265@jochen.schroeder.phy.auckland.ac.nz> <7ffef50a41c3e805ff0f8ce4ba096755.squirrel@webmail.uio.no> <796a07323c631684a02446e0f3976d1c.squirrel@webmail.uio.no> <3a375d3ecb2efc0bcc7ac1fbdcce7816.squirrel@webmail.uio.no> Message-ID: <6dbba492629a92b2a0abb125e112c661.squirrel@webmail.uio.no> Or even... >> else: >> if len(shape) == 1: >> return aligned_empty(shape[0], boundary=boundary, >> dtype=dtype, order=order, imagealign=imagealign) LOL Now lets see if I finally get it correct :) Sturla Molden -------------- next part -------------- A non-text attachment was scrubbed... Name: alignedarray.py Type: text/x-python Size: 3251 bytes Desc: not available URL: From stefan at sun.ac.za Thu Mar 12 17:59:23 2009 From: stefan at sun.ac.za (=?ISO-8859-1?Q?St=E9fan_van_der_Walt?=) Date: Thu, 12 Mar 2009 23:59:23 +0200 Subject: [SciPy-user] write_array deprecated? In-Reply-To: <49B84C6B.10703@caltech.edu> References: <49B84C6B.10703@caltech.edu> Message-ID: <9457e7c80903121459w73fab451w9d925a09187f7e0a@mail.gmail.com> Hi Mike 2009/3/12 Michael Cohen : > Is write-array deprecated wth the new version? > > I get the warning: > /usr/local/python-2.5.4/lib/python2.5/site-packages/numpy/lib/utils.py:110: > DeprecationWarning: write_array is deprecated > ? warnings.warn(str1, DeprecationWarning) > > I am using write_array from scipy.io. > It doesn't tell me what it is being deprecated in favour of. Thanks for noticing! I've clarified the message in r5617. Regards St?fan From kael.fischer at gmail.com Thu Mar 12 18:12:52 2009 From: kael.fischer at gmail.com (Kael Fischer) Date: Thu, 12 Mar 2009 16:12:52 -0600 Subject: [SciPy-user] SciPy_Tutorial sparse examples broken Message-ID: The "new" Tutorial (most of it is from 2006) sparse matrix examples seem to have a problem. I'm using SciPy 0.7.0.b1 on FreeBSD 7.0R amd64. setup like this, per tutorial: ### Asp = sparse.lil_matrix((50000,50000)) Asp.setdiag(ones(50000)) b = arange(0,50000) ### Now Asp.tocsc(), Asp.tocsr(),linsolve.spsolve.(Asp,b) all work, but it you assign to Asp with some fancy indexing along dimension 0 tocsc and tocsr (and hence many other methods) fail. 
E.g.: ### Asp[200:250,30] = 10*random.rand(50) ### or ### Asp[0:3,30] = [1,2,3] ### followed by: ### Asp.tocsr() --------------------------------------------------------------------------- ValueError Traceback (most recent call last) /nfs/r1/home/fishnet/taxBootstrap/ in () /usr/local/lib/python2.5/site-packages/scipy/sparse/lil.pyc in tocsr(self) 425 for x in self.data: 426 data.extend(x) --> 427 data = np.asarray(data, dtype=self.dtype) 428 429 from csr import csr_matrix /usr/local/lib/python2.5/site-packages/numpy/core/numeric.pyc in asarray(a, dtype, order) 228 229 """ --> 230 return array(a, dtype, copy=False, order=order) 231 232 def asanyarray(a, dtype=None, order=None): ValueError: setting an array element with a sequence. ### Note other types of assignment, like fancy indexing on dimension 1 with the same right sides, do not cause this problem. I was going to fix the wiki but this seems like a bug. -Kael Kael Fischer, Ph.D Research Assistant Professor Department of Pathology, Univ. of Utah (801) 213-3766 From mhearne at usgs.gov Thu Mar 12 18:50:08 2009 From: mhearne at usgs.gov (Michael Hearne) Date: Thu, 12 Mar 2009 16:50:08 -0600 Subject: [SciPy-user] storing numpy arrays In-Reply-To: <49B9802E.9050308@usgs.gov> References: <49B9802E.9050308@usgs.gov> Message-ID: <49B991A0.7060409@usgs.gov> Conor - Thank you for your suggestion. I am in fact already using pytables in another part of my application. However, using PyTables for this portion is not a desirable option. Perhaps I should rephrase the question: What is the the most efficient way to serialize a numpy array into a structure that _could_ be written to a disk, or stuffed into a database, etc? Thanks, Mike Hearne Michael Hearne wrote: > I have a MySQL database, in which I would like to store a numpy array > as a BLOB. I'm trying to find out the easiest/most efficient way to > store/extract this data. > > I looked at scipy.io.write_array, which looked somewhat promising, > until I saw the notice that this method is deprecated in favor of > numpy.savetxt. > > I could potentially use scipy.io.savemat/loadmat, but that would > require writing to a binary (.mat file), then reading it back in again > just to stuff it into the database. > > I could also use pickle, but I found a bug report somewhere that > pickle may do strange things to the array in the process or reading or > writing. > > Does anyone have any suggestions? > > Thanks, > > Mike > > From robert.kern at gmail.com Thu Mar 12 18:57:20 2009 From: robert.kern at gmail.com (Robert Kern) Date: Thu, 12 Mar 2009 17:57:20 -0500 Subject: [SciPy-user] storing numpy arrays In-Reply-To: <49B991A0.7060409@usgs.gov> References: <49B9802E.9050308@usgs.gov> <49B991A0.7060409@usgs.gov> Message-ID: <3d375d730903121557q573b1d39ud0d631324bc99b18@mail.gmail.com> On Thu, Mar 12, 2009 at 17:50, Michael Hearne wrote: > Conor - Thank you for your suggestion. ?I am in fact already using > pytables in another part of my application. ?However, using PyTables for > this portion is not a desirable option. > > Perhaps I should rephrase the question: > What is the the most efficient way to serialize a numpy array into a > structure that _could_ be written to a disk, or stuffed into a database, > etc? Use numpy.lib.format.{read,write}_array() to use the NPY format, which should be the most robust and efficient way to do what you want. I have used this to store numpy arrays as MySQL BLOBs before, and it worked quite well. 
-- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From dav at alum.mit.edu Thu Mar 12 19:14:07 2009 From: dav at alum.mit.edu (Dav Clark) Date: Thu, 12 Mar 2009 16:14:07 -0700 Subject: [SciPy-user] storing numpy arrays In-Reply-To: <3d375d730903121557q573b1d39ud0d631324bc99b18@mail.gmail.com> References: <49B9802E.9050308@usgs.gov> <49B991A0.7060409@usgs.gov> <3d375d730903121557q573b1d39ud0d631324bc99b18@mail.gmail.com> Message-ID: <92E12C13-9A37-4359-8768-7DF2B1874BE3@alum.mit.edu> On Mar 12, 2009, at 3:57 PM, Robert Kern wrote: > On Thu, Mar 12, 2009 at 17:50, Michael Hearne > wrote: >> Conor - Thank you for your suggestion. I am in fact already using >> pytables in another part of my application. However, using >> PyTables for >> this portion is not a desirable option. >> >> Perhaps I should rephrase the question: >> What is the the most efficient way to serialize a numpy array into a >> structure that _could_ be written to a disk, or stuffed into a >> database, >> etc? > > Use numpy.lib.format.{read,write}_array() to use the NPY format, which > should be the most robust and efficient way to do what you want. I > have used this to store numpy arrays as MySQL BLOBs before, and it > worked quite well. It sounds like you are trying to avoid writing a file in order to get your blob? I don't know if you could provide a "file-like" object straight to your database for write_array. You could certainly use the dumps function, which returns a pickled array (which is not as efficient), or maybe do something clever storing the array.data (a python "buffer"), array.dtype.str and array.shape attributes? That could leave you with a result that works across languages. Is there a more canonical "modern" (a la np.lib.format.write_array) which is equivalent to the old dumps method? Cheers, Dav From robert.kern at gmail.com Thu Mar 12 19:19:26 2009 From: robert.kern at gmail.com (Robert Kern) Date: Thu, 12 Mar 2009 18:19:26 -0500 Subject: [SciPy-user] storing numpy arrays In-Reply-To: <92E12C13-9A37-4359-8768-7DF2B1874BE3@alum.mit.edu> References: <49B9802E.9050308@usgs.gov> <49B991A0.7060409@usgs.gov> <3d375d730903121557q573b1d39ud0d631324bc99b18@mail.gmail.com> <92E12C13-9A37-4359-8768-7DF2B1874BE3@alum.mit.edu> Message-ID: <3d375d730903121619y2816f776w7dbf385a7dc1c448@mail.gmail.com> On Thu, Mar 12, 2009 at 18:14, Dav Clark wrote: > On Mar 12, 2009, at 3:57 PM, Robert Kern wrote: > >> On Thu, Mar 12, 2009 at 17:50, Michael Hearne >> wrote: >>> Conor - Thank you for your suggestion. ?I am in fact already using >>> pytables in another part of my application. ?However, using >>> PyTables for >>> this portion is not a desirable option. >>> >>> Perhaps I should rephrase the question: >>> What is the the most efficient way to serialize a numpy array into a >>> structure that _could_ be written to a disk, or stuffed into a >>> database, >>> etc? >> >> Use numpy.lib.format.{read,write}_array() to use the NPY format, which >> should be the most robust and efficient way to do what you want. I >> have used this to store numpy arrays as MySQL BLOBs before, and it >> worked quite well. > > It sounds like you are trying to avoid writing a file in order to get > your blob? ?I don't know if you could provide a "file-like" object > straight to your database for write_array. No you can't. Just use cStringIO.StringIO() to collect the string. This is standard idiom. 
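A minimal sketch of this idiom (Python 2, cStringIO plus numpy.lib.format) might look as follows; the array and variable names are illustrative only, and binding the resulting byte string to a BLOB column is left to whatever database layer is in use:

import cStringIO
import numpy
import numpy.lib.format

x = numpy.random.random(4)

# write side: collect the NPY bytes in an in-memory buffer
buf = cStringIO.StringIO()
numpy.lib.format.write_array(buf, x)
blob = buf.getvalue()    # byte string, ready to be bound to a BLOB column

# read side: wrap the stored bytes in a fresh buffer and parse them back
y = numpy.lib.format.read_array(cStringIO.StringIO(blob))
assert (x == y).all()
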
>?You could certainly use > the dumps function, which returns a pickled array (which is not as > efficient), or maybe do something clever storing the array.data (a > python "buffer"), array.dtype.str and array.shape attributes? The NPY format is just such a clever storage. -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From simpson at math.toronto.edu Thu Mar 12 19:38:15 2009 From: simpson at math.toronto.edu (Gideon Simpson) Date: Thu, 12 Mar 2009 19:38:15 -0400 Subject: [SciPy-user] cblas warning on os x Message-ID: I only just noticed the two test warnings: WARNING: cblas module is empty ----------- See scipy/INSTALL.txt for troubleshooting. Notes: * If atlas library is not found by numpy/distutils/system_info.py, then scipy uses fblas instead of cblas. WARNING: clapack module is empty ----------- See scipy/INSTALL.txt for troubleshooting. Notes: * If atlas library is not found by numpy/distutils/system_info.py, then scipy uses flapack instead of clapack. I thought OS X's vecLib provided a cblas and lapack. Is this not the case? -gideon From bsouthey at gmail.com Thu Mar 12 22:33:17 2009 From: bsouthey at gmail.com (Bruce Southey) Date: Thu, 12 Mar 2009 21:33:17 -0500 Subject: [SciPy-user] Article(s) on Test Driven Development for Science In-Reply-To: References: Message-ID: Really can not to what Josef said. On Thu, Mar 12, 2009 at 8:13 AM, Timmie wrote: > Hello, > from many blogs, in the book "Expert Python Programming" and in the code I read > about test driven development. > It's said to help preventing future breakage or failures of the code. > > Is see on major diffuculty when adopting this for science scripts: > > In Science we not only have to control the program flow but also to validate the > output. > I think such a validation needs to be included in tests. I would presume that the code was written to give the correct results in the first place. > > I may change something in my code and still pass tests from the software side I thought that it is what you hope happens > but the result data may be totally wrong. In that case there are really a few reasons: 1) the test is wrong such as not allowing sufficient iterations or is too lax in setting convergence so the test output is actually incorrect but code would have given the correct answer if given say more iterations. 2) your original code had a bug like bad logic 3) the algorithm does not work for that data or may need more iterations. 4) the data is work or inappropriate - may not have sufficient information or numerical issues to do what you want. > > Are there already concepts for such testing? Writing even more tests that cover any special corner cases. > > The tests I have seen so far play mostely with generated random data. > > Are there presentations or articles about this? You might find some references under software quality. > Regards, > Timmie In Numpy there is usually a request for people to write new tests for the corner cases that get found or any unexpected behavior. I am also sure that people will ask or file bug reports when they try various functions and do not get the expected answer. More fundamental are tests that really test accuracy by giving complex or difficult problems to solve (not just to show the errors in Microsoft Excel). I only know statistics and I know there some datasets and models that really test software. 
I do not recall the papers but one attempt is The Statistical Reference Datasets Project: http://www.itl.nist.gov/div898/strd/ Regards Bruce From wnbell at gmail.com Fri Mar 13 00:35:29 2009 From: wnbell at gmail.com (Nathan Bell) Date: Fri, 13 Mar 2009 00:35:29 -0400 Subject: [SciPy-user] SciPy_Tutorial sparse examples broken In-Reply-To: References: Message-ID: On Thu, Mar 12, 2009 at 6:12 PM, Kael Fischer wrote: > The "new" Tutorial (most of it is from 2006) sparse matrix examples > seem to have a problem. ?I'm using SciPy 0.7.0.b1 on FreeBSD 7.0R > amd64. > > setup like this, per tutorial: > ### > Asp = sparse.lil_matrix((50000,50000)) > Asp.setdiag(ones(50000)) > b = arange(0,50000) > ### > > Now Asp.tocsc(), Asp.tocsr(),linsolve.spsolve.(Asp,b) all work, but it > you assign to Asp with some fancy indexing along dimension 0 tocsc and > tocsr (and hence many other methods) fail. ?E.g.: > ### > Asp[200:250,30] = 10*random.rand(50) > ### or ### > Asp[0:3,30] = [1,2,3] > ### > > followed by: > ### > Asp.tocsr() > --------------------------------------------------------------------------- > ValueError ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ?Traceback (most recent call last) > This seems to be related to ticket #226: http://projects.scipy.org/scipy/ticket/226 I haven't had time to look at lil_matrix extensively, so I can't say what the fundamental problem is here. It's clear that fancy indexing/slicing with lil_matrix are broken though. -- Nathan Bell wnbell at gmail.com http://graphics.cs.uiuc.edu/~wnbell/ From mhearne at usgs.gov Fri Mar 13 12:33:30 2009 From: mhearne at usgs.gov (Michael Hearne) Date: Fri, 13 Mar 2009 10:33:30 -0600 Subject: [SciPy-user] storing numpy arrays In-Reply-To: <3d375d730903121557q573b1d39ud0d631324bc99b18@mail.gmail.com> References: <49B9802E.9050308@usgs.gov> <49B991A0.7060409@usgs.gov> <3d375d730903121557q573b1d39ud0d631324bc99b18@mail.gmail.com> Message-ID: <49BA8ADA.2030506@usgs.gov> Robert - Thanks. I hadn't run across those methods. However... I tried the script below (using numpy version "1.1.0.dev5077"), and got the following output: ####output#### [ 0.05867671 0.71670412 0.99391546 0.02065827] ?NUMPYF{'descr': ' y = numpy.lib.format.read_array(output) File "/Library/Python/2.5/site-packages/numpy-1.1.0.dev5077-py2.5-macosx-10.3-i386.egg/numpy/lib/format.py", line 302, in read_array version = read_magic(fp) File "/Library/Python/2.5/site-packages/numpy-1.1.0.dev5077-py2.5-macosx-10.3-i386.egg/numpy/lib/format.py", line 105, in read_magic raise ValueError("could not read %d characters for the magic string; got %r" % (MAGIC_LEN, magic_str)) ValueError: could not read 8 characters for the magic string; got '' ####output#### ######script##### #!/usr/bin/python from numpy import array from numpy.random import random import numpy.lib.format import StringIO output = StringIO.StringIO() x = random(4) print x numpy.lib.format.write_array(output, x) print output.getvalue() y = numpy.lib.format.read_array(output) print y ######script##### Robert Kern wrote: > On Thu, Mar 12, 2009 at 17:50, Michael Hearne wrote: > >> Conor - Thank you for your suggestion. I am in fact already using >> pytables in another part of my application. However, using PyTables for >> this portion is not a desirable option. >> >> Perhaps I should rephrase the question: >> What is the the most efficient way to serialize a numpy array into a >> structure that _could_ be written to a disk, or stuffed into a database, >> etc? 
>> > > Use numpy.lib.format.{read,write}_array() to use the NPY format, which > should be the most robust and efficient way to do what you want. I > have used this to store numpy arrays as MySQL BLOBs before, and it > worked quite well. > > From josef.pktd at gmail.com Fri Mar 13 12:50:44 2009 From: josef.pktd at gmail.com (josef.pktd at gmail.com) Date: Fri, 13 Mar 2009 12:50:44 -0400 Subject: [SciPy-user] storing numpy arrays In-Reply-To: <49BA8ADA.2030506@usgs.gov> References: <49B9802E.9050308@usgs.gov> <49B991A0.7060409@usgs.gov> <3d375d730903121557q573b1d39ud0d631324bc99b18@mail.gmail.com> <49BA8ADA.2030506@usgs.gov> Message-ID: <1cd32cbb0903130950y7345cbd8s608d45d958fd5dac@mail.gmail.com> use output.seek(0) (see below), then you get this result, at least on winxp [ 0.22661754 0.83012122 0.10411541 0.99754672] ?NUMPY [ 0.22661754 0.83012122 0.10411541 0.99754672] Josef On Fri, Mar 13, 2009 at 12:33 PM, Michael Hearne wrote: > Robert - Thanks. ?I hadn't run across those methods. > > However... > > I tried the script below (using numpy version "1.1.0.dev5077"), and got > the following output: > > ####output#### > [ 0.05867671 ?0.71670412 ?0.99391546 ?0.02065827] > ?NUMPYF{'descr': ' pm??? > ???{=????=??'??? 9m?p'?? > Traceback (most recent call last): > ?File "./testme.py", line 13, in > ? ?y = numpy.lib.format.read_array(output) > ?File > "/Library/Python/2.5/site-packages/numpy-1.1.0.dev5077-py2.5-macosx-10.3-i386.egg/numpy/lib/format.py", > line 302, in read_array > ? ?version = read_magic(fp) > ?File > "/Library/Python/2.5/site-packages/numpy-1.1.0.dev5077-py2.5-macosx-10.3-i386.egg/numpy/lib/format.py", > line 105, in read_magic > ? ?raise ValueError("could not read %d characters for the magic string; > got %r" % (MAGIC_LEN, magic_str)) > ValueError: could not read 8 characters for the magic string; got '' > ####output#### > > ######script##### > #!/usr/bin/python > > from numpy import array > from numpy.random import random > import numpy.lib.format > import StringIO > > output = StringIO.StringIO() > x = random(4) > print x > numpy.lib.format.write_array(output, x) > print output.getvalue() output.seek(0) <-------------------- add this > y = numpy.lib.format.read_array(output) > print y > ######script##### > > Robert Kern wrote: >> On Thu, Mar 12, 2009 at 17:50, Michael Hearne wrote: >> >>> Conor - Thank you for your suggestion. ?I am in fact already using >>> pytables in another part of my application. ?However, using PyTables for >>> this portion is not a desirable option. >>> >>> Perhaps I should rephrase the question: >>> What is the the most efficient way to serialize a numpy array into a >>> structure that _could_ be written to a disk, or stuffed into a database, >>> etc? >>> >> >> Use numpy.lib.format.{read,write}_array() to use the NPY format, which >> should be the most robust and efficient way to do what you want. I >> have used this to store numpy arrays as MySQL BLOBs before, and it >> worked quite well. >> >> > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user > From mhearne at usgs.gov Fri Mar 13 12:53:59 2009 From: mhearne at usgs.gov (Michael Hearne) Date: Fri, 13 Mar 2009 10:53:59 -0600 Subject: [SciPy-user] [Fwd: Re: storing numpy arrays] Message-ID: <49BA8FA7.1010201@usgs.gov> All: Never mind - I figured out I needed to seek to the beginning of the file-like object I created! 
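For completeness, a compact version of the corrected round trip, rewinding the buffer before calling read_array, might look like this (variable names illustrative):

import cStringIO
import numpy
import numpy.lib.format

x = numpy.random.random(4)
output = cStringIO.StringIO()
numpy.lib.format.write_array(output, x)
output.seek(0)    # rewind, otherwise read_array cannot find the magic string
y = numpy.lib.format.read_array(output)
assert (x == y).all()
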
--Mike -------- Original Message -------- Subject: Re: [SciPy-user] storing numpy arrays Date: Fri, 13 Mar 2009 10:33:30 -0600 From: Michael Hearne Reply-To: SciPy Users List To: SciPy Users List References: <49B9802E.9050308 at usgs.gov> <49B991A0.7060409 at usgs.gov> <3d375d730903121557q573b1d39ud0d631324bc99b18 at mail.gmail.com> Robert - Thanks. I hadn't run across those methods. However... I tried the script below (using numpy version "1.1.0.dev5077"), and got the following output: ####output#### [ 0.05867671 0.71670412 0.99391546 0.02065827] ?NUMPYF{'descr': ' y = numpy.lib.format.read_array(output) File "/Library/Python/2.5/site-packages/numpy-1.1.0.dev5077-py2.5-macosx-10.3-i386.egg/numpy/lib/format.py", line 302, in read_array version = read_magic(fp) File "/Library/Python/2.5/site-packages/numpy-1.1.0.dev5077-py2.5-macosx-10.3-i386.egg/numpy/lib/format.py", line 105, in read_magic raise ValueError("could not read %d characters for the magic string; got %r" % (MAGIC_LEN, magic_str)) ValueError: could not read 8 characters for the magic string; got '' ####output#### ######script##### #!/usr/bin/python from numpy import array from numpy.random import random import numpy.lib.format import StringIO output = StringIO.StringIO() x = random(4) print x numpy.lib.format.write_array(output, x) print output.getvalue() y = numpy.lib.format.read_array(output) print y ######script##### Robert Kern wrote: > On Thu, Mar 12, 2009 at 17:50, Michael Hearne wrote: > >> Conor - Thank you for your suggestion. I am in fact already using >> pytables in another part of my application. However, using PyTables for >> this portion is not a desirable option. >> >> Perhaps I should rephrase the question: >> What is the the most efficient way to serialize a numpy array into a >> structure that _could_ be written to a disk, or stuffed into a database, >> etc? >> > > Use numpy.lib.format.{read,write}_array() to use the NPY format, which > should be the most robust and efficient way to do what you want. I > have used this to store numpy arrays as MySQL BLOBs before, and it > worked quite well. > > _______________________________________________ SciPy-user mailing list SciPy-user at scipy.org http://mail.scipy.org/mailman/listinfo/scipy-user From ellisonbg.net at gmail.com Fri Mar 13 13:36:40 2009 From: ellisonbg.net at gmail.com (Brian Granger) Date: Fri, 13 Mar 2009 10:36:40 -0700 Subject: [SciPy-user] storing numpy arrays In-Reply-To: <49B9802E.9050308@usgs.gov> References: <49B9802E.9050308@usgs.gov> Message-ID: <6ce0ac130903131036qbfb531la16d9635a452b9d3@mail.gmail.com> Another option that could be better is to simply store the name of a file containing the array in the db and then just write the array to the file (use the array hash as a filename) using the NPY format. Depending on the size and number of arrays, there could be a performance benefit to this approach compared to storing the full array in the db. Brian On Thu, Mar 12, 2009 at 2:35 PM, Michael Hearne wrote: > I have a MySQL database, in which I would like to store a numpy array as > a BLOB. ?I'm trying to find out the easiest/most efficient way to > store/extract this data. > > I looked at scipy.io.write_array, which looked somewhat promising, until > I saw the notice that this method is deprecated in favor of numpy.savetxt. > > I could potentially use scipy.io.savemat/loadmat, but that would require > writing to a binary (.mat file), then reading it back in again just to > stuff it into the database. 
> > I could also use pickle, but I found a bug report somewhere that pickle > may do strange things to the array in the process or reading or writing. > > Does anyone have any suggestions? > > Thanks, > > Mike > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user > From robert.kern at gmail.com Fri Mar 13 14:15:39 2009 From: robert.kern at gmail.com (Robert Kern) Date: Fri, 13 Mar 2009 13:15:39 -0500 Subject: [SciPy-user] [Fwd: Re: storing numpy arrays] In-Reply-To: <49BA8FA7.1010201@usgs.gov> References: <49BA8FA7.1010201@usgs.gov> Message-ID: <3d375d730903131115y66d4afd4qe969a287320500f4@mail.gmail.com> On Fri, Mar 13, 2009 at 11:53, Michael Hearne wrote: > All: Never mind - I figured out I needed to seek to the beginning of the > file-like object I created! Also, it's worth using a cStringIO.StringIO instead of StringIO.StringIO. -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From robert.kern at gmail.com Fri Mar 13 14:22:40 2009 From: robert.kern at gmail.com (Robert Kern) Date: Fri, 13 Mar 2009 13:22:40 -0500 Subject: [SciPy-user] storing numpy arrays In-Reply-To: <6ce0ac130903131036qbfb531la16d9635a452b9d3@mail.gmail.com> References: <49B9802E.9050308@usgs.gov> <6ce0ac130903131036qbfb531la16d9635a452b9d3@mail.gmail.com> Message-ID: <3d375d730903131122y36c3cbc0tabf637b204d562b@mail.gmail.com> On Fri, Mar 13, 2009 at 12:36, Brian Granger wrote: > Another option that could be better is to simply store the name of a > file containing the array in the db and then just write the array to > the file (use the array hash as a filename) using the NPY format. > Depending on the size and number of arrays, there could be a > performance benefit to this approach compared to storing the full > array in the db. That works well if the database is on the same system as the program. If you are connecting to it remotely, you will have to write a server to expose those files over the network. A simple HTTP server would probably suffice. My use case was lots of smallish arrays (~1000 elements) rather than fewer big arrays, so BLOB storage is adequate, I think. It would be great if the database itself could do that transparently, though. You would declare the column as a BLOBFILE, and it would only store a reference to a file in its workspace. Ah well. -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From fred.mailhot at gmail.com Sat Mar 14 00:55:31 2009 From: fred.mailhot at gmail.com (Fred Mailhot) Date: Sat, 14 Mar 2009 00:55:31 -0400 Subject: [SciPy-user] Install newest version of scipy to Ubuntu 8.04 In-Reply-To: <49B8A4CC.7070602@astraw.com> References: <49B88A44.6090907@ar.media.kyoto-u.ac.jp> <49B8A4CC.7070602@astraw.com> Message-ID: 2009/3/12 Andrew Straw : > David Cournapeau wrote: >> Joseph Johnson wrote: >>> Hi, >>> >>> I was wondering if there is an apt source for Ubuntu 8.04? >>> >>> I did find: >>> https://edge.launchpad.net/~scipy/+archive/ppa >>> but its for 8.10 >> >> No, only Ubuntu 8.10 is supported. 
Supporting both 8.04 and 8.10 is too >> time consuming, > > That being said, you can try my packages: > > http://debs.astraw.com/hardy I'm using Andrew's packages now with 8.04 and have had exactly zero problems with them, in fact, they've worked well for the last 2 or 3 versions of Ubuntu as well (thanks Andrew!). My $0.02, Fred. From tioguerra at gmail.com Sat Mar 14 12:04:24 2009 From: tioguerra at gmail.com (Rodrigo Guerra) Date: Sun, 15 Mar 2009 01:04:24 +0900 Subject: [SciPy-user] Mac OS X 10.5 PPC Python 2.5.1 vecLib.framework error Message-ID: <817c9f950903140904k59a74e1cxe642b7c09cbf9948@mail.gmail.com> Hi, I am trying to compile SciPy from SVN r5623 on a Mac PPC G5 running Leopard, Python 2.5.1 (that came bundled with the system), gcc 4.0.1, gfortran 4.2.3, NumPy 1.2.1 from sources. Here is the snippet of the output at the point the first error occurs: creating build/temp.macosx-10.5-ppc-2.5/scipy/sparse/linalg/eigen creating build/temp.macosx-10.5-ppc-2.5/scipy/sparse/linalg/eigen/arpack creating build/temp.macosx-10.5-ppc-2.5/scipy/sparse/linalg/eigen/arpack/ARPACK creating build/temp.macosx-10.5-ppc-2.5/scipy/sparse/linalg/eigen/arpack/ARPACK/FWRAPPERS compile options: '-Iscipy/sparse/linalg/eigen/arpack/ARPACK/SRC -c' gcc: scipy/sparse/linalg/eigen/arpack/ARPACK/FWRAPPERS/veclib_cabi_c.c In file included from /System/Library/Frameworks/vecLib.framework/Headers/vecLib.h:61, from scipy/sparse/linalg/eigen/arpack/ARPACK/FWRAPPERS/veclib_cabi_c.c:1: /System/Library/Frameworks/vecLib.framework/Headers/vForce.h:33: error: syntax error before 'float' /System/Library/Frameworks/vecLib.framework/Headers/vForce.h:34: error: syntax error before 'double' In file included from /System/Library/Frameworks/vecLib.framework/Headers/vecLib.h:61, from scipy/sparse/linalg/eigen/arpack/ARPACK/FWRAPPERS/veclib_cabi_c.c:1: /System/Library/Frameworks/vecLib.framework/Headers/vForce.h:145: error: syntax error before '*' token /System/Library/Frameworks/vecLib.framework/Headers/vForce.h:146: error: syntax error before '*' token Any help appreciated. Cheers, Guerra From cournape at gmail.com Sat Mar 14 12:24:46 2009 From: cournape at gmail.com (David Cournapeau) Date: Sun, 15 Mar 2009 01:24:46 +0900 Subject: [SciPy-user] Mac OS X 10.5 PPC Python 2.5.1 vecLib.framework error In-Reply-To: <817c9f950903140904k59a74e1cxe642b7c09cbf9948@mail.gmail.com> References: <817c9f950903140904k59a74e1cxe642b7c09cbf9948@mail.gmail.com> Message-ID: <5b8d13220903140924w68bf4ecxd870cbc24098d51a@mail.gmail.com> On Sun, Mar 15, 2009 at 1:04 AM, Rodrigo Guerra wrote: > Hi, > > I am trying to compile SciPy from SVN r5623 on a Mac PPC G5 running > Leopard, Python 2.5.1 (that came bundled with the system), gcc 4.0.1, > gfortran 4.2.3, NumPy 1.2.1 from sources. > > Here is the snippet of the output at the point the first error occurs: > > creating build/temp.macosx-10.5-ppc-2.5/scipy/sparse/linalg/eigen > creating build/temp.macosx-10.5-ppc-2.5/scipy/sparse/linalg/eigen/arpack > creating build/temp.macosx-10.5-ppc-2.5/scipy/sparse/linalg/eigen/arpack/ARPACK > creating build/temp.macosx-10.5-ppc-2.5/scipy/sparse/linalg/eigen/arpack/ARPACK/FWRAPPERS > compile options: '-Iscipy/sparse/linalg/eigen/arpack/ARPACK/SRC -c' > gcc: scipy/sparse/linalg/eigen/arpack/ARPACK/FWRAPPERS/veclib_cabi_c.c > In file included from > /System/Library/Frameworks/vecLib.framework/Headers/vecLib.h:61, > ? ? ? ? ? ? ? ? 
from > scipy/sparse/linalg/eigen/arpack/ARPACK/FWRAPPERS/veclib_cabi_c.c:1: > /System/Library/Frameworks/vecLib.framework/Headers/vForce.h:33: > error: syntax error before 'float' > /System/Library/Frameworks/vecLib.framework/Headers/vForce.h:34: > error: syntax error before 'double' > In file included from > /System/Library/Frameworks/vecLib.framework/Headers/vecLib.h:61, > ? ? ? ? ? ? ? ? from > scipy/sparse/linalg/eigen/arpack/ARPACK/FWRAPPERS/veclib_cabi_c.c:1: > /System/Library/Frameworks/vecLib.framework/Headers/vForce.h:145: > error: syntax error before '*' token > /System/Library/Frameworks/vecLib.framework/Headers/vForce.h:146: > error: syntax error before '*' token This error looks strange - it is as if the compiler did not recognize the complex typedef, at least if we have the same vForce.h file. Lines 33 and 34 read as : typedef complex float __float_complex_t; typedef complex double __double_complex_t; Do you have the same content ? If so, what does compiling the following return (as a C file): #include typedef complex float __foo; int main(void) { return 0; } cheers, David From tioguerra at gmail.com Sat Mar 14 12:35:36 2009 From: tioguerra at gmail.com (Rodrigo Guerra) Date: Sun, 15 Mar 2009 01:35:36 +0900 Subject: [SciPy-user] Mac OS X 10.5 PPC Python 2.5.1 vecLib.framework error In-Reply-To: <5b8d13220903140924w68bf4ecxd870cbc24098d51a@mail.gmail.com> References: <817c9f950903140904k59a74e1cxe642b7c09cbf9948@mail.gmail.com> <5b8d13220903140924w68bf4ecxd870cbc24098d51a@mail.gmail.com> Message-ID: <817c9f950903140935u628ddf2bk8c84eae1faf348c@mail.gmail.com> Hi David, Lines 33 and 34 look exactly the same. When I try to compile the program I get the following error: $ gcc test.c -o test test.c:3: error: syntax error before 'float' Cheers, Guerra On Sun, Mar 15, 2009 at 1:24 AM, David Cournapeau wrote: > On Sun, Mar 15, 2009 at 1:04 AM, Rodrigo Guerra wrote: >> Hi, >> >> I am trying to compile SciPy from SVN r5623 on a Mac PPC G5 running >> Leopard, Python 2.5.1 (that came bundled with the system), gcc 4.0.1, >> gfortran 4.2.3, NumPy 1.2.1 from sources. >> >> Here is the snippet of the output at the point the first error occurs: >> >> creating build/temp.macosx-10.5-ppc-2.5/scipy/sparse/linalg/eigen >> creating build/temp.macosx-10.5-ppc-2.5/scipy/sparse/linalg/eigen/arpack >> creating build/temp.macosx-10.5-ppc-2.5/scipy/sparse/linalg/eigen/arpack/ARPACK >> creating build/temp.macosx-10.5-ppc-2.5/scipy/sparse/linalg/eigen/arpack/ARPACK/FWRAPPERS >> compile options: '-Iscipy/sparse/linalg/eigen/arpack/ARPACK/SRC -c' >> gcc: scipy/sparse/linalg/eigen/arpack/ARPACK/FWRAPPERS/veclib_cabi_c.c >> In file included from >> /System/Library/Frameworks/vecLib.framework/Headers/vecLib.h:61, >> ? ? ? ? ? ? ? ? from >> scipy/sparse/linalg/eigen/arpack/ARPACK/FWRAPPERS/veclib_cabi_c.c:1: >> /System/Library/Frameworks/vecLib.framework/Headers/vForce.h:33: >> error: syntax error before 'float' >> /System/Library/Frameworks/vecLib.framework/Headers/vForce.h:34: >> error: syntax error before 'double' >> In file included from >> /System/Library/Frameworks/vecLib.framework/Headers/vecLib.h:61, >> ? ? ? ? ? ? ? ? 
from >> scipy/sparse/linalg/eigen/arpack/ARPACK/FWRAPPERS/veclib_cabi_c.c:1: >> /System/Library/Frameworks/vecLib.framework/Headers/vForce.h:145: >> error: syntax error before '*' token >> /System/Library/Frameworks/vecLib.framework/Headers/vForce.h:146: >> error: syntax error before '*' token > > This error looks strange - it is as if the compiler did not recognize > the complex typedef, at least if we have the same vForce.h file. Lines > 33 and 34 read as : > > typedef complex float __float_complex_t; > typedef complex double __double_complex_t; > > Do you have the same content ? If so, what does compiling the > following return (as a C file): > > #include > > typedef complex float __foo; > > int main(void) > { > ? ?return 0; > } > > cheers, > > David > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user > From cournape at gmail.com Sat Mar 14 12:46:47 2009 From: cournape at gmail.com (David Cournapeau) Date: Sun, 15 Mar 2009 01:46:47 +0900 Subject: [SciPy-user] Mac OS X 10.5 PPC Python 2.5.1 vecLib.framework error In-Reply-To: <817c9f950903140935u628ddf2bk8c84eae1faf348c@mail.gmail.com> References: <817c9f950903140904k59a74e1cxe642b7c09cbf9948@mail.gmail.com> <5b8d13220903140924w68bf4ecxd870cbc24098d51a@mail.gmail.com> <817c9f950903140935u628ddf2bk8c84eae1faf348c@mail.gmail.com> Message-ID: <5b8d13220903140946t6ddc6244u3557f09b7d70f975@mail.gmail.com> On Sun, Mar 15, 2009 at 1:35 AM, Rodrigo Guerra wrote: > Hi David, > > Lines 33 and 34 look exactly the same. > > When I try to compile the program I get the following error: > > $ gcc test.c -o test > test.c:3: error: syntax error before 'float' Ok, so the error is indeed the unrecognized complex typedef.Does the following compile ? int main(void) { _Complex a; } From tioguerra at gmail.com Sat Mar 14 12:53:07 2009 From: tioguerra at gmail.com (Rodrigo Guerra) Date: Sun, 15 Mar 2009 01:53:07 +0900 Subject: [SciPy-user] Mac OS X 10.5 PPC Python 2.5.1 vecLib.framework error In-Reply-To: <5b8d13220903140946t6ddc6244u3557f09b7d70f975@mail.gmail.com> References: <817c9f950903140904k59a74e1cxe642b7c09cbf9948@mail.gmail.com> <5b8d13220903140924w68bf4ecxd870cbc24098d51a@mail.gmail.com> <817c9f950903140935u628ddf2bk8c84eae1faf348c@mail.gmail.com> <5b8d13220903140946t6ddc6244u3557f09b7d70f975@mail.gmail.com> Message-ID: <817c9f950903140953r4890c9aehf365b42fa8356a08@mail.gmail.com> Hi David, It seems the machine had another complex.h header (from Numerical Recipes) installed in usr/local/include. I guess the compiler was including that one instead of the system's one in /usr/include. I moved that file aside and both your tests compiled fine. Now I am trying to compile SciPy again. Thanks for the help so far! Guerra On Sun, Mar 15, 2009 at 1:46 AM, David Cournapeau wrote: > On Sun, Mar 15, 2009 at 1:35 AM, Rodrigo Guerra wrote: >> Hi David, >> >> Lines 33 and 34 look exactly the same. >> >> When I try to compile the program I get the following error: >> >> $ gcc test.c -o test >> test.c:3: error: syntax error before 'float' > > Ok, so the error is indeed the unrecognized complex typedef.Does the > following compile ? 
> > int main(void) > { > ?_Complex a; > } > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user > From cournape at gmail.com Sat Mar 14 13:07:10 2009 From: cournape at gmail.com (David Cournapeau) Date: Sun, 15 Mar 2009 02:07:10 +0900 Subject: [SciPy-user] Mac OS X 10.5 PPC Python 2.5.1 vecLib.framework error In-Reply-To: <817c9f950903140953r4890c9aehf365b42fa8356a08@mail.gmail.com> References: <817c9f950903140904k59a74e1cxe642b7c09cbf9948@mail.gmail.com> <5b8d13220903140924w68bf4ecxd870cbc24098d51a@mail.gmail.com> <817c9f950903140935u628ddf2bk8c84eae1faf348c@mail.gmail.com> <5b8d13220903140946t6ddc6244u3557f09b7d70f975@mail.gmail.com> <817c9f950903140953r4890c9aehf365b42fa8356a08@mail.gmail.com> Message-ID: <5b8d13220903141007v32174bf0nc82625f3c4e7ae5a@mail.gmail.com> On Sun, Mar 15, 2009 at 1:53 AM, Rodrigo Guerra wrote: > Hi David, > > It seems the machine had another complex.h header (from Numerical > Recipes) installed in usr/local/include. Duh, I should have thought about this first. Glad it is working, David From tioguerra at gmail.com Sat Mar 14 13:11:09 2009 From: tioguerra at gmail.com (Rodrigo Guerra) Date: Sun, 15 Mar 2009 02:11:09 +0900 Subject: [SciPy-user] Mac OS X 10.5 PPC Python 2.5.1 vecLib.framework error In-Reply-To: <5b8d13220903141007v32174bf0nc82625f3c4e7ae5a@mail.gmail.com> References: <817c9f950903140904k59a74e1cxe642b7c09cbf9948@mail.gmail.com> <5b8d13220903140924w68bf4ecxd870cbc24098d51a@mail.gmail.com> <817c9f950903140935u628ddf2bk8c84eae1faf348c@mail.gmail.com> <5b8d13220903140946t6ddc6244u3557f09b7d70f975@mail.gmail.com> <817c9f950903140953r4890c9aehf365b42fa8356a08@mail.gmail.com> <5b8d13220903141007v32174bf0nc82625f3c4e7ae5a@mail.gmail.com> Message-ID: <817c9f950903141011o5b6b2205ia7fa2194844f8c69@mail.gmail.com> Hi, It did work now -- just to put a happy ending in the thread. :-) Thanks! On Sun, Mar 15, 2009 at 2:07 AM, David Cournapeau wrote: > On Sun, Mar 15, 2009 at 1:53 AM, Rodrigo Guerra wrote: >> Hi David, >> >> It seems the machine had another complex.h header (from Numerical >> Recipes) installed in usr/local/include. > > Duh, I should have thought about this first. Glad it is working, > > David > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user > From nwagner at iam.uni-stuttgart.de Sat Mar 14 16:21:46 2009 From: nwagner at iam.uni-stuttgart.de (Nils Wagner) Date: Sat, 14 Mar 2009 21:21:46 +0100 Subject: [SciPy-user] Eigenvalues of the Floquet matrix Message-ID: Hi all, I am interested in the stability of time periodic ODE's of the form \dot{y} = A(t) y, A(t)=A(t+T) (1) I have used scipy.integrate.ode to compute the eigenvalues of the Floquet matrix. See attachment for details. The eigenvalues \lambda of the Floquet matrix are called multipliers. Multipliers of system (1) possess symmetry: If \lambda is the multiplier, then 1/\lambda is also a multiplier. The system is stable iff all eigenvalues are inside the unit circle. However, I cannot reproduce the symmetry of the numerical multipliers computed by scipy, e.g. >>> evals array([ -2.55239771e-02+0.j , -2.39509330e-02+0.99921743j, -2.39509330e-02-0.99921743j, -3.85602951e+01+0.j ]) >>> 1./evals array([ -3.91788472e+01-0.j , -2.39746890e-02-1.00020852j, -2.39746890e-02+1.00020852j, -2.59334115e-02-0.j ]) How can I improve the numerical results concerning the symmetry ? 
The computation of a stability chart is a time-consuming task even for low dimensional systems. How can one accelerate the process ? Is it possible to parallelize the integration, I mean each processor could solve (1) for a new set of initial conditions ? Any pointer would be appreciated. BTW, the example is taken from a recent paper by Seyranian. Thanks in advance. Nils -------------- next part -------------- A non-text attachment was scrubbed... Name: seyranian.py Type: text/x-python Size: 1941 bytes Desc: not available URL: From rob.clewley at gmail.com Sat Mar 14 16:47:07 2009 From: rob.clewley at gmail.com (Rob Clewley) Date: Sat, 14 Mar 2009 16:47:07 -0400 Subject: [SciPy-user] Eigenvalues of the Floquet matrix In-Reply-To: References: Message-ID: On Sat, Mar 14, 2009 at 4:21 PM, Nils Wagner wrote: > Hi all, > > I am interested in the stability of time periodic ODE's of > the form > > \dot{y} = A(t) y, A(t)=A(t+T) (1) > How can I improve the numerical results concerning the > symmetry ? I can't see straight away what's causing the numerical inaccuracy. > BTW, the example is taken from a recent paper by Seyranian. If you send me Seyranian's paper I might be able to work it out. Could you do that? -Rob From nwagner at iam.uni-stuttgart.de Sat Mar 14 17:03:40 2009 From: nwagner at iam.uni-stuttgart.de (Nils Wagner) Date: Sat, 14 Mar 2009 22:03:40 +0100 Subject: [SciPy-user] Eigenvalues of the Floquet matrix In-Reply-To: References: Message-ID: On Sat, 14 Mar 2009 16:47:07 -0400 Rob Clewley wrote: > On Sat, Mar 14, 2009 at 4:21 PM, Nils Wagner > wrote: >> Hi all, >> >> I am interested in the stability of time periodic ODE's >>of >> the form >> >> \dot{y} = A(t) y, A(t)=A(t+T) (1) > >> How can I improve the numerical results concerning the >> symmetry ? > > I can't see straight away what's causing the numerical >inaccuracy. > >> BTW, the example is taken from a recent paper by >>Seyranian. > > If you send me Seyranian's paper I might be able to work >it out. Could > you do that? > > -Rob > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user http://dx.doi.org/10.1016/j.jsv.2009.01.042 Cheers, Nils From nwagner at iam.uni-stuttgart.de Sat Mar 14 17:49:38 2009 From: nwagner at iam.uni-stuttgart.de (Nils Wagner) Date: Sat, 14 Mar 2009 22:49:38 +0100 Subject: [SciPy-user] Eigenvalues of the Floquet matrix In-Reply-To: References: Message-ID: On Sat, 14 Mar 2009 16:47:07 -0400 Rob Clewley wrote: > On Sat, Mar 14, 2009 at 4:21 PM, Nils Wagner > wrote: >> Hi all, >> >> I am interested in the stability of time periodic ODE's >>of >> the form >> >> \dot{y} = A(t) y, A(t)=A(t+T) (1) > >> How can I improve the numerical results concerning the >> symmetry ? > > I can't see straight away what's causing the numerical >inaccuracy. > >> BTW, the example is taken from a recent paper by >>Seyranian. > > If you send me Seyranian's paper I might be able to work >it out. Could > you do that? > > -Rob > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user Rob, I have added another test. Consider the dynamical systems \dot{X} = A(t) X (1) \dot{Y} = -A^T(t) Y (2) X^T Y = I (3), where I denotes the identity matrix. 
Differentiating (3) yields \dot{X}^T Y + X^T \dot{Y} = 0 (4) Inserting (1),(2) in (4) confirms (3) Here is the product of X^T Y [[ 0.99682762 0.00427521 -0.00417167 0.00414188] [ 0.00212284 0.99717958 0.00329442 -0.00330161] [ 0.00122548 -0.00120682 0.98516109 0.01544185] [-0.00108133 0.00108144 0.01406736 0.98537091]] Again, I cannot reproduce the theory. The off-diagonal elements should be zero... Nils -------------- next part -------------- A non-text attachment was scrubbed... Name: seyranian.py Type: text/x-python Size: 2559 bytes Desc: not available URL: From avi at sicortex.com Sat Mar 14 17:53:24 2009 From: avi at sicortex.com (Avi Purkayastha) Date: Sat, 14 Mar 2009 16:53:24 -0500 Subject: [SciPy-user] numpy installation Message-ID: Hi, I installed numpy fine in ~/softwares/numpy. However when I went to test it.. >>> import numpy Traceback (most recent call last): File "", line 1, in ? ImportError: No module named numpy How do I "load" the numpy module in python? I have the numpy install directory in my $PATH, do I need to do anything else? Thanks -- Avi From rob.clewley at gmail.com Sat Mar 14 18:07:12 2009 From: rob.clewley at gmail.com (Rob Clewley) Date: Sat, 14 Mar 2009 18:07:12 -0400 Subject: [SciPy-user] Eigenvalues of the Floquet matrix In-Reply-To: References: Message-ID: > Here is the product of X^T Y > > [[ 0.99682762 ?0.00427521 -0.00417167 ?0.00414188] > ?[ 0.00212284 ?0.99717958 ?0.00329442 -0.00330161] > ?[ 0.00122548 -0.00120682 ?0.98516109 ?0.01544185] > ?[-0.00108133 ?0.00108144 ?0.01406736 ?0.98537091]] > > Again, I cannot reproduce the theory. The off-diagonal > elements should be zero... > > Nils > ... and the on-diagonal elements should be 1. I suspect you basically have it set up right, it's just that there must be some numerical ill-conditioning somewhere in the matrix solving (or inaccuracy in the integration) that leads to these fairly large errors. I'll take a look when I have a chance in the next couple of days! -Rob From pav at iki.fi Sat Mar 14 18:12:16 2009 From: pav at iki.fi (Pauli Virtanen) Date: Sat, 14 Mar 2009 22:12:16 +0000 (UTC) Subject: [SciPy-user] numpy installation References: Message-ID: Sat, 14 Mar 2009 16:53:24 -0500, Avi Purkayastha wrote: > How do I "load" the numpy module in python? I have the numpy install > directory in my $PATH, do I need to do anything else? http://docs.python.org/install/index.html#modifying-python-s-search-path -- Pauli Virtanen From cycomanic at gmail.com Sat Mar 14 21:51:17 2009 From: cycomanic at gmail.com (Jochen Schroeder) Date: Sun, 15 Mar 2009 14:51:17 +1300 Subject: [SciPy-user] numpy installation In-Reply-To: References: Message-ID: <20090315015116.GA4002@jochen.schroeder.phy.auckland.ac.nz> Use the $PYTHONPATH variable. On 14/03/09 16:53, Avi Purkayastha wrote: > Hi, > I installed numpy fine in ~/softwares/numpy. > > However when I went to test it.. > > >>> import numpy > Traceback (most recent call last): > File "", line 1, in ? > ImportError: No module named numpy > > How do I "load" the numpy module in python? I have the numpy install > directory in my $PATH, do I need to do anything else? 
> > Thanks > -- Avi > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user From dannoritzer at web.de Sun Mar 15 08:03:13 2009 From: dannoritzer at web.de (=?ISO-8859-1?Q?G=FCnter_Dannoritzer?=) Date: Sun, 15 Mar 2009 13:03:13 +0100 Subject: [SciPy-user] remezord for scipy; ticket 475 In-Reply-To: <20090222155652.GY11361@localhost.ee.columbia.edu> References: <499F2FAC.9010605@web.de> <20090222155652.GY11361@localhost.ee.columbia.edu> Message-ID: <49BCEE81.5010604@web.de> Hi, A while back I asked about the remezord() function here on the mailing list and got the reply that unit tests will be needed in order to add it to the SciPy. I had sent some questions concerning the functionality to Lev who provided the function and he agreed on moving this discussion to the mailing list. So below are my questions and Lev's reply. Guenter Lev Givon wrote: > Received from G?nter Dannoritzer on Fri, Feb 20, 2009 at 05:33:16PM EST: >> Hi Lev, >> >> I saw you provided a patch to support remezord() in scipy with ticket #475: >> >> http://scipy.org/scipy/scipy/ticket/475 >> > > Hi, G?nter. Glad to see that someone besides myself finds the above > function useful :-) > >> Unfortunately this patch is not scheduled for any release yet due to the >> missing test cases. Now I am trying to figure out how difficult it would >> be to add test cases and was wondering whether you could give me some >> hints for them. >> >> Basically what I see is that remezord() actually is based on three >> functions, using methods from Kaiser, Herrmann, and Ichige papers to >> compute the required order for a filter. >> >> What I am having a hard time with is how to verify the proper function >> of those algorithms in self-checking tests. I can see that some simple >> parameter checking can be done, but then, how to determine that the >> results are correct? One way would be to calculate the actual filter >> after the order estimation and make sure that the transfer function is >> within the specified window. Maybe calculate further transfer functions >> with reduced order and make sure that the specified transfer function is >> not met anymore within a specified error range? > > Rather than checking whether the transfer functions determined by the > remez() function actually satisfy the filter design parameters passed > to remezord(), wouldn't it be preferrable (and possibly more feasible) > to check that the my implementations of the various filter order > estimation functions actually produce the output values described in > the respective papers where they are described? As far as I can > gather, numpy tests should check whether an algorithm implementation > is correct on the assumption that the underlying algorithm is correct. > >> Like allow the calculated order to be +/- 1 or 2 within the actual >> minimum order that can be calculated with the remez() function and meets >> the required transfer function? > > Not exactly sure what you mean. The Remez algorithm computes a filter's > coefficients given a specified filter order; it cannot calculate the > minimum order itself. > >> Next question would be how complex to do the transfer function? Just >> low, high, band pass filter and stop band filter? Then vary the corner >> frequencies? >> >> I was not sure whether you are still subscribed to the scipy user >> mailing list and send you this mail direct. 
If you are still subscribed >> we could just move the discussion to the mailing list. > > Yes, I am subscribed to scipy-user; feel free to continue the > discussion there. I'm interested in feedback from others so that I can > learn a bit about writing numpy tests myself in the future. > >> Thanks for your ideas. >> >> Cheers, >> >> Guenter > > L.G. > From nadavh at visionsense.com Sun Mar 15 08:39:47 2009 From: nadavh at visionsense.com (Nadav Horesh) Date: Sun, 15 Mar 2009 14:39:47 +0200 Subject: [SciPy-user] Chirp Z transform Message-ID: <710F2847B0018641891D9A216027636029C484@ex3.envision.co.il> Attached are a chirp-Z transform module composed by Paul Kienzle (mainly) and I. We tried hard to conform to the scipy's coding style. Nadav -------------- next part -------------- A non-text attachment was scrubbed... Name: chirp_z_transform.tar.bz2 Type: application/octet-stream Size: 6385 bytes Desc: chirp_z_transform.tar.bz2 URL: From nwagner at iam.uni-stuttgart.de Sun Mar 15 10:34:45 2009 From: nwagner at iam.uni-stuttgart.de (Nils Wagner) Date: Sun, 15 Mar 2009 15:34:45 +0100 Subject: [SciPy-user] scipy.sparse.linalg.eigen Message-ID: Hi all, I am interested in the eigenvalues of a product of matrices: M_m M_{m-1} \cdots M_1 x_0 = \lambda x_0 (1) The problem can be rewritten as a generalized eigenvalue problem A z = \lambda B z, (2) where B is singular. I am only interested in the f i n i t e eigenvalues of (2). So far I have solved the problem by scipy.linalg.eigvals. Is it possible to compute these eigenvalues with scipy.sparse.linalg.eigen ? Nils -------------- next part -------------- A non-text attachment was scrubbed... Name: product.py Type: text/x-python Size: 1651 bytes Desc: not available URL: From dannoritzer at web.de Sun Mar 15 10:46:37 2009 From: dannoritzer at web.de (=?ISO-8859-1?Q?G=FCnter_Dannoritzer?=) Date: Sun, 15 Mar 2009 15:46:37 +0100 Subject: [SciPy-user] remezord for scipy; ticket 475 In-Reply-To: <49BCEE81.5010604@web.de> References: <499F2FAC.9010605@web.de> <20090222155652.GY11361@localhost.ee.columbia.edu> <49BCEE81.5010604@web.de> Message-ID: <49BD14CD.5010006@web.de> > Lev Givon wrote: >> Received from G?nter Dannoritzer on Fri, Feb 20, 2009 at 05:33:16PM EST: ... > Rather than checking whether the transfer functions determined by the > remez() function actually satisfy the filter design parameters passed > to remezord(), wouldn't it be preferrable (and possibly more feasible) > to check that the my implementations of the various filter order > estimation functions actually produce the output values described in > the respective papers where they are described? As far as I can > gather, numpy tests should check whether an algorithm implementation > is correct on the assumption that the underlying algorithm is correct. I guess that is true. I did some search for those papers and had a hard time finding them. That is why I tried to figure out another way to test the functions. >From the three papers: O. Herrmann, L.R. Raviner, and D.S.K. Chan, Practical Design Rules for Optimum Finite Impulse Response Low-Pass Digital Filters, Bell Syst. Tech. Jour., 52(6):769-799, Jul./Aug. 1973. J.F. Kaiser, Nonrecursive Digital Filter Design Using I_0-sinh Window function, Proc. IEEE Int. Symp. Circuits and Systems, 20-23, April 1974. K. Ichige, M. Iwaki, and R. Ishii, Accurate Estimation of Minimum Filter Length for Optimum FIR Digital Filters, IEEE Transactions on Circuits and Systems, 47(10):1008-1017, October 2000. I only found the Ichige one. 
About the other two I did not even find information on the IEEE web page. I will study that one paper I found and try to understand the implemented algorithm. Lev, do you have some pointers on how to get to the other papers? Did you get them over a library? Guenter From rob.clewley at gmail.com Sun Mar 15 13:56:00 2009 From: rob.clewley at gmail.com (Rob Clewley) Date: Sun, 15 Mar 2009 13:56:00 -0400 Subject: [SciPy-user] Eigenvalues of the Floquet matrix In-Reply-To: References: Message-ID: Hi Nils, > The eigenvalues \lambda of the Floquet matrix are called multipliers. > Multipliers of system (1) possess symmetry: If \lambda is the > multiplier, then 1/\lambda is also a multiplier. I am still not certain what the problem with your code is specifically, but I can verify that when I put your model and calculations into PyDSTool I get symmetric eigenvalues. > The computation of a stability chart is a time-consuming task even for low dimensional systems. The integrations with PyDSTool are an order of magnitude faster or more even at the higher tolerances, so that should help a lot. > How can I improve the numerical results concerning the symmetry ? It could be just that the VODE integrator is not having its absolute and relative tolerances set small enough (I'm not sure what these are by default for VODE but I'm using values for Radau down near 1e-12). I am also integrating with smaller maximum step size (0.001). For the first pair of (omega, delta) = (16.2, 0.0128205128205) parameter values that your code detects as giving a stable orbit, my code does not agree. Your final point on the last of the four computed orbits ("ic=3" in your loop) is q1: 0.04280477 q1dot: 0.35978081 q2: 0.33468969 q2dot: 0.48781428 whereas mine is q1: 0.0404874798281 q1dot: 0.350985516948 q2: 0.336574584737 q2dot: 0.504365520376 which would appear to suggest a source of error during integration between our codes. My eigenvalues of F are [ 0.96241252+0.j 0.00799704+0.99996802j 0.00799704-0.99996802j 1.03905548+0.j ] which are indistinguishable, as a set, from their reciprocals. I've attached my PyDSTool script in case changing the VODE tolerances don't get you the accuracy you need. Please let me know if you get VODE to be more agreeable, I'm curious. -Rob -------------- next part -------------- A non-text attachment was scrubbed... Name: seyranian-PyDSTool.py Type: application/octet-stream Size: 2278 bytes Desc: not available URL: From rob.clewley at gmail.com Sun Mar 15 14:19:59 2009 From: rob.clewley at gmail.com (Rob Clewley) Date: Sun, 15 Mar 2009 14:19:59 -0400 Subject: [SciPy-user] Eigenvalues of the Floquet matrix In-Reply-To: References: Message-ID: > It could be just that the VODE integrator is not having its absolute > and relative tolerances set small enough (I'm not sure what these are > by default for VODE but I'm using values for Radau down near 1e-12). I > am also integrating with smaller maximum step size (0.001). FYI it doesn't look like that is it. The default tolerances I see at the python level for VODE are rtol=1e-6 and atol=1e-12, which should be enough. It's most likely to be the more efficient way that Radau solves the matrix inversions, which are probably tailored for implicit ODEs of this kind. 
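A rough sketch of the kind of check discussed in this thread, tightening the VODE tolerances and then comparing the multiplier set with its reciprocals, is given below. The 2x2 Mathieu-type system and its parameter values are placeholders only (not the 4-dimensional model from the paper), and the tolerance and step settings are indicative rather than recommended values:

import numpy as np
from scipy.integrate import ode
from scipy.linalg import eigvals

T = 2 * np.pi            # period of A(t)
a, b = 1.0, 0.2          # placeholder parameters

def A(t):
    # \dot{y} = A(t) y with A(t + T) = A(t); the system is Hamiltonian,
    # so the multipliers should come in (lambda, 1/lambda) pairs
    return np.array([[0.0, 1.0],
                     [-(a + b * np.cos(t)), 0.0]])

def rhs(t, y):
    return np.dot(A(t), y)

n = 2
F = np.zeros((n, n))
for k in range(n):
    r = ode(rhs).set_integrator('vode', rtol=1e-10, atol=1e-12,
                                max_step=1e-3, nsteps=100000)
    r.set_initial_value(np.eye(n)[:, k], 0.0)
    r.integrate(T)
    F[:, k] = r.y        # k-th column of the monodromy (Floquet) matrix

mults = eigvals(F)
print np.sort_complex(mults)
print np.sort_complex(1.0 / mults)    # should agree with the line above
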
-Rob From lev at columbia.edu Sun Mar 15 13:57:56 2009 From: lev at columbia.edu (Lev Givon) Date: Sun, 15 Mar 2009 13:57:56 -0400 Subject: [SciPy-user] remezord for scipy; ticket 475 In-Reply-To: <49BD14CD.5010006@web.de> References: <499F2FAC.9010605@web.de> <20090222155652.GY11361@localhost.ee.columbia.edu> <49BCEE81.5010604@web.de> <49BD14CD.5010006@web.de> Message-ID: <20090315175756.GA31954@avicenna.ee.columbia.edu> Received from G?nter Dannoritzer on Sun, Mar 15, 2009 at 10:46:37AM EDT: (snip) > I guess that is true. I did some search for those papers and had a hard > time finding them. That is why I tried to figure out another way to test > the functions. > > >From the three papers: > > O. Herrmann, L.R. Raviner, and D.S.K. Chan, Practical Design Rules for > Optimum Finite Impulse Response Low-Pass Digital Filters, Bell Syst. > Tech. Jour., 52(6):769-799, Jul./Aug. 1973. > > J.F. Kaiser, Nonrecursive Digital Filter Design Using I_0-sinh Window > function, Proc. IEEE Int. Symp. Circuits and Systems, 20-23, April 1974. > > K. Ichige, M. Iwaki, and R. Ishii, Accurate Estimation of Minimum > Filter Length for Optimum FIR Digital Filters, IEEE Transactions on > Circuits and Systems, 47(10):1008-1017, October 2000. > > I only found the Ichige one. About the other two I did not even find > information on the IEEE web page. > > I will study that one paper I found and try to understand the > implemented algorithm. > > Lev, do you have some pointers on how to get to the other papers? Did > you get them over a library? > > Guenter I actually used the descriptions of the Herrmann and Kaiser algorithms quoted in Ichige's paper because I didn't have immediate access to those papers. I just placed a request for my university's offsite copies of the relevant journals; barring complications, I can send you copies of the articles when I receive them in a few days. L.G. From dmitrey15 at ukr.net Sun Mar 15 15:01:03 2009 From: dmitrey15 at ukr.net (Dmitrey) Date: Sun, 15 Mar 2009 21:01:03 +0200 Subject: [SciPy-user] [ANN][optimization] OpenOpt 0.23 Message-ID: <49BD506F.5000900@ukr.net> Hi all, I'm glad to inform you about new release (0.23) of OpenOpt, a free universal numerical optimization framework (license: BSD). Our new homepage: http://openopt.org Introduction to the framework: http://openopt.org/Foreword All release details here: http://openopt.org/Changelog or http://forum.openopt.org/viewtopic.php?id=58 Special thanks to Stepan Hlushak for writing GLP (global) solver "de" ("differential evolution"). Regards, Dmitrey From fperez.net at gmail.com Mon Mar 16 00:42:46 2009 From: fperez.net at gmail.com (Fernando Perez) Date: Sun, 15 Mar 2009 21:42:46 -0700 Subject: [SciPy-user] Has IPython been useful to you? Please let me know... Message-ID: Hi all, [ apologies for the semi-spam, I'll keep this brief and expect all replies off-list ] IPython is a project that many of you on this list are likely to use in your daily work, either directly or indirectly (if you've embedded it or used it as a component of some other system). I would simply like to ask you, if IPython has been significantly useful for a project you use, lead, develop, etc., to let me know. For legal/professional reasons, I need to gather information about who has found IPython to be of value. 
I started IPython as a toy 'afternoon hack' in late 2001, and today it continues to grow, as the nicely summarized Ohloh stats show: https://www.ohloh.net/p/ipython (obviously, this is now the result of the work of many, not just myself, as is true of any healthy open source project as it grows). But I have never systematically tracked its impact, and now I need to do so. So, if you have used IPython and it has made a significant contribution to your project, work, research, company, whatever, I'd be very grateful if you let me know. A short paragraph on what this benefit has been is all I ask. Once I gather any information I get, I would contact directly some of the responders to ask for your authorization before quoting you. I should stress that any information you give me will only go in a documentation packet in support of my legal/residency process here in the USA (think of it as an oversized, obnoxiously detailed CV that goes beyond just publications and regular academic information). To keep traffic off this list, please send your replies directly to me, either at this address or my regular work one: Fernando.Perez at berkeley.edu In advance, many thanks to anyone willing to reply. I've never asked for anything in return for working on IPython and the ecosystem of scientific Python tools, but this is actually very important, so any information you can provide me will be very useful. Best regards, Fernando Perez. From ghsohn at gmail.com Mon Mar 16 04:15:49 2009 From: ghsohn at gmail.com (H.S.) Date: Mon, 16 Mar 2009 09:15:49 +0100 Subject: [SciPy-user] Has IPython been useful to you? Please let me know... In-Reply-To: References: Message-ID: I know Fernando wants to keep this off the list, but given the nature of his request, I just wanted to say "publicly" thanks for the great work, I love IPython, and encourage everyone, particularly those with "name-brand" positions in industry or academia, who use IPython to answer the request. This immigration bureaucracy stuff is awful, and I hope the community helps. H.S. On Mon, Mar 16, 2009 at 5:42 AM, Fernando Perez wrote: > Hi all, > > [ apologies for the semi-spam, I'll keep this brief and expect all > replies off-list ] > > IPython is a project that many of you on this list are likely to use > in your daily work, either directly or indirectly (if you've embedded > it or used it as a component of some other system). ?I would simply > like to ask you, if IPython has been significantly useful for a > project you use, lead, develop, etc., to let me know. > > For legal/professional reasons, I need to gather information about who > has found IPython to be of value. ?I started IPython as a toy > 'afternoon hack' in late 2001, and today it continues to grow, as the > nicely summarized Ohloh stats show: ?https://www.ohloh.net/p/ipython > (obviously, this is now the result of the work of many, not just > myself, as is true of any healthy open source project as it grows). > But I have never systematically tracked its impact, and now I need to > do so. > > So, if you have used IPython and it has made a significant > contribution to your project, work, research, company, whatever, I'd > be very grateful if you let me know. ?A short paragraph on what this > benefit has been is all I ask. ?Once I gather any information I get, I > would contact directly some of the responders to ask for your > authorization before quoting you. 
> > I should stress that any information you give me will only go in a > documentation packet in support of my legal/residency process here in > the USA (think of it as an oversized, obnoxiously detailed CV that > goes beyond just publications and regular academic information). > > To keep traffic off this list, please send your replies directly to > me, either at this address or my regular work one: > > Fernando.Perez at berkeley.edu > > In advance, many thanks to anyone willing to reply. ?I've never asked > for anything in return for working on IPython and the ecosystem of > scientific Python tools, but this is actually very important, so any > information you can provide me will be very useful. > > Best regards, > > Fernando Perez. > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user > From nwagner at iam.uni-stuttgart.de Mon Mar 16 05:26:01 2009 From: nwagner at iam.uni-stuttgart.de (Nils Wagner) Date: Mon, 16 Mar 2009 10:26:01 +0100 Subject: [SciPy-user] Eigenvalues of the Floquet matrix In-Reply-To: References: Message-ID: On Sun, 15 Mar 2009 13:56:00 -0400 Rob Clewley wrote: > Hi Nils, > >> The eigenvalues \lambda of the Floquet matrix are called >>multipliers. >> Multipliers of system (1) possess symmetry: If \lambda >>is the >> multiplier, then 1/\lambda is also a multiplier. > > I am still not certain what the problem with your code >is > specifically, but I can verify that when I put your >model and > calculations into PyDSTool I get symmetric eigenvalues. > >> The computation of a stability chart is a time-consuming >>task even for low dimensional systems. > > The integrations with PyDSTool are an order of magnitude >faster or > more even at the higher tolerances, so that should help >a lot. > >> How can I improve the numerical results concerning the >>symmetry ? > > It could be just that the VODE integrator is not having >its absolute > and relative tolerances set small enough (I'm not sure >what these are > by default for VODE but I'm using values for Radau down >near 1e-12). I > am also integrating with smaller maximum step size >(0.001). > >For the first pair of (omega, delta) = (16.2, >0.0128205128205) > parameter values that your code detects as giving a >stable orbit, my > code does not agree. Your final point on the last of the >four computed > orbits ("ic=3" in your loop) is > > q1: 0.04280477 > q1dot: 0.35978081 > q2: 0.33468969 > q2dot: 0.48781428 > > whereas mine is > > q1: 0.0404874798281 > q1dot: 0.350985516948 > q2: 0.336574584737 > q2dot: 0.504365520376 > > which would appear to suggest a source of error during >integration > between our codes. My eigenvalues of F are > > [ 0.96241252+0j 0.00799704+0.99996802j > 0.00799704-0.99996802j > 1.03905548+0.j ] > > which are indistinguishable, as a set, from their >reciprocals. I've > attached my PyDSTool script in case changing the VODE >tolerances don't > get you the accuracy you need. Please let me know if you >get VODE to > be more agreeable, I'm curious. > > -Rob Hi Rob, Thank you very much for your detailed answers ! Again, I wrestle with the installation of PyDSTool on my x86_64 box (OpenSuSe11.1). It would be really nice to have a setup.py ;-) Anyway, I have used svn co http://jay.cam.cornell.edu/svn/PyDSTool/trunk PyDSTool to get the latest version of PyDSTool. 
Here is the contents of ~/local/lib64/python2.6/site-packages/PyDSTool.pth $HOME $HOME/svn $HOME/svn/PyDSTool $HOME/svn/PyDSTool/PyDSTool $HOME/svn/PyDSTool/PyDSTool/tests echo $PYTHONPATH /home/nwagner/:/home/nwagner/local/lib64/python2.6/site-packages/:/home/nwagner/svn/PyDSTool/PyDSTool:/home/nwagner/svn/PyDSTool/PyDSTool/tests python -i seyranian-PyDSTool.py Traceback (most recent call last): File "seyranian-PyDSTool.py", line 1, in from PyDSTool import * ImportError: No module named PyDSTool Best wishes, Nils From nwagner at iam.uni-stuttgart.de Mon Mar 16 05:34:46 2009 From: nwagner at iam.uni-stuttgart.de (Nils Wagner) Date: Mon, 16 Mar 2009 10:34:46 +0100 Subject: [SciPy-user] Eigenvalues of the Floquet matrix In-Reply-To: References: Message-ID: On Mon, 16 Mar 2009 10:26:01 +0100 "Nils Wagner" wrote: > On Sun, 15 Mar 2009 13:56:00 -0400 > Rob Clewley wrote: >> Hi Nils, >> >>> The eigenvalues \lambda of the Floquet matrix are called >>>multipliers. >>> Multipliers of system (1) possess symmetry: If \lambda >>>is the >>> multiplier, then 1/\lambda is also a multiplier. >> >> I am still not certain what the problem with your code >>is >> specifically, but I can verify that when I put your >>model and >> calculations into PyDSTool I get symmetric eigenvalues. >> >>> The computation of a stability chart is a time-consuming >>>task even for low dimensional systems. >> >> The integrations with PyDSTool are an order of magnitude >>faster or >> more even at the higher tolerances, so that should help >>a lot. >> >>> How can I improve the numerical results concerning the >>>symmetry ? >> >> It could be just that the VODE integrator is not having >>its absolute >> and relative tolerances set small enough (I'm not sure >>what these are >> by default for VODE but I'm using values for Radau down >>near 1e-12). I >> am also integrating with smaller maximum step size >>(0.001). >> >>For the first pair of (omega, delta) = (16.2, >>0.0128205128205) >> parameter values that your code detects as giving a >>stable orbit, my >> code does not agree. Your final point on the last of the >>four computed >> orbits ("ic=3" in your loop) is >> >> q1: 0.04280477 >> q1dot: 0.35978081 >> q2: 0.33468969 >> q2dot: 0.48781428 >> >> whereas mine is >> >> q1: 0.0404874798281 >> q1dot: 0.350985516948 >> q2: 0.336574584737 >> q2dot: 0.504365520376 >> >> which would appear to suggest a source of error during >>integration >> between our codes. My eigenvalues of F are >> >> [ 0.96241252+0j 0.00799704+0.99996802j >> 0.00799704-0.99996802j >> 1.03905548+0.j ] >> >> which are indistinguishable, as a set, from their >>reciprocals. I've >> attached my PyDSTool script in case changing the VODE >>tolerances don't >> get you the accuracy you need. Please let me know if you >>get VODE to >> be more agreeable, I'm curious. >> >> -Rob > > Hi Rob, > > Thank you very much for your detailed answers ! > > Again, I wrestle with the installation of PyDSTool > on my x86_64 box (OpenSuSe11.1). > > It would be really nice to have a setup.py ;-) > > Anyway, I have used > > svn co http://jay.cam.cornell.edu/svn/PyDSTool/trunk > PyDSTool > > to get the latest version of PyDSTool. 
> > Here is the contents of > ~/local/lib64/python2.6/site-packages/PyDSTool.pth > > $HOME > $HOME/svn > $HOME/svn/PyDSTool > $HOME/svn/PyDSTool/PyDSTool > $HOME/svn/PyDSTool/PyDSTool/tests > > echo $PYTHONPATH > /home/nwagner/:/home/nwagner/local/lib64/python2.6/site-packages/:/home/nwagner/svn/PyDSTool/PyDSTool:/home/nwagner/svn/PyDSTool/PyDSTool/tests > > > python -i seyranian-PyDSTool.py > Traceback (most recent call last): > File "seyranian-PyDSTool.py", line 1, in > from PyDSTool import * > ImportError: No module named PyDSTool > > Best wishes, > Nils > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user Sorry for replying to myself. I have also added /home/nwagner/svn/PyDSTool/ to my $PYTHONPATH. Now I am able to import PyDSTool. However if I run python -i seyranian-PyDSTool.py /home/nwagner/local/lib64/python2.6/site-packages/Pyrex/Compiler/Errors.py:17: DeprecationWarning: BaseException.message has been deprecated as of Python 2.6 self.message = message [ 0.05714896 0.20918955 0.58088371] [ 3.60595823e-01+0.j -2.29878191e-17+0.j 6.16263995e-02+0.j] [ 0.06162640+0.j 0.36059582+0.j] -3.41421356237 Error occurred in generating Radau system (while moving library extension modules to CWD) cannot concatenate 'str' and 'int' objects Traceback (most recent call last): File "seyranian-PyDSTool.py", line 43, in osc = Generator.Radau_ODEsystem(DSargs) File "/home/nwagner/svn/PyDSTool/PyDSTool/Generator/Radau_ODEsystem.py", line 545, in __init__ self.compileLib() File "/home/nwagner/svn/PyDSTool/PyDSTool/Generator/Radau_ODEsystem.py", line 991, in compileLib raise RuntimeError RuntimeError Any idea ? Nils From jeanluc.menut at free.fr Mon Mar 16 09:33:09 2009 From: jeanluc.menut at free.fr (Jean-Luc Menut) Date: Mon, 16 Mar 2009 14:33:09 +0100 Subject: [SciPy-user] strange behavior of scipy.interpolate.interp2d ? Message-ID: <49BE5515.106@free.fr> Hello All, I'm testing scipy.interpolate.interp2d, and it looks like there is a strange behavior of the 'linear' interpolation. It looks like only a part of the image is interpolated. Maybe I'm mistaking, but the cubic interpolation looks fine to me. I attach 3 examples images to this mail : original.png is the original image, linear.png is the original image linearly interpolated (with twice the number of pixel of original.png) and cubic.png is the same but with a cubic interpolation. here the corresponding few lines of code: a=zeros([5,5]) x=y=arange(5) b=scipy.interpolate.interp2d(x,y,a,'cubic') #or 'linear xx=yy=arange(9)/2. imshow(b(xx,yy),interpolation='nearest') Have you any idea on what's is going on ? Best regards, Jean-Luc -------------- next part -------------- A non-text attachment was scrubbed... Name: linear.png Type: image/png Size: 7194 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: original.png Type: image/png Size: 6450 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: cubic.png Type: image/png Size: 7406 bytes Desc: not available URL: From rob.clewley at gmail.com Mon Mar 16 11:00:26 2009 From: rob.clewley at gmail.com (Rob Clewley) Date: Mon, 16 Mar 2009 11:00:26 -0400 Subject: [SciPy-user] Eigenvalues of the Floquet matrix In-Reply-To: References: Message-ID: >> >> It would be really nice to have a setup.py ;-) I know, I'm trying to pay somebody this spring to write me one! 
> /home/nwagner/local/lib64/python2.6/site-packages/Pyrex/Compiler/Errors.py:17: > DeprecationWarning: BaseException.message has been > deprecated as of Python 2.6 OK, for a start PyDSTool has not been tested at all on Python 2.6. I just haven't been able to keep up! If there's any chance you can also have 2.4.3 or 2.5 on your system then that would be the easiest thing to do. I know that people have installed successfully recently from SVN on Python 2.5. Maybe I need to post a clear disclaimer on my webpage since Python 2.6 has become so common. > Error occurred in generating Radau system > (while moving library extension modules to CWD) > cannot concatenate 'str' and > 'int' objects > Traceback (most recent call last): > File "seyranian-PyDSTool.py", line 43, in > osc = Generator.Radau_ODEsystem(DSargs) > File > "/home/nwagner/svn/PyDSTool/PyDSTool/Generator/Radau_ODEsystem.py", > line 545, in __init__ > self.compileLib() > File > "/home/nwagner/svn/PyDSTool/PyDSTool/Generator/Radau_ODEsystem.py", > line 991, in compileLib > raise RuntimeError > RuntimeError > > > Any idea ? I don't have a clue right now. The list of test failures you sent me is pretty catastrophic, which makes me suspect the problem is something basic in python that has changed between 2.5 and 2.6 that is at the heart of every one of the tests. And/or it's something to do with Pyrex getting involved for the legacy ODE solvers. I don't know why the pyrex compiler is being invoked, because it should be gcc. Can you look at the wiki GettingStarted page (Section 1.4.5.) on getting C integrators set up on Linux and tell me if you've set up SWIG and set the CC environment variable? -Rob From emanuele at relativita.com Mon Mar 16 11:05:24 2009 From: emanuele at relativita.com (Emanuele Olivetti) Date: Mon, 16 Mar 2009 16:05:24 +0100 Subject: [SciPy-user] PyEM: custom (non-Euclidean) distance function? Message-ID: <49BE6AB4.6020201@relativita.com> Hi All, I'm playing with PyEM [0] in scikits and would like to feed a dataset for which Euclidean distance is not supposed to work. So I'm wondering how simple is to modify the code with a custom distance (e.g., 1-norm). Thanks for any help, Emanuele [0]: http://www.ar.media.kyoto-u.ac.jp/members/david/softwares/em/ From emanuele at relativita.com Mon Mar 16 12:05:56 2009 From: emanuele at relativita.com (Emanuele Olivetti) Date: Mon, 16 Mar 2009 17:05:56 +0100 Subject: [SciPy-user] PyEM: custom (non-Euclidean) distance function? In-Reply-To: <49BE6AB4.6020201@relativita.com> References: <49BE6AB4.6020201@relativita.com> Message-ID: <49BE78E4.108@relativita.com> Emanuele Olivetti wrote: > Hi All, > > I'm playing with PyEM [0] in scikits and would like to feed > a dataset for which Euclidean distance is not supposed to > work. So I'm wondering how simple is to modify the code with > a custom distance (e.g., 1-norm). > > Additional info. My final goal is to run the EM algorithm and estimate the Gaussian mixture from data, but assuming a different distance function. I had a look to densities.py which seems to be the relevant file for this question. I can see the computation of Euclidean distance in: - _scalar_gauss_den() - _diag_gauss_den() - _full_gauss_den() So the question is: if I change those functions according to a new distance function, is it expected the EM estimation em.train() to work meaningfully? Are there other parts of PyEM that assumes Euclidean distance function? 
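To make the question concrete, this is roughly the kind of change I have in mind (a sketch only, not PyEM's actual code): in a diagonal Gaussian the distance enters through the quadratic term, and I would replace it with an L1 term. I realize the result is then a Laplace-like density rather than a Gaussian, so presumably the closed-form mean/variance updates in the M-step are no longer the right maximizers; maybe that is already where the approach breaks down.

    import numpy as np

    def diag_gauss_logden(x, mu, va):
        # The 'distance' is the squared, variance-scaled Euclidean term
        # (a diagonal Mahalanobis distance).
        quad = np.sum((x - mu)**2 / va, axis=-1)
        return -0.5 * (quad + x.shape[-1]*np.log(2*np.pi) + np.sum(np.log(va)))

    def diag_laplace_logden(x, mu, b):
        # Same structure with an L1 term: a Laplace-like density,
        # no longer a Gaussian.
        return -np.sum(np.abs(x - mu) / b, axis=-1) - np.sum(np.log(2*b))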
Emanuele From josef.pktd at gmail.com Mon Mar 16 12:28:30 2009 From: josef.pktd at gmail.com (josef.pktd at gmail.com) Date: Mon, 16 Mar 2009 12:28:30 -0400 Subject: [SciPy-user] PyEM: custom (non-Euclidean) distance function? In-Reply-To: <49BE78E4.108@relativita.com> References: <49BE6AB4.6020201@relativita.com> <49BE78E4.108@relativita.com> Message-ID: <1cd32cbb0903160928k24adb3a0k50df5b5959ddda67@mail.gmail.com> On Mon, Mar 16, 2009 at 12:05 PM, Emanuele Olivetti wrote: > Emanuele Olivetti wrote: >> Hi All, >> >> I'm playing with PyEM [0] in scikits and would like to feed >> a dataset for which Euclidean distance is not supposed to >> work. So I'm wondering how simple is to modify the code with >> a custom distance (e.g., 1-norm). >> >> > > Additional info. My final goal is to run the EM algorithm > and estimate the Gaussian mixture from data, but assuming > a different distance function. I had a look to densities.py > which seems to be the relevant file for this question. I > can see the computation of Euclidean distance in: > - _scalar_gauss_den() > - _diag_gauss_den() > - _full_gauss_den() > > > So the question is: if I change those functions according to a > new distance function, is it expected the EM estimation > em.train() to work meaningfully? Are there other parts of PyEM > that assumes Euclidean distance function? > > > Emanuele I don't know the answer, but I'm curious about your data and the problem that you cannot calculate Euclidean distance. The Gaussian mixture is based on the normal distribution for continuous random variables and as such uses euclidean distance, or a variant based on the covariance matrix to define the density function. This seems to me a conflict between trying to fit the data to a gaussian mixture if it doesn't allow gaussian distance calculations. If the data is really different, then a gaussian mixture might not be appropriate. >From a quick look, gmm_em.py and gauss_mix.py are specialized to the normal distribution and fully parametric, and I'm not sure what distribution you get if you just change the distance function. And to correctly allow for other distributions, would require more far reaching changes than just changing the distance function, at least that is my impression. Josef From emanuele at relativita.com Mon Mar 16 12:46:30 2009 From: emanuele at relativita.com (Emanuele Olivetti) Date: Mon, 16 Mar 2009 17:46:30 +0100 Subject: [SciPy-user] PyEM: custom (non-Euclidean) distance function? In-Reply-To: <1cd32cbb0903160928k24adb3a0k50df5b5959ddda67@mail.gmail.com> References: <49BE6AB4.6020201@relativita.com> <49BE78E4.108@relativita.com> <1cd32cbb0903160928k24adb3a0k50df5b5959ddda67@mail.gmail.com> Message-ID: <49BE8266.3050600@relativita.com> josef.pktd at gmail.com wrote: > On Mon, Mar 16, 2009 at 12:05 PM, Emanuele Olivetti > wrote: > >> Emanuele Olivetti wrote: >> >>> Hi All, >>> >>> I'm playing with PyEM [0] in scikits and would like to feed >>> a dataset for which Euclidean distance is not supposed to >>> work. So I'm wondering how simple is to modify the code with >>> a custom distance (e.g., 1-norm). >>> >>> >>> >> Additional info. My final goal is to run the EM algorithm >> and estimate the Gaussian mixture from data, but assuming >> a different distance function. I had a look to densities.py >> which seems to be the relevant file for this question. 
I >> can see the computation of Euclidean distance in: >> - _scalar_gauss_den() >> - _diag_gauss_den() >> - _full_gauss_den() >> >> >> So the question is: if I change those functions according to a >> new distance function, is it expected the EM estimation >> em.train() to work meaningfully? Are there other parts of PyEM >> that assumes Euclidean distance function? >> >> >> Emanuele >> > > I don't know the answer, but I'm curious about your data and the > problem that you cannot calculate Euclidean distance. > > The Gaussian mixture is based on the normal distribution for > continuous random variables and as such uses euclidean distance, or a > variant based on the covariance matrix to define the density function. > This seems to me a conflict between trying to fit the data to a > gaussian mixture if it doesn't allow gaussian distance calculations. > If the data is really different, then a gaussian mixture might not be > appropriate. > > >From a quick look, gmm_em.py and gauss_mix.py are specialized to the > normal distribution and fully parametric, and I'm not sure what > distribution you get if you just change the distance function. And to > correctly allow for other distributions, would require more far > reaching changes than just changing the distance function, at least > that is my impression. > > You are right. I'm coming from K-means (MacKay's book) and moving to GMM, that's why I had in mind custom distances. I can try to transform my data to be meaningful under Euclidean distance. Thanks anyway. E. From cournape at gmail.com Mon Mar 16 12:53:54 2009 From: cournape at gmail.com (David Cournapeau) Date: Tue, 17 Mar 2009 01:53:54 +0900 Subject: [SciPy-user] PyEM: custom (non-Euclidean) distance function? In-Reply-To: <49BE8266.3050600@relativita.com> References: <49BE6AB4.6020201@relativita.com> <49BE78E4.108@relativita.com> <1cd32cbb0903160928k24adb3a0k50df5b5959ddda67@mail.gmail.com> <49BE8266.3050600@relativita.com> Message-ID: <5b8d13220903160953g4a8fa80fi253278c0d2ce14ce@mail.gmail.com> On Tue, Mar 17, 2009 at 1:46 AM, Emanuele Olivetti wrote: > > You are right. I'm coming from K-means (MacKay's book) and > moving to GMM, that's why I had in mind custom distances. Note that GMM is what is called soft kmean in MacKay's book. You can use other distances for kmeans, and other kind of soft-kmeans - but as said by Josef, I am more puzzled by the idea of non euclidean distance in the EM context, because of the inherent probabilistic view. Because of the probabilities, there is no obvious interpretation in distance anymore (it is not an argmin_c ||x-c|| for each point x). There are soft kmeans algorithms with non euclidean distances, but not in a probabilistic framework - at least I am not aware of any. cheers, David From josef.pktd at gmail.com Mon Mar 16 13:10:28 2009 From: josef.pktd at gmail.com (josef.pktd at gmail.com) Date: Mon, 16 Mar 2009 13:10:28 -0400 Subject: [SciPy-user] PyEM: custom (non-Euclidean) distance function? In-Reply-To: <49BE8266.3050600@relativita.com> References: <49BE6AB4.6020201@relativita.com> <49BE78E4.108@relativita.com> <1cd32cbb0903160928k24adb3a0k50df5b5959ddda67@mail.gmail.com> <49BE8266.3050600@relativita.com> Message-ID: <1cd32cbb0903161010p2c6179b8xda83e72aa6ef06eb@mail.gmail.com> A comment on gmm_em.py: in def _update_em_full(self, data, gamma, ngamma): there is a triple loop, the inner two loops are: # This should be much faster than recursing on n... 
for i in range(d): for j in range(d): xx[i, j] = N.sum(data[:, i] * data[:, j] * gamma.T[c, :], axis = 0) in my reading data[:, i], data[:, j], and gamma.T[c, :] are all 1 dimensional. If this is correct, then to me this looks like xx = N.dot(data.T, data * gamma[:,c:c+1]) I'm not completely sure about the shape of gamma, why you transposed it. According to a numpy ticket using dot should be much faster than sum. This is just reading the code, not actually tested. On the distribution: I don't know about EM applications, but from a maximum likelihood view point, it might be possible to find the distribution class for the mixture that corresponds to different kinds of distance measures or that is appropriate for discrete data. Josef From nwagner at iam.uni-stuttgart.de Mon Mar 16 13:18:11 2009 From: nwagner at iam.uni-stuttgart.de (Nils Wagner) Date: Mon, 16 Mar 2009 18:18:11 +0100 Subject: [SciPy-user] Eigenvalues of the Floquet matrix In-Reply-To: References: Message-ID: On Mon, 16 Mar 2009 11:00:26 -0400 Rob Clewley wrote: >>> >>> It would be really nice to have a setup.py ;-) > > I know, I'm trying to pay somebody this spring to write >me one! > >> /home/nwagner/local/lib64/python2.6/site-packages/Pyrex/Compiler/Errors.py:17: >> DeprecationWarning: BaseException.message has been >> deprecated as of Python 2.6 > > OK, for a start PyDSTool has not been tested at all on >Python 2.6. I > just haven't been able to keep up! If there's any chance >you can also > have 2.4.3 or 2.5 on your system then that would be the >easiest thing > to do. I know that people have installed successfully >recently from > SVN on Python 2.5. Maybe I need to post a clear >disclaimer on my > webpage since Python 2.6 has become so common. > >> Error occurred in generating Radau system >> (while moving library extension modules to CWD) >> cannot concatenate 'str' >>and >> 'int' objects >> Traceback (most recent call last): >> File "seyranian-PyDSTool.py", line 43, in >> osc = Generator.Radau_ODEsystem(DSargs) >> File >> "/home/nwagner/svn/PyDSTool/PyDSTool/Generator/Radau_ODEsystem.py", >> line 545, in __init__ >> self.compileLib() >> File >> "/home/nwagner/svn/PyDSTool/PyDSTool/Generator/Radau_ODEsystem.py", >> line 991, in compileLib >> raise RuntimeError >> RuntimeError >> >> >> Any idea ? > > I don't have a clue right now. > > The list of test failures you sent me is pretty >catastrophic, which > makes me suspect the problem is something basic in >python that has > changed between 2.5 and 2.6 that is at the heart of >every one of the > tests. > > And/or it's something to do with Pyrex getting involved >for the legacy > ODE solvers. I don't know why the pyrex compiler is >being invoked, > because it should be gcc. Can you look at the wiki >GettingStarted page > (Section 1.4.5.) on getting C integrators set up on >Linux and tell me > if you've set up SWIG and set the CC environment >variable? > > -Rob > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user Hi Rob, the entries I found in /usr/lib64/python2.6/config/Makefile CC= gcc -pthread CXX= g++ MAINCC= $(CC) LINKCC= $(PURIFY) $(MAINCC) AR= ar RANLIB= ranlib SVNVERSION= echo exported swig -version SWIG Version 1.3.36 Compiled with g++ [x86_64-suse-linux-gnu] Please see http://www.swig.org for reporting bugs and further information echo $CC is currently empty. How do I set the environmental variable ? 
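(Presumably adding the line export CC=/usr/bin/gcc to ~/.bashrc and starting a new shell is enough, but please confirm.)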
Cheers, Nils Here is the new output of run_all_tests.py Summary: Basic PyDSTool functions: appears to work on your system Map related modules: appears to work on your system VODE related modules: appears to work on your system Symbolic differentiation module: appears to work on your system Parameter estimation module: appears to be broken on your system PyCont: appears to be broken on your system Dopri ODE systems: appears to be broken on your system Radau ODE systems: appears to be broken on your system Parameter estimation module with external compilers: appears to work on your system PyCont interface to AUTO: appears to be broken on your system python -i pest_test2.py /home/nwagner/local/lib64/python2.6/site-packages/Pyrex/Compiler/Errors.py:17: DeprecationWarning: BaseException.message has been deprecated as of Python 2.6 self.message = message [ 0.05714896 0.20918955 0.58088371] [ 3.60595823e-01+0.j -2.29878191e-17+0.j 6.16263995e-02+0.j] [ 0.06162640+0.j 0.36059582+0.j] Generating non-uniform samples from HH orbit... ... done HH spike time found at 8.43278894472 Traceback (most recent call last): File "pest_test2.py", line 127, in verbose=True) File "/home/nwagner/svn/PyDSTool/PyDSTool/Toolbox/ParamEst.py", line 1215, in run int(verbose)) File "/home/nwagner/local/lib64/python2.6/site-packages/scipy/optimize/optimize.py", line 1199, in fminbound fx = func(x,*args) File "/home/nwagner/svn/PyDSTool/PyDSTool/Toolbox/ParamEst.py", line 354, in residual r = self._res_fn(p, extra_args) File "/home/nwagner/svn/PyDSTool/PyDSTool/Toolbox/ParamEst.py", line 416, in _res_fn return pest.evaluate()[0] File "/home/nwagner/svn/PyDSTool/PyDSTool/Toolbox/ParamEst.py", line 813, in evaluate res = self.context.residual(self.testModel) File "/home/nwagner/svn/PyDSTool/PyDSTool/MProject.py", line 1019, in residual self.evaluate(model) File "/home/nwagner/svn/PyDSTool/PyDSTool/MProject.py", line 1003, in evaluate new_result = test_mi(ref_mi) File "/home/nwagner/svn/PyDSTool/PyDSTool/MProject.py", line 1282, in evaluate target.get_test_traj(force=force) File "/home/nwagner/svn/PyDSTool/PyDSTool/MProject.py", line 1374, in get_test_traj self.compute_traj() File "/home/nwagner/svn/PyDSTool/PyDSTool/MProject.py", line 1360, in compute_traj self.model.compute(trajname=self._trajname, force=True) File "/home/nwagner/svn/PyDSTool/PyDSTool/Model.py", line 2449, in compute MI, swRules) File "/home/nwagner/svn/PyDSTool/PyDSTool/Model.py", line 2064, in _findTrajInitiator self.verboselevel) File "/home/nwagner/svn/PyDSTool/PyDSTool/Model.py", line 2885, in findTrajInitiator dxdt = dict(MI.Rhs(t, icdict, pardict)) File "/home/nwagner/svn/PyDSTool/PyDSTool/MProject.py", line 1209, in Rhs return self.model.Rhs(t, xdict, pdict) File "/home/nwagner/svn/PyDSTool/PyDSTool/Model.py", line 1646, in Rhs 'norm': self._normord}) File "/home/nwagner/svn/PyDSTool/PyDSTool/Points.py", line 124, in __init__ datalist.append(coorddict[c][0]) IndexError: invalid index to scalar variable. From cournape at gmail.com Mon Mar 16 13:33:36 2009 From: cournape at gmail.com (David Cournapeau) Date: Tue, 17 Mar 2009 02:33:36 +0900 Subject: [SciPy-user] PyEM: custom (non-Euclidean) distance function? 
In-Reply-To: <1cd32cbb0903161010p2c6179b8xda83e72aa6ef06eb@mail.gmail.com> References: <49BE6AB4.6020201@relativita.com> <49BE78E4.108@relativita.com> <1cd32cbb0903160928k24adb3a0k50df5b5959ddda67@mail.gmail.com> <49BE8266.3050600@relativita.com> <1cd32cbb0903161010p2c6179b8xda83e72aa6ef06eb@mail.gmail.com> Message-ID: <5b8d13220903161033k2895c3d8ib20b7182cc673c2e@mail.gmail.com> On Tue, Mar 17, 2009 at 2:10 AM, wrote: > A comment on gmm_em.py: > > in > ? ? ? def _update_em_full(self, data, gamma, ngamma): > > there is a triple loop, the inner two loops are: > > ? ? ? ? ? ?# This should be much faster than recursing on n... > ? ? ? ? ? ?for i in range(d): > ? ? ? ? ? ? ? ?for j in range(d): > ? ? ? ? ? ? ? ? ? ?xx[i, j] = N.sum(data[:, i] * data[:, j] * gamma.T[c, :], > ? ? ? ? ? ? ? ? ? ? ? ? ? ?axis = 0) > > in my reading data[:, i], data[:, j], and gamma.T[c, :] are all 1 dimensional. > If this is correct, then to me this looks like > > xx = N.dot(data.T, data * gamma[:,c:c+1]) > > I'm not completely sure about the shape of gamma, why you transposed it. To be honest, this is not ideal code. This is actually the first code I wrote in numpy (or even python), as an exercise to look at python :) > > According to a numpy ticket using dot should be much faster than sum. Yes, it is, because it use ATLAS (if available). This can be much faster than sum. > I don't know about EM applications, but from a maximum likelihood view > point, it might be possible to find the distribution class for the > mixture that corresponds to different kinds of distance measures or > that is appropriate for discrete data. EM (for MLE) is applicable to many models within the exponential hidden family (that is when the complete data follow a density in the exponential family). So it is definitely much more general than GMM, and can be applied to discrete data (for example mixture of multinomials). In my own field, speech processing, the EM algorithm is applied to both continuous data (GMM and HMM with GMM emission densities for acoustic modelling) and discrete date (for language modelling). I am still not sure to understand how distance may come in that context, though. cheers, David From nwagner at iam.uni-stuttgart.de Mon Mar 16 13:37:55 2009 From: nwagner at iam.uni-stuttgart.de (Nils Wagner) Date: Mon, 16 Mar 2009 18:37:55 +0100 Subject: [SciPy-user] Eigenvalues of the Floquet matrix In-Reply-To: References: Message-ID: On Mon, 16 Mar 2009 11:00:26 -0400 Rob Clewley wrote: >>> >>> It would be really nice to have a setup.py ;-) > > I know, I'm trying to pay somebody this spring to write >me one! > >> /home/nwagner/local/lib64/python2.6/site-packages/Pyrex/Compiler/Errors.py:17: >> DeprecationWarning: BaseException.message has been >> deprecated as of Python 2.6 > > OK, for a start PyDSTool has not been tested at all on >Python 2.6. I > just haven't been able to keep up! If there's any chance >you can also > have 2.4.3 or 2.5 on your system then that would be the >easiest thing > to do. I know that people have installed successfully >recently from > SVN on Python 2.5. Maybe I need to post a clear >disclaimer on my > webpage since Python 2.6 has become so common. 
> >> Error occurred in generating Radau system >> (while moving library extension modules to CWD) >> cannot concatenate 'str' >>and >> 'int' objects >> Traceback (most recent call last): >> File "seyranian-PyDSTool.py", line 43, in >> osc = Generator.Radau_ODEsystem(DSargs) >> File >> "/home/nwagner/svn/PyDSTool/PyDSTool/Generator/Radau_ODEsystem.py", >> line 545, in __init__ >> self.compileLib() >> File >> "/home/nwagner/svn/PyDSTool/PyDSTool/Generator/Radau_ODEsystem.py", >> line 991, in compileLib >> raise RuntimeError >> RuntimeError >> >> >> Any idea ? > > I don't have a clue right now. > > The list of test failures you sent me is pretty >catastrophic, which > makes me suspect the problem is something basic in >python that has > changed between 2.5 and 2.6 that is at the heart of >every one of the > tests. > > And/or it's something to do with Pyrex getting involved >for the legacy > ODE solvers. I don't know why the pyrex compiler is >being invoked, > because it should be gcc. Can you look at the wiki >GettingStarted page > (Section 1.4.5.) on getting C integrators set up on >Linux and tell me > if you've set up SWIG and set the CC environment >variable? > > -Rob Hi Rob, I added the environmental variable to my .bashrc echo $CC /usr/bin/gcc The problem persists python -i seyranian-PyDSTool.py /home/nwagner/local/lib64/python2.6/site-packages/Pyrex/Compiler/Errors.py:17: DeprecationWarning: BaseException.message has been deprecated as of Python 2.6 self.message = message [ 0.05714896 0.20918955 0.58088371] [ 3.60595823e-01+0.j -2.29878191e-17+0.j 6.16263995e-02+0.j] [ 0.06162640+0.j 0.36059582+0.j] -3.41421356237 Error occurred in generating Radau system (while moving library extension modules to CWD) cannot concatenate 'str' and 'int' objects Traceback (most recent call last): File "seyranian-PyDSTool.py", line 43, in osc = Generator.Radau_ODEsystem(DSargs) File "/home/nwagner/svn/PyDSTool/PyDSTool/Generator/Radau_ODEsystem.py", line 545, in __init__ self.compileLib() File "/home/nwagner/svn/PyDSTool/PyDSTool/Generator/Radau_ODEsystem.py", line 991, in compileLib raise RuntimeError RuntimeError Nils From josef.pktd at gmail.com Mon Mar 16 14:05:21 2009 From: josef.pktd at gmail.com (josef.pktd at gmail.com) Date: Mon, 16 Mar 2009 14:05:21 -0400 Subject: [SciPy-user] PyEM: custom (non-Euclidean) distance function? In-Reply-To: <5b8d13220903161033k2895c3d8ib20b7182cc673c2e@mail.gmail.com> References: <49BE6AB4.6020201@relativita.com> <49BE78E4.108@relativita.com> <1cd32cbb0903160928k24adb3a0k50df5b5959ddda67@mail.gmail.com> <49BE8266.3050600@relativita.com> <1cd32cbb0903161010p2c6179b8xda83e72aa6ef06eb@mail.gmail.com> <5b8d13220903161033k2895c3d8ib20b7182cc673c2e@mail.gmail.com> Message-ID: <1cd32cbb0903161105w8715071pf5834f2c16ffde3a@mail.gmail.com> > >> I don't know about EM applications, but from a maximum likelihood view >> point, it might be possible to find the distribution class for the >> mixture that corresponds to different kinds of distance measures or >> that is appropriate for discrete data. > > EM (for MLE) is applicable to many models within the exponential > hidden family (that is when the complete data follow a density in the > exponential family). So it is definitely much more general than GMM, > and can be applied to discrete data (for example mixture of > multinomials). 
In my own field, speech processing, the EM algorithm is > applied to both continuous data (GMM and HMM with GMM emission > densities for acoustic modelling) and discrete date (for language > modelling). > > I am still not sure to understand how distance may come in that context, though. > I only have a vague intuition, since I never worked much with non-probabilistic models in this area. But I'm thinking of the similarity of an iso-distance contour with an iso-likelihood contour and that classifying a point as belonging to one of the mixture distributions depends on the likelihood or posterior ratio. A simple case, if one of the variables is in logs, then the joint distribution would be normal-lognormal and the corresponding distance measure would need log-scale on one axis. The likelihood ratio or posterior probability, to which mixture distribution an observed point belongs, would use different scales and not a Euclidean distance in the random variables unless they are correctly transformed. A similar intuition should work for other non-linear transformations. But I don't know if the commonly used distance measures would make sense in this interpretation. Josef From rob.clewley at gmail.com Mon Mar 16 14:26:18 2009 From: rob.clewley at gmail.com (Rob Clewley) Date: Mon, 16 Mar 2009 14:26:18 -0400 Subject: [SciPy-user] Eigenvalues of the Floquet matrix In-Reply-To: References: Message-ID: > Here is the new output of run_all_tests.py > > Basic PyDSTool functions: appears to work on your system > Map related modules: appears to work on your system > VODE related modules: appears to work on your system > Symbolic differentiation module: appears to work on your system Good so far. > PyCont: appears to be broken on your system I'd like to see the specific errors raised by any of this group. > Dopri ODE systems: appears to be broken on your system > Radau ODE systems: appears to be broken on your system > PyCont interface to AUTO: appears to be broken on your system See below. > Parameter estimation module: appears to be broken on your system > Parameter estimation module with external compilers: appears to work on your system Weird that the parameter estimation doesn't work with vode but it claims that it does with the external compilers. I'd like to see a specific error raised by the first of these groups. >> >>> Error occurred in generating Radau system >>> (while moving library extension modules to CWD) >>> cannot concatenate 'str' >>>and >>> 'int' objects >>> Traceback (most recent call last): >>> ? File "seyranian-PyDSTool.py", line 43, in >>> ? ? osc = Generator.Radau_ODEsystem(DSargs) >>> ? File >>> "/home/nwagner/svn/PyDSTool/PyDSTool/Generator/Radau_ODEsystem.py", >>> line 545, in __init__ >>> ? ? self.compileLib() >>> ? File >>> "/home/nwagner/svn/PyDSTool/PyDSTool/Generator/Radau_ODEsystem.py", >>> line 991, in compileLib >>> ? ? raise RuntimeError >>> RuntimeError >>> > > The problem persists > > Error occurred in generating Radau system Yes, let's fix Dopri first on simple examples before we get in to Radau. In my experience, Dopri is easier to get working than Radau, as Dopri requires gcc while Radau requires gfortran. Typically, systems have gcc set up OK but often need more work for gfortran. So can you check that gfortran --version works? Also, what are your gcc and gfortran versions? 
Sometimes there have been problems with gcc being too new (also noted on the GettingStarted page somewhere) > (while moving library extension modules to CWD) > cannot concatenate 'str' and > 'int' objects > Traceback (most recent call last): > ? File "seyranian-PyDSTool.py", line 43, in > ? ? osc = Generator.Radau_ODEsystem(DSargs) > ? File > "/home/nwagner/svn/PyDSTool/PyDSTool/Generator/Radau_ODEsystem.py", > line 545, in __init__ > ? ? self.compileLib() > ? File > "/home/nwagner/svn/PyDSTool/PyDSTool/Generator/Radau_ODEsystem.py", > line 991, in compileLib > ? ? raise RuntimeError > RuntimeError > Can you get a traceback on the TypeError exception? Maybe by commenting out the raising of that RuntimeError that I raise? Do any of the Dopri tests run? e.g. interp_dopri_test.py? The run_all_tests report says some of them don't run, but I'd like to be sure if any of them do. In particular, I'd like to see exactly what part is failing. Is it the compilation of the DLL or something that happens while the DLL is being used? > python -i pest_test2.py [SNIP] File "/home/nwagner/svn/PyDSTool/PyDSTool/MProject.py", line 1209, in Rhs return self.model.Rhs(t, xdict, pdict) [SNIP] > line 124, in __init__ > datalist.append(coorddict[c][0]) > IndexError: invalid index to scalar variable. Looks like the RHS function isn't working. Could you run this script in pdb or a debugger, or at least put in some print statements here to show me what t, xdict and pdict are when Rhs is called. And then, inside Points, what coorddict and c are? This might be another Python 2.6 issue because there is some funky magic used to create some class methods dynamically. From nwagner at iam.uni-stuttgart.de Mon Mar 16 14:30:01 2009 From: nwagner at iam.uni-stuttgart.de (Nils Wagner) Date: Mon, 16 Mar 2009 19:30:01 +0100 Subject: [SciPy-user] Eigenvalues of the Floquet matrix In-Reply-To: References: Message-ID: On Mon, 16 Mar 2009 14:26:18 -0400 Rob Clewley wrote: >> Here is the new output of run_all_tests.py >> >> Basic PyDSTool functions: appears to work on your system >> Map related modules: appears to work on your system >> VODE related modules: appears to work on your system >> Symbolic differentiation module: appears to work on your >>system > > Good so far. > >> PyCont: appears to be broken on your system > > I'd like to see the specific errors raised by any of >this group. > >> Dopri ODE systems: appears to be broken on your system >> Radau ODE systems: appears to be broken on your system >> PyCont interface to AUTO: appears to be broken on your >>system > > See below. > >> Parameter estimation module: appears to be broken on >>your system >> Parameter estimation module with external compilers: >>appears to work on your system > > Weird that the parameter estimation doesn't work with >vode but it > claims that it does with the external compilers. > I'd like to see a specific error raised by the first of >these groups. > >>> >>>> Error occurred in generating Radau system >>>> (while moving library extension modules to CWD) >>>> cannot concatenate 'str' >>>>and >>>> 'int' objects >>>> Traceback (most recent call last): >>>> ? File "seyranian-PyDSTool.py", line 43, in >>>> ? ? osc = Generator.Radau_ODEsystem(DSargs) >>>> ? File >>>> "/home/nwagner/svn/PyDSTool/PyDSTool/Generator/Radau_ODEsystem.py", >>>> line 545, in __init__ >>>> ? ? self.compileLib() >>>> ? File >>>> "/home/nwagner/svn/PyDSTool/PyDSTool/Generator/Radau_ODEsystem.py", >>>> line 991, in compileLib >>>> ? ? 
raise RuntimeError >>>> RuntimeError >>>> >> >> The problem persists >> >> Error occurred in generating Radau system > > Yes, let's fix Dopri first on simple examples before we >get in to > Radau. In my experience, Dopri is easier to get working >than Radau, as > Dopri requires gcc while Radau requires gfortran. >Typically, systems > have gcc set up OK but often need more work for >gfortran. So can you > check that gfortran --version works? Also, what are your >gcc and > gfortran versions? Sometimes there have been problems >with gcc being > too new (also noted on the GettingStarted page >somewhere) > >> (while moving library extension modules to CWD) >> cannot concatenate 'str' >>and >> 'int' objects >> Traceback (most recent call last): >> ? File "seyranian-PyDSTool.py", line 43, in >> ? ? osc = Generator.Radau_ODEsystem(DSargs) >> ? File >> "/home/nwagner/svn/PyDSTool/PyDSTool/Generator/Radau_ODEsystem.py", >> line 545, in __init__ >> ? ? self.compileLib() >> ? File >> "/home/nwagner/svn/PyDSTool/PyDSTool/Generator/Radau_ODEsystem.py", >> line 991, in compileLib >> ? ? raise RuntimeError >> RuntimeError >> > > Can you get a traceback on the TypeError exception? >Maybe by > commenting out the raising of that RuntimeError that I >raise? > > Do any of the Dopri tests run? e.g. >interp_dopri_test.py? The > run_all_tests report says some of them don't run, but >I'd like to be > sure if any of them do. In particular, I'd like to see >exactly what > part is failing. Is it the compilation of the DLL or >something that > happens while the DLL is being used? > >> python -i pest_test2.py > > [SNIP] > >File "/home/nwagner/svn/PyDSTool/PyDSTool/MProject.py", > line 1209, in Rhs > return self.model.Rhs(t, xdict, pdict) > > [SNIP] > >> line 124, in __init__ >> datalist.append(coorddict[c][0]) >> IndexError: invalid index to scalar variable. > > Looks like the RHS function isn't working. Could you run >this script > in pdb or a debugger, or at least put in some print >statements here to > show me what t, xdict and pdict are when Rhs is called. >And then, > inside Points, what coorddict and c are? This might be >another Python > 2.6 issue because there is some funky magic used to >create some class > methods dynamically. > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user Here is the output of interp_dopri_test.py gdb python GNU gdb (GDB; openSUSE 11.1) 6.8.50.20081120-cvs Copyright (C) 2008 Free Software Foundation, Inc. License GPLv3+: GNU GPL version 3 or later This is free software: you are free to change and redistribute it. There is NO WARRANTY, to the extent permitted by law. Type "show copying" and "show warranty" for details. This GDB was configured as "x86_64-suse-linux". For bug reporting instructions, please see: ... 
(no debugging symbols found) (gdb) run interp_dopri_test.py Starting program: /usr/bin/python interp_dopri_test.py (no debugging symbols found) (no debugging symbols found) (no debugging symbols found) (no debugging symbols found) [Thread debugging using libthread_db enabled] (no debugging symbols found) (no debugging symbols found) (no debugging symbols found) (no debugging symbols found) (no debugging symbols found) /home/nwagner/local/lib64/python2.6/site-packages/Pyrex/Compiler/Errors.py:17: DeprecationWarning: BaseException.message has been deprecated as of Python 2.6 self.message = message [ 0.05714896 0.20918955 0.58088371] [ 3.60595823e-01+0.j -2.29878191e-17+0.j 6.16263995e-02+0.j] [ 0.06162640+0.j 0.36059582+0.j] -------- Test: InterpolateTable names for variables: xnames = ['x1', 'x2'] timeData = [ 0. 11. 20. 30.] x1data = [ 10.2 -1.4 4.1 6. ] x2data = [ 0.1 0.01 0.4 -0.8 ] interptable.variables['x1'](0.4) = 9.77818181818 itabletraj(11, ['x1']) = x1: -1.4 (preferred syntax) -------- Test: ODE system Information for Contents algparams: init_step: 0.02 checklevel: 2 fnspecs: auxval1: ([t, x], if(x>100,2*cos(10*t/pi),-0.5)) auxval2: ([x], x/2) ics: w: 30.0 y: 80 inputs: Variable x1(t) name: ODEtest pars: a: 2 k: 1 tdata: [0, 10] tdomain: [0, 20] varspecs: w: 100 -(1+a)*w*heav(0.2-sin(t+1)) -2*x1 -k*auxval1(t, w)*auxval2(y) y: 50 - (w/100)*y xdomain: w: [0, 340] y: [0, 150] Integrating... Program received signal SIGSEGV, Segmentation fault. 0x00007ffff7b14cab in PyEval_EvalFrameEx () from /usr/lib64/libpython2.6.so.1.0 (gdb) bt #0 0x00007ffff7b14cab in PyEval_EvalFrameEx () from /usr/lib64/libpython2.6.so.1.0 #1 0x00007ffff7b1b6c0 in PyEval_EvalCodeEx () from /usr/lib64/libpython2.6.so.1.0 #2 0x00007ffff7b14f24 in PyEval_EvalFrameEx () from /usr/lib64/libpython2.6.so.1.0 #3 0x00007ffff7b1b6c0 in PyEval_EvalCodeEx () from /usr/lib64/libpython2.6.so.1.0 #4 0x00007ffff7b14f24 in PyEval_EvalFrameEx () from /usr/lib64/libpython2.6.so.1.0 #5 0x00007ffff7b1b6c0 in PyEval_EvalCodeEx () from /usr/lib64/libpython2.6.so.1.0 #6 0x00007ffff7b13822 in PyEval_EvalCode () from /usr/lib64/libpython2.6.so.1.0 #7 0x00007ffff7b34b13 in ?? () from /usr/lib64/libpython2.6.so.1.0 #8 0x00007ffff7b34bca in PyRun_FileExFlags () from /usr/lib64/libpython2.6.so.1.0 #9 0x00007ffff7b3550d in PyRun_SimpleFileExFlags () from /usr/lib64/libpython2.6.so.1.0 #10 0x00007ffff7b40dba in Py_Main () from /usr/lib64/libpython2.6.so.1.0 #11 0x00007ffff6e94586 in __libc_start_main () from /lib64/libc.so.6 #12 0x00000000004006e9 in _start () Nils From josef.pktd at gmail.com Mon Mar 16 14:31:03 2009 From: josef.pktd at gmail.com (josef.pktd at gmail.com) Date: Mon, 16 Mar 2009 14:31:03 -0400 Subject: [SciPy-user] PyEM: custom (non-Euclidean) distance function? 
In-Reply-To: <1cd32cbb0903161105w8715071pf5834f2c16ffde3a@mail.gmail.com> References: <49BE6AB4.6020201@relativita.com> <49BE78E4.108@relativita.com> <1cd32cbb0903160928k24adb3a0k50df5b5959ddda67@mail.gmail.com> <49BE8266.3050600@relativita.com> <1cd32cbb0903161010p2c6179b8xda83e72aa6ef06eb@mail.gmail.com> <5b8d13220903161033k2895c3d8ib20b7182cc673c2e@mail.gmail.com> <1cd32cbb0903161105w8715071pf5834f2c16ffde3a@mail.gmail.com> Message-ID: <1cd32cbb0903161131m3b86f029l818092a9f5985730@mail.gmail.com> On Mon, Mar 16, 2009 at 2:05 PM, wrote: >> >>> I don't know about EM applications, but from a maximum likelihood view >>> point, it might be possible to find the distribution class for the >>> mixture that corresponds to different kinds of distance measures or >>> that is appropriate for discrete data. >> >> EM (for MLE) is applicable to many models within the exponential >> hidden family (that is when the complete data follow a density in the >> exponential family). So it is definitely much more general than GMM, >> and can be applied to discrete data (for example mixture of >> multinomials). In my own field, speech processing, the EM algorithm is >> applied to both continuous data (GMM and HMM with GMM emission >> densities for acoustic modelling) and discrete date (for language >> modelling). >> >> I am still not sure to understand how distance may come in that context, though. >> > > I only have a vague intuition, since I never worked much with > non-probabilistic models in this area. But I'm thinking of the > similarity of an iso-distance contour with an iso-likelihood contour > and that classifying a point as belonging to one of the mixture > distributions depends on the likelihood or posterior ratio. > > A simple case, if one of the variables is in logs, then the joint > distribution would be normal-lognormal and the corresponding distance > measure would need log-scale on one axis. small correction to sloppy phrasing: "If X is a random variable with a normal distribution, then Y = exp(X) has a log-normal distribution; likewise, if Y is log-normally distributed, then log(Y) is normally distributed." (Wikipedia) > The likelihood ratio or > posterior probability, to which mixture distribution an observed point > belongs, would use different scales and not a Euclidean distance in > the random variables unless they are correctly transformed. A similar > intuition should work for other non-linear transformations. But I > don't know if the commonly used distance measures would make sense in > this interpretation. > > Josef > From rob.clewley at gmail.com Mon Mar 16 14:48:25 2009 From: rob.clewley at gmail.com (Rob Clewley) Date: Mon, 16 Mar 2009 14:48:25 -0400 Subject: [SciPy-user] Eigenvalues of the Floquet matrix In-Reply-To: References: Message-ID: Nils, > /home/nwagner/local/lib64/python2.6/site-packages/Pyrex/Compiler/Errors.py:17: I'm not up to speed with everything in scipy-land but my understanding was that numpy and scipy aren't even fully compatible with python 2.6 yet. I thought that was going to happen with numpy 1.3, etc.? I can't install python 2.6 with these libraries on the only computer I have available to test this right now. There are too many possible issues that I can't control for if I don't have a working python 2.6 installation to test with. I don't know if I can be of further help with this unless you switch to python 2.5, for which all the tests are known to pass on linux boxes set up according to the wiki instructions. Sorry! 
Rob From nwagner at iam.uni-stuttgart.de Mon Mar 16 15:17:32 2009 From: nwagner at iam.uni-stuttgart.de (Nils Wagner) Date: Mon, 16 Mar 2009 20:17:32 +0100 Subject: [SciPy-user] PyDSTool and python2.6 In-Reply-To: References: Message-ID: On Mon, 16 Mar 2009 14:48:25 -0400 Rob Clewley wrote: > Nils, > >> /home/nwagner/local/lib64/python2.6/site-packages/Pyrex/Compiler/Errors.py:17: > > I'm not up to speed with everything in scipy-land but my >understanding > was that numpy and scipy aren't even fully compatible >with python 2.6 > yet. I thought that was going to happen with numpy 1.3, >etc.? I can't > install python 2.6 with these libraries on the only >computer I have > available to test this right now. There are too many >possible issues > that I can't control for if I don't have a working >python 2.6 > installation to test with. I don't know if I can be of >further help > with this unless you switch to python 2.5, for which all >the tests are > known to pass on linux boxes set up according to the >wiki > instructions. > > Sorry! > Rob Hi all, Is there someone on the list using PyDSTool with python2.6 ? IMHO, numpy and scipy work fine with python2.6 >>> import scipy >>> numpy.__version__ '1.3.0.dev6685' >>> scipy.__version__ '0.8.0.dev5620' gfortran --version GNU Fortran (SUSE Linux) 4.3.2 [gcc-4_3-branch revision 141291] Copyright (C) 2008 Free Software Foundation, Inc. GNU Fortran comes with NO WARRANTY, to the extent permitted by law. You may redistribute copies of GNU Fortran under the terms of the GNU General Public License. For more information about these matters, see the file named COPYING gcc --version gcc (SUSE Linux) 4.3.2 [gcc-4_3-branch revision 141291] Copyright (C) 2008 Free Software Foundation, Inc. This is free software; see the source for copying conditions. There is NO warranty; not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. Nils From bastian.weber at gmx-topmail.de Mon Mar 16 15:24:05 2009 From: bastian.weber at gmx-topmail.de (Bastian Weber) Date: Mon, 16 Mar 2009 20:24:05 +0100 Subject: [SciPy-user] Has IPython been useful to you? Please let me know... In-Reply-To: References: Message-ID: <49BEA755.8010005@gmx-topmail.de> Hi Fernando, I'm a graduate student and I am currently writing my "Diplomarbeit", which is the German equivalent to a master thesis, in the field of control engineering. I have to state that even doing the humble things I do, would be far more time consuming, less fun, or even frustrating and sometimes almost impossible without a tool like IPython. I use IPython mainly in three different ways: *First I use "ipython -p sh" as substitute for the bash. The reason is that I do not need to do bash-scripting very often and hence I am likely to forget everything from one occasion to the next. Having the possibility to do filesystem related python scripting (which is much better to memorize to me) ad hoc on the one hand and having available all the interactive features of bash, that I need, on the other hand, is really great. *Second: I use IPython to play around with unknown (parts of) modules. The ease of accessing the docstrings and even the sourcecode of objects or simply the easy way to determine the length of an array, the type of an object increases both my productivity and my creativity. The most valuable feature in this context however is the autocompletion of all names in a namespace. 
I could not imagine a better way to explore a new module than to have a tour through it and using IPythons capabilities to approach exactly that piece of the module, in wich I am interested. *Third: I usually import IPython into all the scripts I write. This makes me able to open an embedded IPython shell anywhere in my code. This in fact is my preferred way to find (the reasons for) bugs or to make some consistency test. A very nice side effect of importing IPython is, the improvement of error output. It helps me a lot to have syntax highlighted tracebacks and to see the argument values which led to an exception. I could have made this text ten times as long and I would still be able to add reasons why IPython helps me and saves me time and why it is an integral part of my python experience. So thank you for being the driving force behind that golden piece of software. Bastian. From karl.young at ucsf.edu Mon Mar 16 15:35:38 2009 From: karl.young at ucsf.edu (Young, Karl) Date: Mon, 16 Mar 2009 12:35:38 -0700 Subject: [SciPy-user] python windows set up References: <49BE6AB4.6020201@relativita.com> <49BE78E4.108@relativita.com> <1cd32cbb0903160928k24adb3a0k50df5b5959ddda67@mail.gmail.com> <49BE8266.3050600@relativita.com> <5b8d13220903160953g4a8fa80fi253278c0d2ce14ce@mail.gmail.com> Message-ID: <9D202D4E86A4BF47BA6943ABDF21BE78058FAC08@EXVS06.net.ucsf.edu> Sorry to spam the list with a silly question but I'm trying to convert a windows using colleague to python so that interacting with her doesn't mean my having to convert her matlab code. She's an excellent scientist but has a low tolerance for complicated software installations and it seems like I'm currently losing the battle re. her installing scipy/numpy. I'm a linux type (though I understand the importance of being cross platform I rarely leave my linux womb, e.g. I currently do most work in ubuntu under parallels on a mac so don't even look much at OS X issues) so I'm not much use re. helping her with her installation issues. I know this type of question has been asked endlessly but I thought I'd try to get the latest suggestions on whether there was anywhere I could point her. She's tried to install the Enthought distribution (at the moment I forget what the problem she had with that was); is that still the best suggestion ? Do people recommend PythonXY in this type of situation (seems like a bit of baggage if you only need numpy/scipy) ? Anyway, sorry again for the spam and thanks for any suggestions. Karl Young Center for Imaging of Neurodegenerative Disease, UCSF VA Medical Center, MRS Unit (114M) Phone: (415) 221-4810 x3114 FAX: (415) 668-2864 Email: karl young at ucsf edu From haase at msg.ucsf.edu Mon Mar 16 15:57:36 2009 From: haase at msg.ucsf.edu (Sebastian Haase) Date: Mon, 16 Mar 2009 20:57:36 +0100 Subject: [SciPy-user] python windows set up In-Reply-To: <9D202D4E86A4BF47BA6943ABDF21BE78058FAC08@EXVS06.net.ucsf.edu> References: <49BE6AB4.6020201@relativita.com> <49BE78E4.108@relativita.com> <1cd32cbb0903160928k24adb3a0k50df5b5959ddda67@mail.gmail.com> <49BE8266.3050600@relativita.com> <5b8d13220903160953g4a8fa80fi253278c0d2ce14ce@mail.gmail.com> <9D202D4E86A4BF47BA6943ABDF21BE78058FAC08@EXVS06.net.ucsf.edu> Message-ID: Hi Karl, sorry for advertising my own stuff.... but maybe you wouldn't mind trying this on windows: if you get the file: Priithon_25_win#20081219.zip from http://code.google.com/p/priithon/downloads/list you would actually not even have to download/install Python first. 
Just extract this above zip file into the C directory, and it creates a new directory: C:/Priithon_25_win There start: priithon_script.bat and see what happens (or priithon_script_oldDiv.bat, if you prefer 3/2 to be 1 rather than 1.5) It should start a graphical python shell based on wxPython - PyShell - so no IDLE, or IPython, but I have some tab-completion. This is my packages, that I have put together to target image processing of microscopy data. In case you get interested, you could get more help on the Priithon mailing-list (no traffic yet ;-) Cheers, Sebastian Haase On Mon, Mar 16, 2009 at 8:35 PM, Young, Karl wrote: > > Sorry to spam the list with a silly question but I'm trying to convert a windows using colleague to python so that interacting with her > doesn't mean my having to convert her matlab code. She's an excellent scientist but has a low tolerance for complicated software installations and it seems like I'm currently losing the battle re. her installing scipy/numpy. I'm a linux type (though I understand the importance of being cross platform I rarely leave my linux womb, e.g. I currently do most work in ubuntu under parallels on a mac so don't even look much at OS X issues) so I'm not much use re. helping her with her installation issues. I know this type of question has been asked endlessly but I thought I'd try to get the latest suggestions on whether there was anywhere I could point her. She's tried to install the Enthought distribution (at the moment I forget what the problem she had with that was); is that still the best suggestion ? Do people recommend PythonXY in this type of situation (seems like a bit of baggage if you only need numpy/scipy) ? Anyway, sorry again for the spam and thanks for any suggestions. > > > Karl Young > Center for Imaging of Neurodegenerative Disease, UCSF > VA Medical Center, MRS Unit (114M) > Phone: ?(415) 221-4810 x3114 > FAX: ? ?(415) 668-2864 > Email: ?karl young at ucsf edu > > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user > From josef.pktd at gmail.com Mon Mar 16 16:17:38 2009 From: josef.pktd at gmail.com (josef.pktd at gmail.com) Date: Mon, 16 Mar 2009 16:17:38 -0400 Subject: [SciPy-user] python windows set up In-Reply-To: References: <49BE6AB4.6020201@relativita.com> <49BE78E4.108@relativita.com> <1cd32cbb0903160928k24adb3a0k50df5b5959ddda67@mail.gmail.com> <49BE8266.3050600@relativita.com> <5b8d13220903160953g4a8fa80fi253278c0d2ce14ce@mail.gmail.com> <9D202D4E86A4BF47BA6943ABDF21BE78058FAC08@EXVS06.net.ucsf.edu> Message-ID: <1cd32cbb0903161317g2cd62e1dp5ce33ab63e1e0193@mail.gmail.com> On Mon, Mar 16, 2009 at 3:57 PM, Sebastian Haase wrote: > Hi Karl, > sorry for advertising my own stuff.... > but maybe you wouldn't mind trying this on windows: > if you get the file: > Priithon_25_win#20081219.zip > from http://code.google.com/p/priithon/downloads/list > > you would actually not even have to download/install Python first. > Just extract this above zip file into the C directory, and it creates > a new directory: > C:/Priithon_25_win > There start: priithon_script.bat and see what happens > (or priithon_script_oldDiv.bat, if you prefer 3/2 to be 1 rather than 1.5) > It should start a graphical python shell based on wxPython - PyShell - > so no IDLE, or IPython, but I have some tab-completion. > > This is my packages, that I have put together to target image > processing of microscopy data. 
> In case you get interested, you could get more help on the Priithon > mailing-list (no traffic yet ;-) > > Cheers, > Sebastian Haase > > > > On Mon, Mar 16, 2009 at 8:35 PM, Young, Karl wrote: >> >> Sorry to spam the list with a silly question but I'm trying to convert a windows using colleague to python so that interacting with her >> doesn't mean my having to convert her matlab code. She's an excellent scientist but has a low tolerance for complicated software installations and it seems like I'm currently losing the battle re. her installing scipy/numpy. I'm a linux type (though I understand the importance of being cross platform I rarely leave my linux womb, e.g. I currently do most work in ubuntu under parallels on a mac so don't even look much at OS X issues) so I'm not much use re. helping her with her installation issues. I know this type of question has been asked endlessly but I thought I'd try to get the latest suggestions on whether there was anywhere I could point her. She's tried to install the Enthought distribution (at the moment I forget what the problem she had with that was); is that still the best suggestion ? Do people recommend PythonXY in this type of situation (seems like a bit of baggage if you only need numpy/scipy) ? Anyway, sorry again for the spam and thanks for any suggestions. >> >> >> Karl Young >> Center for Imaging of Neurodegenerative Disease, UCSF >> VA Medical Center, MRS Unit (114M) >> Phone: ?(415) 221-4810 x3114 >> FAX: ? ?(415) 668-2864 >> Email: ?karl young at ucsf edu >> If she doesn't need a lot of extras, the windows installers for python, for numpy and scipy work very well, at least on windowsXP, I don't have any experience with Vista. Click through the installers, and maybe there is the python directory to add to the path of the windows environment. Installing pure python packages and binary distributions of packages is also simple, with setup.py or easyinstall. The only problem is, if one needs a full development environment with compiler, Lapack, Atlas, and MingW for example. Then a full distribution might be an advantage. Josef From cohen at lpta.in2p3.fr Mon Mar 16 16:28:30 2009 From: cohen at lpta.in2p3.fr (Cohen-Tanugi Johann) Date: Mon, 16 Mar 2009 21:28:30 +0100 Subject: [SciPy-user] [ANN][optimization] OpenOpt 0.23 In-Reply-To: <49BD506F.5000900@ukr.net> References: <49BD506F.5000900@ukr.net> Message-ID: <49BEB66E.2040501@lpta.in2p3.fr> hi Dmitrey, I am confused : is the scikits openopt deprecated? I see that the head is version 2111 with info.py specifying 0.19.... but all this is not in the source browser on your web page.... I am sorry if I did not follow the new setup. best, Johann Dmitrey wrote: > Hi all, > I'm glad to inform you about new release (0.23) of OpenOpt, a free > universal numerical optimization framework (license: BSD). > > Our new homepage: > http://openopt.org > > Introduction to the framework: > http://openopt.org/Foreword > > All release details here: > http://openopt.org/Changelog > or > http://forum.openopt.org/viewtopic.php?id=58 > > Special thanks to Stepan Hlushak for writing GLP (global) solver "de" > ("differential evolution"). 
> > Regards, > Dmitrey > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user > From gael.varoquaux at normalesup.org Mon Mar 16 16:33:52 2009 From: gael.varoquaux at normalesup.org (Gael Varoquaux) Date: Mon, 16 Mar 2009 21:33:52 +0100 Subject: [SciPy-user] python windows set up In-Reply-To: <9D202D4E86A4BF47BA6943ABDF21BE78058FAC08@EXVS06.net.ucsf.edu> References: <49BE6AB4.6020201@relativita.com> <49BE78E4.108@relativita.com> <1cd32cbb0903160928k24adb3a0k50df5b5959ddda67@mail.gmail.com> <49BE8266.3050600@relativita.com> <5b8d13220903160953g4a8fa80fi253278c0d2ce14ce@mail.gmail.com> <9D202D4E86A4BF47BA6943ABDF21BE78058FAC08@EXVS06.net.ucsf.edu> Message-ID: <20090316203352.GB22689@phare.normalesup.org> As a general rue of thumbs, go for pythonxy or EPD. Don't install stuff yourself, unless you like to learn things. There are soo many steps you can get wrong. Beside it takes too much time to go around and download all the packages :P. I bet she is having difficulties with Vista. Vista sometimes gives weird errors, and I don't know how to fix them. Some people are using all this jazz successfully under Vista, so it works for some ;|. Ga?l On Mon, Mar 16, 2009 at 12:35:38PM -0700, Young, Karl wrote: > Sorry to spam the list with a silly question but I'm trying to convert a windows using colleague to python so that interacting with her > doesn't mean my having to convert her matlab code. She's an excellent scientist but has a low tolerance for complicated software installations and it seems like I'm currently losing the battle re. her installing scipy/numpy. I'm a linux type (though I understand the importance of being cross platform I rarely leave my linux womb, e.g. I currently do most work in ubuntu under parallels on a mac so don't even look much at OS X issues) so I'm not much use re. helping her with her installation issues. I know this type of question has been asked endlessly but I thought I'd try to get the latest suggestions on whether there was anywhere I could point her. She's tried to install the Enthought distribution (at the moment I forget what the problem she had with that was); is that still the best suggestion ? Do people recommend PythonXY in this type of situation (seems like a bit of baggage if you only need numpy/scipy) ? Anyway, sorry again for the spam and thanks for any suggestions. From stefan at sun.ac.za Mon Mar 16 16:54:58 2009 From: stefan at sun.ac.za (=?ISO-8859-1?Q?St=E9fan_van_der_Walt?=) Date: Mon, 16 Mar 2009 22:54:58 +0200 Subject: [SciPy-user] Chirp Z transform In-Reply-To: <710F2847B0018641891D9A216027636029C484@ex3.envision.co.il> References: <710F2847B0018641891D9A216027636029C484@ex3.envision.co.il> Message-ID: <9457e7c80903161354i2cb8fc28y1ab260e8789b03f0@mail.gmail.com> Hi Nadav 2009/3/15 Nadav Horesh : > Attached are a chirp-Z transform module composed by Paul Kienzle (mainly) and I. We tried hard to conform to the scipy's coding style. Thank you both for the contribution! I've reviewed the code and think it should be included in SciPy. Would someone else like to have a look before I commit it? Where would the best place for this be? scipy.fftpack? 
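For readers who do not have the attached czt.py, here is a rough NumPy sketch of what a chirp z-transform computes, using Bluestein's algorithm. It is only an illustration, not the module Nadav and Paul submitted; the function name, argument names and defaults are invented for the example.

import numpy as np

def czt(x, m=None, w=None, a=1.0):
    # Chirp z-transform of x at the m points z_k = a * w**(-k) (Bluestein's algorithm).
    x = np.asarray(x, dtype=complex)
    n = len(x)
    if m is None:
        m = n
    if w is None:
        w = np.exp(-2j * np.pi / m)           # default points give the ordinary DFT
    k = np.arange(max(m, n))
    wk2 = w ** (k ** 2 / 2.0)                 # the "chirp" W**(k^2/2)
    nfft = int(2 ** np.ceil(np.log2(m + n - 1)))
    # premultiplied input and chirp filter, zero-padded for a linear convolution
    y = np.zeros(nfft, dtype=complex)
    y[:n] = x * a ** (-np.arange(n)) * wk2[:n]
    v = np.zeros(nfft, dtype=complex)
    v[:m] = 1.0 / wk2[:m]
    v[nfft - n + 1:] = 1.0 / wk2[n - 1:0:-1]  # negative lags, wrapped around
    g = np.fft.ifft(np.fft.fft(y) * np.fft.fft(v))
    return wk2[:m] * g[:m]

# with the defaults the CZT reduces to the DFT, which gives a quick correctness check
x = np.random.rand(128)
print(np.allclose(czt(x), np.fft.fft(x)))     # expected: True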
Cheers St?fan From sturla at molden.no Mon Mar 16 17:04:21 2009 From: sturla at molden.no (Sturla Molden) Date: Mon, 16 Mar 2009 22:04:21 +0100 (CET) Subject: [SciPy-user] python windows set up In-Reply-To: <9D202D4E86A4BF47BA6943ABDF21BE78058FAC08@EXVS06.net.ucsf.edu> References: <49BE6AB4.6020201@relativita.com> <49BE78E4.108@relativita.com> <1cd32cbb0903160928k24adb3a0k50df5b5959ddda67@mail.gmail.com> <49BE8266.3050600@relativita.com> <5b8d13220903160953g4a8fa80fi253278c0d2ce14ce@mail.gmail.com> <9D202D4E86A4BF47BA6943ABDF21BE78058FAC08@EXVS06.net.ucsf.edu> Message-ID: <2f324786dac52e2cd5af7cfbb3674547.squirrel@webmail.uio.no> > Sorry to spam the list with a silly question but I'm trying to convert a > windows using colleague to python so that interacting with her > doesn't mean my having to convert her matlab code. She's an excellent > scientist but has a low tolerance for complicated software installations > and it seems like I'm currently losing the battle re. I am using Python on Windows Vista and XP for neuroscience. I don't use a particular distribution. I just use the Windows binary installers for the packages I need. This covers most of what I need: Python 2.5 NumPy SciPy matplotlib wxPython pywin32 Cython gfortran and gcc 4.4 (mingw binary from gfortran's wiki) In c:\Python25\Lib\distutils\distutils.cfg I have: [build] compiler=mingw32 [build_ext] compiler=mingw32 include_dirs=c:/gcc-gfortran/include library_dirs=c:/gcc-gfortran/lib Sturla Molden From amirnntp at gmail.com Mon Mar 16 17:09:39 2009 From: amirnntp at gmail.com (Amir) Date: Mon, 16 Mar 2009 14:09:39 -0700 (PDT) Subject: [SciPy-user] weave.inline with mpi Message-ID: <62c13bec-a088-41c8-9bbe-43564dea3870@w34g2000yqm.googlegroups.com> I am using mpi4py on a set of nodes, some which share a local file system and some that don't. They all share a slower, global filesystem. My script uses several pieces of weave.inline'd code. Do I need to worry about weave code generation running into problems in this setting? Is there a way to specify the location of the generated extension and have the root node generate it and force the other nodes to use that extension as well? The code generation is insignificant compared to the run time of the application. It would be ok if the code was generated on each node each time I ran the application. Is there a way to have weave generate a unique extension and ignore what is currently on the local or global filesystem, once per mpi process? From sturla at molden.no Mon Mar 16 17:10:56 2009 From: sturla at molden.no (Sturla Molden) Date: Mon, 16 Mar 2009 22:10:56 +0100 (CET) Subject: [SciPy-user] python windows set up In-Reply-To: <9D202D4E86A4BF47BA6943ABDF21BE78058FAC08@EXVS06.net.ucsf.edu> References: <49BE6AB4.6020201@relativita.com> <49BE78E4.108@relativita.com> <1cd32cbb0903160928k24adb3a0k50df5b5959ddda67@mail.gmail.com> <49BE8266.3050600@relativita.com> <5b8d13220903160953g4a8fa80fi253278c0d2ce14ce@mail.gmail.com> <9D202D4E86A4BF47BA6943ABDF21BE78058FAC08@EXVS06.net.ucsf.edu> Message-ID: <81db2b8aff865f3c93d1b12612f5a310.squirrel@webmail.uio.no> > Sorry to spam the list with a silly question but I'm trying to convert a > windows using colleague to python so that interacting with her > doesn't mean my having to convert her matlab code. Or you could give her Sage, or install all a Linux with all she needs in a vmware image yourself. vmware player is free. 
http://www.sagemath.org/ From nwagner at iam.uni-stuttgart.de Mon Mar 16 17:11:51 2009 From: nwagner at iam.uni-stuttgart.de (Nils Wagner) Date: Mon, 16 Mar 2009 22:11:51 +0100 Subject: [SciPy-user] [ANN][optimization] OpenOpt 0.23 In-Reply-To: <49BEB66E.2040501@lpta.in2p3.fr> References: <49BD506F.5000900@ukr.net> <49BEB66E.2040501@lpta.in2p3.fr> Message-ID: On Mon, 16 Mar 2009 21:28:30 +0100 Cohen-Tanugi Johann wrote: > hi Dmitrey, I am confused : is the scikits openopt >deprecated? I see > that the head is version 2111 with info.py specifying >0.19.... but all > this is not in the source browser on your web page.... svn co svn://openopt.org/OOPy OpenOpt Nils From sturla at molden.no Mon Mar 16 17:14:33 2009 From: sturla at molden.no (Sturla Molden) Date: Mon, 16 Mar 2009 22:14:33 +0100 (CET) Subject: [SciPy-user] weave.inline with mpi In-Reply-To: <62c13bec-a088-41c8-9bbe-43564dea3870@w34g2000yqm.googlegroups.com> References: <62c13bec-a088-41c8-9bbe-43564dea3870@w34g2000yqm.googlegroups.com> Message-ID: <526f877c1707cec8bb59080b0e24f630.squirrel@webmail.uio.no> > Do I need to worry about weave code generation running into problems > in this setting? Yes, you have multiple processes potentially writing code to the same file. It can crash or produce strange race conditions. At least you should run your program with just one process first, to make sure everything is compiled. From karl.young at ucsf.edu Mon Mar 16 17:26:53 2009 From: karl.young at ucsf.edu (Karl Young) Date: Mon, 16 Mar 2009 14:26:53 -0700 Subject: [SciPy-user] Re. python windows set up Message-ID: <49BEC41D.8080608@ucsf.edu> Sturla, Gael, Josef, and Sebastian, Many thanks for the sage advice re. windows installation; I will pass the accumulated wisdom to my colleague (and hopefully make some progress re. her conversion to python ! :-)). -- KY From dwf at cs.toronto.edu Mon Mar 16 19:39:13 2009 From: dwf at cs.toronto.edu (David Warde-Farley) Date: Mon, 16 Mar 2009 19:39:13 -0400 Subject: [SciPy-user] PyEM: custom (non-Euclidean) distance function? In-Reply-To: <49BE8266.3050600@relativita.com> References: <49BE6AB4.6020201@relativita.com> <49BE78E4.108@relativita.com> <1cd32cbb0903160928k24adb3a0k50df5b5959ddda67@mail.gmail.com> <49BE8266.3050600@relativita.com> Message-ID: On 16-Mar-09, at 12:46 PM, Emanuele Olivetti wrote: > You are right. I'm coming from K-means (MacKay's book) and > moving to GMM, that's why I had in mind custom distances. > I can try to transform my data to be meaningful under Euclidean > distance. Better yet, figure out what distribution might be the right one to use in the one-component case. What sort of data are you working with? MoGs aren't a magic bullet, and you might be better off putting some careful consideration into the form your data takes and choosing an appropriate base distribution. Pretty much any parametric distribution can be turned into a mixture distribution. The way a (finite) mixture works in the general case is that you have a discrete "hidden" random variable C that takes on values corresponding to one of the N clusters, and then N separate distributions from a parametric family (you can mix families too but that gets complicated and is rarely useful). Mixtures of Bernoulli, multinomial, Gamma, and Poisson distributions (for example) are all fairly common. EM will work for all of these cases, and many more; it relies on a fairly general set of assumptions, the details of which escape me at the moment. 
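To make the E-step / M-step structure concrete, here is a minimal EM for one of the cases mentioned above, a two-component mixture of Poisson distributions, in plain NumPy/SciPy. It is a sketch for illustration only (initialisation, convergence checks and numerical safeguards are omitted) and has nothing to do with PyEM's actual API.

import numpy as np
from scipy.stats import poisson

def poisson_mixture_em(x, n_components=2, n_iter=100, seed=0):
    # Minimal EM for a 1-d mixture of Poisson distributions.
    rng = np.random.RandomState(seed)
    x = np.asarray(x)
    # crude initialisation: uniform weights, rates spread around the data mean
    weights = np.ones(n_components) / n_components
    rates = x.mean() * (0.5 + rng.rand(n_components))
    for _ in range(n_iter):
        # E-step: posterior over the hidden component C (the "responsibilities")
        lik = np.array([w * poisson.pmf(x, lam)
                        for w, lam in zip(weights, rates)])   # shape (k, n)
        resp = lik / lik.sum(axis=0)
        # M-step: closed-form maximum-likelihood updates given the expected assignments
        nk = resp.sum(axis=1)
        weights = nk / len(x)
        rates = (resp * x).sum(axis=1) / nk
    return weights, rates

# toy data drawn from two Poisson components
rng = np.random.RandomState(1)
data = np.concatenate([rng.poisson(2.0, 300), rng.poisson(9.0, 700)])
print(poisson_mixture_em(data))   # weights/rates should land near (0.3, 0.7) and (2, 9)

Swapping in a different base family changes only the pmf/pdf used in the E-step and the closed-form update used in the M-step; the surrounding loop stays the same.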
The machinery of the EM algorithm is much the same for any choice of parametric family, the difference is how you compute (or estimate) the posterior over C, and how you then solve for the maximum likelihood estimate given the (expected) "complete" data. MacKay's book should have a fairly general treatment of this, but if it doesn't, I know one is presented in Bishop (2006), http://tinyurl.com/dmkxe5 , and in various online course notes, for example see Andrew Ng's course at Stanford http://www.stanford.edu/class/cs229/materials.html or Max Welling's site: http://www.ics.uci.edu/~welling/classnotes/classnotes.html I haven't ever used PyEM so I don't know how general David's code is, but it might be a helpful guide. David From ejf27 at cornell.edu Mon Mar 16 20:06:42 2009 From: ejf27 at cornell.edu (Eric Friedman) Date: Tue, 17 Mar 2009 00:06:42 +0000 (UTC) Subject: [SciPy-user] spare matrices Message-ID: Hi, I'm new to python (from matlab) and am having some difficulties. In my program I need to use large sparse linear systems where A is nxm matrix with n is a few million, m is around 100,000 and there are about 50 entries per row, so there are several million nonzero entries. I'm trying scipy sparse and it is really slow (at least 10-20 seconds to much longer), while matlab can do multiplications almost instantly. I've looked at various online documents and googled a bit, and from what I see people seem to find these numbers large in scipy. Am I missing something or do I need to go back to Matlab? thanks, Eric From cournape at gmail.com Mon Mar 16 20:24:32 2009 From: cournape at gmail.com (David Cournapeau) Date: Tue, 17 Mar 2009 09:24:32 +0900 Subject: [SciPy-user] PyEM: custom (non-Euclidean) distance function? In-Reply-To: References: <49BE6AB4.6020201@relativita.com> <49BE78E4.108@relativita.com> <1cd32cbb0903160928k24adb3a0k50df5b5959ddda67@mail.gmail.com> <49BE8266.3050600@relativita.com> Message-ID: <5b8d13220903161724p207036a3gdf4e87f810b3e6dd@mail.gmail.com> On Tue, Mar 17, 2009 at 8:39 AM, David Warde-Farley wrote: > > On 16-Mar-09, at 12:46 PM, Emanuele Olivetti wrote: > >> You are right. I'm coming from K-means (MacKay's book) and >> moving to GMM, that's why I had in mind custom distances. >> I can try to transform my data to be meaningful under Euclidean >> distance. > > > Better yet, figure out what distribution might be the right one to use > in the one-component case. What sort of data are you working with? > MoGs aren't a magic bullet, and you might be better off putting some > careful consideration into the form your data takes and choosing an > appropriate base distribution. > > Pretty much any parametric distribution can be turned into a mixture > distribution. The way a (finite) mixture works in the general case is > that you have a discrete "hidden" random variable C that takes on > values corresponding to one of the N clusters, and then N separate > distributions from a parametric family (you can mix families too but > that gets complicated and is rarely useful). Mixtures of Bernoulli, > multinomial, Gamma, ?and Poisson distributions (for example) are all > fairly common. EM will work for all of these cases, and many more; it > relies on a fairly general set of assumptions, the details of which > escape me at the moment. You need to be able to estimate efficiently the expected log-likelihood of the complete data given the observation (which reduces to the computation of the responsibilities for mixtures), and to be able to maximizes it w.r.t the parameters. 
For complete data in the exponential family with sufficient statistics s, the expected log-likelihood reduces to some averaged sufficient statistics, and the re-estimated parameter is the Legendre transform of the cumulant generating function. > I haven't ever used PyEM so I don't know how general David's code is, > but it might be a helpful guide. It only handles GMM. As you said, the machinery is much the same for many mixtures, but it only helps formally. Concretely, the actual formulations are quite different; mixture of Poisson would be simple to derive from GMM, but mixtures of multinomial would be quite different. To have an "automatic" EM algorithm for any given mixture, one could imagine some sort of formal compiler, the EM algorithm being a meta-algorithm. Needless to say, I have not attempted to do that - people smarter than me have attempted to do so in a more general context (infer.net from Minka et. al at MS). cheers, David From josef.pktd at gmail.com Mon Mar 16 21:27:04 2009 From: josef.pktd at gmail.com (josef.pktd at gmail.com) Date: Mon, 16 Mar 2009 21:27:04 -0400 Subject: [SciPy-user] spare matrices In-Reply-To: References: Message-ID: <1cd32cbb0903161827m28c4d152s340f8f385eaf950f@mail.gmail.com> On Mon, Mar 16, 2009 at 8:06 PM, Eric Friedman wrote: > Hi, I'm new to python (from matlab) and am having some difficulties. > > In my program I need to use large sparse linear systems where A is nxm matrix > with n is a few million, m is ?around 100,000 and there are about 50 entries per > row, so there are several million nonzero entries. > I'm trying scipy sparse and it is really slow (at least 10-20 seconds to much > longer), while matlab can do multiplications almost instantly. ?I've looked at > various online documents and googled a bit, and from what I see people seem to > find these numbers large in scipy. Am I missing something or do I need to go > back to Matlab? > > thanks, > Eric > Since there are several different sparse formats, which have advantages for different applications, you need to be a bit more specific what you tried and what operations you need. A brief description of each format and advantages and disadvantages are in the docs. I only tried out sparse a few times and initially I used the lil matrix type for linear algebra and it was slow, switching to csr or csc format made the linear algebra much faster. I don't know the speed comparison to matlab, since I never used the sparse matrices of matlab. Josef From ejf27 at cornell.edu Mon Mar 16 22:12:46 2009 From: ejf27 at cornell.edu (Eric Friedman) Date: Tue, 17 Mar 2009 02:12:46 +0000 (UTC) Subject: [SciPy-user] spare matrices References: <1cd32cbb0903161827m28c4d152s340f8f385eaf950f@mail.gmail.com> Message-ID: gmail.com> writes: > > > > Since there are several different sparse formats, which have > advantages for different applications, you need to be a bit more > specific what you tried and what operations you need. A brief > description of each format and advantages and disadvantages are in the > docs. > > ... > Josef > thanks -- I played around with a few of the formats and read the docs, but what I'm looking for is someone to tell me that it really does work and if I spend the required time to figure it out it will be able handle millions of entries. (I read a couple of posts on various bulletin boards that seemed to suggest that I'm pushing the envelope here.) 
Eric From pgmdevlist at gmail.com Tue Mar 17 03:00:11 2009 From: pgmdevlist at gmail.com (Pierre GM) Date: Tue, 17 Mar 2009 03:00:11 -0400 Subject: [SciPy-user] Compiling extensions in a virtual environment Message-ID: <3EC05B37-8ED8-467F-9890-366B9D3E08D6@gmail.com> All, I'm trying to compile scikits.timeseries in a virtual environment (created w/ virtualenvwrapper), and I'm running into a puzzling problem. When I create a new virtual environment, install numpy in it (from SVN, r6685), I'm able to compile the scikit without problem. However, after removing the build and switching to another existing virtual environment, I get a: error: build/src.macosx-10.5-i386-2.5/gfortran_vs2003_hack.c: No such file or directory Where does this error come from ? As the virtual environment that causes problem is the one that I work with all the time, I'd like to know what's happening. Notes: * Reinstalling numpy in this environment (after removing previous installation and the corresponding eggs) doesn't seem to fix the pb. * Python 2.5.4 (r254:67916, Dec 29 2008, 17:02:44) [GCC 4.0.1 (Apple Inc. build 5488)] on darwin Thanks a lot in advance for any pointer. P. From cournape at gmail.com Tue Mar 17 04:11:43 2009 From: cournape at gmail.com (David Cournapeau) Date: Tue, 17 Mar 2009 17:11:43 +0900 Subject: [SciPy-user] Compiling extensions in a virtual environment In-Reply-To: <3EC05B37-8ED8-467F-9890-366B9D3E08D6@gmail.com> References: <3EC05B37-8ED8-467F-9890-366B9D3E08D6@gmail.com> Message-ID: <5b8d13220903170111i212a9a29n8ea8f79329befaee@mail.gmail.com> On Tue, Mar 17, 2009 at 4:00 PM, Pierre GM wrote: > All, > I'm trying to compile scikits.timeseries in a virtual environment > (created w/ virtualenvwrapper), and I'm running into a puzzling problem. > > When I create a new virtual environment, install numpy in it (from > SVN, r6685), I'm able to compile the scikit without problem. However, > after removing the build and switching to another existing virtual > environment, I get a: > error: build/src.macosx-10.5-i386-2.5/gfortran_vs2003_hack.c: No such > file or directory > Where does this error come from ? As the virtual environment that > causes problem is the one that I work with all the time, I'd like to > know what's happening. I am sorry, I am the one to blame for this. I removed the offending code - you should make sure that you have updated numpy correctly (the revision 6685 has already the buggy code removed). As it introduced a bug in numpy.distutils, you should make sure which numpy you are using to build other extensions. This is a numpy problem at root, but I can't really help you for virtualenv - they do make some import magic which may make the problem more difficult to track down. 
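A quick way to confirm which numpy (and which numpy.distutils) the active virtual environment will actually pick up before re-running setup.py; the revision cut-off below is taken from David's message, and the interpretation of the paths is only a rule of thumb:

import numpy, numpy.distutils
print(numpy.__version__)          # an SVN build at or after r6685 has the offending code removed
print(numpy.__file__)             # should point inside the active virtualenv, not the system site-packages
print(numpy.distutils.__file__)   # the distutils that setup.py will actually import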
cheers, David From stefan at sun.ac.za Tue Mar 17 04:17:50 2009 From: stefan at sun.ac.za (=?ISO-8859-1?Q?St=E9fan_van_der_Walt?=) Date: Tue, 17 Mar 2009 10:17:50 +0200 Subject: [SciPy-user] python windows set up In-Reply-To: <81db2b8aff865f3c93d1b12612f5a310.squirrel@webmail.uio.no> References: <49BE6AB4.6020201@relativita.com> <49BE78E4.108@relativita.com> <1cd32cbb0903160928k24adb3a0k50df5b5959ddda67@mail.gmail.com> <49BE8266.3050600@relativita.com> <5b8d13220903160953g4a8fa80fi253278c0d2ce14ce@mail.gmail.com> <9D202D4E86A4BF47BA6943ABDF21BE78058FAC08@EXVS06.net.ucsf.edu> <81db2b8aff865f3c93d1b12612f5a310.squirrel@webmail.uio.no> Message-ID: <9457e7c80903170117i1ce467f5ga50783890a2e708e@mail.gmail.com> 2009/3/16 Sturla Molden : > >> Sorry to spam the list with a silly question but I'm trying to convert a >> windows using colleague to python so that interacting with her >> doesn't mean my having to convert her matlab code. > > Or you could give her Sage, or install all a Linux with all she needs in a > vmware image yourself. vmware player is free. > > http://www.sagemath.org/ Just be careful; Sage does some pre-parsing of input, so start Sage using sage -ipython for numerical work. Cheers St?fan From pav at iki.fi Tue Mar 17 04:31:23 2009 From: pav at iki.fi (Pauli Virtanen) Date: Tue, 17 Mar 2009 08:31:23 +0000 (UTC) Subject: [SciPy-user] [ANN][optimization] OpenOpt 0.23 References: <49BD506F.5000900@ukr.net> <49BEB66E.2040501@lpta.in2p3.fr> Message-ID: Mon, 16 Mar 2009 21:28:30 +0100, Cohen-Tanugi Johann wrote: > hi Dmitrey, I am confused : is the scikits openopt deprecated? I see > that the head is version 2111 with info.py specifying 0.19.... but all > this is not in the source browser on your web page.... I am sorry if I > did not follow the new setup. If the version of openopt in scikits SVN repo is deprecated and not used, it IMHO should be removed from there. -- Pauli Virtanen From stefan at sun.ac.za Tue Mar 17 04:44:38 2009 From: stefan at sun.ac.za (=?ISO-8859-1?Q?St=E9fan_van_der_Walt?=) Date: Tue, 17 Mar 2009 10:44:38 +0200 Subject: [SciPy-user] [ANN][optimization] OpenOpt 0.23 In-Reply-To: References: <49BD506F.5000900@ukr.net> <49BEB66E.2040501@lpta.in2p3.fr> Message-ID: <9457e7c80903170144w5b7d9f87p568104f9dad81e15@mail.gmail.com> 2009/3/17 Pauli Virtanen : >> hi Dmitrey, I am confused : is the scikits openopt deprecated? I see >> that the head is version 2111 with info.py specifying 0.19.... but all >> this is not in the source browser on your web page.... I am sorry if I >> did not follow the new setup. > > If the version of openopt in scikits SVN repo is deprecated and not used, > it IMHO should be removed from there. I agree. Dmitrey, would that be ok with you? Cheers St?fan From dmitrey15 at ukr.net Tue Mar 17 04:46:09 2009 From: dmitrey15 at ukr.net (Dmitrey) Date: Tue, 17 Mar 2009 10:46:09 +0200 Subject: [SciPy-user] [ANN][optimization] OpenOpt 0.23 In-Reply-To: <9457e7c80903170144w5b7d9f87p568104f9dad81e15@mail.gmail.com> References: <49BD506F.5000900@ukr.net> <49BEB66E.2040501@lpta.in2p3.fr> <9457e7c80903170144w5b7d9f87p568104f9dad81e15@mail.gmail.com> Message-ID: <49BF6351.3080203@ukr.net> St?fan van der Walt wrote: > 2009/3/17 Pauli Virtanen : > >>> hi Dmitrey, I am confused : is the scikits openopt deprecated? I see >>> that the head is version 2111 with info.py specifying 0.19.... but all >>> this is not in the source browser on your web page.... I am sorry if I >>> did not follow the new setup. 
>>> >> If the version of openopt in scikits SVN repo is deprecated and not used, >> it IMHO should be removed from there. >> > > I agree. Dmitrey, would that be ok with you? > > Cheers > I hadn't removed that one yet because there is GenericOpt code there. Regards, D. From emanuele at relativita.com Tue Mar 17 05:35:07 2009 From: emanuele at relativita.com (Emanuele Olivetti) Date: Tue, 17 Mar 2009 10:35:07 +0100 Subject: [SciPy-user] PyEM: custom (non-Euclidean) distance function? In-Reply-To: <5b8d13220903160953g4a8fa80fi253278c0d2ce14ce@mail.gmail.com> References: <49BE6AB4.6020201@relativita.com> <49BE78E4.108@relativita.com> <1cd32cbb0903160928k24adb3a0k50df5b5959ddda67@mail.gmail.com> <49BE8266.3050600@relativita.com> <5b8d13220903160953g4a8fa80fi253278c0d2ce14ce@mail.gmail.com> Message-ID: <49BF6ECB.2050804@relativita.com> David Cournapeau wrote: > On Tue, Mar 17, 2009 at 1:46 AM, Emanuele Olivetti > wrote: > >> You are right. I'm coming from K-means (MacKay's book) and >> moving to GMM, that's why I had in mind custom distances. >> > > Note that GMM is what is called soft kmean in MacKay's book. You can > use other distances for kmeans, and other kind of soft-kmeans - but as > said by Josef, I am more puzzled by the idea of non euclidean distance > in the EM context, because of the inherent probabilistic view. Because > of the probabilities, there is no obvious interpretation in distance > anymore (it is not an argmin_c ||x-c|| for each point x). > > There are soft kmeans algorithms with non euclidean distances, but not > in a probabilistic framework - at least I am not aware of any. > > I heard about kernel GMM, which could be of interest for this thread: http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.6.2779 they seems to mix data projection and GMM in a single step. cheers, Emanuele From wnbell at gmail.com Tue Mar 17 06:19:53 2009 From: wnbell at gmail.com (Nathan Bell) Date: Tue, 17 Mar 2009 06:19:53 -0400 Subject: [SciPy-user] spare matrices In-Reply-To: References: <1cd32cbb0903161827m28c4d152s340f8f385eaf950f@mail.gmail.com> Message-ID: On Mon, Mar 16, 2009 at 10:12 PM, Eric Friedman wrote: > > thanks -- I played around with a few of the formats and > read the docs, but what > I'm looking for is someone to tell me that it > really does work and if I spend > the required time to figure it out it will > be able handle millions of entries. > (I read a couple of posts on various bulletin > boards that seemed to suggest that > I'm pushing the envelope here.) > Yes, scipy.sparse should be competitive with MATLAB in terms of performance. For instance, I use scipy.sparse with PyAMG[1] to solve linear systems with millions of unknowns. With such large matrices you'll want to primarily use the COO, CSR, and CSC formats, whose operations are implemented in C++. You'll want to avoid using lil_matrix and dok_matrix to construct matrices, as they are about 100 times slower than the alternatives. Here's a example that highlights the difference[2]. Please let us know if you find something lacking. 
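As a concrete illustration of the advice above, the following builds a large random matrix directly in COO format from (data, (row, col)) triplets and then converts it to CSR for fast products and row operations. The sizes are made up for the example and are smaller than the matrices Eric described.

import numpy as np
import scipy.sparse as sp

n_rows, n_cols, nnz = 200000, 20000, 2000000   # illustrative only

rng = np.random.RandomState(0)
row = rng.randint(0, n_rows, nnz)
col = rng.randint(0, n_cols, nnz)
val = rng.rand(nnz)

# build once in COO (duplicate (row, col) pairs are summed on conversion) ...
A = sp.coo_matrix((val, (row, col)), shape=(n_rows, n_cols))
# ... then convert to CSR for arithmetic and row slicing
A = A.tocsr()

x = rng.rand(n_cols)
y = A * x                                  # sparse matrix-vector product
print(y.shape)                             # (200000,)

subset = A[[0, 10, 42], :]                 # a submatrix from selected rows
nonzero_rows = np.unique(A.nonzero()[0])   # rows that contain at least one nonzero

Building the same matrix by assigning entries one at a time into a lil_matrix or dok_matrix is the pattern that is roughly a hundred times slower.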
[1] http://www.pyamg.org/ [2] http://www.scipy.org/SciPyPackages/Sparse#head-be8a0be5d0e44c4d59550d64fb0173508073c36e -- Nathan Bell wnbell at gmail.com http://graphics.cs.uiuc.edu/~wnbell/ From ejf27 at cornell.edu Tue Mar 17 07:11:14 2009 From: ejf27 at cornell.edu (Eric Friedman) Date: Tue, 17 Mar 2009 11:11:14 +0000 (UTC) Subject: [SciPy-user] spare matrices References: <1cd32cbb0903161827m28c4d152s340f8f385eaf950f@mail.gmail.com> Message-ID: Nathan Bell gmail.com> writes: > > On Mon, Mar 16, 2009 at 10:12 PM, Eric Friedman cornell.edu> wrote: > > Yes, scipy.sparse should be competitive with MATLAB in terms of > performance. For instance, I use scipy.sparse with PyAMG[1] to solve > linear systems with millions of unknowns. With such large matrices > you'll want to primarily use the COO, CSR, and CSC formats, whose > operations are implemented in C++. You'll want to avoid using > lil_matrix and dok_matrix to construct matrices, as they are about 100 > times slower than the alternatives. Here's a example that highlights > the difference[2]. > > [2] http://www.scipy.org/SciPyPackages/Sparse#head-be8a0be5d0e44c4d59550d64fb0173508073c36e > Excellent -- thanks. A few more newbie questions: 1)How do I find the detailed descriptions of functions like coo? Is there a good document (which I haven't been able to find) or do I need to look inside the code? For example, I don't see how to access individual elements in a coo. Also, I'd like to be able to pull out a submatrix from a subset of the rows and find the set of rows which are nonzero. 2) I tried linsolve.spsolve and it says I should use sparse.linalg.dsolve but when I try that directly I can't get it to work. Also, is there any documentation on dsolve or spsolve? My matrix is singular, but the matrix equation is still solvable -- can it deal with that? thanks again, Eric From cohen at lpta.in2p3.fr Tue Mar 17 07:13:52 2009 From: cohen at lpta.in2p3.fr (Cohen-Tanugi Johann) Date: Tue, 17 Mar 2009 12:13:52 +0100 Subject: [SciPy-user] [ANN][optimization] OpenOpt 0.23 In-Reply-To: <49BF6351.3080203@ukr.net> References: <49BD506F.5000900@ukr.net> <49BEB66E.2040501@lpta.in2p3.fr> <9457e7c80903170144w5b7d9f87p568104f9dad81e15@mail.gmail.com> <49BF6351.3080203@ukr.net> Message-ID: <49BF85F0.8080606@lpta.in2p3.fr> thanks Dmitrey and Nils, I downloaded and installed OOpy with svn. +1 on cleaning up the scikits directory, when that is possible. How about mentioning that this version is deprecated and pointing toward the new location in the __init__ of the scikits package? Johann Dmitrey wrote: > St?fan van der Walt wrote: > >> 2009/3/17 Pauli Virtanen : >> >> >>>> hi Dmitrey, I am confused : is the scikits openopt deprecated? I see >>>> that the head is version 2111 with info.py specifying 0.19.... but all >>>> this is not in the source browser on your web page.... I am sorry if I >>>> did not follow the new setup. >>>> >>>> >>> If the version of openopt in scikits SVN repo is deprecated and not used, >>> it IMHO should be removed from there. >>> >>> >> I agree. Dmitrey, would that be ok with you? >> >> Cheers >> >> > I hadn't removed that one yet because there is GenericOpt code there. > Regards, D. 
> _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user > From dmitrey15 at ukr.net Tue Mar 17 07:21:07 2009 From: dmitrey15 at ukr.net (Dmitrey) Date: Tue, 17 Mar 2009 13:21:07 +0200 Subject: [SciPy-user] [ANN][optimization] OpenOpt 0.23 In-Reply-To: <49BF85F0.8080606@lpta.in2p3.fr> References: <49BD506F.5000900@ukr.net> <49BEB66E.2040501@lpta.in2p3.fr> <9457e7c80903170144w5b7d9f87p568104f9dad81e15@mail.gmail.com> <49BF6351.3080203@ukr.net> <49BF85F0.8080606@lpta.in2p3.fr> Message-ID: <49BF87A3.4060503@ukr.net> Cohen-Tanugi Johann wrote: > thanks Dmitrey and Nils, I downloaded and installed OOpy with svn. +1 on > cleaning up the scikits directory, when that is possible. How about > mentioning that this version is deprecated and pointing toward the new > location in the __init__ of the scikits package? > Johann > I think it's just better to remove openopt files from scikits repository completely, when GenericOpt files will be moved into other location. D. From stefan at sun.ac.za Tue Mar 17 09:09:07 2009 From: stefan at sun.ac.za (=?ISO-8859-1?Q?St=E9fan_van_der_Walt?=) Date: Tue, 17 Mar 2009 15:09:07 +0200 Subject: [SciPy-user] [ANN][optimization] OpenOpt 0.23 In-Reply-To: <49BF6351.3080203@ukr.net> References: <49BD506F.5000900@ukr.net> <49BEB66E.2040501@lpta.in2p3.fr> <9457e7c80903170144w5b7d9f87p568104f9dad81e15@mail.gmail.com> <49BF6351.3080203@ukr.net> Message-ID: <9457e7c80903170609g5214f970sec5b9f17b60d5a04@mail.gmail.com> 2009/3/17 Dmitrey : >> I agree. ?Dmitrey, would that be ok with you? >> > I hadn't removed that one yet because there is GenericOpt code there. Why did the GenericOpt code not travel along? Matthieu, can we integrate GenericOpt into scipy.optimize? Cheers St?fan From wnbell at gmail.com Tue Mar 17 09:20:38 2009 From: wnbell at gmail.com (Nathan Bell) Date: Tue, 17 Mar 2009 09:20:38 -0400 Subject: [SciPy-user] spare matrices In-Reply-To: References: <1cd32cbb0903161827m28c4d152s340f8f385eaf950f@mail.gmail.com> Message-ID: On Tue, Mar 17, 2009 at 7:11 AM, Eric Friedman wrote: > > 1)How do I find the detailed descriptions of functions like coo? > Is there a good > document (which I haven't been able to find) or do I need > to look inside the code? > For example, I don't see how to access individual elements in a coo. > Also, I'd like to be able to pull out a submatrix from a > subset of the rows and > find the set of rows which are nonzero. The docstrings in scipy.sparse are the best source of information. I generally use IPython to read docstrings: $ipython In [1]: from scipy.sparse import * In [2]: csr_matrix? The same information is available here: http://docs.scipy.org/doc/scipy/reference/sparse.html To extract submatrices you'll need to use either the CSR or CSC formats. These formats support most of the fancy indexing tricks that numpy provides for arrays. The find() function in scipy.sparse may also be helpful. The sparse documentation doesn't currently inform users about "best practices" for efficiency, so feel free to ask on this list for help with specific operations. > > 2) I tried linsolve.spsolve and it says I should use > sparse.linalg.dsolve but > when I try that directly I can't get it to work. > Also, is there any > documentation on dsolve or spsolve? My matrix is > singular, but the matrix > equation is still solvable -- can it deal with that? > Yes, scipy.sparse.spsolve() can solve some consistent but singular systems. 
>>> from scipy import rand >>> from scipy.sparse.linalg import spsolve >>> from scipy.sparse import * >>> A = identity(10, format='csc') >>> A[9,9] = 0 >>> b = A*rand(10) >>> spsolve(A,b) However spsolve() is known to fail in some cases: http://projects.scipy.org/scipy/ticket/553 spsolve() uses an outdated version of SuperLU. We should update this by SciPy 0.8. If you find spsolve() inadequate, then consider the UMFPACK scikit instead: http://scikits.appspot.com/umfpack FWIW, UMFPACK is the same solver that MATLAB uses for solving sparse linear systems. Also, I believe it is generally faster than SuperLU. -- Nathan Bell wnbell at gmail.com http://graphics.cs.uiuc.edu/~wnbell/ From ejf27 at cornell.edu Tue Mar 17 10:40:10 2009 From: ejf27 at cornell.edu (Eric Friedman) Date: Tue, 17 Mar 2009 14:40:10 +0000 (UTC) Subject: [SciPy-user] spare matrices References: <1cd32cbb0903161827m28c4d152s340f8f385eaf950f@mail.gmail.com> Message-ID: Nathan Bell gmail.com> writes: > > The docstrings in scipy.sparse are the best source of information. I > generally use IPython to read docstrings: > $ipython ... > The sparse documentation doesn't currently inform users about "best > practices" for efficiency, so feel free to ask on this list for help > with specific operations. > ... Thanks again for all your help! Eric From pgmdevlist at gmail.com Tue Mar 17 10:54:18 2009 From: pgmdevlist at gmail.com (Pierre GM) Date: Tue, 17 Mar 2009 10:54:18 -0400 Subject: [SciPy-user] Compiling extensions in a virtual environment [fixed] In-Reply-To: <5b8d13220903170111i212a9a29n8ea8f79329befaee@mail.gmail.com> References: <3EC05B37-8ED8-467F-9890-366B9D3E08D6@gmail.com> <5b8d13220903170111i212a9a29n8ea8f79329befaee@mail.gmail.com> Message-ID: OK, virtualenv magic must have hexed me somehow. After updating to the latest SVN in *and* outside of my virtual environment, the problem seems to be fixed. If any regular user of virtualenv could enlighten us and tell us what files block the other, that'd be great. Thx again. P. On Mar 17, 2009, at 4:11 AM, David Cournapeau wrote: > On Tue, Mar 17, 2009 at 4:00 PM, Pierre GM > wrote: >> All, >> I'm trying to compile scikits.timeseries in a virtual environment >> (created w/ virtualenvwrapper), and I'm running into a puzzling >> problem. >> >> When I create a new virtual environment, install numpy in it (from >> SVN, r6685), I'm able to compile the scikit without problem. However, >> after removing the build and switching to another existing virtual >> environment, I get a: >> error: build/src.macosx-10.5-i386-2.5/gfortran_vs2003_hack.c: No such >> file or directory >> Where does this error come from ? As the virtual environment that >> causes problem is the one that I work with all the time, I'd like to >> know what's happening. > > I am sorry, I am the one to blame for this. I removed the offending > code - you should make sure that you have updated numpy correctly (the > revision 6685 has already the buggy code removed). As it introduced a > bug in numpy.distutils, you should make sure which numpy you are using > to build other extensions. This is a numpy problem at root, but I > can't really help you for virtualenv - they do make some import magic > which may make the problem more difficult to track down. 
> > cheers, > > David > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user From Dharhas.Pothina at twdb.state.tx.us Tue Mar 17 11:43:53 2009 From: Dharhas.Pothina at twdb.state.tx.us (Dharhas Pothina) Date: Tue, 17 Mar 2009 10:43:53 -0500 Subject: [SciPy-user] Compiling extensions in a virtual environment[fixed] In-Reply-To: References: <3EC05B37-8ED8-467F-9890-366B9D3E08D6@gmail.com> <5b8d13220903170111i212a9a29n8ea8f79329befaee@mail.gmail.com> Message-ID: <49BF7EE9.63BA.009B.0@twdb.state.tx.us> I've had a similar problem before (don't remember the details now) where virtualenv wasn't importing something correctly. i.e there was an 'import blah' statement somewhere but then 'blah' wasn't available for the rest of the install script. I never did get further in working out what was wrong. I just ended up upgrading my regular install. - d >>> Pierre GM 3/17/2009 9:54 AM >>> OK, virtualenv magic must have hexed me somehow. After updating to the latest SVN in *and* outside of my virtual environment, the problem seems to be fixed. If any regular user of virtualenv could enlighten us and tell us what files block the other, that'd be great. Thx again. P. On Mar 17, 2009, at 4:11 AM, David Cournapeau wrote: > On Tue, Mar 17, 2009 at 4:00 PM, Pierre GM > wrote: >> All, >> I'm trying to compile scikits.timeseries in a virtual environment >> (created w/ virtualenvwrapper), and I'm running into a puzzling >> problem. >> >> When I create a new virtual environment, install numpy in it (from >> SVN, r6685), I'm able to compile the scikit without problem. However, >> after removing the build and switching to another existing virtual >> environment, I get a: >> error: build/src.macosx-10.5-i386-2.5/gfortran_vs2003_hack.c: No such >> file or directory >> Where does this error come from ? As the virtual environment that >> causes problem is the one that I work with all the time, I'd like to >> know what's happening. > > I am sorry, I am the one to blame for this. I removed the offending > code - you should make sure that you have updated numpy correctly (the > revision 6685 has already the buggy code removed). As it introduced a > bug in numpy.distutils, you should make sure which numpy you are using > to build other extensions. This is a numpy problem at root, but I > can't really help you for virtualenv - they do make some import magic > which may make the problem more difficult to track down. > > cheers, > > David > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user _______________________________________________ SciPy-user mailing list SciPy-user at scipy.org http://mail.scipy.org/mailman/listinfo/scipy-user From pav at iki.fi Tue Mar 17 15:34:03 2009 From: pav at iki.fi (Pauli Virtanen) Date: Tue, 17 Mar 2009 19:34:03 +0000 (UTC) Subject: [SciPy-user] Chirp Z transform References: <710F2847B0018641891D9A216027636029C484@ex3.envision.co.il> <9457e7c80903161354i2cb8fc28y1ab260e8789b03f0@mail.gmail.com> Message-ID: Mon, 16 Mar 2009 22:54:58 +0200, St?fan van der Walt wrote: > Hi Nadav > > 2009/3/15 Nadav Horesh : >> Attached are a chirp-Z transform module composed by Paul Kienzle >> (mainly) and I. We tried hard to conform to the scipy's coding style. > > Thank you both for the contribution! I've reviewed the code and think > it should be included in SciPy. 
Would someone else like to have a look > before I commit it? > > Where would the best place for this be? scipy.fftpack? I can't review CZT, but some comments: - The *test* and *demo* routines from czt.py should be moved elsewhere (test_czt.py, or documentation) or omitted. - In docstrings, write "Parameters" and "Returns", instead of "Parameters:", "Returns:" - In the module docstring, using autosummary:: could be used for listing the module contents, so that we can just use the module docstring as the official location of the CZT documentation. (It could be hooked to main docs via automodule:: in a suitable place.) In any case, the functions etc. should be hooked to the Sphinx documentation. - The tests should set the random seed, so that they would be deterministic. -- Pauli Virtanen From nwagner at iam.uni-stuttgart.de Wed Mar 18 02:44:35 2009 From: nwagner at iam.uni-stuttgart.de (Nils Wagner) Date: Wed, 18 Mar 2009 07:44:35 +0100 Subject: [SciPy-user] Eigenvalues of the Floquet matrix In-Reply-To: References: Message-ID: On Mon, 16 Mar 2009 14:48:25 -0400 Rob Clewley wrote: > Nils, > >> /home/nwagner/local/lib64/python2.6/site-packages/Pyrex/Compiler/Errors.py:17: > > I'm not up to speed with everything in scipy-land but my >understanding > was that numpy and scipy aren't even fully compatible >with python 2.6 > yet. I thought that was going to happen with numpy 1.3, >etc.? I can't > install python 2.6 with these libraries on the only >computer I have > available to test this right now. There are too many >possible issues > that I can't control for if I don't have a working >python 2.6 > installation to test with. I don't know if I can be of >further help > with this unless you switch to python 2.5, for which all >the tests are > known to pass on linux boxes set up according to the >wiki > instructions. > > Sorry! 
> Rob > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user Rob, I switched to python2.5 python -i seyranian-PyDSTool.py -3.41421356237 /home/nwagner/svn/PyDSTool/integrator/radau5.f: In subroutine `radcor': /home/nwagner/svn/PyDSTool/integrator/radau5.f:960: warning: `dynold' might be used uninitialized in this function /home/nwagner/svn/PyDSTool/integrator/radau5.f:964: warning: `thqold' might be used uninitialized in this function /home/nwagner/svn/PyDSTool/integrator/radau5.f:1020: warning: `hacc' might be used uninitialized in this function /home/nwagner/svn/PyDSTool/integrator/radau5.f:1020: warning: `erracc' might be used uninitialized in this function /home/nwagner/svn/PyDSTool/integrator/radau5.f:1066: warning: `hopt' might be used uninitialized in this function /home/nwagner/svn/PyDSTool/integrator/lapackc.f: In subroutine `ztbsv': /home/nwagner/svn/PyDSTool/integrator/lapackc.f:2248: warning: `kx' might be used uninitialized in this function /home/nwagner/svn/PyDSTool/integrator/dc_lapack.f: In subroutine `estrad': /home/nwagner/svn/PyDSTool/integrator/dc_lapack.f:1264: warning: `mm' might be used uninitialized in this function /home/nwagner/svn/PyDSTool/integrator/dc_lapack.f: In subroutine `estrav': /home/nwagner/svn/PyDSTool/integrator/dc_lapack.f:1551: warning: `mm' might be used uninitialized in this function /home/nwagner/svn/PyDSTool/integrator/lapack.f: In subroutine `dtbsv': /home/nwagner/svn/PyDSTool/integrator/lapack.f:3406: warning: `kx' might be used uninitialized in this function Segmentation fault Nils Below is a backtrace gdb python GNU gdb 6.5 Copyright (C) 2006 Free Software Foundation, Inc. GDB is free software, covered by the GNU General Public License, and you are welcome to change it and/or distribute copies of it under certain conditions. Type "show copying" to see the conditions. There is absolutely no warranty for GDB. Type "show warranty" for details. This GDB was configured as "x86_64-suse-linux"...(no debugging symbols found) Using host libthread_db library "/lib64/libthread_db.so.1". (gdb) run seyranian-PyDSTool.py Starting program: /usr/bin/python seyranian-PyDSTool.py (no debugging symbols found) (no debugging symbols found) (no debugging symbols found) [Thread debugging using libthread_db enabled] [New Thread 47382264377744 (LWP 10240)] (no debugging symbols found) (no debugging symbols found) (no debugging symbols found) (no debugging symbols found) (no debugging symbols found) -3.41421356237 Program received signal SIGSEGV, Segmentation fault. 
[Switching to Thread 47382264377744 (LWP 10240)] 0x00002b180a23065b in PyEval_EvalFrameEx () from /usr/lib64/libpython2.5.so.1.0 (gdb) bt #0 0x00002b180a23065b in PyEval_EvalFrameEx () from /usr/lib64/libpython2.5.so.1.0 #1 0x00002b180a233bfc in PyEval_EvalCodeEx () from /usr/lib64/libpython2.5.so.1.0 #2 0x00002b180a232324 in PyEval_EvalFrameEx () from /usr/lib64/libpython2.5.so.1.0 #3 0x00002b180a233bfc in PyEval_EvalCodeEx () from /usr/lib64/libpython2.5.so.1.0 #4 0x00002b180a232324 in PyEval_EvalFrameEx () from /usr/lib64/libpython2.5.so.1.0 #5 0x00002b180a233bfc in PyEval_EvalCodeEx () from /usr/lib64/libpython2.5.so.1.0 #6 0x00002b180a233c42 in PyEval_EvalCode () from /usr/lib64/libpython2.5.so.1.0 #7 0x00002b180a24c702 in Py_CompileString () from /usr/lib64/libpython2.5.so.1.0 #8 0x00002b180a24c7d6 in PyRun_FileExFlags () from /usr/lib64/libpython2.5.so.1.0 #9 0x00002b180a24dc98 in PyRun_SimpleFileExFlags () from /usr/lib64/libpython2.5.so.1.0 #10 0x00002b180a256aa5 in Py_Main () from /usr/lib64/libpython2.5.so.1.0 #11 0x00002b180ad71ae4 in __libc_start_main () from /lib64/libc.so.6 #12 0x00000000004006a9 in _start () From nwagner at iam.uni-stuttgart.de Wed Mar 18 09:39:24 2009 From: nwagner at iam.uni-stuttgart.de (Nils Wagner) Date: Wed, 18 Mar 2009 14:39:24 +0100 Subject: [SciPy-user] Arpack Message-ID: Hi all, Is it possible to solve a sparse eigenvalue problem of the form B x = \nu A x A^{-1} B x = \nu x with scipy.sparse.linalg.eigen ? B is singular. Nils From rob.clewley at gmail.com Wed Mar 18 11:29:54 2009 From: rob.clewley at gmail.com (Rob Clewley) Date: Wed, 18 Mar 2009 11:29:54 -0400 Subject: [SciPy-user] Eigenvalues of the Floquet matrix In-Reply-To: References: Message-ID: Hi Nils, On Wed, Mar 18, 2009 at 2:44 AM, Nils Wagner wrote: > Rob, > > I switched to python2.5 > python -i seyranian-PyDSTool.py > -3.41421356237 > /home/nwagner/svn/PyDSTool/integrator/radau5.f: In [SNIP] > Segmentation fault > This is getting a bit out of my depth, and I don't normally see segfault problems with these integrators, but I have a couple of suggestions. Does interp_dopri_test run or do you also get a segfault? Radau is a fortran-based integrator and that can bring extra problems. The bug report you submitted on SF indicates that you ran that test with Python 2.6 not 2.5, although since you got a segfault there too it's possible that it's not a 2.5 vs. 2.6 issue after all. Could it be something about your 64 bit platform? Are the standard libraries all slightly different somehow? The segfault is coming from lib64/libpython2.5.so. What version of gcc are you running? Based on past experience I recommend 3.1 but 3.3 may also work. I will try to do some research about getting DLLs working via SWIG on 64 bit platforms to see if there are known problems. > > Below is a backtrace > > gdb python > This GDB was configured as "x86_64-suse-linux"...(no > debugging symbols found) > Using host libthread_db library > "/lib64/libthread_db.so.1". > [SNIP] > (gdb) run seyranian-PyDSTool.py > Starting program: /usr/bin/python seyranian-PyDSTool.py [SNIP] > Program received signal SIGSEGV, Segmentation fault. > [Switching to Thread 47382264377744 (LWP 10240)] > 0x00002b180a23065b in PyEval_EvalFrameEx () from > /usr/lib64/libpython2.5.so.1.0 > (gdb) bt > #0 0x00002b180a23065b in PyEval_EvalFrameEx () from > /usr/lib64/libpython2.5.so.1.0 Is it possible to configure gdb to include debugging symbols? 
I don't know what that entails off the top of my head, but it would mean we'd get a much more detailed report. -Rob From rob.clewley at gmail.com Wed Mar 18 11:48:09 2009 From: rob.clewley at gmail.com (Rob Clewley) Date: Wed, 18 Mar 2009 11:48:09 -0400 Subject: [SciPy-user] Eigenvalues of the Floquet matrix In-Reply-To: References: Message-ID: Nils, > possible that it's not a 2.5 vs. 2.6 issue after all. Could it be > something about your 64 bit platform? Are the standard libraries all > slightly different somehow? The segfault is coming from > lib64/libpython2.5.so. What version of gcc are you running? Based on > past experience I recommend 3.1 but 3.3 may also work. I will try to > do some research about getting DLLs working via SWIG on 64 bit > platforms to see if there are known problems. I think there *is* a problem trying to link the 32 bit code with 64 bit libraries. Check out 31.2.7 of http://www.swig.org/Doc1.3/Python.html#Python_nn11 I think we need to add the compiler option to link to the 32 bit libraries. On line 937 of Generator/Radau_ODEsystem.py you should have extra_compile_args=['-w', '-D__RADAU__'] According to the instructions on the SWIG page, you can add '-m32' at the beginning of this list and see if Radau now runs. If it does, find the equivalent line in Dopri_ODEsystem.py (just text search for extra_compile_args) and add it there too. Let me know what happens! -Rob From nwagner at iam.uni-stuttgart.de Wed Mar 18 12:44:13 2009 From: nwagner at iam.uni-stuttgart.de (Nils Wagner) Date: Wed, 18 Mar 2009 17:44:13 +0100 Subject: [SciPy-user] Eigenvalues of the Floquet matrix In-Reply-To: References: Message-ID: On Wed, 18 Mar 2009 11:48:09 -0400 Rob Clewley wrote: > Nils, > >> possible that it's not a 2.5 vs. 2.6 issue after all. >>Could it be >> something about your 64 bit platform? Are the standard >>libraries all >> slightly different somehow? The segfault is coming from >> lib64/libpython2.5.so. What version of gcc are you >>running? Based on >> past experience I recommend 3.1 but 3.3 may also work. I >>will try to >> do some research about getting DLLs working via SWIG on >>64 bit >> platforms to see if there are known problems. > > I think there *is* a problem trying to link the 32 bit >code with 64 > bit libraries. Check out 31.2.7 of > > http://www.swig.org/Doc1.3/Python.html#Python_nn11 > > I think we need to add the compiler option to link to >the 32 bit > libraries. On line 937 of Generator/Radau_ODEsystem.py >you should have > > extra_compile_args=['-w', '-D__RADAU__'] > > According to the instructions on the SWIG page, you can >add '-m32' at > the beginning of this list and see if Radau now runs. If >it does, find > the equivalent line in Dopri_ODEsystem.py (just text >search for > extra_compile_args) and add it there too. > > Let me know what happens! > -Rob > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user Rob, I have checked PyDSTool on different machines using both, python2.5 and python2.6. Now I have added the option '-m32'. The segfault vanished. Cheers, Nils Another issue python pest_test2.py /home/nwagner/local/lib64/python2.6/site-packages/Pyrex/Compiler/Errors.py:17: DeprecationWarning: BaseException.message has been deprecated as of Python 2.6 self.message = message [ 0.05714896 0.20918955 0.58088371] [ 3.60595823e-01+0.j -2.29878191e-17+0.j 6.16263995e-02+0.j] [ 0.06162640+0.j 0.36059582+0.j] Generating non-uniform samples from HH orbit... ... 
done HH spike time found at 8.43278894472 Traceback (most recent call last): File "pest_test2.py", line 127, in verbose=True) File "/home/nwagner/svn/PyDSTool/PyDSTool/Toolbox/ParamEst.py", line 1215, in run int(verbose)) File "/home/nwagner/local/lib64/python2.6/site-packages/scipy/optimize/optimize.py", line 1199, in fminbound fx = func(x,*args) File "/home/nwagner/svn/PyDSTool/PyDSTool/Toolbox/ParamEst.py", line 354, in residual r = self._res_fn(p, extra_args) File "/home/nwagner/svn/PyDSTool/PyDSTool/Toolbox/ParamEst.py", line 416, in _res_fn return pest.evaluate()[0] File "/home/nwagner/svn/PyDSTool/PyDSTool/Toolbox/ParamEst.py", line 813, in evaluate res = self.context.residual(self.testModel) File "/home/nwagner/svn/PyDSTool/PyDSTool/MProject.py", line 1019, in residual self.evaluate(model) File "/home/nwagner/svn/PyDSTool/PyDSTool/MProject.py", line 1003, in evaluate new_result = test_mi(ref_mi) File "/home/nwagner/svn/PyDSTool/PyDSTool/MProject.py", line 1282, in evaluate target.get_test_traj(force=force) File "/home/nwagner/svn/PyDSTool/PyDSTool/MProject.py", line 1374, in get_test_traj self.compute_traj() File "/home/nwagner/svn/PyDSTool/PyDSTool/MProject.py", line 1360, in compute_traj self.model.compute(trajname=self._trajname, force=True) File "/home/nwagner/svn/PyDSTool/PyDSTool/Model.py", line 2449, in compute MI, swRules) File "/home/nwagner/svn/PyDSTool/PyDSTool/Model.py", line 2064, in _findTrajInitiator self.verboselevel) File "/home/nwagner/svn/PyDSTool/PyDSTool/Model.py", line 2885, in findTrajInitiator dxdt = dict(MI.Rhs(t, icdict, pardict)) File "/home/nwagner/svn/PyDSTool/PyDSTool/MProject.py", line 1209, in Rhs return self.model.Rhs(t, xdict, pdict) File "/home/nwagner/svn/PyDSTool/PyDSTool/Model.py", line 1646, in Rhs 'norm': self._normord}) File "/home/nwagner/svn/PyDSTool/PyDSTool/Points.py", line 124, in __init__ datalist.append(coorddict[c][0]) IndexError: invalid index to scalar variable. From rob.clewley at gmail.com Wed Mar 18 12:48:26 2009 From: rob.clewley at gmail.com (Rob Clewley) Date: Wed, 18 Mar 2009 12:48:26 -0400 Subject: [SciPy-user] Eigenvalues of the Floquet matrix In-Reply-To: References: Message-ID: > Rob, > > I have checked PyDSTool on different machines using > both, python2.5 and python2.6. > > Now I have added the option '-m32'. > > The segfault vanished. OK, great. I'll add some information to the wiki and later put some kind of automatic check that will add this option. > Another issue > python pest_test2.py > /home/nwagner/local/lib64/python2.6/site-packages/Pyrex/Compiler/Errors.py:17: I still have no way of helping you debug this issue unless you also see it in Python 2.5. This is the error you showed me before, and I replied with a couple of questions about the internal state at the time of the error. Would you be able to look at those first? It's hard to help if I can't reproduce the error. -Rob From nwagner at iam.uni-stuttgart.de Wed Mar 18 12:59:03 2009 From: nwagner at iam.uni-stuttgart.de (Nils Wagner) Date: Wed, 18 Mar 2009 17:59:03 +0100 Subject: [SciPy-user] Eigenvalues of the Floquet matrix In-Reply-To: References: Message-ID: On Wed, 18 Mar 2009 12:48:26 -0400 Rob Clewley wrote: >> Rob, >> >> I have checked PyDSTool on different machines using >> both, python2.5 and python2.6. >> >> Now I have added the option '-m32'. >> >> The segfault vanished. > > OK, great. I'll add some information to the wiki and >later put some > kind of automatic check that will add this option. 
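For reference, the change being discussed is a one-line edit (a sketch only; the file and line number are the ones quoted from Rob's message, and '-m32' simply tells gcc to emit 32-bit objects so they can be linked against the 32-bit Python libraries):

    # PyDSTool/Generator/Radau_ODEsystem.py, and the matching line in
    # Dopri_ODEsystem.py (search for extra_compile_args)
    extra_compile_args = ['-m32', '-w', '-D__RADAU__']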
> >> Another issue > >> python pest_test2.py >> /home/nwagner/local/lib64/python2.6/site-packages/Pyrex/Compiler/Errors.py:17: > That is only a DeprecationWarning: BaseException.message has been deprecated as of Python 2.6 The bug follows at the end of my previous message: File "/home/nwagner/svn/PyDSTool/PyDSTool/Points.py", line 124, in __init__ datalist.append(coorddict[c][0]) IndexError: invalid index to scalar variable. Nils From rob.clewley at gmail.com Wed Mar 18 13:02:08 2009 From: rob.clewley at gmail.com (Rob Clewley) Date: Wed, 18 Mar 2009 13:02:08 -0400 Subject: [SciPy-user] Eigenvalues of the Floquet matrix In-Reply-To: References: Message-ID: Nils, I don't understand: >>> python pest_test2.py >>> /home/nwagner/local/lib64/python2.6/site-packages/Pyrex/Compiler/Errors.py:17: >> > That is only a DeprecationWarning: BaseException.message > has been deprecated as of Python 2.6 > > The bug follows at the end of my previous message: > > ? File "/home/nwagner/svn/PyDSTool/PyDSTool/Points.py", > line 124, in __init__ > ? ? datalist.append(coorddict[c][0]) > IndexError: invalid index to scalar variable. >From a previous email I sent: > python -i pest_test2.py [SNIP] File "/home/nwagner/svn/PyDSTool/PyDSTool/MProject.py", line 1209, in Rhs return self.model.Rhs(t, xdict, pdict) [SNIP] > line 124, in __init__ > datalist.append(coorddict[c][0]) > IndexError: invalid index to scalar variable. Looks like the RHS function isn't working. Could you run this script in pdb or a debugger, or at least put in some print statements here to show me what t, xdict and pdict are when Rhs is called. And then, inside Points, what coorddict and c are? This might be another Python 2.6 issue because there is some funky magic used to create some class methods dynamically. From nwagner at iam.uni-stuttgart.de Wed Mar 18 13:06:59 2009 From: nwagner at iam.uni-stuttgart.de (Nils Wagner) Date: Wed, 18 Mar 2009 18:06:59 +0100 Subject: [SciPy-user] Eigenvalues of the Floquet matrix In-Reply-To: References: Message-ID: On Wed, 18 Mar 2009 13:02:08 -0400 Rob Clewley wrote: > Nils, I don't understand: > >>>> python pest_test2.py >>>> /home/nwagner/local/lib64/python2.6/site-packages/Pyrex/Compiler/Errors.py:17: >>> >> That is only a DeprecationWarning: BaseException.message >> has been deprecated as of Python 2.6 >> >> The bug follows at the end of my previous message: >> >> ? File "/home/nwagner/svn/PyDSTool/PyDSTool/Points.py", >> line 124, in __init__ >> ? ? datalist.append(coorddict[c][0]) >> IndexError: invalid index to scalar variable. > >>From a previous email I sent: > >> python -i pest_test2.py > > [SNIP] > >File "/home/nwagner/svn/PyDSTool/PyDSTool/MProject.py", > line 1209, in Rhs > return self.model.Rhs(t, xdict, pdict) > > [SNIP] > >> line 124, in __init__ >> datalist.append(coorddict[c][0]) >> IndexError: invalid index to scalar variable. > > Looks like the RHS function isn't working. Could you run >this script > in pdb or a debugger, or at least put in some print >statements here to > show me what t, xdict and pdict are when Rhs is called. >And then, > inside Points, what coorddict and c are? This might be >another Python > 2.6 issue because there is some funky magic used to >create some class > methods dynamically. Rob, Sorry but I am newbie to pydb. How do I run the script in pydb ? 
Nils From rob.clewley at gmail.com Wed Mar 18 13:12:59 2009 From: rob.clewley at gmail.com (Rob Clewley) Date: Wed, 18 Mar 2009 13:12:59 -0400 Subject: [SciPy-user] Eigenvalues of the Floquet matrix In-Reply-To: References: Message-ID: >> Looks like the RHS function isn't working. Could you run >>this script >> in pdb or a debugger, or at least put in some print >>statements here to >> show me what t, xdict and pdict are when Rhs is called. >>And then, >> inside Points, what coorddict and c are? This might be >>another Python >> 2.6 issue because there is some funky magic used to >>create some class >> methods dynamically. > > Rob, > > Sorry but I am newbie to pydb. How do I run the script in > pydb ? > > Nils Sorry, I don't use pydb, I use the debugger in Wing IDE. You'll have to work out how to do that yourself. But you can insert some print statements in front of the problem line to show what the state is. And, are you sure you get this error in Python 2.5 too? The error you showed me is from a run in Python 2.6. -Rob From nwagner at iam.uni-stuttgart.de Wed Mar 18 13:17:32 2009 From: nwagner at iam.uni-stuttgart.de (Nils Wagner) Date: Wed, 18 Mar 2009 18:17:32 +0100 Subject: [SciPy-user] Eigenvalues of the Floquet matrix In-Reply-To: References: Message-ID: On Wed, 18 Mar 2009 13:12:59 -0400 Rob Clewley wrote: >>> Looks like the RHS function isn't working. Could you run >>>this script >>> in pdb or a debugger, or at least put in some print >>>statements here to >>> show me what t, xdict and pdict are when Rhs is called. >>>And then, >>> inside Points, what coorddict and c are? This might be >>>another Python >>> 2.6 issue because there is some funky magic used to >>>create some class >>> methods dynamically. >> >> Rob, >> >> Sorry but I am newbie to pydb. How do I run the script >>in >> pydb ? >> >> Nils > > Sorry, I don't use pydb, I use the debugger in Wing IDE. >You'll have > to work out how to do that yourself. O.k. I will do that asap. But you can insert >some print > statements in front of the problem line to show what the >state is. > And, are you sure you get this error in Python 2.5 too? Oops, I have to check that first. >The error you > showed me is from a run in Python 2.6. 
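The state Rob is asking for can also be captured with the pdb module from the standard library instead of pydb or Wing (a sketch; it assumes nothing beyond running the unmodified script):

    $ python -m pdb pest_test2.py
    (Pdb) c             # run; pdb drops into post-mortem at the uncaught IndexError
    (Pdb) p c
    (Pdb) p coorddict   # state inside Points.__init__
    (Pdb) up            # repeat until the frame for MProject.Rhs is reached
    (Pdb) p t, xdict, pdict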
> > -Rob > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user Meanwhile I played with your example : (gdb) run seyranian-PyDSTool.py Starting program: /usr/bin/python seyranian-PyDSTool.py (no debugging symbols found) (no debugging symbols found) (no debugging symbols found) (no debugging symbols found) [Thread debugging using libthread_db enabled] (no debugging symbols found) (no debugging symbols found) (no debugging symbols found) (no debugging symbols found) (no debugging symbols found) /home/nwagner/local/lib64/python2.6/site-packages/Pyrex/Compiler/Errors.py:17: DeprecationWarning: BaseException.message has been deprecated as of Python 2.6 self.message = message [ 0.05714896 0.20918955 0.58088371] [ 3.60595823e-01+0.j -2.29878191e-17+0.j 6.16263995e-02+0.j] [ 0.06162640+0.j 0.36059582+0.j] -3.41421356237 In file included from /usr/include/features.h:354, from /usr/include/limits.h:27, from /usr/lib64/gcc/x86_64-suse-linux/4.3/include-fixed/limits.h:122, from /usr/lib64/gcc/x86_64-suse-linux/4.3/include-fixed/syslimits.h:7, from /usr/lib64/gcc/x86_64-suse-linux/4.3/include-fixed/limits.h:11, from /usr/include/python2.6/Python.h:19, from radau5_temp/src.linux-x86_64-2.6/radau5_temp/radau5_Seyranian_vf_wrap.c:124: /usr/include/gnu/stubs.h:7:27: error: gnu/stubs-32.h: Datei oder Verzeichnis nicht gefunden means "file or directory is missing" Cheers, Nils From nwagner at iam.uni-stuttgart.de Wed Mar 18 15:18:09 2009 From: nwagner at iam.uni-stuttgart.de (Nils Wagner) Date: Wed, 18 Mar 2009 20:18:09 +0100 Subject: [SciPy-user] Eigenvalues of the Floquet matrix In-Reply-To: References: Message-ID: On Wed, 18 Mar 2009 13:12:59 -0400 Rob Clewley wrote: >>> Looks like the RHS function isn't working. Could you run >>>this script >>> in pdb or a debugger, or at least put in some print >>>statements here to >>> show me what t, xdict and pdict are when Rhs is called. >>>And then, >>> inside Points, what coorddict and c are? This might be >>>another Python >>> 2.6 issue because there is some funky magic used to >>>create some class >>> methods dynamically. >> >> Rob, >> >> Sorry but I am newbie to pydb. How do I run the script >>in >> pydb ? >> >> Nils > > Sorry, I don't use pydb, I use the debugger in Wing IDE. >You'll have > to work out how to do that yourself. But you can insert >some print > statements in front of the problem line to show what the >state is. > And, are you sure you get this error in Python 2.5 too? >The error you > showed me is from a run in Python 2.6. > > -Rob > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user Rob, I run the PyDSTool example "run_all_tests.py" on python2.5. Some tests failed, too. These failures seems to be connected to 64 bit. There seems to be another problem wrt fminbound. fminbound(func, x1, x2, args=(), xtol=1.0000000000000001e-05, maxfun=500, full_output=0, disp=1) Bounded minimization for scalar functions. :Parameters: func : callable f(x,*args) Objective function to be minimized (must accept and return scalars). x1, x2 : float or array scalar The optimization bounds. args : tuple Extra arguments passed to function. xtol : float The convergence tolerance. maxfun : int Maximum number of function evaluations allowed. full_output : bool If True, return optional outputs. disp : int If non-zero, print messages. 0 : no message printing. 
1 : non-convergence notification messages only. 2 : print a message on convergence too. 3 : print iteration results. :Returns: (xopt, {fval, ierr, numfunc}) AFAIK the latest change wrt fminbound is r5205 | stefan | 2008-11-29 13:41:23 +0100 (Sa, 29. Nov 2008) | 2 lines In `fminbound`, raise an error if non-scalar bounds are specified [patch by Neil Muller]. Closes #544. Nils From amirnntp at gmail.com Wed Mar 18 15:27:19 2009 From: amirnntp at gmail.com (Amir) Date: Wed, 18 Mar 2009 12:27:19 -0700 (PDT) Subject: [SciPy-user] weave.inline with mpi In-Reply-To: <526f877c1707cec8bb59080b0e24f630.squirrel@webmail.uio.no> References: <62c13bec-a088-41c8-9bbe-43564dea3870@w34g2000yqm.googlegroups.com> <526f877c1707cec8bb59080b0e24f630.squirrel@webmail.uio.no> Message-ID: <3fbc570f-07ed-48be-8b19-4cfcb0564cb9@v38g2000yqb.googlegroups.com> On Mar 16, 10:14?pm, "Sturla Molden" wrote: > > Do I need to worry about weave code generation running into problems > > in this setting? > > Yes, you have multiple processes potentially writing code to the same > file. It can crash or produce strange race conditions. At least you should > run your program with just one process first, to make sure everything is > compiled. As described in the docs, you can build the extension and specify the location using weave.ext_tools. I didn't see this (actually didn't find the docs) until I recoded in cython. From jan.rauberg at gmx.de Wed Mar 18 15:48:45 2009 From: jan.rauberg at gmx.de (Jan Rauberg) Date: Wed, 18 Mar 2009 19:48:45 +0000 (UTC) Subject: [SciPy-user] scipy.signal.firwin Message-ID: I'm missing the functionality of firwin like in matlab/octave fir1, so that I can give a 'low', 'high' and 'stop' option. I don't know how to create a FIR window based high pass filter. Or is there something planned for the future? Thank you Jan From rob.clewley at gmail.com Wed Mar 18 16:16:28 2009 From: rob.clewley at gmail.com (Rob Clewley) Date: Wed, 18 Mar 2009 16:16:28 -0400 Subject: [SciPy-user] Eigenvalues of the Floquet matrix In-Reply-To: References: Message-ID: > > I run the PyDSTool example "run_all_tests.py" on > python2.5. > Some tests failed, too. These failures seems to be > connected to 64 bit. OK, maybe you could send me a full list of failures off-list so that I know what I'll be up against when I get around to making PyDSTool happy with 64 bits. Some of the problems might be compatibility issues with numpy and scipy too. > > There seems to be another problem wrt fminbound. What is the problem? You didn't provide any details of an error, just the function signature :) -Rob From nwagner at iam.uni-stuttgart.de Wed Mar 18 16:30:14 2009 From: nwagner at iam.uni-stuttgart.de (Nils Wagner) Date: Wed, 18 Mar 2009 21:30:14 +0100 Subject: [SciPy-user] Eigenvalues of the Floquet matrix In-Reply-To: References: Message-ID: On Wed, 18 Mar 2009 16:16:28 -0400 Rob Clewley wrote: >> >> I run the PyDSTool example "run_all_tests.py" on >> python2.5. >> Some tests failed, too. These failures seems to be >> connected to 64 bit. > > OK, maybe you could send me a full list of failures >off-list so that I > know what I'll be up against when I get around to making >PyDSTool > happy with 64 bits. I will provide you with further information on Thursday. Do you have access to an x86_64 box ? Some of the problems might be >compatibility issues > with numpy and scipy too. 
> I use
>>> numpy.__version__
'1.3.0.dev6685'
>>> scipy.__version__
'0.8.0.dev5620'

Nils

From cournape at gmail.com  Wed Mar 18 22:43:29 2009
From: cournape at gmail.com (David Cournapeau)
Date: Thu, 19 Mar 2009 11:43:29 +0900
Subject: [SciPy-user] [Announce] Numpy 1.3.0b1
Message-ID: <5b8d13220903181943u32c3604dtbdbbae89ad687c3e@mail.gmail.com>

Hi,

I am pleased to announce the release of the first beta for numpy 1.3.0. You
can find source tarballs and installers for both Mac OS X and Windows on the
sourceforge page:

https://sourceforge.net/projects/numpy/

The release notes for the 1.3.0 release are below.

The Numpy developers

=========================
NumPy 1.3.0 Release Notes
=========================

This minor release includes numerous bug fixes, official python 2.6 support,
and several new features such as generalized ufuncs.

Highlights
==========

Python 2.6 support
~~~~~~~~~~~~~~~~~~

Python 2.6 is now supported on all previously supported platforms, including
windows.

http://www.python.org/dev/peps/pep-0361/

Generalized ufuncs
~~~~~~~~~~~~~~~~~~

http://projects.scipy.org/numpy/ticket/887

Experimental Windows 64 bits support
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Numpy can now be built on windows 64 bits (amd64 only, not IA64), with both MS
compilers and mingw-w64 compilers. This is *highly experimental*: DO NOT USE
FOR PRODUCTION USE. See INSTALL.txt, Windows 64 bits section for more
information on limitations and how to build it by yourself.

New features
============

Formatting issues
~~~~~~~~~~~~~~~~~

Float formatting is now handled by numpy instead of the C runtime: this
enables locale independent formatting, more robust fromstring and related
methods. Special values (inf and nan) are also more consistent across
platforms (nan vs IND/NaN, etc...), and more consistent with recent python
formatting work (in 2.6 and later).

Nan handling in max/min
~~~~~~~~~~~~~~~~~~~~~~~

The maximum/minimum ufuncs now reliably propagate nans. If one of the
arguments is a nan, then nan is returned. This affects np.min/np.max, amin/amax
and the array methods max/min. New ufuncs fmax and fmin have been added to
deal with non-propagating nans.

Nan handling in sign
~~~~~~~~~~~~~~~~~~~~

The ufunc sign now returns nan for the sign of a nan.

New ufuncs
~~~~~~~~~~

#. fmax - same as maximum for integer types and non-nan floats. Returns the
   non-nan argument if one argument is nan and returns nan if both arguments
   are nan.
#. fmin - same as minimum for integer types and non-nan floats. Returns the
   non-nan argument if one argument is nan and returns nan if both arguments
   are nan.
#. deg2rad - converts degrees to radians, same as the radians ufunc.
#. rad2deg - converts radians to degrees, same as the degrees ufunc.
#. log2 - base 2 logarithm.
#. exp2 - base 2 exponential.
#. logaddexp - add numbers stored as logarithms and return the logarithm of
   the result.
#. logaddexp2 - add numbers stored as base 2 logarithms and return the base 2
   logarithm of the result.

Masked arrays
~~~~~~~~~~~~~

TODO

gfortran support on windows
~~~~~~~~~~~~~~~~~~~~~~~~~~~

Gfortran can now be used as a fortran compiler for numpy on windows, even when
the C compiler is Visual Studio (VS 2005 and above; VS 2003 will NOT work).
Gfortran + Visual studio does not work on windows 64 bits (but gcc + gfortran
does). It is unclear whether it will be possible to use gfortran and visual
studio at all on x64.
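As a quick illustration of the nan-handling changes described in the sections above (this assumes the 1.3.0 beta, or anything newer, is the numpy actually being imported):

    import numpy as np
    np.maximum(1.0, np.nan)   # maximum/minimum now propagate the nan -> nan
    np.fmax(1.0, np.nan)      # the new fmax/fmin ignore a single nan -> 1.0
    np.sign(np.nan)           # sign of a nan is now nan              -> nan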
Arch option for windows binary ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Automatic arch detection can now be bypassed from the command line for the superpack installed: numpy-1.3.0-superpack-win32.exe /arch=nosse will install a numpy which works on any x86, even if the running computer supports SSE set. Deprecated features =================== Histogram ~~~~~~~~~ The semantics of histogram has been modified to fix long-standing issues with outliers handling. The main changes concern #. the definition of the bin edges, now including the rightmost edge, and #. the handling of upper outliers, now ignored rather than tallied in the rightmost bin. The previous behavior is still accessible using `new=False`, but this is deprecated, and will be removed entirely in 1.4.0. Documentation changes ===================== A lot of documentation improvements. New C API ========= Multiarray API ~~~~~~~~~~~~~~ The following functions have been added to the multiarray C API: * PyArray_GetEndianness: to get runtime endianness New defines ~~~~~~~~~~~ New public C defines are available for ARCH specific code through numpy/npy_cpu.h: * NPY_CPU_X86: x86 arch (32 bits) * NPY_CPU_AMD64: amd64 arch (x86_64, NOT Itanium) * NPY_CPU_PPC: 32 bits ppc * NPY_CPU_PPC64: 64 bits ppc * NPY_CPU_SPARC: 32 bits sparc * NPY_CPU_SPARC64: 64 bits sparc * NPY_CPU_S390: S390 * NPY_CPU_PARISC: PARISC New macros for CPU endianness has been added as well (see internal changes below for details): * NPY_BYTE_ORDER: integer * NPY_LITTLE_ENDIAN/NPY_BIG_ENDIAN defines Those provide portable alternatives to glibc endian.h macros for platforms without it. Portable NAN, INFINITY, etc... ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ npy_math.h now makes available several portable macro to get NAN, INFINITY: * NPY_NAN: equivalent to NAN, which is a GNU extension * NPY_INFINITY: equivalent to C99 INFINITY * NPY_PZERO, NPY_NZERO: positive and negative zero respectively Corresponding single and extended precision macros are available as well. All references to NAN, or home-grown computation of NAN on the fly have been removed for consistency. Internal changes ================ numpy.core math configuration revamp ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ This should make the porting to new platforms easier, and more robust. In particular, the configuration stage does not need to execute any code on the target platform, which is a first step toward cross-compilation. http://projects.scipy.org/numpy/browser/trunk/doc/neps/math_config_clean.txt umath refactor ~~~~~~~~~~~~~~ A lot of code cleanup for umath/ufunc code (charris). Improvements to build warnings ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Numpy can now build with -W -Wall without warnings http://projects.scipy.org/numpy/browser/trunk/doc/neps/warnfix.txt Separate core math library ~~~~~~~~~~~~~~~~~~~~~~~~~~ The core math functions (sin, cos, etc... for basic C types) have been put into a separate library; it acts as a compatibility layer, to support most C99 maths functions (real only for now). The library includes platform-specific fixes for various maths functions, such as using those versions should be more robust than using your platform functions directly. The API for existing functions is exactly the same as the C99 math functions API; the only difference is the npy prefix (npy_cos vs cos). The core library will be made available to any extension in 1.4.0. CPU arch detection ~~~~~~~~~~~~~~~~~~ npy_cpu.h defines numpy specific CPU defines, such as NPY_CPU_X86, etc... 
Those are portable across OS and toolchains, and set up when the header is parsed, so that they can be safely used even in the case of cross-compilation (the values is not set when numpy is built), or for multi-arch binaries (e.g. fat binaries on Max OS X). npy_endian.h defines numpy specific endianness defines, modeled on the glibc endian.h. NPY_BYTE_ORDER is equivalent to BYTE_ORDER, and one of NPY_LITTLE_ENDIAN or NPY_BIG_ENDIAN is defined. As for CPU archs, those are set when the header is parsed by the compiler, and as such can be used for cross-compilation and multi-arch binaries. From c.j.lee at tnw.utwente.nl Thu Mar 19 07:21:01 2009 From: c.j.lee at tnw.utwente.nl (Chris Lee) Date: Thu, 19 Mar 2009 12:21:01 +0100 Subject: [SciPy-user] binary installation question Message-ID: Hi All, I am wondering if the binary installation packages that are available for Mac OS X and Windows are bright enough to link to libraries when they are available (e.g. umfpack)? I don't really want to have to do the whole config build install dance (which always seems to require some debugging for me) Cheers Chris *************************************************** Chris Lee Laser Physics and Nonlinear Optics Group MESA+ Research Institute for Nanotechnology University of Twente Phone: ++31 (0)53 489 3968 fax: ++31 (0)53 489 1102 *************************************************** -------------- next part -------------- An HTML attachment was scrubbed... URL: From ndbecker2 at gmail.com Thu Mar 19 08:48:10 2009 From: ndbecker2 at gmail.com (Neal Becker) Date: Thu, 19 Mar 2009 08:48:10 -0400 Subject: [SciPy-user] 1.3.0b1 howto test Message-ID: howto test without installing? PYTHONPATH=~/numpy-1.3.0b1/build/lib.linux-x86_64-2.5/ python -c 'import numpy; numpy.test()' Running unit tests for numpy NumPy version 1.3.0b1 NumPy is installed in /home/nbecker/numpy-1.3.0b1/build/lib.linux- x86_64-2.5/numpy Python version 2.5.2 (r252:60911, Sep 30 2008, 15:42:03) [GCC 4.3.2 20080917 (Red Hat 4.3.2-4)] nose version 0.10.3 ---------------------------------------------------------------------- Ran 0 tests in 0.004s OK Well, found numpy 1.3.0 OK, but what about the tests? 
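Coming back to Chris's question above about the binary installers: an easy first check is to ask the installed packages what they were built against (a sketch; show_config only reports what the build process recorded, and whether an optional library such as umfpack appears depends entirely on how that particular binary was built):

    import numpy, scipy
    numpy.show_config()   # BLAS/LAPACK the binary was linked against
    scipy.show_config()   # optional libraries (e.g. umfpack) appear here only if found at build time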
From ndbecker2 at gmail.com Thu Mar 19 08:53:19 2009 From: ndbecker2 at gmail.com (Neal Becker) Date: Thu, 19 Mar 2009 08:53:19 -0400 Subject: [SciPy-user] 1.3.0b1 OK Message-ID: Tested using virtualenv fedora f10 x86_64 ~/python/bin/python -c 'import numpy; numpy.test()' Running unit tests for numpy NumPy version 1.2.0 NumPy is installed in /usr/lib64/python2.5/site-packages/numpy Python version 2.5.2 (r252:60911, Sep 30 2008, 15:42:03) [GCC 4.3.2 20080917 (Red Hat 4.3.2-4)] nose version 0.10.3 ...........................................................................................................................................................................................................................................................................................................................................................................................................................................................K.................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................. ---------------------------------------------------------------------- Ran 1726 tests in 5.921s OK (KNOWNFAIL=1) From cournape at gmail.com Thu Mar 19 09:04:51 2009 From: cournape at gmail.com (David Cournapeau) Date: Thu, 19 Mar 2009 22:04:51 +0900 Subject: [SciPy-user] 1.3.0b1 howto test In-Reply-To: References: Message-ID: <5b8d13220903190604y47d9e025q1817484ef225b363@mail.gmail.com> On Thu, Mar 19, 2009 at 9:48 PM, Neal Becker wrote: > howto test without installing? > > ?PYTHONPATH=~/numpy-1.3.0b1/build/lib.linux-x86_64-2.5/ python -c 'import > numpy; numpy.test()' > Running unit tests for numpy > NumPy version 1.3.0b1 > NumPy is installed in /home/nbecker/numpy-1.3.0b1/build/lib.linux- > x86_64-2.5/numpy > Python version 2.5.2 (r252:60911, Sep 30 2008, 15:42:03) [GCC 4.3.2 20080917 > (Red Hat 4.3.2-4)] > nose version 0.10.3 The cause is usually that your scripts are not executable - and nose refuse to test them. 
This happens when the so called "easy" install script is used to install Install the standard way, and you will avoid those problems, David From andrmuel at ee.ethz.ch Thu Mar 19 09:05:07 2009 From: andrmuel at ee.ethz.ch (Andreas Mueller) Date: Thu, 19 Mar 2009 14:05:07 +0100 Subject: [SciPy-user] Problem with signal.correlate and sparse matrices Message-ID: <20090319130507.GA6519@anywhere.open.ch> Hello! I'm trying to use signal.correlate with sparse matrices, but I can't understand the results I get. As soon as I use sparse matrices, the results are weird, e.g. (my actual arrays are much larger of course): In [1]: from scipy import signal,sparse,array,matrix In [2]: a = array([[1,2,3],[4,5,6]]) In [3]: b = array([[1,1],[1,1]]) In [4]: print signal.correlate(a,b,mode='valid') [[12 16]] -> OK, thats what I would expect to get In [5]: ma = matrix(a) In [6]: mb = matrix(b) In [7]: print signal.correlate(ma,mb,mode='valid') [[12 16]] -> OK, same result In [8]: sa = sparse.lil_matrix(ma) In [9]: sb = sparse.lil_matrix(mb) In [10]: print signal.correlate(sa,sb,mode='valid') [..] ValueError: dimension mismatch -> why? After some trying around, I noticed that it works with the first matrix transposed: In [11]: print signal.correlate(sa.T,sb,mode='valid') (0, 1) 5 (0, 0) 5 (1, 1) 7 (1, 0) 7 (2, 1) 9 (2, 0) 9 But the result is weird. Obviously, 5+7=12 and 7+9=16, which is the result I expect, but what am I doing (or thinking) wrong? Is signal.correlate supported with sparse matrices? I'm pretty much a SciPy newbie and haven't used matrices before, but there seems to be no such thing as sparse arrays, so I guess I have no choice .. Regards, Andreas PS I tried this both on SciPy 0.6.0 (Ubuntu 8.10) and SciPy 0.7.0 (Gentoo), with the same results (expect I get a shape mismatch error instead of a dimension mismatch with 0.6.0). -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: application/pgp-signature Size: 197 bytes Desc: not available URL: From lev at columbia.edu Thu Mar 19 12:03:25 2009 From: lev at columbia.edu (Lev Givon) Date: Thu, 19 Mar 2009 12:03:25 -0400 Subject: [SciPy-user] scipy.signal.firwin In-Reply-To: References: Message-ID: <20090319160325.GA29170@avicenna.ee.columbia.edu> Received from Jan Rauberg on Wed, Mar 18, 2009 at 03:48:45PM EDT: > I'm missing the functionality of firwin like in matlab/octave fir1, so that I > can give a 'low', 'high' and 'stop' option. I don't know how to create a FIR > window based high pass filter. Or is there something planned for the future? > > Thank you > Jan If you want to create a high-pass filter, just specify the cutoff to firwin and flip the signs of the obtained coefficients, i.e., b = -firwin(N,cutoff) L.G. From robert.kern at gmail.com Thu Mar 19 13:34:35 2009 From: robert.kern at gmail.com (Robert Kern) Date: Thu, 19 Mar 2009 12:34:35 -0500 Subject: [SciPy-user] 1.3.0b1 howto test In-Reply-To: <5b8d13220903190604y47d9e025q1817484ef225b363@mail.gmail.com> References: <5b8d13220903190604y47d9e025q1817484ef225b363@mail.gmail.com> Message-ID: <3d375d730903191034q2bfa263p4eef678e277e8442@mail.gmail.com> On Thu, Mar 19, 2009 at 08:04, David Cournapeau wrote: > On Thu, Mar 19, 2009 at 9:48 PM, Neal Becker wrote: >> howto test without installing? 
>> >> ?PYTHONPATH=~/numpy-1.3.0b1/build/lib.linux-x86_64-2.5/ python -c 'import >> numpy; numpy.test()' >> Running unit tests for numpy >> NumPy version 1.3.0b1 >> NumPy is installed in /home/nbecker/numpy-1.3.0b1/build/lib.linux- >> x86_64-2.5/numpy >> Python version 2.5.2 (r252:60911, Sep 30 2008, 15:42:03) [GCC 4.3.2 20080917 >> (Red Hat 4.3.2-4)] >> nose version 0.10.3 > > The cause is usually that your scripts are not executable - and nose > refuse to test them. This happens when the so called "easy" install > script is used to install NumPy is installed in /home/nbecker/numpy-1.3.0b1/build/lib.linux- x86_64-2.5/numpy -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From ndbecker2 at gmail.com Thu Mar 19 13:49:21 2009 From: ndbecker2 at gmail.com (Neal Becker) Date: Thu, 19 Mar 2009 13:49:21 -0400 Subject: [SciPy-user] 1.3.0b1 howto test References: <5b8d13220903190604y47d9e025q1817484ef225b363@mail.gmail.com> <3d375d730903191034q2bfa263p4eef678e277e8442@mail.gmail.com> Message-ID: Robert Kern wrote: > On Thu, Mar 19, 2009 at 08:04, David Cournapeau > wrote: >> On Thu, Mar 19, 2009 at 9:48 PM, Neal Becker wrote: >>> howto test without installing? >>> >>> PYTHONPATH=~/numpy-1.3.0b1/build/lib.linux-x86_64-2.5/ python -c 'import >>> numpy; numpy.test()' >>> Running unit tests for numpy >>> NumPy version 1.3.0b1 >>> NumPy is installed in /home/nbecker/numpy-1.3.0b1/build/lib.linux- >>> x86_64-2.5/numpy >>> Python version 2.5.2 (r252:60911, Sep 30 2008, 15:42:03) [GCC 4.3.2 >>> 20080917 (Red Hat 4.3.2-4)] >>> nose version 0.10.3 >> >> The cause is usually that your scripts are not executable - and nose >> refuse to test them. This happens when the so called "easy" install >> script is used to install > > NumPy is installed in /home/nbecker/numpy-1.3.0b1/build/lib.linux- > x86_64-2.5/numpy > I think the easiest way to test numpy without installing into system is to use virtualenv? That's what I did. If this is the best choice, maybe mention this in the documentation. From robert.kern at gmail.com Thu Mar 19 13:51:27 2009 From: robert.kern at gmail.com (Robert Kern) Date: Thu, 19 Mar 2009 12:51:27 -0500 Subject: [SciPy-user] 1.3.0b1 howto test In-Reply-To: References: <5b8d13220903190604y47d9e025q1817484ef225b363@mail.gmail.com> <3d375d730903191034q2bfa263p4eef678e277e8442@mail.gmail.com> Message-ID: <3d375d730903191051o39ba11c3v1647458ae340bb71@mail.gmail.com> On Thu, Mar 19, 2009 at 12:49, Neal Becker wrote: > I think the easiest way to test numpy without installing into system is to > use virtualenv? ?That's what I did. ?If this is the best choice, maybe > mention this in the documentation. I build in-place. -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." 
-- Umberto Eco From ndbecker2 at gmail.com Thu Mar 19 14:17:57 2009 From: ndbecker2 at gmail.com (Neal Becker) Date: Thu, 19 Mar 2009 14:17:57 -0400 Subject: [SciPy-user] 1.3.0b1 howto test References: <5b8d13220903190604y47d9e025q1817484ef225b363@mail.gmail.com> <3d375d730903191034q2bfa263p4eef678e277e8442@mail.gmail.com> <3d375d730903191051o39ba11c3v1647458ae340bb71@mail.gmail.com> Message-ID: Robert Kern wrote: > On Thu, Mar 19, 2009 at 12:49, Neal Becker wrote: > >> I think the easiest way to test numpy without installing into system is >> to use virtualenv? That's what I did. If this is the best choice, maybe >> mention this in the documentation. > > I build in-place. > Sorry, what does this mean? How do you test? From robert.kern at gmail.com Thu Mar 19 14:50:37 2009 From: robert.kern at gmail.com (Robert Kern) Date: Thu, 19 Mar 2009 13:50:37 -0500 Subject: [SciPy-user] 1.3.0b1 howto test In-Reply-To: References: <5b8d13220903190604y47d9e025q1817484ef225b363@mail.gmail.com> <3d375d730903191034q2bfa263p4eef678e277e8442@mail.gmail.com> <3d375d730903191051o39ba11c3v1647458ae340bb71@mail.gmail.com> Message-ID: <3d375d730903191150r3cd8aecfl9ab5ec13da0248e0@mail.gmail.com> On Thu, Mar 19, 2009 at 13:17, Neal Becker wrote: > Robert Kern wrote: > >> On Thu, Mar 19, 2009 at 12:49, Neal Becker wrote: >> >>> I think the easiest way to test numpy without installing into system is >>> to use virtualenv? ?That's what I did. ?If this is the best choice, maybe >>> mention this in the documentation. >> >> I build in-place. >> > > Sorry, what does this mean? ?How do you test? $ cd ~/svn/numpy $ python setup.py build_src --inplace build_ext --inplace ... $ python -c "import numpy; numpy.test()" -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From david at ar.media.kyoto-u.ac.jp Fri Mar 20 01:38:31 2009 From: david at ar.media.kyoto-u.ac.jp (David Cournapeau) Date: Fri, 20 Mar 2009 14:38:31 +0900 Subject: [SciPy-user] 1.3.0b1 OK In-Reply-To: References: Message-ID: <49C32BD7.9080409@ar.media.kyoto-u.ac.jp> Neal Becker wrote: > Tested using virtualenv > fedora f10 x86_64 > > ~/python/bin/python -c 'import numpy; numpy.test()' > Running unit tests for numpy > NumPy version 1.2.0 You are not testing the version you think you are testing here, I believe. David From jan.rauberg at gmx.de Fri Mar 20 03:25:59 2009 From: jan.rauberg at gmx.de (Jan Rauberg) Date: Fri, 20 Mar 2009 07:25:59 +0000 (UTC) Subject: [SciPy-user] scipy.signal.firwin References: <20090319160325.GA29170@avicenna.ee.columbia.edu> Message-ID: Lev Givon columbia.edu> writes: > > Received from Jan Rauberg on Wed, Mar 18, 2009 at 03:48:45PM EDT: > > I'm missing the functionality of firwin like in matlab/octave fir1, so that I > > can give a 'low', 'high' and 'stop' option. I don't know how to create a FIR > > window based high pass filter. Or is there something planned for the future? > > > > Thank you > > Jan > > If you want to create a high-pass filter, just specify the cutoff to > firwin and flip the signs of the obtained coefficients, i.e., > > b = -firwin(N,cutoff) > > L.G. > Thank you for the fast response. But in the way as you described I get an inverted low pass figure. That's not a high pass and no solution for my problem too. Perhaps there is another solution? J.R. 
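Besides the spectrum-inversion and frequency-shift tricks that come up in the replies below, newer SciPy releases extended firwin itself: assuming a version recent enough that firwin accepts a pass_zero argument (the 0.7-era firwin discussed here does not), a window-based high-pass or band-pass can be requested directly:

    from scipy import signal
    h_hp = signal.firwin(11, 0.3, pass_zero=False)          # high-pass, cutoff 0.3*Nyquist, odd length
    h_bp = signal.firwin(11, [0.2, 0.5], pass_zero=False)   # band-pass between 0.2 and 0.5*Nyquist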
From ndbecker2 at gmail.com Fri Mar 20 07:24:45 2009 From: ndbecker2 at gmail.com (Neal Becker) Date: Fri, 20 Mar 2009 07:24:45 -0400 Subject: [SciPy-user] 1.3.0b1 howto test References: <5b8d13220903190604y47d9e025q1817484ef225b363@mail.gmail.com> <3d375d730903191034q2bfa263p4eef678e277e8442@mail.gmail.com> <3d375d730903191051o39ba11c3v1647458ae340bb71@mail.gmail.com> <3d375d730903191150r3cd8aecfl9ab5ec13da0248e0@mail.gmail.com> Message-ID: Robert Kern wrote: > On Thu, Mar 19, 2009 at 13:17, Neal Becker wrote: >> Robert Kern wrote: >> >>> On Thu, Mar 19, 2009 at 12:49, Neal Becker wrote: >>> >>>> I think the easiest way to test numpy without installing into system is >>>> to use virtualenv? That's what I did. If this is the best choice, >>>> maybe mention this in the documentation. >>> >>> I build in-place. >>> >> >> Sorry, what does this mean? How do you test? > > $ cd ~/svn/numpy > $ python setup.py build_src --inplace build_ext --inplace > ... > $ python -c "import numpy; numpy.test()" > That doesn't work at all here. All I get is: python -c "import numpy; numpy.test()" Running unit tests for numpy NumPy version 1.3.0b1 NumPy is installed in numpy Python version 2.5.2 (r252:60911, Sep 30 2008, 15:42:03) [GCC 4.3.2 20080917 (Red Hat 4.3.2-4)] nose version 0.10.3 EEEEEE.EE.EEEEEEEEE.EEEEEEEE.EEEEEEE.EEEEEEE.EEEEEEE.EEEEEEE.EEEEEEE.EEEEEEE.EEEEEEE.EEEEEEE....EEEEEEEEE.EEEEEEE.EEEEEEE.EEEEEE.EEEEEEEE...E.............EEEEEEEEEEEEEEE.....EEEE ====================================================================== ERROR: Failure: NameError (name 'numeric' is not defined) [LOTS more errors like that...] From cournape at gmail.com Fri Mar 20 07:32:19 2009 From: cournape at gmail.com (David Cournapeau) Date: Fri, 20 Mar 2009 20:32:19 +0900 Subject: [SciPy-user] 1.3.0b1 howto test In-Reply-To: References: <5b8d13220903190604y47d9e025q1817484ef225b363@mail.gmail.com> <3d375d730903191034q2bfa263p4eef678e277e8442@mail.gmail.com> <3d375d730903191051o39ba11c3v1647458ae340bb71@mail.gmail.com> <3d375d730903191150r3cd8aecfl9ab5ec13da0248e0@mail.gmail.com> Message-ID: <5b8d13220903200432x5801c412m83c3caead9134d74@mail.gmail.com> On Fri, Mar 20, 2009 at 8:24 PM, Neal Becker wrote: > EEEEEE.EE.EEEEEEEEE.EEEEEEEE.EEEEEEE.EEEEEEE.EEEEEEE.EEEEEEE.EEEEEEE.EEEEEEE.EEEEEEE.EEEEEEE....EEEEEEEEE.EEEEEEE.EEEEEEE.EEEEEE.EEEEEEEE...E.............EEEEEEEEEEEEEEE.....EEEE > ====================================================================== > ERROR: Failure: NameError (name 'numeric' is not defined) > [LOTS more errors like that...] After the in-place build, you should run the testsuite in a directory *outside* numpy sources. 
David From ndbecker2 at gmail.com Fri Mar 20 08:21:22 2009 From: ndbecker2 at gmail.com (Neal Becker) Date: Fri, 20 Mar 2009 08:21:22 -0400 Subject: [SciPy-user] 1.3.0b1 howto test References: <5b8d13220903190604y47d9e025q1817484ef225b363@mail.gmail.com> <3d375d730903191034q2bfa263p4eef678e277e8442@mail.gmail.com> <3d375d730903191051o39ba11c3v1647458ae340bb71@mail.gmail.com> <3d375d730903191150r3cd8aecfl9ab5ec13da0248e0@mail.gmail.com> <5b8d13220903200432x5801c412m83c3caead9134d74@mail.gmail.com> Message-ID: David Cournapeau wrote: > On Fri, Mar 20, 2009 at 8:24 PM, Neal Becker wrote: > >> EEEEEE.EE.EEEEEEEEE.EEEEEEEE.EEEEEEE.EEEEEEE.EEEEEEE.EEEEEEE.EEEEEEE.EEEEEEE.EEEEEEE.EEEEEEE....EEEEEEEEE.EEEEEEE.EEEEEEE.EEEEEE.EEEEEEEE...E.............EEEEEEEEEEEEEEE.....EEEE >> ====================================================================== >> ERROR: Failure: NameError (name 'numeric' is not defined) >> [LOTS more errors like that...] > > After the in-place build, you should run the testsuite in a directory > *outside* numpy sources. > > David Perhaps this procedure could be documented somewhere? PYTHONPATH=numpy-1.3.0b1 python -c "import numpy; numpy.test()" Running unit tests for numpy NumPy version 1.3.0b1 NumPy is installed in /home/nbecker/numpy-1.3.0b1/numpy Python version 2.5.2 (r252:60911, Sep 30 2008, 15:42:03) [GCC 4.3.2 20080917 (Red Hat 4.3.2-4)] nose version 0.10.3 Ran 2029 tests in 6.837s OK (KNOWNFAIL=1) From dave.hirschfeld at gmail.com Fri Mar 20 09:34:45 2009 From: dave.hirschfeld at gmail.com (Dave) Date: Fri, 20 Mar 2009 13:34:45 +0000 (UTC) Subject: [SciPy-user] =?utf-8?q?cdf_and_integration_for_multivariate_norma?= =?utf-8?q?l=09distribution_in_stats=2Ekde?= References: <1cd32cbb0901101829w3890212cwa18694d958b6700b@mail.gmail.com> <3d375d730901101835u4e644898n50424f436b545775@mail.gmail.com> <1cd32cbb0901101853q4696aeb4l7e21c10aee1c92ee@mail.gmail.com> Message-ID: gmail.com> writes: > > On Sat, Jan 10, 2009 at 9:35 PM, Robert Kern gmail.com> wrote: > > On Sat, Jan 10, 2009 at 20:29, gmail.com> wrote: > >> I found the fortran code for rectangular integration of the > >> multivariate normal distribution in stats kde, which can be used to > >> calculate the cdf. > >> > >> I didn't see this function exposed anywhere in scipy. Did I miss it? > > > > No, I didn't expose it. Not for any particular reason; it's just that > > the only use case I had was the KDE stuff. > > > > I will add it to stats.distributions when I find time to clean it up > and add tests. > > mvn cdf will be useful to construct normal copulas. > > Josef > I'm currently using the mvncdf.py code you posted to the list but unless I'm mistaken it hasn't made the cut for 0.7 and there's no enhancement ticket for it in trac. It's a very useful function for me (and others I'm sure) so it would be a shame for it to just be forgotten as an list attachment. Should I file an enhancement ticket? From josef.pktd at gmail.com Fri Mar 20 10:55:37 2009 From: josef.pktd at gmail.com (josef.pktd at gmail.com) Date: Fri, 20 Mar 2009 10:55:37 -0400 Subject: [SciPy-user] cdf and integration for multivariate normal distribution in stats.kde In-Reply-To: References: <1cd32cbb0901101829w3890212cwa18694d958b6700b@mail.gmail.com> <3d375d730901101835u4e644898n50424f436b545775@mail.gmail.com> <1cd32cbb0901101853q4696aeb4l7e21c10aee1c92ee@mail.gmail.com> Message-ID: <1cd32cbb0903200755o3653ed82v2b9040fcd40288f1@mail.gmail.com> On Fri, Mar 20, 2009 at 9:34 AM, Dave wrote: > ? 
gmail.com> writes: > >> >> On Sat, Jan 10, 2009 at 9:35 PM, Robert Kern gmail.com> > wrote: >> > On Sat, Jan 10, 2009 at 20:29, ? gmail.com> wrote: >> >> I found the fortran code for rectangular integration of the >> >> multivariate normal distribution in stats kde, which can be used to >> >> calculate the cdf. >> >> >> >> I didn't see this function exposed anywhere in scipy. Did I miss it? >> > >> > No, I didn't expose it. Not for any particular reason; it's just that >> > the only use case I had was the KDE stuff. >> > >> >> I will add it to stats.distributions when I find time to clean it up >> and add tests. >> >> mvn cdf will be useful to construct normal copulas. >> >> Josef >> > > I'm currently using the mvncdf.py code you posted to the list but unless I'm > mistaken it hasn't made the cut for 0.7 and there's no enhancement ticket for > it in trac. It's a very useful function for me (and others I'm sure) so it > would be a shame for it to just be forgotten as an list attachment. Should I > file an enhancement ticket? > I'm glad you find it useful. I had added the ticket: http://projects.scipy.org/scipy/ticket/846 but there was a typo in the title/summary so searching for "multivariate" didn't find it. Since you seem to be the first user of it, can you provide any comments on the API. Are the functions useful in the way they are, or what changes would yo recommend? You could add some comments to the trac ticket. I wrote these functions pretty fast and they need a review before they can be included. With the random number generator in numpy and this cdf, we would have most elements for a full multivariate normal class, but in this ticket, I just wanted to make it easier to access the fortran code. Thanks, Josef From HAWRYLA at novachem.com Fri Mar 20 12:33:38 2009 From: HAWRYLA at novachem.com (Andrew Hawryluk) Date: Fri, 20 Mar 2009 10:33:38 -0600 Subject: [SciPy-user] sparse.lil_matrix accepts array assignments to elements Message-ID: <48C01AE7354EC240A26F19CEB995E943033AF0BF@CHMAILMBX01.novachem.com> Hi, we noticed an unexpected behaviour in sparse.lil_matrix today. The following runs without errors: import numpy as np from scipy import sparse a = sparse.lil_matrix((10,10)) a[0,0] = np.array([3.1415926,3.1415926]) I would have expected some error ('angry red text' in IDLE) when I tried to assign a 1D array to a single element of the sparse array, but no error occurs. (This assignment was a bug in our code.) We found the error later when we tried a.tocsr(). Should there be a type-check of some sort in sparse.lil_matrix to prevent this? Andrew From bouloumag at gmail.com Fri Mar 20 23:16:34 2009 From: bouloumag at gmail.com (Darcoux Christine) Date: Fri, 20 Mar 2009 23:16:34 -0400 Subject: [SciPy-user] Monotone Interpolation Message-ID: <477ac73a0903202016va2f087bn283a9de9226a83cc@mail.gmail.com> Hi, I would like to interpolate 1D array. My data are monotonic and the interpolation method should also be monotonic if I want stability guaranteed in my application. I would like to know if there is a simple way to get third order monotone interpolation with scipy (for example monotone cubic spline interpolation like this http://en.wikipedia.org/wiki/Monotone_cubic_interpolation) ? Regards, Christine -------------- next part -------------- An HTML attachment was scrubbed... 
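One way to get the monotone cubic interpolation Christine asks about, sketched here on made-up data and assuming a SciPy recent enough to ship a PCHIP (shape-preserving cubic Hermite) interpolator, which the 0.6/0.7 releases discussed on this list do not:

    import numpy as np
    from scipy.interpolate import PchipInterpolator   # shape-preserving piecewise cubic

    x = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
    y = np.array([0.0, 0.1, 0.9, 1.0, 1.0])           # monotone data
    p = PchipInterpolator(x, y)
    xi = np.linspace(0.0, 4.0, 50)
    yi = p(xi)                                        # stays monotone, no overshoot between knots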
URL: From david at ar.media.kyoto-u.ac.jp Sat Mar 21 00:31:54 2009 From: david at ar.media.kyoto-u.ac.jp (David Cournapeau) Date: Sat, 21 Mar 2009 13:31:54 +0900 Subject: [SciPy-user] 1.3.0b1 howto test In-Reply-To: References: <5b8d13220903190604y47d9e025q1817484ef225b363@mail.gmail.com> <3d375d730903191034q2bfa263p4eef678e277e8442@mail.gmail.com> <3d375d730903191051o39ba11c3v1647458ae340bb71@mail.gmail.com> <3d375d730903191150r3cd8aecfl9ab5ec13da0248e0@mail.gmail.com> <5b8d13220903200432x5801c412m83c3caead9134d74@mail.gmail.com> Message-ID: <49C46DBA.4000203@ar.media.kyoto-u.ac.jp> Neal Becker wrote: > David Cournapeau wrote: > > >> On Fri, Mar 20, 2009 at 8:24 PM, Neal Becker wrote: >> >> > EEEEEE.EE.EEEEEEEEE.EEEEEEEE.EEEEEEE.EEEEEEE.EEEEEEE.EEEEEEE.EEEEEEE.EEEEEEE.EEEEEEE.EEEEEEE....EEEEEEEEE.EEEEEEE.EEEEEEE.EEEEEE.EEEEEEEE...E.............EEEEEEEEEEEEEEE.....EEEE > > ====================================================================== > >>> ERROR: Failure: NameError (name 'numeric' is not defined) >>> [LOTS more errors like that...] >>> >> After the in-place build, you should run the testsuite in a directory >> *outside* numpy sources. >> >> David >> > > Perhaps this procedure could be documented somewhere? Yes, it should. Actually, the whole installation documentation would greatly benefit from a significant overhaul. No-one has stepped up to do this, though, cheers, David From ndbecker2 at gmail.com Sat Mar 21 09:48:07 2009 From: ndbecker2 at gmail.com (Neal Becker) Date: Sat, 21 Mar 2009 09:48:07 -0400 Subject: [SciPy-user] scipy.signal.firwin References: <20090319160325.GA29170@avicenna.ee.columbia.edu> Message-ID: Jan Rauberg wrote: > Lev Givon columbia.edu> writes: > >> >> Received from Jan Rauberg on Wed, Mar 18, 2009 at 03:48:45PM EDT: >> > I'm missing the functionality of firwin like in matlab/octave fir1, so >> > that I can give a 'low', 'high' and 'stop' option. I don't know how to >> > create a FIR window based high pass filter. Or is there something >> > planned for the future? >> > >> > Thank you >> > Jan >> >> If you want to create a high-pass filter, just specify the cutoff to >> firwin and flip the signs of the obtained coefficients, i.e., >> >> b = -firwin(N,cutoff) >> >> L.G. >> > > > Thank you for the fast response. But in the way as you described I get an > inverted low pass figure. That's not a high pass and no solution for my > problem too. Perhaps there is another solution? J.R. A trivial (not ideal) solution is G(z) = 1-H(z) From tpk at kraussfamily.org Sat Mar 21 13:46:14 2009 From: tpk at kraussfamily.org (Tom K.) Date: Sat, 21 Mar 2009 10:46:14 -0700 (PDT) Subject: [SciPy-user] scipy.signal.firwin In-Reply-To: References: <20090319160325.GA29170@avicenna.ee.columbia.edu> Message-ID: <22638541.post@talk.nabble.com> Bugzilla from ndbecker2 at gmail.com wrote: > > Jan Rauberg wrote: > >> Lev Givon columbia.edu> writes: >> >>> > >>> If you want to create a high-pass filter, just specify the cutoff to >>> firwin and flip the signs of the obtained coefficients, i.e., >>> >>> b = -firwin(N,cutoff) >>> >>> L.G. >>> >> >> >> Thank you for the fast response. But in the way as you described I get an >> inverted low pass figure. That's not a high pass and no solution for my >> problem too. Perhaps there is another solution? J.R. > > A trivial (not ideal) solution is G(z) = 1-H(z) > > Simple inversion won't do the trick - you can invert the spectrum as Jan suggests, or shift the entire spectrum by pi radians. 
Example: from scipy import signal import numpy as np N=11 h=signal.firwin(11, .3) n=np.arange(-(N/2), (N/2)+1) 1) spectrum inversion G(z) = 1-H(z) How: Subtract original filter from a kronecker delta function. g=(n==1)-h 2) shift entire spectrum by pi radians How: Elementwise multiply by (-1)**n g1=(-1)**n*h The resulting highpass filters will have different characteristics: g is complementary filter to h (bandwidth is pi-bandwidth(h)) g1 has same bandwidth as h These techniques won't work for even length filters, but you probably won't want an even length highpass filter anyway because of the null at pi radians. - Tom K. -- View this message in context: http://www.nabble.com/scipy.signal.firwin-tp22587139p22638541.html Sent from the Scipy-User mailing list archive at Nabble.com. From tpk at kraussfamily.org Sat Mar 21 13:49:52 2009 From: tpk at kraussfamily.org (Tom K.) Date: Sat, 21 Mar 2009 10:49:52 -0700 (PDT) Subject: [SciPy-user] scipy.signal.firwin In-Reply-To: <22638541.post@talk.nabble.com> References: <20090319160325.GA29170@avicenna.ee.columbia.edu> <22638541.post@talk.nabble.com> Message-ID: <22638571.post@talk.nabble.com> Tom K. wrote: > > > Bugzilla from ndbecker2 at gmail.com wrote: >> >> Jan Rauberg wrote: >> >>> Lev Givon columbia.edu> writes: >>> >>>> >> >>>> If you want to create a high-pass filter, just specify the cutoff to >>>> firwin and flip the signs of the obtained coefficients, i.e., >>>> >>>> b = -firwin(N,cutoff) >>>> >>>> L.G. >>>> >>> >>> >>> Thank you for the fast response. But in the way as you described I get >>> an >>> inverted low pass figure. That's not a high pass and no solution for my >>> problem too. Perhaps there is another solution? J.R. >> >> A trivial (not ideal) solution is G(z) = 1-H(z) >> >> > > Simple inversion won't do the trick - you can invert the spectrum as Jan > suggests, or shift the entire spectrum by pi radians. > > Example: > > from scipy import signal > import numpy as np > > N=11 > h=signal.firwin(11, .3) > n=np.arange(-(N/2), (N/2)+1) > > 1) spectrum inversion G(z) = 1-H(z) > How: Subtract original filter from a kronecker delta function. > g=(n==1)-h > > 2) shift entire spectrum by pi radians > How: Elementwise multiply by (-1)**n > g1=(-1)**n*h > > The resulting highpass filters will have different characteristics: > g is complementary filter to h (bandwidth is pi-bandwidth(h)) > g1 has same bandwidth as h > > These techniques won't work for even length filters, but you probably > won't want an even length highpass filter anyway because of the null at pi > radians. > > - Tom K. > > > And now, with my typos fixed: N=11 h=signal.firwin(N, .3) n=np.arange(-(N/2), (N/2)+1) g=(n==0)-h # (n==0) has a single "1" where n[k]==0, and zeros else g1=(-1)**n*h # (-1)**n is [-1, +1, -1, ...] with a "+1" at n[k]==0 -- View this message in context: http://www.nabble.com/scipy.signal.firwin-tp22587139p22638571.html Sent from the Scipy-User mailing list archive at Nabble.com. From wnbell at gmail.com Sat Mar 21 14:26:10 2009 From: wnbell at gmail.com (Nathan Bell) Date: Sat, 21 Mar 2009 14:26:10 -0400 Subject: [SciPy-user] sparse.lil_matrix accepts array assignments to elements In-Reply-To: <48C01AE7354EC240A26F19CEB995E943033AF0BF@CHMAILMBX01.novachem.com> References: <48C01AE7354EC240A26F19CEB995E943033AF0BF@CHMAILMBX01.novachem.com> Message-ID: On Fri, Mar 20, 2009 at 12:33 PM, Andrew Hawryluk wrote: > Hi, we noticed an unexpected behaviour in sparse.lil_matrix today. 
The > following runs without errors: > > import numpy as np > from scipy import sparse > a = sparse.lil_matrix((10,10)) > a[0,0] = np.array([3.1415926,3.1415926]) > > I would have expected some error ('angry red text' in IDLE) when I tried > to assign a 1D array to a single element of the sparse array, but no > error occurs. (This assignment was a bug in our code.) We found the > error later when we tried a.tocsr(). > > Should there be a type-check of some sort in sparse.lil_matrix to > prevent this? Thanks for the report. It should raise a ValueError in r5630: http://projects.scipy.org/scipy/changeset/5630 We'll merge this into 0.7.1 so you don't have to wait for 0.8 -- Nathan Bell wnbell at gmail.com http://graphics.cs.uiuc.edu/~wnbell/ From p.hilscher at lsw.uni-heidelberg.de Mon Mar 23 03:54:29 2009 From: p.hilscher at lsw.uni-heidelberg.de (Paul Hilscher) Date: Mon, 23 Mar 2009 07:54:29 +0000 (UTC) Subject: [SciPy-user] Create n-dim array from axis Message-ID: Dear all, I am looking for a function which takes some axes as an argument and produce the correspond n-dimensional array. So let's say we have two dimensions with following axis x = linspace(0.0,1.0,3) y = linspace(2.0,3.0,3) and we want to create the array by providing only the axis arr = create_array(x,y) and "create_array" produces then arr = array([[0.0,2.0], [0.5,2.0], [1.0,2.0]], [[0.0,2.5], [0.5,2.5], [1.0,2.5]] [[0.0,3.0], [0.5,3.0], [1.0,3.0]]) I've been looking for quite a while now for such a function, but I did not found any versatile and clean one. Any hints ? Thanks for all help, Paul From pav at iki.fi Mon Mar 23 05:15:39 2009 From: pav at iki.fi (Pauli Virtanen) Date: Mon, 23 Mar 2009 09:15:39 +0000 (UTC) Subject: [SciPy-user] 1.3.0b1 howto test References: <5b8d13220903190604y47d9e025q1817484ef225b363@mail.gmail.com> <3d375d730903191034q2bfa263p4eef678e277e8442@mail.gmail.com> <3d375d730903191051o39ba11c3v1647458ae340bb71@mail.gmail.com> <3d375d730903191150r3cd8aecfl9ab5ec13da0248e0@mail.gmail.com> <5b8d13220903200432x5801c412m83c3caead9134d74@mail.gmail.com> <49C46DBA.4000203@ar.media.kyoto-u.ac.jp> Message-ID: Sat, 21 Mar 2009 13:31:54 +0900, David Cournapeau wrote: [clip] > Yes, it should. Actually, the whole installation documentation would > greatly benefit from a significant overhaul. No-one has stepped up to do > this, though, Actually, there *is* some work done on the installation documentation: http://docs.scipy.org/numpy/docs/numpy-docs/user/install.rst/ but it's unfinished. -- Pauli Virtanen From scott.sinclair.za at gmail.com Mon Mar 23 05:47:08 2009 From: scott.sinclair.za at gmail.com (Scott Sinclair) Date: Mon, 23 Mar 2009 11:47:08 +0200 Subject: [SciPy-user] 1.3.0b1 howto test In-Reply-To: References: <3d375d730903191051o39ba11c3v1647458ae340bb71@mail.gmail.com> <3d375d730903191150r3cd8aecfl9ab5ec13da0248e0@mail.gmail.com> <5b8d13220903200432x5801c412m83c3caead9134d74@mail.gmail.com> <49C46DBA.4000203@ar.media.kyoto-u.ac.jp> Message-ID: <6a17e9ee0903230247u3b10bdf5p27fa305d992e3d41@mail.gmail.com> > 2009/3/23 Pauli Virtanen > > Sat, 21 Mar 2009 13:31:54 +0900, David Cournapeau wrote: > [clip] > > Yes, it should. Actually, the whole installation documentation would > > greatly benefit from a significant overhaul. No-one has stepped up to do > > this, though, > > Actually, there *is* some work done on the installation documentation: > > ? ? ? ?http://docs.scipy.org/numpy/docs/numpy-docs/user/install.rst/ > > but it's unfinished. Yes. 
I got started with the intention of getting something useful into 1.3, but have been far too busy the last few weeks. There are probably some inaccuracies. My idea is to have a basic outline of install instructions and import the detail for main platforms from the current SciPy Wiki pages into separate sections in the docs. Please feel free to add comments or corrections in the docwiki. Cheers, Scott From david at ar.media.kyoto-u.ac.jp Mon Mar 23 05:46:20 2009 From: david at ar.media.kyoto-u.ac.jp (David Cournapeau) Date: Mon, 23 Mar 2009 18:46:20 +0900 Subject: [SciPy-user] 1.3.0b1 howto test In-Reply-To: <6a17e9ee0903230247u3b10bdf5p27fa305d992e3d41@mail.gmail.com> References: <3d375d730903191051o39ba11c3v1647458ae340bb71@mail.gmail.com> <3d375d730903191150r3cd8aecfl9ab5ec13da0248e0@mail.gmail.com> <5b8d13220903200432x5801c412m83c3caead9134d74@mail.gmail.com> <49C46DBA.4000203@ar.media.kyoto-u.ac.jp> <6a17e9ee0903230247u3b10bdf5p27fa305d992e3d41@mail.gmail.com> Message-ID: <49C75A6C.3070707@ar.media.kyoto-u.ac.jp> Scott Sinclair wrote: > > Yes. I got started with the intention of getting something useful into > 1.3, but have been far too busy the last few weeks. There are probably > some inaccuracies. > Ah, nice, thank you very much. > My idea is to have a basic outline of install instructions and import > the detail for main platforms from the current SciPy Wiki pages into > separate sections in the docs. I think that's the right approach, cheers, David From david.huard at gmail.com Mon Mar 23 09:38:47 2009 From: david.huard at gmail.com (David Huard) Date: Mon, 23 Mar 2009 09:38:47 -0400 Subject: [SciPy-user] Create n-dim array from axis In-Reply-To: References: Message-ID: <91cf711d0903230638i4c0bbe74ma8aadaec16e936bd@mail.gmail.com> Paul, try >>> mx, my = numpy.meshgrid(x, y) >>> [zip(a,b) for (a,b) in zip(mx, my)] HTH, David On Mon, Mar 23, 2009 at 3:54 AM, Paul Hilscher < p.hilscher at lsw.uni-heidelberg.de> wrote: > Dear all, > > I am looking for a function which takes some axes as an argument and > produce the correspond n-dimensional array. > > So let's say we have two dimensions with following axis > > x = linspace(0.0,1.0,3) > y = linspace(2.0,3.0,3) > > and we want to create the array by providing only the axis > > arr = create_array(x,y) > > and "create_array" produces then > > arr = array([[0.0,2.0], [0.5,2.0], [1.0,2.0]], > [[0.0,2.5], [0.5,2.5], [1.0,2.5]] > [[0.0,3.0], [0.5,3.0], [1.0,3.0]]) > > > I've been looking for quite a while now for such a function, but I did not > found > any versatile and clean one. Any hints ? > > Thanks for all help, > > Paul > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user > -------------- next part -------------- An HTML attachment was scrubbed... URL: From p.hilscher at lsw.uni-heidelberg.de Mon Mar 23 10:07:37 2009 From: p.hilscher at lsw.uni-heidelberg.de (Paul Hilscher) Date: Mon, 23 Mar 2009 14:07:37 +0000 (UTC) Subject: [SciPy-user] Create n-dim array from axis References: <91cf711d0903230638i4c0bbe74ma8aadaec16e936bd@mail.gmail.com> Message-ID: > try mx, my = numpy.meshgrid(x, y) > [zip(a,b) for (a,b) in zip(mx, my)]HTH, Thanks a lot David, this works perfectly fine for 2-dimensions. Is there any possibility to extend it to 3-dimensions with axis z, or even to 4-dimensions giving axis w ? 
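One way this extends to any number of axes is to let each axis broadcast along its own dimension and collect the coordinates in a trailing axis. The helper below is only a sketch (grid_from_axes is a made-up name, not an existing NumPy or SciPy function), and its output layout may need a transpose to match the ordering above:

import numpy as np

def grid_from_axes(*axes):
    # Hypothetical helper: out[i, j, ..., k] holds the k-th axis value at
    # grid point (i, j, ...); works for any number of 1-d axes.
    axes = [np.asarray(ax, dtype=float) for ax in axes]
    shape = tuple(len(ax) for ax in axes)
    out = np.empty(shape + (len(axes),))
    for k, ax in enumerate(axes):
        view = [1] * len(axes)
        view[k] = -1
        out[..., k] = ax.reshape(view)   # broadcasts along the other axes
    return out

x = np.linspace(0.0, 1.0, 3)
y = np.linspace(2.0, 3.0, 3)
z = np.linspace(100.0, 200.0, 2)
print grid_from_axes(x, y).shape      # -> (3, 3, 2)
print grid_from_axes(x, y, z).shape   # -> (3, 3, 2, 3)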
Thanks again, Paul From josef.pktd at gmail.com Mon Mar 23 10:20:16 2009 From: josef.pktd at gmail.com (josef.pktd at gmail.com) Date: Mon, 23 Mar 2009 10:20:16 -0400 Subject: [SciPy-user] Create n-dim array from axis In-Reply-To: References: <91cf711d0903230638i4c0bbe74ma8aadaec16e936bd@mail.gmail.com> Message-ID: <1cd32cbb0903230720j6678d4ah614365ff864fe2d2@mail.gmail.com> On Mon, Mar 23, 2009 at 10:07 AM, Paul Hilscher wrote: >> try mx, my = numpy.meshgrid(x, y) >> [zip(a,b) for (a,b) in zip(mx, my)]HTH, > > Thanks a lot David, this works perfectly fine for 2-dimensions. > Is there any possibility to extend it to 3-dimensions with axis z, or > even to 4-dimensions giving axis w ? > > Thanks again, > > Paul I would also like to know what an efficient way is to do this. For arbitrary dimension, I never found anything except for a brute force loop or recursive function. For 2d you can convert the meshgrid directly to an ndarray >>> np.array(np.meshgrid(x,y)).shape (2, 3, 3) Josef From ejf27 at cornell.edu Mon Mar 23 12:41:02 2009 From: ejf27 at cornell.edu (Eric Friedman) Date: Mon, 23 Mar 2009 16:41:02 +0000 (UTC) Subject: [SciPy-user] umfpack installation (and dsolve error) Message-ID: Hi, sorry for the newbie question. I'd like to install umfpack, but am confused on what to do. I downloaded easy_installer, but where do I get the full umfpack etc. btw, the reason for this is that I'm having a problem with dsolve which I think is a bug, but may just be my error. for example: a=matrix([[1,1],[1,1]]) b= matrix([.5,.5]).T dsolve(a,b) yields [.5,.5] which is not a solution of ax=b - I think... (I also entered the matrix as sparse) any ideas what's going on here? thanks, Eric From nwagner at iam.uni-stuttgart.de Mon Mar 23 13:26:54 2009 From: nwagner at iam.uni-stuttgart.de (Nils Wagner) Date: Mon, 23 Mar 2009 18:26:54 +0100 Subject: [SciPy-user] umfpack installation (and dsolve error) In-Reply-To: References: Message-ID: On Mon, 23 Mar 2009 16:41:02 +0000 (UTC) Eric Friedman wrote: > Hi, sorry for the newbie question. > > I'd like to install umfpack, but am confused on what to >do. I downloaded > easy_installer, but where do I get the full umfpack etc. > > btw, the reason for this is that I'm having a problem >with dsolve which I think > is a bug, but may just be my error. > > for example: > > a=matrix([[1,1],[1,1]]) > b= matrix([.5,.5]).T > dsolve(a,b) > yields [.5,.5] > which is not a solution of ax=b - I think... > (I also entered the matrix as sparse) > > any ideas what's going on here? > > thanks, > Eric > Hi Eric, You may use svn co http://svn.scipy.org/svn/scikits/trunk/umfpack to download the scikits package "umfpack". cd umfpack rm -rf build python setup.py install --prefix=$HOME/local --single-version-externally-managed --record=/dev/null You will need the libraries libumfpack.a and libamd.a Please not that your (dense) matrix is singular. from scikits import umfpack as um from scipy.sparse import csr, csr_matrix import numpy as np a = csr_matrix(np.matrix([[1,1],[1,1.]])) b = np.array([.5,.5]) umfpack = um.UmfpackContext() sol = umfpack( um.UMFPACK_A, a, b, autoTranspose = True ) python -i eric.py warning: singular matrix zeroing nan and inf entries... warning: (almost) singular matrix! (estimated cond. 
number: inf) I am using opensuse11.1 which comes with the corresponding libraries libumfpack.a and libamd.a Nils rpm -qi amd Name : amd Relocations: (not relocatable) Version : 2.2.0 Vendor: packman.links2linux.de Release : 0.pm.1 Build Date: Fr 12 Dez 2008 14:58:29 CET Install Date: Mi 11 Feb 2009 21:10:18 CET Build Host: pmbs Group : System/Libraries Source RPM: amd-2.2.0-0.pm.1.src.rpm Size : 185753 License: LGPL Signature : DSA/SHA1, Fr 12 Dez 2008 14:59:11 CET, Key ID f899f20d9a795806 Packager : Andrea Florio URL : http://www.cise.ufl.edu/research/sparse/amd/ Summary : Approximate minimum degree ordering Description : AMD is a set of routines for ordering a sparse matrix prior to Cholesky factorization (or for LU factorization with diagonal pivoting). There are versions in both C and Fortran. Note that this software has nothing to do with AMD the company. Distribution: openSUSE 11.1 (x86_64) rpm -qi libumfpack5_2_0 Name : libumfpack5_2_0 Relocations: (not relocatable) Version : 5.2.0 Vendor: packman.links2linux.de Release : 2.pm.1 Build Date: Fr 12 Dez 2008 15:03:53 CET Install Date: Mi 11 Feb 2009 21:10:16 CET Build Host: pmbs Group : System/Libraries Source RPM: umfpack-5.2.0-2.pm.1.src.rpm Size : 748639 License: GPL v2 or later Signature : DSA/SHA1, Fr 12 Dez 2008 15:04:57 CET, Key ID f899f20d9a795806 Packager : Andrea Florio URL : http://www.cise.ufl.edu/research/sparse/umfpack/ Summary : Library of routines for solving unsymmetric sparse linear systems Description : UMFPACK provides a set of routines for solving unsymmetric sparse linear systems Ax=b using the Unsymmetric MultiFrontal method. It is written in ANSI/ISO C. Note that "UMFPACK" is pronounced in two syllables, "Umph Pack"; it is not "You Em Ef Pack". This package contains the library needed to run programs dynamically linked against UMFPACK. Distribution: openSUSE 11.1 (x86_64) From ejf27 at cornell.edu Mon Mar 23 13:38:27 2009 From: ejf27 at cornell.edu (Eric Friedman) Date: Mon, 23 Mar 2009 17:38:27 +0000 (UTC) Subject: [SciPy-user] umfpack installation (and dsolve error) References: Message-ID: Nils Wagner iam.uni-stuttgart.de> writes: > > btw, the reason for this is that I'm having a problem > >with dsolve which I think > > is a bug, but may just be my error. > > > > for example: > > > > a=matrix([[1,1],[1,1]]) > > b= matrix([.5,.5]).T > > dsolve(a,b) > > yields [.5,.5] > > which is not a solution of ax=b - I think... > > (I also entered the matrix as sparse) > > > > any ideas what's going on here? > > > > thanks, > > Eric > > > Hi Eric, ... > Please not that your (dense) matrix is singular. > sorry, I should have mentioned that I need to be able to solve singular matrices and will settle for any solution (since there will be many of them). (For nonsparse I've been using linalg.lstsq, which works fine for this.) any suggestions on which sparse solver to use? thanks Eric From nwagner at iam.uni-stuttgart.de Mon Mar 23 13:44:04 2009 From: nwagner at iam.uni-stuttgart.de (Nils Wagner) Date: Mon, 23 Mar 2009 18:44:04 +0100 Subject: [SciPy-user] umfpack installation (and dsolve error) In-Reply-To: References: Message-ID: On Mon, 23 Mar 2009 17:38:27 +0000 (UTC) Eric Friedman wrote: > Nils Wagner iam.uni-stuttgart.de> writes: > > >> > btw, the reason for this is that I'm having a problem >> >with dsolve which I think >> > is a bug, but may just be my error. 
>> > >> > for example: >> > >> > a=matrix([[1,1],[1,1]]) >> > b= matrix([.5,.5]).T >> > dsolve(a,b) >> > yields [.5,.5] >> > which is not a solution of ax=b - I think... >> > (I also entered the matrix as sparse) >> > >> > any ideas what's going on here? >> > >> > thanks, >> > Eric >> > > > >> Hi Eric, > ... >> Please not that your (dense) matrix is singular. >> > > sorry, I should have mentioned that I need to be able to >solve singular matrices > and will settle for any solution (since there will be >many of them). > (For nonsparse I've been using linalg.lstsq, which works >fine for this.) > > any suggestions on which sparse solver to use? > > thanks > Eric > Eric, Propack might be an option http://soi.stanford.edu/~rmunk/PROPACK/ AFAIK no python wrapper is available until now. I have also filed a ticket http://projects.scipy.org/scipy/ticket/330 Cheers, Nils From lev at columbia.edu Fri Mar 20 14:19:48 2009 From: lev at columbia.edu (Lev Givon) Date: Fri, 20 Mar 2009 14:19:48 -0400 Subject: [SciPy-user] scipy.signal.firwin In-Reply-To: References: <20090319160325.GA29170@avicenna.ee.columbia.edu> Message-ID: <20090320181948.GE29111@avicenna.ee.columbia.edu> Received from Jan Rauberg on Fri, Mar 20, 2009 at 03:25:59AM EDT: > Lev Givon columbia.edu> writes: > > > > > Received from Jan Rauberg on Wed, Mar 18, 2009 at 03:48:45PM EDT: > > > I'm missing the functionality of firwin like in matlab/octave fir1, so that I > > > can give a 'low', 'high' and 'stop' option. I don't know how to create a FIR > > > window based high pass filter. Or is there something planned for the future? > > > > > > Thank you > > > Jan > > > > If you want to create a high-pass filter, just specify the cutoff to > > firwin and flip the signs of the obtained coefficients, i.e., > > > > b = -firwin(N,cutoff) > > Thank you for the fast response. But in the way as you described I get an > inverted low pass figure. That's not a high pass and no solution for my problem > too. Oops..you're right; that's not going to work for an arbitrary cutoff frequency. > Perhaps there is another solution? J.R. You could construct an FIR filter using the Parks-McClellan algorithm; I once implemented a remezord() function to select the appropriate parameters: http://projects.scipy.org/scipy/ticket/475 L.G. From ejf27 at cornell.edu Mon Mar 23 20:59:24 2009 From: ejf27 at cornell.edu (Eric Friedman) Date: Tue, 24 Mar 2009 00:59:24 +0000 (UTC) Subject: [SciPy-user] =?utf-8?q?coo=5Fmatrix=2C_lapack=2C_cvxopt?= Message-ID: Hi, a couple of questions: 1) how do I look inside a coo_matrix -- I'd like to extract the i,j, data? 2) I need to solve large sparse rectangular matrix equation which doesn't have full rank, but is solvable. spsolve (dsolve) doesn't work. Can I do my own LUP decomposition using scipy.sparse? 3) How does lapack compare to umfpack? (I'm thinking about using cvxopt which has lapack.) Also more generally, how does cvxopt's sparse routines do compared to scipy? 
thanks, Eric From schut at sarvision.nl Tue Mar 24 07:43:38 2009 From: schut at sarvision.nl (Vincent Schut) Date: Tue, 24 Mar 2009 12:43:38 +0100 Subject: [SciPy-user] Create n-dim array from axis In-Reply-To: <1cd32cbb0903230720j6678d4ah614365ff864fe2d2@mail.gmail.com> References: <91cf711d0903230638i4c0bbe74ma8aadaec16e936bd@mail.gmail.com> <1cd32cbb0903230720j6678d4ah614365ff864fe2d2@mail.gmail.com> Message-ID: josef.pktd at gmail.com wrote: > On Mon, Mar 23, 2009 at 10:07 AM, Paul Hilscher > wrote: >>> try mx, my = numpy.meshgrid(x, y) >>> [zip(a,b) for (a,b) in zip(mx, my)]HTH, >> Thanks a lot David, this works perfectly fine for 2-dimensions. >> Is there any possibility to extend it to 3-dimensions with axis z, or >> even to 4-dimensions giving axis w ? >> >> Thanks again, >> >> Paul > > I would also like to know what an efficient way is to do this. For > arbitrary dimension, I never found anything except for a brute force > loop or recursive function. > Would this do? Should work with arbitrary number of dimensions. def create_array(startStopStep): # startStopStep is a tuple/list of (start,stop,step) entities to # define the axes slices = [slice(start, stop, complex(0, step)) for (start,stop,step) in startStopStep] g = numpy.mgrid[slices].transpose() return g print create_array(((0,1,3), (2,3,3))) [[[ 0. 2. ] [ 0.5 2. ] [ 1. 2. ]] [[ 0. 2.5] [ 0.5 2.5] [ 1. 2.5]] [[ 0. 3. ] [ 0.5 3. ] [ 1. 3. ]]] Regards, Vincent. From wnbell at gmail.com Tue Mar 24 08:37:24 2009 From: wnbell at gmail.com (Nathan Bell) Date: Tue, 24 Mar 2009 08:37:24 -0400 Subject: [SciPy-user] coo_matrix, lapack, cvxopt In-Reply-To: References: Message-ID: On Mon, Mar 23, 2009 at 8:59 PM, Eric Friedman wrote: > Hi, a couple of questions: > > 1) how do I look inside a coo_matrix -- I'd like to extract the i,j, data? If A is your coo_matrix, you can grab the COO arrays using: I = A.row J = A.col V = A.data > > 2) I need to solve large sparse rectangular matrix equation which doesn't have > full rank, but is solvable. spsolve (dsolve) doesn't work. Can I do my own LUP > decomposition using scipy.sparse? I believe the UMFPACK scikit supports that functionality. > 3) How does lapack compare to umfpack? (I'm thinking about using cvxopt which > has lapack.) Also more generally, how does cvxopt's sparse routines do compared > to scipy? Unknown. There are not so many sparse factorization methods (SuperLU, UMFPACK, Taucs), so it's likely that you'll get one of them. -- Nathan Bell wnbell at gmail.com http://graphics.cs.uiuc.edu/~wnbell/ From ejf27 at cornell.edu Tue Mar 24 09:07:16 2009 From: ejf27 at cornell.edu (Eric Friedman) Date: Tue, 24 Mar 2009 13:07:16 +0000 (UTC) Subject: [SciPy-user] =?utf-8?q?coo=5Fmatrix=2C_lapack=2C_cvxopt?= References: Message-ID: Nathan Bell gmail.com> writes: > > 2) I need to solve large sparse rectangular matrix equation which doesn't have > > full rank, but is solvable. spsolve (dsolve) doesn't work. Can I do my own LUP > > decomposition using scipy.sparse? > > I believe the UMFPACK scikit supports that functionality. > Thanks again! fyi, and in case anyone else has a problem like this, I just tried sparse.linalg.cg which seems to work (most of the time...but I'm trying to figure that out...) and is very fast and doesn't care about singular matrices!
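For the record, cg is really meant for symmetric positive (semi-)definite systems, as pointed out below, so a safer way to use it on a rectangular or rank-deficient A is to hand it the normal equations A^T A x = A^T b. The snippet below is only a minimal sketch (toy matrix, and it assumes the scipy.sparse.linalg namespace of SciPy >= 0.7); for a consistent system it converges to a least-squares solution, though a dedicated least-squares or MINRES-type solver is generally the better tool:

import numpy as np
from scipy import sparse
from scipy.sparse import linalg as spla   # scipy.sparse.linalg, SciPy >= 0.7

# Toy rank-deficient but consistent system.
A = sparse.csr_matrix(np.array([[1., 1., 0.],
                                [1., 1., 0.],
                                [0., 0., 2.]]))
b = np.array([0.5, 0.5, 4.0])

# Normal equations: A^T A is symmetric positive semi-definite.
AtA = A.T * A          # '*' is matrix multiplication for sparse matrices
Atb = A.T * b
x, info = spla.cg(AtA, Atb)
print x, info          # info == 0 means the iteration converged
print A * x            # reproduces b up to the solver tolerance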
Eric From nwagner at iam.uni-stuttgart.de Tue Mar 24 09:56:15 2009 From: nwagner at iam.uni-stuttgart.de (Nils Wagner) Date: Tue, 24 Mar 2009 14:56:15 +0100 Subject: [SciPy-user] coo_matrix, lapack, cvxopt In-Reply-To: References: Message-ID: On Tue, 24 Mar 2009 13:07:16 +0000 (UTC) Eric Friedman wrote: > Nathan Bell gmail.com> writes: > > >> > 2) I need to solve large sparse rectangular matrix >>equation > which doesn't have >> > full rank, but is solvable. spsolve (dsolve) doesn't >>work. > Can I do my own LUP >> > decomposition using scipy.sparse? >> >> I believe the UMFPACK scikit supports that >>functionality. >> > > > Thanks again! > > fyi, and in case anyone else has a problem like this, I >just tried > sparse.linalg.cg which seems to work > (most of the time...but I'm trying to > figure that out...) and is very fast and doesn't > care about singular matrices! > > Eric > Eric, A symmetric positive definite matrix is required in case of CG. http://en.wikipedia.org/wiki/Conjugate_gradient_method Nils From nwagner at iam.uni-stuttgart.de Tue Mar 24 10:49:35 2009 From: nwagner at iam.uni-stuttgart.de (Nils Wagner) Date: Tue, 24 Mar 2009 15:49:35 +0100 Subject: [SciPy-user] coo_matrix, lapack, cvxopt In-Reply-To: References: Message-ID: On Tue, 24 Mar 2009 13:07:16 +0000 (UTC) Eric Friedman wrote: > Nathan Bell gmail.com> writes: > > >> > 2) I need to solve large sparse rectangular matrix >>equation > which doesn't have >> > full rank, but is solvable. spsolve (dsolve) doesn't >>work. > Can I do my own LUP >> > decomposition using scipy.sparse? >> >> I believe the UMFPACK scikit supports that >>functionality. >> > > > Thanks again! > > fyi, and in case anyone else has a problem like this, I >just tried > sparse.linalg.cg which seems to work > (most of the time...but I'm trying to > figure that out...) and is very fast and doesn't > care about singular matrices! > > Eric > You might be interested in http://www.stanford.edu/group/SOL/dissertations/sou-cheng-choi-thesis.pdf Nils From josef.pktd at gmail.com Tue Mar 24 10:56:53 2009 From: josef.pktd at gmail.com (josef.pktd at gmail.com) Date: Tue, 24 Mar 2009 10:56:53 -0400 Subject: [SciPy-user] Create n-dim array from axis In-Reply-To: References: <91cf711d0903230638i4c0bbe74ma8aadaec16e936bd@mail.gmail.com> <1cd32cbb0903230720j6678d4ah614365ff864fe2d2@mail.gmail.com> Message-ID: <1cd32cbb0903240756ya0e1dd8p8e9da4ed06df010d@mail.gmail.com> On Tue, Mar 24, 2009 at 7:43 AM, Vincent Schut wrote: > josef.pktd at gmail.com wrote: >> On Mon, Mar 23, 2009 at 10:07 AM, Paul Hilscher >> wrote: >>>> try mx, my = numpy.meshgrid(x, y) >>>> [zip(a,b) for (a,b) in zip(mx, my)]HTH, >>> Thanks a lot David, this works perfectly fine for 2-dimensions. >>> Is there any possibility to extend it to 3-dimensions with axis z, or >>> even to 4-dimensions giving axis w ? >>> >>> Thanks again, >>> >>> Paul >> >> I would also like to know what an efficient way is to do this. For >> arbitrary dimension, I never found anything except for a brute force >> loop or recursive function. >> > Would this do? Should work with arbitrary number of dimensions. > > def create_array(startStopStep): > ? ? # startStopStep is a tuple/list of (start,stop,step) entities to > ? ? # define the axes > ? ? slices = [slice(start, stop, complex(0, step)) for > (start,stop,step) in startStopStep] > ? ? g = numpy.mgrid[slices].transpose() > ? ? return g > > print create_array(((0,1,3), (2,3,3))) > > [[[ 0. ? 2. ] > ? [ 0.5 ?2. ] > ? [ 1. ? 2. ]] > > ?[[ 0. ? 2.5] > ? [ 0.5 ?2.5] > ? [ 1. ? 
2.5]] > > ?[[ 0. ? 3. ] > ? [ 0.5 ?3. ] > ? [ 1. ? 3. ]]] > > Regards, > Vincent. > This is a good example for using slices and mgrid and is much better for regularly spaced grids (representable of slices) than what I had. However, what I also needed in the past was building a grid for arbitrary points in each dimension a made-up example x = [2, 3, 5, 10] y = [0, 1] z = [100, 200, 500, 1000] create_array(x, y, z) ? Your function is also useful to get all combinations into a 2d array: >>> create_array(((0,1,2), (2,3,3), (1,5,2))).T.reshape(3,-1).T array([[ 0. , 2. , 1. ], [ 0. , 2. , 5. ], [ 0. , 2.5, 1. ], [ 0. , 2.5, 5. ], [ 0. , 3. , 1. ], [ 0. , 3. , 5. ], [ 1. , 2. , 1. ], [ 1. , 2. , 5. ], [ 1. , 2.5, 1. ], [ 1. , 2.5, 5. ], [ 1. , 3. , 1. ], [ 1. , 3. , 5. ]]) >>> create_array(((0,1,2), (2,3,3))).T.reshape(2,-1).T array([[ 0. , 2. ], [ 0. , 2.5], [ 0. , 3. ], [ 1. , 2. ], [ 1. , 2.5], [ 1. , 3. ]]) Thanks, Josef From schut at sarvision.nl Tue Mar 24 11:35:11 2009 From: schut at sarvision.nl (Vincent Schut) Date: Tue, 24 Mar 2009 16:35:11 +0100 Subject: [SciPy-user] Create n-dim array from axis In-Reply-To: <1cd32cbb0903240756ya0e1dd8p8e9da4ed06df010d@mail.gmail.com> References: <91cf711d0903230638i4c0bbe74ma8aadaec16e936bd@mail.gmail.com> <1cd32cbb0903230720j6678d4ah614365ff864fe2d2@mail.gmail.com> <1cd32cbb0903240756ya0e1dd8p8e9da4ed06df010d@mail.gmail.com> Message-ID: > > This is a good example for using slices and mgrid and is much better > for regularly spaced grids (representable of slices) than what I had. > However, what I also needed in the past was building a grid for > arbitrary points in each dimension > > a made-up example > x = [2, 3, 5, 10] > y = [0, 1] > z = [100, 200, 500, 1000] > create_array(x, y, z) > ? > Hmm, tricky... I'd go for something like this: def create_array2(nodes): # nodes is a list with lists of axis points (e.g. [[2, 3, 5, 10], [0, 1], [100, 200, 500, 1000]] shape = tuple([len(a) for a in nodes]) idx = numpy.indices(shape) result = numpy.zeros(idx.shape, numpy.float) for d in range(idx.shape[0]): result[d] = numpy.take(nodes[d], idx[d]) return result.transpose() create_array2([[0.0, 0.5, 1], [2.0, 2.5, 3]]) though this stil contains a loop which might be avoidable... It ought to be pretty generic though. It's the end of the working day here, and I'm starting to get a bit fuzzy in the brains, so better solutions might very well be possible. Vincent. From josef.pktd at gmail.com Tue Mar 24 12:15:26 2009 From: josef.pktd at gmail.com (josef.pktd at gmail.com) Date: Tue, 24 Mar 2009 12:15:26 -0400 Subject: [SciPy-user] Create n-dim array from axis In-Reply-To: References: <91cf711d0903230638i4c0bbe74ma8aadaec16e936bd@mail.gmail.com> <1cd32cbb0903230720j6678d4ah614365ff864fe2d2@mail.gmail.com> <1cd32cbb0903240756ya0e1dd8p8e9da4ed06df010d@mail.gmail.com> Message-ID: <1cd32cbb0903240915k4ef94cc7x4bfc10b2c8e36266@mail.gmail.com> On Tue, Mar 24, 2009 at 11:35 AM, Vincent Schut wrote: >> >> This is a good example for using slices and mgrid and is much better >> for regularly spaced grids (representable of slices) than what I had. >> However, what I also needed in the past was building a grid for >> arbitrary points in each dimension >> >> a made-up example >> x = [2, 3, 5, 10] >> y = [0, 1] >> z = [100, 200, 500, 1000] >> create_array(x, y, z) >> ? >> > Hmm, tricky... > I'd go for something like this: > > def create_array2(nodes): > ? ? # nodes is a list with lists of axis points (e.g. [[2, 3, 5, 10], > [0, 1], [100, 200, 500, 1000]] > ? ? 
shape = tuple([len(a) for a in nodes]) > ? ? idx = numpy.indices(shape) > ? ? result = numpy.zeros(idx.shape, numpy.float) > ? ? for d in range(idx.shape[0]): > ? ? ? ? result[d] = numpy.take(nodes[d], idx[d]) > ? ? return result.transpose() > > create_array2([[0.0, 0.5, 1], [2.0, 2.5, 3]]) > > > though this stil contains a loop which might be avoidable... It ought to > be pretty generic though. It's the end of the working day here, and I'm > starting to get a bit fuzzy in the brains, so better solutions might > very well be possible. > > Vincent. > Thanks, it's still very fast for a few dimensions (3-5) and works very well, I needed a few million entries in the constructed matrix before seeing a delay in interactive use. Josef From wnbell at gmail.com Tue Mar 24 12:31:03 2009 From: wnbell at gmail.com (Nathan Bell) Date: Tue, 24 Mar 2009 12:31:03 -0400 Subject: [SciPy-user] coo_matrix, lapack, cvxopt In-Reply-To: References: Message-ID: On Tue, Mar 24, 2009 at 9:56 AM, Nils Wagner wrote: > > A symmetric positive definite matrix is required in case > of CG. > > http://en.wikipedia.org/wiki/Conjugate_gradient_method > You can generally apply it to semi-definite problems too, but it sometimes fails. scipy.sparse.minres() is a better choice with comparable costs to cg() -- Nathan Bell wnbell at gmail.com http://graphics.cs.uiuc.edu/~wnbell/ From Chris.Barker at noaa.gov Tue Mar 24 16:08:38 2009 From: Chris.Barker at noaa.gov (Christopher Barker) Date: Tue, 24 Mar 2009 13:08:38 -0700 Subject: [SciPy-user] OS-X Universal binary? Message-ID: <49C93DC6.7020102@noaa.gov> Hi folks, I just found on the sourceforge site what is labeled as a Universal binary package -- nice! However, at least part of it is actually Intel only -- darn: >>> import scipy >>> import scipy.interpolate Traceback (most recent call last): File "", line 1, in File "/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/site-packages/scipy/interpolate/__init__.py", line 7, in from interpolate import * File "/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/site-packages/scipy/interpolate/interpolate.py", line 13, in import scipy.special as spec File "/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/site-packages/scipy/special/__init__.py", line 8, in from basic import * File "/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/site-packages/scipy/special/basic.py", line 8, in from _cephes import * ImportError: dlopen(/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/site-packages/scipy/special/_cephes.so, 2): no suitable image found. Did find: /Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/site-packages/scipy/special/_cephes.so: mach-o, but wrong architecture indeed: $ file /Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/site-packages/scipy/special/_cephes.so /Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/site-packages/scipy/special/_cephes.so: Mach-O bundle i386 Is this supposed to be Universal?? -Chris -- Christopher Barker, Ph.D. Oceanographer Emergency Response Division NOAA/NOS/OR&R (206) 526-6959 voice 7600 Sand Point Way NE (206) 526-6329 fax Seattle, WA 98115 (206) 526-6317 main reception Chris.Barker at noaa.gov From Chris.Barker at noaa.gov Tue Mar 24 16:12:09 2009 From: Chris.Barker at noaa.gov (Christopher Barker) Date: Tue, 24 Mar 2009 13:12:09 -0700 Subject: [SciPy-user] OS-X Universal binary? 
In-Reply-To: <49C93DC6.7020102@noaa.gov> References: <49C93DC6.7020102@noaa.gov> Message-ID: <49C93E99.8080902@noaa.gov> further note -- lots of other libs are i386 only -- this looks like a simple Intel build -- so what's wrong is the way it's described on the sourceforge site. -Chris -- Christopher Barker, Ph.D. Oceanographer Emergency Response Division NOAA/NOS/OR&R (206) 526-6959 voice 7600 Sand Point Way NE (206) 526-6329 fax Seattle, WA 98115 (206) 526-6317 main reception Chris.Barker at noaa.gov From cournape at gmail.com Tue Mar 24 16:36:30 2009 From: cournape at gmail.com (David Cournapeau) Date: Wed, 25 Mar 2009 05:36:30 +0900 Subject: [SciPy-user] OS-X Universal binary? In-Reply-To: <49C93E99.8080902@noaa.gov> References: <49C93DC6.7020102@noaa.gov> <49C93E99.8080902@noaa.gov> Message-ID: <5b8d13220903241336w6ec4f5c7n44b865362834fdd1@mail.gmail.com> On Wed, Mar 25, 2009 at 5:12 AM, Christopher Barker wrote: > further note -- lots of other libs are i386 only -- this looks like a > simple Intel build -- so what's wrong is the way it's described on the > sourceforge site. Yes, you're right - the problem is that fortran flags do not contain the necessary options for universal build (you can see that pure C/C++ extensions are fat binaries, but fortran aren't). David From strawman at astraw.com Tue Mar 24 19:13:05 2009 From: strawman at astraw.com (Andrew Straw) Date: Tue, 24 Mar 2009 16:13:05 -0700 Subject: [SciPy-user] scipy conference 2009? Message-ID: <49C96901.3040801@astraw.com> As summer plans are starting to get made, and I don't want to miss the SciPy Conference (again) this year -- what's the plan for SciPy 09? -Andrew From robert.kern at gmail.com Tue Mar 24 19:16:35 2009 From: robert.kern at gmail.com (Robert Kern) Date: Tue, 24 Mar 2009 18:16:35 -0500 Subject: [SciPy-user] scipy conference 2009? In-Reply-To: <49C96901.3040801@astraw.com> References: <49C96901.3040801@astraw.com> Message-ID: <3d375d730903241616n20e56f14re22f601e0f724131@mail.gmail.com> On Tue, Mar 24, 2009 at 19:13, Andrew Straw wrote: > As summer plans are starting to get made, and I don't want to miss the > SciPy Conference (again) this year -- what's the plan for SciPy 09? Jarrod is firming up the details. We should know in a day or two, if everything goes according to plan. -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From tpk at kraussfamily.org Tue Mar 24 21:45:01 2009 From: tpk at kraussfamily.org (Tom K.) Date: Tue, 24 Mar 2009 18:45:01 -0700 (PDT) Subject: [SciPy-user] Create n-dim array from axis In-Reply-To: <1cd32cbb0903240915k4ef94cc7x4bfc10b2c8e36266@mail.gmail.com> References: <91cf711d0903230638i4c0bbe74ma8aadaec16e936bd@mail.gmail.com> <1cd32cbb0903230720j6678d4ah614365ff864fe2d2@mail.gmail.com> <1cd32cbb0903240756ya0e1dd8p8e9da4ed06df010d@mail.gmail.com> <1cd32cbb0903240915k4ef94cc7x4bfc10b2c8e36266@mail.gmail.com> Message-ID: <22693686.post@talk.nabble.com> josef.pktd wrote: > > On Tue, Mar 24, 2009 at 11:35 AM, Vincent Schut > wrote: > >>> a made-up example >>> x = [2, 3, 5, 10] >>> y = [0, 1] >>> z = [100, 200, 500, 1000] >>> create_array(x, y, z) >>> ? >>> >> Hmm, tricky... >> I'd go for something like this: >> >> def create_array2(nodes): >> ? ? # nodes is a list with lists of axis points (e.g. [[2, 3, 5, 10], >> [0, 1], [100, 200, 500, 1000]] >> ? ? shape = tuple([len(a) for a in nodes]) >> ? ? 
idx = numpy.indices(shape) >> ? ? result = numpy.zeros(idx.shape, numpy.float) >> ? ? for d in range(idx.shape[0]): >> ? ? ? ? result[d] = numpy.take(nodes[d], idx[d]) >> ? ? return result.transpose() >> > Neat problem. Here's an implementation that avoids creating the index arrays by using broadcasting: def create_array(*vectors): views = [vec.view() for vec in vectors] out = zeros([len(vectors)] + [vec.size for vec in vectors]) for viewIdx, view in enumerate(views): shape = [1]*len(views) shape[viewIdx] = -1 view.shape = shape out[viewIdx,:] = view return out Examples below. Note, you can still transpose if you want but it feels more natural to me to have the 1st dimension be the "axis" so g[i] is the i-th grid. In [126]: x = array([2, 3, 5, 10]) In [127]: y = array([0, 1] ) In [128]: z = array([100, 200, 500, 1000]) In [129]: g2=create_array(x, y) In [130]: g2.shape Out[130]: (2, 4, 2) In [131]: g2[0] Out[131]: array([[ 2., 2.], [ 3., 3.], [ 5., 5.], [ 10., 10.]]) In [132]: g2[1] Out[132]: array([[ 0., 1.], [ 0., 1.], [ 0., 1.], [ 0., 1.]]) In [133]: g3=create_array(x, y, z) In [134]: g3.shape Out[134]: (3, 4, 2, 4) In [135]: g3[0] Out[135]: array([[[ 2., 2., 2., 2.], [ 2., 2., 2., 2.]], [[ 3., 3., 3., 3.], [ 3., 3., 3., 3.]], [[ 5., 5., 5., 5.], [ 5., 5., 5., 5.]], [[ 10., 10., 10., 10.], [ 10., 10., 10., 10.]]]) In [136]: g3[1] Out[136]: array([[[ 0., 0., 0., 0.], [ 1., 1., 1., 1.]], [[ 0., 0., 0., 0.], [ 1., 1., 1., 1.]], [[ 0., 0., 0., 0.], [ 1., 1., 1., 1.]], [[ 0., 0., 0., 0.], [ 1., 1., 1., 1.]]]) In [137]: g3[2] Out[137]: array([[[ 100., 200., 500., 1000.], [ 100., 200., 500., 1000.]], [[ 100., 200., 500., 1000.], [ 100., 200., 500., 1000.]], [[ 100., 200., 500., 1000.], [ 100., 200., 500., 1000.]], [[ 100., 200., 500., 1000.], [ 100., 200., 500., 1000.]]]) -- View this message in context: http://www.nabble.com/Create-n-dim-array-from-axis-tp22655632p22693686.html Sent from the Scipy-User mailing list archive at Nabble.com. From david at ar.media.kyoto-u.ac.jp Wed Mar 25 02:14:51 2009 From: david at ar.media.kyoto-u.ac.jp (David Cournapeau) Date: Wed, 25 Mar 2009 15:14:51 +0900 Subject: [SciPy-user] OS-X Universal binary? In-Reply-To: <5b8d13220903241336w6ec4f5c7n44b865362834fdd1@mail.gmail.com> References: <49C93DC6.7020102@noaa.gov> <49C93E99.8080902@noaa.gov> <5b8d13220903241336w6ec4f5c7n44b865362834fdd1@mail.gmail.com> Message-ID: <49C9CBDB.6030208@ar.media.kyoto-u.ac.jp> David Cournapeau wrote: > > Yes, you're right - the problem is that fortran flags do not contain > the necessary options for universal build (you can see that pure C/C++ > extensions are fat binaries, but fortran aren't). > I created a new ticket: http://projects.scipy.org/numpy/ticket/1067 cheers, David From p.hilscher at lsw.uni-heidelberg.de Wed Mar 25 04:12:27 2009 From: p.hilscher at lsw.uni-heidelberg.de (Paul Hilscher) Date: Wed, 25 Mar 2009 08:12:27 +0000 (UTC) Subject: [SciPy-user] Create n-dim array from axis References: <91cf711d0903230638i4c0bbe74ma8aadaec16e936bd@mail.gmail.com> <1cd32cbb0903230720j6678d4ah614365ff864fe2d2@mail.gmail.com> <1cd32cbb0903240756ya0e1dd8p8e9da4ed06df010d@mail.gmail.com> Message-ID: > def create_array2(nodes): > # nodes is a list with lists of axis points (e.g. 
[[2, 3, 5, 10], > [0, 1], [100, 200, 500, 1000]] > shape = tuple([len(a) for a in nodes]) > idx = numpy.indices(shape) > result = numpy.zeros(idx.shape, numpy.float) > for d in range(idx.shape[0]): > result[d] = numpy.take(nodes[d], idx[d]) > return result.transpose() > > create_array2([[0.0, 0.5, 1], [2.0, 2.5, 3]]) > Thanks a lot, it work perfectly fine for me. If this feature is also useful for others, is there any chance to have it included in numpy ? Paul From lorenzo.isella at gmail.com Wed Mar 25 07:33:19 2009 From: lorenzo.isella at gmail.com (Lorenzo Isella) Date: Wed, 25 Mar 2009 12:33:19 +0100 Subject: [SciPy-user] Strange Warning When Running Simple Script Message-ID: Dear All, I am running debian testing (for the amd64 architecture) on my machine and I have Num(Sci)Py and matplotlib installed from standard repositories. Today, I have run again a simple code to create arrays by stitching together several copies of the same array #! /usr/bin/env python import scipy as s import numpy as n import pylab as p def replica(pos_arr, N_rep): #temp=pos_arr dim=s.shape(pos_arr) n_row=dim[0] if (len(dim)>1): n_col=dim[1] print "n_row and n_col are, ", n_row, n_col else: print "n_row is, ", n_row if (len(dim)>1): copy=s.zeros(((n_row*N_rep),n_col)) else: copy=s.zeros((n_row*N_rep)) for i in xrange(N_rep): if (len(dim)>1): copy[(i*n_row):((i+1)*n_row),:]=pos_arr else: copy[(i*n_row):((i+1)*n_row)]=pos_arr return copy eta_0=s.array([0.389,0.389,0.389,0.389,0.389,0.389,0.389,0.389]) eta_1=s.sqrt(eta_0) my_copy=replica(eta_0,200) p.save("eta_0_8_200", my_copy) my_copy=replica(eta_1,200) p.save("eta_1_8_200", my_copy) print "So far so good" However, when I run this code (replica.py) now (I have not touched this script in a while) I get the following output $ ./replica.py /usr/lib/python2.5/site-packages/scipy/misc/__init__.py:25: DeprecationWarning: NumpyTest will be removed in the next release; please update your code to use nose or unittest test = NumpyTest().test n_row is, 8 n_row is, 8 So far so good I never had that warning before. Is it anything I should be worried about? Cheers Lorenzo From stefan at sun.ac.za Wed Mar 25 07:46:43 2009 From: stefan at sun.ac.za (=?ISO-8859-1?Q?St=E9fan_van_der_Walt?=) Date: Wed, 25 Mar 2009 13:46:43 +0200 Subject: [SciPy-user] Strange Warning When Running Simple Script In-Reply-To: References: Message-ID: <9457e7c80903250446o3ea7ae64v1df6030a35bb087b@mail.gmail.com> 2009/3/25 Lorenzo Isella : > $ ./replica.py > /usr/lib/python2.5/site-packages/scipy/misc/__init__.py:25: > DeprecationWarning: NumpyTest will be removed in the next release; > please update your code to use nose or unittest > ?test = NumpyTest().test > n_row is, ?8 > n_row is, ?8 > So far so good > > I never had that warning before. Is it anything I should be worried about? No, these warnings won't be present with a newer version of SciPy. Regards St?fan From opossumnano at gmail.com Wed Mar 25 10:55:28 2009 From: opossumnano at gmail.com (Tiziano Zito) Date: Wed, 25 Mar 2009 15:55:28 +0100 Subject: [SciPy-user] "Advanced Scientific Programming in Python" Summer School in Berlin. Germany Message-ID: <20090325145528.GC16808@localhost> Hi all, the summer school I'm co-organizing may be of interest to some of you... cheers, tiziano ------------------- "Advanced Scientific Programming in Python" a G-Node Summer School Many scientists spend much of their time writing, debugging, and maintaining software. 
But while techniques for doing this efficiently have been developed, only few scientists actually use them. As a result, they spend far too much time writing deficient code and reinventing the wheel instead of doing research. In this course we present a selection of advanced programming techniques with theoretical lectures and practical exercises tailored to the needs of the programming scientist. To spice up theory and foster our new skills in a real-world programming project, we will team up to develop an entertaining scientific computer game. We will use the Python programming language for the entire course. With a large collection of open-source scientific modules and all features of a full-fledged programming language, Python is rapidly gaining popularity in the neuroscience community. It enables the scientist to quickly develop powerful, efficient, and structured software and is becoming an essential tool for scientific computing. The summer school is targeted at Post-docs and PhD students from all areas of neuroscience. Substantial proficiency in Python or in another language (e.g. Java, C/C++, MATLAB, Mathematica) is absolutely required. An optional, one-day pre-course is offered to participants without Python experience to familiarize with the language. Date and Location ----------------- August 31st, 2009 -- September 4th, 2009. Berlin, Germany. Preliminary Program ------------------- Day 0 (Mon Aug 31) -- [Optional] Dive into Python Day 1 (Tue Sep 1) -- Software Carpentry - Documenting code and using version control - Test-driven development & unit testing - Debugging, profiling and benchmarking techniques - Object-oriented programming, design patterns and Extreme Programming Day 2 (Wed Sep 2) -- Scientific Tools for Python - NumPy, SciPy, Matplotlib, IPython - Neuroscience libraries - Programming project in the afternoon Day 3 (Thu Sep 3) -- Parallelization - Python multiprocessing for SMP machines - Distributed parallelization for cluster computing - Programming project in the afternoon Day 4 (Fri Sep 4) -- Practical Software Development - Software design - Efficient programming in teams - Quality Assurance - Finalizing the programming project Applications ------------ Applications should be sent before May 31st, 2009 to pythonsummerschool at bccnberlin.de. No fee is charged but participants should take care of travel, living, and accommodation expenses. Applications should include full contact information (name, affiliation, email & phone), a short CV and a short statement addressing the following questions (maximum 500 words): - What is your educational background? - What experience do you have in programming? - Why do you think "Advanced Scientific Programming in Python" is an appropriate course for your skill profile? Candidates will be selected based on their profile. Places are limited: early application is recommended. Faculty ------- Pietro Berkes, Volen Center for Complex Systems, Brandeis University, USA Jens Kremkow, Institut de Neurosciences Cognitives de la M?diterran?e, CNRS, Marseille, France Eilif Muller, Laboratory of Computational Neuroscience, Ecole Polytechnique F?d?rale de Lausanne, Switzerland Michael Schmuker, Neurobiology, Freie Universit?t Berlin, Germany Bartosz Telenczuk, Charit? 
Universit?tsmedizin Berlin, Germany Niko Wilbert, Institute for Theoretical Biology, Humboldt-Universit?t zu Berlin, Germany Tiziano Zito, Bernstein Center for Computational Neuroscience Berlin, Germany Organized by Michael Schmuker and Tiziano Zito for the German Neuroinformatics Node of the INCF. Website: http://www.g-node.org/Teaching Contact: python-summerschool at bccn-berlin.de From djvine at gmail.com Wed Mar 25 10:58:11 2009 From: djvine at gmail.com (David Vine) Date: Wed, 25 Mar 2009 09:58:11 -0500 Subject: [SciPy-user] Loopless square 2d arrays with radially symmetric functions Message-ID: <79387210903250758y52fe548ama479cae6ab58c753@mail.gmail.com> Hello, I use a lot of radially symmetric arrays in my code and I would like to know if there is a more efficient method for creating these arrays than the nested for-loops i am currently using. For example, to create a Gaussian I would currently do this (Python 2.5, Ubuntu Intrepid): import scipy p = scipy.zeros((256,256)) for i in xrange(256): for j in xrange(256): p[i,j] = exp(-0.01*( (i-128.)**2. - (j-128.)**2.) ) and my question is whether there is a more efficient 'loopless' method (i.e involving no for-loops) ? Thanks in advance David -------------- next part -------------- An HTML attachment was scrubbed... URL: From nicolas.roux at st.com Wed Mar 25 10:43:48 2009 From: nicolas.roux at st.com (Nicolas ROUX) Date: Wed, 25 Mar 2009 15:43:48 +0100 Subject: [SciPy-user] Equivalent in Scipy of Matlab resize Message-ID: <000101c9ad58$1b748c40$e7ad810a@gnb.st.com> Hi, I am looking for the equivalent of Matlab resize in Scipy. Matlab's resize is able to upscale/downscale an array with 'linear', 'bilinear', 'bicubic'. Giving a scale factor or the required output shape. I can't find the equivalent function in Scipy. Does it exist ? Thanks for your help ;-) Nicolas. From josef.pktd at gmail.com Wed Mar 25 11:22:46 2009 From: josef.pktd at gmail.com (josef.pktd at gmail.com) Date: Wed, 25 Mar 2009 11:22:46 -0400 Subject: [SciPy-user] Loopless square 2d arrays with radially symmetric functions In-Reply-To: <79387210903250758y52fe548ama479cae6ab58c753@mail.gmail.com> References: <79387210903250758y52fe548ama479cae6ab58c753@mail.gmail.com> Message-ID: <1cd32cbb0903250822s78f66a08hc423b17aff761049@mail.gmail.com> On Wed, Mar 25, 2009 at 10:58 AM, David Vine wrote: > Hello, > > I use a lot of radially symmetric arrays in my code and I would like to know > if there is a more efficient method for creating these arrays than the > nested for-loops i am currently using. > > For example, to create a Gaussian I would currently do this (Python 2.5, > Ubuntu Intrepid): > import scipy > p = scipy.zeros((256,256)) > for i in xrange(256): > ?? for j in xrange(256): > ?????? p[i,j] = exp(-0.01*( (i-128.)**2. - (j-128.)**2.)? ) > > and my question is whether there is a more efficient 'loopless' method (i.e > involving no for-loops) ? > > Thanks in advance > David > I think this does the same with broadcasting import numpy as np i = np.arange(256) p = np.exp(-0.01*( (i[:,np.newaxis]-128.)**2. 
- (i[np.newaxis,:]-128.)**2.)) Josef From dwf at cs.toronto.edu Wed Mar 25 13:13:40 2009 From: dwf at cs.toronto.edu (David Warde-Farley) Date: Wed, 25 Mar 2009 13:13:40 -0400 Subject: [SciPy-user] Equivalent in Scipy of Matlab resize In-Reply-To: <000101c9ad58$1b748c40$e7ad810a@gnb.st.com> References: <000101c9ad58$1b748c40$e7ad810a@gnb.st.com> Message-ID: <706C258F-6E6E-4751-A6D1-F32E156C57A8@cs.toronto.edu> On 25-Mar-09, at 10:43 AM, Nicolas ROUX wrote: > Hi, > > I am looking for the equivalent of Matlab resize in Scipy. > Matlab's resize is able to upscale/downscale an array with 'linear', > 'bilinear', 'bicubic'. > Giving a scale factor or the required output shape. > I can't find the equivalent function in Scipy. > Does it exist ? As far as I know, it doesn't; I thought something like that existed in scipy.ndimage.interpolation, but apparently I was wrong. This functionality definitely exists in PIL: In [63]: import Image In [64]: x = Image.fromarray(randn(50,50)) In [65]: y = x.resize((500,500),Image.BICUBIC) However, you probably lose a lot of precision this way. David From pav at iki.fi Wed Mar 25 13:39:22 2009 From: pav at iki.fi (Pauli Virtanen) Date: Wed, 25 Mar 2009 17:39:22 +0000 (UTC) Subject: [SciPy-user] Equivalent in Scipy of Matlab resize References: <000101c9ad58$1b748c40$e7ad810a@gnb.st.com> <706C258F-6E6E-4751-A6D1-F32E156C57A8@cs.toronto.edu> Message-ID: Wed, 25 Mar 2009 13:13:40 -0400, David Warde-Farley wrote: > On 25-Mar-09, at 10:43 AM, Nicolas ROUX wrote: >> I am looking for the equivalent of Matlab resize in Scipy. Matlab's >> resize is able to upscale/downscale an array with 'linear', 'bilinear', >> 'bicubic'. >> Giving a scale factor or the required output shape. I can't find the >> equivalent function in Scipy. Does it exist ? > > As far as I know, it doesn't; I thought something like that existed in > scipy.ndimage.interpolation, but apparently I was wrong. There's `ndimage.zoom` but it doesn't allow zooming different axes separately. But you can use `ndimage.affine_transform`: ---------------------------- import scipy.ndimage as ndimage import matplotlib.pyplot as plt import numpy as np def imresize(a, nx, ny, **kw): return ndimage.affine_transform( a, [(a.shape[0]-1)*1.0/nx, (a.shape[1]-1)*1.0/ny], output_shape=[nx,ny], **kw) a = np.zeros((5,5)) a[2,2] = 1 plt.subplot(121) plt.imshow(a, interpolation='nearest') plt.subplot(122) plt.imshow(imresize(a, 50, 50, order=1), interpolation='nearest') plt.show() ---------------------------- Seems to work, but no guarantees. -- Pauli Virtanen From Chris.Barker at noaa.gov Wed Mar 25 15:27:44 2009 From: Chris.Barker at noaa.gov (Christopher Barker) Date: Wed, 25 Mar 2009 12:27:44 -0700 Subject: [SciPy-user] OS-X Universal binary? In-Reply-To: <49C9CBDB.6030208@ar.media.kyoto-u.ac.jp> References: <49C93DC6.7020102@noaa.gov> <49C93E99.8080902@noaa.gov> <5b8d13220903241336w6ec4f5c7n44b865362834fdd1@mail.gmail.com> <49C9CBDB.6030208@ar.media.kyoto-u.ac.jp> Message-ID: <49CA85B0.3070308@noaa.gov> David Cournapeau wrote: > I created a new ticket: > > http://projects.scipy.org/numpy/ticket/1067 Thanks. This brings up a n issue from way back -- with the trickiness of Fortran support, it would be nice if one could use the parts of Scipy that don't need it. In my case, the problem at hand required simple 1-d linear interpolation. I went in and commented out all the imports that failed, and got it to work for me. 
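For that simple 1-d linear case, plain NumPy already covers it with no Fortran involved at all: numpy.interp does piecewise-linear interpolation on an increasing set of sample points. A minimal example (made-up data):

import numpy as np

xp = np.array([0.0, 1.0, 2.5, 5.0])    # known sample positions (increasing)
fp = np.array([0.0, 10.0, 5.0, -1.0])  # known sample values
xnew = np.linspace(0.0, 5.0, 11)       # where interpolated values are wanted

ynew = np.interp(xnew, xp, fp)         # pure NumPy, no scipy.interpolate needed
print ynew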
So, it might be nice to be able to use what's possible without the Fortran libs, maybe by wrapping those imports in try: blocks, and reporting something meaningful if they fail, but not stop the rest of the package from being usable. Or just fix the build! -Chris -- Christopher Barker, Ph.D. Oceanographer Emergency Response Division NOAA/NOS/OR&R (206) 526-6959 voice 7600 Sand Point Way NE (206) 526-6329 fax Seattle, WA 98115 (206) 526-6317 main reception Chris.Barker at noaa.gov From cournape at gmail.com Wed Mar 25 15:59:33 2009 From: cournape at gmail.com (David Cournapeau) Date: Thu, 26 Mar 2009 04:59:33 +0900 Subject: [SciPy-user] OS-X Universal binary? In-Reply-To: <49CA85B0.3070308@noaa.gov> References: <49C93DC6.7020102@noaa.gov> <49C93E99.8080902@noaa.gov> <5b8d13220903241336w6ec4f5c7n44b865362834fdd1@mail.gmail.com> <49C9CBDB.6030208@ar.media.kyoto-u.ac.jp> <49CA85B0.3070308@noaa.gov> Message-ID: <5b8d13220903251259h7d55a68ay5501131edd670332@mail.gmail.com> On Thu, Mar 26, 2009 at 4:27 AM, Christopher Barker wrote: > David Cournapeau wrote: >> I created a new ticket: >> >> http://projects.scipy.org/numpy/ticket/1067 > > Thanks. > > This brings up a n issue from way back -- with the trickiness of Fortran > support, it would be nice if one could use the parts of Scipy that don't > need it. That sounds like a good idea, until packages using scipy will break on your fortran-disabled scipy. We should avoid variations of the scipy capabilities as much as possible. > > Or just fix the build! Yes, that's a much better alternative. I just have to understand why the universal capability detection breaks now for gfortran. It used to work, and the related code has not changed recently, cheers, David From david_baddeley at yahoo.com.au Wed Mar 25 16:56:53 2009 From: david_baddeley at yahoo.com.au (David Baddeley) Date: Wed, 25 Mar 2009 13:56:53 -0700 (PDT) Subject: [SciPy-user] Loopless square 2d arrays with radially symmetric functions In-Reply-To: References: Message-ID: <749289.45221.qm@web33001.mail.mud.yahoo.com> There's always: X, Y = numpy.mgrid[-128:128, -128:128] R2 = X**2 + Y**2 p = numpy.exp(-0.01*R2) which uses a bit more memory, but avoids loops. David Message: 3 Date: Wed, 25 Mar 2009 11:22:46 -0400 From: josef.pktd at gmail.com Subject: Re: [SciPy-user] Loopless square 2d arrays with radially symmetric functions To: SciPy Users List Message-ID: <1cd32cbb0903250822s78f66a08hc423b17aff761049 at mail.gmail.com> Content-Type: text/plain; charset=ISO-8859-1 On Wed, Mar 25, 2009 at 10:58 AM, David Vine wrote: > Hello, > > I use a lot of radially symmetric arrays in my code and I would like to know > if there is a more efficient method for creating these arrays than the > nested for-loops i am currently using. > > For example, to create a Gaussian I would currently do this (Python 2.5, > Ubuntu Intrepid): > import scipy > p = scipy.zeros((256,256)) > for i in xrange(256): > ?? for j in xrange(256): > ?????? p[i,j] = exp(-0.01*( (i-128.)**2. - (j-128.)**2.)? ) > > and my question is whether there is a more efficient 'loopless' method (i.e > involving no for-loops) ? > > Thanks in advance > David > I think this does the same with broadcasting import numpy as np i = np.arange(256) p = np.exp(-0.01*( (i[:,np.newaxis]-128.)**2. - (i[np.newaxis,:]-128.)**2.)) Josef ------------------------------ Get fit, feel better! 
All the help you need at Yahoo!Xtra Health - http://nz.lifestyle.yahoo.com/health From mustafarahman23 at gmail.com Wed Mar 25 17:48:12 2009 From: mustafarahman23 at gmail.com (Moofle) Date: Wed, 25 Mar 2009 21:48:12 +0000 (UTC) Subject: [SciPy-user] Using fmin Message-ID: Dear All, I am a beginner to python and scipy for that matter. I am trying to program GARCH(1,1) and I am attempting to use fmin or fmin_powell to optimize the equation : = SUM(from i to N) [-ln(vi)-((ui**2)/(vi))] (where i is the index) I am trying to get some hints as to how to answer this question: 1. vi is dependent on ui and the v before it (vi-1). It is defined by this equation: vi = omega + alpha*ui-1 + beta*vi-1 (where omega, alpha, beta are the parameters I need to optimize, i is the index) I am not sure how to pass alpha, omega, beta into the fmin method. To be honest, I am not even completely sure about how that method even works even though I have looked at the documentation for hours. In particular, I dont know what the args=() is used for!! I can solve this in excel using solver and AMPL, but I sure would appreciate a few hints!! Many Thanks. From josef.pktd at gmail.com Wed Mar 25 18:42:00 2009 From: josef.pktd at gmail.com (josef.pktd at gmail.com) Date: Wed, 25 Mar 2009 18:42:00 -0400 Subject: [SciPy-user] Using fmin In-Reply-To: References: Message-ID: <1cd32cbb0903251542o519fc3ebof66489e82917eb6b@mail.gmail.com> On Wed, Mar 25, 2009 at 5:48 PM, Moofle wrote: > Dear All, > > I am a beginner to python and scipy for that matter. I am trying to program > GARCH(1,1) and I am attempting to use fmin or fmin_powell to optimize the > equation : = > > SUM(from i to N) [-ln(vi)-((ui**2)/(vi))] ?(where i is the index) > > I am trying to get some hints as to how to answer this question: > > 1. vi is dependent on ui and the v before it (vi-1). It is defined by this > equation: vi = omega + alpha*ui-1 + beta*vi-1 (where omega, alpha, beta are the > parameters I need to optimize, i is the index) > > I am not sure how to pass alpha, omega, beta into the fmin method. To be honest, > I am not even completely sure about how that method even works even though I > have looked at the documentation for hours. In particular, I dont know what the > args=() is used for!! I can solve this in excel using solver and AMPL, but I > sure would appreciate a few hints!! > > Many Thanks. > A good place to start is looking at the different versions of curve fitting in http://www.scipy.org/Cookbook/FittingData the basic idea is similar for all numerical optimization routines fmin(func, x0, args=(), ...) where you need to provide func as the objective function, that takes an array as optimization parameters, and additional arguments for example the time series of your observations. x0 are the starting values args is a list/tuple of additional arguments example : -------------- mlogitloglike is the negative loglikelihood of the multivariate logit model, the objective function param are the parameters I want to estimate y is independent variable x,z, are explanatory variables def mlogitloglike(param, y, x, z): '''wrapper to get negative loglikelihood to feed to optimization procedure param are parameters of the logit model''' unpack parameters and calculate likelihood function ... 
define starting parameters param0 then: result = optimize.fmin(mlogitloglike, param0, args=(y,x,z)) Josef From rmay31 at gmail.com Wed Mar 25 19:22:53 2009 From: rmay31 at gmail.com (Ryan May) Date: Wed, 25 Mar 2009 18:22:53 -0500 Subject: [SciPy-user] Loopless square 2d arrays with radially symmetric functions In-Reply-To: <749289.45221.qm@web33001.mail.mud.yahoo.com> References: <749289.45221.qm@web33001.mail.mud.yahoo.com> Message-ID: On Wed, Mar 25, 2009 at 3:56 PM, David Baddeley wrote: > > There's always: > > X, Y = numpy.mgrid[-128:128, -128:128] > R2 = X**2 + Y**2 > p = numpy.exp(-0.01*R2) > > which uses a bit more memory, but avoids loops. > > David > If you use ogrid instead of mgrid, you'll avoid the increase in memory, since the arrays are created in a form ready for broadcasting. Ryan -- Ryan May Graduate Research Assistant School of Meteorology University of Oklahoma -------------- next part -------------- An HTML attachment was scrubbed... URL: From dwf at cs.toronto.edu Thu Mar 26 00:33:55 2009 From: dwf at cs.toronto.edu (David Warde-Farley) Date: Thu, 26 Mar 2009 00:33:55 -0400 Subject: [SciPy-user] Using fmin In-Reply-To: References: Message-ID: <135071D4-38FE-4672-BCD2-C4D8E195E319@cs.toronto.edu> On 25-Mar-09, at 5:48 PM, Moofle wrote: > I am not sure how to pass alpha, omega, beta into the fmin method. > To be honest, > I am not even completely sure about how that method even works even > though I > have looked at the documentation for hours. In particular, I dont > know what the > args=() is used for!! I can solve this in excel using solver and > AMPL, but I > sure would appreciate a few hints!! If you're simultaneously optimizing over all of these parameters, you'll need to write a function that takes as its first parameter a vector argument, then unpack it. Are the u_i's held constant? "args" is for parameters to your function that don't change from one step to the next. They are passed in every time your function is called as additional arguments. For example, if I wanted to minimize (x - p + q)**2, but didn't want to hardcode values of p and q into my function, I'd do something like def foo(x, p, q): return (x - p + q)**2 and then call fmin_powell(foo, 20, args=(5,3)). Then every time foo gets called it will receive 5 as its argument for p and 3 as it's argument for q, with 20 as the starting value. In [8]: fmin_powell(foo, 20, args=(30,2)) Optimization terminated successfully. Current function value: 0.000000 Iterations: 2 Function evaluations: 20 Out[8]: array(28.000000000000039) If, as I suspect, the u's are constant, I'd do something like this: def myfunction(parameters, u): alpha, beta, omega = parameters total = 0 for u_i in u: v_i = ... # fill in code for v_i total += ... # fill in code for the i'th term return total Then call fmin_powell(myfunction, initialguesses, args=(U,)) Where initialguesses is an array containing the initial value for alpha, beta, and omega, and U is an array containing the u_i's. NOTE that args=(U,) creates a tuple of length 1. args=U would treat each u_i as a separate argument to the function. David From haase at msg.ucsf.edu Thu Mar 26 01:17:17 2009 From: haase at msg.ucsf.edu (Sebastian Haase) Date: Thu, 26 Mar 2009 06:17:17 +0100 Subject: [SciPy-user] OS-X Universal binary? 
In-Reply-To: <5b8d13220903251259h7d55a68ay5501131edd670332@mail.gmail.com> References: <49C93DC6.7020102@noaa.gov> <49C93E99.8080902@noaa.gov> <5b8d13220903241336w6ec4f5c7n44b865362834fdd1@mail.gmail.com> <49C9CBDB.6030208@ar.media.kyoto-u.ac.jp> <49CA85B0.3070308@noaa.gov> <5b8d13220903251259h7d55a68ay5501131edd670332@mail.gmail.com> Message-ID: On Wed, Mar 25, 2009 at 8:59 PM, David Cournapeau wrote: > On Thu, Mar 26, 2009 at 4:27 AM, Christopher Barker > wrote: >> David Cournapeau wrote: >>> I created a new ticket: >>> >>> http://projects.scipy.org/numpy/ticket/1067 >> >> Thanks. >> >> This brings up a n issue from way back -- with the trickiness of Fortran >> support, it would be nice if one could use the parts of Scipy that don't >> need it. > > That sounds like a good idea, until packages using scipy will break on > your fortran-disabled scipy. We should avoid variations of the scipy > capabilities as much as possible. > This of course reminds of a fundamental discussion from long ago -- as I remember there were plans to split scipy into sub-packages that would possibly be able to live by themselves and make the whole thing easier to install. "there was a time when numpy was called scipy-core ..." Anyway, even if this plan is not to be followed through, I would like to know: * What are "easily separable packages" ? I would think ndimages might be an examples, or ? * How much really depends on fortran ? This is also interesting to get a "quick" win-64 built. [ Chris, which imports did you comment out and where ? ] * Would numscons make such a separation potentially more feasible than it used to be ? Thanks, Sebastian Haase From Chris.Barker at noaa.gov Thu Mar 26 01:34:37 2009 From: Chris.Barker at noaa.gov (Christopher Barker) Date: Wed, 25 Mar 2009 22:34:37 -0700 Subject: [SciPy-user] OS-X Universal binary? In-Reply-To: References: <49C93DC6.7020102@noaa.gov> <49C93E99.8080902@noaa.gov> <5b8d13220903241336w6ec4f5c7n44b865362834fdd1@mail.gmail.com> <49C9CBDB.6030208@ar.media.kyoto-u.ac.jp> <49CA85B0.3070308@noaa.gov> <5b8d13220903251259h7d55a68ay5501131edd670332@mail.gmail.com> Message-ID: <49CB13ED.601@noaa.gov> Sebastian Haase wrote: > This of course reminds of a fundamental discussion from long ago -- as > I remember there were plans to split scipy into sub-packages that > would possibly be able to live by themselves and make the whole thing > easier to install. yup. > Anyway, even if this plan is not to be followed through, I would like to know: > * What are "easily separable packages" ? I would think ndimages might > be an examples, or ? > * How much really depends on fortran ? This is also interesting to > get a "quick" win-64 built. I think that's the trick -- the fortran depedencies are not alwyas clearly separable... In the past, I've found it a bit tricky to have a package that could be installed separately, but when installed, it would be just part of a bigger package -- that's not to say it can't be done. > [ Chris, which imports did you comment out and where ? ] I don't have that machine handy, but it was in the scipy.interpolate package. In it's __init__.py, it imports a bunch of stuff, some of which is (or relies on ) fortran. I think it was mostly fitpack. The trick is that is is nice to have the fancier interpolaters right there with the rest. -Chris From david at ar.media.kyoto-u.ac.jp Thu Mar 26 02:07:36 2009 From: david at ar.media.kyoto-u.ac.jp (David Cournapeau) Date: Thu, 26 Mar 2009 15:07:36 +0900 Subject: [SciPy-user] OS-X Universal binary? 
In-Reply-To: References: <49C93DC6.7020102@noaa.gov> <49C93E99.8080902@noaa.gov> <5b8d13220903241336w6ec4f5c7n44b865362834fdd1@mail.gmail.com> <49C9CBDB.6030208@ar.media.kyoto-u.ac.jp> <49CA85B0.3070308@noaa.gov> <5b8d13220903251259h7d55a68ay5501131edd670332@mail.gmail.com> Message-ID: <49CB1BA8.4090301@ar.media.kyoto-u.ac.jp> Sebastian Haase wrote: > On Wed, Mar 25, 2009 at 8:59 PM, David Cournapeau wrote: > >> On Thu, Mar 26, 2009 at 4:27 AM, Christopher Barker >> wrote: >> >>> David Cournapeau wrote: >>> >>>> I created a new ticket: >>>> >>>> http://projects.scipy.org/numpy/ticket/1067 >>>> >>> Thanks. >>> >>> This brings up a n issue from way back -- with the trickiness of Fortran >>> support, it would be nice if one could use the parts of Scipy that don't >>> need it. >>> >> That sounds like a good idea, until packages using scipy will break on >> your fortran-disabled scipy. We should avoid variations of the scipy >> capabilities as much as possible. >> >> > This of course reminds of a fundamental discussion from long ago -- as > I remember there were plans to split scipy into sub-packages that > would possibly be able to live by themselves and make the whole thing > easier to install. > "there was a time when numpy was called scipy-core ..." > > Anyway, even if this plan is not to be followed through, I would like to know: > * What are "easily separable packages" ? I would think ndimages might > be an examples, or ? > Almost every package in scipy/ can be installed separately. I actually do that quite often when porting on 'exotic' platforms like win64 or solaris: I only first try to build/install/test the pure C code, removing any package which depends on fortran. This works really well from a build POV. But I don't think it is desirable to enable such scipy installation for the end users, not in the current state of things (if we could have something like cran for numpy, I would certainly change my mind). Being able to install things which don't depend on C++/Fortran would mean that the fortran parts are even less tested. And fundamentally, I don't think scipy wo fortran would be that useful: no fft, no linalg, no special functions. Who can use this ? Being able to strip things inside a package (inside scipy/interpolate for example) is a very bad idea IMHO, because different scipy installation would have different configurations. If you have a package which depends on scipy, you would have to test whether scipy.special support the function you request. > * Would numscons make such a separation potentially more feasible than > it used to be ? > I don't think so, unless I misunderstand what you mean by separation, cheers, David From jan.rauberg at gmx.de Thu Mar 26 04:05:21 2009 From: jan.rauberg at gmx.de (Jan Rauberg) Date: Thu, 26 Mar 2009 08:05:21 +0000 (UTC) Subject: [SciPy-user] scipy.signal.firwin References: <20090319160325.GA29170@avicenna.ee.columbia.edu> <22638541.post@talk.nabble.com> <22638571.post@talk.nabble.com> Message-ID: Tom K. kraussfamily.org> writes: ... > And now, with my typos fixed: > N=11 > h=signal.firwin(N, .3) > n=np.arange(-(N/2), (N/2)+1) > g=(n==0)-h # (n==0) has a single "1" where n[k]==0, and zeros else > g1=(-1)**n*h # (-1)**n is [-1, +1, -1, ...] with a "+1" at n[k]==0 > Thank you Tom, your solution seems to work. But if scipy want to be a Matlab-replacement I think it should have such basic functions like 'fir1' with the full range of that functionality. So I have to switch to octave or R. 
But they have not such a nice and fast plotting tool like matplotlib. So I hope someone will do this work. Thank you Jan From nicolas.roux at st.com Thu Mar 26 04:48:05 2009 From: nicolas.roux at st.com (Nicolas ROUX) Date: Thu, 26 Mar 2009 09:48:05 +0100 Subject: [SciPy-user] scipy.signal.firwin...replacement for Matlab In-Reply-To: Message-ID: <000201c9adef$944f6530$e7ad810a@gnb.st.com> Hi, About, "if scipy want to be a Matlab-replacement" I think it would be good to have a detailed list of Matlab/Scipy function equivalences. We are often not sure whether a given Matlab function exists in Scipy, and some of my team switch back to Matlab for lack of information ;-( It seems that some functions missing from Scipy could be replaced by a few lines of code. It would be good to add such workarounds to this equivalence list. Does this detailed list exist already? Thanks, Cheers, Nicolas. -----Original Message----- From: scipy-user-bounces at scipy.org [mailto:scipy-user-bounces at scipy.org] On Behalf Of Jan Rauberg Sent: Thursday, March 26, 2009 9:05 AM To: scipy-user at scipy.org Subject: Re: [SciPy-user] scipy.signal.firwin Tom K. kraussfamily.org> writes: ... > And now, with my typos fixed: > N=11 > h=signal.firwin(N, .3) > n=np.arange(-(N/2), (N/2)+1) > g=(n==0)-h # (n==0) has a single "1" where n[k]==0, and zeros else > g1=(-1)**n*h # (-1)**n is [-1, +1, -1, ...] with a "+1" at n[k]==0 > Thank you Tom, your solution seems to work. But if scipy want to be a Matlab-replacement I think it should have such basic functions like 'fir1' with the full range of that functionality. So I have to switch to octave or R. But they have not such a nice and fast plotting tool like matplotlib. So I hope someone will do this work. Thank you Jan _______________________________________________ SciPy-user mailing list SciPy-user at scipy.org http://mail.scipy.org/mailman/listinfo/scipy-user From nwagner at iam.uni-stuttgart.de Thu Mar 26 05:16:22 2009 From: nwagner at iam.uni-stuttgart.de (Nils Wagner) Date: Thu, 26 Mar 2009 10:16:22 +0100 Subject: [SciPy-user] scipy.signal.firwin...replacement for Matlab In-Reply-To: <000201c9adef$944f6530$e7ad810a@gnb.st.com> References: <000201c9adef$944f6530$e7ad810a@gnb.st.com> Message-ID: On Thu, 26 Mar 2009 09:48:05 +0100 Nicolas ROUX wrote: > Hi, > > About, > "if scipy want to be a Matlab-replacement" > > I think it would be good to have a detailed list of Matlab/Scipy function equivalences. > We are often not sure whether a given Matlab function exists in Scipy, and some of my team switch back to Matlab for lack of information ;-( > It seems that some functions missing from Scipy could be replaced by a few lines of code. > It would be good to add such workarounds to this equivalence list. > > Does this detailed list exist already? There are some (incomplete) lists, e.g. http://mathesaurus.sourceforge.net/matlab-numpy.html http://www.scipy.org/NumPy_for_Matlab_Users Nils From sebastian.walter at gmail.com Thu Mar 26 05:50:24 2009 From: sebastian.walter at gmail.com (Sebastian Walter) Date: Thu, 26 Mar 2009 10:50:24 +0100 Subject: [SciPy-user] Automatic Differentiation with PYADOLC and Removing Boost::Python dependency Message-ID: Hello, I have implemented a wrapper for the C++ Automatic Differentiation (AD) tool ADOL-C. You can use it to differentiate complex algorithms to arbitrary order. It works quite well with numpy. You can have a look at it at http://github.com/b45ch1/pyadolc . 
EXAMPLE USAGE: ============== compute the Jacobian J of f(x) = numpy.dot(A,x), where A is an (N,M) array --------------- get_started.py ---------------------- import numpy from adolc import * N = M = 10 A = numpy.zeros((M,N)) A[:] = [[ 1./N +(n==m) for n in range(N)] for m in range(M)] def f(x): return numpy.dot(A,x) # tape a function evaluation ax = numpy.array([adouble(0) for n in range(N)]) trace_on(1) independent(ax) ay = f(ax) dependent(ay) trace_off() x = numpy.array([n+1 for n in range(N)]) # compute jacobian of f at x J = jacobian(1,x) # compute gradient of f at x if M==1: g = gradient(1,x) --------------- end get_started.py ---------------------- PERFORMANCE: ============= It is really fast compared to existing AD tools for Python as for example Scientific.Functions.Derivatives. Benchmark available at http://github.com/b45ch1/pyadolc/blob/239a18c773c19a71bb5508dee175473a2fad7c83/tests/speed_comparison_pyadolc_ScientificPythonFunctionsDerivatives/pyadolc_vs_scientific_python.py compute hessian of: def f(x): return 0.5*dot(x,dot(A,x)) Runtime comparison: adolc: elapsed time = 0.000411 sec Scientific: elapsed time = 0.041264 sec ratio time adolc/Scientific Python: 0.009961 I.e. pyadolc is a factor 100 faster. Removing Boost::Python dependency ? =============================== I have used Boost::Python to wrap it, but I am not happy with that additional dependency! So I wondered if someone could give me advice how to avoid users having to download and install boost::python to use pyadolc. (include boost::python sources ? port to C API?) best regards, Sebastian Walter From david at ar.media.kyoto-u.ac.jp Thu Mar 26 05:44:37 2009 From: david at ar.media.kyoto-u.ac.jp (David Cournapeau) Date: Thu, 26 Mar 2009 18:44:37 +0900 Subject: [SciPy-user] Automatic Differentiation with PYADOLC and Removing Boost::Python dependency In-Reply-To: References: Message-ID: <49CB4E85.4030508@ar.media.kyoto-u.ac.jp> Sebastian Walter wrote: > Removing Boost::Python dependency ? > =============================== > > I have used Boost::Python to wrap it, but I am not happy with that > additional dependency! > So I wondered if someone could give me advice how to avoid users > having to download and install boost::python to use pyadolc. > (include boost::python sources ? port to C API?) > If you care about portability, and if that's an option, I would advise to just drop boost. Boost is not portable nor stable, every new version is almost always incompatible with each other. In my own experience, everytime I had to build something from source with boost, it was a problem, cheers, David From nwagner at iam.uni-stuttgart.de Thu Mar 26 07:35:13 2009 From: nwagner at iam.uni-stuttgart.de (Nils Wagner) Date: Thu, 26 Mar 2009 12:35:13 +0100 Subject: [SciPy-user] Automatic Differentiation with PYADOLC and Removing Boost::Python dependency In-Reply-To: References: Message-ID: On Thu, 26 Mar 2009 10:50:24 +0100 Sebastian Walter wrote: > Hello, > > I have implemented a wrapper for the C++ Automatic >Differentiation > (AD) tool ADOL-C. > You can use it to differentiate complex algorithms to >arbitrary order. > It works quite well with numpy. > > You can have a look at it at >http://github.com/b45ch1/pyadolc . 
> > EXAMPLE USAGE: > ============== > compute the Jacobian J of > f(x) = numpy.dot(A,x), where A is an (N,M) array > > > --------------- get_started.py ---------------------- > import numpy > from adolc import * > > N = M = 10 > A = numpy.zeros((M,N)) > A[:] = [[ 1./N +(n==m) for n in range(N)] for m in >range(M)] > > > def f(x): > return numpy.dot(A,x) > > # tape a function evaluation > ax = numpy.array([adouble(0) for n in range(N)]) > trace_on(1) > independent(ax) > ay = f(ax) > dependent(ay) > trace_off() > > > x = numpy.array([n+1 for n in range(N)]) > > # compute jacobian of f at x > J = jacobian(1,x) > > # compute gradient of f at x > if M==1: > g = gradient(1,x) > > --------------- end get_started.py >---------------------- > > > PERFORMANCE: > ============= > > It is really fast compared to existing AD tools for >Python as for > example Scientific.Functions.Derivatives. > Benchmark available at > http://github.com/b45ch1/pyadolc/blob/239a18c773c19a71bb5508dee175473a2fad7c83/tests/speed_comparison_pyadolc_ScientificPythonFunctionsDerivatives/pyadolc_vs_scientific_python.py > > compute hessian of: > def f(x): > return 0.5*dot(x,dot(A,x)) > > Runtime comparison: > adolc: elapsed time = 0.000411 sec > Scientific: elapsed time = 0.041264 sec > ratio time adolc/Scientific Python: 0.009961 > > I.e. pyadolc is a factor 100 faster. > > > > Removing Boost::Python dependency ? > =============================== > > I have used Boost::Python to wrap it, but I am not happy >with that > additional dependency! > So I wondered if someone could give me advice how to >avoid users > having to download and install boost::python to use >pyadolc. > (include boost::python sources ? port to C API?) > > > best regards, > Sebastian Walter > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user Hi all, I tried to install pyadolc on an x86_64 box. python setup.py build running build running config_cc unifing config_cc, config, build_clib, build_ext, build commands --compiler options running config_fc unifing config_fc, config, build_clib, build_ext, build commands --fcompiler options running build_src building extension "_adolc" sources running build_ext customize UnixCCompiler customize UnixCCompiler using build_ext customize UnixCCompiler customize UnixCCompiler using build_ext building '_adolc' extension compiling C++ sources C compiler: g++ -pthread -fno-strict-aliasing -DNDEBUG -g -O3 -Wall -fPIC creating build creating build/temp.linux-x86_64-2.5 compile options: '-I/data/home/nwagner/local/lib/python2.5/site-packages/numpy/core/include -I./adolc-2.0.0 -I/data/home/nwagner/local/lib/python2.5/site-packages/numpy/core/include -I/data/home/nwagner/local/include/python2.5 -c' extra options: '-ftemplate-depth-100 -DBOOST_PYTHON_DYNAMIC_LIB' g++: ./py_adolc.cpp ./num_util.h: In function `boost::python::numeric::array num_util::makeNum(T*, npy_intp) [with T = double]': ./py_adolc.cpp:62: instantiated from here ./num_util.h:76: Fehler: ?npy_intp*? kann nicht nach ?int*? in argument passing umgewandelt werden ./num_util.h: In function `boost::python::numeric::array num_util::makeNum(T*, std::vector >) [with T = double]': ./py_adolc.cpp:86: instantiated from here ./num_util.h:94: Fehler: ?npy_intp*? kann nicht nach ?int*? 
in argument passing umgewandelt werden ./num_util.h: In function `boost::python::numeric::array num_util::makeNum(T*, std::vector >) [with T = short int]': ./py_adolc.cpp:417: instantiated from here ./num_util.h:94: Fehler: ?npy_intp*? kann nicht nach ?int*? in argument passing umgewandelt werden ./num_util.h: In function `boost::python::numeric::array num_util::makeNum(T*, npy_intp) [with T = double]': ./py_adolc.cpp:62: instantiated from here ./num_util.h:76: Fehler: ?npy_intp*? kann nicht nach ?int*? in argument passing umgewandelt werden ./num_util.h: In function `boost::python::numeric::array num_util::makeNum(T*, std::vector >) [with T = double]': ./py_adolc.cpp:86: instantiated from here ./num_util.h:94: Fehler: ?npy_intp*? kann nicht nach ?int*? in argument passing umgewandelt werden ./num_util.h: In function `boost::python::numeric::array num_util::makeNum(T*, std::vector >) [with T = short int]': ./py_adolc.cpp:417: instantiated from here ./num_util.h:94: Fehler: ?npy_intp*? kann nicht nach ?int*? in argument passing umgewandelt werden error: Command "g++ -pthread -fno-strict-aliasing -DNDEBUG -g -O3 -Wall -fPIC -I/data/home/nwagner/local/lib/python2.5/site-packages/numpy/core/include -I./adolc-2.0.0 -I/data/home/nwagner/local/lib/python2.5/site-packages/numpy/core/include -I/data/home/nwagner/local/include/python2.5 -c ./py_adolc.cpp -o build/temp.linux-x86_64-2.5/py_adolc.o -ftemplate-depth-100 -DBOOST_PYTHON_DYNAMIC_LIB" failed with exit status 1 How can I fix the problem ? Nils From sebastian.walter at gmail.com Thu Mar 26 08:22:03 2009 From: sebastian.walter at gmail.com (Sebastian Walter) Date: Thu, 26 Mar 2009 13:22:03 +0100 Subject: [SciPy-user] Automatic Differentiation with PYADOLC and Removing Boost::Python dependency In-Reply-To: References: Message-ID: hey, thanks for the bug report! Yes, that's something I have expected since I didn't have the chance to compile it on a 64bit system. I was a little lax with the integers! I guess I'll have to use npy_intp instead of int in some places. Shouldn't take too long to fix, but I have to set up a 64bit Linux first. On Thu, Mar 26, 2009 at 12:35 PM, Nils Wagner wrote: > On Thu, 26 Mar 2009 10:50:24 +0100 > Sebastian Walter wrote: >> Hello, >> >> I have implemented a wrapper for the C++ Automatic >>Differentiation >> (AD) tool ADOL-C. >> You can use it to differentiate complex algorithms to >>arbitrary order. >> It works quite well with numpy. >> >> You can have a look at it at >>http://github.com/b45ch1/pyadolc . >> >> EXAMPLE USAGE: >> ============== >> compute the Jacobian J of >> f(x) = numpy.dot(A,x), where A is an (N,M) array >> >> >> --------------- get_started.py ---------------------- >> import numpy >> from adolc import * >> >> N = M = 10 >> A = numpy.zeros((M,N)) >> A[:] = [[ 1./N +(n==m) for n in range(N)] for m in >>range(M)] >> >> >> def f(x): >> return numpy.dot(A,x) >> >> # tape a function evaluation >> ax = numpy.array([adouble(0) for n in range(N)]) >> trace_on(1) >> independent(ax) >> ay = f(ax) >> dependent(ay) >> trace_off() >> >> >> x = numpy.array([n+1 for n in range(N)]) >> >> # compute jacobian of f at x >> J = jacobian(1,x) >> >> # compute gradient of f at x >> if M==1: >> g = gradient(1,x) >> >> --------------- end get_started.py >>---------------------- >> >> >> PERFORMANCE: >> ============= >> >> It is really fast compared to existing AD tools for >>Python as for >> example Scientific.Functions.Derivatives. 
>> Benchmark available at >> http://github.com/b45ch1/pyadolc/blob/239a18c773c19a71bb5508dee175473a2fad7c83/tests/speed_comparison_pyadolc_ScientificPythonFunctionsDerivatives/pyadolc_vs_scientific_python.py >> >> compute hessian of: >> def f(x): >> return 0.5*dot(x,dot(A,x)) >> >> Runtime comparison: >> adolc: elapsed time = 0.000411 sec >> Scientific: elapsed time = 0.041264 sec >> ratio time adolc/Scientific Python: 0.009961 >> >> I.e. pyadolc is a factor 100 faster. >> >> >> >> Removing Boost::Python dependency ? >> =============================== >> >> I have used Boost::Python to wrap it, but I am not happy >>with that >> additional dependency! >> So I wondered if someone could give me advice how to >>avoid users >> having to download and install boost::python to use >>pyadolc. >> (include boost::python sources ? port to C API?) >> >> >> best regards, >> Sebastian Walter >> _______________________________________________ >> SciPy-user mailing list >> SciPy-user at scipy.org >> http://mail.scipy.org/mailman/listinfo/scipy-user > > Hi all, > > I tried to install pyadolc on an x86_64 box. > > python setup.py build > running build > running config_cc > unifing config_cc, config, build_clib, build_ext, build > commands --compiler options > running config_fc > unifing config_fc, config, build_clib, build_ext, build > commands --fcompiler options > running build_src > building extension "_adolc" sources > running build_ext > customize UnixCCompiler > customize UnixCCompiler using build_ext > customize UnixCCompiler > customize UnixCCompiler using build_ext > building '_adolc' extension > compiling C++ sources > C compiler: g++ -pthread -fno-strict-aliasing -DNDEBUG -g > -O3 -Wall -fPIC > > creating build > creating build/temp.linux-x86_64-2.5 > compile options: > '-I/data/home/nwagner/local/lib/python2.5/site-packages/numpy/core/include > -I./adolc-2.0.0 > -I/data/home/nwagner/local/lib/python2.5/site-packages/numpy/core/include > -I/data/home/nwagner/local/include/python2.5 -c' > extra options: '-ftemplate-depth-100 > -DBOOST_PYTHON_DYNAMIC_LIB' > g++: ./py_adolc.cpp > ./num_util.h: In function `boost::python::numeric::array > num_util::makeNum(T*, npy_intp) [with T = double]': > ./py_adolc.cpp:62: instantiated from here > ./num_util.h:76: Fehler: ?npy_intp*? kann nicht nach > ?int*? in argument passing umgewandelt werden > ./num_util.h: In function `boost::python::numeric::array > num_util::makeNum(T*, std::vector std::allocator >) [with T = double]': > ./py_adolc.cpp:86: instantiated from here > ./num_util.h:94: Fehler: ?npy_intp*? kann nicht nach > ?int*? in argument passing umgewandelt werden > ./num_util.h: In function `boost::python::numeric::array > num_util::makeNum(T*, std::vector std::allocator >) [with T = short int]': > ./py_adolc.cpp:417: instantiated from here > ./num_util.h:94: Fehler: ?npy_intp*? kann nicht nach > ?int*? in argument passing umgewandelt werden > ./num_util.h: In function `boost::python::numeric::array > num_util::makeNum(T*, npy_intp) [with T = double]': > ./py_adolc.cpp:62: instantiated from here > ./num_util.h:76: Fehler: ?npy_intp*? kann nicht nach > ?int*? in argument passing umgewandelt werden > ./num_util.h: In function `boost::python::numeric::array > num_util::makeNum(T*, std::vector std::allocator >) [with T = double]': > ./py_adolc.cpp:86: instantiated from here > ./num_util.h:94: Fehler: ?npy_intp*? kann nicht nach > ?int*? 
in argument passing umgewandelt werden > ./num_util.h: In function `boost::python::numeric::array > num_util::makeNum(T*, std::vector std::allocator >) [with T = short int]': > ./py_adolc.cpp:417: instantiated from here > ./num_util.h:94: Fehler: ?npy_intp*? kann nicht nach > ?int*? in argument passing umgewandelt werden > error: Command "g++ -pthread -fno-strict-aliasing -DNDEBUG > -g -O3 -Wall -fPIC > -I/data/home/nwagner/local/lib/python2.5/site-packages/numpy/core/include > -I./adolc-2.0.0 > -I/data/home/nwagner/local/lib/python2.5/site-packages/numpy/core/include > -I/data/home/nwagner/local/include/python2.5 -c > ./py_adolc.cpp -o build/temp.linux-x86_64-2.5/py_adolc.o > -ftemplate-depth-100 -DBOOST_PYTHON_DYNAMIC_LIB" failed > with exit status 1 > > > How can I fix the problem ? > > Nils > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user > From ndbecker2 at gmail.com Thu Mar 26 09:49:46 2009 From: ndbecker2 at gmail.com (Neal Becker) Date: Thu, 26 Mar 2009 09:49:46 -0400 Subject: [SciPy-user] Automatic Differentiation with PYADOLC and Removing Boost::Python dependency References: <49CB4E85.4030508@ar.media.kyoto-u.ac.jp> Message-ID: David Cournapeau wrote: > Sebastian Walter wrote: >> Removing Boost::Python dependency ? >> =============================== >> >> I have used Boost::Python to wrap it, but I am not happy with that >> additional dependency! >> So I wondered if someone could give me advice how to avoid users >> having to download and install boost::python to use pyadolc. >> (include boost::python sources ? port to C API?) >> > > If you care about portability, and if that's an option, I would advise > to just drop boost. Boost is not portable nor stable, every new version > is almost always incompatible with each other. In my own experience, > everytime I had to build something from source with boost, it was a > problem, > > cheers, > > David IMO, that's harsh. I use boost::python heavily. I don't have any such problem. I doubt that instability is an issue - I don't think this code has changed at all in quite some time. From david at ar.media.kyoto-u.ac.jp Thu Mar 26 09:56:10 2009 From: david at ar.media.kyoto-u.ac.jp (David Cournapeau) Date: Thu, 26 Mar 2009 22:56:10 +0900 Subject: [SciPy-user] Automatic Differentiation with PYADOLC and Removing Boost::Python dependency In-Reply-To: References: <49CB4E85.4030508@ar.media.kyoto-u.ac.jp> Message-ID: <49CB897A.4050200@ar.media.kyoto-u.ac.jp> Neal Becker wrote: > > IMO, that's harsh. I use boost::python heavily. I don't have any such > problem. But that's not the same thing. In your case, you use boost internally for your own usage, right ? What is problematic is distributing boost-based software from sources. In this later case, it is well known that boost is a significant problem. cheers, David From tpk at kraussfamily.org Thu Mar 26 10:57:43 2009 From: tpk at kraussfamily.org (Tom K.) Date: Thu, 26 Mar 2009 07:57:43 -0700 (PDT) Subject: [SciPy-user] scipy.signal.firwin In-Reply-To: References: <20090319160325.GA29170@avicenna.ee.columbia.edu> <22638541.post@talk.nabble.com> <22638571.post@talk.nabble.com> Message-ID: <22723748.post@talk.nabble.com> I created a ticket at the scipy development wiki: http://projects.scipy.org/scipy/ticket/902 Jan Rauberg wrote: > > ... > your solution seems to work. 
But if scipy want to be a Matlab-replacement > I > think it should have such basic functions like 'fir1' with the full range > of > that functionality. So I have to switch to octave or R. But they have not > such a > nice and fast plotting tool like matplotlib. So I hope someone will do > this work. > -- View this message in context: http://www.nabble.com/scipy.signal.firwin-tp22587139p22723748.html Sent from the Scipy-User mailing list archive at Nabble.com. From lists_ravi at lavabit.com Thu Mar 26 11:41:18 2009 From: lists_ravi at lavabit.com (Ravi) Date: Thu, 26 Mar 2009 11:41:18 -0400 Subject: [SciPy-user] Automatic Differentiation with PYADOLC and Removing Boost::Python dependency In-Reply-To: References: Message-ID: <200903261141.18716.lists_ravi@lavabit.com> Hi Sebastian, Thank you for writing a wrapper for ADOL-C. On Thursday 26 March 2009 05:50:24 Sebastian Walter wrote: > I have implemented a wrapper for the C++ Automatic Differentiation > (AD) tool ADOL-C. > You can use it to differentiate complex algorithms to arbitrary order. > It works quite well with numpy. > > You can have a look at it at http://github.com/b45ch1/pyadolc . From a quick look at your code, the C++ wrapper seems really nice. The python wrapper, though, has some cosmetic problems: 1. The entry point function BOOST_PYTHON_MODULE( _adolc ) should be part of the cpp file, not the hpp file. 2. In my experience, it is difficult to maintain docstrings in the C/C++ portion of the code. It would be a lot easier to add the docstrings to the python part of the python wrapper; see the boost.python documentation for examples. With this method, if the docstrings need to change, you will not need to recompile the C/C++ files. 3. I don't understand why the addition of unary operators for badouble messes up any remaining computations. Could you please post a reduced example on cplusplus-sig exhibiting the problem? [snip example usage] > Removing Boost::Python dependency ? > =============================== > > I have used Boost::Python to wrap it, but I am not happy with that > additional dependency! > So I wondered if someone could give me advice how to avoid users > having to download and install boost::python to use pyadolc. > (include boost::python sources ? port to C API?) Predictably, I would recommend against removing the boost.python dependency. While there are parts of boost where development speed outstrips the ability of the authors of dependent packages to keep up, boost.python is not one such part. Second, the transparent usage of boost.python in your code is pretty much immune to changes even in boost.python since you do not use "complex" features like custom converters. What you have is very easily maintainable, and requires no tools other than your C++ compiler, which is a big selling point for some applications of your code. Finally, boost.python is more portable in some cases than even python; I disagree with David C here, but my experience is the only sample point I have from which to compute statistics. Given the straightforward nature of your boost.python usage, it will compile with boost versions from 1.34.1 (perhaps even older versions) to the forthcoming 1.39. Most (if not all) linux distributions supply one of those versions of boost. For linux users, your code is straightforward to compile and use (though I'd have preferred a CMake-based system). 
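To make point 2 above concrete in Python: the docstrings can live in a thin pure-Python layer that re-exports the compiled module, so they can be edited without recompiling anything. The layout sketched below (a pure-Python adolc/__init__.py in front of the boost.python extension _adolc) is an assumption for illustration only, not necessarily how pyadolc is actually organized.

# adolc/__init__.py -- assumed package layout, for illustration only
import _adolc

def jacobian(tape_tag, x):
    """Evaluate the Jacobian of the function recorded on tape `tape_tag` at the point x."""
    return _adolc.jacobian(tape_tag, x)

def gradient(tape_tag, x):
    """Evaluate the gradient of the scalar function recorded on tape `tape_tag` at the point x."""
    return _adolc.gradient(tape_tag, x)

Keeping the wrapper this thin costs one extra function call per invocation, but it lets the documentation (and any input checking) evolve independently of the compiled extension.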
For Windows users, you can provide binaries built against the Boost Consulting boost-1.37 binaries and supply the 3 relevant boost DLLs as part of your download. FWIW, at my workplace, we have some python bindings which are about the same level as yours which we encourage our customers to build against the Boost Computing binaries. Hope this helps. Regards, Ravi From Chris.Barker at noaa.gov Thu Mar 26 12:37:03 2009 From: Chris.Barker at noaa.gov (Christopher Barker) Date: Thu, 26 Mar 2009 09:37:03 -0700 Subject: [SciPy-user] OS-X Universal binary? In-Reply-To: <49CB1BA8.4090301@ar.media.kyoto-u.ac.jp> References: <49C93DC6.7020102@noaa.gov> <49C93E99.8080902@noaa.gov> <5b8d13220903241336w6ec4f5c7n44b865362834fdd1@mail.gmail.com> <49C9CBDB.6030208@ar.media.kyoto-u.ac.jp> <49CA85B0.3070308@noaa.gov> <5b8d13220903251259h7d55a68ay5501131edd670332@mail.gmail.com> <49CB1BA8.4090301@ar.media.kyoto-u.ac.jp> Message-ID: <49CBAF2F.1020902@noaa.gov> David Cournapeau wrote: > Being > able to install things which don't depend on C++/Fortran would mean that > the fortran parts are even less tested. And fundamentally, I don't think > scipy wo fortran would be that useful: no fft, no linalg, no special > functions. Who can use this ? Quite true. Has anyone tried using f2c for the Fortran code? this would give us a C-only option, much easier to build. If native Fortran gives notably better performance, it could be optionally enabled, kind of like ATLAS for numpy. Just a thought. Anyway, it sound like my issue is not a deal-breaker, but rather an oversight, so it can be fixed. If there are folks that can commit to doing the builds, then most folks will be happy. -Chris -- Christopher Barker, Ph.D. Oceanographer Emergency Response Division NOAA/NOS/OR&R (206) 526-6959 voice 7600 Sand Point Way NE (206) 526-6329 fax Seattle, WA 98115 (206) 526-6317 main reception Chris.Barker at noaa.gov From cournape at gmail.com Thu Mar 26 12:43:20 2009 From: cournape at gmail.com (David Cournapeau) Date: Fri, 27 Mar 2009 01:43:20 +0900 Subject: [SciPy-user] Automatic Differentiation with PYADOLC and Removing Boost::Python dependency In-Reply-To: <200903261141.18716.lists_ravi@lavabit.com> References: <200903261141.18716.lists_ravi@lavabit.com> Message-ID: <5b8d13220903260943n60274daey8d2259621db2138f@mail.gmail.com> On Fri, Mar 27, 2009 at 12:41 AM, Ravi wrote: > Hi Sebastian, > ?Thank you for writing a wrapper for ADOL-C. > > On Thursday 26 March 2009 05:50:24 Sebastian Walter wrote: >> I have implemented a wrapper for ?the C++ Automatic Differentiation >> (AD) tool ADOL-C. >> You can use it to differentiate complex algorithms to arbitrary order. >> It works quite well with numpy. >> >> You can have a look at it at http://github.com/b45ch1/pyadolc . > > >From a quick look at your code, the C++ wrapper seems really nice. The python > wrapper, though, has some cosmetic problems: > > 1. The entry point function BOOST_PYTHON_MODULE( _adolc ) should be part of > the cpp file, not the hpp file. > > 2. In my experience, it is difficult to maintain docstrings in the C/C++ > portion of the code. It would be a lot easier to add the docstrings to the > python part of the python wrapper; see the boost.python documentation for > examples. With this method, if the docstrings need to change, you will not > need to recompile the C/C++ files. > > 3. I don't understand why the addition of unary operators for badouble messes > up any remaining computations. Could you please post a reduced example on > cplusplus-sig exhibiting the problem? 
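As a rough sketch of the "distutils and numpy extensions of it" route just mentioned, a numpy.distutils setup script for the boost.python extension could look roughly like the following. The source file py_adolc.cpp and the include directory ./adolc-2.0.0 appear in the build log earlier in the thread; the library names and paths below are placeholders, not the project's actual configuration.

# setup.py -- illustrative sketch only; library names and paths are placeholders
from numpy.distutils.misc_util import Configuration
from numpy.distutils.core import setup

def configuration(parent_package='', top_path=None):
    config = Configuration('adolc', parent_package, top_path)
    config.add_extension('_adolc',
                         sources=['py_adolc.cpp'],
                         include_dirs=['./adolc-2.0.0'],        # ADOL-C headers, as in the build log
                         libraries=['adolc', 'boost_python'],   # placeholder library names
                         library_dirs=['/usr/local/lib'])       # placeholder path
    return config

if __name__ == '__main__':
    setup(configuration=configuration)

With something like this in place, "python setup.py build" and the binary installer targets come along essentially for free.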
> > [snip example usage] > >> Removing Boost::Python dependency ? >> =============================== >> >> I have used Boost::Python to wrap it, but I am not happy with that >> additional dependency! >> So I wondered if someone could give me advice how to avoid users > Given the straightforward nature of your boost.python usage, it will compile > with boost versions from 1.34.1 (perhaps even older versions) to the > forthcoming 1.39. Most (if not all) linux distributions supply one of those > versions of boost. For linux users, your code is straightforward to compile > and use (though I'd have preferred a CMake-based system). For python packages, distutils is often the best bet, for better or worse. Since there is a dependency on a non trivial library which is built through autoconf, windows support may not be straightforward (depending on your definition on windows - if cygwin is ok, then it is not really a problem). The autoconf script looks quite easy, though (not many checks - certainly nothing that cannot be done through distutils and numpy extensions of it) It boils down to how much you are ready to work on it, I think. But from quickly looking at it, you could build the library and your wrapper entirely from distutils - you could then build binary installers, tarballs, etc... automatically from distutils "for free". David From Chris.Barker at noaa.gov Thu Mar 26 13:11:20 2009 From: Chris.Barker at noaa.gov (Chris Barker) Date: Thu, 26 Mar 2009 10:11:20 -0700 Subject: [SciPy-user] OS-X Universal binary? In-Reply-To: <49CBAF2F.1020902@noaa.gov> References: <49C93DC6.7020102@noaa.gov> <49C93E99.8080902@noaa.gov> <5b8d13220903241336w6ec4f5c7n44b865362834fdd1@mail.gmail.com> <49C9CBDB.6030208@ar.media.kyoto-u.ac.jp> <49CA85B0.3070308@noaa.gov> <5b8d13220903251259h7d55a68ay5501131edd670332@mail.gmail.com> <49CB1BA8.4090301@ar.media.kyoto-u.ac.jp> <49CBAF2F.1020902@noaa.gov> Message-ID: <49CBB738.8030001@noaa.gov> OK -- it turns out that the Universal binary has more than just the Universal issue. The extensions depend on libgfortran, too I just put in on an Intel machine, and got: >>> import scipy.interpolate Traceback (most recent call last): File "", line 1, in File "/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/site-packages/scipy/interpolate/__init__.py", line 7, in from interpolate import * File "/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/site-packages/scipy/interpolate/interpolate.py", line 13, in import scipy.special as spec File "/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/site-packages/scipy/special/__init__.py", line 8, in from basic import * File "/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/site-packages/scipy/special/basic.py", line 8, in from _cephes import * ImportError: dlopen(/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/site-packages/scipy/special/_cephes.so, 2): Library not loaded: /usr/local/lib/libgfortran.2.dylib Referenced from: /Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/site-packages/scipy/special/_cephes.so Reason: image not found What a pain! -Chris From cournape at gmail.com Thu Mar 26 13:18:55 2009 From: cournape at gmail.com (David Cournapeau) Date: Fri, 27 Mar 2009 02:18:55 +0900 Subject: [SciPy-user] OS-X Universal binary? 
In-Reply-To: <49CBB738.8030001@noaa.gov> References: <49C93DC6.7020102@noaa.gov> <49C93E99.8080902@noaa.gov> <5b8d13220903241336w6ec4f5c7n44b865362834fdd1@mail.gmail.com> <49C9CBDB.6030208@ar.media.kyoto-u.ac.jp> <49CA85B0.3070308@noaa.gov> <5b8d13220903251259h7d55a68ay5501131edd670332@mail.gmail.com> <49CB1BA8.4090301@ar.media.kyoto-u.ac.jp> <49CBAF2F.1020902@noaa.gov> <49CBB738.8030001@noaa.gov> Message-ID: <5b8d13220903261018pdfe3275wc002e81cf73ebefc@mail.gmail.com> On Fri, Mar 27, 2009 at 2:11 AM, Chris Barker wrote: > OK -- it turns out that the Universal binary has more than just the > Universal issue. The extensions depend on libgfortran, too Hm, right - that by itself is expected. I guess we never thought about it because every developer has gfortran. But surely, it cannot work as we do currently. That's actually a non trivial problem, unless libgfortran can easily be statically linked. Hm... cheers, David From cournape at gmail.com Thu Mar 26 13:21:13 2009 From: cournape at gmail.com (David Cournapeau) Date: Fri, 27 Mar 2009 02:21:13 +0900 Subject: [SciPy-user] OS-X Universal binary? In-Reply-To: <49CBB738.8030001@noaa.gov> References: <49C93DC6.7020102@noaa.gov> <49C93E99.8080902@noaa.gov> <5b8d13220903241336w6ec4f5c7n44b865362834fdd1@mail.gmail.com> <49C9CBDB.6030208@ar.media.kyoto-u.ac.jp> <49CA85B0.3070308@noaa.gov> <5b8d13220903251259h7d55a68ay5501131edd670332@mail.gmail.com> <49CB1BA8.4090301@ar.media.kyoto-u.ac.jp> <49CBAF2F.1020902@noaa.gov> <49CBB738.8030001@noaa.gov> Message-ID: <5b8d13220903261021re693aaeof13c633d81c18c2@mail.gmail.com> On Fri, Mar 27, 2009 at 2:11 AM, Chris Barker wrote: > OK -- it turns out that the Universal binary has more than just the > Universal issue. The extensions depend on libgfortran, too > If you don't mind, can you avoid installing gfortran at all on your machine until we find a solution to the problem (if you need scipy on your machine, installing gfortran would solve the problem in your case, but I would like to fix this), cheers, David From rob.clewley at gmail.com Thu Mar 26 13:34:16 2009 From: rob.clewley at gmail.com (Rob Clewley) Date: Thu, 26 Mar 2009 13:34:16 -0400 Subject: [SciPy-user] Automatic Differentiation with PYADOLC and Removing Boost::Python dependency In-Reply-To: <5b8d13220903260943n60274daey8d2259621db2138f@mail.gmail.com> References: <200903261141.18716.lists_ravi@lavabit.com> <5b8d13220903260943n60274daey8d2259621db2138f@mail.gmail.com> Message-ID: > On Fri, Mar 27, 2009 at 12:41 AM, Ravi wrote: >> Hi Sebastian, >> ?Thank you for writing a wrapper for ADOL-C. Absolutely -- it's a very welcome and valuable tool that I have been waiting for in python for some time. On Thu, Mar 26, 2009 at 12:43 PM, David Cournapeau wrote: > It boils down to how much you are ready to work on it, I think. But > from quickly looking at it, you could build the library and your > wrapper entirely from distutils - you could then build binary > installers, tarballs, etc... automatically from distutils "for free". > Ravi's points are well taken, and I do not pretend to know much about boost.python myself, but I am tempted to agree with David that it would still be more helpful to have a source-based version that does not depend on boost at all. Maybe Ravi could educate me as to what it would mean for boost.python to be more portable than python in this example? 
As an example, I would love to use this code in PyDSTool to find derivatives more accurately to compute periodic orbit solutions of differential equations, but it does not appeal to me to require boost as a dependency. My impression from occasional reading of the boost website is that users would need to install an awful lot of additional things (often from source on non-windows machines) to get this interface to work. Alternatively, SWIG is a lightweight additional dependency that easily wraps simple numpy-dependent C/C++ code, and this is what I use instead. I'll be delighted to learn otherwise if I am mistaken in my impression. As for compilation, solving it with distutils can be a little messy but is basically platform independent (modulo cygwin, as mentioned). This is actually the solution we already use for legacy ODE solvers (via SWIG) and it works well for us. -Rob From Chris.Barker at noaa.gov Thu Mar 26 13:34:24 2009 From: Chris.Barker at noaa.gov (Christopher Barker) Date: Thu, 26 Mar 2009 10:34:24 -0700 Subject: [SciPy-user] OS-X Universal binary? In-Reply-To: <5b8d13220903261018pdfe3275wc002e81cf73ebefc@mail.gmail.com> References: <49C93DC6.7020102@noaa.gov> <49C93E99.8080902@noaa.gov> <5b8d13220903241336w6ec4f5c7n44b865362834fdd1@mail.gmail.com> <49C9CBDB.6030208@ar.media.kyoto-u.ac.jp> <49CA85B0.3070308@noaa.gov> <5b8d13220903251259h7d55a68ay5501131edd670332@mail.gmail.com> <49CB1BA8.4090301@ar.media.kyoto-u.ac.jp> <49CBAF2F.1020902@noaa.gov> <49CBB738.8030001@noaa.gov> <5b8d13220903261018pdfe3275wc002e81cf73ebefc@mail.gmail.com> Message-ID: <49CBBCA0.5020806@noaa.gov> David Cournapeau wrote: > On Fri, Mar 27, 2009 at 2:11 AM, Chris Barker wrote: >> OK -- it turns out that the Universal binary has more than just the >> Universal issue. The extensions depend on libgfortran, too > > Hm, right - that by itself is expected. I guess we never thought about > it because every developer has gfortran. But surely, it cannot work as > we do currently. > That's actually a non trivial problem, unless libgfortran can easily > be statically linked. Hm... I don't know how you are building that mpkg, but py2app has code that tracks dependencies on non-system libs, includes them, and re-writes the binaries to point to the new ones. I think macholib does the later magic, but know nothing of the details. As it happens, I'm off to SciPy in a couple hours, and am going to join the MacPython sprint next Monday -- maybe I can find out more about how to do that there. > If you don't mind, can you avoid installing gfortran at all on your > machine until we find a solution to the problem sure. > (if you need scipy on > your machine, installing gfortran would solve the problem in your > case, but I would like to fix this), I don't need any of the Fortran stuff right at this moment, so I'll just comment out those imports for now. thanks for working on this -- having a good binary for OS-X would be really nice. -Chris -- Christopher Barker, Ph.D. 
Oceanographer Emergency Response Division NOAA/NOS/OR&R (206) 526-6959 voice 7600 Sand Point Way NE (206) 526-6329 fax Seattle, WA 98115 (206) 526-6317 main reception Chris.Barker at noaa.gov From sebastian.walter at gmail.com Thu Mar 26 13:36:26 2009 From: sebastian.walter at gmail.com (Sebastian Walter) Date: Thu, 26 Mar 2009 18:36:26 +0100 Subject: [SciPy-user] Automatic Differentiation with PYADOLC and Removing Boost::Python dependency In-Reply-To: <200903261141.18716.lists_ravi@lavabit.com> References: <200903261141.18716.lists_ravi@lavabit.com> Message-ID: On Thu, Mar 26, 2009 at 4:41 PM, Ravi wrote: > Hi Sebastian, > ?Thank you for writing a wrapper for ADOL-C. > > On Thursday 26 March 2009 05:50:24 Sebastian Walter wrote: >> I have implemented a wrapper for ?the C++ Automatic Differentiation >> (AD) tool ADOL-C. >> You can use it to differentiate complex algorithms to arbitrary order. >> It works quite well with numpy. >> >> You can have a look at it at http://github.com/b45ch1/pyadolc . > > >From a quick look at your code, the C++ wrapper seems really nice. The python > wrapper, though, has some cosmetic problems: > > 1. The entry point function BOOST_PYTHON_MODULE( _adolc ) should be part of > the cpp file, not the hpp file. ok, I could change that. It works though. > > 2. In my experience, it is difficult to maintain docstrings in the C/C++ > portion of the code. It would be a lot easier to add the docstrings to the > python part of the python wrapper; see the boost.python documentation for > examples. With this method, if the docstrings need to change, you will not > need to recompile the C/C++ files. I thought about that too. If I did the project again, I would do it that way. It's the question what costs more time: changing the code or keeping the code. Up to now changes to the docstrings were very rare, so I'll keep it that way for now. > > 3. I don't understand why the addition of unary operators for badouble messes > up any remaining computations. Could you please post a reduced example on > cplusplus-sig exhibiting the problem? Ermm, well, to be honest, that's something I do not fully understand myself! The problem is that ADOL-C is internally quite a hack and I have only limited understanding how boost::python works internally. When the constructor of a badouble is called, the badouble object is given a unique id. Somehow, pyadolc used the same ID twice, if i remember correctly. It's quite a while when I had that problem. > > [snip example usage] > >> Removing Boost::Python dependency ? >> =============================== >> >> I have used Boost::Python to wrap it, but I am not happy with that >> additional dependency! >> So I wondered if someone could give me advice how to avoid users >> having to download and install boost::python to use pyadolc. >> (include boost::python sources ? port to C API?) > > Predictably, I would recommend against removing the boost.python dependency. > While there are parts of boost where development speed outstrips the ability > of the authors of dependent packages to keep up, boost.python is not one such > part. Second, the transparent usage of boost.python in your code is pretty > much immune to changes even in boost.python since you do not use "complex" > features like custom converters. What you have is very easily maintainable, > and requires no tools other than your C++ compiler, which is a big selling > point for some applications of your code. 
Finally, boost.python is more > portable in some cases than even python; I disagree with David C here, but my > experience is the only sample point I have from which to compute statistics. > > Given the straightforward nature of your boost.python usage, it will compile > with boost versions from 1.34.1 (perhaps even older versions) to the > forthcoming 1.39. Most (if not all) linux distributions supply one of those > versions of boost. For linux users, your code is straightforward to compile > and use (though I'd have preferred a CMake-based system). For Windows users, > you can provide binaries built against the Boost Consulting boost-1.37 > binaries and supply the 3 relevant boost DLLs as part of your download. FWIW, > at my workplace, we have some python bindings which are about the same level > as yours which we encourage our customers to build against the Boost Computing > binaries. That sounds reasonable. I havent used Windows in a while and I thought that it might be hard to distribute it with the boost dependency. > > Hope this helps. > > Regards, > Ravi > > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user > From cournape at gmail.com Thu Mar 26 13:40:09 2009 From: cournape at gmail.com (David Cournapeau) Date: Fri, 27 Mar 2009 02:40:09 +0900 Subject: [SciPy-user] OS-X Universal binary? In-Reply-To: <49CBBCA0.5020806@noaa.gov> References: <49C93DC6.7020102@noaa.gov> <49C9CBDB.6030208@ar.media.kyoto-u.ac.jp> <49CA85B0.3070308@noaa.gov> <5b8d13220903251259h7d55a68ay5501131edd670332@mail.gmail.com> <49CB1BA8.4090301@ar.media.kyoto-u.ac.jp> <49CBAF2F.1020902@noaa.gov> <49CBB738.8030001@noaa.gov> <5b8d13220903261018pdfe3275wc002e81cf73ebefc@mail.gmail.com> <49CBBCA0.5020806@noaa.gov> Message-ID: <5b8d13220903261040j2a2de8baodc9ff84647f5eb54@mail.gmail.com> On Fri, Mar 27, 2009 at 2:34 AM, Christopher Barker wrote: > I don't know how you are building that mpkg, but py2app has code that > tracks dependencies on non-system libs, includes them, and re-writes the > binaries to point to the new ones. I think macholib does the later > magic, but know nothing of the details. The problem is that libgfortran is not a usual library - it is the runtime library for fortran support, and as such, you can't treat it as "lightly" as the usual library. You have to use special compiler flags to do it. The good new is that recent gfortran has this option - the bad news is that the option does not work on darwin, unless you use a very recent (read not released yet) gfortran... http://www.nabble.com/compiling-libgfortran-statically-td22423425.html David From robert.kern at gmail.com Thu Mar 26 14:44:41 2009 From: robert.kern at gmail.com (Robert Kern) Date: Thu, 26 Mar 2009 13:44:41 -0500 Subject: [SciPy-user] Automatic Differentiation with PYADOLC and Removing Boost::Python dependency In-Reply-To: References: <200903261141.18716.lists_ravi@lavabit.com> <5b8d13220903260943n60274daey8d2259621db2138f@mail.gmail.com> Message-ID: <3d375d730903261144v1caacd2fo92e0b799ebb1f3a7@mail.gmail.com> On Thu, Mar 26, 2009 at 12:34, Rob Clewley wrote: > As an example, I would love to use this code in PyDSTool to find > derivatives more accurately to compute periodic orbit solutions of > differential equations, but it does not appeal to me to require boost > as a dependency. 
You might want to take a look at Theano as an alternative to automatic differentiation: http://pylearn.org/theano -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From ndbecker2 at gmail.com Thu Mar 26 14:46:19 2009 From: ndbecker2 at gmail.com (Neal Becker) Date: Thu, 26 Mar 2009 14:46:19 -0400 Subject: [SciPy-user] Automatic Differentiation with PYADOLC and Removing Boost::Python dependency References: <49CB4E85.4030508@ar.media.kyoto-u.ac.jp> <49CB897A.4050200@ar.media.kyoto-u.ac.jp> Message-ID: David Cournapeau wrote: > Neal Becker wrote: >> >> IMO, that's harsh. I use boost::python heavily. I don't have any such >> problem. > > But that's not the same thing. In your case, you use boost internally > for your own usage, right ? > > What is problematic is distributing boost-based software from sources. > In this later case, it is well known that boost is a significant problem. > My main point is that I think boost::python API is quite stable at this point. From nwagner at iam.uni-stuttgart.de Thu Mar 26 15:44:42 2009 From: nwagner at iam.uni-stuttgart.de (Nils Wagner) Date: Thu, 26 Mar 2009 20:44:42 +0100 Subject: [SciPy-user] Automatic Differentiation with PYADOLC and Removing Boost::Python dependency In-Reply-To: References: Message-ID: On Thu, 26 Mar 2009 13:22:03 +0100 Sebastian Walter wrote: > hey, thanks for the bug report! > > Yes, that's something I have expected since I didn't >have the chance > to compile it on a 64bit system. > I was a little lax with the integers! I guess I'll have >to use > npy_intp instead of int in some places. > Shouldn't take too long to fix, but I have to set up a >64bit Linux first. > No problem. I can test it for you. Please let me know if I can help you. Cheers, Nils From lists_ravi at lavabit.com Thu Mar 26 16:41:18 2009 From: lists_ravi at lavabit.com (Ravi) Date: Thu, 26 Mar 2009 16:41:18 -0400 Subject: [SciPy-user] Automatic Differentiation with PYADOLC and Removing Boost::Python dependency In-Reply-To: <5b8d13220903260943n60274daey8d2259621db2138f@mail.gmail.com> References: <200903261141.18716.lists_ravi@lavabit.com> <5b8d13220903260943n60274daey8d2259621db2138f@mail.gmail.com> Message-ID: <200903261641.20488.lists_ravi@lavabit.com> On Thursday 26 March 2009 12:43:20 David Cournapeau wrote: > On Fri, Mar 27, 2009 at 12:41 AM, Ravi wrote: > > For linux users, your code is straightforward to > > compile and use (though I'd have preferred a CMake-based system). > > For python packages, distutils is often the best bet, for better or > worse. Agreed. I merely stated my preference for CMake over the OP's current build system. If distutils were not so fragile and did not require monkeypatching for almost all of my nontrivial work, I'd prefer it to CMake for all python projects as well. CMake has emerged to me as the least crappy tool of the build systems I have tried (auto*, scons, waf, cmake and ant). If scons could be enhanced to provide the feature set and the reliability of CMake, I'd choose it in a heartbeat, given the exceptionally painful scripting language used in CMake. Regards, Ravi From zachary.pincus at yale.edu Thu Mar 26 16:46:13 2009 From: zachary.pincus at yale.edu (Zachary Pincus) Date: Thu, 26 Mar 2009 16:46:13 -0400 Subject: [SciPy-user] OS-X Universal binary? 
In-Reply-To: <5b8d13220903261018pdfe3275wc002e81cf73ebefc@mail.gmail.com> References: <49C93DC6.7020102@noaa.gov> <49C93E99.8080902@noaa.gov> <5b8d13220903241336w6ec4f5c7n44b865362834fdd1@mail.gmail.com> <49C9CBDB.6030208@ar.media.kyoto-u.ac.jp> <49CA85B0.3070308@noaa.gov> <5b8d13220903251259h7d55a68ay5501131edd670332@mail.gmail.com> <49CB1BA8.4090301@ar.media.kyoto-u.ac.jp> <49CBAF2F.1020902@noaa.gov> <49CBB738.8030001@noaa.gov> <5b8d13220903261018pdfe3275wc002e81cf73ebefc@mail.gmail.com> Message-ID: <930600AE-A791-486E-B177-C427D5E520D5@yale.edu> > On Fri, Mar 27, 2009 at 2:11 AM, Chris Barker > wrote: >> OK -- it turns out that the Universal binary has more than just the >> Universal issue. The extensions depend on libgfortran, too > > Hm, right - that by itself is expected. I guess we never thought about > it because every developer has gfortran. But surely, it cannot work as > we do currently. > > That's actually a non trivial problem, unless libgfortran can easily > be statically linked. Hm... In the past, (I think) I've forced static linkage by hiding libgfortran.dylib and *just* having libgfortran.a where the linker can find it. (On OS X there's no other reliable way to force static linkage of anything, so much does the linker prefer dylibs!) This worked for me when I needed to distribute bits of scipy that I'd parted out for other purposes, and built as 2-way (32bit i386/PPC) fat binaries. But that was with a previous version of gfortran from the R tools site. Zach From robert.kern at gmail.com Thu Mar 26 16:54:08 2009 From: robert.kern at gmail.com (Robert Kern) Date: Thu, 26 Mar 2009 15:54:08 -0500 Subject: [SciPy-user] OS-X Universal binary? In-Reply-To: <930600AE-A791-486E-B177-C427D5E520D5@yale.edu> References: <49C93DC6.7020102@noaa.gov> <49C9CBDB.6030208@ar.media.kyoto-u.ac.jp> <49CA85B0.3070308@noaa.gov> <5b8d13220903251259h7d55a68ay5501131edd670332@mail.gmail.com> <49CB1BA8.4090301@ar.media.kyoto-u.ac.jp> <49CBAF2F.1020902@noaa.gov> <49CBB738.8030001@noaa.gov> <5b8d13220903261018pdfe3275wc002e81cf73ebefc@mail.gmail.com> <930600AE-A791-486E-B177-C427D5E520D5@yale.edu> Message-ID: <3d375d730903261354h43f67889xb08b484bfe368465@mail.gmail.com> On Thu, Mar 26, 2009 at 15:46, Zachary Pincus wrote: >> On Fri, Mar 27, 2009 at 2:11 AM, Chris Barker >> wrote: >>> OK -- it turns out that the Universal binary has more than just the >>> Universal issue. The extensions depend on libgfortran, too >> >> Hm, right - that by itself is expected. I guess we never thought about >> it because every developer has gfortran. But surely, it cannot work as >> we do currently. >> >> That's actually a non trivial problem, unless libgfortran can easily >> be statically linked. Hm... > > In the past, (I think) I've forced static linkage by hiding > libgfortran.dylib and *just* having libgfortran.a where the linker can > find it. (On OS X there's no other reliable way to force static > linkage of anything, so much does the linker prefer dylibs!) This > worked for me when I needed to distribute bits of scipy that I'd > parted out for other purposes, and built as 2-way (32bit i386/PPC) fat > binaries. But that was ?with a previous version of gfortran from the R > tools site. Actually, there is. Use -Wl,-search_paths_first to tell the linker to look at your -L flags before the standard locations. 
From a snippet of email I (apparently) have to trot out every few months or so: """ And, if you copy the libgfortran.a file to somewhere else, say ~/staticlibs/, you can force the linker to use it instead of the .dylib such that your users don't need to install gfortran. $ export LDFLAGS="-undefined dynamic_lookup -bundle -arch i386 -arch ppc -Wl,-search_paths_first" $ python setup.py config_fc --fcompiler=gnu95 --arch="-arch i386 -arch ppc" build_ext -L ~/staticlibs/ build ... $ file build/lib.macosx-10.3-fat-2.5/scipy/odr/__odrpack.so build/lib.macosx-10.3-fat-2.5/scipy/odr/__odrpack.so: Mach-O universal binary with 2 architectures build/lib.macosx-10.3-fat-2.5/scipy/odr/__odrpack.so (for architecture i386): Mach-O bundle i386 build/lib.macosx-10.3-fat-2.5/scipy/odr/__odrpack.so (for architecture ppc): Mach-O bundle ppc $ otool -L build/lib.macosx-10.3-fat-2.5/scipy/odr/__odrpack.so build/lib.macosx-10.3-fat-2.5/scipy/odr/__odrpack.so: /System/Library/Frameworks/Accelerate.framework/Versions/A/Accelerate (compatibility version 1.0.0, current version 4.0.0) /usr/local/lib/libgcc_s.1.dylib (compatibility version 1.0.0, current version 1.0.0) /usr/lib/libSystem.B.dylib (compatibility version 1.0.0, current version 88.3.7) The pointer to /usr/local/lib/libgcc_s.1.dylib is innocuous. That's just the first place it will look for that library at runtime. There's one in /usr/lib that appears to be picked up and used just fine. """ -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From lists_ravi at lavabit.com Thu Mar 26 17:00:15 2009 From: lists_ravi at lavabit.com (Ravi) Date: Thu, 26 Mar 2009 17:00:15 -0400 Subject: [SciPy-user] Automatic Differentiation with PYADOLC and Removing Boost::Python dependency In-Reply-To: References: <5b8d13220903260943n60274daey8d2259621db2138f@mail.gmail.com> Message-ID: <200903261700.16065.lists_ravi@lavabit.com> On Thursday 26 March 2009 13:34:16 Rob Clewley wrote: > Ravi's points are well taken, and I do not pretend to know much about > boost.python myself, but I am tempted to agree with David that it > would still be more helpful to have a source-based version that does > not depend on boost at all. Maybe Ravi could educate me as to what it > would mean for boost.python to be more portable than python in this > example? On "exotic" systems (such as one with 24 bit ints and non-IEEE 754 floating point systems), boost.python (among other well-written packages) compiles where even python does not[1]. Python would be a wonderful scripting language for some embedded processor work, but I have been unable to compile python for such platforms (perhaps because I give up if it does not compile after ~20 hours of work). Perhaps python 2.4+ may be more portable than 2.3-, but I haven't tried compiling python on exotic systems in a while. [1] Interestingly enough, python headers are much cleaner than most people realize. Of course, when python does not compile, a boost.python static library, when one can be created, is useless. > As an example, I would love to use this code in PyDSTool to find > derivatives more accurately to compute periodic orbit solutions of > differential equations, but it does not appeal to me to require boost > as a dependency. 
My impression from occasional reading of the boost > website is that users would need to install an awful lot of additional > things (often from source on non-windows machines) to get this > interface to work. Not really. In fact, no external dependencies are required other than your reasonably standards-conformant C++ compiler. (You could optionally link against IBM ICU for the regexp library, and optionally build an MPI interface for some libraries if you have OpenMPI installed.) If you have a reasonably standard python installation, boost.python will be built. The steps are: - download boost-xxx.tar.gz - unpack the tarfile - cd to the bjam directory, build it - run bjam on the boost sources I currently use it on various linux flavors & Windows; in the past, I have used it on Solaris and a subset on HPUX. If you have a POSIX shell, there is even a script that does it all for you. > Alternatively, SWIG is a lightweight additional > dependency that easily wraps simple numpy-dependent C/C++ code, and > this is what I use instead. I'll be delighted to learn otherwise if I > am mistaken in my impression. I don't consider SWIG to be a lightweight dependency, but that's merely a matter of opinion. More importantly, SWIG does not provide easy methods to map the C++ object model to the python object model since it uses (essentially) opadue pointers. Overriding C++ virtual functions in python is (or at least used to be) a pain with SWIG; SWIG also has problems with interfaces that depend on Koenig lookup. SWIG works very well for C code, though. Regards, Ravi From cournape at gmail.com Thu Mar 26 17:03:10 2009 From: cournape at gmail.com (David Cournapeau) Date: Fri, 27 Mar 2009 06:03:10 +0900 Subject: [SciPy-user] OS-X Universal binary? In-Reply-To: <3d375d730903261354h43f67889xb08b484bfe368465@mail.gmail.com> References: <49C93DC6.7020102@noaa.gov> <49CA85B0.3070308@noaa.gov> <5b8d13220903251259h7d55a68ay5501131edd670332@mail.gmail.com> <49CB1BA8.4090301@ar.media.kyoto-u.ac.jp> <49CBAF2F.1020902@noaa.gov> <49CBB738.8030001@noaa.gov> <5b8d13220903261018pdfe3275wc002e81cf73ebefc@mail.gmail.com> <930600AE-A791-486E-B177-C427D5E520D5@yale.edu> <3d375d730903261354h43f67889xb08b484bfe368465@mail.gmail.com> Message-ID: <5b8d13220903261403xfa273ebie56bcfbf90690f78@mail.gmail.com> On Fri, Mar 27, 2009 at 5:54 AM, Robert Kern wrote: > On Thu, Mar 26, 2009 at 15:46, Zachary Pincus wrote: >>> On Fri, Mar 27, 2009 at 2:11 AM, Chris Barker >>> wrote: >>>> OK -- it turns out that the Universal binary has more than just the >>>> Universal issue. The extensions depend on libgfortran, too >>> >>> Hm, right - that by itself is expected. I guess we never thought about >>> it because every developer has gfortran. But surely, it cannot work as >>> we do currently. >>> >>> That's actually a non trivial problem, unless libgfortran can easily >>> be statically linked. Hm... >> >> In the past, (I think) I've forced static linkage by hiding >> libgfortran.dylib and *just* having libgfortran.a where the linker can >> find it. (On OS X there's no other reliable way to force static >> linkage of anything, so much does the linker prefer dylibs!) This >> worked for me when I needed to distribute bits of scipy that I'd >> parted out for other purposes, and built as 2-way (32bit i386/PPC) fat >> binaries. But that was ?with a previous version of gfortran from the R >> tools site. > > Actually, there is. Use -Wl,-search_paths_first to tell the linker to > look at your -L flags before the standard locations. 
From a snippet of > email I (apparently) have to trot out every few months or so When I tried something similar, I got the same errors as in the following email: http://gcc.gnu.org/ml/fortran/2008-12/msg00237.html There is not much documentation on this, but I wonder why gfortran would grow a --static-libgfortran if the result was exactly the same as linking statically libgfortran.a ? You never encounter problems when statically linking the fortran runtime ? cheers, David From robert.kern at gmail.com Thu Mar 26 17:06:10 2009 From: robert.kern at gmail.com (Robert Kern) Date: Thu, 26 Mar 2009 16:06:10 -0500 Subject: [SciPy-user] OS-X Universal binary? In-Reply-To: <5b8d13220903261403xfa273ebie56bcfbf90690f78@mail.gmail.com> References: <49C93DC6.7020102@noaa.gov> <5b8d13220903251259h7d55a68ay5501131edd670332@mail.gmail.com> <49CB1BA8.4090301@ar.media.kyoto-u.ac.jp> <49CBAF2F.1020902@noaa.gov> <49CBB738.8030001@noaa.gov> <5b8d13220903261018pdfe3275wc002e81cf73ebefc@mail.gmail.com> <930600AE-A791-486E-B177-C427D5E520D5@yale.edu> <3d375d730903261354h43f67889xb08b484bfe368465@mail.gmail.com> <5b8d13220903261403xfa273ebie56bcfbf90690f78@mail.gmail.com> Message-ID: <3d375d730903261406m776459a3n3352f55f6668b88@mail.gmail.com> On Thu, Mar 26, 2009 at 16:03, David Cournapeau wrote: > On Fri, Mar 27, 2009 at 5:54 AM, Robert Kern wrote: >> On Thu, Mar 26, 2009 at 15:46, Zachary Pincus wrote: >>>> On Fri, Mar 27, 2009 at 2:11 AM, Chris Barker >>>> wrote: >>>>> OK -- it turns out that the Universal binary has more than just the >>>>> Universal issue. The extensions depend on libgfortran, too >>>> >>>> Hm, right - that by itself is expected. I guess we never thought about >>>> it because every developer has gfortran. But surely, it cannot work as >>>> we do currently. >>>> >>>> That's actually a non trivial problem, unless libgfortran can easily >>>> be statically linked. Hm... >>> >>> In the past, (I think) I've forced static linkage by hiding >>> libgfortran.dylib and *just* having libgfortran.a where the linker can >>> find it. (On OS X there's no other reliable way to force static >>> linkage of anything, so much does the linker prefer dylibs!) This >>> worked for me when I needed to distribute bits of scipy that I'd >>> parted out for other purposes, and built as 2-way (32bit i386/PPC) fat >>> binaries. But that was ?with a previous version of gfortran from the R >>> tools site. >> >> Actually, there is. Use -Wl,-search_paths_first to tell the linker to >> look at your -L flags before the standard locations. From a snippet of >> email I (apparently) have to trot out every few months or so > > When I tried something similar, I got the same errors as in the following email: > > http://gcc.gnu.org/ml/fortran/2008-12/msg00237.html > > There is not much documentation on this, but I wonder why gfortran > would grow a --static-libgfortran if the result was exactly the same > as linking statically libgfortran.a ? You never encounter problems > when statically linking the fortran runtime ? Not in any of the times I have tried it. However, there have been a few point releases since then. Maybe they changed something that broke it. -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." 
-- Umberto Eco From P.Schellart at student.science.ru.nl Thu Mar 26 17:40:16 2009 From: P.Schellart at student.science.ru.nl (Pim Schellart) Date: Thu, 26 Mar 2009 22:40:16 +0100 Subject: [SciPy-user] optimize.leastsq confusing when it comes to errors Message-ID: Dear Scipy users, This is my first post to the list so please let me know if I am posting the question in the wrong place. I need to fit several functions to a dataset and would love to use scipy for this, this would allow me to use Python for all my scientific work as this is the only task for which I still use gnuplot. But I find the documentation on optimize.leastsq very confusing. I can get the fit paramaters (using the example as a guide) but I also need the errors on the resulting parameters and basically all the information given by the default gnuplot fit command output, which is the following. ################################################################### final sum of squares of residuals : 1588.68 rel. change during last iteration : -2.92059e-06 degrees of freedom (FIT_NDF) : 27 rms of residuals (FIT_STDFIT) = sqrt(WSSR/ndf) : 7.67072 variance of residuals (reduced chisquare) = WSSR/ndf : 58.84 Final set of parameters Asymptotic Standard Error ======================= ========================== a = 1.77246 +/- 1.984 (111.9%) b = 0.713972 +/- 0.06909 (9.677%) c = -0.207495 +/- 1.413 (681.2%) correlation matrix of the fit parameters: a b c a 1.000 b 0.004 1.000 c -0.103 -0.088 1.000 ################################################################### Furthermore my datapoints have associated errors and these need to be taken into account in the fit. How can I make sure this is done? Is optimize.leastsq build to do this or should I use a different function. It would be even better if the example in the tutorial is extended to include this information as I can imagine most real world applications need errors on both the input and the output. Thank you very much in advance. Kind regards, Pim Schellart P.S. I am aware of the gnuplot python module but I prefer scipy :) -------------- next part -------------- A non-text attachment was scrubbed... Name: smime.p7s Type: application/pkcs7-signature Size: 1590 bytes Desc: not available URL: From josef.pktd at gmail.com Thu Mar 26 18:36:23 2009 From: josef.pktd at gmail.com (josef.pktd at gmail.com) Date: Thu, 26 Mar 2009 18:36:23 -0400 Subject: [SciPy-user] optimize.leastsq confusing when it comes to errors In-Reply-To: References: Message-ID: <1cd32cbb0903261536u678dd784sc54f3251a029aa3a@mail.gmail.com> On Thu, Mar 26, 2009 at 5:40 PM, Pim Schellart wrote: > Dear Scipy users, > > This is my first post to the list so please let me know if I am posting the > question in the wrong place. > I need to fit several functions to a dataset and would love to use scipy for > this, this would allow me to use Python for all my scientific work as this > is the only task for which I still use gnuplot. > But I find the documentation on optimize.leastsq very confusing. > I can get the fit paramaters (using the example as a guide) but I also need > the errors on the resulting parameters and basically all the information > given by the default gnuplot fit command output, which is the following. > > ################################################################### > final sum of squares of residuals : 1588.68 > rel. change during last iteration : -2.92059e-06 > > degrees of freedom ? ?(FIT_NDF) ? ? ? ? ? ? ? ? ? ? ? ?: 27 > rms of residuals ? ? ?(FIT_STDFIT) = sqrt(WSSR/ndf) ? 
?: 7.67072 > variance of residuals (reduced chisquare) = WSSR/ndf ? : 58.84 > > Final set of parameters ? ? ? ? ? ?Asymptotic Standard Error > ======================= ? ? ? ? ? ?========================== > > a ? ? ? ? ? ? ? = 1.77246 ? ? ? ? ?+/- 1.984 ? ? ? ?(111.9%) > b ? ? ? ? ? ? ? = 0.713972 ? ? ? ? +/- 0.06909 ? ? ?(9.677%) > c ? ? ? ? ? ? ? = -0.207495 ? ? ? ?+/- 1.413 ? ? ? ?(681.2%) > > > correlation matrix of the fit parameters: > > ? ? ? ? ? ? ? a ? ? ?b ? ? ?c > a ? ? ? ? ? ? ? 1.000 > b ? ? ? ? ? ? ? 0.004 ?1.000 > c ? ? ? ? ? ? ?-0.103 -0.088 ?1.000 > ################################################################### > > Furthermore my datapoints have associated errors and these need to be taken > into account in the fit. > How can I make sure this is done? > Is optimize.leastsq build to do this or should I use a different function. > It would be even better if the example in the tutorial is extended to > include this information as I can imagine most real world applications need > errors on both the input and the output. > Thank you very much in advance. > > Kind regards, > > Pim Schellart > > P.S. I am aware of the gnuplot python module but I prefer scipy :) look at curve_fit http://projects.scipy.org/scipy/browser/trunk/scipy/optimize/minpack.py#L331 it's a wrapper around optimize.leastsq that provides the correct variance-covariance matrix of the parameter estimates and allows (inverse) weights for the regression. the weights are the standard deviation for each observation. the rest you should be able to calculate in a few lines A search of the mailing lists will provide some more information/discussion on implementation details. Josef From gruben at bigpond.net.au Thu Mar 26 17:42:45 2009 From: gruben at bigpond.net.au (Gary Ruben) Date: Fri, 27 Mar 2009 08:42:45 +1100 Subject: [SciPy-user] Loopless square 2d arrays with radially symmetric functions In-Reply-To: References: <749289.45221.qm@web33001.mail.mud.yahoo.com> Message-ID: <49CBF6D5.9070200@bigpond.net.au> or this avoids X and Y: R2 = numpy.hypot(*numpy.ogrid[-128:128,-128:128])**2 p = numpy.exp(-0.01*R2) or, naturally: p = numpy.exp(-0.01*numpy.hypot(*numpy.ogrid[-128:128,-128:128])**2) Gary R. Ryan May wrote: > > On Wed, Mar 25, 2009 at 3:56 PM, David Baddeley > > wrote: > > > There's always: > > X, Y = numpy.mgrid[-128:128, -128:128] > R2 = X**2 + Y**2 > p = numpy.exp(-0.01*R2) > > which uses a bit more memory, but avoids loops. > > David > > > If you use ogrid instead of mgrid, you'll avoid the increase in memory, > since the arrays are created in a form ready for broadcasting. > > Ryan From Roger.Fearick at uct.ac.za Fri Mar 27 03:16:21 2009 From: Roger.Fearick at uct.ac.za (Roger Fearick) Date: Fri, 27 Mar 2009 09:16:21 +0200 Subject: [SciPy-user] optimize.leastsq confusing when it comes to errors In-Reply-To: References: Message-ID: <49CC9965.8130.009D.0@uct.ac.za> >>> Pim Schellart 03/26/09 11:40 PM >>> > Dear Scipy users, > This is my first post to the list so please let me know if I am > posting the question in the wrong place. > I need to fit several functions to a dataset and would love to use > scipy for this, this would allow me to use Python for all my > scientific work as this is the only task for which I still use gnuplot. > But I find the documentation on optimize.leastsq very confusing. 
> I can get the fit paramaters (using the example as a guide) but I also > need the errors on the resulting parameters and basically all the > information given by the default gnuplot fit command output, which is > the following. I also like the information provided by gnuplot. Here is a fragment of code that computes this from the output of leastsq. Note that the fit parameters use the Parameter class from www.scipy.org/Cookbook/FittingData; #--------------------------------------------------- # do fit using Levenberg-Marquardt p2,cov,info,mesg,success=fit(resonance, p, freq, vr/v0, uvr) if success==1: print "Converged" else: print "Not converged" print mesg # calculate final chi square chisq=sum(info["fvec"]*info["fvec"]) dof=len(freq)-len(p) # chisq, sqrt(chisq/dof) agrees with gnuplot print "Converged with chi squared ",chisq print "degrees of freedom, dof ", dof print "RMS of residuals (i.e. sqrt(chisq/dof)) ", sqrt(chisq/dof) print "Reduced chisq (i.e. variance of residuals) ", chisq/dof print # uncertainties are calculated as per gnuplot, "fixing" the result # for non unit values of the reduced chisq. # values at min match gnuplot print "Fitted parameters at minimum, with 68% C.I.:" for i,pmin in enumerate(p2): print "%2i %-10s %12f +/- %10f"%(i,p[i].name,pmin,sqrt(cov[i,i])*sqrt(chisq/dof)) print print "Correlation matrix" # correlation matrix close to gnuplot print " ", for i in range(len(p)): print "%-10s"%(p[i].name,), print for i in range(len(p2)): print "%10s"%p[i].name, for j in range(i+1): print "%10f"%(cov[i,j]/sqrt(cov[i,i]*cov[j,j]),), print #----------------------------------------------- Roger. -- Roger Fearick Department of Physics University of Cape Town From sebastian.walter at gmail.com Fri Mar 27 04:22:57 2009 From: sebastian.walter at gmail.com (Sebastian Walter) Date: Fri, 27 Mar 2009 09:22:57 +0100 Subject: [SciPy-user] "Advanced Scientific Programming in Python" Summer School in Berlin. Germany In-Reply-To: <20090325145528.GC16808@localhost> References: <20090325145528.GC16808@localhost> Message-ID: On Wed, Mar 25, 2009 at 3:55 PM, Tiziano Zito wrote: > Hi all, > > the summer school I'm co-organizing may be of interest to some of > you... > > cheers, > tiziano > > ------------------- > > "Advanced Scientific Programming in Python" > a G-Node Summer School > > Many scientists spend much of their time writing, debugging, and > maintaining software. But while techniques for doing this efficiently > have been developed, only few scientists actually use them. As a > result, they spend far too much time writing deficient code and > reinventing the wheel instead of doing research. In this course we > present a selection of advanced programming techniques with > theoretical lectures and practical exercises tailored to the needs of > the programming scientist. To spice up theory and foster our new > skills in a real-world programming project, we will team up to develop > an entertaining scientific computer game. > > We will use the Python programming language for the entire > course. With a large collection of open-source scientific modules and > all features of a full-fledged programming language, Python is rapidly > gaining popularity in the neuroscience community. It enables the > scientist to quickly develop powerful, efficient, and structured > software and is becoming an essential tool for scientific computing. > > The summer school is targeted at Post-docs and PhD students from all > areas of neuroscience. 
Substantial proficiency in Python or in > another language (e.g. Java, C/C++, MATLAB, Mathematica) is absolutely > required. An optional, one-day pre-course is offered to participants > without Python experience to familiarize with the language. > > Date and Location > ----------------- > August 31st, 2009 -- September 4th, 2009. Berlin, Germany. > > Preliminary Program > ------------------- > Day 0 (Mon Aug 31) -- [Optional] Dive into Python > > Day 1 (Tue Sep 1) -- Software Carpentry > - Documenting code and using version control > - Test-driven development & unit testing > - Debugging, profiling and benchmarking techniques > - Object-oriented programming, design patterns and Extreme Programming > > Day 2 (Wed Sep 2) -- Scientific Tools for Python > - NumPy, SciPy, Matplotlib, IPython > - Neuroscience libraries > - Programming project in the afternoon > > Day 3 (Thu Sep 3) -- Parallelization > - Python multiprocessing for SMP machines > - Distributed parallelization for cluster computing > - Programming project in the afternoon > > Day 4 (Fri Sep 4) -- Practical Software Development > - Software design > - Efficient programming in teams > - Quality Assurance > - Finalizing the programming project > > Applications > ------------ > Applications should be sent before May 31st, 2009 to > pythonsummerschool at bccnberlin.de. No fee is charged but participants shouldn't it be python-summerschool at bccn-berlin.de ? > should take care of travel, living, and accommodation expenses. > > Applications should include full contact information (name, > affiliation, email & phone), a short CV and a short statement > addressing the following questions (maximum 500 words): > - What is your educational background? > - What experience do you have in programming? > - Why do you think "Advanced Scientific Programming in Python" is an > appropriate course for your skill profile? > > Candidates will be selected based on their profile. Places are > limited: early application is recommended. > > Faculty > ------- > Pietro Berkes, Volen Center for Complex Systems, Brandeis University, USA > Jens Kremkow, Institut de Neurosciences Cognitives de la M?diterran?e, CNRS, Marseille, France > Eilif Muller, Laboratory of Computational Neuroscience, Ecole Polytechnique F?d?rale de Lausanne, Switzerland > Michael Schmuker, Neurobiology, Freie Universit?t Berlin, Germany > Bartosz Telenczuk, Charit? Universit?tsmedizin Berlin, Germany > Niko Wilbert, Institute for Theoretical Biology, Humboldt-Universit?t zu Berlin, Germany > Tiziano Zito, Bernstein Center for Computational Neuroscience Berlin, Germany > > Organized by Michael Schmuker and Tiziano Zito for the German > Neuroinformatics Node of the INCF. 
> > Website: http://www.g-node.org/Teaching > Contact: python-summerschool at bccn-berlin.de > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user > From sebastian.walter at gmail.com Fri Mar 27 04:35:04 2009 From: sebastian.walter at gmail.com (Sebastian Walter) Date: Fri, 27 Mar 2009 09:35:04 +0100 Subject: [SciPy-user] Automatic Differentiation with PYADOLC and Removing Boost::Python dependency In-Reply-To: <3d375d730903261144v1caacd2fo92e0b799ebb1f3a7@mail.gmail.com> References: <200903261141.18716.lists_ravi@lavabit.com> <5b8d13220903260943n60274daey8d2259621db2138f@mail.gmail.com> <3d375d730903261144v1caacd2fo92e0b799ebb1f3a7@mail.gmail.com> Message-ID: On Thu, Mar 26, 2009 at 7:44 PM, Robert Kern wrote: > On Thu, Mar 26, 2009 at 12:34, Rob Clewley wrote: > >> As an example, I would love to use this code in PyDSTool to find >> derivatives more accurately to compute periodic orbit solutions of >> differential equations, but it does not appeal to me to require boost >> as a dependency. > > You might want to take a look at Theano as an alternative to automatic > differentiation: > > http://pylearn.org/theano I had a quick look at theano. It looks more like symbolic differentiation. Is that right? How does it handle functions with loops in the body? Also, I have another AD tool on github (http://github.com/b45ch1/algopy/tree/master) that implements its own graph structure at the moment. It might be nice if I could use theano internally to optimize the computational graph. Do you think that might be possible? > > -- > Robert Kern > > "I have come to believe that the whole world is an enigma, a harmless > enigma that is made terrible by our own mad attempt to interpret it as > though it had an underlying truth." > -- Umberto Eco > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user > From sebastian.walter at gmail.com Fri Mar 27 04:48:23 2009 From: sebastian.walter at gmail.com (Sebastian Walter) Date: Fri, 27 Mar 2009 09:48:23 +0100 Subject: [SciPy-user] Automatic Differentiation with PYADOLC and Removing Boost::Python dependency In-Reply-To: References: Message-ID: On Thu, Mar 26, 2009 at 8:44 PM, Nils Wagner wrote: > On Thu, 26 Mar 2009 13:22:03 +0100 > Sebastian Walter wrote: >> hey, thanks for the bug report! >> >> Yes, that's something I have expected since I didn't >>have the chance >> to compile it on a 64bit system. >> I was a little lax with the integers! I guess I'll have >>to use >> npy_intp instead of int in some places. >> Shouldn't take too long to fix, but I have to set up a >>64bit Linux first. >> > > No problem. I can test it for you. Please let me know if I > can help you. well, sure, help is always appreciated :)! I haven't used github to its full extent yet and I'm curious how good this "social coding" thing works: You could try to make a fork of the project on github and send me a pull request. Or just send me a patch and I'll incorporate it. 
> > > Cheers, > Nils > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user > From robert.kern at gmail.com Fri Mar 27 04:51:42 2009 From: robert.kern at gmail.com (Robert Kern) Date: Fri, 27 Mar 2009 03:51:42 -0500 Subject: [SciPy-user] Automatic Differentiation with PYADOLC and Removing Boost::Python dependency In-Reply-To: References: <200903261141.18716.lists_ravi@lavabit.com> <5b8d13220903260943n60274daey8d2259621db2138f@mail.gmail.com> <3d375d730903261144v1caacd2fo92e0b799ebb1f3a7@mail.gmail.com> Message-ID: <3d375d730903270151v3a8f72dfocf3ba8f5e0ad0ecd@mail.gmail.com> On Fri, Mar 27, 2009 at 03:35, Sebastian Walter wrote: > On Thu, Mar 26, 2009 at 7:44 PM, Robert Kern wrote: >> On Thu, Mar 26, 2009 at 12:34, Rob Clewley wrote: >> >>> As an example, I would love to use this code in PyDSTool to find >>> derivatives more accurately to compute periodic orbit solutions of >>> differential equations, but it does not appeal to me to require boost >>> as a dependency. >> >> You might want to take a look at Theano as an alternative to automatic >> differentiation: >> >> ?http://pylearn.org/theano > > I had a quick look at theano. It looks more like symbolic > differentiation. Is that right? > How does it handle functions with loops in the body? I don't think it does. -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From opossumnano at gmail.com Fri Mar 27 05:20:18 2009 From: opossumnano at gmail.com (Tiziano Zito) Date: Fri, 27 Mar 2009 10:20:18 +0100 Subject: [SciPy-user] "Advanced Scientific Programming in Python" Summer School in Berlin. Germany In-Reply-To: References: <20090325145528.GC16808@localhost> Message-ID: <20090327092018.GC22004@localhost> On Fri 27 Mar, 09:22, Sebastian Walter wrote: [...] > > Applications > > ------------ > > Applications should be sent before May 31st, 2009 to > > pythonsummerschool at bccnberlin.de. No fee is charged but participants > > shouldn't it be > python-summerschool at bccn-berlin.de ? > yep! From bouloumag at gmail.com Fri Mar 27 13:34:29 2009 From: bouloumag at gmail.com (Darcoux Christine) Date: Fri, 27 Mar 2009 13:34:29 -0400 Subject: [SciPy-user] Monotone Interpolation Message-ID: <477ac73a0903271034nc8d8ddcu7e303ba243591530@mail.gmail.com> Any hints on how to do this with scipy ? Should I ask dev to implement it ? From david.huard at gmail.com Fri Mar 27 14:15:41 2009 From: david.huard at gmail.com (David Huard) Date: Fri, 27 Mar 2009 14:15:41 -0400 Subject: [SciPy-user] Ship synoptic code parser Message-ID: <91cf711d0903271115m59a43fdcqd3d1af05332a09cc@mail.gmail.com> Hi, Does anyone have code to parse ship synoptic code ? Thanks, David -------------- next part -------------- An HTML attachment was scrubbed... URL: From peridot.faceted at gmail.com Fri Mar 27 14:17:57 2009 From: peridot.faceted at gmail.com (Anne Archibald) Date: Fri, 27 Mar 2009 14:17:57 -0400 Subject: [SciPy-user] Monotone Interpolation In-Reply-To: <477ac73a0903271034nc8d8ddcu7e303ba243591530@mail.gmail.com> References: <477ac73a0903271034nc8d8ddcu7e303ba243591530@mail.gmail.com> Message-ID: 2009/3/27 Darcoux Christine : > Any hints on how to do this with scipy ? Should I ask dev to implement it ? As far as I know, code exists to do this. 
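In the meantime, if a quick monotone interpolant is all that is needed, a
self-contained Fritsch-Carlson sketch in plain NumPy would look something
like the following (the function name is just illustrative, and it bypasses
scipy's spline machinery entirely, so treat it as a sketch rather than as
"the scipy way"):

import numpy as np

def monotone_interp(x, y, xnew):
    # Piecewise-cubic Hermite interpolation with Fritsch-Carlson tangent
    # limiting, so the interpolant is monotone wherever the data are.
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    h = np.diff(x)                        # interval widths
    d = np.diff(y) / h                    # secant slopes
    m = np.empty_like(y)                  # node tangents
    m[0], m[-1] = d[0], d[-1]
    m[1:-1] = 0.5 * (d[:-1] + d[1:])
    m[1:-1][d[:-1] * d[1:] <= 0] = 0.0    # flatten tangents at local extrema
    for k in range(len(d)):               # Fritsch-Carlson limiter
        if d[k] == 0.0:
            m[k] = m[k + 1] = 0.0
            continue
        r = np.hypot(m[k] / d[k], m[k + 1] / d[k])
        if r > 3.0:
            m[k] *= 3.0 / r
            m[k + 1] *= 3.0 / r
    idx = np.clip(np.searchsorted(x, xnew) - 1, 0, len(h) - 1)
    t = (np.asarray(xnew, dtype=float) - x[idx]) / h[idx]
    return ((1 + 2 * t) * (1 - t) ** 2 * y[idx]
            + t * (1 - t) ** 2 * h[idx] * m[idx]
            + t ** 2 * (3 - 2 * t) * y[idx + 1]
            + t ** 2 * (t - 1) * h[idx] * m[idx + 1])

xq = np.linspace(0., 4., 9)
print(monotone_interp([0., 1, 2, 3, 4], [0., .1, .11, 5., 5.1], xq))

The tangent limiter is what keeps the curve from overshooting between data
points, which is exactly what an ordinary cubic spline will not guarantee.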
Scipy does have spline objects for which evaluations (and integrations, derivatives, and root-finding) are done with fast FORTRAN routines, so in principle if you implemented code that generated the knots you could take advantage of the fast evaluation routines. Monotone interpolation would indeed be useful, for example for concocting efficiently invertible CDFs for random number generation. But unfortunately right now there is no code to allow construction of cubic Hermite splines, so you would have to figure out scipy's tck representation and make up the knots yourself. Anne > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user > From josef.pktd at gmail.com Fri Mar 27 14:35:20 2009 From: josef.pktd at gmail.com (josef.pktd at gmail.com) Date: Fri, 27 Mar 2009 14:35:20 -0400 Subject: [SciPy-user] statistical tests notes Message-ID: <1cd32cbb0903271135o32173ccdtf6bc2d36ae3e4d90@mail.gmail.com> There was a discussion in ticket 901 on some of the statistical tests in scipy.stats, and I thought, I post some notes that I keep to have an overview on the status of it. This doesn't cover all of stats (e.g. not descriptive statistics) Josef Inferential Statistics ====================== tests for location: ------------------- t-tests and similar ttest_1samp ttest_ind ttest_rel f_oneway (F-test) glm Notes ----- the t-tests, ttest_1samp, ttest_ind and ttest_rel, have been rewritten and are well tested glm has very incomplete description, just a t-test, needs rewrite f_oneway: verified with Nist test set for balanced anova, correct but looses numerical precision at medium to high difficulty, I have a rewrite with higher numerical precision rank based tests (some are equivalent) mannwhitneyu ranksums wilcoxon kruskal friedmanchisquare Notes ----- For 2 random variables and no ties mannwhitneyu, ranksums and kruskal are equivalent (i.e. 
return the same p-values but based on different statistics) kruskal has correct tie handling and works for more than two random variables friedmanchisquare has been verified, (corrected tie handling) mannwhitneyu: corrected, verified ranksums: no tie handling look at Monte Carlo p-values again, initial trying out didn't show improvement tests for scale: ---------------- ansari bartlett levene fligner mood Notes ----- I didn't verify any of them by comparing to R or matlab Brief checking with Monte Carlo shows that they work (reject wrong Null, accept correct Null) tests for distribution: ----------------------- general chisquare kstest ks_2samp anderson Notes ----- kstest, ks_2samp were rewritten and verified anderson may be fishy, but didn't look very carefully chisquare: I use a copy of it in test of discrete distributions and seems to work well for normal distribution skewtest kurtosistest normaltest shapiro Notes ----- not verified but look ok in brief Monte Carlo tests and use in examples other binom_test Notes ----- no idea Anova Ftests ------------ f_oneway (for the following: no statistics from data calculated, no p-values returned) f_value_wilks_lambda f_value f_value_multivariate Notes ----- f_oneway see above others no idea Correlation measures including pvalues -------------------------------------- pearsonr spearmanr pointbiserialr kendalltau Notes ----- pearsonr is just standard corrcoef, can be rewritten (mostly delegated to numpy.corrcoef) spearmanr needs rewriting, no tiehandling yet, can be reduced to corrcoef on rankdata pointbiserialr can be reduced to np.corrcoef, dropped? kendalltau is verified, p-value (variance) does not correct for ties extension in cython attached to ticket (but no p-values) Distributions - diagnostics and graphical analysis ================================================== box-cox transformation only checked whether they run plots look ok, converted to matplotlib pdfapprox is broken, I have enhanced rewrite, no good tests yet From martin.hoefling at gmx.de Fri Mar 27 16:17:44 2009 From: martin.hoefling at gmx.de (=?ISO-8859-1?Q?H=F6fling_Martin?=) Date: Fri, 27 Mar 2009 21:17:44 +0100 Subject: [SciPy-user] SciPy / NumPy on MacOS 10.5 / Python 2.6 - 64bit Message-ID: <05268CD2-8F44-40B3-9CF9-DC312C009CD9@gmx.de> Hey Folks, I tried to compile 1.3.0b1 with a python 2.6, compiled as 64bit version (installed to $HOME/local). Is this in general working on MacOS Leopard? I know that Michael made some attempts... For some reason - the compilation always complains about an unset MATHLIB variable although I set it to "m" here're the FLAGS I used for compilation CFLAGS= -I/Users/martin/local/include -m64 CPPFLAGS= -I/Users/martin/local/include -m64 FFLAGS= -I/Users/martin/local/include -m64 CXXFLAGS= -I/Users/martin/local/include -m64 FCFLAGS= -I/Users/martin/local/include -m64 LDFLAGS= -L/Users/martin/local/lib -m64 -arch x86_64 Best Martin Find attached the logfile from compilation process. -------------- next part -------------- A non-text attachment was scrubbed... 
Name: logfile Type: application/octet-stream Size: 16176 bytes Desc: not available URL: -------------- next part -------------- From cournape at gmail.com Fri Mar 27 16:31:29 2009 From: cournape at gmail.com (David Cournapeau) Date: Sat, 28 Mar 2009 05:31:29 +0900 Subject: [SciPy-user] SciPy / NumPy on MacOS 10.5 / Python 2.6 - 64bit In-Reply-To: <05268CD2-8F44-40B3-9CF9-DC312C009CD9@gmx.de> References: <05268CD2-8F44-40B3-9CF9-DC312C009CD9@gmx.de> Message-ID: <5b8d13220903271331h1f465e39ib564713087807542@mail.gmail.com> On Sat, Mar 28, 2009 at 5:17 AM, H?fling Martin wrote: > Hey Folks, > > I tried to compile 1.3.0b1 with a python 2.6, compiled as 64bit version > (installed to $HOME/local). Is this in general working on MacOS Leopard? I > know that Michael made some attempts... > > For some reason - the compilation always complains about an unset MATHLIB > variable although I set it to "m" I think the message is likely to be bogus - as mathlib is the first config test which is *run* (instead of just built/linked), you often get failures which were undetected before. You should look above in the build log to find the real reason, David From martin.hoefling at gmx.de Fri Mar 27 18:06:36 2009 From: martin.hoefling at gmx.de (=?ISO-8859-1?Q?H=F6fling_Martin?=) Date: Fri, 27 Mar 2009 23:06:36 +0100 Subject: [SciPy-user] SciPy / NumPy on MacOS 10.5 / Python 2.6 - 64bit In-Reply-To: <5b8d13220903271331h1f465e39ib564713087807542@mail.gmail.com> References: <05268CD2-8F44-40B3-9CF9-DC312C009CD9@gmx.de> <5b8d13220903271331h1f465e39ib564713087807542@mail.gmail.com> Message-ID: <3D12074C-C0DA-4EC8-9C7F-6E4960B5D967@gmx.de> Am 27.03.2009 um 21:31 schrieb David Cournapeau: Hey David, ok here a bit closes what it could be... > get failures which were undetected before. You should look above in > the build log to find the real reason, C compiler: gcc -DNDEBUG -g -fwrapv -O3 -Wall -Wstrict-prototypes -I/ Users/martin/local/include -m64 -I/Users/martin/local/include -m64 compile options: '-Inumpy/core/src -Inumpy/core/include -I/Users/ martin/local/include/python2.6 -c' gcc: _configtest.c gcc _configtest.o -lm -o _configtest ld: warning in _configtest.o, file is not of required architecture Undefined symbols: "_main", referenced from: start in crt1.10.5.o ld: symbol(s) not found collect2: ld returned 1 exit status ld: warning in _configtest.o, file is not of required architecture Undefined symbols: "_main", referenced from: start in crt1.10.5.o ld: symbol(s) not found collect2: ld returned 1 exit status failure. removing: _configtest.c _configtest.o Any idea why the build script forwarding the -arch x86_64 and -m64 switches in flags I exported? Best Martin From millman at berkeley.edu Fri Mar 27 19:20:20 2009 From: millman at berkeley.edu (Jarrod Millman) Date: Fri, 27 Mar 2009 16:20:20 -0700 Subject: [SciPy-user] SciPy 2009 Conference will be Aug. 18-23 Message-ID: The subject says it all. Over the next few days, we will be updating the conference website with additional information. So if you are interested, please keep an eye on: http://conference.scipy.org/ Jarrod From cournape at gmail.com Sat Mar 28 04:01:35 2009 From: cournape at gmail.com (David Cournapeau) Date: Sat, 28 Mar 2009 17:01:35 +0900 Subject: [SciPy-user] OS-X Universal binary? 
In-Reply-To: <49C9CBDB.6030208@ar.media.kyoto-u.ac.jp> References: <49C93DC6.7020102@noaa.gov> <49C93E99.8080902@noaa.gov> <5b8d13220903241336w6ec4f5c7n44b865362834fdd1@mail.gmail.com> <49C9CBDB.6030208@ar.media.kyoto-u.ac.jp> Message-ID: <5b8d13220903280101vd710fcbo414255ee8529e6c2@mail.gmail.com> On Wed, Mar 25, 2009 at 3:14 PM, David Cournapeau wrote: > David Cournapeau wrote: >> >> Yes, you're right - the problem is that fortran flags do not contain >> the necessary options for universal build (you can see that pure C/C++ >> extensions are fat binaries, but fortran aren't). >> > > I created a new ticket: > > http://projects.scipy.org/numpy/ticket/1067 I think I fixed the problem. Can you test whether this works for you ? You would have to rebuild both numpy and scipy from scratch. cheers, David From scott.sinclair.za at gmail.com Sat Mar 28 06:06:02 2009 From: scott.sinclair.za at gmail.com (Scott Sinclair) Date: Sat, 28 Mar 2009 12:06:02 +0200 Subject: [SciPy-user] Ship synoptic code parser In-Reply-To: <91cf711d0903271115m59a43fdcqd3d1af05332a09cc@mail.gmail.com> References: <91cf711d0903271115m59a43fdcqd3d1af05332a09cc@mail.gmail.com> Message-ID: <6a17e9ee0903280306s40d45b11sdc4d533491a02b3b@mail.gmail.com> > 2009/3/27 David Huard : > Does anyone have code to parse ship synoptic code ? I don't have any code, but this link might be useful: http://metaf2xml.sourceforge.net/ It might be easier for you to process XML than SYNOP. Cheers, Scott From david at ar.media.kyoto-u.ac.jp Sat Mar 28 09:26:31 2009 From: david at ar.media.kyoto-u.ac.jp (David Cournapeau) Date: Sat, 28 Mar 2009 22:26:31 +0900 Subject: [SciPy-user] [Announce] Numpy 1.3.0 rc1 Message-ID: <49CE2587.2050007@ar.media.kyoto-u.ac.jp> Hi, I am pleased to announce the release of the rc1 for numpy 1.3.0. You can find source tarballs and installers for both Mac OS X and Windows on the sourceforge page: https://sourceforge.net/projects/numpy/ The release note for the 1.3.0 release are below, The Numpy developers ========================= NumPy 1.3.0 Release Notes ========================= This minor includes numerous bug fixes, official python 2.6 support, and several new features such as generalized ufuncs. Highlights ========== Python 2.6 support ~~~~~~~~~~~~~~~~~~ Python 2.6 is now supported on all previously supported platforms, including windows. http://www.python.org/dev/peps/pep-0361/ Generalized ufuncs ~~~~~~~~~~~~~~~~~~ There is a general need for looping over not only functions on scalars but also over functions on vectors (or arrays), as explained on http://scipy.org/scipy/numpy/wiki/GeneralLoopingFunctions. We propose to realize this concept by generalizing the universal functions (ufuncs), and provide a C implementation that adds ~500 lines to the numpy code base. In current (specialized) ufuncs, the elementary function is limited to element-by-element operations, whereas the generalized version supports "sub-array" by "sub-array" operations. The Perl vector library PDL provides a similar functionality and its terms are re-used in the following. Each generalized ufunc has information associated with it that states what the "core" dimensionality of the inputs is, as well as the corresponding dimensionality of the outputs (the element-wise ufuncs have zero core dimensions). The list of the core dimensions for all arguments is called the "signature" of a ufunc. For example, the ufunc numpy.add has signature "(),()->()" defining two scalar inputs and one scalar output. 
Another example is (see the GeneralLoopingFunctions page) the function inner1d(a,b) with a signature of "(i),(i)->()". This applies the inner product along the last axis of each input, but keeps the remaining indices intact. For example, where a is of shape (3,5,N) and b is of shape (5,N), this will return an output of shape (3,5). The underlying elementary function is called 3*5 times. In the signature, we specify one core dimension "(i)" for each input and zero core dimensions "()" for the output, since it takes two 1-d arrays and returns a scalar. By using the same name "i", we specify that the two corresponding dimensions should be of the same size (or one of them is of size 1 and will be broadcasted). The dimensions beyond the core dimensions are called "loop" dimensions. In the above example, this corresponds to (3,5). The usual numpy "broadcasting" rules apply, where the signature determines how the dimensions of each input/output object are split into core and loop dimensions: While an input array has a smaller dimensionality than the corresponding number of core dimensions, 1's are pre-pended to its shape. The core dimensions are removed from all inputs and the remaining dimensions are broadcasted; defining the loop dimensions. The output is given by the loop dimensions plus the output core dimensions. Experimental Windows 64 bits support ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Numpy can now be built on windows 64 bits (amd64 only, not IA64), with both MS compilers and mingw-w64 compilers: This is *highly experimental*: DO NOT USE FOR PRODUCTION USE. See INSTALL.txt, Windows 64 bits section for more information on limitations and how to build it by yourself. New features ============ Formatting issues ~~~~~~~~~~~~~~~~~ Float formatting is now handled by numpy instead of the C runtime: this enables locale independent formatting, more robust fromstring and related methods. Special values (inf and nan) are also more consistent across platforms (nan vs IND/NaN, etc...), and more consistent with recent python formatting work (in 2.6 and later). Nan handling in max/min ~~~~~~~~~~~~~~~~~~~~~~~ The maximum/minimum ufuncs now reliably propagate nans. If one of the arguments is a nan, then nan is retured. This affects np.min/np.max, amin/amax and the array methods max/min. New ufuncs fmax and fmin have been added to deal with non-propagating nans. Nan handling in sign ~~~~~~~~~~~~~~~~~~~~ The ufunc sign now returns nan for the sign of anan. New ufuncs ~~~~~~~~~~ #. fmax - same as maximum for integer types and non-nan floats. Returns the non-nan argument if one argument is nan and returns nan if both arguments are nan. #. fmin - same as minimum for integer types and non-nan floats. Returns the non-nan argument if one argument is nan and returns nan if both arguments are nan. #. deg2rad - converts degrees to radians, same as the radians ufunc. #. rad2deg - converts radians to degrees, same as the degrees ufunc. #. log2 - base 2 logarithm. #. exp2 - base 2 exponential. #. trunc - truncate floats to nearest integer towards zero. #. logaddexp - add numbers stored as logarithms and return the logarithm of the result. #. logaddexp2 - add numbers stored as base 2 logarithms and return the base 2 logarithm of the result result. Masked arrays ~~~~~~~~~~~~~ Several new features and bug fixes, including: * structured arrays should now be fully supported by MaskedArray (r6463, r6324, r6305, r6300, r6294...) 
* Minor bug fixes (r6356, r6352, r6335, r6299, r6298) * Improved support for __iter__ (r6326) * made baseclass, sharedmask and hardmask accesible to the user (but read-only) * doc update gfortran support on windows ~~~~~~~~~~~~~~~~~~~~~~~~~~~ Gfortran can now be used as a fortran compiler for numpy on windows, even when the C compiler is Visual Studio (VS 2005 and above; VS 2003 will NOT work). Gfortran + Visual studio does not work on windows 64 bits (but gcc + gfortran does). It is unclear whether it will be possible to use gfortran and visual studio at all on x64. Arch option for windows binary ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Automatic arch detection can now be bypassed from the command line for the superpack installed: numpy-1.3.0-superpack-win32.exe /arch=nosse will install a numpy which works on any x86, even if the running computer supports SSE set. Deprecated features =================== Histogram ~~~~~~~~~ The semantics of histogram has been modified to fix long-standing issues with outliers handling. The main changes concern #. the definition of the bin edges, now including the rightmost edge, and #. the handling of upper outliers, now ignored rather than tallied in the rightmost bin. The previous behavior is still accessible using `new=False`, but this is deprecated, and will be removed entirely in 1.4.0. Documentation changes ===================== A lot of documentation has been added. Both user guide and references can be built from sphinx. New C API ========= Multiarray API ~~~~~~~~~~~~~~ The following functions have been added to the multiarray C API: * PyArray_GetEndianness: to get runtime endianness Ufunc API ~~~~~~~~~~~~~~ The following functions have been added to the ufunc API: * PyUFunc_FromFuncAndDataAndSignature: to declare a more general ufunc (generalized ufunc). New defines ~~~~~~~~~~~ New public C defines are available for ARCH specific code through numpy/npy_cpu.h: * NPY_CPU_X86: x86 arch (32 bits) * NPY_CPU_AMD64: amd64 arch (x86_64, NOT Itanium) * NPY_CPU_PPC: 32 bits ppc * NPY_CPU_PPC64: 64 bits ppc * NPY_CPU_SPARC: 32 bits sparc * NPY_CPU_SPARC64: 64 bits sparc * NPY_CPU_S390: S390 * NPY_CPU_IA64: ia64 * NPY_CPU_PARISC: PARISC New macros for CPU endianness has been added as well (see internal changes below for details): * NPY_BYTE_ORDER: integer * NPY_LITTLE_ENDIAN/NPY_BIG_ENDIAN defines Those provide portable alternatives to glibc endian.h macros for platforms without it. Portable NAN, INFINITY, etc... ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ npy_math.h now makes available several portable macro to get NAN, INFINITY: * NPY_NAN: equivalent to NAN, which is a GNU extension * NPY_INFINITY: equivalent to C99 INFINITY * NPY_PZERO, NPY_NZERO: positive and negative zero respectively Corresponding single and extended precision macros are available as well. All references to NAN, or home-grown computation of NAN on the fly have been removed for consistency. Internal changes ================ numpy.core math configuration revamp ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ This should make the porting to new platforms easier, and more robust. In particular, the configuration stage does not need to execute any code on the target platform, which is a first step toward cross-compilation. http://projects.scipy.org/numpy/browser/trunk/doc/neps/math_config_clean.txt umath refactor ~~~~~~~~~~~~~~ A lot of code cleanup for umath/ufunc code (charris). 
Improvements to build warnings ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Numpy can now build with -W -Wall without warnings http://projects.scipy.org/numpy/browser/trunk/doc/neps/warnfix.txt Separate core math library ~~~~~~~~~~~~~~~~~~~~~~~~~~ The core math functions (sin, cos, etc... for basic C types) have been put into a separate library; it acts as a compatibility layer, to support most C99 maths functions (real only for now). The library includes platform-specific fixes for various maths functions, such as using those versions should be more robust than using your platform functions directly. The API for existing functions is exactly the same as the C99 math functions API; the only difference is the npy prefix (npy_cos vs cos). The core library will be made available to any extension in 1.4.0. CPU arch detection ~~~~~~~~~~~~~~~~~~ npy_cpu.h defines numpy specific CPU defines, such as NPY_CPU_X86, etc... Those are portable across OS and toolchains, and set up when the header is parsed, so that they can be safely used even in the case of cross-compilation (the values is not set when numpy is built), or for multi-arch binaries (e.g. fat binaries on Max OS X). npy_endian.h defines numpy specific endianness defines, modeled on the glibc endian.h. NPY_BYTE_ORDER is equivalent to BYTE_ORDER, and one of NPY_LITTLE_ENDIAN or NPY_BIG_ENDIAN is defined. As for CPU archs, those are set when the header is parsed by the compiler, and as such can be used for cross-compilation and multi-arch binaries. Checksums ========= 5c6b2f02d0846317c6e7bffa39f6f828 release/installers/numpy-1.3.0rc1.zip 20cdddd69594420b0f8556bbc4a27a5a release/installers/numpy-1.3.0rc1.tar.gz f85231c4a27b39f7cb713ef22926931e release/installers/numpy-1.3.0rc1-py2.5-macosx10.5.dmg b24bb536492502611ea797d9410bb7c2 release/installers/numpy-1.3.0rc1-win32-superpack-python2.5.exe From contact at pythonxy.com Sat Mar 28 11:03:26 2009 From: contact at pythonxy.com (Pierre Raybaut) Date: Sat, 28 Mar 2009 16:03:26 +0100 Subject: [SciPy-user] [ Python(x,y) ] New release : 2.1.12 Message-ID: <49CE3C3E.6060505@pythonxy.com> Hi all, Release 2.1.12 is now available on http://www.pythonxy.com: - All-in-One Installer ("Full Edition"), - Plugin Installer -- to be downloaded with xyweb, - Update Changes history Version 2.1.12 (03-28-2009) * Added: o Python(x,y) installer: command line parameters "/S /ALLUSERS=1" now allow to install Python(x,y) silently for all users o PyQtShell 0.3.16 - Pydee (an alternative to IDLE), interactive Python shell and other widgets (Workspace, Editor, DocViewer, ...) based on PyQt4 o ViTables 2.0 - Graphical tool for browsing and editing files in both HDF5 and PyTables formats o gettext 0.14.4 - GNU gettext Win32 porting - the GNU translation tool (useful tools for pygettext, a standard library module) * Updated: o Enthought Tool Suite 3.2.0 o PyTables 2.1.1 o Cython 0.11 o Pylint 0.18.0 o Sphinx 0.6.1 o Pywin32 2.13 o xy 1.0.24 Regards, Pierre Raybaut From hristov at ucalgary.ca Sun Mar 29 19:14:34 2009 From: hristov at ucalgary.ca (Iordan Hristov) Date: Sun, 29 Mar 2009 17:14:34 -0600 Subject: [SciPy-user] ellipk() with negative arguments? Message-ID: <830C2FD6-13FC-4A8C-A4E8-F6D2AB061D89@ucalgary.ca> Hello, I'm implementing a Mathematica equation which has EllipticK[x], x<0. It seems the domain of ellipk() is between 0 and 1 and anything outside gives nan. 
For example: Scipy: ellipk(.1) -> 1.6124413487202192 Mathematica: N[EllipticK[.1]] -> 1.61244 Scipy: ellipk(-1) -> nan Mathematica: N[EllipticK[-1]] -> 1.31103 Scipy: ellipk(3) -> nan Mathematica: N[EllipticK[3]] -> 1.00108 - 1.17142 I How could one get the Mathematica results in Scipy? Thanks, Iordan Hristov University of Calgary From fredrik.johansson at gmail.com Sun Mar 29 19:58:09 2009 From: fredrik.johansson at gmail.com (Fredrik Johansson) Date: Mon, 30 Mar 2009 01:58:09 +0200 Subject: [SciPy-user] ellipk() with negative arguments? In-Reply-To: <830C2FD6-13FC-4A8C-A4E8-F6D2AB061D89@ucalgary.ca> References: <830C2FD6-13FC-4A8C-A4E8-F6D2AB061D89@ucalgary.ca> Message-ID: <3d0cebfb0903291658y5d0945d1v7f0252f6f301abbf@mail.gmail.com> On Mon, Mar 30, 2009 at 1:14 AM, Iordan Hristov wrote: > Hello, > > I'm implementing a Mathematica equation which has EllipticK[x], x<0. > It seems the domain of ?ellipk() is between 0 and 1 and anything > outside gives nan. For example: > > Scipy: ? ? ? ? ? ? ? ? ?ellipk(.1) -> 1.6124413487202192 > Mathematica: ? ?N[EllipticK[.1]] -> 1.61244 > > Scipy: ? ? ? ? ? ? ? ? ?ellipk(-1) -> nan > Mathematica: ? ?N[EllipticK[-1]] -> 1.31103 > > Scipy: ? ? ? ? ? ? ? ? ?ellipk(3) -> nan > Mathematica: ? ?N[EllipticK[3]] -> 1.00108 - 1.17142 I > > How could one get the Mathematica results in Scipy? A simple workaround would be to use the ellipk function in mpmath: >>> from mpmath import ellipk >>> ellipk(-1) mpf('1.3110287771460598') >>> ellipk(3) mpc(real='1.0010773804561062', imag='-1.1714200841467699') Fredrik From vvinuv at gmail.com Mon Mar 30 01:29:54 2009 From: vvinuv at gmail.com (vvinuv) Date: Sun, 29 Mar 2009 22:29:54 -0700 (PDT) Subject: [SciPy-user] fourier series Message-ID: <22776639.post@talk.nabble.com> Hi I would like to demonstrate the fourier series graphically. I have a function in an array and want to decompose it. I used rfft to do that. But I am not able to interpret the coefficients I am getting. How can I use the array returns by rfft in the fourier series, ie how can I extract the coefficients (a_0, a_1, b_1, a_2, b_2 etc.) out of that? Thanks Vinu -- View this message in context: http://www.nabble.com/fourier-series-tp22776639p22776639.html Sent from the Scipy-User mailing list archive at Nabble.com. From pav at iki.fi Mon Mar 30 04:05:01 2009 From: pav at iki.fi (Pauli Virtanen) Date: Mon, 30 Mar 2009 08:05:01 +0000 (UTC) Subject: [SciPy-user] ellipk() with negative arguments? References: <830C2FD6-13FC-4A8C-A4E8-F6D2AB061D89@ucalgary.ca> Message-ID: Sun, 29 Mar 2009 17:14:34 -0600, Iordan Hristov wrote: > I'm implementing a Mathematica equation which has EllipticK[x], x<0. It > seems the domain of ellipk() is between 0 and 1 and anything outside > gives nan. For example: > > Scipy: ellipk(.1) -> 1.6124413487202192 Mathematica: > N[EllipticK[.1]] -> 1.61244 > > Scipy: ellipk(-1) -> nan > Mathematica: N[EllipticK[-1]] -> 1.31103 > > Scipy: ellipk(3) -> nan > Mathematica: N[EllipticK[3]] -> 1.00108 - 1.17142 I > > How could one get the Mathematica results in Scipy? In general the type of output is real if input is real -> nans when out of real-valued domain. 
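For the m < 0 part of the question, though, there is a purely real
workaround: the imaginary-modulus transformation
K(m) = K(m/(m-1)) / sqrt(1-m) maps a negative parameter back into [0, 1),
so the existing scipy.special.ellipk can be used as-is. A small sketch
(the helper name is just illustrative):

import numpy as np
from scipy.special import ellipk

def ellipk_negm(m):
    # K(m) for m <= 0 via the imaginary-modulus transformation
    # K(m) = K(m/(m - 1)) / sqrt(1 - m); the new argument lies in [0, 1).
    m = np.asarray(m, dtype=float)
    return ellipk(m / (m - 1.0)) / np.sqrt(1.0 - m)

print(ellipk_negm(-1.0))    # 1.3110287771..., matching Mathematica's 1.31103

The m > 1 case is different, because there the result really is complex.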
Now, for `ellipk` there does not appear to be a complex-valued implementation in Scipy: >>> scipy.special.ellipk(-1+0j) Traceback (most recent call last): File "", line 1, in TypeError: function not supported for these types, and can't coerce safely to supported types So at the moment, the answer is simply that it's you can't, except if you can express ellipk in terms of other special functions that are implemented in Scipy. But having complex-calued ellipk in Scipy would be desirable. Please file an enhancement request in the Scipy Trac: http://projects.scipy.org/scipy/ so that we won't forget this feature request. As Frederik suggested, you can use the ellipk from mpmath as a work- around, for the present. -- Pauli Virtanen From bkomaki at yahoo.com Mon Mar 30 05:58:23 2009 From: bkomaki at yahoo.com (Ch B Komaki) Date: Mon, 30 Mar 2009 02:58:23 -0700 (PDT) Subject: [SciPy-user] calculate average by excluding NaN value in Array Message-ID: <595001.78520.qm@web30403.mail.mud.yahoo.com> Dear Sirs/Madams I wanna calculate average of some lists, but having Nan value makes difficult to calculate average of data , I wanna exclude the position has NaN.It should calculate? average of? the others instead? ; for example; >>> from numpy import * >>> a=array([2.,3,4,5]) >>> b=([3.,2,4, NaN]) >>> average(array([a,b],"f"),axis=0) array([ 2.5,? 2.5,? 4. ,? NaN], dtype=float32) >>> I mean that I need is [2.5,2.5,4,5], instead. thanks ba salam , khili mamnu az javab tabrike shoma, ?man ye soal fanny (GIS) daram, ?Man chand ta data(MODIS , 8 daily) daram ke bari har mah 4 ta mojoud ast, ?hal mi kham mingin mahanh enjam deham vali pixel hai Nodata(Null) mask mishand, man baid touri minagin begiram ke onja hai ke Nodata hast ,on data ke Nodata dar mohasebh miangin be kar nareh ?masalan? (einj a farz be ein ast ke 2 ta data darim); Mean([2,3,4,5] ,[3,2,4, None])=[2.5,2.5,4,5] -------------- next part -------------- An HTML attachment was scrubbed... 
URL: 

From david at ar.media.kyoto-u.ac.jp Mon Mar 30 06:27:11 2009
From: david at ar.media.kyoto-u.ac.jp (David Cournapeau)
Date: Mon, 30 Mar 2009 19:27:11 +0900
Subject: [SciPy-user] calculate average by excluding NaN value in Array
In-Reply-To: <595001.78520.qm@web30403.mail.mud.yahoo.com>
References: <595001.78520.qm@web30403.mail.mud.yahoo.com>
Message-ID: <49D09E7F.6090400@ar.media.kyoto-u.ac.jp>

Ch B Komaki wrote:
> Dear Sirs/Madams
> I wanna calculate average of some lists, but having Nan value makes
> difficult to calculate average of data, I wanna exclude the position
> has NaN. It should calculate average of the others instead;
> for example;
>

nanmean should do what you want,

David

From david at ar.media.kyoto-u.ac.jp Mon Mar 30 06:44:05 2009
From: david at ar.media.kyoto-u.ac.jp (David Cournapeau)
Date: Mon, 30 Mar 2009 19:44:05 +0900
Subject: [SciPy-user] calculate average by excluding NaN value in Array
In-Reply-To: <49D09E7F.6090400@ar.media.kyoto-u.ac.jp>
References: <595001.78520.qm@web30403.mail.mud.yahoo.com> <49D09E7F.6090400@ar.media.kyoto-u.ac.jp>
Message-ID: <49D0A275.9020200@ar.media.kyoto-u.ac.jp>

David Cournapeau wrote:
> Ch B Komaki wrote:
>
>> Dear Sirs/Madams
>> I wanna calculate average of some lists, but having Nan value makes
>> difficult to calculate average of data, I wanna exclude the position
>> has NaN. It should calculate average of the others instead;
>> for example;
>>
>>
>
> nanmean should do what you want,
>
>

which is in scipy.stats:

import numpy as np
from scipy.stats import nanmean

nanmean(np.array([1, 2, 3, np.nan]))  # should return 2.

David

From josef.pktd at gmail.com Mon Mar 30 08:46:38 2009
From: josef.pktd at gmail.com (josef.pktd at gmail.com)
Date: Mon, 30 Mar 2009 08:46:38 -0400
Subject: [SciPy-user] calculate average by excluding NaN value in Array
In-Reply-To: <49D0A275.9020200@ar.media.kyoto-u.ac.jp>
References: <595001.78520.qm@web30403.mail.mud.yahoo.com> <49D09E7F.6090400@ar.media.kyoto-u.ac.jp> <49D0A275.9020200@ar.media.kyoto-u.ac.jp>
Message-ID: <1cd32cbb0903300546q73e483a6lffe82b00336224c8@mail.gmail.com>

On Mon, Mar 30, 2009 at 6:44 AM, David Cournapeau wrote:
> David Cournapeau wrote:
>> Ch B Komaki wrote:
>>
>>> Dear Sirs/Madams
>>> I wanna calculate average of some lists, but having Nan value makes
>>> difficult to calculate average of data, I wanna exclude the position
>>> has NaN. It should calculate average of the others instead;
>>> for example;
>>>
>>>
>>
>> nanmean should do what you want,
>>
>>
>
> which is in scipy.stats:
>
> import numpy as np
> from scipy.stats import nanmean
>
> nanmean(np.array([1, 2, 3, np.nan]))  # should return 2.
>
> David

If you need to do more work with arrays that have nans or missing
values, then using masked arrays is more convenient and powerful.

e.g.
numpy.ma.mean
help(numpy.ma)

Josef

From Chris.Barker at noaa.gov Mon Mar 30 09:21:23 2009
From: Chris.Barker at noaa.gov (Chris Barker)
Date: Mon, 30 Mar 2009 06:21:23 -0700
Subject: [SciPy-user] calculate average by excluding NaN value in Array
In-Reply-To: <49D0A275.9020200@ar.media.kyoto-u.ac.jp>
References: <595001.78520.qm@web30403.mail.mud.yahoo.com> <49D09E7F.6090400@ar.media.kyoto-u.ac.jp> <49D0A275.9020200@ar.media.kyoto-u.ac.jp>
Message-ID: <49D0C753.8080100@noaa.gov>

David Cournapeau wrote:
>> nanmean should do what you want,

Actually, I don't think that's what the OP wanted:

>>>> a=array([2.,3,4,5])
>>>> b=([3.,2,4, NaN])
>>>> average(array([a,b],"f"),axis=0)
> array([ 2.5,  2.5,  4. ,  NaN], dtype=float32)
>>>>
> I mean that I need is [2.5,2.5,4,5], instead.

In this case, numpy really is doing the only correct thing, in an
operation like this, the result should be the same size as the input
arrays, and the average of NaN and anything else can only be NaN.

Imagine there were more than one NaN in the inputs -- how would you even
know which values in the resulting array belonged to which inputs?

So you should probably check for NaN afterwards, and then do what you
need with them:

>>> import numpy as np
>>> a=np.array([2.,3,4,5])
>>> b=np.array([3.,2,4, np.NaN])
>>> avg = np.average(np.array([a,b],dtype = np.float),axis=0)
>>> avg
array([ 2.5,  2.5,  4. ,  NaN])
>>> # now check for NaN:
>>> np.isfinite(avg)
array([ True,  True,  True, False], dtype=bool)
>>> # or
>>> np.isnan(avg)
array([False, False, False,  True], dtype=bool)
>>> # get a version without the non-finite numbers:
>>> avg[np.isfinite(avg)]
array([ 2.5,  2.5,  4. ])

So I think what you want is to strip the NaNs out later:

From david.huard at gmail.com Mon Mar 30 10:00:22 2009
From: david.huard at gmail.com (David Huard)
Date: Mon, 30 Mar 2009 10:00:22 -0400
Subject: [SciPy-user] Ship synoptic code parser
In-Reply-To: <6a17e9ee0903280306s40d45b11sdc4d533491a02b3b@mail.gmail.com>
References: <91cf711d0903271115m59a43fdcqd3d1af05332a09cc@mail.gmail.com> <6a17e9ee0903280306s40d45b11sdc4d533491a02b3b@mail.gmail.com>
Message-ID: <91cf711d0903300700q2b494d39y28963d89a1542cf7@mail.gmail.com>

Scott,

I came across that but didn't want to add a dependency. I ended up
writing a regular expression.

Thanks,

David

On Sat, Mar 28, 2009 at 6:06 AM, Scott Sinclair wrote:
>
> 2009/3/27 David Huard :
> > Does anyone have code to parse ship synoptic code ?
>
> I don't have any code, but this link might be useful:
>
> http://metaf2xml.sourceforge.net/
>
> It might be easier for you to process XML than SYNOP.
>
> Cheers,
> Scott
> _______________________________________________
> SciPy-user mailing list
> SciPy-user at scipy.org
> http://mail.scipy.org/mailman/listinfo/scipy-user
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From pgmdevlist at gmail.com Mon Mar 30 11:03:40 2009
From: pgmdevlist at gmail.com (Pierre GM)
Date: Mon, 30 Mar 2009 11:03:40 -0400
Subject: [SciPy-user] calculate average by excluding NaN value in Array
In-Reply-To: <49D0C753.8080100@noaa.gov>
References: <595001.78520.qm@web30403.mail.mud.yahoo.com> <49D09E7F.6090400@ar.media.kyoto-u.ac.jp> <49D0A275.9020200@ar.media.kyoto-u.ac.jp> <49D0C753.8080100@noaa.gov>
Message-ID: <32820C1D-AFCD-4B00-A115-E21FE4E2FD8A@gmail.com>

On Mar 30, 2009, at 9:21 AM, Chris Barker wrote:
> David Cournapeau wrote:
>>> nanmean should do what you want,
>
> Actually, I don't think that's what the OP wanted:
>
>>>>> a=array([2.,3,4,5])
>>>>> b=([3.,2,4, NaN])
>>>>> average(array([a,b],"f"),axis=0)
>> array([ 2.5,  2.5,  4. ,  NaN], dtype=float32)
>>>>>
>> I mean that I need is [2.5,2.5,4,5], instead.
>
> In this case, numpy really is doing the only correct thing, in an
> operation like this, the result should be the same size as the input
> arrays, and the average of NaN and anything else can only be NaN.

Still, masked arrays were defined exactly for that kind of problem.

>>> c=np.ma.fix_invalid(np.array([a,b],dtype=float))
>>> c.mean(axis=0)

fix_invalid masks values that are NaNs/Infs.
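A self-contained version of that approach, for the example in this
thread (the values in the comment are what I would expect, not pasted
from a session):

import numpy as np

a = np.array([2., 3, 4, 5])
b = np.array([3., 2, 4, np.nan])
c = np.ma.fix_invalid(np.array([a, b]))  # the NaN becomes a masked entry
print c.mean(axis=0)                     # should give [2.5 2.5 4.0 5.0]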
From cournape at gmail.com Mon Mar 30 11:09:53 2009
From: cournape at gmail.com (David Cournapeau)
Date: Tue, 31 Mar 2009 00:09:53 +0900
Subject: [SciPy-user] calculate average by excluding NaN value in Array
In-Reply-To: <49D0C753.8080100@noaa.gov>
References: <595001.78520.qm@web30403.mail.mud.yahoo.com> <49D09E7F.6090400@ar.media.kyoto-u.ac.jp> <49D0A275.9020200@ar.media.kyoto-u.ac.jp> <49D0C753.8080100@noaa.gov>
Message-ID: <5b8d13220903300809m6bc5f59cya478db1d9fc58787@mail.gmail.com>

On Mon, Mar 30, 2009 at 10:21 PM, Chris Barker wrote:
> David Cournapeau wrote:
>>> nanmean should do what you want,
>
> Actually, I don't think that's what the OP wanted:

I guess the complex answer would be: use nanmean if you want to ignore
nan, and use masked arrays if you want more complete features for nan as
missing data.

David

From Chris.Barker at noaa.gov Mon Mar 30 11:40:33 2009
From: Chris.Barker at noaa.gov (Chris Barker)
Date: Mon, 30 Mar 2009 08:40:33 -0700
Subject: [SciPy-user] calculate average by excluding NaN value in Array
In-Reply-To: <32820C1D-AFCD-4B00-A115-E21FE4E2FD8A@gmail.com>
References: <595001.78520.qm@web30403.mail.mud.yahoo.com> <49D09E7F.6090400@ar.media.kyoto-u.ac.jp> <49D0A275.9020200@ar.media.kyoto-u.ac.jp> <49D0C753.8080100@noaa.gov> <32820C1D-AFCD-4B00-A115-E21FE4E2FD8A@gmail.com>
Message-ID: <49D0E7F1.4010400@noaa.gov>

Pierre GM wrote:
> On Mar 30, 2009, at 9:21 AM, Chris Barker wrote:
>> Actually, I don't think that's what the OP wanted:

my mistake -- I misread -- I thought he was trying to eliminate the
columns that had a NaN in them... oh well, hopefully I didn't simply add
to the confusion...

> Still, masked arrays were defined exactly for that kind of problem.

yes, they really are the way to go for this kind of thing.

-CHB

From jh at physics.ucf.edu Mon Mar 30 12:26:39 2009
From: jh at physics.ucf.edu (Joe Harrington)
Date: Mon, 30 Mar 2009 12:26:39 -0400
Subject: [SciPy-user] JOB: write numpy docs
Message-ID: 

Last year's Doc Marathon got us off to a great start on documenting
NumPy! But, there's still much work to be done, and SciPy after that.
It's time to gear up for doing it again. Critical to last year's success
was Stefan van der Walt's committed time, but he will be unable to play
that role this year. So, I am looking to hire someone to write NumPy
docs and help coordinate the doc project and its volunteers.

The job includes working with me, the doc team, doc volunteers, and
developers to:

  write and review a lot of docs, mainly those that others don't want to write
  help define milestones
  organize campaigns and volunteer teams to meet them
  research the NumPy and SciPy source codes to help plan:
    the eventual SciPy documentation
    the writing of a good User Manual
  work with the packaging team to meet their release deadlines
  perform other duties as assigned

I am seeking someone to work full time if possible, and at least half
time, from mid-April (or soon thereafter) through at least the
(northern) summer.

Candidates must be experienced NumPy and SciPy programmers; familiarity
under the hood is a strong plus. They must also demonstrate their
ability to produce excellent docs on the docs.SciPy.org wiki. Having
contributed at a high level to an open-source community, especially to
SciPy, is a big plus. Ability to take direction, work with and lead a
team, and to work for extended periods without direct supervision on a
list of assigned tasks are all critical.
The applicant must be able to function well in a Linux environment;
familiarity with multiple platforms is a plus.

Please reply directly to me by email only. Include the following (PDF or
ASCII formats strongly preferred):

  CV
  Statement of interest, qualifications per requirements above,
  availability, and wage expectations.
  Contact info for at least 3 professional references.
  Links to doc wiki pages for which you wrote the initial draft
  Links to doc wiki pages started by others to which you contributed
  significantly (edited, reviewed, proofed)

The position is open until filled; candidates with complete applications
by April 15 will receive full consideration. This is an open posting.
Candidates who have not written any pages on the doc wiki yet have
several weeks in which to do so. Pay will be commensurate with experience
(up to a point). Relocation is not necessary. Candidates will need to
provide their own computer and internet access.

The University of Central Florida is an equal opportunity, equal access,
affirmative action employer.

--jh--
Prof. Joseph Harrington
Department of Physics
MAP 414
4000 Central Florida Blvd.
University of Central Florida
Orlando, FL 32816-2385
(407) 823-3416 voice
(407) 823-5112 fax
(407) 823-2325 physics office
jh at physics.ucf.edu

From Federico.Sacerdoti at DEShawResearch.com Mon Mar 30 15:30:50 2009
From: Federico.Sacerdoti at DEShawResearch.com (Sacerdoti, Federico)
Date: Mon, 30 Mar 2009 15:30:50 -0400
Subject: [SciPy-user] [fc]blas.sasum tests fail
Message-ID: <20DD12E793BCDF4183B21F6350BF7DE20874DE@maildrd1.nyc.deshaw.com>

Hi,

I have a problem with my scipy/0.7.0 failing its built-in tests. It was
built against numpy/1.3.0rc1 and acml/3.6.1-13,
blas/CBLAS/netlib-20030223-15, and fftw/3.1.2-07/lib.

I have narrowed it down to the following, it seems some single-precision
tests are failing:

>>> from scipy.lib.blas import cblas
>>> from numpy.testing import *
>>> from scipy.lib.blas import fblas
>>> fblas.sasum([3,-4,5])  # Expected value = 12
0.0
>>> fblas.dasum([3,-4,5])
12.0
>>> fblas.sdot([3,-4,5],[2,5,1])  # Expected value = -9
0.0
>>> fblas.ddot([3,-4,5],[2,5,1])
-9.0
>>>

Has anyone else seen this or can give any advice?

Thanks,
Federico

From dave.hirschfeld at gmail.com Tue Mar 31 04:28:14 2009
From: dave.hirschfeld at gmail.com (Dave Hirschfeld)
Date: Tue, 31 Mar 2009 08:28:14 +0000 (UTC)
Subject: [SciPy-user] =?utf-8?q?cdf_and_integration_for_multivariate_norma?= =?utf-8?q?l=09distribution_in_stats=2Ekde?=
References: <1cd32cbb0901101829w3890212cwa18694d958b6700b@mail.gmail.com> <3d375d730901101835u4e644898n50424f436b545775@mail.gmail.com> <1cd32cbb0901101853q4696aeb4l7e21c10aee1c92ee@mail.gmail.com> <1cd32cbb0903200755o3653ed82v2b9040fcd40288f1@mail.gmail.com>
Message-ID: 

josef.pktd at gmail.com writes:

>
> On Fri, Mar 20, 2009 at 9:34 AM, Dave wrote:
> > I'm currently using the mvncdf.py code you posted to the list but unless I'm
> > mistaken it hasn't made the cut for 0.7 and there's no enhancement ticket for
> > it in trac. It's a very useful function for me (and others I'm sure) so it
> > would be a shame for it to just be forgotten as an list attachment. Should I
> > file an enhancement ticket?
> >
>
> I'm glad you find it useful.
>
> I had added the ticket: http://projects.scipy.org/scipy/ticket/846
> but there was a typo in the title/summary so searching for
> "multivariate" didn't find it.
>
> Since you seem to be the first user of it, can you provide any
> comments on the API. Are the functions useful in the way they are, or
> what changes would you recommend? You could add some comments to the
> trac ticket.
>
> Thanks,
>
> Josef
>

Sorry for the delayed reply - it's been pretty hectic and I was hoping to
use/test the code a bit more beforehand.

As it is I'm just using the mvnormcdf function in a Gaussian copula to
simulate correlated random variables with arbitrary marginal
distributions. The function seems to be working fine for me but as
mentioned I've yet to test the code very much. Specifying a lower bound
for a CDF initially struck me as a little strange but -Inf works fine and
it gives other users more flexibility.

Thanks for the code, I'm glad it's got a ticket attached. I'll post any
suggestions I may have (after further testing) on the trac ticket.

-Dave

From berthe.loic at gmail.com Tue Mar 31 04:48:06 2009
From: berthe.loic at gmail.com (=?ISO-8859-1?Q?Lo=EFc_BERTHE?=)
Date: Tue, 31 Mar 2009 10:48:06 +0200
Subject: [SciPy-user] How to fit a surface from a list of measured 3D points ?
Message-ID: 

Hi,

I have a list of 856 measured 3d points and would like to fit a 3D
surface from these points. These points are not regularly spaced.

Here is the code I used :

from numpy import *
from matplotlib.mlab import csv2rec
from matplotlib.pyplot import *

data = csv2rec('data.csv', delimiter=';')

figure(1)
scatter(data.x, data.y, s=data.z, c=data.z)
colorbar()
title('data points : z=f(x,y)')
xlabel('x')
ylabel('y')
grid()

# interpolation spline with scipy
from scipy import interpolate

tck0 = interpolate.bisplrep(data.x, data.y, data.z)
xnew,ynew = mgrid[-1:1:70j,-1:1:70j]
znew = interpolate.bisplev(xnew[:,0],ynew[0,:],tck0)
figure()
pcolor(xnew,ynew,znew)
colorbar()
title("Interpolated z=f(xnew,ynew)")
show()

I've attached the two figures describing the data and the fit which is
not very interesting.
Is there a better approach to fit these data?
I had a look at the ndimage module and the map_coordinates function
but I don't know if this is a better tool for this problem.
Have you an example of the ndimage use?

Regards,
-------------- next part --------------
A non-text attachment was scrubbed...
Name: small_data.jpg
Type: image/jpeg
Size: 38904 bytes
Desc: not available
URL: 
-------------- next part --------------
A non-text attachment was scrubbed...
Name: small_fit.jpg
Type: image/jpeg
Size: 25113 bytes
Desc: not available
URL: 

From Jean-Paul.JADAUD at CEA.FR Tue Mar 31 09:11:19 2009
From: Jean-Paul.JADAUD at CEA.FR (Jean-Paul.JADAUD at CEA.FR)
Date: Tue, 31 Mar 2009 15:11:19 +0200
Subject: [SciPy-user] Loopless square 2d arrays with radially symmetric functions
In-Reply-To: <79387210903250758y52fe548ama479cae6ab58c753@mail.gmail.com>
References: <79387210903250758y52fe548ama479cae6ab58c753@mail.gmail.com>
Message-ID: <5D20F4AE6EF4C44B8932FE91E08AF0E99D1CEB@U-SANTORIN.dif.dam.intra.cea.fr>

You may try something like

import scipy as S
x = S.linspace(-128,128,256)
y = S.linspace(-128,128, 256)
mx,my = S.meshgrid(x,y)
r = S.sqrt(mx**2 + my**2)
p = S.exp(-0.01*r)

JP

________________________________

From: scipy-user-bounces at scipy.org [mailto:scipy-user-bounces at scipy.org] On behalf of David Vine
Sent: Wednesday 25 March 2009 15:58
To: scipy-user at scipy.org
Subject: [SciPy-user] Loopless square 2d arrays with radially symmetric functions

Hello,

I use a lot of radially symmetric arrays in my code and I would like to
know if there is a more efficient method for creating these arrays than
the nested for-loops I am currently using.
For example, to create a Gaussian I would currently do this (Python 2.5,
Ubuntu Intrepid):

import scipy
p = scipy.zeros((256,256))
for i in xrange(256):
    for j in xrange(256):
        p[i,j] = scipy.exp(-0.01*( (i-128.)**2. + (j-128.)**2.) )

and my question is whether there is a more efficient 'loopless' method
(i.e. involving no for-loops)?

Thanks in advance
David
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From josef.pktd at gmail.com Tue Mar 31 11:04:12 2009
From: josef.pktd at gmail.com (josef.pktd at gmail.com)
Date: Tue, 31 Mar 2009 11:04:12 -0400
Subject: [SciPy-user] cdf and integration for multivariate normal distribution in stats.kde
In-Reply-To: 
References: <1cd32cbb0901101829w3890212cwa18694d958b6700b@mail.gmail.com> <3d375d730901101835u4e644898n50424f436b545775@mail.gmail.com> <1cd32cbb0901101853q4696aeb4l7e21c10aee1c92ee@mail.gmail.com> <1cd32cbb0903200755o3653ed82v2b9040fcd40288f1@mail.gmail.com>
Message-ID: <1cd32cbb0903310804v718f2be3jc88571fdee945b8f@mail.gmail.com>

On Tue, Mar 31, 2009 at 4:28 AM, Dave Hirschfeld wrote:
> josef.pktd at gmail.com writes:
>
>>
>> On Fri, Mar 20, 2009 at 9:34 AM, Dave wrote:
>> > I'm currently using the mvncdf.py code you posted to the list but unless I'm
>> > mistaken it hasn't made the cut for 0.7 and there's no enhancement ticket for
>> > it in trac. It's a very useful function for me (and others I'm sure) so it
>> > would be a shame for it to just be forgotten as an list attachment. Should I
>> > file an enhancement ticket?
>> >
>>
>> I'm glad you find it useful.
>>
>> I had added the ticket: http://projects.scipy.org/scipy/ticket/846
>> but there was a typo in the title/summary so searching for
>> "multivariate" didn't find it.
>>
>> Since you seem to be the first user of it, can you provide any
>> comments on the API. Are the functions useful in the way they are, or
>> what changes would you recommend? You could add some comments to the
>> trac ticket.
>>
>> Thanks,
>>
>> Josef
>>
>
> Sorry for the delayed reply - it's been pretty hectic and I was hoping to
> use/test the code a bit more beforehand.
>
> As it is I'm just using the mvnormcdf function in a Gaussian copula to
> simulate correlated random variables with arbitrary marginal distributions.
> The function seems to be working fine for me but as mentioned I've yet to
> test the code very much. Specifying a lower bound for a CDF initially struck
> me as a little strange but -Inf works fine and it gives other users more
> flexibility.

I could convert the lower bound to a keyword argument with a default of
-inf, so that it doesn't need to be specified when using mvnormcdf.

>
> Thanks for the code, I'm glad it's got a ticket attached. I'll post any
> suggestions I may have (after further testing) on the trac ticket.

I added docstrings to the two functions and attached the file to the
ticket. I didn't change the functions themselves and I still don't have
proper tests. I also added an example for the use of the keywords to
control the numerical precision for the integration.

A question on copulas got me initially started to look for the
multivariate normal cdf. It would be nice to also have the cdf of the
multivariate t distribution by Alan Genz, the author of mvndst.

Do you have a simple example for the normal copula that you are willing
to share for the cookbook?
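Roughly, the kind of minimal sketch I have in mind (untested, with the
correlation and the marginal distributions picked arbitrarily) would be:

import numpy as np
from scipy import stats

rho = 0.7
cov = np.array([[1.0, rho], [rho, 1.0]])
L = np.linalg.cholesky(cov)
z = np.dot(L, stats.norm.rvs(size=(2, 1000)))  # correlated standard normals
u = stats.norm.cdf(z)                          # uniform margins, Gaussian copula dependence
x = stats.expon.ppf(u[0], scale=2.0)           # map to arbitrary marginal distributions
y = stats.gamma.ppf(u[1], 3.0)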
Thanks for the comments,

Josef

From zachary.pincus at yale.edu Tue Mar 31 12:29:23 2009
From: zachary.pincus at yale.edu (Zachary Pincus)
Date: Tue, 31 Mar 2009 12:29:23 -0400
Subject: [SciPy-user] How to fit a surface from a list of measured 3D points ?
In-Reply-To: 
References: 
Message-ID: 

Hi Lo?c,

What's your eventual goal for the fit surface? There are a lot of
possible approaches possible with the tools in scipy (e.g.
fit a spline, as you have done, or fit a simpler parametric surface in a
least-squares fashion, or go for a nonparametric approach and interpolate
between the points with radial basis functions). The best approach will
likely depend on what you want to do with the surface... but what you
have seems like a reasonable start.

Also, ndimage.map_coordinates is for resampling of an array (e.g. image)
at arbitrary coordinates: perhaps not of direct use here.

Zach

On Mar 31, 2009, at 4:48 AM, Lo?c BERTHE wrote:
> Hi,
>
> I have a list of 856 measured 3d points and would like to fit a 3D
> surface from these points. These points are not regularly spaced.
>
> Here is the code I used :
>
> from numpy import *
> from matplotlib.mlab import csv2rec
> from matplotlib.pyplot import *
>
> data = csv2rec('data.csv', delimiter=';')
>
> figure(1)
> scatter(data.x, data.y, s=data.z, c=data.z)
> colorbar()
> title('data points : z=f(x,y)')
> xlabel('x')
> ylabel('y')
> grid()
>
> # interpolation spline with scipy
> from scipy import interpolate
>
> tck0 = interpolate.bisplrep(data.x, data.y, data.z)
> xnew,ynew = mgrid[-1:1:70j,-1:1:70j]
> znew = interpolate.bisplev(xnew[:,0],ynew[0,:],tck0)
> figure()
> pcolor(xnew,ynew,znew)
> colorbar()
> title("Interpolated z=f(xnew,ynew)")
> show()
>
> I've attached the two figures describing the data and the fit which is
> not very interesting.
> Is there a better approach to fit these data?
> I had a look at the ndimage module and the map_coordinates function
> but I don't know if this is a better tool for this problem.
> Have you an example of the ndimage use?
>
> Regards,
> <small_data.jpg>
> _______________________________________________
> SciPy-user mailing list
> SciPy-user at scipy.org
> http://mail.scipy.org/mailman/listinfo/scipy-user

From berthe.loic at gmail.com Tue Mar 31 14:34:02 2009
From: berthe.loic at gmail.com (LB)
Date: Tue, 31 Mar 2009 11:34:02 -0700 (PDT)
Subject: [SciPy-user] How to fit a surface from a list of measured 3D points ?
In-Reply-To: 
References: 
Message-ID: 

I would like to estimate some geometric characteristics of this surface
and view the variation of the radius of curvature.

> What's your eventual goal for the fit surface? There are a lot of
> possible approaches possible with the tools in scipy (e.g. fit a
> spline, as you have done, or fit a simpler parametric surface in a
> least-squares fashion, or go for a nonparametric approach and
> interpolate between the points with radial basis functions). The best
> approach will likely depend on what you want to do with the surface...
> but what you have seems like a reasonable start.

I think a 2D surface spline could do the trick, but I think I didn't fit
it well:
- the estimated z can go far below the range of my inputs ([48-62])
- as you can see on small_data.jpg, the surface is really 3D: there is a
  kind of crease in my surface, and there are two z values for
  (x,y) ~ (2,2).

So I'm not sure if this is the right approach.

> Also, ndimage.map_coordinates is for resampling of an array (e.g.
> image) at arbitrary coordinates: perhaps not of direct use here.

Ok
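For reference, a minimal sketch of the radial-basis-function route Zachary
mentioned, reusing the x/y/z columns read with csv2rec above (Rbf lives in
scipy.interpolate in scipy 0.7; the kernel and smoothing value here are
only guesses, and like the spline this still assumes a single z for each
(x,y), so it will not capture the crease):

from matplotlib.mlab import csv2rec
from numpy import mgrid
from scipy.interpolate import Rbf

data = csv2rec('data.csv', delimiter=';')
# thin-plate-spline radial basis functions with a little smoothing
rbf = Rbf(data.x, data.y, data.z, function='thin_plate', smooth=0.1)
xnew, ynew = mgrid[-1:1:70j, -1:1:70j]
znew = rbf(xnew, ynew)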