From eric.moore2 at nih.gov Tue Jul 1 14:06:32 2014 From: eric.moore2 at nih.gov (Moore, Eric (NIH/NIDDK) [F]) Date: Tue, 1 Jul 2014 18:06:32 +0000 Subject: [SciPy-User] Optimizing odeint without a for loop In-Reply-To: References: <649847CE7F259144A0FD99AC64E7326D0E27A3@MLBXV17.nih.gov> Message-ID: <649847CE7F259144A0FD99AC64E7326D0EF5AC@MLBXV17.nih.gov> > -----Original Message----- > From: Barrett B [mailto:barrett.n.b at gmail.com] > Sent: Sunday, June 29, 2014 3:46 AM > To: scipy-user at scipy.org > Subject: Re: [SciPy-User] Optimizing odeint without a for loop > > Moore, Eric (NIH/NIDDK) [F] nih.gov> writes: > > > > > Untested: > > > > def f(X, t): > > N = len(X)/2 > > E = X[:N] > > n = X[N:2*N] > > n_inf_E = 1/(1 + np.exp((E_half_n - E)/k_n)) > > m_inf_E = 1/(1 + np.exp((E_half_m - E)/k_m)) > > dV = (I - g_L*(E - E_L) - g_Na*m_inf_E*(E - E_Na) - g_K*n*(E - > E_K))/C > > dV += eps * np.dot(X[:N], connex) > > dn = (n_inf_E - n)/tau > > return np.concatenate((dV, dn)) > > > > The basic idea is to operate on full arrays at once rather than > looping > over them. See, for instance, > https://scipy-lectures.github.io/intro/numpy/operations.html > > > > Eric > > _______________________________________________ > > SciPy-User mailing list > > SciPy-User scipy.org > > http://mail.scipy.org/mailman/listinfo/scipy-user > > > > Thanks, that definitely steers me in the right direction. I made some > changes (additional code included for context): > > -------------------- > > def f(X, t): > N = len(X)/2 > E = X[:N]; n = X[N: 2*N] > n_inf_E = 1/(1 + np.exp((E_half_n - E)/k_n)) > m_inf_E = 1/(1 + np.exp((E_half_m - E)/k_m)) > dV = (I - g_L*(E - E_L) - g_Na*m_inf_E*(E - E_Na) - g_K*n*(E - > E_K))/C > dV += eps*np.dot(connex, X[:N]) > dn = (n_inf_E - n)/tau > return np.concatenate((dV, dn)) > > connex = np.matrix([[-1,1,0], > [1,-2,1], > [0,1,-1]]) #connection matrix > > t = np.arange(0, stopTime, timeStep) > V0 = np.array([0, -20, -50]) > n0 = np.array([0.2, 0.4, 0.7]); N = len(n0) > > soln = odeint(f, np.concatenate((V0, n0)), t) > > -------------- > > But the "dV +=" line (third to last in function f) gives me the > following > error repeated several times: > > Traceback (most recent call last): > File "C:/Users/Barrett/Documents/Research/Networks/HH n V.py", line > 31, in f > dV += eps*np.dot(connex, X[:N]) > ValueError: non-broadcastable output operand with shape (3) doesn't > match > the broadcast shape (1,3) > > ------------- > > I figured it might have been a problem with the matrix multiplication, > but > that wasn't it: Switching "connex" and "X[:N]" gives the exact same > error. > What's going on here? > Don't use np.matrix. In [168]: a = np.eye(3) In [169]: np.dot(a, np.ones(3)).shape Out[169]: (3L,) In [170]: b = np.matrix(np.eye(3)) In [171]: np.dot(b, np.ones(3)).shape Out[171]: (1L, 3L)
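Converting back to a plain ndarray restores the shape you expect:

In [172]: np.dot(np.asarray(b), np.ones(3)).shape
Out[172]: (3L,)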
Eric From takowl at gmail.com Wed Jul 2 15:49:26 2014 From: takowl at gmail.com (Thomas Kluyver) Date: Wed, 2 Jul 2014 12:49:26 -0700 Subject: [SciPy-User] Python 3 BoF at Scipy conference Message-ID: Fernando and I have arranged a Python 3 Birds-of-a-Feather session at Scipy next week. After the matrix multiplication PEP was accepted, this is our chance to work out if we want to propose any further changes to go into Python 3.5. We also want to talk about conventions for function annotations, and how to handle the switch in courses teaching people Python. We'll be joined by Nick Coghlan, a core Python developer who has thought and written a lot about the Python 3 transition. We've started a wiki page to collect ideas and questions for discussion - feel free to add to it: https://github.com/ipython/ipython/wiki/Sprints:-SciPy2014-Py3-BoF We're scheduled for 5.30pm on Tuesday, in the Grand Ballroom. If you're at SciPy, please come along and join the discussion. Thanks, Thomas -------------- next part -------------- An HTML attachment was scrubbed... URL: From njs at pobox.com Wed Jul 2 17:38:00 2014 From: njs at pobox.com (Nathaniel Smith) Date: Wed, 2 Jul 2014 22:38:00 +0100 Subject: [SciPy-User] Python 3 BoF at Scipy conference In-Reply-To: References: Message-ID: On Wed, Jul 2, 2014 at 8:49 PM, Thomas Kluyver wrote: > Fernando and I have arranged a Python 3 Birds-of-a-Feather session at Scipy > next week. After the matrix multiplication PEP was accepted, this is our > chance to work out if we want to propose any further changes to go into > Python 3.5. We also want to talk about conventions for function annotations, > and how to handle the switch in courses teaching people Python. We'll be > joined by Nick Coghlan, a core Python developer who has thought and written > a lot about the Python 3 transition. > > We've started a wiki page to collect ideas and questions for discussion - > feel free to add to it: > https://github.com/ipython/ipython/wiki/Sprints:-SciPy2014-Py3-BoF I won't be there, but I summarized the "3.5 todo list" that I've been keeping informally in the back of my head on the wiki page. -n -- Nathaniel J. Smith Postdoctoral researcher - Informatics - University of Edinburgh http://vorpus.org From a.mcmorland at auckland.ac.nz Fri Jul 4 00:13:47 2014 From: a.mcmorland at auckland.ac.nz (Angus McMorland) Date: Fri, 4 Jul 2014 16:13:47 +1200 Subject: [SciPy-User] 3-D edge detection Message-ID: Hi all, I'm in need of 3-D edge detection, and I haven't found anything suitable in scipy, skimage, or OpenCV, or via general web searching. Skimage and OpenCV both have implementations of the Canny algorithm, but only for 2-D images. Canny has been written in 3-D in MATLAB [1], which I've tested and works well for my images, and I'm wondering if anyone has already done the work of implementing this or an equivalent in Python.
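In the meantime I can get part of the way with plain scipy.ndimage: a smoothed 3-D gradient magnitude (untested sketch below -- it only gives an edge-strength volume, without Canny's non-maximum suppression or hysteresis steps, and the sigma value is an arbitrary choice):

import numpy as np
from scipy import ndimage

def gradient_magnitude_3d(volume, sigma=1.0):
    # Gaussian smoothing first, then a Sobel derivative along each axis
    smoothed = ndimage.gaussian_filter(volume, sigma)
    grads = [ndimage.sobel(smoothed, axis=i) for i in range(3)]
    return np.sqrt(sum(g*g for g in grads))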
Thanks for any suggestions, Angus [1] http://www.mathworks.com/matlabcentral/fileexchange/45459-canny-edge-detection-in-2-d-and-3-d -- AJC McMorland Lecturer in Computational Movement Science Sport and Exercise Science | Centre for Brain Research University of Auckland, New Zealand -------------- next part -------------- An HTML attachment was scrubbed... URL: From almar.klein at gmail.com Fri Jul 4 04:28:00 2014 From: almar.klein at gmail.com (Almar Klein) Date: Fri, 04 Jul 2014 10:28:00 +0200 Subject: [SciPy-User] ANN: Pyzo 2014a Message-ID: <53B66590.2060102@gmail.com> Hi all, We are pleased to announce release version 2014a of the Pyzo scientific Python distribution. This release comes with Python 3.4 and the latest packages from conda, and features the latest version of our IDE (IEP). The scipy stack and a few additional packages are installed by default, and additional packages can be easily added from inside the IDE. Get it at http://pyzo.org ! Happy coding, Almar From juanlu001 at gmail.com Fri Jul 4 12:17:47 2014 From: juanlu001 at gmail.com (Juan Luis Cano) Date: Fri, 04 Jul 2014 18:17:47 +0200 Subject: [SciPy-User] ANN: Pyzo 2014a In-Reply-To: <53B66590.2060102@gmail.com> References: <53B66590.2060102@gmail.com> Message-ID: <53B6D3AB.90104@gmail.com> On 07/04/2014 10:28 AM, Almar Klein wrote: > Hi all, > > We are pleased to announce release version 2014a of the Pyzo scientific > Python distribution. > > This release comes with Python 3.4 and the latest packages from conda, > and features the latest version of our IDE (IEP). The scipy stack and a > few additional packages are installed by default, and additional > packages can be easily added from inside the IDE. > > Get it at http://pyzo.org ! > > Happy coding, > Almar > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user Hi Almar, congratulations on the new release! I miss an update in the release notes, though http://www.pyzo.org/releasenotes.html I'm gonna try it and spread the word :) Cheers Juan Luis From almar.klein at gmail.com Fri Jul 4 18:29:28 2014 From: almar.klein at gmail.com (Almar Klein) Date: Sat, 05 Jul 2014 00:29:28 +0200 Subject: [SciPy-User] ANN: Pyzo 2014a In-Reply-To: <53B6D3AB.90104@gmail.com> References: <53B66590.2060102@gmail.com> <53B6D3AB.90104@gmail.com> Message-ID: <53B72AC8.3080400@gmail.com> > Hi Almar, congratulations on the new release! I miss an update in the > release notes, though > > http://www.pyzo.org/releasenotes.html > > I'm gonna try it and spread the word :) Cheers > > Juan Luis Thanks Juan, I was so happy to finally finish it that I forgot these. Website is updated. - Almar From njs at pobox.com Sat Jul 5 10:27:39 2014 From: njs at pobox.com (Nathaniel Smith) Date: Sat, 5 Jul 2014 15:27:39 +0100 Subject: [SciPy-User] undefined symbol: clange_ Message-ID: On Debian testing, py27, x86-64: sudo apt-get build-dep python-scipy . myvenv/bin/activate pip uninstall scipy pip install scipy==0.14.0 python -c 'import scipy.linalg' gives me: ImportError: /home/njs/.user-python2.7-64bit-3/local/lib/python2.7/site-packages/scipy/linalg/_fblas.so: undefined symbol: clange_ Any ideas how to build a non-broken scipy here? Rebuilding this virtualenv from scratch would be tiresome... -n -- Nathaniel J. Smith Postdoctoral researcher - Informatics - University of Edinburgh http://vorpus.org From ralf.gommers at gmail.com Sat Jul 5 10:39:36 2014 From: ralf.gommers at gmail.com (Ralf Gommers) Date: Sat, 5 Jul 2014 16:39:36 +0200 Subject: [SciPy-User] undefined symbol: clange_ In-Reply-To: References: Message-ID: On Sat, Jul 5, 2014 at 4:27 PM, Nathaniel Smith wrote: > On Debian testing, py27, x86-64: > > sudo apt-get build-dep python-scipy > . myvenv/bin/activate > pip uninstall scipy > pip install scipy==0.14.0 > python -c 'import scipy.linalg' > > gives me: > > ImportError: > /home/njs/.user-python2.7-64bit-3/local/lib/python2.7/site-packages/scipy/linalg/_fblas.so: > undefined symbol: clange_ > > Any ideas how to build a non-broken scipy here? Rebuilding this > virtualenv from scratch would be tiresome... > Never seen this one before. Does pip leave something behind after uninstall? Any idea if it's specific to this version of Debian or your set of compilers? Ralf > -n > > -- > Nathaniel J.
Smith > Postdoctoral researcher - Informatics - University of Edinburgh > http://vorpus.org > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user > -------------- next part -------------- An HTML attachment was scrubbed... URL: From njs at pobox.com Sat Jul 5 15:47:05 2014 From: njs at pobox.com (Nathaniel Smith) Date: Sat, 5 Jul 2014 20:47:05 +0100 Subject: [SciPy-User] undefined symbol: clange_ In-Reply-To: References: Message-ID: On Sat, Jul 5, 2014 at 3:39 PM, Ralf Gommers wrote: > > On Sat, Jul 5, 2014 at 4:27 PM, Nathaniel Smith wrote: >> >> On Debian testing, py27, x86-64: >> >> sudo apt-get build-dep python-scipy >> . myvenv/bin/activate >> pip uninstall scipy >> pip install scipy==0.14.0 >> python -c 'import scipy.linalg' >> >> gives me: >> >> ImportError: >> /home/njs/.user-python2.7-64bit-3/local/lib/python2.7/site-packages/scipy/linalg/_fblas.so: >> undefined symbol: clange_ >> >> Any ideas how to build a non-broken scipy here? Rebuilding this >> virtualenv from scratch would be tiresome... > > Never seen this one before. Does pip leave something behind after uninstall? > Any idea if it's specific to this version of Debian or your set of > compilers? After a lucky guess, I've determined that the problem had something to do with the BLAS installation -- I had both openblas and atlas installed, with Debian's "alternatives" set to default to openblas. I uninstalled openblas (apt-get remove libopenblas-base) and rebuilt, and now everything works. So I don't know exactly what the problem was, but it looks like some sort of nasty interaction between which library was being built against versus what library was being used at runtime, or something along those lines. -n -- Nathaniel J. Smith Postdoctoral researcher - Informatics - University of Edinburgh http://vorpus.org From barrett.n.b at gmail.com Sat Jul 5 23:02:45 2014 From: barrett.n.b at gmail.com (Barrett B) Date: Sat, 5 Jul 2014 23:02:45 -0400 Subject: [SciPy-User] Heaviside function in vector form Message-ID: I am trying to code the Heaviside function in vector form. There are several different versions of this, but essentially, f(x) = 1 whenever x >= 0, and f(x) = 0 whenever x < 0. Here is what I have so far (keep in mind, x is a vector here): ========= def Heaviside(x): mult = -100.0; diff = mult*(x - Theta_syn); # print x if abs(diff) < 50: #if x is close to Theta_syn return 1.0/(1 + np.exp(diff)) if x < Theta_syn: return 0 return 1 #otherwise # return 1.0/(1 + np.exp(diff)) ========= which produces the following error: ValueError: The truth value of an array with more than one element is ambiguous. Use a.any() or a.all() BTW, if I were to comment out every line except the first and last, it runs fine. See what I'm trying to do? Basically, I want to check whether current value of diff is not very large. But I don't know how to do this without rewriting the code to include for loops, which is what I've been trying to get away from all along? -------------- next part -------------- An HTML attachment was scrubbed... URL: From msarahan at gmail.com Sat Jul 5 23:15:02 2014 From: msarahan at gmail.com (Michael Sarahan) Date: Sat, 5 Jul 2014 20:15:02 -0700 Subject: [SciPy-User] Heaviside function in vector form In-Reply-To: References: Message-ID: when you say if x < Theta_syn: you are trying to figure out the truth value of a vector. That's what Numpy is telling you. 
It doesn't make sense - only the truth value of a single value makes sense for an if statement. Instead, you probably want a sum or something similar. HTH. Mike On Sat, Jul 5, 2014 at 8:02 PM, Barrett B wrote: > I am trying to code the Heaviside function in vector form. There are > several different versions of this, but essentially, f(x) = 1 whenever x >= > 0, and f(x) = 0 whenever x < 0. Here is what I have so far (keep in mind, x > is a vector here): > > ========= > > def Heaviside(x): > mult = -100.0; diff = mult*(x - Theta_syn); > # print x > if abs(diff) < 50: #if x is close to Theta_syn > return 1.0/(1 + np.exp(diff)) > if x < Theta_syn: > return 0 > return 1 #otherwise > # return 1.0/(1 + np.exp(diff)) > > ========= > > which produces the following error: > > ValueError: The truth value of an array with more than one element is > ambiguous. Use a.any() or a.all() > > BTW, if I were to comment out every line except the first and last, it > runs fine. > > See what I'm trying to do? Basically, I want to check whether current > value of diff is not very large. But I don't know how to do this without > rewriting the code to include for loops, which is what I've been trying to > get away from all along? > > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From msarahan at gmail.com Sat Jul 5 23:26:58 2014 From: msarahan at gmail.com (Michael Sarahan) Date: Sat, 5 Jul 2014 20:26:58 -0700 Subject: [SciPy-User] Heaviside function in vector form In-Reply-To: References: Message-ID: PS: if abs(diff) < 50: #if x is close to Theta_syn will also fail with this error if x is a vector. The error message suggests any() or all(), which will return a single value representing if any or all of the values are true (respectively) - you might also consider min() or max() as sort of threshold settings, or a sum or mean of the diff vector (it will be a vector if x is a vector). On Sat, Jul 5, 2014 at 8:15 PM, Michael Sarahan wrote: > when you say > > if x < Theta_syn: > > you are trying to figure out the truth value of a vector. That's what > Numpy is telling you. It doesn't make sense - only the truth value of a > single value makes sense for an if statement. Instead, you probably want a > sum or something similar. > > HTH. > Mike > > > On Sat, Jul 5, 2014 at 8:02 PM, Barrett B wrote: > >> I am trying to code the Heaviside function in vector form. There are >> several different versions of this, but essentially, f(x) = 1 whenever x >= >> 0, and f(x) = 0 whenever x < 0. Here is what I have so far (keep in mind, x >> is a vector here): >> >> ========= >> >> def Heaviside(x): >> mult = -100.0; diff = mult*(x - Theta_syn); >> # print x >> if abs(diff) < 50: #if x is close to Theta_syn >> return 1.0/(1 + np.exp(diff)) >> if x < Theta_syn: >> return 0 >> return 1 #otherwise >> # return 1.0/(1 + np.exp(diff)) >> >> ========= >> >> which produces the following error: >> >> ValueError: The truth value of an array with more than one element is >> ambiguous. Use a.any() or a.all() >> >> BTW, if I were to comment out every line except the first and last, it >> runs fine. >> >> See what I'm trying to do? Basically, I want to check whether current >> value of diff is not very large. But I don't know how to do this without >> rewriting the code to include for loops, which is what I've been trying to >> get away from all along? 
>> >> _______________________________________________ >> SciPy-User mailing list >> SciPy-User at scipy.org >> http://mail.scipy.org/mailman/listinfo/scipy-user >> >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From barrett.n.b at gmail.com Sat Jul 5 23:37:12 2014 From: barrett.n.b at gmail.com (Barrett B) Date: Sun, 6 Jul 2014 03:37:12 +0000 (UTC) Subject: [SciPy-User] Heaviside function in vector form References: Message-ID: Michael Sarahan gmail.com> writes: > > > PS: if abs(diff) < 50: #if x is close to Theta_syn will also fail with this error if x is a vector. The error message suggests any() or all(), which will return a single value representing if any or all of the values are true (respectively) - you might also consider min() or max() as sort of threshold settings, or a sum or mean of the diff vector (it will be a vector if x is a vector). > > > On Sat, Jul 5, 2014 at 8:15 PM, Michael Sarahan gmail.com> wrote: > when you say if x < Theta_syn: you are trying to figure out the truth value of a vector. That's what Numpy is telling you. It doesn't make sense - only the truth value of a single value makes sense for an if statement. Instead, you probably want a sum or something similar. HTH. Mike > Yeah, I discovered that. For context, here's the function that's calling Heaviside(x): ===== def f(X, t): N = len(X)/3 V = X[:N]; n = X[N:2*N]; S = X[2*N:3*N] #Equations m_inf_E = 1/(1 + np.exp((E_half_m - V)/k_m)) n_inf_E = 1/(1 + np.exp((E_half_n - V)/k_n)) S_inf_E = 1/(1 + np.exp((E_half_S - V)/k_S)) I_Ca = g_Ca * m_inf_E * (V - E_Ca) I_K = g_K * n * (V - E_K) I_S = g_S * S * (V - E_S) dV = (-I_Ca - I_K - I_S)/tau dV += g_inh*(E_inh - V)*Heaviside(np.dot(V, connex))/tau dn = (n_inf_E - n)/tau dS = (S_inf_E - S)/tau_S return np.concatenate((dV, dn, dS)) ===== (plz pretend not to notice the two lines to establish dV, lol) The problem is that when x << Theta_syn in the Heaviside function, np.exp(diff) is astronomically large. My whole thought process was to say that on any given run, if x is nowhere close to Theta_syn, then Heaviside(x) should just return 1 or 0 depending on which side we're talking about. Simple enough, but the trick becomes: (1) How to do this when x is a vector; (2) How to avoid using a for loop, which would slow things down. That is where I am stuck. From msarahan at gmail.com Sat Jul 5 23:42:28 2014 From: msarahan at gmail.com (Michael Sarahan) Date: Sat, 5 Jul 2014 20:42:28 -0700 Subject: [SciPy-User] Heaviside function in vector form In-Reply-To: References: Message-ID: I think I'd do this with indexing: def Heaviside(x): mult = -100.0; diff = mult*(x - Theta_syn); output = np.ones_like(x) output[x < Theta_syn] = 0 output[abs(diff < 50)] = 1.0/(1 + np.exp(diff[abs(diff < 50)])) return output On Sat, Jul 5, 2014 at 8:26 PM, Michael Sarahan wrote: > PS: > > > if abs(diff) < 50: #if x is close to Theta_syn > > will also fail with this error if x is a vector. > > The error message suggests any() or all(), which will return a single > value representing if any or all of the values are true (respectively) - > you might also consider min() or max() as sort of threshold settings, or a > sum or mean of the diff vector (it will be a vector if x is a vector). > > > On Sat, Jul 5, 2014 at 8:15 PM, Michael Sarahan > wrote: > >> when you say >> >> if x < Theta_syn: >> >> you are trying to figure out the truth value of a vector. That's what >> Numpy is telling you. It doesn't make sense - only the truth value of a >> single value makes sense for an if statement. Instead, you probably want a >> sum or something similar. >> >> HTH. >> Mike >> >> >> On Sat, Jul 5, 2014 at 8:02 PM, Barrett B wrote: >> >>> I am trying to code the Heaviside function in vector form. There are >>> several different versions of this, but essentially, f(x) = 1 whenever x >= >>> 0, and f(x) = 0 whenever x < 0. Here is what I have so far (keep in mind, x >>> is a vector here): >>> >>> ========= >>> >>> def Heaviside(x): >>> mult = -100.0; diff = mult*(x - Theta_syn); >>> # print x >>> if abs(diff) < 50: #if x is close to Theta_syn >>> return 1.0/(1 + np.exp(diff)) >>> if x < Theta_syn: >>> return 0 >>> return 1 #otherwise >>> # return 1.0/(1 + np.exp(diff)) >>> >>> ========= >>> >>> which produces the following error: >>> >>> ValueError: The truth value of an array with more than one element is >>> ambiguous. Use a.any() or a.all() >>> >>> BTW, if I were to comment out every line except the first and last, it >>> runs fine. >>> >>> See what I'm trying to do? Basically, I want to check whether current >>> value of diff is not very large. But I don't know how to do this without >>> rewriting the code to include for loops, which is what I've been trying to >>> get away from all along? >>> >>> _______________________________________________ >>> SciPy-User mailing list >>> SciPy-User at scipy.org >>> http://mail.scipy.org/mailman/listinfo/scipy-user >>> >>> >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From msarahan at gmail.com Sat Jul 5 23:44:24 2014 From: msarahan at gmail.com (Michael Sarahan) Date: Sat, 5 Jul 2014 20:44:24 -0700 Subject: [SciPy-User] Heaviside function in vector form In-Reply-To: References: Message-ID: oops, typos (in abs(diff parenthesis), beware. def Heaviside(x): mult = -100.0; diff = mult*(x - Theta_syn); output = np.ones_like(x) output[x < Theta_syn] = 0 output[abs(diff) < 50] = 1.0/(1 + np.exp(diff[abs(diff) < 50])) return output On Sat, Jul 5, 2014 at 8:42 PM, Michael Sarahan wrote: > I think I'd do this with indexing: > > > def Heaviside(x): > mult = -100.0; diff = mult*(x - Theta_syn); > output = np.ones_like(x) > output[x < Theta_syn] = 0 > output[abs(diff < 50)] = 1.0/(1 + np.exp(diff[abs(diff < 50)])) > return output > > > On Sat, Jul 5, 2014 at 8:26 PM, Michael Sarahan > wrote: > >> PS: >> >> >> if abs(diff) < 50: #if x is close to Theta_syn >> >> will also fail with this error if x is a vector. >> >> The error message suggests any() or all(), which will return a single >> value representing if any or all of the values are true (respectively) - >> you might also consider min() or max() as sort of threshold settings, or a >> sum or mean of the diff vector (it will be a vector if x is a vector). >> >> >> On Sat, Jul 5, 2014 at 8:15 PM, Michael Sarahan >> wrote: >> >>> when you say >>> >>> if x < Theta_syn: >>> >>> you are trying to figure out the truth value of a vector. That's what >>> Numpy is telling you. It doesn't make sense - only the truth value of a >>> single value makes sense for an if statement. Instead, you probably want a >>> sum or something similar. >>> >>> HTH. >>> Mike >>> >>> >>> On Sat, Jul 5, 2014 at 8:02 PM, Barrett B wrote: >>> >>>> I am trying to code the Heaviside function in vector form. There are >>>> several different versions of this, but essentially, f(x) = 1 whenever x >= >>>> 0, and f(x) = 0 whenever x < 0. Here is what I have so far (keep in mind, x >>>> is a vector here): >>>> >>>> ========= >>>> >>>> def Heaviside(x): >>>> mult = -100.0; diff = mult*(x - Theta_syn); >>>> # print x >>>> if abs(diff) < 50: #if x is close to Theta_syn >>>> return 1.0/(1 + np.exp(diff)) >>>> if x < Theta_syn: >>>> return 0 >>>> return 1 #otherwise >>>> # return 1.0/(1 + np.exp(diff)) >>>> >>>> ========= >>>> >>>> which produces the following error: >>>> >>>> ValueError: The truth value of an array with more than one element is >>>> ambiguous. Use a.any() or a.all() >>>> >>>> BTW, if I were to comment out every line except the first and last, it >>>> runs fine. >>>> >>>> See what I'm trying to do? Basically, I want to check whether current >>>> value of diff is not very large. But I don't know how to do this without >>>> rewriting the code to include for loops, which is what I've been trying to >>>> get away from all along? >>>> >>>> _______________________________________________ >>>> SciPy-User mailing list >>>> SciPy-User at scipy.org >>>> http://mail.scipy.org/mailman/listinfo/scipy-user >>>> >>>> >>> >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From barrett.n.b at gmail.com Sun Jul 6 11:26:22 2014 From: barrett.n.b at gmail.com (Barrett B) Date: Sun, 6 Jul 2014 15:26:22 +0000 (UTC) Subject: [SciPy-User] Heaviside function in vector form References: Message-ID: Michael Sarahan gmail.com> writes: > > > oops, typos (in abs(diff parenthesis), beware. def Heaviside(x): mult = -100.0; diff = mult*(x - Theta_syn); output = np.ones_like(x) output[x < Theta_syn] = 0 output[abs(diff) < 50] = 1.0/(1 + np.exp(diff[abs(diff) < 50])) > > > return output > That got it working. Thanks! From djpine at gmail.com Sun Jul 6 13:44:48 2014 From: djpine at gmail.com (David J Pine) Date: Sun, 6 Jul 2014 10:44:48 -0700 Subject: [SciPy-User] Heaviside function in vector form In-Reply-To: References: Message-ID: Use NumPy's "where" function. np.where(x<0., 0., 1.) On Saturday, July 5, 2014, Barrett B wrote: > I am trying to code the Heaviside function in vector form. There are > several different versions of this, but essentially, f(x) = 1 whenever x >= > 0, and f(x) = 0 whenever x < 0. Here is what I have so far (keep in mind, x > is a vector here): > > ========= > > def Heaviside(x): > mult = -100.0; diff = mult*(x - Theta_syn); > # print x > if abs(diff) < 50: #if x is close to Theta_syn > return 1.0/(1 + np.exp(diff)) > if x < Theta_syn: > return 0 > return 1 #otherwise > # return 1.0/(1 + np.exp(diff)) > > ========= > > which produces the following error: > > ValueError: The truth value of an array with more than one element is > ambiguous. Use a.any() or a.all() > > BTW, if I were to comment out every line except the first and last, it > runs fine. > > See what I'm trying to do? Basically, I want to check whether current > value of diff is not very large. But I don't know how to do this without > rewriting the code to include for loops, which is what I've been trying to > get away from all along? > -------------- next part -------------- An HTML attachment was scrubbed... URL: From lists at juliensalort.org Tue Jul 8 09:24:24 2014 From: lists at juliensalort.org (Julien Salort) Date: Tue, 8 Jul 2014 15:24:24 +0200 Subject: [SciPy-User] Scipy.io.netcdf: does nc.close release memory?
Message-ID: <1loh116.1p1bepe1e1bxj4N%lists@juliensalort.org> Hello, Consider the following simple code snippet: ---8<------8<------8<------8<------8<------8<------8<------8<------ import numpy as np from scipy.io.netcdf import netcdf_file ncfile = 'bos-00008.nc' nc = netcdf_file(ncfile) grid_X = np.reshape(nc.variables['vec2_patch_X'].data, (ny, nx)) grid_Y = np.reshape(nc.variables['vec2_patch_Y'].data, (ny, nx)) grid_U = np.reshape(nc.variables['vec2_patch_U'].data, (ny, nx)) grid_V = np.reshape(nc.variables['vec2_patch_V'].data, (ny, nx)) nc.close() ---8<------8<------8<------8<------8<------8<------8<------8<------ It is not clear to me whether or not it is safe to use grid_X, grid_Y, grid_U and grid_V beyond the nc.close(). Does nc.close() release any memory? I am asking because I have one system where I get a segmentation fault when I run the following code, *after* nc.close(). If I remove the nc.close() then I get the proper plot. ---8<------8<------8<------8<------8<------8<------8<------8<------ grid_Module = np.sqrt(grid_U**2 + grid_V**2) plt.streamplot(grid_X, grid_Y, grid_U, grid_V, density=2, color=grid_Module) ---8<------8<------8<------8<------8<------8<------8<------8<------ System where I get the segmentation fault: Mac OS X 10.9.4 running MacPorts Python 2.7.8 with Scipy 0.14.0. No such problem with this system: Debian 7 running Python 2.7.3 with Scipy 0.10.1 So, I wonder if I have a problem with my Mac OS X system, or if it was unsafe to call nc.close anyway. Thanks for any pointers, Julien -- http://www.juliensalort.org From barrett.n.b at gmail.com Wed Jul 9 00:16:58 2014 From: barrett.n.b at gmail.com (Barrett B) Date: Wed, 9 Jul 2014 00:16:58 -0400 Subject: [SciPy-User] Maximum Lyapunov exponent of my previous system Message-ID: I need some help setting up the calculation of the maximum Lyapunov exponent of the system I was describing in my previous thread, "Heaviside function in vector form." Note that I list the actual function being integrated, f(X, t), in my first follow-up post on that thread. Here is a brief discussion of the Lyapunov exponent: https://en.wikipedia.org/wiki/Lyapunov_exponent A simple Google search may come up with a better description. I tried simply introducing a fourth differential variable and starting to track it after some arbitrary time t_crit, but because of the nature of the integrator, that doesn't work. So I need help trying to calculate the MLE. -------------- next part -------------- An HTML attachment was scrubbed... URL: From rob at gulon.co.uk Wed Jul 9 11:40:31 2014 From: rob at gulon.co.uk (Robert Kent) Date: Wed, 09 Jul 2014 16:40:31 +0100 Subject: [SciPy-User] Transforming an Image for a Curved Surface Message-ID: Hi All, I was wondering if it were possible to use SciPy to manipulate an image such that when wrapped around a cylindrical object it would appear 'flat'. The application is a QR code that is to be wrapped around a tube of relatively small diameter, which makes it impossible for reader applications to decode the url it contains. I am auto-generating the QR codes in python and adding them to a PDF such that they can be printed on labels for direct application. I understand the basic maths of how to do this: x/R = sin(s/R), where x is the orthographic coordinate, s is the 'distorted' coordinate and R is the radius of the cylinder. I'm just not sure how to apply it to an image in Python. I would greatly appreciate any help, suggestions or examples (SciPy or otherwise).
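To make that concrete, this is the kind of thing I imagine, based on the formula above -- an untested sketch, where R is the tube radius in pixel units and the label is assumed to be centred and narrower than half the circumference:

import numpy as np
from scipy import ndimage

def predistort(img, R):
    # img: 2-D greyscale array; R: cylinder radius in pixels
    rows, cols = img.shape
    c = cols/2.0  # distort symmetrically about the vertical centre line

    def mapping(out_coords):
        i, s = out_coords
        # s is the printed (arc-length) coordinate; sample the source
        # image at the orthographic coordinate x = R*sin(s/R)
        return (i, R*np.sin((s - c)/R) + c)

    return ndimage.geometric_transform(img, mapping, mode='nearest')

The idea would be that geometric_transform maps each printed pixel s back to the source pixel x = R*sin(s/R), so the printed image is stretched towards the edges and looks uniform when viewed straight on.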
Thanks, Rob -------------- next part -------------- An HTML attachment was scrubbed... URL: From barrett.n.b at gmail.com Wed Jul 9 16:42:45 2014 From: barrett.n.b at gmail.com (Barrett B) Date: Wed, 9 Jul 2014 20:42:45 +0000 (UTC) Subject: [SciPy-User] Maximum Lyapunov exponent of my previous system References: Message-ID: Barrett B gmail.com> writes: > > > > I need some help setting up the calculation of the maximum Lyapunov exponent of the system I was describing in my previous thread, "Heaviside function in vector form." Note that I list the actual function being integrated, f(X, t), in my first follow-up post on that thread. > Here is a brief discussion of the Lyapunov exponent: https://en.wikipedia.org/wiki/Lyapunov_exponent > > A simple Google search may come up with a better description. > > I tried simply introducing a fourth differential variable and starting to track it after some arbitrary time t_crit, but because of the nature of the integrator, that doesn't work. So I need help trying to calculate the MLE. > > > _______________________________________________ > SciPy-User mailing list > SciPy-User scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user > A little follow up--Here's the procedure of how to numerically integrate to find the maximum Lyapunov exponent (MLE): (1) Take the numerical solution for the system. (2) Starting after an arbitrary point, take the output value y_i and perturb it by an arbitrarily small amount, epsilon. (3) Simulate ONE timestep and record the result. (4) At each remaining timestep, repeat steps 2-3 for the ORIGINALLY simulated output, NOT the amount derived in step 3. Here is the code snippet: ========== # Calculate the maximum Lyapunov exponent. # Source: Clyde-Emmanuel Estorninho Meador. _Numerical Calculation of # Lyapunov Exponents for Three-Dimensional Systems of Ordinary Differential # Equations_. if Lyap: chops = 2 #Additional number of points to toss. N = int((t_Fin - t_track)/t_step); start = len(t) - N x_adj = np.array(x[start:], copy=True) #Approximate x_adj[i+1] using Euler's method. for i in range(N-2, 0, -1): x_adj[i+1] = x_adj[i] + Lorenz(np.array([x_adj[i] + eps_L,\ y[i + start], z[i + start]]))[0] * t_step Lf = np.log(np.absolute((x[start + chops:] - x_adj[chops:])/eps_L)) Ls = 0.0 for i in range (len(Lf)): Ls += Lf[i] print 'Maximum Lyapunov exponent:', Ls/(N*t_step) ========== I'm not getting anywhere close to what Meador's source says I should be getting for the MLE of the Lorenz system with standard parameters, which is 0.9056. The fact that my result heavily depends on the step size of the integrator tells me that selecting Euler's may be a problem here, which surprised me since the Lorenz system is reasonably numerically stable. From barrett.n.b at gmail.com Thu Jul 10 00:54:11 2014 From: barrett.n.b at gmail.com (Barrett B) Date: Thu, 10 Jul 2014 04:54:11 +0000 (UTC) Subject: [SciPy-User] Maximum Lyapunov exponent of my previous system References: Message-ID: Barrett B gmail.com> writes: > > > > I need some help setting up the calculation of the maximum Lyapunov exponent of the system I was describing in my previous thread, "Heaviside function in vector form." Note that I list the actual function being integrated, f(X, t), in my first follow-up post on that thread. > Here is a brief discussion of the Lyapunov exponent: https://en.wikipedia.org/wiki/Lyapunov_exponent > > A simple Google search may come up with a better description. 
> > I tried simply introducing a fourth differential variable and > starting to > track it after some arbitrary time t_crit, but because of the nature of > the > integrator, that doesn't work. So I need help trying to calculate the > MLE. > > > > > > _______________________________________________ > > SciPy-User mailing list > > SciPy-User scipy.org > > http://mail.scipy.org/mailman/listinfo/scipy-user > > Here is the code that I'm trying: ============== #Lorenz equations. def Lorenz(X): return np.array([sigma*(X[1] - X[0]), #dx/dt X[0]*(rho - X[2]) - X[1], #dy/dt X[0]*X[1] - beta*X[2]]) #dz/dt # Calculate the maximum Lyapunov exponent. # Source: Clyde-Emmanuel Estorninho Meador. _Numerical Calculation of # Lyapunov Exponents for Three-Dimensional Systems of Ordinary # Differential Equations_. if Lyap: x_0 = x[i_L: ]; y_0 = y[i_L: ]; z_0 = z[i_L: ] #toss the early data x_Lap = x_0 + eps_L; y_Lap = y_0 + eps_L; z_Lap = z_0 + eps_L delta = np.absolute(Lorenz(np.array([x_Lap, y_Lap, z_Lap])))*t_step L_f = np.log(delta/eps_L) L_f_temp = L_f[0] L_S = np.linalg.norm(L_f[0]) +\ np.linalg.norm(L_f[1]) + np.linalg.norm(L_f[2]) print L_S/((N - i_L)*3*t_step) #p. 33 ============== With a timestep of 0.002 and a runtime of 90 (i.e., t = np.arange( 0, 90, 0.002)), the last line above outputs 10.9275287. But according to the source listed above, I should be getting 0.9057--not even close to my result. Help--what am I doing wrong? From gregor.thalhammer at gmail.com Thu Jul 10 03:23:27 2014 From: gregor.thalhammer at gmail.com (Gregor Thalhammer) Date: Thu, 10 Jul 2014 09:23:27 +0200 Subject: [SciPy-User] Transforming an Image for a Curved Surface In-Reply-To: References: Message-ID: On 09.07.2014 at 17:40, Robert Kent wrote: > Hi All, > > I was wondering if it were possible to use SciPy to manipulate an image such that when wrapped around a cylindrical object it would appear 'flat'. The application is a QR code that is to be wrapped around a tube of relatively small diameter, which makes it impossible for reader applications to decode the url it contains. I am auto-generating the QR codes in python and adding them to a PDF such that they can be printed on labels for direct application. I understand the basic maths of how to do this: x/R = sin(s/R), where x is the orthographic coordinate, s is the 'distorted' coordinate and R is the radius of the cylinder. I'm just not sure how to apply it to an image in Python. I would greatly appreciate any help, suggestions or examples (SciPy or otherwise). > > Thanks, Rob scipy.ndimage.interpolation.geometric_transform [1] should do the job. Gregor [1] http://docs.scipy.org/doc/scipy/reference/generated/scipy.ndimage.interpolation.geometric_transform.html#scipy.ndimage.interpolation.geometric_transform From aronne.merrelli at gmail.com Thu Jul 10 10:21:36 2014 From: aronne.merrelli at gmail.com (Aronne Merrelli) Date: Thu, 10 Jul 2014 10:21:36 -0400 Subject: [SciPy-User] Scipy.io.netcdf: does nc.close release memory?
In-Reply-To: <1loh116.1p1bepe1e1bxj4N%lists@juliensalort.org> References: <1loh116.1p1bepe1e1bxj4N%lists@juliensalort.org> Message-ID: On Tue, Jul 8, 2014 at 9:24 AM, Julien Salort wrote: > Hello, > > Consider the following simple code snippet: > > ---8<------8<------8<------8<------8<------8<------8<------8<------ > import numpy as np > from scipy.io.netcdf import netcdf_file > > ncfile = 'bos-00008.nc' > nc = netcdf_file(ncfile) > grid_X = np.reshape(nc.variables['vec2_patch_X'].data, (ny, nx)) > grid_Y = np.reshape(nc.variables['vec2_patch_Y'].data, (ny, nx)) > grid_U = np.reshape(nc.variables['vec2_patch_U'].data, (ny, nx)) > grid_V = np.reshape(nc.variables['vec2_patch_V'].data, (ny, nx)) > nc.close() > ---8<------8<------8<------8<------8<------8<------8<------8<------ > > It is not clear to me whether or not it is safe to use grid_X, grid_Y, > grid_U and grid_V beyond the nc.close(). Does nc.close() release any > memory? > I think this is supposed to be safe, but it is usually the case that the variable returned from: >> var = nc.variables['varname'].data ultimately points to a "memmap" object. Try checking the .base attribute of the ndarray variable, to see if this is the case. You may need to go a couple levels to see the memmap (e.g., var.base.base). Since that seems to be causing problems, you could try something like the following: >> var = nc.variables['varname'].data.copy() This will make sure var is a full copy in memory with no reference back to the file. In this case the .base attribute of var should be None. Note this might not be desirable if the variable has a huge memory footprint. I often do this: I recall having problems in the past where I was looping through a large number of netCDF files and ended up with an IOError related to "too many open files" or something like that. Hope that helps, Aronne -------------- next part -------------- An HTML attachment was scrubbed... URL: From eric.moore2 at nih.gov Thu Jul 10 09:30:44 2014 From: eric.moore2 at nih.gov (Moore, Eric (NIH/NIDDK) [F]) Date: Thu, 10 Jul 2014 13:30:44 +0000 Subject: [SciPy-User] Maximum Lyapunov exponent of my previous system In-Reply-To: References: Message-ID: <649847CE7F259144A0FD99AC64E7326D0EFA43@MLBXV17.nih.gov> > -----Original Message----- > From: Barrett B [mailto:barrett.n.b at gmail.com] > Sent: Thursday, July 10, 2014 12:54 AM > To: scipy-user at scipy.org > Subject: Re: [SciPy-User] Maximum Lyapunov exponent of my previous > system > > Barrett B gmail.com> writes: > > > > > > > > > I need some help setting up the calculation of the maximum Lyapunov > exponent of the system I was describing in my previous thread, > "Heaviside > function in vector form." Note that I list the actual function being > integrated, f(X, t), in my first follow-up post on that thread. > > Here is a brief discussion of the Lyapunov exponent: > https://en.wikipedia.org/wiki/Lyapunov_exponent > > > > A simple Google search may come up with a better description. > > > > I tried simply introducing a fourth differential variable and > starting to > track it after some arbitrary time t_crit, but because of the nature of > the > integrator, that doesn't work. So I need help trying to calculate the > MLE. > > > > > > _______________________________________________ > > SciPy-User mailing list > > SciPy-User scipy.org > > http://mail.scipy.org/mailman/listinfo/scipy-user > > > > Here is the code that I'm trying: > > ============== > > #Lorenz equations.
> def Lorenz(X): > return np.array([sigma*(X[1] - X[0]), #dx/dt > X[0]*(rho - X[2]) - X[1], #dy/dt > X[0]*X[1] - beta*X[2]]) #dz/dt > > # Calculate the maximum Lyapunov exponent. > # Source: Clyde-Emmanuel Estorninho Meador. _Numerical Calculation of > # Lyapunov Exponents for Three-Dimensional Systems of Ordinary > # Differential Equations_. > if Lyap: > x_0 = x[i_L: ]; y_0 = y[i_L: ]; z_0 = z[i_L: ] #toss the early data > x_Lap = x_0 + eps_L; y_Lap = y_0 + eps_L; z_Lap = z_0 + eps_L > delta = np.absolute(Lorenz(np.array([x_Lap, y_Lap, z_Lap])))*t_step > L_f = np.log(delta/eps_L) > L_f_temp = L_f[0] > L_S = np.linalg.norm(L_f[0]) +\ > np.linalg.norm(L_f[1]) + np.linalg.norm(L_f[2]) > print L_S/((N - i_L)*3*t_step) #p. 33 > > ============== > > With a timestep of 0.002 and a runtime of 90 (i.e., t = np.arange( > 0, 90, 0.002)), the last line above outputs 10.9275287. But according to > the > source listed above, I should be getting 0.9057--not even close to my > result. Help--what am I doing wrong? > Generally, the likelihood of a helpful response will be higher if you post a working example. From a few moments looking at Meador's thesis, I don't think you are implementing the algorithm he describes, but I haven't looked in detail and could be mistaken. I assume you've looked at his matlab code in the appendices? It might also be worthwhile to find a copy of his reference 22, where he claims more details are presented. Eric From jeffreback at gmail.com Fri Jul 11 09:31:15 2014 From: jeffreback at gmail.com (Jeff Reback) Date: Fri, 11 Jul 2014 09:31:15 -0400 Subject: [SciPy-User] ANN: pandas 0.14.1 released Message-ID: Hello, We are proud to announce v0.14.1 of pandas, a minor release from 0.14.0. This release includes a small number of API changes, several new features, enhancements, and performance improvements along with a large number of bug fixes. This was 1.5 months of work with 244 commits by 45 authors encompassing 306 issues. We recommend that all users upgrade to this version. *Highlights:* - New method select_dtypes() to select columns based on the dtype - New method sem() to calculate the standard error of the mean. - Support for dateutil timezones (see *docs* ). - Support for ignoring full line comments in the read_csv() text parser. - New documentation section on *Options and Settings* . - Lots of bug fixes
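To give a flavour of the first two highlights, a minimal sketch with a throwaway frame:

import pandas as pd

df = pd.DataFrame({'a': [1.0, 2.0, 3.0], 'b': ['x', 'y', 'z']})
print df.select_dtypes(include=['float64'])  # keeps only column 'a'
print df['a'].sem()  # standard error of the mean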
For a fuller description of Whatsnew for v0.14.1, see: http://pandas.pydata.org/pandas-docs/stable/whatsnew.html *What is it:* *pandas* is a Python package providing fast, flexible, and expressive data structures designed to make working with "relational" or "labeled" data both easy and intuitive. It aims to be the fundamental high-level building block for doing practical, real world data analysis in Python. Additionally, it has the broader goal of becoming the most powerful and flexible open source data analysis / manipulation tool available in any language. Documentation: http://pandas.pydata.org/pandas-docs/stable/ Source tarballs, windows binaries are available on PyPI: https://pypi.python.org/pypi/pandas windows binaries are courtesy of Christoph Gohlke and are built on Numpy 1.8. Mac OS X wheels will be available soon, courtesy of Matthew Brett Please report any issues here: https://github.com/pydata/pandas/issues Thanks The Pandas Development Team Contributors to the 0.14.1 release - Andrew Rosenfeld - Andy Hayden - Benjamin Adams - Benjamin M. Gross - Brian Quistorff - Brian Wignall - bwignall - clham - Daniel Waeber - David Bew - David Stephens - DSM - dsm054 - helger - immerrr - Jacob Schaer - jaimefrio - Jan Schulz - John David Reaver - John W. O'Brien - Joris Van den Bossche - jreback - Julien Danjou - Kevin Sheppard - K.-Michael Aye - Kyle Meyer - lexual - Matthew Brett - Matt Wittmann - Michael Mueller - Mortada Mehyar - onesandzeroes - Phillip Cloud - Rob Levy - rockg - sanguineturtle - Schaer, Jacob C - seth-p - sinhrks - Stephan Hoyer - Thomas Kluyver - Todd Jennings - TomAugspurger - unknown - yelite -------------- next part -------------- An HTML attachment was scrubbed... URL: From maria-rosaria.antonelli at curie.fr Fri Jul 11 08:39:31 2014 From: maria-rosaria.antonelli at curie.fr (Antonelli Maria Rosaria) Date: Fri, 11 Jul 2014 12:39:31 +0000 Subject: [SciPy-User] matlab "uigetfile" analog for ipython Message-ID: Hi, Is there any way to interactively select the folder/directory where the data that I want to analyse in my notebook are stored? The Matlab command would be 'uigetfile'. This opens a window and I select the file I want from it. Thank you very much. Best, Rosa -------------- next part -------------- An HTML attachment was scrubbed... URL: From hturesson at gmail.com Sun Jul 13 07:27:18 2014 From: hturesson at gmail.com (Hjalmar Turesson) Date: Sun, 13 Jul 2014 08:27:18 -0300 Subject: [SciPy-User] matlab "uigetfile" analog for ipython In-Reply-To: References: Message-ID: Hi, I used tkFileDialog for that type of thing. Hjalmar On Fri, Jul 11, 2014 at 9:39 AM, Antonelli Maria Rosaria < maria-rosaria.antonelli at curie.fr> wrote: > Hi, > > Is there any way to interactively select the folder/directory where the > data that I want to analyse in my notebook are stored? > The Matlab command would be 'uigetfile'. This opens a window and I select > the file I want from it. > > Thank you very much. > Best, > Rosa > > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From alexeftimiades at gmail.com Mon Jul 14 10:42:42 2014 From: alexeftimiades at gmail.com (Alex Eftimiades) Date: Mon, 14 Jul 2014 10:42:42 -0400 Subject: [SciPy-User] Does scipy automatically work with openblas for parallelization? In-Reply-To: <53BFD41F.60003@gmail.com> References: <53BFD41F.60003@gmail.com> Message-ID: <53C3EC62.9020808@gmail.com> I have been searching for a parallel linear algebra package that has been integrated with numpy/scipy for quite some time now. In particular, I was looking for parallel eigenvalue/eigenvector calculations for arbitrary matrices. I was trying a package called armadillo, which recommended I install openblas for parallelization. When I installed openblas, I found that scipy started using all my cores to compute eigenvalues/eigenvectors when it was not doing so before. It was noticeably faster too. This effect might be well known to developers, but after a good deal of searching I could not find any indication that installing openblas would make scipy run linear algebra routines in parallel. If this is known, or at least could be confirmed, it would be helpful if it were documented.
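For reference, this is roughly how I noticed the difference -- a quick sketch; the matrix size is an arbitrary choice and timings will of course vary by machine:

import time
import numpy as np
import scipy.linalg

np.__config__.show()  # shows which BLAS/LAPACK numpy was built against

a = np.random.rand(1000, 1000)
t0 = time.time()
scipy.linalg.eig(a)  # watch core usage while this runs
print 'eig took %.1f seconds' % (time.time() - t0)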
Thanks, Alex Eftimiades From dmccully at mail.nih.gov Mon Jul 14 11:25:25 2014 From: dmccully at mail.nih.gov (McCully, Dwayne (NIH/NIAMS) [C]) Date: Mon, 14 Jul 2014 15:25:25 +0000 Subject: [SciPy-User] Three scipy.test() errors In-Reply-To: <432A8E6B26DC62439F0201C069BE2B671D386FF0@MLBXV08.nih.gov> References: <432A8E6B26DC62439F0201C069BE2B671D386FF0@MLBXV08.nih.gov> Message-ID: <432A8E6B26DC62439F0201C069BE2B671D38713C@MLBXV08.nih.gov> Trying to install scipy 0.14.0 under Python 3.3.4 but got test errors on three modules with the configuration listed below. Using LAPACK, ATLAS, and BLAS rpm's that comes with Red Hat 6. Any help would be appreciated. Dwayne ====================================================================== FAIL: test_basic.test_xlogy ---------------------------------------------------------------------- Traceback (most recent call last): File "/cm/shared/apps/python/3.3.4/lib/python3.3/site-packages/nose-1.3.0-py3.3.egg/nose/case.py", line 198, in runTest self.test(*self.arg) File "/cm/shared/apps/python/3.3.4/lib/python3.3/site-packages/scipy/special/tests/test_basic.py", line 2878, in test_xlogy assert_func_equal(special.xlogy, w2, z2, rtol=1e-13, atol=1e-13) File "/cm/shared/apps/python/3.3.4/lib/python3.3/site-packages/scipy/special/_testutils.py", line 87, in assert_func_equal fdata.check() File "/cm/shared/apps/python/3.3.4/lib/python3.3/site-packages/scipy/special/_testutils.py", line 292, in check assert_(False, "\n".join(msg)) File "/cm/shared/apps/python/3.3.4/lib/python3.3/site-packages/numpy/testing/utils.py", line 44, in assert_ raise AssertionError(msg) AssertionError: Max |adiff|: 712.561 Max |rdiff|: 1028.01 Bad results (3 out of 6) for the following points (in output 0): 0j (nan+0j) => (-0+0j) != (nan+nanj) (rdiff 0.0) (1+0j) (2+0j) => (-711.8665072622568+1.5707963267948752j) != (0.6931471805599453+0j) (rdiff 1028.0087776302707) (1+0j) 1j => (-711.8665072622568+1.5707963267948752j) != 1.5707963267948966j (rdiff 453.18829380940315) ====================================================================== FAIL: test_lambertw.test_values ---------------------------------------------------------------------- Traceback (most recent call last): File "/cm/shared/apps/python/3.3.4/lib/python3.3/site-packages/nose-1.3.0-py3.3.egg/nose/case.py", line 198, in runTest self.test(*self.arg) File "/cm/shared/apps/python/3.3.4/lib/python3.3/site-packages/scipy/special/tests/test_lambertw.py", line 21, in test_values assert_equal(lambertw(inf,1).real, inf) File "/cm/shared/apps/python/3.3.4/lib/python3.3/site-packages/numpy/testing/utils.py", line 304, in assert_equal raise AssertionError(msg) AssertionError: Items are not equal: ACTUAL: nan DESIRED: inf ====================================================================== FAIL: test_lambertw.test_ufunc ---------------------------------------------------------------------- Traceback (most recent call last): File "/cm/shared/apps/python/3.3.4/lib/python3.3/site-packages/numpy/testing/utils.py", line 581, in chk_same_position assert_array_equal(x_id, y_id) File "/cm/shared/apps/python/3.3.4/lib/python3.3/site-packages/numpy/testing/utils.py", line 718, in assert_array_equal verbose=verbose, header='Arrays are not equal') File "/cm/shared/apps/python/3.3.4/lib/python3.3/site-packages/numpy/testing/utils.py", line 644, in assert_array_compare raise AssertionError(msg) AssertionError: Arrays are not equal (mismatch 66.66666666666666%) x: array([False, True, True], dtype=bool) y: array([False, False, False], dtype=bool) 
During handling of the above exception, another exception occurred: Traceback (most recent call last): File "/cm/shared/apps/python/3.3.4/lib/python3.3/site-packages/nose-1.3.0-py3.3.egg/nose/case.py", line 198, in runTest self.test(*self.arg) File "/cm/shared/apps/python/3.3.4/lib/python3.3/site-packages/scipy/special/tests/test_lambertw.py", line 93, in test_ufunc lambertw(r_[0., e, 1.]), r_[0., 1., 0.567143290409783873]) File "/cm/shared/apps/python/3.3.4/lib/python3.3/site-packages/numpy/testing/utils.py", line 811, in assert_array_almost_equal header=('Arrays are not almost equal to %d decimals' % decimal)) File "/cm/shared/apps/python/3.3.4/lib/python3.3/site-packages/numpy/testing/utils.py", line 607, in assert_array_compare chk_same_position(x_isnan, y_isnan, hasval='nan') File "/cm/shared/apps/python/3.3.4/lib/python3.3/site-packages/numpy/testing/utils.py", line 587, in chk_same_position raise AssertionError(msg) AssertionError: Arrays are not almost equal to 6 decimals x and y nan location mismatch: x: array([ 0.+0.j, nan+0.j, nan+0.j]) y: array([ 0. , 1. , 0.56714329]) ---------------------------------------------------------------------- Ran 16420 tests in 223.156s FAILED (KNOWNFAIL=277, SKIP=1178, failures=3) [root ~]# python -c 'from numpy.f2py.diagnose import run; run()' ------ os.name='posix' ------ sys.platform='linux' ------ sys.version: 3.3.4 (default, Feb 27 2014, 17:05:47) [GCC 4.4.7 20120313 (Red Hat 4.4.7-4)] ------ sys.prefix: /cm/shared/apps/python/3.3.4 ------ sys.path=':/cm/shared/apps/python/3.3.4/lib/python3.3/site-packages/setuptools-2.2-py3.3.egg:/cm/shared/apps/python/3.3.4/lib/python3.3/site-packages/snakemake-2.5-py3.3.egg:/cm/shared/apps/python/3.3.4/lib/python3.3/site-packages/nose-1.3.0-py3.3.egg:/cm/shared/apps/python/3.3.4/lib/python33.zip:/cm/shared/apps/python/3.3.4/lib/python3.3:/cm/shared/apps/python/3.3.4/lib/python3.3/plat-linux:/cm/shared/apps/python/3.3.4/lib/python3.3/lib-dynload:/cm/shared/apps/python/3.3.4/lib/python3.3/site-packages' ------ Found new numpy version '1.8.1' in /cm/shared/apps/python/3.3.4/lib/python3.3/site-packages/numpy/__init__.py Found f2py2e version '2' in /cm/shared/apps/python/3.3.4/lib/python3.3/site-packages/numpy/f2py/f2py2e.py Found numpy.distutils version '0.4.0' in '/cm/shared/apps/python/3.3.4/lib/python3.3/site-packages/numpy/distutils/__init__.py' ------ Importing numpy.distutils.fcompiler ... 
ok ------ Checking availability of supported Fortran compilers: Gnu95FCompiler instance properties: archiver = ['/usr/bin/gfortran', '-cr'] compile_switch = '-c' compiler_f77 = ['/usr/bin/gfortran', '-Wall', '-ffixed-form', '-fno- second-underscore', '-fPIC', '-O3', '-funroll-loops'] compiler_f90 = ['/usr/bin/gfortran', '-Wall', '-fno-second-underscore', '-fPIC', '-O3', '-funroll-loops'] compiler_fix = ['/usr/bin/gfortran', '-Wall', '-ffixed-form', '-fno- second-underscore', '-Wall', '-fno-second-underscore', '- fPIC', '-O3', '-funroll-loops'] libraries = ['gfortran'] library_dirs = [] linker_exe = ['/usr/bin/gfortran', '-Wall', '-Wall'] linker_so = ['/usr/bin/gfortran', '-Wall', '-Wall', '-shared'] object_switch = '-o ' ranlib = ['/usr/bin/gfortran'] version = LooseVersion ('4.4.7') version_cmd = ['/usr/bin/gfortran', '--version'] GnuFCompiler instance properties: archiver = ['/usr/bin/g77', '-cr'] compile_switch = '-c' compiler_f77 = ['/usr/bin/g77', '-g', '-Wall', '-fno-second- underscore', '-fPIC', '-O3', '-funroll-loops'] compiler_f90 = None compiler_fix = None libraries = ['g2c'] library_dirs = [] linker_exe = ['/usr/bin/g77', '-g', '-Wall', '-g', '-Wall'] linker_so = ['/usr/bin/g77', '-g', '-Wall', '-g', '-Wall', '- shared'] object_switch = '-o ' ranlib = ['/usr/bin/g77'] version = LooseVersion ('3.4.6') version_cmd = ['/usr/bin/g77', '--version'] Fortran compilers found: --fcompiler=gnu GNU Fortran 77 compiler (3.4.6) --fcompiler=gnu95 GNU Fortran 95 compiler (4.4.7) Compilers available for this platform, but not found: --fcompiler=absoft Absoft Corp Fortran Compiler --fcompiler=compaq Compaq Fortran Compiler --fcompiler=g95 G95 Fortran Compiler --fcompiler=intel Intel Fortran Compiler for 32-bit apps --fcompiler=intele Intel Fortran Compiler for Itanium apps --fcompiler=intelem Intel Fortran Compiler for 64-bit apps --fcompiler=lahey Lahey/Fujitsu Fortran 95 Compiler --fcompiler=nag NAGWare Fortran 95 Compiler --fcompiler=pathf95 PathScale Fortran Compiler --fcompiler=pg Portland Group Fortran Compiler --fcompiler=vast Pacific-Sierra Research Fortran 90 Compiler Compilers not available on this platform: --fcompiler=hpux HP Fortran 90 Compiler --fcompiler=ibm IBM XL Fortran Compiler --fcompiler=intelev Intel Visual Fortran Compiler for Itanium apps --fcompiler=intelv Intel Visual Fortran Compiler for 32-bit apps --fcompiler=intelvem Intel Visual Fortran Compiler for 64-bit apps --fcompiler=mips MIPSpro Fortran Compiler --fcompiler=none Fake Fortran compiler --fcompiler=sun Sun or Forte Fortran 95 Compiler For compiler details, run 'config_fc --verbose' setup command. ------ Importing numpy.distutils.cpuinfo ... ok ------ CPU information: CPUInfoBase__get_nbits getNCPUs has_mmx has_sse has_sse2 has_sse3 has_ssse3 is_64bit is_Intel is_XEON is_Xeon is_i686 ------ [root at niamsirpapp01 ~]# gcc -v Using built-in specs. 
Target: x86_64-redhat-linux Configured with: ../configure --prefix=/usr --mandir=/usr/share/man --infodir=/usr/share/info --with-bugurl=http://bugzilla.redhat.com/bugzilla --enable-bootstrap --enable-shared --enable-threads=posix --enable-checking=release --with-system-zlib --enable-__cxa_atexit --disable-libunwind-exceptions --enable-gnu-unique-object --enable-languages=c,c++,objc,obj-c++,java,fortran,ada --enable-java-awt=gtk --disable-dssi --with-java-home=/usr/lib/jvm/java-1.5.0-gcj-1.5.0.0/jre --enable-libgcj-multifile --enable-java-maintainer-mode --with-ecj-jar=/usr/share/java/eclipse-ecj.jar --disable-libjava-multilib --with-ppl --with-cloog --with-tune=generic --with-arch_32=i686 --build=x86_64-redhat-linux Thread model: posix gcc version 4.4.7 20120313 (Red Hat 4.4.7-4) (GCC) [root at niamsirpapp01 ~]# g77 --version GNU Fortran (GCC) 3.4.6 20060404 (Red Hat 3.4.6-19.el6) Copyright (C) 2006 Free Software Foundation, Inc. GNU Fortran comes with NO WARRANTY, to the extent permitted by law. You may redistribute copies of GNU Fortran under the terms of the GNU General Public License. For more information about these matters, see the file named COPYING or type the command `info -f g77 Copying'. -------------- next part -------------- An HTML attachment was scrubbed... URL: From rajsai24 at gmail.com Mon Jul 14 11:32:37 2014 From: rajsai24 at gmail.com (Sai Rajeshwar) Date: Mon, 14 Jul 2014 21:02:37 +0530 Subject: [SciPy-User] scipy improve performance by parallelizing Message-ID: Hi all, I'm trying to optimise some Python code that spends a huge amount of time in scipy functions such as scipy.signal.convolve. Following are some of my queries regarding this. It would be great to hear from you. Thanks. ---------------------------------------------------- 1) Can scipy take advantage of multiple cores? If so, how? 2) What are the ways we can improve the performance of scipy/numpy functions, e.g. using OpenMP, MPI, etc.? 3) If scipy internally uses BLAS/MKL libraries, can we enable parallelism through these? ----------------------- I am running my code on Stampede at TACC, where numpy and scipy are built against MKL libraries for optimal performance. Observations are as follows: -------------------------------------------- 1) Setting OMP_NUM_THREADS to different values did not change the runtimes. 2) The code took the same time as it took on a Mac Pro with the Accelerate framework for BLAS and LAPACK. So is MKL not being helpful, or is it not getting configured to use multiple threads? -------------------------- The statements taking a lot of time are as follows: -------------------- 1) for i in xrange(conv_out_shape[1]): conv_out[0][i]=scipy.signal.convolve(self.input[0][i%self.image_shape[1]],numpy.rot90(self.W[0][i/self.image_shape[1]],2),mode='valid') 2) for i in xrange(pooled_shape[1]): for j in xrange(pooled_shape[2]): for k in xrange(pooled_shape[3]): for l in xrange(pooled_shape[4]): pooled[0][i][j][k][l]=math.tanh((numpy.sum(conv_out[0][i][j][k*3][l*3:(l+1)*3])+numpy.sum(conv_out[0][i][j][k*3+1][l*3:(l+1)*3])+numpy.sum(conv_out[0][i][j][k*3+2][l*3:(l+1)*3]))/9.0+b[i][j])
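One approach I have been considering is to farm the independent convolutions out to worker processes with multiprocessing -- an untested sketch, with random stand-in arrays in place of self.input and self.W:

import multiprocessing
import numpy as np
from scipy import signal

def conv_one(args):
    image, kernel = args
    return signal.convolve(image, np.rot90(kernel, 2), mode='valid')

if __name__ == '__main__':
    # stand-ins for self.input[0][...] and self.W[0][...]
    images = [np.random.rand(50, 50) for _ in xrange(8)]
    kernels = [np.random.rand(5, 5) for _ in xrange(8)]
    pool = multiprocessing.Pool()
    results = pool.map(conv_one, zip(images, kernels))
    pool.close()

Would that be a better route than relying on MKL threading inside scipy?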
From tritemio at gmail.com  Mon Jul 14 12:17:49 2014
From: tritemio at gmail.com (Antonino Ingargiola)
Date: Mon, 14 Jul 2014 09:17:49 -0700
Subject: [SciPy-User] matlab "uigetfile" analog for ipython
In-Reply-To:
References:
Message-ID:

Hi,

I usually define a function that uses Qt (either PySide or PyQt) to open a
dialog:

    try:
        from PySide import QtCore, QtGui
    except ImportError:
        from PyQt4 import QtCore, QtGui

    def gui_fname(dir=None):
        """Select a file via a dialog and return the file name."""
        if dir is None:
            dir = './'
        fname = QtGui.QFileDialog.getOpenFileName(None, "Select data file...",
                                                  dir, filter="All files (*)")
        # PySide returns a (filename, selected_filter) tuple, while PyQt4
        # returns the file name alone, so unpack only when needed.
        if isinstance(fname, tuple):
            fname = fname[0]
        return str(fname)

Antonino

On Sun, Jul 13, 2014 at 4:27 AM, Hjalmar Turesson wrote:

> Hi,
>
> I used tkFileDialog for that type of thing.
>
> Hjalmar
>
> On Fri, Jul 11, 2014 at 9:39 AM, Antonelli Maria Rosaria <
> maria-rosaria.antonelli at curie.fr> wrote:
>
>> Hi,
>>
>> Is there any way to interactively select the folder/directory where the
>> data that I want to analyse in my notebook are stored?
>> The Matlab command would be 'uigetfile'. This opens a window from which
>> I select the file I want.
>>
>> Thank you very much.
>> Best,
>> Rosa

From takowl at gmail.com  Mon Jul 14 13:58:13 2014
From: takowl at gmail.com (Thomas Kluyver)
Date: Mon, 14 Jul 2014 10:58:13 -0700
Subject: [SciPy-User] Headphone pouch left at SciPy sprints
Message-ID:

Someone at the SciPy sprints on Saturday left a black headphone pouch in
room 103. We picked it up thinking it belonged to one of the IPython core
team, but it looks like we were wrong. If it's yours, please let me know,
and I'll arrange to get it back to you.

Thanks,
Thomas

From barrett.n.b at gmail.com  Sat Jul 12 02:44:27 2014
From: barrett.n.b at gmail.com (Barrett B)
Date: Sat, 12 Jul 2014 06:44:27 +0000 (UTC)
Subject: [SciPy-User] Maximum Lyapunov exponent of my previous system
References: <649847CE7F259144A0FD99AC64E7326D0EFA43@MLBXV17.nih.gov>
Message-ID:

> Generally, the likelihood of a helpful response will be higher if you
> post a working example.
>
> From a few moments looking at Meador's thesis, I don't think you are
> implementing the algorithm he describes, but I haven't looked in detail
> and could be mistaken. I assume you've looked at his matlab code in the
> appendices? It might also be worthwhile to find a copy of his reference
> 22, where he claims more details are presented.
>
> Eric

Apologies if this is a duplicate--Gmane has been very glitchy as of late.
Here is my latest failed attempt, complete with copious documentation:

==================

if Lyap:
    eps_L = np.array([1e-7, 1e-7, 1e-7])
    #Toss any data with an index less than i_L:
    x_0 = x[i_L: ]; y_0 = y[i_L: ]; z_0 = z[i_L: ]
    #Offset each variable by arbitrarily small amounts:
    x_eps = x_0 + eps_L[0]; y_eps = y_0 + eps_L[1]; z_eps = z_0 + eps_L[2]
    #Take the second norm ("None") of this offset distance:
    d_0 = np.linalg.norm(eps_L, ord=None)
    #Since dX/dt * del(t) ~= del(X), acquire it from the equations.
    delX = Lorenz(np.array([x_eps, y_eps, z_eps]))*t_step
    #Shifts the array by one (for integration). Set delta[0] = 0.
    delX = np.roll(delX, 1)
    for i in range (len(delta)): delta[i][0] = 0
    #Use Euler's method to integrate one timestep. (Can later be optimized.)
    x_Lap = x_eps + delX[0]
    y_Lap = y_eps + delX[1]
    z_Lap = z_eps + delX[2]
    #Take the second norm of each point.
    d_Lap = np.sqrt(x_Lap**2 + y_Lap**2 + z_Lap**2)
    #Take the natural log.
    LapLog = np.log(d_Lap/d_0)
    #Find the average. This is the largest Lyapunov exponent.
    print np.linalg.norm(LapLog, ord=1) / len(LapLog)

===========

If you have the slightest clue what I am doing wrong, I would absolutely
love to know. I have spent so much time on this to the point of great
discouragement. :(

From barrett.n.b at gmail.com  Fri Jul 11 16:27:25 2014
From: barrett.n.b at gmail.com (Barrett B)
Date: Fri, 11 Jul 2014 20:27:25 +0000 (UTC)
Subject: [SciPy-User] Maximum Lyapunov exponent of my previous system
References: <649847CE7F259144A0FD99AC64E7326D0EFA43@MLBXV17.nih.gov>
Message-ID:

Moore, Eric (NIH/NIDDK) [F] writes:

> Generally, the likelihood of a helpful response will be higher if you
> post a working example.
>
> From a few moments looking at Meador's thesis, I don't think you are
> implementing the algorithm he describes, but I haven't looked in detail
> and could be mistaken. I assume you've looked at his matlab code in the
> appendices? It might also be worthwhile to find a copy of his reference
> 22, where he claims more details are presented.
>
> Eric

I thought I did post an example? Here, here's some more code. This
simulates the standard Lorenz system (with parameters rho=28, sigma=10,
and beta = 8.0/3). I tested the numerical simulation part, and the graph
suggests that the simulator itself works fine. It is the Lyapunov exponent
calculation that somehow fails, and I cannot figure out why.

==================================

rates = [-0.1, -0.4, -1]
sigma = 10; beta = 8.0/3; rho = 28
t_step = 0.001; t_Fin = 90; N = int(t_Fin/t_step)

#Whether or not to simulate or to calculate max Lyapunov exponent.
Lyap = True; eps_L = 0.001; t_track = 15; i_L = int(t_track/t_step)

#Lorenz equations.
def Lorenz(X):
    return np.array([sigma*(X[1] - X[0]),       #dx/dt
                     X[0]*(rho - X[2]) - X[1],  #dy/dt
                     X[0]*X[1] - beta*X[2]])    #dz/dt

#ODE.
def f(X, t):
    return Lorenz(X)

Y0 = np.array([1.0, 1.0, 1.0]) #x, y, z
t = np.arange(0, t_Fin, t_step)
soln = odeint(f, Y0, t)
x = soln[:, 0]; y = soln[:, 1]; z = soln[:, 2]

# Calculate the maximum Lyapunov exponent.
# Source: Clyde-Emmanuel Estorninho Meador. _Numerical Calculation
# of Lyapunov Exponents for Three-Dimensional Systems of Ordinary
# Differential Equations_.
if Lyap:
    x_0 = x[i_L: ]; y_0 = y[i_L: ]; z_0 = z[i_L: ] #toss the early data
    x_Lap = x_0 + eps_L; y_Lap = y_0 + eps_L; z_Lap = z_0 + eps_L
    delta = np.absolute(Lorenz(np.array([x_Lap, y_Lap, z_Lap])))*t_step
    L_f = np.log(delta/eps_L)
    L_f_temp = L_f[0]
    L_S = np.linalg.norm(L_f[0], ord=1) +\
        np.linalg.norm(L_f[1], ord=1) + np.linalg.norm(L_f[2], ord=1)
    print L_S/((N - i_L)*3*t_step) #p. 33

============================

After the "if Lyap" statement, each line does the following:

(1; "x_0" line) Toss the early data.
(2; "x_Lap" line) Shift each point in the numerical solution by an
arbitrarily small amount. Separated in case I want to shift x, y, and z by
different amounts.
(3; "delta" line) Integrates one time step via Euler's method.
(4; "L_f" line) Formula from the paper (p. 32).
(5; "L_f_temp" line) (Dummy line--debugging.)
(6; "L_S" line) Formula from the paper (p. 33). (7; "print" line) Ditto. I really don't see what's wrong here. I have tried so many times to debug this, and I still after all these attempts, cannot get a correct Lyapunov exponent. :( From dmccully at mail.nih.gov Mon Jul 14 09:09:45 2014 From: dmccully at mail.nih.gov (McCully, Dwayne (NIH/NIAMS) [C]) Date: Mon, 14 Jul 2014 13:09:45 +0000 Subject: [SciPy-User] Three scipy.test() errors Message-ID: <432A8E6B26DC62439F0201C069BE2B671D386FF0@MLBXV08.nih.gov> Trying to install scipy 0.14.0 under Python 3.3.4 but got test errors on three modules with the configuration listed below. Using LAPACK, ATLAS, and BLAS rpm's that comes with Red Hat 6. Any help would be appreciated. Dwayne ====================================================================== FAIL: test_basic.test_xlogy ---------------------------------------------------------------------- Traceback (most recent call last): File "/cm/shared/apps/python/3.3.4/lib/python3.3/site-packages/nose-1.3.0-py3.3.egg/nose/case.py", line 198, in runTest self.test(*self.arg) File "/cm/shared/apps/python/3.3.4/lib/python3.3/site-packages/scipy/special/tests/test_basic.py", line 2878, in test_xlogy assert_func_equal(special.xlogy, w2, z2, rtol=1e-13, atol=1e-13) File "/cm/shared/apps/python/3.3.4/lib/python3.3/site-packages/scipy/special/_testutils.py", line 87, in assert_func_equal fdata.check() File "/cm/shared/apps/python/3.3.4/lib/python3.3/site-packages/scipy/special/_testutils.py", line 292, in check assert_(False, "\n".join(msg)) File "/cm/shared/apps/python/3.3.4/lib/python3.3/site-packages/numpy/testing/utils.py", line 44, in assert_ raise AssertionError(msg) AssertionError: Max |adiff|: 712.561 Max |rdiff|: 1028.01 Bad results (3 out of 6) for the following points (in output 0): 0j (nan+0j) => (-0+0j) != (nan+nanj) (rdiff 0.0) (1+0j) (2+0j) => (-711.8665072622568+1.5707963267948752j) != (0.6931471805599453+0j) (rdiff 1028.0087776302707) (1+0j) 1j => (-711.8665072622568+1.5707963267948752j) != 1.5707963267948966j (rdiff 453.18829380940315) ====================================================================== FAIL: test_lambertw.test_values ---------------------------------------------------------------------- Traceback (most recent call last): File "/cm/shared/apps/python/3.3.4/lib/python3.3/site-packages/nose-1.3.0-py3.3.egg/nose/case.py", line 198, in runTest self.test(*self.arg) File "/cm/shared/apps/python/3.3.4/lib/python3.3/site-packages/scipy/special/tests/test_lambertw.py", line 21, in test_values assert_equal(lambertw(inf,1).real, inf) File "/cm/shared/apps/python/3.3.4/lib/python3.3/site-packages/numpy/testing/utils.py", line 304, in assert_equal raise AssertionError(msg) AssertionError: Items are not equal: ACTUAL: nan DESIRED: inf ====================================================================== FAIL: test_lambertw.test_ufunc ---------------------------------------------------------------------- Traceback (most recent call last): File "/cm/shared/apps/python/3.3.4/lib/python3.3/site-packages/numpy/testing/utils.py", line 581, in chk_same_position assert_array_equal(x_id, y_id) File "/cm/shared/apps/python/3.3.4/lib/python3.3/site-packages/numpy/testing/utils.py", line 718, in assert_array_equal verbose=verbose, header='Arrays are not equal') File "/cm/shared/apps/python/3.3.4/lib/python3.3/site-packages/numpy/testing/utils.py", line 644, in assert_array_compare raise AssertionError(msg) AssertionError: Arrays are not equal (mismatch 66.66666666666666%) x: array([False, True, 
True], dtype=bool) y: array([False, False, False], dtype=bool) During handling of the above exception, another exception occurred: Traceback (most recent call last): File "/cm/shared/apps/python/3.3.4/lib/python3.3/site-packages/nose-1.3.0-py3.3.egg/nose/case.py", line 198, in runTest self.test(*self.arg) File "/cm/shared/apps/python/3.3.4/lib/python3.3/site-packages/scipy/special/tests/test_lambertw.py", line 93, in test_ufunc lambertw(r_[0., e, 1.]), r_[0., 1., 0.567143290409783873]) File "/cm/shared/apps/python/3.3.4/lib/python3.3/site-packages/numpy/testing/utils.py", line 811, in assert_array_almost_equal header=('Arrays are not almost equal to %d decimals' % decimal)) File "/cm/shared/apps/python/3.3.4/lib/python3.3/site-packages/numpy/testing/utils.py", line 607, in assert_array_compare chk_same_position(x_isnan, y_isnan, hasval='nan') File "/cm/shared/apps/python/3.3.4/lib/python3.3/site-packages/numpy/testing/utils.py", line 587, in chk_same_position raise AssertionError(msg) AssertionError: Arrays are not almost equal to 6 decimals x and y nan location mismatch: x: array([ 0.+0.j, nan+0.j, nan+0.j]) y: array([ 0. , 1. , 0.56714329]) ---------------------------------------------------------------------- Ran 16420 tests in 223.156s FAILED (KNOWNFAIL=277, SKIP=1178, failures=3) [root ~]# python -c 'from numpy.f2py.diagnose import run; run()' ------ os.name='posix' ------ sys.platform='linux' ------ sys.version: 3.3.4 (default, Feb 27 2014, 17:05:47) [GCC 4.4.7 20120313 (Red Hat 4.4.7-4)] ------ sys.prefix: /cm/shared/apps/python/3.3.4 ------ sys.path=':/cm/shared/apps/python/3.3.4/lib/python3.3/site-packages/setuptools-2.2-py3.3.egg:/cm/shared/apps/python/3.3.4/lib/python3.3/site-packages/snakemake-2.5-py3.3.egg:/cm/shared/apps/python/3.3.4/lib/python3.3/site-packages/nose-1.3.0-py3.3.egg:/cm/shared/apps/python/3.3.4/lib/python33.zip:/cm/shared/apps/python/3.3.4/lib/python3.3:/cm/shared/apps/python/3.3.4/lib/python3.3/plat-linux:/cm/shared/apps/python/3.3.4/lib/python3.3/lib-dynload:/cm/shared/apps/python/3.3.4/lib/python3.3/site-packages' ------ Found new numpy version '1.8.1' in /cm/shared/apps/python/3.3.4/lib/python3.3/site-packages/numpy/__init__.py Found f2py2e version '2' in /cm/shared/apps/python/3.3.4/lib/python3.3/site-packages/numpy/f2py/f2py2e.py Found numpy.distutils version '0.4.0' in '/cm/shared/apps/python/3.3.4/lib/python3.3/site-packages/numpy/distutils/__init__.py' ------ Importing numpy.distutils.fcompiler ... 
ok ------ Checking availability of supported Fortran compilers: Gnu95FCompiler instance properties: archiver = ['/usr/bin/gfortran', '-cr'] compile_switch = '-c' compiler_f77 = ['/usr/bin/gfortran', '-Wall', '-ffixed-form', '-fno- second-underscore', '-fPIC', '-O3', '-funroll-loops'] compiler_f90 = ['/usr/bin/gfortran', '-Wall', '-fno-second-underscore', '-fPIC', '-O3', '-funroll-loops'] compiler_fix = ['/usr/bin/gfortran', '-Wall', '-ffixed-form', '-fno- second-underscore', '-Wall', '-fno-second-underscore', '- fPIC', '-O3', '-funroll-loops'] libraries = ['gfortran'] library_dirs = [] linker_exe = ['/usr/bin/gfortran', '-Wall', '-Wall'] linker_so = ['/usr/bin/gfortran', '-Wall', '-Wall', '-shared'] object_switch = '-o ' ranlib = ['/usr/bin/gfortran'] version = LooseVersion ('4.4.7') version_cmd = ['/usr/bin/gfortran', '--version'] GnuFCompiler instance properties: archiver = ['/usr/bin/g77', '-cr'] compile_switch = '-c' compiler_f77 = ['/usr/bin/g77', '-g', '-Wall', '-fno-second- underscore', '-fPIC', '-O3', '-funroll-loops'] compiler_f90 = None compiler_fix = None libraries = ['g2c'] library_dirs = [] linker_exe = ['/usr/bin/g77', '-g', '-Wall', '-g', '-Wall'] linker_so = ['/usr/bin/g77', '-g', '-Wall', '-g', '-Wall', '- shared'] object_switch = '-o ' ranlib = ['/usr/bin/g77'] version = LooseVersion ('3.4.6') version_cmd = ['/usr/bin/g77', '--version'] Fortran compilers found: --fcompiler=gnu GNU Fortran 77 compiler (3.4.6) --fcompiler=gnu95 GNU Fortran 95 compiler (4.4.7) Compilers available for this platform, but not found: --fcompiler=absoft Absoft Corp Fortran Compiler --fcompiler=compaq Compaq Fortran Compiler --fcompiler=g95 G95 Fortran Compiler --fcompiler=intel Intel Fortran Compiler for 32-bit apps --fcompiler=intele Intel Fortran Compiler for Itanium apps --fcompiler=intelem Intel Fortran Compiler for 64-bit apps --fcompiler=lahey Lahey/Fujitsu Fortran 95 Compiler --fcompiler=nag NAGWare Fortran 95 Compiler --fcompiler=pathf95 PathScale Fortran Compiler --fcompiler=pg Portland Group Fortran Compiler --fcompiler=vast Pacific-Sierra Research Fortran 90 Compiler Compilers not available on this platform: --fcompiler=hpux HP Fortran 90 Compiler --fcompiler=ibm IBM XL Fortran Compiler --fcompiler=intelev Intel Visual Fortran Compiler for Itanium apps --fcompiler=intelv Intel Visual Fortran Compiler for 32-bit apps --fcompiler=intelvem Intel Visual Fortran Compiler for 64-bit apps --fcompiler=mips MIPSpro Fortran Compiler --fcompiler=none Fake Fortran compiler --fcompiler=sun Sun or Forte Fortran 95 Compiler For compiler details, run 'config_fc --verbose' setup command. ------ Importing numpy.distutils.cpuinfo ... ok ------ CPU information: CPUInfoBase__get_nbits getNCPUs has_mmx has_sse has_sse2 has_sse3 has_ssse3 is_64bit is_Intel is_XEON is_Xeon is_i686 ------ [root at niamsirpapp01 ~]# gcc -v Using built-in specs. 
Target: x86_64-redhat-linux
Configured with: ../configure --prefix=/usr --mandir=/usr/share/man --infodir=/usr/share/info --with-bugurl=http://bugzilla.redhat.com/bugzilla --enable-bootstrap --enable-shared --enable-threads=posix --enable-checking=release --with-system-zlib --enable-__cxa_atexit --disable-libunwind-exceptions --enable-gnu-unique-object --enable-languages=c,c++,objc,obj-c++,java,fortran,ada --enable-java-awt=gtk --disable-dssi --with-java-home=/usr/lib/jvm/java-1.5.0-gcj-1.5.0.0/jre --enable-libgcj-multifile --enable-java-maintainer-mode --with-ecj-jar=/usr/share/java/eclipse-ecj.jar --disable-libjava-multilib --with-ppl --with-cloog --with-tune=generic --with-arch_32=i686 --build=x86_64-redhat-linux
Thread model: posix
gcc version 4.4.7 20120313 (Red Hat 4.4.7-4) (GCC)

[root at niamsirpapp01 ~]# g77 --version
GNU Fortran (GCC) 3.4.6 20060404 (Red Hat 3.4.6-19.el6)
Copyright (C) 2006 Free Software Foundation, Inc.

GNU Fortran comes with NO WARRANTY, to the extent permitted by law.
You may redistribute copies of GNU Fortran under the terms of the GNU
General Public License. For more information about these matters, see the
file named COPYING or type the command `info -f g77 Copying'.
From sturla.molden at gmail.com  Mon Jul 14 18:31:45 2014
From: sturla.molden at gmail.com (Sturla Molden)
Date: Mon, 14 Jul 2014 22:31:45 +0000 (UTC)
Subject: [SciPy-User] Does scipy automatically work with openblas for parallelization?
References: <53C3EC62.9020808@gmail.com>
Message-ID: <1307492174427068642.583414sturla.molden-gmail.com@news.gmane.org>

Alex Eftimiades wrote:

> This effect might be well known to developers, but after a good deal of
> searching I could not find any indication that installing openblas would
> make scipy run linear algebra routines in parallel. If this is known, or
> at least could be confirmed, it would be helpful if it were documented.

If you use a parallel BLAS (and LAPACK) library, numpy.dot, numpy.linalg.*
and scipy.linalg.* will be parallelized, depending on the BLAS
implementation. Intel MKL, the Apple Accelerate framework (at least in OSX
10.9) and OpenBLAS give good parallel performance. ATLAS is less
performant and less scalable, but the official installers that use ATLAS
have some multithreading enabled. AMD ACML scales very badly on multiple
processors, but the single-threaded version of ACML is quite fast. The
Netlib reference BLAS has no parallelization beyond what the Fortran
compiler can infer and vectorize. IBM, HP and Cray have optimized BLAS
libraries too, but I am not sure how well they perform.

Note that NumPy/SciPy code is usually memory bound, not CPU bound, in
which case running the computations in parallel will help very little
with the total runtime. Also beware that you should only use Fortran
contiguous arrays when calling numpy.linalg.* and scipy.linalg.* if you
care about performance. Otherwise the BLAS and LAPACK wrappers will spend
a significant amount of time copying and transposing input and output
arrays. numpy.dot is not affected by this and is equally fast for
C-ordered arrays.

Sturla
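Sturla's last point is easy to check empirically. A minimal sketch,
assuming a BLAS-backed build (the matrix size and repeat count are
arbitrary choices, and absolute timings are machine-dependent):

    import numpy as np
    from scipy import linalg
    from timeit import timeit

    a = np.random.rand(1500, 1500)    # C-ordered
    af = np.asfortranarray(a)         # Fortran-ordered copy

    # numpy.dot is insensitive to the array ordering...
    print timeit(lambda: np.dot(a, a), number=5)
    # ...but LAPACK wrappers must copy/transpose C-ordered input:
    print timeit(lambda: linalg.lu(a), number=5)
    print timeit(lambda: linalg.lu(af), number=5)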
From lou_boog2000 at yahoo.com  Tue Jul 15 10:40:02 2014
From: lou_boog2000 at yahoo.com (Lou Pecora)
Date: Tue, 15 Jul 2014 07:40:02 -0700
Subject: [SciPy-User] Maximum Lyapunov exponent of my previous system
In-Reply-To:
References: <649847CE7F259144A0FD99AC64E7326D0EFA43@MLBXV17.nih.gov>
Message-ID: <1405435202.40803.YahooMailNeo@web125505.mail.ne1.yahoo.com>

(view in monospace font)

Hello Barrett,

May I suggest a different approach. It's one I've used and it's used a lot
in the nonlinear dynamics community. It's also more mathematically correct
and stable.

You have the right idea that you want to examine the dynamics of a
perturbation to the system, but the best way to do that is to use the
original equations of motion along with what is called the variational
equation. The latter describes the motion of the perturbation itself.
Here's a "derivation":

Original equations: dx/dt = F(x), where x and F(x) are vector quantities
and F(x) is the vector field which describes the motion at the next
instant of time. It's the right-hand side of your Lorenz equations.

Perturbation equations (also called variational equations):

Perturb x -> x+w, where w depends on t and is considered small. Then we
have

  d(x+w)/dt = F(x+w) ~ F(x) + DF(x)w + O(2),   (*)

(this is just a Taylor expansion), where ~ means approximately equal, O(2)
are terms of order w^2 or higher, and DF is the matrix of partial
derivatives of F with respect to the components of x. DF is usually called
the Jacobian of F.

Now subtract the original equations from (*) and get the equations of
motion for the perturbation:

  dw/dt = DF(x) w   (**)

Note, these are linear, although DF is time dependent through x=x(t).

To get the maximum Lyapunov exponent you solve the original equations
dx/dt = F(x) and the variational equations (**) simultaneously using a
solver (e.g. odeint). For the Lorenz system this will be a 6-dimensional
system of equations. What you take advantage of is that the max Lyapunov
exponent will cause the original perturbation to stretch in that direction
the most, and it will become the major cause of a change in the length of
w as time progresses. Hence, you can start the system with w = some random
unit vector (call this w(0)), let it run for a time T, then stop (now you
have w(T)), and calculate r = ln(|w(T)|/|w(0)|). r/T is then an
approximation for the largest Lyapunov exponent (I'll call that lam_max).
But you want to make sure you cover most of the attractor (the system's
trajectory), so you repeat the above process sequentially many times and
keep a running sum of the ratios r_i = ln(|w(t_i)|/|w(t_(i-1))|), where
t_i = t_(i-1) + T above. NOTE: when you do this you re-start the system at
the last t_i and x(t_i), but you re-initialize w to a random unit vector
again.

You want to do this each time so w can grow a lot along the length of the
lam_max direction in the dynamics. A rough algorithm would look like this:

(1) Initialize x(0), w = random unit vector, and running sum = 0.
(2) Run the system for some time T (maybe the time to go around the
attractor a few times, roughly speaking, so that |w(t_i)|/|w(t_(i-1))| >
some max ratio, e.g. 1000).
(3) Add r_i = ln(|w(t_i)|/|w(t_(i-1))|) to the running sum, re-initialize
w to a random unit vector, and continue from x(t_i).
(4) Have we done this enough times (say N)? (You can try to see how many
repetitions give you a stable lam_max number.) If yes, then the average of
the final r_i sum is lam_max. That is,

                      N
  lam_max = (1/(N*T)) SUM r_i
                      i=1

I hope that's clear. You can check the explanation on page 74 of Practical
Numerical Algorithms for Chaotic Systems by Parker and Chua
(Springer-Verlag).

Good luck.

-- Lou Pecora

On Tuesday, July 15, 2014 5:49 AM, Barrett B wrote:
> Apologies if this is a duplicate--Gmane has been very glitchy as of late.
> Here is my latest failed attempt, complete with copious documentation:
> [...]
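A minimal sketch of the procedure Lou outlines, using the Lorenz system
and the parameters quoted earlier in the thread (sigma=10, rho=28,
beta=8/3). The leg length T, the number of legs N, and all names below are
illustrative choices, not Lou's actual code; following his note, w is
re-initialized to a random unit vector after each leg, and T is chosen
long enough for |w| to grow by a large factor:

    import numpy as np
    from scipy.integrate import odeint

    sigma = 10.0; beta = 8.0/3; rho = 28.0

    def lorenz_and_variational(Y, t):
        x, y, z = Y[:3]
        w = Y[3:]
        # dx/dt = F(x): the Lorenz vector field.
        F = np.array([sigma*(y - x),
                      x*(rho - z) - y,
                      x*y - beta*z])
        # DF(x): the Jacobian of the Lorenz vector field.
        DF = np.array([[-sigma, sigma,  0.0],
                       [rho - z, -1.0,  -x],
                       [y,       x,     -beta]])
        # dw/dt = DF(x) w: the variational equation (**).
        return np.concatenate((F, np.dot(DF, w)))

    def max_lyapunov(x0, T=8.0, N=100):
        w = np.random.randn(3)
        state = np.concatenate((x0, w/np.linalg.norm(w)))
        running_sum = 0.0
        for _ in range(N):
            t = np.linspace(0.0, T, 50)
            state = odeint(lorenz_and_variational, state, t)[-1]
            # |w| = 1 at the start of each leg, so r_i = ln |w(T)|.
            running_sum += np.log(np.linalg.norm(state[3:]))
            # Keep x(t_i); re-initialize w to a random unit vector.
            w = np.random.randn(3)
            state[3:] = w/np.linalg.norm(w)
        return running_sum/(N*T)

    print max_lyapunov(np.array([1.0, 1.0, 1.0]))  # ~0.9 for these parameters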
From rajsai24 at gmail.com  Wed Jul 16 05:56:51 2014
From: rajsai24 at gmail.com (Sai Rajeshwar)
Date: Wed, 16 Jul 2014 15:26:51 +0530
Subject: [SciPy-User] building scipy with umfpack and amd .. how does it help
Message-ID:

hi,

I'm running code which uses scipy.signal.convolve and numpy.sum
extensively. I ran the code on two machines with the same configuration;
one machine took much less time than the other. I checked the scipy
configuration on that machine and found that its scipy is built with
UMFPACK and AMD.

Is this the reason behind it? In what way do UMFPACK and AMD aid scipy
operations?

--------------------------------

>>> scipy.__config__.show()
blas_info:
    libraries = ['blas']
    library_dirs = ['/usr/lib64']
    language = f77

amd_info:
    libraries = ['amd']
    library_dirs = ['/usr/lib64']
    define_macros = [('SCIPY_AMD_H', None)]
    swig_opts = ['-I/usr/include/suitesparse']
    include_dirs = ['/usr/include/suitesparse']

lapack_info:
    libraries = ['lapack']
    library_dirs = ['/usr/lib64']
    language = f77

atlas_threads_info:
  NOT AVAILABLE

blas_opt_info:
    libraries = ['blas']
    library_dirs = ['/usr/lib64']
    language = f77
    define_macros = [('NO_ATLAS_INFO', 1)]

atlas_blas_threads_info:
  NOT AVAILABLE

umfpack_info:
    libraries = ['umfpack', 'amd']
    library_dirs = ['/usr/lib64']
    define_macros = [('SCIPY_UMFPACK_H', None), ('SCIPY_AMD_H', None)]
    swig_opts = ['-I/usr/include/suitesparse', '-I/usr/include/suitesparse']
    include_dirs = ['/usr/include/suitesparse']

thanks a lot for your replies in advance

*with regards..*
*M. Sai Rajeswar*
*M-tech Computer Technology*
*IIT Delhi----------------------------------Cogito Ergo Sum---------*
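Since the hot spot here is scipy.signal.convolve, one thing worth timing
independently of the UMFPACK/AMD question is FFT-based convolution. A
minimal sketch (the array shapes are arbitrary; whether it wins depends on
the kernel size):

    import numpy as np
    from scipy import signal

    img = np.random.rand(256, 256)
    ker = np.random.rand(9, 9)

    direct = signal.convolve(img, ker, mode='valid')
    viafft = signal.fftconvolve(img, ker, mode='valid')
    # Same result up to floating-point error; fftconvolve is often much
    # faster for larger kernels.
    print np.allclose(direct, viafft)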
From njs at pobox.com  Wed Jul 16 21:21:01 2014
From: njs at pobox.com (Nathaniel Smith)
Date: Thu, 17 Jul 2014 02:21:01 +0100
Subject: [SciPy-User] [ANN] patsy v0.3.0 released
Message-ID:

Hi all,

I'm pleased to announce the v0.3.0 release of patsy. The main highlight of
this release is the addition of builtin functions to compute natural and
restricted cubic splines, and tensor spline products, with optional
constraints, and all compatible with the R package 'mgcv'. (Note that if
you wanted to replace mgcv itself then you still need to implement their
penalized fitting algorithm -- these are just the spline basis functions.
But these are very useful on their own, and allow you to fit model
coefficients with mgcv and then use python to generate predictions from
that model.)

We also dropped support for python 2.4 and 2.5, and have switched to a
single polyglot codebase for py2 and py3, allowing us to distribute
universal wheels.

Patsy is a Python library for describing statistical models (especially
linear models, or models that have a linear component) and building design
matrices. Patsy brings the convenience of R "formulas" to Python.

Changes: https://patsy.readthedocs.org/en/latest/changes.html#v0-3-0
General information: https://github.com/pydata/patsy/blob/master/README

Share and enjoy,
-n

--
Nathaniel J. Smith
Postdoctoral researcher - Informatics - University of Edinburgh
http://vorpus.org
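For readers who haven't used patsy formulas, a minimal sketch of the new
spline builtins (the df value and the data are made up for illustration):

    import numpy as np
    from patsy import dmatrix

    x = np.linspace(0., 1., 100)
    # Natural cubic regression spline basis for x, mgcv-compatible.
    basis = dmatrix("cr(x, df=4)", {"x": x})
    print basis.shape  # one column per spline df, plus the intercept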
From erik.tollerud at gmail.com  Thu Jul 17 09:36:11 2014
From: erik.tollerud at gmail.com (Erik Tollerud)
Date: Thu, 17 Jul 2014 09:36:11 -0400
Subject: [SciPy-User] ANN: Astropy v0.4 released
Message-ID:

Hello,

We are very happy to announce the third major public release (v0.4) of the
astropy package, a core Python package for Astronomy:

http://www.astropy.org

Astropy is a community-driven package intended to contain much of the core
functionality and common tools needed for performing astronomy and
astrophysics with Python.

New and improved major functionality in this release includes:

* A new astropy.vo.samp sub-package adapted from the previously standalone
  SAMPy package
* A re-designed astropy.coordinates sub-package for celestial coordinates
* A new 'fitsheader' command-line tool that can be used to quickly inspect
  FITS headers
* A new HTML table reader/writer
* Improved performance for Quantity objects
* A re-designed configuration framework

In addition, hundreds of smaller improvements and fixes have been made. An
overview of the changes is provided at:

http://docs.astropy.org/en/latest/whatsnew/0.4.html

Instructions for installing Astropy are provided at the
http://www.astropy.org website, and extensive documentation can be found
at:

http://docs.astropy.org

In particular, if you use Anaconda, you can update to v0.4 with:

conda update astropy

Please report any issues, or request new features via our GitHub
repository:

https://github.com/astropy/astropy/issues

Over 80 developers have contributed code to Astropy so far, and you can
find out more about the team behind Astropy here:

http://www.astropy.org/team.html

If you use Astropy directly - or as a dependency to another package - for
your work, please remember to include the following acknowledgment at the
end of papers:

"This research made use of Astropy, a community-developed core Python
package for Astronomy (Astropy Collaboration, 2013)."

where "(Astropy Collaboration, 2013)" is the Astropy paper which was
published last year:

http://adsabs.harvard.edu/abs/2013A%26A...558A..33A

Please feel free to forward this announcement to anyone you think might
be interested in this release.

We hope that you enjoy using Astropy as much as we enjoyed developing it!

Thomas Robitaille, Erik Tollerud, and Perry Greenfield
on behalf of The Astropy Collaboration

From jschwabedal at gmail.com  Thu Jul 17 16:15:17 2014
From: jschwabedal at gmail.com (Justus Schwabedal)
Date: Thu, 17 Jul 2014 16:15:17 -0400
Subject: [SciPy-User] building scipy with umfpack and amd .. how does it help
Message-ID:

Hi Sai,

is it possibly because the other configuration runs everything in 32-bit?

Best, J

2014-07-16 13:00 GMT-04:00, scipy-user-request at scipy.org:
> [...]

--
Justus Schwabedal
skype: justus1802
Work: +1 617 353 4659
Handy (US): +1 617 449 8478
Handy (D): +49 177 939 5281
email: jschwabedal at googlemail.com

657 John Wesley Dobbs Ave NE
30312 Atlanta, GA
USA

Steinkreuzstr. 23
53757 Sankt Augustin
Germany
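A quick way to test the 32-bit hypothesis, and to compare the two builds
in general, is to inspect each interpreter directly; a sketch (the output
will of course differ per machine):

    import platform
    import numpy, scipy

    print platform.architecture()  # e.g. ('64bit', 'ELF')
    numpy.__config__.show()        # BLAS/LAPACK numpy was built against
    scipy.__config__.show()        # umfpack/amd sections, as in Sai's listing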
URL: From barrett.n.b at gmail.com Fri Jul 18 16:55:57 2014 From: barrett.n.b at gmail.com (Barrett B) Date: Fri, 18 Jul 2014 20:55:57 +0000 (UTC) Subject: [SciPy-User] Maximum Lyapunov exponent of my previous system References: <649847CE7F259144A0FD99AC64E7326D0EFA43@MLBXV17.nih.gov> <1405435202.40803.YahooMailNeo@web125505.mail.ne1.yahoo.com> Message-ID: > > Hello > Barrett, > > ? > > May I suggest a different approach.? It's one I've used and it's used a lot in the > nonlinear dynamics community.? Also it's > more mathematically correct and stable. > > ? > > You have the right idea that you want to > examine the dynamics of a perturbation to the system, but the best way to do > that is to use the original equations of motion along with what is called the > variational equation.? The latter > describes the motion of the perturbation itself. Here's a "derivation": > > ? > > Original equations:? dx/dt = x and F(x), F(x) are vector > quantities and F(x) is the vector field which describes the motion at the next > instant of time.? It's the right-hand > side of your Lorenz equations. > > ? > > Perturbation equations (also called > variational equations): > > ? > > Perturb x -> x+w, where w depends on t and > is considered small.? Then we have, > > ? > d(x+w)/dt = F(x+w) ~ F(x) + DF(x)w + O(2),?? (*)? > > (this is just a Taylor expansion) > > ? > > where ~ means approximately equal, O(2) are > terms in w^2 or higher, and DF is the matrix of partial differentiations of F > with respect to components of x.? DF is usually > called the Jacobian of F. > > ? > > Now subtract the original equations from (*) > and get the equations of motion for the perturbation: > > ? > > ? dw/dt > = DF(x) w? (**) > > ? > > Note, these are linear, although DF is time > dependent through x=x(t). > > ? > > To get the maximum Lyapunov exponent you solve > the original equations dx/dt=F and the variational equations (**) > simultaneously using a solver (e.g. odeint).? > For the Lorenz system this will be a 6-dimensional system of equations. What > you take advantage of is that the max Lyapunov exponent will cause the original > perturbation to stretch in that direction the most and it will become the major > cause of a change in the length of w as time progresses. Hence, you can start > the system with w= some random unit vector (call this w(0)), let it run for a time= > T, then stop (now you have w(T)), and calculate r= ln(|w(T)|/|w(0)|)/.? r/T is then an approximation for the largest > Lyapunov exponent (I'll call that lam_max).? > But you want to make sure you cover most of the attractor (the system's > trajectory), so you repeat the above process sequentially many times and keep a > running sum of the ratios? r_i = ln(|w(t_i)|/|w(t_i-1)|), > where t_i=t_(i-1) + T above.? NOTE: when > you do this you re-start the system at the last t_i and x(t_i), but you > re-initialize w to a random unit vector again. > > ? > > You want to do this each time so w can grow a > lot along the length of the lam_max direction in the dynamics.? A rough algorithm would look like this: > > ? > > (1) Intialize x(0), w= random unit vector, and > running sum=0. > > (2) Run the system for some time = T (maybe > the time to go around the attractor a few times roughly speaking, so that |w(t_i)|/|w(t_i-1)| > > some max ratio, e.g. 1000) > > (4) Have we done this enough times (say N)?? (you can try to see how many repetitions give > you a stable lam_max number). > > ???? If > yes,? then the average of the final r_i > sum is lam_max.? 
> That is,
>
>                    N
>   lam_max = (1/NT) SUM r_i
>                   i=1
>
> I hope that's clear. You can check the explanation on page 74 of
> Practical Numerical Algorithms for Chaotic Systems by Parker and Chua
> (Springer-Verlag).
>
> Good luck.
>
> -- Lou Pecora
>
> On Tuesday, July 15, 2014 5:49 AM, Barrett B wrote:
>
> > > Generally, the likelihood of a helpful response will be higher if you
> > > post a working example.
> > >
> > > From a few moments looking at Meador's thesis, I don't think you are
> > > implementing the algorithm he describes, but I haven't looked in
> > > detail and could be mistaken. I assume you've looked at his matlab
> > > code in the appendices? It might also be worthwhile to find a copy of
> > > his reference 22, where he claims more details are presented.
> > >
> > > Eric
> >
> > Apologies if this is a duplicate--Gmane has been very glitchy as of
> > late. Here is my latest failed attempt, complete with copious
> > documentation:
> >
> > ==================
> > if Lyap:
> >     eps_L = np.array([1e-7, 1e-7, 1e-7])
> >     #Toss any data with an index less than i_L:
> >     x_0 = x[i_L: ]; y_0 = y[i_L: ]; z_0 = z[i_L: ]
> >     #Offset each variable by arbitrarily small amounts:
> >     x_eps = x_0 + eps_L[0]; y_eps = y_0 + eps_L[1]; z_eps = z_0 + eps_L[2]
> >     #Take the second norm ("None") of this offset distance:
> >     d_0 = np.linalg.norm(eps_L, ord=None)
> >     #Since dX/dt * del(t) ~= del(X), acquire it from the equations.
> >     delX = Lorenz(np.array([x_eps, y_eps, z_eps]))*t_step
> >     #Shifts the array by one (for integration). Set delta[0] = 0.
> >     delX = np.roll(delX, 1)
> >     for i in range (len(delta)): delta[i][0] = 0
> >     #Use Euler's method to integrate one timestep. (Can later be optimized.)
> >     x_Lap = x_eps + delX[0]
> >     y_Lap = y_eps + delX[1]
> >     z_Lap = z_eps + delX[2]
> >     #Take the second norm of each point.
> >     d_Lap = np.sqrt(x_Lap**2 + y_Lap**2 + z_Lap**2)
> >     #Take the natural log.
> >     LapLog = np.log(d_Lap/d_0)
> >     #Find the average. This is the largest Lyapunov exponent.
> >     print np.linalg.norm(LapLog, ord=1) / len(LapLog)
> > ===========
> >
> > If you have the slightest clue what I am doing wrong, I would absolutely
> > love to know. I have spent so much time on this to the point of great
> > discouragement. :(

Lou,

Could you please post what this would look like in code? Or at least, an
example of this algorithm used in some Python script? I have tried, and
failed, and failed, and failed, to get this working. Desperate doesn't even
begin to describe my state of affairs. :(

From afraser at lanl.gov  Fri Jul 18 17:14:31 2014
From: afraser at lanl.gov (Andrew Fraser)
Date: Fri, 18 Jul 2014 15:14:31 -0600
Subject: [SciPy-User] Maximum Lyapunov exponent of my previous system
In-Reply-To: References: <649847CE7F259144A0FD99AC64E7326D0EFA43@MLBXV17.nih.gov>
 <1405435202.40803.YahooMailNeo@web125505.mail.ne1.yahoo.com>
Message-ID: <53C98E37.1090803@lanl.gov>

I wrote code to estimate Lyapunov exponents for a book I wrote several
years ago. I haven't looked at it for about a year. Can you access
http://code.google.com/p/hmmds/source/browse/code/applications/synthetic/ ?
If so, take a look at the file LyapPlot.py.

On 07/18/2014 02:55 PM, Barrett B wrote:
[...]
> Lou,
>
> Could you please post what this would look like in code?
> Or at least, an example of this algorithm used in some Python script? I
> have tried, and failed, and failed, and failed, to get this working.
> Desperate doesn't even begin to describe my state of affairs. :(

From barrett.n.b at gmail.com  Fri Jul 18 18:51:49 2014
From: barrett.n.b at gmail.com (Barrett B)
Date: Fri, 18 Jul 2014 22:51:49 +0000 (UTC)
Subject: [SciPy-User] Maximum Lyapunov exponent of my previous system
References: <649847CE7F259144A0FD99AC64E7326D0EFA43@MLBXV17.nih.gov>
 <1405435202.40803.YahooMailNeo@web125505.mail.ne1.yahoo.com>
 <53C98E37.1090803@lanl.gov>
Message-ID:

Andrew Fraser lanl.gov> writes:

> I wrote code to estimate Lyapunov exponents for a book I wrote several
> years ago. I haven't looked at it for about a year. Can you access
> http://code.google.com/p/hmmds/source/browse/code/applications/synthetic/ ?
> If so, take a look at the file LyapPlot.py.

It indeed is accessible, thanks. Quick questions:

1. What is file f?
2. I can't import lorenz. I see the part near the top about setting up the
pathname--where should I generally search for the Lorenz package?

From equilibrium87 at gmail.com  Sat Jul 19 13:03:49 2014
From: equilibrium87 at gmail.com (Paul Mayer)
Date: Sat, 19 Jul 2014 19:03:49 +0200
Subject: [SciPy-User] Linking problems MKL + numpy 1.8.1 + scipy 0.14.0
Message-ID:

Dear Scipy-Users,

I am running into problems trying to get scipy running on a machine that
uses ICC (Composer XE 2013 SP1.3.174 suite). For my site.cfg, I followed
this tutorial:
https://software.intel.com/en-us/articles/numpyscipy-with-intel-mkl?language=es
and I further have AMD / UMFPACK / FFTW installed. Unfortunately, my Intel
suite does not come with ifort, so I am stuck with gfortran for compiling
scipy.

The numpy build works just fine, passes all the tests and shows the
correct configuration. scipy, however, does not pass the tests, because I
get several cases of:

* ImportError: /usr/local/lib/python2.7/dist-packages/scipy/linalg/_fblas.so: undefined symbol: _intel_fast_memcpy
* ImportError: /usr/local/lib/python2.7/dist-packages/scipy/special/_ufuncs.so: undefined symbol: __libm_sse2_sincos

I am quite sure this is due to me not using ifort but gfortran. Could
anyone maybe provide some pointers on how to fix that? I am happy to
provide more information, but I am not sure what exactly would be
required.

Thanks & Kind Regards,
Paul

From lou_boog2000 at yahoo.com  Sat Jul 19 13:13:59 2014
From: lou_boog2000 at yahoo.com (Lou Pecora)
Date: Sat, 19 Jul 2014 10:13:59 -0700
Subject: [SciPy-User] Maximum Lyapunov exponent of my previous system
In-Reply-To: References: <649847CE7F259144A0FD99AC64E7326D0EFA43@MLBXV17.nih.gov>
 <1405435202.40803.YahooMailNeo@web125505.mail.ne1.yahoo.com>
 <53C98E37.1090803@lanl.gov>
Message-ID: <1405790039.67414.YahooMailNeo@web125504.mail.ne1.yahoo.com>

(Hi, Andrew)

Barrett,

Here is a sample of Python code, which you should treat as pseudo-code,
that shows how to code up the method I outlined. I have a few important
things to say about this whole process.

(1) I tried to give you a theory view of what you are doing. The version I
showed is the more rigorous form of the one in the dissertation you refer
to, which uses a rough approximation to the form I used. You should use
the form (with the Jacobian) I showed you.
(2) Calculating the Lyapunov exponent is not as simple as doing something
as well established as, say, an FFT. You really want to understand the
math behind it so you can properly test your code and check results. You
don't need to follow all the theorems, but at least have a good sense of
what you are doing and why you are using the Jacobian, for example. I
strongly recommend against using any code you get as a black box and
trusting what comes out. Know how to write your own. Read the section of
the book by Chua I mentioned. You should also check the book by Schreiber
and Kantz on numerical techniques for nonlinear dynamics (I forget the
actual title, but a search using the author names will bring it up).

(3) Andrew Fraser's code appears to give all Lyapunov exponents. Andy
knows his stuff and it's a good example of how to do that. My code is just
to find the largest Lyapunov exponent.

(4) I have not tested my code, but I think it will get you pretty far if
you do the above.

The code:

import numpy as NP
import numpy.random as RN

# ---- Calc Lyapunov exponent ------------------------------------
# Input:
#   vics= initial conditions
#   n= dimension of original ODE, e.g. n=3 for Lorenz
#   T= length of time step for each ODE integration. Test for the best size
#      to use to get dx to grow by about 1000*the initial dx value. You can
#      try other growth ratios besides 1000.
#   NT= number of times to integrate the system for T "secs" each time; do
#      this enough times to cover most of the attractor.
#
def LypExp_calc(vics, n, T, NT):
    lnsum= 0.0  # Sum of logs of dx ratios
    x= vics     # set dynamic variables to initial conditions
    for i in xrange(NT):
        dx= NP.RN.random_sample((n,))/sqrt(n)  # Make random unit vector
        y= NP.concatenate([x,dx])  # Make 2n vector to integrate original
                                   # ODEs and variational system (dx ODE)
        # Integrate the system for T "secs". vecf is the vector field of
        # the original system, e.g. the Lorenz ODE, plus the variational
        # ODE. The combined system is an ODE of 2*n dimensions.
        yT= YourODEintegrator(y, T, vecf)  # You choose which integrator to
                                           # put here; this is just a
                                           # place holder
        dxT= yT[n:2*n]  # Separate out the evolved dx part
        lendx= sqrt(NP.dot(dxT,dxT))  # Calculate length of the evolved dx
        lnsum += log(lendx)  # Add logarithm of dxT length to running sum
        x= yT[:n]  # Reset x to the new point
    LypExp= lnsum/(NT*T)  # Calculate the Lyapunov exponent
    return LypExp

# ---- Combined vector field of original ODEs and variational ODE --------
# You will probably have to write this to conform to what type of function
# YourODEintegrator expects.
# You have to provide the vector field function F and its Jacobian DF.
#
def vecf(y):
    n=3           # Dimension of your original ODE (set for Lorenz here)
    x= y[:n]      # Pull out the first n dynamical variables, e.g. (x,y,z)
    dx= y[n:2*n]  # Pull out the last n variables, the perturbation
    vf= F(x)      # Calculate the vector field
    Dvf= NP.dot(DF(x),dx)  # Apply the variational matrix (the Jacobian of
                           # F = DF) to the dx vector
    # Return the combined vector field
    return NP.concatenate([vf, Dvf])

-- Lou Pecora

On Friday, July 18, 2014 6:52 PM, Barrett B wrote:
[...]
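For concreteness, the two ingredients Lou's vecf assumes -- the vector
field F and its Jacobian DF -- can be written out directly for the Lorenz
system. A sketch, not Lou's tested code; the parameter values are the
standard sigma=10, rho=28, beta=8/3 used elsewhere in this thread, and the
Jacobian rows are simply the partial derivatives of the Lorenz right-hand
side:

--------------------
import numpy as np

sigma, rho, beta = 10.0, 28.0, 8.0/3  # standard Lorenz parameters (assumed)

def F(X):
    """Lorenz vector field at X = (x, y, z)."""
    x, y, z = X
    return np.array([sigma*(y - x), x*(rho - z) - y, x*y - beta*z])

def DF(X):
    """Jacobian of F: DF[i, j] = dF_i/dX_j, evaluated at X."""
    x, y, z = X
    return np.array([[-sigma,  sigma,  0.0 ],
                     [rho - z, -1.0,  -x   ],
                     [y,        x,    -beta]])
--------------------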
From lou_boog2000 at yahoo.com  Sat Jul 19 14:09:28 2014
From: lou_boog2000 at yahoo.com (Lou Pecora)
Date: Sat, 19 Jul 2014 11:09:28 -0700
Subject: [SciPy-User] Maximum Lyapunov exponent of my previous system
In-Reply-To: <1405790039.67414.YahooMailNeo@web125504.mail.ne1.yahoo.com>
References: <649847CE7F259144A0FD99AC64E7326D0EFA43@MLBXV17.nih.gov>
 <1405435202.40803.YahooMailNeo@web125505.mail.ne1.yahoo.com>
 <53C98E37.1090803@lanl.gov>
 <1405790039.67414.YahooMailNeo@web125504.mail.ne1.yahoo.com>
Message-ID: <1405793368.79551.YahooMailNeo@web125505.mail.ne1.yahoo.com>

One mistake in my pseudo code. The line

    dx= NP.RN.random_sample((n,))/sqrt(n)  # Make random unit vector

should be replaced with

    temp= NP.RN.random_sample((n,))  # Make random unit vector
    templen= sqrt(dot(temp,temp))
    dx= temp/templen

Sorry.

-- Lou Pecora

On Saturday, July 19, 2014 1:13 PM, Lou Pecora wrote:
[...]
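An aside on that correction, for what it's worth: normalizing a uniform
[0, 1) sample yields directions biased toward the positive orthant. If a
direction-unbiased unit vector is wanted, the usual trick is to normalize a
normal draw instead. A sketch (not from the thread; n=3 assumed for
Lorenz):

--------------------
import numpy as np

n = 3
temp = np.random.standard_normal(n)  # symmetric in all directions
dx = temp / np.linalg.norm(temp)     # rescaled to unit length
--------------------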
From barrett.n.b at gmail.com  Sat Jul 19 19:19:11 2014
From: barrett.n.b at gmail.com (Barrett B)
Date: Sat, 19 Jul 2014 23:19:11 +0000 (UTC)
Subject: [SciPy-User] Maximum Lyapunov exponent of my previous system
References: <649847CE7F259144A0FD99AC64E7326D0EFA43@MLBXV17.nih.gov>
 <1405435202.40803.YahooMailNeo@web125505.mail.ne1.yahoo.com>
 <53C98E37.1090803@lanl.gov>
 <1405790039.67414.YahooMailNeo@web125504.mail.ne1.yahoo.com>
 <1405793368.79551.YahooMailNeo@web125505.mail.ne1.yahoo.com>
Message-ID:

Lou Pecora yahoo.com> writes:

> One mistake in my pseudo code. The line
>
>     dx= NP.RN.random_sample((n,))/sqrt(n)  # Make random unit vector
>
> should be replaced with
>
>     temp= NP.RN.random_sample((n,))  # Make random unit vector
>     templen= sqrt(dot(temp,temp))
>     dx= temp/templen
>
> Sorry.

Thanks, Lou. Can I double-check to make sure I've got the tabs in the
correct places?
(Comments removed from this code.)

def LypExp_calc(vics, n, T, NT):
    lnsum= 0.0
    x= vics
    for i in xrange(NT):
        temp= NP.RN.random_sample((n,))
        templen= sqrt(dot(temp,temp))
        dx= temp/templen
        y=NP.concatenate([x,dx])
    yT= YourODEintegrator(y,T, vecf)
    dxT= yT[n:2*n]
    lnsum += log(lendx)
    x= yT[:n]
    LypExp= lnsum/(NT*T)
    return LypExp

def vecf(y):
    n=3
    x= y[:n]
    dx= y[n:2*n]
    vf= F(x)
    Dvf= NP.dot(DF(x),dx)
    return NP.concatenate([vf, Dvf])

From lou_boog2000 at yahoo.com  Sun Jul 20 08:17:37 2014
From: lou_boog2000 at yahoo.com (Lou Pecora)
Date: Sun, 20 Jul 2014 05:17:37 -0700
Subject: [SciPy-User] Maximum Lyapunov exponent of my previous system
In-Reply-To: References: <649847CE7F259144A0FD99AC64E7326D0EFA43@MLBXV17.nih.gov>
 <1405435202.40803.YahooMailNeo@web125505.mail.ne1.yahoo.com>
 <53C98E37.1090803@lanl.gov>
 <1405790039.67414.YahooMailNeo@web125504.mail.ne1.yahoo.com>
 <1405793368.79551.YahooMailNeo@web125505.mail.ne1.yahoo.com>
Message-ID: <1405858657.28503.YahooMailNeo@web125502.mail.ne1.yahoo.com>

Hi, Barrett,

I added some needed tabs and a calculation of lendx (the length of dx),
which I left out. You can see by that why I want you to understand the
math. Even experienced people like me can make code mistakes. :-)

def LypExp_calc(vics, n, T, NT):
    lnsum= 0.0
    x= vics
    for i in xrange(NT):
        temp= NP.RN.random_sample((n,))
        templen= sqrt(dot(temp,temp))
        dx= temp/templen
        y=NP.concatenate([x,dx])
        yT= YourODEintegrator(y,T, vecf)
        dxT= yT[n:2*n]
        lendx= sqrt(NP.dot(dxT,dxT))  #  Line added
        lnsum += log(lendx)
        x= yT[:n]
    LypExp= lnsum/(NT*T)
    return LypExp

def vecf(y):
    n=3
    x= y[:n]
    dx= y[n:2*n]
    vf= F(x)
    Dvf= NP.dot(DF(x),dx)
    return NP.concatenate([vf, Dvf])

-- Lou Pecora

On Saturday, July 19, 2014 7:19 PM, Barrett B wrote:
[...]
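Putting the pieces of Lou's recipe together into something runnable: the
sketch below is one possible filling-in, specialized to the Lorenz system
and using scipy.integrate.odeint for the 6-D combined system. It is not
Lou's tested code; the renormalization time T, the repeat count NT, the
random seed, and the initial condition are all arbitrary choices, and the
printed number should land near the commonly quoted ~0.9 for the standard
Lorenz parameters.

--------------------
import numpy as np
from scipy.integrate import odeint

sigma, rho, beta = 10.0, 28.0, 8.0/3  # standard Lorenz parameters (assumed)

def F(X):
    x, y, z = X
    return np.array([sigma*(y - x), x*(rho - z) - y, x*y - beta*z])

def DF(X):
    x, y, z = X
    return np.array([[-sigma, sigma, 0.0],
                     [rho - z, -1.0, -x],
                     [y, x, -beta]])

def vecf(Y, t):
    # Combined 6-D field: dx/dt = F(x) plus the variational dw/dt = DF(x) w.
    x, w = Y[:3], Y[3:]
    return np.concatenate([F(x), DF(x).dot(w)])

def lyap_max(x0, T=1.0, NT=2000, seed=0):
    # T and NT are arbitrary choices here; tune them as Lou describes.
    rng = np.random.RandomState(seed)
    x = np.asarray(x0, dtype=float)
    lnsum = 0.0
    for _ in range(NT):
        w = rng.standard_normal(3)
        w /= np.linalg.norm(w)                  # random unit perturbation
        Y = odeint(vecf, np.concatenate([x, w]), [0.0, T])[-1]
        lnsum += np.log(np.linalg.norm(Y[3:]))  # ln(|w(T)|/|w(0)|), |w(0)|=1
        x = Y[:3]                               # restart from the new point
    return lnsum / (NT * T)

print(lyap_max([1.0, 1.0, 1.0]))  # roughly 0.9 for these parameters
--------------------

One design note: drawing w from a normal distribution and normalizing gives
a direction uniform over the sphere, avoiding the positive-orthant bias of
normalizing a uniform sample.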
From ccordoba12 at gmail.com  Sun Jul 20 18:48:45 2014
From: ccordoba12 at gmail.com (Carlos Córdoba)
Date: Sun, 20 Jul 2014 17:48:45 -0500
Subject: [SciPy-User] ANN: Spyder v2.3 is released!
Message-ID: <53CC474D.90301@gmail.com>

Hi all,

On behalf of Spyder's development team
(http://code.google.com/p/spyderlib/people/list), I'm pleased to announce
that Spyder 2.3 has been released and is available for Windows
XP/Vista/7/8, GNU/Linux and MacOS X:
https://bitbucket.org/spyder-ide/spyderlib/downloads

This release represents 14 months of development since version 2.2 and
introduces major enhancements and new features:

* Python 3 support (versions 3.2, 3.3 and 3.4 are supported)
* Drop Python 2.5 support
* Various Editor improvements:
  - Use the Tab key to do code completions
  - Highlight cells
  - First-class support for Enaml files
  - Syntax highlighting for Julia files
  - Use Shift+Tab to show calltips
  - Improve how calltips are shown
  - Do code completions using the tokens found in a file
* Better looking Object Inspector
* Several refinements to the user interface to make it easier and more
  intuitive
* And many other changes: http://code.google.com/p/spyderlib/wiki/ChangeLog

Spyder 2.2 has been a huge success (downloaded more than 400,000 times!)
and we hope 2.3 will be just as successful. For that we fixed 70 important
bugs, merged 30 pull requests from 11 authors and added almost 1000 commits
between these two releases.

Spyder is a free, open-source (MIT license) interactive development
environment for the Python language with advanced editing, interactive
testing, debugging and introspection features. Originally designed to
provide MATLAB-like features (integrated help, interactive console,
variable explorer with GUI-based editors for dictionaries, NumPy arrays,
...), it is strongly oriented towards scientific computing and software
development.

Thanks to the `spyderlib` library, Spyder also provides powerful
ready-to-use widgets: embedded Python console (example:
http://packages.python.org/guiqwt/_images/sift3.png), NumPy array editor
(example: http://packages.python.org/guiqwt/_images/sift2.png), dictionary
editor, source code editor, etc.

A description of the key features, with tasty screenshots, can be found at:
http://code.google.com/p/spyderlib/wiki/Features

Don't forget to follow Spyder updates/news:

* on the project website: http://code.google.com/p/spyderlib/
* and on our official blog: http://spyder-ide.blogspot.com/

Last, but not least, we welcome any contribution that helps make Spyder an
efficient scientific development/computing environment. Join us to help
create your favorite environment!
(http://code.google.com/p/spyderlib/wiki/NoteForContributors)

Enjoy!
-Carlos

From barrett.n.b at gmail.com  Sun Jul 20 19:22:50 2014
From: barrett.n.b at gmail.com (Barrett B)
Date: Sun, 20 Jul 2014 23:22:50 +0000 (UTC)
Subject: [SciPy-User] Maximum Lyapunov exponent of my previous system
References: <649847CE7F259144A0FD99AC64E7326D0EFA43@MLBXV17.nih.gov>
 <1405435202.40803.YahooMailNeo@web125505.mail.ne1.yahoo.com>
 <53C98E37.1090803@lanl.gov>
 <1405790039.67414.YahooMailNeo@web125504.mail.ne1.yahoo.com>
 <1405793368.79551.YahooMailNeo@web125505.mail.ne1.yahoo.com>
 <1405858657.28503.YahooMailNeo@web125502.mail.ne1.yahoo.com>
Message-ID:

Lou Pecora yahoo.com> writes:

> Hi, Barrett,
>
> I added some needed tabs and a calculation of lendx (the length of dx),
> which I left out.
[...]
Thanks, Lou, but I'm really not sure how to get this up and running. If
it's not obvious by now, my programming skills are very, very weak. :( For
example, this is what I have been using for my integrator:

#Lorenz equations.
def Lorenz(X):
    return np.array([sigma*(X[1] - X[0]), #dx/dt
                     X[0]*(rho - X[2]) - X[1], #dy/dt
                     X[0]*X[1] - beta*X[2]]) #dz/dt

#Time-independent RK4 method.
def RK4(X, timeStep):
    k1 = Lorenz(X)
    k2 = Lorenz(X + timeStep*k1 / 2)
    k3 = Lorenz(X + timeStep*k2 / 2)
    k4 = Lorenz(X + timeStep*k3)
    return X + timeStep*(k1 + 2*k2 + 2*k3 + k4)/6

And this won't work, because there are six variables to integrate, not
three. Help!
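One way to adapt the integrator above to six variables, for what it's
worth, is to let the RK4 step take the vector field as an argument and
feed it the combined original-plus-variational field. A sketch along those
lines, under the assumptions used earlier in the thread (standard Lorenz
parameters; the Jacobian rows are the partial derivatives of the Lorenz
right-hand side):

--------------------
import numpy as np

sigma, rho, beta = 10.0, 28.0, 8.0/3  # standard Lorenz parameters (assumed)

def Lorenz(X):
    return np.array([sigma*(X[1] - X[0]),       #dx/dt
                     X[0]*(rho - X[2]) - X[1],  #dy/dt
                     X[0]*X[1] - beta*X[2]])    #dz/dt

def Jacobian(X):
    return np.array([[-sigma,      sigma,  0.0  ],
                     [rho - X[2], -1.0,   -X[0] ],
                     [X[1],        X[0],  -beta ]])

def combined(Y):
    """Six-variable field: the trajectory X plus the perturbation dX."""
    X, dX = Y[:3], Y[3:]
    return np.concatenate([Lorenz(X), Jacobian(X).dot(dX)])

def RK4(Y, timeStep, field):
    # Same RK4 step as above, but generic in the vector field.
    k1 = field(Y)
    k2 = field(Y + timeStep*k1 / 2)
    k3 = field(Y + timeStep*k2 / 2)
    k4 = field(Y + timeStep*k3)
    return Y + timeStep*(k1 + 2*k2 + 2*k3 + k4)/6

# One 6-D step: trajectory starts at (1, 1, 1), perturbation along x.
Y = np.array([1.0, 1.0, 1.0, 1.0, 0.0, 0.0])
Y = RK4(Y, 1e-3, combined)
--------------------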
From lou_boog2000 at yahoo.com  Mon Jul 21 07:27:36 2014
From: lou_boog2000 at yahoo.com (Lou Pecora)
Date: Mon, 21 Jul 2014 04:27:36 -0700
Subject: [SciPy-User] Maximum Lyapunov exponent of my previous system
In-Reply-To: References: <649847CE7F259144A0FD99AC64E7326D0EFA43@MLBXV17.nih.gov>
 <1405435202.40803.YahooMailNeo@web125505.mail.ne1.yahoo.com>
 <53C98E37.1090803@lanl.gov>
 <1405790039.67414.YahooMailNeo@web125504.mail.ne1.yahoo.com>
 <1405793368.79551.YahooMailNeo@web125505.mail.ne1.yahoo.com>
 <1405858657.28503.YahooMailNeo@web125502.mail.ne1.yahoo.com>
Message-ID: <1405942056.98849.YahooMailNeo@web125506.mail.ne1.yahoo.com>

Dear Barrett,

I suspect your problem is not so much weak programming skills as a weak
understanding of dynamical systems and ODEs. Sorry, I do not mean to sound
negative, but I think you really need to study more about these kinds of
systems and how to solve them. I would guess the RK solver is something
you wrote; it is not how you would do Runge-Kutta. You could get by quite
well with odeint in the scipy library. It's a good routine for solving
ODEs. But you are hampered by not knowing what the Jacobian is. It
provides the other three equations you need from the variational part of
the problem. I tried to explain it, but I guess I didn't get it through.
Please take a step back and put some time in to understand these concepts
and methods.

Going at it by just hoping someone's code will work is not the answer, and
it can cause you lots of problems, as you are seeing. I don't know why you
are trying to calculate the Lyapunov exponents, but if this is something
that is important in your job or school, then a deeper understanding will
be the most beneficial.

I wish you well in your endeavor.

Sincerely yours,
-- Lou Pecora

On Sunday, July 20, 2014 7:22 PM, Barrett B wrote:
[...]
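For reference, the odeint route Lou mentions is a near drop-in for the
hand-rolled RK4 above. A minimal sketch for the plain 3-D Lorenz system
(the time span and step are arbitrary choices here, not from the thread):

--------------------
import numpy as np
from scipy.integrate import odeint

sigma, rho, beta = 10.0, 28.0, 8.0/3  # standard Lorenz parameters (assumed)

def f(X, t):  # odeint passes t even when the field is time-independent
    return [sigma*(X[1] - X[0]),
            X[0]*(rho - X[2]) - X[1],
            X[0]*X[1] - beta*X[2]]

t = np.arange(0.0, 50.0, 1e-3)
soln = odeint(f, [1.0, 1.0, 1.0], t)  # soln has shape (len(t), 3)
--------------------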
From barrett.n.b at gmail.com  Mon Jul 21 14:55:36 2014
From: barrett.n.b at gmail.com (Barrett B)
Date: Mon, 21 Jul 2014 18:55:36 +0000 (UTC)
Subject: [SciPy-User] Maximum Lyapunov exponent of my previous system
References:
Message-ID:

Alright, guys, I FINALLY got it to work. Thanks to all for the help. The
current code is:

import numpy as np
#import matplotlib.pyplot as plt
#import matplotlib.mlab as mlab
from scipy.integrate import *

#System constants
rates = [-0.1, -0.4, -1]
sigma = 10; beta = 8.0/3; rho = 28

#Simulation constants
timeStep = 1e-3
tStartTrack = 20 #Time when Lyapunov exponent calculation begins.
nStartTrack = int(tStartTrack/timeStep) #init step for Lyap expo
tFinal = 4000 #Final timestep.
nFinal = int(tFinal/timeStep) #a.k.a. N
Lyap = True #Whether to calculate the largest Lyapunov exponent.

#Lorenz equations.
def Lorenz(X):
    return np.array([sigma*(X[1] - X[0]), #dx/dt
                     X[0]*(rho - X[2]) - X[1], #dy/dt
                     X[0]*X[1] - beta*X[2]]) #dz/dt

#ODE. Lorenz is time independent, but odeint requires the time to be input.
def f(X, t):
    return Lorenz(X)

#Time-independent RK4 method.
def RK4(X, timeStep):
    k1 = Lorenz(X)
    k2 = Lorenz(X + timeStep*k1 / 2)
    k3 = Lorenz(X + timeStep*k2 / 2)
    k4 = Lorenz(X + timeStep*k3)
    return X + timeStep*(k1 + 2*k2 + 2*k3 + k4)/6

#Script starts here.
Y0 = np.array([1.0, 1.0, 1.0]) #x, y, z
if Lyap:
    t = np.arange(0, tStartTrack, timeStep) #Will use RK4 the rest of the way.
else:
    t = np.arange(0, tFinal, timeStep)
soln = odeint(f, Y0, t)

if Lyap:
    L_s = 0.0; saves = 0; timesUsed = 0
    offsetArray = np.array([1e-6, 1e-6, 1e-6])
    tInterval = 10.0 #Time length of each interval. Reset soln_1 after each one.
    numberOfIntervals = int((tFinal - tStartTrack)/tInterval)
    pointsPerInterval = int(tInterval/timeStep) #Data points per interval.
    soln_0 = soln[nStartTrack-1, :] #The original solution
    soln_1 = soln_0 + offsetArray #The perturbed solution
    #Second norm ("None") of the distance, before some integration:
    d_0 = np.linalg.norm(offsetArray, ord=None)
    print 'Beginning Lyapunov calculation'

    for currentInterval in range (numberOfIntervals):
        if currentInterval>0:
            #Rescale soln_1 such that its dist from soln_0 = that of eps_L's.
            newPerturb = (soln_1 - soln_0) *\
                np.linalg.norm(offsetArray, ord=None)/d_1
            soln_1 = soln_0 + newPerturb

        for j in range(pointsPerInterval):
            #Update status.
            statusPct = 100.0*(currentInterval + 1)/numberOfIntervals
            #Use RK4 to integrate this ONE timestep.
            soln_1 = RK4(soln_1, timeStep)
            soln_0 = RK4(soln_0, timeStep)

        #Second norm ("None") of the distance, after some integration:
        d_1 = np.linalg.norm(soln_0 - soln_1, ord=None)
        #Keep a running total of the natural log of the ratio d_1/d_0.
        if d_1/d_0 > 1e-20: #just in case.
            L_s += np.log(d_1/d_0)
            timesUsed += 1
        else:
            saves += 1

    #Divide by the number of data points and the timestep.
    #This is the largest Lyapunov exponent.
    print L_s/tInterval/numberOfIntervals

From jonathanrocher at gmail.com  Mon Jul 21 14:58:53 2014
From: jonathanrocher at gmail.com (Jonathan Rocher)
Date: Mon, 21 Jul 2014 13:58:53 -0500
Subject: [SciPy-User] [ANN] Submissions open for Python symposium at AMS2015
Message-ID:

[Sorry for cross-posts]

Dear all,

Abstract submission is open for the Fifth Symposium on *Modeling and
Analysis Using Python* at the American Meteorological Society annual
meeting in Phoenix, AZ, January 3-8, 2015. We are soliciting papers related
to all areas of the use of Python in the atmospheric and oceanic sciences.
The call for papers and the link to submit an abstract are located at:
http://annual.ametsoc.org/2015/index.cfm/programs-and-events/conferences-and-symposia/fifth-symposium-on-advances-in-modeling-and-analysis-using-python/

The abstract submission deadline is coming up quickly: *August 1st*.

We encourage students to submit papers. There is a prize for the best
paper. The abstract deadline for students is later, August 21.

The conference will be preceded by three or four 2-day tutorials, called
"short courses". They will cover a wide range of topics relevant to
scientific analysis and modeling in Python, for beginners and more advanced
programmers (HPC). They will be announced shortly at:
http://annual.ametsoc.org/2015/index.cfm/programs-and-events/short-courses/
(Last year's short courses are listed there, and this year's offering will
be a superset of what we offered last year.)

Please pass along this announcement to your friends and colleagues!
Thanks!

Regards,
Jonathan

--
Jonathan Rocher
Austin TX, USA
Cell: +1 512 501 0865
http://www.linkedin.com/in/jonathanrocher

From ralf.gommers at gmail.com  Mon Jul 21 15:47:31 2014
From: ralf.gommers at gmail.com (Ralf Gommers)
Date: Mon, 21 Jul 2014 21:47:31 +0200
Subject: [SciPy-User] Three scipy.test() errors
In-Reply-To: <432A8E6B26DC62439F0201C069BE2B671D386FF0@MLBXV08.nih.gov>
References: <432A8E6B26DC62439F0201C069BE2B671D386FF0@MLBXV08.nih.gov>
Message-ID:

On Mon, Jul 14, 2014 at 3:09 PM, McCully, Dwayne (NIH/NIAMS) [C] <
dmccully at mail.nih.gov> wrote:

> Trying to install scipy 0.14.0 under Python 3.3.4 but got test errors on
> three modules with the configuration listed below.
>
> Using the LAPACK, ATLAS, and BLAS RPMs that come with Red Hat 6.
>
> Any help would be appreciated.

Suggested to change gcc version, see
https://github.com/scipy/scipy/issues/3568.
Ralf > > > Dwayne > > > > > > ====================================================================== > > FAIL: test_basic.test_xlogy > > ---------------------------------------------------------------------- > > Traceback (most recent call last): > > File > "/cm/shared/apps/python/3.3.4/lib/python3.3/site-packages/nose-1.3.0-py3.3.egg/nose/case.py", > line 198, in runTest > > self.test(*self.arg) > > File > "/cm/shared/apps/python/3.3.4/lib/python3.3/site-packages/scipy/special/tests/test_basic.py", > line 2878, in test_xlogy > > assert_func_equal(special.xlogy, w2, z2, rtol=1e-13, atol=1e-13) > > File > "/cm/shared/apps/python/3.3.4/lib/python3.3/site-packages/scipy/special/_testutils.py", > line 87, in assert_func_equal > > fdata.check() > > File > "/cm/shared/apps/python/3.3.4/lib/python3.3/site-packages/scipy/special/_testutils.py", > line 292, in check > > assert_(False, "\n".join(msg)) > > File > "/cm/shared/apps/python/3.3.4/lib/python3.3/site-packages/numpy/testing/utils.py", > line 44, in assert_ > > raise AssertionError(msg) > > AssertionError: > > Max |adiff|: 712.561 > > Max |rdiff|: 1028.01 > > Bad results (3 out of 6) for the following points (in output 0): > > 0j (nan+0j) > => (-0+0j) != (nan+nanj) > (rdiff 0.0) > > (1+0j) (2+0j) => > (-711.8665072622568+1.5707963267948752j) != > (0.6931471805599453+0j) (rdiff > 1028.0087776302707) > > (1+0j) 1j => > (-711.8665072622568+1.5707963267948752j) != > 1.5707963267948966j (rdiff > 453.18829380940315) > > > > ====================================================================== > > FAIL: test_lambertw.test_values > > ---------------------------------------------------------------------- > > Traceback (most recent call last): > > File > "/cm/shared/apps/python/3.3.4/lib/python3.3/site-packages/nose-1.3.0-py3.3.egg/nose/case.py", > line 198, in runTest > > self.test(*self.arg) > > File > "/cm/shared/apps/python/3.3.4/lib/python3.3/site-packages/scipy/special/tests/test_lambertw.py", > line 21, in test_values > > assert_equal(lambertw(inf,1).real, inf) > > File > "/cm/shared/apps/python/3.3.4/lib/python3.3/site-packages/numpy/testing/utils.py", > line 304, in assert_equal > > raise AssertionError(msg) > > AssertionError: > > Items are not equal: > > ACTUAL: nan > > DESIRED: inf > > > > ====================================================================== > > FAIL: test_lambertw.test_ufunc > > ---------------------------------------------------------------------- > > Traceback (most recent call last): > > File > "/cm/shared/apps/python/3.3.4/lib/python3.3/site-packages/numpy/testing/utils.py", > line 581, in chk_same_position > > assert_array_equal(x_id, y_id) > > File > "/cm/shared/apps/python/3.3.4/lib/python3.3/site-packages/numpy/testing/utils.py", > line 718, in assert_array_equal > > verbose=verbose, header='Arrays are not equal') > > File > "/cm/shared/apps/python/3.3.4/lib/python3.3/site-packages/numpy/testing/utils.py", > line 644, in assert_array_compare > > raise AssertionError(msg) > > AssertionError: > > Arrays are not equal > > > > (mismatch 66.66666666666666%) > > x: array([False, True, True], dtype=bool) > > y: array([False, False, False], dtype=bool) > > > > During handling of the above exception, another exception occurred: > > > > Traceback (most recent call last): > > File > "/cm/shared/apps/python/3.3.4/lib/python3.3/site-packages/nose-1.3.0-py3.3.egg/nose/case.py", > line 198, in runTest > > self.test(*self.arg) > > File > 
"/cm/shared/apps/python/3.3.4/lib/python3.3/site-packages/scipy/special/tests/test_lambertw.py", > line 93, in test_ufunc > > lambertw(r_[0., e, 1.]), r_[0., 1., 0.567143290409783873]) > > File > "/cm/shared/apps/python/3.3.4/lib/python3.3/site-packages/numpy/testing/utils.py", > line 811, in assert_array_almost_equal > > header=('Arrays are not almost equal to %d decimals' % decimal)) > > File > "/cm/shared/apps/python/3.3.4/lib/python3.3/site-packages/numpy/testing/utils.py", > line 607, in assert_array_compare > > chk_same_position(x_isnan, y_isnan, hasval='nan') > > File > "/cm/shared/apps/python/3.3.4/lib/python3.3/site-packages/numpy/testing/utils.py", > line 587, in chk_same_position > > raise AssertionError(msg) > > AssertionError: > > Arrays are not almost equal to 6 decimals > > > > x and y nan location mismatch: > > x: array([ 0.+0.j, nan+0.j, nan+0.j]) > > y: array([ 0. , 1. , 0.56714329]) > > > > ---------------------------------------------------------------------- > > Ran 16420 tests in 223.156s > > > > FAILED (KNOWNFAIL=277, SKIP=1178, failures=3) > > > > > > > > [root ~]# python -c 'from numpy.f2py.diagnose import run; run()' > > ------ > > os.name='posix' > > ------ > > sys.platform='linux' > > ------ > > sys.version: > > 3.3.4 (default, Feb 27 2014, 17:05:47) > > [GCC 4.4.7 20120313 (Red Hat 4.4.7-4)] > > ------ > > sys.prefix: > > /cm/shared/apps/python/3.3.4 > > ------ > > > sys.path=':/cm/shared/apps/python/3.3.4/lib/python3.3/site-packages/setuptools-2.2-py3.3.egg:/cm/shared/apps/python/3.3.4/lib/python3.3/site-packages/snakemake-2.5-py3.3.egg:/cm/shared/apps/python/3.3.4/lib/python3.3/site-packages/nose-1.3.0-py3.3.egg:/cm/shared/apps/python/3.3.4/lib/python33.zip:/cm/shared/apps/python/3.3.4/lib/python3.3:/cm/shared/apps/python/3.3.4/lib/python3.3/plat-linux:/cm/shared/apps/python/3.3.4/lib/python3.3/lib-dynload:/cm/shared/apps/python/3.3.4/lib/python3.3/site-packages' > > ------ > > Found new numpy version '1.8.1' in > /cm/shared/apps/python/3.3.4/lib/python3.3/site-packages/numpy/__init__.py > > Found f2py2e version '2' in > /cm/shared/apps/python/3.3.4/lib/python3.3/site-packages/numpy/f2py/f2py2e.py > > Found numpy.distutils version '0.4.0' in > '/cm/shared/apps/python/3.3.4/lib/python3.3/site-packages/numpy/distutils/__init__.py' > > ------ > > Importing numpy.distutils.fcompiler ... 
ok > > ------ > > Checking availability of supported Fortran compilers: > > Gnu95FCompiler instance properties: > > archiver = ['/usr/bin/gfortran', '-cr'] > > compile_switch = '-c' > > compiler_f77 = ['/usr/bin/gfortran', '-Wall', '-ffixed-form', '-fno- > > second-underscore', '-fPIC', '-O3', '-funroll-loops'] > > compiler_f90 = ['/usr/bin/gfortran', '-Wall', > '-fno-second-underscore', > > '-fPIC', '-O3', '-funroll-loops'] > > compiler_fix = ['/usr/bin/gfortran', '-Wall', '-ffixed-form', '-fno- > > second-underscore', '-Wall', '-fno-second-underscore', > '- > > fPIC', '-O3', '-funroll-loops'] > > libraries = ['gfortran'] > > library_dirs = [] > > linker_exe = ['/usr/bin/gfortran', '-Wall', '-Wall'] > > linker_so = ['/usr/bin/gfortran', '-Wall', '-Wall', '-shared'] > > object_switch = '-o ' > > ranlib = ['/usr/bin/gfortran'] > > version = LooseVersion ('4.4.7') > > version_cmd = ['/usr/bin/gfortran', '--version'] > > GnuFCompiler instance properties: > > archiver = ['/usr/bin/g77', '-cr'] > > compile_switch = '-c' > > compiler_f77 = ['/usr/bin/g77', '-g', '-Wall', '-fno-second- > > underscore', '-fPIC', '-O3', '-funroll-loops'] > > compiler_f90 = None > > compiler_fix = None > > libraries = ['g2c'] > > library_dirs = [] > > linker_exe = ['/usr/bin/g77', '-g', '-Wall', '-g', '-Wall'] > > linker_so = ['/usr/bin/g77', '-g', '-Wall', '-g', '-Wall', '- > > shared'] > > object_switch = '-o ' > > ranlib = ['/usr/bin/g77'] > > version = LooseVersion ('3.4.6') > > version_cmd = ['/usr/bin/g77', '--version'] > > Fortran compilers found: > > --fcompiler=gnu GNU Fortran 77 compiler (3.4.6) > > --fcompiler=gnu95 GNU Fortran 95 compiler (4.4.7) > > Compilers available for this platform, but not found: > > --fcompiler=absoft Absoft Corp Fortran Compiler > > --fcompiler=compaq Compaq Fortran Compiler > > --fcompiler=g95 G95 Fortran Compiler > > --fcompiler=intel Intel Fortran Compiler for 32-bit apps > > --fcompiler=intele Intel Fortran Compiler for Itanium apps > > --fcompiler=intelem Intel Fortran Compiler for 64-bit apps > > --fcompiler=lahey Lahey/Fujitsu Fortran 95 Compiler > > --fcompiler=nag NAGWare Fortran 95 Compiler > > --fcompiler=pathf95 PathScale Fortran Compiler > > --fcompiler=pg Portland Group Fortran Compiler > > --fcompiler=vast Pacific-Sierra Research Fortran 90 Compiler > > Compilers not available on this platform: > > --fcompiler=hpux HP Fortran 90 Compiler > > --fcompiler=ibm IBM XL Fortran Compiler > > --fcompiler=intelev Intel Visual Fortran Compiler for Itanium apps > > --fcompiler=intelv Intel Visual Fortran Compiler for 32-bit apps > > --fcompiler=intelvem Intel Visual Fortran Compiler for 64-bit apps > > --fcompiler=mips MIPSpro Fortran Compiler > > --fcompiler=none Fake Fortran compiler > > --fcompiler=sun Sun or Forte Fortran 95 Compiler > > For compiler details, run 'config_fc --verbose' setup command. > > ------ > > Importing numpy.distutils.cpuinfo ... ok > > ------ > > CPU information: CPUInfoBase__get_nbits getNCPUs has_mmx has_sse has_sse2 > has_sse3 has_ssse3 is_64bit is_Intel is_XEON is_Xeon is_i686 ------ > > > > [root at niamsirpapp01 ~]# gcc -v > > Using built-in specs. 
> Target: x86_64-redhat-linux
> Configured with: ../configure --prefix=/usr --mandir=/usr/share/man
> --infodir=/usr/share/info --with-bugurl=http://bugzilla.redhat.com/bugzilla
> --enable-bootstrap --enable-shared --enable-threads=posix
> --enable-checking=release --with-system-zlib --enable-__cxa_atexit
> --disable-libunwind-exceptions --enable-gnu-unique-object
> --enable-languages=c,c++,objc,obj-c++,java,fortran,ada
> --enable-java-awt=gtk --disable-dssi
> --with-java-home=/usr/lib/jvm/java-1.5.0-gcj-1.5.0.0/jre
> --enable-libgcj-multifile --enable-java-maintainer-mode
> --with-ecj-jar=/usr/share/java/eclipse-ecj.jar --disable-libjava-multilib
> --with-ppl --with-cloog --with-tune=generic --with-arch_32=i686
> --build=x86_64-redhat-linux
> Thread model: posix
> gcc version 4.4.7 20120313 (Red Hat 4.4.7-4) (GCC)
>
> [root at niamsirpapp01 ~]# g77 --version
> GNU Fortran (GCC) 3.4.6 20060404 (Red Hat 3.4.6-19.el6)
> Copyright (C) 2006 Free Software Foundation, Inc.
>
> GNU Fortran comes with NO WARRANTY, to the extent permitted by law.
> You may redistribute copies of GNU Fortran
> under the terms of the GNU General Public License.
> For more information about these matters, see the file named COPYING
> or type the command `info -f g77 Copying'.
>
> _______________________________________________
> SciPy-User mailing list
> SciPy-User at scipy.org
> http://mail.scipy.org/mailman/listinfo/scipy-user

From lou_boog2000 at yahoo.com  Mon Jul 21 15:56:48 2014
From: lou_boog2000 at yahoo.com (Lou Pecora)
Date: Mon, 21 Jul 2014 12:56:48 -0700
Subject: [SciPy-User] Maximum Lyapunov exponent of my previous system
In-Reply-To: References:
Message-ID: <1405972608.85009.YahooMailNeo@web125502.mail.ne1.yahoo.com>

Do your calculations agree with published Lyapunov exponents?

-- Lou Pecora

On Monday, July 21, 2014 2:56 PM, Barrett B wrote:
[...]
_______________________________________________
SciPy-User mailing list
SciPy-User at scipy.org
http://mail.scipy.org/mailman/listinfo/scipy-user

From dmccully at mail.nih.gov  Mon Jul 21 21:26:37 2014
From: dmccully at mail.nih.gov (McCully, Dwayne (NIH/NIAMS) [C])
Date: Tue, 22 Jul 2014 01:26:37 +0000
Subject: [SciPy-User] Three scipy.test() errors
In-Reply-To: <432A8E6B26DC62439F0201C069BE2B671D386FF0@MLBXV08.nih.gov>
References: <432A8E6B26DC62439F0201C069BE2B671D386FF0@MLBXV08.nih.gov>
Message-ID: <432A8E6B26DC62439F0201C069BE2B671D38AF54@MLBXV08.nih.gov>

Hi Ralf,

I ran the install script with "python setup.py config_fc --fcompiler=gnu
install". As always I get a lot of warning messages, but now scipy.test()
crashes with a segmentation fault. I did see g77 and not gfortran when
compiling:

    maybe used but uninitialized in function
    unused variable
    etc...

Any suggestions on the command line?

Dwayne

________________________________
From: Ralf Gommers [ralf.gommers at gmail.com]
Sent: Monday, July 21, 2014 3:47 PM
To: SciPy Users List
Subject: Re: [SciPy-User] Three scipy.test() errors

On Mon, Jul 14, 2014 at 3:09 PM, McCully, Dwayne (NIH/NIAMS) [C] <
dmccully at mail.nih.gov> wrote:

> Trying to install scipy 0.14.0 under Python 3.3.4 but got test errors on
> three modules with the configuration listed below.
> Using the LAPACK, ATLAS, and BLAS RPMs that come with Red Hat 6.
>
> Any help would be appreciated.

Suggested to change gcc version, see
https://github.com/scipy/scipy/issues/3568.

Ralf

[...]
Target: x86_64-redhat-linux Configured with: ../configure --prefix=/usr --mandir=/usr/share/man --infodir=/usr/share/info --with-bugurl=http://bugzilla.redhat.com/bugzilla --enable-bootstrap --enable-shared --enable-threads=posix --enable-checking=release --with-system-zlib --enable-__cxa_atexit --disable-libunwind-exceptions --enable-gnu-unique-object --enable-languages=c,c++,objc,obj-c++,java,fortran,ada --enable-java-awt=gtk --disable-dssi --with-java-home=/usr/lib/jvm/java-1.5.0-gcj-1.5.0.0/jre --enable-libgcj-multifile --enable-java-maintainer-mode --with-ecj-jar=/usr/share/java/eclipse-ecj.jar --disable-libjava-multilib --with-ppl --with-cloog --with-tune=generic --with-arch_32=i686 --build=x86_64-redhat-linux Thread model: posix gcc version 4.4.7 20120313 (Red Hat 4.4.7-4) (GCC) [root at niamsirpapp01 ~]# g77 --version GNU Fortran (GCC) 3.4.6 20060404 (Red Hat 3.4.6-19.el6) Copyright (C) 2006 Free Software Foundation, Inc. GNU Fortran comes with NO WARRANTY, to the extent permitted by law. You may redistribute copies of GNU Fortran under the terms of the GNU General Public License. For more information about these matters, see the file named COPYING or type the command `info -f g77 Copying'. _______________________________________________ SciPy-User mailing list SciPy-User at scipy.org http://mail.scipy.org/mailman/listinfo/scipy-user From maria-rosaria.antonelli at curie.fr Tue Jul 22 09:21:53 2014 From: maria-rosaria.antonelli at curie.fr (Antonelli Maria Rosaria) Date: Tue, 22 Jul 2014 13:21:53 +0000 Subject: [SciPy-User] ipywidgets Message-ID: Hi, I would like to get the ipywidgets to make interactive plot in my notebooks. What should I do ? Here is the answer that I got to the command line "sys.version" : '2.7.7 |Anaconda 2.0.1 (x86_64)| (default, Jun 2 2014, 12:48:16) \n[GCC 4.0.1 (Apple Inc. build 5493)]' Best, Rosa -------------- next part -------------- An HTML attachment was scrubbed... URL: From jrocher at enthought.com Tue Jul 22 10:42:45 2014 From: jrocher at enthought.com (Jonathan Rocher) Date: Tue, 22 Jul 2014 09:42:45 -0500 Subject: [SciPy-User] ipywidgets In-Reply-To: References: Message-ID: Hi Maria, This is a question for the IPython mailing list instead. I would start here though: http://nbviewer.ipython.org/github/ipython/ipython/blob/master/examples/Interactive%20Widgets/Index.ipynb HTH, Jonathan On Tue, Jul 22, 2014 at 8:21 AM, Antonelli Maria Rosaria < maria-rosaria.antonelli at curie.fr> wrote: > Hi, > > I would like to get the ipywidgets to make interactive plot in my > notebooks. What should I do ? > > Here is the answer that I got to the command line "sys.version" : > > > '2.7.7 |Anaconda 2.0.1 (x86_64)| (default, Jun 2 2014, 12:48:16) \n[GCC 4.0.1 (Apple Inc. build 5493)]' > > > Best, > > Rosa > > > > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user > > -- Jonathan Rocher, PhD Scientific software developer Enthought, Inc. jrocher at enthought.com 1-512-536-1057 http://www.enthought.com -------------- next part -------------- An HTML attachment was scrubbed... URL: From maria-rosaria.antonelli at curie.fr Tue Jul 22 10:44:36 2014 From: maria-rosaria.antonelli at curie.fr (Antonelli Maria Rosaria) Date: Tue, 22 Jul 2014 14:44:36 +0000 Subject: [SciPy-User] ipywidgets In-Reply-To: References: Message-ID: Thank you Jonathan. 
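For a first interactive plot, something along these lines should run under the IPython 2.x that ships with current Anaconda (a minimal sketch, assuming the notebook's inline matplotlib backend is active; the widget import path moved in later IPython releases):

import numpy as np
import matplotlib.pyplot as plt
from IPython.html.widgets import interact  # IPython 2.x location

def plot_sine(freq=1.0):
    x = np.linspace(0, 2 * np.pi, 200)
    plt.plot(x, np.sin(freq * x))
    plt.show()

# interact builds a float slider automatically from the (min, max) tuple.
interact(plot_sine, freq=(0.5, 5.0))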
HTH,
Jonathan

On Tue, Jul 22, 2014 at 8:21 AM, Antonelli Maria Rosaria <maria-rosaria.antonelli at curie.fr> wrote:
> [...]

--
Jonathan Rocher, PhD
Scientific software developer
Enthought, Inc.
jrocher at enthought.com
1-512-536-1057
http://www.enthought.com

From maria-rosaria.antonelli at curie.fr Tue Jul 22 10:44:36 2014
From: maria-rosaria.antonelli at curie.fr (Antonelli Maria Rosaria)
Date: Tue, 22 Jul 2014 14:44:36 +0000
Subject: Re: [SciPy-User] ipywidgets

Thank you Jonathan. Have a nice afternoon.

Rosa

From ralf.gommers at gmail.com Tue Jul 22 16:24:57 2014
From: ralf.gommers at gmail.com (Ralf Gommers)
Date: Tue, 22 Jul 2014 22:24:57 +0200
Subject: Re: [SciPy-User] Three scipy.test() errors

On Tue, Jul 22, 2014 at 3:26 AM, McCully, Dwayne (NIH/NIAMS) [C] <dmccully at mail.nih.gov> wrote:
> Hi Ralf,
>
> Ran the install script with "python setup.py config_fc --fcompiler=gnu install". As always I get a lot of warning messages, but now scipy.test() crashes with a segmentation fault. I did see g77 and not gfortran when compiling.
>
> maybe used but uninitialized in function
> unused variable
> etc...
>
> Any suggestions on the command line?
>
> Dwayne

If your blas/lapack was built with gfortran and you now build scipy with g77, that won't work. Search for "g77" on http://scipy.org/scipylib/building/linux.html for more details.
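As a concrete starting point (a sketch based on the build docs linked above, not a command from this thread; adjust the install prefix to your site):

# force the gfortran-based compiler (gnu95) instead of g77 (gnu):
python setup.py config_fc --fcompiler=gnu95 install

# then check which BLAS/LAPACK numpy was built against:
python -c "import numpy; numpy.__config__.show()"

# and confirm scipy's Fortran extensions link libgfortran, not libg2c:
ldd /cm/shared/apps/python/3.3.4/lib/python3.3/site-packages/scipy/linalg/_flapack*.so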
Cheers,
Ralf

> [earlier test output and f2py build diagnostics quoted in full - snipped, see the original report above]

From dmccully at mail.nih.gov Tue Jul 22 16:32:39 2014
From: dmccully at mail.nih.gov (McCully, Dwayne (NIH/NIAMS) [C])
Date: Tue, 22 Jul 2014 20:32:39 +0000
Subject: Re: [SciPy-User] Three scipy.test() errors

Hi Ralf,

When compiling scipy with g77 or gfortran, why are there so many warning messages? Do you get them?

Dwayne

> [...]
From ralf.gommers at gmail.com Tue Jul 22 16:43:59 2014
From: ralf.gommers at gmail.com (Ralf Gommers)
Date: Tue, 22 Jul 2014 22:43:59 +0200
Subject: Re: [SciPy-User] Three scipy.test() errors

On Tue, Jul 22, 2014 at 10:32 PM, McCully, Dwayne (NIH/NIAMS) [C] <dmccully at mail.nih.gov> wrote:
> Hi Ralf,
>
> When compiling scipy with g77 or gfortran, why are there so many warning messages? Do you get them?

Yeah, everyone who compiles with distutils gets those. There is a lot of Fortran 77 code in scipy that is very old and was only wrapped by the scipy developers. We don't want to modify that code just to fix the build warnings, because that would make it quite hard to later apply patches from upstream (packages like Arpack are still developed). And of course it would be a lot of work as well.

Modifying the default distutils build flags is not easy, because they come from Python distutils.

Ralf

> [...]
From barrett.n.b at gmail.com Tue Jul 22 18:29:00 2014
From: barrett.n.b at gmail.com (Barrett B)
Date: Tue, 22 Jul 2014 22:29:00 +0000 (UTC)
Subject: [SciPy-User] Maximum Lyapunov exponent of my previous system
References: <1405972608.85009.YahooMailNeo@web125502.mail.ne1.yahoo.com>

Lou Pecora <yahoo.com> writes:
> Do your calculations agree with published Lyapunov exponents?
>
> -- Lou Pecora

It only calculates the largest exponent, but that result is accurate to four decimal places.

From lists at juliensalort.org Wed Jul 23 06:20:12 2014
From: lists at juliensalort.org (Julien Salort)
Date: Wed, 23 Jul 2014 12:20:12 +0200
Subject: [SciPy-User] Scipy.io.netcdf: does nc.close release memory?

Aronne Merrelli wrote:

> I think this is supposed to be safe, but it is usually the case that the
> variable returned from:
>
> >> var = nc.variables['varname'].data
>
> ultimately points to a "memmap" object. Try checking the .base attribute of
> the ndarray variable, to see if this is the case. You may need to go a
> couple levels to see the memmap (e.g., var.base.base). Since that seems to
> be causing problems, you could try something like the following:
>
> >> var = nc.variables['varname'].data.copy()
>
> This will make sure var is a full copy in memory with no reference back to
> the file. In this case the .base attribute of var should be None. Note this
> might not be desirable if the variable has a huge memory footprint. I often
> do this; I recall having problems in the past where I was looping through a
> large number of netCDF files and then producing an IOError related to "too
> many open files" or something like that.

Indeed, the code works much better with copy().

Thanks,

Julien
--
http://www.juliensalort.org

From tfmoraes at cti.gov.br Wed Jul 23 15:25:32 2014
From: tfmoraes at cti.gov.br (Thiago Franco de Moraes)
Date: Wed, 23 Jul 2014 16:25:32 -0300 (BRT)
Subject: [SciPy-User] [OT] - Research position in the Brazilian Research Institute for Science and Neurotechnology – BRAINN

Research position in the Brazilian Research Institute for Science and Neurotechnology –
BRAINN

Postdoc researcher to work with software development for medical imaging

The Brazilian Research Institute for Neuroscience and Neurotechnology (BRAINN) (www.brainn.org.br) focuses on the investigation of basic mechanisms leading to epilepsy and stroke, and the injury mechanisms that follow disease onset and progression. This research has important applications related to prevention, diagnosis, treatment and rehabilitation, and will serve as a model for better understanding normal and abnormal brain function. The BRAINN Institute is composed of 10 institutions from Brazil and abroad and is hosted by the State University of Campinas (UNICAMP). Among the associated institutions is the Renato Archer Information Technology Center (CTI), which has a team specialized in open-source software development for medical imaging (www.cti.gov.br/invesalius) and 3D printing applications for healthcare. CTI is located close to UNICAMP in the city of Campinas, State of São Paulo, in a very technological region of Brazil, and is looking for a postdoc researcher to work on software development for medical imaging, related to the imaging analysis, diagnosis and treatment of brain diseases. The postdoc position is for two years, with the possibility of renewal for two more years.

Education
- PhD in computer science, computer engineering, mathematics, physics or related.

Requirements
- Digital image processing (medical imaging)
- Computer graphics (basic)

Benefits
6.143,40 Reais per month, free of taxes (about US$ 2.800,00); 15% technical reserve for conference participation and acquisition of specific materials.

Interested
Send a curriculum to jorge.silva at cti.gov.br with the subject "Postdoc position". Application reviews will begin August 1, 2014 and continue until the position is filled.

From rajsai24 at gmail.com Thu Jul 24 13:40:46 2014
From: rajsai24 at gmail.com (Sai Rajeshwar)
Date: Thu, 24 Jul 2014 23:10:46 +0530
Subject: [SciPy-User] optimising python numpy snippet

hi all,

I have written the following for loop statement in my code.. when i profiled it, i found that it is taking a huge amount of time.. can any one suggest how to make this statement faster?
----------------------------------------------------------------------------

for i in xrange(self.pooled_shape[1]):
    for j in xrange(self.pooled_shape[2]):
        for k in xrange(self.pooled_shape[3]):
            for l in xrange(self.pooled_shape[4]):
                self.pooled[0][i][j][k][l] = math.tanh(
                    (numpy.sum(self.conv_out[0][i][j][k*3][l*3:(l+1)*3])
                     + numpy.sum(self.conv_out[0][i][j][k*3+1][l*3:(l+1)*3])
                     + numpy.sum(self.conv_out[0][i][j][k*3+2][l*3:(l+1)*3]))/9.0
                    + self.b[i][j])

thanks a lot in advance

with regards..

M. Sai Rajeswar
M-tech Computer Technology
IIT Delhi
----------------------------------Cogito Ergo Sum---------

From ehermes at chem.wisc.edu Thu Jul 24 14:37:17 2014
From: ehermes at chem.wisc.edu (Eric Hermes)
Date: Thu, 24 Jul 2014 13:37:17 -0500
Subject: Re: [SciPy-User] optimising python numpy snippet

Hello,

A quick and dirty simplification may end up in a bit of a speed boost. You can eliminate some of the loops and replace them with vector operations, as below.

for k in xrange(self.pooled_shape[3]):
    for l in xrange(self.pooled_shape[4]):
        self.pooled[0, :, :, k, l] = numpy.tanh(
            self.conv_out[0, :, :, k*3:(k+1)*3, l*3:(l+1)*3].sum((2, 3))/9.
            + self.b[:, :])

I have left in some extraneous slices (:) for clarity of code.
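To make the rewrite concrete, here is a self-contained check that the loop and the vectorized form agree. The array shapes are invented for illustration (the thread never shows pooled_shape), so treat this as a sketch:

import numpy as np

# Hypothetical sizes; the real self.pooled_shape / self.conv_out are not shown.
n_maps, n_feat, n_k, n_l = 2, 3, 4, 5
conv_out = np.random.rand(1, n_maps, n_feat, n_k * 3, n_l * 3)
b = np.random.rand(n_maps, n_feat)

# Original element-by-element quadruple loop.
pooled_loop = np.empty((1, n_maps, n_feat, n_k, n_l))
for i in range(n_maps):
    for j in range(n_feat):
        for k in range(n_k):
            for l in range(n_l):
                window = conv_out[0, i, j, k*3:(k+1)*3, l*3:(l+1)*3]
                pooled_loop[0, i, j, k, l] = np.tanh(window.sum()/9.0 + b[i, j])

# Vectorized over i and j: one 3x3 window sum per (k, l).
pooled_vec = np.empty_like(pooled_loop)
for k in range(n_k):
    for l in range(n_l):
        window = conv_out[0, :, :, k*3:(k+1)*3, l*3:(l+1)*3]
        pooled_vec[0, :, :, k, l] = np.tanh(window.sum(axis=(2, 3))/9.0 + b)

print(np.allclose(pooled_loop, pooled_vec))  # True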
Eric

On 7/24/2014 12:40 PM, Sai Rajeshwar wrote:
> [original snippet quoted - trimmed]

--
Eric Hermes
J.R. Schmidt Group
Chemistry Department
University of Wisconsin - Madison

From rajsai24 at gmail.com Fri Jul 25 00:58:22 2014
From: rajsai24 at gmail.com (Sai Rajeshwar)
Date: Fri, 25 Jul 2014 10:28:22 +0530
Subject: Re: [SciPy-User] optimising python numpy snippet

thanks eric.. few queries

1) are these vector operations possible only because of numpy?
2) is the optimisation you have suggested a kind of loop unrolling?
3) does the above optimisation use SIMD internally?

thanks

M. Sai Rajeswar
M-tech Computer Technology
IIT Delhi
----------------------------------Cogito Ergo Sum---------

On Fri, Jul 25, 2014 at 12:07 AM, Eric Hermes <ehermes at chem.wisc.edu> wrote:
> [...]

From ehermes at chem.wisc.edu Fri Jul 25 10:42:04 2014
From: ehermes at chem.wisc.edu (Eric Hermes)
Date: Fri, 25 Jul 2014 09:42:04 -0500
Subject: Re: [SciPy-User] optimising python numpy snippet

1) Yes, that is correct. You should make sure that the relevant arrays (self.pooled, self.conv_out, and self.b) are NumPy arrays in order for the code I wrote to work properly. Specifically, it looks like you were mostly working with self.pooled[0] and self.conv_out[0], so I don't really know what's going on with the first dimension of that array. Depending on how these are structured, you may need to rewrite self.pooled[0, :, :, k, l] as self.pooled[0][:, :, k, l], and self.conv_out[0, :, :, k*3:(k+1)*3, l*3:(l+1)*3] as self.conv_out[0][:, :, k*3:(k+1)*3, l*3:(l+1)*3].

2) Not really, no. NumPy/Python do not unroll loops, as this is typically a feature of compiled languages.

3) It should, yes.
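A tiny sketch of the distinction in point (1), with a throwaway array:

import numpy as np

a = np.arange(2*3*4*5*6).reshape(2, 3, 4, 5, 6)

# For a single ndarray the two spellings are equivalent:
print(np.array_equal(a[0, :, :, 2, 3], a[0][:, :, 2, 3]))  # True

# But if the outer container is a plain list of arrays, only the
# second spelling works; a tuple index raises TypeError on a list.
as_list = [a[0], a[1]]
print(as_list[0][:, :, 2, 3].shape)  # (3, 4)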
Schmidt Group > Chemistry Department > University of Wisconsin - Madison > > > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user > > > > > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user -- Eric Hermes J.R. Schmidt Group Chemistry Department University of Wisconsin - Madison -------------- next part -------------- An HTML attachment was scrubbed... URL: From rajsai24 at gmail.com Sun Jul 27 04:28:38 2014 From: rajsai24 at gmail.com (Sai Rajeshwar) Date: Sun, 27 Jul 2014 13:58:38 +0530 Subject: [SciPy-User] convolution using numpy/scipy using MKL libraries Message-ID: hi all, Im trying to implement 3d convolutional networks.. for which I wanted to use convolve function from scipy.signal.convolve or fftconvolve.. but looks like both of them doesnot use MKL libraries.. is there any implementation of convolutoin which uses MKL libraries or MKL-threaded so that code runs faster. thanks a lot in advance *with regards..* *M. Sai Rajeswar* *M-tech Computer Technology* *IIT Delhi----------------------------------Cogito Ergo Sum---------* -------------- next part -------------- An HTML attachment was scrubbed... URL: From alex.eberspaecher at gmail.com Sun Jul 27 07:57:30 2014 From: alex.eberspaecher at gmail.com (=?ISO-8859-15?Q?Alexander_Ebersp=E4cher?=) Date: Sun, 27 Jul 2014 13:57:30 +0200 Subject: [SciPy-User] convolution using numpy/scipy using MKL libraries In-Reply-To: References: Message-ID: <53D4E92A.9080703@gmail.com> Dear Sai, On 27.07.2014 10:28, Sai Rajeshwar wrote: > Im trying to implement 3d convolutional networks.. for which I wanted > to use convolve function from scipy.signal.convolve or fftconvolve.. If MKL is not a hard requirement, you may consider using the awesome pyfftw interfacing the incredible FFTW along with a monkey patched version of fftconvolve. On monkey patching using pyfftw, see http://hgomersall.github.io/pyFFTW/sphinx/tutorial.html#quick-and-easy-the-pyfftw-interfaces-module This might be a good choice for "large" convolutions. Smaller convolutions might be better off with a "ordinary" convolution not using the convolution theorem. In case you learn about other solutions, please let us know what you've found. Regards Alex From njs at pobox.com Sun Jul 27 08:26:51 2014 From: njs at pobox.com (Nathaniel Smith) Date: Sun, 27 Jul 2014 13:26:51 +0100 Subject: [SciPy-User] [SciPy-Dev] convolution using numpy/scipy using MKL libraries In-Reply-To: References: Message-ID: (Dropping scipy-dev from the CC list since this is more of a user question) You might also look into Theano; fast convolutional networks are a specialty of theirs. -n On 27 Jul 2014 09:28, "Sai Rajeshwar" wrote: > hi all, > > Im trying to implement 3d convolutional networks.. for which I wanted > to use convolve function from scipy.signal.convolve or fftconvolve.. but > looks like both of them doesnot use MKL libraries.. is there any > implementation of convolutoin which uses MKL libraries or MKL-threaded so > that code runs faster. > > thanks a lot in advance > > *with regards..* > > *M. 
Regards

Alex

From njs at pobox.com Sun Jul 27 08:26:51 2014
From: njs at pobox.com (Nathaniel Smith)
Date: Sun, 27 Jul 2014 13:26:51 +0100
Subject: Re: [SciPy-User] [SciPy-Dev] convolution using numpy/scipy using MKL libraries

(Dropping scipy-dev from the CC list since this is more of a user question)

You might also look into Theano; fast convolutional networks are a specialty of theirs.

-n

From rajsai24 at gmail.com Sun Jul 27 08:27:18 2014
From: rajsai24 at gmail.com (Sai Rajeshwar)
Date: Sun, 27 Jul 2014 17:57:18 +0530
Subject: Re: [SciPy-User] convolution using numpy/scipy using MKL libraries

hi alex,

i did not get exactly how to perform convolution with pyfftw... the link introduces the fft, but not convolution... any pointers about how to perform 3d convolution would be great..

thanks

M. Sai Rajeswar

From davidmenhur at gmail.com Sun Jul 27 08:51:00 2014
From: davidmenhur at gmail.com (Daπid)
Date: Sun, 27 Jul 2014 14:51:00 +0200
Subject: Re: [SciPy-User] convolution using numpy/scipy using MKL libraries

On 27 July 2014 14:27, Sai Rajeshwar wrote:
> i didnot get..exactly how to perform convolution with pyFFT... the link
> introduces fft.. but not about convolution... any pointers about how to
> perform 3d convolution would be great..

The convolution of two functions is a product in Fourier space. So, the convolution of f with g is:

    IFT(FT(f) · FT(g))

Here, performing the FT is expensive, but multiplying is cheap. Depending on the size, and whether you are reusing arrays, it may be faster. Note that the same thing is true in arbitrary dimensions (depending on your definitions, there may be some factors missing).

/David
From sturla.molden at gmail.com Sun Jul 27 12:47:17 2014
From: sturla.molden at gmail.com (Sturla Molden)
Date: Sun, 27 Jul 2014 16:47:17 +0000 (UTC)
Subject: Re: [SciPy-User] convolution using numpy/scipy using MKL libraries

Sai Rajeshwar wrote:

> i didnot get..exactly how to perform convolution with pyFFT... the link
> introduces fft.. but not about convolution... any pointers about how to
> perform 3d convolution would be great..

Just use the three-dimensional FFT. Convolution is always IFFT( FFT(x) * FFT(y) ). The same principles as for 1D FFT convolution apply, such as appropriate zero padding (on all three dimensions!) to avoid circular convolution. Consult any DSP textbook.

You probably want numpy.fft.rfftn and numpy.fft.irfftn, unless your signals have complex numbers.

As noted on the other mailing list, these functions will use MKL with Enthought Canopy, but not with a vanilla NumPy.
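Spelled out, with the padding made explicit and checked against scipy.signal.convolve on a small example (the array sizes here are arbitrary):

import numpy as np
from numpy.fft import rfftn, irfftn
from scipy.signal import convolve

a = np.random.rand(8, 7, 6)
k = np.random.rand(3, 3, 3)

# Pad every axis to len(a) + len(k) - 1 to avoid circular wrap-around.
shape = [s1 + s2 - 1 for s1, s2 in zip(a.shape, k.shape)]
out = irfftn(rfftn(a, shape) * rfftn(k, shape), shape)

print(np.allclose(out, convolve(a, k, mode='full')))  # True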
The same principles as for 1D FFT convolution applies, such as > appropriate zero padding (on all three dimensions!) to avoid circular > convolution. Consult any DSP textbook. > > You probably want numpy.fft.rfftn and numpy.fft.irfftn, unless your signals > have complex numbers. > > As noted on the other mailing list, these functions will use MKL with > Enthought Canopy, but not with a vanilla NumPy. > > > Sturla > > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user > -------------- next part -------------- An HTML attachment was scrubbed... URL: From camillechambon at yahoo.fr Tue Jul 29 09:59:25 2014 From: camillechambon at yahoo.fr (camille chambon) Date: Tue, 29 Jul 2014 14:59:25 +0100 Subject: [SciPy-User] integrate.ode sets t0 values outside of my data range Message-ID: <1406642365.28550.YahooMailNeo@web28904.mail.ir2.yahoo.com> Hello, I would like to solve the ODE dy/dt = -2y + data(t), between t=0..4, for y(t=0)=1. I wrote the following code: import numpy as np from scipy.integrate import odeint from scipy.interpolate import interp1d t = np.linspace(0, 3, 4) data = [1, 2, 3, 4] linear_interpolation = interp1d(t, data) def func(y, t0): ??? print 't0', t0 ??? return -2*y + linear_interpolation(t0) soln = odeint(func, 1, t) When I run this code, I get several errors: ValueError: A value in x_new is above the interpolation range. odepack.error: Error occurred while calling the Python function named func My interpolation range is between 0.0 and 3.0. Printing the value of t0 in func, I realized that t0 is actually sometimes above my interpolation range: 3.07634612585, 3.0203768998, 3.00638459329, ... I have a few? questions: - how does integrate.ode makes t0 vary? Why does it make t0 exceed the infimum (3.0) of my interpolation range? - in spite of these errors, integrate.ode returns an array which seems to contain correct value. So, should I just catch and ignore these errors? - if I shouldn't ignore these errors, what is the best way to avoid them? 2 suggestions for the last question:? - in interp1d, I could set bounds_error=False and fill_value=data[-1] since the t0 outside of my interpolation range seem to be closed to t[-1]: linear_interpolation = interp1d(t, data, bounds_error=False, fill_value=data[-1]) But first I would like to be sure that with any other func and any other data the t0 will always remain closed to t[-1].?For example, if integrate.ode chooses a t0 below my interpolation range, the fill_value would be still data[-1], which would not be correct.Maybe to know how integrate.ode makes t0 vary would help me to be sure of that (see my first question). - in func, I could enclose the linear_interpolation call in a try/except block, and, when I catch a ValueError, I recall linear_interpolation but with t0 truncated: def func(y, t0):???? ??? try: ? ?? ?? interpolated_value = linear_interpolation(t0) ??? except ValueError: ??????? interpolated_value = linear_interpolation(int(t0)) # truncate t0 ??? return -2*y + interpolated_value At least this solution permits linear_interpolation to still raise an exception if integrate.ode makes a t0 above 4.0 or below -1.0. I can then be alerted of incoherent behavior.But it is not really readable and the truncation seems to me a little arbitrary by now. Maybe I'm just overthinking about these errors.Please let me know. Thanks in advance. Cheers, Camille -------------- next part -------------- An HTML attachment was scrubbed... 
From camillechambon at yahoo.fr  Tue Jul 29 09:59:25 2014
From: camillechambon at yahoo.fr (camille chambon)
Date: Tue, 29 Jul 2014 14:59:25 +0100
Subject: [SciPy-User] integrate.ode sets t0 values outside of my data range
Message-ID: <1406642365.28550.YahooMailNeo@web28904.mail.ir2.yahoo.com>

Hello,

I would like to solve the ODE dy/dt = -2y + data(t), between t=0..4,
for y(t=0)=1. I wrote the following code:

import numpy as np
from scipy.integrate import odeint
from scipy.interpolate import interp1d

t = np.linspace(0, 3, 4)
data = [1, 2, 3, 4]
linear_interpolation = interp1d(t, data)

def func(y, t0):
    print 't0', t0
    return -2*y + linear_interpolation(t0)

soln = odeint(func, 1, t)

When I run this code, I get several errors:

ValueError: A value in x_new is above the interpolation range.
odepack.error: Error occurred while calling the Python function named func

My interpolation range is between 0.0 and 3.0.

Printing the value of t0 in func, I realized that t0 is actually
sometimes above my interpolation range: 3.07634612585, 3.0203768998,
3.00638459329, ...

I have a few questions:

- how does integrate.ode make t0 vary? Why does it make t0 exceed the
upper bound (3.0) of my interpolation range?

- in spite of these errors, integrate.ode returns an array which seems
to contain correct values. So, should I just catch and ignore these
errors?

- if I shouldn't ignore these errors, what is the best way to avoid
them?

2 suggestions for the last question:

- in interp1d, I could set bounds_error=False and fill_value=data[-1],
since the t0 values outside of my interpolation range seem to be close
to t[-1]:

linear_interpolation = interp1d(t, data, bounds_error=False,
                                fill_value=data[-1])

But first I would like to be sure that, with any other func and any
other data, the t0 will always remain close to t[-1]. For example, if
integrate.ode chooses a t0 below my interpolation range, the fill_value
would still be data[-1], which would not be correct. Maybe knowing how
integrate.ode makes t0 vary would help me to be sure of that (see my
first question).

- in func, I could enclose the linear_interpolation call in a
try/except block, and, when I catch a ValueError, I call
linear_interpolation again but with t0 truncated:

def func(y, t0):
    try:
        interpolated_value = linear_interpolation(t0)
    except ValueError:
        interpolated_value = linear_interpolation(int(t0))  # truncate t0
    return -2*y + interpolated_value

At least this solution permits linear_interpolation to still raise an
exception if integrate.ode makes a t0 above 4.0 or below -1.0. I can
then be alerted of incoherent behavior. But it is not really readable,
and the truncation seems a little arbitrary to me.

Maybe I'm just overthinking these errors. Please let me know.

Thanks in advance.

Cheers,

Camille


From abrham09 at gmail.com  Tue Jul 29 18:31:42 2014
From: abrham09 at gmail.com (Abrham Melesse)
Date: Tue, 29 Jul 2014 14:31:42 -0800
Subject: [SciPy-User] Spatio-temporal interpolation

Dear users,

I am new to Python and trying hard to learn it. I am trying to
interpolate daily station climate data to defined grid points. I have
more than 20 years of daily climate data for 50 stations that I want to
interpolate to daily grid point values. I have worked out how to
interpolate one day of spatial data to grid points, but my data is very
bulky.

I organize my station data in one Excel file with 50 sheets; each sheet
is the daily climate data for one station. The data in each sheet has 4
columns (date, latitude, longitude, value/data). My grid points are
given in another Excel file that has 3 columns (Id, latitude,
longitude).

I want the interpolation output to be one Excel/text file in the form
below. I would prefer kriging interpolation, if someone already has the
code.

date value1 value2 ... valueN

Here value1 ... valueN are the interpolated values for each grid point
Id from 1 to N.

Thank you,
Abraham

--
Abrham Melesse
Arba Minch University
Water Resource and Irrigation Engineering Department
PO Box 2552
Cell Phone +251 912491406
Arba Minch
Ethiopia
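SciPy itself ships no kriging routine, so as a starting point here is a
minimal sketch using scipy.interpolate.griddata instead (kriging would
need a dedicated geostatistics package). The coordinates and values
below are made up, and the Excel reading is left out; in practice the
sheets might be loaded with pandas.read_excel. It interpolates one day
of station values onto the grid points and would be repeated per date:

import numpy as np
from scipy.interpolate import griddata

# hypothetical one-day station data (4 stations)
station_lon = np.array([37.2, 37.8, 38.0, 37.5])
station_lat = np.array([6.0, 6.5, 7.1, 6.8])
station_val = np.array([12.3, 14.1, 11.8, 13.0])   # made-up daily values

# hypothetical grid points (from the Id/latitude/longitude file)
grid_lon = np.linspace(37.0, 38.5, 10)
grid_lat = np.linspace(5.5, 7.5, 10)
glon, glat = np.meshgrid(grid_lon, grid_lat)

points = np.column_stack((station_lon, station_lat))
day_grid = griddata(points, station_val, (glon, glat), method='linear')
# 'linear' returns NaN outside the stations' convex hull;
# method='nearest' always returns a value

Looping this over dates and writing one row of grid values per day
would produce the table described above.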
From camillechambon at yahoo.fr  Wed Jul 30 04:05:06 2014
From: camillechambon at yahoo.fr (camille chambon)
Date: Wed, 30 Jul 2014 08:05:06 +0000 (UTC)
Subject: [SciPy-User] integrate.ode sets t0 values outside of my data range
References: <1406642365.28550.YahooMailNeo@web28904.mail.ir2.yahoo.com>

camille chambon yahoo.fr> writes:

> I would like to solve the ODE dy/dt = -2y + data(t), between t=0..3,
> for y(t=0)=1.
> [...]

Sorry, I made a typo in the range of t: I want to solve the ODE
dy/dt = -2y + data(t) between t=0..3, and not between t=0..4. I
corrected my original message.

Cheers,
Camille


From guziy.sasha at gmail.com  Wed Jul 30 07:00:24 2014
From: guziy.sasha at gmail.com (Oleksandr Huziy)
Date: Wed, 30 Jul 2014 07:00:24 -0400
Subject: Re: [SciPy-User] integrate.ode sets t0 values outside of my data range
References: <1406642365.28550.YahooMailNeo@web28904.mail.ir2.yahoo.com>

Salut Camille:

Just don't ask for the solution at the last point: in order to
calculate the solution there, the solver has to go a bit beyond it (y
should be defined on both sides of ti for the derivative to exist at
ti, which is a necessary condition), but the interpolation does not
define values beyond the right limit.

Here I've modified your example:
http://nbviewer.ipython.org/github/guziy/PyNotebooks/blob/master/ode_demo.ipynb

Cheers

2014-07-30 4:05 GMT-04:00 camille chambon :
> Sorry, I made a typo in the range of t: I want to solve the ODE
> dy/dt = -2y + data(t) between t=0..3, and not between t=0..4.
> [...]

--
Sasha
From warren.weckesser at gmail.com  Wed Jul 30 10:29:08 2014
From: warren.weckesser at gmail.com (Warren Weckesser)
Date: Wed, 30 Jul 2014 10:29:08 -0400
Subject: Re: [SciPy-User] integrate.ode sets t0 values outside of my data range
References: <1406642365.28550.YahooMailNeo@web28904.mail.ir2.yahoo.com>

On Wed, Jul 30, 2014 at 4:05 AM, camille chambon wrote:

> - how does integrate.ode make t0 vary? Why does it make t0 exceed the
> upper bound (3.0) of my interpolation range?
> [...]
> - if I shouldn't ignore these errors, what is the best way to avoid
> them?

Camille,

I added an answer to your question on StackOverflow:
http://stackoverflow.com/questions/25031966/integrate-ode-sets-t0-values-outside-of-my-data-range/

Summary:

* It is normal for the ODE solver to evaluate your function at points
beyond the last requested time value.

* One way to avoid the interpolation problem is to extend the
interpolation data linearly, using the last two data points.

Warren
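A minimal sketch of the linear extension Warren suggests; this is not
the code from his StackOverflow answer, and the padding width chosen
here is arbitrary. The idea is to append one extra sample extrapolated
from the last two data points, so the interpolant stays defined
slightly past t[-1]:

import numpy as np
from scipy.integrate import odeint
from scipy.interpolate import interp1d

t = np.linspace(0, 3, 4)
data = np.array([1.0, 2.0, 3.0, 4.0])

# extend the data a little past t[-1], linearly from the last two points
pad = 0.5                                    # arbitrary margin
slope = (data[-1] - data[-2]) / (t[-1] - t[-2])
t_ext = np.append(t, t[-1] + pad)
data_ext = np.append(data, data[-1] + slope * pad)

linear_interpolation = interp1d(t_ext, data_ext)

def func(y, t0):
    return -2 * y + linear_interpolation(t0)

soln = odeint(func, 1, t)   # no ValueError when the solver probes t0 > 3.0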
From alok.jadhav at credit-suisse.com  Thu Jul 31 03:00:17 2014
From: alok.jadhav at credit-suisse.com (Jadhav, Alok)
Date: Thu, 31 Jul 2014 07:00:17 +0000
Subject: [SciPy-User] how to share weave.inline generated files across machines?
Message-ID: <5CC618D74A5E2747BD49AEB4F19B85AE1A6AECC0@HKW20030007.gbl.ad.hedani.net>

According to the scipy.weave.inline documentation, weave stores the
catalogue of already compiled code strings in an on-disk cache. What
happens if the location of the weave-generated pyd files is common
between 2 machines (on a network drive)? Will the catalogue of this
data be shared between the 2 machines?

To be more precise: if one machine generates a pyd file, and we copy
this pyd file to a different machine and run the same weave.inline
function, will the other machine try to recompile the code? I ask
because the other machine is not allowed to have gcc installed on it.
I am wondering whether just copying the pyd files will work or not.

Thanks.

Alok Jadhav
CREDIT SUISSE AG
SMG - Systematic Market Making
Asia IT, KFVA 463
International Commerce Centre | Hong Kong | Hong Kong
Phone +852 2101 6274 | Mobile +852 9169 7172
alok.jadhav at credit-suisse.com | www.credit-suisse.com


From barrett.n.b at gmail.com  Thu Jul 31 17:22:42 2014
From: barrett.n.b at gmail.com (Barrett B)
Date: Thu, 31 Jul 2014 17:22:42 -0400
Subject: [SciPy-User] Matplotlib axis label spacing

(Not sure if seeking help for matplotlib is appropriate here, but I
thought I would give it a try; if not, please direct me to somewhere I
can get that assistance. Anyway...)

Alright, I'm trying to run the code listed below. It results in the
following color plot:

http://s14.postimg.org/m16s2mmox/figure_1.png

I need help resolving the x-axis labels, which are the right numbers
but in the wrong locations. I need them to align evenly with the
columns, not be scaled proportionally to their actual values. I.e., if
I were to plot 0.5, 0.6, 0.9, and 2.0 on a real number line from 0 to
4, that is exactly where they would land, but I don't want that. How do
I get them to space evenly?

Also, once I scale this up to a much, much finer resolution (i.e., many
more rows and columns), which I will do once I've debugged this, it's
not going to try to plot every single value on the x-axis, will it? If
so, how would I fix that to print, say, every 10th value, or a total of
only 5 values?

Here is the code. Portions of this simulator not used for this run have
been removed for brevity:

##SIMULATOR

import numpy as np
import matplotlib as mpl
import matplotlib.pyplot as plt
from scipy.integrate import odeint

#Network constants
tau = 20; tau_S = 10000
g_Ca = 3.6; g_K = 10; g_S = 4 #Siemens
E_Ca = 25; E_K = -75; E_S = -75 #mV
E_half_m = -20; E_half_n = -16; E_half_S = -35.245 #mV
E_exc = 0; E_inh = -75 #mV
k_m = 12; k_n = 5.6; k_S = 10 #(unitless)
Theta_syn = -40 #mV

#Initial conditions
V0_0 = -60
n0 = np.array([0.6, 0.6])
S0 = np.array([0.1, 0.1])

#Simulation constants
stopTime = 30000/100; timeStep = 4 #ms
tStartTrack = 2000/100 #Time when Lyapunov exponent calculation begins.
nStartTrack = int(tStartTrack/timeStep) #init step for Lyap expo

#Plotting constants
doNorm = True; V1VsV2 = False; nVsV = False; timeSeries = True
pigments = ['r', 'g', 'b'] #Plotting colors.
toPlot = [0, 1] #Indices of the neurons to plot, starting at 0.

#Heaviside function. Currently approximated; can be optimised.
#Continuous equation: Gamma(x) = 1.0/(1 + np.exp(-100*(x - Theta_syn)))
def Heaviside(x):
    mult = -100.0; diff = mult*(x - Theta_syn)
    output = np.ones_like(x) #Initialized to 1's.
    output[x < Theta_syn] = 0 #0 <= x < Theta_syn.
    output[abs(diff) < 50] = 1.0/(1 + np.exp(diff[abs(diff) < 50]))
    return output

#ODE for Sherman model (Belykh and Reimbayev)
def f(X, t, params):
    N = len(X)/3
    V = X[:N]; n = X[N:2*N]; S = X[2*N:3*N]

    m_inf_E = 1/(1 + np.exp((E_half_m - V)/k_m))
    n_inf_E = 1/(1 + np.exp((E_half_n - V)/k_n))
    S_inf_E = 1/(1 + np.exp((E_half_S - V)/k_S))

    I_Ca = g_Ca * m_inf_E * (V - E_Ca)
    I_K = g_K * n * (V - E_K)
    I_S = g_S * S * (V - E_S)

    dV = (-I_Ca - I_K - I_S)/tau +\
        g_inh*(E_inh - V)*Heaviside(np.dot(V, connex))/tau
    dn = (n_inf_E - n)/tau
    dS = (S_inf_E - S)/tau_S
    return np.concatenate((dV, dn, dS))
#    else: #if not useForLoops
#        g_inhibitory = params[0]; V1 = params[1]

#Connection matrix. Simulator starts here.
connex = np.array([[-1,1],
                   [1,-1]])

inhibID = 7 #Used for below to avoid commenting out code.
#List of the inhibitory conductivities (g_inh) to track:
if inhibID == 7:
    inhibConductivity = np.array([0.5, 0.6, 0.9, 2.0])

voltageID = 5 #Used for below to avoid commenting out code.
#Starting V1:
if voltageID == 5:
    startingVoltage1 = np.arange(V0_0 + 10, V0_0 + 30, 5)
Vsize = startingVoltage1.size

#Whether the g_inh and V1 arrays have just one element:
justOneRun = (Vsize == 1 and inhibConductivity.size == 1)
t = np.arange(0, stopTime, timeStep)

#useForLoops = True
#if useForLoops:
normVsVSpread = np.zeros((Vsize, inhibConductivity.size))
for i in range(inhibConductivity.size):
    g_inh = inhibConductivity[i]
    for j in range(Vsize):
        pctComplete = 100*((1.*i + j/Vsize)/inhibConductivity.size)
        #Integration and initial conditions
        V0 = np.array([V0_0, startingVoltage1[j]]); N = len(V0)

        #Solver
        soln = odeint(f, np.concatenate((V0, n0, S0)), t, args=(None,))
        E = np.transpose(soln[:, :N])
        n = np.transpose(soln[:, N:N*2])

        VArray = E[:2, nStartTrack:]
        maxVSpread = np.amax(VArray) - np.amin(VArray)
        maxNorm = np.amax(abs(VArray[1] - VArray[0]))
        normVsVSpread[j, i] = maxNorm/maxVSpread
#else: #DO NOT USE YET
#    gInhVsV1 = np.meshgrid(inhibConductivity, startingVoltage1)
#    V0 = np.array([V0_0, None]) #V1 will be dealt with inside the ODE
#    soln = odeint(f, np.concatenate((V0, n0, S0)), t, args=(gInhVsV1,))

#Single-run plotting
strParameters = 'V0_0 = ' + str(V0[0]) + ' mV; n0 = ' + str(n0) +\
    '; S0 = ' + str(S0)
mpl.rcParams['image.cmap'] = 'gist_heat'
fig = plt.subplots()
plt.title('|V1 - V0|/(Vmax - Vmin).\n' + strParameters)
plt.xlabel('g_inh (Siemens)')
plt.ylabel('V1 (mV)')
#xticks = np.linspace(np.amin(inhibConductivity),\
#                     np.amax(inhibConductivity), num = 4)
#plt.xticks(xticks)
plt.xticks(inhibConductivity)
#plt.yticks(startingVoltage1)
plt.pcolormesh(normVsVSpread)
plt.colorbar(shrink=0.8)
plt.show()


From guziy.sasha at gmail.com  Thu Jul 31 17:38:42 2014
From: guziy.sasha at gmail.com (Oleksandr Huziy)
Date: Thu, 31 Jul 2014 17:38:42 -0400
Subject: Re: [SciPy-User] Matplotlib axis label spacing

Hi Barrett:

If I understood correctly (I really do not want to read all the code),
you are not using pcolormesh correctly. You have to specify the
coordinates of the corners of the cells, not the centers. This way the
sizes of your x and y coordinate arrays will be (Nx+1)*(Ny+1), while
the size of your data array should be Nx*Ny.
Here I was showing an example of basemap.pcolormesh and how I
constructed the coordinates of the corners from the coordinates of the
centers of the cells:

http://nbviewer.ipython.org/github/guziy/PyNotebooks/blob/master/pcolormesh_example.ipynb

Cheers

2014-07-31 17:22 GMT-04:00 Barrett B :
> (Not sure if seeking help for matplotlib is appropriate here, but I
> thought I would give it a try...)
> [...]

--
Sasha
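A minimal sketch of the corner-coordinate construction Oleksandr
describes (illustrative values, not the code from his notebook): build
(N+1)-length edge arrays around the cell centers and hand them to
pcolormesh together with the Nx*Ny data, so each cell sits at its true
coordinates:

import numpy as np
import matplotlib.pyplot as plt

centers_x = np.array([0.5, 0.6, 0.9, 2.0])   # e.g. the g_inh values
centers_y = np.arange(4, dtype=float)

def centers_to_edges(c):
    # midpoints between centers, extrapolated half a step at both ends
    mid = 0.5*(c[:-1] + c[1:])
    first = c[0] - (mid[0] - c[0])
    last = c[-1] + (c[-1] - mid[-1])
    return np.concatenate(([first], mid, [last]))

data = np.random.rand(len(centers_y), len(centers_x))  # stand-in data
X = centers_to_edges(centers_x)   # length Nx+1
Y = centers_to_edges(centers_y)   # length Ny+1
plt.pcolormesh(X, Y, data)
plt.show()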
From klemm at phys.ethz.ch  Thu Jul 31 17:54:51 2014
From: klemm at phys.ethz.ch (Hanno Klemm)
Date: Thu, 31 Jul 2014 23:54:51 +0200
Subject: [SciPy-User] Matplotlib axis label spacing
Message-ID: <57D0FB08-9FBE-44AD-A198-09362F92F0AA@phys.ethz.ch>

This is how you can do it when you don't specify the x and y
coordinates of your pcolormesh (which a cursory glance suggests you
don't set). In that case, pcolormesh simply uses the indices of your
array as its coordinates.

By the way, the matplotlib mailing list can be found here:
http://sourceforge.net/p/matplotlib/mailman/

The excellent matplotlib gallery, with a lot of examples and code, is
here: http://matplotlib.org/gallery.html

import numpy as np
import matplotlib.pyplot as plt

a = np.random.randn(10, 10)

fig = plt.figure()
ax = fig.add_subplot(111)
ax.pcolormesh(a)
ax.set_xticks(np.linspace(0, 10, 5))
ax.set_xticklabels(['a', 'custom', 'label', 'comes', 'here'])
plt.show()

Cheers,
Hanno

On 31 Jul 2014, at 23:38, Oleksandr Huziy wrote:

> Hi Barrett:
>
> If I understood correctly (I really do not want to read all the code),
> you are not using pcolormesh correctly. You have to specify the
> coordinates of the corners of the cells, not the centers.
> [...]

Applied to the plot in question, a small sketch along the same lines
(the data here is a random stand-in for normVsVSpread): put one tick at
the center of each column and label it with the corresponding g_inh
value, which gives the even spacing asked about; once the grid gets
fine, slicing the positions and labels (e.g. every tenth) keeps the
axis readable:

import numpy as np
import matplotlib.pyplot as plt

col_values = np.array([0.5, 0.6, 0.9, 2.0])   # inhibConductivity
row_values = np.arange(-50, -30, 5)           # startingVoltage1
data = np.random.rand(len(row_values), len(col_values))  # stand-in

fig, ax = plt.subplots()
ax.pcolormesh(data)
step = 1                                      # e.g. 10 for a fine grid
ax.set_xticks(np.arange(len(col_values))[::step] + 0.5)
ax.set_xticklabels([str(v) for v in col_values[::step]])
ax.set_yticks(np.arange(len(row_values)) + 0.5)
ax.set_yticklabels([str(v) for v in row_values])
plt.show()