From barrett.n.b at gmail.com Wed Jun 4 00:18:27 2014 From: barrett.n.b at gmail.com (Barrett B) Date: Wed, 4 Jun 2014 00:18:27 -0400 Subject: [SciPy-User] Optimizing odeint without a for loop Message-ID: This is an initial attempt to model a neural network with two differential variables per cell--voltage, and the response variable n. I can go back and add more later as needed. I have a question about this code (obviously, all constants are given): -------- #ODE def f(X, t): N = len(X)/2 dV = np.zeros(N); dn = np.zeros(N) for i in range(N): E = X[i]; n = X[N+i] #dummy n_inf_E = 1/(1 + np.exp((E_half_n - E)/k_n)) m_inf_E = 1/(1 + np.exp((E_half_m - E)/k_m)) dV[i] = (I - g_L*(E - E_L) - g_Na*m_inf_E*(E - E_Na) - g_K*n*(E - E_K))/C for j in range(N): dV[i] += eps*connex[j, i]*X[j] dn[i] = (n_inf_E - n)/tau return np.concatenate((dV, dn)) connex = np.matrix([[-1,1,0], [1,-2,1], [0,1,-1]]) #connection matrix t = np.arange(0, stopTime, timeStep) V0 = np.array([0, -20, -50]) n0 = np.array([0.2, 0.4, 0.7]); N = len(n0) soln = odeint(f, np.concatenate((V0, n0)), t) ----------------- Is there a way to do this without the "for i in range (N)" loop; i.e., can I run through all the dV[i]'s in a more efficient method? -------------- next part -------------- An HTML attachment was scrubbed... URL: From yw5aj at virginia.edu Wed Jun 4 10:48:29 2014 From: yw5aj at virginia.edu (Yuxiang Wang) Date: Wed, 4 Jun 2014 10:48:29 -0400 Subject: [SciPy-User] Optimizing odeint without a for loop In-Reply-To: References: Message-ID: Hi Barett, I am not sure, but as far as I know, this loop is essentially there due to the nature of your PDE. I would suggest: 1) Try Brian (http://briansimulator.org/); 2) Use Cython to optimize the for-loop. -Shawn On Wed, Jun 4, 2014 at 12:18 AM, Barrett B wrote: > This is an initial attempt to model a neural network with two differential > variables per cell--voltage, and the response variable n. I can go back and > add more later as needed. I have a question about this code (obviously, all > constants are given): > > -------- > > #ODE > > def f(X, t): > > N = len(X)/2 > > dV = np.zeros(N); dn = np.zeros(N) > > for i in range(N): > > E = X[i]; n = X[N+i] #dummy > > n_inf_E = 1/(1 + np.exp((E_half_n - E)/k_n)) > > m_inf_E = 1/(1 + np.exp((E_half_m - E)/k_m)) > > dV[i] = (I - g_L*(E - E_L) - g_Na*m_inf_E*(E - E_Na) - g_K*n*(E - > E_K))/C > > for j in range(N): > > dV[i] += eps*connex[j, i]*X[j] > > dn[i] = (n_inf_E - n)/tau > > return np.concatenate((dV, dn)) > > > connex = np.matrix([[-1,1,0], > > [1,-2,1], > > [0,1,-1]]) #connection matrix > > > t = np.arange(0, stopTime, timeStep) > > V0 = np.array([0, -20, -50]) > > n0 = np.array([0.2, 0.4, 0.7]); N = len(n0) > > > soln = odeint(f, np.concatenate((V0, n0)), t) > > > ----------------- > > Is there a way to do this without the "for i in range (N)" loop; i.e., can I > run through all the dV[i]'s in a more efficient method? 
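For reference, a minimal sketch of how the loop over cells could be replaced by whole-array operations (untested; it assumes the same constants as the script above and uses a plain ndarray for connex so that np.dot returns an ordinary 1-D array):

import numpy as np

connex = np.array([[-1, 1, 0],
                   [1, -2, 1],
                   [0, 1, -1]], dtype=float)  # ndarray instead of np.matrix

def f(X, t):
    N = len(X)//2
    E = X[:N]   # all membrane potentials at once
    n = X[N:]   # all gating variables at once
    n_inf_E = 1/(1 + np.exp((E_half_n - E)/k_n))
    m_inf_E = 1/(1 + np.exp((E_half_m - E)/k_m))
    dV = (I - g_L*(E - E_L) - g_Na*m_inf_E*(E - E_Na) - g_K*n*(E - E_K))/C
    # sum over j of connex[j, i]*X[j] for every i is a vector-matrix product:
    dV += eps*np.dot(E, connex)
    dn = (n_inf_E - n)/tau
    return np.concatenate((dV, dn))

The nested loop over j is the only part that needs care: it is just a vector-matrix product, which is why a single np.dot call replaces it.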
> > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user > -- Yuxiang "Shawn" Wang Gerling Research Lab University of Virginia yw5aj at virginia.edu +1 (434) 284-0836 https://sites.google.com/a/virginia.edu/yw5aj/ From eric.moore2 at nih.gov Wed Jun 4 13:06:18 2014 From: eric.moore2 at nih.gov (Moore, Eric (NIH/NIDDK) [F]) Date: Wed, 4 Jun 2014 17:06:18 +0000 Subject: [SciPy-User] Optimizing odeint without a for loop In-Reply-To: References: Message-ID: <649847CE7F259144A0FD99AC64E7326D0E27A3@MLBXV17.nih.gov> From: Barrett B [mailto:barrett.n.b at gmail.com] Sent: Wednesday, June 04, 2014 12:18 AM To: scipy-user at scipy.org Subject: [SciPy-User] Optimizing odeint without a for loop This is an initial attempt to model a neural network with two differential variables per cell--voltage, and the response variable n. I can go back and add more later as needed. I have a question about this code (obviously, all constants are given): -------- #ODE def f(X, t): ?? N = len(X)/2 ?? dV = np.zeros(N); dn = np.zeros(N) ?? for i in range(N): ?? E = X[i]; n = X[N+i] #dummy ????? n_inf_E = 1/(1 + np.exp((E_half_n - E)/k_n)) ????? m_inf_E = 1/(1 + np.exp((E_half_m - E)/k_m)) ????? dV[i] = (I - g_L*(E - E_L) - g_Na*m_inf_E*(E - E_Na) - g_K*n*(E - E_K))/C ????? for j in range(N): ????????? dV[i] += eps*connex[j, i]*X[j] ????? dn[i] = (n_inf_E - n)/tau ??? return np.concatenate((dV, dn)) connex = np.matrix([[-1,1,0], [1,-2,1], [0,1,-1]]) #connection matrix t = np.arange(0, stopTime, timeStep) V0 = np.array([0, -20, -50]) n0 = np.array([0.2, 0.4, 0.7]); N = len(n0) soln = odeint(f, np.concatenate((V0, n0)), t) ----------------- Is there a way to do this without the "for i in range (N)" loop; i.e., can I run through all the dV[i]'s in a more efficient method? Untested: def f(X, t): N = len(X)/2 E = X[:N] n = X[1:N+1] n_inf_E = 1/(1 + np.exp((E_half_n - E)/k_n)) m_inf_E = 1/(1 + np.exp((E_half_m - E)/k_m)) dV = (I - g_L*(E - E_L) - g_Na*m_inf_E*(E - E_Na) - g_K*n*(E - E_K))/C dV += eps * np.dot(X[:N], connex) dn = (n_inf_E - n)/tau return np.concatenate((dV, dn)) The basic idea is to operate on full arrays at once rather than looping over them. See, for instance, https://scipy-lectures.github.io/intro/numpy/operations.html Eric From michael at moz.com Wed Jun 4 14:12:49 2014 From: michael at moz.com (Michael O'Leary) Date: Wed, 4 Jun 2014 11:12:49 -0700 Subject: [SciPy-User] 'could not convert integer scalar' error when multiplying large matrices Message-ID: I have a script that calls sklearn.metrics.pairwise.cosine_similarity on matrices of various sizes. This function, the way I am using it, multiplies a matrix with its transpose. When the matrix has more than about 45,000 rows, the matrix multiplication fails at scipy/sparse/coo.py line 275 with an error that says 'ValueError: could not convert integer scalar'. The error message is generated by a function called call_thunk in scipy/sparse/sparsetools/sparsetools.cxx, which first checks for PyArray_EquivTypenums(I_typenum, NPY_INT64) && value == (npy_int64)value) and then value == (npy_int32)value both of which evaluate as false, before it displays the error message, runs cleanup code and exits. Since the maximum 32 bit signed integer is 2,147,483,647, whose square root is 46,340.95, it looks like I have a version of SciPy installed that stores the dimensions of my matrices as int32 values. My installed SciPy version is 0.9.0, installed using pip install. 
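For what it's worth, the index dtype the sparse machinery is actually using can be checked directly; a small sketch (report_index_limits is just an illustrative helper, and X stands for the matrix handed to cosine_similarity, not a name from the script):

import numpy as np
import scipy.sparse as sp

def report_index_limits(A):
    A = sp.csr_matrix(A)
    print("index dtypes:", A.indices.dtype, A.indptr.dtype)
    print("shape:", A.shape, "stored entries:", A.nnz)
    # The product A * A.T can hold up to shape[0]**2 entries, which is what
    # overflows a 32-bit index once shape[0] exceeds roughly 46341:
    print("worst-case result nnz:", A.shape[0]**2,
          "int32 max:", np.iinfo(np.int32).max)

report_index_limits(X)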
Is there a way to build and install SciPy so that it treats these values as int64s? Can it be done in some way using pip install or apt-get install, or would I need to download a copy of the source files and build and install it with make commands? Thanks, Mike -------------- next part -------------- An HTML attachment was scrubbed... URL: From rob.clewley at gmail.com Wed Jun 4 14:35:14 2014 From: rob.clewley at gmail.com (Rob Clewley) Date: Wed, 4 Jun 2014 14:35:14 -0400 Subject: [SciPy-User] Optimizing odeint without a for loop In-Reply-To: <649847CE7F259144A0FD99AC64E7326D0E27A3@MLBXV17.nih.gov> References: <649847CE7F259144A0FD99AC64E7326D0E27A3@MLBXV17.nih.gov> Message-ID: Barrett, On Wed, Jun 4, 2014 at 1:06 PM, Moore, Eric (NIH/NIDDK) [F] wrote: > This is an initial attempt to model a neural network with two differential variables per cell--voltage, and the response variable n. I can go back and add more later as needed. I have a question about this code (obviously, all constants are given): A more "natural" way to work with your model, where you can focus on the modeling and not worry about optimizing its implementation "under the hood," is to use a simulator package designed for ODEs. A good example is PyDSTool, which has minimal dependencies, is very fast for smooth ODEs such as yours (compiles C code from your high-level model specs automatically), and has a neural template toolbox for building biophysical models in a modular fashion. What you've written is OK for one-off small model scripts, but once you start building networks, or wanting to manipulate the model parameters or structure in a more sophisticated fashion, you'll be much better off investing in a proper modeling tool. There are several other to choose from, depending on your eventual needs. -Rob From barrett.n.b at gmail.com Wed Jun 4 16:10:47 2014 From: barrett.n.b at gmail.com (Barrett B) Date: Wed, 4 Jun 2014 20:10:47 +0000 (UTC) Subject: [SciPy-User] Optimizing odeint without a for loop References: Message-ID: Thanks, everyone! I'll take a closer look at Cython. I tried Brian earlier, but it seems to be lacking, particularly in its absence of an implicit integrator. From sturla.molden at gmail.com Wed Jun 4 16:41:14 2014 From: sturla.molden at gmail.com (Sturla Molden) Date: Wed, 04 Jun 2014 22:41:14 +0200 Subject: [SciPy-User] Optimizing odeint without a for loop In-Reply-To: References: Message-ID: If you are solving Hodkin-Huxley equations you might consider NEURON. By the way, the conductivities are actually functions, not constants. Sturla On 04/06/14 06:18, Barrett B wrote: > This is an initial attempt to model a neural network with two > differential variables per cell--voltage, and the response variable n. I > can go back and add more later as needed. 
I have a question about this > code (obviously, all constants are given): > > -------- > > #ODE > > def f(X, t): > > N = len(X)/2 > > dV = np.zeros(N); dn = np.zeros(N) > > for i in range(N): > > E = X[i]; n = X[N+i] #dummy > > n_inf_E = 1/(1 + np.exp((E_half_n - E)/k_n)) > > m_inf_E = 1/(1 + np.exp((E_half_m - E)/k_m)) > > dV[i] = (I - g_L*(E - E_L) - g_Na*m_inf_E*(E - E_Na) - g_K*n*(E - > E_K))/C > > for j in range(N): > > dV[i] += eps*connex[j, i]*X[j] > > dn[i] = (n_inf_E - n)/tau > > return np.concatenate((dV, dn)) > > > connex = np.matrix([[-1,1,0], > > [1,-2,1], > > [0,1,-1]]) #connection matrix > > > t = np.arange(0, stopTime, timeStep) > > V0 = np.array([0, -20, -50]) > > n0 = np.array([0.2, 0.4, 0.7]); N = len(n0) > > > soln = odeint(f, np.concatenate((V0, n0)), t) > > > ----------------- > > Is there a way to do this without the "for i in range (N)" loop; i.e., > can I run through all the dV[i]'s in a more efficient method? > > > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user > From rob.clewley at gmail.com Wed Jun 4 16:59:52 2014 From: rob.clewley at gmail.com (Rob Clewley) Date: Wed, 4 Jun 2014 16:59:52 -0400 Subject: [SciPy-User] Optimizing odeint without a for loop In-Reply-To: References: Message-ID: On Wed, Jun 4, 2014 at 4:41 PM, Sturla Molden wrote: > If you are solving Hodkin-Huxley equations you might consider NEURON. While I don't disagree that you should consider all the available tools (and there is a python interface for it these days), you want to balance the overheads of model set up and learning curve with what kinds of study you are doing. For instance, for relatively small models (a handful of equations, maybe just single compartment neurons), even if they are conductance-based, NEURON is overkill, IMO. NEURON is particularly well suited for straight-up simulations of multi-compartment, highly anatomically realistic neurons. However, if you expect to do bifurcation analysis or some exploratory mathematical simplifications involving reduced model components, then NEURON will not be an appropriate tool. Also, as you mentioned, Barrett, Brian has some limitations regarding its numerical schemes, and focuses on spiking (I&F-like) models, although NEURON's numerical solvers are sound. If you want robust numerics and analytical capabilities, PyDSTool is a good place to go (it uses industry standard implicit solvers, Dopri and Radau as well as arbitrarily accurate zero-crossing detection). So it all depends what you are trying to achieve in your project. If you do want more advice, I'd suggest you describe your project more. -Rob From parrenin at ujf-grenoble.fr Thu Jun 5 06:46:49 2014 From: parrenin at ujf-grenoble.fr (=?UTF-8?Q?Fr=C3=A9d=C3=A9ric_Parrenin?=) Date: Thu, 5 Jun 2014 12:46:49 +0200 Subject: [SciPy-User] deterministic python code which is unstable Message-ID: Dear all, I have a simple python code which should be deterministic (no call to random functions), but which gives two different results when I run it several times. Always the same two different results. I am wondering if this is a python bug and where I should report it. Best regards, Fr?d?ric -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From robert.kern at gmail.com Thu Jun 5 06:55:13 2014 From: robert.kern at gmail.com (Robert Kern) Date: Thu, 5 Jun 2014 11:55:13 +0100 Subject: [SciPy-User] deterministic python code which is unstable In-Reply-To: References: Message-ID: On Thu, Jun 5, 2014 at 11:46 AM, Fr?d?ric Parrenin wrote: > Dear all, > > > I have a simple python code which should be deterministic (no call to random > functions), but which gives two different results when I run it several > times. > Always the same two different results. > > I am wondering if this is a python bug and where I should report it. Without any details about what your code does and what libraries it may be calling, it's very hard to say. Some algorithms use pseudorandomization to break ties or avoid worst-case behavior on structured data. What functions from scipy are you calling? -- Robert Kern From josef.pktd at gmail.com Thu Jun 5 07:52:32 2014 From: josef.pktd at gmail.com (josef.pktd at gmail.com) Date: Thu, 5 Jun 2014 07:52:32 -0400 Subject: [SciPy-User] deterministic python code which is unstable In-Reply-To: References: Message-ID: On Thu, Jun 5, 2014 at 6:55 AM, Robert Kern wrote: > On Thu, Jun 5, 2014 at 11:46 AM, Fr?d?ric Parrenin > wrote: > > Dear all, > > > > > > I have a simple python code which should be deterministic (no call to > random > > functions), but which gives two different results when I run it several > > times. > > Always the same two different results. > > > > I am wondering if this is a python bug and where I should report it. > > Without any details about what your code does and what libraries it > may be calling, it's very hard to say. Some algorithms use > pseudorandomization to break ties or avoid worst-case behavior on > structured data. What functions from scipy are you calling? > some linalg functions do this (at least on Windows) when there is no unique solution a recent case https://github.com/scipy/scipy/issues/3675 Josef > > -- > Robert Kern > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user > -------------- next part -------------- An HTML attachment was scrubbed... URL: From alan.isaac at gmail.com Thu Jun 5 09:18:54 2014 From: alan.isaac at gmail.com (Alan G Isaac) Date: Thu, 05 Jun 2014 09:18:54 -0400 Subject: [SciPy-User] deterministic python code which is unstable In-Reply-To: References: Message-ID: <53906E3E.1050203@gmail.com> On 6/5/2014 6:46 AM, Fr?d?ric Parrenin wrote: > I have a simple python code which should be deterministic (no call to random functions), but which gives two different results when I run it several > times. > Always the same two different results. > > I am wondering if this is a python bug and where I should report it. This can even happen if you iterate over sets or dicts, if your code depends on the order in which items are fetched. Alan Isaac From parrenin at ujf-grenoble.fr Thu Jun 5 09:57:24 2014 From: parrenin at ujf-grenoble.fr (=?UTF-8?Q?Fr=C3=A9d=C3=A9ric_Parrenin?=) Date: Thu, 5 Jun 2014 15:57:24 +0200 Subject: [SciPy-User] deterministic python code which is unstable In-Reply-To: <53906E3E.1050203@gmail.com> References: <53906E3E.1050203@gmail.com> Message-ID: Thanks for the answers. 
The code uses the following modules/functions: import math as m import numpy as np import matplotlib.pyplot as mpl from scipy.interpolate import interp1d from scipy.optimize import leastsq from matplotlib.backends.backend_pdf import PdfPages from scipy.special import erf Mainly, the code solves a leastsq problem. For those who are interested, I put the code here: https://drive.google.com/file/d/0BzX8dPORePBsWEpSb0I5S05ZQlk/edit?usp=sharing I run it on a standard debian 7 system (only matplotlib has been updated). Best regards, Fr?d?ric 2014-06-05 15:18 GMT+02:00 Alan G Isaac : > On 6/5/2014 6:46 AM, Fr?d?ric Parrenin wrote: > > I have a simple python code which should be deterministic (no call to > random functions), but which gives two different results when I run it > several > > times. > > Always the same two different results. > > > > I am wondering if this is a python bug and where I should report it. > > > This can even happen if you iterate over sets or dicts, > if your code depends on the order in which items are fetched. > > Alan Isaac > > > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user > -------------- next part -------------- An HTML attachment was scrubbed... URL: From robert.kern at gmail.com Thu Jun 5 10:10:50 2014 From: robert.kern at gmail.com (Robert Kern) Date: Thu, 5 Jun 2014 15:10:50 +0100 Subject: [SciPy-User] deterministic python code which is unstable In-Reply-To: References: <53906E3E.1050203@gmail.com> Message-ID: On Thu, Jun 5, 2014 at 2:57 PM, Fr?d?ric Parrenin wrote: > Thanks for the answers. > > The code uses the following modules/functions: > > import math as m > import numpy as np > import matplotlib.pyplot as mpl > from scipy.interpolate import interp1d > from scipy.optimize import leastsq > from matplotlib.backends.backend_pdf import PdfPages > from scipy.special import erf > > Mainly, the code solves a leastsq problem. scipy.optimize.leastsq() and numpy.linalg.solve() are likely culprits. -- Robert Kern From parrenin at ujf-grenoble.fr Thu Jun 5 11:14:27 2014 From: parrenin at ujf-grenoble.fr (=?UTF-8?Q?Fr=C3=A9d=C3=A9ric_Parrenin?=) Date: Thu, 5 Jun 2014 17:14:27 +0200 Subject: [SciPy-User] deterministic python code which is unstable In-Reply-To: References: <53906E3E.1050203@gmail.com> Message-ID: Thanks for the hint. I would suggest to output a warning in scipy.optimize.leastsq() and numpy.linalg.solve() when such circumstances happen. It is disturbing to have an unstable model and to not know where it comes from. Best regards, Fr?d?ric 2014-06-05 16:10 GMT+02:00 Robert Kern : > On Thu, Jun 5, 2014 at 2:57 PM, Fr?d?ric Parrenin > wrote: > > Thanks for the answers. > > > > The code uses the following modules/functions: > > > > import math as m > > import numpy as np > > import matplotlib.pyplot as mpl > > from scipy.interpolate import interp1d > > from scipy.optimize import leastsq > > from matplotlib.backends.backend_pdf import PdfPages > > from scipy.special import erf > > > > Mainly, the code solves a leastsq problem. > > scipy.optimize.leastsq() and numpy.linalg.solve() are likely culprits. > > -- > Robert Kern > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user > -------------- next part -------------- An HTML attachment was scrubbed... 
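One way to see what the solver itself reports is to ask leastsq for its full output; a sketch (res_func and p0 stand in for the residual function and starting values in the linked script):

import numpy as np
from scipy.optimize import leastsq

p, cov_x, infodict, mesg, ier = leastsq(res_func, p0, full_output=True)
print("ier  =", ier)    # 1-4 means a solution was found
print("mesg =", mesg)   # human-readable reason for stopping
print("nfev =", infodict["nfev"])
# A huge condition number is a hint that the problem is ill-posed, which is
# the kind of degeneracy that can amplify run-to-run rounding differences:
if cov_x is not None:
    print("cond(cov_x) =", np.linalg.cond(cov_x))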
URL: From robert.kern at gmail.com Thu Jun 5 11:33:45 2014 From: robert.kern at gmail.com (Robert Kern) Date: Thu, 5 Jun 2014 16:33:45 +0100 Subject: [SciPy-User] deterministic python code which is unstable In-Reply-To: References: <53906E3E.1050203@gmail.com> Message-ID: On Thu, Jun 5, 2014 at 4:14 PM, Fr?d?ric Parrenin wrote: > Thanks for the hint. > > I would suggest to output a warning in scipy.optimize.leastsq() and > numpy.linalg.solve() when such circumstances happen. We don't know when that happens. It's not our code that is doing it but the underlying linear algebra libraries, which are a black box to us as far as this is concerned. -- Robert Kern From parrenin at ujf-grenoble.fr Thu Jun 5 11:49:46 2014 From: parrenin at ujf-grenoble.fr (=?UTF-8?Q?Fr=C3=A9d=C3=A9ric_Parrenin?=) Date: Thu, 5 Jun 2014 17:49:46 +0200 Subject: [SciPy-User] deterministic python code which is unstable In-Reply-To: References: <53906E3E.1050203@gmail.com> Message-ID: Is it not possible to bug report those linear algebra libraries? Fr?d?ric 2014-06-05 17:33 GMT+02:00 Robert Kern : > On Thu, Jun 5, 2014 at 4:14 PM, Fr?d?ric Parrenin > wrote: > > Thanks for the hint. > > > > I would suggest to output a warning in scipy.optimize.leastsq() and > > numpy.linalg.solve() when such circumstances happen. > > We don't know when that happens. It's not our code that is doing it > but the underlying linear algebra libraries, which are a black box to > us as far as this is concerned. > > -- > Robert Kern > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user > -------------- next part -------------- An HTML attachment was scrubbed... URL: From lists at hilboll.de Thu Jun 5 11:58:14 2014 From: lists at hilboll.de (Andreas Hilboll) Date: Thu, 05 Jun 2014 17:58:14 +0200 Subject: [SciPy-User] deterministic python code which is unstable In-Reply-To: References: <53906E3E.1050203@gmail.com> Message-ID: <53909396.70108@hilboll.de> On 05.06.2014 17:49, Fr?d?ric Parrenin wrote: > Is it not possible to bug report those linear algebra libraries? If you could pinpoint the problem to scipy.optimize.leastsq, you could look into the 'ier' and 'mesg' return values to find a reason for possible problems. Cheers, Andreas. > > > 2014-06-05 17:33 GMT+02:00 Robert Kern >: > > On Thu, Jun 5, 2014 at 4:14 PM, Fr?d?ric Parrenin > > wrote: > > Thanks for the hint. > > > > I would suggest to output a warning in scipy.optimize.leastsq() and > > numpy.linalg.solve() when such circumstances happen. > > We don't know when that happens. It's not our code that is doing it > but the underlying linear algebra libraries, which are a black box to > us as far as this is concerned. > > -- > Robert Kern > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user > > > > > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user > -- -- Andreas. From robert.kern at gmail.com Thu Jun 5 11:59:37 2014 From: robert.kern at gmail.com (Robert Kern) Date: Thu, 5 Jun 2014 16:59:37 +0100 Subject: [SciPy-User] deterministic python code which is unstable In-Reply-To: References: <53906E3E.1050203@gmail.com> Message-ID: On Thu, Jun 5, 2014 at 4:49 PM, Fr?d?ric Parrenin wrote: > Is it not possible to bug report those linear algebra libraries? 
We build against BLAS and LAPACK, which are API specifications that are implemented by many different vendors. One can build numpy and scipy against any of those implementations. You can try to track down the implementation of the BLAS and LAPACK library that you are currently using and try to fix it, if it's an open source one. These problems usually happen when there is some degeneracy or ill-conditioning in your data. Your time might be better spent identifying the source of that degeneracy. It might be causing you more problems than indeterminacy from run to run (i.e. *both* of your answers might be wrong). A bit of deterministic preconditioning on your side might eliminate the problem. -- Robert Kern From barrett.n.b at gmail.com Thu Jun 5 16:03:33 2014 From: barrett.n.b at gmail.com (Barrett B) Date: Thu, 5 Jun 2014 20:03:33 +0000 (UTC) Subject: [SciPy-User] Optimizing odeint without a for loop References: Message-ID: Rob Clewley gmail.com> writes: > > On Wed, Jun 4, 2014 at 4:41 PM, Sturla Molden gmail.com> wrote: > > If you are solving Hodkin-Huxley equations you might consider NEURON. > > While I don't disagree that you should consider all the available > tools (and there is a python interface for it these days), you want to > balance the overheads of model set up and learning curve with what > kinds of study you are doing. For instance, for relatively small > models (a handful of equations, maybe just single compartment > neurons), even if they are conductance-based, NEURON is overkill, IMO. > NEURON is particularly well suited for straight-up simulations of > multi-compartment, highly anatomically realistic neurons. However, if > you expect to do bifurcation analysis or some exploratory mathematical > simplifications involving reduced model components, then NEURON will > not be an appropriate tool. Also, as you mentioned, Barrett, Brian has > some limitations regarding its numerical schemes, and focuses on > spiking (I&F-like) models, although NEURON's numerical solvers are > sound. If you want robust numerics and analytical capabilities, > PyDSTool is a good place to go (it uses industry standard implicit > solvers, Dopri and Radau as well as arbitrarily accurate zero-crossing > detection). So it all depends what you are trying to achieve in your > project. If you do want more advice, I'd suggest you describe your > project more. > > -Rob > Right now it's a pretty simple objective: Given a neuronal network, where each neuron has only a few first-order differential equations, I want to simulate the spiking or bursting behavior over time. I took a look at NEURON, and yes, while it seems extremely powerful, it seems to focus on individual parts of the neuron and not the neurons as wholes. That would require a lot of computational power, not to mention setup time. I'll take a look at PyDSTool. From sturla.molden at gmail.com Thu Jun 5 16:08:57 2014 From: sturla.molden at gmail.com (Sturla Molden) Date: Thu, 5 Jun 2014 20:08:57 +0000 (UTC) Subject: [SciPy-User] deterministic python code which is unstable References: <53906E3E.1050203@gmail.com> Message-ID: <1552835643423691486.540630sturla.molden-gmail.com@news.gmane.org> Robert Kern wrote: > We build against BLAS and LAPACK, which are API specifications that > are implemented by many different vendors. One can build numpy and > scipy against any of those implementations. You can try to track down > the implementation of the BLAS and LAPACK library that you are > currently using and try to fix it, if it's an open source one. 
scipy.optimize.leastsq uses a QR solver from MINPACK. Sturla From sturla.molden at gmail.com Thu Jun 5 16:11:52 2014 From: sturla.molden at gmail.com (Sturla Molden) Date: Thu, 5 Jun 2014 20:11:52 +0000 (UTC) Subject: [SciPy-User] Optimizing odeint without a for loop References: Message-ID: <192377916423691819.027527sturla.molden-gmail.com@news.gmane.org> Barrett B wrote: > Right now it's a pretty simple objective: Given a neuronal network, where > each neuron has only a few first-order differential equations, I want to > simulate the spiking or bursting behavior over time. It sounds like you want to use Brian. Sturla From andrew.giessel at gmail.com Thu Jun 5 18:03:01 2014 From: andrew.giessel at gmail.com (Andrew Giessel) Date: Thu, 5 Jun 2014 18:03:01 -0400 Subject: [SciPy-User] Optimizing odeint without a for loop In-Reply-To: <192377916423691819.027527sturla.molden-gmail.com@news.gmane.org> References: <192377916423691819.027527sturla.molden-gmail.com@news.gmane.org> Message-ID: Seconded. Very nice library, and it lets you incorporate other diff eqs if you want. If you are doing anything more than implementing for fun, the record keeping without it is a nightmare. The unit consistency alone is worth it. > On Jun 5, 2014, at 16:11, Sturla Molden wrote: > > Barrett B wrote: > >> Right now it's a pretty simple objective: Given a neuronal network, where >> each neuron has only a few first-order differential equations, I want to >> simulate the spiking or bursting behavior over time. > > It sounds like you want to use Brian. > > Sturla > > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user From b_troester at yahoo.de Fri Jun 6 07:24:26 2014 From: b_troester at yahoo.de (B. T.) Date: Fri, 06 Jun 2014 13:24:26 +0200 Subject: [SciPy-User] Playing around with Scipy filtering a frequency band from a .wav-file Message-ID: <5391A4EA.1020502@yahoo.de> Dear readers, I'm not sure if this is the right place to ask my question, but I hope you can help me. I recently started a small project playing around with filtering frequency bands from .wav-files with Python. My goal is to filter out the frequency range of the human voice (85 Hz - 155 Hz male; 165 Hz - 255 Hz female) to determine the gender of the speaker in a recorded sample.
To do that I stumbled upon several similar solutions (code snip here):

#http://mail.scipy.org/pipermail/scipy-user/2010-July/025999.html

infile = 'media/male.wav'
outfile = 'media/male.filtered.wav'

from scipy.io.wavfile import read, write
from scipy.signal.filter_design import butter, buttord
from scipy.signal import lfilter, lfiltic
import numpy as np
from math import log

def butter_bandpass(lowcut, highcut, fs, order=5):
    nyq = 0.5 * fs
    low = lowcut / nyq
    high = highcut / nyq
    b, a = butter(order, [low, high], btype='band')
    return b, a

def butter_bandpass_filter(data, lowcut, highcut, fs, order=5):
    b, a = butter_bandpass(lowcut, highcut, fs, order=order)
    y = lfilter(b, a, data)
    return y

def main2():
    rate, sound_samples = read(infile)
    sound_samples = np.float64(sound_samples / 32768.0)
    pass_freq = [0.8, 1.8]
    stop_freq = [0.7, 1.9]
    pass_gain = 0.5 # permissible loss (ripple) in passband (dB)
    stop_gain = 10.0 # attenuation required in stopband (dB)
    num, denom = buttord(pass_freq, stop_freq, pass_gain, stop_gain)
    num, denom = butter(num, denom, btype = 'bandpass')
    filtered = lfilter(num, denom, sound_samples)
    #filtered = butter_bandpass_filter(sound_samples, 180.0, 85.0, rate)
    filtered = np.int16(filtered * 32768 * 10)
    write(outfile, rate, filtered)

def main1():
    rate, sound_samples = read(infile)
    sound_samples = np.float64(sound_samples / 32768.0)
    nyf= rate/2 #Sampling Frequency/2 (Nyquist Frequency)
    low=8.5/nyf #Lower cut of the filter
    high=15.5/nyf #High Cut of the filter
    b,a = butter(0,[low,high], btype='band') #Butter generates the coefficients for the filter
    filtered=lfilter(b, a, sound_samples) #Does the convolution, and the output is the filtered signal
    filtered = np.int16(filtered * 32768 * 10)
    write(outfile, rate, filtered)

main1()

I'm certainly no DSP expert, so I have some trouble understanding what my mistake is when entering the filter range in main1() and main2(): for main1() I'm not sure if the values for low and high are correct; for main2() the same applies for pass_freq and stop_freq (don't get confused by the variable names, they are from a lowpass filter example). The output .wav-files are very "clicky" and "cracky". But I actually expected them to sound nearly the same as the original, since I mostly filter out frequency bands that are not used by the male voice. When applied to a music sample with vocals I get a very noisy result. I'm looking forward to your answers and ideas. Greetings from Germany, Benedikt From matthieu.brucher at gmail.com Fri Jun 6 07:40:03 2014 From: matthieu.brucher at gmail.com (Matthieu Brucher) Date: Fri, 6 Jun 2014 13:40:03 +0200 Subject: [SciPy-User] Playing around with Scipy filtering a frequency band from a .wav-file In-Reply-To: <5391A4EA.1020502@yahoo.de> References: <5391A4EA.1020502@yahoo.de> Message-ID: Hi, I didn't try your example yet, but I have several notes: - the filter template is usually specified in reduced (normalized) frequencies. So if you have a sampling frequency of fs, the band you want for the male voice is (85*2/fs, 155*2/fs), not (85, 155); - the result should be an unintelligible voice. 85-155 Hz is the range of the fundamental, but it is the multiples (harmonics) above it that make the sound actually understandable (to be intelligible you have to keep frequencies around 500 Hz, for instance). In the end, I'm not sure it will be a reliable test, as the two bands are quite close. A simple FFT may be more effective than this filtering test. Cheers, Matthieu 2014-06-06 12:24 GMT+01:00 B. T. 
: > Dear readers, > > I'm not sure if this is the right place to ask my question, but I hope > you can help me. > Currently I started a small project playing around with filtering > frequency spectrums from .wav-files with Python. > My goal is, to filter out the frequency range of the human voice (85 Hz > - 155 Hz male; 165 Hz - 255 Hz female) do determine the gender of the > speaker in a recorded sample. To do that I stumbled upon several similar > solutions (code snip here): > #http://mail.scipy.org/pipermail/scipy-user/2010-July/025999.html > > infile = 'media/male.wav' > outfile = 'media/male.filtered.wav' > > from scipy.io.wavfile import read, write > from scipy.signal.filter_design import butter, buttord > from scipy.signal import lfilter, lfiltic > import numpy as np > from math import log > > def butter_bandpass(lowcut, highcut, fs, order=5): > nyq = 0.5 * fs > low = lowcut / nyq > high = highcut / nyq > b, a = butter(order, [low, high], btype='band') > return b, a > > > def butter_bandpass_filter(data, lowcut, highcut, fs, order=5): > b, a = butter_bandpass(lowcut, highcut, fs, order=order) > y = lfilter(b, a, data) > return y > > def main2(): > rate, sound_samples = read(infile) > sound_samples = np.float64(sound_samples / 32768.0) > pass_freq = [0.8, 1.8] > stop_freq = [0.7, 1.9] > pass_gain = 0.5 # permissible loss (ripple) in passband (dB) > stop_gain = 10.0 # attenuation required in stopband (dB) > num, denom = buttord(pass_freq, stop_freq, pass_gain, stop_gain) > num, denom = butter(num, denom, btype = 'bandpass') > filtered = lfilter(num, denom, sound_samples) > #filtered = butter_bandpass_filter(sound_samples, 180.0, 85.0, rate) > filtered = np.int16(filtered * 32768 * 10) > write(outfile, rate, filtered) > > > def main1(): > rate, sound_samples = read(infile) > sound_samples = np.float64(sound_samples / 32768.0) > nyf= rate/2 #Sampling Frequency/2 (Nyquist Frequency) > low=8.5/nyf #Lower cut of the filter > high=15.5/nyf #High Cut of the filter > b,a = butter(0,[low,high], btype='band') #Butter generates the > coefficients for the filter > filtered=lfilter(b, a, sound_samples) #Does the convoultion, and the > output is the filtered signal > filtered = np.int16(filtered * 32768 * 10) > write(outfile, rate, filtered) > > main1() > > I'm certainly no DSP expert, so I got some issues in understanding what > my mistake is when entering the filter-range in main1() and main2(): > for main1() I'm not sure if the values for low and high are correct. > for main2() the same applies for pass_freq and stop_freq (don't get > confused by the variable names, they are from a lowpass filter example). > The output .wav-files are very "clicky" and "cracky". But I actually > expected them to sound nearly the same as the original, since I only > filter out mostly frequency spectrums that are not used by the male > voice. When applied to a music sample with vocals I get a very noisy result. > > I'm looking foreward to your answers and ideas. > > Greetings from Germany, > Benedikt > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user -- Information System Engineer, Ph.D. 
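A rough sketch of the FFT-based comparison suggested above (it assumes a mono 16-bit recording; the file name is only a placeholder):

import numpy as np
from scipy.io.wavfile import read

rate, samples = read('media/sample.wav')
samples = samples.astype(np.float64)/32768.0

spectrum = np.abs(np.fft.rfft(samples))
freqs = np.fft.rfftfreq(len(samples), d=1.0/rate)

# Compare the energy in the two fundamental-frequency bands instead of
# listening to a band-passed (and therefore unintelligible) signal:
male = spectrum[(freqs >= 85) & (freqs <= 155)].sum()
female = spectrum[(freqs >= 165) & (freqs <= 255)].sum()
print('male' if male > female else 'female')

For the filtering route the same normalization applies: the band edges handed to butter() would be 85.0/nyf and 155.0/nyf (i.e. 85*2.0/rate and 155*2.0/rate), not 8.5/nyf and 15.5/nyf.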
Blog: http://matt.eifelle.com LinkedIn: http://www.linkedin.com/in/matthieubrucher Music band: http://liliejay.com/ From pav at iki.fi Fri Jun 6 15:46:55 2014 From: pav at iki.fi (Pauli Virtanen) Date: Fri, 6 Jun 2014 19:46:55 +0000 (UTC) Subject: [SciPy-User] deterministic python code which is unstable References: <53906E3E.1050203@gmail.com> Message-ID: Fr?d?ric Parrenin ujf-grenoble.fr> writes: > Is it not possible to bug report those linear algebra libraries? The behavior is not a bug, and not really fixable by the linear algebra libraries either. Modern compilers produce code that can lead to nondeterministic behavior under circumstances (think SSE and memory allocation alignment). -- Pauli Virtanen From pav at iki.fi Fri Jun 6 15:55:05 2014 From: pav at iki.fi (Pauli Virtanen) Date: Fri, 6 Jun 2014 19:55:05 +0000 (UTC) Subject: [SciPy-User] deterministic python code which is unstable References: <53906E3E.1050203@gmail.com> Message-ID: Pauli Virtanen iki.fi> writes: > Fr?d?ric Parrenin ujf-grenoble.fr> writes: > > Is it not possible to bug report those linear algebra libraries? > > The behavior is not a bug, and not really fixable by the linear algebra > libraries either. Modern compilers produce code that can lead to > nondeterministic behavior under circumstances (think SSE and memory > allocation alignment). http://software.intel.com/sites/default/files/article/164389/fp-consistency-122712_1.pdf i.e., the alternatives are (i) faster execution (ii) reproducibility of rounding error Compilers default usually to (i). From jonathantu at gmail.com Mon Jun 9 16:18:32 2014 From: jonathantu at gmail.com (Jonathan Tu) Date: Mon, 9 Jun 2014 13:18:32 -0700 Subject: [SciPy-User] GMRES iteration number In-Reply-To: References: <2EAF2F65-0385-4899-8897-E62628EAD50D@gmail.com> Message-ID: <656159D5-8BF8-43C7-B81F-C5539211B32A@gmail.com> Hi, Using a callback makes sense to me conceptually, but I have never implemented something like this. Is there a standard way to do such a thing? I would like something lightweight, obviously. I can imagine defining a small class containing a counter attribute and a parens function that updates this value. This seems better than doing something like defining a global variable that callback() can modify. Since the callback function will be called as callback(rk), where rk is the residual, I don't know how else to have it update a value whose scope needs to lie outside the callback function itself. Jonathan Tu On May 30, 2014, at 1:47 AM, Ralf Gommers wrote: > > > > On Fri, May 30, 2014 at 10:37 AM, Arun Gokule wrote: > AFAICT no. > > > On Thu, May 29, 2014 at 1:20 PM, Jonathan Tu wrote: > Hi, > > Is there any way to access the number of iterations it takes to complete a GMRES computation? I've checked the documentation at http://docs.scipy.org/doc/scipy-0.13.0/reference/generated/scipy.sparse.linalg.gmres.html and it doesn't appear so. I am doing some testing with passing in initial guesses x0, and I am interested to know whether or not this significantly reduces the required number of iterations. > > There's no return value that tells you tells (and we can't add one in nice a backwards-compatible way), but you can use a callback function to do this. Just provide a callback that increments some counter each time it is called. 
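A minimal sketch of the small counting class described above (gmres calls the callback with the current residual as its only argument, so __call__ has to accept it; the test system here is just illustrative):

import numpy as np
from scipy.sparse.linalg import gmres

class GMRESCounter(object):
    def __init__(self):
        self.niter = 0
        self.residuals = []
    def __call__(self, rk):
        self.niter += 1
        self.residuals.append(rk)

counter = GMRESCounter()
A = 5*np.eye(10) + np.random.rand(10, 10)
b = np.random.rand(10)
x, info = gmres(A, b, callback=counter)
print(counter.niter, "iterations, info =", info)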
> > Ralf > > > > > > Thanks, > Jonathan Tu > > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user > > > > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user > > > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user -------------- next part -------------- An HTML attachment was scrubbed... URL: From ehermes at chem.wisc.edu Mon Jun 9 16:34:57 2014 From: ehermes at chem.wisc.edu (Eric Hermes) Date: Mon, 09 Jun 2014 15:34:57 -0500 Subject: [SciPy-User] GMRES iteration number In-Reply-To: <656159D5-8BF8-43C7-B81F-C5539211B32A@gmail.com> References: <2EAF2F65-0385-4899-8897-E62628EAD50D@gmail.com> <656159D5-8BF8-43C7-B81F-C5539211B32A@gmail.com> Message-ID: <53961A71.7010301@chem.wisc.edu> Would it be possible to make the callback function an append operation on a list? e.g. create some list "residuals = []" and do "callback=residuals.append". Then the length of the list would tell you how many calls were made, and you would know what the residuals were along the way. Alternatively, I imagine you could do something like this: class Counter(object): def __init__(self): self.i = 0 def __str__(self): return str(self.i) def addone(self, OPTIONAL_IGNORED_INPUT=None): self.i += 1 blah = Counter() gmres(..., callback=blah.addone) print blah Eric On 6/9/2014 3:18 PM, Jonathan Tu wrote: > Hi, > > Using a callback makes sense to me conceptually, but I have never > implemented something like this. Is there a standard way to do such a > thing? I would like something lightweight, obviously. I can imagine > defining a small class containing a counter attribute and a parens > function that updates this value. This seems better than doing > something like defining a global variable that callback() can modify. > Since the callback function will be called as callback(rk), where rk > is the residual, I don't know how else to have it update a value whose > scope needs to lie outside the callback function itself. > > > > Jonathan Tu > > > On May 30, 2014, at 1:47 AM, Ralf Gommers > wrote: > >> >> >> >> On Fri, May 30, 2014 at 10:37 AM, Arun Gokule > > wrote: >> >> AFAICT no. >> >> >> On Thu, May 29, 2014 at 1:20 PM, Jonathan Tu >> > wrote: >> >> Hi, >> >> Is there any way to access the number of iterations it takes >> to complete a GMRES computation? I've checked the >> documentation at >> http://docs.scipy.org/doc/scipy-0.13.0/reference/generated/scipy.sparse.linalg.gmres.html and >> it doesn't appear so. I am doing some testing with passing >> in initial guesses x0, and I am interested to know whether or >> not this significantly reduces the required number of iterations. >> >> >> There's no return value that tells you tells (and we can't add one in >> nice a backwards-compatible way), but you can use a callback function >> to do this. Just provide a callback that increments some counter each >> time it is called. 
>> >> Ralf >> >> >> >> >> >> Thanks, >> Jonathan Tu >> >> _______________________________________________ >> SciPy-User mailing list >> SciPy-User at scipy.org >> http://mail.scipy.org/mailman/listinfo/scipy-user >> >> >> >> _______________________________________________ >> SciPy-User mailing list >> SciPy-User at scipy.org >> http://mail.scipy.org/mailman/listinfo/scipy-user >> >> >> _______________________________________________ >> SciPy-User mailing list >> SciPy-User at scipy.org >> http://mail.scipy.org/mailman/listinfo/scipy-user > > > > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user -- Eric Hermes J.R. Schmidt Group Chemistry Department University of Wisconsin - Madison -------------- next part -------------- An HTML attachment was scrubbed... URL: From njs at pobox.com Mon Jun 9 21:27:41 2014 From: njs at pobox.com (Nathaniel Smith) Date: Tue, 10 Jun 2014 02:27:41 +0100 Subject: [SciPy-User] CIECAM02 Message-ID: Hi all, CIECAM02 is a standard model of human color perception. I'm trying to implement a model of perceptual color similarity[1] that uses CIECAM02, and this requires the ability to convert from sRGB coordinates to CIECAM02's J, M, h coordinates. Code for doing this seems surprisingly short on the ground. Before I start trying to mechanically translate some incomprehensible matlab script [2], I thought I'd check whether around here has implemented such a thing, or knows of a suitable implementation? -n [1] Luo, M. R., Cui, G., & Li, C. (2006). Uniform colour spaces based on CIECAM02 colour appearance model. Color Research & Application, 31(4), 320?330. doi:10.1002/col.20227 [2] http://www.mathworks.co.uk/matlabcentral/fileexchange/40640-computational-colour-science-using-matlab-2e/content/ciecam02.m -- Nathaniel J. Smith Postdoctoral researcher - Informatics - University of Edinburgh http://vorpus.org From pascal at bugnion.org Tue Jun 10 05:46:19 2014 From: pascal at bugnion.org (Pascal Bugnion) Date: Tue, 10 Jun 2014 10:46:19 +0100 Subject: [SciPy-User] GMRES iteration number In-Reply-To: <53961A71.7010301@chem.wisc.edu> References: <2EAF2F65-0385-4899-8897-E62628EAD50D@gmail.com> <656159D5-8BF8-43C7-B81F-C5539211B32A@gmail.com> <53961A71.7010301@chem.wisc.edu> Message-ID: <20140610104619.74e475b2@turing> The lightest way to make a callback that keeps track of the number of times it is called is probably to use a closure: def make_callback(): closure_variables = dict(counter=0) # initialize variables in this # dict. The callback function # has access to this data. 
def callback(residuals): closure_variables["counter"] += 1 print closure_variables["counter"] return callback Then, generate the callback function using callback = make_callback() callback() # prints 1 callback() # prints 2 callback() # prints 3 To give a full example using "gmres": # ----------------------------------------------------------- import numpy as np from scipy.sparse.linalg import gmres # Generate random input data A = 5*np.eye(10) + np.random.random(size=(10,10)) b = np.random.random(size=(10,)) # Callback generator def make_callback(): closure_variables = dict(counter=0, residuals=[]) def callback(residuals): closure_variables["counter"] += 1 closure_variables["residuals"].append(residuals) print closure_variables["counter"], residuals return callback gmres(A,b,callback=make_callback()) # ----------------------------------------------------------- See also: "http://eev.ee/blog/2011/04/24/gotcha-python-scoping-closures/#the-other-problem-mutating-outer-variables" for other ways to use closures to implement a counter. Pascal On Mon, 09 Jun 2014 15:34:57 -0500 Eric Hermes wrote: > Would it be possible to make the callback function an append > operation on a list? e.g. create some list "residuals = []" and do > "callback=residuals.append". Then the length of the list would tell > you how many calls were made, and you would know what the residuals > were along the way. Alternatively, I imagine you could do something > like this: > > class Counter(object): > def __init__(self): > self.i = 0 > def __str__(self): > return str(self.i) > def addone(self, OPTIONAL_IGNORED_INPUT=None): > self.i += 1 > > blah = Counter() > > gmres(..., callback=blah.addone) > > print blah > > Eric > > On 6/9/2014 3:18 PM, Jonathan Tu wrote: > > Hi, > > > > Using a callback makes sense to me conceptually, but I have never > > implemented something like this. Is there a standard way to do > > such a thing? I would like something lightweight, obviously. I > > can imagine defining a small class containing a counter attribute > > and a parens function that updates this value. This seems better > > than doing something like defining a global variable that > > callback() can modify. Since the callback function will be called > > as callback(rk), where rk is the residual, I don't know how else to > > have it update a value whose scope needs to lie outside the > > callback function itself. > > > > > > > > Jonathan Tu > > > > > > On May 30, 2014, at 1:47 AM, Ralf Gommers > > wrote: > > > >> > >> > >> > >> On Fri, May 30, 2014 at 10:37 AM, Arun Gokule > >> > wrote: > >> > >> AFAICT no. > >> > >> > >> On Thu, May 29, 2014 at 1:20 PM, Jonathan Tu > >> > wrote: > >> > >> Hi, > >> > >> Is there any way to access the number of iterations it > >> takes to complete a GMRES computation? I've checked the > >> documentation at > >> http://docs.scipy.org/doc/scipy-0.13.0/reference/generated/scipy.sparse.linalg.gmres.html > >> and it doesn't appear so. I am doing some testing with passing > >> in initial guesses x0, and I am interested to know whether > >> or not this significantly reduces the required number of > >> iterations. > >> > >> > >> There's no return value that tells you tells (and we can't add one > >> in nice a backwards-compatible way), but you can use a callback > >> function to do this. Just provide a callback that increments some > >> counter each time it is called. 
> >> > >> Ralf > >> > >> > >> > >> > >> > >> Thanks, > >> Jonathan Tu > >> > >> _______________________________________________ > >> SciPy-User mailing list > >> SciPy-User at scipy.org > >> http://mail.scipy.org/mailman/listinfo/scipy-user > >> > >> > >> > >> _______________________________________________ > >> SciPy-User mailing list > >> SciPy-User at scipy.org > >> http://mail.scipy.org/mailman/listinfo/scipy-user > >> > >> > >> _______________________________________________ > >> SciPy-User mailing list > >> SciPy-User at scipy.org > >> http://mail.scipy.org/mailman/listinfo/scipy-user > > > > > > > > _______________________________________________ > > SciPy-User mailing list > > SciPy-User at scipy.org > > http://mail.scipy.org/mailman/listinfo/scipy-user > From davidmenhur at gmail.com Tue Jun 10 10:22:03 2014 From: davidmenhur at gmail.com (=?UTF-8?B?RGHPgGlk?=) Date: Tue, 10 Jun 2014 16:22:03 +0200 Subject: [SciPy-User] GMRES iteration number In-Reply-To: <20140610104619.74e475b2@turing> References: <2EAF2F65-0385-4899-8897-E62628EAD50D@gmail.com> <656159D5-8BF8-43C7-B81F-C5539211B32A@gmail.com> <53961A71.7010301@chem.wisc.edu> <20140610104619.74e475b2@turing> Message-ID: On 10 June 2014 11:46, Pascal Bugnion wrote: > The lightest way to make a callback that keeps track of the number of > times it is called is probably to use a closure: > I class can be simpler: class Callback(object): def __init__(self): self.counter = 0 def.__call__(self): self.counter += 1 callback = Callback() callback() callback() ... -------------- next part -------------- An HTML attachment was scrubbed... URL: From maria-rosaria.antonelli at curie.fr Wed Jun 11 05:08:53 2014 From: maria-rosaria.antonelli at curie.fr (Antonelli Maria Rosaria) Date: Wed, 11 Jun 2014 09:08:53 +0000 Subject: [SciPy-User] Problem with handling big matrices with Windows Message-ID: Hello, I am working with Windows 7, 64bits and I have 8G of Ram. I have installed Anaconda 32bits and I get "memory" error when handling matrices of 80*100*285*384. Do installing Anaconda 64bits would solve this problem ? Can I install Anaconda 64 bits without uninstalling Anaconda 32bits ? Best regards, Rosa From matthieu.brucher at gmail.com Wed Jun 11 05:13:48 2014 From: matthieu.brucher at gmail.com (Matthieu Brucher) Date: Wed, 11 Jun 2014 10:13:48 +0100 Subject: [SciPy-User] Problem with handling big matrices with Windows In-Reply-To: References: Message-ID: Hi, Your arrays have more than 800 millions entries, so even with floats or int32, that means more than 3GB, meaning only one would fit in the process memory. You _have_ to switch to a 64bits Python and hope you have enough RAM as well. Cheers, Matthieu 2014-06-11 10:08 GMT+01:00 Antonelli Maria Rosaria : > Hello, > > I am working with Windows 7, 64bits and I have 8G of Ram. > I have installed Anaconda 32bits and I get "memory" error when handling > matrices of 80*100*285*384. > Do installing Anaconda 64bits would solve this problem ? > Can I install Anaconda 64 bits without uninstalling Anaconda 32bits ? > > Best regards, > Rosa > > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user -- Information System Engineer, Ph.D. 
Blog: http://matt.eifelle.com LinkedIn: http://www.linkedin.com/in/matthieubrucher Music band: http://liliejay.com/ From maria-rosaria.antonelli at curie.fr Wed Jun 11 05:17:06 2014 From: maria-rosaria.antonelli at curie.fr (Antonelli Maria Rosaria) Date: Wed, 11 Jun 2014 09:17:06 +0000 Subject: [SciPy-User] Problem with handling big matrices with Windows Message-ID: Sorry I have to correct my self. I already have installed Anaconda 64 bit? Best regards, Rosa On 6/11/14 11:13 AM, "Matthieu Brucher" wrote: >Hi, > >Your arrays have more than 800 millions entries, so even with floats >or int32, that means more than 3GB, meaning only one would fit in the >process memory. >You _have_ to switch to a 64bits Python and hope you have enough RAM as >well. > >Cheers, > >Matthieu > >2014-06-11 10:08 GMT+01:00 Antonelli Maria Rosaria >: >> Hello, >> >> I am working with Windows 7, 64bits and I have 8G of Ram. >> I have installed Anaconda 32bits and I get "memory" error when handling >> matrices of 80*100*285*384. >> Do installing Anaconda 64bits would solve this problem ? >> Can I install Anaconda 64 bits without uninstalling Anaconda 32bits ? >> >> Best regards, >> Rosa >> >> _______________________________________________ >> SciPy-User mailing list >> SciPy-User at scipy.org >> http://mail.scipy.org/mailman/listinfo/scipy-user > > > >-- >Information System Engineer, Ph.D. >Blog: http://matt.eifelle.com >LinkedIn: http://www.linkedin.com/in/matthieubrucher >Music band: http://liliejay.com/ >_______________________________________________ >SciPy-User mailing list >SciPy-User at scipy.org >http://mail.scipy.org/mailman/listinfo/scipy-user From matthieu.brucher at gmail.com Wed Jun 11 05:20:42 2014 From: matthieu.brucher at gmail.com (Matthieu Brucher) Date: Wed, 11 Jun 2014 10:20:42 +0100 Subject: [SciPy-User] Problem with handling big matrices with Windows In-Reply-To: References: Message-ID: How many such matrices do you have? What is the content? Even if you could allocate up to 2**64 bytes with a 64bits app, it is not possible to do so. You are limited by your RAM and the size of your swap file. If the OS can find enough memory, it will fail with the error message you have. Perhaps you should use memmap to map your saved array on disk, instead of having it in memory. Cheers, Matthieu 2014-06-11 11:17 GMT+02:00 Antonelli Maria Rosaria : > Sorry I have to correct my self. > I already have installed Anaconda 64 bit? > Best regards, > Rosa > > On 6/11/14 11:13 AM, "Matthieu Brucher" wrote: > >>Hi, >> >>Your arrays have more than 800 millions entries, so even with floats >>or int32, that means more than 3GB, meaning only one would fit in the >>process memory. >>You _have_ to switch to a 64bits Python and hope you have enough RAM as >>well. >> >>Cheers, >> >>Matthieu >> >>2014-06-11 10:08 GMT+01:00 Antonelli Maria Rosaria >>: >>> Hello, >>> >>> I am working with Windows 7, 64bits and I have 8G of Ram. >>> I have installed Anaconda 32bits and I get "memory" error when handling >>> matrices of 80*100*285*384. >>> Do installing Anaconda 64bits would solve this problem ? >>> Can I install Anaconda 64 bits without uninstalling Anaconda 32bits ? >>> >>> Best regards, >>> Rosa >>> >>> _______________________________________________ >>> SciPy-User mailing list >>> SciPy-User at scipy.org >>> http://mail.scipy.org/mailman/listinfo/scipy-user >> >> >> >>-- >>Information System Engineer, Ph.D. 
>>Blog: http://matt.eifelle.com >>LinkedIn: http://www.linkedin.com/in/matthieubrucher >>Music band: http://liliejay.com/ >>_______________________________________________ >>SciPy-User mailing list >>SciPy-User at scipy.org >>http://mail.scipy.org/mailman/listinfo/scipy-user > > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user -- Information System Engineer, Ph.D. Blog: http://matt.eifelle.com LinkedIn: http://www.linkedin.com/in/matthieubrucher Music band: http://liliejay.com/ From maria-rosaria.antonelli at curie.fr Wed Jun 11 05:26:34 2014 From: maria-rosaria.antonelli at curie.fr (Antonelli Maria Rosaria) Date: Wed, 11 Jun 2014 09:26:34 +0000 Subject: [SciPy-User] Problem with handling big matrices with Windows In-Reply-To: References: Message-ID: Hello, Thanks for answering. I do have only one matrix that big and some others 80*285*384. Have you ever heard about numpy binaries by Christoph Gohlke ? Best, Rosa On 6/11/14 11:20 AM, "Matthieu Brucher" wrote: >How many such matrices do you have? What is the content? Even if you >could allocate up to 2**64 bytes with a 64bits app, it is not possible >to do so. You are limited by your RAM and the size of your swap file. >If the OS can find enough memory, it will fail with the error message >you have. >Perhaps you should use memmap to map your saved array on disk, instead >of having it in memory. > >Cheers, > >Matthieu > >2014-06-11 11:17 GMT+02:00 Antonelli Maria Rosaria >: >> Sorry I have to correct my self. >> I already have installed Anaconda 64 bit? >> Best regards, >> Rosa >> >> On 6/11/14 11:13 AM, "Matthieu Brucher" >>wrote: >> >>>Hi, >>> >>>Your arrays have more than 800 millions entries, so even with floats >>>or int32, that means more than 3GB, meaning only one would fit in the >>>process memory. >>>You _have_ to switch to a 64bits Python and hope you have enough RAM as >>>well. >>> >>>Cheers, >>> >>>Matthieu >>> >>>2014-06-11 10:08 GMT+01:00 Antonelli Maria Rosaria >>>: >>>> Hello, >>>> >>>> I am working with Windows 7, 64bits and I have 8G of Ram. >>>> I have installed Anaconda 32bits and I get "memory" error when >>>>handling >>>> matrices of 80*100*285*384. >>>> Do installing Anaconda 64bits would solve this problem ? >>>> Can I install Anaconda 64 bits without uninstalling Anaconda 32bits ? >>>> >>>> Best regards, >>>> Rosa >>>> >>>> _______________________________________________ >>>> SciPy-User mailing list >>>> SciPy-User at scipy.org >>>> http://mail.scipy.org/mailman/listinfo/scipy-user >>> >>> >>> >>>-- >>>Information System Engineer, Ph.D. >>>Blog: http://matt.eifelle.com >>>LinkedIn: http://www.linkedin.com/in/matthieubrucher >>>Music band: http://liliejay.com/ >>>_______________________________________________ >>>SciPy-User mailing list >>>SciPy-User at scipy.org >>>http://mail.scipy.org/mailman/listinfo/scipy-user >> >> _______________________________________________ >> SciPy-User mailing list >> SciPy-User at scipy.org >> http://mail.scipy.org/mailman/listinfo/scipy-user > > > >-- >Information System Engineer, Ph.D. 
>Blog: http://matt.eifelle.com >LinkedIn: http://www.linkedin.com/in/matthieubrucher >Music band: http://liliejay.com/ >_______________________________________________ >SciPy-User mailing list >SciPy-User at scipy.org >http://mail.scipy.org/mailman/listinfo/scipy-user From matthieu.brucher at gmail.com Wed Jun 11 05:32:02 2014 From: matthieu.brucher at gmail.com (Matthieu Brucher) Date: Wed, 11 Jun 2014 10:32:02 +0100 Subject: [SciPy-User] Problem with handling big matrices with Windows In-Reply-To: References: Message-ID: Yes, but I don't think that would change anything. It is still the same OS that allocates memory in the end. Cheers, 2014-06-11 10:26 GMT+01:00 Antonelli Maria Rosaria : > Hello, > Thanks for answering. > I do have only one matrix that big and some others 80*285*384. > Have you ever heard about numpy binaries by Christoph Gohlke ? > > Best, > Rosa > > On 6/11/14 11:20 AM, "Matthieu Brucher" wrote: > >>How many such matrices do you have? What is the content? Even if you >>could allocate up to 2**64 bytes with a 64bits app, it is not possible >>to do so. You are limited by your RAM and the size of your swap file. >>If the OS can find enough memory, it will fail with the error message >>you have. >>Perhaps you should use memmap to map your saved array on disk, instead >>of having it in memory. >> >>Cheers, >> >>Matthieu >> >>2014-06-11 11:17 GMT+02:00 Antonelli Maria Rosaria >>: >>> Sorry I have to correct my self. >>> I already have installed Anaconda 64 bit? >>> Best regards, >>> Rosa >>> >>> On 6/11/14 11:13 AM, "Matthieu Brucher" >>>wrote: >>> >>>>Hi, >>>> >>>>Your arrays have more than 800 millions entries, so even with floats >>>>or int32, that means more than 3GB, meaning only one would fit in the >>>>process memory. >>>>You _have_ to switch to a 64bits Python and hope you have enough RAM as >>>>well. >>>> >>>>Cheers, >>>> >>>>Matthieu >>>> >>>>2014-06-11 10:08 GMT+01:00 Antonelli Maria Rosaria >>>>: >>>>> Hello, >>>>> >>>>> I am working with Windows 7, 64bits and I have 8G of Ram. >>>>> I have installed Anaconda 32bits and I get "memory" error when >>>>>handling >>>>> matrices of 80*100*285*384. >>>>> Do installing Anaconda 64bits would solve this problem ? >>>>> Can I install Anaconda 64 bits without uninstalling Anaconda 32bits ? >>>>> >>>>> Best regards, >>>>> Rosa >>>>> >>>>> _______________________________________________ >>>>> SciPy-User mailing list >>>>> SciPy-User at scipy.org >>>>> http://mail.scipy.org/mailman/listinfo/scipy-user >>>> >>>> >>>> >>>>-- >>>>Information System Engineer, Ph.D. >>>>Blog: http://matt.eifelle.com >>>>LinkedIn: http://www.linkedin.com/in/matthieubrucher >>>>Music band: http://liliejay.com/ >>>>_______________________________________________ >>>>SciPy-User mailing list >>>>SciPy-User at scipy.org >>>>http://mail.scipy.org/mailman/listinfo/scipy-user >>> >>> _______________________________________________ >>> SciPy-User mailing list >>> SciPy-User at scipy.org >>> http://mail.scipy.org/mailman/listinfo/scipy-user >> >> >> >>-- >>Information System Engineer, Ph.D. >>Blog: http://matt.eifelle.com >>LinkedIn: http://www.linkedin.com/in/matthieubrucher >>Music band: http://liliejay.com/ >>_______________________________________________ >>SciPy-User mailing list >>SciPy-User at scipy.org >>http://mail.scipy.org/mailman/listinfo/scipy-user > > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user -- Information System Engineer, Ph.D. 
Blog: http://matt.eifelle.com LinkedIn: http://www.linkedin.com/in/matthieubrucher Music band: http://liliejay.com/ From maria-rosaria.antonelli at curie.fr Wed Jun 11 05:33:26 2014 From: maria-rosaria.antonelli at curie.fr (Antonelli Maria Rosaria) Date: Wed, 11 Jun 2014 09:33:26 +0000 Subject: [SciPy-User] Problem with handling big matrices with Windows In-Reply-To: References: Message-ID: Thank you. I am going to try with memmap. Best, Rosa On 6/11/14 11:32 AM, "Matthieu Brucher" wrote: >Yes, but I don't think that would change anything. It is still the >same OS that allocates memory in the end. > >Cheers, > >2014-06-11 10:26 GMT+01:00 Antonelli Maria Rosaria >: >> Hello, >> Thanks for answering. >> I do have only one matrix that big and some others 80*285*384. >> Have you ever heard about numpy binaries by Christoph Gohlke ? >> >> Best, >> Rosa >> >> On 6/11/14 11:20 AM, "Matthieu Brucher" >>wrote: >> >>>How many such matrices do you have? What is the content? Even if you >>>could allocate up to 2**64 bytes with a 64bits app, it is not possible >>>to do so. You are limited by your RAM and the size of your swap file. >>>If the OS can find enough memory, it will fail with the error message >>>you have. >>>Perhaps you should use memmap to map your saved array on disk, instead >>>of having it in memory. >>> >>>Cheers, >>> >>>Matthieu >>> >>>2014-06-11 11:17 GMT+02:00 Antonelli Maria Rosaria >>>: >>>> Sorry I have to correct my self. >>>> I already have installed Anaconda 64 bit? >>>> Best regards, >>>> Rosa >>>> >>>> On 6/11/14 11:13 AM, "Matthieu Brucher" >>>>wrote: >>>> >>>>>Hi, >>>>> >>>>>Your arrays have more than 800 millions entries, so even with floats >>>>>or int32, that means more than 3GB, meaning only one would fit in the >>>>>process memory. >>>>>You _have_ to switch to a 64bits Python and hope you have enough RAM >>>>>as >>>>>well. >>>>> >>>>>Cheers, >>>>> >>>>>Matthieu >>>>> >>>>>2014-06-11 10:08 GMT+01:00 Antonelli Maria Rosaria >>>>>: >>>>>> Hello, >>>>>> >>>>>> I am working with Windows 7, 64bits and I have 8G of Ram. >>>>>> I have installed Anaconda 32bits and I get "memory" error when >>>>>>handling >>>>>> matrices of 80*100*285*384. >>>>>> Do installing Anaconda 64bits would solve this problem ? >>>>>> Can I install Anaconda 64 bits without uninstalling Anaconda 32bits >>>>>>? >>>>>> >>>>>> Best regards, >>>>>> Rosa >>>>>> >>>>>> _______________________________________________ >>>>>> SciPy-User mailing list >>>>>> SciPy-User at scipy.org >>>>>> http://mail.scipy.org/mailman/listinfo/scipy-user >>>>> >>>>> >>>>> >>>>>-- >>>>>Information System Engineer, Ph.D. >>>>>Blog: http://matt.eifelle.com >>>>>LinkedIn: http://www.linkedin.com/in/matthieubrucher >>>>>Music band: http://liliejay.com/ >>>>>_______________________________________________ >>>>>SciPy-User mailing list >>>>>SciPy-User at scipy.org >>>>>http://mail.scipy.org/mailman/listinfo/scipy-user >>>> >>>> _______________________________________________ >>>> SciPy-User mailing list >>>> SciPy-User at scipy.org >>>> http://mail.scipy.org/mailman/listinfo/scipy-user >>> >>> >>> >>>-- >>>Information System Engineer, Ph.D. 
>>>Blog: http://matt.eifelle.com >>>LinkedIn: http://www.linkedin.com/in/matthieubrucher >>>Music band: http://liliejay.com/ >>>_______________________________________________ >>>SciPy-User mailing list >>>SciPy-User at scipy.org >>>http://mail.scipy.org/mailman/listinfo/scipy-user >> >> _______________________________________________ >> SciPy-User mailing list >> SciPy-User at scipy.org >> http://mail.scipy.org/mailman/listinfo/scipy-user > > > >-- >Information System Engineer, Ph.D. >Blog: http://matt.eifelle.com >LinkedIn: http://www.linkedin.com/in/matthieubrucher >Music band: http://liliejay.com/ >_______________________________________________ >SciPy-User mailing list >SciPy-User at scipy.org >http://mail.scipy.org/mailman/listinfo/scipy-user From h.chr at mail.ru Wed Jun 11 05:31:37 2014 From: h.chr at mail.ru (Horea Christian) Date: Wed, 11 Jun 2014 11:31:37 +0200 Subject: [SciPy-User] Problem with handling big matrices with Windows In-Reply-To: References: Message-ID: <539821F9.3060705@mail.ru> The memory error more likely is caused by how *exactly* you are handling the matrix. Tell us what operations you are trying to perform, and we might be able to tell you more. I doubt the error has much to do with 32 vs. 64 bits. On Mi 11 Jun 2014 11:08:53 CEST, Antonelli Maria Rosaria wrote: > Hello, > > I am working with Windows 7, 64bits and I have 8G of Ram. > I have installed Anaconda 32bits and I get "memory" error when handling > matrices of 80*100*285*384. > Do installing Anaconda 64bits would solve this problem ? > Can I install Anaconda 64 bits without uninstalling Anaconda 32bits ? > > Best regards, > Rosa > > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user -- Horea Christian http://chymera.eu From maria-rosaria.antonelli at curie.fr Wed Jun 11 05:54:54 2014 From: maria-rosaria.antonelli at curie.fr (Antonelli Maria Rosaria) Date: Wed, 11 Jun 2014 09:54:54 +0000 Subject: [SciPy-User] Problem with handling big matrices with Windows In-Reply-To: <539821F9.3060705@mail.ru> References: <539821F9.3060705@mail.ru> Message-ID: Hello, Thanks for answering. I have to correct myself, I have Anaconda 64bits installed. I want to create a zeros matrix of that size, for the moment. I am not doing much? Best, Rosa On 6/11/14 11:31 AM, "Horea Christian" wrote: >The memory error more likely is caused by how *exactly* you are >handling the matrix. Tell us what operations you are trying to perform, >and we might be able to tell you more. I doubt the error has much to do >with 32 vs. 64 bits. > >On Mi 11 Jun 2014 11:08:53 CEST, Antonelli Maria Rosaria wrote: >> Hello, >> >> I am working with Windows 7, 64bits and I have 8G of Ram. >> I have installed Anaconda 32bits and I get "memory" error when handling >> matrices of 80*100*285*384. >> Do installing Anaconda 64bits would solve this problem ? >> Can I install Anaconda 64 bits without uninstalling Anaconda 32bits ? 
>> >> Best regards, >> Rosa >> >> _______________________________________________ >> SciPy-User mailing list >> SciPy-User at scipy.org >> http://mail.scipy.org/mailman/listinfo/scipy-user > >-- >Horea Christian >http://chymera.eu >_______________________________________________ >SciPy-User mailing list >SciPy-User at scipy.org >http://mail.scipy.org/mailman/listinfo/scipy-user From maria-rosaria.antonelli at curie.fr Wed Jun 11 06:01:56 2014 From: maria-rosaria.antonelli at curie.fr (Antonelli Maria Rosaria) Date: Wed, 11 Jun 2014 10:01:56 +0000 Subject: [SciPy-User] Problem with handling big matrices with Windows In-Reply-To: References: Message-ID: Hello, I can np.memmap of that size, but when I ask to assign a zeros matrix of the same size to the np.memmap, it gives the same error : fp = np.memmap(filename, dtype='float, mode ='w+', shape(80, 100, 384, 285)) fp = zeros((80, 100, 384, 285)). Best, Rosa On 6/11/14 11:32 AM, "Matthieu Brucher" wrote: >Yes, but I don't think that would change anything. It is still the >same OS that allocates memory in the end. > >Cheers, > >2014-06-11 10:26 GMT+01:00 Antonelli Maria Rosaria >: >> Hello, >> Thanks for answering. >> I do have only one matrix that big and some others 80*285*384. >> Have you ever heard about numpy binaries by Christoph Gohlke ? >> >> Best, >> Rosa >> >> On 6/11/14 11:20 AM, "Matthieu Brucher" >>wrote: >> >>>How many such matrices do you have? What is the content? Even if you >>>could allocate up to 2**64 bytes with a 64bits app, it is not possible >>>to do so. You are limited by your RAM and the size of your swap file. >>>If the OS can find enough memory, it will fail with the error message >>>you have. >>>Perhaps you should use memmap to map your saved array on disk, instead >>>of having it in memory. >>> >>>Cheers, >>> >>>Matthieu >>> >>>2014-06-11 11:17 GMT+02:00 Antonelli Maria Rosaria >>>: >>>> Sorry I have to correct my self. >>>> I already have installed Anaconda 64 bit? >>>> Best regards, >>>> Rosa >>>> >>>> On 6/11/14 11:13 AM, "Matthieu Brucher" >>>>wrote: >>>> >>>>>Hi, >>>>> >>>>>Your arrays have more than 800 millions entries, so even with floats >>>>>or int32, that means more than 3GB, meaning only one would fit in the >>>>>process memory. >>>>>You _have_ to switch to a 64bits Python and hope you have enough RAM >>>>>as >>>>>well. >>>>> >>>>>Cheers, >>>>> >>>>>Matthieu >>>>> >>>>>2014-06-11 10:08 GMT+01:00 Antonelli Maria Rosaria >>>>>: >>>>>> Hello, >>>>>> >>>>>> I am working with Windows 7, 64bits and I have 8G of Ram. >>>>>> I have installed Anaconda 32bits and I get "memory" error when >>>>>>handling >>>>>> matrices of 80*100*285*384. >>>>>> Do installing Anaconda 64bits would solve this problem ? >>>>>> Can I install Anaconda 64 bits without uninstalling Anaconda 32bits >>>>>>? >>>>>> >>>>>> Best regards, >>>>>> Rosa >>>>>> >>>>>> _______________________________________________ >>>>>> SciPy-User mailing list >>>>>> SciPy-User at scipy.org >>>>>> http://mail.scipy.org/mailman/listinfo/scipy-user >>>>> >>>>> >>>>> >>>>>-- >>>>>Information System Engineer, Ph.D. 
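As quoted above, the two posted lines will not parse: the dtype string is missing its closing quote, shape needs an equals sign, and zeros presumably means np.zeros. A corrected sketch of the first line (the file name and the float64 dtype are only assumptions):

import numpy as np

filename = 'big_array.dat'   # hypothetical file name
fp = np.memmap(filename, dtype='float64', mode='w+',
               shape=(80, 100, 384, 285))

Note that following this with fp = np.zeros((80, 100, 384, 285)) would not zero the memmap; it rebinds the name fp to a brand-new in-memory array of the full size, which is exactly the allocation that fails. The replies below make the same point and show fp[:] = 0 instead.
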
>>>>>Blog: http://matt.eifelle.com >>>>>LinkedIn: http://www.linkedin.com/in/matthieubrucher >>>>>Music band: http://liliejay.com/ >>>>>_______________________________________________ >>>>>SciPy-User mailing list >>>>>SciPy-User at scipy.org >>>>>http://mail.scipy.org/mailman/listinfo/scipy-user >>>> >>>> _______________________________________________ >>>> SciPy-User mailing list >>>> SciPy-User at scipy.org >>>> http://mail.scipy.org/mailman/listinfo/scipy-user >>> >>> >>> >>>-- >>>Information System Engineer, Ph.D. >>>Blog: http://matt.eifelle.com >>>LinkedIn: http://www.linkedin.com/in/matthieubrucher >>>Music band: http://liliejay.com/ >>>_______________________________________________ >>>SciPy-User mailing list >>>SciPy-User at scipy.org >>>http://mail.scipy.org/mailman/listinfo/scipy-user >> >> _______________________________________________ >> SciPy-User mailing list >> SciPy-User at scipy.org >> http://mail.scipy.org/mailman/listinfo/scipy-user > > > >-- >Information System Engineer, Ph.D. >Blog: http://matt.eifelle.com >LinkedIn: http://www.linkedin.com/in/matthieubrucher >Music band: http://liliejay.com/ >_______________________________________________ >SciPy-User mailing list >SciPy-User at scipy.org >http://mail.scipy.org/mailman/listinfo/scipy-user From davidmenhur at gmail.com Wed Jun 11 06:07:04 2014 From: davidmenhur at gmail.com (=?UTF-8?B?RGHPgGlk?=) Date: Wed, 11 Jun 2014 12:07:04 +0200 Subject: [SciPy-User] Problem with handling big matrices with Windows In-Reply-To: References: Message-ID: On 11 June 2014 12:01, Antonelli Maria Rosaria < maria-rosaria.antonelli at curie.fr> wrote: > I can np.memmap of that size, but when I ask to assign a zeros matrix of > the same size to the np.memmap, it gives the same error : > fp = np.memmap(filename, dtype='float, mode ='w+', shape(80, 100, 384, > 285)) > fp = zeros((80, 100, 384, 285) > You are not assigning values, but just replacing the memap object by a new matrix. You can zero the matrix by doing: fp = np.memap(...) fp[:] = 0 -------------- next part -------------- An HTML attachment was scrubbed... URL: From maria-rosaria.antonelli at curie.fr Wed Jun 11 06:12:03 2014 From: maria-rosaria.antonelli at curie.fr (Antonelli Maria Rosaria) Date: Wed, 11 Jun 2014 10:12:03 +0000 Subject: [SciPy-User] Problem with handling big matrices with Windows In-Reply-To: References: Message-ID: Yes, thank you,. But later I will have to assign a matrix to float to fp? Best, Rosa From: Da?id > Reply-To: SciPy Users List > Date: Wednesday, June 11, 2014 12:07 PM To: SciPy Users List > Subject: Re: [SciPy-User] Problem with handling big matrices with Windows On 11 June 2014 12:01, Antonelli Maria Rosaria > wrote: I can np.memmap of that size, but when I ask to assign a zeros matrix of the same size to the np.memmap, it gives the same error : fp = np.memmap(filename, dtype='float, mode ='w+', shape(80, 100, 384, 285)) fp = zeros((80, 100, 384, 285) You are not assigning values, but just replacing the memap object by a new matrix. You can zero the matrix by doing: fp = np.memap(...) fp[:] = 0 -------------- next part -------------- An HTML attachment was scrubbed... URL: From matthieu.brucher at gmail.com Wed Jun 11 06:13:58 2014 From: matthieu.brucher at gmail.com (Matthieu Brucher) Date: Wed, 11 Jun 2014 11:13:58 +0100 Subject: [SciPy-User] Problem with handling big matrices with Windows In-Reply-To: References: Message-ID: You will need to construct this array bits by bits, not in one go. 
No other solution, as it would be even worse in terms of memory usage. 2014-06-11 11:12 GMT+01:00 Antonelli Maria Rosaria : > Yes, thank you,. > But later I will have to assign a matrix to float to fp? > Best, > Rosa > > From: Da?id > Reply-To: SciPy Users List > Date: Wednesday, June 11, 2014 12:07 PM > To: SciPy Users List > Subject: Re: [SciPy-User] Problem with handling big matrices with Windows > > On 11 June 2014 12:01, Antonelli Maria Rosaria > wrote: >> >> I can np.memmap of that size, but when I ask to assign a zeros matrix of >> the same size to the np.memmap, it gives the same error : >> fp = np.memmap(filename, dtype='float, mode ='w+', shape(80, 100, 384, >> 285)) >> fp = zeros((80, 100, 384, 285) > > > You are not assigning values, but just replacing the memap object by a new > matrix. You can zero the matrix by doing: > > fp = np.memap(...) > fp[:] = 0 > > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user > -- Information System Engineer, Ph.D. Blog: http://matt.eifelle.com LinkedIn: http://www.linkedin.com/in/matthieubrucher Music band: http://liliejay.com/ From maria-rosaria.antonelli at curie.fr Wed Jun 11 06:16:07 2014 From: maria-rosaria.antonelli at curie.fr (Antonelli Maria Rosaria) Date: Wed, 11 Jun 2014 10:16:07 +0000 Subject: [SciPy-User] Problem with handling big matrices with Windows In-Reply-To: References: Message-ID: Thank you very much ! I just wanted to be sure that there are not other solution. Do Blaze say anything to you ? Otherwise I will split my matrix by two ! Best, Rosa On 6/11/14 12:13 PM, "Matthieu Brucher" wrote: >You will need to construct this array bits by bits, not in one go. No >other solution, as it would be even worse in terms of memory usage. > >2014-06-11 11:12 GMT+01:00 Antonelli Maria Rosaria >: >> Yes, thank you,. >> But later I will have to assign a matrix to float to fp? >> Best, >> Rosa >> >> From: Da?id >> Reply-To: SciPy Users List >> Date: Wednesday, June 11, 2014 12:07 PM >> To: SciPy Users List >> Subject: Re: [SciPy-User] Problem with handling big matrices with >>Windows >> >> On 11 June 2014 12:01, Antonelli Maria Rosaria >> wrote: >>> >>> I can np.memmap of that size, but when I ask to assign a zeros matrix >>>of >>> the same size to the np.memmap, it gives the same error : >>> fp = np.memmap(filename, dtype='float, mode ='w+', shape(80, 100, 384, >>> 285)) >>> fp = zeros((80, 100, 384, 285) >> >> >> You are not assigning values, but just replacing the memap object by a >>new >> matrix. You can zero the matrix by doing: >> >> fp = np.memap(...) >> fp[:] = 0 >> >> _______________________________________________ >> SciPy-User mailing list >> SciPy-User at scipy.org >> http://mail.scipy.org/mailman/listinfo/scipy-user >> > > > >-- >Information System Engineer, Ph.D. 
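A minimal, untested sketch of the "bits by bits" construction being suggested here, using np.memmap so that only one slice along the first axis is ever held in RAM (compute_block is a stand-in for however each slice is really produced):

import numpy as np

shape = (80, 100, 384, 285)

def compute_block(i):
    # placeholder for the real computation of slice i
    return np.zeros(shape[1:], dtype='float64')

fp = np.memmap('big_array.dat', dtype='float64', mode='w+', shape=shape)
for i in range(shape[0]):
    fp[i] = compute_block(i)    # writes one ~87 MB slice at a time
fp.flush()                      # push everything out to disk

Reading works the same way: np.memmap('big_array.dat', dtype='float64', mode='r', shape=shape) returns an array-like object whose slices are only paged in as they are accessed.
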
>Blog: http://matt.eifelle.com >LinkedIn: http://www.linkedin.com/in/matthieubrucher >Music band: http://liliejay.com/ >_______________________________________________ >SciPy-User mailing list >SciPy-User at scipy.org >http://mail.scipy.org/mailman/listinfo/scipy-user From matthieu.brucher at gmail.com Wed Jun 11 06:23:19 2014 From: matthieu.brucher at gmail.com (Matthieu Brucher) Date: Wed, 11 Jun 2014 11:23:19 +0100 Subject: [SciPy-User] Problem with handling big matrices with Windows In-Reply-To: References: Message-ID: You can use whatever tool you want, in the end you will have to use directly on undirectly this kind of tools, and this means that you won't be able to create your array as is. If you split it in half, you may have the same issue in the end, as it seems that you want to create it, and then assign another array just as big to this one. Try splitting your algorithm in smaller pieces, so that you may even be able to parallelize it. Cheers, 2014-06-11 12:16 GMT+02:00 Antonelli Maria Rosaria : > Thank you very much ! > I just wanted to be sure that there are not other solution. > Do Blaze say anything to you ? > Otherwise I will split my matrix by two ! > Best, > Rosa > > On 6/11/14 12:13 PM, "Matthieu Brucher" wrote: > >>You will need to construct this array bits by bits, not in one go. No >>other solution, as it would be even worse in terms of memory usage. >> >>2014-06-11 11:12 GMT+01:00 Antonelli Maria Rosaria >>: >>> Yes, thank you,. >>> But later I will have to assign a matrix to float to fp? >>> Best, >>> Rosa >>> >>> From: Da?id >>> Reply-To: SciPy Users List >>> Date: Wednesday, June 11, 2014 12:07 PM >>> To: SciPy Users List >>> Subject: Re: [SciPy-User] Problem with handling big matrices with >>>Windows >>> >>> On 11 June 2014 12:01, Antonelli Maria Rosaria >>> wrote: >>>> >>>> I can np.memmap of that size, but when I ask to assign a zeros matrix >>>>of >>>> the same size to the np.memmap, it gives the same error : >>>> fp = np.memmap(filename, dtype='float, mode ='w+', shape(80, 100, 384, >>>> 285)) >>>> fp = zeros((80, 100, 384, 285) >>> >>> >>> You are not assigning values, but just replacing the memap object by a >>>new >>> matrix. You can zero the matrix by doing: >>> >>> fp = np.memap(...) >>> fp[:] = 0 >>> >>> _______________________________________________ >>> SciPy-User mailing list >>> SciPy-User at scipy.org >>> http://mail.scipy.org/mailman/listinfo/scipy-user >>> >> >> >> >>-- >>Information System Engineer, Ph.D. >>Blog: http://matt.eifelle.com >>LinkedIn: http://www.linkedin.com/in/matthieubrucher >>Music band: http://liliejay.com/ >>_______________________________________________ >>SciPy-User mailing list >>SciPy-User at scipy.org >>http://mail.scipy.org/mailman/listinfo/scipy-user > > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user -- Information System Engineer, Ph.D. Blog: http://matt.eifelle.com LinkedIn: http://www.linkedin.com/in/matthieubrucher Music band: http://liliejay.com/ From maria-rosaria.antonelli at curie.fr Wed Jun 11 06:26:59 2014 From: maria-rosaria.antonelli at curie.fr (Antonelli Maria Rosaria) Date: Wed, 11 Jun 2014 10:26:59 +0000 Subject: [SciPy-User] Problem with handling big matrices with Windows In-Reply-To: References: Message-ID: Ok, thank you. I will try splitting and keep you posted. Thank you very much for your help ! 
Best, Rosa On 6/11/14 12:23 PM, "Matthieu Brucher" wrote: >You can use whatever tool you want, in the end you will have to use >directly on undirectly this kind of tools, and this means that you >won't be able to create your array as is. If you split it in half, you >may have the same issue in the end, as it seems that you want to >create it, and then assign another array just as big to this one. Try >splitting your algorithm in smaller pieces, so that you may even be >able to parallelize it. > >Cheers, > >2014-06-11 12:16 GMT+02:00 Antonelli Maria Rosaria >: >> Thank you very much ! >> I just wanted to be sure that there are not other solution. >> Do Blaze say anything to you ? >> Otherwise I will split my matrix by two ! >> Best, >> Rosa >> >> On 6/11/14 12:13 PM, "Matthieu Brucher" >>wrote: >> >>>You will need to construct this array bits by bits, not in one go. No >>>other solution, as it would be even worse in terms of memory usage. >>> >>>2014-06-11 11:12 GMT+01:00 Antonelli Maria Rosaria >>>: >>>> Yes, thank you,. >>>> But later I will have to assign a matrix to float to fp? >>>> Best, >>>> Rosa >>>> >>>> From: Da?id >>>> Reply-To: SciPy Users List >>>> Date: Wednesday, June 11, 2014 12:07 PM >>>> To: SciPy Users List >>>> Subject: Re: [SciPy-User] Problem with handling big matrices with >>>>Windows >>>> >>>> On 11 June 2014 12:01, Antonelli Maria Rosaria >>>> wrote: >>>>> >>>>> I can np.memmap of that size, but when I ask to assign a zeros matrix >>>>>of >>>>> the same size to the np.memmap, it gives the same error : >>>>> fp = np.memmap(filename, dtype='float, mode ='w+', shape(80, 100, >>>>>384, >>>>> 285)) >>>>> fp = zeros((80, 100, 384, 285) >>>> >>>> >>>> You are not assigning values, but just replacing the memap object by a >>>>new >>>> matrix. You can zero the matrix by doing: >>>> >>>> fp = np.memap(...) >>>> fp[:] = 0 >>>> >>>> _______________________________________________ >>>> SciPy-User mailing list >>>> SciPy-User at scipy.org >>>> http://mail.scipy.org/mailman/listinfo/scipy-user >>>> >>> >>> >>> >>>-- >>>Information System Engineer, Ph.D. >>>Blog: http://matt.eifelle.com >>>LinkedIn: http://www.linkedin.com/in/matthieubrucher >>>Music band: http://liliejay.com/ >>>_______________________________________________ >>>SciPy-User mailing list >>>SciPy-User at scipy.org >>>http://mail.scipy.org/mailman/listinfo/scipy-user >> >> _______________________________________________ >> SciPy-User mailing list >> SciPy-User at scipy.org >> http://mail.scipy.org/mailman/listinfo/scipy-user > > > >-- >Information System Engineer, Ph.D. >Blog: http://matt.eifelle.com >LinkedIn: http://www.linkedin.com/in/matthieubrucher >Music band: http://liliejay.com/ >_______________________________________________ >SciPy-User mailing list >SciPy-User at scipy.org >http://mail.scipy.org/mailman/listinfo/scipy-user From sturla.molden at gmail.com Wed Jun 11 08:57:01 2014 From: sturla.molden at gmail.com (Sturla Molden) Date: Wed, 11 Jun 2014 12:57:01 +0000 (UTC) Subject: [SciPy-User] Problem with handling big matrices with Windows References: Message-ID: <983230843424182976.456329sturla.molden-gmail.com@news.gmane.org> Antonelli Maria Rosaria wrote: > Thank you. > I am going to try with memmap. The virtual memory space at your disposal is just 2 GB. That includes allocated RAM and memmap. If you memmap a file you can allocate less memory from RAM. But you cannot exceed 2 GB in total. That is the upper limit on 32 bit Windows applications. 
memmap can be used as "extended memory" if you run out of RAM on 64 bit
systems, because the virtual memory space is astronomic. On a 64 bit system
you can in practice memory map files of any size. This is one of the most
important reasons for preferring 64 bit Python. memmap has very limited
usability on 32 bit systems.

It is often used for other purposes than avoiding loading large files into
RAM. On ARM (e.g. Raspberry Pi) you communicate with hardware via memory
mapping. So with numpy.memmap you can communicate with any hardware you
connect to it. Or perhaps you have a database file and want random access
to the fields stored in the file (sorting, searching). Then it can be
easier to implement algorithms with mmap than file.read and file.write,
because the same functions can be used (often unchanged) on memmaps and
arrays. Or perhaps you want to use shared memory to share data between
processes. You would then memory map from the paging file (it has fd 0 on
Windows and fd -1 on Linux). But none of this has to do with "array too
big to fit in RAM".

Sturla

From sturla.molden at gmail.com  Wed Jun 11 08:57:05 2014
From: sturla.molden at gmail.com (Sturla Molden)
Date: Wed, 11 Jun 2014 12:57:05 +0000 (UTC)
Subject: [SciPy-User] Problem with handling big matrices with Windows
References: 
Message-ID: <1474024639424182917.700236sturla.molden-gmail.com@news.gmane.org>

Antonelli Maria Rosaria wrote:

> I am working with Windows 7, 64bits and I have 8G of Ram.
> I have installed Anaconda 32bits and I get "memory" error when handling
> matrices of 80*100*285*384.
> Do installing Anaconda 64bits would solve this problem ?

Yes.

Sturla

From sturla.molden at gmail.com  Wed Jun 11 08:57:05 2014
From: sturla.molden at gmail.com (Sturla Molden)
Date: Wed, 11 Jun 2014 12:57:05 +0000 (UTC)
Subject: [SciPy-User] Problem with handling big matrices with Windows
References: 
Message-ID: <1834503159424182764.930373sturla.molden-gmail.com@news.gmane.org>

Matthieu Brucher wrote:

> Your arrays have more than 800 millions entries, so even with floats
> or int32, that means more than 3GB, meaning only one would fit in the
> process memory.

32 bits applications on Windows only allow 2 GB of memory allocated from
user space. The virtual memory space is 4 GB, but the kernel has reserved
the upper half.

Sturla

From sturla.molden at gmail.com  Wed Jun 11 09:06:04 2014
From: sturla.molden at gmail.com (Sturla Molden)
Date: Wed, 11 Jun 2014 13:06:04 +0000 (UTC)
Subject: [SciPy-User] Problem with handling big matrices with Windows
References: 
Message-ID: <1829852382424184434.517664sturla.molden-gmail.com@news.gmane.org>

Antonelli Maria Rosaria wrote:

> Thank you very much !
> I just wanted to be sure that there are not other solution.
> Do Blaze say anything to you ?
> Otherwise I will split my matrix by two !

If you're working with really large arrays, it is often helpful to use
HDF5 to store the data (PyTables or h5py). PyTables also allows you to use
numexpr to do computations on the arrays in the HDF5 file without having
the whole array in RAM.

Sturla

From matthieu.brucher at gmail.com  Wed Jun 11 09:16:32 2014
From: matthieu.brucher at gmail.com (Matthieu Brucher)
Date: Wed, 11 Jun 2014 14:16:32 +0100
Subject: [SciPy-User] Problem with handling big matrices with Windows
In-Reply-To: <1834503159424182764.930373sturla.molden-gmail.com@news.gmane.org>
References: <1834503159424182764.930373sturla.molden-gmail.com@news.gmane.org>
Message-ID: 

Indeed, the official limit is 2GB (I usually had the option to use 3GB
activated).
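A minimal sketch of the HDF5 route suggested above, using h5py (untested; the file name, dataset name and the per-slice random data are placeholders). The dataset lives on disk and is written and read slice by slice, so the full array never has to be in memory at once:

import numpy as np
import h5py

shape = (80, 100, 285, 384)

with h5py.File('big_array.h5', 'w') as f:
    dset = f.create_dataset('data', shape=shape, dtype='float32',
                            chunks=True)
    for i in range(shape[0]):
        # stand-in for the real computation of one slice
        dset[i] = np.random.random_sample(shape[1:]).astype('float32')

with h5py.File('big_array.h5', 'r') as f:
    block = f['data'][:10]      # only this block is read into memory
    print(block.mean())

PyTables covers the same ground and adds tables.Expr for numexpr-style expressions evaluated out of core, which is what the message above refers to.
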
2014-06-11 14:57 GMT+02:00 Sturla Molden : > Matthieu Brucher wrote: > >> Your arrays have more than 800 millions entries, so even with floats >> or int32, that means more than 3GB, meaning only one would fit in the >> process memory. > > 32 bits applications on Windows only allow 2 GB of memory allocated from > user space. The virtual memory space i 4 GB, but the kernel has reserved > the upper half. > > > Sturla > > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user -- Information System Engineer, Ph.D. Blog: http://matt.eifelle.com LinkedIn: http://www.linkedin.com/in/matthieubrucher Music band: http://liliejay.com/ From paul.blelloch at ata-e.com Wed Jun 11 09:58:43 2014 From: paul.blelloch at ata-e.com (Paul Blelloch) Date: Wed, 11 Jun 2014 06:58:43 -0700 Subject: [SciPy-User] Problem with handling big matrices with Windows In-Reply-To: References: Message-ID: That's a large matrix. I figure that it's about 6.5 GBytes. On a laptop with 4 GBytes of memory I was able to create a matrix of half that size, but no larger. With a laptop with 8 GBytes of memory I was able to create a matrix of that size, but only barely. My guess is that if I was doing anything else of significance it would have failed., I think that you're on the edge with 8 GBytes. You either need to get more RAM, reduce the size of your problem or figure out how to do it in pieces. On Wed, 11 Jun 2014 05:14:22 -0500 scipy-user-request at scipy.org wrote: > Send SciPy-User mailing list submissions to > scipy-user at scipy.org > > To subscribe or unsubscribe via the World Wide Web, visit > http://mail.scipy.org/mailman/listinfo/scipy-user > or, via email, send a message with subject or body 'help' to > scipy-user-request at scipy.org > > You can reach the person managing the list at > scipy-user-owner at scipy.org > > When replying, please edit your Subject line so it is more specific > than "Re: Contents of SciPy-User digest..." > > > Today's Topics: > > 1. Re: Problem with handling big matrices with Windows > (Antonelli Maria Rosaria) > 2. Re: Problem with handling big matrices with Windows > (Antonelli Maria Rosaria) > 3. Re: Problem with handling big matrices with Windows (Da?id) > 4. Re: Problem with handling big matrices with Windows > (Antonelli Maria Rosaria) > 5. Re: Problem with handling big matrices with Windows > (Matthieu Brucher) > 6. Re: Problem with handling big matrices with Windows > (Antonelli Maria Rosaria) > 7. Re: Problem with handling big matrices with Windows > (Matthieu Brucher) > > > ---------------------------------------------------------------------- > > Message: 1 > Date: Wed, 11 Jun 2014 09:54:54 +0000 >From: Antonelli Maria Rosaria > Subject: Re: [SciPy-User] Problem with handling big matrices with > Windows > To: SciPy Users List > Message-ID: > Content-Type: text/plain; charset="Windows-1252" > > Hello, > Thanks for answering. > I have to correct myself, I have Anaconda 64bits installed. > I want to create a zeros matrix of that size, for the moment. I am >not > doing much? > > Best, > Rosa > > On 6/11/14 11:31 AM, "Horea Christian" wrote: > >>The memory error more likely is caused by how *exactly* you are >>handling the matrix. Tell us what operations you are trying to >>perform, >>and we might be able to tell you more. I doubt the error has much to >>do >>with 32 vs. 64 bits. 
>> >>On Mi 11 Jun 2014 11:08:53 CEST, Antonelli Maria Rosaria wrote: >>> Hello, >>> >>> I am working with Windows 7, 64bits and I have 8G of Ram. >>> I have installed Anaconda 32bits and I get "memory" error when >>>handling >>> matrices of 80*100*285*384. >>> Do installing Anaconda 64bits would solve this problem ? >>> Can I install Anaconda 64 bits without uninstalling Anaconda 32bits >>>? >>> >>> Best regards, >>> Rosa >>> >>> _______________________________________________ >>> SciPy-User mailing list >>> SciPy-User at scipy.org >>> http://mail.scipy.org/mailman/listinfo/scipy-user >> >>-- >>Horea Christian >>http://chymera.eu >>_______________________________________________ >>SciPy-User mailing list >>SciPy-User at scipy.org >>http://mail.scipy.org/mailman/listinfo/scipy-user > > > > ------------------------------ > > Message: 2 > Date: Wed, 11 Jun 2014 10:01:56 +0000 >From: Antonelli Maria Rosaria > Subject: Re: [SciPy-User] Problem with handling big matrices with > Windows > To: SciPy Users List > Message-ID: > Content-Type: text/plain; charset="iso-8859-2" > > Hello, > > I can np.memmap of that size, but when I ask to assign a zeros >matrix of > the same size to the np.memmap, it gives the same error : > fp = np.memmap(filename, dtype='float, mode ='w+', shape(80, 100, >384, > 285)) > fp = zeros((80, 100, 384, 285)). > > Best, > Rosa > > On 6/11/14 11:32 AM, "Matthieu Brucher" >wrote: > >>Yes, but I don't think that would change anything. It is still the >>same OS that allocates memory in the end. >> >>Cheers, >> >>2014-06-11 10:26 GMT+01:00 Antonelli Maria Rosaria >>: >>> Hello, >>> Thanks for answering. >>> I do have only one matrix that big and some others 80*285*384. >>> Have you ever heard about numpy binaries by Christoph Gohlke ? >>> >>> Best, >>> Rosa >>> >>> On 6/11/14 11:20 AM, "Matthieu Brucher" >>>wrote: >>> >>>>How many such matrices do you have? What is the content? Even if you >>>>could allocate up to 2**64 bytes with a 64bits app, it is not >>>>possible >>>>to do so. You are limited by your RAM and the size of your swap file. >>>>If the OS can find enough memory, it will fail with the error message >>>>you have. >>>>Perhaps you should use memmap to map your saved array on disk, >>>>instead >>>>of having it in memory. >>>> >>>>Cheers, >>>> >>>>Matthieu >>>> >>>>2014-06-11 11:17 GMT+02:00 Antonelli Maria Rosaria >>>>: >>>>> Sorry I have to correct my self. >>>>> I already have installed Anaconda 64 bit? >>>>> Best regards, >>>>> Rosa >>>>> >>>>> On 6/11/14 11:13 AM, "Matthieu Brucher" >>>>>wrote: >>>>> >>>>>>Hi, >>>>>> >>>>>>Your arrays have more than 800 millions entries, so even with floats >>>>>>or int32, that means more than 3GB, meaning only one would fit in the >>>>>>process memory. >>>>>>You _have_ to switch to a 64bits Python and hope you have enough RAM >>>>>>as >>>>>>well. >>>>>> >>>>>>Cheers, >>>>>> >>>>>>Matthieu >>>>>> >>>>>>2014-06-11 10:08 GMT+01:00 Antonelli Maria Rosaria >>>>>>: >>>>>>> Hello, >>>>>>> >>>>>>> I am working with Windows 7, 64bits and I have 8G of Ram. >>>>>>> I have installed Anaconda 32bits and I get "memory" error when >>>>>>>handling >>>>>>> matrices of 80*100*285*384. >>>>>>> Do installing Anaconda 64bits would solve this problem ? >>>>>>> Can I install Anaconda 64 bits without uninstalling Anaconda 32bits >>>>>>>? 
>>>>>>> >>>>>>> Best regards, >>>>>>> Rosa >>>>>>> >>>>>>> _______________________________________________ >>>>>>> SciPy-User mailing list >>>>>>> SciPy-User at scipy.org >>>>>>> http://mail.scipy.org/mailman/listinfo/scipy-user >>>>>> >>>>>> >>>>>> >>>>>>-- >>>>>>Information System Engineer, Ph.D. >>>>>>Blog: http://matt.eifelle.com >>>>>>LinkedIn: http://www.linkedin.com/in/matthieubrucher >>>>>>Music band: http://liliejay.com/ >>>>>>_______________________________________________ >>>>>>SciPy-User mailing list >>>>>>SciPy-User at scipy.org >>>>>>http://mail.scipy.org/mailman/listinfo/scipy-user >>>>> >>>>> _______________________________________________ >>>>> SciPy-User mailing list >>>>> SciPy-User at scipy.org >>>>> http://mail.scipy.org/mailman/listinfo/scipy-user >>>> >>>> >>>> >>>>-- >>>>Information System Engineer, Ph.D. >>>>Blog: http://matt.eifelle.com >>>>LinkedIn: http://www.linkedin.com/in/matthieubrucher >>>>Music band: http://liliejay.com/ >>>>_______________________________________________ >>>>SciPy-User mailing list >>>>SciPy-User at scipy.org >>>>http://mail.scipy.org/mailman/listinfo/scipy-user >>> >>> _______________________________________________ >>> SciPy-User mailing list >>> SciPy-User at scipy.org >>> http://mail.scipy.org/mailman/listinfo/scipy-user >> >> >> >>-- >>Information System Engineer, Ph.D. >>Blog: http://matt.eifelle.com >>LinkedIn: http://www.linkedin.com/in/matthieubrucher >>Music band: http://liliejay.com/ >>_______________________________________________ >>SciPy-User mailing list >>SciPy-User at scipy.org >>http://mail.scipy.org/mailman/listinfo/scipy-user > > > > ------------------------------ > > Message: 3 > Date: Wed, 11 Jun 2014 12:07:04 +0200 >From: Da?id > Subject: Re: [SciPy-User] Problem with handling big matrices with > Windows > To: SciPy Users List > Message-ID: > > Content-Type: text/plain; charset="utf-8" > > On 11 June 2014 12:01, Antonelli Maria Rosaria < > maria-rosaria.antonelli at curie.fr> wrote: > >> I can np.memmap of that size, but when I ask to assign a zeros >>matrix of >> the same size to the np.memmap, it gives the same error : >> fp = np.memmap(filename, dtype='float, mode ='w+', shape(80, 100, >>384, >> 285)) >> fp = zeros((80, 100, 384, 285) >> > > You are not assigning values, but just replacing the memap object by >a new > matrix. You can zero the matrix by doing: > > fp = np.memap(...) > fp[:] = 0 > -------------- next part -------------- > An HTML attachment was scrubbed... > URL: >http://mail.scipy.org/pipermail/scipy-user/attachments/20140611/9f3020b8/attachment-0001.html > > ------------------------------ > > Message: 4 > Date: Wed, 11 Jun 2014 10:12:03 +0000 >From: Antonelli Maria Rosaria > Subject: Re: [SciPy-User] Problem with handling big matrices with > Windows > To: SciPy Users List > Message-ID: > Content-Type: text/plain; charset="windows-1253" > > Yes, thank you,. > But later I will have to assign a matrix to float to fp? 
> Best, > Rosa > >From: Da?id > > Reply-To: SciPy Users List >> > Date: Wednesday, June 11, 2014 12:07 PM > To: SciPy Users List >> > Subject: Re: [SciPy-User] Problem with handling big matrices with >Windows > > On 11 June 2014 12:01, Antonelli Maria Rosaria >> >wrote: > I can np.memmap of that size, but when I ask to assign a zeros >matrix of > the same size to the np.memmap, it gives the same error : > fp = np.memmap(filename, dtype='float, mode ='w+', shape(80, 100, >384, > 285)) > fp = zeros((80, 100, 384, 285) > > You are not assigning values, but just replacing the memap object by >a new matrix. You can zero the matrix by doing: > > fp = np.memap(...) > fp[:] = 0 > -------------- next part -------------- > An HTML attachment was scrubbed... > URL: >http://mail.scipy.org/pipermail/scipy-user/attachments/20140611/4a0336c7/attachment-0001.html > > ------------------------------ > > Message: 5 > Date: Wed, 11 Jun 2014 11:13:58 +0100 >From: Matthieu Brucher > Subject: Re: [SciPy-User] Problem with handling big matrices with > Windows > To: SciPy Users List > Message-ID: > > Content-Type: text/plain; charset=UTF-8 > > You will need to construct this array bits by bits, not in one go. >No > other solution, as it would be even worse in terms of memory usage. > > 2014-06-11 11:12 GMT+01:00 Antonelli Maria Rosaria > : >> Yes, thank you,. >> But later I will have to assign a matrix to float to fp? >> Best, >> Rosa >> >> From: Da?id >> Reply-To: SciPy Users List >> Date: Wednesday, June 11, 2014 12:07 PM >> To: SciPy Users List >> Subject: Re: [SciPy-User] Problem with handling big matrices with >>Windows >> >> On 11 June 2014 12:01, Antonelli Maria Rosaria >> wrote: >>> >>> I can np.memmap of that size, but when I ask to assign a zeros >>>matrix of >>> the same size to the np.memmap, it gives the same error : >>> fp = np.memmap(filename, dtype='float, mode ='w+', shape(80, 100, >>>384, >>> 285)) >>> fp = zeros((80, 100, 384, 285) >> >> >> You are not assigning values, but just replacing the memap object by >>a new >> matrix. You can zero the matrix by doing: >> >> fp = np.memap(...) >> fp[:] = 0 >> >> _______________________________________________ >> SciPy-User mailing list >> SciPy-User at scipy.org >> http://mail.scipy.org/mailman/listinfo/scipy-user >> > > > > -- > Information System Engineer, Ph.D. > Blog: http://matt.eifelle.com > LinkedIn: http://www.linkedin.com/in/matthieubrucher > Music band: http://liliejay.com/ > > > ------------------------------ > > Message: 6 > Date: Wed, 11 Jun 2014 10:16:07 +0000 >From: Antonelli Maria Rosaria > Subject: Re: [SciPy-User] Problem with handling big matrices with > Windows > To: SciPy Users List > Message-ID: > Content-Type: text/plain; charset="Windows-1252" > > Thank you very much ! > I just wanted to be sure that there are not other solution. > Do Blaze say anything to you ? > Otherwise I will split my matrix by two ! > Best, > Rosa > > On 6/11/14 12:13 PM, "Matthieu Brucher" >wrote: > >>You will need to construct this array bits by bits, not in one go. No >>other solution, as it would be even worse in terms of memory usage. >> >>2014-06-11 11:12 GMT+01:00 Antonelli Maria Rosaria >>: >>> Yes, thank you,. >>> But later I will have to assign a matrix to float to fp? 
>>> Best, >>> Rosa >>> >>> From: Da?id >>> Reply-To: SciPy Users List >>> Date: Wednesday, June 11, 2014 12:07 PM >>> To: SciPy Users List >>> Subject: Re: [SciPy-User] Problem with handling big matrices with >>>Windows >>> >>> On 11 June 2014 12:01, Antonelli Maria Rosaria >>> wrote: >>>> >>>> I can np.memmap of that size, but when I ask to assign a zeros >>>>matrix >>>>of >>>> the same size to the np.memmap, it gives the same error : >>>> fp = np.memmap(filename, dtype='float, mode ='w+', shape(80, 100, >>>>384, >>>> 285)) >>>> fp = zeros((80, 100, 384, 285) >>> >>> >>> You are not assigning values, but just replacing the memap object by >>>a >>>new >>> matrix. You can zero the matrix by doing: >>> >>> fp = np.memap(...) >>> fp[:] = 0 >>> >>> _______________________________________________ >>> SciPy-User mailing list >>> SciPy-User at scipy.org >>> http://mail.scipy.org/mailman/listinfo/scipy-user >>> >> >> >> >>-- >>Information System Engineer, Ph.D. >>Blog: http://matt.eifelle.com >>LinkedIn: http://www.linkedin.com/in/matthieubrucher >>Music band: http://liliejay.com/ >>_______________________________________________ >>SciPy-User mailing list >>SciPy-User at scipy.org >>http://mail.scipy.org/mailman/listinfo/scipy-user > > > > ------------------------------ > > Message: 7 > Date: Wed, 11 Jun 2014 11:23:19 +0100 >From: Matthieu Brucher > Subject: Re: [SciPy-User] Problem with handling big matrices with > Windows > To: SciPy Users List > Message-ID: > > Content-Type: text/plain; charset=UTF-8 > > You can use whatever tool you want, in the end you will have to use > directly on undirectly this kind of tools, and this means that you > won't be able to create your array as is. If you split it in half, >you > may have the same issue in the end, as it seems that you want to > create it, and then assign another array just as big to this one. >Try > splitting your algorithm in smaller pieces, so that you may even be > able to parallelize it. > > Cheers, > > 2014-06-11 12:16 GMT+02:00 Antonelli Maria Rosaria > : >> Thank you very much ! >> I just wanted to be sure that there are not other solution. >> Do Blaze say anything to you ? >> Otherwise I will split my matrix by two ! >> Best, >> Rosa >> >> On 6/11/14 12:13 PM, "Matthieu Brucher" >>wrote: >> >>>You will need to construct this array bits by bits, not in one go. No >>>other solution, as it would be even worse in terms of memory usage. >>> >>>2014-06-11 11:12 GMT+01:00 Antonelli Maria Rosaria >>>: >>>> Yes, thank you,. >>>> But later I will have to assign a matrix to float to fp? >>>> Best, >>>> Rosa >>>> >>>> From: Da?id >>>> Reply-To: SciPy Users List >>>> Date: Wednesday, June 11, 2014 12:07 PM >>>> To: SciPy Users List >>>> Subject: Re: [SciPy-User] Problem with handling big matrices with >>>>Windows >>>> >>>> On 11 June 2014 12:01, Antonelli Maria Rosaria >>>> wrote: >>>>> >>>>> I can np.memmap of that size, but when I ask to assign a zeros >>>>>matrix >>>>>of >>>>> the same size to the np.memmap, it gives the same error : >>>>> fp = np.memmap(filename, dtype='float, mode ='w+', shape(80, 100, >>>>>384, >>>>> 285)) >>>>> fp = zeros((80, 100, 384, 285) >>>> >>>> >>>> You are not assigning values, but just replacing the memap object by >>>>a >>>>new >>>> matrix. You can zero the matrix by doing: >>>> >>>> fp = np.memap(...) 
>>>> fp[:] = 0 >>>> >>>> _______________________________________________ >>>> SciPy-User mailing list >>>> SciPy-User at scipy.org >>>> http://mail.scipy.org/mailman/listinfo/scipy-user >>>> >>> >>> >>> >>>-- >>>Information System Engineer, Ph.D. >>>Blog: http://matt.eifelle.com >>>LinkedIn: http://www.linkedin.com/in/matthieubrucher >>>Music band: http://liliejay.com/ >>>_______________________________________________ >>>SciPy-User mailing list >>>SciPy-User at scipy.org >>>http://mail.scipy.org/mailman/listinfo/scipy-user >> >> _______________________________________________ >> SciPy-User mailing list >> SciPy-User at scipy.org >> http://mail.scipy.org/mailman/listinfo/scipy-user > > > > -- > Information System Engineer, Ph.D. > Blog: http://matt.eifelle.com > LinkedIn: http://www.linkedin.com/in/matthieubrucher > Music band: http://liliejay.com/ > > > ------------------------------ > > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user > > > End of SciPy-User Digest, Vol 130, Issue 10 > ******************************************* Paul Blelloch, Ph.D. Director, Aerospace Analysis ATA Engineering Inc. (858) 480-2065 From m.hofsaess at gmail.com Wed Jun 11 17:47:37 2014 From: m.hofsaess at gmail.com (=?UTF-8?B?TWFydGluIEhvZnPDpMOf?=) Date: Wed, 11 Jun 2014 23:47:37 +0200 Subject: [SciPy-User] finding distribution that best describes samples Message-ID: Hi all, I have a set of data and want to determine the distribution function that best describes the data. How to do this best with scipy? Or is statsmodel better suited? Thanks for your help. Martin -------------- next part -------------- An HTML attachment was scrubbed... URL: From Gmane20130611 at henney.org Wed Jun 11 20:44:22 2014 From: Gmane20130611 at henney.org (Will Henney) Date: Thu, 12 Jun 2014 00:44:22 +0000 (UTC) Subject: [SciPy-User] CIECAM02 References: Message-ID: Nathaniel Smith pobox.com> writes: > CIECAM02 is a standard model of human color perception. I'm trying to > implement a model of perceptual color similarity[1] that uses > CIECAM02, and this requires the ability to convert from sRGB > coordinates to CIECAM02's J, M, h coordinates. Code for doing this > seems surprisingly short on the ground. Before I start trying to > mechanically translate some incomprehensible matlab script [2], I > thought I'd check whether around here has implemented such a thing, or > knows of a suitable implementation? > The Wikipedia page has a link to an implementation in C: http://scanline.ca/ciecam02/ It might be easier to adapt that. Or you could even just wrap it with cffi or similar Will > -n > > [1] Luo, M. R., Cui, G., & Li, C. (2006). Uniform colour spaces based on > CIECAM02 colour appearance model. Color Research & Application, 31(4), > 320?330. doi:10.1002/col.20227 > [2] http://www.mathworks.co.uk/matlabcentral/fileexchange/40640-computational-colour- science-using-matlab-2e/content/ciecam02.m > From maximilian.albert at gmail.com Thu Jun 12 18:48:04 2014 From: maximilian.albert at gmail.com (Maximilian Albert) Date: Thu, 12 Jun 2014 23:48:04 +0100 Subject: [SciPy-User] Error when computing eigenvalues of a LinearOperator: "gmres did not converge" Message-ID: Hi, This is my first post to this mailing list, so let me take the opportunity to thank everyone involved for their great work! I use scipy on a daily basis and couldn't imagine what I would do without it. 
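On the distribution-fitting question above: every continuous distribution in scipy.stats has a .fit method (maximum likelihood), so one common recipe is to fit a handful of candidate distributions and rank them by a goodness-of-fit measure. A minimal, untested sketch (the candidate list, the synthetic data and the use of the Kolmogorov-Smirnov statistic as the ranking criterion are all just illustrative choices):

import numpy as np
from scipy import stats

data = np.random.gamma(2.0, 3.0, size=1000)   # stand-in for the real samples

candidates = ['norm', 'lognorm', 'gamma', 'expon']
results = []
for name in candidates:
    dist = getattr(stats, name)
    params = dist.fit(data)                          # maximum-likelihood fit
    ks_stat, p_value = stats.kstest(data, name, args=params)
    results.append((ks_stat, name, params))

for ks_stat, name, params in sorted(results):
    print("%-8s  KS=%.4f  params=%s" % (name, ks_stat, np.round(params, 3)))

For a likelihood-based comparison (AIC/BIC) or more detailed diagnostics, statsmodels is a reasonable place to look, as the original question suggests.
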
I'm trying to solve large eigenvalue problems using scipy's sparse linear algebra module. It works fine if I apply the solver scipy.sparse.linalg.eigs to a regular numpy array A. However, my matrices get quite big and I'm having problems with memory consumption. So ideally I would like to use a LinearOperator instead. (The matrix A is dense, but I can compute the action of A on a vector without having to assemble A explicitly, so I'm hoping this is still a valid use case for LinearOperator.) The problem is that if I try this then the iterative solver doesn't converge. This is true even if the 'matvec' method of the LinearOperator is simply matrix-vector multiplication with the (dense) matrix A. A minimal example illustrating the failure is attached below. This computes the eigenvalues of a random matrix A just fine, but fails when applied to a LinearOperator that is directly converted from A. I tried to fiddle with the parameters for the iterative solver (v0, ncv, maxiter) but to no avail. Am I missing something obvious? Is there a way to make this work? Any suggestions would be highly appreciated. Many thanks and best regards, Max ==> from scipy.sparse.linalg import eigs, LinearOperator, aslinearoperator import numpy as np # Set a seed for reproducibility np.random.seed(0) # Size of the matrix N = 100 # Generate a random matrix of size N x N # and compute its eigenvalues A = np.random.random_sample((N, N)) eigvals = eigs(A, sigma=0.0, which='LM', return_eigenvectors=False) print eigvals # Convert the matrix to a LinearOperator and # try to solve the same eigenproblem again. # The call to 'eigs' will produce an error: # # ValueError: Error in inverting M: function gmres did not converge (info = 1000). A_op = aslinearoperator(A) eigvals2 = eigs(A_op, sigma=0.0, which='LM', return_eigenvectors=False) <== -------------- next part -------------- An HTML attachment was scrubbed... URL: From njs at pobox.com Thu Jun 12 19:02:45 2014 From: njs at pobox.com (Nathaniel Smith) Date: Fri, 13 Jun 2014 00:02:45 +0100 Subject: [SciPy-User] CIECAM02 In-Reply-To: References: Message-ID: On 12 Jun 2014 01:50, "Will Henney" wrote: > > > Nathaniel Smith pobox.com> writes: > > > CIECAM02 is a standard model of human color perception. I'm trying to > > implement a model of perceptual color similarity[1] that uses > > CIECAM02, and this requires the ability to convert from sRGB > > coordinates to CIECAM02's J, M, h coordinates. Code for doing this > > seems surprisingly short on the ground. Before I start trying to > > mechanically translate some incomprehensible matlab script [2], I > > thought I'd check whether around here has implemented such a thing, or > > knows of a suitable implementation? > > > > The Wikipedia page has a link to an implementation in C: > > http://scanline.ca/ciecam02/ > > It might be easier to adapt that. > Or you could even just wrap it with cffi or similar Thanks for the tip! I did eventually discover that implementation, though my confidence was somewhat shaken when I noticed that the provided .c and .h files describe entirely unrelated APIs (!!). There's a previous version that might be more usable, and LittleCMS also provides some CIECAM02 functionality (though it only directly computes J, C, and h, not the other perceptual correlates). -n -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From sturla.molden at gmail.com Sat Jun 14 19:39:35 2014 From: sturla.molden at gmail.com (Sturla Molden) Date: Sat, 14 Jun 2014 23:39:35 +0000 (UTC) Subject: [SciPy-User] OT: The standard of Bayesian statistics textbooks (rant) Message-ID: <616090123424481391.265700sturla.molden-gmail.com@news.gmane.org> How can we teach Bayesian statistics to new students when utter ignorance like this makes it into textbooks? https://twitter.com/nedlom/status/477948016272629760 This is from Scott M. Lynch, "Introduction to Applied Bayesian Statistics & Estimation for Social Scientists", published by Springer Verlag. I'll leave it as an exercise to understand why this integral is ridiculously easy to compute. If you still don't see it, look up the definition of an expectancy value... Anyhow, if an author doesn't recognize or know how to compute an average, how can he write a textbook on statistics? And no, I am not going to recommend this book. This error is just too disqualifying. Sturla From njs at pobox.com Sat Jun 14 19:51:48 2014 From: njs at pobox.com (Nathaniel Smith) Date: Sun, 15 Jun 2014 00:51:48 +0100 Subject: [SciPy-User] OT: The standard of Bayesian statistics textbooks (rant) In-Reply-To: <616090123424481391.265700sturla.molden-gmail.com@news.gmane.org> References: <616090123424481391.265700sturla.molden-gmail.com@news.gmane.org> Message-ID: On 15 Jun 2014 00:39, "Sturla Molden" wrote: > > How can we teach Bayesian statistics to new students when utter ignorance > like this makes it into textbooks? > > https://twitter.com/nedlom/status/477948016272629760 > > This is from Scott M. Lynch, "Introduction to Applied Bayesian Statistics & > Estimation for Social Scientists", published by Springer Verlag. > > I'll leave it as an exercise to understand why this integral is > ridiculously easy to compute. If you still don't see it, look up the > definition of an expectancy value... Of course any integral can be written as an expectation, but if you have a tractable general method for computing the expected value of arbitrary distributions then you should publish it and collect your Fields medal. ("I have discovered a truly marvelous proof of this proposition, which doesn't fit into 140 characters...") -n -------------- next part -------------- An HTML attachment was scrubbed... URL: From sturla.molden at gmail.com Sat Jun 14 20:26:31 2014 From: sturla.molden at gmail.com (Sturla Molden) Date: Sun, 15 Jun 2014 00:26:31 +0000 (UTC) Subject: [SciPy-User] OT: The standard of Bayesian statistics textbooks (rant) References: <616090123424481391.265700sturla.molden-gmail.com@news.gmane.org> Message-ID: <977189149424483531.286732sturla.molden-gmail.com@news.gmane.org> Nathaniel Smith wrote: > Of course any integral can be written as an expectation, but if you have a > tractable general method for computing the expected value of arbitrary > distributions then you should publish it and collect your Fields medal. In this particular case (pseudocode): for i in range(n): theta[i] ~ p(theta | M), e.g. by Markov Chain Monte Carlo L[i] = p(y | theta[i] ) p(y | M) = mean( L[burnin:] ) Can I have my Fields medal now? 
;-) From sturla.molden at gmail.com Sat Jun 14 20:28:23 2014 From: sturla.molden at gmail.com (Sturla Molden) Date: Sun, 15 Jun 2014 00:28:23 +0000 (UTC) Subject: [SciPy-User] OT: The standard of Bayesian statistics textbooks (rant) References: <616090123424481391.265700sturla.molden-gmail.com@news.gmane.org> Message-ID: <348737959424484857.346987sturla.molden-gmail.com@news.gmane.org> Nathaniel Smith wrote: > ("I have discovered a truly marvelous proof of this proposition, which > doesn't fit into 140 characters...") Metropolis-Hastings is already proven. From njs at pobox.com Sat Jun 14 21:42:16 2014 From: njs at pobox.com (Nathaniel Smith) Date: Sun, 15 Jun 2014 02:42:16 +0100 Subject: [SciPy-User] OT: The standard of Bayesian statistics textbooks (rant) In-Reply-To: <977189149424483531.286732sturla.molden-gmail.com@news.gmane.org> References: <616090123424481391.265700sturla.molden-gmail.com@news.gmane.org> <977189149424483531.286732sturla.molden-gmail.com@news.gmane.org> Message-ID: On 15 Jun 2014 01:26, "Sturla Molden" wrote: > > Nathaniel Smith wrote: > > > Of course any integral can be written as an expectation, but if you have a > > tractable general method for computing the expected value of arbitrary > > distributions then you should publish it and collect your Fields medal. > > In this particular case (pseudocode): > > for i in range(n): > theta[i] ~ p(theta | M), e.g. by Markov Chain Monte Carlo > L[i] = p(y | theta[i] ) > p(y | M) = mean( L[burnin:] ) Sure, that looks like it ought to work great when you have a good sampler for p(theta | M), the theta space is low dimensional, p(theta|M) is everywhere on the same order of magnitude as p(theta|M)p(y|theta, M), and you have a tractable method of computing p(y|theta, M). Writing down theoretically correct but intractable Bayesian algorithms is usually easy... Your "particular case" is AFAICT the fully general case of computing partition functions, i.e. if this worked then all Bayesian models would be trivial. I'm not sure I've correctly diagnosed all the problems with it, but I'm pretty sure all Bayesian models are not trivial, so :-). -n -------------- next part -------------- An HTML attachment was scrubbed... URL: From a.h.jaffe at gmail.com Sun Jun 15 03:23:28 2014 From: a.h.jaffe at gmail.com (Andrew Jaffe) Date: Sun, 15 Jun 2014 08:23:28 +0100 Subject: [SciPy-User] OT: The standard of Bayesian statistics textbooks (rant) In-Reply-To: <977189149424483531.286732sturla.molden-gmail.com@news.gmane.org> References: <616090123424481391.265700sturla.molden-gmail.com@news.gmane.org> <977189149424483531.286732sturla.molden-gmail.com@news.gmane.org> Message-ID: Hi, On 15/06/2014 01:26, Sturla Molden wrote: > Nathaniel Smith wrote: > >> Of course any integral can be written as an expectation, but if you have a >> tractable general method for computing the expected value of arbitrary >> distributions then you should publish it and collect your Fields medal. > > In this particular case (pseudocode): > > for i in range(n): > theta[i] ~ p(theta | M), e.g. by Markov Chain Monte Carlo > L[i] = p(y | theta[i] ) > p(y | M) = mean( L[burnin:] ) > > Can I have my Fields medal now? This is relatively well known (at least in my field, cosmology) as the "Blackwell-Rao" estimate of the model likelihood (aka model likelihood aka Bayesian evidence). Unfortunately in very many cases it has very bad properties, especially in the tails. 
Moreover, one of the advantages of the sampling methods is that it is often easier to sample from the likelihood than to actually compute it. Yours, Andrew > > ;-) > From tmp50 at ukr.net Sun Jun 15 08:47:24 2014 From: tmp50 at ukr.net (Dmitrey) Date: Sun, 15 Jun 2014 15:47:24 +0300 Subject: [SciPy-User] new OpenOpt Suite release 0.54 Message-ID: <1402836389.898416638.pbuknjzx@frv44.fwdcdn.com> I'm glad to inform you about new OpenOpt Suite release 0.54: ? ? * Some changes for PyPy compatibility ? ? * FuncDesigner translator() can handle sparse derivatives from automatic differentiation ? ? * New interalg parameter rTol (relative tolerance, default 10^-8) ? ? * Bugfix and improvements for categorical variables? ??? * Some more changes and improvements Regards, D. -------------- next part -------------- An HTML attachment was scrubbed... URL: From tritemio at gmail.com Wed Jun 18 19:12:30 2014 From: tritemio at gmail.com (Antonino Ingargiola) Date: Wed, 18 Jun 2014 16:12:30 -0700 Subject: [SciPy-User] lmfit: confidence intervals issue Message-ID: Hi, I'm not sure this is the right list for this kind of issues. I'm using lmfit to solve a non-linear least square problem to fit a "noisy" histogram to a model that is a single exponential decay convoluted with a "sharp" peak and a baseline noise floor. The problem is that unless I set constrains for the 'offset' parameter the confidence interval calculation fails in a strange way in scipy brentq function. You can see the full example notebook and the error message in this ipython notebook: http://nbviewer.ipython.org/urls/gist.githubusercontent.com/tritemio/901c2eb2a43775e81844/raw/755eede8f170cc33b96316601286ae534e901c49/lmfit%20issue What can I do to fix the problem? If this is the normal behavior, a more informative error message would be helpful. Thanks in advance for any help, Antonio PS: Let me congratulate with the lmfit authors as this is a really amazing piece of software that should be included in scipy IMHO! -------------- next part -------------- An HTML attachment was scrubbed... URL: From sturla.molden at gmail.com Wed Jun 18 19:25:02 2014 From: sturla.molden at gmail.com (Sturla Molden) Date: Wed, 18 Jun 2014 23:25:02 +0000 (UTC) Subject: [SciPy-User] lmfit: confidence intervals issue References: Message-ID: <1304994110424826534.825813sturla.molden-gmail.com@news.gmane.org> lmfit is not a part of SciPy, but as long as it's scientific use of Python and no other appropriate list, I don't think anyone will mind. I cannot answer your question, though. Sturla Antonino Ingargiola wrote: > Hi, > > I'm not sure this is the right list for this kind of issues. > > I'm using lmfit to solve a non-linear least square problem to fit a "noisy" > histogram to a model that is a single exponential decay convoluted with a > "sharp" peak and a baseline noise floor. > > The problem is that unless I set constrains for the 'offset' parameter the > confidence interval calculation fails in a strange way in scipy brentq > function. > > You can see the full example notebook and the error message in this ipython > notebook: > > href="http://nbviewer.ipython.org/urls/gist.githubusercontent.com/tritemio/901c2eb2a43775e81844/raw/755eede8f170cc33b96316601286ae534e901c49/lmfit%20issue">http://nbviewer.ipython.org/urls/gist.githubusercontent.com/tritemio/901c2eb2a43775e81844/raw/755eede8f170cc33b96316601286ae534e901c49/lmfit%20issue > > What can I do to fix the problem? If this is the normal behavior, a more > informative error message would be helpful. 
> > Thanks in advance for any help, > Antonio > > PS: Let me congratulate with the lmfit authors as this is a really amazing > piece of software that should be included in scipy IMHO! > > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > href="http://mail.scipy.org/mailman/listinfo/scipy-user">http://mail.scipy.org/mailman/listinfo/scipy-user From josef.pktd at gmail.com Wed Jun 18 20:09:04 2014 From: josef.pktd at gmail.com (josef.pktd at gmail.com) Date: Wed, 18 Jun 2014 20:09:04 -0400 Subject: [SciPy-User] lmfit: confidence intervals issue In-Reply-To: <1304994110424826534.825813sturla.molden-gmail.com@news.gmane.org> References: <1304994110424826534.825813sturla.molden-gmail.com@news.gmane.org> Message-ID: On Wed, Jun 18, 2014 at 7:25 PM, Sturla Molden wrote: > lmfit is not a part of SciPy, but as long as it's scientific use of Python > and no other appropriate list, I don't think anyone will mind. I cannot > answer your question, though. > > Sturla > > Antonino Ingargiola wrote: >> Hi, >> >> I'm not sure this is the right list for this kind of issues. >> >> I'm using lmfit to solve a non-linear least square problem to fit a "noisy" >> histogram to a model that is a single exponential decay convoluted with a >> "sharp" peak and a baseline noise floor. >> >> The problem is that unless I set constrains for the 'offset' parameter the >> confidence interval calculation fails in a strange way in scipy brentq >> function. >> >> You can see the full example notebook and the error message in this ipython >> notebook: >> >> > href="http://nbviewer.ipython.org/urls/gist.githubusercontent.com/tritemio/901c2eb2a43775e81844/raw/755eede8f170cc33b96316601286ae534e901c49/lmfit%20issue">http://nbviewer.ipython.org/urls/gist.githubusercontent.com/tritemio/901c2eb2a43775e81844/raw/755eede8f170cc33b96316601286ae534e901c49/lmfit%20issue >> >> What can I do to fix the problem? If this is the normal behavior, a more >> informative error message would be helpful. http://stackoverflow.com/questions/20619156/python-brentq-problems/20625362 I have no idea about the details in lmfit, but it's not an infrequent problem also outside of lmfit. Josef >> >> Thanks in advance for any help, >> Antonio >> >> PS: Let me congratulate with the lmfit authors as this is a really amazing >> piece of software that should be included in scipy IMHO! >> >> _______________________________________________ >> SciPy-User mailing list >> SciPy-User at scipy.org >> > href="http://mail.scipy.org/mailman/listinfo/scipy-user">http://mail.scipy.org/mailman/listinfo/scipy-user > > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user From newville at cars.uchicago.edu Wed Jun 18 23:19:33 2014 From: newville at cars.uchicago.edu (Matt Newville) Date: Wed, 18 Jun 2014 22:19:33 -0500 Subject: [SciPy-User] lmfit: confidence intervals issue In-Reply-To: References: Message-ID: Hi Antonino, On Wed, Jun 18, 2014 at 6:12 PM, Antonino Ingargiola wrote: > > Hi, > > I'm not sure this is the right list for this kind of issues. It's OK with me. Using github issues would be reasonable too. > > > I'm using lmfit to solve a non-linear least square problem to fit a "noisy" histogram to a model that is a single exponential decay convoluted with a "sharp" peak and a baseline noise floor. 
> > The problem is that unless I set constrains for the 'offset' parameter the confidence interval calculation fails in a strange way in scipy brentq function. > > You can see the full example notebook and the error message in this ipython notebook: > > http://nbviewer.ipython.org/urls/gist.githubusercontent.com/tritemio/901c2eb2a43775e81844/raw/755eede8f170cc33b96316601286ae534e901c49/lmfit%20issue > > What can I do to fix the problem? If this is the normal behavior, a more informative error message would be helpful. I haven't tried your example, but it's possible that the development version on github.com fixes this -- there was a similar problem with conf_interval() failing when calling brentq() that was fixed relatively recently (slow devel cycle), but I'm not certain that the fix there will work here. I would guess that the essential problem is related to the fact that your also getting NaNs from the covariance matrix. Again, without actually trying your problem, I believe that this is probably related to your use of pos_range = (x - offset) >= 0 to select the data range for which y!=0. This could make the fit insensitive to small changes (ie, well below 1) to offset, making it hard to compute the partial derivative for it. This could, in turn, make the covariance matrix hard to get (which is what you are seeing). Without finite, numerical values for stderr from the covariance matrix, it's hard for conf_interval() to make good guesses about what range of values to search around. In short, it's a challenge to have variables that are used essentially as integers. Still, even if it doesn't work well, it shouldn't be failing this way. My advice would be to try the devel version of lmfit. If it's still failing, try to construct a smaller failing example (or make something complete that can be run as a non-IPython script), and start an Issue. Cheers, --Matt Newville -------------- next part -------------- An HTML attachment was scrubbed... URL: From shailesh.ahuja03 at gmail.com Thu Jun 19 05:10:18 2014 From: shailesh.ahuja03 at gmail.com (Shailesh Ahuja) Date: Thu, 19 Jun 2014 14:40:18 +0530 Subject: [SciPy-User] Transformations for BSpline, Legendre and Chebyshev fuctions Message-ID: Hi, I am looking for a function that will let me transform the functions' coefficients, such that I dont have to manually transform the data later. Let me give an example so that it's easy to understand: *Initialization* *from scipy import interpolate as intr* *import numpy as np* *x = np.arange(10)* *y = np.sin(x)* *tck = intr.splrep(x, y)* *data = np.random.random(100)* Actually the data is provided by the user, but I initialized it here. Now I need to do some transformations on the data. *Transformation* *x2 = data* 10.0* *Evaluation* *y2 = intr.splev(x2, tck)* Is it possible to manipulate the 'tck' so that I don't have to do the transformation at all? I need to do something similar for evaluating Legendre and Chebyshev polynomials as well, so is there a function that accepts the transformation as inputs, and gives back the transformed function? Sincerely Shailesh P.S. Pardon me if there was any duplicate email, I lost track of the previous one. -------------- next part -------------- An HTML attachment was scrubbed... 
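One possible trick for the spline part of the question above (a sketch, not an answer from the list): B-splines are unchanged when the argument and the knot vector are pushed through the same affine map, so the transformation x2 = a*data + b can be folded into the knots instead of applied to the data. The names a and b and the factor 10 simply mirror the example in the question.

import numpy as np
from scipy import interpolate as intr

x = np.arange(10)
y = np.sin(x)
t, c, k = intr.splrep(x, y)

data = np.random.random(100)
a, b = 10.0, 0.0                            # the transformation x2 = a*data + b

y_ref = intr.splev(a*data + b, (t, c, k))   # transform the data, as in the question
tck2 = ((t - b)/a, c, k)                    # ...or fold the affine map into the knot vector
y_new = intr.splev(data, tck2)

print(np.allclose(y_ref, y_new))            # True

For the Legendre and Chebyshev cases, the numpy.polynomial classes carry a domain/window pair that plays a similar role and may be easier than rewriting the coefficients by hand.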
URL: From josef.pktd at gmail.com Thu Jun 19 16:38:37 2014 From: josef.pktd at gmail.com (josef.pktd at gmail.com) Date: Thu, 19 Jun 2014 16:38:37 -0400 Subject: [SciPy-User] Fwd: [pystatsmodels] StatsIntro 4.0 is out In-Reply-To: <9ed4b6fb-ada6-48dd-8eed-d7601f18e529@googlegroups.com> References: <9ed4b6fb-ada6-48dd-8eed-d7601f18e529@googlegroups.com> Message-ID: Thomas has been working for a while now on a Introduction to Statistics using Python. It introduces and explains many of the statistics that are in scipy.stats and statsmodels. I covers statistics on the Intro level, up to regression. It's the first "Introduction to Statistics using Python (libraries)" instead of "Introducing Python for Statistics". In case someone is interested in GLM models, Thomas also wrote statsmodels answers for the exercises in the book by Dobson http://work.thaslwanter.at/Stats/html/statsAdvanced.html#generalized-linear-models Thanks Thomas (It also needs advertising so it shows up higher when I look for it in a google search.) Josef ---------- Forwarded message ---------- From: Thomas Haslwanter Date: Thu, Jun 19, 2014 at 4:03 PM Subject: [pystatsmodels] StatsIntro 4.0 is out To: pystatsmodels at googlegroups.com Over the last 2 month I have added exercises, polished the text, and added a significant number of new images. I think that the "Introduction to Statistical Analysis" is now pretty much ready for general use. I want to point out that It comes in an online version (http://work.thaslwanter.at/Stats/html/), which is particularly handy on mobile devices. It also comes as a free e-book (http://work.thaslwanter.at/Stats/StatsIntro.pdf) It can be freely used (https://github.com/thomas-haslwanter/statsintro) and modified for teaching introductory courses in statistics And thanks to Josef's help and Ralf's suggestion, it is now also listed on http://www.scipy.org/topical-software.html :) I am a bit disappointed that so far I have gotten pretty much zero feedback :( But it is my first contribution to the Python community, and I sincerely hope that it will help to increase the Python community, especially the statistical one! thomas From vanforeest at gmail.com Thu Jun 19 17:35:52 2014 From: vanforeest at gmail.com (nicky van foreest) Date: Thu, 19 Jun 2014 23:35:52 +0200 Subject: [SciPy-User] Fwd: [pystatsmodels] StatsIntro 4.0 is out In-Reply-To: References: <9ed4b6fb-ada6-48dd-8eed-d7601f18e529@googlegroups.com> Message-ID: Hi Thomas (and Josef), The doc looks really great, and very useful. Thanks for all your efforts. I'll pass it on to my students too. Nicky On 19 June 2014 22:38, wrote: > Thomas has been working for a while now on a Introduction to > Statistics using Python. > > It introduces and explains many of the statistics that are in > scipy.stats and statsmodels. > I covers statistics on the Intro level, up to regression. > > It's the first "Introduction to Statistics using Python (libraries)" > instead of "Introducing Python for Statistics". > > In case someone is interested in GLM models, Thomas also wrote > statsmodels answers for the exercises in the book by Dobson > > http://work.thaslwanter.at/Stats/html/statsAdvanced.html#generalized-linear-models > > Thanks Thomas > > (It also needs advertising so it shows up higher when I look for it in > a google search.) 
> > Josef > > ---------- Forwarded message ---------- > From: Thomas Haslwanter > Date: Thu, Jun 19, 2014 at 4:03 PM > Subject: [pystatsmodels] StatsIntro 4.0 is out > To: pystatsmodels at googlegroups.com > > > Over the last 2 month I have added exercises, polished the text, and > added a significant number of new images. I think that the > "Introduction to Statistical Analysis" is now pretty much ready for > general use. > > I want to point out that > > It comes in an online version > (http://work.thaslwanter.at/Stats/html/), which is particularly handy > on mobile devices. > It also comes as a free e-book ( > http://work.thaslwanter.at/Stats/StatsIntro.pdf) > It can be freely used > (https://github.com/thomas-haslwanter/statsintro) and modified for > teaching introductory courses in statistics > And thanks to Josef's help and Ralf's suggestion, it is now also > listed on http://www.scipy.org/topical-software.html :) > > I am a bit disappointed that so far I have gotten pretty much zero > feedback :( > > But it is my first contribution to the Python community, and I > sincerely hope that it will help to increase the Python community, > especially the statistical one! > > thomas > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user > -------------- next part -------------- An HTML attachment was scrubbed... URL: From tritemio at gmail.com Fri Jun 20 16:37:01 2014 From: tritemio at gmail.com (Antonino Ingargiola) Date: Fri, 20 Jun 2014 13:37:01 -0700 Subject: [SciPy-User] lmfit: confidence intervals issue In-Reply-To: References: Message-ID: Hi Matt, I reply inline... On Wed, Jun 18, 2014 at 8:19 PM, Matt Newville wrote: [cut] > I haven't tried your example, but it's possible that the development > version on github.com fixes this -- there was a similar problem with > conf_interval() failing when calling brentq() that was fixed relatively > recently (slow devel cycle), but I'm not certain that the fix there will > work here. > Indeed, upgrading to the latest version solved the error. Now I get the CI in both cases. I would guess that the essential problem is related to the fact that your > also getting NaNs from the covariance matrix. Again, without actually > trying your problem, I believe that this is probably related to your use of > > pos_range = (x - offset) >= 0 > > to select the data range for which y!=0. This could make the fit > insensitive to small changes (ie, well below 1) to offset, making it hard > to compute the partial derivative for it. This could, in turn, make the > covariance matrix hard to get (which is what you are seeing). Without > finite, numerical values for stderr from the covariance matrix, it's hard > for conf_interval() to make good guesses about what range of values to > search around. > > Thanks for the explanation, now I understand much better the problem. I have a model function with a discontinuity in the origin (i.e. exp(-x - x0) for x > x0 else 0). If I sample it with a step dx, I will always have a problem when x0 changes less than dx. Is there any known trick I can use to avoid this problem? > In short, it's a challenge to have variables that are used essentially as > integers. Still, even if it doesn't work well, it shouldn't be failing > this way. > Yes, but now I'm curious. How did you get around the problem? Are the confidence interval still accurate in this case? 
I assume that if the CI for the offset (x0) is < dx they don't make to much sense, is it right? Thanks, Antonio -------------- next part -------------- An HTML attachment was scrubbed... URL: From newville at cars.uchicago.edu Fri Jun 20 17:12:38 2014 From: newville at cars.uchicago.edu (Matt Newville) Date: Fri, 20 Jun 2014 16:12:38 -0500 Subject: [SciPy-User] lmfit: confidence intervals issue In-Reply-To: References: Message-ID: Hi Antonio, On Fri, Jun 20, 2014 at 3:37 PM, Antonino Ingargiola wrote: > Hi Matt, > > I reply inline... > > On Wed, Jun 18, 2014 at 8:19 PM, Matt Newville > wrote: > [cut] > > I haven't tried your example, but it's possible that the development >> version on github.com fixes this -- there was a similar problem with >> conf_interval() failing when calling brentq() that was fixed relatively >> recently (slow devel cycle), but I'm not certain that the fix there will >> work here. >> > > Indeed, upgrading to the latest version solved the error. Now I get the CI > in both cases. > OK, good. Now, whether those CI are reliable... > > I would guess that the essential problem is related to the fact that >> your also getting NaNs from the covariance matrix. Again, without actually >> trying your problem, I believe that this is probably related to your use of >> >> pos_range = (x - offset) >= 0 >> >> to select the data range for which y!=0. This could make the fit >> insensitive to small changes (ie, well below 1) to offset, making it hard >> to compute the partial derivative for it. This could, in turn, make the >> covariance matrix hard to get (which is what you are seeing). Without >> finite, numerical values for stderr from the covariance matrix, it's hard >> for conf_interval() to make good guesses about what range of values to >> search around. >> >> > Thanks for the explanation, now I understand much better the problem. I > have a model function with a discontinuity in the origin (i.e. exp(-x - x0) > for x > x0 else 0). If I sample it with a step dx, I will always have a > problem when x0 changes less than dx. Is there any known trick I can use to > avoid this problem? > I'm not sure that there is a robust way to have the range of data considered in the fitting to be a parameter. I might suggest (but haven't looked at your situation in great detail) to have you consider using an "offset" that shifts the origin for the model, then interpolate that onto the grid of data. That way, you might be able to set a fit range in the data coordinates before the fit, and not have it change. The model can be shifted in "x" (assuming there is such a thing -- your data appears to have an obvious "x" axis), but is fitting a fixed data range. Again, I'm not sure that would fix all problems, but it might help. > In short, it's a challenge to have variables that are used essentially as > integers. Still, even if it doesn't work well, it shouldn't be failing > this way. > Yes, but now I'm curious. How did you get around the problem? > Least-squares fitting with values used as integers are going to have poorly defined derivatives, and are bound to cause problems, at least sometimes. I don't know of a general solution, but perhaps someone does. > Are the confidence interval still accurate in this case? > Practically speaking, if the fit with leastsq() is not sensitive to one of the variables and results in NaNs in the covariance matrix, the confidence_intervals aren't going to be easy to find. 
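The earlier suggestion in this message, shifting the model origin and interpolating the model onto the fixed data grid, might look roughly like the sketch below. The decay form, the parameter names and the use of np.interp are assumptions for illustration, not code from lmfit or from the notebook, and the convolution step of the real model is omitted.

import numpy as np

def decay_model(x, offset, tau, ampl):
    # Fine internal grid over the same span as the data (x assumed to start at 0)
    xf = np.linspace(0.0, x.max(), 10 * len(x))
    yf = ampl * np.exp(-xf / tau)
    # Shift the model by `offset` and interpolate back onto the fixed data grid;
    # points before the shifted origin get 0, so the fitted x-range never changes
    # as `offset` moves by less than one sample.
    return np.interp(x, xf + offset, yf, left=0.0)

The residual passed to lmfit would then just be data minus decay_model(x, ...) with the values pulled from the Parameters object, and the data selection no longer depends on the offset parameter.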
The error you were seeing being raised by brentq() was from conf_interval() trying to find a suitable range of values to explore -- it has to find values that make the fit worse by 1-sigma, and so forth. If there are NaNs in the covariance matrix, it won't be able determine whether one fit is worse than another, and won't really work to find better confidence intervals than those from the covariance matrix. > I assume that if the CI for the offset (x0) is < dx they don't make to > much sense, is it right? > I think that can be OK for a continuous/analog value, but will fail for a discrete value when dx is less than the step in discrete values. Hope that helps, --Matt Newville -------------- next part -------------- An HTML attachment was scrubbed... URL: From tritemio at gmail.com Fri Jun 20 19:36:59 2014 From: tritemio at gmail.com (Antonino Ingargiola) Date: Fri, 20 Jun 2014 16:36:59 -0700 Subject: [SciPy-User] lmfit: confidence intervals issue In-Reply-To: References: Message-ID: Hi Matt, On Fri, Jun 20, 2014 at 2:12 PM, Matt Newville wrote: [cut] > Thanks for the explanation, now I understand much better the problem. I >> have a model function with a discontinuity in the origin (i.e. exp(-x - x0) >> for x > x0 else 0). If I sample it with a step dx, I will always have a >> problem when x0 changes less than dx. Is there any known trick I can use to >> avoid this problem? >> > > I'm not sure that there is a robust way to have the range of data > considered in the fitting to be a parameter. I might suggest (but haven't > looked at your situation in great detail) to have you consider using an > "offset" that shifts the origin for the model, then interpolate that onto > the grid of data. That way, you might be able to set a fit range in the > data coordinates before the fit, and not have it change. The model can be > shifted in "x" (assuming there is such a thing -- your data appears to have > an obvious "x" axis), but is fitting a fixed data range. Again, I'm not > sure that would fix all problems, but it might help. > Unless I misunderstood, I do already what you suggest. My function is exp(- x + x0) (note that I wrote "-x0" before by mistake) and x0 is "continuous", regardless of the x discretization. The problem is that the function is 0 for x < x0 and therefore there is a discontinuity at x=x0. When the function is evaluated on the save discrete x arrays, changing smoothly x0 does not result in a smooth translation of the function. > > >> In short, it's a challenge to have variables that are used essentially >> as integers. Still, even if it doesn't work well, it shouldn't be failing >> this way. >> > > Yes, but now I'm curious. How did you get around the problem? >> > > Least-squares fitting with values used as integers are going to have > poorly defined derivatives, and are bound to cause problems, at least > sometimes. I don't know of a general solution, but perhaps someone does. > I think that fitting a function with a discontinuity using an offset (x translation) as a parameter should be a quite common problem. Now that I think about this, maybe, constraining the offset variable within a single step of x will result in a smooth behavior and would allow to find the offset with accuracy of a fraction of the discretization step. I'll try... Thinking loud, would be nice to have a 2-step fit. In step 1, you find the offset varying it with the same discretization of the x axis. In the second step, you vary the offset only within one x bin to find the fractional part. 
Does this makes sense? Any idea if would it be feasible with current lmfit/scipy? Thanks, Antonio -------------- next part -------------- An HTML attachment was scrubbed... URL: From manolo at austrohungaro.com Sat Jun 21 02:10:20 2014 From: manolo at austrohungaro.com (Manolo =?iso-8859-1?Q?Mart=EDnez?=) Date: Sat, 21 Jun 2014 08:10:20 +0200 Subject: [SciPy-User] numpy.sort() failing Message-ID: <20140621061020.GA28762@ManoloMartinez> Dear all, I have a couple of structured arrays that, as far as I can see, only differ in irrelevant ways; yet numpy.sort chokes on one, not the other: This structured array *cannot* be sorted with np.sort(array, order='weight')... array([([0, 0, 0], 0.0), ([0, 0, 1], 0.0), ([0, 0, 2], 0.0), ([0, 1, 0], 0.0), ([0, 1, 1], 0.0), ([0, 1, 2], 0.0), ([0, 2, 0], 0.0), ([0, 2, 1], 0.0), ([0, 2, 2], 0.0), ([1, 0, 0], 0.0), ([1, 0, 1], 0.0), ([1, 0, 2], 0.0), ([1, 1, 0], 0.0), ([1, 1, 1], 0.0), ([1, 1, 2], 0.0), ([1, 2, 0], 0.0), ([1, 2, 1], 0.0), ([1, 2, 2], 0.0), ([2, 0, 0], 4.08179211371555e-289), ([2, 0, 1], 0.0), ([2, 0, 2], 0.0), ([2, 1, 0], 1.0), ([2, 1, 1], 6.939504595227983e-225), ([2, 1, 2], 1.0626819127933375e-224), ([2, 2, 0], 1.1209583874093894e-260), ([2, 2, 1], 0.0), ([2, 2, 2], 0.0)], dtype=[('strat', 'O'), ('weight', ' in () ----> 1 np.sort(aa, order=['weight', 'strat']) /usr/lib/python3.4/site-packages/numpy/core/fromnumeric.py in sort(a, axis, kind, order) 786 else: 787 a = asanyarray(a).copy() --> 788 a.sort(axis, kind, order) 789 return a 790 ValueError: The truth value of an array with more than one element is ambiguous. Use a.any() or a.all() I should say that I asked this very question some days ago over at #scipy, and a user there kindly run the offending array, and could not reproduce the problem. I'm a bit perplexed by this. python --version: 3.4.1 numpy.__version__: 1.8.1 ipython --version: 2.1.0 Thanks for any advice. Cheers, Manolo From newville at cars.uchicago.edu Sat Jun 21 08:39:35 2014 From: newville at cars.uchicago.edu (Matt Newville) Date: Sat, 21 Jun 2014 07:39:35 -0500 Subject: [SciPy-User] lmfit: confidence intervals issue In-Reply-To: References: Message-ID: Hi Antonio, On Fri, Jun 20, 2014 at 6:36 PM, Antonino Ingargiola wrote: > Hi Matt, > > > On Fri, Jun 20, 2014 at 2:12 PM, Matt Newville > wrote: > [cut] > >> Thanks for the explanation, now I understand much better the problem. I >>> have a model function with a discontinuity in the origin (i.e. exp(-x - x0) >>> for x > x0 else 0). If I sample it with a step dx, I will always have a >>> problem when x0 changes less than dx. Is there any known trick I can use to >>> avoid this problem? >>> >> >> I'm not sure that there is a robust way to have the range of data >> considered in the fitting to be a parameter. I might suggest (but haven't >> looked at your situation in great detail) to have you consider using an >> "offset" that shifts the origin for the model, then interpolate that onto >> the grid of data. That way, you might be able to set a fit range in the >> data coordinates before the fit, and not have it change. The model can be >> shifted in "x" (assuming there is such a thing -- your data appears to have >> an obvious "x" axis), but is fitting a fixed data range. Again, I'm not >> sure that would fix all problems, but it might help. >> > > Unless I misunderstood, I do already what you suggest. My function is > exp(- x + x0) (note that I wrote "-x0" before by mistake) and x0 is > "continuous", regardless of the x discretization. 
The problem is that the > function is 0 for x < x0 and therefore there is a discontinuity at x=x0. > When the function is evaluated on the save discrete x arrays, changing > smoothly x0 does not result in a smooth translation of the function. > Ah, sorry, I see that now. I think I must have missed the use of "offset" in "np.exp(-(x[pos_range] - offset)/tau)" earlier. Yes, I think that should work, unless I'm missing something else.... Still, the main issue is getting NaNs in the covariance matrix for the "ampl" and "offset" parameters, which is especially strange since the resulting fit with best-fit values looks pretty reasonable. You might try temporarily simplifying the model (turn weights on/off, turn convolution step on/off) and/or printing values in the residual or model function to see if you can figure out what conditions cause that to happen. A different (untested) guess: exponential decays are often surprisingly difficult for leastsq(). You might try a different algorithm (say, Nelder-Mead) as a first pass, then use those results as starting values for leastsq() (as that will estimate uncertainties). I hear people who have good success with this approach when fitting noisy exponentially decaying data. >> >>> In short, it's a challenge to have variables that are used essentially >>> as integers. Still, even if it doesn't work well, it shouldn't be failing >>> this way. >>> >> >> Yes, but now I'm curious. How did you get around the problem? >>> >> >> Least-squares fitting with values used as integers are going to have >> poorly defined derivatives, and are bound to cause problems, at least >> sometimes. I don't know of a general solution, but perhaps someone does. >> > > I think that fitting a function with a discontinuity using an offset (x > translation) as a parameter should be a quite common problem. > Yes, I do this fairly often myself (including, like you, forcing the model for x < x0 to zero). I think I must have missed the use of "offset" as both the selection of the valid range and the shift itself. > Now that I think about this, maybe, constraining the offset variable > within a single step of x will result in a smooth behavior and would allow > to find the offset with accuracy of a fraction of the discretization step. > I'll try... > > Thinking loud, would be nice to have a 2-step fit. In step 1, you find the > offset varying it with the same discretization of the x axis. In the second > step, you vary the offset only within one x bin to find the fractional part. > > Does this makes sense? Any idea if would it be feasible with current > lmfit/scipy? > > I don't know how feasible that would be, but I hope it is not necessary. --Matt Newville -------------- next part -------------- An HTML attachment was scrubbed... URL: From tritemio at gmail.com Sat Jun 21 20:46:32 2014 From: tritemio at gmail.com (Antonino Ingargiola) Date: Sat, 21 Jun 2014 17:46:32 -0700 Subject: [SciPy-User] lmfit: confidence intervals issue In-Reply-To: References: Message-ID: Hi Matt, On Sat, Jun 21, 2014 at 5:39 AM, Matt Newville wrote: > Hi Antonio, > > > On Fri, Jun 20, 2014 at 6:36 PM, Antonino Ingargiola > wrote: > >> Hi Matt, >> >> >> On Fri, Jun 20, 2014 at 2:12 PM, Matt Newville < >> newville at cars.uchicago.edu> wrote: >> [cut] >> >>> Thanks for the explanation, now I understand much better the problem. >>>> I have a model function with a discontinuity in the origin (i.e. exp(-x - >>>> x0) for x > x0 else 0). 
If I sample it with a step dx, I will always have a >>>> problem when x0 changes less than dx. Is there any known trick I can use to >>>> avoid this problem? >>>> >>> >>> I'm not sure that there is a robust way to have the range of data >>> considered in the fitting to be a parameter. I might suggest (but haven't >>> looked at your situation in great detail) to have you consider using an >>> "offset" that shifts the origin for the model, then interpolate that onto >>> the grid of data. That way, you might be able to set a fit range in the >>> data coordinates before the fit, and not have it change. The model can be >>> shifted in "x" (assuming there is such a thing -- your data appears to have >>> an obvious "x" axis), but is fitting a fixed data range. Again, I'm not >>> sure that would fix all problems, but it might help. >>> >> >> Unless I misunderstood, I do already what you suggest. My function is >> exp(- x + x0) (note that I wrote "-x0" before by mistake) and x0 is >> "continuous", regardless of the x discretization. The problem is that the >> function is 0 for x < x0 and therefore there is a discontinuity at x=x0. >> When the function is evaluated on the save discrete x arrays, changing >> smoothly x0 does not result in a smooth translation of the function. >> > > Ah, sorry, I see that now. I think I must have missed the use of > "offset" in "np.exp(-(x[pos_range] - offset)/tau)" earlier. Yes, I > think that should work, unless I'm missing something else.... > > Still, the main issue is getting NaNs in the covariance matrix for the > "ampl" and "offset" parameters, which is especially strange since the > resulting fit with best-fit values looks pretty reasonable. You might > try temporarily simplifying the model (turn weights on/off, turn > convolution step on/off) and/or printing values in the residual or model > function to see if you can figure out what conditions cause that to happen. > > I suspect that the pseudo-periodic behaviour or the residuals as a function of the offset caused by the time axis discretization is causing problems here. BTW how can I see the covariance matrix? > A different (untested) guess: exponential decays are often surprisingly > difficult for leastsq(). You might try a different algorithm (say, > Nelder-Mead) as a first pass, then use those results as starting values for > leastsq() (as that will estimate uncertainties). I hear people who have > good success with this approach when fitting noisy exponentially decaying > data. > Oh, well, that's a very good suggestion. In the few trials I did Nelder-Mead works, then leastsq does not move the solution anymore but I can get the CI. Neat. As I told you the original error is gone when updating to current master. However I find some new combinations of data and initial parameters that give errors when computing the CI. The errors are: AttributeError: 'int' object has no attribute 'copy' and ValueError: f(a) and f(b) must have different signs I opened an issue to track them: https://github.com/lmfit/lmfit-py/issues/91 Best, Antonio -------------- next part -------------- An HTML attachment was scrubbed... URL: From ndbecker2 at gmail.com Tue Jun 24 14:01:34 2014 From: ndbecker2 at gmail.com (Neal Becker) Date: Tue, 24 Jun 2014 14:01:34 -0400 Subject: [SciPy-User] scipy-0.14.0 build failure fedora 20 Message-ID: I'm trying to build scipy-0.14.0 on fedora 20 (x86_64). I've tried different compile options, but most recently just using the scipy-0.14.0-2.fc21.src.rpm unchanged. 
It builds on py2 and py3, but when it tries to run self-test it explodes. I don't know which test it's running, but memory usage becomes much larger than my 16G, and I have to kill it. Any ideas? From ndbecker2 at gmail.com Tue Jun 24 14:25:23 2014 From: ndbecker2 at gmail.com (Neal Becker) Date: Tue, 24 Jun 2014 14:25:23 -0400 Subject: [SciPy-User] scipy-0.14.0 build failure fedora 20 References: Message-ID: Neal Becker wrote: > I'm trying to build scipy-0.14.0 on fedora 20 (x86_64). I've tried different > compile options, but most recently just using the > > scipy-0.14.0-2.fc21.src.rpm > > unchanged. > > It builds on py2 and py3, but when it tries to run self-test it explodes. I > don't know which test it's running, but memory usage becomes much larger than > my 16G, and I have to kill it. > > Any ideas? More info: When run with DEFAULT test (not 'full'), the test does complete, on both py2 and py3. Using numpy-1.8.1 + openblas When run on py3, I got 1 failure: ERROR: test_fitpack.TestSplder.test_kink ---------------------------------------------------------------------- Traceback (most recent call last): File "/home/nbecker/.local/lib/python3.3/site- packages/scipy/interpolate/fitpack.py", line 1190, in splder c = (c[1:-1-k] - c[:-2-k]) * k / dt FloatingPointError: divide by zero encountered in true_divide During handling of the above exception, another exception occurred: Traceback (most recent call last): File "/usr/lib/python3.3/site-packages/nose/case.py", line 198, in runTest self.test(*self.arg) File "/home/nbecker/.local/lib/python3.3/site- packages/scipy/interpolate/tests/test_fitpack.py", line 329, in test_kink splder(spl2, 2) # Should work File "/home/nbecker/.local/lib/python3.3/site- packages/scipy/interpolate/fitpack.py", line 1198, in splder "and is not differentiable %d times") % n) ValueError: The spline has internal repeated knots and is not differentiable 2 times ---------------------------------------------------------------------- Ran 16420 tests in 216.470s FAILED (KNOWNFAIL=277, SKIP=1172, errors=1) When run on py2, I got 0 failures. From pav at iki.fi Tue Jun 24 16:01:47 2014 From: pav at iki.fi (Pauli Virtanen) Date: Tue, 24 Jun 2014 23:01:47 +0300 Subject: [SciPy-User] scipy-0.14.0 build failure fedora 20 In-Reply-To: References: Message-ID: 24.06.2014 21:25, Neal Becker kirjoitti: [clip] > When run with DEFAULT test (not 'full'), the test does complete, on both py2 and > py3. > > Using numpy-1.8.1 + openblas > > When run on py3, I got 1 failure: > > ERROR: test_fitpack.TestSplder.test_kink > ---------------------------------------------------------------------- [clip] Known bug (solved in git master, but not in 0.14.0): https://github.com/scipy/scipy/issues/2911 From tlinnet at gmail.com Thu Jun 26 04:31:19 2014 From: tlinnet at gmail.com (=?UTF-8?Q?Troels_Emtek=C3=A6r_Linnet?=) Date: Thu, 26 Jun 2014 10:31:19 +0200 Subject: [SciPy-User] Faster approach to calculate the matrix exponential Message-ID: Dear NMR wizards. Do any of you know any faster approach to calculate the matrix exponential ? I currently do it via eigenvalue decomposition approach. But my profiling scripts tells me, that my bottleneck is the eig function. Stealing 86% of the time. The matrix exponential is essential calculated in all numerical models of NMR magnetisation evolution. My minimisation algorithm would calculate this at each step, with iterations between 1E4 to 1E7 iterations. I have looked through this paper, do you have any experience with some of the methods? 
They don't really come to a conclusion, but suggest method 3. (page 10) Moler, C. and Van Loan, C. (2003) Nineteen Dubious Ways to Compute the Exponential of a Matrix, Twenty-Five Years Later. SIAM Review, 45, 3-49. (http://dx.doi.org/10.1137/S00361445024180 or http://www.cs.cornell.edu/cv/researchpdf/19ways+.pdf). My multidimensional approach right now is: http://svn.gna.org/svn/relax/branches/disp_spin_speed/lib/dispersion/matrix_exponential.py The numerical models using it: http://svn.gna.org/svn/relax/branches/disp_spin_speed/lib/dispersion/ns_cpmg_2site_3d.py http://svn.gna.org/svn/relax/branches/disp_spin_speed/lib/dispersion/ns_cpmg_2site_star.py http://svn.gna.org/svn/relax/branches/disp_spin_speed/lib/dispersion/ns_mmq_2site.py http://svn.gna.org/svn/relax/branches/disp_spin_speed/lib/dispersion/ns_mmq_3site.py http://svn.gna.org/svn/relax/branches/disp_spin_speed/lib/dispersion/ns_r1rho_2site.py http://svn.gna.org/svn/relax/branches/disp_spin_speed/lib/dispersion/ns_r1rho_3site.py Troels Emtek?r Linnet -------------- next part -------------- An HTML attachment was scrubbed... URL: From tlinnet at gmail.com Thu Jun 26 04:46:37 2014 From: tlinnet at gmail.com (=?UTF-8?Q?Troels_Emtek=C3=A6r_Linnet?=) Date: Thu, 26 Jun 2014 10:46:37 +0200 Subject: [SciPy-User] Faster approach to calculate the matrix exponential In-Reply-To: References: Message-ID: It the bottom, I provide the profiling for NS R1rho 2site. You will see, that using the eig function, takes 50% of the time: That is i little sad, that the reason why Numerical solutions is so slow, is numpy.linalg.eig(). They differ a little in matrix size. ns_cpmg_2site_3d.py 7 X 7 matrix. ns_cpmg_2site_star.py 2x2 matrix ns_mmq_2site.py 2x2 matrix. ns_mmq_3site.py 3x3 matrix, ns_cpmg_2site_3d.py 6x6 matrix ns_r1rho_3site.py 9x9 matrix. ####### Thu Jun 26 10:42:13 2014 /var/folders/ww/1jkhkh315x57jglgxnr9g24w0000gp/T/tmp0buvpw 211077 function calls in 5.073 seconds Ordered by: cumulative time ncalls tottime percall cumtime percall filename:lineno(function) 1 0.000 0.000 5.073 5.073 :1() 1 0.007 0.007 5.073 5.073 profiling_ns_r1rho_2site.py:553(cluster) 10 0.000 0.000 4.592 0.459 profiling_ns_r1rho_2site.py:515(calc) 10 0.015 0.002 4.592 0.459 relax_disp.py:1585(func_ns_r1rho_2site) 10 0.113 0.011 4.574 0.457 ns_r1rho_2site.py:190(ns_r1rho_2site) 10 0.093 0.009 4.138 0.414 matrix_exponential.py:33(matrix_exponential_rank_NE_NS_NM_NO_ND_x_x) 10 2.529 0.253 2.581 0.258 linalg.py:982(eig) 40 0.890 0.022 0.890 0.022 {numpy.core.multiarray.einsum} 10 0.524 0.052 0.552 0.055 linalg.py:455(inv) 1 0.000 0.000 0.474 0.474 profiling_ns_r1rho_2site.py:112(__init__) 10 0.125 0.013 0.304 0.030 ns_r1rho_2site.py:117(rr1rho_3d_2site_rankN) 1 0.248 0.248 0.274 0.274 relax_disp.py:64(__init__) 92 0.180 0.002 0.180 0.002 {method 'outer' of 'numpy.ufunc' objects} 1 0.090 0.090 0.107 0.107 profiling_ns_r1rho_2site.py:189(return_offset_data) 1 0.053 0.053 0.092 0.092 profiling_ns_r1rho_2site.py:289(return_r2eff_arrays) 30 0.065 0.002 0.065 0.002 {method 'astype' of 'numpy.ndarray' objects} 15109 0.043 0.000 0.043 0.000 {numpy.core.multiarray.array} 118863 0.018 0.000 0.018 0.000 {method 'append' of 'list' objects} 3004 0.004 0.000 0.018 0.000 numeric.py:136(ones) 10 0.001 0.000 0.015 0.002 shape_base.py:761(tile) 40 0.015 0.000 0.015 0.000 {method 'repeat' of 'numpy.ndarray' objects} 10 0.012 0.001 0.013 0.001 linalg.py:214(_assertFinite) Troels Emtek?r Linnet 2014-06-26 10:31 GMT+02:00 Troels Emtek?r Linnet : > Dear NMR wizards. 
> > Do any of you know any faster approach to calculate the matrix exponential > ? > > I currently do it via eigenvalue decomposition approach. > But my profiling scripts tells me, that my bottleneck is the eig function. > Stealing 86% of the time. > > The matrix exponential is essential calculated in all numerical models of > NMR > magnetisation evolution. > > My minimisation algorithm would calculate this at each step, with > iterations between 1E4 to 1E7 iterations. > > I have looked through this paper, do you have any experience with some of > the methods? > > They don't really come to a conclusion, but suggest method 3. (page 10) > > Moler, C. and Van Loan, C. (2003) Nineteen Dubious Ways to Compute > the Exponential of a Matrix, Twenty-Five Years Later. SIAM Review, > 45, 3-49. (http://dx.doi.org/10.1137/S00361445024180 or > http://www.cs.cornell.edu/cv/researchpdf/19ways+.pdf). > > My multidimensional approach right now is: > > http://svn.gna.org/svn/relax/branches/disp_spin_speed/lib/dispersion/matrix_exponential.py > > The numerical models using it: > > http://svn.gna.org/svn/relax/branches/disp_spin_speed/lib/dispersion/ns_cpmg_2site_3d.py > > http://svn.gna.org/svn/relax/branches/disp_spin_speed/lib/dispersion/ns_cpmg_2site_star.py > > http://svn.gna.org/svn/relax/branches/disp_spin_speed/lib/dispersion/ns_mmq_2site.py > > http://svn.gna.org/svn/relax/branches/disp_spin_speed/lib/dispersion/ns_mmq_3site.py > > http://svn.gna.org/svn/relax/branches/disp_spin_speed/lib/dispersion/ns_r1rho_2site.py > > http://svn.gna.org/svn/relax/branches/disp_spin_speed/lib/dispersion/ns_r1rho_3site.py > > > > Troels Emtek?r Linnet > -------------- next part -------------- An HTML attachment was scrubbed... URL: From gregor.thalhammer at gmail.com Thu Jun 26 06:52:04 2014 From: gregor.thalhammer at gmail.com (Gregor Thalhammer) Date: Thu, 26 Jun 2014 12:52:04 +0200 Subject: [SciPy-User] Faster approach to calculate the matrix exponential In-Reply-To: References: Message-ID: <42442A80-DCAA-4438-AA16-4E2D67BE7984@gmail.com> Am 26.06.2014 um 10:31 schrieb Troels Emtek?r Linnet : > Dear NMR wizards. > > Do any of you know any faster approach to calculate the matrix exponential ? > > I currently do it via eigenvalue decomposition approach. > But my profiling scripts tells me, that my bottleneck is the eig function. > Stealing 86% of the time. > scipy.linalg.expm provides a matrix exponential using Pad? approximation. Is this sufficiently accurate for your application? Gregor From tlinnet at gmail.com Thu Jun 26 07:07:47 2014 From: tlinnet at gmail.com (=?UTF-8?Q?Troels_Emtek=C3=A6r_Linnet?=) Date: Thu, 26 Jun 2014 13:07:47 +0200 Subject: [SciPy-User] Faster approach to calculate the matrix exponential In-Reply-To: <42442A80-DCAA-4438-AA16-4E2D67BE7984@gmail.com> References: <42442A80-DCAA-4438-AA16-4E2D67BE7984@gmail.com> Message-ID: Dear Gregor. Using the scripy method does not sound viable. It was discussed here, post 2 http://thread.gmane.org/gmane.science.nmr.relax.devel/6446/match=eliminating+86+bottleneck+numeric+r1rho The problem is also, that this function expects square matrices. I need to tweak to data of dimension a, b, c, d, e, X, X where X, X is the square matrices. A universal method for the data array. Best Troels On 26 Jun 2014 12:52, "Gregor Thalhammer" wrote: > > Am 26.06.2014 um 10:31 schrieb Troels Emtek?r Linnet : > > > Dear NMR wizards. > > > > Do any of you know any faster approach to calculate the matrix > exponential ? 
> > > > I currently do it via eigenvalue decomposition approach. > > But my profiling scripts tells me, that my bottleneck is the eig > function. > > Stealing 86% of the time. > > > > scipy.linalg.expm provides a matrix exponential using Pad? approximation. > Is this sufficiently accurate for your application? > > Gregor > > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user > -------------- next part -------------- An HTML attachment was scrubbed... URL: From charlesr.harris at gmail.com Thu Jun 26 08:32:45 2014 From: charlesr.harris at gmail.com (Charles R Harris) Date: Thu, 26 Jun 2014 06:32:45 -0600 Subject: [SciPy-User] Faster approach to calculate the matrix exponential In-Reply-To: References: Message-ID: On Thu, Jun 26, 2014 at 2:31 AM, Troels Emtek?r Linnet wrote: > Dear NMR wizards. > > Do any of you know any faster approach to calculate the matrix exponential > ? > > I currently do it via eigenvalue decomposition approach. > But my profiling scripts tells me, that my bottleneck is the eig function. > Stealing 86% of the time. > > The matrix exponential is essential calculated in all numerical models of > NMR > magnetisation evolution. > Are you solving a differential equation? If so, an explicit solution may not be the best way to go. Chuck -------------- next part -------------- An HTML attachment was scrubbed... URL: From eric.moore2 at nih.gov Thu Jun 26 10:42:46 2014 From: eric.moore2 at nih.gov (Moore, Eric (NIH/NIDDK) [F]) Date: Thu, 26 Jun 2014 14:42:46 +0000 Subject: [SciPy-User] Faster approach to calculate the matrix exponential In-Reply-To: References: <42442A80-DCAA-4438-AA16-4E2D67BE7984@gmail.com> Message-ID: <649847CE7F259144A0FD99AC64E7326D0EE9F8@MLBXV17.nih.gov> From: Troels Emtek?r Linnet [mailto:tlinnet at gmail.com] Sent: Thursday, June 26, 2014 7:08 AM To: SciPy Users List Subject: Re: [SciPy-User] Faster approach to calculate the matrix exponential Dear Gregor. Using the scripy method does not sound viable. It was discussed here,? post 2 http://thread.gmane.org/gmane.science.nmr.relax.devel/6446/match=eliminating+86+bottleneck+numeric+r1rho The problem is also, that this function expects square matrices. I need to tweak to data of dimension a, b, c, d, e, X, X where X, X is the square matrices.? A universal method for the data array. Best Troels On 26 Jun 2014 12:52, "Gregor Thalhammer" wrote: Am 26.06.2014 um 10:31 schrieb Troels Emtek?r Linnet : > Dear NMR wizards. > > Do any of you know any faster approach to calculate the matrix exponential ? > > I currently do it via eigenvalue decomposition approach. > But my profiling scripts tells me, that my bottleneck is the eig function. > Stealing 86% of the time. > scipy.linalg.expm provides a matrix exponential using Pad? approximation. Is this sufficiently accurate for your application? Gregor There are expansions for the matrix exponential in terms of Chebyshev and Laguerre matrix polynomials. (The first of which does get use a bit for calculating propagators in NMR, for instance, spinevolution and simpson both can/do use it.) However, your matrices seem quite small, even if there are a large number of them, so it may be very difficult to speed this up by much. 
Eric _______________________________________________ SciPy-User mailing list SciPy-User at scipy.org http://mail.scipy.org/mailman/listinfo/scipy-user From pav at iki.fi Thu Jun 26 13:22:17 2014 From: pav at iki.fi (Pauli Virtanen) Date: Thu, 26 Jun 2014 20:22:17 +0300 Subject: [SciPy-User] Faster approach to calculate the matrix exponential In-Reply-To: References: <42442A80-DCAA-4438-AA16-4E2D67BE7984@gmail.com> Message-ID: 26.06.2014 14:07, Troels Emtek?r Linnet kirjoitti: > Using the scripy method does not sound viable. > > It was discussed here, post 2 > > http://thread.gmane.org/gmane.science.nmr.relax.devel/6446/match=eliminating+86+bottleneck+numeric+r1rho That link doesn't load currently. Nevertheless, using the Scipy implementation inside for loops is faster than using the eig() based approach, with a crossover (on my machine) when the inner matrix size is bigger than 16x16. This comparison applies to Scipy 0.14.0 --- the implementation in earlier Scipy versions is less efficient. Benchmark: """ from scipy.linalg import expm def matrix_exponential_scipy(A, dtype=None): B = np.empty(A.shape, dtype=A.dtype) for i in range(A.shape[0]): for j in range(A.shape[1]): for k in range(A.shape[2]): for l in range(A.shape[3]): for m in range(A.shape[4]): B[i,j,k,l,m] = expm(A[i,j,k,l,m]) return B import numpy as np N = 18 A = np.random.rand(3, 4, 5, 3, 2, N, N) start_1 = time.time() B_1 = matrix_exponential_scipy(A) end_1 = time.time() start_2 = time.time() B_2 = matrix_exponential_rank_NE_NS_NM_NO_ND_x_x(A) end_2 = time.time() print("Scipy:", end_1 - start_1) print("Numpy:", end_2 - start_2) """ #-> ('Scipy:', 0.3015120029449463) #-> ('Numpy:', 0.31020593643188477) -- Pauli Virtanen From shoyer at gmail.com Thu Jun 26 14:04:54 2014 From: shoyer at gmail.com (Stephan Hoyer) Date: Thu, 26 Jun 2014 11:04:54 -0700 Subject: [SciPy-User] Faster approach to calculate the matrix exponential In-Reply-To: References: Message-ID: On Thu, Jun 26, 2014 at 5:32 AM, Charles R Harris wrote: > > Are you solving a differential equation? If so, an explicit solution may > not be the best way to go. > I agree. My experience with solving very similar quantum mechanics problems is that using an ODE integrator like zofe (via scipy.integrate.ode) can be much faster than solving the propagator exactly.. -------------- next part -------------- An HTML attachment was scrubbed... URL: From michael at perfect-kreim.de Thu Jun 26 14:46:23 2014 From: michael at perfect-kreim.de (Michael Kreim) Date: Thu, 26 Jun 2014 20:46:23 +0200 Subject: [SciPy-User] Faster approach to calculate the matrix exponential In-Reply-To: References: Message-ID: <53AC6A7F.2060001@perfect-kreim.de> Am 26.06.2014 10:31, schrieb Troels Emtek?r Linnet: > Do any of you know any faster approach to calculate the matrix exponential ? Am 26.06.2014 20:04, schrieb Stephan Hoyer: > > On Thu, Jun 26, 2014 at 5:32 AM, Charles R Harris > > wrote: > > Are you solving a differential equation? If so, an explicit solution > may not be the best way to go. > > > I agree. > > My experience with solving very similar quantum mechanics problems is > that using an ODE integrator like zofe (via scipy.integrate.ode) can be > much faster than solving the propagator exactly.. > Hi, I never used Python for my numerical Problems, but I worked with some PDEs of the d/dt p = Ap type in the recent time (but not from quantum mechanics). I agree with Stephan and Charles that it my be worth trying to solve the ODE using any suitable ODE solver. 
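To make the ODE-solver suggestion concrete, a minimal sketch (the matrix A, the start vector, the sizes and the tolerances are all made up; 'dopri5' is just one of the integrators scipy.integrate.ode accepts): propagate dv/dt = A v to time t and compare with expm(A*t) applied to the same start vector.

import numpy as np
from scipy.integrate import ode
from scipy.linalg import expm

rng = np.random.RandomState(0)
N = 7
A = rng.randn(N, N)          # stand-in for one small relaxation matrix
t_end = 0.01
v0 = rng.randn(N)            # stand-in for the starting magnetisation vector

def rhs(t, v, A):
    return A.dot(v)

solver = ode(rhs).set_integrator('dopri5', rtol=1e-9, atol=1e-12)
solver.set_initial_value(v0, 0.0).set_f_params(A)
v_ode = solver.integrate(t_end)

v_expm = expm(A*t_end).dot(v0)
print(np.allclose(v_ode, v_expm))    # True to within the tolerances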
In Matlab I made the experience that it is very problem depended if the ode solver or the matrix exponential method is faster. Also you can have a look at expokit which is a very good implementation of matrix exponential solvers (using Krylov subspace methods): http://www.maths.uq.edu.au/expokit/ Unfortunately there is no Python implementation but maybe you can call the fortran or C++ version from python. And you can have a look at operator splitting methods. If it is possible to split your problem in two (or more) sub problems, then these methods often speed up the numerical computations. The problem usually lies in finding a suitable splitting. The implementation is not very complicated, if you have numerical methods for the sub problems. Cheers, Michael From tlinnet at gmail.com Fri Jun 27 04:57:01 2014 From: tlinnet at gmail.com (=?UTF-8?Q?Troels_Emtek=C3=A6r_Linnet?=) Date: Fri, 27 Jun 2014 10:57:01 +0200 Subject: [SciPy-User] Faster approach to calculate the matrix exponential In-Reply-To: <53AC6A7F.2060001@perfect-kreim.de> References: <53AC6A7F.2060001@perfect-kreim.de> Message-ID: Dear Gregor, Charles, Eric, Pauli, Stephan and Michael. Thank you for your valuable input! This saves me alot of time. It seems that the Gmane mail archive have some issues. Another mail server: http://www.mail-archive.com/relax-devel at gna.org/msg06311.html And then "Next message". I have also discussed this at our own mailing list: http://www.mail-archive.com/relax-users at gna.org/msg01634.html What I read from your comments is: Look into ODE integrator like zofe (scipy.integrate.ode) It also becomes clear for me, that I should sit down with theory to grasp, why I actually do: 1) matrix exponential 2) matrix power in all the numerical models. Then I would be able to select which best method to apply. Best Troels Troels Emtek?r Linnet 2014-06-26 20:46 GMT+02:00 Michael Kreim : > Am 26.06.2014 10:31, schrieb Troels Emtek?r Linnet: >> Do any of you know any faster approach to calculate the matrix exponential ? > > Am 26.06.2014 20:04, schrieb Stephan Hoyer: > > > > On Thu, Jun 26, 2014 at 5:32 AM, Charles R Harris > > > wrote: > > > > Are you solving a differential equation? If so, an explicit solution > > may not be the best way to go. > > > > > > I agree. > > > > My experience with solving very similar quantum mechanics problems is > > that using an ODE integrator like zofe (via scipy.integrate.ode) can be > > much faster than solving the propagator exactly.. > > > > Hi, > > I never used Python for my numerical Problems, but I worked with some > PDEs of the d/dt p = Ap type in the recent time (but not from quantum > mechanics). > > I agree with Stephan and Charles that it my be worth trying to solve the > ODE using any suitable ODE solver. In Matlab I made the experience that > it is very problem depended if the ode solver or the matrix exponential > method is faster. > > Also you can have a look at expokit which is a very good implementation > of matrix exponential solvers (using Krylov subspace methods): > http://www.maths.uq.edu.au/expokit/ > Unfortunately there is no Python implementation but maybe you can call > the fortran or C++ version from python. > > And you can have a look at operator splitting methods. If it is possible > to split your problem in two (or more) sub problems, then these methods > often speed up the numerical computations. The problem usually lies in > finding a suitable splitting. 
The implementation is not very > complicated, if you have numerical methods for the sub problems. > > > Cheers, > > Michael > > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user From helmrp at yahoo.com Fri Jun 27 16:12:34 2014 From: helmrp at yahoo.com (The Helmbolds) Date: Fri, 27 Jun 2014 13:12:34 -0700 Subject: [SciPy-User] SciPy-User Digest, Vol 130, Issue 23 In-Reply-To: References: Message-ID: <1403899954.87008.YahooMailNeo@web142801.mail.bf1.yahoo.com> If the matrix is not too large, the straightforward series expansion, for judiciously chosen n, can be a remarkably good approximation:

    EXP = I.copy()               # I is the identity matrix, e.g. np.eye(M.shape[0])
    term = I.copy()              # current term M**k / k!
    for k in range(1, n + 1):
        term = term.dot(M) / k
        EXP = EXP + term         # running sum of the Taylor series for expm(M)
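A quick check of the truncated series above against scipy.linalg.expm, using a small, well-scaled random matrix made up for illustration. For matrices with a large norm the plain series needs many more terms, which is why expm combines a Padé step with scaling and squaring.

import numpy as np
from scipy.linalg import expm

rng = np.random.RandomState(1)
M = 0.1 * rng.randn(4, 4)

EXP = np.eye(4)
term = np.eye(4)
for k in range(1, 15):
    term = term.dot(M) / k     # term is now M**k / k!
    EXP = EXP + term           # running Taylor sum

print(np.max(np.abs(EXP - expm(M))))   # near machine precision for this example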
>________________________________
> From: "scipy-user-request at scipy.org"
> To: scipy-user at scipy.org
> Sent: Friday, June 27, 2014 10:00 AM
> Subject: SciPy-User Digest, Vol 130, Issue 23
>
>
> Today's Topics:
>
>   1. Re: Faster approach to calculate the matrix exponential (Pauli Virtanen)
>   2. Re: Faster approach to calculate the matrix exponential (Stephan Hoyer)
>   3. Re: Faster approach to calculate the matrix exponential (Michael Kreim)
>   4. Re: Faster approach to calculate the matrix exponential (Troels Emtekær Linnet)
>
>
> ----------------------------------------------------------------------
>
> Message: 1
> Date: Thu, 26 Jun 2014 20:22:17 +0300
> From: Pauli Virtanen
> Subject: Re: [SciPy-User] Faster approach to calculate the matrix exponential
> To: scipy-user at scipy.org
> Message-ID:
> Content-Type: text/plain; charset=ISO-8859-1
>
> 26.06.2014 14:07, Troels Emtekær Linnet kirjoitti:
>> Using the scipy method does not sound viable.
>>
>> It was discussed here, post 2
>>
>> http://thread.gmane.org/gmane.science.nmr.relax.devel/6446/match=eliminating+86+bottleneck+numeric+r1rho
>
> That link doesn't load currently.
>
> Nevertheless, using the Scipy implementation inside for loops is faster
> than using the eig() based approach, with a crossover (on my machine)
> when the inner matrix size is bigger than 16x16. This comparison applies
> to Scipy 0.14.0 --- the implementation in earlier Scipy versions is less
> efficient.
>
> Benchmark:
> """
> import time
> import numpy as np
> from scipy.linalg import expm
>
> def matrix_exponential_scipy(A, dtype=None):
>     B = np.empty(A.shape, dtype=A.dtype)
>     for i in range(A.shape[0]):
>         for j in range(A.shape[1]):
>             for k in range(A.shape[2]):
>                 for l in range(A.shape[3]):
>                     for m in range(A.shape[4]):
>                         B[i,j,k,l,m] = expm(A[i,j,k,l,m])
>     return B
>
> N = 18
> A = np.random.rand(3, 4, 5, 3, 2, N, N)
>
> start_1 = time.time()
> B_1 = matrix_exponential_scipy(A)
> end_1 = time.time()
>
> start_2 = time.time()
> B_2 = matrix_exponential_rank_NE_NS_NM_NO_ND_x_x(A)
> end_2 = time.time()
>
> print("Scipy:", end_1 - start_1)
> print("Numpy:", end_2 - start_2)
> """
> #-> ('Scipy:', 0.3015120029449463)
> #-> ('Numpy:', 0.31020593643188477)
>
> --
> Pauli Virtanen
>
>
> ------------------------------
>
> Message: 2
> Date: Thu, 26 Jun 2014 11:04:54 -0700
> From: Stephan Hoyer
> Subject: Re: [SciPy-User] Faster approach to calculate the matrix exponential
> To: SciPy Users List
> Message-ID:
> Content-Type: text/plain; charset="utf-8"
>
> On Thu, Jun 26, 2014 at 5:32 AM, Charles R Harris wrote:
>>
>> Are you solving a differential equation? If so, an explicit solution may
>> not be the best way to go.
>>
>
> I agree.
>
> My experience with solving very similar quantum mechanics problems is that
> using an ODE integrator like zvode (via scipy.integrate.ode) can be much
> faster than solving the propagator exactly.
> -------------- next part --------------
> An HTML attachment was scrubbed...
> URL: http://mail.scipy.org/pipermail/scipy-user/attachments/20140626/ec2d91f3/attachment-0001.html
>
> ------------------------------
>
> Message: 3
> Date: Thu, 26 Jun 2014 20:46:23 +0200
> From: Michael Kreim
> Subject: Re: [SciPy-User] Faster approach to calculate the matrix exponential
> To: SciPy Users List
> Message-ID: <53AC6A7F.2060001 at perfect-kreim.de>
> Content-Type: text/plain; charset=UTF-8; format=flowed
>
> [...]
>
> ------------------------------
>
> Message: 4
> Date: Fri, 27 Jun 2014 10:57:01 +0200
> From: Troels Emtekær Linnet
> Subject: Re: [SciPy-User] Faster approach to calculate the matrix exponential
> To: SciPy Users List
> Message-ID:
> Content-Type: text/plain; charset=UTF-8
>
> [...]
>
> ------------------------------
>
> _______________________________________________
> SciPy-User mailing list
> SciPy-User at scipy.org
> http://mail.scipy.org/mailman/listinfo/scipy-user
>
>
> End of SciPy-User Digest, Vol 130, Issue 23
> *******************************************
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL:
From barrett.n.b at gmail.com  Sun Jun 29 03:46:29 2014
From: barrett.n.b at gmail.com (Barrett B)
Date: Sun, 29 Jun 2014 07:46:29 +0000 (UTC)
Subject: [SciPy-User] Optimizing odeint without a for loop
References: <649847CE7F259144A0FD99AC64E7326D0E27A3@MLBXV17.nih.gov>
Message-ID:

Moore, Eric (NIH/NIDDK) [F] <eric.moore2 at nih.gov> writes:

>
> Untested:
>
> def f(X, t):
>     N = len(X)/2
>     E = X[:N]
>     n = X[1:N+1]
>     n_inf_E = 1/(1 + np.exp((E_half_n - E)/k_n))
>     m_inf_E = 1/(1 + np.exp((E_half_m - E)/k_m))
>     dV = (I - g_L*(E - E_L) - g_Na*m_inf_E*(E - E_Na) - g_K*n*(E - E_K))/C
>     dV += eps * np.dot(X[:N], connex)
>     dn = (n_inf_E - n)/tau
>     return np.concatenate((dV, dn))
>
> The basic idea is to operate on full arrays at once rather than looping
> over them. See, for instance,
> https://scipy-lectures.github.io/intro/numpy/operations.html
>
> Eric
> _______________________________________________
> SciPy-User mailing list
> SciPy-User at scipy.org
> http://mail.scipy.org/mailman/listinfo/scipy-user
>

Thanks, that definitely steers me in the right direction. I made some changes
(additional code included for context):

--------------------

def f(X, t):
    N = len(X)/2
    E = X[:N]; n = X[N:2*N]
    n_inf_E = 1/(1 + np.exp((E_half_n - E)/k_n))
    m_inf_E = 1/(1 + np.exp((E_half_m - E)/k_m))
    dV = (I - g_L*(E - E_L) - g_Na*m_inf_E*(E - E_Na) - g_K*n*(E - E_K))/C
    dV += eps*np.dot(connex, X[:N])
    dn = (n_inf_E - n)/tau
    return np.concatenate((dV, dn))

connex = np.matrix([[-1, 1, 0],
                    [1, -2, 1],
                    [0, 1, -1]]) #connection matrix

t = np.arange(0, stopTime, timeStep)
V0 = np.array([0, -20, -50])
n0 = np.array([0.2, 0.4, 0.7]); N = len(n0)

soln = odeint(f, np.concatenate((V0, n0)), t)

--------------

But the "dV +=" line (third to last in function f) gives me the following
error, repeated several times:

Traceback (most recent call last):
  File "C:/Users/Barrett/Documents/Research/Networks/HH n V.py", line 31, in f
    dV += eps*np.dot(connex, X[:N])
ValueError: non-broadcastable output operand with shape (3) doesn't match the
broadcast shape (1,3)

-------------

I figured it might have been a problem with the matrix multiplication, but
that wasn't it: switching "connex" and "X[:N]" gives the exact same error.
What's going on here?
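The shapes in that traceback are what one would expect because connex is a
numpy.matrix rather than a plain ndarray: np.dot with a matrix operand returns
a matrix, and a matrix is always 2-D, so the product comes back with shape
(1, 3) and cannot be broadcast into the 1-D dV of shape (3,). Swapping the
operands still produces a matrix, hence the identical error. A minimal sketch
of the behaviour and one possible fix (declaring the connection matrix with
np.array); this is an illustration of the shape issue, not necessarily the
only way to resolve it:

    import numpy as np

    connex_matrix = np.matrix([[-1, 1, 0],
                               [1, -2, 1],
                               [0, 1, -1]])
    connex_array = np.array([[-1, 1, 0],
                             [1, -2, 1],
                             [0, 1, -1]])
    x = np.array([0.0, -20.0, -50.0])

    print(np.dot(connex_matrix, x).shape)  # (1, 3) -- np.matrix results stay 2-D
    print(np.dot(connex_array, x).shape)   # (3,)   -- ndarray result is 1-D

    dV = np.zeros(3)
    dV += np.dot(connex_array, x)          # broadcasts fine
    # dV += np.dot(connex_matrix, x)       # would raise the ValueError quoted above

Alternatively, the matrix result could be flattened with
np.asarray(np.dot(connex, x)).ravel(), but simply building connex with
np.array keeps every intermediate result a 1-D array.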