From fmafjr at gmail.com Sat Feb 1 01:30:40 2014
From: fmafjr at gmail.com (Carlos Eduardo)
Date: Fri, 31 Jan 2014 22:30:40 -0800
Subject: [SciPy-User] Tikhonov regularization in the non-negative least square - NNLS (python:scipy)
Message-ID:

Hi guys,

I am working on a project in which I need to add regularization to the NNLS algorithm. Is there a way to add Tikhonov regularization to the NNLS implementation in scipy [1]? [2] discusses it, but does not show any implementation. Sklearn has an implementation, but it is not applied to NNLS.

[1] http://docs.scipy.org/doc/scipy/reference/generated/scipy.optimize.nnls.html
[2] http://icses2012.pwr.wroc.pl/article/34.pdf

Any help is welcome. Thanks

-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From Jerome.Kieffer at esrf.fr Sat Feb 1 02:06:19 2014
From: Jerome.Kieffer at esrf.fr (Jerome Kieffer)
Date: Sat, 1 Feb 2014 08:06:19 +0100
Subject: [SciPy-User] Gaussian filter
In-Reply-To:
References:
Message-ID: <20140201080619.642c127b9a0d5432371af513@esrf.fr>

On Fri, 31 Jan 2014 13:47:35 -0500
Gabriele Brambilla wrote:

> Hi,
> I'm an Italian guy who studies astrophysics.
> I'm using Scipy's Gaussian filter to smooth some images...
> Could you tell me the meaning and unit of measure of the parameter sigma?

Hi,

sigma refers to: http://en.wikipedia.org/wiki/Gaussian_function
It is in pixel units.

--
Jérôme Kieffer
Data analysis unit - ESRF

From gb.gabrielebrambilla at gmail.com Sat Feb 1 07:31:15 2014
From: gb.gabrielebrambilla at gmail.com (Gabriele Brambilla)
Date: Sat, 1 Feb 2014 07:31:15 -0500
Subject: [SciPy-User] Gaussian filter
In-Reply-To: <20140201080619.642c127b9a0d5432371af513@esrf.fr>
References: <20140201080619.642c127b9a0d5432371af513@esrf.fr>
Message-ID:

Ok, thank you!

sent from Samsung Mobile

On 01 Feb 2014 02:06, "Jerome Kieffer" wrote:
> On Fri, 31 Jan 2014 13:47:35 -0500
> Gabriele Brambilla wrote:
>
> > Hi,
> > I'm an Italian guy who studies astrophysics.
> > I'm using Scipy's Gaussian filter to smooth some images...
> > Could you tell me the meaning and unit of measure of the parameter
> > sigma?
>
> Hi,
>
> sigma refers to: http://en.wikipedia.org/wiki/Gaussian_function
> It is in pixel units.
> --
> Jérôme Kieffer
> Data analysis unit - ESRF
> _______________________________________________
> SciPy-User mailing list
> SciPy-User at scipy.org
> http://mail.scipy.org/mailman/listinfo/scipy-user

From gb.gabrielebrambilla at gmail.com Sat Feb 1 22:12:46 2014
From: gb.gabrielebrambilla at gmail.com (Gabriele Brambilla)
Date: Sat, 1 Feb 2014 22:12:46 -0500
Subject: [SciPy-User] AttributeError: 'numpy.float64' object has no attribute '_mask'
Message-ID:

Hi,

I'm trying to do a fit with this code:

def funky(xo, a, b, c, d):
    return a*((xo)**((-1)*b))*(np.exp(((-1)*xo/c))**d)

guess = [A, gamma, Ecut, b]

params, params_covariance = optimize.curve_fit(funky, x, y, guess)  # , errorvector)

yee = funky(x, params[0], params[1], params[2], params[3])

fig = plt.figure()
axes = fig.add_subplot(111)
im = axes.plot(x, y)  # errorbar(x, y, errorvector, zeri)
im = axes.plot(x, yee)
axes.set_yscale('log')
axes.set_xscale('log')
fig.canvas.draw()
plt.show()

and I've also found this:
http://mail.scipy.org/pipermail/numpy-discussion/2013-September/067593.html

what could I do?

thanks

Gabriele

From gb.gabrielebrambilla at gmail.com Sun Feb 2 12:38:31 2014
From: gb.gabrielebrambilla at gmail.com (Gabriele Brambilla)
Date: Sun, 2 Feb 2014 12:38:31 -0500
Subject: [SciPy-User] fit with 3 parameters ok, fit with 4 is going wrong!
Message-ID:

Hi,

I'm trying to do a fit with scipy optimize.curve_fit. When I define this function omitting the variable d, the fit works well:

a*((x)**((-1)*b))*(np.exp(((-1)*x/c)**d))

When I include the d (not only in this line, obviously) I get this error:

my1curvefit.py:14: RuntimeWarning: invalid value encountered in power
  return a*((x)**((-1)*b))*(np.exp(((-1)*x/c)**d))
Traceback (most recent call last):
  File "my1curvefit.py", line 37, in
    params, params_covariance = optimize.curve_fit(funky, xa, ye, guess, e)
  File "C:\Users\Gabriele\Anaconda1\lib\site-packages\scipy\optimize\minpack.py", line 538, in curve_fit
    raise RuntimeError(msg)
RuntimeError: Optimal parameters not found: Number of calls to function has
reached maxfev = 1000.

Could you help me understand what's happening? And how could I do this fit?

thanks

Gabriele

From vaggi.federico at gmail.com Sun Feb 2 15:11:44 2014
From: vaggi.federico at gmail.com (federico vaggi)
Date: Sun, 2 Feb 2014 21:11:44 +0100
Subject: [SciPy-User] fit with 3 parameters ok, fit with 4 is going wrong!
Message-ID:

What is most likely happening is that the LM algorithm is suggesting values of d which cause an overflow. Can you try editing your function to print a message if your expression starts to exceed sys.maxsize?

If this is what's happening, try http://cars9.uchicago.edu/software/python/lmfit/, which is a nice wrapper with a much better UI (and some neat features) around the same LM algorithm, and which gives you the option of setting limits on the parameters.

Second point - you are minimizing a function for which you have a symbolic expression. Why not write out an explicit function for the derivative? It will be much faster than computing the derivative numerically with finite-difference approaches.
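Both of these suggestions can be sketched concretely. One caveat first: in the expression np.exp(((-1)*x/c)**d), a negative number is raised to the fractional power d, which by itself produces the "invalid value encountered in power" warning, so the sketch below assumes the intended model was a*x**(-b)*np.exp(-(x/c)**d). It also assumes a SciPy recent enough to provide scipy.optimize.least_squares (added well after this thread), which accepts both bounds and an analytic Jacobian; the data and starting values here are invented for illustration:

```python
import numpy as np
from scipy.optimize import least_squares

def model(x, a, b, c, d):
    # Assumed intended model: a * x**(-b) * exp(-(x/c)**d)
    return a * x**(-b) * np.exp(-(x / c)**d)

def residuals(p, x, y):
    return model(x, *p) - y

def jac(p, x, y):
    # Analytic Jacobian of the residuals: one column per parameter.
    a, b, c, d = p
    f = model(x, *p)
    u = (x / c)**d
    return np.column_stack([
        f / a,                   # df/da
        -np.log(x) * f,          # df/db
        (d / c) * u * f,         # df/dc
        -np.log(x / c) * u * f,  # df/dd
    ])

# Synthetic, noise-free data (invented values).
x = np.linspace(1.0, 20.0, 50)
y = model(x, 2.0, 1.5, 5.0, 1.0)

# Positive lower bounds keep x**(-b) and (x/c)**d well defined.
res = least_squares(residuals, x0=[1.5, 1.0, 4.0, 1.0], jac=jac,
                    bounds=([1e-6] * 4, [np.inf] * 4), args=(x, y))
```

With lmfit, the thread-era alternative, the same limits would instead be set on its Parameter objects via min/max.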
On Sun, Feb 2, 2014 at 7:00 PM, wrote:
> Today's Topics:
>
>    1. AttributeError: 'numpy.float64' object has no attribute
>       '_mask' (Gabriele Brambilla)
>    2. fit with 3 parameters ok, fit with 4 is going wrong!
>       (Gabriele Brambilla)
>
> [...]
>
> End of SciPy-User Digest, Vol 126, Issue 2
> ******************************************
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From daniele at grinta.net Mon Feb 3 18:07:52 2014
From: daniele at grinta.net (Daniele Nicolodi)
Date: Tue, 04 Feb 2014 00:07:52 +0100
Subject: [SciPy-User] Allan Variance
In-Reply-To:
References: <5134A07F.4050907@grinta.net> <5135E441.80504@grinta.net>
Message-ID: <52F02148.506@grinta.net>

On 05/03/2013 14:29, Neal Becker wrote:
> Daniele Nicolodi wrote:
>
>> On 05/03/2013 13:13, Neal Becker wrote:
>>> Daniele Nicolodi wrote:
>>>
>>>> Hello,
>>>>
>>>> does anyone know of, or have written and is willing to share, some code to
>>>> compute the Allan variance, modified Allan variance and friends, ideally
>>>> both from frequency and from phase data?
>>>>
>>>> I'm writing my own code, it is not that difficult, but I would prefer to
>>>> spend the time required for testing and validation doing something more fun.
>>>>
>>>> If there is no available code implementing the Allan variance, would it be
>>>> interesting to include it in SciPy?
>>>
>>> Do you mean, given phase noise power spectral density?
>>
>> I mean given phase or frequency time series. But also code that
>> computes the Allan variance from power spectral density may come in handy for
>> me in the future.
>
> I don't have that - but do have code to generate samples of blocks of phase
> noise from a given spectral mask.

Hello,

sorry for the extremely long delay. I need to do some phase noise generation, so, to avoid reinventing the wheel, I would like to have a look at your code. Can you share it, please?

Thanks. Best,
Daniele

From ralf.gommers at gmail.com Tue Feb 4 02:16:39 2014
From: ralf.gommers at gmail.com (Ralf Gommers)
Date: Tue, 4 Feb 2014 08:16:39 +0100
Subject: [SciPy-User] ANN: Scipy 0.13.3 release
Message-ID:

Hi,

I'm happy to announce the availability of the scipy 0.13.3 release. This is a bugfix-only release; it contains fixes for regressions in ndimage and weave. Source tarballs can be found at https://sourceforge.net/projects/scipy/files/scipy/0.13.3/ and on PyPI.
Release notes are copied below; binaries will follow later (the regular build machine is not available for the next two weeks).

Cheers,
Ralf

==========================
SciPy 0.13.3 Release Notes
==========================

SciPy 0.13.3 is a bug-fix release with no new features compared to 0.13.2. Both the weave and the ndimage.label bugs were severe regressions in 0.13.0, hence this release.

Issues fixed
------------

- 3148: fix a memory leak in ``ndimage.label``.
- 3216: fix weave issue with too long file names for MSVC.

Other changes
-------------

- Update Sphinx theme used for html docs so ``>>>`` in examples can be toggled.

Checksums
=========

0547c1f8e8afad4009cc9b5ef17a2d4d  release/installers/scipy-0.13.3.tar.gz
20ff3a867cc5925ef1d654aed2ff7e88  release/installers/scipy-0.13.3.zip

From ndbecker2 at gmail.com Tue Feb 4 07:56:53 2014
From: ndbecker2 at gmail.com (Neal Becker)
Date: Tue, 04 Feb 2014 07:56:53 -0500
Subject: [SciPy-User] Allan Variance
References: <5134A07F.4050907@grinta.net> <5135E441.80504@grinta.net> <52F02148.506@grinta.net>
Message-ID:

Daniele Nicolodi wrote:

> On 05/03/2013 14:29, Neal Becker wrote:
>> Daniele Nicolodi wrote:
>>
>>> On 05/03/2013 13:13, Neal Becker wrote:
>>>> Daniele Nicolodi wrote:
>>>>
>>>>> Hello,
>>>>>
>>>>> does anyone know of, or have written and is willing to share, some code to
>>>>> compute the Allan variance, modified Allan variance and friends, ideally
>>>>> both from frequency and from phase data?
>>>>>
>>>>> I'm writing my own code, it is not that difficult, but I would prefer to
>>>>> spend the time required for testing and validation doing something more fun.
>>>>>
>>>>> If there is no available code implementing the Allan variance, would it be
>>>>> interesting to include it in SciPy?
>>>>
>>>> Do you mean, given phase noise power spectral density?
>>>
>>> I mean given phase or frequency time series.
But also code that
>>> computes the Allan variance from power spectral density may come in handy for
>>> me in the future.
>>
>> I don't have that - but do have code to generate samples of blocks of phase
>> noise from a given spectral mask.
>
> Hello,
>
> sorry for the extremely long delay. I need to do some phase noise
> generation, so, to avoid reinventing the wheel, I would like to
> have a look at your code. Can you share it, please?
>
> Thanks. Best,
> Daniele

Attached. It is not complete by itself, but I have included almost everything needed. I use my own boost::python-based wrapper for FFTW. You can substitute something else - a real inverse FFT is used. pyFFTW could probably be substituted.

-------------- next part --------------
A non-text attachment was scrubbed...
Name: phase_noise_block.py
Type: text/x-python
Size: 3227 bytes
Desc: not available
URL:
-------------- next part --------------
A non-text attachment was scrubbed...
Name: normal_c.py
Type: text/x-python
Size: 420 bytes
Desc: not available
URL:

From daniele at grinta.net Tue Feb 4 09:13:57 2014
From: daniele at grinta.net (Daniele Nicolodi)
Date: Tue, 04 Feb 2014 15:13:57 +0100
Subject: [SciPy-User] Allan Variance
In-Reply-To:
References: <5134A07F.4050907@grinta.net> <5135E441.80504@grinta.net> <52F02148.506@grinta.net>
Message-ID: <52F0F5A5.3000203@grinta.net>

On 04/02/2014 13:56, Neal Becker wrote:
>>> I don't have that - but do have code to generate samples of blocks of phase
>>> noise from a given spectral mask.
>>
>> Hello,
>>
>> sorry for the extremely long delay. I need to do some phase noise
>> generation, so, to avoid reinventing the wheel, I would like to
>> have a look at your code. Can you share it, please?
>>
>> Thanks. Best,
>> Daniele
>
> Attached. It is not complete by itself, but I have included almost everything
> needed. I use my own boost::python-based wrapper for FFTW. You can substitute
> something else - a real inverse FFT is used. pyFFTW could probably be
> substituted.
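Since the estimator itself never appears in this thread, here is a minimal sketch of the overlapping Allan variance computed directly from evenly sampled phase data - the textbook second-difference formula, not the code attached above; the function name and defaults are invented:

```python
import numpy as np

def allan_variance(x, tau0=1.0):
    """Overlapping Allan variance of phase data x sampled at interval tau0,
    evaluated at octave-spaced averaging times tau = m * tau0."""
    x = np.asarray(x, dtype=float)
    N = len(x)
    taus, avars = [], []
    m = 1
    while 3 * m < N:  # need at least one second difference at stride m
        tau = m * tau0
        # Second differences x[i + 2m] - 2*x[i + m] + x[i], all overlaps.
        d2 = x[2 * m:] - 2.0 * x[m:N - m] + x[:N - 2 * m]
        avars.append(np.sum(d2**2) / (2.0 * tau**2 * (N - 2 * m)))
        taus.append(tau)
        m *= 2
    return np.array(taus), np.array(avars)
```

A convenient deterministic check: a pure linear frequency drift D gives phase x(t) = D*t**2/2, for which the Allan variance is exactly D**2 * tau**2 / 2 at every tau.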
Thanks Neal! I'll have a look at it and I'll let you know if I have any problems.

Cheers,
Daniele

From vanforeest at gmail.com Tue Feb 4 14:33:26 2014
From: vanforeest at gmail.com (nicky van foreest)
Date: Tue, 4 Feb 2014 20:33:26 +0100
Subject: [SciPy-User] "inverting" an array
Message-ID:

Hi,

I am wondering whether a shortcut exists in numpy/scipy for the following problem. The values in an array represent the number of customers that arrive in a certain time slot, e.g.,

a = [0,4,7,3,1,5, 0,0,0,]

means that four customers arrive in time slot 1, seven in time slot 2, and so on. Now I want to "invert" this array to compute the arrival time of the i-th customer. Thus, customer 2 arrives in time slot 1, customer 6 in time slot 2, and so on. For this problem I wrote the following function:

a = [0,4,7,3,1,5, 0,0,0,]
A = np.cumsum(a)

def invert(A):
    Ainv = np.empty(A[-1])
    aprev = 0
    for i, a in enumerate(A):
        Ainv[aprev:a] = i
        aprev = a
    return Ainv

Ainv = invert(A)

print a
print A
print Ainv

The output is

[0, 4, 7, 3, 1, 5, 0, 0, 0]
[ 0  4 11 14 15 20 20 20 20]
[ 1.  1.  1.  1.  2.  2.  2.  2.  2.  2.  2.  3.  3.  3.  4.  5.  5.  5.  5.  5.]

Does anybody know whether this code can be made faster, or whether a numpy/scipy function exists that establishes this in one go?

thanks

Nicky

From warren.weckesser at gmail.com Tue Feb 4 14:52:44 2014
From: warren.weckesser at gmail.com (Warren Weckesser)
Date: Tue, 4 Feb 2014 14:52:44 -0500
Subject: Re: [SciPy-User] "inverting" an array
In-Reply-To:
References:
Message-ID:

On Tue, Feb 4, 2014 at 2:33 PM, nicky van foreest wrote:

> Hi,
>
> I am wondering whether a shortcut exists in numpy/scipy for the following
> problem. The values in an array represent the number of customers that
> arrive in a certain time slot, e.g.,
>
> a = [0,4,7,3,1,5, 0,0,0,]
>
> means that four customers arrive in time slot 1, seven in time slot 2,
> and so on. Now I want to "invert" this array to compute the arrival time
> of the i-th customer. [...]

You can use `np.repeat`:

In [10]: a
Out[10]: [0, 4, 7, 3, 1, 5, 0, 0, 0]

In [11]: np.repeat(np.arange(len(a)), a)
Out[11]: array([1, 1, 1, 1, 2, 2, 2, 2, 2, 2, 2, 3, 3, 3, 4, 5, 5, 5, 5, 5])

Warren

From ehermes at chem.wisc.edu Tue Feb 4 14:54:44 2014
From: ehermes at chem.wisc.edu (Eric Hermes)
Date: Tue, 04 Feb 2014 13:54:44 -0600
Subject: Re: [SciPy-User] "inverting" an array
Message-ID: <52F14584.6090704@chem.wisc.edu>

On 2/4/2014 1:52 PM, Warren Weckesser wrote:
> [...]

I came up with a version that only uses plain Python:

def invert(a):
    ainv = []
    for i, n in enumerate(a):
        ainv += [i]*n
    return ainv

Eric

--
Eric Hermes
J.R. Schmidt Group
Chemistry Department
University of Wisconsin - Madison

From vanforeest at gmail.com Tue Feb 4 15:21:28 2014
From: vanforeest at gmail.com (nicky van foreest)
Date: Tue, 4 Feb 2014 21:21:28 +0100
Subject: Re: [SciPy-User] "inverting" an array
In-Reply-To: <52F14584.6090704@chem.wisc.edu>
References: <52F14584.6090704@chem.wisc.edu>
Message-ID:

Hi,

Thanks. What an elegant solution!
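For the record, there is a third vectorized spelling (a sketch, not posted in the thread): since A = np.cumsum(a) is non-decreasing, the slot of customer k is the number of cumulative counts that are less than or equal to k, which np.searchsorted computes in one call:

```python
import numpy as np

a = [0, 4, 7, 3, 1, 5, 0, 0, 0]
A = np.cumsum(a)
# Slot of customer k (0-based) = how many cumulative sums are <= k.
ainv = np.searchsorted(A, np.arange(A[-1]), side='right')
```

This produces the same array as np.repeat(np.arange(len(a)), a), and is convenient when the cumulative sums are what you already have.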
bye Nicky On 4 February 2014 20:54, Eric Hermes wrote: > > On 2/4/2014 1:52 PM, Warren Weckesser wrote: > > > On Tue, Feb 4, 2014 at 2:33 PM, nicky van foreest wrote: > >> Hi, >> >> I am wondering whether a shortcut exists in numpy/scipy for the >> following problem. The values in an array represent the number of >> customers that arrive in a certain time slot, e.g., >> >> a = [0,4,7,3,1,5, 0,0,0,] >> >> means that in time slot 1 4 customers arrive, in time slot 2 seven >> arrive, and so on. Now I want to "invert" this array to compute the arrival >> time of the i-th customer. Thus, customer 2 arrives in time slot 1, >> customer 6 in time slot 2, and so on. For this problem I wrote the >> following function: >> >> a = [0,4,7,3,1,5, 0,0,0,] >> A = np.cumsum(a) >> >> def invert(A): >> Ainv = np.empty(A[-1]) >> aprev=0 >> for i, a in enumerate(A): >> Ainv[aprev:a] = i >> aprev = a >> return Ainv >> >> >> Ainv= invert(A) >> >> print a >> print A >> print Ainv >> >> The output is >> >> [0, 4, 7, 3, 1, 5, 0, 0, 0] >> [ 0 4 11 14 15 20 20 20 20] >> [ 1. 1. 1. 1. 2. 2. 2. 2. 2. 2. 2. 3. 3. 3. 4. 5. 5. 5. >> 5. 5.] >> >> Does anybody know whether this code can be made faster, or whether a >> numpy/scipy function exists that establishes this in one go? 
>> >> thanks >> >> Nicky >> >> > You can use `np.repeat`: > > In [10]: a > Out[10]: [0, 4, 7, 3, 1, 5, 0, 0, 0] > > In [11]: np.repeat(np.arange(len(a)), a) > Out[11]: array([1, 1, 1, 1, 2, 2, 2, 2, 2, 2, 2, 3, 3, 3, 4, 5, 5, 5, 5, > 5]) > > > Warren > > I came up with a version that only uses python intrinsics: > > def invert(a): > ainv = [] > for i, n in enumerate(a): > ainv += [i]*n > return ainv > > Eric > > > > >> >> >> >> _______________________________________________ >> SciPy-User mailing list >> SciPy-User at scipy.org >> http://mail.scipy.org/mailman/listinfo/scipy-user >> >> > > > _______________________________________________ > SciPy-User mailing listSciPy-User at scipy.orghttp://mail.scipy.org/mailman/listinfo/scipy-user > > > -- > Eric Hermes > J.R. Schmidt Group > Chemistry Department > University of Wisconsin - Madison > > > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From jschwabedal at gmail.com Tue Feb 4 15:59:06 2014 From: jschwabedal at gmail.com (Justus Schwabedal) Date: Tue, 4 Feb 2014 15:59:06 -0500 Subject: [SciPy-User] "inverting" an array Message-ID: Hi everyone, I ran a test of which solution is faster. For random integers, the repeat solution wins, but for sparse vectors "a" Eric's solution is actually faster! Now, that's a suprise... J 2014-02-04 : > Send SciPy-User mailing list submissions to > scipy-user at scipy.org > > To subscribe or unsubscribe via the World Wide Web, visit > http://mail.scipy.org/mailman/listinfo/scipy-user > or, via email, send a message with subject or body 'help' to > scipy-user-request at scipy.org > > You can reach the person managing the list at > scipy-user-owner at scipy.org > > When replying, please edit your Subject line so it is more specific > than "Re: Contents of SciPy-User digest..." 
> > > Today's Topics: > > 1. "inverting" an array (nicky van foreest) > 2. Re: "inverting" an array (Warren Weckesser) > 3. Re: "inverting" an array (Eric Hermes) > 4. Re: "inverting" an array (nicky van foreest) > > > ---------------------------------------------------------------------- > > Message: 1 > Date: Tue, 4 Feb 2014 20:33:26 +0100 > From: nicky van foreest > Subject: [SciPy-User] "inverting" an array > To: SciPy Users List > Message-ID: > h9M114ea2WcmuY-DZHmgjz-D22GT+6_rg at mail.gmail.com> > Content-Type: text/plain; charset="iso-8859-1" > > Hi, > > I am wondering whether a shortcut exists in numpy/scipy for the following > problem. The values in an array represent the number of customers that > arrive in a certain time slot, e.g., > > a = [0,4,7,3,1,5, 0,0,0,] > > means that in time slot 1 4 customers arrive, in time slot 2 seven arrive, > and so on. Now I want to "invert" this array to compute the arrival time of > the i-th customer. Thus, customer 2 arrives in time slot 1, customer 6 in > time slot 2, and so on. For this problem I wrote the following function: > > a = [0,4,7,3,1,5, 0,0,0,] > A = np.cumsum(a) > > def invert(A): > Ainv = np.empty(A[-1]) > aprev=0 > for i, a in enumerate(A): > Ainv[aprev:a] = i > aprev = a > return Ainv > > > Ainv= invert(A) > > print a > print A > print Ainv > > The output is > > [0, 4, 7, 3, 1, 5, 0, 0, 0] > [ 0 4 11 14 15 20 20 20 20] > [ 1. 1. 1. 1. 2. 2. 2. 2. 2. 2. 2. 3. 3. 3. 4. 5. 5. 5. > 5. 5.] > > Does anybody know whether this code can be made faster, or whether a > numpy/scipy function exists that establishes this in one go? > > thanks > > Nicky > -------------- next part -------------- > An HTML attachment was scrubbed... 
> URL: > http://mail.scipy.org/pipermail/scipy-user/attachments/20140204/f6ea1383/attachment-0001.html > > ------------------------------ > > Message: 2 > Date: Tue, 4 Feb 2014 14:52:44 -0500 > From: Warren Weckesser > Subject: Re: [SciPy-User] "inverting" an array > To: SciPy Users List > Message-ID: > < > CAGzF1uctEZoUCoYQwzveVBhEzGBEeOsvwix4yMO8DDY68D0tGg at mail.gmail.com> > Content-Type: text/plain; charset="iso-8859-1" > > On Tue, Feb 4, 2014 at 2:33 PM, nicky van foreest >wrote: > > > Hi, > > > > I am wondering whether a shortcut exists in numpy/scipy for the following > > problem. The values in an array represent the number of customers that > > arrive in a certain time slot, e.g., > > > > a = [0,4,7,3,1,5, 0,0,0,] > > > > means that in time slot 1 4 customers arrive, in time slot 2 seven > arrive, > > and so on. Now I want to "invert" this array to compute the arrival time > of > > the i-th customer. Thus, customer 2 arrives in time slot 1, customer 6 in > > time slot 2, and so on. For this problem I wrote the following function: > > > > a = [0,4,7,3,1,5, 0,0,0,] > > A = np.cumsum(a) > > > > def invert(A): > > Ainv = np.empty(A[-1]) > > aprev=0 > > for i, a in enumerate(A): > > Ainv[aprev:a] = i > > aprev = a > > return Ainv > > > > > > Ainv= invert(A) > > > > print a > > print A > > print Ainv > > > > The output is > > > > [0, 4, 7, 3, 1, 5, 0, 0, 0] > > [ 0 4 11 14 15 20 20 20 20] > > [ 1. 1. 1. 1. 2. 2. 2. 2. 2. 2. 2. 3. 3. 3. 4. 5. 5. 5. > > 5. 5.] > > > > Does anybody know whether this code can be made faster, or whether a > > numpy/scipy function exists that establishes this in one go? 
> > > > thanks > > > > Nicky > > > > > You can use `np.repeat`: > > In [10]: a > Out[10]: [0, 4, 7, 3, 1, 5, 0, 0, 0] > > In [11]: np.repeat(np.arange(len(a)), a) > Out[11]: array([1, 1, 1, 1, 2, 2, 2, 2, 2, 2, 2, 3, 3, 3, 4, 5, 5, 5, 5, > 5]) > > > Warren > > > > > > > > > > > > _______________________________________________ > > SciPy-User mailing list > > SciPy-User at scipy.org > > http://mail.scipy.org/mailman/listinfo/scipy-user > > > > > -------------- next part -------------- > An HTML attachment was scrubbed... > URL: > http://mail.scipy.org/pipermail/scipy-user/attachments/20140204/1b74e682/attachment-0001.html > > ------------------------------ > > Message: 3 > Date: Tue, 04 Feb 2014 13:54:44 -0600 > From: Eric Hermes > Subject: Re: [SciPy-User] "inverting" an array > To: scipy-user at scipy.org > Message-ID: <52F14584.6090704 at chem.wisc.edu> > Content-Type: text/plain; charset="iso-8859-1" > > > On 2/4/2014 1:52 PM, Warren Weckesser wrote: > > > > On Tue, Feb 4, 2014 at 2:33 PM, nicky van foreest > > > wrote: > > > > Hi, > > > > I am wondering whether a shortcut exists in numpy/scipy for the > > following problem. The values in an array represent the number of > > customers that arrive in a certain time slot, e.g., > > > > a = [0,4,7,3,1,5, 0,0,0,] > > > > means that in time slot 1 4 customers arrive, in time slot 2 seven > > arrive, and so on. Now I want to "invert" this array to compute > > the arrival time of the i-th customer. Thus, customer 2 arrives in > > time slot 1, customer 6 in time slot 2, and so on. For this > > problem I wrote the following function: > > > > a = [0,4,7,3,1,5, 0,0,0,] > > A = np.cumsum(a) > > > > def invert(A): > > Ainv = np.empty(A[-1]) > > aprev=0 > > for i, a in enumerate(A): > > Ainv[aprev:a] = i > > aprev = a > > return Ainv > > > > > > Ainv= invert(A) > > > > print a > > print A > > print Ainv > > > > The output is > > > > [0, 4, 7, 3, 1, 5, 0, 0, 0] > > [ 0 4 11 14 15 20 20 20 20] > > [ 1. 1. 1. 1. 2. 2. 2. 2. 2. 
2. 2. 3. 3. 3. 4. 5. > > 5. 5. > > 5. 5.] > > > > Does anybody know whether this code can be made faster, or whether > > a numpy/scipy function exists that establishes this in one go? > > > > thanks > > > > Nicky > > > > > > You can use `np.repeat`: > > In [10]: a > > Out[10]: [0, 4, 7, 3, 1, 5, 0, 0, 0] > > > > In [11]: np.repeat(np.arange(len(a)), a) > > Out[11]: array([1, 1, 1, 1, 2, 2, 2, 2, 2, 2, 2, 3, 3, 3, 4, 5, 5, 5, > > 5, 5]) > > > > > > Warren > > > I came up with a version that only uses python intrinsics: > > def invert(a): > ainv = [] > for i, n in enumerate(a): > ainv += [i]*n > return ainv > > Eric > > > > > > > > > > _______________________________________________ > > SciPy-User mailing list > > SciPy-User at scipy.org > > http://mail.scipy.org/mailman/listinfo/scipy-user > > > > > > > > > > _______________________________________________ > > SciPy-User mailing list > > SciPy-User at scipy.org > > http://mail.scipy.org/mailman/listinfo/scipy-user > > -- > Eric Hermes > J.R. Schmidt Group > Chemistry Department > University of Wisconsin - Madison > > -------------- next part -------------- > An HTML attachment was scrubbed... > URL: > http://mail.scipy.org/pipermail/scipy-user/attachments/20140204/eba58b20/attachment-0001.html > > ------------------------------ > > Message: 4 > Date: Tue, 4 Feb 2014 21:21:28 +0100 > From: nicky van foreest > Subject: Re: [SciPy-User] "inverting" an array > To: SciPy Users List > Message-ID: > NA at mail.gmail.com> > Content-Type: text/plain; charset="iso-8859-1" > > HI, > > Thanks. What an elegant solutions! > > bye > > Nicky > > > > On 4 February 2014 20:54, Eric Hermes wrote: > > > > > On 2/4/2014 1:52 PM, Warren Weckesser wrote: > > > > > > On Tue, Feb 4, 2014 at 2:33 PM, nicky van foreest >wrote: > > > >> Hi, > >> > >> I am wondering whether a shortcut exists in numpy/scipy for the > >> following problem. 
The values in an array represent the number of > >> customers that arrive in a certain time slot, e.g., > >> > >> a = [0,4,7,3,1,5, 0,0,0,] > >> > >> means that in time slot 1 4 customers arrive, in time slot 2 seven > >> arrive, and so on. Now I want to "invert" this array to compute the > arrival > >> time of the i-th customer. Thus, customer 2 arrives in time slot 1, > >> customer 6 in time slot 2, and so on. For this problem I wrote the > >> following function: > >> > >> a = [0,4,7,3,1,5, 0,0,0,] > >> A = np.cumsum(a) > >> > >> def invert(A): > >> Ainv = np.empty(A[-1]) > >> aprev=0 > >> for i, a in enumerate(A): > >> Ainv[aprev:a] = i > >> aprev = a > >> return Ainv > >> > >> > >> Ainv= invert(A) > >> > >> print a > >> print A > >> print Ainv > >> > >> The output is > >> > >> [0, 4, 7, 3, 1, 5, 0, 0, 0] > >> [ 0 4 11 14 15 20 20 20 20] > >> [ 1. 1. 1. 1. 2. 2. 2. 2. 2. 2. 2. 3. 3. 3. 4. 5. 5. 5. > >> 5. 5.] > >> > >> Does anybody know whether this code can be made faster, or whether a > >> numpy/scipy function exists that establishes this in one go? > >> > >> thanks > >> > >> Nicky > >> > >> > > You can use `np.repeat`: > > > > In [10]: a > > Out[10]: [0, 4, 7, 3, 1, 5, 0, 0, 0] > > > > In [11]: np.repeat(np.arange(len(a)), a) > > Out[11]: array([1, 1, 1, 1, 2, 2, 2, 2, 2, 2, 2, 3, 3, 3, 4, 5, 5, 5, 5, > > 5]) > > > > > > Warren > > > > I came up with a version that only uses python intrinsics: > > > > def invert(a): > > ainv = [] > > for i, n in enumerate(a): > > ainv += [i]*n > > return ainv > > > > Eric > > > > > > > > > >> > >> > >> > >> _______________________________________________ > >> SciPy-User mailing list > >> SciPy-User at scipy.org > >> http://mail.scipy.org/mailman/listinfo/scipy-user > >> > >> > > > > > > _______________________________________________ > > SciPy-User mailing listSciPy-User at scipy.orghttp:// > mail.scipy.org/mailman/listinfo/scipy-user > > > > > > -- > > Eric Hermes > > J.R. 
Schmidt Group > > Chemistry Department > > University of Wisconsin - Madison > > > > > > _______________________________________________ > > SciPy-User mailing list > > SciPy-User at scipy.org > > http://mail.scipy.org/mailman/listinfo/scipy-user > > > > > -------------- next part -------------- > An HTML attachment was scrubbed... > URL: > http://mail.scipy.org/pipermail/scipy-user/attachments/20140204/4e90e967/attachment.html > > ------------------------------ > > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user > > > End of SciPy-User Digest, Vol 126, Issue 5 > ****************************************** > -- Justus Schwabedal skype: justus1802 Work: +1 617 353 4659 Handy (US): +1 617 449 8478 Handy (D): +49 177 939 5281 email: jschwabedal at googlemail.com 657 John Wesley Dobbs Ave NE 30312 Atlanta, GA USA Steinkreuzstr. 23 53757 Sankt Augustin Germany -------------- next part -------------- An HTML attachment was scrubbed... URL: From jonathan.guyer at nist.gov Tue Feb 4 16:14:54 2014 From: jonathan.guyer at nist.gov (Guyer, Jonathan E. Dr.) Date: Tue, 4 Feb 2014 21:14:54 +0000 Subject: [SciPy-User] "inverting" an array In-Reply-To: References: Message-ID: <8B66A822-3093-4D51-B957-8617CF5EC537@nist.gov> > On Feb 4, 2014, at 2:33 PM, "nicky van foreest" wrote: > > Hi, > > I am wondering whether a shortcut exists in numpy/scipy for the following problem. The values in an array represent the number of customers that arrive in a certain time slot, e.g., > > a = [0,4,7,3,1,5, 0,0,0,] > > means that in time slot 1 4 customers arrive, in time slot 2 seven arrive, and so on. Now I want to "invert" this array to compute the arrival time of the i-th customer. Thus, customer 2 arrives in time slot 1, customer 6 in time slot 2, and so on. 
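The `np.repeat` answer given earlier in the thread can be verified directly; a `cumsum` + `searchsorted` formulation gives the same result. A minimal sketch, assuming only NumPy:

```python
import numpy as np

# Counts per time slot, as in the thread's example.
a = [0, 4, 7, 3, 1, 5, 0, 0, 0]

# np.repeat emits slot index i exactly a[i] times, producing the
# arrival slot of each customer in one vectorized call.
ainv = np.repeat(np.arange(len(a)), a)

# Equivalent formulation: the slot of customer k is the first index
# whose cumulative count exceeds k.
A = np.cumsum(a)
ainv2 = np.searchsorted(A, np.arange(A[-1]), side='right')

assert (ainv == ainv2).all()
```

Both variants run entirely inside NumPy in time proportional to the total number of customers, avoiding the Python-level loop of the original `invert()`.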
For this problem I wrote the following function: > > a = [0,4,7,3,1,5, 0,0,0,] > A = np.cumsum(a) > > def invert(A): > Ainv = np.empty(A[-1]) > aprev=0 > for i, a in enumerate(A): > Ainv[aprev:a] = i > aprev = a > return Ainv > > > Ainv= invert(A) > > print a > print A > print Ainv > > The output is > > [0, 4, 7, 3, 1, 5, 0, 0, 0] > [ 0 4 11 14 15 20 20 20 20] > [ 1. 1. 1. 1. 2. 2. 2. 2. 2. 2. 2. 3. 3. 3. 4. 5. 5. 5. > 5. 5.] > > Does anybody know whether this code can be made faster, or whether a numpy/scipy function exists that establishes this in one go? > > thanks > > Nicky > > > > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user From d.l.goldsmith at gmail.com Tue Feb 4 18:31:06 2014 From: d.l.goldsmith at gmail.com (David Goldsmith) Date: Tue, 4 Feb 2014 15:31:06 -0800 Subject: [SciPy-User] OT: "Standard" Text in Computational Biochemistry? Message-ID: Is there such a beast? I'm particularly interested in coverage of methods for modeling enzyme specificity, but want something that covers both the quantum biochemical models used and any specialized computational aspects. Thanks in advance for any references. OlyDLG -------------- next part -------------- An HTML attachment was scrubbed... URL: From bryanv at continuum.io Tue Feb 4 20:47:36 2014 From: bryanv at continuum.io (Bryan Van de Ven) Date: Tue, 4 Feb 2014 19:47:36 -0600 Subject: [SciPy-User] ANN: Bokeh 0.4 Release Message-ID: I am pleased to announce the release of Bokeh version 0.4! Bokeh is a Python library for visualizing large and realtime datasets on the web. Its goal is to provide elegant, concise construction of novel graphics in the style of Protovis/D3, while delivering high-performance interactivity to thin clients. Bokeh includes its own Javascript library (BokehJS) that implements a reactive scenegraph representation of the plot, and renders efficiently to HTML5 Canvas. 
Bokeh works well with IPython Notebook, but can generate standalone graphics that embed into regular HTML.

Check out the full documentation and interactive gallery at http://bokeh.pydata.org

If you are using Anaconda, you can install with conda:

    conda install bokeh

Alternatively, you can install with pip:

    pip install bokeh

Some of the new features in this release include:

* Preliminary work on Matplotlib support: convert MPL figures to Bokeh plots
* Free public beta of Bokeh plot hosting at http://bokehplots.com
* Tool improvements:
  - "always on" pan tool and wheel zoom tool (with shift key)
  - box zoom tool
  - viewport reset tool
* Enhanced datetime axis, with better performance and nicer ticking
* Expanded testing, including TravisCI integrations and static image output using PhantomJS
* RGBA and color mapped image plots now available from Python
* Python 3 supported
* Vastly improved documentation for glyphs, with inline examples and JSFiddle integration

Also, we've fixed lots of little bugs - see the CHANGELOG for full details.

Bokeh will be having a free "Office Hours" later this week! Join us this Thursday at 2pm CST on EngineHere at https://www.enginehere.com/stream/437/bokeh-04-release/ for a live informational session about the latest release. We'll be covering all the newest features and updates through a combination of live lecture, Q&A, and pair programming. It's all free, just sign up to the EngineHere learning platform.

BokehJS is also available by CDN for use in standalone javascript applications:

    http://cdn.pydata.org/bokeh-0.4.js
    http://cdn.pydata.org/bokeh-0.4.css
    http://cdn.pydata.org/bokeh-0.4.min.js
    http://cdn.pydata.org/bokeh-0.4.min.css

Some examples of BokehJS use can be found on the Bokeh JSFiddle page: http://jsfiddle.net/user/bokeh/fiddles/

The release of Bokeh 0.5 is planned for late March.
Some notable features we plan to include are: * Abstract Rendering for semantically meaningful downsampling of large datasets * Better grid-based layout system, using Cassowary.js * Selection tools, tooltips, etc. Issues, enhancement requests, and pull requests can be made on the Bokeh Github page: https://github.com/continuumio/bokeh Questions can be directed to the Bokeh mailing list: bokeh at continuum.io Special thanks to recent contributors: Janek Klawe, Samantha Hughes, Rebecca Paz, and Benedikt Sauer. Regards, Bryan Van de Ven Continuum Analytics http://continuum.io From mocalvao at gmail.com Thu Feb 6 06:09:24 2014 From: mocalvao at gmail.com (Mauricio Calvao) Date: Thu, 6 Feb 2014 09:09:24 -0200 Subject: [SciPy-User] scipy.interpolate unsuccessful after updating in Debian Message-ID: Hi there I have just updated my python-scipy package under Debian sid. I used to have the version 0.10.1 and now it is 0.12.0. When I now try to import scipy.interpolate the following error is issued: In [8]: import scipy.interpolate --------------------------------------------------------------------------- ImportError Traceback (most recent call last) in () ----> 1 import scipy.interpolate /usr/lib/python2.7/dist-packages/scipy/interpolate/__init__.py in () 148 from __future__ import division, print_function, absolute_import 149 --> 150 from .interpolate import * 151 from .fitpack import * 152 /usr/lib/python2.7/dist-packages/scipy/interpolate/interpolate.py in () 10 dot, poly1d, asarray, intp 11 import numpy as np ---> 12 import scipy.special as spec 13 import math 14 /usr/lib/python2.7/dist-packages/scipy/special/__init__.py in () 530 from ._ufuncs_cxx import * 531 --> 532 from .basic import * 533 from . import specfun 534 from . 
import orthogonal /usr/lib/python2.7/dist-packages/scipy/special/basic.py in () 6 7 import numpy as np ----> 8 from scipy.lib.six.moves import xrange 9 from numpy import pi, asarray, floor, isscalar, iscomplex, real, imag, sqrt, \ 10 where, mgrid, cos, sin, exp, place, seterr, issubdtype, extract, \ ImportError: No module named moves Does anyone have any ideas about how to fix this? I am in short time to finish a task and I seem to need the capabilities of this new scipy version Thanks in advance! -- ####################################### Prof. Mauricio Ortiz Calvao Federal University of Rio de Janeiro Institute of Physics, P O Box 68528 CEP 21941-972 Rio de Janeiro, RJ Brazil Email: orca at if.ufrj.br Phone: (55)(21)25627483 Homepage: http://www.if.ufrj.br/~orca ####################################### . -------------- next part -------------- An HTML attachment was scrubbed... URL: From opossumnano at gmail.com Thu Feb 6 07:52:23 2014 From: opossumnano at gmail.com (Tiziano Zito) Date: Thu, 6 Feb 2014 13:52:23 +0100 Subject: [SciPy-User] [ANN] Summer School "Advanced Scientific Programming in Python" in Split, Croatia Message-ID: <20140206125223.GE5464@bio230.biologie.hu-berlin.de> Advanced Scientific Programming in Python ========================================= a Summer School by the G-Node and the Faculty of Electrical Engineering, Mechanical Engineering and Naval Architecture (FESB), University of Split Scientists spend more and more time writing, maintaining, and debugging software. While techniques for doing this efficiently have evolved, only few scientists have been trained to use them. As a result, instead of doing their research, they spend far too much time writing deficient code and reinventing the wheel. In this course we will present a selection of advanced programming techniques, incorporating theoretical lectures and practical exercises tailored to the needs of a programming scientist. 
New skills will be tested in a real programming project: we will team up to develop an entertaining scientific computer game.

We use the Python programming language for the entire course. Python works as a simple programming language for beginners, but more importantly, it also works great in scientific simulations and data analysis. We show how clean language design, ease of extensibility, and the great wealth of open source libraries for scientific computing and data visualization are driving Python to become a standard tool for the programming scientist.

This school is targeted at Master or PhD students and Post-docs from all areas of science. Competence in Python or in another language such as Java, C/C++, MATLAB, or Mathematica is absolutely required. Basic knowledge of Python is assumed. Participants without any prior experience with Python should work through the proposed introductory materials before the course.

Date and Location
=================
September 8-13, 2014. Split, Croatia

Preliminary Program
===================

Day 0 (Mon Sept 8) - Best Programming Practices
* Best Practices for Scientific Computing
* Version control with git and how to contribute to Open Source with github
* Object-oriented programming & design patterns

Day 1 (Tue Sept 9) - Software Carpentry
* Test-driven development, unit testing & quality assurance
* Debugging, profiling and benchmarking techniques
* Advanced Python I: idioms, useful built-in data structures, generators

Day 2 (Wed Sept 10) - Scientific Tools for Python
* Advanced NumPy
* The Quest for Speed (intro): Interfacing to C with Cython
* Programming in teams

Day 3 (Thu Sept 11) - The Quest for Speed
* Writing parallel applications in Python
* Python 3: why should I care
* Programming project

Day 4 (Fri Sept 12) - Efficient Memory Management
* When parallelization does not help: the starving CPUs problem
* Advanced Python II: decorators and context managers
* Programming project

Day 5 (Sat Sept 13) - Practical Software Development
* Programming project
* The Pelita Tournament

Every evening we will have the tutors' consultation hour: Tutors will answer your questions and give suggestions for your own projects.

Applications
============
You can apply on-line at http://python.g-node.org

Applications must be submitted before 23:59 UTC, May 1, 2014. Notifications of acceptance will be sent by June 1, 2014. No fee is charged but participants should take care of travel, living, and accommodation expenses. Candidates will be selected on the basis of their profile. Places are limited: acceptance rate is usually around 20%.

Prerequisites: You are supposed to know the basics of Python to participate in the lectures. You are encouraged to go through the introductory material available on the website.

Faculty
=======
* Francesc Alted, Continuum Analytics Inc., USA
* Pietro Berkes, Enthought Inc., UK
* Kathryn D. Huff, Department of Nuclear Engineering, University of California - Berkeley, USA
* Zbigniew Jędrzejewski-Szmek, Krasnow Institute, George Mason University, USA
* Eilif Muller, Blue Brain Project, École Polytechnique Fédérale de Lausanne, Switzerland
* Rike-Benjamin Schuppner, Technologit GbR, Germany
* Nelle Varoquaux, Centre for Computational Biology Mines ParisTech, Institut Curie, U900 INSERM, Paris, France
* Stéfan van der Walt, Applied Mathematics, Stellenbosch University, South Africa
* Niko Wilbert, TNG Technology Consulting GmbH, Germany
* Tiziano Zito, Institute for Theoretical Biology, Humboldt-Universität zu Berlin, Germany

Organized by Tiziano Zito (head) and Zbigniew Jędrzejewski-Szmek for the German Neuroinformatics Node of the INCF (Germany), Lana Periša for the Numerical and applied mathematics group, FESB, University of Split (Croatia), Ivana Kajić from the Bernstein Center for Computational Neuroscience Berlin (Germany), Ivana Balašević from the Technical University Berlin (Germany), and Filip Petkovski from IN2 Ltd. Skopje (Macedonia).
Website: http://python.g-node.org Contact: python-info at g-node.org From gb.gabrielebrambilla at gmail.com Thu Feb 6 10:43:34 2014 From: gb.gabrielebrambilla at gmail.com (Gabriele Brambilla) Date: Thu, 6 Feb 2014 10:43:34 -0500 Subject: [SciPy-User] fit with 3 parameters ok, fit with 4 is going wrong! In-Reply-To: References: Message-ID: Ok, excuse me but I have not understand some things: 2014-02-02 federico vaggi : > What's happening most likely is that the LM algorithm is suggesting values > of d which cause an overflow. Can you try editing your function to print a > message if your expression starts to exceed: > > what does it mean that my expression starts to exceed? how could I check it? > sys.maxsize > > If this is what's happening, try using: > > http://cars9.uchicago.edu/software/python/lmfit/ > > Which is a nice wrapper with a much better UI (and some neat features) > around the same LM algorithm, which gives you the option of setting limits > on the parameters. > > Second point - you are minimizing a function for which you have a symbolic > expression. Why not write out an explicit function for the derivative? It > will be much faster than calculating the derivative numerically with finite > element approaches. > > ok but how could I use this in the function curve_fit? could you send me an example? thanks Gabriele > > On Sun, Feb 2, 2014 at 7:00 PM, wrote: > >> Send SciPy-User mailing list submissions to >> scipy-user at scipy.org >> >> To subscribe or unsubscribe via the World Wide Web, visit >> http://mail.scipy.org/mailman/listinfo/scipy-user >> or, via email, send a message with subject or body 'help' to >> scipy-user-request at scipy.org >> >> You can reach the person managing the list at >> scipy-user-owner at scipy.org >> >> When replying, please edit your Subject line so it is more specific >> than "Re: Contents of SciPy-User digest..." >> >> >> Today's Topics: >> >> 1. 
AttributeError: 'numpy.float64' object has no attribute >> '_mask' (Gabriele Brambilla) >> 2. fit with 3 parameters ok, fit with 4 is going wrong! >> (Gabriele Brambilla) >> >> >> ---------------------------------------------------------------------- >> >> Message: 1 >> Date: Sat, 1 Feb 2014 22:12:46 -0500 >> From: Gabriele Brambilla >> Subject: [SciPy-User] AttributeError: 'numpy.float64' object has no >> attribute '_mask' >> To: SciPy Users List >> Message-ID: >> < >> CABmgkifb-vXjrAwvAU_1gx8F8TqimGBU8C9RNt+ovRbVGozYaQ at mail.gmail.com> >> Content-Type: text/plain; charset="iso-8859-1" >> >> Hi, >> i'm trying to do a fit with this code: >> >> def funky(xo, a, b, c, d): >> return a*((xo)**((-1)*b))*(np.exp(((-1)*xo/c))**d) >> >> guess = [A, gamma, Ecut, b] >> >> >> >> params, params_covariance = optimize.curve_fit(funky, x, y, >> guess)#, errorvector) >> >> >> yee = funky(x, params[0], params[1], params[2], params[3]) >> >> >> fig = plt.figure() >> >> axes = fig.add_subplot(111) >> >> im = axes.plot(x, y)#errorbar(x, y, errorvector, zeri) >> >> im = axes.plot(x, yee) >> axes.set_yscale('log') >> axes.set_xscale('log') >> fig.canvas.draw() >> >> plt.show() >> >> >> and I've found also this: >> >> http://mail.scipy.org/pipermail/numpy-discussion/2013-September/067593.html >> >> what could I do? >> >> thanks >> >> Gabriele >> -------------- next part -------------- >> An HTML attachment was scrubbed... >> URL: >> http://mail.scipy.org/pipermail/scipy-user/attachments/20140201/7000cbe6/attachment-0001.html >> >> ------------------------------ >> >> Message: 2 >> Date: Sun, 2 Feb 2014 12:38:31 -0500 >> From: Gabriele Brambilla >> Subject: [SciPy-User] fit with 3 parameters ok, fit with 4 is going >> wrong! 
>> To: SciPy Users List >> Message-ID: >> > AdagusdT7WG2DtFmi810+k52Nt95RVQy-8YMw at mail.gmail.com> >> Content-Type: text/plain; charset="iso-8859-1" >> >> Hi, >> >> I'm trying to do a fit with scipy optimize.curve_fit >> >> when I define this function omitting the variable d the fit works well: >> >> a*((x)**((-1)*b))*(np.exp(((-1)*x/c)**d)) >> >> when I include the d (not only in this line obviously) I get this error: >> >> my1curvefit.py:14: RuntimeWarning: invalid value encountered in power >> return a*((x)**((-1)*b))*(np.exp(((-1)*x/c)**d)) >> Traceback (most recent call last): >> File "my1curvefit.py", line 37, in >> params, params_covariance = optimize.curve_fit(funky, xa, ye, guess, >> e) >> File >> "C:\Users\Gabriele\Anaconda1\lib\site-packages\scipy\optimize\minpack.py", >> line 538, in curve_fit >> raise RuntimeError(msg) >> RuntimeError: Optimal parameters not found: Number of calls to function >> has >> reached maxfev = 1000. >> >> could you help me to understand what's happening?and how could I do this >> fit? >> >> thanks >> >> Gabriele >> -------------- next part -------------- >> An HTML attachment was scrubbed... >> URL: >> http://mail.scipy.org/pipermail/scipy-user/attachments/20140202/e5530ed3/attachment-0001.html >> >> ------------------------------ >> >> _______________________________________________ >> SciPy-User mailing list >> SciPy-User at scipy.org >> http://mail.scipy.org/mailman/listinfo/scipy-user >> >> >> End of SciPy-User Digest, Vol 126, Issue 2 >> ****************************************** >> > > > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user > > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From newville at cars.uchicago.edu Thu Feb 6 11:13:19 2014
From: newville at cars.uchicago.edu (Matt Newville)
Date: Thu, 6 Feb 2014 10:13:19 -0600
Subject: [SciPy-User] fit with 3 parameters ok, fit with 4 is going wrong!
In-Reply-To: References: Message-ID:

On Thu, Feb 6, 2014 at 9:43 AM, Gabriele Brambilla <gb.gabrielebrambilla at gmail.com> wrote:

> Ok, excuse me but I have not understand some things:
>
> 2014-02-02 federico vaggi :
>
>> What's happening most likely is that the LM algorithm is suggesting
>> values of d which cause an overflow. Can you try editing your function to
>> print a message if your expression starts to exceed:
>
> what does it mean that my expression starts to exceed? how could I check
> it?

I think the recommendation was to print out parameter values (especially for your "c" and "d" parameters, which could easily send your function to under/overflow values, and so make it very difficult for the fitting algorithm to figure out how changing the value of the parameter changes the result).

>> sys.maxsize
>>
>> If this is what's happening, try using:
>>
>> http://cars9.uchicago.edu/software/python/lmfit/
>>
>> Which is a nice wrapper with a much better UI (and some neat features)
>> around the same LM algorithm, which gives you the option of setting limits
>> on the parameters.
>
>> Second point - you are minimizing a function for which you have a symbolic
>> expression. Why not write out an explicit function for the derivative? It
>> will be much faster than calculating the derivative numerically with finite
>> element approaches.
>
> ok but how could I use this in the function curve_fit? could you send me
> an example?

It may not be completely clear from the curve_fit() documentation, but the **kw parameters are passed to leastsq(). So, if you write a function to provide the Jacobian, you can pass this function to curve_fit() with the "Dfun" parameter.
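Matt's suggestion can be sketched concretely. The model below is a simplified stand-in, not the poster's exact code (a power law times an exponential, with the degenerate fourth parameter dropped), and the analytic Jacobian is handed to leastsq() through Dfun; all names and starting values are illustrative:

```python
import numpy as np
from scipy.optimize import leastsq

def model(x, a, b, c):
    # Simplified stand-in model: f(x) = a * x**-b * exp(-x/c)
    return a * x**-b * np.exp(-x / c)

def residuals(p, x, y):
    a, b, c = p
    return model(x, a, b, c) - y

def jacobian(p, x, y):
    # Analytic derivatives of each residual w.r.t. (a, b, c).
    # With col_deriv=1, leastsq expects shape (len(p), len(x)).
    a, b, c = p
    f = model(x, a, b, c)
    return np.array([f / a,            # d f / d a
                     -f * np.log(x),   # d f / d b
                     f * x / c**2])    # d f / d c

# Synthetic noiseless data from known parameters.
x = np.linspace(1.0, 10.0, 50)
y = model(x, 2.0, 0.5, 3.0)

p0 = [1.0, 1.0, 1.0]
popt, ier = leastsq(residuals, p0, args=(x, y), Dfun=jacobian, col_deriv=1)
```

With the Jacobian supplied, leastsq() skips the forward-difference estimate it would otherwise compute at every step, which is the speed-up the second point above refers to.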
Since curve_fit() is a simplified wrapper around leastsq(), once you start doing anything non-simple, you may be better off just using leastsq(). Among other conveniences, the lmfit package adds the ability to fix and/or set bounds on the parameters in the model without having to rewrite the objective function, which might be helpful for your problem.

--
--Matt Newville
-------------- next part --------------
An HTML attachment was scrubbed...
URL: From pav at iki.fi Thu Feb 6 11:24:08 2014
From: pav at iki.fi (Pauli Virtanen)
Date: Thu, 6 Feb 2014 16:24:08 +0000 (UTC)
Subject: [SciPy-User] scipy.interpolate unsuccessful after updating in Debian
References: Message-ID:

Mauricio Calvao gmail.com> writes:
> I have just updated my python-scipy package under Debian sid. I used to
have the version 0.10.1 and now it is 0.12.0.
>
> When I now try to import scipy.interpolate the following error is
[clip]

Hi,

What to do:

1. Send bug report to Debian. Their package is broken.

2. Install Scipy manually to your home directory:

    sudo apt-get build-dep scipy
    wget https://pypi.python.org/packages/source/s/scipy/scipy-0.13.3.tar.gz#md5=0547c1f8e8afad4009cc9b5ef17a2d4d
    tar xzf scipy-0.13.3.tar.gz
    cd scipy-0.13.3
    python setup.py install --user

This installs it under ~/.local/lib/python2.7 You can remove it from there later on when it's not needed any more. Don't use root/sudo for installation, except for installing the build dependencies via apt-get.

-- Pauli Virtanen

From pav at iki.fi Thu Feb 6 11:29:08 2014
From: pav at iki.fi (Pauli Virtanen)
Date: Thu, 6 Feb 2014 16:29:08 +0000 (UTC)
Subject: [SciPy-User] scipy.interpolate unsuccessful after updating in Debian
References: Message-ID:

Pauli Virtanen iki.fi> writes:
> Mauricio Calvao gmail.com> writes:
> > I have just updated my python-scipy package under Debian sid. I used to
> have the version 0.10.1 and now it is 0.12.0.
> > > > When I now try to import scipy.interpolate the following error is > [clip] Alternatively, if you are not *fully* on Debian sid, make sure you install also the python-six package from sid. -- Pauli Virtanen From davidmenhur at gmail.com Thu Feb 6 11:45:10 2014 From: davidmenhur at gmail.com (=?UTF-8?B?RGHPgGlk?=) Date: Thu, 6 Feb 2014 17:45:10 +0100 Subject: [SciPy-User] fit with 3 parameters ok, fit with 4 is going wrong! In-Reply-To: References: Message-ID: On 2 February 2014 21:11, federico vaggi wrote: > def funky(xo, a, b, c, d): >> return a*((xo)**((-1)*b))*(np.exp(((-1)*xo/c))**d) > > That function is messy! Here is a clean-up: a * x0**-b * np.exp(-x0 * d / c) You may notice that d and c are degenerate, so you could just use as a parameter the ratio d/c and remove one parameter. On 6 February 2014 17:13, Matt Newville wrote: > It may not be completely clear from the curve_fit() documentation, but the > **kw parameters are passed to leastsq(). So, if you write a function to > provide the Jacobian, you can pass function this to curve_fit() with the > "Dfun" parameter. > > Since curve_fit() is a simplified wrapper around leastsq(), once you start > doing anything non-simple, you may be better off just using leastsq(). > Among other conveniences, the lmfit package add the ability to fix and/or > set bounds on the parameters in the model without having to rewrite the > objective function, which might be helpful for your problem. If Scipy is having a hard time and you can afford the dependency, I would consider using iminuit with probfit. It is built on top of CERN's Minuit, that has very powerful minimization algorithms, robust in many dimensions (many > 10). And in my particular case, in its pure python mode is faster than Scipy, and gained a lot of speed when providing a Cython function. /David. -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From jobelenus at gmail.com Thu Feb 6 11:57:59 2014 From: jobelenus at gmail.com (John Obelenus) Date: Thu, 6 Feb 2014 11:57:59 -0500 Subject: [SciPy-User] Non-science BPP problem Message-ID: [Noob Trigger Warning] Hey everyone, I'm your standard python programmer that realizes I have a variation of a Bin Packing Problem on my hands. I turn to you for your help in understanding how to use the wonders of your math packages to solve the BPP. I saw that `openopt` has a simple and extended BPP solver, but I am open for whatever python package is going to get the job done. I realize this isn't your standard scipy user mailing, as I'm looking for help just getting started on how to address/specify my problem so that the package can solve it. I took a shot at the archives here but it seems everyone else here is far past my level. If anyone has good introductory resources that can get me from where I am now (understanding that I have a Bin Packing Problem and generally how that works) to solving it (specifically, how to use the packages and implement my data to the solver to get a coherent answer) that would be greatly appreciated. Thanks for your time -A Humble Non-Science Programmer -------------- next part -------------- An HTML attachment was scrubbed... URL: From newville at cars.uchicago.edu Thu Feb 6 12:55:47 2014 From: newville at cars.uchicago.edu (Matt Newville) Date: Thu, 6 Feb 2014 11:55:47 -0600 Subject: [SciPy-User] fit with 3 parameters ok, fit with 4 is going wrong! In-Reply-To: References: Message-ID: On Thu, Feb 6, 2014 at 10:45 AM, Da?id wrote: > On 2 February 2014 21:11, federico vaggi wrote: > >> def funky(xo, a, b, c, d): >>> return a*((xo)**((-1)*b))*(np.exp(((-1)*xo/c))**d) >> >> > That function is messy! Here is a clean-up: > > a * x0**-b * np.exp(-x0 * d / c) > > You may notice that d and c are degenerate, so you could just use as a > parameter the ratio d/c and remove one parameter. > Ah.. good catch! 
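The degeneracy is easy to demonstrate numerically for the function as originally parenthesized, np.exp(-x/c)**d, which is algebraically exp(-x*d/c): only the ratio d/c affects the curve. Separately, if the exponent is meant to sit inside the exp as (-x/c)**d, note that a negative base raised to a fractional power is NaN in real arithmetic, which is consistent with the "invalid value encountered in power" RuntimeWarning quoted earlier in the thread; the usual cutoff form is exp(-(x/c)**d). A quick check:

```python
import numpy as np

x = np.linspace(1.0, 10.0, 50)

def funky(x, a, b, c, d):
    # Parenthesized as in the original post: exp(-x/c) raised to d,
    # i.e. exp(-x * d / c), so c and d only enter through d/c.
    return a * x**-b * np.exp(-x / c)**d

# Two (c, d) pairs with the same ratio d/c produce identical curves,
# so a fitter cannot determine c and d separately:
y1 = funky(x, 2.0, 0.5, 1.0, 2.0)
y2 = funky(x, 2.0, 0.5, 3.0, 6.0)
assert np.allclose(y1, y2)

# With the exponent moved inside, (-x/c)**d, a fractional d makes the
# power of a negative base NaN, matching the earlier RuntimeWarning:
with np.errstate(invalid='ignore'):
    bad = (-x / 1.0)**0.5
assert np.isnan(bad).all()
```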
I didn't even bother parsing all the extra parens. Readability definitely counts! Gabriele, the real answer is that your function only has 3 unique variables. > > On 6 February 2014 17:13, Matt Newville wrote: > >> It may not be completely clear from the curve_fit() documentation, but >> the **kw parameters are passed to leastsq(). So, if you write a function >> to provide the Jacobian, you can pass function this to curve_fit() with the >> "Dfun" parameter. >> >> Since curve_fit() is a simplified wrapper around leastsq(), once you >> start doing anything non-simple, you may be better off just using >> leastsq(). Among other conveniences, the lmfit package add the ability to >> fix and/or set bounds on the parameters in the model without having to >> rewrite the objective function, which might be helpful for your problem. > > > If Scipy is having a hard time and you can afford the dependency, I would > consider using iminuit with probfit. It is built on top of CERN's Minuit, > that has very powerful minimization algorithms, robust in many dimensions > (many > 10). And in my particular case, in its pure python mode is faster > than Scipy, and gained a lot of speed when providing a Cython function > I'm interested, but slightly confused. Iminuit / probfit is described as BSD, but depends on (and includes?) Minuit - isn't that GPL? We've been trying (but slowly) to add some simpler modelling functionality to lmfit, but it looks like probfit has many more functions available. But providing higher level modeling functions seems somewhat separate to me than the details of the underlying fitting algorithm. Am I missing some important aspect of iminuit / probfit? Can you elaborate on the speed comparison? -- --Matt Newville -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From argriffi at ncsu.edu Thu Feb 6 13:15:17 2014 From: argriffi at ncsu.edu (alex) Date: Thu, 6 Feb 2014 13:15:17 -0500 Subject: [SciPy-User] Non-science BPP problem Message-ID: On Thu, Feb 6, 2014 at 12:58 PM, wrote: > ------------------------------ > > Message: 5 > Date: Thu, 6 Feb 2014 11:57:59 -0500 > From: John Obelenus > Subject: [SciPy-User] Non-science BPP problem > To: scipy-user at scipy.org > > [Noob Trigger Warning] > > Hey everyone, I'm your standard python programmer that realizes I have a > variation of a Bin Packing Problem on my hands. I turn to you for your help > in understanding how to use the wonders of your math packages to solve the > BPP. I saw that `openopt` has a simple and extended BPP solver, but I am > open for whatever python package is going to get the job done. > > I realize this isn't your standard scipy user mailing, as I'm looking for > help just getting started on how to address/specify my problem so that the > package can solve it. I took a shot at the archives here but it seems > everyone else here is far past my level. If anyone has good introductory > resources that can get me from where I am now (understanding that I have a > Bin Packing Problem and generally how that works) to solving it > (specifically, how to use the packages and implement my data to the solver > to get a coherent answer) that would be greatly appreciated. > > Thanks for your time > -A Humble Non-Science Programmer Although the scipy library can solve some least squares problems and some local optimization of smooth functions, it currently doesn't do combinatorial optimization. It doesn't even have a linear programming solver. From gb.gabrielebrambilla at gmail.com Thu Feb 6 14:00:16 2014 From: gb.gabrielebrambilla at gmail.com (Gabriele Brambilla) Date: Thu, 6 Feb 2014 14:00:16 -0500 Subject: [SciPy-User] fit with 3 parameters ok, fit with 4 is going wrong! 
In-Reply-To: References: Message-ID: 2014-02-06 Da?id : > On 2 February 2014 21:11, federico vaggi wrote: > >> def funky(xo, a, b, c, d): >>> return a*((xo)**((-1)*b))*(np.exp(((-1)*xo/c))**d) >> >> > That function is messy! Here is a clean-up: > > a * x0**-b * np.exp(-x0 * d / c) > > You may notice that d and c are degenerate, so you could just use as a > parameter the ratio d/c and remove one parameter. > > No, written in this way it is wrong. In the exponential I need: (-x0/c)**d ; NOT (-x0/c)*d. Maybe I'm writing it in the wrong way in Python...I attach an image with the function I want to write. > > On 6 February 2014 17:13, Matt Newville wrote: > >> It may not be completely clear from the curve_fit() documentation, but >> the **kw parameters are passed to leastsq(). So, if you write a function >> to provide the Jacobian, you can pass function this to curve_fit() with the >> "Dfun" parameter. >> >> Since curve_fit() is a simplified wrapper around leastsq(), once you >> start doing anything non-simple, you may be better off just using >> leastsq(). Among other conveniences, the lmfit package add the ability to >> fix and/or set bounds on the parameters in the model without having to >> rewrite the objective function, which might be helpful for your problem. > > > If Scipy is having a hard time and you can afford the dependency, I would > consider using iminuit with probfit. It is built on top of CERN's Minuit, > that has very powerful minimization algorithms, robust in many dimensions > (many > 10). And in my particular case, in its pure python mode is faster > than Scipy, and gained a lot of speed when providing a Cython function. > > > /David. > > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user > > -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... 
Name: function.jpg Type: image/jpeg Size: 4000 bytes Desc: not available URL: From d.l.goldsmith at gmail.com Thu Feb 6 14:24:24 2014 From: d.l.goldsmith at gmail.com (David Goldsmith) Date: Thu, 6 Feb 2014 11:24:24 -0800 Subject: [SciPy-User] Non-science BPP problem Message-ID: Hi, John. Please describe your specific problem in somewhat greater detail: "Bin Packing Problem" could mean different things to different scientists, depending on their area of expertise. Thanks! DG Date: Thu, 6 Feb 2014 11:57:59 -0500 > From: John Obelenus > Subject: [SciPy-User] Non-science BPP problem > To: scipy-user at scipy.org > Message-ID: > X-Csv3vdiSvR44dEFN-2KEEOy1Gy9ucfdPAcw at mail.gmail.com> > Content-Type: text/plain; charset="utf-8" > > [Noob Trigger Warning] > > Hey everyone, I'm your standard python programmer that realizes I have a > variation of a Bin Packing Problem on my hands. I turn to you for your help > in understanding how to use the wonders of your math packages to solve the > BPP. I saw that `openopt` has a simple and extended BPP solver, but I am > open for whatever python package is going to get the job done. > > I realize this isn't your standard scipy user mailing, as I'm looking for > help just getting started on how to address/specify my problem so that the > package can solve it. I took a shot at the archives here but it seems > everyone else here is far past my level. If anyone has good introductory > resources that can get me from where I am now (understanding that I have a > Bin Packing Problem and generally how that works) to solving it > (specifically, how to use the packages and implement my data to the solver > to get a coherent answer) that would be greatly appreciated. > > Thanks for your time > -A Humble Non-Science Programmer > -------------- next part -------------- > An HTML attachment was scrubbed... 
> URL: > http://mail.scipy.org/pipermail/scipy-user/attachments/20140206/5fe4f918/attachment-0001.html > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From jobelenus at gmail.com Thu Feb 6 14:33:02 2014 From: jobelenus at gmail.com (John Obelenus) Date: Thu, 6 Feb 2014 14:33:02 -0500 Subject: [SciPy-User] Non-science BPP problem In-Reply-To: References: Message-ID: Certainly! Thank you David for showing some interest. This is specifically for a player ranking/team creation system, but it's certainly a BPP, with some wrinkles. I have N teams (the bins). And I have M players. Each player has a ranking. I have both a ranking # (e.g. 1,2,3,...) and a relative ranking (e.g. 35 points = #1 rank, and 29 points = #2 rank, etc). Assigning teams according to decreasing absolute rank so that teams are evenly split by absolute ranking is pretty trivial (and thus "balanced" to an acceptable definition of "balanced"). What I am trying to solve now is pairing, e.g. players sign up to play with their friends. So player at Rank A is packed into a group with player at Rank J. And the aggregate ranking of all the teams/groups ought to accommodate these pairings such that the packages are "balanced" as evenly as they can be. So the "Volume" parameter is what I don't know how to specify for the BPP problem here. And a further wrinkle is that males and females are assigned to teams differently (e.g. so each team has an approximately equal # of males and females), but ranked together (e.g. a female may have a higher rank than a male, and that matters) and will certainly be paired together to be on the same team/group. And I am very sure that complicates things :) On Thu, Feb 6, 2014 at 2:24 PM, David Goldsmith wrote: > Hi, John. Please describe your specific problem in somewhat greater > detail: "Bin Packing Problem" could mean different things to different > scientists, depending on their area of expertise. Thanks!
> > DG > > Date: Thu, 6 Feb 2014 11:57:59 -0500 > >> From: John Obelenus >> Subject: [SciPy-User] Non-science BPP problem >> To: scipy-user at scipy.org >> Message-ID: >> > X-Csv3vdiSvR44dEFN-2KEEOy1Gy9ucfdPAcw at mail.gmail.com> >> Content-Type: text/plain; charset="utf-8" >> >> [Noob Trigger Warning] >> >> Hey everyone, I'm your standard python programmer that realizes I have a >> variation of a Bin Packing Problem on my hands. I turn to you for your >> help >> in understanding how to use the wonders of your math packages to solve the >> BPP. I saw that `openopt` has a simple and extended BPP solver, but I am >> open for whatever python package is going to get the job done. >> >> I realize this isn't your standard scipy user mailing, as I'm looking for >> help just getting started on how to address/specify my problem so that the >> package can solve it. I took a shot at the archives here but it seems >> everyone else here is far past my level. If anyone has good introductory >> resources that can get me from where I am now (understanding that I have a >> Bin Packing Problem and generally how that works) to solving it >> (specifically, how to use the packages and implement my data to the solver >> to get a coherent answer) that would be greatly appreciated. >> >> Thanks for your time >> -A Humble Non-Science Programmer >> -------------- next part -------------- >> An HTML attachment was scrubbed... >> URL: >> http://mail.scipy.org/pipermail/scipy-user/attachments/20140206/5fe4f918/attachment-0001.html >> >> > > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From davidmenhur at gmail.com Thu Feb 6 15:10:16 2014 From: davidmenhur at gmail.com (=?UTF-8?B?RGHPgGlk?=) Date: Thu, 6 Feb 2014 21:10:16 +0100 Subject: [SciPy-User] fit with 3 parameters ok, fit with 4 is going wrong! 
In-Reply-To: References: Message-ID: On 6 February 2014 20:00, Gabriele Brambilla wrote: > No, written in this way it is wrong. In the exponential I need: (-x0/c)**d > ; NOT (-x0/c)*d. Maybe I'm writing it in the wrong way in Python...I attach > an image with the function I want to write. > Then, your function has a misplaced parenthesis. Usually, IDEs can tell you which parenthesis closes which one, so you can see if you are writing what you want. (See attached a couple of examples with vi). See if this fixes your original problem: a * x0**-b * np.exp(-(x0 / c)**d) -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: parenthesis.png Type: image/png Size: 2604 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: parenthesisII.png Type: image/png Size: 1945 bytes Desc: not available URL: From ehermes at chem.wisc.edu Thu Feb 6 15:15:05 2014 From: ehermes at chem.wisc.edu (Eric Hermes) Date: Thu, 06 Feb 2014 14:15:05 -0600 Subject: [SciPy-User] fit with 3 parameters ok, fit with 4 is going wrong! In-Reply-To: References: Message-ID: <52F3ED49.4070108@chem.wisc.edu> On Thursday, February 06, 2014 2:10:16 PM, Da?id wrote: > On 6 February 2014 20:00, Gabriele Brambilla > > wrote: > > No, written in this way it is wrong. In the exponential I need: > (-x0/c)**d ; NOT (-x0/c)*d. Maybe I'm writing it in the wrong way > in Python...I attach an image with the function I want to write. > > > Then, your function has a misplaced parenthesis. Usually, IDEs can > tell you which parenthesis closes which one, so you can see if you are > writing what you want. (See attached a couple of examples with vi). > > See if this fixes your original problem: > > a * x0**-b * np.exp(-(x0 / c)**d) This is also not the same as Gabriele's original expression. Observe that (-x0/c)**d != -(x0/c)**d. 
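A quick check makes the difference concrete (the numbers here are arbitrary, chosen only so that x0/c is positive and d is non-integer):

```python
import numpy as np

x = np.float64(2.0)  # stands in for x0/c > 0; arbitrary illustrative value
d = 0.5              # non-integer exponent

with np.errstate(invalid="ignore"):
    a = (-x) ** d    # negative base, fractional power: nan in real float arithmetic
b = -(x ** d)        # negate *after* the power: an ordinary real number
c = (-x + 0j) ** d   # over the complex numbers the first form is well-defined

print(a)  # nan
print(b)  # -1.4142135623730951
print(c)  # essentially purely imaginary, roughly 1.4142135623730951j
```

Either way, a real-valued least-squares routine cannot digest the first form once d is free to take non-integer values.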
The former case is the original expression, which is complex when x0/c is positive and d is non-integer. This is the root cause of Gabriele's issue. Eric > > > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user -- Eric Hermes J.R. Schmidt Group Chemistry Department University of Wisconsin - Madison From davidmenhur at gmail.com Thu Feb 6 15:20:04 2014 From: davidmenhur at gmail.com (=?UTF-8?B?RGHPgGlk?=) Date: Thu, 6 Feb 2014 21:20:04 +0100 Subject: [SciPy-User] fit with 3 parameters ok, fit with 4 is going wrong! In-Reply-To: References: Message-ID: On 6 February 2014 18:55, Matt Newville wrote: > I'm interested, but slightly confused. Iminuit / probfit is described as > BSD, but depends on (and includes?) Minuit - isn't that GPL? The license says iminuit is MIT, but Minuit is GPLv2. So the Python part that wraps the C code is MIT. One could detach it and use a different algorithm, or a Minuit implementation under another license. Probfit provides convenience functions to do regressions and plotting. For example, chi2 = Chi2Regression(fun, x, y) creates a callable object that calculates the chi2 for the function and the defined data, but has methods like chi2.draw to plot it, and it can obtain the plotting parameters from an iminuit object (the dictionary Minuit.values), but could work perfectly well with some other object presenting the same interface. I am currently using it to fit a Lorentzian curve to a noisy profile. I have fairly nice approximations to the actual values (looking at the maximum, for example), and speed is critical, so I tried a few approaches and iminuit + probfit were the fastest, by 20 to 50%. -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed...
Name: figure_1.png Type: image/png Size: 59456 bytes Desc: not available URL: From d.l.goldsmith at gmail.com Thu Feb 6 15:34:37 2014 From: d.l.goldsmith at gmail.com (David Goldsmith) Date: Thu, 6 Feb 2014 12:34:37 -0800 Subject: [SciPy-User] Non-science BPP problem (John Obelenus) Message-ID: Ah, OK, what you have is a Constrained, Integer Linear 'Programming' problem (Programming is in quotes because, in this context, due to historical misfortune, it means something quite different from what it now typically means). Unfortunately, this is a rather advanced area of mathematics (often not taught until the graduate level), at the intersection of combinatorics and optimization, but at least now you have the terms you can use to conduct searches. However, there may be people on this list who have already "solved" (i.e., written code to solve) this problem, and they may be willing to share and/or provide guidance. One concrete suggestion: one method that I often find useful when approaching combinatorics problems is to study the simplest non-trivial case--e.g., say two boys, two girls, with the types of rankings you indicate--and figure out the solution by hand. Then complicate the problem incrementally until, hopefully, the pattern inherent in the solution (or method of solution) becomes evident. In other words, work inductively, rather than deductively. (FWIW: integer programming problems often require iterative, rather than formulaic, solutions, so figuring out how to solve the problem by hand may be no different than what you would have to implement in code anyway.) Good luck! DG Date: Thu, 6 Feb 2014 14:33:02 -0500 > From: John Obelenus > Subject: Re: [SciPy-User] Non-science BPP problem > To: SciPy Users List > Message-ID: > < > CALV8KbdBxB0Ux0+FoH+udbYKGT88OD3eo3eL3Gc26zY-yY9u_Q at mail.gmail.com> > Content-Type: text/plain; charset="utf-8" > > Certainly! Thank you David for showing some interest. 
This is specifically > for a player ranking/team creation system, but its certainly a BPP, with > some wrinkles. > > I have N teams (the bins). And I have M players. Each player has a ranking. > I have both a ranking # (e.g. 1,2,3,...) and a relative ranking (e.g. 35 > points = #1 rank, and 29 points = #2 rank, etc). Assigning teams according > to decreasing absolute rank so that teams are even split by absolute > ranking is pretty trivial (and thus "balanced" to an acceptable definition > of "balanced"). > > What I am trying to solve now is pairing, e.g. players sign up to play with > their friends. So player at Rank A is packed into a group with player Rank > J. And the aggregate ranking of all the teams/groups ought to accommodate > these pairings such that the packages are "balanced" as evenly as they can > be. > > So the "Volume" parameter is what I don't know how to specify for the BPP > problem here. > > And a further wrinkle is that males and females are assigned to teams > differently (e.g. so each team has an approximately equal# of males and > females), but ranked together (e.g. a female may have a higher rank than a > male, and that matters) and will certainly be paired together to be on the > same team/group. And I am very sure that complicates things :) > > > On Thu, Feb 6, 2014 at 2:24 PM, David Goldsmith >wrote: > > > Hi, John. Please describe your specific problem in somewhat greater > > detail: "Bin Packing Problem" could mean different things to different > > scientists, depending on their area of expertise. Thanks! 
> > > > DG > > > > Date: Thu, 6 Feb 2014 11:57:59 -0500 > > > >> From: John Obelenus > >> Subject: [SciPy-User] Non-science BPP problem > >> To: scipy-user at scipy.org > >> Message-ID: > >> >> X-Csv3vdiSvR44dEFN-2KEEOy1Gy9ucfdPAcw at mail.gmail.com> > >> Content-Type: text/plain; charset="utf-8" > >> > >> [Noob Trigger Warning] > >> > >> Hey everyone, I'm your standard python programmer that realizes I have a > >> variation of a Bin Packing Problem on my hands. I turn to you for your > >> help > >> in understanding how to use the wonders of your math packages to solve > the > >> BPP. I saw that `openopt` has a simple and extended BPP solver, but I am > >> open for whatever python package is going to get the job done. > >> > >> I realize this isn't your standard scipy user mailing, as I'm looking > for > >> help just getting started on how to address/specify my problem so that > the > >> package can solve it. I took a shot at the archives here but it seems > >> everyone else here is far past my level. If anyone has good introductory > >> resources that can get me from where I am now (understanding that I > have a > >> Bin Packing Problem and generally how that works) to solving it > >> (specifically, how to use the packages and implement my data to the > solver > >> to get a coherent answer) that would be greatly appreciated. > >> > >> Thanks for your time > >> -A Humble Non-Science Programmer > >> -------------- next part -------------- > >> An HTML attachment was scrubbed... > >> URL: > >> > http://mail.scipy.org/pipermail/scipy-user/attachments/20140206/5fe4f918/attachment-0001.html > >> > >> > > > > _______________________________________________ > > SciPy-User mailing list > > SciPy-User at scipy.org > > http://mail.scipy.org/mailman/listinfo/scipy-user > > > > > -------------- next part -------------- > An HTML attachment was scrubbed... 
> URL: > http://mail.scipy.org/pipermail/scipy-user/attachments/20140206/29ddda95/attachment.html > > ------------------------------ > > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user > > > End of SciPy-User Digest, Vol 126, Issue 10 > ******************************************* > -- >From "A Letter From The Future" in "Peak Everything" by Richard Heinberg: "By the time I was an older teenager, a certain...attitude was developing among the young people...a feeling of utter contempt for anyone over a certain age--maybe 30 or 40. The adults had consumed so many resources, and now there were none left for their own children...when those adults were younger, they [were] just doing what everybody else was doing...they figured it was normal to cut down ancient forests for...phone books, pump every last gallon of oil to power their SUV's...[but] for...my generation all that was just a dim memory...We [grew up] living in darkness, with shortages of food and water, with riots in the streets, with people begging on street corners...for us, the adults were the enemy." Want to *really* understand what's *really* going on? Read "Peak Everything." -------------- next part -------------- An HTML attachment was scrubbed... URL: From jobelenus at gmail.com Thu Feb 6 15:40:15 2014 From: jobelenus at gmail.com (John Obelenus) Date: Thu, 6 Feb 2014 15:40:15 -0500 Subject: [SciPy-User] Non-science BPP problem (John Obelenus) In-Reply-To: References: Message-ID: Interesting. I figured the M/W wrinkle may have pushed it beyond the BPP. But the treatment of "rank" as "volume" in a BPP would keep the problem in that domain, and thus I could use a solver to do the heavy lifting. I'm doing this purely as a hobby project -- so the biggest bang for the smallest buck will be very helpful :) And yes, moving through the process, as if by hand, I understand I could do. 
Though it would certainly be slow. My main concern is that it would not be optimized for "balance" enough, e.g. based on the input it would be seriously skewed (which is already a large social issue amongst our group of a couple thousand players). Thanks David! Does anyone else have any experience with this Constrained Integer Linear Programming problem space? On Thu, Feb 6, 2014 at 3:34 PM, David Goldsmith wrote: > Ah, OK, what you have is a Constrained, Integer Linear 'Programming' > problem (Programming is in quotes because, in this context, due to > historical misfortune, it means something quite different from what it now > typically means). Unfortunately, this is a rather advanced area of > mathematics (often not taught until the graduate level), at the > intersection of combinatorics and optimization, but at least now you have > the terms you can use to conduct searches. However, there may be people on > this list who have already "solved" (i.e., written code to solve) this > problem, and they may be willing to share and/or provide guidance. > > One concrete suggestion: one method that I often find useful when > approaching combinatorics problems is to study the simplest non-trivial > case--e.g., say two boys, two girls, with the types of rankings you > indicate--and figure out the solution by hand. Then complicate the problem > incrementally until, hopefully, the pattern inherent in the solution (or > method of solution) becomes evident. In other words, work inductively, > rather than deductively. (FWIW: integer programming problems often require > iterative, rather than formulaic, solutions, so figuring out how to solve > the problem by hand may be no different than what you would have to > implement in code anyway.) > > Good luck!
> > DG > > > Date: Thu, 6 Feb 2014 14:33:02 -0500 >> From: John Obelenus >> Subject: Re: [SciPy-User] Non-science BPP problem >> To: SciPy Users List >> Message-ID: >> < >> CALV8KbdBxB0Ux0+FoH+udbYKGT88OD3eo3eL3Gc26zY-yY9u_Q at mail.gmail.com> >> Content-Type: text/plain; charset="utf-8" >> >> Certainly! Thank you David for showing some interest. This is specifically >> for a player ranking/team creation system, but its certainly a BPP, with >> some wrinkles. >> >> I have N teams (the bins). And I have M players. Each player has a >> ranking. >> I have both a ranking # (e.g. 1,2,3,...) and a relative ranking (e.g. 35 >> points = #1 rank, and 29 points = #2 rank, etc). Assigning teams according >> to decreasing absolute rank so that teams are even split by absolute >> ranking is pretty trivial (and thus "balanced" to an acceptable definition >> of "balanced"). >> >> What I am trying to solve now is pairing, e.g. players sign up to play >> with >> their friends. So player at Rank A is packed into a group with player Rank >> J. And the aggregate ranking of all the teams/groups ought to accommodate >> these pairings such that the packages are "balanced" as evenly as they can >> be. >> >> So the "Volume" parameter is what I don't know how to specify for the BPP >> problem here. >> >> And a further wrinkle is that males and females are assigned to teams >> differently (e.g. so each team has an approximately equal# of males and >> females), but ranked together (e.g. a female may have a higher rank than a >> male, and that matters) and will certainly be paired together to be on the >> same team/group. And I am very sure that complicates things :) >> >> >> On Thu, Feb 6, 2014 at 2:24 PM, David Goldsmith > >wrote: >> >> > Hi, John. Please describe your specific problem in somewhat greater >> > detail: "Bin Packing Problem" could mean different things to different >> > scientists, depending on their area of expertise. Thanks! 
>> > >> > DG >> > >> > Date: Thu, 6 Feb 2014 11:57:59 -0500 >> > >> >> From: John Obelenus >> >> Subject: [SciPy-User] Non-science BPP problem >> >> To: scipy-user at scipy.org >> >> Message-ID: >> >> > >> X-Csv3vdiSvR44dEFN-2KEEOy1Gy9ucfdPAcw at mail.gmail.com> >> >> Content-Type: text/plain; charset="utf-8" >> >> >> >> [Noob Trigger Warning] >> >> >> >> Hey everyone, I'm your standard python programmer that realizes I have >> a >> >> variation of a Bin Packing Problem on my hands. I turn to you for your >> >> help >> >> in understanding how to use the wonders of your math packages to solve >> the >> >> BPP. I saw that `openopt` has a simple and extended BPP solver, but I >> am >> >> open for whatever python package is going to get the job done. >> >> >> >> I realize this isn't your standard scipy user mailing, as I'm looking >> for >> >> help just getting started on how to address/specify my problem so that >> the >> >> package can solve it. I took a shot at the archives here but it seems >> >> everyone else here is far past my level. If anyone has good >> introductory >> >> resources that can get me from where I am now (understanding that I >> have a >> >> Bin Packing Problem and generally how that works) to solving it >> >> (specifically, how to use the packages and implement my data to the >> solver >> >> to get a coherent answer) that would be greatly appreciated. >> >> >> >> Thanks for your time >> >> -A Humble Non-Science Programmer >> >> -------------- next part -------------- >> >> An HTML attachment was scrubbed... >> >> URL: >> >> >> http://mail.scipy.org/pipermail/scipy-user/attachments/20140206/5fe4f918/attachment-0001.html >> >> >> >> >> > >> > _______________________________________________ >> > SciPy-User mailing list >> > SciPy-User at scipy.org >> > http://mail.scipy.org/mailman/listinfo/scipy-user >> > >> > >> -------------- next part -------------- >> An HTML attachment was scrubbed... 
>> URL: >> http://mail.scipy.org/pipermail/scipy-user/attachments/20140206/29ddda95/attachment.html >> >> ------------------------------ >> >> _______________________________________________ >> SciPy-User mailing list >> SciPy-User at scipy.org >> http://mail.scipy.org/mailman/listinfo/scipy-user >> >> >> End of SciPy-User Digest, Vol 126, Issue 10 >> ******************************************* >> > > > > -- > From "A Letter From The Future" in "Peak Everything" by Richard Heinberg: > > "By the time I was an older teenager, a certain...attitude was developing > among the young people...a feeling of utter contempt for anyone over a > certain age--maybe 30 or 40. The adults had consumed so many resources, > and now there were none left for their own children...when those adults > were younger, they [were] just doing what everybody else was doing...they > figured it was normal to cut down ancient forests for...phone books, pump > every last gallon of oil to power their SUV's...[but] for...my generation > all that was just a dim memory...We [grew up] living in darkness, with > shortages of food and water, with riots in the streets, with people begging > on street corners...for us, the adults were the enemy." > > Want to *really* understand what's *really* going on? Read "Peak > Everything." > > > > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From dan.stowell at eecs.qmul.ac.uk Thu Feb 6 15:54:11 2014 From: dan.stowell at eecs.qmul.ac.uk (Dan Stowell) Date: Thu, 06 Feb 2014 20:54:11 +0000 Subject: [SciPy-User] downsampling with max in 2d Message-ID: <52F3F673.3060007@eecs.qmul.ac.uk> Hi all, I would like to downsample 2D arrays using max-pooling. 
In other words, to downsample by a factor of 2 would reduce

a b c d
e f g h
i j k l
m n o p

to

max(a,b,e,f) max(c,d,g,h)
max(i,j,m,n) max(k,l,o,p)

I've searched through numpy and scipy and not found a method for this, but I'd be grateful for pointers if I've missed it. In the meantime, I wrote the following function. If you can spot faster/better ways to achieve this, please do say:

def maxdownsample2d(data, factor):
    """Supply a 2D numpy array, and an integer factor by which to
    downsample (by nonoverlapping maxpooling) in both directions"""
    # factor might not be an exact fit, so we trim to this.
    trimrows = int(np.floor(data.shape[0] / float(factor))) * factor
    trimcols = int(np.floor(data.shape[1] / float(factor))) * factor
    first = True
    for coff in range(factor):
        for roff in range(factor):
            hopped = data[roff:trimrows:factor, coff:trimcols:factor]
            if first:
                maxsofar = hopped
                first = False
            else:
                maxsofar = np.maximum(maxsofar, hopped)
    return maxsofar

Best
Dan
--
Dan Stowell
Postdoctoral Research Assistant
Centre for Digital Music
Queen Mary, University of London
Mile End Road, London E1 4NS
http://c4dm.eecs.qmul.ac.uk/people/dans.htm
http://www.mcld.co.uk/

From josef.pktd at gmail.com Thu Feb 6 16:14:30 2014
From: josef.pktd at gmail.com (josef.pktd at gmail.com)
Date: Thu, 6 Feb 2014 16:14:30 -0500
Subject: [SciPy-User] downsampling with max in 2d
In-Reply-To: <52F3F673.3060007@eecs.qmul.ac.uk>
References: <52F3F673.3060007@eecs.qmul.ac.uk>
Message-ID:

On Thu, Feb 6, 2014 at 3:54 PM, Dan Stowell wrote:
> Hi all,
>
> I would like to downsample 2D arrays using max-pooling. In other words,
> to downsample by a factor of 2 would reduce
>
> a b c d
> e f g h
> i j k l
> m n o p
>
> to
>
> max(a,b,e,f) max(c,d,g,h)
> max(i,j,m,n) max(k,l,o,p)
>
>
> I've searched through numpy and scipy and not found a method for this,
> but I'd be grateful for pointers if I've missed it.
> In the meantime, I wrote the following function.
If you can spot a
> faster/better ways to achieve this, please do say:
>
>
> def maxdownsample2d(data, factor):
>     """Supply a 2D numpy array, and an integer factor by which to
>     downsample (by nonoverlapping maxpooling) in both directions"""
>     # factor might not be exact fit, so we trim to this.
>     trimrows = int(np.floor(data.shape[0] / float(factor))) * factor
>     trimcols = int(np.floor(data.shape[1] / float(factor))) * factor
>     first = True
>     for coff in range(factor):
>         for roff in range(factor):
>             hopped = data[roff:trimrows:factor,
>                           coff:trimcols:factor]
>             if first:
>                 maxsofar = hopped
>                 first = False
>             else:
>                 maxsofar = np.maximum(maxsofar, hopped)
>     return maxsofar
>
>
ndimage has a maximum filter and then slice, but it does a lot of redundant calculation

I always wanted to use reduceat (and didn't find many opportunities)

>>> ar = np.random.randn(24, 24)
>>> cut = np.arange(0, 24, 4)
>>> y1 = np.maximum.reduceat(ar, cut)
>>> y2 = np.maximum.reduceat(y1, cut, axis=1)

>>> y2 - maxdownsample2d(ar, 4)
array([[ 0.,  0.,  0.,  0.,  0.,  0.],
       [ 0.,  0.,  0.,  0.,  0.,  0.],
       [ 0.,  0.,  0.,  0.,  0.,  0.],
       [ 0.,  0.,  0.,  0.,  0.,  0.],
       [ 0.,  0.,  0.,  0.,  0.,  0.],
       [ 0.,  0.,  0.,  0.,  0.,  0.]])

Josef

> Best
> Dan
> --
> Dan Stowell
> Postdoctoral Research Assistant
> Centre for Digital Music
> Queen Mary, University of London
> Mile End Road, London E1 4NS
> http://c4dm.eecs.qmul.ac.uk/people/dans.htm
> http://www.mcld.co.uk/
> _______________________________________________
> SciPy-User mailing list
> SciPy-User at scipy.org
> http://mail.scipy.org/mailman/listinfo/scipy-user
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From newville at cars.uchicago.edu Thu Feb 6 16:20:15 2014
From: newville at cars.uchicago.edu (Matt Newville)
Date: Thu, 6 Feb 2014 15:20:15 -0600
Subject: [SciPy-User] fit with 3 parameters ok, fit with 4 is going wrong!
In-Reply-To: References: Message-ID: Hi David, On Thu, Feb 6, 2014 at 2:20 PM, Da?id wrote: > On 6 February 2014 18:55, Matt Newville wrote: > >> I'm interested, but slightly confused. Iminuit / probfit is described >> as BSD, but depends on (and includes?) Minuit - isn't that GPL? > > > The license says iminuit is MIT, but Minuit is GPLv2. So, the Python part, > that wraps the C code is MIT. > The way I understand it, code that wraps GPLv2 code is required to use GPL, and so propagate the license to all code using it. That would mean that code that uses Minuit cannot be something other than GPL. Perhaps I am not understanding something. > One could deattach it and use a different algorithm, or a Minuit > implementation under another license. > Does such an implementation exist? Probfit provides convenience functions to do regressions and plotting. For > example, chi2 = Chi2Regression(fun, x, y) creates a callable object that > calculates the chi2 for the function and the defined data, but has methods > like chi2.draw to plot it, and it can obtain the plotting parameters from > an iminuit object (the dictionary Minuit.values), but could work perfectly > well with some other object presenting the same interface. > > I am currently using it to fit a lorentzian curve to a noisy profile. I > have fairly nice approximations to the actual values (looking at the > maximum, for example), and speed is critical, so I tried a few approaches > and iminuit + probfit were the fastest, by 20 to 50%. > OK. Code that can quickly and accurately fit lorentzians and other basic curves to noisy 1-d data sets is important to me too. Ultimate speed could depend greatly on initial values and whether analytic derivatives can be used as well as fitting algorithm used. In lmfit, we've tried to make it easy to switch fitting algorithms without having to change the objective function, but with bounded and possibly constrained parameter values, it's more challenging to use analytic derivatives. 
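For completeness, the basic pattern under discussion looks like this with plain curve_fit (a sketch only: the data, the "true" parameters, and the starting guess below are all invented for illustration):

```python
import numpy as np
from scipy.optimize import curve_fit

def funky(x0, a, b, c, d):
    # the corrected model from this thread: a * x0**-b * exp(-(x0/c)**d)
    return a * x0 ** (-b) * np.exp(-(x0 / c) ** d)

rng = np.random.default_rng(0)
x = np.linspace(0.1, 10.0, 200)
true = (2.0, 0.5, 3.0, 1.5)  # invented "true" parameters
y = funky(x, *true) + rng.normal(scale=0.01, size=x.size)

# With four correlated parameters, curve_fit's default start (all ones)
# can wander off; an explicit p0 near the right scale helps a lot.
popt, pcov = curve_fit(funky, x, y, p0=(1.5, 0.8, 2.5, 1.2), maxfev=10000)
rms = np.sqrt(np.mean((funky(x, *popt) - y) ** 2))
print(popt, rms)  # with a sane p0, rms should sit near the 0.01 noise level
```

How quickly (and whether) this converges still hinges on the starting point, which is exactly where bounds and constraints (as in lmfit) or Minuit's strategies can earn their keep.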
--Matt Newville
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From deil.christoph at googlemail.com Thu Feb 6 16:24:54 2014
From: deil.christoph at googlemail.com (Christoph Deil)
Date: Thu, 6 Feb 2014 22:24:54 +0100
Subject: [SciPy-User] downsampling with max in 2d
In-Reply-To: References: <52F3F673.3060007@eecs.qmul.ac.uk>
Message-ID: <42295A59-621E-441A-926F-C563A2A896B1@gmail.com>

On 06 Feb 2014, at 22:14, josef.pktd at gmail.com wrote:
>
> On Thu, Feb 6, 2014 at 3:54 PM, Dan Stowell wrote:
> Hi all,
>
> I would like to downsample 2D arrays using max-pooling. In other words,
> to downsample by a factor of 2 would reduce
>
> a b c d
> e f g h
> i j k l
> m n o p
>
> to
>
> max(a,b,e,f) max(c,d,g,h)
> max(i,j,m,n) max(k,l,o,p)
>
>
> I've searched through numpy and scipy and not found a method for this,
> but I'd be grateful for pointers if I've missed it.

Hi Dan,

scikit-image has a flexible function that lets you easily do this:
http://scikit-image.org/docs/dev/api/skimage.measure.html#block-reduce

Christoph

> In the meantime, I wrote the following function. If you can spot a
> faster/better ways to achieve this, please do say:
>
> def maxdownsample2d(data, factor):
>     """Supply a 2D numpy array, and an integer factor by which to
>     downsample (by nonoverlapping maxpooling) in both directions"""
>     # factor might not be exact fit, so we trim to this.
>     trimrows = int(np.floor(data.shape[0] / float(factor))) * factor
>     trimcols = int(np.floor(data.shape[1] / float(factor))) * factor
>     first = True
>     for coff in range(factor):
>         for roff in range(factor):
>             hopped = data[roff:trimrows:factor, coff:trimcols:factor]
>             if first:
>                 maxsofar = hopped
>                 first = False
>             else:
>                 maxsofar = np.maximum(maxsofar, hopped)
>     return maxsofar
>
>
> ndimage has a maximum filter and then slice, but it does a lot of redundant calculation
>
> I always wanted to use reduceat (and didn't find many opportunities)
>
> >>> ar = np.random.randn(24, 24)
> >>> cut = np.arange(0, 24, 4)
> >>> y1 = np.maximum.reduceat(ar, cut)
> >>> y2 = np.maximum.reduceat(y1, cut, axis=1)
>
> >>> y2 - maxdownsample2d(ar, 4)
> array([[ 0.,  0.,  0.,  0.,  0.,  0.],
>        [ 0.,  0.,  0.,  0.,  0.,  0.],
>        [ 0.,  0.,  0.,  0.,  0.,  0.],
>        [ 0.,  0.,  0.,  0.,  0.,  0.],
>        [ 0.,  0.,  0.,  0.,  0.,  0.],
>        [ 0.,  0.,  0.,  0.,  0.,  0.]])
>
> Josef
>
> Best
> Dan
> --
> Dan Stowell
> Postdoctoral Research Assistant
> Centre for Digital Music
> Queen Mary, University of London
> Mile End Road, London E1 4NS
> http://c4dm.eecs.qmul.ac.uk/people/dans.htm
> http://www.mcld.co.uk/
> _______________________________________________
> SciPy-User mailing list
> SciPy-User at scipy.org
> http://mail.scipy.org/mailman/listinfo/scipy-user
>
> _______________________________________________
> SciPy-User mailing list
> SciPy-User at scipy.org
> http://mail.scipy.org/mailman/listinfo/scipy-user
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From gb.gabrielebrambilla at gmail.com Thu Feb 6 16:28:37 2014
From: gb.gabrielebrambilla at gmail.com (Gabriele Brambilla)
Date: Thu, 6 Feb 2014 16:28:37 -0500
Subject: [SciPy-User] fit with 3 parameters ok, fit with 4 is going wrong!
In-Reply-To: References: Message-ID:

Hi guys! now it works (not perfectly but there has been a lot of progress!)
first the expression I wrote was wrong for the reason outlined by Eric Hermes, now I use: -(x0/c)**d. Second I've tried a way to give the fit a more precise starting point. thank you all Gabriele 2014-02-06 Daπid : > On 6 February 2014 18:55, Matt Newville wrote: > >> I'm interested, but slightly confused. Iminuit / probfit is described >> as BSD, but depends on (and includes?) Minuit - isn't that GPL? > > > The license says iminuit is MIT, but Minuit is GPLv2. So, the Python part, > that wraps the C code is MIT. One could detach it and use a different > algorithm, or a Minuit implementation under another license. > > Probfit provides convenience functions to do regressions and plotting. For > example, chi2 = Chi2Regression(fun, x, y) creates a callable object that > calculates the chi2 for the function and the defined data, but has methods > like chi2.draw to plot it, and it can obtain the plotting parameters from > an iminuit object (the dictionary Minuit.values), but could work perfectly > well with some other object presenting the same interface. > > I am currently using it to fit a Lorentzian curve to a noisy profile. I > have fairly nice approximations to the actual values (looking at the > maximum, for example), and speed is critical, so I tried a few approaches > and iminuit + probfit were the fastest, by 20 to 50%. > > > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From davidmenhur at gmail.com Thu Feb 6 16:56:17 2014 From: davidmenhur at gmail.com (=?UTF-8?B?RGHPgGlk?=) Date: Thu, 6 Feb 2014 22:56:17 +0100 Subject: [SciPy-User] fit with 3 parameters ok, fit with 4 is going wrong!
In-Reply-To: References: Message-ID: On 6 February 2014 22:20, Matt Newville wrote: > On Thu, Feb 6, 2014 at 2:20 PM, Daπid wrote: > >> On 6 February 2014 18:55, Matt Newville wrote: >> >>> I'm interested, but slightly confused. Iminuit / probfit is described >>> as BSD, but depends on (and includes?) Minuit - isn't that GPL? >> >> >> The license says iminuit is MIT, but Minuit is GPLv2. So, the Python >> part, that wraps the C code is MIT. >> > > The way I understand it, code that wraps GPLv2 code is required to use > GPL, and so propagate the license to all code using it. That would mean > that code that uses Minuit cannot be something other than GPL. Perhaps I > am not understanding something. > That is a good point. Reading the licenses, it is completely unclear to me. GNU says that the whole package has to be compatible with GPL [1], and MIT, as well as the revised BSD [2], are. Regarding the implementations, to my knowledge, the closest thing is taking it from a ROOT distribution, which is LGPL [3]. I admit I did not include lmfit in my benchmarks, I should give it a try. After a quick look, the interface looks clumsier to write, but much nicer to deal with programmatically (ie, fitting an arbitrary number of parameters), and a sure plus is the arbitrarily complex constraints and bounds. /David. [1] http://www.gnu.org/licenses/old-licenses/gpl-2.0-faq.html#GPLWrapper [2] http://www.gnu.org/licenses/old-licenses/gpl-2.0-faq.html#OrigBSD [3] http://root.cern.ch/drupal/content/license
In-Reply-To: References: Message-ID: On Thu, Feb 6, 2014 at 9:20 PM, Matt Newville wrote: > Hi David, > > > On Thu, Feb 6, 2014 at 2:20 PM, Daπid wrote: >> >> On 6 February 2014 18:55, Matt Newville >> wrote: >>> >>> I'm interested, but slightly confused. Iminuit / probfit is described >>> as BSD, but depends on (and includes?) Minuit - isn't that GPL? >> >> >> The license says iminuit is MIT, but Minuit is GPLv2. So, the Python part, >> that wraps the C code is MIT. > > The way I understand it, code that wraps GPLv2 code is required to use GPL, > and so propagate the license to all code using it. That would mean that code > that uses Minuit cannot be something other than GPL. Perhaps I am not > understanding something. GPLed code can be combined with code under a different license as long as that other license imposes no conditions more restrictive than those in the GPL. The MIT license is one such license. This is all perfectly above board. That said, as a practical matter, for a package that mostly just wraps the functionality of the internal GPLed component, licensing that wrapper under the MIT license leads to confusion, as we have seen. -- Robert Kern From hkhan003 at fiu.edu Thu Feb 6 19:21:36 2014 From: hkhan003 at fiu.edu (Hari) Date: Thu, 6 Feb 2014 19:21:36 -0500 Subject: [SciPy-User] fit with 3 parameters ok, fit with 4 is going wrong! Message-ID: <52f42718.ca628c0a.a368.261e@mx.google.com> -----Original Message----- From: "Robert Kern" Sent: 2/6/2014 6:14 PM To: "SciPy Users List" Subject: Re: [SciPy-User] fit with 3 parameters ok, fit with 4 is going wrong! On Thu, Feb 6, 2014 at 9:20 PM, Matt Newville wrote: > Hi David, > > > On Thu, Feb 6, 2014 at 2:20 PM, Daπid wrote: >> >> On 6 February 2014 18:55, Matt Newville >> wrote: >>> >>> I'm interested, but slightly confused. Iminuit / probfit is described >>> as BSD, but depends on (and includes?) Minuit - isn't that GPL? >> >> >> The license says iminuit is MIT, but Minuit is GPLv2.
So, the Python part, >> that wraps the C code is MIT. > > The way I understand it, code that wraps GPLv2 code is required to use GPL, > and so propagate the license to all code using it. That would mean that code > that uses Minuit cannot be something other than GPL. Perhaps I am not > understanding something. GPLed code can be combined with code under a different license as long as that other license imposes no conditions more restrictive than those in the GPL. The MIT license is one such license. This is all perfectly above board. That said, as a practical matter, for a package that mostly just wraps the functionality of the internal GPLed component, licensing that wrapper under the MIT license leads to confusion, as we have seen. -- Robert Kern _______________________________________________ SciPy-User mailing list SciPy-User at scipy.org http://mail.scipy.org/mailman/listinfo/scipy-user -------------- next part -------------- An HTML attachment was scrubbed... URL: From Jerome.Kieffer at esrf.fr Fri Feb 7 04:08:23 2014 From: Jerome.Kieffer at esrf.fr (Jerome Kieffer) Date: Fri, 7 Feb 2014 10:08:23 +0100 Subject: [SciPy-User] downsampling with max in 2d In-Reply-To: <52F3F673.3060007@eecs.qmul.ac.uk> References: <52F3F673.3060007@eecs.qmul.ac.uk> Message-ID: <20140207100823.44c456b6.Jerome.Kieffer@esrf.fr> On Thu, 06 Feb 2014 20:54:11 +0000 Dan Stowell wrote: > Hi all, > > I would like to downsample 2D arrays using max-pooling. In other words, > to downsample by a factor of 2 would reduce > This is very close to binning ... 
Here are a couple of pure numpy implementations: if your binning is small (up to 8x8), go for a double loop like this:

out = numpy.zeros(tuple(outputSize))
for i in xrange(binsize[0]):
    for j in xrange(binsize[1]):
        out = numpy.maximum(input_img[i::binsize[0], j::binsize[1]], out)

If your order is large:

temp = input_img.copy()
temp.shape = (outputSize[0], binsize[0], outputSize[1], binsize[1])
out = temp.max(axis=3).max(axis=1)

Anyway the size of the input array does not matter. HTH. -- Jérôme Kieffer tel +33 476 882 445 From newville at cars.uchicago.edu Fri Feb 7 08:03:48 2014 From: newville at cars.uchicago.edu (Matt Newville) Date: Fri, 7 Feb 2014 07:03:48 -0600 Subject: [SciPy-User] fit with 3 parameters ok, fit with 4 is going wrong! In-Reply-To: References: Message-ID: On Feb 6, 2014 5:14 PM, "Robert Kern" wrote: > > On Thu, Feb 6, 2014 at 9:20 PM, Matt Newville > wrote: > > Hi David, > > > > > > On Thu, Feb 6, 2014 at 2:20 PM, Daπid wrote: > >> > >> On 6 February 2014 18:55, Matt Newville > >> wrote: > >>> > >>> I'm interested, but slightly confused. Iminuit / probfit is described > >>> as BSD, but depends on (and includes?) Minuit - isn't that GPL? > >> > >> > >> The license says iminuit is MIT, but Minuit is GPLv2. So, the Python part, > >> that wraps the C code is MIT. > > > > The way I understand it, code that wraps GPLv2 code is required to use GPL, > > and so propagate the license to all code using it. That would mean that code > > that uses Minuit cannot be something other than GPL. Perhaps I am not > > understanding something. > > GPLed code can be combined with code under a different license as long > as that other license imposes no conditions more restrictive than > those in the GPL. The MIT license is one such license. This is all > perfectly above board. Unless I am mistaken (not without precedent), the resulting mixed code must use the GPL. If library "M" is GPL and package "Mwrapper" calls it, "Mwrapper" must be GPL.
This license propagation is the distinguishing characteristic of GPL, and what differentiates it from LGPL and BSD/MIT. Do you disagree? > That said, as a practical matter, for a package that mostly just wraps > the functionality of the internal GPLed component, licensing that > wrapper under the MIT license leads to confusion, as we have seen. I think this does not lead to confusion, but to a license violation. The GPL is wordy, but not confusing. If your code wraps (uses, links to, imports, or calls) GPL code, your code must use the GPL. Not MIT. Then again, maybe something has changed and Minuit is not actually GPL anymore. My recollection is that Minuit and Root have been GPL forever. I've just ignored these tools, but I now see that parts of Root are now LGPL (not the core math routines, including the minimization code, which is based on GSL, and so GPL). Perhaps Minuit was abandoned by the CERN folks and the license was changed? I downloaded the Minuit package from http://www.cern.ch/mathlibs/sw/5_34_14/Minuit2/Minuit2-5.34.14.tar.gz but see no mention of any license anywhere in the source or documentation there -- perhaps I missed it. Is Minuit being developed separately now? --Matt -------------- next part -------------- An HTML attachment was scrubbed... URL: From robert.kern at gmail.com Fri Feb 7 08:22:11 2014 From: robert.kern at gmail.com (Robert Kern) Date: Fri, 7 Feb 2014 13:22:11 +0000 Subject: [SciPy-User] fit with 3 parameters ok, fit with 4 is going wrong! In-Reply-To: References: Message-ID: On Fri, Feb 7, 2014 at 1:03 PM, Matt Newville wrote: > > On Feb 6, 2014 5:14 PM, "Robert Kern" wrote: >> >> On Thu, Feb 6, 2014 at 9:20 PM, Matt Newville >> wrote: >> > Hi David, >> > >> > >> > On Thu, Feb 6, 2014 at 2:20 PM, Daπid wrote: >> >> >> >> On 6 February 2014 18:55, Matt Newville >> >> wrote: >> >>> >> >>> I'm interested, but slightly confused. Iminuit / probfit is >> >>> described >> >>> as BSD, but depends on (and includes?)
Minuit - isn't that GPL? >> >> >> >> >> >> The license says iminuit is MIT, but Minuit is GPLv2. So, the Python >> >> part, >> >> that wraps the C code is MIT. >> > >> > The way I understand it, code that wraps GPLv2 code is required to use >> > GPL, >> > and so propagate the license to all code using it. That would mean that >> > code >> > that uses Minuit cannot be something other than GPL. Perhaps I am not >> > understanding something. >> >> GPLed code can be combined with code under a different license as long >> as that other license imposes no conditions more restrictive than >> those in the GPL. The MIT license is one such license. This is all >> perfectly above board. > > Unless I am mistaken (not without precedent), the resulting mixed code must > use the GPL. If library "M" is GPL and package "Mwrapper" calls it, > "Mwrapper" must be GPL. This license propagation is the distinguishing > characteristic of GPL, and what differentiates it from LGPL and BSD/MIT. Do > you disagree? Yes. The combined work must be distributed under the GPL license. But each component can have a non-GPL-but-GPL-compatible license, like the MIT license. The parts of "Mwrapper" that are not "M" itself do not need to be GPLed. "Mwrapper+M" is GPLed. "Mwrapper" itself does not need to be (provided they are licensed compatibly). This is why I say that the *practical effect* of saying that "Mwrapper" is BSD-licensed is confusing. Users grab "Mwrapper+M" under the impression that "Mwrapper"'s license is the only one, when it is not. >> That said, as a practical matter, for a package that mostly just wraps >> the functionality of the internal GPLed component, licensing that >> wrapper under the MIT license leads to confusion, as we have seen. > > I think this does not lead to confusion, but to a license violation. The > GPL is wordy, but not confusing. If your code wraps (uses, links to, > imports, or calls) GPL code, your code must use the GPL. Not MIT. You have misread the GPL. 
Please consult the FSF's information about mixing code with GPL-compatible licenses. The propagation applies to the combined work as a whole, not every single piece of code in the combined work. -- Robert Kern From gb.gabrielebrambilla at gmail.com Fri Feb 7 11:47:31 2014 From: gb.gabrielebrambilla at gmail.com (Gabriele Brambilla) Date: Fri, 7 Feb 2014 11:47:31 -0500 Subject: [SciPy-User] Curve fit: strange documentation Message-ID: I think there is a misunderstanding in the documentation: *sigma* : None or N-length sequence If not None, this vector will be used as relative weights in the least-squares problem. usually sigma is the way in which you indicate an error of a measure and the weight for that measure is w=1/sigma. this vector is the vector of errors or the vector of weights? thanks Gabriele -------------- next part -------------- An HTML attachment was scrubbed... URL: From josef.pktd at gmail.com Fri Feb 7 11:57:03 2014 From: josef.pktd at gmail.com (josef.pktd at gmail.com) Date: Fri, 7 Feb 2014 11:57:03 -0500 Subject: [SciPy-User] Curve fit: strange documentation In-Reply-To: References: Message-ID: On Fri, Feb 7, 2014 at 11:47 AM, Gabriele Brambilla < gb.gabrielebrambilla at gmail.com> wrote: > I think there is a misunderstanding in the documentation: > > > *sigma* : None or N-length sequence > > If not None, this vector will be used as relative weights in the > least-squares problem. > > > usually sigma is the way in which you indicate an error of a measure and > the weight for that measure is w=1/sigma. > > this vector is the vector of errors or the vector of weights? > "If not None, this vector will be used as relative inverse weights in the least-squares problem." data is transformed by (y - f(x)) * 1 / sigma which would make it clearer if that were in the docstring. 
Josef > > thanks > > > Gabriele > > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From vaggi.federico at gmail.com Fri Feb 7 13:37:17 2014 From: vaggi.federico at gmail.com (federico vaggi) Date: Fri, 7 Feb 2014 19:37:17 +0100 Subject: [SciPy-User] Curve fit: strange documentation (josef.pktd@gmail.com) Message-ID: > > In weighted least squares, the residuals are weighted by their standard > deviation - so it's basically the same thing. http://en.wikipedia.org/wiki/Weighted_least_squares#Weighted_least_squares -------------- next part -------------- An HTML attachment was scrubbed... URL: From josef.pktd at gmail.com Fri Feb 7 13:59:44 2014 From: josef.pktd at gmail.com (josef.pktd at gmail.com) Date: Fri, 7 Feb 2014 13:59:44 -0500 Subject: [SciPy-User] Curve fit: strange documentation (josef.pktd@gmail.com) In-Reply-To: References: Message-ID: On Fri, Feb 7, 2014 at 1:37 PM, federico vaggi wrote: > In weighted least squares, the residuals are weighted by their standard >> deviation - so it's basically the same thing. > > http://en.wikipedia.org/wiki/Weighted_least_squares#Weighted_least_squares > > > It's not a question of whether they are "basically the same thing", it's a question of what the (imprecise) terms in the docstring actually mean. `WLS-weights` = 1/sigma**2 but the transformation uses w = 1/sigma, or maybe weights just refers to general weighting, not to WLS-weights (natural language usage instead of formal definition). Is it squared or not, and is it the inverse or not? sigma is the standard deviation, except it's only proportional because we scale it (by default) with the residual standard deviation.
Josef > > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From newville at cars.uchicago.edu Fri Feb 7 16:42:03 2014 From: newville at cars.uchicago.edu (Matt Newville) Date: Fri, 7 Feb 2014 15:42:03 -0600 Subject: [SciPy-User] fit with 3 parameters ok, fit with 4 is going wrong! In-Reply-To: References: Message-ID: On Fri, Feb 7, 2014 at 7:22 AM, Robert Kern wrote: > On Fri, Feb 7, 2014 at 1:03 PM, Matt Newville > wrote: > > > > On Feb 6, 2014 5:14 PM, "Robert Kern" wrote: > >> > >> On Thu, Feb 6, 2014 at 9:20 PM, Matt Newville > >> wrote: > >> > Hi David, > >> > > >> > > >> > On Thu, Feb 6, 2014 at 2:20 PM, Daπid wrote: > >> >> > >> >> On 6 February 2014 18:55, Matt Newville > >> >> wrote: > >> >>> > >> >>> I'm interested, but slightly confused. Iminuit / probfit is > >> >>> described > >> >>> as BSD, but depends on (and includes?) Minuit - isn't that GPL? > >> >> > >> >> > >> >> The license says iminuit is MIT, but Minuit is GPLv2. So, the Python > >> >> part, > >> >> that wraps the C code is MIT. > >> > > >> > The way I understand it, code that wraps GPLv2 code is required to use > >> > GPL, > >> > and so propagate the license to all code using it. That would mean > that > >> > code > >> > that uses Minuit cannot be something other than GPL. Perhaps I am > not > >> > understanding something. > >> > >> GPLed code can be combined with code under a different license as long > >> as that other license imposes no conditions more restrictive than > >> those in the GPL. The MIT license is one such license. This is all > >> perfectly above board. > > > > Unless I am mistaken (not without precedent), the resulting mixed code > must > > use the GPL. If library "M" is GPL and package "Mwrapper" calls it, > > "Mwrapper" must be GPL.
This license propagation is the distinguishing > > characteristic of GPL, and what differentiates it from LGPL and BSD/MIT. > Do > > you disagree? > > Yes. The combined work must be distributed under the GPL license. But > each component can have a non-GPL-but-GPL-compatible license, like the > MIT license. The parts of "Mwrapper" that are not "M" itself do not > need to be GPLed. "Mwrapper+M" is GPLed. "Mwrapper" itself does not > need to be (provided they are licensed compatibly). > > The parts of Mwrapper that use M must be GPL. > This is why I say that the *practical effect* of saying that > "Mwrapper" is BSD-licensed is confusing. Users grab "Mwrapper+M" under > the impression that "Mwrapper"'s license is the only one, when it is > not. > > >> That said, as a practical matter, for a package that mostly just wraps > >> the functionality of the internal GPLed component, licensing that > >> wrapper under the MIT license leads to confusion, as we have seen. > > > > I think this does not lead to confusion, but to a license violation. The > > GPL is wordy, but not confusing. If your code wraps (uses, links to, > > imports, or calls) GPL code, your code must use the GPL. Not MIT. > > You have misread the GPL. Please consult the FSF's information about > mixing code with GPL-compatible licenses. The propagation applies to > the combined work as a whole, not every single piece of code in the > combined work. > Perhaps this is a bit off-topic, but I believe that I have not misread the GPL. I do agree that it does not apply to every single piece of code in the combined work. But I believe (and, sorry but I am not convinced by you saying otherwise, without strong evidence) it does apply to every piece of code that **uses** the GPL code, including making library calls. That is, a wrapper that does

import gpl_licensed_lib
gpl_licensed_lib.function()

must use the GPL.
For sure, it can also do

import bsd_licensed_lib
import lgpl_licensed_lib

without requiring the code in those other libraries to change to use GPL. Indeed, BSD code can be and is regularly used by GPL code. But if one is distributing a script or module that imports gpl_licensed_lib (which, effectively, links to it), that script or module is required to be GPL. That script or module is "the combined work as a whole" -- it is what is being distributed for others to use. The language on compatible licenses from the FSF is discussing what licenses are compatible with GPL in the sense that "it is OK to include BSD licensed code (eg, X11) in GPL code". It most certainly does not grant permission to distribute code under a BSD license if that code calls a GPL library. The LGPL does allow that, but not the GPL. Sorry to be off-topic and slightly ranty, but this is an important (if subtle) distinction. --Matt -------------- next part -------------- An HTML attachment was scrubbed... URL: From gb.gabrielebrambilla at gmail.com Fri Feb 7 21:58:46 2014 From: gb.gabrielebrambilla at gmail.com (Gabriele Brambilla) Date: Fri, 7 Feb 2014 21:58:46 -0500 Subject: [SciPy-User] covariance matrix returns inf Message-ID: Hi, I am performing a fit with this code, and it fits well, but at the end when I try to print the covariance matrix it returns inf:

def funk(x0, a, b, c, d):
    return a - b * x0 - log10(e)*((10**x0)/c)**b

# I should take away the 0 values in y and the corresponding ones in x
i = 0
zeroenemy = []
for o in y:
    if o == 0:
        zeroenemy.append(i)
    i = i + 1
newx = np.delete(x, zeroenemy)
newy = np.delete(y, zeroenemy)
x = newx
y = newy

# conversion in log scale
logy = np.log10(y)
logx = np.log10(x)

maxi = y.max()
Alog = log(maxi)
gamma = 2/3
Ecut = 122243792
b = 1
guess = [Alog, gamma, Ecut, b]

print('before fit')
params, params_covariance = optimize.curve_fit(funk, logx, logy, guess, logy/1000)  #, maxfev=10000
print('paramcov')
print(params_covariance)

I attach the data I use for x and y
and a plot of the data and the fitted curve [image: Inline image 1] thanks Gabriele -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: figure_1.png Type: image/png Size: 26109 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: xdata1.dat Type: application/octet-stream Size: 1749 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: ydata1.dat Type: application/octet-stream Size: 1849 bytes Desc: not available URL: From josef.pktd at gmail.com Fri Feb 7 22:36:05 2014 From: josef.pktd at gmail.com (josef.pktd at gmail.com) Date: Fri, 7 Feb 2014 22:36:05 -0500 Subject: [SciPy-User] covariance matrix returns inf In-Reply-To: References: Message-ID: On Fri, Feb 7, 2014 at 9:58 PM, Gabriele Brambilla < gb.gabrielebrambilla at gmail.com> wrote: > Hi, > I am performing a fit with this code, and it fits well but at the end when > I try to print the covariance matrix it returns inf: > If you get an inf back, that means that the estimated covariance matrix is singular or not (strictly) positive definite. This can be because it's really singular, or because of numerical problems, bad scaling or because the numerical derivatives are not very good. If you have an analytical derivative it might help. Ecut looks much too large in scale compared to the other ones. Numerical derivative might be useless if it doesn't scale correctly, which I think it doesn't do.
Josef > > def funk(x0, a, b, c, d): > > return a - b * x0 - log10(e)*((10**x0)/c)**b > > > # I should take away the 0 values in y and the corresponding ones > in x > > i=0 > > zeroenemy=[] > > for o in y: > > if o==0: > > zeroenemy.append(i) > > i=i+1 > > newx = np.delete(x, zeroenemy) > > newy = np.delete(y, zeroenemy) > > x = newx > > y = newy > > #conversion in log scale > > logy = np.log10(y) > > logx = np.log10(x) > > maxi = y.max() > > Alog = log(maxi) > > gamma = 2/3 > > Ecut = 122243792 > > b = 1 > > guess = [Alog, gamma, Ecut, b] > > > print('before fit') > > params, params_covariance = optimize.curve_fit(funk, logx, logy, > guess, logy/1000)#, maxfev=10000 > > print('paramcov') > > print(params_covariance) > > I attach the data I use for x and y and a plot of the data and the fitted > curve > > > [image: Inline image 1] > thanks > > Gabriele > > > > > > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user > > -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed...
Name: figure_1.png Type: image/png Size: 26109 bytes Desc: not available URL: From argriffi at ncsu.edu Fri Feb 7 23:12:17 2014 From: argriffi at ncsu.edu (alex) Date: Fri, 7 Feb 2014 23:12:17 -0500 Subject: [SciPy-User] covariance matrix returns inf Message-ID: > Message: 1 > Date: Fri, 7 Feb 2014 22:36:05 -0500 > From: josef.pktd at gmail.com > Subject: Re: [SciPy-User] covariance matrix returns inf > To: SciPy Users List > Message-ID: > > Content-Type: text/plain; charset="iso-8859-1" > > On Fri, Feb 7, 2014 at 9:58 PM, Gabriele Brambilla < > gb.gabrielebrambilla at gmail.com> wrote: > >> Hi, >> I am performing a fit with this code, and it fits well but at the end when >> I try to print the covariance matrix it returns inf: >> > > If you get an inf back, that means that the estimated covariance matrix is > singular or not (strictly) positive definite. > This can be because it's really singular, or because of numerical problems, > bad scaling or because the numerical derivatives are not very good. > > If you have an analytical derivative it might help. > > Ecut looks much too large in scale compared to the other ones. Numerical > derivative might be useless if it doesn't scale correctly, which I think it > doesn't do. > > Josef > > > >> >> def funk(x0, a, b, c, d): >> >> return a - b * x0 - log10(e)*((10**x0)/c)**b Is that last b supposed to be d? If d makes no appearance in this function then d would have infinite uncertainty so this could make your covariance matrix inf.
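[Editorial note: alex's diagnosis is easy to reproduce in isolation. The sketch below uses synthetic data and a hypothetical model, not the attached files or Gabriele's funk(): the model accepts a parameter d that never appears in its body, so the corresponding Jacobian column is identically zero and SciPy cannot estimate the covariance, filling it with inf.]

```python
import warnings
import numpy as np
from scipy.optimize import curve_fit

# Hypothetical model: 'd' is accepted but never used, mirroring the stray
# parameter alex spotted in the thread's funk().
def model(x, a, b, d):
    return a + b * x

x = np.linspace(0.0, 1.0, 20)
y = 1.0 + 2.0 * x                      # exact data generated with a=1, b=2

with warnings.catch_warnings():
    warnings.simplefilter("ignore")    # curve_fit warns the cov can't be estimated
    popt, pcov = curve_fit(model, x, y, p0=[0.0, 0.0, 1.0])

print(np.isinf(pcov).any())            # True: the unused 'd' makes the fit singular
```

The fitted a and b still come out fine; only the covariance is lost. Recent SciPy versions typically also emit an OptimizeWarning saying the covariance of the parameters could not be estimated.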
From newville at cars.uchicago.edu Fri Feb 7 23:42:06 2014 From: newville at cars.uchicago.edu (Matt Newville) Date: Fri, 7 Feb 2014 22:42:06 -0600 Subject: [SciPy-User] covariance matrix returns inf In-Reply-To: References: Message-ID: Josef, On Fri, Feb 7, 2014 at 9:36 PM, wrote: > > > On Fri, Feb 7, 2014 at 9:58 PM, Gabriele Brambilla < > gb.gabrielebrambilla at gmail.com> wrote: > >> Hi, >> I am performing a fit with this code, and it fits well but at the end >> when I try to print the covariance matrix it returns inf: >> > > If you get an inf back, that means that the estimated covariance matrix is > singular or not (strictly) positive definite. > This can be because it's really singular, or because of numerical > problems, bad scaling or because the numerical derivatives are not very > good. > > If you have an analytical derivative it might help. > > Ecut looks much too large in scale compared to the other ones. Numerical > derivative might be useless if it doesn't scale correctly, which I think it > doesn't do. > > Josef > Since Gabriele is using (10**x0)/c with x0 = log10(x) and x ~ 1e8 to 5e10 (so that x0 is order 10), I don't think the value of Ecut (aka 'c') of order 1e8 is too large. As Alex points out, the fact that 'd' is not actually used is a bigger problem for getting a finite covariance matrix, and a likely typo. But to be clear, curve_fit() uses leastsq(), which uses MINPACK's lmdif(), which does scale the parameters to be of similar magnitude internally. It is not uncommon for parameter values differing by 10 orders of magnitude to be fitted well, with best-fit values and covariance within machine precision of those obtained when scaling the parameters by hand ahead of time. In short, differences in scaling of parameters are not likely to cause a singular covariance matrix from leastsq()/curve_fit(). --Matt -------------- next part -------------- An HTML attachment was scrubbed...
URL: From robert.kern at gmail.com Sat Feb 8 06:01:54 2014 From: robert.kern at gmail.com (Robert Kern) Date: Sat, 8 Feb 2014 11:01:54 +0000 Subject: [SciPy-User] fit with 3 parameters ok, fit with 4 is going wrong! In-Reply-To: References: Message-ID: On Fri, Feb 7, 2014 at 9:42 PM, Matt Newville wrote: > > On Fri, Feb 7, 2014 at 7:22 AM, Robert Kern wrote: >> >> On Fri, Feb 7, 2014 at 1:03 PM, Matt Newville >> wrote: >> > >> > On Feb 6, 2014 5:14 PM, "Robert Kern" wrote: >> >> >> >> On Thu, Feb 6, 2014 at 9:20 PM, Matt Newville >> >> wrote: >> >> > Hi David, >> >> > >> >> > >> >> > On Thu, Feb 6, 2014 at 2:20 PM, Daπid wrote: >> >> >> >> >> >> On 6 February 2014 18:55, Matt Newville >> >> >> wrote: >> >> >>> >> >> >>> I'm interested, but slightly confused. Iminuit / probfit is >> >> >>> described >> >> >>> as BSD, but depends on (and includes?) Minuit - isn't that GPL? >> >> >> >> >> >> >> >> >> The license says iminuit is MIT, but Minuit is GPLv2. So, the Python >> >> >> part, >> >> >> that wraps the C code is MIT. >> >> > >> >> > The way I understand it, code that wraps GPLv2 code is required to >> >> > use >> >> > GPL, >> >> > and so propagate the license to all code using it. That would mean >> >> > that >> >> > code >> >> > that uses Minuit cannot be something other than GPL. Perhaps I am >> >> > not >> >> > understanding something. >> >> >> >> GPLed code can be combined with code under a different license as long >> >> as that other license imposes no conditions more restrictive than >> >> those in the GPL. The MIT license is one such license. This is all >> >> perfectly above board. >> > >> > Unless I am mistaken (not without precedent), the resulting mixed code >> > must >> > use the GPL. If library "M" is GPL and package "Mwrapper" calls it, >> > "Mwrapper" must be GPL. This license propagation is the distinguishing >> > characteristic of GPL, and what differentiates it from LGPL and BSD/MIT. >> > Do >> > you disagree? >> >> Yes.
The combined work must be distributed under the GPL license. But >> each component can have a non-GPL-but-GPL-compatible license, like the >> MIT license. The parts of "Mwrapper" that are not "M" itself do not >> need to be GPLed. "Mwrapper+M" is GPLed. "Mwrapper" itself does not >> need to be (provided they are licensed compatibly). >> > > The parts of Mwrapper that use M must be GPL. > >> >> This is why I say that the *practical effect* of saying that >> "Mwrapper" is BSD-licensed is confusing. Users grab "Mwrapper+M" under >> the impression that "Mwrapper"'s license is the only one, when it is >> not. >> >> >> That said, as a practical matter, for a package that mostly just wraps >> >> the functionality of the internal GPLed component, licensing that >> >> wrapper under the MIT license leads to confusion, as we have seen. >> > >> > I think this does not lead to confusion, but to a license violation. >> > The >> > GPL is wordy, but not confusing. If your code wraps (uses, links to, >> > imports, or calls) GPL code, your code must use the GPL. Not MIT. >> >> You have misread the GPL. Please consult the FSF's information about >> mixing code with GPL-compatible licenses. The propagation applies to >> the combined work as a whole, not every single piece of code in the >> combined work. > > Perhaps this is a bit off-topic, but I believe that I have not misread the > GPL. I do agree that it does not apply to every single piece of code in the > combined work. But I believe (and, sorry but I am not convinced by you > saying otherwise, without strong evidence) it does apply to every piece of > code that **uses** the GPL code, including making library calls. Section 7 of the GPLv3: """ You may place additional permissions on material, added by you to a covered work, for which you have or can give appropriate copyright permission. """ Applying the BSD license to the portions of the code that you wrote is providing just such additional permissions. 
-- Robert Kern From gb.gabrielebrambilla at gmail.com Sat Feb 8 10:23:12 2014 From: gb.gabrielebrambilla at gmail.com (Gabriele Brambilla) Date: Sat, 8 Feb 2014 10:23:12 -0500 Subject: [SciPy-User] covariance matrix returns inf In-Reply-To: References: Message-ID: thank you guys! the problem was that I typed a b that should have been a d! But I am curious: how did you discover it? It was the last thing I was thinking of... Gabriele 2014-02-07 23:42 GMT-05:00 Matt Newville : > Josef, > > On Fri, Feb 7, 2014 at 9:36 PM, wrote: > >> >> >> On Fri, Feb 7, 2014 at 9:58 PM, Gabriele Brambilla < >> gb.gabrielebrambilla at gmail.com> wrote: >> >>> Hi, >>> I am performing a fit with this code, and it fits well, but at the end, >>> when I try to print the covariance matrix, it returns inf: >>> >> >> If you get an inf back, that means that the estimated covariance matrix >> is singular or not (strictly) positive definite. >> This can be because it's really singular, or because of numerical >> problems, bad scaling, or because the numerical derivatives are not very >> good. >> >> If you have an analytical derivative it might help. >> >> Ecut looks much too large in scale compared to the other ones. The numerical >> derivative might be useless if it doesn't scale correctly, which I think it >> doesn't. >> >> Josef >> > > > Since Gabriele is using (10**x0)/c with x0 = log10(x) and x ~ 1e8 to 5e10 > (so that x0 is of order 10), I don't think the value of Ecut (aka 'c') of > order 1e8 is too large. As Alex points out, the fact that 'd' is not > actually used is a bigger problem for getting a finite covariance matrix, > and a likely typo. > > But to be clear, curve_fit() uses leastsq(), which uses MINPACK's lmdif(), which > does scale the parameters to be of similar magnitude internally.
It is not > uncommon to have parameter values differing by 10 orders of magnitude be > fitted well, with best-fit values and covariance within machine > precision of those found when scaling the parameters by hand ahead of time. In > short, differences in the scaling of parameters are not likely to cause a > singular covariance matrix from leastsq()/curve_fit(). > > --Matt > > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From davidmenhur at gmail.com Sat Feb 8 10:39:56 2014 From: davidmenhur at gmail.com (=?UTF-8?B?RGHPgGlk?=) Date: Sat, 8 Feb 2014 16:39:56 +0100 Subject: [SciPy-User] covariance matrix returns inf In-Reply-To: References: Message-ID: On 8 February 2014 16:23, Gabriele Brambilla wrote: > the problem was that I typed a b that should have been a d! > but I am curious: how did you discover it? It was the last thing > I was thinking of... > And that is exactly why you didn't see it. :) Most bugs are one of three types: misuse of libraries (because one didn't understand correctly how to use them), actual bugs in the library, and wrong assumptions about the code. When debugging, we make assumptions about the code (for example, I assume that the line zeroenemy.append(i) works perfectly, i.e., there is no bug in Python's append). For someone unfamiliar with the code, making fewer assumptions is more natural. When the assumptions you make are invalid, bugs appear; when you no longer make those assumptions, bugs are obvious. Sometimes it helps to explain the problem to someone else, walking through the code. Even if that person doesn't know how to program, or what it is about, the sheer act of explaining will help you realise what your assumptions are and investigate whether they are actually true.
Actually, many people, when facing a bug, explain the situation to a teddy bear or a rubber duck ( http://en.wikipedia.org/wiki/Rubber_duck_debugging) [my personal preference is a platypus plushie]. Even if they are not very talkative, they can stare at you while you figure out what your wrong assumption was. Still, this will not help if the bug were actually in Scipy, but it could help you narrow down the problem, test the function, and see that it was actually buggy. /David. -------------- next part -------------- An HTML attachment was scrubbed... URL: From newville at cars.uchicago.edu Sat Feb 8 10:44:11 2014 From: newville at cars.uchicago.edu (Matt Newville) Date: Sat, 8 Feb 2014 09:44:11 -0600 Subject: [SciPy-User] covariance matrix returns inf In-Reply-To: References: Message-ID: On Sat, Feb 8, 2014 at 9:23 AM, Gabriele Brambilla < gb.gabrielebrambilla at gmail.com> wrote: > thank you guys! > the problem was that I typed a b that should have been a d! > but I am curious: how did you discover it? It was the last thing > I was thinking of... > > Gabriele > For fitting with curve_fit() et al, a singular covariance matrix almost always means one of your variables is not actually altering the fit. Experience shows that an error is most likely (in fact, almost certainly) in code one has written, and the fewer times the code has been run, the more likely it is to have typos. For myself, if I write 50 lines of code, I am certain there will be typos, and I use the initial runs to find them. For example, if you had printed starting and ending values for your variables, you probably would have seen that the value for 'd' in funk (confusingly called 'b' in the main program) did not change at all. Make your code as readable and simple as possible, with meaningful and consistent variable names. It's also useful to take the time to learn library features, such as numpy's where(). --Matt -------------- next part -------------- An HTML attachment was scrubbed...
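[A minimal sketch of the failure mode discussed in this thread. The original code is not shown in the archive, so the model expression, the data, and the starting values below are invented; only the function name funk and the inert parameter 'd' come from the discussion. The point is that a parameter that never appears in the model expression has an identically zero Jacobian column, so curve_fit() cannot estimate the covariance and returns a matrix filled with inf.]

```python
import warnings

import numpy as np
from scipy.optimize import curve_fit

# Hypothetical model resembling the one discussed: parameter d is
# accepted but, by mistake, never used in the expression.
def funk(x, a, b, c, d):
    return a * np.exp(-x / c) + b   # 'd' is missing here

x = np.linspace(0.0, 10.0, 50)
rng = np.random.default_rng(0)
y = 3.0 * np.exp(-x / 2.0) + 1.0 + 0.01 * rng.standard_normal(x.size)

with warnings.catch_warnings():
    warnings.simplefilter("ignore")  # curve_fit warns that pcov is unreliable
    popt, pcov = curve_fit(funk, x, y, p0=[1.0, 1.0, 1.0, 1.0])

print(np.isinf(pcov).all())  # True: the covariance could not be estimated
print(popt[3])               # d stays at its starting value -- it never moved
```

Printing the starting and ending parameter values, as suggested above, makes the inert parameter obvious: all parameters move except the unused one.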
URL: From newville at cars.uchicago.edu Sun Feb 9 22:14:07 2014 From: newville at cars.uchicago.edu (Matt Newville) Date: Sun, 9 Feb 2014 21:14:07 -0600 Subject: [SciPy-User] fit with 3 parameters ok, fit with 4 is going wrong! In-Reply-To: References: Message-ID: On Sat, Feb 8, 2014 at 5:01 AM, Robert Kern wrote: > > On Fri, Feb 7, 2014 at 9:42 PM, Matt Newville > wrote: > > > > On Fri, Feb 7, 2014 at 7:22 AM, Robert Kern wrote: > >> > >> On Fri, Feb 7, 2014 at 1:03 PM, Matt Newville > >> wrote: > >> > > >> > On Feb 6, 2014 5:14 PM, "Robert Kern" wrote: > >> >> > >> >> On Thu, Feb 6, 2014 at 9:20 PM, Matt Newville > >> >> wrote: > >> >> > Hi David, > >> >> > > >> >> > > >> >> > On Thu, Feb 6, 2014 at 2:20 PM, Da?id wrote: > >> >> >> > >> >> >> On 6 February 2014 18:55, Matt Newville > >> >> >> wrote: > >> >> >>> > >> >> >>> I'm interested, but slightly confused. Iminuit / probfit is > >> >> >>> described > >> >> >>> as BSD, but depends on (and includes?) Minuit - isn't that GPL? > >> >> >> > >> >> >> > >> >> >> The license says iminuit is MIT, but Minuit is GPLv2. So, the Python > >> >> >> part, > >> >> >> that wraps the C code is MIT. > >> >> > > >> >> > The way I understand it, code that wraps GPLv2 code is required to > >> >> > use > >> >> > GPL, > >> >> > and so propagate the license to all code using it. That would mean > >> >> > that > >> >> > code > >> >> > that uses Minuit cannot be something other than GPL. Perhaps I am > >> >> > not > >> >> > understanding something. > >> >> > >> >> GPLed code can be combined with code under a different license as long > >> >> as that other license imposes no conditions more restrictive than > >> >> those in the GPL. The MIT license is one such license. This is all > >> >> perfectly above board. > >> > > >> > Unless I am mistaken (not without precedent), the resulting mixed code > >> > must > >> > use the GPL. If library "M" is GPL and package "Mwrapper" calls it, > >> > "Mwrapper" must be GPL. 
This license propagation is the distinguishing > >> > characteristic of GPL, and what differentiates it from LGPL and BSD/MIT. > >> > Do > >> > you disagree? > >> > >> Yes. The combined work must be distributed under the GPL license. But > >> each component can have a non-GPL-but-GPL-compatible license, like the > >> MIT license. The parts of "Mwrapper" that are not "M" itself do not > >> need to be GPLed. "Mwrapper+M" is GPLed. "Mwrapper" itself does not > >> need to be (provided they are licensed compatibly). > >> > > > > The parts of Mwrapper that use M must be GPL. > > > >> > >> This is why I say that the *practical effect* of saying that > >> "Mwrapper" is BSD-licensed is confusing. Users grab "Mwrapper+M" under > >> the impression that "Mwrapper"'s license is the only one, when it is > >> not. > >> > >> >> That said, as a practical matter, for a package that mostly just wraps > >> >> the functionality of the internal GPLed component, licensing that > >> >> wrapper under the MIT license leads to confusion, as we have seen. > >> > > >> > I think this does not lead to confusion, but to a license violation. > >> > The > >> > GPL is wordy, but not confusing. If your code wraps (uses, links to, > >> > imports, or calls) GPL code, your code must use the GPL. Not MIT. > >> > >> You have misread the GPL. Please consult the FSF's information about > >> mixing code with GPL-compatible licenses. The propagation applies to > >> the combined work as a whole, not every single piece of code in the > >> combined work. > > > > Perhaps this is a bit off-topic, but I believe that I have not misread the > > GPL. I do agree that it does not apply to every single piece of code in the > > combined work. But I believe (and, sorry but I am not convinced by you > > saying otherwise, without strong evidence) it does apply to every piece of > > code that **uses** the GPL code, including making library calls. 
> > Section 7 of the GPLv3: > > """ > You may place additional permissions on material, added by you to a > covered work, for which you have or can give appropriate copyright > permission. > """ > > Applying the BSD license to the portions of the code that you wrote is > providing just such additional permissions. I'm pretty sure that would actually violate the GPL. This is a bit off-topic, and can be a politically-charged topic. Truly, I mean no ill-will, am not looking for an argument, and would be willing to end the discussion. My initial reaction on reading this is that I have been dealing with this issue for well over a decade, and am really very certain that I understand these licenses, and that your reading is dangerously mistaken. But, I've also been doing science and software long enough to see my own certainty as a red flag -- being dead certain is the first sign of being dead wrong. So, as you suggest, let's look only to texts from the gnu.org sites. Evidence and explanations at Wikipedia, Stack Exchange, and the like are often in plainer language, and uniformly come to a common conclusion: code that links to a GPL library is a derived work, and is required to use the GPL. But, let's call that hearsay, and re-read the writings from gnu.org with open minds, looking for clarification. For example, https://www.gnu.org/licenses/gpl-faq.html#GPLStaticVsDynamic Q: Does the GPL have different requirements for statically vs dynamically linked modules with a covered work? A: No. Linking a GPL covered work statically or dynamically with other modules is making a combined work based on the GPL covered work. Thus, the terms and conditions of the GNU General Public License cover the whole combination. For myself, that is clear enough to effectively end the discussion: linking to a GPL library makes the work GPL. A Python script that has "import pygsl", "import pyfftw" or "import PyQt4" is linking to one of those GPL libraries, and so is required to use the GPL.
If that is not yet clear enough, consider why the LGPL exists at all. From the preamble of the LGPL (http://www.gnu.org/licenses/lgpl-2.1.html), describing this very subject: When a program is linked with a library, whether statically or using a shared library, the combination of the two is legally speaking a combined work, a derivative of the original library. The ordinary General Public License therefore permits such linking only if the entire combination fits its criteria of freedom. The Lesser General Public License permits more lax criteria for linking other code with the library. Here, of course, 'fits its criteria of freedom' means 'uses the GPL'. The point being made here is to explain that the purpose of the LGPL is to relax the strict requirement of license propagation that does exist for the GPL. You can also read the entirety of http://www.gnu.org/philosophy/why-not-lgpl.html, which explains clearly that code that links to a GPL library must use the GPL, and that the purpose of the LGPL is to relax this requirement. For completeness, this point is succinctly if tortuously stated in the GPL itself: GPLv3 section 5c (similar wording exists in GPLv2 section 2c) reads: You may convey a work based on the Program, or the modifications to produce it from the Program, in the form of source code under the terms of section 4, provided that you also meet all of these conditions: c) You must license the entire work, as a whole, under this License to anyone who comes into possession of a copy. This License will therefore apply, along with any applicable section 7 additional terms, to the whole of the work, and all its parts, regardless of how they are packaged. This License gives no permission to license the work in any other way, but it does not invalidate such permission if you have separately received it. The intent seems clear enough to me. Do you read this differently?
Earlier, you correctly said "The propagation [of GPL] applies to the combined work as a whole", as this section states rather clearly. A python module or script is a "combined work" or "program" here -- copyrightable source code that can be run and distributed. By importing a GPL covered module, one is dynamically linking to it, and the GPL used by the imported library applies to the importing program/script/module. To be sure, any code that might also be used in said script but which does not directly import/link to a GPL library does not need to be GPLed. But the calling script (the equivalent of main() -- the "combined work") most certainly does. You quoted from Section 7 of the GPL, but I believe you misunderstood "for which you have or can give appropriate copyright permission". You (generally speaking) do not have copyright permission needed to be able to change the license under which you use the library that you are linking to. And the license for linking to that library requires that your work use the GPL. This section is new to GPLv3. According to https://www.gnu.org/licenses/quick-guide-gplv3.html, the license compatibility language was clarified in GPLv3: Clarifying License Compatibility If you found some code and wanted to incorporate it into a GPLed project, GPLv2 said that the license on the other code was not allowed to have any restrictions that were not already in GPLv2. As long as that was the case, we said the license was GPL-compatible. However, some licenses had requirements that weren't really restrictive, because they were so easy to comply with. For example, some licenses say that they don't give you permission to use certain trademarks. That's not really an additional restriction: if that clause wasn't there, you still wouldn't have permission to use the trademark. We always said those licenses were compatible with GPLv2, too. Now, GPLv3 explicitly gives everyone permission to use code that has requirements like this. 
These new terms should help clear up misunderstandings about which licenses are GPL-compatible, why that is, and what you can do with GPL-compatible code. Again, the sense of "compatible" here is to say what code is allowed to be used by GPL code, not what license one may choose for derived work. It is not granting one the right to use a BSD license for code that links to a GPL library. The propagation of the GPL to calling code is the essence of the GPL, and the distinguishing characteristic between GPL, LGPL and the BSD/MIT licenses. Happily, the scipy pages on license compatibility are clear about *not* including GPL (or even LGPL) code in scipy. Trying to come back on topic, I still can't figure out what license Minuit actually uses. The iminuit package describes it one place as LGPL, but I don't see that in Minuit itself. Sorry this was so long and slightly off topic. Really, I am not looking for an argument, but getting this right (as scipy does) is important. -- --Matt Newville From robert.kern at gmail.com Mon Feb 10 06:01:50 2014 From: robert.kern at gmail.com (Robert Kern) Date: Mon, 10 Feb 2014 11:01:50 +0000 Subject: [SciPy-User] fit with 3 parameters ok, fit with 4 is going wrong! In-Reply-To: References: Message-ID: On Mon, Feb 10, 2014 at 3:14 AM, Matt Newville wrote: > On Sat, Feb 8, 2014 at 5:01 AM, Robert Kern wrote: >> >> On Fri, Feb 7, 2014 at 9:42 PM, Matt Newville >> wrote: >> > >> > On Fri, Feb 7, 2014 at 7:22 AM, Robert Kern wrote: >> >> >> >> On Fri, Feb 7, 2014 at 1:03 PM, Matt Newville >> >> wrote: >> >> > >> >> > On Feb 6, 2014 5:14 PM, "Robert Kern" wrote: >> >> >> >> >> >> On Thu, Feb 6, 2014 at 9:20 PM, Matt Newville >> >> >> wrote: >> >> >> > Hi David, >> >> >> > >> >> >> > >> >> >> > On Thu, Feb 6, 2014 at 2:20 PM, Da?id wrote: >> >> >> >> >> >> >> >> On 6 February 2014 18:55, Matt Newville >> >> >> >> wrote: >> >> >> >>> >> >> >> >>> I'm interested, but slightly confused. 
Iminuit / probfit is >> >> >> >>> described >> >> >> >>> as BSD, but depends on (and includes?) Minuit - isn't that GPL? >> >> >> >> >> >> >> >> >> >> >> >> The license says iminuit is MIT, but Minuit is GPLv2. So, the Python >> >> >> >> part, >> >> >> >> that wraps the C code is MIT. >> >> >> > >> >> >> > The way I understand it, code that wraps GPLv2 code is required to >> >> >> > use >> >> >> > GPL, >> >> >> > and so propagate the license to all code using it. That would mean >> >> >> > that >> >> >> > code >> >> >> > that uses Minuit cannot be something other than GPL. Perhaps I am >> >> >> > not >> >> >> > understanding something. >> >> >> >> >> >> GPLed code can be combined with code under a different license as long >> >> >> as that other license imposes no conditions more restrictive than >> >> >> those in the GPL. The MIT license is one such license. This is all >> >> >> perfectly above board. >> >> > >> >> > Unless I am mistaken (not without precedent), the resulting mixed code >> >> > must >> >> > use the GPL. If library "M" is GPL and package "Mwrapper" calls it, >> >> > "Mwrapper" must be GPL. This license propagation is the distinguishing >> >> > characteristic of GPL, and what differentiates it from LGPL and BSD/MIT. >> >> > Do >> >> > you disagree? >> >> >> >> Yes. The combined work must be distributed under the GPL license. But >> >> each component can have a non-GPL-but-GPL-compatible license, like the >> >> MIT license. The parts of "Mwrapper" that are not "M" itself do not >> >> need to be GPLed. "Mwrapper+M" is GPLed. "Mwrapper" itself does not >> >> need to be (provided they are licensed compatibly). >> >> >> > >> > The parts of Mwrapper that use M must be GPL. >> > >> >> >> >> This is why I say that the *practical effect* of saying that >> >> "Mwrapper" is BSD-licensed is confusing. Users grab "Mwrapper+M" under >> >> the impression that "Mwrapper"'s license is the only one, when it is >> >> not. 
>> >> >> >> >> That said, as a practical matter, for a package that mostly just wraps >> >> >> the functionality of the internal GPLed component, licensing that >> >> >> wrapper under the MIT license leads to confusion, as we have seen. >> >> > >> >> > I think this does not lead to confusion, but to a license violation. >> >> > The >> >> > GPL is wordy, but not confusing. If your code wraps (uses, links to, >> >> > imports, or calls) GPL code, your code must use the GPL. Not MIT. >> >> >> >> You have misread the GPL. Please consult the FSF's information about >> >> mixing code with GPL-compatible licenses. The propagation applies to >> >> the combined work as a whole, not every single piece of code in the >> >> combined work. >> > >> > Perhaps this is a bit off-topic, but I believe that I have not misread the >> > GPL. I do agree that it does not apply to every single piece of code in the >> > combined work. But I believe (and, sorry but I am not convinced by you >> > saying otherwise, without strong evidence) it does apply to every piece of >> > code that **uses** the GPL code, including making library calls. >> >> Section 7 of the GPLv3: >> >> """ >> You may place additional permissions on material, added by you to a >> covered work, for which you have or can give appropriate copyright >> permission. >> """ >> >> Applying the BSD license to the portions of the code that you wrote is >> providing just such additional permissions. > > I'm pretty sure that would actually violate the GPL. This is a bit > off-topic, and can be a politically-charged topic. Truly, I mean no > ill-will, am not looking for an argument, and would be willing to end > the discussion. My initial reaction on reading this is that I have > been dealing with this issue for well over a decade, Me too! > and am really > very certain that I understand these licenses, and that your reading > is dangerously mistaken. 
But, I've also been doing science and > software long enough to see my own certainty as a red flag -- being > dead certain is the first sign of being dead wrong. So, as you > suggest, let's look only to texts from the gnu.org sites. Evidence > and explanations at wikipedia, stackexchange, and the like are often > in plainer language, and uniformly come to a common conclusion: code > that links to a GPL library is derived work, and is required to use > the GPL. But, let's call that as hearsay, and re-read the writings > from gnu.org with open minds, looking for clarification. > > For example, https://www.gnu.org/licenses/gpl-faq.html#GPLStaticVsDynamic > > Q: Does the GPL have different requirements for statically vs > dynamically linked modules with a covered work? > A: No. Linking a GPL covered work statically or dynamically with > other modules is making a combined work based > on the GPL covered work. Thus, the terms and conditions of the GNU > General Public License cover the whole combination. > > For myself, that is clear enough to effectively end the discussion: > linking to a GPL library makes the work GPL. A python script that has > "import pygsl", "import pyfftw" or "import PyQ4t" is lining to one of > those GPL libraries, and so is required to use the GPL. All of these statements are about whether or not the terms of the GPL apply at all. The terms do apply to these situations. They *don't* apply if one uses stdin/stdout to communicate with the GPLed component, and there are *no* restrictions on the licensing of the outer component in that case. One cannot write a program that imports a GPLed component that places restrictions beyond the GPL (to use GPLv2 terminology). However, one of the *explicit* terms of the GPL lets you grant *additional permissions* to the code that you wrote (but *not* the third-party GPLed code that you used). > If that is not yet clear enough, consider why the LGPL exists at all. 
> From the preamble of the LGPL > (http://www.gnu.org/licenses/lgpl-2.1.html), describing this very > subject: > > When a program is linked with a library, whether statically or > using a shared library, the combination of the two is legally speaking > a combined work, a derivative of the original library. The > ordinary General Public License therefore permits such linking only if > the > entire combination fits its criteria of freedom. The Lesser > General Public License permits more lax criteria for linking other > code > with the library. > > Here, of course, 'fits its criteria of freedom' means 'uses the GPL'. > The point being made here is to explain that the purpose of the LGPL > is to relax the strict requirement of license propagation that does > exist for the GPL. You can also read the entirety of > http://www.gnu.org/philosophy/why-not-lgpl.html, which explains > clearly that code that links to a GPL library must use the GPL, and > that the purpose of the LGPL is to relax this requirement. Actually, I don't think it ever says that, nor does it address the facts of our case in front of us. My reading of the GPL is consistent with that manifesto. The two parties contemplated in the manifesto are "free software developers" and "proprietary software developers". The FSF considers the BSD a "free software license", just a non-copyleft one. My reading of the GPL still contains propagation, verbatim from the library to the work as a whole, and from the library to every piece of code in the whole independently by way of restricting the available licenses to those that are GPL-compatible. 
> For completeness, this point is succinctly if tortuously stated in > the GPL itself, GPLv3 section 5c (similar wording exists in GPLv2 > section 2c) reads: > > You may convey a work based on the Program, or the modifications > to produce it from the Program, in the form of source > code under the terms of section 4, provided that you also meet all > of these conditions: > > c) You must license the entire work, as a whole, under this > License to anyone who comes into possession of a copy. This > License will therefore apply, along with any applicable section 7 > additional terms, to the whole of the work, and all its parts, > regardless of how they are packaged. This License gives no > permission to license the work in any other way, but it does not > invalidate such permission if you have separately received it. > > The intent seems clear enough to me. Do you read this differently? "as a whole" is the part where we seem to differ. You are making a claim that there is an essential difference between the act of mixing a third-party BSD component and a third-party GPL component to make a GPLed whole and the act of writing a BSD component and using a third-party GPL component to make a GPLed whole. I believe that there is no such essential difference in the text of the GPL or the FSF's interpretation of the GPL. If we are diving back in the GPLv2 to discern long-term intent: "These requirements apply to the modified work as a whole. If identifiable sections of that work are not derived from the Program, and can be reasonably considered independent and separate works in themselves, then this License, and its terms, do not apply to those sections when you distribute them as separate works. 
But when you distribute the same sections as part of a whole which is a work based on the Program, the distribution of the whole must be on the terms of this License, whose permissions for other licensees extend to the entire whole, and thus to each and every part regardless of who wrote it." Applying the BSD license to the iminuit wrapper is exactly that case: if there are bits from the wrapper that you can use without the GPLed MINUIT2, then you can extract them and use them under the BSD license. If those bits still need the GPLed MINUIT2, then you still need to deal with the MINUIT2 license, as a matter of course. That follows regardless because you still need to distribute MINUIT2 with those bits. > Earlier, you correctly said "The propagation [of GPL] applies to the > combined work as a whole", as this section states rather clearly. A > python module or script is a "combined work" or "program" here -- > copyrightable source code that can be run and distributed. By > importing a GPL covered module, one is dynamically linking to it, and > the GPL used by the imported library applies to the importing > program/script/module. To be sure, any code that might also be used > in said script but which does not directly import/link to a GPL > library does not need to be GPLed. But the calling script (the > equivalent of main() -- the "combined work") most certainly does. > > You quoted from Section 7 of the GPL, but I believe you misunderstood > "for which you have or can give appropriate copyright permission". > You (generally speaking) do not have copyright permission needed to > be able to change the license under which you use the library that you > are linking to. And the license for linking to that library requires > that your work use the GPL. Your interpretation would make that section entirely inoperable. 
You could never "place additional permissions on material, added by you to a covered work" because the act of adding to a GPLed work in a meaningful way always means that you never have "appropriate copyright permission". > This section is new to GPLv3. According to > https://www.gnu.org/licenses/quick-guide-gplv3.html, the license > compatibility language was clarified in GPLv3: > > Clarifying License Compatibility > > If you found some code and wanted to incorporate it into a GPLed > project, GPLv2 said that the license on the other > code was not allowed to have any restrictions that were not > already in GPLv2. As long as that was the case, we said > the license was GPL-compatible. > > However, some licenses had requirements that weren't really > restrictive, because they were so easy to comply with. For > example, some licenses say that they don't give you permission to > use certain trademarks. That's not really an additional > restriction: if that clause wasn't there, you still wouldn't have > permission to use the trademark. We always said those licenses > were compatible with GPLv2, too. > > Now, GPLv3 explicitly gives everyone permission to use code that > has requirements like this. These new terms should help > clear up misunderstandings about which licenses are > GPL-compatible, why that is, and what you can do with GPL-compatible > code. > > Again, the sense of "compatible" here is to say what code is allowed > to be used by GPL code, not what license one may choose for derived > work. It is not granting one the right to use a BSD license for code > that links to a GPL library. I believe that you are taking the mode of speech, which implicitly assumes the most common case (adding other code to a GPLed program) and the most desirable one (for the FSF) for didactic purposes, as forbidding the uncommon cases. 
You may be interested in some of the FAQs that do address the uncommon cases that are more like ours: http://www.gnu.org/licenses/gpl-faq.html#GPLModuleLicense """ * If I add a module to a GPL-covered program, do I have to use the GPL as the license for my module? The GPL says that the whole combined program has to be released under the GPL. So your module has to be available for use under the GPL. But you can give additional permission for the use of your code. You can, if you wish, release your program under a license which is more lax than the GPL but compatible with the GPL. The license list page gives a partial list of GPL-compatible licenses. """ http://www.gnu.org/licenses/gpl-faq.html#IfLibraryIsGPL """ * If a library is released under the GPL (not the LGPL), does that mean that any software which uses it has to be under the GPL or a GPL-compatible license? (#IfLibraryIsGPL) Yes, because the software as it is actually run includes the library. """ Note that for this last question, they explicitly say that the software using the GPLed library can be under a GPL-compatible license, not just the GPL itself. > The propagation of the GPL to calling code is the essence of the GPL, > and the distinguishing characteristic between GPL, LGPL and the > BSD/MIT licenses. Happily, the scipy pages on license compatibility > are clear about *not* including GPL (or even LGPL) code in scipy. > > Trying to come back on topic, I still can't figure out what license > Minuit actually uses. The iminuit package describes it one place as > LGPL, but I don't see that in Minuit itself. It's a mess. There is precisely one source file in MINUIT2 itself that includes the standard GPL header (besides all of the autoconf stuff). > Sorry this was so long and slightly off topic. Really, I am not > looking for an argument, but getting this right (as scipy does) is > important. 
> > -- > --Matt Newville > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user -- Robert Kern
From gb.gabrielebrambilla at gmail.com Mon Feb 10 10:30:37 2014 From: gb.gabrielebrambilla at gmail.com (Gabriele Brambilla) Date: Mon, 10 Feb 2014 10:30:37 -0500 Subject: [SciPy-User] changing axis values in imshow Message-ID: Hi, I'm trying to change the values of the axes of an image displayed with imshow(). At the moment I have two arrays that contain the values to display and also the maximum and the minimum value. thanks Gabriele -------------- next part -------------- An HTML attachment was scrubbed... URL:
From Jean-Paul.JADAUD at CEA.FR Mon Feb 10 10:42:36 2014 From: Jean-Paul.JADAUD at CEA.FR (Jean-Paul.JADAUD at CEA.FR) Date: Mon, 10 Feb 2014 15:42:36 +0000 Subject: [SciPy-User] changing axis values in imshow In-Reply-To: References: Message-ID: <364c43a2c903407f9c68aef49540fe95@U-MYKONOS3.dif.dam.intra.cea.fr> See the optional arguments: vmin, vmax for the image values; extent for the x, y axes http://matplotlib.org/api/pyplot_api.html?highlight=imshow#matplotlib.pyplot.imshow Cheers, JP Jadaud From: scipy-user-bounces at scipy.org [mailto:scipy-user-bounces at scipy.org] On behalf of Gabriele Brambilla Sent: Monday, 10 February 2014 16:31 To: SciPy Users List Subject: [SciPy-User] changing axis values in imshow Hi, I'm trying to change the values of the axes of an image displayed with imshow(). At the moment I have two arrays that contain the values to display and also the maximum and the minimum value. thanks Gabriele -------------- next part -------------- An HTML attachment was scrubbed...
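[A minimal sketch of the answer above: extent maps the pixel grid onto physical axis values, while vmin/vmax clamp the colour scale. The array and the axis ranges below are hypothetical stand-ins for Gabriele's data.]

```python
import numpy as np
import matplotlib
matplotlib.use("Agg")  # non-interactive backend so the sketch runs anywhere
import matplotlib.pyplot as plt

# Hypothetical stand-ins: a 2-D array plus the physical range of each axis
Z = np.random.rand(50, 100)
x_range = (0.0, 2.5)      # desired x-axis limits
y_range = (-1.0, 1.0)     # desired y-axis limits

fig, ax = plt.subplots()
# extent=(left, right, bottom, top) relabels the axes in physical
# coordinates; vmin/vmax clamp the colour scale of the *values*, not the axes
im = ax.imshow(Z, extent=(x_range[0], x_range[1], y_range[0], y_range[1]),
               vmin=0.0, vmax=1.0, origin="lower", aspect="auto")
fig.colorbar(im)
fig.savefig("relabelled_axes.png")
```

[With origin="lower" the first row of Z sits at the bottom of the plot, which usually matches data laid out as Z[y, x].]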
URL:
From gb.gabrielebrambilla at gmail.com Mon Feb 10 11:32:17 2014 From: gb.gabrielebrambilla at gmail.com (Gabriele Brambilla) Date: Mon, 10 Feb 2014 11:32:17 -0500 Subject: [SciPy-User] changing axis values in imshow In-Reply-To: <364c43a2c903407f9c68aef49540fe95@U-MYKONOS3.dif.dam.intra.cea.fr> References: <364c43a2c903407f9c68aef49540fe95@U-MYKONOS3.dif.dam.intra.cea.fr> Message-ID: thanks. Gabriele 2014-02-10 10:42 GMT-05:00 Jean-Paul.JADAUD at CEA.FR : > See the optional arguments: > > vmin, vmax for the image values > > extent for the x, y axes > > > http://matplotlib.org/api/pyplot_api.html?highlight=imshow#matplotlib.pyplot.imshow > > > > Cheers, > > > > JP Jadaud > > > > > > *From:* scipy-user-bounces at scipy.org [mailto:scipy-user-bounces at scipy.org] *On behalf of* Gabriele Brambilla > *Sent:* Monday, 10 February 2014 16:31 > *To:* SciPy Users List > *Subject:* [SciPy-User] changing axis values in imshow > > > > Hi, > > I'm trying to change the values of the axes of an image displayed with > imshow(). > > At the moment I have two arrays that contain the values to display and > also the maximum and the minimum value. > > > > thanks > > > > Gabriele > > > > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user > > -------------- next part -------------- An HTML attachment was scrubbed... URL:
From gb.gabrielebrambilla at gmail.com Mon Feb 10 13:21:09 2014 From: gb.gabrielebrambilla at gmail.com (Gabriele Brambilla) Date: Mon, 10 Feb 2014 13:21:09 -0500 Subject: [SciPy-User] basemap wrapimage Message-ID: Hi, I'm trying to use basemap to plot some maps. I am trying not to save the figures of the plot, but to plot them automatically on a basemap.
this the code I'm trying to run: fiq = plt.figure() axf = fiq.add_subplot(111) imh = axf.imshow(B) fiq.canvas.draw() #trying to build a map degree = list(range(360)) for d in degree: map = Basemap(projection='ortho', lat_0 = 0, lon_0 = d, resolution = 'l', area_thresh = 1000.) map.warpimage(image=imh) map.drawmapboundary() title = str(d)+'.png' map.savefig(title) but it doesn't work. it returns Traceback (most recent call last): File "building3d.py", line 103, in map.warpimage(image=imh) File "C:\Anaconda\lib\site-packages\mpl_toolkits\basemap\__init__.py", line 40 06, in warpimage if file.startswith('http'): AttributeError: 'AxesImage' object has no attribute 'startswith' And I don't find many tutorials on the web... thanks Gabriele -------------- next part -------------- An HTML attachment was scrubbed... URL: From guziy.sasha at gmail.com Mon Feb 10 13:35:35 2014 From: guziy.sasha at gmail.com (Oleksandr Huziy) Date: Mon, 10 Feb 2014 13:35:35 -0500 Subject: [SciPy-User] basemap wrapimage In-Reply-To: References: Message-ID: What happens if you do not supply any arguments to warpimage? Also try commenting this line... I think these could help you: http://matplotlib.org/basemap/users/geography.html http://matplotlib.org/basemap/users/examples.html Also I do not think that this list is appropriate for this question ... 2014-02-10 13:21 GMT-05:00 Gabriele Brambilla < gb.gabrielebrambilla at gmail.com>: > Hi, > I'm trying to use basemap to plot some maps. > > I am trying to don't save the figures of the plot but to plot them > automatically on a basemap. > > this the code I'm trying to run: > > fiq = plt.figure() > axf = fiq.add_subplot(111) > imh = axf.imshow(B) > fiq.canvas.draw() > > #trying to build a map > > degree = list(range(360)) > for d in degree: > > map = Basemap(projection='ortho', lat_0 = 0, lon_0 = d, resolution > = 'l', area_thresh = 1000.) 
> map.warpimage(image=imh) > map.drawmapboundary() > title = str(d)+'.png' > map.savefig(title) > > but it doesn't work. it returns > > Traceback (most recent call last): > File "building3d.py", line 103, in > map.warpimage(image=imh) > File "C:\Anaconda\lib\site-packages\mpl_toolkits\basemap\__init__.py", > line 40 > 06, in warpimage > if file.startswith('http'): > AttributeError: 'AxesImage' object has no attribute 'startswith' > > And I don't find many tutorials on the web... > > thanks > > Gabriele > > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user > > -- Sasha -------------- next part -------------- An HTML attachment was scrubbed... URL: From gb.gabrielebrambilla at gmail.com Mon Feb 10 22:03:47 2014 From: gb.gabrielebrambilla at gmail.com (Gabriele Brambilla) Date: Mon, 10 Feb 2014 22:03:47 -0500 Subject: [SciPy-User] basemap wrapimage In-Reply-To: References: Message-ID: it happens that it plots a default image of the earth. I want to show on the map an image of which I have a matrix. Why is not the correct list? isn't it a library of matplotlib? and matplotlib isn't a Scipy tool? thanks Gabriele 2014-02-10 13:35 GMT-05:00 Oleksandr Huziy : > What happens if you do not supply any arguments to warpimage? Also try > commenting this line... > > I think these could help you: > http://matplotlib.org/basemap/users/geography.html > http://matplotlib.org/basemap/users/examples.html > > Also I do not think that this list is appropriate for this question ... > > > 2014-02-10 13:21 GMT-05:00 Gabriele Brambilla < > gb.gabrielebrambilla at gmail.com>: > >> Hi, >> I'm trying to use basemap to plot some maps. >> >> I am trying to don't save the figures of the plot but to plot them >> automatically on a basemap. 
>> >> this the code I'm trying to run: >> >> fiq = plt.figure() >> axf = fiq.add_subplot(111) >> imh = axf.imshow(B) >> fiq.canvas.draw() >> >> #trying to build a map >> >> degree = list(range(360)) >> for d in degree: >> >> map = Basemap(projection='ortho', lat_0 = 0, lon_0 = d, >> resolution = 'l', area_thresh = 1000.) >> map.warpimage(image=imh) >> map.drawmapboundary() >> title = str(d)+'.png' >> map.savefig(title) >> >> but it doesn't work. it returns >> >> Traceback (most recent call last): >> File "building3d.py", line 103, in >> map.warpimage(image=imh) >> File "C:\Anaconda\lib\site-packages\mpl_toolkits\basemap\__init__.py", >> line 40 >> 06, in warpimage >> if file.startswith('http'): >> AttributeError: 'AxesImage' object has no attribute 'startswith' >> >> And I don't find many tutorials on the web... >> >> thanks >> >> Gabriele >> >> _______________________________________________ >> SciPy-User mailing list >> SciPy-User at scipy.org >> http://mail.scipy.org/mailman/listinfo/scipy-user >> >> > > > -- > Sasha > > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From guziy.sasha at gmail.com Mon Feb 10 22:21:52 2014 From: guziy.sasha at gmail.com (Oleksandr Huziy) Date: Mon, 10 Feb 2014 22:21:52 -0500 Subject: [SciPy-User] basemap wrapimage In-Reply-To: References: Message-ID: Hi Gabriele: scipy, matplotlib and basemap are different libraries.. basemap depends on matplotlib though.. try showing your image without the map and then add features and complexities one by one, if that won't help then a selfcontained example would help more to understand your problem. Cheers 2014-02-10 22:03 GMT-05:00 Gabriele Brambilla < gb.gabrielebrambilla at gmail.com>: > it happens that it plots a default image of the earth. > I want to show on the map an image of which I have a matrix. 
> Why is not the correct list? isn't it a library of matplotlib? and > matplotlib isn't a Scipy tool? > > thanks > > Gabriele > > > 2014-02-10 13:35 GMT-05:00 Oleksandr Huziy : > > What happens if you do not supply any arguments to warpimage? Also try >> commenting this line... >> >> I think these could help you: >> http://matplotlib.org/basemap/users/geography.html >> http://matplotlib.org/basemap/users/examples.html >> >> Also I do not think that this list is appropriate for this question ... >> >> >> 2014-02-10 13:21 GMT-05:00 Gabriele Brambilla < >> gb.gabrielebrambilla at gmail.com>: >> >>> Hi, >>> I'm trying to use basemap to plot some maps. >>> >>> I am trying to don't save the figures of the plot but to plot them >>> automatically on a basemap. >>> >>> this the code I'm trying to run: >>> >>> fiq = plt.figure() >>> axf = fiq.add_subplot(111) >>> imh = axf.imshow(B) >>> fiq.canvas.draw() >>> >>> #trying to build a map >>> >>> degree = list(range(360)) >>> for d in degree: >>> >>> map = Basemap(projection='ortho', lat_0 = 0, lon_0 = d, >>> resolution = 'l', area_thresh = 1000.) >>> map.warpimage(image=imh) >>> map.drawmapboundary() >>> title = str(d)+'.png' >>> map.savefig(title) >>> >>> but it doesn't work. it returns >>> >>> Traceback (most recent call last): >>> File "building3d.py", line 103, in >>> map.warpimage(image=imh) >>> File "C:\Anaconda\lib\site-packages\mpl_toolkits\basemap\__init__.py", >>> line 40 >>> 06, in warpimage >>> if file.startswith('http'): >>> AttributeError: 'AxesImage' object has no attribute 'startswith' >>> >>> And I don't find many tutorials on the web... 
>>> >>> thanks >>> >>> Gabriele >>> >>> _______________________________________________ >>> SciPy-User mailing list >>> SciPy-User at scipy.org >>> http://mail.scipy.org/mailman/listinfo/scipy-user >>> >>> >> >> >> -- >> Sasha >> >> _______________________________________________ >> SciPy-User mailing list >> SciPy-User at scipy.org >> http://mail.scipy.org/mailman/listinfo/scipy-user >> >> > > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user > > -- Sasha -------------- next part -------------- An HTML attachment was scrubbed... URL:
From Jean-Paul.JADAUD at CEA.FR Tue Feb 11 07:17:41 2014 From: Jean-Paul.JADAUD at CEA.FR (Jean-Paul.JADAUD at CEA.FR) Date: Tue, 11 Feb 2014 12:17:41 +0000 Subject: [SciPy-User] basemap wrapimage In-Reply-To: References: Message-ID: <27b427e9a4c541f7aeb5488f7585efda@U-MYKONOS3.dif.dam.intra.cea.fr> From http://matplotlib.org/basemap/api/basemap_api.html?highlight=warpimage#mpl_toolkits.basemap.Basemap.warpimage warpimage expects a filename as the value of the optional argument image. You are passing it a matplotlib AxesImage instance, which explains the error message. Cheers From: scipy-user-bounces at scipy.org [mailto:scipy-user-bounces at scipy.org] On behalf of Gabriele Brambilla Sent: Monday, 10 February 2014 19:21 To: SciPy Users List Subject: [SciPy-User] basemap wrapimage Hi, I'm trying to use basemap to plot some maps. I am trying not to save the figures of the plot, but to plot them automatically on a basemap. This is the code I'm trying to run: fiq = plt.figure() axf = fiq.add_subplot(111) imh = axf.imshow(B) fiq.canvas.draw() #trying to build a map degree = list(range(360)) for d in degree: map = Basemap(projection='ortho', lat_0 = 0, lon_0 = d, resolution = 'l', area_thresh = 1000.) map.warpimage(image=imh) map.drawmapboundary() title = str(d)+'.png' map.savefig(title) but it doesn't work.
It returns Traceback (most recent call last): File "building3d.py", line 103, in map.warpimage(image=imh) File "C:\Anaconda\lib\site-packages\mpl_toolkits\basemap\__init__.py", line 4006, in warpimage if file.startswith('http'): AttributeError: 'AxesImage' object has no attribute 'startswith' And I don't find many tutorials on the web... thanks Gabriele -------------- next part -------------- An HTML attachment was scrubbed... URL:
From gb.gabrielebrambilla at gmail.com Fri Feb 14 15:09:51 2014 From: gb.gabrielebrambilla at gmail.com (Gabriele Brambilla) Date: Fri, 14 Feb 2014 15:09:51 -0500 Subject: [SciPy-User] plotting two functions on the same figure but with two different scales on y axis Message-ID: Hi, I'm trying to plot two functions on the same errorbar picture like the code below: fmlp = plt.figure() axop = fmlp.add_subplot(111) insp = axop.errorbar(g, Pampli, yerr = ePampli, fmt = '.') ins1 = axop.errorbar(g, lightcurva, yerr = zeros, fmt = 'x') axop.set_title('amplitudes for phases') fmlp.canvas.draw() but the two functions have a very different range in y values (one near 10^-15, one near 1): how could I use two different scales (one per function) on the same plot? thanks Gabriele -------------- next part -------------- An HTML attachment was scrubbed...
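[One common approach to the question above is a second y-axis sharing the same x-axis, via Axes.twinx(). A minimal sketch; the arrays g, Pampli and lightcurva below are hypothetical stand-ins for the posted data.]

```python
import numpy as np
import matplotlib
matplotlib.use("Agg")  # non-interactive backend so the sketch runs anywhere
import matplotlib.pyplot as plt

# Hypothetical stand-ins for two curves on very different scales
g = np.linspace(0.0, 1.0, 20)
Pampli = 1e-15 * (1.0 + np.sin(2.0 * np.pi * g))   # values near 1e-15
lightcurva = 1.0 + 0.5 * np.cos(2.0 * np.pi * g)   # values near 1

fig, ax_left = plt.subplots()
ax_right = ax_left.twinx()  # second y-axis, same x-axis

# Each curve is drawn against its own y-scale
ax_left.errorbar(g, Pampli, yerr=1e-16, fmt=".", color="C0")
ax_right.errorbar(g, lightcurva, fmt="x", color="C1")

ax_left.set_ylabel("Pampli", color="C0")
ax_right.set_ylabel("lightcurva", color="C1")
ax_left.set_title("amplitudes for phases")
fig.savefig("two_scales.png")
```

[Colouring each y-label to match its curve makes it obvious which scale belongs to which function.]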
URL: From lists at hilboll.de Fri Feb 14 15:10:58 2014 From: lists at hilboll.de (Andreas Hilboll) Date: Fri, 14 Feb 2014 21:10:58 +0100 Subject: [SciPy-User] plotting two functions on the same figure but with two different scales on y axis In-Reply-To: References: Message-ID: <52FE7852.8030805@hilboll.de> On 14.02.2014 21:09, Gabriele Brambilla wrote: > Hi, > > I'm trying to plot two functions on the same errorbar picture like the > code below: > > fmlp = plt.figure() > > axop = fmlp.add_subplot(111) > > insp = axop.errorbar(g, Pampli, yerr = ePampli, fmt = '.') > ins1 = axop.errorbar(g, lightcurva, yerr = zeros, fmt = 'x') > > axop.set_title('amplitudes for phases') > > fmlp.canvas.draw() > > but the two functions has a very different range in y values (one near > 10^-15 one near 1): how could I use two different scale (one per > function) on the same plot? You're looking for twinx() / twiny() functionality, and you should ask this on the matplotlib mailing list or on stackoverflow. Cheers, Andreas. From ralf.gommers at gmail.com Sun Feb 16 17:08:42 2014 From: ralf.gommers at gmail.com (Ralf Gommers) Date: Sun, 16 Feb 2014 23:08:42 +0100 Subject: [SciPy-User] ANN: Scipy 0.13.3 release In-Reply-To: References: Message-ID: Binaries are now available on SourceForge. Ralf On Tue, Feb 4, 2014 at 8:16 AM, Ralf Gommers wrote: > Hi, > > I'm happy to announce the availability of the scipy 0.13.3 release. This > is a bugfix only release; it contains fixes for regressions in ndimage and > weave. > > Source tarballs can be found at > https://sourceforge.net/projects/scipy/files/scipy/0.13.3/ and on PyPi. > Release notes copied below, binaries will follow later (the regular build > machine is not available for the next two weeks). > > Cheers, > Ralf > > > > ========================== > SciPy 0.13.3 Release Notes > ========================== > > SciPy 0.13.3 is a bug-fix release with no new features compared to 0.13.2. 
> Both the weave and the ndimage.label bugs were severe regressions in > 0.13.0, > hence this release. > > Issues fixed > ------------ > - 3148: fix a memory leak in ``ndimage.label``. > - 3216: fix weave issue with too long file names for MSVC. > > Other changes > ------------- > - Update Sphinx theme used for html docs so ``>>>`` in examples can be > toggled. > > Checksums > ========= > 0547c1f8e8afad4009cc9b5ef17a2d4d release/installers/scipy-0.13.3.tar.gz > 20ff3a867cc5925ef1d654aed2ff7e88 release/installers/scipy-0.13.3.zip > -------------- next part -------------- An HTML attachment was scrubbed... URL: From mnick at MIT.EDU Sun Feb 16 21:14:55 2014 From: mnick at MIT.EDU (Maximilian Nickel) Date: Sun, 16 Feb 2014 21:14:55 -0500 Subject: [SciPy-User] [ANN] scikit-tensor 0.1 Message-ID: Hello everyone, I am pleased to announce the first release of scikit-tensor, a python module for multilinear algebra and tensor factorizations. The projects website is hosted at github: https://github.com/mnick/scikit-tensor where also all development is organized. Currently, scikit-tensor supports basic tensor operations such as folding/unfolding, tensor-matrix and tensor-vector products as well as the following tensor factorizations: * Canonical Decomposition / Parafac (CP) * Tucker Decomposition * RESCAL * DEDICOM Moreover, all operations support dense and sparse tensors. scikit-tensor is still a very young project in alpha status and I'm happy for any contributions (patches, code, bugfixes, documentation, whatever) to get it to a stable and useful point. Feel free to get in touch with me via email or directly via github if you are interested in contributing. Cheers, Max -------------- next part -------------- An HTML attachment was scrubbed... 
URL:
From patrick.mckendree.young at gmail.com Tue Feb 18 22:31:56 2014 From: patrick.mckendree.young at gmail.com (Patrick Young) Date: Tue, 18 Feb 2014 20:31:56 -0700 Subject: [SciPy-User] MKL Build Not passing ODR tests Message-ID: Hi, I recently spent a bunch of time trying to get an MKL build of numpy and scipy to pass all the respective tests (Ubuntu 13.04, Python 2.7.6). Numpy passes them all, but I can't get scipy to pass 4 of them, 3 of those related to the ODR package. I posted a ticket with more detailed info here, so I won't repeat myself: https://github.com/scipy/scipy/issues/3340 If anyone has any thoughts on what I could try, please let me know! I've tried fiddling with the flags a bit, but only seem to make it fail more tests when I do so! Thanks, -Patrick -------------- next part -------------- An HTML attachment was scrubbed... URL:
From kathleen.m.tacina at nasa.gov Wed Feb 19 13:47:04 2014 From: kathleen.m.tacina at nasa.gov (Kathleen Tacina) Date: Wed, 19 Feb 2014 13:47:04 -0500 Subject: [SciPy-User] Determining if statistics are converged Message-ID: <5304FC28.7080208@nasa.gov> Hi, This is slightly off-topic, but I'm not sure where a better place to ask this would be ... I have a time series, and I'd like to check that we've taken enough points so that the mean and rms are converged. I'm hoping to get help with 2 things: (1) Good references on how to do this. I've been using ad hoc methods. For example, comparing the mean of the 1st n samples to the overall mean. This works well if, for example, the mean of the 1st 50 samples is the same as the mean of all 5,000 samples. But when the mean of the 1st 300 samples hasn't yet converged to the mean of all 400 samples, it isn't as helpful. (2) Tools to help with this in the scipy ecosystem. The application is highly turbulent flow where we expect the rms (or, equivalently, the standard deviation) to be on the same order of magnitude as the mean.
I'd also appreciate suggestions for better places to ask this question. Thanks! Best regards, Kathleen -- -------------- next part -------------- An HTML attachment was scrubbed... URL:
From pmhobson at gmail.com Wed Feb 19 14:21:07 2014 From: pmhobson at gmail.com (Paul Hobson) Date: Wed, 19 Feb 2014 11:21:07 -0800 Subject: [SciPy-User] Determining if statistics are converged In-Reply-To: <5304FC28.7080208@nasa.gov> References: <5304FC28.7080208@nasa.gov> Message-ID: On Wed, Feb 19, 2014 at 10:47 AM, Kathleen Tacina < kathleen.m.tacina at nasa.gov> wrote: > Hi, > > This is slightly off-topic, but I'm not sure where a better place to ask > this would be ... > > I have a time series, and I'd like to check that we've taken enough points > so that the mean and rms are converged. I'm hoping to get help with 2 things: > > (1) Good references on how to do this. > > I've been using ad hoc methods. For example, comparing the mean of the 1st > n samples to the overall mean. This works well if, for example, the mean of > the 1st 50 samples is the same as the mean of all 5,000 samples. But when > the mean of the 1st 300 samples hasn't yet converged to the mean of all 400 > samples, it isn't as helpful. > > (2) Tools to help with this in the scipy ecosystem. > > The application is highly turbulent flow where we expect the rms (or, equivalently, the standard deviation) to be on > the same order of magnitude as the mean. > > I'd also appreciate suggestions for better places to ask this question. > > Thanks! > > Best regards, > Kathleen > Hey Kathleen, It seems to me that a reasonable approach would be to simply compute the expanding mean. You could then do some rolling inspection of how much the expanding mean has changed with time. The pandas library has great built-in support for doing this to time series data.
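[The expanding-mean idea can be sketched with plain NumPy as well. The series below is synthetic, and the "drift over the last 20% of the record" check is an arbitrary illustration, not a formal convergence test.]

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(loc=2.0, scale=2.0, size=5000)  # synthetic series: rms ~ mean

# Expanding mean: the mean of the first n samples, for every n
n = np.arange(1, x.size + 1)
expanding_mean = np.cumsum(x) / n

# Crude convergence check: how far does the expanding mean wander from its
# final value over the last 20% of the record?
tail = expanding_mean[int(0.8 * x.size):]
drift = np.max(np.abs(tail - expanding_mean[-1]))
print(drift)  # small compared to the rms (2.0) here, so the mean looks settled
```

[For correlated samples, as in turbulence, this check is optimistic: the effective number of independent samples is smaller than n, which is the point Andrew Fraser raises about autocorrelation below.]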
http://pandas.pydata.org/pandas-docs/stable/computation.html#expanding-window-moment-functions Maybe something like this: http://nbviewer.ipython.org/gist/phobson/9099530 I wish pandas had been around when I was writing my hydraulic engineering master's thesis :) -paul -------------- next part -------------- An HTML attachment was scrubbed... URL:
From afraser at lanl.gov Wed Feb 19 14:32:23 2014 From: afraser at lanl.gov (Andrew Fraser) Date: Wed, 19 Feb 2014 12:32:23 -0700 Subject: [SciPy-User] Determining if statistics are converged In-Reply-To: <5304FC28.7080208@nasa.gov> References: <5304FC28.7080208@nasa.gov> Message-ID: <530506C7.8050206@lanl.gov> I'm interested in the expert answers you get. Although I am not an expert, I believe that the dependent (not independent) nature of samples taken close together in time means that you need more samples than you would need if they were independent. Perhaps you need to estimate the autocorrelations. If so, that will get you into the big literature on estimating the Fourier power spectrum. Sorry. As I write this it feels like hitting a tar baby. Each blow just makes things worse. Andy Kathleen Tacina wrote: > Hi, > > This is slightly off-topic, but I'm not sure where a better place to > ask this would be ... > > I have a time series, and I'd like to check that we've taken enough > points so that the mean and rms are converged. I'm hoping to get help > with 2 things: > > (1) Good references on how to do this. > > I've been using ad hoc methods. For example, comparing the mean of the > 1st n samples to the overall mean. This works well if, for example, > the mean of the 1st 50 samples is the same as the mean of all 5,000 > samples. But when the mean of the 1st 300 samples hasn't yet converged > to the mean of all 400 samples, it isn't as helpful. > > (2) Tools to help with this in the scipy ecosystem.
> > The application is highly turbulent flow > where we expect the rms (or, > equivalently, the standard deviation) to be on the same order of > magnitude as the mean. > > I'd also appreciate suggestions for better places to ask this question. > > Thanks! > > Best regards, > Kathleen > -- > > ------------------------------------------------------------------------ > > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user > From kathleen.m.tacina at nasa.gov Wed Feb 19 17:07:39 2014 From: kathleen.m.tacina at nasa.gov (Kathleen Tacina) Date: Wed, 19 Feb 2014 17:07:39 -0500 Subject: [SciPy-User] Determining if statistics are converged In-Reply-To: References: <5304FC28.7080208@nasa.gov> Message-ID: <53052B2B.9060608@nasa.gov> On 2/19/14 2:21 PM, Paul Hobson wrote: > > > > On Wed, Feb 19, 2014 at 10:47 AM, Kathleen Tacina > > wrote: > > Hi, > > [snip] > > I have a time series, and I'd like to check that we've taken > enough points so that the mean and rms are converged. I hoping to > get help with 2 things: > > (1) Good references on how to do this. > > [snip] > (2) Tools to help with this in the scipy ecosystem. > > The application is highly turbulent flow > where we expect the rms > (or, equivalently, the standard deviation) to be on the same order > of magnitude as the mean. > > I'd also appreciate suggestions for better places to ask this > question. > > Thanks! > > Best regards, > Kathleen > > > Hey Kathleen, > > It seems to me that a reasonable approach would be to to simply > compute the expanding mean. You could then do some rolling inspection > of how much the expanding mean has changed with time. > > The pandas library has great built-in support for doing this to time > series data. 
> http://pandas.pydata.org/pandas-docs/stable/computation.html#expanding-window-moment-functions > > Maybe something like this: > http://nbviewer.ipython.org/gist/phobson/9099530 > > I wish pandas had be around when I was writing my hydraulic > engineering master's thesis :) > > -paul > Paul, Thanks for the link to the pandas page. The expanding (and rolling) statistics will be very helpful -- much better than having to rewrite them myself. I've looked at the expanding mean. Unfortunately, for some cases, it looks like I'm not converged. Referring to your (very helpful) ipython gist, it would be like we stopped collecting data after 5-15 sec, before statistics converge. Best regards, Kathleen -------------- next part -------------- An HTML attachment was scrubbed... URL: From pyviennacl at tsmithe.net Thu Feb 20 07:46:07 2014 From: pyviennacl at tsmithe.net (Toby St Clere Smithe) Date: Thu, 20 Feb 2014 12:46:07 +0000 Subject: [SciPy-User] PyViennaCL Message-ID: <8761oa3qb4.fsf@tsmithe.net> Hi all, Apologies for posting across lists; I thought that this might be of interest to both groups. I have just released PyViennaCL 1.0.0, which is a set of largely NumPy-compatible Python bindings to the ViennaCL linear algebra and numerical computation library for GPGPU and heterogeneous systems. PyViennaCL aims to make powerful GPGPU computing really transparently easy, especially for users already using NumPy for representing matrices. Please see my announcement below for links to source and packages and documentation, a list of features, and a list of missing pieces. I hope to iron out all those missing bits over the coming months, and work on closer integration, especially with PyOpenCL / PyCUDA, over the summer. Best wishes, Toby St Clere Smithe -------------- next part -------------- An embedded message was scrubbed... From: Toby St Clere Smithe Subject: PyViennaCL 1.0.0 is released! 
Date: Thu, 20 Feb 2014 11:54:44 +0000 Size: 2401 URL: From nils106 at googlemail.com Thu Feb 20 07:53:38 2014 From: nils106 at googlemail.com (Nils Wagner) Date: Thu, 20 Feb 2014 13:53:38 +0100 Subject: [SciPy-User] Hot Spot Analysis Message-ID: Hi all, Assume that we have a spatial energy distribution given at discrete points in 3-D, i.e. E_i(x_i,y_i,z_i) where E_i denotes the energy and x_i,y_i,z_i are the corresponding coordinates. Is it possible to extract the local hot spots using scipy ? A small example is appreciated. Thanks in advance Nils -------------- next part -------------- An HTML attachment was scrubbed... URL: From morri495 at umn.edu Thu Feb 20 09:22:40 2014 From: morri495 at umn.edu (David) Date: Thu, 20 Feb 2014 08:22:40 -0600 Subject: [SciPy-User] Hot Spot Analysis In-Reply-To: References: Message-ID: I would imagine there is a way. Do you have an algorithm? On Thu, Feb 20, 2014 at 6:53 AM, Nils Wagner wrote: > Hi all, > > Assume that we have a spatial energy distribution given at discrete points > in 3-D, i.e. > > E_i(x_i,y_i,z_i) > > where E_i denotes the energy and x_i,y_i,z_i are the corresponding > coordinates. > > Is it possible to extract the local hot spots using scipy ? > > A small example is appreciated. > > Thanks in advance > > > Nils > > > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user > > -- David P. Morrissey PHD Candidate School of Mathematics University of Minnesota 127 Vincent Hall 206 Church St SE Minneapolis MN 55455 http://www.math.umn.edu/~morri495/ -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From kevin.gullikson at gmail.com Thu Feb 20 09:28:08 2014 From: kevin.gullikson at gmail.com (Kevin Gullikson) Date: Thu, 20 Feb 2014 08:28:08 -0600 Subject: [SciPy-User] Hot Spot Analysis In-Reply-To: References: Message-ID: There is scipy.signal.argrelmax, but I think that just works on 1D arrays (or along one axis of multidimensional arrays). Kevin Gullikson On Thu, Feb 20, 2014 at 6:53 AM, Nils Wagner wrote: > Hi all, > > Assume that we have a spatial energy distribution given at discrete points > in 3-D, i.e. > > E_i(x_i,y_i,z_i) > > where E_i denotes the energy and x_i,y_i,z_i are the corresponding > coordinates. > > Is it possible to extract the local hot spots using scipy ? > > A small example is appreciated. > > Thanks in advance > > > Nils > > > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From hturesson at gmail.com Thu Feb 20 09:28:34 2014 From: hturesson at gmail.com (Hjalmar Turesson) Date: Thu, 20 Feb 2014 11:28:34 -0300 Subject: [SciPy-User] Hot Spot Analysis In-Reply-To: References: Message-ID: Couldn't the energy at x,y,z be a value in a 3-d array? With your values in an array you should be able to find local peak and troughs. On Thu, Feb 20, 2014 at 9:53 AM, Nils Wagner wrote: > Hi all, > > Assume that we have a spatial energy distribution given at discrete points > in 3-D, i.e. > > E_i(x_i,y_i,z_i) > > where E_i denotes the energy and x_i,y_i,z_i are the corresponding > coordinates. > > Is it possible to extract the local hot spots using scipy ? > > A small example is appreciated. > > Thanks in advance > > > Nils > > > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user > > -------------- next part -------------- An HTML attachment was scrubbed... 
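[One way to act on the 3-D-array suggestion is a neighbourhood maximum filter from scipy.ndimage, assuming the energies E_i lie on a regular grid. The grid, the planted peaks, and the threshold below are hypothetical.]

```python
import numpy as np
from scipy import ndimage

rng = np.random.default_rng(1)
E = rng.random((16, 16, 16))  # hypothetical energies on a regular 16^3 grid
E[4, 5, 6] += 2.0             # plant two obvious hot spots
E[10, 2, 12] += 2.0

# A grid point is a local hot spot if it equals the maximum over its
# 3x3x3 neighbourhood (cval=-inf so edge points can still qualify)
local_max = E == ndimage.maximum_filter(E, size=3, mode="constant",
                                        cval=-np.inf)

# Keep only peaks that also exceed some physical threshold
hot = local_max & (E > 2.0)
print(np.argwhere(hot))       # the indices of the two planted hot spots
```

[For scattered (non-gridded) points, the data would first have to be binned or interpolated onto a grid, or treated with a nearest-neighbour search instead.]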
URL: From deil.christoph at googlemail.com Thu Feb 20 09:45:55 2014 From: deil.christoph at googlemail.com (Christoph Deil) Date: Thu, 20 Feb 2014 15:45:55 +0100 Subject: [SciPy-User] Hot Spot Analysis In-Reply-To: References: Message-ID: On 20 Feb 2014, at 13:53, Nils Wagner wrote: > Hi all, > > Assume that we have a spatial energy distribution given at discrete points in 3-D, i.e. > > E_i(x_i,y_i,z_i) > > where E_i denotes the energy and x_i,y_i,z_i are the corresponding coordinates. > > Is it possible to extract the local hot spots using scipy ? > > A small example is appreciated. > > Thanks in advance > > > Nils Maybe using one of these functions you can achieve what you want? http://docs.scipy.org/doc/scipy/reference/generated/scipy.ndimage.measurements.extrema.html http://scikit-image.org/docs/dev/api/skimage.feature.html#peak-local-max Christoph From L.J.Buitinck at uva.nl Thu Feb 20 09:46:27 2014 From: L.J.Buitinck at uva.nl (Lars Buitinck) Date: Thu, 20 Feb 2014 15:46:27 +0100 Subject: [SciPy-User] PyViennaCL Message-ID: 2014-02-20 13:47 GMT+01:00 : > Message: 5 > Date: Thu, 20 Feb 2014 12:46:07 +0000 > From: Toby St Clere Smithe > Subject: [SciPy-User] PyViennaCL > To: Discussion of Numerical Python , > scipy-user > Cc: Karl Rupp > Message-ID: <8761oa3qb4.fsf at tsmithe.net> > Content-Type: text/plain; charset="us-ascii" > > I have just released PyViennaCL 1.0.0, which is a set of largely > NumPy-compatible Python bindings to the ViennaCL linear algebra and > numerical computation library for GPGPU and heterogeneous systems. > > PyViennaCL aims to make powerful GPGPU computing really transparently > easy, especially for users already using NumPy for representing > matrices. > > Please see my announcement below for links to source and packages and > documentation, a list of features, and a list of missing pieces. 
I hope > to iron out all those missing bits over the coming months, and work on > closer integration, especially with PyOpenCL / PyCUDA, over the summer. Now here's a package I've been waiting for, but the attachment with link is missing in the ML digest. Is it https://github.com/tsmithe/viennacl-dev/tree/pyviennacl? From mail at tsmithe.net Thu Feb 20 09:55:41 2014 From: mail at tsmithe.net (Toby St Clere Smithe) Date: Thu, 20 Feb 2014 14:55:41 +0000 Subject: [SciPy-User] PyViennaCL References: Message-ID: <87fvnd3kb6.fsf@tsmithe.net> Lars Buitinck writes: > Now here's a package I've been waiting for, but the attachment with > link is missing in the ML digest. Is it > https://github.com/tsmithe/viennacl-dev/tree/pyviennacl? Ah. Please see [0] for the announcement. You can find the main page at [1], documentation and examples at [2], and download information at [3]. [0] http://article.gmane.org/gmane.comp.mathematics.viennacl.devel/473 [1] http://viennacl.sourceforge.net/pyviennacl.html [2] http://viennacl.sourceforge.net/pyviennacl/doc/ [3] http://viennacl.sourceforge.net/viennacl-download.html Enjoy! If you have any feedback or requests, then do stay in touch :) Toby From joferkington at gmail.com Thu Feb 20 14:05:13 2014 From: joferkington at gmail.com (Joe Kington) Date: Thu, 20 Feb 2014 13:05:13 -0600 Subject: [SciPy-User] Hot Spot Analysis In-Reply-To: References: Message-ID: On Thu, Feb 20, 2014 at 8:45 AM, Christoph Deil < deil.christoph at googlemail.com> wrote: > > On 20 Feb 2014, at 13:53, Nils Wagner wrote: > > > Hi all, > > > > Assume that we have a spatial energy distribution given at discrete > points in 3-D, i.e. > > > > E_i(x_i,y_i,z_i) > > > > where E_i denotes the energy and x_i,y_i,z_i are the corresponding > coordinates. > > > > Is it possible to extract the local hot spots using scipy ? > > > > A small example is appreciated. > > > > Thanks in advance > > > > > > Nils > > Maybe using one of these functions you can achieve what you want? 
> > > http://docs.scipy.org/doc/scipy/reference/generated/scipy.ndimage.measurements.extrema.html > http://scikit-image.org/docs/dev/api/skimage.feature.html#peak-local-max > Christoph > Similar to Christoph's suggestion, have a look at scipy.ndimage.maximum_filter: http://docs.scipy.org/doc/scipy/reference/generated/scipy.ndimage.filters.maximum_filter.html#scipy.ndimage.filters.maximum_filter There's a nice example of using it for a very similar use case here: http://stackoverflow.com/a/3689710/325565 The stackoverflow answer is for 2D, but all of the `scipy.ndimage` functions used there should work equally well for the 3D case. Hope that helps! -Joe -------------- next part -------------- An HTML attachment was scrubbed... URL: From gdmcbain at freeshell.org Thu Feb 20 22:51:09 2014 From: gdmcbain at freeshell.org (Geordie McBain) Date: Fri, 21 Feb 2014 14:51:09 +1100 Subject: [SciPy-User] scipy.signal.cascade scaling factor in dilation equation undocumented? Message-ID: Hello. Is there an omission in the documentation for phi output of scipy.signal.cascade? It says phi(x) = sum(hk * phi(2x-k)) but I wonder whether it should be phi(x) = sqrt(2) * sum(hk * phi(2x-k)) as in eq. 6.18 of the book referred to, Strang & Nguyen (1996, p. 186). Those authors call this the 'dilation equation'. Consider the Haar case, for which, e.g. from scipy.signal.daub(1), hk is
>>> scipy.signal.daub(1)
array([ 0.70710678, 0.70710678])
these values being 1/sqrt(2). Putting these into scipy.signal.cascade does generate a Haar scaling function and wavelet
>>> scipy.signal.cascade(scipy.signal.daub(1), 1)
(array([ 0. , 0.5]), array([ 1., 1.]), array([ 1., -1.]))
where the three pairs are the abscissae, scaling function (phi), and wavelet (psi). Using x[0] = 0 in the dilation equation gives
phi (0) = h[0] * phi (0) + h[1] * phi (1)
1 = 1/sqrt(2) * 1 + 1/sqrt(2) * 0 = 1/sqrt(2),
which is wrong. It would be fixed if the RHS were multiplied by sqrt(2), as in S. & N.
Similarly at x[1] = .5,
phi (.5) = h[0] * phi (1) + h[1] * phi(0)
1 = 1/sqrt(2) * 0 + 1/sqrt(2) * 1 = 1/sqrt(2)
This sqrt(2) seems to work for higher-order Daubechies wavelets too:
>>> h = scipy.signal.daub(2)
>>> x, phi, psi = scipy.signal.cascade(h, 1)
>>> phi[1] - np.sqrt(2) * h[0] * phi[2]
0.0
>>> phi[2] - np.sqrt(2) * (h[0] * phi[4] + h[1] * phi[2])
2.2204460492503131e-16
>>> phi[3] - np.sqrt(2) * (h[1] * phi[4] + h[2] * phi[2])
-8.8028830759584723e-17
From alan.isaac at gmail.com Fri Feb 21 15:15:55 2014 From: alan.isaac at gmail.com (Alan G Isaac) Date: Fri, 21 Feb 2014 15:15:55 -0500 Subject: [SciPy-User] editing privileges Message-ID: <5307B3FB.5060606@gmail.com> May I have editing privileges for the SciPy Wiki or at least for: http://wiki.scipy.org/NumPy_for_Matlab_Users username: AlanIsaac Thanks, Alan Isaac From robert.kern at gmail.com Fri Feb 21 15:35:04 2014 From: robert.kern at gmail.com (Robert Kern) Date: Fri, 21 Feb 2014 20:35:04 +0000 Subject: [SciPy-User] editing privileges In-Reply-To: <5307B3FB.5060606@gmail.com> References: <5307B3FB.5060606@gmail.com> Message-ID: On Fri, Feb 21, 2014 at 8:15 PM, Alan G Isaac wrote: > May I have editing privileges for the SciPy Wiki or at least for: > http://wiki.scipy.org/NumPy_for_Matlab_Users > > username: AlanIsaac The Wiki is frozen. Instead of making changes to the wiki, please port any leftover content to the static website by contributing a PR. Thanks. https://github.com/scipy/scipy.org -- Robert Kern From pav at iki.fi Fri Feb 21 16:28:37 2014 From: pav at iki.fi (Pauli Virtanen) Date: Fri, 21 Feb 2014 23:28:37 +0200 Subject: [SciPy-User] editing privileges In-Reply-To: <5307B3FB.5060606@gmail.com> References: <5307B3FB.5060606@gmail.com> Message-ID: 21.02.2014 22:15, Alan G Isaac wrote: > May I have editing privileges for the SciPy Wiki or at least for: > http://wiki.scipy.org/NumPy_for_Matlab_Users > > username: AlanIsaac Added.
-- Pauli Virtanen From njs at pobox.com Sat Feb 22 00:04:39 2014 From: njs at pobox.com (Nathaniel Smith) Date: Sat, 22 Feb 2014 00:04:39 -0500 Subject: [SciPy-User] Matrix multiplication operator PEP Message-ID: Hey all, I was feeling grotty and unproductive yesterday, so I decided to procrastinate by writing a PEP, which I figure we might as well throw at the wall and see if it sticks. At least, once we do have consensus on what it says -- I think we probably pretty much do, but let me know. Any other comments, suggestions, pointings out of places where I left out a word, and arguments about how I've made some terrible mistakes, are also very welcome; I guess we probably only get one more chance at this. Cheers, -n -- Nathaniel J. Smith Postdoctoral researcher - Informatics - University of Edinburgh http://vorpus.org -------------- next part -------------- A non-text attachment was scrubbed... Name: return-of-revenge-of-matmult-pep.rst Type: application/octet-stream Size: 26727 bytes Desc: not available URL: From dineshbvadhia at hotmail.com Sat Feb 22 05:03:54 2014 From: dineshbvadhia at hotmail.com (Dinesh Vadhia) Date: Sat, 22 Feb 2014 02:03:54 -0800 Subject: [SciPy-User] Matrix multiplication operator PEP Message-ID: +1 Btw, what do other languages use as their "@" matrix multiplication operator. -------------- next part -------------- An HTML attachment was scrubbed... URL: From sturla.molden at gmail.com Sat Feb 22 05:20:45 2014 From: sturla.molden at gmail.com (Sturla Molden) Date: Sat, 22 Feb 2014 10:20:45 +0000 (UTC) Subject: [SciPy-User] Matrix multiplication operator PEP References: Message-ID: <1181117832414756454.002616sturla.molden-gmail.com@news.gmane.org> It has to easily explain why subclassing is unsatisfactory. Otherwise it will be thrown out with the argument that we can just subclass.
Sturla Nathaniel Smith wrote: > Hey all, > > I was feeling grotty and unproductive yesterday, so I decided to > procrastinate by writing a PEP, which I figure we might as well throw at > the wall and see if it sticks. At least, once we do have consensus on > what it says -- I think we probably pretty much do, but let me know. Any > other comments, suggestions, pointings out of places where I out a word, > and arguments about how I've made some terrible mistakes, are also very > welcome; I guess we probably only get one more chance at this. > > Cheers, > -n > > -- > Nathaniel J. Smith > Postdoctoral researcher - Informatics - University of Edinburgh > http://vorpus.org > > _______________________________________________ SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user From njs at pobox.com Sat Feb 22 06:36:10 2014 From: njs at pobox.com (Nathaniel Smith) Date: Sat, 22 Feb 2014 06:36:10 -0500 Subject: [SciPy-User] Matrix multiplication operator PEP In-Reply-To: <1181117832414756454.002616sturla.molden-gmail.com@news.gmane.org> References: <1181117832414756454.002616sturla.molden-gmail.com@news.gmane.org> Message-ID: On 22 Feb 2014 05:21, "Sturla Molden" wrote: > > It has to easily explain why subclassing is unsatisfactory. Otherwise it > will be thrown out with the argument that we can just subclass. Do you mean the first "Alternative" (second half of "Rationale"), or something else? -n > > > Nathaniel Smith wrote: > > Hey all, > > > > I was feeling grotty and unproductive yesterday, so I decided to > > procrastinate by writing a PEP, which I figure we might as well throw at > > the wall and see if it sticks. At least, once we do have consensus on > > what it says -- I think we probably pretty much do, but let me know.
Any > > other comments, suggestions, pointings out of places where I out a word, > > and arguments about how I've made some terrible mistakes, are also very > > welcome; I guess we probably only get one more chance at this. > > > > Cheers, > > -n > > > > -- > > Nathaniel J. Smith > > Postdoctoral researcher - Informatics - University of Edinburgh > > http://vorpus.org > > > > _______________________________________________ SciPy-User mailing list > > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user > > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user -------------- next part -------------- An HTML attachment was scrubbed... URL: From sturla.molden at gmail.com Sat Feb 22 14:26:16 2014 From: sturla.molden at gmail.com (Sturla Molden) Date: Sat, 22 Feb 2014 20:26:16 +0100 Subject: [SciPy-User] Matrix multiplication operator PEP In-Reply-To: References: <1181117832414756454.002616sturla.molden-gmail.com@news.gmane.org> Message-ID: On 22/02/14 12:36, Nathaniel Smith wrote: > On 22 Feb 2014 05:21, "Sturla Molden" > wrote: > > > > It has to easily explain why subclassing is unsatisfactory. Otherwise it > > will be thrown out with the argument that we can just subclass. > > Do you mean the first "Alternative" (second half of "Rationale"), or > something else? > > -n I don't care as long as the reader will understand that subclassing is not a viable option. Just make sure it does not drown in TL;DR text. Say it early and preferably repeat it several times, so it does not get missed. This is a literary strategy I've learned to use from writing grant applications: Most of the prosaic text is never read, so make sure the major message stands out as conspicuously as possible. Subclassing is likely to be the major counter-argument, so make sure you shoot it down thoroughly.
Sturla From njs at pobox.com Sat Feb 22 15:21:23 2014 From: njs at pobox.com (Nathaniel Smith) Date: Sat, 22 Feb 2014 15:21:23 -0500 Subject: [SciPy-User] Matrix multiplication operator PEP In-Reply-To: References: <1181117832414756454.002616sturla.molden-gmail.com@news.gmane.org> Message-ID: On 22 Feb 2014 14:26, "Sturla Molden" wrote: > > On 22/02/14 12:36, Nathaniel Smith wrote: > > On 22 Feb 2014 05:21, "Sturla Molden" > > wrote: > > > > > > It has to easily explain why subclassing is unsatisfactory. Otherwise it > > > will be thrown out with the argument that we can just subclass. > > > > Do you mean the first "Alternative" (second half of "Rationale"), or > > something else? > > > > -n > > > I dont't care as long as the reader will understand that subclassing is > not a viable option. Just make sure it does not drown in TL;DR text. Say > it early and preferably repeat it several times, so it does not get > missed. This is a literary strategy I've learned to use from writing > grant applications: Most of the prosaic text is never read, so make sure > the major message stand out as conspicuously as possible. > > Subclassing is likely to be the major counter-argument, so make sure you > shoot it down thoroughly. My question is what counterargument are you referring to when you say "subclassing" :-). Is it the "you should just have both np.matrix and np.ndarray" argument? -n -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From njs at pobox.com Sat Feb 22 16:26:43 2014 From: njs at pobox.com (Nathaniel Smith) Date: Sat, 22 Feb 2014 16:26:43 -0500 Subject: [SciPy-User] Matrix multiplication operator PEP In-Reply-To: References: Message-ID: You know, better idea: let's move discussion to a PR: https://github.com/numpy/numpy/pull/4351 -n On Sat, Feb 22, 2014 at 12:04 AM, Nathaniel Smith wrote: > Hey all, > > I was feeling grotty and unproductive yesterday, so I decided to > procrastinate by writing a PEP, which I figure we might as well throw > at the wall and see if it sticks. At least, once we do have consensus > on what it says -- I think we probably pretty much do, but let me > know. Any other comments, suggestions, pointings out of places where I > out a word, and arguments about how I've made some terrible mistakes, > are also very welcome; I guess we probably only get one more chance at > this. > > Cheers, > -n > > -- > Nathaniel J. Smith > Postdoctoral researcher - Informatics - University of Edinburgh > http://vorpus.org -- Nathaniel J. Smith Postdoctoral researcher - Informatics - University of Edinburgh http://vorpus.org From gdmcbain at freeshell.org Sat Feb 22 18:09:02 2014 From: gdmcbain at freeshell.org (Geordie McBain) Date: Sun, 23 Feb 2014 10:09:02 +1100 Subject: [SciPy-User] Matrix multiplication operator PEP In-Reply-To: References: Message-ID: I like this idea and draft PEP a lot. Having this feature would ease my programming and greatly improve the readability of my SciPy programs. Having been trained in Matlab and then having used GNU Octave for many years, I thought about adapting their pair of infix multiplication operators, * and .*, but quickly realized that this wouldn't translate as they use the lone star for the linear algebraic product. A small suggestion is an additional mnemonic motivation for the at-sign: it slightly resembles the centred dot (LaTeX \cdot) often used in printed mathematics to represent an algebraic product. 
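For concreteness, the two products being distinguished look like this in NumPy today (spelling the matrix product with .dot(), since the proposed @ operator does not yet exist):

```python
import numpy as np

A = np.array([[1, 2], [3, 4]])
B = np.array([[5, 6], [7, 8]])

print(A * B)     # elementwise (Hadamard) product: [[5, 12], [21, 32]]
print(A.dot(B))  # matrix product, the PEP's proposed A @ B: [[19, 22], [43, 50]]
```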
A bolder one that I haven't thought through yet: were there such an operator, besides making heavy use of its intended purpose, how might I overload it? The first thing that springs to mind is functional composition and application. The fundamental idea of a matrix is as a function on vectors; matrix-matrix multiplication is defined subsequently by associativity. If I had arbitrary compatible functions f and g and an argument x, would I like to write 'f @ x' for f (x) and 'g @ f' for lambda x: g (f (x))? Maybe not the former so much but the latter definitely. Haskell uses a dot for this as in 'g . f'. http://www.haskell.org/haskellwiki/Function_composition Of course the dot's unpalatable here as it's an accessor in Python. The use of @ for composition might go some way to addressing the concern about adding an operator with no stdlib uses: I presume it does contain a few functional compositions. Then turning that argument around: what about (should the PEP not get up, or even before it gets up) defining ndarray.__call__ as dot? So we'd have A (x) instead of A.dot (x) or A @ x. An advantage of this approach would be that it wouldn't require changing Python, only NumPy. Matlab/Octave can't do that as they use parentheses for indexing as well as application; NumPy has left parentheses free by using brackets for the former. -------------- next part -------------- An HTML attachment was scrubbed... URL: From sturla.molden at gmail.com Sat Feb 22 21:57:57 2014 From: sturla.molden at gmail.com (Sturla Molden) Date: Sun, 23 Feb 2014 02:57:57 +0000 (UTC) Subject: [SciPy-User] Matrix multiplication operator PEP References: <1181117832414756454.002616sturla.molden-gmail.com@news.gmane.org> Message-ID: <482712841414816726.719923sturla.molden-gmail.com@news.gmane.org> Nathaniel Smith wrote: > Subclassing is likely to be the major counter-argument, so make sure you > shoot it down thoroughly. 
> > My question is what counterargument are you referring to when you say > "subclassing" :-). Is it the "you should just have both np.matrix and np.ndarray" argument? > Yes, exactly that, but perhaps from a more principled point of view: That subclassing and overloading the operator in a subclass is the more "pythonic" way to change the behavior of an operator. Whereas adding more syntactic bloat is unpythonic. So we have to explain why subclassing and operator overloading is insufficient. Sturla From matthew.brett at gmail.com Sat Feb 22 22:08:10 2014 From: matthew.brett at gmail.com (Matthew Brett) Date: Sat, 22 Feb 2014 19:08:10 -0800 Subject: [SciPy-User] Matrix multiplication operator PEP In-Reply-To: <482712841414816726.719923sturla.molden-gmail.com@news.gmane.org> References: <1181117832414756454.002616sturla.molden-gmail.com@news.gmane.org> <482712841414816726.719923sturla.molden-gmail.com@news.gmane.org> Message-ID: Hi, On Sat, Feb 22, 2014 at 6:57 PM, Sturla Molden wrote: > Nathaniel Smith wrote: > >> Subclassing is likely to be the major counter-argument, so make sure you >> shoot it down thoroughly. >> >> My question is what counterargument are you referring to when you say >> "subclassing" :-). Is it the "you should just have both np.matrix and np.ndarray" argument? >> > > Yes, exactly that, but perhaps from a more a more principal point of view: > That subclassing and overloading the operator in a subclass is the more > "pythonic" way to change the behavior of an operator. Whereas adding more > syntactic bloat is unpythonic. So we have to explain why subclassing and > operator overloading is insufficient. Yes, we do need to explain why we need two '*' operators, one for element-wise multiplication and one for matrix multiplication. Sub-classing can't do that for us - as we know well - we just replace one mult by the other.
Cheers, Matthew From gdmcbain at freeshell.org Sun Feb 23 17:26:27 2014 From: gdmcbain at freeshell.org (Geordie McBain) Date: Mon, 24 Feb 2014 09:26:27 +1100 Subject: [SciPy-User] Matrix multiplication operator PEP In-Reply-To: References: Message-ID: 2014-02-22 21:03 GMT+11:00 Dinesh Vadhia : > +1 > > Btw, what do other languages use as their "@" matrix multiplication > operator. I skimmed a few examples from http://rosettacode.org/wiki/Matrix_multiplication:
APL: +.×
IDL: #
J: +/ .*
Julia: *
K: _mul
Mathematica: .
MATLAB/Octave: * (with .* for elementwise)
Maxima: . (with * for elementwise)
Pari/GP: *
R: %*%
Ruby: * (but only for a special matrix class)
TI-83 BASIC: *
I'd say APL's use of non-ASCII characters was historically unpopular. IDL's hash clashes with Python's comment. J uses an ASCII reworking of APL, I think, so it's got a much more elaborate idea of combining multiple infix operators together; powerful but perhaps not really Pythonic. The dot of Mathematica and Maxima is nice but clashes with Python's object attribute/method/module accessor. Julia, Octave, Pari/GP, and TI-83 BASIC's star is nice but would be backward-incompatible with NumPy's elementwise multiplication; Ruby's star relies on the subclass idea eloquently rebutted by the original poster. R's and K's multicharacter things are pretty ugly. -- G. D. McBain Theory of Lift - Introductory Computational Aerodynamics in MATLAB/Octave Out now - http://www.wileyeurope.com/remtitle.cgi?111995228X From fperez.net at gmail.com Sun Feb 23 19:16:54 2014 From: fperez.net at gmail.com (Fernando Perez) Date: Sun, 23 Feb 2014 16:16:54 -0800 Subject: [SciPy-User] Matrix multiplication operator PEP In-Reply-To: References: Message-ID: Great info, but it might be better to record it in the PEP PR: https://github.com/numpy/numpy/pull/4351 so it makes it into the final document. I think Nathaniel wanted to continue the discussion there, so it's not spread across lists.
Cheers f On Sun, Feb 23, 2014 at 2:26 PM, Geordie McBain wrote: > 2014-02-22 21:03 GMT+11:00 Dinesh Vadhia : > > +1 > > > > Btw, what do other languages use as their "@" matrix multiplication > > operator. > > I skimmed a few examples from > http://rosettacode.org/wiki/Matrix_multiplication: > > APL: +.× > IDL: # > J: +/ .* > Julia: * > K: _mul > Mathematica: . > MATLAB/Octave: * (with .* for elementwise) > Maxima: . (with * for elementwise) > Pari/GP: * > R: %*% > Ruby: * (but only for a special matrix class) > TI-83 BASIC: * > > I'd say APL's use of non-ASCII characters was historically unpopular. > IDL's hash clashes with Python's comment. J uses an ASCII reworking > of APL, I think, so it's got a much more elaborate idea of combining > multiple infix operators together; powerful but perhaps not really > Pythonic. The dot of Mathematica and Maxima is nice but clashes with > Python's object attribute/method/module accessor. Julia, Octave, > Pari/GP, and TI-83 BASIC's star is nice but would be > backward-incompatible with NumPy's elementwise multiplication; Ruby's > star relies on the subclass idea eloquently rebutted by the original > poster. R's and K's multicharacter things are pretty ugly. > > -- > G. D. McBain > > Theory of Lift - Introductory Computational Aerodynamics in MATLAB/Octave > Out now - http://www.wileyeurope.com/remtitle.cgi?111995228X > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user > -- Fernando Perez (@fperez_org; http://fperez.org) fperez.net-at-gmail: mailing lists only (I ignore this when swamped!) fernando.perez-at-berkeley: contact me here for any direct mail -------------- next part -------------- An HTML attachment was scrubbed...
URL: From gdmcbain at freeshell.org Sun Feb 23 22:14:07 2014 From: gdmcbain at freeshell.org (Geordie McBain) Date: Mon, 24 Feb 2014 14:14:07 +1100 Subject: [SciPy-User] Matrix multiplication operator PEP In-Reply-To: References: Message-ID: 2014-02-24 11:16 GMT+11:00 Fernando Perez : > Great info, but it might be better to record it in the PEP PR: > > https://github.com/numpy/numpy/pull/4351 > > so it makes it into the final document. I think Nathaniel wanted to continue > the discussion there, so it's not spread across lists. Thanks, yes, sorry, missed that point. Done. https://github.com/numpy/numpy/pull/4351#issuecomment-35854661 From cimrman3 at ntc.zcu.cz Tue Feb 25 08:57:11 2014 From: cimrman3 at ntc.zcu.cz (Robert Cimrman) Date: Tue, 25 Feb 2014 14:57:11 +0100 Subject: [SciPy-User] ANN: SfePy 2014.1 Message-ID: <530CA137.8090606@ntc.zcu.cz> I am pleased to announce release 2014.1 of SfePy. Description ----------- SfePy (simple finite elements in Python) is software for solving systems of coupled partial differential equations by the finite element method. The code is based on NumPy and SciPy packages. It is distributed under the new BSD license. Home page: http://sfepy.org Mailing list: http://groups.google.com/group/sfepy-devel Git (source) repository, issue tracker, wiki: http://github.com/sfepy Highlights of this release -------------------------- - sfepy.fem was split to separate FEM-specific and general modules - lower memory usage by creating active DOF connectivities directly from field connectivities - new handling of field and variable shapes - clean up: many obsolete modules were removed, all module names follow naming conventions For full release notes see http://docs.sfepy.org/doc/release_notes.html#id1 (rather long and technical). Best regards, Robert Cimrman and Contributors (*) (*) Contributors to this release (alphabetical order): Vladimír Lukeš, Matyáš
Novák, Jaroslav Vondřejc From andyfaff at gmail.com Thu Feb 27 02:03:55 2014 From: andyfaff at gmail.com (Andrew Nelson) Date: Thu, 27 Feb 2014 18:03:55 +1100 Subject: [SciPy-User] estimating cov_x matrix with leastsq, without doing a fit. Message-ID: Dear list, I have a least squares data fitting system. I use two different ways of fitting the data; the first is with a differential evolution algorithm, the second is with the scipy.optimize.leastsq function. When I use the leastsq function I obtain the cov_x matrix, giving me estimated parameter uncertainties. However, with my home-rolled DE algorithm I don't get a covariance matrix and wish to estimate one. However, I don't want to change the fitted parameters, I just want the covariance matrix estimated. Is there any way of getting leastsq to give me the covariance matrix solely based on the initial parameters? If there isn't, can anyone suggest the best way for me to accomplish this? At the moment I am using numdifftools to calculate the Hessian and inverting this, but this is not giving me the same numbers as I get out of the leastsq function. regards, Andrew. -- _____________________________________ Dr. Andrew Nelson _____________________________________ From nadeeshtv at gmail.com Thu Feb 27 06:11:25 2014 From: nadeeshtv at gmail.com (nadeesh t v) Date: Thu, 27 Feb 2014 16:41:25 +0530 Subject: [SciPy-User] solving under determined system of equation Message-ID: hi, I have to solve a system of equations of the form Ax=B, x>=0, which can be underdetermined. Which SciPy library function should I use? I know that scipy.optimize.nnls is there, but can it solve an underdetermined system of equations? Thanks and Regards nadeesh -------------- next part -------------- An HTML attachment was scrubbed...
URL: From josef.pktd at gmail.com Thu Feb 27 07:49:54 2014 From: josef.pktd at gmail.com (josef.pktd at gmail.com) Date: Thu, 27 Feb 2014 07:49:54 -0500 Subject: [SciPy-User] estimating cov_x matrix with leastsq, without doing a fit. In-Reply-To: References: Message-ID: On Thu, Feb 27, 2014 at 2:03 AM, Andrew Nelson wrote: > Dear list, > I have a least squares data fitting system. I use two different ways > of fitting the data; the first is with a differential evolution > algorithm, the second is with the scipy.optimize.leastsq function. > > When I use the leastsq function I obtain the cov_x matrix, giving me > estimated parameter uncertainties. > > However, with my home rolled DE algorithm I don't get a covariance > matrix and wish to estimate one. However, I don't want to change the > fitted parameters, I just want the covariance matrix estimated. Is > there any way of getting leastsq to give me the covariance matrix > solely based on the initial parameters? If your own fitted parameters also solve the leastsq problem, then calling leastsq with your values as starting values should not change the parameters. > > If there isn't, can anyone suggest the best way for me to accomplish > this? At the moment I am using numdifftools to calculate the Hessian > and inverting this, but this is not giving me the same numbers as I > get out of the leastsq function. Depending on how "nice" your function is, the values can differ quite a bit depending on the choice of the stepsize in the numerical derivatives. In general I would trust numdifftools more than the derivatives returned by leastsq. You could also compare with the numerical derivatives in statsmodels. http://statsmodels.sourceforge.net/devel/tools.html#numerical-differentiation Josef > > regards, > Andrew. > > -- > _____________________________________ > Dr. 
Andrew Nelson > > > _____________________________________ > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user From newville at cars.uchicago.edu Thu Feb 27 07:56:19 2014 From: newville at cars.uchicago.edu (Matt Newville) Date: Thu, 27 Feb 2014 06:56:19 -0600 Subject: [SciPy-User] estimating cov_x matrix with leastsq, without doing a fit. In-Reply-To: References: Message-ID: Andrew, On Thu, Feb 27, 2014 at 1:03 AM, Andrew Nelson wrote: > Dear list, > I have a least squares data fitting system. I use two different ways > of fitting the data; the first is with a differential evolution > algorithm, the second is with the scipy.optimize.leastsq function. > > When I use the leastsq function I obtain the cov_x matrix, giving me > estimated parameter uncertainties. > > However, with my home rolled DE algorithm I don't get a covariance > matrix and wish to estimate one. However, I don't want to change the > fitted parameters, I just want the covariance matrix estimated. Is > there any way of getting leastsq to give me the covariance matrix > solely based on the initial parameters? Does it work to run leastsq() starting with the solution from your DE algorithm? That is, if you've found a good solution, and start leastsq() with that, wouldn't you be concerned if it finds a significantly different solution? Maybe leastsq() can be thought of as refining the values found from the DE approach? If you think (or find) that leastsq() is running off to obviously wrong solutions, tools like lmfit allow you to put bounds on parameter values with leastsq(). Sometimes that complicates finding a good (non-singular) covariance matrix, but it often works just fine. 
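The quantity this thread is after is the standard covariance estimate s^2 * inv(J^T J) evaluated at the best-fit parameters, and it can be formed at any fixed parameter vector without running a fit at all. A minimal sketch with a forward-difference Jacobian (the exponential model, noise level, and step size below are invented for illustration):

```python
import numpy as np

def residuals(p, x, y):
    # Placeholder model: y = p0 * exp(-p1 * x)
    return y - p[0] * np.exp(-p[1] * x)

def covariance(p, x, y, eps=1e-7):
    """Estimate cov(p) ~ s^2 * inv(J^T J) at a fixed parameter vector p."""
    p = np.asarray(p, dtype=float)
    r0 = residuals(p, x, y)
    J = np.empty((r0.size, p.size))
    for j in range(p.size):
        dp = p.copy()
        dp[j] += eps                              # forward difference in p[j]
        J[:, j] = (residuals(dp, x, y) - r0) / eps
    s2 = np.dot(r0, r0) / (r0.size - p.size)      # residual variance
    return s2 * np.linalg.inv(np.dot(J.T, J))

x = np.linspace(0.0, 1.0, 50)
y = 2.0 * np.exp(-3.0 * x) + 0.01 * np.random.RandomState(0).randn(50)
cov = covariance([2.0, 3.0], x, y)
print(np.sqrt(np.diag(cov)))  # 1-sigma uncertainties on p0, p1
```

The inv(J^T J) factor is what leastsq reports as cov_x; multiplying by the residual variance gives parameter uncertainties. As with any finite-difference scheme, the step size eps may need tuning, which is the same caveat Josef raises about numerical derivatives.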
I know a few people using lmfit to first do a fit with Powell's method (often thought to be more stable than LM when far from solution, and so safer in some automated/batch analysis), and then use that solution with leastsq() to get a final solution with error bars and correlations. Lmfit tries to simplify that by allowing you to switch fitting methods without changing your objective function. We'd love to add a DE algorithm. If you know of a good one, or are willing to share yours, I'd love to hear it. > If there isn't, can anyone suggest the best way for me to accomplish > this? At the moment I am using numdifftools to calculate the Hessian > and inverting this, but this is not giving me the same numbers as I > get out of the leastsq function. I don't think it would be too hard to lift the code that extracts and uses the jacobian to calculate the covariance matrix from leastsq() (but be mindful of the pivoting). It might be easier to start with the MPFIT code, as it's pure Python. Hope that helps, --Matt Newville From f5owl at free.fr Thu Feb 27 09:53:22 2014 From: f5owl at free.fr (f5owl at free.fr) Date: Thu, 27 Feb 2014 09:53:22 -0500 (EST) Subject: [SciPy-User] lfilter produce very low output In-Reply-To: <1860534687.439148773.1393511865641.JavaMail.root@zimbra48-e8.priv.proxad.net> Message-ID: <669368969.439187758.1393512802958.JavaMail.root@zimbra48-e8.priv.proxad.net> Dear list readers, I am new to scipy and I am trying to filter some wav files containing experimental data. The files contain an (almost) sine wave at 5kHz with some 50 and 100Hz noise and other spurious signals. The fs=44100Hz and samples are 16 bits. I use firwin and lfilter to filter my data between 4000 and 6000 Hz. My problem is that the output of lfilter is very low and seems to be almost independent of the filter frequencies. I have made a test file with a pure 5kHz sine wave and results are consistent. I have also tried normalizing the data between -1 and 1, but I have the same problem.
I can't figure out what is wrong. I put my code at the end of the message. phil1.wav is real data, sin_0_5000_44100.wav is generated sine. Attached are two plots of the first 400 samples made with the two files. With the simulated sine wave, the result is consistent: filter delay and low attenuation. With the real data, no filter delay and output very low. Can someone help? Thank you Here is my code:
import matplotlib.pyplot as plt
import numpy as np
from scipy.signal import lfilter, firwin
from scipy.io import wavfile
#filename='phil1.wav'
filename='sin_0_5000_44100.wav'
fs, sig = wavfile.read(filename)
# extract first channel
r= np.hsplit(sig,2)[0]
print('fs = %f'%fs)
print('min %f max %f mean %f rms %f'%(r.min(),r.max(),r.mean(),np.sqrt(np.mean(r**2))))
flow=4000
fup=6000
taps = firwin(256,[flow,fup],nyq=fs/2,pass_zero=False, scale=False)
rf = lfilter(taps, 1.0, r)
print('min %f max %f mean %f rms %f'%(rf.min(),rf.max(),rf.mean(),np.sqrt(np.mean(rf**2))))
t=np.arange(len(r))*(1./fs)
plt.subplot(211)
plt.title(filename)
plt.plot(t[:400],r[:400])
plt.subplot(212)
plt.title(filename+' filtered %.0f %.0f'%(flow,fup))
plt.plot(t[:400],rf[:400])
plt.show()
-------------- next part -------------- A non-text attachment was scrubbed... Name: figure_bad.png Type: image/png Size: 95483 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: figure_ok.png Type: image/png Size: 100667 bytes Desc: not available URL: From newville at cars.uchicago.edu Thu Feb 27 11:08:38 2014 From: newville at cars.uchicago.edu (Matt Newville) Date: Thu, 27 Feb 2014 10:08:38 -0600 Subject: [SciPy-User] estimating cov_x matrix with leastsq, without doing a fit. In-Reply-To: References: Message-ID: Hi Josef, Andrew, On Thu, Feb 27, 2014 at 6:49 AM, wrote: > On Thu, Feb 27, 2014 at 2:03 AM, Andrew Nelson wrote: >> Dear list, >> I have a least squares data fitting system.
I use two different ways
>> of fitting the data; the first is with a differential evolution
>> algorithm, the second is with the scipy.optimize.leastsq function.
>>
>> When I use the leastsq function I obtain the cov_x matrix, giving me
>> estimated parameter uncertainties.
>>
>> However, with my home rolled DE algorithm I don't get a covariance
>> matrix and wish to estimate one. However, I don't want to change the
>> fitted parameters, I just want the covariance matrix estimated. Is
>> there any way of getting leastsq to give me the covariance matrix
>> solely based on the initial parameters?
>
> If your own fitted parameters also solve the leastsq problem, then
> calling leastsq with your values as starting values should not change
> the parameters.
>
>>
>> If there isn't, can anyone suggest the best way for me to accomplish
>> this? At the moment I am using numdifftools to calculate the Hessian
>> and inverting this, but this is not giving me the same numbers as I
>> get out of the leastsq function.
>
> Depending on how "nice" your function is, the values can differ quite
> a bit depending on the choice of the stepsize in the numerical
> derivatives.
>
> In general I would trust numdifftools more than the derivatives
> returned by leastsq. You could also compare with the numerical
> derivatives in statsmodels.
> http://statsmodels.sourceforge.net/devel/tools.html#numerical-differentiation

Do you think it would be worth adding an option to leastsq() to use these methods to calculate the covariance after the best solution is found?

I forgot to mention that lmfit also includes methods to do a brute-force estimate of confidence intervals, which can be especially useful when sections of the covariance matrix are far from elliptical. That might be a useful approach for a problem that needs a DE solver.
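The workflow being discussed — take parameters found by some other optimizer (e.g. a DE solver) and estimate a covariance matrix at that point — can be sketched in plain numpy. Everything below (model, data, helper names) is invented for illustration; it uses a forward-difference Jacobian J of the residuals and cov ~ s**2 * inv(J.T @ J):

```python
import numpy as np

def fd_jacobian(resid, p, eps=1e-8):
    """Forward-difference Jacobian of the residual vector w.r.t. the parameters."""
    r0 = resid(p)
    J = np.empty((r0.size, p.size))
    for j in range(p.size):
        dp = np.zeros_like(p)
        dp[j] = eps * max(1.0, abs(p[j]))
        J[:, j] = (resid(p + dp) - r0) / dp[j]
    return J

# Invented example: data from y = a*exp(-b*x) plus a little noise.
x = np.linspace(0.0, 4.0, 50)
rng = np.random.default_rng(0)
y = 2.5 * np.exp(-1.3 * x) + 0.01 * rng.standard_normal(x.size)

def resid(p):
    return y - p[0] * np.exp(-p[1] * x)

p_fit = np.array([2.5, 1.3])            # pretend this came from a DE solver
J = fd_jacobian(resid, p_fit)
r = resid(p_fit)
s_sq = (r @ r) / (x.size - p_fit.size)  # residual variance estimate
cov = s_sq * np.linalg.inv(J.T @ J)     # covariance of the parameters
perr = np.sqrt(np.diag(cov))            # 1-sigma parameter uncertainties
```

As noted above, the result is sensitive to the finite-difference step size, and the formula is only meaningful if p_fit actually sits at (or very near) a least-squares minimum.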
--Matt Newville From warren.weckesser at gmail.com Thu Feb 27 11:09:12 2014 From: warren.weckesser at gmail.com (Warren Weckesser) Date: Thu, 27 Feb 2014 11:09:12 -0500 Subject: [SciPy-User] lfilter produce very low output In-Reply-To: <669368969.439187758.1393512802958.JavaMail.root@zimbra48-e8.priv.proxad.net> References: <1860534687.439148773.1393511865641.JavaMail.root@zimbra48-e8.priv.proxad.net> <669368969.439187758.1393512802958.JavaMail.root@zimbra48-e8.priv.proxad.net> Message-ID: On 2/27/14, f5owl at free.fr wrote: > Dear list readers, > I am new to scipy and I am trying to filter some wav file containing > experimental data. > The files contain an (almost) sine wave at 5kHz with some 50 and 100Hz noise > and other spurious. > The fs=44100Hz and samples are 16 bits. > I use firwin and lfilter to filter my data between 4000 and 6000 Hz. > My problem is that the output of lfilter is very low and seem to be almost > independent of the filter frequencies. > I have made a test file with a pure 5kHz sine wave and result are > consistant. > I have to normalize data between -1 and 1 but I have the same problem. > I can figure out what is wrong. > I put my code and the end of the message. phil1.wav is real data, > sin_0_5000_44100.wav is generated sine. > Attachment is two plots of the first 400 samples made with the two files. > With the simulated sine wave, result is consistent filter delay and low > attenuation. > With the real data, no filter delay and output very low. > Can someone help ? Can you put the file phil1.wav somewhere accessible? It would help us reproduce the output you are seeing. 
Warren > Thank you > > Here is my code : > > import matplotlib.pyplot as plt > import numpy as np > from scipy.signal import lfilter, firwin > from scipy.io import wavfile > #filename='phil1.wav' > filename='sin_0_5000_44100.wav' > > fs, sig = wavfile.read(filename) > > # extract first channel > r= np.hsplit(sig,2)[0] > > print('fs = %f'%fs) > print('min %f max %f mean %f rms > %f'%(r.min(),r.max(),r.mean(),np.sqrt(np.mean(r**2)))) > flow=4000 > fup=6000 > taps = firwin(256,[flow,fup],nyq=fs/2,pass_zero=False, scale=False) > > rf = lfilter(taps, 1.0, r) > > print('min %f max %f mean %f rms > %f'%(rf.min(),rf.max(),rf.mean(),np.sqrt(np.mean(rf**2)))) > t=np.arange(len(r))*(1./fs) > plt.subplot(211) > plt.title(filename) > plt.plot(t[:400],r[:400]) > plt.subplot(212) > plt.title(filename+' filtered %.0f %.0f'%(flow,fup)) > plt.plot(t[:400],rf[:400]) > > plt.show() > > From josef.pktd at gmail.com Thu Feb 27 11:32:06 2014 From: josef.pktd at gmail.com (josef.pktd at gmail.com) Date: Thu, 27 Feb 2014 11:32:06 -0500 Subject: [SciPy-User] estimating cov_x matrix with leastsq, without doing a fit. In-Reply-To: References: Message-ID: On Thu, Feb 27, 2014 at 11:08 AM, Matt Newville wrote: > Hi Josef, Andrew, > > On Thu, Feb 27, 2014 at 6:49 AM, wrote: >> On Thu, Feb 27, 2014 at 2:03 AM, Andrew Nelson wrote: >>> Dear list, >>> I have a least squares data fitting system. I use two different ways >>> of fitting the data; the first is with a differential evolution >>> algorithm, the second is with the scipy.optimize.leastsq function. >>> >>> When I use the leastsq function I obtain the cov_x matrix, giving me >>> estimated parameter uncertainties. >>> >>> However, with my home rolled DE algorithm I don't get a covariance >>> matrix and wish to estimate one. However, I don't want to change the >>> fitted parameters, I just want the covariance matrix estimated. Is >>> there any way of getting leastsq to give me the covariance matrix >>> solely based on the initial parameters? 
>> >> If your own fitted parameters also solve the leastsq problem, then >> calling leastsq with your values as starting values should not change >> the parameters. >> >>> >>> If there isn't, can anyone suggest the best way for me to accomplish >>> this? At the moment I am using numdifftools to calculate the Hessian >>> and inverting this, but this is not giving me the same numbers as I >>> get out of the leastsq function. >> >> Depending on how "nice" your function is, the values can differ quite >> a bit depending on the choice of the stepsize in the numerical >> derivatives. >> >> In general I would trust numdifftools more than the derivatives >> returned by leastsq. You could also compare with the numerical >> derivatives in statsmodels. >> http://statsmodels.sourceforge.net/devel/tools.html#numerical-differentiation > > Do you think it would be worth adding an option to leastsq() to use > these methods to calculate the covariance after the best solution is > found? scipy has an open issue for adding more numerical derivative facilities. I only remember some spot checking where the cov_x of leastsq wasn't very good. (I'm not using leastsq these days, and my last comment on scaling in leastsq was wrong.) IIRC, In statsmodels we never rely on the Hessian approximation that the optimizer uses. And I have seen also cases that are "not nice", where our own numerical derivatives can be very sensitive. Serious problems like not being positive definite show up less often at the minimum than during the optimization (with the non-leastsq optimizers). Josef > > I forgot the mention that lmfit also includes methods to do a > brute-force estimate of confidence intervals, which can be especially > useful when sections of the covariance matrix are far from elliptical. > That might be a useful approach for a problem that needs a DE > solver. 
> > --Matt Newville
> _______________________________________________
> SciPy-User mailing list
> SciPy-User at scipy.org
> http://mail.scipy.org/mailman/listinfo/scipy-user

From f5owl at free.fr Thu Feb 27 11:46:48 2014
From: f5owl at free.fr (f5owl at free.fr)
Date: Thu, 27 Feb 2014 11:46:48 -0500 (EST)
Subject: [SciPy-User] lfilter produce very low output
In-Reply-To: 
Message-ID: <518784588.439502337.1393519608098.JavaMail.root@zimbra48-e8.priv.proxad.net>

Yes of course it can help :-)
The two files are here : http://arlotto.univ-tln.fr/tmp/
The phil1.wav has two channels; only one (the first read by scipy) contains interesting data.
The phil1.wav level is lower than the generated sine but is still audible.
Philip

----- Original Message -----
From: "Warren Weckesser" 
To: "SciPy Users List" 
Sent: Thursday, 27 February 2014 17:09:12
Subject: Re: [SciPy-User] lfilter produce very low output

On 2/27/14, f5owl at free.fr wrote:
> Dear list readers,
> I am new to scipy and I am trying to filter some wav file containing
> experimental data.
> The files contain an (almost) sine wave at 5kHz with some 50 and 100Hz noise
> and other spurious.
> The fs=44100Hz and samples are 16 bits.
> I use firwin and lfilter to filter my data between 4000 and 6000 Hz.
> My problem is that the output of lfilter is very low and seem to be almost
> independent of the filter frequencies.
> I have made a test file with a pure 5kHz sine wave and result are
> consistant.
> I have to normalize data between -1 and 1 but I have the same problem.
> I can figure out what is wrong.
> I put my code and the end of the message. phil1.wav is real data,
> sin_0_5000_44100.wav is generated sine.
> Attachment is two plots of the first 400 samples made with the two files.
> With the simulated sine wave, result is consistent filter delay and low
> attenuation.
> With the real data, no filter delay and output very low.
> Can someone help ?

Can you put the file phil1.wav somewhere accessible?
It would help us reproduce the output you are seeing.

Warren

> Thank you
>
> Here is my code :
>
> import matplotlib.pyplot as plt
> import numpy as np
> from scipy.signal import lfilter, firwin
> from scipy.io import wavfile
> #filename='phil1.wav'
> filename='sin_0_5000_44100.wav'
>
> fs, sig = wavfile.read(filename)
>
> # extract first channel
> r= np.hsplit(sig,2)[0]
>
> print('fs = %f'%fs)
> print('min %f max %f mean %f rms
> %f'%(r.min(),r.max(),r.mean(),np.sqrt(np.mean(r**2))))
> flow=4000
> fup=6000
> taps = firwin(256,[flow,fup],nyq=fs/2,pass_zero=False, scale=False)
>
> rf = lfilter(taps, 1.0, r)
>
> print('min %f max %f mean %f rms
> %f'%(rf.min(),rf.max(),rf.mean(),np.sqrt(np.mean(rf**2))))
> t=np.arange(len(r))*(1./fs)
> plt.subplot(211)
> plt.title(filename)
> plt.plot(t[:400],r[:400])
> plt.subplot(212)
> plt.title(filename+' filtered %.0f %.0f'%(flow,fup))
> plt.plot(t[:400],rf[:400])
>
> plt.show()
>
>
_______________________________________________
SciPy-User mailing list
SciPy-User at scipy.org
http://mail.scipy.org/mailman/listinfo/scipy-user

From vaggi.federico at gmail.com Thu Feb 27 13:17:53 2014
From: vaggi.federico at gmail.com (federico vaggi)
Date: Thu, 27 Feb 2014 19:17:53 +0100
Subject: [SciPy-User] estimating cov_x matrix with leastsq, without doing a fit.
Message-ID: 

To expand on what Josef was saying: You can take the parameter values that you've obtained at your solution, calculate the Jacobian using finite differences (I didn't know statsmodels had those utility functions, I'll definitely use those in my own code), then use the Jacobian to approximate the Hessian (if you are close to the minimum, it's very close to J.T @ J), then use the Hessian to calculate the covariance matrix.

The only issue with this is that this approach is only really valid if your solution is near a minimum already - if it isn't, the approximation for the Hessian might be way off.

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From nadeeshtv at gmail.com Thu Feb 27 14:10:11 2014
From: nadeeshtv at gmail.com (nadeesh t v)
Date: Fri, 28 Feb 2014 00:40:11 +0530
Subject: [SciPy-User] solve underdetermined system of equations
Message-ID: 

hi,
I have to solve a system of equations of the form Ax=B, x>=0, which can be underdetermined. Which SciPy library function should I use?
I know that scipy.optimize.nnls exists, but can it solve an underdetermined system of equations?
Thanks and Regards

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From f5owl at free.fr Fri Feb 28 05:44:40 2014
From: f5owl at free.fr (f5owl at free.fr)
Date: Fri, 28 Feb 2014 05:44:40 -0500 (EST)
Subject: [SciPy-User] lfilter produce very low output
In-Reply-To: 
Message-ID: <1664704729.441315533.1393584280483.JavaMail.root@zimbra48-e8.priv.proxad.net>

Hello,
I have identified my problem : the phil1.wav is a stereo file.
I converted it to mono with Audacity and now I have consistent results.
Probably r= np.hsplit(sig,2)[0] is not the right way to select only one channel.
Philipp

----- Original Message -----
From: "Warren Weckesser" 
To: "SciPy Users List" 
Sent: Thursday, 27 February 2014 17:09:12
Subject: Re: [SciPy-User] lfilter produce very low output

On 2/27/14, f5owl at free.fr wrote:
> Dear list readers,
> I am new to scipy and I am trying to filter some wav file containing
> experimental data.
> The files contain an (almost) sine wave at 5kHz with some 50 and 100Hz noise
> and other spurious.
> The fs=44100Hz and samples are 16 bits.
> I use firwin and lfilter to filter my data between 4000 and 6000 Hz.
> My problem is that the output of lfilter is very low and seem to be almost
> independent of the filter frequencies.
> I have made a test file with a pure 5kHz sine wave and result are
> consistant.
> I have to normalize data between -1 and 1 but I have the same problem.
> I can figure out what is wrong.
> I put my code and the end of the message.
phil1.wav is real data, > sin_0_5000_44100.wav is generated sine. > Attachment is two plots of the first 400 samples made with the two files. > With the simulated sine wave, result is consistent filter delay and low > attenuation. > With the real data, no filter delay and output very low. > Can someone help ? Can you put the file phil1.wav somewhere accessible? It would help us reproduce the output you are seeing. Warren > Thank you > > Here is my code : > > import matplotlib.pyplot as plt > import numpy as np > from scipy.signal import lfilter, firwin > from scipy.io import wavfile > #filename='phil1.wav' > filename='sin_0_5000_44100.wav' > > fs, sig = wavfile.read(filename) > > # extract first channel > r= np.hsplit(sig,2)[0] > > print('fs = %f'%fs) > print('min %f max %f mean %f rms > %f'%(r.min(),r.max(),r.mean(),np.sqrt(np.mean(r**2)))) > flow=4000 > fup=6000 > taps = firwin(256,[flow,fup],nyq=fs/2,pass_zero=False, scale=False) > > rf = lfilter(taps, 1.0, r) > > print('min %f max %f mean %f rms > %f'%(rf.min(),rf.max(),rf.mean(),np.sqrt(np.mean(rf**2)))) > t=np.arange(len(r))*(1./fs) > plt.subplot(211) > plt.title(filename) > plt.plot(t[:400],r[:400]) > plt.subplot(212) > plt.title(filename+' filtered %.0f %.0f'%(flow,fup)) > plt.plot(t[:400],rf[:400]) > > plt.show() > > _______________________________________________ SciPy-User mailing list SciPy-User at scipy.org http://mail.scipy.org/mailman/listinfo/scipy-user From warren.weckesser at gmail.com Fri Feb 28 09:47:45 2014 From: warren.weckesser at gmail.com (Warren Weckesser) Date: Fri, 28 Feb 2014 09:47:45 -0500 Subject: [SciPy-User] lfilter produce very low output In-Reply-To: <1664704729.441315533.1393584280483.JavaMail.root@zimbra48-e8.priv.proxad.net> References: <1664704729.441315533.1393584280483.JavaMail.root@zimbra48-e8.priv.proxad.net> Message-ID: On 2/28/14, f5owl at free.fr wrote: > Hello, > I have identified my problem : the phil1.wav is a stereo file. 
> I converted it to mono with audacity and now I have consistent results. > Probably r= np.hsplit(sig,2)[0] is not the good way to select only one > channel. > Philipp Hi Philipp, The array sig returned by wavfile.read has shape (2634096, 2), and when you use hsplit like that, r has shape (2634096, 1). It is a 2-d array containing a single column. The problem occurred when you passed this array to lfilter. lfilter can handle n-dimensional arrays; the axis to be filtered is specified with the 'axis' argument. The default axis is -1 (i.e. the last axis of the array). So when you called lfilter with r, it treated r as a collection of 2634096 1-dimensional arrays each with length 1. The quick fix for that would be to use 'axis=0' in the call to lfilter. A better solution is to not use hsplit, and instead use regular numpy slicing to pull apart the signal. To get the first and second channels into 1-d arrays, you can use channel1 = sig[:, 0] channel2 = sig[:, 1] Each of those variables is a 1-d array, and can be passed to lfilter without requiring a change to the axis argument. Warren > > ----- Mail original ----- > De: "Warren Weckesser" > ?: "SciPy Users List" > Envoy?: Jeudi 27 F?vrier 2014 17:09:12 > Objet: Re: [SciPy-User] lfilter produce very low output > > On 2/27/14, f5owl at free.fr wrote: >> Dear list readers, >> I am new to scipy and I am trying to filter some wav file containing >> experimental data. >> The files contain an (almost) sine wave at 5kHz with some 50 and 100Hz >> noise >> and other spurious. >> The fs=44100Hz and samples are 16 bits. >> I use firwin and lfilter to filter my data between 4000 and 6000 Hz. >> My problem is that the output of lfilter is very low and seem to be almost >> independent of the filter frequencies. >> I have made a test file with a pure 5kHz sine wave and result are >> consistant. >> I have to normalize data between -1 and 1 but I have the same problem. >> I can figure out what is wrong. 
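The axis behaviour described above can be checked with a tiny synthetic example (random data standing in for the wav file, and a short moving-average filter standing in for the 256-tap firwin band-pass):

```python
import numpy as np
from scipy.signal import lfilter

# Synthetic "stereo" signal standing in for the wav data: 1000 samples x 2 channels.
sig = np.random.default_rng(0).standard_normal((1000, 2))
taps = np.ones(8) / 8.0            # simple 8-tap moving-average FIR, for illustration

col = np.hsplit(sig, 2)[0]         # shape (1000, 1): still 2-d, a single column
# Default axis=-1 filters each length-1 row separately, so the "filter"
# reduces to multiplying every sample by taps[0].
bad = lfilter(taps, 1.0, col)
# axis=0 filters down the column, treating it as one long signal.
good_2d = lfilter(taps, 1.0, col, axis=0)
# Simplest: slice a 1-d channel out first, then the default axis is fine.
channel1 = sig[:, 0]               # shape (1000,)
good_1d = lfilter(taps, 1.0, channel1)
```

With a long band-pass filter, taps[0] is tiny, which would explain the "very low output" seen on the real (stereo) file.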
>> I put my code and the end of the message. phil1.wav is real data, >> sin_0_5000_44100.wav is generated sine. >> Attachment is two plots of the first 400 samples made with the two files. >> With the simulated sine wave, result is consistent filter delay and low >> attenuation. >> With the real data, no filter delay and output very low. >> Can someone help ? > > > Can you put the file phil1.wav somewhere accessible? It would help > us reproduce the output you are seeing. > > Warren > > >> Thank you >> >> Here is my code : >> >> import matplotlib.pyplot as plt >> import numpy as np >> from scipy.signal import lfilter, firwin >> from scipy.io import wavfile >> #filename='phil1.wav' >> filename='sin_0_5000_44100.wav' >> >> fs, sig = wavfile.read(filename) >> >> # extract first channel >> r= np.hsplit(sig,2)[0] >> >> print('fs = %f'%fs) >> print('min %f max %f mean %f rms >> %f'%(r.min(),r.max(),r.mean(),np.sqrt(np.mean(r**2)))) >> flow=4000 >> fup=6000 >> taps = firwin(256,[flow,fup],nyq=fs/2,pass_zero=False, scale=False) >> >> rf = lfilter(taps, 1.0, r) >> >> print('min %f max %f mean %f rms >> %f'%(rf.min(),rf.max(),rf.mean(),np.sqrt(np.mean(rf**2)))) >> t=np.arange(len(r))*(1./fs) >> plt.subplot(211) >> plt.title(filename) >> plt.plot(t[:400],r[:400]) >> plt.subplot(212) >> plt.title(filename+' filtered %.0f %.0f'%(flow,fup)) >> plt.plot(t[:400],rf[:400]) >> >> plt.show() >> >> > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user > From f5owl at free.fr Fri Feb 28 10:56:18 2014 From: f5owl at free.fr (f5owl at free.fr) Date: Fri, 28 Feb 2014 10:56:18 -0500 (EST) Subject: [SciPy-User] lfilter produce very low output In-Reply-To: Message-ID: 
<1955623358.442199014.1393602978581.JavaMail.root@zimbra48-e8.priv.proxad.net>

Hi Warren,
Thank you very much indeed for these explanations. I have tried it and it works perfectly with stereo files. I would never have found it by myself. But now it's clear.
It also becomes clear that I need a better understanding of numpy and python as well :-)
Thank you.
Philipp

----- Original Message -----
From: "Warren Weckesser" 
To: "SciPy Users List" 
Sent: Friday, 28 February 2014 15:47:45
Subject: Re: [SciPy-User] lfilter produce very low output

On 2/28/14, f5owl at free.fr wrote:
> Hello,
> I have identified my problem : the phil1.wav is a stereo file.
> I converted it to mono with audacity and now I have consistent results.
> Probably r= np.hsplit(sig,2)[0] is not the good way to select only one
> channel.
> Philipp

Hi Philipp,

The array sig returned by wavfile.read has shape (2634096, 2), and when you use hsplit like that, r has shape (2634096, 1). It is a 2-d array containing a single column. The problem occurred when you passed this array to lfilter.

lfilter can handle n-dimensional arrays; the axis to be filtered is specified with the 'axis' argument. The default axis is -1 (i.e. the last axis of the array). So when you called lfilter with r, it treated r as a collection of 2634096 1-dimensional arrays each with length 1. The quick fix for that would be to use 'axis=0' in the call to lfilter.

A better solution is to not use hsplit, and instead use regular numpy slicing to pull apart the signal. To get the first and second channels into 1-d arrays, you can use

channel1 = sig[:, 0]
channel2 = sig[:, 1]

Each of those variables is a 1-d array, and can be passed to lfilter without requiring a change to the axis argument.
Warren > > ----- Mail original ----- > De: "Warren Weckesser" > ?: "SciPy Users List" > Envoy?: Jeudi 27 F?vrier 2014 17:09:12 > Objet: Re: [SciPy-User] lfilter produce very low output > > On 2/27/14, f5owl at free.fr wrote: >> Dear list readers, >> I am new to scipy and I am trying to filter some wav file containing >> experimental data. >> The files contain an (almost) sine wave at 5kHz with some 50 and 100Hz >> noise >> and other spurious. >> The fs=44100Hz and samples are 16 bits. >> I use firwin and lfilter to filter my data between 4000 and 6000 Hz. >> My problem is that the output of lfilter is very low and seem to be almost >> independent of the filter frequencies. >> I have made a test file with a pure 5kHz sine wave and result are >> consistant. >> I have to normalize data between -1 and 1 but I have the same problem. >> I can figure out what is wrong. >> I put my code and the end of the message. phil1.wav is real data, >> sin_0_5000_44100.wav is generated sine. >> Attachment is two plots of the first 400 samples made with the two files. >> With the simulated sine wave, result is consistent filter delay and low >> attenuation. >> With the real data, no filter delay and output very low. >> Can someone help ? > > > Can you put the file phil1.wav somewhere accessible? It would help > us reproduce the output you are seeing. 
> > Warren > > >> Thank you >> >> Here is my code : >> >> import matplotlib.pyplot as plt >> import numpy as np >> from scipy.signal import lfilter, firwin >> from scipy.io import wavfile >> #filename='phil1.wav' >> filename='sin_0_5000_44100.wav' >> >> fs, sig = wavfile.read(filename) >> >> # extract first channel >> r= np.hsplit(sig,2)[0] >> >> print('fs = %f'%fs) >> print('min %f max %f mean %f rms >> %f'%(r.min(),r.max(),r.mean(),np.sqrt(np.mean(r**2)))) >> flow=4000 >> fup=6000 >> taps = firwin(256,[flow,fup],nyq=fs/2,pass_zero=False, scale=False) >> >> rf = lfilter(taps, 1.0, r) >> >> print('min %f max %f mean %f rms >> %f'%(rf.min(),rf.max(),rf.mean(),np.sqrt(np.mean(rf**2)))) >> t=np.arange(len(r))*(1./fs) >> plt.subplot(211) >> plt.title(filename) >> plt.plot(t[:400],r[:400]) >> plt.subplot(212) >> plt.title(filename+' filtered %.0f %.0f'%(flow,fup)) >> plt.plot(t[:400],rf[:400]) >> >> plt.show() >> >> > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user > _______________________________________________ SciPy-User mailing list SciPy-User at scipy.org http://mail.scipy.org/mailman/listinfo/scipy-user From j.anderson at ambisonictoolkit.net Fri Feb 28 15:20:03 2014 From: j.anderson at ambisonictoolkit.net (Joseph Anderson) Date: Fri, 28 Feb 2014 12:20:03 -0800 Subject: [SciPy-User] [OT] Job: Postdoctoral Research Associate position at DXARTS Message-ID: <8A83A82C-2DB4-4418-9A58-89EB2EBC2920@ambisonictoolkit.net> Hello All, Wanted to let everyone know about a new position at DXARTS / University of Washington (Seattle). 
Detailed description here (and below): http://ap.washington.edu/ahr/academic-jobs/position/nn0378/ My best, ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Joseph Anderson j.anderson at ambisonictoolkit.net http://www.ambisonictoolkit.net ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Position Description: The Center for Digital Arts and Experimental Media (DXARTS) has an outstanding opportunity for a postdoctoral Research Associate (0148) position. The successful candidate will develop tools for a variety of research projects ongoing in our center. These projects encompass the areas of sound synthesis and computer music, experimental video and machine vision, mechatronics and digital fabrication, etc. The position requires the ability to work independently, planning and executing the development of new software, supporting advanced artistic research across many media. Specific Responsibilities Work will involve developing new tools and interfaces for media processing, including real-time systems for sound synthesis and processing, spatial audio, performance technologies, HCI, live video processing tools for immersive installations, sensing and control systems applied to robotics and mechatronic systems, as well as other software tools related to specific research projects ongoing in our center. Travel to national and international venues to support performances and showings of faculty work as well as presentations at conferences and festivals. Requirements: PhD in Computer Science or Digital Arts with focus on media programming for audio, video and physical computing. At least 5 years of programming and software development experience in the following programming languages: C / C++, Objective C, Python, Scheme, Ruby, SuperCollider; experience with standard media libraries such as Open Frameworks, OpenCV, GStreamer, JUCE, etc. 
The candidate should have experience developing software on the Mac OS X (and iOS) and Linux operating systems and IDEs such as XCode and CodeBlocks and the ability to manage large projects using GIT repositories. Experience with just in time (JIT) compilers such as LLVM would be considered a plus. To apply, please send a cover letter, three letters of reference, a CV, questions about the position and disability accommodations to Billie Grace at bgrace at uw.edu. Candidates are encouraged to apply as soon as possible. This is a 100% FTE position. The initial appointment will be for 1 year (12mo. service period), with the option of renewal (up to 6 years). The University of Washington is an affirmative action, equal opportunity employer. The University is building a culturally diverse faculty and staff and strongly encourages applications from women, minorities, individuals with disabilities and covered veterans.