From ralf.gommers at gmail.com  Fri May  1 01:49:40 2015
From: ralf.gommers at gmail.com (Ralf Gommers)
Date: Fri, 1 May 2015 07:49:40 +0200
Subject: [SciPy-User] Norm for Sparse Matrix

Hi Wenlei,

On Thu, Apr 30, 2015 at 11:43 PM, Wenlei Xie wrote:

> Hi,
>
> The function numpy.linalg.norm doesn't seem to work with sparse matrices.
> I am wondering if there is any function to get the norm for sparse matrices?

Current master contains a function that does this:
https://github.com/scipy/scipy/blob/master/scipy/sparse/linalg/_norm.py

Will be in the 0.16.0 release.

Ralf
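A minimal usage sketch of that function (untested here, and assuming the in-development scipy.sparse.linalg.norm keeps the numpy.linalg.norm-style interface for the supported orders):

import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import norm   # new in 0.16.0

A = sp.rand(1000, 1000, density=0.01, format='csr')
print(norm(A))           # Frobenius norm by default
print(norm(A, ord=1))    # max column sum
print(norm(A, np.inf))   # max row sum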
From bala.biophysics at gmail.com  Sun May  3 13:12:42 2015
From: bala.biophysics at gmail.com (Bala subramanian)
Date: Sun, 3 May 2015 19:12:42 +0200
Subject: [SciPy-User] gamma distribution fitting: newbie question

Friends,
I am a newbie to scipy and distribution fitting. I have written the
following code to fit a gamma distribution to my data (attached .txt
file):

import numpy as np
from scipy.stats import gamma
import matplotlib.pyplot as plt

# take the data
data = np.loadtxt('test.dat', skiprows=1)
n, bins, patches = plt.hist(data[:,1], bins=50, normed=1,
                            visible=True, color='white')
col = ['red', 'magenta', 'green']

# estimate gamma params
param = gamma.fit(data[:,1], floc=0)

i = 0
for val in [500, 800, 1000]:
    x = np.linspace(np.min(data[:,1]), np.max(data[:,1]), val)
    # create a fitted curve
    pdf_fitted = gamma.pdf(x, param[0], scale=param[-1])
    plt.plot(pdf_fitted, c=col[i], lw=2)
    i += 1

plt.show()

q1) I would like to know if I am doing the fitting correctly?

q2) When I use various sample values (for loop above), I get different
fitted curves, all with the same shape. I don't understand how to choose
this n value.

q3) If I have to estimate the chi2 and p value to see the goodness of
fit, how can I do it in scipy? Someone please give the code I should use
for the same. I want to see if my data follows a gamma distribution.

Thanks,
Bala

From guziy.sasha at gmail.com  Sun May  3 14:29:54 2015
From: guziy.sasha at gmail.com (Oleksandr Huziy)
Date: Sun, 3 May 2015 14:29:54 -0400
Subject: [SciPy-User] gamma distribution fitting: newbie question

Hi:

1) I think your fitting is OK.

2) The bigger the n the better. You get different curves because you are
not plotting it right (plt.plot(pdf_fitted, ...) plots the pdf against
the sample index rather than against x). Please see here the corrected
example (from your code):
http://nbviewer.ipython.org/github/guziy/PyNotebooks/blob/master/Gamma_dist.ipynb

3) I do not feel comfortable writing about chi2 or p-value, hopefully
someone more knowledgeable can help.

Cheers

--
Sasha

From gary.ruben at gmail.com  Mon May  4 02:40:22 2015
From: gary.ruben at gmail.com (gary ruben)
Date: Mon, 4 May 2015 16:40:22 +1000
Subject: [SciPy-User] JOB: Australian Synchrotron Senior Scientific Software Engineer

I'm based at the Australian Synchrotron facility in Melbourne. The
Scientific Computing & IT group here is advertising for a Senior
Scientific Software Engineer: see http://goo.gl/VzeIDd if interested.
The group develops analysis software for various OSes, so Python tends
to be the go-to language. Feel free to distribute this around the wider
scientific developer community.

thanks,
Gary Ruben
CSIRO OCE Postdoctoral Fellow
XFM Tomographic Metals Mapping project

From davidmenhur at gmail.com  Mon May  4 06:03:55 2015
From: davidmenhur at gmail.com (Daπid)
Date: Mon, 4 May 2015 12:03:55 +0200
Subject: [SciPy-User] gamma distribution fitting: newbie question

On 3 May 2015 at 19:12, Bala subramanian wrote:
> q3) If I have to estimate the chi2 and p value to see the goodness of
> fit, how can I do it in scipy? I want to see if my data follows a gamma
> distribution.

Here you have it:
https://docs.scipy.org/doc/scipy/reference/generated/scipy.stats.chisquare.html#scipy.stats.chisquare

Mind the warning, though. Two thirds of your bins are empty or almost
empty, so the test is not very accurate. You would have to apply an
appropriate transformation, or use a more robust method like:
https://docs.scipy.org/doc/scipy/reference/generated/scipy.stats.kstest.html#scipy.stats.kstest

/David.
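To make that concrete, a minimal sketch of the Kolmogorov-Smirnov route for this thread's gamma fit (the fitted (shape, loc, scale) tuple from gamma.fit is passed straight through as args; note the p-value is only approximate when the parameters were estimated from the same data):

import numpy as np
from scipy import stats

data = np.loadtxt('test.dat', skiprows=1)[:, 1]
param = stats.gamma.fit(data, floc=0)      # (shape, loc, scale)

# KS test of the data against the fitted gamma distribution
D, p = stats.kstest(data, 'gamma', args=param)
print(D, p)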
From gallen at arlut.utexas.edu  Tue May  5 12:09:54 2015
From: gallen at arlut.utexas.edu (Gregory Allen)
Date: Tue, 5 May 2015 11:09:54 -0500
Subject: [SciPy-User] signal.firls

On May 3, 2015, at 12:12 PM, Ralf Gommers wrote:
> Does one of you want to add some unit tests and send a pull request on Github?

I can do that, but you may make me learn something new. :)

-Greg

From ralf.gommers at gmail.com  Tue May  5 12:36:03 2015
From: ralf.gommers at gmail.com (Ralf Gommers)
Date: Tue, 5 May 2015 18:36:03 +0200
Subject: [SciPy-User] signal.firls

On Tue, May 5, 2015 at 6:09 PM, Gregory Allen wrote:
> I can do that, but you may make me learn something new. :)

Cool, let us know if you need any help.

Ralf

From yw5aj at virginia.edu  Tue May  5 13:41:29 2015
From: yw5aj at virginia.edu (Yuxiang Wang)
Date: Tue, 5 May 2015 13:41:29 -0400
Subject: [SciPy-User] A bug in SLSQP?

Dear all,

In using scipy.optimize.minimize, I encountered a bug when using bounds.

My bounds are set as: bounds = ((0, None), (0, None), (0, 1), (0, None))

Then, it will say:

    invalid value encountered in greater
    bnderr = where(bnds[:, 0] > bnds[:, 1])[0]

I looked into slsqp.py, and in line 330, it checks:

    if bounds is None or len(bounds) == 0: ...

Obviously, bounds is not None, so it did not replace 1e12 or -1e12 into
the bnds.

Could anyone please help? Thanks!

My scipy version: 0.15.1; numpy version: 1.9.2. I am using Windows 64
bit, Python 3.4.

Shawn

--
Yuxiang "Shawn" Wang
Gerling Research Lab
University of Virginia
yw5aj at virginia.edu
+1 (434) 284-0836
https://sites.google.com/a/virginia.edu/yw5aj/
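A self-contained reproduction along the lines of this report would look something like the following sketch (the objective function is hypothetical, made up only to exercise the bounds-handling path; per the report, any None bound becomes NaN internally, which appears to be what triggers the comparison warning):

import numpy as np
from scipy.optimize import minimize

def f(x):                       # hypothetical objective
    return np.sum((x - 0.5)**2)

bounds = ((0, None), (0, None), (0, 1), (0, None))
res = minimize(f, x0=np.full(4, 0.1), method='SLSQP', bounds=bounds)
print(res.x)   # result is fine; only the warning is spurious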
From yw5aj at virginia.edu  Tue May  5 14:08:58 2015
From: yw5aj at virginia.edu (Yuxiang Wang)
Date: Tue, 5 May 2015 14:08:58 -0400
Subject: [SciPy-User] A bug in SLSQP?

Dear all,

Oops - sorry. Please disregard my previous email. Even though it
complains (the warning), it still filled the xl and xu with the right
values.

Sorry again for the spamming.

Shawn

From ralf.gommers at gmail.com  Tue May  5 14:20:47 2015
From: ralf.gommers at gmail.com (Ralf Gommers)
Date: Tue, 5 May 2015 20:20:47 +0200
Subject: [SciPy-User] A bug in SLSQP?

On Tue, May 5, 2015 at 8:08 PM, Yuxiang Wang wrote:
> Oops - sorry. Please disregard my previous email. Even though it
> complains (the warning), it still filled the xl and xu with the right
> values.

Thanks for the quick update. Looks to me like there's something to fix
though. Your bounds are specified correctly, so there shouldn't be a
warning like that. Can you provide a self-contained example that
reproduces the warning and open an issue?

Ralf

From yw5aj at virginia.edu  Tue May  5 14:46:58 2015
From: yw5aj at virginia.edu (Yuxiang Wang)
Date: Tue, 5 May 2015 14:46:58 -0400
Subject: [SciPy-User] A bug in SLSQP?

Hi Ralf,

Thanks for your response! And - yes, sure thing. Link to the issue:
https://github.com/scipy/scipy/issues/4811

Shawn
From rob.clewley at gmail.com  Wed May  6 19:31:09 2015
From: rob.clewley at gmail.com (Rob Clewley)
Date: Wed, 6 May 2015 19:31:09 -0400
Subject: [SciPy-User] Is there existing code to log-with-bells-on for runtime algorithm diagnostics?

Just to follow up on this thread, for interested readers' future reference...

On Tue, Apr 21, 2015 at 4:22 PM, Robert Kern wrote:
> On Tue, Apr 21, 2015 at 8:02 PM, Rob Clewley wrote:
>> In fact, I'm trying to build a general purpose tool for exploring the
>> inner workings of numerical algorithms for teaching and learning
>> purposes, e.g. for graduate student training or for figuring out
>> parameter choices in difficult applications.
>
> The term you want to search for is "structured logging".
>
> http://www.structlog.org/en/stable/
> http://eliot.readthedocs.org/en/stable/
> https://twiggy.readthedocs.org/en/latest/logging.html#structured-logging
> http://netlogger.lbl.gov/

I posted a new blog entry about my prototypical diagnosis and
visualization tools for python numerical algorithms, built over
matplotlib:
http://robclewley.github.io/logging-and-diagnostic-tools-for-numeric-python-algorithms/

They utilize structlog, and I hooked up a noSQL DB (tinydb) to the
logging to enable search capabilities of the log post-mortem. Comments
and PRs most welcome.

Thanks to everyone for the pointers.
-Rob

From daniel.p.bliss at gmail.com  Thu May  7 04:43:06 2015
From: daniel.p.bliss at gmail.com (Daniel Bliss)
Date: Thu, 7 May 2015 01:43:06 -0700
Subject: [SciPy-User] Race Condition using scipy.weave.inline

Hi SciPy Users/Developers,

I've just posted an issue to the brian2 github page, which describes a
race condition I've encountered using scipy.weave.inline. In short,
multiple simultaneous processes are making calls to inline, and in turn
writing to the same scipy cache files, introducing syntax errors in
these cache files.

Here is the full description of the problem:
https://github.com/brian-team/brian2/issues/463

Is there an easy way to avoid this?

Many thanks,
Dan
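One workaround sometimes suggested for shared-cache races — sketched here as an untested assumption, relying on scipy.weave honoring the PYTHONCOMPILED environment variable for its catalog/cache directory — is to give each process its own cache directory before the first inline call:

import os
import tempfile

# hypothetical per-process cache dir; set before weave is first used
os.environ['PYTHONCOMPILED'] = tempfile.mkdtemp(
    prefix='weave-%d-' % os.getpid())

from scipy import weave
result = weave.inline('return_val = 1;', [])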
From grant.roch at gmail.com  Thu May  7 10:12:10 2015
From: grant.roch at gmail.com (Grant Roch)
Date: Thu, 7 May 2015 10:12:10 -0400
Subject: [SciPy-User] Dense * sparse slower than expected

I'm multiplying a dense matrix and a sparse matrix, and was surprised at
how long it was taking given the number of operations that should be
taking place. I was able to multiply them faster using straight
python/numpy than the scipy routines. I suspect that I'm not using the
appropriate sparse matrix type, but I tried various other combinations
with similar timings. Any thoughts would be appreciated.

The results are (in seconds):
Scipy: 0.206862998009
Manually: 0.0189853906631

Here is the code to reproduce:

from scipy.sparse import csr_matrix, csc_matrix
import numpy as np
import timeit

def manualMultiply(x, y):
    r = np.zeros((x.shape[0], y.shape[1]), dtype=x.dtype.char)
    y = y.tocsc()
    indptr = y.indptr
    indices = y.indices
    data = y.data
    numYColumns = y.shape[1]
    for j in xrange(numYColumns):
        rows = indices[indptr[j]:indptr[j+1]]
        d = data[indptr[j]:indptr[j+1]]
        r[:,j] = np.dot(x[:, rows], d.T)
    return r

if __name__ == "__main__":
    number = 10
    indices = [  0,   2,   1,   3,   5,   6,  53,  54,  54,  55,
                59,  60,  59,  61,  59,  62,  59,  63, 101, 102,
               102, 103, 111, 114, 112, 113, 193, 196, 194, 195]
    indptr = [0, 2, 4, 6, 8, 10, 12, 14, 16, 18, 20, 22, 24, 26, 28, 30]
    data = [-29.90430622,  29.90430622, -29.90430622,  29.90430622,
             30.16591252, -30.16591252, -31.25,         31.25,
             31.25,        -31.25,       -17.98561151,  17.98561151,
            -17.98561151,  17.98561151, -17.98561151,  17.98561151,
            -17.98561151,  17.98561151, -40.81632653,  40.81632653,
             30.24803388, -30.24803388,  30.48780488, -30.48780488,
            -30.48780488,  30.48780488,  31.84713376, -31.84713376,
             31.84713376, -31.84713376]

    np.random.seed(1)
    x = np.random.randn(8500, 4581)
    y = csc_matrix((data, indices, indptr), shape=(4581, 15),
                   dtype=np.float64)

    a = x * y
    b = manualMultiply(x, y)
    print(np.allclose(a, b))

    t = timeit.timeit('x * y', 'from __main__ import x, y', number=number)
    print('Scipy: %s' % (t / number))
    t = timeit.timeit('manualMultiply(x, y)',
                      'from __main__ import x, y, manualMultiply',
                      number=number)
    print('Manually: %s' % (t / number))

Thanks,
Grant

From abhisek.mndl at gmail.com  Fri May  8 11:00:32 2015
From: abhisek.mndl at gmail.com (abhisek Mondal)
Date: Fri, 8 May 2015 20:30:32 +0530
Subject: [SciPy-User] quadratic fitting of surface

Hi.

I have a data set like the following (but way too long, maybe a million
points):

x = [2.06, 2.07, 2.14, 2.09, 2.2, 2.05, 1.92, 2.06, 2.11, 2.07]
y = [171.82, 170.8, 159.59, 164.28, 169.98, 162.23, 167.37, 173.81, 166.66, 155.13]
z = [-1.41, -1.26, -1.07, -1.07, -1.46, -0.95, -0.08, -1.28, -1.2, -0.86]

Using matplotlib and the scipy.linalg.lstsq function, I've got a
quadratic-surface-fit model. But is it possible to print the equation of
the surface and check whether the fitting is correct? I found no clue in
the documentation page.

Please help!

thank you

--
Abhisek Mondal
Research Fellow
Structural Biology and Bioinformatics
Indian Institute of Chemical Biology
Kolkata 700032
INDIA
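The coefficients come straight out of lstsq, so the equation can be printed from them. A minimal sketch, assuming the usual quadratic design matrix (the column order [1, x, y, x^2, xy, y^2] is this example's convention, not anything fixed by scipy):

import numpy as np
from scipy.linalg import lstsq

x = np.array([2.06, 2.07, 2.14, 2.09, 2.2, 2.05, 1.92, 2.06, 2.11, 2.07])
y = np.array([171.82, 170.8, 159.59, 164.28, 169.98, 162.23, 167.37,
              173.81, 166.66, 155.13])
z = np.array([-1.41, -1.26, -1.07, -1.07, -1.46, -0.95, -0.08, -1.28,
              -1.2, -0.86])

# design matrix: z ~ c0 + c1*x + c2*y + c3*x^2 + c4*x*y + c5*y^2
A = np.column_stack([np.ones_like(x), x, y, x**2, x*y, y**2])
c, res, rank, sv = lstsq(A, z)

print("z = %.4g + %.4g x + %.4g y + %.4g x^2 + %.4g xy + %.4g y^2"
      % tuple(c))

# quick check of the fit: residuals and R^2
z_fit = A.dot(c)
ss_res = np.sum((z - z_fit)**2)
ss_tot = np.sum((z - z.mean())**2)
print("R^2 =", 1 - ss_res / ss_tot)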
/usr/include/string.h:546: error: expected declaration specifiers or '...' before '__locale_t'
/usr/include/string.h:547: error: nonnull argument with out-of-range operand number (argument 1, operand 3)
/usr/include/string.h:550: error: expected declaration specifiers or '...' before '__locale_t'
/usr/include/string.h:551: error: nonnull argument with out-of-range operand number (argument 1, operand 4)
In file included from /remote/users/ajaggarwal/.local/lib/python2.7/site-packages/numpy/core/include/numpy/ndarraytypes.h:1804,
                 from scipy/spatial/qhull/src/mem.h:23,
                 from scipy/spatial/qhull/src/mem.c:37:

Looking around, the error points at a mismatch between the system libc
version and the one scipy expects. Here's my libc version:

GNU C Library stable release version 2.11.1 (20100118), by Roland McGrath et al.
Copyright (C) 2009 Free Software Foundation, Inc.
This is free software; see the source for copying conditions.
There is NO warranty; not even for MERCHANTABILITY or FITNESS FOR A
PARTICULAR PURPOSE.
Configured for i686-suse-linux.
Compiled by GNU CC version 4.3.4 [gcc-4_3-branch revision 152973].
Compiled on a Linux 2.6.32 system on 2010-05-06.

My build environment is not really flexible, as I cannot change the libc
version that is provided with the system. Would anyone know a workaround
to this issue (other than installing my own libc, rebuilding python 2.7.2
with that libc, and retrying the entire procedure)?

Any help would be appreciated.

From indiajoe at gmail.com  Sun May 10 15:26:29 2015
From: indiajoe at gmail.com (Joe P Ninan)
Date: Mon, 11 May 2015 00:56:29 +0530
Subject: [SciPy-User] Large Memory usage while doing median filter

Hi,
I was trying median_filter in scipy.ndimage.filters on a 1024x1024 array.

What I noticed is that the memory requirement grows really fast when we
increase the size of the median filter. On a machine with 6gb RAM I could
only do a (150,150) size filter. Anything above gives a MemoryError.

On a bigger server I could see it takes about 16gb RAM while using a
filter size of (200, 200).

I can understand computation time increasing with the size of the filter,
but why is the memory size exploding with respect to the size of the
median filter? Is this expected behaviour?

-cheers
joe

--
/---------------------------------------------------------------
"GNU/Linux: because a PC is a terrible thing to waste" - GNU Generation
************************************************
Joe Philip Ninan
Research Scholar
DAA, TIFR, Mumbai, India.
Ph: +917738438212
Website: www.tifr.res.in/~ninan/
My GnuPG Public Key: www.tifr.res.in/~ninan/JPN_public.key
From Jerome.Kieffer at esrf.fr  Mon May 11 01:25:40 2015
From: Jerome.Kieffer at esrf.fr (Jerome Kieffer)
Date: Mon, 11 May 2015 07:25:40 +0200
Subject: [SciPy-User] Large Memory usage while doing median filter

On Mon, 11 May 2015 00:56:29 +0530 Joe P Ninan wrote:
> I can understand computation time increasing with the size of the
> filter, but why is the memory size exploding with respect to the size
> of the median filter? Is this expected behaviour?

I guess this is because scipy creates a 1024x1024x(40000) array to do the
sort along the last axis. Maybe not the best from the memory point of
view.

Cheers,

--
Jérôme Kieffer
Data analysis unit - ESRF

From jni.soma at gmail.com  Mon May 11 01:31:46 2015
From: jni.soma at gmail.com (Juan Nunez-Iglesias)
Date: Sun, 10 May 2015 22:31:46 -0700 (PDT)
Subject: [SciPy-User] Large Memory usage while doing median filter

If you can cast your image as a uint8 image, try the median filter in
scikit-image's filters.rank module. It's very fast and has a minimal
memory footprint. But it doesn't work on floats or high ints.
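As an illustration of that suggestion (a sketch assuming the scikit-image 0.11 layout, where the module is skimage.filters.rank; older releases call it skimage.filter.rank), after rescaling the data to uint8:

import numpy as np
from skimage import img_as_ubyte
from skimage.morphology import disk
from skimage.filters.rank import median

img = np.random.randn(1024, 1024).astype(np.float32)  # stand-in image
img8 = img_as_ubyte((img - img.min()) / (img.max() - img.min()))
filtered = median(img8, disk(75))   # ~150-pixel-wide circular window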
From indiajoe at gmail.com  Mon May 11 09:14:13 2015
From: indiajoe at gmail.com (Joe P Ninan)
Date: Mon, 11 May 2015 18:44:13 +0530
Subject: [SciPy-User] Large Memory usage while doing median filter

Hi Juan,
Thank you for the suggestion, but my data is 32 bit float. And since the
precision in the data is important, I cannot convert it to uint8 data.

As Jerome suggested, it might be due to the extra-large array scipy is
creating to do faster sorting. In typical astronomy applications I
encounter, our images are bigger than 1kx1k, so I wonder whether other
tools exist to do median filtering.

For a moving window median, since only a few pixels leave and enter the
window at each step, if we take advantage of that, then I would imagine
the sort time required to find the median at each window position
wouldn't be very high.

Does anybody know of any such fast median filter routines in python?

Thanking you,
-cheers
joe
From eric.moore2 at nih.gov  Mon May 11 09:22:30 2015
From: eric.moore2 at nih.gov (Moore, Eric (NIH/NIDDK) [F])
Date: Mon, 11 May 2015 13:22:30 +0000
Subject: [SciPy-User] Large Memory usage while doing median filter

Hi Joe,

Would you report this as an issue on github so that it doesn't get lost?

A second thought is that a different implementation of a median filter
exists in the signal package as medfilt and medfilt2d. I haven't ever
used any of these functions, but it might be worth a shot to try them.

-Eric

From jni.soma at gmail.com  Mon May 11 09:36:24 2015
From: jni.soma at gmail.com (Juan Nunez-Iglesias)
Date: Mon, 11 May 2015 23:36:24 +1000
Subject: [SciPy-User] Large Memory usage while doing median filter

Hey Joe,

The moving window approach is that used by skimage's rank filters, but
this requires maintaining a histogram of existing values, which is much
easier when these are 8-bit ints.

It occurs to me you can generate a slow but low-memory version using
generic_filter:

ndimage.generic_filter(image, np.median, footprint=footprint)

That won't generate a bigger array than the image + footprint, I believe.
With a bit of work, you might actually be able to create the sliding
window approach yourself this way! =)

Juan.
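Spelled out, that might look like the following sketch (size=151 is just shorthand for a full 151x151 square footprint):

import numpy as np
from scipy import ndimage

img = np.random.randn(1024, 1024).astype(np.float32)  # stand-in image

# low-memory but slow: np.median is called once per output pixel
filtered = ndimage.generic_filter(img, np.median, size=151)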
From warren.weckesser at gmail.com  Mon May 11 09:53:35 2015
From: warren.weckesser at gmail.com (Warren Weckesser)
Date: Mon, 11 May 2015 09:53:35 -0400
Subject: [SciPy-User] Large Memory usage while doing median filter

On Mon, May 11, 2015 at 1:25 AM, Jerome Kieffer wrote:
> I guess this is because scipy creates a 1024x1024x(40000) array to do
> the sort along the last axis. Maybe not the best from the memory point
> of view.
Maybe I didn't search hard enough, but I don't see where such an array is
allocated. There are several layers of calls, from Python in
ndimage/filters.py down to C in ndimage/src/ni_filters.c, so maybe I
missed it. Can you point to where such an array is created, or was that
really a guess?

Warren

From Jerome.Kieffer at esrf.fr  Mon May 11 10:01:28 2015
From: Jerome.Kieffer at esrf.fr (Jerome Kieffer)
Date: Mon, 11 May 2015 16:01:28 +0200
Subject: [SciPy-User] Large Memory usage while doing median filter

On Mon, 11 May 2015 09:53:35 -0400 Warren Weckesser wrote:
> Can you point to where such an array is created, or was that really a
> guess?

It is really a guess ... I did not have a look at the source code. To do
such things, a colleague of mine did some CUDA (OpenCL would be the
same), but it is out of the scope.

Cheers.

--
Jérôme Kieffer
tel +33 476 882 445

From eric.moore2 at nih.gov  Mon May 11 10:34:03 2015
From: eric.moore2 at nih.gov (Moore, Eric (NIH/NIDDK) [F])
Date: Mon, 11 May 2015 14:34:03 +0000
Subject: [SciPy-User] Large Memory usage while doing median filter

The really large array is allocated in NI_InitFilterOffsets, on line 518,
in ni_support.c, which is called from line 726 of ni_filter.c, in
Ni_RankFilter.

For me, calling ndimage.median_filter(arr, 150), with arr a (1024, 1024)
array of doubles or floats, results in an allocation of 4050000000 bytes
(3.77 GB). Which seems a little bit bigger than we would like here.

-Eric
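That figure factors suggestively: 4050000000 = 22500 * 22500 * 8, i.e. the number of elements in a 150x150 footprint, squared, times 8 bytes per stored offset — which would mean the offsets table grows with the fourth power of the window side. This reading is inferred from the number alone, not from the code:

>>> footprint = 150 * 150        # elements in a 150x150 window
>>> footprint * footprint * 8    # bytes, assuming 8 bytes per offset entry
4050000000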
From warren.weckesser at gmail.com  Mon May 11 11:16:07 2015
From: warren.weckesser at gmail.com (Warren Weckesser)
Date: Mon, 11 May 2015 11:16:07 -0400
Subject: [SciPy-User] Large Memory usage while doing median filter

On Mon, May 11, 2015 at 10:34 AM, Moore, Eric (NIH/NIDDK) [F] wrote:
> The really large array is allocated in NI_InitFilterOffsets, on line
> 518, in ni_support.c, which is called from line 726 of ni_filter.c, in
> Ni_RankFilter.

Thanks Eric.

Warren
From jeffreback at gmail.com  Mon May 11 11:42:11 2015
From: jeffreback at gmail.com (Jeff Reback)
Date: Mon, 11 May 2015 11:42:11 -0400
Subject: [SciPy-User] ANN: pandas 0.16.1 released

Hello,

We are proud to announce v0.16.1 of pandas, a minor release from 0.16.0.
This release includes a small number of API changes, several new
features, enhancements, and performance improvements, along with a large
number of bug fixes. This was a release of 7 weeks, with 222 commits by
57 authors encompassing 85 issues. We recommend that all users upgrade
to this version.

What is it:

pandas is a Python package providing fast, flexible, and expressive data
structures designed to make working with "relational" or "labeled" data
both easy and intuitive. It aims to be the fundamental high-level
building block for doing practical, real world data analysis in Python.
Additionally, it has the broader goal of becoming the most powerful and
flexible open source data analysis / manipulation tool available in any
language.

Highlights of this release include:

- Support for CategoricalIndex, a category based index
- New section on how-to-contribute to pandas
- Revised "Merge, join, and concatenate" documentation, including
  graphical examples to make it easier to understand each operation
- New method sample for drawing random samples from Series, DataFrames
  and Panels
- The default Index printing has changed to a more uniform format
- BusinessHour datetime-offset is now supported
- Further enhancements to the .str accessor to make string operations
  easier

See the Whatsnew in v0.16.1.

Documentation: http://pandas.pydata.org/pandas-docs/stable/

Source tarballs and windows binaries are available on PyPI:
https://pypi.python.org/pypi/pandas

Windows binaries are courtesy of Christoph Gohlke and are built on
Numpy 1.8. macosx wheels are courtesy of Matthew Brett.

Please report any issues here:
https://github.com/pydata/pandas/issues

Thanks

The Pandas Development Team

Contributors to the 0.16.1 release: Alfonso MHC, Andy Hayden, Artemy
Kolchinsky, Chris Gilmer, Chris Grinolds, Dan Birken, David BROCHART,
David Hirschfeld, David Stephens, Dr. Leo, Evan Wright, Frans van Dunné,
Hatem Nassrat, Henning Sperr, Hugo Herter, Jan Schulz, Jeff Blackburne,
Jeff Reback, Jim Crist, Jonas Abernot, Joris Van den Bossche, Kerby
Shedden, Leo Razoumov, Manuel Riel, Mortada Mehyar, Nick Burns, Nick
Eubank, Olivier Grisel, Phillip Cloud, Pietro Battiston, Roy Hyunjin
Han, Sam Zhang, Scott Sanderson, Stephan Hoyer, Tiago Antao, Tom
Ajamian, Tom Augspurger, Tomaz Berisa, Vikram Shirgur, Vladimir
Filimonov, William Hogman, Yasin A, Younggun Kim, behzad nouri, dsm054,
floydsoft, flying-sheep, gfr, jnmclarty, jreback, ksanghai, lucas,
mschmohl, ptype, rockg, scls19fr, sinhrks

From sturla.molden at gmail.com  Mon May 11 11:44:35 2015
From: sturla.molden at gmail.com (Sturla Molden)
Date: Mon, 11 May 2015 17:44:35 +0200
Subject: [SciPy-User] Large Memory usage while doing median filter

On 11/05/15 17:16, Warren Weckesser wrote:
> I guess this is because scipy creates a 1024x1024x(40000) array to
> do the sort along the last axis.

And when you see this answer, the solution is "Cython" :)

I guess we could change ndimage's median filter to call introselect on
each axis sequentially. But as noted, it will take a little bit of
Cython.

Sturla
From ralf.gommers at gmail.com  Mon May 11 12:01:14 2015
From: ralf.gommers at gmail.com (Ralf Gommers)
Date: Mon, 11 May 2015 18:01:14 +0200
Subject: [SciPy-User] Large Memory usage while doing median filter

On Mon, May 11, 2015 at 5:44 PM, Sturla Molden wrote:
> I guess we could change ndimage's median filter to call introselect on
> each axis sequentially. But as noted, it will take a little bit of
> Cython.

Good thing then that there's a GSoC project on rewriting ndimage in
Cython about to start :)

Ralf

From sturla.molden at gmail.com  Mon May 11 20:17:18 2015
From: sturla.molden at gmail.com (Sturla Molden)
Date: Tue, 12 May 2015 02:17:18 +0200
Subject: [SciPy-User] Large Memory usage while doing median filter

On 11/05/15 18:01, Ralf Gommers wrote:
> Good thing then that there's a GSoC project on rewriting ndimage in
> Cython about to start :)

Yes. However, the world isn't always perfect, and neither is SciPy. But
correctly working code is infinitely better than no code, even if it is
hungry on memory. And the cheapest solution to excessive memory use is
(almost) always to buy more RAM. That tends to be way cheaper than
paying a developer :)

Sturla

From bhmerchant at gmail.com  Tue May 12 00:43:42 2015
From: bhmerchant at gmail.com (Brian Merchant)
Date: Mon, 11 May 2015 21:43:42 -0700
Subject: [SciPy-User] Why does scipy.integrate.odeint still provide a result even after "repeated convergence failures"?

Hi all,

Why does scipy.integrate.odeint still provide a result even after
"repeated convergence failures"? Is this result still of use?

Kind regards,
Brian

From warren.weckesser at gmail.com  Tue May 12 00:52:56 2015
From: warren.weckesser at gmail.com (Warren Weckesser)
Date: Tue, 12 May 2015 00:52:56 -0400
Subject: [SciPy-User] Why does scipy.integrate.odeint still provide a result even after "repeated convergence failures"?

Which version of scipy are you using? Can you give a short example that
demonstrates the problem?

Warren
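For getting at what the solver itself is reporting, odeint's standard full_output machinery is useful — a sketch with a placeholder right-hand side:

import numpy as np
from scipy.integrate import odeint

def rhs(y, t):                 # placeholder right-hand side
    return -0.5 * y

t = np.linspace(0, 10, 101)
y, info = odeint(rhs, [1.0], t, full_output=True)
print(info['message'])                     # e.g. 'Integration successful.'
print(info['nst'][-1], info['nfe'][-1])    # step and function-eval counts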
From bhmerchant at gmail.com  Tue May 12 01:06:14 2015
From: bhmerchant at gmail.com (Brian Merchant)
Date: Mon, 11 May 2015 22:06:14 -0700
Subject: [SciPy-User] Why does scipy.integrate.odeint still provide a result even after "repeated convergence failures"?

Hi Warren,

I am using scipy 0.15.1 (as per scipy.__version__).

I don't have a short example handy, so I'd have to "make one" out of my
current use case. That might be tricky to do, since I am not sure exactly
where the issue is yet. One idea I have is to cut out all the ODE
variables except one (so, make it from an N-dimensional system into a
1-dimensional system), and then slowly reintroduce the variables until
the problem comes up.

Brian

From bhmerchant at gmail.com  Tue May 12 01:44:50 2015
From: bhmerchant at gmail.com (Brian Merchant)
Date: Mon, 11 May 2015 22:44:50 -0700
Subject: [SciPy-User] Why does scipy.integrate.odeint still provide a result even after "repeated convergence failures"?

Hi Warren,

Just an update: I was able to "zero in" on the portion of the model that
causes the issue (I set the ode function to just return 0 for every other
variable, except the one that causes issues).

I can't find an error in the Python code responsible for calculating the
derivative for that variable (I stepped through it; it simply implements
a Hill equation:
http://en.wikipedia.org/wiki/Hill_equation_%28biochemistry%29), and the
results provided by odeint in the end don't seem *obviously*
out-of-whack.

Brian
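For reference, the Hill term has this standard form (a sketch with made-up constants; for a large Hill coefficient n the function becomes nearly switch-like around K, a classic source of stiffness and step-size trouble for ODE solvers):

import numpy as np

def hill(L, K, n):
    # standard Hill function: saturates at 1, half-maximal at L = K
    return L**n / (K**n + L**n)

# illustrative values only
print(hill(np.array([0.5, 1.0, 2.0]), K=1.0, n=4))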
From ralf.gommers at gmail.com  Tue May 12 02:10:25 2015
From: ralf.gommers at gmail.com (Ralf Gommers)
Date: Tue, 12 May 2015 08:10:25 +0200
Subject: [SciPy-User] signal.firls

On Tue, May 5, 2015 at 6:09 PM, Gregory Allen wrote:
> I can do that, but you may make me learn something new. :)

For anyone interested in this, Greg's PR is here:
https://github.com/scipy/scipy/pull/4836

Ralf

From sturla.molden at gmail.com  Tue May 12 08:36:54 2015
From: sturla.molden at gmail.com (Sturla Molden)
Date: Tue, 12 May 2015 14:36:54 +0200
Subject: [SciPy-User] Why does scipy.integrate.odeint still provide a result even after "repeated convergence failures"?

Are you trying to fit an ODE, or just solve an ODE with known parameters?

Fitting parameters by solving an ODE can be a notoriously difficult
problem with respect to convergence. There are methods such as "multiple
shooting" which can improve the situation. But fitting an ODE is often an
ill-posed problem.

Sturla
>> >> I can't find an error in the Python code responsible for calculating the >> derivative for that variable (I stepped through it, it simply implements >> a Hill equation: >> http://en.wikipedia.org/wiki/Hill_equation_%28biochemistry%29), and the >> results provided by odeint in the end don't seem *obviously* out-of-whack. >> >> Brian >> >> On Mon, May 11, 2015 at 9:52 PM, Warren Weckesser >> > wrote: >> >> >> >> On Tue, May 12, 2015 at 12:43 AM, Brian Merchant >> > wrote: >> >> Hi all, >> >> Why does scipy.integrate.odeint still provide a result even >> after "repeated convergence failures"? Is this result still of use? >> >> >> >> Which version of scipy are you using? Can you give a short example >> that demonstrates the problem? >> >> Warren >> >> >> Kind regards, >> Brian >> >> _______________________________________________ >> SciPy-User mailing list >> SciPy-User at scipy.org >> http://mail.scipy.org/mailman/listinfo/scipy-user >> >> >> >> _______________________________________________ >> SciPy-User mailing list >> SciPy-User at scipy.org >> http://mail.scipy.org/mailman/listinfo/scipy-user >> >> >> >> >> _______________________________________________ >> SciPy-User mailing list >> SciPy-User at scipy.org >> http://mail.scipy.org/mailman/listinfo/scipy-user >> From bhmerchant at gmail.com Tue May 12 09:10:18 2015 From: bhmerchant at gmail.com (Brian Merchant) Date: Tue, 12 May 2015 06:10:18 -0700 Subject: [SciPy-User] Why does scipy.integrate.odeint still provide a result even after "repeated convergence failures"? In-Reply-To: References: Message-ID: Hi Sturla, I am using fixed parameters. However, the parameters interact non-linearly with other parameters and the phase variables (for example, via Hill functions). I have implemented this system of ODEs in scipy.integrate.odeint without issue. This is a second implementation that has been rewritten from scratch so that I could clean up the supporting code. Kind regards, Brian On Tue, May 12, 2015 at 5:36 AM, Sturla Molden wrote: > Are you trying to fit an ODE or just solve an ODE with known parameters? > > Fitting parameters by solving an ODE can be a notoriously difficult > problem with respect to convergence. There are methods such as "multiple > shooting" which can improve the situation. But fitting an ODE is often > an ill-posed problem. > > > Sturla > > > > On 12/05/15 07:44, Brian Merchant wrote: > > Hi Warren, > > > > Just an update: I was able to "zero" in on the portion of the model that > > causes the issue (I set the ode function to just return 0 for every > > other variable, except the one that causes issues). > > > > I can't find an error in the Python code responsible for calculating the > > derivative for that variable (I stepped through it, it simply implements > > a Hill equation: > > http://en.wikipedia.org/wiki/Hill_equation_%28biochemistry%29), and the > > results provided by odeint in the end don't seem *obviously* > out-of-whack. > > > > Brian > > > > On Mon, May 11, 2015 at 9:52 PM, Warren Weckesser > > > wrote: > > > > > > > > On Tue, May 12, 2015 at 12:43 AM, Brian Merchant > > > wrote: > > > > Hi all, > > > > Why does scipy.integrate.odeint still provide a result even > > after "repeated convergence failures"? Is this result still of > use? > > > > > > > > Which version of scipy are you using? Can you give a short example > > that demonstrates the problem? 
> >
> > Warren
> >
> >
> >> Kind regards,
> >> Brian
> >>
> >> _______________________________________________
> >> SciPy-User mailing list
> >> SciPy-User at scipy.org
> >> http://mail.scipy.org/mailman/listinfo/scipy-user
> >>
> >
> > _______________________________________________
> > SciPy-User mailing list
> > SciPy-User at scipy.org
> > http://mail.scipy.org/mailman/listinfo/scipy-user
> >
>
> _______________________________________________
> SciPy-User mailing list
> SciPy-User at scipy.org
> http://mail.scipy.org/mailman/listinfo/scipy-user
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From bhmerchant at gmail.com  Tue May 12 11:11:55 2015
From: bhmerchant at gmail.com (Brian Merchant)
Date: Tue, 12 May 2015 08:11:55 -0700
Subject: [SciPy-User] Parameter dependent repeated convergence failures with scipy.integrate.odeint?
Message-ID: 

Hi all,

In my modelling, I am encountering *parameter dependent* repeated
convergence failures with scipy.integrate.odeint. That is, for some set of
parameters I encounter this message, and for other sets, I do not.

Are there some checklist-type heuristics you can share with me, which I can
go through and examine regarding my model and/or the numerical scheme in
order to determine how to have warning-free runs?

Kind regards,
Brian
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From ellisonbg at gmail.com  Tue May 12 12:19:38 2015
From: ellisonbg at gmail.com (Brian Granger)
Date: Tue, 12 May 2015 09:19:38 -0700
Subject: [SciPy-User] [JOB] Work full time on Project Jupyter/IPython
Message-ID: 

Hi all,

I wanted to let the community know that we are currently hiring 3
full-time software engineers to work on Project Jupyter/IPython. These
positions will be in my group at Cal Poly in San Luis Obispo, CA. We are
looking for frontend and backend software engineers with lots of
Python/JavaScript experience and a passion for open source software. The
details can be found here:

https://www.calpolycorporationjobs.org/postings/736

This is an unusual opportunity in a couple of respects:

* These positions will allow you to work on open source software full time
- not as a X% side project (aka weekends and evenings).
* These are fully benefited positions (CA state retirement, health care,
etc.)
* You will get to work and live in San Luis Obispo, one of the nicest
places on earth. We are minutes from the beach, have perfect year-round
weather and are close to both the Bay Area and So Cal.

I am more than willing to talk to anyone who is interested in these
positions.

Cheers,

Brian

-- 
Brian E. Granger
Cal Poly State University, San Luis Obispo
@ellisonbg on Twitter and GitHub
bgranger at calpoly.edu and ellisonbg at gmail.com
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From wangnp.daniel at gmail.com  Tue May 12 14:33:19 2015
From: wangnp.daniel at gmail.com (Ningping Wang)
Date: Wed, 13 May 2015 02:33:19 +0800
Subject: [SciPy-User] Help on matrix multiplication with sparse result
Message-ID: 

I have two dense matrices U (10000x50) and V(50x10000), and one sparse
matrix A(10000x10000). Each element in A is either 1 or 0 and only 1%
elements are non-zero. I hope to find A*(UV) and I write A*(U.dot(V)) in
SciPy, noting that '*' is element-wise multiplication.
Here SciPy will
calculate a dense matrix UV first, then filter the result using A. But UV
is dense and large (10000x10000) so it's very slow.

Because I only need a few elements of UV indicated by A, it should save a
lot of time if only necessary elements are calculated. Is there a way to
instruct scipy to do this?

BTW, I used Matlab to solve this problem before and Matlab is smart enough
to find what I'm trying to do and works efficiently. In Matlab I write like
this: A.*(U*V).

Thanks,
Daniel
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From silva at lma.cnrs-mrs.fr  Tue May 12 15:53:51 2015
From: silva at lma.cnrs-mrs.fr (Fabrice Silva)
Date: Tue, 12 May 2015 21:53:51 +0200
Subject: [SciPy-User] Help on matrix multiplication with sparse result
In-Reply-To: 
References: 
Message-ID: <1431460431.1422.42.camel@lma.cnrs-mrs.fr>

Le mercredi 13 mai 2015, Ningping Wang a écrit :
> I have two dense matrices U (10000x50) and V(50x10000), and one sparse
> matrix A(10000x10000). Each element in A is either 1 or 0 and only 1%
> elements are non-zero. I hope to find A*(UV) and I write A*(U.dot(V)) in
> SciPy, noting that '*' is element-wise multiplication. Here SciPy will
> calculate a dense matrix UV first, then filter the result using A. But UV
> is dense and large (10000x10000) so it's very slow.
>
> Because I only need a few elements of UV indicated by A, it should save a
> lot of time if only necessary elements are calculated. Is there a way to
> instruct scipy to do this?
>
> BTW, I used Matlab to solve this problem before and Matlab is smart enough
> to find what I'm trying to do and works efficiently. In Matlab I write like
> this: A.*(U*V).

You may then be more comfortable using indexing on the nonzero values of A.
Assuming A is some scipy sparse matrix:

import numpy as np
import scipy.sparse
from itertools import izip, count

# Loop only over the nonzero entries of A, computing each needed
# element of U.dot(V) as a single dot product.
Atmp = A.tocoo()
res = np.zeros(Atmp.nnz)
for ind, row, col in izip(count(), Atmp.row, Atmp.col):
    res[ind] = np.sum(U[row, :] * V[:, col])
# Reassemble the results into a sparse matrix with A's sparsity pattern.
res = scipy.sparse.coo_matrix((res, (Atmp.row, Atmp.col)))

should (not tested) compute it cheaply.

-- 
Fabrice

From ralf.gommers at gmail.com  Tue May 12 17:54:55 2015
From: ralf.gommers at gmail.com (Ralf Gommers)
Date: Tue, 12 May 2015 23:54:55 +0200
Subject: [SciPy-User] ANN: Scipy 0.16.0 beta 1 release
Message-ID: 

Hi all,

I'm pleased to announce the availability of the first beta release of
Scipy 0.16.0. Please try this beta and report any issues on the Github
issue tracker or on the scipy-dev mailing list.

This first beta is a source-only release; binary installers will follow
(probably next week). Source tarballs and the full release notes can be
found at https://sourceforge.net/projects/scipy/files/scipy/0.16.0b1/.
Part of the release notes copied below.

Thanks to everyone who contributed to this release!

Ralf


==========================
SciPy 0.16.0 Release Notes
==========================

.. note:: Scipy 0.16.0 is not released yet!

SciPy 0.16.0 is the culmination of 6 months of hard work. It contains
many new features, numerous bug-fixes, improved test coverage and better
documentation. There have been a number of deprecations and API changes
in this release, which are documented below. All users are encouraged to
upgrade to this release, as there are a large number of bug-fixes and
optimizations. Moreover, our development attention will now shift to
bug-fix releases on the 0.15.x branch, and on adding new features on the
master branch.

This release requires Python 2.6, 2.7 or 3.2-3.4 and NumPy 1.6.2 or
greater.
Highlights of this release include: - A Cython API for BLAS/LAPACK in `scipy.linalg` - A new benchmark suite. It's now straightforward to add new benchmarks, and they're routinely included with performance enhancement PRs. - Support for the second order sections (SOS) format in `scipy.signal`. New features ============ Benchmark suite --------------- The benchmark suite has switched to using `Airspeed Velocity `__ for benchmarking. You can run the suite locally via ``python runtests.py --bench``. For more details, see ``benchmarks/README.rst``. `scipy.linalg` improvements --------------------------- A full set of Cython wrappers for BLAS and LAPACK has been added in the modules `scipy.linalg.cython_blas` and `scipy.linalg.cython_lapack`. In Cython, these wrappers can now be cimported from their corresponding modules and used without linking directly against BLAS or LAPACK. The functions `scipy.linalg.qr_delete`, `scipy.linalg.qr_insert` and `scipy.linalg.qr_update` for updating QR decompositions were added. The function `scipy.linalg.solve_circulant` solves a linear system with a circulant coefficient matrix. The function `scipy.linalg.invpascal` computes the inverse of a Pascal matrix. The function `scipy.linalg.solve_toeplitz`, a Levinson-Durbin Toeplitz solver, was added. Added wrapper for potentially useful LAPACK function ``*lasd4``. It computes the square root of the i-th updated eigenvalue of a positive symmetric rank-one modification to a positive diagonal matrix. See its LAPACK documentation and unit tests for it to get more info. Added two extra wrappers for LAPACK least-square solvers. Namely, they are ``*gelsd`` and ``*gelsy``. Wrappers for the LAPACK ``*lange`` functions, which calculate various matrix norms, were added. Wrappers for ``*gtsv`` and ``*ptsv``, which solve ``A*X = B`` for tri-diagonal matrix ``A``, were added. `scipy.signal` improvements --------------------------- Support for second order sections (SOS) as a format for IIR filters was added. The new functions are: * `scipy.signal.sosfilt` * `scipy.signal.sosfilt_zi`, * `scipy.signal.sos2tf` * `scipy.signal.sos2zpk` * `scipy.signal.tf2sos` * `scipy.signal.zpk2sos`. Additionally, the filter design functions `iirdesign`, `iirfilter`, `butter`, `cheby1`, `cheby2`, `ellip`, and `bessel` can return the filter in the SOS format. The function `scipy.signal.place_poles`, which provides two methods to place poles for linear systems, was added. The option to use Gustafsson's method for choosing the initial conditions of the forward and backward passes was added to `scipy.signal.filtfilt`. New classes ``TransferFunction``, ``StateSpace`` and ``ZerosPolesGain`` were added. These classes are now returned when instantiating `scipy.signal.lti`. Conversion between those classes can be done explicitly now. An exponential (Poisson) window was added as `scipy.signal.exponential`, and a Tukey window was added as `scipy.signal.tukey`. The function for computing digital filter group delay was added as `scipy.signal.group_delay`. The functionality for spectral analysis and spectral density estimation has been significantly improved: `scipy.signal.welch` became ~8x faster and the functions `scipy.signal.spectrogram`, `scipy.signal.coherence` and `scipy.signal.csd` (cross-spectral density) were added. `scipy.signal.lsim` was rewritten - all known issues are fixed, so this function can now be used instead of ``lsim2``; ``lsim`` is orders of magnitude faster than ``lsim2`` in most cases. 
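[As a concrete illustration of the SOS support described above -- an
illustrative sketch only, not part of the official release notes; the
filter order and cutoff below are arbitrary choices:]

import numpy as np
from scipy import signal

# Design an 8th-order Butterworth low-pass filter directly in
# second-order-sections form and run some white noise through it.
# SOS filtering avoids the numerical trouble that high-order (b, a)
# transfer-function representations can run into.
sos = signal.butter(8, 0.125, output='sos')
x = np.random.randn(1000)
y = signal.sosfilt(sos, x)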
`scipy.sparse` improvements
---------------------------

The function `scipy.sparse.norm`, which computes sparse matrix norms, was
added.

The function `scipy.sparse.random`, which allows drawing random variates
from an arbitrary distribution, was added.

`scipy.spatial` improvements
----------------------------

`scipy.spatial.cKDTree` has seen a major rewrite, which improved the
performance of the ``query`` method significantly, added support for
parallel queries, pickling, and options that affect the tree layout. See
pull request 4374 for more details.

The function `scipy.spatial.procrustes` for Procrustes analysis
(statistical shape analysis) was added.

`scipy.stats` improvements
--------------------------

The Wishart distribution and its inverse have been added, as
`scipy.stats.wishart` and `scipy.stats.invwishart`.

The Exponentially Modified Normal distribution has been added as
`scipy.stats.exponnorm`.

The Generalized Normal distribution has been added as
`scipy.stats.gennorm`.

All distributions now contain a ``random_state`` property and allow
specifying a specific ``numpy.random.RandomState`` random number generator
when generating random variates.

Many statistical tests and other `scipy.stats` functions that have
multiple return values now return ``namedtuples``. See pull request 4709
for details.

`scipy.optimize` improvements
-----------------------------

A new derivative-free method DF-SANE has been added to the nonlinear
equation system solving function `scipy.optimize.root`.


Deprecated features
===================

``scipy.stats.pdf_fromgamma`` is deprecated. This function was
undocumented, untested and rarely used. Statsmodels provides equivalent
functionality with ``statsmodels.distributions.ExpandedNormal``.

``scipy.stats.fastsort`` is deprecated. This function is unnecessary,
``numpy.argsort`` can be used instead.

``scipy.stats.signaltonoise`` and ``scipy.stats.mstats.signaltonoise`` are
deprecated. These functions did not belong in ``scipy.stats`` and are
rarely used. See issue #609 for details.

``scipy.stats.histogram2`` is deprecated. This function is unnecessary,
``numpy.histogram2d`` can be used instead.


Backwards incompatible changes
==============================

The deprecated global optimizer ``scipy.optimize.anneal`` was removed.

The following deprecated modules have been removed: ``scipy.lib.blas``,
``scipy.lib.lapack``, ``scipy.linalg.cblas``, ``scipy.linalg.fblas``,
``scipy.linalg.clapack``, ``scipy.linalg.flapack``. They had been
deprecated since Scipy 0.12.0, the functionality should be accessed as
`scipy.linalg.blas` and `scipy.linalg.lapack`.

The deprecated function ``scipy.special.all_mat`` has been removed.

The deprecated functions ``fprob``, ``ksprob``, ``zprob``, ``randwcdf``
and ``randwppf`` have been removed from `scipy.stats`.


Other changes
=============

The version numbering for development builds has been updated to comply
with PEP 440.

Building with ``python setup.py develop`` is now supported.


Authors
=======

* @axiru +
* @endolith
* Elliott Sales de Andrade +
* Anne Archibald
* Yoshiki Vázquez Baeza +
* Sylvain Bellemare
* Felix Berkenkamp +
* Raoul Bourquin +
* Matthew Brett
* Per Brodtkorb
* Christian Brueffer
* Lars Buitinck
* Evgeni Burovski
* Steven Byrnes
* CJ Carey
* George Castillo +
* Alex Conley +
* Liam Damewood +
* Rupak Das +
* Abraham Escalante +
* Matthias Feurer +
* Eric Firing +
* Clark Fitzgerald
* Chad Fulton
* André Gaul
* Andreea Georgescu +
* Christoph Gohlke
* Andrey Golovizin +
* Ralf Gommers
* J.J. Green +
* Alex Griffing
* Alexander Grigorievskiy +
* Hans Moritz Gunther +
* Jonas Hahnfeld +
* Charles Harris
* Ian Henriksen
* Andreas Hilboll
* Åsmund Hjulstad +
* Jan Schlüter +
* Janko Slavič +
* Daniel Jensen +
* Johannes Ballé +
* Terry Jones +
* Amato Kasahara +
* Eric Larson
* Denis Laxalde
* Antony Lee
* Gregory R. Lee
* Perry Lee +
* Loïc Estève
* Martin Manns +
* Eric Martin +
* Matěj Kocián +
* Andreas Mayer +
* Nikolay Mayorov +
* Robert McGibbon +
* Sturla Molden
* Nicola Montecchio +
* Eric Moore
* Jamie Morton +
* Nikolas Moya +
* Maniteja Nandana +
* Andrew Nelson
* Joel Nothman
* Aldrian Obaja
* Regina Ongowarsito +
* Paul Ortyl +
* Pedro López-Adeva Fernández-Layos +
* Stefan Peterson +
* Irvin Probst +
* Eric Quintero +
* John David Reaver +
* Juha Remes +
* Thomas Robitaille
* Clancy Rowley +
* Tobias Schmidt +
* Skipper Seabold
* Aman Singh +
* Eric Soroos
* Valentine Svensson +
* Julian Taylor
* Aman Thakral +
* Helmut Toplitzer +
* Fukumu Tsutsumi +
* Anastasiia Tsyplia +
* Jacob Vanderplas
* Pauli Virtanen
* Matteo Visconti +
* Warren Weckesser
* Florian Wilhelm +
* Nathan Woods
* Haochen Wu +
* Daan Wynen +

A total of 93 people contributed to this release.
People with a "+" by their names contributed a patch for the first time.
This list of names is automatically generated, and may not be fully
complete.
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From bhmerchant at gmail.com  Tue May 12 21:33:08 2015
From: bhmerchant at gmail.com (Brian Merchant)
Date: Tue, 12 May 2015 18:33:08 -0700
Subject: [SciPy-User] Understanding the "vode" warning for `scipy.integrate.ode`: "This integrator is not re-entrant. You cannot have two ode instances...at the same time."
Message-ID: 

Hi all,

I'd like to make sure I am understanding the warning for the "vode" and
"zvode" integrators, provided by `scipy.integrate.ode` correctly -- it
might seem repetitive, but please bear with me.

Let us say one has ODE problem 1, which is completely separate and
different from ODE problem 2. Then, this warning tells us that we have to
choose to solve ODE problem 1 first, and then ODE problem 2, even though
they are completely separate. I cannot solve ODE problem 1 and ODE
problem 2 in parallel, using two *separate* instances of
`scipy.integrate.odeint`.

Presumably, this is because the same instance of the underlying solver is
called by all `scipy.integrate.odeint` instances?

Kind regards,
Brian
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From cgodshall at enthought.com  Wed May 13 20:16:21 2015
From: cgodshall at enthought.com (Courtenay Godshall (Enthought))
Date: Wed, 13 May 2015 19:16:21 -0500
Subject: [SciPy-User] ANN: SciPy 2015 Talk & Poster Selections Announced Today, Early Bird Deadline 5/22
Message-ID: <008e01d08ddb$35f11290$a1d337b0$@enthought.com>

The talks & posters for the 2015 SciPy Conference were announced today:
http://scipy2015.scipy.org/ehome/115969/292868/. Early bird registration
deadline was extended (final) to 5/22 - hope we'll see you this year!
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From pav at iki.fi  Thu May 14 08:08:16 2015
From: pav at iki.fi (Pauli Virtanen)
Date: Thu, 14 May 2015 15:08:16 +0300
Subject: [SciPy-User] Understanding the "vode" warning for `scipy.integrate.ode`: "This integrator is not re-entrant. You cannot have two ode instances...at the same time."
In-Reply-To: References: Message-ID: 13.05.2015, 04:33, Brian Merchant kirjoitti: > I'd like to make sure I am understanding the warning for the "vode" and > "zvode" integrators, provided by `scipy.integrate.ode` correctly -- it > might seem repetitive, but please bear with me. (i) The warning applies only to scipy.integrate.ode when using vode/zvode. It does not apply to scipy.integrate.odeint or the other solvers available in scipy.integrate.ode. (ii) It is not possible to do this: solve first one problem, then switch to a second problem, and then switch back to the first one. The solver has only a single process-wide opaque internal state that can only hold one problem at a time. (iii) Where the limitation comes from in practice is that the VODE/ZVODE solvers use many global variables; the code is fairly old Fortran, and is not that easily refactored. From matteo at mycarta.ca Thu May 14 17:15:27 2015 From: matteo at mycarta.ca (Matteo Niccoli) Date: Thu, 14 May 2015 17:15:27 -0400 Subject: [SciPy-User] Problems evaluating interpolated function at new points Message-ID: <1dfaf4f46ac5bfc2aa84ce41647db48c.squirrel@mycarta.ca> I am relatively new to Scipy, at least when it comes to interpolation. I am trying to replicate some of the functionality of Matlab's interp2. I have a distance function map (available at this Google drive location) https://drive.google.com/open?id=0B6acq_amk5e3X0Q5UG1ya1VhSlE&authuser=0 Values are normalized in the range 0-1. Size is 200 rows by 300 columns. I can load it up with this code snippet: import numpy as np dstnc1=np.load('dstnc.npy') Coordinates are defined by the next snippet: xmin = 0. xmax = 9000. ymin = 0. ymax = 6000. r1,c1 = dstnc1.shape x = np.linspace(xmin,xmax,c1) y = np.linspace(ymin, ymax,r1) I have three map points defined by vectors xnew1, ynew1 with this snippet: xnew1=[3700.540199,3845.940199,3983.240199] ynew1=[1782.8611,1769.862,1694.862] I check their location with respect to the distance map with this: import matplotlib.pyplot as plt fig = plt.figure(figsize=(20, 16)) ax = fig.add_subplot(1, 1, 1) plt.imshow(dstnc1, cmap=my_cmap_r,vmin=0,vmax=0.3, extent=[0, 9000, 0, 6000], origin='upper') plt.scatter(xnew1, ynew1, s=50, linewidths=0.15) plt.show() They plot in the correct location. Now I would like to extract the distance value at those three points. I tried first interp2d. 
from scipy.interpolate import interp2d
x1 = np.linspace(xmin,xmax,c1)
y1 = np.linspace(ymin,ymax,r1)
f = interp2d(x1, y1, dstnc1, kind='cubic')

but when I try to evaluate with:
test=f(xnew,ynew)

I get this error message:
---------------------------------------------------------------------------
ValueError                                Traceback (most recent call last)
<ipython-input> in <module>()
----> 1 test=f(xnew,ynew)

C:\...\AppData\Local\Continuum\Anaconda\lib\site-packages\scipy\interpolate\interpolate.pyc in __call__(self, x, y, dx, dy)
    270                              (self.y_min, self.y_max)))
    271
--> 272         z = fitpack.bisplev(x, y, self.tck, dx, dy)
    273         z = atleast_2d(z)
    274         z = transpose(z)

C:\...\AppData\Local\Continuum\Anaconda\lib\site-packages\scipy\interpolate\fitpack.pyc in bisplev(x, y, tck, dx, dy)
   1027     z,ier = _fitpack._bispev(tx,ty,c,kx,ky,x,y,dx,dy)
   1028     if ier == 10:
--> 1029         raise ValueError("Invalid input data")
   1030     if ier:
   1031         raise TypeError("An error occurred")

ValueError: Invalid input data

If I try RectBivariateSpline:
from scipy.interpolate import RectBivariateSpline
x2 = np.linspace(xmin,xmax,r1)
y2 = np.linspace(ymin,ymax,c1)
f = RectBivariateSpline(x2, y2, dstnc1)

I get this error:

---------------------------------------------------------------------------
ValueError                                Traceback (most recent call last)
<ipython-input> in <module>()
----> 1 test=f(xnew,ynew)

C:\...\AppData\Local\Continuum\Anaconda\lib\site-packages\scipy\interpolate\fitpack2.pyc in __call__(self, x, y, mth, dx, dy, grid)
    643                 z,ier = dfitpack.bispev(tx,ty,c,kx,ky,x,y)
    644                 if not ier == 0:
--> 645                     raise ValueError("Error code returned by bispev: %s" % ier)
    646             else:
    647                 # standard Numpy broadcasting

ValueError: Error code returned by bispev: 10


Any suggestion as to whether I am using the wrong functions or the right
function with wrong syntax, and how I may fix it is appreciated. Thank you

From guziy.sasha at gmail.com  Thu May 14 18:24:25 2015
From: guziy.sasha at gmail.com (Oleksandr Huziy)
Date: Thu, 14 May 2015 18:24:25 -0400
Subject: [SciPy-User] Problems evaluating interpolated function at new points
In-Reply-To: <1dfaf4f46ac5bfc2aa84ce41647db48c.squirrel@mycarta.ca>
References: <1dfaf4f46ac5bfc2aa84ce41647db48c.squirrel@mycarta.ca>
Message-ID: 

Hi All:

This is not the answer to Matteo's question but probably a report on
unclear documentation here:

http://docs.scipy.org/doc/scipy-0.14.0/reference/generated/scipy.interpolate.interp2d.html

I am confused by the returns part, it says

"""
Returns:

*values_x* : ndarray, shape xi.shape[:-1] + values.shape[ndim:]

Interpolated values at input coordinates.

"""

But in the examples, the returned value f is used as a callable...

"""
Now use the obtained interpolation function and plot the result:

>>> xnew = np.arange(-5.01, 5.01, 1e-2)
>>> ynew = np.arange(-5.01, 5.01, 1e-2)
>>> znew = f(xnew, ynew)
>>> plt.plot(x, z[0, :], 'ro-', xnew, znew[0, :], 'b-')
>>> plt.show()
"""

Probably returns section corresponds to this version of scipy, but the
examples are from the later version, which has probably confused Matteo.

Cheers

2015-05-14 17:15 GMT-04:00 Matteo Niccoli :

> I am relatively new to Scipy, at least when it comes to interpolation. I
> am trying to replicate some of the functionality of Matlab's interp2.
>
> I have a distance function map (available at this Google drive location)
> https://drive.google.com/open?id=0B6acq_amk5e3X0Q5UG1ya1VhSlE&authuser=0
>
> Values are normalized in the range 0-1. Size is 200 rows by 300 columns.
> > I can load it up with this code snippet: > import numpy as np > dstnc1=np.load('dstnc.npy') > > > Coordinates are defined by the next snippet: > xmin = 0. > xmax = 9000. > ymin = 0. > ymax = 6000. > r1,c1 = dstnc1.shape > x = np.linspace(xmin,xmax,c1) > y = np.linspace(ymin, ymax,r1) > > I have three map points defined by vectors xnew1, ynew1 with this snippet: > xnew1=[3700.540199,3845.940199,3983.240199] > ynew1=[1782.8611,1769.862,1694.862] > > I check their location with respect to the distance map with this: > import matplotlib.pyplot as plt > fig = plt.figure(figsize=(20, 16)) > ax = fig.add_subplot(1, 1, 1) > plt.imshow(dstnc1, cmap=my_cmap_r,vmin=0,vmax=0.3, > extent=[0, 9000, 0, 6000], origin='upper') > plt.scatter(xnew1, ynew1, s=50, linewidths=0.15) > plt.show() > > They plot in the correct location. Now I would like to extract the > distance value at those three points. I tried first interp2d. > > from scipy.interpolate import interp2d > x1 = np.linspace(xmin,xmax,c1) > y1 = np.linspace(ymin,ymax,r1) > f = interp2d(x1, y1, dstnc1, kind='cubic') > > but when I try to evaluate with: > test=f(xnew,ynew) > > I get this error message: > --------------------------------------------------------------------------- > ValueError Traceback (most recent call last) > in () > ----> 1 test=f(xnew,ynew) > > > C:\...\AppData\Local\Continuum\Anaconda\lib\site-packages\scipy\interpolate\interpolate.pyc > in __call__(self, x, y, dx, dy) > 270 (self.y_min, self.y_max))) > 271 > --> 272 z = fitpack.bisplev(x, y, self.tck, dx, dy) > 273 z = atleast_2d(z) > 274 z = transpose(z) > > > C:\...\AppData\Local\Continuum\Anaconda\lib\site-packages\scipy\interpolate\fitpack.pyc > in bisplev(x, y, tck, dx, dy) > 1027 z,ier = _fitpack._bispev(tx,ty,c,kx,ky,x,y,dx,dy) > 1028 if ier == 10: > -> 1029 raise ValueError("Invalid input data") > 1030 if ier: > 1031 raise TypeError("An error occurred") > > ValueError: Invalid input data > > If I try RectBivariateSpline: > from scipy.interpolate import RectBivariateSpline > x2 = np.linspace(xmin,xmax,r1) > y2 = np.linspace(ymin,ymax,c1) > f = RectBivariateSpline(x2, y2, dstnc1) > > I get this error: > > --------------------------------------------------------------------------- > ValueError Traceback (most recent call last) > in () > ----> 1 test=f(xnew,ynew) > > > C:\...\AppData\Local\Continuum\Anaconda\lib\site-packages\scipy\interpolate\fitpack2.pyc > in __call__(self, x, y, mth, dx, dy, grid) > 643 z,ier = dfitpack.bispev(tx,ty,c,kx,ky,x,y) > 644 if not ier == 0: > --> 645 raise ValueError("Error code returned by > bispev: %s" % ier) > 646 else: > 647 # standard Numpy broadcasting > > ValueError: Error code returned by bispev: 10 > > > Any suggestion as to whether I am using the wrong functions or the right > function with wrong syntax, and how I may fix it is appreciated. Thank you > > > > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user > -- Sasha -------------- next part -------------- An HTML attachment was scrubbed... URL: From matteo at mycarta.ca Fri May 15 10:26:23 2015 From: matteo at mycarta.ca (Matteo Niccoli) Date: Fri, 15 May 2015 10:26:23 -0400 Subject: [SciPy-User] Problems evaluating interpolated function at new points In-Reply-To: References: <1dfaf4f46ac5bfc2aa84ce41647db48c.squirrel@mycarta.ca> Message-ID: Update. I just checked: I am running Python 2.7.9 and Scipy 0.14.0. 
As noted by Oleksandr I may have been using the Syntax recommended for a newer version. Can anybody suggest a workaround or the correct syntax for version 0.14.0? Also, there was a typo when I run the notebook last and copied code in my post (xnew, ynew, instead of xnew1, ynew1). I run again my code with xnew1, ynew1 and I still get the errors. Thanks Matteo On Thu, May 14, 2015 6:24 pm, Oleksandr Huziy wrote: > Hi All: > > > This is not the answer to Matteo's question but probably a report on > unclear documentation here: > > http://docs.scipy.org/doc/scipy-0.14.0/reference/generated/scipy.interpol > ate.interp2d.html > > I am confused by the returns part, it says > > > """ > Returns: > > > *values_x* : ndarray, shape xi.shape[:-1] + values.shape[ndim:] > > > Interpolated values at input coordinates. > > > """ > > > But in the examples, the returned value f is used as a callable... > > > """ > > > Now use the obtained interpolation function and plot the result: > >>>> > >>>> xnew = np.arange(-5.01, 5.01, 1e-2)>>> ynew = np.arange(-5.01, >>>> 5.01, 1e-2)>>> znew = f(xnew, ynew)>>> plt.plot(x, z[0, :], 'ro-', >>>> xnew, znew[0, :], 'b-')>>> plt.show() > > """ > > > Probably returns section corresponds to this version of scipy, but the > examples are from the later version, which has probably confused Matteo. > > > Cheers > > > > > 2015-05-14 17:15 GMT-04:00 Matteo Niccoli : > > >> I am relatively new to Scipy, at least when it comes to interpolation. >> I >> am trying to replicate some of the functionality of Matlab's interp2. >> >> I have a distance function map (available at this Google drive >> location) >> https://drive.google.com/open?id=0B6acq_amk5e3X0Q5UG1ya1VhSlE&authuser= >> 0 >> >> >> Values are normalized in the range 0-1. Size is 200 rows by 300 >> columns. >> >> I can load it up with this code snippet: >> import numpy as np dstnc1=np.load('dstnc.npy') >> >> >> Coordinates are defined by the next snippet: >> xmin = 0. xmax = 9000. ymin = 0. ymax = 6000. r1,c1 = dstnc1.shape x = >> np.linspace(xmin,xmax,c1) y = np.linspace(ymin, ymax,r1) >> >> I have three map points defined by vectors xnew1, ynew1 with this >> snippet: >> xnew1=[3700.540199,3845.940199,3983.240199] >> ynew1=[1782.8611,1769.862,1694.862] >> >> I check their location with respect to the distance map with this: >> import matplotlib.pyplot as plt fig = plt.figure(figsize=(20, 16)) ax = >> fig.add_subplot(1, 1, 1) plt.imshow(dstnc1, >> cmap=my_cmap_r,vmin=0,vmax=0.3, extent=[0, 9000, 0, 6000], >> origin='upper') plt.scatter(xnew1, ynew1, s=50, linewidths=0.15) >> plt.show() >> >> They plot in the correct location. Now I would like to extract the >> distance value at those three points. I tried first interp2d. 
>> from scipy.interpolate import interp2d
>> x1 = np.linspace(xmin,xmax,c1)
>> y1 = np.linspace(ymin,ymax,r1)
>> f = interp2d(x1, y1, dstnc1, kind='cubic')
>>
>> but when I try to evaluate with: test=f(xnew,ynew)
>>
>> I get this error message:
>> ---------------------------------------------------------------------------
>> ValueError                                Traceback (most recent call last)
>> <ipython-input> in <module>()
>> ----> 1 test=f(xnew,ynew)
>>
>> C:\...\AppData\Local\Continuum\Anaconda\lib\site-packages\scipy\interpolate\interpolate.pyc
>> in __call__(self, x, y, dx, dy)
>>     270                              (self.y_min, self.y_max)))
>>     271
>> --> 272         z = fitpack.bisplev(x, y, self.tck, dx, dy)
>>     273         z = atleast_2d(z)
>>     274         z = transpose(z)
>>
>> C:\...\AppData\Local\Continuum\Anaconda\lib\site-packages\scipy\interpolate\fitpack.pyc
>> in bisplev(x, y, tck, dx, dy)
>>    1027     z,ier = _fitpack._bispev(tx,ty,c,kx,ky,x,y,dx,dy)
>>    1028     if ier == 10:
>> --> 1029         raise ValueError("Invalid input data")
>>    1030     if ier:
>>    1031         raise TypeError("An error occurred")
>>
>> ValueError: Invalid input data
>>
>> If I try RectBivariateSpline:
>> from scipy.interpolate import RectBivariateSpline
>> x2 = np.linspace(xmin,xmax,r1)
>> y2 = np.linspace(ymin,ymax,c1)
>> f = RectBivariateSpline(x2, y2, dstnc1)
>>
>> I get this error:
>>
>> ---------------------------------------------------------------------------
>> ValueError                                Traceback (most recent call last)
>> <ipython-input> in <module>()
>> ----> 1 test=f(xnew,ynew)
>>
>> C:\...\AppData\Local\Continuum\Anaconda\lib\site-packages\scipy\interpolate\fitpack2.pyc
>> in __call__(self, x, y, mth, dx, dy, grid)
>>     643                 z,ier = dfitpack.bispev(tx,ty,c,kx,ky,x,y)
>>     644                 if not ier == 0:
>> --> 645                     raise ValueError("Error code returned by bispev: %s" % ier)
>>     646             else:
>>     647                 # standard Numpy broadcasting
>>
>> ValueError: Error code returned by bispev: 10
>>
>> Any suggestion as to whether I am using the wrong functions or the
>> right function with wrong syntax, and how I may fix it is appreciated.
>> Thank you
>>
>> _______________________________________________
>> SciPy-User mailing list
>> SciPy-User at scipy.org
>> http://mail.scipy.org/mailman/listinfo/scipy-user
>>
>
> --
> Sasha
> _______________________________________________
> SciPy-User mailing list
> SciPy-User at scipy.org
> http://mail.scipy.org/mailman/listinfo/scipy-user
>

From asreeve at maine.edu  Fri May 15 12:47:06 2015
From: asreeve at maine.edu (asreeve)
Date: Fri, 15 May 2015 12:47:06 -0400 (EDT)
Subject: [SciPy-User] calling Julia module from Python?
Message-ID: 

I would like some guidance in using the Julia Language (v:0.3.8) from
Python(v:3.4.3), to see if it can be used for creating fast modules
for Python scripts.

I've written a short function in Julia and would like to call it from
a python script. To get started, I'm trying to load the Julia script
into an ipython shell. I can call the module from Julia, and can
execute short bits of Julia in the ipython shell.

Here's where I'm at now (the Julia module I'm calling is at the end of
this message):

Using Julia
===========

julia> using snow_frac2

julia> snow_frac2.snow_fraction2(1.)
0.5
===========

Using Python
============

In [1]: import julia

In [2]: jl=julia.Julia()

In [3]: jl.eval('sin(3.14)')
Out[3]: 0.0015926529164868282

===========

I'd like to call the "snow_frac2.jl" module below from the ipython
interpreter, just like I did in Julia, and use the snow_fraction2
function. I've attempted a variety of commands
(eg. foo=jl.call('snow_frac2.jl')) but just get "JuliaError"s. Can
anyone show the proper incantation to call a Julia module from Python?
I've tried updating to a more recent version of Julia, but that did not
seem to help (although I gave up on this pretty quickly).

Thanks for any help you can provide,

Andy

==snow_frac2.jl==
module snow_frac2

function snow_fraction2(temptr::Float64)
    if temptr<-1.
        fraction=1.0
    elseif temptr>3.0
        fraction=0.0
    else
        fraction=(temptr+1.0)/(4.0)
    end
    return fraction
end

end

....................................
Andrew Reeve
School of Earth and Climate Sciences
University of Maine
Orono, ME 04469

From gdmcbain at freeshell.org  Sun May 17 21:21:47 2015
From: gdmcbain at freeshell.org (Geordie McBain)
Date: Mon, 18 May 2015 11:21:47 +1000
Subject: [SciPy-User] calling Julia module from Python?
In-Reply-To: 
References: 
Message-ID: 

2015-05-16 2:47 GMT+10:00 asreeve :
> I would like some guidance in using the Julia Language (v:0.3.8) from
> Python(v:3.4.3), to see if it can be used for creating fast modules
> for Python scripts.
>
> I've written a short function in Julia and would like to call it from
> a python script. To get started, I'm trying to load the Julia script
> into an ipython shell. I can call the module from Julia, and can
> execute short bits of Julia in the ipython shell.
>
> Here's where I'm at now (the Julia module I'm calling is at the end of
> this message):
>
> Using Julia
> ===========
>
> julia> using snow_frac2
>
> julia> snow_frac2.snow_fraction2(1.)
> 0.5
> ===========
>
> Using Python
> ============
>
> In [1]: import julia
>
> In [2]: jl=julia.Julia()
>
> In [3]: jl.eval('sin(3.14)')
> Out[3]: 0.0015926529164868282
>
> ===========
>
> I'd like to call the "snow_frac2.jl" module below from the ipython
> interpreter, just like I did in Julia, and use the snow_fraction2
> function. I've attempted a variety of commands
> (eg. foo=jl.call('snow_frac2.jl')) but just get "JuliaError"s. Can
> anyone show the proper incantation to call a Julia module from Python?
> I've tried updating to a more recent version of Julia, but that did not
> seem to help (although I gave up on this pretty quickly).

Hello.  There was a thread about Julia last month

http://mail.scipy.org/pipermail/scipy-user/2015-April/036450.html

referring to

http://blog.leahhanson.us/julia-calling-python-calling-julia.html

Based on that, it looks like what's needed is:

%<---
import julia
j = julia.Julia()
j.eval('using snow_frac2')
print(j.eval('snow_frac2.snow_fraction2(1.)'))
--->%

This works here.

$ python snow_frac2.py
0.5
-------------- next part --------------
A non-text attachment was scrubbed...
Name: snow_frac2.py
Type: text/x-python
Size: 107 bytes
Desc: not available
URL: 

From abhisek.mndl at gmail.com  Tue May 19 05:03:00 2015
From: abhisek.mndl at gmail.com (abhisek Mondal)
Date: Tue, 19 May 2015 14:33:00 +0530
Subject: [SciPy-User] Trouble getting R^2 value
Message-ID: 

Hi,

I'm working with a 3D data-set aiming to best-fit the points in a 3D
surface.
So far, I have successfully fitted the surface using *scipy.linalg.lstsq*.
But, I would like to express the significance of the results in terms of
R^2 value, which I'm not being able to perform with this particular
function.

So, a little help regarding this matter would be highly appreciated.

Thank you.

-- 
Abhisek Mondal
*Research Fellow*
*Structural Biology and Bioinformatics*
*Indian Institute of Chemical Biology*
*Kolkata 700032*
*INDIA*
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 
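[An aside on the question above: R^2 is not returned by
scipy.linalg.lstsq, but it can be computed by hand from the fit. A
minimal sketch, with made-up synthetic points standing in for the 3D
data and a plane model z = a*x + b*y + c assumed purely for
illustration:]

import numpy as np
from scipy.linalg import lstsq

# Synthetic stand-in for the 3D data set.
rng = np.random.RandomState(0)
x = rng.rand(100)
y = rng.rand(100)
z = 2.0*x - 1.0*y + 0.5 + 0.05*rng.randn(100)

# Least-squares fit of the plane z = a*x + b*y + c.
A = np.column_stack([x, y, np.ones_like(x)])
coeffs, residues, rank, sv = lstsq(A, z)

# Coefficient of determination: R^2 = 1 - SS_res / SS_tot.
z_fit = A.dot(coeffs)
ss_res = np.sum((z - z_fit)**2)
ss_tot = np.sum((z - z.mean())**2)
print(1.0 - ss_res/ss_tot)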
From sturla.molden at gmail.com  Tue May 19 09:01:31 2015
From: sturla.molden at gmail.com (Sturla Molden)
Date: Tue, 19 May 2015 13:01:31 +0000 (UTC)
Subject: [SciPy-User] Trouble getting R^2 value
References: 
Message-ID: <782364673453732360.567021sturla.molden-gmail.com@news.gmane.org>

abhisek Mondal wrote:

> I'm working with a 3D data-set aiming to best-fit the points in a 3D
> surface.
> So far, I have successfully fitted the surface using *scipy.linalg.lstsq*.
> But, I would like to express the significance of the results in terms of
> R^2 value, which I'm not being able to perform with this particular
> function.

scipy.linalg.lstsq is a linear algebra solver, it is basically an
interface to the lapack *gelss subroutines. It does not do linear
regression analysis as such. (That is why it is not in scipy.stats.*)

If you want to fit a linear regression model and get all the diagnostics
and probability computations, statsmodels is an excellent package.

Apart from that, R2 is easy to compute:
http://en.m.wikipedia.org/wiki/Coefficient_of_determination

Sturla

From gb.gabrielebrambilla at gmail.com  Fri May 22 05:13:07 2015
From: gb.gabrielebrambilla at gmail.com (Gabriele Brambilla)
Date: Fri, 22 May 2015 11:13:07 +0200
Subject: [SciPy-User] problem using numpy arrays with stream plot
Message-ID: 

Hi,

I have a problem using numpy arrays with stream plot.
I want to use a 3d vector field in coordinates (x,y,z) stored in a numpy
array, and plot slices of it with streamplot.

To test it I wanted to use a vector field with arrows pointed up in the
z>0 region and pointed down in the z<0 region.

import numpy as np
import matplotlib.pyplot as plt
from math import *

max = 100
min = -100

X = np.linspace(min, max, num=100)
Y = np.linspace(min, max, num=100)
Z = np.linspace(min, max, num=100)

N = X.size

#single components in the 3D matrix
Bxa = np.zeros((N, N, N))
Bya = np.zeros((N, N, N))
Bza = np.zeros((N, N, N))

for i, x in enumerate(X):
    for j, y in enumerate(Y):
        for k, z in enumerate(Z):
            Bxa[ i, j, k] = 0.0 #x
            Bya[ i, j, k] = 0.0 #y
            Bza[ i, j, k] = z

#I take a slice close to Y=0
Bx_sec = Bxa[:,N/2,:]
By_sec = Bya[:,N/2,:]
Bz_sec = Bza[:,N/2,:]

fig = plt.figure()
ax = fig.add_subplot(111)
ax.streamplot(X, Z, Bx_sec, Bz_sec, color='b')
ax.set_xlim([X.min(), X.max()])
ax.set_ylim([Z.min(), Z.max()])
plt.show()

But I obtain something that looks as if I had set Bza = x! I tried to
invert the order of the vectors but it didn't help!

I attach the picture.

Do you understand why? (the code I wrote should run)

Gabriele
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 
-------------- next part --------------
A non-text attachment was scrubbed...
Name: figure_1.png
Type: image/png
Size: 14785 bytes
Desc: not available
URL: 

From ralf.gommers at gmail.com  Sun May 24 10:55:31 2015
From: ralf.gommers at gmail.com (Ralf Gommers)
Date: Sun, 24 May 2015 16:55:31 +0200
Subject: [SciPy-User] ANN: Scipy 0.16.0 beta release 2
Message-ID: 

Hi all,

The second beta for Scipy 0.16.0 is now available. After beta 1 a couple
of critical issues on Windows were solved, and there are now also 32-bit
Windows binaries (along with the sources and release notes) available on
https://sourceforge.net/projects/scipy/files/scipy/0.16.0b2/.

Please try this release and report any issues on the scipy-dev mailing
list.

Cheers,
Ralf
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From gb.gabrielebrambilla at gmail.com  Tue May 26 05:34:02 2015
From: gb.gabrielebrambilla at gmail.com (Gabriele Brambilla)
Date: Tue, 26 May 2015 11:34:02 +0200
Subject: [SciPy-User] cdf inverse
Message-ID: 

Hi,

I would like to use the inverse of a cdf.
The cdf in scipy.stats takes the degrees of freedom, the x (until we
integrate) and returns the quantile.
I would like, given the quantile, to obtain the x.

To make it even clearer, I would like to do what is done in this table.
http://sites.stat.psu.edu/~mga/401/tables/Chi-square-table.pdf

Is it possible in scipy.stats?

Thanks

Gabriele
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From gb.gabrielebrambilla at gmail.com  Tue May 26 05:58:36 2015
From: gb.gabrielebrambilla at gmail.com (Gabriele Brambilla)
Date: Tue, 26 May 2015 11:58:36 +0200
Subject: [SciPy-User] cdf inverse
In-Reply-To: 
References: 
Message-ID: 

Thanks, it works perfectly.

Gabriele

On Tue, May 26, 2015 at 11:53 AM, Christoph Deil wrote:

> Doesn't look like this got posted (maybe I used the wrong email address to
> send the reply), so I'm re-sending you the reply offline.
>
> Begin forwarded message:
>
> *From: *Christoph Deil
> *Subject: **Re: [SciPy-User] cdf inverse*
> *Date: *26 May 2015 11:49:39 CEST
> *To: *SciPy Users List
>
>
> On 26 May 2015, at 11:34, Gabriele Brambilla <
> gb.gabrielebrambilla at gmail.com> wrote:
>
> Hi,
>
> I would like to use the inverse of a cdf.
> The cdf in scipy.stats takes the degrees of freedom, the x (until we
> integrate) and returns the quantile.
> I would like, given the quantile, to obtain the x.
>
> To make it even clearer, I would like to do what is done in this table.
> http://sites.stat.psu.edu/~mga/401/tables/Chi-square-table.pdf
>
> Is it possible in scipy.stats?
>
> Thanks
>
> Gabriele
> _______________________________________________
> SciPy-User mailing list
> SciPy-User at scipy.org
> http://mail.scipy.org/mailman/listinfo/scipy-user
>
>
> Hi Gabriele,
>
> you should be able to compute this using the isf (inverse survival
> function) or ppf (percent point function, inverse of cdf) methods of the
> scipy.stats.chi2 distribution:
> http://docs.scipy.org/doc/scipy/reference/generated/scipy.stats.chi2.html
>
> Here's some more info on the distribution methods:
> http://docs.scipy.org/doc/scipy-dev/reference/tutorial/stats.html#common-methods
>
> Let me know if that doesn't help and I'll post some example code.
>
> Christoph
>
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From toddrjen at gmail.com  Wed May 27 12:00:59 2015
From: toddrjen at gmail.com (Todd)
Date: Wed, 27 May 2015 18:00:59 +0200
Subject: [SciPy-User] Structured array from dict
Message-ID: 

Is there a constructor to create a structured array from a dict in numpy?
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From jmsachs at gmail.com  Wed May 27 12:11:18 2015
From: jmsachs at gmail.com (Jason Sachs)
Date: Wed, 27 May 2015 09:11:18 -0700
Subject: [SciPy-User] Structured array from dict
In-Reply-To: 
References: 
Message-ID: 

suggest looking at pandas DataFrame

On Wed, May 27, 2015 at 9:00 AM, Todd wrote:

> Is there a constructor to create a structured array from a dict in numpy?
>
> _______________________________________________
> SciPy-User mailing list
> SciPy-User at scipy.org
> http://mail.scipy.org/mailman/listinfo/scipy-user
>
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 
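[An aside on the question above: as far as I know numpy has no
single-call constructor for this, but a structured array can be built
from a dict by hand. A minimal sketch, assuming the dict values are
equal-length 1-D arrays; the dict `d` below is made up for
illustration:]

import numpy as np

d = {'a': np.arange(3), 'b': np.linspace(0.0, 1.0, 3)}

# Build the compound dtype from the dict keys, then fill field by field.
names = sorted(d.keys())
dtype = [(name, d[name].dtype) for name in names]
arr = np.empty(d[names[0]].shape[0], dtype=dtype)
for name in names:
    arr[name] = d[name]
print(arr)   # -> [(0, 0.0) (1, 0.5) (2, 1.0)]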
From toddrjen at gmail.com  Wed May 27 12:14:40 2015
From: toddrjen at gmail.com (Todd)
Date: Wed, 27 May 2015 18:14:40 +0200
Subject: [SciPy-User] Structured array from dict
In-Reply-To: 
References: 
Message-ID: 

Yes, I use pandas extensively. But it isn't always the best solution.

On Wed, May 27, 2015 at 6:11 PM, Jason Sachs wrote:

> suggest looking at pandas DataFrame
>
> On Wed, May 27, 2015 at 9:00 AM, Todd wrote:
>
>> Is there a constructor to create a structured array from a dict in numpy?
>>
>> _______________________________________________
>> SciPy-User mailing list
>> SciPy-User at scipy.org
>> http://mail.scipy.org/mailman/listinfo/scipy-user
>>
>>
>
> _______________________________________________
> SciPy-User mailing list
> SciPy-User at scipy.org
> http://mail.scipy.org/mailman/listinfo/scipy-user
>
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From camillechambon at yahoo.fr  Thu May 28 04:34:47 2015
From: camillechambon at yahoo.fr (Camille Chambon)
Date: Thu, 28 May 2015 10:34:47 +0200
Subject: [SciPy-User] Computing derivatives with scipy.integrate.odeint
Message-ID: <5566D327.4010002@yahoo.fr>

Hello,

I'm trying to understand how scipy.integrate.odeint works.

I wrote the following example:

import numpy as np
from scipy.integrate import odeint
y0 = [2]
def func(y, t):
    return [y[0]]
t = np.arange(0, 4.0, 1.0)
y = odeint(func, y0, t)
print 'y', y

It prints:
y [[  2.        ]
 [  5.43656382]
 [ 14.77811278]
 [ 40.17107594]]

Why don't I get:
y [[  2.]
 [  4.]
 [  8.]
 [ 16.]]

?

My question is probably due to my limited knowledge in basic
mathematics.

Thanks for your help.

Camille
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From gyromagnetic at gmail.com  Thu May 28 08:24:30 2015
From: gyromagnetic at gmail.com (Gyro Funch)
Date: Thu, 28 May 2015 06:24:30 -0600
Subject: [SciPy-User] Computing derivatives with scipy.integrate.odeint
In-Reply-To: <5566D327.4010002@yahoo.fr>
References: <5566D327.4010002@yahoo.fr>
Message-ID: 

On 2015-05-28 2:34 AM, Camille Chambon wrote:
> Hello,
>
> I'm trying to understand how scipy.integrate.odeint works.
>
> I wrote the following example:
>
> import numpy as np
> from scipy.integrate import odeint
> y0 = [2]
> def func(y, t):
>     return [y[0]]
> t = np.arange(0, 4.0, 1.0)
> y = odeint(func, y0, t)
> print 'y', y
>
> It prints:
> y [[  2.        ]
>  [  5.43656382]
>  [ 14.77811278]
>  [ 40.17107594]]
>
> Why don't I get:
> y [[  2.]
>  [  4.]
>  [  8.]
>  [ 16.]]
>
> ?
>
> My question is probably due to my limited knowledge in basic
> mathematics.
>
> Thanks for your help.
>
> Camille
>

Hello Camille,

The way that 'func' is written, you are solving the differential
equation

dy/dt = y ; y(0) = 2 ,

which has the solution y = 2 * e^t.

-gyro

From camillechambon at yahoo.fr  Thu May 28 08:54:13 2015
From: camillechambon at yahoo.fr (Camille Chambon)
Date: Thu, 28 May 2015 14:54:13 +0200
Subject: [SciPy-User] Computing derivatives with scipy.integrate.odeint
In-Reply-To: 
References: <5566D327.4010002@yahoo.fr>
Message-ID: <55670FF5.2060305@yahoo.fr>

Thanks very much!
It's now clear for me.

Camille

Le 28/05/2015 14:24, Gyro Funch a écrit :
> On 2015-05-28 2:34 AM, Camille Chambon wrote:
>> Hello,
>>
>> I'm trying to understand how scipy.integrate.odeint works.
>>
>> I wrote the following example:
>>
>> import numpy as np
>> from scipy.integrate import odeint
>> y0 = [2]
>> def func(y, t):
>>     return [y[0]]
>> t = np.arange(0, 4.0, 1.0)
>> y = odeint(func, y0, t)
>> print 'y', y
>>
>> It prints:
>> y [[  2.        ]
>>  [  5.43656382]
>>  [ 14.77811278]
>>  [ 40.17107594]]
>>
>> Why don't I get:
>> y [[  2.]
>>  [  4.]
>>  [  8.]
>>  [ 16.]]
>>
>> ?
>>
>> My question is probably due to my limited knowledge in basic
>> mathematics.
>>
>> Thanks for your help.
>>
>> Camille
>>
>
> Hello Camille,
>
> The way that 'func' is written, you are solving the differential
> equation
>
> dy/dt = y ; y(0) = 2 ,
>
> which has the solution y = 2 * e^t.
>
> -gyro
>
>
>
> _______________________________________________
> SciPy-User mailing list
> SciPy-User at scipy.org
> http://mail.scipy.org/mailman/listinfo/scipy-user

From cournape at gmail.com  Thu May 28 09:35:55 2015
From: cournape at gmail.com (David Cournapeau)
Date: Thu, 28 May 2015 22:35:55 +0900
Subject: [SciPy-User] [Numpy-discussion] Verify your sourceforge windows installer downloads
In-Reply-To: 
References: 
Message-ID: 

IMO, this really begs the question on whether we still want to use
sourceforge at all. At this point I just don't trust the service at all
anymore.

Could we use some resources (e.g. rackspace ?) to host those files ? Do we
know how much traffic they get so estimate the cost ?

David

On Thu, May 28, 2015 at 9:46 PM, Julian Taylor <
jtaylor.debian at googlemail.com> wrote:

> hi,
> It has been reported that sourceforge has taken over the gimp
> unofficial windows downloader page and temporarily bundled the
> installer with unauthorized adware:
> https://plus.google.com/+gimp/posts/cxhB1PScFpe
>
> As NumPy is also distributing windows installers via sourceforge I
> recommend that when you download the files you verify the downloads
> via the checksums in the README.txt before using them. The README.txt
> is clearsigned with my gpg key so it should be safe from tampering.
> Unfortunately as I don't use windows I cannot give any advice on how
> to do the verification on these platforms. Maybe someone familiar with
> available tools can chime in.
>
> I have checked the numpy downloads and they still match what I
> uploaded, but as sourceforge does redirect based on OS and geolocation
> this may not mean much.
>
> Cheers,
> Julian Taylor
> _______________________________________________
> NumPy-Discussion mailing list
> NumPy-Discussion at scipy.org
> http://mail.scipy.org/mailman/listinfo/numpy-discussion
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From jmsachs at gmail.com  Thu May 28 09:41:18 2015
From: jmsachs at gmail.com (Jason Sachs)
Date: Thu, 28 May 2015 06:41:18 -0700
Subject: [SciPy-User] [Numpy-discussion] Verify your sourceforge windows installer downloads
In-Reply-To: 
References: 
Message-ID: 

Discussion of this on r/programming:

http://www.reddit.com/r/programming/comments/37h8ad/sourceforge_took_control_of_the_gimp_account_and/

GitHub apparently has a Releases feature, seems like that is the most
appropriate mechanism.

On Thu, May 28, 2015 at 6:35 AM, David Cournapeau wrote:

> IMO, this really begs the question on whether we still want to use
> sourceforge at all. At this point I just don't trust the service at all
> anymore.
>
> Could we use some resources (e.g. rackspace ?) to host those files ? Do we
> know how much traffic they get so estimate the cost ?
> > David > > On Thu, May 28, 2015 at 9:46 PM, Julian Taylor < > jtaylor.debian at googlemail.com> wrote: > >> hi, >> It has been reported that sourceforge has taken over the gimp >> unofficial windows downloader page and temporarily bundled the >> installer with unauthorized adware: >> https://plus.google.com/+gimp/posts/cxhB1PScFpe >> >> As NumPy is also distributing windows installers via sourceforge I >> recommend that when you download the files you verify the downloads >> via the checksums in the README.txt before using them. The README.txt >> is clearsigned with my gpg key so it should be safe from tampering. >> Unfortunately as I don't use windows I cannot give any advice on how >> to do the verifcation on these platforms. Maybe someone familar with >> available tools can chime in. >> >> I have checked the numpy downloads and they still match what I >> uploaded, but as sourceforge does redirect based on OS and geolocation >> this may not mean much. >> >> Cheers, >> Julian Taylor >> _______________________________________________ >> NumPy-Discussion mailing list >> NumPy-Discussion at scipy.org >> http://mail.scipy.org/mailman/listinfo/numpy-discussion >> > > > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From andrew.collette at gmail.com Thu May 28 13:00:08 2015 From: andrew.collette at gmail.com (Andrew Collette) Date: Thu, 28 May 2015 11:00:08 -0600 Subject: [SciPy-User] [Numpy-discussion] Verify your sourceforge windows installer downloads In-Reply-To: <31698217454514411.075227sturla.molden-gmail.com@news.gmane.org> References: <31698217454514411.075227sturla.molden-gmail.com@news.gmane.org> Message-ID: > Here is their lame excuse: > > https://sourceforge.net/blog/gimp-win-project-wasnt-hijacked-just-abandoned/ > > It probably means this: > > If NumPy installers are moved away from Sourceforge, they will set up a > mirror and load the mirrored installers with all sorts of crapware. It is > some sort of racket the mob couldn't do better. I noticed that like most BSD-licensed software, NumPy's license includes this clause: "Neither the name of the NumPy Developers nor the names of any contributors may be used to endorse or promote products derived from this software without specific prior written permission." There's an argument to be made that SF isn't legally permitted to distribute poisoned installers under the name "NumPy" without permission. I recall a similar dust-up a while ago about "Standard Markdown" using the name "Markdown"; the original author (John Gruber) took action and got them to change the name. In any case I've always been surprised that NumPy is distributed through SourceForge, which has been sketchy for years now. Could it simply be hosted on PyPI? 
Andrew From cournape at gmail.com Thu May 28 13:05:57 2015 From: cournape at gmail.com (David Cournapeau) Date: Fri, 29 May 2015 02:05:57 +0900 Subject: [SciPy-User] [Numpy-discussion] Verify your sourceforge windows installer downloads In-Reply-To: References: <31698217454514411.075227sturla.molden-gmail.com@news.gmane.org> Message-ID: On Fri, May 29, 2015 at 2:00 AM, Andrew Collette wrote: > > Here is their lame excuse: > > > > > https://sourceforge.net/blog/gimp-win-project-wasnt-hijacked-just-abandoned/ > > > > It probably means this: > > > > If NumPy installers are moved away from Sourceforge, they will set up a > > mirror and load the mirrored installers with all sorts of crapware. It is > > some sort of racket the mob couldn't do better. > > I noticed that like most BSD-licensed software, NumPy's license > includes this clause: > > "Neither the name of the NumPy Developers nor the names of any > contributors may be used to endorse or promote products derived from > this software without specific prior written permission." > > There's an argument to be made that SF isn't legally permitted to > distribute poisoned installers under the name "NumPy" without > permission. I recall a similar dust-up a while ago about "Standard > Markdown" using the name "Markdown"; the original author (John Gruber) > took action and got them to change the name. > > In any case I've always been surprised that NumPy is distributed > through SourceForge, which has been sketchy for years now. Could it > simply be hosted on PyPI? > They don't accept arbitrary binaries like SF does, and some of our installer formats can't be uploaded there. David > > Andrew > _______________________________________________ > NumPy-Discussion mailing list > NumPy-Discussion at scipy.org > http://mail.scipy.org/mailman/listinfo/numpy-discussion > -------------- next part -------------- An HTML attachment was scrubbed... URL: From cimrman3 at ntc.zcu.cz Fri May 29 11:24:22 2015 From: cimrman3 at ntc.zcu.cz (Robert Cimrman) Date: Fri, 29 May 2015 17:24:22 +0200 Subject: [SciPy-User] ANN: SfePy 2015.2 Message-ID: <556884A6.1010600@ntc.zcu.cz> I am pleased to announce release 2015.2 of SfePy. Description ----------- SfePy (simple finite elements in Python) is a software for solving systems of coupled partial differential equations by the finite element method or by the isogeometric analysis (preliminary support). It is distributed under the new BSD license. Home page: http://sfepy.org Mailing list: http://groups.google.com/group/sfepy-devel Git (source) repository, issue tracker, wiki: http://github.com/sfepy Highlights of this release -------------------------- - major code simplification (removed element groups) - time stepping solvers updated for interactive use - improved finding of reference element coordinates of physical points - reorganized examples - reorganized installation on POSIX systems (sfepy-run script) For full release notes see http://docs.sfepy.org/doc/release_notes.html#id1 (rather long and technical). 
Best regards,
Robert Cimrman and Contributors (*)

(*) Contributors to this release (alphabetical order): Lubos Kejzlar, Vladimir Lukes, Anton Gladky, Matyas Novak

From saketkc at gmail.com  Fri May 29 13:52:26 2015
From: saketkc at gmail.com (Saket Choudhary)
Date: Fri, 29 May 2015 10:52:26 -0700
Subject: [SciPy-User] [Numpy-discussion] Verify your sourceforge windows installer downloads
In-Reply-To: 
References: <31698217454514411.075227sturla.molden-gmail.com@news.gmane.org>
Message-ID: 

On 28 May 2015 at 10:05, David Cournapeau wrote:
>
> On Fri, May 29, 2015 at 2:00 AM, Andrew Collette wrote:
>>
>> > Here is their lame excuse:
>> >
>> > https://sourceforge.net/blog/gimp-win-project-wasnt-hijacked-just-abandoned/
>> >
>> > It probably means this:
>> >
>> > If NumPy installers are moved away from Sourceforge, they will set up a mirror and load the mirrored installers with all sorts of crapware. It is some sort of racket the mob couldn't do better.
>>
>> I noticed that like most BSD-licensed software, NumPy's license includes this clause:
>>
>> "Neither the name of the NumPy Developers nor the names of any contributors may be used to endorse or promote products derived from this software without specific prior written permission."
>>
>> There's an argument to be made that SF isn't legally permitted to distribute poisoned installers under the name "NumPy" without permission. I recall a similar dust-up a while ago about "Standard Markdown" using the name "Markdown"; the original author (John Gruber) took action and got them to change the name.
>>
>> In any case I've always been surprised that NumPy is distributed through SourceForge, which has been sketchy for years now. Could it simply be hosted on PyPI?
>
> They don't accept arbitrary binaries like SF does, and some of our installer formats can't be uploaded there.
>

Bintray [1] has been providing a free service for hosting 'bottles' (compiled binaries) for the Homebrew project [2]. Probably an option to look at.

[1] https://bintray.com/
[2] http://brew.sh/

> David
>
>> Andrew
>> _______________________________________________
>> NumPy-Discussion mailing list
>> NumPy-Discussion at scipy.org
>> http://mail.scipy.org/mailman/listinfo/numpy-discussion
>
> _______________________________________________
> NumPy-Discussion mailing list
> NumPy-Discussion at scipy.org
> http://mail.scipy.org/mailman/listinfo/numpy-discussion

From arashaga at gmail.com  Fri May 29 20:05:31 2015
From: arashaga at gmail.com (Arash M)
Date: Fri, 29 May 2015 19:05:31 -0500
Subject: [SciPy-User] Constrained Optimization
Message-ID: 

I am new to scipy, and I am trying to maximize x^(0.5)y^(0.5) s.t. x+y=10 using scipy. I can't figure out which method to use. I would really appreciate it if someone could guide me on this.

This is the code I have tried:

from scipy.optimize import minimize
import numpy as np

def func(x, sign=1.0):
    """ Objective function """
    return sign*(x[0]**(0.5)*x[1]**(0.5))

def func_deriv(x, sign=1.0):
    """ Derivative of objective function """
    dfdx0 = sign*((0.5*x[0]**(-0.5))*x[1]**(0.5))
    dfdx1 = sign*((0.5*x[1]**(-0.5))*x[0]**(0.5))
    return np.array([dfdx0, dfdx1])

cons = ({'type': 'eq',
         'fun': lambda x: x[0] + x[1] - 10,
         'jac': lambda x: np.array([1.0, 1.0])})

res = minimize(func, (4, 6), args=(-1.0,), jac=func_deriv,
               method='SLSQP', options={'disp': True})

print(res.x)

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From klemm at phys.ethz.ch  Sat May 30 04:02:13 2015
From: klemm at phys.ethz.ch (Hanno Klemm)
Date: Sat, 30 May 2015 10:02:13 +0200
Subject: [SciPy-User] Constrained Optimization
In-Reply-To: 
References: 
Message-ID: 

> On 30.05.2015, at 02:05, Arash M wrote:
>
> I am new to scipy, and I am trying to maximize x^(0.5)y^(0.5) s.t. x+y=10 using scipy. I can't figure out which method to use. I would really appreciate it if someone could guide me on this.
>
> This is the code I have tried:
>
> from scipy.optimize import minimize
> import numpy as np
>
> def func(x, sign=1.0):
>     """ Objective function """
>     return sign*(x[0]**(0.5)*x[1]**(0.5))
>
> def func_deriv(x, sign=1.0):
>     """ Derivative of objective function """
>     dfdx0 = sign*((0.5*x[0]**(-0.5))*x[1]**(0.5))
>     dfdx1 = sign*((0.5*x[1]**(-0.5))*x[0]**(0.5))
>     return np.array([dfdx0, dfdx1])
>
> cons = ({'type': 'eq',
>          'fun': lambda x: x[0] + x[1] - 10,
>          'jac': lambda x: np.array([1.0, 1.0])})
>
> res = minimize(func, (4, 6), args=(-1.0,), jac=func_deriv,
>                method='SLSQP', options={'disp': True})
>
> print(res.x)

Hi,

I haven't fully checked your equations, but in your call to minimize you don't pass in your constraints, so it is no surprise that they aren't honoured. Just try adding constraints=cons to the call to minimize and see if that solves your problem.

Hanno

> _______________________________________________
> SciPy-User mailing list
> SciPy-User at scipy.org
> http://mail.scipy.org/mailman/listinfo/scipy-user

-------------- next part --------------
An HTML attachment was scrubbed...
URL:
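For the archive, here is Arash's script with Hanno's suggestion applied; this is a sketch rather than a tested answer, and the only substantive change is the constraints=cons argument to minimize. As a sanity check, the analytic optimum is known: maximizing x^(0.5)y^(0.5) subject to x + y = 10 gives x = y = 5 (by the AM-GM inequality, or a short Lagrange-multiplier calculation), with objective value 5, so res.x should come out close to [5, 5].

import numpy as np
from scipy.optimize import minimize

def func(x, sign=1.0):
    # sign=-1.0 turns the maximization into the minimization
    # that scipy.optimize.minimize expects.
    return sign * (x[0] ** 0.5 * x[1] ** 0.5)

def func_deriv(x, sign=1.0):
    # Analytic gradient of sign * sqrt(x0 * x1); valid for x0, x1 > 0.
    dfdx0 = sign * (0.5 * x[0] ** (-0.5)) * x[1] ** 0.5
    dfdx1 = sign * (0.5 * x[1] ** (-0.5)) * x[0] ** 0.5
    return np.array([dfdx0, dfdx1])

# Equality constraint x0 + x1 - 10 == 0, with its (constant) Jacobian.
cons = ({'type': 'eq',
         'fun': lambda x: x[0] + x[1] - 10,
         'jac': lambda x: np.array([1.0, 1.0])})

# The fix: pass constraints=cons. The start point (4, 6) is feasible.
res = minimize(func, (4, 6), args=(-1.0,), jac=func_deriv,
               method='SLSQP', constraints=cons,
               options={'disp': True})

print(res.x)  # should be approximately [5. 5.]

SLSQP is the right method choice here: of the methods exposed through minimize, it is one of the few that handles equality constraints directly.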