From ndbecker2 at gmail.com Mon Jun 6 15:25:31 2016 From: ndbecker2 at gmail.com (Neal Becker) Date: Mon, 06 Jun 2016 15:25:31 -0400 Subject: [SciPy-User] plot sos (freqz for sos) filter Message-ID: Happy to see sos format supported. Getting the response of the sos design requires a little effort. A nice addition would be for freqz to directly support an sos input. Here's some code which I'm using for now: sos = signal.ellip (8, 0.5, 80, 0.5 * 500e3/(sps*250e6), output='sos') ba = [signal.sos2tf(s[np.newaxis,:]) for s in sos] ws = np.pi * np.logspace (-4, 0, num=500) wh = [signal.freqz (_[0],_[1],worN=ws) for _ in ba] plt.semilogx (wh[0][0]/np.pi, np.clip (20*np.log10(np.abs(np.prod ([_[1] for _ in wh], axis=0))),-100,0)) From warren.weckesser at gmail.com Mon Jun 6 17:00:59 2016 From: warren.weckesser at gmail.com (Warren Weckesser) Date: Mon, 6 Jun 2016 17:00:59 -0400 Subject: [SciPy-User] plot sos (freqz for sos) filter In-Reply-To: References: Message-ID: On Mon, Jun 6, 2016 at 3:25 PM, Neal Becker wrote: > Happy to see sos format supported. Getting the response of the sos design > requires a little effort. A nice addition would be for freqz to directly > support an sos input. > > Here's some code which I'm using for now: > > sos = signal.ellip (8, 0.5, 80, 0.5 * 500e3/(sps*250e6), output='sos') > ba = [signal.sos2tf(s[np.newaxis,:]) for s in sos] > ws = np.pi * np.logspace (-4, 0, num=500) > wh = [signal.freqz (_[0],_[1],worN=ws) for _ in ba] > plt.semilogx (wh[0][0]/np.pi, np.clip (20*np.log10(np.abs(np.prod ([_[1] > for > _ in wh], axis=0))),-100,0)) > > > I started an implementation of 'sosfreqz' here: https://github.com/scipy/scipy/pull/4465 The implementation is simple--just repeated calls to freqz--but the API needs work. 
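The repeated-freqz idea can also be sketched with plain numpy, by multiplying the per-section biquad responses directly; `sos_freq_response` below is a hypothetical helper name for this sketch, not the API of that pull request:

```python
import numpy as np

def sos_freq_response(sos, w):
    """Frequency response of a cascade of second-order sections.

    Each row of `sos` is (b0, b1, b2, a0, a1, a2); the cascade response is
    the product of the per-section responses evaluated at z = exp(1j*w).
    """
    sos = np.atleast_2d(np.asarray(sos, dtype=float))
    zinv = np.exp(-1j * np.asarray(w, dtype=float))  # z**-1 on the unit circle
    h = np.ones_like(zinv, dtype=complex)
    for b0, b1, b2, a0, a1, a2 in sos:
        h *= (b0 + b1 * zinv + b2 * zinv**2) / (a0 + a1 * zinv + a2 * zinv**2)
    return h
```

With such a helper the plot above reduces to a single call, e.g. `h = sos_freq_response(sos, ws)` followed by `plt.semilogx(ws/np.pi, 20*np.log10(np.abs(h)))`.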
I've been away from nontrivial scipy development for the last six months or so, and at the moment there are a couple of other pull requests with higher priority that I will be working on for the 0.18 release, so I probably won't get back to sosfreqz for a while. There are several active developers working on scipy.signal these days, and I suspect if you poke the right ones, someone might take over that work or start a new pull request (which would be fine with me!).

Warren

> > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > https://mail.scipy.org/mailman/listinfo/scipy-user > -------------- next part -------------- An HTML attachment was scrubbed... URL:

From ndbecker2 at gmail.com Tue Jun 7 07:31:16 2016 From: ndbecker2 at gmail.com (Neal Becker) Date: Tue, 07 Jun 2016 07:31:16 -0400 Subject: [SciPy-User] filtfilt for sos? Message-ID:

I'm wondering about applying filtfilt to sos sections. Would it be reasonable to simply apply filtfilt to each 2nd-order section, feeding the output of the 1st into the 2nd? My guess is that isn't correct, and that filtfilt needs to be modified to accept a cascaded filter (instead of a single filter section specified by a, b).

From pierre.debuyl at chem.kuleuven.be Tue Jun 7 09:59:13 2016 From: pierre.debuyl at chem.kuleuven.be (Pierre de Buyl) Date: Tue, 7 Jun 2016 15:59:13 +0200 Subject: [SciPy-User] [Numpy-discussion] EuroSciPy 2016 In-Reply-To: <20160531130523.GN12938@pi-x230> References: <20160531130523.GN12938@pi-x230> Message-ID: <20160607135913.GR1738@pi-x230>

Dear NumPy and SciPy communities,

On Tue, May 31, 2016 at 03:05:23PM +0200, Pierre de Buyl wrote: > EuroSciPy 2016 takes place in Erlangen, Germany, from the 23rd to the 27th of August > and consists of two days of tutorials (beginner and advanced tracks) and two > days of conference representing many fields of science, with a focus on Python > tools for science. A day of sprints follows (sprints TBA).
> > The keynote speakers are Gaël Varoquaux and Abby Cabunoc Mayes and we can expect > a rich tutorial and scientific program! Videos from previous years are available > at https://www.youtube.com/playlist?list=PLYx7XA2nY5GeQCCugyvtnHMVLdhYlrRxH and > https://www.youtube.com/playlist?list=PLYx7XA2nY5Gcpabmu61kKcToLz0FapmHu > > Visit us, register and submit an abstract on our website! > https://www.euroscipy.org/2016/

EuroSciPy 2016 has extended the deadline for submitting contributions! You have until the 19th of June to submit a talk/poster/tutorial at https://www.euroscipy.org/2016/

SciPythonic regards, The EuroSciPy 2016 team

From ndbecker2 at gmail.com Tue Jun 7 15:29:41 2016 From: ndbecker2 at gmail.com (Neal Becker) Date: Tue, 07 Jun 2016 15:29:41 -0400 Subject: [SciPy-User] problem with signal.group_delay Message-ID:

Here is my test code (opt.loop_bw = 100e3, sps=2):

from scipy import signal
bw = opt.loop_bw/(sps*250e6)
sos = signal.ellip (8, 0.5, 80, 0.5 * bw, output='sos')
bas = [signal.sos2tf(s[np.newaxis,:]) for s in sos]
## my group delay code
from group_delay import group_delay2  # my group_delay code
delays = [group_delay2 (ba[0], ba[1])[0] for ba in bas]
## scipy group delay code
delays2 = [signal.group_delay ((ba[0], ba[1]), w=1e-3) for ba in bas]
## print ('delays:', delays)
print ('delay:', sum(delays))
delay = sum(delays)

The scipy group delay code gives me:

/home/nbecker/anaconda3/envs/py35/lib/python3.5/site-packages/scipy/signal/filter_design.py:343: UserWarning: The group delay is singular at frequencies [0.001], setting to 0 format(", ".join("{0:.3f}".format(ws) for ws in w[singular]))

I have my own group delay code, which I wrote based on https://ccrma.stanford.edu/~jos/fp/Numerical_Computation_Group_Delay.html This seems to give a correct result. My code is attached. It uses my own wrapper around fftw for the fft, but of course you can use whatever you like.
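The same DFT identity can be sketched with numpy's built-in FFT, for readers without the fftw wrapper; `group_delay_fft` is a made-up name for this sketch:

```python
import numpy as np

def group_delay_fft(h, nfft=256):
    """Group delay of an impulse response h[n] via the DFT identity
    gd(w) = Re[ DFT(n * h[n]) / DFT(h[n]) ], as in the J.O. Smith reference."""
    h = np.asarray(h, dtype=complex)
    padded = np.zeros(nfft, dtype=complex)
    padded[:len(h)] = h
    num = np.fft.fft(np.arange(nfft) * padded)
    den = np.fft.fft(padded)
    tiny = np.abs(den) < 1e-50      # guard the division at spectral nulls
    num[tiny], den[tiny] = 0.0, 1.0
    return (num / den).real
```

For a pure k-sample delay this returns k at every frequency bin, which is a handy sanity check.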
,----[ /home/nbecker/sigproc/group_delay.py ]
| ## delay = real[dft(n*h(n))/dft(h(n))]
| '''https://ccrma-www.stanford.edu/~jos/filters/Numerical_Computation_Group_Delay.html'''
|
| import numpy as np
| from numpy_fncs import rotate
|
| def group_delay (a, fftsize=256):
|     from fft3 import fft, FORWARD
|     the_fft = fft (fftsize, FORWARD)
|     if (len(a) < the_fft.size):
|         padded = np.concatenate ((a, np.zeros (the_fft.size-len(a), dtype=complex)))
|     else:
|         padded = a
|     ## num = rotate (the_fft (np.arange (the_fft.size) * padded), the_fft.size/2)
|     ## den = rotate (the_fft (padded), the_fft.size/2)
|     num = the_fft (np.arange (the_fft.size) * padded)
|     den = the_fft (padded)
|     poles = abs (den) < 1e-50
|     num[poles] = 0
|     den[poles] = 1
|     delay = (num / den).real
|     return delay
|
| def group_delay2 (a, b, fftsize=256):
|     c = np.convolve (a, np.conjugate (b)[::-1])
|     return group_delay (c) - len (b)
|
| if __name__ == "__main__":
|     ## a = np.array ((0,0,1,0,0),dtype=complex)
|     ## d = group_delay (a)
|     ## print d
|     ## b = np.array ((1,0),dtype=complex)
|     ## print group_delay2 (a, b)
|     from scipy.signal import butter
|     a,b = butter (7, .4)
|     ##a,b = ((0,1), (1,))
|     d = group_delay2 (a, b)
|     from pylab import plot, show
|     plot (np.linspace (0, 1, len(d)), d)
|     show()
`----

From cosmicsteve at gmail.com Wed Jun 8 06:50:07 2016 From: cosmicsteve at gmail.com (Steven Ehlert) Date: Wed, 8 Jun 2016 05:50:07 -0500 Subject: [SciPy-User] Creating a Custom PDF with Arguments Using Scipy Stats Message-ID:

Hi all-

I am trying to create a custom PDF with arguments for some statistical analysis I am doing. I want to have a power law with an exponential cutoff at low values: mathematically, this looks like

P(m) = N * m**(-index) * (1 - exp((-m**2)/m0**2)) with three parameters: a normalization (N), an index (index), and a scale value (m0).
I have tried to produce some code that creates this pdf, and while the pdf and cdf functions appear to be working as I hoped, when I try to generate random values they don't follow this distribution at all, and I am not sure why. It seems to be drawing far more heavily from the low end of the curve (which is explicitly suppressed in the pdf).

Can you suggest the changes I need to make to try and figure out why my code isn't working as planned? My code snippet is below:

from scipy import stats
import numpy as np
from matplotlib import pyplot as plt

class my_pdf(stats.rv_continuous):
    def _pdf(self,x,norm,index,m0):
        return norm*x**(-1*index) * (1. -np.exp((-1*x**2)/m0**2) )

my_cv= my_pdf(a=1e-6,b=1e-2,name="Test")

rvs=my_cv.rvs(1.0,1.7,1e-3,size=1000) #Values should be peaked around 1e-3, but histogram peaks around 1e-5...

I am really having a hard time figuring out how to create a robust and complex custom rv_continuous distribution in scipy, so any help you can provide would be most appreciated.

Cheers,

Steve

-------------- next part -------------- An HTML attachment was scrubbed... URL:

From evgeny.burovskiy at gmail.com Wed Jun 8 06:59:09 2016 From: evgeny.burovskiy at gmail.com (Evgeni Burovski) Date: Wed, 8 Jun 2016 11:59:09 +0100 Subject: [SciPy-User] Creating a Custom PDF with Arguments Using Scipy Stats In-Reply-To: References: Message-ID:

On 08.06.2016 13:50, "Steven Ehlert" wrote: > > Hi all- > > I am trying to create a custom PDF with arguments for some statistical analysis I am doing. I want to have a power law with an exponential cutoff at low values: mathematically, this looks like > > P(m) = N * m**(-index) * (1 - exp((-m**2)/m0**2)) with three parameters: a normalization (N), an index (index), and a scale value (m0).
> > > > I have tried to produce some code that creates this pdf and while the pdf and cdf functions appear to be working as I hoped, when I try to generate random values they don't follow this distribution at all, and I am not sure why. It seems to be far more heavily drawing from the low end of the curve (which are explicitly suppressed in the pdf) > > Can you suggest the changes I need to make to try and figure out why my code isn't working as planned? My code snippet is below: > > > from scipy import stats > import numpy as np > from matplotlib import pyplot as plt > > > class my_pdf(stats.rv_continuous): > def _pdf(self,x,norm,index,m0): > return norm*x**(-1*index) * (1. -np.exp((-1*x**2)/m0**2) ) > > > my_cv= my_pdf(a=1e-6,b=1e-2,name="Test") > > > rvs=my_cv.rvs(1.0,1.7,1e-3,size=1000) #Values should be peaked around 1e-3, but histogram peaks around 1e-5... > > > > I am really having a hard time figuring out how to create a robust and complex custom rv_continuous in scipy based on scipy, so any help you can provide would be most appreciated. > > > Cheers, > > Steve > > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > https://mail.scipy.org/mailman/listinfo/scipy-user > First and foremost, you need to fix the normalization: the norm constant is not a free parameter, as the pdf must integrate to unity over the support of your RV. Next, the scale parameter is handled by the framework: pdf, cdf methods all accept the `scale` argument. Thus you do not need the m0 parameter. HTH, Evgeni -------------- next part -------------- An HTML attachment was scrubbed... URL: From cosmicsteve at gmail.com Wed Jun 8 08:31:25 2016 From: cosmicsteve at gmail.com (Steven Ehlert) Date: Wed, 8 Jun 2016 07:31:25 -0500 Subject: [SciPy-User] Creating a Custom PDF with Arguments Using Scipy Stats In-Reply-To: References: Message-ID: Okay- Thank you for the insights. 
So three questions about this, as I am not an expert in the implementation. I figured that pdf normalization might play a role.

First of all, what is the best way to numerically handle the normalization? Do I need to include the integral every time I initialize the pdf? Something like this:

class my_pdf(stats.rv_continuous):
    def _pdf(self,x,index,m0):
        step=(self.b-self.a)/100.
        this_xrange=np.arange(self.a,self.b+step,step)
        this_function=this_xrange**(-1*index) * (1. -np.exp((-1*this_xrange**2)/m0**2) )
        #for x,f in zip(this_xrange,this_function): print x,f
        this_norm=1./(integrate.simps(this_function,x=this_xrange))
        #print this_norm
        return this_norm*x**(-1*index) * (1. -np.exp((-1*x**2)/m0**2) )

I don't think there is an analytic form for this integral, so I have to do it numerically, but I also have to implement it within the pdf.

And the second question I have is: how do I call the scale variable within the _pdf function? Would I simply put self.scale in place of m0 in the code above? Or is it more sophisticated than that?

Finally, while I realized these concerns when I wrote it, I didn't expect the pdf to look okay but the rvs to be so off - are there reasons to expect this behavior based on my previously incorrect normalization? It is not obvious to me that there would be.

Thank you for your help thus far and I look forward to learning more about custom statistical distributions in SciPy.

Steven Ehlert

On Wed, Jun 8, 2016 at 5:59 AM, Evgeni Burovski wrote: > > On 08.06.2016 13:50, "Steven Ehlert" wrote: > > > > > Hi all- > > > > I am trying to create a custom PDF with arguments for some statistical > analysis I am doing. I want to have a power law with an exponential cutoff > at low values: mathematically, this looks like > > > > P(m) = N * m**(-index) * (1 - exp((-m**2)/m0**2)) with three parameters: > a normalization (N), an index (index), and a scale value (m0).
> > > > > > > > I have tried to produce some code that creates this pdf and while the > pdf and cdf functions appear to be working as I hoped, when I try to > generate random values they don't follow this distribution at all, and I am > not sure why. It seems to be far more heavily drawing from the low end of > the curve (which are explicitly suppressed in the pdf) > > > > Can you suggest the changes I need to make to try and figure out why my > code isn't working as planned? My code snippet is below: > > > > > > from scipy import stats > > import numpy as np > > from matplotlib import pyplot as plt > > > > > > class my_pdf(stats.rv_continuous): > > def _pdf(self,x,norm,index,m0): > > return norm*x**(-1*index) * (1. -np.exp((-1*x**2)/m0**2) ) > > > > > > my_cv= my_pdf(a=1e-6,b=1e-2,name="Test") > > > > > > rvs=my_cv.rvs(1.0,1.7,1e-3,size=1000) #Values should be peaked around > 1e-3, but histogram peaks around 1e-5... > > > > > > > > I am really having a hard time figuring out how to create a robust and > complex custom rv_continuous in scipy based on scipy, so any help you can > provide would be most appreciated. > > > > > > Cheers, > > > > Steve > > > > _______________________________________________ > > SciPy-User mailing list > > SciPy-User at scipy.org > > https://mail.scipy.org/mailman/listinfo/scipy-user > > > > First and foremost, you need to fix the normalization: the norm constant > is not a free parameter, as the pdf must integrate to unity over the > support of your RV. > > Next, the scale parameter is handled by the framework: pdf, cdf methods > all accept the `scale` argument. Thus you do not need the m0 parameter. > > HTH, > > Evgeni > > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > https://mail.scipy.org/mailman/listinfo/scipy-user > > -------------- next part -------------- An HTML attachment was scrubbed... 
URL:

From evgeny.burovskiy at gmail.com Wed Jun 8 09:46:33 2016 From: evgeny.burovskiy at gmail.com (Evgeni Burovski) Date: Wed, 8 Jun 2016 14:46:33 +0100 Subject: [SciPy-User] Creating a Custom PDF with Arguments Using Scipy Stats In-Reply-To: References: Message-ID:

On Wed, Jun 8, 2016 at 1:31 PM, Steven Ehlert wrote: > Okay- > > Thank you for the insights. > > So three questions about this, as I am not an expert in the implementation. > I figured that pdf normalization may play a role > > > First of all, what is the best way to numerically handle the normalization? > Do I need to include the integral every time I initialize the pdf? Something > like this: > class my_pdf(stats.rv_continuous): > def _pdf(self,x,index,m0): > step=(self.b-self.a)/100. > > this_xrange=np.arange(self.a,self.b+step,step) > > this_function=this_xrange**(-1*index) * (1. > -np.exp((-1*this_xrange**2)/m0**2) ) > > > #for x,f in zip(this_xrange,this_function): print x,f > this_norm=1./(integrate.simps(this_function,x=this_xrange)) > #print this_norm > > return this_norm*x**(-1*index) * (1. -np.exp((-1*x**2)/m0**2) ) >

Yes, this would need to compute it at each call to pdf. The integral can be expressed in terms of gamma functions (do a substitution of y = x**2 / 2 in the integral). Alternatively, you can do the integral numerically; likely `quad` is better than `simps`.

> I don't think there is an analytic form for this integral, so I have to do > it numerically, but I also have to implement it with the pdf. > > And the second question I have is: how do I call the scale variable within > the _pdf function? Would I simply put in self.scale in place of m0 in the > code above? Or is it more sophisticated than that?

It's easy: you don't need to :-). distribution.pdf(x, loc=11, scale=42) calls distribution._pdf(y) with y = (x - loc) / scale.

This way, your `_pdf` only needs to deal with a scaled variable. There is no self.scale --- it's just a parameter to any call to pdf, cdf, sf etc.
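Putting both of these points together, a hedged sketch of the corrected subclass might look as follows; the class name and the [0.1, 10] support are invented for illustration, and the cutoff scale is fixed to 1 so that the framework's `scale` argument can move it:

```python
import numpy as np
from scipy import stats, integrate

class PowerLawCutoff(stats.rv_continuous):
    """Power law with an exponential low-end cutoff, x**(-index) * (1 - exp(-x**2)),
    normalized numerically so the pdf integrates to 1 over [self.a, self.b].
    The m0 parameter is dropped; the framework's `scale` argument plays that role."""

    def _unnormalized(self, x, index):
        return x ** (-index) * (1.0 - np.exp(-x ** 2))

    def _pdf(self, x, index):
        # Recompute the normalization at each call so the pdf integrates to 1.
        norm, _ = integrate.quad(self._unnormalized, self.a, self.b, args=(index,))
        return self._unnormalized(x, index) / norm

dist = PowerLawCutoff(a=0.1, b=10.0, name="plcutoff")
```

Sampling then goes through the generic numeric inversion, e.g. `dist.rvs(1.7, size=100)`, which is slow but should now follow the intended shape; supplying an analytic `_cdf` would speed this up considerably.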
> Finally, while I realized these concerns when I wrote it, but I didn't > expect the pdf to look okay but the rvs to be so off- are there reasons to > expect this behavior based on my previously incorrect normalization? It is > not obvious to me that it would.

PDF needs to integrate to one. If the normalization is off, it still might look "okay" because the shape is the same, but it's still wrong. Drawing random variates assumes that the cdf is a real distribution function. From experience, having ok-looking PDF and nonsense for the random variates is a typical sign that the normalization might be off.

> Thank you for your help thus far and I look forward to learning more about > custom statistical distributions in SciPy. > > Steven Ehlert > > On Wed, Jun 8, 2016 at 5:59 AM, Evgeni Burovski > wrote: >> >> On 08.06.2016 13:50, "Steven Ehlert" >> wrote: >> >> > >> > Hi all- >> > >> > I am trying to create a custom PDF with arguments for some statistical >> > analysis I am doing. I want to have a power law with an exponential cutoff >> > at low values: mathematically, this looks like >> > >> > P(m) = N * m**(-index) * (1 - exp((-m**2)/m0**2) with three parameters: >> > a normalization (N), an index (index), and a scale value (m0). >> > >> > >> > >> > I have tried to produce some code that creates this pdf and while the >> > pdf and cdf functions appear to be working as I hoped, when I try to >> > generate random values they don't follow this distribution at all, and I am >> > not sure why. It seems to be far more heavily drawing from the low end of >> > the curve (which are explicitly suppressed in the pdf) >> > >> > Can you suggest the changes I need to make to try and figure out why my >> > code isn't working as planned?
My code snippet is below: >> > >> > >> > from scipy import stats >> > import numpy as np >> > from matplotlib import pyplot as plt >> > >> > >> > class my_pdf(stats.rv_continuous): >> > def _pdf(self,x,norm,index,m0): >> > return norm*x**(-1*index) * (1. -np.exp((-1*x**2)/m0**2) ) >> > >> > >> > my_cv= my_pdf(a=1e-6,b=1e-2,name="Test") >> > >> > >> > rvs=my_cv.rvs(1.0,1.7,1e-3,size=1000) #Values should be peaked around >> > 1e-3, but histogram peaks around 1e-5... >> > >> > >> > >> > I am really having a hard time figuring out how to create a robust and >> > complex custom rv_continuous in scipy based on scipy, so any help you can >> > provide would be most appreciated. >> > >> > >> > Cheers, >> > >> > Steve >> > >> > _______________________________________________ >> > SciPy-User mailing list >> > SciPy-User at scipy.org >> > https://mail.scipy.org/mailman/listinfo/scipy-user >> > >> >> First and foremost, you need to fix the normalization: the norm constant >> is not a free parameter, as the pdf must integrate to unity over the support >> of your RV. >> >> Next, the scale parameter is handled by the framework: pdf, cdf methods >> all accept the `scale` argument. Thus you do not need the m0 parameter. >> >> HTH, >> >> Evgeni >> >> >> _______________________________________________ >> SciPy-User mailing list >> SciPy-User at scipy.org >> https://mail.scipy.org/mailman/listinfo/scipy-user >> > > > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > https://mail.scipy.org/mailman/listinfo/scipy-user > From kevin.gullikson at gmail.com Wed Jun 8 09:54:09 2016 From: kevin.gullikson at gmail.com (Kevin Gullikson) Date: Wed, 08 Jun 2016 13:54:09 +0000 Subject: [SciPy-User] Creating a Custom PDF with Arguments Using Scipy Stats In-Reply-To: References: Message-ID: Explicitly defining the cdf would probably help too. 
If it is analytically calculable, defining the inverse cdf would almost definitely help since that is how the random numbers are being generated. https://en.wikipedia.org/wiki/Inverse_transform_sampling What you have now only defines the pdf. The base class automagically integrates to get cdf, and might do some kind of numerical parameter optimization to get to the inverse cdf (not sure about that). If nothing else, defining the cdf/inverse cdf would make things faster. On Wed, Jun 8, 2016 at 8:46 AM Evgeni Burovski wrote: > On Wed, Jun 8, 2016 at 1:31 PM, Steven Ehlert > wrote: > > Okay- > > > > Thank you for the insights. > > > > So three questions about this, as I am not an expert in the > implementation. > > I figured that pdf normalization may play a role > > > > > > First of all, what is the best way to numerically handle the > normalization? > > Do I need to include the integral every time I initialize the pdf? > Something > > like this: > > > class my_pdf(stats.rv_continuous): > > def _pdf(self,x,index,m0): > > step=(self.b-self.a)/100. > > > > this_xrange=np.arange(self.a,self.b+step,step) > > > > this_function=this_xrange**(-1*index) * (1. > > -np.exp((-1*this_xrange**2)/m0**2) ) > > > > > > #for x,f in zip(this_xrange,this_function): print x,f > > this_norm=1./(integrate.simps(this_function,x=this_xrange)) > > #print this_norm > > > > return this_norm*x**(-1*index) * (1. -np.exp((-1*x**2)/m0**2) ) > > > > Yes, this would need to compute it at each call to pdf. > The integral can be expressed in terms of gamma functions (do a > substitution of y = x**2 /2 in the integral). Alternatively, you can > do the integral numerically, likely `quad` is better than `simps`. > > > > > I don't think there is an analytic form for this integral, so I have to > do > > it numerically, but I also have to implement it with the pdf. > > > > And the second question I have is: how do I call the scale variable > within > > the _pdf function? 
Would I simply put in self.scale in place of m0 in the > > code above? Or is it more sophisticated than that? > > It's easy: you don't need to :-). > distribution.pdf(x, loc=11, scale=42) calls distribution._pdf(y) with > y = (x - loc) / scale. > > This way, you `_pdf` only needs to deal with a scaled variable. There > is no self.scale --- it's just a parameter to any call to pdf, cdf, sf > etc. > > > > > Finally, while I realized these concerns when I wrote it, but I didn't > > expect the pdf to look okay but the rvs to be so off- are there reasons > to > > expect this behavior based on my previously incorrect normalization? It > is > > not obvious to me that it would. > > PDF needs to integrate to one. If the normalization is off, it still > might look "okay" because the shape is the same, but it's still wrong. > Drawing random variates assumes that the cdf is a real distribution > function. From experience, having ok-looking PDF and nonsense for the > random variates is a typical sign that the normalization might be off. > > > > Thank you for your help thus far and I look forward to learning more > about > > custom statistical distributions in SciPy. > > > > > > Steven Ehlert > > > > > > On Wed, Jun 8, 2016 at 5:59 AM, Evgeni Burovski < > evgeny.burovskiy at gmail.com> > > wrote: > >> > >> > >> On 08.06.2016 13:50, "Steven Ehlert" > >> wrote: > >> > >> > >> > > >> > Hi all- > >> > > >> > I am trying to create a custom PDF with arguments for some statistical > >> > analysis I am doing. I want to have a power law with an exponential > cutoff > >> > at low values: mathematically, this looks like > >> > > >> > P(m) = N * m**(-index) * (1 - exp((-m**2)/m0**2) with three > parameters: > >> > a normalization (N), an index (index), and a scale value (m0).
> >> > > >> > > >> > > >> > I have tried to produce some code that creates this pdf and while the > >> > pdf and cdf functions appear to be working as I hoped, when I try to > >> > generate random values they don't follow this distribution at all, > and I am > >> > not sure why. It seems to be far more heavily drawing from the low > end of > >> > the curve (which are explicitly suppressed in the pdf) > >> > > >> > Can you suggest the changes I need to make to try and figure out why > my > >> > code isn't working as planned? My code snippet is below: > >> > > >> > > >> > from scipy import stats > >> > import numpy as np > >> > from matplotlib import pyplot as plt > >> > > >> > > >> > class my_pdf(stats.rv_continuous): > >> > def _pdf(self,x,norm,index,m0): > >> > return norm*x**(-1*index) * (1. -np.exp((-1*x**2)/m0**2) ) > >> > > >> > > >> > my_cv= my_pdf(a=1e-6,b=1e-2,name="Test") > >> > > >> > > >> > rvs=my_cv.rvs(1.0,1.7,1e-3,size=1000) #Values should be peaked around > >> > 1e-3, but histogram peaks around 1e-5... > >> > > >> > > >> > > >> > I am really having a hard time figuring out how to create a robust and > >> > complex custom rv_continuous in scipy based on scipy, so any help you > can > >> > provide would be most appreciated. > >> > > >> > > >> > Cheers, > >> > > >> > Steve > >> > > >> > _______________________________________________ > >> > SciPy-User mailing list > >> > SciPy-User at scipy.org > >> > https://mail.scipy.org/mailman/listinfo/scipy-user > >> > > >> > >> First and foremost, you need to fix the normalization: the norm constant > >> is not a free parameter, as the pdf must integrate to unity over the > support > >> of your RV. > >> > >> Next, the scale parameter is handled by the framework: pdf, cdf methods > >> all accept the `scale` argument. Thus you do not need the m0 parameter. 
> >> > >> HTH, > >> > >> Evgeni > >> > >> > >> _______________________________________________ > >> SciPy-User mailing list > >> SciPy-User at scipy.org > >> https://mail.scipy.org/mailman/listinfo/scipy-user > >> > > > > > > _______________________________________________ > > SciPy-User mailing list > > SciPy-User at scipy.org > > https://mail.scipy.org/mailman/listinfo/scipy-user > > > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > https://mail.scipy.org/mailman/listinfo/scipy-user > -- Kevin Gullikson Data Scientist Spark Cognition -------------- next part -------------- An HTML attachment was scrubbed... URL: From moy.misc at gmail.com Fri Jun 10 15:02:02 2016 From: moy.misc at gmail.com (Moy) Date: Fri, 10 Jun 2016 15:02:02 -0400 Subject: [SciPy-User] KS 2-sample p-value Message-ID: <575B0EAA.10205@gmail.com> Hi folks. I'm curious about the p-value computation for the 2-sample KS test (lines 4298-4302 here [1], link below). Can someone please give me a reference for this, in particular for the mysterious "en + 0.12 + 0.11 / en" piece? I've looked at the MIT OpenCourseWare notes on KS [2] and at the SciPy docs for kstwobign but I'm still totally confuzzled by that piece. Thanks for your time! Moy 1: https://github.com/scipy/scipy/blob/master/scipy/stats/stats.py#L4298 2: http://ocw.mit.edu/courses/mathematics/18-443-statistics-for-applications-fall-2006/lecture-notes/lecture14.pdf From josef.pktd at gmail.com Fri Jun 10 16:06:15 2016 From: josef.pktd at gmail.com (josef.pktd at gmail.com) Date: Fri, 10 Jun 2016 16:06:15 -0400 Subject: [SciPy-User] KS 2-sample p-value In-Reply-To: <575B0EAA.10205@gmail.com> References: <575B0EAA.10205@gmail.com> Message-ID: On Fri, Jun 10, 2016 at 3:02 PM, Moy wrote: > Hi folks. > > I'm curious about the p-value computation for the 2-sample KS test > (lines 4298-4302 here [1], link below). 
> > Can someone please give me a reference for this, in particular for the > mysterious "en + 0.12 + 0.11 / en" piece? I've looked at the MIT > OpenCourseWare notes on KS [2] and at the SciPy docs for kstwobign but > I'm still totally confuzzled by that piece. > > Thanks for your time! > > Moy > > 1: https://github.com/scipy/scipy/blob/master/scipy/stats/stats.py#L4298 > 2: > > http://ocw.mit.edu/courses/mathematics/18-443-statistics-for-applications-fall-2006/lecture-notes/lecture14.pdf

I don't think I ever looked at the details for that, or don't remember. For the similar one-sample tests, Stephens had several articles with similar small-sample corrections as a function of the number of observations.

I don't find anything related in a brief google search. My guess would be that this is from an old (older than 30 or 40 years) study that found that the fractional polynomial provides a good approximation.

Some of the nonparametric tests in scipy.stats also referred to older (maybe 1980s) books on nonparametric testing to which I never had access.

Sometimes another statistics package, R, SAS, Stata, has a relevant original reference in the documentation.

Josef

> _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > https://mail.scipy.org/mailman/listinfo/scipy-user > -------------- next part -------------- An HTML attachment was scrubbed... URL:

From josef.pktd at gmail.com Fri Jun 10 16:31:23 2016 From: josef.pktd at gmail.com (josef.pktd at gmail.com) Date: Fri, 10 Jun 2016 16:31:23 -0400 Subject: [SciPy-User] KS 2-sample p-value In-Reply-To: References: <575B0EAA.10205@gmail.com> Message-ID:

On Fri, Jun 10, 2016 at 4:06 PM, wrote: > > > On Fri, Jun 10, 2016 at 3:02 PM, Moy wrote: >> Hi folks. >> >> I'm curious about the p-value computation for the 2-sample KS test >> (lines 4298-4302 here [1], link below).
>> >> Can someone please give me a reference for this, in particular for the >> mysterious "en + 0.12 + 0.11 / en" piece? I've looked at the MIT >> OpenCourseWare notes on KS [2] and at the SciPy docs for kstwobign but >> I'm still totally confuzzled by that piece. >> >> Thanks for your time! >> >> Moy >> >> 1: https://github.com/scipy/scipy/blob/master/scipy/stats/stats.py#L4298 >> 2: >> >> http://ocw.mit.edu/courses/mathematics/18-443-statistics-for-applications-fall-2006/lecture-notes/lecture14.pdf > > > > I don't think I ever looked at the details for that, or don't remember. > For similar one-sample test Stephens had several articles with similar > small sample corrections as a function of the number of observations. > > I don't find anything related in a brief google search. My guess would be > that this is from an old (older than 30 or 40 years) study that found that > the fractional polynomial provides a good approximation. > Stephens 1970 http://www.jstor.org/stable/2984408 has this form of the polynomial with 0.11 and 0.12 coefficients, but AFAICS from a quick browse and from what I remember of his other articles it's for one-sample tests, but maybe he has somewhere a section on two-sample tests. The best reference for Stephens is a chapter in a book that summarizes his earlier articles, but I don't remember where I have a copy of it. Josef > > Some of the nonparametric tests in scipy.stats also referred to older > (maybe 1980s) books on nonparametric testing where I never had access to. > > Sometimes another statistics package, R, SAS, Stata, has a relevant > original reference in the documentation. > > > Josef > > >> >> >> >> _______________________________________________ >> SciPy-User mailing list >> SciPy-User at scipy.org >> https://mail.scipy.org/mailman/listinfo/scipy-user >> > > -------------- next part -------------- An HTML attachment was scrubbed... 
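For reference, the computation being asked about amounts to the following sketch; the helper name is made up, and the correction factor is exactly the `en + 0.12 + 0.11 / en` expression from the scipy source referenced in [1]:

```python
import numpy as np
from scipy.stats import kstwobign

def ks2samp_asymptotic_pvalue(d, n1, n2):
    """Asymptotic two-sample KS p-value with the small-sample correction
    factor en + 0.12 + 0.11/en discussed in this thread."""
    en = np.sqrt(n1 * n2 / float(n1 + n2))   # effective sample size
    return kstwobign.sf((en + 0.12 + 0.11 / en) * d)
```

The `kstwobign` distribution here is the asymptotic distribution of the scaled KS statistic; the polynomial in `en` is the finite-sample adjustment whose provenance is being traced in this thread.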
URL:

From moy.misc at gmail.com Sat Jun 11 03:53:39 2016 From: moy.misc at gmail.com (Moy) Date: Sat, 11 Jun 2016 03:53:39 -0400 Subject: [SciPy-User] KS 2-sample p-value In-Reply-To: References: <575B0EAA.10205@gmail.com> Message-ID: <575BC383.5090608@gmail.com>

> Stephens 1970 http://www.jstor.org/stable/2984408 > has this form of the polynomial with 0.11 and 0.12 coefficients, but > AFAICS from a quick browse and from what I remember of his other > articles it's for one-sample tests, but maybe he has somewhere a > section on two-sample tests. > The best reference for Stephens is a chapter in a book that summarizes > his earlier articles, but I don't remember where I have a copy of it.

Thanks, Josef, this is very helpful.

Best, Moy

From evgeny.burovskiy at gmail.com Mon Jun 20 09:31:44 2016 From: evgeny.burovskiy at gmail.com (Evgeni Burovski) Date: Mon, 20 Jun 2016 14:31:44 +0100 Subject: [SciPy-User] scipy 0.18 release candidate 1 Message-ID:

Hi,

I'm pleased to announce the availability of the first release candidate for scipy 0.18.0. Please try this release and report any issues on Github tracker, https://github.com/scipy/scipy, or scipy-dev mailing list.
Source tarballs and release notes are available from Github releases,
https://github.com/scipy/scipy/releases/tag/v0.18.0rc1

Please note that this is a source-only release. We do not provide Windows
binaries for this release. OS X and Linux wheels will be provided for the
final release.

The current release schedule is

27 June: rc2 (if necessary)
11 July: final release

Thanks to everyone who contributed to this release!

Cheers,

Evgeni

A part of the release notes follows:

==========================
SciPy 0.18.0 Release Notes
==========================

.. note:: Scipy 0.18.0 is not released yet!

.. contents::

SciPy 0.18.0 is the culmination of 6 months of hard work. It contains
many new features, numerous bug-fixes, improved test coverage and better
documentation. There have been a number of deprecations and API changes
in this release, which are documented below. All users are encouraged to
upgrade to this release, as there are a large number of bug-fixes and
optimizations. Moreover, our development attention will now shift to
bug-fix releases on the 0.19.x branch, and on adding new features on the
master branch.

This release requires Python 2.7 or 3.4-3.5 and NumPy 1.7.1 or greater.

Highlights of this release include:

- A new ODE solver for two-point boundary value problems,
  `scipy.integrate.solve_bvp`.
- A new class, `CubicSpline`, for cubic spline interpolation of data.
- N-dimensional tensor product polynomials, `scipy.interpolate.NdPPoly`.
- Spherical Voronoi diagrams, `scipy.spatial.SphericalVoronoi`.
- Support for discrete-time linear systems, `scipy.signal.dlti`.

New features
============

`scipy.integrate` improvements
------------------------------

A solver of two-point boundary value problems for ODE systems has been
implemented in `scipy.integrate.solve_bvp`. The solver allows for
non-separated boundary conditions, unknown parameters and certain
singular terms. It finds a C1 continuous solution using a fourth-order
collocation algorithm.
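As a quick illustration of the new BVP solver, here is a minimal sketch on a textbook problem (the equation, mesh size, and boundary values are illustrative choices, not taken from the release notes):

```python
import numpy as np
from scipy.integrate import solve_bvp

# Solve y'' + y = 0 with y(0) = 0, y(pi/2) = 1; the exact solution is sin(x).
# Rewrite as a first-order system: y0' = y1, y1' = -y0.
def fun(x, y):
    return np.vstack([y[1], -y[0]])

def bc(ya, yb):
    # Residuals of the boundary conditions.
    return np.array([ya[0], yb[0] - 1.0])

x = np.linspace(0, np.pi / 2, 10)   # initial mesh
y0 = np.zeros((2, x.size))          # initial guess for the solution
sol = solve_bvp(fun, bc, x, y0)

# sol.sol is a C1 continuous interpolant on the final mesh.
print(sol.sol(np.pi / 4)[0])  # close to sin(pi/4) ~ 0.7071
```

For this linear problem the collocation iteration converges from the trivial initial guess; harder nonlinear problems may need a better guess.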
`scipy.interpolate` improvements
--------------------------------

Cubic spline interpolation is now available via
`scipy.interpolate.CubicSpline`. This class represents a piecewise cubic
polynomial that passes through given points and is C2 continuous. It is
represented in the standard polynomial basis on each segment.

A representation of n-dimensional tensor product piecewise polynomials
is available as the `scipy.interpolate.NdPPoly` class.

Univariate piecewise polynomial classes, `PPoly` and `BPoly`, can now be
evaluated on periodic domains. Use the ``extrapolate="periodic"`` keyword
argument for this.

`scipy.fftpack` improvements
----------------------------

The `scipy.fftpack.next_fast_len` function computes the next "regular"
number for FFTPACK. Padding the input to this length can give a
significant performance increase for `scipy.fftpack.fft`.

`scipy.signal` improvements
---------------------------

Resampling using polyphase filtering has been implemented in the function
`scipy.signal.resample_poly`. This method upsamples a signal, applies a
zero-phase low-pass FIR filter, and downsamples using
`scipy.signal.upfirdn` (which is also new in 0.18.0). This method can be
faster than FFT-based filtering provided by `scipy.signal.resample` for
some signals.

`scipy.signal.firls`, which constructs FIR filters using least-squares
error minimization, was added.

`scipy.signal.sosfiltfilt`, which does forward-backward filtering like
`scipy.signal.filtfilt` but for second-order sections, was added.

Discrete-time linear systems
~~~~~~~~~~~~~~~~~~~~~~~~~~~~

`scipy.signal.dlti` provides an implementation of discrete-time linear
systems. Accordingly, the `StateSpace`, `TransferFunction` and
`ZerosPolesGain` classes have learned the new keyword, `dt`, which can be
used to create discrete-time instances of the corresponding system
representation.
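The new ``sosfiltfilt`` also answers the earlier question in this digest about applying ``filtfilt`` to second-order sections: the whole cascade is filtered forward and backward directly in the sos representation. A minimal sketch (the filter order, ripple, and cutoff are arbitrary illustrative values):

```python
import numpy as np
from scipy import signal

# Design an elliptic low-pass filter as second-order sections.
sos = signal.ellip(8, 0.5, 80, 0.125, output='sos')

# A noisy low-frequency sine as test input.
rng = np.random.RandomState(0)
x = np.sin(2 * np.pi * 0.01 * np.arange(1000)) + 0.1 * rng.standard_normal(1000)

# Zero-phase forward-backward filtering of the whole sos cascade at once.
y = signal.sosfiltfilt(sos, x)
print(y.shape)  # same shape as the input
```

Filtering each section with ``filtfilt`` in turn would also be zero-phase, but doing it in one ``sosfiltfilt`` call keeps the numerical advantages of the sos representation throughout.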
`scipy.sparse` improvements
---------------------------

The functions `sum`, `max`, `mean`, `min`, `transpose`, and `reshape` in
`scipy.sparse` have had their signatures augmented with additional
arguments and functionality so as to improve compatibility with
analogously defined functions in `numpy`.

Sparse matrices now have a `count_nonzero` method, which counts the
number of nonzero elements in the matrix. Unlike the `getnnz()` method
and the ``nnz`` property, which return the number of stored entries (the
length of the data attribute), this method counts the actual number of
non-zero entries in data.

`scipy.optimize` improvements
-----------------------------

The implementation of Nelder-Mead minimization,
`scipy.optimize.minimize(..., method="Nelder-Mead")`, obtained a new
keyword, `initial_simplex`, which can be used to specify the initial
simplex for the optimization process.

Initial step size selection in CG and BFGS minimizers has been improved.
We expect that this change will improve numeric stability of optimization
in some cases. See pull request gh-5536 for details.

Handling of infinite bounds in SLSQP optimization has been improved. We
expect that this change will improve numeric stability of optimization in
some cases. See pull request gh-6024 for details.

A large suite of global optimization benchmarks has been added to
``scipy/benchmarks/go_benchmark_functions``. See pull request gh-4191 for
details.

Nelder-Mead and Powell minimization will now only set defaults for
maximum iterations or function evaluations if neither limit is set by the
caller. In some cases with a slowly converging function and only one
limit set, the minimization may continue for longer than with previous
versions and so is more likely to reach convergence. See issue gh-5966.

`scipy.stats` improvements
--------------------------

The trapezoidal distribution has been implemented as `scipy.stats.trapz`.
The skew normal distribution has been implemented as
`scipy.stats.skewnorm`.
The Burr type XII distribution has been implemented as
`scipy.stats.burr12`. The three- and four-parameter kappa distributions
have been implemented as `scipy.stats.kappa3` and `scipy.stats.kappa4`,
respectively.

The new `scipy.stats.iqr` function computes the interquartile range of a
distribution.

Random matrices
~~~~~~~~~~~~~~~

`scipy.stats.special_ortho_group` and `scipy.stats.ortho_group` provide
generators of random matrices in the SO(N) and O(N) groups, respectively.
They generate matrices in the Haar distribution, the only uniform
distribution on these group manifolds.

`scipy.stats.random_correlation` provides a generator for random
correlation matrices, given specified eigenvalues.

`scipy.linalg` improvements
---------------------------

`scipy.linalg.svd` gained a new keyword argument, ``lapack_driver``.
Available drivers are ``gesdd`` (default) and ``gesvd``.

`scipy.linalg.lapack.ilaver` returns the version of the LAPACK library
SciPy links to.

`scipy.spatial` improvements
----------------------------

Boolean distances, `scipy.spatial.pdist`, have been sped up. Improvements
vary by the function and the input size. In many cases, one can expect a
speed-up of 2x--10x.

The new class `scipy.spatial.SphericalVoronoi` constructs Voronoi
diagrams on the surface of a sphere. See pull request gh-5232 for
details.

`scipy.cluster` improvements
----------------------------

A new clustering algorithm, the nearest neighbor chain algorithm, has
been implemented for `scipy.cluster.hierarchy.linkage`. As a result, one
can expect a significant algorithmic improvement (:math:`O(N^2)` instead
of :math:`O(N^3)`) for several linkage methods.

`scipy.special` improvements
----------------------------

The new function `scipy.special.loggamma` computes the principal branch
of the logarithm of the Gamma function. For real input, ``loggamma`` is
compatible with `scipy.special.gammaln`.
For complex input, it has more consistent behavior in the complex plane
and should be preferred over ``gammaln``.

Vectorized forms of spherical Bessel functions have been implemented as
`scipy.special.spherical_jn`, `scipy.special.spherical_kn`,
`scipy.special.spherical_in` and `scipy.special.spherical_yn`. They are
recommended for use over the ``sph_*`` functions, which are now
deprecated.

Several special functions have been extended to the complex domain and/or
have seen domain/stability improvements. This includes `spence`,
`digamma`, `log1p` and several others.

Deprecated features
===================

The cross-class properties of `lti` systems have been deprecated. The
following properties/setters will raise a `DeprecationWarning`:

Name - (accessing/setting raises warning) - (setting raises warning)

* StateSpace - (`num`, `den`, `gain`) - (`zeros`, `poles`)
* TransferFunction - (`A`, `B`, `C`, `D`, `gain`) - (`zeros`, `poles`)
* ZerosPolesGain - (`A`, `B`, `C`, `D`, `num`, `den`) - ()

Spherical Bessel functions ``sph_in``, ``sph_jn``, ``sph_kn``,
``sph_yn``, ``sph_jnyn`` and ``sph_inkn`` have been deprecated in favor
of `scipy.special.spherical_jn`, ``spherical_kn``, ``spherical_yn`` and
``spherical_in``.

The following functions in `scipy.constants` are deprecated: ``C2K``,
``K2C``, ``C2F``, ``F2C``, ``F2K`` and ``K2F``. They are superseded by a
new function, `scipy.constants.convert_temperature`, that can perform all
those conversions plus to/from the Rankine temperature scale.

Backwards incompatible changes
==============================

`scipy.optimize`
----------------

The convergence criterion for ``optimize.bisect``, ``optimize.brentq``,
``optimize.brenth``, and ``optimize.ridder`` now works the same as
``numpy.allclose``.

`scipy.ndimage`
---------------

The offset in ``ndimage.interpolation.affine_transform`` is now
consistently added after the matrix is applied, independent of whether
the matrix is specified using a one-dimensional or a two-dimensional
array.
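The replacement for the deprecated `scipy.constants` temperature helpers mentioned above can be used like this (a small sketch):

```python
from scipy.constants import convert_temperature

# One function replaces C2K, K2C, C2F, F2C, F2K and K2F, and also
# handles the Rankine scale.
print(convert_temperature(100.0, 'Celsius', 'Kelvin'))     # 373.15
print(convert_temperature(32.0, 'Fahrenheit', 'Celsius'))  # 0.0
```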
`scipy.stats`
-------------

``stats.ks_2samp`` used to return nonsensical values if the input was not
real or contained nans. It now raises an exception for such inputs.

Several deprecated methods of `scipy.stats` distributions have been
removed: ``est_loc_scale``, ``vecfunc``, ``veccdf`` and
``vec_generic_moment``.

Deprecated functions ``nanmean``, ``nanstd`` and ``nanmedian`` have been
removed from `scipy.stats`. These functions were deprecated in scipy
0.15.0 in favor of their `numpy` equivalents.

A bug in the ``rvs()`` method of the distributions in `scipy.stats` has
been fixed. When arguments to ``rvs()`` were given that were shaped for
broadcasting, in many cases the returned random samples were not random.
A simple example of the problem is ``stats.norm.rvs(loc=np.zeros(10))``.
Because of the bug, that call would return 10 identical values. The bug
only affected code that relied on the broadcasting of the shape, location
and scale parameters.

The ``rvs()`` method also accepted some arguments that it should not
have. There is a potential for backwards incompatibility in cases where
``rvs()`` accepted arguments that are not, in fact, compatible with
broadcasting. An example is

    stats.gamma.rvs([2, 5, 10, 15], size=(2, 2))

The shape of the first argument is not compatible with the requested
size, but the function still returned an array with shape (2, 2). In
scipy 0.18, that call generates a ``ValueError``.

`scipy.io`
----------

`scipy.io.netcdf` masking now gives precedence to the ``_FillValue``
attribute over the ``missing_value`` attribute, if both are given. Also,
data are only treated as missing if they match one of these attributes
exactly: values that differ by roundoff from ``_FillValue`` or
``missing_value`` are no longer treated as missing values.

`scipy.interpolate`
-------------------

The `scipy.interpolate.PiecewisePolynomial` class has been removed.
It was deprecated in scipy 0.14.0, and
`scipy.interpolate.BPoly.from_derivatives` serves as a drop-in
replacement.

Other changes
=============

Scipy now uses ``setuptools`` for its builds instead of plain distutils.
This fixes usage of ``install_requires='scipy'`` in the ``setup.py``
files of projects that depend on Scipy (see Numpy issue gh-6551 for
details). It potentially affects the way that build/install methods for
Scipy itself behave, though. Please report any unexpected behavior on the
Scipy issue tracker.

PR `#6240 `__ changes the interpretation of the `maxfun` option in
`L-BFGS-B` based routines in the `scipy.optimize` module. An `L-BFGS-B`
search consists of multiple iterations, with each iteration consisting of
one or more function evaluations. Whereas the old search strategy
terminated immediately upon reaching `maxfun` function evaluations, the
new strategy allows the current iteration to finish despite reaching
`maxfun`.

The bundled copy of Qhull in the `scipy.spatial` subpackage has been
upgraded to version 2015.2.

The bundled copy of ARPACK in the `scipy.sparse.linalg` subpackage has
been upgraded to arpack-ng 3.3.0.

The bundled copy of SuperLU in the `scipy.sparse` subpackage has been
upgraded to version 5.1.1.
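The ``rvs()`` broadcasting fix described in the `scipy.stats` changes above can be checked with a short snippet (on 0.18; on earlier versions the first call returned ten identical values):

```python
import numpy as np
from scipy import stats

# Broadcasting the loc parameter now yields genuinely random draws.
samples = stats.norm.rvs(loc=np.zeros(10))
print(np.unique(samples).size)  # 10 distinct values, one per loc entry

# Incompatible argument/size shapes now raise instead of silently
# returning an array of the requested shape.
try:
    stats.gamma.rvs([2, 5, 10, 15], size=(2, 2))
    print("no error (pre-0.18 behavior)")
except ValueError:
    print("ValueError (0.18 behavior)")
```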
Authors
=======

* @endolith
* @yanxun827 +
* @kleskjr +
* @MYheavyGo +
* @solarjoe +
* Gregory Allen +
* Gilles Aouizerate +
* Tom Augspurger +
* Henrik Bengtsson +
* Felix Berkenkamp
* Per Brodtkorb
* Lars Buitinck
* Daniel Bunting +
* Evgeni Burovski
* CJ Carey
* Tim Cera
* Grey Christoforo +
* Robert Cimrman
* Philip DeBoer +
* Yves Delley +
* Dávid Bodnár +
* Ion Elberdin +
* Gabriele Farina +
* Yu Feng
* Andrew Fowlie +
* Joseph Fox-Rabinovitz
* Simon Gibbons +
* Neil Girdhar +
* Kolja Glogowski +
* Christoph Gohlke
* Ralf Gommers
* Todd Goodall +
* Johnnie Gray +
* Alex Griffing
* Olivier Grisel
* Thomas Haslwanter +
* Michael Hirsch +
* Derek Homeier
* Golnaz Irannejad +
* Marek Jacob +
* InSuk Joung +
* Tetsuo Koyama +
* Eugene Krokhalev +
* Eric Larson
* Denis Laxalde
* Antony Lee
* Jerry Li +
* Henry Lin +
* Nelson Liu +
* Loïc Estève
* Lei Ma +
* Osvaldo Martin +
* Stefano Martina +
* Nikolay Mayorov
* Matthieu Melot +
* Sturla Molden
* Eric Moore
* Alistair Muldal +
* Maniteja Nandana
* Tavi Nathanson +
* Andrew Nelson
* Joel Nothman
* Behzad Nouri
* Nikolai Nowaczyk +
* Juan Nunez-Iglesias +
* Ted Pudlik
* Eric Quintero
* Yoav Ram
* Jonas Rauber +
* Tyler Reddy +
* Juha Remes
* Garrett Reynolds +
* Ariel Rokem +
* Fabian Rost +
* Bill Sacks +
* Jona Sassenhagen +
* Marcello Seri +
* Sourav Singh +
* Martin Spacek +
* Søren Fuglede Jørgensen +
* Bhavika Tekwani +
* Martin Thoma +
* Sam Tygier +
* Meet Udeshi +
* Utkarsh Upadhyay
* Bram Vandekerckhove +
* Sebastián Vanrell +
* Ze Vinicius +
* Pauli Virtanen
* Stefan van der Walt
* Warren Weckesser
* Jakub Wilk +
* Josh Wilson
* Phillip J. Wolfram +
* Nathan Woods
* Haochen Wu
* G Young +

A total of 99 people contributed to this release. People with a "+" by
their names contributed a patch for the first time. This list of names is
automatically generated, and may not be fully complete.
From cosmicsteve at gmail.com Mon Jun 20 11:56:06 2016
From: cosmicsteve at gmail.com (Steven Ehlert)
Date: Mon, 20 Jun 2016 10:56:06 -0500
Subject: [SciPy-User] Creating a Custom PDF with Arguments Using Scipy Stats
In-Reply-To: References: Message-ID: 

Hi all-

After doing some math, I was able to figure out how to normalize this pdf
in a meaningful way and get the right math packages to interpret it for
me.

A new question: how do I properly call this within the context of a
KS-test? What I want to start doing is comparing histograms of data to
this pdf and see if they make sense. I am guessing it would be something
like

stats.kstest(stuffsample, stuff)

where stuff=observed_stuff(name="Stuff",a=1e-6,b=100) but I am not sure
how to make the call appropriate. The one I have above does not work, and
I know I need to send off a few arguments.

The new class with the correct math in place is below. Thanks for your
help!

class observed_stuff(stats.rv_continuous):

    def _pdf(self, x, massindex, complimit):
        dndm = (x)**(-1.*massindex)
        complete = 1. - np.exp(-1.*x**2/complimit**2)
        self.massindex = massindex
        self.complimit = complimit
        norm = self.integral(self.b) - self.integral(self.a)
        #print "Norm ", norm
        return 1./norm * dndm * complete

    def _cdf(self, xval, massindex, complimit):
        self.massindex = massindex
        self.complimit = complimit
        norm = self.integral(self.b) - self.integral(self.a)
        return 1./norm * (self.integral(xval) - self.integral(self.a))

    def integral(self, xval):
        usq = xval**2/self.complimit**2
        mi = self.massindex
        arg1 = (1.-mi)/2.
        # We have to use mpmath for the incomplete gamma function that
        # arises in this integral... otherwise the values are all screwy.
        # This try-except snippet ensures the output is in the right
        # format for scipy stats
        try:
            gammaval = [float(mpmath.gammainc(a, u)) for a, u in zip(arg1, usq)]
            gammaval = np.array(gammaval)
        except:
            gammaval = float(mpmath.gammainc(arg1, usq))
        numerator = xval**(1-mi) * (-2.*np.sqrt(usq) + (mi-1.)*(usq)**(mi/2.)
                                    * gammaval)
        denominator = 2.*(mi-1.)*np.sqrt(usq)
        return numerator/denominator

On Wed, Jun 8, 2016 at 8:54 AM, Kevin Gullikson wrote:

> Explicitly defining the cdf would probably help too. If it is
> analytically calculable, defining the inverse cdf would almost
> definitely help since that is how the random numbers are being
> generated.
> https://en.wikipedia.org/wiki/Inverse_transform_sampling
>
> What you have now only defines the pdf. The base class automagically
> integrates to get the cdf, and might do some kind of numerical
> parameter optimization to get to the inverse cdf (not sure about that).
> If nothing else, defining the cdf/inverse cdf would make things faster.
>
> On Wed, Jun 8, 2016 at 8:46 AM Evgeni Burovski wrote:
>
>> On Wed, Jun 8, 2016 at 1:31 PM, Steven Ehlert wrote:
>> > Okay-
>> >
>> > Thank you for the insights.
>> >
>> > So three questions about this, as I am not an expert in the
>> > implementation. I figured that pdf normalization may play a role.
>> >
>> > First of all, what is the best way to numerically handle the
>> > normalization? Do I need to include the integral every time I
>> > initialize the pdf? Something like this:
>> >
>> > class my_pdf(stats.rv_continuous):
>> >     def _pdf(self, x, index, m0):
>> >         step = (self.b - self.a)/100.
>> >         this_xrange = np.arange(self.a, self.b + step, step)
>> >         this_function = this_xrange**(-1*index) * (1. - np.exp((-1*this_xrange**2)/m0**2))
>> >         #for x, f in zip(this_xrange, this_function): print x, f
>> >         this_norm = 1./(integrate.simps(this_function, x=this_xrange))
>> >         #print this_norm
>> >         return this_norm * x**(-1*index) * (1. - np.exp((-1*x**2)/m0**2))
>>
>> Yes, this would need to compute it at each call to pdf.
>> The integral can be expressed in terms of gamma functions (do a
>> substitution of y = x**2 / 2 in the integral). Alternatively, you can
>> do the integral numerically; likely `quad` is better than `simps`.
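Evgeni's suggestion to normalize the pdf numerically with `quad` can be made concrete in a self-contained sketch. The class name and parameter values below are illustrative, not the poster's actual model, and the parameters are baked in as class attributes to sidestep shape-parameter broadcasting; the norm is also recomputed on every call, which is slow but keeps the example short:

```python
import numpy as np
from scipy import stats, integrate

class PowerLawCutoff(stats.rv_continuous):
    """Power law with a low-end exponential cutoff (illustrative values)."""
    index = 1.7
    m0 = 1e-3   # cutoff scale; in general, prefer the framework's `scale`

    def _raw(self, x):
        # Unnormalized shape of the pdf.
        return x ** (-self.index) * (1.0 - np.exp(-x ** 2 / self.m0 ** 2))

    def _pdf(self, x):
        # The pdf must integrate to 1 over the support [self.a, self.b];
        # recomputing the norm on each call is inefficient but correct.
        norm, _ = integrate.quad(self._raw, self.a, self.b)
        return self._raw(x) / norm

dist = PowerLawCutoff(a=1e-6, b=1e-2, name="plcutoff")

# Sanity check: the normalized pdf integrates to one over its support.
total, _ = integrate.quad(dist.pdf, 1e-6, 1e-2)
print(total)  # close to 1.0
```

With the normalization correct, `dist.rvs(...)` draws follow the intended shape, since the framework inverts a genuine cdf.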
>>
>> > I don't think there is an analytic form for this integral, so I have
>> > to do it numerically, but I also have to implement it with the pdf.
>> >
>> > And the second question I have is: how do I call the scale variable
>> > within the _pdf function? Would I simply put in self.scale in place
>> > of m0 in the code above? Or is it more sophisticated than that?
>>
>> It's easy: you don't need to :-).
>> distribution.pdf(x, loc=11, scale=42) calls distribution._pdf(y) with
>> y = (x - loc) / scale.
>>
>> This way, your `_pdf` only needs to deal with a scaled variable. There
>> is no self.scale --- it's just a parameter to any call to pdf, cdf,
>> sf, etc.
>>
>> > Finally, I realized these concerns when I wrote it, but I didn't
>> > expect the pdf to look okay but the rvs to be so off - are there
>> > reasons to expect this behavior based on my previously incorrect
>> > normalization? It is not obvious to me that it would.
>>
>> PDF needs to integrate to one. If the normalization is off, it still
>> might look "okay" because the shape is the same, but it's still wrong.
>> Drawing random variates assumes that the cdf is a real distribution
>> function. From experience, having an ok-looking PDF and nonsense for
>> the random variates is a typical sign that the normalization might be
>> off.
>>
>> > Thank you for your help thus far and I look forward to learning more
>> > about custom statistical distributions in SciPy.
>> >
>> > Steven Ehlert
>> >
>> > On Wed, Jun 8, 2016 at 5:59 AM, Evgeni Burovski
>> > <evgeny.burovskiy at gmail.com> wrote:
>> >>
>> >> On 08.06.2016 13:50, "Steven Ehlert" wrote:
I want to have a power law with an exponential >> cutoff >> >> > at low values: mathematically, this looks like >> >> > >> >> > P(m) = N * m**(-index) * (1 - exp((-m**2)/m0**2) with three >> parameters: >> >> > a normalization (N), an index (index), and a scale value (m0). >> >> > >> >> > >> >> > >> >> > I have tried to produce some code that creates this pdf and while the >> >> > pdf and cdf functions appear to be working as I hoped, when I try to >> >> > generate random values they don't follow this distribution at all, >> and I am >> >> > not sure why. It seems to be far more heavily drawing from the low >> end of >> >> > the curve (which are explicitly suppressed in the pdf) >> >> > >> >> > Can you suggest the changes I need to make to try and figure out why >> my >> >> > code isn't working as planned? My code snippet is below: >> >> > >> >> > >> >> > from scipy import stats >> >> > import numpy as np >> >> > from matplotlib import pyplot as plt >> >> > >> >> > >> >> > class my_pdf(stats.rv_continuous): >> >> > def _pdf(self,x,norm,index,m0): >> >> > return norm*x**(-1*index) * (1. -np.exp((-1*x**2)/m0**2) ) >> >> > >> >> > >> >> > my_cv= my_pdf(a=1e-6,b=1e-2,name="Test") >> >> > >> >> > >> >> > rvs=my_cv.rvs(1.0,1.7,1e-3,size=1000) #Values should be peaked >> around >> >> > 1e-3, but histogram peaks around 1e-5... >> >> > >> >> > >> >> > >> >> > I am really having a hard time figuring out how to create a robust >> and >> >> > complex custom rv_continuous in scipy based on scipy, so any help >> you can >> >> > provide would be most appreciated. 
>> >> >
>> >> > Cheers,
>> >> >
>> >> > Steve
>> >> >
>> >> > _______________________________________________
>> >> > SciPy-User mailing list
>> >> > SciPy-User at scipy.org
>> >> > https://mail.scipy.org/mailman/listinfo/scipy-user
>> >>
>> >> First and foremost, you need to fix the normalization: the norm
>> >> constant is not a free parameter, as the pdf must integrate to
>> >> unity over the support of your RV.
>> >>
>> >> Next, the scale parameter is handled by the framework: pdf, cdf
>> >> methods all accept the `scale` argument. Thus you do not need the
>> >> m0 parameter.
>> >>
>> >> HTH,
>> >>
>> >> Evgeni
>> >>
>> >> _______________________________________________
>> >> SciPy-User mailing list
>> >> SciPy-User at scipy.org
>> >> https://mail.scipy.org/mailman/listinfo/scipy-user
>> >
>> > _______________________________________________
>> > SciPy-User mailing list
>> > SciPy-User at scipy.org
>> > https://mail.scipy.org/mailman/listinfo/scipy-user
>>
>> _______________________________________________
>> SciPy-User mailing list
>> SciPy-User at scipy.org
>> https://mail.scipy.org/mailman/listinfo/scipy-user
>
> --
> Kevin Gullikson
> Data Scientist
> Spark Cognition
>
> _______________________________________________
> SciPy-User mailing list
> SciPy-User at scipy.org
> https://mail.scipy.org/mailman/listinfo/scipy-user

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From jonlederman at gmail.com Mon Jun 20 21:31:05 2016
From: jonlederman at gmail.com (Jon Lederman)
Date: Mon, 20 Jun 2016 21:31:05 -0400
Subject: [SciPy-User] SciPy Minimize Warm Restart
Message-ID: <6A653F84-AD0F-4A7D-AE09-C60686AA810A@gmail.com>

Hi All,

I am trying to use some of the SciPy minimize algos such as Nelder-Mead
and cause the algorithm to execute a warm restart if it is detected that
the optimizer is stuck or not converging. Is there a way to accomplish
this programmatically? Thanks.
Jon

From matthew.brett at gmail.com Mon Jun 20 23:27:21 2016
From: matthew.brett at gmail.com (Matthew Brett)
Date: Mon, 20 Jun 2016 20:27:21 -0700
Subject: [SciPy-User] [Numpy-discussion] scipy 0.18 release candidate 1
In-Reply-To: References: Message-ID: 

Hi,

On Mon, Jun 20, 2016 at 6:31 AM, Evgeni Burovski wrote:
> Hi,
>
> I'm pleased to announce the availability of the first release
> candidate for scipy 0.18.0.
> Please try this release and report any issues on Github tracker,
> https://github.com/scipy/scipy, or scipy-dev mailing list.

[remainder of the quoted announcement snipped; see the original message
above]

Thanks a lot for taking on the release.

I put the manylinux1 and OSX wheel building into a single repo to test
64- and 32-bit linux wheels.
There's a test run with the 0.18.0rc1 code here:
https://travis-ci.org/MacPython/scipy-wheels/builds/139084454

For Python 3 I am getting these errors:
https://github.com/scipy/scipy/issues/6292

For all 32-bit builds I am getting this error:
https://github.com/scipy/scipy/issues/6093

For the Python 3 32-bit builds I am also getting this error:
https://github.com/scipy/scipy/issues/6101

For the builds that succeeded without failure (all OSX and manylinux1
for 64-bit Python 2.7), you can test with:

python -m pip install -U pip
pip install --trusted-host wheels.scipy.org -f https://wheels.scipy.org -U --pre scipy

Thanks again, sorry for the tiring news,

Matthew

From josef.pktd at gmail.com Tue Jun 21 16:21:44 2016
From: josef.pktd at gmail.com (josef.pktd at gmail.com)
Date: Tue, 21 Jun 2016 16:21:44 -0400
Subject: [SciPy-User] ANN: statsmodels release candidate 0.8.0rc1
Message-ID: 

We are very pleased to announce the release candidate for 0.8.0 of
statsmodels.

Statsmodels 0.8.0 combines the work of two years of statsmodels
development with many new features, given that version 0.7 was never
released. See the release notes below for details.

The source distribution and Windows wheels compiled with Microsoft
compilers are available on pypi. The documentation for this release
candidate is currently at http://www.statsmodels.org/0.8.0/

The release notes are at
http://www.statsmodels.org/0.8.0/release/version0.8.html and
http://www.statsmodels.org/0.8.0/release/version0.7.html

Please report any problems on the github issues
https://github.com/statsmodels/statsmodels/issues or on the mailing list
http://groups.google.com/group/pystatsmodels .

The release team on behalf of statsmodels developers

--------------------------------

About statsmodels:

Statsmodels is a Python package that provides estimation and inference
for statistical and econometric models and descriptive and inferential
statistics.
Release 0.8.0
=============

See also changes in the unreleased 0.7.

Release summary
---------------

The main features of this release are several new time series models
based on the statespace framework, multiple imputation using MICE, as
well as many other enhancements. The codebase has also been updated to be
compatible with recent numpy and pandas releases.

Statsmodels now uses github to store the updated documentation, which is
available under http://www.statsmodels.org/stable for the last release,
and http://www.statsmodels.org/dev/ for the development version.

This is the last release that supports Python 2.6.

**Warning** API stability is not guaranteed for new features, although
even in this case changes will be made in a backwards compatible way if
possible. The stability of a new feature depends on how long it has
already been in statsmodels master and how much usage it has already
seen. If there are specific known problems or limitations, then they are
mentioned in the docstrings.

The following major new features appear in this version.

Statespace Models
-----------------

Building on the statespace framework and models added in 0.7, this
release includes additional models that build on it, authored by Chad
Fulton largely during GSOC 2015.

Kalman Smoother
^^^^^^^^^^^^^^^

The Kalman smoother (introduced in #2434) allows making inference on the
unobserved state vector at each point in time using data from the entire
sample. In addition to this improved inference, the Kalman smoother is
required for future improvements such as simulation smoothing and the
expectation maximization (EM) algorithm.

As a result of this improvement, all state space models now inherit a
`smooth` method for producing results with smoothed state estimates. In
addition, the `fit` method will return results with smoothed estimates
at the maximum likelihood estimates.
Postestimation
^^^^^^^^^^^^^^

Improved post-estimation output is now available to all state space
models (introduced in #2566). This includes the new methods
`get_prediction` and `get_forecast`, providing standard errors and
confidence intervals as well as point estimates, `simulate`, providing
simulation of time series following the given state space process, and
`impulse_responses`, allowing computation of impulse responses due to
innovations to the state vector.

Diagnostics
^^^^^^^^^^^

A number of general diagnostic tests on the residuals from state space
estimation are now available to all state space models (introduced in
#2431). These include:

* `test_normality` implements the Jarque-Bera test for normality of
  residuals
* `test_heteroskedasticity` implements a test for homoskedasticity of
  residuals similar to the Goldfeld-Quandt test
* `test_serial_correlation` implements the Ljung-Box (or Box-Pierce)
  test for serial correlation of residuals

These test statistics are also now included in the `summary` method
output. In addition, a `plot_diagnostics` method is available which
provides four plots to visually assess model fit.

Unobserved Components
^^^^^^^^^^^^^^^^^^^^^

The class of univariate Unobserved Components models (also known as
structural time series models) is now available (introduced in #2432).
This includes as special cases the local level model and the local
linear trend model. Generically it allows decomposing a time series into
trend, cycle, seasonal, and irregular components, optionally with
exogenous regressors and/or autoregressive errors.

Multivariate Models
^^^^^^^^^^^^^^^^^^^

Two standard multivariate econometric models - the vector autoregressive
moving-average model with exogenous regressors (VARMAX) and the Dynamic
Factor model - are now available (introduced in #2563).
The first is a popular reduced form method of exploring the covariance
in several time series, and the second is a popular reduced form method
of extracting a small number of common factors from a large dataset of
observed series.

Recursive least squares
^^^^^^^^^^^^^^^^^^^^^^^

A model for recursive least squares, also known as expanding-window OLS,
is now available in `statsmodels.regression` (introduced in #2830).

Miscellaneous
^^^^^^^^^^^^^

Other improvements to the state space framework include:

* Improved missing data handling #2770, #2809
* Ongoing refactoring and bug fixes in fringe and corner cases

New functionality in statistics
-------------------------------

* Contingency Tables #2418 (Kerby Shedden)
* Local FDR, multiple testing #2297 (Kerby Shedden)
* Mediation Analysis #2352 (Kerby Shedden)

other:

* weighted quantiles in DescrStatsW #2707 (Kerby Shedden)

Duration
--------

* Kaplan Meier Survival Function #2614 (Kerby Shedden)
* Cumulative incidence rate function #3016 (Kerby Shedden)

other:

* frequency weights in Kaplan-Meier #2992 (Kerby Shedden)

Imputation
----------

New subpackage in `statsmodels.imputation`:

* MICE #2076 (Frank Cheng GSOC 2014 and Kerby Shedden)
* Imputation by regression on Order Statistic #3019 (Paul Hobson)

Time Series Analysis
--------------------

Markov Switching Models
^^^^^^^^^^^^^^^^^^^^^^^

Markov switching dynamic regression and autoregression models are now
available (introduced in #2980 by Chad Fulton). These models allow
regression effects and/or autoregressive dynamics to differ depending on
an unobserved "regime"; in Markov switching models, the regimes are
assumed to transition according to a Markov process.
Statistics
^^^^^^^^^^

* KPSS stationarity, unit root test #2775 (N-Wouda)
* The Brock-Dechert-Scheinkman (BDS) test for nonlinear dependence is now available (introduced in #934 by Chad Fulton)

Penalized Estimation
--------------------

Elastic net: `fit_regularized` with L1/L2 penalization has been added to OLS, GLM and PHReg (Kerby Shedden)

GLM
---

Tweedie is now available as a new family #2872 (Peter Quackenbush, Josef Perktold)

other:

* frequency weights for GLM (currently without full support) #
* more flexible convergence options #2803 (Peter Quackenbush)

Multivariate
------------

New subpackage that currently contains PCA. PCA was added in 0.7 to `statsmodels.tools` and is now in `statsmodels.multivariate`.

Documentation
-------------

New doc build with latest jupyter and Python 3 compatibility (Tom Augspurger)

Other important improvements
----------------------------

Several existing functions have received improvements:

* seasonal_decompose: improved periodicity handling #2987 (ssktotoro ?)
* tools add_constant, add_trend: refactoring and pandas compatibility #2240 (Kevin Sheppard)
* acf, pacf, acovf: option for missing handling #3020 (joesnacks ?)
* acf, pacf plots: allow array of lags #2989 (Kevin Sheppard)
* io SimpleTable (summary): allow names with special characters #3015 (tvanessa ?)
* tsa tools lagmat, lagmat2ds: pandas support #2310 #3042 (Kevin Sheppard)
* CompareMeans: from_data, summary methods #2754 (Valery Tyumen)

Major Bugs fixed
----------------

* see github issues

Backwards incompatible changes and deprecations
-----------------------------------------------

* ???
* predict now returns a pandas Series if the exog argument is a DataFrame
* PCA moved to multivariate compared to 0.7

Development summary and credits
-------------------------------

Besides receiving contributions for new and improved features and for bugfixes, important contributions to general maintenance came from

* Kevin Sheppard
* Pierre Barbier de Reuille
* Tom Augspurger

and the general maintainer and code reviewer

* Josef Perktold

Additionally, many users contributed by participating in github issues and providing feedback.

Thanks to all of the contributors for the 0.8 release:

.. note::

   * Ashish
   * Brendan
   * Brendan Condon
   * BrianLondon
   * Chad Fulton
   * Chris Fonnesbeck
   * Christoph T. Weidemann
   * James Kerns
   * Josef Perktold
   * Kerby Shedden
   * Kevin Sheppard
   * Leoyzen
   * Matthew Brett
   * Niels Wouda
   * Paul Hobson
   * Pierre Barbier de Reuille
   * Pietro Battiston
   * Ralf Gommers
   * Roman Ring
   * Skipper Seabold
   * Soren Fuglede Jorgensen
   * Thomas Cokelaer
   * Tom Augspurger
   * ValeryTyumen
   * Vanessa
   * Yaroslav Halchenko
   * joesnacks
   * kokes
   * matiumerca
   * rlan
   * ssktotoro
   * thequackdaddy
   * vegcev

Thanks to all of the contributors for the 0.7 release:

.. note::

   * Alex Griffing
   * Antony Lee
   * Chad Fulton
   * Christoph Deil
   * Daniel Sullivan
   * Hans-Martin von Gaudecker
   * Jan Schulz
   * Joey Stockermans
   * Josef Perktold
   * Kerby Shedden
   * Kevin Sheppard
   * Kiyoto Tamura
   * Louis-Philippe Lemieux Perreault
   * Padarn Wilson
   * Ralf Gommers
   * Saket Choudhary
   * Skipper Seabold
   * Tom Augspurger
   * Trent Hauck
   * Vincent Arel-Bundock
   * chebee7i
   * donbeo
   * gliptak
   * hlin117
   * jerry dumblauskas
   * jonahwilliams
   * kiyoto
   * neilsummers
   * waynenilsen

These lists of names are automatically generated based on git log, and may not be complete.
From matthew.brett at gmail.com  Wed Jun 22 03:29:41 2016
From: matthew.brett at gmail.com (Matthew Brett)
Date: Wed, 22 Jun 2016 00:29:41 -0700
Subject: [SciPy-User] [pystatsmodels] ANN: statsmodels release candidate 0.8.0rc1
In-Reply-To:
References:
Message-ID:

Hi,

On Tue, Jun 21, 2016 at 1:21 PM, wrote:
> We are very pleased to announce the release candidate for 0.8.0 of statsmodels.
>
> Statsmodels 0.8.0 combines the work of two years of statsmodels
> development with many new features given that version 0.7 was never
> released. See the release notes below for details.
>
> The source distribution and Windows wheels compiled with Microsoft
> compilers are available on pypi.
>
> The documentation for this release candidate is currently at
> http://www.statsmodels.org/0.8.0/
> The release notes are at
> http://www.statsmodels.org/0.8.0/release/version0.8.html and
> http://www.statsmodels.org/0.8.0/release/version0.7.html
>
> Please report any problems on the github issues
> https://github.com/statsmodels/statsmodels/issues or on the mailing
> list http://groups.google.com/group/pystatsmodels .

I am just building OSX and manylinux1 wheels - here:

https://travis-ci.org/MacPython/statsmodels-wheels/builds/139376876

I found a precision failure on 32-bit:

https://github.com/statsmodels/statsmodels/issues/3067

Great to see the release on its way,

Cheers,

Matthew

From thomas.robitaille at gmail.com  Thu Jun 23 08:32:55 2016
From: thomas.robitaille at gmail.com (Thomas Robitaille)
Date: Thu, 23 Jun 2016 13:32:55 +0100
Subject: [SciPy-User] ANN: Astropy v1.2 released
Message-ID:

Dear colleagues,

We are very happy to announce the v1.2 release of the Astropy package, a core Python package for Astronomy:

http://www.astropy.org

Astropy is a community-driven Python package intended to contain much of the core functionality and common tools needed for astronomy and astrophysics.
New and improved major functionality in this release includes:

* A new class to compute Lomb-Scargle periodograms efficiently using different methods.
* A number of new statistics functions, including those for Jackknife resampling, circular statistics, and the Akaike and Bayesian information criteria.
* Support for getting the positions of solar system bodies in the coordinates sub-package.
* The ability to compute Barycentric and Heliocentric light-travel time corrections.
* Support for offset coordinate frames, which can be used to define a coordinate system relative to a known position and rotation.
* An implementation of the zscale algorithm to determine image limits automatically.
* Support for bolometric magnitudes in the units package.
* Improvements to the NDData class and subclasses.
* Auto-downloading of IERS tables as needed, which gives information about Earth orientation parameters necessary for high-precision coordinate calculations and conversions to/from the UT1 scale.

In addition, hundreds of smaller improvements and fixes have been made. An overview of the changes is provided at:

http://docs.astropy.org/en/stable/whatsnew/1.2.html

Instructions for installing Astropy are provided on our website, and extensive documentation can be found at:

http://docs.astropy.org

If you make use of the Anaconda Python Distribution, you can update to Astropy v1.2 with:

    conda update astropy

If you normally use pip, you can upgrade with:

    pip install astropy --upgrade

Note that if you install now you should get Astropy v1.2.1, as some last-minute bug fixes were found and fixed after the v1.2 release was created but before this announcement.
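As a side note for readers of this list: SciPy ships a related routine, `scipy.signal.lombscargle`, so the core idea can be tried without installing astropy. The sketch below uses arbitrary illustration values (the sampling times, `f_true`, and the frequency grid are all made up) and recovers the frequency of a noisy, unevenly sampled sine:

```python
import numpy as np
from scipy.signal import lombscargle

rng = np.random.RandomState(42)

# Unevenly sampled observation times -- the case Lomb-Scargle handles
# and an ordinary FFT periodogram does not.
t = np.sort(rng.uniform(0, 10, 200))
f_true = 1.3  # signal frequency in Hz (arbitrary illustration value)
y = np.sin(2 * np.pi * f_true * t) + 0.1 * rng.standard_normal(200)

# scipy.signal.lombscargle expects *angular* frequencies (rad/s).
freqs = np.linspace(0.1, 20.0, 2000)
power = lombscargle(t, y - y.mean(), freqs)

# Location of the spectral peak, converted back to Hz.
f_peak = freqs[np.argmax(power)] / (2 * np.pi)
print("recovered frequency: %.2f Hz" % f_peak)
```

The astropy class adds normalization options and faster algorithms on top of this basic computation.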
Please report any issues, or request new features, via our GitHub repository:

https://github.com/astropy/astropy/issues

Over 190 developers have contributed code to Astropy so far, and you can find out more about the team behind Astropy here:

http://www.astropy.org/team.html

As a reminder, Astropy v1.0 (our long term support release) will continue to be supported with bug fixes until Feb 19th 2017, so if you need to use Astropy in a very stable environment, you may want to consider staying on the v1.0.x set of releases (for which we have recently released v1.0.10).

If you use Astropy directly for your work, or as a dependency to another package, please remember to include the following acknowledgment at the end of papers:

"This research made use of Astropy, a community-developed core Python package for Astronomy (Astropy Collaboration, 2013)."

where (Astropy Collaboration, 2013) is a reference to the Astropy paper:

http://dx.doi.org/10.1051/0004-6361/201322068

Please feel free to forward this announcement to anyone you think might be interested in this release!

Erik Tollerud, Tom Robitaille, Kelle Cruz, and Tom Aldcroft
on behalf of The Astropy Collaboration

From fausto_barbuto at yahoo.ca  Sun Jun 26 00:15:26 2016
From: fausto_barbuto at yahoo.ca (Fausto Arinos de A. Barbuto)
Date: Sun, 26 Jun 2016 04:15:26 +0000 (UTC)
Subject: [SciPy-User] Problem with SciPy 0.17.1 integrate package
References: <1794449799.1300778.1466914526199.JavaMail.yahoo.ref@mail.yahoo.com>
Message-ID: <1794449799.1300778.1466914526199.JavaMail.yahoo@mail.yahoo.com>

Hello,

I have just upgraded from SciPy 0.13.3 to 0.17.1 and Python scripts that used to work don't work any more. The problem appears when I try to use the odeint function. When I try this:

>>> from scipy.integrate import odeint

I get the following error messages:

Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/usr/local/lib/python2.7/dist-packages/scipy/integrate/__init__.py", line 56, in <module>
    from .odepack import *
  File "/usr/local/lib/python2.7/dist-packages/scipy/integrate/odepack.py", line 6, in <module>
    from . import _odepack
ImportError: /usr/local/lib/python2.7/dist-packages/scipy/integrate/_odepack.so: undefined symbol: lsoda_

The command above always worked fine with SciPy 0.13.3. Is this error somehow related to Numpy or Lapack (sorry if this question makes no sense, but I'm really lost)? Is there any test I could do?

Despite this error the whole SciPy module/package seems to import OK:

>>> import scipy
>>> print scipy.__version__
0.17.1
>>>

I'm running Ubuntu 14.04 64-bits with Python 2.7.6, gcc 4.8.4 and Numpy 1.11.0.

Thanks in advance for any help you provide.

Fausto

From fausto_barbuto at yahoo.ca  Sun Jun 26 01:43:06 2016
From: fausto_barbuto at yahoo.ca (Fausto Arinos de A. Barbuto)
Date: Sun, 26 Jun 2016 02:43:06 -0300
Subject: [SciPy-User] Problem with SciPy 0.17.1 integrate package
In-Reply-To: <1794449799.1300778.1466914526199.JavaMail.yahoo@mail.yahoo.com>
References: <1794449799.1300778.1466914526199.JavaMail.yahoo.ref@mail.yahoo.com>
 <1794449799.1300778.1466914526199.JavaMail.yahoo@mail.yahoo.com>
Message-ID:

On 26-06-2016 01:15, Fausto Arinos de A. Barbuto wrote:
> I have just upgraded from SciPy 0.13.3 to 0.17.1 and Python scripts that
> used to work don't work any more. The problem appears when I try to use
> the odeint function. [...]

I found out what the problem was. Setup installs SciPy in /usr/local/lib/python2.7/dist-packages/scipy by default, whereas Ubuntu puts the packages in /usr/lib/python2.7/dist-packages/scipy, and that was causing a conflict. I deleted the former and got 0.13.3 (which had never been overwritten) back. I might have used "sudo python setup.py install --prefix=/usr/lib/..." but SciPy's installation instructions do not recommend that. I guess I will have to wait for the point release of 16.04 LTS to finally have 0.17.1.

Fausto

From charlesr.harris at gmail.com  Sun Jun 26 12:36:28 2016
From: charlesr.harris at gmail.com (Charles R Harris)
Date: Sun, 26 Jun 2016 10:36:28 -0600
Subject: [SciPy-User] Numpy 1.11.1 release
Message-ID:

Hi All,

I'm pleased to announce the release of NumPy 1.11.1. This release supports Python 2.6 - 2.7 and 3.2 - 3.5, and fixes bugs and regressions found in NumPy 1.11.0 as well as making several build-related improvements. Wheels for Linux, Windows, and OSX can be found on PyPI. Sources are available on both PyPI and Sourceforge.

Thanks to all who were involved in this release, and a special thanks to Matthew Brett for his work on the Linux and Windows wheel infrastructure.
The following pull requests have been merged:

- 7506 BUG: Make sure numpy imports on python 2.6 when nose is unavailable.
- 7530 BUG: Floating exception with invalid axis in np.lexsort.
- 7535 BUG: Extend glibc complex trig functions blacklist to glibc < 2.18.
- 7551 BUG: Allow graceful recovery for no compiler.
- 7558 BUG: Constant padding expected wrong type in constant_values.
- 7578 BUG: Fix OverflowError in Python 3.x in swig interface.
- 7590 BLD: Fix configparser.InterpolationSyntaxError.
- 7597 BUG: Make np.ma.take work on scalars.
- 7608 BUG: linalg.norm(): Don't convert object arrays to float.
- 7638 BLD: Correct C compiler customization in system_info.py.
- 7654 BUG: ma.median of 1d array should return a scalar.
- 7656 BLD: Remove hardcoded Intel compiler flag -xSSE4.2.
- 7660 BUG: Temporary fix for str(mvoid) for object field types.
- 7665 BUG: Fix incorrect printing of 1D masked arrays.
- 7670 BUG: Correct initial index estimate in histogram.
- 7671 BUG: Boolean assignment no GIL release when transfer needs API.
- 7676 BUG: Fix handling of right edge of final histogram bin.
- 7680 BUG: Fix np.clip bug NaN handling for Visual Studio 2015.
- 7724 BUG: Fix segfaults in np.random.shuffle.
- 7731 MAINT: Change mkl_info.dir_env_var from MKL to MKLROOT.
- 7737 BUG: Fix issue on OS X with Python 3.x, npymath.ini not installed.

The following developers contributed to this release; developers marked with a '+' are first time contributors.

- Allan Haldane
- Amit Aronovitch+
- Andrei Kucharavy+
- Charles Harris
- Eric Wieser+
- Evgeni Burovski
- Loïc Estève+
- Mathieu Lamarre+
- Matthew Brett
- Matthias Geier
- Nathaniel J. Smith
- Nikola Forró+
- Ralf Gommers
- Ray Donnelly+
- Robert Kern
- Sebastian Berg
- Simon Conseil
- Simon Gibbons
- Sorin Sbarnea+

Chuck
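One of the fixes listed above (7654: "ma.median of 1d array should return a scalar") is easy to spot-check on a fresh install. The array values below are arbitrary illustration data:

```python
import numpy as np

# A 1-D masked array; the masked entry (3.0) must not affect the median.
a = np.ma.masked_array([1.0, 2.0, 4.0, 3.0],
                       mask=[False, False, False, True])

m = np.ma.median(a)

# With gh-7654 fixed, the result is a 0-d scalar value rather than a
# length-1 array, and the masked value is ignored: median of [1, 2, 4].
print(float(m), np.ndim(m))  # -> 2.0 0
```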
From evgeny.burovskiy at gmail.com  Mon Jun 27 07:29:25 2016
From: evgeny.burovskiy at gmail.com (Evgeni Burovski)
Date: Mon, 27 Jun 2016 12:29:25 +0100
Subject: [SciPy-User] Problem with SciPy 0.17.1 integrate package
In-Reply-To:
References: <1794449799.1300778.1466914526199.JavaMail.yahoo.ref@mail.yahoo.com>
 <1794449799.1300778.1466914526199.JavaMail.yahoo@mail.yahoo.com>
Message-ID:

On Sun, Jun 26, 2016 at 6:43 AM, Fausto Arinos de A. Barbuto wrote:
> I found out what the problem was. Setup installs SciPy in
> /usr/local/lib/python2.7/dist-packages/scipy by default, whereas Ubuntu
> puts the packages in /usr/lib/python2.7/dist-packages/scipy, and that
> was causing a conflict. [...]

You might want to start using virtualenvs. This is precisely what they are for.
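The shadowing problem described in this thread - a copy under /usr/local/lib hiding the distribution's copy under /usr/lib - can also be diagnosed from Python itself. Below is a small stdlib-only sketch (the helper name and example usage are mine, not from the thread) that lists every directory on a search path containing a given package; the first hit is the one Python will import, and any later hits are shadowed installs:

```python
import os


def find_package_copies(name, search_path):
    """Return every directory on `search_path` that contains package `name`.

    Python imports the first match, so entries after the first are
    shadowed installs -- e.g. a build in /usr/local/lib hiding the
    distribution's package in /usr/lib.
    """
    hits = []
    for entry in search_path:
        candidate = os.path.join(entry, name)
        # A package is a directory; a single-file module is name.py.
        if os.path.isdir(candidate) or os.path.isfile(candidate + ".py"):
            hits.append(candidate)
    return hits


if __name__ == "__main__":
    import sys
    # In real use, scan the interpreter's own search path.
    for hit in find_package_copies("scipy", sys.path):
        print(hit)
```

If this prints more than one path, the first one wins, which is exactly why deleting the /usr/local copy restored the Ubuntu-packaged 0.13.3 above; a virtualenv sidesteps the issue entirely by giving each project its own search path.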