From nicolas.fauchereau at gmail.com Tue Feb 1 04:56:08 2011
From: nicolas.fauchereau at gmail.com (Nicolas)
Date: Tue, 1 Feb 2011 10:56:08 +0100
Subject: [SciPy-User] census method II (X-11) in python

Hi all

Does anyone know if the Census Method II (X-11) approach to seasonal adjustment, used mainly in econometrics, has been implemented in Python? My searches have been unsuccessful so far.

thanks

Nicolas

--
33°49'45.24"S & 18°28'45.60"E
Dr. Nicolas Fauchereau
CSIR - NRE
Research Group: Ocean systems and climate
15 Lower Hope street, Rosebank 7700
South Africa
tel: 021 658 2764

From Adam.Dybbroe at smhi.se Tue Feb 1 08:40:39 2011
From: Adam.Dybbroe at smhi.se (Dybbroe Adam)
Date: Tue, 1 Feb 2011 14:40:39 +0100
Subject: [SciPy-User] 1-d Interpolation of unevenly spaced data
Message-ID: <99FC912DA3F22F43AAA94C99F76C642AB207D0@CORRE.ad.smhi.se>

Hello,

I am trying to use various 1-d spline interpolation methods from scipy, but without much success. I have data (satellite spectral responses as a function of wavelength) which are unevenly spaced, and I want to re-sample the data to get them evenly spaced. The full dataset has only 219 points. I use numpy 1.5.1 and scipy 0.8.0.

I have tested both splrep/splev and InterpolatedUnivariateSpline. This is my code snippet with splrep/splev:

for idx in range(10, w_ch.shape[0], 1):
    try:
        start = w_ch[0]
        end = w_ch[idx]
        dlambda = 0.005
        xspl = np.linspace(start, end, (end - start) / dlambda)
        # Mask out part of the response curve:
        maskidx = np.logical_and(np.greater_equal(w_ch, start),
                                 np.less_equal(w_ch, end))
        wvl = np.repeat(w_ch, maskidx)
        resp = np.repeat(resp_ch, maskidx)
        tck = splrep(wvl, resp)
        resp_ipol = splev(xspl, tck)
    except:
        print "idx = ", idx
        print "Unexpected error:", sys.exc_info()[0:]
        break

I get an error already at idx=16 (I start at 10):

Unexpected error: (<type 'exceptions.IOError'>, IOError('Interpolated response values corrupt',), <traceback object at ...>)

It happens inside fitpack.py at line 418:

    raise _iermess[ier][1], _iermess[ier][0]

So I guess the error code 'ier' returned from

    n, c, fp, ier = dfitpack.percur(task, x, y, w, t, wrk, iwrk, k, s)

is above 3?

When I use InterpolatedUnivariateSpline my code looks like this:

for idx in range(10, w_ch.shape[0], 1):
    try:
        start = w_ch[0]
        end = w_ch[idx]
        dlambda = 0.005
        xspl = np.linspace(start, end, (end - start) / dlambda)
        # Mask out part of the response curve:
        maskidx = np.logical_and(np.greater_equal(w_ch, start),
                                 np.less_equal(w_ch, end))
        wvl = np.repeat(w_ch, maskidx)
        resp = np.repeat(resp_ch, maskidx)
        ius = InterpolatedUnivariateSpline(wvl, resp)
        resp_ipol = ius(xspl)
        if np.isnan(resp_ipol[0]):
            raise IOError("Interpolated response values corrupt")
    except:
        print "idx = ", idx
        print "Unexpected error:", sys.exc_info()[0:]
        break

The latter does not throw an exception, but when I try it on the entire dataset it just gives me "nan" throughout. The above code stops/breaks the first time this happens; here I get to idx=156 before it fails. I have attached the two resulting plots (original data and interpolated).

Any ideas? Am I using the wrong methods for this application?

Regards
Adam Dybbroe
Swedish Meteorological and Hydrological Institute (SMHI)
[attachments: rsr_interpolate_test_UnivariateSpline.png (image/png, 40543 bytes); rsr_interpolate_test_splrep.png (image/png, 64283 bytes)]

From ckkart at hoc.net Tue Feb 1 09:12:01 2011
From: ckkart at hoc.net (Christian K.)
Date: Tue, 01 Feb 2011 15:12:01 +0100
Subject: [SciPy-User] 1-d Interpolation of unevenly spaced data
In-Reply-To: <99FC912DA3F22F43AAA94C99F76C642AB207D0@CORRE.ad.smhi.se>
References: <99FC912DA3F22F43AAA94C99F76C642AB207D0@CORRE.ad.smhi.se>

On 01.02.11 14:40, Dybbroe Adam wrote:
>         # Mask out part of the response curve:
>         maskidx = np.logical_and(np.greater_equal(w_ch, start),
>                                  np.less_equal(w_ch, end))
>         wvl = np.repeat(w_ch, maskidx)
>         resp = np.repeat(resp_ch, maskidx)

I do not understand what those lines are for. As long as the interpolation range is within the data range, everything is ok. Thus it is really sufficient to do

tck = splrep(yourdata_x, yourdata_y)
resp_ipol = splev(linspace(yourdata_x[0], yourdata_x[-1], 200), tck)

Maybe you could post a running script including your data.

Regards, Christian

From ckkart at hoc.net Tue Feb 1 09:15:03 2011
From: ckkart at hoc.net (Christian K.)
Date: Tue, 01 Feb 2011 15:15:03 +0100
Subject: [SciPy-User] Installing Scipy from source

On 31.01.11 23:39, Jorge Garcia wrote:
> I tried using apt-get to get the blas package but it couldn't find it.

The packages are called libblas-dev and liblapack-dev.

Christian.

From jsseabold at gmail.com Tue Feb 1 09:39:31 2011
From: jsseabold at gmail.com (Skipper Seabold)
Date: Tue, 1 Feb 2011 09:39:31 -0500
Subject: [SciPy-User] census method II (X-11) in python

On Tue, Feb 1, 2011 at 4:56 AM, Nicolas wrote:
> Does anyone know if the Census Method II (X-11) approach to seasonal
> adjustment, used mainly in econometrics, has been implemented in
> Python? My searches have been unsuccessful so far.

Not to my knowledge. I don't think it would be too much trouble to generate wrappers for the X-11 part of the X-12-ARIMA source.

If you need a higher-level open source alternative, gretl has the ability to deseasonalize using X-12-ARIMA.

Skipper

From josef.pktd at gmail.com Tue Feb 1 09:47:53 2011
From: josef.pktd at gmail.com (josef.pktd at gmail.com)
Date: Tue, 1 Feb 2011 09:47:53 -0500
Subject: [SciPy-User] census method II (X-11) in python

On Tue, Feb 1, 2011 at 9:39 AM, Skipper Seabold wrote:
> Not to my knowledge. I don't think it would be too much trouble to
> generate wrappers for the X-11 part of the X-12-ARIMA source.
> [...]

I don't know the details, but deseasonalization for "natural phenomena" might not be too difficult to implement, assuming that the weather does not depend on the dates of Easter and other holidays, for example.

Josef
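For a feel of the simple deseasonalization Josef describes, here is a rough moving-average sketch. This is nowhere near X-11; the function name and the additive model are illustrative assumptions, not an existing library routine.

import numpy as np

def naive_seasonal_adjust(y, period=12):
    y = np.asarray(y, dtype=float)
    # crude trend estimate: centered moving average over one full cycle
    # (edges are distorted by the 'same' convolution mode)
    trend = np.convolve(y, np.ones(period) / period, mode='same')
    detrended = y - trend
    # seasonal pattern: mean of the detrended series at each phase
    seasonal = np.array([detrended[i::period].mean()
                         for i in range(period)])
    seasonal -= seasonal.mean()  # normalize the pattern to zero mean
    # tile the pattern over the full series and remove it
    return y - np.resize(seasonal, y.shape[0])

An X-11 style procedure would add, among other things, asymmetric filters at the series ends, outlier handling, and the calendar effects mentioned above.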
From Adam.Dybbroe at smhi.se Tue Feb 1 18:23:08 2011
From: Adam.Dybbroe at smhi.se (Dybbroe Adam)
Date: Wed, 2 Feb 2011 00:23:08 +0100
Subject: [SciPy-User] 1-d Interpolation of unevenly spaced data
Message-ID: <99FC912DA3F22F43AAA94C99F76C642AB207D4@CORRE.ad.smhi.se>

Thanks.

Ok, I have stored my data in an npz-file (attached). You should be able to run the following code (with the npz file in the same directory from where you run):

import numpy as np
outfile = './ipol_test_arrays.npz'

from scipy.interpolate import splrep, splev

# Load data and try to do the spline interpolation:
npzfile = np.load(outfile)
wvl = npzfile['wvl']
resp = npzfile['resp']
xspl = np.linspace(wvl[0], wvl[-1], 300)

tck = splrep(wvl, resp)
resp_ipol = splev(xspl, tck)

Here is what I get:

In [1]: run scipy_ipol_test_ext.py
---------------------------------------------------------------------------
ValueError                                Traceback (most recent call last)

/local_disk/laptop/NowcastingSaf/radiationStuff/test/scipy_ipol_test_ext.py in <module>()
     14 xspl = np.linspace(wvl[0], wvl[-1], 300)
     15
---> 16 tck = splrep(wvl, resp)
     17 resp_ipol = splev(xspl, tck)
     18

/home/a000680/usr/lib64/python2.5/site-packages/scipy/interpolate/fitpack.pyc in splrep(x, y, w, xb, xe, k, task, s, t, full_output, per, quiet)
    416         else:
    417             try:
--> 418                 raise _iermess[ier][1], _iermess[ier][0]
    419             except KeyError:
    420                 raise _iermess['unknown'][1], _iermess['unknown'][0]

ValueError: Error on input data
WARNING: Failure executing file:

:-(
-Adam

On 2011-02-01 15:12, Christian K. wrote:
> Maybe you could post a running script including your data.
> [...]

[attachment: ipol_test_arrays.npz (application/octet-stream, 3868 bytes)]

From charlesr.harris at gmail.com Tue Feb 1 19:09:15 2011
From: charlesr.harris at gmail.com (Charles R Harris)
Date: Tue, 1 Feb 2011 17:09:15 -0700
Subject: [SciPy-User] 1-d Interpolation of unevenly spaced data
In-Reply-To: <99FC912DA3F22F43AAA94C99F76C642AB207D4@CORRE.ad.smhi.se>
References: <99FC912DA3F22F43AAA94C99F76C642AB207D0@CORRE.ad.smhi.se> <99FC912DA3F22F43AAA94C99F76C642AB207D4@CORRE.ad.smhi.se>

On Tue, Feb 1, 2011 at 4:23 PM, Dybbroe Adam wrote:

> Thanks.
> Ok, I have stored my data in an npz-file (attached).
> [...]
> ValueError: Error on input data

The contents of wvl need to be in sorted order; wvl[16] is out of order. Further, I wouldn't recommend letting splrep choose the knot points. A plain old spline with not-a-knot end conditions might be preferable to splrep.

Chuck

From pav at iki.fi Tue Feb 1 19:12:51 2011
From: pav at iki.fi (Pauli Virtanen)
Date: Wed, 2 Feb 2011 00:12:51 +0000 (UTC)
Subject: [SciPy-User] 1-d Interpolation of unevenly spaced data
References: <99FC912DA3F22F43AAA94C99F76C642AB207D0@CORRE.ad.smhi.se> <99FC912DA3F22F43AAA94C99F76C642AB207D4@CORRE.ad.smhi.se>

On Wed, 02 Feb 2011 00:23:08 +0100, Dybbroe Adam wrote:
> Ok, I have stored my data in an npz-file (attached). You should be able
> to run the following code (with the npz file in the same directory from
> where you run):
[clip]

The spline interpolation routines assume that (i) the x-coordinates are sorted, and (ii) there are no duplicate x-entries. Your data seems not to satisfy these conditions: you'll need to sort and de-dupe the data before interpolation.

Like so:

import numpy as np
from scipy.interpolate import InterpolatedUnivariateSpline

def sort_data(x, y):
    # Sort data
    j = np.argsort(x)
    x = x[j]
    y = y[j]

    # De-duplicate data
    mask = np.r_[True, (np.diff(x) > 0)]
    if not mask.all():
        print "Dropping data on the floor..."
        # could do something smarter here
        x = x[mask]
        y = y[mask]
    return x, y

x, y = sort_data(x, y)
ip = InterpolatedUnivariateSpline(x, y)

-- 
Pauli Virtanen

From Adam.Dybbroe at smhi.se Wed Feb 2 05:33:52 2011
From: Adam.Dybbroe at smhi.se (Dybbroe Adam)
Date: Wed, 2 Feb 2011 11:33:52 +0100
Subject: [SciPy-User] 1-d Interpolation of unevenly spaced data
Message-ID: <99FC912DA3F22F43AAA94C99F76C642AB207DC@CORRE.ad.smhi.se>

On 2011-02-02 01:12, Pauli Virtanen wrote:
> The spline interpolation routines assume that (i) the x-coordinates are
> sorted, and (ii) there are no duplicate x-entries. Your data seems not to
> satisfy these conditions: you'll need to sort and de-dupe the data before
> interpolation.
> [...]

Thank you so much, this was very useful. It works, yes. I had overlooked the possibility that my data was not sorted. That of course explains why some response curves were okay and others were not. I shall improve on my quality-checking (data filtering).

The "np.r_" de-duplicate thing was new to me. Nice!

-Adam

From Adam.Dybbroe at smhi.se Wed Feb 2 05:36:16 2011
From: Adam.Dybbroe at smhi.se (Dybbroe Adam)
Date: Wed, 2 Feb 2011 11:36:16 +0100
Subject: [SciPy-User] 1-d Interpolation of unevenly spaced data
Message-ID: <99FC912DA3F22F43AAA94C99F76C642AB207DD@CORRE.ad.smhi.se>

Thanks for the kind and useful help. I am using InterpolatedUnivariateSpline now, after having sorted the data and removed duplicates.

-Adam

On 2011-02-02 01:09, Charles R Harris wrote:
> The contents of wvl need to be in sorted order; wvl[16] is out of order.
> Further, I wouldn't recommend letting splrep choose the knot points. A
> plain old spline with not-a-knot end conditions might be preferable to
> splrep.
>
> Chuck
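Putting the advice from this thread together, here is a minimal self-contained sketch; synthetic data stands in for the attached npz file, which is not reproduced in the archive.

import numpy as np
from scipy.interpolate import InterpolatedUnivariateSpline

# synthetic stand-in: unevenly spaced, unsorted, with a duplicate x
x = np.array([0.2, 0.05, 0.4, 0.1, 0.4, 0.3, 0.15])
y = np.sin(2 * np.pi * x)

# sort by x ...
j = np.argsort(x)
x, y = x[j], y[j]
# ... and drop duplicate abscissae (keep the first of each run)
keep = np.r_[True, np.diff(x) > 0]
x, y = x[keep], y[keep]

# now the spline fit succeeds and can be evaluated on an even grid
ius = InterpolatedUnivariateSpline(x, y)
xnew = np.linspace(x[0], x[-1], 50)
ynew = ius(xnew)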
From josselin.jacquard at gmail.com Wed Feb 2 09:50:58 2011
From: josselin.jacquard at gmail.com (Josselin Jacquard)
Date: Wed, 2 Feb 2011 15:50:58 +0100
Subject: [SciPy-User] graphs

Hi guys, I'm wondering what the state of graph utilities in scipy is. I'm looking for an algorithm to find negative cycles.

Any clues?

Thanks in advance
Joss

From brendanarnold at gmail.com Wed Feb 2 10:03:56 2011
From: brendanarnold at gmail.com (Brendan Arnold)
Date: Wed, 2 Feb 2011 15:03:56 +0000
Subject: [SciPy-User] Expected behaviour when unpacking values from np.loadtxt

Hi there,

I am having trouble when I read a file with only one line of data: the unpacked values seem not to be arrays as expected but numpy floats, i.e.

>>> s = StringIO.StringIO('1,2,3\n4,5,6')
>>> a, b, c = np.loadtxt(s, delimiter=',', unpack=True)
>>> a
array([ 1.,  4.])
>>> s = StringIO.StringIO('1,2,3')
>>> a, b, c = np.loadtxt(s, delimiter=',', unpack=True)
>>> type(a)
<type 'numpy.float64'>

Is there a way to ensure that this always returns an array? This is messing with my code, which expects an iterable.

Brendan
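One possible workaround, sketched below: skip unpack=True and split the columns yourself, letting np.atleast_2d normalize the single-row case. The helper name is made up for illustration.

import numpy as np
from StringIO import StringIO

def loadtxt_cols(fileobj, **kwargs):
    # force a 2-d result even when the file holds a single row
    data = np.atleast_2d(np.loadtxt(fileobj, **kwargs))
    # one 1-d array per column
    return [data[:, i] for i in range(data.shape[1])]

a, b, c = loadtxt_cols(StringIO('1,2,3'), delimiter=',')
# a, b and c are now length-1 arrays rather than bare floats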
From jjstickel at vcn.com Wed Feb 2 10:19:13 2011
From: jjstickel at vcn.com (Jonathan Stickel)
Date: Wed, 02 Feb 2011 08:19:13 -0700
Subject: [SciPy-User] 1-d Interpolation of unevenly spaced data
Message-ID: <4D4975F1.5000005@vcn.com>

On 2/2/11 03:33, scipy-user-request at scipy.org wrote:
> From: "Dybbroe Adam"
> Subject: Re: [SciPy-User] 1-d Interpolation of unevenly spaced data
>
> Thank you so much, this was very useful. It works, yes. I had
> overlooked the possibility that my data was not sorted.
> [...]

You might also like to try the datasmooth scikit:

http://pypi.python.org/pypi/scikits.datasmooth/0.5

Although you may not need smoothing of your data, it can be useful for interpolation. It can be used with scattered data; unsorted and duplicates are OK. Here is the code:

import numpy as np
from matplotlib.pyplot import *
import scikits.datasmooth.regularsmooth as ds

# Load data and try to do the spline interpolation:
npzfile = np.load('ipol_test_arrays.npz')
wvl = npzfile['wvl']
resp = npzfile['resp']
xspl = np.linspace(wvl[0], wvl[-1], 300)

resp_ipol = ds.smooth_data(wvl, resp, d=2, lmbd=1e-10, xhat=xspl)

clf()
plot(wvl, resp, 'o', xspl, resp_ipol, lw=2)
draw()
show()

Regards,
Jonathan

From jgarc063 at fiu.edu Wed Feb 2 13:05:36 2011
From: jgarc063 at fiu.edu (Jorge Garcia)
Date: Wed, 2 Feb 2011 13:05:36 -0500
Subject: [SciPy-User] Installing Scipy from source

Thanks guys, that did it. I found the libraries in Ubuntu's repositories and got them from there. I have Scipy working now, thanks.

On Tue, Feb 1, 2011 at 9:15 AM, Christian K. wrote:
> The packages are called libblas-dev and liblapack-dev

From gael.varoquaux at normalesup.org Wed Feb 2 15:08:58 2011
From: gael.varoquaux at normalesup.org (Gael Varoquaux)
Date: Wed, 2 Feb 2011 21:08:58 +0100
Subject: [SciPy-User] graphs
Message-ID: <20110202200858.GB13480@phare.normalesup.org>

On Wed, Feb 02, 2011 at 03:50:58PM +0100, Josselin Jacquard wrote:
> Hi guys, I'm wondering what the state of graph utilities in scipy is.
> I'm looking for an algorithm to find negative cycles.
> Any clues?

Graph support in scipy is fairly small (basically, AFAIK, the only real graph algorithm is finding connected components on a sparse matrix). I suggest that you look at networkx.

Gaël

From Chris.Barker at noaa.gov Wed Feb 2 15:21:06 2011
From: Chris.Barker at noaa.gov (Christopher Barker)
Date: Wed, 02 Feb 2011 12:21:06 -0800
Subject: [SciPy-User] graphs
Message-ID: <4D49BCB2.40304@noaa.gov>

On 2/2/11 12:08 PM, Gael Varoquaux wrote:
> Graph support in scipy is fairly small [...]. I suggest that you look
> at networkx.

Never used it, but:

http://graphine.org

came up on another thread on another list -- it looks pretty cool.

-Chris

--
Christopher Barker, Ph.D.
Oceanographer

Emergency Response Division
NOAA/NOS/OR&R            (206) 526-6959   voice
7600 Sand Point Way NE   (206) 526-6329   fax
Seattle, WA  98115       (206) 526-6317   main reception

Chris.Barker at noaa.gov

From josselin.jacquard at gmail.com Wed Feb 2 15:34:36 2011
From: josselin.jacquard at gmail.com (Josselin Jacquard)
Date: Wed, 2 Feb 2011 21:34:36 +0100
Subject: [SciPy-User] graphs

Ok thanks guys

2011/2/2 Christopher Barker:
> Never used it, but:
>
> http://graphine.org
>
> came up on another thread on another list -- it looks pretty cool.
> [...]

From Dharhas.Pothina at twdb.state.tx.us Wed Feb 2 16:26:12 2011
From: Dharhas.Pothina at twdb.state.tx.us (Dharhas Pothina)
Date: Wed, 02 Feb 2011 15:26:12 -0600
Subject: [SciPy-User] np.genfromtxt bug : breaks when # is present.
Message-ID: <4D4977940200009B00035127@GWWEB.twdb.state.tx.us>

Hi,

I think I found a bug in np.genfromtxt when reading in data that has missing values indicated by # symbols. From what I can tell, if a # is in any of the fields, it is not seeing any of the data in the line to the right of the #.

To reproduce:

from StringIO import StringIO

data_str1 = '3.87 3.562 1.9 33.3 75.2 9.6\n13.87 3.562 1.9 ##.## 75.2 9.6\n'
data_str2 = '3.87 3.562 1.9 33.3 75.2 9.6\n13.87 3.562 1.9 #### 75.2 9.6\n'
data_str3 = '3.87 3.562 1.9 33.3 75.2 9.6\n13.87 3.562 1.9 NA 75.2 9.6\n'
data_str4 = '3.87 3.562 1.9 33.3 75.2 9.6\n13.87 3.562 1.9 N# 75.2 9.6\n'

np.genfromtxt(StringIO(data_str1), dtype=float, missing_values='##.##')
*** ValueError: Some errors were detected !
    Line #2 (got 3 columns instead of 6)

np.genfromtxt(StringIO(data_str2), dtype=float, missing_values='####')
*** ValueError: Some errors were detected !
    Line #2 (got 3 columns instead of 6)

np.genfromtxt(StringIO(data_str3), dtype=float, missing_values='NA')
array([[  3.87 ,   3.562,   1.9  ,  33.3  ,  75.2  ,   9.6  ],
       [ 13.87 ,   3.562,   1.9  ,     nan,  75.2  ,   9.6  ]])

np.genfromtxt(StringIO(data_str4), dtype=float, missing_values='N#')
*** ValueError: Some errors were detected !
    Line #2 (got 4 columns instead of 6)

I have a workaround replacing all #'s with N's before reading the data with genfromtxt.

- dharhas

From cwebster at enthought.com Wed Feb 2 16:33:25 2011
From: cwebster at enthought.com (Corran Webster)
Date: Wed, 2 Feb 2011 15:33:25 -0600
Subject: [SciPy-User] np.genfromtxt bug : breaks when # is present.
In-Reply-To: <4D4977940200009B00035127@GWWEB.twdb.state.tx.us>
References: <4D4977940200009B00035127@GWWEB.twdb.state.tx.us>

Hi,

'#' is the default comment marker in genfromtxt, so it will ignore anything on a line after a '#', as you observed. You can probably work around this by specifying a different comment character in the arguments:

np.genfromtxt( ... comments='%')

or some other character that won't appear in your input.

-- Corran

On Wed, Feb 2, 2011 at 3:26 PM, Dharhas Pothina <Dharhas.Pothina at twdb.state.tx.us> wrote:
> I think I found a bug in np.genfromtxt when reading in data that has
> missing values indicated by # symbols. From what I can tell, if a # is
> in any of the fields, it is not seeing any of the data in the line to
> the right of the #.
> [...]
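To make Corran's suggestion concrete, something along these lines should work on the data from the original post. The substitute comment character '%' is an arbitrary choice; any character absent from the data will do.

import numpy as np
from StringIO import StringIO

data = ('3.87 3.562 1.9 33.3 75.2 9.6\n'
        '13.87 3.562 1.9 ##.## 75.2 9.6\n')

# '%' never occurs in the data, so '#' is no longer treated as a
# comment marker and '##.##' can be flagged as the missing value
arr = np.genfromtxt(StringIO(data), dtype=float, comments='%',
                    missing_values='##.##')
# the '##.##' entry should come back as nan, as in the 'NA' example above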
From Dharhas.Pothina at twdb.state.tx.us Wed Feb 2 17:56:49 2011
From: Dharhas.Pothina at twdb.state.tx.us (Dharhas Pothina)
Date: Wed, 02 Feb 2011 16:56:49 -0600
Subject: [SciPy-User] np.genfromtxt bug : breaks when # is present.
Message-ID: <4D498CD10200009B0003513B@GWWEB.twdb.state.tx.us>

Hi,

I realized that after sending my email.

I've never seen environmental monitoring data, from equipment or from various entities, that has midline comments; i.e. any comments always start at the beginning of the line, with the comment character in the first position. Is there any value in having an optional behavior in genfromtxt that only ignores lines starting with the comment character, or is that too specific a use case?

- dharhas

>>> Corran Webster 2/2/2011 3:33 PM >>>
> '#' is the default comment marker in genfromtxt, so it will ignore
> anything on a line after a '#', as you observed. You can probably work
> around this by specifying a different comment character in the
> arguments: np.genfromtxt( ... comments='%')
> [...]

From pgmdevlist at gmail.com Wed Feb 2 18:10:13 2011
From: pgmdevlist at gmail.com (Pierre GM)
Date: Thu, 3 Feb 2011 00:10:13 +0100
Subject: [SciPy-User] np.genfromtxt bug : breaks when # is present.
Message-ID: <4B32D28A-E28C-4498-BF37-AD3C169E62F0@gmail.com>

On Feb 2, 2011, at 11:56 PM, Dharhas Pothina wrote:
> I've never seen environmental monitoring data, from equipment or from
> various entities, that has midline comments. [...]

True, but... Imagine you're working with some processed data (e.g., a CSV you have generated), where comments appear at the end of a row to describe some potential problems with that record. That's a not too far-fetched case where discarding comments inline is a good thing to do.

> Is there any value in having an optional behavior in genfromtxt that
> only ignores lines starting with the comment character, or is that too
> specific a use case?

I'm afraid that's too specific. genfromtxt is already bloated as it is, in my opinion, and adding yet another parameter to control this behavior would be a bit too much. I'm really not keen on implementing it and would advise you to follow Corran's suggestion: please specify a comment character different from # in the input. Now, of course, if there's really a popular need for this feature (say, more than 5 people...)...

P.

From emanuele at relativita.com Thu Feb 3 07:23:06 2011
From: emanuele at relativita.com (Emanuele Olivetti)
Date: Thu, 03 Feb 2011 13:23:06 +0100
Subject: [SciPy-User] graphs
Message-ID: <4D4A9E2A.30609@relativita.com>

I had some experience with graph-tool:
http://projects.skewed.de/graph-tool/
It is a very good tool in my opinion.

Friends of mine warmly suggest igraph:
http://cneurocvs.rmki.kfki.hu/igraph/

Anyway, if you need simple stuff and/or have no big scalability issues, I would suggest networkx, which is very quick to learn.

HTH,

E.

On 02/02/2011 09:34 PM, Josselin Jacquard wrote:
> Ok thanks guys
> [...]

From blattnem at gmail.com Thu Feb 3 07:28:39 2011
From: blattnem at gmail.com (Marcel Blattner)
Date: Thu, 3 Feb 2011 13:28:39 +0100
Subject: [SciPy-User] graphs

As a graph tool I really recommend networkx. I used it extensively and was always satisfied...

M.

On Thu, Feb 3, 2011 at 1:23 PM, Emanuele Olivetti wrote:
> I had some experience with graph-tool:
> http://projects.skewed.de/graph-tool/
> [...]
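Since the original question was about negative cycles: networkx ships a Bellman-Ford based test for exactly this, assuming a reasonably recent version (check your release's docs for negative_edge_cycle).

import networkx as nx

G = nx.DiGraph()
# the 0 -> 1 -> 2 -> 0 loop has total weight 1.0 - 2.0 + 0.5 = -0.5
G.add_weighted_edges_from([(0, 1, 1.0), (1, 2, -2.0), (2, 0, 0.5),
                           (2, 3, 1.0)])
print nx.negative_edge_cycle(G)   # True: a negative cycle exists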
From Dharhas.Pothina at twdb.state.tx.us Thu Feb 3 10:05:44 2011
From: Dharhas.Pothina at twdb.state.tx.us (Dharhas Pothina)
Date: Thu, 03 Feb 2011 09:05:44 -0600
Subject: [SciPy-User] np.genfromtxt bug : breaks when # is present.
Message-ID: <4D4A6FE80200009B0003518D@GWWEB.twdb.state.tx.us>

Makes sense. By the way, I think genfromtxt is very useful with all its flexibility in reading files. Thanks for your work on it.

- dharhas

>>> Pierre GM 2/2/2011 5:10 PM >>>
> I'm afraid that's too specific. genfromtxt is already bloated as it
> is, in my opinion, and adding yet another parameter to control this
> behavior would be a bit too much. [...]

From Dharhas.Pothina at twdb.state.tx.us Thu Feb 3 10:15:27 2011
From: Dharhas.Pothina at twdb.state.tx.us (Dharhas Pothina)
Date: Thu, 03 Feb 2011 09:15:27 -0600
Subject: [SciPy-User] When is scipy.io.loadmat bugfix expected to make it into a scipy release?
Message-ID: <4D4A722F0200009B00035192@GWWEB.twdb.state.tx.us>

Hi,

Several months ago Matthew Brett & others helped me read a funkily formatted matlab file that was breaking loadmat. Details at the link below:

http://old.nabble.com/scipy.io.loadmat-error-when-trying-to-read-a-.mat-file-ts29878030.html#a29887611

Back then he added some features to his github branch that enabled the file to be read. How can I find out when this fix will make it into a release version of Scipy? I need this to deploy some scripts which will run on several colleagues' systems, and I would really prefer to have them use release versions of scipy.

thanks

- dharhas

From mdekauwe at gmail.com Thu Feb 3 21:42:43 2011
From: mdekauwe at gmail.com (mdekauwe)
Date: Thu, 3 Feb 2011 18:42:43 -0800 (PST)
Subject: [SciPy-User] [SciPy-user] code structure advise for a model
Message-ID: <30841514.post@talk.nabble.com>

Hi,

I am translating some C++ code to Python and just wanted to ask some advice on structure. The original has everything declared globally and nothing passed via functions (I assume, but don't know, that this isn't just standard C++ practice!). So given this, I have a pretty much clean slate, as I can't quite just copy the functions over. I was thinking of something like this:

class Params:

    def __init__(self, fname):
        self.set_initial_conditions()
        self.read_input_file(fname)

    def set_initial_conditions(self):
        self.some_parm = 0.0

    def read_input_file(self, fname):
        # read file, change initial params if specified

Then I thought I could pass this as an object to the model class:

class Model(Params):

    def __init__(self):
        # blah

    def some_func(self):
        if (Params.some_param == something):
            foo

OR is this just a very bad way to structure it?

The other thing I can't decide on is how to pass the parameters and variables through the class. Because of the way the original is written (everything is global), I could just inherit things, but it does mean there is a lot of self. syntax. So I wondered if it might be better to pass things as function arguments? Any thoughts?

I am also half considering users from non-python backgrounds and what might seem very alien (syntax) to them.

thanks in advance

(ps. I am cross-posting this on comp.lang.python as I am not sure where is more appropriate).

From david at silveregg.co.jp Thu Feb 3 23:42:38 2011
From: david at silveregg.co.jp (David)
Date: Fri, 04 Feb 2011 13:42:38 +0900
Subject: [SciPy-User] [SciPy-user] code structure advise for a model
In-Reply-To: <30841514.post@talk.nabble.com>
References: <30841514.post@talk.nabble.com>
Message-ID: <4D4B83BE.8030807@silveregg.co.jp>

Hi,

On 02/04/2011 11:42 AM, mdekauwe wrote:
> I am translating some C++ code to Python and just wanted to ask some
> advice on structure.
> The original has everything declared globally and nothing passed via
> functions. [...]

Bit hard to say without more information, but you often use fewer custom classes than in a language like C++/Java to pass data around.

One rule of thumb to decide whether you want a class or to just pass arguments through a list/dict is whether you need your parameters to be stateful (which is a good thing to avoid, everything else being equal).

As for how to use inheritance: inheritance is tricky. One rule which works almost all the time to decide whether B should derive from A is whether an instance of B can be used whenever an instance of A needs to be used (this is called the Liskov substitution principle if you want to shine in discussions). Another rule is to avoid inheritance if you can: you should try to use composition instead, because it is more flexible (design mistakes in terms of inheritance are hard to fix because they impact all the subclasses of a changed class).

In that respect, from the information you gave us, I don't see why Model should inherit from Params.

As far as Params goes, something like the following would work equally well:

def create_params(some_param=0.0):
    data = {"some_param": some_param, "other_param": ...}
    return data

def create_param_from_file(filename):
    # read file content, parse it to get some_param
    return create_params(some_param)

This is simpler and uses common code for the parameter creation. Of course, it may be that your code is much more complicated, in which case a class may be better. In the latter case, a common idiom is:

class Params(object):
    # A class method is a method attached to the class, not to
    # instances (~ static methods in Java/C++).
    @classmethod
    def from_file(cls, filename):
        # read file content and parse it to get some_param
        return cls(some_param)

    def __init__(self, some_param=0.0):
        self.some_param = some_param

You then use it as follows:

params1 = Params(some_param=1.0)
params2 = Params.from_file("foo.txt")

Using class methods for alternate constructors is a very common idiom in Python.

cheers,

David

From mdekauwe at gmail.com Thu Feb 3 23:56:28 2011
From: mdekauwe at gmail.com (mdekauwe)
Date: Thu, 3 Feb 2011 20:56:28 -0800 (PST)
Subject: [SciPy-User] [SciPy-user] code structure advise for a model
In-Reply-To: <4D4B83BE.8030807@silveregg.co.jp>
Message-ID: <30841931.post@talk.nabble.com>

That is very helpful, thanks. So I think I will follow what you suggested and set up two functions, e.g.
def setup_initial_conditions():
    params = {"cat": 1, "mouse": 2}  # ...etc
    return params

def read_file(params, fname):
    # read file and replace any relevant values in params
    return params

I am still going to use a class for my model, as it has a lot of functions that I think tie together nicely:

class Model:

    def __init__(self, fname='x.dat'):
        self.fname = fname

    def main(self):
        params = setup_initial_conditions()
        params = read_file(params, self.fname)

    def work_out_something(self):
        # etc

I have a series of "pools" which go up and down through time, and I will have these as inherited and pass the rest of the model args around the code. I think perhaps, based on what you suggested, this sounds more sensible?

thanks

David Cournapeau-3 wrote:
> One rule of thumb to decide whether you want a class or to just pass
> arguments through a list/dict is whether you need your parameters to
> be stateful (which is a good thing to avoid, everything else being
> equal).
> [...]
> Using class methods for alternate constructors is a very common idiom
> in Python.
> cheers,
>
> David

From Chris.Barker at noaa.gov Fri Feb 4 12:16:31 2011
From: Chris.Barker at noaa.gov (Christopher Barker)
Date: Fri, 04 Feb 2011 09:16:31 -0800
Subject: [SciPy-User] [SciPy-user] code structure advise for a model
Message-ID: <4D4C346F.6000007@noaa.gov>

On 2/3/11 8:42 PM, David wrote:
> As for how to use inheritance: inheritance is tricky. One rule which
> works almost all the time to decide whether B should derive from A is
> whether an instance of B can be used whenever an instance of A needs
> to be used (this is called the Liskov substitution principle if you
> want to shine in discussions).

There is also the classic way of thinking about OO: use inheritance for an "is a" relationship:

a cat "is a" mammal.

> In that respect, from the information you gave us, I don't see why
> Model should inherit from Params.

Let's see: A Model "is a" set of parameters -- nope!

Another common relationship is the "has a" relationship:

A Model "has a" set of parameters -- yup!

So that means that your model would have a Params instance as an attribute.

> def create_params(some_param=0.0):
>     data = {"some_param": some_param, "other_param": ...}
>     return data

Yup -- a simple dict may be all you need.

-Chris

From ckkart at hoc.net Sat Feb 5 04:42:41 2011
From: ckkart at hoc.net (Christian K.)
Date: Sat, 05 Feb 2011 10:42:41 +0100
Subject: [SciPy-User] broadened step functions

Hi,

can you think of a faster way to evaluate a broadened step function, i.e. the convolution of a Heaviside step function with e.g. a Lorentz function, than this one?

def lbstep(x, amp, pos, sigma, sigmap=100.0):
    dx = x[-1] - x[0]
    up = x[-1] + 5*dx
    lo = x[0] - 5*dx
    f = amp/N.pi
    xp = N.linspace(lo, up, 1000)[:, N.newaxis]
    s2 = N.power(sigma, 2)
    i = trapz(sigma/(N.power(x - xp, 2) + s2) /
              (1 + N.exp(-sigmap*(xp - pos))), xp, axis=0)
    return f*i

Speed depends naturally on the lengths of x and xp, and the accuracy also on the upper and lower integration limits.

Thanks for any idea,
Christian
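If the logistic step in lbstep can be taken to its sharp limit (sigmap -> infinity), the convolution of a true Heaviside step with a Lorentzian has a closed form that avoids the quadrature entirely. A sketch follows; the function name is invented, and for finite sigmap this is only an approximation.

import numpy as np

def lbstep_sharp(x, amp, pos, sigma):
    # Heaviside step at ``pos`` convolved with a unit-area Lorentzian
    # of half-width ``sigma``:
    #   amp * (1/2 + arctan((x - pos)/sigma) / pi)
    return amp * (0.5 + np.arctan((x - pos) / sigma) / np.pi)

This follows from integrating the Lorentzian from pos to infinity; for large but finite sigmap the numerical curve should approach this result.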
From josef.pktd at gmail.com Sat Feb 5 16:47:29 2011
From: josef.pktd at gmail.com (josef.pktd at gmail.com)
Date: Sat, 5 Feb 2011 16:47:29 -0500
Subject: [SciPy-User] quantizing a distribution from a cdf in nd

On Sat, Jan 29, 2011 at 10:32 AM, wrote:
> I'm starting to get slowly into multivariate distributions.
>
> One functionality I need is to quantize the pdf on a regular grid
> when I have the cdf given.
> 1d is easy, 2d I figured out myself.
>
> Question: Is there code for the more than two dimensional case? Or
> does anyone know a formula or a reference?
>
> It's kind of an nd version of np.diff.
>
> (In case anyone is interested, it is going toward goodness of fit
> tests and maybe estimation of multivariate distributions. For the
> empirical part, I expect that np.histogramdd will be handy.)
>
> below is the recipe for 1d and 2d
>
> 1d is easy with np.diff
>
>>>> from scipy import stats
>>>> stats.beta.cdf([0, 0.25, 0.5, 0.75, 1], 10, 10)
> array([ 0.        ,  0.00890328,  0.5       ,  0.99109672,  1.        ])
>>>> np.diff(stats.beta.cdf([0, 0.25, 0.5, 0.75, 1], 10, 10))
> array([ 0.00890328,  0.49109672,  0.49109672,  0.00890328])
>
> I wrote a first version of a function for 2d, functions in attachment
>
>>>> unif_2d = lambda x, y: x*y
>
>>>> prob_bv_rectangle([0, 0], [1, 0.5], unif_2d)
> 0.5
>>>> prob_bv_rectangle([0, 0], [0.5, 0.5], unif_2d)
> 0.25
>>>> prob_quantize_cdf2(np.linspace(0, 1, 6), np.linspace(0, 1, 5), unif_2d)
> array([[ 0.05,  0.05,  0.05,  0.05],
>        [ 0.05,  0.05,  0.05,  0.05],
>        [ 0.05,  0.05,  0.05,  0.05],
>        [ 0.05,  0.05,  0.05,  0.05],
>        [ 0.05,  0.05,  0.05,  0.05]])
>>>> prob_quantize_cdf(np.linspace(0, 1, 6), np.linspace(0, 1, 5), unif_2d)
> array([[ 0.05,  0.05,  0.05,  0.05],
>        [ 0.05,  0.05,  0.05,  0.05],
>        [ 0.05,  0.05,  0.05,  0.05],
>        [ 0.05,  0.05,  0.05,  0.05],
>        [ 0.05,  0.05,  0.05,  0.05]])
>>>> prob_quantize_cdf([0, 0.25, 0.75, 1], np.linspace(0, 1, 5), unif_2d)
> array([[ 0.0625,  0.0625,  0.0625,  0.0625],
>        [ 0.125 ,  0.125 ,  0.125 ,  0.125 ],
>        [ 0.0625,  0.0625,  0.0625,  0.0625]])
>
> check:
>>>> prob_quantize_cdf([0, 0.25, 0.75, 1], np.linspace(0, 1, 5), unif_2d).sum()
> 1.0
>>>> 0.5/4
> 0.125
>>>> 0.25/4
> 0.0625
>
> Josef
So my current implementation is to group all the functions of the model in one class and pass the parameters to this class. (1). Based on previous suggestions I used ConfigParser to read a file containing various model parameters into a set of different dictionaries which I then passed around the code (for example control parameters: e.g. out_fname, log_fname, model_number and initial model parameters: eg. leaf size) However I was finding using the dictionary syntax made the code a little hard to read, so now I have packaged these dictionaries into classes e.g. class Dict2Class: def __init__(self, d): self.__dict__ = d so now instead of control_param['logfile'] I can just do control_param.logfile. Does that sound OK? I think it makes the code easier to read personally, but perhaps this is not great? (2). In line with the example the model has a set of "pools" (values) that get updated during every loop iterations (for example over all the days in the year). These "pools", for example the total amount of leaves on a tree are required throughout different model functions. I had toyed with inheriting these, but then I think it is hard to follow exactly where they are being adjusted. So I wondered if I could take a similar approach to (1) and declare and empty class and update this when I read the initial conditions file and then pass the pools object around between the model methods. e.g. class ModelPools: pass cpools = ModelPools() cpools.total_leaves = ini_cond['total_leaves'] so a general code structure now looks like this def read_confile(confname, section, data_type): """ read the config file with various model parameters into different dictionary, broken down by section """ return dict class Dict2Class: def __init__(self, d): self.__dict__ = d class ModelPools: """ fill when initial conditions file is read pass class Model: def __init__(self, config_fname, initialcond_fname): self.config_fname = config_fname self.initialcond_fname = initialcond_fname def main(self): # read the driving data in met_data = met_data = np.loadtxt(self.met_fname, comments='#') # read the configuration file, keep params grouped in dictionaries ini_cond = Dict2Class(rc.read_confile(self.initialcond_fname, 'ini', 'float')) photo_params = Dict2Class(rc.read_confile(self.config_fname, 'prodn', 'float')) control_params = Dict2Class(rc.read_confile(self.config_fname, 'control', 'float')) # setup pools structure cpools = ModelPools() cpools.total_leaves = ini_cond['total_leaves'] self.call_some_func(cpools, control_params, photo_params) thanks again. Christopher Barker wrote: > > On 2/3/11 8:42 PM, David wrote: >> As for how to use inheritance: inheritance is tricky. One rule which >> works almost all the time to decide where B should derive from A is >> whether an instance of B can be used whenever an instance of A needs to >> be used (this is called the Liskov substitution principle if you want to >> shine in discussions). > > There is also the classic way of thinking about OO: > > use inheritance for a "is a" relationship: > > a cat "is a" mammal. > >> In that aspect, from what information you gave us, I don't see why Model >> should inherit from Params. > > let's see: A Model "is a" set of parameters -- nope! > > Another common relationship is the "has a" relationship: > > A Model "has a" set of parameters -- yup! > > So that means that your model would have a Params instance as an > attribute. 
> >> def create_params(some_param=0.0): >> data = {"some_param": some_param, "other_param": ...} >> return data > > yup -- a simple dict may be all you need. > > -Chris > > -- > Christopher Barker, Ph.D. > Oceanographer > > Emergency Response Division > NOAA/NOS/OR&R (206) 526-6959 voice > 7600 Sand Point Way NE (206) 526-6329 fax > Seattle, WA 98115 (206) 526-6317 main reception > > Chris.Barker at noaa.gov > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user > > -- View this message in context: http://old.nabble.com/code-structure-advise-for-a-model-tp30841514p30860854.html Sent from the Scipy-User mailing list archive at Nabble.com. From yyc at solvcon.net Mon Feb 7 06:04:25 2011 From: yyc at solvcon.net (Yung-Yu Chen) Date: Mon, 7 Feb 2011 06:04:25 -0500 Subject: [SciPy-User] [SciPy-user] code structure advise for a model In-Reply-To: <30860854.post@talk.nabble.com> References: <30841514.post@talk.nabble.com> <4D4B83BE.8030807@silveregg.co.jp> <4D4C346F.6000007@noaa.gov> <30860854.post@talk.nabble.com> Message-ID: On Mon, Feb 7, 2011 at 00:27, mdekauwe wrote: > > OK given the above perhaps my latest implementation is wrong then? > > So to recap I have a model, lets say it predicts the photosynthesis by a > plant. There are some parameters which are read in from a file, for example > size of leaf, efficiency at converting sunlight etc. So my current > implementation is to group all the functions of the model in one class and > pass the parameters to this class. > > (1). > > Based on previous suggestions I used ConfigParser to read a file containing > various model parameters into a set of different dictionaries which I then > passed around the code (for example control parameters: e.g. out_fname, > log_fname, model_number and initial model parameters: eg. leaf size) > > However I was finding using the dictionary syntax made the code a little > hard to read, so now I have packaged these dictionaries into classes > > e.g. > > class Dict2Class: > def __init__(self, d): > self.__dict__ = d > > so now instead of control_param['logfile'] > > I can just do control_param.logfile. > > For your use case, code like: class AttributeDict(dict): def __getattr__(self, name): return self[name] def __setattr__(self, name, value): if name in self: self[name] = value else: super(AttributeDict, self).__setattr__(name, value) should be more straightforward. What you wanted is a dictionary that supports accessing its content through attribute access (x.y). What you did is replacing the name space of an object. Semantically it's not what you want. > Does that sound OK? I think it makes the code easier to read personally, > but > perhaps this is not great? > > It should be OK. But overriding the name space of an object, i.e., __dict__, could surprise you when you want further extension to your class. > (2). > > [deleted] > with regards, Yung-Yu Chen -- Yung-Yu Chen PhD candidate of Mechanical Engineering The Ohio State University, Columbus, Ohio +1 (614) 859 2436 http://solvcon.net/yyc/ -------------- next part -------------- An HTML attachment was scrubbed... 
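To make the difference concrete, here is a short demonstration; it repeats the two class definitions from the messages above so it runs on its own, and the 'logfile' key is just an example:

class Dict2Class:
    def __init__(self, d):
        self.__dict__ = d

class AttributeDict(dict):
    def __getattr__(self, name):
        return self[name]
    def __setattr__(self, name, value):
        if name in self:
            self[name] = value
        else:
            super(AttributeDict, self).__setattr__(name, value)

d = {'logfile': 'run.log'}

a = AttributeDict(d)          # a real dict that also allows attribute access
print(a.logfile)              # 'run.log', via __getattr__
print(a['logfile'])           # normal dict indexing still works
a.logfile = 'other.log'       # updates the dict entry because the key exists

b = Dict2Class(d)             # replaces b's namespace with d itself
print(b.logfile)              # 'run.log', but b is not a dict: b['logfile']
                              # fails, and b aliases d, so mutating d
d['logfile'] = 'changed.log'  # silently changes b's attributes as well
print(b.logfile)              # 'changed.log'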
From mdekauwe at gmail.com Mon Feb 7 06:09:57 2011
From: mdekauwe at gmail.com (mdekauwe)
Date: Mon, 7 Feb 2011 03:09:57 -0800 (PST)
Subject: [SciPy-User] [SciPy-user] code structure advise for a model
In-Reply-To: References: <30841514.post@talk.nabble.com> <4D4B83BE.8030807@silveregg.co.jp> <4D4C346F.6000007@noaa.gov> <30860854.post@talk.nabble.com>
Message-ID: <30862532.post@talk.nabble.com>

Hi, thanks. If you could explain that a little further it would be appreciated. Martin

Yung-Yu Chen-2 wrote:
> 
> On Mon, Feb 7, 2011 at 00:27, mdekauwe wrote:
> 
>> OK given the above, perhaps my latest implementation is wrong then?
>>
>> So to recap: I have a model, let's say it predicts photosynthesis by a
>> plant. There are some parameters which are read in from a file, for
>> example size of leaf, efficiency at converting sunlight etc. So my
>> current implementation is to group all the functions of the model in one
>> class and pass the parameters to this class.
>>
>> (1).
>>
>> Based on previous suggestions I used ConfigParser to read a file
>> containing various model parameters into a set of different dictionaries
>> which I then passed around the code (for example control parameters,
>> e.g. out_fname, log_fname, model_number, and initial model parameters,
>> e.g. leaf size).
>>
>> However I was finding that the dictionary syntax made the code a little
>> hard to read, so now I have packaged these dictionaries into classes,
>> e.g.
>>
>> class Dict2Class:
>>     def __init__(self, d):
>>         self.__dict__ = d
>>
>> so now instead of control_param['logfile'] I can just do
>> control_param.logfile.
>>
> For your use case, code like:
> 
> class AttributeDict(dict):
>     def __getattr__(self, name):
>         return self[name]
>     def __setattr__(self, name, value):
>         if name in self:
>             self[name] = value
>         else:
>             super(AttributeDict, self).__setattr__(name, value)
> 
> should be more straightforward. What you wanted is a dictionary that
> supports accessing its content through attribute access (x.y). What you
> did is replacing the name space of an object. Semantically it's not what
> you want.
> 
>> Does that sound OK? I think it makes the code easier to read personally,
>> but perhaps this is not great?
>>
> It should be OK. But overriding the name space of an object, i.e.,
> __dict__, could surprise you when you want further extension to your
> class.
> 
>> (2).
>>
>> [deleted]
>>
> with regards,
> Yung-Yu Chen
> 
> -- 
> Yung-Yu Chen
> PhD candidate of Mechanical Engineering
> The Ohio State University, Columbus, Ohio
> +1 (614) 859 2436
> http://solvcon.net/yyc/
> 
> _______________________________________________
> SciPy-User mailing list
> SciPy-User at scipy.org
> http://mail.scipy.org/mailman/listinfo/scipy-user
> 

-- 
View this message in context: http://old.nabble.com/code-structure-advise-for-a-model-tp30841514p30862532.html
Sent from the Scipy-User mailing list archive at Nabble.com.

From tobangi at gmail.com Mon Feb 7 12:17:47 2011
From: tobangi at gmail.com (Tobjan Brejicz)
Date: Mon, 7 Feb 2011 12:17:47 -0500
Subject: [SciPy-User] K-means clustering algorithm
Message-ID: 

Hello Scipy List:

I would like to know about good implementations of clustering-type algorithms in scipy, or maybe also in related packages. Specifically, I want to do k-means clustering.

Can someone recommend a k-means clustering implementation? For example, how do scipy, scikits.learn and other packages compare?

I am sorry if this is not the correct topic for this list. Thank you!
-Tob
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From gael.varoquaux at normalesup.org Mon Feb 7 12:27:53 2011
From: gael.varoquaux at normalesup.org (Gael Varoquaux)
Date: Mon, 7 Feb 2011 18:27:53 +0100
Subject: [SciPy-User] K-means clustering algorithm
In-Reply-To: References: Message-ID: <20110207172753.GA25730@phare.normalesup.org>

On Mon, Feb 07, 2011 at 12:17:47PM -0500, Tobjan Brejicz wrote:
> I would like to know about good implementations of clustering-type
> algorithms in scipy, or maybe also in related packages. Specifically,
> I want to do k-means clustering.
> Can someone recommend a k-means clustering implementation? For
> example, how do scipy, scikits.learn and other packages compare?

Scipy's k-means works fine. The scikit-learn implementation should be faster, but at the cost of depending on an extra package.

I would advise you to try both, time the difference on your data, and decide based on the result.

Gael

From zachary.pincus at yale.edu Mon Feb 7 13:20:35 2011
From: zachary.pincus at yale.edu (Zachary Pincus)
Date: Mon, 7 Feb 2011 13:20:35 -0500
Subject: [SciPy-User] K-means clustering algorithm
In-Reply-To: <20110207172753.GA25730@phare.normalesup.org> References: <20110207172753.GA25730@phare.normalesup.org> Message-ID: 

pycluster works reasonably well too -- all the backend stuff is in C.

http://bonsai.hgc.jp/~mdehoon/software/cluster/software.htm

On Feb 7, 2011, at 12:27 PM, Gael Varoquaux wrote:

> On Mon, Feb 07, 2011 at 12:17:47PM -0500, Tobjan Brejicz wrote:
>> I would like to know about good implementations of clustering-type
>> algorithms in scipy, or maybe also in related packages. Specifically,
>> I want to do k-means clustering.
>> Can someone recommend a k-means clustering implementation? For
>> example, how do scipy, scikits.learn and other packages compare?
>
> Scipy's k-means works fine. The scikit-learn implementation should be
> faster, but at the cost of depending on an extra package.
>
> I would advise you to try both, time the difference on your data, and
> decide based on the result.
>
> Gael
> _______________________________________________
> SciPy-User mailing list
> SciPy-User at scipy.org
> http://mail.scipy.org/mailman/listinfo/scipy-user

From tobangi at gmail.com Mon Feb 7 16:41:12 2011
From: tobangi at gmail.com (Tobjan Brejicz)
Date: Mon, 7 Feb 2011 16:41:12 -0500
Subject: [SciPy-User] K-means clustering algorithm
In-Reply-To: References: <20110207172753.GA25730@phare.normalesup.org> Message-ID: 

Thanks to you both!

-Tob

On Mon, Feb 7, 2011 at 1:20 PM, Zachary Pincus wrote:
> pycluster works reasonably well too -- all the backend stuff is in C.
>
> http://bonsai.hgc.jp/~mdehoon/software/cluster/software.htm
>
> On Feb 7, 2011, at 12:27 PM, Gael Varoquaux wrote:
>
> > On Mon, Feb 07, 2011 at 12:17:47PM -0500, Tobjan Brejicz wrote:
> >> I would like to know about good implementations of clustering-type
> >> algorithms in scipy, or maybe also in related packages. Specifically,
> >> I want to do k-means clustering.
> >> Can someone recommend a k-means clustering implementation? For
> >> example, how do scipy, scikits.learn and other packages compare?
> >
> > Scipy's k-means works fine. The scikit-learn implementation should be
> > faster, but at the cost of depending on an extra package.
> >
> > I would advise you to try both, time the difference on your data, and
> > decide based on the result.
> >
> > Gael
> > _______________________________________________
> > SciPy-User mailing list
> > SciPy-User at scipy.org
> > http://mail.scipy.org/mailman/listinfo/scipy-user
>
> _______________________________________________
> SciPy-User mailing list
> SciPy-User at scipy.org
> http://mail.scipy.org/mailman/listinfo/scipy-user
>
-------------- next part --------------
An HTML attachment was scrubbed...
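To make the thread's advice concrete, here is a minimal sketch using scipy's own k-means interface; the two-blob data is invented for illustration, and minit='points', which seeds the centroids from randomly chosen data points, is only one of several initialization options:

import numpy as np
from scipy.cluster.vq import kmeans2, whiten

# two fuzzy blobs in 2d, purely synthetic
np.random.seed(0)
data = np.vstack([np.random.randn(100, 2),
                  np.random.randn(100, 2) + 4.0])

# whitening rescales each feature to unit variance, which usually helps
# k-means when the features live on different scales
white = whiten(data)

# seed centroids from data points to avoid degenerate initializations
centroids, labels = kmeans2(white, 2, minit='points')
print(centroids)
print(np.bincount(labels))   # cluster sizes, roughly 100 and 100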
From jlconlin at gmail.com Mon Feb 7 17:45:03 2011
From: jlconlin at gmail.com (Jeremy Conlin)
Date: Mon, 7 Feb 2011 15:45:03 -0700
Subject: [SciPy-User] Mismatch architecture type, compiling scipy on Mac 10.6
Message-ID: 

I have compiled python 2.7 and numpy 1.5.1 in 64-bit on my Mac running 10.6.3. I'm trying to compile 64-bit scipy as well, but I'm running into a problem. I'm pretty sure that the problem is architecture related. The error(s) I get are copied below. The command I used to compile scipy is:

LDFLAGS="-arch x86_64 -Wall -undefined dynamic_lookup -bundle" FFLAGS="-arch x86_64" python setup.py install

I used the same command to compile numpy without any problems. I am using g95 and gcc (version info below). If someone has some hints for me, I'd be most appreciative.

Thanks,
Jeremy

$ g95 --version
G95 (GCC 4.0.4 (g95 0.91!) Oct 15 2008)
Copyright (C) 2002-2005 Free Software Foundation, Inc.
G95 comes with NO WARRANTY, to the extent permitted by law. You may redistribute copies of G95 under the terms of the GNU General Public License. For more information about these matters, see the file named COPYING

$ gcc --version
i686-apple-darwin10-gcc-4.2.1 (GCC) 4.2.1 (Apple Inc. build 5664)
Copyright (C) 2007 Free Software Foundation, Inc.
This is free software; see the source for copying conditions. There is NO warranty; not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.

Errors:
g95:f77: scipy/sparse/linalg/eigen/arpack/ARPACK/LAPACK/zlahqr.f
g95:f77: scipy/sparse/linalg/eigen/arpack/ARPACK/FWRAPPERS/veclib_cabi_f.f
ar: adding 50 object files to build/temp.macosx-10.4-x86_64-2.7/libarpack.a
/usr/bin/ranlib: archive member: build/temp.macosx-10.4-x86_64-2.7/libarpack.a(cgetv0.o) cputype (7) does not match previous archive members cputype (16777223) (all members must match)
/usr/bin/ranlib: archive member: build/temp.macosx-10.4-x86_64-2.7/libarpack.a(cnaitr.o) cputype (7) does not match previous archive members cputype (16777223) (all members must match)
/usr/bin/ranlib: archive member: build/temp.macosx-10.4-x86_64-2.7/libarpack.a(cnapps.o) cputype (7) does not match previous archive members cputype (16777223) (all members must match)
/usr/bin/ranlib: archive member: build/temp.macosx-10.4-x86_64-2.7/libarpack.a(cnaup2.o) cputype (7) does not match previous archive members cputype (16777223) (all members must match)
/usr/bin/ranlib: archive member: build/temp.macosx-10.4-x86_64-2.7/libarpack.a(cnaupd.o) cputype (7) does not match previous archive members cputype (16777223) (all members must match)
/usr/bin/ranlib: archive member: build/temp.macosx-10.4-x86_64-2.7/libarpack.a(cneigh.o) cputype (7) does not match previous archive members cputype (16777223) (all members must match)
/usr/bin/ranlib: archive member: build/temp.macosx-10.4-x86_64-2.7/libarpack.a(cneupd.o) cputype (7) does not match previous archive members cputype (16777223) (all members must match)
...
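Not a fix for the ranlib complaint itself, but a quick way to confirm which architecture the running Python was built for, which is the first thing to rule out with "cputype does not match" errors; everything below is standard library:

import platform
import sys

# a 64-bit interpreter reports '64bit' and a large maxsize; if this
# disagrees with the -arch flags used for the scipy build, mixed
# 32/64-bit object files like the ones above are the likely result
print(platform.architecture()[0])
print(platform.machine())
print(sys.maxsize > 2**32)   # True only on a 64-bit build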
From david at silveregg.co.jp Mon Feb 7 19:32:35 2011 From: david at silveregg.co.jp (David) Date: Tue, 08 Feb 2011 09:32:35 +0900 Subject: [SciPy-User] Mismatch architecture type, compiling scipy on Mac 10.6 In-Reply-To: References: Message-ID: <4D508F23.9040808@silveregg.co.jp> On 02/08/2011 07:45 AM, Jeremy Conlin wrote: > I have compiled python 2.7 and numpy.1.5.1 in 64-bit on my Mac running > 10.6.3. I'm trying to compile 64 bit scipy as well, but I'm running > into a problem. I'm pretty sure that the problem is architecture > related. The error(s) I get are copied below. The command I used to > compile scipy is: > > LDFLAGS="-arch x86_64 -Wall -undefined dynamic_lookup -bundle" > FFLAGS="-arch x86_64" python setup.py install > > I used the same command to compile numpy without any problems. I am > using g95 and gcc (version info below). Don't use g95, it has caused too many issues, especially on mac os x. Use gfortran, the one available on http://r.research.att.com/tools/ (NOT the one on hpc.sourceforge.org, which does not work well either). cheers, David From denis-bz-gg at t-online.de Wed Feb 9 06:44:26 2011 From: denis-bz-gg at t-online.de (denis) Date: Wed, 9 Feb 2011 03:44:26 -0800 (PST) Subject: [SciPy-User] K-means clustering algorithm In-Reply-To: References: Message-ID: <21b6528c-09af-4b07-a2b2-0a4ef8177fb3@r16g2000yqk.googlegroups.com> Tobjan, about how many data points do you have, what dimension, what k ? One size cannot fit all. Plain scipy.cluster http://docs.scipy.org/doc/scipy/reference/cluster.html has hierarchical clustering, good for large k, but its kmeans calls cholesky on a maybe-singular matrix; try cluster.vq.kmeans2( data, k, minit="points" ). (Be aware that k-means can be noisy, and measuring "quality" is tough.) As Gael says, scikits.learn has a number of clustering methods. pycluster is asfarasiknow designed for low-dim gene data. See also http://stackoverflow.com/questions/tagged/k-means . cheers -- denis On Feb 7, 6:17?pm, Tobjan Brejicz wrote: > Hello Scipy List: > > I would like to know about good implementations of clustering-type algorithm > in scipy, or maybe also in related package. ? Specific, I want to do k-means > clustering. From jlconlin at gmail.com Wed Feb 9 18:52:34 2011 From: jlconlin at gmail.com (Jeremy Conlin) Date: Wed, 9 Feb 2011 16:52:34 -0700 Subject: [SciPy-User] Mismatch architecture type, compiling scipy on Mac 10.6 In-Reply-To: <4D508F23.9040808@silveregg.co.jp> References: <4D508F23.9040808@silveregg.co.jp> Message-ID: On Mon, Feb 7, 2011 at 5:32 PM, David wrote: > On 02/08/2011 07:45 AM, Jeremy Conlin wrote: >> I have compiled python 2.7 and numpy.1.5.1 in 64-bit on my Mac running >> 10.6.3. ?I'm trying to compile 64 bit scipy as well, but I'm running >> into a problem. ?I'm pretty sure that the problem is architecture >> related. ?The error(s) I get are copied below. ?The command I used to >> compile scipy is: >> >> LDFLAGS="-arch x86_64 -Wall -undefined dynamic_lookup -bundle" >> FFLAGS="-arch x86_64" ?python setup.py install >> >> I used the same command to compile numpy without any problems. ?I am >> using g95 and gcc (version info below). > > Don't use g95, it has caused too many issues, especially on mac os x. > Use gfortran, the one available on http://r.research.att.com/tools/ (NOT > the one on hpc.sourceforge.org, which does not work well either). Thanks for the suggestion. Unfortunately I can't install a precompiled binary since I do not have root access on my machine. 
However, I was able to compile gfortran (version 4.5.2) on my own and it got me past my previous problems. I have now come up with a new error. It seems that gcc doesn't like the option "-faltivec" (full error copied below). A Google search indicates that this may be an old option that isn't necessary anymore. How can I remove this option? I know I didn't specify it in any environment variable. Thanks, Jeremy From charlesr.harris at gmail.com Wed Feb 9 19:07:42 2011 From: charlesr.harris at gmail.com (Charles R Harris) Date: Wed, 9 Feb 2011 17:07:42 -0700 Subject: [SciPy-User] Mismatch architecture type, compiling scipy on Mac 10.6 In-Reply-To: References: <4D508F23.9040808@silveregg.co.jp> Message-ID: On Wed, Feb 9, 2011 at 4:52 PM, Jeremy Conlin wrote: > On Mon, Feb 7, 2011 at 5:32 PM, David wrote: > > On 02/08/2011 07:45 AM, Jeremy Conlin wrote: > >> I have compiled python 2.7 and numpy.1.5.1 in 64-bit on my Mac running > >> 10.6.3. I'm trying to compile 64 bit scipy as well, but I'm running > >> into a problem. I'm pretty sure that the problem is architecture > >> related. The error(s) I get are copied below. The command I used to > >> compile scipy is: > >> > >> LDFLAGS="-arch x86_64 -Wall -undefined dynamic_lookup -bundle" > >> FFLAGS="-arch x86_64" python setup.py install > >> > >> I used the same command to compile numpy without any problems. I am > >> using g95 and gcc (version info below). > > > > Don't use g95, it has caused too many issues, especially on mac os x. > > Use gfortran, the one available on http://r.research.att.com/tools/ (NOT > > the one on hpc.sourceforge.org, which does not work well either). > > > Thanks for the suggestion. Unfortunately I can't install a > precompiled binary since I do not have root access on my machine. > However, I was able to compile gfortran (version 4.5.2) on my own and > it got me past my previous problems. > > I have now come up with a new error. It seems that gcc doesn't like > the option "-faltivec" (full error copied below). A Google search > indicates that this may be an old option that isn't necessary anymore. > How can I remove this option? I know I didn't specify it in any > environment variable. > > Altivec is a PPC thing, seems strange that it should turn up on Intel. Chuck -------------- next part -------------- An HTML attachment was scrubbed... URL: From ralf.gommers at googlemail.com Wed Feb 9 19:15:39 2011 From: ralf.gommers at googlemail.com (Ralf Gommers) Date: Thu, 10 Feb 2011 08:15:39 +0800 Subject: [SciPy-User] Mismatch architecture type, compiling scipy on Mac 10.6 In-Reply-To: References: <4D508F23.9040808@silveregg.co.jp> Message-ID: On Thu, Feb 10, 2011 at 8:07 AM, Charles R Harris wrote: > > > On Wed, Feb 9, 2011 at 4:52 PM, Jeremy Conlin wrote: > >> On Mon, Feb 7, 2011 at 5:32 PM, David wrote: >> > On 02/08/2011 07:45 AM, Jeremy Conlin wrote: >> >> I have compiled python 2.7 and numpy.1.5.1 in 64-bit on my Mac running >> >> 10.6.3. I'm trying to compile 64 bit scipy as well, but I'm running >> >> into a problem. I'm pretty sure that the problem is architecture >> >> related. The error(s) I get are copied below. The command I used to >> >> compile scipy is: >> >> >> >> LDFLAGS="-arch x86_64 -Wall -undefined dynamic_lookup -bundle" >> >> FFLAGS="-arch x86_64" python setup.py install >> >> >> >> I used the same command to compile numpy without any problems. I am >> >> using g95 and gcc (version info below). >> > >> > Don't use g95, it has caused too many issues, especially on mac os x. 
>> > Use gfortran, the one available on http://r.research.att.com/tools/(NOT >> > the one on hpc.sourceforge.org, which does not work well either). >> >> >> Thanks for the suggestion. Unfortunately I can't install a >> precompiled binary since I do not have root access on my machine. >> However, I was able to compile gfortran (version 4.5.2) on my own and >> it got me past my previous problems. >> >> I have now come up with a new error. It seems that gcc doesn't like >> the option "-faltivec" (full error copied below). A Google search >> indicates that this may be an old option that isn't necessary anymore. >> How can I remove this option? I know I didn't specify it in any >> environment variable. >> >> > Altivec is a PPC thing, seems strange that it should turn up on Intel. > > distutils is probably trying to create fat binaries because gcc supports that. The recommended binary gfortran does as well, but your home-compiled one maybe doesn't? Try the same with numscons, that will probably work better than distutils (as noted at http://www.scipy.org/Installing_SciPy/Mac_OS_X) Ralf -------------- next part -------------- An HTML attachment was scrubbed... URL: From robert.kern at gmail.com Wed Feb 9 19:37:57 2011 From: robert.kern at gmail.com (Robert Kern) Date: Wed, 9 Feb 2011 18:37:57 -0600 Subject: [SciPy-User] Mismatch architecture type, compiling scipy on Mac 10.6 In-Reply-To: References: <4D508F23.9040808@silveregg.co.jp> Message-ID: On Wed, Feb 9, 2011 at 17:52, Jeremy Conlin wrote: > On Mon, Feb 7, 2011 at 5:32 PM, David wrote: >> On 02/08/2011 07:45 AM, Jeremy Conlin wrote: >>> I have compiled python 2.7 and numpy.1.5.1 in 64-bit on my Mac running >>> 10.6.3. ?I'm trying to compile 64 bit scipy as well, but I'm running >>> into a problem. ?I'm pretty sure that the problem is architecture >>> related. ?The error(s) I get are copied below. ?The command I used to >>> compile scipy is: >>> >>> LDFLAGS="-arch x86_64 -Wall -undefined dynamic_lookup -bundle" >>> FFLAGS="-arch x86_64" ?python setup.py install >>> >>> I used the same command to compile numpy without any problems. ?I am >>> using g95 and gcc (version info below). >> >> Don't use g95, it has caused too many issues, especially on mac os x. >> Use gfortran, the one available on http://r.research.att.com/tools/ (NOT >> the one on hpc.sourceforge.org, which does not work well either). > > Thanks for the suggestion. ?Unfortunately I can't install a > precompiled binary since I do not have root access on my machine. > However, I was able to compile gfortran (version 4.5.2) on my own and > it got me past my previous problems. > > I have now come up with a new error. ?It seems that gcc doesn't like > the option "-faltivec" (full error copied below). ?A Google search > indicates that this may be an old option that isn't necessary anymore. > ?How can I remove this option? ?I know I didn't specify it in any > environment variable. Did you just follow the regular gfortran build instructions? You need to do some customizations from Apple before you can get it to work properly. See the section "Building a universal compiler" on http://r.research.att.com/tools/ Even if you don't use their binaries, I would suggest following their build advice. -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." ? 
-- Umberto Eco From robert.kern at gmail.com Wed Feb 9 19:50:23 2011 From: robert.kern at gmail.com (Robert Kern) Date: Wed, 9 Feb 2011 18:50:23 -0600 Subject: [SciPy-User] Mismatch architecture type, compiling scipy on Mac 10.6 In-Reply-To: References: <4D508F23.9040808@silveregg.co.jp> Message-ID: On Wed, Feb 9, 2011 at 17:52, Jeremy Conlin wrote: > On Mon, Feb 7, 2011 at 5:32 PM, David wrote: >> Don't use g95, it has caused too many issues, especially on mac os x. >> Use gfortran, the one available on http://r.research.att.com/tools/ (NOT >> the one on hpc.sourceforge.org, which does not work well either). > > Thanks for the suggestion. ?Unfortunately I can't install a > precompiled binary since I do not have root access on my machine. You can also extract the files from the package into a location that you do have permissions for. [Downloads]$ open gfortran-4.2.3.dmg [Downloads]$ pax -zrvf /Volumes/GNU\ Fortran\ 4.2.3/gfortran.pkg/Contents/Archive.pax.gz . ./usr ./usr/local ./usr/local/bin ./usr/local/bin/gfortran ./usr/local/bin/gfortran-4.0 ./usr/local/bin/gfortran-4.2 .... [Downloads]$ mv ./usr/local ~/gfortran [Downloads]$ ~/gfortran/bin/gfortran i686-apple-darwin8-gfortran-4.2: no input files -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." ? -- Umberto Eco From jlconlin at gmail.com Thu Feb 10 09:54:12 2011 From: jlconlin at gmail.com (Jeremy Conlin) Date: Thu, 10 Feb 2011 07:54:12 -0700 Subject: [SciPy-User] Mismatch architecture type, compiling scipy on Mac 10.6 In-Reply-To: References: <4D508F23.9040808@silveregg.co.jp> Message-ID: On Wed, Feb 9, 2011 at 5:50 PM, Robert Kern wrote: > On Wed, Feb 9, 2011 at 17:52, Jeremy Conlin wrote: >> On Mon, Feb 7, 2011 at 5:32 PM, David wrote: > >>> Don't use g95, it has caused too many issues, especially on mac os x. >>> Use gfortran, the one available on http://r.research.att.com/tools/ (NOT >>> the one on hpc.sourceforge.org, which does not work well either). >> >> Thanks for the suggestion. ?Unfortunately I can't install a >> precompiled binary since I do not have root access on my machine. > > You can also extract the files from the package into a location that > you do have permissions for. > > [Downloads]$ open gfortran-4.2.3.dmg > [Downloads]$ pax -zrvf /Volumes/GNU\ Fortran\ > 4.2.3/gfortran.pkg/Contents/Archive.pax.gz > . > ./usr > ./usr/local > ./usr/local/bin > ./usr/local/bin/gfortran > ./usr/local/bin/gfortran-4.0 > ./usr/local/bin/gfortran-4.2 > .... > [Downloads]$ mv ./usr/local ~/gfortran > [Downloads]$ ~/gfortran/bin/gfortran > i686-apple-darwin8-gfortran-4.2: no input files > Now this seems like a great idea. I'll try this out and report back if I have any problems. Jeremy From joel.andersson at esat.kuleuven.be Thu Feb 10 10:32:45 2011 From: joel.andersson at esat.kuleuven.be (Joel Andersson) Date: Thu, 10 Feb 2011 16:32:45 +0100 Subject: [SciPy-User] Symbolic dtype in scipy.sparse.csr_matrix? Message-ID: Dear Scipy users, I have a symbolic datatype, which I use to construct functions. It supports basic elementary operations (addition, multiplications, powers, trigonometric functions etc.) but not operations such as comparisons (you don't know if "a" is larger than "b" until you have assigned values to them). 
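A tiny illustration of that comparison problem, with a made-up SymScalar class standing in for the symbolic type: numpy happily does arithmetic on object arrays, but any code path that needs an ordering fails, which is what trips up most ready-made sparse routines.

import numpy as np

class SymScalar(object):
    """Toy symbolic scalar: records expressions, refuses comparisons."""
    def __init__(self, name):
        self.name = name
    def __add__(self, other):
        return SymScalar('(%s + %s)' % (self.name, other))
    def __mul__(self, other):
        return SymScalar('(%s * %s)' % (self.name, other))
    def __lt__(self, other):
        raise TypeError('cannot order symbols before values are assigned')
    def __repr__(self):
        return self.name

a = np.array([SymScalar('a'), SymScalar('b')], dtype=object)
print(a + a)    # works: [(a + a) (b + b)]
try:
    a < a       # factorizations and pivoting do comparisons like this
except TypeError as err:
    print(err)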
I have written a sparse matrix class in C++ which works fine also for symbolic scalars, but when I interface my code to Python, I want these datatypes to be translated into something more pythonic and more standard. I tested with scipy.sparse.csr_matrix (which is using the same sparse format as my C++ class) and constructing a matrix and printing it works fine, but as soon as I try to do any operation, it doesn't work anymore (even things like transpose, additions etc). Is there some alternative sparse matrix library out there that work better with symbolic datatypes? In particular, I want to be able to do more complex operations such as factorizations or solving linear systems. I can implement all these methods myself, but it would be nice to use something ready, Greetings, Joel -- Joel Andersson, PhD Student Electrical Engineering Department (ESAT-SCD), Room 05.11, K.U.Leuven, Kasteelpark Arenberg 10 - bus 2446, 3001 Heverlee, Belgium Phone: +32-16-321819 Mobile: +32-486-672874 (Belgium) / +34-63-4452111 (Spain) / +46-727-365878 (Sweden) Private address: Justus Lipsiusstraat 59/202, 3000 Leuven, Belgium -------------- next part -------------- An HTML attachment was scrubbed... URL: From nwagner at iam.uni-stuttgart.de Thu Feb 10 12:52:48 2011 From: nwagner at iam.uni-stuttgart.de (Nils Wagner) Date: Thu, 10 Feb 2011 18:52:48 +0100 Subject: [SciPy-User] [sundials-users] CasADi interface to CVODES and IDAS released In-Reply-To: <201102101624.p1AGOISC024608@nsplists.llnl.gov> References: <201102101624.p1AGOISC024608@nsplists.llnl.gov> Message-ID: On Thu, 10 Feb 2011 09:48:01 +0100 Benny Malengier wrote: > > Joel, > > This seems very nice. The website is down however, so I >can only browse some > code via sourceforge. > > The scipy community would probably also be interested in >this work, you > should post there too. > > They are not very happy with the current state of their >integrators, > everything in python that improves it is nice. >pysundials ( > http://pysundials.sourceforge.net/ ) is slow in moving >to sundials 2.4, that > was the way to use sundials from python. I made a simple >extensions to use > sundials from scipy using pysundials, > http://pypi.python.org/pypi/scikits.odes . > > I see you use numpy in the examples, is there full >support for numpy arrays? > The learning curve for casadi seems a bit higher though >than the normal > scipy integrators. I hope doc is good, I'll have a look >when the website is > back up. > > Benny Malengier > There is also http://code.google.com/p/python-sundials/ What is the reason for so many different interfaces to sundials ? It looks like reinventing the wheel. Who can shed some light on this development ? Nils > > 2011/2/8 Joel Andersson > > >> Dear Sundials users, >> >> It is my pleasure to announce to you the first public >>release of CasADi, a >> minimalistic computer algebra system implementing >>automatic differentiation >> in forward and adjoint modes by means of a hybrid >>symbolic/numeric >> approach. >> It is designed to be a low-level tool for quick, yet >>highly efficient >> implementation of algorithms for dynamic optimization. >>It is written in >> completely self-contained C++ code and is released under >>the LGPL license, >> meaning you are free to use it also in commercial code >>without paying >> royalties. It comes with a rather complete interface to >>Python, which we >> recommend you to use. 
No matter if the tool is used from >>C++ or Python, all >> functions will be evaluated on CasADi's virtual machine, >>meaning that there >> is little to no speed penalty when using the tool from >>Python. C-code >> generation is also supported, but typically unnecessary. >> >> This beta release contains a rather full-featured >>interface to CVODES and >> IDAS, releaving the user for much of the painstaking >>work when using these >> tools efficiently. In particular, the following is >>supported: >> >> * Automatic generation of the forward and adjoint right >>hand side when >> calculating ODE/DAE sensitivities >> * Automatic generation of Jacobian information in dense, >>banded or general >> sparse format >> * Interface to a sparse direct linear solver (SuperLU), >>to be used as an >> alternative linear solver or as a preconditioner module >>for Sundial's >> iterative linear solvers >> * Second and higher order sensitivities via a >>forward-over-forward or >> forward-over-adjoint approach >> >> Still on an experimental, but to some degree functional, >>level is to create >> matrix-valued computational graphs containing (among >>other things) >> integrator calls. When automatic differentiation are >>applied to these >> syntax >> trees, CVODES' or IDAS' support for calculating forward >>and adjoint >> sensitivites is used. This feature is intended to >>greatly simplify the >> implementation of derivative-based optimal control >>algorithms with imbedded >> ODE/DAE integrators (for example direct multiple >>shooting). >> >> Also on an experimental level is import of DAEs >>formulated in the Modelica >> modelling language and exported in a symbolic form (XML) >>using the >> open-source JModelica.org compiler. >> >> In addition to the Sundials interface, there are >>interfaces to KNITRO and >> IPOPT, two excellent NLP solvers. These can be used >>together with the >> Sundials interface for shooting methods or be used >>separately to implement >> collocation methods for optimal control. So far, we have >>used these >> interfaces to implement some different flavors of direct >>collocation, >> another popular way of solving optimal control problems. >>In this setting, >> CasADi will provide the NLP solver with exact Jacobian >>and Hessian >> information in a sparse format, making CasADi an(other) >>open-source >> alternative to AMPL in this aspect. >> >> The software is located on Sourceforge, please visit >>www.casadi.org for >> instructions on how to download and install the code. >> >> Best regards, >> Joel Andersson on behalf of the CasADi team >> >> >> -- >> Joel Andersson, PhD Student >> Electrical Engineering Department (ESAT-SCD), Room >>05.11, >> K.U.Leuven, Kasteelpark Arenberg 10 - bus 2446, 3001 >>Heverlee, Belgium >> Phone: +32-16-321819 >> Mobile: +32-486-672874 (Belgium) / +34-63-4452111 >>(Spain) / +46-727-365878 >> (Sweden) >> >> Private address: Justus Lipsiusstraat 59/202, 3000 >>Leuven, Belgium >> >> >> >> > > From joel.andersson at esat.kuleuven.be Thu Feb 10 13:40:58 2011 From: joel.andersson at esat.kuleuven.be (Joel Andersson) Date: Thu, 10 Feb 2011 19:40:58 +0100 Subject: [SciPy-User] [sundials-users] CasADi interface to CVODES and IDAS released In-Reply-To: References: <201102101624.p1AGOISC024608@nsplists.llnl.gov> Message-ID: Dear Nils, I guess the many interfaces confirm the position of Sundials as the most comprehensive, open-source ODE/DAE integrator suite around. 
The focus of this interface is the forward and adjoint sensitivity analysis, which I consider to be the most interesting feature of Sundials, especially if you intend to use the integrators in optimal control. This makes this interface very useful not only from Python, but also from C++ directly. I have not seen any interface to Sundials (to Python or any other language), which addresses this issue. I might be wrong, though. The closest you come is to use some generic AD tool like ADOL-C or CppAD together with the Sundials c-interface, which I have heard that some people are doing. Best regards, Joel 2011/2/10 Nils Wagner > On Thu, 10 Feb 2011 09:48:01 +0100 > Benny Malengier wrote: > > > > Joel, > > > > This seems very nice. The website is down however, so I > >can only browse some > > code via sourceforge. > > > > The scipy community would probably also be interested in > >this work, you > > should post there too. > > > > They are not very happy with the current state of their > >integrators, > > everything in python that improves it is nice. > >pysundials ( > > http://pysundials.sourceforge.net/ ) is slow in moving > >to sundials 2.4, that > > was the way to use sundials from python. I made a simple > >extensions to use > > sundials from scipy using pysundials, > > http://pypi.python.org/pypi/scikits.odes . > > > > I see you use numpy in the examples, is there full > >support for numpy arrays? > > The learning curve for casadi seems a bit higher though > >than the normal > > scipy integrators. I hope doc is good, I'll have a look > >when the website is > > back up. > > > > Benny Malengier > > > > There is also http://code.google.com/p/python-sundials/ > What is the reason for so many different interfaces to > sundials ? > It looks like reinventing the wheel. > Who can shed some light on this development ? > > Nils > > > > > 2011/2/8 Joel Andersson > > > > > >> Dear Sundials users, > >> > >> It is my pleasure to announce to you the first public > >>release of CasADi, a > >> minimalistic computer algebra system implementing > >>automatic differentiation > >> in forward and adjoint modes by means of a hybrid > >>symbolic/numeric > >> approach. > >> It is designed to be a low-level tool for quick, yet > >>highly efficient > >> implementation of algorithms for dynamic optimization. > >>It is written in > >> completely self-contained C++ code and is released under > >>the LGPL license, > >> meaning you are free to use it also in commercial code > >>without paying > >> royalties. It comes with a rather complete interface to > >>Python, which we > >> recommend you to use. No matter if the tool is used from > >>C++ or Python, all > >> functions will be evaluated on CasADi's virtual machine, > >>meaning that there > >> is little to no speed penalty when using the tool from > >>Python. C-code > >> generation is also supported, but typically unnecessary. > >> > >> This beta release contains a rather full-featured > >>interface to CVODES and > >> IDAS, releaving the user for much of the painstaking > >>work when using these > >> tools efficiently. 
In particular, the following is > >>supported: > >> > >> * Automatic generation of the forward and adjoint right > >>hand side when > >> calculating ODE/DAE sensitivities > >> * Automatic generation of Jacobian information in dense, > >>banded or general > >> sparse format > >> * Interface to a sparse direct linear solver (SuperLU), > >>to be used as an > >> alternative linear solver or as a preconditioner module > >>for Sundial's > >> iterative linear solvers > >> * Second and higher order sensitivities via a > >>forward-over-forward or > >> forward-over-adjoint approach > >> > >> Still on an experimental, but to some degree functional, > >>level is to create > >> matrix-valued computational graphs containing (among > >>other things) > >> integrator calls. When automatic differentiation are > >>applied to these > >> syntax > >> trees, CVODES' or IDAS' support for calculating forward > >>and adjoint > >> sensitivites is used. This feature is intended to > >>greatly simplify the > >> implementation of derivative-based optimal control > >>algorithms with imbedded > >> ODE/DAE integrators (for example direct multiple > >>shooting). > >> > >> Also on an experimental level is import of DAEs > >>formulated in the Modelica > >> modelling language and exported in a symbolic form (XML) > >>using the > >> open-source JModelica.org compiler. > >> > >> In addition to the Sundials interface, there are > >>interfaces to KNITRO and > >> IPOPT, two excellent NLP solvers. These can be used > >>together with the > >> Sundials interface for shooting methods or be used > >>separately to implement > >> collocation methods for optimal control. So far, we have > >>used these > >> interfaces to implement some different flavors of direct > >>collocation, > >> another popular way of solving optimal control problems. > >>In this setting, > >> CasADi will provide the NLP solver with exact Jacobian > >>and Hessian > >> information in a sparse format, making CasADi an(other) > >>open-source > >> alternative to AMPL in this aspect. > >> > >> The software is located on Sourceforge, please visit > >>www.casadi.org for > >> instructions on how to download and install the code. > >> > >> Best regards, > >> Joel Andersson on behalf of the CasADi team > >> > >> > >> -- > >> Joel Andersson, PhD Student > >> Electrical Engineering Department (ESAT-SCD), Room > >>05.11, > >> K.U.Leuven, Kasteelpark Arenberg 10 - bus 2446, 3001 > >>Heverlee, Belgium > >> Phone: +32-16-321819 > >> Mobile: +32-486-672874 (Belgium) / +34-63-4452111 > >>(Spain) / +46-727-365878 > >> (Sweden) > >> > >> Private address: Justus Lipsiusstraat 59/202, 3000 > >>Leuven, Belgium > >> > >> > >> > >> > > > > > > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user > -- Joel Andersson, PhD Student Electrical Engineering Department (ESAT-SCD), Room 05.11, K.U.Leuven, Kasteelpark Arenberg 10 - bus 2446, 3001 Heverlee, Belgium Phone: +32-16-321819 Mobile: +32-486-672874 (Belgium) / +34-63-4452111 (Spain) / +46-727-365878 (Sweden) Private address: Justus Lipsiusstraat 59/202, 3000 Leuven, Belgium -------------- next part -------------- An HTML attachment was scrubbed... URL: From zachary.pincus at yale.edu Thu Feb 10 17:05:25 2011 From: zachary.pincus at yale.edu (Zachary Pincus) Date: Thu, 10 Feb 2011 17:05:25 -0500 Subject: [SciPy-User] manifold learning tools? 
Message-ID: <220B9D46-093C-4A99-8C7D-EC706FBE7FDF@yale.edu>

Hello,

In googling around for manifold learning tools in python (specifically, diffusion maps a la Lafon and Coifman), I see a couple references on Stéfan and Matthieu's web pages, but I'm not sure if anyone has released anything.

Do you two, or does anyone else, know of any code out there for the basic techniques (isomap / LLE / diffusion maps / etc...)?

No big problem to re-implement... but if the code's already written, I'd start with that!

Thanks a ton!
Zach

From apalomba at austin.rr.com Thu Feb 10 17:10:21 2011
From: apalomba at austin.rr.com (Anthony Palomba)
Date: Thu, 10 Feb 2011 16:10:21 -0600
Subject: [SciPy-User] manifold learning tools?
In-Reply-To: <220B9D46-093C-4A99-8C7D-EC706FBE7FDF@yale.edu>
References: <220B9D46-093C-4A99-8C7D-EC706FBE7FDF@yale.edu>
Message-ID: 

I am very interested in this as well. An orbifold implementation would be welcome too.

-ap

On Thu, Feb 10, 2011 at 4:05 PM, Zachary Pincus wrote:
> Hello,
>
> In googling around for manifold learning tools in python
> (specifically, diffusion maps a la Lafon and Coifman), I see a couple
> references on Stéfan and Matthieu's web pages, but I'm not sure if
> anyone has released anything.
>
> Do you two, or does anyone else, know of any code out there for the
> basic techniques (isomap / LLE / diffusion maps / etc...)?
>
> No big problem to re-implement... but if the code's already written,
> I'd start with that!
>
> Thanks a ton!
> Zach
> _______________________________________________
> SciPy-User mailing list
> SciPy-User at scipy.org
> http://mail.scipy.org/mailman/listinfo/scipy-user
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From matthieu.brucher at gmail.com Thu Feb 10 17:14:56 2011
From: matthieu.brucher at gmail.com (Matthieu Brucher)
Date: Thu, 10 Feb 2011 23:14:56 +0100
Subject: [SciPy-User] manifold learning tools?
In-Reply-To: <220B9D46-093C-4A99-8C7D-EC706FBE7FDF@yale.edu>
References: <220B9D46-093C-4A99-8C7D-EC706FBE7FDF@yale.edu>
Message-ID: 

Hi,

The code for LLE (I think this one has a bug), Isomap, Diffusion maps, ... is available in the first release of scikits.learn. You may also find it in some branches of the project, mainly Fabian or Alexandre's manifold-light (with a fixed LLE, but no Isomap). I don't have time to finish the port to the current scikit's API, and it takes too long to have something like it integrated anyway.

Matthieu

2011/2/10 Zachary Pincus 
> Hello,
>
> In googling around for manifold learning tools in python
> (specifically, diffusion maps a la Lafon and Coifman), I see a couple
> references on Stéfan and Matthieu's web pages, but I'm not sure if
> anyone has released anything.
>
> Do you two, or does anyone else, know of any code out there for the
> basic techniques (isomap / LLE / diffusion maps / etc...)?
>
> No big problem to re-implement... but if the code's already written,
> I'd start with that!
>
> Thanks a ton!
> Zach
> _______________________________________________
> SciPy-User mailing list
> SciPy-User at scipy.org
> http://mail.scipy.org/mailman/listinfo/scipy-user

-- 
Information System Engineer, Ph.D.
Blog: http://matt.eifelle.com
LinkedIn: http://www.linkedin.com/in/matthieubrucher
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From zachary.pincus at yale.edu Thu Feb 10 17:16:39 2011
From: zachary.pincus at yale.edu (Zachary Pincus)
Date: Thu, 10 Feb 2011 17:16:39 -0500
Subject: [SciPy-User] manifold learning tools?
In-Reply-To: References: <220B9D46-093C-4A99-8C7D-EC706FBE7FDF@yale.edu> Message-ID: > The code for LLE (I think this one has a bug), Isomap, Diffusion > maps, ... is available in the first release of scikits.learn. You > may also find it in some branches of the project, mainly Fabian or > Alexandre's manifold-light (with a fixed LLE, but no Isomap). I > don't have time to finish the port to the current scikit's API, and > it takes too long to have something like it integrated anyway. Oh great, thanks! I poked around the scikits.learn repo a bit but must have missed those -- sorry. I'll check that out in more detail. Zach From opossumnano at gmail.com Fri Feb 11 08:42:11 2011 From: opossumnano at gmail.com (Tiziano Zito) Date: Fri, 11 Feb 2011 14:42:11 +0100 Subject: [SciPy-User] manifold learning tools? In-Reply-To: <220B9D46-093C-4A99-8C7D-EC706FBE7FDF@yale.edu> References: <220B9D46-093C-4A99-8C7D-EC706FBE7FDF@yale.edu> Message-ID: <20110211134211.GH2989@tulpenbaum.cognition.tu-berlin.de> > In googling around for manifold learning tools in python > (specifically, diffusion maps a la Lafon and Coifman), I see a couple > references on St?fan and Matthieu's web pages, but I'm not sure if > anyone has released anything. > > Do you two, or does anyone else, know of any code out there for the > basic techniques (isomap / LLE / diffusion maps / etc...)? LLE [1] and Hessian-LLE [2] are available in MDP [3]. You may be interested in Growing Neural Gas [4] too. hth, tiziano [1] http://mdp-toolkit.sourceforge.net/node_list.html#mdp.nodes.LLENode [2] http://mdp-toolkit.sourceforge.net/node_list.html#mdp.nodes.HLLENode [3] http://mdp-toolkit.sourceforge.net/ [4] http://mdp-toolkit.sourceforge.net/node_list.html#mdp.nodes.GrowingNeuralGasNode From ndbecker2 at gmail.com Fri Feb 11 09:11:38 2011 From: ndbecker2 at gmail.com (Neal Becker) Date: Fri, 11 Feb 2011 09:11:38 -0500 Subject: [SciPy-User] [signal] zpk -> s domain? Message-ID: If I have a transfer function in zpk, is there a way to transform to poles and zeros in s domain? From josef.pktd at gmail.com Fri Feb 11 09:49:59 2011 From: josef.pktd at gmail.com (josef.pktd at gmail.com) Date: Fri, 11 Feb 2011 09:49:59 -0500 Subject: [SciPy-User] [signal] zpk -> s domain? In-Reply-To: References: Message-ID: On Fri, Feb 11, 2011 at 9:11 AM, Neal Becker wrote: > If I have a transfer function in zpk, is there a way to transform to poles and > zeros in s domain? I don't know what s domain is. zpk to transfer function as 2 polynomials signal.zpk2tf there are functions to divide 2 polynomials to get one polynomial with a finite number of terms as approximation to the infinite impulse response /transfer function. It's also possible to get the roots of the one polynomial. there are also functions in scipy.signal to calculate some partial fraction expansion that I didn't figure out yet. (Somewhere there is an answer) Josef > > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user > From warren.weckesser at enthought.com Fri Feb 11 10:21:57 2011 From: warren.weckesser at enthought.com (Warren Weckesser) Date: Fri, 11 Feb 2011 09:21:57 -0600 Subject: [SciPy-User] [signal] zpk -> s domain? In-Reply-To: References: Message-ID: On Fri, Feb 11, 2011 at 8:11 AM, Neal Becker wrote: > If I have a transfer function in zpk, is there a way to transform to poles > and > zeros in s domain? 
> > I'm confused by the question, because if you have the transfer function "in zpk", don't you already have the zeros, poles and gain? Warren > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user > -------------- next part -------------- An HTML attachment was scrubbed... URL: From silva at lma.cnrs-mrs.fr Fri Feb 11 10:40:55 2011 From: silva at lma.cnrs-mrs.fr (Fabrice Silva) Date: Fri, 11 Feb 2011 12:40:55 -0300 Subject: [SciPy-User] [signal] zpk -> s domain? In-Reply-To: References: Message-ID: <1297438855.1841.22.camel@florian-desktop> El ven., 11-02-2011 a las 09:21 -0600, Warren Weckesser escribi?: > On Fri, Feb 11, 2011, Neal Becker wrote: > > If I have a transfer function in zpk, is there a way to transform to > > poles and zeros in s domain? > I'm confused by the question, because if you have the transfer > function "in zpk", don't you already have the zeros, poles and gain? Is variable z unambigously related to the Laplace variable s ? Frequential representations of z-domain filter are computed with z = exp(jw/Fs). Is there the same relation z=exp(s/Fs) so that calculations of poles and zeros would merely be a matter of inverting this relation s=Fs log(z) ? -- Fabrice Silva From warren.weckesser at enthought.com Fri Feb 11 10:40:36 2011 From: warren.weckesser at enthought.com (Warren Weckesser) Date: Fri, 11 Feb 2011 09:40:36 -0600 Subject: [SciPy-User] [signal] zpk -> s domain? In-Reply-To: <1297438855.1841.22.camel@florian-desktop> References: <1297438855.1841.22.camel@florian-desktop> Message-ID: On Fri, Feb 11, 2011 at 9:40 AM, Fabrice Silva wrote: > El ven., 11-02-2011 a las 09:21 -0600, Warren Weckesser escribi?: > > On Fri, Feb 11, 2011, Neal Becker wrote: > > > If I have a transfer function in zpk, is there a way to transform to > > > poles and zeros in s domain? > > > I'm confused by the question, because if you have the transfer > > function "in zpk", don't you already have the zeros, poles and gain? > > Is variable z unambigously related to the Laplace variable s ? > Frequential representations of z-domain filter are computed with > z = exp(jw/Fs). > > In scipy.signal (and in Matlab), "zpk" means "zeros (z), poles (p) and gain (k)". Warren > Is there the same relation z=exp(s/Fs) so that calculations of poles and > zeros would merely be a matter of inverting this relation s=Fs log(z) ? > > -- > Fabrice Silva > > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user > -------------- next part -------------- An HTML attachment was scrubbed... URL: From silva at lma.cnrs-mrs.fr Fri Feb 11 10:58:01 2011 From: silva at lma.cnrs-mrs.fr (Fabrice Silva) Date: Fri, 11 Feb 2011 12:58:01 -0300 Subject: [SciPy-User] [signal] zpk -> s domain? In-Reply-To: References: <1297438855.1841.22.camel@florian-desktop> Message-ID: <1297439881.1841.24.camel@florian-desktop> > > El ven., 11-02-2011 a las 09:21 -0600, Warren Weckesser escribi?: > > > On Fri, Feb 11, 2011, Neal Becker wrote: > > > > If I have a transfer function in zpk, is there a way to > transform to > > > > poles and zeros in s domain? > > > > > I'm confused by the question, because if you have the transfer > > > function "in zpk", don't you already have the zeros, poles and > gain? > > > > Is variable z unambigously related to the Laplace variable s ? 
> > Frequential representations of z-domain filter are computed with > > z = exp(jw/Fs). > In scipy.signal (and in Matlab), "zpk" means "zeros (z), poles (p) and > gain (k)". No problem with that, but it could be design as a z-domain (discrete time) transfer function, and he (Neal) would need properties in the Laplace domain... That is the way I understand the question. -- Fabrice From josef.pktd at gmail.com Fri Feb 11 10:56:28 2011 From: josef.pktd at gmail.com (josef.pktd at gmail.com) Date: Fri, 11 Feb 2011 10:56:28 -0500 Subject: [SciPy-User] [signal] zpk -> s domain? In-Reply-To: <1297439881.1841.24.camel@florian-desktop> References: <1297438855.1841.22.camel@florian-desktop> <1297439881.1841.24.camel@florian-desktop> Message-ID: On Fri, Feb 11, 2011 at 10:58 AM, Fabrice Silva wrote: >> > El ven., 11-02-2011 a las 09:21 -0600, Warren Weckesser escribi?: >> > > On Fri, Feb 11, 2011, Neal Becker wrote: >> > > > If I have a transfer function in zpk, is there a way to >> transform to >> > > > poles and zeros in s domain? >> > >> > > I'm confused by the question, because if you have the transfer >> > > function "in zpk", don't you already have the zeros, poles and >> gain? >> > >> > Is variable z unambigously related to the Laplace variable s ? >> > Frequential representations of z-domain filter are computed with >> > z = exp(jw/Fs). > >> In scipy.signal (and in Matlab), "zpk" means "zeros (z), poles (p) and >> gain (k)". > > No problem with that, but it could be design as a z-domain (discrete > time) transfer function, and he (Neal) would need properties in the > Laplace domain... > That is the way I understand the question. I like Neal's short and cryptic questions. We can always guess what he really meant. Just like the debates about Keynes or the Bible. :) Josef > > -- > Fabrice > > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user > From ndbecker2 at gmail.com Fri Feb 11 11:23:14 2011 From: ndbecker2 at gmail.com (Neal Becker) Date: Fri, 11 Feb 2011 11:23:14 -0500 Subject: [SciPy-User] [signal] zpk -> s domain? References: Message-ID: Sorry for the cryptic question - for an engineer working in my field it would not be cryptic. I have a digital filter design, which is specified as poles and zeros in the z- transform domain (zpk). k (z-z0)(z-z1)... ----------------- (z-p0)(z-p1)... I want it expressed in the s domain (laplace transform), in a form such as: (s-a)(s-b)... ----------- (s-c)(s-d)... or s^3 + a s^2 + b s + c --------------------- s^3 + d s^2 + e s + f or similar It is true that z=e^(sT), but simply making this substitution doesn't seem very useful. I thought maybe there was some obvious way I'm overlooking. If I wanted to go from s->z, I could use e.g. bilinear transform. But the other direction? From silva at lma.cnrs-mrs.fr Fri Feb 11 12:29:21 2011 From: silva at lma.cnrs-mrs.fr (Fabrice Silva) Date: Fri, 11 Feb 2011 14:29:21 -0300 Subject: [SciPy-User] [signal] zpk -> s domain? In-Reply-To: References: Message-ID: <1297445361.1841.31.camel@florian-desktop> El ven., 11-02-2011 a las 11:23 -0500, Neal Becker escribi?: > > I have a digital filter design, which is specified as poles and zeros > in the z-transform domain (zpk). > > k (z-z0)(z-z1)... > ----------------- > (z-p0)(z-p1)... > > I want it expressed in the s domain (laplace transform), in a form > such as: > > (s-a)(s-b)... > ----------- > (s-c)(s-d)... 
> It is true that z=e^(sT), but simply making this substitution doesn't
> seem very useful.

Yes it is! For each pole pn you have to solve pn = exp(sn*Ts), which has an infinity of solutions (due to aliasing when sampling):

sn = Fs*( log(abs(pn)) + 1j*(angle(pn) + 2*k*pi) ), k being an integer.

Choose for each pn the one that lies in the frequency bandwidth of interest, [-Fs/2, Fs/2] or [0, Fs]. The same for the zeros. It then remains to identify the gain, a mere scalar...

-- 
Fabrice
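A minimal numpy sketch of that log-based inversion (the inverse of the matched-z mapping z = exp(s/Fs), taking the principal branch for each pole and zero); the gain matching at DC is one possible convention, not the only one, and the example filter is invented:

import numpy as np
from scipy import signal

def zpk_to_s(z, p, k, fs):
    """Map z-plane zeros/poles to the s-plane via s = fs*log(z),
    picking the principal branch (the solution within [-fs/2, fs/2])."""
    z = np.asarray(z, dtype=complex)
    p = np.asarray(p, dtype=complex)
    zs = fs * np.log(z)   # principal value of the complex log
    ps = fs * np.log(p)
    # match the gain at a reference point, here z=1 <-> s=0 (DC),
    # assuming neither system has a pole or zero exactly there
    b_z, a_z = signal.zpk2tf(z, p, k)
    h_dc = np.polyval(b_z, 1.0) / np.polyval(a_z, 1.0)
    ks = h_dc * np.prod(-ps) / np.prod(-zs)   # empty products are 1
    return zs, ps, ks

# example: a single-pole discrete low-pass, fs = 1000 Hz
zs, ps, ks = zpk_to_s(z=[], p=[0.9], k=0.1, fs=1000.0)
print(ps)   # pole near fs*log(0.9), about -105.4 rad/s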
My current solution is quite complicated. It rotates an *image* around *center*:

    import scipy as S
    from scipy import ndimage

    center = S.array(ndimage.center_of_mass( image[:,:,-1] ))
    orig_shape = S.array(image.shape[0:2], dtype=S.float32)
    im_center = orig_shape/2.0  # only h,w
    bigger_shape = orig_shape + S.absolute( center - im_center )
    bigger_shape = S.array([ bigger_shape.max(), bigger_shape.max() ])  # square shaped
    bigger = S.zeros( (bigger_shape[0], bigger_shape[1], image.shape[2]), dtype=S.float32 )
    # Now I copy image into the top left of bigger
    bigger[0:orig_shape[0], 0:orig_shape[1], :] = image[:]
    bigger_center = bigger_shape/2.0
    image = ndimage.shift( input=bigger,
                           shift=S.concatenate( (bigger_center-center, [0]) ),
                           order=1 )

The idea is to make enough room, then shift the bigger array such that *center* will be in the center of bigger. If you then rotate via ndimage.rotate(image, order=1) the effect is as if you rotated around the point *center*.

What do you think?

Samuel
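P.S. After sending this it occurred to me that the shift and the rotation could perhaps be folded into a single resampling pass with ndimage.affine_transform. A rough, untested sketch of that idea (2-D only, so it would have to be applied per channel, and the sign conventions may need flipping):

    import numpy as np
    from scipy import ndimage

    def rotate_about(channel, theta_deg, center):
        theta = np.deg2rad(theta_deg)
        c, s = np.cos(theta), np.sin(theta)
        rot = np.array([[c, -s], [s, c]])       # maps output to input coordinates
        center = np.asarray(center, dtype=float)
        offset = center - np.dot(rot, center)   # the translate-rotate-translate part
        return ndimage.affine_transform(channel, rot, offset=offset, order=1)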
From bevan07 at gmail.com Sun Feb 13 18:50:58 2011 From: bevan07 at gmail.com (bevan j) Date: Sun, 13 Feb 2011 15:50:58 -0800 (PST) Subject: [SciPy-User] [SciPy-user] help with optimizing (constrained) Message-ID: <30917753.post@talk.nabble.com>

Hello,

I am a bit (maybe more than a bit) confused about constrained optimizing in scipy. I would like to constrain 3 terms (term1, term2, term3) to >= 0.01. I have attached some working code with what I have so far. It is running two data sets (one generated from the model I am trying to fit, and then a real-world example). I think I should be able to get a better fit to the real-world example.

Any help is appreciated.
Bevan

http://old.nabble.com/file/p30917753/3p_standalone.py 3p_standalone.py

From dominique.orban at gmail.com Sun Feb 13 22:52:58 2011 From: dominique.orban at gmail.com (Dominique Orban) Date: Sun, 13 Feb 2011 22:52:58 -0500 Subject: [SciPy-User] Equality of lists and arrays Message-ID:

Hey there,

I'm wondering if the following is intentional.

    1< []==[0]   # Lists.
    1> False

    2< import numpy as np
    3< a = np.array([]) ; b = np.array([0])
    4< a==b       # Arrays.
    4> array([], dtype=bool)

    5< np.all(a==b)
    5> True

This is creating inconsistencies for me. If this behavior is intentional, what's the intended way to check for equality of arrays?

Thanks.

-- Dominique

From robert.kern at gmail.com Sun Feb 13 23:16:41 2011 From: robert.kern at gmail.com (Robert Kern) Date: Sun, 13 Feb 2011 22:16:41 -0600 Subject: [SciPy-User] Equality of lists and arrays In-Reply-To: References: Message-ID:

On Sun, Feb 13, 2011 at 21:52, Dominique Orban wrote:
> I'm wondering if the following is intentional.
>
> 1< []==[0]   # Lists.
> 1> False
>
> 2< import numpy as np
> 3< a = np.array([]) ; b = np.array([0])
> 4< a==b       # Arrays.
> 4> array([], dtype=bool)

Everything up to here is certainly intended. The == operator broadcasts, just like any other binary operator. Since the shape of b is (1,), it will broadcast that array to be the same as a.shape: namely (0,).

> 5< np.all(a==b)
> 5> True
>
> This is creating inconsistencies for me. If this behavior is
> intentional, what's the intended way to check for equality of arrays?

We implement np.all(x) as np.logical_and.reduce(x). The initial value for the np.logical_and ufunc is True (just like the initial value for np.multiply is 1), so when given an empty array, you get that initial value. This happens to coincide with regular Python's treatment of all([]).

I suspect that you don't want the broadcasting behavior. Try this function instead:

    def really_equal(a, b):
        if a.shape != b.shape:
            return False
        return np.all(a == b)

If you are using this in unit tests, use np.testing.assert_array_equal() instead.

-- Robert Kern

"I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth."  -- Umberto Eco

From dodibitran at gmail.com Mon Feb 14 04:43:53 2011 From: dodibitran at gmail.com (Dudi Bitran) Date: Mon, 14 Feb 2011 09:43:53 +0000 (UTC) Subject: [SciPy-User] Reading TDM/TDMS Files with scipy References: Message-ID:

Sebastian Haase gmail.com> writes:
> On Wed, Jan 12, 2011 at 8:45 AM, Floris hotmail.com> wrote:
> > Nils Wagner iam.uni-stuttgart.de> writes:
> > > Is it possible to read TDM/TDMS files with scipy?
> > [...]
> > I made a little tool for that: pyTDMS.
> > http://sourceforge.net/projects/pytdms/
> [...]

The file looks great, but when I try to use it I get:

    Traceback (most recent call last):
      File "", line 17, in
      File "", line 14, in main
      File "pyTDMS.py", line 892, in read
        data = readSegment(f,sz,data)
      File "pyTDMS.py", line 755, in readSegment
        newdata = readRawData(f,leadin,newobjects,newobjectorder,filesize)
      File "pyTDMS.py", line 609, in readRawData
        n_chunks = total_chunks / chunk_size
    ZeroDivisionError: integer division or modulo by zero

I tried to debug it but I couldn't find the source of the problem. If you can, please help me; I have been trying for a very long time to read these files in Python.
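P.S. From the traceback it looks like chunk_size comes out as zero for one of the segments. I do not know the TDMS format well enough to say whether that is legal, but a blind guard like this in readRawData would at least avoid the crash (a guess based only on the traceback, not a real fix):

    # around pyTDMS.py line 609, in readRawData (sketch):
    if chunk_size == 0:
        n_chunks = 0                      # segment without raw data?
    else:
        n_chunks = total_chunks / chunk_size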
From korn at freisingnet.de Mon Feb 14 06:10:26 2011 From: korn at freisingnet.de (Johannes Korn) Date: Mon, 14 Feb 2011 12:10:26 +0100 Subject: [SciPy-User] NetcdfIntepolatingFunction Message-ID:

Hi,

I'd like to use "NetCDFInterpolatingFunction" to obtain values from an ECMWF-Reanalysis file interpolated in time and space.

I tried:

    In [88]: a=Scientific.Functions.Interpolation.NetCDFInterpolatingFunction('output2.nc',['longitude','latitude','time'],'u10',default=-99.)

    In [90]: a(0,0,899940)
    ---------------------------------------------------------------------------
    TypeError                                 Traceback (most recent call last)
    /home/users/rein_be/cloudprops/compare_albedos/tmp/ in ()
    /usr/local/lib/python2.6/site-packages/Scientific/Functions/Interpolation.pyc in __call__(self, *points)
         90             pass
         91         try:
    ---> 92             neighbours = map(_lookup, points, self.axes, self.period)
         93         except ValueError, text:
         94             if self.default is not None:
    /usr/local/lib/python2.6/site-packages/Scientific/Functions/Interpolation.pyc in _lookup(point, axis, period)
        436             raise ValueError('Point outside grid of values')
        437         i = j-1
    --> 438         weight = (point-axis[i])/(axis[j]-axis[i])
        439         return index_expression[i:j+1:1], weight
        440     else:
    TypeError: illegal subscript type

Interestingly, if I swap the axes ('time' now in 1st position) I'm able to call 'a':

    In [91]: a=Scientific.Functions.Interpolation.NetCDFInterpolatingFunction('output2.nc',['time','longitude','latitude'],'u10',default=-99.)

    In [92]: a(0,0,899940)
    Out[92]: -99.0

However, if I also swap the axes in the calling command:

    In [93]: a(899940,0,0)
    ---------------------------------------------------------------------------
    TypeError                                 Traceback (most recent call last)
    /home/users/rein_be/cloudprops/compare_albedos/tmp/ in ()
    /usr/local/lib/python2.6/site-packages/Scientific/Functions/Interpolation.pyc in __call__(self, *points)
         90             pass
         91         try:
    ---> 92             neighbours = map(_lookup, points, self.axes, self.period)
         93         except ValueError, text:
         94             if self.default is not None:
    /usr/local/lib/python2.6/site-packages/Scientific/Functions/Interpolation.pyc in _lookup(point, axis, period)
        436             raise ValueError('Point outside grid of values')
        437         i = j-1
    --> 438         weight = (point-axis[i])/(axis[j]-axis[i])
        439         return index_expression[i:j+1:1], weight
        440     else:
    TypeError: illegal subscript type

Am I doing things wrong or is this a bug?

Kind regards!
Johannes

From warren.weckesser at enthought.com Mon Feb 14 07:37:43 2011 From: warren.weckesser at enthought.com (Warren Weckesser) Date: Mon, 14 Feb 2011 06:37:43 -0600 Subject: [SciPy-User] NetcdfIntepolatingFunction In-Reply-To: References: Message-ID:

Johannes,

ScientificPython and SciPy are different projects. Your question is about the ScientificPython package:
http://dirac.cnrs-orleans.fr/plone/software/scientificpython/
http://sourcesup.cru.fr/projects/scientific-py/

While there is no harm in asking your question on this mailing list, you might also consider asking it in the ScientificPython forums:
https://sourcesup.cru.fr/forum/?group_id=180

Warren

On Mon, Feb 14, 2011 at 5:10 AM, Johannes Korn wrote:
> Hi,
>
> I'd like to use "NetCDFInterpolatingFunction" to obtain values from an
> ECMWF-Reanalysis file interpolated in time and space.
> [...]
> Am I doing things wrong or is this a bug?
>
> Kind regards!
> Johannes

From Ferdinand.Gruebler at gmx.de Mon Feb 14 08:37:35 2011 From: Ferdinand.Gruebler at gmx.de (Ferdinand) Date: Mon, 14 Feb 2011 14:37:35 +0100 Subject: [SciPy-User] permutate elements - unique order Message-ID: <03EF6C44-A422-4A89-854F-3BC84D6D415D@gmx.de>

Hi,

is there a function or an easy way to generate unique sequences of elements? Like:

    function(a,b,c)

    output:
    a b c
    b c a
    c b a

(not, for example, a c b, because a was in the first position already)

From dominique.orban at gmail.com Mon Feb 14 08:47:59 2011 From: dominique.orban at gmail.com (Dominique Orban) Date: Mon, 14 Feb 2011 08:47:59 -0500 Subject: [SciPy-User] Equality of lists and arrays Message-ID:

On Mon, Feb 14, 2011 at 7:39 AM, wrote:
> From: Robert Kern
> To: SciPy Users List
> Subject: Re: [SciPy-User] Equality of lists and arrays
> [...]
> Everything up to here is certainly intended. The == operator
> broadcasts, just like any other binary operator. Since the shape of b
> is (1,), it will broadcast that array to be the same as a.shape:
> namely (0,).

Yes, this part was clear.

> We implement np.all(x) as np.logical_and.reduce(x). The initial value
> for the np.logical_and ufunc is True [...] This happens to coincide
> with regular Python's treatment of all([]).

Ok, thanks for the explanation.

> I suspect that you don't want the broadcasting behavior. Try this
> function instead:
>
>     def really_equal(a, b):
>         if a.shape != b.shape:
>             return False
>         return np.all(a == b)

That's what I've been doing. I was just wondering if I was missing something. Thanks!

-- Dominique

From stefan at sun.ac.za Mon Feb 14 08:49:16 2011 From: stefan at sun.ac.za (Stéfan van der Walt) Date: Mon, 14 Feb 2011 15:49:16 +0200 Subject: [SciPy-User] ndimage.rotate around which anchor point? center? In-Reply-To: <4B03831A-A369-494F-94BB-A1FBB5AD339D@samueljohn.de> References: <4B03831A-A369-494F-94BB-A1FBB5AD339D@samueljohn.de> Message-ID:

Hi Samuel

On Sun, Feb 13, 2011 at 9:37 PM, Samuel wrote:
> My current solution is quite complicated. It rotates an *image* around *center*:
> [...]
> The idea is to make enough room, then shift the bigger array such that
> *center* will be in the center of bigger. If you then rotate via
> ndimage.rotate(image, order=1) the effect is as if you rotated around
> the point *center*.

The standard solution is pretty much like you describe. Say we have transformation matrices T (translate centre to origin) and R (perform rotation); you want to apply them as follows:

    inv(T) R T

In other words: translate the centre to the origin, rotate, then translate back. The matrices have the forms

    R = [cos(theta)  -sin(theta)  0]
        [sin(theta)   cos(theta)  0]
        [0             0          1]

    T = [1  0  tx]
        [0  1  ty]
        [0  0   1]

After multiplying the matrices, you probably get the solution you proposed earlier on. You can apply these transformation matrices using the "homography" function from scikits.image.transform:

https://github.com/stefanv/scikits.image/blob/master/scikits/image/transform/project.py#L19

Regards
Stéfan

From alan.isaac at gmail.com Mon Feb 14 09:39:22 2011 From: alan.isaac at gmail.com (Alan G Isaac) Date: Mon, 14 Feb 2011 09:39:22 -0500 Subject: [SciPy-User] permutate elements - unique order In-Reply-To: <03EF6C44-A422-4A89-854F-3BC84D6D415D@gmx.de> References: <03EF6C44-A422-4A89-854F-3BC84D6D415D@gmx.de> Message-ID: <4D593E9A.30105@gmail.com>

On 2/14/2011 8:37 AM, Ferdinand wrote:
> is there a function or an easy way to generate unique sequences of elements?
> [...]
> (not, for example, a c b, because a was in the first position already)

This specification appears incomplete: is the first position special? E.g., why doesn't abc conflict with cba?

The term "unique sequence" suggested you were after permutations, but you clearly want something more restrictive. (Or if not, see itertools.)

Alan Isaac

From nwagner at iam.uni-stuttgart.de Mon Feb 14 09:40:29 2011 From: nwagner at iam.uni-stuttgart.de (Nils Wagner) Date: Mon, 14 Feb 2011 15:40:29 +0100 Subject: [SciPy-User] griddata Message-ID:

Hi all,

Is it possible to apply griddata to polar coordinates, or do I need cartesian coordinates?

http://matplotlib.sourceforge.net/api/mlab_api.html#matplotlib.mlab.griddata

Nils

From rajeev.raizada at dartmouth.edu Mon Feb 14 09:54:58 2011 From: rajeev.raizada at dartmouth.edu (Rajeev Raizada) Date: Mon, 14 Feb 2011 09:54:58 -0500 Subject: [SciPy-User] Some interactive Python tutorials on basic stats, possibly useful for teaching In-Reply-To: References: Message-ID:

Dear SciPy folks,

Those of you whose duties include teaching basic stats might be interested in these interactive tutorial files, designed to illustrate basic concepts. Running the code opens up an interactive figure window. When you click on a figure to add new points, the statistical tests shown in the figure change accordingly.

http://www.dartmouth.edu/~raj/intro-stats.html

The code has lots of comments in it, which attempt to explain the concepts as explicitly as possible. No prior knowledge of Python or statistics is assumed. These programs require the SciPy and Matplotlib modules.

The tutorials are:
- interactive_mean_std_normal_distribution.py
- interactive_one_sample_t_test.py
- interactive_two_sample_t_test.py
- interactive_correlation_plot.py

The same webpage also contains Matlab versions of the scripts. Please feel more than free to use any of the code for teaching, if you find it useful.
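In case it helps to see the mechanics, the interactive part of each tutorial boils down to the same small pattern (a stripped-down sketch, not the actual tutorial code):

    import scipy, pylab

    points = []  # the data points clicked so far

    def on_click(event):
        if event.inaxes is None:  # ignore clicks outside the axes
            return
        points.append((event.xdata, event.ydata))
        x, y = scipy.array(points).T
        pylab.cla()
        pylab.plot(x, y, 'bo')
        if len(points) > 1:
            r = scipy.corrcoef(x, y)[0, 1]  # recompute the statistic
            pylab.title('r = %.3f' % r)
        pylab.draw()

    fig = pylab.figure()
    fig.canvas.mpl_connect('button_press_event', on_click)
    pylab.show()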
Yours,

Rajeev Raizada
Research Assistant Professor
Neukom Institute for Computational Science
Dartmouth College
HB 6255, Hanover NH 03755
Tel: 603 646 0175
E.mail: rajeev.raizada at dartmouth.edu
WWW: http://www.dartmouth.edu/~raj

From ralf.gommers at googlemail.com Mon Feb 14 10:24:24 2011 From: ralf.gommers at googlemail.com (Ralf Gommers) Date: Mon, 14 Feb 2011 23:24:24 +0800 Subject: [SciPy-User] ANN: SciPy 0.9.0 release candidate 3 In-Reply-To: References: Message-ID:

On Sun, Feb 13, 2011 at 4:36 PM, Ralf Gommers wrote:
> I am pleased to announce the availability of the third release
> candidate of SciPy 0.9.0.
> [...]

All binaries are visible now.

Cheers,
Ralf

From davide.lasagna at polito.it Mon Feb 14 10:26:50 2011 From: davide.lasagna at polito.it (Davide) Date: Mon, 14 Feb 2011 16:26:50 +0100 Subject: [SciPy-User] permutate elements - unique order In-Reply-To: <03EF6C44-A422-4A89-854F-3BC84D6D415D@gmx.de> References: <03EF6C44-A422-4A89-854F-3BC84D6D415D@gmx.de> Message-ID: <4D5949BA.4070007@polito.it>

On 02/14/2011 02:37 PM, Ferdinand wrote:
> is there a function or an easy way to generate unique sequences of elements?
> [...]

Hi,

Have a look at the itertools module, which is in the Python standard library. There should be enough for your purposes.

Cheers,
Davide

From fperez.net at gmail.com Mon Feb 14 10:37:14 2011 From: fperez.net at gmail.com (Fernando Perez) Date: Mon, 14 Feb 2011 07:37:14 -0800 Subject: [SciPy-User] Some interactive Python tutorials on basic stats, possibly useful for teaching In-Reply-To: References: Message-ID:

On Mon, Feb 14, 2011 at 6:54 AM, Rajeev Raizada wrote:
> Those of you whose duties include teaching basic stats
> might be interested in these interactive tutorial files,
> designed to illustrate basic concepts.

Wonderful codes! Many thanks for sharing these, they are a fantastic illustration...

For those using the ipython qt console from git's master, you can load Rajeev's codes straight in via:

    %loadpy http://www.dartmouth.edu/~raj/Python/interactive_mean_std_normal_distribution.py
    %loadpy http://www.dartmouth.edu/~raj/Python/interactive_two_sample_t_test.py
    %loadpy http://www.dartmouth.edu/~raj/Python/interactive_correlation_plot.py

Cheers,
f

From Ferdinand.Gruebler at gmx.de Mon Feb 14 10:38:59 2011 From: Ferdinand.Gruebler at gmx.de (Ferdinand) Date: Mon, 14 Feb 2011 16:38:59 +0100 Subject: [SciPy-User] permutate elements - unique order In-Reply-To: <4D593E9A.30105@gmail.com> References: <03EF6C44-A422-4A89-854F-3BC84D6D415D@gmx.de> <4D593E9A.30105@gmail.com> Message-ID:

Oh sorry, I made a mistake ... of course abc conflicts with cba.

It has to be:

    function(a,b,c)

    a b c
    b c a
    c a b

Sorry for messing it up.

On 14.02.2011, at 15:39, Alan G Isaac wrote:
> This specification appears incomplete:
> is the first position special?
> [...]

From stefan at sun.ac.za Mon Feb 14 10:58:50 2011 From: stefan at sun.ac.za (Stéfan van der Walt) Date: Mon, 14 Feb 2011 17:58:50 +0200 Subject: [SciPy-User] manifold learning tools? In-Reply-To: References: <220B9D46-093C-4A99-8C7D-EC706FBE7FDF@yale.edu> Message-ID:

On Fri, Feb 11, 2011 at 12:16 AM, Zachary Pincus wrote:
> Oh great, thanks! I poked around the scikits.learn repo a bit but must
> have missed those -- sorry. I'll check that out in more detail.

I looked at the docs for scikits.learn now, but it doesn't seem as though one can specify the diffusion map kernel? Either way, the code for this is literally about 20 lines (attached). Note that 'eigen' is known as 'eigs' in the latest scipy RC.

Regards
Stéfan

-------------- next part --------------
A non-text attachment was scrubbed...
Name: difmap.py
Type: application/octet-stream
Size: 2202 bytes
Desc: not available

From fusion_energy at hotmail.com Mon Feb 14 11:00:24 2011 From: fusion_energy at hotmail.com (F. B.) Date: Mon, 14 Feb 2011 16:00:24 +0000 Subject: [SciPy-User] KDE bandwith selection question Message-ID:

Hi all,

I read the Kernel Density Estimation documentation online, but I was unable to find any reference to the bandwidth selection algorithm (in scipy.stats, I mean). My question is: which algorithm does stats.gaussian_kde use to choose the bandwidth?

Any answer is appreciated.

Best,
Fabrizio
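P.S. Poking at the object itself, the factor seems to come from a covariance_factor() method, and on a quick test it matches the n**(-1./(d+4)) rule of thumb, but I could not find where that rule comes from. My test, which may well be misguided:

    import numpy as np
    from scipy import stats

    data = np.random.randn(2, 500)       # d = 2 dimensions, n = 500 points
    kde = stats.gaussian_kde(data)
    d, n = data.shape
    print(kde.covariance_factor())       # the factor actually used
    print(n ** (-1. / (d + 4)))          # the rule of thumb, for comparison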
From josef.pktd at gmail.com Mon Feb 14 11:03:39 2011 From: josef.pktd at gmail.com (josef.pktd at gmail.com) Date: Mon, 14 Feb 2011 11:03:39 -0500 Subject: [SciPy-User] Some interactive Python tutorials on basic stats, possibly useful for teaching In-Reply-To: References: Message-ID:

On Mon, Feb 14, 2011 at 10:37 AM, Fernando Perez wrote:
> Wonderful codes! Many thanks for sharing these, they are a fantastic
> illustration...
> [...]

And additionally thanks for the link to
http://en.wikipedia.org/wiki/Anscombe%27s_quartet

I didn't know about those.

(and, to follow in Fernando's footsteps)

- use higher order moments
- use general measures of dependence
- use non-linearity tests
- use a robust estimator
- and use partial correlation in the multivariate case

Josef

From bhanukiran.perabathini at gmail.com Mon Feb 14 11:04:05 2011 From: bhanukiran.perabathini at gmail.com (bhanukiran perabathini) Date: Mon, 14 Feb 2011 21:34:05 +0530 Subject: [SciPy-User] Some interactive Python tutorials on basic stats, possibly useful for teaching In-Reply-To: References: Message-ID:

Hi Rajeev,

Thanks for sharing this stuff. Sadly, I'm having trouble running these codes on my mac that uses Python 2.7.1 |EPD 7.0-1 (32-bit)|. It raises the following warning:

    Warning: divide by zero encountered in double_scalars

and nothing happens as I click around on the canvas. Is there something that I should possibly do to fix this?

Thanks so much,
Bhanukiran
FOSSEE IIT-B, India.

On Mon, Feb 14, 2011 at 8:24 PM, Rajeev Raizada wrote:
> Dear SciPy folks,
>
> Those of you whose duties include teaching basic stats
> might be interested in these interactive tutorial files,
> designed to illustrate basic concepts.
> [...]

From stefan at sun.ac.za Mon Feb 14 11:14:34 2011 From: stefan at sun.ac.za (Stéfan van der Walt) Date: Mon, 14 Feb 2011 18:14:34 +0200 Subject: [SciPy-User] Some interactive Python tutorials on basic stats, possibly useful for teaching In-Reply-To: References: Message-ID:

On Mon, Feb 14, 2011 at 6:04 PM, bhanukiran perabathini wrote:
> and nothing happens as I click around on the canvas.

This seems to be a bug in some of the matplotlib backends. If you grab the window and resize it, everything should appear (let's hope).

Cheers
Stéfan

From zachary.pincus at yale.edu Mon Feb 14 11:16:01 2011 From: zachary.pincus at yale.edu (Zachary Pincus) Date: Mon, 14 Feb 2011 11:16:01 -0500 Subject: [SciPy-User] manifold learning tools? In-Reply-To: References: <220B9D46-093C-4A99-8C7D-EC706FBE7FDF@yale.edu> Message-ID:

Cool -- thanks a ton!

On Feb 14, 2011, at 10:58 AM, Stéfan van der Walt wrote:
> I looked at the docs for scikits.learn now, but it doesn't seem as
> though one can specify the diffusion map kernel?
> [...]

From robert.kern at gmail.com Mon Feb 14 11:18:46 2011 From: robert.kern at gmail.com (Robert Kern) Date: Mon, 14 Feb 2011 10:18:46 -0600 Subject: [SciPy-User] permutate elements - unique order In-Reply-To: References: <03EF6C44-A422-4A89-854F-3BC84D6D415D@gmx.de> <4D593E9A.30105@gmail.com> Message-ID:

On Mon, Feb 14, 2011 at 09:38, Ferdinand wrote:
> Oh sorry, I made a mistake ...
>
> of course abc conflicts with cba
>
> It has to be:
>
> function(a,b,c)
>
> a b c
> b c a
> c a b

Ah, you want a Latin Square.

http://en.wikipedia.org/wiki/Latin_square

An easy (but boring) construction is just to shift each row by one element.

    [~]
    |3> def ezlatin(n):
    ..>     rows = []
    ..>     base = np.arange(n)
    ..>     for i in range(n):
    ..>         rows.append(np.roll(base, i))
    ..>     return np.array(rows)

    [~]
    |4> ezlatin(3)
    array([[0, 1, 2],
           [2, 0, 1],
           [1, 2, 0]])

    [~]
    |5> ezlatin(4)
    array([[0, 1, 2, 3],
           [3, 0, 1, 2],
           [2, 3, 0, 1],
           [1, 2, 3, 0]])

    [~]
    |6> ezlatin(5)
    array([[0, 1, 2, 3, 4],
           [4, 0, 1, 2, 3],
           [3, 4, 0, 1, 2],
           [2, 3, 4, 0, 1],
           [1, 2, 3, 4, 0]])

-- Robert Kern

"I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth."  -- Umberto Eco
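P.S. If the cyclic construction is too regular for your application: shuffling the rows, shuffling the columns, and relabeling the symbols of a Latin square each preserve the Latin property. A sketch building on ezlatin() above (this just mixes up the cyclic square; it does not sample uniformly from all Latin squares):

    import numpy as np

    def shuffled_latin(n):
        L = ezlatin(n)
        L = L[np.random.permutation(n)]         # shuffle rows
        L = L[:, np.random.permutation(n)]      # shuffle columns
        return np.random.permutation(n)[L]      # relabel symbols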
From jsseabold at gmail.com Mon Feb 14 11:19:14 2011 From: jsseabold at gmail.com (Skipper Seabold) Date: Mon, 14 Feb 2011 11:19:14 -0500 Subject: [SciPy-User] Some interactive Python tutorials on basic stats, possibly useful for teaching In-Reply-To: References: Message-ID:

2011/2/14 Stéfan van der Walt:
> This seems to be a bug in some of the matplotlib backends. If you
> grab the window and resize it, everything should appear (let's hope).

Same for me on linux with a tarball from svn mpl 1.0 rev 8222 and the 'WXAgg' or 'Qt4Agg' backends (the only two I tried). It works as expected if I run ipython with the -pylab switch, though.

Skipper

From josef.pktd at gmail.com Mon Feb 14 11:19:35 2011 From: josef.pktd at gmail.com (josef.pktd at gmail.com) Date: Mon, 14 Feb 2011 11:19:35 -0500 Subject: [SciPy-User] KDE bandwith selection question In-Reply-To: References: Message-ID:

On Mon, Feb 14, 2011 at 11:00 AM, F. B. wrote:
> I read the Kernel Density Estimation documentation online, but I was
> unable to find any reference to the bandwidth selection algorithm
> [...]

It has an option for Scott's factor or Silverman's factor, I think used for all dimensions. Literature references are in the scipy-user mailing list, Jan 6, "KDE: Scott's factor reference".

Josef

From zachary.pincus at yale.edu Mon Feb 14 11:19:37 2011 From: zachary.pincus at yale.edu (Zachary Pincus) Date: Mon, 14 Feb 2011 11:19:37 -0500 Subject: [SciPy-User] KDE bandwith selection question In-Reply-To: References: Message-ID:

> I read the Kernel Density Estimation documentation online but I was
> unable to find any reference to the bandwidth selection algorithm (in
> scipy.stats I mean). My question is: which algorithm does
> stats.gaussian_kde use to choose the bandwidth?

The default is "Scott's Factor" (look at the source code), which is pretty simplistic but seems to work well. A while ago I asked where this came from, and Josef did some really helpful research. Below is his answer (and my original question etc).

Zach

>> I've been wading through the old literature on gaussian KDE for a
>> little while trying to find a reference for the "Scott's factor"
>> rule-of-thumb for gaussian KDE bandwidth selection: n**(-1/(d+4)),
>> where n is the number of data points and d their dimension; this
>> factor is multiplied by the covariance matrix to yield the bandwidths.
>>
>> I can find a lot of Scott's later contributions of fancier methods,
>> but nothing about this basic one...
>
> Scott 1992 is the reference in Haerdle
>
> http://books.google.com/books?id=qPCmAOS-CoMC&pg=PA73&lpg=PA73&dq=scott%27s+factor+rule-of-thumb+hardle&source=bl&ots=kTNHJpyk6w&sig=5wwCOzThGsIzXOyVax2AbKQ11Rw&hl=en&ei=MOwlTdC3F4aBlAeRsZDNAQ&sa=X&oi=book_result&ct=result&resnum=1&sqi=2&ved=0CBYQ6AEwAA#v=onepage&q&f=false
>
> Haerdle's book is also online, but I need to look for the link.
>
> Josef

I think it's equation (3.70) in http://fedc.wiwi.hu-berlin.de/xplore/ebooks/html/spm/spmhtmlnode18.html with page reference to Scott 92, p. 152.

More online Haerdle is here: http://fedc.wiwi.hu-berlin.de/xplore/ebooks/html/

Josef

From gael.varoquaux at normalesup.org Mon Feb 14 11:32:01 2011 From: gael.varoquaux at normalesup.org (Gael Varoquaux) Date: Mon, 14 Feb 2011 17:32:01 +0100 Subject: [SciPy-User] manifold learning tools? In-Reply-To: References: <220B9D46-093C-4A99-8C7D-EC706FBE7FDF@yale.edu> Message-ID: <20110214163201.GD14803@phare.normalesup.org>

On Mon, Feb 14, 2011 at 05:58:50PM +0200, Stéfan van der Walt wrote:
> I looked at the docs for scikits.learn now, but it doesn't seem as
> though one can specify the diffusion map kernel?

Which code are you referring to exactly? In the code I know, you have to apply it yourself. However, this code is still fairly barebones, and will need more work, in particular to define good APIs. This will happen when we merge in the manifold module (give it a month).

The code for diffusion maps is indeed fairly short, but in the scikit we play all the tricks we can to be numerically efficient (and they are not always trivial).

Gaël

From matthieu.brucher at gmail.com Mon Feb 14 11:38:14 2011 From: matthieu.brucher at gmail.com (Matthieu Brucher) Date: Mon, 14 Feb 2011 17:38:14 +0100 Subject: [SciPy-User] manifold learning tools? In-Reply-To: References: <220B9D46-093C-4A99-8C7D-EC706FBE7FDF@yale.edu> Message-ID:

2011/2/14 Stéfan van der Walt
> I looked at the docs for scikits.learn now, but it doesn't seem as
> though one can specify the diffusion map kernel?
> [...]

Indeed, because the diffusion kernel is hardcoded for diffusion map. The inner computation function can be passed a kernel, though (sparse like Laplacian Eigenmaps, or not sparse like Diffusion Map).

Matthieu

--
Information System Engineer, Ph.D.
Blog: http://matt.eifelle.com
LinkedIn: http://www.linkedin.com/in/matthieubrucher

From nwagner at iam.uni-stuttgart.de Mon Feb 14 12:27:29 2011 From: nwagner at iam.uni-stuttgart.de (Nils Wagner) Date: Mon, 14 Feb 2011 18:27:29 +0100 Subject: [SciPy-User] turning points Message-ID:

Hi all,

Is there a built-in function to compute all turning points of an array?

Nils

From josef.pktd at gmail.com Mon Feb 14 12:45:13 2011 From: josef.pktd at gmail.com (josef.pktd at gmail.com) Date: Mon, 14 Feb 2011 12:45:13 -0500 Subject: [SciPy-User] Some interactive Python tutorials on basic stats, possibly useful for teaching In-Reply-To: References: Message-ID:

On Mon, Feb 14, 2011 at 11:03 AM, wrote:
> On Mon, Feb 14, 2011 at 10:37 AM, Fernando Perez wrote:
>> Wonderful codes! Many thanks for sharing these, they are a fantastic
>> illustration...
> [...]
> And additionally thanks for the link to
> http://en.wikipedia.org/wiki/Anscombe%27s_quartet
> [...]
> use a robust estimator
> [...]

since it works fine on my computer, and it's fun, I added and changed the following lines in plot_the_correlation() in interactive_correlation_plot.py:

    import numpy as np
    from scikits.statsmodels.api import RLM, add_constant

    resrlmbeta = RLM(y_coords, add_constant(x_coords)).fit().params
    fitted_robust = np.dot(add_constant(axis_range*scipy.array([-1,1])),
                           resrlmbeta).ravel()

    #### Plot the best-fit line in red
    handle_of_regression_line_plot = pylab.plot(
        axis_range*scipy.array([-1,1]),
        np.column_stack((y_intercept + slope*axis_range*scipy.array([-1,1]),
                         fitted_robust)),
        'r-')

I now have two lines, one OLS, the other one robust. Setting first some points roughly in a straight line and then adding a few outliers shows the difference between standard OLS and robust estimators.

It's a very quick hack: RLM doesn't have a predict method (a bug I just found), although fittedvalues exist. I don't know how to add a second plot() with fittedvalues to the same handle without running to the matplotlib manual, nor how to make separate colors for the lines if they are in the same plot().

This could be very nice to illustrate the different weighting options in RLM; the default doesn't look very robust if there are many outliers.

Thanks,
Josef

From bgamari.foss at gmail.com Mon Feb 14 13:22:50 2011 From: bgamari.foss at gmail.com (Ben Gamari) Date: Mon, 14 Feb 2011 13:22:50 -0500 Subject: [SciPy-User] turning points In-Reply-To: References: Message-ID: <87d3muxypx.fsf@gmail.com>

On Mon, 14 Feb 2011 18:27:29 +0100, "Nils Wagner" wrote:
> Hi all,
>
> Is there a built-in function to compute all turning points
> of an array?

By this you mean the points where the "derivative" (finite difference, I suppose) changes sign, right? You could try the following:

    import numpy

    def turning_points(v):
        diff = v[1:] - v[:-1]
        tps = numpy.flatnonzero(diff[1:]*diff[:-1] < 0)
        return numpy.array(tps)+1

    >>> v = range(0,10,1) + range(11,5,-1) + range(30,35,1)
    >>> turning_points(numpy.array(v))
    array([10, 15])
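P.S. The same thing reads slightly more clearly with numpy.diff. Note that both versions miss turning points that sit on a flat run, i.e. where a difference is exactly zero:

    import numpy

    def turning_points(v):
        d = numpy.diff(v)
        # a sign change between consecutive differences marks a turning point
        return numpy.flatnonzero(d[1:] * d[:-1] < 0) + 1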
From fperez.net at gmail.com Mon Feb 14 13:33:22 2011 From: fperez.net at gmail.com (Fernando Perez) Date: Mon, 14 Feb 2011 10:33:22 -0800 Subject: [SciPy-User] Some interactive Python tutorials on basic stats, possibly useful for teaching In-Reply-To: References: Message-ID:

On Mon, Feb 14, 2011 at 9:45 AM, wrote:
> since it works fine on my computer, and it's fun
> I added and changed the following lines in plot_the_correlation() in
> interactive_correlation_plot.py

Until we have our own nice file exchange, this is a good example of how at least popping these little scripts on https://gist.github.com/ would be good, so your improvements can be added on top of Rajeev's original code cleanly. I don't want to put them up myself, so that the first-version credit goes to him, though...

Cheers,
f

From josef.pktd at gmail.com Mon Feb 14 14:29:53 2011 From: josef.pktd at gmail.com (josef.pktd at gmail.com) Date: Mon, 14 Feb 2011 14:29:53 -0500 Subject: [SciPy-User] Some interactive Python tutorials on basic stats, possibly useful for teaching In-Reply-To: References: Message-ID:

On Mon, Feb 14, 2011 at 1:33 PM, Fernando Perez wrote:
> Until we have our own nice file exchange, this is a good example of
> how at least popping these little scripts on https://gist.github.com/
> would be good, so your improvements can be added on top of Rajeev's
> original code cleanly.

Except it would add 100% time in administrative overhead for me, and I would still download them and make local changes (as with Alexandre's permutation test), instead of firing up git. (And, when I'm allowed to, they or a variation of them will eventually end up in statsmodels.) :)

Josef
(Eventually I will learn how to use git and github, but not today.)

From robert.kern at gmail.com Mon Feb 14 14:38:35 2011 From: robert.kern at gmail.com (Robert Kern) Date: Mon, 14 Feb 2011 13:38:35 -0600 Subject: [SciPy-User] Some interactive Python tutorials on basic stats, possibly useful for teaching In-Reply-To: References: Message-ID:

On Mon, Feb 14, 2011 at 13:29, wrote:
> Except it would add 100% time in administrative overhead for me, and I
> would still download them and make local changes (as with Alexandre's
> permutation test), instead of firing up git.

The gist functionality is entirely web-based. You don't even need a Github account. Just click the link and see.

-- Robert Kern

"I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth."  -- Umberto Eco

From apalomba at austin.rr.com Mon Feb 14 15:18:18 2011 From: apalomba at austin.rr.com (Anthony Palomba) Date: Mon, 14 Feb 2011 14:18:18 -0600 Subject: [SciPy-User] manifold learning tools? In-Reply-To: References: <220B9D46-093C-4A99-8C7D-EC706FBE7FDF@yale.edu> Message-ID:

Forgive me, but I am new to the whole world of manifolds. My goal is to come up with a working implementation of an orbit manifold in python that I can experiment with.

Would scikits.learn allow me to do that? Is a diffusion map the same thing as a manifold?

Sorry again if these seem like silly questions; I am trying to get my head around what I need to learn in order to make this happen.

Any pointers to manifold learning resources would be greatly appreciated.

Thanks,
Anthony

On Mon, Feb 14, 2011 at 10:38 AM, Matthieu Brucher wrote:
> Indeed, because the diffusion kernel is hardcoded for diffusion map. The
> inner computation function can be passed a kernel, though (sparse like
> Laplacian Eigenmaps, or not sparse like Diffusion Map).
> [...]

From jrocher at enthought.com Mon Feb 14 15:39:46 2011 From: jrocher at enthought.com (Jonathan Rocher) Date: Mon, 14 Feb 2011 14:39:46 -0600 Subject: [SciPy-User] manifold learning tools? In-Reply-To: References: <220B9D46-093C-4A99-8C7D-EC706FBE7FDF@yale.edu> Message-ID:

Hi Anthony,

I don't know exactly what you need to learn about manifolds, but I can recommend a well-written and rigorous book called Geometry, Topology and Physics by Nakahara:
http://www.amazon.com/Geometry-Topology-Physics-Graduate-Student/dp/0750306068

It provides good definitions of these and related concepts. Probably worth having a look if you can get to a copy....

Hope this helps.

Jonathan

On Mon, Feb 14, 2011 at 2:18 PM, Anthony Palomba wrote:
> Forgive me, but I am new to the whole world of manifolds. My goal
> is to come up with a working implementation of an orbit manifold
> in python that I can experiment with.
> [...]

--
Jonathan Rocher, Enthought, Inc.
jrocher at enthought.com
1-512-536-1057
http://www.enthought.com

From fperez.net at gmail.com Mon Feb 14 15:24:53 2011 From: fperez.net at gmail.com (Fernando Perez) Date: Mon, 14 Feb 2011 12:24:53 -0800 Subject: [SciPy-User] Some interactive Python tutorials on basic stats, possibly useful for teaching In-Reply-To: References: Message-ID:

On Mon, Feb 14, 2011 at 11:29 AM, wrote:
> Except it would add 100% time in administrative overhead for me, and I
> would still download them and make local changes [...]

Not trying to push you in any direction, just wanted to clarify that you can use gist, fork and contribute to a snippet without knowing that git exists. I just grabbed some random snippet:

https://gist.github.com/826461

clicked the 'fork' button and then had my own fork, changed the code in the web browser by clicking on the 'Edit' button, and clicked the 'Save Gist' button, and now my fork shows two new revisions:

https://gist.github.com/826469

Again, this isn't trying to be pushy with you, just showing how this system lets anyone fork, evolve and contribute to any trivial snippet without even knowing what git is for or having it installed on their computer. All they need is a web browser.

Cheers,
f

From rajeev.raizada at gmail.com Mon Feb 14 16:28:46 2011 From: rajeev.raizada at gmail.com (Raj) Date: Mon, 14 Feb 2011 13:28:46 -0800 (PST) Subject: [SciPy-User] Some interactive Python tutorials on basic stats, possibly useful for teaching In-Reply-To: References: Message-ID:

Hi guys,

Many thanks indeed for the positive feedback about the stats scripts!

Sorry about the problems that people have been encountering when using some Matplotlib backends. I'm not sure what to do about that, as I think it may need to be fixed within the Matplotlib backend code itself. Good to hear that ipython -pylab fixes it, or at least provides a workaround. I had been testing on the Mac OS X backend.
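One workaround that I believe should work (though I have only the Mac to test on) is to pin a backend before pyplot/pylab is first imported, e.g. at the very top of each script:

    import matplotlib
    matplotlib.use('TkAgg')   # or 'MacOSX', 'Qt4Agg', ...; must come before importing pylab
    import pylab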
The "divide by zero" problem mentioned by Bhanukiran should, I believe, just be a warning printed in the Python window, and should not actually stop the script from running. I had encountered that warning in the correlation-plot script, as calculating a correlation with just two points in it produces a NaN p-value and spits out a warning. So, that script includes a line to disable to warning, although I am not sure if my including that was a good idea. I guess it's possible that the other scripts might produce similar warnings too, although I didn't encounter it myself. Perhaps, e.g. in the one-sample t-test, if there are just two points and those two points have identical y-values? Re Github: I would be more than happy to upload the scripts to there. Please let me know where it would be best for me to put them. Josef's addition of robust stats is very interesting, although these tutorials are meant to be extremely introductory, and the robust-stats add-on may be a bit advanced, from a purely tutorial point of view. However, no harm at all in having both versions available. Thanks again, Raj From matthieu.brucher at gmail.com Mon Feb 14 16:33:06 2011 From: matthieu.brucher at gmail.com (Matthieu Brucher) Date: Mon, 14 Feb 2011 22:33:06 +0100 Subject: [SciPy-User] manifold learning tools? In-Reply-To: References: <220B9D46-093C-4A99-8C7D-EC706FBE7FDF@yale.edu> Message-ID: 2011/2/14 Anthony Palomba > Forgive me, but I am new to the whole world of manifolds. My goal > is to come up with a working implementation of an orbit manifold > in python that I can experiment with. > > Would scikits.learn allow me to do that? Is a diffusion map the same > thing as a manifold? > Diffusion map are one way of defining a set of coordinates on a manifold. I don't know what an orbit manifold is, if it is a specific algorithm, or if you just want to have a coordinate system on an orbit... Matthieu -- Information System Engineer, Ph.D. Blog: http://matt.eifelle.com LinkedIn: http://www.linkedin.com/in/matthieubrucher -------------- next part -------------- An HTML attachment was scrubbed... URL: From josef.pktd at gmail.com Mon Feb 14 21:27:15 2011 From: josef.pktd at gmail.com (josef.pktd at gmail.com) Date: Mon, 14 Feb 2011 21:27:15 -0500 Subject: [SciPy-User] Some interactive Python tutorials on basic stats, possibly useful for teaching In-Reply-To: References: Message-ID: On Mon, Feb 14, 2011 at 3:24 PM, Fernando Perez wrote: > On Mon, Feb 14, 2011 at 11:29 AM, ? wrote: >> Except it would add 100% time in administrative overhead for me, and I >> would still download them and make local changes (as with Alexandre's >> permutation test), instead of firing up git. (And, when I'm allowed >> to, they or a variation of them will eventually end up in >> statsmodels.) > > Not trying to push you in any direction, just wanted to clarify that > you can use gist, fork and contribute to a snippet without knowing > that git exists. ?I just grabbed some random snippet: > > https://gist.github.com/826461 > > clicked the 'fork' button and then had my own fork, changed the code > in the web browser by clicking on the 'Edit' button, and clicked ?the > 'Save Gist' button, and now my fork shows two new revisions: > > https://gist.github.com/826469 > > Again, this isn't trying to be pushy with you, just showing how this > system lets anyone fork, evolve and contribute to any trivial snippet > without even knowing what git is for or having it installed on their > computer. ?All they need is a web browser. 
Thanks for the explanation, I got my first gist clone, and I take back the 100 percent overhead. Is there a way to see the changesets, the diff between different revisions or forks? I didn't find a button. Maybe a nice complement to the mailinglist, which for me still has the advantage of instant search. Josef > > Cheers, > > f > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user > From fperez.net at gmail.com Mon Feb 14 23:55:07 2011 From: fperez.net at gmail.com (Fernando Perez) Date: Mon, 14 Feb 2011 20:55:07 -0800 Subject: [SciPy-User] Some interactive Python tutorials on basic stats, possibly useful for teaching In-Reply-To: References: Message-ID: On Mon, Feb 14, 2011 at 6:27 PM, wrote: > Thanks for the explanation, I got my first gist clone, and I take back > the 100 percent overhead. Glad to hear that. > Is there a way to see the changesets, the diff between different > revisions or forks? I didn't find a button. Unfortunately it's a requested but not yet available feature: http://support.github.com/discussions/feature-requests/542-show-diffs-between-gist-revisions To actually see the diffs, as of now you'll need to get a real git clone and use local tools, I'm afraid. I hope they add a nice UI to compare revisions in a gist's history sometime soon... Cheers, f From fperez.net at gmail.com Tue Feb 15 00:01:36 2011 From: fperez.net at gmail.com (Fernando Perez) Date: Mon, 14 Feb 2011 21:01:36 -0800 Subject: [SciPy-User] Some interactive Python tutorials on basic stats, possibly useful for teaching In-Reply-To: References: Message-ID: Hi Raj, On Mon, Feb 14, 2011 at 1:28 PM, Raj wrote: > > Re Github: I would be more than happy to upload the scripts to there. > Please let me know where it would be best for me to put them. It's basically up to you: a 'gist' is meant as a lightweight container for one or more (but hopefully a few, say three or four max) files with a simple revision history. The system lets you keep the files, make incremental improvements, and there can be some discussion right there. Here's a good example of a gist with a few files, some history and some discussion: https://gist.github.com/566233 Alternatively, you can create a full-blown git repository. This requires a little more setup (it's not that hard though), but you get a proper named url, a bug tracker, pull requests, a wiki, etc. This is what numpy, ipython, nipy*, etc. use, and in a few days also scipy and matplotlib. I mentioned the gist situation because your snippets seemed like a good use case for such a lightweight, low-overhead tool. But honestly, if you'd like to keep developing them, you might want to consider making a little 'pystats-tutorials' proper git repository so the project can grow more. I can imagine something like this growing over time into a very useful collection of tools for interactive stats demos, and at that point you'll benefit from the full infrastructure of a real github repo. It's still 100% free, and the time you invest in learning how to use the tools will pay tenfold for you on many fronts. And if you get stuck with git, head over to J. Haxby's lab and talk to Yarick or Michael H., they are more or less world-class gurus on that stuff... A number of us really got into these tools after watching those two in awe. 
Best,

f

From rmay31 at gmail.com Mon Feb 14 10:06:48 2011
From: rmay31 at gmail.com (Ryan May)
Date: Mon, 14 Feb 2011 09:06:48 -0600
Subject: [SciPy-User] [Matplotlib-users] griddata
In-Reply-To: 
References: 
Message-ID: 

On Mon, Feb 14, 2011 at 8:40 AM, Nils Wagner wrote:
> Hi all,
>
> Is it possible to apply griddata to polar coordinates or
> do I need cartesian coordinates ?
>
> http://matplotlib.sourceforge.net/api/mlab_api.html#matplotlib.mlab.griddata

You can keep the data in polar coordinates, you just need to pass in
the locations of the points in cartesian coordinates:

x = r * cos(theta)
y = r * sin(theta)

Ryan

-- 
Ryan May
Graduate Research Assistant
School of Meteorology
University of Oklahoma

From dave.hirschfeld at gmail.com Tue Feb 15 06:59:30 2011
From: dave.hirschfeld at gmail.com (Dave Hirschfeld)
Date: Tue, 15 Feb 2011 11:59:30 +0000 (UTC)
Subject: [SciPy-User] splint: bug or expected behaviour?
Message-ID: 

It appears that splint doesn't work correctly when asked to integrate a region
outside the support of the original function. Consider the following code:

"""
from numpy import array, ones
from scipy import interpolate
from scipy.integrate import quad

y = array([ 0.38608245, 0.33937334, 0.30557594, 0.29031793, 0.27597413,
            0.26680583, 0.25012443, 0.24310048, 0.23825924, 0.23260759,
            0.22179233, 0.21821583, 0.2164903 , 0.21275566, 0.20813042,
            0.20404953, 0.19703265, 0.19649831, 0.19548709, 0.1922062 ,
            0.18623453])

x = ones(y.size)/4
x[0] /= 2
x = x.cumsum()

tck = interpolate.splrep(x, y, s=0.0001)
"""

splev appears to have no problems extrapolating outside the original support:

print interpolate.splev(0, tck, der=0)
0.412282701045
print interpolate.splev(0.1, tck, der=0)
0.389777686302
print interpolate.splev(0.2, tck, der=0)
0.369853064862

however in the first two cases below the results from splint appear a little
suspect:

print interpolate.splint(0, 0.1, tck)
-0.0096789147401
print interpolate.splint(0, 0.2, tck)
0.0282819198434
print interpolate.splint(0.2, 0.3, tck)
0.0360893890763

and indeed, comparing against quad, only for the final result, where the
bounds are wholly within the original support, do they agree:

print quad(lambda v: interpolate.splev(v, tck, der=0), 0, 0.1)[0]
0.040080715787
print quad(lambda v: interpolate.splev(v, tck, der=0), 0, 0.2)[0]
0.0780415503705
print quad(lambda v: interpolate.splev(v, tck, der=0), 0.2, 0.3)[0]
0.0360893890763

Is this a bug in splint, or expected behaviour when the bounds are outside
the original support (even though splev has no problems), or am I simply
misunderstanding something obvious?

Thanks,
Dave

From mjakubik at ta3.sk Tue Feb 15 08:41:49 2011
From: mjakubik at ta3.sk (Marian Jakubik)
Date: Tue, 15 Feb 2011 14:41:49 +0100
Subject: [SciPy-User] cdf
Message-ID: <20110215144149.269c4ce7@jakubik>

Hi All,

I would like to create a cumulative distribution function (CDF) from an
array of data. I attempted to solve this using stats.rv_discrete.cdf()
- see attached script. Can you pls help me with this issue?

Best,
Marian
-------------- next part --------------
A non-text attachment was scrubbed...
Name: qweb.py
Type: text/x-python
Size: 1360 bytes
Desc: not available
URL: 

From scipy at SamuelJohn.de Tue Feb 15 09:37:24 2011
From: scipy at SamuelJohn.de (Samuel John)
Date: Tue, 15 Feb 2011 15:37:24 +0100
Subject: [SciPy-User] ndimage.rotate around which anchor point? center?
In-Reply-To: 
References: <4B03831A-A369-494F-94BB-A1FBB5AD339D@samueljohn.de>
Message-ID: 

Thanks, Stéfan, for your answer!
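(As an aside, one way to avoid stacking several interpolation passes is to
fold the whole shift/rotate/zoom chain into a single ndimage.affine_transform
call. A rough, untested sketch - the rotate_zoom_shift helper, and the choice
to apply the shift in input coordinates, are just for illustration:)

import numpy as np
from scipy import ndimage

def rotate_zoom_shift(img, angle_deg, zoom, shift):
    # One interpolation pass instead of ndimage.shift + rotate + zoom.
    center = (np.array(img.shape) - 1) / 2.0   # rotate/zoom about the center
    a = np.deg2rad(angle_deg)
    # Rotation about the image center combined with an isotropic zoom.
    matrix = np.array([[np.cos(a), -np.sin(a)],
                       [np.sin(a),  np.cos(a)]]) / zoom
    # The shift is applied in input coordinates here (an arbitrary choice).
    offset = center - matrix.dot(center + np.asarray(shift))
    return ndimage.affine_transform(img, matrix, offset=offset, order=3)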
On the compute-hosts available to me, I would need to install scikits at first, but perhaps I should give it a try. I hope the homography also supports spline interpolation... maybe it's even faster than doing ndimage.shift, ndimage.rotate and ndimage.zoom one after another ... I don't know who feels responsible for ndimage, but I think that homography should be added to ndimage :-) Additionally, the docstring of ndimage.rotate should be updated. cheers, Samuel From robert.kern at gmail.com Tue Feb 15 10:34:46 2011 From: robert.kern at gmail.com (Robert Kern) Date: Tue, 15 Feb 2011 09:34:46 -0600 Subject: [SciPy-User] cdf In-Reply-To: <20110215144149.269c4ce7@jakubik> References: <20110215144149.269c4ce7@jakubik> Message-ID: On Tue, Feb 15, 2011 at 07:41, Marian Jakubik wrote: > Hi All, > > i would like to create cumulative distribution function (CDF) from the > array of data. I attempted to solve this using stats.rv_discrete.cdf() > - see attached script. import numpy as np from scipy import stats qarr = np.array([...]) qarr.sort() ranks = stats.rankdata(qarr) ecdf = ranks / (len(qarr) + 1) There are a whole variety of conventions for normalizing those ranks. You can see them here: http://old.nabble.com/Re:-how-to-plot-the-empirical-cdf-of-an-array--p29117695.html -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." ? -- Umberto Eco From bsouthey at gmail.com Tue Feb 15 12:58:05 2011 From: bsouthey at gmail.com (Bruce Southey) Date: Tue, 15 Feb 2011 11:58:05 -0600 Subject: [SciPy-User] ANN: SciPy 0.9.0 release candidate 3 In-Reply-To: References: Message-ID: <4D5ABEAD.4000603@gmail.com> On 02/14/2011 09:24 AM, Ralf Gommers wrote: > On Sun, Feb 13, 2011 at 4:36 PM, Ralf Gommers > wrote: >> Hi, >> >> I am pleased to announce the availability of the third release >> candidate of SciPy 0.9.0. This will be the first SciPy release to >> include support for Python 3 (all modules except scipy.weave), as well >> as for Python 2.7. >> >> Sources, binaries and release notes can be found at >> http://sourceforge.net/projects/scipy/files/scipy/0.9.0rc3/. Note that >> due to the issues Sourceforge is still having the binaries are not >> visible at this moment, even though they are uploaded. They should >> appear within a day I expect. >> > All binaries are visible now. > > Cheers, > Ralf > >> Changes since release candidate 2: >> - a high-priority bugfix for fftpack (#1353) >> - a change in ndimage for compatibility with upcoming numpy 1.6 >> - fixes for compatibility with Python 2.4 >> - fixed test failures reported for RC2 built against MKL >> >> If no more issues are reported, 0.9.0 will be released in the weekend >> of 19/20 February. >> >> Enjoy, >> Ralf >> > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user As you can tell from the numpy version, I have not played around with numpy/scipy for a while. I do not see any failures on Fedora 14 with Python2.4 (numpy 2.0.0.dev), Python2.5 (numpy 2.0.0.dev), Python2.6 (numpy 2.0.0.dev), Python2.7(numpy 2.0.0.dev-f72c605) or Python3.1 (numpy 1.5.1). The Python2.7 output is given below because I get the CObject deprecation warning. I think this is addressed by ticket 1237 (scheduled milestone 0.9.0) that needs to be applied to Python 2.7 as well as Python3.1. 
http://projects.scipy.org/scipy/ticket/1237

Bruce

$ python -c "import scipy; scipy.test()"
Running unit tests for scipy
NumPy version 2.0.0.dev-f72c605
NumPy is installed in /usr/lib64/python2.7/site-packages/numpy
SciPy version 0.9.0rc3
SciPy is installed in /usr/lib64/python2.7/site-packages/scipy
Python version 2.7 (r27:82500, Sep 16 2010, 18:02:00) [GCC 4.5.1 20100907 (Red Hat 4.5.1-3)]
nose version 0.11.2
[several thousand test-progress dots plus K (known failure) and S (skip) markers elided; the only other output was the following]
/usr/lib64/python2.7/site-packages/scipy/ndimage/morphology.py:254: PendingDeprecationWarning: The CObject type is marked Pending Deprecation in Python 2.7. Please use capsule objects instead.
  structure, mask, output, border_value, origin, invert, cit, 1)
Warning: overflow encountered in exp
Warning: overflow encountered in exp
Warning: overflow encountered in exp
Warning: overflow encountered in exp
Warning: overflow encountered in exp
Warning: overflow encountered in exp
Warning: overflow encountered in exp
Warning: overflow encountered in exp
Warning: overflow encountered in exp
Warning: overflow encountered in exp
Warning: invalid value encountered in sqrt
----------------------------------------------------------------------
Ran 4724 tests in 48.168s

OK (KNOWNFAIL=13, SKIP=35)

From charlesr.harris at gmail.com Tue Feb 15 13:14:31 2011
From: charlesr.harris at gmail.com (Charles R Harris)
Date: Tue, 15 Feb 2011 11:14:31 -0700
Subject: [SciPy-User] ANN: SciPy 0.9.0 release candidate 3
In-Reply-To: <4D5ABEAD.4000603@gmail.com>
References: <4D5ABEAD.4000603@gmail.com>
Message-ID: 

On Tue, Feb 15, 2011 at 10:58 AM, Bruce Southey wrote:
> [...]
> The Python2.7 output is given below because I get the CObject
> deprecation warning. I think this is addressed by ticket 1237 (scheduled
> milestone 0.9.0) that needs to be applied to Python 2.7 as well as
> Python3.1.
> http://projects.scipy.org/scipy/ticket/1237

We only use the capsule object for python 3.*, for 2.7 we decided to stick
with PyCObject for backwards compatibility.

Chuck
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From vineethrakesh at gmail.com Tue Feb 15 20:41:03 2011
From: vineethrakesh at gmail.com (Vineeth Mohan)
Date: Tue, 15 Feb 2011 20:41:03 -0500
Subject: [SciPy-User] using power iteration method:
Message-ID: <4D5B2B2F.4070000@gmail.com>

Hello all,

Can you tell me what module to use for the power iteration method to find
eigenvalues? An example would be much appreciated.

Thank You
Vin

From wnbell at gmail.com Tue Feb 15 22:38:44 2011
From: wnbell at gmail.com (Nathan Bell)
Date: Tue, 15 Feb 2011 22:38:44 -0500
Subject: [SciPy-User] using power iteration method:
In-Reply-To: <4D5B2B2F.4070000@gmail.com>
References: <4D5B2B2F.4070000@gmail.com>
Message-ID: 

On Tue, Feb 15, 2011 at 8:41 PM, Vineeth Mohan wrote:
> Hello all,
>
> Can you tell me what module to use for the power iteration method to find
> eigenvalues? An example would be much appreciated.

Hi Vin,

You'll want to try the eigs() function in the scipy.sparse.linalg module
[1]. The algorithm used by eigs() is considered to be better than power
iteration since it takes fewer iterations to produce the same quality
result.

[1] http://docs.scipy.org/doc/scipy/reference/sparse.linalg.html#scipy.sparse.linalg.eigs

-- 
Nathan Bell wnbell at gmail.com
http://www.wnbell.com/
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From bsouthey at gmail.com Tue Feb 15 22:39:57 2011
From: bsouthey at gmail.com (Bruce Southey)
Date: Tue, 15 Feb 2011 21:39:57 -0600
Subject: [SciPy-User] ANN: SciPy 0.9.0 release candidate 3
In-Reply-To: 
References: <4D5ABEAD.4000603@gmail.com>
Message-ID: 

On Tue, Feb 15, 2011 at 12:14 PM, Charles R Harris wrote:
> [...]
>
> We only use the capsule object for python 3.*, for 2.7 we decided to stick
> with PyCObject for backwards compatibility.
>
> Chuck

Okay,
On 64-bit Win7 with 32-bit Python 2.6.3 and 3.1, I get crashes with both
binary installers at
'test_interpnd.TestCloughTocher2DInterpolator.test_dense'. A window pops
up asking to close the application. I don't have access to this computer
all the time and don't understand the test well enough to help immediately.
However I think it has to do with lines like ~line 17:
yi = interpnd.LinearNDInterpolator(x, y)(x)
as there is no crash if I remove all other class content except the first
test and I remove the trailing '(x)':
yi = interpnd.LinearNDInterpolator(x, y)

Bruce

>>> import scipy
>>> scipy.test(verbose=10)
Running unit tests for scipy
NumPy version 1.5.1
NumPy is installed in E:\Python26\lib\site-packages\numpy
SciPy version 0.9.0rc3
SciPy is installed in E:\Python26\lib\site-packages\scipy
Python version 2.6.3 (r263rc1:75186, Oct 2 2009, 20:40:30) [MSC v.1500 32 bit (Intel)]
nose version 0.11.1
nose.config: INFO: Excluding tests matching ['f2py_ext', 'f2py_f90_ext', 'gen_ext', 'pyrex_ext', 'swig_ext']
Tests cophenet(Z) on tdist data set. ... ok
Tests cophenet(Z, Y) on tdist data set. ... ok
Tests correspond(Z, y) on linkage and CDMs over observation sets of different sizes. ... ok
...
test_subclassing (test_fitpack.TestUnivariateSpline) ... ok
test_interpnd.TestCloughTocher2DInterpolator.test_dense ...

From david at silveregg.co.jp Tue Feb 15 23:45:45 2011
From: david at silveregg.co.jp (David)
Date: Wed, 16 Feb 2011 13:45:45 +0900
Subject: [SciPy-User] using power iteration method:
In-Reply-To: <4D5B2B2F.4070000@gmail.com>
References: <4D5B2B2F.4070000@gmail.com>
Message-ID: <4D5B5679.2030708@silveregg.co.jp>

On 02/16/2011 10:41 AM, Vineeth Mohan wrote:
> Hello all,
>
> Can you tell me what module to use for the power iteration method to find
> eigenvalues? An example would be much appreciated.

Is it for educational purposes? Otherwise, I don't know many cases where
power iteration should ever be used. Not only is it slow to converge in
general, but it is also very poor numerically speaking.

Scipy does have methods to find eigenvalues for both dense and sparse
matrices, some of which are derived from power iteration (the arpack
module for sparse matrices is based on Krylov spaces, which expand on the
idea behind power iteration).

cheers,

David

From mjakubik at ta3.sk Wed Feb 16 03:18:43 2011
From: mjakubik at ta3.sk (Marian Jakubik)
Date: Wed, 16 Feb 2011 09:18:43 +0100
Subject: [SciPy-User] cdf
In-Reply-To: 
References: <20110215144149.269c4ce7@jakubik>
Message-ID: <20110216091843.7550c651@jakubik>

Hi Robert,

thank you very much. Now I see that it is simple, but ...

Best,
Marian

On Tue, 15 Feb 2011 09:34:46 -0600, Robert Kern wrote:

> On Tue, Feb 15, 2011 at 07:41, Marian Jakubik wrote:
> > Hi All,
> >
> > I would like to create a cumulative distribution function (CDF) from an
> > array of data. I attempted to solve this using stats.rv_discrete.cdf()
> > - see attached script.
>
> import numpy as np
> from scipy import stats
>
> qarr = np.array([...])
> qarr.sort()
> ranks = stats.rankdata(qarr)
> ecdf = ranks / (len(qarr) + 1)
>
>
> There are a whole variety of conventions for normalizing those ranks.
> You can see them here:
>
> http://old.nabble.com/Re:-how-to-plot-the-empirical-cdf-of-an-array--p29117695.html
>

From driftman350z at gmail.com Wed Feb 16 03:20:19 2011
From: driftman350z at gmail.com (kuroro luciruful)
Date: Wed, 16 Feb 2011 09:20:19 +0100
Subject: [SciPy-User] Positive gain with IIR design
Message-ID: 

Hi everybody,

I have been using Scipy for a while now for some personal work, mainly for
designing digital filters. I have a question: is it possible to design an
IIR filter with positive gain with scipy? I need it for a parametric
equalizer, where I have to boost or cut a particular frequency.

Best regards,
Kuroro
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From tmp50 at ukr.net Wed Feb 16 04:57:28 2011
From: tmp50 at ukr.net (Dmitrey)
Date: Wed, 16 Feb 2011 11:57:28 +0200
Subject: [SciPy-User] [ANN] New package: SpaceFuncs (2D, 3D, ND geometric modeling, optimization, solving)
Message-ID: 

Hi all,

I'm glad to inform you about a new, 4th OpenOpt Suite module: SpaceFuncs -
a tool for 2D, 3D, N-dimensional geometric modeling with possibilities of
parametrized calculations, numerical optimization and solving systems of
geometrical equations with automatic differentiation.

The module is written in Python + NumPy, and requires FuncDesigner (and
OpenOpt, DerApproximator for some operations). It has a completely free
license: BSD.

For details see its home page http://openopt.org/SpaceFuncs and the
documentation http://openopt.org/SpaceFuncsDoc

Also, you can try it online via our Sage-server (sometimes hangs due to
high load, though) http://sage.openopt.org/welcome

Regards, Dmitrey.
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From rajeev.raizada at gmail.com Wed Feb 16 14:48:17 2011
From: rajeev.raizada at gmail.com (Raj)
Date: Wed, 16 Feb 2011 11:48:17 -0800 (PST)
Subject: [SciPy-User] Some interactive Python tutorials on basic stats, possibly useful for teaching
In-Reply-To: 
References: 
Message-ID: <444a6e79-f1a8-478c-8d4e-0f0fbb9cf17a@y4g2000prh.googlegroups.com>

Hi Fernando,

Thanks for the info about setting up a gist. Very helpful! I'll try it soon.

By the way, I made a YouTube video showing the interactive demos in action:
http://www.youtube.com/watch?v=nCv_MhaeFo8
I'd be very interested to hear any comments or suggestions.

I also put a note into the scripts and on the webpage suggesting that Linux
users run them in an iPython started with "ipython -pylab", in order to work
around the problems with some Matplotlib backends.

Raj

P.S. I'll convey your enthusiastic comments to Yarik and Michael about
their git virtuosity! Our offices are very near to each other and we use
the same coffee machine, so I see them all the time.

From brendanarnold at gmail.com Wed Feb 16 15:28:45 2011
From: brendanarnold at gmail.com (Brendan Arnold)
Date: Wed, 16 Feb 2011 20:28:45 +0000
Subject: [SciPy-User] Is Numpy rfft returning the correct number of points?
Message-ID: 

Hi there,

When I run numpy.fft.rfft on an array of length 4096, it returns an
array of 2049 and not 2048. Is this right behaviour?
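A minimal snippet showing what I mean:

import numpy as np
a = np.zeros(4096)
print np.fft.rfft(a).shape   # gives (2049,), i.e. n/2 + 1 rather than n/2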
I am using numpy 1.5.1

Brendan

From matthieu.brucher at gmail.com Wed Feb 16 15:33:13 2011
From: matthieu.brucher at gmail.com (Matthieu Brucher)
Date: Wed, 16 Feb 2011 21:33:13 +0100
Subject: [SciPy-User] Is Numpy rfft returning the correct number of points?
In-Reply-To: 
References: 
Message-ID: 

Hi,

Yes, it is correct. When doing a real FFT, you have 4096 points, but the
values 1:2048 are the conjugates of the values 2049:4096, so you need only
values 0:2049 to get your original signal back.

Matthieu

2011/2/16 Brendan Arnold
> Hi there,
>
> When I run numpy.fft.rfft on an array of length 4096, it returns an
> array of 2049 and not 2048. Is this right behaviour?
>
> I am using numpy 1.5.1
>
> Brendan

-- 
Information System Engineer, Ph.D.
Blog: http://matt.eifelle.com
LinkedIn: http://www.linkedin.com/in/matthieubrucher
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From brendanarnold at gmail.com Wed Feb 16 15:44:04 2011
From: brendanarnold at gmail.com (Brendan Arnold)
Date: Wed, 16 Feb 2011 20:44:04 +0000
Subject: [SciPy-User] Is Numpy rfft returning the correct number of points?
In-Reply-To: 
References: 
Message-ID: 

Right, but what I was wondering about is: instead of 2048 points, 2049
are returned - what is the last point?

Brendan

On 16 February 2011 20:33, Matthieu Brucher wrote:
> Hi,
>
> Yes, it is correct. When doing a real FFT, you have 4096 points, but the
> values 1:2048 are the conjugates of the values 2049:4096, so you need only
> values 0:2049 to get your original signal back.
>
> Matthieu

From jdh2358 at gmail.com Wed Feb 16 15:45:05 2011
From: jdh2358 at gmail.com (John Hunter)
Date: Wed, 16 Feb 2011 14:45:05 -0600
Subject: [SciPy-User] Some interactive Python tutorials on basic stats, possibly useful for teaching
In-Reply-To: 
References: 
Message-ID: 

On Mon, Feb 14, 2011 at 10:03 AM, wrote:
> And additionally thanks for the link to
> http://en.wikipedia.org/wiki/Anscombe%27s_quartet

See also
http://matplotlib.sourceforge.net/examples/pylab_examples/anscombe.html

Very nice examples...

From josef.pktd at gmail.com Wed Feb 16 15:47:05 2011
From: josef.pktd at gmail.com (josef.pktd at gmail.com)
Date: Wed, 16 Feb 2011 15:47:05 -0500
Subject: [SciPy-User] Is Numpy rfft returning the correct number of points?
In-Reply-To: 
References: 
Message-ID: 

On Wed, Feb 16, 2011 at 3:44 PM, Brendan Arnold wrote:
> Right, but what I was wondering about is: instead of 2048 points, 2049
> are returned - what is the last point?

I think it's for zero frequency, symmetric plus a point in the middle ?

Josef

From silva at lma.cnrs-mrs.fr Wed Feb 16 15:58:25 2011
From: silva at lma.cnrs-mrs.fr (Fabrice Silva)
Date: Wed, 16 Feb 2011 17:58:25 -0300
Subject: [SciPy-User] Is Numpy rfft returning the correct number of points?
In-Reply-To: 
References: 
Message-ID: <1297889905.6313.2.camel@florian-desktop>

On Wed, 16-02-2011 at 20:44 +0000, Brendan Arnold wrote:
> Right, but what I was wondering about is: instead of 2048 points, 2049
> are returned - what is the last point?

Please read
http://docs.scipy.org/doc/numpy/reference/routines.fft.html
and note the different handling of even and odd numbers of samples.
Whether n=2m or n=2m+1, you have m+1 points for the non-negative
frequencies, i.e. the ones returned when using rfft.

-- 
Fabrice

From matthieu.brucher at gmail.com Wed Feb 16 15:57:51 2011
From: matthieu.brucher at gmail.com (Matthieu Brucher)
Date: Wed, 16 Feb 2011 21:57:51 +0100
Subject: [SciPy-User] Is Numpy rfft returning the correct number of points?
In-Reply-To: 
References: 
Message-ID: 

It's the amplitude at fs/2 (it's a real value).

2011/2/16 Brendan Arnold
> Right, but what I was wondering about is: instead of 2048 points, 2049
> are returned - what is the last point?

-- 
Information System Engineer, Ph.D.
Blog: http://matt.eifelle.com
LinkedIn: http://www.linkedin.com/in/matthieubrucher
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From apalomba at austin.rr.com Wed Feb 16 17:35:58 2011
From: apalomba at austin.rr.com (Anthony Palomba)
Date: Wed, 16 Feb 2011 16:35:58 -0600
Subject: [SciPy-User] trying to interpolate...
Message-ID: 

I am trying to write an interpolate function that allows me to interpolate
between two sets of numbers. It would basically map (x1, y1) on to (x2, y2).

def interpmap(pos, x1, y1, x2, y2, base):
    pass

Where pos is some value between x1 and y1, interpmap returns the
interpolated value between x2 and y2.

I want to use scipy.interpolate.interpolate.interp2d to do this.
http://www.scipy.org/doc/api_docs/SciPy.interpolate.interpolate.interp2d.html

My first problem is when I include the following line of code:

from scipy import interpolate

python crashes. Is there any reason why I would not be able to import this?

Thanks,
Anthony
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From pav at iki.fi Wed Feb 16 18:28:25 2011
From: pav at iki.fi (Pauli Virtanen)
Date: Wed, 16 Feb 2011 23:28:25 +0000 (UTC)
Subject: [SciPy-User] trying to interpolate...
References: 
Message-ID: 

On Wed, 16 Feb 2011 16:35:58 -0600, Anthony Palomba wrote:
> http://www.scipy.org/doc/api_docs/SciPy.html

Those are obsolete docs. See current ones here: http://docs.scipy.org/

> My first problem is when I include the following line of code:
>
> from scipy import interpolate
>
> python crashes.

That should work. Where did you download your Scipy and Numpy (and which
files did you download)?

-- 
Pauli Virtanen

From apalomba at austin.rr.com Wed Feb 16 18:55:59 2011
From: apalomba at austin.rr.com (Anthony Palomba)
Date: Wed, 16 Feb 2011 17:55:59 -0600
Subject: [SciPy-User] trying to interpolate...
In-Reply-To: 
References: 
Message-ID: 

Well, I think I downloaded them from scipy.org.

I am running things on windows. In my C:\Python26\Lib\site-packages
folder, I see numpy-1.3.0-py2.6.egg-info and scipy-0.8.0-py2.6.egg-info.

Should I try downloading and installing the latest version?

-ap

On Wed, Feb 16, 2011 at 5:28 PM, Pauli Virtanen wrote:
> On Wed, 16 Feb 2011 16:35:58 -0600, Anthony Palomba wrote:
> > http://www.scipy.org/doc/api_docs/SciPy.html
>
> Those are obsolete docs. See current ones here: http://docs.scipy.org/
>
> > My first problem is when I include the following line of code:
> >
> > from scipy import interpolate
> >
> > python crashes.
>
> That should work. Where did you download your Scipy and Numpy (and which
> files did you download)?
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From pav at iki.fi Wed Feb 16 19:00:32 2011
From: pav at iki.fi (Pauli Virtanen)
Date: Thu, 17 Feb 2011 00:00:32 +0000 (UTC)
Subject: [SciPy-User] trying to interpolate...
References: 
Message-ID: 

On Wed, 16 Feb 2011 17:55:59 -0600, Anthony Palomba wrote:
> Well, I think I downloaded them from scipy.org.
>
> I am running things on windows. In my C:\Python26\Lib\site-packages
> folder, I see numpy-1.3.0-py2.6.egg-info and scipy-0.8.0-py2.6.egg-info.
>
> Should I try downloading and installing the latest version?

Yes: If I remember correctly, the Scipy 0.8.0 binary requires Numpy >= 1.4.
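To double-check which versions you actually end up with, you can run:

import numpy, scipy
print numpy.__version__, scipy.__version__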
-- 
Pauli Virtanen

From jmccormac01 at qub.ac.uk Wed Feb 16 23:54:33 2011
From: jmccormac01 at qub.ac.uk (James McCormac)
Date: Thu, 17 Feb 2011 04:54:33 -0000 (UTC)
Subject: [SciPy-User] faster interpolations (interp1d)
Message-ID: <51468.161.72.6.236.1297918473.squirrel@star.pst.qub.ac.uk>

Hi Guys,
Is there a faster way to do 1d interpolations in scipy? I am using
interp1d "quadratic", which is already a fair bit quicker than the cubic
interpolation. On small arrays <300 elements it's fast enough, but when I
go beyond 500+ it starts to become more noticeable.

I need to use at least a quadratic type interpolation and was wondering if
there are any tricks.

Cheers
James

From e.antero.tammi at gmail.com Thu Feb 17 03:31:28 2011
From: e.antero.tammi at gmail.com (eat)
Date: Thu, 17 Feb 2011 10:31:28 +0200
Subject: [SciPy-User] faster interpolations (interp1d)
In-Reply-To: <51468.161.72.6.236.1297918473.squirrel@star.pst.qub.ac.uk>
References: <51468.161.72.6.236.1297918473.squirrel@star.pst.qub.ac.uk>
Message-ID: 

Hi,

On Thu, Feb 17, 2011 at 6:54 AM, James McCormac wrote:
> Hi Guys,
> Is there a faster way to do 1d interpolations in scipy? I am using
> interp1d "quadratic", which is already a fair bit quicker than the cubic
> interpolation. On small arrays <300 elements it's fast enough, but when I
> go beyond 500+ it starts to become more noticeable.
>
> I need to use at least a quadratic type interpolation and was wondering if
> there are any tricks.

Not a real trick, but you can use a Vandermonde matrix, along these lines:
- solve b from dot(vander(x, 3), b) = y in the least-squares sense
  (e.g. b = linalg.lstsq(vander(x, 3), y)[0])
- interpolate at any x_new: y_new = dot(vander(x_new, 3), b)

On my machine, within the range 5e2 to 2e3, it's roughly 4x faster than
interp1d.

My two cents,
eat

> Cheers
> James
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From Malte.Mauelshagen at dlr.de Thu Feb 17 04:30:43 2011
From: Malte.Mauelshagen at dlr.de (Malte.Mauelshagen at dlr.de)
Date: Thu, 17 Feb 2011 09:30:43 +0000
Subject: [SciPy-User] Numpyio modules
Message-ID: <26384C2388ED424A98235A07D7BE1E1A0BD2CEB4@DLREXMBX01.intra.dlr.de>

Hello,

I am working on a project that uses several scipy functions such as fread
and fwrite.

from scipy.io.numpyio import fwrite, fread
from scipy.io import npfile

I downloaded the latest scipy version and realised that these functions
are not available anymore. Have they been moved to another package?
(Maybe into the numpy module?) If they have been removed completely, are
there still other methods which provide the same functionality? (I read
something about fromfile, tofile.)

Thank you in advance!
Malte

From josef.pktd at gmail.com Thu Feb 17 06:46:11 2011
From: josef.pktd at gmail.com (josef.pktd at gmail.com)
Date: Thu, 17 Feb 2011 06:46:11 -0500
Subject: [SciPy-User] cdf
In-Reply-To: <20110216091843.7550c651@jakubik>
References: <20110215144149.269c4ce7@jakubik> <20110216091843.7550c651@jakubik>
Message-ID: 

2011/2/16 Marian Jakubik :
> Hi Robert,
>
> thank you very much. Now I see that it is simple, but ...
>
> Best,
> Marian
>
> On Tue, 15 Feb 2011 09:34:46 -0600, Robert Kern wrote:
>
>> On Tue, Feb 15, 2011 at 07:41, Marian Jakubik wrote:
>> > Hi All,
>> >
>> > I would like to create a cumulative distribution function (CDF) from an
>> > array of data. I attempted to solve this using stats.rv_discrete.cdf()
>> > - see attached script.
>> >> import numpy as np >> from scipy import stats >> >> qarr = np.array([...]) >> qarr.sort() >> ranks = stats.rankdata(qarr) >> ecdf = ranks / (len(qarr) + 1) >> >> >> There are a whole variety of conventions for normalizing those ranks. >> You can see them here: >> >> http://old.nabble.com/Re:-how-to-plot-the-empirical-cdf-of-an-array--p29117695.html (I had forgotten to add this) implemented in stats.mstats for masked arrays >>> import scipy.stats.mstats as mstats >>> help(mstats.plotting_positions) Josef >> > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user > From amd405 at psu.edu Thu Feb 17 10:19:06 2011 From: amd405 at psu.edu (Ashley DaSilva) Date: Thu, 17 Feb 2011 10:19:06 -0500 Subject: [SciPy-User] fsolve - pass keyword arguments Message-ID: Hello all, I have a function which takes some keyword arguments, def f(x,kw1=None,kw2=None,kw3=None): (some stuff) return F Depending on which of the kw arguments is None the function might do extra steps to determine them. I want to use scipy.optimize.fsolve to find the root of this function when kw3 is an integer. I am calling fsolve like this: ans=scipy.optimize.fsolve(f, x0, args=(kw3=N,)) where x0 is an initial guess to the root and N is the integer. But I get the error: "SyntaxError: invalid syntax". If I move kw3 to the 2nd position def f(x,kw3=None,kw1=None,kw2=None): (some stuff) return F and call like this: ans=scipy.optimize.fsolve(f, x0, args=(N,)) then the function works. Does this mean that passing keyword arguments using fsolve is not allowed? Or am I doing something incorrectly? Thanks! Ashley -------------- next part -------------- An HTML attachment was scrubbed... URL: From pav at iki.fi Thu Feb 17 10:26:04 2011 From: pav at iki.fi (Pauli Virtanen) Date: Thu, 17 Feb 2011 15:26:04 +0000 (UTC) Subject: [SciPy-User] fsolve - pass keyword arguments References: Message-ID: Thu, 17 Feb 2011 10:19:06 -0500, Ashley DaSilva wrote: > > ans=scipy.optimize.fsolve(f, x0, args=(kw3=N,)) Using the `args` parameter is not necessary. Rather, do ans = scipy.optimize.fsolve(lambda x: f(x, kw3=N), x0) From story645 at gmail.com Thu Feb 17 10:29:34 2011 From: story645 at gmail.com (hannah) Date: Thu, 17 Feb 2011 07:29:34 -0800 (PST) Subject: [SciPy-User] scipy.test() causes segmentation fault for test_lobpcg In-Reply-To: <20101226025146.315430@gmx.net> References: <20101226025146.315430@gmx.net> Message-ID: <4808994d-5e36-4454-a39f-f4bf0dbf32a5@b15g2000pra.googlegroups.com> Hi, Sorry if you've gotten this a few times, but I've had bad luck with posting to the list. I've gotten the same bug using: scipy-'0.10.0.dev7144' numpy-'1.6.0.dev-af1e833' atlas-3.9.23 umfpack and amd (suitesparse 3.6.0) compiled to shared using these instructions: http://repository.slacky.eu/slackware-12.1/development/suitesparse/3.1.0/src/suitesparse.SlackBuild python2.7 gfortran RHEL Server 5.5 (Tikanga) I changed the Elastic_Rod and Makota_Pair test cases per the suggestions in your post, which results in all the other tests working fine/failing as expected. (no seqfaults) I'm thinking it may be sensible to just run scipy as is and be a bit careful with lobpcg. According to the core dump, the crash seems to be in atlas: http://pastebin.com/zy0RnUCK On Dec 25 2010, 9:51?pm, "Ryota Tomioka" wrote: > Dear Scipy users, > > I have recently installed numpy 1.5.1rc1 and > scipy 0.8.0 on a CentOS 5.5 server. 
ATLAS was > compiled with gfortran and I also specified gfortran > for the installation of both numpy and scipy. > > numpy.test() ran without trouble, but scipy.test() > crashed due to segmentation fault and this was in > test_lobpcg.test_ElasticRod. > > In order to reproduce the result I copied /usr/local/lib/python2.6/site-packages/scipy/sparse/linalg/eigen/lobpcg/tests/test_lobpcg.py > to my home directory and did the following. > > [ryotat at cyprus ~]$ python > Python 2.6.6 (r266:84292, Nov 19 2010, 22:23:00) > [GCC 4.1.2 20080704 (Red Hat 4.1.2-48)] on linux2 > Type "help", "copyright", "credits" or "license" for more information.>>> from test_lobpcg import * > >>> A,B=ElasticRod(100) > >>> compare_solutions(A,B,100) > >>> compare_solutions(A,B,80) > >>> compare_solutions(A,B,40) > >>> compare_solutions(A,B,30) > >>> compare_solutions(A,B,22) > >>> compare_solutions(A,B,21) > >>> compare_solutions(A,B,20) > > Segmentation fault > > So it seems to happen only around m=20. m=10 did not cause > segmentation fault but resulted in > > AssertionError: > Arrays are not almost equal > > To see it in more detail, I tried > > >>> A,B=ElasticRod(100) > >>> m=20 > >>> n=A.shape[0] > >>> numpy.random.seed(0) > >>> V=rand(n,m) > >>> X=linalg.orth(V) > >>> eigs,vecs=lobpcg(A,X,B=B,tol=1e-5,maxiter=30,verbosityLevel=10) > > Solving generalized eigenvalue problem with preconditioning > > matrix size 100 > block size 20 > > No constraints > > iteration 0 > [ True ?True ?True ?True ?True ?True ?True ?True ?True ?True ?True ?True ?True ?True ?True > ? True ?True ?True ?True ?True] > current block size: 20 > eigenvalue: [ ?1.785e+12 ? 1.586e+12 ? 1.356e+12 ? 1.330e+12 ? 1.212e+12 ? 1.155e+12 ? 1.080e+12 > ? ?9.149e+11 ? 8.272e+11 ? 8.229e+11 ? 7.664e+11 ? 6.941e+11 ? 6.769e+11 ? 5.848e+11 > ? ?5.553e+11 ? 4.994e+11 ? 4.283e+11 ? 3.813e+11 ? 3.537e+11 ? 1.058e+10] > residual norms: [ ?7.223e+10 ? 6.780e+10 ? 7.145e+10 ? 7.305e+10 ? 6.290e+10 ? 7.085e+10 ? 6.539e+10 > ? ?5.466e+10 ? 6.137e+10 ? 5.374e+10 ? 5.809e+10 ? 5.725e+10 ? 5.375e+10 ? 5.334e+10 > ? ?5.052e+10 ? 4.746e+10 ? 4.176e+10 ? 3.650e+10 ? 3.283e+10 ? 6.905e+09] > Segmentation fault > > Does anyone experienced something similar? Or could anyone suggest > where I should look into? > > Thanks, > Ryota > > -- > Empfehlen Sie GMX DSL Ihren Freunden und Bekannten und wir > belohnen Sie mit bis zu 100,- Euro!https://freundschaftswerbung.gmx.de > _______________________________________________ > SciPy-User mailing list > SciPy-U... at scipy.orghttp://mail.scipy.org/mailman/listinfo/scipy-user From silva at lma.cnrs-mrs.fr Thu Feb 17 10:40:09 2011 From: silva at lma.cnrs-mrs.fr (Fabrice Silva) Date: Thu, 17 Feb 2011 12:40:09 -0300 Subject: [SciPy-User] fsolve - pass keyword arguments In-Reply-To: References: Message-ID: <1297957209.1811.10.camel@florian-desktop> El jeu., 17-02-2011 a las 10:19 -0500, Ashley DaSilva escribi?: > Hello all, > I have a function which takes some keyword arguments, > > def f(x,kw1=None,kw2=None,kw3=None): > (some stuff) > return F > > Depending on which of the kw arguments is None the function might do > extra steps to determine them. I want to use scipy.optimize.fsolve to > find the root of this function when kw3 is an integer. I am calling > fsolve like this: > > ans=scipy.optimize.fsolve(f, x0, args=(kw3=N,)) > > where x0 is an initial guess to the root and N is the integer. But I > get the error: "SyntaxError: invalid syntax". 
> If I move kw3 to the 2nd position
>
> def f(x,kw3=None,kw1=None,kw2=None):
>     (some stuff)
>     return F
>
> and call like this:
> ans=scipy.optimize.fsolve(f, x0, args=(N,))
>
> then the function works. Does this mean that passing keyword arguments
> using fsolve is not allowed? Or am I doing something incorrectly?

You could use a wrapper like

>>> def wrapper(**kw):
...     def wrapped(x):
...         return f(x, **kw)
...     return wrapped
>>> ans = scipy.optimize.fsolve(wrapper(kw3=N), x0)

-- 
Fabrice Silva

From amd405 at psu.edu Thu Feb 17 10:43:14 2011
From: amd405 at psu.edu (Ashley DaSilva)
Date: Thu, 17 Feb 2011 10:43:14 -0500
Subject: [SciPy-User] fsolve - pass keyword arguments
In-Reply-To: <1297957209.1811.10.camel@florian-desktop>
References: <1297957209.1811.10.camel@florian-desktop>
Message-ID: 

Wow, I never knew about lambda functions! That certainly solves the
problem.

I had considered a wrapper, but I was trying to avoid it mostly because I
don't want to remember so many function names (this is a piece of a fairly
long python script). Also, I am not sure how much slower it will be with
the extra function call. It is certainly something to keep in mind, though.

Thanks to both of you!
Ashley

On Thu, Feb 17, 2011 at 10:40 AM, Fabrice Silva wrote:
> El jeu., 17-02-2011 a las 10:19 -0500, Ashley DaSilva escribió:
> [...]
>
> You could use a wrapper like
>
> >>> def wrapper(**kw):
> ...     def wrapped(x):
> ...         return f(x, **kw)
> ...     return wrapped
> >>> ans = scipy.optimize.fsolve(wrapper(kw3=N), x0)
> --
> Fabrice Silva
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From bwoods at aer.com Thu Feb 17 10:50:22 2011
From: bwoods at aer.com (Bryan Woods)
Date: Thu, 17 Feb 2011 10:50:22 -0500
Subject: [SciPy-User] another interpolation question
Message-ID: <4D5D43BE.3080505@aer.com>

I've seen a lot of discussion about interpolation and would like to add my
question to the discussion.

I am looking for a (as fast as possible) way to interpolate my data from a
coarse to a fine grid where the limits may not match. Bilinear
interpolation is fine.

I saw ndimage.map_coordinates but it seems to want i,j coordinates whereas
I have 1D lat,lon coordinate arrays.

Input: coarse and fine 2D arrays with 1D coordinate arrays

Output: data from coarse grid interpolated onto the fine grid

Thanks,
Bryan
-------------- next part --------------
A non-text attachment was scrubbed...
Name: bwoods.vcf
Type: text/x-vcard
Size: 341 bytes
Desc: not available
URL: 

From nathanielpeterson08 at gmail.com Thu Feb 17 12:01:59 2011
From: nathanielpeterson08 at gmail.com (Nathaniel Peterson)
Date: Thu, 17 Feb 2011 12:01:59 -0500
Subject: [SciPy-User] Some interactive Python tutorials on basic stats, possibly useful for teaching
Message-ID: <877hcyy4qg.fsf@gmail.com>

> 2011/2/14 Stéfan van der Walt :
> > On Mon, Feb 14, 2011 at 6:04 PM, bhanukiran perabathini
> > wrote:
> >> and nothing happens as I click around on the canvas.
> >
> > This seems to be a bug in some of the matplotlib backends. If you
> > grab the window and resize it, everything should appear (let's hope).
>
> Same for me on linux with a tarball from svn mpl 1.0 rev 8222 and
> 'WXAgg' or 'Qt4Agg' backend (only 2 I tried). It works as expected if
> I run ipython with the -pylab switch though.
> Skipper

For those for whom nothing seems to happen when you click on the canvas,
adding

pylab.ion()

right after the comment

#######################################################################
# This is the main part of the program, which calls the above functions
#######################################################################

seems to fix the problem, at least on Ubuntu 9.10, matplotlib 0.99.0,
GtkAgg backend.

Thank you, Rajeev Raizada, for the great code!

From Chris.Barker at noaa.gov Thu Feb 17 12:38:45 2011
From: Chris.Barker at noaa.gov (Christopher Barker)
Date: Thu, 17 Feb 2011 09:38:45 -0800
Subject: [SciPy-User] another interpolation question
In-Reply-To: <4D5D43BE.3080505@aer.com>
References: <4D5D43BE.3080505@aer.com>
Message-ID: <4D5D5D25.5030908@noaa.gov>

On 2/17/11 7:50 AM, Bryan Woods wrote:
> I am looking for a (as fast as possible) way to interpolate my data from
> a coarse to a fine grid where the limits may not match.

what do you mean by "limits" -- does that mean you may be extrapolating --
always a problem!

> I saw ndimage.map_coordinates but it seems to want i,j coordinates
> whereas I have 1D lat,lon coordinate arrays.

so your input points are not regularly spaced? If so, then you need a
routine designed for that. A couple options:

natural neighbor interpolation:

See the natgrid toolkit referenced here:
http://matplotlib.sourceforge.net/users/toolkits.html

Radial basis functions:
http://www.scipy.org/Cookbook/RadialBasisFunctions

For straight linear, you should be able to do a Delaunay triangulation,
and simple linear interpolation from that, though I don't know of a
package that does this out of the box.

-Chris

-- 
Christopher Barker, Ph.D.
Oceanographer

Emergency Response Division
NOAA/NOS/OR&R (206) 526-6959 voice
7600 Sand Point Way NE (206) 526-6329 fax
Seattle, WA 98115 (206) 526-6317 main reception

Chris.Barker at noaa.gov

From nwagner at iam.uni-stuttgart.de Thu Feb 17 12:52:25 2011
From: nwagner at iam.uni-stuttgart.de (Nils Wagner)
Date: Thu, 17 Feb 2011 18:52:25 +0100
Subject: [SciPy-User] another interpolation question
In-Reply-To: <4D5D5D25.5030908@noaa.gov>
References: <4D5D43BE.3080505@aer.com> <4D5D5D25.5030908@noaa.gov>
Message-ID: 

> A couple options:
>
> natural neighbor interpolation:
>
> See the natgrid toolkit referenced here:
> http://matplotlib.sourceforge.net/users/toolkits.html

Just curious. Where can I find matplotlib examples using natgrid?
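The kind of usage I have in mind is something like the sketch below
(untested; I am assuming mlab.griddata picks up natgrid for the natural
neighbor interpolation when it is installed):

import numpy as np
from matplotlib import mlab

# scattered sample points
x = np.random.uniform(-2.0, 2.0, 200)
y = np.random.uniform(-2.0, 2.0, 200)
z = x * np.exp(-x**2 - y**2)

# regular grid to interpolate onto
xi = np.linspace(-2.0, 2.0, 100)
yi = np.linspace(-2.0, 2.0, 100)
zi = mlab.griddata(x, y, z, xi, yi)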
Nils From Malte.Mauelshagen at dlr.de Wed Feb 16 07:40:47 2011 From: Malte.Mauelshagen at dlr.de (Malte.Mauelshagen at dlr.de) Date: Wed, 16 Feb 2011 12:40:47 +0000 Subject: [SciPy-User] scipy.numpyio modules Message-ID: <26384C2388ED424A98235A07D7BE1E1A0BD2C22C@DLREXMBX01.intra.dlr.de> Hello, I am working on a project that uses several scipy functions such as fread and fwrite. from scipy.io.numpyio import fwrite, fread from scipy.io import npfile I downloaded the latest scipy-version and realised that these functions are not available anymore. Have they been moved to another package? (Maybe in the numpy module?) If they have been removed completely, are there still other methods which provide the same functionalty? (I read something about fromfile, tofile). Thank you in advance! Malte From rajeev.raizada at gmail.com Wed Feb 16 09:59:04 2011 From: rajeev.raizada at gmail.com (Raj) Date: Wed, 16 Feb 2011 06:59:04 -0800 (PST) Subject: [SciPy-User] Some interactive Python tutorials on basic stats, possibly useful for teaching In-Reply-To: References: Message-ID: <0ecbaa55-85b7-49eb-9bbb-7d1138167fd6@s29g2000pra.googlegroups.com> Hi Fernando, Thanks for the gist info. I'll try it out soon! By the way, I made a YouTube video showing the interactive demos in action: http://www.youtube.com/watch?&v=nCv_MhaeFo8 I'd love to hear any comments or suggestions. Raj From fpm at u.washington.edu Wed Feb 16 10:54:40 2011 From: fpm at u.washington.edu (cassiope) Date: Wed, 16 Feb 2011 07:54:40 -0800 (PST) Subject: [SciPy-User] [signal] zpk -> s domain? In-Reply-To: References: Message-ID: <83b8277b-30d0-4e1d-9798-55f4e52a9693@n16g2000prc.googlegroups.com> On Feb 11, 8:23?am, Neal Becker wrote: > Sorry for the cryptic question - for an engineer working in my field it would > not be cryptic. > > I have a digital filter design, which is specified as poles and zeros in the z- > transform domain (zpk). > > k (z-z0)(z-z1)... > ----------------- > (z-p0)(z-p1)... > > I want it expressed in the s domain (laplace transform), in a form such as: > > (s-a)(s-b)... > ----------- > (s-c)(s-d)... > > or > > s^3 + a s^2 + b s + c > --------------------- > s^3 + d s^2 + e s + f > > or similar There is no unique solution; probably the most common way would be to use the bilinear transform. See, for example, http://en.wikipedia.org/wiki/Bilinear_transform . From story645 at gmail.com Wed Feb 16 14:49:14 2011 From: story645 at gmail.com (hannah) Date: Wed, 16 Feb 2011 11:49:14 -0800 (PST) Subject: [SciPy-User] scipy.test() causes segmentation fault for test_lobpcg In-Reply-To: <20101226025146.315430@gmx.net> References: <20101226025146.315430@gmx.net> Message-ID: <737004d5-00d3-43b2-ad9d-4f5d9fa7dc3d@q40g2000prh.googlegroups.com> Hi, I'm getting the exact same bug and I'm running the dev versions of numpy and scipy, using atlas-3.9.23, lapack-3.2.2, and amd and umfpack (from SuiteSparse-3.6.0), compiled with gfortran, built using python2.7, on RHEL Server 5.5 (Tikanga). If you change n for test_Elastic rod and and test_MikotaPair, then it just fails cleanly and will go on to complete the rest of the tests with: OK (KNOWNFAIL=12, SKIP=41) So I'd suggest that you use scipy and remember to be careful with lobpcg. I've had some seemingly random seqfaults when trying to calculate eigenvalues, so it also may be due to how the eigenvalue is calculated. 
(Or it's a RHEL bug, or some combination thereof.) According to the core dump, the code segfaults inside atlas: http://pastebin.com/zy0RnUCK On Dec 25 2010, 9:51 pm, "Ryota Tomioka" wrote: > Dear Scipy users, > > I have recently installed numpy 1.5.1rc1 and > scipy 0.8.0 on a CentOS 5.5 server. ATLAS was > compiled with gfortran and I also specified gfortran > for the installation of both numpy and scipy. > > numpy.test() ran without trouble, but scipy.test() > crashed due to segmentation fault and this was in > test_lobpcg.test_ElasticRod. > > In order to reproduce the result I copied /usr/local/lib/python2.6/site-packages/scipy/sparse/linalg/eigen/lobpcg/tests/test_lobpcg.py > to my home directory and did the following. > > [ryotat at cyprus ~]$ python > Python 2.6.6 (r266:84292, Nov 19 2010, 22:23:00) > [GCC 4.1.2 20080704 (Red Hat 4.1.2-48)] on linux2 > Type "help", "copyright", "credits" or "license" for more information. > >>> from test_lobpcg import * > >>> A,B=ElasticRod(100) > >>> compare_solutions(A,B,100) > >>> compare_solutions(A,B,80) > >>> compare_solutions(A,B,40) > >>> compare_solutions(A,B,30) > >>> compare_solutions(A,B,22) > >>> compare_solutions(A,B,21) > >>> compare_solutions(A,B,20) > > Segmentation fault > > So it seems to happen only around m=20. m=10 did not cause > segmentation fault but resulted in > > AssertionError: > Arrays are not almost equal > > To see it in more detail, I tried > > >>> A,B=ElasticRod(100) > >>> m=20 > >>> n=A.shape[0] > >>> numpy.random.seed(0) > >>> V=rand(n,m) > >>> X=linalg.orth(V) > >>> eigs,vecs=lobpcg(A,X,B=B,tol=1e-5,maxiter=30,verbosityLevel=10) > > Solving generalized eigenvalue problem with preconditioning > > matrix size 100 > block size 20 > > No constraints > > iteration 0 > [ True True True True True True True True True True True True True True True > True True True True True] > current block size: 20 > eigenvalue: [ 1.785e+12 1.586e+12 1.356e+12 1.330e+12 1.212e+12 1.155e+12 1.080e+12 > 9.149e+11 8.272e+11 8.229e+11 7.664e+11 6.941e+11 6.769e+11 5.848e+11 > 5.553e+11 4.994e+11 4.283e+11 3.813e+11 3.537e+11 1.058e+10] > residual norms: [ 7.223e+10 6.780e+10 7.145e+10 7.305e+10 6.290e+10 7.085e+10 6.539e+10 > 5.466e+10 6.137e+10 5.374e+10 5.809e+10 5.725e+10 5.375e+10 5.334e+10 > 5.052e+10 4.746e+10 4.176e+10 3.650e+10 3.283e+10 6.905e+09] > Segmentation fault > > Has anyone experienced something similar? Or could anyone suggest > where I should look into? > > Thanks, Ryota > > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user From story645 at gmail.com Thu Feb 17 10:00:00 2011 From: story645 at gmail.com (hannah) Date: Thu, 17 Feb 2011 07:00:00 -0800 (PST) Subject: [SciPy-User] scipy.test() causes segmentation fault for test_lobpcg In-Reply-To: <20101226025146.315430@gmx.net> References: <20101226025146.315430@gmx.net> Message-ID: Hi, I got the same bug using: scipy '0.10.0.dev7144', numpy '1.6.0.dev-af1e833', atlas 3.9.23, umfpack and amd (suitesparse 3.6.0, compiled to shared using these instructions: http://repository.slacky.eu/slackware-12.1/development/suitesparse/3.1.0/src/suitesparse.SlackBuild ), gfortran, python2.7, Red Hat Enterprise Linux Server release 5.5 (Tikanga). I ended up just changing the test cases (test_ElasticRod and test_MikotaPair) following your instructions, which resulted in all the other tests running fine. So I'd suggest basically ignoring this bug and taking care when using lobpcg. If you want to totally debug, according to the coredump the segfault is in libblas: http://pastebin.com/zy0RnUCK On Dec 25 2010, 9:51 pm, "Ryota Tomioka" wrote: > Dear Scipy users, > > I have recently installed numpy 1.5.1rc1 and > scipy 0.8.0 on a CentOS 5.5 server. > [clip] > > Has anyone experienced something similar? Or could anyone suggest > where I should look into? > > Thanks, > Ryota > > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user From pav at iki.fi Thu Feb 17 13:09:33 2011 From: pav at iki.fi (Pauli Virtanen) Date: Thu, 17 Feb 2011 18:09:33 +0000 (UTC) Subject: [SciPy-User] another interpolation question References: <4D5D43BE.3080505@aer.com> <4D5D5D25.5030908@noaa.gov> Message-ID: Thu, 17 Feb 2011 09:38:45 -0800, Christopher Barker wrote: [clip] > For straight linear, you should be able to do a Delaunay triangulation, > and simple linear interpolation from that, though I don't know of a > package that does this out of the box. Scipy 0.9.0, currently at release candidate 3, does Delaunay triangulation based interpolation (linear & cubic) out of the box. -- Pauli Virtanen From zachary.pincus at yale.edu Thu Feb 17 13:36:17 2011 From: zachary.pincus at yale.edu (Zachary Pincus) Date: Thu, 17 Feb 2011 13:36:17 -0500 Subject: [SciPy-User] another interpolation question In-Reply-To: <4D5D43BE.3080505@aer.com> References: <4D5D43BE.3080505@aer.com> Message-ID: <73364B9E-C258-4630-A00F-B4F0FE515EE4@yale.edu> > I am looking for a (as fast as possible) way to interpolate my data > from a coarse to fine grid where the limits may not match. Bilinear > interpolation is fine. > > I saw ndimage.map_coordinates but it seems to want i,j coordinates > whereas I have 1D lat,lon coordinate arrays. > > Input: coarse and fine 2D arrays with 1D coordinate arrays > > Output: data from coarse grid interpolated onto the fine grid I'm not sure if I understand the request -- what are the 1D coordinate arrays for? If you have data on a coarse 2D grid and have a fine 2D grid defined, doesn't that alone specify the interpolation? Could you provide a simple example, maybe, with a 2x2 input coarse array, perhaps? Zach From robert.kern at gmail.com Thu Feb 17 13:41:16 2011 From: robert.kern at gmail.com (Robert Kern) Date: Thu, 17 Feb 2011 12:41:16 -0600 Subject: [SciPy-User] another interpolation question In-Reply-To: <73364B9E-C258-4630-A00F-B4F0FE515EE4@yale.edu> References: <4D5D43BE.3080505@aer.com> <73364B9E-C258-4630-A00F-B4F0FE515EE4@yale.edu> Message-ID: On Thu, Feb 17, 2011 at 12:36, Zachary Pincus wrote: >> I am looking for a (as fast as possible) way to interpolate my data >> from a coarse to fine grid where the limits may not match. Bilinear >> interpolation is fine. >> >> I saw ndimage.map_coordinates but it seems to want i,j coordinates >> whereas I have 1D lat,lon coordinate arrays. >> >> Input: coarse and fine 2D arrays with 1D coordinate arrays >> >> Output: data from coarse grid interpolated onto the fine grid > > I'm not sure if I understand the request -- what are the 1D coordinate > arrays for? If you have data on a coarse 2D grid and have a fine 2D > grid defined, doesn't that alone specify the interpolation? The 1D coordinate arrays are defining the grid points for each axis. E.g. given a 2D array C and the 1D arrays x_coord, y_coord, the data value at C[i,j] will have the "real-world" location x_coord[j], y_coord[i]. -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth."
-- Umberto Eco From bwoods at aer.com Thu Feb 17 13:51:55 2011 From: bwoods at aer.com (Bryan Woods) Date: Thu, 17 Feb 2011 13:51:55 -0500 Subject: [SciPy-User] another interpolation question In-Reply-To: References: <4D5D43BE.3080505@aer.com> <73364B9E-C258-4630-A00F-B4F0FE515EE4@yale.edu> Message-ID: <4D5D6E4B.7060507@aer.com> Yes. The idea is that I am using a nested model where I have an outer domain with fixed latitude / longitude coordinates and I want to downscale the data from that outer domain onto a finer inner domain which is contained inside of the outer domain. All the interpolation that I am seeing is to interpolate randomly spaced data onto a fixed grid. I am looking to reproject data from and to fixed grids. Ideally a function that looks something like: z_fine[x_fine|:,y_fine|:] = interp2d(x[:], y[:], z[x|:,y|:], x_fine[:], y_fine[:]) I am very surprised that I can't find a simple function to do a very quick bilinear interpolation. It seems like a very basic operation for gridded data. On 2/17/11 1:41 PM, Robert Kern wrote: > On Thu, Feb 17, 2011 at 12:36, Zachary Pincus wrote: >>> I am looking for a (as fast as possible) way to interpolate my data >>> from a coarse to fine grid where the limits may not match. Bilinear >>> interpolation is fine. >>> >>> I saw ndimage.map_coordinates but it seems to want i,j coordinates >>> whereas I have 1D lat,lon coordinate arrays. >>> >>> Input: coarse and fine 2D arrays with 1D coordinate arrays >>> >>> Output: data from coarse grid interpolated onto the fine grid >> I'm not sure if I understand the request -- what are the 1D coordinate >> arrays for? If you have data on a coarse 2D grid and have a fine 2D >> grid defined, doesn't that alone specify the interpolation? > The 1D coordinate arrays are defining the grid points for each axis. > E.g. given a 2D array C and the 1D arrays x_coord, y_coord, the data > value at C[i,j] will have the "real-world" location x_coord[j], > y_coord[i]. > -------------- next part -------------- A non-text attachment was scrubbed... Name: bwoods.vcf Type: text/x-vcard Size: 341 bytes Desc: not available URL: From apalomba at austin.rr.com Thu Feb 17 14:01:37 2011 From: apalomba at austin.rr.com (Anthony Palomba) Date: Thu, 17 Feb 2011 13:01:37 -0600 Subject: [SciPy-User] trying to interpolate... In-Reply-To: References: Message-ID: Okay, installed the latest numpy, that seems to fix things. So back to the real problem... I am trying to write an interpolate function that allows me to interpolate between two sets of numbers. It would basically map (x1, y1) on to (x2, y2). def interpmap(pos, x1, y1, x2, y2, base): pass Where pos is some value between x1 and y1, interpmap returns the interpolated mapped value between x2 and y2. What should I be using to implement this? Interp1d? Interp2d? Thanks, Anthony On Wed, Feb 16, 2011 at 6:00 PM, Pauli Virtanen wrote: > On Wed, 16 Feb 2011 17:55:59 -0600, Anthony Palomba wrote: > > Well I think I downloaded them from scipy.org > > > > I am running things on windows. In my C:\Python26\Lib\site-packages > > folder, I see numpy-1.3.0-py2.6.egg-info and scipy-0.8.0-py2.6.egg-info. > > > > Should I try downloading and installing the latest version? > > Yes: If I remember correctly, the Scipy 0.8.0 binary requires > Numpy >= 1.4. > > -- > Pauli Virtanen > > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user > -------------- next part -------------- An HTML attachment was scrubbed...
URL: From zachary.pincus at yale.edu Thu Feb 17 14:05:46 2011 From: zachary.pincus at yale.edu (Zachary Pincus) Date: Thu, 17 Feb 2011 14:05:46 -0500 Subject: [SciPy-User] another interpolation question In-Reply-To: <4D5D6E4B.7060507@aer.com> References: <4D5D43BE.3080505@aer.com> <73364B9E-C258-4630-A00F-B4F0FE515EE4@yale.edu> <4D5D6E4B.7060507@aer.com> Message-ID: <5512A6B6-C166-4D46-82BC-A624B1BE334E@yale.edu> > The idea is that I am using a nested model where I have an outer > domain with fixed latitude / longitude coordinates and I want to > downscale the data from that outer domain onto a finer inner domain > which is contained inside of the outer domain. > > All the interpolation that I am seeing is to interpolate randomly > spaced data onto a fixed grid. I am looking to reproject data from > and to fixed grids. Ideally a function that looks something like: > > z_fine[x_fine|:,y_fine|:] = interp2d(x[:], y[:], z[x|:,y|:], > x_fine[:], y_fine[:]) > > I am very surprised that I can't find a simple function to do a very > quick bilinear interpolation. It seems like a very basic operation > for gridded data. If Robert's interpretation of your inputs is correct, it seems like you have only some minor rearranging of arrays to do in order to use ndimage.map_coordinates to perform precisely this task. map_coordinates takes i,j coordinates in terms of the original array indices, but given your coordinate arrays it should be pretty trivial to reformulate your request in terms of (fractional) i,j positions in the original array, right? You can do this in 1D easily for the x and y coordinate axes, and then just repeat these values to make the appropriate coordinate array for map_coordinates. (I can provide more details if desired.)
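For example, something along these lines ought to do it (an untested sketch; it assumes the 1D lat/lon arrays are sorted ascending and the fine grid lies inside the coarse one, and the name regrid is made up):

import numpy as np
from scipy import ndimage

def regrid(lat, lon, coarse, lat_fine, lon_fine):
    # fractional i,j positions of the fine grid points in the coarse grid;
    # np.interp copes with unevenly spaced (but sorted) coordinate arrays
    i = np.interp(lat_fine, lat, np.arange(len(lat)))
    j = np.interp(lon_fine, lon, np.arange(len(lon)))
    # repeat the 1D index vectors out to the full 2D coordinate array
    coords = np.empty((2, len(i), len(j)))
    coords[0] = i[:, np.newaxis]
    coords[1] = j[np.newaxis, :]
    # order=1 gives exactly bilinear interpolation
    return ndimage.map_coordinates(coarse, coords, order=1)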
Also, map_coordinates has various boundary conditions (constant, mirror, and edge-clamp), which are often useful. Zach > On 2/17/11 1:41 PM, Robert Kern wrote: >> On Thu, Feb 17, 2011 at 12:36, Zachary >> Pincus wrote: >>>> I am looking for a (as fast as possible) way to interpolate my data >>>> from a coarse to fine grid where the limits may not match. Bilinear >>>> interpolation is fine. >>>> >>>> I saw ndimage.map_coordinates but it seems to want i,j coordinates >>>> whereas I have 1D lat,lon coordinate arrays. >>>> >>>> Input: coarse and fine 2D arrays with 1D coordinate arrays >>>> >>>> Output: data from coarse grid interpolated onto the fine grid >>> I'm not sure if I understand the request -- what are the 1D >>> coordinate >>> arrays for? If you have data on a coarse 2D grid and have a fine 2D >>> grid defined, doesn't that alone specify the interpolation? >> The 1D coordinate arrays are defining the grid points for each axis. >> E.g. given a 2D array C and the 1D arrays x_coord, y_coord, the data >> value at C[i,j] will have the "real-world" location x_coord[j], >> y_coord[i]. >> > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user From pav at iki.fi Thu Feb 17 14:09:13 2011 From: pav at iki.fi (Pauli Virtanen) Date: Thu, 17 Feb 2011 19:09:13 +0000 (UTC) Subject: [SciPy-User] another interpolation question References: <4D5D43BE.3080505@aer.com> <73364B9E-C258-4630-A00F-B4F0FE515EE4@yale.edu> <4D5D6E4B.7060507@aer.com> Message-ID: On Thu, 17 Feb 2011 13:51:55 -0500, Bryan Woods wrote: > Yes. > > The idea is that I am using a nested model where I have an outer domain > with fixed latitude / longitude coordinates and I want to downscale the > data from that outer domain onto a finer inner domain which is contained > inside of the outer domain. > > All the interpolation that I am seeing is to interpolate randomly > spaced data onto a fixed grid. I am looking to reproject data from and > to fixed grids. Ideally a function that looks something like: > > z_fine[x_fine|:,y_fine|:] = interp2d(x[:], y[:], z[x|:,y|:], x_fine[:], > y_fine[:]) Yes, we have something like this, but apparently it isn't listed in the docs (aargh!). Try

# some random data
from numpy import *
x = linspace(0, 1, 20)
y = linspace(0, 3, 30)
z = sin(x)[:,None] * cos(y)[None,:]
x_fine = linspace(0, 1, 200)
y_fine = linspace(0, 3, 140)

from scipy.interpolate import RectBivariateSpline
interp = RectBivariateSpline(x, y, z)
z_fine = interp(x_fine, y_fine)

From bwoods at aer.com Thu Feb 17 14:19:05 2011 From: bwoods at aer.com (Bryan Woods) Date: Thu, 17 Feb 2011 14:19:05 -0500 Subject: [SciPy-User] another interpolation question In-Reply-To: References: <4D5D43BE.3080505@aer.com> <73364B9E-C258-4630-A00F-B4F0FE515EE4@yale.edu> <4D5D6E4B.7060507@aer.com> Message-ID: <4D5D74A9.7050303@aer.com> For all those who need to do the same thing in the future, I was able to whip up a function on my own:

import numpy as np

def bilin2d(x_old, y_old, z, x_new, y_new):
    # assumes x_old and y_old are evenly spaced and ascending, and that the
    # new grid lies inside the old one (otherwise ceil() can index past the
    # end of z)
    dx, dy = x_old[1]-x_old[0], y_old[1]-y_old[0]
    # fractional indices of the new grid points in the old grid
    y_vals = (y_new-y_old[0])/dy
    x_vals = (x_new-x_old[0])/dx
    y = np.resize(y_vals, (x_new.shape[0], y_new.shape[0]))
    x = np.transpose(np.resize(x_vals, (y_new.shape[0], x_new.shape[0])))
    # weighted sum of the four surrounding corner values
    return z[np.floor(x).astype(int),np.floor(y).astype(int)]*(1-x+np.floor(x))*(1-y+np.floor(y)) \
        + z[np.ceil(x).astype(int),np.floor(y).astype(int)]*(x-np.floor(x))*(1-y+np.floor(y)) \
        + z[np.floor(x).astype(int),np.ceil(y).astype(int)]*(1-x+np.floor(x))*(y-np.floor(y)) \
        + z[np.ceil(x).astype(int),np.ceil(y).astype(int)]*(x-np.floor(x))*(y-np.floor(y))

On 2/17/11 2:09 PM, Pauli Virtanen wrote: > On Thu, 17 Feb 2011 13:51:55 -0500, Bryan Woods wrote: > >> Yes. >> >> The idea is that I am using a nested model where I have an outer domain >> with fixed latitude / longitude coordinates and I want to downscale the >> data from that outer domain onto a finer inner domain which is contained >> inside of the outer domain. >> >> All the interpolation that I am seeing is to interpolate randomly >> spaced data onto a fixed grid. I am looking to reproject data from and >> to fixed grids. Ideally a function that looks something like: >> >> z_fine[x_fine|:,y_fine|:] = interp2d(x[:], y[:], z[x|:,y|:], x_fine[:], >> y_fine[:]) > Yes, we have something like this, but apparently it isn't listed in the > docs (aargh!). > > Try > > # some random data > from numpy import * > x = linspace(0, 1, 20) > y = linspace(0, 3, 30) > z = sin(x)[:,None] * cos(y)[None,:] > x_fine = linspace(0, 1, 200) > y_fine = linspace(0, 3, 140) > > from scipy.interpolate import RectBivariateSpline > interp = RectBivariateSpline(x, y, z) > z_fine = interp(x_fine, y_fine) > > > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user -------------- next part -------------- A non-text attachment was scrubbed...
Name: bwoods.vcf Type: text/x-vcard Size: 341 bytes Desc: not available URL: From Chris.Barker at noaa.gov Thu Feb 17 14:40:35 2011 From: Chris.Barker at noaa.gov (Christopher Barker) Date: Thu, 17 Feb 2011 11:40:35 -0800 Subject: [SciPy-User] trying to interpolate... In-Reply-To: References: Message-ID: <4D5D79B3.2020004@noaa.gov> On 2/17/11 11:01 AM, Anthony Palomba wrote: > def interpmap(pos, x1, y1, x2, y2, base): > pass > > Where pos is some value between x1 and y1, > interpmap returns the interpolated mapped value between > x2 and y2. > > What should I be suing to implement this? > Interp1d? Interp2d? I'm not sure what "base" is, but that sure looks like interp1d to me: 1d means that you have a function y = f(x), defined on some points x, and you want to interpolate to find f(x) for other values of x. 2d is the same, but for f(x,y) ... -Chris -- Christopher Barker, Ph.D. Oceanographer Emergency Response Division NOAA/NOS/OR&R (206) 526-6959 voice 7600 Sand Point Way NE (206) 526-6329 fax Seattle, WA 98115 (206) 526-6317 main reception Chris.Barker at noaa.gov From e.antero.tammi at gmail.com Thu Feb 17 14:45:15 2011 From: e.antero.tammi at gmail.com (eat) Date: Thu, 17 Feb 2011 21:45:15 +0200 Subject: [SciPy-User] trying to interpolate... In-Reply-To: References: Message-ID: Hi, On Thu, Feb 17, 2011 at 9:01 PM, Anthony Palomba wrote: > Okay installed the latest numpy, that seems to fix things. > So back to the real problem... > > > I am trying to write an interpolate function that > allows me to interpolate between two sets of numbers. > It would basically map (x1, y1) on to (x2, y2). > > def interpmap(pos, x1, y1, x2, y2, base): > pass > > Where pos is some value between x1 and y1, > interpmap returns the interpolated mapped value between > x2 and y2. > > What should I be suing to implement this? > Interp1d? Interp2d? > I hope this simple example will help: In []: from scipy.interpolate import interp1d as ip In []: x= arange(9) In []: y= randn(9) In []: f= ip(x, y, bounds_error= False) In []: x= linspace(0, 9, 123) In []: y= f(x) My two cents, eat > > > Thanks, > Anthony > > > > > > On Wed, Feb 16, 2011 at 6:00 PM, Pauli Virtanen wrote: > >> On Wed, 16 Feb 2011 17:55:59 -0600, Anthony Palomba wrote: >> > Well I thinkI downloaded them scipy.org >> > >> > I am running things on windows. In my C:\Python26\Lib\site-packages >> > folder, I see numpy-1.3.0-py2.6.egg-info and scipy-0.8.0-py2.6.egg-info. >> > >> > Should I try downloading and installing the latest version? >> >> Yes: If I remember correctly, the Scipy 0.8.0 binary requires >> Numpy >= 1.4. >> >> -- >> Pauli Virtanen >> >> _______________________________________________ >> SciPy-User mailing list >> SciPy-User at scipy.org >> http://mail.scipy.org/mailman/listinfo/scipy-user >> > > > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user > > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From rajeev.raizada at gmail.com Thu Feb 17 15:29:34 2011 From: rajeev.raizada at gmail.com (Raj) Date: Thu, 17 Feb 2011 12:29:34 -0800 (PST) Subject: [SciPy-User] Some interactive Python tutorials on basic stats, possibly useful for teaching In-Reply-To: <877hcyy4qg.fsf@gmail.com> References: <877hcyy4qg.fsf@gmail.com> Message-ID: <1d6ad42d-fc4c-4743-9b2d-70692272869f@a11g2000pro.googlegroups.com> On Feb 17, 12:01 pm, Nathaniel Peterson wrote: > For those for which nothing seems to happen when you click on the canvas, > Adding > pylab.ion() > right after the comment > ####################################################################### > # This is the main part of the program, which calls the above functions > ####################################################################### > seems to fix the problem, at least on Ubuntu 9.10, matplotlib 0.99.0, > GtkAgg backend. Thanks, Nathaniel! I confess that I don't really understand the ion() command. The Matplotlib documentation says that this turns on interactive mode, but what exactly does that entail? I thought that interactive mode referred only to when you are running a Python session and typing in new commands, with each new plotting command adding to the plot straight away. But I guess it must also have some effect on GUI behaviour during the running of a script, too. I would be very grateful to hear people's thoughts on whether there might be any potential downsides to inserting this ion() command. Apparently it can sometimes make graphics much more sluggish (?). Matplotlib GUIs appear to run slightly differently on Linux depending upon which backend is being used, and I don't know enough to know whether this ion() command would help to make the scripts' behaviour either more consistent or less consistent across various backends. Also, I don't know enough to know whether the ion() command ends up having the same effect as running the scripts from an "ipython -pylab" session. Perhaps there end up being some subtle differences (?) > Thank you, Rajeev Raizada, for the great code! Thank you for the very positive feedback! Much appreciated! Sorry about the Linux/backend complications. That stuff is way beyond my understanding! :-) Raj From Chris.Barker at noaa.gov Thu Feb 17 16:12:34 2011 From: Chris.Barker at noaa.gov (Christopher Barker) Date: Thu, 17 Feb 2011 13:12:34 -0800 Subject: [SciPy-User] Some interactive Python tutorials on basic stats, possibly useful for teaching In-Reply-To: <1d6ad42d-fc4c-4743-9b2d-70692272869f@a11g2000pro.googlegroups.com> References: <877hcyy4qg.fsf@gmail.com> <1d6ad42d-fc4c-4743-9b2d-70692272869f@a11g2000pro.googlegroups.com> Message-ID: <4D5D8F42.8050405@noaa.gov> On 2/17/11 12:29 PM, Raj wrote: > I confess that I don't really understand the ion() command. > The Matplotlib documentation says that this turns on interactive mode, > but what exactly does that entail? What it does is set MPL to re-draw the figure whenever a change is made. Otherwise it only re-draws when specifically instructed to do so. > I thought that interactive mode referred only to when > you are running a Python session and typing in new commands, > with each new plotting command adding to the plot straight away. > But I guess it must also have some effect on GUI behavior > during the running of a script, too. It behaves the same way in either case, but is far more likely to be desired when typing commands one by one in a shell.
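A tiny illustration of the difference (a sketch -- any interactive backend should behave this way):

import pylab
pylab.ion()            # interactive on: each pylab call triggers a re-draw
pylab.plot([1, 2, 3])  # the line shows up immediately
pylab.ioff()           # interactive off: changes are batched up...
pylab.plot([3, 2, 1])
pylab.draw()           # ...until you explicitly request a re-draw
pylab.show()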
> I would be very grateful to hear people's thoughts > on whether there might be any potential downsides > to inserting this ion() command. I wouldn't generally do it in a script -- only if you want the user to see the figure built up as changes are made. > Apparently it can sometimes make graphics much more sluggish (?). yup -- if you are making a lot of changes, you don't want MPL to re-draw until they are all done. > Matplotlib GUIs appear to run slightly differently on Linux > depending upon which backend is being used, > and I don't know enough to know whether this ion() command > would help to make the scripts' behavior either > more consistent or less consistent across various backends. nope, that's a separate issue. > Also, I don't know enough to know whether the ion() command > ends up having the same effect as running the scripts > from an "ipython -pylab" session. nope -- ipython's "pylab" mode mostly has to do with how ipython interacts with the mainloop of the GUI backend, though it also sets up matplotlib for you, perhaps turning on interactive mode. -Chris -- Christopher Barker, Ph.D. Oceanographer Emergency Response Division NOAA/NOS/OR&R (206) 526-6959 voice 7600 Sand Point Way NE (206) 526-6329 fax Seattle, WA 98115 (206) 526-6317 main reception Chris.Barker at noaa.gov From erik.tollerud at gmail.com Thu Feb 17 16:25:13 2011 From: erik.tollerud at gmail.com (Erik Tollerud) Date: Thu, 17 Feb 2011 13:25:13 -0800 Subject: [SciPy-User] ANN: PyModelFit 0.1 released Message-ID: I'm happy to announce the first release of PyModelFit (0.1.1)! You can get it from the Python Package Index at http://pypi.python.org/pypi/PyModelFit/0.1.1 and the documentation is available at http://packages.python.org/PyModelFit/ PyModelFit is a package that provides a pythonic, object-oriented framework that simplifies the task of designing numerical models to fit data. This is a very broad task, and hence the current functionality of PyModelFit focuses on the simpler tasks of 1D curve-fitting, including a GUI interface to simplify interactive work (using Enthought Traits). For more complicated modeling, PyModelFit also provides a wide range of classes and a framework to support more general model/data types (2D to Scalar, 3D to Scalar, 3D to 3D, and so on). From ralf.gommers at googlemail.com Thu Feb 17 19:12:33 2011 From: ralf.gommers at googlemail.com (Ralf Gommers) Date: Fri, 18 Feb 2011 08:12:33 +0800 Subject: [SciPy-User] scipy.numpyio modules In-Reply-To: <26384C2388ED424A98235A07D7BE1E1A0BD2C22C@DLREXMBX01.intra.dlr.de> References: <26384C2388ED424A98235A07D7BE1E1A0BD2C22C@DLREXMBX01.intra.dlr.de> Message-ID: On Wed, Feb 16, 2011 at 8:40 PM, wrote: > Hello, > I am working on a project that uses several scipy functions such as fread and fwrite. > > from scipy.io.numpyio import fwrite, fread > from scipy.io import npfile > > I downloaded the latest scipy version and realised that these functions are not available anymore. > Have they been moved to another package? (Maybe into the numpy module?) > If they have been removed completely, are there still other methods which provide the same functionality? (I read something about fromfile, tofile). The I/O functions for numpy arrays have been moved to numpy where it made sense, or removed when they provided duplicate functionality. Use numpy.load and numpy.save for reading and writing arrays in numpy's own .npy format, loadtxt/savetxt for ascii.
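For example, something like this covers the common cases (a quick sketch; the file names are made up):

import numpy as np

a = np.arange(10, dtype=np.float64)
np.save('data.npy', a)          # numpy's own binary format
b = np.load('data.npy')
a.tofile('data.raw')            # raw binary, closest to the old fwrite
c = np.fromfile('data.raw', dtype=np.float64)  # ...and to fread
np.savetxt('data.txt', a)       # plain ascii
d = np.loadtxt('data.txt')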
Cheers, Ralf From fkaster at gmx.de Fri Feb 18 07:30:14 2011 From: fkaster at gmx.de (Frederik Orlando Kaster) Date: Fri, 18 Feb 2011 13:30:14 +0100 Subject: [SciPy-User] After compiling Scipy 0.9.0rc3 on Lucid Lynx, scipy.test() gives several failures Message-ID: <20110218123014.166460@gmx.net> Hi everybody, apologies if this has already been discussed, but I could not find anything about it. I am trying to use Scipy 0.9 (due to the novel interpolation functionalities) on a Ubuntu 10.04 LTS (Lucid Lynx), which runs inside an Oracle VM VirtualBox (version 4.0.2) on a Windows 7 host OS (Service Pack 1, v. 721). The processor is an Intel Core i7 920. uname output is Linux vgate 2.6.32-28-generic #55-Ubuntu SMP Mon Jan 10 21:21:01 UTC 2011 i686 GNU/Linux Most installed libraries are from the official Ubuntu packages, and recently upgraded, except for NumPy and SciPy, which are compiled from source. atlas3gf-sse2: 3.6.0-24ubuntu1 gcc/gfortran: 4:4.4.3-1ubuntu1 python: 2.6.5-0ubuntu1 numpy: 1.5.1 scipy: 0.9.0rc3 I installed NumPy with the following lines in the site.cfg file uncommented:

[DEFAULT]
library_dirs = /usr/lib
include_dirs = /usr/include
[blas_opt]
library_dirs = /usr/lib/sse2
libraries = f77blas, cblas, atlas
[lapack_opt]
library_dirs = /usr/lib/sse2
libraries = lapack, f77blas, cblas, atlas
[amd]
include_dirs = /usr/include/suitesparse
amd_libs = amd
[umfpack]
include_dirs = /usr/include/suitesparse
umfpack_libs = umfpack
[fftw]
libraries = fftw3

and the commands

python setup.py build --fcompiler=gnu95
sudo python setup.py install --prefix=/usr/local

I have attached the output as numpy_build.out (.err) and numpy_install.out (.err). numpy.test("full") succeeded for this installation. Afterwards, I built and installed SciPy using the same commands as for Numpy (see scipy_build.out / .err and scipy_install.out / .err). Running scipy.test("full") yields numerous failures (the stdout and stderr streams are in scipy.test.out and scipy.test.err). Since many failures seem to be BLAS-related, I also ran the ATLAS test suite provided in the libatlas-test suite, and everything worked fine. What have I done wrong? Any help would be highly appreciated. Frederik -------------- next part -------------- A non-text attachment was scrubbed... Name: numpy_build.out Type: application/octet-stream Size: 11986 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: numpy_build.err Type: application/octet-stream Size: 744 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: numpy_install.out Type: application/octet-stream Size: 14121 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: numpy_install.err Type: application/octet-stream Size: 744 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: scipy_build.out Type: application/octet-stream Size: 17629 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: scipy_build.err Type: application/octet-stream Size: 429 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed...
Name: scipy_install.out Type: application/octet-stream Size: 17849 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: scipy_install.err Type: application/octet-stream Size: 429 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: scipy.test.out Type: application/octet-stream Size: 4538 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: scipy.test.err Type: application/octet-stream Size: 140415 bytes Desc: not available URL: From pav at iki.fi Fri Feb 18 08:33:29 2011 From: pav at iki.fi (Pauli Virtanen) Date: Fri, 18 Feb 2011 13:33:29 +0000 (UTC) Subject: [SciPy-User] After compiling Scipy 0.9.0rc3 on Lucid Lynx, scipy.test() gives several failures References: <20110218123014.166460@gmx.net> Message-ID: Fri, 18 Feb 2011 13:30:14 +0100, Frederik Orlando Kaster wrote: [clip] > Since many failures seem to be BLAS-related, I also ran the ATLAS test > suite provided in the libatlas-test suite, and everything worked fine. > What have I done wrong? Any help would be highly appreciated. It indeed seems that either BLAS or LAPACK is hosed. Are you sure it picks up the correct installation of ATLAS? Lucid comes with Atlas 3.8.3, whereas your build logs show that it has found Atlas 3.6.0. You can specify which ATLAS + LAPACK to use via

export BLAS=/path/to/libf77blas.a  # or .so
export LAPACK=/path/to/liblapack.a
export ATLAS=/path/to/libatlas.a

From pav at iki.fi Fri Feb 18 08:42:55 2011 From: pav at iki.fi (Pauli Virtanen) Date: Fri, 18 Feb 2011 13:42:55 +0000 (UTC) Subject: [SciPy-User] After compiling Scipy 0.9.0rc3 on Lucid Lynx, scipy.test() gives several failures References: <20110218123014.166460@gmx.net> Message-ID: Fri, 18 Feb 2011 13:33:29 +0000, Pauli Virtanen wrote: [clip] > Are you sure it picks up the correct installation of ATLAS? Lucid comes > with Atlas 3.8.3 Ah, this was wrong -- Lucid comes with 3.6.0, Maverick has 3.8.3. Anyway, the SSE2 build of Atlas in Ubuntu (before Maverick) is known to have serious bugs, and should not be used. -- Pauli Virtanen From fkaster at gmx.de Fri Feb 18 08:52:31 2011 From: fkaster at gmx.de (Frederik Orlando Kaster) Date: Fri, 18 Feb 2011 14:52:31 +0100 Subject: [SciPy-User] After compiling Scipy 0.9.0rc3 on Lucid Lynx, scipy.test() gives several failures In-Reply-To: References: <20110218123014.166460@gmx.net> Message-ID: <20110218135231.13940@gmx.net> Oh, thanks, Pauli. I'll try an upgrade to Maverick and let you know then. > Fri, 18 Feb 2011 13:33:29 +0000, Pauli Virtanen wrote: > [clip] > > Are you sure it picks up the correct installation of ATLAS? Lucid comes > > with Atlas 3.8.3 > > Ah, this was wrong -- Lucid comes with 3.6.0, Maverick has 3.8.3. > > Anyway, the SSE2 build of Atlas in Ubuntu (before Maverick) is known to > have serious bugs, and should not be used. > > -- > Pauli Virtanen > > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user From apalomba at austin.rr.com Fri Feb 18 11:18:23 2011 From: apalomba at austin.rr.com (Anthony Palomba) Date: Fri, 18 Feb 2011 10:18:23 -0600 Subject: [SciPy-User] Timing optimization of python code... Message-ID: I am trying to optimize the timing of my python source.
Does scipy have a high resolution timer that would allow me to measure latency of functions in order to optimize things? Thanks, Anthony -------------- next part -------------- An HTML attachment was scrubbed... URL: From pav at iki.fi Fri Feb 18 11:25:21 2011 From: pav at iki.fi (Pauli Virtanen) Date: Fri, 18 Feb 2011 16:25:21 +0000 (UTC) Subject: [SciPy-User] Timing optimization of python code... References: Message-ID: Fri, 18 Feb 2011 10:18:23 -0600, Anthony Palomba wrote: > I am trying to optimize the timing of my python source. > > Does scipy have a high resolution timer that would allow me to measure > latency of functions in order to optimize things? http://docs.python.org/library/time.html http://packages.python.org/line_profiler/ http://docs.python.org/library/debug.html From jrocher at enthought.com Fri Feb 18 11:27:34 2011 From: jrocher at enthought.com (Jonathan Rocher) Date: Fri, 18 Feb 2011 10:27:34 -0600 Subject: [SciPy-User] Timing optimization of python code... In-Reply-To: References: Message-ID: Hi Anthony, several things I would recommend for you to use to do that: - if you use IPython, you can use the "magic" function %timeit. It is quick and pretty smart as it runs a given command many times and makes an average. Just do in the ipython prompt for example:

In [12]: a= None
In [17]: %timeit a is None
10000000 loops, best of 3: 84.9 ns per loop

- if you want a profiler that will analyse a large code base and tell you how much time is spent in each function, I recommend the line_profiler package: http://pypi.python.org/pypi/line_profiler This provides a script called kernprof.py which can be used to run a script under the standard cProfile profiler or the eponymous line-based profiler.

$ kernprof.py code.py
Wrote profile results to code.py.prof

You can use the stdlib pstats module as a script to view the results: http://docs.python.org/library/profile.html

$ python -m pstats
Welcome to the profile statistics browser.
% help

Documented commands (type help <topic>):
========================================
EOF add callees callers quit read reverse sort stats strip

Undocumented commands:
======================
help

% read ./code.py.prof
./code.py.prof% strip
./code.py.prof% sort time
./code.py.prof% stats 10

The last command gives you the 10 functions the code spends the most time in. Hope this helps, Jonathan On Fri, Feb 18, 2011 at 10:18 AM, Anthony Palomba wrote: > I am trying to optimize the timing of my python source. > > Does scipy have a high resolution timer that would allow > me to measure latency of functions in order to optimize things? > > > > Thanks, > Anthony > > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user > > -- Jonathan Rocher, Enthought, Inc. jrocher at enthought.com 1-512-536-1057 http://www.enthought.com -------------- next part -------------- An HTML attachment was scrubbed... URL: From e.antero.tammi at gmail.com Fri Feb 18 11:31:56 2011 From: e.antero.tammi at gmail.com (eat) Date: Fri, 18 Feb 2011 18:31:56 +0200 Subject: [SciPy-User] Timing optimization of python code... In-Reply-To: References: Message-ID: Hi, On Fri, Feb 18, 2011 at 6:18 PM, Anthony Palomba wrote: > I am trying to optimize the timing of my python source. > > Does scipy have a high resolution timer that would allow > me to measure latency of functions in order to optimize things? > Python does have a timeit module, which is for timing small pieces of code.
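E.g. straight from a script (a minimal sketch):

import timeit
# statement and setup are given as strings; timeit runs the statement
# many times and reports the total time in seconds
t = timeit.Timer('sorted(xs)', setup='xs = range(1000)')
print t.timeit(number=10000)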
If you happen to use IPython, then you can use %timeit. Care to elaborate more on the latency issue you have? My two cents, eat > > > > Thanks, > Anthony > > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From rajeev.raizada at gmail.com Fri Feb 18 14:43:34 2011 From: rajeev.raizada at gmail.com (Raj) Date: Fri, 18 Feb 2011 11:43:34 -0800 (PST) Subject: [SciPy-User] Some interactive Python tutorials on basic stats, possibly useful for teaching In-Reply-To: <4D5D8F42.8050405@noaa.gov> References: <877hcyy4qg.fsf@gmail.com> <1d6ad42d-fc4c-4743-9b2d-70692272869f@a11g2000pro.googlegroups.com> <4D5D8F42.8050405@noaa.gov> Message-ID: <9562e08e-1cff-424d-97b5-7f8f4d23257c@a21g2000prj.googlegroups.com> On Feb 17, 4:12 pm, Christopher Barker wrote: > > Matplotlib GUIs appear to run slightly differently on Linux > > depending upon which backend is being used, > > and I don't know enough to know whether this ion() command > > would help to make the scripts' behaviour either > > more consistent or less consistent across various backends. > > nope, that's a separate issue. > > > Also, I don't know enough to know whether the ion() command > > ends up having the same effect as running the scripts > > from an "ipython -pylab" session. > > nope -- ipython's "pylab" mode mostly has to do with how ipython > interacts with the mainloop of the GUI backend, though it also sets up > matplotlib for you, perhaps turning on interactive mode. Dear Chris, Many thanks indeed for your help explaining ion() and -pylab. On a Python-for-education list, John Posner suggested that inserting a pylab.draw() command into the code fixes the problem for him, in a different Python setup: http://mail.python.org/pipermail/edu-sig/2011-February/010193.html Following his suggestion, I've tried inserting a couple of pylab.draw() commands, and it appears to help, and also not to have any unwanted side-effects. I would be very interested to hear from people whether the new versions of the code with the pylab.draw() commands work properly now with various Matplotlib backends on Linux, without needing to do the "ipython -pylab" workaround. e.g. http://www.dartmouth.edu/~raj/Python/interactive_mean_std_normal_distribution.py Thanks again for your help, Raj From jsseabold at gmail.com Fri Feb 18 14:58:15 2011 From: jsseabold at gmail.com (Skipper Seabold) Date: Fri, 18 Feb 2011 14:58:15 -0500 Subject: [SciPy-User] Some interactive Python tutorials on basic stats, possibly useful for teaching In-Reply-To: <9562e08e-1cff-424d-97b5-7f8f4d23257c@a21g2000prj.googlegroups.com> References: <877hcyy4qg.fsf@gmail.com> <1d6ad42d-fc4c-4743-9b2d-70692272869f@a11g2000pro.googlegroups.com> <4D5D8F42.8050405@noaa.gov> <9562e08e-1cff-424d-97b5-7f8f4d23257c@a21g2000prj.googlegroups.com> Message-ID: On Fri, Feb 18, 2011 at 2:43 PM, Raj wrote: > On Feb 17, 4:12 pm, Christopher Barker wrote: >> > Matplotlib GUIs appear to run slightly differently on Linux >> > depending upon which backend is being used, >> > and I don't know enough to know whether this ion() command >> > would help to make the scripts' behavior either >> > more consistent or less consistent across various backends. >> >> nope, that's a separate issue.
>> >> > Also, I don't know enough to know whether the ion() command >> > ends up having the same effect as running the scripts >> > from an "ipython -pylab" session. >> >> nope -- ipython's "pylab" mode mostly has to do with how ipython >> interacts with the mainloop of the GUI backend, though it also sets up >> matplotlib for you, perhaps turning on interactive mode. > > Dear Chris, > > Many thanks indeed for your help explaining ion() and -pylab. > > On a Python-for-education list, John Posner suggested that inserting > a pylab.draw() command into the code fixes the problem for him, > in a different Python setup: > http://mail.python.org/pipermail/edu-sig/2011-February/010193.html > > Following his suggestion, I've tried inserting a couple of > pylab.draw() commands, > and it appears to help, and also not to have any unwanted side- > effects. > > I would be very interested to hear from people > whether the new versions of the code with the pylab.draw() commands > works properly now with various Matplotlib backends on Linux, > without needing to do the "ipython -pylab" workaround. > e.g. > http://www.dartmouth.edu/~raj/Python/interactive_mean_std_normal_distribution.py Works as expected for me now (and I have mpl.interactive(False)). Skipper From apalomba at austin.rr.com Fri Feb 18 15:03:40 2011 From: apalomba at austin.rr.com (Anthony Palomba) Date: Fri, 18 Feb 2011 14:03:40 -0600 Subject: [SciPy-User] Timing optimization of python code... In-Reply-To: References: Message-ID: Awesome thanks for the info! I will give it a try. -ap On Fri, Feb 18, 2011 at 10:27 AM, Jonathan Rocher wrote: > HI Anthony, > > several things I would recommend for you to use to do that: > - if you use Ipython, you can use the "magic" function %timeit. It is quick > and pretty smart as it runs a given command many times and make an average. > Just do in the ipython prompt for example: > In [12]: a= None > In [17]: %timeit a is None > 10000000 loops, best of 3: 84.9 ns per loop > > - if you want a profiler that will analyse a large code and tell you how > much time is spent in each function, I recommend the line_profiler package: > > http://pypi.python.org/pypi/line_profiler > > This provides a script called kernprof.py which can be used to run a > script under the standard cProfile profiler or the eponymous > line-based profiler. > > $ kernprof.py code.py > Wrote profile results to code.py.prof > > You can use the stdlib pstats module as a script to view the results: > > http://docs.python.org/library/profile.html > > $ python -m pstats > Welcome to the profile statistics browser. > % help > > Documented commands (type help ): > ======================================== > EOF add callees callers quit read reverse sort stats strip > > Undocumented commands: > ====================== > help > > % read ./code.py.prof > ./code.prof% strip > ./code.py.prof% sort time > ./code.py.prof% stats 10 > > The last command gives you the 10 functions the code spend the most time > in. > > Hope this helps, > Jonathan > > > On Fri, Feb 18, 2011 at 10:18 AM, Anthony Palomba wrote: > >> I am trying to optimize the timing of my python source. >> >> Does scipy have a high resolution timer that would allow >> me to measure latency of functions in order to optimize things? >> >> >> >> Thanks, >> Anthony >> >> _______________________________________________ >> SciPy-User mailing list >> SciPy-User at scipy.org >> http://mail.scipy.org/mailman/listinfo/scipy-user >> >> > > > -- > Jonathan Rocher, > Enthought, Inc. 
jrocher at enthought.com > 1-512-536-1057 > http://www.enthought.com > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From rajeev.raizada at gmail.com Fri Feb 18 19:01:56 2011 From: rajeev.raizada at gmail.com (Raj) Date: Fri, 18 Feb 2011 16:01:56 -0800 (PST) Subject: [SciPy-User] Some interactive Python tutorials on basic stats, possibly useful for teaching In-Reply-To: References: <877hcyy4qg.fsf@gmail.com> <1d6ad42d-fc4c-4743-9b2d-70692272869f@a11g2000pro.googlegroups.com> <4D5D8F42.8050405@noaa.gov> <9562e08e-1cff-424d-97b5-7f8f4d23257c@a21g2000prj.googlegroups.com> Message-ID: On Feb 18, 2:58 pm, Skipper Seabold wrote: > On Fri, Feb 18, 2011 at 2:43 PM, Raj wrote: > > > I would be very interested to hear from people > > whether the new versions of the code with the pylab.draw() commands > > works properly now with various Matplotlib backends on Linux, > > without needing to do the "ipython -pylab" workaround. > > e.g. > >http://www.dartmouth.edu/~raj/Python/interactive_mean_std_normal_dist... > > Works as expected for me now (and I have mpl.interactive(False)). Great, thanks for the feedback. Well, it looks as though John Posner's pylab.draw() suggestion helps to ensure more consistent GUI behaviour across various platforms and Matplotlib backends. It is a bit weird that the figure-refreshing happens even without the draw() command on Mac OS X, but that the draw() seems to be necessary on Linux. Having the various backends behave more consistently with each other would definitely be an improvement. However, in the meantime, inserting draw() commands seems to solve the problem, without apparent side-effects. Raj From friedrichromstedt at gmail.com Sat Feb 19 07:46:09 2011 From: friedrichromstedt at gmail.com (Friedrich Romstedt) Date: Sat, 19 Feb 2011 13:46:09 +0100 Subject: [SciPy-User] another interpolation question In-Reply-To: References: <4D5D43BE.3080505@aer.com> <73364B9E-C258-4630-A00F-B4F0FE515EE4@yale.edu> <4D5D6E4B.7060507@aer.com> Message-ID: 2011/2/17 Pauli Virtanen : > Try > > # some random data > from numpy import * > x = linspace(0, 1, 20) > y = linspace(0, 3, 30) > z = sin(x)[:,None] * cos(y)[None,:] > x_fine = linspace(0, 1, 200) > y_fine = linspace(0, 3, 140) > > from scipy.interpolate import RectBivariateSpline > interp = RectBivariateSpline(x, y, z) > z_fine = interp(x_fine, y_fine) This is nice. Docs for scipy 0.6.0 [sic] are here: http://www.scipy.org/doc/api_docs/SciPy.interpolate.fitpack2.RectBivariateSpline.html, http://www.scipy.org/doc/api_docs/SciPy.interpolate.fitpack.html. Seems the RectBivariateSpline does B-splines. Seems the knot points are chosen as the data values (x, y). __call__ is not documented. Seems there is no similar thing as splprep() in fitpack2? From wkerzendorf at googlemail.com Sat Feb 19 08:29:17 2011 From: wkerzendorf at googlemail.com (Wolfgang Kerzendorf) Date: Sat, 19 Feb 2011 21:29:17 +0800 Subject: [SciPy-User] griddata problem in 0.9rc3 Message-ID: <4D5FC5AD.5090005@gmail.com> Dear all, There is a problem with scipy.interpolate:

In [2]: interpolate.griddata([-1.5,-1.0], [5,6],[[-1.12]])
Out[2]: array([[ 5.76]])

In [3]: interpolate.griddata([-1.0,-1.5], [5,6],[[-1.12]])
Out[3]: array([[ nan]])

It depends on the order of the input values.
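Sorting the points (and the values with them) into ascending order seems to avoid it, e.g. (a sketch based on the calls above):

import numpy as np
from scipy import interpolate

pts = np.array([-1.0, -1.5])
vals = np.array([5, 6])
order = np.argsort(pts)   # make the points ascending, keeping values paired
z = interpolate.griddata(pts[order], vals[order], [[-1.12]])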
Cheers Wolfgang From wesmckinn at gmail.com Sat Feb 19 20:26:18 2011 From: wesmckinn at gmail.com (Wes McKinney) Date: Sat, 19 Feb 2011 20:26:18 -0500 Subject: [SciPy-User] ANN: pandas 0.3.0 released Message-ID: Announcing pandas 0.3.0 ----------------------- This major release of pandas represents approximately 1 year of continuous development work and brings with it many new features, bug fixes, speed enhancements, and general quality-of-life improvements. The most significant change from the 0.2 release has been the completion of a rigorous unit test suite covering all of the core functionality. pandas development also migrated to GitHub since the last official release. ========== What is it ========== **pandas** is a library of labeled data structures, statistical models, and general code for working with time series and cross-sectional data. It was designed with the practical needs of statistical modeling and large, inhomogeneous data sets in mind. =============== Where to get it =============== Source code: http://github.com/wesm/pandas Binary installers on PyPI: http://pypi.python.org/pypi/pandas Documentation: http://pandas.sourceforge.net Release notes: http://github.com/wesm/pandas/blob/master/RELEASE.rst pandas 0.3.0 release notes ========================== **Release date:** February 20, 2011 **New features / modules** * DataFrame / DataMatrix classes * `corrwith` function to compute column- or row-wise correlations between two objects * Can boolean-index DataFrame objects, e.g. df[df > 2] = 2, px[px > last_px] = 0 * Added comparison magic methods (__lt__, __gt__, etc.) * Flexible explicit arithmetic methods (add, mul, sub, div, etc.) * Added `reindex_like` method * WidePanel * Added `reindex_like` method * `pandas.io`: IO utilities * `pandas.io.sql` module * Convenience functions for accessing SQL-like databases * `pandas.io.pytables` module * Added (still experimental) HDFStore class for storing pandas data structures using HDF5 / PyTables * `pandas.core.datetools` * Added WeekOfMonth date offset * `pandas.rpy` (experimental) module created, provide some interfacing / conversion between rpy2 and pandas **Improvements** * Unit test coverage: 100% line coverage of core data structures * Speed enhancement to rolling_{median, max, min} * Column ordering between DataFrame and DataMatrix is now consistent: before DataFrame would not respect column order * Improved {Series, DataFrame}.plot methods to be more flexible (can pass matplotlib Axis arguments, plot DataFrame columns in multiple subplots, etc.) **API Changes** * Exponentially-weighted moment functions in `pandas.stats.moments` have a more consistent API and accept a min_periods argument like their regular moving counterparts. * **fillMethod** argument in Series, DataFrame changed to **method**, `FutureWarning` added. 
* **fill** method in Series, DataFrame/DataMatrix, WidePanel renamed to **fillna**, `FutureWarning` added to **fill** * Renamed **DataFrame.getXS** to **xs**, `FutureWarning` added * Removed **cap** and **floor** functions from DataFrame, renamed to **clip_upper** and **clip_lower** for consistency with NumPy **Bug fixes** * Fixed bug in IndexableSkiplist Cython code that was breaking rolling_max function * Numerous numpy.int64-related indexing fixes * Several NumPy 1.4.0 NaN-handling fixes * Bug fixes to pandas.io.parsers.parseCSV * Fixed `DateRange` caching issue with unusual date offsets * Fixed bug in `DateRange.union` * Fixed corner case in `IndexableSkiplist` implementation From ralf.gommers at googlemail.com Sun Feb 20 06:16:04 2011 From: ralf.gommers at googlemail.com (Ralf Gommers) Date: Sun, 20 Feb 2011 19:16:04 +0800 Subject: [SciPy-User] ANN: SciPy 0.9.0 release candidate 5 Message-ID: Hi, I am pleased to announce the availability of the fifth - and hopefully final - release candidate of SciPy 0.9.0. This will be the first SciPy release to include support for Python 3 (all modules except scipy.weave), as well as for Python 2.7. Sources and release notes can be found at http://sourceforge.net/projects/scipy/files/scipy/0.9.0rc5/. Binaries will follow there within a day. Changes since release candidate 3: - a fix for a segfault on Windows 7 - a fix for a bug introduced in Python 3.2rc3 - a fix for a bug in scipy.special with MKL builds If no more issues are reported, 0.9.0 will be released in one week. Enjoy, Ralf From m.abdollahi at gmail.com Sun Feb 20 14:11:27 2011 From: m.abdollahi at gmail.com (Mohammad Abdollahi) Date: Sun, 20 Feb 2011 20:11:27 +0100 Subject: [SciPy-User] saving a dictionary to a MATLAB cell format Message-ID: Dear List, I am trying to save a 3 dimensional dictionary of matrices to a MATLAB cell structure but I can't do it. Does anybody know if it is possible? I mean of course I can save each component of the original dictionary to a MATLAB matrix, but when I try to save the whole dictionary I just get an empty file. Thanks -------------- next part -------------- An HTML attachment was scrubbed... URL: From robince at gmail.com Sun Feb 20 14:22:31 2011 From: robince at gmail.com (Robin) Date: Sun, 20 Feb 2011 20:22:31 +0100 Subject: [SciPy-User] saving a dictionary to a MATLAB cell format In-Reply-To: References: Message-ID: scipy.io.loadmat uses numpy object arrays as the python equivalent for Matlab cell arrays: http://docs.scipy.org/doc/scipy/reference/tutorial/io.html So you would have to convert your dictionary to an object array.
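Roughly like this, I would think (an untested sketch; the dict keys are assumed to be (i, j, k) index tuples, and 'mycells' is just a made-up variable name):

import numpy as np
import scipy.io

d = {(0, 0, 0): np.eye(2), (0, 0, 1): np.zeros((3, 3))}   # stand-in data
shape = tuple(max(k[i] for k in d) + 1 for i in range(3))
cells = np.empty(shape, dtype=object)   # object array -> MATLAB cell array
for key, mat in d.items():
    cells[key] = mat
# make sure every cell gets filled (e.g. with empty arrays), since savemat
# may not cope with leftover None entries
scipy.io.savemat('cells.mat', {'mycells': cells})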
Cheers Robin On Sun, Feb 20, 2011 at 8:11 PM, Mohammad Abdollahi wrote: > Dear List, > > I am trying to save a 3 dimensional dictionary of matrices to a MATLAB cell > structure but I can't do it. Does anybody know if it is possible? I mean of > course I can save each component of the original dictionary to a MATLAB > matrix, but when I try to save the whole dictionary I just get an empty > file. > > Thanks > > > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user > > From m.abdollahi at gmail.com Sun Feb 20 14:33:56 2011 From: m.abdollahi at gmail.com (Mohammad Abdollahi) Date: Sun, 20 Feb 2011 20:33:56 +0100 Subject: [SciPy-User] saving a dictionary to a MATLAB cell format In-Reply-To: References: Message-ID: Thanks a lot Robin. That was extremely helpful! Cheers Mohammad On Sun, Feb 20, 2011 at 8:22 PM, Robin wrote: > scipy.io.loadmat uses numpy object arrays as the python equivalent for > Matlab cell arrays: > http://docs.scipy.org/doc/scipy/reference/tutorial/io.html > > So you would have to convert your dictionary to an object array. > > Cheers > > Robin > > On Sun, Feb 20, 2011 at 8:11 PM, Mohammad Abdollahi > wrote: > > Dear List, > > > > I am trying to save a 3 dimensional dictionary of matrices to a MATLAB cell > > structure but I can't do it. Does anybody know if it is possible? I mean of > > course I can save each component of the original dictionary to a MATLAB > > matrix, but when I try to save the whole dictionary I just get an empty > > file. > > > > Thanks > > > > > > _______________________________________________ > > SciPy-User mailing list > > SciPy-User at scipy.org > > http://mail.scipy.org/mailman/listinfo/scipy-user > > > > > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user > -------------- next part -------------- An HTML attachment was scrubbed... URL: From yyc at solvcon.net Sun Feb 20 21:55:14 2011 From: yyc at solvcon.net (Yung-Yu Chen) Date: Sun, 20 Feb 2011 21:55:14 -0500 Subject: [SciPy-User] ANN: SOLVCON 0.0.3 released Message-ID: Hello, I am announcing the release of SOLVCON 0.0.3. SOLVCON is a multi-physics, supercomputing software framework for high-fidelity solutions of partial differential equations (PDEs) by hybrid parallelism. The source tarball can be downloaded at https://bitbucket.org/yungyuc/solvcon/downloads . More information about SOLVCON can be found at http://solvcon.net/ . The biggest improvement of this release is the CUDA-enabled solver kernel ``solvcon.kerpak.cuse``. The ``cuse`` module is designed to use either pthread on CPU or CUDA on GPU. The release also contains many important features for future development, including an interface with CUBIT and incorporation of SCOTCH-5.1 for partitioning large graphs. with regards, Yung-Yu Chen -- Yung-Yu Chen PhD candidate of Mechanical Engineering The Ohio State University, Columbus, Ohio +1 (614) 859 2436 http://solvcon.net/yyc/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From m.abdollahi at gmail.com Mon Feb 21 04:50:25 2011 From: m.abdollahi at gmail.com (Mohammad Abdollahi) Date: Mon, 21 Feb 2011 10:50:25 +0100 Subject: [SciPy-User] problem with loading large MAT files Message-ID: <4F9E415F-082F-4BE0-AB4D-CACED45886F3@gmail.com> Dear List, I have a couple of MAT files around 450 MB in size that are apparently too large for the scipy.io.loadmat function. Indeed I keep getting this error:

File "/Library/Frameworks/Python.framework/Versions/6.2/lib/python2.6/site-packages/scipy/io/matlab/mio.py", line 140, in loadmat
    matfile_dict = MR.get_variables()
File "/Library/Frameworks/Python.framework/Versions/6.2/lib/python2.6/site-packages/scipy/io/matlab/mio5.py", line 404, in get_variables
    hdr, next_position = self.read_var_header()
File "/Library/Frameworks/Python.framework/Versions/6.2/lib/python2.6/site-packages/scipy/io/matlab/mio5.py", line 356, in read_var_header
    stream = StringIO(dcor.decompress(data))
SystemError: Objects/stringobject.c:4271: bad argument to internal function

but everything is fine when I use a file with a size around 360 MB or so. So does anybody know how to fix this? Of course without having to subdivide the original MAT file into smaller parts.
Best Mohammad From danielstefanmader at googlemail.com Mon Feb 21 09:11:48 2011 From: danielstefanmader at googlemail.com (Daniel Mader) Date: Mon, 21 Feb 2011 15:11:48 +0100 Subject: [SciPy-User] fitting sigmoidal data with weighted least squares Message-ID: Hi, currently, I am using scipy.optimize.fit_curve in order to do least squares fitting to sigmoidal data. Basically, this works OK but now I'd like to introduce some kind of weighting to the fit. >From the help of GraphPad Prism: "Regression is most often done by minimizing the sum-of-squares of the vertical distances of the data from the line or curve. Points further from the curve contribute more to the sum-of-squares. Points close to the curve contribute little. This makes sense, when you expect experimental scatter to be the same, on average, in all parts of the curve. In many experimental situations, you expect the average distance (or rather the average absolute value of the distance) of the points from the curve to be higher when Y is higher. The points with the larger scatter will have much larger sum-of-squares and thus dominate the calculations. If you expect the relative distance (residual divided by the height of the curve) to be consistent, then you should weight by 1/Y2." This is exactly the case for my data, so I'd like to give this a try but I have no clue how. Attached is a basic script which works besides weighting. Maybe someone could point out how to pass this to the underlying scipy.optimize.leastsq function? Thanks a lot in advance, Daniel # -*- coding: utf-8 -*- import scipy, pylab, scipy.optimize def findNearest(array,value): return abs(scipy.asarray(array)-value).argmin() def sigmoid(x,EC50,k,base,amp): return amp / (1 + scipy.exp(-k*(x-EC50))) + base xs = [-5.80914299, -4.60517019, -3.5065579, -2.30258509, -1.2039728, 0., 1.09861229, 2.30258509, 3.40119738 ] ys = [5.15459766e-04, 0.00000000e+00, 8.57757267e-04, 6.35666594e-03, 1.23643898e-01, 5.36029832e-01, 7.95598054e-01, 8.96318087e-01, 1.00000000e+00 ] print "guessing parameters ..." xmin = min(xs) xmax = max(xs) ymin = min(ys) ymax = max(ys) y50 = (ymax - ymin) / 2 + ymin idx = findNearest(ys,y50) EC50 = xs[idx] k = 1 baseline = ymin amplitude = ymax - ymin guess = [EC50,k,baseline,amplitude] print "guess: ", guess print "fitting data ..." fitfunc = sigmoid popt,pcov = scipy.optimize.curve_fit(fitfunc,xs,ys) print "popt: ", popt x = scipy.linspace(min(xs),max(xs),100) pylab.figure() pylab.plot(xs,ys,'x', label='raw') pylab.plot(x,fitfunc(x,*guess), label='guess') pylab.plot(x,fitfunc(x,*popt), label='fit') pylab.legend() pylab.grid() pylab.show() From jrocher at enthought.com Mon Feb 21 09:28:09 2011 From: jrocher at enthought.com (Jonathan Rocher) Date: Mon, 21 Feb 2011 08:28:09 -0600 Subject: [SciPy-User] fitting sigmoidal data with weighted least squares In-Reply-To: References: Message-ID: Hi Daniel, I used this recently. You need to pass the optional argument 'sigma' to curve_fit: In [36]: from scipy import optimize In [37]: optimize.curve_fit? (...) sigma : None or N-length sequence If not None, it represents the standard-deviation of ydata. This vector, if given, will be used as weights in the least-squares problem. Hope this helps, Jonathan On Mon, Feb 21, 2011 at 8:11 AM, Daniel Mader < danielstefanmader at googlemail.com> wrote: > Hi, > > currently, I am using scipy.optimize.fit_curve in order to do least > squares fitting to sigmoidal data. Basically, this works OK but now > I'd like to introduce some kind of weighting to the fit. 
> > >From the help of GraphPad Prism: > "Regression is most often done by minimizing the sum-of-squares of the > vertical distances of the data from the line or curve. Points further > from the curve contribute more to the sum-of-squares. Points close to > the curve contribute little. This makes sense, when you expect > experimental scatter to be the same, on average, in all parts of the > curve. > > In many experimental situations, you expect the average distance (or > rather the average absolute value of the distance) of the points from > the curve to be higher when Y is higher. The points with the larger > scatter will have much larger sum-of-squares and thus dominate the > calculations. If you expect the relative distance (residual divided by > the height of the curve) to be consistent, then you should weight by > 1/Y2." > > This is exactly the case for my data, so I'd like to give this a try > but I have no clue how. > > Attached is a basic script which works besides weighting. Maybe > someone could point out how to pass this to the underlying > scipy.optimize.leastsq function? > > Thanks a lot in advance, > Daniel > > # -*- coding: utf-8 -*- > > import scipy, pylab, scipy.optimize > > def findNearest(array,value): > return abs(scipy.asarray(array)-value).argmin() > > def sigmoid(x,EC50,k,base,amp): > return amp / (1 + scipy.exp(-k*(x-EC50))) + base > > xs = [-5.80914299, > -4.60517019, > -3.5065579, > -2.30258509, > -1.2039728, > 0., > 1.09861229, > 2.30258509, > 3.40119738 > ] > ys = [5.15459766e-04, > 0.00000000e+00, > 8.57757267e-04, > 6.35666594e-03, > 1.23643898e-01, > 5.36029832e-01, > 7.95598054e-01, > 8.96318087e-01, > 1.00000000e+00 > ] > > print "guessing parameters ..." > xmin = min(xs) > xmax = max(xs) > ymin = min(ys) > ymax = max(ys) > y50 = (ymax - ymin) / 2 + ymin > idx = findNearest(ys,y50) > EC50 = xs[idx] > k = 1 > baseline = ymin > amplitude = ymax - ymin > > guess = [EC50,k,baseline,amplitude] > print "guess: ", guess > > print "fitting data ..." > fitfunc = sigmoid > popt,pcov = scipy.optimize.curve_fit(fitfunc,xs,ys) > print "popt: ", popt > > x = scipy.linspace(min(xs),max(xs),100) > pylab.figure() > pylab.plot(xs,ys,'x', label='raw') > pylab.plot(x,fitfunc(x,*guess), label='guess') > pylab.plot(x,fitfunc(x,*popt), label='fit') > pylab.legend() > pylab.grid() > pylab.show() > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user > -- Jonathan Rocher, Enthought, Inc. jrocher at enthought.com 1-512-536-1057 http://www.enthought.com -------------- next part -------------- An HTML attachment was scrubbed... URL: From m.abdollahi at gmail.com Mon Feb 21 10:32:00 2011 From: m.abdollahi at gmail.com (Mohammad Abdollahi) Date: Mon, 21 Feb 2011 16:32:00 +0100 Subject: [SciPy-User] Problem with loading large MAT files Message-ID: Dear List I have a couple of MAT files around 450 MB of size that apparently they are too large for scipy.io.loadmat function. 
Indeed, I keep getting this error:

  File "/Library/Frameworks/Python.framework/Versions/6.2/lib/python2.6/site-packages/scipy/io/matlab/mio.py", line 140, in loadmat
    matfile_dict = MR.get_variables()
  File "/Library/Frameworks/Python.framework/Versions/6.2/lib/python2.6/site-packages/scipy/io/matlab/mio5.py", line 404, in get_variables
    hdr, next_position = self.read_var_header()
  File "/Library/Frameworks/Python.framework/Versions/6.2/lib/python2.6/site-packages/scipy/io/matlab/mio5.py", line 356, in read_var_header
    stream = StringIO(dcor.decompress(data))
SystemError: Objects/stringobject.c:4271: bad argument to internal function

but everything is fine when I use a file with a size around 360 MB or so. Does anybody know how to fix this, ideally without having to subdivide the original MAT file into smaller parts?

Best
Mohammad

From danielstefanmader at googlemail.com  Mon Feb 21 10:50:09 2011
From: danielstefanmader at googlemail.com (Daniel Mader)
Date: Mon, 21 Feb 2011 16:50:09 +0100
Subject: [SciPy-User] fitting sigmoidal data with weighted least squares
In-Reply-To: References: Message-ID:

Dear Jonathan,

thanks for such a quick answer! However, I am not 100% sure if I understood correctly:

Currently, I am passing raw values for x and y to curve_fit:
popt,pcov = scipy.optimize.curve_fit(fitfunc,xs,ys,p0=guess)

Now, when I want weighting, I just need to pass an additional array to the function, with one entry per x,y pair?

I could use the standard deviation alright, but I am not sure if this would yield the same results as the 1/Y² weighting used by GraphPad.

What would you suggest to use instead?
weights = yin**2
so that
popt,pcov = scipy.optimize.curve_fit(fitfunc,xs,ys,p0=guess,sigma=weights)

Thanks again in advance for any hint,
Daniel

2011/2/21 Jonathan Rocher wrote:
> Hi Daniel,
>
> I used this recently. You need to pass the optional argument 'sigma' to
> curve_fit:
> In [36]: from scipy import optimize
>
> In [37]: optimize.curve_fit?
>
> (...)
> sigma : None or N-length sequence
>     If not None, it represents the standard-deviation of ydata.
>     This vector, if given, will be used as weights in the
>     least-squares problem.
>
> Hope this helps,
> Jonathan
>
> On Mon, Feb 21, 2011 at 8:11 AM, Daniel Mader wrote:
>>
>> Hi,
>>
>> currently, I am using scipy.optimize.curve_fit in order to do least
>> squares fitting to sigmoidal data. Basically, this works OK but now
>> I'd like to introduce some kind of weighting to the fit.
>>
>> From the help of GraphPad Prism:
>> "Regression is most often done by minimizing the sum-of-squares of the
>> vertical distances of the data from the line or curve. Points further
>> from the curve contribute more to the sum-of-squares. Points close to
>> the curve contribute little. This makes sense, when you expect
>> experimental scatter to be the same, on average, in all parts of the
>> curve.
>>
>> In many experimental situations, you expect the average distance (or
>> rather the average absolute value of the distance) of the points from
>> the curve to be higher when Y is higher. The points with the larger
>> scatter will have much larger sum-of-squares and thus dominate the
>> calculations. If you expect the relative distance (residual divided by
>> the height of the curve) to be consistent, then you should weight by
>> 1/Y²."
>>
>> This is exactly the case for my data, so I'd like to give this a try
>> but I have no clue how.
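To pin down the weighting question in running code: curve_fit divides each residual by the matching sigma entry before squaring, so sigma=ys puts 1/Y² weights on the squared residuals, which is the GraphPad convention quoted above (Josef makes the same point later in the thread). A minimal sketch with rounded data from this thread, where flooring sigma is an assumption of mine to keep the y = 0 point from getting infinite weight:

    import numpy as np
    import scipy.optimize

    def sigmoid(x, EC50, k, base, amp):
        return amp / (1 + np.exp(-k * (x - EC50))) + base

    xs = np.array([-5.81, -4.61, -3.51, -2.30, -1.20, 0.0, 1.10, 2.30, 3.40])
    ys = np.array([5.2e-4, 0.0, 8.6e-4, 6.4e-3, 0.124, 0.536, 0.796, 0.896, 1.0])

    # sigma ~ |y| gives 1/y**2 weights on the squared residuals; the small
    # floor (an arbitrary choice) guards the zero data point
    sigma = np.maximum(np.abs(ys), 1e-3)
    guess = [0.0, 1.0, ys.min(), ys.max() - ys.min()]
    popt, pcov = scipy.optimize.curve_fit(sigmoid, xs, ys, p0=guess, sigma=sigma)
    print(popt)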
>> >> Attached is a basic script which works besides weighting. Maybe >> someone could point out how to pass this to the underlying >> scipy.optimize.leastsq function? >> >> Thanks a lot in advance, >> Daniel >> >> # -*- coding: utf-8 -*- >> >> import scipy, pylab, scipy.optimize >> >> def findNearest(array,value): >> ?return abs(scipy.asarray(array)-value).argmin() >> >> def sigmoid(x,EC50,k,base,amp): >> ?return amp / (1 + scipy.exp(-k*(x-EC50))) + base >> >> xs = [-5.80914299, >> ? ? ?-4.60517019, >> ? ? ?-3.5065579, >> ? ? ?-2.30258509, >> ? ? ?-1.2039728, >> ? ? ?0., >> ? ? ?1.09861229, >> ? ? ?2.30258509, >> ? ? ?3.40119738 >> ? ? ?] >> ys = [5.15459766e-04, >> ? ? ?0.00000000e+00, >> ? ? ?8.57757267e-04, >> ? ? ?6.35666594e-03, >> ? ? ?1.23643898e-01, >> ? ? ?5.36029832e-01, >> ? ? ?7.95598054e-01, >> ? ? ?8.96318087e-01, >> ? ? ?1.00000000e+00 >> ? ? ?] >> >> print "guessing parameters ..." >> xmin = min(xs) >> xmax = max(xs) >> ymin = min(ys) >> ymax = max(ys) >> y50 = (ymax - ymin) / 2 + ymin >> idx = findNearest(ys,y50) >> EC50 = xs[idx] >> k = 1 >> baseline = ymin >> amplitude = ymax - ymin >> >> guess = [EC50,k,baseline,amplitude] >> print "guess: ", guess >> >> print "fitting data ..." >> fitfunc = sigmoid >> popt,pcov = scipy.optimize.curve_fit(fitfunc,xs,ys) >> print "popt: ", popt >> >> x = scipy.linspace(min(xs),max(xs),100) >> pylab.figure() >> pylab.plot(xs,ys,'x', label='raw') >> pylab.plot(x,fitfunc(x,*guess), label='guess') >> pylab.plot(x,fitfunc(x,*popt), label='fit') >> pylab.legend() >> pylab.grid() >> pylab.show() From matthew.brett at gmail.com Mon Feb 21 12:57:58 2011 From: matthew.brett at gmail.com (Matthew Brett) Date: Mon, 21 Feb 2011 09:57:58 -0800 Subject: [SciPy-User] Problem with loading large MAT files In-Reply-To: References: Message-ID: Hi, > I have a couple of MAT files around 450 MB of size that apparently they are > too large for scipy.io.loadmat function. In deed I keep getting this error : > > ?File > "/Library/Frameworks/Python.framework/Versions/6.2/lib/python2.6/site-packages/scipy/io/matlab/mio.py", > line 140, in loadmat > ?? matfile_dict = MR.get_variables() > ?File > "/Library/Frameworks/Python.framework/Versions/6.2/lib/python2.6/site-packages/scipy/io/matlab/mio5.py", > line 404, in get_variables > ?? hdr, next_position = self.read_var_header() > ?File > "/Library/Frameworks/Python.framework/Versions/6.2/lib/python2.6/site-packages/scipy/io/matlab/mio5.py", > line 356, in read_var_header > ?? stream = StringIO(dcor.decompress(data)) > SystemError: Objects/stringobject.c:4271: bad argument to internal function > > > but everything is fine when I use a file with a size around 360 MB or sth. > So does anybody know how to fix this ? of course without having to subdivide > the original MAT file into samller parts. Oh - sorry - that sounds annoying. Can you put up the larger file somewhere where I can get it? I see you are on a Mac, and it looks like you are using an EPD. Have you got the 32-bit version or the 64? Best, Matthew From m.abdollahi at gmail.com Mon Feb 21 13:25:47 2011 From: m.abdollahi at gmail.com (Mohammad Abdollahi) Date: Mon, 21 Feb 2011 19:25:47 +0100 Subject: [SciPy-User] Problem with loading large MAT files In-Reply-To: References: Message-ID: Hi Matthew Thanks for the reply. I uploaded the file right now, here's the link : http://www.easy-share.com/1913938952/coch_six.mat Well, I think its a 32-bit EPD but I'm not 100% sure. 
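Matthew's 32-bit-or-64-bit question can be settled directly from the interpreter; a minimal check using only the standard library:

    import platform
    import sys

    print(platform.architecture()[0])  # '32bit' or '64bit'
    print(sys.maxsize > 2**32)         # True only on a 64-bit build (Python 2.6+)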
Best Mohammad On Mon, Feb 21, 2011 at 6:57 PM, Matthew Brett wrote: > Hi, > > > I have a couple of MAT files around 450 MB of size that apparently they > are > > too large for scipy.io.loadmat function. In deed I keep getting this > error : > > > > File > > > "/Library/Frameworks/Python.framework/Versions/6.2/lib/python2.6/site-packages/scipy/io/matlab/mio.py", > > line 140, in loadmat > > matfile_dict = MR.get_variables() > > File > > > "/Library/Frameworks/Python.framework/Versions/6.2/lib/python2.6/site-packages/scipy/io/matlab/mio5.py", > > line 404, in get_variables > > hdr, next_position = self.read_var_header() > > File > > > "/Library/Frameworks/Python.framework/Versions/6.2/lib/python2.6/site-packages/scipy/io/matlab/mio5.py", > > line 356, in read_var_header > > stream = StringIO(dcor.decompress(data)) > > SystemError: Objects/stringobject.c:4271: bad argument to internal > function > > > > > > but everything is fine when I use a file with a size around 360 MB or > sth. > > So does anybody know how to fix this ? of course without having to > subdivide > > the original MAT file into samller parts. > > Oh - sorry - that sounds annoying. Can you put up the larger file > somewhere where I can get it? I see you are on a Mac, and it looks > like you are using an EPD. Have you got the 32-bit version or the > 64? > > Best, > > Matthew > -------------- next part -------------- An HTML attachment was scrubbed... URL: From bug at sas.upenn.edu Mon Feb 21 15:06:34 2011 From: bug at sas.upenn.edu (Gavin W. Burris) Date: Mon, 21 Feb 2011 15:06:34 -0500 Subject: [SciPy-User] build from source on RHEL6 Message-ID: <4D62C5CA.6030104@sas.upenn.edu> Hello SciPy people, I just spent a few days trying to figure out the build process for SciPy on our new Linux desktops. It was an arduous journey, and I though I should share: http://idolinux.blogspot.com/2011/02/atlas-numpy-scipy-build-on-rhel-6.html It seems to be working fine now. Tips and pointers are much appreciated. Cheers, -- Gavin W. Burris Senior Systems Programmer Information Security and Unix Systems School of Arts and Sciences University of Pennsylvania From jrocher at enthought.com Mon Feb 21 17:12:22 2011 From: jrocher at enthought.com (Jonathan Rocher) Date: Mon, 21 Feb 2011 16:12:22 -0600 Subject: [SciPy-User] fitting sigmoidal data with weighted least squares In-Reply-To: References: Message-ID: Yes, you should give to sigma an array of errors of the same length as xs or ys. Note that I tried you code and if you try to use 1/ys**2 without tweaking, a "good enough" fit will not be found with the default max value on the number of calls to the function. One of the values of y is 0, so that creates an infinite error bar. You might therefore tweak certain values of your error array and potentially also play with the "maxfev" keyword as well. Hope this helps. Jonathan On Mon, Feb 21, 2011 at 9:50 AM, Daniel Mader < danielstefanmader at googlemail.com> wrote: > Dear Jonathan, > > thanks for such a quick answer! However, I am not 100% sure if I > understood correctly: > > Currently, I am passing raw values for x and y to curve_fit: > popt,pcov = scipy.optimize.curve_fit(fitfunc,xs,ys,p0=guess) > > Now, when I want weighting, I just need to pass an additional array to > the function, with one entry per x,y pair? > > I could use the standard deviation alright but I am not sure is this > would yield the same results as the 1/Y? weighting used by GraphPad. > > What would you suggest to use instead? 
> weights = yin**2 > so that > popt,pcov = scipy.optimize.curve_fit(fitfunc,xs,ys,p0=guess,sigma=weights) > > Thanks again in advance for any hint, > Daniel > > > 2011/2/21 Jonathan Rocher wrote: > > Hi Daniel, > > > > I used this recently. You need to pass the optional argument 'sigma' to > > curve_fit: > > In [36]: from scipy import optimize > > > > In [37]: optimize.curve_fit? > > > > (...) > > sigma : None or N-length sequence > > If not None, it represents the standard-deviation of ydata. > > This vector, if given, will be used as weights in the > > least-squares problem. > > > > Hope this helps, > > Jonathan > > > > On Mon, Feb 21, 2011 at 8:11 AM, Daniel Mader > > wrote: > >> > >> Hi, > >> > >> currently, I am using scipy.optimize.fit_curve in order to do least > >> squares fitting to sigmoidal data. Basically, this works OK but now > >> I'd like to introduce some kind of weighting to the fit. > >> > >> >From the help of GraphPad Prism: > >> "Regression is most often done by minimizing the sum-of-squares of the > >> vertical distances of the data from the line or curve. Points further > >> from the curve contribute more to the sum-of-squares. Points close to > >> the curve contribute little. This makes sense, when you expect > >> experimental scatter to be the same, on average, in all parts of the > >> curve. > >> > >> In many experimental situations, you expect the average distance (or > >> rather the average absolute value of the distance) of the points from > >> the curve to be higher when Y is higher. The points with the larger > >> scatter will have much larger sum-of-squares and thus dominate the > >> calculations. If you expect the relative distance (residual divided by > >> the height of the curve) to be consistent, then you should weight by > >> 1/Y2." > >> > >> This is exactly the case for my data, so I'd like to give this a try > >> but I have no clue how. > >> > >> Attached is a basic script which works besides weighting. Maybe > >> someone could point out how to pass this to the underlying > >> scipy.optimize.leastsq function? > >> > >> Thanks a lot in advance, > >> Daniel > >> > >> # -*- coding: utf-8 -*- > >> > >> import scipy, pylab, scipy.optimize > >> > >> def findNearest(array,value): > >> return abs(scipy.asarray(array)-value).argmin() > >> > >> def sigmoid(x,EC50,k,base,amp): > >> return amp / (1 + scipy.exp(-k*(x-EC50))) + base > >> > >> xs = [-5.80914299, > >> -4.60517019, > >> -3.5065579, > >> -2.30258509, > >> -1.2039728, > >> 0., > >> 1.09861229, > >> 2.30258509, > >> 3.40119738 > >> ] > >> ys = [5.15459766e-04, > >> 0.00000000e+00, > >> 8.57757267e-04, > >> 6.35666594e-03, > >> 1.23643898e-01, > >> 5.36029832e-01, > >> 7.95598054e-01, > >> 8.96318087e-01, > >> 1.00000000e+00 > >> ] > >> > >> print "guessing parameters ..." > >> xmin = min(xs) > >> xmax = max(xs) > >> ymin = min(ys) > >> ymax = max(ys) > >> y50 = (ymax - ymin) / 2 + ymin > >> idx = findNearest(ys,y50) > >> EC50 = xs[idx] > >> k = 1 > >> baseline = ymin > >> amplitude = ymax - ymin > >> > >> guess = [EC50,k,baseline,amplitude] > >> print "guess: ", guess > >> > >> print "fitting data ..." 
> >> fitfunc = sigmoid > >> popt,pcov = scipy.optimize.curve_fit(fitfunc,xs,ys) > >> print "popt: ", popt > >> > >> x = scipy.linspace(min(xs),max(xs),100) > >> pylab.figure() > >> pylab.plot(xs,ys,'x', label='raw') > >> pylab.plot(x,fitfunc(x,*guess), label='guess') > >> pylab.plot(x,fitfunc(x,*popt), label='fit') > >> pylab.legend() > >> pylab.grid() > >> pylab.show() > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user > -- Jonathan Rocher, Enthought, Inc. jrocher at enthought.com 1-512-536-1057 http://www.enthought.com -------------- next part -------------- An HTML attachment was scrubbed... URL: From josef.pktd at gmail.com Mon Feb 21 18:25:38 2011 From: josef.pktd at gmail.com (josef.pktd at gmail.com) Date: Mon, 21 Feb 2011 18:25:38 -0500 Subject: [SciPy-User] fitting sigmoidal data with weighted least squares In-Reply-To: References: Message-ID: On Mon, Feb 21, 2011 at 5:12 PM, Jonathan Rocher wrote: > Yes, you should give to sigma an array of errors of the same length as xs or > ys. Note that I tried you code and if you try to use 1/ys**2 without > tweaking, a "good enough" fit will not be found with the default max value > on the number of calls to the function.? One of the values of y is 0, so > that creates an infinite error bar. You might therefore tweak certain values > of your error array and potentially also play with the "maxfev" keyword as > well. I'm not sure this is clear. (If I remember correctly) if you want to weight with 1/ys, then the argument should be sigma=ys ys**2 would be (proportional to the) the error variance sigma**2, and the errors are reweighted by 1/ys. Josef > > Hope this helps. > Jonathan > > On Mon, Feb 21, 2011 at 9:50 AM, Daniel Mader > wrote: >> >> Dear Jonathan, >> >> thanks for such a quick answer! However, I am not 100% sure if I >> understood correctly: >> >> Currently, I am passing raw values for x and y to curve_fit: >> popt,pcov = scipy.optimize.curve_fit(fitfunc,xs,ys,p0=guess) >> >> Now, when I want weighting, I just need to pass an additional array to >> the function, with one entry per x,y pair? >> >> I could use the standard deviation alright but I am not sure is this >> would yield the same results as the 1/Y? weighting used by GraphPad. >> >> What would you suggest to use instead? >> weights = yin**2 >> so that >> popt,pcov = scipy.optimize.curve_fit(fitfunc,xs,ys,p0=guess,sigma=weights) >> >> Thanks again in advance for any hint, >> Daniel >> >> >> 2011/2/21 Jonathan Rocher wrote: >> > Hi Daniel, >> > >> > I used this recently. You need to pass the optional argument 'sigma' to >> > curve_fit: >> > In [36]: from scipy import optimize >> > >> > In [37]: optimize.curve_fit? >> > >> > (...) >> > sigma : None or N-length sequence >> > ??????? If not None, it represents the standard-deviation of ydata. >> > ??????? This vector, if given, will be used as weights in the >> > ??????? least-squares problem. >> > >> > Hope this helps, >> > Jonathan >> > >> > On Mon, Feb 21, 2011 at 8:11 AM, Daniel Mader >> > wrote: >> >> >> >> Hi, >> >> >> >> currently, I am using scipy.optimize.fit_curve in order to do least >> >> squares fitting to sigmoidal data. Basically, this works OK but now >> >> I'd like to introduce some kind of weighting to the fit. >> >> >> >> >From the help of GraphPad Prism: >> >> "Regression is most often done by minimizing the sum-of-squares of the >> >> vertical distances of the data from the line or curve. 
Points further >> >> from the curve contribute more to the sum-of-squares. Points close to >> >> the curve contribute little. This makes sense, when you expect >> >> experimental scatter to be the same, on average, in all parts of the >> >> curve. >> >> >> >> In many experimental situations, you expect the average distance (or >> >> rather the average absolute value of the distance) of the points from >> >> the curve to be higher when Y is higher. The points with the larger >> >> scatter will have much larger sum-of-squares and thus dominate the >> >> calculations. If you expect the relative distance (residual divided by >> >> the height of the curve) to be consistent, then you should weight by >> >> 1/Y2." >> >> >> >> This is exactly the case for my data, so I'd like to give this a try >> >> but I have no clue how. >> >> >> >> Attached is a basic script which works besides weighting. Maybe >> >> someone could point out how to pass this to the underlying >> >> scipy.optimize.leastsq function? >> >> >> >> Thanks a lot in advance, >> >> Daniel >> >> >> >> # -*- coding: utf-8 -*- >> >> >> >> import scipy, pylab, scipy.optimize >> >> >> >> def findNearest(array,value): >> >> ?return abs(scipy.asarray(array)-value).argmin() >> >> >> >> def sigmoid(x,EC50,k,base,amp): >> >> ?return amp / (1 + scipy.exp(-k*(x-EC50))) + base >> >> >> >> xs = [-5.80914299, >> >> ? ? ?-4.60517019, >> >> ? ? ?-3.5065579, >> >> ? ? ?-2.30258509, >> >> ? ? ?-1.2039728, >> >> ? ? ?0., >> >> ? ? ?1.09861229, >> >> ? ? ?2.30258509, >> >> ? ? ?3.40119738 >> >> ? ? ?] >> >> ys = [5.15459766e-04, >> >> ? ? ?0.00000000e+00, >> >> ? ? ?8.57757267e-04, >> >> ? ? ?6.35666594e-03, >> >> ? ? ?1.23643898e-01, >> >> ? ? ?5.36029832e-01, >> >> ? ? ?7.95598054e-01, >> >> ? ? ?8.96318087e-01, >> >> ? ? ?1.00000000e+00 >> >> ? ? ?] >> >> >> >> print "guessing parameters ..." >> >> xmin = min(xs) >> >> xmax = max(xs) >> >> ymin = min(ys) >> >> ymax = max(ys) >> >> y50 = (ymax - ymin) / 2 + ymin >> >> idx = findNearest(ys,y50) >> >> EC50 = xs[idx] >> >> k = 1 >> >> baseline = ymin >> >> amplitude = ymax - ymin >> >> >> >> guess = [EC50,k,baseline,amplitude] >> >> print "guess: ", guess >> >> >> >> print "fitting data ..." >> >> fitfunc = sigmoid >> >> popt,pcov = scipy.optimize.curve_fit(fitfunc,xs,ys) >> >> print "popt: ", popt >> >> >> >> x = scipy.linspace(min(xs),max(xs),100) >> >> pylab.figure() >> >> pylab.plot(xs,ys,'x', label='raw') >> >> pylab.plot(x,fitfunc(x,*guess), label='guess') >> >> pylab.plot(x,fitfunc(x,*popt), label='fit') >> >> pylab.legend() >> >> pylab.grid() >> >> pylab.show() >> _______________________________________________ >> SciPy-User mailing list >> SciPy-User at scipy.org >> http://mail.scipy.org/mailman/listinfo/scipy-user > > > > -- > Jonathan Rocher, > Enthought, Inc. > jrocher at enthought.com > 1-512-536-1057 > http://www.enthought.com > > > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user > > From danielstefanmader at googlemail.com Tue Feb 22 03:58:15 2011 From: danielstefanmader at googlemail.com (Daniel Mader) Date: Tue, 22 Feb 2011 09:58:15 +0100 Subject: [SciPy-User] fitting sigmoidal data with weighted least squares In-Reply-To: References: Message-ID: Dear Joseph and Jonathan, thank you both very very much, you helped me a lot! It works as I'd expect. 
However, given my data, a square weight sigma=ydata**2 makes no sense, plain ydata or yStd is fully enough for a perfect fit in the lower concentrations. Once again, thank you very much and have a great day! Daniel 2011/2/22 : > On Mon, Feb 21, 2011 at 5:12 PM, Jonathan Rocher wrote: >> Yes, you should give to sigma an array of errors of the same length as xs or >> ys. Note that I tried you code and if you try to use 1/ys**2 without >> tweaking, a "good enough" fit will not be found with the default max value >> on the number of calls to the function.? One of the values of y is 0, so >> that creates an infinite error bar. You might therefore tweak certain values >> of your error array and potentially also play with the "maxfev" keyword as >> well. > > I'm not sure this is clear. > > (If I remember correctly) if you want to weight with 1/ys, then the > argument should be sigma=ys > ys**2 would be (proportional to the) the error variance sigma**2, and > the errors are reweighted by 1/ys. > > Josef > >> >> Hope this helps. >> Jonathan >> >> On Mon, Feb 21, 2011 at 9:50 AM, Daniel Mader >> wrote: >>> >>> Dear Jonathan, >>> >>> thanks for such a quick answer! However, I am not 100% sure if I >>> understood correctly: >>> >>> Currently, I am passing raw values for x and y to curve_fit: >>> popt,pcov = scipy.optimize.curve_fit(fitfunc,xs,ys,p0=guess) >>> >>> Now, when I want weighting, I just need to pass an additional array to >>> the function, with one entry per x,y pair? >>> >>> I could use the standard deviation alright but I am not sure is this >>> would yield the same results as the 1/Y? weighting used by GraphPad. >>> >>> What would you suggest to use instead? >>> weights = yin**2 >>> so that >>> popt,pcov = scipy.optimize.curve_fit(fitfunc,xs,ys,p0=guess,sigma=weights) >>> >>> Thanks again in advance for any hint, >>> Daniel >>> >>> >>> 2011/2/21 Jonathan Rocher wrote: >>> > Hi Daniel, >>> > >>> > I used this recently. You need to pass the optional argument 'sigma' to >>> > curve_fit: >>> > In [36]: from scipy import optimize >>> > >>> > In [37]: optimize.curve_fit? >>> > >>> > (...) >>> > sigma : None or N-length sequence >>> > ??????? If not None, it represents the standard-deviation of ydata. >>> > ??????? This vector, if given, will be used as weights in the >>> > ??????? least-squares problem. >>> > >>> > Hope this helps, >>> > Jonathan >>> > >>> > On Mon, Feb 21, 2011 at 8:11 AM, Daniel Mader >>> > wrote: >>> >> >>> >> Hi, >>> >> >>> >> currently, I am using scipy.optimize.fit_curve in order to do least >>> >> squares fitting to sigmoidal data. Basically, this works OK but now >>> >> I'd like to introduce some kind of weighting to the fit. >>> >> >>> >> >From the help of GraphPad Prism: >>> >> "Regression is most often done by minimizing the sum-of-squares of the >>> >> vertical distances of the data from the line or curve. Points further >>> >> from the curve contribute more to the sum-of-squares. Points close to >>> >> the curve contribute little. This makes sense, when you expect >>> >> experimental scatter to be the same, on average, in all parts of the >>> >> curve. >>> >> >>> >> In many experimental situations, you expect the average distance (or >>> >> rather the average absolute value of the distance) of the points from >>> >> the curve to be higher when Y is higher. The points with the larger >>> >> scatter will have much larger sum-of-squares and thus dominate the >>> >> calculations. 
If you expect the relative distance (residual divided by >>> >> the height of the curve) to be consistent, then you should weight by >>> >> 1/Y2." >>> >> >>> >> This is exactly the case for my data, so I'd like to give this a try >>> >> but I have no clue how. >>> >> >>> >> Attached is a basic script which works besides weighting. Maybe >>> >> someone could point out how to pass this to the underlying >>> >> scipy.optimize.leastsq function? >>> >> >>> >> Thanks a lot in advance, >>> >> Daniel >>> >> >>> >> # -*- coding: utf-8 -*- >>> >> >>> >> import scipy, pylab, scipy.optimize >>> >> >>> >> def findNearest(array,value): >>> >> ?return abs(scipy.asarray(array)-value).argmin() >>> >> >>> >> def sigmoid(x,EC50,k,base,amp): >>> >> ?return amp / (1 + scipy.exp(-k*(x-EC50))) + base >>> >> >>> >> xs = [-5.80914299, >>> >> ? ? ?-4.60517019, >>> >> ? ? ?-3.5065579, >>> >> ? ? ?-2.30258509, >>> >> ? ? ?-1.2039728, >>> >> ? ? ?0., >>> >> ? ? ?1.09861229, >>> >> ? ? ?2.30258509, >>> >> ? ? ?3.40119738 >>> >> ? ? ?] >>> >> ys = [5.15459766e-04, >>> >> ? ? ?0.00000000e+00, >>> >> ? ? ?8.57757267e-04, >>> >> ? ? ?6.35666594e-03, >>> >> ? ? ?1.23643898e-01, >>> >> ? ? ?5.36029832e-01, >>> >> ? ? ?7.95598054e-01, >>> >> ? ? ?8.96318087e-01, >>> >> ? ? ?1.00000000e+00 >>> >> ? ? ?] >>> >> >>> >> print "guessing parameters ..." >>> >> xmin = min(xs) >>> >> xmax = max(xs) >>> >> ymin = min(ys) >>> >> ymax = max(ys) >>> >> y50 = (ymax - ymin) / 2 + ymin >>> >> idx = findNearest(ys,y50) >>> >> EC50 = xs[idx] >>> >> k = 1 >>> >> baseline = ymin >>> >> amplitude = ymax - ymin >>> >> >>> >> guess = [EC50,k,baseline,amplitude] >>> >> print "guess: ", guess >>> >> >>> >> print "fitting data ..." >>> >> fitfunc = sigmoid >>> >> popt,pcov = scipy.optimize.curve_fit(fitfunc,xs,ys) >>> >> print "popt: ", popt >>> >> >>> >> x = scipy.linspace(min(xs),max(xs),100) >>> >> pylab.figure() >>> >> pylab.plot(xs,ys,'x', label='raw') >>> >> pylab.plot(x,fitfunc(x,*guess), label='guess') >>> >> pylab.plot(x,fitfunc(x,*popt), label='fit') >>> >> pylab.legend() >>> >> pylab.grid() >>> >> pylab.show() >>> _______________________________________________ >>> SciPy-User mailing list >>> SciPy-User at scipy.org >>> http://mail.scipy.org/mailman/listinfo/scipy-user >> >> >> >> -- >> Jonathan Rocher, >> Enthought, Inc. >> jrocher at enthought.com >> 1-512-536-1057 >> http://www.enthought.com >> >> >> _______________________________________________ >> SciPy-User mailing list >> SciPy-User at scipy.org >> http://mail.scipy.org/mailman/listinfo/scipy-user >> >> > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user > From ralf.gommers at googlemail.com Tue Feb 22 05:22:51 2011 From: ralf.gommers at googlemail.com (Ralf Gommers) Date: Tue, 22 Feb 2011 18:22:51 +0800 Subject: [SciPy-User] build from source on RHEL6 In-Reply-To: <4D62C5CA.6030104@sas.upenn.edu> References: <4D62C5CA.6030104@sas.upenn.edu> Message-ID: On Tue, Feb 22, 2011 at 4:06 AM, Gavin W. Burris wrote: > Hello SciPy people, > > I just spent a few days trying to figure out the build process for SciPy > on our new Linux desktops. ?It was an arduous journey, and I though I > should share: > http://idolinux.blogspot.com/2011/02/atlas-numpy-scipy-build-on-rhel-6.html > > It seems to be working fine now. ?Tips and pointers are much appreciated. Thanks for sharing Gavin. It looks like you followed the explanation at http://scipy.org/Installing_SciPy/Linux pretty closely. 
Can you think of any changes to that description that would have made your life easier? Or did you look at some other doc? Cheers, Ralf From matthew.brett at gmail.com Tue Feb 22 11:57:11 2011 From: matthew.brett at gmail.com (Matthew Brett) Date: Tue, 22 Feb 2011 08:57:11 -0800 Subject: [SciPy-User] Problem with loading large MAT files In-Reply-To: References: Message-ID: Hi, On Mon, Feb 21, 2011 at 10:25 AM, Mohammad Abdollahi wrote: > Hi Matthew > > Thanks for the reply. I uploaded the file right now, here's the link : > http://www.easy-share.com/1913938952/coch_six.mat Oh dear, I click "Downloads" and then I'm on a True Christians dating site, which is fine, in its own way, but not what I wanted. When I've declined to enter my zip code a few times on the dating site, a download starts that correctly threatens to be be 2 hours. Would you mind trying something like a Dropbox folder? Thanks a lot, Matthew From m.abdollahi at gmail.com Tue Feb 22 13:23:17 2011 From: m.abdollahi at gmail.com (Mohammad Abdollahi) Date: Tue, 22 Feb 2011 19:23:17 +0100 Subject: [SciPy-User] Problem with loading large MAT files In-Reply-To: References: Message-ID: Hi Matthew, lol, well, I didn't expect a dating site to pop up ! but here's the dropbox link : http://dl.dropbox.com/u/3561525/coch_six.mat Best Mohammad On Tue, Feb 22, 2011 at 5:57 PM, Matthew Brett wrote: > Hi, > > On Mon, Feb 21, 2011 at 10:25 AM, Mohammad Abdollahi > wrote: > > Hi Matthew > > > > Thanks for the reply. I uploaded the file right now, here's the link : > > http://www.easy-share.com/1913938952/coch_six.mat > > Oh dear, I click "Downloads" and then I'm on a True Christians dating > site, which is fine, in its own way, but not what I wanted. When I've > declined to enter my zip code a few times on the dating site, a > download starts that correctly threatens to be be 2 hours. > > Would you mind trying something like a Dropbox folder? > > Thanks a lot, > > Matthew > -------------- next part -------------- An HTML attachment was scrubbed... URL: From mrule7404 at gmail.com Tue Feb 22 13:37:43 2011 From: mrule7404 at gmail.com (M.Rule) Date: Tue, 22 Feb 2011 13:37:43 -0500 Subject: [SciPy-User] ImportError: No module named compat Message-ID: I'm trying to install scipy. Both building from source and using easy_install give me this error : % python setup.py build Traceback (most recent call last): File "setup.py", line 86, in FULLVERSION += svn_version() File "setup.py", line 59, in svn_version from numpy.compat import asstr ImportError: No module named compat numpy is installed and functional. what did I do wrong ? Sincerely, Michael Rule. -------------- next part -------------- An HTML attachment was scrubbed... URL: From warren.weckesser at enthought.com Tue Feb 22 13:52:03 2011 From: warren.weckesser at enthought.com (Warren Weckesser) Date: Tue, 22 Feb 2011 12:52:03 -0600 Subject: [SciPy-User] ImportError: No module named compat In-Reply-To: References: Message-ID: On Tue, Feb 22, 2011 at 12:37 PM, M.Rule wrote: > I'm trying to install scipy. Both building from source and using > easy_install give me this error : > > % python setup.py build > Traceback (most recent call last): > File "setup.py", line 86, in > FULLVERSION += svn_version() > File "setup.py", line 59, in svn_version > from numpy.compat import asstr > ImportError: No module named compat > > > numpy is installed and functional. > What version of numpy do you have installed? You might need to update to a newer version. Warren > what did I do wrong ? 
> > Sincerely, > Michael Rule. > > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From warren.weckesser at enthought.com Tue Feb 22 14:16:01 2011 From: warren.weckesser at enthought.com (Warren Weckesser) Date: Tue, 22 Feb 2011 13:16:01 -0600 Subject: [SciPy-User] ImportError: No module named compat In-Reply-To: References: Message-ID: On Tue, Feb 22, 2011 at 12:55 PM, M.Rule wrote: > 1.3.0 apparently To build scipy from source, you'll need at least version 1.5 of numpy. It is surprisingly hard to find this fact. It is mentioned in the release notes in docs/release/0.10.0-notes.rst, for example, but I haven't had any luck finding it on the scipy web page. I did find an incorrect statement in an old FAQ about requiring numpy 0.9.2 or newer. Warren > > On Tue, Feb 22, 2011 at 1:52 PM, Warren Weckesser < > warren.weckesser at enthought.com> wrote: > >> >> >> On Tue, Feb 22, 2011 at 12:37 PM, M.Rule wrote: >> >>> I'm trying to install scipy. Both building from source and using >>> easy_install give me this error : >>> >>> % python setup.py build >>> Traceback (most recent call last): >>> File "setup.py", line 86, in >>> FULLVERSION += svn_version() >>> File "setup.py", line 59, in svn_version >>> from numpy.compat import asstr >>> ImportError: No module named compat >>> >>> >>> numpy is installed and functional. >>> >> >> >> What version of numpy do you have installed? You might need to update to >> a newer version. >> >> Warren >> >> >> >>> what did I do wrong ? >>> >>> Sincerely, >>> Michael Rule. >>> >>> _______________________________________________ >>> SciPy-User mailing list >>> SciPy-User at scipy.org >>> http://mail.scipy.org/mailman/listinfo/scipy-user >>> >>> >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From mrule7404 at gmail.com Tue Feb 22 14:23:45 2011 From: mrule7404 at gmail.com (M.Rule) Date: Tue, 22 Feb 2011 14:23:45 -0500 Subject: [SciPy-User] ImportError: No module named compat In-Reply-To: References: Message-ID: I downloaded the latest git pull of numpy, and built and installed that. I'm still getting the error. I assume I need to uninstall the version of numpy that easy_install installed. easy_install does not have an option to undo installation. is there a clean way to remove or update numpy that doesn't involve my manually deleting all folders named "numpy" on my system ? On Tue, Feb 22, 2011 at 2:16 PM, Warren Weckesser < warren.weckesser at enthought.com> wrote: > > > On Tue, Feb 22, 2011 at 12:55 PM, M.Rule wrote: > >> 1.3.0 apparently > > > > To build scipy from source, you'll need at least version 1.5 of numpy. > > It is surprisingly hard to find this fact. It is mentioned in the release > notes in docs/release/0.10.0-notes.rst, for example, but I haven't had any > luck finding it on the scipy web page. I did find an incorrect statement in > an old FAQ about requiring numpy 0.9.2 or newer. > > Warren > > > >> >> On Tue, Feb 22, 2011 at 1:52 PM, Warren Weckesser < >> warren.weckesser at enthought.com> wrote: >> >>> >>> >>> On Tue, Feb 22, 2011 at 12:37 PM, M.Rule wrote: >>> >>>> I'm trying to install scipy. 
Both building from source and using >>>> easy_install give me this error : >>>> >>>> % python setup.py build >>>> Traceback (most recent call last): >>>> File "setup.py", line 86, in >>>> FULLVERSION += svn_version() >>>> File "setup.py", line 59, in svn_version >>>> from numpy.compat import asstr >>>> ImportError: No module named compat >>>> >>>> >>>> numpy is installed and functional. >>>> >>> >>> >>> What version of numpy do you have installed? You might need to update to >>> a newer version. >>> >>> Warren >>> >>> >>> >>>> what did I do wrong ? >>>> >>>> Sincerely, >>>> Michael Rule. >>>> >>>> _______________________________________________ >>>> SciPy-User mailing list >>>> SciPy-User at scipy.org >>>> http://mail.scipy.org/mailman/listinfo/scipy-user >>>> >>>> >>> >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From mrule7404 at gmail.com Tue Feb 22 14:27:47 2011 From: mrule7404 at gmail.com (M.Rule) Date: Tue, 22 Feb 2011 14:27:47 -0500 Subject: [SciPy-User] ImportError: No module named compat In-Reply-To: References: Message-ID: or better yet figure out which of what appear to be several numpy installations on my machine is the installation that actually gets imported and used? On Tue, Feb 22, 2011 at 2:23 PM, M.Rule wrote: > I downloaded the latest git pull of numpy, and built and installed that. > I'm still getting the error. I assume I need to uninstall the version of > numpy that easy_install installed. easy_install does not have an option to > undo installation. is there a clean way to remove or update numpy that > doesn't involve my manually deleting all folders named "numpy" on my system > ? > > > On Tue, Feb 22, 2011 at 2:16 PM, Warren Weckesser < > warren.weckesser at enthought.com> wrote: > >> >> >> On Tue, Feb 22, 2011 at 12:55 PM, M.Rule wrote: >> >>> 1.3.0 apparently >> >> >> >> To build scipy from source, you'll need at least version 1.5 of numpy. >> >> It is surprisingly hard to find this fact. It is mentioned in the release >> notes in docs/release/0.10.0-notes.rst, for example, but I haven't had any >> luck finding it on the scipy web page. I did find an incorrect statement in >> an old FAQ about requiring numpy 0.9.2 or newer. >> >> Warren >> >> >> >>> >>> On Tue, Feb 22, 2011 at 1:52 PM, Warren Weckesser < >>> warren.weckesser at enthought.com> wrote: >>> >>>> >>>> >>>> On Tue, Feb 22, 2011 at 12:37 PM, M.Rule wrote: >>>> >>>>> I'm trying to install scipy. Both building from source and using >>>>> easy_install give me this error : >>>>> >>>>> % python setup.py build >>>>> Traceback (most recent call last): >>>>> File "setup.py", line 86, in >>>>> FULLVERSION += svn_version() >>>>> File "setup.py", line 59, in svn_version >>>>> from numpy.compat import asstr >>>>> ImportError: No module named compat >>>>> >>>>> >>>>> numpy is installed and functional. >>>>> >>>> >>>> >>>> What version of numpy do you have installed? You might need to update >>>> to a newer version. >>>> >>>> Warren >>>> >>>> >>>> >>>>> what did I do wrong ? >>>>> >>>>> Sincerely, >>>>> Michael Rule. >>>>> >>>>> _______________________________________________ >>>>> SciPy-User mailing list >>>>> SciPy-User at scipy.org >>>>> http://mail.scipy.org/mailman/listinfo/scipy-user >>>>> >>>>> >>>> >>> >> > -------------- next part -------------- An HTML attachment was scrubbed... 
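Nils's reply just below boils down to asking the interpreter which numpy it actually imports; a minimal equivalent check:

    import numpy

    print(numpy.__version__)  # building scipy from source wants >= 1.5, per this thread
    print(numpy.__file__)     # shows which installation actually gets imported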
From nwagner at iam.uni-stuttgart.de  Tue Feb 22 14:29:58 2011
From: nwagner at iam.uni-stuttgart.de (Nils Wagner)
Date: Tue, 22 Feb 2011 20:29:58 +0100
Subject: [SciPy-User] ImportError: No module named compat
In-Reply-To: References: Message-ID:

On Tue, 22 Feb 2011 14:27:47 -0500 "M.Rule" wrote:
> or better yet figure out which of what appear to be several numpy
> installations on my machine is the installation that actually gets
> imported and used?

nwagner at linux-mogv:~/svn> python
Python 2.6.5 (r265:79063, Oct 28 2010, 20:56:23)
[GCC 4.5.0 20100604 [gcc-4_5-branch revision 160292]] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> import numpy
>>> numpy
<module 'numpy' from '/home/nwagner/local/lib64/python2.6/site-packages/numpy/__init__.pyc'>

From mrule7404 at gmail.com  Tue Feb 22 14:33:28 2011
From: mrule7404 at gmail.com (M.Rule)
Date: Tue, 22 Feb 2011 14:33:28 -0500
Subject: [SciPy-User] ImportError: No module named compat
In-Reply-To: References: Message-ID:

Awesome, got that sorted out. Now it says I need the BLAS libraries? Onward....

On Tue, Feb 22, 2011 at 2:29 PM, Nils Wagner wrote:
> On Tue, 22 Feb 2011 14:27:47 -0500 "M.Rule" wrote:
> > or better yet figure out which of what appear to be several numpy
> > installations on my machine is the installation that actually gets
> > imported and used?
>
> nwagner at linux-mogv:~/svn> python
> Python 2.6.5 (r265:79063, Oct 28 2010, 20:56:23)
> [GCC 4.5.0 20100604 [gcc-4_5-branch revision 160292]] on linux2
> Type "help", "copyright", "credits" or "license" for more information.
> >>> import numpy
> >>> numpy
> <module 'numpy' from '/home/nwagner/local/lib64/python2.6/site-packages/numpy/__init__.pyc'>

From bug at sas.upenn.edu  Tue Feb 22 17:18:22 2011
From: bug at sas.upenn.edu (Gavin W. Burris)
Date: Tue, 22 Feb 2011 17:18:22 -0500
Subject: [SciPy-User] build from source on RHEL6
In-Reply-To: References: <4D62C5CA.6030104@sas.upenn.edu> Message-ID: <4D64362E.2040400@sas.upenn.edu>

Hi Ralf,

I think it would be very hard for anyone but a professional programmer to build scipy with the current directions. It was very time consuming. Since ATLAS takes so long to compile, I could only try a new build method about once a day.

The directions on scipy.org reference older versions and are confusing. They should be updated for the current versions, including prerequisites. The directions should prominently note how to configure a site.cfg file prior to building numpy, which is a prerequisite to building scipy.

Without the numpy site.cfg step, I was left searching and rebuilding many, many times. I tried different options to the BLAS and LAPACK environment variables. I even tried to build reference versions of BLAS. I was desperate. The numpy build with a proper site.cfg was what finally made things work for me.

Cheers.

On 02/22/2011 05:22 AM, Ralf Gommers wrote:
> On Tue, Feb 22, 2011 at 4:06 AM, Gavin W. Burris wrote:
>> Hello SciPy people,
>>
>> I just spent a few days trying to figure out the build process for SciPy
>> on our new Linux desktops. It was an arduous journey, and I thought I
>> should share:
>> http://idolinux.blogspot.com/2011/02/atlas-numpy-scipy-build-on-rhel-6.html
>>
>> It seems to be working fine now. Tips and pointers are much appreciated.
>
> Thanks for sharing Gavin.
It looks like you followed the explanation > at http://scipy.org/Installing_SciPy/Linux pretty closely. Can you > think of any changes to that description that would have made your > life easier? Or did you look at some other doc? > > Cheers, > Ralf > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user > -- Gavin W. Burris Senior Systems Programmer Information Security and Unix Systems School of Arts and Sciences University of Pennsylvania From millman at berkeley.edu Tue Feb 22 19:05:35 2011 From: millman at berkeley.edu (Jarrod Millman) Date: Tue, 22 Feb 2011 16:05:35 -0800 Subject: [SciPy-User] scipy.stats.invnorm - rename to invgauss? In-Reply-To: References: Message-ID: Hi, On Fri, Apr 9, 2010 at 2:55 PM, wrote: > On Fri, Apr 9, 2010 at 3:38 PM, Matthew Brett wrote: >> Hi, >> >>> But from http://en.wikipedia.org/wiki/Inverse_Gaussian_distribution >>> it looks like it's not a inverse transformation. In that case a >>> different name might reduce some confusion. >>> >>> I will look at what invnorm really is, (I don't know) >> >> My impression is that the accepted name for this distribution is >> 'inverse Gaussian' or 'Wald' - but I don't think anyone else calls it >> the 'inverse normal' - I am happy to be corrected if I am wrong. >> >> Given that 'inverse normal' confused both of us, in different ways, do >> you agree that a switch would be good? > > I agree with the switch, I forgot to reply > > I had checked Johnson, Kotz and Balakrishnan, and they have 3 or 4 > parameterizations of the Inverse Gaussian, one of them is Wald, I > haven't checked yet if scale is a missing parameter in their version. > > According to JKB, Tweedie called it the inverse gaussian because the > cumulants of IG and normal are (somehow) inversely related. But it is > not a transformation as e.g. lognorm. > (IG as abbreviation is confusing because I have seen IG for inverse > gamma or something like this.) > > Numpy.random has this distribution available as wald, invnorm.rvs = > np.random.wald > My impression is InverseGaussian is the main name, in indices I saw > "Wald (see Inverse Gaussian)". > > So the main question is whether to call it invgauss or wald, assuming > nobody objects to a renaming to clarify this. > I still want to compare parameterizations, but I'm in favor of renaming it. I just hit this when doing some coding. Any objections to me submitting a patch to rename to invgauss and deprecating 'invnorm'? Best, Jarrod From josef.pktd at gmail.com Tue Feb 22 19:23:50 2011 From: josef.pktd at gmail.com (josef.pktd at gmail.com) Date: Tue, 22 Feb 2011 19:23:50 -0500 Subject: [SciPy-User] scipy.stats.invnorm - rename to invgauss? In-Reply-To: References: Message-ID: On Tue, Feb 22, 2011 at 7:05 PM, Jarrod Millman wrote: > Hi, > > On Fri, Apr 9, 2010 at 2:55 PM, ? wrote: >> On Fri, Apr 9, 2010 at 3:38 PM, Matthew Brett wrote: >>> Hi, >>> >>>> But from http://en.wikipedia.org/wiki/Inverse_Gaussian_distribution >>>> it looks like it's not a inverse transformation. In that case a >>>> different name might reduce some confusion. >>>> >>>> I will look at what invnorm really is, (I don't know) >>> >>> My impression is that the accepted name for this distribution is >>> 'inverse Gaussian' or 'Wald' - but I don't think anyone else calls it >>> the 'inverse normal' - I am happy to be corrected if I am wrong. 
>>> Given that 'inverse normal' confused both of us, in different ways, do
>>> you agree that a switch would be good?
>>
>> I agree with the switch, I forgot to reply
>>
>> I had checked Johnson, Kotz and Balakrishnan, and they have 3 or 4
>> parameterizations of the Inverse Gaussian, one of them is Wald, I
>> haven't checked yet if scale is a missing parameter in their version.
>>
>> According to JKB, Tweedie called it the inverse gaussian because the
>> cumulants of IG and normal are (somehow) inversely related. But it is
>> not a transformation as e.g. lognorm.
>> (IG as an abbreviation is confusing because I have seen IG for inverse
>> gamma or something like this.)
>>
>> Numpy.random has this distribution available as wald, invnorm.rvs =
>> np.random.wald
>> My impression is InverseGaussian is the main name; in indices I saw
>> "Wald (see Inverse Gaussian)".
>>
>> So the main question is whether to call it invgauss or wald, assuming
>> nobody objects to a renaming to clarify this.
>> I still want to compare parameterizations, but I'm in favor of renaming it.
>
> I just hit this when doing some coding. Any objections to me
> submitting a patch to rename to invgauss and deprecating 'invnorm'?

thanks, Ralf has already done it for 0.9

http://projects.scipy.org/scipy/ticket/1172

Josef

> Best,
>
> Jarrod

From millman at berkeley.edu  Wed Feb 23 00:45:48 2011
From: millman at berkeley.edu (Jarrod Millman)
Date: Tue, 22 Feb 2011 21:45:48 -0800
Subject: [SciPy-User] scipy.stats.invnorm - rename to invgauss?
In-Reply-To: References: Message-ID:

On Tue, Feb 22, 2011 at 4:23 PM, wrote:
> thanks, Ralf has already done it for 0.9
>
> http://projects.scipy.org/scipy/ticket/1172

Thanks,

Jarrod

From david at silveregg.co.jp  Wed Feb 23 02:30:40 2011
From: david at silveregg.co.jp (David)
Date: Wed, 23 Feb 2011 16:30:40 +0900
Subject: [SciPy-User] build from source on RHEL6
In-Reply-To: <4D64362E.2040400@sas.upenn.edu> References: <4D62C5CA.6030104@sas.upenn.edu> <4D64362E.2040400@sas.upenn.edu> Message-ID: <4D64B7A0.3070100@silveregg.co.jp>

On 02/23/2011 07:18 AM, Gavin W. Burris wrote:
> Hi Ralf,
>
> I think it would be very hard for anyone but a professional programmer
> to build scipy with the current directions.

We are aware of the issue. It is unfortunately quite difficult to fix, because the issue is complex.

Building numpy/scipy itself is not so hard nowadays on linux, because most distributions have only one fortran compiler (gfortran), instead of g77 vs gfortran, which are incompatible with each other.

Blas/lapack/atlas are often badly packaged and poorly tested by distributions (to be fair, they are hard to build and test). There are tons of inconsistencies between them: the library names, locations and conventions differ between distributions and even between versions of the same distribution. That's why documenting it is so hard - it takes time, and has to be done for many combinations of software.

On top of that, the build infrastructure around python is poorly thought out for complex packages like numpy - the plethora of options you saw are a direct consequence of those issues.
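Returning to the invnorm/invgauss rename a few messages up: a minimal sketch of the new spelling next to the numpy.random sampler mentioned in that thread (the shape value 0.5 is arbitrary; the scale-1 correspondence follows the thread's note that invnorm.rvs matches np.random.wald):

    import numpy as np
    from scipy.stats import invgauss  # formerly scipy.stats.invnorm (renamed for 0.9)

    mu = 0.5
    x = np.linspace(0.1, 2.0, 4)
    print(invgauss.pdf(x, mu))               # inverse Gaussian (Wald) density
    print(np.random.wald(mu, 1.0, size=3))   # numpy sampler for the same family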
cheers, David From cimrman3 at ntc.zcu.cz Wed Feb 23 09:02:31 2011 From: cimrman3 at ntc.zcu.cz (Robert Cimrman) Date: Wed, 23 Feb 2011 15:02:31 +0100 (CET) Subject: [SciPy-User] eigenvalues of a symmetric tridiagonal matrix Message-ID: Hi, does anybody know about a Python code for computing some (e.g. the smallest) eigenvalues of a symmetric tridiagonal matrix? I can reasonably get what I need using scipy.linalg.eigvals_banded(), but wonder if there is something even more efficient. In lapack, there are sstebz, dstebz functions, but they seem not to be wrapped in scipy - the following returns nothing: import scipy.lib.lapack as ll ll.get_lapack_funcs(['stebz']) Also, does anyone have experience with [1]? Thanks, r. [1] http://kochanski.org/gpk/misc/PyLapack.html From zufryy at gmail.com Wed Feb 23 14:45:07 2011 From: zufryy at gmail.com (Zufry Malik Ibrahim) Date: Thu, 24 Feb 2011 02:45:07 +0700 Subject: [SciPy-User] How do I get a more digits fvec value with scipy.optimize.leastsq module Message-ID: How do I get up to 20 digits fvec value with scipy.optimize.leastsq module ? So far I just get 9 digit number with this syntax *plsq,cov,info,msg,success = leastsq(function, x0, args=(), , full_output=1, warning=True, ftol=1e-06)* *fvec = info['fvec']* Thanks for your attention. -- *Zufry Malik Ibrahim* Physics Department Bandung Institute of Technology , Indonesia zufryy at gmail.com http://zufryy.blogspot.com/ +628569998494 -------------- next part -------------- An HTML attachment was scrubbed... URL: From vanforeest at gmail.com Wed Feb 23 17:04:28 2011 From: vanforeest at gmail.com (nicky van foreest) Date: Wed, 23 Feb 2011 23:04:28 +0100 Subject: [SciPy-User] eigenvalues of a symmetric tridiagonal matrix In-Reply-To: References: Message-ID: Hi Robert, The answer is perhaps a bit off-topic for the scipy community. For your matrix it is easy to write the characteristic polynomial (the polynomial whose roots give the eigenvalues of the matrix) in a recursive way. This property should enable you to find the eigenvalues very fast and accurately (it can be proven that the eigenvalues satisfy all kinds of nice properties. If you are interested, check the home page of Erik van Doorn at Twente University. The sequence of characteristic polynomial forms a sequence of Sturm polynomials.) Hope this helps somewhat. Nicky On 23 February 2011 15:02, Robert Cimrman wrote: > Hi, > > does anybody know about a Python code for computing some (e.g. the > smallest) eigenvalues of a symmetric tridiagonal matrix? > > I can reasonably get what I need using scipy.linalg.eigvals_banded(), but > wonder if there is something even more efficient. > > In lapack, there are sstebz, dstebz functions, but they seem not to be > wrapped in scipy - the following returns nothing: > > import scipy.lib.lapack as ll > ll.get_lapack_funcs(['stebz']) > > Also, does anyone have experience with [1]? > > Thanks, > r. > > [1] http://kochanski.org/gpk/misc/PyLapack.html > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user > From pnorthug at gmail.com Wed Feb 23 23:03:49 2011 From: pnorthug at gmail.com (Paul Northug) Date: Thu, 24 Feb 2011 04:03:49 +0000 (UTC) Subject: [SciPy-User] scipy.signal.filtfilt small off by one error Message-ID: I am using scipy 0.8.0 on a Mac. 
I am not sure, but there may be very small error in this function in the line, s=r_[2*x[0]-x[edge:1:-1],x,2*x[-1]-x[-1:-edge:-1]] should be, s=r_[2*x[0]-x[edge-1:0:-1],x,2*x[-1]-x[-1:-edge:-1]] The intent is to take a signal x and pad it on both sides with negative mirrors of the ends of x of length edge. I have checked that this function doesn't produce the same output as matlab's filtfilt on the boundaries and it looks from the code that the intention was to match matlab. If someone with better python knowledge could check this, that would be great and I can put in a bug report if necessary. From deshpande.jaidev at gmail.com Wed Feb 23 23:28:00 2011 From: deshpande.jaidev at gmail.com (Jaidev Deshpande) Date: Thu, 24 Feb 2011 09:58:00 +0530 Subject: [SciPy-User] PCA for sparse matrices, tolerance of eigenvalues Message-ID: Dear all, I tried using the 'scipy.sparse.eigs' tool for performing principal component analysis on a matrix which is roughly 80% sparse. First of all, is that a good way to go about it? Second, the operation failed when the function failed to converge on accurate eigenvalues. I noticed the 'tol' attribute in the function, but how does one define a reasonable tolerance and calculate it? Thanks -------------- next part -------------- An HTML attachment was scrubbed... URL: From e.antero.tammi at gmail.com Thu Feb 24 01:56:52 2011 From: e.antero.tammi at gmail.com (eat) Date: Thu, 24 Feb 2011 08:56:52 +0200 Subject: [SciPy-User] How do I get a more digits fvec value with scipy.optimize.leastsq module In-Reply-To: References: Message-ID: Hi, On Wed, Feb 23, 2011 at 9:45 PM, Zufry Malik Ibrahim wrote: > How do I get up to 20 digits fvec value with scipy.optimize.leastsq module > ? > > So far I just get 9 digit number with this syntax > > *plsq,cov,info,msg,success = leastsq(function, x0, args=(), , > full_output=1, warning=True, ftol=1e-06)* > *fvec = info['fvec']* > Use set_printoptions, for example: In []: randn(1) Out[]: array([-0.09080893]) In []: set_printoptions(precision= 15) In []: randn(1) Out[]: array([-1.072995644389714]) Regards, eat > > > Thanks for your attention. > > -- > *Zufry Malik Ibrahim* > Physics Department > Bandung Institute of Technology , Indonesia > zufryy at gmail.com > http://zufryy.blogspot.com/ > +628569998494 > > > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From cimrman3 at ntc.zcu.cz Thu Feb 24 04:19:53 2011 From: cimrman3 at ntc.zcu.cz (Robert Cimrman) Date: Thu, 24 Feb 2011 10:19:53 +0100 (CET) Subject: [SciPy-User] eigenvalues of a symmetric tridiagonal matrix In-Reply-To: References: Message-ID: Hi Nicky, thanks for your answer. I googled the algorithm you mentioned as well, but I wondered if something had already been readily available for use in numpy/scipy world (reinventing wheel etc... - but now it seems this particular wheel would be handy). I'll stick with scipy.linalg.eigvals_banded() for the moment. Best regards, r. On Wed, 23 Feb 2011, nicky van foreest wrote: > Hi Robert, > > The answer is perhaps a bit off-topic for the scipy community. > > For your matrix it is easy to write the characteristic polynomial (the > polynomial whose roots give the eigenvalues of the matrix) in a > recursive way. 
This property should enable you to find the eigenvalues > very fast and accurately (it can be proven that the eigenvalues > satisfy all kinds of nice properties. If you are interested, check the > home page of Erik van Doorn at Twente University. The sequence of > characteristic polynomial forms a sequence of Sturm polynomials.) > > Hope this helps somewhat. > > Nicky > > On 23 February 2011 15:02, Robert Cimrman wrote: >> Hi, >> >> does anybody know about a Python code for computing some (e.g. the >> smallest) eigenvalues of a symmetric tridiagonal matrix? >> >> I can reasonably get what I need using scipy.linalg.eigvals_banded(), but >> wonder if there is something even more efficient. >> >> In lapack, there are sstebz, dstebz functions, but they seem not to be >> wrapped in scipy - the following returns nothing: >> >> import scipy.lib.lapack as ll >> ll.get_lapack_funcs(['stebz']) >> >> Also, does anyone have experience with [1]? >> >> Thanks, >> r. >> >> [1] http://kochanski.org/gpk/misc/PyLapack.html >> _______________________________________________ >> SciPy-User mailing list >> SciPy-User at scipy.org >> http://mail.scipy.org/mailman/listinfo/scipy-user >> > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user > From dave.hirschfeld at gmail.com Thu Feb 24 05:02:25 2011 From: dave.hirschfeld at gmail.com (Dave Hirschfeld) Date: Thu, 24 Feb 2011 10:02:25 +0000 (UTC) Subject: [SciPy-User] splint: bug or expected behaviour? References: Message-ID: Dave Hirschfeld gmail.com> writes: > > It appears that splint doesn't work correctly when asked to integrate > a region outside the support of the original function. > I've created a ticket for this issue so that it doesn't get lost: http://projects.scipy.org/scipy/ticket/1393 If it turn out to be expected behaviour it can always be closed. Regards, Dave From pav at iki.fi Thu Feb 24 06:37:18 2011 From: pav at iki.fi (Pauli Virtanen) Date: Thu, 24 Feb 2011 11:37:18 +0000 (UTC) Subject: [SciPy-User] PCA for sparse matrices, tolerance of eigenvalues References: Message-ID: Thu, 24 Feb 2011 09:58:00 +0530, Jaidev Deshpande wrote: > I tried using the 'scipy.sparse.eigs' tool for performing principal > component analysis on a matrix which is roughly 80% sparse. > > First of all, is that a good way to go about it? If it doesn't work as a dense matrix, then you don't have much choice than to rely on an iterative method. 'eigs' uses ARPACK. > Second, the operation failed when the function failed to converge on > accurate eigenvalues. I noticed the 'tol' attribute in the function, but > how does one define a reasonable tolerance and calculate it? As the docstring states, `tol` is the desired relative tolerance for the eigenvalue. One should expect that the error is approximately abs(exact_value - estimate) < tol*abs(exact_value) You can also try increasing `maxiter` or `ncv` to make it try harder to reach convergence. What are reasonable values for these paremeters depends on the problem in question. You can consult the ARPACK user guide for more: http://www.sc.fsu.edu/~burkardt/pdf/arpack.pdf Check e.g. the section on "Stopping criterion". From bdeb at willmore.eu Thu Feb 24 06:59:08 2011 From: bdeb at willmore.eu (Ben Willmore) Date: Thu, 24 Feb 2011 11:59:08 +0000 Subject: [SciPy-User] recipe for installation on Mac OS X? Message-ID: Hi, I'm trying to install scipy on Mac OS X 10.6 and have not succeeded after a fair bit of effort. 
I wonder if anyone can provide some guidance. I have tried: 1. Apple's Python 2.6 and the SciPy superpack from -- scipy.test('1','10') fails with errors relating to ARPACK, such as [1]. 2. python.org Python 2.7, gfortran and the compilation instructions from -- scipy.test('1','10') fails; I didn't store the error. 3. python.org Python 2.7, and corresponding prebuilt SciPy/NumPy binaries from -- scipy.test('1','10') fails with errors relating to Weave, e.g. [2]. 4. plus a number of less well defined approaches, none of which has resulted in a SciPy that passes tests. Can anyone tell me what I've done wrong with the above, or provide a different strategy for successful installation of a recent SciPy on Mac OS X. I am non-naive about compilers etc and willing to put in a bit of effort to get this working satisfactorily. Thanks for any advice! Ben [1]: ====================================================================== ERROR: test_complex_nonsymmetric_modes (test_arpack.TestEigenComplexNonSymmetric) ---------------------------------------------------------------------- Traceback (most recent call last): File "/Library/Python/2.6/site-packages/scipy-0.10.0.dev_20110113-py2.6-macosx-10.6-universal.egg/scipy/sparse/linalg/eigen/arpack/tests/test_arpack.py", line 333, in test_complex_nonsymmetric_modes self.eval_evec(m,typ,k,which) File "/Library/Python/2.6/site-packages/scipy-0.10.0.dev_20110113-py2.6-macosx-10.6-universal.egg/scipy/sparse/linalg/eigen/arpack/tests/test_arpack.py", line 314, in eval_evec eval,evec=eigs(a,k,which=which,v0=v0) File "/Library/Python/2.6/site-packages/scipy-0.10.0.dev_20110113-py2.6-macosx-10.6-universal.egg/scipy/sparse/linalg/eigen/arpack/arpack.py", line 571, in eigs params.iterate() File "/Library/Python/2.6/site-packages/scipy-0.10.0.dev_20110113-py2.6-macosx-10.6-universal.egg/scipy/sparse/linalg/eigen/arpack/arpack.py", line 360, in iterate raise ArpackError(self.info) ArpackError: ARPACK error -8: Error return from trid. eigenvalue calculation; Informational error from LAPACK routine dsteqr . 
[2]: ====================================================================== ERROR: test_call_function (test_c_spec.CallableConverter) ---------------------------------------------------------------------- Traceback (most recent call last): File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/scipy/weave/tests/test_c_spec.py", line 323, in test_call_function compiler=self.compiler,force=1) File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/scipy/weave/inline_tools.py", line 318, in inline function_catalog.add_function(code,func,module_dir) File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/scipy/weave/catalog.py", line 656, in add_function self.add_function_persistent(code,function) File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/scipy/weave/catalog.py", line 691, in add_function_persistent function_list = function_list + cat.get(code,[]) File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/shelve.py", line 114, in get return self[key] File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/shelve.py", line 122, in __getitem__ value = Unpickler(f).load() ImportError: No module named sc_d9b504d1a91ae5e28245fdf60a03c4142 From paul.anton.letnes at gmail.com Thu Feb 24 07:25:31 2011 From: paul.anton.letnes at gmail.com (Paul Anton Letnes) Date: Thu, 24 Feb 2011 13:25:31 +0100 Subject: [SciPy-User] recipe for installation on Mac OS X? In-Reply-To: References: Message-ID: <4F62731C-771F-4A38-9807-772282A06BA6@gmail.com> Hi, the only sensible approach I have experience with is macports. Other people use fink and homebrew, YMMV. With macports, you have to 1) install xcode 2) download and install macports 3) sudo port selfupdate 4) sudo port install py27-scipy # and maybe you want py27-matplotlib and gcc45 too. Just ask if you have more questions. Good luck, Paul. On 24. feb. 2011, at 12.59, Ben Willmore wrote: > Hi, > > I'm trying to install scipy on Mac OS X 10.6 and have not succeeded > after a fair bit of effort. I wonder if anyone can provide some > guidance. > > I have tried: > > 1. Apple's Python 2.6 and the SciPy superpack from > > -- scipy.test('1','10') fails with errors relating to ARPACK, such as [1]. > > 2. python.org Python 2.7, gfortran and the compilation instructions > from > -- scipy.test('1','10') fails; I didn't store the error. > > 3. python.org Python 2.7, and corresponding prebuilt SciPy/NumPy > binaries from > -- scipy.test('1','10') fails with errors relating to Weave, e.g. [2]. > > 4. plus a number of less well defined approaches, none of which has > resulted in a SciPy that passes tests. > > Can anyone tell me what I've done wrong with the above, or provide a > different strategy for successful installation of a recent SciPy on > Mac OS X. I am non-naive about compilers etc and willing to put in a > bit of effort to get this working satisfactorily. > > Thanks for any advice! 
> > Ben > > > [1]: > ====================================================================== > ERROR: test_complex_nonsymmetric_modes > (test_arpack.TestEigenComplexNonSymmetric) > ---------------------------------------------------------------------- > Traceback (most recent call last): > File "/Library/Python/2.6/site-packages/scipy-0.10.0.dev_20110113-py2.6-macosx-10.6-universal.egg/scipy/sparse/linalg/eigen/arpack/tests/test_arpack.py", > line 333, in test_complex_nonsymmetric_modes > self.eval_evec(m,typ,k,which) > File "/Library/Python/2.6/site-packages/scipy-0.10.0.dev_20110113-py2.6-macosx-10.6-universal.egg/scipy/sparse/linalg/eigen/arpack/tests/test_arpack.py", > line 314, in eval_evec > eval,evec=eigs(a,k,which=which,v0=v0) > File "/Library/Python/2.6/site-packages/scipy-0.10.0.dev_20110113-py2.6-macosx-10.6-universal.egg/scipy/sparse/linalg/eigen/arpack/arpack.py", > line 571, in eigs > params.iterate() > File "/Library/Python/2.6/site-packages/scipy-0.10.0.dev_20110113-py2.6-macosx-10.6-universal.egg/scipy/sparse/linalg/eigen/arpack/arpack.py", > line 360, in iterate > raise ArpackError(self.info) > ArpackError: ARPACK error -8: Error return from trid. eigenvalue > calculation; Informational error from LAPACK routine dsteqr . > > > [2]: > ====================================================================== > ERROR: test_call_function (test_c_spec.CallableConverter) > ---------------------------------------------------------------------- > Traceback (most recent call last): > File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/scipy/weave/tests/test_c_spec.py", > line 323, in test_call_function > compiler=self.compiler,force=1) > File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/scipy/weave/inline_tools.py", > line 318, in inline > function_catalog.add_function(code,func,module_dir) > File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/scipy/weave/catalog.py", > line 656, in add_function > self.add_function_persistent(code,function) > File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/scipy/weave/catalog.py", > line 691, in add_function_persistent > function_list = function_list + cat.get(code,[]) > File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/shelve.py", > line 114, in get > return self[key] > File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/shelve.py", > line 122, in __getitem__ > value = Unpickler(f).load() > ImportError: No module named sc_d9b504d1a91ae5e28245fdf60a03c4142 > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user From ralf.gommers at googlemail.com Thu Feb 24 07:32:41 2011 From: ralf.gommers at googlemail.com (Ralf Gommers) Date: Thu, 24 Feb 2011 20:32:41 +0800 Subject: [SciPy-User] recipe for installation on Mac OS X? In-Reply-To: References: Message-ID: On Thu, Feb 24, 2011 at 7:59 PM, Ben Willmore wrote: > Hi, > > I'm trying to install scipy on Mac OS X 10.6 and have not succeeded > after a fair bit of effort. I wonder if anyone can provide some > guidance. > > I have tried: > > 1. Apple's Python 2.6 and the SciPy superpack from > > -- scipy.test('1','10') fails with errors relating to ARPACK, such as [1]. > Those ARPACK tests are supposed to fail currently on 64-bit, so if that's the only issue you're fine. The superpack author just should have marked them as knownfail. > 2. 
python.org Python 2.7, gfortran and the compilation instructions > from > -- scipy.test('1','10') fails; I didn't store the error. > Can't help there without more details. > 3. python.org Python 2.7, and corresponding prebuilt SciPy/NumPy > binaries from > -- scipy.test('1','10') fails with errors relating to Weave, e.g. [2]. > That should just work, and I've never seen that error before. Is it possible you mixed the 32-bit and 64-bit versions of Python/NumPy/SciPy? > 4. plus a number of less well defined approaches, none of which has > resulted in a SciPy that passes tests. > > Can anyone tell me what I've done wrong with the above, or provide a > different strategy for successful installation of a recent SciPy on > Mac OS X. I am non-naive about compilers etc and willing to put in a > bit of effort to get this working satisfactorily. > Nothing too much you did wrong as far as I can tell. I'd suggest going with the 2.6 superpacks and python from python.org, or building from source. In the latter case, just give us a build log, compiler versions and test errors and we'll go from there. Cheers, Ralf P.S. Macports & friends also (often) works, but it'll be a lot harder to help you if something goes wrong there. From paul.anton.letnes at gmail.com Thu Feb 24 07:44:02 2011 From: paul.anton.letnes at gmail.com (Paul Anton Letnes) Date: Thu, 24 Feb 2011 13:44:02 +0100 Subject: [SciPy-User] recipe for installation on Mac OS X? In-Reply-To: References: Message-ID: <242B2F19-B6CB-40FC-8CF0-34F3BA594411@gmail.com> As to what Ralf said on macports: > > P.S. Macports & friends also (often) works, but it'll be a lot harder > to help you if something goes wrong there. > _______________________________________________ I fully agree, but it's worth giving a shot, because things much less often go wrong there. Usually you can also find help on their mailing list / irc channel. Paul. From blattnem at gmail.com Thu Feb 24 07:48:00 2011 From: blattnem at gmail.com (Marcel Blattner) Date: Thu, 24 Feb 2011 13:48:00 +0100 Subject: [SciPy-User] recipe for installation on Mac OS X? In-Reply-To: <242B2F19-B6CB-40FC-8CF0-34F3BA594411@gmail.com> References: <242B2F19-B6CB-40FC-8CF0-34F3BA594411@gmail.com> Message-ID: I use Mac OS X with Numpy, Scipy, Networkx and other libraries. Installation was done using Macports...worked like a charm ... Marcel On Thu, Feb 24, 2011 at 1:44 PM, Paul Anton Letnes < paul.anton.letnes at gmail.com> wrote: > As to what Ralf said on macports: > > > > P.S. Macports & friends also (often) works, but it'll be a lot harder > > to help you if something goes wrong there. > > _______________________________________________ > > I fully agree, but it's worth giving a shot, because things much less often > go wrong there. Usually you can also find help on their mailing list / irc > channel. > > Paul. > > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user > -------------- next part -------------- An HTML attachment was scrubbed... URL: From gael.varoquaux at normalesup.org Thu Feb 24 08:14:23 2011 From: gael.varoquaux at normalesup.org (Gael Varoquaux) Date: Thu, 24 Feb 2011 14:14:23 +0100 Subject: [SciPy-User] PCA for sparse matrices, tolerance of eigenvalues In-Reply-To: References: Message-ID: <20110224131423.GI26159@phare.normalesup.org> On Thu, Feb 24, 2011 at 11:37:18AM +0000, Pauli Virtanen wrote: > > First of all, is that a good way to go about it? 
> If it doesn't work as a dense matrix, then you don't have much choice > than to rely on an iterative method. 'eigs' uses ARPACK. For the application of truncated PCA, for which error on small eigenvalues are not important, and very large data (much much larger than the cache size) randomized projection methods can work increadibly well (partly because they render the problem local in memory, and with large data memory bandwidth is a killer). We have a fast truncated SVD in the scikit learn that is fairly standalone, and can be extracted: https://github.com/scikit-learn/scikit-learn/blob/master/scikits/learn/utils/extmath.py Don't use this for other applications than PCA! G From bdeb at willmore.eu Thu Feb 24 08:23:50 2011 From: bdeb at willmore.eu (Ben Willmore) Date: Thu, 24 Feb 2011 13:23:50 +0000 Subject: [SciPy-User] recipe for installation on Mac OS X? In-Reply-To: References: <242B2F19-B6CB-40FC-8CF0-34F3BA594411@gmail.com> Message-ID: Many thanks for all these tips! Since the ARPACK errors are expected, I'll stick with the Superpack version for now, but I'll bear macports in mind in case I have trouble. Thanks again, Ben From bdeb at willmore.eu Thu Feb 24 08:57:51 2011 From: bdeb at willmore.eu (Ben Willmore) Date: Thu, 24 Feb 2011 13:57:51 +0000 Subject: [SciPy-User] recipe for installation on Mac OS X? In-Reply-To: References: <242B2F19-B6CB-40FC-8CF0-34F3BA594411@gmail.com> Message-ID: Running the tests again (on Superpack SciPy), I see that it's not just ARPACK that fails the tests. I also get the error below [1]. It's not just a rounding error, the arrays are really different. Is this one also expected? Thanks, Ben ====================================================================== FAIL: test_basic (test_signaltools.TestMedFilt) ---------------------------------------------------------------------- Traceback (most recent call last): File "/Library/Python/2.6/site-packages/scipy-0.10.0.dev_20110113-py2.6-macosx-10.6-universal.egg/scipy/signal/tests/test_signaltools.py", line 284, in test_basic [ 0, 7, 11, 7, 4, 4, 19, 19, 24, 0]]) File "/Library/Python/2.6/site-packages/numpy-2.0.0.dev_f72c605_20110113-py2.6-macosx-10.6-universal.egg/numpy/testing/utils.py", line 686, in assert_array_equal verbose=verbose, header='Arrays are not equal') File "/Library/Python/2.6/site-packages/numpy-2.0.0.dev_f72c605_20110113-py2.6-macosx-10.6-universal.egg/numpy/testing/utils.py", line 618, in assert_array_compare raise AssertionError(msg) AssertionError: Arrays are not equal (mismatch 49.0%) x: array([[ 0., 50., 50., 50., 46., 18., 18., 19., 29., 0.], [ 0., 50., 50., 50., 50., 46., 27., 27., 35., 0.], [ 50., 50., 50., 50., 50., 50., 42., 47., 58., 42.],... y: array([[ 0, 50, 50, 50, 42, 15, 15, 18, 27, 0], [ 0, 50, 50, 50, 50, 42, 19, 21, 29, 0], [50, 50, 50, 50, 50, 47, 34, 34, 46, 35],... On 24 February 2011 13:23, Ben Willmore wrote: > Many thanks for all these tips! Since the ARPACK errors are expected, > I'll stick with the Superpack version for now, but I'll bear macports > in mind in case I have trouble. > > Thanks again, > > Ben > From warren.weckesser at enthought.com Thu Feb 24 09:02:06 2011 From: warren.weckesser at enthought.com (Warren Weckesser) Date: Thu, 24 Feb 2011 08:02:06 -0600 Subject: [SciPy-User] splint: bug or expected behaviour? 
In-Reply-To: References: Message-ID: On Thu, Feb 24, 2011 at 4:02 AM, Dave Hirschfeld wrote: > Dave Hirschfeld gmail.com> writes: > > > > > It appears that splint doesn't work correctly when asked to integrate > > a region outside the support of the original function. > > > > I've created a ticket for this issue so that it doesn't get lost: > > http://projects.scipy.org/scipy/ticket/1393 > > If it turn out to be expected behaviour it can always be closed. > > Thanks Dave. Warren -------------- next part -------------- An HTML attachment was scrubbed... URL: From ralf.gommers at googlemail.com Thu Feb 24 09:06:10 2011 From: ralf.gommers at googlemail.com (Ralf Gommers) Date: Thu, 24 Feb 2011 22:06:10 +0800 Subject: [SciPy-User] recipe for installation on Mac OS X? In-Reply-To: References: <242B2F19-B6CB-40FC-8CF0-34F3BA594411@gmail.com> Message-ID: On Thu, Feb 24, 2011 at 9:57 PM, Ben Willmore wrote: > Running the tests again (on Superpack SciPy), I see that it's not just > ARPACK that fails the tests. I also get the error below [1]. It's not > just a rounding error, the arrays are really different. > > Is this one also expected? Also looks familiar. In official releases these kind of test failures are marked as knownfail if they can't be resolved on time. But this superpack you're using is just a recent snapshot from trunk, therefore some errors show up. Cheers, Ralf From jlconlin at gmail.com Thu Feb 24 09:40:50 2011 From: jlconlin at gmail.com (Jeremy Conlin) Date: Thu, 24 Feb 2011 07:40:50 -0700 Subject: [SciPy-User] recipe for installation on Mac OS X? In-Reply-To: References: Message-ID: I know I'm a bit late to this email chain, but I have succeeded in installing Python 2.7.1, numpy, scipy, matplotlib on Snow Leopard. The instructions I used are attached. I had to follow these instructions because I don't have root access to my machine and needed these updated libraries. This allowed me to have 32 and 64 bit (universal) binaries of Python 2.7.1 and have everything else match. All the tests for numpy and scipy worked for me. As a side note, these instructions also work for installing numpy and scipy on Python 3.2 which was just released on Sunday. I have not tried the development branches of matplotlib or iPython. Hope this helps. Jeremy On Thu, Feb 24, 2011 at 4:59 AM, Ben Willmore wrote: > Hi, > > I'm trying to install scipy on Mac OS X 10.6 and have not succeeded > after a fair bit of effort. I wonder if anyone can provide some > guidance. > > I have tried: > > 1. Apple's Python 2.6 and the SciPy superpack from > > -- scipy.test('1','10') fails with errors relating to ARPACK, such as [1]. > > 2. python.org Python 2.7, gfortran and the compilation instructions > from > -- scipy.test('1','10') fails; I didn't store the error. > > 3. python.org Python 2.7, and corresponding prebuilt SciPy/NumPy > binaries from > -- scipy.test('1','10') fails with errors relating to Weave, e.g. [2]. > > 4. plus a number of less well defined approaches, none of which has > resulted in a SciPy that passes tests. > > Can anyone tell me what I've done wrong with the above, or provide a > different strategy for successful installation of a recent SciPy on > Mac OS X. I am non-naive about compilers etc and willing to put in a > bit of effort to get this working satisfactorily. > > Thanks for any advice! 
> > Ben > > > [1]: > ====================================================================== > ERROR: test_complex_nonsymmetric_modes > (test_arpack.TestEigenComplexNonSymmetric) > ---------------------------------------------------------------------- > Traceback (most recent call last): > ?File "/Library/Python/2.6/site-packages/scipy-0.10.0.dev_20110113-py2.6-macosx-10.6-universal.egg/scipy/sparse/linalg/eigen/arpack/tests/test_arpack.py", > line 333, in test_complex_nonsymmetric_modes > ? ?self.eval_evec(m,typ,k,which) > ?File "/Library/Python/2.6/site-packages/scipy-0.10.0.dev_20110113-py2.6-macosx-10.6-universal.egg/scipy/sparse/linalg/eigen/arpack/tests/test_arpack.py", > line 314, in eval_evec > ? ?eval,evec=eigs(a,k,which=which,v0=v0) > ?File "/Library/Python/2.6/site-packages/scipy-0.10.0.dev_20110113-py2.6-macosx-10.6-universal.egg/scipy/sparse/linalg/eigen/arpack/arpack.py", > line 571, in eigs > ? ?params.iterate() > ?File "/Library/Python/2.6/site-packages/scipy-0.10.0.dev_20110113-py2.6-macosx-10.6-universal.egg/scipy/sparse/linalg/eigen/arpack/arpack.py", > line 360, in iterate > ? ?raise ArpackError(self.info) > ArpackError: ARPACK error -8: Error return from trid. eigenvalue > calculation; Informational error from LAPACK routine dsteqr . > > > [2]: > ====================================================================== > ERROR: test_call_function (test_c_spec.CallableConverter) > ---------------------------------------------------------------------- > Traceback (most recent call last): > ?File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/scipy/weave/tests/test_c_spec.py", > line 323, in test_call_function > ? ?compiler=self.compiler,force=1) > ?File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/scipy/weave/inline_tools.py", > line 318, in inline > ? ?function_catalog.add_function(code,func,module_dir) > ?File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/scipy/weave/catalog.py", > line 656, in add_function > ? ?self.add_function_persistent(code,function) > ?File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/scipy/weave/catalog.py", > line 691, in add_function_persistent > ? ?function_list = function_list + cat.get(code,[]) > ?File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/shelve.py", > line 114, in get > ? ?return self[key] > ?File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/shelve.py", > line 122, in __getitem__ > ? ?value = Unpickler(f).load() > ImportError: No module named sc_d9b504d1a91ae5e28245fdf60a03c4142 > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user > -------------- next part -------------- A non-text attachment was scrubbed... Name: SoftwareInstallationNotes.rtf Type: application/rtf Size: 17465 bytes Desc: not available URL: From nwagner at iam.uni-stuttgart.de Thu Feb 24 14:16:29 2011 From: nwagner at iam.uni-stuttgart.de (Nils Wagner) Date: Thu, 24 Feb 2011 20:16:29 +0100 Subject: [SciPy-User] Point cloud to mesh Message-ID: Hi all, I am looking for a python tool that constructs a polygonal mesh from a set of 3D coordinates. Any pointer would be appreciated. 
Nils From vanleeuwen.martin at gmail.com Thu Feb 24 14:31:52 2011 From: vanleeuwen.martin at gmail.com (Martin van Leeuwen) Date: Thu, 24 Feb 2011 11:31:52 -0800 Subject: [SciPy-User] Point cloud to mesh In-Reply-To: References: Message-ID: Hi Nils, The Mayavi mesh() command provides this functionality: http://code.enthought.com/projects/mayavi/docs/development/html/mayavi/auto/mlab_helper_functions.html Topology between the 3d points is taken from the input arrays. Neighbouring elements in the input arrays are connected and the arrays (for x, y and z coordinates) need to be of the same shape. Cheers Martin 2011/2/24 Nils Wagner : > Hi all, > > I am looking for a python tool that constructs a polygonal > mesh from a set of 3D coordinates. > > Any pointer would be appreciated. > > Nils > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user > From david_baddeley at yahoo.com.au Thu Feb 24 14:34:33 2011 From: david_baddeley at yahoo.com.au (David Baddeley) Date: Thu, 24 Feb 2011 11:34:33 -0800 (PST) Subject: [SciPy-User] Point cloud to mesh In-Reply-To: References: Message-ID: <495125.67969.qm@web113420.mail.gq1.yahoo.com> Guess you're after some form of 3D triangularisation. Delny (http://pypi.python.org/pypi/Delny/0.2.0) wraps qHull to do this, I think Mayavi might also have the desired functionality. I've had some trouble getting the stock Delny to compile and link on windows (and osx) so if you want to go that route I can probably dig up my modified sources (I basically dumped the whole qHull tree into the sources & modified setup.py so that is got statically linked) cheers, David ----- Original Message ---- From: Nils Wagner To: scipy-user at scipy.org Sent: Fri, 25 February, 2011 8:16:29 AM Subject: [SciPy-User] Point cloud to mesh Hi all, I am looking for a python tool that constructs a polygonal mesh from a set of 3D coordinates. Any pointer would be appreciated. Nils _______________________________________________ SciPy-User mailing list SciPy-User at scipy.org http://mail.scipy.org/mailman/listinfo/scipy-user From pav at iki.fi Thu Feb 24 15:59:13 2011 From: pav at iki.fi (Pauli Virtanen) Date: Thu, 24 Feb 2011 20:59:13 +0000 (UTC) Subject: [SciPy-User] Point cloud to mesh References: Message-ID: On Thu, 24 Feb 2011 20:16:29 +0100, Nils Wagner wrote: > I am looking for a python tool that constructs a polygonal mesh from a > set of 3D coordinates. > > Any pointer would be appreciated. Do you need the mesh to consist of triangles (surface mesh), tetrahedra, or some other shape? If tetrahedra are OK, you can use scipy.spatial.Delaunay (in Scipy 0.9) to construct the delaunay tesselation. There's also the Delny package. Fitting a surface mesh is then a different problem. -- Pauli Virtanen From nwagner at iam.uni-stuttgart.de Thu Feb 24 16:26:20 2011 From: nwagner at iam.uni-stuttgart.de (Nils Wagner) Date: Thu, 24 Feb 2011 22:26:20 +0100 Subject: [SciPy-User] Point cloud to mesh In-Reply-To: References: Message-ID: On Thu, 24 Feb 2011 20:59:13 +0000 (UTC) Pauli Virtanen wrote: > On Thu, 24 Feb 2011 20:16:29 +0100, Nils Wagner wrote: >> I am looking for a python tool that constructs a >>polygonal mesh from a >> set of 3D coordinates. >> >> Any pointer would be appreciated. > > Do you need the mesh to consist of triangles (surface >mesh), tetrahedra, > or some other shape? If tetrahedra are OK, you can use > scipy.spatial.Delaunay (in Scipy 0.9) to construct the >delaunay > tesselation. 
There's also the Delny package. > >Fitting a surface mesh is then a different problem. > > -- > Pauli Virtanen > Hi, I am looking for a surface mesh representation. http://en.wikipedia.org/wiki/3D_scanner#From_point_clouds Nils From pav at iki.fi Thu Feb 24 16:49:44 2011 From: pav at iki.fi (Pauli Virtanen) Date: Thu, 24 Feb 2011 21:49:44 +0000 (UTC) Subject: [SciPy-User] Point cloud to mesh References: Message-ID: On Thu, 24 Feb 2011 22:26:20 +0100, Nils Wagner wrote: [clip] > I am looking for a surface mesh representation. > > http://en.wikipedia.org/wiki/3D_scanner#From_point_clouds VTK might be able to do surface reconstruction, and it has Python bindings. Never used it, but here's what Google gives: http://www.vtk.org/doc/nightly/html/classvtkSurfaceReconstructionFilter.html http://public.kitware.com/cgi-bin/viewcvs.cgi/*checkout*/Examples/Modelling/Python/reconstructSurface.py?root=VTK&content-type=text/plain From lutz.maibaum at gmail.com Thu Feb 24 16:58:24 2011 From: lutz.maibaum at gmail.com (Lutz Maibaum) Date: Thu, 24 Feb 2011 13:58:24 -0800 Subject: [SciPy-User] recipe for installation on Mac OS X? In-Reply-To: References: <242B2F19-B6CB-40FC-8CF0-34F3BA594411@gmail.com> Message-ID: On Thu, Feb 24, 2011 at 4:48 AM, Marcel Blattner wrote: > I use Mac OS X with Numpy, Scipy, Networkx and other libraries. Installation > was done using Macports...worked like a charm ... +1 for MacPorts, it really is a very convenient system. Would it be possible to write a portfile that pulls the latest source code from svn? I would love to try the latest release candidate of SciPy 0.9, but it seems a lot of work to compile it from scratch with all its dependencies. Having a port like py26-scipy-svn as an alternative to the latest stable release would be very convenient. -- Lutz From cournape at gmail.com Thu Feb 24 19:54:13 2011 From: cournape at gmail.com (David Cournapeau) Date: Fri, 25 Feb 2011 09:54:13 +0900 Subject: [SciPy-User] PCA for sparse matrices, tolerance of eigenvalues In-Reply-To: References: Message-ID: On Thu, Feb 24, 2011 at 1:28 PM, Jaidev Deshpande wrote: > Dear all, > I tried using the 'scipy.sparse.eigs' tool for performing principal > component analysis on a matrix which is roughly 80% sparse. > First of all, is that a good way to go about it? It generally is, but keep in mind that 80 % sparse is not that sparse. Indeed, for efficient sparse matrices representations, each item needs besides its value two integer values for indexing. If you use single precision and 64 bits indexing, you are effectively using as much memory as a dense representation. Also, dense methods are often much faster than sparse ones for a same matrix size, especially since in scipy sparse matrices are not as well optimized as numpy arrays. You may want to compare both methods if you can. cheers, David From wesmckinn at gmail.com Thu Feb 24 21:36:26 2011 From: wesmckinn at gmail.com (Wes McKinney) Date: Thu, 24 Feb 2011 21:36:26 -0500 Subject: [SciPy-User] ANN: gpustats 0.0.1 Message-ID: I'm happy to announce an early version of a new library for doing calculations related to statistical problems on the GPU. Currently it implements the univariate and multivariate normal probability density functions as a proof of concept, but will over time expand to include many other probability distributions and relevant functions. Why do this at all? Well, the speedups, even on a modest laptop GPU (e.g. 
my GeForce 320M) can be quite significant: With 1e6 data points: In [6]: timeit gpustats.normpdf(data, mean, std, logged=False) 100 loops, best of 3: 13 ms per loop In [7]: timeit scipy.stats.norm.pdf(data, loc=mean, scale=std) 1 loops, best of 3: 191 ms per loop In higher dimensional problems where GPU shared memory can be highly utilized, speedups can be even larger, depending of course on your GPU. An immediate application of interest is in accelerating likelihood-based Bayesian inference (MCMC). GitHub repo: https://github.com/dukestats/gpustats Also on PyPI: http://pypi.python.org/pypi/gpustats What is it ---------- gpustats is a PyCUDA-based library implementing functionality similar to that present in scipy.stats.distributions. It implements a simple framework for specifying new CUDA kernels and extending existing ones. Here is a (partial) list of target functionality: * Probability density functions (pdfs). These are intended to speed up likelihood calculations in particular in Bayesian inference applications, such as in PyMC * Random variable generation using CURAND Notes ----- Requires working PyCUDA installation Best, Wes From daniele at grinta.net Fri Feb 25 04:36:37 2011 From: daniele at grinta.net (Daniele Nicolodi) Date: Fri, 25 Feb 2011 10:36:37 +0100 Subject: [SciPy-User] Reproducible Research academic reference Message-ID: <4D677825.1020405@grinta.net> Hello, sorry for the slightly off-topic question, but I do not know of a better place where to ask. I'm looking for academic references to the topic of "Reproducible Research", in the form of papers of peer review magazines, but also less "formal" resources. Thank for your help. Cheers, -- Daniele, in the stressful process of PhD thesis writing... :-) From gael.varoquaux at normalesup.org Fri Feb 25 04:39:21 2011 From: gael.varoquaux at normalesup.org (Gael Varoquaux) Date: Fri, 25 Feb 2011 10:39:21 +0100 Subject: [SciPy-User] Reproducible Research academic reference In-Reply-To: <4D677825.1020405@grinta.net> References: <4D677825.1020405@grinta.net> Message-ID: <20110225093921.GE24960@phare.normalesup.org> On Fri, Feb 25, 2011 at 10:36:37AM +0100, Daniele Nicolodi wrote: > sorry for the slightly off-topic question, but I do not know of a better > place where to ask. I'm looking for academic references to the topic of > "Reproducible Research", in the form of papers of peer review magazines, > but also less "formal" resources. A central reference is: http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.53.6201 CiSE had a special issue on the topic recently: http://www.computer.org/portal/web/csdl/doi/10.1109/MCSE.2009.14 G From ralf.gommers at googlemail.com Fri Feb 25 05:01:13 2011 From: ralf.gommers at googlemail.com (Ralf Gommers) Date: Fri, 25 Feb 2011 18:01:13 +0800 Subject: [SciPy-User] recipe for installation on Mac OS X? In-Reply-To: References: <242B2F19-B6CB-40FC-8CF0-34F3BA594411@gmail.com> Message-ID: On Fri, Feb 25, 2011 at 5:58 AM, Lutz Maibaum wrote: > On Thu, Feb 24, 2011 at 4:48 AM, Marcel Blattner wrote: >> I use Mac OS X with Numpy, Scipy, Networkx and other libraries. Installation >> was done using Macports...worked like a charm ... > > +1 for MacPorts, it really is a very convenient system. Would it be > possible to write a portfile that pulls the latest source code from > svn? I would love to try the latest release candidate of SciPy 0.9, > but it seems a lot of work to compile it from scratch with all its > dependencies. 
Having a port like py26-scipy-svn as an alternative to > the latest stable release would be very convenient. I'm sure that's possible, but the MacPorts list is probably a better place to ask for help with that. Ralf From sloan.lindsey at gmail.com Fri Feb 25 06:42:47 2011 From: sloan.lindsey at gmail.com (Sloan Lindsey) Date: Fri, 25 Feb 2011 12:42:47 +0100 Subject: [SciPy-User] Writing an Interpolation Function to Disk? Message-ID: Hi, I've been using the new interpolation routines (scipy.interpolate.CloughTocher2DInterpolator) quite happily in my project but I'm wondering if there is a way to save the output from the interpolation function to disk. I'm using rather large datasets (200,000+ points) and it takes an appreciable time to recalculate the interpolant every time that I run my program. I'd like it if I could dump the baked interpolant to disk and then restore it on execution. Of course I probably need to generate the interpolant per machine but I can deal with that. Is there any cool way to do this? Thanks, Sloan Lindsey Technical University of Vienna From fperez.net at gmail.com Fri Feb 25 11:42:14 2011 From: fperez.net at gmail.com (Fernando Perez) Date: Fri, 25 Feb 2011 08:42:14 -0800 Subject: [SciPy-User] Reproducible Research academic reference In-Reply-To: <4D677825.1020405@grinta.net> References: <4D677825.1020405@grinta.net> Message-ID: Hi, On Fri, Feb 25, 2011 at 1:36 AM, Daniele Nicolodi wrote: > sorry for the slightly off-topic question, but I do not know of a better > place where to ask. I'm looking for academic references to the topic of > "Reproducible Research", in the form of papers of peer review magazines, > but also less "formal" resources. I just came back from presenting at a symposium at the annual AAAS meeting (publishers of Science) in Washington precisely on this very topic; details here: http://blog.fperez.org/2011/02/reproducible-research-at-aaas-2011.html There's an ongoing discussion on some of these ideas with several of the participants from the symposium at the google group: http://groups.google.com/group/reproducible-research Cheers, f From fperez.net at gmail.com Fri Feb 25 11:44:45 2011 From: fperez.net at gmail.com (Fernando Perez) Date: Fri, 25 Feb 2011 08:44:45 -0800 Subject: [SciPy-User] Reproducible Research academic reference In-Reply-To: References: <4D677825.1020405@grinta.net> Message-ID: On Fri, Feb 25, 2011 at 8:42 AM, Fernando Perez wrote: > > I just came back from presenting at a symposium at the annual AAAS > meeting (publishers of Science) in Washington precisely on this very > topic; details here: > > http://blog.fperez.org/2011/02/reproducible-research-at-aaas-2011.html I should add that the organizer of the symposium, Victoria Stodden, did her PhD under David Donoho, the author of the reference Gael provided for you (Donoho also was there); this event was pretty much a direct descendant of that original and key reference. Donoho is now working on a URI system for computations with his graduate student, some of that discussion has been going on at the google group I linked. Cheers, f From lorenzo.isella at gmail.com Fri Feb 25 12:02:35 2011 From: lorenzo.isella at gmail.com (Lorenzo Isella) Date: Fri, 25 Feb 2011 18:02:35 +0100 Subject: [SciPy-User] SciPy and Recursion Message-ID: <4D67E0AB.5020601@gmail.com> Dear All, It may be that I do not understand recursion well enough, but when I run the code at the end of the email, I get often plenty of warnings about a maximum number of recursions. 
Is this a feature of Python or of SciPy/NumPy? Or just a bug in my code? Only 2 functions accept_reject_monomer_pos(cluster_shifted, dist,epsi) and random_on_sphere(radius) use recursion, but I do not understand what is going wrong. Any suggestion is appreciated. Many thanks Lorenzo ##################################################################### #! /usr/bin/env python from enthought.mayavi import mlab import scipy as s import numpy as n import scipy.spatial as sp def accept_reject_monomer_pos(cluster_shifted, dist,epsi): xyz=random_on_sphere(dist) dist_list=s.zeros(0) for i in s.arange(s.shape(cluster_shifted)[0]): my_dist= sp.distance.euclidean(xyz,cluster_shifted[i,:]) # if (my_dist<=(2.+epsi)): #i.e. excessive compenetration if ((my_dist)<(2.-epsi)): \ return accept_reject_monomer_pos(cluster_shifted, dist,epsi) dist_list=s.hstack((dist_list,my_dist)) sel=s.where(dist_list<=(2.+epsi))[0] if (len(sel)==0): return accept_reject_monomer_pos(cluster_shifted,\ dist,epsi) #i.e. there are no contact points cluster_shifted=s.vstack((cluster_shifted, xyz)) return cluster_shifted def random_on_sphere(radius): x12=s.random.uniform(-1.,1.,2) if (s.sum(x12**2.)>=1.):return random_on_sphere(radius) print "x12 is, ", x12 print "s.sum(x12**2.) is, ", s.sum(x12**2.) rvec=s.arange(3)*1. rvec[0]=radius*2.*x12[0]*s.sqrt(1.-x12[0]**2.-x12[1]**2.) rvec[1]=radius*2.*x12[1]*s.sqrt(1.-x12[0]**2.-x12[1]**2.) rvec[2]=radius*(1.-2.*(x12[0]**2.+x12[1]**2.)) print "rvec is, ", rvec return rvec def new_dist_sq(N,df,kf): dsq=(N**2.)/(N-1.)*(N/kf)**(2./df)-N/(N-1.)-N*((N-1.)/kf)**(2./df) return dsq def new_dist(N,df,kf): dsq=(N**2.)/(N-1.)*(N/kf)**(2./df)-N/(N-1.)-N*((N-1.)/kf)**(2./df) dsq=s.sqrt(dsq) return dsq def find_CM(cluster): CM=s.mean(cluster, axis=0) return CM def relocate_cluster(cluster): cluster_shift=find_CM(cluster) cluster[:,0]=cluster[:,0]-cluster_shift[0] cluster[:,1]=cluster[:,1]-cluster_shift[1] cluster[:,2]=cluster[:,2]-cluster_shift[2] return cluster # NB: the cluster initially has N-1 monomers. N is the number of monomers # after adding a new monomer. N=3. # a=1. and removed from the formula kf=1.3 df=1.8 epsi=0.01 N_iter=100 d_square= new_dist_sq(N,df,kf) print "d_square is, ", d_square print "and the distance is, ", s.sqrt(d_square) r=random_on_sphere(3.) print "r is, ", r r_mod=s.sqrt(s.sum(r**2.)) print "r_mod is, ", r_mod ini_cluster=s.arange(6).reshape((2,3))*1. ini_cluster[0,0]=1. ini_cluster[0,1]=0. ini_cluster[0,2]=0. ini_cluster[1,0]=-1. ini_cluster[1,1]=0. ini_cluster[1,2]=0. print "ini_cluster is, ", ini_cluster # NB: in ini_cluster I am using the coordinates [x,y,z] of the monomer # centre in each row. It is a dimer whose CM is at [0,0,0] N=2 cluster=ini_cluster for i in s.arange(N_iter): cluster=relocate_cluster(cluster) d_calc=new_dist(N,df,kf) cluster_new=accept_reject_monomer_pos(cluster, d_calc,epsi) N=N+1 cluster=s.copy(cluster_new) x=cluster[:,0] y=cluster[:,1] z=cluster[:,2] mlab.clf() pts = mlab.points3d(x, y, z, scale_mode='none', resolution=20,\ color=(0,0,1),scale_factor=2.) 
#mlab.axes(pts) mlab.show() From gokhansever at gmail.com Fri Feb 25 12:24:24 2011 From: gokhansever at gmail.com (=?UTF-8?Q?G=C3=B6khan_Sever?=) Date: Fri, 25 Feb 2011 10:24:24 -0700 Subject: [SciPy-User] Reproducible Research academic reference In-Reply-To: <4D677825.1020405@grinta.net> References: <4D677825.1020405@grinta.net> Message-ID: On Fri, Feb 25, 2011 at 2:36 AM, Daniele Nicolodi wrote: > Hello, > > sorry for the slightly off-topic question, but I do not know of a better > place where to ask. I'm looking for academic references to the topic of > "Reproducible Research", in the form of papers of peer review magazines, > but also less "formal" resources. > > Thank for your help. Cheers, > -- > Daniele, in the stressful process of PhD thesis writing... :-) > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user > Two semi-related links from me: Interactive open access publishing and public peer review: The effectiveness of transparency and self-regulation in scientific quality assurance Publish your computer code: it is good enough : Nature News These are on the fun side of the subject... Annals of Improbable Research Goes Free Online - Slashdot The Journal of Irreproducible Results -- G?khan -------------- next part -------------- An HTML attachment was scrubbed... URL: From david_baddeley at yahoo.com.au Fri Feb 25 13:41:05 2011 From: david_baddeley at yahoo.com.au (David Baddeley) Date: Fri, 25 Feb 2011 10:41:05 -0800 (PST) Subject: [SciPy-User] SciPy and Recursion In-Reply-To: <4D67E0AB.5020601@gmail.com> References: <4D67E0AB.5020601@gmail.com> Message-ID: <723722.34809.qm@web113408.mail.gq1.yahoo.com> It's a python feature designed to catch infinite recursion - I think the limit is ~1000 calls, although this can be changed (forgotten how at the moment, think it's somewhere in the sys module). Looking at your code, it appears that 'accept_reject_monomer_pos' will recurse infinitely as the recursive call is made with the exact same parameters as the original. hope this helps, David ----- Original Message ---- From: Lorenzo Isella To: scipy-user at scipy.org Sent: Sat, 26 February, 2011 6:02:35 AM Subject: [SciPy-User] SciPy and Recursion Dear All, It may be that I do not understand recursion well enough, but when I run the code at the end of the email, I get often plenty of warnings about a maximum number of recursions. Is this a feature of Python or of SciPy/NumPy? Or just a bug in my code? Only 2 functions accept_reject_monomer_pos(cluster_shifted, dist,epsi) and random_on_sphere(radius) use recursion, but I do not understand what is going wrong. Any suggestion is appreciated. Many thanks Lorenzo ##################################################################### #! /usr/bin/env python from enthought.mayavi import mlab import scipy as s import numpy as n import scipy.spatial as sp def accept_reject_monomer_pos(cluster_shifted, dist,epsi): xyz=random_on_sphere(dist) dist_list=s.zeros(0) for i in s.arange(s.shape(cluster_shifted)[0]): my_dist= sp.distance.euclidean(xyz,cluster_shifted[i,:]) # if (my_dist<=(2.+epsi)): #i.e. excessive compenetration if ((my_dist)<(2.-epsi)): \ return accept_reject_monomer_pos(cluster_shifted, dist,epsi) dist_list=s.hstack((dist_list,my_dist)) sel=s.where(dist_list<=(2.+epsi))[0] if (len(sel)==0): return accept_reject_monomer_pos(cluster_shifted,\ dist,epsi) #i.e. 
there are no contact points cluster_shifted=s.vstack((cluster_shifted, xyz)) return cluster_shifted def random_on_sphere(radius): x12=s.random.uniform(-1.,1.,2) if (s.sum(x12**2.)>=1.):return random_on_sphere(radius) print "x12 is, ", x12 print "s.sum(x12**2.) is, ", s.sum(x12**2.) rvec=s.arange(3)*1. rvec[0]=radius*2.*x12[0]*s.sqrt(1.-x12[0]**2.-x12[1]**2.) rvec[1]=radius*2.*x12[1]*s.sqrt(1.-x12[0]**2.-x12[1]**2.) rvec[2]=radius*(1.-2.*(x12[0]**2.+x12[1]**2.)) print "rvec is, ", rvec return rvec def new_dist_sq(N,df,kf): dsq=(N**2.)/(N-1.)*(N/kf)**(2./df)-N/(N-1.)-N*((N-1.)/kf)**(2./df) return dsq def new_dist(N,df,kf): dsq=(N**2.)/(N-1.)*(N/kf)**(2./df)-N/(N-1.)-N*((N-1.)/kf)**(2./df) dsq=s.sqrt(dsq) return dsq def find_CM(cluster): CM=s.mean(cluster, axis=0) return CM def relocate_cluster(cluster): cluster_shift=find_CM(cluster) cluster[:,0]=cluster[:,0]-cluster_shift[0] cluster[:,1]=cluster[:,1]-cluster_shift[1] cluster[:,2]=cluster[:,2]-cluster_shift[2] return cluster # NB: the cluster initially has N-1 monomers. N is the number of monomers # after adding a new monomer. N=3. # a=1. and removed from the formula kf=1.3 df=1.8 epsi=0.01 N_iter=100 d_square= new_dist_sq(N,df,kf) print "d_square is, ", d_square print "and the distance is, ", s.sqrt(d_square) r=random_on_sphere(3.) print "r is, ", r r_mod=s.sqrt(s.sum(r**2.)) print "r_mod is, ", r_mod ini_cluster=s.arange(6).reshape((2,3))*1. ini_cluster[0,0]=1. ini_cluster[0,1]=0. ini_cluster[0,2]=0. ini_cluster[1,0]=-1. ini_cluster[1,1]=0. ini_cluster[1,2]=0. print "ini_cluster is, ", ini_cluster # NB: in ini_cluster I am using the coordinates [x,y,z] of the monomer # centre in each row. It is a dimer whose CM is at [0,0,0] N=2 cluster=ini_cluster for i in s.arange(N_iter): cluster=relocate_cluster(cluster) d_calc=new_dist(N,df,kf) cluster_new=accept_reject_monomer_pos(cluster, d_calc,epsi) N=N+1 cluster=s.copy(cluster_new) x=cluster[:,0] y=cluster[:,1] z=cluster[:,2] mlab.clf() pts = mlab.points3d(x, y, z, scale_mode='none', resolution=20,\ color=(0,0,1),scale_factor=2.) #mlab.axes(pts) mlab.show() _______________________________________________ SciPy-User mailing list SciPy-User at scipy.org http://mail.scipy.org/mailman/listinfo/scipy-user From fperez.net at gmail.com Fri Feb 25 13:43:25 2011 From: fperez.net at gmail.com (Fernando Perez) Date: Fri, 25 Feb 2011 10:43:25 -0800 Subject: [SciPy-User] Reproducible Research academic reference In-Reply-To: References: <4D677825.1020405@grinta.net> Message-ID: On Fri, Feb 25, 2011 at 9:24 AM, G?khan Sever wrote: > > Two semi-related links from me: This reminded my that I had a little link dump of stuff I read while preparing my talk, might be useful here... Killed by Code: Software Transparency in Implantable Medical Devices ==================================================================== * Karen Sandler * Lysandra Ohrstrom * Laura Moy * Robert McVay http://www.softwarefreedom.org/resources/2010/transparent-medical-devices.html The Climate Code Foundation =========================== http://clearclimatecode.org/the-climate-code-foundation/ Welcome to EuSpRIG, the World?s premier site for information, action, conferences and dialogue on Spreadsheet Risk Management. 
http://www.eusprig.org/index.htm Wikipedia Entry on Iraq War Turned Into Actual Encyclopedia http://newsfeed.time.com/2010/09/07/wikipedia-entry-on-iraq-war-turned-into-actual-encyclopedia/ GSOC summit session on open science =================================== Summary notes: http://etherpad.osuosl.org/open-source-science Other links ========= >From a Kitware developer: http://www.kitware.com/blog/home/post/67 What is "Reproducibility," Anyway? http://www.scimatic.com/node/361 >From Titus Brown: mock reply to NSF data management request http://ivory.idyll.org/blog/may-10/data-management.html UTOSC: Applying open source ideals to more than software http://lwn.net/Articles/410555/#Comments How science funding is putting scientific data at risk http://arstechnica.com/science/news/2010/10/how-science-funding-is-putting-scientific-data-at-risk.ars Preserving science: what data do we keep? What do we discard? By John Timmer http://arstechnica.com/science/news/2010/11/preserving-science-choosing-what-data-to-discard.ars Science's burden: storing terabytes of research data for decades http://arstechnica.com/science/news/2011/02/how-to-handle-data-in-science.ars http://www.sciencemag.org/content/331/6018/649.full ?To address the growing complexity of data and analyses, Science is extending our data access requirement listed above to include computer codes involved in the creation or analysis of data.? From pav at iki.fi Fri Feb 25 15:40:26 2011 From: pav at iki.fi (Pauli Virtanen) Date: Fri, 25 Feb 2011 20:40:26 +0000 (UTC) Subject: [SciPy-User] Writing an Interpolation Function to Disk? References: Message-ID: On Fri, 25 Feb 2011 12:42:47 +0100, Sloan Lindsey wrote: > I've been using the new interpolation routines > (scipy.interpolate.CloughTocher2DInterpolator) quite happily in my > project but I'm wondering if there is a way to save the output from the > interpolation function to disk. I'm using rather large datasets > (200,000+ points) and it takes an appreciable time to recalculate the > interpolant every time that I run my program. I assume most of the time is taken by constructing the Delaunay triangulation (scipy.spatial.Delaunay(points)). > I'd like it if I could > dump the baked interpolant to disk and then restore it on execution. Of > course I probably need to generate the interpolant per machine but I can > deal with that. Is there any cool way to do this? Ideally, this would work: import pickle f = open('file.pck', 'wb') pickle.dump(interpolator, f) f.close() but it doesn't work, since there's a small unnecessary technical snatch (scipy.spatial.interpnd has the wrong __name__). This will be fixed in the next version of Scipy, so a small workaround is needed in the meantime: class PickleableInterpolator(CloughTocher2DInterpolator): def __getstate__(self): return (self.tri, self.values, self.grad, self.is_complex, self.fill_value, self.values_shape) def __setstate__(self, data): self.tri, self.values, self.grad, self.is_complex, \ self.fill_value, self.values_shape = data self.points = self.tri.points This kind of mucking around might break in future Scipy versions, so I suggest you put it in 'if scipy.__version__ == "0.9.0":' Note that pickling objects like this is quite brittle --- the internal details of what needs to be pickled may change, so do not expect the pickle files to work across different Scipy versions. 
-- Pauli Virtanen From pav at iki.fi Fri Feb 25 15:42:02 2011 From: pav at iki.fi (Pauli Virtanen) Date: Fri, 25 Feb 2011 20:42:02 +0000 (UTC) Subject: [SciPy-User] Writing an Interpolation Function to Disk? References: Message-ID: On Fri, 25 Feb 2011 20:40:26 +0000, Pauli Virtanen wrote: [clip] > pickle.dump(interpolator, f) For most stuff, one should do pickle.dump(interpolator, f, protocol=2) so that all Numpy arrays get dumped as binary data rather than first converted to text in the pickle stream. -- Pauli Virtanen From gael.varoquaux at normalesup.org Sat Feb 26 05:40:05 2011 From: gael.varoquaux at normalesup.org (Gael Varoquaux) Date: Sat, 26 Feb 2011 11:40:05 +0100 Subject: [SciPy-User] Writing an Interpolation Function to Disk? In-Reply-To: References: Message-ID: <20110226104005.GA15530@phare.normalesup.org> On Fri, Feb 25, 2011 at 08:42:02PM +0000, Pauli Virtanen wrote: > On Fri, 25 Feb 2011 20:40:26 +0000, Pauli Virtanen wrote: > [clip] > > pickle.dump(interpolator, f) > For most stuff, one should do > pickle.dump(interpolator, f, protocol=2) > so that all Numpy arrays get dumped as binary data rather than first > converted to text in the pickle stream. Or use joblib's pickler subclass: http://packages.python.org/joblib/generated/joblib.dump.html which can only be read using joblib's load function, but will pickle numpy arrays as .npy, as thus be fast at save and load. G From bhanukiran.perabathini at gmail.com Sat Feb 26 13:14:55 2011 From: bhanukiran.perabathini at gmail.com (bhanukiran perabathini) Date: Sat, 26 Feb 2011 23:44:55 +0530 Subject: [SciPy-User] Double integration with integrate.trapz Message-ID: Hi all, - My task is to perform a double integral (with two free variables) and make a filled-contour plot of the logarithm of its absolute square. - I'm kind of sure that the code has no major mistakes. Nevertheless, I've badly got to accelerate my double-integration algorithm. - I bet there must be ways to increase the speed substantially. My guess is that the over use of scipy.vectorize is the culprit. Here I'm attaching my code and the integral, any suggestions on how to improve the speed would be deeply appreciated. - I've checked out integrate.dblquad and it doesn't seem to handle complex types easily. But, no matter what, I primary intension is to improve on the trapz method. Thanks a tonne for every second of yours, - Bhanukiran, FOSSEE, IITB, India -------------- next part -------------- A non-text attachment was scrubbed... Name: the_integral.pdf Type: application/pdf Size: 31627 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: dbl_integral.py Type: text/x-python-script Size: 1320 bytes Desc: not available URL: From lorenzo.isella at gmail.com Sat Feb 26 13:43:06 2011 From: lorenzo.isella at gmail.com (Lorenzo Isella) Date: Sat, 26 Feb 2011 19:43:06 +0100 Subject: [SciPy-User] SciPy and Recursion In-Reply-To: References: Message-ID: <4D6949BA.5010400@gmail.com> Hi David and thanks for helping. > Date: Fri, 25 Feb 2011 10:41:05 -0800 (PST) > From: David Baddeley > Subject: Re: [SciPy-User] SciPy and Recursion > To: SciPy Users List > Message-ID:<723722.34809.qm at web113408.mail.gq1.yahoo.com> > Content-Type: text/plain; charset=utf-8 > > It's a python feature designed to catch infinite recursion - I think the limit > is ~1000 calls, although this can be changed (forgotten how at the moment, think > it's somewhere in the sys module). 
Yes, indeed you can set that value to be higher, which I did in the new version of the script pasted below.

> Looking at your code, it appears that 'accept_reject_monomer_pos' will recurse
> infinitely as the recursive call is made with the exact same parameters as the
> original.

No, it won't loop forever, since 'accept_reject_monomer_pos' in turn calls 'random_on_sphere(dist)', which generates a new random position on a sphere every time.

However, the problem now is that the modified version of the code pasted below (where I simply set a very high maximum number of recursions) crashes with a segmentation fault after a variable number of iterations, and I have no idea where the segmentation fault arises from (never had one in Python). Any suggestion is welcome.

Cheers

Lorenzo

> hope this helps,
> David

########################################################################
#! /usr/bin/env python
# from enthought.mayavi import mlab
import scipy as s
import numpy as n
import scipy.spatial as sp
import sys

sys.setrecursionlimit(1000000)

def accept_reject_monomer_pos(cluster_shifted, dist, epsi):
    xyz = random_on_sphere(dist)
    dist_list = s.zeros(0)
    for i in s.arange(s.shape(cluster_shifted)[0]):
        my_dist = sp.distance.euclidean(xyz, cluster_shifted[i, :])
        # if (my_dist <= (2. + epsi)):
        if ((my_dist) < (2. - epsi)):  # i.e. excessive compenetration
            return accept_reject_monomer_pos(cluster_shifted, dist, epsi)
        dist_list = s.hstack((dist_list, my_dist))
    sel = s.where(dist_list <= (2. + epsi))[0]
    if (len(sel) == 0):  # i.e. there are no contact points
        return accept_reject_monomer_pos(cluster_shifted, dist, epsi)
    cluster_shifted = s.vstack((cluster_shifted, xyz))
    return cluster_shifted

def random_on_sphere(radius):
    x12 = s.random.uniform(-1., 1., 2)
    if (s.sum(x12**2.) >= 1.):
        return random_on_sphere(radius)
    rvec = s.arange(3)*1.
    rvec[0] = radius*2.*x12[0]*s.sqrt(1. - x12[0]**2. - x12[1]**2.)
    rvec[1] = radius*2.*x12[1]*s.sqrt(1. - x12[0]**2. - x12[1]**2.)
    rvec[2] = radius*(1. - 2.*(x12[0]**2. + x12[1]**2.))
    return rvec

# def new_dist_sq(N, df, kf):
#     dsq = (N**2.)/(N-1.)*(N/kf)**(2./df) - N/(N-1.) - N*((N-1.)/kf)**(2./df)
#     return dsq

def new_dist(N, df, kf):
    dsq = (N**2.)/(N-1.)*(N/kf)**(2./df) - N/(N-1.) - N*((N-1.)/kf)**(2./df)
    dsq = s.sqrt(dsq)
    return dsq

def find_CM(cluster):
    CM = s.mean(cluster, axis=0)
    return CM

def relocate_cluster(cluster):
    cluster_shift = find_CM(cluster)
    cluster[:, 0] = cluster[:, 0] - cluster_shift[0]
    cluster[:, 1] = cluster[:, 1] - cluster_shift[1]
    cluster[:, 2] = cluster[:, 2] - cluster_shift[2]
    return cluster

# NB: the cluster initially has N-1 monomers. N is the number of monomers
# after adding a new monomer. N=3.
# a=1. and removed from the formula

kf = 1.3
df = 1.2  # 1.8
epsi = 0.05
test = 0
N_iter = 800
N = 2

ini_cluster = s.arange(6).reshape((2, 3))*1.
ini_cluster[0, 0] = 1.
ini_cluster[0, 1] = 0.
ini_cluster[0, 2] = 0.
ini_cluster[1, 0] = -1.
ini_cluster[1, 1] = 0.
ini_cluster[1, 2] = 0.

cluster = ini_cluster

for i in s.arange(N_iter):
    print "i is, ", i
    cluster = relocate_cluster(cluster)
    d_calc = new_dist(N, df, kf)
    cluster = accept_reject_monomer_pos(cluster, d_calc, epsi)
    N = N + 1

n.savetxt("aggregate.dat", cluster)

From pav at iki.fi Sat Feb 26 13:53:22 2011
From: pav at iki.fi (Pauli Virtanen)
Date: Sat, 26 Feb 2011 18:53:22 +0000 (UTC)
Subject: [SciPy-User] Double integration with integrate.trapz

Hi,

On Sat, 26 Feb 2011 23:44:55 +0530, bhanukiran perabathini wrote:
[clip]
> - My task is to perform a double integral (with two free variables) and
>   make a filled-contour plot of the logarithm of its absolute square.
Avoid using numpy.vectorize: it will not increase the speed. Instead, try to vectorize your operations.

So, suppose I would need to compute the integral

    I = \int_{0}^{1} dp \int_{-p}^{p**2} dq exp(-q*p*p)

I would first change the variables in the inner integral to get rid of the varying bounds:

    q(x) = ((p**2) + p) * x - p
    x = 0...1
    dq = ((p**2) + p) * dx

    I = \int_{0}^{1} dp \int_0^1 dx (dq/dx) exp(-q(x)*p*p)

Then, I would compute the integral using trapz:

---------------
from numpy import linspace, newaxis, trapz, exp

p = linspace(0, 1, 200)
x = linspace(0, 1, 200)

# instead of meshgrid, use broadcasting
pp = p[:, newaxis]
xx = x[newaxis, :]

qq = (pp**2 + pp)*xx - pp
dq_per_dx = (pp**2 + pp)

# compute the value of the integrand on the square
integrand = exp(-qq*pp*pp) * dq_per_dx

# compute the result
value = trapz(trapz(integrand, pp, axis=0), x, axis=0)

# compare to scipy.integrate.dblquad
from scipy.integrate import dblquad
value2 = dblquad(lambda q, p: exp(-q*p*p), 0., 1.,
                 lambda p: -p, lambda p: p**2)

print value, value2[0]
# -> 0.899982875122 0.899972594941
----------------

The same trick can be used for *all* double integrals.

--
Pauli Virtanen

From david_baddeley at yahoo.com.au Sat Feb 26 17:32:59 2011
From: david_baddeley at yahoo.com.au (David Baddeley)
Date: Sat, 26 Feb 2011 14:32:59 -0800 (PST)
Subject: [SciPy-User] SciPy and Recursion
Message-ID: <268291.28960.qm@web113411.mail.gq1.yahoo.com>

now got scipy running - you're going to want:

    dist_list = sp.distance.cdist(cluster_shifted, xyz.reshape((-1, 3)))

you might also want to try removing the recursion from random_on_sphere (if nothing else, recursion is slow). Something like:

    def random_on_sphere(radius):
        x12 = s.random.uniform(-1., 1., 2)
        while (s.sum(x12**2.) >= 1.):
            x12 = s.random.uniform(-1., 1., 2)
        .......

should do the trick

cheers,
David

----- Original Message ----
From: David Baddeley
To: Lorenzo Isella
Sent: Sun, 27 February, 2011 11:10:07 AM
Subject: Re: [SciPy-User] SciPy and Recursion

Hi Lorenzo,

I think the segfault is caused by the python stack overflowing (with all the function calls) - hence the reason for the limit. Still seems like a bug to me though.

Looking at your code again, it seems like accept_reject_monomer_pos will still get called recursively most of the time. I'd suggest trying a non-recursive version of the algorithm, for example:

    def accept_reject_monomer_pos(cluster_shifted, dist, epsi):
        while(True):
            xyz = random_on_sphere(dist)
            dist_list = s.hstack([sp.distance.euclidean(xyz, cluster_shifted[i, :])
                                  for i in range(s.shape(cluster_shifted)[0])])
            if (not (dist_list < (2.-epsi)).any()) and (dist_list <= (2.+epsi)).any():
                cluster_shifted = s.vstack((cluster_shifted, xyz))
                return cluster_shifted

you might be able to make it even simpler/faster by replacing the 'dist_list = s.hstack([...])' line with:

    dist_list = sp.distance.cdist(cluster_shifted.T, xyz)

(I'm not quite sure if the transpose is necessary & don't currently have access to scipy to check it out)

cheers,
David

----- Original Message ----
From: Lorenzo Isella
To: david_baddeley at yahoo.com.au
Cc: scipy-user at scipy.org
Sent: Sun, 27 February, 2011 7:43:06 AM
Subject: Re: [SciPy-User] SciPy and Recursion

Hi David and thanks for helping.
> Date: Fri, 25 Feb 2011 10:41:05 -0800 (PST)
> From: David Baddeley
> Subject: Re: [SciPy-User] SciPy and Recursion
> To: SciPy Users List

[clip]

From gael.varoquaux at normalesup.org Sat Feb 26 17:53:37 2011
From: gael.varoquaux at normalesup.org (Gael Varoquaux)
Date: Sat, 26 Feb 2011 23:53:37 +0100
Subject: [SciPy-User] SciPy and Recursion
Message-ID: <20110226225337.GA26889@phare.normalesup.org>

On Sat, Feb 26, 2011 at 02:32:59PM -0800, David Baddeley wrote:
> now got scipy running - you're going to want:
> dist_list = sp.distance.cdist(cluster_shifted, xyz.reshape((-1, 3)))

By the way, for l2 distance, this function is very suboptimal. You might want to use the code from scikits.learn that does the same thing:
https://github.com/scikit-learn/scikit-learn/blob/master/scikits/learn/metrics/pairwise.py

If you don't want to depend on the scikit, you can just grab that file: it doesn't depend on anything else.

Gaël

From jsalvati at u.washington.edu Sat Feb 26 23:50:18 2011
From: jsalvati at u.washington.edu (John Salvatier)
Date: Sat, 26 Feb 2011 20:50:18 -0800
Subject: [SciPy-User] scipy.stats logpdf gradients

Hello,

A while back there was a discussion that ultimately got logpdf, logcdf etc. included in scipy.stats (http://mail.scipy.org/pipermail/scipy-user/2010-May/025444.html). There was some mention of possibly also doing the gradients of the logpdf; the gradients of the logpdf are useful in fitting algorithms.

I am curious whether any work has been done on that front. I would be interested in contributing to such work. Has anyone worked on this? Is anyone willing to work on this with me? I am not intimately familiar with the scipy codebase, but I have coded up many logpdf gradients for PyMC.

John

From josef.pktd at gmail.com Sun Feb 27 03:15:00 2011
From: josef.pktd at gmail.com (josef.pktd at gmail.com)
Date: Sun, 27 Feb 2011 03:15:00 -0500
Subject: [SciPy-User] scipy.stats logpdf gradients

On Sat, Feb 26, 2011 at 11:50 PM, John Salvatier wrote:
> A while back there was a discussion that ultimately got logpdf,
> logcdf etc. included in scipy.stats
> (http://mail.scipy.org/pipermail/scipy-user/2010-May/025444.html).
> There was some mention of possibly also doing the gradients of the
> logpdf; the gradients of the logpdf are useful in fitting algorithms.
>
> I am curious whether any work has been done on that front. I would be
> interested in contributing to such work. Has anyone worked on this?
> Is anyone willing to work on this with me? I am not intimately
> familiar with the scipy codebase, but I have coded up many logpdf
> gradients for PyMC.

I would be very interested in this and similar enhancements (Hessian of logpdf, hazard function, characteristic function).

Adding a distribution specific _logpdf_gradient (?name) would just need the derivative with respect to the shape parameters, and can be directly added to the individual distribution classes. (I think the only restriction is that it should be vectorized to work with a 1d array of arguments.)

This wouldn't require any knowledge of the generic machinery. So, forking scipy and starting to edit the individual distributions in scipy.stats.distributions.py will get you started.
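A minimal sketch of what such a gradient could compute, for the normal distribution (the function name is made up and nothing here is existing scipy API; loc and scale are taken as plain arguments):

    import numpy as np

    def norm_logpdf_grad(x, loc=0., scale=1.):
        # gradient of logpdf(x) = -0.5*((x - loc)/scale)**2 - log(scale)
        #                         - 0.5*log(2*pi)
        # with respect to (loc, scale), vectorized over a 1d array x
        z = (x - loc) / scale
        d_loc = z / scale              # d logpdf / d loc
        d_scale = (z**2 - 1.) / scale  # d logpdf / d scale
        return np.column_stack((d_loc, d_scale))

    # quick check against a forward finite difference in loc (scale=1)
    x = np.array([-1., 0.5, 2.])
    eps = 1e-6
    logpdf = lambda loc: -0.5*(x - loc)**2 - 0.5*np.log(2*np.pi)
    approx = (logpdf(eps) - logpdf(0.)) / eps
    print norm_logpdf_grad(x)[:, 0], approx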
loc and scale are usually handled in the superclass methods, without the underline in the name. For this, I think it would be better to come up with a program-friendly interface, instead of a user-friendly interface that just copies the pattern of the existing methods.

Since the gradient returns a vector (per observation), the broadcasting style of the current methods won't work directly, and I'm not a fan of *args and **kwds; optimization arguments should be an array in my opinion. Another thought is the overhead of the function call, since it will be mainly useful for optimization (http://projects.scipy.org/scipy/ticket/1389), although vectorized and with a good optimizer convergence should be pretty fast.

The distribution-specific methods would be useful independently of the wrapper, and would also allow other packages to write their own wrappers. Maybe we can add a dumb wrapper (as a private method?) that just handles loc and scale in the simplest way.

Does PyMC use a generic way of getting the derivative with respect to loc and scale, or is it distribution specific?

----------

In statsmodels, we currently use only distribution- and model-specific gradients and hessians of the log-likelihood, or numerical gradients. I started to experiment with a generic system when I worked on the t-distribution. sympy worked quite well and has the special function that is required for the gradient of the t-distribution and that I didn't know about. My plan was to use the derivative with respect to shapes, loc and scale, and chain them together with other functions that represent, for example, loc or scale as a function of some parameters and explanatory variables. It turned out to be a lot more work than I had time for.

---------

So please go ahead, we should be able to add at least any private methods pretty fast. The tests will require just a few lines if we compare them against a numerical derivative; there is a check_derivative somewhere in scipy.optimize.

Josef

From giacomo.boffi at polimi.it Sun Feb 27 06:34:32 2011
From: giacomo.boffi at polimi.it (Giacomo Boffi)
Date: Sun, 27 Feb 2011 12:34:32 +0100
Subject: [SciPy-User] Double integration with integrate.trapz
Message-ID: <19818.14024.445696.976358@aiuole.stru.polimi.it>

bhanukiran perabathini writes:

> - My task is to perform a double integral (with two free variables)
>   and make a filled-contour plot of the logarithm of its absolute
>   square.

notwithstanding the other excellent advice you've received, please note that your integral is symmetric with respect to the free variables, so that you could double the resolution of your plot, using the same amount of computational resources, by plotting the first quadrant only

--
anch'io la penso come me, ma                     -- SteO153, in IHC
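A concrete sketch of this idea, assuming the symmetry is of the swap kind, f(x, y) == f(y, x) (the function f below is only a stand-in for the expensive per-point computation):

    import numpy as np

    def f(x, y):
        # stand-in for the expensive per-point result, with f(x, y) == f(y, x)
        return np.exp(-(x + y)) * np.cos(x * y)

    n = 200
    t = np.linspace(0., 1., n)
    Z = np.empty((n, n))
    iu = np.triu_indices(n)        # upper triangle (including the diagonal)
    Z[iu] = f(t[iu[0]], t[iu[1]])  # evaluate only about half the grid...
    Z.T[iu] = Z[iu]                # ...and mirror it across the diagonal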
From daniele at grinta.net Sun Feb 27 08:42:25 2011
From: daniele at grinta.net (Daniele Nicolodi)
Date: Sun, 27 Feb 2011 14:42:25 +0100
Subject: [SciPy-User] Reproducible Research academic reference
Message-ID: <4D6A54C1.1000006@grinta.net>

On 25/02/11 10:39, Gael Varoquaux wrote:
> A central reference is:
> http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.53.6201
>
> CiSE had a special issue on the topic recently:
> http://www.computer.org/portal/web/csdl/doi/10.1109/MCSE.2009.14

Thank you Gael, and thanks to all the others that gave me pointers on the mailing list and via private mail.

Cheers,
--
Daniele

From pierre.raybaut at gmail.com Sun Feb 27 13:34:53 2011
From: pierre.raybaut at gmail.com (Pierre Raybaut)
Date: Sun, 27 Feb 2011 19:34:53 +0100
Subject: [SciPy-User] ANN: Spyder v2.0.8

Hi all,

I am pleased to announce that Spyder v2.0.8 has just been released. As this is mostly a maintenance release, a lot of bugs were fixed (see below) and some minor features were added.

Integrating Spyder's GUI-based Python shell in your application: note that this version includes an example of an application using Spyder's internal shell as a debugging console, which also demonstrates the py2exe deployment procedure.

Spyder (previously known as Pydee) is a free open-source Python development environment providing MATLAB-like features in a simple and lightweight package, available for Windows XP/Vista/7, GNU/Linux and MacOS X:
http://spyderlib.googlecode.com/

Spyder is part of spyderlib, a Python module based on PyQt4, pyflakes and rope (QScintilla's dependency has been removed in version 2.0 and rope features have been integrated since this version as well).

Some of Spyder's basic features:

* Python, C/C++, Fortran source editor with class/function browser, code completion and calltips
* consoles:
  o open as many Python interpreters, IPython consoles or command windows as you need
  o code completion and calltips
  o variable explorer with GUI-based editors for a lot of data types (numbers, strings, lists, arrays, dictionaries, ...)
* object inspector: provides documentation or source code on any Python object (class, function, module, ...)
* online documentation: automatically generated html documentation on installed Python modules
* find in files
* file explorer
* project manager
* MATLAB-like PYTHONPATH management dialog box (works with all consoles)
* Windows only: current user environment variables editor
* direct links to documentation (Python, Qt, Matplotlib, NumPy, Scipy, etc.)
* direct link to Python(x,y) launcher
* direct links to QtDesigner, QtLinguist and QtAssistant (Qt documentation)

Bug fixes (since v2.0.6):

* Consoles: saving history log (see context menu) was not working following a recent code cleaning/refactoring
* On non-Windows platforms, the file selection dialog "All files (*.*)" filter was not matching files without extension
* dochelpers.isdefined: ignoring syntax errors while evaluating object
* Preferences Dialog (dialog box + keyboard shortcut page): improved size/resize behavior
* Editor: when the cursor was on the very last line, the Duplicate/Delete line features were getting stuck in an infinite loop
* Editor/duplicate line feature - fixed unexpected behavior: when duplicating selected text, the text selection was extended to the duplicated part
* Editor with multiple editor windows: when opening a file in one editor window, the top-left corner menu (file list) was not updated correctly in other editor windows
* Editor - fixed unexpected behavior: when clicking on the main window's outline explorer while a separate editor window had focus, the latter was used to show the associated line of code
* Project explorer: added new debugging options (profiling 'rope' calls)
* Console/Advanced settings/UMD module list: removing all entries (empty module list) was not working
* Editor/File list management dialog (Ctrl+E): double-clicking/pressing Return on a listwidget item will switch to the associated file
* Editor/Tab bar: fixed missing tooltips issue (and missing file switch menu entries)
* Code completion: the list widget was not hiding as expected when pressing ':'
* Editor - fixed unexpected behavior: when some text was selected, "Ctrl+Left mouse click" was trying to "go to definition" instead of doing the standard drag'n'drop feature
* Editor: disabling code completion/calltips for non-Python source code (was not working as expected, but was taking time to simply not work...)
* Editor/go to line: fixed unicode error
* Code editor: cursor position was not restored when undoing an indent operation with the "tab always indent" feature turned on *and* the cursor at the end of the line
* Tab behavior when "tab always indents" is turned off: inserting 4-(len(leading_text) % 4) spaces (instead of 4)
* Object inspector: ignoring unknown objects when called automatically from editor/console, i.e. do not log, do not show 'no doc available'

Other changes (since v2.0.6):

* Code editor syntax highlighting: added more keywords to the Cython syntax highlighter (cpdef, inline, cimport and DEF)
* Added an example of an application using Spyder's internal shell as a debugging console (also demonstrates the py2exe deployment procedure)
* Object inspector: added "Source" combo box (Console/Editor) -> objects inspected from the editor are now analyzed only with rope (if available) and then shown in the object inspector
* Added keyboard shortcut to open the Preferences Dialog (default: Ctrl+Alt+Shift+P)
* Editor: added "Copy line" feature (Ctrl+Alt+Down), similar to "Duplicate line" (Ctrl+Alt+Up) but pastes the text before the current line/selected text (instead of after)
* Array editor: added option to set row/col labels (resp. ylabels and xlabels)
* Editor/rope: improved performance for the calltips/doc feature

Cheers,
Pierre

From jsalvati at u.washington.edu Sun Feb 27 14:10:20 2011
From: jsalvati at u.washington.edu (John Salvatier)
Date: Sun, 27 Feb 2011 11:10:20 -0800
Subject: [SciPy-User] scipy.stats logpdf gradients

> I would be very interested in this and similar enhancements (Hessian
> of logpdf, hazard function, characteristic function).

Awesome :)

> Adding a distribution specific _logpdf_gradient (?name) would just
> need the derivative with respect to the shape parameters, and can be
> directly added to the individual distribution classes.
> (I think the only restriction is that it should be vectorized to work
> with a 1d array of arguments.)
From josef.pktd at gmail.com (josef.pktd at gmail.com)
Subject: [SciPy-User] scipy.stats logpdf gradients

On Sun, Feb 27, 2011 at 2:10 PM, John Salvatier wrote:
> Do you mean having all the arguments (say, alpha beta) as part of one
> parameter? I'm not really a fan of this. I've had to do optimization on
> likelihood functions in PyMC, and what I've done is np.ravel() the
> arguments and then stick them in a known order (a prebuilt dictionary of
> slices) into an empty array. It's been fast enough.

all scipy optimizers like fmin expect that the optimization argument is an array, and we follow this everywhere in statsmodels, taking it apart inside the loglikelihood function. Often we have a variable number of parameters, and having to use *args looks messy to me, especially if there are still kwds afterwards. But that's for the wrapper that would be used with the optimizers; for the gradient with respect to the shape parameters it might not make much difference.

>> loc and scale are usually handled in the superclass methods, without
>> the underline in the name.
>> For this, I think it would be better to come up with a program
>> friendly interface, instead of a user friendly interface by just
>> copying the pattern of existing methods.
>> Since the gradient returns a vector (per observation) the broadcasting
>> style of the current methods won't work directly, and I'm not a fan of
>> *args and **kwds, optimization arguments should be an array in my
>> opinion. Another thought is overhead of the function call since it
>> will be mainly useful for optimization
>> http://projects.scipy.org/scipy/ticket/1389 although vectorized and
>> with a good optimizer convergence should be pretty fast.
>
> I'm not clear on what you mean by the "broadcasting style of the current
> methods". Do they use a different kind of broadcasting than normal?

From the outside, when calling for example pdf, broadcasting works as usual (with the exception of some of the distributions where the support depends on a parameter). Internally, the wrapper function does the broadcasting and checking for valid arguments, checking whether the arguments are in the support of the function, and then calls the private method, _pdf, with a broadcasted array of goodargs. The return has the same shape as the broadcasted inputs. You can look at pdf as an example.

In the case of the gradient, the return array will have a different dimension than the inputs: empty if there is no shape parameter, like in the normal case, or with an extra axis with up to 4 or 5 gradients, one per parameter. This is if the gradient is in one array; the alternative would be a list of arrays, one for each parameter. This then differs from the existing pattern of pdf, cdf, ...

> The distribution specific methods would be useful independently of the
> wrapper, and would also allow other packages to write their own
> wrappers. Maybe we can add a dumb wrapper (as private method?) that
> just handles loc and scale in the simplest way.
>
>> Does PyMC use a generic way of getting the derivative with respect to
>> loc and scale or is it distribution specific?
>
> It is distribution specific.
>
>> So please go ahead, we should be able to add at least any private
>> methods pretty fast.
>> The tests will require just a few lines if we compare them against a
>> numerical derivative, there is a check_derivative somewhere in
>> scipy.optimize.
>
> I will see about starting a fork and send you an email when I do. Is there
> an easy way to fork to github?

The following was Pauli's answer the last time the question came up:

    I'd recommend going for git already.
    Some instructions how to get set up using Scipy's SVN mirror can be
    found here: http://projects.scipy.org/numpy/wiki/GitMirror

Good luck,

Josef

From fperez.net at gmail.com Sun Feb 27 16:10:27 2011
From: fperez.net at gmail.com (Fernando Perez)
Date: Sun, 27 Feb 2011 13:10:27 -0800
Subject: [SciPy-User] Reproducible Research academic reference

On Sun, Feb 27, 2011 at 5:42 AM, Daniele Nicolodi wrote:
> Thank you Gael, and thanks to all the others that gave me pointers on the
> mailing list and via private mail.

Another detailed discussion on the topic was just posted to a climate modeling blog; you might find it useful:

http://wattsupwiththat.com/2011/02/26/the-code-of-nature-making-authors-part-with-their-programs/

Cheers,
f

From ralf.gommers at googlemail.com Mon Feb 28 00:56:31 2011
From: ralf.gommers at googlemail.com (Ralf Gommers)
Date: Mon, 28 Feb 2011 13:56:31 +0800
Subject: [SciPy-User] ANN: SciPy 0.9.0

I'm pleased to announce the release of SciPy 0.9.0.

SciPy is a package of tools for science and engineering for Python. It includes modules for statistics, optimization, integration, linear algebra, Fourier transforms, signal and image processing, ODE solvers, and more.

This release comes seven months after the 0.8.0 release and contains several new features, numerous bug-fixes, improved test coverage, and better documentation. This is the first release that supports Python 3 (with the exception of the scipy.weave module).

Sources, binaries, documentation and release notes can be found at
http://sourceforge.net/projects/scipy/files/scipy/0.9.0/

Thank you to everybody who contributed to this release.

Enjoy,
The SciPy developers

=========================
SciPy 0.9.0 Release Notes
=========================

.. contents::

SciPy 0.9.0 is the culmination of 6 months of hard work. It contains many new features, numerous bug-fixes, improved test coverage and better documentation. There have been a number of deprecations and API changes in this release, which are documented below. All users are encouraged to upgrade to this release, as there are a large number of bug-fixes and optimizations. Moreover, our development attention will now shift to bug-fix releases on the 0.9.x branch, and on adding new features on the development trunk.

This release requires Python 2.4 - 2.7 or 3.1 - and NumPy 1.5 or greater.

Please note that SciPy is still considered to have "Beta" status, as we work toward a SciPy 1.0.0 release. The 1.0.0 release will mark a major milestone in the development of SciPy, after which changing the package structure or API will be much more difficult. Whilst these pre-1.0 releases are considered to have "Beta" status, we are committed to making them as bug-free as possible.

However, until the 1.0 release, we are aggressively reviewing and refining the functionality, organization, and interface. This is being done in an effort to make the package as coherent, intuitive, and useful as possible. To achieve this, we need help from the community of users. Specifically, we need feedback regarding all aspects of the project - everything - from which algorithms we implement, to details about our function's call signatures.
Python 3
========

Scipy 0.9.0 is the first SciPy release to support Python 3. The only module that is not yet ported is ``scipy.weave``.

Scipy source code location to be changed
========================================

Soon after this release, Scipy will stop using SVN as the version control system, and move to Git. The development source code for Scipy can from then on be found at

    http://github.com/scipy/scipy

New features
============

Delaunay tessellations (``scipy.spatial``)
------------------------------------------

Scipy now includes routines for computing Delaunay tessellations in N dimensions, powered by the Qhull_ computational geometry library. Such calculations can now make use of the new ``scipy.spatial.Delaunay`` interface.

.. _Qhull: http://www.qhull.org/

N-dimensional interpolation (``scipy.interpolate``)
---------------------------------------------------

Support for scattered data interpolation is now significantly improved. This version includes a ``scipy.interpolate.griddata`` function that can perform linear and nearest-neighbour interpolation for N-dimensional scattered data, in addition to cubic spline (C1-smooth) interpolation in 2D and 1D. An object-oriented interface to each interpolator type is also available.

Nonlinear equation solvers (``scipy.optimize``)
-----------------------------------------------

Scipy includes new routines for large-scale nonlinear equation solving in ``scipy.optimize``. The following methods are implemented:

* Newton-Krylov (``scipy.optimize.newton_krylov``)
* (Generalized) secant methods:
  - Limited-memory Broyden methods (``scipy.optimize.broyden1``, ``scipy.optimize.broyden2``)
  - Anderson method (``scipy.optimize.anderson``)
* Simple iterations (``scipy.optimize.diagbroyden``, ``scipy.optimize.excitingmixing``, ``scipy.optimize.linearmixing``)

The ``scipy.optimize.nonlin`` module was completely rewritten, and some of the functions were deprecated (see below).

New linear algebra routines (``scipy.linalg``)
----------------------------------------------

Scipy now contains routines for effectively solving triangular equation systems (``scipy.linalg.solve_triangular``).

Improved FIR filter design functions (``scipy.signal``)
-------------------------------------------------------

The function ``scipy.signal.firwin`` was enhanced to allow the design of highpass, bandpass, bandstop and multi-band FIR filters.

The function ``scipy.signal.firwin2`` was added. This function uses the window method to create a linear phase FIR filter with an arbitrary frequency response.

The functions ``scipy.signal.kaiser_atten`` and ``scipy.signal.kaiser_beta`` were added.

Improved statistical tests (``scipy.stats``)
--------------------------------------------

A new function ``scipy.stats.fisher_exact`` was added, which provides Fisher's exact test for 2x2 contingency tables.

The function ``scipy.stats.kendalltau`` was rewritten to make it much faster (O(n log(n)) vs O(n^2)).
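For instance (an illustrative sketch; the counts in the 2x2 table below are made up)::

    import numpy as np
    from scipy import stats

    # hypothetical 2x2 contingency table of counts
    table = np.array([[8, 2],
                      [1, 5]])
    oddsratio, pvalue = stats.fisher_exact(table)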
Deprecated features
===================

Obsolete nonlinear solvers (in ``scipy.optimize``)
--------------------------------------------------

The following nonlinear solvers from ``scipy.optimize`` are deprecated:

- ``broyden_modified`` (bad performance)
- ``broyden1_modified`` (bad performance)
- ``broyden_generalized`` (equivalent to ``anderson``)
- ``anderson2`` (equivalent to ``anderson``)
- ``broyden3`` (obsoleted by the new limited-memory broyden methods)
- ``vackar`` (renamed to ``diagbroyden``)

Removed features
================

The deprecated modules ``helpmod``, ``pexec`` and ``ppimport`` were removed from ``scipy.misc``.

The ``output_type`` keyword in many ``scipy.ndimage`` interpolation functions has been removed.

The ``econ`` keyword in ``scipy.linalg.qr`` has been removed. The same functionality is still available by specifying ``mode='economic'``.

Old correlate/convolve behavior (in ``scipy.signal``)
-----------------------------------------------------

The old behavior for ``scipy.signal.convolve``, ``scipy.signal.convolve2d``, ``scipy.signal.correlate`` and ``scipy.signal.correlate2d`` was deprecated in 0.8.0 and has now been removed. Convolve and correlate used to swap their arguments if the second argument had dimensions larger than the first one, and the mode was relative to the input with the largest dimension. The current behavior is to never swap the inputs, which is what most people expect, and is how correlation is usually defined.

``scipy.stats``
---------------

Many functions in ``scipy.stats`` that are either available from numpy or have been superseded, and have been deprecated since version 0.7, have been removed: `std`, `var`, `mean`, `median`, `cov`, `corrcoef`, `z`, `zs`, `stderr`, `samplestd`, `samplevar`, `pdfapprox`, `pdf_moments` and `erfc`. These changes are mirrored in ``scipy.stats.mstats``.

``scipy.sparse``
----------------

Several methods of the sparse matrix classes in ``scipy.sparse`` which had been deprecated since version 0.7 were removed: `save`, `rowcol`, `getdata`, `listprint`, `ensure_sorted_indices`, `matvec`, `matmat` and `rmatvec`.

The functions ``spkron``, ``speye``, ``spidentity``, ``lil_eye`` and ``lil_diags`` were removed from ``scipy.sparse``. The first three functions are still available as ``scipy.sparse.kron``, ``scipy.sparse.eye`` and ``scipy.sparse.identity``.

The `dims` and `nzmax` keywords were removed from the sparse matrix constructor. The `colind` and `rowind` attributes were removed from CSR and CSC matrices respectively.

``scipy.sparse.linalg.arpack.speigs``
-------------------------------------

A duplicated interface to the ARPACK library was removed.

Other changes
=============

ARPACK interface changes
------------------------

The interface to the ARPACK eigenvalue routines in ``scipy.sparse.linalg`` was changed for more robustness. The eigenvalue and SVD routines now raise ``ArpackNoConvergence`` if the eigenvalue iteration fails to converge. If partially converged results are desired, they can be accessed as follows::

    import numpy as np
    from scipy.sparse.linalg import eigs, ArpackNoConvergence

    m = np.random.randn(30, 30)
    try:
        w, v = eigs(m, 6)
    except ArpackNoConvergence, err:
        partially_converged_w = err.eigenvalues
        partially_converged_v = err.eigenvectors

Several bugs were also fixed.
The routines were moreover renamed as follows:

- eigen --> eigs
- eigen_symmetric --> eigsh
- svd --> svds

From bhanukiran.perabathini at gmail.com Mon Feb 28 01:38:57 2011
From: bhanukiran.perabathini at gmail.com (bhanukiran perabathini)
Date: Mon, 28 Feb 2011 12:08:57 +0530
Subject: [SciPy-User] Double integration with integrate.trapz

Hi Pauli Virtanen & Giacomo Boffi,

Both of your suggestions helped me, thanks so much.

But I guess I'm having some trouble understanding and using broadcasting to its full power. Could you suggest some tutorial where I can read up and learn this stuff?

Manipulating the axes seems a little tricky to me. I want to master them so well that I can think in terms of them.

Thanks again for your time,
Bhanukiran

On Sun, Feb 27, 2011 at 5:04 PM, Giacomo Boffi wrote:
> bhanukiran perabathini writes:
>
>> - My task is to perform a double integral (with two free variables)
>>   and make a filled-contour plot of the logarithm of its absolute
>>   square.
>
> notwithstanding the other excellent advice you've received, please note
> that your integral is symmetric with respect to the free variables, so
> that you could double the resolution of your plot using the same
> amount of computational resources plotting the first quadrant only
>
> --
> anch'io la penso come me, ma                     -- SteO153, in IHC

From seb.haase at gmail.com Mon Feb 28 03:56:41 2011
From: seb.haase at gmail.com (Sebastian Haase)
Date: Mon, 28 Feb 2011 09:56:41 +0100
Subject: [SciPy-User] SciPy and Recursion

Gaël,

could you explain what you mean by suboptimal!? Do you mean speed-wise? I had a longish thread on the numpy list recently, where I was trying to gain speed using OpenMP and/or SSE, and cdist turned out to be as fast as my (best) C implementation (for fewer than 2-3 threads).

Thanks,
Sebastian Haase

On Sat, Feb 26, 2011 at 11:53 PM, Gael Varoquaux wrote:
> On Sat, Feb 26, 2011 at 02:32:59PM -0800, David Baddeley wrote:
>> now got scipy running - you're going to want:
>> dist_list = sp.distance.cdist(cluster_shifted, xyz.reshape((-1, 3)))
>
> By the way, for l2 distance, this function is very suboptimal. You might
> want to use the code from scikits.learn that does the same thing:
> https://github.com/scikit-learn/scikit-learn/blob/master/scikits/learn/metrics/pairwise.py
>
> If you don't want to depend on the scikit, you can just grab that file:
> it doesn't depend on anything else.
>
> Gaël

From bhanukiran.perabathini at gmail.com Mon Feb 28 04:18:43 2011
From: bhanukiran.perabathini at gmail.com (bhanukiran perabathini)
Date: Mon, 28 Feb 2011 14:48:43 +0530
Subject: [SciPy-User] Double integration with integrate.trapz

oh looks like this is what i've been looking for..

http://www.scipy.org/EricsBroadcastingDoc

On Mon, Feb 28, 2011 at 12:08 PM, bhanukiran perabathini wrote:
> Hi Pauli Virtanen & Giacomo Boffi,
>
> Both of your suggestions helped me, thanks so much.
>
> But I guess I'm having some trouble understanding and using
> broadcasting to its full power. Could you suggest some tutorial where
> I can read up and learn this stuff?
>
> Manipulating the axes seems a little tricky to me. I want to
> master them so well that I can think in terms of them.
>
> Thanks again for your time,
> Bhanukiran
>
> On Sun, Feb 27, 2011 at 5:04 PM, Giacomo Boffi wrote:
>> bhanukiran perabathini writes:
>>
>>> - My task is to perform a double integral (with two free variables)
>>>   and make a filled-contour plot of the logarithm of its absolute
>>>   square.
>>
>> notwithstanding the other excellent advice you've received, please note
>> that your integral is symmetric with respect to the free variables, so
>> that you could double the resolution of your plot using the same
>> amount of computational resources plotting the first quadrant only
>>
>> --
>> anch'io la penso come me, ma                     -- SteO153, in IHC

From gael.varoquaux at normalesup.org Mon Feb 28 06:20:59 2011
From: gael.varoquaux at normalesup.org (Gael Varoquaux)
Date: Mon, 28 Feb 2011 12:20:59 +0100
Subject: [SciPy-User] SciPy and Recursion
Message-ID: <20110228112059.GC14712@phare.normalesup.org>

On Mon, Feb 28, 2011 at 09:56:41AM +0100, Sebastian Haase wrote:
> could you explain what you mean by suboptimal!?
> Do you mean speed-wise?
> I had a longish thread on the numpy list recently, where I was trying
> to gain speed using OpenMP and/or SSE.
> And cdist turned out to be as fast as my (best) C implementation (for
> fewer than 2-3 threads).
I did mean speed-wise: for high-dimensional data, scikits.learn can be significantly faster:

In [1]: X = np.random.random((1000, 500))
In [2]: Y = np.random.random((1000, 500))
In [3]: from scipy import spatial as sp
In [4]: %time sp.distance.cdist(X, Y)
CPU times: user 0.56 s, sys: 0.00 s, total: 0.56 s
Wall time: 1.16 s
Out[5]:
array([[ 9.14394009,  9.27152238,  8.9976296 , ...,  9.18902138,
         8.63073757,  8.8818356 ],
       [ 9.03243891,  9.37592823,  8.76692936, ...,  9.25943615,
         9.09636773,  8.75653576],
       [ 9.06511143,  8.69746052,  9.12285065, ...,  9.08133078,
         8.93667671,  9.00539463],
       ...,
       [ 9.35929309,  8.87066188,  9.24649229, ...,  9.4306161 ,
         9.12252869,  9.00311071],
       [ 9.25729667,  8.9454522 ,  9.17794614, ...,  9.30332972,
         9.43599469,  9.00881447],
       [ 9.10675538,  8.67428177,  8.6647222 , ...,  8.89505099,
         9.12760646,  9.01155698]])

In [6]: from scikits.learn.metrics import pairwise
In [7]: %time pairwise.euclidean_distances(X, Y)
CPU times: user 0.17 s, sys: 0.01 s, total: 0.18 s
Wall time: 0.20 s
Out[8]:
array([[ 9.14394009,  9.27152238,  8.9976296 , ...,  9.18902138,
         8.63073757,  8.8818356 ],
       [ 9.03243891,  9.37592823,  8.76692936, ...,  9.25943615,
         9.09636773,  8.75653576],
       [ 9.06511143,  8.69746052,  9.12285065, ...,  9.08133078,
         8.93667671,  9.00539463],
       ...,
       [ 9.35929309,  8.87066188,  9.24649229, ...,  9.4306161 ,
         9.12252869,  9.00311071],
       [ 9.25729667,  8.9454522 ,  9.17794614, ...,  9.30332972,
         9.43599469,  9.00881447],
       [ 9.10675538,  8.67428177,  8.6647222 , ...,  8.89505099,
         9.12760646,  9.01155698]])

However, it does depend on the dimensionality of the data:

In [9]: X = np.random.random((1000, 3))
In [10]: Y = np.random.random((1000, 3))
In [11]: %timeit sp.distance.cdist(X, Y)
100 loops, best of 3: 11.9 ms per loop
In [12]: %timeit pairwise.euclidean_distances(X, Y)
10 loops, best of 3: 35.4 ms per loop

and judging by David's question, he was probably operating with 3D data:

> On Sat, Feb 26, 2011 at 02:32:59PM -0800, David Baddeley wrote:
>> now got scipy running - you're going to want:
>> dist_list = sp.distance.cdist(cluster_shifted, xyz.reshape((-1, 3)))

So, I must apologise: I answered off-topic. David, you probably should be using scipy.spatial.

Gael

From e.antero.tammi at gmail.com Mon Feb 28 06:44:34 2011
From: e.antero.tammi at gmail.com (eat)
Date: Mon, 28 Feb 2011 13:44:34 +0200
Subject: [SciPy-User] Double integration with integrate.trapz

Hi

On Mon, Feb 28, 2011 at 1:05 PM, bhanukiran perabathini wrote:
> Hi again!
>
> let's say we have a simple function that goes like
>
> def func(x, y):
>     return x + y
>
> now I want to get this value for every point on the x-y plane, here's
> what I do
>
> x = sp.array([-2, -1, 0, 1, 2])
> y = sp.array([-2, -1, 0, 1, 2])
>
> xx = x[:, sp.newaxis]
> yy = y[sp.newaxis, :]
>
> >>> func(xx, yy)
> array([[-4, -3, -2, -1,  0],
>        [-3, -2, -1,  0,  1],
>        [-2, -1,  0,  1,  2],
>        [-1,  0,  1,  2,  3],
>        [ 0,  1,  2,  3,  4]])
>
> works as expected, right?
>
> How do we get the following function working with broadcasted arrays?
>
> def func2(x, y):
>     if x > y:
>         return x + y
>     else:
>         return x - y
>
> func2(xx, yy) raises the error
> ValueError: The truth value of an array with more than one element is
> ambiguous. Use a.any() or a.all()
>
> How do we handle such situations?
For example:

In []: x= arange(5)
In []: y= arange(5)
In []: xx= x[:, None]
In []: yy= y[None, :]
In []: A= xx+ yy
In []: A[xx< yy]= (xx- yy)[xx< yy]
In []: A
Out[]:
array([[ 0, -1, -2, -3, -4],
       [ 1,  2, -1, -2, -3],
       [ 2,  3,  4, -1, -2],
       [ 3,  4,  5,  6, -1],
       [ 4,  5,  6,  7,  8]])

Regards,
eat

> Thanks so much,
> Bhanukiran

From bhanukiran.perabathini at gmail.com Mon Feb 28 06:59:42 2011
From: bhanukiran.perabathini at gmail.com (bhanukiran perabathini)
Date: Mon, 28 Feb 2011 17:29:42 +0530
Subject: [SciPy-User] Double integration with integrate.trapz

Thanks for the reply, I am aware of this trick though.

In light of the previous discussion, it is 'using broadcasted arrays as inputs to an already defined function' that I'm concerned about. It was with that intention that I made up that easy example. Look at the following function, for instance:

def func(x, y):
    res = 0
    for j in xrange(x):
        res += y + j
    return res

Now what if I want to apply this function to arrays, by doing something as simple as func(xx, yy)? In which shape are we supposed to manufacture xx and yy for this to work?

-bhanukiran

On Mon, Feb 28, 2011 at 5:14 PM, eat wrote:
> Hi
>
> On Mon, Feb 28, 2011 at 1:05 PM, bhanukiran perabathini wrote:
>> Hi again!
[clip]
From washakie at gmail.com  Mon Feb 28 07:25:45 2011
From: washakie at gmail.com (John)
Date: Mon, 28 Feb 2011 13:25:45 +0100
Subject: [SciPy-User] OT: Literature management, pdf downloader
Message-ID: 

Sorry for an OT post, but I thought this might be a community that would
have interest in the attached script.

For those of you actively conducting research, I imagine you have a
variety of 'tools' for managing PDFs. For anyone using a Mac, I guess
it's 'Papers', which seems to be quite brilliant software. On Linux,
I've gone with Mendeley, which I am very pleased with. For my actual
searching, I rely on webofscience or ISI searches. Here is my process:

1) ISI search for articles, add to 'marked list'
2) export marked list to bibtex
3) download pdf files to which I have access
4) dump them into a 'staging' folder for Mendeley
5) let Mendeley import them into my library (making copies)

This has worked very well, but recently I became frustrated with the
amount of time I spent downloading articles, so I decided to write a
script to do it for me. Attached you'll find a script which uses the DOI
numbers (if present) and essentially accomplishes steps 3 & 4 above. I
would like to add this eventually as functionality to either Mendeley,
kbibtex or pybibliographer. The functionality I envision is that you
could select some references in any of the aforementioned software and
then click a 'download PDFs' button.

Does this exist at all?!? If so, please let me know.

Okay, so assuming it does not: in the attached script, you'll see that
what it does is parse a bibtex file to extract the DOI numbers. If no
DOI is present, the article is skipped, SOL. If the DOI number is
available, the script accesses the dx.doi.org website to figure out
where to get the article. Then, after some 'screen scraping', the link
to the pdf is used to download the PDF to a 'LIBRARY' directory. Of
course, the major assumption here is that you have access to the
articles through your network.

There are some outstanding issues, and in general this is an email
reaching out to more experienced programmers for comments on the
following:

1) I need to do error handling better (i.e. at least I should have a
timeout)
2) I would like to be able to include authentication handling (maybe in
a config file, you could provide access credentials for various
journals)
3) Getting rid of the BeautifulSoup and pybtex dependencies (or learning
how to package the script so that when someone uses easy_install, those
dependencies will also be installed)
4) I need to be able to handle cookies (this is a problem so far only
for the get_acs method)
5) Are my various journal methods the best way to do this??

If folks object to my posting this here, please suggest a place you
think would be more appropriate. If I get positive feedback, I'll post
this to a public site where version control can be done, so folks can do
their own legwork to add 'screen scraper' methods for other journals.

All the best,
john
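A minimal sketch of the dx.doi.org resolution step the script relies on,
assuming only the Python 2.6+ standard library; the DOI shown is
hypothetical, and the journal-specific scraping that follows it is
omitted:

    import urllib2

    def resolve_doi(doi, timeout=30):
        """Follow dx.doi.org redirects to the article's landing page URL."""
        # The timeout argument addresses outstanding issue 1 above.
        resp = urllib2.urlopen('http://dx.doi.org/' + doi, timeout=timeout)
        return resp.geturl()    # the final URL after all redirects

    # print resolve_doi('10.1000/xyz123')   # '10.1000/xyz123' is a made-up DOI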
-------------- next part --------------
A non-text attachment was scrubbed...
Name: get_publications.py
Type: text/x-python
Size: 7057 bytes
Desc: not available
URL: 

From vincefn at users.sourceforge.net  Mon Feb 28 07:40:16 2011
From: vincefn at users.sourceforge.net (Vincent Favre-Nicolin)
Date: Mon, 28 Feb 2011 13:40:16 +0100
Subject: [SciPy-User] OT: Literature management, pdf downloader
In-Reply-To: 
References: 
Message-ID: <201102281340.16807.vincefn@users.sourceforge.net>

> This has worked very well, but recently I became frustrated with the
> amount of time I spent downloading articles, so I decided to write a
> script to do it for me. Attached you'll find a script which uses the
> DOI numbers (if present) and essentially accomplishes steps 3 & 4
> above. I would like to add this eventually as functionality to either
> Mendeley, kbibtex or pybibliographer. The functionality I envision is
> that you could select some references in any of the aforementioned
> software and then click a 'download PDFs' button.
>
> Does this exist at all?!? If so, please let me know.

This is indeed really OT - but have you tried Zotero? It's a really
great tool for managing bibliography, especially (but not only) for
research articles.

http://www.zotero.org/

--
Vincent Favre-Nicolin
CEA / INAC  http://inac.cea.fr
Université Joseph Fourier  http://www.ujf-grenoble.fr
http://vincefn.net
ObjCryst & Fox : http://objcryst.sourceforge.net

From gael.varoquaux at normalesup.org  Mon Feb 28 07:51:11 2011
From: gael.varoquaux at normalesup.org (Gael Varoquaux)
Date: Mon, 28 Feb 2011 13:51:11 +0100
Subject: [SciPy-User] OT: Literature management, pdf downloader
In-Reply-To: 
References: 
Message-ID: <20110228125111.GH14712@phare.normalesup.org>

On Mon, Feb 28, 2011 at 01:25:45PM +0100, John wrote:
> searching, I rely on webofscience or ISI searches.

> 1) ISI search for articles, add to 'marked list'

Continuing with off topic, but in my experience google scholar works
much better than ISI: it is freely accessible, can link to preprints
and, if you adjust your preferences, will give you the bibtex files.

G

From washakie at gmail.com  Mon Feb 28 08:26:37 2011
From: washakie at gmail.com (John)
Date: Mon, 28 Feb 2011 14:26:37 +0100
Subject: [SciPy-User] OT: Literature management, pdf downloader
In-Reply-To: <20110228125111.GH14712@phare.normalesup.org>
References: <20110228125111.GH14712@phare.normalesup.org>
Message-ID: 

I do use scholar a lot, and I have played with Zotero (which actually
integrates quite well with Mendeley), but does either of these actually
have a way to 'batch download' the PDFs? That was more the point of the
script. I wonder if screen scraping scholar, or working with scholar
somehow to find the PDFs, might be a better approach than relying on
DOIs?

Thanks,
john

On Mon, Feb 28, 2011 at 1:51 PM, Gael Varoquaux wrote:
> Continuing with off topic, but in my experience google scholar works
> much better than ISI: it is freely accessible, can link to preprints
> and, if you adjust your preferences, will give you the bibtex files.
>
> G
--
Configuration
``````````````````````````
Plone 2.5.3-final, CMF-1.6.4, Zope (Zope 2.9.7-final, python 2.4.4,
linux2), Python 2.6, PIL 1.1.6, Mailman 2.1.9, Postfix 2.4.5,
Procmail v3.22 2001/09/10
Basemap: 1.0
Matplotlib: 1.0.0

From paul.anton.letnes at gmail.com  Mon Feb 28 09:35:07 2011
From: paul.anton.letnes at gmail.com (Paul Anton Letnes)
Date: Mon, 28 Feb 2011 15:35:07 +0100
Subject: [SciPy-User] OT: Literature management, pdf downloader
In-Reply-To: <20110228125111.GH14712@phare.normalesup.org>
References: <20110228125111.GH14712@phare.normalesup.org>
Message-ID: <6D6592BC-1435-46D7-A4C7-583DDB51C5A4@gmail.com>

On 28. feb. 2011, at 13.51, Gael Varoquaux wrote:

> Continuing with off topic, but in my experience google scholar works
> much better than ISI: it is freely accessible, can link to preprints
> and, if you adjust your preferences, will give you the bibtex files.

Do you know if googlecl works with scholar? If not, maybe it could be
easily extended?

Also OT: BibDesk is great software for managing papers on the Mac.
Since it works directly with pure-text .bib files, it should be easier
to talk to than these other programs, if I understand things correctly.
You can also save things like the DOI, making PDF download a luxury
rather than a necessity, until you start making notes or go away from a
network connection for longer periods. Just click on the DOI, and you
are taken to the webpage of the article.

Good luck though, your project sounds interesting!

Paul

From jmccormac01 at qub.ac.uk  Mon Feb 28 10:25:24 2011
From: jmccormac01 at qub.ac.uk (James McCormac)
Date: Mon, 28 Feb 2011 15:25:24 -0000 (UTC)
Subject: [SciPy-User] faster interpolations (interp1d)
Message-ID: <54539.161.72.60.164.1298906724.squirrel@star.pst.qub.ac.uk>

Hi eat,
You sent me a suggestion for faster 1d interpolations using matrices a
few weeks back, but I cannot find the email anywhere when I look for it
today. Here is a better explanation of what I am trying to do.

For example, I have a 1d array of 500 elements. I want to interpolate
them quadratically so that each element becomes 100 values, 50,000 in
total: I have 500x500 pixels and I want to get 0.01 pixel resolution.

Code snippet:

    # collapse an image in the x direction
    ref_xproj = np.sum(refarray, axis=0)

    # make an array for the 1d spectra
    x = np.linspace(0, (x_2-x_1), (x_2-x_1))

    # interpolation
    f2_xr = interp1d(x, ref_xproj, kind='quadratic')

    # new x array for interpolated data
    xnew = np.linspace(0, (x_2-x_1), (x_2-x_1)*100)

    # FFT of interpolated spectra
    F_ref_xproj = fftpack.fft(f2_xr(xnew))

Can I do this type of interpolation faster using the method you
described before?

Cheers,
James
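One sketch of a faster route, assuming the 500 samples are evenly spaced
(they are, since they come from np.linspace): scipy.ndimage.zoom does
spline interpolation on a uniform grid in compiled code, with order=2
giving a quadratic spline, though its boundary handling differs slightly
from interp1d's:

    import numpy as np
    from scipy import ndimage, fftpack

    ref_xproj = np.random.rand(500)   # stand-in for the collapsed profile

    # 100x upsampling with a quadratic spline: 500 -> 50,000 samples
    resp_fine = ndimage.zoom(ref_xproj, 100, order=2)

    F_ref_xproj = fftpack.fft(resp_fine)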
From gokhansever at gmail.com  Mon Feb 28 12:06:00 2011
From: gokhansever at gmail.com (Gökhan Sever)
Date: Mon, 28 Feb 2011 10:06:00 -0700
Subject: [SciPy-User] OT: Literature management, pdf downloader
In-Reply-To: <20110228125111.GH14712@phare.normalesup.org>
References: <20110228125111.GH14712@phare.normalesup.org>
Message-ID: 

On Mon, Feb 28, 2011 at 5:51 AM, Gael Varoquaux wrote:

> Continuing with off topic, but in my experience google scholar works
> much better than ISI: it is freely accessible, can link to preprints
> and, if you adjust your preferences, will give you the bibtex files.

In my experience scopus.com gives more refined results than both
gscholar and ISI, provided you have access to the site. You can do
author-based queries (e.g. list an author's published papers), which I
find very useful for tracing someone's publication history. For a few
specific key-term searches related to the atmospheric sciences, scopus
gives broader publication coverage than gscholar and ISI.

They provide an API for programmatic queries, but I am not sure about
their publication access procedure. I am registered with their search
update service (where I get updates when their database is updated for
my saved searches -- gscholar and ISI have similar services, but again
scopus provides better results and allows more refinement in searches)
and manually download the articles that I am interested in.

--
Gökhan

From lamblinp at iro.umontreal.ca  Mon Feb 21 16:28:34 2011
From: lamblinp at iro.umontreal.ca (Pascal Lamblin)
Date: Mon, 21 Feb 2011 22:28:34 +0100
Subject: [SciPy-User] Theano 0.3.1 released
Message-ID: <20110221212834.GA26433@bob.blip.be>

=========================
 Announcing Theano 0.3.1
=========================

This is a bug/crash fix and small feature release. The upgrade is
recommended for everybody. For those using the bleeding-edge version in
the mercurial repository, we encourage you to update to the `0.3.1` tag.

Deleting old cache
------------------

Since the default path of the cache directory for compiled objects
changed, we encourage you to delete the previous one. The easiest way to
do that is to execute:

    python -c 'import theano; print theano.config.base_compiledir'

and then call "rm -rf" on the returned result. A new cache directory
will be created the next time you import theano.

What's New
----------

Deprecation:
 * The theano shared variable attribute `value` is deprecated; use
   `get_value()` or `set_value()` (a short sketch follows the bug-fix
   list below)! See
   http://deeplearning.net/software/theano/tutorial/aliasing.html

Bugs fixed:
 * The random number generator in theano/sandbox/rng_mrg.py did not
   always return the same sequence of numbers on the CPU and GPU.
   * In some cases, there was a (possibly large) fraction of non-random
     garbage in the returned sequence.
 * In python mode (not the default mode), when the input of an elemwise
   operation was an empty ndarray, we did not return an empty ndarray.
 * Scan cached the number of steps. This caused no problem on its own,
   because each time you called scan the number of steps was refreshed.
   The problem was when you called ScanGrad, which used the cached
   number of steps without refreshing it. To be affected by this bug,
   one would have to compile two graphs, one containing a Scan and the
   other the corresponding GradScan, call the first function to cache
   the number of steps, and then call the second function with a
   different number of steps.
 * In GpuConv, errors in conv_patch_stack_reduce when the entire kernel
   doesn't fit into shared memory. The error was not found before, as
   the impact was less than the relative tolerance of 1e-3. Now the
   relative tolerance is 1e-5.
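A minimal sketch of the `value` -> `get_value()`/`set_value()` migration
named in the Deprecation entry above, assuming a freshly created shared
variable:

    import numpy
    import theano

    w = theano.shared(numpy.zeros(3))    # a Theano shared variable

    v = w.get_value()      # instead of the deprecated: v = w.value
    w.set_value(v + 1.0)   # instead of the deprecated: w.value = v + 1.0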
Crash fixed:
 * Taking the gradient of a DimShuffle no longer raises an exception
   that crashed Theano in one particular case.
 * Compilation crash for GpuElemwise with tensors with a high number of
   dimensions (~6 or more).
 * Disabled the C code generator that made gcc crash on complex types.
 * Crash in optimization when an Op has no input.
 * Output shape is now computed correctly for matrix-vector
   multiplication on the GPU.
 * In Scan, when using numbers as inputs, not symbolic variables.
 * In GradScan, when there is only 1 input in the Scan.
 * In GpuSum, bug in the calculation of n_blocks for the 10 pattern
   (sum over the rows of a matrix).
 * Some segfaults at exit with GPU code.

Optimization:
 * New SpecifyShape op that allows passing more shape info in the graph.
 * Speed up gemv by working around scipy's gemv slowness when the matrix
   is in C order (the default).
 * Remove join of only 1 element.
 * During optimization, consider one more case in get_constant_value.

GPU:
 * cuda_shared.value = X now works inplace!
   * cuda_shared_var.set_value(new_ndarray) will overwrite the old value
     inplace in the most common case.
 * Allow creating a CudaNdarraySharedVariable from a CudaNdarray.
 * New init_gpu_device theano flag.
 * Fuse GpuElemwise more often (in the case where there are so many
   inputs that fusing them all would bust the 256-byte limit on
   parameters to a gpu function).
 * A CPU join of only 1 element is no longer kept off the GPU.

New features:
 * tensor.reshape now makes dimensions of length 1 broadcastable.
 * tensor.prod now implements the gradient.
 * DebugMode now warns if an Op declared itself as returning a view of
   the input but did not do so.
   * This behaviour is a problem, because it can block other Ops from
     being inplace on the same inputs. This could lower the reuse of
     memory.
 * Sparse.structured_dot now works when both matrices are sparse.
 * The Sparse type is now supported by the shape op, and the
   ShapeFeature optimizer works correctly with it.
 * New 3D convolution ops, with CPU and GPU implementations.
 * New colors in pydotprint.

Documentation:
 * Documented the lib.amdlibm and (new) init_gpu_device config
   variables.
 * A new page on the memory aliasing contract of Theano (written for
   0.3, but an error hid it on the web page).
 * Revision to the Windows installation instructions.
 * The cuda documentation is now generated on the web server.
 * Better documentation of .theanorc and its sections.

Unit tests:
 * Stop usage of deprecated functions or syntax in the unit tests.
 * Better testing of GPU convolution nets.
 * Make more tests able to use different random seeds.
 * Tests of sparse now use the default mode, not a hard-coded one.
 * Remove some tests of unimplemented features.

Other:
 * The name of the compiledir now includes the Python version, to make
   life easier for people with many Python versions.
 * Added theano.tensor.std as a shortcut to
   sqrt(var(input=input, axis=axis)).
 * Whitespace, tabulation and indentation clean-up in the code.
 * Better detection of memory sharing between variables.

Download
--------

You can download Theano from http://pypi.python.org/pypi/Theano.

Description
-----------

Theano is a Python library that allows you to define, optimize, and
efficiently evaluate mathematical expressions involving
multi-dimensional arrays. It is built on top of NumPy. Theano features:

 * tight integration with NumPy: a similar interface to NumPy's;
   numpy.ndarrays are also used internally in Theano-compiled functions.
 * transparent use of a GPU: perform data-intensive computations up to
   140x faster than on a CPU (support for float32 only).
 * efficient symbolic differentiation: Theano can compute derivatives
   for functions of one or many inputs.
 * speed and stability optimizations: avoid nasty bugs when computing
   expressions such as log(1 + exp(x)) for large values of x.
 * dynamic C code generation: evaluate expressions faster.
 * extensive unit-testing and self-verification: includes tools for
   detecting and diagnosing bugs and/or potential problems.
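A minimal sketch of the define/compile/evaluate workflow those features
describe, assuming a plain CPU setup and Theano 0.3.x conventions:

    import theano
    import theano.tensor as T

    x = T.dmatrix('x')                   # symbolic matrix inputs
    y = T.dmatrix('y')
    f = theano.function([x, y], x + y)   # compiled, optimized callable

    print f([[1, 2]], [[3, 4]])          # -> [[ 4.  6.]]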
Theano has been powering large-scale computationally intensive
scientific research since 2007, but it is also approachable enough to be
used in the classroom (IFT6266 at the University of Montreal).

Resources
---------

About Theano:
http://deeplearning.net/software/theano/

About NumPy:
http://numpy.scipy.org/

About Scipy:
http://www.scipy.org/

Machine Learning Tutorial with Theano on Deep Architectures:
http://deeplearning.net/tutorial/

Acknowledgments
---------------

I would like to thank all contributors to Theano. For this particular
release, the people who have helped resolve many outstanding issues (in
alphabetical order): Frederic Bastien, Arnaud Bergeron, James Bergstra,
Josh Bleecher Snyder, Olivier Delalleau, Guillaume Desjardins, Dumitru
Erhan, Ian Goodfellow, Pascal Lamblin, Razvan Pascanu, Francois Savard
and David Warde-Farley.

Also, thank you to all NumPy and Scipy developers, as Theano builds on
their strength.

All questions/comments are always welcome on the Theano mailing lists
( http://deeplearning.net/software/theano/ )

--
Pascal

From joakim.asklund at gmail.com  Wed Feb 23 04:49:12 2011
From: joakim.asklund at gmail.com (Spiffalizer)
Date: Wed, 23 Feb 2011 01:49:12 -0800 (PST)
Subject: [SciPy-User] [SciPy-user] mgrid format from unstructured data
Message-ID: <30993544.post@talk.nabble.com>

Hi,
I am trying to write some code to find the interpolated values at some
specified points. My input is x,y,z values, where x,y are the
coordinates in 2d and z is the value I am interested in.

My input is sort of unstructured: I get it from looping over some data
that I have. It was constructed from a grid pattern to begin with, but I
am not certain in what order the points come out of the loop.

Example:
1: 0,0,1
2: 1,0,5
3: 2,0,12
4: 0,1,0
5: 1,1,10
6: 2,1,25
7: 0,3,9
8: 1,3,20
9: 2,3,11

The order in which points 1-9 arrive is unknown to me.

I have found some examples that look like this:

    x,y = np.mgrid[-1:1:10j,-1:1:10j]
    z = (x+y)*np.exp(-6.0*(x*x+y*y))
    xnew,ynew = np.mgrid[-1:1:3j,-1:1:3j]
    tck = interpolate.bisplrep(x,y,z,s=0)
    znew = interpolate.bisplev(xnew[:,0],ynew[0,:],tck)

So my question really is: how do I sort/convert my input into a format
that can be used by the interpolate functions?

Thanks,
Joakim

From lasagnadavide at gmail.com  Sun Feb 27 18:04:49 2011
From: lasagnadavide at gmail.com (Davide Lasagna)
Date: Mon, 28 Feb 2011 00:04:49 +0100
Subject: [SciPy-User] ANN: Spyder v2.0.8
In-Reply-To: 
References: 
Message-ID: <4D6AD891.3020304@gmail.com>

Thanks for your work, Pierre! Keep going!

Ciao
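Returning to Joakim's question above about reordering unstructured
(x, y, z) rows onto a grid: a minimal sketch, assuming every (x, y) pair
of the original grid appears exactly once (Option 2 needs scipy >= 0.9
for interpolate.griddata):

    import numpy as np
    from scipy import interpolate

    # Example rows in arbitrary order; columns are x, y, z.
    pts = np.array([[1., 1., 10.], [0., 0., 1.], [2., 3., 11.],
                    [2., 0., 12.], [0., 1., 0.], [1., 3., 20.],
                    [0., 3., 9.], [2., 1., 25.], [1., 0., 5.]])

    # Option 1: recover the grid by sorting on y, then x, and reshaping.
    order = np.lexsort((pts[:, 0], pts[:, 1]))   # last key (y) is primary
    x = np.unique(pts[:, 0])                     # [0, 1, 2]
    y = np.unique(pts[:, 1])                     # [0, 1, 3]
    z = pts[order, 2].reshape(len(y), len(x))    # one row per y value
    # x, y and z can now feed grid-based routines such as
    # bisplrep/bisplev.

    # Option 2: skip the sorting and interpolate the scattered points
    # directly onto a new grid.
    xi, yi = np.mgrid[0:2:30j, 0:3:30j]
    zi = interpolate.griddata(pts[:, :2], pts[:, 2], (xi, yi),
                              method='cubic')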